WorldWideScience

Sample records for model-selection process comparing

  1. Model selection for Poisson processes with covariates

    CERN Document Server

    Sart, Mathieu

    2011-01-01

    We observe $n$ inhomogeneous Poisson processes with covariates and aim at estimating their intensities. To handle this problem, we assume that the intensity of each Poisson process is of the form $s(\\cdot, x)$ where $x$ is the covariate and $s$ is an unknown function. We propose a model selection approach in which the models are used to approximate the multivariate function $s$. We show that our estimator satisfies an oracle-type inequality under very weak assumptions on both the intensities and the models. Using a Hellinger-type loss, we establish non-asymptotic risk bounds and specify them under various kinds of assumptions on the target function $s$, such as being smooth or composite. We also show that our estimation procedure is robust with respect to these assumptions.

  2. A MODEL SELECTION PROCEDURE IN MIXTURE-PROCESS EXPERIMENTS FOR INDUSTRIAL PROCESS OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    Márcio Nascimento de Souza Leão

    2015-08-01

    We present a model selection procedure for use in Mixture and Mixture-Process Experiments. Certain combinations of restrictions on the proportions of the mixture components can result in a very constrained experimental region. This results in collinearity among the covariates of the model, which can make it difficult to fit the model using the traditional method based on the significance of the coefficients. For this reason, a model selection methodology based on information criteria will be proposed for process optimization. Two examples are presented to illustrate this model selection procedure.

  3. Social Influence Interpretation of Interpersonal Processes and Team Performance Over Time Using Bayesian Model Selection

    NARCIS (Netherlands)

    Johnson, Alan R.; van de Schoot, Rens; Delmar, Frédéric; Crano, William D.

    2015-01-01

    The team behavior literature is ambiguous about the relations between members’ interpersonal processes—task debate and task conflict—and team performance. From a social influence perspective, we show why members’ interpersonal processes determine team performance over time in small groups. Together, …

  4. Reducing uncertainty in the selection of bi-variate distributions of flood peaks and volumes using copulas and hydrological process-based model selection

    Science.gov (United States)

    Szolgay, Jan; Gaál, Ladislav; Bacigál, Tomáš; Kohnová, Silvia; Blöschl, Günter

    2016-04-01

    …and snow-melt floods. First, empirical copulas for the individual processes were compared at each site separately in order to assess whether peak-volume relationships differ across the respective flood processes. Next, the similarity of the empirical distributions was tested process-wise from a regional perspective. In the last step, the goodness-of-fit of frequently used copula types was examined both for process-based data samples (the current approach, based on a wider database of flood events) and annual maximum floods (the traditional approach, which makes use of a limited number of events). It was concluded that, in order to reduce the uncertainty in model selection and parameter estimation, it is necessary to treat flood processes separately and to analyze all available independent floods. Given that more than one statistically suitable copula model usually exists in practice, an uncertainty analysis of the design values in engineering studies resulting from the model selection is necessary. It was shown that reducing uncertainty in the choice of model can be attempted by a deeper hydrological analysis of the dependence structure and of the model's suitability in specific hydrological environments, or by a more specific distinction of the typical flood generation mechanisms.
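
    The pseudo-observations that underlie an empirical copula can be sketched in a few lines. The peak-volume pairs below are synthetic stand-ins, not the study's flood records:

```python
import random

random.seed(1)
# Synthetic flood peak-volume pairs (illustrative only).
peaks = [random.gauss(100, 20) for _ in range(30)]
vols = [p * 0.8 + random.gauss(0, 10) for p in peaks]

def pseudo_obs(x):
    # rank / (n + 1) maps each value into (0, 1), discarding the marginals
    order = sorted(range(len(x)), key=lambda i: x[i])
    ranks = [0] * len(x)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return [r / (len(x) + 1) for r in ranks]

u, v = pseudo_obs(peaks), pseudo_obs(vols)
# The pairs (u_i, v_i) carry only the dependence structure, which is what
# copula goodness-of-fit comparisons operate on.
print(list(zip(u, v))[:3])
```
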

  5. Bayesian Evidence and Model Selection

    CERN Document Server

    Knuth, Kevin H; Malakar, Nabin K; Mubeen, Asim M; Placek, Ben

    2014-01-01

    In this paper we review the concept of the Bayesian evidence and its application to model selection. The theory is presented along with a discussion of analytic, approximate and numerical techniques. Application to several practical examples within the context of signal processing are discussed.
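
    As a toy illustration of evidence-based model selection (a hypothetical example, not taken from the review): for n coin flips with k heads, the evidence of a fair-coin model and of a uniform-prior biased-coin model both have closed forms, so the Bayes factor can be computed directly:

```python
import math

# M0: fair coin (no free parameters) -> Z0 = C(n, k) * 0.5**n
# M1: unknown bias p ~ Uniform(0, 1) -> Z1 = integral over p of
#     C(n, k) * p**k * (1 - p)**(n - k) dp = 1 / (n + 1)  (a Beta integral)
n, k = 20, 15

evidence_m0 = math.comb(n, k) * 0.5 ** n
evidence_m1 = 1.0 / (n + 1)
bayes_factor = evidence_m1 / evidence_m0   # B10 > 1 favours the biased coin

print(f"Z(M0) = {evidence_m0:.4g}")
print(f"Z(M1) = {evidence_m1:.4g}")
print(f"Bayes factor B10 = {bayes_factor:.2f}")
```

    With 15 heads in 20 flips, the biased-coin model is moderately favoured; the evidence automatically penalises M1 for spreading its prior over all values of p.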

  6. Bayesian Model Selection and Statistical Modeling

    CERN Document Server

    Ando, Tomohiro

    2010-01-01

    Bayesian model selection is a fundamental part of the Bayesian statistical modeling process. The quality of the resulting solutions usually depends on the goodness of the constructed Bayesian model. Realizing how crucial this issue is, many researchers and practitioners have been extensively investigating the Bayesian model selection problem. This book provides comprehensive explanations of the concepts and derivations of the Bayesian approach for model selection and related criteria, including the Bayes factor, the Bayesian information criterion (BIC), the generalized BIC, and the pseudo marginal likelihood.

  7. A comparative analysis of business process modelling techniques

    Science.gov (United States)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    There are currently many business process modelling techniques. This article presents research on the differences among them: for each technique, the definition and structure are explained. The paper presents a comparative analysis of several popular business process modelling techniques. The comparative framework is based on two criteria: the notation and how the technique works when implemented for Somerleyton Animal Park. The discussion of each technique closes with its advantages and disadvantages. The final conclusion recommends business process modelling techniques that are easy to use, and serves as a basis for evaluating further modelling techniques.

  8. Simultaneous data pre-processing and SVM classification model selection based on a parallel genetic algorithm applied to spectroscopic data of olive oils.

    Science.gov (United States)

    Devos, Olivier; Downey, Gerard; Duponchel, Ludovic

    2014-04-01

    Classification is an important task in chemometrics. For several years now, support vector machines (SVMs) have proven powerful for infrared spectral data classification. However, such methods require optimisation of parameters in order to control the risk of overfitting and the complexity of the decision boundary. Furthermore, it is established that the prediction ability of classification models can be improved by pre-processing that removes unwanted variance in the spectra. In this paper we propose a new methodology based on a genetic algorithm (GA) for the simultaneous optimisation of SVM parameters and pre-processing (GENOPT-SVM). The method has been tested for the discrimination of the geographical origin of Italian olive oil (Ligurian and non-Ligurian) on the basis of near infrared (NIR) or mid infrared (FTIR) spectra. Different classification models (PLS-DA, SVM with mean-centred data, GENOPT-SVM) have been tested and statistically compared using McNemar's test. For both datasets, SVM with optimised pre-processing gives models with higher accuracy than those obtained with PLS-DA on pre-processed data. In the case of the NIR dataset, most of this accuracy improvement (86.3% compared with 82.8% for PLS-DA) occurred using only a single pre-processing step. For the FTIR dataset, three optimised pre-processing steps were required to obtain an SVM model with a significant accuracy improvement (82.2%) compared to the one obtained with PLS-DA (78.6%). Furthermore, this study demonstrates that even SVM models have to be developed on well-corrected spectral data in order to achieve higher classification rates.

  9. Individual Influence on Model Selection

    Science.gov (United States)

    Sterba, Sonya K.; Pek, Jolynn

    2012-01-01

    Researchers in psychology are increasingly using model selection strategies to decide among competing models, rather than evaluating the fit of a given model in isolation. However, such interest in model selection outpaces an awareness that one or a few cases can have disproportionate impact on the model ranking. Though case influence on the fit…

  10. Chemometrics applications in biotech processes: assessing process comparability.

    Science.gov (United States)

    Bhushan, Nitish; Hadpe, Sandip; Rathore, Anurag S

    2012-01-01

    A typical biotech process starts with the vial of the cell bank, ends with the final product and has anywhere from 15 to 30 unit operations in series. The total number of process variables (input and output parameters) and other variables (raw materials) can add up to several hundred variables. As the manufacturing process is widely accepted to have significant impact on the quality of the product, the regulatory agencies require an assessment of process comparability across different phases of manufacturing (Phase I vs. Phase II vs. Phase III vs. Commercial) as well as other key activities during product commercialization (process scale-up, technology transfer, and process improvement). However, assessing comparability for a process with such a large number of variables is nontrivial and often companies resort to qualitative comparisons. In this article, we present a quantitative approach for assessing process comparability via use of chemometrics. To our knowledge this is the first time that such an approach has been published for biotech processing. The approach has been applied to an industrial case study involving evaluation of two processes that are being used for commercial manufacturing of a major biosimilar product. It has been demonstrated that the proposed approach is able to successfully identify the unit operations in the two processes that are operating differently. We expect this approach, which can also be applied toward assessing product comparability, to be of great use to both the regulators and the industry which otherwise struggle to assess comparability.

  11. Complexity regularized hydrological model selection

    NARCIS (Netherlands)

    Pande, S.; Arkesteijn, L.; Bastidas, L.A.

    2014-01-01

    This paper uses a recently proposed measure of hydrological model complexity in a model selection exercise. It demonstrates that a robust hydrological model is selected by penalizing model complexity while maximizing a model performance measure. This especially holds when limited data is available.

  13. Model selection and comparison for independent sinusoids

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2014-01-01

    In the signal processing literature, many methods have been proposed for estimating the number of sinusoidal basis functions from a noisy data set. The most popular method is the asymptotic MAP criterion, which is sometimes also referred to as the BIC. In this paper, we extend and improve this method by considering the problem in a full Bayesian framework instead of the approximate formulation on which the asymptotic MAP criterion is based. This leads to a new model selection and comparison method, the lp-BIC, whose computational complexity is of the same order as the asymptotic MAP criterion. Through simulations, we demonstrate that the lp-BIC outperforms the asymptotic MAP criterion and other state-of-the-art methods in terms of model selection, de-noising and prediction performance. The simulation code is available online.

  14. Model Selection for Geostatistical Models

    Energy Technology Data Exchange (ETDEWEB)

    Hoeting, Jennifer A.; Davis, Richard A.; Merton, Andrew A.; Thompson, Sandra E.

    2006-02-01

    We consider the problem of model selection for geospatial data. Spatial correlation is typically ignored in the selection of explanatory variables and this can influence model selection results. For example, the inclusion or exclusion of particular explanatory variables may not be apparent when spatial correlation is ignored. To address this problem, we consider the Akaike Information Criterion (AIC) as applied to a geostatistical model. We offer a heuristic derivation of the AIC in this context and provide simulation results that show that using AIC for a geostatistical model is superior to the often used approach of ignoring spatial correlation in the selection of explanatory variables. These ideas are further demonstrated via a model for lizard abundance. We also employ the principle of minimum description length (MDL) to variable selection for the geostatistical model. The effect of sampling design on the selection of explanatory covariates is also explored.

  15. Comparing Binaural Pre-processing Strategies II

    Directory of Open Access Journals (Sweden)

    Regina M. Baumgärtel

    2015-12-01

    Several binaural audio signal enhancement algorithms were evaluated with respect to their potential to improve speech intelligibility in noise for users of bilateral cochlear implants (CIs). 50% speech reception thresholds (SRT50) were assessed using an adaptive procedure in three distinct, realistic noise scenarios. All scenarios were highly nonstationary, complex, and included a significant amount of reverberation. Other aspects, such as the perfectly frontal target position, were idealized laboratory settings, allowing the algorithms to perform better than in corresponding real-world conditions. Eight bilaterally implanted CI users, wearing devices from three manufacturers, participated in the study. In all noise conditions, a substantial improvement in SRT50 compared to the unprocessed signal was observed for most of the algorithms tested, with the largest improvements generally provided by binaural minimum variance distortionless response (MVDR) beamforming algorithms. The largest overall improvement in speech intelligibility was achieved by an adaptive binaural MVDR in a spatially separated, single competing talker noise scenario. A no-pre-processing condition and adaptive differential microphones without a binaural link served as the two baseline conditions. SRT50 improvements provided by the binaural MVDR beamformers surpassed the performance of the adaptive differential microphones in most cases. Speech intelligibility improvements predicted by instrumental measures were shown to account for some but not all aspects of the perceptually obtained SRT50 improvements measured in bilaterally implanted CI users.

  16. Inflation model selection meets dark radiation

    Science.gov (United States)

    Tram, Thomas; Vallance, Robert; Vennin, Vincent

    2017-01-01

    We investigate how inflation model selection is affected by the presence of additional free-streaming relativistic degrees of freedom, i.e. dark radiation. We perform a full Bayesian analysis of both inflation parameters and cosmological parameters taking reheating into account self-consistently. We compute the Bayesian evidence for a few representative inflation scenarios in both the standard ΛCDM model and an extension including dark radiation parametrised by its effective number of relativistic species Neff. Using a minimal dataset (Planck low-l polarisation, temperature power spectrum and lensing reconstruction), we find that the observational status of most inflationary models is unchanged. The exceptions are potentials such as power-law inflation that predict large values for the scalar spectral index that can only be realised when Neff is allowed to vary. Adding baryon acoustic oscillations data and the B-mode data from BICEP2/Keck makes power-law inflation disfavoured, while adding local measurements of the Hubble constant H0 makes power-law inflation slightly favoured compared to the best single-field plateau potentials. This illustrates how the dark radiation solution to the H0 tension would have deep consequences for inflation model selection.

  17. Model selection for radiochromic film dosimetry

    CERN Document Server

    Méndez, Ignasi

    2015-01-01

    The purpose of this study was to find the most accurate model for radiochromic film dosimetry by comparing different channel independent perturbation models. A model selection approach based on (algorithmic) information theory was followed, and the results were validated using gamma-index analysis on a set of benchmark test cases. Several questions were addressed: (a) whether incorporating the information of the non-irradiated film, by scanning prior to irradiation, improves the results; (b) whether lateral corrections are necessary when using multichannel models; (c) whether multichannel dosimetry produces better results than single-channel dosimetry; (d) which multichannel perturbation model provides more accurate film doses. It was found that scanning prior to irradiation and applying lateral corrections improved the accuracy of the results. For some perturbation models, increasing the number of color channels did not result in more accurate film doses. Employing Truncated Normal perturbations was found to...

  18. Exploratory Bayesian model selection for serial genetics data.

    Science.gov (United States)

    Zhao, Jing X; Foulkes, Andrea S; George, Edward I

    2005-06-01

    Characterizing the process by which molecular and cellular level changes occur over time will have broad implications for clinical decision making and help further our knowledge of disease etiology across many complex diseases. However, this presents an analytic challenge due to the large number of potentially relevant biomarkers and the complex, uncharacterized relationships among them. We propose an exploratory Bayesian model selection procedure that searches for model simplicity through independence testing of multiple discrete biomarkers measured over time. Bayes factor calculations are used to identify and compare models that are best supported by the data. For large model spaces, i.e., a large number of multi-leveled biomarkers, we propose a Markov chain Monte Carlo (MCMC) stochastic search algorithm for finding promising models. We apply our procedure to explore the extent to which HIV-1 genetic changes occur independently over time.

  19. The Ouroboros Model, selected facets.

    Science.gov (United States)

    Thomsen, Knud

    2011-01-01

    The Ouroboros Model features a biologically inspired cognitive architecture. At its core lies a self-referential recursive process with alternating phases of data acquisition and evaluation. Memory entries are organized in schemata. The activation at a time of part of a schema biases the whole structure and, in particular, missing features, thus triggering expectations. An iterative recursive monitor process termed 'consumption analysis' is then checking how well such expectations fit with successive activations. Mismatches between anticipations based on previous experience and actual current data are highlighted and used for controlling the allocation of attention. A measure for the goodness of fit provides feedback as (self-) monitoring signal. The basic algorithm works for goal directed movements and memory search as well as during abstract reasoning. It is sketched how the Ouroboros Model can shed light on characteristics of human behavior including attention, emotions, priming, masking, learning, sleep and consciousness.

  20. Model selection for amplitude analysis

    CERN Document Server

    Guegan, Baptiste; Stevens, Justin; Williams, Mike

    2015-01-01

    Model complexity in amplitude analyses is often a priori under-constrained since the underlying theory permits a large number of amplitudes to contribute to most physical processes. The use of an overly complex model results in reduced predictive power and worse resolution on unknown parameters of interest. Therefore, it is common to reduce the complexity by removing from consideration some subset of the allowed amplitudes. This paper studies a data-driven method for limiting model complexity through regularization during regression in the context of a multivariate (Dalitz-plot) analysis. The regularization technique applied greatly improves the performance. A method is also proposed for obtaining the significance of a resonance in a multivariate amplitude analysis.

  1. Efficiency of model selection criteria in flood frequency analysis

    Science.gov (United States)

    Calenda, G.; Volpi, E.

    2009-04-01

    The estimation of high flood quantiles requires the extrapolation of the probability distributions far beyond the usual sample length, involving high estimation uncertainties. The choice of the probability law, traditionally based on hypothesis testing, is critical in this regard. In this study the efficiency of different model selection criteria, seldom applied in flood frequency analysis, is investigated. The efficiency of each criterion in identifying the probability distribution of the hydrological extremes is evaluated by numerical simulations for different parent distributions, coefficients of variation and skewness, and sample sizes. The compared model selection procedures are the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), the Anderson-Darling Criterion (ADC) recently discussed by Di Baldassarre et al. (2008), and the Sample Quantile Criterion (SQC), recently proposed by the authors (Calenda et al., 2009). The SQC is based on the principle of maximising the probability density of the elements of the sample that are considered relevant to the problem, and takes into account both the accuracy and the uncertainty of the estimate. Since the stress is mainly on extreme events, the SQC involves upper-tail probabilities, where the effect of the model assumption is more critical. The proposed index is equal to the sum of the logarithms of the inverse of the sample probability density of the observed quantiles. The definition of this index is based on the principle that the more centred the sample value is with respect to its density distribution (accuracy of the estimate), and the less spread this distribution is (uncertainty of the estimate), the greater the probability density of the sample quantile. Thus, lower values of the index indicate a better performance of the distribution law. This criterion can operate the selection of the optimum distribution among competing probability models that are estimated using different samples.
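
    For criteria such as the AIC and BIC compared above, a minimal sketch (synthetic data, not the study's flood series) shows how competing distribution laws are ranked once their maximum log-likelihoods are in hand:

```python
import math
import random

random.seed(42)
# Synthetic sample drawn from an exponential law; a stand-in for a flood
# series, purely for illustration.
x = [random.expovariate(0.1) for _ in range(60)]
n = len(x)

def loglik_exp(sample):
    lam = len(sample) / sum(sample)                 # MLE of the rate
    ll = sum(math.log(lam) - lam * xi for xi in sample)
    return ll, 1                                    # (log-likelihood, #params)

def loglik_norm(sample):
    mu = sum(sample) / len(sample)
    var = sum((xi - mu) ** 2 for xi in sample) / len(sample)  # MLE variance
    ll = sum(-0.5 * math.log(2 * math.pi * var) - (xi - mu) ** 2 / (2 * var)
             for xi in sample)
    return ll, 2

# AIC = 2k - 2 logL, BIC = k ln(n) - 2 logL; lower is better for both.
for name, (ll, k) in [("exponential", loglik_exp(x)), ("normal", loglik_norm(x))]:
    aic = 2 * k - 2 * ll
    bic = k * math.log(n) - 2 * ll
    print(f"{name:12s} logL={ll:8.2f}  AIC={aic:7.2f}  BIC={bic:7.2f}")
```

    Since the data really are exponential, both criteria prefer the exponential law despite the normal model's extra parameter.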

  2. MODEL SELECTION FOR SPECTROPOLARIMETRIC INVERSIONS

    Energy Technology Data Exchange (ETDEWEB)

    Asensio Ramos, A.; Manso Sainz, R.; Martinez Gonzalez, M. J.; Socas-Navarro, H. [Instituto de Astrofisica de Canarias, E-38205, La Laguna, Tenerife (Spain); Viticchie, B. [ESA/ESTEC RSSD, Keplerlaan 1, 2200 AG Noordwijk (Netherlands); Orozco Suarez, D., E-mail: aasensio@iac.es [National Astronomical Observatory of Japan, Mitaka, Tokyo 181-8588 (Japan)

    2012-04-01

    Inferring magnetic and thermodynamic information from spectropolarimetric observations relies on the assumption of a parameterized model atmosphere whose parameters are tuned by comparison with observations. Often, the choice of the underlying atmospheric model is based on subjective reasons. In other cases, complex models are chosen based on objective reasons (for instance, the necessity to explain asymmetries in the Stokes profiles) but it is not clear what degree of complexity is needed. The lack of an objective way of comparing models has, sometimes, led to opposing views of the solar magnetism because the inferred physical scenarios are essentially different. We present the first quantitative model comparison based on the computation of the Bayesian evidence ratios for spectropolarimetric observations. Our results show that there is not a single model appropriate for all profiles simultaneously. Data with moderate signal-to-noise ratios (S/Ns) favor models without gradients along the line of sight. If the observations show clear circular and linear polarization signals above the noise level, models with gradients along the line are preferred. As a general rule, observations with large S/Ns favor more complex models. We demonstrate that the evidence ratios correlate well with simple proxies. Therefore, we propose to calculate these proxies when carrying out standard least-squares inversions to allow for model comparison in the future.

  3. Comparative analysis of genomic signal processing for microarray data clustering.

    Science.gov (United States)

    Istepanian, Robert S H; Sungoor, Ala; Nebel, Jean-Christophe

    2011-12-01

    Genomic signal processing is a new area of research that combines advanced digital signal processing methodologies for enhanced genetic data analysis. It has many promising applications in bioinformatics and next generation of healthcare systems, in particular, in the field of microarray data clustering. In this paper we present a comparative performance analysis of enhanced digital spectral analysis methods for robust clustering of gene expression across multiple microarray data samples. Three digital signal processing methods: linear predictive coding, wavelet decomposition, and fractal dimension are studied to provide a comparative evaluation of the clustering performance of these methods on several microarray datasets. The results of this study show that the fractal approach provides the best clustering accuracy compared to other digital signal processing and well known statistical methods.

  4. The digital storytelling process: A comparative analysis from various experts

    Science.gov (United States)

    Hussain, Hashiroh; Shiratuddin, Norshuhada

    2016-08-01

    Digital Storytelling (DST) is a method of delivering information to an audience. It combines narrative and digital media content infused with multimedia elements. To help educators (i.e., the designers) create a compelling digital story, experts have introduced sets of processes to guide them. However, the suggested processes vary, and some are redundant. The main aim of this study is to propose a single guide process for the creation of DST. A comparative analysis is employed in which ten DST models from various experts are analysed. The resulting process can also be applied to other multimedia materials that use the concept of DST.

  5. Model Selection Principles in Misspecified Models

    CERN Document Server

    Lv, Jinchi

    2010-01-01

    Model selection is of fundamental importance to high dimensional modeling featured in many contemporary applications. Classical principles of model selection include the Kullback-Leibler divergence principle and the Bayesian principle, which lead to the Akaike information criterion and Bayesian information criterion when models are correctly specified. Yet model misspecification is unavoidable when we have no knowledge of the true model or when we have the correct family of distributions but miss some true predictor. In this paper, we propose a family of semi-Bayesian principles for model selection in misspecified models, which combine the strengths of the two well-known principles. We derive asymptotic expansions of the semi-Bayesian principles in misspecified generalized linear models, which give the new semi-Bayesian information criteria (SIC). A specific form of SIC admits a natural decomposition into the negative maximum quasi-log-likelihood, a penalty on model dimensionality, and a penalty on model miss...

  6. A comparative study of face processing using scrambled faces

    OpenAIRE

    Taubert, Jessica; Aagten-Murphy, David; Parr, Lisa A.

    2012-01-01

    It is a widespread assumption that all primate species process faces in the same way because the species are closely related and they engage in similar social interactions. However, this approach ignores potentially interesting and informative differences that may exist between species. This paper describes a comparative study of holistic face processing. Twelve subjects (six chimpanzees Pan troglodytes and six rhesus monkeys Macaca mulatta) were trained to discriminate whole faces (faces wit...

  7. Appropriate model selection methods for nonstationary generalized extreme value models

    Science.gov (United States)

    Kim, Hanbeen; Kim, Sooyoung; Shin, Hongjoon; Heo, Jun-Haeng

    2017-04-01

    Considerable evidence has accumulated to date that hydrologic data series are nonstationary in nature. This has resulted in many studies in the area of nonstationary frequency analysis. Nonstationary probability distribution models involve parameters that vary over time, so it is not straightforward to apply conventional goodness-of-fit tests to the selection of an appropriate nonstationary probability distribution model. Tests generally recommended for such a selection include the Akaike information criterion (AIC), the corrected Akaike information criterion (AICc), the Bayesian information criterion (BIC), and the likelihood ratio test (LRT). In this study, Monte Carlo simulation was performed to compare the performance of these four tests for nonstationary as well as stationary generalized extreme value (GEV) distributions. Proper model selection ratios and sample sizes were taken into account to evaluate the performance of all four tests. The BIC demonstrated the best performance for stationary GEV models. For nonstationary GEV models, the AIC proved better than the other three methods when relatively small sample sizes were considered. With larger sample sizes, the AIC, BIC, and LRT presented the best performance for GEV models with nonstationary location and/or scale parameters, respectively. The simulation results were then evaluated by applying all four tests to annual maximum rainfall data of selected sites observed by the Korea Meteorological Administration.
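
    For nested models, the four criteria compared in the study reduce to a handful of formulas. The log-likelihoods and record length below are hypothetical values chosen purely to show the mechanics:

```python
import math

# Illustrative sketch, not the study's code: compare a stationary GEV fit
# (3 parameters) against a nonstationary GEV with a linear trend in the
# location parameter (4 parameters).
def criteria(loglik, k, n):
    aic = 2 * k - 2 * loglik
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction
    bic = k * math.log(n) - 2 * loglik
    return aic, aicc, bic

n = 50                                # assumed record length in years
ll_stat, ll_nonstat = -160.2, -157.1  # hypothetical maximised log-likelihoods

for name, ll, k in [("stationary", ll_stat, 3), ("nonstationary", ll_nonstat, 4)]:
    aic, aicc, bic = criteria(ll, k, n)
    print(f"{name:13s} AIC={aic:.1f} AICc={aicc:.1f} BIC={bic:.1f}")

# Likelihood ratio test for the nested pair: 2*(logL1 - logL0) is compared
# against the chi-square critical value with 1 degree of freedom (3.841 at 5%).
lrt = 2 * (ll_nonstat - ll_stat)
print(f"LRT = {lrt:.1f}, reject stationarity at 5% level: {lrt > 3.841}")
```
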

  8. Comparing the Governance of Novel Products and Processes of Biotechnology

    DEFF Research Database (Denmark)

    Hansen, Janus

    The emergence of novel products and processes of biotechnology in medicine, industry and agriculture has been accompanied by promises of healthier, safer and more productive lives and societies. However, biotechnology has also served as cause and catalyst of social controversy about the physical… …to start to fill this gap and develop a conceptual framework for comparing and analysing new and emerging modes of governance affiliated with biotechnology in the light of more general approaches to governance. We aim for a framework that can facilitate comparative inquiries and learning across different…

  9. Model selection bias and Freedman's paradox

    Science.gov (United States)

    Lukacs, P.M.; Burnham, K.P.; Anderson, D.R.

    2010-01-01

    In situations where limited knowledge of a system exists and the ratio of data points to variables is small, variable selection methods can often be misleading. Freedman (Am Stat 37:152-155, 1983) demonstrated how common it is to select completely unrelated variables as highly "significant" when the number of data points is similar in magnitude to the number of variables. A new type of model averaging estimator based on model selection with Akaike's AIC is used with linear regression to investigate the problems of likely inclusion of spurious effects and model selection bias, the bias introduced while using the data to select a single seemingly "best" model from a (often large) set of models employing many predictor variables. The new model averaging estimator helps reduce these problems and provides confidence interval coverage at the nominal level while traditional stepwise selection has poor inferential properties. ?? The Institute of Statistical Mathematics, Tokyo 2009.
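
    The model-averaging idea rests on Akaike weights. A small sketch (the AIC scores are made up for illustration) shows the computation:

```python
import math

# Akaike weights: transform AIC differences into weights that sum to one,
# so estimates can be averaged across models instead of committing to a
# single "best" model.
aics = {"m1": 102.3, "m2": 103.1, "m3": 110.6}   # hypothetical AIC scores

best = min(aics.values())
deltas = {m: a - best for m, a in aics.items()}           # AIC differences
raw = {m: math.exp(-0.5 * d) for m, d in deltas.items()}  # relative likelihoods
total = sum(raw.values())
weights = {m: r / total for m, r in raw.items()}          # Akaike weights

for m in aics:
    print(f"{m}: delta={deltas[m]:.1f}, weight={weights[m]:.3f}")
```

    Models within a couple of AIC units of the best retain substantial weight, which is what spreads inference across the model set and mitigates selection bias.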

  10. Comparing Reading Processing Strategies of Second Language Readers

    Directory of Open Access Journals (Sweden)

    Parilah M. Shah

    2010-01-01

    Full Text Available Problem statement: The message that a writer tries to convey in a text is subject to several interpretations by readers. Reading is evidently a complex process of getting input. A well-known researcher offers two views of reading: (i) reading is a process of decoding written symbols and (ii) reading is a process of reconstructing meaning. It has also been proposed that readers use reading processing strategies in the process of understanding text. Most language educators are not aware of the specific reading strategies that second language readers utilize. Therefore, it is deemed necessary to conduct a study that could explore the specific types of strategies used and compare the strategies utilized by readers of differing abilities. Approach: A study was conducted to examine second language readers' use of reading strategies at Malaysian secondary schools. The readers read a piece of reading material and then responded to questionnaires concerning reading strategies such as supervising strategies, support strategies and paraphrase strategies. Results: The findings indicate that there are differences in the reading strategies used by second language readers of differing abilities for some of the question items. The results suggest the need to incorporate reading strategy instruction in the language curriculum in order to produce more efficient readers. Conclusion: This investigation is a useful contribution to applied linguistics research, since second language educators would gain better insight into readers' comprehension processes.

  11. A Comparative Study of Default Reasoning and Epistemic Processes

    Institute of Scientific and Technical Information of China (English)

    李未

    1993-01-01

    A comparative study between the theories of default reasoning and open logic is given. Some concepts of open logic, such as new premises, rejections by facts, reconstructions, epistemic processes, and their limits, are introduced to describe the evolution of hypotheses. An improved version of the limit theorem is given and proved. A model-theoretic interpretation of closed normal defaults is given using the above concepts, and the corresponding completeness is proved. Any extension of a closed normal default theory is proved to be the limit of a δ-partial increasing epistemic process of that theory, and vice versa. It is proved that there exist two distinct extensions of a closed normal default theory iff there is a δ-non-monotonic epistemic process of that theory. The completeness of Reiter's proof method is also formulated and proved in terms of epistemic processes. Finally, the work is compared with Gärdenfors's theory of knowledge in flux.

  12. Frugal innovation process: Comparing between grassroots and elite contexts

    DEFF Research Database (Denmark)

    Hossain, Mokter; Levänen, Jarkko; Lindeman, Sara

    2017-01-01

    The objective of this paper is to understand the processes of the development of frugal innovation with an entrepreneurial spirit. The contributions of this study are twofold. First, we explore, compare, and contrast two very different contexts - grassroots and elite - of frugal innovation... processes. Second, we show how individuals have very different understandings of frugal innovations as well as capacities and resources needed for the development of frugal innovations. Two prominent frugal innovation cases are used in this study. One innovation was developed by individuals from the USA... and another developed by an individual from Gujarat, India. Using effectuation theory we find that there are some distinct differences between the two categories regarding finance, access to science and technology, the motivation of innovators, options they have, actions they take, etc.

  13. Electrocoagulation in wastewater containing arsenic: Comparing different process designs

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Henrik K. [Departamento de Procesos Quimicos, Biotecnologicos y Ambientales, Universidad Tecnica Federico Santa Maria, Avenida Espana 1680, Valparaiso (Chile)]. E-mail: henrik.hansen@usm.cl; Nunez, Patricio [Departamento de Procesos Quimicos, Biotecnologicos y Ambientales, Universidad Tecnica Federico Santa Maria, Avenida Espana 1680, Valparaiso (Chile); Raboy, Deborah [Departamento de Procesos Quimicos, Biotecnologicos y Ambientales, Universidad Tecnica Federico Santa Maria, Avenida Espana 1680, Valparaiso (Chile); Schippacasse, Italo [Departamento de Procesos Quimicos, Biotecnologicos y Ambientales, Universidad Tecnica Federico Santa Maria, Avenida Espana 1680, Valparaiso (Chile); Grandon, Rodrigo [Departamento de Procesos Quimicos, Biotecnologicos y Ambientales, Universidad Tecnica Federico Santa Maria, Avenida Espana 1680, Valparaiso (Chile)

    2007-02-25

    Arsenic removal from wastewater is a key problem for copper smelters. This work shows results of electrocoagulation of aqueous solutions containing arsenic with three different process designs and operating parameters. Three types of electrocoagulation reactors were tested and compared: (a) a modified flow continuous reactor, (b) a turbulent flow reactor and (c) an airlift reactor. All used iron as sacrificial anodes. The results showed that the electrocoagulation of a 100 mg/L As(V) solution could decrease the arsenic concentration to less than 2 mg/L in the effluent at a current density of 1.2 A/dm² with both the modified flow and the airlift reactor. The removal of arsenic with the turbulent flow reactor did not reach the same level, but the Fe-to-As ratio (mol/mol) achieved in the coagulation process was in this case lower (approximately 7) than with the other two reactors. In addition, it seems that increasing the current density beyond a certain maximum value does not improve the electrocoagulation process any further. This could probably be explained by passivation of the anode.
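The iron dose delivered by a sacrificial anode follows Faraday's law of electrolysis; a small illustrative calculation (all operating values hypothetical, assuming Fe dissolves as Fe(II)) shows how current and time translate into the Fe-to-As molar ratio discussed in the abstract:

```python
F = 96485.0    # Faraday constant, C/mol
M_FE = 55.845  # molar mass of iron, g/mol

def iron_released_g(current_a, time_s, z=2):
    """Mass of iron dissolved from the sacrificial anode (Faraday's law),
    assuming z electrons transferred per Fe atom."""
    return current_a * time_s * M_FE / (z * F)

# e.g. Fe:As molar ratio for a 1 L batch at 100 mg/L As(V)
mol_as = 0.100 / 74.922            # moles of As in 1 L
fe_g = iron_released_g(0.5, 600)   # hypothetical run: 0.5 A for 10 min
ratio = (fe_g / M_FE) / mol_as     # Fe-to-As molar ratio
```

In practice the effective dose can be lower than this theoretical value when the anode passivates, which is consistent with the diminishing returns at high current density reported above.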

  14. Bayesian model selection in Gaussian regression

    CERN Document Server

    Abramovich, Felix

    2009-01-01

    We consider a Bayesian approach to model selection in Gaussian linear regression, where the number of predictors might be much larger than the number of observations. From a frequentist view, the proposed procedure results in the penalized least squares estimation with a complexity penalty associated with a prior on the model size. We investigate the optimality properties of the resulting estimator. We establish the oracle inequality and specify conditions on the prior that imply its asymptotic minimaxity within a wide range of sparse and dense settings for "nearly-orthogonal" and "multicollinear" designs.

  15. A Process for Comparing Dynamics of Distributed Space Systems Simulations

    Science.gov (United States)

    Cures, Edwin Z.; Jackson, Albert A.; Morris, Jeffery C.

    2009-01-01

    The paper describes a process that was developed for comparing the primary orbital dynamics behavior between space systems distributed simulations. This process is used to characterize and understand the fundamental fidelities and compatibilities of the modeling of orbital dynamics between spacecraft simulations. This is required for high-latency distributed simulations such as NASA's Integrated Mission Simulation and must be understood when reporting results from simulation executions. This paper presents 10 principal comparison tests along with their rationale and examples of the results. The Integrated Mission Simulation (IMSim) (formerly known as the Distributed Space Exploration Simulation (DSES)) is a NASA research and development project focusing on the technologies and processes that are related to the collaborative simulation of complex space systems involved in the exploration of our solar system. Currently, the NASA centers that are actively participating in the IMSim project are the Ames Research Center, the Jet Propulsion Laboratory (JPL), the Johnson Space Center (JSC), the Kennedy Space Center, the Langley Research Center and the Marshall Space Flight Center. In concept, each center participating in IMSim has its own set of simulation models and environment(s). These simulation tools are used to build the various simulation products that are used for scientific investigation, engineering analysis, system design, training, planning, operations and more. Working individually, these production simulations provide important data to various NASA projects.

  16. Comparing the Efficacy of Arbitration and Family Counseling Process on

    Directory of Open Access Journals (Sweden)

    Sedegheh Alimardani

    2010-07-01

    Full Text Available Abstract: The purpose of this study is to investigate the effect of arbitration on the decrease of matrimonial conflicts leading to divorce among married couples who asked for divorce, and to compare it with the process of consultation. In this study, 15 couples for arbitration and 15 couples for consultation were selected among married couples referred by the social services of the "adoption institution" in Isfahan, who had come to the family intervention center to reduce divorce. The research procedure consisted of two methods: (1) descriptive (survey) and (2) quasi-experimental with pre-test and post-test. To collect data, the Sanaei and Barati matrimonial conflicts questionnaire and the Ghalili and Fatehizadeh matrimonial conflicts questionnaire were used. SPSS software was used to analyze the results. The results show that the process of arbitration does not influence the decrease of matrimonial conflicts and its dimensions in married couples who ask for divorce (p > 0.05), whereas the process of consultation does influence the decrease of matrimonial conflicts and its dimensions in married couples who ask for divorce (p < 0.05).

  17. Robust model selection and the statistical classification of languages

    Science.gov (United States)

    García, J. E.; González-López, V. A.; Viola, M. L. L.

    2012-10-01

    In this paper we address the problem of model selection for the set of finite memory stochastic processes with finite alphabet, when the data is contaminated. We consider m independent samples, with more than half of them being realizations of the same stochastic process with law Q, which is the one we want to retrieve. We devise a model selection procedure such that for a sample size large enough, the selected process is the one with law Q. Our model selection strategy is based on estimating relative entropies to select a subset of samples that are realizations of the same law. Although the procedure is valid for any family of finite order Markov models, we will focus on the family of variable length Markov chain models, which include the fixed order Markov chain model family. We define the asymptotic breakdown point (ABDP) for a model selection procedure, and we show the ABDP for our procedure. This means that if the proportion of contaminated samples is smaller than the ABDP, then, as the sample size grows our procedure selects a model for the process with law Q. We also use our procedure in a setting where we have one sample conformed by the concatenation of sub-samples of two or more stochastic processes, with most of the subsamples having law Q. We conducted a simulation study. In the application section we address the question of the statistical classification of languages according to their rhythmic features using speech samples. This is an important open problem in phonology. A persistent difficulty on this problem is that the speech samples correspond to several sentences produced by diverse speakers, corresponding to a mixture of distributions. The usual procedure to deal with this problem has been to choose a subset of the original sample which seems to best represent each language. The selection is made by listening to the samples. In our application we use the full dataset without any preselection of samples. 
We apply our robust methodology estimating
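A toy version of the idea (binary alphabet and fixed first-order chains rather than variable length Markov chains; the threshold and all parameters are illustrative, not taken from the paper): estimate each sample's transition matrix, measure pairwise relative entropies, and keep the samples that agree with a majority of the others.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(P, n):
    """Sample a binary Markov chain with transition matrix P."""
    s = [0]
    for _ in range(n - 1):
        s.append(rng.choice(2, p=P[s[-1]]))
    return s

def transition_matrix(seq):
    """Empirical transition matrix with Laplace smoothing."""
    counts = np.ones((2, 2))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def rel_entropy(p, q):
    return float(np.sum(p * np.log(p / q)))

P_target = np.array([[0.9, 0.1], [0.2, 0.8]])  # law Q we want to retrieve
P_noise = np.array([[0.5, 0.5], [0.5, 0.5]])   # contaminating law

samples = [simulate(P_target, 2000) for _ in range(5)]   # majority
samples += [simulate(P_noise, 2000) for _ in range(2)]   # contamination
mats = [transition_matrix(s) for s in samples]

def dist(i, j):
    return sum(rel_entropy(mats[i][r], mats[j][r]) for r in range(2))

# Keep the samples close (in relative entropy) to a majority of the others
m = len(samples)
votes = [sum(dist(i, j) < 0.05 for j in range(m)) for i in range(m)]
selected = [i for i in range(m) if votes[i] > m // 2]
```

Since fewer than half of the samples are contaminated, the majority vote recovers exactly the realizations of the law Q, mirroring the breakdown-point argument in the abstract.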

  18. Quality Quandaries: Time Series Model Selection and Parsimony

    DEFF Research Database (Denmark)

    Bisgaard, Søren; Kulahci, Murat

    2009-01-01

    Some of the issues involved in selecting adequate models for time series data are discussed using an example concerning the number of users of an Internet server. The process of selecting an appropriate model is subjective and requires experience and judgment. The authors believe an important...... consideration in model selection should be parameter parsimony. They favor the use of parsimonious mixed ARMA models, noting that research has shown that a model building strategy that considers only autoregressive representations will lead to non-parsimonious models and to loss of forecasting accuracy....
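A toy illustration of criterion-based order selection favoring parsimony (not from the article; AR models fit by conditional least squares, with AIC as the yardstick): an AR(1) series is simulated and candidate AR(p) fits are compared.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate an AR(1) series: the parsimonious truth
n = 400
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.7 * x[t - 1] + rng.normal()

def ar_aic(x, p):
    """Fit AR(p) by conditional least squares; return its AIC."""
    n = len(x)
    # Column i holds the series at lag i+1, aligned with x[p:]
    X = np.column_stack([x[p - i - 1 : n - i - 1] for i in range(p)])
    y = x[p:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = ((y - X @ beta) ** 2).sum()
    m = len(y)
    return m * np.log(rss / m) + 2 * (p + 1)  # p coefficients + variance

aics = {p: ar_aic(x, p) for p in range(1, 6)}
best = min(aics, key=aics.get)
```

Higher-order fits reduce the residual sum of squares slightly but pay a complexity penalty, so the criterion usually lands on a low order; the authors' point is that allowing mixed ARMA structure keeps models even more parsimonious than a purely autoregressive search.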

  20. A comparative kinetic study of SNCR process using ammonia

    Energy Technology Data Exchange (ETDEWEB)

    Javed, M. Tayyeb; Ahmed, Z.; Ibrahim, M. Asim; Irfan, N.

    2008-07-01

    The paper presents comparative kinetic modelling of nitrogen oxides (NOx) removal from flue gases by the selective non-catalytic reduction process using ammonia as reducing agent. The computer code SENKIN is used in this study with three published chemical kinetic mechanisms: Zanoelo, Kilpinen and Skreiberg. Kinetic modeling was performed for an isothermal plug flow reactor at atmospheric pressure so as to compare it with the experimental results. A 500 ppm NOx background in the flue gas is considered and kept constant throughout the investigation. The ammonia performance was modeled in the range of 750 to 1250 °C using NH₃/NOx molar ratios from 0.25 to 3.0 and residence times up to 1.5 seconds. The modeling using all the mechanisms exhibits and confirms a temperature window of NOx reduction with ammonia. It was observed that 80% NOx reduction efficiency could be achieved if the flue gas is given 300 ms to react with ammonia while passing through a section within a temperature range of 910 to 1060 °C (Kilpinen mechanism), 925 to 1030 °C (Zanoelo mechanism) or 890 to 1090 °C (Skreiberg mechanism). 20 refs., 6 figs.

  1. Information criteria for astrophysical model selection

    CERN Document Server

    Liddle, A R

    2007-01-01

    Model selection is the problem of distinguishing competing models, perhaps featuring different numbers of parameters. The statistics literature contains two distinct sets of tools: those based on information theory, such as the Akaike Information Criterion (AIC), and those based on Bayesian inference, such as the Bayesian evidence and Bayesian Information Criterion (BIC). The Deviance Information Criterion combines ideas from both heritages; it is readily computed from Monte Carlo posterior samples and, unlike the AIC and BIC, allows for parameter degeneracy. I describe the properties of the information criteria, and as an example compute them from WMAP3 data for several cosmological models. I find that at present the information theory and Bayesian approaches give significantly different conclusions from that data.
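A minimal sketch of how the DIC is computed from posterior samples (a toy normal model with known variance under a flat prior; nothing here comes from the WMAP3 analysis): the effective number of parameters p_D is the mean deviance minus the deviance at the posterior mean.

```python
import numpy as np

rng = np.random.default_rng(7)
y = rng.normal(1.0, 1.0, size=50)  # data from N(mu = 1, sigma = 1)
n = len(y)

def deviance(mu):
    """-2 log likelihood of the N(mu, 1) model for the data y."""
    return float(np.sum((y - mu) ** 2) + n * np.log(2 * np.pi))

# Posterior for mu with sigma known and a flat prior: N(ybar, 1/n)
mu_samples = rng.normal(y.mean(), 1.0 / np.sqrt(n), size=20000)

d_bar = np.mean([deviance(m) for m in mu_samples])  # mean deviance
p_d = d_bar - deviance(mu_samples.mean())           # effective parameters
dic = d_bar + p_d
```

For this one-parameter model p_D comes out close to 1, which is the sense in which the DIC accounts for parameter degeneracy: a degenerate (poorly constrained) parameter contributes less than a full unit to p_D.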

  2. Improving randomness characterization through Bayesian model selection

    CERN Document Server

    R., Rafael Díaz-H; Martínez, Alí M Angulo; U'Ren, Alfred B; Hirsch, Jorge G; Marsili, Matteo; Castillo, Isaac Pérez

    2016-01-01

    Nowadays random number generation plays an essential role in technology with important applications in areas ranging from cryptography, which lies at the core of current communication protocols, to Monte Carlo methods, and other probabilistic algorithms. In this context, a crucial scientific endeavour is to develop effective methods that allow the characterization of random number generators. However, commonly employed methods either lack formality (e.g. the NIST test suite), or are inapplicable in principle (e.g. the characterization derived from the Algorithmic Theory of Information (ATI)). In this letter we present a novel method based on Bayesian model selection, which is both rigorous and effective, for characterizing randomness in a bit sequence. We derive analytic expressions for a model's likelihood which is then used to compute its posterior probability distribution. Our method proves to be more rigorous than NIST's suite and the Borel-Normality criterion and its implementation is straightforward. We...
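The flavour of the approach can be illustrated with a two-model toy example (a fair coin versus an unknown bias with a uniform prior; this is an assumption for illustration, not the letter's actual model family): each model's marginal likelihood for a bit sequence is computed analytically and turned into a posterior model probability.

```python
from math import comb

def evidence_fair(k, n):
    """Marginal likelihood of k ones in n bits under p = 1/2."""
    return comb(n, k) * 0.5 ** n

def evidence_biased(k, n):
    """Marginal likelihood under unknown bias p ~ Uniform(0, 1):
    integral of C(n,k) p^k (1-p)^(n-k) dp = 1 / (n + 1)."""
    return 1.0 / (n + 1)

def posterior_fair(k, n):
    """Posterior probability of the fair-coin model, equal model priors."""
    e1, e2 = evidence_fair(k, n), evidence_biased(k, n)
    return e1 / (e1 + e2)
```

With 50 ones in 100 bits the fair model is favoured; with 90 ones the biased model dominates. A Bayesian test of randomness generalizes this comparison to richer families of alternative models.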

  3. On Model Selection Criteria in Multimodel Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ye, Ming; Meyer, Philip D.; Neuman, Shlomo P.

    2008-03-21

    Hydrologic systems are open and complex, rendering them prone to multiple conceptualizations and mathematical descriptions. There has been a growing tendency to postulate several alternative hydrologic models for a site and use model selection criteria to (a) rank these models, (b) eliminate some of them and/or (c) weigh and average predictions and statistics generated by multiple models. This has led to some debate among hydrogeologists about the merits and demerits of common model selection (also known as model discrimination or information) criteria such as AIC [Akaike, 1974], AICc [Hurvich and Tsai, 1989], BIC [Schwarz, 1978] and KIC [Kashyap, 1982] and some lack of clarity about the proper interpretation and mathematical representation of each criterion. In particular, whereas we [Neuman, 2003; Ye et al., 2004, 2005; Meyer et al., 2007] have based our approach to multimodel hydrologic ranking and inference on the Bayesian criterion KIC (which reduces asymptotically to BIC), Poeter and Anderson [2005] and Poeter and Hill [2007] have voiced a preference for the information-theoretic criterion AICc (which reduces asymptotically to AIC). Their preference stems in part from a perception that KIC and BIC require a "true" or "quasi-true" model to be in the set of alternatives while AIC and AICc are free of such an unreasonable requirement. 
We examine the model selection literature to find that (a) all published rigorous derivations of AIC and AICc require that the (true) model having generated the observational data be in the set of candidate models; (b) though BIC and KIC were originally derived by assuming that such a model is in the set, BIC has been rederived by Cavanaugh and Neath [1999] without the need for such an assumption; (c) KIC reduces to BIC as the number of observations becomes large relative to the number of adjustable model parameters, implying that it likewise does not require the existence of a true model in the set of alternatives; (d) if a true

  4. Entropic Priors and Bayesian Model Selection

    CERN Document Server

    Brewer, Brendon J

    2009-01-01

    We demonstrate that the principle of maximum relative entropy (ME), used judiciously, can ease the specification of priors in model selection problems. The resulting effect is that models that make sharp predictions are disfavoured, weakening the usual Bayesian "Occam's Razor". This is illustrated with a simple example involving what Jaynes called a "sure thing" hypothesis. Jaynes' resolution of the situation involved introducing a large number of alternative "sure thing" hypotheses that were possible before we observed the data. However, in more complex situations, it may not be possible to explicitly enumerate large numbers of alternatives. The entropic priors formalism produces the desired result without modifying the hypothesis space or requiring explicit enumeration of alternatives; all that is required is a good model for the prior predictive distribution for the data. This idea is illustrated with a simple rigged-lottery example, and we outline how this idea may help to resolve a recent debate amongst ...

  5. ModelOMatic: fast and automated model selection between RY, nucleotide, amino acid, and codon substitution models.

    Science.gov (United States)

    Whelan, Simon; Allen, James E; Blackburne, Benjamin P; Talavera, David

    2015-01-01

    Molecular phylogenetics is a powerful tool for inferring both the process and pattern of evolution from genomic sequence data. Statistical approaches, such as maximum likelihood and Bayesian inference, are now established as the preferred methods of inference. The choice of models that a researcher uses for inference is of critical importance, and there are established methods for model selection conditioned on a particular type of data, such as nucleotides, amino acids, or codons. A major limitation of existing model selection approaches is that they can only compare models acting upon a single type of data. Here, we extend model selection to allow comparisons between models describing different types of data by introducing the idea of adapter functions, which project aggregated models onto the originally observed sequence data. These projections are implemented in the program ModelOMatic and used to perform model selection on 3722 families from the PANDIT database, 68 genes from an arthropod phylogenomic data set, and 248 genes from a vertebrate phylogenomic data set. For the PANDIT and arthropod data, we find that amino acid models are selected for the overwhelming majority of alignments; with progressively smaller numbers of alignments selecting codon and nucleotide models, and no families selecting RY-based models. In contrast, nearly all alignments from the vertebrate data set select codon-based models. The sequence divergence, the number of sequences, and the degree of selection acting upon the protein sequences may contribute to explaining this variation in model selection. Our ModelOMatic program is fast, with most families from PANDIT taking fewer than 150 s to complete, and should therefore be easily incorporated into existing phylogenetic pipelines. ModelOMatic is available at https://code.google.com/p/modelomatic/.

  6. Comparative Analysis between Fuzzy and Traditional Analytical Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Mulubrhan Freselam

    2014-07-01

    Full Text Available The Analytic Hierarchy Process (AHP) is one of the techniques commonly used for prioritizing different alternatives under complex criteria. In real applications, conventional AHP assumes that expert judgments are exact and uses crisp numbers, thereby ignoring the uncertainty that comes from linguistic variables. Fuzzy logic deals with situations that are vague or ill-defined and gives them a quantified value. In this study a comparison is made between traditional AHP and fuzzy AHP by taking a case of selecting an effective oil refinery, using system effectiveness as the criterion. The two approaches were compared on the same hierarchy structure and criteria set. In both cases the dual drum scheme (DDS) has the highest priority, but with different values: 0.51 for AHP and 0.36 for FAHP. This suggests that AHP should be used when the expert opinion is certain, and FAHP preferred otherwise.
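At the core of conventional AHP is the principal-eigenvector priority computation; a short sketch with a hypothetical 3x3 pairwise judgment matrix on Saaty's 1-9 scale (not the refinery data from the study):

```python
import numpy as np

# Hypothetical pairwise comparison matrix: A[i, j] = preference of i over j
A = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)          # principal (Perron) eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                         # priority vector, sums to 1

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI,
# with Saaty's random index RI = 0.58 for n = 3; CR < 0.1 is acceptable
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.58
```

A fuzzy AHP replaces the crisp entries of A with fuzzy numbers (e.g. triangular) and defuzzifies the resulting priorities, which is where the two methods' rankings can diverge in value even when the ordering agrees.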

  7. A comparative study on microwave and routine tissue processing

    Directory of Open Access Journals (Sweden)

    T Mahesh Babu

    2011-01-01

    Conclusions: Although the individual scores given by the different observers for the various parameters included in the study were statistically insignificant, the overall quality of microwave-processed and microwave-stained slides appeared slightly better than that of conventionally processed and stained slides.

  8. RESEARCH ON THE PLANT MODEL SELECTION AND PRODUCTION PROCESS FLOW OF CALCIUM-SILICON POWDER PRODUCING

    Institute of Scientific and Technical Information of China (English)

    席增宏; 徐鹿鸣

    2015-01-01

    In order to promote the application of injection metallurgy technology, this paper introduces in detail the properties of the various mechanical equipment for producing calcium-silicon powder, the powder-making results, the production process flow, and the explosion-proof safety measures. The equipment selection, process flow and explosion-proof measures discussed here are of considerable reference value for the future development of calcium-silicon powder production technology in China.

  9. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be

  10. Inflation Model Selection meets Dark Radiation

    CERN Document Server

    Tram, Thomas; Vennin, Vincent

    2016-01-01

    We investigate how inflation model selection is affected by the presence of additional free-streaming relativistic degrees of freedom, i.e. dark radiation. We perform a full Bayesian analysis of both inflation parameters and cosmological parameters taking reheating into account self-consistently. We compute the Bayesian evidence for a few representative inflation scenarios in both the standard $\\Lambda\\mathrm{CDM}$ model and an extension including dark radiation parametrised by its effective number of relativistic species $N_\\mathrm{eff}$. We find that the observational status of most inflationary models is unchanged, with the exception of potentials such as power-law inflation that predict a value for the scalar spectral index that is too large in $\\Lambda\\mathrm{CDM}$ but which can be accommodated when $N_\\mathrm{eff}$ is allowed to vary. In this case, cosmic microwave background data indicate that power-law inflation is one of the best models together with plateau potentials. However, contrary to plateau p...

  11. A model selection approach to analysis of variance and covariance.

    Science.gov (United States)

    Alber, Susan A; Weiss, Robert E

    2009-06-15

    An alternative to analysis of variance is a model selection approach where every partition of the treatment means into clusters with equal value is treated as a separate model. The null hypothesis that all treatments are equal corresponds to the partition with all means in a single cluster. The alternative hypotheses correspond to the set of all other partitions of treatment means. A model selection approach can also be used for a treatment-by-covariate interaction, where the null hypothesis and each alternative correspond to a partition of treatments into clusters with equal covariate effects. We extend the partition-as-model approach to simultaneous inference for both the treatment main effect and the treatment interaction with a continuous covariate, with separate partitions for the intercepts and treatment-specific slopes. The model space is the Cartesian product of the intercept partition and the slope partition, and we develop five joint priors for this model space. In four of these priors the intercept and slope partitions are dependent. We advise on setting priors over models, and we use the model to analyze an orthodontic data set that compares the frictional resistance created by orthodontic fixtures. Copyright (c) 2009 John Wiley & Sons, Ltd.

  12. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.

  13. Comparative analysis of using intellectual games in the educational process

    Directory of Open Access Journals (Sweden)

    Romanova I.A.

    2014-02-01

    Full Text Available Purpose: to explore the experience of using World Mind Games in the educational process. Material: 26 literary sources were analyzed. Results: intellectual sports have reached a notable degree of development around the world, and several kinds of intellectual games are used in the educational process in schools. In line with the cultural traditions of the East and the West, Go and chess have received the greatest distribution in the school environment. The directions of application of chess, checkers, Go, backgammon and tangram in teaching practice are described, as well as the particular impact of these games on the development of the child. Conclusions: the implementation of intellectual games in the learning process is promising as a means of enhancing cognitive mental processes and the general culture of the individual. Attempts to introduce the teaching of intellectual games into the school educational process are being made in many countries.

  14. NetDiff - Bayesian model selection for differential gene regulatory network inference.

    Science.gov (United States)

    Thorne, Thomas

    2016-12-16

    Differential networks allow us to better understand the changes in cellular processes that are exhibited in conditions of interest, identifying variations in gene regulation or protein interaction between, for example, cases and controls, or in response to external stimuli. Here we present a novel methodology for the inference of differential gene regulatory networks from gene expression microarray data. Specifically we apply a Bayesian model selection approach to compare models of conserved and varying network structure, and use Gaussian graphical models to represent the network structures. We apply a variational inference approach to the learning of Gaussian graphical models of gene regulatory networks, which enables us to perform Bayesian model selection that is significantly more computationally efficient than Markov Chain Monte Carlo approaches. Our method is demonstrated to be more robust than independent analysis of data from multiple conditions when applied to synthetic network data, generating fewer false positive predictions of differential edges. We demonstrate the utility of our approach on real world gene expression microarray data by applying it to existing data from amyotrophic lateral sclerosis cases with and without mutations in C9orf72, and controls, where we are able to identify differential network interactions for further investigation.

  15. Artificial Neural Networks approach to pharmacokinetic model selection in DCE-MRI studies.

    Science.gov (United States)

    Mohammadian-Behbahani, Mohammad-Reza; Kamali-Asl, Ali-Reza

    2016-12-01

    In pharmacokinetic analysis of Dynamic Contrast Enhanced MRI data, a descriptive physiological model should be selected properly out of a set of candidate models. Classical techniques suggested for this purpose suffer from issues like computation time and general fitting problems. This article proposes an approach based on Artificial Neural Networks (ANNs) for solving these problems. A set of three physiologically and mathematically nested models generated from the Tofts model were assumed: Model I, II and III. These models cover three possible tissue types from normal to malignant. Using 21 experimental arterial input functions and 12 levels of noise, a set of 27,216 time traces were generated. ANN was validated and optimized by the k-fold cross validation technique. An experimental dataset of 20 patients with glioblastoma was applied to ANN and the results were compared to outputs of F-test using Dice index. Optimum neuronal architecture ([6:7:1]) and number of training epochs (50) of the ANN were determined. ANN correctly classified more than 99% of the dataset. Confusion matrices for both ANN and F-test results showed the superior performance of the ANN classifier. The average Dice index (over 20 patients) indicated a 75% similarity between model selection maps of ANN and F-test. ANN improves the model selection process by removing the need for time-consuming, problematic fitting algorithms; as well as the need for hypothesis testing. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
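    The Dice index used above to compare the ANN and F-test model-selection maps can be computed directly. This is a generic sketch with made-up label maps, not the paper's implementation: the per-label overlap 2|A∩B|/(|A|+|B|) is averaged over the model labels.

    ```python
    import numpy as np

    def dice_index(map_a, map_b):
        """Mean Dice similarity over labels: 2*|A∩B| / (|A| + |B|) per label."""
        labels = np.union1d(np.unique(map_a), np.unique(map_b))
        scores = []
        for lab in labels:
            a = (map_a == lab)
            b = (map_b == lab)
            denom = a.sum() + b.sum()
            scores.append(2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0)
        return float(np.mean(scores))

    # Hypothetical 2x3 model-selection maps (labels 1-3 = Model I, II, III).
    ann_map   = np.array([[1, 1, 2], [2, 3, 3]])
    ftest_map = np.array([[1, 2, 2], [2, 3, 3]])
    score = dice_index(ann_map, ftest_map)
    ```

    Identical maps score 1.0; the 75% average similarity reported in the abstract corresponds to a score of 0.75 on this scale.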

  16. Comparative Study of Image Denoising Algorithms in Digital Image Processing

    Directory of Open Access Journals (Sweden)

    Aarti

    2014-05-01

    Full Text Available This paper proposes a basic scheme for understanding the fundamentals of digital image processing and image denoising algorithms. Three basic operations are carried out during image processing, i.e. image rectification and restoration, enhancement, and information extraction. Image denoising is a basic problem in digital image processing; the main task is to make the image free from noise. Salt & pepper (impulse) noise, additive white Gaussian noise and blurredness are the types of noise that occur during transmission and capturing. For denoising the image, several algorithms are available.

  17. Comparative Study of Image Denoising Algorithms in Digital Image Processing

    Directory of Open Access Journals (Sweden)

    Aarti Kumari

    2015-11-01

    Full Text Available This paper proposes a basic scheme for understanding the fundamentals of digital image processing and image denoising algorithms. Three basic operations are carried out during image processing, i.e. image rectification and restoration, enhancement, and information extraction. Image denoising is a basic problem in digital image processing; the main task is to make the image free from noise. Salt & pepper (impulse) noise, additive white Gaussian noise and blurredness are the types of noise that occur during transmission and capturing. For denoising the image, several algorithms are available.

  18. Comparative analysis of business rules and business process modeling languages

    Directory of Open Access Journals (Sweden)

    Audrius Rima

    2013-03-01

    Full Text Available When developing an information system, it is important to create clear models and to choose suitable modeling languages. The article analyzes the SRML, SBVR, PRR, SWRL and OCL rule-specification languages and the UML, DFD, CPN, EPC, IDEF3 and BPMN business process modeling languages. It presents a theoretical comparison of business rules and business process modeling languages, and compares sets of business process modeling languages and business rules representation languages according to selected modeling aspects. Finally, the best-fitting set of languages is selected for a three-layer framework for business rule based software modeling.

  19. Autoregressive model selection with simultaneous sparse coefficient estimation

    CERN Document Server

    Sang, Hailin

    2011-01-01

    In this paper we propose a sparse coefficient estimation procedure for autoregressive (AR) models based on penalized conditional maximum likelihood. The penalized conditional maximum likelihood estimator (PCMLE) thus developed has the advantage of performing simultaneous coefficient estimation and model selection. Mild conditions are given on the penalty function and the innovation process, under which the PCMLE satisfies strong consistency, local $N^{-1/2}$ consistency, and an oracle property, respectively, where N is the sample size. Two penalty functions, the least absolute shrinkage and selection operator (LASSO) and smoothly clipped absolute deviation (SCAD), are considered as examples, and SCAD is shown to have better performance than LASSO. A simulation study confirms our theoretical results. At the end, we provide an application of our method to historical price data of the US Industrial Production Index for consumer goods, and the result is very promising.
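    A rough illustration of penalized AR coefficient estimation: the sketch below fits a LASSO-penalized AR(4) model by cyclic coordinate descent with soft-thresholding on synthetic AR(1) data. It uses a plain least-squares objective rather than the paper's conditional-likelihood PCMLE, and all settings are illustrative.

    ```python
    import numpy as np

    def lasso_ar(series, p=4, lam=0.1, n_iter=200):
        """LASSO-penalized AR(p) fit via cyclic coordinate descent with
        soft-thresholding (no intercept; series assumed centered)."""
        y = series[p:]
        # Column k-1 holds the lag-k values aligned with y.
        X = np.column_stack([series[p - k:-k] for k in range(1, p + 1)])
        n = len(y)
        beta = np.zeros(p)
        col_ss = (X ** 2).sum(axis=0)
        for _ in range(n_iter):
            for j in range(p):
                r = y - X @ beta + X[:, j] * beta[j]      # partial residual
                rho = X[:, j] @ r
                beta[j] = np.sign(rho) * max(abs(rho) - lam * n, 0.0) / col_ss[j]
        return beta

    rng = np.random.default_rng(1)
    n = 400
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = 0.6 * x[t - 1] + rng.normal()   # true AR(1) inside an AR(4) fit
    x -= x.mean()
    beta = lasso_ar(x, p=4, lam=0.05)
    ```

    The L1 penalty shrinks the spurious higher-order lags toward exactly zero, which is the "simultaneous estimation and selection" property the abstract highlights.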

  20. Bayesian Constrained-Model Selection for Factor Analytic Modeling

    OpenAIRE

    Peeters, Carel F.W.

    2016-01-01

    My dissertation revolves around Bayesian approaches towards constrained statistical inference in the factor analysis (FA) model. Two interconnected types of restricted-model selection are considered. These types have a natural connection to selection problems in the exploratory FA (EFA) and confirmatory FA (CFA) model and are termed Type I and Type II model selection. Type I constrained-model selection is taken to mean the determination of the appropriate dimensionality of a model. This type ...

  1. A Comparative Study of Point Cloud Data Collection and Processing

    Science.gov (United States)

    Pippin, J. E.; Matheney, M.; Gentle, J. N., Jr.; Pierce, S. A.; Fuentes-Pineda, G.

    2016-12-01

    Over the past decade, there has been dramatic growth in the acquisition of publicly funded high-resolution topographic data for scientific, environmental, engineering and planning purposes. These data sets are valuable for applications of interest across a large and varied user community. However, because of the large volumes of data produced by high-resolution mapping technologies and the expense of aerial data collection, it is often difficult to collect and distribute these datasets. Furthermore, the data can be technically challenging to process, requiring software and computing resources not readily available to many users. This study presents a comparison of advanced computing hardware and software used to collect and process point cloud datasets, such as LIDAR scans. Activities included implementation and testing of open source libraries and applications for point cloud data processing, such as MeshLab, Blender, PDAL and PCL. Additionally, a suite of commercial-scale applications, Skanect and CloudCompare, was applied to raw datasets. Handheld hardware solutions, a Structure Scanner and an Xbox 360 Kinect V1, were tested for their ability to scan at three field locations. The resultant data projects successfully scanned and processed subsurface karst features ranging from small stalactites to large rooms, as well as a surface waterfall feature. Outcomes support the feasibility of rapid sensing in 3D at field scales.

  2. An Introduction to Model Selection: Tools and Algorithms

    Directory of Open Access Journals (Sweden)

    Sébastien Hélie

    2006-03-01

    Full Text Available Model selection is a complicated matter in science, and psychology is no exception. In particular, the high variance in the object of study (i.e., humans) prevents the use of Popper’s falsification principle (which is the norm in other sciences). Therefore, the desirability of quantitative psychological models must be assessed by measuring the capacity of the model to fit empirical data. In the present paper, an error measure (likelihood), as well as five methods to compare model fits (the likelihood ratio test, Akaike’s information criterion, the Bayesian information criterion, bootstrapping and cross-validation), are presented. The use of each method is illustrated by an example, and the advantages and weaknesses of each method are also discussed.
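    Three of the comparison methods named above reduce to one-line formulas once the maximized log-likelihoods are in hand. A sketch with hypothetical numbers (the log-likelihood values and parameter counts are made up) also shows that AIC and BIC can disagree, since BIC's penalty grows with sample size:

    ```python
    import math

    def aic(loglik, k):
        """Akaike's information criterion: smaller is better."""
        return -2.0 * loglik + 2.0 * k

    def bic(loglik, k, n):
        """Bayesian information criterion: penalty grows with sample size n."""
        return -2.0 * loglik + k * math.log(n)

    # Hypothetical fits of two nested models to n = 100 observations.
    n = 100
    ll_simple, k_simple = -210.4, 3
    ll_complex, k_complex = -206.1, 7

    lr_stat = 2.0 * (ll_complex - ll_simple)   # likelihood-ratio statistic, df = 4

    best_aic = min((aic(ll_simple, k_simple), "simple"),
                   (aic(ll_complex, k_complex), "complex"))[1]
    best_bic = min((bic(ll_simple, k_simple, n), "simple"),
                   (bic(ll_complex, k_complex, n), "complex"))[1]
    ```

    Here AIC prefers the complex model while BIC prefers the simple one, which is exactly the kind of disagreement the paper's method comparison addresses.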

  3. Comparative Proteomic Analysis of the Fiber Elongating Process in Cotton

    Institute of Scientific and Technical Information of China (English)

    LIU Jin-yuan; YANG Yi-wei; BIAN Shao-min

    2008-01-01

    A comparative proteomic analysis was performed to explore the mechanism of cell elongation in developing cotton fibers. The temporal changes of global proteomes at five representative development stages (5-25 days post-anthesis [DPA]) were examined using 2-D electrophoresis.

  4. The Bologna Process in Portugal and Poland: A comparative study

    Directory of Open Access Journals (Sweden)

    Eduardo Tomé

    2016-04-01

    Full Text Available We analyze the consequences of the introduction of the EU-directed Bologna Process in Portuguese and Polish universities. Specifically, we study how the Bologna Process has impacted the employment situations of graduates in Portugal and Poland. Concerning methodology, we use available official data on the implementation of the Bologna Process in Poland and Portugal. We have found that investment in Higher Education (HE) stalled in both countries in the years since the implementation of the Bologna Process due to massive budgetary restrictions. Nevertheless, the stock of HE graduates increased massively, seemingly because the authorities thought that the free market should lead the HE market in the two countries. Employment prospects, unemployment prospects and wages of graduates continued to be much better than those of non-graduates. But an unexpected divide appeared between graduates and Masters/PhDs, with important social consequences. While the former “saved” themselves and prospered, going into high-skilled jobs, the latter had to endure minimum wage and underskilled occupations. The low payment for these youngsters was also justified because the supply of HE graduates increased with Bologna but the demand by companies did not match it. In fact, both Portugal and Poland have stronger needs on the demand side of the market than on the supply side. Finally, both markets continue to be essentially public, and the experiments with privatization did not succeed much. In terms of social implications, the Bologna Process faces in both countries the massive and decisive challenge of eliminating youth unemployment and emigration, but this can only be done with the cooperation of companies that should create highly paid and highly skilled jobs. Only when this occurs will the Bologna Process achieve its ultimate goal of transforming Portugal and Poland into high-skill equilibria. Let us hope it happens, for the good of the two countries and particularly for the

  5. FIST and the Analytical Hierarchy Process: Comparative Modeling

    Science.gov (United States)

    2013-03-01

    higher capital construction costs - think Portland Cement Concrete (PCC) pavement versus asphalt pavement. PCC will cost more to install, but last...combining AHP with integer programming to optimize a portfolio of defense programs. While AHP is a stand-alone technique that derives an overall...efficiently executing a program using program management techniques (Lepore et al., 2012). Summary The DoD acquisition process is too lengthy, costs

  6. N-terminal Protein Processing: A Comparative Proteogenomic Analysis*

    OpenAIRE

    Bonissone, Stefano; Gupta, Nitin; Romine, Margaret; Bradshaw, Ralph A.; Pavel A Pevzner

    2013-01-01

    N-terminal methionine excision (NME) and N-terminal acetylation (NTA) are two of the most common protein post-translational modifications. NME is a universally conserved activity and a highly specific mechanism across all life forms. NTA is very common in eukaryotes but occurs rarely in prokaryotes. By analyzing data sets from yeast, mammals and bacteria (including 112 million spectra from 57 bacterial species), the largest comparative proteogenomics study to date, it is shown that previous a...

  7. The Pace of Aesthetic Process: A Comparative Approach

    OpenAIRE

    Luque Moya, Gloria

    2013-01-01

    Traditionally, western theory of art has been equipped with a set of dualisms such as subject-object, artistic process-artistic product, active artist-passive spectator, that has entailed a rejection of a plenary notion of the human integrated with nature and the cosmos. In this context, John Dewey, who presented his theory of art in 1934 in Art as Experience, showed how “art has been set in a remote pedestal”, separated from the lowly activities we perform in our ordinary lives. He ...

  8. Homology and ontogeny: pattern and process in comparative developmental biology.

    Science.gov (United States)

    Scholtz, Gerhard

    2005-11-01

    In this article the interface between development and homology is discussed. Development is here interpreted as a sequence of evolutionarily independent stages. Any approach stressing the importance of specific developmental stages is rejected. A homology definition is favoured which includes similarity, and complexity serves as a test for homology. Complexity is seen as the possibility of subdividing a character into evolutionarily independent corresponding substructures. Topology as a test for homology is critically discussed because corresponding positions are not necessarily indicative of homology. Complexity can be used twofold for homology assessments of development: either stages or processes of development are homologized. These two approaches must not be conflated. This distinction leads to the conclusion that there is no ontogenetic homology "criterion".

  9. The detection of observations possibly influential for model selection

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans)

    1991-01-01

    textabstractModel selection can involve several variables and selection criteria. A simple method to detect observations possibly influential for model selection is proposed. The potentials of this method are illustrated with three examples, each of which is taken from related studies.

  10. Comparative Study between Programming Systems for Incremental Sheet Forming Process

    Directory of Open Access Journals (Sweden)

    Moayedfar Majid

    2014-07-01

    Full Text Available Incremental Sheet Forming (ISF) is a method developed to form desired surface features on sheet metals in small-batch production series. Because no dedicated programming system exists to execute, control and monitor the whole ISF process, researchers have tried to adapt programming systems designed for chip-making processes to suit ISF. In this work, experiments were conducted to assess the suitability and quality of ISF parts produced using manual CNC part programming. ISF was carried out on stainless steel sheets using Computer Numerical Control (CNC) milling machines. Prior to running the experiments, a ball-point shaped tool made of bronze alloy was fabricated for its superior ability to reduce friction and improve the surface quality of the stainless steel sheet metal. The experiments also employed the method of forming in the negative direction with a blank mould, which helped the tool to shape the desired part quickly. The programming for the CNC milling machine was generated using the MasterCAM software and edited before being transferred to the machine; a second program was written manually to show the differences in output data between software-generated and manual programming. From the results, the best programming method was identified and a minimum amount of contact area between tool and sheet metal was achieved.

  11. Comparative cost estimates of five coal utilization processes

    Energy Technology Data Exchange (ETDEWEB)

    1979-01-01

    Detailed capital and operating cost estimates were prepared for the generation of electric power in a new, net 500 MW(e), coal-burning facility by five alternative processes: conventional boiler with no control of SO2 emissions, atmospheric fluidized bed steam generator (AFB), conventional boiler equipped with a limestone FGD system, conventional boiler equipped with a magnesia FGD system, and coal beneficiation followed by a conventional boiler equipped with limestone FGD for part of the flue gas stream. For a coal containing 3.5% sulfur, meeting SO2 emission limits of 1.2 pounds per million Btu fired was most economical with the limestone FGD system. This result was unchanged for a coal containing 5% sulfur; however, for 2% sulfur, limestone FGD and AFB were competitive methods of controlling SO2 emissions. Brief consideration of 90% reduction of SO2 emissions led to the choice of limestone FGD as the most economical method. Byproduct credit for the sulfuric acid produced in regenerating the magnesia could make that system competitive with the limestone FGD system, depending upon local markets. The cost of sludge fixation and disposal would make limestone FGD noneconomic in many situations, if these steps are necessary.

  12. Ultra High Temperature Ceramics' Processing Routes and Microstructures Compared

    Science.gov (United States)

    Gusman, Michael; Stackpoole, Mairead; Johnson, Sylvia; Gasch, Matt; Lau, Kai-Hung; Sanjurjo, Angel

    2009-01-01

    Ultra High Temperature Ceramics (UHTCs), such as HfB2 and ZrB2 composites containing SiC, are known to have good thermal shock resistance and high thermal conductivity at elevated temperatures. These UHTCs have been proposed for a number of structural applications in hypersonic vehicles, nozzles, and sharp leading edges. NASA Ames is working on controlling UHTC properties (especially, mechanical properties, thermal conductivity, and oxidation resistance) through processing, composition, and microstructure. In addition to using traditional methods of combining additives to boride powders, we are preparing UHTCs using coating powders to produce both borides and additives. These coatings and additions to the powders are used to manipulate and control grain-boundary composition and second- and third-phase variations within the UHTCs. Controlling the composition of high temperature oxidation by-products is also an important consideration. The powders are consolidated by hot-pressing or field-assisted sintering (FAS). Comparisons of microstructures and hardness data will be presented.

  13. Comparative Cloud Deployment and Service Orchestration Process Using Juju Charms

    Directory of Open Access Journals (Sweden)

    Gaurav Raj

    2013-04-01

    Full Text Available This age is known as a service-oriented age due to globalization and the day-by-day advancement of technology, which leads business developers to deploy their services over the cloud. This has led to the development of new platforms capable of easily coping with business expectations and has introduced tough competition between platform providers. These days, services like IaaS are provided by many cloud service providers, along with PaaS and SaaS. We provide a comparative study between two types of platform (the open source cloud platform OpenStack and the proprietary-based platform Eucalyptus) for deployment of IaaS, taking into consideration the size of deployment, manageability and fault tolerance, API provisioning/support, performance, compatibility with other platforms and the types of services to be hosted. We discuss here two in-demand IaaS platforms provided by OpenStack and Eucalyptus. Both platform providers are competitive in terms of deployment of IaaS and service provisioning to their big clients. We also discuss the tools that can be used with these cloud platforms to easily install services on these clouds.

  14. Comparative Analysis of Radiometer Systems Using Non-Stationary Processes

    Science.gov (United States)

    Racette, Paul; Lang, Roger; Krebs, Carolyn A. (Technical Monitor)

    2002-01-01

    Radiometers require periodic calibration to correct for instabilities in the receiver response. Various calibration techniques exist that minimize the effect of instabilities in the receivers. The optimal technique depends upon many parameters. Some parameters are constrained by the particular application and others can be chosen in the system design. For example, the measurement uncertainty may be reduced to the limits of the resolution of the measurement (sensitivity) if periodic absolute calibration can be performed with sufficient frequency. However if the period between calibrations is long, a reference-differencing technique, i.e. Dicke-type design, can yield better performance. The measurement uncertainty not only depends upon the detection scheme but also on the number of pixels between calibrations, the integration time per pixel, integration time per calibration reference measurement, calibration reference temperature, and the brightness temperature of what is being measured. The best scheme for reducing the measurement uncertainty also depends, in large part, on the stability of the receiver electronics. In this presentation a framework for evaluating calibration schemes for a wide range of system architectures is presented. Two methods for treating receiver non-stationarity are compared with radiometer measurements.

  15. COMPARATIVE ANALYSIS OF SATELLITE IMAGE PRE-PROCESSING TECHNIQUES

    Directory of Open Access Journals (Sweden)

    T. Sree Sharmila

    2013-01-01

    Full Text Available Satellite images are corrupted by noise during acquisition and transmission. Removing the noise by attenuating the high-frequency image components removes some important details as well. In order to retain the useful information and improve the visual appearance, effective denoising and resolution enhancement techniques are required. In this research, a Hybrid Directional Lifting (HDL) technique is proposed to retain the important details of the image and improve the visual appearance. A Discrete Wavelet Transform (DWT) based interpolation technique is developed for enhancing the resolution of the denoised image. The performance of the proposed techniques is tested on Land Remote-Sensing Satellite (LANDSAT) images using the quantitative performance measure Peak Signal to Noise Ratio (PSNR) and computation time. The PSNR of the HDL technique increases by 1.02 dB compared to the standard denoising technique, and the DWT based interpolation technique increases it by 3.94 dB. The experimental results reveal that the newly developed image denoising and resolution enhancement techniques improve the image visual quality with rich textures.
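    The PSNR figure of merit used above is straightforward to compute from the mean squared error against a reference image. A generic sketch with synthetic 8-bit images (the noise levels are illustrative stand-ins, not the paper's data):

    ```python
    import numpy as np

    def psnr(reference, test, peak=255.0):
        """Peak signal-to-noise ratio in dB between a reference image
        and a processed (e.g. denoised) image."""
        mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
        if mse == 0:
            return float("inf")
        return 10.0 * np.log10(peak ** 2 / mse)

    rng = np.random.default_rng(0)
    clean = rng.integers(0, 256, size=(64, 64)).astype(float)
    noisy = np.clip(clean + rng.normal(0, 10, size=clean.shape), 0, 255)
    # Stand-in for a denoiser's output: residual noise roughly halved.
    denoised = np.clip(clean + rng.normal(0, 5, size=clean.shape), 0, 255)

    gain = psnr(clean, denoised) - psnr(clean, noisy)   # dB improvement
    ```

    Improvements such as the 1.02 dB and 3.94 dB figures quoted in the abstract are differences on this dB scale.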

  16. Adaptive Covariance Estimation with model selection

    CERN Document Server

    Biscay, Rolando; Loubes, Jean-Michel

    2012-01-01

    We provide in this paper a fully adaptive penalized procedure to select a covariance among a collection of models, observing i.i.d. replications of the process at fixed observation points. For this we generalize previous results of Bigot et al. and propose to use a data-driven penalty to obtain an oracle inequality for the estimator. We prove that this method is an extension to the matricial regression model of the work by Baraud.

  17. A genetic algorithm based global search strategy for population pharmacokinetic/pharmacodynamic model selection.

    Science.gov (United States)

    Sale, Mark; Sherer, Eric A

    2015-01-01

    The current algorithm for selecting a population pharmacokinetic/pharmacodynamic model is based on the well-established forward addition/backward elimination method. A central strength of this approach is the opportunity for a modeller to continuously examine the data and postulate new hypotheses to explain observed biases. This algorithm has served the modelling community well, but the model selection process has essentially remained unchanged for the last 30 years. During this time, more robust approaches to model selection have been made feasible by new technology and dramatic increases in computation speed. We review these methods, with emphasis on genetic algorithm approaches and discuss the role these methods may play in population pharmacokinetic/pharmacodynamic model selection.
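    A toy sketch of the global-search idea, assuming nothing about the authors' implementation: a genetic algorithm searches over covariate-inclusion bitstrings with BIC as the fitness, in place of stepwise forward addition/backward elimination. The data, population size and operator settings are all made up.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic data: only the first two of six candidate covariates matter.
    n, p = 120, 6
    X = rng.normal(size=(n, p))
    y = 1.5 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(size=n)

    def bic_of(mask):
        """BIC of an OLS fit using the covariates flagged in the 0/1 mask."""
        cols = np.flatnonzero(mask)
        Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = np.sum((y - Xs @ beta) ** 2)
        return n * np.log(rss / n) + Xs.shape[1] * np.log(n)

    def ga_select(pop_size=20, n_gen=30, p_mut=0.1):
        pop = rng.integers(0, 2, size=(pop_size, p))
        for _ in range(n_gen):
            fit = np.array([bic_of(ind) for ind in pop])
            order = np.argsort(fit)                  # lower BIC is fitter
            parents = pop[order[: pop_size // 2]]    # elitist selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = parents[rng.integers(len(parents), size=2)]
                cut = rng.integers(1, p)
                child = np.concatenate([a[:cut], b[cut:]])  # 1-point crossover
                child ^= rng.random(p) < p_mut              # bit-flip mutation
                children.append(child)
            pop = np.vstack([parents] + children)
        fit = np.array([bic_of(ind) for ind in pop])
        return pop[np.argmin(fit)]

    best = ga_select()
    ```

    Unlike stepwise search, the population explores many model structures in parallel, which is the robustness argument made in the abstract.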

  18. A new class of indicators for the model selection of scaling laws in nuclear fusion

    CERN Document Server

    Lupelli, I; Gaudio, P; Gelfusa, M; Mazon, D; Vega, J

    2013-01-01

    The development of computationally efficient model selection strategies represents an important problem facing the analysis of Nuclear Fusion experimental data, in particular in the field of scaling laws for the extrapolation to future machines, and image processing. In this paper, a new model selection indicator, named Model Falsification Criterion (MFC), will be presented and applied to the problem of choosing the most generalizable scaling laws for the power threshold to access the H-mode of confinement in Tokamaks. The proposed indicator is based on the properties of the model residuals, their entropy and an implementation of the data falsification principle. The model selection ability of the proposed criterion will be demonstrated in comparison with the most widely used frequentist (Akaike Information Criterion) and Bayesian (Bayesian Information Criterion) indicators.

  19. Astrophysical Model Selection in Gravitational Wave Astronomy

    Science.gov (United States)

    Adams, Matthew R.; Cornish, Neil J.; Littenberg, Tyson B.

    2012-01-01

    Theoretical studies in gravitational wave astronomy have mostly focused on the information that can be extracted from individual detections, such as the mass of a binary system and its location in space. Here we consider how the information from multiple detections can be used to constrain astrophysical population models. This seemingly simple problem is made challenging by the high dimensionality and high degree of correlation in the parameter spaces that describe the signals, and by the complexity of the astrophysical models, which can also depend on a large number of parameters, some of which might not be directly constrained by the observations. We present a method for constraining population models using a hierarchical Bayesian modeling approach which simultaneously infers the source parameters and population model and provides the joint probability distributions for both. We illustrate this approach by considering the constraints that can be placed on population models for galactic white dwarf binaries using a future space-based gravitational wave detector. We find that a mission that is able to resolve approximately 5000 of the shortest period binaries will be able to constrain the population model parameters, including the chirp mass distribution and a characteristic galaxy disk radius to within a few percent. This compares favorably to existing bounds, where electromagnetic observations of stars in the galaxy constrain disk radii to within 20%.

  20. The Impact of Varied Discrimination Parameters on Mixed-Format Item Response Theory Model Selection

    Science.gov (United States)

    Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.

    2013-01-01

    Whittaker, Chang, and Dodd compared the performance of model selection criteria when selecting among mixed-format IRT models and found that the criteria did not perform adequately when selecting the more parameterized models. It was suggested by M. S. Johnson that the problems when selecting the more parameterized models may be because of the low…

  1. A qualitative model structure sensitivity analysis method to support model selection

    Science.gov (United States)

    Van Hoey, S.; Seuntjens, P.; van der Kwast, J.; Nopens, I.

    2014-11-01

    The selection and identification of a suitable hydrological model structure is a more challenging task than fitting parameters of a fixed model structure to reproduce a measured hydrograph. The suitable model structure is highly dependent on various criteria, i.e. the modeling objective, the characteristics and the scale of the system under investigation and the available data. Flexible environments for model building are available, but need to be assisted by proper diagnostic tools for model structure selection. This paper introduces a qualitative method for model component sensitivity analysis. Traditionally, model sensitivity is evaluated for model parameters. In this paper, the concept is translated into an evaluation of model structure sensitivity. Similarly to the one-factor-at-a-time (OAT) methods for parameter sensitivity, this method varies the model structure components one at a time and evaluates the change in sensitivity towards the output variables. As such, the effect of model component variations can be evaluated towards different objective functions or output variables. The methodology is presented for a simple lumped hydrological model environment, introducing different possible model building variations. By comparing the effect of changes in model structure for different model objectives, model selection can be better evaluated. Based on the presented component sensitivity analysis of a case study, some suggestions with regard to model selection are formulated for the system under study: (1) a non-linear storage component is recommended, since it ensures more sensitive (identifiable) parameters for this component and less parameter interaction; (2) interflow is mainly important for the low flow criteria; (3) the excess infiltration process is most influential when focusing on the lower flows; (4) a simpler routing component is advisable; and (5) baseflow parameters have in general low sensitivity values, except for the low flow criteria.
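    The one-component-at-a-time idea can be sketched in a few lines: swap one model component for an alternative, keep everything else fixed, and measure the change in an output criterion. The two storage variants below are purely illustrative stand-ins, not the paper's hydrological components, and the rainfall series is made up.

    ```python
    import numpy as np

    # Minimal lumped-model sketch: one storage component drained each step.
    def storage_linear(s, rain, k=0.1):
        """Linear reservoir: outflow proportional to storage."""
        return max(s + rain - k * s, 0.0), k * s

    def storage_nonlinear(s, rain, k=0.05, b=1.5):
        """Non-linear reservoir: outflow is a power function of storage."""
        out = k * s ** b
        return max(s + rain - out, 0.0), out

    def run(storage, rain_series):
        """Route a rainfall series through the chosen storage component."""
        s, flows = 10.0, []
        for r in rain_series:
            s, q = storage(s, r)
            flows.append(q)
        return np.array(flows)

    rain = np.array([0, 5, 12, 3, 0, 0, 8, 1, 0, 0], dtype=float)
    q_lin = run(storage_linear, rain)        # baseline structure
    q_non = run(storage_nonlinear, rain)     # one component swapped

    # Effect of the one-component swap on the simulated flow series.
    effect = float(np.sqrt(np.mean((q_lin - q_non) ** 2)))
    ```

    Repeating this for each component, and scoring `effect` against different objective functions (e.g. high-flow versus low-flow criteria), gives the kind of structure-sensitivity table the paper builds its recommendations on.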

  2. Chain-Wise Generalization of Road Networks Using Model Selection

    Science.gov (United States)

    Bulatov, D.; Wenzel, S.; Häufel, G.; Meidow, J.

    2017-05-01

    Streets are essential entities of urban terrain and their automated extraction from airborne sensor data is cumbersome because of a complex interplay of geometric, topological and semantic aspects. Given a binary image representing the road class, centerlines of road segments are extracted by means of skeletonization. The focus of this paper lies in a well-reasoned representation of these segments by means of geometric primitives, such as straight line segments as well as circle and ellipse arcs. We propose the fusion of raw segments based on similarity criteria; the output of this process is a set of so-called chains, which better match the intuitive perception of what a street is. Further, we propose a two-step approach for chain-wise generalization: first, the chain is pre-segmented using circlePeucker; then, model selection is used to decide whether two neighboring segments should be fused into a new geometric entity. Thereby, we consider both variance-covariance analysis of residuals and model complexity. The results on a complex dataset with many traffic roundabouts indicate the benefits of the proposed procedure.

  3. Accurate model selection of relaxed molecular clocks in Bayesian phylogenetics.

    Science.gov (United States)

    Baele, Guy; Li, Wai Lok Sibon; Drummond, Alexei J; Suchard, Marc A; Lemey, Philippe

    2013-02-01

    Recent implementations of path sampling (PS) and stepping-stone sampling (SS) have been shown to outperform the harmonic mean estimator (HME) and a posterior simulation-based analog of Akaike's information criterion through Markov chain Monte Carlo (AICM) in Bayesian model selection of demographic and molecular clock models. Almost simultaneously, a Bayesian model averaging approach was developed that avoids conditioning on a single model but averages over a set of relaxed clock models. This approach returns estimates of the posterior probability of each clock model, through which one can estimate the Bayes factor in favor of the maximum a posteriori (MAP) clock model; however, this Bayes factor estimate may suffer when the posterior probability of the MAP model approaches 1. Here, we compare these two recent developments with the HME, the stabilized/smoothed HME (sHME), and the AICM, using both synthetic and empirical data. Our comparison shows reassuringly that MAP identification and its Bayes factor provide similar performance to PS and SS, and that these approaches considerably outperform the HME, sHME, and AICM in selecting the correct underlying clock model. We also illustrate the importance of using proper priors on a large set of empirical data sets.
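
    Under equal prior model probabilities, the Bayes factor in favor of the MAP model is just the posterior odds, which (as the abstract notes) degenerates as the MAP probability approaches 1. A minimal sketch with made-up posterior probabilities, not values from the paper:

```python
import numpy as np

def bayes_factor_map(posterior_probs, prior_probs=None):
    """Bayes factor of the MAP model versus all others, from the posterior
    model probabilities returned by a model-averaging run.

    With uniform priors this reduces to p_map / (1 - p_map), which blows up
    as p_map -> 1 (finitely many MCMC samples cannot support the estimate).
    """
    p = np.asarray(posterior_probs, dtype=float)
    if prior_probs is None:
        prior = np.full_like(p, 1.0 / len(p))  # uniform prior over models
    else:
        prior = np.asarray(prior_probs, dtype=float)
    i = int(np.argmax(p))                      # index of the MAP model
    post_odds = p[i] / (1.0 - p[i])
    prior_odds = prior[i] / (1.0 - prior[i])
    return post_odds / prior_odds

# three hypothetical clock models with posterior probabilities 0.8, 0.15, 0.05
bf = bayes_factor_map([0.8, 0.15, 0.05])
```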

  4. The Governance of Higher Education Regionalisation: Comparative Analysis of the Bologna Process and MERCOSUR-Educativo

    Science.gov (United States)

    Verger, Antoni; Hermo, Javier Pablo

    2010-01-01

    The article analyses two processes of higher education regionalisation, MERCOSUR-Educativo in Latin America and the Bologna Process in Europe, from a comparative perspective. The comparative analysis is centered on the content and the governance of both processes and, specifically, on the reasons of their uneven evolution and implementation. We…

  5. The governance of higher education regionalisation: comparative analysis of the Bologna Process and MERCOSUR-Educativo

    NARCIS (Netherlands)

    Verger, A.; Hermo, J.P.

    2010-01-01

    The article analyses two processes of higher education regionalisation, MERCOSUR‐Educativo in Latin America and the Bologna Process in Europe, from a comparative perspective. The comparative analysis is centered on the content and the governance of both processes and, specifically, on the reasons of

  6. A model selection method for nonlinear system identification based FMRI effective connectivity analysis.

    Science.gov (United States)

    Li, Xingfeng; Coyle, Damien; Maguire, Liam; McGinnity, Thomas M; Benali, Habib

    2011-07-01

    In this paper, a model selection algorithm for a nonlinear system identification method is proposed to study functional magnetic resonance imaging (fMRI) effective connectivity. Unlike most other methods, this method does not need a pre-defined structure/model for effective connectivity analysis. Instead, it relies on selecting significant nonlinear or linear covariates for the differential equations that describe the mapping relationship between brain output (fMRI response) and input (experiment design). These covariates, as well as their coefficients, are estimated with a least angle regression (LARS) method. In the implementation of the LARS method, the corrected Akaike information criterion (AICc) and the leave-one-out (LOO) cross-validation method were employed and compared for model selection. Simulation comparisons between the dynamic causal model (DCM), the nonlinear identification method, and the model selection method for modelling single-input single-output (SISO) and multiple-input multiple-output (MIMO) systems were conducted. Results show that the LARS model selection method is faster than DCM and achieves a compact and economic nonlinear model at the same time. To verify the efficacy of the proposed approach, an analysis of the dorsal and ventral visual pathway networks was carried out on three real datasets. The results show that LARS can be used for model selection in an fMRI effective connectivity study with phase-encoded, standard block, and random block designs. It is also shown that the LOO cross-validation method for nonlinear model selection yields a smaller residual sum of squares than the AICc criterion for this study.
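
    The covariate-selection step can be illustrated with a greedy forward search scored by AICc, a much simpler stand-in for the LARS procedure the paper actually uses; the data, variable names, and coefficients below are synthetic:

```python
import numpy as np

def aicc(rss, n, k):
    """Corrected AIC for a Gaussian least-squares fit with k coefficients."""
    return n * np.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

def fit_rss(X, y):
    """Least-squares fit; return the residual sum of squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

def forward_select(X, y):
    """Greedy forward selection of covariates, stopping when AICc worsens."""
    n, p = X.shape
    chosen, best_score = [], np.inf
    while len(chosen) < p:
        scores = {j: aicc(fit_rss(X[:, chosen + [j]], y), n, len(chosen) + 1)
                  for j in range(p) if j not in chosen}
        j, s = min(scores.items(), key=lambda kv: kv[1])
        if s >= best_score:
            break                     # no candidate improves the criterion
        chosen.append(j)
        best_score = s
    return chosen

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))         # six candidate covariates
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.normal(size=200)
selected = forward_select(X, y)       # should recover covariates 0 and 3
```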

  7. A guide to Bayesian model selection for ecologists

    Science.gov (United States)

    Hooten, Mevin B.; Hobbs, N.T.

    2015-01-01

    The steady upward trend in the use of model selection and Bayesian methods in ecological research has made it clear that both approaches to inference are important for modern analysis of models and data. However, in teaching Bayesian methods and in working with our research colleagues, we have noticed a general dissatisfaction with the available literature on Bayesian model selection and multimodel inference. Students and researchers new to Bayesian methods quickly find that the published advice on model selection is often preferential in its treatment of options for analysis, frequently advocating one particular method above others. The recent appearance of many articles and textbooks on Bayesian modeling has provided welcome background on relevant approaches to model selection in the Bayesian framework, but most of these are either very narrowly focused in scope or inaccessible to ecologists. Moreover, the methodological details of Bayesian model selection approaches are spread thinly throughout the literature, appearing in journals from many different fields. Our aim with this guide is to condense the large body of literature on Bayesian approaches to model selection and multimodel inference and present it specifically for quantitative ecologists as neutrally as possible. We also bring to light a few important and fundamental concepts relating directly to model selection that seem to have gone unnoticed in the ecological literature. Throughout, we provide only a minimal discussion of philosophy, preferring instead to examine the breadth of approaches as well as their practical advantages and disadvantages. This guide serves as a reference for ecologists using Bayesian methods, so that they can better understand their options and can make an informed choice that is best aligned with their goals for inference.

  8. Parameter estimation and model selection in computational biology.

    Directory of Open Access Journals (Sweden)

    Gabriele Lillacci

    2010-03-01

    Full Text Available A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection.
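
    The core idea, treating an unknown kinetic parameter as an extra state variable and letting an extended Kalman filter estimate it from noisy time-course data, can be sketched on a toy scalar system. The dynamics, noise levels, and true parameter value below are invented for illustration and are not the paper's models:

```python
import numpy as np

# Toy system: x[k+1] = a*x[k] + u + process noise, measured as y = x + noise.
# The goal is to recover the unknown rate constant a from the noisy y's.
rng = np.random.default_rng(2)
a_true, u, q, r = 0.8, 0.5, 0.01, 0.04
x, ys = 1.0, []
for _ in range(300):
    x = a_true * x + u + rng.normal(0.0, np.sqrt(q))
    ys.append(x + rng.normal(0.0, np.sqrt(r)))

# Augment the state with the parameter: z = [x, a].
z = np.array([1.0, 0.5])            # rough initial guesses for x and a
P = np.diag([1.0, 1.0])             # initial uncertainty
Q = np.diag([q, 1e-6])              # tiny process noise on a lets it adapt
H = np.array([[1.0, 0.0]])          # we observe x only
for y in ys:
    # Predict: f(z) = [a*x + u, a]; F is the Jacobian wrt [x, a].
    F = np.array([[z[1], z[0]], [0.0, 1.0]])
    z = np.array([z[1] * z[0] + u, z[1]])
    P = F @ P @ F.T + Q
    # Update with the measurement.
    S = float(H @ P @ H.T) + r
    K = (P @ H.T) / S               # Kalman gain, shape (2, 1)
    z = z + (K * (y - z[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

a_hat = float(z[1])                 # should sit near a_true
```

    The identifiability check and refinement steps the abstract describes would then assess and polish this first guess.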

  9. Bayesian model selection applied to artificial neural networks used for water resources modeling

    Science.gov (United States)

    Kingston, Greer B.; Maier, Holger R.; Lambert, Martin F.

    2008-04-01

    Artificial neural networks (ANNs) have proven to be extremely valuable tools in the field of water resources engineering. However, one of the most difficult tasks in developing an ANN is determining the optimum level of complexity required to model a given problem, as there is no formal systematic model selection method. This paper presents a Bayesian model selection (BMS) method for ANNs that provides an objective approach for comparing models of varying complexity in order to select the most appropriate ANN structure. The approach uses Markov Chain Monte Carlo posterior simulations to estimate the evidence in favor of competing models and, in this study, three known methods for doing this are compared in terms of their suitability for being incorporated into the proposed BMS framework for ANNs. However, it is acknowledged that it can be particularly difficult to accurately estimate the evidence of ANN models. Therefore, the proposed BMS approach for ANNs incorporates a further check of the evidence results by inspecting the marginal posterior distributions of the hidden-to-output layer weights, which unambiguously indicate any redundancies in the hidden layer nodes. The fact that this check is available is one of the greatest advantages of the proposed approach over conventional model selection methods, which do not provide such a test and instead rely on the modeler's subjective choice of selection criterion. The advantages of a total Bayesian approach to ANN development, including training and model selection, are demonstrated on two synthetic and one real world water resources case study.

  10. Identification of Distillation Process Dynamics Comparing Process Knowledge and Black Box Based Approaches

    DEFF Research Database (Denmark)

    Rasmussen, Knud H; Nielsen, C. S.; Jørgensen, Sten Bay

    1990-01-01

    A distillation plant equipped with a heat pump separates a mixture of isopropanol and methanol. The mixture contains some water as impurity. The model development aims at dual composition control design, where top and bottom compositions should follow the setpoints, and disturbances should...... be rejected. Disturbances may occur in feed flow rate and feed composition. Identification is performed using multivariable linear discrete time model structure development tools: a process knowledge based and a black box approach. In the process knowledge based approach, the model structure is developed from...... qualitative process knowledge which presently may require modification to guarantee identifiability. The black box approach is based on pseudocanonical MFD model representation, where the model structure is determined by a set of structure indices. The identifications are performed on experimental data...

  11. Parametric or nonparametric? A parametricness index for model selection

    CERN Document Server

    Liu, Wei; 10.1214/11-AOS899

    2012-01-01

    In the model selection literature, two classes of criteria perform well asymptotically in different situations: the Bayesian information criterion (BIC) (as a representative) is consistent in selection when the true model is finite dimensional (parametric scenario), while Akaike's information criterion (AIC) is asymptotically efficient when the true model is infinite dimensional (nonparametric scenario). However, little work addresses whether, and how, one can detect which situation a specific model selection problem is in. In this work, we differentiate the two scenarios theoretically under some conditions. We develop a measure, the parametricness index (PI), to assess whether a model selected by a potentially consistent procedure can be practically treated as the true model, which also hints at whether AIC or BIC is better suited to the data for the goal of estimating the regression function. A consequence is that by switching between AIC and BIC based on the PI, the resulting regression estimator is si...
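
    The parametric/nonparametric contrast is easiest to see by scoring the same candidate models with both criteria. A toy polynomial-regression comparison on synthetic, truly linear data (the parametric scenario where BIC's consistency pays off); all data and degrees are invented for illustration:

```python
import numpy as np

def aic(rss, n, k):
    """Akaike information criterion for a Gaussian least-squares model."""
    return n * np.log(rss / n) + 2 * k

def bic(rss, n, k):
    """Bayesian information criterion: heavier log(n) penalty per parameter."""
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(3)
n = 500
x = rng.uniform(-1, 1, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.3, n)   # true model is linear

def poly_rss(deg):
    """Residual sum of squares of a degree-`deg` polynomial fit."""
    X = np.vander(x, deg + 1)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

degrees = range(1, 6)
best_aic = min(degrees, key=lambda d: aic(poly_rss(d), n, d + 1))
best_bic = min(degrees, key=lambda d: bic(poly_rss(d), n, d + 1))
```

    In this parametric setting BIC recovers the true degree; AIC may retain a spurious higher-order term, which is exactly the behavior the PI is designed to diagnose.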

  12. Estimation of a multivariate mean under model selection uncertainty

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2014-05-01

    Full Text Available Model selection uncertainty would occur if we selected a model based on one data set and subsequently applied it for statistical inferences, because the "correct" model would not be selected with certainty. When the selection and inference are based on the same dataset, some additional problems arise due to the correlation of the two stages (selection and inference). In this paper model selection uncertainty is considered and model averaging is proposed. The proposal is related to the theory of James and Stein on estimating more than three parameters from independent normal observations. We suggest that a model averaging scheme taking into account the selection procedure could be more appropriate than model selection alone. Some properties of this model averaging estimator are investigated; in particular, we show using Stein's results that it is a minimax estimator and can outperform Stein-type estimators.
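
    The James-Stein connection invoked above is easy to demonstrate numerically: for three or more normal means, shrinking the raw observations dominates using them directly in total squared error. A minimal positive-part version on simulated data (the dimensions and trial counts are arbitrary choices, not the paper's):

```python
import numpy as np

def james_stein(y, sigma2=1.0):
    """Positive-part James-Stein shrinkage of p >= 3 independent normal
    means toward zero; dominates the raw observations (the MLE)."""
    y = np.asarray(y, dtype=float)
    p = y.size
    shrink = max(0.0, 1.0 - (p - 2) * sigma2 / float(y @ y))
    return shrink * y

rng = np.random.default_rng(4)
theta = np.zeros(10)                 # true means
trials = 2000
mse_mle = mse_js = 0.0
for _ in range(trials):
    y = theta + rng.normal(size=10)  # noisy observations of the means
    mse_mle += float(np.sum((y - theta) ** 2))
    mse_js += float(np.sum((james_stein(y) - theta) ** 2))
mse_mle /= trials
mse_js /= trials                     # markedly smaller than mse_mle
```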

  13. A new class of indicators for the model selection of scaling laws in nuclear fusion

    Energy Technology Data Exchange (ETDEWEB)

    Lupelli, I., E-mail: Ivan.Lupelli@ccfe.ac.uk [Associazione EURATOM-ENEA – University of Rome “Tor Vergata”, Roma (Italy); EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon OX14 3DB (United Kingdom); Murari, A. [Consorzio RFX-Associazione EURATOM ENEA per la Fusione, I-35127 Padova (Italy); Gaudio, P.; Gelfusa, M. [Associazione EURATOM-ENEA – University of Rome “Tor Vergata”, Roma (Italy); Mazon, D. [Association EURATOM-CEA, CEA Cadarache DSM/IRFM, 13108 Saint-Paul-lez-Durance (France); Vega, J. [Asociación EURATOM-CIEMAT para Fusión, CIEMAT, Madrid (Spain)

    2013-10-15

    Highlights: ► A new model selection indicator, based on the Model Falsification Criterion, has been applied to the problem of choosing the scaling laws for power threshold scaling to access the H-mode in tokamaks. ► The indicators have at least the same selection power as the classic indicators for databases of low dimensionality. ► For the high dimensionality dataset the indicator outperforms the traditional criteria. ► The indicator preserves its advantages up to a noise of 20% of the signal level. -- Abstract: The development of computationally efficient model selection strategies represents an important problem facing the analysis of nuclear fusion experimental data, in particular in the field of scaling laws for the extrapolation to future machines, and image processing. In this paper, a new model selection indicator, named the Model Falsification Criterion (MFC), will be presented and applied to the problem of choosing the most generalizable scaling laws for the power threshold (P_Thresh) to access the H-mode of confinement in tokamaks. The proposed indicator is based on the properties of the model residuals, their entropy and an implementation of the data falsification principle. The model selection ability of the proposed criterion will be demonstrated in comparison with the most widely used frequentist (Akaike information criterion) and Bayesian (Bayesian information criterion) indicators.

  14. Estimating the predictive quality of dose-response after model selection.

    Science.gov (United States)

    Hu, Chuanpu; Dong, Yingwen

    2007-07-20

    Prediction of dose-response is important in dose selection in drug development. As the true dose-response shape is generally unknown, model selection is frequently used, and predictions are based on the final selected model. Correctly assessing the quality of the predictions requires accounting for the uncertainties caused by the model selection process, which has been difficult. Recently, a new approach called data perturbation has emerged. It allows important predictive characteristics to be computed while taking model selection into consideration. We study, through simulation, the performance of data perturbation in estimating standard errors of parameter estimates and prediction errors. Data perturbation was found to give excellent prediction error estimates, although at times large Monte Carlo sizes were needed to obtain good standard error estimates. Overall, it is a useful tool to characterize uncertainties in dose-response predictions, with the potential of allowing more accurate dose selection in drug development. We also look at the influence of model selection on estimation bias. This leads to insights into candidate model choices that enable good dose-response prediction.

  15. Ecological niche modeling in Maxent: the importance of model complexity and the performance of model selection criteria.

    Science.gov (United States)

    Warren, Dan L; Seifert, Stephanie N

    2011-03-01

    Maxent, one of the most commonly used methods for inferring species distributions and environmental tolerances from occurrence data, allows users to fit models of arbitrary complexity. Model complexity is typically constrained via a process known as L1 regularization, but at present little guidance is available for setting the appropriate level of regularization, and the effects of inappropriately complex or simple models are largely unknown. In this study, we demonstrate the use of information criterion approaches to setting regularization in Maxent, and we compare models selected using information criteria to models selected using other criteria that are common in the literature. We evaluate model performance using occurrence data generated from a known "true" initial Maxent model, using several different metrics for model quality and transferability. We demonstrate that models that are inappropriately complex or inappropriately simple show reduced ability to infer habitat quality, reduced ability to infer the relative importance of variables in constraining species' distributions, and reduced transferability to other time periods. We also demonstrate that information criteria may offer significant advantages over the methods commonly used in the literature.

  16. Measuring balance and model selection in propensity score methods

    NARCIS (Netherlands)

    Belitser, S.; Martens, Edwin P.; Pestman, Wiebe R.; Groenwold, Rolf H.H.; De Boer, Anthonius; Klungel, Olaf H.

    2011-01-01

    Background: Propensity score (PS) methods focus on balancing confounders between groups to estimate an unbiased treatment or exposure effect. However, there is lack of attention in actually measuring, reporting and using the information on balance, for instance for model selection. Objectives: To de

  17. Assessing Model Selection Uncertainty Using a Bootstrap Approach: An Update

    NARCIS (Netherlands)

    Lubke, Gitta H.; Campbell, Ian; McArtor, Dan; Miller, Patrick; Luningham, Justin; van den Berg, Stéphanie Martine

    2017-01-01

    Model comparisons in the behavioral sciences often aim at selecting the model that best describes the structure in the population. Model selection is usually based on fit indexes such as Akaike’s information criterion (AIC) or Bayesian information criterion (BIC), and inference is done based on the

  18. Assessing Model Selection Uncertainty Using a Bootstrap Approach: An Update

    NARCIS (Netherlands)

    Lubke, Gitta H.; Campbell, Ian; McArtor, Dan; Miller, Patrick; Luningham, Justin; Berg, van den Stephanie M.

    2016-01-01

    Model comparisons in the behavioral sciences often aim at selecting the model that best describes the structure in the population. Model selection is usually based on fit indexes such as Akaike’s information criterion (AIC) or Bayesian information criterion (BIC), and inference is done based on the

  19. Assessing Model Selection Uncertainty Using a Bootstrap Approach: An Update

    NARCIS (Netherlands)

    Lubke, Gitta H.; Campbell, Ian; McArtor, Dan; Miller, Patrick; Luningham, Justin; Berg, van den Stephanie M.

    2017-01-01

    Model comparisons in the behavioral sciences often aim at selecting the model that best describes the structure in the population. Model selection is usually based on fit indexes such as Akaike’s information criterion (AIC) or Bayesian information criterion (BIC), and inference is done based on the

  20. The Properties of Model Selection when Retaining Theory Variables

    DEFF Research Database (Denmark)

    Hendry, David F.; Johansen, Søren

    Economic theories are often fitted directly to data to avoid possible model selection biases. We show that embedding a theory model that specifies the correct set of m relevant exogenous variables, x_t, within the larger set of m+k candidate variables, (x_t, w_t), then selection over the second...

  1. Periodic Integration: Further Results on Model Selection and Forecasting

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); R. Paap (Richard)

    1996-01-01

    textabstractThis paper considers model selection and forecasting issues in two closely related models for nonstationary periodic autoregressive time series [PAR]. Periodically integrated seasonal time series [PIAR] need a periodic differencing filter to remove the stochastic trend. On the other

  2. Model selection for SVM using mutative scale chaos optimization algorithm%变尺度混沌优化支持向量机模型选择

    Institute of Scientific and Technical Information of China (English)

    刘清坤; 阙沛文; 费春国; 宋寿鹏

    2006-01-01

    This paper proposes a new search strategy using a mutative scale chaos optimization algorithm (MSCO) for model selection of support vector machines (SVM). It searches the parameter space of the SVM with very high efficiency and finds the optimum parameter setting for a practical classification problem at very low time cost. To demonstrate the performance of the proposed method, it is applied to model selection of an SVM in ultrasonic flaw classification and compared with grid search for model selection. Experimental results show that MSCO is a very powerful tool for model selection of SVM, and outperforms grid search in search speed and precision in ultrasonic flaw classification.

  3. [Comparative study on decoction and dissolution of crude and processed corni fructus].

    Science.gov (United States)

    Zhou, Han-Yu; Yang, Pei-Pei; Cong, Xiao-Dong; Zhang, Cheng-Rong; Cai, Bao-Chang

    2013-11-01

    To compare and study the decoction and dissolution of active constituents in crude and processed Corni Fructus. HPLC, the traditional Chinese medicine (TCM) decoction method and the dissolution methods were adopted to compare the decoction yield and dissolution rate of loganin and morroniside, active constituents in crude and processed Corni Fructus. The results showed that the content of the active constituents loganin and morroniside in crude and processed Corni Fructus did not change significantly; compared with crude Corni Fructus, the processed Corni Fructus decoction contained much higher loganin, with no obvious change in morroniside; compared with crude Corni Fructus, processed Corni Fructus extracts showed no significant difference in loganin dissolution, but a notable increase in morroniside dissolution in intestinal fluid; in gastric fluid, processed Corni Fructus showed significant increases in loganin and morroniside dissolutions. However, when the decoction dose in clinical administration is comprehensively considered and the results are calculated with the formula decoction yield × dissolution rate = decoction-dissolution product, the decoction-dissolution products of both active constituents loganin and morroniside increased, with significant differences. This suggests that processed Corni Fructus is superior to crude Corni Fructus in clinical application. In this article, we propose to compare the changes in decoction and dissolution of active constituents in crude and processed Corni Fructus, study the decoction-dissolution product, and then apply it in the quality evaluation of crude and processed Corni Fructus.

  4. How do healthcare consumers process and evaluate comparative healthcare information? A qualitative study using cognitive interviews

    NARCIS (Netherlands)

    Damman, O.C.; Hendriks, M.; Rademakers, J.; Delnoij, D.M.J.; Groenewegen, P.P.

    2009-01-01

    Background: To date, online public healthcare reports have not been effectively used by consumers. Therefore, we qualitatively examined how healthcare consumers process and evaluate comparative healthcare information on the Internet. Methods: Using semi-structured cognitive interviews, interviewees

  5. How do healthcare consumers process and evaluate comparative healthcare information? A qualitative study using cognitive interviews.

    NARCIS (Netherlands)

    Damman, O.C.; Hendriks, M.; Rademakers, J.; Delnoij, D.; Groenewegen, P.

    2009-01-01

    Background: To date, online public healthcare reports have not been effectively used by consumers. Therefore, we qualitatively examined how healthcare consumers process and evaluate comparative healthcare information on the Internet. Methods: Using semi-structured cognitive interviews, interviewees

  6. Comparing the Word Processing and Reading Comprehension of Skilled and Less Skilled Readers

    Science.gov (United States)

    Guldenoglu, I. Birkan; Kargin, Tevhide; Miller, Paul

    2012-01-01

    The purpose of this study was to compare the word processing and reading comprehension of skilled and less skilled readers. Forty-nine 2nd graders (26 skilled and 23 less skilled readers) participated in this study. They were tested with two experiments assessing their processing of isolated real word and pseudoword pairs as well as their reading…

  7. Information-theoretic model selection applied to supernovae data

    CERN Document Server

    Biesiada, M

    2007-01-01

    There are several different theoretical ideas invoked to explain dark energy, with relatively little guidance as to which one of them might be right. Therefore the emphasis of ongoing and forthcoming research in this field shifts from estimating specific parameters of a cosmological model to model selection. In this paper we apply an information-theoretic model selection approach based on the Akaike criterion as an estimator of Kullback-Leibler entropy. In particular, we present the proper way of ranking the competing models based on Akaike weights (in Bayesian language, posterior probabilities of the models). Out of many particular models of dark energy we focus on four: quintessence, quintessence with a time-varying equation of state, the brane-world model and the generalized Chaplygin gas model, and test them on Riess' Gold sample. As a result we obtain that the best model, in terms of the Akaike criterion, is the quintessence model. The odds suggest that although there exist differences in the support given to specific scenario...
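
    Ranking via Akaike weights works as follows: rescale each model's AIC relative to the best one, exponentiate, and normalize. The AIC values below are invented placeholders, not the fits from the Gold sample:

```python
import numpy as np

def akaike_weights(aic_values):
    """Akaike weights: relative likelihood of each model, normalized to sum
    to 1; interpretable as posterior model probabilities under equal priors."""
    a = np.asarray(aic_values, dtype=float)
    delta = a - a.min()            # AIC differences relative to the best model
    rel = np.exp(-0.5 * delta)     # relative likelihoods
    return rel / rel.sum()

# hypothetical AIC values for four competing dark-energy models
w = akaike_weights([210.3, 212.1, 215.0, 211.0])
best = int(np.argmax(w))           # index of the best-supported model
```

    The ratios of weights give the "odds" the abstract refers to, e.g. w[0] / w[1] is the evidence ratio of the first model over the second.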

  8. On Optimal Input Design and Model Selection for Communication Channels

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yanyan [ORNL; Djouadi, Seddik M [ORNL; Olama, Mohammed M [ORNL

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst-case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gelfand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst-case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, for example in Orthogonal Frequency Division Multiplexing (OFDM) systems.

  9. General model selection estimation of a periodic regression with a Gaussian noise

    CERN Document Server

    Konev, Victor; 10.1007/s10463-008-0193-1

    2010-01-01

    This paper considers the problem of estimating a periodic function in a continuous time regression model with an additive stationary Gaussian noise having unknown correlation function. A general model selection procedure on the basis of arbitrary projective estimates, which does not need knowledge of the noise correlation function, is proposed. A non-asymptotic upper bound for the quadratic risk (oracle inequality) has been derived under mild conditions on the noise. For the Ornstein-Uhlenbeck noise the risk upper bound is shown to be uniform in the nuisance parameter. In the case of Gaussian white noise the constructed procedure has some advantages as compared with the procedure based on the least squares estimates (LSE). The asymptotic minimaxity of the estimates has been proved. The proposed model selection scheme is also extended to the estimation problem based on discrete data, applicable to situations where high-frequency sampling cannot be provided.

  10. Bayesian model selection for incomplete data using the posterior predictive distribution.

    Science.gov (United States)

    Daniels, Michael J; Chatterjee, Arkendu S; Wang, Chenguang

    2012-12-01

    We explore the use of a posterior predictive loss criterion for model selection for incomplete longitudinal data. We begin by identifying a property that most model selection criteria for incomplete data should consider. We then show that a straightforward extension of the Gelfand and Ghosh (1998, Biometrika, 85, 1-11) criterion to incomplete data has two problems. First, it introduces an extra term (in addition to the goodness-of-fit and penalty terms) that compromises the criterion. Second, it does not satisfy the aforementioned property. We propose an alternative, explore its properties via simulations and on a real dataset, and compare it to the deviance information criterion (DIC). In general, the DIC outperforms the posterior predictive criterion, but the latter appears to work well overall and, unlike the DIC, is very easy to compute in certain classes of models for missing data.
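
    For a concrete sense of the DIC benchmark used in the comparison, here is the standard complete-data construction from posterior draws, shown on a toy conjugate normal-mean example rather than the paper's incomplete-data setting; all data and sample sizes are invented:

```python
import numpy as np

def dic(loglik_draws, loglik_at_mean):
    """Deviance information criterion from posterior log-likelihood draws.
    Returns (DIC, p_D), where p_D is the effective number of parameters."""
    dbar = -2.0 * float(np.mean(loglik_draws))   # posterior mean deviance
    dhat = -2.0 * float(loglik_at_mean)          # deviance at posterior mean
    p_d = dbar - dhat
    return dbar + p_d, p_d

# Toy model: y ~ N(mu, 1) with a flat prior on mu, so the posterior of mu
# is N(ybar, 1/n) and can be sampled directly (no MCMC needed here).
rng = np.random.default_rng(5)
y = rng.normal(2.0, 1.0, 50)
mus = rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), 4000)

def loglik(mu):
    """Gaussian log-likelihood of the data for a given mean, sigma = 1."""
    return float(-0.5 * np.sum((y - mu) ** 2) - 0.5 * len(y) * np.log(2 * np.pi))

draws = np.array([loglik(m) for m in mus])
dic_value, p_d = dic(draws, loglik(float(mus.mean())))
# p_d should be close to 1: the model has a single free parameter.
```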

  11. Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects

    Directory of Open Access Journals (Sweden)

    Guangjie Li

    2015-07-01

    We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of common parameters when some true exogenous regressors are excluded. We propose a data-dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and the Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian model averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.
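
    The BIC comparison described in this abstract can be sketched numerically. The following is a minimal illustration, not the paper's panel-data setup: it scores an under-specified linear model against one that includes the true regressor on simulated data, and selects the specification with the lower BIC.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: y depends on x1, so a model that excludes x1 is under-specified.
n = 200
x1 = rng.normal(size=n)
y = 1.5 * x1 + rng.normal(scale=0.5, size=n)

def bic(y, X):
    """BIC of a Gaussian linear model fit by least squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = np.mean((y - X @ beta) ** 2)   # ML estimate of the noise variance
    k = X.shape[1] + 1                      # coefficients plus the noise variance
    return len(y) * np.log(sigma2) + k * np.log(len(y))

models = {
    "intercept only": np.ones((n, 1)),
    "intercept + x1": np.column_stack([np.ones(n), x1]),
}
scores = {name: bic(y, X) for name, X in models.items()}
best = min(scores, key=scores.get)
print(best)
```

    Excluding the relevant regressor inflates the residual variance term far more than the extra parameter's log(n) penalty, so the correctly specified model wins.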

  12. Aerosol model selection and uncertainty modelling by adaptive MCMC technique

    Directory of Open Access Journals (Sweden)

    M. Laine

    2008-12-01

    We present a new technique for the model selection problem in atmospheric remote sensing. The technique is based on Monte Carlo sampling and allows model selection, calculation of model posterior probabilities, and model averaging in a Bayesian way.

    The algorithm developed here is called the Adaptive Automatic Reversible Jump Markov chain Monte Carlo method (AARJ). It uses the Markov chain Monte Carlo (MCMC) technique and its extension, Reversible Jump MCMC. Both techniques have been used extensively for statistical parameter estimation in a wide range of applications since the late 1990s. The novel feature of our algorithm is that it is fully automatic and easy to use.

    We show how the AARJ algorithm can be implemented and used for model selection and averaging, and to directly incorporate the model uncertainty. We demonstrate the technique by applying it to the statistical inversion problem of gas profile retrieval of GOMOS instrument on board the ENVISAT satellite. Four simple models are used simultaneously to describe the dependence of the aerosol cross-sections on wavelength. During the AARJ estimation all the models are used and we obtain a probability distribution characterizing how probable each model is. By using model averaging, the uncertainty related to selecting the aerosol model can be taken into account in assessing the uncertainty of the estimates.
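
    The core idea, posterior model probabilities obtained from each model's marginal likelihood, can be illustrated without the full Reversible Jump machinery. Below is a minimal, hypothetical sketch (not the AARJ algorithm): marginal likelihoods of two competing models are computed by brute-force integration over a parameter grid and converted into model probabilities; all data and settings are made up.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy data generated by the "linear" model; the noise level is assumed known.
x = np.linspace(0, 1, 30)
y = 0.8 * x + 0.05 * rng.normal(size=x.size)
sigma = 0.05

def marginal_likelihood(loglik, grid):
    """Average the likelihood over a uniform prior on `grid` (brute-force integration)."""
    ll = np.array([loglik(th) for th in grid])
    return np.mean(np.exp(ll - ll.max())) * np.exp(ll.max())

# Log-likelihoods up to a constant that is common to both models and cancels.
grid = np.linspace(-2, 2, 2001)
m_const = marginal_likelihood(lambda c: -0.5 * np.sum((y - c) ** 2) / sigma**2, grid)
m_lin = marginal_likelihood(lambda a: -0.5 * np.sum((y - a * x) ** 2) / sigma**2, grid)

# Posterior model probabilities under equal prior weights on the two models.
p_lin = m_lin / (m_const + m_lin)
print(round(p_lin, 3))
```

    Model-averaged predictions would then weight each model's prediction by its posterior probability; RJ-MCMC achieves the same end by sampling over models instead of enumerating them.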

  13. Model selection in systems biology depends on experimental design.

    Science.gov (United States)

    Silk, Daniel; Kirk, Paul D W; Barnes, Chris P; Toni, Tina; Stumpf, Michael P H

    2014-06-01

    Experimental design attempts to maximise the information available for modelling tasks. An optimal experiment allows the inferred models or parameters to be chosen with the highest expected degree of confidence. If the true system is faithfully reproduced by one of the models, the merit of this approach is clear - we simply wish to identify it and the true parameters with the most certainty. However, in the more realistic situation where all models are incorrect or incomplete, the interpretation of model selection outcomes and the role of experimental design needs to be examined more carefully. Using a novel experimental design and model selection framework for stochastic state-space models, we perform high-throughput in-silico analyses on families of gene regulatory cascade models, to show that the selected model can depend on the experiment performed. We observe that experimental design thus makes confidence a criterion for model choice, but that this does not necessarily correlate with a model's predictive power or correctness. Finally, in the special case of linear ordinary differential equation (ODE) models, we explore how wrong a model has to be before it influences the conclusions of a model selection analysis.

  14. Waste water processing technology for Space Station Freedom - Comparative test data analysis

    Science.gov (United States)

    Miernik, Janie H.; Shah, Burt H.; Mcgriff, Cindy F.

    1991-01-01

    Comparative tests were conducted to choose the optimum technology for waste water processing on Space Station Freedom (SSF). A thermoelectric integrated membrane evaporation (TIMES) subsystem and a vapor compression distillation (VCD) subsystem were built and tested to compare urine processing capability. Water quality, performance, and specific energy were compared for conceptual designs intended to function as part of the water recovery and management system of SSF. The VCD is considered the most mature and efficient technology and was selected to replace the TIMES as the baseline urine processor for SSF.

  16. Benchmarking healthcare logistics processes: a comparative case study of Danish and US hospitals

    DEFF Research Database (Denmark)

    Feibert, Diana Cordes; Andersen, Bjørn; Jacobsen, Peter

    2017-01-01

    Logistics processes in hospitals are vital in the provision of patient care. Improving healthcare logistics processes provides an opportunity for reduced healthcare costs and better support of clinical processes. Hospitals are faced with increasing healthcare costs around the world, and improvement initiatives prevalent in manufacturing industries, such as lean, business process reengineering and benchmarking, have seen an increase in use in healthcare. This study investigates how logistics processes in a hospital can be benchmarked to improve process performance. A comparative case study of the bed … of supply and employee engagement. Based on these decision criteria, performance indicators were developed to enable benchmarking of logistics processes in healthcare. The study contributes to the limited literature on healthcare logistics benchmarking. Furthermore, managers in healthcare logistics …

  17. AUTOMATIC MODEL SELECTION FOR 3D RECONSTRUCTION OF BUILDINGS FROM SATELLITE IMAGERY

    Directory of Open Access Journals (Sweden)

    T. Partovi

    2013-09-01

    Through improvements in satellite sensors and matching technology, the derivation of 3D models from spaceborne stereo data has attracted considerable interest for applications such as mobile navigation, urban planning, telecommunication, and tourism. The automatic reconstruction of 3D building models from spaceborne point cloud data is still an active research topic. The challenging problem in this field is the relatively low quality of the Digital Surface Model (DSM) generated by stereo matching of satellite data compared to airborne LiDAR data. To establish an efficient method that achieves high-quality models and complete automation from such a DSM, this paper proposes a new method based on a model-driven strategy. To improve the results, refined orthorectified panchromatic images are introduced into the process as additional data. The idea of the method is based on ridge line extraction and the analysis of height values along and perpendicular to the ridge line direction. After pre-processing of the orthorectified data, feature descriptors are extracted from the DSM to improve the automatic ridge line detection. Applying RANSAC, a line is fitted to each group of ridge points. These ridge lines are then refined by matching them or closing gaps. To select the type of roof model, the heights of points along the extension of the ridge line and the height differences perpendicular to the ridge line are analysed. After roof model selection, building edge information is extracted by Canny edge detection and the parameters of the roof parts are derived. The best model is then fitted to the extracted roofs based on the detected model type. Each roof is modelled independently, and the final 3D buildings are reconstructed by merging the roof models with the corresponding walls.
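
    The ridge-line fitting step relies on the standard RANSAC recipe: repeatedly fit a candidate line to a minimal two-point sample and keep the candidate with the most inliers. A generic sketch on illustrative synthetic data (not the paper's DSM pipeline):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "ridge points": mostly collinear (y = 2x + 1), with a few outliers.
t = np.linspace(0, 10, 40)
pts = np.column_stack([t, 2.0 * t + 1.0 + 0.05 * rng.normal(size=t.size)])
pts[::8] += rng.normal(scale=5.0, size=(5, 2))   # corrupt every 8th point

def ransac_line(pts, n_trials=200, tol=0.2):
    """Fit y = a*x + b robustly: sample 2-point candidates, keep the largest inlier set."""
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(n_trials):
        i, j = rng.choice(len(pts), size=2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if x1 == x2:
            continue                             # degenerate vertical sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        resid = np.abs(pts[:, 1] - (a * pts[:, 0] + b))
        inliers = resid < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit by least squares on the winning inlier set only.
    a, b = np.polyfit(pts[best_inliers, 0], pts[best_inliers, 1], 1)
    return a, b

a, b = ransac_line(pts)
print(round(a, 2), round(b, 2))
```

    The final least-squares refit on the consensus set is what makes the recovered slope and intercept insensitive to the corrupted points.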

  18. Fusion processing of itraconazole solid dispersions by kinetisol dispersing: a comparative study to hot melt extrusion.

    Science.gov (United States)

    DiNunzio, James C; Brough, Chris; Miller, Dave A; Williams, Robert O; McGinity, James W

    2010-03-01

    KinetiSol Dispersing (KSD) is a novel high-energy manufacturing process investigated here for the production of pharmaceutical solid dispersions. Solid dispersions of itraconazole (ITZ) and hypromellose were produced by KSD and compared to identical formulations produced by hot melt extrusion (HME). Materials were characterized for solid-state properties by modulated differential scanning calorimetry and X-ray diffraction. Dissolution behavior was studied under supersaturated conditions. Oral bioavailability was determined using a Sprague-Dawley rat model. Results showed that KSD was able to produce amorphous solid dispersions in under 15 s, while production by HME required over 300 s. Dispersions produced by KSD exhibited single-phase solid-state behavior, indicated by a single glass transition temperature (T(g)), whereas compositions produced by HME exhibited two T(g)s. Increased dissolution rates were also observed for compositions manufactured by KSD compared to HME-processed material. Near-complete supersaturation was observed for solid dispersions produced by either manufacturing process. Oral bioavailability from both processes showed enhanced AUC compared to crystalline ITZ. Based on these results, KSD is a viable manufacturing process for the production of pharmaceutical solid dispersions, providing benefits over conventional techniques, including enhanced mixing for improved homogeneity and reduced processing times.

  19. MODEL SELECTION FOR LOG-LINEAR MODELS OF CONTINGENCY TABLES

    Institute of Scientific and Technical Information of China (English)

    ZHAO Lincheng; ZHANG Hong

    2003-01-01

    In this paper, we propose an information-theoretic-criterion-based model selection procedure for log-linear models of contingency tables under multinomial sampling, and establish the strong consistency of the method under some mild conditions. An exponential bound on the miss-detection probability is also obtained. The selection procedure is modified so that it can be used in practice. Simulation shows that the modified method is valid. To avoid selecting the penalty coefficient in the information criteria, an alternative selection procedure is given.

  20. High-dimensional model estimation and model selection

    CERN Document Server

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
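
    As a toy illustration of sparse estimation in the p >> n regime mentioned in this abstract, here is a minimal LASSO solved by proximal gradient descent (ISTA) in plain NumPy; the data, penalty value, and iteration count are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 200                        # p >> n regime
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [4.0, -3.0, 2.0]      # only three truly active variables
y = X @ beta_true + 0.1 * rng.normal(size=n)

def lasso_ista(X, y, lam, n_iter=2000):
    """Minimise (1/2n)||y - Xb||^2 + lam*||b||_1 by proximal gradient descent (ISTA)."""
    n = len(y)
    L = np.linalg.norm(X, 2) ** 2 / n          # Lipschitz constant of the smooth part
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        z = b - grad / L
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-thresholding
    return b

b_hat = lasso_ista(X, y, lam=0.5)
support = np.flatnonzero(np.abs(b_hat) > 1e-3)
print(support)
```

    With a suitable penalty, the estimator zeroes out the irrelevant coordinates and recovers the sparse support even though there are four times as many variables as samples.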

  1. A simple application of FIC to model selection

    CERN Document Server

    Wiggins, Paul A

    2015-01-01

    We have recently proposed a new information-based approach to model selection, the Frequentist Information Criterion (FIC), that reconciles information-based and frequentist inference. The purpose of this current paper is to provide a simple example of the application of this criterion and a demonstration of the natural emergence of model complexities with both AIC-like ($N^0$) and BIC-like ($\\log N$) scaling with observation number $N$. The application developed is deliberately simplified to make the analysis analytically tractable.

  2. The Properties of Model Selection when Retaining Theory Variables

    DEFF Research Database (Denmark)

    Hendry, David F.; Johansen, Søren

    Economic theories are often fitted directly to data to avoid possible model selection biases. We show that when a theory model specifying the correct set of m relevant exogenous variables, x_t, is embedded within the larger set of m+k candidate variables, (x_t, w_t), selection over the second set by statistical significance can be undertaken without affecting the estimator distribution of the theory parameters. This strategy returns the theory-parameter estimates when the theory is correct, yet protects against the theory being under-specified because some w_t are relevant.

  3. The Comparative Effect of Top-down Processing and Bottom-up Processing through TBLT on Extrovert and Introvert EFL Learners

    Directory of Open Access Journals (Sweden)

    Pezhman Nourzad Haradasht

    2013-09-01

    This research examines the effect of two models of reading comprehension, namely top-down and bottom-up processing, on the reading comprehension of extrovert and introvert EFL learners. To do this, 120 learners out of a total of 170 intermediate learners studying at the Iran Mehr English Language School were selected, all first taking a PET (Preliminary English Test) for homogenization prior to the study. They also answered the Eysenck Personality Inventory (EPI), which categorized them into introvert and extrovert subgroups within each reading model. In all, there were four subgroups: 30 introverts and 30 extroverts undergoing the top-down processing treatment, and 30 introverts and 30 extroverts experiencing the bottom-up processing treatment. The aforementioned PET was administered as the post-test of the study after each group was exposed to the treatment for 18 sessions over six weeks. After the instruction finished, the mean scores of all four groups on this post-test were computed and a two-way ANOVA was run to test the four hypotheses raised in this study. The results showed that while learners generally benefitted more from the bottom-up processing setting compared to the top-down processing one, the extrovert group was better off receiving top-down instruction. Furthermore, introverts outperformed extroverts in the bottom-up group, yet no difference was seen between the two personality subgroups in the top-down setting. A predictable pattern of benefitting from the teaching procedures could not be drawn for introverts, as they benefitted more than extroverts in both top-down and bottom-up settings. Keywords: reading comprehension, top-down processing, bottom-up processing, extrovert, introvert

  4. Economics of recombinant antibody production processes at various scales: Industry-standard compared to continuous precipitation.

    Science.gov (United States)

    Hammerschmidt, Nikolaus; Tscheliessnig, Anne; Sommer, Ralf; Helk, Bernhard; Jungbauer, Alois

    2014-06-01

    Standard industry processes for recombinant antibody production employ protein A affinity chromatography in combination with other chromatography steps and ultra-/diafiltration. This study compares a generic antibody production process with a recently developed purification process based on a series of selective precipitation steps. The new process makes two of the usual three chromatographic steps obsolete and can be performed in a continuous fashion. Cost of Goods (CoGs) analyses were done for: (i) a generic chromatography-based antibody standard purification; (ii) the continuous precipitation-based purification process coupled to a continuous perfusion production system; and (iii) a hybrid process, coupling the continuous purification process to an upstream batch process. The results of this economic analysis show that the precipitation-based process offers cost reductions at all stages of the life cycle of a therapeutic antibody, (i.e. clinical phase I, II and III, as well as full commercial production). The savings in clinical phase production are largely attributed to the fact that expensive chromatographic resins are omitted. These economic analyses will help to determine the strategies that are best suited for small-scale production in parallel fashion, which is of importance for antibody production in non-privileged countries and for personalized medicine.

  5. Comparing Simple and Advanced Video Tools as Supports for Complex Collaborative Design Processes

    Science.gov (United States)

    Zahn, Carmen; Pea, Roy; Hesse, Friedrich W.; Rosen, Joe

    2010-01-01

    Working with digital video technologies, particularly advanced video tools with editing capabilities, offers new prospects for meaningful learning through design. However, it is also possible that the additional complexity of such tools does "not" advance learning. We compared in an experiment the design processes and learning outcomes of 24…

  7. Experimental Investigation of Comparative Process Capabilities of Metal and Ceramic Injection Molding for Precision Applications

    DEFF Research Database (Denmark)

    Islam, Aminul; Giannekas, Nikolaos; Marhöfer, David Maximilian;

    2016-01-01

    The purpose of this paper is to make a comparative study on the process capabilities of the two branches of the powder injection molding (PIM) process—metal injection molding (MIM) and ceramic injection molding (CIM), for high-end precision applications. The state-of-the-art literature does not m...

  8. Y-TZP ceramic processing from coprecipitated powders : A comparative study with three commercial dental ceramics

    NARCIS (Netherlands)

    Lazar, Dolores R. R.; Bottino, Marco C.; Ozcan, Mutlu; Valandro, Luiz Felipe; Amaral, Regina; Ussui, Valter; Bressiani, Ana H. A.

    2008-01-01

    Objectives. (1) To synthesize 3 mol% yttria-stabilized zirconia (3Y-TZP) powders via coprecipitation route, (2) to obtain zirconia ceramic specimens, analyze surface characteristics, and mechanical properties, and (3) to compare the processed material with three reinforced dental ceramics. Methods.

  9. Different Gestalt Processing for Different Actions? Comparing Object-Directed Reaching and Looking Time Measures

    Science.gov (United States)

    Vishton, P.M.; Ware, E.A.; Badger, A.N.

    2005-01-01

    Six experiments compared the Gestalt processing that mediates infant reaching and looking behaviors. Experiment 1 demonstrated that the positioning and timing of 8- and 9-month-olds' reaching was influenced by remembered relative motion. Experiment 2 suggested that a visible gap, without this relative motion, was not sufficient to produce these…

  10. A comparative review of recovery processes in rivers, lakes, estuarine and coastal waters

    NARCIS (Netherlands)

    Verdonschot, P.F.M.; Spears, B.M.; Feld, C.K.; Brucet, S.; Keizer-Vlek, H.E.; Borja, A.; Elliot, M.; Kernan, M.; Johnson, R.K.

    2013-01-01

    The European Water Framework Directive aims to improve ecological status within river basins. This requires knowledge of responses of aquatic assemblages to recovery processes that occur after measures have been taken to reduce major stressors. A systematic literature review comparatively assesses

  14. Model selection and model averaging in phylogenetics: advantages of akaike information criterion and bayesian approaches over likelihood ratio tests.

    Science.gov (United States)

    Posada, David; Buckley, Thomas R

    2004-10-01

    Model selection is a topic of special relevance in molecular phylogenetics that affects many, if not all, stages of phylogenetic inference. Here we discuss some fundamental concepts and techniques of model selection in the context of phylogenetics. We start by reviewing different aspects of the selection of substitution models in phylogenetics from a theoretical, philosophical and practical point of view, and summarize this comparison in table format. We argue that the most commonly implemented model selection approach, the hierarchical likelihood ratio test, is not the optimal strategy for model selection in phylogenetics, and that approaches like the Akaike Information Criterion (AIC) and Bayesian methods offer important advantages. In particular, the latter two methods are able to simultaneously compare multiple nested or nonnested models, assess model selection uncertainty, and allow for the estimation of phylogenies and model parameters using all available models (model-averaged inference or multimodel inference). We also describe how the relative importance of the different parameters included in substitution models can be depicted. To illustrate some of these points, we have applied AIC-based model averaging to 37 mitochondrial DNA sequences from the subgenus Ohomopterus (genus Carabus) ground beetles described by Sota and Vogler (2001).
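
    AIC-based model averaging of the kind this abstract describes rests on Akaike weights: each model's AIC difference from the best model is converted into a relative probability. A small sketch with made-up AIC values for three substitution models:

```python
import numpy as np

# Made-up AIC scores for three candidate substitution models (illustrative only).
aic = {"JC69": 5230.4, "HKY85": 5101.9, "GTR": 5098.2}

def akaike_weights(scores):
    """Turn raw AIC scores into relative model probabilities (Akaike weights)."""
    vals = np.array(list(scores.values()))
    delta = vals - vals.min()          # AIC differences from the best model
    w = np.exp(-0.5 * delta)
    return dict(zip(scores, w / w.sum()))

weights = akaike_weights(aic)
print(weights)
```

    Model-averaged inference then weights each model's estimate of a quantity (e.g., a tree or a parameter) by these weights instead of conditioning on a single selected model.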

  15. Comparative analyses of diffusion coefficients for different extraction processes from thyme

    Directory of Open Access Journals (Sweden)

    Petrovic Slobodan S.

    2012-01-01

    This work aimed to analyze the kinetics and mass transfer phenomena of different extraction processes from thyme (Thymus vulgaris L.) leaves. Different extraction processes with ethanol were studied: Soxhlet extraction and ultrasound-assisted batch extraction on the laboratory scale, as well as pilot-plant batch extraction with mixing. The extraction processes with ethanol were compared to supercritical carbon dioxide extraction performed at 10 MPa and 40°C. Experimental data were analyzed with a mathematical model derived from Fick's second law to determine and compare diffusion coefficients in the periods of constant and decreasing extraction rate. In the fast extraction period, the values of the diffusion coefficients were one to three orders of magnitude higher than those determined for the period of slow extraction. The highest diffusion coefficient was reported for the fast extraction period of supercritical fluid extraction. In the case of extraction with ethanol, ultrasound, stirring and increased extraction temperature enhanced the mass transfer rate in the washing phase. On the other hand, ultrasound contributed the most to the increase of the mass transfer rate in the period of slow extraction.
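
    A diffusion-coefficient fit of this kind can be sketched as follows. In the first-term (long-time) approximation of Fick's second law for a plate, ln(1 - E) is linear in time and the slope yields D; all numbers below are illustrative simulated values, not the paper's data.

```python
import numpy as np

# First-term (long-time) approximation of Fick's second law for a plate of
# half-thickness L:  1 - E(t) ~ (8/pi^2) * exp(-pi^2 * D * t / (4 L^2)),
# so ln(1 - E) is a straight line in t whose slope gives D.
L = 0.5e-3                        # half-thickness of the particle, m (assumed)
D_true = 2.0e-11                  # m^2/s, used only to simulate the "data"
t = np.linspace(600, 7200, 12)    # sampling times, s
E = 1 - (8 / np.pi**2) * np.exp(-np.pi**2 * D_true * t / (4 * L**2))

# Linear fit of ln(1 - E) against t, then invert the slope for D.
slope, intercept = np.polyfit(t, np.log(1 - E), 1)
D_est = -slope * 4 * L**2 / np.pi**2
print(D_est)
```

    On real extraction curves, the washing and slow-diffusion periods would each be fitted separately over their own time windows, giving the two diffusion coefficients the abstract compares.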

  16. How many separable sources? Model selection in independent components analysis.

    Science.gov (United States)

    Woods, Roger P; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.
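
    Cross-validation as a model selection device, the fallback this abstract recommends when the AIC's assumptions fail, can be sketched generically. The example below chooses a polynomial degree rather than an ICA rank, and the data are simulated: K-fold CV scores each candidate model order by its out-of-fold prediction error.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data from a quadratic trend; candidate models are polynomial degrees 1..6.
x = rng.uniform(-1, 1, size=120)
y = 1.0 - 2.0 * x + 0.5 * x**2 + 0.1 * rng.normal(size=x.size)

def cv_error(x, y, degree, k=5):
    """Mean squared prediction error of a degree-`degree` polynomial under k-fold CV."""
    idx = np.arange(len(x))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        coef = np.polyfit(x[train], y[train], degree)
        errs.append(np.mean((np.polyval(coef, x[fold]) - y[fold]) ** 2))
    return float(np.mean(errs))

scores = {d: cv_error(x, y, d) for d in range(1, 7)}
best_degree = min(scores, key=scores.get)
print(best_degree)
```

    The under-specified degree-1 model is penalized by its bias on held-out folds, while overly flexible degrees pay for fitting noise; the minimum lands at or near the true order.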

  17. Rank-based model selection for multiple ions quantum tomography

    Science.gov (United States)

    Guţă, Mădălin; Kypraios, Theodore; Dryden, Ian

    2012-10-01

    The statistical analysis of measurement data has become a key component of many quantum engineering experiments. As standard full state tomography becomes unfeasible for large-dimensional quantum systems, one needs to exploit prior information and the 'sparsity' properties of the experimental state in order to reduce the dimensionality of the estimation problem. In this paper we propose model selection as a general principle for finding the simplest, or most parsimonious, explanation of the data, by fitting different models and choosing the estimator with the best trade-off between likelihood fit and model complexity. We apply two well-established model selection methods, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC), to models consisting of states of fixed rank and to datasets such as are currently produced in multiple-ion experiments. We test the performance of AIC and BIC on randomly chosen low-rank states of four ions, and study the dependence of the selected rank on the number of measurement repetitions for one-ion states. We then apply the methods to real data from a four-ion experiment aimed at creating a Smolin state of rank 4. By applying the two methods together with the Pearson χ2 test, we conclude that the data can be suitably described with a model whose rank is between 7 and 9. Additionally, we find that the mean square error of the maximum likelihood estimator for pure states is close to that of the optimal over all possible measurements.

  18. Optimization of a micro-scale, high throughput process development tool and the demonstration of comparable process performance and product quality with biopharmaceutical manufacturing processes.

    Science.gov (United States)

    Evans, Steven T; Stewart, Kevin D; Afdahl, Chris; Patel, Rohan; Newell, Kelcy J

    2017-07-14

    In this paper, we discuss the optimization and implementation of a high throughput process development (HTPD) tool that utilizes commercially available micro-liter sized column technology for the purification of multiple clinically significant monoclonal antibodies. Chromatographic profiles generated using this optimized tool are shown to overlay with comparable profiles from the conventional bench scale and the clinical manufacturing scale. Further, all product quality attributes measured are comparable across scales for the mAb purifications. In addition to supporting chromatography process development efforts (e.g., optimization screening), the comparable product quality results at all scales make this tool an appropriate scale model for purification and product quality comparisons of HTPD bioreactor conditions. The ability to perform up to 8 chromatography purifications in parallel with reduced material requirements per run creates opportunities for gathering more process knowledge in less time.

  19. Comparative Study of Ozonation and Catalyzed Ozonation Processes for Drinking Water Treatment by Pilot Test

    Institute of Scientific and Technical Information of China (English)

    GUAN Chun-yu; MA Jun; SHI Feng-hua; ZHANG Xiao-lan

    2009-01-01

    The ozone consumption and organic removal ability of a catalytic ozonation (catazone) process using metal-coated cordierite ceramic honeycombs and a plain ozonation process were comparatively studied in pilot-scale experiments. By scanning electron microscopy (SEM), atomic force microscopy (AFM), BET, and X-ray photoelectron spectroscopy (XPS) analysis, the metal oxides attached to the ceramic surface were found to be in the form of crystal clusters, and the pore structure of the ceramics was less developed. The air flow in the vacant catazone and ozone contactors tended toward plug flow and mixed flow, respectively. Compared with the ozonation process, the ozone mass transfer efficiency of the catazone process is lower, and its ozone decomposition efficiency is higher. The former effect is more obvious in the semi-batch experiment, the latter in the continuous-flow experiment. The removal efficiencies of unsaturated organics in the two oxidation processes are similar and are little affected by dissolved ozone concentration when it is higher than 1 mg/L. More dissolved organics were detected in the catazone process in continuous-flow reaction, and more CHCl3 formation potential (CHCl3FP) was removed by the catazone process in semi-batch mode, especially in water with lower UV254.

  20. Hazardous waste characterization among various thermal processes in South Korea: a comparative analysis.

    Science.gov (United States)

    Shin, Sun Kyoung; Kim, Woo-Il; Jeon, Tae-Wan; Kang, Young-Yeul; Jeong, Seong-Kyeong; Yeon, Jin-Mo; Somasundaram, Swarnalatha

    2013-09-15

    The Ministry of Environment, Republic of Korea (South Korea) is in the process of converting its current hazardous waste classification system to harmonize it with the international standard and to set up regulatory standards for toxic substances present in hazardous waste. In the present work, the concentrations and trends of 13 heavy metals, F(-), CN(-) and 19 PAHs present in the hazardous waste generated by various thermal processes (11 processes) in South Korea were analyzed, along with their leaching characteristics. In all thermal processes, the median concentrations of Cu (3.58-209,000 mg/kg), Ni (BDL-1560 mg/kg), Pb (7.22-5132.25 mg/kg) and Zn (83.02-31,419 mg/kg) were comparatively higher than those of the other heavy metals. The iron and steel thermal process showed the highest median values of Cd (14.76 mg/kg), Cr (166.15 mg/kg) and Hg (2.38 mg/kg). Low-molecular-weight PAHs (BDL-37.59 mg/kg) were predominant in the sludge and filter cake samples of most thermal processes. By comparison, the flue gas dust from most of the thermal processing units showed the higher leaching of heavy metals.

  1. Comparative Analysis of Processes for Recovery of Rare Earths from Bauxite Residue

    Science.gov (United States)

    Borra, Chenna Rao; Blanpain, Bart; Pontikes, Yiannis; Binnemans, Koen; Van Gerven, Tom

    2016-09-01

    Environmental concerns and lack of space suggest that the management of bauxite residue needs to be readdressed. The utilization of the residue has thus become a topic high on the agenda of both academia and industry; yet, to date, it is only rarely used. Nonetheless, recovery of rare-earth elements (REEs), with or without other metals, from bauxite residue, and utilization of the left-over residue in other applications such as building materials, may be a viable alternative to storage. Hence, different processes developed by the authors for the recovery of REEs and other metals from bauxite residue were compared. In this study, preliminary energy and cost analyses were carried out to assess the feasibility of the processes. These analyses show that the combination of alkali roasting, smelting, quenching and leaching is a promising process for the treatment of bauxite residue and that it is justified to study this process at pilot scale.

  3. Measurement of comparative advantages of processed food sector of Serbia in the increasing the export

    Directory of Open Access Journals (Sweden)

    Ignjatijević Svetlana

    2014-01-01

    The subject of this research is to analyse the comparative advantages of the export of the processed food sector, in order to define the position of Serbia's processed food sector relative to the Danube region and to highlight the products that were, and will remain, the main exported agricultural products of Serbia. We applied the following indexes: RXA, RTA, ln RXA, RC, RCA, LFI, GL and Sm, and examined their movement over the period 2005-2011. We investigated the correlation of the RCA indexes of the processed food sectors using the Pearson and Spearman coefficients to determine how the RCA variables co-vary. We found that the following products showed an increase in comparative advantage in export, as measured by the Balassa index: milk products, cheese and curd; groats and meal of other cereals; preparations of cereals, flour and starch; processed vegetables, roots and tubers; prepared fruit products; sugar, molasses and honey; chocolate and other food preparations with cocoa; animal feed (including unmilled cereals); edible products and preparations; alcoholic beverages; non-alcoholic beverages; and solid vegetable fats and oils, both 'soft' and animal and vegetable fats.

  4. Optimization of Multiple Responses of the Ultrasonic Machining (USM) Process: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Rina Chakravorty

    2013-04-01

    The ultrasonic machining (USM) process has multiple performance measures, e.g. material removal rate (MRR), tool wear rate (TWR) and surface roughness (SR), which are affected by several process parameters. Researchers have commonly attempted to optimize the USM process with respect to individual responses separately. In the recent past, several systematic procedures for dealing with multi-response optimization problems have been proposed in the literature. Although most of these methods use complex mathematics or statistics, there are some simple methods which can be comprehended and implemented by engineers to optimize the multiple responses of USM processes. However, the relative optimization performance of these approaches is unknown, because the effectiveness of the different methods has been demonstrated on different sets of process data. In this paper, the computational requirements of four simple methods are presented, and two sets of past experimental data on USM processes are analysed using each of them. The relative performances of the methods are then compared. The results show that the weighted signal-to-noise (WSN) ratio method and the utility theory (UT) method usually give better overall optimization performance for the USM process than the other approaches.

  5. Estimation and model selection of semiparametric multivariate survival functions under general censorship.

    Science.gov (United States)

    Chen, Xiaohong; Fan, Yanqin; Pouzo, Demian; Ying, Zhiliang

    2010-07-01

    We study estimation and model selection of semiparametric models of multivariate survival functions for censored data, which are characterized by possibly misspecified parametric copulas and nonparametric marginal survivals. We obtain the consistency and root-n asymptotic normality of a two-step copula estimator to the pseudo-true copula parameter value according to KLIC, and provide a simple consistent estimator of its asymptotic variance, allowing for a first-step nonparametric estimation of the marginal survivals. We establish the asymptotic distribution of the penalized pseudo-likelihood ratio statistic for comparing multiple semiparametric multivariate survival functions subject to copula misspecification and general censorship. An empirical application is provided.

  6. Forecasting macroeconomic variables using neural network models and three automated model selection techniques

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    2016-01-01

    When forecasting with neural network models one faces several problems, all of which influence the accuracy of the forecasts. First, neural networks are often hard to estimate due to their highly nonlinear structure. To alleviate the problem, White (2006) presented a solution (QuickNet) that converts the specification and nonlinear estimation problem into a linear model selection and estimation problem. We shall compare its performance to that of two other procedures building on the linearization idea: the Marginal Bridge Estimator and Autometrics. Second, one must decide whether forecasting...

  7. Detecting Location Shifts during Model Selection by Step-Indicator Saturation

    Directory of Open Access Journals (Sweden)

    Jennifer L. Castle

    2015-04-01

    To capture location shifts in the context of model selection, we propose selecting significant step indicators from a saturating set added to the union of all of the candidate variables. The null retention frequency and approximate non-centrality of a selection test are derived using a 'split-half' analysis, the simplest specialization of a multiple-path block-search algorithm. Monte Carlo simulations, extended to sequential reduction, confirm the accuracy of nominal significance levels under the null and show retentions when location shifts occur, improving the non-null retention frequency compared to the corresponding impulse-indicator saturation (IIS)-based method and the lasso.
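
    As a toy illustration of step-indicator retention, the sketch below fits each step dummy one at a time to a mean-shift series and keeps those whose t-statistic exceeds a threshold. This one-at-a-time scan (the function name and threshold are this sketch's choices, assuming a simple Gaussian mean-shift model) is far simpler than the split-half, multiple-path block-search selection described in the abstract:

    ```python
    import numpy as np

    def retain_step_indicators(y, t_crit=2.0):
        """Retain step indicators S_t = 1{time >= t} whose OLS
        t-statistic exceeds t_crit, fitting one indicator at a time.
        Illustrative only; real step-indicator saturation estimates
        blocks of indicators jointly along multiple search paths."""
        n = len(y)
        retained = []
        for t in range(1, n):
            s = (np.arange(n) >= t).astype(float)
            X = np.column_stack([np.ones(n), s])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            sigma2 = resid @ resid / (n - 2)
            se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
            if se > 0 and abs(beta[1]) / se > t_crit:
                retained.append(t)
        return retained
    ```

    With a single large shift, the scan retains a cluster of indicators around the true break; the joint, multi-path search is what resolves such clusters to a single retained step.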

  8. Linear regression model selection using p-values when the model dimension grows

    CERN Document Server

    Pokarowski, Piotr; Teisseyre, Paweł

    2012-01-01

    We consider a new criterion-based approach to model selection in linear regression. Properties of selection criteria based on p-values of a likelihood ratio statistic are studied for families of linear regression models. We prove that such procedures are consistent, i.e., the minimal true model is chosen with probability tending to one even when the number of models under consideration grows slowly with the sample size. The simulation study indicates that the introduced methods perform promisingly compared with the Akaike and Bayesian Information Criteria.
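
    The flavor of p-value-based selection can be sketched with a greedy forward search over nested linear models: at each step, add the regressor whose partial F-test (equivalent to a likelihood ratio test under Gaussian errors) has the smallest p-value, and stop when nothing is significant. This is an illustrative simplification, not the paper's exact procedure; the function name and stopping rule are assumptions of the sketch:

    ```python
    import numpy as np
    from scipy import stats

    def select_by_pvalues(X, y, alpha=0.01):
        """Greedy forward selection over nested linear models,
        driven by partial F-test p-values. Stops when no remaining
        candidate is significant at level alpha."""
        n, p = X.shape
        selected, remaining = [], list(range(p))

        def rss(cols):
            Z = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            r = y - Z @ beta
            return r @ r

        rss_cur = rss(selected)
        while remaining:
            best = None
            for c in remaining:
                rss_new = rss(selected + [c])
                df2 = n - len(selected) - 2  # residual df of larger model
                F = (rss_cur - rss_new) / (rss_new / df2)
                pval = stats.f.sf(F, 1, df2)
                if best is None or pval < best[1]:
                    best = (c, pval, rss_new)
            if best[1] >= alpha:
                break
            selected.append(best[0])
            rss_cur = best[2]
            remaining.remove(best[0])
        return sorted(selected)
    ```

    On data generated from a sparse true model, the search recovers the relevant regressors; the paper's contribution is proving this kind of consistency when the number of candidate models grows with the sample size.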

  9. Model selection by LASSO methods in a change-point model

    CERN Document Server

    Ciuperca, Gabriela

    2011-01-01

    The paper considers a linear regression model with multiple change-points occurring at unknown times. The LASSO technique is very interesting since it simultaneously allows parametric estimation, including that of the change-points, and automatic variable selection. The asymptotic properties of the LASSO-type estimator (which has the LASSO estimator as a particular case) and of the adaptive LASSO estimator are studied. For the latter estimator the oracle properties are proved. In both cases, a model selection criterion is proposed. Numerical examples show the performance of the adaptive LASSO estimator compared to the LS estimator.
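
    The core idea, estimating change-points by shrinkage, can be sketched for the simplest case of mean shifts: write the piecewise-constant signal as a linear model whose regressors are step dummies 1{t >= k}, and let LASSO zero out most step coefficients. This is a sketch of the idea using scikit-learn's plain `Lasso`, not the paper's LASSO-type or adaptive LASSO estimators, and the penalty level is an assumption:

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def lasso_change_points(y, alpha=0.1):
        """Detect mean shifts by LASSO on step indicators: the
        surviving nonzero step coefficients mark candidate
        change-points."""
        n = len(y)
        # column for k = 1..n-1: indicator turning on at time k
        X = (np.arange(n)[:, None] >= np.arange(1, n)[None, :]).astype(float)
        fit = Lasso(alpha=alpha, fit_intercept=True, max_iter=10000).fit(X, y)
        return [k + 1 for k, b in enumerate(fit.coef_) if abs(b) > 1e-6]
    ```

    In practice the selected steps cluster around the true breaks, which is why the paper couples the estimator with a model selection criterion to pick the final change-point configuration.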

  10. Assessing environmental impacts using a comparative LCA of industrial and artisanal production processes: "Minas Cheese" case

    Directory of Open Access Journals (Sweden)

    Elbert Muller Nigri

    2014-09-01

    This study uses the Life Cycle Assessment (LCA) methodology to evaluate and compare the environmental impacts of the artisanal and industrial manufacturing processes of "Minas cheese". This traditional cheese, produced in the state of Minas Gerais (Brazil), is considered a "cultural patrimony" in the country. The high market share of artisanal producers justifies this research, and the analysis can help identify opportunities to improve the environmental performance of several stages of the production system. The functional unit adopted was 1 kilogram (kg) of cheese. The system boundaries comprised the production process, conservation of the product (before sale), and transport to the consumer market. The milk production process was considered similar in both cases and was therefore not included in the assessment. Data were collected through interviews with the producers, observation, and a literature review; they were ordered and processed using the SimaPro 7 LCA software. According to the impact categories analyzed, artisanal production exerted lower environmental impacts, mainly because the industrial process includes a pasteurization stage that uses dry wood as an energy source, and refrigeration.

  11. Electro-spun organic nanofibers elaboration process investigations using comparative analytical solutions.

    Science.gov (United States)

    Colantoni, A; Boubaker, K

    2014-01-30

    In this paper the Enhanced Variational Iteration Method (EVIM) is proposed, along with the BPES, for solving the Bratu equation, which appears in the particular framework of the electrospun nanofiber fabrication process. Electrospun organic nanofibers, with diameters of less than a quarter micron, have been used in the nonwovens and filtration industries for a broad range of filtration applications over the last decade. The electrospinning process has been related to the Bratu equation through thermo-electro-hydrodynamic balance equations. Analytical solutions are proposed, discussed and compared.

  12. Comparative process analysis of fullerene production by the arc and the radio-frequency discharge methods.

    Science.gov (United States)

    Marković, Z; Todorović-Marković, B; Mohai, I; Farkas, Z; Kovats, E; Szepvolgyi, J; Otasević, D; Scheier, P; Feil, S; Romcević, N

    2007-01-01

    In this work, a comparative analysis of the processes in carbon arc and radio-frequency (RF) plasmas during fullerene synthesis is presented. The kinetic model of fullerene formation developed earlier has been verified in both types of plasma reactor. The fullerene yield depended predominantly on the carbon concentration, the velocity of the plasma flame and the rotational temperature of the C2 radicals. When the mean rotational temperature of the C2 radicals was 3000 K, the fullerene yield was highest regardless of the type of reactor used. The zone of fullerene formation is significantly larger in the RF plasma reactor than in the arc reactor.

  13. Bayesian Model Selection with Network Based Diffusion Analysis.

    Science.gov (United States)

    Whalen, Andrew; Hoppitt, William J E

    2016-01-01

    A number of recent studies have used Network Based Diffusion Analysis (NBDA) to detect the role of social transmission in the spread of a novel behavior through a population. In this paper we present a unified framework for performing NBDA in a Bayesian setting, and demonstrate how the Watanabe-Akaike Information Criterion (WAIC) can be used for model selection. We present a specific example of applying this method to Time to Acquisition Diffusion Analysis (TADA). To examine the robustness of this technique, we performed a large-scale simulation study and found that NBDA using WAIC could recover the correct model of social transmission under a wide range of cases, including in the presence of random effects, individual-level variables, and alternative models of social transmission. This work suggests that NBDA is an effective and widely applicable tool for uncovering whether social transmission underpins the spread of a novel behavior, and may still provide accurate results even when key model assumptions are relaxed.
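
    For readers unfamiliar with WAIC, it is computed from a matrix of pointwise log-likelihoods over posterior draws: a log pointwise predictive density (lppd) minus an effective-parameter penalty. The sketch below shows the standard formula on such a matrix; it is generic and not tied to the NBDA models of the paper:

    ```python
    import numpy as np

    def waic(loglik):
        """WAIC from pointwise log-likelihoods, shape (S draws, N obs).

        WAIC = -2 * (lppd - p_waic), where lppd sums the log of the
        posterior-mean likelihood per observation (computed stably via
        log-sum-exp) and p_waic sums the posterior variance of the
        log-likelihood. Lower WAIC indicates the preferred model.
        """
        m = loglik.max(axis=0)
        lppd = np.sum(m + np.log(np.mean(np.exp(loglik - m), axis=0)))
        p_waic = np.sum(np.var(loglik, axis=0, ddof=1))
        return -2.0 * (lppd - p_waic)
    ```

    Comparing the WAIC of candidate transmission models (asocial, social, mixed) on the same data is then a matter of evaluating this function on each model's log-likelihood matrix.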

  14. Model selection and inference a practical information-theoretic approach

    CERN Document Server

    Burnham, Kenneth P

    1998-01-01

    This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information-theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions, and these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information-theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...
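
    The quantities the book builds on are simple to compute. The sketch below gives AIC, the standard small-sample correction (AICc), and Akaike weights for ranking a candidate set; the function names are this sketch's own:

    ```python
    import numpy as np

    def aic(loglik_max, k):
        """Akaike's Information Criterion for a model with k parameters."""
        return -2.0 * loglik_max + 2.0 * k

    def aicc(loglik_max, k, n):
        """Second-order bias-corrected AIC, recommended for small n/k."""
        return aic(loglik_max, k) + 2.0 * k * (k + 1) / (n - k - 1)

    def akaike_weights(aic_values):
        """Relative model likelihoods, normalized to sum to one."""
        d = np.asarray(aic_values, dtype=float)
        d = d - d.min()
        w = np.exp(-0.5 * d)
        return w / w.sum()
    ```

    The weights make the ranking interpretable: a weight near 1 means one model dominates the candidate set, while several comparable weights signal genuine model-selection uncertainty.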

  15. How Many Separable Sources? Model Selection In Independent Components Analysis

    DEFF Research Database (Denmark)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from ... might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.

  16. Model Selection Framework for Graph-based data

    CERN Document Server

    Caceres, Rajmonda S; Schmidt, Matthew C; Miller, Benjamin A; Campbell, William M

    2016-01-01

    Graphs are powerful abstractions for capturing complex relationships in diverse application settings. An active area of research focuses on theoretical models that define the generative mechanism of a graph. Yet given the complexity and inherent noise in real datasets, it is still very challenging to identify the best model for a given observed graph. We discuss a framework for graph model selection that leverages a long list of graph topological properties and a random forest classifier to learn and classify different graph instances. We fully characterize the discriminative power of our approach as we sweep through the parameter space of two generative models, the Erdos-Renyi and the stochastic block model. We show that our approach gets very close to known theoretical bounds and we provide insight on which topological features play a critical discriminating role.
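
    The pipeline described above, topological features fed to a random forest, can be sketched end to end on the two generative models the paper sweeps. The feature list, graph sizes and edge probabilities below are this sketch's assumptions (chosen so both models share the same expected density, leaving only structure to discriminate on):

    ```python
    import numpy as np
    import networkx as nx
    from sklearn.ensemble import RandomForestClassifier

    def graph_features(G):
        """A few cheap topological summaries of a graph."""
        degs = np.array([d for _, d in G.degree()], dtype=float)
        return [nx.density(G), nx.transitivity(G),
                nx.average_clustering(G), degs.std()]

    def make_dataset(n_graphs=30, n=100, seed=0):
        """Label 0 = Erdos-Renyi, label 1 = two-block stochastic block
        model, both with expected edge density 0.10."""
        rng = np.random.default_rng(seed)
        X, y = [], []
        for _ in range(n_graphs):
            er = nx.gnp_random_graph(n, 0.10, seed=int(rng.integers(1_000_000)))
            X.append(graph_features(er)); y.append(0)
            sbm = nx.stochastic_block_model(
                [n // 2, n // 2], [[0.18, 0.02], [0.02, 0.18]],
                seed=int(rng.integers(1_000_000)))
            X.append(graph_features(sbm)); y.append(1)
        return np.array(X), np.array(y)

    X_train, y_train = make_dataset(seed=0)
    X_test, y_test = make_dataset(seed=1)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    accuracy = clf.score(X_test, y_test)
    ```

    Because density is matched, the forest must lean on clustering-type features, which is exactly the kind of discriminative-power question the paper characterizes as it sweeps the parameter space of the two models.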

  17. Life Cycle Assessment (LCA) used to compare two different methods of ripe table olive processing

    Energy Technology Data Exchange (ETDEWEB)

    Russo, C.; Cappelletti, G. M.; Nicoletti, G. M.

    2010-07-01

    The aim of the present study is to analyze the most common method used for processing ripe table olives: the California style. Life Cycle Assessment (LCA) was applied to detect the hot spots of the system under examination. The LCA results also allowed us to compare the traditional California style, here called method A, with another California style, here called method B. We were interested in this latter method, because the European Union is considering introducing it into the product specification of the Protected Denomination of Origin (PDO) La Bella della Daunia. It was also possible to compare the environmental impacts of the two California style methods with those of the Spanish style method. From the comparison it is clear that method B has a greater environmental impact than method A because greater amounts of water and electricity are required, whereas Spanish style processing has a lower environmental impact than the California style methods. (Author)

  18. Comparative study on the processing of armour steels with various unconventional technologies

    Science.gov (United States)

    Herghelegiu, E.; Schnakovszky, C.; Radu, M. C.; Tampu, N. C.; Zichil, V.

    2017-08-01

    The aim of the current paper is to analyse the suitability of three unconventional technologies - abrasive water jet (AWJ), plasma and laser - for processing armour steels. To this end, two materials (Ramor 400 and Ramor 550) were selected for the experimental tests, and the quality of the cuts was quantified by the following characteristics: width of the processed surface at the jet inlet (Li), width of the processed surface at the jet outlet (Lo), inclination angle (a), deviation from perpendicularity (u), surface roughness (Ra) and surface hardness. It was found that, in terms of cut quality and environmental impact, the best results are offered by the abrasive water jet technology; however, it has the lowest productivity of the three technologies.

  19. Comparing an FPGA to a Cell for an Image Processing Application

    Directory of Open Access Journals (Sweden)

    Robert W. Ives

    2010-01-01

    Modern advancements in configurable hardware, most notably Field-Programmable Gate Arrays (FPGAs), have provided an exciting opportunity to exploit the parallel nature of modern image processing algorithms. On the other hand, PlayStation 3 (PS3) game consoles contain a multicore heterogeneous processor known as the Cell, which is designed to perform complex image processing algorithms at high performance. In this research project, our aim is to study the differences in performance of a modern image processing algorithm on these two hardware platforms. In particular, iris recognition systems have recently become an attractive identification method because of their extremely high accuracy. Iris matching, a repeatedly executed portion of a modern iris recognition algorithm, is parallelized on an FPGA system and a Cell processor. We demonstrate a 2.5-times speedup of the parallelized algorithm on the FPGA system when compared to a Cell processor-based version.
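
    The iris-matching kernel that makes parallelization attractive is, in the common Daugman-style formulation, a masked fractional Hamming distance over binary iris codes. The abstract does not give the authors' exact algorithm, so the sketch below shows only this standard kernel as a reference implementation of the repeatedly executed inner loop:

    ```python
    import numpy as np

    def iris_match_score(code_a, code_b, mask_a, mask_b):
        """Fractional Hamming distance between two binary iris codes,
        counting only bits marked valid in both occlusion masks.
        0.0 = identical codes; ~0.5 = statistically independent irises."""
        valid = mask_a & mask_b
        disagree = (code_a ^ code_b) & valid
        return disagree.sum() / valid.sum()
    ```

    Each comparison is independent bitwise work over a few thousand bits, which is why it maps so naturally onto FPGA logic or the Cell's SIMD units.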

  20. Quantile hydrologic model selection and model structure deficiency assessment: 2. Applications

    NARCIS (Netherlands)

    Pande, S.

    2013-01-01

    Quantile hydrologic model selection and structure deficiency assessment is applied in three case studies. The performance of the quantile model selection problem is rigorously evaluated using a model structure on the French Broad river basin data set. The case study shows that quantile model selection...

  1. The treatment of municipal solid waste in Malaysia comparing the Biothermal Process and mass burning

    Energy Technology Data Exchange (ETDEWEB)

    Fogelholm, C.J.; Iso-Tryykari, M.

    1997-12-31

    Mass burning is the previously much-used technology for the combustion of municipal solid waste: unsorted waste is burned on a grate. The Biothermal Process is a new, innovative municipal solid waste treatment concept consisting of front-end treatment, biogasification of the biofraction, and fluidized bed combustion of the combustible fraction. The objective of this work is to compare the technical, environmental and economic features of the Biothermal Process and mass burning when constructed in Malaysia. First, technical descriptions of the concepts are presented; second, three cases, namely Kuala Lumpur, Perai and Johor Bahru, are studied; finally, conclusions are drawn. The economic comparisons revealed that the Biothermal Process is more economical than mass burning. The investment cost for the Biothermal Process is about 30% lower than for a mass burning plant. To achieve an 8% return on investment, the treatment fee for the Biothermal Process is 47-95 MYR per tonne and for mass burning 181-215 MYR per tonne, depending on the case. The sensitivity analysis showed that, independent of variations in the feed values, the treatment fee remains much lower for the Biothermal Process. The technical comparisons show that the Biothermal Process has the better waste reduction and recycling rate in all cases. The Biothermal Process has much better electrical efficiency in the Kuala Lumpur and Johor Bahru cases, while mass burning has slightly better electrical efficiency in the Perai case. Both concepts have potential for phased construction, but phasing increases investment costs more for mass burning. The suitability of each concept to differences in the quality of waste depends on local conditions, and both methods have merits. The Biothermal Process produces 45-70% lower air emissions than mass burning, and generates less traffic in Kuala Lumpur and Perai, while traffic generation is equal in the Johor Bahru case. The comparisons show that according...

  2. Comparative analysis of cogeneration power plants optimization based on stochastic method using superstructure and process simulator

    Energy Technology Data Exchange (ETDEWEB)

    Araujo, Leonardo Rodrigues de [Instituto Federal do Espirito Santo, Vitoria, ES (Brazil)], E-mail: leoaraujo@ifes.edu.br; Donatelli, Joao Luiz Marcon [Universidade Federal do Espirito Santo (UFES), Vitoria, ES (Brazil)], E-mail: joaoluiz@npd.ufes.br; Silva, Edmar Alino da Cruz [Instituto Tecnologico de Aeronautica (ITA/CTA), Sao Jose dos Campos, SP (Brazil); Azevedo, Joao Luiz F. [Instituto de Aeronautica e Espaco (CTA/IAE/ALA), Sao Jose dos Campos, SP (Brazil)

    2010-07-01

    Thermal systems are essential in facilities such as thermoelectric plants, cogeneration plants, and refrigeration and air-conditioning systems, among others, in which much of the energy consumed by humanity is processed. In a world with finite natural fuel sources and growing energy demand, issues related to thermal system design, such as cost estimation, design complexity, environmental protection and optimization, are becoming increasingly important; hence the need to understand the mechanisms that degrade energy, to improve the use of energy sources, to reduce environmental impacts, and to reduce project, operation and maintenance costs. In recent years, consistent development of procedures and techniques for the computational design of thermal systems has occurred. In this context, the fundamental objective of this study is a comparative analysis of the structural and parametric optimization of a cogeneration system using two stochastic methods: a genetic algorithm and simulated annealing. This work uses a superstructure, modelled in a process simulator (IPSEpro from SimTech), in which the appropriate design options for the case studied are included. Accordingly, the optimal configuration of the cogeneration system is determined by the optimization process, restricted to the configuration options included in the superstructure. The optimization routines are written in MS Excel Visual Basic so that they couple directly to the process simulator. At the end of the optimization process, the optimal system configuration, given the characteristics of each specific problem, is defined. (author)

  3. Comparative analysis of the processing accuracy of high strength metal sheets by AWJ, laser and plasma

    Science.gov (United States)

    Radu, M. C.; Schnakovszky, C.; Herghelegiu, E.; Tampu, N. C.; Zichil, V.

    2016-08-01

    Experimental tests were carried out on two high-strength steel materials (Ramor 400 and Ramor 550). Dimensional accuracy was quantified by measuring the deviations of several geometric parameters of the part (two lengths and two radii). For the Ramor 400 steel, at the jet inlet, the deviations of the part radii were quite small for all three analysed processes, whereas for the linear dimensions the deviations were small only in the case of laser cutting. At the jet outlet, the deviations increased slightly compared to those obtained at the jet inlet, for both materials and for all three processes. For the Ramor 550 steel, at the jet inlet the deviations of the part radii were very small for AWJ and laser cutting but larger for plasma cutting. At the jet outlet, the deviations of the part radii were very small for all processes; for the linear dimensions, very small deviations were obtained only with laser processing, the other two processes leading to very large deviations.

  4. Electrocortical evidence for preferential processing of dynamic pain expressions compared to other emotional expressions.

    Science.gov (United States)

    Reicherts, Philipp; Wieser, Matthias J; Gerdes, Antje B M; Likowski, Katja U; Weyers, Peter; Mühlberger, Andreas; Pauli, Paul

    2012-09-01

    Decoding pain in others is of high individual and social benefit in terms of harm avoidance and demands for accurate care and protection. The processing of facial expressions includes both specific neural activation and automatic congruent facial muscle reactions. While a considerable number of studies investigated the processing of emotional faces, few studies specifically focused on facial expressions of pain. Analyses of brain activity and facial responses elicited by the perception of facial pain expressions in contrast to other emotional expressions may unravel the processing specificities of pain-related information in healthy individuals and may contribute to explaining attentional biases in chronic pain patients. In the present study, 23 participants viewed short video clips of neutral, emotional (joy, fear), and painful facial expressions while affective ratings, event-related brain responses, and facial electromyography (Musculus corrugator supercilii, M. orbicularis oculi, M. zygomaticus major, M. levator labii) were recorded. An emotion recognition task indicated that participants accurately decoded all presented facial expressions. Electromyography analysis suggests a distinct pattern of facial response detected in response to happy faces only. However, emotion-modulated late positive potentials revealed a differential processing of pain expressions compared to the other facial expressions, including fear. Moreover, pain faces were rated as most negative and highly arousing. Results suggest a general processing bias in favor of pain expressions. Findings are discussed in light of attentional demands of pain-related information and communicative aspects of pain expressions.

  5. Comparing curvilinear vs Manhattan ILT shape efficacy on EPE and process window

    Science.gov (United States)

    Zhang, Dan; Buck, Peter; Tritchkov, Alexander; Madhusudhan, Saikiran; Word, James

    2016-10-01

    Inverse Lithography Technology (ILT) is gaining acceptance as part of a comprehensive OPC solution especially as a repair technique to locally improve process window where conventional OPC does not have enough degrees of freedom to produce acceptable results. [1] Since ILT is significantly more computationally intensive than conventional OPC, a localized application of ILT does not significantly increase OPC cycle time. As ILT methods mature and become more efficient, combined with the availability of huge compute clusters for post tape out data processing, the possibility of full-field ILT OPC could soon become reality. Full-field ILT OPC may provide improved process window and greater layout flexibility as long as multi-patterning methods with 193 nm exposure wavelength remain the primary lithography strategy for advanced technology nodes. Due to limitations of photomask lithography tools that prevent efficient exposure of non-Manhattan shapes, ILT OPC output is typically post-processed to conform to mask MRC rules, rendering the raw all-angle features to a Manhattanized equivalent. Previous comparisons of raw vs Manhattan ILT OPC at earlier nodes have shown that a Manhattanized output can be made to print on wafer with equivalent process window while conforming to mask manufacturing rules.[2,3,4] In this paper we use wafer-level lithography simulation to compare raw vs Manhattanized ILT output based on current advanced nodes and MRC rules. We expand this study to include a mask model to ensure that mask corner rounding effects are considered.

  6. A process for analysis of microarray comparative genomics hybridisation studies for bacterial genomes

    Directory of Open Access Journals (Sweden)

    Woodward Martin J

    2008-01-01

    Full Text Available Abstract Background Microarray based comparative genomic hybridisation (CGH) experiments have been used to study numerous biological problems including understanding genome plasticity in pathogenic bacteria. Typically such experiments produce large data sets that are difficult for biologists to handle. Although there are some programmes available for interpretation of bacterial transcriptomics data and of CGH microarray data for examining genetic stability in oncogenes, there are none specifically designed to understand the mosaic nature of bacterial genomes. Consequently a bottleneck still persists in accurate processing and mathematical analysis of these data. To address this shortfall we have produced a simple and robust CGH microarray data analysis process that may be automated in the future to understand bacterial genomic diversity. Results The process involves five steps: cleaning, normalisation, estimating gene presence and absence or divergence, validation, and analysis of data from test strains against three reference strains simultaneously. Each stage of the process is described, and we have compared a number of methods available for characterising bacterial genomic diversity and for calculating the cut-off between gene presence and absence or divergence, showing that a simple dynamic approach using a kernel density estimator performed better than both established methods and a more sophisticated mixture modelling technique. We have also shown that current methods commonly used for CGH microarray analysis in tumour and cancer cell lines are not appropriate for analysing our data. Conclusion After carrying out the analysis and validation for three sequenced Escherichia coli strains, CGH microarray data from 19 E. coli O157 pathogenic test strains were used to demonstrate the benefits of applying this simple and robust process to CGH microarray studies using bacterial genomes.
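
    The kernel-density cut-off step described above can be sketched in a few lines. The following minimal Python version (the bandwidth, grid resolution, and toy log-ratio values are illustrative assumptions, not the paper's data) places the present/absent threshold at the density minimum between the two modes of a bimodal log-ratio distribution.

```python
import math

def gaussian_kde(data, bandwidth):
    """Return a Gaussian kernel density estimate as a callable."""
    n = len(data)
    def density(x):
        return sum(math.exp(-0.5 * ((x - d) / bandwidth) ** 2)
                   for d in data) / (n * bandwidth * math.sqrt(2 * math.pi))
    return density

def presence_cutoff(log_ratios, bandwidth=0.3, grid_steps=200):
    """Place the present/absent cut-off at the density minimum between
    the two modes of the (bimodal) log-ratio distribution."""
    lo, hi = min(log_ratios), max(log_ratios)
    grid = [lo + (hi - lo) * i / grid_steps for i in range(grid_steps + 1)]
    f = gaussian_kde(log_ratios, bandwidth)
    dens = [f(x) for x in grid]
    # search the interior only, so the boundary tails are ignored
    i_min = min(range(grid_steps // 4, 3 * grid_steps // 4), key=dens.__getitem__)
    return grid[i_min]

# toy data: "present" genes cluster near 0, "absent/divergent" genes near -2
present = [0.0, 0.1, -0.1, 0.2, 0.05, -0.05, 0.15]
absent = [-2.0, -1.9, -2.1, -1.8, -2.2]
cut = presence_cutoff(present + absent)
```

    Because the cut-off adapts to where the density actually dips, the same code handles arrays whose two modes shift between experiments, which is the "dynamic" aspect the abstract refers to.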

  7. Balance of Comparative Advantages in the Processed Food Sector of the Danube Countries

    Directory of Open Access Journals (Sweden)

    Svetlana Ignjatijević

    2015-05-01

    Full Text Available In this paper, we investigated the level of competitiveness of the processed food sector of the Danube region countries, in order to show the existence of comparative advantage and the correlation of exports. We used the Balassa index (RCA, revealed comparative advantage) and TPI (trade performance index). At first, using the Pearson and Spearman indexes, we examined the existence of correlations between the processed food sectors of the Danube countries. Then, we applied the Least Significant Difference (LSD) test to further compare the values and answered the question: between which Danube countries are there significant differences? With the study, we found that the distribution of the RCA index in Bosnia and Herzegovina, Hungary, Moldova and Slovenia deviates from normality. We also found the existence of a strong correlation of the RCA index of the Czech Republic with Romania, Hungary with Moldova and Serbia, Moldova with Serbia, and Bulgaria with Ukraine. Finally, we concluded that the development of trade in the countries of the Danube region requires the participation of all relevant interest groups and could play an important role in providing faster economic development, that is, in achieving sustainable development of the countries, with the sustainable use of available resources.
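
    The Balassa RCA index used above is the ratio of a country's sector share in its own exports to the world's sector share in world exports; a value above 1 signals a revealed comparative advantage. A minimal sketch (the export figures are hypothetical, not the paper's data):

```python
def balassa_rca(country_exports, world_exports, sector):
    """Balassa index: RCA = (country's sector share of its total exports)
    / (world's sector share of world total exports).
    RCA > 1 signals a revealed comparative advantage in the sector."""
    country_share = country_exports[sector] / sum(country_exports.values())
    world_share = world_exports[sector] / sum(world_exports.values())
    return country_share / world_share

# hypothetical export values (million EUR): processed food vs all other goods
country = {"processed_food": 30.0, "other": 70.0}
world = {"processed_food": 100.0, "other": 900.0}
rca = balassa_rca(country, world, "processed_food")  # 0.30 / 0.10 = 3.0
```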

  8. Color, TOC and AOX removals from pulp mill effluent by advanced oxidation processes: a comparative study.

    Science.gov (United States)

    Catalkaya, Ebru Cokay; Kargi, Fikret

    2007-01-10

    Pulp mill effluent containing toxic chemicals was treated by different advanced oxidation processes (AOPs) consisting of treatments by hydrogen peroxide, Fenton's reagent (H2O2/Fe2+), UV, UV/H2O2, photo-Fenton (UV/H2O2/Fe2+), ozonation and peroxone (ozone/H2O2) in laboratory-scale reactors for color, total organic carbon (TOC) and adsorbable organic halogen (AOX) removal from the pulp mill effluent. Effects of some operating parameters such as the initial pH, oxidant and catalyst concentrations on TOC, color and AOX removals were investigated. Almost every method used resulted in some degree of color removal from the pulp mill effluent. However, the Fenton's reagent utilizing H2O2/Fe2+ resulted in the highest color, TOC and AOX removals under acidic conditions when compared with the other AOPs tested. Approximately 88% TOC, 85% color and 89% AOX removals were obtained by the Fenton's reagent at pH 5 within 30 min. The photo-Fenton process yielded comparable TOC (85%), color (82%) and AOX (93%) removals within 5 min due to oxidation by UV light in addition to the Fenton's reagent. The fast oxidation reactions of the photo-Fenton treatment make this approach more favorable compared to the others tested.

  9. Comparative study of thermochemical processes for hydrogen production from biomass fuels.

    Science.gov (United States)

    Biagini, Enrico; Masoni, Lorenzo; Tognotti, Leonardo

    2010-08-01

    Different thermochemical configurations (gasification, combustion, electrolysis and syngas separation) are studied for producing hydrogen from biomass fuels. The aim is to provide data for the production unit and the subsequent optimization of the "hydrogen chain" (from energy source selection to hydrogen utilization) in the framework of the Italian project "Filiera Idrogeno". The project focuses on a regional scale (Tuscany, Italy), renewable energies and automotive hydrogen. Decentralized, small production plants are required to solve the logistic problems of biomass supply and to match the limited hydrogen infrastructure. Different options (gasification with air, oxygen or steam/oxygen mixtures, combustion, electrolysis) and conditions (varying the ratios of biomass and gas input) are studied by developing process models with uniform hypotheses to compare the results. Results obtained in this work concern the operating parameters, process efficiencies, and material and energy needs, and are fundamental for optimizing the entire hydrogen chain.

  10. Advanced Investigation and Comparative Study of Graphics Processing Unit-queries Countered

    Directory of Open Access Journals (Sweden)

    A. Baskar

    2014-10-01

    Full Text Available GPU, the Graphics Processing Unit, is the buzzword ruling the market these days. What it is and how it has gained such importance are the questions answered in this research work. The study has been constructed with full attention paid towards answering the following questions. What is a GPU? How is it different from a CPU? How good or bad is it computationally when compared to a CPU? Can the GPU replace the CPU, or is that a daydream? How significant is the arrival of the APU (Accelerated Processing Unit) in the market? What tools are needed to make the GPU work? What are the improvement/focus areas for the GPU to stand in the market? All the above questions are discussed and answered in this study with relevant explanations.

  11. Sensory and Quality Evaluation of Traditional Compared with Power Ultrasound Processed Corn (Zea Mays) Tortilla Chips.

    Science.gov (United States)

    Janve, Bhaskar; Yang, Wade; Sims, Charles

    2015-06-01

    Power ultrasound reduces the traditional corn steeping time from 18 to 1.5 h during tortilla chip dough (masa) processing. This study sought to examine consumer (n = 99) acceptability and quality of tortilla chips made from masa by traditional compared with ultrasonic methods. Overall appearance, flavor, and texture acceptability scores were evaluated using a 9-point hedonic scale. The baked chips (process intermediate) before and after frying (finished product) were analyzed using a texture analyzer and machine vision. The texture values were determined using the 3-point bend test, via breaking force gradient (BFG), peak breaking force (PBF), and breaking distance (BD). The fracturing properties were determined by the crisp fracture support rig, via fracture force gradient (FFG), peak fracture force (PFF), and fracture distance (FD). The machine vision evaluated the total surface area, lightness (L), color difference (ΔE), Hue (°h), and Chroma (C*). The results were evaluated by analysis of variance and means were separated using Tukey's test. Machine vision values of L and °h were higher (P < 0.05), as were BFG, BD, PFF, and FD. Fried tortilla chip texture values of BFG and PFF were significantly higher (P < 0.05) for ultra-sonication than for traditional processing. However, the instrumental differences were not detected in sensory analysis, suggesting that power ultrasound is a potential tortilla chip processing aid.

  12. Early‐Stage Capital Cost Estimation of Biorefinery Processes: A Comparative Study of Heuristic Techniques

    Science.gov (United States)

    Couturier, Jean‐Luc; Kokossis, Antonis; Dubois, Jean‐Luc

    2016-01-01

    Abstract Biorefineries offer a promising alternative to fossil‐based processing industries and have undergone rapid development in recent years. Limited financial resources and stringent company budgets necessitate quick capital estimation of pioneering biorefinery projects at the early stages of their conception to screen process alternatives, decide on project viability, and allocate resources to the most promising cases. Biorefineries are capital‐intensive projects that involve state‐of‐the‐art technologies for which there is no prior experience or sufficient historical data. This work reviews existing rapid cost estimation practices, which can be used by researchers with no previous cost estimating experience. It also comprises a comparative study of six cost methods on three well‐documented biorefinery processes to evaluate their accuracy and precision. The results illustrate discrepancies among the methods because their extrapolation on biorefinery data often violates inherent assumptions. This study recommends the most appropriate rapid cost methods and urges the development of an improved early‐stage capital cost estimation tool suitable for biorefinery processes. PMID:27484398

  13. Early-Stage Capital Cost Estimation of Biorefinery Processes: A Comparative Study of Heuristic Techniques.

    Science.gov (United States)

    Tsagkari, Mirela; Couturier, Jean-Luc; Kokossis, Antonis; Dubois, Jean-Luc

    2016-09-08

    Biorefineries offer a promising alternative to fossil-based processing industries and have undergone rapid development in recent years. Limited financial resources and stringent company budgets necessitate quick capital estimation of pioneering biorefinery projects at the early stages of their conception to screen process alternatives, decide on project viability, and allocate resources to the most promising cases. Biorefineries are capital-intensive projects that involve state-of-the-art technologies for which there is no prior experience or sufficient historical data. This work reviews existing rapid cost estimation practices, which can be used by researchers with no previous cost estimating experience. It also comprises a comparative study of six cost methods on three well-documented biorefinery processes to evaluate their accuracy and precision. The results illustrate discrepancies among the methods because their extrapolation on biorefinery data often violates inherent assumptions. This study recommends the most appropriate rapid cost methods and urges the development of an improved early-stage capital cost estimation tool suitable for biorefinery processes. © 2015 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  14. Y-TZP ceramic processing from coprecipitated powders: a comparative study with three commercial dental ceramics.

    Science.gov (United States)

    Lazar, Dolores R R; Bottino, Marco C; Ozcan, Mutlu; Valandro, Luiz Felipe; Amaral, Regina; Ussui, Valter; Bressiani, Ana H A

    2008-12-01

    (1) To synthesize 3 mol% yttria-stabilized zirconia (3Y-TZP) powders via a coprecipitation route, (2) to obtain zirconia ceramic specimens and analyze their surface characteristics and mechanical properties, and (3) to compare the processed material with three reinforced dental ceramics. A coprecipitation route was used to synthesize a 3 mol% yttria-stabilized zirconia ceramic processed by uniaxial compaction and pressureless sintering. Commercially available alumina or alumina/zirconia ceramics, namely Procera AllCeram (PA), In-Ceram Zirconia Block (CAZ) and In-Ceram Zirconia (IZ), were chosen for comparison. All specimens (6 mm x 5 mm x 5 mm) were polished and ultrasonically cleaned. Qualitative phase analysis was performed by XRD and apparent densities were measured on the basis of the Archimedes principle. Ceramics were also characterized using SEM, TEM and EDS. The hardness measurements were made employing the Vickers hardness test. Fracture toughness (KIC) was calculated. Data were analyzed using one-way analysis of variance (ANOVA) and Tukey's test (alpha=0.05). ANOVA revealed that the Vickers hardness (p<0.05) depended on the ceramic material composition. It was confirmed that the PA ceramic was constituted of a rhombohedral alumina matrix, so-called alpha-alumina. Both CAZ and IZ ceramics presented a mixture of tetragonal zirconia and alpha-alumina phases. The SEM/EDS analysis confirmed the presence of aluminum in the PA ceramic. In the IZ and CAZ ceramics, aluminum, zirconium and cerium were identified in grains surrounded by a second phase containing aluminum, silicon and lanthanum. PA showed significantly higher mean Vickers hardness values (HV) (18.4+/-0.5 GPa) compared to the vitreous CAZ (10.3+/-0.2 GPa) and IZ (10.6+/-0.4 GPa) ceramics. Experimental Y-TZP showed significantly lower results than the other monophased ceramic (PA) (p<0.05). The ceramic processing conditions led to ceramics with mechanical properties comparable to those of commercially available reinforced ceramic materials.
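
    Vickers hardness and indentation fracture toughness of the kind reported above follow from standard relations: HV = 1.8544 F/d² for the Vickers indenter, and the Anstis indentation relation K_IC = 0.016 (E/H)^(1/2) P/c^(3/2). The sketch below uses hypothetical load, diagonal, modulus, and crack-length values chosen only to land in a typical hard-ceramic range; none of the numbers are from the paper, and the paper does not state which K_IC relation it used.

```python
import math

def vickers_hardness_gpa(load_n, diagonal_m):
    """Vickers hardness from indentation load F and mean diagonal d:
    HV = 1.8544 * F / d^2, converted here to GPa."""
    return 1.8544 * load_n / diagonal_m ** 2 / 1e9

def fracture_toughness_mpa_sqrt_m(e_gpa, h_gpa, load_n, crack_m):
    """Indentation fracture toughness (Anstis relation):
    K_IC = 0.016 * sqrt(E/H) * P / c^(3/2), returned in MPa*sqrt(m)."""
    k_pa = 0.016 * math.sqrt(e_gpa / h_gpa) * load_n / crack_m ** 1.5
    return k_pa / 1e6

# hypothetical indentation: 1 kgf load, 30 um mean diagonal / crack length
hv = vickers_hardness_gpa(9.81, 30e-6)                            # ~20 GPa
kic = fracture_toughness_mpa_sqrt_m(210.0, 18.4, 9.81, 30e-6)     # ~3 MPa*sqrt(m)
```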

  15. Episodic memory, concentrated attention and processing speed in aging: A comparative study of Brazilian age groups

    Directory of Open Access Journals (Sweden)

    Rochele Paz Fonseca

    Full Text Available Abstract Neuropsychological studies on the processing of some specific cognitive functions throughout aging are essential for the understanding of human cognitive development from ages 19 to 89. Objectives: This study aimed to verify the occurrence of differences in the processing of episodic memory, concentrated attention and speed of attentional processing among four age groups of adults. Methods: A total of 136 neurologically healthy adults, aged 19-89, with 9 or more years of schooling, took part in the study. Participants were divided according to four age groups: young, middle-aged, elderly and oldest old adults. Subtests of the Brief Neuropsychological Evaluation Instrument (NEUPSILIN were applied for the cognitive assessment. Mean score of corrected answers and of response times were compared between groups by means of a one-way ANOVA test with post-hoc Scheffe procedures and ANCOVA including the co-variables of years of schooling and socio-economical scores. Results: In general, differences in performance were observed from 60 years old on. Only the episodic memory task of delayed recall reflected differences from the age of around 40 onwards and processing speed from around the age of 70 onwards. Thus, differences were found between the age groups regarding their cognitive performance, particularly between young adults and elderly adults, and young adults and oldest old adults. Conclusions: Our research indicates that the middle-aged group should be better analyzed and that comparative cross-sectional studies including only extreme groups such as young and elderly adults are not sufficient.

  16. Comparative or competitive advantages of Ljubljana in the European integration process

    Directory of Open Access Journals (Sweden)

    Nataša Pichler Milanović

    2001-01-01

    Full Text Available The article deals with the comparative advantages of Ljubljana, the capital city of Slovenia, in the process of international and European integration, and with the competitiveness of cities in the framework of the sustainable development paradigm. The aim of the article is to define the advantages and factors that are important for the future role of Ljubljana in the network of European cities, as well as to devise an empirical basis that can be used by strategists of urban development and researchers in the fields of urban and regional development. Special emphasis is given to the position and role of Ljubljana in comparison to other cities in Slovenia, and to the comparative analysis between Ljubljana and selected (sample) and competitive cities within the European Union, Central European capital cities and nearby cross-border cities within the framework of the Alpe-Adria union.

  17. From arrest to sentencing: A comparative analysis of the criminal justice system processing for rape crimes

    Directory of Open Access Journals (Sweden)

    Joana Domingues Vargas

    2008-01-01

    Full Text Available The current article is intended to demonstrate the advantages of prioritizing an analysis of court caseload processing for a given type of crime and proceeding to a comparison of the results obtained from empirical studies in different countries. The article draws on a study I performed on rape cases tried by the court system in Campinas, São Paulo State, and the study by Gary LaFree on rape cases in the United States, based on data in Indianapolis, Indiana. The comparative analysis of determinants of victims' and law enforcement agencies' decisions concerning the pursuit of legal action proved to be productive, even when comparing two different systems of justice. This allowed greater knowledge of how the Brazilian criminal justice system operates, both in its capacity to identify, try, and punish sex offenders, and in terms of the importance it ascribes to formal legal rules in trying rape cases, in comparison to the American criminal justice system.

  18. A Comparative Study of Measuring Devices Used During Space Shuttle Processing for Inside Diameters

    Science.gov (United States)

    Rodriguez, Antonio

    2006-01-01

    During Space Shuttle processing, discrepancies between vehicle dimensions and per-print dimensions determine if a part should be refurbished, replaced or accepted "as-is." The engineer's job is to address each discrepancy by choosing the most accurate procedure and tool available, sometimes to within a few ten-thousandths of an inch. Four methods of measurement are commonly used at the Kennedy Space Center: 1) caliper, 2) mold impressions, 3) optical comparator, 4) dial bore gage. During a problem report evaluation, uncertainty arose between methods after measuring diameters with variations of up to 0.0004 inch. The results showed that computer-based measuring devices are extremely accurate, but when the human factor is involved in determining points of reference, the results may vary widely compared to more traditional methods.

  19. A Comparative Analysis of Two Software Development Methodologies: Rational Unified Process and Extreme Programming

    Directory of Open Access Journals (Sweden)

    Marcelo Rafael Borth

    2014-01-01

    Full Text Available Software development methodologies were created to meet the great market demand for innovation, productivity, quality and performance. With the use of a methodology, it is possible to reduce the cost, the risk, the development time, and even increase the quality of the final product. This article compares two of these development methodologies: the Rational Unified Process and the Extreme Programming. The comparison shows the main differences and similarities between the two approaches, and highlights and comments some of their predominant features.

  20. Bayesian Model Selection With Network Based Diffusion Analysis

    Directory of Open Access Journals (Sweden)

    Andrew eWhalen

    2016-04-01

    Full Text Available A number of recent studies have used Network Based Diffusion Analysis (NBDA) to detect the role of social transmission in the spread of a novel behavior through a population. In this paper we present a unified framework for performing NBDA in a Bayesian setting, and demonstrate how the Watanabe-Akaike Information Criterion (WAIC) can be used for model selection. We present a specific example of applying this method to Time to Acquisition Diffusion Analysis (TADA). To examine the robustness of this technique, we performed a large-scale simulation study and found that NBDA using WAIC could recover the correct model of social transmission under a wide range of cases, including under the presence of random effects, individual-level variables, and alternative models of social transmission. This work suggests that NBDA is an effective and widely applicable tool for uncovering whether social transmission underpins the spread of a novel behavior, and may still provide accurate results even when key model assumptions are relaxed.
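
    WAIC, as used above, is computed from a matrix of pointwise log-likelihoods over posterior draws: WAIC = -2(lppd - p_WAIC), where lppd sums the log of the mean pointwise likelihood and p_WAIC sums the across-draw variances of the log-likelihoods. A minimal, self-contained Python sketch (the toy log-likelihood matrices are invented, not NBDA output):

```python
import math

def waic(log_lik):
    """WAIC from pointwise log-likelihoods:
    log_lik[s][i] = log p(y_i | theta_s) for posterior draw s.
    Returns -2 * (lppd - p_waic); lower is better."""
    n_draws, n_obs = len(log_lik), len(log_lik[0])
    lppd, p_waic = 0.0, 0.0
    for i in range(n_obs):
        col = [log_lik[s][i] for s in range(n_draws)]
        # lppd term: log of the mean likelihood (log-sum-exp for stability)
        m = max(col)
        lppd += m + math.log(sum(math.exp(c - m) for c in col) / n_draws)
        # effective-parameter term: sample variance of the log-likelihoods
        mean = sum(col) / n_draws
        p_waic += sum((c - mean) ** 2 for c in col) / (n_draws - 1)
    return -2.0 * (lppd - p_waic)

# toy example: 4 posterior draws, 3 observations; lower WAIC wins
ll_a = [[-1.0, -1.2, -0.9], [-1.1, -1.0, -1.0],
        [-0.9, -1.1, -1.0], [-1.0, -1.0, -0.95]]
ll_b = [[-2.0, -2.5, -1.5], [-1.0, -3.0, -2.0],
        [-2.5, -1.0, -2.5], [-3.0, -2.0, -1.0]]
better = "A" if waic(ll_a) < waic(ll_b) else "B"
```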

  1. Model selection for the extraction of movement primitives.

    Science.gov (United States)

    Endres, Dominik M; Chiovetto, Enrico; Giese, Martin A

    2013-01-01

    A wide range of blind source separation methods have been used in motor control research for the extraction of movement primitives from EMG and kinematic data. Popular examples are principal component analysis (PCA), independent component analysis (ICA), anechoic demixing, and the time-varying synergy model (d'Avella and Tresch, 2002). However, choosing the parameters of these models, or indeed choosing the type of model, is often done in a heuristic fashion, driven by result expectations as much as by the data. We propose an objective criterion which allows selection of the model type, the number of primitives and the temporal smoothness prior. Our approach is based on a Laplace approximation to the posterior distribution of the parameters of a given blind source separation model, re-formulated as a Bayesian generative model. We first validate our criterion on ground-truth data, showing that it performs at least as well as traditional model selection criteria [the Bayesian information criterion, BIC (Schwarz, 1978), and the Akaike information criterion, AIC (Akaike, 1974)]. Then, we analyze human gait data, finding that an anechoic mixture model with a temporal smoothness constraint on the sources can best account for the data.
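
    The baseline criteria cited above have simple closed forms: AIC = 2k - 2 ln L and BIC = k ln n - 2 ln L, where k counts free parameters, n is the sample size, and ln L is the maximized log-likelihood. The toy numbers below are hypothetical fits, chosen to show that the two criteria can disagree because BIC penalizes extra parameters more heavily at moderate n:

```python
import math

def aic(log_likelihood, k):
    """Akaike information criterion: AIC = 2k - 2 ln L (lower is better)."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    """Bayesian information criterion: BIC = k ln n - 2 ln L (lower is better)."""
    return k * math.log(n) - 2 * log_likelihood

# hypothetical fits to n = 100 samples: the complex model fits better
# (higher log-likelihood) but uses more primitives (more parameters)
aic_simple, aic_complex = aic(-120.0, 4), aic(-112.0, 9)
bic_simple, bic_complex = bic(-120.0, 4, 100), bic(-112.0, 9, 100)
```

    Here AIC prefers the complex model (242 vs 248) while BIC prefers the simple one, which is exactly the kind of disagreement that motivates a more principled criterion such as the Laplace-approximation approach of the paper.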

  2. Model selection for the extraction of movement primitives

    Directory of Open Access Journals (Sweden)

    Dominik M Endres

    2013-12-01

    Full Text Available A wide range of blind source separation methods have been used in motor control research for the extraction of movement primitives from EMG and kinematic data. Popular examples are principal component analysis (PCA), independent component analysis (ICA), anechoic demixing, and the time-varying synergy model. However, choosing the parameters of these models, or indeed choosing the type of model, is often done in a heuristic fashion, driven by result expectations as much as by the data. We propose an objective criterion which allows selection of the model type, the number of primitives and the temporal smoothness prior. Our approach is based on a Laplace approximation to the posterior distribution of the parameters of a given blind source separation model, re-formulated as a Bayesian generative model. We first validate our criterion on ground-truth data, showing that it performs at least as well as traditional model selection criteria (the Bayesian information criterion, BIC, and the Akaike information criterion, AIC). Then, we analyze human gait data, finding that an anechoic mixture model with a temporal smoothness constraint on the sources can best account for the data.

  3. Hyperopt: a Python library for model selection and hyperparameter optimization

    Science.gov (United States)

    Bergstra, James; Komer, Brent; Eliasmith, Chris; Yamins, Dan; Cox, David D.

    2015-01-01

    Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.
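
    As a toy illustration of the sequential search idea (this is not the Hyperopt API itself), the self-contained sketch below alternates random exploration with proposals near the incumbent best point. Real SMBO, such as Hyperopt's TPE, instead fits a probabilistic surrogate to all past trials to decide where to evaluate next; everything here is an illustrative assumption.

```python
import random

def sequential_minimize(objective, lo, hi, budget=200, seed=0):
    """Toy sequential optimizer: after a random warm-up phase, propose
    candidates near the best point seen so far and keep improvements.
    (Illustrative only -- real SMBO fits a surrogate model to trials.)"""
    rng = random.Random(seed)
    best_x = rng.uniform(lo, hi)
    best_y = objective(best_x)
    for trial in range(budget):
        if trial < budget // 3:          # exploration phase
            x = rng.uniform(lo, hi)
        else:                            # exploitation around the incumbent
            x = min(hi, max(lo, rng.gauss(best_x, (hi - lo) * 0.05)))
        y = objective(x)
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

# minimize a 1-D stand-in "hyperparameter loss" with its optimum at x = 2
best_x, best_y = sequential_minimize(lambda x: (x - 2.0) ** 2, -10.0, 10.0)
```

    With Hyperopt installed, the analogous call would go through `fmin` with a search space built from `hp` expressions and `tpe.suggest` as the algorithm, as the tutorial described in the abstract covers.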

  4. A Multi-Criteria Model Selection Protocol for Practical Applications to Nutrient Transport at the Catchment Scale

    Directory of Open Access Journals (Sweden)

    Ye Tuo

    2015-06-01

    Full Text Available Process-based models are widely used to investigate nutrient dynamics for water management purposes. Simulating nutrient transport and transformation processes from agricultural land into water bodies at the catchment scale is a particularly relevant and challenging task for water authorities. However, few practical methods guide inexperienced modelers in the selection process of an appropriate model. In particular, data availability is a key aspect in a model selection protocol, since a large number of models contain the functionalities to predict nutrient fate and transport, yet a smaller number is applicable to specific datasets. In our work, we aim at providing a model selection protocol fit for practical application, with particular emphasis on data availability, cost-benefit analysis and the user's objectives. We select for illustrative purposes five process-based models with different complexity as "candidate" models: SWAT (Soil and Water Assessment Tool), SWIM (Soil and Water Integrated Model), GWLF (Generalized Watershed Loading Function), AnnAGNPS (Annualized Agricultural Non-Point Source Pollution model) and HSPF (Hydrological Simulation Program-FORTRAN). The models are described in terms of hydrological and chemical output and input requirements. The model selection protocol considers data availability, model characteristics and the user's objectives, and it is applied to hypothetical scenarios. This selection method is particularly formulated to choose process-based models for nutrient modeling, but it can be generalized for other applications which are characterized by a similar degree of complexity.
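
    A multi-criteria protocol of this kind can be reduced to a weighted scoring matrix. The sketch below is a generic illustration, not the paper's actual protocol: the criterion scores and weights are invented, and only the model names come from the abstract.

```python
def select_model(candidates, weights):
    """Rank candidate models by a weighted sum of criterion scores.
    Each criterion is scored 0-1; weights encode the user's priorities."""
    def total(scores):
        return sum(weights[c] * scores[c] for c in weights)
    return max(candidates, key=lambda name: total(candidates[name]))

# hypothetical scores for three of the protocol's criteria
candidates = {
    "SWAT": {"data_availability": 0.4, "process_detail": 0.9, "cost_benefit": 0.5},
    "GWLF": {"data_availability": 0.9, "process_detail": 0.5, "cost_benefit": 0.8},
    "HSPF": {"data_availability": 0.3, "process_detail": 0.8, "cost_benefit": 0.4},
}
# a data-poor user weights data availability most heavily
weights = {"data_availability": 0.5, "process_detail": 0.2, "cost_benefit": 0.3}
choice = select_model(candidates, weights)
```

    Changing the weights to favor process detail over data availability would steer the same matrix toward a more complex model, which mirrors how the protocol adapts to different user objectives.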

  5. Comparative energy consumption analyses of an ultra high frequency induction heating system for material processing applications

    Energy Technology Data Exchange (ETDEWEB)

    Tastan, M.; Gokozan, H.; Taskin, S.; Cavdar, U.

    2015-07-01

    This study compares the energy consumption results of TI-6Al-4V based material processing under 900 kHz induction heating for different cases. By this means, total power consumption and energy consumption per sample and amount have been analyzed. Experiments have been conducted with a 900 kHz, 2.8 kW ultra-high frequency induction system. Two cases are considered in the study. In the first case, TI-6Al-4V samples have been heated up to 900 degree centigrade with the classical heating method, which is used in industrial applications, and then they have been cooled down by water. Afterwards, the samples have been heated up to 600 degree centigrade, 650 degree centigrade and 700 degree centigrade respectively, and a stress relieving process has been applied through natural cooling. During these processes, energy consumptions for each defined process have been measured. In the second case, unlike the first, five different samples have been heated up to various temperatures between 600 degree centigrade and 1120 degree centigrade and energy consumptions have been measured for these processes. Thereby, the effect of the temperature increase of each sample on energy cost has been analyzed. It has been seen that, as a result of heating the titanium bulk materials used in the experiment with ultra-high frequency induction, a temperature increase also increases the energy consumption. But it has been revealed that the rate of increase in energy consumption is greater than the rate of increase of the temperature. (Author)

  6. Sparsity Is Better with Stability: Combining Accuracy and Stability for Model Selection in Brain Decoding

    Science.gov (United States)

    Baldassarre, Luca; Pontil, Massimiliano; Mourão-Miranda, Janaina

    2017-01-01

    Structured sparse methods have received significant attention in neuroimaging. These methods allow the incorporation of domain knowledge through additional spatial and temporal constraints in the predictive model and carry the promise of being more interpretable than non-structured sparse methods, such as LASSO or Elastic Net methods. However, although sparsity has often been advocated as leading to more interpretable models, it can also lead to unstable models under subsampling or slight changes of the experimental conditions. In the present work we investigate the impact of using stability/reproducibility as an additional model selection criterion on several different sparse (and structured sparse) methods that have been recently applied for fMRI brain decoding. We compare three different model selection criteria: (i) classification accuracy alone; (ii) classification accuracy and overlap between the solutions; (iii) classification accuracy and correlation between the solutions. The methods we consider include LASSO, Elastic Net, Total Variation, sparse Total Variation, Laplacian and Graph Laplacian Elastic Net (GraphNET). Our results show that explicitly accounting for stability/reproducibility during the model optimization can mitigate some of the instability inherent in sparse methods. In particular, using accuracy and overlap between the solutions as a joint optimization criterion can lead to solutions that are more similar in terms of accuracy, sparsity levels and coefficient maps even when different sparsity methods are considered. PMID:28261042
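
    The overlap criterion above can be illustrated by refitting a sparse selector on several subsamples and averaging the pairwise Jaccard overlap of the selected feature sets. The sketch below substitutes a trivial threshold-based "selector" for a real sparse fit (LASSO etc. would require an external library), so the selection rule, thresholds, and synthetic data are all illustrative assumptions:

```python
import random

def selected_features(sample, threshold=0.5):
    """Stand-in for a sparse fit: 'select' features whose mean absolute
    value in the subsample exceeds a threshold."""
    n_feat = len(sample[0])
    sel = set()
    for j in range(n_feat):
        mean_abs = sum(abs(row[j]) for row in sample) / len(sample)
        if mean_abs > threshold:
            sel.add(j)
    return sel

def mean_pairwise_overlap(feature_sets):
    """Stability criterion: average Jaccard overlap between the feature
    sets selected on different subsamples (1.0 = perfectly stable)."""
    pairs, total = 0, 0.0
    for i in range(len(feature_sets)):
        for j in range(i + 1, len(feature_sets)):
            a, b = feature_sets[i], feature_sets[j]
            total += len(a & b) / len(a | b) if (a | b) else 1.0
            pairs += 1
    return total / pairs

rng = random.Random(1)
# synthetic data: 2 informative features (large values) + 3 noise features
data = [[1.0 + rng.gauss(0, 0.1), -1.0 + rng.gauss(0, 0.1)] +
        [rng.gauss(0, 0.1) for _ in range(3)] for _ in range(60)]
subsamples = [rng.sample(data, 30) for _ in range(5)]
stability = mean_pairwise_overlap([selected_features(s) for s in subsamples])
```

    In a real decoding pipeline this stability score would be combined with cross-validated accuracy when choosing the regularization level, which is the joint criterion the paper evaluates.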

  7. Building v/s Exploring Models: Comparing Learning of Evolutionary Processes through Agent-based Modeling

    Science.gov (United States)

    Wagh, Aditi

    Two strands of work motivate the three studies in this dissertation. Evolutionary change can be viewed as a computational complex system in which a small set of rules operating at the individual level result in different population level outcomes under different conditions. Extensive research has documented students' difficulties with learning about evolutionary change (Rosengren et al., 2012), particularly in terms of levels slippage (Wilensky & Resnick, 1999). Second, though building and using computational models is becoming increasingly common in K-12 science education, we know little about how these two modalities compare. This dissertation adopts agent-based modeling as a representational system to compare these modalities in the conceptual context of micro-evolutionary processes. Drawing on interviews, Study 1 examines middle-school students' productive ways of reasoning about micro-evolutionary processes to find that the specific framing of traits plays a key role in whether slippage explanations are cued. Study 2, which was conducted in 2 schools with about 150 students, forms the crux of the dissertation. It compares learning processes and outcomes when students build their own models or explore a pre-built model. Analysis of Camtasia videos of student pairs reveals that builders' and explorers' ways of accessing rules, and sense-making of observed trends are of a different character. Builders notice rules through available blocks-based primitives, often bypassing their enactment while explorers attend to rules primarily through the enactment. Moreover, builders' sense-making of observed trends is more rule-driven while explorers' is more enactment-driven. Pre and posttests reveal that builders manifest a greater facility with accessing rules, providing explanations manifesting targeted assembly. Explorers use rules to construct explanations manifesting non-targeted assembly. Interviews reveal varying degrees of shifts away from slippage in both
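
    The core idea that a small set of individual-level rules yields population-level outcomes can be sketched in a few lines of agent-based code. The minimal Python simulation below (trait names, fitness values, population size, and generation count are all illustrative assumptions, not the study's curriculum model) shows a single weighted-reproduction rule shifting a population-level trait frequency:

```python
import random

def simulate_selection(pop_size=200, generations=40, advantage=1.1, seed=7):
    """Minimal agent-based sketch of micro-evolutionary change: each
    individual carries one of two trait variants, and each generation
    individuals reproduce with probability proportional to the variant's
    fitness. The same individual-level rule produces different
    population-level outcomes as the fitness advantage changes."""
    rng = random.Random(seed)
    fitness = {"A": advantage, "a": 1.0}
    pop = ["A"] * (pop_size // 2) + ["a"] * (pop_size // 2)
    for _ in range(generations):
        weights = [fitness[t] for t in pop]
        pop = rng.choices(pop, weights=weights, k=pop_size)  # weighted reproduction
    return pop.count("A") / pop_size

freq_advantaged = simulate_selection()            # selection favors variant "A"
freq_neutral = simulate_selection(advantage=1.0)  # equal fitness: pure drift
```

    Keeping the rule at the individual level while reading results at the population level is exactly the two-level structure that, per the studies above, students tend to conflate ("levels slippage").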

  8. Mental health policy process: a comparative study of Ghana, South Africa, Uganda and Zambia

    Directory of Open Access Journals (Sweden)

    Kigozi Fred

    2010-08-01

    Full Text Available Abstract Background Mental illnesses are increasingly recognised as a leading cause of disability worldwide, yet many countries lack a mental health policy or have an outdated, inappropriate policy. This paper explores the development of appropriate mental health policies and their effective implementation. It reports comparative findings on the processes for developing and implementing mental health policies in Ghana, South Africa, Uganda and Zambia as part of the Mental Health and Poverty Project. Methods The study countries and respondents were purposively selected to represent different levels of mental health policy and system development to allow comparative analysis of the factors underlying the different forms of mental health policy development and implementation. Data were collected using semi-structured interviews and document analysis. Data analysis was guided by a conceptual framework that was developed for this purpose. A framework approach to analysis was used, incorporating themes that emerged from the data and from the conceptual framework. Results Mental health policies in Ghana, South Africa, Uganda and Zambia are weak, in draft form or non-existent. Mental health remained low on the policy agenda due to stigma and a lack of information, as well as low prioritisation by donors, low political priority and weak grassroots demand. Progress with mental health policy development varied and respondents noted a lack of consultation and insufficient evidence to inform policy development. Furthermore, policies were poorly implemented, due to factors including insufficient dissemination and operationalisation of policies and a lack of resources. Conclusions Mental health policy processes in all four countries were inadequate, leading to either weak or non-existent policies, with an impact on mental health services. Recommendations are provided to strengthen mental health policy processes in these and other African countries.

  9. Comparative Analysis of the Processes of Quality in Physiotherapy / Kinesiology of Colombia and Chile

    Directory of Open Access Journals (Sweden)

    Luis Fernando Rodríguez Ibagué

    2015-05-01

    Full Text Available Introduction: The reform initiatives in Latin America reflect a concern to ensure universal coverage and provide quality services; quality management has thus become one of the most important issues of the 21st century, especially in health. Objectives: To characterize the processes of Enabling/Health Authorization and Accreditation of kinesiological services in Colombia and Chile from the perspective of health quality. Methodology: We conducted a descriptive comparative analysis between the two countries (Colombia and Chile) in terms of quality processes in Kinesiology, showing similarities and differences related to quality assurance. Discussion: Both countries have similar standards in terms of Enabling/Health Authorization and Accreditation. Issues currently discussed in these countries include patient safety, rights and duties of patients, infrastructure, access, care assessment, human talent, and others, all for the sake of ensuring the quality of service; this discussion is ongoing and has been incorporated into the organizational culture of the services. Conclusions: The literature review evidenced poor documentation of quality processes, specifically for Kinesiology. It is therefore important to provide the academic community with an analysis of the standards required for Enabling and Accreditation, contributing to the enrichment of the administration and management area of our profession.

  10. Comparative proteome and transcriptome analysis of lager brewer's yeast in the autolysis process.

    Science.gov (United States)

    Xu, Weina; Wang, Jinjing; Li, Qi

    2014-12-01

    The autolysis of brewer's yeast during beer production has a significant effect on the quality of the final product. In this work, we performed proteome and transcriptome studies on brewer's yeast to examine changes in protein and mRNA levels in the process of autolysis. Protein and RNA samples of the strain Qing2 at two different autolysis stages were obtained for further study. In all, 49 kinds of proteins were considered to be involved in the autolysis response, eight of which were up-regulated and 41 down-regulated. Seven new kinds of proteins emerged during autolysis. Results of comparative analyses showed that important changes had taken place as an adaptive response to autolysis. Functional analysis showed that carbohydrate and energy metabolism, cellular amino acid metabolic processes, cell response to various stresses (such as oxidative stress, salt stress, and osmotic stress), translation and transcription were repressed by the down-regulation of the corresponding proteins, and starvation and DNA damage responses could be induced. The comparison of data on transcriptomes with proteomes demonstrated that most autolysis-response proteins as well as new proteins showed a general correlation between mRNA and protein levels. Thus these proteins were thought to be transcriptionally regulated. These findings provide important information about how brewer's yeast acts to cope with autolysis at molecular levels, which might enhance global understanding of the autolysis process.

  11. A Comparative Study of Two 47 Tuc Giant Stars with Different s-process Enrichment

    Science.gov (United States)

    Cordero, M. J.; Hansen, C. J.; Johnson, C. I.; Pilachowski, C. A.

    2015-07-01

    Here we aim to understand the origin of 47 Tuc’s La-rich star Lee 4710. We report abundances for O, Na, Mg, Al, Si, Ca, Sc, Ti, V, Cr, Co, Ni, Zn, Y, Zr, Ba, La, Ce, Pr, Nd, and Eu and present a detailed abundance analysis of two 47 Tuc stars with similar stellar parameters but different slow neutron-capture (s-)process enrichment. Star Lee 4710 has the highest known La abundance ratio in this cluster ([La/Fe] = 1.14), and star Lee 4626 is known to have normal s-process abundances (e.g., [Ba/Eu] < 0). The nucleosynthetic pattern of elements with Z ≳ 56 for star Lee 4710 agrees with the predicted yields of a 1.3 M⊙ asymptotic giant branch (AGB) star. Therefore, Lee 4710 may have been enriched by mass transfer from a more massive AGB companion, which is compatible with its location far away from the center of this relatively metal-rich ([Fe/H] ~ -0.7) globular cluster. A further analysis comparing the abundance pattern of Lee 4710 with data available in the literature reveals that nine out of the ~200 47 Tuc stars previously studied show strong s-process enhancements that point toward later enrichment by more massive AGB stars.

  12. Comparing estimates of climate change impacts from process-based and statistical crop models

    Science.gov (United States)

    Lobell, David B.; Asseng, Senthold

    2017-01-01

    The potential impacts of climate change on crop productivity are of widespread interest to those concerned with addressing climate change and improving global food security. Two common approaches to assess these impacts are process-based simulation models, which attempt to represent key dynamic processes affecting crop yields, and statistical models, which estimate functional relationships between historical observations of weather and yields. Examples of both approaches are increasingly found in the scientific literature, although often published in different disciplinary journals. Here we compare published sensitivities to changes in temperature, precipitation, carbon dioxide (CO2), and ozone from each approach for the subset of crops, locations, and climate scenarios for which both have been applied. Despite a common perception that statistical models are more pessimistic, we find no systematic differences between the predicted sensitivities to warming from process-based and statistical models up to +2 °C, with limited evidence at higher levels of warming. For precipitation, there are many reasons why estimates could be expected to differ, but few estimates exist to develop robust comparisons, and precipitation changes are rarely the dominant factor for predicting impacts given the prominent role of temperature, CO2, and ozone changes. A common difference between process-based and statistical studies is that the former tend to include the effects of CO2 increases that accompany warming, whereas statistical models typically do not. Major needs moving forward include incorporating CO2 effects into statistical studies, improving both approaches’ treatment of ozone, and increasing the use of both methods within the same study. At the same time, those who fund or use crop model projections should understand that in the short-term, both approaches when done well are likely to provide similar estimates of warming impacts, with statistical models generally
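The statistical models discussed above estimate a functional relationship between historical weather observations and yields, from which a warming sensitivity can be derived. A minimal sketch of that idea on synthetic data follows; every coefficient and value here is an invented placeholder, not a result from the studies being compared:

```python
import numpy as np

# Synthetic weather-yield history (hypothetical numbers for illustration only)
rng = np.random.default_rng(1)
n = 200
temp = rng.normal(20, 2, n)        # growing-season mean temperature, deg C
precip = rng.normal(500, 80, n)    # growing-season precipitation, mm
# Assumed "true" relationship: log-yield falls ~5% per deg C of warming
log_yield = 3.0 - 0.05 * temp + 0.0004 * precip + rng.normal(0, 0.05, n)

# Fit the statistical model: log-yield regressed on temperature and precipitation
X = np.column_stack([np.ones(n), temp, precip])
beta, *_ = np.linalg.lstsq(X, log_yield, rcond=None)

# Implied yield impact of a +2 deg C warming scenario, in percent
warming_impact = (np.exp(beta[1] * 2) - 1) * 100
```

Note that, as the abstract points out, such a regression only captures CO2 fertilization if a CO2 covariate is added explicitly, which historical statistical studies typically omit.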

  13. Comparative effectiveness research on patients with acute ischemic stroke using Markov decision processes

    Directory of Open Access Journals (Sweden)

    Wu Darong

    2012-03-01

    Full Text Available Abstract Background Several methodological issues with non-randomized comparative clinical studies have been raised, one of which is whether the methods used can adequately identify uncertainties that evolve dynamically with time in real-world systems. The objective of this study is to compare the effectiveness of different combinations of Traditional Chinese Medicine (TCM) treatments and combinations of TCM and Western medicine interventions in patients with acute ischemic stroke (AIS) by using Markov decision process (MDP) theory. MDP theory appears to be a promising new method for use in comparative effectiveness research. Methods The electronic health records (EHR) of patients with AIS hospitalized at the 2nd Affiliated Hospital of Guangzhou University of Chinese Medicine between May 2005 and July 2008 were collected. Each record was partitioned into two "state-action-reward" stages divided by three time points: the first, third, and last day of hospital stay. We used the well-developed optimality technique in MDP theory with the finite horizon criterion to make a dynamic comparison of different treatment combinations. Results A total of 1504 records with a primary diagnosis of AIS were identified. Only states with information from at least 10 patients were included, which left 960 records to be enrolled in the MDP model. Optimal combinations were obtained for 30 types of patient condition. Conclusion MDP theory makes it possible to dynamically compare the effectiveness of different combinations of treatments. However, the optimal interventions obtained by the MDP theory here require further validation in clinical practice. Further exploratory studies with MDP theory in other areas in which complex interventions are common would be worthwhile.
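The finite-horizon optimality technique referred to above amounts to backward induction (finite-horizon value iteration) over the decision stages. A minimal sketch is shown below; the number of states, the transition probabilities, and the rewards are all hypothetical placeholders, not the study's clinical data:

```python
import numpy as np

# Hypothetical setup: 3 patient states, 2 treatment combinations, 2 decision stages
n_states, n_actions, horizon = 3, 2, 2
rng = np.random.default_rng(0)
# P[a][s, s']: probability of moving from state s to s' under action a
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
# R[s, a]: expected one-stage reward (e.g., clinical improvement score)
R = rng.uniform(0, 1, size=(n_states, n_actions))

# Backward induction: work from the last stage toward the first
V = np.zeros(n_states)  # terminal value after discharge
policy = []
for t in reversed(range(horizon)):
    # Q[s, a] = immediate reward + expected value of the next state
    Q = R + np.stack([P[a] @ V for a in range(n_actions)], axis=1)
    policy.append(Q.argmax(axis=1))  # best treatment combination per state
    V = Q.max(axis=1)
policy.reverse()  # policy[t][s]: optimal action at stage t in state s
```

The optimal policy is stage-dependent, which is what lets the method compare treatment combinations dynamically rather than as a single fixed choice.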

  14. Composting on Mars or the Moon: I. Comparative evaluation of process design alternatives

    Science.gov (United States)

    Finstein, M. S.; Strom, P. F.; Hogan, J. A.; Cowan, R. M.; Janes, H. W. (Principal Investigator)

    1999-01-01

    As a candidate technology for treating solid wastes and recovering resources in bioregenerative Advanced Life Support, composting potentially offers such advantages as compactness, low mass, near-ambient reactor temperatures and pressures, reliability, flexibility, simplicity, and forgiveness of operational error or neglect. Importantly, the interactions among the physical, chemical, and biological factors that govern composting system behavior are well understood. This article comparatively evaluates five Generic Systems that describe the basic alternatives for composting facility design and control. These are: 1) passive aeration; 2) passive aeration abetted by mechanical agitation; 3) forced aeration with O2 feedback control; 4) forced aeration with temperature feedback control; 5) forced aeration with integrated O2 and temperature feedback control. Each of the five has a distinctive pattern of behavior and process performance characteristics. Only Systems 4 and 5 are judged to be viable candidates for ALS on alien worlds, though which is better suited to this application is yet to be determined.

  15. Comparative effectiveness of intravenous immunoglobulin from different manufacturing processes on Kawasaki disease

    Institute of Scientific and Technical Information of China (English)

    Ming-Chih Lin

    2014-01-01

    Background: The comparative effectiveness of intravenous immunoglobulin (IVIG) for Kawasaki disease was regarded as inconclusive in the international guidelines. However, several new pieces of evidence have been published in recent years. Data sources: A literature search of PubMed was conducted using the key words "Kawasaki disease or mucocutaneous lymph node syndrome" and "immunoglobulin" in combination. Only original articles published after 2004 were selected. A total of 813 papers were found in PubMed. These papers were screened manually by their titles and abstracts. Results: Patients treated with IVIG prepared by beta-propiolactonation might have worse outcomes (a higher non-responsive rate in one report and a higher rate of coronary aneurysm in two reports). Storage of IVIG in acidic solution might be correlated with a higher rate of coronary aneurysm (two reports). Conclusions: Different processes of preparation and conditions of preservation of IVIG may have profound effects on its clinical effectiveness. Randomized controlled studies are needed to further elucidate this issue.

  16. Comparative effectiveness of intravenous immunoglobulin from different manufacturing processes on Kawasaki disease.

    Science.gov (United States)

    Lin, Ming-Chih

    2014-05-01

    The comparative effectiveness of intravenous immunoglobulin (IVIG) for Kawasaki disease was regarded as inconclusive in the international guidelines. However, several new pieces of evidence have been published in recent years. A literature search of PubMed was conducted using the key words "Kawasaki disease or mucocutaneous lymph node syndrome" and "immunoglobulin" in combination. Only original articles published after 2004 were selected. A total of 813 papers were found in PubMed. These papers were screened manually by their titles and abstracts. Patients treated with IVIG prepared by beta-propiolactonation might have worse outcomes (a higher non-responsive rate in one report and a higher rate of coronary aneurysm in two reports). Storage of IVIG in acidic solution might be correlated with a higher rate of coronary aneurysm (two reports). Different processes of preparation and conditions of preservation of IVIG may have profound effects on its clinical effectiveness. Randomized controlled studies are needed to further elucidate this issue.

  17. Virtually simulated social pressure influences early visual processing more in low compared to high autonomous participants.

    Science.gov (United States)

    Trautmann-Lengsfeld, Sina Alexa; Herrmann, Christoph Siegfried

    2014-02-01

    In a previous study, we showed that virtually simulated social group pressure could influence early stages of perception after only 100 ms. In the present EEG study, we investigated the influence of social pressure on visual perception in participants with high (HA) and low (LA) levels of autonomy. Ten HA and ten LA individuals were asked to accomplish a visual discrimination task in an adapted paradigm of Solomon Asch. Results indicate that LA participants adapted to the incorrect group opinion more often than HA participants (42% vs. 30% of the trials, respectively). LA participants showed a larger posterior P1 component contralateral to targets presented in the right visual field when conforming to the correct compared to conforming to the incorrect group decision. In conclusion, our ERP data suggest that the group context can have early effects on our perception rather than on conscious decision processes in LA, but not HA, participants.

  18. Comparing mesophilic and thermophilic anaerobic digestion of chicken manure: Microbial community dynamics and process resilience

    Energy Technology Data Exchange (ETDEWEB)

    Niu, Qigui; Takemura, Yasuyuki; Kubota, Kengo [Department of Civil and Environmental Engineering, Graduate School of Engineering Tohoku University, 6-6-06 Aza-Aoba, Aramaki, Aoba-ku, Sendai, Miyagi 980-8579 (Japan); Li, Yu-You, E-mail: yyli@epl1.civil.tohoku.ac.jp [Department of Civil and Environmental Engineering, Graduate School of Engineering Tohoku University, 6-6-06 Aza-Aoba, Aramaki, Aoba-ku, Sendai, Miyagi 980-8579 (Japan); Key Lab of Northwest Water Resource, Environment and Ecology, MOE, Xi’an University of Architecture and Technology, Xi’an (China)

    2015-09-15

    Highlights: • Microbial community dynamics and process functional resilience were investigated. • The threshold of TAN in the mesophilic reactor was higher than in the thermophilic reactor. • The recoverable archaeal community dynamics sustained the process resilience. • Methanosarcina was more sensitive than Methanoculleus to ammonia inhibition. • TAN and FA markedly affected the dynamics of hydrolytic and acidogenic bacteria. - Abstract: While methane fermentation is considered the most successful bioenergy treatment for chicken manure, the relationship between operational performance and the dynamic transition of archaeal and bacterial communities remains poorly understood. Two continuous stirred-tank reactors fed with 10% TS were investigated under thermophilic and mesophilic conditions. The tolerance of the thermophilic reactor to total ammonia nitrogen (TAN) was found to be 8000 mg/L with free ammonia (FA) at 2000 mg/L, compared to 16,000 mg/L (FA 1500 mg/L) for the mesophilic reactor. Biomethane production was 0.29 L/g-VS_in in the steady stage and decreased as TAN increased. After serious inhibition, the mesophilic reactor was recovered successfully by a dilution and washing strategy, whereas the thermophilic reactor could not be recovered. The relationship between the microbial community structure, the bioreactor performance and inhibitors such as TAN, FA, and volatile fatty acids was evaluated by canonical correspondence analysis. The methanogenic activity and substrate removal efficiency changed significantly, correlating with the community evenness and phylogenetic structure. A resilient archaeal community was found even after serious inhibition in both reactors. Obvious dynamics of the bacterial communities were observed in acidogenic and hydrolytic functional bacteria following TAN variation in the different stages.

  19. Comparative analysis of specialization in palliative medicine processes within the World Health Organization European region.

    Science.gov (United States)

    Centeno, Carlos; Bolognesi, Deborah; Biasco, Guido

    2015-05-01

    Palliative medicine (PM), still in the development phase, is a new, growing specialty aimed at caring for both oncology and non-oncology patients. There is still confusion about the training offered in the various European PM certification programs. To provide a detailed, comparative update and analysis of the PM certification process in Europe, including the different training approaches and their main features. Experts from each country completed an online survey addressing historical background, program name, training requirements, length of time in training, characteristics and content, official certifying institution, effectiveness of accreditation, and 2013 workforce capacity. We prepared a comparative analysis of the data provided. In 2014, 18 of 53 European countries had official programs on specialization in PM (POSPM): Czech Republic, Denmark, Finland, France, Georgia, Germany, Hungary, Ireland, Israel, Italy, Latvia, Malta, Norway, Poland, Portugal, Romania, Slovakia, and the U.K. Ten of these programs were begun in the last five years. PM is recognized as a "specialty," "subspecialty," or "special area of competence," with no substantial differences between the last two designations. The certification contains the term "palliative medicine" in most countries. Clinical training varies, with one to two years being the most frequent duration. There is a clear trend toward establishing the POSPM as a mandatory condition for obtaining a clinical PM position in countries' respective health systems. PM is growing as a specialization field in Europe. Processes leading to certification are generally long and require substantial clinical training. The POSPM education plans are heterogeneous. The European Association for Palliative Care should commit to establishing common learning standards, leading to additional European-based recognition of expertise in PM. Copyright © 2015 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

  20. Combined compared to dissociated oral and intestinal sucrose stimuli induce different brain hedonic processes

    Science.gov (United States)

    Clouard, Caroline; Meunier-Salaün, Marie-Christine; Meurice, Paul; Malbert, Charles-Henri; Val-Laillet, David

    2014-01-01

    The characterization of brain networks contributing to the processing of oral and/or intestinal sugar signals in a relevant animal model might help to understand the neural mechanisms related to the control of food intake in humans and suggest potential causes for impaired eating behaviors. This study aimed at comparing the brain responses triggered by oral and/or intestinal sucrose sensing in pigs. Seven animals underwent brain single photon emission computed tomography (99mTc-HMPAO) further to oral stimulation with neutral or sucrose artificial saliva paired with saline or sucrose infusion in the duodenum, the proximal part of the intestine. Oral and/or duodenal sucrose sensing induced differential cerebral blood flow changes in brain regions known to be involved in memory, reward processes and hedonic (i.e., pleasure) evaluation of sensory stimuli, including the dorsal striatum, prefrontal cortex, cingulate cortex, insular cortex, hippocampus, and parahippocampal cortex. Sucrose duodenal infusion only and combined sucrose stimulation induced similar activity patterns in the putamen, ventral anterior cingulate cortex and hippocampus. Some brain deactivations in the prefrontal and insular cortices were only detected in the presence of oral sucrose stimulation. Finally, activation of the right insular cortex was only induced by combined oral and duodenal sucrose stimulation, while specific activity patterns were detected in the hippocampus and parahippocampal cortex with oral sucrose dissociated from caloric load. This study sheds new light on the brain hedonic responses to sugar and has potential implications to unravel the neuropsychological mechanisms underlying food pleasure and motivation. PMID:25147536

  1. Comparative assessment of various lipid extraction protocols and optimization of transesterification process for microalgal biodiesel production.

    Science.gov (United States)

    Mandal, Shovon; Patnaik, Reeza; Singh, Amit Kumar; Mallick, Nirupama

    2013-01-01

    Biodiesel, using microalgae as feedstocks, is being explored as the most potent form of alternative diesel fuel for sustainable economic development. A comparative assessment of various protocols for microalgal lipid extraction was carried out using five green algae, six blue-green algae and two diatom species treated with different single and binary solvents, both at room temperature and using a Soxhlet extractor. Lipid recovery was maximum with chloroform-methanol in the Soxhlet extractor. Pretreatments of biomass, such as sonication, homogenization, bead-beating, lyophilization, autoclaving, microwave treatment and osmotic shock, did not register any significant rise in lipid recovery. As lipid recovery using chloroform-methanol at room temperature demonstrated a marginally lower value than that obtained with the Soxhlet extractor, from an economic point of view the former is recommended for microalgal total lipid extraction. The transesterification process enhances the quality of biodiesel. Experiments were designed to determine the effects of catalyst type and quantity, methanol to oil ratio, reaction temperature and time on the transesterification process using response surface methodology. Fatty acid methyl ester yield reached up to 91% with a methanol:HCl:oil molar ratio of 82:4:1 at 65 °C for a 6.4 h reaction time. The biodiesel yield relative to the weight of the oil was found to be 69%.
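Response surface methodology, as used above, fits a second-order polynomial to a small set of designed experimental runs and then locates the factor settings that maximize the predicted response. The following sketch illustrates the idea for two of the factors; the data points are invented for illustration and are not the study's measurements:

```python
import numpy as np

# Hypothetical runs: (methanol:oil molar ratio, reaction time in h) -> FAME yield %
X_raw = np.array([[60, 4], [60, 8], [100, 4], [100, 8], [80, 6],
                  [80, 6], [50, 6], [110, 6], [80, 3], [80, 9]], float)
y = np.array([70, 78, 80, 85, 91, 90, 65, 82, 72, 83], float)
ratio, time = X_raw[:, 0], X_raw[:, 1]

# Second-order (quadratic + interaction) response-surface design matrix
A = np.column_stack([np.ones(len(y)), ratio, time,
                     ratio * time, ratio**2, time**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Evaluate the fitted surface on a grid and take the maximizing settings
rr, tt = np.meshgrid(np.linspace(50, 110, 61), np.linspace(3, 9, 61))
pred = (coef[0] + coef[1] * rr + coef[2] * tt
        + coef[3] * rr * tt + coef[4] * rr**2 + coef[5] * tt**2)
best = np.unravel_index(pred.argmax(), pred.shape)
best_ratio, best_time = rr[best], tt[best]
```

In practice the study fitted more factors (catalyst quantity, temperature) and would use a proper central composite design; the stationary point can also be found analytically from the quadratic coefficients instead of a grid search.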

  2. Comprehensive Comparative Genomic and Transcriptomic Analyses of the Legume Genes Controlling the Nodulation Process.

    Science.gov (United States)

    Qiao, Zhenzhen; Pingault, Lise; Nourbakhsh-Rey, Mehrnoush; Libault, Marc

    2016-01-01

    Nitrogen is one of the most essential plant nutrients and one of the major factors limiting crop productivity. Having the goal to perform a more sustainable agriculture, there is a need to maximize biological nitrogen fixation, a feature of legumes. To enhance our understanding of the molecular mechanisms controlling the interaction between legumes and rhizobia, the symbiotic partner fixing and assimilating the atmospheric nitrogen for the plant, researchers took advantage of genetic and genomic resources developed across different legume models (e.g., Medicago truncatula, Lotus japonicus, Glycine max, and Phaseolus vulgaris) to identify key regulatory protein coding genes of the nodulation process. In this study, we are presenting the results of a comprehensive comparative genomic analysis to highlight orthologous and paralogous relationships between the legume genes controlling nodulation. Mining large transcriptomic datasets, we also identified several orthologous and paralogous genes characterized by the induction of their expression during nodulation across legume plant species. This comprehensive study prompts new insights into the evolution of the nodulation process in legume plant and will benefit the scientific community interested in the transfer of functional genomic information between species.

  3. A comparative study of the bacterial community in denitrifying and traditional enhanced biological phosphorus removal processes.

    Science.gov (United States)

    Lv, Xiao-Mei; Shao, Ming-Fei; Li, Chao-Lin; Li, Ji; Gao, Xin-Lei; Sun, Fei-Yun

    2014-09-17

    Denitrifying phosphorus removal is an attractive wastewater treatment process due to its reduced carbon source demand and sludge minimization potential. Two lab-scale sequencing batch reactors (SBRs) were operated in alternating anaerobic-anoxic (A-A) or anaerobic-oxic (A-O) conditions to achieve denitrifying enhanced biological phosphate removal (EBPR) and traditional EBPR. No significant differences were observed in phosphorus removal efficiencies between A-A SBR and A-O SBR, with phosphorus removal rates being 87.9% and 89.0% respectively. The community structures in denitrifying and traditional EBPR processes were evaluated by high-throughput sequencing of the PCR-amplified partial 16S rRNA genes from each sludge. The results obtained showed that the bacterial community was more diverse in A-O sludge than in A-A sludge. Taxonomy and β-diversity analyses indicated that a significant shift occurred in the dominant microbial community in A-A sludge compared with the seed sludge during the whole acclimation phase, while a slight fluctuation was observed in the abundance of the major taxonomies in A-O sludge. One Dechloromonas-related OTU outside the 4 known Candidatus "Accumulibacter" clades was detected as the main OTU in A-A sludge at the stationary operation, while Candidatus "Accumulibacter" dominated in A-O sludge.

  4. Comparing the teaching-learning process with and without the use of computerized technological resources.

    Science.gov (United States)

    Juliani, Carmen Maria Casquel Monti; Corrente, José Eduardo; Dell'Acqua, Magda Cristina Queiroz

    2011-04-01

    Computerized technological resources have become essential in education, particularly for teaching topics that require the performance of specific tasks. These resources can effectively help the execution of such tasks and the teaching-learning process itself. After the development of a Web site on the topic of nursing staff scheduling, this study aimed at comparing the development of students involved in the teaching-learning process of the previously mentioned topic, with and without the use of computer technology. Two random groups of undergraduate nursing students from a public university in São Paulo state, Brazil, were organized: a case group (used the Web site) and a control group (did not use the Web site). Data were collected from 2003 to 2005 after approval by the Research Ethics Committee. Results showed no significant difference in motivation or knowledge acquisition. A similar performance for the two groups was also verified. Other aspects observed were difficulty in doing the nursing staff scheduling exercise and the students' acknowledgment of the topic's importance for their training and professional lives; easy access was considered to be a positive aspect for maintaining the Web site.

  5. Funding Decisions for Newborn Screening: A Comparative Review of 22 Decision Processes in Europe

    Directory of Open Access Journals (Sweden)

    Katharina Elisabeth Fischer

    2014-05-01

    Full Text Available Decision-makers need to make choices to improve public health. Population-based newborn screening (NBS is considered as one strategy to prevent adverse health outcomes and address rare disease patients’ needs. The aim of this study was to describe key characteristics of decisions for funding new NBS programmes in Europe. We analysed past decisions using a conceptual framework. It incorporates indicators that capture the steps of decision processes by health care payers. Based on an internet survey, we compared 22 decisions for which answers among two respondents were validated for each observation. The frequencies of indicators were calculated to elicit key characteristics. All decisions resulted in positive, mostly unrestricted funding. Stakeholder participation was diverse focusing on information provision or voting. Often, decisions were not fully transparent. Assessment of NBS technologies concentrated on expert opinion, literature review and rough cost estimates. Most important appraisal criteria were effectiveness (i.e., health gain from testing for the children being screened, disease severity and availability of treatments. Some common and diverging key characteristics were identified. Although no evidence of explicit healthcare rationing was found, processes may be improved in respect of transparency and scientific rigour of assessment.

  6. Comparing soil biogeochemical processes in novel and natural boreal forest ecosystems

    Science.gov (United States)

    Quideau, S. A.; Swallow, M. J. B.; Prescott, C. E.; Grayston, S. J.; Oh, S.-W.

    2013-08-01

    Emulating the variability that exists in the natural landscape prior to disturbance should be a goal of soil reconstruction and land reclamation efforts following resource extraction. Long-term ecosystem sustainability within reclaimed landscapes can only be achieved with the re-establishment of biogeochemical processes between reconstructed soils and plants. In this study, we assessed key soil biogeochemical attributes (nutrient availability, organic matter composition, and microbial communities) in reconstructed, novel, anthropogenic ecosystems, covering different reclamation treatments following open-cast mining for oil extraction. We compared the attributes to those present in a range of natural soils representative of mature boreal forest ecosystems in the same area of Northern Alberta. Soil nutrient availability was determined in situ with resin probes, organic matter composition was described with 13C nuclear magnetic resonance spectroscopy and soil microbial community structure was characterized using phospholipid fatty acid analysis. Significant differences among natural ecosystems were apparent in nutrient availability and seemed more related to the dominant tree cover than to soil type. When analyzed together, all natural forests differed significantly from the novel ecosystems, in particular with respect to soil organic matter composition. However, there was some overlap between the reconstructed soils and some of the natural ecosystems in nutrient availability and microbial communities, but not in organic matter characteristics. Hence, our results illustrate the importance of considering the range of natural landscape variability and including several soil biogeochemical attributes when comparing novel, anthropogenic ecosystems to the mature ecosystems that constitute ecological targets.

  7. Comparing soil biogeochemical processes in novel and natural boreal forest ecosystems

    Directory of Open Access Journals (Sweden)

    S. A. Quideau

    2013-08-01

    Emulating the variability that exists in the natural landscape prior to disturbance should be a goal of soil reconstruction and land reclamation efforts following resource extraction. Long-term ecosystem sustainability within reclaimed landscapes can only be achieved with the re-establishment of biogeochemical processes between reconstructed soils and plants. In this study, we assessed key soil biogeochemical attributes (nutrient availability, organic matter composition, and microbial communities) in reconstructed, novel, anthropogenic ecosystems, covering different reclamation treatments following open-cast mining for oil extraction. We compared the attributes to those present in a range of natural soils representative of mature boreal forest ecosystems in the same area of Northern Alberta. Soil nutrient availability was determined in situ with resin probes, organic matter composition was described with 13C nuclear magnetic resonance spectroscopy and soil microbial community structure was characterized using phospholipid fatty acid analysis. Significant differences among natural ecosystems were apparent in nutrient availability and seemed more related to the dominant tree cover than to soil type. When analyzed together, all natural forests differed significantly from the novel ecosystems, in particular with respect to soil organic matter composition. However, there was some overlap between the reconstructed soils and some of the natural ecosystems in nutrient availability and microbial communities, but not in organic matter characteristics. Hence, our results illustrate the importance of considering the range of natural landscape variability and including several soil biogeochemical attributes when comparing novel, anthropogenic ecosystems to the mature ecosystems that constitute ecological targets.

  9. Agent-Based vs. Equation-Based Epidemiological Models: A Model Selection Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Sukumar, Sreenivas R [ORNL; Nutaro, James J [ORNL

    2012-01-01

    This paper is motivated by the need to design model validation strategies for epidemiological disease-spread models. We consider both agent-based and equation-based models of pandemic disease spread and study the nuances and complexities one has to consider from the perspective of model validation. For this purpose, we instantiate an equation-based model and an agent-based model of the 1918 Spanish flu, and we leverage data published in the literature for our case study. We present our observations from the perspective of each implementation and discuss the application of model-selection criteria to compare the risk of choosing one modeling paradigm over another. We conclude with a discussion of our experience and document future ideas for a model validation framework.
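
    The equation-based side of such a comparison can be sketched with a deterministic SIR model. The parameter values and initial conditions below are illustrative only, not the 1918-flu values used in the paper:

```python
# Deterministic SIR model integrated with forward Euler.
# beta, gamma and the initial conditions are illustrative values,
# not the 1918 Spanish-flu parameters used in the paper.

def sir(beta, gamma, s0, i0, r0, dt, steps):
    s, i, r = s0, i0, r0
    history = []
    for _ in range(steps):
        n = s + i + r
        new_infections = beta * s * i / n * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

# Basic reproduction number R0 = beta / gamma = 2.
history = sir(beta=0.4, gamma=0.2, s0=990.0, i0=10.0, r0=0.0, dt=0.1, steps=1000)
```

    An agent-based counterpart would replace these aggregate rates with per-individual contact and recovery events, which is where the validation nuances discussed above arise.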

  10. Comparative transcriptomics of elasmobranchs and teleosts highlight important processes in adaptive immunity and regional endothermy.

    Science.gov (United States)

    Marra, Nicholas J; Richards, Vincent P; Early, Angela; Bogdanowicz, Steve M; Pavinski Bitar, Paulina D; Stanhope, Michael J; Shivji, Mahmood S

    2017-01-30

    Comparative genomic and/or transcriptomic analyses involving elasmobranchs remain limited, with genome level comparisons of the elasmobranch immune system to that of higher vertebrates, non-existent. This paper reports a comparative RNA-seq analysis of heart tissue from seven species, including four elasmobranchs and three teleosts, focusing on immunity, but concomitantly seeking to identify genetic similarities shared by the two lamnid sharks and the single billfish in our study, which could be linked to convergent evolution of regional endothermy. Across seven species, we identified an average of 10,877 Swiss-Prot annotated genes from an average of 32,474 open reading frames within each species' heart transcriptome. About half of these genes were shared between all species while the remainder included functional differences between our groups of interest (elasmobranch vs. teleost and endotherms vs. ectotherms) as revealed by Gene Ontology (GO) and selection analyses. A repeatedly represented functional category, in both the uniquely expressed elasmobranch genes (total of 259) and the elasmobranch GO enrichment results, involved antibody-mediated immunity, either in the recruitment of immune cells (Fc receptors) or in antigen presentation, including such terms as "antigen processing and presentation of exogenous peptide antigen via MHC class II", and such genes as MHC class II, HLA-DPB1. Molecular adaptation analyses identified three genes in elasmobranchs with a history of positive selection, including legumain (LGMN), a gene with roles in both innate and adaptive immunity including producing antigens for presentation by MHC class II. Comparisons between the endothermic and ectothermic species revealed an enrichment of GO terms associated with cardiac muscle contraction in endotherms, with 19 genes expressed solely in endotherms, several of which have significant roles in lipid and fat metabolism. This collective comparative evidence provides the first multi

  11. Effects of Parceling on Model Selection: Parcel-Allocation Variability in Model Ranking.

    Science.gov (United States)

    Sterba, Sonya K; Rights, Jason D

    2016-01-25

    Research interest often lies in comparing structural model specifications implying different relationships among latent factors. In this context parceling is commonly accepted, assuming the item-level measurement structure is well known and, conservatively, assuming items are unidimensional in the population. Under these assumptions, researchers compare competing structural models, each specified using the same parcel-level measurement model. However, little is known about the consequences of parceling for model selection in this context, including whether and when model ranking could vary across alternative item-to-parcel allocations within-sample. This article first provides a theoretical framework that predicts the occurrence of parcel-allocation variability (PAV) in model selection index values and its consequences for PAV in the ranking of competing structural models. These predictions are then investigated via simulation. We show that conditions known to manifest PAV in the absolute fit of a single model may or may not manifest PAV in model ranking. Thus, one cannot assume that low PAV in absolute fit implies a lack of PAV in ranking, and vice versa. PAV in ranking is shown to occur under a variety of conditions, including large samples. To provide an empirically supported strategy for selecting a model when PAV in ranking exists, we draw on relationships between structural model rankings in parcel- versus item-level solutions. This strategy employs the across-allocation modal ranking. We developed software tools for implementing this strategy in practice, and illustrate them with an example. Even if a researcher has substantive reason to prefer one particular allocation, investigating PAV in ranking within-sample still provides an informative sensitivity analysis.
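
    The across-allocation modal-ranking idea can be sketched as follows. The fit index here is a random stub standing in for a real model-fit criterion (e.g., a BIC from fitting each structural model under one item-to-parcel allocation), and the model names are hypothetical:

```python
# Sketch of the across-allocation modal-ranking strategy for parcel-allocation
# variability (PAV). fit_index() is a stand-in stub; a real implementation
# would fit each structural model to the parceled data and return, e.g., BIC.
import random
from collections import Counter

random.seed(1)

def random_allocation(items, n_parcels):
    """Randomly assign items to parcels (one allocation)."""
    shuffled = items[:]
    random.shuffle(shuffled)
    return [shuffled[p::n_parcels] for p in range(n_parcels)]

def fit_index(model, allocation):
    """Stub: replace with a real fit criterion for `model` under `allocation`."""
    return random.random() + (0.1 if model == "M1" else 0.0)

def modal_ranking(models, items, n_parcels, n_allocations=100):
    """Rank models under many allocations; return the most frequent ranking."""
    rankings = Counter()
    for _ in range(n_allocations):
        alloc = random_allocation(items, n_parcels)
        ranked = tuple(sorted(models, key=lambda m: fit_index(m, alloc)))
        rankings[ranked] += 1
    return rankings.most_common(1)[0][0]

best = modal_ranking(["M1", "M2"], items=list(range(12)), n_parcels=3)
```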

  12. Convergence on Self-Generated vs. Crowdsourced Ideas in Crisis Response: Comparing Social Exchange Processes and Satisfaction with Process

    DEFF Research Database (Denmark)

    Seeber, Isabella; Merz, Alexander B.; Maier, Ronald

    2017-01-01

    Social media allow crowds to generate many ideas to swiftly respond to events like crises, public policy discourse, or online town hall meetings. This allows organizations and governments to harness the innovative power of the crowd. As part of this setting, teams that process crowd ideas must engage in social exchange processes to converge on a few promising ideas. Traditionally, teams work on self-generated ideas. However, in a crowdsourcing scenario, such as public participation in crisis response, teams may have to process crowd-generated ideas. To better understand this new practice, it is important to investigate how converging on crowdsourced ideas affects the social exchange processes of teams and resulting outcomes. We conducted a laboratory experiment in which small teams working in a crisis response setting converged on self-generated or crowdsourced ideas in an emergency response...

  13. Life Cycle Assessment (LCA) used to compare two different methods of ripe table olive processing

    Directory of Open Access Journals (Sweden)

    Russo, Carlo

    2010-06-01

    The aim of the present study is to analyze the most common method used for processing ripe table olives: the "California style". Life Cycle Assessment (LCA) was applied to detect the "hot spots" of the system under examination. The LCA results also allowed us to compare the traditional "California style", here called "method A", with another "California style", here called "method B". We were interested in this latter method, because the European Union is considering introducing it into the product specification of the Protected Denomination of Origin (PDO) "La Bella della Daunia". It was also possible to compare the environmental impacts of the two "California style" methods with those of the "Spanish style" method. From the comparison it is clear that "method B" has a greater environmental impact than "method A" because greater amounts of water and electricity are required, whereas "Spanish style" processing has a lower environmental impact than the "California style" methods.

    The objective of this study is to analyze the most common method used for processing ripe table olives, the "California style". The LCA methodology was applied to detect the hot spots of the system under study. The LCA results also allowed us to compare the traditional California style, here called "method A", with another California style, called "method B". We were interested in the second method because the European Union is considering introducing it into the Protected Denomination of Origin (PDO) "La Bella della Daunia". It was also possible to compare the environmental impacts of the two California-style methods with the impacts of the Spanish-style method. From the comparison, it is clear that "method B" has a greater environmental impact than "method A", because the latter requires larger amounts of water and

  14. Comparing ecohydrological processes in alien vs. native ranges: perspectives from the endangered shrub Myricaria germanica

    Science.gov (United States)

    Michielon, Bruno; Campagnaro, Thomas; Porté, Annabel; Hoyle, Jo; Picco, Lorenzo; Sitzia, Tommaso

    2017-04-01

    Comparing the ecology of woody species in their alien and native ranges may provide interesting insights for theoretical ecology, invasion biology, restoration ecology and forestry. The literature describing the biological evolution of successful plant invaders is rich and growing. However, no general theories have been developed about the geomorphic settings that may limit or favour the expansion of alien woody species along rivers. The aim of this contribution is to explore the research opportunities in comparing the ecohydrological processes occurring in the alien vs. the native ranges of invasive tree and shrub species along the riverine corridor, using the endangered shrub Myricaria germanica as an example. Myricaria germanica is a Euro-Asiatic pioneer species that, in its native range, develops along wide, dynamic natural rivers; these conditions are increasingly limited by anthropogenic constraints in most European rivers. The species has recently been introduced to New Zealand, where it is spreading along some natural rivers of the Canterbury region (South Island). We present the current knowledge about the natural and anthropogenic factors influencing this species in its native range, and compare this information with the current knowledge about the same factors influencing M. germanica invasiveness and the invasibility of riparian habitats in New Zealand. We stress the need to identify potential factors that could drive divergence in life traits and growth strategies, which may hinder the application of existing ecohydrological knowledge from native ranges to the alien ranges. Moreover, the pattern of expansion of the alien range of species endangered in their native ranges opens new windows for research.

  15. Comparing riparian forest processes on large rivers to inform floodplain management and restoration

    Science.gov (United States)

    Stella, J. C.; Piegay, H.; Gruel, C.; Riddle, J.; Raepple, B.

    2014-12-01

    In populous, water-limited regions, humans have profoundly altered the river and floodplain environment to satisfy society's demands for water, power, navigation and safety. River management also profoundly alters riparian forests, which respond to changes in disturbance regimes and sediment dynamics. In this study, we compare forest and floodplain development along two of the most heavily modified rivers in mediterranean-climate regions, the middle Sacramento (California, USA) and the lower Rhône (SE France). The Sacramento was dammed in 1942 and is now managed for irrigation, hydropower and flood control. The Rhône channel was engineered for navigation prior to 1900, and since then has been dammed and diverted at 18 sites for hydropower and irrigation. We conducted extensive forest inventories and sampled fine sediment depth in regulated reaches within both systems, and compared pre- versus post-dam patterns of deposition and linked forest development. We sampled 441 plots (500 m2 each) along 160 km of the Sacramento, and 88 plots (1256 m2) stratified by management epoch (pre-river engineering, pre-dam, post-dam) along 160 km of the Rhône. On the Sacramento, forest composition showed shifting tree species dominance across a chronosequence of aerial photo dates over 110 years. The transition from willow to cottonwood (Populus) occurred within 20 years, and the transition to mixed forest started after 50-60 years. On the Rhône, the pre- versus post-dam surfaces at each site had distinct geomorphic and floristic characteristics. Floodplain areas that emerged and were forested in the pre-dam period were at higher elevation, and supported 30-50% more basal area, 20-30% more vine cover, and greater plant species diversity than those that emerged in the post-dam period. The shift from Populus dominance to other species began approximately a decade earlier on the Rhône compared to the Sacramento. Both rivers showed a strong understory presence on young floodplains

  16. Processing, validating, and comparing DEMs for geomorphic application on the Puna de Atacama Plateau, northwest Argentina

    Science.gov (United States)

    Purinton, Benjamin; Bookhagen, Bodo

    2016-04-01

    This study analyzes multiple topographic datasets derived from various remote-sensing methods from the Pocitos Basin of the central Puna Plateau in northwest Argentina at the border to Chile. Here, the arid climate, clear atmospheric conditions, and lack of vegetation provide ideal conditions for remote sensing and Digital Elevation Model (DEM) comparison. We compare the following freely available DEMs: SRTM-X (spatial resolution of ~30 m), SRTM-C v4.1 (90 m), and ASTER GDEM2 (30 m). Additional DEMs for comparison are generated from optical and radar datasets acquired freely (ASTER Level 1B stereo pairs and Sentinel-1A radar), through research agreements (RapidEye Level 1B scenes, ALOS radar, and ENVISAT radar), and through commercial sources (TerraSAR-X / TanDEM-X radar). DEMs from ASTER (spatial resolution of 15 m) and RapidEye (~5-10 m) optical datasets are produced by standard photogrammetric techniques and have been post-processed for validation and alignment purposes. Because RapidEye scenes are captured at a low incidence angle (validated against over 400,000 differential GPS (dGPS) measurements gathered during four field campaigns in 2012 and 2014 to 2016. Of these points, more than 250,000 lie within the Pocitos Basin with average vertical and horizontal accuracies of 0.95 m and 0.69 m, respectively. Dataset accuracy is judged by the lowest standard deviations of elevation compared with the dGPS data and with the SRTM-X control DEM. Of particular interest in the field of quantitative geomorphology are topometrics (e.g., relief, channel steepness, and hillslope concavity) derived from the DEMs. The accuracy of these metrics is partly dependent on the overall DEM accuracy, but also on the accuracy of the depiction of the river network (a small areal fraction of the DEM). In addition, several topometrics depend on the first and second derivative of elevation (slope and curvature), which are affected by DEM accuracy and noise. In light of these issues
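
    The vertical-accuracy statistics used to judge such datasets (bias, standard deviation, and RMSE of DEM-minus-dGPS differences) can be sketched as follows; the sample heights are invented for illustration:

```python
# Sketch: vertical-accuracy statistics of a DEM against dGPS check points.
# The height lists are invented; a real workflow would interpolate the DEM
# raster at each dGPS coordinate before differencing.
import math

def accuracy_stats(dem_heights, dgps_heights):
    """Mean error (bias), standard deviation, and RMSE of DEM-minus-dGPS."""
    diffs = [d - g for d, g in zip(dem_heights, dgps_heights)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((x - mean) ** 2 for x in diffs) / n
    rmse = math.sqrt(sum(x * x for x in diffs) / n)
    return mean, math.sqrt(var), rmse

bias, sd, rmse = accuracy_stats([101.2, 99.5, 100.8], [100.0, 100.0, 100.0])
```

    Note that RMSE folds the bias into a single number, while the standard deviation isolates the scatter, which is why the study reports the latter separately against the SRTM-X control DEM.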

  17. Comparative study of two theoretical models of methane and ethane steam reforming process

    Science.gov (United States)

    Brus, Grzegorz; Kaczmarczyk, Marcin; Tomiczek, Robert; Mozdzierz, Marcin

    2016-09-01

    From the chemical point of view, the reforming of heavy hydrocarbons such as Associated Petroleum Gas (APG) is very complex. One of the main issues is a set of undesired chemical reactions that cause deposition of solid carbon and consequently block the catalytic properties of the reactor. Experimental investigation is crucial to design APG reforming reactors. However, the experiment needs to be preceded by a careful thermodynamic analysis to establish safe operating conditions. When the number of reactants and reactions is small, as in steam reforming of pure methane, the problem can be solved by treating each equilibrium reaction constant as an element of a system of non-linear equations, which can be solved by the Newton-Raphson method. However, for a large number of reactants and reactions, as in APG reforming, this method is inefficient: a large system of strongly non-linear equations often leads to convergence problems. In this paper the authors suggest a different approach, called the Parametric Equation Method, in which the system of non-linear equations is replaced by a set of single non-linear equations solved separately. Both methods were used to simulate steam reforming of a methane-ethane-rich fuel, their results were juxtaposed, and a comparative study was conducted. Finally, safe operating conditions for steam reforming of methane-ethane fuel were calculated and presented.
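
    The Newton-Raphson approach mentioned above can be sketched for a small system. The two coupled equations below are a toy stand-in, not the actual reforming equilibria:

```python
# Newton-Raphson solution of a small nonlinear system, as used for coupled
# equilibrium-constant equations when the number of reactions is small.
# The system below is a toy example, not the actual reforming equilibria.

def newton_raphson(f, jac, x0, tol=1e-10, max_iter=50):
    x = list(x0)
    for _ in range(max_iter):
        fx = f(x)
        if max(abs(v) for v in fx) < tol:
            return x
        # Solve the 2x2 linear system J * dx = -f(x) by Cramer's rule.
        (a, b), (c, d) = jac(x)
        det = a * d - b * c
        dx0 = (-fx[0] * d + fx[1] * b) / det
        dx1 = (-fx[1] * a + fx[0] * c) / det
        x = [x[0] + dx0, x[1] + dx1]
    return x

# Toy coupled "equilibria": x^2 * y = 2 and x + y = 3.
f = lambda x: [x[0] ** 2 * x[1] - 2.0, x[0] + x[1] - 3.0]
jac = lambda x: [[2 * x[0] * x[1], x[0] ** 2], [1.0, 1.0]]
root = newton_raphson(f, jac, [1.0, 1.0])
```

    With many reactions the Jacobian grows, a general linear solver replaces Cramer's rule, and poor starting points cause exactly the convergence failures the abstract describes.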

  18. Comparing Process-Based Net Primary Productivity Models in a Mediterranean Watershed

    Science.gov (United States)

    Donmez, C.; Berberoglu, S.; Forrest, M.; Cilek, A.; Hickler, T.

    2013-10-01

    The aim of this study was to compare the estimation capability of two process-based NPP models (CASA and LPJ-GUESS) in a Mediterranean watershed. Remotely sensed data and climate time series (temperature, precipitation and solar radiation) were input to these models for the Goksu River Basin, located in the eastern Mediterranean part of Turkey. The comparison of the models was based on output variables, divided into three groups: (i) spatially interpolated total NPP estimations, (ii) NPP distribution of land cover classes, and (iii) annual and monthly NPP variations. The two model approaches were evaluated for their capability to capture the relationship between annual/monthly NPP and major climatic variables, and the effect of vegetation distribution on model accuracy was examined. The uncertainties of the CASA and LPJ-GUESS models were evaluated by incorporating remotely sensed data, percent tree cover and ground measurements. The differences between model outputs were used to guide improved modelling strategies based on remotely sensed data and other input parameters.

  19. The town in Serbia and Bulgaria: A comparative reading of current processes. Introduction

    Directory of Open Access Journals (Sweden)

    Zlatanović Sanja

    2015-01-01

    The topic of this volume results from The Contemporary City in Serbia and Bulgaria: Processes and Changes, a bilateral project of the Institute of Ethnography of the Serbian Academy of Sciences and Arts and the Institute of Ethnology and Folklore Studies with Ethnographic Museum of the Bulgarian Academy of Sciences (2014-2016). The six papers offer a comparative view of current social processes in two neighbouring Balkan countries, linked by numerous historical and political experiences. Comparative research into societal trends enables a more thorough understanding and monitoring of global processes. In today’s increasingly globalised and glocalised world, towns experience sudden changes and it is in the towns that these changes are most vividly to be seen. The focus of our research is on the dynamism of the contemporary town, on processuality and changes in societal practices. Ana Luleva examines life in the small town of Nessebar in southeast Bulgaria, which has been on the UNESCO World Heritage list since 1983. The protection, management and presentation of Nessebar’s cultural heritage are highly complex issues, further complicated by the problem of collision with the interests of the inhabitants. The author analyses the relations between the various factors - the state administration, municipal authorities and the local population. Here the tourist industry, investment interests, corrupt institutions and civil society all play their part. Ivanka Petrova chose to research Belogradchik, a small town in northwest Bulgaria. Petrova investigates how local social and cultural resources are used in the work of a family tourist enterprise. The author looks for answers to questions such as: how its members identify with the town and its culture and how the work of the enterprise fits into the Belogradchik local context. At the focus of her paper are current societal practices: the local urban economy and the production of images and symbols

  20. Model selection, identification and validation in anaerobic digestion: a review.

    Science.gov (United States)

    Donoso-Bravo, Andres; Mailier, Johan; Martin, Cristina; Rodríguez, Jorge; Aceves-Lara, César Arturo; Vande Wouwer, Alain

    2011-11-01

    Anaerobic digestion enables waste (water) treatment and energy production in the form of biogas. The successful implementation of this process has led to an increasing interest worldwide. However, anaerobic digestion is a complex biological process, where hundreds of microbial populations are involved, and whose start-up and operation are delicate issues. In order to better understand the process dynamics and to optimize the operating conditions, the availability of dynamic models is of paramount importance. Such models have to be inferred from prior knowledge and experimental data collected from real plants. Modeling and parameter identification are vast subjects, offering a realm of approaches and methods, which can be difficult to master for scientists and engineers dedicated to plant operation and improvement. This review article discusses existing modeling frameworks and methodologies for parameter estimation and model validation in the field of anaerobic digestion processes. The point of view is pragmatic, intentionally focusing on simple but efficient methods.
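
    As a minimal illustration of the parameter-identification step (real anaerobic-digestion models such as ADM1 are far richer), one can fit the decay constant of a first-order substrate model by log-linear least squares:

```python
# Minimal parameter-identification sketch: fit the decay constant k of a
# first-order substrate model S(t) = S0 * exp(-k * t) by least squares on
# ln(S) vs t. This only illustrates the identification step, not a real
# anaerobic-digestion model.
import math

def fit_first_order(times, concentrations):
    """Least-squares slope of ln(S) vs t gives -k; the intercept gives ln(S0)."""
    ys = [math.log(c) for c in concentrations]
    n = len(times)
    t_mean = sum(times) / n
    y_mean = sum(ys) / n
    slope = sum((t - t_mean) * (y - y_mean) for t, y in zip(times, ys)) / \
            sum((t - t_mean) ** 2 for t in times)
    return -slope, math.exp(y_mean - slope * t_mean)

# Synthetic, noise-free data generated with k = 0.3, S0 = 10.
ts = [0.0, 1.0, 2.0, 4.0, 8.0]
ss = [10.0 * math.exp(-0.3 * t) for t in ts]
k_hat, s0_hat = fit_first_order(ts, ss)
```

    Model validation in the sense of the review would then test the fitted model against data not used in the identification.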

  1. Quantile hydrologic model selection and model structure deficiency assessment: 1. Theory

    NARCIS (Netherlands)

    Pande, S.

    2013-01-01

    A theory for quantile based hydrologic model selection and model structure deficiency assessment is presented. The paper demonstrates that the degree to which a model selection problem is constrained by the model structure (measured by the Lagrange multipliers of the constraints) quantifies

  3. Consistent and Conservative Model Selection with the Adaptive LASSO in Stationary and Nonstationary Autoregressions

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl

    2015-01-01

    the tuning parameter by Bayesian Information Criterion (BIC) results in consistent model selection. However, it is also shown that the adaptive Lasso has no power against shrinking alternatives of the form c/T if it is tuned to perform consistent model selection. We show that if the adaptive Lasso is tuned...
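
    A minimal sketch of BIC-tuned adaptive Lasso selection, in the easy orthonormal-design case where the solution reduces to closed-form soft-thresholding (the data are synthetic; real applications, including the autoregressions studied here, require a full solver):

```python
# Adaptive Lasso with the penalty tuned by BIC, sketched for the orthonormal
# design case (X.T @ X = I), where the solution is closed-form soft-
# thresholding of the OLS estimates. Data are synthetic.
import numpy as np

def adaptive_lasso_bic(X, y, lambdas):
    n = len(y)
    b_ols = X.T @ y                      # OLS under orthonormal design
    w = 1.0 / np.abs(b_ols)              # adaptive weights
    best_bic, best_beta = np.inf, None
    for lam in lambdas:
        beta = np.sign(b_ols) * np.maximum(np.abs(b_ols) - lam * w, 0.0)
        rss = np.sum((y - X @ beta) ** 2)
        k = np.count_nonzero(beta)
        bic = n * np.log(rss / n) + k * np.log(n)
        if bic < best_bic:
            best_bic, best_beta = bic, beta
    return best_beta

rng = np.random.default_rng(42)
n, p = 100, 4
Q, _ = np.linalg.qr(rng.normal(size=(n, p)))   # orthonormal columns
beta_true = np.array([3.0, 0.0, -2.0, 0.0])
y = Q @ beta_true + 0.1 * rng.normal(size=n)
beta_hat = adaptive_lasso_bic(Q, y, lambdas=np.linspace(0.01, 2.0, 50))
```

    BIC tuning tends to zero out the two null coefficients while keeping the two true signals, which is the consistent-selection behaviour the abstract describes.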

  4. Evaluation methodology for comparing memory and communication of analytic processes in visual analytics

    Energy Technology Data Exchange (ETDEWEB)

    Ragan, Eric D [ORNL; Goodall, John R [ORNL

    2014-01-01

    Provenance tools can help capture and represent the history of analytic processes. In addition to supporting analytic performance, provenance tools can be used to support memory of the process and communication of the steps to others. Objective evaluation methods are needed to evaluate how well provenance tools support analysts' memory and communication of analytic processes. In this paper, we present several methods for the evaluation of process memory, and we discuss the advantages and limitations of each. We discuss methods for determining a baseline process for comparison, and we describe various methods that can be used to elicit process recall, step ordering, and time estimations. Additionally, we discuss methods for conducting quantitative and qualitative analyses of process memory. By organizing possible memory evaluation methods and providing a meta-analysis of the potential benefits and drawbacks of different approaches, this paper can inform study design and encourage objective evaluation of process memory and communication.

  5. Model selection in time series studies of influenza-associated mortality.

    Directory of Open Access Journals (Sweden)

    Xi-Ling Wang

    BACKGROUND: Poisson regression modeling has been widely used to estimate influenza-associated disease burden, as it has the advantage of adjusting for multiple seasonal confounders. However, few studies have discussed how to judge the adequacy of confounding adjustment. This study aims to compare the performance of commonly adopted model selection criteria in terms of providing a reliable and valid estimate for the health impact of influenza. METHODS: We assessed four model selection criteria: quasi Akaike information criterion (QAIC), quasi Bayesian information criterion (QBIC), partial autocorrelation functions of residuals (PACF), and generalized cross-validation (GCV), by separately applying them to select the Poisson model best fitted to the mortality datasets that were simulated under different assumptions of seasonal confounding. The performance of these criteria was evaluated by the bias and root-mean-square error (RMSE) of estimates relative to the pre-determined coefficients of the influenza proxy variable. These four criteria were subsequently applied to an empirical hospitalization dataset to confirm the findings of the simulation study. RESULTS: GCV consistently provided smaller biases and RMSEs for the influenza coefficient estimates than QAIC, QBIC and PACF under the different simulation scenarios. Sensitivity analysis of different pre-determined influenza coefficients, study periods and lag weeks showed that GCV consistently outperformed the other criteria. Similar results were found in applying these selection criteria to estimate influenza-associated hospitalization. CONCLUSIONS: The GCV criterion is recommended for selection of Poisson models to estimate influenza-associated mortality and morbidity burden with proper adjustment for confounding. These findings shall help standardize the Poisson modeling approach for influenza disease burden studies.
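
    The model-comparison step can be sketched with ordinary AIC on synthetic data. The study itself favours GCV and quasi-likelihood criteria such as QAIC; this minimal numpy version only illustrates how candidate Poisson models are fitted and ranked:

```python
# Two candidate Poisson regressions (with and without an influenza proxy)
# fitted by Newton-Raphson (IRLS) and ranked by AIC on synthetic data.
import numpy as np

def fit_poisson(X, y, n_iter=50):
    """Newton-Raphson fit of a log-link Poisson regression."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        # Poisson variance equals the mean, so the IRLS weights are mu.
        beta = beta + np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))
    return beta

def aic(X, y, beta):
    mu = np.exp(X @ beta)
    # Log-likelihood up to sum(log(y!)), a constant that cancels in comparisons.
    loglik = np.sum(y * np.log(mu) - mu)
    return 2 * X.shape[1] - 2 * loglik

rng = np.random.default_rng(0)
flu = rng.poisson(5.0, size=200).astype(float)           # influenza proxy
y = rng.poisson(np.exp(1.0 + 0.05 * flu)).astype(float)  # true effect: 0.05
X_null = np.ones((200, 1))
X_flu = np.column_stack([np.ones(200), flu])
candidates = {"null": X_null, "flu": X_flu}
best = min(candidates,
           key=lambda m: aic(candidates[m], y, fit_poisson(candidates[m], y)))
```

    A quasi-likelihood variant (QAIC) would divide the log-likelihood by an estimated overdispersion parameter before applying the same ranking.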

  6. COMPARATIVE STUDY OF THE USE OF ICT IN ENGLISH TEACHING-LEARNING PROCESSES

    Directory of Open Access Journals (Sweden)

    Abbas ZARE-EE

    2010-04-01

    Full Text Available The use of Information Communication Technologies (ICT in cultural, political, social, economic, and academic activities has recently attracted the attention of many researchers and it should now be an important component of the comparative study of education. The present study was conducted to compare the amount and quality of ICT use in English teaching-learning processes among the faculty members of Medical and Non-medical Universities in Kashan, Iran and to explore the dimensions in which the two groups can benefit from one another and from ICT training in this respect. Out of a total of 255 full-time university teachers teaching at medical and no-medical universities in the region, 193 were chosen to participate in the study using a simple random sampling technique and the Morgan & Kritjki table for sample selection. A researcher-made 5-point Likert scale questionnaire containing 50 items was used to collect the necessary data on the amount of access and use ICT in the two environments. The Chronbach Alfa reliability for this instrument was shown to be 0.8. To answer the research questions, t-test and the analysis of variance were used and the differences in ICT use for learning and teaching were analyzed. The results of the analyses showed that there was a significant difference in the amount of ICT use among the faculty members of medical and non-medical universities. For reason considered in length, teachers at medical universities used ICT significantly less than the other group. Results also indicated that there was a significant difference between the two types of universities with regard to the availability of computers and the amount of ICT training and use. No significant effects on the use of ICT in education were observed for age, teaching experience, and university degree. University teachers with different fields of study showed significant differences only in non-medical universities. Based on the findings of the study

  7. COMPARATIVE ANALYSIS OF SOLID INK DENSITY, PRINT CONTRAST AND PRINT GLOSS OF METALIZED BOARD PRINTED WITH SHEET FED OFFSET PRINTING PROCESS AND DRY TONER BASED DIGITAL PRINTING PROCESS

    OpenAIRE

    Aman Bhardwaj*, Vandana

    2016-01-01

    Metalized boards are frequently used in the packaging industry. In our study, we compare the print properties of metalized board printed with a primer coat on a sheet-fed offset press and with a dry-toner-based digital printing process. Metalized boards give good print properties when printed with the digital printing process for short-run jobs. Comparatively higher print contrast is achieved at lower solid ink density in digital printing.

  8. Earlier timbre processing of instrumental tones compared to equally complex spectrally rotated sounds as revealed by the mismatch negativity.

    Science.gov (United States)

    Christmann, Corinna A; Lachmann, Thomas; Berti, Stefan

    2014-10-03

    Harmonically rich sounds have been shown to be processed more efficiently by the human brain compared to single sinusoidal tones. To control for stimulus complexity as a potentially confounding factor, tones and equally complex spectrally rotated sounds have been used in the present study to investigate the role of the overtone series in sensory auditory processing in non-musicians. Timbre differences in instrumental tones with equal pitch elicited an MMN that was earlier compared to that elicited by the spectrally rotated sounds, indicating that harmonically rich tones are processed faster than non-musical sounds without an overtone series, even when pitch is not the relevant information.

  9. The role of multiple-point statistics and model selection in quantitative hydrogeophysical studies of the critical zone

    Science.gov (United States)

    Linde, N.

    2015-12-01

    Geophysical data are routinely used to provide qualitative insights about the main lithologies and the distribution of soil moisture in the critical zone. Quantitative hydrogeophysical inferences of critical zone properties and processes are much more challenging because of the multitude of interacting physical, biological and chemical gradients that may affect the geophysical measurement response. In this context, it is essential to incorporate the geophysical data within a wider modeling framework that centers on a conceptual model describing the properties and processes under study together with appropriate boundary conditions. Based on recent groundwater applications, I describe how it is now possible to build geologically meaningful realizations of subsurface structure using multiple-point statistics (MPS) and to make uncertainty estimates. I will demonstrate conditioning of MPS simulations to geophysical tomograms, inclusion of summary statistics derived from MPS simulations within a Markov chain Monte Carlo (MCMC) inversion, and full MPS MCMC inversion based on fast (speed-up of 40 times) model proposal algorithms that we have adapted from computer vision. For future applications in the critical zone, I suggest that MPS simulations should be used to derive and perturb primary lithological properties and that biological, chemical, and hydrological state variables (given appropriate boundary conditions) be subsequently simulated using domain-specific algorithms. The geophysical data (an individual snapshot or a time series) are then used to guide the model update of the primary properties (and nuisance parameters such as petrophysical parameters), which in turn influence the predicted state variables and their associated fluxes. Instead of classical parameter estimation, I argue that it is often more appropriate to focus on model selection, in which alternative conceptual models of the subsurface are compared and ranked given the available data.

  10. Modeling selective attention using a neuromorphic analog VLSI device.

    Science.gov (United States)

    Indiveri, G

    2000-12-01

    Attentional mechanisms are required to overcome the problem of flooding a limited processing capacity system with information. They are present in biological sensory systems and can be a useful engineering tool for artificial visual systems. In this article we present a hardware model of a selective attention mechanism implemented on a very large-scale integration (VLSI) chip, using analog neuromorphic circuits. The chip exploits a spike-based representation to receive, process, and transmit signals. It can be used as a transceiver module for building multichip neuromorphic vision systems. We describe the circuits that carry out the main processing stages of the selective attention mechanism and provide experimental data for each circuit. We demonstrate the expected behavior of the model at the system level by stimulating the chip with both artificially generated control signals and signals obtained from a saliency map, computed from an image containing several salient features.

  11. A Comparative Study on Retirement Process in Korea, Germany, and the United States: Identifying Determinants of Retirement Process.

    Science.gov (United States)

    Cho, Joonmo; Lee, Ayoung; Woo, Kwangho

    2016-10-01

    This study classifies the retirement process and empirically identifies the individual and institutional characteristics determining the retirement process of the aged in South Korea, Germany, and the United States. Using data from the Cross-National Equivalent File, we use a multinomial logistic regression with individual factors, public pension, and an interaction term between an occupation and an education level. We found that in Germany, the elderly with a higher education level were more likely to continue work after retirement with a relatively well-developed social support system, while in Korea, the elderly, with a lower education level in almost all occupation sectors, tended to work off and on after retirement. In the United States, the public pension and the interaction terms have no statistically significant impact on work after retirement. In both Germany and Korea, receiving a higher pension decreased the probability of working after retirement, but the influence of a pension in Korea was much greater than that of Germany. In South Korea, the elderly workers, with lower education levels, tended to work off and on repeatedly because there is no proper security in both the labor market and pension system.

  12. METHODS OF COMPARATIVE APPRAISAL OF TITANIUM ALLOYS ABILITY TO THERMAL STRENGTHENING AS A RESULT OF HIGH-TEMPERATURE THERMOMECHANICAL PROCESSING

    Directory of Open Access Journals (Sweden)

    V. N. Fedulov

    2011-01-01

    Full Text Available A method is developed that enables comparative appraisal of the ability of titanium alloys to harden as a result of high-temperature thermomechanical processing, depending on the temperature and rate of deformation during forging.

  13. Hydrothermal carbonization of biomass residuals: A comparative review of the chemistry, processes and applications of wet and dry pyrolysis

    Science.gov (United States)

    This paper reviews the chemistry, processes, and applications of hydrothermally carbonized biomass wastes. Potential feedstock for hydrothermal carbonization (HTC) includes a variety of non-traditional renewable wet agricultural and municipal waste streams. Pyrolysis and HTC show a comparable calor...

  14. A comparative study of process mediator components that support behavioral incompatibility

    CERN Document Server

    Munusamy, Kanmani; Ibrahim, Suhaimi; Baba, Mohd Sapiyan

    2011-01-01

    Most businesses these days use web services technology as a medium to allow interaction between a service provider and a service requestor. However, both the service provider and the requestor would be unable to achieve their business goals when there are miscommunications between their processes. This research focuses on process incompatibility between web services and the way to resolve it automatically by using a process mediator. This paper presents an overview of behavioral incompatibility between web services and an overview of process mediation to resolve the complications arising from the incompatibility. Several state-of-the-art approaches have been selected and analyzed to understand the existing process mediation components. This paper aims to provide a valuable gap analysis that identifies the important research areas in process mediation that have yet to be fully explored.

  15. The peace processes of Colombia and El Salvador : a comparative study

    OpenAIRE

    Gantiva Arias, Diego A.; Palacios Luna, Marco A.

    1997-01-01

    Colombia and El Salvador, two Latin American countries, have developed similar counterinsurgency processes and started similar processes of peace negotiations between the insurgent armies and the forces of order. One peace process was concluded in 1992, when El Salvador ended the war through a political solution (Peace Accords). Salvadoran insurgent forces agreed to demobilize its army and to become a legal political party, while the government agreed to make changes in the social and politic...

  16. Learning rates and states from biophysical time series: a Bayesian approach to model selection and single-molecule FRET data.

    Science.gov (United States)

    Bronson, Jonathan E; Fei, Jingyi; Hofman, Jake M; Gonzalez, Ruben L; Wiggins, Chris H

    2009-12-16

    Time series data provided by single-molecule Förster resonance energy transfer (smFRET) experiments offer the opportunity to infer not only model parameters describing molecular complexes, e.g., rate constants, but also information about the model itself, e.g., the number of conformational states. Resolving whether such states exist or how many of them exist requires a careful approach to the problem of model selection, here meaning discrimination among models with differing numbers of states. The most straightforward approach to model selection generalizes the common idea of maximum likelihood--selecting the most likely parameter values--to maximum evidence: selecting the most likely model. In either case, such an inference presents a tremendous computational challenge, which we here address by exploiting an approximation technique termed variational Bayesian expectation maximization. We demonstrate how this technique can be applied to temporal data such as smFRET time series; show superior statistical consistency relative to the maximum likelihood approach; compare its performance on smFRET data generated from experiments on the ribosome; and illustrate how model selection in such probabilistic or generative modeling can facilitate analysis of closely related temporal data currently prevalent in biophysics. Source code used in this analysis, including a graphical user interface, is available open source via http://vbFRET.sourceforge.net.
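
    The evidence-based selection described above can be illustrated with a much simpler stand-in: instead of variational Bayes on an smFRET hidden-state model, the sketch below fits 1-, 2-, and 3-component Gaussian mixtures to an invented two-state trace by EM and picks the state count with the lowest BIC, a crude large-sample approximation to the evidence. All names and numbers are illustrative and not taken from vbFRET.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic "FRET efficiency" trace drawn from two well-separated states.
    data = np.concatenate([rng.normal(0.2, 0.05, 300), rng.normal(0.8, 0.05, 300)])

    def fit_loglik(x, k, n_iter=100):
        """Max log-likelihood of a k-component 1-D Gaussian mixture fitted by EM."""
        mu = np.quantile(x, np.linspace(0.1, 0.9, k))   # spread out initial means
        var = np.full(k, x.var())
        w = np.full(k, 1.0 / k)
        for _ in range(n_iter):
            dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
            resp = dens / dens.sum(axis=1, keepdims=True)   # E-step: responsibilities
            nk = resp.sum(axis=0)                           # M-step: update parameters
            w = nk / len(x)
            mu = (resp * x[:, None]).sum(axis=0) / nk
            var = np.maximum((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk, 1e-4)
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        return np.log(dens.sum(axis=1)).sum()

    def bic(x, k):
        n_params = 3 * k - 1        # k means + k variances + (k-1) free weights
        return n_params * np.log(len(x)) - 2 * fit_loglik(x, k)

    scores = {k: bic(data, k) for k in (1, 2, 3)}
    best_k = min(scores, key=scores.get)
    ```

    The penalty term is what distinguishes evidence-style selection from plain maximum likelihood: the 3-state fit always has at least as high a likelihood as the 2-state fit, but the extra parameters are not worth their cost on genuinely two-state data.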

  17. Comparative Study of Sustained Attentional Bias on Emotional Processing in ADHD Children to Pictures with Eye-Tracking

    OpenAIRE

    Pishyareh, Ebrahim; Tehrani-Doost, Mehdi; Mahmoodi-Gharaie, Javad; Khorrami, Anahita; Rahmdar, Saeid Reza

    2015-01-01

    How to Cite This Article: Pishyareh E, Tehrani-doost M, Mahmoodi-gharaie J, Khorrami A, Rahmdar SR. A Comparative Study of Sustained Attentional Bias on Emotional Processing in ADHD Children to Pictures with Eye-Tracking. Iran J Child Neurol. 2015 Winter;9(1):64-70. Abstract Objective: ADHD children have anomalous and negative behavior, especially in emotionally related fields, when compared to others. Evidence indicates that attention has an impact on emotional processing. The present study evaluate...

  18. Applying comparative fractal analysis to infer origin and process in channels on Earth and Mars

    Science.gov (United States)

    Balakrishnan, A.; Rice-Snow, S.; Hampton, B. A.

    2010-12-01

    Recently there has been a large amount of interest in identifying the nature of channels on extraterrestrial bodies. These studies are closely linked to the search for water (and ultimately signs of life) and are unarguably important. Current efforts in this direction rely on identifying geomorphic characteristics of these channels through painstaking analysis of multiple high-resolution images. Here we present a new and simple technique that shows significant potential in its ability to distinguish between lava and water channels. Channels formed by water or lava on Earth (as depicted in map view) display sinuosity over a large range of scales. Their geometries often point to the fluid dynamics, channel gradient, and type of sediments in river channels; for lava channels, it has been suggested that they are indicative of the thermal characteristics of the flow. The degree of sinuosity can be measured using the divider method and represented by fractal dimension (D) values. A higher D value corresponds to a higher degree of sinuosity and channel irregularity, and vice versa. Here we apply this fractal analysis to compare channels on Earth and Mars using D values extracted from satellite images. The fractal dimensions computed in this work range from 1.04-1.38 for terrestrial river channels, 1.01-1.10 for terrestrial lava channels, and 1.01-1.18 for Martian channels. For terrestrial channels, preliminary results show that river networks attain a fractal dimension greater than or equal to 1.1, while lava channels have a fractal dimension less than or equal to 1.1. This analysis demonstrates the higher degree of irregularity present in rivers as opposed to lava channels and supports the utility of using fractal dimension to identify the source of channels on Earth and, by extension, extraterrestrial bodies. Initial estimates of the fractal dimension from Mars fall within the same ranges as the lava channels on Earth.
Based on what has
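
    The divider method described above is simple enough to sketch directly: walk a ruler of fixed length r along the digitized channel trace, count the steps N(r) for several ruler lengths, and take D as the negative slope of log N(r) against log r. The sketch below uses invented synthetic curves (not real channel traces); it recovers D near 1 for a straight channel and a noticeably higher D for a strongly meandering one.

    ```python
    import numpy as np

    def divider_steps(points, r):
        """Count ruler steps of length >= r needed to traverse a polyline
        (a simple point-to-point variant of the classic divider walk)."""
        steps, cur, j, n = 0, points[0], 1, len(points)
        while j < n:
            while j < n and np.hypot(*(points[j] - cur)) < r:
                j += 1
            if j == n:
                break
            cur = points[j]
            steps += 1
        return steps

    def fractal_dimension(points, rulers):
        """D from the log-log slope of step count versus ruler length."""
        counts = [divider_steps(points, r) for r in rulers]
        slope = np.polyfit(np.log(rulers), np.log(counts), 1)[0]
        return -slope

    t = np.linspace(0, 100, 20000)
    straight = np.column_stack([t, np.zeros_like(t)])
    # A strongly meandering "river": two superimposed sine meanders.
    river = np.column_stack([t, 8 * np.sin(t) + 2 * np.sin(7.3 * t)])
    rulers = np.geomspace(0.5, 8, 8)
    D_straight = fractal_dimension(straight, rulers)
    D_river = fractal_dimension(river, rulers)
    ```

    For a straight line N(r) scales as 1/r, giving D = 1; the meandering curve consumes many more short-ruler steps, pushing D above 1, which is the signal the abstract uses to separate rivers from lava channels.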

  19. Extensive separations (CLEAN) processing strategy compared to TRUEX strategy and sludge wash ion exchange

    Energy Technology Data Exchange (ETDEWEB)

    Knutson, B.J.; Jansen, G.; Zimmerman, B.D.; Seeman, S.E. [Westinghouse Hanford Co., Richland, WA (United States); Lauerhass, L.; Hoza, M. [Pacific Northwest Lab., Richland, WA (United States)

    1994-08-01

    Numerous pretreatment flowsheets have been proposed for processing the radioactive wastes in Hanford's 177 underground storage tanks. The CLEAN Option is examined along with two other flowsheet alternatives to quantify the trade-off of greater capital equipment and operating costs for aggressive separations with the reduced waste disposal costs and decreased environmental/health risks. The effect on the volume of HLW glass product and radiotoxicity of the LLW glass or grout product is predicted with current assumptions about waste characteristics and separations processes using a mass balance model. The prediction is made on three principal processing options: washing of tank wastes with removal of cesium and technetium from the supernatant, with washed solids routed directly to the glass (referred to as the Sludge Wash C processing strategy); the previous steps plus dissolution of the solids and removal of transuranic (TRU) elements, uranium, and strontium using solvent extraction processes (referred to as the Transuranic Extraction Option C (TRUEX-C) processing strategy); and an aggressive yet feasible processing strategy for separating the waste components to meet several main goals or objectives (referred to as the CLEAN Option processing strategy), such as the LLW is required to meet the US Nuclear Regulatory Commission Class A limits; concentrations of technetium, iodine, and uranium are reduced as low as reasonably achievable; and HLW will be contained within 1,000 borosilicate glass canisters that meet current Hanford Waste Vitrification Plant glass specifications.

  20. Regional Higher Education Reform Initiatives in Africa: A Comparative Analysis with the Bologna Process

    Science.gov (United States)

    Woldegiorgis, Emnet Tadesse; Jonck, Petronella; Goujon, Anne

    2015-01-01

    Europe's Bologna Process has been identified as a pioneering approach in regional cooperation with respect to the area of higher education. To address the challenges of African higher education, policymakers are recommending regional cooperation that uses the Bologna Process as a model. Based on these recommendations, the African Union Commission…

  1. PROCESS CAPABILITY ESTIMATION FOR NON-NORMALLY DISTRIBUTED DATA USING ROBUST METHODS - A COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    Yerriswamy Wooluru

    2016-06-01

    Full Text Available Process capability indices are very important process quality assessment tools in the automotive industry. The common process capability indices (PCIs) Cp, Cpk, and Cpm are widely used in practice. The use of these PCIs is based on the assumption that the process is in control and its output is normally distributed. In practice, normality is not always fulfilled. Indices developed based on the normality assumption are very sensitive to non-normal processes. When the distribution of a product quality characteristic is non-normal, Cp and Cpk indices calculated using conventional methods often lead to erroneous interpretation of process capability. In the literature, various methods have been proposed for surrogate process capability indices under non-normality, but few literature sources offer a comprehensive evaluation and comparison of their ability to capture true capability in non-normal situations. In this paper, five methods have been reviewed and capability evaluation is carried out for data pertaining to the resistivity of silicon wafers. The final results revealed that the Burr-based percentile method is better than the Clements method. Modelling of non-normal data and the Box-Cox transformation method using statistical software (Minitab 14) provide reasonably good results, as they are very promising methods for non-normal and moderately skewed data (skewness <= 1.5).
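
    The gap between conventional and percentile-based indices is easy to sketch. Below, a hypothetical right-skewed characteristic (invented lognormal data and invented specification limits, not the paper's wafer-resistivity data) is scored with the textbook Cp/Cpk formulas and with a Clements-style surrogate that replaces the 6-sigma spread by the empirical 0.135th-99.865th percentile spread.

    ```python
    import numpy as np

    def cp_cpk(x, lsl, usl):
        """Conventional indices; meaningful only if x is close to normal."""
        mu, sigma = x.mean(), x.std(ddof=1)
        return (usl - lsl) / (6 * sigma), min(usl - mu, mu - lsl) / (3 * sigma)

    def percentile_cp_cpk(x, lsl, usl):
        """Clements-style surrogate: the 0.135th-99.865th percentile spread
        plays the role of 6*sigma, and the median the role of the mean."""
        p_lo, med, p_hi = np.percentile(x, [0.135, 50, 99.865])
        cp = (usl - lsl) / (p_hi - p_lo)
        cpk = min((usl - med) / (p_hi - med), (med - lsl) / (med - p_lo))
        return cp, cpk

    rng = np.random.default_rng(1)
    skewed = rng.lognormal(mean=0.0, sigma=0.4, size=5000)  # skewness ~ 1.3
    lsl, usl = 0.2, 3.5                                     # invented spec limits
    cp_n, cpk_n = cp_cpk(skewed, lsl, usl)
    cp_p, cpk_p = percentile_cp_cpk(skewed, lsl, usl)
    ```

    On this data the percentile-based Cpk comes out higher than the conventional one: the lognormal's lower tail is much shorter than a normal with the same sigma would suggest, so the normal-theory index understates capability against the lower limit.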

  2. Comparative Study of the Use of ICT in English Teaching-Learning Processes

    Science.gov (United States)

    Zare-ee, Abbas; Shekary, Abbas

    2010-01-01

    The use of Information Communication Technologies (ICT) in cultural, political, social, economic, and academic activities has recently attracted the attention of many researchers and it should now be an important component of the comparative study of education. The present study was conducted to compare the amount and quality of ICT use in English…

  3. An innovative process for treatment of municipal wastewater with superior charcteristics compared to traditional techologies

    DEFF Research Database (Denmark)

    Schmidt, Jens Ejbye; Fitsios, E.; Angelidaki, Irini

    2002-01-01

    For nitrogen removal, the anammox process and an innovative physico-chemical method will be described. These separation technologies have shown promising prospects for cost-effective removal of ammonia. For phosphorus removal, a biological process will be used. On-line volatile fatty acids (VFA) monitoring and control will ensure optimum utilization of VFAs for P removal and biogas production. Thermal hydrolysis for treatment of residual sludge will be used to further decrease the amount of excess sludge. Finally, a socio-economic evaluation of the process relative to the traditional treatment concepts...

  4. An innovative process for treatment of municipal wastewater with superior charcteristics compared to traditional techologies

    DEFF Research Database (Denmark)

    Schmidt, Jens Ejbye; Fitsios, E.; Angelidaki, Irini

    2002-01-01

    An innovative treatment process for municipal sewage, which results in low sludge production, low energy consumption, high COD removal, and high energy and nutrient recovery, is described. The organic matter will primarily be removed through anaerobic degradation using high-flow reactors... For nitrogen removal, the anammox process and an innovative physico-chemical method will be described. These separation technologies have shown promising prospects for cost-effective removal of ammonia. For phosphorus removal, a biological process will be used. On-line volatile fatty acids (VFA) monitoring and control will ensure optimum utilization of VFAs for P removal and biogas production. Thermal hydrolysis for treatment of residual sludge will be used to further decrease the amount of excess sludge. Finally, a socio-economic evaluation of the process relative to the traditional treatment concepts...

  5. Learning from Each Other: Comparative Analysis of the Acquisition Process of Lithuania and U.S.

    Science.gov (United States)

    2006-06-01

    ...steps in the process are sometimes inconsistent. Interviewed personnel also complain that the market research and analysis of alternatives is often... ...and analyses of alternatives are conducted. In the U.S., market research and analysis of alternatives is a more in-depth and structured process... ...defined. Additionally, emphasis should be placed on two key steps: market research and analysis of alternatives. Technical specifications: The most...

  6. Same but different: Comparative modes of information processing are implicated in the construction of perceptions of autonomy support.

    Science.gov (United States)

    Lee, Rebecca Rachael; Chatzisarantis, Nikos L D

    2017-01-11

    An implicit assumption behind tenets of self-determination theory is that perceptions of autonomy support are a function of absolute modes of information processing. In this study, we examined whether comparative modes of information processing were implicated in the construction of perceptions of autonomy support. In an experimental study, we demonstrated that participants employed comparative modes of information processing in evaluating receipt of small, but not large, amounts of autonomy support. In addition, we found that social comparison processes influenced a number of outcomes that are empirically related to perceived autonomy support such as sense of autonomy, positive affect, perceived usefulness, and effort. Findings shed new light upon the processes underpinning construction of perceptions related to autonomy support and yield new insights into how to increase the predictive validity of models that use autonomy support as a determinant of motivation and psychological well-being.

  7. Sparse model selection in the highly under-sampled regime

    Science.gov (United States)

    Bulso, Nicola; Marsili, Matteo; Roudi, Yasser

    2016-09-01

    We propose a method for recovering the structure of a sparse undirected graphical model when very few samples are available. The method decides on the presence or absence of bonds between pairs of variables by considering one pair at a time and using a closed-form formula, analytically derived by calculating the posterior probability for every possible model explaining a two-body system using a Jeffreys prior. The approach does not rely on the optimization of any cost functions and consequently is much faster than existing algorithms. Despite this time and computational advantage, numerical results show that for several sparse topologies the algorithm is comparable to the best existing algorithms, and is more accurate in the presence of hidden variables. We apply this approach to the analysis of US stock market data and to neural data, in order to show its efficiency in recovering robust statistical dependencies in real data with non-stationary correlations in time and/or space.
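
    The pair-by-pair decision rule can be imitated with a cruder criterion: for each pair of binary variables, compare the independence model against the saturated pairwise model and keep a bond only when the fit improvement outweighs the extra parameter. The sketch below uses a BIC penalty as a stand-in for the closed-form Jeffreys-prior posterior derived in the paper; the data and variable names are synthetic.

    ```python
    import numpy as np
    from collections import Counter

    def loglik(counts):
        """Max log-likelihood of a categorical model from observed counts."""
        n = sum(counts.values())
        return sum(c * np.log(c / n) for c in counts.values() if c > 0)

    def bond_present(x, y):
        """True if the saturated pairwise model (3 free parameters) beats the
        independence model (2 free parameters) after a BIC penalty."""
        gain = 2 * (loglik(Counter(zip(x, y)))
                    - loglik(Counter(x)) - loglik(Counter(y)))
        return gain > np.log(len(x))    # penalty for one extra parameter

    rng = np.random.default_rng(3)
    a = rng.integers(0, 2, 2000)
    b = np.where(rng.random(2000) < 0.9, a, 1 - a)  # strongly coupled to a
    c = rng.integers(0, 2, 2000)                    # independent of a
    ```

    Because each decision uses only the pair's contingency counts, the whole graph is recovered in one pass over variable pairs, which is what makes this style of model selection fast.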

  8. A Life Cycle Assessment of Silica Sand: Comparing the Beneficiation Processes

    Directory of Open Access Journals (Sweden)

    Anamarija Grbeš

    2015-12-01

    Full Text Available Silica sand or quartz sand is a mineral resource with a wide variety of applications; the glass, construction, and foundry industries are the most common examples thereof. The Republic of Croatia has reserves of 40 million tons of silica sand and a long tradition of surface mining and processing. The average annual production of raw silica sand in Croatia in the period from 2006 to 2011 amounted to 150 thousand tons. This paper presents cradle-to-gate LCA results of three different types of beneficiation techniques: electrostatic separation, flotation, and gravity concentration. The aim of this research is to identify and quantify the environmental impacts of silica sand production, to learn the range of the impacts for different processing methods, and to identify the major contributors and focus for further process design development.

  9. A single theoretical framework for circular features processing in humans: orientation and direction of motion compared

    Directory of Open Access Journals (Sweden)

    Tzvetomir Tzvetanov

    2012-05-01

    Full Text Available Common computational principles underlie the processing of various visual features in the cortex. They are considered to create similar patterns of contextual modulations in behavioral studies for different features such as orientation and direction of motion. Here, I studied the possibility that a single theoretical framework, implemented in different visual areas, of circular feature coding and processing could explain these similarities in observations. Stimuli were created that allowed direct comparison of the contextual effects on orientation and motion direction with two different psychophysical probes: changes in weak and strong signal perception. One unique simplified theoretical model of circular feature coding, including only inhibitory interactions and decoding through a standard vector average, successfully predicted the similarities in the two domains, while different feature population characteristics explained well the differences in modulation on both experimental probes. These results demonstrate how a single computational principle underlies the processing of various features across the cortices.
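
    The "decoding through standard vector average" step is compact enough to sketch: each unit votes with a unit vector along its preferred direction, weighted by its response, and the decoded feature is the angle of the vector sum. The tuning curve, population size, and stimulus below are invented for illustration; for orientation (180-degree periodic) rather than direction, the same computation is applied to the doubled angle.

    ```python
    import numpy as np

    def population_response(theta, pref, kappa=2.0):
        """Von Mises-shaped tuning: response of each unit to stimulus angle theta."""
        return np.exp(kappa * (np.cos(theta - pref) - 1))

    def vector_average(rates, pref):
        """Population vector average: rate-weighted sum of unit vectors."""
        z = np.sum(rates * np.exp(1j * pref))
        return np.angle(z) % (2 * np.pi)

    pref = np.linspace(0, 2 * np.pi, 36, endpoint=False)  # 36 preferred directions
    stim = np.deg2rad(77.0)
    decoded = vector_average(population_response(stim, pref), pref)
    ```

    With symmetric tuning and uniformly spaced preferences, the decoded angle matches the stimulus almost exactly; contextual effects like those in the abstract arise when inhibitory interactions distort the population response before this readout.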

  10. "Active" and "Passive" Lava Resurfacing Processes on Io: A Comparative Study of Loki Patera and Prometheus

    Science.gov (United States)

    Davies, A. G.; Matson, D. L.; Leone, G.; Wilson, L.; Keszthelyi, L. P.

    2004-01-01

    Studies of Galileo Near Infrared Mapping Spectrometer (NIMS) data and ground-based data of volcanism at Prometheus and Loki Patera on Io reveal very different mechanisms of lava emplacement at these two volcanoes. Data analyses show that the periodic nature of Loki Patera's volcanism from 1990 to 2001 is strong evidence that Loki's resurfacing over this period resulted from the foundering of a crust on a lava lake. This process is designated "passive", as there is no reliance on sub-surface processes: the foundering of the crust is inevitable. Prometheus, on the other hand, displays an episodicity in its activity which we designate "active". Like Kilauea, a close analog, Prometheus's effusive volcanism is dominated by pulses of magma through the near-surface plumbing system. Each system affords views of lava resurfacing processes through modelling.

  11. Three column intermittent simulated moving bed chromatography: 1. Process description and comparative assessment.

    Science.gov (United States)

    Jermann, Simon; Mazzotti, Marco

    2014-09-26

    The three column intermittent simulated moving bed (3C-ISMB) process is a new type of multi-column chromatographic process for binary separations and can be regarded as a modification of the I-SMB process commercialized by Nippon Rensui Corporation. In contrast to conventional I-SMB, it enables the use of only three instead of four columns without compromising product purity and throughput. The novel mode of operation is characterized by intermittent feeding and product withdrawal as well as by partial recycling of the weakly retained component from section III to section I. Due to the smaller number of columns with respect to conventional I-SMB, higher internal flow rates can be applied without violating pressure drop constraints. Therefore, the application of 3C-ISMB allows for a higher throughput whilst using a smaller number of columns. As a result, we expect that the productivity, given in terms of throughput per unit time and unit volume of stationary phase, can be significantly increased. In this contribution, we describe the new process concept in detail and analyze its cyclic steady state behavior through an extensive simulation study. The latter shows that 3C-ISMB can be easily designed by Triangle Theory even under highly non-linear conditions. The simple process design is an important advantage over other advanced SMB-like processes. Moreover, the simulation study demonstrates the superior performance of 3C-ISMB, namely productivity increases of roughly 60% with respect to conventional I-SMB without significantly sacrificing solvent consumption.

  12. COMPARATIVE ANALYSIS OF KIRLIANOGRAFIIA IMAGES GLOW OF BIOLOGICAL TISSUES WITH BIOCHEMICAL PROCESSES

    Directory of Open Access Journals (Sweden)

    L. A. Pisotska

    2015-12-01

    the investigated samples. For the Kirlianographic studies, an experimental device, RIVERS 1, developed by the Ukrainian Scientific Research Institute of Mechanical Engineering Technologies (Dnepropetrovsk), was used. For mathematical processing of the results, the Matlab program was used. The growing shortage of ATP causes the breach and termination of ion exchange and increases reactive oxygen generation, while lipid peroxidation destroys cell membranes. The process of self-digestion (autolysis) of tendon tissue, as shown by the results of the experiments, involved cyclical changes in metabolism: enzyme activity (ALT), carbohydrate (LDH), nucleotides, total protein, and micronutrients.

  13. TIME SERIES FORECASTING WITH MULTIPLE CANDIDATE MODELS: SELECTING OR COMBINING?

    Institute of Scientific and Technical Information of China (English)

    YU Lean; WANG Shouyang; K. K. Lai; Y.Nakamori

    2005-01-01

    Various mathematical models have been commonly used in time series analysis and forecasting. In these processes, academic researchers and business practitioners often come up against two important problems. One is whether to select an appropriate modeling approach for prediction purposes or to combine different individual approaches into a single forecast (for different/dissimilar modeling approaches). The other is whether to select the best candidate model for forecasting or to mix various candidate models with different parameters into a new forecast (for the same/similar modeling approaches). In this study, we propose a set of computational procedures to solve the above two issues via two judgmental criteria. Meanwhile, in view of the problems presented in the literature, a novel modeling technique is also proposed to overcome the drawbacks of existing combined forecasting methods. To verify the efficiency and reliability of the proposed procedure and modeling technique, simulations and real-data examples are conducted in this study. The results obtained reveal that the proposed procedure and modeling technique can be used as a feasible solution for time series forecasting with multiple candidate models.
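
    The select-versus-combine question above can be made concrete with a toy experiment: hold out a validation window, then compare the single candidate with the lowest validation error against an equal-weight average of all candidates. Everything below (the series, the three candidate "models", the split) is invented for illustration; which strategy wins depends entirely on the candidates, which is exactly why the paper proposes judgmental criteria for choosing between them.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 200
    truth = np.sin(np.arange(n) / 10.0)          # series to be forecast

    # Three imperfect candidate forecasts: biased, noisy, and one-step-lagged.
    forecasts = {
        "biased": truth + 0.3,
        "noisy":  truth + rng.normal(0, 0.4, n),
        "lagged": np.roll(truth, 1),
    }

    def mse(a, b):
        return float(np.mean((a - b) ** 2))

    fit, holdout = slice(0, 150), slice(150, n)  # validation split

    # Strategy 1: select the single best candidate on the validation window.
    best = min(forecasts, key=lambda k: mse(forecasts[k][fit], truth[fit]))
    select_err = mse(forecasts[best][holdout], truth[holdout])

    # Strategy 2: combine all candidates with an equal-weight average.
    combo = np.mean(list(forecasts.values()), axis=0)
    combine_err = mse(combo[holdout], truth[holdout])
    ```

    Here selection wins because one candidate is far better than the rest; when candidate errors are comparable in size but weakly correlated, the equal-weight combination usually comes out ahead instead.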

  14. Does centrifugation and semen processing with swim up at 37°C yield sperm with better DNA integrity compared to centrifugation and processing at room temperature?

    Directory of Open Access Journals (Sweden)

    Deepthi Repalle

    2013-01-01

    Full Text Available Aim: To evaluate whether semen processing at 37°C yields sperm with better DNA integrity compared to centrifugation and processing at room temperature (RT) by the swim-up method. Settings: This study was done at a tertiary care center attached to a Reproductive Medicine Unit and Medical College. Design: Prospective pilot study. Patients: Normozoospermic men (n = 50) undergoing diagnostic semen analysis. Materials and Methods: Each normozoospermic sample (World Health Organization, 2010 criteria) was divided after analysis into two aliquots (0.5 mL each); one was processed at 37°C and the other at RT by the swim-up method. DNA fragmentation of both samples post wash was calculated by the acridine orange method. Statistical Analysis Used: The values of sperm DNA fragmentation were represented as mean and standard error of the mean (mean ± SEM). A paired t-test was used for calculating the sperm DNA integrity difference between post wash at RT and at 37°C. Results: A statistically significant difference was not observed in post-wash sperm DNA fragmentation values at 37°C compared to RT. Conclusion: Our data show that there was no significant difference in sperm DNA fragmentation values of samples processed at 37°C and at RT. Hence, sperm processing at 37°C does not yield sperm with better DNA integrity compared to centrifugation and processing at RT.

  15. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang

    2017-02-16

    Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many existing algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  16. Selection and study performance : comparing three admission processes within one medical school

    NARCIS (Netherlands)

    Schripsema, Nienke R.; van Trigt, Anke M.; Borleffs, Jan C. C.; Cohen-Schotanus, Janke

    2014-01-01

    Objectives: This study was conducted to: (i) analyse whether students admitted to one medical school based on top pre-university grades, a voluntary multifaceted selection process, or lottery, respectively, differed in study performance; (ii) examine whether students who were accepted in the multifaceted…

  17. Competency-Based Training in International Perspective: Comparing the Implementation Processes Towards the Achievement of Employability

    Science.gov (United States)

    Boahin, Peter; Eggink, Jose; Hofman, Adriaan

    2014-01-01

    This article undertakes a comparison of competency-based training (CBT) systems in a number of countries with the purpose of drawing lessons to support Ghana and other countries in the process of CBT implementation. The study focuses on recognition of prior learning and involvement of industry since these features seem crucial in achieving…

  18. The Comparative and Developmental Study of Auditory Information Processing in Autistic Adults.

    Science.gov (United States)

    Nakamura, Kenryu; And Others

    1986-01-01

    The study examined brain functions related to information processing in autistic adults using auditory evoked potentials (AEP) and missing stimulus potentials (MSP). Both nonautistic and autistic adults showed normal mature patterns and lateralities in AEP for music stimuli, but nonautistic children did not. Autistic adults showed matured patterns…

  19. The Linguistic Correlates of Conversational Deception: Comparing Natural Language Processing Technologies

    Science.gov (United States)

    Duran, Nicholas D.; Hall, Charles; McCarthy, Philip M.; McNamara, Danielle S.

    2010-01-01

    The words people use and the way they use them can reveal a great deal about their mental states when they attempt to deceive. The challenge for researchers is how to reliably distinguish the linguistic features that characterize these hidden states. In this study, we use a natural language processing tool called Coh-Metrix to evaluate deceptive…

  20. Comparing Science Process Skills of Prospective Science Teachers: A Cross-Sectional Study

    Science.gov (United States)

    Farsakoglu, Omer Faruk; Sahin, Cigdem; Karsli, Fethiye

    2012-01-01

    This study was conducted with the purpose of examining how Prospective Science Teachers' (PST) Science Process Skills (SPS) develop according to different grades. In this study, a cross-sectional research approach in the form of a case study was used. The sample group consisted of a total number of 102 undergraduate students who were selected from…

  1. Real-World Experimentation Comparing Time-Sharing and Batch Processing in Teaching Computer Science,

    Science.gov (United States)

    effectiveness of time-sharing and batch processing in teaching computer science. The experimental design was centered on direct, ’real world’ comparison...ALGOL). The experimental sample involved all introductory computer science courses, with a total population of 415 cadets. The results generally

  2. A comparative study of information processing time in chronic alcoholic and non-alcoholic men

    Directory of Open Access Journals (Sweden)

    Vishavdeep Kaur

    2016-11-01

    Conclusions: This study concludes that with chronic consumption of alcohol there is slowed processing of information as well as a decrease in the efficiency of sensorimotor functioning, shown by an increased reaction time to visual stimuli. [Int J Res Med Sci 2016; 4(11): 4812-4815]

  4. Learning in the Process of Industrial Work--A Comparative Study of Finland, Sweden and Germany

    Science.gov (United States)

    Kira, Mari

    2007-01-01

    By combining a positivistic and an interpretive approach, this research investigates the learning opportunities that contemporary industrial work processes and workplaces offer for employees individually and collectively. The research explores how employees can become trained through their work and how individual development may expand to…

  5. A Comparative Analysis of the Budget Process in the Venezuelan and U.S. Navies.

    Science.gov (United States)

    1979-12-01

    los Organismos de la Administracion Central" (Accounting System of Financial Execution of the Budget for the Central Administration Offices), which is...Venezuelan budgetary process no distinction is made between an authorization and an appropriation. The Escuela Nacional de Administracion Publica - The

  6. Processing the ground vibration signal produced by debris flows: the methods of amplitude and impulses compared

    Science.gov (United States)

    Arattano, M.; Abancó, C.; Coviello, V.; Hürlimann, M.

    2014-12-01

    Ground vibration sensors have been increasingly used and tested during the last few years as devices to monitor debris flows, and they have also been proposed as among the more reliable devices for the design of debris flow warning systems. The need to process the output of ground vibration sensors, to diminish the amount of data to be recorded, usually stems from the reduced storage capabilities and the limited power supply, normally provided by solar panels, available in the high mountain environment. Different methods can be found in the literature to process the ground vibration signal produced by debris flows. In this paper we discuss the two most commonly employed: the method of impulses and the method of amplitude. These two methods of data processing are analyzed by describing their origin and use, presenting examples of applications, and outlining their main advantages and shortcomings. The two methods are then applied to process the raw ground vibration data produced by a debris flow that occurred in the Rebaixader Torrent (Spanish Pyrenees) in 2012. The results of this work will support the decisions of researchers and technicians who face the task of designing a debris flow monitoring installation or debris flow warning equipment based on the use of ground vibration detectors.

  7. fs- and ns-laser processing of polydimethylsiloxane (PDMS) elastomer: Comparative study

    Energy Technology Data Exchange (ETDEWEB)

    Stankova, N.E., E-mail: nestankova@yahoo.com [Institute of Electronics, Bulgarian Academy of Sciences, 72 Tsarigradsko Shose, Sofia 1784 (Bulgaria); Atanasov, P.A.; Nedyalkov, N.N.; Stoyanchov, T.R. [Institute of Electronics, Bulgarian Academy of Sciences, 72 Tsarigradsko Shose, Sofia 1784 (Bulgaria); Kolev, K.N.; Valova, E.I.; Georgieva, J.S.; Armyanov, St.A. [Rostislaw Kaischew Institute of Physical Chemistry, Bulgarian Academy of Sciences, Acad. G. Bonchev Str., Block 11, Sofia 1113 (Bulgaria); Amoruso, S.; Wang, X.; Bruzzese, R. [CNR-SPIN, Dipartimento di Scienze Fisiche, Universita degli Studi di Napoli Federico II, Complesso Universitario di Monte S. Angelo, Via Cintia, I-80126 Napoli (Italy); Grochowska, K.; Śliwiński, G. [Photophysics Department, The Szewalski Institute, Polish Academy of Sciences, 14 Fiszera St., 80-231 Gdańsk (Poland); Baert, K.; Hubin, A. [Vrije Universiteit Brussels, Faculty of Engineering, Research group, SURF “Electrochemical and Surface Engineering” (Belgium); Delplancke, M.P.; Dille, J. [Université Libre de Bruxelles, Materials Engineering, Characterization, Synthesis and Recycling (Service 4MAT), Faculté des Sciences Appliquées, 1050 Brussels (Belgium)

    2015-05-01

    Highlights: • fs- and ns-laser (266 and 532 nm) processing of PDMS elastomer, in air, is studied. • High-definition tracks (on the PDMS elastomer surface) for electrodes are produced. • Selective Pt or Ni metallization of the tracks is produced via electroless plating. • Irradiated and metallized tracks are characterized by μ-Raman spectrometry and SEM. • DC resistance of Pt and Ni tracks is always between 0.5 and 15 Ω/mm. - Abstract: Medical-grade polydimethylsiloxane (PDMS) elastomer is a widely used biomaterial as encapsulation and/or as a substrate insulator carrier for long-term neural implants because of its remarkable properties. Femtosecond (λ = 263 and 527 nm) and nanosecond (266 and 532 nm) laser processing of the PDMS elastomer surface, in air, is investigated. The influence of different processing parameters, including laser wavelength, pulse duration, fluence, scanning speed and overlapping of subsequent pulses, on surface activation and surface morphology is studied. High-definition tracks and electrodes are produced. Remarkable alterations of the chemical composition and structural morphology of the ablated traces are observed in comparison with the native material. Raman spectra illustrate a well-defined dependence of the chemical composition on the laser fluence, pulse duration, number of pulses and wavelength. An extra peak at ∼512–518 cm⁻¹, assigned to crystalline silicon, is observed after ns- or visible fs-laser processing of the surface. In all cases, the intensities of the Si−O−Si symmetric stretching at 488 cm⁻¹, Si−CH₃ symmetric rocking at 685 cm⁻¹, Si−C symmetric stretching at 709 cm⁻¹, CH₃ asymmetric rocking + Si−C asymmetric stretching at 787 cm⁻¹, and CH₃ symmetric rocking at 859 cm⁻¹ modes strongly decrease. The laser-processed areas are also analyzed by SEM and optical microscopy. Selective Pt or Ni metallization of the laser processed

  8. Empirical evaluation of scoring functions for Bayesian network model selection.

    Science.gov (United States)

    Liu, Zhifa; Malone, Brandon; Yuan, Changhe

    2012-01-01

    In this work, we empirically evaluate the capability of various scoring functions of Bayesian networks for recovering true underlying structures. Similar investigations have been carried out before, but they typically relied on approximate learning algorithms to learn the network structures. The suboptimal structures found by the approximation methods have unknown quality and may affect the reliability of their conclusions. Our study uses an optimal algorithm to learn Bayesian network structures from datasets generated from a set of gold standard Bayesian networks. Because all optimal algorithms always learn equivalent networks, this ensures that only the choice of scoring function affects the learned networks. Another shortcoming of the previous studies stems from their use of random synthetic networks as test cases. There is no guarantee that these networks reflect real-world data. We use real-world data to generate our gold-standard structures, so our experimental design more closely approximates real-world situations. A major finding of our study suggests that, in contrast to results reported by several prior works, the Minimum Description Length (MDL) (or equivalently, Bayesian information criterion (BIC)) consistently outperforms other scoring functions such as Akaike's information criterion (AIC), Bayesian Dirichlet equivalence score (BDeu), and factorized normalized maximum likelihood (fNML) in recovering the underlying Bayesian network structures. We believe this finding is a result of using both datasets generated from real-world applications rather than from random processes used in previous studies and learning algorithms to select high-scoring structures rather than selecting random models. Other findings of our study support existing work, e.g., large sample sizes result in learning structures closer to the true underlying structure; the BDeu score is sensitive to the parameter settings; and the fNML performs pretty well on small datasets. We also
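    The BIC-versus-AIC trade-off described above can be illustrated outside the Bayesian network setting with a generic penalized-likelihood toy (polynomial regression on invented data, not the paper's experiment): BIC's heavier ln(n) penalty never selects a larger model than AIC does.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = rng.uniform(-1, 1, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)  # true model is degree 1

def gaussian_scores(deg):
    """AIC and BIC for a degree-`deg` polynomial fit with Gaussian noise."""
    X = np.vander(x, deg + 1)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / n                       # MLE of noise variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    k = deg + 2                                      # coefficients + variance
    return 2 * k - 2 * loglik, k * np.log(n) - 2 * loglik

aics, bics = zip(*(gaussian_scores(d) for d in range(1, 7)))
best_aic = 1 + int(np.argmin(aics))   # degree preferred by AIC
best_bic = 1 + int(np.argmin(bics))   # degree preferred by BIC (= MDL here)
```

    Since ln(200) > 2, the BIC-chosen degree is provably no larger than the AIC-chosen one, which mirrors the abstract's point that MDL/BIC is the more conservative structure score.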

  9. Smooth-Threshold Multivariate Genetic Prediction with Unbiased Model Selection.

    Science.gov (United States)

    Ueki, Masao; Tamiya, Gen

    2016-04-01

    We develop a new genetic prediction method, smooth-threshold multivariate genetic prediction, using single nucleotide polymorphism (SNP) data from genome-wide association studies (GWASs). Our method consists of two stages. At the first stage, unlike the usual discontinuous SNP screening used in the gene score method, our method continuously screens SNPs based on the output from standard univariate analysis of the marginal association of each SNP. At the second stage, the predictive model is built by generalized ridge regression simultaneously using the screened SNPs, with SNP weights determined by the strength of marginal association. Continuous SNP screening by smooth thresholding not only makes prediction stable but also leads to a closed-form expression for the generalized degrees of freedom (GDF). The GDF leads to Stein's unbiased risk estimate (SURE), which enables a data-dependent choice of the optimal SNP screening cutoff without using cross-validation. Our method is very rapid because the computationally expensive genome-wide scan is required only once, in contrast to penalized regression methods including the lasso and elastic net. Simulation studies that mimic real GWAS data with quantitative and binary traits demonstrate that the proposed method outperforms the gene score method and genomic best linear unbiased prediction (GBLUP), and shows comparable or sometimes improved performance relative to the lasso and elastic net, which are known to have good predictive ability but heavy computational cost. Application to whole-genome sequencing (WGS) data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) shows that the proposed method has higher predictive power than the gene score and GBLUP methods.
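    A minimal sketch of the two-stage screen-then-shrink idea follows. It is not the authors' implementation: the smooth-threshold weight form is simplified, the cutoff is an arbitrary quantile rather than the SURE-chosen one, and the "SNP" matrix is simulated Gaussian data.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 300, 50
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:5] = 1.0                      # only 5 informative "SNPs"
y = X @ beta_true + rng.normal(size=n)

# Stage 1: marginal association of each predictor (univariate z-like score).
z = np.abs(X.T @ y) / (np.linalg.norm(X, axis=0) * np.std(y))

# Smooth thresholding: instead of a hard keep/drop cut, down-weight
# predictors continuously by the strength of marginal association.
tau = np.quantile(z, 0.5)                # illustrative cutoff (not SURE)
w = np.clip(1.0 - (tau / np.maximum(z, 1e-12)) ** 2, 0.0, None)

# Stage 2: ridge regression on the softly screened, weighted design.
Xw = X * w
lam = 1.0
beta_hat = np.linalg.solve(Xw.T @ Xw + lam * np.eye(p), Xw.T @ y)
pred = Xw @ beta_hat
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

    Strongly associated predictors get weights near 1 and survive almost untouched, while weakly associated ones are shrunk toward zero without the instability of a discontinuous cut.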

  10. A comparative study of cellulose nanofibrils disintegrated via multiple processing approaches

    Science.gov (United States)

    Yan Qing; Ronald Sabo; J.Y. Zhu; Umesh Agarwal; Zhiyong Cai; Yiqiang Wu

    2013-01-01

    Various cellulose nanofibrils (CNFs) created by refining and microfluidization, in combination with enzymatic or 2,2,6,6-tetramethylpiperidine-1-oxyl (TEMPO)-oxidized pretreatment, were compared. The morphological properties, degree of polymerization, and crystallinity of the obtained nanofibrils, as well as the physical and mechanical properties of the corresponding films...

  11. Comparing Two Forms of Concept Map Critique Activities to Facilitate Knowledge Integration Processes in Evolution Education

    Science.gov (United States)

    Schwendimann, Beat A.; Linn, Marcia C.

    2016-01-01

    Concept map activities often lack a subsequent revision step that facilitates knowledge integration. This study compares two collaborative critique activities using a Knowledge Integration Map (KIM), a form of concept map. Four classes of high school biology students (n = 81) using an online inquiry-based learning unit on evolution were assigned…

  12. The Paradigm of Utilizing Robots in the Teaching Process: A Comparative Study

    Science.gov (United States)

    Bacivarov, Ioan C.; Ilian, Virgil L. M.

    2012-01-01

    This paper discusses a comparative study of the effects of using a humanoid robot for introducing students to personal robotics. Even though a humanoid robot is one of the more complicated types of robots, comprehension was not an issue. The study highlighted the importance of using real hardware for teaching such complex subjects as opposed to…

  13. Theory for Explaining and Comparing the Dynamics of Education in Transitional Processes

    Science.gov (United States)

    van der Walt, Johannes L.

    2016-01-01

    Countries all over the world find themselves in the throes of revolution, change, transition or transformation. Because of the complexities of these momentous events, it is no simple matter to describe and evaluate them. This paper suggests that comparative educationists apply a combination of three theories as a lens through which such national…

  14. Causal Inference and Model Selection in Complex Settings

    Science.gov (United States)

    Zhao, Shandong

    Propensity score methods have become part of the standard toolkit for applied researchers who wish to ascertain causal effects from observational data. While they were originally developed for binary treatments, several researchers have proposed generalizations of the propensity score methodology for non-binary treatment regimes. In this article, we first review three main methods that generalize propensity scores in this direction, namely, inverse propensity weighting (IPW), the propensity function (P-FUNCTION), and the generalized propensity score (GPS), along with recent extensions of the GPS that aim to improve its robustness. We compare the assumptions, theoretical properties, and empirical performance of these methods. We propose three new methods that provide robust causal estimation based on the P-FUNCTION and GPS. While our proposed P-FUNCTION-based estimator performs well, we generally advise caution in that all available methods can be biased by model misspecification and extrapolation. In a related line of research, we consider adjustment for posttreatment covariates in causal inference. Even in a randomized experiment, observations might have different compliance behavior under treatment and control assignment. This posttreatment covariate cannot be adjusted for using standard statistical methods. We review the principal stratification framework, which allows this effect to be modeled as part of its Bayesian hierarchical models. We generalize the current model to add the possibility of adjusting for pretreatment covariates. We also propose a new estimator of the average treatment effect over the entire population. In a third line of research, we discuss the spectral line detection problem in high-energy astrophysics. We carefully review how this problem can be statistically formulated as a precise hypothesis test with a point null hypothesis, why a usual likelihood ratio test does not apply to problems of this nature, and a doable fix to correctly
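    The IPW estimator reviewed above can be sketched in a few lines. Here the propensity score is taken as known, whereas in practice it would be estimated (e.g., by logistic regression of treatment on covariates), and the data-generating process is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
x = rng.normal(size=n)                       # confounder
p_treat = 1 / (1 + np.exp(-x))               # true propensity score
t = rng.binomial(1, p_treat)                 # treatment depends on x
y = 2.0 * t + x + rng.normal(size=n)         # true treatment effect = 2

# Naive difference in means is confounded: treated units have higher x.
naive = y[t == 1].mean() - y[t == 0].mean()

# IPW: reweight each unit by the inverse probability of the
# treatment it actually received, recovering the ATE.
ate_ipw = np.mean(t * y / p_treat) - np.mean((1 - t) * y / (1 - p_treat))
```

    The naive contrast is biased upward because the confounder drives both treatment and outcome; the inverse-propensity weights rebalance the two groups.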

  15. How Multilevel Societal Learning Processes Facilitate Transformative Change: A Comparative Case Study Analysis on Flood Management

    Directory of Open Access Journals (Sweden)

    Claudia Pahl-Wostl

    2013-12-01

    Full Text Available Sustainable resources management requires a major transformation of existing resource governance and management systems. These have evolved over a long time under an unsustainable management paradigm, e.g., the transformation from the traditionally prevailing technocratic flood protection toward the holistic integrated flood management approach. We analyzed such transformative changes using three case studies in Europe with a long history of severe flooding: the Hungarian Tisza and the German and Dutch Rhine. A framework based on societal learning and on an evolutionary understanding of societal change was applied to identify drivers and barriers for change. Results confirmed the importance of informal learning and actor networks and their connection to formal policy processes. Enhancing a society's capacity to adapt is a long-term process that evolves over decades, and in this case, was punctuated by disastrous flood events that promoted windows of opportunity for change.

  16. A Comparative Survey Based on Processing Network Traffic Data Using Hadoop Pig and Typical Mapreduce

    Directory of Open Access Journals (Sweden)

    Anjali P P

    2014-02-01

    Full Text Available Big data analysis has now become an integral part of many computational and statistical departments. Analysis of peta-byte-scale data has gained importance in the present-day scenario. Big data manipulation is now considered a key area of research in the field of data analytics, and novel techniques are evolving day by day. Thousands of transaction requests are processed every minute by different websites related to e-commerce, shopping carts and online banking. Hence the need for network traffic and weblog analysis, for which Hadoop is a suggested solution. It can efficiently process the NetFlow data collected from routers, switches or even from website access logs at fixed intervals.
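    The kind of aggregation such a survey benchmarks (what Pig expresses as GROUP BY ... SUM over flow records) reduces to a map step keyed by source IP followed by a summing reduce. A plain-Python stand-in with made-up records:

```python
from collections import defaultdict

# Hypothetical NetFlow-style records: (source_ip, dest_ip, bytes).
flows = [
    ("10.0.0.1", "93.184.216.34", 1200),
    ("10.0.0.2", "93.184.216.34", 800),
    ("10.0.0.1", "151.101.1.69", 400),
    ("10.0.0.1", "93.184.216.34", 600),
]

# Map phase: emit (key, value) pairs keyed by source IP.
mapped = [(src, nbytes) for src, _, nbytes in flows]

# Shuffle/reduce phase: sum bytes per source IP,
# the equivalent of Pig's GROUP ... / SUM(...).
totals = defaultdict(int)
for key, value in mapped:
    totals[key] += value
```

    Pig compiles a script of this shape into MapReduce jobs automatically, which is the convenience the survey weighs against hand-written MapReduce.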

  17. A Comparative Analysis of the Efficiency and Effectiveness of the F-14 Tomcat Overhaul Process.

    Science.gov (United States)

    1998-06-01

    Aquilano, Production and Operations Management: Manufacturing and Services, 8th ed., Irwin McGraw-Hill, 1996; Heizer, Jay, and Barry Render...analyze, and use data to assist all levels of NAMP management. Organizational-level (O-level) maintenance is performed by an operating unit on a day...put back into operational service. 2. Management Philosophy and Practices. a) Planning. The F-14 SDLM program has been plagued with process

  18. Modeling Central Carbon Metabolic Processes in Soil Microbial Communities: Comparing Measured With Modeled

    Science.gov (United States)

    Dijkstra, P.; Fairbanks, D.; Miller, E.; Salpas, E.; Hagerty, S.

    2013-12-01

    Understanding the mechanisms regulating C cycling is hindered by our inability to directly observe and measure the biochemical processes of glycolysis, the pentose phosphate pathway, and the TCA cycle in intact and complex microbial communities. Position-specific 13C-labeled metabolic tracer probing is proposed as a new way to study microbial community energy production, biosynthesis, and C use efficiency (the proportion of substrate incorporated into microbial biomass), and it enables the quantification of C fluxes through the central C metabolic network processes (Dijkstra et al. 2011a,b). We determined the 13CO2 production from U-13C, 1-13C, 2-13C, 3-13C, 4-13C, 5-13C, and 6-13C labeled glucose and 1-13C and 2,3-13C pyruvate in parallel incubations in three soils along an elevation gradient. Qualitative and quantitative interpretation of the results indicates high pentose phosphate pathway activity in soils. Agreement between modeled and measured CO2 production rates for the six C-atoms of 13C-labeled glucose indicates that the metabolic model used is appropriate for soil community processes, but that improvements can be made. These labeling and modeling techniques may improve our ability to analyze the biochemistry and (eco)physiology of intact microbial communities. Dijkstra, P., Blankinship, J.C., Selmants, P.C., Hart, S.C., Koch, G.W., Schwartz, E., Hungate, B.A., 2011a. Probing C flux patterns of soil microbial metabolic networks using parallel position-specific tracer labeling. Soil Biology & Biochemistry 43, 126-132. Dijkstra, P., Dalder, J.J., Selmants, P.C., Hart, S.C., Koch, G.W., Schwartz, E., Hungate, B.A., 2011b. Modeling soil metabolic processes using isotopologue pairs of position-specific 13C-labeled glucose and pyruvate. Soil Biology & Biochemistry 43, 1848-1857.

  19. A Comparative Evaluation of Cash Flow and Batch Profit Hedging Effectiveness in Commodity Processing

    OpenAIRE

    Dahlgran, Roger A.

    2006-01-01

    Agribusinesses make long-term plant-investment decisions based on discounted cash flow. It is therefore incongruous for an agribusiness firm to use cash flow as a plant-investment criterion and then to completely discard cash flow in favor of batch profits as an operating objective. This paper assumes that cash flow and its stability are important to commodity processors and examines methods for hedging cash flows under continuous processing. Its objectives are (a) to determine how standard h...

  20. Charge generation associated with liquid spraying in tank cleaning and comparable processes - preliminary experiments

    Science.gov (United States)

    Blum, Carsten; Losert, Oswald F. J.

    2015-10-01

    The BG RCI has initiated investigations in order to improve the data basis for assessing the ignition hazard posed by electrostatic charging processes associated with the spraying of liquids. On the basis of preliminary experiments, we established procedures for measuring electric field strength and charging current in the presence of aerosol particles. Results obtained with three different nozzle types, with variation of pressure and with a built-in deflecting plate, are presented.

  1. Comparing BPM suite K2 Blackpearl and IBM Business process manager

    OpenAIRE

    Kosmač, Matjaž

    2012-01-01

    The thesis brings a comparison of two significant BPM tools called K2 Blackpearl and IBM Business Process Manager, also known as Lombardi. At the beginning we had to identify key criteria to evaluate BPM tools and divide them into larger sets. Afterwards we described how the chosen BPM tools fulfill important criteria in each of these sets. We came to the conclusion that both tools are quite similar, both offering similar (basic) functionalities and features, but often in ve...

  2. Comparative Analysis of the Processes of Quality in Physiotherapy / Kinesiology of Colombia and Chile

    OpenAIRE

    Luis Fernando Rodríguez Ibagué; Andrés Felipe Sánchez Medina; Paola Andrea Zamora Restrepo; Luis Alejandro Araya Veliz

    2015-01-01

    Introduction: The reform initiatives in Latin America frame a concern to ensure universal coverage and provide quality services; thus, quality management has become one of the most important issues of the 21st century, especially in health. Objectives: To characterize the processes of enabling/health authorization and accreditation of kinesiological services in Colombia and Chile from the perspective of health quality. Methodology: For this we conducted a descriptive comparati...

  3. Arms Transfers to Venezuela: A Comparative and Critical Analysis of the Acquisition Process (1980-1996).

    Science.gov (United States)

    1999-03-01

    arbitration: institutional and independent. Any commercial association or international association related to the national economy can act as arbiter...the defense procurement process, occasionally affected by divergent objectives such as economy or politics. This chapter discusses the U.S. defense...and Information Systems (Oficina Central de Estadisticas e Informatica, OCEI), which is a subordinated office of the presidency. This differs from the

  4. THE BOLOGNA PROCESS AND THE DYNAMICS OF ACADEMIC MOBILITY: A COMPARATIVE APPROACH TO ROMANIA AND TURKEY

    Directory of Open Access Journals (Sweden)

    Monica ROMAN

    2008-12-01

    Full Text Available Recent changes that have occurred in the European higher education system are grounded in the decision of continental countries, expressed in the Bologna Declaration, to achieve a single European space in this field by the year 2010. The purpose of this paper is to develop a better understanding of student mobility in the process of internationalization of higher education in a South European context. The rationale of the study is that student mobility has long been the most important dimension of the internationalization of higher education. At the moment there is increasing demand for higher education, as a consequence of demographic trends and the need for new degree and diploma programs. The article focuses on two countries from South-Eastern Europe, Romania and Turkey. Both countries have a very dynamic higher education system, in terms of numbers of students and staff, integrating into the Bologna process. They are also primarily perceived as student-sending countries. The key findings are linked to the obstacles to mobility and the solutions to overcome them. The paper also stresses the necessity for the two higher education systems to become more involved in attracting European students.

  5. Comparative of signal processing techniques for micro-Doppler signature extraction with automotive radar systems

    Science.gov (United States)

    Rodriguez-Hervas, Berta; Maile, Michael; Flores, Benjamin C.

    2014-05-01

    In recent years, the automotive industry has experienced an evolution toward more powerful driver assistance systems that provide enhanced vehicle safety. These systems typically operate in the optical and microwave regions of the electromagnetic spectrum and have demonstrated high efficiency in collision and risk avoidance. Microwave radar systems are particularly relevant due to their operational robustness under adverse weather or illumination conditions. Our objective is to study different signal processing techniques suitable for the extraction of accurate micro-Doppler signatures of slow-moving objects in dense urban environments. Selection of the appropriate signal processing technique is crucial for the extraction of accurate micro-Doppler signatures and leads to better results in a radar classifier system. For this purpose, we perform simulations of typical radar detection responses in common driving situations and conduct the analysis with several signal processing algorithms, including the short-time Fourier transform, continuous wavelet, and kernel-based analysis methods. We take into account factors such as the relative movement between the host vehicle and the target, and the non-stationary nature of the target's movement. A comparison of results reveals that the short-time Fourier transform is the best approach for detection and tracking purposes, while the continuous wavelet is best suited for classification purposes.
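    A minimal sketch of the short-time Fourier transform approach on a simulated micro-Doppler return (the carrier and modulation parameters below are invented; a real automotive return would be far more complex):

```python
import numpy as np
from scipy import signal

fs = 1000.0                         # sample rate, Hz
tt = np.arange(0, 2.0, 1 / fs)
# Simulated return: bulk Doppler at 100 Hz plus a sinusoidal
# frequency modulation mimicking limb motion at 2 Hz.
phase = 2 * np.pi * 100 * tt + 30 * np.sin(2 * np.pi * 2 * tt)
x = np.cos(phase) + 0.1 * np.random.default_rng(5).normal(size=tt.size)

# Short-time Fourier transform: time-frequency map of the signature.
f, t_seg, Z = signal.stft(x, fs=fs, nperseg=128)
power = np.abs(Z) ** 2
# Dominant frequency per time segment traces the micro-Doppler ridge.
ridge = f[np.argmax(power, axis=0)]
```

    The window length (`nperseg`) sets the usual time-frequency trade-off: shorter windows track the modulation better, longer windows resolve frequency more finely, which is the practical reason the STFT, wavelet, and kernel methods behave differently for tracking versus classification.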

  6. Comparative hazard analysis of processes leading to remarkable flash floods (France, 1930-1999)

    Science.gov (United States)

    Boudou, M.; Lang, M.; Vinet, F.; Cœur, D.

    2016-10-01

    Flash flood events are responsible for large economic losses and lead to fatalities every year in France. This is especially the case in the Mediterranean region and the overseas territories/departments of France, characterized by extreme hydro-climatological features and with a large part of the population exposed to flood risks. The recurrence of remarkable flash flood events, associated with high hazard intensity, significant damage and socio-political consequences, therefore raises several issues for authorities and risk management policies. This study aims to improve our understanding of the hazard analysis process in the case of four remarkable flood events: March 1930, October 1940, January 1980 and November 1999. First, we present the methodology used to define the remarkability score of a flood event. Then, to identify the factors leading to a remarkable flood event, we explore the main parameters of the hazard analysis process, such as the meteorological triggering conditions, the return period of the rainfall and peak discharge, as well as some additional factors (initial catchment state, flood chronology, cascade effects, etc.). The results contribute to understanding the complexity of the processes leading to flood hazard and highlight the importance for risk managers of taking additional factors into account.

  7. The model selection in the process of teambuilding for the management of the organization

    Directory of Open Access Journals (Sweden)

    Sergey Petrov

    2010-10-01

    Full Text Available Improving the competitiveness of organizations, which is necessary for their success in a market economy, is no longer possible through material resources alone. This implies the need for a qualitatively new approach to human capital. The author reviews approaches to team building and suggests a team management model based on situation cases in which the team, organized in one way or another, reaches its goal.

  8. Photonic processing and realization of an all-optical digital comparator based on semiconductor optical amplifiers

    Science.gov (United States)

    Singh, Simranjit; Kaur, Ramandeep; Kaler, Rajinder Singh

    2015-01-01

    A module of an all-optical 2-bit comparator is analyzed and implemented using semiconductor optical amplifiers (SOAs). By employing SOA-based cross-phase modulation, the optical XNOR logic is used to obtain the A=B output signal, whereas the AB̄ and ĀB logic operations are used to realize the A>B and A<B outputs, making the design applicable to all-optical high-speed networks and computing systems.
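As a hedged illustration of the Boolean relations the abstract describes (the XNOR for the A=B output and the AB̄/ĀB products for the inequality outputs), the 2-bit comparator can be modeled at the truth-table level; this sketch captures only the logic, not the SOA physics:

```python
# Boolean model of a 2-bit magnitude comparator: per-bit XNORs detect
# equality, and AB'-style products on the high/low bits detect inequality.
def compare_2bit(a, b):
    """Return (A>B, A=B, A<B) as 0/1 flags for 2-bit integers a, b in 0..3."""
    a1, a0 = (a >> 1) & 1, a & 1
    b1, b0 = (b >> 1) & 1, b & 1
    eq1 = 1 - (a1 ^ b1)                             # XNOR of the high bits
    eq0 = 1 - (a0 ^ b0)                             # XNOR of the low bits
    equal = eq1 & eq0                               # A = B
    gt = (a1 & (1 - b1)) | (eq1 & a0 & (1 - b0))    # A > B
    lt = ((1 - a1) & b1) | (eq1 & (1 - a0) & b0)    # A < B
    return gt, equal, lt

# Verify the full 16-row truth table against Python's integer comparison.
for a in range(4):
    for b in range(4):
        assert compare_2bit(a, b) == (int(a > b), int(a == b), int(a < b))
print("truth table verified")
```

In the optical realization the abstract reports, each of these Boolean terms is produced by an SOA gate rather than evaluated arithmetically; the logic table, however, is the same.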

  9. A comparative life cycle assessment of hybrid osmotic dilution desalination and established seawater desalination and wastewater reclamation processes.

    Science.gov (United States)

    Hancock, Nathan T; Black, Nathan D; Cath, Tzahi Y

    2012-03-15

    The purpose of this study was to determine the comparative environmental impacts of coupled seawater desalination and water reclamation using a novel hybrid system that consists of an osmotically driven membrane process and established membrane desalination technologies. A comparative life cycle assessment methodology was used to differentiate between a novel hybrid process consisting of forward osmosis (FO) operated in osmotic dilution (ODN) mode and seawater reverse osmosis (SWRO), and two other processes: a stand-alone conventional SWRO desalination system, and a combined SWRO and dual-barrier impaired-water purification system consisting of nanofiltration followed by reverse osmosis. Each process was evaluated using ten baseline impact categories. It was demonstrated that from a life cycle perspective two hurdles exist to further development of the ODN-SWRO process: module design of FO membranes and cleaning intensity of the FO membranes. System optimization analysis revealed that doubling FO membrane packing density, tripling FO membrane permeability, and optimizing system operation, all of which were technically feasible at the time of this publication, could reduce the environmental impact of the hybrid ODN-SWRO process compared to SWRO by more than 25%; yet, novel hybrid nanofiltration-RO treatment of seawater and wastewater can achieve almost similar levels of environmental impact.

  10. RESEARCH ON THE POWER CONSUMPTION IN SANDING PROCESS WITH ABRASIVE BRUSHES, COMPARED TO THE WIDE BELT SANDING

    Directory of Open Access Journals (Sweden)

    Loredana Anne-Marie BĂDESCU

    2015-12-01

    Full Text Available This paper presents the modelling of over-finishing grinding with abrasive brushes and also a comparative study between the absorbed power when over-finishing grinding the beech, the spruce and the MDF with this kind of tools as compared to the absorbed power when over-finishing grinding under similar conditions using the wide belt sanding (grinding technology, presenting the advantages of reconsidering such a technological process.

  11. Comparing population patterns to processes: abundance and survival of a forest salamander following habitat degradation.

    Directory of Open Access Journals (Sweden)

    Clint R V Otto

    Full Text Available Habitat degradation resulting from anthropogenic activities poses immediate and prolonged threats to biodiversity, particularly among declining amphibians. Many studies infer amphibian response to habitat degradation by correlating patterns in species occupancy or abundance with environmental effects, often without regard to the demographic processes underlying these patterns. We evaluated how retention of vertical green trees (CANOPY) and coarse woody debris (CWD) influenced terrestrial salamander abundance and apparent survival in recently clearcut forests. Estimated abundance of unmarked salamanders was positively related to CANOPY (β Canopy = 0.21; 95% CI: 0.02 to 1.19), but not CWD (β CWD = 0.11; 95% CI: -0.13 to 0.35), within 3,600 m2 sites, whereas estimated abundance of unmarked salamanders was related to neither CANOPY (β Canopy = -0.01; 95% CI: -0.21 to 0.18) nor CWD (β CWD = -0.02; 95% CI: -0.23 to 0.19) for 9 m2 enclosures. In contrast, apparent survival of marked salamanders within our enclosures over 1 month was positively influenced by both CANOPY and CWD retention (β Canopy = 0.73; 95% CI: 0.27 to 1.19 and β CWD = 1.01; 95% CI: 0.53 to 1.50). Our results indicate that environmental correlates to abundance are scale dependent, reflecting habitat selection processes and organism movements after a habitat disturbance event. Our study also provides a cautionary example of how scientific inference is conditional on the response variable(s) and scale(s) of measure chosen by the investigator, which can have important implications for species conservation and management. Our research highlights the need for joint evaluation of population state variables, such as abundance, and population-level processes, such as survival, when assessing anthropogenic impacts on forest biodiversity.

  12. Structures and processes in spontaneous ADR reporting systems: a comparative study of Australia and Denmark.

    Science.gov (United States)

    Aagaard, Lise; Stenver, Doris Irene; Hansen, Ebba Holme

    2008-10-01

    To explore the organisational structure and processes of the Danish and Australian spontaneous ADR reporting systems with a view to how information is generated about new ADRs. The Danish and Australian spontaneous ADR reporting systems. Qualitative analyses of documentary material, descriptive interviews with key informants, and observations were made. We analysed the organisational structure of the Danish and Australian ADR reporting systems with respect to structures and processes, including information flow and exchange of ADR data. The analysis was based on Scott's adapted version of Leavitt's diamond model, with the components goals/tasks, social structure, technology and participants, within a surrounding environment. The main differences between the systems were: (1) Participants: outsourcing of ADR assessments to the pharmaceutical companies complicates maintenance of scientific skills within the Danish Medicines Agency (DKMA), as it leaves the handling of spontaneous ADR reports purely administrative within the DKMA, and the knowledge creation process remains with the pharmaceutical companies, while in Australia senior scientific staff work on the evaluation of the ADR reports; (2) Goals/tasks: in Denmark, resources are targeted at evaluating Periodic Safety Update Reports (PSURs) submitted by the companies, while the resources in Australia are focused on single-case assessment, resulting in faster and more proactive medicine surveillance; (3) Social structure: discussions between scientific staff about ADRs take place in Australia, while the Danish system primarily focuses on entering and forwarding ADR data to the relevant pharmaceutical companies; (4) Technology: the Danish system exchanges ADR data electronically with pharmaceutical companies and the other EU countries, while Australia does not have a system for electronic exchange of ADR data; and (5) Environment: the Danish ADR system is embedded in the routines of cooperation within European

  13. RUNON a hitherto little noticed factor - Field experiments comparing RUNOFF/RUNON processes

    Science.gov (United States)

    Kohl, Bernhard; Achleitner, Stefan; Lumassegger, Simon

    2017-04-01

    When ponded water moves downslope as overland flow, an important process called runon manifests itself, but it is often ignored in rainfall-runoff studies (Nahar et al. 2004), which link infiltration exclusively to rainfall. Runon effects on infiltration have not yet, or only scarcely, been evaluated (e.g. Zheng et al. 2000). Runoff-runon occurs when spatially variable infiltration capacities result in runoff generated in one location potentially infiltrating further downslope in an area with higher infiltration capacity (Jones et al. 2013). Numerous studies report inverse relationships between unit-area volumes of overland flow and plot lengths (Jones et al. 2016). This is an indication that the effects of rainfall and runon often become blurred. We use a coupled hydrological/2D hydrodynamic model to simulate surface runoff and pluvial flooding including the associated infiltration process. In the frame of the research project SAFFER-CC (sensitivity assessment of critical condition for local flash floods - evaluating the recurrence under climate change), the influence of land use and soil conservation on pluvial flash flood modeling is assessed. Field experiments are carried out with a portable irrigation spray installation at different locations with a plot size of 5 m width and 10 m length. The test plots were first subjected to rainfall with a constant intensity of 100 mm/h for one hour. Subsequently, a super-intense, one-hour mid-accentuated rainfall hydrograph, ranging from 50 mm/h to 200 mm/h, was applied to the same plots after 30 minutes. Finally, runon was simulated by upstream feeding of the test plots using two different inflow intensities. The irrigation tests showed the expected differences in runoff coefficients depending on the various agricultural management practices. However, these runoff coefficients change with the applied process (rainfall or runon). While a decrease was observed on a plot with a closed litter layer, the runoff coefficient from runon increases on poor

  14. Comparing Proteolytic Fingerprints of Antigen-Presenting Cells during Allergen Processing

    Directory of Open Access Journals (Sweden)

    Heidi Hofer

    2017-06-01

    Full Text Available Endolysosomal processing has a critical influence on immunogenicity as well as immune polarization of protein antigens. In industrialized countries, allergies affect around 25% of the population. For the rational design of protein-based allergy therapeutics for immunotherapy, a good knowledge of T cell-reactive regions on allergens is required. Thus, we sought to analyze endolysosomal degradation patterns of inhalant allergens. Four major allergens from ragweed, birch, as well as house dust mites were produced as recombinant proteins. Endolysosomal proteases were purified by differential centrifugation from dendritic cells, macrophages, and B cells, and combined with allergens for proteolytic processing. Thereafter, endolysosomal proteolysis was monitored by protein gel electrophoresis and mass spectrometry. We found that the overall proteolytic activity of specific endolysosomal fractions differed substantially, whereas the degradation patterns of the four model allergens obtained with the different proteases were extremely similar. Moreover, previously identified T cell epitopes were assigned to endolysosomal peptides and indeed showed a good overlap with known T cell epitopes for all four candidate allergens. Thus, we propose that the degradome assay can be used as a predictor to determine antigenic peptides as potential T cell epitopes, which will help in the rational design of protein-based allergy vaccine candidates.

  15. Product Development and its Comparative Analysis by SLA, SLS and FDM Rapid Prototyping Processes

    Science.gov (United States)

    Choudhari, C. M.; Patil, V. D.

    2016-09-01

    The need to capture markets and meet deadlines has increased the scope for new methods in product design and development. Industries continuously strive to optimize development cycles with high-quality and cost-efficient products to maintain market competitiveness. Thus Rapid Prototyping Techniques (RPT) have started to play a pivotal role in the rapid product development cycle for complex products. Dimensional accuracy and surface finish are the cornerstones of Rapid Prototyping (RP), especially if the parts are used for mould development. The paper deals with the development of a part made with the help of Selective Laser Sintering (SLS), Stereo-lithography (SLA) and Fused Deposition Modelling (FDM) processes to benchmark and investigate various parameters like material shrinkage rate, dimensional accuracy, time, cost and surface finish. This helps to conclude which process proves to be effective and efficient in mould development. In this research work, emphasis was also given to the design stage of product development to obtain an optimum design solution for an existing product.

  16. A comparative analysis of pre-processing techniques in colour retinal images

    Energy Technology Data Exchange (ETDEWEB)

    Salvatelli, A [Artificial Intelligence Group, Facultad de Ingenieria, Universidad Nacional de Entre Rios (Argentina); Bizai, G [Artificial Intelligence Group, Facultad de Ingenieria, Universidad Nacional de Entre Rios (Argentina); Barbosa, G [Artificial Intelligence Group, Facultad de Ingenieria, Universidad Nacional de Entre Rios (Argentina); Drozdowicz, B [Artificial Intelligence Group, Facultad de Ingenieria, Universidad Nacional de Entre Rios (Argentina); Delrieux, C [Electric and Computing Engineering Department, Universidad Nacional del Sur, Alem 1253, BahIa Blanca, (Partially funded by SECyT-UNS) (Argentina)], E-mail: claudio@acm.org

    2007-11-15

    Diabetic retinopathy (DR) is a chronic disease of the ocular retina, which most of the time is only discovered when the disease is at an advanced stage and most of the damage is irreversible. For that reason, early diagnosis is paramount for avoiding the most severe consequences of DR, of which complete blindness is not uncommon. Unsupervised or supervised image processing of retinal images emerges as a feasible tool for this diagnosis. The preprocessing stages are the key to any further assessment, since these images exhibit several defects, including non-uniform illumination, sampling noise, uneven contrast due to pigmentation loss during sampling, and many others. Any feasible diagnosis system should work with images where these defects have been compensated. In this work we analyze and test several correction techniques. Non-uniform illumination is compensated using morphology and homomorphic filtering; uneven contrast is compensated using morphology and local enhancement. We tested our processing stages using Fuzzy C-Means and the local Hurst (self-correlation) coefficient for unsupervised segmentation of the abnormal blood vessels. The results over a standard set of DR images are more than promising.
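Homomorphic filtering, one of the illumination corrections the abstract lists, works by high-pass filtering the log of the image so that slowly varying illumination is attenuated while reflectance detail is preserved. The following is a minimal NumPy sketch on a synthetic image; the transfer-function shape, gains, and test image are all assumptions for illustration, not the paper's settings:

```python
import numpy as np

def homomorphic(img, sigma=30.0, gain_low=0.5, gain_high=1.5):
    """Suppress slowly varying illumination, keep/boost reflectance detail."""
    log_img = np.log1p(img.astype(float))         # multiplicative -> additive model
    F = np.fft.fft2(log_img)
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None] * h           # frequency grid in index units
    fx = np.fft.fftfreq(w)[None, :] * w
    d2 = fx**2 + fy**2
    # Gaussian high-emphasis transfer function between gain_low and gain_high
    H = gain_low + (gain_high - gain_low) * (1 - np.exp(-d2 / (2 * sigma**2)))
    out = np.expm1(np.real(np.fft.ifft2(F * H)))  # back to intensity domain
    return np.clip(out, 0, None)

# Synthetic test: flat texture under a strong left-to-right illumination ramp.
rng = np.random.default_rng(1)
yy, xx = np.mgrid[0:128, 0:128]
illum = 0.3 + 0.7 * xx / 127.0
img = illum * (100 + 10 * rng.standard_normal((128, 128)))
corrected = homomorphic(img)

before = img.mean(axis=0)         # column-mean brightness profile, raw
after = corrected.mean(axis=0)    # same profile after correction
print(before.std() / before.mean(), after.std() / after.mean())
```

The coefficient of variation of the column means drops after filtering, i.e. the illumination gradient is flattened while the texture survives, which is the property the preprocessing stage relies on.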

  17. TOMATO PROCESSING FIRMS’ MANAGEMENT: A COMPARATIVE APPLICATION OF ECONOMIC AND FINANCIAL ANALYSES

    Directory of Open Access Journals (Sweden)

    Mattia Iotti

    2014-01-01

    Full Text Available In Italy, the territory that includes the Emilia-Romagna region, the southern areas of Lombardia and some of Piemonte's territory is a center of national importance for tomato production and transformation. The processing firms operating in this area are characterized by significant investments in fixed assets and working capital. The article analyzes the annual account data of a sample of firms, showing that the economic margins traditionally applied to assess the sustainability of the business cycle differ significantly from financial margins; also, the Interest Coverage Ratios (ICRs) differ if calculated by applying an economic or a financial approach. Moreover, the annual account data highlight difficult credit access, expressed by applying a multiple regression model to analyze Free Cash Flow to Equity (FCFE) generation. The article suggests a useful metric to measure more correctly the sustainability of a firm's management that could be applied to other firms in the agri-food sector, particularly those characterized by a capital-intensive processing cycle.

  18. Processed tart cherry products--comparative phytochemical content, in vitro antioxidant capacity and in vitro anti-inflammatory activity.

    Science.gov (United States)

    Ou, Boxin; Bosak, Kristen N; Brickner, Paula R; Iezzoni, Dominic G; Seymour, E Mitchell

    2012-05-01

    Processing of fruits and vegetables affects their phytochemical and nutrient content. Tart cherries are commercially promoted to possess antioxidant and anti-inflammatory activity. However, processing affects their phytochemical content and may affect their related health benefits. The current study compares the in vitro antioxidant capacity and anti-inflammatory cyclooxygenase activity of processed tart cherry (Prunus cerasus) products: cherry juice concentrate, individually quick-frozen cherries, canned cherries, and dried cherries. Cherry products were analyzed for total anthocyanin and proanthocyanidin content and profile. On a per serving basis, total anthocyanins were highest in frozen cherries and total proanthocyanidins were highest in juice concentrate. Total phenolics were highest in juice concentrate. Juice concentrate had the highest oxygen radical absorbance capacity (ORAC) and peroxynitrite radical averting capacity (NORAC). Dried cherries had the highest hydroxyl radical averting capacity (HORAC) and superoxide radical averting capacity (SORAC). Processed tart cherry products compared very favorably to the U.S. Dept. of Agriculture-reported ORAC of other fresh and processed fruits. Inhibition of in vitro inflammatory COX-1 activity was greatest in juice concentrate. In summary, all processed tart cherry products possessed antioxidant and anti-inflammatory activity, but processing differentially affected phytochemical content and in vitro bioactivity. On a per serving basis, juice concentrate was superior to other tart cherry products.

  19. Measurements of Corneal Thickness in Eyes with Pseudoexfoliation Syndrome: Comparative Study of Different Image Processing Protocols

    Directory of Open Access Journals (Sweden)

    Katarzyna Krysik

    2017-01-01

    Full Text Available Purpose. Comparative analysis of central and peripheral corneal thickness in PEX patients using three different imaging systems: Pentacam-Scheimpflug device, time-domain optical coherence tomography (OCT) Visante, and swept-source OCT Casia. Materials and Methods. 128 eyes of 80 patients with diagnosed PEX were examined and compared with 112 normal, non-PEX eyes of 72 cataract patients. The study parameters included 5 measured zones: central and 4 peripheral (superior, inferior, nasal, and temporal). Results. The mean CCT in eyes with PEX syndrome measured with all three instruments was thicker than that in normal eyes. Corneal thickness measurements in the PEX group were statistically significantly different between Pentacam and OCT Casia: central corneal thickness (p=0.04), inferior corneal zone (p=0.01), and nasal and temporal corneal zones (p<0.01). Between Pentacam and OCT Visante, the inferior, nasal, and temporal corneal zones were statistically significantly different (p<0.01). Between OCT Casia and OCT Visante, there were no statistically significant differences in the measured parameter values. Conclusion. The central corneal thickness in eyes with PEX syndrome measured with three different independent methods is higher than that in the non-PEX group, and despite variable peripheral corneal thickness, this one parameter is still crucial in intraocular pressure measurements.

  20. Separating macroecological pattern and process: comparing ecological, economic, and geological systems.

    Directory of Open Access Journals (Sweden)

    Benjamin Blonder

    Full Text Available Theories of biodiversity rest on several macroecological patterns describing the relationship between species abundance and diversity. A central problem is that all theories make similar predictions for these patterns despite disparate assumptions. A troubling implication is that these patterns may not reflect anything unique about organizational principles of biology or the functioning of ecological systems. To test this, we analyze five datasets from ecological, economic, and geological systems that describe the distribution of objects across categories in the United States. At the level of functional form ('first-order effects'), these patterns are not unique to ecological systems, indicating they may reveal little about biological process. However, we show that mechanism can be better revealed in the scale-dependency of first-order patterns ('second-order effects'). These results provide a roadmap for biodiversity theory to move beyond traditional patterns, and also suggest ways in which macroecological theory can constrain the dynamics of economic systems.

  1. Process of Judging Significant Modifications for Different Transportation Systems compared to the Approach for Nuclear Installations

    Directory of Open Access Journals (Sweden)

    Nicolas Petrek

    2015-12-01

    Full Text Available In 2009 the European Commission implemented the CSM regulation, which harmonizes the risk assessment process and introduces a rather new concept of judging changes within the European railway industry. This circumstance has raised the question of how other technology sectors handle the aspect of modifications and alterations. The paper discusses the approaches for judging the significance of modifications within the three transport sectors of European railways, aviation and maritime transportation, and the procedure which is used in the area of nuclear safety. We will outline the similarities and differences between these four methods and discuss the underlying reasons. Finally, we will take into account the role of the European legislator and the fundamental idea of a harmonization of the different approaches.

  2. Processes of Urban and Rural Development: a Comparative Analysis of Europe and China.

    Directory of Open Access Journals (Sweden)

    Andrea Raffaele Neri

    2014-03-01

    Full Text Available China, in its construction fever, has imported from Europe a great range of architectural and design features. The planning systems of China and of most European countries are based on functional zoning, allowing meaningful comparison. Nonetheless, the process and goals of spatial planning differ markedly, and China largely ignores the distinctive progress achieved in the field in Europe. Across Europe, the model of planning has been undergoing important transformations in the last decades, gradually making decisions concerning land use more participatory, flexible and sustainable, and safeguarding the rural dimension. In contrast, the planning system of China is primarily focused on promoting urban GDP growth and is still based on a top-down approach. The inclusion of some key elements of European planning into the Chinese system, with particular reference to laws establishing national standards and comprehensive environmental protection, would benefit China by reducing the internal inequalities between cities and countryside and safeguarding its natural assets.

  3. Comparing Online Analytical Processing and Data Mining Tasks In Enterprise Resource Planning Systems

    Directory of Open Access Journals (Sweden)

    Tamer Salah

    2011-11-01

    Full Text Available Enterprise Resource Planning (ERP) is an environment which is often rich in data about the enterprise. Data warehouse online analytical processing (OLAP) techniques provide decision makers with a set of useful tools to analyze, report and graphically represent the data of the ERP. It can be said that OLAP tools provide different summarized perspectives of the data. On the other hand, data mining techniques can discover previously unknown patterns of knowledge. It can be said that data mining provides a deeper look into the data. This paper provides a comparison and case study of the benefits obtained by applying OLAP or data mining techniques and the effect of integrating both approaches in ERP.
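The "summarized perspectives" that OLAP provides can be illustrated with a tiny roll-up over invented sales records (the dimensions, values, and column names below are made up for the example; data mining, by contrast, would search this data for patterns not specified in advance):

```python
from collections import defaultdict

# Invented fact table: (region, quarter, amount) rows, as an ERP might export.
sales = [
    ("North", "Q1", 100), ("North", "Q2", 150), ("South", "Q1", 80),
    ("South", "Q2", 120), ("North", "Q1", 50),  ("South", "Q1", 70),
]

# OLAP-style roll-up along the (region, quarter) dimensions, with margins:
# each fact contributes to its cell, its row total, its column total,
# and the grand total -- the pre-aggregated "cube" a viewer drills into.
cube = defaultdict(int)
for region, quarter, amount in sales:
    cube[(region, quarter)] += amount
    cube[(region, "Total")] += amount
    cube[("Total", quarter)] += amount
    cube[("Total", "Total")] += amount

print(cube[("North", "Q1")], cube[("South", "Total")], cube[("Total", "Total")])
```

Drilling down then just means reading a finer-grained cell of the same cube, whereas slicing fixes one dimension; both are lookups over the pre-summed aggregates rather than new computation.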

  4. Whole fusion-fission process with Langevin approach and compared with analytical solution for barrier passage

    CERN Document Server

    Han, Jie

    2014-01-01

    We investigate the time-dependent probability that a Brownian particle passing over the barrier stays in a metastable potential pocket rather than escaping back over the barrier. This is related to the whole fusion-fission dynamical process and can be called the reverse Kramers problem. By multiplying the passing probability over the saddle point of an inverse harmonic potential by the exponential decay factor of a particle in the metastable potential, we present an approximate expression for the modified passing probability over the barrier, in which the effect of the reflecting boundary of the potential is taken into account. Our analytical result and Langevin Monte-Carlo simulations show that the probability of passing and remaining without escaping over the barrier is a non-monotonic function of time, and its maximal value is less than the stationary passing probability over the saddle point of an inverse harmonic potential.
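The Langevin Monte-Carlo ingredient of such a calculation can be sketched in a few lines: an ensemble of overdamped Brownian particles is propagated in a barrier potential and the time-dependent fraction past the barrier is counted. A quartic double well stands in here for the paper's inverse-harmonic-plus-pocket potential, and all parameters (temperature, time step, ensemble size) are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n, dt, temp = 2000, 0.01, 0.4
x = np.full(n, -1.0)                    # all particles start in the left well

def force(x):
    # V(x) = (x^2 - 1)^2: wells at x = -1, +1 and a barrier of height 1 at x = 0
    return -4.0 * x * (x**2 - 1.0)

# Euler-Maruyama integration of the overdamped Langevin equation
# dx = F(x) dt + sqrt(2 T dt) * N(0, 1)
frac = {}
for step in range(1, 501):
    x += force(x) * dt + np.sqrt(2.0 * temp * dt) * rng.standard_normal(n)
    if step in (50, 500):
        frac[step] = float(np.mean(x > 0.0))   # fraction past the barrier
print(frac)
```

The passing fraction grows with time toward its equilibrium value; resolving the non-monotonic behavior the abstract reports would additionally require distinguishing particles that have crossed and then escaped back, which this minimal sketch does not track.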

  5. Comparative analysis of different process simulation settings of a micro injection molded part featuring conformal cooling

    DEFF Research Database (Denmark)

    Marhöfer, David Maximilian; Tosello, Guido; Islam, Aminul

    2015-01-01

    different simulation models are established: a version including the part without the surrounding mold block, an advanced version including the mold block and conventional cooling channels, and a third version alike the second with additional conformal cooling for efficient thermal management...... of the implementation of the actual mold block, conventional cooling, and conformal cooling. In the comparison, characteristic quality criteria for injection molding are studied, such as the filling behavior of the cavity, the injection pressure, the temperature distribution, and the resulting part warpage....... Additionally, the analysis of the cooling channels exploiting computational fluid dynamics is introduced as helpful tool for the mold design process. It is observed that the comprehensive implementation of the actual injection molding system and conditions is highly relevant at sub-mm/micro dimensional scales...

  6. Separating macroecological pattern and process: comparing ecological, economic, and geological systems.

    Science.gov (United States)

    Blonder, Benjamin; Sloat, Lindsey; Enquist, Brian J; McGill, Brian

    2014-01-01

    Theories of biodiversity rest on several macroecological patterns describing the relationship between species abundance and diversity. A central problem is that all theories make similar predictions for these patterns despite disparate assumptions. A troubling implication is that these patterns may not reflect anything unique about organizational principles of biology or the functioning of ecological systems. To test this, we analyze five datasets from ecological, economic, and geological systems that describe the distribution of objects across categories in the United States. At the level of functional form ('first-order effects'), these patterns are not unique to ecological systems, indicating they may reveal little about biological process. However, we show that mechanism can be better revealed in the scale-dependency of first-order patterns ('second-order effects'). These results provide a roadmap for biodiversity theory to move beyond traditional patterns, and also suggest ways in which macroecological theory can constrain the dynamics of economic systems.

  7. Comparative evaluation of different hemicelluloses isolation processes integrated with alkaline cooking - HemiEx

    Energy Technology Data Exchange (ETDEWEB)

    Sixta, H.; Testova, L.; Rauhala, T. (and others) (Aalto Univ. School of Science and Technology, Espoo (Finland). Dept. of Forest Products Technology)

    2010-10-15

    HemiEx is a project focusing on the selective extraction of hemicelluloses from hardwood species in connection with alkaline pulping and study of different chemical aspects of the process. The project scope includes investigation of hemicelluloses isolation methods i.e. water prehydrolysis and alkaline pre-extraction prior to and subsequent to alkaline pulping. The sugar fraction of the extracts is then separated from other wood degradation products by means of membrane separation technology before it is converted to furanic compounds and xylose-based food additives. As regards pulp production, both dissolving and paper pulps are aimed at. The effect of pretreatment conditions on papermaking properties of pulp will also be investigated. (orig.)

  8. Comparative evaluation of different hemicelluloses isolation processes integrated with alkaline cooking - HemiEx

    Energy Technology Data Exchange (ETDEWEB)

    Sixta, H.; Testova, L.; Rauhala, T. (and others) (Helsinki Univ. of Technology, Dept. of Forest Products Technology, Espoo (Finland))

    2009-10-15

    HemiEx is a project focusing on the selective extraction of hemicelluloses from hardwood species in connection with alkaline pulping and study of different chemical aspects of the process. The project scope includes investigation of hemicelluloses isolation methods i.e. water prehydrolysis and alkaline pre-extraction prior to and novel solvents extraction subsequent to alkaline pulping. The sugar fraction of the extracts is then separated from other wood degradation products by means of membrane separation technology before it is converted to furanic compounds and xylose-based food additives. As regards pulp production, both dissolving and paper pulps are aimed at. The effect of pretreatment conditions on papermaking properties of pulp will also be investigated. (orig.)

  9. A Multi-Process Test Case to Perform Comparative Analysis of Coastal Oceanic Models

    Science.gov (United States)

    Lemarié, F.; Burchard, H.; Knut, K.; Debreu, L.

    2016-12-01

    Due to the wide variety of choices that need to be made during the development of dynamical kernels of oceanic models, there is a strong need for an effective and objective assessment of the various methods and approaches that predominate in the community. We present here an idealized multi-scale scenario for coastal ocean models combining estuarine, coastal and shelf sea scales at midlatitude. The bathymetry, initial conditions and external forcings are defined analytically so that any model developer or user could reproduce the test case with their own numerical code. Thermally stratified conditions are prescribed and a tidal forcing is imposed as a propagating coastal Kelvin wave. The following physical processes can be assessed from the model results: estuarine processes driven by tides and buoyancy gradients, river plume dynamics, tidal fronts, and the interaction between tides and inertial oscillations. We show results obtained using the GETM (General Estuarine Transport Model) and the CROCO (Coastal and Regional Ocean Community model) models. These two models are representative of the diversity of numerical methods in use in coastal models: GETM is based on a quasi-lagrangian vertical coordinate, a coupled space-time approach for advective terms, and a TVD (Total Variation Diminishing) tracer advection scheme, while CROCO is discretized with a quasi-eulerian vertical coordinate, a method of lines is used for advective terms, and tracer advection satisfies the TVB (Total Variation Bounded) property. The multiple scales are properly resolved thanks to nesting strategies: 1-way nesting for GETM and 2-way nesting for CROCO. Such a test case can be an interesting experiment to continue research in numerical approaches as well as an efficient tool to allow intercomparison between structured-grid and unstructured-grid approaches. Reference : Burchard, H., Debreu, L., Klingbeil, K., Lemarié, F. : The numerics of hydrostatic structured-grid coastal ocean models: state of

  10. Evaluating Safeguards Benefits of Process Monitoring as compared with Nuclear Material Accountancy

    Energy Technology Data Exchange (ETDEWEB)

    Humberto Garcia; Wen-Chiao Lin; Reed Carlson

    2014-07-01

    This paper illustrates potential safeguards benefits that process monitoring (PM) may have as a diversion deterrent and as a complementary safeguards measure to nuclear material accountancy (NMA). This benefit is illustrated by quantifying the standard deviation associated with detecting a considered material diversion scenario using either an NMA-based method or a PM-based approach. To illustrate the benefits of PM for effective safeguards, we consider a reprocessing facility. We assume that the diversion of interest for detection manifests itself as a loss of Pu caused by abnormally operating a dissolver for an extended period to accomplish protracted diversion (or misdirection) of Pu to a retained (unconditioned) waste stream. For detecting the occurrence of this diversion (which involves anomalous operation of the dissolver), we consider two different data evaluation and integration (DEI) approaches, one based on NMA and the other based on PM. The approach based on PM does not directly perform mass balance calculations, but rather monitors for the possible occurrence of anomaly patterns related to potential loss of nuclear material. It is thus assumed that the loss of a given mass amount of nuclear material can be directly associated with the execution of proliferation-driven activities that trigger the occurrence of an anomaly pattern consisting of a series of events or signatures occurring at different unit operations and time instances. By effectively assessing these events over time and space, the PM-based DEI approach tries to infer whether this specific pattern of events has occurred and how many times within a given time period. To evaluate the goodness of PM, the 3 Sigma of the estimated mass loss is computed under both DEI approaches as a function of the number of input batches processed. Simulation results are discussed.
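
    The key quantitative point, that an NMA-based detection threshold degrades as more batches are processed, can be sketched under the textbook assumption of independent per-batch measurement errors (the function names and numbers below are illustrative, not from the paper):

```python
import math

def nma_sigma(n_batches, sigma_batch):
    """Standard deviation of a cumulative material balance when each batch
    contributes an independent measurement error sigma_batch: it grows as
    sqrt(n), so the 3-sigma detection threshold widens with throughput."""
    return math.sqrt(n_batches) * sigma_batch

def three_sigma_threshold(n_batches, sigma_batch):
    """Smallest protracted loss distinguishable from noise at 3 sigma."""
    return 3.0 * nma_sigma(n_batches, sigma_batch)
```

    For example, with an assumed per-batch error of 0.05 kg, the 3-sigma threshold grows from 0.15 kg after 1 batch to 1.5 kg after 100 batches, whereas an event-pattern PM detector is largely insensitive to the number of batches.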

  11. Comparing word processing times in naming, lexical decision, and progressive demasking: Evidence from Chronolex

    Directory of Open Access Journals (Sweden)

    Ludovic eFerrand

    2011-11-01

    Full Text Available We report performance measures for lexical decision, word naming, and progressive demasking for a large sample of monosyllabic, monomorphemic French words (N = 1,482). We compare the tasks and also examine the impact of word length, word frequency, initial phoneme, orthographic and phonological distance to neighbors, age-of-acquisition, and subjective frequency. Our results show that objective word frequency is by far the most important variable to predict reaction times in lexical decision. For word naming, it is the first phoneme. Progressive demasking was more influenced by a semantic variable (word imageability) than lexical decision, but was also affected to a much greater extent by perceptual variables (word length, first phoneme/letters). This may reduce its usefulness as a psycholinguistic word recognition task.

  12. Comparative analysis of the speed performance of texture analysis algorithms on a graphic processing unit (GPU)

    Science.gov (United States)

    Triana-Martinez, J.; Orjuela-Vargas, S. A.; Philips, W.

    2013-03-01

    This paper compares the speed performance of a set of classic image algorithms for evaluating texture in images by using CUDA programming. We include a summary of the general programming model of CUDA. We select a set of texture algorithms, based on statistical analysis, that allow the use of repetitive functions, such as the co-occurrence matrix, Haralick features and local binary pattern techniques. The memory allocation time between the host and device memory is not taken into account. The results of this approach show a comparison of the texture algorithms in terms of speed when executed on CPU and GPU processors. The comparison shows that the algorithms can be accelerated more than 40 times when implemented using the CUDA environment.
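
    One of the named texture measures, the local binary pattern, reduces to a per-pixel, data-parallel computation, which is why it maps well onto GPU threads. A CPU reference sketch in Python/NumPy of the algorithmic core (the paper's implementations are in CUDA; this is only an illustration of the operator itself):

```python
import numpy as np

def lbp8(image):
    """8-neighbour local binary pattern codes for a 2-D grayscale array.
    Each interior pixel gets one byte: bit b is set when neighbour b is
    at least as bright as the centre pixel."""
    img = np.asarray(image, dtype=float)
    center = img[1:-1, 1:-1]
    codes = np.zeros_like(center, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nb >= center).astype(np.uint8) << bit
    return codes
```

    A histogram of the returned codes is the texture descriptor; on a GPU each output pixel would be computed by an independent thread.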

  13. Comparing word processing times in naming, lexical decision, and progressive demasking: evidence from chronolex.

    Science.gov (United States)

    Ferrand, Ludovic; Brysbaert, Marc; Keuleers, Emmanuel; New, Boris; Bonin, Patrick; Méot, Alain; Augustinova, Maria; Pallier, Christophe

    2011-01-01

    We report performance measures for lexical decision (LD), word naming (NMG), and progressive demasking (PDM) for a large sample of monosyllabic monomorphemic French words (N = 1,482). We compare the tasks and also examine the impact of word length, word frequency, initial phoneme, orthographic and phonological distance to neighbors, age-of-acquisition, and subjective frequency. Our results show that objective word frequency is by far the most important variable to predict reaction times in LD. For word naming, it is the first phoneme. PDM was more influenced by a semantic variable (word imageability) than LD, but was also affected to a much greater extent by perceptual variables (word length, first phoneme/letters). This may reduce its usefulness as a psycholinguistic word recognition task.

  14. A METHODOLOGY FOR THE COMPARATIVE EVALUATION OF PROFESSIONAL STANDARDS AND EDUCATIONAL STANDARDS USING NON-NUMERIC DATA PROCESSING METHODS

    Directory of Open Access Journals (Sweden)

    Gennady V. Abramov

    2016-01-01

    Full Text Available The article discusses the development of a technique that allows a comparative assessment of the requirements of a professional standard against the federal state educational standards. Universities can use the results to analyze their curricula and adjust the learning process for better compliance with professional standards.

  15. Comparative Numerical Analysis of Sheet Formed into a V-Shaped Die Using Conventional and Electromagnetic Forming Processes

    Directory of Open Access Journals (Sweden)

    Jeong Kim

    2014-05-01

    Full Text Available A comparative and numerical study on the formability of a sheet formed into a V-shaped die using a conventional stamping operation and an electromagnetic forming (EMF process was performed. To evaluate the damage evolution and failure prediction using a finite-element method (FEM, the Gurson-Tvergaard-Needleman plasticity material model was employed in the numerical simulation. The impact of the sheet with the die generates a complex stress state during the EMF process. Damage suppression due to the tool-sheet interaction may be one of the main factors contributing to the increased formability in the EMF process compared to the conventional forming operation. In addition, a high level of kinetic energy produces high strain-rate constitutive and inertial effects, which delay the onset of necking and may also be responsible for the increased formability using EMF.
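
    For reference, the Gurson-Tvergaard-Needleman model used in the simulation is built around a porosity-dependent yield surface, commonly written as

```latex
\Phi = \left(\frac{\sigma_{\mathrm{eq}}}{\sigma_y}\right)^{2}
     + 2\,q_1 f^{*}\cosh\!\left(\frac{3\,q_2\,\sigma_m}{2\,\sigma_y}\right)
     - \left(1 + q_3\,{f^{*}}^{2}\right) = 0,
```

    where σ_eq is the von Mises equivalent stress, σ_m the mean (hydrostatic) stress, σ_y the matrix yield stress, f* the effective void volume fraction, and q1-q3 calibration parameters (q3 = q1² is a common choice). The evolution of f* is what lets the model track damage and predict failure in both forming processes.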

  16. Comparative tensile strength study of the adhesion improvement of PTFE by UV photon assisted surface processing

    Science.gov (United States)

    Hopp, B.; Geretovszky, Zs.; Bertóti, I.; Boyd, I. W.

    2002-01-01

    Poly(tetrafluoroethylene) (PTFE) is notable for its non-adhesive and non-reactive properties. A number of technologies can potentially benefit from the application of PTFE, but these characteristics restrict the ability to structure its surface. In this paper, we present results on two ultraviolet photon assisted treatments of PTFE. The originally poor adhesion was significantly improved by both 172 nm excimer lamp and 193 nm excimer laser assisted surface treatments. While Xe2∗ lamp irradiation, applied in a modest vacuum environment, was sufficient by itself to improve adhesion, the ArF laser process was only effective when the irradiated interface was in contact with the 1,2-diaminoethane photoreagent. It was found that the tensile strength of an epoxy-resin-glued interface created on treated surfaces depended strongly on the applied number of laser pulses and on the lamp irradiation time. Laser treatment caused a fast tensile strength increase during the first 50-500 pulses, after which it saturated slowly at about 5.5 MPa in the 500-2500 pulse domain. The excimer lamp irradiation resulted in a maximum tensile strength of approximately 10 MPa after 2 min irradiation time, which reduced to about 65% of the peak value at longer times.

  17. Current Practice: comparative analysis and ways to improve the assessment process

    Directory of Open Access Journals (Sweden)

    Allan Rackham

    1999-03-01

    Full Text Available New Zealand approaches to territorial landscape assessment have been strongly influenced by: research carried out in the United Kingdom in the 1970s, particularly visual quality assessments; and theoretical work from American universities during the 1970s and early 1980s. The visual emphasis of this work remained the focus of New Zealand landscape assessment into the early 1990s. However, a number of developments during the 1990s have encouraged a gradual readjustment of focus. Of particular significance have been: the introduction of the Resource Management Act 1991 (RMA91) with its holistic environmental perspective and broad-ranging landscape provisions; a greater bicultural awareness which recognises that Maori perspectives add different dimensions to mainstream landscape appreciation; and a highly competitive market economy where landscape investigations have to be cost effective and outcome focused. As a consequence of the introduction of the RMA91, many district and regional councils have commissioned landscape assessments. This paper considers the current situation of these district scale assessments and suggests ways of improving the assessment process.

  18. A detailed investigation of facial expression processing in congenital prosopagnosia as compared to acquired prosopagnosia.

    Science.gov (United States)

    Humphreys, Kate; Avidan, Galia; Behrmann, Marlene

    2007-01-01

    Whether the ability to recognize facial expression can be preserved in the absence of the recognition of facial identity remains controversial. The current study reports the results of a detailed investigation of facial expression recognition in three congenital prosopagnosic (CP) participants, in comparison with two patients with acquired prosopagnosia (AP) and a large group of 30 neurologically normal participants, including individually age- and gender-matched controls. Participants completed a fine-grained expression recognition paradigm requiring a six-alternative forced-choice response to continua of morphs of six different basic facial expressions (e.g. happiness and surprise). Accuracy, sensitivity and reaction times were measured. The performance of all three CP individuals was indistinguishable from that of controls, even for the most subtle expressions. In contrast, both individuals with AP displayed pronounced difficulties with the majority of expressions. The results from the CP participants attest to the dissociability of the processing of facial identity and of facial expression. Whether this remarkably good expression recognition is achieved through normal, or compensatory, mechanisms remains to be determined. Either way, this normal level of performance does not extend to include facial identity.

  19. Comparative and phylogenetic perspectives of the cleavage process in tailed amphibians.

    Science.gov (United States)

    Desnitskiy, Alexey G; Litvinchuk, Spartak N

    2015-10-01

    The order Caudata includes about 660 species and displays a variety of important developmental traits such as cleavage pattern and egg size. However, the cleavage process of tailed amphibians has never been analyzed within a phylogenetic framework. We use published data on the embryos of 36 species concerning the character of the third cleavage furrow (latitudinal, longitudinal or variable) and the magnitude of synchronous cleavage period (up to 3-4 synchronous cell divisions in the animal hemisphere or a considerably longer series of synchronous divisions followed by midblastula transition). Several species from basal caudate families Cryptobranchidae (Andrias davidianus and Cryptobranchus alleganiensis) and Hynobiidae (Onychodactylus japonicus) as well as several representatives from derived families Plethodontidae (Desmognathus fuscus and Ensatina eschscholtzii) and Proteidae (Necturus maculosus) are characterized by longitudinal furrows of the third cleavage and the loss of synchrony as early as the 8-cell stage. By contrast, many representatives of derived families Ambystomatidae and Salamandridae have latitudinal furrows of the third cleavage and extensive period of synchronous divisions. Our analysis of these ontogenetic characters mapped onto a phylogenetic tree shows that the cleavage pattern of large, yolky eggs with short series of synchronous divisions is an ancestral trait for the tailed amphibians, while the data on the orientation of third cleavage furrows seem to be ambiguous with respect to phylogeny. Nevertheless, the midblastula transition, which is characteristic of the model species Ambystoma mexicanum (Caudata) and Xenopus laevis (Anura), might have evolved convergently in these two amphibian orders.

  20. Nonword reading: comparing dual-route cascaded and connectionist dual-process models with human data.

    Science.gov (United States)

    Pritchard, Stephen C; Coltheart, Max; Palethorpe, Sallyanne; Castles, Anne

    2012-10-01

    Two prominent dual-route computational models of reading aloud are the dual-route cascaded (DRC) model, and the connectionist dual-process plus (CDP+) model. While sharing similarly designed lexical routes, the two models differ greatly in their respective nonlexical route architecture, such that they often differ on nonword pronunciation. Neither model has been appropriately tested for nonword reading pronunciation accuracy to date. We argue that empirical data on the nonword reading pronunciation of people is the ideal benchmark for testing. Data were gathered from 45 Australian-English-speaking psychology undergraduates reading aloud 412 nonwords. To provide contrast between the models, the nonwords were chosen specifically because DRC and CDP+ disagree on their pronunciation. Both models failed to accurately match the experiment data, and both have deficiencies in nonword reading performance. However, the CDP+ model performed significantly worse than the DRC model. CDP++, the recent successor to CDP+, had improved performance over CDP+, but was also significantly worse than DRC. In addition to highlighting performance shortcomings in each model, the variety of nonword responses given by participants points to a need for models that can account for this variety.

  1. Ultrafast laser processing of copper: A comparative study of experimental and simulated transient optical properties

    Science.gov (United States)

    Winter, Jan; Rapp, Stephan; Schmidt, Michael; Huber, Heinz P.

    2017-09-01

    In this paper, we present ultrafast measurements of the complex refractive index for copper up to a time delay of 20 ps, for laser fluences in the vicinity of the ablation threshold. The measured refractive index n and extinction coefficient k are supported by a simulation including the two-temperature model, with an accurate description of thermal and optical properties, and a thermomechanical model. Comparison of the measured time-resolved optical properties with results of the simulation reveals the underlying physical mechanisms in three distinct time delay regimes. It is found that in the early stage (-5 ps to 0 ps) the thermally excited d-band electrons make a major contribution to the laser pulse absorption and create a steep increase in the transient optical properties n and k. In the second time regime (0-10 ps) the material expansion influences the plasma frequency, which is also reflected in the transient extinction coefficient. In contrast, the refractive index n follows the total collision frequency. Additionally, the electron-ion thermalization time can be attributed to a minimum of the extinction coefficient at ∼10 ps. In the third time regime (10-20 ps) the transient extinction coefficient k indicates the surface cooling-down process.

  2. Inversion factor in the comparative analysis of dynamical processes in radioecology

    Energy Technology Data Exchange (ETDEWEB)

    Zarubin, O.; Zarubina, N. [Institute for Nuclear Research of National Academy of Science of Ukraine (Ukraine)

    2014-07-01

    We have studied levels of specific activity of radionuclides in fish and fungi of the Kiev region of Ukraine from 1986 to 2013, including the 30-km alienation zone of the Chernobyl Nuclear Power Plant (ChNPP) after the accident. The analysis of the dynamics of radionuclide specific activity was carried out for this period for 10 species of freshwater fishes of different trophic levels and 7 species of higher fungi. Repeated measurements of the specific activity of radionuclides in fish were carried out on the Kanevskoe reservoir and the cooling pond of ChNPP, and in fungi on 6 testing areas situated within the range of 2 to 150 km from ChNPP. The main attention was given to the accumulation of {sup 137}Cs. We have established that the dynamics of specific activity of {sup 137}Cs within different species of fish in the same reservoir is not identical. The dynamics of specific activity of {sup 137}Cs within various species of fungi of the same testing area is also not identical. The dynamics of specific activity of {sup 137}Cs within the investigated objects of various dry-land and water testing areas also varies. The authors suggest an inversion factor to be used for comparison of the dynamics of specific activity of {sup 137}Cs, which in the case of biota is a nonlinear process: K{sub inv} = A{sub t} / A{sub 0}, where A{sub 0} stands for the specific activity of the radionuclide at time 0 and A{sub t} for the specific activity of the radionuclide at time t. K{sub inv} thus reflects the ratio of the specific activity of the radionuclide to its starting value as a function of time, where K{sub inv} > 1 corresponds to an increase in specific activity and K{sub inv} < 1 corresponds to a decrease. For example, in 1987 - 1996, K{sub inv} of {sup 137}Cs in the fish Rutilus rutilus was equal to 0.57 in the Kanevskoe reservoir and 13.33 in the cooling pond of ChNPP, and for Blicca bjoerkna 0.95 and 29.61, respectively. In 1987 - 2011 K{sub inv} of {sup 137}Cs for R. rutilus in the Kanevskoe reservoir
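
    The inversion factor is a one-line computation. A sketch, taking K_inv as the specific activity at time t relative to the initial activity, the reading under which K_inv > 1 marks an increase (values below are the abstract's Rutilus rutilus examples):

```python
def inversion_factor(a0, a_t):
    """K_inv = A_t / A_0: > 1 means the specific activity of the radionuclide
    increased relative to its starting value, < 1 means it decreased."""
    if a0 <= 0:
        raise ValueError("initial specific activity must be positive")
    return a_t / a0
```

    For instance, an activity that falls from 100 to 57 Bq/kg gives K_inv = 0.57 (the Kanevskoe reservoir case), while growth by a factor of 13.33 gives K_inv = 13.33 (the ChNPP cooling pond case).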

  3. Using nonlinear models in fMRI data analysis: model selection and activation detection.

    Science.gov (United States)

    Deneux, Thomas; Faugeras, Olivier

    2006-10-01

    There is an increasing interest in using physiologically plausible models in fMRI analysis. These models raise new mathematical problems in terms of parameter estimation and interpretation of the measured data. In this paper, we show how to use physiological models to map and analyze brain activity from fMRI data. We describe a maximum likelihood parameter estimation algorithm and a statistical test that allow the following two actions: selecting the most statistically significant hemodynamic model for the measured data and deriving activation maps based on that model. Furthermore, as parameter estimation may leave considerable uncertainty about the exact values of parameters, model identifiability characterization is a particular focus of our work. We applied these methods to different variations of the Balloon Model (Buxton, R.B., Wong, E.C., and Frank, L.R. 1998. Dynamics of blood flow and oxygenation changes during brain activation: the balloon model. Magn. Reson. Med. 39: 855-864; Buxton, R.B., Uludağ, K., Dubowitz, D.J., and Liu, T.T. 2004. Modeling the hemodynamic response to brain activation. NeuroImage 23: 220-233; Friston, K.J., Mechelli, A., Turner, R., and Price, C.J. 2000. Nonlinear responses in fMRI: the balloon model, Volterra kernels, and other hemodynamics. NeuroImage 12: 466-477) in a visual perception checkerboard experiment. Our model selection proved that hemodynamic models explain the BOLD response better than linear convolution, in particular because they are able to capture features like the poststimulus undershoot or nonlinear effects. On the other hand, nonlinear and linear models are comparable when signals get noisier, which explains why activation maps obtained in both frameworks are comparable. The tools we have developed prove that statistical inference methods used in the framework of the General Linear Model can be generalized to nonlinear models.

  4. Bayesian model selection in hydrogeophysics: Application to conceptual subsurface models of the South Oyster Bacterial Transport Site, Virginia, USA

    Science.gov (United States)

    Brunetti, Carlotta; Linde, Niklas; Vrugt, Jasper A.

    2017-04-01

    Geophysical data can help to discriminate among multiple competing subsurface hypotheses (conceptual models). Here, we explore the merits of Bayesian model selection in hydrogeophysics using crosshole ground-penetrating radar data from the South Oyster Bacterial Transport Site in Virginia, USA. Implementation of Bayesian model selection requires computation of the marginal likelihood of the measured data, or evidence, for each conceptual model being used. In this paper, we compare three different evidence estimators, including (1) the brute force Monte Carlo method, (2) the Laplace-Metropolis method, and (3) the numerical integration method proposed by Volpi et al. (2016). The three types of subsurface models that we consider differ in their treatment of the porosity distribution and use (a) horizontal layering with fixed layer thicknesses, (b) vertical layering with fixed layer thicknesses and (c) a multi-Gaussian field. Our results demonstrate that all three estimators provide equivalent results in low parameter dimensions, yet in higher dimensions the brute force Monte Carlo method is inefficient. The isotropic multi-Gaussian model is most supported by the travel time data with Bayes factors that are larger than 10^100 compared to conceptual models that assume horizontal or vertical layering of the porosity field.
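
    For a conjugate toy problem, the first two evidence estimators named above can be checked against the exact marginal likelihood. A sketch for a single Gaussian datum with a Gaussian prior (all numbers are illustrative, not from the study; the Laplace step uses the posterior mode rather than a Metropolis chain):

```python
import numpy as np

rng = np.random.default_rng(0)
y = 0.5                      # single observed datum (illustrative)
sig_l, sig_p = 1.0, 1.0      # likelihood and prior standard deviations

def log_like(theta):
    return -0.5 * np.log(2 * np.pi * sig_l**2) - 0.5 * (y - theta)**2 / sig_l**2

# (1) brute-force Monte Carlo: evidence = prior average of the likelihood
thetas = rng.normal(0.0, sig_p, 200_000)
z_mc = np.exp(log_like(thetas)).mean()

# (2) Laplace approximation around the posterior mode
#     (exact here, because the posterior is itself Gaussian)
post_var = 1.0 / (1.0 / sig_l**2 + 1.0 / sig_p**2)
theta_map = post_var * y / sig_l**2
log_prior = -0.5 * np.log(2 * np.pi * sig_p**2) - 0.5 * theta_map**2 / sig_p**2
z_laplace = np.exp(log_like(theta_map) + log_prior) * np.sqrt(2 * np.pi * post_var)

# exact marginal likelihood: y ~ N(0, sig_l^2 + sig_p^2)
s2 = sig_l**2 + sig_p**2
z_exact = np.exp(-0.5 * y**2 / s2) / np.sqrt(2 * np.pi * s2)
```

    In one dimension both estimators agree with the exact value; the brute-force average is the one whose variance blows up as the parameter dimension grows, which is the inefficiency the abstract reports.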

  5. Comparing OSL and CN techniques for dating fluvial terraces and estimating surface process rates in Pamir

    Science.gov (United States)

    Fuchs, Margret; Gloaguen, Richard; Pohl, Eric; Sulaymonova, Vasila; Merchel, Silke; Rugel, Georg

    2014-05-01

    The quantification of surface process rates is crucial for understanding the topographic evolution of high mountains. Spatial and temporal variations in fluvial incision and basin-wide erosion make it possible to decipher the role of tectonic and climatic drivers. The Pamir is peculiar in both aspects because of its location at the western end of the India-Asia collision zone, and its position at the edge of two atmospheric circulation systems, the Westerlies and the Indian Summer Monsoon. The architecture of the Panj river network indicates prominent variations across the main tectonic structures of the Pamir. The trunk stream deflects from the predominantly westward river orientation and cuts across the southern and central Pamir domes before doubling back to the west and leaving the orogen. Optically stimulated luminescence (OSL) dating of fluvial terraces reveals short-term sedimentation along the trunk stream during the last ~25 kyr. The agreement of the OSL results with new exposure ages based on the cosmogenic nuclide (CN) 10Be confirms accurate terrace age modelling and treatment of incomplete bleaching. The consistent terrace sedimentation and exposure ages also suggest fast terrace abandonment and a rapid onset of incision. Considerable differences in terrace heights reflect high spatial variations of fluvial incision, independent of time interval, change in rock type or catchment increase. The highest rates, of (5.9 ± 1.1) mm/yr to (10.0 ± 2.0) mm/yr, describe the fluvial dynamics across the Shakhdara Dome and those related to the Darvaz Fault Zone. Lower rates of (3.9 ± 0.6) mm/yr to (4.5 ± 0.7) mm/yr indicate a transient stage north of the Yazgulom Dome. Fluvial incision decreases to rates ranging from (1.7 ± 0.3) mm/yr to (3.9 ± 0.7) mm/yr in graded river reaches associated with southern dome boundaries. This pattern agrees with the interpretation of successive upstream river captures across the southern and central Pamir domes inferred from morphometric analyses of river

  6. Synthesizing Equivalence Indices for the Comparative Evaluation of Technoeconomic Efficiency of Industrial Processes at the Design/Re-engineering Level

    Science.gov (United States)

    Fotilas, P.; Batzias, A. F.

    2007-12-01

    The equivalence indices synthesized for the comparative evaluation of technoeconomic efficiency of industrial processes are of critical importance since they serve both as (i) positive/analytic descriptors of the physicochemical nature of the process and (ii) measures of effectiveness, especially helpful for investigating competitiveness in the industrial/energy/environmental sector of the economy. In the present work, a new algorithmic procedure has been developed, which initially standardizes a real industrial process, then analyzes it as a compromise between two ideal processes, and finally synthesizes the index that can represent/reconstruct the real process as the result of a trade-off between the two ideal processes taken as parental prototypes. The same procedure performs fuzzy multicriteria ranking within a set of pre-selected industrial processes for two reasons: (a) to analyze the process most representative of the production/treatment under consideration, and (b) to use the 'second best' alternative as a dialectic pole in the absence of the two ideal processes mentioned above. An implementation of this procedure is presented, concerning a facility for biological wastewater treatment with six alternatives: activated sludge through (i) continuous-flow incompletely-stirred tank reactors in series, (ii) a plug flow reactor with dispersion, (iii) an oxidation ditch, and biological processing through (iv) a trickling filter, (v) rotating contactors, (vi) shallow ponds. The criteria used for fuzzy (to account for uncertainty) ranking are capital cost, operating cost, environmental friendliness, reliability, flexibility, and extendibility. Two complementary indices were synthesized for the (ii)-alternative ranked first, and their quantitative expressions were derived, covering a variety of kinetic models as well as recycle/bypass conditions. Finally, an analysis of estimating the optimal values of these indices at maximum technoeconomic efficiency is presented and the implications

  7. The comparative research on constituents of Radix Aconiti and its processing by HPLC quadrupole TOF-MS.

    Science.gov (United States)

    Wu, Jian; Hong, Bo; Wang, Jia; Wang, Xi; Niu, Sijia; Zhao, Chunjie

    2012-11-01

    Based upon the regulations stipulated by the State Food and Drug Administration of China, only the processed, detoxified tubers and roots of Aconitum are allowed to be administered orally, used in clinical decoctions and adopted as raw materials for pharmaceutical manufacturing, so the processing principle behind the preparation of Radix Aconiti is important for ensuring the quality of Radix Aconiti praeparata. A simple approach is described for HPLC-Q-TOF-MS screening and identification of many of the aconitine alkaloids present in unprocessed Radix Aconiti and Radix Aconiti praeparata. By comparing their fingerprints, the processing principle of the preparation of Radix Aconiti was developed. Twenty-nine compounds and 26 compounds were assigned to aconitine alkaloids and tentatively identified by comparing accurate mass and fragment information with that of authentic standards, or by mass spectrometry analysis and retrieval of the reference literature. The nonester alkaloids were almost the same. During processing of the preparation, the diester diterpene alkaloids decreased, the monoester diterpene alkaloids increased, and the lipo-alkaloids decreased markedly. These transformed components could be regarded as potential chemical markers for distinguishing between raw and processed herbs.

  8. Model Selection Coupled with a Particle Tracking Proxy Using Surface Deformation Data for Monitoring CO2 Plume Migration

    Science.gov (United States)

    Min, B.; Nwachukwu, A.; Srinivasan, S.; Wheeler, M. F.

    2015-12-01

    This study formulates a model selection framework that refines geological models for monitoring CO2 plume migration. Special emphasis is placed on CO2 injection and on the particular techniques used in this study, including model selection, particle tracking proxies, and partial coupling of flow and geomechanics. The proposed process starts with generating a large initial ensemble of reservoir models that reflect the prior uncertainty in the reservoir description, including all plausible geologic scenarios. These models are presumed to be conditioned to available static data. In the absence of production or injection data, all prior reservoir models are regarded as equiprobable. Thus, the model selection algorithm is applied to select a few representative reservoir models that are more consistent with the observed dynamic responses. A quick assessment of the models must then be performed to evaluate their dynamic characteristics and flow connectivity. This approach develops a particle tracking proxy and a finite element method solver for solving the flow equation and the stress problem, respectively. The shape of the CO2 plume is estimated using a particle-tracking proxy that serves as a fast approximation of finite-difference simulation models. Sequentially, a finite element method solver is coupled with the proxy for analyzing geomechanical effects resulting from CO2 injection. A method is then implemented to group the models into clusters based on similarities in the estimated responses. The posterior model set is chosen as the cluster that produces the minimum deviation from the observed field data. The efficacy of non-dominated sorting based on Pareto-optimality is also tested in the current model selection framework. The proposed scheme is demonstrated on a carbon sequestration project in Algeria. Coupling surface deformation data with well injection data enhances the efficiency of tracking the CO2 plume. Therefore, this algorithm provides a probabilistic
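
    The cluster-then-select step at the heart of such a framework can be sketched on a toy ensemble: a minimal k-means over proxy responses, keeping the cluster with the smallest mean misfit to the observation. All names, sizes, and numbers below are illustrative, not from the study:

```python
import numpy as np

rng = np.random.default_rng(7)

def kmeans(X, k, iters=50):
    """Plain k-means; centers seeded on the smallest/median/largest responses."""
    order = np.argsort(X.sum(axis=1))
    centers = X[order[np.linspace(0, len(X) - 1, k).astype(int)]].copy()
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy ensemble: 60 proxy responses (e.g. plume arrival times at 3 monitors),
# drawn from three distinct geologic scenarios
ensemble = np.vstack([rng.normal(m, 0.3, size=(20, 3)) for m in (1.0, 2.0, 3.0)])
observed = np.array([2.1, 1.9, 2.0])           # "field" observation

labels = kmeans(ensemble, 3)
misfit = np.linalg.norm(ensemble - observed, axis=1)
best = min(set(labels), key=lambda j: misfit[labels == j].mean())
posterior = ensemble[labels == best]           # refined (posterior) model set
```

    The posterior set here recovers the middle scenario, the one consistent with the observation; in the actual workflow the responses would come from the particle-tracking proxy coupled to the geomechanics solver rather than from random draws.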

  9. Bayesian model selection in complex linear systems, as illustrated in genetic association studies.

    Science.gov (United States)

    Wen, Xiaoquan

    2014-03-01

    Motivated by examples from genetic association studies, this article considers the model selection problem in a general complex linear model system and in a Bayesian framework. We discuss formulating model selection problems and incorporating context-dependent a priori information through different levels of prior specifications. We also derive analytic Bayes factors and their approximations to facilitate model selection and discuss their theoretical and computational properties. We demonstrate our Bayesian approach based on an implemented Markov Chain Monte Carlo (MCMC) algorithm in simulations and a real data application of mapping tissue-specific eQTLs. Our novel results on Bayes factors provide a general framework to perform efficient model comparisons in complex linear model systems.
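
    The exact Bayes factors derived in the paper depend on its hierarchical prior specification. As a generic stand-in, the BIC-based large-sample approximation to a Bayes factor for nested linear models can be sketched; this is a common rough approximation, not the paper's method, and all data below are simulated for illustration:

```python
import numpy as np

def bic(y, X):
    """BIC of an ordinary least squares fit with Gaussian errors
    (the noise variance is profiled out at its MLE)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    return -2.0 * loglik + (k + 1) * np.log(n)   # +1 parameter for sigma^2

def approx_bf10(y, X1, X0):
    """BF_10 ~ exp((BIC_0 - BIC_1) / 2): evidence for model 1 over model 0."""
    return float(np.exp(0.5 * (bic(y, X0) - bic(y, X1))))

# Toy single-marker association test: intercept-only vs intercept + genotype
rng = np.random.default_rng(3)
genotype = rng.integers(0, 3, size=200).astype(float)     # coded 0/1/2
phenotype = 0.5 * genotype + rng.normal(0.0, 1.0, 200)    # true effect present
X0 = np.ones((200, 1))
X1 = np.column_stack([np.ones(200), genotype])
```

    With a genuine effect the approximate Bayes factor strongly favors the model including the genotype; the paper's analytic Bayes factors play the same comparative role, but with principled, context-dependent priors instead of the BIC penalty.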

  10. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    Energy Technology Data Exchange (ETDEWEB)

    Elsheikh, Ahmed H., E-mail: aelsheikh@ices.utexas.edu [Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, TX (United States); Institute of Petroleum Engineering, Heriot-Watt University, Edinburgh EH14 4AS (United Kingdom); Wheeler, Mary F. [Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, TX (United States); Hoteit, Ibrahim [Department of Earth Sciences and Engineering, King Abdullah University of Science and Technology (KAUST), Thuwal (Saudi Arabia)

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs, so the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems.

  11. Comparative evaluation of iodoacids removal by UV/persulfate and UV/H2O2 processes.

    Science.gov (United States)

    Xiao, Yongjun; Zhang, Lifeng; Zhang, Wei; Lim, Kok-Yong; Webster, Richard D; Lim, Teik-Thye

    2016-10-01

    To develop a cost-effective method for post-formation mitigation of iodinated disinfection by-products, the degradation of iodoacids by UV, UV/PS (persulfate), and UV/H2O2 was extensively investigated in this study. UV direct photolysis of 4 iodoacids followed first-order kinetics, with rate constants in the range of 2.43 × 10⁻⁴ to 3.02 × 10⁻³ cm² kJ⁻¹. The derived quantum yields (Φ254) of the 4 iodoacids range from 0.13 to 0.34. A quantitative structure-activity relationship (QSAR) model was subsequently established and applied to predict the direct photolysis rates of 6 other structurally similar iodoacids whose standards are commercially unavailable. At a UV dose of 140 mJ cm⁻², as typically applied for disinfection of drinking water, the removal percentages of the 4 iodoacids were only between 3.35% and 34.7%. Thus, ICH2CO2H (IAA), the most photo-recalcitrant species, was selected as the target compound for removal in the UV/PS and UV/H2O2 processes. The IAA degradation rates decreased with increasing pH from 3 to 11 in both processes. Humic acid (HA) and HCO3⁻ had inhibitory effects on IAA degradation in both processes. Cl⁻ adversely affected IAA degradation in the UV/PS process but had no effect in the UV/H2O2 process. Generally, in deionized (DI) water, surface water, treated drinking water, and secondary effluent, the UV/PS process is more effective than the UV/H2O2 process for IAA removal at the same molar ratio of oxidant to IAA. The SO4·⁻ radical generated in the UV/PS process yields greater mineralization of IAA than HO· in the UV/H2O2 process. IO3⁻ was the predominant end-product in the UV/PS process, while I⁻ was the major end-product in the UV/H2O2 process. The contributions of UV, HO·, and SO4·⁻ to IAA removal in the UV/PS process were 7.8%, 14.7%, and 77.5%, respectively, at a specific condition (1.5 μM IAA, 60 μM oxidant, and pH 7). Compared to the UV/H2O2 process, UV/PS was also observed as more cost
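
    The fluence-based first-order kinetics reported above give the removal at a given UV dose directly, C/C0 = exp(-k·dose). A minimal sketch (assuming, as an interpretation of the abstract's units, that k is per mJ cm⁻², which reproduces the reported 3.35%-34.7% removals at 140 mJ cm⁻²):

```python
import math

def uv_removal_fraction(k, dose):
    """Fluence-based first-order photolysis: C/C0 = exp(-k * dose).
    k in cm2/mJ (an assumption), dose in mJ/cm2; returns fraction removed."""
    return 1.0 - math.exp(-k * dose)
```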

  12. Manager's Discretionary Power and Comparability of Financial Reports: An Analysis of the Regulatory Transition Process in Brazilian Accounting

    Directory of Open Access Journals (Sweden)

    Alex Mussoi Ribeiro

    2016-04-01

    This research aimed to directly evaluate the impact of the accounting regulatory flexibility movement on the comparability of financial reports. The country chosen for the analysis was Brazil, because it was one of the few countries in the world to undergo a regulatory change from a completely rule-based standard with a strong link to tax accounting (Lopes, 2011) to a principle-based standard that leaves greater room for decisions by the managers who prepare the financial reports. To measure comparability, the accounting-function similarity model developed by DeFranco, Kothari and Verdi (2011) was used. The companies analyzed were all listed companies with full data for the period concerned and with at least one peer company in the same economic activity sector. To obtain the research results, we adopted a panel data model in which the years 2005 to 2012 were compared to the year 2004. The results show that, on average, there was no significant decrease in the within-country comparability level during the regulatory transition period in Brazil. On the contrary, there was an increase in genuine comparability in 2012 when compared to 2004. In the model adjusted by stepwise selection, the years 2011 and 2012 had a significantly higher average comparability than 2004. These findings corroborate other research addressing the quality of accounting information (Collins, Pasewark, & Riley, 2012; Psaros & Trotman, 2004; Agoglia, Doupnik, & Tsakumis, 2011) and support the superiority of the principle-based standard with respect to the comparability of financial reports as well. The main conclusion of this research is that increasing managers' discretionary power through flexibility of accounting standards does not decrease the comparability of financial reports.
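
    The DeFranco, Kothari and Verdi (2011) measure rests on firm-level regressions of earnings on returns, with comparability defined by how similarly two firms' fitted accounting functions map the same returns into earnings. A simplified single-period sketch (the real measure uses quarterly panels and closeness rankings; the names here are illustrative):

```python
import numpy as np

def comparability(returns_i, earnings_i, returns_j, earnings_j):
    """Accounting-function similarity in the spirit of DeFranco et al. (2011):
    fit earnings = a + b * return for each firm, then compare the two fitted
    functions on firm i's returns. Values closer to 0 mean more comparable."""
    coef_i = np.polyfit(returns_i, earnings_i, 1)
    coef_j = np.polyfit(returns_j, earnings_j, 1)
    pred_i = np.polyval(coef_i, returns_i)   # i's function on i's returns
    pred_j = np.polyval(coef_j, returns_i)   # j's function on i's returns
    return -np.mean(np.abs(pred_i - pred_j))
```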

  13. Exergy destruction and losses on four North Sea offshore platforms: A comparative study of the oil and gas processing plants

    DEFF Research Database (Denmark)

    Voldsund, Mari; Nguyen, Tuong-Van; Elmegaard, Brian

    2014-01-01

    The oil and gas processing plants of four North Sea offshore platforms are analysed and compared, based on the exergy analysis method. Sources of exergy destruction and losses are identified and the findings for the different platforms are compared. Different platforms have different working conditions, which implies that some platforms need less heat and power than others. Reservoir properties and composition vary over the lifetime of an oil field, and therefore maintaining a high efficiency of the processing plant is challenging. The results of the analysis show that 27%-57% of the exergy … differs across offshore platforms. However, the results indicate that the largest rooms for improvement lie in (i) gas compression systems where large amounts of gas may be compressed and recycled to prevent surge, (ii) production manifolds where well-streams are depressurised and mixed, and (iii) …

  14. Children's resilience and trauma-specific cognitive behavioral therapy: Comparing resilience as an outcome, a trait, and a process.

    Science.gov (United States)

    Happer, Kaitlin; Brown, Elissa J; Sharma-Patel, Komal

    2017-09-20

    Resilience, which is associated with relatively positive outcomes following negative life experiences, is an important research target in the field of child maltreatment (Luthar et al., 2000). The extant literature contains multiple conceptualizations of resilience, which hinders development in research and clinical utility. Three models emerge from the literature: resilience as an immediate outcome (i.e., behavioral or symptom response), resilience as a trait, and resilience as a dynamic process. The current study compared these models in youth undergoing trauma-specific cognitive behavioral therapy. Results provide the most support for resilience as a process, in which increase in resilience preceded associated decrease in posttraumatic stress and depressive symptoms. There was partial support for resilience conceptualized as an outcome, and minimal support for resilience as a trait. Results of the models are compared and discussed in the context of existing literature and in light of potential clinical implications for maltreated youth seeking treatment. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Comparative analysis between the SPIF and DPIF variants for die-less forming process for an automotive workpiece

    Directory of Open Access Journals (Sweden)

    Adrian José Benitez Lozano

    2015-07-01

    Over time, the die-less incremental forming process has been developed in many ways to meet the needs of flexible production with no investment in tooling and low production costs. Two of its configurations are the SPIF (single point incremental forming) and DPIF (double point incremental forming) techniques. The aim of this study is to compare both techniques in order to expose their advantages and disadvantages in the production of industrial parts, as well as to present die-less forming as an alternative manufacturing process. Experiments with the exhaust pipe cover of a vehicle are performed, the main process parameters are described, and formed workpieces without evidence of defects are achieved. Significant differences between the two techniques in terms of production times and accuracy with respect to the original model are also detected. Finally, it is suggested when each technique is more convenient to use.

  16. A comparative study of sustained attentional bias on emotional processing in ADHD children to pictures with eye-tracking.

    Science.gov (United States)

    Pishyareh, Ebrahim; Tehrani-Doost, Mehdi; Mahmoodi-Gharaie, Javad; Khorrami, Anahita; Rahmdar, Saeid Reza

    2015-01-01

    Children with ADHD display anomalous and negative behavior, especially in emotionally related contexts, when compared to others. Evidence indicates that attention has an impact on emotional processing. The present study evaluates the effect of emotional processing on the sustained attention of children with ADHD combined type. Sixty participants formed two equal groups (each with 30 children) of normal and ADHD children, and each subject met the required selection criteria as either a normal or an ADHD child. Both groups were aged 6-11 years. All pictures were chosen from the International Affective Picture System (IAPS) and presented paired emotional and neutral scenes in the following categories: pleasant-neutral, pleasant-unpleasant, unpleasant-neutral, and neutral-neutral. Sustained attention was evaluated based on the number and duration of total fixations and was compared between the groups with a MANOVA. The duration of sustained attention on pleasant scenes in the pleasant-unpleasant pair differed significantly, and the bias in duration of sustained attention on pleasant scenes in pleasant-neutral pairs was significantly different between the groups. Such significant differences might be indicative of deficiencies in emotional processing in children with ADHD. It seems that the strong pull of emotionally unpleasant scenes on the attention of children with ADHD is responsible for impulsiveness and abnormal processing of emotional stimuli.

  17. Comparing the electrical characteristics and reliabilities of BJTs and MOSFETs between Pt and Ti contact silicide processes

    Science.gov (United States)

    Liu, Kaiping; Shang, Ling

    1999-08-01

    The sub-threshold characteristics and the reliability of BJTs using platinum contact silicide (PtSi) or titanium contact silicide (TiSi2) are compared and analyzed. During processing, it is observed that the TiSi2 process produces a higher interface state density (Dit) than the PtSi process. The increase in Dit not only leads to a higher base current in the BJTs, but also to a lower transconductance for the MOS transistors. The data also show that the impact on NPN and nMOS devices is more severe than the impact on PNP and pMOS devices, respectively. This can be explained by the non-symmetric interface state distribution, the re-activation of boron, and/or by substrate trap centers. The amount of interface states produced depends not only on the thickness of the titanium film deposited, but also on the temperature and duration of the titanium silicide process. The electrical data indicate that after all the Back-End-Of-The-Line processing steps, which include a forming gas anneal, Dit is still higher on wafers with the TiSi2 process. The NPN transistor's base current increases at different rates between the two processes, but eventually levels off to the same final value. However, the PNP transistor's base current increases at approximately the same rate, but eventually levels off at different final values. These results indicate that the TiSi2 process may have modified the silicon and oxygen dangling bond structure during its high temperature step, in addition to removing the hydrogen from the passivated interface states.

  18. Decolorization of distillery spent wash effluent by electro oxidation (EC and EF) and Fenton processes: A comparative study.

    Science.gov (United States)

    David, Charles; Arivazhagan, M; Tuvakara, Fazaludeen

    2015-11-01

    In this study, laboratory scale experiments were performed to degrade the highly concentrated organic matter responsible for the color of distillery spent wash through batch oxidative methods: electrocoagulation (EC), electro-Fenton (EF) and the Fenton process. The corresponding operating parameters, namely initial pH (2-10), current intensity (1-5 A), electrolysis time (0.5-4 h), agitation speed (100-500 rpm), inter-electrode distance (0.5-4 cm) and Fenton's reagent dosage (5-40 mg/L), were varied to optimize spent wash color removal. The performance of all three processes was compared and assessed in terms of percentage color removal. For EC, 79% color removal was achieved using iron electrodes arranged with 0.5 cm of inter-electrode space, at optimum conditions of pH 7, 5 A current intensity, 300 rpm agitation speed and 2 h of electrolysis time. In EF, 44% spent wash decolorization was observed using carbon (graphite) electrodes under optimum conditions of 0.5 cm inter-electrode distance, pH 3, 4 A current intensity, 20 mg/L FeSO4 and 400 rpm agitation speed for 3 h of electrolysis time. By the Fenton process, 66% decolorization was attained at optimized conditions of pH 3, 40 mg/L of Fenton's reagent and 500 rpm agitation speed for 4 h of treatment time.

  19. Comparing the neural bases of self-referential processing in typically developing and 22q11.2 adolescents.

    Science.gov (United States)

    Schneider, Maude; Debbané, Martin; Lagioia, Annalaura; Salomon, Roy; d'Argembeau, Arnaud; Eliez, Stephan

    2012-04-01

    The investigation of self-reflective processing during adolescence is relevant, as this period is characterized by a deep reorganization of the self-concept. It may be the case that atypical development of the brain regions underlying self-reflective processing increases the risk for psychological disorders and impaired social functioning. In this study, we investigated the neural bases of self- and other-related processing in typically developing adolescents and youths with 22q11.2 deletion syndrome (22q11DS), a rare neurogenetic condition associated with difficulties in social interactions and increased risk for schizophrenia. The fMRI paradigm consisted of judging whether a series of adjectives applied to the participant himself/herself (self), to his/her best friend, or to a fictional character (Harry Potter). In control adolescents, we observed that self- and other-related processing elicited strong activation in cortical midline structures (CMS) when contrasted with a semantic baseline condition. Participants with 22q11DS exhibited hypoactivation in the CMS and the striatum during the processing of self-related information when compared to the control group. Finally, hypoactivation in the anterior cingulate cortex was associated with the severity of prodromal positive symptoms of schizophrenia. The findings are discussed in a developmental framework and in light of their implications for the development of schizophrenia in this at-risk population.

  20. Pretreatment of furfural industrial wastewater by Fenton, electro-Fenton and Fe(II)-activated peroxydisulfate processes: a comparative study.

    Science.gov (United States)

    Yang, C W; Wang, D; Tang, Q

    2014-01-01

    The Fenton, electro-Fenton and Fe(II)-activated peroxydisulfate (PDS) processes were applied for the treatment of actual furfural industrial wastewater in this paper. Through a comparative study of the three processes, a suitable pretreatment technology for actual furfural wastewater treatment was identified, and its mechanism and kinetics are discussed. The experimental results show that the Fenton process performs well and stably without adjusting the pH of the furfural wastewater. At the optimal conditions of 40 mmol/L initial H₂O₂ concentration and 10 mmol/L initial Fe²⁺ concentration, the chemical oxygen demand (COD) removal rate reached 81.2% after 90 min of reaction at 80 °C. The PDS process also performed well: the COD removal rate attained 80.3% when the initial Na₂S₂O₈ concentration was 4.2 mmol/L, the initial Fe²⁺ concentration was 0.1 mol/L, the temperature remained at 70 °C, and the pH value remained at 2.0. The electro-Fenton process was not competent to deal with the high-temperature furfural industrial wastewater, and only 10.2% of COD was degraded at 80 °C under the optimal conditions (2.25 mA/cm² current density, 4 mg/L Na₂SO₄, 0.3 m³/h aeration rate). For the Fenton, electro-Fenton and PDS processes in the pretreatment of furfural wastewater, the kinetics follow a pseudo-first-order law. The pretreatment pathways of furfural wastewater degradation are also investigated in this study. The results show that furfural and furan formic acid in furfural wastewater are preferentially degraded by the Fenton process, and furfural can be degraded into low-toxicity or nontoxic compounds by Fenton pretreatment, which could make furfural wastewater harmless and even reusable.
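
    The pseudo-first-order law mentioned above, ln(C0/C) = k·t, gives the rate constant as a through-the-origin least-squares slope. A minimal sketch (the function name and units are illustrative):

```python
import numpy as np

def pseudo_first_order_k(t, conc):
    """Least-squares slope of ln(C0/C) = k * t through the origin.
    t in minutes, conc as COD (or color) values; returns k in 1/min."""
    t = np.asarray(t, dtype=float)
    y = np.log(conc[0] / np.asarray(conc, dtype=float))
    return float(t @ y / (t @ t))
```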

  1. A Short Introduction to Model Selection, Kolmogorov Complexity and Minimum Description Length (MDL)

    NARCIS (Netherlands)

    Nannen, Volker

    2010-01-01

    The concept of overfitting in model selection is explained and demonstrated. After providing some background information on information theory and Kolmogorov complexity, we provide a short explanation of Minimum Description Length and error minimization. We conclude with a discussion of the typical
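
    The MDL idea of trading model cost against data-fit cost can be illustrated with a crude two-part code length for polynomial regression; this is a textbook-style stand-in, not the booklet's own formulation:

```python
import numpy as np

def mdl_select_degree(x, y, max_degree=6):
    """Crude two-part MDL: pick the polynomial degree minimizing
    (n/2)*log(RSS/n) + (k/2)*log(n), a stand-in for a real MDL code length.
    Higher degrees fit better but cost more bits to describe."""
    n = len(x)
    best, best_len = 0, np.inf
    for d in range(max_degree + 1):
        coef = np.polyfit(x, y, d)
        rss = float(np.sum((np.polyval(coef, x) - y) ** 2))
        k = d + 1                       # number of coefficients
        length = 0.5 * n * np.log(max(rss, 1e-12) / n) + 0.5 * k * np.log(n)
        if length < best_len:
            best, best_len = d, length
    return best
```

    On data generated by a quadratic, the penalty term stops the selector from drifting to the higher degrees that fit the sample marginally better.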

  2. Forecasting house prices in the 50 states using Dynamic Model Averaging and Dynamic Model Selection

    DEFF Research Database (Denmark)

    Bork, Lasse; Møller, Stig Vinther

    2015-01-01

    We examine house price forecastability across the 50 states using Dynamic Model Averaging and Dynamic Model Selection, which allow for model change and parameter shifts. By allowing the entire forecasting model to change over time and across locations, the forecasting accuracy improves...
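
    Dynamic Model Averaging maintains time-varying model probabilities updated with a forgetting factor; a minimal version of the recursion (with hypothetical variable names, in the spirit of the DMA literature) looks like:

```python
import numpy as np

def dma_weights(pred_dens, alpha=0.99):
    """Dynamic Model Averaging weight path. pred_dens[t, m] is model m's
    one-step predictive density at time t; alpha < 1 is the forgetting
    factor that lets model probabilities drift over time. Returns T x M."""
    pred_dens = np.asarray(pred_dens, dtype=float)
    T, M = pred_dens.shape
    w = np.full(M, 1.0 / M)
    path = np.empty((T, M))
    for t in range(T):
        w = w ** alpha
        w /= w.sum()            # forgetting: flatten toward uniform
        w = w * pred_dens[t]
        w /= w.sum()            # Bayes update with predictive likelihoods
        path[t] = w
    return path
```

    Dynamic Model Selection then simply picks, at each date, the model with the largest weight in the corresponding row.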

  3. Bayesian Exploratory and Confirmatory Factor Analysis: Perspectives on Constrained-Model Selection

    NARCIS (Netherlands)

    Peeters, C.F.W.

    2012-01-01

    The dissertation revolves around three aims. The first aim is the construction of a conceptually and computationally simple Bayes factor for Type I constrained-model selection (dimensionality determination) that is determinate under usage of improper priors and the subsequent utilization of this

  4. Mindfulness training alters emotional memory recall compared to active controls: support for an emotional information processing model of mindfulness

    Directory of Open Access Journals (Sweden)

    Doug eRoberts-Wolfe

    2012-02-01

    Full Text Available Objectives: While mindfulness-based interventions have received widespread application in both clinical and non-clinical populations, the mechanism by which mindfulness meditation improves well-being remains elusive. One possibility is that mindfulness training alters the processing of emotional information, similar to prevailing cognitive models of depression and anxiety. The aim of this study was to investigating the effects of mindfulness training on emotional information processing (i.e. memory biases in relation to both clinical symptomatology and well-being in comparison to active control conditions.Methods: Fifty-eight university students (28 female, age = 20.1 ± 2.7 years participated in either a 12-week course containing a "meditation laboratory" or an active control course with similar content or experiential practice laboratory format (music. Participants completed an emotional word recall task and self-report questionnaires of well-being and clinical symptoms before and after the 12-week course.Results: Meditators showed greater increases in positive word recall compared to controls F(1, 56 = 6.6, p = .02. The meditation group increased significantly more on measures of well-being [F(1, 56 = 6.6, p = .01], with a marginal decrease in depression and anxiety [(F(1, 56 = 3.0, p = .09] compared to controls. Increased positive word recall was associated with increased psychological well-being [r = 0.31, p = .02] and decreased clinical symptoms [r = -0.29, p = .03].Conclusion: Mindfulness training was associated with greater improvements in processing efficiency for positively valenced stimuli than active control conditions. This change in emotional information processing was associated with improvements in psychological well-being and less depression and anxiety. These data suggest that mindfulness training may improve well-being via changes in emotional information processing.

  5. Resource requirements and economics of the coal-mining process: a comparative analysis of mines in selected countries

    Energy Technology Data Exchange (ETDEWEB)

    Astakhov, A.; Gruebler, A.

    1984-06-01

    This report examines the natural resource requirements and economics of the resource extraction process, taking coal-mining activities as an example. Coal was chosen for the study because it is receiving growing attention as the fossile energy resource with the largest potential to contribute to the world's long-term energy supply. The computerized description of the extraction process is stored in the Coal Mines Data Base (CMDB) which was developed within the framework of this study. The data base currently holds information on 70 mines located in different countries. The analytic approach used is the first of its kind to compare resource requirements and economics of coal mines under such a broad range of geological and socioeconomic conditions. A general model of the factors influencing resource inputs and impacts of the coal-mining process is presented. Then for each of the main mining methods (opencast, conventional underground, and hydraulic underground) the principal geological and technological factors influencing the resource requirements, economics, and environmental impacts, as well as the comparative advantages and disadvantages of each mining method, are discussed. For the three main mining methods the resource requirements (including manpower, energy, materials, and land) and the economics (including construction investments and operating costs) are then quantified and their cost structures (i.e. requirements for the different operations at a mine) are examined in detail using data from coal mines in the USA, the USSR, and other selected coal-producing countries (Australia, Austria, and France).

  6. Bayesian parameter inference and model selection by population annealing in systems biology.

    Science.gov (United States)

    Murakami, Yohei

    2014-01-01

    Parameter inference and model selection are very important for mathematical modeling in systems biology, and Bayesian statistics can be used to conduct both. In particular, the framework named approximate Bayesian computation (ABC) is often used for parameter inference and model selection in systems biology. However, Monte Carlo methods need to be used to compute Bayesian posterior distributions, and the posterior distributions of parameters are sometimes almost uniform or very similar to their prior distributions. In such cases, it is difficult to choose one specific parameter value with high credibility as the representative value of the distribution. To overcome these problems, we introduced one of the population Monte Carlo algorithms, population annealing. Although population annealing is usually used in statistical mechanics, we showed that it can be used to compute Bayesian posterior distributions within the approximate Bayesian computation framework. To deal with the non-identifiability of representative parameter values, we proposed running the simulations with a parameter ensemble sampled from the posterior distribution, named the "posterior parameter ensemble". We showed that population annealing is an efficient and convenient algorithm for generating such a posterior parameter ensemble. We also showed that simulations with the posterior parameter ensemble can not only reproduce the data used for parameter inference, but also capture and predict data that were not used for inference. Lastly, we introduced the marginal likelihood in the approximate Bayesian computation framework for Bayesian model selection, and showed that population annealing enables us to compute the marginal likelihood in this framework and to conduct model selection based on the Bayes factor.
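
    The annealed-tolerance idea can be sketched as a simplified, unweighted ABC population scheme: each generation resamples and perturbs the previous population and keeps draws whose simulated data fall within a shrinking tolerance. Population annealing proper adds importance weights and a resampling schedule; everything below is an illustrative toy:

```python
import numpy as np

def abc_population(simulate, observed, prior_sample, tolerances,
                   n_particles=300, sigma=0.1, seed=0):
    """Sequential ABC with an annealed tolerance schedule (a simplified,
    unweighted cousin of population annealing / ABC-SMC). Returns the
    final accepted parameter population."""
    rng = np.random.default_rng(seed)
    pop = None
    for eps in tolerances:
        new = []
        while len(new) < n_particles:
            if pop is None:
                theta = prior_sample(rng)          # first generation: prior
            else:                                  # later: resample + perturb
                theta = pop[rng.integers(len(pop))] + sigma * rng.standard_normal()
            if abs(simulate(theta, rng) - observed) <= eps:
                new.append(theta)
        pop = np.array(new)
    return pop
```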

  7. Comparing ICD9-encoded diagnoses and NLP-processed discharge summaries for clinical trials pre-screening: a case study.

    Science.gov (United States)

    Li, Li; Chase, Herbert S; Patel, Chintan O; Friedman, Carol; Weng, Chunhua

    2008-11-06

    The prevalence of electronic medical record (EMR) systems has made mass-screening for clinical trials viable through secondary uses of clinical data, which often exist in both structured and free text formats. The tradeoffs of using information in either data format for clinical trials screening are understudied. This paper compares the results of clinical trial eligibility queries over ICD9-encoded diagnoses and NLP-processed textual discharge summaries. The strengths and weaknesses of both data sources are summarized along the following dimensions: information completeness, expressiveness, code granularity, and accuracy of temporal information. We conclude that NLP-processed patient reports supplement important information for eligibility screening and should be used in combination with structured data.
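
    Conceptually, the pre-screening combines both evidence sources before testing the eligibility criteria; a minimal set-based sketch (the criteria sets and codes are invented examples, not the study's queries):

```python
def eligible(patient_codes, nlp_concepts, inclusion, exclusion):
    """Pre-screening with structured ICD-9 codes plus NLP-extracted concepts:
    a patient passes if the union of both sources covers every inclusion
    criterion and contains no exclusion criterion."""
    evidence = set(patient_codes) | set(nlp_concepts)
    return inclusion <= evidence and not (exclusion & evidence)
```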

  8. Different signal processing techniques of ratio spectra for spectrophotometric resolution of binary mixture of bisoprolol and hydrochlorothiazide; a comparative study.

    Science.gov (United States)

    Elzanfaly, Eman S; Hassan, Said A; Salem, Maissa Y; El-Zeany, Badr A

    2015-04-01

    Five signal processing techniques were applied to ratio spectra for quantitative determination of bisoprolol (BIS) and hydrochlorothiazide (HCT) in their binary mixture. The proposed techniques are Numerical Differentiation of Ratio Spectra (ND-RS), Savitsky-Golay of Ratio Spectra (SG-RS), Continuous Wavelet Transform of Ratio Spectra (CWT-RS), Mean Centering of Ratio Spectra (MC-RS) and Discrete Fourier Transform of Ratio Spectra (DFT-RS). The linearity of the proposed methods was investigated in the range of 2-40 and 1-22 μg/mL for BIS and HCT, respectively. The proposed methods were applied successfully for the determination of the drugs in laboratory prepared mixtures and in commercial pharmaceutical preparations and standard deviation was less than 1.5. The five signal processing techniques were compared to each other and validated according to the ICH guidelines and accuracy, precision, repeatability and robustness were found to be within the acceptable limit.
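
    The common backbone of these techniques is the ratio spectrum itself: dividing the mixture spectrum by a pure-component spectrum turns that component's contribution into a constant, which any derivative-like operator then removes. A minimal ND-RS sketch with synthetic Gaussian bands (illustrative, not the paper's exact procedure):

```python
import numpy as np

def nd_ratio_spectrum(mixture, divisor, wavelengths):
    """Numerical Differentiation of Ratio Spectra (ND-RS): divide the mixture
    spectrum by one pure component's spectrum, then differentiate; the
    divisor component becomes a constant and vanishes in the derivative."""
    ratio = mixture / divisor
    return np.gradient(ratio, wavelengths)
```

    The test below checks the defining property: the derivative of the ratio spectrum of the mixture equals that of the other component alone, because the divisor's constant term drops out.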

  9. A comparative study of abiological granular sludge (ABGS) formation in different processes for zinc removal from wastewater.

    Science.gov (United States)

    Chai, Liyuan; Yan, Xu; Li, Qingzhu; Yang, Bentao; Wang, Qingwei

    2014-11-01

    Abiological granular sludge (ABGS) formation is a potential and facile strategy for improving sludge settling performance during zinc removal from wastewater using chemical precipitation. In this study, the effect of pH, seed dosage, and flocculant dosage on ABGS formation and treated water quality was investigated. Results show that settling velocity of ABGS can reach up to 4.00 cm/s under optimal conditions, e.g., pH of 9.0, zinc oxide (ZnO) seeds dosage of 1.5 g/l, and polyacrylamide (PAM) dosage of 10 mg/l. More importantly, ABGS formation mechanism was investigated in NaOH precipitation process and compared with that in bio-polymer ferric sulfate (BPFS)-NaOH precipitation process regarding their sludge structure and composition. In the NaOH precipitation process, ABGS formation depends on some attractions between particles, such as van der Waals attraction and bridging attraction. However, during the BPFS-NaOH sludge formation process, steric repulsion becomes dominant due to the adsorption of BPFS on ZnO seeds. This repulsion further causes extremely loose structure and poor settling performance of BPFS-NaOH sludge.

  10. Inference of the protokaryotypes of amniotes and tetrapods and the evolutionary processes of microchromosomes from comparative gene mapping.

    Directory of Open Access Journals (Sweden)

    Yoshinobu Uno

    Comparative genome analysis of non-avian reptiles and amphibians provides important clues about the process of genome evolution in tetrapods. However, there is still only limited information available on the genome structures of these organisms. Consequently, the protokaryotypes of amniotes and tetrapods and the evolutionary processes of microchromosomes in tetrapods remain poorly understood. We constructed chromosome maps of functional genes for the Chinese soft-shelled turtle (Pelodiscus sinensis), the Siamese crocodile (Crocodylus siamensis), and the Western clawed frog (Xenopus tropicalis) and compared them with genome and/or chromosome maps of other tetrapod species (salamander, lizard, snake, chicken, and human). This is the first report on the protokaryotypes of amniotes and tetrapods and the evolutionary processes of microchromosomes inferred from comparative genomic analysis of vertebrates covering all major non-avian reptilian taxa (Squamata, Crocodilia, Testudines). The eight largest macrochromosomes of the turtle and chicken were equivalent, and 11 linkage groups had also remained intact in the crocodile. Linkage groups of the chicken macrochromosomes were also highly conserved in X. tropicalis, the two squamates, and the salamander, but not in human. Chicken microchromosomal linkages were conserved in the squamates, which have fewer microchromosomes than chicken, and also in Xenopus and the salamander, which both lack microchromosomes; in the latter, the chicken microchromosomal segments have been integrated into macrochromosomes. Our present findings open up the possibility that the ancestral amniotes and tetrapods had at least 10 large genetic linkage groups and many microchromosomes, which corresponded to the chicken macro- and microchromosomes, respectively. The turtle and chicken might retain the microchromosomes of the amniote protokaryotype almost intact. The decrease in number and/or disappearance of microchromosomes by repeated

  11. COMPARATIVE STUDY IN THE PASSIVE FORCE AND CUTTING TORQUE IN THE MILLING PROCESS OF POLYMER MATRIX COMPOSITES AND ALUMINUM ALLOYS

    Directory of Open Access Journals (Sweden)

    Krzysztof Ciecieląg

    2013-06-01

    This paper presents the results of a study undertaken to investigate the passive force and cutting torque during the milling of carbon fiber reinforced plastics saturated with epoxy resin and of two aluminum alloys: AlSi21CuNi (AK 20) and 7075 (PA 9). The milling process was conducted using end mills with diamond inserts. The machining parameters were changed identically for each material, as a result of which the passive force and cutting torque during the milling of these materials could be compared.

  12. Pentaho and Jaspersoft: A Comparative Study of Business Intelligence Open Source Tools Processing Big Data to Evaluate Performances

    Directory of Open Access Journals (Sweden)

    Victor M. Parra

    2016-10-01

    Full Text Available Regardless of the recent growth in the use of “Big Data” and “Business Intelligence” (BI) tools, little research has been undertaken about the implications involved. Analytical tools affect the development and sustainability of a company, as evaluating clientele needs in order to advance in a competitive market is critical. As data volumes grow, processing large amounts of data has become too cumbersome for many companies. At some stage in a company’s lifecycle, all companies need to create new and better data processing systems that improve their decision-making processes. Companies use BI systems to collect and interpret data drawn from their data sets, supporting activities that give them an advantage in a competitive market. However, many organizations establish such systems without conducting a preliminary analysis of the needs and wants of the company, or without determining the benefits and targets that they aim to achieve with the implementation. They rarely measure the large costs associated with implementation overruns of such applications, which results in impulsive solutions that are unfinished or too complex and unfeasible; in other words, unsustainable even if implemented. BI open source tools are specific tools that solve this issue for organizations in need of data storage and management. This paper compares two of the best positioned BI open source tools in the market, Pentaho and Jaspersoft, processing big data through six different sized databases, focusing especially on their Extract, Transform and Load (ETL) and Reporting processes and measuring their performances using Computer Algebra Systems (CAS). The ETL experimental analysis results clearly show that Jaspersoft BI incurs more CPU time than Pentaho BI when processing the data, represented by an average of 42.28% in performance metrics over the six databases.

  13. Comparing statistical and process-based flow duration curve models in ungauged basins and changing rain regimes

    Science.gov (United States)

    Müller, M. F.; Thompson, S. E.

    2016-02-01

    The prediction of flow duration curves (FDCs) in ungauged basins remains an important task for hydrologists given the practical relevance of FDCs for water management and infrastructure design. Predicting FDCs in ungauged basins typically requires spatial interpolation of statistical or model parameters. This task is complicated if climate becomes non-stationary, as the prediction challenge now also requires extrapolation through time. In this context, process-based models for FDCs that mechanistically link the streamflow distribution to climate and landscape factors may have an advantage over purely statistical methods to predict FDCs. This study compares a stochastic (process-based) and statistical method for FDC prediction in both stationary and non-stationary contexts, using Nepal as a case study. Under contemporary conditions, both models perform well in predicting FDCs, with Nash-Sutcliffe coefficients above 0.80 in 75 % of the tested catchments. The main drivers of uncertainty differ between the models: parameter interpolation was the main source of error for the statistical model, while violations of the assumptions of the process-based model represented the main source of its error. The process-based approach performed better than the statistical approach in numerical simulations with non-stationary climate drivers. The predictions of the statistical method under non-stationary rainfall conditions were poor if (i) local runoff coefficients were not accurately determined from the gauge network, or (ii) streamflow variability was strongly affected by changes in rainfall. A Monte Carlo analysis shows that the streamflow regimes in catchments characterized by frequent wet-season runoff and a rapid, strongly non-linear hydrologic response are particularly sensitive to changes in rainfall statistics. In these cases, process-based prediction approaches are favored over statistical models.

  14. A Comparative Analysis of Taguchi Methodology and Shainin System DoE in the Optimization of Injection Molding Process Parameters

    Science.gov (United States)

    Khavekar, Rajendra; Vasudevan, Hari, Dr.; Modi, Bhavik

    2017-08-01

    Two well-known Design of Experiments (DoE) methodologies, Taguchi Methods (TM) and Shainin Systems (SS), are compared and analyzed in this study through their implementation in a plastic injection molding unit. Experiments were performed at a company manufacturing perfume bottle caps (made of acrylic material) using TM and SS to find the root cause of defects and to optimize the process parameters for minimum rejection. The experiments reduced the rejection rate from approximately 40% during trial runs to 8.57%, which is quite low, representing successful implementation of these DoE methods. The comparison showed that both methodologies identified the same set of variables as critical for defect reduction, but with a change in their order of significance. Also, Taguchi methods require a larger number of experiments and consume more time compared to the Shainin System. The Shainin System is less complicated and easy to implement, whereas Taguchi methods are statistically more reliable for the optimization of process parameters. Finally, the experiments implied that DoE methods are robust and reliable in implementation, as organizations attempt to improve quality through optimization.

  15. Comparative effect of high pressure processing and traditional thermal treatment on the physicochemical, microbiology, and sensory analysis of olive jam

    Directory of Open Access Journals (Sweden)

    Delgado-Adamez, J.

    2013-09-01

    Full Text Available In the present work, the effect of processing by high hydrostatic pressure (HPP) was assessed as an alternative to thermal pasteurization of olive jam. The effects of both treatments on the product after processing were compared, and stability during storage under refrigeration was assessed through the characterization of physicochemical, microbiological and sensory aspects. To assess the effect of processing, two HPP treatments (450 and 600 MPa) and thermal pasteurization (80 °C for 20 min) were applied and compared with the unprocessed product. Compared with the other treatments, HPP at 600 MPa showed a greater reduction in microorganisms, greater clarity, less browning, and better sensory acceptance. The shelf-life of the refrigerated product indicates the feasibility of applying HPP technology to obtain food with a shelf-life similar to that obtained with the traditional treatment of pasteurization, but with better sensory quality.

  16. Mindfulness training alters emotional memory recall compared to active controls: support for an emotional information processing model of mindfulness.

    Science.gov (United States)

    Roberts-Wolfe, Douglas; Sacchet, Matthew D; Hastings, Elizabeth; Roth, Harold; Britton, Willoughby

    2012-01-01

    While mindfulness-based interventions have received widespread application in both clinical and non-clinical populations, the mechanism by which mindfulness meditation improves well-being remains elusive. One possibility is that mindfulness training alters the processing of emotional information, similar to prevailing cognitive models of depression and anxiety. The aim of this study was to investigate the effects of mindfulness training on emotional information processing (i.e., memory) biases in relation to both clinical symptomatology and well-being in comparison to active control conditions. Fifty-eight university students (28 female, age = 20.1 ± 2.7 years) participated in either a 12-week course containing a "meditation laboratory" or an active control course with similar content or experiential practice laboratory format (music). Participants completed an emotional word recall task and self-report questionnaires of well-being and clinical symptoms before and after the 12-week course. Meditators showed greater increases in positive word recall compared to controls [F(1, 56) = 6.6, p = 0.02]. The meditation group increased significantly more on measures of well-being [F(1, 56) = 6.6, p = 0.01], with a marginal decrease in depression and anxiety [F(1, 56) = 3.0, p = 0.09] compared to controls. Increased positive word recall was associated with increased psychological well-being (r = 0.31, p = 0.02) and decreased clinical symptoms (r = -0.29, p = 0.03). Mindfulness training was associated with greater improvements in processing efficiency for positively valenced stimuli than active control conditions. This change in emotional information processing was associated with improvements in psychological well-being and less depression and anxiety. These data suggest that mindfulness training may improve well-being via changes in emotional information processing. Future research with a fully randomized design will be

  17. Comparative Proteomic Analysis of the Graft Unions in Hickory (Carya cathayensis) Provides Insights into Response Mechanisms to Grafting Process

    Directory of Open Access Journals (Sweden)

    Daoliang Yan

    2017-04-01

    Full Text Available Hickory (Carya cathayensis), a tree with high nutritional and economic value, is widely cultivated in China. Grafting greatly reduces the juvenile phase length and makes the large scale cultivation of hickory possible. To reveal the response mechanisms of this species to grafting, we employed a proteomics-based approach to identify differentially expressed proteins in the graft unions during the grafting process. Our study identified 3723 proteins, of which 2518 were quantified. A total of 710 differentially expressed proteins (DEPs) were quantified and these were involved in various molecular functional and biological processes. Among these DEPs, 341 were up-regulated and 369 were down-regulated at 7 days after grafting compared with the control. Four auxin-related proteins were down-regulated, which was in agreement with the transcription levels of their encoding genes. The Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis showed that the ‘Flavonoid biosynthesis’ pathway and ‘starch and sucrose metabolism’ were both significantly up-regulated. Interestingly, five flavonoid biosynthesis-related proteins, a flavanone 3-hydroxylase, a cinnamate 4-hydroxylase, a dihydroflavonol-4-reductase, a chalcone synthase, and a chalcone isomerase, were significantly up-regulated. Further experiments verified a significant increase in the total flavonoid contents in scions, which suggests that graft union formation may activate flavonoid biosynthesis to increase the content of a series of downstream secondary metabolites. This comprehensive analysis provides fundamental information on the candidate proteins and secondary metabolism pathways involved in the grafting process for hickory.

  18. glmulti: An R Package for Easy Automated Model Selection with (Generalized) Linear Models

    Directory of Open Access Journals (Sweden)

    Vincent Calcagno

    2010-10-01

    Full Text Available We introduce glmulti, an R package for automated model selection and multi-model inference with glm and related functions. From a list of explanatory variables, the provided function glmulti builds all possible unique models involving these variables and, optionally, their pairwise interactions. Restrictions can be specified for candidate models, by excluding specific terms, enforcing marginality, or controlling model complexity. Models are fitted with standard R functions like glm. The n best models and their support (e.g., (Q)AIC, (Q)AICc, or BIC) are returned, allowing model selection and multi-model inference through standard R functions. The package is optimized for large candidate sets by avoiding memory limitations, facilitating parallelization and providing, in addition to exhaustive screening, a compiled genetic algorithm method. This article briefly presents the statistical framework and introduces the package, with applications to simulated and real data.
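The exhaustive-screening idea behind glmulti can be sketched in plain Python (the package itself is R, and fits glm; here ordinary least squares stands in for glm, and all helper names are illustrative): enumerate every subset of candidate predictors, fit each model, and rank by an information criterion.

```python
import itertools
import math
import random

def ols_rss(X, y):
    """Fit OLS via the normal equations (Gaussian elimination) and return the RSS."""
    n, k = len(X), len(X[0])
    A = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)] for a in range(k)]
    b = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    for col in range(k):  # elimination with partial pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return sum((y[i] - sum(X[i][c] * beta[c] for c in range(k))) ** 2 for i in range(n))

def aic(rss, n, k):
    # Gaussian AIC: n*ln(RSS/n) + 2*(k+1); the +1 counts the noise variance
    return n * math.log(rss / n) + 2 * (k + 1)

def all_subsets_aic(names, data, y):
    """Rank every candidate model (each predictor subset, always with intercept) by AIC."""
    n, results = len(y), []
    for r in range(len(names) + 1):
        for subset in itertools.combinations(names, r):
            X = [[1.0] + [data[v][i] for v in subset] for i in range(n)]
            results.append((aic(ols_rss(X, y), n, 1 + len(subset)), subset))
    return sorted(results)

# Synthetic data: y depends on x1 and x2 but not on x3
random.seed(0)
n = 200
data = {v: [random.gauss(0, 1) for _ in range(n)] for v in ("x1", "x2", "x3")}
y = [2.0 + 1.5 * data["x1"][i] - 0.8 * data["x2"][i] + random.gauss(0, 0.5)
     for i in range(n)]
ranked = all_subsets_aic(["x1", "x2", "x3"], data, y)
best_aic, best_model = ranked[0]
print(best_model)
```

With three variables this screens 8 models; glmulti's genetic-algorithm mode exists precisely because this count doubles with every added variable.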

  19. Generative model selection using a scalable and size-independent complex network classifier.

    Science.gov (United States)

    Motallebi, Sadegh; Aliakbary, Sadegh; Habibi, Jafar

    2013-12-01

    Real networks exhibit nontrivial topological features, such as heavy-tailed degree distribution, high clustering, and small-worldness. Researchers have developed several generative models for synthesizing artificial networks that are structurally similar to real networks. An important research problem is to identify the generative model that best fits to a target network. In this paper, we investigate this problem and our goal is to select the model that is able to generate graphs similar to a given network instance. By the means of generating synthetic networks with seven outstanding generative models, we have utilized machine learning methods to develop a decision tree for model selection. Our proposed method, which is named "Generative Model Selection for Complex Networks," outperforms existing methods with respect to accuracy, scalability, and size-independence.
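The feature-based selection idea can be illustrated with a much simplified sketch: two toy generators, a single structural feature (average clustering coefficient), and a nearest-centroid rule standing in for the paper's decision tree over seven generative models. All names are illustrative:

```python
import random
from statistics import mean

def erdos_renyi(n, p, rng):
    """Random graph: each pair of nodes is connected independently with probability p."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def ring_lattice(n, k):
    """Regular ring: each node is linked to its k nearest neighbours (high clustering)."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k // 2 + 1):
            j = (i + d) % n
            adj[i].add(j)
            adj[j].add(i)
    return adj

def avg_clustering(adj):
    """Mean local clustering coefficient: fraction of neighbour pairs that are linked."""
    cs = []
    for v, nbrs in adj.items():
        d = len(nbrs)
        if d < 2:
            cs.append(0.0)
            continue
        links = sum(1 for u in nbrs for w in nbrs if u < w and w in adj[u])
        cs.append(2 * links / (d * (d - 1)))
    return mean(cs)

# "Train" a centroid per generator, then classify a new graph by the nearest centroid
rng = random.Random(5)
er_feats = [avg_clustering(erdos_renyi(100, 0.06, rng)) for _ in range(10)]
rl_feats = [avg_clustering(ring_lattice(100, 6)) for _ in range(10)]
centroids = {"erdos_renyi": mean(er_feats), "ring_lattice": mean(rl_feats)}

def classify(adj):
    f = avg_clustering(adj)
    return min(centroids, key=lambda m: abs(centroids[m] - f))

print(classify(erdos_renyi(100, 0.06, rng)), classify(ring_lattice(100, 6)))
```

The paper's method differs in scale, not in kind: more generators, a vector of topological features rather than one, and a learned decision tree rather than nearest centroid.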

  20. Generative model selection using a scalable and size-independent complex network classifier

    Energy Technology Data Exchange (ETDEWEB)

    Motallebi, Sadegh, E-mail: motallebi@ce.sharif.edu; Aliakbary, Sadegh, E-mail: aliakbary@ce.sharif.edu; Habibi, Jafar, E-mail: jhabibi@sharif.edu [Department of Computer Engineering, Sharif University of Technology, Tehran (Iran, Islamic Republic of)

    2013-12-15

    Real networks exhibit nontrivial topological features, such as heavy-tailed degree distribution, high clustering, and small-worldness. Researchers have developed several generative models for synthesizing artificial networks that are structurally similar to real networks. An important research problem is to identify the generative model that best fits to a target network. In this paper, we investigate this problem and our goal is to select the model that is able to generate graphs similar to a given network instance. By the means of generating synthetic networks with seven outstanding generative models, we have utilized machine learning methods to develop a decision tree for model selection. Our proposed method, which is named “Generative Model Selection for Complex Networks,” outperforms existing methods with respect to accuracy, scalability, and size-independence.

  1. Model selection for marginal regression analysis of longitudinal data with missing observations and covariate measurement error.

    Science.gov (United States)

    Shen, Chung-Wei; Chen, Yi-Hau

    2015-10-01

    Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. In contrast, naive procedures that do not account for such complexity in the data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations.

  2. Effective Parameter Dimension via Bayesian Model Selection in the Inverse Acoustic Scattering Problem

    Directory of Open Access Journals (Sweden)

    Abel Palafox

    2014-01-01

    Full Text Available We address a prototype inverse scattering problem in the interface of applied mathematics, statistics, and scientific computing. We pose the acoustic inverse scattering problem in a Bayesian inference perspective and simulate from the posterior distribution using MCMC. The PDE forward map is implemented using high performance computing methods. We implement a standard Bayesian model selection method to estimate an effective number of Fourier coefficients that may be retrieved from noisy data within a standard formulation.

  3. Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling

    Science.gov (United States)

    Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.

    2017-04-01

    Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood), the normalizing constant in the denominator of Bayes theorem; while it is fundamental for model selection, the evidence is not required for Bayesian inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as information criteria do; the larger a model's evidence, the more support it receives among a collection of hypotheses, as the simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler and more constrained models over the over-fitted ones selected by information criteria that incorporate only the likelihood maximum. Since it is not particularly easy to estimate the evidence in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009). Some illustrative examples are presented to show the robustness and superiority of the GMIS estimator.
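The quantity at stake can be illustrated with a deliberately crude estimator: for a conjugate normal model the evidence has a closed form, so a brute-force Monte Carlo average of the likelihood over prior draws (far simpler, and far noisier in general, than GMIS's bridge sampling) can be checked against it. All function names and the toy model are illustrative:

```python
import math
import random

def log_evidence_exact(y, sigma, mu0, tau0):
    """Closed-form log marginal likelihood for y_i ~ N(theta, sigma^2), theta ~ N(mu0, tau0^2)."""
    n = len(y)
    ybar = sum(y) / n
    S = sum((yi - ybar) ** 2 for yi in y)
    var = sigma ** 2 / n + tau0 ** 2  # marginal variance of the sample mean
    return (-(n - 1) / 2 * math.log(2 * math.pi * sigma ** 2)
            - S / (2 * sigma ** 2)
            - 0.5 * math.log(n)
            - 0.5 * math.log(2 * math.pi * var)
            - (ybar - mu0) ** 2 / (2 * var))

def log_evidence_mc(y, sigma, mu0, tau0, draws=100000, seed=1):
    """Evidence as the prior-average of the likelihood, via log-sum-exp for stability."""
    rng = random.Random(seed)
    n = len(y)
    const = -n / 2 * math.log(2 * math.pi * sigma ** 2)
    logs = []
    for _ in range(draws):
        theta = rng.gauss(mu0, tau0)  # sample from the prior
        logs.append(const - sum((yi - theta) ** 2 for yi in y) / (2 * sigma ** 2))
    m = max(logs)
    return m + math.log(sum(math.exp(l - m) for l in logs) / draws)

random.seed(7)
y = [random.gauss(0.5, 1.0) for _ in range(10)]
exact = log_evidence_exact(y, 1.0, 0.0, 1.0)
approx = log_evidence_mc(y, 1.0, 0.0, 1.0)
print(round(exact, 3), round(approx, 3))
```

The brute-force average degrades rapidly as the posterior concentrates relative to the prior, which is exactly the regime where bridge-sampling estimators such as GMIS earn their keep.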

  4. Comparative Study of the Effects of Long and Short Term Biological Processes on the Cycling of Colloidal Trace Metals

    Science.gov (United States)

    Pinedo, P.; Sanudo-Wilhelmy, S. A.; West, A.

    2013-05-01

    colloidal and truly dissolved fractions was strongly influenced by the distribution of DOC and chlorophyll-a concentration. On the other hand, ~93% of Cu was found in the truly dissolved pool, suggesting little correlation between Cu speciation and chlorophyll-a concentration. Together, the findings from our study of both long and short term biological processes show differential fractionation of Cu, which has a low nutrient value and a high affinity for dissolved constituents, compared to Fe, which has a much higher cellular requirement and high particle reactivity. Overall, this study points out that both long and short term biological processes have an important bearing on understanding the mobility of trace metals in the ocean, with implications for how we should collect and interpret water-quality data, and for how cyclical variation in metal concentrations may affect aquatic ecology.

  5. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    KAUST Repository

    Elsheikh, Ahmed H.

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling, and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and estimating the Bayesian evidence for prior model selection. Nested sampling has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs, so the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems. © 2013 Elsevier Inc.

  6. Nested sampling algorithm for subsurface flow model selection, uncertainty quantification, and nonlinear calibration

    KAUST Repository

    Elsheikh, A. H.

    2013-12-01

    Calibration of subsurface flow models is an essential step for managing ground water aquifers, designing contaminant remediation plans, and maximizing recovery from hydrocarbon reservoirs. We investigate an efficient sampling algorithm known as nested sampling (NS), which can simultaneously sample the posterior distribution for uncertainty quantification and estimate the Bayesian evidence for model selection. Model selection statistics, such as the Bayesian evidence, are needed to choose or assign different weights to different models of different levels of complexities. In this work, we report the first successful application of nested sampling for calibration of several nonlinear subsurface flow problems. The estimated Bayesian evidence by the NS algorithm is used to weight different parameterizations of the subsurface flow models (prior model selection). The results of the numerical evaluation implicitly enforced Occam's razor, where simpler models with fewer parameters are favored over complex models. The proper level of model complexity was automatically determined based on the information content of the calibration data and the data mismatch of the calibrated model.

  7. Model selection for the North American Breeding Bird Survey: A comparison of methods

    Science.gov (United States)

    Link, William; Sauer, John; Niven, Daniel

    2017-01-01

    The North American Breeding Bird Survey (BBS) provides data for >420 bird species at multiple geographic scales over 5 decades. Modern computational methods have facilitated the fitting of complex hierarchical models to these data. It is easy to propose and fit new models, but little attention has been given to model selection. Here, we discuss and illustrate model selection using leave-one-out cross validation, and the Bayesian Predictive Information Criterion (BPIC). Cross-validation is enormously computationally intensive; we thus evaluate the performance of the Watanabe-Akaike Information Criterion (WAIC) as a computationally efficient approximation to the BPIC. Our evaluation is based on analyses of 4 models as applied to 20 species covered by the BBS. Model selection based on BPIC provided no strong evidence of one model being consistently superior to the others; for 14/20 species, none of the models emerged as superior. For the remaining 6 species, a first-difference model of population trajectory was always among the best fitting. Our results show that WAIC is not reliable as a surrogate for BPIC. Development of appropriate model sets and their evaluation using BPIC is an important innovation for the analysis of BBS data.
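Leave-one-out cross-validation itself is simple to sketch; the computational burden noted above comes from refitting the model once per held-out observation. Below is a minimal example comparing a constant-mean model with a linear-trend model on synthetic counts with a real trend; it uses a squared-error score rather than the log predictive density that underlies BPIC, and all names are illustrative:

```python
import random

def fit_mean(xs, ys):
    """Constant model: predict the training mean everywhere."""
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_line(xs, ys):
    """Simple linear regression y = a + b*x, closed form."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (yv - ybar) for x, yv in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = ybar - b * xbar
    return lambda x: a + b * x

def loo_score(fit, xs, ys):
    """Leave-one-out CV: refit without point i, accumulate squared error at point i."""
    err = 0.0
    for i in range(len(xs)):
        pred = fit(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        err += (ys[i] - pred(xs[i])) ** 2
    return err

random.seed(3)
xs = list(range(30))
ys = [0.2 * x + random.gauss(0, 1.0) for x in xs]  # data with a genuine trend
scores = {"constant": loo_score(fit_mean, xs, ys),
          "trend": loo_score(fit_line, xs, ys)}
best = min(scores, key=scores.get)
print(best, scores)
```

With n data points and m candidate models this costs n*m refits; for hierarchical BBS models each "refit" is itself an MCMC run, which is why a cheap approximation such as WAIC is so tempting, and why its unreliability as a BPIC surrogate matters.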

  8. A comparative study of principal component analysis and independent component analysis in eddy current pulsed thermography data processing

    Science.gov (United States)

    Bai, Libing; Gao, Bin; Tian, Shulin; Cheng, Yuhua; Chen, Yifan; Tian, Gui Yun; Woo, W. L.

    2013-10-01

    Eddy Current Pulsed Thermography (ECPT), an emerging Non-Destructive Testing and Evaluation technique, has been applied for a wide range of materials. Lateral heat diffusion leads to a decrease in the temperature contrast between defect and defect-free areas. To enhance the flaw contrast, different statistical methods, such as Principal Component Analysis and Independent Component Analysis, have been proposed for processing thermography image sequences in recent years. However, direct and detailed independent comparisons of the two algorithms' implementations are lacking. The aim of this article is to compare the two methods and to determine the optimized technique for flaw contrast enhancement in ECPT data. Verification experiments are conducted on the detection of artificial cracks and natural thermal fatigue cracks.

  9. Comparative technoeconomic analysis of a softwood ethanol process featuring posthydrolysis sugars concentration operations and continuous fermentation with cell recycle.

    Science.gov (United States)

    Schneiderman, Steven J; Gurram, Raghu N; Menkhaus, Todd J; Gilcrease, Patrick C

    2015-01-01

    Economical production of second generation ethanol from Ponderosa pine is of interest due to widespread mountain pine beetle infestation in the western United States and Canada. The conversion process is limited by low glucose and high inhibitor concentrations resulting from conventional low-solids dilute acid pretreatment and enzymatic hydrolysis. Inhibited fermentations require larger fermentors (due to reduced volumetric productivity) and low sugars lead to low ethanol titers, increasing distillation costs. In this work, multiple effect evaporation (MEE) and nanofiltration (NF) were evaluated to concentrate the hydrolysate from 30 g/l to 100, 150, or 200 g/l glucose. To ferment this high gravity, inhibitor containing stream, traditional batch fermentation was compared with continuous stirred tank fermentation (CSTF) and continuous fermentation with cell recycle (CSTF-CR). Equivalent annual operating cost (EAOC = amortized capital + yearly operating expenses) was used to compare these potential improvements for a local-scale 5 MGY ethanol production facility. Hydrolysate concentration via evaporation increased EAOC over the base process due to the capital and energy intensive nature of evaporating a very dilute sugar stream; however, concentration via NF decreased EAOC for several of the cases (by 2 to 15%). NF concentration to 100 g/l glucose with a CSTF-CR was the most economical option, reducing EAOC by $0.15 per gallon ethanol produced. Sensitivity analyses on NF options showed that EAOC improvement over the base case could still be realized for even higher solids removal requirements (up to two times higher centrifuge requirement for the best case) or decreased NF performance.

  10. The time-profile of cell growth in fission yeast: model selection criteria favoring bilinear models over exponential ones

    Directory of Open Access Journals (Sweden)

    Sveiczer Akos

    2006-03-01

    Full Text Available Abstract Background There is considerable controversy concerning the exact growth profile of size parameters during the cell cycle. Linear, exponential and bilinear models are commonly considered, and the same model may not apply for all species. Selection of the most adequate model to describe a given data-set requires the use of quantitative model selection criteria, such as the partial (sequential) F-test, the Akaike information criterion and the Schwarz Bayesian information criterion, which are suitable for comparing differently parameterized models in terms of the quality and robustness of the fit but have not yet been used in cell growth-profile studies. Results Length increase data from representative individual fission yeast (Schizosaccharomyces pombe) cells measured on time-lapse films have been reanalyzed using these model selection criteria. To fit the data, an extended version of a recently introduced linearized biexponential (LinBiExp) model was developed, which makes possible a smooth, continuously differentiable transition between two linear segments and, hence, allows fully parametrized bilinear fittings. Despite relatively small differences, essentially all the quantitative selection criteria considered here indicated that the bilinear model was somewhat more adequate than the exponential model for fitting these fission yeast data. Conclusion A general quantitative framework was introduced to judge the adequacy of bilinear versus exponential models in the description of growth time-profiles. For single cell growth, because of the relatively limited data-range, the statistical evidence is not strong enough to favor one model clearly over the other and to settle the bilinear versus exponential dispute. Nevertheless, for the present individual cell growth data for fission yeast, the bilinear model seems more adequate according to all metrics, especially in the case of wee1Δ cells.
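The comparison described here can be sketched with a toy version: fit an exponential model (via log-linear regression) and an unconstrained two-segment bilinear model (a deliberate simplification of the smooth LinBiExp transition) to synthetic single-cell length data with a rate change, then compare Gaussian AICs. All names and the data are illustrative:

```python
import math
import random

def linreg(xs, ys):
    """Least-squares line a + b*x; returns (a, b, rss)."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - yv_x) * (yv - ybar) for (x, yv), yv_x in zip(zip(xs, ys), [xbar] * n))
         / sum((x - xbar) ** 2 for x in xs))
    a = ybar - b * xbar
    rss = sum((yv - (a + b * x)) ** 2 for x, yv in zip(xs, ys))
    return a, b, rss

def aic(rss, n, k):
    # Gaussian AIC; k counts the regression parameters plus the noise variance
    return n * math.log(rss / n) + 2 * k

# Synthetic single-cell length record with a growth-rate change at t = 60
random.seed(11)
ts = list(range(0, 101, 2))
ys = [(2.0 + 0.02 * t if t < 60 else 2.0 + 0.02 * 60 + 0.045 * (t - 60))
      + random.gauss(0, 0.03) for t in ts]
n = len(ts)

# Exponential model: fit ln(y) = ln(y0) + r*t, then score RSS back on the length scale
a, r, _ = linreg(ts, [math.log(yv) for yv in ys])
rss_exp = sum((yv - math.exp(a + r * t)) ** 2 for t, yv in zip(ts, ys))
aic_exp = aic(rss_exp, n, 3)  # y0, r, sigma

# Bilinear model: grid-search the breakpoint, two independent linear segments
best_rss = float("inf")
for bp in range(3, n - 3):
    _, _, rss1 = linreg(ts[:bp], ys[:bp])
    _, _, rss2 = linreg(ts[bp:], ys[bp:])
    best_rss = min(best_rss, rss1 + rss2)
aic_bi = aic(best_rss, n, 6)  # two intercepts, two slopes, breakpoint, sigma

print(round(aic_exp, 1), round(aic_bi, 1))
```

On data with a genuine kink the bilinear fit wins despite its three extra parameters; on the real single-cell data the paper's point is precisely that the margin is small, which is why formal criteria rather than visual inspection are needed.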

  11. Comparative characteristic and erosion behavior of NiCr coatings deposited by various high-velocity oxyfuel spray processes

    Science.gov (United States)

    Sidhu, Hazoor Singh; Sidhu, Buta Singh; Prakash, S.

    2006-12-01

    The purpose of this study is to analyze and compare the mechanical properties and microstructure details at the interface of high-velocity oxyfuel (HVOF)-sprayed NiCr-coated boiler tube steels, namely ASTM-SA-210 grade A1, ASTM-SA213-T-11, and ASTM-SA213-T-22. Coatings were developed by two different techniques, in both of which liquefied petroleum gas was used as the fuel gas. First, the coatings were characterized by metallography, scanning electron microscopy/energy-dispersive x-ray analysis, x-ray diffraction, surface roughness, and microhardness measurements, and then subjected to erosion testing. An attempt has been made to describe the transformations taking place during thermal spraying. It is concluded that the HVOF wire spraying process offers a technically viable and cost-effective alternative to the HVOF powder spraying process for applications in energy generation power plants, from the standpoint of life enhancement and minimizing tube failures, because it gives a coating with better resistance to erosion.

  12. Comparing the processing of music and language meaning using EEG and FMRI provides evidence for similar and distinct neural representations.

    Directory of Open Access Journals (Sweden)

    Nikolaus Steinbeis

    Full Text Available Recent demonstrations that music is capable of conveying semantically meaningful information have raised several questions as to what the underlying mechanisms of establishing meaning in music are, and whether the meaning of music is represented in comparable fashion to language meaning. This paper presents evidence showing that expressed affect is a primary pathway to music meaning and that meaning in music is represented in a very similar fashion to language meaning. In two experiments using EEG and fMRI, it was shown that single chords varying in harmonic roughness (consonance/dissonance) and thus perceived affect could prime the processing of subsequently presented affective target words, as indicated by an increased N400 and activation of the right middle temporal gyrus (MTG). Most importantly, however, when primed by affective words, single chords incongruous to the preceding affect also elicited an N400 and activated the right posterior STS, an area implicated in processing meaning of a variety of signals (e.g. prosody, voices, motion). This provides an important piece of evidence in support of music meaning being represented in a very similar but also distinct fashion to language meaning: Both elicit an N400, but activate different portions of the right temporal lobe.

  13. Comparing the processing of music and language meaning using EEG and FMRI provides evidence for similar and distinct neural representations.

    Science.gov (United States)

    Steinbeis, Nikolaus; Koelsch, Stefan

    2008-05-21

    Recent demonstrations that music is capable of conveying semantically meaningful information have raised several questions as to what the underlying mechanisms of establishing meaning in music are, and whether the meaning of music is represented in comparable fashion to language meaning. This paper presents evidence showing that expressed affect is a primary pathway to music meaning and that meaning in music is represented in a very similar fashion to language meaning. In two experiments using EEG and fMRI, it was shown that single chords varying in harmonic roughness (consonance/dissonance) and thus perceived affect could prime the processing of subsequently presented affective target words, as indicated by an increased N400 and activation of the right middle temporal gyrus (MTG). Most importantly, however, when primed by affective words, single chords incongruous to the preceding affect also elicited an N400 and activated the right posterior STS, an area implicated in processing meaning of a variety of signals (e.g. prosody, voices, motion). This provides an important piece of evidence in support of music meaning being represented in a very similar but also distinct fashion to language meaning: Both elicit an N400, but activate different portions of the right temporal lobe.

  14. Comparative analysis of foot support-spring indicators of primary school age children with weak eyesight in physical education process

    Directory of Open Access Journals (Sweden)

    Juha Habib

    2016-02-01

    Full Text Available Purpose: to carry out a comparative analysis of foot support-spring indicators of schoolchildren with weak eyesight. Material: 76 children aged 7-10 years with weak eyesight, attending a specialized boarding school, participated in the research. Results: we found statistically significant differences between some foot support-spring indicators of primary school children with weak eyesight and those of practically healthy children. The children with weak eyesight had weaker muscles and ligaments of the lower limbs. Possible reasons include insufficient motor activity, muscle stiffening when moving in space, and the absence of exercises for the prophylaxis of functional foot disorders. Conclusions: practical recommendations need to be worked out and implemented in the physical education of schoolchildren with weak eyesight. The physical education process should be oriented toward educational aims and the application of health-related correcting and compensatory-prophylactic physical exercises. Such an approach will positively influence the correction of foot support-spring disorders.

  15. Comparative study on the impact of coal and uranium mining, processing, and transportation in the western United States

    Energy Technology Data Exchange (ETDEWEB)

    Sandquist, G.M.

    1979-06-01

    A comparative study and quantitative assessment of the impacts, costs and benefits associated with the mining, processing and transportation of coal and uranium within the western states, specifically Arizona, California, Colorado, Montana, New Mexico, Oregon, Utah, Washington and Wyoming, are presented. The western states possess 49% of the US reserve coal base, 67% of the total identified reserves and 82% of the hypothetical reserves. Western coal production has increased at an average annual rate of about 22% since 1970 and should become the major US coal supplier in the 1980s. The Colorado Plateau (in Arizona, Colorado, New Mexico and Utah) and the Wyoming Basin areas account for 72% of the $15/lb U3O8 resources, 76% of the $30/lb, and 75% of the $50/lb resources. It is apparent that the West will serve as the major supplier of domestic US coal and uranium fuels for at least the next several decades. Impacts considered are: environmental impacts (land, water, air quality); health effects of coal and uranium mining, processing, and transportation; risks from transportation accidents; radiological impact of coal and uranium mining; social and economic impacts; and aesthetic impacts (land, air, noise, water, biota, and man-made objects). Economic benefits are discussed.

  16. Probabilistic forecasts of extreme local precipitation using HARMONIE predictors and comparing 3 different post-processing methods

    Science.gov (United States)

    Whan, Kirien; Schmeits, Maurice

    2017-04-01

    Statistical post-processing of deterministic weather forecasts allows the full forecast distribution, and thus probabilistic forecasts, to be derived from that deterministic model output. We focus on local extreme precipitation amounts, as these are one predictand used in the KNMI weather warning system. As such, the predictand is based on the maximum hourly calibrated radar precipitation in a 3x3 km2 area within 12 large regions covering The Netherlands in a 6-hour afternoon period in summer (12-18 UTC). We compare three statistical methods when post-processing output from HARMONIE, the operational high-resolution forecast model at KNMI. These methods are 1) extended logistic regression (ELR), 2) an ensemble model output statistics approach in which the parameters of a zero-adjusted gamma (ZAGA) distribution depend on a set of covariates, and 3) quantile random forests (QRF). The set of predictors used as covariates includes model precipitation and indices capturing a variety of processes associated with deep convection. We use stepwise selection based on the AIC to select predictors for ELR and ZAGA. Predictors and coefficients are selected in a cross-validation framework based on two years of training data, and the skill of the forecasts is assessed on one year of test data. The inclusion of additional predictors results in more skilful forecasts, as expected, particularly for higher precipitation thresholds and for forecasts using the QRF method. We also assess the value of using a time-lagged ensemble. Forecasts derived from ZAGA and QRF are generally more skilful, as defined by the Brier Skill Score, than those from ELR, and lower precipitation amounts are skilfully predicted.
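
    The Brier Skill Score used in this record to rank the post-processing methods reduces to a short calculation. The sketch below uses hypothetical forecast and observation values (not KNMI data) to show how probabilistic forecasts of a binary event are scored against a climatological reference.

```python
import numpy as np

def brier_score(p, o):
    """Mean squared error of probability forecasts p against
    binary observations o (1 = event occurred)."""
    return np.mean((np.asarray(p) - np.asarray(o)) ** 2)

def brier_skill_score(p, o, p_ref):
    """BSS > 0 means the forecast beats the reference forecast."""
    return 1.0 - brier_score(p, o) / brier_score(p_ref, o)

# Toy example: did 6-h precipitation exceed a warning threshold?
obs = np.array([1, 0, 0, 1, 0, 0, 0, 1])
fcst = np.array([0.8, 0.1, 0.2, 0.7, 0.1, 0.3, 0.2, 0.6])
clim = np.full(obs.size, obs.mean())   # climatological reference

print(f"BSS = {brier_skill_score(fcst, obs, clim):.3f}")
```

    A perfect forecast has Brier score 0 (BSS = 1); a forecast no better than climatology has BSS = 0.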

  17. Quantification of greenhouse gas emissions from waste management processes for municipalities--a comparative review focusing on Africa.

    Science.gov (United States)

    Friedrich, Elena; Trois, Cristina

    2011-07-01

    The amount of greenhouse gases (GHG) emitted due to waste management in the cities of developing countries is predicted to rise considerably in the near future; however, these countries have a series of problems in accounting and reporting these gases. Some of these problems are related to the status quo of waste management in the developing world and some to the lack of a coherent framework for accounting and reporting of greenhouse gases from waste at municipal level. This review summarizes and compares GHG emissions from individual waste management processes which make up a municipal waste management system, with an emphasis on developing countries and, in particular, Africa. It should be seen as a first step towards developing a more holistic GHG accounting model for municipalities. The comparison between these emissions from developed and developing countries at process level, reveals that there is agreement on the magnitude of the emissions expected from each process (generation of waste, collection and transport, disposal and recycling). The highest GHG savings are achieved through recycling, and these savings would be even higher in developing countries which rely on coal for energy production (e.g. South Africa, India and China) and where non-motorized collection and transport is used. The highest emissions are due to the methane released by dumpsites and landfills, and these emissions are predicted to increase significantly, unless more of the methane is captured and either flared or used for energy generation. The clean development mechanism (CDM) projects implemented in the developing world have made some progress in this field; however, African countries lag behind. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. Comparative reduction of Giardia cysts, F+ coliphages, sulphite reducing clostridia and fecal coliforms by wastewater treatment processes.

    Science.gov (United States)

    Nasser, Abidelfatah M; Benisti, Neta-Lee; Ofer, Naomi; Hovers, Sivan; Nitzan, Yeshayahu

    2017-01-28

    Advanced wastewater treatment processes are applied to prevent the environmental dissemination of pathogenic microorganisms. Giardia lamblia causes a severe disease called giardiasis, and is highly prevalent in untreated wastewater worldwide. Monitoring the microbial quality of wastewater effluents is usually based on testing for the levels of indicator microorganisms in the effluents. This study was conducted to compare the suitability of fecal coliforms, F+ coliphages and sulfide reducing clostridia (SRC) as indicators for the reduction of Giardia cysts in two full-scale wastewater treatment plants. The treatment process consists of activated sludge, coagulation, high rate filtration and either chlorine or UV disinfection. The results of the study demonstrated that Giardia cysts are highly prevalent in raw wastewater at an average concentration of 3600 cysts/L. Fecal coliforms, F+ coliphages and SRC were also detected at high concentrations in raw wastewater. Giardia cysts were efficiently removed (3.6 log10) by the treatment train. The greatest reduction was observed for fecal coliforms (9.6 log10) whereas the least reduction was observed for F+ coliphages (2.1 log10) following chlorine disinfection. Similar reduction was observed for SRC by filtration and disinfection by either UV (3.6 log10) or chlorine (3.3 log10). Since F+ coliphage and SRC were found to be more resistant than fecal coliforms for the tertiary treatment processes, they may prove to be more suitable as indicators for Giardia. The results of this study demonstrated that advanced wastewater treatment may prove efficient for the removal of Giardia cysts and may prevent its transmission when treated effluents are applied for crop irrigation or streams restoration.

  19. Polysaccharide production in batch process of Neisseria meningitidis serogroup C comparing Frantz, modified Frantz and Catlin 6 cultivation media

    Directory of Open Access Journals (Sweden)

    Paz Marcelo Fossa da

    2003-01-01

    Full Text Available Polysaccharide of N. meningitidis serogroup C constitutes the antigen for the vaccine against meningitis. The goal of this work was to compare three cultivation media for production of this polysaccharide: Frantz, modified Frantz medium (with replacement of glucose by glycerol), and Catlin 6 (a synthetic medium with glucose). The comparative criteria were based on the final polysaccharide concentrations and the yield coefficient cell/polysaccharide (Y P/X). The kinetic parameters pH, substrate consumption and cell growth were also determined. For this purpose, 9 cultivation runs were carried out in an 80 L New Brunswick bioreactor, under the following conditions: 42 L of culture medium, temperature 35ºC, air flow 5 L/min, agitation frequency 120 rpm and vessel pressure 6 psi, without dissolved oxygen or pH controls. The cultivation runs were divided into three groups, with 3 repetitions each. The cultivation using the Frantz medium presented the best results (average final polysaccharide concentration = 0.134 g/L and Y P/X = 0.121), followed by Catlin 6 medium, with results of 0.095 g/L and 0.067, respectively. Considering the principal advantages of using the synthetic medium, i.e. simplification of the cultivation and purification steps of the polysaccharide production process, there is a possibility that in the near future Catlin 6 will replace the traditional Frantz medium.

  20. Comparative evaluation of microbial and chemical leaching processes for heavy metal removal from dewatered metal plating sludge.

    Science.gov (United States)

    Bayat, Belgin; Sari, Bulent

    2010-02-15

    The purpose of the study described in this paper was to evaluate the application of a bioleaching technique involving Acidithiobacillus ferrooxidans to recover heavy metals (Zn, Cu, Ni, Pb, Cd and Cr) from dewatered metal plating sludge (with no sulfide or sulfate compounds). The effects of some conditional parameters (i.e. pH, oxidation-reduction potential (ORP), sulfate production) and operational parameters (i.e. pulp density of the sludge and agitation time) were investigated in a 3 L completely mixed batch (CMB) reactor. The metal recovery yields in bioleaching were also compared with chemical leaching of the sludge waste using commercial inorganic reagents (sulfuric acid and ferric chloride). The leaching of heavy metals increased with decreasing pH and increasing ORP and sulfate production during the bioleaching experiment. Optimum pulp density for bioleaching was observed at 2% (w/v), and leaching efficiency decreased with increasing pulp density in bioleaching experiments. Maximum metal solubilization (97% of Zn, 96% of Cu, 93% of Ni, 84% of Pb, 67% of Cd and 34% of Cr) was achieved at pH 2, solids content of 2% (w/v), and a reaction temperature of 25 ± 2 °C during the bioleaching process. Maximum removal efficiencies of 72% and 79% Zn, 70% and 75% Cu, 69% and 73% Ni, 57% and 70% Pb, 55% and 65% Cd, and 11% and 22% Cr were attained with chemical leaching using sulfuric acid and ferric chloride, respectively, at pH 2, solids content of 2% (w/v), and a reaction temperature of 25 ± 2 °C. The rates of metal leaching for bioleaching and chemical leaching are well described by a kinetic equation related to time. Although bioleaching generally requires a longer period of operation compared to chemical leaching, it achieves higher removal efficiency for heavy metals.
    The efficiency of the leaching processes can be arranged in descending order as follows: bioleaching > ferric chloride leaching > sulfuric acid leaching.

  1. Live imaging-based model selection reveals periodic regulation of the stochastic G1/S phase transition in vertebrate axial development.

    Directory of Open Access Journals (Sweden)

    Mayu Sugiyama

    2014-12-01

    Full Text Available In multicellular organism development, a stochastic cellular response is observed, even when a population of cells is exposed to the same environmental conditions. Retrieving the spatiotemporal regulatory mode hidden in the heterogeneous cellular behavior is a challenging task. The G1/S transition observed in cell cycle progression is a highly stochastic process. By taking advantage of a fluorescence cell cycle indicator, Fucci technology, we aimed to unveil a hidden regulatory mode of cell cycle progression in developing zebrafish. Fluorescence live imaging of Cecyil, a zebrafish line genetically expressing Fucci, demonstrated that newly formed notochordal cells from the posterior tip of the embryonic mesoderm exhibited the red (G1) fluorescence signal in the developing notochord. Prior to their initial vacuolation, these cells showed a fluorescence color switch from red to green, indicating G1/S transitions. This G1/S transition did not occur in a synchronous manner, but rather exhibited a stochastic process, since a mixed population of red and green cells was always inserted between newly formed red (G1) notochordal cells and vacuolating green cells. We termed this mixed population of notochordal cells the G1/S transition window. We first performed quantitative analyses of live imaging data and a numerical estimation of the probability of the G1/S transition, which demonstrated the existence of a posteriorly traveling regulatory wave of the G1/S transition window. To obtain a better understanding of this regulatory mode, we constructed a mathematical model and performed a model selection by comparing the results obtained from the models with those from the experimental data. Our analyses demonstrated that the stochastic G1/S transition window in the notochord travels posteriorly in a periodic fashion, with double the periodicity of the neighboring paraxial mesoderm segmentation.
This approach may have implications for the characterization of

  2. A Comparative Study of the Quality of Teaching Learning Process at Post Graduate Level in the Faculty of Science and Social Science

    Science.gov (United States)

    Shahzadi, Uzma; Shaheen, Gulnaz; Shah, Ashfaque Ahmed

    2012-01-01

    The study was intended to compare the quality of teaching learning process in the faculty of social science and science at University of Sargodha. This study was descriptive and quantitative in nature. The objectives of the study were to compare the quality of teaching learning process in the faculty of social science and science at University of…

  3. Comparing bottom-up and top-down parameterisations of a process-based runoff generation model tailored on floods

    Science.gov (United States)

    Antonetti, Manuel; Scherrer, Simon; Margreth, Michael; Zappa, Massimiliano

    2016-04-01

    Information about the spatial distribution of dominant runoff processes (DRPs) can improve flood predictions on ungauged basins, where conceptual rainfall-runoff models usually appear to be limited due to the need for calibration. For example, hydrological classifications based on DRPs can be used as regionalisation tools assuming that, once a model structure and its parameters have been identified for each DRP, they can be transferred to other areas where the same DRP occurs. Here we present a process-based runoff generation model as an event-based spin-off of the conceptual hydrological model PREVAH. The model is grid-based and consists of a specific storage system for each DRP. To unbind the parameter values from catchment-related characteristics, the runoff concentration and the flood routing are uncoupled from the runoff generation routine and simulated separately. For the model parameterisation, two contrasting approaches are applied. First, in a bottom-up approach, the parameters of the runoff generation routine are determined a priori based on the results of sprinkling experiments on 60-100 m2 hillslope plots at several grassland locations in Switzerland. The model is, then, applied on a small catchment (0.5 km2) on the Swiss Plateau, and the parameters linked to the runoff concentration are calibrated on a single heavy rainfall-runoff event. The whole system is finally verified on several nearby catchments of larger sizes (up to 430 km2) affected by different heavy rainfall events. In a second attempt, following a top-down approach, all the parameters are calibrated on the largest catchment under investigation and successively verified on three sub-catchments. Simulation results from both parameterisation techniques are finally compared with results obtained with the traditional PREVAH.

  4. Comparing equivalent thermal, high pressure and pulsed electric field processes for mild pasteurization of orange juice. Part I: Impact on overall quality attributes

    NARCIS (Netherlands)

    Timmermans, R.A.H.; Mastwijk, H.C.; Knol, J.J.; Quataert, M.C.J.; Vervoort, L.; Plancken, van der I.; Hendrickx, M.E.; Matser, A.M.

    2011-01-01

    Mild heat pasteurization, high pressure processing (HP) and pulsed electric field (PEF) processing of freshly squeezed orange juice were comparatively evaluated by examining their impact on microbial load and quality parameters immediately after processing and during two months of storage. Microbial co

  5. Comparing the effect of Acticoat (TM) dressing and dressing with phenytoin cream on the healing process of pressure ulcer

    Directory of Open Access Journals (Sweden)

    Maryam Zakizadeh

    2016-11-01

    Full Text Available Background and purpose: pressure ulcer is a type of injury which requires diagnosis and primary care by nurses in hospitalized patients; it not only delays recovery but also imposes high costs on patients and their families. The most important method used to deal with pressure ulcers is dressing, and the type of dressing has a significant effect on the level of healing. This study was therefore performed to compare the effect of Acticoat (TM) dressing (modern dressing) and phenytoin cream (traditional dressing) on the healing process of pressure ulcers. Materials and methods: this single-blind clinical trial was performed on 40 patients with pressure ulcer grade 2 and above at Shahidzadeh hospital in Behbahan city, Iran. The patients were selected based on the inclusion criteria and divided into two groups: patients with Acticoat (TM) dressing and those with phenytoin cream dressing. The data collection tools in this study included a demographic information questionnaire and the Pressure Ulcer Scale for Healing (PUSH). Every week for three weeks, the wounds were examined and their PUSH scores were determined. By comparing the PUSH scores in both groups, the wounds' status was evaluated. The data were analyzed using the SPSS software. Results: no significant statistical difference was seen in the wound tissue colors of the two groups. However, significant statistical differences were seen in the size, amount of exudate, and average total (healing) points of the wounds. Conclusion: Acticoat (TM) dressing accelerates pressure ulcer healing more than phenytoin cream does. Consequently, given its availability and cost effectiveness, it can be used as a common treatment in the healing of pressure ulcers.

  6. A Confident Information First Principle for Parameter Reduction and Model Selection of Boltzmann Machines.

    Science.gov (United States)

    Zhao, Xiaozhao; Hou, Yuexian; Song, Dawei; Li, Wenjie

    2017-03-16

    Typical dimensionality reduction (DR) methods are data-oriented, focusing on directly reducing the number of random variables (or features) while retaining the maximal variations in the high-dimensional data. Targeting unsupervised situations, this paper aims to address the problem from a novel perspective and considers model-oriented DR in parameter spaces of binary multivariate distributions. Specifically, we propose a general parameter reduction criterion, called confident-information-first (CIF) principle, to maximally preserve confident parameters and rule out less confident ones. Formally, the confidence of each parameter can be assessed by its contribution to the expected Fisher information distance within a geometric manifold over the neighborhood of the underlying real distribution. Then, we demonstrate two implementations of CIF in different scenarios. First, when there are no observed samples, we revisit the Boltzmann machines (BMs) from a model selection perspective and theoretically show that both the fully visible BM and the BM with hidden units can be derived from the general binary multivariate distribution using the CIF principle. This finding would help us uncover and formalize the essential parts of the target density that BM aims to capture and the nonessential parts that BM should discard. Second, when there exist observed samples, we apply CIF to the model selection for BM, which is in turn made adaptive to the observed samples. The sample-specific CIF is a heuristic method to decide the priority order of parameters, which can improve the search efficiency without degrading the quality of model selection results as shown in a series of density estimation experiments.

  7. Model selection on solid ground: Rigorous comparison of nine ways to evaluate Bayesian model evidence.

    Science.gov (United States)

    Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang

    2014-12-01

    Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible.
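
    The brute-force Monte Carlo reference method mentioned in this record, averaging the likelihood over samples drawn from the prior, can be sketched for a deliberately simple one-parameter Gaussian case. This is synthetic illustration only (real hydrological models are far too expensive for this many likelihood evaluations); the two "conceptual models" here differ solely in their prior for the mean.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic observations, actually generated around a mean of 2.0.
data = rng.normal(2.0, 1.0, size=20)

def log_bme(prior_mean, prior_sd, data, n_samples=100_000):
    """Brute-force Monte Carlo BME: draw parameters from the prior and
    average the likelihood (here a Gaussian with known unit variance)."""
    mu = rng.normal(prior_mean, prior_sd, size=n_samples)
    log_lik = (-0.5 * ((data[None, :] - mu[:, None]) ** 2).sum(axis=1)
               - 0.5 * data.size * np.log(2 * np.pi))
    # log-mean-exp for numerical stability
    return log_lik.max() + np.log(np.mean(np.exp(log_lik - log_lik.max())))

# Two competing "conceptual models" differing only in the prior for mu.
log_bme_m1 = log_bme(0.0, 1.0, data)   # prior N(0, 1)
log_bme_m2 = log_bme(2.0, 1.0, data)   # prior N(2, 1)
print(f"log BME, prior N(0,1): {log_bme_m1:.2f}")
print(f"log BME, prior N(2,1): {log_bme_m2:.2f}")
```

    Because the data were generated around 2.0, the model whose prior is centred there attains the higher evidence; the same averaging idea, at much greater cost, underlies the brute-force reference solutions discussed above.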

  8. Model selection on solid ground: Rigorous comparison of nine ways to evaluate Bayesian model evidence

    Science.gov (United States)

    Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang

    2014-12-01

    Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible.

  9. Collaborative Care for patients with severe borderline and NOS personality disorders: A comparative multiple case study on processes and outcomes

    Directory of Open Access Journals (Sweden)

    Koekkoek Bauke

    2011-06-01

    Full Text Available Abstract Background Structured psychotherapy is recommended as the preferred treatment of personality disorders. A substantial group of patients, however, has no access to these therapies or does not benefit from them. For those patients who have no (longer) access to psychotherapy, a Collaborative Care Program (CCP) has been developed. Collaborative Care originated in somatic health care to increase shared decision making and to enhance the self-management skills of chronic patients. Nurses have a prominent position in CCPs, as they are responsible for optimal continuity and coordination of care. The aim of the CCP is to improve quality of life and self-management skills, and to reduce destructive behaviour and other manifestations of the personality disorder. Methods/design Quantitative and qualitative data are combined in a comparative multiple case study. This makes it possible to test the feasibility of the CCP and also provides insight into its preliminary outcomes. Two treatment conditions will be compared: one in which the CCP is provided, and one in which Care as Usual (CAU) is offered. In both conditions 16 patients will be included. The perspectives of patients, their informal carers and nurses are integrated in this study. Data (questionnaires, documents, and interviews) will be collected among these three groups of participants. The process of treatment and care within both research conditions is described with qualitative research methods. Additional quantitative data provide insight into the preliminary results of the CCP compared to CAU. With a stepped analysis plan, the 'black box' of the application of the program will be revealed in order to understand which characteristics and influencing factors are indicative of positive or negative outcomes. Discussion The present study is, to the best of our knowledge, the first to examine Collaborative Care for patients with severe personality disorders receiving outpatient mental health care.
With the chosen

  10. Use of the AIC with the EM algorithm: A demonstration of a probability model selection technique

    Energy Technology Data Exchange (ETDEWEB)

    Glosup, J.G.; Axelrod M.C. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    The problem of discriminating between two potential probability models, a Gaussian distribution and a mixture of Gaussian distributions, is considered. The focus of our interest is a case where the models are potentially non-nested and the parameters of the mixture model are estimated through the EM algorithm. The AIC, which is frequently used as a criterion for discriminating between non-nested models, is modified to work with the EM algorithm and is shown to provide a model selection tool for this situation. A particular problem involving an infinite mixture distribution known as Middleton's Class A model is used to demonstrate the effectiveness and limitations of this method.
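
    The combination of EM estimation with AIC-based discrimination described in this record can be illustrated on a one-dimensional toy problem. This sketch uses synthetic data and a plain two-component Gaussian mixture, not Middleton's Class A model: a hand-rolled EM loop fits the mixture, and the AIC then arbitrates between it and a single Gaussian.

```python
import numpy as np

rng = np.random.default_rng(1)
# Data drawn from a genuine two-component mixture.
x = np.concatenate([rng.normal(-2.0, 1.0, 150), rng.normal(3.0, 1.0, 150)])
n = x.size

def norm_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Model 1: single Gaussian (2 free parameters), MLE in closed form.
mu1, sd1 = x.mean(), x.std()
ll_single = np.sum(np.log(norm_pdf(x, mu1, sd1)))
aic_single = 2 * 2 - 2 * ll_single

# Model 2: two-component mixture (5 free parameters), fitted by EM.
w, mu, sd = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: responsibility of component 0 for each point.
    p0 = w * norm_pdf(x, mu[0], sd[0])
    p1 = (1 - w) * norm_pdf(x, mu[1], sd[1])
    r = p0 / (p0 + p1)
    # M-step: re-estimate weight, means and standard deviations.
    w = r.mean()
    mu = np.array([np.sum(r * x) / r.sum(),
                   np.sum((1 - r) * x) / (1 - r).sum()])
    sd = np.array([np.sqrt(np.sum(r * (x - mu[0]) ** 2) / r.sum()),
                   np.sqrt(np.sum((1 - r) * (x - mu[1]) ** 2) / (1 - r).sum())])
ll_mix = np.sum(np.log(w * norm_pdf(x, mu[0], sd[0])
                       + (1 - w) * norm_pdf(x, mu[1], sd[1])))
aic_mix = 2 * 5 - 2 * ll_mix

print(f"AIC single Gaussian: {aic_single:.1f}")
print(f"AIC 2-comp mixture:  {aic_mix:.1f}")
```

    On well-separated data the mixture's likelihood gain dwarfs the AIC penalty for its three extra parameters, so the mixture is selected; the record's point is that this comparison remains meaningful even though the two models are non-nested.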

  11. Model selection emphasises the importance of non-chromosomal information in genetic studies.

    Directory of Open Access Journals (Sweden)

    Reda Rawi

    Full Text Available Ever since the case of the missing heritability was highlighted some years ago, scientists have been investigating various possible explanations for the issue. However, none of these explanations include non-chromosomal genetic information. Here we describe explicitly how chromosomal and non-chromosomal modifiers collectively influence the heritability of a trait, in this case, the growth rate of yeast. Our results show that the non-chromosomal contribution can be large, adding another dimension to the estimation of heritability. We also discovered, combining the strength of LASSO with model selection, that the interaction of chromosomal and non-chromosomal information is essential in describing phenotypes.
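The combination of LASSO shrinkage with model selection mentioned above can be illustrated with a toy cyclic coordinate-descent LASSO. This is an illustrative sketch only (the data, penalty value, and two-feature setup are made up, not the study's yeast pipeline); it shows how the L1 penalty zeroes out a feature that carries no signal.

```python
import random

def soft_threshold(z, g):
    """Soft-thresholding operator used in LASSO coordinate descent."""
    if z > g:
        return z - g
    if z < -g:
        return z + g
    return 0.0

def lasso_coordinate_descent(X, y, lam, iters=100):
    """Minimise 0.5*||y - X b||^2 + lam*||b||_1 by cyclic coordinate descent."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # Residual with feature j's current contribution removed.
            r = [y[i] - sum(beta[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            norm_sq = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / norm_sq
    return beta

# Toy data: y depends only on the first feature; the second is pure noise.
random.seed(0)
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(60)]
y = [2.0 * row[0] + random.gauss(0, 0.5) for row in X]
beta = lasso_coordinate_descent(X, y, lam=15.0)
```

With a sufficiently large penalty the irrelevant coefficient is driven exactly to zero while the true predictor survives, which is the model-selection behaviour exploited in the study.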

  12. Secondary Ion Mass Spectrometry Imaging of Molecular Distributions in Cultured Neurons and Their Processes: Comparative Analysis of Sample Preparation

    Science.gov (United States)

    Tucker, Kevin R.; Li, Zhen; Rubakhin, Stanislav S.; Sweedler, Jonathan V.

    2012-11-01

    Neurons often exhibit a complex chemical distribution and topography; therefore, sample preparation protocols that preserve structures ranging from relatively large cell somata to small neurites and growth cones are important factors in secondary ion mass spectrometry (SIMS) imaging studies. Here, SIMS was used to investigate the subcellular localization of lipids and lipophilic species in neurons from Aplysia californica. Using individual neurons cultured on silicon wafers, we compared and optimized several SIMS sampling approaches. After an initial step to remove the high salt culturing media, formaldehyde, paraformaldehyde, and glycerol, and various combinations thereof, were tested for their ability to achieve cell stabilization during and after the removal of extracellular media. These treatments improved the preservation of cellular morphology as visualized with SIMS imaging. For analytes >250 Da, coating the cell surface with a 3.2 nm-thick gold layer increased the ion intensity; multiple analytes previously not observed or observed at low abundance were detected, including intact cholesterol and vitamin E molecular ions. However, once a sample was coated, many of the lower molecular mass signals were attenuated; the preferred overall approach combined cell stabilization with glycerol and 4% paraformaldehyde. The sample preparation methods described here enhance SIMS imaging of processes of individual cultured neurons over a broad mass range with enhanced image contrast.

  13. A finite volume alternate direction implicit approach to modeling selective laser melting

    DEFF Research Database (Denmark)

    Hattel, Jesper Henri; Mohanty, Sankhya

    2013-01-01

    Over the last decade, several studies have attempted to develop thermal models for analyzing the selective laser melting process with a vision to predict thermal stresses, microstructures and resulting mechanical properties of manufactured products. While a holistic model addressing all involved ... to accurately simulate the process, are constrained by either the size or scale of the model domain. A second challenging aspect involves the inclusion of non-linear material behavior into the 3D implicit FE models. An alternating direction implicit (ADI) method based on a finite volume (FV) formulation ... is proposed for modeling single-layer and few-layers selective laser melting processes. The ADI technique is implemented and applied for two cases involving constant material properties and non-linear material behavior. The ADI FV method consumes less time while having comparable accuracy with respect to 3D ...
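The ADI idea referenced in this record (splitting a 2D/3D implicit step into one-dimensional tridiagonal solves per direction) can be sketched in a few lines. The following is a minimal Peaceman-Rachford ADI step for plain 2D heat diffusion with zero-temperature walls, using the Thomas algorithm; it is an illustrative toy, not the paper's selective-laser-melting model, and the grid size and number of steps are arbitrary.

```python
def solve_tridiagonal(a, b, c, d):
    """Thomas algorithm for a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(d)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_step(u, r):
    """One Peaceman-Rachford ADI step for du/dt = laplacian(u), zero Dirichlet walls."""
    n = len(u)
    a = [-r] * n
    b = [1 + 2 * r] * n
    c = [-r] * n
    # Half step 1: implicit along x, explicit along y.
    half = []
    for j in range(n):
        d = []
        for i in range(n):
            up = u[j + 1][i] if j + 1 < n else 0.0
            dn = u[j - 1][i] if j - 1 >= 0 else 0.0
            d.append(u[j][i] + r * (up - 2 * u[j][i] + dn))
        half.append(solve_tridiagonal(a, b, c, d))
    # Half step 2: implicit along y, explicit along x.
    new = [[0.0] * n for _ in range(n)]
    for i in range(n):
        d = []
        for j in range(n):
            rt = half[j][i + 1] if i + 1 < n else 0.0
            lf = half[j][i - 1] if i - 1 >= 0 else 0.0
            d.append(half[j][i] + r * (rt - 2 * half[j][i] + lf))
        col = solve_tridiagonal(a, b, c, d)
        for j in range(n):
            new[j][i] = col[j]
    return new

# A hot spot in the centre of a 15x15 interior grid diffusing toward cold walls.
n = 15
u = [[0.0] * n for _ in range(n)]
u[n // 2][n // 2] = 1.0
for _ in range(20):
    u = adi_step(u, r=0.25)
peak = max(max(row) for row in u)
```

The appeal noted in the abstract is visible in the structure: each half-step solves only tridiagonal systems, so cost grows linearly with grid size while retaining implicit-method stability.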

  14. Comparative Study of Sustained Attentional Bias on Emotional Processing in ADHD Children to Pictures with Eye-Tracking

    Directory of Open Access Journals (Sweden)

    Ebrahim PISHYAREH

    2015-01-01

    Full Text Available How to Cite This Article: Pishyareh E, Tehrani-doost M, Mahmoodi-gharaie J, Khorrami A, Rahmdar SR. A Comparative Study of Sustained Attentional Bias on Emotional Processing in ADHD Children to Pictures with Eye-Tracking. Iran J Child Neurol. 2015 Winter;9(1):64-70. Abstract Objective ADHD children show anomalous and negative behavior, especially in emotionally charged situations, when compared to others. Evidence indicates that attention has an impact on emotional processing. The present study evaluates the effect of emotional processing on the sustained attention of children with ADHD type C. Materials & Methods Sixty participants formed two equal groups (30 normal children and 30 children with ADHD), and each subject met the selection criteria for either the normal or the ADHD group. Both groups were aged 6 to 11 years. All pictures were chosen from the International Affective Picture System (IAPS) and presented paired emotional and neutral scenes in the following categories: pleasant-neutral; pleasant-unpleasant; unpleasant-neutral; and neutral-neutral. Sustained attention was evaluated based on the number and duration of total fixations and was compared between the groups with MANOVA analysis. Results The duration of sustained attention on pleasant scenes in the pleasant-unpleasant pair was significant. Bias in duration of sustained attention on pleasant scenes in pleasant-neutral pairs is significantly different between the groups. Conclusion Such significant differences might be indicative of ADHD children’s deficiencies in emotional processing. It seems that the strong capacity of emotionally unpleasant scenes to capture ADHD children’s attention is responsible for impulsiveness and abnormal processing of emotional stimuli.

  15. Comparing Acceptance and Commitment Group Therapy and 12-Steps Narcotics Anonymous in Addict's Rehabilitation Process: A Randomized Controlled Trial.

    Science.gov (United States)

    Azkhosh, Manoochehr; Farhoudian, Ali; Saadati, Hemn; Shoaee, Fateme; Lashani, Leila

    2016-10-01

    Objective: Substance abuse is a socio-psychological disorder. The aim of this study was to compare the effectiveness of acceptance and commitment therapy with the 12-steps Narcotics Anonymous program on the psychological well-being of opiate-dependent individuals in addiction treatment centers in Shiraz, Iran. Method: This was a randomized controlled trial. Data were collected at entry into the study and at post-test and follow-up visits. The participants were selected from opiate-addicted individuals who referred to addiction treatment centers in Shiraz. Sixty individuals were evaluated according to inclusion/exclusion criteria and were randomly divided into three equal groups (20 participants per group). One group received acceptance and commitment group therapy (twelve 90-minute sessions), the second group was provided with the 12-steps Narcotics Anonymous program, and the control group received the usual methadone maintenance treatment. During the treatment process, seven participants dropped out. Data were collected using the psychological well-being questionnaire and the AAQ questionnaire in the three groups at pre-test, post-test and follow-up visits. Data were analyzed using repeated measures analysis of variance. Results: Repeated measures analysis of variance revealed that the mean difference between the three groups was significant. The acceptance and commitment therapy group showed improvement relative to the NA and control groups on psychological well-being and psychological flexibility. Conclusion: The results of this study revealed that acceptance and commitment therapy can be helpful in enhancing positive emotions and increasing the psychological well-being of addicts who seek treatment.

  16. Comparative study of denaturation of whey protein isolate (WPI) in convective air drying and isothermal heat treatment processes.

    Science.gov (United States)

    Haque, M Amdadul; Aldred, Peter; Chen, Jie; Barrow, Colin J; Adhikari, Benu

    2013-11-15

    The extent and nature of denaturation of whey protein isolate (WPI) in convective air drying environments was measured and analysed using single droplet drying. A custom-built, single droplet drying instrument was used for this purpose. Single droplets having 5±0.1μl volume (initial droplet diameter 1.5±0.1mm) containing 10% (w/v) WPI were dried at air temperatures of 45, 65 and 80°C for 600s at constant air velocity of 0.5m/s. The extent and nature of denaturation of WPI in isothermal heat treatment processes was measured at 65 and 80°C for 600s and compared with those obtained from convective air drying. The extent of denaturation of WPI in a high hydrostatic pressure environment (600MPa for 600s) was also determined. The results showed that at the end of 600s of convective drying at 65°C the denaturation of WPI was 68.3%, while it was only 10.8% during isothermal heat treatment at the same medium temperature. When the medium temperature was maintained at 80°C, the denaturation loss of WPI was 90.0% and 68.7% during isothermal heat treatment and convective drying, respectively. The bovine serum albumin (BSA) fraction of WPI was found to be more stable in the convective drying conditions than β-lactoglobulin and α-lactalbumin, especially at longer drying times. The extent of denaturation of WPI in convective air drying (65 and 80°C) and isothermal heat treatment (80°C) for 600s was found to be higher than its denaturation in a high hydrostatic pressure environment at ambient temperature (600MPa for 600s).

  17. A comparative study of fractional order PIλ/PIλDµ tuning rules for stable first order plus time delay processes

    Directory of Open Access Journals (Sweden)

    R. Ranganayakulu

    2016-12-01

    Six recent tuning methods, three for fractional order PIλ and the remaining three for fractional order PIλDµ, were considered. Finally, from the simulation results the optimal tuning method is recommended based on the control objective of the process and the process dead time (L) to time constant (T) ratio. It is observed that the performance of the tuning methods varies with the nature of the process, such as lag-dominant, balanced and delay-significant processes. The FOPTD processes were checked for robustness with increasing L/T ratio with respect to IAE, TV and Ms.
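The IAE and TV performance measures used in this comparison can be computed from a simple closed-loop simulation. The sketch below is illustrative only: it uses an ordinary (integer-order) PI controller on an FOPTD process with assumed gains, not any of the six fractional-order tuning rules from the paper. The dead time is realised as a delay buffer, and IAE (integral of absolute error) and TV (total variation of the control signal) are accumulated along the trajectory.

```python
from collections import deque

def simulate_pi_on_foptd(K=1.0, T=5.0, L=1.0, Kp=2.0, Ki=0.5, dt=0.01, t_end=60.0):
    """Euler simulation of a PI loop around K*exp(-L*s)/(T*s + 1); returns (y_final, IAE, TV)."""
    n_delay = max(1, int(round(L / dt)))
    buf = deque([0.0] * n_delay, maxlen=n_delay)  # transport-delay buffer for u(t - L)
    y = 0.0
    integral = 0.0
    u_prev = 0.0
    iae = 0.0
    tv = 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - y                      # unit setpoint step at t = 0
        integral += e * dt
        u = Kp * e + Ki * integral       # PI control law
        tv += abs(u - u_prev)            # total variation of the control signal
        u_prev = u
        u_delayed = buf[0]               # oldest buffered value is u(t - L)
        buf.append(u)
        y += dt * (-y + K * u_delayed) / T   # first-order process dynamics
        iae += abs(e) * dt               # integral of absolute error
    return y, iae, tv

# L/T = 0.2, i.e. a lag-dominant process in the paper's classification.
y_final, iae, tv = simulate_pi_on_foptd()
```

With the assumed gains the loop is well damped, so the output settles at the setpoint and both metrics are finite; sweeping L/T upward in such a simulation reproduces the robustness comparison described in the abstract.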

  18. Polymorphic toxin systems: Comprehensive characterization of trafficking modes, processing, mechanisms of action, immunity and ecology using comparative genomics

    Directory of Open Access Journals (Sweden)

    Zhang Dapeng

    2012-06-01

    Full Text Available Abstract Background Proteinaceous toxins are observed across all levels of inter-organismal and intra-genomic conflicts. These include recently discovered prokaryotic polymorphic toxin systems implicated in intra-specific conflicts. They are characterized by a remarkable diversity of C-terminal toxin domains generated by recombination with standalone toxin-coding cassettes. Prior analysis revealed a striking diversity of nuclease and deaminase domains among the toxin modules. We systematically investigated polymorphic toxin systems using comparative genomics, sequence and structure analysis. Results Polymorphic toxin systems are distributed across all major bacterial lineages and are delivered by at least eight distinct secretory systems. In addition to type-II, these include type-V, VI, VII (ESX), and the poorly characterized “Photorhabdus virulence cassettes (PVC)”, PrsW-dependent and MuF phage-capsid-like systems. We present evidence that trafficking of these toxins is often accompanied by autoproteolytic processing catalyzed by HINT, ZU5, PrsW, caspase-like, papain-like, and a novel metallopeptidase associated with the PVC system. We identified over 150 distinct toxin domains in these systems. These span an extraordinary catalytic spectrum to include 23 distinct clades of peptidases, numerous previously unrecognized versions of nucleases and deaminases, ADP-ribosyltransferases, ADP ribosyl cyclases, RelA/SpoT-like nucleotidyltransferases, glycosyltransferases and other enzymes predicted to modify lipids and carbohydrates, and a pore-forming toxin domain. Several of these toxin domains are shared with host-directed effectors of pathogenic bacteria. Over 90 families of immunity proteins might neutralize anywhere from a single to at least 27 distinct types of toxin domains. In some organisms multiple tandem immunity genes or immunity protein domains are organized into polyimmunity loci or polyimmunity proteins. Gene-neighborhood-analysis of

  19. Polymorphic toxin systems: Comprehensive characterization of trafficking modes, processing, mechanisms of action, immunity and ecology using comparative genomics

    Science.gov (United States)

    2012-01-01

    Background Proteinaceous toxins are observed across all levels of inter-organismal and intra-genomic conflicts. These include recently discovered prokaryotic polymorphic toxin systems implicated in intra-specific conflicts. They are characterized by a remarkable diversity of C-terminal toxin domains generated by recombination with standalone toxin-coding cassettes. Prior analysis revealed a striking diversity of nuclease and deaminase domains among the toxin modules. We systematically investigated polymorphic toxin systems using comparative genomics, sequence and structure analysis. Results Polymorphic toxin systems are distributed across all major bacterial lineages and are delivered by at least eight distinct secretory systems. In addition to type-II, these include type-V, VI, VII (ESX), and the poorly characterized “Photorhabdus virulence cassettes (PVC)”, PrsW-dependent and MuF phage-capsid-like systems. We present evidence that trafficking of these toxins is often accompanied by autoproteolytic processing catalyzed by HINT, ZU5, PrsW, caspase-like, papain-like, and a novel metallopeptidase associated with the PVC system. We identified over 150 distinct toxin domains in these systems. These span an extraordinary catalytic spectrum to include 23 distinct clades of peptidases, numerous previously unrecognized versions of nucleases and deaminases, ADP-ribosyltransferases, ADP ribosyl cyclases, RelA/SpoT-like nucleotidyltransferases, glycosyltransferases and other enzymes predicted to modify lipids and carbohydrates, and a pore-forming toxin domain. Several of these toxin domains are shared with host-directed effectors of pathogenic bacteria. Over 90 families of immunity proteins might neutralize anywhere from a single to at least 27 distinct types of toxin domains. In some organisms multiple tandem immunity genes or immunity protein domains are organized into polyimmunity loci or polyimmunity proteins. Gene-neighborhood-analysis of polymorphic toxin systems

  20. Model Selection of Cooling Equipment for Granular Fertilizers

    Institute of Scientific and Technical Information of China (English)

    李秋萍; 程建伟; 邵国兴

    2011-01-01

    In the production of granular fertilizers, product cooling is one of the important processes that directly affect product packaging and quality. Several cooling facilities suitable for granular fertilizers are described, such as the rotary drum cooler, fluid-bed cooler, and wave-surface cooler. A brief account and review are given of their working principles and performance characteristics, as a reference for users in their model selection of equipment.

  1. Model selection reveals control of cold signalling by evening-phased components of the plant circadian clock.

    Science.gov (United States)

    Keily, Jack; MacGregor, Dana R; Smith, Robert W; Millar, Andrew J; Halliday, Karen J; Penfield, Steven

    2013-10-01

    Circadian clocks confer advantages by restricting biological processes to certain times of day through the control of specific phased outputs. Control of temperature signalling is an important function of the plant oscillator, but the architecture of the gene network controlling cold signalling by the clock is not well understood. Here we use a model ensemble fitted to time-series data and a corrected Akaike Information Criterion (AICc) analysis to extend a dynamic model to include the control of the key cold-regulated transcription factors C-REPEAT BINDING FACTORs 1-3 (CBF1, CBF2, CBF3). AICc was combined with in silico analysis of genetic perturbations in the model ensemble, and selected a model that predicted mutant phenotypes and connections between evening-phased circadian clock components and CBF3 transcriptional control, but these connections were not shared by CBF1 and CBF2. In addition, our model predicted the correct gating of CBF transcription by cold only when the cold signal originated from the clock mechanism itself, suggesting that the clock has an important role in temperature signal transduction. Our data show that model selection could be a useful method for the expansion of gene network models.
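The corrected AIC used in this study adds a small-sample penalty to the ordinary AIC: AICc = AIC + 2k(k+1)/(n-k-1), where k is the number of parameters and n the number of observations. A minimal sketch (the log-likelihoods and parameter counts below are illustrative numbers, not values from the paper):

```python
def aicc(log_likelihood, k, n):
    """Corrected AIC: ordinary AIC plus a small-sample penalty; requires n > k + 1."""
    aic = 2 * k - 2 * log_likelihood
    return aic + 2 * k * (k + 1) / (n - k - 1)

# Illustrative comparison: a 3-parameter model with a slightly worse fit
# beats an 8-parameter model once the small-sample correction is applied.
score_simple = aicc(log_likelihood=-50.0, k=3, n=20)
score_complex = aicc(log_likelihood=-47.0, k=8, n=20)
```

Because the correction term grows quickly as k approaches n, AICc guards against over-parameterised network models when time-series data are short, which is exactly the regime of the model-ensemble fitting described above.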

  2. Comparing equivalent thermal, high pressure and pulsed electric field processes for mild pasteurization of orange juice: Part II: Impact on specific chemical and biochemical quality parameters

    NARCIS (Netherlands)

    Vervoort, L.; Plancken, van der I.; Grauwet, T.; Timmermans, R.A.H.; Mastwijk, H.C.; Matser, A.M.; Hendrickx, M.E.; Loey, van A.

    2011-01-01

    The impact of thermal, high pressure (HP) and pulsed electric field (PEF) processing for mild pasteurization of orange juice was compared on a fair basis, using processing conditions leading to an equivalent degree of microbial inactivation. Examining the effect on specific chemical and biochemical

  3. Core-scale solute transport model selection using Monte Carlo analysis

    CERN Document Server

    Malama, Bwalya; James, Scott C

    2013-01-01

    Model applicability to core-scale solute transport is evaluated using breakthrough data from column experiments conducted with conservative tracers tritium (H-3) and sodium-22, and the retarding solute uranium-232. The three models considered are single-porosity, double-porosity with single-rate mobile-immobile mass-exchange, and the multirate model, which is a deterministic model that admits the statistics of a random mobile-immobile mass-exchange rate coefficient. The experiments were conducted on intact Culebra Dolomite core samples. Previously, data were analyzed using single- and double-porosity models although the Culebra Dolomite is known to possess multiple types and scales of porosity, and to exhibit multirate mobile-immobile-domain mass transfer characteristics at field scale. The data are reanalyzed here and null-space Monte Carlo analysis is used to facilitate objective model selection. Prediction (or residual) bias is adopted as a measure of the model structural error. The analysis clearly shows ...

  4. A study of early stopping and model selection applied to the papermaking industry.

    Science.gov (United States)

    Edwards, P J; Murray, A F

    2000-02-01

    This paper addresses the issues of neural network model development and maintenance in the context of a complex task taken from the papermaking industry. In particular, it describes a comparison study of early stopping techniques and model selection, both to optimise neural network models for generalisation performance. The results presented here show that early stopping via use of a Bayesian model evidence measure is a viable way of optimising performance while also making maximum use of all the data. In addition, they show that ten-fold cross-validation performs well as a model selector and as an estimator of prediction accuracy. These results are important in that they show how neural network models may be optimally trained and selected for highly complex industrial tasks where the data are noisy and limited in number.
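The ten-fold cross-validation model selector evaluated in this paper can be sketched with a deliberately simple pair of candidate models. This is an illustrative toy (a constant-mean model versus a straight-line model on synthetic data), not the paper's neural networks: each fold is held out once, the candidates are fitted on the remainder, and the model with the lower cross-validated MSE is selected.

```python
import random

def fit_mean(xs, ys):
    """Candidate 1: predict the training mean everywhere."""
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_linear(xs, ys):
    """Candidate 2: ordinary least-squares straight line."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return lambda x: intercept + slope * x

def cv_mse(xs, ys, fit, k=10):
    """k-fold cross-validated mean squared prediction error."""
    n = len(xs)
    fold = n // k
    total = 0.0
    for f in range(k):
        lo, hi = f * fold, (f + 1) * fold
        model = fit(xs[:lo] + xs[hi:], ys[:lo] + ys[hi:])   # train without fold f
        total += sum((model(x) - y) ** 2 for x, y in zip(xs[lo:hi], ys[lo:hi]))
    return total / (fold * k)

# Noisy linear data: cross-validation should prefer the linear model.
random.seed(42)
xs = [random.uniform(-2, 2) for _ in range(100)]
ys = [3.0 * x + random.gauss(0, 1) for x in xs]
mse_mean = cv_mse(xs, ys, fit_mean)
mse_linear = cv_mse(xs, ys, fit_linear)
```

Because every point is predicted exactly once by a model that never saw it, the cross-validated MSE doubles as the estimate of prediction accuracy that the paper reports ten-fold CV to provide.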

  5. Estimation and Model Selection for Model-Based Clustering with the Conditional Classification Likelihood

    CERN Document Server

    Baudry, Jean-Patrick

    2012-01-01

    The Integrated Completed Likelihood (ICL) criterion was proposed by Biernacki et al. (2000) in the model-based clustering framework to select a relevant number of classes and has been used by statisticians in various application areas. A theoretical study of this criterion is proposed. A contrast related to the clustering objective is introduced: the conditional classification likelihood. This yields an estimator and a class of model selection criteria. The properties of these new procedures are studied and ICL is proved to be an approximation of one of these criteria. We contrast these results with the current leading point of view about ICL, namely that it would not be consistent. Moreover these results give insights into the class notion underlying ICL and feed a reflection on the class notion in clustering. General results on penalized minimum contrast criteria and on mixture models are derived, which are interesting in their own right.

  6. Numerical algebraic geometry for model selection and its application to the life sciences.

    Science.gov (United States)

    Gross, Elizabeth; Davis, Brent; Ho, Kenneth L; Bates, Daniel J; Harrington, Heather A

    2016-10-01

    Researchers working with mathematical models are often confronted by the related problems of parameter estimation, model validation and model selection. These are all optimization problems, well known to be challenging due to nonlinearity, non-convexity and multiple local optima. Furthermore, the challenges are compounded when only partial data are available. Here, we consider polynomial models (e.g. mass-action chemical reaction networks at steady state) and describe a framework for their analysis based on optimization using numerical algebraic geometry. Specifically, we use probability-one polynomial homotopy continuation methods to compute all critical points of the objective function, then filter to recover the global optima. Our approach exploits the geometrical structures relating models and data, and we demonstrate its utility on examples from cell signalling, synthetic biology and epidemiology.
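The probability-one homotopy continuation at the heart of this framework can be illustrated in one dimension. The sketch below is a toy, not the authors' multivariate machinery: it tracks the roots of an easy start system gamma*(x^n - 1) to the roots of a target polynomial along a linear homotopy, using Newton's method as the corrector; the complex constant gamma is the standard "gamma trick" used to keep the paths nonsingular with probability one.

```python
import cmath

def poly_eval(coeffs, x):
    """Horner evaluation; coeffs are highest degree first."""
    acc = 0j
    for c in coeffs:
        acc = acc * x + c
    return acc

def poly_deriv(coeffs):
    n = len(coeffs) - 1
    return [c * (n - i) for i, c in enumerate(coeffs[:-1])]

def newton_correct(coeffs, x, iters=15):
    """Newton iteration pulling x back onto the nearby root of the polynomial."""
    d = poly_deriv(coeffs)
    for _ in range(iters):
        fx = poly_eval(coeffs, x)
        dfx = poly_eval(d, x)
        if abs(dfx) < 1e-14:
            break
        x -= fx / dfx
    return x

def homotopy_roots(target, steps=200):
    """Track all roots of gamma*(x^n - 1) to the roots of `target`."""
    n = len(target) - 1
    gamma = 0.6 + 0.8j                        # generic complex constant ("gamma trick")
    start = [gamma] + [0j] * (n - 1) + [-gamma]
    found = []
    for k in range(n):
        x = cmath.exp(2j * cmath.pi * k / n)  # k-th root of unity: known start root
        for s in range(1, steps + 1):
            t = s / steps
            # Linear homotopy between the start and target coefficient vectors.
            h = [(1 - t) * a + t * b for a, b in zip(start, target)]
            x = newton_correct(h, x)          # corrector with a trivial predictor
        found.append(newton_correct(target, x, iters=30))  # final polish at t = 1
    return found

# f(x) = x^2 - 3x + 2 has roots 1 and 2.
roots = sorted(homotopy_roots([1, -3, 2]), key=lambda z: z.real)
```

Each start root yields one path and every target root lies at the end of some path, which is the "compute all critical points, then filter" guarantee the abstract relies on, here in miniature.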

  7. EXONEST: Bayesian Model Selection Applied to the Detection and Characterization of Exoplanets Via Photometric Variations

    CERN Document Server

    Placek, Ben; Angerhausen, Daniel

    2013-01-01

    EXONEST is an algorithm dedicated to detecting and characterizing the photometric signatures of exoplanets, which include reflection and thermal emission, Doppler boosting, and ellipsoidal variations. Using Bayesian Inference, we can test between competing models that describe the data as well as estimate model parameters. We demonstrate this approach by testing circular versus eccentric planetary orbital models, as well as testing for the presence or absence of four photometric effects. In addition to using Bayesian Model Selection, a unique aspect of EXONEST is the capability to distinguish between reflective and thermal contributions to the light curve. A case-study is presented using Kepler data recorded from the transiting planet KOI-13b. By considering only the non-transiting portions of the light curve, we demonstrate that it is possible to estimate the photometrically-relevant model parameters of KOI-13b. Furthermore, Bayesian model testing confirms that the orbit of KOI-13b has a detectable eccentric...

  8. EXONEST: Bayesian Model Selection Applied to the Detection and Characterization of Exoplanets via Photometric Variations

    Science.gov (United States)

    Placek, Ben; Knuth, Kevin H.; Angerhausen, Daniel

    2014-11-01

    EXONEST is an algorithm dedicated to detecting and characterizing the photometric signatures of exoplanets, which include reflection and thermal emission, Doppler boosting, and ellipsoidal variations. Using Bayesian inference, we can test between competing models that describe the data as well as estimate model parameters. We demonstrate this approach by testing circular versus eccentric planetary orbital models, as well as testing for the presence or absence of four photometric effects. In addition to using Bayesian model selection, a unique aspect of EXONEST is the potential capability to distinguish between reflective and thermal contributions to the light curve. A case study is presented using Kepler data recorded from the transiting planet KOI-13b. By considering only the nontransiting portions of the light curve, we demonstrate that it is possible to estimate the photometrically relevant model parameters of KOI-13b. Furthermore, Bayesian model testing confirms that the orbit of KOI-13b has a detectable eccentricity.

  9. SNP calling using genotype model selection on high-throughput sequencing data

    KAUST Repository

    You, Na

    2012-01-16

    Motivation: A review of the available single nucleotide polymorphism (SNP) calling procedures for Illumina high-throughput sequencing (HTS) platform data reveals that most rely mainly on base-calling and mapping qualities as sources of error when calling SNPs. Thus, errors not involved in base-calling or alignment, such as those in genomic sample preparation, are not accounted for. Results: A novel method of consensus and SNP calling, Genotype Model Selection (GeMS), is given which accounts for the errors that occur during the preparation of the genomic sample. Simulations and real data analyses indicate that GeMS has the best performance balance of sensitivity and positive predictive value among the tested SNP callers.
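The "genotype model selection" idea can be sketched as choosing, per site, the genotype whose error model best explains the observed read counts. The following is a hedged toy, not GeMS's actual likelihood (the error rate and counts are illustrative): each genotype implies a binomial probability of seeing the alternate base, and the maximum-likelihood genotype is selected.

```python
import math

def call_genotype(n_ref, n_alt, err=0.01):
    """Pick the genotype whose binomial model best explains the read counts."""
    alt_prob = {"RR": err, "RA": 0.5, "AA": 1.0 - err}   # P(alt read | genotype)
    n = n_ref + n_alt
    # Binomial coefficient in log space (shared by all three models).
    log_binom = math.lgamma(n + 1) - math.lgamma(n_alt + 1) - math.lgamma(n_ref + 1)
    logliks = {
        g: log_binom + n_alt * math.log(p) + n_ref * math.log(1.0 - p)
        for g, p in alt_prob.items()
    }
    best = max(logliks, key=logliks.get)
    return best, logliks

# A site is called a SNP when the selected model is not homozygous-reference.
hom_ref = call_genotype(20, 0)[0]
het = call_genotype(10, 10)[0]
hom_alt = call_genotype(1, 25)[0]
```

Extra error sources such as sample preparation, which the abstract highlights, would enter this picture as an inflated or site-specific `err`, shifting which genotype model wins at borderline sites.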

  10. EXONEST: Bayesian model selection applied to the detection and characterization of exoplanets via photometric variations

    Energy Technology Data Exchange (ETDEWEB)

    Placek, Ben; Knuth, Kevin H. [Physics Department, University at Albany (SUNY), Albany, NY 12222 (United States); Angerhausen, Daniel, E-mail: bplacek@albany.edu, E-mail: kknuth@albany.edu, E-mail: daniel.angerhausen@gmail.com [Department of Physics, Applied Physics, and Astronomy, Rensselear Polytechnic Institute, Troy, NY 12180 (United States)

    2014-11-10

    EXONEST is an algorithm dedicated to detecting and characterizing the photometric signatures of exoplanets, which include reflection and thermal emission, Doppler boosting, and ellipsoidal variations. Using Bayesian inference, we can test between competing models that describe the data as well as estimate model parameters. We demonstrate this approach by testing circular versus eccentric planetary orbital models, as well as testing for the presence or absence of four photometric effects. In addition to using Bayesian model selection, a unique aspect of EXONEST is the potential capability to distinguish between reflective and thermal contributions to the light curve. A case study is presented using Kepler data recorded from the transiting planet KOI-13b. By considering only the nontransiting portions of the light curve, we demonstrate that it is possible to estimate the photometrically relevant model parameters of KOI-13b. Furthermore, Bayesian model testing confirms that the orbit of KOI-13b has a detectable eccentricity.

  11. Hierarchical block structures and high-resolution model selection in large networks

    CERN Document Server

    Peixoto, Tiago P

    2013-01-01

    Discovering the large-scale topological features in empirical networks is a crucial tool in understanding how complex systems function. However most existing methods used to obtain the modular structure of networks suffer from serious problems, such as the resolution limit on the size of communities, where smaller but well-defined clusters are not detectable when the network becomes large. This phenomenon occurs for the very popular approach of modularity optimization, but also for more principled ones based on statistical inference and model selection. Here we construct a nested generative model which, through a complete description of the entire network hierarchy at multiple scales, is capable of avoiding this limitation, and enables the detection of modular structure at levels far beyond those possible by current approaches. Even with this increased resolution, the method is based on the principle of parsimony, and is capable of separating signal from noise, and thus will not lead to the identification of ...

  12. Numerical algebraic geometry for model selection and its application to the life sciences

    KAUST Repository

    Gross, Elizabeth

    2016-10-12

    Researchers working with mathematical models are often confronted by the related problems of parameter estimation, model validation and model selection. These are all optimization problems, well known to be challenging due to nonlinearity, non-convexity and multiple local optima. Furthermore, the challenges are compounded when only partial data are available. Here, we consider polynomial models (e.g. mass-action chemical reaction networks at steady state) and describe a framework for their analysis based on optimization using numerical algebraic geometry. Specifically, we use probability-one polynomial homotopy continuation methods to compute all critical points of the objective function, then filter to recover the global optima. Our approach exploits the geometrical structures relating models and data, and we demonstrate its utility on examples from cell signalling, synthetic biology and epidemiology.

  13. Using the Analytic Network Process (ANP) to assess the distribution of pharmaceuticals in hospitals – a comparative case study of a Danish and American hospital

    DEFF Research Database (Denmark)

    Feibert, Diana Cordes; Sørup, Christian Michel; Jacobsen, Peter

    2016-01-01

    Pharmaceuticals are a vital part of patient treatment and the timely delivery of pharmaceuticals to patients is therefore important. Hospitals are complex systems that provide a challenging environment for decision making. Implementing process changes and technologies to improve the pharmaceutical distribution process can therefore be a complex and challenging undertaking. A comparative case study was conducted benchmarking the pharmaceutical distribution process at a Danish and a US hospital to identify best practices. Using the ANP method, taking tangible and intangible aspects into consideration...

  14. COMPARATIVE ANALYSIS OF PRINT GLOSS OF METALIZED SHEETS PRINTED WITH SHEET FED OFFSET PRINTING PROCESS AND DRY TONER BASED DIGITAL PRINTING PROCESS

    OpenAIRE

    Mohit Kumar*, Aman Bhardwaj

    2016-01-01

    Printing metalized sheets with the offset printing process requires them to be primer coated prior to printing, which is complex, time consuming and adds cost. It has therefore not been common practice to use digital printing methods to create sharp, high-quality, complex, multi-color, foil-effect designs on foil-covered surfaces at relatively high speed and low cost. It was observed that the sheets printed with the dry toner based digital printing process h...

  15. The integration processes in Asia and Latin America in the last two decades: A necessary comparative analysis

    Directory of Open Access Journals (Sweden)

    Fernando Alfonso Rivas Mira

    2012-09-01

    Full Text Available In this article the authors analyze the integration processes of Asian and Latin American countries over the last decades, emphasizing that these processes are essentially different for historical as well as economic, political and cultural reasons. Whereas the Latin American region is weighed down by its colonial heritage from Spain and a rigid system of rules, the Asian countries, despite an equally colonial European heritage, have modern systems of rules, and their processes of integration among themselves and into the international economy explain their different results. At present, both regions are converging in the international process of economic integration, particularly in Asia-Pacific cooperation, where the most important countries of each region seek to take their relations to another level of understanding.

  16. Radiolytic hydrogen production from process vessels in HB line - production rates compared to evolution rates and discussion of LASL reviews

    Energy Technology Data Exchange (ETDEWEB)

    Bibler, N.E.

    1992-11-12

    Hydrogen production from radiolysis of aqueous solutions can create a safety hazard since hydrogen is flammable. At times this production can be significant, especially in HB line where nitric acid solutions containing high concentrations of Pu-238, an intense alpha emitter, are processed. The hydrogen production rates from these solutions are necessary for safety analyses of these process systems. The methods and conclusions of hydrogen production rate tests are provided in this report.

  17. Mindfulness training alters emotional memory recall compared to active controls: support for an emotional information processing model of mindfulness

    OpenAIRE

    Doug Roberts-Wolfe; Matthew Sacchet; Elizabeth Hastings; Harold Roth; Willoughby Britton

    2012-01-01

    Objectives: While mindfulness-based interventions have received widespread application in both clinical and non-clinical populations, the mechanism by which mindfulness meditation improves well-being remains elusive. One possibility is that mindfulness training alters the processing of emotional information, similar to prevailing cognitive models of depression and anxiety. The aim of this study was to investigate the effects of mindfulness training on emotional information processing (i.e. ...

  18. Mindfulness Training Alters Emotional Memory Recall Compared to Active Controls: Support for an Emotional Information Processing Model of Mindfulness

    OpenAIRE

    Roberts-Wolfe, Douglas; Sacchet, Matthew D.; Hastings, Elizabeth; Roth, Harold; Britton, Willoughby

    2012-01-01

    Objectives: While mindfulness-based interventions have received widespread application in both clinical and non-clinical populations, the mechanism by which mindfulness meditation improves well-being remains elusive. One possibility is that mindfulness training alters the processing of emotional information, similar to prevailing cognitive models of depression and anxiety. The aim of this study was to investigate the effects of mindfulness training on emotional information processing (i.e., m...

  19. A comparative study on the use of drilling and milling processes in hole making of GFRP composite

    Indian Academy of Sciences (India)

    Hussein M Ali; Asif Iqbal; Li Liang

    2013-08-01

    Drilling and milling processes are extensively used for producing riveted and bolted joints during the assembly operations of composite laminates with other components. Hole making in glass fibre reinforced plastic (GFRP) composites is the most common mechanical process used to join them to other metallic structures. Bolt joining effectiveness depends critically on the quality of the holes. The quality of machined holes in GFRP is strongly dependent on the appropriate choice of the cutting parameters. The main purpose of the present study is to assess the influence of drilling and milling machining parameters on the hole making process in woven laminated GFRP material. A statistical approach is used to understand the effects of the control parameters on the response variables. Analysis of variance (ANOVA) was performed to isolate the effects of the parameters affecting hole making in the two types of cutting processes. The results showed that the milling process is more suitable than the drilling process at a high cutting speed and a low feed rate when cutting quality (minimum surface roughness, minimum difference between upper and lower diameter) is of critical importance in the manufacturing industry, especially for precision assembly operations.

  20. Design of high speed and low offset dynamic latch comparator in 0.18 µm CMOS process.

    Science.gov (United States)

    Rahman, Labonnah Farzana; Reaz, Mamun Bin Ibne; Yin, Chia Chieu; Ali, Mohammad Alauddin Mohammad; Marufuzzaman, Mohammad

    2014-01-01

    The cross-coupled circuit mechanism based dynamic latch comparator is presented in this research. The comparator is designed using differential input stages with regenerative S-R latch to achieve lower offset, lower power, higher speed and higher resolution. In order to decrease circuit complexity, a comparator should maintain power, speed, resolution and offset-voltage properly. Simulations show that this novel dynamic latch comparator designed in 0.18 µm CMOS technology achieves 3.44 mV resolution with 8 bit precision at a frequency of 50 MHz while dissipating 158.5 µW from 1.8 V supply and 88.05 µA average current. Moreover, the proposed design propagates as fast as 4.2 ns with energy efficiency of 0.7 fJ/conversion-step. Additionally, the core circuit layout only occupies 0.008 mm².

  1. A 10 Gs/s latched comparator with dynamic offset cancellation in 28nm FD-SOI process

    Science.gov (United States)

    Jaworski, Zbigniew

    2016-12-01

    This paper presents a high-speed, latched comparator implemented in an industrial 28 nm FD-SOI technology. A novel approach to countering mismatch is proposed. The solution trims the threshold voltage by modulating the back-gate polarization of the FD-SOI transistors. The comparator is a first step towards the design of a complete 4-bit FLASH analog-to-digital converter with a sampling frequency of 10 GHz.

  2. Comparative Study of Laboratory-Scale and Prototypic Production-Scale Fuel Fabrication Processes and Product Characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Douglas W. Marshall

    2014-10-01

    An objective of the High Temperature Gas Reactor fuel development and qualification program for the United States Department of Energy has been to qualify fuel fabricated in prototypic production-scale equipment. The quality and characteristics of the tristructural isotropic coatings on fuel kernels are influenced by the equipment scale and processing parameters. Some characteristics affecting product quality were suppressed while others became more significant in the larger equipment. Changes to the composition and method of producing resinated graphite matrix material have eliminated the use of hazardous, flammable liquids and enabled it to be procured as a vendor-supplied feedstock. A new method of overcoating TRISO particles with the resinated graphite matrix eliminates the use of hazardous, flammable liquids, produces highly spherical particles with a narrow size distribution, and attains product yields in excess of 99%. Compact fabrication processes have been scaled up and automated, with relatively minor changes in compact quality compared to the manual laboratory-scale processes. The impact on the statistical variability of the processes and the products as the equipment was scaled is discussed. The prototypic production-scale processes produce test fuels that meet fuel quality specifications.

  3. Comparative of the Tribological Performance of Hydraulic Cylinders Coated by the Process of Thermal Spray HVOF and Hard Chrome Plating

    Directory of Open Access Journals (Sweden)

    R.M. Castro

    2014-03-01

    Full Text Available Due to the necessity of obtaining a surface that is resistant to wear and oxidation, hydraulic cylinders are typically coated with hard chrome through an electroplating process. However, this type of coating shows an increase of the area that supports the sealing elements, which interferes directly with the lubrication of the rod, causing damage to the seal components and leading to oil leakage. Another disadvantage of the electroplated hard chromium process is the presence of high levels of hexavalent chromium (Cr+6), which is not only carcinogenic but also extremely contaminating to the environment. Currently, the alternative process of high-velocity thermal spraying (HVOF - High Velocity Oxy-Fuel) uses composite (metal-ceramic) materials possessing low wear rates. Research has shown that some mechanical properties are changed positively by the thermal spray process in industrial applications. It is evident that a WC-based coating has superior characteristics, such as wear resistance and a low friction coefficient, with respect to hard chrome coatings. These characteristics were analyzed by optical microscopy, roughness measurements and wear tests.

  4. METHANOL REMOVAL FROM METHANOL-WATER MIXTURE USING ACTIVATED SLUDGE, AIR STRIPPING AND ADSORPTION PROCESS: COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    SALAM K. AL-DAWERY

    2015-12-01

    Full Text Available An experimental study has been carried out in order to examine the removal of methanol from methanol-water mixtures using three different methods: activated sludge, activated carbon and air stripping. The results showed that the methanol was totally consumed by the bacteria as quickly as the feed entered the activated sludge vessel. The air stripping process has a limited ability to remove methanol due to the strong intermolecular forces between methanol and water; however, the results showed that the percentage of methanol removed using an air pressure of 0.5 bar was higher than that using an air pressure of 0.25 bar. Removal of methanol from a mixture with a methanol content of 5% using activated carbon was not successful due to the limited capacity of the activated carbon. Thus, the activated sludge process can be considered the most suitable process for the treatment of methanol-water mixtures.

  5. A Comparative Study of Temperature Optimal Control in a Solid State Fermentation Process for Edible Mushroom Growing

    Directory of Open Access Journals (Sweden)

    K. J. Gurubel

    2017-04-01

    Full Text Available In this paper, optimal control strategies for temperature trajectory determination in order to maximize thermophilic bacteria in a fed-batch solid-state fermentation reactor are proposed. This process is modeled by nonlinear differential equations, which has been previously validated experimentally with scale reactor temperature profiles. The dynamic input aeration rate of the reactor is determined to increase microorganisms growth of a selective substrate for edible mushroom cultivation. In industrial practice, the process is comprised of three thermal stages with constant input air flow and three types of microorganisms in a 150-hour lapse. Scytalidium thermophilum and actinobacteria are desired in order to obtain a final biomass composition with acceptable microorganisms concentration. The Steepest Descent gradient algorithm in continuous time and the Gradient Projection algorithm in discrete-time are used for the process optimal control. A comparison of simulation results in the presence of disturbances is presented, where the resulting temperature trajectories exhibit similar tendencies as industrial data.
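    The Gradient Projection algorithm named in the abstract can be illustrated on a toy problem. The sketch below is not the fermentation model from the paper; it applies projected steepest-descent steps to a simple quadratic objective under box constraints (all names and values are illustrative assumptions):

```python
import numpy as np

def project_box(x, lo, hi):
    """Project a point onto the box constraints [lo, hi]."""
    return np.clip(x, lo, hi)

def gradient_projection(grad, x0, lo, hi, step=0.1, iters=500):
    """Minimize a smooth function under box constraints by taking a
    steepest-descent step and projecting back onto the feasible set."""
    x = x0.astype(float)
    for _ in range(iters):
        x = project_box(x - step * grad(x), lo, hi)
    return x

# Toy objective: f(x) = ||x - target||^2, with the target partly outside the box.
target = np.array([2.0, -1.0])
grad = lambda x: 2.0 * (x - target)
x_opt = gradient_projection(grad, np.zeros(2), lo=0.0, hi=1.0)
# The unconstrained minimum (2, -1) is clipped to the box corner (1, 0).
```

    In the paper's setting the decision variable would be the aeration-rate trajectory and the box would encode actuator limits; the projection step is what distinguishes this method from plain steepest descent.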

  6. Children's Increased Emotional Egocentricity Compared to Adults Is Mediated by Age-Related Differences in Conflict Processing.

    Science.gov (United States)

    Hoffmann, Ferdinand; Singer, Tania; Steinbeis, Nikolaus

    2015-01-01

    This study investigated the cognitive mechanisms underlying age-related differences in emotional egocentricity bias (EEB) between children (aged 7-12 years, n = 30) and adults (aged 20-30 years, n = 30) using a novel paradigm of visuogustatory stimulation to induce pleasant and unpleasant emotions. Both children and adults showed an EEB, but that of children was larger. The EEB did not correlate with other measures of egocentricity. Crucially, the developmental differences in EEB were mediated by age-related changes in conflict processing and not visual perspective taking, response inhibition, or processing speed. This indicates that different types of egocentricity develop independently of one another and that the increased ability to overcome EEB can be explained by age-related improvements in conflict processing.

  7. A comparative analysis between FinFET Semi-Dynamic Flip-Flop topologies under process variations

    KAUST Repository

    Rabie, Mohamed A.

    2011-11-01

    Semi-Dynamic Flip-Flops are widely used in state-of-the-art microprocessors. Moreover, scaling down traditional CMOS technology faces major challenges, which raises the need for replacement devices. FinFET technology is a potential replacement due to its similarity to current CMOS technology in both fabrication process and theory of operation. Hence, this paper presents a study of Semi-Dynamic Flip-Flops using both independent-gate and tied-gate FinFET devices at the 32nm technology node. Furthermore, it studies the performance of these new circuits under process variations. © 2011 IEEE.

  8. Numerical Simulations of Electrokinetic Processes Comparing the Use of a Constant Voltage Difference or a Constant Current as Driving Force

    DEFF Research Database (Denmark)

    Paz-Garcia, Juan Manuel; Johannesson, Björn; Ottosen, Lisbeth M.

    materials and the prevention of the reinforced concrete corrosion. The electrical energy applied in an electrokinetic process produces electrochemical reactions at the electrodes. Different electrode processes can occur. When considering inert electrodes in aqueous solutions, the reduction of water...... are transported from the anode to the cathode through the closed electrical circuit of the cell. In the solution, the electrical current is carried by the ions, which move towards the electrode with different charge. Therefore, different authors have studied the system using the circuit theory. Assuming...

  9. PyMS: a Python toolkit for processing of gas chromatography-mass spectrometry (GC-MS) data. Application and comparative study of selected tools

    Directory of Open Access Journals (Sweden)

    O'Callaghan Sean

    2012-05-01

    Full Text Available Abstract Background Gas chromatography–mass spectrometry (GC-MS) is a technique frequently used in targeted and non-targeted measurements of metabolites. Most existing software tools for processing of raw instrument GC-MS data tightly integrate data processing methods with a graphical user interface facilitating interactive data processing. While interactive processing remains critically important in GC-MS applications, high-throughput studies increasingly dictate the need for command line tools, suitable for scripting of high-throughput, customized processing pipelines. Results PyMS comprises a library of functions for processing of instrument GC-MS data developed in Python. PyMS currently provides a complete set of GC-MS processing functions, including reading of standard data formats (ANDI-MS/NetCDF and JCAMP-DX), noise smoothing, baseline correction, peak detection, peak deconvolution, peak integration, and peak alignment by dynamic programming. A novel common ion single quantitation algorithm allows automated, accurate quantitation of GC-MS electron impact (EI) fragmentation spectra when a large number of experiments are being analyzed. PyMS implements parallel processing for by-row and by-column data processing tasks based on the Message Passing Interface (MPI), allowing processing to scale on multiple CPUs in distributed computing environments. A set of specifically designed experiments was performed in-house and used to comparatively evaluate the performance of PyMS and three widely used software packages for GC-MS data processing (AMDIS, AnalyzerPro, and XCMS). Conclusions PyMS is a novel software package for the processing of raw GC-MS data, particularly suitable for scripting of customized processing pipelines and for data processing in batch mode. PyMS provides limited graphical capabilities and can be used both for routine data processing and interactive/exploratory data analysis. In real-life GC-MS data processing scenarios PyMS performs
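    The processing steps the abstract lists (noise smoothing, peak detection) can be sketched generically. The snippet below is not the PyMS API; it is a minimal numpy illustration of two of those steps on a synthetic chromatogram, with all thresholds and signal parameters chosen for illustration:

```python
import numpy as np

def moving_average(signal, window=5):
    """Noise smoothing via a centered moving average, one of the
    pre-processing steps applied before peak detection."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

def detect_peaks(signal, threshold):
    """Naive peak detection: local maxima above an intensity threshold."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if signal[i] > threshold and signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]:
            peaks.append(i)
    return peaks

# Synthetic total-ion chromatogram: two Gaussian peaks plus noise.
t = np.linspace(0, 10, 1000)
tic = (np.exp(-((t - 3.0) ** 2) / 0.02)
       + 0.6 * np.exp(-((t - 7.0) ** 2) / 0.02))
rng = np.random.default_rng(0)
noisy = tic + rng.normal(0.0, 0.01, t.size)
peaks = detect_peaks(moving_average(noisy), threshold=0.3)
```

    A real pipeline (as in PyMS) adds baseline correction, deconvolution of co-eluting peaks, and alignment across samples, but the smooth-then-threshold structure is the same.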

  10. The Comparative Effectiveness of Cognitive Processing Therapy for Male Veterans Treated in a VHA Posttraumatic Stress Disorder Residential Rehabilitation Program

    Science.gov (United States)

    Alvarez, Jennifer; McLean, Caitlin; Harris, Alex H. S.; Rosen, Craig S.; Ruzek, Josef I.; Kimerling, Rachel

    2011-01-01

    Objective: To examine the effectiveness of group cognitive processing therapy (CPT) relative to trauma-focused group treatment as usual (TAU) in the context of a Veterans Health Administration (VHA) posttraumatic stress disorder (PTSD) residential rehabilitation program. Method: Participants were 2 cohorts of male patients in the same program…

  12. Comparing the effects of sustained and transient spatial attention on the orienting towards and the processing of electrical nociceptive stimuli

    NARCIS (Netherlands)

    Lubbe, van der Rob H.J.; Blom, Jorian H.G.; Kleine, de Elian; Bohlmeijer, Ernst T.

    2017-01-01

    We examined whether sustained vs. transient spatial attention differentially affect the processing of electrical nociceptive stimuli. Cued nociceptive stimuli of a relevant intensity (low or high) on the left or right forearm required a foot pedal press. The cued side varied trial wise in the transi

  13. The Comparative Effects of Processing Instruction and Dictogloss on the Acquisition of the English Passive by Speakers of Turkish

    Science.gov (United States)

    Uludag, Onur; Vanpatten, Bill

    2012-01-01

    The current study presents the results of an experiment investigating the effects of processing instruction (PI) and dictogloss (DG) on the acquisition of the English passive voice. Sixty speakers of Turkish studying English at university level were assigned to three groups: one receiving PI, the other receiving DG and the third serving as a…

  14. Comparing military and civilian critical thinking and information processes in operational risk management: what are the lessons?

    Science.gov (United States)

    VanVactor, Jerry D; Gill, Tony

    2010-03-01

    Business continuity has expanded into a discipline that spans most functional areas of large enterprises. Both the military and financial sectors have consistently demonstrated an aptitude to expand the boundaries of continuity planning and crisis mitigation. A comparison of both enterprises is provided to see how their respective methodologies compare. Interestingly, the similarities far outweigh the differences. The paper provides commentary related to comparative insight from risk practitioners' perspectives from within the US Army, one of the largest military organisations in the world, and the Bank of Montreal, one of Canada's leading financial institutions.

  15. Process performance and comparative metagenomic analysis during co-digestion of manure and lignocellulosic biomass for biogas production

    DEFF Research Database (Denmark)

    Tsapekos, Panagiotis; Kougias, Panagiotis; Treu, Laura

    2017-01-01

    -digestion of the silages with pig manure in continuously fed biogas reactors was examined. Metagenomic analysis for determining the microbial communities in the pig manure digestion system was performed by analysing unassembled shotgun genomic sequences. A comparative analysis allowed to identify the microbial species...

  16. COMPARATIVE STUDY ON THE CONSERVATION OF NUTRITIONAL ELEMENTS OF PEAS DURING STORAGE AND FOOD PROCESSING II. SENSORY ANALYSIS

    Directory of Open Access Journals (Sweden)

    CAMELIA VIZIREANU

    2014-05-01

    Full Text Available The Romanian market has been flooded with a wide range of frozen products or products preserved by sterilization. The quality of these products is reduced or altered by the storage modules, suppliers or customers, product type, and last but not least, by the quality of raw materials subjected to conservation. This study followed the evolution of the nutritional characteristics of three varieties of peas grown in the Galati region subjected to freezing or sterilization, and their behavior during food processing.

  17. Sterilization monitoring by biological indicators and conventional swab test of different sterilization processes used in orthodontics: A comparative study

    Directory of Open Access Journals (Sweden)

    Shantanu Khattri

    2015-01-01

    Full Text Available Introduction: Effective sterilization methods and their monitoring are necessary. Biological indicators are specific microorganisms with high resistance toward particular sterilization methods. The processes include steam autoclaving, dry heat sterilization and ethylene oxide sterilization. This article considers various methods to monitor the effectiveness of different sterilization methods used in orthodontics. Materials and Methods: The parameters for comparison were the control and experimental instruments utilized in orthodontic treatment. The efficacy of sterilization was evaluated by comparing the bacterial growth obtained in monitoring by biological indicators and by the swab test method. Results: No spore growth was found when the sterilization process was evaluated by biological indicators, in comparison to the swab test where spore growth was present. Instruments dipped in Bioclenz-G solution for 10 min showed spore growth, but no spore growth was seen in the 10 h cycle. Discussion: The results of the study verify the established effectiveness of biological indicators over the conventional swab test method in monitoring various sterilization processes used in orthodontics. Bioclenz-G solution can be used as an effective cold sterilization method. Conclusion: For evaluating the effectiveness of sterilization, biological indicators preclude the drawbacks of incomplete verification of the destruction of all vegetative forms and of inordinate delay in procurement of results, as is the case with chemical indicators and lab culture, respectively.

  18. Evaluation of Image Processing Technique for Measuring of Nitrogen and Yield in Paddy Rice and Comparing it with Standard Methods

    Directory of Open Access Journals (Sweden)

    M.R Larijani

    2016-04-01

    Full Text Available In order to use new and low cost methods in precision agriculture, nitrogen should be supplied for plants on time and precisely. For determining the required nitrogen of paddy rice in the clustering stage, a series of experiments were conducted using three different methods of: image processing, kjeldahl and chlorophyll meter set (SPAD-502, in a randomized complete block design with three replications during 2010 at Rice Research Center of Tonekabon, Iran. Four experimental treatments were different level of fertilizer (Urea with 46% nitrogen. In the clustering stage, some images from rice plants were taken vertically by a digital camera and were analyzed using image processing technique. Simultaneously the chlorophyll index of plants was measured by SPAD-502 chlorophyll meter set and the percentage amount of nitrogen was measured using of the so called kjeldahl laboratory method. The results showed that the three methods of determining nitrogen of rice plant were highly correlated. Moreover, the correlation among the three methods and crop yield were almost the same. In general, the method of image processing could have a high potential for nitrogen management in the field, while this method was low-cost, faster and also nondestructive in comparison to the other methods.

  19. Designing and Implementing an OVERFLOW Reader for ParaView and Comparing Performance Between Central Processing Units and Graphical Processing Units

    Science.gov (United States)

    Chawner, David M.; Gomez, Ray J.

    2010-01-01

    In the Applied Aerosciences and CFD branch at Johnson Space Center, computational simulations are run that face many challenges. Two of which are the ability to customize software for specialized needs and the need to run simulations as fast as possible. There are many different tools that are used for running these simulations and each one has its own pros and cons. Once these simulations are run, there needs to be software capable of visualizing the results in an appealing manner. Some of this software is called open source, meaning that anyone can edit the source code to make modifications and distribute it to all other users in a future release. This is very useful, especially in this branch where many different tools are being used. File readers can be written to load any file format into a program, to ease the bridging from one tool to another. Programming such a reader requires knowledge of the file format that is being read as well as the equations necessary to obtain the derived values after loading. When running these CFD simulations, extremely large files are being loaded and having values being calculated. These simulations usually take a few hours to complete, even on the fastest machines. Graphics processing units (GPUs) are usually used to load the graphics for computers; however, in recent years, GPUs are being used for more generic applications because of the speed of these processors. Applications run on GPUs have been known to run up to forty times faster than they would on normal central processing units (CPUs). If these CFD programs are extended to run on GPUs, the amount of time they would require to complete would be much less. This would allow more simulations to be run in the same amount of time and possibly perform more complex computations.

  20. Model selection for robust Bayesian mixture distributions

    Institute of Scientific and Technical Information of China (English)

    卿湘运; 王行愚

    2009-01-01

    Bayesian approaches to robust mixture modelling based on Student-t distributions are less sensitive to outliers, thereby preventing over-estimation of the number of mixture components. However, there are two intractable problems in the previous methods for model selection under the variational Bayesian framework: (1) The variational approach converges to a local maximum of the lower bound on the log-evidence that depends on the initial parameter values. How can the variational approach guarantee that the initial settings for different models are consistent? (2) The lower bound is sensitive to the factorized approximation forms used in the inference process. How can the variational approach guarantee that the approximation errors for different models are equivalent? In this paper, we present a model selection algorithm for robust Bayesian mixture distributions based on the deviance information criterion (DIC) proposed by Spiegelhalter et al. in 2002. Unlike the Bayesian Information Criterion (BIC), the DIC is straightforward to calculate and has been adopted in many modern applications. Inspired by the works of McGrory et al., which used DIC values for model selection tasks of finite mixture Gaussian distributions and hidden Markov models, the calculation of a DIC for the robust Bayesian mixture model is derived. The proposed algorithm can learn model parameters and perform model selection simultaneously, which avoids choosing an optimum among a large set of candidate models. A method to initialize the parameters of the algorithm is provided. Experimental results on simulated data and Old Faithful Geyser data containing a large number of outliers show that the algorithm can learn the parameters of mixture components robustly and the number of components precisely.
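    The DIC quantity this abstract builds on is straightforward to compute from posterior samples: DIC = mean deviance + p_D, where p_D (the effective number of parameters) is the mean deviance minus the deviance at the posterior mean. The sketch below evaluates it for a one-parameter Gaussian model rather than a mixture; the data and the stand-in posterior sampler are illustrative assumptions, not the paper's variational algorithm:

```python
import numpy as np

def deviance(data, mu, sigma):
    """Deviance: -2 times the Gaussian log-likelihood."""
    n = data.size
    ll = (-0.5 * n * np.log(2 * np.pi * sigma ** 2)
          - np.sum((data - mu) ** 2) / (2 * sigma ** 2))
    return -2.0 * ll

def dic(data, mu_samples, sigma):
    """DIC = mean deviance + p_D, with p_D = mean deviance minus the
    deviance evaluated at the posterior mean of the parameter."""
    d_bar = np.mean([deviance(data, m, sigma) for m in mu_samples])
    d_hat = deviance(data, np.mean(mu_samples), sigma)
    p_d = d_bar - d_hat
    return d_bar + p_d, p_d

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, 200)
# Stand-in for posterior samples of the mean (conjugate posterior, known variance).
mu_samples = rng.normal(data.mean(), 1.0 / np.sqrt(200), 5000)
dic_value, p_d = dic(data, mu_samples, sigma=1.0)
# With one free parameter, p_d should come out close to 1.
```

    For a mixture model the deviance is evaluated under the full mixture likelihood, but the two-term structure of the criterion is unchanged, which is why it drops out of the variational posterior with little extra work.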

  1. Public Key Infrastructure (PKI) And Virtual Private Network (VPN) Compared Using An Utility Function And The Analytic Hierarchy Process (AHP)

    OpenAIRE

    Wagner, Edward Dishman

    2002-01-01

    This paper compares two technologies, Public Key Infrastructure (PKI) and Virtual Private Network (VPN). PKI and VPN are two approaches currently in use to resolve the problem of securing data in computer networks. Making this comparison difficult is the lack of available data. Additionally, an organization will make their decision based on circumstances unique to their information security needs. Therefore, this paper will illustrate a method using a utility function and the Analytic Hie...
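    The AHP step described above reduces to extracting priority weights from a pairwise-comparison matrix as the normalized principal eigenvector. A minimal sketch, assuming a hypothetical 3-criterion comparison (the matrix values are illustrative, not taken from the paper):

```python
import numpy as np

def ahp_weights(pairwise):
    """AHP priority weights: the normalized principal eigenvector of
    the pairwise-comparison matrix."""
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    return w / w.sum()

def consistency_ratio(pairwise):
    """Saaty consistency ratio CI/RI for a 3x3 matrix (random index RI = 0.58)."""
    n = pairwise.shape[0]
    lam = np.max(np.linalg.eigvals(pairwise).real)
    ci = (lam - n) / (n - 1)
    return ci / 0.58

# Hypothetical criteria: security vs. cost vs. ease of deployment.
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 0.5, 1.0]])
w = ahp_weights(A)  # weights sum to 1; security dominates here
```

    A consistency ratio below about 0.1 is conventionally taken to mean the judgments are coherent enough to use; the alternative (PKI vs. VPN) scores would then be aggregated with these criterion weights.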

  2. Comparative and integrative environmental assessment of advanced wastewater treatment processes based on an average removal of pharmaceuticals.

    Science.gov (United States)

    Igos, Elorri; Benetto, Enrico; Venditti, Silvia; Köhler, Christian; Cornelissen, Alex

    2013-01-01

    Pharmaceuticals are normally barely removed by conventional wastewater treatments. Advanced technologies, applied as a post-treatment, could prevent these pollutants from reaching the environment and could be included in a centralized treatment plant or, alternatively, at the primary point source, e.g. hospitals. In this study, the environmental impacts of different options, as a function of several advanced treatments as well as the centralized/decentralized implementation options, have been evaluated using Life Cycle Assessment (LCA) methodology. In previous publications, the characterization of the toxicity of pharmaceuticals within LCA has suffered from high uncertainties. In our study, LCA was therefore only used to quantify the generated impacts (electricity, chemicals, etc.) of different treatment scenarios. These impacts are then weighted by the average removal rate of pharmaceuticals using a new Eco-efficiency Indicator, EFI. This new way of comparing the scenarios shows significant advantages of upgrading a centralized plant with ozonation as the post-treatment. The decentralized treatment option reveals no significant improvement in the avoided environmental impact, due to the comparatively small pollutant load coming from the hospital and the uncertainties in the average removal of the decentralized scenarios. When comparing the post-treatment technologies, UV radiation has a lower performance than both ozonation and activated carbon adsorption.

  3. A comparative study on the clinical decision-making processes of nurse practitioners vs. medical doctors using scenarios in a secondary care environment.

    Science.gov (United States)

    Thompson, Stephen; Moorley, Calvin; Barratt, Julian

    2017-05-01

    To investigate the decision-making skills of secondary care nurse practitioners compared with those of medical doctors. A literature review was conducted, searching for articles published from 1990 - 2012. The review found that nurse practitioners are key to the modernization of the National Health Service. Studies have shown that compared with doctors, nurse practitioners can be efficient and cost-effective in consultations. Qualitative research design. The information processing theory and think aloud approach were used to understand the cognitive processes of 10 participants (5 doctors and 5 nurse practitioners). One nurse practitioner was paired with one doctor from the same speciality and they were compared using a structured scenario-based interview. To ensure that all critical and relevant cues were covered by the individual participating in the scenario, a reference model was used to measure the degree of successful diagnosis, management and treatment. This study was conducted from May 2012 - January 2013. The data were processed for 5 months, from July to November 2012. The two groups of practitioners differed in the number of cue acquisitions obtained in the scenarios. In our study, nurse practitioners took 3 minutes longer to complete the scenarios. This study suggests that nurse practitioner consultations are comparable to those of medical doctors in a secondary care environment in terms of correct diagnoses and therapeutic treatments. The information processing theory highlighted that both groups of professionals had similar models for decision-making processes. © 2016 John Wiley & Sons Ltd.

  4. Close-coupling calculations of low-energy inelastic and elastic processes in $^4$He collisions with H$_2$: A comparative study of two potential energy surfaces

    CERN Document Server

    Lee, T G; Martin, R; Clark, T K; Forrey, R C; Balakrishnan, N; Stancil, P C; Schultz, D R; Dalgarno, A; Ferland, G J

    2004-01-01

    The two most recently published potential energy surfaces (PESs) for the HeH$_2$ complex, the so-called MR (Muchnick and Russek) and BMP (Boothroyd, Martin, and Peterson) surfaces, are quantitatively evaluated and compared through the investigation of atom-diatom collision processes. The BMP surface is expected to be an improvement, approaching chemical accuracy, over all conformations of the PES compared to that of the MR surface. We found significant differences in inelastic rovibrational cross sections computed on the two surfaces for processes dominated by large changes in target rotational angular momentum. In particular, the H$_2$($\

  5. Core-scale solute transport model selection using Monte Carlo analysis

    Science.gov (United States)

    Malama, Bwalya; Kuhlman, Kristopher L.; James, Scott C.

    2013-06-01

    Model applicability to core-scale solute transport is evaluated using breakthrough data from column experiments conducted with the conservative tracers tritium (3H) and sodium-22 (22Na), and the retarding solute uranium-232 (232U). The three models considered are single-porosity, double-porosity with single-rate mobile-immobile mass exchange, and the multirate model, a deterministic model that admits the statistics of a random mobile-immobile mass-exchange rate coefficient. The experiments were conducted on intact Culebra Dolomite core samples. Previously, the data were analyzed using single-porosity and double-porosity models, although the Culebra Dolomite is known to possess multiple types and scales of porosity and to exhibit multirate mobile-immobile-domain mass transfer characteristics at field scale. The data are reanalyzed here, and null-space Monte Carlo analysis is used to facilitate objective model selection. Prediction (or residual) bias is adopted as a measure of model structural error. The analysis clearly shows the single-porosity and double-porosity models are structurally deficient, yielding late-time residual bias that grows with time. The multirate model, on the other hand, yields unbiased predictions consistent with the late-time -5/2 slope diagnostic of multirate mass transfer. The analysis indicates the multirate model is better suited to describing core-scale solute breakthrough in the Culebra Dolomite than the other two models.
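    The late-time -5/2 log-log slope diagnostic of multirate mass transfer mentioned in the abstract can be checked numerically. A minimal sketch with synthetic breakthrough data (a pure power-law tail standing in for measured late-time concentrations, which would be noisy and only approximately power-law):

```python
import numpy as np

# Synthetic late-time breakthrough data with t^(-5/2) tailing, standing in
# for measured concentrations.
t = np.logspace(1, 3, 50)            # late times
c = 2.0 * t ** (-2.5)                # concentration tail ~ t^(-5/2)

# Slope of log(c) versus log(t) by linear least squares; a value near -5/2
# is the diagnostic of multirate mobile-immobile mass transfer.
slope, _ = np.polyfit(np.log(t), np.log(c), 1)
print(round(slope, 2))               # -> -2.5
```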

  6. Model selection as a tool for phylogeographic inference: an example from the willow Salix melanopsis.

    Science.gov (United States)

    Carstens, Bryan C; Brennan, Reid S; Chua, Vivien; Duffie, Caroline V; Harvey, Michael G; Koch, Rachel A; McMahan, Caleb D; Nelson, Bradley J; Newman, Catherine E; Satler, Jordan D; Seeholzer, Glenn; Posbic, Karine; Tank, David C; Sullivan, Jack

    2013-08-01

    Phylogeographic inference has typically relied on analyses of data from one or a few genes to provide estimates of demography and population histories. While much has been learned from these studies, all phylogeographic analysis is conditioned on the data, and thus, inferences derived from data that represent a small sample of the genome are unavoidably tenuous. Here, we demonstrate one approach for moving beyond classic phylogeographic research. We use sequence capture probes and Illumina sequencing to generate data from >400 loci in order to infer the phylogeographic history of Salix melanopsis, a riparian willow with a disjunct distribution between the coastal and inland Pacific Northwest. We evaluate a priori phylogeographic hypotheses using coalescent models for parameter estimation, and the results support earlier findings that identified post-Pleistocene dispersal as the cause of the disjunction in S. melanopsis. We also conduct a series of model selection exercises using IMa2, Migrate-n and ∂a∂i. The resulting ranking of models indicates that refugial dynamics were complex, with multiple inland regions serving as sources for postglacial colonization. Our results demonstrate that new sources of data and new approaches to data analysis can rejuvenate phylogeographic research by allowing for the identification of complex models that enable researchers to both identify and estimate the most relevant parameters for a given system.

  7. Model selection and assessment for multi-species occupancy models

    Science.gov (United States)

    Broms, Kristin M.; Hooten, Mevin B.; Fitzpatrick, Ryan M.

    2016-01-01

    While multi-species occupancy models (MSOMs) are emerging as a popular method for analyzing biodiversity data, formal checking and validation approaches for this class of models have lagged behind. Concurrent with the rise in application of MSOMs among ecologists, a quiet regime shift is occurring in Bayesian statistics where predictive model comparison approaches are experiencing a resurgence. Unlike single-species occupancy models that use integrated likelihoods, MSOMs are usually couched in a Bayesian framework and contain multiple levels. Standard model checking and selection methods are often unreliable in this setting and there is only limited guidance in the ecological literature for this class of models. We examined several different contemporary Bayesian hierarchical approaches for checking and validating MSOMs and applied these methods to a freshwater aquatic study system in Colorado, USA, to better understand the diversity and distributions of plains fishes. Our findings indicated distinct differences among model selection approaches, with cross-validation techniques performing the best in terms of prediction.

  8. Deconvolution of Complex 1D NMR Spectra Using Objective Model Selection.

    Directory of Open Access Journals (Sweden)

    Travis S Hughes

    Full Text Available Fluorine (19F) NMR has emerged as a useful tool for the characterization of slow dynamics in 19F-labeled proteins. One-dimensional (1D) 19F NMR spectra of proteins can be broad, irregular and complex, owing to exchange of probe nuclei between distinct electrostatic environments, and therefore cannot be deconvoluted and analyzed in an objective way using currently available software. We have developed a Python-based deconvolution program, decon1d, which uses the Bayesian information criterion (BIC) to objectively determine which model (number of peaks) would most likely produce the experimentally obtained data. The method also allows fitting of intermediate-exchange spectra, which is not supported by current software in the absence of a specific kinetic model. In current methods, determining the deconvolution model best supported by the data is done manually through comparison of residual error values, which can be time-consuming and requires model selection by the user. In contrast, the BIC method used by decon1d provides a quantitative method for model comparison that penalizes model complexity, helping to prevent over-fitting of the data, and allows identification of the most parsimonious model. The decon1d program is freely available as a downloadable Python script at the project website (https://github.com/hughests/decon1d/).
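    The BIC-based choice of peak count that decon1d automates can be illustrated in a few lines. The sketch below simplifies the problem by fixing candidate peak positions and widths so that each candidate model is linear in the unknown amplitudes (the actual program fits full lineshape parameters); the spectrum and peak parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1D "spectrum": two well-separated Gaussian peaks plus noise.
x = np.linspace(-10, 10, 400)
gauss = lambda x, mu, sig: np.exp(-0.5 * ((x - mu) / sig) ** 2)
y = 1.0 * gauss(x, -3.0, 1.0) + 0.6 * gauss(x, 2.5, 1.2)
y += rng.normal(0.0, 0.01, x.size)

# Candidate models: the first k of these fixed peak positions/widths, with
# amplitudes solved by linear least squares.
candidates = [(-3.0, 1.0), (2.5, 1.2), (6.0, 1.0)]

def bic_for_k(k: int) -> float:
    A = np.column_stack([gauss(x, mu, sig) for mu, sig in candidates[:k]])
    amps, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = float(np.sum((y - A @ amps) ** 2))
    n = x.size
    # BIC (up to an additive constant): fit quality plus complexity penalty.
    return n * np.log(rss / n) + k * np.log(n)

best_k = min(range(1, 4), key=bic_for_k)
print(best_k)   # -> 2 (the two-peak model minimizes BIC)
```

    The third candidate peak improves the residual only marginally, so the `k ln(n)` penalty rejects it; this is the over-fitting protection the abstract refers to.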

  9. Model-independent plot of dynamic PET data facilitates data interpretation and model selection.

    Science.gov (United States)

    Munk, Ole Lajord

    2012-02-21

    When testing new PET radiotracers or new applications of existing tracers, the blood-tissue exchange and the metabolism need to be examined. However, conventional plots of measured time-activity curves from dynamic PET do not reveal the inherent kinetic information. A novel model-independent volume-influx plot (vi-plot) was developed and validated. The new vi-plot shows the time course of the instantaneous distribution volume and the instantaneous influx rate. The vi-plot visualises physiological information that facilitates model selection, and it reveals when a quasi-steady state is reached, which is a prerequisite for the use of the graphical analyses of Logan and Gjedde-Patlak. Both axes of the vi-plot have a direct physiological interpretation, and the plot shows kinetic parameters in close agreement with estimates obtained by non-linear kinetic modelling. The vi-plot is equally useful for analyses of PET data based on a plasma input function or a reference-region input function. The vi-plot is a model-independent and informative plot for data exploration that facilitates the selection of an appropriate method for data analysis.
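    The Gjedde-Patlak graphical analysis that the vi-plot complements can be sketched on synthetic data. The vi-plot itself is not fully specified in this record, so the sketch below shows only the classical Patlak plot; the input function and the Ki/V0 values are invented for illustration:

```python
import numpy as np

# Gjedde-Patlak graphical analysis on synthetic PET data with irreversible
# uptake: Ct(t) = Ki * integral(Cp) + V0 * Cp(t).
Ki, V0 = 0.05, 0.3                       # assumed influx rate and initial volume
t = np.linspace(0.1, 60.0, 120)          # minutes
Cp = 10.0 * np.exp(-0.05 * t)            # plasma input function (arbitrary)
intCp = np.concatenate(([0.0],
    np.cumsum(0.5 * (Cp[1:] + Cp[:-1]) * np.diff(t))))   # trapezoidal integral
Ct = Ki * intCp + V0 * Cp                # tissue time-activity curve

# Patlak coordinates: x = integral(Cp)/Cp ("stretched time"), y = Ct/Cp.
# Once a quasi-steady state is reached, the plot is linear with slope Ki.
x, y = intCp / Cp, Ct / Cp
slope, intercept = np.polyfit(x, y, 1)
print(round(slope, 4), round(intercept, 4))   # -> 0.05 0.3
```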

  10. The evolutionary forces maintaining a wild polymorphism of Littorina saxatilis: model selection by computer simulations.

    Science.gov (United States)

    Pérez-Figueroa, A; Cruz, F; Carvajal-Rodríguez, A; Rolán-Alvarez, E; Caballero, A

    2005-01-01

    Two rocky shore ecotypes of Littorina saxatilis from north-west Spain live at different shore levels and habitats and have developed an incomplete reproductive isolation through size assortative mating. The system is regarded as an example of sympatric ecological speciation. Several experiments have indicated that different evolutionary forces (migration, assortative mating and habitat-dependent selection) play a role in maintaining the polymorphism. However, an assessment of the combined contributions of these forces supporting the observed pattern in the wild is absent. A model selection procedure using computer simulations was used to investigate the contribution of the different evolutionary forces towards the maintenance of the polymorphism. The agreement between alternative models and experimental estimates for a number of parameters was quantified by a least square method. The results of the analysis show that the fittest evolutionary model for the observed polymorphism is characterized by a high gene flow, intermediate-high reproductive isolation between ecotypes, and a moderate to strong selection against the nonresident ecotypes on each shore level. In addition, a substantial number of additive loci contributing to the selected trait and a narrow hybrid definition with respect to the phenotype are scenarios that better explain the polymorphism, whereas the ecotype fitnesses at the mid-shore, the level of phenotypic plasticity, and environmental effects are not key parameters.
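    The model selection procedure described above, ranking simulated models by least-squares agreement with experimental estimates, can be sketched generically. All parameter names and numbers below are invented for illustration, not the study's actual estimates:

```python
# Rank candidate evolutionary models by least-squares agreement between
# simulated summary statistics and experimental estimates (invented values).
observed = {"FIS": 0.20, "cline_width": 12.0, "assortment": 0.70}

# Simulated statistics under three hypothetical model settings.
models = {
    "high_migration_strong_selection": {"FIS": 0.19, "cline_width": 11.0, "assortment": 0.72},
    "low_migration_weak_selection":    {"FIS": 0.05, "cline_width": 30.0, "assortment": 0.40},
    "no_selection":                    {"FIS": 0.00, "cline_width": 45.0, "assortment": 0.10},
}

def sse(pred: dict) -> float:
    # Sum of squared *relative* deviations, so parameters on different
    # scales contribute comparably.
    return sum(((pred[k] - observed[k]) / observed[k]) ** 2 for k in observed)

best = min(models, key=lambda m: sse(models[m]))
print(best)   # -> high_migration_strong_selection
```

    The fittest model is simply the one minimizing the squared deviation; how to weight parameters of different scales (here, relative deviations) is itself a modelling choice.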

  11. Bayesian model selection for testing the no-hair theorem with black hole ringdowns

    CERN Document Server

    Gossan, S; Sathyaprakash, B S

    2011-01-01

    General relativity predicts that a black hole that results from the merger of two compact stars (either black holes or neutron stars) is initially highly deformed but soon settles down to a quiescent state by emitting a superposition of quasi-normal modes (QNMs). The QNMs are damped sinusoids with characteristic frequencies and decay times that depend only on the mass and spin of the black hole and no other parameter - a statement of the no-hair theorem. In this paper we have examined the extent to which QNMs could be used to test the no-hair theorem with future ground- and space-based gravitational-wave detectors. We model departures from general relativity (GR) by introducing extra parameters which change the mode frequencies or decay times from their general relativistic values. With the aid of numerical simulations and Bayesian model selection, we assess the extent to which the presence of such a parameter could be inferred, and its value estimated. We find that it is harder to decipher the departure of d...

  12. Finding the right balance between groundwater model complexity and experimental effort via Bayesian model selection

    Science.gov (United States)

    Schöniger, Anneli; Illman, Walter A.; Wöhling, Thomas; Nowak, Wolfgang

    2015-12-01

    Groundwater modelers face the challenge of how to assign representative parameter values to the studied aquifer. Several approaches are available to parameterize spatial heterogeneity in aquifer parameters. They differ in their conceptualization and complexity, ranging from homogeneous models to heterogeneous random fields. While it is common practice to invest more effort into data collection for models with a finer resolution of heterogeneities, there is little guidance on how much data is required to justify a given level of model complexity. In this study, we propose to use concepts related to Bayesian model selection to identify this balance. We demonstrate our approach on the characterization of a heterogeneous aquifer via hydraulic tomography in a sandbox experiment (Illman et al., 2010). We consider four increasingly complex parameterizations of hydraulic conductivity: (1) effective homogeneous medium, (2) geology-based zonation, (3) interpolation by pilot points, and (4) geostatistical random fields. First, we investigate the shift in justified complexity with increasing amounts of available data by constructing a model confusion matrix. This matrix indicates the maximum level of complexity that can be justified given a specific experimental setup. Second, we determine which parameterization is most adequate given the observed drawdown data. Third, we test how the different parameterizations perform in a validation setup. The results of our test case indicate that aquifer characterization via hydraulic tomography does not necessarily require (or justify) a geostatistical description. Instead, a zonation-based model might be a more robust choice, but only if the zonation is geologically adequate.
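    The central idea, that the justifiable model complexity shifts with the amount of available data, can be illustrated with a BIC approximation to Bayesian model weights. The sketch below substitutes generic polynomial models for the paper's aquifer parameterizations and uses synthetic data; it is an analogy, not the study's method:

```python
import numpy as np

# BIC-approximated posterior model weights for polynomial models of
# increasing complexity. With few observations a simpler model can be the
# most defensible choice; with more data, the weight shifts toward the
# more complex (true) model.
def model_weights(n: int) -> np.ndarray:
    rng = np.random.default_rng(1)                     # fixed seed: reproducible
    x = np.linspace(0.0, 1.0, n)
    y = 1.0 + 0.5 * x + 2.0 * x ** 2 + rng.normal(0.0, 0.3, n)  # truth: quadratic
    bics = []
    for degree in (0, 1, 2, 3):                        # candidate complexities
        coef = np.polyfit(x, y, degree)
        rss = float(np.sum((y - np.polyval(coef, x)) ** 2))
        k = degree + 1                                 # number of parameters
        bics.append(n * np.log(rss / n) + k * np.log(n))
    b = np.asarray(bics)
    w = np.exp(-0.5 * (b - b.min()))                   # relative evidence
    return w / w.sum()                                 # approximate model weights

print(model_weights(10).round(2))    # weights with only 10 observations
print(model_weights(500).round(2))   # more data: the quadratic term becomes identifiable
```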

  13. Comparative toxicities of oxygen, ozone, chlorine dioxide, and chlorine bleaching filtrates - microtox toxicities of raw and processed filtrates

    Energy Technology Data Exchange (ETDEWEB)

    Ard, T.A.; McDonough, T.J.

    1995-12-31

    It has been claimed that effluents from the bleaching of kraft pulp with chlorine and its compounds have deleterious effects on the aquatic environment. It has been further suggested that bleaching without the use of chlorine or its compounds will produce innocuous effluents. To obtain information on the validity of these claims, we have conducted a laboratory study of the toxicity of filtrates from chlorine-based and nonchlorine bleaching processes. We have also examined two related issues. The first is whether any toxicants generated during bleaching are rendered harmless (by neutralization, storage, and biological treatment) before being discharged to the environment. The second is whether any toxicity observed in mill effluents actually originates in the bleaching process, as opposed to being due to raw material components or compounds formed during the pulping step that precedes bleaching. Several conclusions were drawn from this study. (1) There is a background level of toxicity which originates in the oxygen stage, in process steps prior to bleaching, or in the wood raw material. It is decreased by neutralization and storage, but residual toxicity may still be detected after two weeks. (2) If the sum of the first and second stage toxicities is taken as an indicator of overall toxicity, the untreated filtrates may be ranked as follows: Control (Background) > D(EO) > Z(EO) > C(EO). However, these toxicities are of no importance in regard to environmental effects because of their ephemeral nature and the likelihood of their being reduced or eliminated prior to effluent discharge. Evidence for this statement is the ease with which all except the C(EO) were detoxified by neutralization and storage. (3) After neutralization and storage for two weeks at room temperature the ranking of toxicities becomes: C(EO) > D(EO) > Z(EO) > Background. The last three are similar in magnitude.

  14. The effects of photobiomodulation and low-amplitude high-frequency vibration on bone healing process: a comparative study.

    Science.gov (United States)

    Rajaei Jafarabadi, M; Rouhi, G; Kaka, G; Sadraie, S H; Arum, J

    2016-12-01

    This study aimed at investigating the effects of photobiomodulation (PBM) and low-amplitude high-frequency (LAHF) whole body mechanical vibration on the bone fracture healing process when metallic plates are implanted in rats' femurs. Forty male rats weighing between 250 and 350 g, 12 weeks old, were employed in this study. A transverse critical size defect (CSD) was made in their right femurs, which were fixed by stainless steel plates. After the surgery, the rats were divided equally into four groups: a low-level laser therapy group (GaAlAs laser, 830 nm, 40 mW, 4 J/cm², 0.35 cm beam diameter, LLLT), a whole body vibration group (60 Hz, 0.1 mm amplitude, 1.5 g, WBV), a combined laser and vibration group (LV), and the control group (C). Each group was divided into two subgroups based on sacrifice dates. The rats were sacrificed 3 and 6 weeks after the surgery to extract their right femurs for radiography and biomechanical and histological analyses, and the results were analyzed using standard statistical methods. Radiographic analyses showed greater callus formation in the LLLT and WBV groups than in the control group at both 3 and 6 weeks. Low-level laser therapy and low-amplitude high-frequency WBV thus both had a positive impact on the bone healing process for critical size defects in the presence of a stainless steel implant. Their combination (LV), however, interestingly did not accelerate the healing of the fractured bone.

  15. Comparative analyses of different variants of standard ground for automatic control systems of technical processes of oil and gas production

    Science.gov (United States)

    Gromakov, E. I.; Gazizov, A. T.; Lukin, V. P.; Chimrov, A. V.

    2017-01-01

    The paper analyses the efficiency (interference resistance) of standard TT, TN and IT networks in the control links of automatic control systems (ACS) for technical processes (TP) of oil and gas production. Electromagnetic compatibility (EMC) is the standard term used to describe interference in grounding circuits. Improved EMC of an ACS TP can significantly reduce the risks and costs of equipment malfunctions that could have serious consequences. It is shown that an IT network is the best type of grounding for protecting an ACS TP under real-life conditions, as it allows the interference to be reduced to the level stated in the standards of oil and gas companies.

  16. COMPARATIVE ANALYSIS OF THE DIPYRONE DEGRADATION PROCESS USING PHOTO-FENTON WITH UV-C LIGHT AND SOLAR RADIATION

    Directory of Open Access Journals (Sweden)

    Daniella Carla Napoleão

    2015-01-01

    Full Text Available The contamination of water bodies is a major concern for scientists in different parts of the world. Domestic and industrial activities cause the daily discharge of various types of pollutants, which are in most cases resistant to conventional water treatment. Among these contaminants, drugs are especially noteworthy: an estimated 50% to 90% are discarded without treatment. The concern with these substances is their adverse effects on human and animal health, especially in aquatic environments. Advanced oxidation processes (AOPs) have been studied and applied as an efficient alternative treatment for the degradation of different pollutants, since they generate hydroxyl radicals, which are highly reactive and only somewhat selective. This study evaluated the efficiency of the photo-Fenton process using UV-C radiation and sunlight for the degradation of the drug dipyrone in aqueous solution contaminated with the active ingredient at a concentration of 20 mg.L-1. Assays were performed on 50 mL aliquots of the solution following a 2³ factorial design with a central point, the variables studied being the addition of H2O2, the addition of FeSO4.7H2O, and time. Detection and quantification of dipyrone before and after the AOP were performed by high-performance liquid chromatography (HPLC), and it was verified that about 100% degradation of the compound was achieved.
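    The 2³ factorial design with a central point used for the assays can be enumerated programmatically. The factor names follow the abstract; the -1/0/+1 coding is the usual convention, with actual dose and time levels left unspecified:

```python
from itertools import product

# A 2^3 factorial design with a center point, as used for the photo-Fenton
# assays. Factors (coded -1/+1): H2O2 dose, FeSO4.7H2O dose, and time;
# the center point (0, 0, 0) allows a check for curvature.
factors = ("H2O2", "FeSO4_7H2O", "time")
runs = [dict(zip(factors, levels)) for levels in product((-1, 1), repeat=3)]
runs.append({f: 0 for f in factors})        # center point

print(len(runs))   # -> 9 (8 factorial corners + 1 center point)
```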

  17. Comparative analysis of cell populations involved in the proliferative and inflammatory processes in diffuse and localised pigmented villonodular synovitis.

    Science.gov (United States)

    Berger, I; Ehemann, V; Helmchen, B; Penzel, R; Weckauf, H

    2004-07-01

    The aim of the present study was a comparative quantitative evaluation of the cell populations involved in the proliferative and inflammatory compartments of both localised and diffuse pigmented villonodular synovitis (PVNS). 15 cases each of localised and diffuse PVNS were examined by flow cytometry, immunohistochemistry, double immunofluorescence and confocal microscopy, with quantitative evaluation of CD3-, CD4-, CD8-, CD20-, CD57-, CD55-, CD68-, CD163- and h4Ph-positive (+) cells. The proliferative compartment of localised and diffuse PVNS was mainly composed of double-positive CD68+/h4Ph+ (CD163+/CD55+) synoviocytes. The number of synoviocytes double-positive for macrophage and fibroblast markers was significantly higher in diffuse than in localised PVNS. The accompanying inflammatory infiltrate showed a predominance of cytotoxic cells (CD8+, CD57+), whereby the number of CD3+ and CD20+ cells was significantly higher in localised PVNS and the number of CD57+ NK cells significantly higher in diffuse PVNS. The proliferating macrophage-like synovial cells and the cytotoxic lymphocytes could contribute to the aggressive behaviour of localised and diffuse PVNS. Moreover, given the quantitative differences in cell composition between diffuse and localised PVNS and their different clinical behaviour, further studies should continue to analyse localised and diffuse PVNS separately.

  18. Effect of processing parameters on fouling resistances during microfiltration of red plum and watermelon juices: a comparative study.

    Science.gov (United States)

    Nourbakhsh, Himan; Alemi, Azam; Emam-Djomeh, Zahra; Mirsaeedghazi, Hossein

    2014-01-01

    This study evaluated the total (Rt), reversible (Rrev), irreversible (Rirr), and cake (Rc) fouling resistances during microfiltration of watermelon juice (a juice with colloid particles) and red plum juice (a juice without colloid particles). Results showed that the total resistance decreased by about 45% when the feed velocity was increased during clarification of red plum juice, due to a change in the cake resistance. Increasing the feed temperature from 20 to 30°C decreased the total fouling resistance by about 9%, due to decreases in the irreversible and reversible fouling resistances. Also, the mixed cellulose ester (MCE) membrane (hydrophilic) had a lower cake resistance than the polyvinylidene fluoride (PVDF) membrane (hydrophobic). Examination of the microfiltration of watermelon juice showed that Rt decreased by about 54% when the feed temperature was increased from 20 to 50°C, partially due to a 78% reduction in the reversible fouling resistance. Increasing the transmembrane pressure from 0.5 to 2.5 bar greatly increased the total fouling resistance. The feed velocity affected the fouling resistances differently for watermelon juice than for red plum juice: in contrast with red plum juice, increasing the feed velocity for watermelon juice increased the cake resistance.

  19. Sulfur and carbon geochemistry of the Santa Elena peridotites: Comparing oceanic and continental processes during peridotite alteration

    Science.gov (United States)

    Schwarzenbach, Esther M.; Gill, Benjamin C.; Gazel, Esteban; Madrigal, Pilar

    2016-05-01

    Ultramafic rocks exposed on the continent serve as a window into oceanic and continental processes of water-peridotite interaction, so-called serpentinization. In both environments there are active carbon and sulfur cycles that contain abiogenic and biogenic processes, which are eventually imprinted in the geochemical signatures of the basement rocks and the calcite and magnesite deposits associated with fluids that issue from these systems. Here, we present the carbon and sulfur geochemistry of ultramafic rocks and carbonate deposits from the Santa Elena ophiolite in Costa Rica. The aim of this study is to leverage the geochemistry of the ultramafic sequence and associated deposits to distinguish between processes that were dominant during ocean floor alteration and those dominant during low-temperature, continental water-peridotite interaction. The peridotites are variably serpentinized, with total sulfur concentrations up to 877 ppm that are typically dominated by sulfide over sulfate. With the exception of one sample, the ultramafic rocks are characterized by positive δ34Ssulfide (up to +23.1‰) and δ34Ssulfate values (up to +35.0‰). Carbon contents in the peridotites are low and are isotopically distinct from typical oceanic serpentinites. In particular, δ13C of the inorganic carbon suggests that the carbon is not derived from seawater, but rather is the product of the interaction of meteoric water with the ultramafic rocks. In contrast, the sulfur isotope data from sulfide minerals in the peridotites preserve evidence for interaction with a hydrothermal fluid. Specifically, they indicate closed-system abiogenic sulfate reduction, suggesting that oceanic serpentinization occurred with limited input of seawater. Overall, the geochemical signatures preserve evidence for both oceanic and continental water-rock interaction, with the majority of carbon (and possibly sulfate) being incorporated during continental water-rock interaction. Furthermore, there is

  20. The decolorization and mineralization of Acid Orange 6 azo dye in aqueous solution by advanced oxidation processes: A comparative study

    Energy Technology Data Exchange (ETDEWEB)

    Hsing, H.-J. [Graduate Institute of Environmental Engineering, National Taiwan University, 71 Chou-Shan Road, Taipei 106, Taiwan (China); Chiang, P.-C. [Graduate Institute of Environmental Engineering, National Taiwan University, 71 Chou-Shan Road, Taipei 106, Taiwan (China)]. E-mail: pcchiang@ntu.edu.tw; Chang, E.-E. [Department of Biochemistry, Taipei Medical University, 25 Wu-Shin Street, Taipei 106, Taiwan (China); Chen, M.-Y. [Graduate Institute of Environmental Engineering, National Taiwan University, 71 Chou-Shan Road, Taipei 106, Taiwan (China)

    2007-03-06

    The comparison of different advanced oxidation processes (AOPs), i.e. ultraviolet (UV)/TiO2, O3, O3/UV, O3/UV/TiO2, Fenton and electrocoagulation (EC), is of interest to determine the best removal performance for the destruction of the target compound in an Acid Orange 6 (AO6) solution and to explore the most efficient experimental conditions; the results may also provide baseline information on combining different AOPs to treat industrial wastewater. The following conclusions can be drawn: (1) regarding the effects of individual and combined ozonation and photocatalytic UV irradiation, both the O3/UV and O3/UV/TiO2 processes exhibit remarkable TOC removal capability, achieving 65% removal efficiency at pH 7 and an O3 dose of 45 mg/L; (2) the optimum pH and [H2O2]/[Fe2+] ratio found for the Fenton process are pH 4 and [H2O2]/[Fe2+] = 6.58, while the optimum [H2O2] and [Fe2+] under the same HF value are 58.82 and 8.93 mM, respectively; (3) the optimum applied voltage found in the EC experiment is 80 V, and the initial pH affects the AO6 and TOC removal rates in that acidic conditions favor a higher removal rate; (4) the AO6 decolorization rate ranking is O3 < O3/UV = O3/UV/TiO2 < EC < Fenton; (5) the ranking of TOC removal efficiency of the selected AOPs is O3 = Fenton < EC < O3/UV < O3/UV/TiO2 for 30 min of reaction time.

  1. A Comparative Study of the Application of Electronic Data Interchange and Internet Technology to Business Process Reengineering

    Institute of Scientific and Technical Information of China (English)

    Hamid Reza Ahadi

    2004-01-01

    This study investigates the role of information technology in business process reengineering (BPR) implementation. To increase the prospects of successful BPR implementation, the role of information technology in BPR should be thoroughly investigated to find the logical relationships between information technology and BPR. This study used a survey methodology to gather information from 72 BPR programs. The results show that different information technologies, such as the two examined in this study, electronic data interchange (EDI) and the Internet, provide different capabilities and can be useful in different ways and for different purposes. Lack of attention to these relationships may account for the unacceptably high implementation failure rate of previous BPR efforts.

  2. Comparative analysis of video processing and 3D rendering for cloud video games using different virtualization technologies

    Science.gov (United States)

    Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos

    2014-05-01

    This paper describes a comprehensive empirical performance evaluation of 3D video processing employing a physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards and various 3D benchmark tools were utilized in order to analyse the optimal performance in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisor, along with other factors including network I/O, disk I/O and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice and Virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.

  3. What's the Worry with Social Anxiety? Comparing Cognitive Processes in Children with Generalized Anxiety Disorder and Social Anxiety Disorder.

    Science.gov (United States)

    Hearn, Cate S; Donovan, Caroline L; Spence, Susan H; March, Sonja; Holmes, Monique C

    2016-12-05

    Social anxiety disorder (SAD) in children is often comorbid with generalized anxiety disorder (GAD). We investigated whether worry, intolerance of uncertainty, beliefs about worry, negative problem orientation and cognitive avoidance, processes typically associated with GAD, are present in children with SAD. Participants included 60 children (8-12 years), matched on age and gender. Groups included children with primary GAD and without SAD (GAD); with primary SAD and without GAD (SAD); and without an anxiety disorder (NAD). The GAD and SAD groups scored significantly higher than the NAD group on worry, intolerance of uncertainty, negative beliefs about worry and negative problem orientation; however, they did not score differently from each other. Only the GAD group scored significantly higher than the NAD group on cognitive avoidance. These findings further our understanding of the structure of SAD and suggest that the high comorbidity between SAD and GAD may be due to similar underlying processes within the disorders.

  4. A comparative study between spiral-filter press and belt press implemented in a cloudy apple juice production process.

    Science.gov (United States)

    De Paepe, Domien; Coudijzer, Katleen; Noten, Bart; Valkenborg, Dirk; Servaes, Kelly; De Loose, Marc; Diels, Ludo; Voorspoels, Stefan; Van Droogenbroeck, Bart

    2015-04-15

    In this study, the advantages and disadvantages of the innovative, low-oxygen spiral-filter press system were studied in comparison with the belt press commonly applied in small and medium-sized enterprises for the production of cloudy apple juice. At equivalent throughput, a higher juice yield could be achieved with the spiral-filter press, and a more turbid juice with a higher content of suspended solids could be produced. The avoidance of enzymatic browning during juice extraction led to an attractive yellowish juice with an elevated phenolic content. Moreover, juice produced with the spiral-filter press demonstrated a higher retention of phenolic compounds during the downstream processing steps and storage. The results demonstrate the advantage of using a spiral-filter press rather than a belt press in the production of a high-quality cloudy apple juice rich in phenolic compounds, without the use of oxidation-inhibiting additives.

  5. A comparative study on aromatic profiles of strawberry vinegars obtained using different conditions in the production process.

    Science.gov (United States)

    Ubeda, Cristina; Callejón, Raquel M; Troncoso, Ana M; Moreno-Rojas, Jose M; Peña, Francisco; Morales, M Lourdes

    2016-02-01

    Impact odorants in strawberry vinegars produced in different containers (glass, oak barrels, and cherry barrels) were determined by gas chromatography-olfactometry using the modified frequency (MF) technique, and by dynamic headspace gas chromatography-mass spectrometry. The aromatic profile of vinegar made from cooked strawberry must was also studied. All strawberry vinegars retained certain impact odorants from strawberries: 3-nonen-2-one, (E,E)-2,4-decadienal, guaiacol, nerolidol, pantolactone+furaneol, eugenol, γ-dodecalactone and phenylacetic acid. Isovaleric acid, pantolactone+furaneol, p-vinylguaiacol, phenylacetic acid and vanillin were the most important aroma-active compounds in all vinegars. The cooked-must strawberry vinegar accounted for the highest number of impact odorants. Wood barrels provided more aroma complexity than glass containers: impact odorants with grassy characteristics were predominant in vinegar from glass containers, and those with sweet and fruity characteristics in vinegars from wood barrels. Principal component analysis indicated that the production process led to differences in the impact odorants.

  6. A comparative evaluation of acute stress and corticosterone on the process of learning and emotional memory in rat

    Directory of Open Access Journals (Sweden)

    Vafaei AA

    2009-07-01

    Full Text Available Background: Previous studies suggest that stressful events, which trigger glucocorticoid release from the adrenal cortex, and injection of glucocorticoid-receptor agonists affect and modulate emotional learning and memory processes. The aim of this study was to determine the effects of acute stress and systemic injection of corticosterone (an agonist of glucocorticoid receptors) on acquisition (ACQ), consolidation (CONS) and retrieval (RET) of emotional memory in rats. Methods: In this experimental study we used 180 male Wistar rats (220-250 g). Rats were first trained in a one-trial inhibitory avoidance task. On the retention test given 48 h after training, the latency to re-enter the dark compartment of the apparatus (step-through latency, STL) and the time spent in the light chamber (TLC) were recorded during a 10-min test. Intraperitoneal corticosterone at doses of 0.5, 1 and 3 mg/kg was injected 30 min before training, immediately after training, or 30 min before the retrieval test. Some groups instead received 10 min of stressful stimulation by restrainer at the same time points. Finally, locomotor activity was measured for all animals. Results: Administration of corticosterone 30 min before ACQ (1 mg/kg) and immediately after training (CONS; 1 and 3 mg/kg) enhanced emotional memory, whereas administration 30 min before RET (1 and 3 mg/kg) impaired it (p<0.05). Acute stress impaired emotional memory in all phases (p<0.05). Neither acute stress nor corticosterone injection significantly affected motor activity. Conclusions: These findings show that glucocorticoid receptor activation plays an important, phase-dependent role in modulating emotional memory processes (ACQ, CONS and RET) for new emotional events, and that these effects vary across phases.

  7. Source processes at the Chilean subduction region: a comparative analysis of recent large earthquakes seismic sequences in Chile

    Science.gov (United States)

    Cesca, Simone; Tolga Sen, Ali; Dahm, Torsten

    2016-04-01

    Large interplate megathrust events are common at the western margin of the South American plate and have repeatedly affected the slab segment along Chile, driven by the subduction of the oceanic Nazca plate at a convergence rate of almost 7 cm/y. The size and rate of seismicity, including the 1960 Mw 9.5 Chile earthquake, place Chile among the most seismically active regions worldwide. At the same time, thanks to significant national and international efforts in recent years, Chile is nowadays seismologically well equipped and monitored; the dense seismological network provides a valuable dataset for analysing details of the rupture processes not only for the main events, but also for the weaker seismicity preceding, accompanying and following the largest earthquakes. The seismic sequences accompanying recent large earthquakes showed several differences. In some cases, as for the 2014 Iquique earthquake, important precursor activity took place in the months preceding the main shock, with an accelerating pattern in the last days before it. In other cases, as for the recent Illapel earthquake, the main shock occurred with few precursors. The 2010 Maule earthquake showed yet another pattern, with the activation of secondary faults after the main shock. Recent studies were able to resolve significant changes in specific source parameters, such as changes in the distribution of focal mechanisms, potentially revealing a rotation of the stress tensor, or a spatial variation of rupture velocity, supporting a depth dependence of the rupture speed. An advanced inversion of seismic source parameters and their combined interpretation for multiple sequences can help to understand the diversity of rupture processes along the Chilean slab, and more generally in subduction environments. We combine here results of different recent studies to investigate similarities and anomalies of rupture parameters for different seismic sequences and foreshock-aftershock activities.

  8. Isovaleraldehyde elimination by UV/TiO2 photocatalysis: comparative study of the process at different reactors configurations and scales.

    Science.gov (United States)

    Assadi, Aymen Amine; Bouzaza, Abdelkrim; Wolbert, Dominique; Petit, Philippe

    2014-10-01

    A proposal for scaling up photocatalytic reactors is described and applied to reactors whose catalytic walls are coated with a thin layer of titanium dioxide under near-ultraviolet (UV) irradiation. In this context, the photocatalytic degradation of isovaleraldehyde in the gas phase is studied by comparing the removal capacity of different continuous reactors: a cylindrical photocatalytic reactor, a planar reactor, and a pilot unit. Results show that laboratory results can be useful for reactor design and scale-up. Increasing the flowrate also increases the removal capacity; for example, with the pilot unit, a fourfold increase in flowrate raises the degradation rate from 0.14 to 0.38 g h(-1) mcat(-2). The influence of UV intensity is also studied: when this parameter increases, both the degradation rate and the overall mineralization are enhanced. Moreover, the effects of inlet concentration, flowrate, and reactor geometry and size on the removal capacity are also studied.

  9. Comparative study of energy-transfer processes in several porphyrin-based artificial light-harvesting molecules

    Energy Technology Data Exchange (ETDEWEB)

    Hauschild, R. [Institut fuer Angewandte Physik, Universitaet Karlsruhe Wolfgang-Gaede-Str. 1, D-76131 Karlsruhe (Germany)]. E-mail: robert.hauschild@physik.uni-karlsruhe.de; Riedel, G. [Institut fuer Angewandte Physik, Universitaet Karlsruhe Wolfgang-Gaede-Str. 1, D-76131 Karlsruhe (Germany); Zeller, J. [Institut fuer Angewandte Physik, Universitaet Karlsruhe Wolfgang-Gaede-Str. 1, D-76131 Karlsruhe (Germany); Center for Functional Nanostructures, Universitaet Karlsruhe (Germany); Balaban, T.S. [Center for Functional Nanostructures, Universitaet Karlsruhe (Germany); Forschungszentrum Karlsruhe, Institute for Nanotechnology (Germany); Prokhorenko, V.I. [Center for Functional Nanostructures, Universitaet Karlsruhe (Germany); Kalt, H. [Institut fuer Angewandte Physik, Universitaet Karlsruhe Wolfgang-Gaede-Str. 1, D-76131 Karlsruhe (Germany); Center for Functional Nanostructures, Universitaet Karlsruhe (Germany); Berova, N. [Columbia University, Department of Chemistry, New York (United States); Huang, X. [Columbia University, Department of Chemistry, New York (United States); Pescitelli, R. [Columbia University, Department of Chemistry, New York (United States); University of Pisa, Department of Chemistry, Pisa (Italy); Nakanishi, K. [Columbia University, Department of Chemistry, New York (United States)

    2005-04-15

    Time-resolved fluorescence spectroscopy was carried out on porphyrin-based artificial light-harvesting molecules (ALHMs). The ALHMs are dyads consisting of zinc-tetraphenylporphyrin (Zn-TPP) as an antenna complex and a free-base tetraphenylporphyrin (TPP) as an energy trap; the two chromophores are covalently bound by a steroidal bridge in different configurations. We observed energy transfer in the ALHMs with transfer times on the order of 1 ns, and compared the energy transfer dynamics of ALHMs differing in the link between the energy donor and the energy trap. Energy transfer from Zn-TPP to TPP occurs only when the two are connected by a chemical bond. A spread in the energy transfer times was found, which is attributed to the different configurations of the ALHMs under study.

  10. Target and Non-Target Processing during Oddball and Cyberball: A Comparative Event-Related Potential Study.

    Science.gov (United States)

    Weschke, Sarah; Niedeggen, Michael

    2016-01-01

    The phenomenon of social exclusion can be investigated using a virtual ball-tossing game called Cyberball. Neuroimaging studies have identified structures that are activated during social exclusion, but to date the underlying mechanisms are not fully disclosed. Previous electrophysiological studies showed that the P3 complex is sensitive to exclusion manipulations in the Cyberball paradigm and that P3 amplitude correlates with self-reported social pain. Since this posterior event-related potential (ERP) has been widely investigated using the oddball paradigm, we directly compared the ERP effects elicited by the target (Cyberball: "ball possession") and non-target (Cyberball: "ball possession of a co-player") events in both paradigms. Analyses mainly focused on the effect of altered stimulus probabilities of the target and non-target events between two consecutive blocks of the tasks. In the first block, the probability of both the target and the non-target event was 33% (Cyberball: inclusion); in the second block, target probability was reduced to 17% and, accordingly, non-target probability was increased to 66% (Cyberball: exclusion). Our results indicate that ERP amplitude differences between inclusion and exclusion are comparable to ERP amplitude effects in a visual oddball task. We therefore suggest that ERP effects, especially in the P3 range, in the oddball and Cyberball paradigms rely on similar mechanisms, namely the probability of target and non-target events. Since the simulation of social exclusion (Cyberball) did not trigger a unique ERP response, the idea of an exclusion-specific neural alarm system is not supported. The limitations of an ERP-based approach are discussed.

  11. Semantic memory processing is enhanced in preadolescents breastfed compared to those formula-fed as infants: An ERP N400 study of sentential semantic congruity

    Science.gov (United States)

    Studies comparing child cognitive development and brain activity during cognitive functions between children who were fed breast milk (BF), milk formula (MF), or soy formula (SF) have not been reported. We recorded event-related scalp potentials reflecting semantic processing (N400 ERP) from 20 homo...

  12. Treatment of cutting fluid: comparative study of different processes of recycling; Tratamiento de fluidos de corte estudio comparativo de los diferentes procesos de reciclaje

    Energy Technology Data Exchange (ETDEWEB)

    Labarta Carreno, C.E.; Ipinar, E.

    1997-12-31

    The environmental concerns raised by cutting fluids (commonly known in Spain as taladrines) are analyzed in depth in this paper. The authors describe the history of these hazardous effluents, the types that exist and their characteristics, and finally compare the different industrial processes for their treatment. (Author) 7 refs.

  13. Fractionally distilled SRC-I, SRC-II, EDS, H-Coal and ITSL direct coal liquefaction process materials: a comparative summary of chemical analysis and biological testing

    Energy Technology Data Exchange (ETDEWEB)

    Wright, C.W.; Later, D.W.; Dauble, D.D.; Wilson, B.W.

    1985-07-01

    This document reports and compares results compiled from chemical analyses and biological testing of coal liquefaction process materials which were fractionally distilled, after production, into comparable boiling-point range cuts. Comparative analyses were performed on solvent refined coal (SRC)-I, SRC-II, H-Coal, EDS, and integrated two-stage liquefaction (ITSL) distillate materials. Mutagenicity and carcinogenicity assays were conducted in conjunction with chromatographic and mass spectrometric analyses to provide detailed, comparative chemical and biological assessments. Where possible, results obtained from the distillate cuts are compared to those from coal liquefaction materials with limited boiling ranges. The work reported here was conducted by investigators in the Biology and Chemistry Department at the Pacific Northwest Laboratory (PNL), Richland, WA. 38 refs., 16 figs., 27 tabs.

  14. Identification of light absorbing oligomers from glyoxal and methylglyoxal aqueous processing: a comparative study at the molecular level

    Science.gov (United States)

    Finessi, Emanuela; Hamilton, Jacqueline; Rickard, Andrew; Baeza-Romero, Maria; Healy, Robert; Peppe, Salvatore; Adams, Tom; Daniels, Mark; Ball, Stephen; Goodall, Iain; Monks, Paul; Borras, Esther; Munoz, Amalia

    2014-05-01

    Numerous studies point to the reactive uptake of gaseous low-molecular-weight carbonyls onto atmospheric waters (cloud/fog droplets and wet aerosols) as an important SOA formation route not yet included in current models. However, the evaluation of these processes is challenging because water provides a medium for a complex array of reactions, such as self-oligomerization, aldol condensation, and Maillard-type browning reactions in the presence of ammonium salts. In addition to adding to SOA mass, aqueous-chemistry products have been shown to include light-absorbing, surface-active and high-molecular-weight oligomeric species, and can therefore affect climatically relevant aerosol properties such as light absorption and hygroscopicity. Glyoxal (GLY) and methylglyoxal (MGLY) are the gaseous carbonyls that have perhaps received the most attention to date owing to their ubiquity, abundance and reactivity in water, with the majority of studies focussing on bulk physical properties. However, very little is known at the molecular level, in particular for MGLY, and the relative potential of these species as aqueous SOA precursors in ambient air is still unclear. We have conducted experiments with both laboratory solutions and chamber-generated particles to simulate the aqueous processing of GLY and MGLY with ammonium sulphate (AS) under typical atmospheric conditions and investigated their respective aging products. Both high-performance liquid chromatography coupled with UV-Vis detection and ion-trap mass spectrometry (HPLC-DAD-MSn) and high-resolution mass spectrometry (FTICR-MS) have been used for molecular identification purposes. Comprehensive gas chromatography with nitrogen chemiluminescence detection (GCxGC-NCD) has been applied for the first time to these systems, revealing a surprisingly high number of nitrogen-containing organics (ONs) spanning a wide range of polarities. GCxGC-NCD proved to be a valuable tool to determine overall amount and rates of

  15. Comparative lipid production by oleaginous yeasts in hydrolyzates of lignocellulosic biomass and process strategy for high titers.

    Science.gov (United States)

    Slininger, Patricia J; Dien, Bruce S; Kurtzman, Cletus P; Moser, Bryan R; Bakota, Erica L; Thompson, Stephanie R; O'Bryan, Patricia J; Cotta, Michael A; Balan, Venkatesh; Jin, Mingjie; Sousa, Leonardo da Costa; Dale, Bruce E

    2016-08-01

    Oleaginous yeasts can convert sugars to lipids with fatty acid profiles similar to those of vegetable oils, making them attractive for the production of biodiesel. Lignocellulosic biomass is an attractive source of sugars for yeast lipid production because it is abundant, potentially low cost, and renewable. However, lignocellulosic hydrolyzates are laden with byproducts that inhibit microbial growth and metabolism. With the goal of identifying oleaginous yeast strains able to convert plant biomass to lipids, we screened 32 strains from the ARS Culture Collection, Peoria, IL, and identified four robust strains able to produce high lipid concentrations from both acid- and base-pretreated biomass. The screening was arranged in two tiers, using undetoxified enzyme hydrolyzates of ammonia fiber expansion (AFEX)-pretreated corn stover as the primary screening medium and acid-pretreated switchgrass as the secondary screening medium applied to strains passing the primary screen. Hydrolyzates were prepared at ∼18-20% solids loading to provide ∼110 g/L sugars at a ∼56:39:5 mass ratio of glucose:xylose:arabinose. A two-stage process boosting the molar C:N ratio from 60 to well above 400 in undetoxified switchgrass hydrolyzate was optimized with respect to nitrogen source, C:N ratio, and carbon loading. Using this process, three strains were able to consume acetic acid and nearly all available sugars, to accumulate 50-65% of cell biomass as lipid (w/w), and to produce 25-30 g/L lipid at 0.12-0.22 g/L/h and 0.13-0.15 g/g (39-45% of the theoretical yield) at pH 6 and 7, a performance unprecedented in lignocellulosic hydrolyzates. Three of the top strains have not previously been reported for the bioconversion of lignocellulose to lipids. The successful identification and development of top-performing lipid-producing yeasts in lignocellulose hydrolyzates is expected to advance the economic feasibility of high-quality biodiesel and jet fuels from renewable biomass, expanding the market

  16. International Conference on Harmonisation; guidance on Q5E Comparability of Biotechnological/Biological Products Subject to Changes in Their Manufacturing Process; availability. Notice.

    Science.gov (United States)

    2005-06-30

    The Food and Drug Administration (FDA) is announcing the availability of a guidance entitled "Q5E Comparability of Biotechnological/Biological Products Subject to Changes in Their Manufacturing Process." The guidance was prepared under the auspices of the International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH). The purpose of the guidance is to provide principles for assessing the comparability of biotechnological/biological products before and after changes are made in the manufacturing process for the drug substance or drug product. The guidance is intended to assist in the collection of relevant technical information that serves as evidence that the manufacturing process changes will not have an adverse impact on the quality, safety, and efficacy of the drug product.

  17. Comparative Proteomic Analysis of Proteins Involved in the Tumorigenic Process of Seminal Vesicle Carcinoma in Transgenic Mice

    Directory of Open Access Journals (Sweden)

    Wei-Chao Chang

    2010-01-01

    Full Text Available We studied the seminal vesicle secretion (SVS) of transgenic mice using one-dimensional gel electrophoresis combined with LTQ-FT ICR MS analysis to explore protein expression profiles. Using unique peptide numbers as a cut-off criterion, 79 proteins were identified with high confidence in the SVS proteome. Label-free quantitative analysis was performed using the IDEAL_Q software program, and western blot assays were performed to validate the expression of seminal vesicle proteins. Sulfhydryl oxidase 1, glia-derived nexin, SVS1, SVS3, and SVS6 were overexpressed in SVS during cancer development. SVS2, which has high sequence similarity to human semenogelin, is the most abundant protein in SVS and decreases dramatically during the tumorigenic process. Our results indicate that these protein candidates could serve as potential targets for monitoring seminal vesicle carcinoma. Moreover, this information can provide clues for investigating seminal vesicle secretion-containing seminal plasma in related human diseases.

  18. A comparative analysis for multiattribute selection among renewable energy alternatives using fuzzy axiomatic design and fuzzy analytic hierarchy process

    Energy Technology Data Exchange (ETDEWEB)

    Kahraman, Cengiz; Kaya, Ihsan; Cebi, Selcuk [Istanbul Technical University, Department of Industrial Engineering, 34367, Macka-Istanbul (Turkey)

    2009-10-15

    Renewable energy is generated from naturally replenished resources such as sunlight, wind, rain, tides, and geothermal heat. Energy resources are very important from the economic and political perspectives of all countries; hence, selecting the best alternative plays an important role in a country's energy investments. Among decision-making methodologies, axiomatic design (AD) and the analytic hierarchy process (AHP) are often used in the literature, and fuzzy set theory is a powerful tool for treating the uncertainty that arises from incomplete or vague information. In this paper, fuzzy multicriteria decision-making methodologies are suggested for the selection among renewable energy alternatives. The first methodology is based on the AHP, which allows the evaluation scores from experts to be linguistic expressions, crisp numbers, or fuzzy numbers; the second is based on AD principles under fuzziness, which evaluates the alternatives under objective or subjective criteria with respect to the functional requirements obtained from experts. The originality of the paper comes from the fuzzy AD application to the selection of the best renewable energy alternative and the comparison with fuzzy AHP. In the application of the proposed methodologies, the most appropriate renewable energy alternative is determined for Turkey. (author)
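    The AHP core that both methodologies extend can be sketched in a few lines. The sketch below is a crisp (non-fuzzy) simplification using the row-geometric-mean approximation of the priority vector; the pairwise-comparison values are hypothetical, not taken from the paper:

    ```python
    import math

    def ahp_priorities(M):
        """Approximate AHP priority vector: row geometric means, normalized to sum to 1."""
        gm = [math.prod(row) ** (1.0 / len(row)) for row in M]
        total = sum(gm)
        return [g / total for g in gm]

    # Hypothetical pairwise comparisons of three renewable alternatives (Saaty scale);
    # the matrix is reciprocal: M[i][j] = 1 / M[j][i].
    M = [
        [1.0,   3.0, 5.0],
        [1 / 3, 1.0, 2.0],
        [1 / 5, 1 / 2, 1.0],
    ]
    w = ahp_priorities(M)
    print([round(x, 3) for x in w])  # alternative 1 dominates
    ```

    The fuzzy variants in the paper replace the crisp entries with (for example) triangular fuzzy numbers and defuzzify at the end; the normalization and ranking step is the same in spirit.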

  19. The joint flanker effect and the joint Simon effect: On the comparability of processes underlying joint compatibility effects.

    Science.gov (United States)

    Dittrich, Kerstin; Bossert, Marie-Luise; Rothe-Wulf, Annelie; Klauer, Karl Christoph

    2017-09-01

    Previous studies observed compatibility effects in different interference paradigms such as the Simon and flanker task even when the task was distributed across two co-actors. In both Simon and flanker tasks, performance is improved in compatible trials relative to incompatible trials if one actor works on the task alone as well as if two co-actors share the task. These findings have been taken to indicate that actors automatically co-represent their co-actor's task. However, recent research on the joint Simon and joint flanker effect suggests alternative non-social interpretations. To which degree both joint effects are driven by the same underlying processes is the question of the present study, and it was scrutinized by manipulating the visibility of the co-actor. While the joint Simon effect was not affected by the visibility of the co-actor, the joint flanker effect was reduced when participants did not see their co-actors but knew where the co-actors were seated. These findings provide further evidence for a spatial interpretation of the joint Simon effect. In contrast to recent claims, however, we propose a new explanation of the joint flanker effect that attributes the effect to an impairment in the focusing of spatial attention contingent on the visibility of the co-actor.

  20. A comparative simulation study of coupled THM processes and their effect on fractured rock permeability around nuclear waste repositories

    Energy Technology Data Exchange (ETDEWEB)

    Rutqvist, Jonny; Barr, Deborah; Birkholzer, Jens T.; Fujisaki, Kiyoshi; Kolditz, Olaf; Liu, Quan-Shen; Fujita, Tomoo; Wang, Wenqing; Zhang, Cheng-Yuan

    2008-10-23

    This paper presents an international, multiple-code, simulation study of coupled thermal, hydrological, and mechanical (THM) processes and their effect on permeability and fluid flow in fractured rock around heated underground nuclear waste emplacement drifts. Simulations were conducted considering two types of repository settings: (a) open emplacement drifts in relatively shallow unsaturated volcanic rock, and (b) backfilled emplacement drifts in deeper saturated crystalline rock. The results showed that for the two assumed repository settings, the dominant mechanism of changes in rock permeability was thermal-mechanically-induced closure (reduced aperture) of vertical fractures, caused by thermal stress resulting from repository-wide heating of the rock mass. The magnitude of thermal-mechanically-induced changes in permeability was more substantial in the case of an emplacement drift located in a relatively shallow, low-stress environment where the rock is more compliant, allowing more substantial fracture closure during thermal stressing. However, in both of the assumed repository settings in this study, the thermal-mechanically-induced changes in permeability caused relatively small changes in the flow field, with most changes occurring in the vicinity of the emplacement drifts.

  1. Model Selection Criteria for Missing-Data Problems Using the EM Algorithm.

    Science.gov (United States)

    Ibrahim, Joseph G; Zhu, Hongtu; Tang, Niansheng

    2008-12-01

    We consider novel methods for the computation of model selection criteria in missing-data problems based on the output of the EM algorithm. The methodology is very general and can be applied to numerous situations involving incomplete data within an EM framework, from covariates missing at random in arbitrary regression models to nonignorably missing longitudinal responses and/or covariates. Toward this goal, we develop a class of information criteria for missing-data problems, called IC(H,Q), which yields the Akaike information criterion and the Bayesian information criterion as special cases. The computation of IC(H,Q) requires an analytic approximation to a complicated function, called the H-function, along with output from the EM algorithm used in obtaining maximum likelihood estimates. The approximation to the H-function leads to a large class of information criteria, called IC(H̃(k),Q). Theoretical properties of IC(H̃(k),Q), including consistency, are investigated in detail. To eliminate the analytic approximation to the H-function, a computationally simpler approximation to IC(H,Q), called IC(Q), is proposed, the computation of which depends solely on the Q-function of the EM algorithm. Advantages and disadvantages of IC(H̃(k),Q) and IC(Q) are discussed and examined in detail in the context of missing-data problems. Extensive simulations are given to demonstrate the methodology and examine the small-sample and large-sample performance of IC(H̃(k),Q) and IC(Q) in missing-data problems. An AIDS data set also is presented to illustrate the proposed methodology.
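    The two special cases that IC(H,Q) recovers, AIC and BIC, take only a maximized log-likelihood, a parameter count, and (for BIC) a sample size. A minimal sketch with hypothetical values (not from the paper's AIDS data):

    ```python
    import math

    def aic(log_likelihood, k):
        """Akaike information criterion: -2*logL + 2k (penalty of 2 per parameter)."""
        return -2.0 * log_likelihood + 2.0 * k

    def bic(log_likelihood, k, n):
        """Bayesian information criterion: -2*logL + k*log(n) (penalty grows with n)."""
        return -2.0 * log_likelihood + k * math.log(n)

    # Hypothetical comparison: two fitted models on n = 100 observations,
    # name -> (maximized log-likelihood, number of free parameters).
    n = 100
    models = {"A": (-250.0, 3), "B": (-245.0, 6)}
    for name, (ll, k) in models.items():
        print(name, round(aic(ll, k), 1), round(bic(ll, k, n), 1))
    ```

    With these numbers AIC prefers the larger model B (502.0 vs 506.0) while BIC prefers the smaller model A (513.8 vs 517.6), illustrating why a unified family such as IC(H,Q) is useful: the two classical criteria can disagree.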

  2. Spectroscopic investigations of plasma nitriding processes: A comparative study using steel and carbon as active screen materials

    Science.gov (United States)

    Hamann, S.; Burlacov, I.; Spies, H.-J.; Biermann, H.; Röpcke, J.

    2017-04-01

    Low-pressure pulsed DC H2-N2 plasmas were investigated in PLANIMOR, a laboratory reactor for monitoring active-screen plasma nitriding, to compare two different active-screen electrodes: (i) a steel screen with CH4 added to the feed gas as a carbon-containing precursor, and (ii) a carbon screen without any additional gaseous carbon precursor. Using quantum cascade laser absorption spectroscopy, the evolution of the concentrations of four stable molecular species, NH3, HCN, CH4, and C2H2, was monitored; the concentrations were in the range of 10^12-10^16 molecules cm^-3. By analyzing the development of the molecular concentrations as the screen plasma power was varied, a similar behavior of the monitored reaction products was found for both screen materials, with NH3 and HCN as the main reaction products. With the carbon screen, the concentrations of HCN and C2H2 were 30 and 70 times higher, respectively, than with the steel screen and an admixture of 1% CH4. From the concentrations of the three detected hydrocarbon reaction products, a combustion rate of the carbon screen of up to 69 mg h^-1 was derived. Optical emission spectroscopy enabled the determination of the rotational temperature of the N2+ ion, which was in the range of 650-900 K and increased with power in a similar way in the plasmas of both screens. With increasing power, the ionic component of nitrogen, represented by the N2+ (0-0) band of the first negative system, as well as the CN (0-0) band of the violet system, also increased strongly relative to the intensity of the neutral nitrogen component, i.e., the N2 (0-0) band of the second positive system. In addition, steel samples were treated with both the steel and the carbon screen, resulting in the formation of compound layers containing up to 10 wt.% nitrogen and 10 wt.% carbon, respectively, depending on the screen material.

  3. From Reactionary to Responsive: Applying the Internal Environmental Scan Protocol to Lifelong Learning Strategic Planning and Operational Model Selection

    Science.gov (United States)

    Downing, David L.

    2009-01-01

    This study describes and implements a necessary preliminary strategic planning procedure, the Internal Environmental Scanning (IES), and discusses its relevance to strategic planning and university-sponsored lifelong learning program model selection. Employing a qualitative research methodology, a proposed lifelong learning-centric IES process…

  5. [Low level auditory skills compared to writing skills in school children attending third and fourth grade: evidence for the rapid auditory processing deficit theory?].

    Science.gov (United States)

    Ptok, M; Meisen, R

    2008-01-01

    The rapid auditory processing deficit theory holds that impaired reading/writing skills are not caused exclusively by a cognitive deficit specific to the representation and processing of speech sounds, but arise from sensory, mainly auditory, deficits. To further explore this theory, we compared different measures of low-level auditory skills with writing skills in school children. Study design: prospective study. Participants: school children attending third and fourth grade. Measures: just noticeable differences for intensity and frequency (JNDI, JNDF), gap detection (GD), monaural and binaural temporal order judgement (TOJm and TOJb); grades in writing, language, and mathematics. Analysis: correlation analysis. No relevant correlation was found between any low-level auditory processing variable and writing skills. These data do not support the rapid auditory processing deficit theory.
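    The correlation analysis above amounts to computing a coefficient such as Pearson's r between each auditory measure and the writing grade. A minimal sketch with hypothetical data (the study's own values are not given in the abstract):

    ```python
    def pearson_r(x, y):
        """Pearson correlation coefficient between two equal-length samples."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        return sxy / (sxx * syy) ** 0.5

    # Hypothetical per-child data: frequency JND (Hz) vs. writing grade.
    jnd_f = [12.0, 15.0, 9.0, 20.0, 14.0]
    grades = [2.0, 3.0, 2.0, 3.0, 2.0]
    print(round(pearson_r(jnd_f, grades), 2))
    ```

    In practice one would also report a p-value (e.g. via scipy.stats.pearsonr) since "no relevant correlation" is a claim about both effect size and significance.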

  6. Microwave processed bulk and nano NiMg ferrites: A comparative study on X-band electromagnetic interference shielding properties

    Energy Technology Data Exchange (ETDEWEB)

    Chandra Babu Naidu, K., E-mail: chandrababu954@gmail.com [Ceramic Composite Laboratory, Centre for Crystal Growth, SAS, VIT University, Vellore 632014, Tamilnadu (India); Madhuri, W., E-mail: madhuriw12@gmail.com [Ceramic Composite Laboratory, Centre for Crystal Growth, SAS, VIT University, Vellore 632014, Tamilnadu (India); IFW, Leibniz Institute for Solid State and Materials Research, Technische Universität Dresden, 01069 Dresden (Germany)

    2017-02-01

    Bulk and nano Ni{sub 1-x}Mg{sub x}Fe{sub 2}O{sub 4} (x = 0–1) samples were synthesized via microwave double sintering and microwave-assisted hydrothermal techniques, respectively. The diffraction patterns confirmed the formation of cubic spinel phases in both ferrites. Larger densities were achieved for the bulk samples than for the nano ones. In addition, a comparative study of the X-band (8.4–12 GHz) electromagnetic interference shielding properties of the bulk and nanomaterials was carried out. The results showed that the bulk Ni{sub 0.6}Mg{sub 0.4}Fe{sub 2}O{sub 4} composition exhibited the highest total shielding efficiency (SE{sub T}) of ∼17 dB. The shielding efficiency values of all bulk compositions were higher than those of the nano ones because of their larger densities. Moreover, the ac electromagnetic parameters such as the electrical conductivity (σ{sub ac}) and the real (ε′ & μ′) and imaginary (ε″ & μ″) parts of the complex permittivity and permeability were investigated as a function of gigahertz frequency. The bulk ferrites with x = 0.4 & 0.6 showed high ε″ values of 10.26 & 6.71 and μ″ values of 3.65 & 3.09, respectively, at 12 GHz, so they can serve as promising microwave absorber materials. Interestingly, the nanoferrites exhibited negative μ″ values at a few frequencies due to geometrical effects, which improves microwave absorption. - Highlights: • Bulk and nano NiMg ferrites are prepared by microwave and hydrothermal methods. • X-band EMI shielding properties are studied for both bulk and nano ferrites. • Bulk Ni{sub 0.6}Mg{sub 0.4}Fe{sub 2}O{sub 4} revealed the highest SE{sub T} of ∼17 dB at 8.4 GHz. • Bulk x = 0.4 & 0.6 showed high ε″ and μ″ at 12 GHz for absorber applications.
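    Shielding figures such as the ∼17 dB quoted above are conventionally expressed on a decibel power-ratio scale. As an illustrative sketch only (not the authors' computation), the total shielding effectiveness can be derived from the magnitude of the measured transmission scattering parameter |S21|:

    ```python
    import math

    def total_shielding_effectiveness_db(s21_magnitude):
        """Total EMI shielding effectiveness in dB from the linear magnitude
        of the transmission scattering parameter |S21| (power-ratio form)."""
        return -20.0 * math.log10(s21_magnitude)

    # A sample transmitting ~14.1% of the field amplitude (|S21| ~ 0.141)
    # corresponds to roughly the 17 dB reported for the best bulk ferrite.
    print(round(total_shielding_effectiveness_db(0.141), 1))  # 17.0
    ```

    A lossless sample (|S21| = 1) gives 0 dB; every factor-of-10 drop in transmitted power adds 10 dB.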

  7. A Comparative Study of Applying Active-Set and Interior Point Methods in MPC for Controlling Nonlinear pH Process

    Directory of Open Access Journals (Sweden)

    Syam Syafiie

    2014-06-01

    A comparative study of Model Predictive Control (MPC) using the active-set method and the interior point method (IPM) is presented as a control technique for a highly non-linear pH process. The process is a strong acid-strong base system: a strong acid, hydrochloric acid (HCl), and a strong base, sodium hydroxide (NaOH), in the presence of the buffer solution sodium bicarbonate (NaHCO3), flow into a neutralization reactor. The non-linear pH neutralization model governing this process is represented by multi-linear models. The performance of the two controllers is studied by evaluating their set-point tracking and disturbance rejection, and their optimization times are compared. Both MPC variants show similar performance, with no overshoot, offset or oscillation; however, for this small-scale optimization problem the conventional active-set method gives a shorter control action time than MPC using the IPM.

  8. On the Evidence for Cosmic Variation of the Fine Structure Constant (II): A Semi-Parametric Bayesian Model Selection Analysis of the Quasar Dataset

    CERN Document Server

    Cameron, Ewan

    2013-01-01

    In the second paper of this series we extend our Bayesian reanalysis of the evidence for a cosmic variation of the fine structure constant to the semi-parametric modelling regime. By adopting a mixture of Dirichlet processes prior for the unexplained errors in each instrumental subgroup of the benchmark quasar dataset we go some way towards freeing our model selection procedure from the apparent subjectivity of a fixed distributional form. Despite the infinite-dimensional domain of the error hierarchy so constructed we are able to demonstrate a recursive scheme for marginal likelihood estimation with prior-sensitivity analysis directly analogous to that presented in Paper I, thereby allowing the robustness of our posterior Bayes factors to hyper-parameter choice and model specification to be readily verified. In the course of this work we elucidate various similarities between unexplained error problems in the seemingly disparate fields of astronomy and clinical meta-analysis, and we highlight a number of sop...

  9. Multiscale Model Selection for High-Frequency Financial Data of a Large Tick Stock by Means of the Jensen–Shannon Metric

    Directory of Open Access Journals (Sweden)

    Gianbiagio Curato

    2014-01-01

    Modeling financial time series at different time scales is still an open challenge. The choice of a suitable indicator quantifying the distance between a model and the data is therefore of fundamental importance for model selection. In this paper, we propose a multiscale model selection method based on the Jensen–Shannon distance, which selects the model that best reproduces the distribution of price changes at different time scales. Specifically, we consider the problem of modeling the ultra-high-frequency dynamics of an asset with a large tick-to-price ratio. We study the price process at different time scales and compute the Jensen–Shannon distance between the original dataset and different models, showing that the coupling between spread and returns is important for modeling the return distribution at different time scales of observation, ranging from single transactions to the daily scale.
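    The Jensen–Shannon distance used in this record is the square root of the Jensen–Shannon divergence; with base-2 logarithms it is bounded in [0, 1]. A minimal sketch of how it could be computed for two discrete distributions of price changes (illustrative only, not the paper's code):

    ```python
    import numpy as np

    def js_distance(p, q):
        """Jensen-Shannon distance (square root of the JS divergence,
        base-2 logs) between two discrete probability distributions."""
        p = np.asarray(p, dtype=float)
        q = np.asarray(q, dtype=float)
        p /= p.sum()  # normalise, in case raw counts are passed in
        q /= q.sum()
        m = 0.5 * (p + q)  # mixture distribution

        def kl(a, b):
            mask = a > 0  # 0 * log(0) is taken as 0
            return np.sum(a[mask] * np.log2(a[mask] / b[mask]))

        return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

    # Identical distributions are at distance 0; disjoint ones at distance 1.
    print(js_distance([0.5, 0.5, 0.0], [0.5, 0.5, 0.0]))  # 0.0
    print(js_distance([1.0, 0.0], [0.0, 1.0]))            # 1.0
    ```

    In a multiscale comparison one would evaluate this distance between the empirical and model-generated histograms of price changes at each aggregation scale.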

  10. Influence of model selection on the predicted distribution of the seagrass Zostera marina

    Science.gov (United States)

    Downie, Anna-Leena; von Numers, Mikael; Boström, Christoffer

    2013-04-01

    There is an increasing need to model the distribution of species and habitats for effective conservation planning, but there is a paucity of models for the marine environment. We used presence (131) and absence (219) records of the marine angiosperm Zostera marina L. from the archipelago of SW Finland, northern Baltic Sea, to model its distribution in a 5400 km2 area. We used depth, slope, turbidity, wave exposure and distance to sandy shores as environmental predictors, and compared a presence-absence method, the generalised additive model (GAM), with a presence-only method, maximum entropy (Maxent). Models were validated using semi-independent data sets. Both models performed well and described the niche of Z. marina fairly consistently, although they weighted the environmental variables differently, and consequently the spatial predictions differed somewhat. A notable outcome was that, with relatively equal model performance, the area actually predicted in geographical space can vary twofold: the area predicted as suitable for Z. marina by the ensemble of the two models was almost half of that predicted by the GAM model alone. The ensemble of model predictions increased predictive capability only marginally, but clearly shifted the model towards a more conservative prediction, increasing specificity at the cost of sensitivity. The environmental predictors selected into the final models described the potential distribution of Z. marina well and showed that in the northern Baltic the species occupies a narrow niche, typically thriving in shallow, moderately exposed to exposed locations near sandy shores. We conclude that a prediction based on a combination of model results provides a more realistic estimate of the core area suitable for Z. marina and should be the modelling approach implemented in conservation planning and management.
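    The abstract's point that an ensemble prediction is more conservative than either model alone can be illustrated with a toy calculation (hypothetical suitability scores, not the study's data): averaging two models' habitat-suitability maps before thresholding flags fewer cells than the more permissive single model.

    ```python
    import numpy as np

    # Hypothetical habitat-suitability scores (0-1) from two models
    # over five grid cells.
    gam_pred = np.array([0.90, 0.60, 0.55, 0.20, 0.80])
    maxent_pred = np.array([0.85, 0.30, 0.20, 0.10, 0.45])

    threshold = 0.5
    ensemble = 0.5 * (gam_pred + maxent_pred)  # unweighted mean of the two models

    cells_gam = int(np.sum(gam_pred >= threshold))
    cells_ensemble = int(np.sum(ensemble >= threshold))
    # The ensemble keeps a cell only when the averaged evidence clears the
    # threshold, so it typically predicts a smaller, more conservative area.
    print(cells_gam, cells_ensemble)  # 4 2
    ```

    Cells where only one model is confident drop out of the ensemble map, which is exactly the specificity-for-sensitivity trade the study reports.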

  11. Modified release itraconazole amorphous solid dispersion to treat Aspergillus fumigatus: importance of the animal model selection.

    Science.gov (United States)

    Maincent, Julien P; Najvar, Laura K; Kirkpatrick, William R; Huang, Siyuan; Patterson, Thomas F; Wiederhold, Nathan P; Peters, Jay I; Williams, Robert O

    2017-02-01

    Previously, modified-release itraconazole in the form of a melt-extruded amorphous solid dispersion based on a pH-dependent enteric polymer combined with hydrophilic additives (HME-ITZ) exhibited improved in vitro dissolution properties. These properties agreed with pharmacokinetic results in rats showing high and sustained itraconazole (ITZ) systemic levels. The objective of the present study was to better understand the best choice of rodent model for evaluating the pharmacokinetics and efficacy of this orally administered modified-release ITZ dosage form against invasive Aspergillus fumigatus. A mouse model and a guinea pig model were investigated and compared with previously published results. In the mouse model, despite levels similar to previously reported values, plasma and lung levels were variable and fungal burden was not statistically different between placebo controls, HME-ITZ and Sporanox® (ITZ oral solution). This study demonstrated that the mouse is a poor model for studying modified-release ITZ dosage forms based on pH-dependent enteric polymers, owing to the low fluid volume available for dissolution and the low intestinal pH. By contrast, the guinea pig was a suitable model for evaluating modified-release ITZ dosage forms: a significant decrease in lung fungal burden was measured as a result of high and sustained ITZ tissue levels. A sufficiently high intestinal pH and sufficient fluid available for dissolution likely facilitated the dissolution process. Despite the high ITZ tissue levels, the primary therapeutic agent voriconazole exhibited an even more pronounced decrease in fungal burden, consistent with its reported higher clinical efficacy against Aspergillus fumigatus.

  12. Different routes to the same ending: comparing the N-glycosylation processes of Haloferax volcanii and Haloarcula marismortui, two halophilic archaea from the Dead Sea

    OpenAIRE

    Calo, Doron; Guan, Ziqiang; Naparstek, Shai; Eichler, Jerry

    2011-01-01

    Recent insight into the N-glycosylation pathway of the haloarchaeon, Haloferax volcanii, is helping to bridge the gap between our limited understanding of the archaeal version of this universal post-translational modification and the better-described eukaryal and bacterial processes. To delineate as yet undefined steps of the Hfx. volcanii N-glycosylation pathway, a comparative approach was taken with the initial characterization of N-glycosylation in Haloarcula marismortui, a second haloarch...

  13. The advantages of using activated flux-cored wire compared to solid wire in the MAG welding process from the aspect of metallurgical characteristics

    Directory of Open Access Journals (Sweden)

    N. Bajić

    2014-07-01

    This paper analyzes, from the metallurgical aspect, the quality of a new flux-cored wire intended for the MAG welding process as a function of changes in shielding gas composition and welding parameters. The results of a comparative analysis of the microstructure of the weld metal and the heat-affected zone (HAZ) support conclusions about the feasibility of introducing the new flux-cored wire in industrial applications.

  14. Comparative Studies of Electrospinning and Solution Blow Spinning Processes for the Production of Nanofibrous Poly(L-Lactic Acid) Materials for Biomedical Engineering

    Directory of Open Access Journals (Sweden)

    Wojasiński Michal

    2014-06-01

    A comparative statistical analysis of the influence of processing parameters on nanofibrous poly(L-lactic acid) (PLLA) material morphology and average fiber diameter was conducted for the electrospinning (ES) and solution blow spinning (SBS) processes, in order to identify the key processing parameter for tailoring product properties. A preliminary comparative biocompatibility evaluation was also performed. Based on Design of Experiments (DOE) principles, the standard effects of voltage, air pressure, solution feed rate and concentration on average nanofiber diameter were analyzed with Pareto charts and best-fitted surface charts. Nanofibers were examined by scanning electron microscopy (SEM). The preliminary comparative biocompatibility tests were based on SEM micrographs of CP5 cells cultured on materials produced by ES and SBS. Polymer solution concentration was identified as the key parameter influencing the morphology and dimensions of the nanofibrous mats produced by both techniques: in both cases, the average fiber diameter increases with polymer concentration. The preliminary biocompatibility test suggests that nanofibers produced by ES as well as SBS are suitable as scaffold materials for biomedical engineering.

  15. Should processed or raw image data be used in mammographic image quality analyses? A comparative study of three full-field digital mammography systems.

    Science.gov (United States)

    Borg, Mark; Badr, Ishmail; Royle, Gary

    2015-01-01

    The purpose of this study is to compare a number of measured image quality parameters using processed and unprocessed (raw) images in two full-field direct digital units and one computed radiography mammography system. This study shows that the difference between raw and processed image data is system specific. The results show no significant differences between raw and processed data in the mean threshold contrast values obtained with the contrast-detail mammography phantom in any of the systems investigated; however, these results cannot be generalised to all available systems. Notable differences were found in contrast-to-noise ratios and in other tests, including response function, modulation transfer function, noise equivalent quanta, normalised noise power spectra and detective quantum efficiency, as specified in IEC 62220-1-2. Consequently, the authors strongly recommend the use of raw data for all image quality analyses in digital mammography.
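    The contrast-to-noise ratio, one of the metrics where this study found raw/processed differences, has a simple operational definition. A sketch under the common convention CNR = |mean(signal) − mean(background)| / std(background); note that exact definitions vary between measurement protocols:

    ```python
    import numpy as np

    def contrast_to_noise_ratio(signal_roi, background_roi):
        """CNR under one common convention: difference of ROI means
        normalised by the background standard deviation."""
        signal_roi = np.asarray(signal_roi, dtype=float)
        background_roi = np.asarray(background_roi, dtype=float)
        return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()

    # Synthetic regions of interest: a detail ~20 units brighter than the
    # background, both with Gaussian noise of sigma ~5, gives a CNR near 4.
    rng = np.random.default_rng(0)
    background = rng.normal(100.0, 5.0, size=10_000)
    signal = rng.normal(120.0, 5.0, size=10_000)
    print(round(contrast_to_noise_ratio(signal, background), 1))
    ```

    Image processing that alters local mean levels or noise texture changes both numerator and denominator, which is why CNR can differ between raw and processed data even when threshold contrast does not.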

  16. Microarray-based gene expression analysis as a process characterization tool to establish comparability of complex biological products: scale-up of a whole-cell immunotherapy product.

    Science.gov (United States)

    Wang, Min; Senger, Ryan S; Paredes, Carlos; Banik, Gautam G; Lin, Andy; Papoutsakis, Eleftherios T

    2009-11-01

    Whole-cell immunotherapies and other cellular therapies have shown promising results in clinical trials. Due to the complex nature of whole-cell products and the sometimes limited correlation of clinical potency with the proposed mechanism of action, these cellular immunotherapy products are generally not considered well characterized. Therefore, one major challenge in the product development of whole-cell therapies is the ability to demonstrate comparability of the product after changes in the manufacturing process. Such changes are nearly inevitable as manufacturing experience grows, leading to improved and robust processes that may have higher commercial feasibility. In order to comprehensively assess the impact of process changes on the final product, and thus establish comparability, a matrix of characterization assays (in addition to lot release assays) assessing the various aspects of the cellular product is required. In this study, we assessed the capability of DNA-microarray-based gene-expression analysis as a characterization tool using GVAX cancer immunotherapy cells manufactured by Cell Genesys, Inc. The GVAX immunotherapy product consists of two prostate cancer cell lines (CG1940 and CG8711) engineered to secrete human GM-CSF. To demonstrate the capability of the assay, we assessed the transcriptional changes in the product when produced in the presence or absence of fetal bovine serum, and under normal and hypoxic conditions, both changes intended to stress the cell lines. We then assessed the impact of an approximately 10-fold process scale-up on the final product at the transcriptional level. These data were used to develop comparisons and statistical analyses suitable for characterizing culture reproducibility and cellular product similarity. Use of gene-expression data for process characterization proved to be a reproducible and sensitive method for detecting differences due to small or large changes in culture conditions as might be…

  17. A Framework for Parameter Estimation and Model Selection from Experimental Data in Systems Biology Using Approximate Bayesian Computation

    Science.gov (United States)

    Liepe, Juliane; Kirk, Paul; Filippi, Sarah; Toni, Tina; Barnes, Chris P.; Stumpf, Michael P.H.

    2016-01-01

    As modeling becomes a more widespread practice in the life- and biomedical sciences, we require reliable tools to calibrate models against ever more complex and detailed data. Here we present an approximate Bayesian computation framework and software environment, ABC-SysBio, which enables parameter estimation and model selection in the Bayesian formalism using Sequential Monte-Carlo approaches. We outline the underlying rationale, discuss the computational and practical issues, and provide detailed guidance as to how the important tasks of parameter inference and model selection can be carried out in practice. Unlike other available packages, ABC-SysBio is highly suited for investigating in particular the challenging problem of fitting stochastic models to data. Although computationally expensive, the additional insights gained in the Bayesian formalism more than make up for this cost, especially in complex problems. PMID:24457334
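    The rejection step at the heart of approximate Bayesian computation, which ABC-SysBio wraps in a sequential Monte Carlo scheme, is easy to sketch. The toy model below (a normal mean with a flat prior) is purely illustrative and is not part of the package:

    ```python
    import random

    def abc_rejection(observed_summary, prior_sample, simulate, eps, n_accept):
        """Minimal ABC rejection sampler (the core idea that SMC-ABC refines):
        draw a parameter from the prior, simulate data under it, and keep the
        draw when the simulated summary lies within eps of the observed one."""
        accepted = []
        while len(accepted) < n_accept:
            theta = prior_sample()
            if abs(simulate(theta) - observed_summary) < eps:
                accepted.append(theta)
        return accepted

    random.seed(42)

    # Toy model: data are N(mu, 1); the summary statistic is the sample mean.
    n = 50
    true_mu = 2.0
    observed_mean = sum(random.gauss(true_mu, 1.0) for _ in range(n)) / n

    posterior = abc_rejection(
        observed_mean,
        prior_sample=lambda: random.uniform(-5.0, 5.0),       # flat prior on mu
        simulate=lambda mu: sum(random.gauss(mu, 1.0) for _ in range(n)) / n,
        eps=0.1,
        n_accept=200,
    )
    estimate = sum(posterior) / len(posterior)
    print(round(estimate, 2))
    ```

    With a tight eps the accepted draws concentrate around the true mean of 2.0; a sequential Monte Carlo scheme achieves the same effect far more efficiently by shrinking eps over a sequence of weighted intermediate populations, which matters when each simulation is an expensive stochastic model run.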

  18. The time-profile of cell growth in fission yeast: model selection criteria favoring bilinear models over exponential ones

    OpenAIRE

    Sveiczer Akos; Buchwald Peter

    2006-01-01

    Background: There is considerable controversy concerning the exact growth profile of size parameters during the cell cycle. Linear, exponential and bilinear models are commonly considered, and the same model may not apply to all species. Selection of the most adequate model to describe a given data set requires the use of quantitative model selection criteria, such as the partial (sequential) F-test, the Akaike information criterion and the Schwarz Bayesian information criterion, whi…
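    For least-squares fits with Gaussian errors, the Akaike criterion mentioned above reduces to AIC = n·ln(RSS/n) + 2k, so competing growth models can be ranked directly from their residual sums of squares. A self-contained sketch comparing a linear and an exponential fit on toy exponential data (illustrative only, not the paper's analysis):

    ```python
    import math

    def aic_least_squares(rss, n, k):
        """AIC for a least-squares fit with Gaussian errors: n*ln(RSS/n) + 2k."""
        return n * math.log(rss / n) + 2 * k

    def ols_slope_intercept(xs, ys):
        """Ordinary least squares for y = a + b*x; returns (a, b)."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
        return my - b * mx, b

    # Toy data: exponential growth with small deterministic "noise".
    ts = [0.5 * i for i in range(20)]
    ys = [2.0 * math.exp(0.3 * t) * (1.0 + 0.01 * math.sin(7.0 * i))
          for i, t in enumerate(ts)]
    n = len(ts)

    # Model 1: linear, y = a + b*t (k = 2 parameters).
    a_lin, b_lin = ols_slope_intercept(ts, ys)
    rss_lin = sum((y - (a_lin + b_lin * t)) ** 2 for t, y in zip(ts, ys))

    # Model 2: exponential, y = A*exp(B*t), fitted by OLS on log(y) (k = 2).
    log_a, b_exp = ols_slope_intercept(ts, [math.log(y) for y in ys])
    rss_exp = sum((y - math.exp(log_a + b_exp * t)) ** 2 for t, y in zip(ts, ys))

    aic_lin = aic_least_squares(rss_lin, n, 2)
    aic_exp = aic_least_squares(rss_exp, n, 2)
    # The generating (exponential) model wins: its AIC is markedly lower.
    print(aic_exp < aic_lin)  # True
    ```

    Distinguishing exponential from bilinear growth is harder than this toy case, since a bilinear curve with an extra parameter can mimic an exponential closely; that is exactly why the penalty terms in AIC and BIC matter.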

  19. Comparative Pharmacokinetic Profiles of Three Protoberberine-type Alkaloids from Raw and Bile-processed Rhizoma coptidis in Heat Syndrome Rats

    Science.gov (United States)

    Zi-min, Yuan; Yue, Chen; Hui, Gao; Jia, Lv; Gui-rong, Chen; Wang, Jing

    2017-01-01

    Background: Bile-processed Rhizoma coptidis (BRC), which has a colder drug property than raw Rhizoma coptidis (RC), is widely used for the treatment of heat syndrome. We compared the pharmacokinetics of the protoberberine-type alkaloids of BRC and RC in rats with heat syndrome to elucidate the mechanism of bile processing. Materials and Methods: We established a rapid and sensitive method for simultaneously determining three alkaloids, berberine, palmatine, and jatrorrhizine, in rat plasma based on ultra-performance liquid chromatography/tandem mass spectrometry. The separation was carried out on a Waters ACQUITY BEA C18 column. The mobile phase consisted of acetonitrile (containing 0.1% formic acid) and water (containing 0.1% formic acid and 10 mmol/L ammonium acetate), and carbamazepine was used as an internal standard. Detection was carried out in multiple reaction monitoring (MRM) mode using electrospray ionization in the positive ion mode. Results: The pharmacokinetic profiles indicated that the Cmax of berberine and palmatine increased two-fold and the Tmax of the three alkaloids decreased three-fold after bile processing. The AUC0→∞ and AUC0→t of the alkaloids were similar between RC and BRC. Conclusion: The results suggest that bile processing increases the absorption rate of the alkaloids. This study broadens our understanding of Chinese herbal medicine processing. Summary: The contents of berberine, palmatine and jatrorrhizine in the plasma of heat syndrome rats given raw or bile-processed Rhizoma coptidis were determined by UPLC-MS/MS. The overall pharmacokinetic profiles of the three alkaloids in BRC were similar to those of RC. A shorter Tmax and a two-fold increase in Cmax were obtained after bile processing. Bile processing could promote the absorption rate of the alkaloids to a certain degree. Abbreviations used: RC: Rhizoma coptidis; BRC: Bile-processed Rhizoma coptidis; HPLC: high-performance liquid chromatography
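    The quantities compared in this record, Cmax, Tmax, and AUC, fall directly out of a concentration-time profile; AUC0→t is conventionally computed by the trapezoidal rule. A sketch on a made-up profile (hypothetical numbers, not the study's data):

    ```python
    # Hypothetical plasma concentration-time profile (time in h, conc in ng/mL).
    times = [0.0, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0]
    concs = [0.0, 35.0, 80.0, 120.0, 95.0, 60.0, 25.0, 10.0]

    # Cmax is the peak observed concentration; Tmax is when it occurs.
    cmax = max(concs)
    tmax = times[concs.index(cmax)]

    # AUC(0->t) by the linear trapezoidal rule over each sampling interval.
    auc = sum((t2 - t1) * (c1 + c2) / 2.0
              for t1, t2, c1, c2 in zip(times, times[1:], concs, concs[1:]))

    print(cmax, tmax, round(auc, 2))  # 120.0 1.0 571.25
    ```

    The pattern the study reports, higher Cmax and shorter Tmax with unchanged AUC, corresponds to a faster absorption rate with the same overall extent of exposure.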

  20. Implementation of the Nutrition Care Process and International Dietetics and Nutrition Terminology in a single-center hemodialysis unit: comparing paper vs electronic records.

    Science.gov (United States)

    Rossi, Megan; Campbell, Katrina Louise; Ferguson, Maree

    2014-01-01

    There is little doubt about the benefits of the Nutrition Care Process and International Dietetics and Nutrition Terminology (IDNT) to dietetics practice; however, evidence to support the most efficient method of incorporating these into practice is lacking. The main objective of our study was to compare the efficiency and effectiveness of an electronic and a manual paper-based system for capturing the Nutrition Care Process and IDNT in a single in-center hemodialysis unit. A cohort of 56 adult patients receiving maintenance hemodialysis was followed for 12 months. During the first 6 months, patients received usual standard care, with documentation via a manual paper-based system. During the following 6-month period (months 7 to 12), nutrition care was documented in an electronic system. The main outcome measures were workload efficiency; the number of IDNT codes used for nutrition-related diagnoses, interventions, and monitoring and evaluation; nutritional status using the scored Patient-Generated Subjective Global Assessment tool; and quality of life. Compared with paper-based documentation of nutrition care, our study demonstrated that the electronic system improved efficiency, reducing the total time spent by the dietitian by 13 minutes per consultation. A greater number of nutrition-related diagnoses were also resolved using the electronic system compared with paper-based documentation.