WorldWideScience

Sample records for based model-free analysis

  1. Comparing model-based and model-free analysis methods for QUASAR arterial spin labeling perfusion quantification.

    Science.gov (United States)

    Chappell, Michael A; Woolrich, Mark W; Petersen, Esben T; Golay, Xavier; Payne, Stephen J

    2013-05-01

    Amongst the various implementations of arterial spin labeling MRI methods for quantifying cerebral perfusion, the QUASAR method is unique. By using a combination of labeling with and without flow suppression gradients, the QUASAR method offers the separation of macrovascular and tissue signals. This permits local arterial input functions to be defined and "model-free" analysis, using numerical deconvolution, to be used. However, it remains unclear whether arterial spin labeling data are best treated using model-free or model-based analysis. This work provides a critical comparison of these two approaches for QUASAR arterial spin labeling in the healthy brain. An existing two-component (arterial and tissue) model was extended to the mixed flow suppression scheme of QUASAR to provide an optimal model-based analysis. The model-based analysis was extended to incorporate dispersion of the labeled bolus, generally regarded as the major source of discrepancy between the two analysis approaches. Model-free and model-based analyses were compared for perfusion quantification including absolute measurements, uncertainty estimation, and spatial variation in cerebral blood flow estimates. Major sources of discrepancies between model-free and model-based analysis were attributed to the effects of dispersion and the degree to which the two methods can separate macrovascular and tissue signal. Copyright © 2012 Wiley Periodicals, Inc.

  2. Generalized free-space diffuse photon transport model based on the influence analysis of a camera lens diaphragm.

    Science.gov (United States)

    Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Xiaopeng; Liang, Jimin; Tian, Jie

    2010-10-10

    The camera lens diaphragm is an important component in a noncontact optical imaging system and has a crucial influence on the images registered on the CCD camera. However, this influence has not been taken into account in existing free-space photon transport models. To model the photon transport process more accurately, a generalized free-space photon transport model is proposed. It combines Lambertian source theory with an analysis of the influence of the camera lens diaphragm to simulate the photon transport process in free space. In addition, the radiance theorem is adopted to establish the energy relationship between the virtual detector and the CCD camera. The accuracy and feasibility of the proposed model are validated with a Monte-Carlo-based free-space photon transport model and a physical phantom experiment. A comparison study with our previous hybrid radiosity-radiance theorem based model demonstrates the improved performance and potential of the proposed model for simulating photon transport in free space.

  3. Model-free and model-based reward prediction errors in EEG.

    Science.gov (United States)

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.
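
    The contrast drawn in this abstract between cached (model-free) values and values planned over a learned world model can be made concrete with a toy simulation. The sketch below is purely illustrative (a made-up two-state environment, not the EEG task): it updates a model-free Q-table with a TD reward prediction error and, in parallel, learns a transition/reward model and derives model-based values by value iteration, so the two kinds of prediction error can be compared.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma, alpha = 3, 2, 0.9, 0.1

Q = np.zeros((n_states, n_actions))                       # model-free cached action values
T = np.ones((n_states, n_actions, n_states)) / n_states   # learned transition model
R = np.zeros(n_states)                                     # learned state rewards

def step(s, a):
    """Toy environment: action 1 usually leads to the rewarded state 2, action 0 usually to state 1."""
    s_next = 2 if (a == 1) == (rng.random() < 0.7) else 1
    return s_next, (1.0 if s_next == 2 else 0.0)

for episode in range(2000):
    s, a = 0, int(rng.integers(n_actions))
    s_next, r = step(s, a)

    # Model-free: the TD reward prediction error directly updates the cached value.
    delta_mf = r + gamma * Q[s_next].max() - Q[s, a]
    Q[s, a] += alpha * delta_mf

    # Model-based: update the world model, then plan (value iteration) over it.
    T[s, a] *= (1 - alpha); T[s, a, s_next] += alpha
    R[s_next] += alpha * (r - R[s_next])
    V = np.zeros(n_states)
    for _ in range(20):
        Q_mb = np.einsum('ijk,k->ij', T, R + gamma * V)
        V = Q_mb.max(axis=1)
    delta_mb = r + gamma * Q_mb[s_next].max() - Q_mb[s, a]

print("model-free Q(s=0):", Q[0].round(2), "  model-based Q(s=0):", Q_mb[0].round(2))
print("last prediction errors  model-free:", round(delta_mf, 3),
      "  model-based:", round(delta_mb, 3))
```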

  4. Rapid acquisition and model-based analysis of cell-free transcription–translation reactions from nonmodel bacteria

    Science.gov (United States)

    Wienecke, Sarah; Ishwarbhai, Alka; Tsipa, Argyro; Aw, Rochelle; Kylilis, Nicolas; Bell, David J.; McClymont, David W.; Jensen, Kirsten; Biedendieck, Rebekka

    2018-01-01

    Native cell-free transcription–translation systems offer a rapid route to characterize the regulatory elements (promoters, transcription factors) for gene expression from nonmodel microbial hosts, which can be difficult to assess through traditional in vivo approaches. One such host, Bacillus megaterium, is a giant Gram-positive bacterium with potential biotechnology applications, although many of its regulatory elements remain uncharacterized. Here, we have developed a rapid automated platform for measuring and modeling in vitro cell-free reactions and have applied this to B. megaterium to quantify a range of ribosome binding site variants and previously uncharacterized endogenous constitutive and inducible promoters. To provide quantitative models for cell-free systems, we have also applied a Bayesian approach to infer ordinary differential equation model parameters by simultaneously using time-course data from multiple experimental conditions. Using this modeling framework, we were able to infer previously unknown transcription factor binding affinities and quantify the sharing of cell-free transcription–translation resources (energy, ribosomes, RNA polymerases, nucleotides, and amino acids) using a promoter competition experiment. This provides insight into resource-limiting factors in batch-mode cell-free synthesis. Our combined automated and modeling platform allows for the rapid acquisition and model-based analysis of cell-free transcription–translation data from uncharacterized microbial cell hosts, as well as resource competition within cell-free systems, which potentially can be applied to a range of cell-free synthetic biology and biotechnology applications. PMID:29666238
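
    The Bayesian inference of ODE parameters from time-course data described here can be illustrated with a minimal sketch. The toy transcription-translation model, synthetic data, rate constants and random-walk Metropolis settings below are all assumptions for illustration; they do not reproduce the paper's model or priors.

```python
import numpy as np
from scipy.integrate import odeint

def txtl(y, t, k_tx, k_tl, d_m):
    """Toy TX-TL model: mRNA m is transcribed and degraded, protein p is translated."""
    m, p = y
    return [k_tx - d_m * m, k_tl * m]

t = np.linspace(0, 6, 25)
true = (2.0, 1.5, 0.8)                       # k_tx, k_tl, d_m used to make synthetic data
rng = np.random.default_rng(1)
data = odeint(txtl, [0.0, 0.0], t, args=true) + rng.normal(0, 0.5, (25, 2))

def log_post(theta, sigma=0.5):
    if np.any(theta <= 0):                   # positivity prior
        return -np.inf
    sim = odeint(txtl, [0.0, 0.0], t, args=tuple(theta))
    return -0.5 * np.sum((sim - data) ** 2) / sigma ** 2

theta = np.array([1.0, 1.0, 1.0])
lp = log_post(theta)
chain = []
for _ in range(6000):
    prop = theta + rng.normal(0, 0.05, 3)    # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta)

print("posterior mean:", np.mean(chain[3000:], axis=0).round(2), "  true:", true)
```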

  5. Equation-free analysis of agent-based models and systematic parameter determination

    Science.gov (United States)

    Thomas, Spencer A.; Lloyd, David J. B.; Skeldon, Anne C.

    2016-12-01

    Agent-based models (ABMs) are increasingly used in social science, economics, mathematics, biology and computer science to describe time-dependent systems in circumstances where a description in terms of equations is difficult. Yet few tools are currently available for the systematic analysis of ABM behaviour. Numerical continuation and bifurcation analysis is a well-established tool for the study of deterministic systems. Recently, equation-free (EF) methods have been developed to extend numerical continuation techniques to systems where the dynamics are described at a microscopic scale and continuation of a macroscopic property of the system is considered. To date, the practical use of EF methods has been limited by: (1) the overhead of application-specific implementation; (2) the laborious configuration of problem-specific parameters; and (3) large ensemble sizes (potentially) leading to computationally restrictive run-times. In this paper we address these issues with our tool for the EF continuation of stochastic systems, which includes algorithms to systematically configure problem-specific parameters and enhance robustness to noise. Our tool is generic and can be applied to any 'black-box' simulator and determines the essential EF parameters prior to EF analysis. Robustness is significantly improved using our convergence-constraint with a corrector-repeat (C3R) method. This algorithm automatically detects outliers based on the dynamics of the underlying system, enabling both an order of magnitude reduction in ensemble size and continuation of systems at much higher levels of noise than classical approaches. We demonstrate our method with application to several ABMs, revealing parameter dependence, bifurcation and stability analysis of these complex systems and giving a deep understanding of the dynamical behaviour of the models in a way that is not otherwise easily obtainable. In each case we demonstrate our systematic parameter determination stage for
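
    The lift-run-restrict construction at the heart of equation-free analysis can be sketched on a toy stochastic micro-model (this is not the paper's ABMs, nor its C3R algorithm; the micro rule, ensemble size and secant-solver settings are assumptions). A coarse time-stepper is built from short bursts of microscopic simulation, and a root-finder applied to it locates a macroscopic equilibrium without an explicit macroscopic equation.

```python
import numpy as np

rng = np.random.default_rng(0)
mu_true, dt, n_micro_steps, ensemble = 0.7, 0.01, 50, 2000

def micro_step(x):
    """Microscopic rule: each 'agent' relaxes noisily toward an unknown equilibrium."""
    return x + dt * (mu_true - x) + np.sqrt(dt) * 0.2 * rng.normal(size=x.shape)

def coarse_time_stepper(M):
    x = M + 0.05 * rng.normal(size=ensemble)   # lift: micro ensemble consistent with macro state M
    for _ in range(n_micro_steps):             # run: a short burst of microscopic simulation
        x = micro_step(x)
    return x.mean()                            # restrict: back to the macroscopic variable

# Secant iteration on F(M) = Phi(M) - M locates the macroscopic equilibrium
# without ever writing down a macroscopic evolution equation.
M0, M1 = 0.0, 0.2
F0 = coarse_time_stepper(M0) - M0
for _ in range(12):
    F1 = coarse_time_stepper(M1) - M1
    if abs(F1) < 5e-3 or abs(F1 - F0) < 1e-8:  # stop once noise dominates the residual
        break
    M0, M1, F0 = M1, M1 - F1 * (M1 - M0) / (F1 - F0), F1

print(f"macroscopic fixed point ~ {M1:.3f}  (microscopic equilibrium is {mu_true})")
```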

  6. Model-based and model-free Pavlovian reward learning: revaluation, revision, and revelation.

    Science.gov (United States)

    Dayan, Peter; Berridge, Kent C

    2014-06-01

    Evidence supports at least two methods for learning about reward and punishment and making predictions for guiding actions. One method, called model-free, progressively acquires cached estimates of the long-run values of circumstances and actions from retrospective experience. The other method, called model-based, uses representations of the environment, expectations, and prospective calculations to make cognitive predictions of future value. Extensive attention has been paid to both methods in computational analyses of instrumental learning. By contrast, although a full computational analysis has been lacking, Pavlovian learning and prediction has typically been presumed to be solely model-free. Here, we revise that presumption and review compelling evidence from Pavlovian revaluation experiments showing that Pavlovian predictions can involve their own form of model-based evaluation. In model-based Pavlovian evaluation, prevailing states of the body and brain influence value computations, and thereby produce powerful incentive motivations that can sometimes be quite new. We consider the consequences of this revised Pavlovian view for the computational landscape of prediction, response, and choice. We also revisit differences between Pavlovian and instrumental learning in the control of incentive motivation.

  7. Beyond the scope of Free-Wilson analysis: building interpretable QSAR models with machine learning algorithms.

    Science.gov (United States)

    Chen, Hongming; Carlsson, Lars; Eriksson, Mats; Varkonyi, Peter; Norinder, Ulf; Nilsson, Ingemar

    2013-06-24

    A novel methodology was developed to build Free-Wilson-like local QSAR models by combining R-group signatures and the SVM algorithm. Unlike Free-Wilson analysis, this method is able to make predictions for compounds with R-groups not present in the training set. Eleven public data sets were chosen as test cases for comparing the performance of our new method with several other traditional modeling strategies, including Free-Wilson analysis. Our results show that the R-group signature SVM models achieve better prediction accuracy compared with Free-Wilson analysis in general. Moreover, the predictions of R-group signature models are also comparable to those of models using ECFP6 fingerprints and signatures for the whole compound. Most importantly, R-group contributions to the SVM model can be obtained by calculating the gradient for R-group signatures. For most of the studied data sets, these contributions show a significant correlation with those from a corresponding Free-Wilson analysis. These results suggest that the R-group contribution can be used to interpret bioactivity data and highlight that the R-group signature based SVM modeling method is as interpretable as Free-Wilson analysis. Hence, the signature SVM model can be a useful modeling tool for any drug discovery project.
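
    For readers unfamiliar with the baseline method, the sketch below shows classical Free-Wilson analysis in its simplest form: activity is regressed on R-group indicator variables so each substituent receives an additive contribution. The compound series and pIC50 values are hypothetical, and the sketch does not implement the paper's R-group signature or SVM machinery.

```python
import numpy as np

# Hypothetical congeneric series: (R1, R2, measured pIC50) -- values are made up.
compounds = [("H", "Me", 5.1), ("Cl", "Me", 6.0), ("H", "OMe", 5.6),
             ("Cl", "OMe", 6.6), ("Br", "Me", 6.2), ("Br", "OMe", 6.7)]

r1_levels = ["Cl", "Br"]      # contributions expressed relative to R1 = H
r2_levels = ["OMe"]           # contributions expressed relative to R2 = Me

def encode(r1, r2):
    """One row of the Free-Wilson design matrix: intercept plus R-group indicators."""
    return ([1.0] + [float(r1 == lvl) for lvl in r1_levels]
                  + [float(r2 == lvl) for lvl in r2_levels])

X = np.array([encode(r1, r2) for r1, r2, _ in compounds])
y = np.array([act for _, _, act in compounds])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # additive R-group contributions

names = (["baseline (R1=H, R2=Me)"] + [f"R1={l}" for l in r1_levels]
                                     + [f"R2={l}" for l in r2_levels])
for name, c in zip(names, coef):
    print(f"{name:24s} {c:+.2f}")
# Unlike this classical scheme, the R-group-signature SVM described in the paper
# can also score substituents never seen in the training set.
```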

  8. Tuning SISO offset-free Model Predictive Control based on ARX models

    DEFF Research Database (Denmark)

    Huusom, Jakob Kjøbsted; Poulsen, Niels Kjølstad; Jørgensen, Sten Bay

    2012-01-01

    In this paper, we present a tuning methodology for a simple offset-free SISO Model Predictive Controller (MPC) based on autoregressive models with exogenous inputs (ARX models). ARX models simplify system identification as they can be identified from data using convex optimization. Furthermore, the proposed controller is simple to tune as it has only one free tuning parameter. These two features are advantageous in predictive process control as they simplify industrial commissioning of MPC. Disturbance rejection and offset-free control is important in industrial process control. To achieve offset-free control in the face of unknown disturbances or model-plant mismatch, integrators must be introduced in either the estimator or the regulator. Traditionally, offset-free control is achieved using Brownian disturbance models in the estimator. In this paper we achieve offset-free control by extending the noise...

  9. Structural analysis of gluten-free doughs by fractional rheological model

    Science.gov (United States)

    Orczykowska, Magdalena; Dziubiński, Marek; Owczarz, Piotr

    2015-02-01

    This study examines the effects of various components of tested gluten-free doughs, such as corn starch, amaranth flour, pea protein isolate, and cellulose in the form of plantain fibers, on the rheological properties of such doughs. The rheological properties of gluten-free doughs were assessed using the fractional standard linear solid model (FSLSM). Parameter analysis of the Maxwell-Wiechert fractional derivative rheological model shows that gluten-free doughs present a typical behavior of viscoelastic quasi-solid bodies. We obtained the dependence of the contribution of each component used in the preparation of the gluten-free doughs (either hard-gel or soft-gel structure). A detailed analysis of the mechanical structure of gluten-free dough was carried out by applying the FSLSM, which explains quite precisely the effects of the individual ingredients of the dough on its rheological properties.

  10. Variability in Dopamine Genes Dissociates Model-Based and Model-Free Reinforcement Learning.

    Science.gov (United States)

    Doll, Bradley B; Bath, Kevin G; Daw, Nathaniel D; Frank, Michael J

    2016-01-27

    Considerable evidence suggests that multiple learning systems can drive behavior. Choice can proceed reflexively from previous actions and their associated outcomes, as captured by "model-free" learning algorithms, or flexibly from prospective consideration of outcomes that might occur, as captured by "model-based" learning algorithms. However, differential contributions of dopamine to these systems are poorly understood. Dopamine is widely thought to support model-free learning by modulating plasticity in striatum. Model-based learning may also be affected by these striatal effects, or by other dopaminergic effects elsewhere, notably on prefrontal working memory function. Indeed, prominent demonstrations linking striatal dopamine to putatively model-free learning did not rule out model-based effects, whereas other studies have reported dopaminergic modulation of verifiably model-based learning, but without distinguishing a prefrontal versus striatal locus. To clarify the relationships between dopamine, neural systems, and learning strategies, we combine a genetic association approach in humans with two well-studied reinforcement learning tasks: one isolating model-based from model-free behavior and the other sensitive to key aspects of striatal plasticity. Prefrontal function was indexed by a polymorphism in the COMT gene, differences of which reflect dopamine levels in the prefrontal cortex. This polymorphism has been associated with differences in prefrontal activity and working memory. Striatal function was indexed by a gene coding for DARPP-32, which is densely expressed in the striatum where it is necessary for synaptic plasticity. We found evidence for our hypothesis that variations in prefrontal dopamine relate to model-based learning, whereas variations in striatal dopamine function relate to model-free learning. Decisions can stem reflexively from their previously associated outcomes or flexibly from deliberative consideration of potential choice outcomes

  11. The "proactive" model of learning: Integrative framework for model-free and model-based reinforcement learning utilizing the associative learning-based proactive brain concept.

    Science.gov (United States)

    Zsuga, Judit; Biro, Klara; Papp, Csaba; Tajti, Gabor; Gesztelyi, Rudolf

    2016-02-01

    Reinforcement learning (RL) is a powerful concept underlying forms of associative learning governed by the use of a scalar reward signal, with learning taking place if expectations are violated. RL may be assessed using model-based and model-free approaches. Model-based reinforcement learning involves the amygdala, the hippocampus, and the orbitofrontal cortex (OFC). The model-free system involves the pedunculopontine-tegmental nucleus (PPTgN), the ventral tegmental area (VTA) and the ventral striatum (VS). Based on the functional connectivity of the VS, both model-free and model-based RL systems center on the VS, which computes value by integrating model-free signals (received as reward prediction errors) with model-based reward-related input. Using the concept of a reinforcement learning agent, we propose that the VS serves as the value function component of the RL agent. Regarding the model utilized for model-based computations, we turned to the proactive brain concept, which offers a ubiquitous function for the default network based on its great functional overlap with contextual associative areas. Hence, by means of the default network the brain continuously organizes its environment into context frames, enabling the formulation of analogy-based associations that are turned into predictions of what to expect. The OFC integrates reward-related information into context frames upon computing reward expectation by compiling stimulus-reward and context-reward information offered by the amygdala and hippocampus, respectively. Furthermore, we suggest that the integration of model-based expectations regarding reward into the value signal is further supported by efferents of the OFC that reach structures canonical for model-free learning (e.g., the PPTgN, VTA, and VS). (c) 2016 APA, all rights reserved.

  12. Performance analysis on free-piston Stirling cryocooler based on an idealized mathematical model

    Science.gov (United States)

    Guo, Y. X.; Chao, Y. J.; Gan, Z. H.; Li, S. Z.; Wang, B.

    2017-12-01

    Free-piston Stirling cryocoolers have extensive applications owing to their structural simplicity and reduced mass. However, the elimination of the motor and the crankshaft has made their thermodynamic characteristics different from those of Stirling cryocoolers with a displacer driving mechanism. Therefore, an idealized mathematical model has been established, and with this model an attempt has been made to analyse the thermodynamic characteristics and the performance of the free-piston Stirling cryocooler. To verify this mathematical model, a comparison has been made between the model and a numerical model. This study reveals that, due to the displacer damping force necessary for the production of cooling capacity, the free-piston Stirling cryocooler is inherently less efficient than a Stirling cryocooler with a displacer driving mechanism. Viscous flow resistance and incomplete heat transfer in the regenerator are the two major causes of the discrepancy between the results of the idealized mathematical model and the numerical model.

  13. Model-based and model-free “plug-and-play” building energy efficient control

    International Nuclear Information System (INIS)

    Baldi, Simone; Michailidis, Iakovos; Ravanis, Christos; Kosmatopoulos, Elias B.

    2015-01-01

    Highlights: • “Plug-and-play” Building Optimization and Control (BOC) driven by building data. • Ability to handle the large-scale and complex nature of the BOC problem. • Adaptation to learn the optimal BOC policy when no building model is available. • Comparisons with rule-based and advanced BOC strategies. • Simulation and real-life experiments in a ten-office building. - Abstract: Considerable research efforts in Building Optimization and Control (BOC) have been directed toward the development of “plug-and-play” BOC systems that can achieve energy efficiency without compromising thermal comfort and without the need of qualified personnel engaged in a tedious and time-consuming manual fine-tuning phase. In this paper, we report on how a recently introduced Parametrized Cognitive Adaptive Optimization – abbreviated as PCAO – can be used toward the design of both model-based and model-free “plug-and-play” BOC systems, with minimum human effort required to accomplish the design. In the model-based case, PCAO assesses the performance of its control strategy via a simulation model of the building dynamics; in the model-free case, PCAO optimizes its control strategy without relying on any model of the building dynamics. Extensive simulation and real-life experiments performed on a 10-office building demonstrate the effectiveness of the PCAO–BOC system in providing significant energy efficiency and improved thermal comfort. The mechanisms embedded within PCAO render it capable of automatically and quickly learning an efficient BOC strategy either in the presence of complex nonlinear simulation models of the building dynamics (model-based) or when no model for the building dynamics is available (model-free). Comparative studies with alternative state-of-the-art BOC systems show the effectiveness of the PCAO–BOC solution

  14. Free convection film flows and heat transfer laminar free convection of phase flows and models for heat-transfer analysis

    CERN Document Server

    Shang, De-Yi

    2012-01-01

    This book presents recent developments in our systematic studies of hydrodynamics and heat and mass transfer in laminar free convection, accelerating film boiling and condensation of Newtonian fluids, as well as accelerating film flow of non-Newtonian power-law fluids (FFNF). The new developments provided in this book are (i) a novel system of analysis models based on the developed New Similarity Analysis Method; (ii) a system of advanced methods for the treatment of gas temperature-dependent physical properties and liquid temperature-dependent physical properties; (iii) the governing mathematical models organically combined with those for the treatment of variable physical properties; (iv) a rigorous approach to overcoming the challenge of accurately solving the three-point boundary value problem related to two-phase film boiling and condensation; and (v) a pseudo-similarity method of dealing with the thermal boundary layer of FFNF, which greatly simplifies the heat-transfer analysis and numerical calculati...

  15. Free-energy analysis of spin models on hyperbolic lattice geometries.

    Science.gov (United States)

    Serina, Marcel; Genzor, Jozef; Lee, Yoju; Gendiar, Andrej

    2016-04-01

    We investigate relations between spatial properties of the free energy and the radius of Gaussian curvature of the underlying curved lattice geometries. For this purpose we derive recurrence relations for the analysis of the free energy normalized per lattice site of various multistate spin models in thermal equilibrium on distinct non-Euclidean surface lattices of infinite size. Whereas the free energy is calculated numerically by means of the corner transfer matrix renormalization group algorithm, the radius of curvature has an analytic expression. Two tasks are considered in this work. First, we search for a lattice geometry which minimizes the free energy per site. We conjecture that only the Euclidean flat geometry results in the minimal free energy per site, regardless of the spin model. Second, the relations among the free energy, the radius of curvature, and the phase transition temperatures are analyzed. We find that both the free energy and the phase transition temperature inherit the structure of the lattice geometry and asymptotically approach the profile of the Gaussian radius of curvature. This achievement opens new perspectives in the AdS-CFT correspondence theories.

  16. Implicit methods for equation-free analysis: convergence results and analysis of emergent waves in microscopic traffic models

    DEFF Research Database (Denmark)

    Marschler, Christian; Sieber, Jan; Berkemer, Rainer

    2014-01-01

    We introduce a general formulation for an implicit equation-free method in the setting of slow-fast systems. First, we give a rigorous convergence result for equation-free analysis showing that the implicitly defined coarse-level time stepper converges to the true dynamics on the slow manifold...... against the direction of traffic. Equation-free analysis enables us to investigate the behavior of the microscopic traffic model on a macroscopic level. The standard deviation of cars' headways is chosen as the macroscopic measure of the underlying dynamics such that traveling wave solutions correspond...... to equilibria on the macroscopic level in the equation-free setup. The collapse of the traffic jam to the free flow then corresponds to a saddle-node bifurcation of this macroscopic equilibrium. We continue this bifurcation in two parameters using equation-free analysis....

  17. Model-free prediction and regression: a transformation-based approach to inference

    CERN Document Server

    Politis, Dimitris N

    2015-01-01

    The Model-Free Prediction Principle expounded upon in this monograph is based on the simple notion of transforming a complex dataset to one that is easier to work with, e.g., i.i.d. or Gaussian. As such, it restores the emphasis on observable quantities, i.e., current and future data, as opposed to unobservable model parameters and estimates thereof, and yields optimal predictors in diverse settings such as regression and time series. Furthermore, the Model-Free Bootstrap takes us beyond point prediction in order to construct frequentist prediction intervals without resort to unrealistic assumptions such as normality. Prediction has been traditionally approached via a model-based paradigm, i.e., (a) fit a model to the data at hand, and (b) use the fitted model to extrapolate/predict future data. Due to both mathematical and computational constraints, 20th century statistical practice focused mostly on parametric models. Fortunately, with the advent of widely accessible powerful computing in the late 1970s, co...

  18. Comparing Free-Free and Shaker Table Model Correlation Methods Using Jim Beam

    Science.gov (United States)

    Ristow, James; Smith, Kenneth Wayne, Jr.; Johnson, Nathaniel; Kinney, Jackson

    2018-01-01

    Finite element model correlation as part of a spacecraft program has always been a challenge. For any NASA mission, the coupled system response of the spacecraft and launch vehicle can be determined analytically through a Coupled Loads Analysis (CLA), as it is not possible to test the spacecraft and launch vehicle coupled system before launch. The value of the CLA is highly dependent on the accuracy of the frequencies and mode shapes extracted from the spacecraft model. NASA standards require the spacecraft model used in the final Verification Loads Cycle to be correlated by either a modal test or by comparison of the model with Frequency Response Functions (FRFs) obtained during the environmental qualification test. Due to budgetary and time constraints, most programs opt to correlate the spacecraft dynamic model during the environmental qualification test, conducted on a large shaker table. For any model correlation effort, the key has always been finding a proper definition of the boundary conditions. This paper is a correlation case study to investigate the difference in responses of a simple structure using a free-free boundary, a fixed boundary on the shaker table, and a base-drive vibration test, all using identical instrumentation. The NAVCON Jim Beam test structure, featured in the IMAC round robin modal test of 2009, was selected as a simple, well recognized and well characterized structure to conduct this investigation. First, a free-free impact modal test of the Jim Beam was done as an experimental control. Second, the Jim Beam was mounted to a large 20,000 lbf shaker, and an impact modal test in this fixed configuration was conducted. Lastly, a vibration test of the Jim Beam was conducted on the shaker table. The free-free impact test, the fixed impact test, and the base-drive test were used to assess the effect of the shaker modes, evaluate the validity of fixed-base modeling assumptions, and compare final model correlation results between these

  19. Tuning of methods for offset free MPC based on ARX model representations

    DEFF Research Database (Denmark)

    Huusom, Jakob Kjøbsted; Poulsen, Niels Kjølstad; Jørgensen, Sten Bay

    2010-01-01

    In this paper we investigate model predictive control (MPC) based on ARX models. ARX models can be identified from data using convex optimization technologies and are linear in the system parameters. Compared to other model parameterizations, this feature is an advantage in embedded applications...... for robust and automatic system identification. Standard MPC is not able to reject a sustained, unmeasured, non-zero-mean disturbance and will therefore not provide offset free tracking. Offset free tracking can be guaranteed for this type of disturbance if Δ variables are used or if the state space...... is extended with a disturbance model state. The relation between the base case and the two extended methods is illustrated, which provides good understanding and a platform for discussing tuning for good closed loop performance....
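
    The disturbance-model route to offset-free control mentioned above can be demonstrated in a deliberately minimal setting. The sketch below (assumed first-order ARX model, one-step prediction horizon, no constraints; not the paper's formulation) estimates an input-output disturbance from the model's prediction error and uses it in the controller, which removes the steady-state offset caused by model-plant mismatch and an unmeasured load step.

```python
import numpy as np

a, b = 0.9, 0.5              # controller's ARX model: y(k+1) = a*y(k) + b*u(k)
a_true, b_true = 0.92, 0.45  # the real plant deliberately differs from the model
r = 1.0                      # setpoint
y = u = d_hat = 0.0
y_prev = u_prev = 0.0

for k in range(60):
    d_load = 0.3 if k >= 30 else 0.0                 # unmeasured step disturbance
    # Disturbance estimate: filtered one-step prediction error of the nominal model.
    innovation = y - (a * y_prev + b * u_prev)
    d_hat += 0.5 * (innovation - d_hat)
    # One-step-ahead "MPC": choose u so the model prediction plus d_hat hits r.
    u = (r - a * y - d_hat) / b
    # True plant update.
    y_prev, u_prev = y, u
    y = a_true * y + b_true * u + d_load
    if k in (29, 59):
        print(f"k={k:2d}  y={y:.3f}  d_hat={d_hat:.3f}")   # y returns to r in both regimes
```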

  20. Binding free energy analysis of protein-protein docking model structures by evERdock.

    Science.gov (United States)

    Takemura, Kazuhiro; Matubayasi, Nobuyuki; Kitao, Akio

    2018-03-14

    To aid the evaluation of protein-protein complex model structures generated by protein docking prediction (decoys), we previously developed a method to calculate the binding free energies for complexes. The method combines a short (2 ns) all-atom molecular dynamics simulation with explicit solvent and solution theory in the energy representation (ER). We showed that this method successfully selected structures similar to the native complex structure (near-native decoys) as the lowest binding free energy structures. In our current work, we applied this method (evERdock) to 100 or 300 model structures of four protein-protein complexes. The crystal structures and the near-native decoys showed the lowest binding free energy of all the examined structures, indicating that evERdock can successfully evaluate decoys. Several decoys that show low interface root-mean-square distance but relatively high binding free energy were also identified. Analysis of the fraction of native contacts, hydrogen bonds, and salt bridges at the protein-protein interface indicated that these decoys were insufficiently optimized at the interface. After optimizing the interactions around the interface by including interfacial water molecules, the binding free energies of these decoys were improved. We also investigated the effect of solute entropy on binding free energy and found that consideration of the entropy term does not necessarily improve the evaluations of decoys when normal mode analysis is used for the entropy calculation.

  1. Incorporating free-floating car-sharing into an activity-based dynamic user equilibrium model : a demand-side model

    NARCIS (Netherlands)

    Li, Q.; Liao, F.; Timmermans, H.J.P.; Huang, H-J; Zhou, J.

    2018-01-01

    Free-floating car-sharing (FFC) has recently received increasing attention due to the flexibility in mobility services. Existing studies related to FFC mainly focus on the analysis of operational management and user preferences. Efforts to model the dynamic choices of free-floating shared cars (SCs)

  2. Model-Free Coordinated Control for MHTGR-Based Nuclear Steam Supply Systems

    Directory of Open Access Journals (Sweden)

    Zhe Dong

    2016-01-01

    The modular high temperature gas-cooled reactor (MHTGR) is a typical small modular reactor (SMR) that offers a simpler, standardized and safer modular design by being factory built, requiring smaller initial capital investment, and having a shorter construction period. Thanks to their small size, MHTGRs could be beneficial in providing electric power to remote areas that are deficient in transmission or distribution and in generating local power for large population centers. Based on the multi-modular operation scheme, the inherent safety feature of the MHTGRs can be applicable to large nuclear plants of any desired power rating. The MHTGR-based nuclear steam supply system (NSSS) is constituted by an MHTGR, a side-by-side arranged helical-coil once-through steam generator (OTSG) and some connecting pipes. Due to the side-by-side arrangement, there is a tight coupling effect between the MHTGR and OTSG. Moreover, there always exists parameter perturbation of the NSSSs. Thus, it is meaningful to study the model-free coordinated control of MHTGR-based NSSSs for safe, stable, robust and efficient operation. In this paper, a new model-free coordinated control strategy that regulates the nuclear power, MHTGR outlet helium temperature and OTSG outlet overheated steam temperature by properly adjusting the control rod position, helium flowrate and feed-water flowrate is established for the MHTGR-based NSSSs. Sufficient conditions for globally asymptotic closed-loop stability are given. Finally, numerical simulation results in the cases of large-range power decrease and increase illustrate the satisfactory performance of this newly-developed model-free coordinated NSSS control law.

  3. Evaluation-Function-based Model-free Adaptive Fuzzy Control

    Directory of Open Access Journals (Sweden)

    Agus Naba

    2016-12-01

    Designs of adaptive fuzzy controllers (AFC) are commonly based on the Lyapunov approach, which requires a known model of the controlled plant. They also need to consider a Lyapunov function candidate as an evaluation function to be minimized. In this study these drawbacks were handled by designing a model-free adaptive fuzzy controller (MFAFC) using an approximate evaluation function defined in terms of the current state, the next state, and the control action. MFAFC considers the approximate evaluation function as an evaluative control performance measure similar to the state-action value function in reinforcement learning. The simulation results of applying MFAFC to the inverted pendulum benchmark verified the proposed scheme's efficacy.

  4. Thermodynamic Model and Experimental Study of Oil-free Scroll Compressor

    Science.gov (United States)

    Peng, Bin; Zhao, Shengxian; Li, Yaohong

    2017-10-01

    In order to study the performance characteristics of oil-free scroll compressors, this paper starts from the basic equation of the circle involute profile and uses differential geometry theory to calculate the variation of pressure with volume. Based on the basic laws of thermodynamics, a thermodynamic model of the oil-free scroll compressor is established that incorporates a heat transfer model and a gas leakage model, together with the mass and energy conservation equations and the gas state equation. The variation of the gas mass flow rate in each chamber is obtained by solving the established model with the improved Euler method. The experimental results show that with increasing frequency, the temperature, the displacement and the power show a clear upward trend. The thermodynamic model provides guidance and a reference for the development and performance analysis of oil-free scroll compressors.
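
    The abstract mentions solving the chamber equations with the improved Euler method. The sketch below shows that integrator on a stand-in problem: an adiabatic pressure rise in a shrinking chamber with an assumed, artificial volume law (not the paper's scroll geometry or its leakage and heat-transfer models), so the result can be checked against the analytic adiabatic relation.

```python
import numpy as np

gamma = 1.4                                   # ratio of specific heats for air
V = lambda t: 1.0e-4 * (1.0 - 0.8 * t)        # chamber volume [m^3], shrinking over one event
dVdt = lambda t: -0.8e-4                      # constant rate of volume change [m^3/s]

def dpdt(t, p):
    # Adiabatic closed chamber: p * V^gamma = const  ->  dp/dt = -gamma * p * (dV/dt) / V
    return -gamma * p * dVdt(t) / V(t)

t, p, h = 0.0, 1.0e5, 1.0e-3                  # start at 1 bar, 1 ms time step
for _ in range(1000):                         # 1000 steps cover t in [0, 1] s
    k1 = dpdt(t, p)                           # predictor slope (explicit Euler)
    k2 = dpdt(t + h, p + h * k1)              # corrector slope at the predicted point
    p += 0.5 * h * (k1 + k2)                  # improved Euler (Heun) update
    t += h

print(f"pressure after compression: {p/1e5:.2f} bar "
      f"(analytic: {(V(0.0)/V(1.0))**gamma:.2f} bar)")
```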

  5. Mathematical Modeling of Tin-Free Chemically-Active Antifouling Paint Behavior

    DEFF Research Database (Denmark)

    Yebra, Diego Meseguer; Kiil, Søren; Dam-Johansen, Kim

    2006-01-01

    Mathematical modeling has been used to characterize and validate the working mechanisms of tin-free, chemically-active antifouling (AF) paints. The model-based analysis of performance data from lab-scale rotary experiments has shown significant differences between antifouling technologies...... of Chemical Engineers....

  6. Distribution of free radical products among the bases of x-irradiated DNA model systems: an ESR study

    International Nuclear Information System (INIS)

    Spalletta, R.A.

    1984-01-01

    Exposure of solid state DNA to ionizing radiation results in an ESR spectrum that has been attributed to a nonstoichiometric distribution of free radicals among the bases. At low temperatures radical cations appear to be stabilized on the purines while radical anions are stabilized on the pyrimidines. This distribution could arise from at least two different mechanisms. The first, charge transfer, involves the transfer of electrons and/or holes between stacked bases. In the second, saturation asymmetry, the free radical distribution arises from differences in the dose saturation characteristics of individual bases. The present study addresses the relative importance of charge transfer versus saturation asymmetry in the production of these population differences. Radicals formed by dissolving irradiated polycrystalline pyrimidines in aqueous solutions containing NtB or PBN spin traps were analyzed using ESR. The relative importance of the two free radical production and distribution mechanisms was assessed using DNA model systems. Saturation asymmetry plays a significant role in determining the free radical population while charge transfer was unambiguously observed in only one, the complex of dAMP and TMP. The results demonstrate that any quantitative analysis of charge transfer must take saturation asymmetry into account

  7. Monte Carlo based statistical power analysis for mediation models: methods and software.

    Science.gov (United States)

    Zhang, Zhiyong

    2014-12-01

    The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
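
    A bootstrap-based Monte Carlo power analysis of the kind described can be sketched in a few lines. The example below is illustrative Python, not the bmem R package: it assumes a simple X -> M -> Y mediation model with made-up effect sizes, tests the indirect effect with a percentile bootstrap confidence interval in each simulated data set, and reports the rejection rate as the power estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c_prime, n = 0.39, 0.39, 0.0, 100       # assumed population effects and sample size

def indirect_effect(x, m, y):
    a_hat = np.polyfit(x, m, 1)[0]            # slope of M on X
    b_hat = np.linalg.lstsq(np.c_[x, m, np.ones(len(x))], y, rcond=None)[0][1]  # M slope in Y ~ X + M
    return a_hat * b_hat

def one_replication(n_boot=200):
    x = rng.normal(size=n)
    m = a * x + rng.normal(size=n)
    y = c_prime * x + b * m + rng.normal(size=n)
    boots = []
    for _ in range(n_boot):                   # nonparametric bootstrap of the indirect effect
        idx = rng.integers(0, n, n)
        boots.append(indirect_effect(x[idx], m[idx], y[idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return not (lo <= 0.0 <= hi)              # reject H0: indirect effect = 0 if CI excludes zero

power = np.mean([one_replication() for _ in range(200)])
print(f"estimated power to detect the indirect effect (a*b): {power:.2f}")
```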

  8. Application of the Modified Vlasov Model to the Free Vibration Analysis of Thick Plates Resting on Elastic Foundations

    OpenAIRE

    Ozgan, Korhan; Daloglu, Ayse T.

    2009-01-01

    The Modified Vlasov Model is applied to the free vibration analysis of thick plates resting on elastic foundations. The effects of the subsoil depth, plate dimensions and their ratio, the value of the vertical deformation parameter within the subsoil on the frequency parameters of plates on elastic foundations are investigated. A four-noded, twelve degrees of freedom quadrilateral finite element (PBQ4) is used for plate bending analysis based on Mindlin plate theory which is effectively appli...

  9. Parameters Tuning of Model Free Adaptive Control Based on Minimum Entropy

    Institute of Scientific and Technical Information of China (English)

    Chao Ji; Jing Wang; Liulin Cao; Qibing Jin

    2014-01-01

    Dynamic linearization based model-free adaptive control (MFAC) algorithms have been widely used in practical systems, in which some parameters should be tuned before they are successfully applied to process industries. Considering the random noise existing in real processes, a parameter tuning method based on minimum entropy optimization is proposed, and the feature of entropy is used to accurately describe the system uncertainty. For cases of Gaussian stochastic noise and non-Gaussian stochastic noise, an entropy recursive optimization algorithm is derived based on an approximate model or an identified model. The extensive simulation results show the effectiveness of the minimum entropy optimization for the partial-form dynamic linearization based MFAC. The parameters tuned by the minimum entropy optimization index show stronger stability and more robustness than those tuned by other traditional indices, such as the integral of the squared error (ISE) or the integral of time-weighted absolute error (ITAE), when system stochastic noise exists.

  10. Can model-free reinforcement learning explain deontological moral judgments?

    Science.gov (United States)

    Ayars, Alisabeth

    2016-05-01

    Dual-systems frameworks propose that moral judgments are derived from both an immediate emotional response, and controlled/rational cognition. Recently Cushman (2013) proposed a new dual-system theory based on model-free and model-based reinforcement learning. Model-free learning attaches values to actions based on their history of reward and punishment, and explains some deontological, non-utilitarian judgments. Model-based learning involves the construction of a causal model of the world and allows for far-sighted planning; this form of learning fits well with utilitarian considerations that seek to maximize certain kinds of outcomes. I present three concerns regarding the use of model-free reinforcement learning to explain deontological moral judgment. First, many actions that humans find aversive from model-free learning are not judged to be morally wrong. Moral judgment must require something in addition to model-free learning. Second, there is a dearth of evidence for central predictions of the reinforcement account-e.g., that people with different reinforcement histories will, all else equal, make different moral judgments. Finally, to account for the effect of intention within the framework requires certain assumptions which lack support. These challenges are reasonable foci for future empirical/theoretical work on the model-free/model-based framework. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Pitchcontrol of wind turbines using model free adaptivecontrol based on wind turbine code

    DEFF Research Database (Denmark)

    Zhang, Yunqian; Chen, Zhe; Cheng, Ming

    2011-01-01

    As the wind turbine is a nonlinear high-order system, to achieve good pitch control performance, a model free adaptive control (MFAC) approach which doesn't need the mathematical model of the wind turbine is adopted in the pitch control system in this paper. A pseudo gradient vector whose estimation value is only based on I/O data of the wind turbine is identified and then the wind turbine system is replaced by a dynamic linear time-varying model. In order to verify the correctness and robustness of the proposed model free adaptive pitch controller, the wind turbine code FAST which can predict the wind turbine loads and response in high accuracy is used. The results show that the controller produces good dynamic performance, good robustness and adaptability.
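
    A common way to realize the pseudo-gradient idea described above is compact-form dynamic linearization MFAC, sketched below on a toy nonlinear plant. The plant, gains and reset rule are assumptions for illustration; this is not the FAST-based wind turbine model or the paper's controller, but it shows how the pseudo-gradient is estimated from I/O increments alone and then used in the control law.

```python
import numpy as np

eta, mu, rho, lam = 0.8, 1.0, 0.6, 1.0        # MFAC tuning parameters (assumed)
phi = 1.0                                     # pseudo-gradient estimate
y, y_prev, u, u_prev = 0.0, 0.0, 0.0, 0.0
r = 1.0                                       # reference output

def plant(y, u):
    """Nonlinear plant unknown to the controller, used only for simulation."""
    return 0.6 * y + 0.4 * np.tanh(u) + 0.1 * u

for k in range(80):
    du, dy = u - u_prev, y - y_prev
    if abs(du) > 1e-6:
        # Pseudo-gradient update from input/output increments only (no plant model).
        phi += eta * du * (dy - phi * du) / (mu + du ** 2)
    if abs(phi) < 1e-3:
        phi = 1.0                              # reset to keep the estimate well-posed
    u_prev, y_prev = u, y
    u = u + rho * phi * (r - y) / (lam + phi ** 2)   # compact-form MFAC control law
    y = plant(y, u)
    if k in (10, 40, 79):
        print(f"k={k:2d}  y={y:.3f}  phi={phi:.3f}")
```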

  12. Stability analysis of free piston Stirling engines

    Science.gov (United States)

    Bégot, Sylvie; Layes, Guillaume; Lanzetta, François; Nika, Philippe

    2013-03-01

    This paper presents a stability analysis of a free piston Stirling engine. The model and the detailed calculation of pressure losses are presented. Stability of the machine is studied by observing the eigenvalues of the model matrix. Model validation based on comparison with NASA experimental results is described. The influence of operational and construction parameters on performance and stability is examined. The results show that most parameters that are beneficial for machine power seem to induce irregular mechanical characteristics with load, suggesting that self-sustained oscillations could be difficult to maintain and control.
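
    The eigenvalue-based stability check described in the abstract can be illustrated on a linearized two-piston model. All coefficients below (masses, damping, spring rates, gas-spring coupling) are assumed round numbers, not the machine analyzed in the paper; the point is only the procedure: build the state matrix, inspect the eigenvalues, and read the oscillation frequency from the imaginary parts.

```python
import numpy as np

# Displacer (d) and power piston (p): m*x'' + c*x' + k*x = coupling terms (assumed values).
m_d, c_d, k_d = 0.4, 2.0, 3.0e3        # kg, N.s/m, N/m
m_p, c_p, k_p = 1.2, 4.0, 8.0e3
k_c = 1.5e3                            # gas-spring coupling between the two pistons

# State vector x = [x_d, v_d, x_p, v_p], with dynamics x' = A x.
A = np.array([
    [0.0,               1.0,       0.0,               0.0],
    [-(k_d + k_c)/m_d, -c_d/m_d,   k_c/m_d,           0.0],
    [0.0,               0.0,       0.0,               1.0],
    [k_c/m_p,           0.0,      -(k_p + k_c)/m_p,  -c_p/m_p],
])

eig = np.linalg.eigvals(A)
for val in eig:
    print(f"eigenvalue {val.real:8.2f} {val.imag:+8.2f}j   "
          f"{'decaying' if val.real < 0 else 'growing'}")
print("oscillation frequency estimate [Hz]:", round(abs(eig.imag).max() / (2 * np.pi), 1))
```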

  13. Refined 2D and Exact 3D Shell Models for the Free Vibration Analysis of Single- and Double-Walled Carbon Nanotubes

    Directory of Open Access Journals (Sweden)

    Salvatore Brischetto

    2015-12-01

    The present paper addresses the free vibration analysis of simply supported Single- and Double-Walled Carbon Nanotubes (SWCNTs and DWCNTs). Refined 2D Generalized Differential Quadrature (GDQ) shell methods and an exact 3D shell model are compared. A continuum approach (based on an elastic three-dimensional shell model) is used for the natural frequency investigation of SWCNTs and DWCNTs. SWCNTs are defined as isotropic cylinders with an equivalent thickness and Young modulus. DWCNTs are defined as two concentric isotropic cylinders (with an equivalent thickness and Young modulus) which can be linked by means of the interlaminar continuity conditions or by means of van der Waals interactions. Layer-wise approaches are mandatory for the analysis of van der Waals forces in DWCNTs. The effect of the van der Waals interaction between the two cylinders is shown for different DWCNT lengths, diameters and vibration modes. The accuracy of beam models and classical 2D shell models in the free vibration analysis of SWCNTs and DWCNTs is also investigated.

  14. Model-free control

    Science.gov (United States)

    Fliess, Michel; Join, Cédric

    2013-12-01

    'Model-free control' and the corresponding 'intelligent' PID controllers (iPIDs), which already have had many successful concrete applications, are presented here for the first time in a unified manner, where the new advances are taken into account. The basics of model-free control now employ some old functional analysis and some elementary differential algebra. The estimation techniques become quite straightforward via a recent online parameter identification approach. The importance of iPIs and especially of iPs is deduced from the presence of friction. The strange industrial ubiquity of classic PIDs and the great difficulty of tuning them in complex situations are explained, via an elementary sampling argument, by their connections with iPIDs. Several numerical simulations are presented which include some infinite-dimensional systems. They demonstrate not only the power of our intelligent controllers but also the great simplicity of tuning them.
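
    The flavor of an 'intelligent proportional' (iP) loop can be conveyed with a short simulation: the plant is treated through the ultra-local model y' = F + alpha*u, the lumped term F is re-estimated at every step from measured data, and the control cancels it. The plant, alpha, gain and sampling period below are assumptions made for illustration, not taken from the article.

```python
import numpy as np

alpha, Kp, dt = 2.0, 5.0, 0.01                     # ultra-local gain, proportional gain, sample time
y, u, y_prev = 0.0, 0.0, 0.0

def plant_derivative(y, u, t):
    """Nonlinear plant with a time-varying load, unknown to the controller."""
    return -1.5 * y + np.sin(y) + 3.0 * u + (0.5 if t > 2.0 else 0.0)

for k in range(400):
    t = k * dt
    y_star = 1.0                                   # constant reference
    ydot_meas = (y - y_prev) / dt                  # crude derivative estimate from measurements
    F_hat = ydot_meas - alpha * u                  # estimate of the lumped unknown dynamics F
    e = y - y_star
    u = -(F_hat + Kp * e) / alpha                  # iP law (reference derivative is zero)
    y_prev = y
    y += dt * plant_derivative(y, u, t)            # explicit Euler simulation of the true plant
    if k in (100, 250, 399):
        print(f"t={t:4.2f}  y={y:.3f}")            # tracks 1.0 and rejects the load step at t=2
```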

  15. A numerical model on thermodynamic analysis of free piston Stirling engines

    Science.gov (United States)

    Mou, Jian; Hong, Guotong

    2017-02-01

    In this paper, a new numerical thermodynamic model based on the energy conservation law has been used to analyze the free piston Stirling engine. In the model, all data were taken from a real free piston Stirling engine that has been built in our laboratory. The energy conservation equations have been applied to the expansion space and compression space of the engine. The equations include internal energy, input power, output power, enthalpy and the heat losses. The heat losses include the regenerative heat conduction loss, shuttle heat loss, seal leakage loss and the cavity wall heat conduction loss. The numerical results show that the temperatures of the expansion space and the compression space vary with time. The higher the regeneration effectiveness, the higher the efficiency and the larger the output work. It is also found that under different initial pressures, the heat source temperature, phase angle and engine working frequency have different effects on the engine's efficiency and power. As a result, the model is expected to be a useful tool for the simulation, design and optimization of Stirling engines.

  16. Offset-Free Model Predictive Control of Open Water Channel Based on Moving Horizon Estimation

    Science.gov (United States)

    Ekin Aydin, Boran; Rutten, Martine

    2016-04-01

    Model predictive control (MPC) is a powerful control option which is increasingly used by operational water managers for managing water systems. The explicit consideration of constraints and multi-objective management are important features of MPC. However, due to water loss in open water systems by seepage, leakage and evaporation, a mismatch between the model and the real system will be created. This mismatch affects the performance of MPC and creates an offset from the reference set point of the water level. We present model predictive control based on moving horizon estimation (MHE-MPC) to achieve offset-free control of the water level in open water canals. MHE-MPC uses the past predictions of the model and the past measurements of the system to estimate unknown disturbances, and the offset in the controlled water level is systematically removed. We numerically tested MHE-MPC on an accurate hydrodynamic model of the laboratory canal UPC-PAC located in Barcelona. In addition, we also applied the well-known disturbance-modeling offset-free control scheme to the same test case. Simulation experiments on a single canal reach show that MHE-MPC outperforms the disturbance-modeling offset-free control scheme.

  17. Model-free stochastic processes studied with q-wavelet-based informational tools

    International Nuclear Information System (INIS)

    Perez, D.G.; Zunino, L.; Martin, M.T.; Garavaglia, M.; Plastino, A.; Rosso, O.A.

    2007-01-01

    We undertake a model-free investigation of stochastic processes employing q-wavelet based quantifiers, which constitute a generalization of their Shannon counterparts. It is shown that (i) interesting physical information becomes accessible in such a way, (ii) for special q values the quantifiers are more sensitive than the Shannon ones, and (iii) there exists an implicit relationship between the Hurst parameter H and q within this wavelet framework.

  18. Guidelines for the analysis of free energy calculations.

    Science.gov (United States)

    Klimovich, Pavel V; Shirts, Michael R; Mobley, David L

    2015-05-01

    Free energy calculations based on molecular dynamics simulations show considerable promise for applications ranging from drug discovery to prediction of physical properties and structure-function studies. But these calculations are still difficult and tedious to analyze, and best practices for analysis are not well defined or propagated. Essentially, each group analyzing these calculations needs to decide how to conduct the analysis and, usually, develop its own analysis tools. Here, we review and recommend best practices for analysis yielding reliable free energies from molecular simulations. Additionally, we provide a Python tool, alchemical-analysis.py, freely available on GitHub as part of the pymbar package (located at http://github.com/choderalab/pymbar), that implements the analysis practices reviewed here for several reference simulation packages, which can be adapted to handle data from other packages. Both this review and the tool cover analysis of alchemical calculations generally, including free energy estimates via both thermodynamic integration and free energy perturbation-based estimators. Our Python tool also handles output from multiple types of free energy calculations, including expanded ensemble and Hamiltonian replica exchange, as well as standard fixed ensemble calculations. We also survey a range of statistical and graphical ways of assessing the quality of the data and free energy estimates, and provide prototypes of these in our tool. We hope this tool and discussion will serve as a foundation for more standardization of and agreement on best practices for analysis of free energy calculations.
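
    As a minimal illustration of the two estimator families the review covers, the sketch below computes a free energy difference for a harmonic oscillator whose spring constant is switched, by thermodynamic integration and by exponential (Zwanzig/FEP) averaging over lambda windows. Samples are drawn directly from the analytic Boltzmann distribution rather than from a simulation, and the setup is unrelated to the alchemical-analysis.py tool itself.

```python
import numpy as np

rng = np.random.default_rng(0)
kT, k0, k1, n = 1.0, 1.0, 4.0, 20000
lambdas = np.linspace(0.0, 1.0, 11)
k_of = lambda lam: (1 - lam) * k0 + lam * k1           # U_lambda(x) = 0.5 * k(lambda) * x^2

# Thermodynamic integration: dF/dlambda = <dU/dlambda>_lambda = 0.5*(k1-k0)*<x^2>_lambda
dudl = []
for lam in lambdas:
    x = rng.normal(0.0, np.sqrt(kT / k_of(lam)), n)    # exact samples from the lambda ensemble
    dudl.append(0.5 * (k1 - k0) * np.mean(x ** 2))
dudl = np.array(dudl)
dF_ti = np.sum(0.5 * (dudl[:-1] + dudl[1:]) * np.diff(lambdas))   # trapezoid rule

# Free energy perturbation (Zwanzig) accumulated over adjacent lambda windows.
dF_fep = 0.0
for lam_a, lam_b in zip(lambdas[:-1], lambdas[1:]):
    x = rng.normal(0.0, np.sqrt(kT / k_of(lam_a)), n)
    dU = 0.5 * (k_of(lam_b) - k_of(lam_a)) * x ** 2
    dF_fep += -kT * np.log(np.mean(np.exp(-dU / kT)))

exact = 0.5 * kT * np.log(k1 / k0)                     # analytic result for the oscillator
print(f"TI: {dF_ti:.3f}   FEP: {dF_fep:.3f}   exact: {exact:.3f}")
```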

  19. Model-free methods of analyzing domain motions in proteins from simulation : A comparison of normal mode analysis and molecular dynamics simulation of lysozyme

    NARCIS (Netherlands)

    Hayward, S.; Kitao, A.; Berendsen, H.J.C.

    Model-free methods are introduced to determine quantities pertaining to protein domain motions from normal mode analyses and molecular dynamics simulations. For the normal mode analysis, the methods are based on the assumption that in low-frequency modes, domain motions can be well approximated by

  20. Free-free and fixed base modal survey tests of the Space Station Common Module Prototype

    Science.gov (United States)

    Driskill, T. C.; Anderson, J. B.; Coleman, A. D.

    1992-01-01

    This paper describes the testing aspects and the problems encountered during the free-free and fixed base modal surveys completed on the original Space Station Common Module Prototype (CMP). The CMP is a 40-ft long by 14.5-ft diameter 'waffle-grid' cylinder built by the Boeing Company and housed at the Marshall Space Flight Center (MSFC) near Huntsville, AL. The CMP modal survey tests were conducted at MSFC by the Dynamics Test Branch. The free-free modal survey tests (June '90 to Sept. '90) included interface verification tests (IFVT), often referred to as impedance measurements, mass-additive testing and linearity studies. The fixed base modal survey tests (Feb. '91 to April '91), including linearity studies, were conducted in a fixture designed to constrain the CMP in 7 total degrees-of-freedom at five trunnion interfaces (two primary, two secondary, and the keel). The fixture also incorporated an airbag off-load system designed to alleviate the non-linear effects of friction in the primary and secondary trunnion interfaces. Numerous test configurations were performed with the objective of providing a modal data base for evaluating the various testing methodologies to verify dynamic finite element models used for input to coupled load analysis.

  1. ANALYSIS OF FREE ROUTE AIRSPACE AND PERFORMANCE BASED NAVIGATION IMPLEMENTATION IN THE EUROPEAN AIR NAVIGATION SYSTEM

    Directory of Open Access Journals (Sweden)

    Svetlana Pavlova

    2014-12-01

    The European Air Traffic Management system requires continuous improvement as air traffic is increasing day by day. For this purpose, the Free Route Airspace and Performance Based Navigation concepts were developed by international organizations; they offer the required level of safety, capacity and environmental performance along with cost-effectiveness. The aim of the article is to provide a detailed analysis of the implementation status of Free Route Airspace and Performance Based Navigation within the European region, including the Ukrainian air navigation system.

  2. Model-Free Autotuning Testing on a Model of a Three-Tank Cascade

    Directory of Open Access Journals (Sweden)

    Stanislav VRÁNA

    2009-06-01

    A newly developed model-free autotuning method based on frequency response analysis has been tested on a laboratory set-up that represents a physical model of a three-tank cascade. This laboratory model was chosen for the following reasons: (a) the laboratory model was ready for computer control; (b) simultaneously, computer simulation could be effectively utilized, because a mathematical description of the cascade based on quite exactly valid relations was available; (c) the set-up provided the necessary degree of nonlinearity and changeable properties. The improvement of the laboratory set-up instrumentation presented here was necessary because the results obtained from the first experimental identification did not correspond to the results provided by the simulation. The data were evidently imprecise, because the available sensors and the conditions for process settling were inadequate.

  3. Characterizing structural transitions using localized free energy landscape analysis.

    Directory of Open Access Journals (Sweden)

    Nilesh K Banavali

    Structural changes in molecules are frequently observed during biological processes like replication, transcription and translation. These structural changes can usually be traced to specific distortions in the backbones of the macromolecules involved. Quantitative energetic characterization of such distortions can greatly advance the atomic-level understanding of the dynamic character of these biological processes. Molecular dynamics simulations combined with a variation of the Weighted Histogram Analysis Method for potential of mean force determination are applied to characterize localized structural changes for the test case of cytosine (underlined) base flipping in a GTCAGCGCATGG DNA duplex. Free energy landscapes for backbone torsion and sugar pucker degrees of freedom in the DNA are used to understand their behavior in response to the base flipping perturbation. By simplifying the base flipping structural change into a two-state model, a free energy difference of up to 14 kcal/mol can be attributed to the flipped state relative to the stacked Watson-Crick base paired state. This two-state classification allows precise evaluation of the effect of base flipping on local backbone degrees of freedom. The calculated free energy landscapes of individual backbone and sugar degrees of freedom expectedly show the greatest change in the vicinity of the flipping base itself, but specific delocalized effects can be discerned up to four nucleotide positions away in both 5' and 3' directions. Free energy landscape analysis thus provides a quantitative method to pinpoint the determinants of structural change on the atomic scale and also delineate the extent of propagation of the perturbation along the molecule. In addition to nucleic acids, this methodology is anticipated to be useful for studying conformational changes in all macromolecules, including carbohydrates, lipids, and proteins.

  4. Model-Based Reasoning in Humans Becomes Automatic with Training.

    Directory of Open Access Journals (Sweden)

    Marcos Economides

    2015-09-01

    Full Text Available Model-based and model-free reinforcement learning (RL) have been suggested as algorithmic realizations of goal-directed and habitual action strategies. Model-based RL is more flexible than model-free but requires sophisticated calculations using a learnt model of the world. This has led model-based RL to be identified with slow, deliberative processing, and model-free RL with fast, automatic processing. In support of this distinction, it has recently been shown that model-based reasoning is impaired by placing subjects under cognitive load--a hallmark of non-automaticity. Here, using the same task, we show that cognitive load does not impair model-based reasoning if subjects receive prior training on the task. This finding is replicated across two studies and a variety of analysis methods. Thus, task familiarity permits use of model-based reasoning in parallel with other cognitive demands. The ability to deploy model-based reasoning in an automatic, parallelizable fashion has widespread theoretical implications, particularly for the learning and execution of complex behaviors. It also suggests a range of important failure modes in psychiatric disorders.
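
    The algorithmic distinction described above can be made concrete with a minimal sketch: a model-free agent updates action values directly from reward prediction errors, while a model-based agent plans by evaluating actions against a learnt transition and reward model. This is a generic illustration, not the task or code used in the study.

      import numpy as np

      n_states, n_actions, alpha, gamma = 4, 2, 0.1, 0.95

      # Model-free: temporal-difference (Q-learning) update driven by a reward prediction error.
      Q = np.zeros((n_states, n_actions))
      def model_free_update(s, a, r, s_next):
          rpe = r + gamma * Q[s_next].max() - Q[s, a]   # reward prediction error
          Q[s, a] += alpha * rpe

      # Model-based: evaluate actions using a learnt transition model T and reward model R_hat.
      T = np.full((n_states, n_actions, n_states), 1.0 / n_states)  # learnt transition probabilities
      R_hat = np.zeros((n_states, n_actions))                       # learnt immediate rewards
      def model_based_values(V):
          # one step of planning: expected reward plus discounted expected future value
          return R_hat + gamma * np.einsum('san,n->sa', T, V)

      model_free_update(0, 1, 1.0, 2)
      print(Q[0], model_based_values(np.zeros(n_states))[0])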

  5. OpenFLUX: efficient modelling software for 13C-based metabolic flux analysis

    Directory of Open Access Journals (Sweden)

    Nielsen Lars K

    2009-05-01

    Full Text Available Abstract Background The quantitative analysis of metabolic fluxes, i.e., in vivo activities of intracellular enzymes and pathways, provides key information on biological systems in systems biology and metabolic engineering. It is based on a comprehensive approach combining (i) tracer cultivation on 13C substrates, (ii) 13C labelling analysis by mass spectrometry and (iii) mathematical modelling for experimental design, data processing, flux calculation and statistics. Whereas the cultivation and the analytical part is fairly advanced, a lack of appropriate modelling software solutions for all modelling aspects in flux studies is limiting the application of metabolic flux analysis. Results We have developed OpenFLUX as a user friendly, yet flexible software application for small and large scale 13C metabolic flux analysis. The application is based on the new Elementary Metabolite Unit (EMU) framework, significantly enhancing computation speed for flux calculation. From simple notation of metabolic reaction networks defined in a spreadsheet, the OpenFLUX parser automatically generates MATLAB-readable metabolite and isotopomer balances, thus strongly facilitating model creation. The model can be used to perform experimental design, parameter estimation and sensitivity analysis either using the built-in gradient-based search or Monte Carlo algorithms or in user-defined algorithms. Exemplified for a microbial flux study with 71 reactions, 8 free flux parameters and mass isotopomer distribution of 10 metabolites, OpenFLUX allowed the EMU-based model to be compiled automatically from an Excel file containing metabolic reactions and carbon transfer mechanisms, showing its user-friendliness. It reliably reproduced the published data and optimum flux distributions for the network under study were found quickly. Conclusion We have developed a fast, accurate application to perform steady-state 13C metabolic flux analysis. OpenFLUX will strongly facilitate and

  6. Free-free Gaunt factors: comparison of various models

    International Nuclear Information System (INIS)

    Collins, L.A.; Merts, A.L.

    1986-01-01

    We develop the general theory of free-free absorption processes in terms of basic quantum mechanical principles. We perform calculations of the free-free Gaunt factor for several models of the electron-atom (ion) interaction in a variety of systems including rare gases, alkali, and aluminum. In addition, we investigate plasma-screening effects in such models as the Yukawa potential. Our calculations compare well with those of other authors, and our comparative study of various models allows a more thorough understanding of their range of validity. 38 refs., 2 figs., 14 tabs
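
    The plasma-screening models mentioned above replace the bare Coulomb interaction with a screened (Yukawa) potential. A standard form, written here only as background and with generic symbols (Z the ionic charge, lambda the screening length), is

      V(r) = -\frac{Z e^{2}}{r}\, e^{-r/\lambda}

    In the limit of a large screening length this reduces to the unscreened Coulomb potential, which is why comparing Gaunt factors across such model potentials isolates the effect of screening.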

  7. Free Publishing Culture. Sustainable Models?

    Directory of Open Access Journals (Sweden)

    Silvia Nanclares Escudero

    2013-03-01

    Full Text Available As a result of the collective research on the possibilities for publishing production and distribution offered nowadays by the Free Culture scenario, we present here a mapping of symptoms in order to propose a transitory diagnostic of the question: Is it possible to generate an economically sustainable publishing model based on the uses and customs generated and provided by Free Culture? Data, intuitions, experiences and ideas attempt to back up our affirmative answer.

  8. Optical modeling based on mean free path calculations for quantum dot phosphors applied to optoelectronic devices.

    Science.gov (United States)

    Shin, Min-Ho; Kim, Hyo-Jun; Kim, Young-Joo

    2017-02-20

    We proposed an optical simulation model for the quantum dot (QD) nanophosphor based on the mean free path concept to understand precisely the optical performance of optoelectronic devices. A measurement methodology was also developed to get the desired optical characteristics such as the mean free path and absorption spectra for QD nanophosphors which are to be incorporated into the simulation. The simulation results for QD-based white LED and OLED displays show good agreement with the experimental values from the fabricated devices in terms of spectral power distribution, chromaticity coordinate, CCT, and CRI. The proposed simulation model and measurement methodology can be applied easily to the design of lots of optoelectronics devices using QD nanophosphors to obtain high efficiency and the desired color characteristics.
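
    The mean-free-path concept underlying this kind of QD model can be illustrated with a small Monte Carlo sketch: path lengths between interaction events follow an exponential distribution whose scale is the mean free path. The parameter values below are made up for illustration and are not the authors' measured data or simulation code.

      import numpy as np

      mfp = 50e-6          # assumed mean free path in the QD layer, in metres (illustrative)
      thickness = 100e-6   # assumed layer thickness, in metres (illustrative)

      rng = np.random.default_rng(0)
      free_paths = -mfp * np.log(rng.random(100_000))   # sample exponentially distributed free paths

      # Fraction of rays traversing the layer without interacting with a QD;
      # it should approach exp(-thickness / mfp).
      transmitted = np.mean(free_paths > thickness)
      print(transmitted, np.exp(-thickness / mfp))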

  9. IMMAN: free software for information theory-based chemometric analysis.

    Science.gov (United States)

    Urias, Ricardo W Pino; Barigye, Stephen J; Marrero-Ponce, Yovani; García-Jacas, César R; Valdes-Martiní, José R; Perez-Gimenez, Facundo

    2015-05-01

    The features and theoretical background of a new and free computational program for chemometric analysis denominated IMMAN (acronym for Information theory-based CheMoMetrics ANalysis) are presented. This is multi-platform software developed in the Java programming language, designed with a remarkably user-friendly graphical interface for the computation of a collection of information-theoretic functions adapted for rank-based unsupervised and supervised feature selection tasks. A total of 20 feature selection parameters are presented, with the unsupervised and supervised frameworks represented by 10 approaches in each case. Several information-theoretic parameters traditionally used as molecular descriptors (MDs) are adapted for use as unsupervised rank-based feature selection methods. On the other hand, a generalization scheme for the previously defined differential Shannon's entropy is discussed, as well as the introduction of Jeffreys information measure for supervised feature selection. Moreover, well-known information-theoretic feature selection parameters, such as information gain, gain ratio, and symmetrical uncertainty are incorporated into the IMMAN software ( http://mobiosd-hub.com/imman-soft/ ), following an equal-interval discretization approach. IMMAN offers data pre-processing functionalities, such as missing values processing, dataset partitioning, and browsing. Moreover, single parameter or ensemble (multi-criteria) ranking options are provided. Consequently, this software is suitable for tasks like dimensionality reduction, feature ranking, as well as comparative diversity analysis of data matrices. Simple examples of applications performed with this program are presented. A comparative study between IMMAN and WEKA feature selection tools using the Arcene dataset was performed, demonstrating similar behavior. In addition, it is revealed that the use of IMMAN unsupervised feature selection methods improves the performance of both IMMAN and WEKA
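
    The information-theoretic quantities named above (Shannon entropy, information gain) are straightforward to compute once features have been discretized. The short sketch below is a generic illustration and is not taken from the IMMAN code base.

      import numpy as np

      def shannon_entropy(labels):
          """Shannon entropy (in bits) of a discrete variable."""
          _, counts = np.unique(labels, return_counts=True)
          p = counts / counts.sum()
          return -np.sum(p * np.log2(p))

      def information_gain(feature, target):
          """Reduction in target entropy obtained by conditioning on a discretized feature."""
          h_target = shannon_entropy(target)
          h_cond = 0.0
          for value in np.unique(feature):
              mask = feature == value
              h_cond += mask.mean() * shannon_entropy(target[mask])
          return h_target - h_cond

      feature = np.array([0, 0, 1, 1, 2, 2])   # a discretized descriptor (illustrative)
      target = np.array([0, 0, 0, 1, 1, 1])    # class labels (illustrative)
      print(information_gain(feature, target))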

  10. Free volume model: High-temperature deformation of a Zr-based bulk metallic glass

    International Nuclear Information System (INIS)

    Bletry, M.; Guyot, P.; Blandin, J.J.; Soubeyroux, J.L.

    2006-01-01

    The homogeneous deformation of a zirconium-based bulk metallic glass is investigated in the glass transition region. Compression tests at different temperatures and strain rates have been conducted. The mechanical behavior is analyzed in the framework of the free volume model, taking into account the dependence of the flow defect concentration on deformation. The activation volume is evaluated and allows one to gather the viscosity data (for the different strain rates and temperatures) on a unique master curve. It is also shown that, due to the relation between flow defect concentration and free volume, it is not possible to deduce the equilibrium flow defect concentration directly from mechanical measurements. However, if this parameter is arbitrarily chosen, mechanical measurements give access to the other parameters of the model, these parameters for the alloy under investigation being of the same order of magnitude as those for other metallic glasses
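
    For orientation, one frequently quoted form of the free volume model for homogeneous flow of metallic glasses (after Spaepen) relates the steady-state shear strain rate to the flow defect concentration and an activation volume; the symbols below are generic and are not necessarily the exact notation or parameterization used in the paper:

      \dot{\gamma} = 2\, c_f\, k_f \, \exp\!\left(-\frac{\Delta G^{m}}{k_B T}\right) \sinh\!\left(\frac{\sigma V}{2 k_B T}\right)

    Here c_f is the flow defect concentration, k_f an attempt frequency, Delta G^m an activation energy, sigma the applied stress and V the activation volume. Fitting this type of expression at several temperatures and strain rates is what allows the viscosity data to be collapsed onto a single master curve.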

  11. Dual RBFNNs-Based Model-Free Adaptive Control With Aspen HYSYS Simulation.

    Science.gov (United States)

    Zhu, Yuanming; Hou, Zhongsheng; Qian, Feng; Du, Wenli

    2017-03-01

    In this brief, we propose a new data-driven model-free adaptive control (MFAC) method with dual radial basis function neural networks (RBFNNs) for a class of discrete-time nonlinear systems. The main novelty lies in that it provides a systematic design method for controller structure by the direct usage of I/O data, rather than using the first-principle model or offline identified plant model. The controller structure is determined by equivalent-dynamic-linearization representation of the ideal nonlinear controller, and the controller parameters are tuned by the pseudogradient information extracted from the I/O data of the plant, which can deal with the unknown nonlinear system. The stability of the closed-loop control system and the stability of the training process for RBFNNs are guaranteed by rigorous theoretical analysis. Meanwhile, the effectiveness and the applicability of the proposed method are further demonstrated by the numerical example and Aspen HYSYS simulation of distillation column in crude styrene produce process.
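
    The pseudo-gradient idea behind MFAC can be illustrated with the standard compact-form dynamic linearization scheme. The sketch below uses the generic MFAC estimation and control laws, without the dual-RBFNN extension proposed in the brief, and all tuning constants and the toy plant are illustrative assumptions.

      # Compact-form model-free adaptive control (MFAC) sketch:
      # the pseudo-gradient phi is estimated from I/O increments and used to update u.
      eta, mu = 0.5, 1.0      # estimator step size and regularization (illustrative)
      rho, lam = 0.6, 1.0     # controller step size and regularization (illustrative)

      def mfac_step(phi, u_prev, du_prev, y, y_prev, y_ref):
          # update the pseudo-gradient estimate from the last input/output increments
          dy = y - y_prev
          phi += eta * du_prev / (mu + du_prev**2) * (dy - phi * du_prev)
          # control update driven by the tracking error
          du = rho * phi / (lam + phi**2) * (y_ref - y)
          return phi, u_prev + du, du

      # One illustrative step on a toy plant y(k+1) = 0.6*y(k) + 0.8*u(k)
      phi, u, du = 1.0, 0.0, 0.1
      y_prev, y = 0.0, 0.08
      phi, u, du = mfac_step(phi, u, du, y, y_prev, y_ref=1.0)
      print(phi, u)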

  12. Static and free vibration analysis of carbon nano wires based on Timoshenko beam theory using differential quadrature method

    Directory of Open Access Journals (Sweden)

    Maziar Janghorban

    Full Text Available Static and free vibration analysis of carbon nano wires with rectangular cross section based on Timoshenko beam theory is studied in this research. The differential quadrature method (DQM) is employed to solve the governing equations. To the best of the author's knowledge, this is the first time that the free vibration of nano wires has been investigated, and the first time that the differential quadrature method has been used for bending analysis of nano wires.
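
    The differential quadrature method approximates derivatives at grid points as weighted sums of the function values at all grid points. The sketch below computes the standard first-order weighting coefficients from the Lagrange-interpolation formula on an arbitrary grid and checks them on a smooth test function; it is a generic illustration, independent of the beam model in the paper.

      import numpy as np

      def dqm_first_derivative_matrix(x):
          """First-order DQ weighting coefficients a[i, j] on grid points x."""
          n = len(x)
          a = np.zeros((n, n))
          # P[i] = product over k != i of (x[i] - x[k])
          P = np.array([np.prod([x[i] - x[k] for k in range(n) if k != i]) for i in range(n)])
          for i in range(n):
              for j in range(n):
                  if i != j:
                      a[i, j] = P[i] / ((x[i] - x[j]) * P[j])
              a[i, i] = -np.sum(a[i, [j for j in range(n) if j != i]])
          return a

      # Check: the DQ derivative of sin(x) should be close to cos(x).
      x = np.cos(np.pi * np.arange(11) / 10)      # Chebyshev-Gauss-Lobatto points on [-1, 1]
      A = dqm_first_derivative_matrix(x)
      print(np.max(np.abs(A @ np.sin(x) - np.cos(x))))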

  13. A method of LED free-form tilted lens rapid modeling based on scheme language

    Science.gov (United States)

    Dai, Yidan

    2017-10-01

    Based on nonimaging optical principles and the traditional LED free-form surface lens, a new kind of LED free-form tilted lens was designed, and a method of rapid modeling based on the Scheme language was proposed. The mesh division method was applied to obtain the corresponding surface configuration according to the character of the light source and the desired energy distribution on the illumination plane. Then 3D modeling software and Scheme language programming were used to generate the lens model, respectively. With the help of optical simulation software, a light source of 1 mm × 1 mm × 1 mm was used in the experiment, with a lateral migration distance of the illumination area of 0.5 m, and a total of one million rays were traced. Simulated results were obtained for both models. The simulated output shows that the Scheme language can prevent the model deformation problems caused by model transfer; the illumination uniformity reaches 82% and the offset angle is 26°. Also, the efficiency of the modeling process is greatly increased by using the Scheme language.

  14. Optimization based tuning approach for offset free MPC

    DEFF Research Database (Denmark)

    Olesen, Daniel Haugård; Huusom, Jakob Kjøbsted; Jørgensen, John Bagterp

    2012-01-01

    We present an optimization-based tuning procedure with certain robustness properties for an offset-free Model Predictive Controller (MPC). The MPC is designed for multivariate processes that can be represented by an ARX model. The advantage of ARX model representations is that standard system identification techniques using convex optimization can be used for identification of such models from input-output data. The stochastic model of the ARX model identified from input-output data is modified with an ARMA model designed as part of the MPC-design procedure to ensure offset-free control. The ARMAX model description resulting from the extension can be realized as a state space model in innovation form. The MPC is designed and implemented based on this state space model in innovation form. Expressions for the closed-loop dynamics of the unconstrained system are used to derive the sensitivity...
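
    The ARX identification step referred to above is a convex (linear least squares) problem. A minimal sketch for a single-input single-output ARX(2, 2) model is shown below; the simulated plant and noise level are illustrative stand-ins for real input-output data, not the processes considered in the paper.

      import numpy as np

      # Simulate a toy plant: y(k) = 1.5 y(k-1) - 0.7 y(k-2) + 0.5 u(k-1) + 0.3 u(k-2) + noise
      rng = np.random.default_rng(1)
      N = 500
      u = rng.standard_normal(N)
      y = np.zeros(N)
      for k in range(2, N):
          y[k] = 1.5 * y[k-1] - 0.7 * y[k-2] + 0.5 * u[k-1] + 0.3 * u[k-2] + 0.01 * rng.standard_normal()

      # Build the ARX regression y(k) = theta . [y(k-1), y(k-2), u(k-1), u(k-2)] and solve it.
      Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
      theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
      print(theta)   # should be close to [1.5, -0.7, 0.5, 0.3]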

  15. A Model-free Approach to Fault Detection of Continuous-time Systems Based on Time Domain Data

    Institute of Scientific and Technical Information of China (English)

    Ping Zhang; Steven X. Ding

    2007-01-01

    In this paper, a model-free approach is presented to design an observer-based fault detection system of linear continuous-time systems based on input and output data in the time domain. The core of the approach is to directly identify parameters of the observer-based residual generator based on a numerically reliable data equation obtained by filtering and sampling the input and output signals.

  16. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation

    Science.gov (United States)

    Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar

    2015-01-01

    Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each of them is equipped with various types of sensors such as GPS, camera, infrared and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail and have inaccurate readings. Therefore, the integration of sensor fusion will help to solve this dilemma and enhance the overall performance. This paper presents a collision free mobile robot navigation based on the fuzzy logic fusion model. Eight distance sensors and a range finder camera are used for the collision avoidance approach, while three ground sensors are used for the line or path following approach. The fuzzy system is composed of nine inputs which are the eight distance sensors and the camera, two outputs which are the left and right velocities of the mobile robot’s wheels, and 24 fuzzy rules for the robot’s movement. Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes the collision avoidance based on fuzzy logic fusion model and line following robot, has been implemented and tested through simulation and real time experiments. Various scenarios have been presented with static and dynamic obstacles using one robot and two robots while avoiding obstacles in different shapes and sizes. PMID:26712766

  17. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation

    Directory of Open Access Journals (Sweden)

    Marwah Almasri

    2015-12-01

    Full Text Available Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each of them is equipped with various types of sensors such as GPS, camera, infrared and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail and have inaccurate readings. Therefore, the integration of sensor fusion will help to solve this dilemma and enhance the overall performance. This paper presents a collision free mobile robot navigation based on the fuzzy logic fusion model. Eight distance sensors and a range finder camera are used for the collision avoidance approach, while three ground sensors are used for the line or path following approach. The fuzzy system is composed of nine inputs which are the eight distance sensors and the camera, two outputs which are the left and right velocities of the mobile robot’s wheels, and 24 fuzzy rules for the robot’s movement. Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes the collision avoidance based on fuzzy logic fusion model and line following robot, has been implemented and tested through simulation and real time experiments. Various scenarios have been presented with static and dynamic obstacles using one robot and two robots while avoiding obstacles in different shapes and sizes.

  18. Free energy analysis of cell spreading.

    Science.gov (United States)

    McEvoy, Eóin; Deshpande, Vikram S; McGarry, Patrick

    2017-10-01

    In this study we present a steady-state adaptation of the thermodynamically motivated stress fiber (SF) model of Vigliotti et al. (2015). We implement this steady-state formulation in a non-local finite element setting where we also consider global conservation of the total number of cytoskeletal proteins within the cell, global conservation of the number of binding integrins on the cell membrane, and adhesion limiting ligand density on the substrate surface. We present a number of simulations of cell spreading in which we consider a limited subset of the possible deformed spread-states assumed by the cell in order to examine the hypothesis that free energy minimization drives the process of cell spreading. Simulations suggest that cell spreading can be viewed as a competition between (i) decreasing cytoskeletal free energy due to strain induced assembly of cytoskeletal proteins into contractile SFs, and (ii) increasing elastic free energy due to stretching of the mechanically passive components of the cell. The computed minimum free energy spread area is shown to be lower for a cell on a compliant substrate than on a rigid substrate. Furthermore, a low substrate ligand density is found to limit cell spreading. The predicted dependence of cell spread area on substrate stiffness and ligand density is in agreement with the experiments of Engler et al. (2003). We also simulate the experiments of Théry et al. (2006), whereby initially circular cells deform and adhere to "V-shaped" and "Y-shaped" ligand patches. Analysis of a number of different spread states reveals that deformed configurations with the lowest free energy exhibit a SF distribution that corresponds to experimental observations, i.e. a high concentration of highly aligned SFs occurs along free edges, with lower SF concentrations in the interior of the cell. In summary, the results of this study suggest that cell spreading is driven by free energy minimization based on a competition between decreasing

  19. Prediction Model of Collapse Risk Based on Information Entropy and Distance Discriminant Analysis Method

    Directory of Open Access Journals (Sweden)

    Hujun He

    2017-01-01

    Full Text Available The prediction and risk classification of collapse is an important issue in the process of highway construction in mountainous regions. Based on the principles of information entropy and Mahalanobis distance discriminant analysis, we have produced a collapse hazard prediction model. We used the entropy measure method to reduce the influence indexes of the collapse activity and extracted the nine main indexes affecting collapse activity as the discriminant factors of the distance discriminant analysis model (i.e., slope shape, aspect, gradient, and height, along with exposure of the structural face, stratum lithology, relationship between weakness face and free face, vegetation cover rate, and degree of rock weathering). We employ postearthquake collapse data in relation to construction of the Yingxiu-Wolong highway, Hanchuan County, China, as training samples for analysis. The results were analyzed using the back substitution estimation method, showing high accuracy and no errors, and were the same as the prediction result of uncertainty measure. Results show that the classification model based on information entropy and distance discriminant analysis achieves the purpose of index optimization and has excellent performance, high prediction accuracy, and a zero false-positive rate. The model can be used as a tool for future evaluation of collapse risk.
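
    The distance discriminant step of such a model assigns a sample to the risk class whose centre is nearest in the Mahalanobis metric. The sketch below is a generic illustration with made-up index vectors, not the Yingxiu-Wolong training samples.

      import numpy as np

      def mahalanobis_classify(x, class_means, class_covs):
          """Assign x to the class with the smallest squared Mahalanobis distance."""
          d2 = []
          for mean, cov in zip(class_means, class_covs):
              diff = x - mean
              d2.append(diff @ np.linalg.inv(cov) @ diff)
          return int(np.argmin(d2)), d2

      # Two illustrative risk classes described by mean index vectors and covariance matrices.
      means = [np.array([0.2, 0.3]), np.array([0.8, 0.7])]
      covs = [np.eye(2) * 0.05, np.eye(2) * 0.05]
      print(mahalanobis_classify(np.array([0.75, 0.6]), means, covs))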

  20. Model based defect detection for free stator of ultrasonic motor

    DEFF Research Database (Denmark)

    Amini, Rouzbeh; Mojallali, Hamed; Izadi-Zamanabadi, Roozbeh

    2007-01-01

    In this paper, measurements of admittance magnitude and phase are used to identify the complex values of equivalent circuit model for free stator of an ultrasonic motor. The model is used to evaluate the changes in the admittance and relative changes in the values of equivalent circuit elements. ...

  1. An analytical method for free vibration analysis of Timoshenko beam theory applied to cracked nanobeams using a nonlocal elasticity model

    International Nuclear Information System (INIS)

    Torabi, K.; Nafar Dastgerdi, J.

    2012-01-01

    This paper is concerned with the free transverse vibration of cracked nanobeams modeled after Eringen's nonlocal elasticity theory and Timoshenko beam theory. The cracked beam is modeled as two segments connected by a rotational spring located at the cracked section. This model promotes discontinuities in rotational displacement due to bending which is proportional to bending moment transmitted by the cracked section. The governing equations of cracked nanobeams with two symmetric and asymmetric boundary conditions are derived; then these equations are solved analytically based on concerning basic standard trigonometric and hyperbolic functions. Besides, the frequency parameters and the vibration modes of cracked nanobeams for variant crack positions, crack ratio, and small scale effect parameters are calculated. The vibration solutions obtained provide a better representation of the vibration behavior of short, stubby, micro/nanobeams where the effects of small scale, transverse shear deformation and rotary inertia are significant. - Highlights: ► The free vibration analysis of cracked nanobeams is investigated. ► This study is based on the theory of nonlocal elasticity and Timoshenko beam theory. ► The small scale effect parameter greatly affects the value of natural frequencies. ► Crack reduces the natural frequencies, causes a discontinuity in the cracked section.

  2. An analytical method for free vibration analysis of Timoshenko beam theory applied to cracked nanobeams using a nonlocal elasticity model

    Energy Technology Data Exchange (ETDEWEB)

    Torabi, K., E-mail: kvntrb@KashanU.ac.ir; Nafar Dastgerdi, J., E-mail: J.nafardastgerdi@me.iut.ac.ir

    2012-08-31

    This paper is concerned with the free transverse vibration of cracked nanobeams modeled after Eringen's nonlocal elasticity theory and Timoshenko beam theory. The cracked beam is modeled as two segments connected by a rotational spring located at the cracked section. This model promotes discontinuities in rotational displacement due to bending which is proportional to bending moment transmitted by the cracked section. The governing equations of cracked nanobeams with two symmetric and asymmetric boundary conditions are derived; then these equations are solved analytically based on concerning basic standard trigonometric and hyperbolic functions. Besides, the frequency parameters and the vibration modes of cracked nanobeams for variant crack positions, crack ratio, and small scale effect parameters are calculated. The vibration solutions obtained provide a better representation of the vibration behavior of short, stubby, micro/nanobeams where the effects of small scale, transverse shear deformation and rotary inertia are significant. - Highlights: ► The free vibration analysis of cracked nanobeams is investigated. ► This study is based on the theory of nonlocal elasticity and Timoshenko beam theory. ► The small scale effect parameter greatly affects the value of natural frequencies. ► Crack reduces the natural frequencies, causes a discontinuity in the cracked section.

  3. The sagittal stem alignment and the stem version clearly influence the impingement-free range of motion in total hip arthroplasty: a computer model-based analysis.

    Science.gov (United States)

    Müller, Michael; Duda, Georg; Perka, Carsten; Tohtz, Stephan

    2016-03-01

    The component alignment in total hip arthroplasty influences the impingement-free range of motion (ROM). While substantiated data is available for the cup positioning, little is known about the stem alignment. Especially stem rotation and the sagittal alignment influence the position of the cone in relation to the edge of the socket and thus the impingement-free functioning. Hence, the question arises as to what influence these parameters have on the impingement-free ROM. With the help of a computer model, the influence of the sagittal stem alignment and rotation on the impingement-free ROM was investigated. The computer model was based on the CT dataset of a patient with a non-cemented THA. In the model the stem version was set at 10°/0°/-10° and the sagittal alignment at 5°/0°/-5°, which resulted in nine alternative stem positions. For each position, the maximum impingement-free ROM was investigated. Both stem version and sagittal stem alignment have a relevant influence on the impingement-free ROM. In particular, flexion and extension as well as internal and external rotation capability present evident differences. Over the examined intervals of 10° sagittal stem alignment and 20° stem version, differences of about 80° in flexion and 50° in extension capability were found. Likewise, differences of up to 72° in internal and up to 36° in external rotation were observed. The sagittal stem alignment and the stem torsion have a relevant influence on the impingement-free ROM. To clarify the causes of an impingement or accompanying problems, both parameters should be examined and, if possible, a combined assessment of these factors should be made.

  4. Strand Analysis, a free online program for the computational identification of the best RNA interference (RNAi) targets based on Gibbs free energy

    Directory of Open Access Journals (Sweden)

    Tiago Campos Pereira

    2007-01-01

    Full Text Available The RNA interference (RNAi) technique is a recent technology that uses double-stranded RNA molecules to promote potent and specific gene silencing. The application of this technique to molecular biology has increased considerably, from gene function identification to disease treatment. However, not all small interfering RNAs (siRNAs) are equally efficient, making target selection an essential procedure. Here we present Strand Analysis (SA), a free online software tool able to identify and classify the best RNAi targets based on Gibbs free energy (ΔG). Furthermore, particular features of the software, such as the free energy landscape and ΔG gradient, may be used to shed light on RNA-induced silencing complex (RISC) activity and RNAi mechanisms, which makes the SA software a distinct and innovative tool.

  5. BER Analysis of Coherent Free-Space Optical Communication Systems with a Focal-Plane-Based Wavefront Sensor

    Science.gov (United States)

    Cao, Jingtai; Zhao, Xiaohui; Liu, Wei; Gu, Haijun

    2018-03-01

    A wavefront sensor is one of the most important units of an adaptive optics system. Based on our previous works, in this paper, we discuss the bit-error-rate (BER) performance of coherent free space optical communication systems with a focal-plane-based wavefront sensor. Firstly, the theory of a focal-plane-based wavefront sensor is given. Then the relationship between the BER and the mixing efficiency with a homodyne receiver is discussed on the basis of binary-phase-shift-keying (BPSK) modulation. Finally, numerical simulation results show that the BER decreases markedly after aberration correction with the focal-plane-based wavefront sensor. In addition, the BER decreases with an increasing number of photons received within a single bit. These analysis results provide a reference for the design of coherent free-space optical communication (FSOC) systems.

  6. Covariance-based synaptic plasticity in an attractor network model accounts for fast adaptation in free operant learning.

    Science.gov (United States)

    Neiman, Tal; Loewenstein, Yonatan

    2013-01-23

    In free operant experiments, subjects alternate at will between targets that yield rewards stochastically. Behavior in these experiments is typically characterized by (1) an exponential distribution of stay durations, (2) matching of the relative time spent at a target to its relative share of the total number of rewards, and (3) adaptation after a change in the reward rates that can be very fast. The neural mechanism underlying these regularities is largely unknown. Moreover, current decision-making neural network models typically aim at explaining behavior in discrete-time experiments in which a single decision is made once in every trial, making these models hard to extend to the more natural case of free operant decisions. Here we show that a model based on attractor dynamics, in which transitions are induced by noise and preference is formed via covariance-based synaptic plasticity, can account for the characteristics of behavior in free operant experiments. We compare a specific instance of such a model, in which two recurrently excited populations of neurons compete for higher activity, to the behavior of rats responding on two levers for rewarding brain stimulation on a concurrent variable interval reward schedule (Gallistel et al., 2001). We show that the model is consistent with the rats' behavior, and in particular, with the observed fast adaptation to matching behavior. Further, we show that the neural model can be reduced to a behavioral model, and we use this model to deduce a novel "conservation law," which is consistent with the behavior of the rats.
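
    The covariance-based plasticity referred to above can be written as a weight update proportional to the covariance between neural activity and reward. The sketch below shows a generic discrete-time version of such a rule with made-up statistics; it is not the specific attractor network of the study.

      import numpy as np

      def covariance_update(w, activity, reward, act_mean, rew_mean, eta=0.01):
          """Covariance rule: dw ~ (activity - <activity>) * (reward - <reward>)."""
          return w + eta * (activity - act_mean) * (reward - rew_mean)

      rng = np.random.default_rng(0)
      w, act_mean, rew_mean = 0.0, 0.5, 0.3
      for _ in range(1000):
          a = rng.random()                            # presynaptic activity (illustrative)
          r = float(rng.random() < 0.3 + 0.4 * a)     # reward is more likely when activity is high
          w = covariance_update(w, a, r, act_mean, rew_mean)
      print(w)   # grows positive because activity and reward covary

    Because the update depends only on fluctuations around the means, synapses whose activity does not covary with reward drift around zero, which is the property that lets such a rule implement matching-like preferences.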

  7. Self-Organized Criticality in a Simple Neuron Model Based on Scale-Free Networks

    International Nuclear Information System (INIS)

    Lin Min; Wang Gang; Chen Tianlun

    2006-01-01

    A simple model for a set of interacting idealized neurons in scale-free networks is introduced. The basic elements of the model are endowed with the main features of a neuron function. We find that our model displays power-law behavior of avalanche sizes and generates long-range temporal correlation. More importantly, we find different dynamical behavior for nodes with different connectivity in the scale-free networks.
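
    A scale-free substrate of the kind used in such models can be generated by preferential attachment. The sketch below, using the networkx library, only illustrates the heavy-tailed degree structure; it does not implement the neuron dynamics of the paper, and the network size and attachment parameter are illustrative choices.

      import networkx as nx
      import numpy as np

      # Barabasi-Albert preferential attachment yields a scale-free degree distribution.
      G = nx.barabasi_albert_graph(n=10_000, m=3, seed=42)
      degrees = np.array([d for _, d in G.degree()])

      # Heavy tail: a few hubs with very high degree, while most nodes stay close to degree m.
      print(degrees.mean(), degrees.max())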

  8. Model Based Analysis and Test Generation for Flight Software

    Science.gov (United States)

    Pasareanu, Corina S.; Schumann, Johann M.; Mehlitz, Peter C.; Lowry, Mike R.; Karsai, Gabor; Nine, Harmon; Neema, Sandeep

    2009-01-01

    We describe a framework for model-based analysis and test case generation in the context of a heterogeneous model-based development paradigm that uses and combines MathWorks and UML 2.0 models and the associated code generation tools. This paradigm poses novel challenges to analysis and test case generation that, to the best of our knowledge, have not been addressed before. The framework is based on a common intermediate representation for different modeling formalisms and leverages and extends model checking and symbolic execution tools for model analysis and test case generation, respectively. We discuss the application of our framework to software models for a NASA flight mission.

  9. Analysis Of Agricultural Productivity And Growth On Safta (South Asian Free Trade Agreement And Its Imact On Economy Of Pakistan By Using CGE Model

    Directory of Open Access Journals (Sweden)

    Nazir Ahmed GOPANG

    2010-02-01

    Full Text Available This research explores the opportunities and analyses the costs and benefits of Pak-India trade under the South Asian Free Trade Agreement (SAFTA) and its possible impact on the welfare of both countries. Pak-India trade under SAFTA creates opportunities for both countries in export-led growth. In the first scenario, normal trading relations are restored and Most Favoured Nation (MFN) status is granted to each other to attract trade between the two countries. The Global Trade Analysis Project (GTAP) model is used to analyze the possible impact of SAFTA on Pakistan in a multi-country, multi-sector applied general equilibrium framework. After employing the simplified static analysis framework, the analysis based on simulations reveals that current demand for Pakistani Basmati rice and other consumer items like leather and cotton-made garments will expand after the FTA and consumer surplus will increase. The export of rice, leather and cotton-made garments may be examined under two scenarios, i.e. when normal trading relations between Pakistan and India are restored and when there is free trade between Pakistan and India in the presence of the South Asian Free Trade Agreement (SAFTA). Results based on this research reveal that, on SAFTA grounds, there will be net export benefits for Pakistan's economy.

  10. Regression-based model of skin diffuse reflectance for skin color analysis

    Science.gov (United States)

    Tsumura, Norimichi; Kawazoe, Daisuke; Nakaguchi, Toshiya; Ojima, Nobutoshi; Miyake, Yoichi

    2008-11-01

    A simple regression-based model of skin diffuse reflectance is developed based on reflectance samples calculated by Monte Carlo simulation of light transport in a two-layered skin model. This reflectance model includes the values of spectral reflectance in the visible spectra for Japanese women. The modified Lambert Beer law holds in the proposed model with a modified mean free path length in non-linear density space. The averaged RMS and maximum errors of the proposed model were 1.1 and 3.1%, respectively, in the above range.

  11. Logic-based models in systems biology: a predictive and parameter-free network analysis method.

    Science.gov (United States)

    Wynn, Michelle L; Consul, Nikita; Merajver, Sofia D; Schnell, Santiago

    2012-11-01

    Highly complex molecular networks, which play fundamental roles in almost all cellular processes, are known to be dysregulated in a number of diseases, most notably in cancer. As a consequence, there is a critical need to develop practical methodologies for constructing and analysing molecular networks at a systems level. Mathematical models built with continuous differential equations are an ideal methodology because they can provide a detailed picture of a network's dynamics. To be predictive, however, differential equation models require that numerous parameters be known a priori and this information is almost never available. An alternative dynamical approach is the use of discrete logic-based models that can provide a good approximation of the qualitative behaviour of a biochemical system without the burden of a large parameter space. Despite their advantages, there remains significant resistance to the use of logic-based models in biology. Here, we address some common concerns and provide a brief tutorial on the use of logic-based models, which we motivate with biological examples.
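
    A logic-based model of the kind advocated here is simply a set of Boolean update rules. The toy network below uses three hypothetical species (A activates B, B activates C, C inhibits A) with a synchronous update scheme; it is a generic illustration of the approach, not a network from the paper, and it needs no kinetic parameters.

      from itertools import product

      # Hypothetical three-node regulatory logic.
      def update(state):
          a, b, c = state
          return (not c, a, b)   # synchronous Boolean update: A' = NOT C, B' = A, C' = B

      # Enumerate the state space and follow each initial state to its attractor.
      for start in product([False, True], repeat=3):
          state, seen = start, []
          while state not in seen:
              seen.append(state)
              state = update(state)
          cycle = seen[seen.index(state):]
          print(start, '->', cycle)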

  12. Model based process-product design and analysis

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    This paper gives a perspective on modelling and the important role it has within product-process design and analysis. Different modelling issues related to the development and application of systematic model-based solution approaches for product-process design are discussed, and the need for a hybrid model-based framework is highlighted. This framework should be able to manage knowledge-data, models, and associated methods and tools integrated with design work-flows and data-flows for specific product-process design problems. In particular, the framework needs to manage models of different types, forms and complexity, together with their associated parameters. An example of a model-based system for the design of chemicals-based formulated products is also given.

  13. [Model-based biofuels system analysis: a review].

    Science.gov (United States)

    Chang, Shiyan; Zhang, Xiliang; Zhao, Lili; Ou, Xunmin

    2011-03-01

    Model-based system analysis is an important tool for evaluating the potential and impacts of biofuels, and for drafting biofuels technology roadmaps and targets. The broad reach of the biofuels supply chain requires that biofuels system analyses span a range of disciplines, including agriculture/forestry, energy, economics, and the environment. Here we reviewed various models developed for or applied to modeling biofuels, and presented a critical analysis of Agriculture/Forestry System Models, Energy System Models, Integrated Assessment Models, Micro-level Cost, Energy and Emission Calculation Models, and Specific Macro-level Biofuel Models. We focused on the models' strengths, weaknesses, and applicability, facilitating the selection of a suitable type of model for specific issues. Such an analysis was a prerequisite for future biofuels system modeling, and represented a valuable resource for researchers and policy makers.

  14. Logic-based models in systems biology: a predictive and parameter-free network analysis method†

    Science.gov (United States)

    Wynn, Michelle L.; Consul, Nikita; Merajver, Sofia D.

    2012-01-01

    Highly complex molecular networks, which play fundamental roles in almost all cellular processes, are known to be dysregulated in a number of diseases, most notably in cancer. As a consequence, there is a critical need to develop practical methodologies for constructing and analysing molecular networks at a systems level. Mathematical models built with continuous differential equations are an ideal methodology because they can provide a detailed picture of a network’s dynamics. To be predictive, however, differential equation models require that numerous parameters be known a priori and this information is almost never available. An alternative dynamical approach is the use of discrete logic-based models that can provide a good approximation of the qualitative behaviour of a biochemical system without the burden of a large parameter space. Despite their advantages, there remains significant resistance to the use of logic-based models in biology. Here, we address some common concerns and provide a brief tutorial on the use of logic-based models, which we motivate with biological examples. PMID:23072820

  15. Free Base Lysine Increases Survival and Reduces Metastasis in Prostate Cancer Model.

    Science.gov (United States)

    Ibrahim-Hashim, Arig; Wojtkowiak, Jonathan W; de Lourdes Coelho Ribeiro, Maria; Estrella, Veronica; Bailey, Kate M; Cornnell, Heather H; Gatenby, Robert A; Gillies, Robert J

    2011-11-19

    Malignant tumor cells typically metabolize glucose anaerobically to lactic acid even under normal oxygen tension, a phenomenon called aerobic glycolysis or the Warburg effect. This results in increased acid production and the acidification of the extracellular microenvironment in solid tumors. H+ ions tend to flow along concentration gradients into peritumoral normal tissue, causing extracellular matrix degradation and increased tumor cell motility, thus promoting invasion and metastasis. We have shown that reducing this acidity with sodium bicarbonate buffer decreases the metastatic fitness of circulating tumor cells in prostate cancer and other cancer models. Mathematical models of the tumor-host dynamics predicted that buffers with a pKa around 7 will be more effective in reducing intra- and peri-tumoral acidosis, and thus possibly more effective in inhibiting tumor metastasis, than sodium bicarbonate, which has a pKa around 6. Here we test this prediction by examining the efficacy of free base lysine, a non-bicarbonate/non-volatile buffer with a higher pKa (~10), in a prostate tumor metastasis model. Oxygen consumption and acid production rates of PC3M prostate cancer cells and normal prostate cells were determined using the Seahorse Extracellular Flux (XF-96) analyzer. The in vivo effect of 200 mM lysine, started four days prior to inoculation, on inhibition of metastasis was examined in the PC3M-LUC-C6 prostate cancer model using SCID mice. Metastases were followed by bioluminescence imaging. PC3M prostate cancer cells are highly acidic in comparison to a normal prostate cell line, indicating that reduction of intra- and peri-tumoral acidosis should inhibit metastasis formation. In vivo administration of 200 mM free base lysine increased survival and reduced metastasis. PC3M prostate cancer cells are highly glycolytic and produce large amounts of acid when compared to normal prostate cells. Administration of the non-volatile buffer decreased growth of metastases and improved survival.

  16. Application of the Modified Vlasov Model to the Free Vibration Analysis of Thick Plates Resting on Elastic Foundations

    Directory of Open Access Journals (Sweden)

    Korhan Ozgan

    2009-01-01

    Full Text Available The Modified Vlasov Model is applied to the free vibration analysis of thick plates resting on elastic foundations. The effects of the subsoil depth, plate dimensions and their ratio, the value of the vertical deformation parameter within the subsoil on the frequency parameters of plates on elastic foundations are investigated. A four-noded, twelve degrees of freedom quadrilateral finite element (PBQ4 is used for plate bending analysis based on Mindlin plate theory which is effectively applied to the analysis of thin and thick plates when selective reduced integration technique is used. The first ten natural frequency parameters are presented in tabular and graphical forms to show the effects of the parameters considered in the study. It is concluded that the effect of the subsoil depth on the frequency parameters of the plates on elastic foundation is generally larger than that of the other parameters considered in the study.

  17. Dissolution process analysis using model-free Noyes-Whitney integral equation.

    Science.gov (United States)

    Hattori, Yusuke; Haruna, Yoshimasa; Otsuka, Makoto

    2013-02-01

    The drug dissolution process of solid dosages is theoretically described by the Noyes-Whitney-Nernst equation. However, the analysis of the process is usually demonstrated assuming particular models. Normally, such model-dependent methods are idealized and subject to limitations. In this study, a Noyes-Whitney integral equation was proposed and applied to represent the drug dissolution profiles of a solid formulation via the non-linear least squares (NLLS) method. The integral equation is a model-free formula involving the dissolution rate constant as a parameter. In the present study, several solid formulations were prepared via changing the blending time of magnesium stearate (MgSt) with theophylline monohydrate, α-lactose monohydrate, and crystalline cellulose. The formula could excellently represent the dissolution profile, and thereby the rate constant and specific surface area could be obtained by the NLLS method. Since the long time blending coated the particle surface with MgSt, it was found that the water permeation was disturbed by its layer dissociating into disintegrant particles. In the end, the solid formulations were not disintegrated; however, the specific surface area gradually increased during the process of dissolution. The X-ray CT observation supported this result and demonstrated that the rough surface was dominant as compared to dissolution, and thus, specific surface area of the solid formulation gradually increased. Copyright © 2012 Elsevier B.V. All rights reserved.
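
    The paper uses an integral form of the Noyes-Whitney equation with a time-varying surface area, but the general idea of fitting a dissolution profile by non-linear least squares can be sketched with the classic constant-surface-area solution C(t) = Cs (1 - exp(-k t)). The data and initial guesses below are illustrative, not measurements from the study.

      import numpy as np
      from scipy.optimize import curve_fit

      def noyes_whitney(t, cs, k):
          """Dissolved concentration for the classic Noyes-Whitney law dC/dt = k (Cs - C)."""
          return cs * (1.0 - np.exp(-k * t))

      # Illustrative dissolution data (time in minutes, concentration in mg/mL).
      t = np.array([0, 5, 10, 20, 30, 45, 60, 90], dtype=float)
      c = np.array([0.0, 0.9, 1.6, 2.6, 3.2, 3.7, 3.9, 4.0])

      (cs_fit, k_fit), _ = curve_fit(noyes_whitney, t, c, p0=[4.0, 0.05])
      print(cs_fit, k_fit)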

  18. Thermodynamic analysis of regulation in metabolic networks using constraint-based modeling

    Directory of Open Access Journals (Sweden)

    Mahadevan Radhakrishnan

    2010-05-01

    Full Text Available Abstract Background Geobacter sulfurreducens is a member of the Geobacter species, which are capable of oxidation of organic waste coupled to the reduction of heavy metals and electrode with applications in bioremediation and bioenergy generation. While the metabolism of this organism has been studied through the development of a stoichiometry based genome-scale metabolic model, the associated regulatory network has not yet been well studied. In this manuscript, we report on the implementation of a thermodynamics based metabolic flux model for Geobacter sulfurreducens. We use this updated model to identify reactions that are subject to regulatory control in the metabolic network of G. sulfurreducens using thermodynamic variability analysis. Findings As a first step, we have validated the regulatory sites and bottleneck reactions predicted by the thermodynamic flux analysis in E. coli by evaluating the expression ranges of the corresponding genes. We then identified ten reactions in the metabolic network of G. sulfurreducens that are predicted to be candidates for regulation. We then compared the free energy ranges for these reactions with the corresponding gene expression fold changes under conditions of different environmental and genetic perturbations and show that the model predictions of regulation are consistent with data. In addition, we also identify reactions that operate close to equilibrium and show that the experimentally determined exchange coefficient (a measure of reversibility) is significant for these reactions. Conclusions Application of the thermodynamic constraints resulted in identification of potential bottleneck reactions not only from the central metabolism but also from the nucleotide and amino acid subsystems, thereby showing the highly coupled nature of the thermodynamic constraints. In addition, thermodynamic variability analysis serves as a valuable tool in estimating the ranges of ΔrG' of every reaction in the model.
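
    The quantity bounded by the thermodynamic variability analysis is the transformed reaction Gibbs energy, which depends on metabolite activities through the reaction quotient Q:

      \Delta_r G' = \Delta_r G'^{\circ} + R T \ln Q

    Reactions whose feasible range of Delta_r G' stays well away from zero are candidate regulatory (bottleneck) steps, while reactions whose range straddles zero operate close to equilibrium, which is how the abstract's two classes of reactions are distinguished.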

  19. Entropy-based model for miRNA isoform analysis.

    Directory of Open Access Journals (Sweden)

    Shengqin Wang

    Full Text Available MiRNAs have been widely studied due to their important post-transcriptional regulatory roles in gene expression. Many reports have demonstrated the evidence of miRNA isoform products (isomiRs) in high-throughput small RNA sequencing data. However, the biological function of these molecules is still not well investigated. Here, we developed a Shannon entropy-based model to estimate isomiR expression profiles of high-throughput small RNA sequencing data extracted from the miRBase webserver. By using the Kolmogorov-Smirnov statistical test (KS test), we demonstrated that the 5p and 3p miRNAs present more variants than the single arm miRNAs. We also found that the isomiR variant, except the 3' isomiR variant, is strongly correlated with the Minimum Free Energy (MFE) of the pre-miRNA, suggesting the intrinsic feature of pre-miRNA should be one of the important factors for miRNA regulation. The functional enrichment analysis showed that the miRNAs with high variation, particularly the 5' end variation, are enriched in a set of critical functions, supporting the view that these molecules are not randomly produced. Our results provide a probabilistic framework for miRNA isoform analysis, and give functional insights into pre-miRNA processing.

  20. Offset Free Tracking Predictive Control Based on Dynamic PLS Framework

    Directory of Open Access Journals (Sweden)

    Jin Xin

    2017-10-01

    Full Text Available This paper develops an offset free tracking model predictive control based on a dynamic partial least squares (PLS) framework. First, a state space model is used as the inner model of PLS to describe the dynamic system, where a subspace identification method is used to identify the inner model. Based on the obtained model, multiple independent model predictive control (MPC) controllers are designed. Due to the decoupling character of PLS, these controllers run separately, which is suitable for a distributed control framework. In addition, the increment of the inner model output is considered in the cost function of the MPC, which introduces integral action in the controller. Hence, the offset free tracking performance is guaranteed. The results of an industry background simulation demonstrate the effectiveness of the proposed method.

  1. Textural features of dynamic contrast-enhanced MRI derived model-free and model-based parameter maps in glioma grading.

    Science.gov (United States)

    Xie, Tian; Chen, Xiao; Fang, Jingqin; Kang, Houyi; Xue, Wei; Tong, Haipeng; Cao, Peng; Wang, Sumei; Yang, Yizeng; Zhang, Weiguo

    2018-04-01

    Presurgical glioma grading by dynamic contrast-enhanced MRI (DCE-MRI) has unresolved issues. The aim of this study was to investigate the ability of textural features derived from pharmacokinetic model-based or model-free parameter maps of DCE-MRI in discriminating between different grades of gliomas, and their correlation with pathological index. Retrospective. Forty-two adults with brain gliomas. 3.0T, including conventional anatomic sequences and DCE-MRI sequences (variable flip angle T1-weighted imaging and three-dimensional gradient echo volumetric imaging). Regions of interest on the cross-sectional images with maximal tumor lesion. Five commonly used textural features, including Energy, Entropy, Inertia, Correlation, and Inverse Difference Moment (IDM), were generated. All textural features of model-free parameters (initial area under curve [IAUC], maximal signal intensity [Max SI], maximal up-slope [Max Slope]) could effectively differentiate between grade II (n = 15), grade III (n = 13), and grade IV (n = 14) gliomas (P textural features, Entropy and IDM, of four DCE-MRI parameters, including Max SI, Max Slope (model-free parameters), vp (Extended Tofts), and vp (Patlak) could differentiate grade III and IV gliomas (P textural features of any DCE-MRI parameter maps could discriminate between subtypes of grade II and III gliomas (P features revealed relatively lower inter-observer agreement. No significant correlation was found between microvascular density and textural features, compared with a moderate correlation found between cellular proliferation index and those features. Textural features of DCE-MRI parameter maps displayed a good ability in glioma grading. 3 Technical Efficacy: Stage 2 J. Magn. Reson. Imaging 2018;47:1099-1111. © 2017 International Society for Magnetic Resonance in Medicine.

  2. General Form of Model-Free Control Law and Convergence Analyzing

    Directory of Open Access Journals (Sweden)

    Xiuying Li

    2012-01-01

    Full Text Available The general form of the model-free control law is introduced, and its convergence is analyzed. Firstly, the necessity of improving the basic form of the model-free control law is explained, and the functional combination method is presented as the approach to improvement. Then, a series of sufficient conditions for convergence is given. The analysis shows that these conditions can easily be satisfied in engineering practice.

  3. Trojan detection model based on network behavior analysis

    International Nuclear Information System (INIS)

    Liu Junrong; Liu Baoxu; Wang Wenjin

    2012-01-01

    Based on the analysis of existing Trojan detection technology, this paper presents a Trojan detection model based on network behavior analysis. First, an abstract description of Trojan network behavior is given; then a characteristic behavior library is established according to certain rules, and the support vector machine algorithm is used to determine whether a Trojan intrusion has occurred. Finally, intrusion detection experiments show that this model can effectively detect Trojans. (authors)
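
    The final classification step, deciding from behavioural features whether observed traffic matches the Trojan behaviour library, can be sketched with a standard support vector machine. The feature names and values below are hypothetical placeholders, not the feature set or data used in the paper.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      # Hypothetical behavioural features per connection:
      # [outbound bytes/s, connection duration (s), beaconing regularity score]
      X = np.array([
          [200, 3600, 0.95], [150, 7200, 0.90], [180, 5400, 0.92],   # Trojan-like
          [5000, 30, 0.10], [12000, 60, 0.05], [800, 120, 0.20],     # benign
      ])
      y = np.array([1, 1, 1, 0, 0, 0])   # 1 = Trojan behaviour, 0 = normal

      clf = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
      clf.fit(X, y)
      print(clf.predict([[170, 6000, 0.93]]))   # expected: [1]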

  4. An Exact and Grid-free Numerical Scheme for the Hybrid Two Phase Traffic Flow Model Based on the Lighthill-Whitham-Richards Model with Bounded Acceleration

    KAUST Repository

    Qiu, Shanwen

    2012-07-01

    In this article, we propose a new grid-free and exact solution method for computing solutions associated with a hybrid traffic flow model based on the Lighthill-Whitham-Richards (LWR) partial differential equation. In this hybrid flow model, the vehicles satisfy the LWR equation whenever possible, and have a fixed acceleration otherwise. We first present a grid-free solution method for the LWR equation based on the minimization of component functions. We then show that this solution method can be extended to compute the solutions to the hybrid model by proper modification of the component functions, for any concave fundamental diagram. We derive these functions analytically for the specific case of a triangular fundamental diagram. We also show that the proposed computational method can handle fixed or moving bottlenecks.
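
    For reference, the LWR model is the scalar conservation law below, in which rho is the vehicle density and Q(rho) the fundamental diagram; a triangular diagram, as used in the article's analytical derivation, is piecewise linear in rho (symbols here are generic):

      \frac{\partial \rho}{\partial t} + \frac{\partial Q(\rho)}{\partial x} = 0,
      \qquad
      Q(\rho) =
      \begin{cases}
        v_f\, \rho, & 0 \le \rho \le \rho_c, \\
        w\,(\rho_{\max} - \rho), & \rho_c < \rho \le \rho_{\max},
      \end{cases}

    with v_f the free-flow speed, w the congestion wave speed and rho_c the critical density at which the two branches meet.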

  5. Chemometric analysis of minerals in gluten-free products.

    Science.gov (United States)

    Gliszczyńska-Świgło, Anna; Klimczak, Inga; Rybicka, Iga

    2018-06-01

    Numerous studies indicate mineral deficiencies in people on a gluten-free (GF) diet. These deficiencies may indicate that GF products are a less valuable source of minerals than gluten-containing products. In the study, the nutritional quality of 50 GF products is discussed taking into account the nutritional requirements for minerals expressed as percentage of recommended daily allowance (%RDA) or percentage of adequate intake (%AI) for a model celiac patient. Elements analyzed were calcium, potassium, magnesium, sodium, copper, iron, manganese, and zinc. Analysis of %RDA or %AI was performed using principal component analysis (PCA) and hierarchical cluster analysis (HCA). Using PCA, the differentiation between products based on rice, corn, potato, GF wheat starch and based on buckwheat, chickpea, millet, oats, amaranth, teff, quinoa, chestnut, and acorn was possible. In the HCA, four clusters were created. The main criterion determining the adherence of the sample to the cluster was the content of all minerals included to HCA (K, Mg, Cu, Fe, Mn); however, only the Mn content differentiated four formed groups. GF products made of buckwheat, chickpea, millet, oats, amaranth, teff, quinoa, chestnut, and acorn are better source of minerals than based on other GF raw materials, what was confirmed by PCA and HCA. © 2017 Society of Chemical Industry. © 2017 Society of Chemical Industry.
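
    The PCA and HCA steps described above are standard chemometric operations. The sketch below shows how such an analysis of a %RDA table could be set up; the random placeholder matrix stands in for the study's 50-product dataset and carries no real nutritional information.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler
      from scipy.cluster.hierarchy import linkage, fcluster

      # Placeholder %RDA matrix: 50 products x 8 minerals (Ca, K, Mg, Na, Cu, Fe, Mn, Zn).
      rng = np.random.default_rng(0)
      X = rng.random((50, 8)) * 40

      Xs = StandardScaler().fit_transform(X)

      # PCA: project products onto the first two principal components.
      scores = PCA(n_components=2).fit_transform(Xs)

      # HCA: Ward linkage, cut into four clusters as in the study.
      clusters = fcluster(linkage(Xs, method='ward'), t=4, criterion='maxclust')
      print(scores[:3], np.bincount(clusters)[1:])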

  6. Free vibration analysis of embedded magneto-electro-thermo-elastic cylindrical nanoshell based on the modified couple stress theory

    Science.gov (United States)

    Ghadiri, Majid; Safarpour, Hamed

    2016-09-01

    In this paper, size-dependent effect of an embedded magneto-electro-elastic (MEE) nanoshell subjected to thermo-electro-magnetic loadings on free vibration behavior is investigated. Also, the surrounding elastic medium has been considered as the model of Winkler characterized by the spring. The size-dependent MEE nanoshell is investigated on the basis of the modified couple stress theory. Taking attention to the first-order shear deformation theory (FSDT), the modeled nanoshell and its equations of motion are derived using principle of minimum potential energy. The accuracy of the presented model is validated with some cases in the literature. Finally, using the Navier-type method, an analytical solution of governing equations for vibration behavior of simply supported MEE cylindrical nanoshell under combined loadings is presented and the effects of material length scale parameter, temperature changes, external electric potential, external magnetic potential, circumferential wave numbers, constant of spring, shear correction factor and length-to-radius ratio of the nanoshell on natural frequency are identified. Since there has been no research about size-dependent analysis MEE cylindrical nanoshell under combined loadings based on FSDT, numerical results are presented to be served as benchmarks for future analysis of MEE nanoshells using the modified couple stress theory.

  7. A comparison of scaffold-free and scaffold-based reconstructed human skin models as alternatives to animal use.

    Science.gov (United States)

    Kinikoglu, Beste

    2017-12-01

    Tissue engineered full-thickness human skin substitutes have various applications in the clinic and in the laboratory, such as in the treatment of burns or deep skin defects, and as reconstructed human skin models in the safety testing of drugs and cosmetics and in the fundamental study of skin biology and pathology. So far, different approaches have been proposed for the generation of reconstructed skin, each with its own advantages and disadvantages. Here, the classic tissue engineering approach, based on cell-seeded polymeric scaffolds, is compared with the less-studied cell self-assembly approach, where the cells are coaxed to synthesise their own extracellular matrix (ECM). The resulting full-thickness human skin substitutes were analysed by means of histological and immunohistochemical analyses. It was found that both the scaffold-free and the scaffold-based skin equivalents successfully mimicked the functionality and morphology of native skin, with complete epidermal differentiation (as determined by the expression of filaggrin), the presence of a continuous basement membrane expressing collagen VII, and new ECM deposition by dermal fibroblasts. On the other hand, the scaffold-free model had a thicker epidermis and a significantly higher number of Ki67-positive proliferative cells, indicating a higher capacity for self-renewal, as compared to the scaffold-based model. 2017 FRAME.

  8. Numerical analysis of free surface instabilities in the IFMIF lithium target

    International Nuclear Information System (INIS)

    Gordeev, S.; Heinzel, V.; Moeslang, A.

    2007-01-01

    The International Fusion Materials Irradiation Facility (IFMIF) uses a high speed (10-20 m/s) lithium (Li) jet flow as a target for two 40 MeV/125 mA deuteron beams. The major function of the Li target is to provide a stable Li jet for the production of an intense neutron flux. Understanding the lithium jet behaviour and eliminating free-surface flow instabilities require a detailed analysis of the Li jet flow. Different kinds of instability mechanisms in the liquid jet flow have been evaluated and classified based on analytical and experimental data. Numerical investigations of the target free surface flow have been performed. Previous numerical investigations have shown in principle the suitability of the CFD code Star-CD for the simulation of the Li-target flow. The main objective of this study is a detailed numerical analysis of instabilities in the Li-jet flow caused by boundary layer relaxation near the nozzle exit, transition to turbulent flow and back wall curvature. A number of CFD models are developed to investigate the formation of instabilities on the target surface. Turbulence models are validated against the experimental data. Experimental observations have shown that a change of the nozzle geometry at the outlet, such as a slight divergence of the nozzle surfaces or nozzle edge defects, causes flow separation and the occurrence of longitudinal periodic structures on the free surface with an amplitude up to 5 mm. Target surface fluctuations of this magnitude can lead to penetration of the deuteron beam into the target structure and cause local overheating of the back plate. An analysis of large instabilities in the Li-target flow, combined with the heat distribution in lithium depending on the free surface shape, is performed in this study. (orig.)

  9. Similar words analysis based on POS-CBOW language model

    Directory of Open Access Journals (Sweden)

    Dongru RUAN

    2015-10-01

    Full Text Available Similar words analysis is one of the important aspects in the field of natural language processing, and it has important research and application value in text classification, machine translation and information recommendation. Focusing on the features of Sina Weibo's short texts, this paper presents a language model named POS-CBOW, which is a continuous bag-of-words language model with a filtering layer and a part-of-speech tagging layer. The proposed approach can adjust the word vectors' similarity according to the cosine similarity and the word vectors' part-of-speech metrics. It can also filter the similar word sets on the basis of the statistical analysis model. The experimental results show that the similar words analysis algorithm based on the proposed POS-CBOW language model is better than that based on the traditional CBOW language model.

  10. Dynamic Chest Image Analysis: Model-Based Perfusion Analysis in Dynamic Pulmonary Imaging

    Directory of Open Access Journals (Sweden)

    Kiuru Aaro

    2003-01-01

    Full Text Available The "Dynamic Chest Image Analysis" project aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected with the dynamic pulmonary imaging technique. We have proposed and evaluated a multiresolutional method with an explicit ventilation model for ventilation analysis. This paper presents a new model-based method for pulmonary perfusion analysis. According to perfusion properties, we first devise a novel mathematical function to form a perfusion model. A simple yet accurate approach is further introduced to extract cardiac systolic and diastolic phases from the heart, so that this cardiac information may be utilized to accelerate the perfusion analysis and improve its sensitivity in detecting pulmonary perfusion abnormalities. This makes perfusion analysis not only fast but also robust in computation; consequently, perfusion analysis becomes computationally feasible without using contrast media. Our clinical case studies with 52 patients show that this technique is effective for detecting pulmonary embolism even without using contrast media, demonstrating consistent correlations with computed tomography (CT) and nuclear medicine (NM) studies. This fluoroscopical examination takes only about 2 seconds for a perfusion study, with only a low radiation dose to the patient, and involves no preparation, no radioactive isotopes, and no contrast media.

  11. A scale-free structure prior for graphical models with applications in functional genomics.

    Directory of Open Access Journals (Sweden)

    Paul Sheridan

    Full Text Available The problem of reconstructing large-scale, gene regulatory networks from gene expression data has garnered considerable attention in bioinformatics over the past decade, with the graphical modeling paradigm having emerged as a popular framework for inference. Analysis in a full Bayesian setting is contingent upon the assignment of a so-called structure prior, a probability distribution on networks encoding a priori biological knowledge either in the form of supplemental data or high-level topological features. A key topological consideration is that a wide range of cellular networks are approximately scale-free, meaning that the fraction, P(k), of nodes in a network with degree k is roughly described by a power law P(k) ∝ k^(-γ) with exponent γ between 2 and 3. The standard practice, however, is to utilize a random structure prior, which favors networks with binomially distributed degree distributions. In this paper, we introduce a scale-free structure prior for graphical models based on the formula for the probability of a network under a simple scale-free network model. Unlike the random structure prior, its scale-free counterpart requires a node labeling as a parameter. In order to use this prior for large-scale network inference, we design a novel Metropolis-Hastings sampler for graphical models that includes a node labeling as a state space variable. In a simulation study, we demonstrate that the scale-free structure prior outperforms the random structure prior at recovering scale-free networks while retaining the ability to recover random networks. We then estimate a gene association network from gene expression data taken from a breast cancer tumor study, showing that the scale-free structure prior recovers hubs, including the previously unknown hub SLC39A6, which is a zinc transporter that has been implicated in the spread of breast cancer to the lymph nodes. Our analysis of the breast cancer expression data underscores the value of the scale-free
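
    To illustrate the scale-free idea informally, the sketch below scores a network by a simplified power-law-style log term over its degree sequence and compares a Barabasi-Albert graph with an equally sized random graph (networkx assumed). This is only a toy score; the paper's structure prior and its node labelling are more involved.

    ```python
    import numpy as np
    import networkx as nx

    def powerlaw_log_score(G, gamma=2.5):
        """Toy scale-free score: -gamma * sum(log(degree)), favouring
        heavy-tailed degree sequences; not the paper's exact prior."""
        degrees = np.array([d for _, d in G.degree()], dtype=float)
        degrees = np.clip(degrees, 1.0, None)   # guard against isolated nodes
        return -gamma * np.sum(np.log(degrees))

    scale_free = nx.barabasi_albert_graph(200, 2, seed=1)
    random_net = nx.gnm_random_graph(200, scale_free.number_of_edges(), seed=1)
    print(powerlaw_log_score(scale_free), powerlaw_log_score(random_net))
    ```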

  12. Fundamental Study on the Impact of Gluten-Free Starches on the Quality of Gluten-Free Model Breads

    Directory of Open Access Journals (Sweden)

    Stefan W. Horstmann

    2016-04-01

    Full Text Available Starch is widely used as an ingredient and significantly contributes to the texture, appearance, and overall acceptability of cereal based foods, playing an important role due to its ability to form a matrix entrapping air bubbles. A detailed characterisation of five gluten-free starches (corn, wheat, rice, tapioca, potato) was performed in this study. In addition, the influence of these starches, with different compositional and morphological properties, was evaluated on a simple gluten-free model bread system. The morphological characterisation, evaluated using scanning electron microscopy, revealed some similarities among the starches, which could be linked to the baking performance of the breads. Moreover, the lipid content, though representing one of the minor components in starch, was found to have an influence on pasting, bread making, and staling. Quality differences between cereal, root, and tuber starch based breads were observed. However, under the baking conditions used, gluten-free rendered wheat starch performed best, followed by potato starch, in terms of loaf volume and cell structure. Tapioca starch and rice starch based breads were not further analysed, due to an inferior baking performance. This is the first study to evaluate gluten-free starch on a simple model bread system.

  13. Discrete Discriminant analysis based on tree-structured graphical models

    DEFF Research Database (Denmark)

    Perez de la Cruz, Gonzalo; Eslava, Guillermina

    The purpose of this paper is to illustrate the potential use of discriminant analysis based on tree-structured graphical models for discrete variables. This is done by comparing its empirical performance, using estimated error rates, for real and simulated data. The results show that discriminant analysis based on tree-structured graphical models is a simple nonlinear method competitive with, and sometimes superior to, other well-known linear methods such as those assuming mutual independence between variables and linear logistic regression.

  14. Derivation of Continuum Models from An Agent-based Cancer Model: Optimization and Sensitivity Analysis.

    Science.gov (United States)

    Voulgarelis, Dimitrios; Velayudhan, Ajoy; Smith, Frank

    2017-01-01

    Agent-based models provide a formidable tool for exploring the complex and emergent behaviour of biological systems and produce accurate results, but with the drawback of requiring substantial computational power and time for subsequent analysis. On the other hand, equation-based models can more easily be used for complex analysis on a much shorter timescale. This paper formulates an ordinary differential equation and stochastic differential equation model to capture the behaviour of an existing agent-based model of tumour cell reprogramming and applies it to optimization of possible treatment as well as dosage sensitivity analysis. For certain values of the parameter space, a close match between the equation-based and agent-based models is achieved. The need for division of labour between the two approaches is explored. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
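
    A minimal sketch of the general idea of replacing an agent-based model with an equation-based counterpart, assuming SciPy; the two-compartment ODE, its parameters and the reprogramming term below are hypothetical and are not the published system.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Hypothetical ODE surrogate: untransformed tumour cells N and reprogrammed
    # cells R, with logistic growth, a treatment-dependent reprogramming rate u,
    # and a clearance rate d. Illustrative only.
    def rhs(t, y, r=0.5, K=1e4, u=0.1, d=0.05):
        N, R = y
        dN = r * N * (1.0 - (N + R) / K) - u * N
        dR = u * N - d * R
        return [dN, dR]

    sol = solve_ivp(rhs, t_span=(0.0, 50.0), y0=[100.0, 0.0], max_step=0.5)
    print(sol.y[:, -1])   # final compartment sizes
    ```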

  15. Introduction, comparison, and validation of Meta‐Essentials: A free and simple tool for meta‐analysis

    Science.gov (United States)

    van Rhee, Henk; Hak, Tony

    2017-01-01

    We present a new tool for meta‐analysis, Meta‐Essentials, which is free of charge and easy to use. In this paper, we introduce the tool and compare its features to other tools for meta‐analysis. We also provide detailed information on the validation of the tool. Although free of charge and simple, Meta‐Essentials automatically calculates effect sizes from a wide range of statistics and can be used for a wide range of meta‐analysis applications, including subgroup analysis, moderator analysis, and publication bias analyses. The confidence interval of the overall effect is automatically based on the Knapp‐Hartung adjustment of the DerSimonian‐Laird estimator. However, more advanced meta‐analysis methods such as meta‐analytical structural equation modelling and meta‐regression with multiple covariates are not available. In summary, Meta‐Essentials may prove a valuable resource for meta‐analysts, including researchers, teachers, and students. PMID:28801932
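
    The random-effects computation mentioned above (DerSimonian-Laird between-study variance with the Knapp-Hartung adjustment) can be sketched in a few lines; the effect sizes and variances below are invented for illustration, and Meta-Essentials itself is a spreadsheet-based tool, not this code.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical study effect sizes and within-study variances.
    y = np.array([0.30, 0.12, 0.45, 0.26, 0.05])
    v = np.array([0.02, 0.03, 0.05, 0.01, 0.04])
    k = len(y)

    # DerSimonian-Laird estimate of the between-study variance tau^2.
    w = 1.0 / v
    mu_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)

    # Random-effects pooled estimate.
    w_star = 1.0 / (v + tau2)
    mu_re = np.sum(w_star * y) / np.sum(w_star)

    # Knapp-Hartung adjusted variance and t-based confidence interval.
    var_kh = np.sum(w_star * (y - mu_re) ** 2) / ((k - 1) * np.sum(w_star))
    half_width = stats.t.ppf(0.975, df=k - 1) * np.sqrt(var_kh)
    print(mu_re, (mu_re - half_width, mu_re + half_width))
    ```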

  16. A double-loop adaptive sampling approach for sensitivity-free dynamic reliability analysis

    International Nuclear Information System (INIS)

    Wang, Zequn; Wang, Pingfeng

    2015-01-01

    Dynamic reliability measures reliability of an engineered system considering time-variant operation conditions and component deterioration. Due to high computational costs, conducting dynamic reliability analysis at an early system design stage remains challenging. This paper presents a confidence-based meta-modeling approach, referred to as double-loop adaptive sampling (DLAS), for efficient sensitivity-free dynamic reliability analysis. The DLAS builds a Gaussian process (GP) model sequentially to approximate extreme system responses over time, so that Monte Carlo simulation (MCS) can be employed directly to estimate dynamic reliability. A generic confidence measure is developed to evaluate the accuracy of dynamic reliability estimation while using the MCS approach based on the developed GP models. A double-loop adaptive sampling scheme is developed to efficiently update the GP model in a sequential manner, by considering system input variables and time concurrently in two sampling loops. The model updating process using the developed sampling scheme can be terminated once the user-defined confidence target is satisfied. The developed DLAS approach eliminates the computationally expensive sensitivity analysis process, thus substantially improving the efficiency of dynamic reliability analysis. Three case studies are used to demonstrate the efficacy of DLAS for dynamic reliability analysis. - Highlights: • Developed a novel adaptive sampling approach for dynamic reliability analysis. • Developed a new metric to quantify the accuracy of dynamic reliability estimation. • Developed a new sequential sampling scheme to efficiently update surrogate models. • Three case studies were used to demonstrate the efficacy of the new approach. • Case study results showed substantially enhanced efficiency with high accuracy
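
    The surrogate-plus-MCS idea behind DLAS can be sketched for a static toy problem (scikit-learn assumed); the limit-state function, sample sizes, and kernel below are hypothetical, and the confidence measure and double-loop sampling of DLAS are not reproduced.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # Toy limit-state function g(x); "failure" when g(x) < 0.
    def g(x):
        return 3.0 - x[:, 0] ** 2 - 0.5 * x[:, 1]

    rng = np.random.default_rng(1)
    X_train = rng.normal(size=(30, 2))               # small training design
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
    gp.fit(X_train, g(X_train))

    # Monte Carlo simulation runs on the cheap surrogate, not the true model.
    X_mc = rng.normal(size=(200_000, 2))
    pf = np.mean(gp.predict(X_mc) < 0.0)
    print("estimated failure probability:", pf)
    ```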

  17. Evaporation model for beam based additive manufacturing using free surface lattice Boltzmann methods

    International Nuclear Information System (INIS)

    Klassen, Alexander; Scharowsky, Thorsten; Körner, Carolin

    2014-01-01

    Evaporation plays an important role in many technical applications including beam-based additive manufacturing processes, such as selective electron beam or selective laser melting (SEBM/SLM). In this paper, we describe an evaporation model which we employ within the framework of a two-dimensional free surface lattice Boltzmann method. With this method, we solve the hydrodynamics as well as thermodynamics of the molten material taking into account the mass and energy losses due to evaporation and the recoil pressure acting on the melt pool. Validation of the numerical model is performed by measuring maximum melt depths and evaporative losses in samples of pure titanium and Ti–6Al–4V molten by an electron beam. Finally, the model is applied to create processing maps for an SEBM process. The results predict that the penetration depth of the electron beam, which is a function of the acceleration voltage, has a significant influence on evaporation effects. (paper)

  18. Sideband instability analysis based on a one-dimensional high-gain free electron laser model

    Science.gov (United States)

    Tsai, Cheng-Ying; Wu, Juhao; Yang, Chuan; Yoon, Moohyun; Zhou, Guanqun

    2017-12-01

    When an untapered high-gain free electron laser (FEL) reaches saturation, the exponential growth ceases and the radiation power starts to oscillate about an equilibrium. The FEL radiation power or efficiency can be increased by undulator tapering. For a high-gain tapered FEL, although the power is enhanced after the first saturation, it is known that there is a so-called second saturation where the FEL power growth stops even with a tapered undulator system. The sideband instability is one of the primary reasons leading to this second saturation. In this paper, we provide a quantitative analysis on how the gradient of undulator tapering can mitigate the sideband growth. The study is carried out semianalytically and compared with one-dimensional numerical simulations. The physical parameters are taken from Linac Coherent Light Source-like electron bunch and undulator systems. The sideband field gain and the evolution of the radiation spectra for different gradients of undulator tapering are examined. It is found that a strong undulator tapering (˜10 %) provides effective suppression of the sideband instability in the postsaturation regime.

  19. A stochastic context free grammar based framework for analysis of protein sequences

    Directory of Open Access Journals (Sweden)

    Nebel Jean-Christophe

    2009-10-01

    Full Text Available Abstract Background In the last decade, there have been many applications of formal language theory in bioinformatics such as RNA structure prediction and detection of patterns in DNA. However, in the field of proteomics, the size of the protein alphabet and the complexity of the relationships between amino acids have mainly limited the application of formal language theory to the production of grammars whose expressive power is not higher than stochastic regular grammars. However, these grammars, like other state-of-the-art methods, cannot cover any higher-order dependencies such as nested and crossing relationships that are common in proteins. In order to overcome some of these limitations, we propose a Stochastic Context Free Grammar based framework for the analysis of protein sequences where grammars are induced using a genetic algorithm. Results This framework was implemented in a system aiming at the production of binding site descriptors. These descriptors not only allow detection of protein regions that are involved in these sites, but also provide insight into their structure. Grammars were induced using quantitative properties of amino acids to deal with the size of the protein alphabet. Moreover, we imposed some structural constraints on grammars to reduce the extent of the rule search space. Finally, grammars based on different properties were combined to convey as much information as possible. Evaluation was performed on sites of various sizes and complexity described either by PROSITE patterns, domain profiles or a set of patterns. The results show that the produced binding site descriptors are human-readable and, hence, highlight biologically meaningful features. Moreover, they achieve good accuracy in both annotation and detection. In addition, findings suggest that, unlike current state-of-the-art methods, our system may be particularly suited to deal with patterns shared by non-homologous proteins. Conclusion A new Stochastic Context Free
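
    A toy stochastic context-free grammar can be written down with NLTK to show the kind of nested dependency that regular grammars cannot express; the two-symbol alphabet, rules, and probabilities below are invented for illustration and are unrelated to the grammars induced by the paper's genetic algorithm.

    ```python
    import nltk
    from nltk.parse import ViterbiParser

    # Toy SCFG over an abstract residue-property alphabet
    # (h = hydrophobic, p = polar) with a nested pairing rule.
    grammar = nltk.PCFG.fromstring("""
        S -> 'h' S 'p' [0.3]
        S -> 'h' 'p' [0.3]
        S -> 'h' [0.2]
        S -> 'p' [0.2]
    """)

    parser = ViterbiParser(grammar)
    for tree in parser.parse(list("hhpp")):
        print(tree, tree.prob())
    ```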

  20. Wavelet based free-form deformations for nonrigid registration

    Science.gov (United States)

    Sun, Wei; Niessen, Wiro J.; Klein, Stefan

    2014-03-01

    In nonrigid registration, deformations may take place on the coarse and fine scales. For the conventional B-splines based free-form deformation (FFD) registration, these coarse- and fine-scale deformations are all represented by basis functions of a single scale. Meanwhile, wavelets have been proposed as a signal representation suitable for multi-scale problems. Wavelet analysis leads to a unique decomposition of a signal into its coarse- and fine-scale components. Potentially, this could therefore be useful for image registration. In this work, we investigate whether a wavelet-based FFD model has advantages for nonrigid image registration. We use a B-splines based wavelet, as defined by Cai and Wang [1]. This wavelet is expressed as a linear combination of B-spline basis functions. Derived from the original B-spline function, this wavelet is smooth, differentiable, and compactly supported. The basis functions of this wavelet are orthogonal across scales in Sobolev space. This wavelet was previously used for registration in computer vision, in 2D optical flow problems [2], but it was not compared with the conventional B-spline FFD in medical image registration problems. An advantage of choosing this B-splines based wavelet model is that the space of allowable deformation is exactly equivalent to that of the traditional B-spline. The wavelet transformation is essentially a (linear) reparameterization of the B-spline transformation model. Experiments on 10 CT lung and 18 T1-weighted MRI brain datasets show that wavelet based registration leads to smoother deformation fields than traditional B-splines based registration, while achieving better accuracy.

  1. Has Childhood Smoking Reduced Following Smoke-Free Public Places Legislation? A Segmented Regression Analysis of Cross-Sectional UK School-Based Surveys.

    Science.gov (United States)

    Katikireddi, Srinivasa Vittal; Der, Geoff; Roberts, Chris; Haw, Sally

    2016-07-01

    Smoke-free legislation has been a great success for tobacco control but its impact on smoking uptake remains under-explored. We investigated whether trends in smoking uptake amongst adolescents differed before and after the introduction of smoke-free legislation in the United Kingdom. Prevalence estimates for regular smoking were obtained from representative school-based surveys for the four countries of the United Kingdom. Post-intervention status was represented using a dummy variable and, to allow for a change in trend, the number of years since implementation was included. To estimate the association between smoke-free legislation and adolescent smoking, the percentage of regular smokers was modeled using linear regression adjusted for trends over time and country. All models were stratified by age (13 and 15 years) and sex. For 15-year-old girls, the implementation of smoke-free legislation in the United Kingdom was associated with a 4.3% reduction in the prevalence of regular smoking (P = .029). In addition, regular smoking fell by an additional 1.5% per annum post-legislation in this group (P = .005). Among 13-year-old girls, there was a reduction of 2.8% in regular smoking (P = .051), with no evidence of a change in trend post-legislation. Smaller and nonsignificant reductions in regular smoking were observed for 15- and 13-year-old boys (P = .175 and P = .113, respectively). Smoke-free legislation may help reduce smoking uptake amongst teenagers, with stronger evidence for an association seen in females. Further research that analyses longitudinal data across more countries is required. Previous research has established that smoke-free legislation has led to many improvements in population health, including reductions in heart attack, stroke, and asthma. However, the impacts of smoke-free legislation on the rates of smoking amongst children have been less investigated. Analysis of repeated cross-sectional surveys across the four countries of the United Kingdom
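
    The segmented-regression structure described above (a post-legislation dummy plus a years-since-implementation term) can be sketched with statsmodels; the yearly prevalence figures below are invented purely to show the model form and are not the survey estimates analysed in the paper.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical yearly regular-smoking prevalence (%) for one age/sex group,
    # with smoke-free legislation implemented in 2007.
    years = np.arange(1998, 2013)
    prevalence = np.array([24, 23, 23, 22, 21, 21, 20, 19, 19, 15, 14, 12, 11, 10, 9], float)

    df = pd.DataFrame({"prevalence": prevalence})
    df["time"] = years - years[0]                         # underlying trend
    df["post"] = (years >= 2007).astype(int)              # level shift after legislation
    df["years_since"] = np.clip(years - 2007, 0, None)    # change in trend after legislation

    X = sm.add_constant(df[["time", "post", "years_since"]])
    fit = sm.OLS(df["prevalence"], X).fit()
    print(fit.params)   # 'post' ~ immediate reduction, 'years_since' ~ extra annual change
    ```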

  2. Language acquisition is model-based rather than model-free.

    Science.gov (United States)

    Wang, Felix Hao; Mintz, Toben H

    2016-01-01

    Christiansen & Chater (C&C) propose that learning language is learning to process language. However, we believe that the general-purpose prediction mechanism they propose is insufficient to account for many phenomena in language acquisition. We argue from theoretical considerations and empirical evidence that many acquisition tasks are model-based, and that different acquisition tasks require different, specialized models.

  3. A chemo-mechanical free-energy-based approach to model durotaxis and extracellular stiffness-dependent contraction and polarization of cells.

    Science.gov (United States)

    Shenoy, Vivek B; Wang, Hailong; Wang, Xiao

    2016-02-06

    We propose a chemo-mechanical model based on stress-dependent recruitment of myosin motors to describe how the contractility, polarization and strain in cells vary with the stiffness of their surroundings and their shape. A contractility tensor, which depends on the distribution of myosin motors, is introduced to describe the chemical free energy of the cell due to myosin recruitment. We explicitly include the contributions to the free energy that arise from mechanosensitive signalling pathways (such as the SFX, Rho-Rock and MLCK pathways) through chemo-mechanical coupling parameters. Taking the variations of the total free energy, which consists of the chemical and mechanical components, in accordance with the second law of thermodynamics provides equations for the temporal evolution of the active stress and the contractility tensor. Following this approach, we are able to recover the well-known Hill relation for active stresses, based on the fundamental principles of irreversible thermodynamics rather than phenomenology. We have numerically implemented our free energy-based approach to model spatial distribution of strain and contractility in (i) cells supported by flexible microposts, (ii) cells on two-dimensional substrates, and (iii) cells in three-dimensional matrices. We demonstrate how the polarization of the cells and the orientation of stress fibres can be deduced from the eigenvalues and eigenvectors of the contractility tensor. Our calculations suggest that the chemical free energy of the cell decreases with the stiffness of the extracellular environment as the cytoskeleton polarizes in response to stress-dependent recruitment of molecular motors. The mechanical energy, which includes the strain energy and motor potential energy, however, increases with stiffness, but the overall energy is lower for cells in stiffer environments. This provides a thermodynamic basis for durotaxis, whereby cells preferentially migrate towards stiffer regions of the

  4. Generating clustered scale-free networks using Poisson based localization of edges

    Science.gov (United States)

    Türker, İlker

    2018-05-01

    We introduce a variety of network models using a Poisson-based edge localization strategy, which results in clustered scale-free topologies. We first verify the success of our localization strategy by realizing a variant of the well-known Watts-Strogatz model with an inverse approach, implying a small-world regime of rewiring from a random network through a regular one. We then apply the rewiring strategy to a pure Barabasi-Albert model and successfully achieve a small-world regime, with a limited capacity for the scale-free property. To imitate the high clustering property of scale-free networks with higher accuracy, we adapted the Poisson-based wiring strategy to a growing network with the ingredients of both preferential attachment and local connectivity. To achieve the collocation of these properties, we used a routine of flattening the edges array, sorting it, and applying a mixing procedure to assemble both global connections with preferential attachment and local clusters. As a result, we achieved clustered scale-free networks in a computational fashion, diverging from recent studies by following a simple but efficient approach.
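
    The two baseline models named in the abstract can be generated directly with networkx for comparison; this sketch shows only the standard Watts-Strogatz and Barabasi-Albert constructions, not the authors' Poisson-based edge-localization or mixing procedure.

    ```python
    import networkx as nx

    # Standard baselines: a small-world graph and a scale-free graph of equal size,
    # compared by average clustering and average shortest path length.
    ws = nx.connected_watts_strogatz_graph(n=1000, k=6, p=0.05, seed=42)
    ba = nx.barabasi_albert_graph(n=1000, m=3, seed=42)

    for name, G in [("Watts-Strogatz", ws), ("Barabasi-Albert", ba)]:
        print(name,
              "clustering:", round(nx.average_clustering(G), 3),
              "avg path length:", round(nx.average_shortest_path_length(G), 2))
    ```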

  5. The analysis of single-electron orbits in a free electron laser based upon a rectangular hybrid wiggler

    International Nuclear Information System (INIS)

    Kordbacheh, A.; Ghahremaninezhad, Roghayeh; Maraghechi, B.

    2012-01-01

    A three-dimensional analysis of a novel free-electron laser (FEL) based upon a rectangular hybrid wiggler (RHW) is presented. This RHW is designed in a configuration composed of rectangular rings with alternating ferrite and dielectric spacers immersed in a solenoidal magnetic field. An analytic model of RHW is introduced by solution of Laplace's equation for the magnetostatic fields under the appropriate boundary conditions. The single-electron orbits in combined RHW and axial guide magnetic fields are studied when only the first and the third spatial harmonic components of the RHW field are taken into account and the higher order terms are ignored. The results indicate that the third spatial harmonic leads to group III orbits with a strong negative mass regime particularly in large solenoidal magnetic fields. RHW is found to be a promising candidate with favorable characteristics to be used in microwave FEL.

  6. The analysis of single-electron orbits in a free electron laser based upon a rectangular hybrid wiggler

    Science.gov (United States)

    Kordbacheh, A.; Ghahremaninezhad, Roghayeh; Maraghechi, B.

    2012-09-01

    A three-dimensional analysis of a novel free-electron laser (FEL) based upon a rectangular hybrid wiggler (RHW) is presented. This RHW is designed in a configuration composed of rectangular rings with alternating ferrite and dielectric spacers immersed in a solenoidal magnetic field. An analytic model of RHW is introduced by solution of Laplace's equation for the magnetostatic fields under the appropriate boundary conditions. The single-electron orbits in combined RHW and axial guide magnetic fields are studied when only the first and the third spatial harmonic components of the RHW field are taken into account and the higher order terms are ignored. The results indicate that the third spatial harmonic leads to group III orbits with a strong negative mass regime particularly in large solenoidal magnetic fields. RHW is found to be a promising candidate with favorable characteristics to be used in microwave FEL.

  7. Total Variation Based Parameter-Free Model for Impulse Noise Removal

    DEFF Research Database (Denmark)

    Sciacchitano, Federica; Dong, Yiqiu; Andersen, Martin Skovgaard

    2017-01-01

    We propose a new two-phase method for reconstruction of blurred images corrupted by impulse noise. In the first phase, we use a noise detector to identify the pixels that are contaminated by noise, and then, in the second phase, we reconstruct the noisy pixels by solving an equality constrained total variation minimization problem that preserves the exact values of the noise-free pixels. For images that are only corrupted by impulse noise (i.e., not blurred) we apply the semismooth Newton's method to a reduced problem, and if the images are also blurred, we solve the equality constrained reconstruction problem using a first-order primal-dual algorithm. The proposed model improves the computational efficiency (in the denoising case) and has the advantage of being regularization parameter-free. Our numerical results suggest that the method is competitive in terms of its restoration capabilities...
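
    For orientation, standard total-variation denoising of an impulse-noise-corrupted image can be run with scikit-image as below; this is only the generic TV step and does not include the paper's noise detector, equality constraint, or parameter-free formulation.

    ```python
    from skimage import data
    from skimage.restoration import denoise_tv_chambolle
    from skimage.util import random_noise

    # Corrupt a test image with salt-and-pepper (impulse) noise, then apply
    # plain TV denoising; the weight here is chosen by hand, unlike the
    # regularization-parameter-free model in the paper.
    image = data.camera() / 255.0
    noisy = random_noise(image, mode="s&p", amount=0.1)
    denoised = denoise_tv_chambolle(noisy, weight=0.1)
    print(noisy.shape, float(denoised.min()), float(denoised.max()))
    ```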

  8. Free web-based modelling platform for managed aquifer recharge (MAR) applications

    Science.gov (United States)

    Stefan, Catalin; Junghanns, Ralf; Glaß, Jana; Sallwey, Jana; Fatkhutdinov, Aybulat; Fichtner, Thomas; Barquero, Felix; Moreno, Miguel; Bonilla, José; Kwoyiga, Lydia

    2017-04-01

    Managed aquifer recharge represents a valuable instrument for sustainable water resources management. The concept implies purposeful infiltration of surface water into the subsurface for later recovery or environmental benefits. Over decades, MAR schemes have been successfully installed worldwide for a variety of reasons: to maximize the natural storage capacity of aquifers, physical aquifer management, water quality management, and ecological benefits. The INOWAS-DSS platform provides a collection of free web-based tools for planning, management and optimization of the main components of MAR schemes. The tools are grouped into 13 specific applications that cover the most relevant challenges encountered at MAR sites, from both quantitative and qualitative perspectives. The applications include, among others, the optimization of MAR site location, the assessment of saltwater intrusion, the restoration of groundwater levels in overexploited aquifers, the maximization of the natural storage capacity of aquifers, the improvement of water quality, the design and operational optimization of MAR schemes, clogging development and risk assessment. The platform contains a collection of about 35 web-based tools of various degrees of complexity, which are either included in application-specific workflows or used as standalone modelling instruments. Among them are simple tools derived from data mining and empirical equations, analytical groundwater-related equations, as well as complex numerical flow and transport models (MODFLOW, MT3DMS and SEAWAT). To date, the simulation core of the INOWAS-DSS, which is based on the finite-difference groundwater flow model MODFLOW, has been implemented and runs on the web. A scenario analyser helps to easily set up and evaluate new management options as well as future developments such as land use and climate change, and compare them to previous scenarios. Additionally, simple tools such as analytical equations to assess saltwater intrusion are already running online

  9. Introduction, comparison, and validation of Meta-Essentials: A free and simple tool for meta-analysis.

    Science.gov (United States)

    Suurmond, Robert; van Rhee, Henk; Hak, Tony

    2017-12-01

    We present a new tool for meta-analysis, Meta-Essentials, which is free of charge and easy to use. In this paper, we introduce the tool and compare its features to other tools for meta-analysis. We also provide detailed information on the validation of the tool. Although free of charge and simple, Meta-Essentials automatically calculates effect sizes from a wide range of statistics and can be used for a wide range of meta-analysis applications, including subgroup analysis, moderator analysis, and publication bias analyses. The confidence interval of the overall effect is automatically based on the Knapp-Hartung adjustment of the DerSimonian-Laird estimator. However, more advanced meta-analysis methods such as meta-analytical structural equation modelling and meta-regression with multiple covariates are not available. In summary, Meta-Essentials may prove a valuable resource for meta-analysts, including researchers, teachers, and students. © 2017 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd.

  10. Model-free stabilization by extremum seeking

    CERN Document Server

    Scheinker, Alexander

    2017-01-01

    With this brief, the authors present algorithms for model-free stabilization of unstable dynamic systems. An extremum-seeking algorithm assigns the role of a cost function to the dynamic system’s control Lyapunov function (clf) aiming at its minimization. The minimization of the clf drives the clf to zero and achieves asymptotic stabilization. This approach does not rely on, or require knowledge of, the system model. Instead, it employs periodic perturbation signals, along with the clf. The same effect is achieved as by using clf-based feedback laws that profit from modeling knowledge, but in a time-average sense. Rather than use integrals of the systems vector field, we employ Lie-bracket-based (i.e., derivative-based) averaging. The brief contains numerous examples and applications, including examples with unknown control directions and experiments with charged particle accelerators. It is intended for theoretical control engineers and mathematicians, and practitioners working in various industrial areas ...
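
    A minimal discrete-time extremum-seeking loop illustrates the idea of minimizing an unknown cost by dither and demodulation; the scalar static cost, gains and dither frequency below are hypothetical, and the book treats the much harder case where the cost is a control Lyapunov function of a dynamic system.

    ```python
    import numpy as np

    def V(theta):
        # "Unknown" cost playing the role of the Lyapunov-like function; minimum at 2.
        return (theta - 2.0) ** 2

    theta_hat = 0.0
    a, k, omega, dt = 0.2, 2.0, 100.0, 1e-3   # dither amplitude, gain, frequency, step
    for n in range(50_000):                    # 50 seconds of simulated time
        t = n * dt
        theta = theta_hat + a * np.sin(omega * t)            # perturbed (dithered) input
        theta_hat -= k * V(theta) * np.sin(omega * t) * dt   # demodulate and descend on average

    print(theta_hat)   # ends close to the minimiser theta = 2
    ```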

  11. Free-free opacity in dense plasmas with an average atom model

    International Nuclear Information System (INIS)

    Shaffer, Nathaniel R.; Ferris, Natalie G.; Colgan, James Patrick; Kilcrease, David Parker; Starrett, Charles Edward

    2017-01-01

    A model for the free-free opacity of dense plasmas is presented. The model uses a previously developed average atom model, together with the Kubo-Greenwood model for optical conductivity. This, in turn, is used to calculate the opacity with the Kramers-Kronig dispersion relations. Furthermore, comparison with other methods for dense deuterium results in excellent agreement with DFT-MD simulations, and reasonable agreement with a simple Yukawa screening model corrected to satisfy the conductivity sum rule.

  12. freeQuant: A Mass Spectrometry Label-Free Quantification Software Tool for Complex Proteome Analysis.

    Science.gov (United States)

    Deng, Ning; Li, Zhenye; Pan, Chao; Duan, Huilong

    2015-01-01

    The study of complex proteomes places higher demands on quantification methods based on mass spectrometry. In this paper, we present a mass spectrometry label-free quantification tool for complex proteomes, called freeQuant, which effectively integrates quantification with functional analysis. freeQuant consists of two well-integrated modules: label-free quantification and functional analysis with biomedical knowledge. freeQuant supports label-free quantitative analysis which makes full use of tandem mass spectrometry (MS/MS) spectral counts, protein sequence length, shared peptides, and ion intensity. It adopts spectral counts for quantitative analysis and introduces a new method for shared peptides to accurately evaluate the abundance of isoforms. For proteins with low abundance, the MS/MS total ion count coupled with the spectral count is included to ensure accurate protein quantification. Furthermore, freeQuant supports large-scale functional annotation for complex proteomes. Mitochondrial proteomes from the mouse heart, the mouse liver, and the human heart were used to evaluate the usability and performance of freeQuant. The evaluation showed that the quantitative algorithms implemented in freeQuant can improve the accuracy of quantification with a better dynamic range.
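
    The spectral-count side of such a quantification can be sketched with a simple length-normalised abundance factor; the protein names, counts and lengths below are hypothetical, and freeQuant's handling of shared peptides and MS/MS ion intensities is not reproduced.

    ```python
    import numpy as np

    proteins = ["P1", "P2", "P3"]
    spectral_counts = np.array([120.0, 45.0, 300.0])   # MS/MS spectra matched per protein
    lengths = np.array([450.0, 210.0, 1320.0])         # protein sequence lengths (residues)

    saf = spectral_counts / lengths      # length-normalised spectral abundance factor
    nsaf = saf / saf.sum()               # normalise across the sample
    for name, value in zip(proteins, nsaf):
        print(name, round(float(value), 3))
    ```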

  13. Modelling Free Flow Speed on Two-Lane Rural Highways in Bosnia and Herzegovina

    Directory of Open Access Journals (Sweden)

    Ivan Lovrić

    2014-04-01

    Full Text Available Free flow speed is used as a parameter in transportation planning and capacity analysis models, as well as speed-flow diagrams. Many of these models suggest estimating free flow speed according to measurements from similar highways, which is not a practical method for use in B&H. This paper first discusses problems with using these methodologies in conditions prevailing in B&H and then presents a free flow speed evaluation model developed from a comprehensive field survey conducted on nine homogeneous sections of state and regional roads.

  14. Structure and sensitivity analysis of individual-based predator–prey models

    International Nuclear Information System (INIS)

    Imron, Muhammad Ali; Gergs, Andre; Berger, Uta

    2012-01-01

    The high computational cost of sensitivity analyses has hampered the use of these techniques for analysing individual-based models in ecology. A method with a relatively low computational cost, referred to as the Morris method, was chosen to assess the relative effects of all parameters on the model's outputs and to gain insights into predator–prey systems. The structure and results of the sensitivity analyses of the Sumatran tiger model (the Panthera Population Persistence model, PPP) and the Notonecta foraging model (NFM) were compared. Both models are based on a general predation cycle and designed to understand the mechanisms behind the predator–prey interaction being considered. However, the models differ significantly in their complexity and the details of the processes involved. In the sensitivity analysis, parameters that directly contribute to the number of prey items killed were found to be most influential. These were the growth rate of prey and the hunting radius of tigers in the PPP model as well as attack rate parameters and encounter distance of backswimmers in the NFM model. Analysis of distances in both of the models revealed further similarities in the sensitivity of the two individual-based models. The findings highlight the applicability and importance of sensitivity analyses in general, and screening design methods in particular, during the early development of ecological individual-based models. Comparison of model structures and sensitivity analyses provides a first step for the derivation of general rules in the design of predator–prey models for both practical conservation and conceptual understanding. - Highlights: ► Structure of predation processes is similar in the tiger and backswimmer models. ► The two individual-based models (IBM) differ in their space formulations. ► In both models foraging distance is among the sensitive parameters. ► The Morris method is applicable for the sensitivity analysis of even complex IBMs.
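
    A hand-rolled sketch of Morris-style elementary effects for a toy three-parameter model; the trajectory construction is simplified (one-at-a-time steps from random base points) and the model is hypothetical, not the tiger or backswimmer IBM.

    ```python
    import numpy as np

    def model(x):
        # Toy model standing in for an expensive individual-based simulation.
        return 2.0 * x[0] + x[1] ** 2 + 0.1 * x[2]

    rng = np.random.default_rng(0)
    delta, n_repeats, n_params = 0.1, 50, 3
    effects = [[] for _ in range(n_params)]

    for _ in range(n_repeats):
        base = rng.uniform(0.0, 1.0 - delta, size=n_params)
        y0 = model(base)
        for i in range(n_params):
            perturbed = base.copy()
            perturbed[i] += delta
            effects[i].append((model(perturbed) - y0) / delta)

    for i, ee in enumerate(effects):
        ee = np.array(ee)
        print(f"parameter {i}: mu* = {np.mean(np.abs(ee)):.2f}, sigma = {np.std(ee):.2f}")
    ```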

  15. SLC beam line error analysis using a model-based expert system

    International Nuclear Information System (INIS)

    Lee, M.; Kleban, S.

    1988-02-01

    Commissioning a particle beam line is usually a very time-consuming and labor-intensive task for accelerator physicists. To aid in commissioning, we developed a model-based expert system that identifies error-free regions as well as localizes beam line errors. This paper gives examples of the use of our system for SLC commissioning. 8 refs., 5 figs

  16. Analysis of conformations and ESR spectra of free radicals in carbohydrates

    International Nuclear Information System (INIS)

    Abaghyan, G.V.; Abaghyan, A.G.; Apresyan, A.S.

    1998-01-01

    The conformations of free radicals arising when the unpaired electron is localized on carbon atoms in the pyranose ring of a carbohydrate molecule are considered. On the basis of the analysis of the expected conformations of the radicals, the possible contribution of β-protons to the hyperfine structure of the ESR spectra is predicted. The results of the conformational analysis for different types of free radicals are in satisfactory agreement with the corresponding experimental data for the liquid phase. 17 refs.

  17. Microstructural discovery of Al addition on Sn–0.5Cu-based Pb-free solder design

    International Nuclear Information System (INIS)

    Koo, Jahyun; Lee, Changsoo; Hong, Sung Jea; Kim, Keun-Soo; Lee, Hyuck Mo

    2015-01-01

    It is important to develop Pb-free solder alloys suitable for automotive use instead of traditional Sn–Pb solder due to environmental regulations (e.g., Restriction of Hazardous Substances (RoHS)). Al addition has been spotlighted to enhance solder properties. In this study, we investigated the microstructural change of Sn–0.5Cu wt.% based Pb-free solder alloys with Al addition (0.01–0.05 wt.%). The small amount of Al addition caused a remarkable microstructural change. The Al was favored to form Cu–Al intermetallic compounds inside the solder matrix. We identified the Cu–Al intermetallic compound as Cu₃₃Al₁₇, which has a rhombohedral structure, using EPMA and TEM analyses. This resulted in refined Cu₆Sn₅ networks in the Sn–0.5Cu based solder alloy. In addition, we conducted thermal analysis to confirm its stability at a high temperature of approximately 230 °C, which is the necessary temperature range for automotive applications. The solidification results were substantiated thermodynamically using the Scheil solidification model. We can provide criteria for the minimum aluminum content to modify the microstructure of Pb-free solder alloys. - Graphical abstract: The minor Al additions refined eutectic Cu₆Sn₅ IMC networks on the Sn–0.5Cu based solder alloys. The microstructure was dramatically changed with the minor Al addition. - Highlights: • We observed dramatic microstructure-change with Al additions. • We defined Cu₃₃Al₁₇ IMC with Al additions using TEM analysis. • We investigated grain refinement with Al additions using EBSD. • We discussed the refinement based on Scheil solidification model.

  18. Microstructural discovery of Al addition on Sn–0.5Cu-based Pb-free solder design

    Energy Technology Data Exchange (ETDEWEB)

    Koo, Jahyun; Lee, Changsoo [Department of Materials Science and Engineering, KAIST, Daejeon 305-701 (Korea, Republic of); Hong, Sung Jea [MK Electron Co., Ltd., Yongin Cheoin-gu 316-2 (Korea, Republic of); Kim, Keun-Soo, E-mail: keunsookim@hoseo.edu [Department of Display Engineering, Hoseo University, Asan 336-795 (Korea, Republic of); Lee, Hyuck Mo, E-mail: hmlee@kaist.ac.kr [Department of Materials Science and Engineering, KAIST, Daejeon 305-701 (Korea, Republic of)

    2015-11-25

    It is important to develop Pb-free solder alloys suitable for automotive use instead of traditional Sn–Pb solder due to environmental regulations (e.g., Restriction of Hazardous Substances (RoHS)). Al addition has been spotlighted to enhance solder properties. In this study, we investigated the microstructural change of Sn–0.5Cu wt.% based Pb-free solder alloys with Al addition (0.01–0.05 wt.%). The small amount of Al addition caused a remarkable microstructural change. The Al was favored to form Cu–Al intermetallic compounds inside the solder matrix. We identified the Cu–Al intermetallic compound as Cu{sub 33}Al{sub 17}, which has a rhombohedral structure, using EPMA and TEM analyses. This resulted in refined Cu{sub 6}Sn{sub 5} networks in the Sn–0.5Cu based solder alloy. In addition, we conducted thermal analysis to confirm its stability at a high temperature of approximately 230 °C, which is the necessary temperature range for automotive applications. The solidification results were substantiated thermodynamically using the Scheil solidification model. We can provide criteria for the minimum aluminum content to modify the microstructure of Pb-free solder alloys. - Graphical abstract: The minor Al additions refined eutectic Cu{sub 6}Sn{sub 5} IMC networks on the Sn–0.5Cu based solder alloys. The microstructure was dramatically changed with the minor Al addition. - Highlights: • We observed dramatic microstructure-change with Al additions. • We defined Cu{sub 33}Al{sub 17} IMC with Al additions using TEM analysis. • We investigated grain refinement with Al additions using EBSD. • We discussed the refinement based on Scheil solidification model.

  19. Sensitivity analysis practices: Strategies for model-based inference

    Energy Technology Data Exchange (ETDEWEB)

    Saltelli, Andrea [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (Vatican City State, Holy See,) (Italy)]. E-mail: andrea.saltelli@jrc.it; Ratto, Marco [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy); Tarantola, Stefano [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy); Campolongo, Francesca [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)

    2006-10-15

    Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 (System analysis at molecular scale, by H. Rabitz) we search Science Online to identify and then review all recent articles having 'sensitivity analysis' as a keyword. In spite of the considerable developments which have taken place in this discipline, of the good practices which have emerged, and of existing guidelines for SA issued on both sides of the Atlantic, we could not find in our review other than very primitive SA tools, based on 'one-factor-at-a-time' (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance based measures and others, are able to overcome OAT shortcomings and are easy to implement. These methods also allow the concept of factors' importance to be defined rigorously, thus making the factors' importance ranking univocal. We analyse the requirements of SA in the context of modelling, and present best available practices on the basis of an elementary model. We also point the reader to available recipes for a rigorous SA.

  20. Sensitivity analysis practices: Strategies for model-based inference

    International Nuclear Information System (INIS)

    Saltelli, Andrea; Ratto, Marco; Tarantola, Stefano; Campolongo, Francesca

    2006-01-01

    Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 (System analysis at molecular scale, by H. Rabitz) we search Science Online to identify and then review all recent articles having 'sensitivity analysis' as a keyword. In spite of the considerable developments which have taken place in this discipline, of the good practices which have emerged, and of existing guidelines for SA issued on both sides of the Atlantic, we could not find in our review other than very primitive SA tools, based on 'one-factor-at-a-time' (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance based measures and others, are able to overcome OAT shortcomings and are easy to implement. These methods also allow the concept of factors' importance to be defined rigorously, thus making the factors' importance ranking univocal. We analyse the requirements of SA in the context of modelling, and present best available practices on the basis of an elementary model. We also point the reader to available recipes for a rigorous SA.
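
    One of the variance-based alternatives to OAT mentioned above, the first-order Sobol index, can be estimated with a simple pick-and-freeze scheme; the model and sample size below are illustrative only, and dedicated SA libraries should be preferred in practice.

    ```python
    import numpy as np

    def model(x):
        # Toy nonlinear model with three input factors.
        return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

    rng = np.random.default_rng(0)
    n, d = 100_000, 3
    A = rng.uniform(0.0, 1.0, size=(n, d))
    B = rng.uniform(0.0, 1.0, size=(n, d))
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]))

    for i in range(d):
        AB_i = A.copy()
        AB_i[:, i] = B[:, i]            # vary only factor i relative to A
        S_i = np.mean(yB * (model(AB_i) - yA)) / var_y   # Saltelli-style estimator
        print(f"first-order index S_{i + 1} ~ {S_i:.2f}")
    ```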

  1. A prospective gating method to acquire a diverse set of free-breathing CT images for model-based 4DCT

    Science.gov (United States)

    O'Connell, D.; Ruan, D.; Thomas, D. H.; Dou, T. H.; Lewis, J. H.; Santhanam, A.; Lee, P.; Low, D. A.

    2018-02-01

    Breathing motion modeling requires observation of tissues at sufficiently distinct respiratory states for proper 4D characterization. This work proposes a method to improve sampling of the breathing cycle with limited imaging dose. We designed and tested a prospective free-breathing acquisition protocol with a simulation using datasets from five patients imaged with a model-based 4DCT technique. Each dataset contained 25 free-breathing fast helical CT scans with simultaneous breathing surrogate measurements. Tissue displacements were measured using deformable image registration. A correspondence model related tissue displacement to the surrogate. Model residual was computed by comparing predicted displacements to image registration results. To determine a stopping criteria for the prospective protocol, i.e. when the breathing cycle had been sufficiently sampled, subsets of N scans where 5  ⩽  N  ⩽  9 were used to fit reduced models for each patient. A previously published metric was employed to describe the phase coverage, or ‘spread’, of the respiratory trajectories of each subset. Minimum phase coverage necessary to achieve mean model residual within 0.5 mm of the full 25-scan model was determined and used as the stopping criteria. Using the patient breathing traces, a prospective acquisition protocol was simulated. In all patients, phase coverage greater than the threshold necessary for model accuracy within 0.5 mm of the 25 scan model was achieved in six or fewer scans. The prospectively selected respiratory trajectories ranked in the (97.5  ±  4.2)th percentile among subsets of the originally sampled scans on average. Simulation results suggest that the proposed prospective method provides an effective means to sample the breathing cycle with limited free-breathing scans. One application of the method is to reduce the imaging dose of a previously published model-based 4DCT protocol to 25% of its original value while

  2. Sorption kinetics and microbial biodegradation activity of hydrophobic chemicals in sewage sludge: Model and measurements based on free concentrations

    NARCIS (Netherlands)

    Artola-Garicano, E.; Borkent, I.; Damen, K.; Jager, T.; Vaes, W.H.J.

    2003-01-01

    In the current study, a new method is introduced with which the rate-limiting factor of biodegradation processes of hydrophobic chemicals in organic and aqueous systems can be determined. The novelty of this approach lies in the combination of a free concentration-based kinetic model with

  3. Gaussian-Based Smooth Dielectric Function: A Surface-Free Approach for Modeling Macromolecular Binding in Solvents

    Directory of Open Access Journals (Sweden)

    Arghya Chakravorty

    2018-03-01

    Full Text Available Conventional modeling techniques for macromolecular solvation and its effect on binding in the framework of Poisson-Boltzmann based implicit solvent models make use of a geometrically defined surface to depict the separation of the macromolecular interior (low dielectric constant) from the solvent phase (high dielectric constant). Though this simplification saves time and computational resources without significantly compromising the accuracy of free energy calculations, it bypasses some of the key physico-chemical properties of the solute-solvent interface, e.g., the altered flexibility of water molecules and that of side chains at the interface, which results in dielectric properties different from both bulk water and the macromolecular interior, respectively. Here we present a Gaussian-based smooth dielectric model, an inhomogeneous dielectric distribution model that mimics the effect of macromolecular flexibility and captures the altered properties of surface-bound water molecules. Thus, the model delivers a smooth transition of dielectric properties from the macromolecular interior to the solvent phase, eliminating any unphysical surface separating the two phases. Using various examples of macromolecular binding, we demonstrate its utility and illustrate the comparison with the conventional 2-dielectric model. We also showcase some additional abilities of this model, viz. to account for the effect of electrolytes in the solution and to render the distribution profile of water across a lipid membrane.

  4. Model-based safety analysis of a control system using Simulink and Simscape extended models

    Directory of Open Access Journals (Sweden)

    Shao Nian

    2017-01-01

    Full Text Available The aircraft or system safety assessment process is an integral part of the overall aircraft development cycle. It is usually characterized by very high time and financial effort and can become a critical design driver in certain cases. Therefore, an increasing demand for effective methods to assist the safety assessment process arises within the aerospace community. One approach is the utilization of model-based technology, which is already well-established in system development, for safety assessment purposes. This paper mainly describes a new tool for Model-Based Safety Analysis. A formal model for an example system is generated and enriched with extended models. Then, system safety analyses are performed on the model with the assistance of automation tools and compared to the results of a manual analysis. The objective of this paper is to improve the increasingly complex aircraft systems development process. This paper develops a new model-based analysis tool in the Simulink/Simscape environment.

  5. A Newton-based Jacobian-free approach for neutronic-Monte Carlo/thermal-hydraulic static coupled analysis

    International Nuclear Information System (INIS)

    Mylonakis, Antonios G.; Varvayanni, M.; Catsaros, N.

    2017-01-01

    Highlights: •A Newton-based Jacobian-free Monte Carlo/thermal-hydraulic coupling approach is introduced. •OpenMC is coupled with COBRA-EN using a Newton-based approach. •The introduced coupling approach is tested in numerical experiments. •The performance of the new approach is compared with the traditional "serial" coupling approach. -- Abstract: In the field of nuclear reactor analysis, multi-physics calculations that account for the bonded nature of the neutronic and thermal-hydraulic phenomena are of major importance for both reactor safety and design. So far, in the context of Monte Carlo neutronic analysis, a "serial" algorithm has mainly been used for coupling with thermal-hydraulics. The main motivation of this work is the interest in an algorithm that maintains the distinct treatment of the involved fields within a tight coupling context, which could translate into higher convergence rates and more stable behaviour. This work investigates the possibility of replacing the usual "serial" iteration with an approximate Newton algorithm. The selected algorithm, called Approximate Block Newton, is actually a version of the Jacobian-free Newton Krylov method suitably modified for coupling mono-disciplinary solvers. Within this Newton scheme, the linearised system is solved with a Krylov solver in order to avoid the creation of the Jacobian matrix. A coupling algorithm between Monte Carlo neutronics and thermal-hydraulics based on the above-mentioned methodology is developed and its performance is analysed. More specifically, OpenMC, a Monte Carlo neutronics code, and COBRA-EN, a thermal-hydraulics code for sub-channel and core analysis, are merged in a coupling scheme using the Approximate Block Newton method, aiming to examine the performance of this scheme and compare it with that of the "traditional" serial iterative scheme. First results show a clear improvement of the convergence, especially in problems where significant
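
    The Jacobian-free Newton-Krylov class of methods referred to above is available off the shelf in SciPy; the tiny residual below is a toy stand-in and has nothing to do with the coupled OpenMC/COBRA-EN neutronic/thermal-hydraulic residual.

    ```python
    import numpy as np
    from scipy.optimize import newton_krylov

    # Toy coupled nonlinear residual F(u) = 0 solved without ever forming a
    # Jacobian; the Krylov iterations only need Jacobian-vector products,
    # approximated by finite differences of F.
    def residual(u):
        x, y = u
        return np.array([x ** 2 + y - 2.0,
                         x + y ** 2 - 2.0])

    solution = newton_krylov(residual, np.array([0.5, 0.5]), f_tol=1e-10)
    print(solution)   # converges to approximately [1, 1]
    ```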

  6. Hyper-chaos encryption using convolutional masking and model free unmasking

    International Nuclear Information System (INIS)

    Qi Guo-Yuan; Matondo Sandra Bazebo

    2014-01-01

    In this paper, during the masking process the encrypted message is convolved and embedded into a Qi hyper-chaotic system characterized by a high degree of disorder. The masking scheme was tested using both Qi hyper-chaos and Lorenz chaos and indicated that Qi hyper-chaos based masking can resist filtering and power-spectrum-analysis attacks, while the Lorenz-based scheme fails for high amplitude data. To unmask the message at the receiving end, two methods are proposed. In the first method, a model-free synchronizer, i.e. a multivariable higher-order differential feedback controller between the transmitter and receiver, is employed to de-convolve the message embedded in the received signal. In the second method, no synchronization is required since the message is de-convolved using the information of the estimated derivative. (general)

  7. Equation-Free Analysis of Macroscopic Behavior in Traffic and Pedestrian Flow

    DEFF Research Database (Denmark)

    Marschler, Christian; Sieber, Jan; Hjorth, Poul G.

    2014-01-01

    Equation-free methods make possible an analysis of the evolution of a few coarse-grained or macroscopic quantities for a detailed and realistic model with a large number of fine-grained or microscopic variables, even though no equations are explicitly given on the macroscopic level. This will facilitate a study of how the model behavior depends on parameter values, including an understanding of transitions between different types of qualitative behavior. These methods are introduced and explained for traffic jam formation and emergence of oscillatory pedestrian counter flow in a corridor...
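
    The backbone of such an analysis is a coarse time-stepper built from a lift-evolve-restrict loop around the microscopic simulator. The sketch below illustrates the pattern on a deliberately trivial stochastic particle model; the traffic and pedestrian simulators of the paper are not reproduced.

        import numpy as np

        rng = np.random.default_rng(0)

        def lift(x_coarse, n=2000):
            """Create a microscopic ensemble consistent with the coarse variable (the mean)."""
            return x_coarse + 0.05 * rng.standard_normal(n)

        def micro_evolve(particles, dt=0.01, steps=20):
            """Toy fine-scale model: noisy relaxation of each particle towards 1.0."""
            for _ in range(steps):
                particles = particles + dt * (1.0 - particles) \
                    + np.sqrt(dt) * 0.02 * rng.standard_normal(particles.size)
            return particles

        def restrict(particles):
            """Coarse-grained observable: the ensemble mean."""
            return particles.mean()

        def coarse_time_stepper(x_coarse):
            """One coarse step obtained without ever writing a macroscopic equation."""
            return restrict(micro_evolve(lift(x_coarse)))

        x = 0.2
        for _ in range(5):
            x = coarse_time_stepper(x)
            print(round(x, 4))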

  8. Cloud-Based Model Calibration Using OpenStudio: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Hale, E.; Lisell, L.; Goldwasser, D.; Macumber, D.; Dean, J.; Metzger, I.; Parker, A.; Long, N.; Ball, B.; Schott, M.; Weaver, E.; Brackney, L.

    2014-03-01

    OpenStudio is a free, open source Software Development Kit (SDK) and application suite for performing building energy modeling and analysis. The OpenStudio Parametric Analysis Tool has been extended to allow cloud-based simulation of multiple OpenStudio models parametrically related to a baseline model. This paper describes the new cloud-based simulation functionality and presents a model calibration case study. Calibration is initiated by entering actual monthly utility bill data into the baseline model. Multiple parameters are then varied over multiple iterations to reduce the difference between actual energy consumption and model simulation results, as calculated and visualized by billing period and by fuel type. Simulations are performed in parallel using the Amazon Elastic Cloud service. This paper highlights model parameterizations (measures) used for calibration, but the same multi-nodal computing architecture is available for other purposes, for example, recommending combinations of retrofit energy saving measures using the calibrated model as the new baseline.
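
    A calibration loop of this kind needs a quantitative measure of the gap between billed and simulated consumption per billing period. The sketch below computes two statistics commonly used for monthly calibration data (NMBE and CV(RMSE), in the spirit of ASHRAE Guideline 14); the bill values are invented, and the choice of these particular statistics is an assumption, since the record only speaks of reducing the difference.

        import numpy as np

        # Hypothetical monthly utility-bill data vs. simulated consumption (kWh)
        measured  = np.array([910, 850, 780, 640, 560, 520, 610, 630, 600, 700, 820, 900], float)
        simulated = np.array([940, 830, 760, 660, 540, 500, 640, 610, 620, 690, 850, 880], float)

        # Calibration statistics often used for monthly data (ASHRAE Guideline 14 style)
        resid = measured - simulated
        nmbe = 100.0 * resid.sum() / ((len(measured) - 1) * measured.mean())
        cvrmse = 100.0 * np.sqrt((resid ** 2).sum() / (len(measured) - 1)) / measured.mean()

        print(f"NMBE = {nmbe:.2f} %, CV(RMSE) = {cvrmse:.2f} %")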

  9. Is there any correlation between model-based perfusion parameters and model-free parameters of time-signal intensity curve on dynamic contrast enhanced MRI in breast cancer patients?

    Energy Technology Data Exchange (ETDEWEB)

    Yi, Boram; Kang, Doo Kyoung; Kim, Tae Hee [Ajou University School of Medicine, Department of Radiology, Suwon, Gyeonggi-do (Korea, Republic of); Yoon, Dukyong [Ajou University School of Medicine, Department of Biomedical Informatics, Suwon (Korea, Republic of); Jung, Yong Sik; Kim, Ku Sang [Ajou University School of Medicine, Department of Surgery, Suwon (Korea, Republic of); Yim, Hyunee [Ajou University School of Medicine, Department of Pathology, Suwon (Korea, Republic of)

    2014-05-15

    To determine whether there is any correlation between dynamic contrast-enhanced (DCE) model-based parameters and model-free parameters, and to evaluate correlations between perfusion parameters and histologic prognostic factors. Model-based parameters (Ktrans, Kep and Ve) of 102 invasive ductal carcinomas were obtained using DCE-MRI and post-processing software. Correlations between model-based and model-free parameters and between perfusion parameters and histologic prognostic factors were analysed. Mean Kep was significantly higher in cancers showing initial rapid enhancement (P = 0.002) and a delayed washout pattern (P = 0.001). Ve was significantly lower in cancers showing a delayed washout pattern (P = 0.015). Kep significantly correlated with time to peak enhancement (TTP) (ρ = -0.33, P < 0.001) and washout slope (ρ = 0.39, P = 0.002). Ve was significantly correlated with TTP (ρ = 0.33, P = 0.002). Mean Kep was higher in tumours with high nuclear grade (P = 0.017). Mean Ve was lower in tumours with high histologic grade (P = 0.005) and in tumours with negative oestrogen receptor status (P = 0.047). TTP was shorter in tumours with negative oestrogen receptor status (P = 0.037). We could acquire general information about the tumour vascular physiology, interstitial space volume and pathologic prognostic factors by analyzing the time-signal intensity curve without a complicated acquisition process for the model-based parameters. (orig.)
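
    Correlations of this type are rank correlations between a pharmacokinetic model output and a curve-shape descriptor. A minimal sketch with synthetic, invented values (not the study's data) shows the calculation as it is typically reported (Spearman's ρ and P value):

        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(1)

        # Hypothetical per-tumour values: a model-based parameter (Kep, 1/min) and a
        # model-free curve parameter (time to peak enhancement, s); not the study's data.
        kep = rng.lognormal(mean=-0.5, sigma=0.4, size=102)
        ttp = 200.0 / (1.0 + kep) + rng.normal(0.0, 10.0, size=102)

        rho, p_value = spearmanr(kep, ttp)   # rank correlation, as typically reported (rho, P)
        print(f"rho = {rho:.2f}, P = {p_value:.3g}")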

  10. Cloud-Based Orchestration of a Model-Based Power and Data Analysis Toolchain

    Science.gov (United States)

    Post, Ethan; Cole, Bjorn; Dinkel, Kevin; Kim, Hongman; Lee, Erich; Nairouz, Bassem

    2016-01-01

    The proposed Europa Mission concept contains many engineering and scientific instruments that consume varying amounts of power and produce varying amounts of data throughout the mission. System-level power and data usage must be well understood and analyzed to verify design requirements. Numerous cross-disciplinary tools and analysis models are used to simulate the system-level spacecraft power and data behavior. This paper addresses the problem of orchestrating a consistent set of models, tools, and data in a unified analysis toolchain when ownership is distributed among numerous domain experts. An analysis and simulation environment was developed as a way to manage the complexity of the power and data analysis toolchain and to reduce the simulation turnaround time. A system model data repository is used as the trusted store of high-level inputs and results while other remote servers are used for archival of larger data sets and for analysis tool execution. Simulation data passes through numerous domain-specific analysis tools and end-to-end simulation execution is enabled through a web-based tool. The use of a cloud-based service facilitates coordination among distributed developers and enables scalable computation and storage needs, and ensures a consistent execution environment. Configuration management is emphasized to maintain traceability between current and historical simulation runs and their corresponding versions of models, tools and data.

  11. Examining the Internet-Based Free Talk in College English Classes from the Motivation Perspective

    Directory of Open Access Journals (Sweden)

    Li Ming

    2017-01-01

    Full Text Available Free Talk is recognized as an effective approach to teaching college English in China and improving students' spoken English. With the popularity of the Internet around the world, the Internet-based Free Talk demonstrates more advantages in motivating students to engage in English learning. In this paper, the author compares the main features of the Internet-based Free Talk with the five components of the MUSIC Model of Motivation synthesized from current research and theory in the field of motivation. Furthermore, the author illustrates how the Internet facilitates Free Talk through an online writing service system and an online QQ community. The comparison reveals that the success of the Internet-based Free Talk is consistent with the key motivation principles. This paper indicates that professors and researchers in higher education could design and evaluate their instruction according to the components of the MUSIC model of motivation.

  12. Cost analysis of roll-to-roll fabricated ITO free single and tandem organic solar modules based on data from manufacture

    DEFF Research Database (Denmark)

    Machui, Florian; Hösel, Markus; Li, Ning

    2014-01-01

    We present a cost analysis based on state-of-the-art printing and coating processes for fully encapsulated, flexible ITO- and vacuum-free polymer solar cell modules. Manufacturing data for both single junctions and tandem junctions are presented and analyzed. Within this calculation the most...

  13. FluxPyt: a Python-based free and open-source software for 13C-metabolic flux analyses.

    Science.gov (United States)

    Desai, Trunil S; Srivastava, Shireesh

    2018-01-01

    13C-Metabolic flux analysis (MFA) is a powerful approach to estimating intracellular reaction rates, which can be used in strain analysis and design. Processing and analysis of labeling data for calculation of fluxes and associated statistics is an essential part of MFA. However, various software packages currently available for data analysis employ proprietary platforms and thus limit accessibility. We developed FluxPyt, a Python-based, truly open-source software package for conducting stationary 13C-MFA data analysis. The software is based on the efficient elementary metabolite unit framework. The standard deviations in the calculated fluxes are estimated using Monte-Carlo analysis. FluxPyt also automatically creates flux maps based on a template for visualization of the MFA results. The flux distributions calculated by FluxPyt for two separate models, a small tricarboxylic acid cycle model and a larger Corynebacterium glutamicum model, were found to be in good agreement with those calculated by previously published software. FluxPyt was tested in Microsoft™ Windows 7 and 10, as well as in Linux Mint 18.2. The availability of a free and open-source 13C-MFA software package that works in various operating systems will enable more researchers to perform 13C-MFA and to further modify and develop the package.
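
    The Monte-Carlo uncertainty step amounts to refitting the flux model many times on noise-perturbed copies of the labeling data and taking the spread of the refitted fluxes. The sketch below illustrates this on an invented one-parameter toy model; FluxPyt's EMU-based simulator and optimizer are not reproduced.

        import numpy as np

        rng = np.random.default_rng(2)

        # Hypothetical measured labeling fractions and their standard deviations
        measured = np.array([0.42, 0.31, 0.27])
        sigma = np.array([0.01, 0.01, 0.01])

        def simulate_labeling(flux_split):
            """Toy forward model mapping a single free flux ratio to labeling fractions."""
            return np.array([0.6 * flux_split + 0.1,
                             0.5 - 0.3 * flux_split,
                             0.4 - 0.3 * flux_split + 0.1])

        def fit_flux(data):
            """Grid-search least-squares fit of the flux ratio (stand-in for a real optimizer)."""
            grid = np.linspace(0.0, 1.0, 1001)
            sse = [np.sum(((simulate_labeling(v) - data) / sigma) ** 2) for v in grid]
            return grid[int(np.argmin(sse))]

        # Monte Carlo uncertainty: refit after perturbing the data with measurement noise
        fits = [fit_flux(measured + rng.normal(0.0, sigma)) for _ in range(500)]
        print(f"flux ratio = {np.mean(fits):.3f} +/- {np.std(fits):.3f}")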

  14. Reducing Spread in Climate Model Projections of a September Ice-Free Arctic

    Science.gov (United States)

    Liu, Jiping; Song, Mirong; Horton, Radley M.; Hu, Yongyun

    2013-01-01

    This paper addresses the specter of a September ice-free Arctic in the 21st century using newly available simulations from the Coupled Model Intercomparison Project Phase 5 (CMIP5). We find that the large spread in the projected timing of the September ice-free Arctic in 30 CMIP5 models is associated at least as much with different atmospheric model components as with initial conditions. Here we reduce the spread in the timing of an ice-free state using two different approaches for the 30 CMIP5 models: (i) model selection based on the ability to reproduce the observed sea ice climatology and variability since 1979 and (ii) constrained estimation based on the strong and persistent relationship between present and future sea ice conditions. Results from the two approaches show good agreement. Under a high-emission scenario both approaches project that September ice extent will drop to approx. 1.7 million sq km in the mid 2040s and reach the ice-free state (defined as 1 million sq km) in 2054-2058. Under a medium-mitigation scenario, both approaches project a decrease to approx. 1.7 million sq km in the early 2060s, followed by a leveling off in the ice extent.

  15. Reliability Analysis of Free Jet Scour Below Dams

    Directory of Open Access Journals (Sweden)

    Chuanqi Li

    2012-12-01

    Full Text Available Current formulas for calculating scour depth below a free overfall are mostly deterministic in nature and do not adequately consider the uncertainties of various scouring parameters. A reliability-based assessment of scour, taking into account uncertainties of parameters and coefficients involved, should be performed. This paper studies the reliability of a dam foundation under the threat of scour. A model for calculating the reliability of scour and estimating the probability of failure of the dam foundation subjected to scour is presented. The Maximum Entropy Method is applied to construct the probability density function (PDF) of the performance function subject to the moment constraints. Monte Carlo simulation (MCS) is applied for uncertainty analysis. An example is considered, the reliability of its scour is computed, and the influence of various random variables on the probability of failure is analyzed.
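
    In a Monte Carlo reliability analysis of this kind, the probability of failure is estimated as the fraction of random samples for which the performance function (allowable minus computed scour depth) goes negative. The sketch below uses a Veronese-type scour formula and invented distributions purely for illustration; they are not the study's values.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 200_000

        # Hypothetical random inputs for an illustrative scour performance function
        # g = allowable depth - computed scour depth; failure when g < 0.
        # Distributions, coefficients and the scour formula itself are assumptions,
        # not the ones used in the cited study.
        discharge = rng.lognormal(mean=2.0, sigma=0.25, size=n)   # unit discharge q
        head = rng.normal(loc=12.0, scale=1.0, size=n)            # drop height H
        coeff = rng.normal(loc=1.9, scale=0.15, size=n)           # empirical coefficient
        allowable_depth = 14.0

        scour_depth = coeff * discharge ** 0.54 * head ** 0.225   # Veronese-type form (assumed)
        g = allowable_depth - scour_depth

        prob_failure = np.mean(g < 0.0)
        print(f"estimated probability of failure = {prob_failure:.4f}")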

  16. Physics Implications of Flat Directions in Free Fermionic Superstring Models; 2, Renormalization Group Analysis

    CERN Document Server

    Cleaver, G.; Espinosa, J.R.; Everett, L.L.; Langacker, P.; Wang, J.

    1999-01-01

    We continue the investigation of the physics implications of a class of flat directions for a prototype quasi-realistic free fermionic string model (CHL5), building upon the results of the previous paper in which the complete mass spectrum and effective trilinear couplings of the observable sector were calculated to all orders in the superpotential. We introduce soft supersymmetry breaking mass parameters into the model, and investigate the gauge symmetry breaking patterns and the renormalization group analysis for two representative flat directions, which leave an additional $U(1)'$ as well as the SM gauge group unbroken at the string scale. We study symmetry breaking patterns that lead to a phenomenologically acceptable $Z-Z'$ hierarchy, $M_{Z^{'}} \\sim {\\cal O}(1~{\\rm TeV})$ and $ 10^{12}~{\\rm GeV}$ for electroweak and intermediate scale $U(1)^{'}$ symmetry breaking, respectively, and the associated mass spectra after electroweak symmetry breaking. The fermion mass spectrum exhibits unrealistic features, i...

  17. A predictive framework for evaluating models of semantic organization in free recall

    Science.gov (United States)

    Morton, Neal W; Polyn, Sean M.

    2016-01-01

    Research in free recall has demonstrated that semantic associations reliably influence the organization of search through episodic memory. However, the specific structure of these associations and the mechanisms by which they influence memory search remain unclear. We introduce a likelihood-based model-comparison technique, which embeds a model of semantic structure within the context maintenance and retrieval (CMR) model of human memory search. Within this framework, model variants are evaluated in terms of their ability to predict the specific sequence in which items are recalled. We compare three models of semantic structure, latent semantic analysis (LSA), global vectors (GloVe), and word association spaces (WAS), and find that models using WAS have the greatest predictive power. Furthermore, we find evidence that semantic and temporal organization is driven by distinct item and context cues, rather than a single context cue. This finding provides an important constraint for theories of memory search. PMID:28331243

  18. An Analysis of a Hard Real-Time Execution Environment Extension for FreeRTOS

    Directory of Open Access Journals (Sweden)

    STANGACIU, C.

    2015-08-01

    Full Text Available FreeRTOS is a popular real-time operating system, which has received significant attention in recent years due to its main advantages: it is open source, portable, well documented and implemented on more than 30 architectures. The FreeRTOS execution environment is dynamic, preemptive and priority-based, but it is not suitable for hard real-time tasks, because it provides task execution determinism only to a certain degree and cannot guarantee the absence of task execution jitter. As a solution to this problem, we propose a hard real-time execution extension to FreeRTOS in order to support a particular model of HRT tasks, called ModXs, which are executed with no jitter. This article presents a detailed analysis, in terms of scheduling, task execution and memory usage, of this hard real-time execution environment extension. The article concludes with the advantages this extension brings to the system, weighed against the small memory and timing overhead introduced.

  19. Integrating model checking with HiP-HOPS in model-based safety analysis

    International Nuclear Information System (INIS)

    Sharvia, Septavera; Papadopoulos, Yiannis

    2015-01-01

    The ability to perform an effective and robust safety analysis on the design of modern safety–critical systems is crucial. Model-based safety analysis (MBSA) has been introduced in recent years to support the assessment of complex system design by focusing on the system model as the central artefact, and by automating the synthesis and analysis of failure-extended models. Model checking and failure logic synthesis and analysis (FLSA) are two prominent MBSA paradigms. Extensive research has placed emphasis on the development of these techniques, but discussion on their integration remains limited. In this paper, we propose a technique in which model checking and Hierarchically Performed Hazard Origin and Propagation Studies (HiP-HOPS) – an advanced FLSA technique – can be applied synergistically with benefit for the MBSA process. The application of the technique is illustrated through an example of a brake-by-wire system. - Highlights: • We propose technique to integrate HiP-HOPS and model checking. • State machines can be systematically constructed from HiP-HOPS. • The strengths of different MBSA techniques are combined. • Demonstrated through modeling and analysis of brake-by-wire system. • Root cause analysis is automated and system dynamic behaviors analyzed and verified

  20. Dynamic Modeling of Cell-Free Biochemical Networks Using Effective Kinetic Models

    Science.gov (United States)

    2015-03-03

    based whole-cell models of E. coli [6]. Conversely, highly abstracted kinetic frameworks, such as the cybernetic framework, represented a paradigm shift... metabolic objective function has been the optimization of biomass formation [18], although other metabolic objectives have also been estimated [19... experimental data. Toward these questions, we explored five hypothetical cell-free networks. Each network shared the same enzymatic connectivity, but

  1. Dynamic Modeling of Cell-Free Biochemical Networks Using Effective Kinetic Models

    Directory of Open Access Journals (Sweden)

    Joseph A. Wayman

    2015-03-01

    Full Text Available Cell-free systems offer many advantages for the study, manipulation and modeling of metabolism compared to in vivo processes. Many of the challenges confronting genome-scale kinetic modeling can potentially be overcome in a cell-free system. For example, there is no complex transcriptional regulation to consider, transient metabolic measurements are easier to obtain, and we no longer have to consider cell growth. Thus, cell-free operation holds several significant advantages for model development, identification and validation. Theoretically, genome-scale cell-free kinetic models may be possible for industrially important organisms, such as E. coli, if a simple, tractable framework for integrating allosteric regulation with enzyme kinetics can be formulated. Toward this unmet need, we present an effective biochemical network modeling framework for building dynamic cell-free metabolic models. The key innovation of our approach is the integration of simple effective rules encoding complex allosteric regulation with traditional kinetic pathway modeling. We tested our approach by modeling the time evolution of several hypothetical cell-free metabolic networks. We found that simple effective rules, when integrated with traditional enzyme kinetic expressions, captured complex allosteric patterns such as ultrasensitivity or non-competitive inhibition in the absence of mechanistic information. Second, when integrated into network models, these rules captured classic regulatory patterns such as product-induced feedback inhibition. Lastly, we showed, at least for the network architectures considered here, that we could simultaneously estimate kinetic parameters and allosteric connectivity from synthetic data starting from an unbiased collection of possible allosteric structures using particle swarm optimization. However, when starting with an initial population that was heavily enriched with incorrect structures, our particle swarm approach could converge
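
    The key modeling move is to multiply an ordinary saturation-kinetics rate by one or more bounded "effective rules" that stand in for allosteric regulation. The sketch below shows one possible realization with a Hill-type transfer function; the parameter values and the min-combination of rules are illustrative assumptions rather than the calibrated published model.

        def hill_transfer(x, gain, order):
            """Transfer-function rule in [0, 1] used to encode an allosteric effect."""
            return (gain * x) ** order / (1.0 + (gain * x) ** order)

        def effective_rate(substrate, inhibitor, vmax=10.0, km=0.5):
            """Michaelis-Menten kinetics scaled by a simple effective regulation rule.
            Parameter values and the 'take the minimum of all active rules' combination
            are illustrative assumptions, not the published model's calibrated values."""
            base = vmax * substrate / (km + substrate)
            rules = [1.0 - hill_transfer(inhibitor, gain=2.0, order=4),  # inhibition rule
                     1.0]                                                # no other regulation
            return base * min(rules)

        for inh in (0.0, 0.3, 1.0):
            print(inh, round(effective_rate(substrate=1.0, inhibitor=inh), 3))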

  2. Thermoacoustic model of a modified free piston Stirling engine with a thermal buffer tube

    International Nuclear Information System (INIS)

    Yang, Qin; Luo, Ercang; Dai, Wei; Yu, Guoyao

    2012-01-01

    This article presents a modified free-piston Stirling heat engine configuration in which a thermal buffer tube is added, sandwiched between the hot and cold heat exchangers. Such a modified configuration may lead to easier fabrication and a lighter free piston. To analyze the thermodynamic performance of the modified free-piston Stirling heat engine, thermoacoustic theory is used. In the thermoacoustic modelling, models of the regenerator, the free piston, and the thermal buffer tube are given first. Then, based on linear thermoacoustic network theory, the thermal and thermodynamic networks are presented to characterize acoustic pressure and volume flow rate distributions at different interfaces, and the global performance such as the power output, the heat input and the thermal efficiency. A free-piston Stirling heat engine with a mechanical power output of several hundred watts is selected as an example. The typical operating and structural parameters are as follows: frequency around 50 Hz, mean pressure around 3.0 MPa, and a free-piston diameter around 50 mm. From the analysis, it was found that the modified free-piston Stirling heat engine has almost the same thermodynamic performance as the original design, which indicates that the modified configuration is worth developing in the future because of its mechanical simplicity and reliability.

  3. Depletion GPT-free sensitivity analysis for reactor eigenvalue problems

    International Nuclear Information System (INIS)

    Kennedy, C.; Abdel-Khalik, H.

    2013-01-01

    This manuscript introduces a novel approach to solving depletion perturbation theory problems without the need to set up or solve the generalized perturbation theory (GPT) equations. The approach, hereinafter denoted generalized perturbation theory free (GPT-Free), constructs a reduced order model (ROM) using methods based in perturbation theory and computes response sensitivity profiles in a manner that is independent of the number or type of responses, allowing for an efficient computation of sensitivities when many responses are required. Moreover, the reduction error from using the ROM is quantified in the GPT-Free approach by means of a Wilks' order statistics error metric denoted the K-metric. Traditional GPT has been recognized as the most computationally efficient approach for performing sensitivity analyses of models with many input parameters, e.g. when forward sensitivity analyses are computationally intractable. However, most neutronics codes that can solve the fundamental (homogeneous) adjoint eigenvalue problem do not have GPT capabilities unless envisioned during code development. The GPT-Free approach addresses this limitation by requiring only the ability to compute the fundamental adjoint. This manuscript demonstrates the GPT-Free approach for depletion reactor calculations performed in SCALE6 using the 7x7 UAM assembly model. A ROM is developed for the assembly over a time horizon of 990 days. The approach both calculates the reduction error over the lifetime of the simulation using the K-metric and benchmarks the obtained sensitivities using sample calculations. (authors)

  4. Models of Regge behaviour in an asymptotically free theory

    International Nuclear Information System (INIS)

    Polkinghorne, J.C.

    1976-01-01

    Two simple Feynman integral models are presented which reproduce the features expected to be of physical importance in the Regge behaviour of asymptotically free theories. Analysis confirms the result, expected on general grounds, that φ³ theory in six dimensions has an essential singularity at l = -1. The extension to gauge theories is discussed. (Auth.)

  5. A simple shape-free model for pore-size estimation with positron annihilation lifetime spectroscopy

    International Nuclear Information System (INIS)

    Wada, Ken; Hyodo, Toshio

    2013-01-01

    Positron annihilation lifetime spectroscopy is one of the methods for estimating pore size in insulating materials. We present a shape-free model to be used conveniently for such analysis. A basic model in the classical picture is modified by introducing a parameter corresponding to an effective size of the positronium (Ps). This parameter is adjusted so that its Ps-lifetime to pore-size relation merges smoothly with that of the well-established Tao-Eldrup model (with modification involving the intrinsic Ps annihilation rate) applicable to very small pores. The combined model, i.e., modified Tao-Eldrup model for smaller pores and the modified classical model for larger pores, agrees surprisingly well with the quantum-mechanics based extended Tao-Eldrup model, which deals with Ps trapped in and in thermal equilibrium with a rectangular pore.
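
    For reference, the small-pore regime that the combined model must reproduce is the standard Tao-Eldrup lifetime-radius relation. The sketch below implements that standard relation and inverts it numerically; it is not the modified, shape-free model proposed in this record, and the small-pore validity range assumed here is illustrative.

        import numpy as np
        from scipy.optimize import brentq

        DELTA_R = 0.166   # nm, empirical electron-layer thickness in the Tao-Eldrup model
        LAMBDA_0 = 2.0    # 1/ns, spin-averaged annihilation rate inside the electron layer

        def pickoff_rate(radius_nm):
            """Standard Tao-Eldrup pick-off annihilation rate for a spherical pore."""
            r0 = radius_nm + DELTA_R
            x = radius_nm / r0
            return LAMBDA_0 * (1.0 - x + np.sin(2.0 * np.pi * x) / (2.0 * np.pi))

        def pore_radius_from_lifetime(tau_ns):
            """Invert the lifetime-radius relation numerically (small pores only)."""
            return brentq(lambda r: 1.0 / pickoff_rate(r) - tau_ns, 1e-3, 5.0)

        print(round(pore_radius_from_lifetime(2.0), 3), "nm")  # e.g. a 2 ns o-Ps lifetime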

  6. A simple shape-free model for pore-size estimation with positron annihilation lifetime spectroscopy

    Science.gov (United States)

    Wada, Ken; Hyodo, Toshio

    2013-06-01

    Positron annihilation lifetime spectroscopy is one of the methods for estimating pore size in insulating materials. We present a shape-free model to be used conveniently for such analysis. A basic model in the classical picture is modified by introducing a parameter corresponding to an effective size of the positronium (Ps). This parameter is adjusted so that its Ps-lifetime to pore-size relation merges smoothly with that of the well-established Tao-Eldrup model (with modification involving the intrinsic Ps annihilation rate) applicable to very small pores. The combined model, i.e., modified Tao-Eldrup model for smaller pores and the modified classical model for larger pores, agrees surprisingly well with the quantum-mechanics based extended Tao-Eldrup model, which deals with Ps trapped in and in thermal equilibrium with a rectangular pore.

  7. A free-surface lattice Boltzmann method for modelling the filling of expanding cavities by Bingham fluids.

    Science.gov (United States)

    Ginzburg, Irina; Steiner, Konrad

    2002-03-15

    The filling process of viscoplastic metal alloys and plastics in expanding cavities is modelled using the lattice Boltzmann method in two and three dimensions. These models combine the regularized Bingham model for viscoplastic fluids with a free-interface algorithm. The latter is based on a modified immiscible lattice Boltzmann model in which one species is the fluid and the other one is considered to be a vacuum. The boundary conditions at the curved liquid-vacuum interface are met without any geometrical front reconstruction from a first-order Chapman-Enskog expansion. The numerical results obtained with these models are found to be in good agreement with available theoretical and numerical analysis.
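
    The viscoplastic ingredient is an effective viscosity that stays finite as the shear rate vanishes instead of diverging like the ideal Bingham law. The sketch below uses a Papanastasiou-type exponential regularization as one common choice; the paper's exact regularization and parameter values are not specified here and are assumptions.

        import numpy as np

        def bingham_effective_viscosity(shear_rate, mu_p=0.1, tau_y=5.0, m=1000.0):
            """Regularized Bingham effective viscosity (Papanastasiou-type exponential
            regularization; the paper's exact regularization may differ)."""
            shear_rate = np.maximum(shear_rate, 1e-12)  # avoid division by zero
            return mu_p + tau_y * (1.0 - np.exp(-m * shear_rate)) / shear_rate

        for gamma_dot in (1e-4, 1e-2, 1.0, 100.0):
            print(gamma_dot, round(bingham_effective_viscosity(gamma_dot), 3))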

  8. Free fermion resolution of supergroup WZNW models

    Energy Technology Data Exchange (ETDEWEB)

    Quella, T.; Schomerus, V.

    2007-06-15

    Extending our earlier work on PSL(2|2), we explain how to reduce the solution of WZNW models on general type I supergroups to those defined on the bosonic subgroup. The new analysis covers in particular the supergroups GL(M|N) along with several close relatives such as PSL(N|N), certain Poincaré supergroups and the series OSP(2|2N). This remarkable progress relies on the use of a special Feigin-Fuchs type representation. In preparation for the field theory analysis, we shall exploit a minisuperspace analogue of a free fermion construction to deduce the spectrum of the Laplacian on type I supergroups. The latter is shown to be non-diagonalizable. After lifting these results to the full WZNW model, we address various issues of the field theory, including its modular invariance and the computation of correlation functions. In agreement with previous findings, supergroup WZNW models allow one to study chiral and non-chiral aspects of logarithmic conformal field theory within a geometric framework. We shall briefly indicate how insights from WZNW models carry over to non-geometric examples, such as the W(p) triplet models.

  9. Free fermion resolution of supergroup WZNW models

    Energy Technology Data Exchange (ETDEWEB)

    Quella, T; Schomerus, V

    2007-06-15

    Extending our earlier work on PSL(2|2), we explain how to reduce the solution of WZNW models on general type I supergroups to those defined on the bosonic subgroup. The new analysis covers in particular the supergroups GL(M|N) along with several close relatives such as PSL(N|N), certain Poincaré supergroups and the series OSP(2|2N). This remarkable progress relies on the use of a special Feigin-Fuchs type representation. In preparation for the field theory analysis, we shall exploit a minisuperspace analogue of a free fermion construction to deduce the spectrum of the Laplacian on type I supergroups. The latter is shown to be non-diagonalizable. After lifting these results to the full WZNW model, we address various issues of the field theory, including its modular invariance and the computation of correlation functions. In agreement with previous findings, supergroup WZNW models allow one to study chiral and non-chiral aspects of logarithmic conformal field theory within a geometric framework. We shall briefly indicate how insights from WZNW models carry over to non-geometric examples, such as the W(p) triplet models.

  10. European Climate - Energy Security Nexus. A model based scenario analysis

    International Nuclear Information System (INIS)

    Criqui, Patrick; Mima, Silvana

    2011-01-01

    In this research, we have provided an overview of the climate-security nexus in the European sector through a model-based scenario analysis with the POLES model. The analysis underlines that under stringent climate policies, Europe takes advantage of a double dividend, in its capacity to develop a new, cleaner energy model and in its lower vulnerability to potential shocks on the international energy markets. (authors)

  11. Model-based human reliability analysis: prospects and requirements

    International Nuclear Information System (INIS)

    Mosleh, A.; Chang, Y.H.

    2004-01-01

    Major limitations of the conventional methods for human reliability analysis (HRA), particularly those developed for operator response analysis in probabilistic safety assessments (PSA) of nuclear power plants, are summarized as a motivation for, and a basis for developing requirements for, the next generation of HRA methods. It is argued that a model-based approach that provides explicit cognitive causal links between operator behaviors and directly or indirectly measurable causal factors should be at the core of the advanced methods. An example of such a causal model is briefly reviewed; due to its complexity and input requirements, it can currently be implemented only in a dynamic PSA environment. The computer simulation code developed for this purpose is also described briefly, together with current limitations in the models, data, and the computer implementation

  12. Sentinel model for influenza A virus monitoring in free-grazing ducks in Thailand.

    Science.gov (United States)

    Boonyapisitsopa, Supanat; Chaiyawong, Supassama; Nonthabenjawan, Nutthawan; Jairak, Waleemas; Prakairungnamthip, Duangduean; Bunpapong, Napawan; Amonsin, Alongkorn

    2016-01-01

    Influenza A virus (IAV) can cause influenza in birds and mammals. In Thailand, free-grazing ducks are known IAV reservoirs and can spread viruses through frequent movements in habitats they share with wild birds. In this study, the sentinel model for IAV monitoring was applied over 4 months in two free-grazing duck flocks. IAV subtypes H4N6 (n=1) and H3N8 (n=5) were isolated from sentinel ducks at the ages of 13 and 15 weeks. Clinical signs of depression and ocular discharge were observed in the infected ducks. Phylogenetic analysis and genetic characterization of the isolated IAVs indicated that all Thai IAVs clustered in the Eurasian lineage and possess low pathogenic avian influenza characteristics. Serological analysis found that antibodies against IAVs could be detected in the ducks from 9 weeks of age. In summary, our results indicate that the sentinel model can be used for IAV monitoring in free-grazing duck flocks. Since free-grazing ducks are potential reservoirs and transmitters of IAVs, routine IAV surveillance in free-grazing duck flocks can be beneficial for influenza prevention and control strategies. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Multibody dynamic analysis using a rotation-free shell element with corotational frame

    Science.gov (United States)

    Shi, Jiabei; Liu, Zhuyong; Hong, Jiazhen

    2018-03-01

    The rotation-free shell formulation is a simple and effective method for modeling a shell with large deformation. Moreover, it is compatible with existing finite element theories. However, rotation-free shells are seldom employed in multibody systems. Using a derivative of rigid body motion, an efficient nonlinear shell model is proposed based on the rotation-free shell element and a corotational frame. The bending and membrane strains of the shell have been simplified by isolating deformational displacements from the detailed description of rigid body motion. The consistent stiffness matrix can be obtained easily in this form of shell model. To model the multibody system consisting of the presented shells, joint kinematic constraints including translational and rotational constraints are deduced in the context of the geometrically nonlinear rotation-free element. A simple node-to-surface contact discretization and penalty method are adopted for contacts between shells. A series of analyses for multibody system dynamics is presented to validate the proposed formulation. Furthermore, the deployment of a large-scale solar array is presented to verify the comprehensive performance of the nonlinear shell model.

  14. Aeroelastic simulation of multi-MW wind turbines using a free vortex model coupled to a geometrically exact beam model

    International Nuclear Information System (INIS)

    Saverin, Joseph; Peukert, Juliane; Marten, David; Pechlivanoglou, George; Paschereit, Christian Oliver; Greenblatt, David

    2016-01-01

    The current paper investigates the aeroelastic modelling of large, flexible multi-MW wind turbine blades. Most current performance prediction tools make use of the Blade Element Momentum (BEM) model, based upon a number of simplifying assumptions that hold only under steady conditions. This is why a lifting line free vortex wake (LLFVW) algorithm is used here to accurately resolve unsteady wind turbine aerodynamics. A coupling to the structural analysis tool BeamDyn, based on geometrically exact beam theory, allows for time-resolved aeroelastic simulations with highly deflected blades including bend-twist coupling. Predictions of blade loading and deformation for rigid and flexible blades are analysed with reference to different aerodynamic and structural approaches. The emergency shutdown procedure is chosen as an exemplary design load case causing large deflections to place emphasis on the influence of structural coupling and demonstrate the necessity of high fidelity structural models. (paper)

  15. Semiphysiological versus Empirical Modelling of the Population Pharmacokinetics of Free and Total Cefazolin during Pregnancy

    Directory of Open Access Journals (Sweden)

    J. G. Coen van Hasselt

    2014-01-01

    Full Text Available This work describes a first population pharmacokinetic (PK) model for free and total cefazolin during pregnancy, which can be used for dose regimen optimization. Secondly, analysis of PK studies in pregnant patients is challenging due to study design limitations. We therefore developed a semiphysiological modeling approach, which leveraged gestation-induced changes in creatinine clearance (CrCL) into a population PK model. This model was then compared to the conventional empirical covariate model. First, a base two-compartmental PK model with a linear protein binding was developed. The empirical covariate model for gestational changes consisted of a linear relationship between CL and gestational age. The semiphysiological model was based on the base population PK model and a separately developed mixed-effect model for gestation-induced change in CrCL. Estimates for baseline clearance (CL) were 0.119 L/min (RSE 58%) and 0.142 L/min (RSE 44%) for the empirical and semiphysiological models, respectively. Both models described the available PK data comparably well. However, as the semiphysiological model was based on prior knowledge of gestation-induced changes in renal function, this model may have improved predictive performance. This work demonstrates how a hybrid semiphysiological population PK approach may be of relevance in order to derive more informative inferences.
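
    The contrast between the two covariate approaches can be written down compactly: the empirical model scales clearance linearly with gestational age, while the semiphysiological model scales it with the separately modeled gestational change in CrCL. The parameter values in the sketch below are invented placeholders, not the fitted estimates reported in this record.

        # Hypothetical parameter values for illustration; not the fitted estimates in the study.
        CL_BASE = 0.13            # L/min, baseline cefazolin clearance
        THETA_GA = 0.015          # empirical slope per week of gestation (assumed)
        CRCL_REF = 120.0          # mL/min, reference creatinine clearance (assumed)

        def crcl_model(gestational_age_weeks):
            """Stand-in for the separately fitted mixed-effects model of gestational CrCL."""
            return CRCL_REF * (1.0 + 0.004 * gestational_age_weeks)

        def cl_empirical(gestational_age_weeks):
            """Empirical covariate model: clearance scales linearly with gestational age."""
            return CL_BASE * (1.0 + THETA_GA * gestational_age_weeks)

        def cl_semiphysiological(gestational_age_weeks):
            """Semiphysiological model: clearance scales with gestation-induced CrCL change."""
            return CL_BASE * crcl_model(gestational_age_weeks) / CRCL_REF

        for ga in (0, 20, 38):
            print(ga, round(cl_empirical(ga), 3), round(cl_semiphysiological(ga), 3))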

  16. The fallacy of the cognitive free fall in communication metaphor - a semiotic analysis

    DEFF Research Database (Denmark)

    Thellefsen, Martin Muderspach; Thellefsen, Torkild Leo; Sørensen, Bent

    2015-01-01

    This article is a theoretical analysis of the cognitive free fall metaphor, used within the cognitive view, as a model for explaining the communication process between a generator and receiver of a message. The aim is to demonstrate that the idea of a cognitive free fall taking place within...... as a complex interrelation of emotion, information and cognition.

  17. Adaptive Disturbance Estimation for Offset-Free SISO Model Predictive Control

    DEFF Research Database (Denmark)

    Huusom, Jakob Kjøbsted; Poulsen, Niels Kjølstad; Jørgensen, Sten Bay

    2011-01-01

    Offset-free tracking in Model Predictive Control requires estimation of unmeasured disturbances or the inclusion of an integrator. An algorithm for estimation of an unknown disturbance based on adaptive estimation with time-varying forgetting is introduced and benchmarked against the classical...
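
    The disturbance estimate that the controller needs can be obtained recursively from the output residuals, with the forgetting factor lowered when a large innovation hints that the disturbance has changed. The sketch below is an illustrative scalar scheme of that kind, not the specific algorithm or tuning proposed in this record.

        import numpy as np

        rng = np.random.default_rng(4)

        def estimate_disturbance(residuals, base_forgetting=0.95):
            """Recursive estimate of a piecewise-constant disturbance from output residuals,
            with a simple time-varying forgetting factor (illustrative scheme only)."""
            d_hat, p = 0.0, 1.0
            estimates = []
            for r in residuals:
                innovation = r - d_hat
                # Forget faster when the innovation is large (suggesting a disturbance change)
                forgetting = base_forgetting if abs(innovation) < 3.0 * np.sqrt(p + 0.01) else 0.7
                p = p / forgetting + 0.01
                gain = p / (p + 1.0)
                d_hat += gain * innovation
                p *= (1.0 - gain)
                estimates.append(d_hat)
            return np.array(estimates)

        true_d = np.concatenate([np.zeros(50), 2.0 * np.ones(50)])   # step disturbance
        residuals = true_d + 0.1 * rng.standard_normal(100)          # noisy output residuals
        print(estimate_disturbance(residuals)[[0, 49, 55, 99]].round(2))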

  18. On the applicability of nearly free electron model for resistivity calculations in liquid metals

    International Nuclear Information System (INIS)

    Gorecki, J.; Popielawski, J.

    1982-09-01

    The calculations of resistivity based on the nearly free electron model are presented for many noble and transition liquid metals. The triple-ion correlation is included in the resistivity formula according to the SCQCA approximation. Two different methods for describing the conduction band are used. The problem of the applicability of the nearly free electron model for different metals is discussed. (author)

  19. Modeling and analysis on ring-type piezoelectric transformers.

    Science.gov (United States)

    Ho, Shine-Tzong

    2007-11-01

    This paper presents an electromechanical model for a ring-type piezoelectric transformer (PT). To establish this model, vibration characteristics of the piezoelectric ring with free boundary conditions are analyzed in advance. Based on the vibration analysis of the piezoelectric ring, the operating frequency and vibration mode of the PT are chosen. Then, electromechanical equations of motion for the PT are derived based on Hamilton's principle, which can be used to simulate the coupled electromechanical system for the transformer. Quantities such as the voltage step-up ratio, input impedance, output impedance, input power, output power, and efficiency are calculated from these equations. The optimal load resistance and the maximum efficiency for the PT will be presented in this paper. Experiments were also conducted to verify the theoretical analysis, and a good agreement was obtained.

  20. On a price formation free boundary model by Lasry and Lions

    KAUST Repository

    Caffarelli, Luis A.

    2011-06-01

    We discuss global existence and asymptotic behaviour of a price formation free boundary model introduced by Lasry and Lions in 2007. Our results are based on a construction which transforms the problem into the heat equation with specially prepared initial datum. The key point is that the free boundary present in the original problem becomes the zero level set of this solution. Using the properties of the heat operator we can show global existence, regularity and asymptotic results of the free boundary. 2011 Académie des sciences.

  1. On a price formation free boundary model by Lasry and Lions

    KAUST Repository

    Caffarelli, Luis A.; Markowich, Peter A.; Pietschmann, Jan-F.

    2011-01-01

    We discuss global existence and asymptotic behaviour of a price formation free boundary model introduced by Lasry and Lions in 2007. Our results are based on a construction which transforms the problem into the heat equation with specially prepared initial datum. The key point is that the free boundary present in the original problem becomes the zero level set of this solution. Using the properties of the heat operator we can show global existence, regularity and asymptotic results of the free boundary. 2011 Académie des sciences.

  2. Technical Note: FreeCT_ICD: An Open Source Implementation of a Model-Based Iterative Reconstruction Method using Coordinate Descent Optimization for CT Imaging Investigations.

    Science.gov (United States)

    Hoffman, John M; Noo, Frédéric; Young, Stefano; Hsieh, Scott S; McNitt-Gray, Michael

    2018-06-01

    To facilitate investigations into the impacts of acquisition and reconstruction parameters on quantitative imaging, radiomics and CAD using CT imaging, we previously released an open source implementation of a conventional weighted filtered backprojection reconstruction called FreeCT_wFBP. Our purpose was to extend that work by providing an open-source implementation of a model-based iterative reconstruction method using coordinate descent optimization, called FreeCT_ICD. Model-based iterative reconstruction offers the potential for substantial radiation dose reduction, but can impose substantial computational processing and storage requirements. FreeCT_ICD is an open source implementation of a model-based iterative reconstruction method that provides a reasonable tradeoff between these requirements. This was accomplished by adapting a previously proposed method that allows the system matrix to be stored with a reasonable memory requirement. The method amounts to describing the attenuation coefficient using rotating slices that follow the helical geometry. In the initially-proposed version, the rotating slices are themselves described using blobs. We have replaced this description by a unique model that relies on tri-linear interpolation together with the principles of Joseph's method. This model offers an improvement in memory requirement while still allowing highly accurate reconstruction for conventional CT geometries. The system matrix is stored column-wise and combined with an iterative coordinate descent (ICD) optimization. The result is FreeCT_ICD, which is a reconstruction program developed on the Linux platform using C++ libraries and the open source GNU GPL v2.0 license. The software is capable of reconstructing raw projection data of helical CT scans. In this work, the software has been described and evaluated by reconstructing datasets exported from a clinical scanner which consisted of an ACR accreditation phantom dataset and a clinical pediatric
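
    Iterative coordinate descent updates one voxel at a time using one column of the stored system matrix, keeping the projection residual consistent after every update. The sketch below shows that update pattern on a tiny random least-squares problem; the actual FreeCT_ICD cost function, regularization and helical system matrix are not reproduced.

        import numpy as np

        rng = np.random.default_rng(5)

        # Tiny illustrative problem: y ~ A x with a sparse, precomputed system matrix A.
        # In FreeCT_ICD the columns describe ray/voxel intersections; here they are random.
        n_rays, n_voxels = 200, 50
        A = rng.random((n_rays, n_voxels)) * (rng.random((n_rays, n_voxels)) < 0.1)
        x_true = rng.random(n_voxels)
        y = A @ x_true + 0.01 * rng.standard_normal(n_rays)

        x = np.zeros(n_voxels)
        residual = y - A @ x
        for sweep in range(20):                      # full passes over the image
            for j in range(n_voxels):                # one voxel (one column of A) at a time
                col = A[:, j]
                denom = col @ col
                if denom == 0.0:
                    continue
                delta = (col @ residual) / denom     # exact 1-D least-squares minimizer
                x[j] += delta
                residual -= delta * col              # keep the residual consistent

        print(float(np.linalg.norm(x - x_true)))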

  3. Constraints based analysis of extended cybernetic models.

    Science.gov (United States)

    Mandli, Aravinda R; Venkatesh, Kareenhalli V; Modak, Jayant M

    2015-11-01

    The cybernetic modeling framework provides an interesting approach to model the regulatory phenomena occurring in microorganisms. In the present work, we adopt a constraints-based approach to analyze the nonlinear behavior of the extended equations of the cybernetic model. We first show that the cybernetic model exhibits linear growth behavior under the constraint of no resource allocation for the induction of the key enzyme. We then quantify the maximum achievable specific growth rate of microorganisms on mixtures of substitutable substrates under various kinds of regulation and show its use in gaining an understanding of the regulatory strategies of microorganisms. Finally, we show that Saccharomyces cerevisiae exhibits suboptimal dynamic growth with a long diauxic lag phase when growing on a mixture of glucose and galactose and discuss its potential to achieve optimal growth with a significantly reduced diauxic lag period. The analysis carried out in the present study illustrates the utility of adopting a constraints-based approach to understand the dynamic growth strategies of microorganisms. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  4. Loss terms in free-piston Stirling engine models

    Science.gov (United States)

    Gordon, Lloyd B.

    1992-01-01

    Various models for free-piston Stirling engines are reviewed. Initial models were developed primarily for design purposes and to predict operating parameters, especially efficiency. More recently, however, such models have been used to predict engine stability. Free-piston Stirling engines have no kinematic constraints, and stability may be sensitive not only to the load, but also to various nonlinear loss and spring constraints. The present understanding of various loss mechanisms for free-piston Stirling engines is reviewed, and how they have been incorporated into engine models is discussed.

  5. Performance analysis of NOAA tropospheric signal delay model

    International Nuclear Information System (INIS)

    Ibrahim, Hassan E; El-Rabbany, Ahmed

    2011-01-01

    Tropospheric delay is one of the dominant global positioning system (GPS) errors, which degrades the positioning accuracy. Recent development in tropospheric modeling relies on implementation of more accurate numerical weather prediction (NWP) models. In North America one of the NWP-based tropospheric correction models is the NOAA Tropospheric Signal Delay Model (NOAATrop), which was developed by the US National Oceanic and Atmospheric Administration (NOAA). Because of its potential to improve the GPS positioning accuracy, the NOAATrop model became the focus of many researchers. In this paper, we analyzed the performance of the NOAATrop model and examined its effect on the ionosphere-free-based precise point positioning (PPP) solution. We generated 3-year-long tropospheric zenith total delay (ZTD) data series for the NOAATrop model, the Hopfield model, and the International GNSS Service (IGS) final tropospheric correction product, respectively. These data sets were generated at ten IGS reference stations spanning Canada and the United States. We analyzed the NOAATrop ZTD data series and compared them with those of the Hopfield model. The IGS final tropospheric product was used as a reference. The analysis shows that the performance of the NOAATrop model is a function of both season (time of the year) and geographical location. However, its performance was superior to the Hopfield model in all cases. We further investigated the effect of implementing the NOAATrop model on the ionosphere-free-based PPP solution convergence and accuracy. It is shown that the use of the NOAATrop model improved the PPP solution convergence by 1%, 10% and 15% for the latitude, longitude and height components, respectively

  6. WE-H-BRA-08: A Monte Carlo Cell Nucleus Model for Assessing Cell Survival Probability Based On Particle Track Structure Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, B [Northwestern Memorial Hospital, Chicago, IL (United States); Georgia Institute of Technology, Atlanta, GA (Georgia); Wang, C [Georgia Institute of Technology, Atlanta, GA (Georgia)

    2016-06-15

    Purpose: To correlate the damage produced by particles of different types and qualities to cell survival on the basis of nanodosimetric analysis and advanced DNA structures in the cell nucleus. Methods: A Monte Carlo code was developed to simulate subnuclear DNA chromatin fibers (CFs) of 30nm utilizing a mean-free-path approach common to radiation transport. The cell nucleus was modeled as a spherical region containing 6000 chromatin-dense domains (CDs) of 400nm diameter, with additional CFs modeled in a sparser interchromatin region. The Geant4-DNA code was utilized to produce a particle track database representing various particles at different energies and dose quantities. These tracks were used to stochastically position the DNA structures based on their mean free path to interaction with CFs. Excitation and ionization events intersecting CFs were analyzed using the DBSCAN clustering algorithm for assessment of the likelihood of producing DSBs. Simulated DSBs were then assessed based on their proximity to one another for a probability of inducing cell death. Results: Variations in energy deposition to chromatin fibers match expectations based on differences in particle track structure. The quality of damage to CFs based on different particle types indicates more severe damage by high-LET radiation than low-LET radiation of identical particles. In addition, the model indicates more severe damage by protons than by alpha particles of the same LET, which is consistent with differences in their track structure. Cell survival curves have been produced showing the L-Q behavior of sparsely ionizing radiation. Conclusion: Initial results indicate the feasibility of producing cell survival curves based on the Monte Carlo cell nucleus method. Accurate correlation between simulated DNA damage and cell survival on the basis of nanodosimetric analysis can provide insight into the biological responses to various radiation types. Current efforts are directed at producing cell
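
    The DSB-candidate step is a density-based clustering of the energy-deposition events that fall within a chromatin fiber. The sketch below shows that step with scikit-learn's DBSCAN on invented event coordinates; the eps/min_samples thresholds and the DSB criterion are assumptions, not the values used in this record.

        import numpy as np
        from sklearn.cluster import DBSCAN

        rng = np.random.default_rng(6)

        # Hypothetical ionization-event coordinates (nm) along a track segment near a
        # chromatin fiber: a dense cluster plus scattered isolated events.
        cluster = rng.normal(loc=[10.0, 5.0, 5.0], scale=1.0, size=(12, 3))
        sparse = rng.uniform(0.0, 50.0, size=(30, 3))
        events = np.vstack([cluster, sparse])

        # Events within a few nm of each other are grouped; a cluster with several
        # ionizations is taken as a candidate double-strand break (threshold assumed).
        labels = DBSCAN(eps=3.2, min_samples=5).fit_predict(events)
        n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
        print("candidate DSB clusters:", n_clusters)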

  7. Variance-based sensitivity analysis for wastewater treatment plant modelling.

    Science.gov (United States)

    Cosenza, Alida; Mannina, Giorgio; Vanrolleghem, Peter A; Neumann, Marc B

    2014-02-01

    Global sensitivity analysis (GSA) is a valuable tool to support the use of mathematical models that characterise technical or natural systems. In the field of wastewater modelling, most of the recent applications of GSA use either regression-based methods, which require close to linear relationships between the model outputs and model factors, or screening methods, which only yield qualitative results. However, due to the characteristics of membrane bioreactors (MBR) (non-linear kinetics, complexity, etc.) there is an interest to adequately quantify the effects of non-linearity and interactions. This can be achieved with variance-based sensitivity analysis methods. In this paper, the Extended Fourier Amplitude Sensitivity Testing (Extended-FAST) method is applied to an integrated activated sludge model (ASM2d) for an MBR system including microbial product formation and physical separation processes. Twenty-one model outputs located throughout the different sections of the bioreactor and 79 model factors are considered. Significant interactions among the model factors are found. Contrary to previous GSA studies for ASM models, we find the relationship between variables and factors to be non-linear and non-additive. By analysing the pattern of the variance decomposition along the plant, the model factors having the highest variance contributions were identified. This study demonstrates the usefulness of variance-based methods in membrane bioreactor modelling where, due to the presence of membranes and operating conditions different from those typically found in conventional activated sludge systems, several highly non-linear effects are present. Further, the obtained results highlight the relevant role played by a modelling approach for MBRs that takes biological and physical processes into account simultaneously. © 2013.
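
    A variance-based analysis of this type decomposes output variance into first-order and total-effect indices per factor; when the total index substantially exceeds the first-order index, interactions matter. The sketch below runs an Extended-FAST analysis with the SALib package on a deliberately small, invented stand-in model with three factors, not the 79-factor ASM2d model of the study.

        import numpy as np
        from SALib.sample import fast_sampler
        from SALib.analyze import fast

        # Toy stand-in for the ASM2d/MBR model: one scalar output, three uncertain factors.
        problem = {
            "num_vars": 3,
            "names": ["mu_max", "K_S", "b_H"],
            "bounds": [[2.0, 8.0], [2.0, 20.0], [0.1, 0.4]],
        }

        def toy_model(x):
            mu_max, k_s, b_h = x
            # Deliberately non-linear and non-additive, like the effects reported for MBRs
            return mu_max * 10.0 / (k_s + 10.0) - b_h * mu_max

        samples = fast_sampler.sample(problem, 1000)            # Extended-FAST sampling design
        outputs = np.array([toy_model(x) for x in samples])
        indices = fast.analyze(problem, outputs)                # first-order (S1) and total (ST) indices
        print(indices["S1"], indices["ST"])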

  8. Analyzing Ambiguity of Context-Free Grammars

    DEFF Research Database (Denmark)

    Brabrand, Claus; Giegerich, Robert; Møller, Anders

    2010-01-01

    It has been known since 1962 that the ambiguity problem for context-free grammars is undecidable. Ambiguity in context-free grammars is a recurring problem in language design and parser generation, as well as in applications where grammars are used as models of real-world physical structures. We observe that there is a simple linguistic characterization of the grammar ambiguity problem, and we show how to exploit this by presenting an ambiguity analysis framework based on conservative language approximations. As a concrete example, we propose a technique based on local regular approximations...
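
    As a reminder of what the analysis must detect, ambiguity means some string has more than one parse tree under the grammar. The sketch below counts parse trees for the classic grammar E -> E '+' E | 'a' by brute force; it only illustrates the property itself, not the conservative approximation technique of this record.

        from functools import lru_cache

        # A classic ambiguous grammar: E -> E '+' E | 'a'.
        # Counting the distinct parse trees of a string exposes the ambiguity directly.
        @lru_cache(maxsize=None)
        def count_parses(s):
            total = 1 if s == "a" else 0
            for i, ch in enumerate(s):
                if ch == "+":
                    total += count_parses(s[:i]) * count_parses(s[i + 1:])
            return total

        print(count_parses("a+a"))      # 1 parse tree  -> unambiguous for this string
        print(count_parses("a+a+a"))    # 2 parse trees -> the grammar is ambiguous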

  9. Evaluating the environmental fate of pharmaceuticals using a level III model based on poly-parameter linear free energy relationships

    International Nuclear Information System (INIS)

    Zukowska, Barbara; Breivik, Knut; Wania, Frank

    2006-01-01

    We recently proposed how to expand the applicability of multimedia models towards polar organic chemicals by expressing environmental phase partitioning with the help of poly-parameter linear free energy relationships (PP-LFERs). Here we elaborate on this approach by applying it to three pharmaceutical substances. A PP-LFER-based version of a Level III fugacity model calculates overall persistence, concentrations and intermedia fluxes of polar and non-polar organic chemicals between air, water, soil and sediments at steady-state. Illustrative modeling results for the pharmaceuticals within a defined coastal region are presented and discussed. The model results are highly sensitive to the degradation rate in water and the equilibrium partitioning between organic carbon and water, suggesting that an accurate description of this particular partitioning equilibrium is essential in order to obtain reliable predictions of environmental fate. The PP-LFER based modeling approach furthermore illustrates that the greatest mobility in aqueous phases may be experienced by pharmaceuticals that combine a small molecular size with strong H-acceptor properties
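
    A PP-LFER expresses a phase-partition coefficient as a linear combination of solute descriptors weighted by system coefficients, typically in the Abraham form log K = c + eE + sS + aA + bB + vV. The sketch below evaluates that form with placeholder descriptor and coefficient values; none of the numbers are those used for the three pharmaceuticals in this record.

        # Poly-parameter linear free energy relationship (Abraham-type form):
        #   log K = c + e*E + s*S + a*A + b*B + v*V
        # where E, S, A, B, V are solute descriptors and c, e, s, a, b, v characterize the
        # two-phase system. The coefficient values below are placeholders, not fitted values.

        def pp_lfer_log_k(descriptors, system):
            E, S, A, B, V = descriptors
            c, e, s, a, b, v = system
            return c + e * E + s * S + a * A + b * B + v * V

        # Hypothetical solute descriptors for a polar, pharmaceutical-like molecule
        solute = (1.2, 1.6, 0.9, 1.4, 1.7)
        # Hypothetical system coefficients for an organic carbon-water partition system
        organic_carbon_water = (0.1, 1.1, -0.7, -0.2, -1.8, 2.3)

        print(round(pp_lfer_log_k(solute, organic_carbon_water), 2))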

  10. Evaluating the environmental fate of pharmaceuticals using a level III model based on poly-parameter linear free energy relationships

    Energy Technology Data Exchange (ETDEWEB)

    Zukowska, Barbara [Department of Analytical Chemistry, Chemical Faculty, Gdansk University of Technology, 11/12 G. Narutowicza St., 80-952 Gdansk (Poland); Breivik, Knut [NILU- Norwegian Institute for Air Research, P.O. Box 100, NO-2027 Kjeller (Norway)]. E-mail: knut.breivik@nilu.no; Wania, Frank [Department of Physical and Environmental Sciences, University of Toronto at Scarborough, 1265 Military Trail, Scarborough, Ontario, M1C 1A4 (Canada)

    2006-04-15

    We recently proposed how to expand the applicability of multimedia models towards polar organic chemicals by expressing environmental phase partitioning with the help of poly-parameter linear free energy relationships (PP-LFERs). Here we elaborate on this approach by applying it to three pharmaceutical substances. A PP-LFER-based version of a Level III fugacity model calculates overall persistence, concentrations and intermedia fluxes of polar and non-polar organic chemicals between air, water, soil and sediments at steady-state. Illustrative modeling results for the pharmaceuticals within a defined coastal region are presented and discussed. The model results are highly sensitive to the degradation rate in water and the equilibrium partitioning between organic carbon and water, suggesting that an accurate description of this particular partitioning equilibrium is essential in order to obtain reliable predictions of environmental fate. The PP-LFER based modeling approach furthermore illustrates that the greatest mobility in aqueous phases may be experienced by pharmaceuticals that combine a small molecular size with strong H-acceptor properties.

  11. HDclassif : An R Package for Model-Based Clustering and Discriminant Analysis of High-Dimensional Data

    Directory of Open Access Journals (Sweden)

    Laurent Berge

    2012-01-01

    Full Text Available This paper presents the R package HDclassif which is devoted to the clustering and the discriminant analysis of high-dimensional data. The classification methods proposed in the package result from a new parametrization of the Gaussian mixture model which combines the idea of dimension reduction and model constraints on the covariance matrices. The supervised classification method using this parametrization is called high dimensional discriminant analysis (HDDA). In a similar manner, the associated clustering method is called high dimensional data clustering (HDDC) and uses the expectation-maximization algorithm for inference. In order to correctly fit the data, both methods estimate the specific subspace and the intrinsic dimension of the groups. Due to the constraints on the covariance matrices, the number of parameters to estimate is significantly lower than in other model-based methods, which allows the methods to be stable and efficient in high dimensions. Two introductory examples illustrated with R code allow the user to discover the hdda and hddc functions. Experiments on simulated and real datasets also compare HDDC and HDDA with existing classification methods on high-dimensional datasets. HDclassif is free software distributed under the General Public License, as part of the R software project.

  12. Analysis of functionality free CASE-tools databases design

    Directory of Open Access Journals (Sweden)

    A. V. Gavrilov

    2016-01-01

    Full Text Available Introducing database design CASE technologies into the educational process requires significant expenditure by the institution on software purchases. A possible solution could be the use of free software analogues. At the same time, this kind of substitution should be based on an even-handed comparison of the functional characteristics and operating features of these programs. The purpose of the article is a review of free and non-profit CASE tools for database design, as well as their classification on the basis of an analysis of their functionality. Materials from the official websites of the tool developers were used when writing this article. The functional characteristics of CASE tools for database design were evaluated exclusively empirically, through direct work with the software products. The functionality analysis allows two categories of CASE tools for database design to be distinguished. The first category includes systems with a basic set of features and tools. The most important basic functions of these systems are: management of connections to database servers, visual tools to create and modify database objects (tables, views, triggers, procedures), the ability to enter and edit data in table mode, user and privilege management tools, an SQL-code editor, and means of data export/import. CASE systems of the first category can be used to design and develop simple databases, to manage data, and as database server administration tools. A distinctive feature of the second category of CASE tools for database design (full-featured systems) is the presence of a visual designer that allows the database model to be constructed and the database to be created automatically on the server from this model. CASE systems of this category can be used for the design and development of databases of any structural complexity, as well as for database server administration. The article concluded that the

  13. Propagation of experimental uncertainties using the Lipari-Szabo model-free analysis of protein dynamics

    International Nuclear Information System (INIS)

    Jin Danqing; Andrec, Michael; Montelione, Gaetano T.; Levy, Ronald M.

    1998-01-01

    In this paper we make use of the graphical procedure previously described [Jin, D. et al. (1997) J. Am. Chem. Soc., 119, 6923-6924] to analyze NMR relaxation data using the Lipari-Szabo model-free formalism. The graphical approach is advantageous in that it allows the direct visualization of the experimental uncertainties in the motional parameter space. Some general 'rules' describing the relationship between the precision of the relaxation measurements and the precision of the model-free parameters and how this relationship changes with the overall tumbling time (τm) are summarized. The effect of the precision in the relaxation measurements on the detection of internal motions not close to the extreme narrowing limit is analyzed. We also show that multiple timescale internal motions may be obscured by experimental uncertainty, and that the collection of relaxation data at very high field strength can improve the ability to detect such deviations from the simple Lipari-Szabo model

  14. Analysis of coherence properties of 3-rd generation synchrotron sources and free-electron lasers

    Energy Technology Data Exchange (ETDEWEB)

    Vartanyants, I.A.; Singer, A. [HASYLAB at Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany)

    2009-07-15

    A general theoretical approach based on the results of statistical optics is used for the analysis of the transverse coherence properties of 3-rd generation synchrotron sources and X-ray free-electron lasers (XFEL). Correlation properties of the wave fields are calculated at different distances from an equivalent Gaussian Schell-model source. This model is used to describe coherence properties of the five meter undulator source at the synchrotron storage ring PETRA III. In the case of XFEL sources the decomposition of the statistical fields into a sum of independently propagating transverse modes is used for the analysis of the coherence properties of these new sources. A detailed calculation is performed for the parameters of the SASE1 undulator at the European XFEL. It is demonstrated that only a few modes contribute significantly to the total radiation field of that source. (orig.)

  15. Analysis of coherence properties of 3-rd generation synchrotron sources and free-electron lasers

    International Nuclear Information System (INIS)

    Vartanyants, I.A.; Singer, A.

    2009-07-01

    A general theoretical approach based on the results of statistical optics is used for the analysis of the transverse coherence properties of 3-rd generation synchrotron sources and X-ray free-electron lasers (XFEL). Correlation properties of the wave fields are calculated at different distances from an equivalent Gaussian Schell-model source. This model is used to describe coherence properties of the five meter undulator source at the synchrotron storage ring PETRA III. In the case of XFEL sources the decomposition of the statistical fields into a sum of independently propagating transverse modes is used for the analysis of the coherence properties of these new sources. A detailed calculation is performed for the parameters of the SASE1 undulator at the European XFEL. It is demonstrated that only a few modes contribute significantly to the total radiation field of that source. (orig.)

  16. Model-Free Adaptive Control for Unknown Nonlinear Zero-Sum Differential Game.

    Science.gov (United States)

    Zhong, Xiangnan; He, Haibo; Wang, Ding; Ni, Zhen

    2018-05-01

    In this paper, we present a new model-free globalized dual heuristic dynamic programming (GDHP) approach for discrete-time nonlinear zero-sum game problems. First, an online learning algorithm is proposed based on the GDHP method to solve the Hamilton-Jacobi-Isaacs equation associated with the optimal regulation control problem. By shifting the definition of the performance index backward by one step, the requirement for the system dynamics, or for an identifier, is relaxed in the proposed method. Then, three neural networks are established to approximate the optimal saddle point feedback control law, the disturbance law, and the performance index, respectively. The explicit updating rules for these three neural networks are provided based on the data generated during the online learning along the system trajectories. The stability analysis in terms of the neural network approximation errors is discussed based on the Lyapunov approach. Finally, two simulation examples are provided to show the effectiveness of the proposed method.

  17. Coupled dynamic-multidimensional modelling of free-piston engine combustion

    International Nuclear Information System (INIS)

    Mikalsen, R.; Roskilly, A.P.

    2009-01-01

    Free-piston engines are under investigation by a number of research groups worldwide, as an alternative to conventional technology in applications such as electric and hydraulic power generation. The piston dynamics of the free-piston engine differ significantly from those of conventional engines, and this may influence in-cylinder gas motion, combustion and emissions formation. Due to the complex interaction between mechanics and thermodynamics, the modelling of free-piston engines is not straight-forward. This paper presents a novel approach to the modelling of free-piston engines through the introduction of solution-dependent mesh motion in an engine CFD code. The particular features of free-piston engines are discussed, and the model for engine dynamics implemented in the CFD code is described. Finally, the coupled solver is demonstrated through the modelling of a spark ignited free-piston engine generator

  18. Coupled dynamic-multidimensional modelling of free-piston engine combustion

    Energy Technology Data Exchange (ETDEWEB)

    Mikalsen, R. [Sir Joseph Swan Institute for Energy Research, Newcastle University, Newcastle upon Tyne NE1 7RU (United Kingdom); Roskilly, A.P. [Sir Joseph Swan Institute for Energy Research, Newcastle University, Newcastle upon Tyne NE1 7RU (United Kingdom)], E-mail: tony.roskilly@ncl.ac.uk

    2009-01-15

    Free-piston engines are under investigation by a number of research groups worldwide, as an alternative to conventional technology in applications such as electric and hydraulic power generation. The piston dynamics of the free-piston engine differ significantly from those of conventional engines, and this may influence in-cylinder gas motion, combustion and emissions formation. Due to the complex interaction between mechanics and thermodynamics, the modelling of free-piston engines is not straight-forward. This paper presents a novel approach to the modelling of free-piston engines through the introduction of solution-dependent mesh motion in an engine CFD code. The particular features of free-piston engines are discussed, and the model for engine dynamics implemented in the CFD code is described. Finally, the coupled solver is demonstrated through the modelling of a spark ignited free-piston engine generator.

  19. A Comparison of Distribution Free and Non-Distribution Free Factor Analysis Methods

    Science.gov (United States)

    Ritter, Nicola L.

    2012-01-01

    Many researchers recognize that factor analysis can be conducted on both correlation matrices and variance-covariance matrices. Although most researchers extract factors from non-distribution free or parametric methods, researchers can also extract factors from distribution free or non-parametric methods. The nature of the data dictates the method…

  20. T-Spline Based Unifying Registration Procedure for Free-Form Surface Workpieces in Intelligent CMM

    Directory of Open Access Journals (Sweden)

    Zhenhua Han

    2017-10-01

    Full Text Available With the development of the modern manufacturing industry, the free-form surface is widely used in various fields, and the automatic detection of a free-form surface is an important function of future intelligent three-coordinate measuring machines (CMMs. To improve the intelligence of CMMs, a new visual system is designed based on the characteristics of CMMs. A unified model of the free-form surface is proposed based on T-splines. A discretization method of the T-spline surface formula model is proposed. Under this discretization, the position and orientation of the workpiece would be recognized by point cloud registration. A high accuracy evaluation method is proposed between the measured point cloud and the T-spline surface formula. The experimental results demonstrate that the proposed method has the potential to realize the automatic detection of different free-form surfaces and improve the intelligence of CMMs.

  1. Static aeroelastic analysis including geometric nonlinearities based on reduced order model

    Directory of Open Access Journals (Sweden)

    Changchuan Xie

    2017-04-01

    Full Text Available This paper describes a method proposed for modeling large deflection of aircraft in nonlinear aeroelastic analysis by developing a reduced order model (ROM). The method is applied to solving the static aeroelastic and static aeroelastic trim problems of flexible aircraft containing geometric nonlinearities; meanwhile, the non-planar effects of aerodynamics and the follower force effect have been considered. ROMs are computationally inexpensive mathematical representations compared to the traditional nonlinear finite element method (FEM), especially in aeroelastic solutions. The structural modeling approach presented here is based on the combined modal/finite element (MFE) method, which characterizes the stiffness nonlinearities, and this structural model is applied as the ROM in the aeroelastic analysis. Moreover, the non-planar aerodynamic force is computed by the non-planar vortex lattice method (VLM). Structure and aerodynamics are coupled with the surface spline method. The results show that both the static aeroelastic analysis and the trim analysis of aircraft based on the structural ROM achieve good agreement with analysis based on the FEM and with experimental results.

  2. A receptor model for urban aerosols based on oblique factor analysis

    DEFF Research Database (Denmark)

    Keiding, Kristian; Sørensen, Morten S.; Pind, Niels

    1987-01-01

    A procedure is outlined for the construction of receptor models of urban aerosols, based on factor analysis. The advantage of the procedure is that the covariation of source impacts is included in the construction of the models. The results are compared with results obtained by other receptor-modelling procedures. It was found that procedures based on correlating sources were physically sound as well as in mutual agreement. Procedures based on non-correlating sources were found to generate physically obscure models.

  3. A model-based systems approach to pharmaceutical product-process design and analysis

    DEFF Research Database (Denmark)

    Gernaey, Krist; Gani, Rafiqul

    2010-01-01

    This is a perspective paper highlighting the need for systematic model-based design and analysis in pharmaceutical product-process development. A model-based framework is presented and the role, development and use of models of various types are discussed together with the structure of the models...

  4. Differential geometry based solvation model II: Lagrangian formulation.

    Science.gov (United States)

    Chen, Zhan; Baker, Nathan A; Wei, G W

    2011-12-01

    Solvation is an elementary process in nature and is of paramount importance to more sophisticated chemical, biological and biomolecular processes. The understanding of solvation is an essential prerequisite for the quantitative description and analysis of biomolecular systems. This work presents a Lagrangian formulation of our differential geometry based solvation models. The Lagrangian representation of biomolecular surfaces has a few utilities/advantages. First, it provides an essential basis for biomolecular visualization, surface electrostatic potential map and visual perception of biomolecules. Additionally, it is consistent with the conventional setting of implicit solvent theories and thus, many existing theoretical algorithms and computational software packages can be directly employed. Finally, the Lagrangian representation does not need to resort to artificially enlarged van der Waals radii as often required by the Eulerian representation in solvation analysis. The main goal of the present work is to analyze the connection, similarity and difference between the Eulerian and Lagrangian formalisms of the solvation model. Such analysis is important to the understanding of the differential geometry based solvation model. The present model extends the scaled particle theory of nonpolar solvation model with a solvent-solute interaction potential. The nonpolar solvation model is completed with a Poisson-Boltzmann (PB) theory based polar solvation model. The differential geometry theory of surfaces is employed to provide a natural description of solvent-solute interfaces. The optimization of the total free energy functional, which encompasses the polar and nonpolar contributions, leads to coupled potential driven geometric flow and PB equations. Due to the development of singularities and nonsmooth manifolds in the Lagrangian representation, the resulting potential-driven geometric flow equation is embedded into the Eulerian representation for the purpose of

  5. SU-G-BRC-13: Model Based Classification for Optimal Position Selection for Left-Sided Breast Radiotherapy: Free Breathing, DIBH, Or Prone

    Energy Technology Data Exchange (ETDEWEB)

    Lin, H; Liu, T; Xu, X [Rensselaer Polytechnic Institute, Troy, NY (United States); Shi, C [Saint Vincent Medical Center, Bridgeport, CT (United States); Petillion, S; Kindts, I [University Hospitals Leuven, Leuven, Vlaams-Brabant (Belgium); Tang, X [Memorial Sloan Kettering Cancer Center, West Harrison, NY (United States)

    2016-06-15

    Purpose: There are clinical decision challenges in selecting optimal treatment positions for left-sided breast cancer patients: supine free breathing (FB), supine Deep Inspiration Breath Hold (DIBH) and prone free breathing (prone). Physicians often make the decision based on experience and trials, which might not always result in optimal OAR doses. We herein propose a mathematical model to predict the lowest OAR doses among these three positions, providing a quantitative tool for the corresponding clinical decision. Methods: Patients were scanned in FB, DIBH, and prone positions under an IRB-approved protocol. Tangential beam plans were generated for each position, and OAR doses were calculated. The position with the least OAR doses is defined as the optimal position. The following features were extracted from each scan to build the model: heart, ipsilateral lung, and breast volume; in-field heart and ipsilateral lung volume; distance between heart and target; laterality of heart; and dose to heart and ipsilateral lung. Principal Components Analysis (PCA) was applied to remove the collinearity of the input data and also to lower the data dimensionality. Feature selection, another method to reduce dimensionality, was applied as a comparison. Support Vector Machine (SVM) was then used for classification. Thirty-seven patient datasets were acquired; up to now, five patient plans were available. K-fold cross validation was used to validate the accuracy of the classifier model with a small training size. Results: The classification results and K-fold cross validation demonstrated that the model is capable of predicting the optimal position for patients. The accuracy of K-fold cross validations has reached 80%. Compared to PCA, feature selection allows causal features of dose to be determined. This provides more clinical insights. Conclusion: The proposed classification system appeared to be feasible. We are generating plans for the rest of the 37 patient images, and more statistically significant
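
    As a rough illustration of the PCA-plus-SVM pipeline with K-fold cross validation described in this abstract, the sketch below uses scikit-learn; the feature matrix, labels, and parameter choices are placeholders rather than the study's actual data or settings.

```python
# Hypothetical sketch of a PCA + SVM position classifier with K-fold cross validation.
# Features and labels are randomly generated placeholders, not the study's data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, KFold

# X: one row per patient (e.g. heart/lung/breast volumes, in-field volumes,
#    heart-target distance, heart laterality, dose surrogates)
# y: index of the optimal position (0 = FB, 1 = DIBH, 2 = prone)
rng = np.random.default_rng(0)
X = rng.random((37, 8))            # placeholder feature matrix
y = rng.integers(0, 3, 37)         # placeholder labels

clf = make_pipeline(StandardScaler(), PCA(n_components=4), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))
print("cross-validated accuracy: %.2f" % scores.mean())
```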

  6. Molecular-based recursive partitioning analysis model for glioblastoma in the temozolomide era: a correlative analysis based on NRG Oncology RTOG 0525

    NARCIS (Netherlands)

    Bell, Erica Hlavin; Pugh, Stephanie L.; McElroy, Joseph P.; Gilbert, Mark R.; Mehta, Minesh; Klimowicz, Alexander C.; Magliocco, Anthony; Bredel, Markus; Robe, Pierre; Grosu, Anca L.; Stupp, Roger; Curran, Walter; Becker, Aline P.; Salavaggione, Andrea L.; Barnholtz-Sloan, Jill S.; Aldape, Kenneth; Blumenthal, Deborah T.; Brown, Paul D.; Glass, Jon; Souhami, Luis; Lee, R. Jeffrey; Brachman, David; Flickinger, John; Won, Minhee; Chakravarti, Arnab

    2017-01-01

    IMPORTANCE: There is a need for a more refined, molecularly based classification model for glioblastoma (GBM) in the temozolomide era. OBJECTIVE: To refine the existing clinically based recursive partitioning analysis (RPA) model by incorporating molecular variables. DESIGN, SETTING, AND

  7. Model-based gene set analysis for Bioconductor.

    Science.gov (United States)

    Bauer, Sebastian; Robinson, Peter N; Gagneur, Julien

    2011-07-01

    Gene Ontology and other forms of gene-category analysis play a major role in the evaluation of high-throughput experiments in molecular biology. Single-category enrichment analysis procedures such as Fisher's exact test tend to flag large numbers of redundant categories as significant, which can complicate interpretation. We have recently developed an approach called model-based gene set analysis (MGSA), that substantially reduces the number of redundant categories returned by the gene-category analysis. In this work, we present the Bioconductor package mgsa, which makes the MGSA algorithm available to users of the R language. Our package provides a simple and flexible application programming interface for applying the approach. The mgsa package has been made available as part of Bioconductor 2.8. It is released under the conditions of the Artistic license 2.0. peter.robinson@charite.de; julien.gagneur@embl.de.

  8. Superior accuracy of model-based radiostereometric analysis for measurement of polyethylene wear

    DEFF Research Database (Denmark)

    Stilling, M; Kold, S; de Raedt, S

    2012-01-01

    The accuracy and precision of two new methods of model-based radiostereometric analysis (RSA) were hypothesised to be superior to a plain radiograph method in the assessment of polyethylene (PE) wear.

  9. Uncertainty analysis and validation of environmental models. The empirically based uncertainty analysis

    International Nuclear Information System (INIS)

    Monte, Luigi; Hakanson, Lars; Bergstroem, Ulla; Brittain, John; Heling, Rudie

    1996-01-01

    The principles of Empirically Based Uncertainty Analysis (EBUA) are described. EBUA is based on the evaluation of 'performance indices' that express the level of agreement between the model and sets of empirical independent data collected in different experimental circumstances. Some of these indices may be used to evaluate the confidence limits of the model output. The method is based on the statistical analysis of the distribution of the index values and on the quantitative relationship of these values with the ratio 'experimental data/model output'. Some performance indices are described in the present paper. Among these, the so-called 'functional distance' (d) between the logarithm of model output and the logarithm of the experimental data, defined as d^2 = Σ_(i=1)^n (ln M_i - ln O_i)^2 / n, where M_i is the i-th experimental value, O_i the corresponding model evaluation and n the number of pairs 'experimental value, predicted value', is an important tool for the EBUA method. From the statistical distribution of this performance index, it is possible to infer the characteristics of the distribution of the ratio 'experimental data/model output' and, consequently, to evaluate the confidence limits for the model predictions. This method was applied to calculate the uncertainty level of a model developed to predict the migration of radiocaesium in lacustrine systems. Unfortunately, performance indices are affected by the uncertainty of the experimental data used in validation. Indeed, measurement results of environmental levels of contamination are generally associated with large uncertainty due to the measurement and sampling techniques and to the large variability in space and time of the measured quantities. It is demonstrated that this non-desired effect, in some circumstances, may be corrected by means of simple formulae
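
    The functional-distance index defined above is straightforward to compute; the sketch below is a minimal Python illustration with made-up measured and modelled values.

```python
# Minimal sketch of the "functional distance" performance index defined above:
# d^2 = (1/n) * sum_i (ln M_i - ln O_i)^2, with M_i the experimental values and
# O_i the corresponding model evaluations (all values must be positive).
import numpy as np

def functional_distance_squared(experimental, modelled):
    experimental = np.asarray(experimental, dtype=float)
    modelled = np.asarray(modelled, dtype=float)
    return np.mean((np.log(experimental) - np.log(modelled)) ** 2)

# Made-up example values (e.g. radiocaesium concentrations):
print(functional_distance_squared([12.0, 8.5, 30.1], [10.0, 9.0, 25.0]))
```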

  10. Weighted functional linear regression models for gene-based association analysis.

    Science.gov (United States)

    Belonogova, Nadezhda M; Svishcheva, Gulnara R; Wilson, James F; Campbell, Harry; Axenovich, Tatiana I

    2018-01-01

    Functional linear regression models are effectively used in gene-based association analysis of complex traits. These models combine information about individual genetic variants, taking into account their positions and reducing the influence of noise and/or observation errors. To increase the power of methods, where several differently informative components are combined, weights are introduced to give the advantage to more informative components. Allele-specific weights have been introduced to collapsing and kernel-based approaches to gene-based association analysis. Here we have for the first time introduced weights to functional linear regression models adapted for both independent and family samples. Using data simulated on the basis of GAW17 genotypes and weights defined by allele frequencies via the beta distribution, we demonstrated that type I errors correspond to declared values and that increasing the weights of causal variants allows the power of functional linear models to be increased. We applied the new method to real data on blood pressure from the ORCADES sample. Five of the six known genes with P models. Moreover, we found an association between diastolic blood pressure and the VMP1 gene (P = 8.18×10-6), when we used a weighted functional model. For this gene, the unweighted functional and weighted kernel-based models had P = 0.004 and 0.006, respectively. The new method has been implemented in the program package FREGAT, which is freely available at https://cran.r-project.org/web/packages/FREGAT/index.html.
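
    The abstract states that variant weights are defined from allele frequencies via the beta distribution; the snippet below illustrates one common convention (a beta density with shape parameters 1 and 25, as used by several kernel-based tests). The specific shape parameters are an assumption here, not necessarily those used in FREGAT.

```python
# Hedged illustration of allele-frequency-based variant weights via the beta density.
# The shape parameters (1, 25) are an assumed, commonly used convention; rare
# variants (small minor allele frequency) receive larger weights.
from scipy.stats import beta

def variant_weights(maf, a=1.0, b=25.0):
    """Return one weight per variant from its minor allele frequency."""
    return beta.pdf(maf, a, b)

print(variant_weights([0.005, 0.01, 0.05, 0.20]))
```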

  11. Development of a CANDU Moderator Analysis Model; Based on Coupled Solver

    International Nuclear Information System (INIS)

    Yoon, Churl; Park, Joo Hwan

    2006-01-01

    A CFD model for predicting the CANDU-6 moderator temperature has been developed over several years in KAERI, based on CFX-4. This analytic model (CFX4-CAMO) has some strength in the modeling of hydraulic resistance in the core region and in the treatment of the heat source term in the energy equations. But convergence difficulties and slow computing speed reveal the limitations of this model, because the CFX-4 code adopts a segregated solver to solve the governing equations with their strong coupling effects. Compared to CFX-4 using a segregated solver, CFX-10 adopts a highly efficient and robust coupled solver. Before December 2005, when CFX-10 was distributed, the previous version (the CFX-5 series) also adopted a coupled solver but did not have the capability to apply porous media approaches correctly. In this study, the developed moderator analysis model based on CFX-4 (CFX4-CAMO) is transformed into a new moderator analysis model based on CFX-10. The new model is examined and the results are compared to the former.

  12. A layer-wise MITC9 finite element for the free-vibration analysis of plates with piezo-patches

    Directory of Open Access Journals (Sweden)

    Maria Cinefra

    2015-04-01

    Full Text Available The present article considers the free-vibration analysis of plate structures with piezoelectric patches by means of a plate finite element with variable through-the-thickness layer-wise kinematics. The refined models used are derived from Carrera's Unified Formulation (CUF), and they permit the vibration modes along the thickness to be accurately described. The finite-element method is employed and the plate element implemented has nine nodes, and the mixed interpolation of tensorial components (MITC) method is used to counteract the membrane and shear locking phenomenon. The related governing equations are derived from the principle of virtual displacement, extended to the analysis of electromechanical problems. An isotropic plate with piezoelectric patches is analyzed, with clamped-free boundary conditions and subjected to open- and short-circuit configurations. The results, obtained with different theories, are compared with the higher-order type solutions given in the literature. The conclusion is reached that the plate element based on the CUF is more suitable and efficient compared to the classical models in the study of multilayered structures embedding piezo-patches.

  13. A simple topography-driven, calibration-free runoff generation model

    Science.gov (United States)

    Gao, H.; Birkel, C.; Hrachowitz, M.; Tetzlaff, D.; Soulsby, C.; Savenije, H. H. G.

    2017-12-01

    Determining the amount of runoff generated from rainfall occupies a central place in rainfall-runoff modelling. Moreover, reading landscapes and developing calibration-free runoff generation models that adequately reflect land surface heterogeneities remains the focus of much hydrological research. In this study, we created a new method to estimate runoff generation - the HAND-based Storage Capacity curve (HSC) - which uses a topographic index (HAND, Height Above the Nearest Drainage) to identify hydrological similarity and the partially saturated areas of catchments. We then coupled the HSC model with the Mass Curve Technique (MCT) method to estimate root zone storage capacity (SuMax), and obtained the calibration-free runoff generation model HSC-MCT. Both models (HSC and HSC-MCT) allow us to estimate runoff generation and simultaneously visualize the spatial dynamics of the saturated area. We tested the two models in the data-rich Bruntland Burn (BB) experimental catchment in Scotland, which has an unusual time series of the field-mapped saturation area extent. The models were subsequently tested in 323 MOPEX (Model Parameter Estimation Experiment) catchments in the United States. HBV and TOPMODEL were used as benchmarks. We found that the HSC performed better in reproducing the spatio-temporal pattern of the observed saturated areas in the BB catchment compared with TOPMODEL, which is based on the topographic wetness index (TWI). The HSC also outperformed HBV and TOPMODEL in the MOPEX catchments for both calibration and validation. Despite having no calibrated parameters, the HSC-MCT model also performed comparably well with the calibrated HBV and TOPMODEL, highlighting the robustness of the HSC model in describing the spatial distribution of the root zone storage capacity and the efficiency of the MCT method in estimating SuMax. Moreover, the HSC-MCT model facilitated effective visualization of the saturated area, which has the potential to be used for broader
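
    As a conceptual sketch only, the snippet below illustrates the core HAND-based idea behind the HSC curve (cells with a low Height Above the Nearest Drainage saturate first), using placeholder values; it does not reproduce the actual HSC or HSC-MCT formulation.

```python
# Conceptual sketch of the HAND-based saturation idea: cells with low Height Above
# the Nearest Drainage are treated as saturated once the water level exceeds their
# HAND value.  This threshold rule is a simplification, not the exact HSC curve.
import numpy as np

hand = np.array([0.2, 0.5, 1.0, 2.5, 4.0, 7.0, 12.0])  # placeholder HAND values (m)

def saturated_fraction(hand_values, water_level):
    """Fraction of the catchment treated as saturated at a given water level (m)."""
    return float(np.mean(hand_values <= water_level))

for level in (0.5, 2.0, 5.0):
    print(level, saturated_fraction(hand, level))
```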

  14. Design and Application of Offset-Free Model Predictive Control Disturbance Observation Method

    Directory of Open Access Journals (Sweden)

    Xue Wang

    2016-01-01

    Full Text Available Model predictive control (MPC), with its modest requirements on the mathematical model, excellent control performance, and convenient online calculation, has developed into a very important subdiscipline with a rich theoretical foundation and practical applications. However, unmeasurable disturbances are widespread in industrial processes and are difficult to deal with directly at present. In most of the implemented MPC strategies, the method of incorporating a constant output disturbance into the process model is introduced to solve this problem, but it fails to achieve offset-free control once unmeasured disturbances enter the process. Based on Kalman filter theory, the problem is solved by using a more general disturbance model which is superior to the constant output disturbance model. This paper presents the necessary conditions for offset-free model predictive control based on this model. By applying the disturbance model, the unmeasurable disturbance vectors are augmented as states of the control system, and the Kalman filter is used to estimate the unmeasurable disturbance and its effect on the output. Then, the dynamic matrix control (DMC) algorithm is improved by utilizing the feed-forward compensation control strategy with the disturbance estimated.
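
    A minimal sketch of the disturbance-augmentation idea described above follows: the unmeasured disturbance is appended to the state vector and estimated jointly with a Kalman filter, assuming a random-walk disturbance model and placeholder system matrices.

```python
# Sketch of disturbance estimation with an augmented-state Kalman filter.
# The plant matrices, noise covariances, and the way the disturbance enters the
# state (Bd) are placeholders, not taken from the paper.
import numpy as np

A = np.array([[0.9]]); B = np.array([[0.1]]); C = np.array([[1.0]])
Bd = np.array([[0.1]])                      # assumed disturbance input matrix

# Augmented model: x_aug = [x; d], with d modelled as a random walk (constant).
A_aug = np.block([[A, Bd], [np.zeros((1, 1)), np.eye(1)]])
B_aug = np.vstack([B, np.zeros((1, 1))])
C_aug = np.hstack([C, np.zeros((1, 1))])

Q = np.eye(2) * 1e-3                        # process noise covariance (assumed)
R = np.eye(1) * 1e-2                        # measurement noise covariance (assumed)

def kalman_step(x_hat, P, u, y):
    """One predict/update step; x_hat[1, 0] is the current disturbance estimate."""
    x_pred = A_aug @ x_hat + B_aug @ u
    P_pred = A_aug @ P @ A_aug.T + Q
    K = P_pred @ C_aug.T @ np.linalg.inv(C_aug @ P_pred @ C_aug.T + R)
    x_new = x_pred + K @ (y - C_aug @ x_pred)
    P_new = (np.eye(2) - K @ C_aug) @ P_pred
    return x_new, P_new

x_hat, P = np.zeros((2, 1)), np.eye(2)
x_hat, P = kalman_step(x_hat, P, u=np.array([[0.5]]), y=np.array([[1.2]]))
print("estimated disturbance:", x_hat[1, 0])
```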

  15. An analytical method for free vibration analysis of functionally graded beams with edge cracks

    Science.gov (United States)

    Wei, Dong; Liu, Yinghua; Xiang, Zhihai

    2012-03-01

    In this paper, an analytical method is proposed for solving the free vibration of cracked functionally graded material (FGM) beams with axial loading, rotary inertia and shear deformation. The governing differential equations of motion for an FGM beam are established and the corresponding solutions are found first. The discontinuity of rotation caused by the cracks is simulated by means of the rotational spring model. Based on the transfer matrix method, then the recurrence formula is developed to get the eigenvalue equations of free vibration of FGM beams. The main advantage of the proposed method is that the eigenvalue equation for vibrating beams with an arbitrary number of cracks can be conveniently determined from a third-order determinant. Due to the decrease in the determinant order as compared with previous methods, the developed method is simpler and more convenient to analytically solve the free vibration problem of cracked FGM beams. Moreover, free vibration analyses of the Euler-Bernoulli and Timoshenko beams with any number of cracks can be conducted using the unified procedure based on the developed method. These advantages of the proposed procedure would be more remarkable as the increase of the number of cracks. A comprehensive analysis is conducted to investigate the influences of the location and total number of cracks, material properties, axial load, inertia and end supports on the natural frequencies and vibration mode shapes of FGM beams. The present work may be useful for the design and control of damaged structures.

  16. A Costing Analysis for Decision Making Grid Model in Failure-Based Maintenance

    Directory of Open Access Journals (Sweden)

    Burhanuddin M. A.

    2011-01-01

    Full Text Available Background. In the current economic downturn, industries have to maintain good control of production cost to preserve their profit margins. The maintenance department, as an imperative unit in industry, should collect all maintenance data, process the information instantaneously, and subsequently transform it into a useful decision, then act on the chosen alternative to reduce production cost. The Decision Making Grid model is used to identify strategies for maintenance decisions. However, the model has a limitation, as it considers only two factors, namely downtime and frequency of failures. In this study we consider a third factor, cost, for failure-based maintenance. The objective of this paper is to introduce formulae to estimate maintenance cost. Methods. Fishbone analysis conducted with the Ishikawa model and the Decision Making Grid method are used in this study to reveal some underlying risk factors that delay failure-based maintenance. The goal of the study is to estimate this risk factor, that is, repair cost, and to fit it into the Decision Making Grid model. The Decision Making Grid model considers two variables, frequency of failure and downtime, in the analysis. This paper introduces a third variable, repair cost, for the Decision Making Grid model. This approach gives a better result in categorizing the machines, reducing cost, and boosting the earnings of the manufacturing plant. Results. We collected data from one of the food processing factories in Malaysia. From our empirical results, Machine C, Machine D, Machine F, and Machine I must be in the Decision Making Grid model even though their frequency of failures and downtime are less than those of Machine B and Machine N, based on the costing analysis. The case study and experimental results show that the cost analysis in the Decision Making Grid model gives more promising strategies in failure-based maintenance. Conclusions. The improvement of the Decision Making Grid model for decision analysis with costing analysis is our contribution in this paper for
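
    The sketch below illustrates, with placeholder numbers, how an estimated repair cost can flag machines for the Decision Making Grid alongside failure frequency and downtime; the threshold, data, and machine names are hypothetical and do not reproduce the paper's formulae or results.

```python
# Hypothetical sketch: flag machines for the Decision Making Grid using a repair-cost
# criterion in addition to failure frequency and downtime.  All numbers are placeholders.
machines = {
    # name: (failures per month, downtime hours per month, repair cost per failure)
    "Machine B": (12, 30, 150.0),
    "Machine C": (6, 18, 900.0),
    "Machine D": (5, 15, 850.0),
    "Machine N": (11, 28, 120.0),
}

COST_THRESHOLD = 3000.0   # placeholder monthly repair-cost budget

def monthly_repair_cost(failures_per_month, cost_per_failure):
    return failures_per_month * cost_per_failure

# Machines exceeding the cost threshold enter the grid analysis even if their
# frequency and downtime alone would not place them there.
flagged = sorted(name for name, (f, dt, c) in machines.items()
                 if monthly_repair_cost(f, c) > COST_THRESHOLD)
print(flagged)
```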

  17. Free Vibration Analysis of Fiber Metal Laminate Annular Plate by State-Space Based Differential Quadrature Method

    Directory of Open Access Journals (Sweden)

    G. H. Rahimi

    2014-01-01

    Full Text Available A three-dimensional elasticity theory by means of a state-space based differential quadrature method is presented for free vibration analysis of fiber metal laminate annular plate. The kinds of composite material and metal layers are considered to be S2-glass and aluminum, respectively. A semianalytical approach which uses state-space in the thickness and differential quadrature in the radial direction is implemented for evaluating the nondimensional natural frequencies of the annular plates. The influences of changes in boundary condition, plate thickness, and lay-up direction on the natural frequencies are studied. A comparison is also made with the numerical results reported by ABAQUS software which shows an excellent agreement.

  18. Hand-to-Hand Model for Bioelectrical Impedance Analysis to Estimate Fat Free Mass in a Healthy Population

    Directory of Open Access Journals (Sweden)

    Hsueh-Kuan Lu

    2016-10-01

    Full Text Available This study aimed to establish a hand-to-hand (HH) model for bioelectrical impedance analysis (BIA) fat free mass (FFM) estimation by comparing with a standing position hand-to-foot (HF) BIA model and dual energy X-ray absorptiometry (DXA); we also verified the reliability of the newly developed model. A total of 704 healthy Chinese individuals (403 men and 301 women) participated. FFM (FFMDXA) reference variables were measured using DXA and segmental BIA. Further, regression analysis, Bland–Altman plots, and cross-validation (2/3 participants as the modeling group, 1/3 as the validation group; three turns were repeated for validation grouping) were conducted to compare tests of agreement with FFMDXA reference variables. In male participants, the hand-to-hand BIA model estimation equation was calculated as follows: FFMmHH = 0.537 h2/ZHH − 0.126 year + 0.217 weight + 18.235 (r2 = 0.919, standard estimate of error (SEE) = 2.164 kg, n = 269). The mean validated correlation coefficients and limits of agreement (LOAs) of the Bland–Altman analysis of the calculated values for FFMmHH and FFMDXA were 0.958 and −4.369–4.343 kg, respectively, for hand-to-foot BIA model measurements for men; the FFM (FFMmHF) and FFMDXA were 0.958 and −4.356–4.375 kg, respectively. The hand-to-hand BIA model estimating equation for female participants was FFMFHH = 0.615 h2/ZHH − 0.144 year + 0.132 weight + 16.507 (r2 = 0.870, SEE = 1.884 kg, n = 201); the three mean validated correlation coefficient and LOA for the hand-to-foot BIA model measurements for female participants (FFMFHH and FFMDXA) were 0.929 and −3.880–3.886 kg, respectively. The FFMHF and FFMDXA were 0.942 and −3.511–3.489 kg, respectively. The results of both hand-to-hand and hand-to-foot BIA models demonstrated similar reliability, and the hand-to-hand BIA models are practical for assessing FFM.

  19. Hand-to-Hand Model for Bioelectrical Impedance Analysis to Estimate Fat Free Mass in a Healthy Population.

    Science.gov (United States)

    Lu, Hsueh-Kuan; Chiang, Li-Ming; Chen, Yu-Yawn; Chuang, Chih-Lin; Chen, Kuen-Tsann; Dwyer, Gregory B; Hsu, Ying-Lin; Chen, Chun-Hao; Hsieh, Kuen-Chang

    2016-10-21

    This study aimed to establish a hand-to-hand (HH) model for bioelectrical impedance analysis (BIA) fat free mass (FFM) estimation by comparing with a standing position hand-to-foot (HF) BIA model and dual energy X-ray absorptiometry (DXA); we also verified the reliability of the newly developed model. A total of 704 healthy Chinese individuals (403 men and 301 women) participated. FFM (FFM DXA ) reference variables were measured using DXA and segmental BIA. Further, regression analysis, Bland-Altman plots, and cross-validation (2/3 participants as the modeling group, 1/3 as the validation group; three turns were repeated for validation grouping) were conducted to compare tests of agreement with FFM DXA reference variables. In male participants, the hand-to-hand BIA model estimation equation was calculated as follows: FFM m HH = 0.537 h²/Z HH - 0.126 year + 0.217 weight + 18.235 ( r ² = 0.919, standard estimate of error (SEE) = 2.164 kg, n = 269). The mean validated correlation coefficients and limits of agreement (LOAs) of the Bland-Altman analysis of the calculated values for FFM m HH and FFM DXA were 0.958 and -4.369-4.343 kg, respectively, for hand-to-foot BIA model measurements for men; the FFM (FFM m HF ) and FFM DXA were 0.958 and -4.356-4.375 kg, respectively. The hand-to-hand BIA model estimating equation for female participants was FFM F HH = 0.615 h²/Z HH - 0.144 year + 0.132 weight + 16.507 ( r ² = 0.870, SEE = 1.884 kg, n = 201); the three mean validated correlation coefficient and LOA for the hand-to-foot BIA model measurements for female participants (FFM F HH and FFM DXA ) were 0.929 and -3.880-3.886 kg, respectively. The FFM HF and FFM DXA were 0.942 and -3.511-3.489 kg, respectively. The results of both hand-to-hand and hand-to-foot BIA models demonstrated similar reliability, and the hand-to-hand BIA models are practical for assessing FFM.
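
    The estimation equations quoted above translate directly into code; the sketch below transcribes them, assuming height in centimetres, impedance Z HH in ohms, age in years, and weight in kilograms (units the abstract does not restate).

```python
# Transcription of the hand-to-hand FFM estimation equations quoted in the abstract.
# Units (cm, ohm, years, kg) are assumptions; the coefficients are copied verbatim.
def ffm_hand_to_hand(height_cm, z_hh_ohm, age_years, weight_kg, sex):
    h2_over_z = height_cm ** 2 / z_hh_ohm
    if sex == "male":
        return 0.537 * h2_over_z - 0.126 * age_years + 0.217 * weight_kg + 18.235
    if sex == "female":
        return 0.615 * h2_over_z - 0.144 * age_years + 0.132 * weight_kg + 16.507
    raise ValueError("sex must be 'male' or 'female'")

print(ffm_hand_to_hand(height_cm=175, z_hh_ohm=550, age_years=30, weight_kg=70, sex="male"))
```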

  20. Applying quantitative adiposity feature analysis models to predict benefit of bevacizumab-based chemotherapy in ovarian cancer patients

    Science.gov (United States)

    Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; More, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin

    2016-03-01

    How to rationally identify epithelial ovarian cancer (EOC) patients who will benefit from bevacizumab or other antiangiogenic therapies is a critical issue in EOC treatment. The motivation of this study is to quantitatively measure adiposity features from CT images and investigate the feasibility of predicting the potential benefit for EOC patients with or without bevacizumab-based chemotherapy using multivariate statistical models built on quantitative adiposity image features. A dataset involving CT images from 59 advanced EOC patients was included. Among them, 32 patients received maintenance bevacizumab after primary chemotherapy and the remaining 27 patients did not. We developed a computer-aided detection (CAD) scheme to automatically segment subcutaneous fat areas (SFA) and visceral fat areas (VFA) and then extracted 7 adiposity-related quantitative features. Three multivariate data analysis models (linear regression, logistic regression and Cox proportional hazards regression) were applied to investigate the potential association between the model-generated prediction results and the patients' progression-free survival (PFS) and overall survival (OS). The results show that, for all 3 statistical models, a statistically significant association was detected between the model-generated results and both clinical outcomes in the group of patients receiving maintenance bevacizumab (p<0.01), while there was no significant association for either PFS or OS in the group of patients not receiving maintenance bevacizumab. Therefore, this study demonstrated the feasibility of using statistical prediction models based on quantitative adiposity-related CT image features to generate a new clinical marker and predict the clinical outcome of EOC patients receiving maintenance bevacizumab-based chemotherapy.
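
    As an illustration of the Cox proportional-hazards component of such an analysis, the sketch below uses the lifelines package with placeholder columns and data; it is not the study's actual model or dataset.

```python
# Hedged sketch of a Cox proportional-hazards fit on adiposity features vs. PFS.
# Data and column names are placeholders; a small ridge penalty aids convergence
# on this tiny toy dataset.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "pfs_months":   [10.2, 22.5, 7.9, 30.1, 14.6, 18.3, 9.4, 25.7],
    "progressed":   [1, 0, 1, 0, 1, 1, 1, 0],          # event indicator
    "visceral_fat": [140.0, 90.0, 210.0, 75.0, 160.0, 95.0, 185.0, 80.0],
    "subcut_fat":   [220.0, 250.0, 180.0, 300.0, 190.0, 260.0, 210.0, 240.0],
})

cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="pfs_months", event_col="progressed")
cph.print_summary()
```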

  1. On the Performance Analysis of Free-Space Optical Links under Generalized Turbulence and Misalignment Models

    KAUST Repository

    AlQuwaiee, Hessa

    2016-11-01

    One of the potential solutions to the radio frequency (RF) spectrum scarcity problem is optical wireless communications (OWC), which utilizes the unlicensed optical spectrum. Long-range outdoor OWC is usually referred to in the literature as free-space optical (FSO) communications. Unlike RF systems, FSO is immune to interference and multi-path fading. Also, the deployment of FSO systems is flexible and much faster than optical fibers. These attractive features make FSO applicable for broadband wireless transmission such as optical fiber backup, metropolitan area networks, and last mile access. Although FSO communication is a promising technology, it is negatively affected by two physical phenomena, namely, scintillation due to atmospheric turbulence and pointing errors. These two critical issues have prompted intensive research in the last decade. To quantify the effect of these two factors on FSO system performance, we need effective mathematical models. In this work, we propose and study a generalized pointing error model based on the Beckmann distribution. Then, we aim to generalize the FSO channel model to span all turbulence conditions from weak to strong while taking pointing errors into consideration. Since scintillation in FSO is analogous to the fading phenomena in RF, diversity has also been proposed to overcome the effect of irradiance fluctuations. Thus, several combining techniques for not necessarily independent dual-branch free-space optical links were investigated over both weak and strong turbulence channels in the presence of pointing errors. On another front, improving the performance, enhancing the capacity, and reducing the delay of the communication link has been the motivation for newly developed schemes, especially for backhauling. Recently, there has been a growing interest in practical systems that integrate RF and FSO technologies to solve the last mile bottleneck. As such, we also study in this thesis an asymmetric RF-FSO dual-hop relay
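
    The Beckmann pointing-error model mentioned above describes the radial displacement of a beam whose horizontal and vertical jitter are independent Gaussians with possibly non-zero means and unequal variances; the sketch below samples it numerically with placeholder parameters.

```python
# Numerical sketch of the generalized (Beckmann) pointing-error model: the radial
# displacement is r = sqrt(x^2 + y^2) with independent Gaussian jitter in each axis.
# Means and standard deviations below are placeholders.
import numpy as np

rng = np.random.default_rng(0)
mu_x, mu_y = 0.1, 0.05          # boresight displacement in each axis (assumed)
sigma_x, sigma_y = 0.3, 0.2     # jitter standard deviations (assumed)

x = rng.normal(mu_x, sigma_x, 100_000)
y = rng.normal(mu_y, sigma_y, 100_000)
r = np.hypot(x, y)              # Beckmann-distributed radial pointing error

print("mean radial displacement:", r.mean())
```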

  2. Rotor design optimization using a free wake analysis

    Science.gov (United States)

    Quackenbush, Todd R.; Boschitsch, Alexander H.; Wachspress, Daniel A.; Chua, Kiat

    1993-01-01

    The aim of this effort was to develop a comprehensive performance optimization capability for tiltrotor and helicopter blades. The analysis incorporates the validated EHPIC (Evaluation of Hover Performance using Influence Coefficients) model of helicopter rotor aerodynamics within a general linear/quadratic programming algorithm that allows optimization using a variety of objective functions involving the performance. The resulting computer code, EHPIC/HERO (HElicopter Rotor Optimization), improves upon several features of the previous EHPIC performance model and allows optimization utilizing a wide spectrum of design variables, including twist, chord, anhedral, and sweep. The new analysis supports optimization of a variety of objective functions, including weighted measures of rotor thrust, power, and propulsive efficiency. The fundamental strength of the approach is that an efficient search for improved versions of the baseline design can be carried out while retaining the demonstrated accuracy inherent in the EHPIC free wake/vortex lattice performance analysis. Sample problems are described that demonstrate the success of this approach for several representative rotor configurations in hover and axial flight. Features that were introduced to convert earlier demonstration versions of this analysis into a generally applicable tool for researchers and designers are also discussed.

  3. A distribution-free newsvendor model with balking penalty and random yield

    Directory of Open Access Journals (Sweden)

    Chongfeng Lan

    2015-05-01

    Full Text Available Purpose: The purpose of this paper is to extend the analysis of the distribution-free newsvendor problem in an environment of customer balking, which occurs when customers are reluctant to buy a product if its available inventory falls below a threshold level. Design/methodology/approach: We provide a new tradeoff tool as a replacement for the traditional one to weigh the holding cost against the goodwill cost segment: in addition to the shortage penalty, we also introduce a balking penalty. Furthermore, we extend our model to the case of random yield. Findings: A model is presented for determining both an optimal order quantity and a lower bound on the profit under the worst possible distribution of the demand. We also study the effects of the shortage penalty and the balking penalty on the optimal order quantity, which have been largely bypassed in existing distribution-free single-period models with balking. Numerical examples are presented to illustrate the results. Originality/value: The incorporation of a balking penalty and random yield represents an important improvement in inventory policy performance for the distribution-free newsvendor problem when customer balking occurs and the distributional form of demand is unknown.
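
    For context, the classical distribution-free (Scarf) order quantity that this line of work extends uses only the demand mean and standard deviation; the sketch below shows that baseline rule and deliberately omits the paper's balking penalty and random-yield extensions.

```python
# Classic distribution-free newsvendor (Scarf) order quantity, shown as the baseline
# this paper builds on.  It does not include balking penalties or random yield.
import math

def scarf_order_quantity(mu, sigma, unit_cost, price):
    """Worst-case-optimal order quantity when only the demand mean and std are known."""
    ratio = (price - unit_cost) / unit_cost
    return mu + (sigma / 2.0) * (math.sqrt(ratio) - 1.0 / math.sqrt(ratio))

print(scarf_order_quantity(mu=100.0, sigma=30.0, unit_cost=4.0, price=10.0))
```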

  4. Data from quantitative label free proteomics analysis of rat spleen

    Directory of Open Access Journals (Sweden)

    Khadar Dudekula

    2016-09-01

    Full Text Available The dataset presented in this work has been obtained using a label-free quantitative proteomic analysis of rat spleen. A robust method for the extraction of proteins from rat spleen tissue and LC-MS-MS analysis was developed using a urea- and SDS-based buffer. Different fractionation methods were compared. A total of 3484 different proteins were identified from the pool of all experiments run in this study (a total of 2460 proteins with at least two peptides). A total of 1822 proteins were identified from nine non-fractionated pulse gels, while 2288 proteins and 2864 proteins were identified by SDS-PAGE fractionation into three and five fractions, respectively. The proteomics data are deposited in the ProteomeXchange Consortium via PRIDE (PXD003520); Progenesis and MaxQuant outputs are presented in the supporting information. The generated list of proteins under the different fractionation regimes allows the nature of the identified proteins to be assessed, the variability in the quantitative analysis associated with the different sampling strategies to be evaluated, and a proper number of replicates for future quantitative analysis to be defined. Keywords: Spleen, Rat, Protein extraction, Label-free quantitative proteomics

  5. Model-based security analysis of the German health card architecture.

    Science.gov (United States)

    Jürjens, J; Rumm, R

    2008-01-01

    Health-care information systems are particularly security-critical. In order to make these applications secure, the security analysis has to be an integral part of the system design and IT management process for such systems. This work presents the experiences and results from the security analysis of the system architecture of the German Health Card, by making use of an approach to model-based security engineering that is based on the UML extension UMLsec. The focus lies on the security mechanisms and security policies of the smart-card-based architecture which were analyzed using the UMLsec method and tools. Main results of the paper include a report on the employment of the UMLsec method in an industrial health information systems context as well as indications of its benefits and limitations. In particular, two potential security weaknesses were detected and countermeasures discussed. The results indicate that it can be feasible to apply a model-based security analysis using UMLsec to an industrial health information system like the German Health Card architecture, and that doing so can have concrete benefits (such as discovering potential weaknesses, and an increased confidence that no further vulnerabilities of the kind that were considered are present).

  6. Hybrid System Modeling and Full Cycle Operation Analysis of a Two-Stroke Free-Piston Linear Generator

    Directory of Open Access Journals (Sweden)

    Peng Sun

    2017-02-01

    Full Text Available Free-piston linear generators (FPLGs have attractive application prospects for hybrid electric vehicles (HEVs owing to their high-efficiency, low-emissions and multi-fuel flexibility. In order to achieve long-term stable operation, the hybrid system design and full-cycle operation strategy are essential factors that should be considered. A 25 kW FPLG consisting of an internal combustion engine (ICE, a linear electric machine (LEM and a gas spring (GS is designed. To improve the power density and generating efficiency, the LEM is assembled with two modular flat-type double-sided PM LEM units, which sandwich a common moving-magnet plate supported by a middle keel beam and bilateral slide guide rails to enhance the stiffness of the moving plate. For the convenience of operation processes analysis, the coupling hybrid system is modeled mathematically and a full cycle simulation model is established. Top-level systemic control strategies including the starting, stable operating, fault recovering and stopping strategies are analyzed and discussed. The analysis results validate that the system can run stably and robustly with the proposed full cycle operation strategy. The effective electric output power can reach 26.36 kW with an overall system efficiency of 36.32%.

  7. A projection-based model reduction strategy for the wave and vibration analysis of rotating periodic structures

    Science.gov (United States)

    Beli, D.; Mencik, J.-M.; Silva, P. B.; Arruda, J. R. F.

    2018-05-01

    The wave finite element method has proved to be an efficient and accurate numerical tool to perform the free and forced vibration analysis of linear reciprocal periodic structures, i.e. those conforming to symmetrical wave fields. In this paper, its use is extended to the analysis of rotating periodic structures, which, due to the gyroscopic effect, exhibit asymmetric wave propagation. A projection-based strategy which uses reduced symplectic wave basis is employed, which provides a well-conditioned eigenproblem for computing waves in rotating periodic structures. The proposed formulation is applied to the free and forced response analysis of homogeneous, multi-layered and phononic ring structures. In all test cases, the following features are highlighted: well-conditioned dispersion diagrams, good accuracy, and low computational time. The proposed strategy is particularly convenient in the simulation of rotating structures when parametric analysis for several rotational speeds is usually required, e.g. for calculating Campbell diagrams. This provides an efficient and flexible framework for the analysis of rotordynamic problems.

  8. Model-based schedulability analysis of safety critical hard real-time Java programs

    DEFF Research Database (Denmark)

    Bøgholm, Thomas; Kragh-Hansen, Henrik; Olsen, Petur

    2008-01-01

    verifiable by the Uppaal model checker [23]. Schedulability analysis is reduced to a simple reachability question, checking for deadlock freedom. Model-based schedulability analysis has been developed by Amnell et al. [2], but has so far only been applied to high level specifications, not actual...

  9. Fusing Quantitative Requirements Analysis with Model-based Systems Engineering

    Science.gov (United States)

    Cornford, Steven L.; Feather, Martin S.; Heron, Vance A.; Jenkins, J. Steven

    2006-01-01

    A vision is presented for fusing quantitative requirements analysis with model-based systems engineering. This vision draws upon and combines emergent themes in the engineering milieu. "Requirements engineering" provides means to explicitly represent requirements (both functional and non-functional) as constraints and preferences on acceptable solutions, and emphasizes early-lifecycle review, analysis and verification of design and development plans. "Design by shopping" emphasizes revealing the space of options available from which to choose (without presuming that all selection criteria have previously been elicited), and provides means to make understandable the range of choices and their ramifications. "Model-based engineering" emphasizes the goal of utilizing a formal representation of all aspects of system design, from development through operations, and provides powerful tool suites that support the practical application of these principles. A first step prototype towards this vision is described, embodying the key capabilities. Illustrations, implications, further challenges and opportunities are outlined.

  10. Model-Based Dependability Analysis of Physical Systems with Modelica

    Directory of Open Access Journals (Sweden)

    Andrea Tundis

    2017-01-01

    Full Text Available Modelica is an innovative, equation-based, and acausal language that allows modeling of complex physical systems, which are made of mechanical, electrical, and electrotechnical components, and evaluating their design through simulation techniques. Unfortunately, the increasing complexity and accuracy of such physical systems require new, more powerful, and flexible tools and techniques for evaluating important system properties, in particular dependability properties such as reliability, safety, and maintainability. In this context, the paper describes some extensions of the Modelica language to support the modeling of system requirements and their relationships. Such extensions enable requirement verification analysis through native constructs in the Modelica language. Furthermore, they allow exporting a Modelica-based system design as a Bayesian Network in order to analyze its dependability by employing a probabilistic approach. The proposal is exemplified through a case study concerning the dependability analysis of a Tank System.

  11. Integration of gel-based and gel-free proteomic data for functional analysis of proteins through Soybean Proteome Database

    KAUST Repository

    Komatsu, Setsuko

    2017-05-10

    The Soybean Proteome Database (SPD) stores data on soybean proteins obtained with gel-based and gel-free proteomic techniques. The database was constructed to provide information on proteins for functional analyses. The majority of the data is focused on soybean (Glycine max ‘Enrei’). The growth and yield of soybean are strongly affected by environmental stresses such as flooding. The database was originally constructed using data on soybean proteins separated by two-dimensional polyacrylamide gel electrophoresis, which is a gel-based proteomic technique. Since 2015, the database has been expanded to incorporate data obtained by label-free mass spectrometry-based quantitative proteomics, which is a gel-free proteomic technique. Here, the portions of the database consisting of gel-free proteomic data are described. The gel-free proteomic database contains 39,212 proteins identified in 63 sample sets, such as temporal and organ-specific samples of soybean plants grown under flooding stress or non-stressed conditions. In addition, data on organellar proteins identified in mitochondria, nuclei, and endoplasmic reticulum are stored. Furthermore, the database integrates multiple omics data such as genomics, transcriptomics, metabolomics, and proteomics. The SPD database is accessible at http://proteome.dc.affrc.go.jp/Soybean/. Biological significance: The Soybean Proteome Database stores data obtained from both gel-based and gel-free proteomic techniques. The gel-free proteomic database comprises 39,212 proteins identified in 63 sample sets, such as different organs of soybean plants grown under flooding stress or non-stressed conditions in a time-dependent manner. In addition, organellar proteins identified in mitochondria, nuclei, and endoplasmic reticulum are stored in the gel-free proteomics database. A total of 44,704 proteins, including 5490 proteins identified using a gel-based proteomic technique, are stored in the SPD. It accounts for approximately 80% of all

  12. Integration of gel-based and gel-free proteomic data for functional analysis of proteins through Soybean Proteome Database.

    Science.gov (United States)

    Komatsu, Setsuko; Wang, Xin; Yin, Xiaojian; Nanjo, Yohei; Ohyanagi, Hajime; Sakata, Katsumi

    2017-06-23

    The Soybean Proteome Database (SPD) stores data on soybean proteins obtained with gel-based and gel-free proteomic techniques. The database was constructed to provide information on proteins for functional analyses. The majority of the data is focused on soybean (Glycine max 'Enrei'). The growth and yield of soybean are strongly affected by environmental stresses such as flooding. The database was originally constructed using data on soybean proteins separated by two-dimensional polyacrylamide gel electrophoresis, which is a gel-based proteomic technique. Since 2015, the database has been expanded to incorporate data obtained by label-free mass spectrometry-based quantitative proteomics, which is a gel-free proteomic technique. Here, the portions of the database consisting of gel-free proteomic data are described. The gel-free proteomic database contains 39,212 proteins identified in 63 sample sets, such as temporal and organ-specific samples of soybean plants grown under flooding stress or non-stressed conditions. In addition, data on organellar proteins identified in mitochondria, nuclei, and endoplasmic reticulum are stored. Furthermore, the database integrates multiple omics data such as genomics, transcriptomics, metabolomics, and proteomics. The SPD database is accessible at http://proteome.dc.affrc.go.jp/Soybean/. The Soybean Proteome Database stores data obtained from both gel-based and gel-free proteomic techniques. The gel-free proteomic database comprises 39,212 proteins identified in 63 sample sets, such as different organs of soybean plants grown under flooding stress or non-stressed conditions in a time-dependent manner. In addition, organellar proteins identified in mitochondria, nuclei, and endoplasmic reticulum are stored in the gel-free proteomics database. A total of 44,704 proteins, including 5490 proteins identified using a gel-based proteomic technique, are stored in the SPD. It accounts for approximately 80% of all predicted proteins from

  13. Integration of gel-based and gel-free proteomic data for functional analysis of proteins through Soybean Proteome Database

    KAUST Repository

    Komatsu, Setsuko; Wang, Xin; Yin, Xiaojian; Nanjo, Yohei; Ohyanagi, Hajime; Sakata, Katsumi

    2017-01-01

    The Soybean Proteome Database (SPD) stores data on soybean proteins obtained with gel-based and gel-free proteomic techniques. The database was constructed to provide information on proteins for functional analyses. The majority of the data is focused on soybean (Glycine max ‘Enrei’). The growth and yield of soybean are strongly affected by environmental stresses such as flooding. The database was originally constructed using data on soybean proteins separated by two-dimensional polyacrylamide gel electrophoresis, which is a gel-based proteomic technique. Since 2015, the database has been expanded to incorporate data obtained by label-free mass spectrometry-based quantitative proteomics, which is a gel-free proteomic technique. Here, the portions of the database consisting of gel-free proteomic data are described. The gel-free proteomic database contains 39,212 proteins identified in 63 sample sets, such as temporal and organ-specific samples of soybean plants grown under flooding stress or non-stressed conditions. In addition, data on organellar proteins identified in mitochondria, nuclei, and endoplasmic reticulum are stored. Furthermore, the database integrates multiple omics data such as genomics, transcriptomics, metabolomics, and proteomics. The SPD database is accessible at http://proteome.dc.affrc.go.jp/Soybean/. Biological significance: The Soybean Proteome Database stores data obtained from both gel-based and gel-free proteomic techniques. The gel-free proteomic database comprises 39,212 proteins identified in 63 sample sets, such as different organs of soybean plants grown under flooding stress or non-stressed conditions in a time-dependent manner. In addition, organellar proteins identified in mitochondria, nuclei, and endoplasmic reticulum are stored in the gel-free proteomics database. A total of 44,704 proteins, including 5490 proteins identified using a gel-based proteomic technique, are stored in the SPD. It accounts for approximately 80% of all

  14. Free vibration analysis of single-walled boron nitride nanotubes based on a computational mechanics framework

    Science.gov (United States)

    Yan, J. W.; Tong, L. H.; Xiang, Ping

    2017-12-01

    Free vibration behaviors of single-walled boron nitride nanotubes are investigated using a computational mechanics approach. The Tersoff-Brenner potential is used to describe the atomic interaction between boron and nitrogen atoms. The higher-order Cauchy-Born rule is employed to establish the constitutive relationship for single-walled boron nitride nanotubes on the basis of higher-order gradient continuum theory. It bridges the gap between the nanoscale lattice structure and a continuum body. A mesh-free modeling framework is constructed, using the moving Kriging interpolation, which automatically satisfies the higher-order continuity, to implement numerical simulation in order to match the higher-order constitutive model. In comparison with conventional atomistic simulation methods, the established atomistic-continuum multi-scale approach possesses advantages in tackling atomic structures with high accuracy and high efficiency. Free vibration characteristics of single-walled boron nitride nanotubes with different boundary conditions, tube chiralities, lengths and radii are examined in case studies. In this research, it is pointed out that a critical radius exists for the evaluation of fundamental vibration frequencies of boron nitride nanotubes; opposite trends can be observed prior to and beyond the critical radius. Simulation results are presented and discussed.

  15. Modeling and Grid impedance Variation Analysis of Parallel Connected Grid Connected Inverter based on Impedance Based Harmonic Analysis

    DEFF Research Database (Denmark)

    Kwon, JunBum; Wang, Xiongfei; Bak, Claus Leth

    2014-01-01

    This paper addresses the harmonic compensation error problem that arises with parallel connected inverters under the same grid interface conditions by means of impedance-based analysis and modeling. Unlike the single grid-connected inverter case, it is found that multiple parallel connected inverters and the grid impedance can influence each other if they each have a harmonic compensation function. The analysis method proposed in this paper is based on the relationship between the overall output impedance and input impedance of the parallel connected inverters, where a controller gain design method, which can...

  16. A comprehensive probabilistic analysis model of oil pipelines network based on Bayesian network

    Science.gov (United States)

    Zhang, C.; Qin, T. X.; Jiang, B.; Huang, C.

    2018-02-01

    The oil pipeline network is one of the most important facilities for energy transportation, but accidents on an oil pipeline network may result in serious disasters. Analysis models for these accidents have been established mainly based on three methods: event-tree analysis, accident simulation and Bayesian networks. Among these methods, the Bayesian network is suitable for probabilistic analysis, but not all the important influencing factors have been considered and a deployment rule for the factors has not been established. This paper proposes a probabilistic analysis model of the oil pipeline network based on a Bayesian network. Most of the important influencing factors, including the key environmental conditions and emergency response, are considered in this model. Moreover, the paper also introduces a deployment rule for these factors. The model can be used in probabilistic analysis and sensitivity analysis of oil pipeline network accidents.
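
    As a rough, hedged illustration of how a discrete Bayesian network supports the probabilistic and sensitivity analysis described above, the following Python sketch enumerates a tiny hypothetical pipeline model; the nodes, probabilities and noisy-OR style leak model are invented for illustration and are not the factors or deployment rule of the paper.

        # Toy Bayesian-network enumeration for a pipeline accident scenario.
        # All node names and probabilities below are illustrative assumptions.
        import itertools

        p_corrosion = 0.05      # P(corrosion damage present)
        p_third_party = 0.02    # P(third-party damage present)
        p_fast_response = 0.7   # P(emergency response is effective)

        def p_leak(corrosion, third_party):
            # Conditional probability of a leak given the two causes.
            base = 0.001
            if corrosion:
                base = 0.30
            if third_party:
                base = max(base, 0.50)
            return base

        def p_accident(leak, fast_response):
            # Probability that a leak escalates to a serious accident.
            if not leak:
                return 0.0
            return 0.05 if fast_response else 0.40

        # Marginal P(serious accident) by full enumeration of the discrete states.
        p_acc = 0.0
        for c, t, r, l in itertools.product([0, 1], repeat=4):
            p = (p_corrosion if c else 1 - p_corrosion) \
                * (p_third_party if t else 1 - p_third_party) \
                * (p_fast_response if r else 1 - p_fast_response)
            pl = p_leak(c, t)
            p *= pl if l else 1 - pl
            p_acc += p * p_accident(l, r)

        print(f"P(serious accident) = {p_acc:.5f}")

    Re-running the enumeration while varying, say, p_fast_response gives a crude sensitivity analysis of the accident probability with respect to the emergency-response factor.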

  17. Analysis of non-linear aeroelastic response of a supersonic thick fin with plunging, pinching and flapping free-plays

    Science.gov (United States)

    Firouz-Abadi, R. D.; Alavi, S. M.; Salarieh, H.

    2013-07-01

    The flutter of a 3-D rigid fin with a double-wedge section and free-play in the flapping, plunging and pitching degrees-of-freedom, operating in supersonic and hypersonic flight speed regimes, has been considered. The aerodynamic model is obtained by local application of piston theory behind the shock and expansion analysis, and the structural model is obtained from the Lagrange equations of motion. Such a model provides a fast, accurate algorithm for studying the aeroelastic behavior of the thick supersonic fin in the time domain. The dynamic behavior of the fin is examined over a large number of parameters that characterize the aeroelastic system. Results show that the free-play in the pitching, plunging and flapping degrees-of-freedom has significant effects on the oscillation exhibited by the aeroelastic system in the supersonic/hypersonic flight speed regimes. The simulations also show that the aeroelastic system behavior is greatly affected by some parameters, such as the Mach number, thickness, angle of attack, hinge position and sweep angle.

  18. A Matrix-Free Posterior Ensemble Kalman Filter Implementation Based on a Modified Cholesky Decomposition

    Directory of Open Access Journals (Sweden)

    Elias D. Nino-Ruiz

    2017-07-01

    Full Text Available In this paper, a matrix-free posterior ensemble Kalman filter implementation based on a modified Cholesky decomposition is proposed. The method works as follows: the precision matrix of the background error distribution is estimated based on a modified Cholesky decomposition. The resulting estimator can be expressed in terms of Cholesky factors which can be updated based on a series of rank-one matrices in order to approximate the precision matrix of the analysis distribution. By using this matrix, the posterior ensemble can be built by either sampling from the posterior distribution or using synthetic observations. Furthermore, the computational effort of the proposed method is linear with regard to the model dimension and the number of observed components from the model domain. Experimental tests are performed making use of the Lorenz-96 model. The results reveal that the accuracy of the proposed implementation in terms of root-mean-square error is similar to, and in some cases better than, that of a well-known ensemble Kalman filter (EnKF) implementation: the local ensemble transform Kalman filter. In addition, the results are comparable to those obtained by the EnKF with large ensemble sizes.
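
    As a minimal sketch of the modified Cholesky precision estimate that such a matrix-free filter builds on, the Python fragment below regresses each state variable on a few localized predecessors and assembles the precision as T^T D^{-1} T; the localization radius, ensemble size and variable layout are assumptions, and the rank-one analysis-update step of the paper is omitted.

        # Modified-Cholesky estimate of a background precision matrix from an ensemble.
        import numpy as np

        def modified_cholesky_precision(X, radius=2):
            """X: n x m matrix of ensemble deviations (n variables, m members).
            Returns T^T D^{-1} T, with T unit lower triangular and D diagonal."""
            n, m = X.shape
            T = np.eye(n)
            d = np.empty(n)
            d[0] = np.var(X[0], ddof=1)
            for i in range(1, n):
                pred = list(range(max(0, i - radius), i))   # localized predecessors
                A = X[pred].T                               # m x p design matrix
                beta, *_ = np.linalg.lstsq(A, X[i], rcond=None)
                resid = X[i] - A @ beta
                d[i] = np.var(resid, ddof=1)
                T[i, pred] = -beta
            return T.T @ np.diag(1.0 / d) @ T

        rng = np.random.default_rng(0)
        ens = rng.standard_normal((40, 20))                 # 40 variables, 20 members
        dev = ens - ens.mean(axis=1, keepdims=True)
        Binv = modified_cholesky_precision(dev, radius=3)
        print(Binv.shape)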

  19. Nonlinear analysis of wiggler-imperfections in free-electron lasers

    Energy Technology Data Exchange (ETDEWEB)

    Freund, H.P. [Naval Research Lab., Washington, DC (United States); Yu, L.H. [Brookhaven National Lab., Upton, NY (United States)

    1995-12-31

    We present an analysis of the effect of wiggler imperfections in FELs using a variety of techniques. Our basic intention is to compare wiggler-averaged nonlinear simulations to determine the effect of various approximations on the estimates of gain degradation due to wiggler imperfections. The fundamental assumption in the wiggler-averaged formulations is that the electrons are described by a random walk model, and an analytic representation of the orbits is made. This is fundamentally different from the approach taken for the non-wiggler-averaged formulation in which the wiggler imperfections are specified at the outset, and the orbits are integrated using a field model that is consistent with the Maxwell equations. It has been conjectured on the basis of prior studies using the non-wiggler-averaged formalism that electrons follow a "meander line" through the wiggler governed by the specific imperfections; hence, the electrons behave more as a ball-in-groove than as a random walk. This conjecture is tested by comparison of the wiggler-averaged and non-wiggler-averaged simulations. In addition, two different wiggler models are employed in the non-wiggler-averaged simulation: one based upon a parabolic pole face wiggler which is not curl and divergence free in the presence of wiggler imperfections, and a second model in which the divergence and z-component of the curl vanish identically. This will gauge the effect of inconsistencies in the wiggler model on the estimation of the effect of the imperfections. Preliminary results indicate that the inconsistency introduced by the non-vanishing curl and divergence results in an overestimation of the effect of wiggler imperfections on the orbit. The wiggler-averaged simulation is based upon the TDA code, and the non-wiggler-averaged simulation is a variant of the ARACHNE and WIGGLIN codes called MEDUSA developed to treat short-wavelength Gauss-Hermite modes.

  20. Formation of model-free motor memories during motor adaptation depends on perturbation schedule.

    Science.gov (United States)

    Orban de Xivry, Jean-Jacques; Lefèvre, Philippe

    2015-04-01

    Motor adaptation to an external perturbation relies on several mechanisms such as model-based, model-free, strategic, or repetition-dependent learning. Depending on the experimental conditions, each of these mechanisms has more or less weight in the final adaptation state. Here we focused on the conditions that lead to the formation of a model-free motor memory (Huang VS, Haith AM, Mazzoni P, Krakauer JW. Neuron 70: 787-801, 2011), i.e., a memory that does not depend on an internal model or on the size or direction of the errors experienced during the learning. The formation of such model-free motor memory was hypothesized to depend on the schedule of the perturbation (Orban de Xivry JJ, Ahmadi-Pajouh MA, Harran MD, Salimpour Y, Shadmehr R. J Neurophysiol 109: 124-136, 2013). Here we built on this observation by directly testing the nature of the motor memory after abrupt or gradual introduction of a visuomotor rotation, in an experimental paradigm where the presence of model-free motor memory can be identified (Huang VS, Haith AM, Mazzoni P, Krakauer JW. Neuron 70: 787-801, 2011). We found that relearning was faster after abrupt than gradual perturbation, which suggests that model-free learning is reduced during gradual adaptation to a visuomotor rotation. In addition, the presence of savings after abrupt introduction of the perturbation but gradual extinction of the motor memory suggests that unexpected errors are necessary to induce a model-free motor memory. Overall, these data support the hypothesis that different perturbation schedules do not lead to a more or less stabilized motor memory but to distinct motor memories with different attributes and neural representations. Copyright © 2015 the American Physiological Society.

  1. Habitat of in vivo transformation influences the levels of free radical scavengers in Clinostomum complanatum: implications for free radical scavenger based vaccines against trematode infections.

    Science.gov (United States)

    Zafar, Atif; Rizvi, Asim; Ahmad, Irshad; Ahmad, Masood

    2014-01-01

    Since free radical scavengers of parasite origin like glutathione-S-transferase and superoxide dismutase are being explored as prospective vaccine targets, availability of these molecules within the parasite infecting different hosts as well as different sites of infection is of considerable importance. Using Clinostomum complanatum as a model helminth parasite, we analysed the effects of habitat of in vivo transformation on free radical scavengers of this trematode parasite. Using three different animal models for in vivo transformation and markedly different sites of infection, progenetic metacercariae of C. complanatum were transformed into adult ovigerous worms. Whole worm homogenates were used to estimate the levels of lipid peroxidation, a marker of oxidative stress, and free radical scavengers. The site of in vivo transformation was found to drastically affect the levels of free radical scavengers in this model trematode parasite. It was observed that oxygen availability at the site of infection probably influences levels of free radical scavengers in trematode parasites. This is the first report showing that habitat of in vivo transformation affects levels of free radical scavengers in trematode parasites. Since free radical scavengers are prospective vaccine targets and parasite infection at ectopic sites is common, we propose that infections at different sites may respond differently to free radical scavenger based vaccines.

  2. Molecular structure based property modeling: Development/ improvement of property models through a systematic property-data-model analysis

    DEFF Research Database (Denmark)

    Hukkerikar, Amol Shivajirao; Sarup, Bent; Sin, Gürkan

    2013-01-01

    models. To make the property-data-model analysis fast and efficient, an approach based on the “molecular structure similarity criteria” to identify molecules (mono-functional, bi-functional, etc.) containing a specified set of structural parameters (that is, groups) is employed. The method has been applied...

  3. Molecular Modeling and MM-PBSA Free Energy Analysis of Endo-1,4-β-Xylanase from Ruminococcus albus 8

    Directory of Open Access Journals (Sweden)

    Dongling Zhan

    2014-09-01

    Full Text Available Endo-1,4-β-xylanase (EC 3.2.1.8) is the enzyme from Ruminococcus albus 8 (R. albus 8) (Xyn10A) and catalyzes the degradation of arabinoxylan, which is a major cell wall non-starch polysaccharide of cereals. The crystallographic structure of Xyn10A is still unknown. For this reason, we report a computer-assisted homology study conducted to build its three-dimensional structure based on the known sequence of amino acids of this enzyme. In this study, the best similarity was found with the Clostridium thermocellum (C. thermocellum) N-terminal endo-1,4-β-d-xylanase 10 b. Following the 100 ns molecular dynamics (MD) simulation, a reliable model was obtained for further studies. Molecular Mechanics/Poisson-Boltzmann Surface Area (MM-PBSA) methods were used for the substrate xylotetraose having the reactive sugar, which was bound in the −1 subsite of Xyn10A in the 4C1 (chair) and 2SO (skew boat) ground state conformations. According to the simulations and free energy analysis, Xyn10A binds the substrate with the −1 sugar in the 2SO conformation 39.27 kcal·mol−1 tighter than the substrate with the sugar in the 4C1 conformation. According to the Xyn10A-2SO Xylotetraose (X4(sb)) interaction energies, the most important subsite for the substrate binding is subsite −1. The results of this study indicate that the substrate is bound in a skew boat conformation with Xyn10A and the −1 sugar subsite proceeds from the 4C1 conformation through 2SO to the transition state. MM-PBSA free energy analysis indicates that Asn187 and Trp344 in subsite −1 may be important residues for substrate binding. Our findings provide fundamental knowledge that may contribute to further enhancement of enzyme performance through molecular engineering.

  4. Free Energy-Based Virtual Screening and Optimization of RNase H Inhibitors of HIV-1 Reverse Transcriptase.

    Science.gov (United States)

    Zhang, Baofeng; D'Erasmo, Michael P; Murelli, Ryan P; Gallicchio, Emilio

    2016-09-30

    We report the results of a binding free energy-based virtual screening campaign of a library of 77 α-hydroxytropolone derivatives against the challenging RNase H active site of the reverse transcriptase (RT) enzyme of human immunodeficiency virus-1. Multiple protonation states, rotamer states, and binding modalities of each compound were individually evaluated. The work involved more than 300 individual absolute alchemical binding free energy parallel molecular dynamics calculations and over 1 million CPU hours on national computing clusters and a local campus computational grid. The thermodynamic and structural measures obtained in this work rationalize a series of characteristics of this system useful for guiding future synthetic and biochemical efforts. The free energy model identified key ligand-dependent entropic and conformational reorganization processes difficult to capture using standard docking and scoring approaches. Binding free energy-based optimization of the lead compounds emerging from the virtual screen has yielded four compounds with very favorable binding properties, which will be the subject of further experimental investigations. This work is one of the few reported applications of advanced binding free energy models to large-scale virtual screening and optimization projects. It further demonstrates that, with suitable algorithms and automation, advanced binding free energy models can have a useful role in early-stage drug-discovery programs.

  5. Analysis of initial changes in the proteins of soybean root tip under flooding stress using gel-free and gel-based proteomic techniques.

    Science.gov (United States)

    Yin, Xiaojian; Sakata, Katsumi; Nanjo, Yohei; Komatsu, Setsuko

    2014-06-25

    Flooding has a severe negative effect on soybean cultivation in the early stages of growth. To obtain a better understanding of the response mechanisms of soybean to flooding stress, initial changes in root tip proteins under flooding were analyzed using two proteomic techniques. Two-day-old soybeans were treated with flooding for 3, 6, 12, and 24 h. The weight of soybeans increased during the first 3 h of flooding, but root elongation was not observed. Using gel-based and gel-free proteomic techniques, 115 proteins were identified in root tips, of which 9 proteins were commonly detected by both methods. The 71 proteins identified by the gel-free proteomics were analyzed by a hierarchical clustering method based on induction levels during the flooding, and the proteins were divided into 5 clusters. Additional interaction analysis of the proteins revealed that ten proteins belonging to cluster I formed the center of a protein interaction network. mRNA expression analysis of these ten proteins showed that citrate lyase and heat shock protein 70 were down-regulated, whereas calreticulin was up-regulated in the initial phase of flooding. These results suggest that flooding stress to soybean induces calcium-related signal transduction, which might play important roles in the early responses to flooding. Flooding has a severe negative effect on soybean cultivation, particularly in the early stages of growth. To better understand the response mechanisms of soybean to the early stages of flooding stress, two proteomic techniques were used. Two-day-old soybeans were treated without or with flooding for 3, 6, 12, and 24 h. The fresh weight of soybeans increased during the first 3 h of flooding stress, but the growth then slowed and no root elongation was observed. Using gel-based and gel-free proteomic techniques, 115 proteins were identified in root tips, of which 9 proteins were commonly detected by both methods. The 71 proteins identified by the gel-free proteomics were analyzed

  6. Model-based Computer Aided Framework for Design of Process Monitoring and Analysis Systems

    DEFF Research Database (Denmark)

    Singh, Ravendra; Gernaey, Krist; Gani, Rafiqul

    2009-01-01

    In the manufacturing industry, for example, the pharmaceutical industry, a thorough understanding of the process is necessary in addition to a properly designed monitoring and analysis system (PAT system) to consistently obtain the desired end-product properties. A model-based computer-aided framework, including the methods and tools through which the design of monitoring and analysis systems for product quality control can be generated, analyzed and/or validated, has been developed. Two important supporting tools developed as part of the framework are a knowledge base and a model library. The knowledge base provides the necessary information/data during the design of the PAT system while the model library generates additional or missing data needed for design and analysis. Optimization of the PAT system design is achieved in terms of product data analysis time and/or cost of monitoring equipment...

  7. Analysis of Lead and Zinc by Mercury-Free Potentiometric Stripping Analysis

    DEFF Research Database (Denmark)

    Andersen, Jens Enevold Thaulov

    1997-01-01

    A method is presented for trace-element analysis of lead and zinc by potentiometric stripping analysis (PSA) where both the glassy-carbon working electrode and the electrolyte are free of mercury. Analysis of zinc requires an activation procedure of the glassy-carbon electrode. The activation is performed by pre-concentrating zinc on glassy carbon at -1400 mV(SCE) in a mercury-free electrolyte containing 0.1 M HCl and 2 ppm Zn2+, followed by stripping at approx. -1050 mV. A linear relationship between stripping peak areas, recorded in the derivative mode, and concentration was found...

  8. Topic model-based mass spectrometric data analysis in cancer biomarker discovery studies.

    Science.gov (United States)

    Wang, Minkun; Tsai, Tsung-Heng; Di Poto, Cristina; Ferrarini, Alessia; Yu, Guoqiang; Ressom, Habtom W

    2016-08-18

    A fundamental challenge in quantitation of biomolecules for cancer biomarker discovery arises from the heterogeneous nature of human biospecimens. Although this issue has been a subject of discussion in cancer genomic studies, it has not yet been rigorously investigated in mass spectrometry based proteomic and metabolomic studies. Purification of mass spectrometric data is highly desired prior to subsequent analysis, e.g., quantitative comparison of the abundance of biomolecules in biological samples. We investigated topic models to computationally analyze mass spectrometric data considering both integrated peak intensities and scan-level features, i.e., extracted ion chromatograms (EICs). Probabilistic generative models enable flexible representation in data structure and infer sample-specific pure resources. Scan-level modeling helps alleviate information loss during data preprocessing. We evaluated the capability of the proposed models in capturing mixture proportions of contaminants and cancer profiles on LC-MS based serum proteomic and GC-MS based tissue metabolomic datasets acquired from patients with hepatocellular carcinoma (HCC) and liver cirrhosis as well as synthetic data we generated based on the serum proteomic data. The results we obtained by analysis of the synthetic data demonstrated that both intensity-level and scan-level purification models can accurately infer the mixture proportions and the underlying true cancerous sources with small average error ratios. For the real datasets, we found more proteins and metabolites with significant changes between HCC cases and cirrhotic controls. Candidate biomarkers selected after purification yielded biologically meaningful pathway analysis results and improved disease discrimination power in terms of the area under ROC curve compared to the results found prior to purification. We investigated topic model-based inference methods to computationally address the heterogeneity issue in samples analyzed by LC/GC-MS. We observed
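
    The following Python sketch illustrates the general idea of topic-model purification on a simulated sample-by-peak intensity matrix, using scikit-learn's LatentDirichletAllocation as a generic stand-in; the two-source mixture, matrix sizes and count scaling are invented and do not reproduce the paper's intensity-level or scan-level models.

        # Toy "purification": recover per-sample mixture proportions of two latent sources.
        import numpy as np
        from sklearn.decomposition import LatentDirichletAllocation

        rng = np.random.default_rng(1)
        n_samples, n_peaks = 30, 200
        profiles = rng.gamma(2.0, 1.0, size=(2, n_peaks))      # two latent source profiles
        mix = rng.dirichlet([3, 2], size=n_samples)            # true per-sample proportions
        X = np.rint(mix @ profiles * 50).astype(int)           # count-like intensities

        lda = LatentDirichletAllocation(n_components=2, random_state=0)
        theta = lda.fit_transform(X)                           # inferred proportions
        print(theta[:5].round(2))                              # compare against `mix`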

  9. A Risk-Free Protection Index Model for Portfolio Selection with Entropy Constraint under an Uncertainty Framework

    Directory of Open Access Journals (Sweden)

    Jianwei Gao

    2017-02-01

    Full Text Available This paper aims to develop a risk-free protection index model for portfolio selection based on uncertainty theory. First, the returns of risk assets are assumed to be uncertain variables and subject to reputable experts’ evaluations. Second, under this assumption and combining them with the risk-free interest rate, we define a risk-free protection index (RFPI), which can measure the protection degree when the loss of risk assets happens. Third, the proportion entropy serves as a complementary means to reduce risk through a preset diversification requirement. We put forward a risk-free protection index model with an entropy constraint under an uncertainty framework by applying the RFPI, Huang’s risk index model (RIM), and the mean-variance-entropy model (MVEM). Furthermore, to solve our portfolio model, an algorithm is given to estimate the uncertain expected return and standard deviation of different risk assets by applying the Delphi method. Finally, an example is provided to show that the risk-free protection index model performs better than the traditional MVEM and RIM.

  10. Atrazine analysis using an amperometric immunosensor based on single-chain antibody fragments and regeneration-free multi-calibrant measurement

    International Nuclear Information System (INIS)

    Grennan, Kathleen; Strachan, Gillian; Porter, Andrew J.; Killard, Anthony J.; Smyth, Malcolm R.

    2003-01-01

    This work describes the development of an electrochemical immunosensor for the analysis of atrazine using recombinant single-chain antibody (scAb) fragments. The sensors are based on carbon paste screen-printed electrodes incorporating the conducting polymer polyaniline (PANI)/poly(vinylsulphonic acid) (PVSA), which enables direct mediatorless coupling to take place between the redox centres of antigen-labelled horseradish peroxidase (HRP) and the electrode surface. Competitive immunoassays can be performed in real-time using this separation-free system. Analytical measurements based on the pseudo-linear relationship between the slope of a real-time amperometric signal and the concentration of analyte yield a novel immunosensor set-up capable of regenerationless amperometric analysis. Multiple, sequential measurements of standards and samples can be performed on a single scAb-modified surface in a matter of minutes. No separation of bound and unbound species was necessary prior to detection. The system is capable of measuring atrazine to a detection limit of 0.1 ppb (0.1 μg l−1). This system offers the potential for rapid, cost-effective immunosensing for the analysis of samples of environmental, medical and pharmaceutical significance.

  11. Photoinjector optimization using a derivative-free, model-based trust-region algorithm for the Argonne Wakefield Accelerator

    Science.gov (United States)

    Neveu, N.; Larson, J.; Power, J. G.; Spentzouris, L.

    2017-07-01

    Model-based, derivative-free, trust-region algorithms are increasingly popular for optimizing computationally expensive numerical simulations. A strength of such methods is their efficient use of function evaluations. In this paper, we use one such algorithm to optimize the beam dynamics in two cases of interest at the Argonne Wakefield Accelerator (AWA) facility. First, we minimize the emittance of a 1 nC electron bunch produced by the AWA rf photocathode gun by adjusting three parameters: rf gun phase, solenoid strength, and laser radius. The algorithm converges to a set of parameters that yield an emittance of 1.08 μm. Second, we expand the number of optimization parameters to model the complete AWA rf photoinjector (the gun and six accelerating cavities) at 40 nC. The optimization algorithm is used in a Pareto study that compares the trade-off between emittance and bunch length for the AWA 70 MeV photoinjector.
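
    As a hedged illustration of derivative-free optimization over the three gun parameters named above, the sketch below minimizes an invented surrogate "emittance" function with SciPy's COBYLA, which is itself a linear-model trust-region method; the study described here couples a different, model-based trust-region algorithm to a beam dynamics code, and the objective, units and starting point below are assumptions.

        # Derivative-free minimization of a cheap stand-in for an expensive simulation.
        import numpy as np
        from scipy.optimize import minimize

        def emittance(x):
            """Surrogate objective; x = (gun phase, solenoid strength, laser radius)."""
            phase, sol, radius = x
            return (0.02 * (phase - 25.0) ** 2
                    + 1.5 * (sol - 0.4) ** 2
                    + 0.8 * (radius - 2.0) ** 2
                    + 1.0)                       # arbitrary floor value

        x0 = np.array([10.0, 0.2, 4.0])
        res = minimize(emittance, x0, method="COBYLA",
                       options={"rhobeg": 1.0, "maxiter": 200})
        print(res.x, res.fun)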

  12. Uncertainty modelling and analysis of volume calculations based on a regular grid digital elevation model (DEM)

    Science.gov (United States)

    Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi

    2018-05-01

    The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), which is caused by an adopted algorithm for a VC model, 2) discrete error (DE), which is usually caused by DEM resolution and terrain complexity, and 3) propagation error (PE), which is caused by the variables' error. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. In particular, a method is proposed for quantifying the uncertainty of VCs with a confidence interval based on truncation error (TE). In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where the TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated by a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte-Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich uncertainty modelling and analysis-related theories of geographic information science.
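
    A minimal Python sketch of the trapezoidal and Simpson double rules on a Gauss synthetic surface is given below; the surface, grid spacing and integration window are illustrative assumptions, used only to show how the two volume estimates can be compared against a reference value.

        # Volume of a Gauss synthetic surface on a regular grid: TDR vs. SDR.
        from math import erf, pi, sqrt
        import numpy as np
        from scipy.integrate import simpson, trapezoid

        nx = ny = 101                                  # odd node count for Simpson's rule
        x = np.linspace(-3.0, 3.0, nx)
        y = np.linspace(-3.0, 3.0, ny)
        X, Y = np.meshgrid(x, y, indexing="ij")
        Z = np.exp(-(X**2 + Y**2) / 2.0)               # Gauss synthetic surface

        vol_tdr = trapezoid(trapezoid(Z, x=y, axis=1), x=x)   # trapezoidal double rule
        vol_sdr = simpson(simpson(Z, x=y, axis=1), x=x)       # Simpson's double rule
        ref = (sqrt(2 * pi) * erf(3 / sqrt(2))) ** 2          # exact volume over this window
        print(f"TDR: {vol_tdr:.6f}  SDR: {vol_sdr:.6f}  exact: {ref:.6f}")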

  13. Observability analysis for model-based fault detection and sensor selection in induction motors

    International Nuclear Information System (INIS)

    Nakhaeinejad, Mohsen; Bryant, Michael D

    2011-01-01

    Sensors of different types and in different configurations provide information on the dynamics of a system. For a specific task, the question is whether measurements have enough information or whether the sensor configuration can be changed to improve the performance or to reduce costs. Observability analysis may answer these questions. This paper presents a general algorithm of nonlinear observability analysis with application to model-based diagnostics and sensor selection in three-phase induction motors. A bond graph model of the motor is developed and verified with experiments. A nonlinear observability matrix based on Lie derivatives is obtained from state equations. An observability index based on the singular value decomposition of the observability matrix is obtained. Singular values and singular vectors are used to identify the most and least observable configurations of sensors and parameters. A complex step derivative technique is used in the calculation of Jacobians to improve the computational performance of the observability analysis. The proposed algorithm of observability analysis can be applied to any nonlinear system to select the best configuration of sensors for applications in model-based diagnostics and observer-based control, or to determine the level of sensor redundancy. Observability analysis on induction motors provides various sensor configurations with corresponding observability indices. Results show the redundancy levels for different sensors, and provide a sensor selection guideline for model-based diagnostics, and for observer-based controllers. The results can also be used for sensor fault detection and to improve the reliability of the system by increasing the redundancy level in measurements.
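
    For a linear (or linearized) system, the observability-matrix/SVD idea can be sketched as follows in Python; the state matrix and the two candidate sensor sets are invented and are not the induction-motor bond-graph model of the paper.

        # SVD-based observability index for two candidate sensor configurations.
        import numpy as np

        def observability_matrix(A, C):
            n = A.shape[0]
            blocks = [C]
            for _ in range(n - 1):
                blocks.append(blocks[-1] @ A)
            return np.vstack(blocks)

        def observability_index(A, C):
            # Smallest singular value: larger means the state is better observable.
            return np.linalg.svd(observability_matrix(A, C), compute_uv=False).min()

        A = np.array([[0.0, 1.0, 0.0],
                      [-2.0, -0.5, 1.0],
                      [0.0, 0.0, -1.0]])
        C_a = np.array([[1.0, 0.0, 0.0]])                    # measure x1 only
        C_b = np.array([[1.0, 0.0, 0.0],
                        [0.0, 1.0, 0.0]])                    # measure x1 and x2

        for name, C in [("x1 only", C_a), ("x1 and x2", C_b)]:
            print(name, observability_index(A, C))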

  14. Cold vacuum drying residual free water test description

    International Nuclear Information System (INIS)

    Pajunen, A.L.

    1997-01-01

    Residual free water expected to remain in a Multi-Canister Overpack (MCO) after processing in the Cold Vacuum Drying (CVD) Facility is investigated based on three alternative models of fuel crevices. Tests and operating conditions for the CVD process are defined based on the analysis of these models. The models consider water pockets constrained by cladding defects, water constrained in a pore or crack by flow through a porous bed, and water constrained in pores by diffusion. An analysis of comparative reaction rate constraints is also presented, indicating that a pressure rise test can be used to show that MCOs will be thermally stable at operating temperatures up to 75 °C.

  15. Model-based analysis and simulation of regenerative heat wheel

    DEFF Research Database (Denmark)

    Wu, Zhuang; Melnik, Roderick V. N.; Borup, F.

    2006-01-01

    The rotary regenerator (also called the heat wheel) is an important component of energy intensive sectors, which is used in many heat recovery systems. In this paper, a model-based analysis of a rotary regenerator is carried out with a major emphasis given to the development and implementation of...

  16. Dopamine selectively remediates 'model-based' reward learning: a computational approach.

    Science.gov (United States)

    Sharp, Madeleine E; Foerde, Karin; Daw, Nathaniel D; Shohamy, Daphna

    2016-02-01

    Patients with loss of dopamine due to Parkinson's disease are impaired at learning from reward. However, it remains unknown precisely which aspect of learning is impaired. In particular, learning from reward, or reinforcement learning, can be driven by two distinct computational processes. One involves habitual stamping-in of stimulus-response associations, hypothesized to arise computationally from 'model-free' learning. The other, 'model-based' learning, involves learning a model of the world that is believed to support goal-directed behaviour. Much work has pointed to a role for dopamine in model-free learning. But recent work suggests model-based learning may also involve dopamine modulation, raising the possibility that model-based learning may contribute to the learning impairment in Parkinson's disease. To directly test this, we used a two-step reward-learning task which dissociates model-free versus model-based learning. We evaluated learning in patients with Parkinson's disease tested ON versus OFF their dopamine replacement medication and in healthy controls. Surprisingly, we found no effect of disease or medication on model-free learning. Instead, we found that patients tested OFF medication showed a marked impairment in model-based learning, and that this impairment was remediated by dopaminergic medication. Moreover, model-based learning was positively correlated with a separate measure of working memory performance, raising the possibility of common neural substrates. Our results suggest that some learning deficits in Parkinson's disease may be related to an inability to pursue reward based on complete representations of the environment. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  17. Dynamics analysis of a boiling water reactor based on multivariable autoregressive modeling

    International Nuclear Information System (INIS)

    Oguma, Ritsuo; Matsubara, Kunihiko

    1980-01-01

    The establishment of a highly reliable mathematical model for the dynamic characteristics of a reactor is indispensable for the achievement of safe operation in reactor plants. The authors have tried to model the dynamic characteristics of a reactor based on the identification technique, taking the JPDR (Japan Power Demonstration Reactor) as the object, as one of the technical studies for diagnosing BWR anomalies, and employed multivariable autoregressive modeling (the MAR method) as one of the useful methods for carrying out the analysis. In this paper, the outline of the system analysis by MAR modeling is explained, and the identification experiments and their analysis results, performed in phase 4 of the power increase test of the JPDR, are described. The authors evaluated the results of identification based only on reactor noise, making reference to the results of identification in the case of exciting the system by applying an artificial irregular disturbance, in order to clarify the extent to which modeling is possible from reactor noise alone. However, some difficulties were encountered. The largest problem concerns the separation and identification of the noise sources exciting the variables from the dynamic characteristics among the variables. If an effective technique for this problem can be obtained, the identification approach based on the probability model might be a powerful tool in the field of reactor noise analysis and the development of diagnosis techniques. (Wakatsuki, Y.)
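
    The following Python sketch fits a multivariable autoregressive (VAR) model to simulated three-channel noise with statsmodels; the coupling matrix, noise level and record length are assumptions standing in for reactor noise signals, not JPDR data.

        # Identify a multivariable AR model from simulated noise records.
        import numpy as np
        from statsmodels.tsa.api import VAR

        rng = np.random.default_rng(2)
        A1 = np.array([[0.6, 0.2, 0.0],
                       [0.1, 0.5, 0.1],
                       [0.0, 0.3, 0.4]])          # true lag-1 coupling matrix
        n = 2000
        x = np.zeros((n, 3))
        for t in range(1, n):
            x[t] = A1 @ x[t - 1] + 0.1 * rng.standard_normal(3)

        results = VAR(x).fit(maxlags=10, ic="aic")   # model order selected by AIC
        print("selected order:", results.k_ar)
        print(results.coefs[0].round(2))             # estimated lag-1 matrix vs. A1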

  18. Template-based and free modeling of I-TASSER and QUARK pipelines using predicted contact maps in CASP12.

    Science.gov (United States)

    Zhang, Chengxin; Mortuza, S M; He, Baoji; Wang, Yanting; Zhang, Yang

    2018-03-01

    We develop two complementary pipelines, "Zhang-Server" and "QUARK", based on I-TASSER and QUARK pipelines for template-based modeling (TBM) and free modeling (FM), and test them in the CASP12 experiment. The combination of I-TASSER and QUARK successfully folds three medium-size FM targets that have more than 150 residues, even though the interplay between the two pipelines still awaits further optimization. Newly developed sequence-based contact prediction by NeBcon plays a critical role to enhance the quality of models, particularly for FM targets, by the new pipelines. The inclusion of NeBcon predicted contacts as restraints in the QUARK simulations results in an average TM-score of 0.41 for the best in top five predicted models, which is 37% higher than that by the QUARK simulations without contacts. In particular, there are seven targets that are converted from non-foldable to foldable (TM-score >0.5) due to the use of contact restraints in the simulations. Another additional feature in the current pipelines is the local structure quality prediction by ResQ, which provides a robust residue-level modeling error estimation. Despite the success, significant challenges still remain in ab initio modeling of multi-domain proteins and folding of β-proteins with complicated topologies bound by long-range strand-strand interactions. Improvements on domain boundary and long-range contact prediction, as well as optimal use of the predicted contacts and multiple threading alignments, are critical to address these issues seen in the CASP12 experiment. © 2017 Wiley Periodicals, Inc.

  19. Skill-Based and Planned Active Play Versus Free-Play Effects on Fundamental Movement Skills in Preschoolers.

    Science.gov (United States)

    Roach, Lindsay; Keats, Melanie

    2018-01-01

    Fundamental movement skill interventions are important for promoting physical activity, but the optimal intervention model for preschool children remains unclear. We compared two 8-week interventions, a structured skill-station and a planned active play approach, to a free-play control condition on pre- and postintervention fundamental movement skills. We also collected data regarding program attendance and perceived enjoyment. We found a significant interaction effect between intervention type and time. A Tukey honest significant difference analysis supported a positive intervention effect showing a significant difference between both interventions and the free-play control condition. There was a significant between-group difference in group attendance such that mean attendance was higher for both the free-play and planned active play groups relative to the structured skill-based approach. There were no differences in attendance between free-play and planned active play groups, and there were no differences in enjoyment ratings between the two intervention groups. In sum, while both interventions led to improved fundamental movement skills, the active play approach offered several logistical advantages. Although these findings should be replicated, they can guide feasible and sustainable fundamental movement skill programs within day care settings.

  20. Cluster-based analysis of multi-model climate ensembles

    Science.gov (United States)

    Hyde, Richard; Hossaini, Ryan; Leeson, Amber A.

    2018-06-01

    Clustering - the automated grouping of similar data - can provide powerful and unique insight into large and complex data sets, in a fast and computationally efficient manner. While clustering has been used in a variety of fields (from medical image processing to economics), its application within atmospheric science has been fairly limited to date, and the potential benefits of the application of advanced clustering techniques to climate data (both model output and observations) have yet to be fully realised. In this paper, we explore the specific application of clustering to a multi-model climate ensemble. We hypothesise that clustering techniques can provide (a) a flexible, data-driven method of testing model-observation agreement and (b) a mechanism with which to identify model development priorities. We focus our analysis on chemistry-climate model (CCM) output of tropospheric ozone - an important greenhouse gas - from the recent Atmospheric Chemistry and Climate Model Intercomparison Project (ACCMIP). Tropospheric column ozone from the ACCMIP ensemble was clustered using the Data Density based Clustering (DDC) algorithm. We find that a multi-model mean (MMM) calculated using members of the most-populous cluster identified at each location offers a reduction of up to ~20 % in the global absolute mean bias between the MMM and an observed satellite-based tropospheric ozone climatology, with respect to a simple, all-model MMM. On a spatial basis, the bias is reduced at ~62 % of all locations, with the largest bias reductions occurring in the Northern Hemisphere - where ozone concentrations are relatively large. However, the bias is unchanged at 9 % of all locations and increases at 29 %, particularly in the Southern Hemisphere. The latter demonstrates that although cluster-based subsampling acts to remove outlier model data, such data may in fact be closer to observed values in some locations. We further demonstrate that clustering can provide a viable and
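
    A toy Python illustration of the cluster-based multi-model mean at a single grid point is given below, using scikit-learn's KMeans as a stand-in for the DDC algorithm of the paper; the ensemble values, units and cluster count are invented.

        # Multi-model mean from the most-populous cluster at one location.
        import numpy as np
        from sklearn.cluster import KMeans

        models = np.array([28.1, 29.0, 28.5, 34.7, 28.9,
                           27.8, 35.2, 28.3, 29.4, 21.0])    # hypothetical ozone columns (DU)

        km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(models.reshape(-1, 1))
        labels, counts = np.unique(km.labels_, return_counts=True)
        dominant = labels[counts.argmax()]                   # most-populous cluster

        print(f"all-model MMM: {models.mean():.2f} DU, "
              f"cluster-based MMM: {models[km.labels_ == dominant].mean():.2f} DU")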

  1. Generating Billion-Edge Scale-Free Networks in Seconds: Performance Study of a Novel GPU-based Preferential Attachment Model

    Energy Technology Data Exchange (ETDEWEB)

    Perumalla, Kalyan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Alam, Maksudul [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-10-01

    A novel parallel algorithm is presented for generating random scale-free networks using the preferential-attachment model. The algorithm, named cuPPA, is custom-designed for single instruction multiple data (SIMD) style of parallel processing supported by modern processors such as graphical processing units (GPUs). To the best of our knowledge, our algorithm is the first to exploit GPUs, and also the fastest implementation available today, to generate scale free networks using the preferential attachment model. A detailed performance study is presented to understand the scalability and runtime characteristics of the cuPPA algorithm. In one of the best cases, when executed on an NVidia GeForce 1080 GPU, cuPPA generates a scale free network of a billion edges in less than 2 seconds.
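
    For reference, the underlying sequential preferential-attachment model can be sketched in a few lines of Python; this is only the serial Barabasi-Albert style reference model that cuPPA parallelizes, not the GPU algorithm itself, and the graph size and seed are arbitrary.

        # Serial preferential-attachment generator (reference model, not cuPPA).
        import random

        def preferential_attachment(n, m, seed=0):
            """Grow a graph with n vertices; each new vertex attaches to m existing
            vertices chosen with probability proportional to their degree."""
            random.seed(seed)
            edges = []
            targets = list(range(m))   # repeated per incident edge -> degree-biased sampling
            for v in range(m, n):
                chosen = set()
                while len(chosen) < m:
                    chosen.add(random.choice(targets))
                for u in chosen:
                    edges.append((v, u))
                    targets.extend([v, u])
            return edges

        print(len(preferential_attachment(10_000, 2)), "edges")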

  2. Development and Analysis of Patient-Based Complete Conducting Airways Models.

    Directory of Open Access Journals (Sweden)

    Rafel Bordas

    Full Text Available The analysis of high-resolution computed tomography (CT images of the lung is dependent on inter-subject differences in airway geometry. The application of computational models in understanding the significance of these differences has previously been shown to be a useful tool in biomedical research. Studies using image-based geometries alone are limited to the analysis of the central airways, down to generation 6-10, as other airways are not visible on high-resolution CT. However, airways distal to this, often termed the small airways, are known to play a crucial role in common airway diseases such as asthma and chronic obstructive pulmonary disease (COPD. Other studies have incorporated an algorithmic approach to extrapolate CT segmented airways in order to obtain a complete conducting airway tree down to the level of the acinus. These models have typically been used for mechanistic studies, but also have the potential to be used in a patient-specific setting. In the current study, an image analysis and modelling pipeline was developed and applied to a number of healthy (n = 11 and asthmatic (n = 24 CT patient scans to produce complete patient-based airway models to the acinar level (mean terminal generation 15.8 ± 0.47. The resulting models are analysed in terms of morphometric properties and seen to be consistent with previous work. A number of global clinical lung function measures are compared to resistance predictions in the models to assess their suitability for use in a patient-specific setting. We show a significant difference (p < 0.01 in airways resistance at all tested flow rates in complete airway trees built using CT data from severe asthmatics (GINA 3-5 versus healthy subjects. Further, model predictions of airways resistance at all flow rates are shown to correlate with patient forced expiratory volume in one second (FEV1 (Spearman ρ = -0.65, p < 0.001 and, at low flow rates (0.00017 L/s, FEV1 over forced vital capacity (FEV1

  3. Analyzing Ambiguity of Context-Free Grammars

    DEFF Research Database (Denmark)

    Brabrand, Claus; Giegerich, Robert; Møller, Anders

    2007-01-01

    It has been known since 1962 that the ambiguity problem for context-free grammars is undecidable. Ambiguity in context-free grammars is a recurring problem in language design and parser generation, as well as in applications where grammars are used as models of real-world physical structures. We observe that there is a simple linguistic characterization of the grammar ambiguity problem, and we show how to exploit this to conservatively approximate the problem based on local regular approximations and grammar unfoldings. As an application, we consider grammars that occur in RNA analysis
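
    Because the problem is undecidable, testing can only exhibit ambiguity, never prove its absence; the short NLTK sketch below shows how a sampled string exposes ambiguity in a toy expression grammar (the grammar and sentence are illustrative, not from the paper).

        # Exhaustive parsing of one string to exhibit ambiguity in a tiny CFG.
        import nltk

        grammar = nltk.CFG.fromstring("""
        E -> E '+' E | E '*' E | 'a'
        """)
        parser = nltk.ChartParser(grammar)

        trees = list(parser.parse("a + a * a".split()))
        print(len(trees), "distinct parse trees")   # > 1 reveals ambiguity
        for t in trees:
            print(t)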

  4. Bayesian analysis of CCDM models

    Science.gov (United States)

    Jesus, J. F.; Valentim, R.; Andrade-Oliveira, F.

    2017-09-01

    Creation of Cold Dark Matter (CCDM), in the context of the Einstein Field Equations, produces a negative pressure term which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models for matter creation using statistical criteria, in light of SNe Ia data: the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the Bayesian Evidence (BE). These criteria allow models to be compared in terms of goodness of fit and number of free parameters, penalizing excess complexity. We find that the JO model is slightly favoured over the LJO/ΛCDM model; however, neither of these, nor the Γ = 3αH0 model, can be discarded by the current analysis. Three other scenarios are discarded either because of poor fitting or because of an excess of free parameters. A method of increasing the Bayesian evidence through reparameterization, in order to reduce parameter degeneracy, is also developed.
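
    A minimal Python sketch of AIC/BIC-style comparison of two toy models fitted to simulated distance-modulus data is given below; the models, data and noise level are invented and are unrelated to the CCDM scenarios or the actual SNe Ia analysis.

        # Chi-square fit plus AIC/BIC (up to additive constants) for two toy models.
        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(3)
        z = np.linspace(0.01, 1.0, 50)
        sigma = 0.15
        mu_obs = 43.0 + 5.0 * np.log10(z * (1 + 0.3 * z)) + rng.normal(0, sigma, z.size)

        def model_1p(z, a):               # one free parameter
            return 43.0 + 5.0 * np.log10(z * (1 + a * z))

        def model_2p(z, a, b):            # two free parameters
            return b + 5.0 * np.log10(z * (1 + a * z))

        N = z.size
        for name, model, p0 in [("1-param", model_1p, [0.1]),
                                ("2-param", model_2p, [0.1, 42.0])]:
            popt, _ = curve_fit(model, z, mu_obs, p0=p0, sigma=np.full(N, sigma))
            chi2 = np.sum(((mu_obs - model(z, *popt)) / sigma) ** 2)
            k = len(popt)
            print(f"{name}: chi2={chi2:.1f}  AIC={chi2 + 2*k:.1f}  BIC={chi2 + k*np.log(N):.1f}")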

  5. Bayesian analysis of CCDM models

    Energy Technology Data Exchange (ETDEWEB)

    Jesus, J.F. [Universidade Estadual Paulista (Unesp), Câmpus Experimental de Itapeva, Rua Geraldo Alckmin 519, Vila N. Sra. de Fátima, Itapeva, SP, 18409-010 Brazil (Brazil); Valentim, R. [Departamento de Física, Instituto de Ciências Ambientais, Químicas e Farmacêuticas—ICAQF, Universidade Federal de São Paulo (UNIFESP), Unidade José Alencar, Rua São Nicolau No. 210, Diadema, SP, 09913-030 Brazil (Brazil); Andrade-Oliveira, F., E-mail: jfjesus@itapeva.unesp.br, E-mail: valentim.rodolfo@unifesp.br, E-mail: felipe.oliveira@port.ac.uk [Institute of Cosmology and Gravitation—University of Portsmouth, Burnaby Road, Portsmouth, PO1 3FX United Kingdom (United Kingdom)

    2017-09-01

    Creation of Cold Dark Matter (CCDM), in the context of the Einstein Field Equations, produces a negative pressure term which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models for matter creation using statistical criteria, in light of SNe Ia data: the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the Bayesian Evidence (BE). These criteria allow models to be compared in terms of goodness of fit and number of free parameters, penalizing excess complexity. We find that the JO model is slightly favoured over the LJO/ΛCDM model; however, neither of these, nor the Γ = 3αH0 model, can be discarded by the current analysis. Three other scenarios are discarded either because of poor fitting or because of an excess of free parameters. A method of increasing the Bayesian evidence through reparameterization, in order to reduce parameter degeneracy, is also developed.

  6. Temperature-dependent relativistic microscopic optical potential and the mean free path of a nucleon based on Walecka's model

    International Nuclear Information System (INIS)

    Han Yinlu; Shen Qingbiao; Zhuo Yizhong

    1994-01-01

    The relativistic microscopic optical potential, the Schroedinger equivalent potential, and the mean free path of a nucleon at finite temperature in nuclear matter and finite nuclei are studied based on Walecka's model and thermo-field dynamics. We let only the Hartree-Fock self-energy of a nucleon represent the real part of the microscopic optical potential, and the fourth-order meson exchange diagrams, i.e. the polarization diagrams, represent the imaginary part of the microscopic optical potential in nuclear matter. The microscopic optical potential of finite nuclei is obtained by means of the local density approximation. (orig.)

  7. Product Lifecycle Management Architecture: A Model Based Systems Engineering Analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Noonan, Nicholas James [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    This report is an analysis of the Product Lifecycle Management (PLM) program. The analysis is centered on a need statement generated by a Nuclear Weapons (NW) customer. The need statement captured in this report creates an opportunity for the PLM to provide a robust service as a solution. Lifecycles for both the NW and PLM are analyzed using Model Based System Engineering (MBSE).

  8. Generalized structured component analysis a component-based approach to structural equation modeling

    CERN Document Server

    Hwang, Heungsun

    2014-01-01

    Winner of the 2015 Sugiyama Meiko Award (Publication Award) of the Behaviormetric Society of Japan Developed by the authors, generalized structured component analysis is an alternative to two longstanding approaches to structural equation modeling: covariance structure analysis and partial least squares path modeling. Generalized structured component analysis allows researchers to evaluate the adequacy of a model as a whole, compare a model to alternative specifications, and conduct complex analyses in a straightforward manner. Generalized Structured Component Analysis: A Component-Based Approach to Structural Equation Modeling provides a detailed account of this novel statistical methodology and its various extensions. The authors present the theoretical underpinnings of generalized structured component analysis and demonstrate how it can be applied to various empirical examples. The book enables quantitative methodologists, applied researchers, and practitioners to grasp the basic concepts behind this new a...

  9. Cold starting characteristics analysis of hydraulic free piston engine

    International Nuclear Information System (INIS)

    Zhang, Shuanlu; Zhao, Zhenfeng; Zhao, Changlu; Zhang, Fujun; Wang, Shan

    2017-01-01

    The cold start characteristics of the hydraulic free piston diesel engine may affect its stable operation. Therefore the specific cold start characteristics, such as BDC or TDC positions, in-cylinder pressure and heat release rate, should be investigated in detail. These parameters fluctuate with some regularity during the cold start process. With the development of the free piston engine prototype and the establishment of a test bench, the results are obtained. For the dynamic results, the fluctuation range of the TDC and BDC positions is 8 mm and decreases with time. The thermodynamic results show that the combustion process is not stable and the in-cylinder pressure fluctuates strongly during the cold start process. In addition, the combustion is rapid and knock occurs inevitably. In order to investigate the reasons, a CFD model is established for in-cylinder temperature analysis and heat transfer conditions. It is found that a higher wall temperature at start leads to a more uniform temperature distribution. The delay period may decrease and the heat release will advance. This is analyzed by a thermodynamic derivation based on the first law of thermodynamics. Finally, suggestions for improving the cold start strategy are proposed. - Highlights: • The cold start behaviors of the HFPE are investigated in detail. • A CFD method is used for simulating the temperature distribution in the start process. • A thermodynamic derivation uncovers the compression temperature distribution. • Suggestions for improving the cold start strategy are proposed.

  10. A model of lipid-free apolipoprotein A-I revealed by iterative molecular dynamics simulation.

    Directory of Open Access Journals (Sweden)

    Xing Zhang

    Full Text Available Apolipoprotein A-I (apo A-I, the major protein component of high-density lipoprotein, has been proven inversely correlated to cardiovascular risk in past decades. The lipid-free state of apo A-I is the initial stage which binds to lipids forming high-density lipoprotein. Molecular models of lipid-free apo A-I have been reported by methods like X-ray crystallography and chemical cross-linking/mass spectrometry (CCL/MS. Through structural analysis we found that those current models had limited consistency with other experimental results, such as those from hydrogen exchange with mass spectrometry. Through molecular dynamics simulations, we also found those models could not reach a stable equilibrium state. Therefore, by integrating various experimental results, we proposed a new structural model for lipid-free apo A-I, which contains a bundled four-helix N-terminal domain (1-192 that forms a variable hydrophobic groove and a mobile short hairpin C-terminal domain (193-243. This model exhibits an equilibrium state through molecular dynamics simulation and is consistent with most of the experimental results known from CCL/MS on lysine pairs, fluorescence resonance energy transfer and hydrogen exchange. This solution-state lipid-free apo A-I model may elucidate the possible conformational transitions of apo A-I binding with lipids in high-density lipoprotein formation.

  11. Heterotic free fermionic and symmetric toroidal orbifold models

    Energy Technology Data Exchange (ETDEWEB)

    Athanasopoulos, P.; Faraggi, A.E. [Department of Mathematical Sciences, University of Liverpool,Liverpool L69 7ZL (United Kingdom); Nibbelink, S. Groot [Arnold Sommerfeld Center for Theoretical Physics, Ludwig-Maximilians-Universität München,80333 München (Germany); Mehta, V.M. [Institute for Theoretical Physics, University of Heidelberg,69120 Heidelberg (Germany)

    2016-04-07

    Free fermionic models and symmetric heterotic toroidal orbifolds both constitute exact backgrounds that can be used effectively for phenomenological explorations within string theory. Even though it is widely believed that for ℤ₂×ℤ₂ orbifolds the two descriptions should be equivalent, a detailed dictionary between both formulations is still lacking. This paper aims to fill this gap: we give a detailed account of how the input data of both descriptions can be related to each other. In particular, we show that the generalized GSO phases of the free fermionic model correspond to generalized torsion phases used in orbifold model building. We illustrate our translation methods by providing free fermionic realizations for all ℤ₂×ℤ₂ orbifold geometries in six dimensions.

  12. A diffusion model-free framework with echo time dependence for free-water elimination and brain tissue microstructure characterization.

    Science.gov (United States)

    Molina-Romero, Miguel; Gómez, Pedro A; Sperl, Jonathan I; Czisch, Michael; Sämann, Philipp G; Jones, Derek K; Menzel, Marion I; Menze, Bjoern H

    2018-03-23

    The compartmental nature of brain tissue microstructure is typically studied by diffusion MRI, MR relaxometry or their correlation. Diffusion MRI relies on signal representations or biophysical models, while MR relaxometry and correlation studies are based on regularized inverse Laplace transforms (ILTs). Here we introduce a general framework for characterizing microstructure that does not depend on diffusion modeling and replaces ill-posed ILTs with blind source separation (BSS). This framework yields proton density, relaxation times, volume fractions, and signal disentanglement, allowing for separation of the free-water component. Diffusion experiments repeated at several different echo times contain entangled diffusion and relaxation compartmental information. These can be disentangled by BSS using a physically constrained nonnegative matrix factorization. Computer simulations and phantom studies, together with repeatability and reproducibility experiments, demonstrated that BSS is capable of estimating proton density, compartmental volume fractions and transverse relaxation times. In vivo results proved its potential to correct for free-water contamination and to estimate tissue parameters. Formulation of the diffusion-relaxation dependence as a BSS problem introduces a new framework for studying microstructure compartmentalization, and a novel tool for free-water elimination. © 2018 International Society for Magnetic Resonance in Medicine.
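    The framework above replaces inverse Laplace transforms with blind source separation via a physically constrained nonnegative matrix factorization. The sketch below is a rough illustration only: it factors a synthetic multi-echo-time diffusion signal matrix with scikit-learn's generic NMF. The b-values, echo times, compartment parameters and the use of an unconstrained NMF are all assumptions, not the authors' implementation.

```python
# Illustrative BSS sketch: factor diffusion signals acquired at several echo
# times into nonnegative sources (per-compartment diffusion decays) and mixing
# weights (TE-dependent fractions). Synthetic two-compartment data.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
b = np.linspace(0, 3000, 30)                   # s/mm^2, hypothetical b-values
tes = np.array([60.0, 80.0, 100.0, 120.0])     # ms, hypothetical echo times

D = np.array([0.8e-3, 3.0e-3])                 # mm^2/s: "tissue" and "free water"
T2 = np.array([70.0, 500.0])                   # ms
f = np.array([0.8, 0.2])                       # volume fractions

sources = np.exp(-np.outer(D, b))                              # (2, n_b) diffusion decays
mixing = f[None, :] * np.exp(-tes[:, None] / T2[None, :])      # (n_TE, 2) weights
X = mixing @ sources + 0.002 * rng.random((len(tes), len(b)))  # noisy signal matrix

nmf = NMF(n_components=2, init="nndsvda", max_iter=2000, random_state=0)
W = nmf.fit_transform(X)      # estimated TE-dependent weights per compartment
H = nmf.components_           # estimated compartmental diffusion signals
print("Estimated relative weights at each TE:\n", W / W.sum(axis=1, keepdims=True))
```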

  13. Use of Model-Based Design Methods for Enhancing Resiliency Analysis of Unmanned Aerial Vehicles

    Science.gov (United States)

    Knox, Lenora A.

    The most common traditional non-functional requirement analysis is reliability. With systems becoming more complex, networked, and adaptive to environmental uncertainties, system resiliency has recently become the non-functional requirement analysis of choice. Analysis of system resiliency has challenges, which include defining resilience for domain areas, identifying resilience metrics, determining resilience modeling strategies, and understanding how best to integrate the concepts of risk and reliability into resiliency. Formal methods that integrate all of these concepts do not currently exist in specific domain areas. Leveraging RAMSoS, a model-based reliability analysis methodology for Systems of Systems (SoS), we propose an extension that accounts for resiliency analysis through evaluation of mission performance, risk, and cost using multi-criteria decision-making (MCDM) modeling and design trade study variability modeling evaluation techniques. This proposed methodology, coined RAMSoS-RESIL, is applied to a case study in the multi-agent unmanned aerial vehicle (UAV) domain to investigate the potential benefits of a mission architecture where functionality to complete a mission is disseminated across multiple UAVs (distributed) as opposed to being contained in a single UAV (monolithic). The case-study-based research demonstrates proof of concept for the proposed model-based technique and provides sufficient preliminary evidence to conclude which architectural design (distributed vs. monolithic) is most resilient based on insight into mission resilience performance, risk, and cost in addition to the traditional analysis of reliability.

  14. Statistical analysis tolerance using jacobian torsor model based on uncertainty propagation method

    Directory of Open Access Journals (Sweden)

    W Ghie

    2016-04-01

    Full Text Available One risk inherent in the use of assembly components is that the behaviour of these components is discovered only at the moment an assembly is being carried out. The objective of our work is to enable designers to use known component tolerances as parameters in models that can be used to predict properties at the assembly level. In this paper we present a statistical approach to assemblability evaluation, based on tolerance and clearance propagations. This new statistical analysis method for tolerance is based on the Jacobian-Torsor model and the uncertainty measurement approach. We show how this can be accomplished by modeling the distribution of manufactured dimensions through applying a probability density function. By presenting an example we show how statistical tolerance analysis should be used in the Jacobian-Torsor model. This work is supported by previous efforts aimed at developing a new generation of computational tools for tolerance analysis and synthesis, using the Jacobian-Torsor approach. This approach is illustrated on a simple three-part assembly, demonstrating the method’s capability in handling three-dimensional geometry.

  15. Isogeometric analysis of free-form Timoshenko curved beams including the nonlinear effects of large deformations

    Science.gov (United States)

    Hosseini, Seyed Farhad; Hashemian, Ali; Moetakef-Imani, Behnam; Hadidimoud, Saied

    2018-03-01

    In the present paper, the isogeometric analysis (IGA) of free-form planar curved beams is formulated based on the nonlinear Timoshenko beam theory to investigate the large deformation of beams with variable curvature. Based on the isoparametric concept, the shape functions of the field variables (displacement and rotation) in a finite element analysis are considered to be the same as the non-uniform rational basis spline (NURBS) basis functions defining the geometry. The validity of the presented formulation is tested in five case studies covering a wide range of engineering curved structures, ranging from straight and constant-curvature to variable-curvature beams. The nonlinear deformation results obtained by the presented method are compared to well-established benchmark examples and also to the results of linear and nonlinear finite element analyses. As the nonlinear load-deflection behavior of Timoshenko beams is the main topic of this article, the results strongly show the applicability of the IGA method to the large deformation analysis of free-form curved beams. Finally, it is worth noting that, until very recently, the large deformation analysis of free-form Timoshenko curved beams had not been considered in IGA by researchers.

  16. An Agent Based Modelling Approach for Multi-Stakeholder Analysis of City Logistics Solutions

    NARCIS (Netherlands)

    Anand, N.

    2015-01-01

    This thesis presents a comprehensive framework for multi-stakeholder analysis of city logistics solutions using agent based modeling. The framework describes different stages for the systematic development of an agent based model for the city logistics domain. The framework includes a

  17. Free-boundary models of a meltwater conduit

    KAUST Repository

    Dallaston, Michael C.

    2014-08-01

    © 2014 AIP Publishing LLC. We analyse the cross-sectional evolution of an englacial meltwater conduit that contracts due to inward creep of the surrounding ice and expands due to melting. Making use of theoretical methods from free-boundary problems in Stokes flow and Hele-Shaw squeeze flow we construct an exact solution to the coupled problem of external viscous creep and internal heating, in which we adopt a Newtonian approximation for ice flow and an idealized uniform heat source in the conduit. This problem provides an interesting variant on standard free-boundary problems, coupling different internal and external problems through the kinematic condition at the interface. The boundary in the exact solution takes the form of an ellipse that may contract or expand (depending on the magnitudes of effective pressure and heating rate) around fixed focal points. Linear stability analysis reveals that without the melting this solution is unstable to perturbations in the shape. Melting can stabilize the interface unless the aspect ratio is too small; in that case, instabilities grow largest at the thin ends of the ellipse. The predictions are corroborated with numerical solutions using boundary integral techniques. Finally, a number of extensions to the idealized model are considered, showing that a contracting circular conduit is unstable to all modes of perturbation if melting occurs at a uniform rate around the boundary, or if the ice is modelled as a shear-thinning fluid.

  18. Surface free energy analysis of adsorbents used for radioiodine adsorption

    International Nuclear Information System (INIS)

    González-García, C.M.; Román, S.; González, J.F.; Sabio, E.; Ledesma, B.

    2013-01-01

    In this work, the surface free energy of biomass-based activated carbons, both fresh and impregnated with triethylenediamine, has been evaluated. The contribution of the Lifshitz-van der Waals components was determined with the model proposed by van Oss et al. The results obtained allowed prediction of the most probable configurations of the impregnant on the carbon surface and of its influence on the subsequent adsorption of radioactive methyl iodide.

  19. A model of quasi-free scattering with polarized protons

    International Nuclear Information System (INIS)

    Teodoro, M.R.

    1976-01-01

    A quantitative evaluation, based on a simple model for spin-free coplanar and asymmetric reaction in ¹⁶O, for 215 MeV incoming polarized protons confirms the use of the strong effective polarization of the knocked-out proton by the spin-orbit coupling and of the strong dependence of the free, medium energy, proton-proton cross section on the relative orientation of the proton spins. Effective polarizations, momentum distributions and correlation cross sections have been calculated for the 1p1/2, 1p3/2 and 1s1/2 states in ¹⁶O, using protons totally polarized orthogonal to the scattering plane. Harmonic oscillator and square wells have been used to generate the bound state wave functions, whereas the optical potentials have been taken spin-independent and purely imaginary [pt]

  20. [Simulation and data analysis of stereological modeling based on virtual slices].

    Science.gov (United States)

    Wang, Hao; Shen, Hong; Bai, Xiao-yan

    2008-05-01

    To establish a computer-assisted stereological model for simulating the process of slice sectioning and to evaluate the relationship between the section surface and the estimated three-dimensional structure. The model was designed mathematically as Win32 software based on MFC, using Microsoft Visual Studio as the IDE, to simulate the infinite process of sectioning and to analyze the data derived from the model. The linearity of the model fit was evaluated by comparison with the traditional formula. The Win32 software based on this algorithm allowed random sectioning of particles distributed randomly in an ideal virtual cube. The stereological parameters showed very high throughput (>94.5% and 92%) in the homogeneity and independence tests. The density, shape and size data of the sections were tested and conformed to a normal distribution. The output of the model and that from the image analysis system showed statistical correlation and consistency. The algorithm described can be used for evaluating the stereologic parameters of the structure of tissue slices.

  1. Geometrical and kinematical characterization of parallax-free world models

    International Nuclear Information System (INIS)

    Hasse, W.; Perlick, V.

    1988-01-01

    An arbitrary general relativistic world model, i.e., a pseudo-Riemannian manifold along with a timelike vector field V, is considered. Such a kinematical world model is called ''parallax-free'' iff the angle under which any two observers (i.e., integral curves of V) are seen by any third observer remains constant in the course of time. It is shown that a model is parallax-free iff V is proportional to some conformal Killing field. In this case V, especially, has to be shear-free. Furthermore a relationship between parallaxes and red shift is presented and a reference is made to considerations concerning the visibility of cosmic rotation

  2. Dynamic Modeling of Cell-Free Biochemical Networks Using Effective Kinetic Models

    Science.gov (United States)

    2015-03-16

    [Fragmentary record excerpt] The recoverable text refers to a global sensitivity analysis of a reduced-order coagulation model using the variance-based method of Sobol, with each sensitivity value accompanied by the maximum uncertainty estimated by that method [69].

  3. Experimental design and data-analysis in label-free quantitative LC/MS proteomics: A tutorial with MSqRob.

    Science.gov (United States)

    Goeminne, Ludger J E; Gevaert, Kris; Clement, Lieven

    2018-01-16

    Label-free shotgun proteomics is routinely used to assess proteomes. However, extracting relevant information from the massive amounts of generated data remains difficult. This tutorial provides a strong foundation on analysis of quantitative proteomics data. We provide key statistical concepts that help researchers to design proteomics experiments and we showcase how to analyze quantitative proteomics data using our recent free and open-source R package MSqRob, which was developed to implement the peptide-level robust ridge regression method for relative protein quantification described by Goeminne et al. MSqRob can handle virtually any experimental proteomics design and outputs proteins ordered by statistical significance. Moreover, its graphical user interface and interactive diagnostic plots provide easy inspection and also detection of anomalies in the data and flaws in the data analysis, allowing deeper assessment of the validity of results and a critical review of the experimental design. Our tutorial discusses interactive preprocessing, data analysis and visualization of label-free MS-based quantitative proteomics experiments with simple and more complex designs. We provide well-documented scripts to run analyses in bash mode on GitHub, enabling the integration of MSqRob in automated pipelines on cluster environments (https://github.com/statOmics/MSqRob). The concepts outlined in this tutorial aid in designing better experiments and analyzing the resulting data more appropriately. The two case studies using the MSqRob graphical user interface will contribute to a wider adaptation of advanced peptide-based models, resulting in higher quality data analysis workflows and more reproducible results in the proteomics community. We also provide well-documented scripts for experienced users that aim at automating MSqRob on cluster environments. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Digital Tomosynthesis System Geometry Analysis Using Convolution-Based Blur-and-Add (BAA) Model.

    Science.gov (United States)

    Wu, Meng; Yoon, Sungwon; Solomon, Edward G; Star-Lack, Josh; Pelc, Norbert; Fahrig, Rebecca

    2016-01-01

    Digital tomosynthesis is a three-dimensional imaging technique with a lower radiation dose than computed tomography (CT). Due to the missing data in tomosynthesis systems, out-of-plane structures in the depth direction cannot be completely removed by the reconstruction algorithms. In this work, we analyzed the impulse responses of common tomosynthesis systems on a plane-to-plane basis and proposed a fast and accurate convolution-based blur-and-add (BAA) model to simulate the backprojected images. In addition, the analysis formalism describing the impulse response of out-of-plane structures can be generalized to both rotating and parallel gantries. We implemented a ray tracing forward projection and backprojection (ray-based model) algorithm and the convolution-based BAA model to simulate the shift-and-add (backproject) tomosynthesis reconstructions. The convolution-based BAA model with proper geometry distortion correction provides reasonably accurate estimates of the tomosynthesis reconstruction. A numerical comparison indicates that the simulated images using the two models differ by less than 6% in terms of the root-mean-squared error. This convolution-based BAA model can be used in efficient system geometry analysis, reconstruction algorithm design, out-of-plane artifacts suppression, and CT-tomosynthesis registration.
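    To make the blur-and-add idea concrete, the toy sketch below builds each simulated slice as the sum of every object plane convolved with a depth-dependent kernel. The Gaussian kernel and its linear growth with plane separation are stand-ins chosen for illustration; the record derives the actual plane-to-plane impulse response from the system geometry.

```python
# Toy blur-and-add style simulation of a shift-and-add tomosynthesis
# reconstruction: each reconstructed slice = in-focus plane + blurred copies
# of the other planes. Kernel choice is a placeholder, not the paper's model.
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_and_add(object_planes: np.ndarray, blur_per_mm: float, dz_mm: float) -> np.ndarray:
    """object_planes: (n_planes, ny, nx) attenuation maps; returns simulated slices."""
    n = object_planes.shape[0]
    recon = np.zeros_like(object_planes, dtype=float)
    for r in range(n):                      # reconstruction plane index
        for z in range(n):                  # object plane index
            sigma = blur_per_mm * abs(r - z) * dz_mm   # blur grows with plane separation
            recon[r] += gaussian_filter(object_planes[z].astype(float), sigma=sigma)
    return recon

phantom = np.zeros((5, 64, 64))
phantom[2, 28:36, 28:36] = 1.0              # small square in the central plane
slices = blur_and_add(phantom, blur_per_mm=0.4, dz_mm=5.0)
print("in-plane peak vs out-of-plane peak:", slices[2].max(), slices[0].max())
```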

  5. Metadyn View: Fast web-based viewer of free energy surfaces calculated by metadynamics

    Science.gov (United States)

    Hošek, Petr; Spiwok, Vojtěch

    2016-01-01

    Metadynamics is a highly successful enhanced sampling technique for simulation of molecular processes and prediction of their free energy surfaces. An in-depth analysis of data obtained by this method is as important as the simulation itself. Although there are several tools to compute free energy surfaces from metadynamics data, they usually lack user friendliness and a built-in visualization component. Here we introduce Metadyn View as a fast and user friendly viewer of bias potential/free energy surfaces calculated by metadynamics in the Plumed package. It is based on modern web technologies including HTML5, JavaScript and Cascading Style Sheets (CSS). It can be used by visiting the web site and uploading a HILLS file. It calculates the bias potential/free energy surface on the client side, so it can run online or offline without the need to install additional web engines. Moreover, it includes tools for measurement of free energies and free energy differences and data/image export.
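    The core computation such a viewer performs can be sketched as follows: sum the deposited Gaussian hills into the bias potential and negate it to estimate the free energy surface. The arrays below stand in for columns parsed from a Plumed HILLS file and do not reproduce the real file layout; for well-tempered runs an additional rescaling of the bias would be needed.

```python
# Minimal sketch: reconstructing a 1D free energy surface from deposited hills.
import numpy as np

centres = np.array([0.1, 0.3, 0.35, 0.7, 0.72, 0.75])   # hypothetical hill centres (CV values)
sigmas  = np.full_like(centres, 0.05)                    # hypothetical hill widths
heights = np.full_like(centres, 1.2)                     # hypothetical hill heights (kJ/mol)

grid = np.linspace(0.0, 1.0, 201)
bias = np.zeros_like(grid)
for c, s, h in zip(centres, sigmas, heights):
    bias += h * np.exp(-((grid - c) ** 2) / (2.0 * s ** 2))

free_energy = -bias                      # non-well-tempered estimate: F(s) ~ -V_bias(s)
free_energy -= free_energy.min()         # shift so the global minimum is zero
print("Estimated FES minimum near CV =", grid[np.argmin(free_energy)])
```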

  6. Three-Dimensional Assembly Tolerance Analysis Based on the Jacobian-Torsor Statistical Model

    Directory of Open Access Journals (Sweden)

    Peng Heping

    2017-01-01

    Full Text Available The unified Jacobian-Torsor model has been developed for deterministic (worst-case) tolerance analysis. This paper presents a comprehensive model for performing statistical tolerance analysis by integrating the unified Jacobian-Torsor model and Monte Carlo simulation. In this model, an assembly is sub-divided into surfaces, and the Small Displacements Torsor (SDT) parameters are used to express the relative position between any two surfaces of the assembly. Then, a 3D dimension chain can be created by using a surface graph of the assembly, and the unified Jacobian-Torsor model is developed based on the effect of each functional element on the whole functional requirements of products. Finally, Monte Carlo simulation is implemented for the statistical tolerance analysis. A numerical example is given to demonstrate the capability of the proposed method in handling three-dimensional assembly tolerance analysis.
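    A bare-bones illustration of the statistical step described above is sketched here: small-displacement contributions are sampled from normal distributions and pushed through a linearized (Jacobian-like) mapping to the functional requirement. The standard deviations and Jacobian row are invented for illustration and are not taken from the paper's example.

```python
# Schematic Monte Carlo statistical tolerance analysis: sample per-element
# deviations and propagate them linearly to the functional requirement.
import numpy as np

rng = np.random.default_rng(1)
n_samples = 100_000

sigma_fe = np.array([0.010, 0.015, 0.008])   # hypothetical 1-sigma deviations of three elements (mm)
J = np.array([1.0, -0.5, 2.0])               # hypothetical Jacobian row mapping deviations to the FR

deviations = rng.normal(0.0, sigma_fe, size=(n_samples, sigma_fe.size))
fr = deviations @ J                          # functional-requirement deviation per sample

print(f"mean = {fr.mean():+.4f} mm, std = {fr.std():.4f} mm")
print(f"99.73% interval ~ [{np.quantile(fr, 0.00135):+.4f}, {np.quantile(fr, 0.99865):+.4f}] mm")
```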

  7. A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models

    Science.gov (United States)

    Brugnach, M.; Neilson, R.; Bolte, J.

    2001-12-01

    The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One of the problems with this approach is that it considers the model as a "black box", and it focuses on explaining model behavior by analyzing the input-output relationship. Since these models have a high degree of non-linearity, understanding how the input affects an output can be an extremely difficult task. Operationally, the application of this technique may constitute a challenging task because complex process-based models are generally characterized by a large parameter space. In order to overcome some of these difficulties, we propose a method of sensitivity analysis applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and it aims to determine how sensitive the model output is to variations in the processes. Once the processes that exert the major influence in

  8. Determination of pyrolysis characteristics and kinetics of palm kernel shell using TGA–FTIR and model-free integral methods

    International Nuclear Information System (INIS)

    Ma, Zhongqing; Chen, Dengyu; Gu, Jie; Bao, Binfu; Zhang, Qisheng

    2015-01-01

    Highlights: • Model-free integral kinetics methods and analytical TGA–FTIR were applied to the pyrolysis of PKS. • The pyrolysis mechanism of PKS was elaborated. • Thermal stability was established: lignin > cellulose > xylan. • Detailed compositions in the volatiles of PKS pyrolysis were determined. • The interaction of the three biomass components led to fluctuation of the activation energy in PKS pyrolysis. - Abstract: Palm kernel shell (PKS) from palm oil production is a potential biomass source for bio-energy production. A fundamental understanding of PKS pyrolysis behavior and kinetics is essential to its efficient thermochemical conversion. The thermal degradation profile in derivative thermogravimetry (DTG) analysis showed two significant mass-loss peaks, mainly related to the decomposition of hemicellulose and cellulose respectively. This characteristic differed from that of other biomass (e.g. wheat straw and corn stover), which presented just one peak or a peak accompanied by an extra "shoulder" (e.g. wheat straw). According to the Fourier transform infrared spectrometry (FTIR) analysis, the prominent volatile components generated by the pyrolysis of PKS were CO₂ (2400–2250 cm⁻¹ and 586–726 cm⁻¹), aldehydes, ketones, organic acids (1900–1650 cm⁻¹), and alkanes, phenols (1475–1000 cm⁻¹). The dependence of the activation energy on the conversion was estimated by two model-free integral methods, the Flynn–Wall–Ozawa (FWO) and Kissinger–Akahira–Sunose (KAS) methods, at different heating rates. The fluctuation of the activation energy can be interpreted as a result of interactive reactions related to cellulose, hemicellulose and lignin degradation that occur during pyrolysis. Based on TGA–FTIR analysis and the model-free integral kinetics methods, the pyrolysis mechanism of PKS is elaborated in this paper
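    The two isoconversional estimates named above reduce, at a fixed conversion, to straight-line fits over the heating rates: FWO uses ln(β) versus 1/T with slope -1.052·Ea/R, and KAS uses ln(β/T²) versus 1/T with slope -Ea/R. A hedged sketch with placeholder heating rates and temperatures (not the PKS data) is given below.

```python
# Model-free (isoconversional) activation energy estimates at one conversion level.
import numpy as np

R = 8.314                                          # J/(mol K)
beta = np.array([5.0, 10.0, 20.0, 40.0])           # K/min, hypothetical heating rates
T_alpha = np.array([560.0, 573.0, 587.0, 601.0])   # K reached at the same conversion, hypothetical

inv_T = 1.0 / T_alpha
slope_fwo, _ = np.polyfit(inv_T, np.log(beta), 1)            # FWO: slope = -1.052*Ea/R
slope_kas, _ = np.polyfit(inv_T, np.log(beta / T_alpha**2), 1)  # KAS: slope = -Ea/R

Ea_fwo = -slope_fwo * R / 1.052 / 1000.0           # kJ/mol
Ea_kas = -slope_kas * R / 1000.0                   # kJ/mol
print(f"FWO: Ea ~ {Ea_fwo:.0f} kJ/mol, KAS: Ea ~ {Ea_kas:.0f} kJ/mol")
```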

  9. Mechanical modeling for magnetorheological elastomer isolators based on constitutive equations and electromagnetic analysis

    Science.gov (United States)

    Wang, Qi; Dong, Xufeng; Li, Luyu; Ou, Jinping

    2018-06-01

    Because constitutive models are too complicated and existing mechanical models lack universality, neither is satisfactory for magnetorheological elastomer (MRE) devices. In this article, a novel universal method is proposed to build concise mechanical models. A constitutive model and electromagnetic analysis were applied in this method to ensure universality, while a series of derivations and simplifications were carried out to obtain a concise formulation. To illustrate the proposed modeling method, a conical MRE isolator was introduced. Its basic mechanical equations were built based on equilibrium, deformation compatibility, constitutive equations and electromagnetic analysis. An iteration model and a highly efficient differential-equation-editor-based model were then derived to solve the basic mechanical equations. The final simplified mechanical equations were obtained by re-fitting the simulations with a novel optimization algorithm. Finally, a verification test of the isolator proved the accuracy of the derived mechanical model and the modeling method.

  10. Spreading dynamics of an e-commerce preferential information model on scale-free networks

    Science.gov (United States)

    Wan, Chen; Li, Tao; Guan, Zhi-Hong; Wang, Yuanmei; Liu, Xiongding

    2017-02-01

    In order to study the influence of the preferential degree and the heterogeneity of underlying networks on the spread of preferential e-commerce information, we propose a novel susceptible-infected-beneficial model based on scale-free networks. The spreading dynamics of the preferential information are analyzed in detail using the mean-field theory. We determine the basic reproductive number and equilibria. The theoretical analysis indicates that the basic reproductive number depends mainly on the preferential degree and the topology of the underlying networks. We prove the global stability of the information-elimination equilibrium. The permanence of preferential information and the global attractivity of the information-prevailing equilibrium are also studied in detail. Some numerical simulations are presented to verify the theoretical results.

  11. Predicting Free Flow Speed and Crash Risk of Bicycle Traffic Flow Using Artificial Neural Network Models

    Directory of Open Access Journals (Sweden)

    Cheng Xu

    2015-01-01

    Full Text Available Free flow speed is a fundamental measure of traffic performance and has been found to affect the severity of crash risk. However, previous studies lack analysis and modelling of the factors affecting bicycles’ free flow speed. The main focus of this study is to develop multilayer back propagation artificial neural network (BPANN) models for the prediction of free flow speed and crash risk on the separated bicycle path. Four different models considering different combinations of input variables (e.g., path width, traffic condition, bicycle type, and cyclists’ characteristics) were developed. 459 field data samples were collected from eleven bicycle paths in Hangzhou, China, and 70% of the total samples were used for training, 15% for validation, and 15% for testing. The results show that considering the input variables of bicycle type and cyclists’ characteristics effectively improves the accuracy of the prediction models. Meanwhile, the bicycle-type parameters have a more significant effect on predicting bicycle free flow speed than the cyclists’ characteristics. The findings could contribute to the evaluation, planning, and management of bicycle safety.
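    As a rough analogue of the modelling workflow described above (not a reproduction of the Hangzhou study), the sketch below trains a multilayer back-propagation network on synthetic path and cyclist attributes with a 70/15/15 train/validation/test split; the feature names and the data-generating rule are invented.

```python
# Sketch: back-propagation neural network regressing free flow speed on synthetic features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
n = 459
X = np.column_stack([
    rng.uniform(2.0, 4.5, n),      # path width (m), hypothetical
    rng.uniform(0.05, 0.6, n),     # traffic density proxy, hypothetical
    rng.integers(0, 2, n),         # bicycle type (0 = regular, 1 = electric), hypothetical
    rng.integers(18, 70, n),       # cyclist age, hypothetical
])
y = 18 + 1.5*X[:, 0] - 8*X[:, 1] + 4*X[:, 2] - 0.05*X[:, 3] + rng.normal(0, 1.0, n)

X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.30, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.50, random_state=0)

net = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
net.fit(X_train, y_train)
print("validation MAE:", mean_absolute_error(y_val, net.predict(X_val)))
print("test MAE:", mean_absolute_error(y_test, net.predict(X_test)))
```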

  12. Modeling analysis of pulsed magnetization process of magnetic core based on inverse Jiles-Atherton model

    Science.gov (United States)

    Liu, Yi; Zhang, He; Liu, Siwei; Lin, Fuchang

    2018-05-01

    The J-A (Jiles-Atherton) model is widely used to describe the magnetization characteristics of magnetic cores in a low-frequency alternating field. However, this model is deficient in the quantitative analysis of the eddy current loss and residual loss in a high-frequency magnetic field. Based on the decomposition of magnetization intensity, an inverse J-A model is established which uses magnetic flux density B as an input variable. Static and dynamic core losses under high frequency excitation are separated based on the inverse J-A model. Optimized parameters of the inverse J-A model are obtained based on particle swarm optimization. The platform for the pulsed magnetization characteristic test is designed and constructed. The hysteresis curves of ferrite and Fe-based nanocrystalline cores at high magnetization rates are measured. The simulated and measured hysteresis curves are presented and compared. It is found that the inverse J-A model can be used to describe the magnetization characteristics at high magnetization rates and to separate the static loss and dynamic loss accurately.

  13. Electrochemical approach for acute myocardial infarction diagnosis based on direct antibodies-free analysis of human blood plasma.

    Science.gov (United States)

    Suprun, Elena V; Saveliev, Anatoly A; Evtugyn, Gennady A; Lisitsa, Alexander V; Bulko, Tatiana V; Shumyantseva, Victoria V; Archakov, Alexander I

    2012-03-15

    A novel direct antibodies-free electrochemical approach for acute myocardial infarction (AMI) diagnosis has been developed. For this purpose, a combination of the electrochemical assay of plasma samples with chemometrics was proposed. Screen-printed carbon electrodes modified with didodecyldimethylammonium bromide were used for plasma characterization by cyclic voltammetry (CV) and square wave voltammetry (SWV). It was shown that the cathodic peak in voltammograms at about -250 mV vs. Ag/AgCl can be associated with AMI. In parallel tests, cardiac myoglobin and troponin I, the AMI biomarkers, were determined in each sample by RAMP immunoassay. The applicability of the electrochemical testing for AMI diagnostics was confirmed by statistical methods: generalized linear model (GLM), linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA), artificial neural network (multi-layer perceptron, MLP), and support vector machine (SVM), all of which were created to obtain the "True-False" distribution prediction, where "True" and "False" are, respectively, the positive and negative decisions about an illness event. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. Techniques for discrimination-free predictive models (Chapter 12)

    NARCIS (Netherlands)

    Kamiran, F.; Calders, T.G.K.; Pechenizkiy, M.; Custers, B.H.M.; Calders, T.G.K.; Schermer, B.W.; Zarsky, T.Z.

    2013-01-01

    In this chapter, we give an overview of the techniques we ourselves developed for constructing discrimination-free classifiers. In discrimination-free classification the goal is to learn a predictive model that classifies future data objects as accurately as possible, yet the predicted labels should be

  15. Cryogenic Fuel Tank Draining Analysis Model

    Science.gov (United States)

    Greer, Donald

    1999-01-01

    One of the technological challenges in designing advanced hypersonic aircraft and the next generation of spacecraft is developing reusable flight-weight cryogenic fuel tanks. As an aid in the design and analysis of these cryogenic tanks, a computational fluid dynamics (CFD) model has been developed specifically for the analysis of flow in a cryogenic fuel tank. This model employs the full set of Navier-Stokes equations, except that viscous dissipation is neglected in the energy equation. An explicit finite difference technique in two-dimensional generalized coordinates, approximated to second-order accuracy in both space and time, is used. The stiffness resulting from the low Mach number is resolved by using artificial compressibility. The model simulates the transient, two-dimensional draining of a fuel tank cross section. To calculate the slosh wave dynamics, the interface between the ullage gas and liquid fuel is modeled as a free surface. Then, experimental data for free convection inside a horizontal cylinder are compared with model results. Finally, cryogenic tank draining calculations are performed with three different wall heat fluxes to demonstrate the effect of wall heat flux on the internal tank flow field.

  16. Development of a Sampling-Based Global Sensitivity Analysis Workflow for Multiscale Computational Cancer Models

    Science.gov (United States)

    Wang, Zhihui; Deisboeck, Thomas S.; Cristini, Vittorio

    2014-01-01

    There are two challenges that researchers face when performing global sensitivity analysis (GSA) on multiscale in silico cancer models. The first is increased computational intensity, since a multiscale cancer model generally takes longer to run than does a scale-specific model. The second problem is the lack of a best GSA method that fits all types of models, which implies that multiple methods and their sequence need to be taken into account. In this article, we therefore propose a sampling-based GSA workflow consisting of three phases – pre-analysis, analysis, and post-analysis – by integrating Monte Carlo and resampling methods with the repeated use of analysis of variance (ANOVA); we then exemplify this workflow using a two-dimensional multiscale lung cancer model. By accounting for all parameter rankings produced by multiple GSA methods, a summarized ranking is created at the end of the workflow based on the weighted mean of the rankings for each input parameter. For the cancer model investigated here, this analysis reveals that ERK, a downstream molecule of the EGFR signaling pathway, has the most important impact on regulating both the tumor volume and expansion rate in the algorithm used. PMID:25257020
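    The final aggregation step described above, a weighted mean over the rankings produced by the individual GSA methods, can be illustrated in a few lines; the parameter names, method names and weights below are hypothetical.

```python
# Sketch: summarizing parameter rankings from several GSA methods by weighted mean.
import numpy as np

params = ["EGFR", "ERK", "proliferation_rate", "diffusion_coeff"]
# Rank of each parameter (1 = most influential) under three hypothetical methods.
rankings = {
    "ANOVA":       np.array([2, 1, 3, 4]),
    "Monte Carlo": np.array([3, 1, 2, 4]),
    "Resampling":  np.array([2, 1, 4, 3]),
}
weights = {"ANOVA": 0.5, "Monte Carlo": 0.3, "Resampling": 0.2}   # hypothetical method weights

summary = sum(w * rankings[m] for m, w in weights.items())        # weighted mean rank per parameter
for i in np.argsort(summary):
    print(f"{params[i]:20s} weighted mean rank = {summary[i]:.2f}")
```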

  17. Mechanism of Process-Induced Salt-to-Free Base Transformation of Pharmaceutical Products

    DEFF Research Database (Denmark)

    Bruun Hansen, Thomas; Qu, Haiyan

    2014-01-01

    pH-solubility profiles of a model drug in salt form were established and the mechanism of the salt-to-free base transformation was investigated by increasing the pH of the system. Wet massing experiments along with suspension experiments were used to investigate the effects of excipients on the stabi...

  18. Free Fall Misconceptions: Results of a Graph Based Pre-Test of Sophomore Civil Engineering Students

    Science.gov (United States)

    Montecinos, Alicia M.

    2014-01-01

    A partly unusual behaviour was found among 14 sophomore civil engineering students who took a pre-test for a free fall laboratory session, in the context of a general mechanics course. An analysis examining the consistency between the mathematical models and the physics models was made. In all cases, the students presented evidence favoring a correct free…

  19. Efficient alignment-free DNA barcode analytics.

    Science.gov (United States)

    Kuksa, Pavel; Pavlovic, Vladimir

    2009-11-10

    In this work we consider barcode DNA analysis problems and address them using alternative, alignment-free methods and representations which model sequences as collections of short sequence fragments (features). The methods use fixed-length representations (spectra) for barcode sequences to measure similarities or dissimilarities between sequences coming from the same or different species. The spectrum-based representation not only allows for accurate and computationally efficient species classification, but also opens the possibility of accurate clustering analysis of putative species barcodes and identification of critical within-barcode loci distinguishing barcodes of different sample groups. The new alignment-free methods provide highly accurate and fast DNA barcode-based identification and classification of species, with substantial improvements in accuracy and speed over state-of-the-art barcode analysis methods. We evaluate our methods on problems of species classification and identification using barcodes, important and relevant analytical tasks in many practical applications (adverse species movement monitoring, sampling surveys for unknown or pathogenic species identification, biodiversity assessment, etc.). On several benchmark barcode datasets, including ACG, Astraptes, Hesperiidae, Fish larvae, and Birds of North America, the proposed alignment-free methods considerably improve prediction accuracy compared to prior results. We also observe significant running time improvements over the state-of-the-art methods. Our results show that the newly developed alignment-free methods for DNA barcoding can efficiently and with high accuracy identify specimens by examining only a few barcode features, resulting in increased scalability and interpretability of current computational approaches to barcoding.
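    The fixed-length spectrum representation referred to above can be illustrated with a short sketch: count all overlapping k-mers of each sequence and compare the resulting vectors, here with cosine similarity on toy sequences; the choice of k = 3 and of cosine similarity is for illustration only.

```python
# Sketch: k-mer spectrum vectors for alignment-free sequence comparison.
from itertools import product
import numpy as np

def kmer_spectrum(seq: str, k: int = 3) -> np.ndarray:
    """Return counts of every DNA k-mer (A/C/G/T alphabet) in a fixed order."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    vec = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in index:          # skips k-mers containing N or other symbols
            vec[index[kmer]] += 1
    return vec

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

s1 = "ACGTACGTTAGCCGATCGATCGTTACG"
s2 = "ACGTACGTTAGCCGATCGATCGTAACG"   # one substitution relative to s1
s3 = "TTTTGGGGCCCCAAAATTTTGGGGCCC"
v1, v2, v3 = (kmer_spectrum(s, 3) for s in (s1, s2, s3))
print("similar pair:   ", round(cosine(v1, v2), 3))
print("dissimilar pair:", round(cosine(v1, v3), 3))
```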

  20. Universal free school breakfast: a qualitative model for breakfast behaviors

    Directory of Open Access Journals (Sweden)

    Louise eHarvey-Golding

    2015-06-01

    Full Text Available In recent years the provision of school breakfast has increased significantly in the UK. However, research examining the effectiveness of school breakfast is still within relative stages of infancy, and findings to date have been rather mixed. Moreover, previous evaluations of school breakfast schemes have been predominantly quantitative in their methodologies. Presently there are few qualitative studies examining the subjective perceptions and experiences of stakeholders, and thereby an absence of knowledge regarding the sociocultural impacts of school breakfast. The purpose of this study was to investigate the beliefs, views and attitudes, and breakfast consumption behaviors, among key stakeholders, served by a council-wide universal free school breakfast initiative, within the North West of England, UK. A sample of children, parents and school staff were recruited from three primary schools, participating in the universal free school breakfast scheme, to partake in semi-structured interviews and small focus groups. A Grounded Theory analysis of the data collected identified a theoretical model of breakfast behaviors, underpinned by the subjective perceptions and experiences of these key stakeholders. The model comprises of three domains relating to breakfast behaviors, and the internal and external factors that are perceived to influence breakfast behaviors, among children, parents and school staff. Findings were validated using triangulation methods, member checks and inter-rater reliability measures. In presenting this theoretically grounded model for breakfast behaviors, this paper provides a unique qualitative insight into the breakfast consumption behaviors and barriers to breakfast consumption, within a socioeconomically deprived community, participating in a universal free school breakfast intervention program.

  1. On the TAP Free Energy in the Mixed p-Spin Models

    Science.gov (United States)

    Chen, Wei-Kuo; Panchenko, Dmitry

    2018-05-01

    Thouless et al. (Phys Mag 35(3):593-601, 1977) derived a representation for the free energy of the Sherrington-Kirkpatrick model, called the TAP free energy, written as the difference of the energy and entropy on the extended configuration space of local magnetizations with an Onsager correction term. In the setting of mixed p-spin models with Ising spins, we prove that the free energy can indeed be written as the supremum of the TAP free energy over the space of local magnetizations whose Edwards-Anderson order parameter (self-overlap) is to the right of the support of the Parisi measure. Furthermore, for generic mixed p-spin models, we prove that the free energy is equal to the TAP free energy evaluated on the local magnetization of any pure state.
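    As background for the structure described above (energy minus entropy plus an Onsager correction), a commonly quoted form of the TAP free energy for the Sherrington-Kirkpatrick case is reproduced below; the notation is illustrative and not the paper's own.

```latex
% Commonly quoted TAP free energy for the SK model with couplings g_{ij};
% notation is illustrative, not taken from the record.
\[
  -\beta F_{\mathrm{TAP}}(m)
  = \beta \sum_{i<j} g_{ij}\, m_i m_j
  + \frac{N\beta^{2}}{4}\,(1-q)^{2}
  - \sum_{i=1}^{N}\Bigl[\tfrac{1+m_i}{2}\log\tfrac{1+m_i}{2}
                      + \tfrac{1-m_i}{2}\log\tfrac{1-m_i}{2}\Bigr],
  \qquad q = \frac{1}{N}\sum_{i=1}^{N} m_i^{2},
\]
```

    where the middle term is the Onsager correction and q is the self-overlap (Edwards-Anderson order parameter) that the record's constraint refers to.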

  2. Residual stress distribution analysis of heat treated APS TBC using image based modelling.

    Science.gov (United States)

    Li, Chun; Zhang, Xun; Chen, Ying; Carr, James; Jacques, Simon; Behnsen, Julia; di Michiel, Marco; Xiao, Ping; Cernik, Robert

    2017-08-01

    We carried out a residual stress distribution analysis in an APS TBC throughout the depth of the coatings. The samples were heat treated at 1150 °C for 190 h and the data analysis used image-based modelling built on real 3D images measured by computed tomography (CT). The stress distribution in several 2D slices from the 3D model is included in this paper, as well as the stress distribution along several paths shown on the slices. Our analysis can explain the occurrence of the "jump" features near the interface between the top coat and the bond coat. These features in the residual stress distribution trend were measured (as a function of depth) by high-energy synchrotron XRD (as shown in our related research article entitled 'Understanding the Residual Stress Distribution through the Thickness of Atmosphere Plasma Sprayed (APS) Thermal Barrier Coatings (TBCs) by high energy Synchrotron XRD; Digital Image Correlation (DIC) and Image Based Modelling') (Li et al., 2017) [1].

  3. Hybrid modeling and empirical analysis of automobile supply chain network

    Science.gov (United States)

    Sun, Jun-yan; Tang, Jian-ming; Fu, Wei-ping; Wu, Bing-ying

    2017-05-01

    Based on the connection mechanism of nodes which automatically select upstream and downstream agents, a simulation model for the dynamic evolutionary process of a consumer-driven automobile supply chain is established by integrating ABM and discrete modeling on a GIS-based map. First, the model's rationality is demonstrated by analyzing the consistency of sales and of changes in various agent parameters between the simulation model and a real automobile supply chain. Second, through complex network theory, hierarchical structures of the model and relationships of networks at different levels are analyzed to calculate various characteristic parameters such as mean distance, mean clustering coefficients, and degree distributions. By doing so, it is verified that the model is a typical scale-free and small-world network. Finally, the motion law of this model is analyzed from the perspective of complex self-adaptive systems. The chaotic state of the simulation system is verified, which suggests that this system has typical nonlinear characteristics. This model not only macroscopically illustrates the dynamic evolution of complex networks in the automobile supply chain but also microcosmically reflects the business process of each agent. Moreover, constructing and simulating the system by combining CAS theory and complex networks supplies a novel method for supply chain analysis, as well as theoretical bases and experience for the supply chain analysis of auto companies.
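    The characteristic parameters mentioned above (mean distance, clustering, degree distribution) can be computed with standard graph tooling. The sketch below applies networkx to a stand-in Barabási-Albert graph rather than to the simulated supply chain itself.

```python
# Sketch: scale-free / small-world diagnostics on a placeholder network.
import networkx as nx

G = nx.barabasi_albert_graph(n=500, m=2, seed=0)   # stand-in scale-free graph

print("mean shortest path length:", round(nx.average_shortest_path_length(G), 3))
print("mean clustering coefficient:", round(nx.average_clustering(G), 3))

degree_hist = nx.degree_histogram(G)               # count of nodes with degree k
top = sorted(range(len(degree_hist)), key=degree_hist.__getitem__, reverse=True)[:5]
print("most common degrees:", top)
```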

  4. Cognitive components underpinning the development of model-based learning.

    Science.gov (United States)

    Potter, Tracey C S; Bryce, Nessa V; Hartley, Catherine A

    2017-06-01

    Reinforcement learning theory distinguishes "model-free" learning, which fosters reflexive repetition of previously rewarded actions, from "model-based" learning, which recruits a mental model of the environment to flexibly select goal-directed actions. Whereas model-free learning is evident across development, recruitment of model-based learning appears to increase with age. However, the cognitive processes underlying the development of model-based learning remain poorly characterized. Here, we examined whether age-related differences in cognitive processes underlying the construction and flexible recruitment of mental models predict developmental increases in model-based choice. In a cohort of participants aged 9-25, we examined whether the abilities to infer sequential regularities in the environment ("statistical learning"), maintain information in an active state ("working memory") and integrate distant concepts to solve problems ("fluid reasoning") predicted age-related improvements in model-based choice. We found that age-related improvements in statistical learning performance did not mediate the relationship between age and model-based choice. Ceiling performance on our working memory assay prevented examination of its contribution to model-based learning. However, age-related improvements in fluid reasoning statistically mediated the developmental increase in the recruitment of a model-based strategy. These findings suggest that gradual development of fluid reasoning may be a critical component process underlying the emergence of model-based learning. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.

  5. Shear-free axial model in massive Brans–Dicke gravity

    Energy Technology Data Exchange (ETDEWEB)

    Sharif, M., E-mail: msharif.math@pu.edu.pk [Department of Mathematics, University of the Punjab, Quaid-e-Azam Campus, Lahore-54590 (Pakistan); Manzoor, Rubab, E-mail: rubab.manzoor@umt.edu.pk [Department of Mathematics, University of the Punjab, Quaid-e-Azam Campus, Lahore-54590 (Pakistan); Department of Mathematics, University of Management and Technology, Johar Town Campus, Lahore-54782 (Pakistan)

    2017-01-15

    This paper explores the influences of dark energy on the shear-free axially symmetric evolution by considering self-interacting Brans–Dicke gravity as a dark energy candidate. We describe the energy source of the model and derive all the effective dynamical variables as well as the effective structure scalars. It is found that the scalar field is one of the sources of anisotropy and dissipation. The resulting effective structure scalars help to study the dynamics associated with dark energy in any axial configuration. In order to investigate shear-free evolution, we formulate a set of governing equations along with the heat transport equation. We discuss the consequences of the shear-free condition for different SBD fluid models, such as dissipative non-geodesic and geodesic models. For the dissipative non-geodesic case, a rotational distribution turns out to be the necessary and sufficient condition for a radiating model. The dissipation depends upon inhomogeneous expansion. The geodesic model is found to be irrotational and non-radiating. The non-dissipative geodesic model leads to the FRW model for positive values of the expansion parameter.

  6. Physics implications of flat directions in free fermionic superstring models. II. Renormalization group analysis

    International Nuclear Information System (INIS)

    Cleaver, G.; Cvetic, M.; Everett, L.; Langacker, P.; Wang, J.; Espinosa, J.R.; Everett, L.

    1999-01-01

    We continue the investigation of the physics implications of a class of flat directions for a prototype quasi-realistic free fermionic string model (CHL5), building upon the results of a previous paper in which the complete mass spectrum and effective trilinear couplings of the observable sector were calculated to all orders in the superpotential. We introduce soft supersymmetry breaking mass parameters into the model, and investigate the gauge symmetry breaking patterns and the renormalization group analysis for two representative flat directions, which leave an additional U(1)′ as well as the SM gauge group unbroken at the string scale. We study symmetry breaking patterns that lead to a phenomenologically acceptable Z-Z′ hierarchy, M_Z′ ∼ O(1 TeV) and 10¹² GeV for electroweak and intermediate scale U(1)′ symmetry breaking, respectively, and the associated mass spectra after electroweak symmetry breaking. The fermion mass spectrum exhibits unrealistic features, including massless exotic fermions, but has an interesting d-quark hierarchy and associated CKM matrix in one case. There are (some) non-canonical effective μ terms, which lead to a non-minimal Higgs sector with more than two Higgs doublets involved in the symmetry breaking, and a rich structure of Higgs particles, charginos, and neutralinos, some of which, however, are massless or ultralight. In the electroweak scale cases the scale of supersymmetry breaking is set by the Z′ mass, with the sparticle masses in the several TeV range. copyright 1999 The American Physical Society

  7. Serum-free keloid fibroblast cell culture: an in vitro model for the study of aberrant wound healing.

    Science.gov (United States)

    Koch, R J; Goode, R L; Simpson, G T

    1997-04-01

    The purpose of this study was to develop an in vitro serum-free keloid fibroblast model. Keloid formation remains a problem for every surgeon. Prior evaluations of fibroblast characteristics in vitro, especially those of growth factor measurement, have been confounded by the presence of serum-containing tissue culture media. The serum itself contains growth factors, yet has been a "necessary evil" to sustain cell growth. The design of this study is laboratory-based and uses keloid fibroblasts obtained from five patients undergoing facial (ear lobule) keloid removal in a university-affiliated clinic. Keloid fibroblasts were established in primary cell culture and then propagated in a serum-free environment. The main outcome measures included sustained keloid fibroblast growth and viability, which were comparable to those of serum-based models. The keloid fibroblast cell cultures exhibited logarithmic growth, sustained a high cellular viability, maintained a monolayer, and displayed contact inhibition. Demonstrating model consistency, there was no statistically significant difference between the mean cell counts of the five keloid fibroblast cell lines at each experimental time point. The in vitro growth of keloid fibroblasts in a serum-free model has not been done previous to this study. The results of this study indicate that the proliferative characteristics described are comparable to those of serum-based models. The described model will facilitate the evaluation of potential wound healing modulators, and cellular effects and collagen modifications of laser resurfacing techniques, and may serve as a harvest source for contaminant-free fibroblast autoimplants. Perhaps its greatest utility will be in the evaluation of endogenous and exogenous growth factors.

  8. A multi-model analysis of vertical ozone profiles

    Directory of Open Access Journals (Sweden)

    J. E. Jonson

    2010-06-01

    Full Text Available A multi-model study of the long-range transport of ozone and its precursors from major anthropogenic source regions was coordinated by the Task Force on Hemispheric Transport of Air Pollution (TF HTAP) under the Convention on Long-range Transboundary Air Pollution (LRTAP). Vertical profiles of ozone at 12-h intervals from 2001 are available from twelve of the models contributing to this study and are compared here with observed profiles from ozonesondes. The contributions from each major source region are analysed for selected sondes, and this analysis is supplemented by retroplume calculations using the FLEXPART Lagrangian particle dispersion model to provide insight into the origin of ozone transport events and the cause of differences between the models and observations.

    In the boundary layer ozone levels are in general strongly affected by regional sources and sinks. With a considerably longer lifetime in the free troposphere, ozone here is to a much larger extent affected by processes on a larger scale such as intercontinental transport and exchange with the stratosphere. Such individual events are difficult to trace over several days or weeks of transport. This may explain why statistical relationships between models and ozonesonde measurements are far less satisfactory than shown in previous studies for surface measurements at all seasons. The lowest bias between model-calculated ozone profiles and the ozonesonde measurements is seen in the winter and autumn months. Following the increase in photochemical activity in the spring and summer months, the spread in model results increases, and the agreement between ozonesonde measurements and the individual models deteriorates further.

    At selected sites calculated contributions to ozone levels in the free troposphere from intercontinental transport are shown. Intercontinental transport is identified based on differences in model calculations with unperturbed emissions and

  9. Latent segmentation based count models: Analysis of bicycle safety in Montreal and Toronto.

    Science.gov (United States)

    Yasmin, Shamsunnahar; Eluru, Naveen

    2016-10-01

    The study contributes to the literature on bicycle safety by building on traditional count regression models to investigate factors affecting bicycle crashes at the Traffic Analysis Zone (TAZ) level. The TAZ is a traffic-related geographic entity that is most frequently used as the spatial unit for macroscopic crash risk analysis. In conventional count models, the impact of exogenous factors is restricted to be the same across the entire region. However, it is possible that the influence of exogenous factors varies across different TAZs. To accommodate the potential variation in the impact of exogenous factors, we formulate latent segmentation based count models. Specifically, we formulate and estimate latent segmentation based Poisson (LP) and latent segmentation based Negative Binomial (LNB) models to study bicycle crash counts. In our latent segmentation approach, we allow for more than two segments and also consider a large set of variables in the segmentation and segment-specific models. The formulated models are estimated using bicycle-motor vehicle crash data from the Island of Montreal and the City of Toronto for the years 2006 through 2010. The TAZ-level variables considered in our analysis include accessibility measures, exposure measures, sociodemographic characteristics, socioeconomic characteristics, road network characteristics and built environment. A policy analysis is also conducted to illustrate the applicability of the proposed model for planning purposes. This macro-level research would assist decision makers, transportation officials and community planners in making informed decisions to proactively improve bicycle safety - a prerequisite to promoting a culture of active transportation. Copyright © 2016 Elsevier Ltd. All rights reserved.
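
    The latent-segmentation idea above can be illustrated with a minimal sketch: a two-segment Poisson mixture fitted by expectation-maximization on synthetic TAZ-level counts. The data, the segment count and the absence of covariates are simplifications for illustration; the paper's LP/LNB models include covariates in both the segmentation and segment-specific components.

```python
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(0)

# Synthetic TAZ-level crash counts drawn from two latent segments
# (illustrative only; not the Montreal/Toronto data).
n = 500
true_z = rng.random(n) < 0.4
counts = np.where(true_z, rng.poisson(6.0, n), rng.poisson(1.5, n))

# EM for a two-segment Poisson mixture: parameters are the segment
# probability pi and the segment-specific means lam[0], lam[1].
pi, lam = 0.5, np.array([1.0, 4.0])
for _ in range(200):
    # E-step: posterior probability that each TAZ belongs to segment 1
    log_p1 = np.log(pi) + counts * np.log(lam[1]) - lam[1]
    log_p0 = np.log(1 - pi) + counts * np.log(lam[0]) - lam[0]
    w = expit(log_p1 - log_p0)
    # M-step: update the mixing weight and the segment means
    pi = w.mean()
    lam = np.array([np.sum((1 - w) * counts) / np.sum(1 - w),
                    np.sum(w * counts) / np.sum(w)])

print(f"segment share ~ {pi:.2f}, segment means ~ {lam.round(2)}")
```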

  10. Characteristics of the large corporation-based, bureaucratic model among oecd countries - an foi model analysis

    Directory of Open Access Journals (Sweden)

    Bartha Zoltán

    2014-03-01

    Full Text Available Deciding on the development path of the economy has been a delicate question in economic policy, not least because of the trade-off effects which immediately worsen certain economic indicators as steps are taken to improve others. The aim of the paper is to present a framework that helps decide on such policy dilemmas. This framework is based on an analysis conducted among OECD countries with the FOI model (focusing on future, outside and inside potentials. Several development models can be deduced by this method, out of which only the large corporation-based, bureaucratic model is discussed in detail. The large corporation-based, bureaucratic model implies a development strategy focused on the creation of domestic safe havens. Based on country studies, it is concluded that well-performing safe havens require the active participation of the state. We find that, in countries adhering to this model, business competitiveness is sustained through intensive public support, and an active role taken by the government in education, research and development, in detecting and exploiting special market niches, and in encouraging sectorial cooperation.

  11. A Model-Free Diagnostic for Single-Peakedness of Item Responses Using Ordered Conditional Means

    Science.gov (United States)

    Polak, Marike; De Rooij, Mark; Heiser, Willem J.

    2012-01-01

    In this article we propose a model-free diagnostic for single-peakedness (unimodality) of item responses. Presuming a unidimensional unfolding scale and a given item ordering, we approximate item response functions of all items based on ordered conditional means (OCM). The proposed OCM methodology is based on Thurstone & Chave's (1929) "criterion…

  12. Spreadsheet based analysis of Mössbauer spectra

    Energy Technology Data Exchange (ETDEWEB)

    Gunnlaugsson, H. P., E-mail: haraldur.p.gunnlaugsson@cern.ch [CERN, PH Div (Switzerland)

    2016-12-15

    Using spreadsheet programs to analyse spectral data opens up new possibilities in data analysis. The spreadsheet program contains all the functionality needed for graphical support, fitting and post-processing of the results. Unconventional restrictions between fitting parameters can be set up freely, and simultaneous analysis, i.e. analysis of many spectra simultaneously in terms of model parameters, is straightforward. The free program package Vinda – used for analysing Mössbauer spectra – is described. The package contains support for reading data, calibration, and common functions of particular importance for Mössbauer spectroscopy (f-factors, second-order Doppler shift, etc.). Methods to create spectral series and support for error analysis are included. Different types of fitting models are included, ranging from simple Lorentzian models to complex distribution models.
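
    The core fitting task such a spreadsheet package performs can be sketched outside a spreadsheet as well. The snippet below fits a simple Lorentzian absorption doublet to a synthetic Mössbauer-style velocity spectrum with SciPy; the line shape, parameter values and derived quantities are illustrative assumptions, not Vinda functionality.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian_doublet(v, base, amp, v1, v2, gamma):
    """Absorption doublet: two Lorentzian dips of equal depth and width."""
    dip = lambda c: amp * (gamma / 2) ** 2 / ((v - c) ** 2 + (gamma / 2) ** 2)
    return base - dip(v1) - dip(v2)

# Synthetic velocity spectrum (mm/s) with counting noise -- illustrative only.
v = np.linspace(-4, 4, 256)
rng = np.random.default_rng(1)
y = lorentzian_doublet(v, 1e5, 8e3, -1.0, 1.3, 0.3) + rng.normal(0, 300, v.size)

p0 = [y.max(), y.max() - y.min(), -1.2, 1.0, 0.4]   # rough starting guess
popt, pcov = curve_fit(lorentzian_doublet, v, y, p0=p0)
isomer_shift = 0.5 * (popt[2] + popt[3])            # centre of the doublet
quad_split = abs(popt[3] - popt[2])                 # quadrupole splitting
print(f"isomer shift ~ {isomer_shift:.2f} mm/s, splitting ~ {quad_split:.2f} mm/s")
```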

  13. Carrier ampholyte-free isoelectric focusing on a paper-based analytical device for the fractionation of proteins.

    Science.gov (United States)

    Xie, Song-Fang; Gao, Han; Niu, Li-Li; Xie, Zhen-Sheng; Fang, Fang; Wu, Zhi-Yong; Yang, Fu-Quan

    2018-01-25

    Isoelectric focusing plays a critical role in the analysis of complex protein samples. Conventionally, isoelectric focusing is implemented with carrier ampholytes in a capillary or an immobilized pH gradient gel. In this study, we demonstrated carrier ampholyte-free isoelectric focusing on a paper-based analytical device. Proof of the concept was visually demonstrated with colored model proteins. Experimental results showed that a pH gradient was well established along the open paper fluidic channel, as confirmed by a pH indicator strip, and that the pH gradient range could be tuned by the catholyte or anolyte. Furthermore, the isoelectric focusing fractions from the paper channel can be directly cut and recovered into solutions for post-analysis with sodium dodecyl sulfate-polyacrylamide gel electrophoresis and matrix-assisted laser desorption/ionization time-of-flight mass spectrometry. This paper-based isoelectric focusing method is fast, cheap, simple and easy to operate, and could potentially be used as a cost-effective protein sample clean-up method for target protein analysis with mass spectrometry. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Modelling the complete operation of a free-piston shock tunnel for a low enthalpy condition

    Science.gov (United States)

    McGilvray, M.; Dann, A. G.; Jacobs, P. A.

    2013-07-01

    Only a limited number of free-stream flow properties can be measured in hypersonic impulse facilities at the nozzle exit. This poses challenges for experimenters when subsequently analysing experimental data obtained from these facilities. Typically in a reflected shock tunnel, a simple analysis that requires a small amount of computational resources is used to calculate quasi-steady gas properties. This simple analysis uses the initial fill conditions and experimental measurements in analytical calculations of each major flow process, with forward coupling and minor corrections to include processes that are not directly modelled. However, this simplistic approach leads to an unknown level of discrepancy from the true flow properties. To explore the accuracy of the simple modelling technique, this paper details the use of transient one- and two-dimensional numerical simulations of a complete facility to obtain more refined free-stream flow properties from a free-piston reflected shock tunnel operating at low-enthalpy conditions. These calculations were verified by comparison to experimental data obtained from the facility. For the condition and facility investigated, the test conditions at the nozzle exit produced with the simple modelling technique agree with the time- and space-averaged results from the complete facility calculations to within the accuracy of the experimental measurements.

  15. A Realistic Process Example for MIMO MPC based on Autoregressive Models

    DEFF Research Database (Denmark)

    Huusom, Jakob Kjøbsted; Jørgensen, John Bagterp

    2014-01-01

    for advanced control design development which may be used by non-experts in control theory. This paper presents and illustrates the use of a simple methodology to design an offset-free MPC based on ARX models. Hence a mechanistic process model is not required. The forced circulation evaporator by Newell and Lee is used to illustrate the offset-free MPC based on ARX models for a nonlinear multivariate process.
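
    A minimal sketch of the ARX identification step that such an offset-free MPC design would start from is shown below: the model coefficients are estimated by ordinary least squares from input-output data, so no mechanistic model is needed. The system, model orders and data are illustrative assumptions, not the Newell and Lee evaporator.

```python
import numpy as np

def fit_arx(y, u, na, nb):
    """Least-squares fit of a SISO ARX model
    y[k] = -a1*y[k-1] - ... - a_na*y[k-na] + b1*u[k-1] + ... + b_nb*u[k-nb]."""
    n0 = max(na, nb)
    rows, targets = [], []
    for k in range(n0, len(y)):
        rows.append(np.concatenate([-y[k - na:k][::-1], u[k - nb:k][::-1]]))
        targets.append(y[k])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta[:na], theta[na:]          # (a coefficients, b coefficients)

# Illustrative data from a known first-order system y[k] = 0.8 y[k-1] + 0.5 u[k-1]
rng = np.random.default_rng(0)
u = rng.normal(size=400)
y = np.zeros(400)
for k in range(1, 400):
    y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1] + 0.01 * rng.normal()

a, b = fit_arx(y, u, na=1, nb=1)
print("identified model: y[k] ~ {:.3f} y[k-1] + {:.3f} u[k-1]".format(-a[0], b[0]))
```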

  16. CFD analysis of a diaphragm free-piston Stirling cryocooler

    Science.gov (United States)

    Caughley, Alan; Sellier, Mathieu; Gschwendtner, Michael; Tucker, Alan

    2016-10-01

    This paper presents a Computational Fluid Dynamics (CFD) analysis of a novel free-piston Stirling cryocooler that uses a pair of metal diaphragms to seal and suspend the displacer. The diaphragms allow the displacer to move without rubbing or moving seals. When coupled to a metal diaphragm pressure wave generator, the system produces a complete Stirling cryocooler with no rubbing parts in the working gas space. Initial modelling of this concept using the Sage modelling tool indicated the potential for a useful cryocooler. A proof-of-concept prototype was constructed and achieved cryogenic temperatures. A second prototype was designed and constructed using the experience gained from the first. The prototype produced 29 W of cooling at 77 K and reached a no-load temperature of 56 K. The diaphragm's large diameter and short stroke produces a significant radial component to the oscillating flow fields inside the cryocooler which were not modelled in the one-dimensional analysis tool Sage that was used to design the prototypes. Compared with standard pistons, the diaphragm geometry increases the gas-to-wall heat transfer due to the higher velocities and smaller hydraulic diameters. A Computational Fluid Dynamics (CFD) model of the cryocooler was constructed to understand the underlying fluid-dynamics and heat transfer mechanisms with the aim of further improving performance. The CFD modelling of the heat transfer in the radial flow fields created by the diaphragms shows the possibility of utilizing the flat geometry for heat transfer, reducing the need for, and the size of, expensive heat exchangers. This paper presents details of a CFD analysis used to model the flow and gas-to-wall heat transfer inside the second prototype cryocooler, including experimental validation of the CFD to produce a robust analysis.

  17. Agent-based modeling and network dynamics

    CERN Document Server

    Namatame, Akira

    2016-01-01

    The book integrates agent-based modeling and network science. It is divided into three parts, namely, foundations, primary dynamics on and of social networks, and applications. The book begins with the network origin of agent-based models, known as cellular automata, and introduces a number of classic models, such as Schelling's segregation model and Axelrod's spatial game. The essence of the foundation part is the network-based agent-based models in which agents follow network-based decision rules. Under the influence of the substantial progress in network science in the late 1990s, these models have been extended from using lattices into using small-world networks, scale-free networks, etc. The book also shows that the modern network science mainly driven by game-theorists and sociophysicists has inspired agent-based social scientists to develop alternative formation algorithms, known as agent-based social networks. The book reviews a number of pioneering and representative models in this family. Upon the gi...
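
    Since the book opens with lattice-based classics such as Schelling's segregation model, a minimal sketch of that model is given below. The grid size, empty fraction, tolerance and relocation rule are illustrative choices, not the book's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, empty_frac, tolerance = 50, 0.1, 0.3   # illustrative parameters

# 0 = empty cell, 1 / 2 = the two agent types
grid = rng.choice([0, 1, 2], size=(N, N), p=[empty_frac, 0.45, 0.45])

def unhappy_cells(g):
    """Agents are unhappy if the share of like neighbours is below tolerance."""
    out = []
    for i in range(N):
        for j in range(N):
            if g[i, j] == 0:
                continue
            nb = g[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2].ravel()
            nb = nb[nb != 0]
            same = np.sum(nb == g[i, j]) - 1        # exclude the agent itself
            if len(nb) > 1 and same / (len(nb) - 1) < tolerance:
                out.append((i, j))
    return out

for step in range(30):                               # relocation dynamics
    unhappy = unhappy_cells(grid)
    if not unhappy:
        break
    empties = rng.permutation(np.argwhere(grid == 0))
    for (i, j), (ei, ej) in zip(unhappy, empties):   # move unhappy agents
        grid[ei, ej], grid[i, j] = grid[i, j], 0
    print(f"step {step}: {len(unhappy)} unhappy agents")
```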

  18. Constraint-based modeling and kinetic analysis of the Smad dependent TGF-beta signaling pathway.

    Directory of Open Access Journals (Sweden)

    Zhike Zi

    Full Text Available BACKGROUND: Investigation of the dynamics and regulation of the TGF-beta signaling pathway is central to the understanding of complex cellular processes such as growth, apoptosis, and differentiation. In this study, we aim at using a systems biology approach to provide a dynamic analysis of this pathway. METHODOLOGY/PRINCIPAL FINDINGS: We proposed a constraint-based modeling method to build a comprehensive mathematical model for the Smad-dependent TGF-beta signaling pathway by fitting the experimental data and incorporating qualitative constraints from the experimental analysis. The performance of the model generated by the constraint-based modeling method is significantly improved compared to the model obtained by only fitting the quantitative data. The model agrees well with the experimental analysis of the TGF-beta pathway, such as the time course of nuclear phosphorylated Smad, the subcellular location of Smad and the signal response of Smad phosphorylation to different doses of TGF-beta. CONCLUSIONS/SIGNIFICANCE: The simulation results indicate that the signal response to TGF-beta is regulated by the balance between clathrin-dependent endocytosis and non-clathrin-mediated endocytosis. This model is useful to build upon as new precise experimental data emerge. The constraint-based modeling method can also be applied to quantitative modeling of other signaling pathways.
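
    The kind of ODE simulation that underlies such a pathway model can be sketched with a deliberately simplified toy system: a TGF-beta-activated receptor driving Smad phosphorylation, integrated with SciPy for several doses. The species, equations and rate constants below are placeholders and are not the constraint-based model of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy pathway: TGF-beta-bound receptor R activates Smad phosphorylation.
# All species, equations and rates are illustrative placeholders.
k_act, k_deg, k_phos, k_dephos, smad_tot = 0.1, 0.05, 0.4, 0.1, 1.0

def rhs(t, y, tgf_dose):
    receptor, p_smad = y
    d_receptor = k_act * tgf_dose - k_deg * receptor
    d_p_smad = k_phos * receptor * (smad_tot - p_smad) - k_dephos * p_smad
    return [d_receptor, d_p_smad]

t_eval = np.linspace(0, 120, 200)          # minutes
for dose in (0.1, 0.5, 2.0):               # different TGF-beta doses
    sol = solve_ivp(rhs, (0, 120), [0.0, 0.0], args=(dose,), t_eval=t_eval)
    print(f"dose {dose}: phospho-Smad at t=120 min ~ {sol.y[1, -1]:.3f}")
```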

  19. Validation of Accelerometer-Based Energy Expenditure Prediction Models in Structured and Simulated Free-Living Settings

    Science.gov (United States)

    Montoye, Alexander H. K.; Conger, Scott A.; Connolly, Christopher P.; Imboden, Mary T.; Nelson, M. Benjamin; Bock, Josh M.; Kaminsky, Leonard A.

    2017-01-01

    This study compared accuracy of energy expenditure (EE) prediction models from accelerometer data collected in structured and simulated free-living settings. Twenty-four adults (mean age 45.8 years, 50% female) performed two sessions of 11 to 21 activities, wearing four ActiGraph GT9X Link activity monitors (right hip, ankle, both wrists) and a…

  20. A neural network-based algorithm for predicting stone-free status after ESWL therapy.

    Science.gov (United States)

    Seckiner, Ilker; Seckiner, Serap; Sen, Haluk; Bayrak, Omer; Dogan, Kazim; Erturhan, Sakip

    2017-01-01

    The prototype artificial neural network (ANN) model was developed using data from patients with renal stones, in order to predict stone-free status and to help in planning treatment with Extracorporeal Shock Wave Lithotripsy (ESWL) for kidney stones. Data were collected from 203 patients on eleven variables, including gender, single or multiple nature of the stone, location of the stone, infundibulopelvic angle, primary or secondary nature of the stone, status of hydronephrosis, stone size after ESWL, age, size, skin-to-stone distance, stone density and creatinine. Regression analysis and the ANN method were applied to predict treatment success using the same series of data. Subsequently, patients were divided into three groups by the neural network software in order to implement the ANN: a training group (n=139), a validation group (n=32), and a test group (n=32). ANN analysis demonstrated that the prediction accuracy of the stone-free rate was 99.25% in the training group, 85.48% in the validation group, and 88.70% in the test group. Successful results were obtained in predicting the stone-free rate with the help of the ANN model, designed using a series of data collected from real patients in whom ESWL was implemented, to help in planning treatment for kidney stones. Copyright® by the International Brazilian Journal of Urology.
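
    A minimal sketch of the described workflow (train/validation/test split and per-split accuracy of a small feed-forward network) is given below using scikit-learn. The data are a synthetic stand-in for the eleven clinical variables, and the split sizes merely mimic the 139/32/32 grouping; the network layout is an illustrative assumption.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for the 11 clinical variables and stone-free outcome.
X = rng.normal(size=(203, 11))
y = (X[:, 0] - 0.8 * X[:, 3] + rng.normal(scale=0.5, size=203) > 0).astype(int)

# Split roughly as in the study: training / validation / test.
X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=64, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_hold, y_hold, test_size=32, random_state=0)

model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                    random_state=0))
model.fit(X_train, y_train)
for name, Xs, ys in [("train", X_train, y_train), ("validation", X_val, y_val),
                     ("test", X_test, y_test)]:
    print(f"{name} accuracy: {model.score(Xs, ys):.2f}")
```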

  1. Hidden symmetry of a free fermion model

    International Nuclear Information System (INIS)

    Bazhanov, V.V.; Stroganov, Yu.G.

    1984-01-01

    A well-known eight-vertex free fermion model on a plane lattice is considered. Solving triangle equations and using the symmetry properties of the model, an elliptic parametrization for Boltzmann vertex weights is constructed. In the parametrization the weights are meromorphic functions of three complex variables

  2. Folding model analysis of the nucleus–nucleus scattering based on ...

    Indian Academy of Sciences (India)

    Folding model analysis of the nucleus–nucleus scattering based on Jacobi coordinates. F PAKDEL, A A RAJABI, L NICKHAH. Pramana – Journal of Physics, Volume 87, Issue 6, December 2016, Article ID 90.

  3. Solution of quadratic matrix equations for free vibration analysis of structures.

    Science.gov (United States)

    Gupta, K. K.

    1973-01-01

    An efficient digital computer procedure and the related numerical algorithm are presented herein for the solution of quadratic matrix equations associated with free vibration analysis of structures. Such a procedure enables accurate and economical analysis of natural frequencies and associated modes of discretized structures. The numerically stable algorithm is based on the Sturm sequence method, which fully exploits the banded form of associated stiffness and mass matrices. The related computer program written in FORTRAN V for the JPL UNIVAC 1108 computer proves to be substantially more accurate and economical than other existing procedures of such analysis. Numerical examples are presented for two structures - a cantilever beam and a semicircular arch.
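
    The underlying computation can be sketched for a toy structure: natural frequencies and mode shapes follow from the generalized eigenproblem K φ = ω² M φ built from the stiffness and mass matrices. The example below uses a lumped spring-mass chain and SciPy's dense symmetric solver rather than the banded Sturm-sequence algorithm of the paper; all values are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

# Lumped-parameter chain: 5 equal masses connected by equal springs,
# fixed at one end -- a stand-in for a discretized structure.
n, k, m = 5, 1.0e4, 2.0                      # illustrative values
K = np.zeros((n, n))
for i in range(n):
    K[i, i] = 2 * k if i < n - 1 else k      # free end carries one spring only
    if i > 0:
        K[i, i - 1] = K[i - 1, i] = -k
M = m * np.eye(n)

# Generalized symmetric eigenproblem K phi = w^2 M phi
w2, phi = eigh(K, M)
freqs_hz = np.sqrt(w2) / (2 * np.pi)
print("natural frequencies (Hz):", freqs_hz.round(2))
```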

  4. A robust mathematical model for a loophole-free Clauser–Horne experiment

    International Nuclear Information System (INIS)

    Bierhorst, Peter

    2015-01-01

    Recent experiments (Giustina et al 2013 Nature 497 227–30; Christensen et al 2013 Phys. Rev. Lett. 111 130406) have reached detection efficiencies sufficient to close the detection loophole, testing the Clauser–Horne version of Bell's inequality. For a similar future experiment to be completely loophole-free, it will be important to have discrete experimental trials with randomized measurement settings for each trial, and the statistical analysis should not overlook the possibility of a local state varying over time with possible dependence on earlier trials (the ‘memory loophole’). In this paper, a mathematical model for such an experiment is presented, and a method for statistical analysis that is robust to memory effects is introduced. Additionally, a new method for calculating exact p-values for martingale-based statistics is described; previously, only non-sharp upper bounds derived from the Azuma–Hoeffding inequality have been available for such statistics. This improvement decreases the required number of experimental trials to demonstrate non-locality. The statistical techniques are applied to the data of Giustina et al (2013 Nature 497 227–30) and Christensen et al (2013 Phys. Rev. Lett. 111 130406) and found to perform well. (paper)
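
    The non-sharp bound that the paper improves upon is easy to state: for a (super)martingale statistic with per-trial increments bounded by c, the Azuma-Hoeffding inequality bounds the p-value of an observed deviation. A minimal sketch follows; the deviation, trial count and bound c are illustrative numbers, not data from the cited experiments.

```python
import math

def azuma_hoeffding_pvalue_bound(deviation, n_trials, c=1.0):
    """Upper bound on P(S_n - S_0 >= deviation) for a supermartingale
    whose per-trial increments are bounded by c in absolute value."""
    if deviation <= 0:
        return 1.0
    return min(1.0, math.exp(-deviation ** 2 / (2 * n_trials * c ** 2)))

# Illustrative use: a Bell-test statistic that exceeded its local-realism
# expectation by 120 over 10,000 trials, with increments bounded by 1.
print(azuma_hoeffding_pvalue_bound(deviation=120, n_trials=10_000))
```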

  5. Modelling CO concentrations under free-flowing and congested traffic conditions in Ireland

    Energy Technology Data Exchange (ETDEWEB)

    Broderick, B; Budd, U; Misstear, B [Dept. of Civil, Structural and Environmental Engineering, Trinity Coll. Dublin (Ireland); Ceburnis, D; Jennings, S G [Dept. of Experimental Physics, National Univ. of Ireland, Galway (Ireland)

    2004-07-01

    The assessment and management of air quality is required under the EU Air Quality Framework Directive and its Daughter Directives (CEC, 1996, 1999, 2000) which specify the limits for certain pollutants, including carbon monoxide (CO). Air quality modelling is used to predict the future impact of road improvements, often as part of an Environmental Impact Assessment. The U.S. National Commission on Air Quality found in 1981 that such models may typically overpredict or underpredict actual concentrations by a factor of two. Even twenty years later the U.K. Department of the Environment Transport and the Regions (UK DETR, 2001) concurred that "If the prediction of an annual mean concentration lies within ±50% of the measurement, a user would not consider that the model has behaved badly." The Daughter Directive (CEC, 2000) concerned with CO allows 50% uncertainty in modelling of the eight-hour average concentration. An assessment of CALINE4 was performed for two contrasting sites: a free-flowing motorway and a periodically-congested roundabout. Air quality was continuously monitored over a one-year period at both sites. The data collected was compared with model predictions based on local and regional meteorological data, site geometry and traffic volumes. The modelled and monitored results were compared through both graphical and statistical analysis (Broderick B.M. et al., 2003). (orig.)

  6. Analysis of human blood plasma cell-free DNA fragment size distribution using EvaGreen chemistry based droplet digital PCR assays.

    Science.gov (United States)

    Fernando, M Rohan; Jiang, Chao; Krzyzanowski, Gary D; Ryan, Wayne L

    2018-04-12

    Plasma cell-free DNA (cfDNA) fragment size distribution provides important information required for diagnostic assay development. We have developed and optimized droplet digital PCR (ddPCR) assays that quantify short and long DNA fragments. These assays were used to analyze plasma cfDNA fragment size distribution in human blood. Assays were designed to amplify 76, 135, 490 and 905 base pair fragments of the human β-actin gene. These assays were used for fragment size analysis of plasma cell-free, exosome and apoptotic body DNA obtained from normal and pregnant donors. The relative percentages for 76, 135, 490 and 905 bp fragments from non-pregnant plasma and exosome DNA were 100%, 39%, 18%, 5.6% and 100%, 40%, 18%, 3.3%, respectively. The relative percentages for pregnant plasma and exosome DNA were 100%, 34%, 14%, 23%, and 100%, 30%, 12%, 18%, respectively. The relative percentages for the non-pregnant plasma pellet (obtained after the 2nd centrifugation step) were 100%, 100%, 87% and 83%, respectively. Non-pregnant plasma cell-free and exosome DNA share a unique fragment distribution pattern which is different from the pregnant donor plasma and exosome DNA fragment distribution, indicating the effect of physiological status on cfDNA fragment size distribution. The fragment distribution pattern for the plasma pellet, which includes apoptotic bodies and nuclear DNA, was greatly different from that of plasma cell-free and exosome DNA. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  7. A Simple Free Surface Tracking Model for Multi-dimensional Two-Fluid Approaches

    International Nuclear Information System (INIS)

    Lee, Seungjun; Yoon, Han Young

    2014-01-01

    The development of two-phase experiments devoted to finding unknown phenomenological relationships has turned conventional flow pattern maps into more sophisticated ones and even extended them to multi-dimensional use. However, for a system including a large void fraction gradient, such as a pool with a free surface, the flow pattern varies spatially across a small number of cells, which sometimes results in unstable and unrealistic predictions of the flow in cells with a large void fraction gradient. The numerical stability problem arising from the free surface is therefore the major concern in analyses of a passive cooling pool convecting decay heat naturally, which has recently become a design issue for increasing the safety level of nuclear reactors. In this research, a new and simple free surface tracking method combined with a simplified topology map is presented. The method modifies the interfacial drag coefficient only for the cells identified as the free surface. The performance is shown by comparing natural convection analyses of a small-scale pool under single- and two-phase conditions. A simple free surface tracking model with a simplified topology map is developed.

  8. Free-Suspension Residual Flexibility Testing of Space Station Pathfinder: Comparison to Fixed-Base Results

    Science.gov (United States)

    Tinker, Michael L.

    1998-01-01

    Application of the free-suspension residual flexibility modal test method to the International Space Station Pathfinder structure is described. The Pathfinder, a large structure of the general size and weight of Space Station module elements, was also tested in a large fixed-base fixture to simulate Shuttle Orbiter payload constraints. After correlation of the Pathfinder finite element model to residual flexibility test data, the model was coupled to a fixture model, and constrained modes and frequencies were compared to fixed-base test modes. The residual flexibility model compared very favorably to results of the fixed-base test. This is the first known direct comparison of free-suspension residual flexibility and fixed-base test results for a large structure. The model correlation approach used by the author for residual flexibility data is presented. Frequency response functions (FRF) for the regions of the structure that interface with the environment (a test fixture or another structure) are shown to be the primary tools for model correlation that distinguish or characterize the residual flexibility approach. A number of critical issues related to use of the structure interface FRF for correlating the model are then identified and discussed, including (1) the requirement of prominent stiffness lines, (2) overcoming problems with measurement noise which makes the antiresonances or minima in the functions difficult to identify, and (3) the use of interface stiffness and lumped mass perturbations to bring the analytical responses into agreement with test data. It is shown that good comparison of analytical-to-experimental FRF is the key to obtaining good agreement of the residual flexibility values.

  9. Finite Element Modelling for Static and Free Vibration Response of Functionally Graded Beam

    Directory of Open Access Journals (Sweden)

    Ateeb Ahmad Khan

    Full Text Available A 1D Finite Element model for static response and free vibration analysis of a functionally graded material (FGM) beam is presented in this work. The FE model is based on the efficient zig-zag theory (ZIGT) with a two-noded beam element having four degrees of freedom at each node. Linear interpolation is used for the axial displacement and cubic Hermite interpolation is used for the deflection. Out of the large variety of FGM systems available, the Al/SiC and Ni/Al2O3 metal/ceramic FGM systems have been chosen. The modified rule of mixtures (MROM) is used to calculate the Young's modulus, and the rule of mixtures (ROM) is used to calculate the density and Poisson's ratio of the FGM beam at any point. A MATLAB code based on the 1D FE zigzag theory for FGM elastic beams is developed. A 2D FE model for the same elastic FGM beam has been developed using ABAQUS software. An 8-node biquadratic plane stress quadrilateral element is used for modeling in ABAQUS. Three different end conditions, namely simply-supported, cantilever and clamped-clamped, are considered. The deflection, normal stress and shear stress have been reported for the various models used. An eigenvalue problem is solved using the subspace iteration method to obtain undamped natural frequencies and the corresponding mode shapes. The results predicted by the 1D FE model have been compared with the 2D FE results and results present in the open literature. This proves the correctness of the model. Finally, mode shapes have also been plotted for various FGM systems.
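
    The graded-property evaluation the beam model relies on can be sketched directly. Below, the ceramic volume fraction follows a power law through the thickness, density and Poisson's ratio use the plain rule of mixtures, and Young's modulus uses a common form of the modified rule of mixtures with a stress-transfer parameter q; the material values and q are nominal placeholders and may differ from those used in the paper.

```python
import numpy as np

# Nominal Al / SiC properties (metal m, ceramic c) -- illustrative values.
E_m, E_c = 70e9, 427e9            # Young's modulus, Pa
rho_m, rho_c = 2702.0, 3100.0     # density, kg/m^3
nu_m, nu_c = 0.30, 0.17           # Poisson's ratio
q = 91.6e9                        # MROM stress-transfer parameter (placeholder)

def graded_properties(z, h, n=2.0):
    """Through-thickness properties of an FGM beam, z in [-h/2, h/2].
    Ceramic volume fraction follows a power law; E uses the modified rule
    of mixtures, density and Poisson's ratio the plain rule of mixtures."""
    Vc = (z / h + 0.5) ** n                    # ceramic volume fraction
    Vm = 1.0 - Vc
    ratio = (q + E_m) / (q + E_c)
    E = (Vc * E_c * ratio + Vm * E_m) / (Vc * ratio + Vm)   # MROM
    rho = Vc * rho_c + Vm * rho_m                           # ROM
    nu = Vc * nu_c + Vm * nu_m                              # ROM
    return E, rho, nu

h = 0.01                                        # beam thickness, m
for z in np.linspace(-h / 2, h / 2, 5):
    E, rho, nu = graded_properties(z, h)
    print(f"z = {z:+.4f} m: E = {E/1e9:6.1f} GPa, rho = {rho:7.1f}, nu = {nu:.3f}")
```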

  10. Retrieval-travel-time model for free-fall-flow-rack automated storage and retrieval system

    Science.gov (United States)

    Metahri, Dhiyaeddine; Hachemi, Khalid

    2018-03-01

    Automated storage and retrieval systems (AS/RSs) are material handling systems that are frequently used in manufacturing and distribution centers. The modelling of the retrieval-travel time of an AS/RS (expected product delivery time) is practically important, because it allows us to evaluate and improve the system throughput. The free-fall-flow-rack AS/RS has emerged as a new technology for drug distribution. This system is a new variation of the flow-rack AS/RS that uses an operator or a single machine for storage operations, and uses a combination of free-fall movement and a transport conveyor for retrieval operations. The main contribution of this paper is to develop an analytical model of the expected retrieval-travel time for the free-fall flow-rack under a dedicated storage assignment policy. The proposed model, which is based on a continuous approach, is compared for accuracy, via simulation, with a discrete model. The obtained results show that the maximum deviation between the continuous model and the simulation is less than 5%, which demonstrates the accuracy of our model in estimating the retrieval time. The analytical model is useful for optimising the dimensions of the rack, assessing the system throughput, and evaluating different storage policies.

  11. Hydrochemical analysis of groundwater using a tree-based model

    Science.gov (United States)

    Litaor, M. Iggy; Brielmann, H.; Reichmann, O.; Shenker, M.

    2010-06-01

    Hydrochemical indices are commonly used to ascertain aquifer characteristics, salinity problems, anthropogenic inputs and resource management, among others. This study was conducted to test the applicability of a binary decision tree model to aquifer evaluation using hydrochemical indices as input. The main advantage of the tree-based model compared to other commonly used statistical procedures such as cluster and factor analyses is the ability to classify groundwater samples with assigned probability and the reduction of a large data set into a few significant variables without creating new factors. We tested the model using data sets collected from headwater springs of the Jordan River, Israel. The model evaluation consisted of several levels of complexity, from simple separation between the calcium-magnesium-bicarbonate water type of karstic aquifers to the more challenging separation of calcium-sodium-bicarbonate water type flowing through perched and regional basaltic aquifers. In all cases, the model assigned measures for goodness of fit in the form of misclassification errors and singled out the most significant variable in the analysis. The model proceeded through a sequence of partitions providing insight into different possible pathways and changing lithology. The model results were extremely useful in constraining the interpretation of geological heterogeneity and constructing a conceptual flow model for a given aquifer. The tree model clearly identified the hydrochemical indices that were excluded from the analysis, thus providing information that can lead to a decrease in the number of routinely analyzed variables and a significant reduction in laboratory cost.
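
    A minimal sketch of the tree-based classification idea is shown below with scikit-learn: a shallow binary decision tree partitions samples described by a few hydrochemical indices and reports class membership probabilities per leaf. The indices, groups and synthetic data are illustrative assumptions, not the Jordan River data set.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Synthetic samples from two hypothetical aquifer groups ("karstic" vs
# "basaltic"), described by a few hydrochemical indices (meq/L ratios).
n = 200
karstic = pd.DataFrame({"Ca_Mg": rng.normal(2.5, 0.4, n),
                        "Na_Cl": rng.normal(0.8, 0.2, n),
                        "HCO3":  rng.normal(4.0, 0.5, n)})
basaltic = pd.DataFrame({"Ca_Mg": rng.normal(1.2, 0.3, n),
                         "Na_Cl": rng.normal(1.5, 0.3, n),
                         "HCO3":  rng.normal(3.0, 0.5, n)})
X = pd.concat([karstic, basaltic], ignore_index=True)
y = ["karstic"] * n + ["basaltic"] * n

# A shallow binary tree keeps the partitions interpretable, and the class
# proportions in each leaf act as the assigned membership probabilities.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))
print("membership probabilities of first sample:", tree.predict_proba(X.iloc[[0]]))
```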

  12. Numerical equilibrium analysis for structured consumer resource models

    NARCIS (Netherlands)

    de Roos, A.M.; Diekmann, O.; Getto, P.; Kirkilionis, M.A.

    2010-01-01

    In this paper, we present methods for a numerical equilibrium and stability analysis for models of a size structured population competing for an unstructured resource. We concentrate on cases where two model parameters are free, and thus existence boundaries for equilibria and stability boundaries

  14. Mathematical Analysis for Non-reciprocal-interaction-based Model of Collective Behavior

    Science.gov (United States)

    Kano, Takeshi; Osuka, Koichi; Kawakatsu, Toshihiro; Ishiguro, Akio

    2017-12-01

    In many natural and social systems, collective behaviors emerge as a consequence of non-reciprocal interaction between their constituents. As a first step towards understanding the core principle that underlies these phenomena, we previously proposed a minimal model of collective behavior based on non-reciprocal interactions by drawing inspiration from friendship formation in human society, and demonstrated via simulations that various non-trivial patterns emerge by changing parameters. In this study, a mathematical analysis of the proposed model wherein the system size is small is performed. Through the analysis, the mechanism of the transition between several patterns is elucidated.

  15. Lab-on-a-chip for label free biological semiconductor analysis of Staphylococcal Enterotoxin B

    NARCIS (Netherlands)

    Yang, Minghui; Sun, Steven; Bruck, Hugh Alan; Kostov, Yordan; Rasooly, Avraham

    2010-01-01

    We describe a new lab-on-a-chip (LOC) which utilizes a biological semiconductor (BSC) transducer for label free analysis of Staphylococcal Enterotoxin B (SEB) (or other biological interactions) directly and electronically. BSCs are new transducers based on electrical percolation through a

  16. Analysis of DGNB-DK criteria for BIM-based Model Checking automatization

    DEFF Research Database (Denmark)

    Gade, Peter Nørkjær; Svidt, Kjeld; Jensen, Rasmus Lund

    This report includes the results of an analysis of the automation potential of the Danish edition of the building sustainability assessment method Deutsche Gesellschaft für Nachhaltiges Bauen (DGNB) for office buildings, version 2014 1.1. The analysis investigates the criteria related to DGNB-DK and whether they would be suited for automation through the technological concept of BIM-based Model Checking (BMC).

  17. Sensitivity Analysis of an Agent-Based Model of Culture's Consequences for Trade

    NARCIS (Netherlands)

    Burgers, S.L.G.E.; Jonker, C.M.; Hofstede, G.J.; Verwaart, D.

    2010-01-01

    This paper describes the analysis of an agent-based model’s sensitivity to changes in parameters that describe the agents’ cultural background, relational parameters, and parameters of the decision functions. As agent-based models may be very sensitive to small changes in parameter values, it is of

  18. Micromechanical analysis of nanocomposites using 3D voxel based material model

    DEFF Research Database (Denmark)

    Mishnaevsky, Leon

    2012-01-01

    A computational study on the effect of nanocomposite structures on the elastic properties is carried out with the use of a 3D voxel based model of materials and the combined Voigt–Reuss method. A hierarchical voxel based model of a material reinforced by an array of exfoliated and intercalated nanoclay platelets surrounded by interphase layers is developed. With this model, the elastic properties of the interphase layer are estimated using inverse analysis. The effects of aspect ratio, intercalation and orientation of nanoparticles on the elastic properties of the nanocomposites are analyzed. For modeling the damage in nanocomposites with intercalated structures, a “four phase” model is suggested, in which the strength of the “intrastack interphase” is lower than that of the “outer” interphase around the nanoplatelets. Analyzing the effect of nanoreinforcement in the matrix on the failure probability

  19. Biocompatibility study on Ni-free Ti-based and Zr-based bulk metallic glasses

    Energy Technology Data Exchange (ETDEWEB)

    Li, T.H. [Institute of Material Science and Engineering, National Central University, Taoyuan, Taiwan (China); Wong, P.C. [Department of Biomedical Engineering, National Yang-Ming University, Taipei, Taiwan (China); Chang, S.F. [Department of Mechanical Engineering, National Central University, Taoyuan, Taiwan (China); Tsai, P.H. [Institute of Material Science and Engineering, National Central University, Taoyuan, Taiwan (China); Jang, J.S.C., E-mail: jscjang@ncu.edu.tw [Institute of Material Science and Engineering, National Central University, Taoyuan, Taiwan (China); Department of Mechanical Engineering, National Central University, Taoyuan, Taiwan (China); Huang, J.C. [Department of Materials and Optoelectronic Science, National Sun Yat-Sen University, Kaohsiung, Taiwan (China)

    2017-06-01

    Safety and reliability are crucial issues for medical instruments and implants. In the past few decades, bulk metallic glasses (BMGs) have drawn attention due to their superior mechanical properties, good corrosion resistance, antibacterial properties and good biocompatibility. However, most Zr-based and Ti-based BMGs contain Ni as an important element, which is prone to causing human allergy problems. In this study, the Ni-free Ti-based and Zr-based BMGs Ti40Zr10Cu36Pd14 and Zr48Cu36Al8Ag8 were selected for a systematic evaluation of their biocompatibility. Several biocompatibility tests, co-cultured with the L929 murine fibroblast cell line, were carried out on these two BMGs, as well as on the comparison samples of Ti6Al4V and pure Cu. The results in terms of cellular adhesion, cytotoxicity, and metallic ion release reveal that the Ti40Zr10Cu36Pd14 BMG and Ti6Al4V exhibit the optimum biocompatibility; cells remained attached to the petri dish with good adhesion and exhibited a spindle shape after the direct contact test. Furthermore, the Ti40Zr10Cu36Pd14 BMG showed a very low Cu ion release level, in agreement with the MTT results. Based on the current findings, it is believed that the Ni-free Ti-based BMG can act as an ideal candidate for medical implants. - Highlights: • Ni-free bulk metallic glass is a promising material for medical implants. • Ni-free Ti-based BMG presents similar cellular adhesion to Ti6Al4V. • Ni-free Ti-based BMG shows less cytotoxicity and metallic ion release than Ti6Al4V.

  20. A free wake vortex lattice model for vertical axis wind turbines: Modeling, verification and validation

    International Nuclear Information System (INIS)

    Meng, Fanzhong; Schwarze, Holger; Vorpahl, Fabian; Strobel, Michael

    2014-01-01

    Since the 1970s, several research activities have been carried out on developing aerodynamic models for Vertical Axis Wind Turbines (VAWTs). In order to design large VAWTs of MW scale, more accurate aerodynamic calculation is required to predict their aero-elastic behaviour. In this paper, a 3D free wake vortex lattice model for VAWTs is developed, verified and validated. Comparisons to the experimental results show that the 3D free wake vortex lattice model developed is capable of making an accurate prediction of the general performance and the instantaneous aerodynamic forces on the blades. The comparison between the momentum method and the vortex lattice model shows that free wake vortex models are needed for detailed load calculations and for calculating highly loaded rotors

  1. Label-free cell-cycle analysis by high-throughput quantitative phase time-stretch imaging flow cytometry

    Science.gov (United States)

    Mok, Aaron T. Y.; Lee, Kelvin C. M.; Wong, Kenneth K. Y.; Tsia, Kevin K.

    2018-02-01

    Biophysical properties of cells could complement and correlate with biochemical markers to characterize a multitude of cellular states. Changes in cell size, dry mass and subcellular morphology, for instance, are relevant to cell-cycle progression, which is prevalently evaluated by DNA-targeted fluorescence measurements. Quantitative-phase microscopy (QPM) is among the effective biophysical phenotyping tools that can quantify cell sizes and sub-cellular dry mass density distribution of single cells at high spatial resolution. However, limited camera frame rate, and thus imaging throughput, makes QPM incompatible with high-throughput flow cytometry - a gold standard in multiparametric cell-based assays. Here we present a high-throughput approach for label-free analysis of the cell cycle based on quantitative-phase time-stretch imaging flow cytometry at a throughput of > 10,000 cells/s. Our time-stretch QPM system enables sub-cellular resolution even at high speed, allowing us to extract a multitude (at least 24) of single-cell biophysical phenotypes (from both amplitude and phase images). Those phenotypes can be combined to track cell-cycle progression based on a t-distributed stochastic neighbor embedding (t-SNE) algorithm. Using multivariate analysis of variance (MANOVA) discriminant analysis, cell-cycle phases can also be predicted label-free with high accuracy, at >90% in G1 and G2 phase, and >80% in S phase. We anticipate that high-throughput label-free cell cycle characterization could open new approaches for large-scale single-cell analysis, bringing new mechanistic insights into complex biological processes including disease pathogenesis.
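
    The embedding step described above can be sketched as follows: standardize a matrix of single-cell biophysical phenotypes and project it to two dimensions with t-SNE. The synthetic 24-feature data imitating three cell-cycle populations are an illustrative stand-in for the measured phenotypes.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in: 3,000 cells x 24 biophysical phenotypes, with three
# loose clusters imitating G1 / S / G2 populations (illustrative only).
n_per_phase, n_features = 1000, 24
centres = rng.normal(scale=2.0, size=(3, n_features))
X = np.vstack([c + rng.normal(size=(n_per_phase, n_features)) for c in centres])

# Standardize the phenotypes, then embed into 2D for visual inspection
# of cell-cycle progression.
X_std = StandardScaler().fit_transform(X)
embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(X_std)
print("embedding shape:", embedding.shape)   # (3000, 2), one point per cell
```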

  2. Analysis of Mechanical Energy Transport on Free-Falling Wedge during Water-Entry Phase

    Directory of Open Access Journals (Sweden)

    Wen-Hua Wang

    2012-01-01

    Full Text Available To better discuss and understand the physical phenomena and body-fluid interaction of the water-entry problem, the mechanical-energy transport (within the wedge, the fluid, and between the two) of a water-entry model for a free-falling wedge is studied here by a numerical method based on a free surface capturing method and a Cartesian cut cell mesh. In this method, the incompressible Euler equations for a variable density fluid are numerically solved by the finite volume method. The artificial compressibility method, a dual-time stepping technique, and Roe's approximate Riemann solver are applied in the numerical scheme. Furthermore, the projection method for the momentum equations and the exact Riemann solution are used to calculate the fluid pressure on the solid boundary. On this basis, during the water-entry phase of the free-falling wedge, the macroscopic energy conversion of the overall body-fluid system and the microscopic energy transformation in the fluid field are analyzed and discussed. Finally, based on test cases, many useful conclusions about mechanical energy transport for the water-entry problem are made and presented.

  3. Model Based User's Access Requirement Analysis of E-Governance Systems

    Science.gov (United States)

    Saha, Shilpi; Jeon, Seung-Hwan; Robles, Rosslin John; Kim, Tai-Hoon; Bandyopadhyay, Samir Kumar

    The strategic and contemporary importance of e-governance has been recognized across the world. In India too, various ministries of the Government of India and State Governments have taken e-governance initiatives to provide e-services to citizens and the businesses they serve. To achieve the mission objectives and make such e-governance initiatives successful, it would be necessary to improve the trust and confidence of the stakeholders. It is assumed that the delivery of government services will share the same public network information that is being used in the community at large. In particular, the Internet will be the principal means by which public access to government and government services will be achieved. To provide the security measures, the main aim is to identify the users' access requirements for the stakeholders and then analyse them according to the models of Nath's approach. Based on this analysis, the Government can also set security standards based on the e-governance models. Thus there will be fewer human errors and less bias. This analysis leads to the security architecture of the specific G2C application.

  4. Asymptotically free SU(5) models

    International Nuclear Information System (INIS)

    Kogan, Ya.I.; Ter-Martirosyan, K.A.; Zhelonkin, A.V.

    1981-01-01

    The behaviour of the Yukawa and Higgs effective charges of the minimal SU(5) unification model is investigated. The model includes ν=3 (or more, up to ν=7) generations of quarks and leptons and, in addition, the 24-plet of heavy fermions. A number of solutions of the renormalization-group equations are found which reproduce the known data about quarks and leptons and, due to a special choice of the coupling constants at the unification point, are asymptotically free in all charges. The requirement of asymptotic freedom leads to some restrictions on the masses of particles and on their mixing angles.

  5. Pseudospectral modeling and dispersion analysis of Rayleigh waves in viscoelastic media

    Science.gov (United States)

    Zhang, K.; Luo, Y.; Xia, J.; Chen, C.

    2011-01-01

    Multichannel Analysis of Surface Waves (MASW) is one of the most widely used techniques in environmental and engineering geophysics to determine shear-wave velocities and dynamic properties, and it is based on the elastic layered system theory. Wave propagation in the Earth, however, has been recognized as viscoelastic, and the propagation of Rayleigh waves presents substantial differences in viscoelastic media as compared with elastic media. Therefore, it is necessary to carry out numerical simulation and dispersion analysis of Rayleigh waves in viscoelastic media to better understand Rayleigh-wave behaviors in the real world. We apply a pseudospectral method to the calculation of the spatial derivatives using a Chebyshev difference operator in the vertical direction and a Fourier difference operator in the horizontal direction based on the velocity-stress elastodynamic equations and relations of linear viscoelastic solids. This approach stretches the spatial discrete grid to have a minimum grid size near the free surface so that high accuracy and resolution are achieved at the free surface, which allows an effective incorporation of the free surface boundary conditions since the Chebyshev method is nonperiodic. We first use an elastic homogeneous half-space model to demonstrate the accuracy of the pseudospectral method by comparison with the analytical solution, and verify the correctness of the numerical modeling results for a viscoelastic half-space by comparing the phase velocities of the Rayleigh wave between the theoretical values and the dispersive image generated by high-resolution linear Radon transform. We then simulate three types of two-layer models to analyze dispersive-energy characteristics for near-surface applications. Results demonstrate that the phase velocity of Rayleigh waves in viscoelastic media is relatively higher than in elastic media and the fundamental mode increases by 10-16% when the frequency is above 10 Hz due to the velocity dispersion of P
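
    The Chebyshev collocation machinery used for the vertical derivatives can be sketched with the standard Chebyshev differentiation matrix (after Trefethen); note how the collocation points cluster near the endpoints, which is what yields the fine effective grid spacing near the free surface. The grid size and test function below are illustrative.

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix D and points x on [-1, 1] (Trefethen)."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)          # Chebyshev-Gauss-Lobatto points
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # diagonal via negative row sums
    return D, x

# Check: differentiate f(x) = exp(x) * sin(5x) on a 24-point grid.
D, x = cheb(24)
f = np.exp(x) * np.sin(5 * x)
df_exact = np.exp(x) * (np.sin(5 * x) + 5 * np.cos(5 * x))
print("max error of spectral derivative:", np.max(np.abs(D @ f - df_exact)))
```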

  6. WebGimm: An integrated web-based platform for cluster analysis, functional analysis, and interactive visualization of results.

    Science.gov (United States)

    Joshi, Vineet K; Freudenberg, Johannes M; Hu, Zhen; Medvedovic, Mario

    2011-01-17

    Cluster analysis methods have been extensively researched, but the adoption of new methods is often hindered by technical barriers in their implementation and use. WebGimm is a free cluster analysis web-service, and an open source general purpose clustering web-server infrastructure designed to facilitate easy deployment of integrated cluster analysis servers based on clustering and functional annotation algorithms implemented in R. Integrated functional analyses and interactive browsing of both, clustering structure and functional annotations provides a complete analytical environment for cluster analysis and interpretation of results. The Java Web Start client-based interface is modeled after the familiar cluster/treeview packages making its use intuitive to a wide array of biomedical researchers. For biomedical researchers, WebGimm provides an avenue to access state of the art clustering procedures. For Bioinformatics methods developers, WebGimm offers a convenient avenue to deploy their newly developed clustering methods. WebGimm server, software and manuals can be freely accessed at http://ClusterAnalysis.org/.

  7. Biomechanical interpretation of a free-breathing lung motion model

    International Nuclear Information System (INIS)

    Zhao Tianyu; White, Benjamin; Lamb, James; Low, Daniel A; Moore, Kevin L; Yang Deshan; Mutic, Sasa; Lu Wei

    2011-01-01

    The purpose of this paper is to develop a biomechanical model for free-breathing motion and compare it to a published heuristic five-dimensional (5D) free-breathing lung motion model. An ab initio biomechanical model was developed to describe the motion of lung tissue during free breathing by analyzing the stress–strain relationship inside lung tissue. The first-order approximation of the biomechanical model was equivalent to a heuristic 5D free-breathing lung motion model proposed by Low et al in 2005 (Int. J. Radiat. Oncol. Biol. Phys. 63 921–9), in which the motion was broken down into a linear expansion component and a hysteresis component. To test the biomechanical model, the parameters that characterize expansion, hysteresis and the angles between the two motion components were reported independently and compared between the two models. The biomechanical model agreed well with the heuristic model, within 5.5% in the left lungs and 1.5% in the right lungs for patients without lung cancer. The biomechanical model predicted that a histogram of angles between the two motion components should have two peaks at 39.8° and 140.2° in the left lungs and 37.1° and 142.9° in the right lungs. The data from the 5D model verified the existence of those peaks at 41.2° and 148.2° in the left lungs and 40.1° and 140° in the right lungs for patients without lung cancer. Similar results were also observed for the patients with lung cancer, but with greater discrepancies. The maximum-likelihood estimate of the hysteresis magnitude was reported to be 2.6 mm for the lung cancer patients. The first-order approximation of the biomechanical model fit the heuristic 5D model very well. The biomechanical model provided new insights into breathing motion, with specific focus on motion trajectory hysteresis.
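
    Fitting the heuristic expansion-plus-hysteresis decomposition is a small least-squares problem: each tissue element's displacement is regressed on tidal volume v and airflow f, x(v, f) ≈ x0 + αv + βf. The sketch below uses synthetic one-dimensional data; the breathing trace, coefficients and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic breathing trace: tidal volume v (L) and airflow f (L/s).
t = np.linspace(0, 20, 400)                       # seconds
v = 0.5 * (1 - np.cos(2 * np.pi * t / 4))         # tidal volume
f = np.gradient(v, t)                             # airflow

# Synthetic displacement of one tissue element along one axis (mm):
# expansion component alpha*v plus hysteresis component beta*f, plus noise.
alpha_true, beta_true = 8.0, 2.5
x = 3.0 + alpha_true * v + beta_true * f + rng.normal(0, 0.2, t.size)

# Least-squares fit of the 5D-model decomposition x ~ x0 + alpha*v + beta*f
A = np.column_stack([np.ones_like(v), v, f])
(x0, alpha, beta), *_ = np.linalg.lstsq(A, x, rcond=None)
print(f"x0 ~ {x0:.2f} mm, alpha ~ {alpha:.2f} mm/L, beta ~ {beta:.2f} mm/(L/s)")

# The angle between the expansion and hysteresis motion components (as in the
# biomechanical analysis) would come from the full 3-D alpha and beta vectors.
```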

  8. Model-based analysis of costs and outcomes of non-invasive prenatal testing for Down's syndrome using cell free fetal DNA in the UK National Health Service.

    Directory of Open Access Journals (Sweden)

    Stephen Morris

    Full Text Available Non-invasive prenatal testing (NIPT) for Down's syndrome (DS) using cell free fetal DNA in maternal blood has the potential to dramatically alter the way prenatal screening and diagnosis is delivered. Before NIPT can be implemented into routine practice, information is required on its costs and benefits. We investigated the costs and outcomes of NIPT for DS as contingent testing and as first-line testing compared with the current DS screening programme in the UK National Health Service. We used a pre-existing model to evaluate the costs and outcomes associated with NIPT compared with the current DS screening programme. The analysis was based on a hypothetical screening population of 10,000 pregnant women. Model inputs were taken from published sources. The main outcome measures were the number of DS cases detected, the number of procedure-related miscarriages and the total cost. At a screening risk cut-off of 1:150, NIPT as contingent testing detects slightly fewer DS cases, has fewer procedure-related miscarriages, and costs the same as current DS screening (around UK£280,000) at a cost of £500 per NIPT. As first-line testing, NIPT detects more DS cases, has fewer procedure-related miscarriages, and is more expensive than current screening at a cost of £50 per NIPT. When NIPT uptake increases, NIPT detects more DS cases with a small increase in procedure-related miscarriages and costs. NIPT is currently available in the private sector in the UK at a price of £400-£900. If the NHS cost were at the lower end of this range, then at a screening risk cut-off of 1:150, NIPT as contingent testing would be cost neutral or cost saving compared with current DS screening. As first-line testing, NIPT is likely to produce more favourable outcomes but at greater cost. Further research is needed to evaluate NIPT under real world conditions.

  9. Free surface modelling with two-fluid model and reduced numerical diffusion of the interface

    International Nuclear Information System (INIS)

    Strubelj, Luka; Tiselj, Izrok

    2008-01-01

    Full text of publication follows: Free surface flows are successfully modelled with one of the existing free surface models, such as: the level set method, the volume of fluid method (with/without surface reconstruction), front tracking, the two-fluid model (two momentum equations) with modified interphase force, and others. The main disadvantage of the two-fluid model used for simulations of free surface flows is numerical diffusion of the interface, which can be significantly reduced using the method presented in this paper. Several techniques for reduction of numerical diffusion of the interface have been implemented in the volume of fluid model and are based on modified numerical schemes for advection of the volume fraction near the interface. The same approach could also be used for the two-fluid method, but according to our experience a more successful reduction of numerical diffusion of the interface can be achieved with the conservative level set method. Within the conservative level set method, the continuity equation for the volume fraction is solved, and the numerical diffusion of the interface is then reduced in such a way that the thickness of the interface is kept constant during the simulation. Reduction of the interface diffusion can also be called interface sharpening. In the present paper the two-fluid model with interface sharpening is validated on the Rayleigh-Taylor instability. Under the assumptions of isothermal and incompressible flow of two immiscible fluids, we simulated a system with the fluid of higher density located above the fluid of smaller density in two dimensions. Due to gravity in the system, the fluid with higher density moves below the fluid with smaller density. The initial condition is not a flat interface between the fluids, but a sine wave with small amplitude, which develops into a mushroom-like structure. The mushroom-like structure in the simulation of Rayleigh-Taylor instability later develops into small droplets as a result of numerical dispersion of the interface (interface sharpening

  10. Corpus Callosum Analysis using MDL-based Sequential Models of Shape and Appearance

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Davies, Rhodri H.; Ryberg, Charlotte

    2004-01-01

    This paper describes a method for automatically analysing and segmenting the corpus callosum from magnetic resonance images of the brain based on the widely used Active Appearance Models (AAMs) by Cootes et al. Extensions of the original method, designed to improve this specific case, are proposed, but all remain applicable to other domain problems. The well-known multi-resolution AAM optimisation is extended to include sequential relaxations on texture resolution, model coverage and model parameter constraints. Fully unsupervised analysis is obtained by exploiting model parameter… Results show that the method produces accurate, robust and rapid segmentations in a cross sectional study of 17 subjects, establishing its feasibility as a fully automated clinical tool for analysis and segmentation.

  11. A new energy transfer model for turbulent free shear flow

    Science.gov (United States)

    Liou, William W.-W.

    1992-01-01

    A new model for the energy transfer mechanism in the large-scale turbulent kinetic energy equation is proposed. An estimate of the characteristic length scale of the energy-containing large structures is obtained from the wavelength associated with the structures predicted by a weakly nonlinear analysis for turbulent free shear flows. With the inclusion of the proposed energy transfer model, the weakly nonlinear wave models for the turbulent large-scale structures are self-contained and are likely to be independent of flow geometry. The model is tested against a plane mixing layer. Reasonably good agreement is achieved. Finally, it is shown by using the Liapunov function method that the balance between the production and the drainage of the kinetic energy of the turbulent large-scale structures is asymptotically stable as their amplitude saturates. The saturation of the wave amplitude provides an alternative indicator for flow self-similarity.

  12. Unified performance analysis of hybrid-ARQ with incremental redundancy over free-space optical channels

    KAUST Repository

    Zedini, Emna

    2014-09-01

    In this paper, we carry out a unified performance analysis of hybrid automatic repeat request (HARQ) with incremental redundancy (IR) from an information theoretic perspective over a point-to-point free-space optical (FSO) system. First, we introduce a novel unified expression for the distribution of a single FSO link modeled by the Gamma fading that accounts for pointing errors subject to both types of detection techniques at the receiver side (i.e. heterodyne detection and intensity modulation with direct detection (IM/DD)). Then, we provide analytical expressions for the outage probability, the average number of transmissions, and the average transmission rate for HARQ with IR, assuming a maximum number of rounds for the HARQ protocol. In our study, the communication rate per HARQ round is constant. Our analysis demonstrates the importance of HARQ in improving the performance and reliability of FSO communication systems. All the given results are verified via computer-based Monte-Carlo simulations.
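
    For HARQ with IR at a fixed rate per round, outage and the average number of rounds follow from the accumulated mutual information. The Monte Carlo sketch below illustrates that rule only; the Gamma fading parameters, rate and round limit are placeholders rather than values from the paper.

```python
import numpy as np

def harq_ir_outage(rate=2.0, max_rounds=4, n_trials=200_000,
                   shape=2.0, mean_snr=5.0, seed=0):
    """Monte Carlo outage probability and average number of rounds for HARQ-IR.

    A packet is decoded at the first round m for which the accumulated mutual
    information sum_{k<=m} log2(1 + SNR_k) reaches the fixed rate per round.
    SNR_k is drawn from a Gamma distribution as a stand-in for the fading model.
    """
    rng = np.random.default_rng(seed)
    snr = rng.gamma(shape, mean_snr / shape, size=(n_trials, max_rounds))
    acc_info = np.cumsum(np.log2(1.0 + snr), axis=1)
    succeeded = acc_info[:, -1] >= rate                  # decoded within max_rounds
    first_success = np.argmax(acc_info >= rate, axis=1) + 1
    rounds = np.where(succeeded, first_success, max_rounds)
    return 1.0 - succeeded.mean(), rounds.mean()

p_out, avg_rounds = harq_ir_outage()
print(f"outage = {p_out:.4f}, average number of transmissions = {avg_rounds:.2f}")
```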

  13. Application of GPCR Structures for Modelling of Free Fatty Acid Receptors.

    Science.gov (United States)

    Tikhonova, Irina G

    2017-01-01

    Five G protein-coupled receptors (GPCRs) have been identified to be activated by free fatty acids (FFA). Among them, FFA1 (GPR40) and FFA4 (GPR120) bind long-chain fatty acids, FFA2 (GPR43) and FFA3 (GPR41) bind short-chain fatty acids and GPR84 binds medium-chain fatty acids. Free fatty acid receptors have now emerged as potential targets for the treatment of diabetes, obesity and immune diseases. The recent progress in crystallography of GPCRs has now enabled the elucidation of the structure of FFA1 and provided reliable templates for homology modelling of other FFA receptors. Analysis of the crystal structure and improved homology models, along with mutagenesis and structure-activity data, highlighted an unusual arginine charge-pairing interaction in FFA1-3 for receptor modulation, distinct structural features for ligand binding to FFA1 and FFA4, and an arginine of the second extracellular loop as a possible anchoring point for FFA at GPR84. Structural data will be helpful for searching novel small-molecule modulators at the FFA receptors.

  14. Generating Converged Accurate Free Energy Surfaces for Chemical Reactions with a Force-Matched Semiempirical Model.

    Science.gov (United States)

    Kroonblawd, Matthew P; Pietrucci, Fabio; Saitta, Antonino Marco; Goldman, Nir

    2018-04-10

    We demonstrate the capability of creating robust density functional tight binding (DFTB) models for chemical reactivity in prebiotic mixtures through force matching to short time scale quantum free energy estimates. Molecular dynamics using density functional theory (DFT) is a highly accurate approach to generate free energy surfaces for chemical reactions, but the extreme computational cost often limits the time scales and range of thermodynamic states that can feasibly be studied. In contrast, DFTB is a semiempirical quantum method that affords up to a thousandfold reduction in cost and can recover DFT-level accuracy. Here, we show that a force-matched DFTB model for aqueous glycine condensation reactions yields free energy surfaces that are consistent with experimental observations of reaction energetics. Convergence analysis reveals that multiple nanoseconds of combined trajectory are needed to reach a steady-fluctuating free energy estimate for glycine condensation. Predictive accuracy of force-matched DFTB is demonstrated by direct comparison to DFT, with the two approaches yielding surfaces with large regions that differ by only a few kcal mol⁻¹.
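
    Force matching itself is generic: model parameters are adjusted so that model forces reproduce reference (here, DFT) forces over a set of configurations. The sketch below uses a toy Morse-type pair potential as the tunable model, not the actual DFTB repulsive-potential parameterization of the paper.

```python
import numpy as np
from scipy.optimize import least_squares

def pair_forces(positions, params):
    """Forces from a toy Morse-type pair potential (the tunable stand-in model)."""
    d_e, a, r0 = params
    forces = np.zeros_like(positions)
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            rij = positions[j] - positions[i]
            r = np.linalg.norm(rij)
            # dV/dr for V(r) = d_e * (1 - exp(-a*(r - r0)))**2
            dvdr = 2.0 * d_e * a * (1.0 - np.exp(-a * (r - r0))) * np.exp(-a * (r - r0))
            forces[i] += dvdr * rij / r       # F_i = -dV/dr * (r_i - r_j) / r
            forces[j] -= dvdr * rij / r
    return forces

def force_match(frames, ref_forces, x0):
    """Fit model parameters so model forces match the reference forces in all frames."""
    def residuals(params):
        return np.concatenate([(pair_forces(pos, params) - f).ravel()
                               for pos, f in zip(frames, ref_forces)])
    return least_squares(residuals, x0).x

# round-trip check: recover known parameters from synthetic "reference" forces
rng = np.random.default_rng(0)
frames = [rng.uniform(0.0, 4.0, size=(5, 3)) for _ in range(20)]
reference = [pair_forces(pos, (1.0, 1.5, 1.2)) for pos in frames]
print(force_match(frames, reference, x0=(0.5, 1.0, 1.0)))
```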

  15. Reactivation in working memory: an attractor network model of free recall.

    Science.gov (United States)

    Lansner, Anders; Marklund, Petter; Sikström, Sverker; Nilsson, Lars-Göran

    2013-01-01

    The dynamic nature of human working memory, the general-purpose system for processing continuous input, while keeping no longer externally available information active in the background, is well captured in immediate free recall of supraspan word-lists. Free recall tasks produce several benchmark memory phenomena, like the U-shaped serial position curve, reflecting enhanced memory for early and late list items. To account for empirical data, including primacy and recency as well as contiguity effects, we propose here a neurobiologically based neural network model that unifies short- and long-term forms of memory and challenges both the standard view of working memory as persistent activity and dual-store accounts of free recall. Rapidly expressed and volatile synaptic plasticity, modulated intrinsic excitability, and spike-frequency adaptation are suggested as key cellular mechanisms underlying working memory encoding, reactivation and recall. Recent findings on the synaptic and molecular mechanisms behind early LTP and on spiking activity during delayed-match-to-sample tasks support this view.

  16. Reactivation in working memory: an attractor network model of free recall.

    Directory of Open Access Journals (Sweden)

    Anders Lansner

    Full Text Available The dynamic nature of human working memory, the general-purpose system for processing continuous input, while keeping no longer externally available information active in the background, is well captured in immediate free recall of supraspan word-lists. Free recall tasks produce several benchmark memory phenomena, like the U-shaped serial position curve, reflecting enhanced memory for early and late list items. To account for empirical data, including primacy and recency as well as contiguity effects, we propose here a neurobiologically based neural network model that unifies short- and long-term forms of memory and challenges both the standard view of working memory as persistent activity and dual-store accounts of free recall. Rapidly expressed and volatile synaptic plasticity, modulated intrinsic excitability, and spike-frequency adaptation are suggested as key cellular mechanisms underlying working memory encoding, reactivation and recall. Recent findings on the synaptic and molecular mechanisms behind early LTP and on spiking activity during delayed-match-to-sample tasks support this view.

  17. Reactivation in Working Memory: An Attractor Network Model of Free Recall

    Science.gov (United States)

    Lansner, Anders; Marklund, Petter; Sikström, Sverker; Nilsson, Lars-Göran

    2013-01-01

    The dynamic nature of human working memory, the general-purpose system for processing continuous input, while keeping no longer externally available information active in the background, is well captured in immediate free recall of supraspan word-lists. Free recall tasks produce several benchmark memory phenomena, like the U-shaped serial position curve, reflecting enhanced memory for early and late list items. To account for empirical data, including primacy and recency as well as contiguity effects, we propose here a neurobiologically based neural network model that unifies short- and long-term forms of memory and challenges both the standard view of working memory as persistent activity and dual-store accounts of free recall. Rapidly expressed and volatile synaptic plasticity, modulated intrinsic excitability, and spike-frequency adaptation are suggested as key cellular mechanisms underlying working memory encoding, reactivation and recall. Recent findings on the synaptic and molecular mechanisms behind early LTP and on spiking activity during delayed-match-to-sample tasks support this view. PMID:24023690

  18. Can neurological evidence refute free will? The failure of a phenomenological analysis of acts in Libet’s denial of «positive free will»

    Directory of Open Access Journals (Sweden)

    Josef Seifert

    2013-07-01

    Full Text Available In the first part of this paper I briefly expound the essential characteristics of free will. The second part deals with the objections of Benjamin Libet, allegedly based on brain-scientific foundations, against «positive free will». The third and main part shows that Libet's anti-positive-free-will position is due to an almost complete failure of a phenomenology of the conscious acts that precede, accompany and follow voluntary movement. The fourth part defends the thesis that Libet's experimental results, far from supporting his philosophical stance, contain strong empirical confirmations of human free will, which, apart from a phenomenology of human acts, becomes further clear upon noticing striking philosophical deficiencies and contradictions in his distinction between ‘positive' and ‘negative' free will. The conclusions summarize the results, according to which positive free will and causality through freedom exist and are confirmed by Libet's and other test results. Free will is the primary and model case of an efficient cause, instead of contradicting or challenging the principles of causality and of sufficient reason.

  19. Free drop impact analysis of shipping cask

    International Nuclear Information System (INIS)

    Pfeiffer, P.A.; Kennedy, J.M.

    1989-01-01

    The WHAMS-2D and WHAMS-3D codes were used to analyze the dynamic response of the RAS/TREAT shielded shipping cask subjected to transient loadings for the purpose of assessing potential damage to the various components that comprise the cask. The paper describes how these codes can be used to provide an intermediate level of detail between full three-dimensional finite element calculations and hand calculations, which is cost effective for design purposes. Three free drops were addressed: (1) a thirty foot axial drop on either end; (2) a thirty foot oblique angle drop with the cask having several different orientations from the vertical with impact on the cask corner; and (3) a thirty foot side drop with simultaneous impact on the lifting trunnion and the bottom end. Results are presented for two models of the side and oblique angle drops; one model includes only the mass of the lapped sleeves of depleted uranium (DU) while the other includes the mass and stiffness of the DU. The results of the end drop analyses are given for models with and without imperfections in the cask. Comparisons of the analyses to hand calculations and simplified analyses are given. (orig.)

  20. Model-Free Adaptive Control Algorithm with Data Dropout Compensation

    Directory of Open Access Journals (Sweden)

    Xuhui Bu

    2012-01-01

    Full Text Available The convergence of the model-free adaptive control (MFAC) algorithm can be guaranteed when the system is subject to measurement data dropout, but the convergence speed of the system output decreases as the dropout rate increases. This paper proposes an MFAC algorithm with data compensation. The missing data is first estimated using the dynamical linearization method, and then the estimated value is introduced to update the control input. The convergence analysis of the proposed MFAC algorithm is given, and the effectiveness is also validated by simulations. It is shown that the proposed algorithm can compensate for the effect of the data dropout, and better output performance can be obtained.
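
    A compact-form dynamic-linearization MFAC loop with the dropout compensation described (a lost output sample is replaced by its dynamic-linearization estimate) can be sketched as follows; the controller gains, dropout rate and the toy plant are illustrative, not the paper's.

```python
import numpy as np

def mfac_with_dropout(y_des, plant, rho=0.6, lam=1.0, eta=0.5, mu=1.0,
                      phi0=1.0, dropout=0.2, seed=0):
    """Compact-form dynamic-linearization MFAC with data-dropout compensation.

    When the output sample is lost (probability `dropout`), it is replaced by the
    dynamic-linearization estimate y_hat = y_prev + phi * du, as in the abstract.
    """
    rng = np.random.default_rng(seed)
    n = len(y_des)
    u, y, phi = np.zeros(n), np.zeros(n), np.full(n, phi0)
    for k in range(1, n - 1):
        du_prev = u[k - 1] - u[k - 2] if k >= 2 else 0.0
        y_meas = plant(u[k - 1])
        if rng.random() < dropout:                        # measurement dropped
            y[k] = y[k - 1] + phi[k - 1] * du_prev        # compensation step
        else:
            y[k] = y_meas
        # projection estimate of the pseudo-partial derivative (PPD)
        dy = y[k] - y[k - 1]
        phi[k] = phi[k - 1] + eta * du_prev / (mu + du_prev**2) * (dy - phi[k - 1] * du_prev)
        if abs(phi[k]) < 1e-5:
            phi[k] = phi0                                 # simplified reset condition
        # MFAC control law
        u[k] = u[k - 1] + rho * phi[k] / (lam + phi[k]**2) * (y_des[k + 1] - y[k])
    return y, u

# toy first-order nonlinear plant driven by the previous control input
state = {"x": 0.0}
def plant(u_prev):
    state["x"] = 0.6 * state["x"] + 0.5 * np.tanh(u_prev) + 0.3 * u_prev
    return state["x"]

y, u = mfac_with_dropout(np.ones(200), plant)
print("final tracking error:", abs(1.0 - y[-2]))
```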

  1. Multi-granularity immunization strategy based on SIRS model in scale-free network

    Science.gov (United States)

    Nian, Fuzhong; Wang, Ke

    2015-04-01

    In this paper, a new immunization strategy was established to prevent epidemic spreading, based on the principles of "multi-granularity" and a "pre-warning mechanism", which sends different pre-warning signals according to the risk rank of a susceptible node being infected. The pre-warning indicates that the susceptible node has a higher risk of becoming infected; multi-granularity means that the susceptible node is linked with multiple infected nodes. In our model, the effects of the different multi-granularity immunization situations are compared, and different spreading rates are adopted to describe the epidemic behavior of nodes. In addition, the threshold value for an epidemic outbreak is investigated, which makes the result more convincing. The theoretical analysis and the simulations indicate that the proposed immunization strategy is effective, economic and feasible.

  2. Numerical and experimental investigation of the 3D free surface flow in a model Pelton turbine

    International Nuclear Information System (INIS)

    Fiereder, R; Riemann, S; Schilling, R

    2010-01-01

    This investigation focuses on the numerical and experimental analysis of the 3D free surface flow in a Pelton turbine. In particular, two typical flow conditions occurring in a full scale Pelton turbine - a configuration with a straight inlet as well as a configuration with a 90 degree elbow upstream of the nozzle - are considered. Thereby, the effect of secondary flow due to the 90 degree bending of the upstream pipe on the characteristics of the jet is explored. The hybrid flow field consists of pure liquid flow within the conduit and free surface two component flow of the liquid jet emerging out of the nozzle into air. The numerical results are validated against experimental investigations performed in the laboratory of the Institute of Fluid Mechanics (FLM). For the numerical simulation of the flow the in-house unstructured fully parallelized finite volume solver solver3D is utilized. An advanced interface capturing model based on the classic Volume of Fluid method is applied. In order to ensure sharp interface resolution an additional convection term is added to the transport equation of the volume fraction. A collocated variable arrangement is used and the set of non-linear equations, containing fluid conservation equations and model equations for turbulence and volume fraction, are solved in a segregated manner. For pressure-velocity coupling the SIMPLE and PISO algorithms are implemented. Detailed analysis of the observed flow patterns in the jet and of the jet geometry are presented.
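
    The "additional convection term added to the transport equation of the volume fraction" is commonly written as an interface-compression flux. The abstract does not print the equation, so the following generic form is only a sketch of the idea:

    \[
    \frac{\partial \alpha}{\partial t}
      + \nabla\cdot(\alpha\,\mathbf{U})
      + \nabla\cdot\big(\alpha(1-\alpha)\,\mathbf{U}_r\big) = 0,
    \]

    where α is the volume fraction, U the mixture velocity and U_r an artificial compression velocity aligned with the interface normal; the factor α(1−α) restricts its action to the interface region.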

  3. Numerical and experimental investigation of the 3D free surface flow in a model Pelton turbine

    Energy Technology Data Exchange (ETDEWEB)

    Fiereder, R; Riemann, S; Schilling, R, E-mail: fiereder@lhm.mw.tum.d [Department of Fluid Mechanics, Technische Universitaet Muenchen Bolzmannstrasse 15, Garching, 85748 (Germany)

    2010-08-15

    This investigation focuses on the numerical and experimental analysis of the 3D free surface flow in a Pelton turbine. In particular, two typical flow conditions occurring in a full scale Pelton turbine - a configuration with a straight inlet as well as a configuration with a 90 degree elbow upstream of the nozzle - are considered. Thereby, the effect of secondary flow due to the 90 degree bending of the upstream pipe on the characteristics of the jet is explored. The hybrid flow field consists of pure liquid flow within the conduit and free surface two component flow of the liquid jet emerging out of the nozzle into air. The numerical results are validated against experimental investigations performed in the laboratory of the Institute of Fluid Mechanics (FLM). For the numerical simulation of the flow the in-house unstructured fully parallelized finite volume solver solver3D is utilized. An advanced interface capturing model based on the classic Volume of Fluid method is applied. In order to ensure sharp interface resolution an additional convection term is added to the transport equation of the volume fraction. A collocated variable arrangement is used and the set of non-linear equations, containing fluid conservation equations and model equations for turbulence and volume fraction, are solved in a segregated manner. For pressure-velocity coupling the SIMPLE and PISO algorithms are implemented. Detailed analysis of the observed flow patterns in the jet and of the jet geometry are presented.

  4. Numerical and experimental investigation of the 3D free surface flow in a model Pelton turbine

    Science.gov (United States)

    Fiereder, R.; Riemann, S.; Schilling, R.

    2010-08-01

    This investigation focuses on the numerical and experimental analysis of the 3D free surface flow in a Pelton turbine. In particular, two typical flow conditions occurring in a full scale Pelton turbine - a configuration with a straight inlet as well as a configuration with a 90 degree elbow upstream of the nozzle - are considered. Thereby, the effect of secondary flow due to the 90 degree bending of the upstream pipe on the characteristics of the jet is explored. The hybrid flow field consists of pure liquid flow within the conduit and free surface two component flow of the liquid jet emerging out of the nozzle into air. The numerical results are validated against experimental investigations performed in the laboratory of the Institute of Fluid Mechanics (FLM). For the numerical simulation of the flow the in-house unstructured fully parallelized finite volume solver solver3D is utilized. An advanced interface capturing model based on the classic Volume of Fluid method is applied. In order to ensure sharp interface resolution an additional convection term is added to the transport equation of the volume fraction. A collocated variable arrangement is used and the set of non-linear equations, containing fluid conservation equations and model equations for turbulence and volume fraction, are solved in a segregated manner. For pressure-velocity coupling the SIMPLE and PISO algorithms are implemented. Detailed analysis of the observed flow patterns in the jet and of the jet geometry are presented.

  5. GPU-based acceleration of computations in nonlinear finite element deformation analysis.

    Science.gov (United States)

    Mafi, Ramin; Sirouspour, Shahin

    2014-03-01

    The physics of deformation for biological soft-tissue is best described by nonlinear continuum mechanics-based models, which then can be discretized by the FEM for a numerical solution. However, the computational complexity of such models has limited their use in applications requiring real-time or fast response. In this work, we propose a graphic processing unit-based implementation of the FEM using implicit time integration for dynamic nonlinear deformation analysis. This is the most general formulation of the deformation analysis. It is valid for large deformations and strains and can account for material nonlinearities. The data-parallel nature and the intense arithmetic computations of nonlinear FEM equations make them particularly suitable for implementation on a parallel computing platform such as a graphics processing unit. In this work, we present and compare two different designs based on the matrix-free and conventional preconditioned conjugate gradients algorithms for solving the FEM equations arising in deformation analysis. The speedup achieved with the proposed parallel implementations of the algorithms will be instrumental in the development of advanced surgical simulators and medical image registration methods involving soft-tissue deformation. Copyright © 2013 John Wiley & Sons, Ltd.
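
    The matrix-free variant mentioned above never assembles the global stiffness matrix; the conjugate gradient solver only needs a routine that applies it to a vector, which is what maps well onto a GPU. A minimal CPU sketch of that idea (not the authors' GPU implementation):

```python
import numpy as np

def conjugate_gradient(apply_A, b, tol=1e-8, max_iter=1000):
    """Matrix-free CG: `apply_A(v)` returns A @ v without ever forming A."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# toy operator: a 1D Laplacian applied on the fly, standing in for an
# element-by-element FEM matrix-vector product
def apply_laplacian(v):
    out = 2.0 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

b = np.ones(100)
x = conjugate_gradient(apply_laplacian, b)
print("residual norm:", np.linalg.norm(apply_laplacian(x) - b))
```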

  6. Indicator Based and Indicator - Free Electrochemical DNA Biosensors

    National Research Council Canada - National Science Library

    Kerman, Kagan

    2001-01-01

    The utility and advantages of an indicator-free and MB-based sequence-specific DNA hybridization biosensor based on guanine and adenine oxidation signals and MB reduction signals have been demonstrated...

  7. An efficient binomial model-based measure for sequence comparison and its application.

    Science.gov (United States)

    Liu, Xiaoqing; Dai, Qi; Li, Lihua; He, Zerong

    2011-04-01

    Sequence comparison is one of the major tasks in bioinformatics, which could serve as evidence of structural and functional conservation, as well as of evolutionary relations. There are several similarity/dissimilarity measures for sequence comparison, but challenges remain. This paper presented a binomial model-based measure to analyze biological sequences. With the help of a random indicator, the occurrence of a word at any position of a sequence can be regarded as a random Bernoulli variable, and the distribution of the sum of word occurrences is well known to be binomial. By using a recursive formula, we computed the binomial probability of the word count and proposed a binomial model-based measure based on the relative entropy. The proposed measure was tested by extensive experiments including classification of HEV genotypes and phylogenetic analysis, and further compared with alignment-based and alignment-free measures. The results demonstrate that the proposed measure based on the binomial model is more efficient.
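
    The measure in the paper combines a recursive binomial word-count probability with a relative-entropy comparison; the exact recursion is not given in the abstract, so the sketch below only illustrates the ingredients on toy DNA strings. The equal-frequency background model and the aggregation into a normalized profile are assumptions of this illustration.

```python
import numpy as np
from itertools import product
from scipy.stats import binom

def kmer_binomial_profile(seq, k=2, alphabet="ACGT"):
    """For each k-mer, the binomial tail probability P(X >= observed count) under
    an equal-frequency null model, normalized into a comparable profile."""
    words = ["".join(w) for w in product(alphabet, repeat=k)]
    n_positions = len(seq) - k + 1
    p_word = (1.0 / len(alphabet)) ** k
    profile = np.array([binom.sf(sum(seq[j:j + k] == w for j in range(n_positions)) - 1,
                                 n_positions, p_word) for w in words])
    return profile / profile.sum()

def relative_entropy(p, q, eps=1e-12):
    p, q = np.clip(p, eps, None), np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)))

s1, s2 = "ACGTACGTTGCAACGT", "TTTTGGGGACACACAC"
print("divergence:", relative_entropy(kmer_binomial_profile(s1), kmer_binomial_profile(s2)))
```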

  8. Polynomial fuzzy model-based control systems stability analysis and control synthesis using membership function dependent techniques

    CERN Document Server

    Lam, Hak-Keung

    2016-01-01

    This book presents recent research on the stability analysis of polynomial-fuzzy-model-based control systems where the concepts of partially/imperfectly matched premises and membership-function-dependent analysis are considered. The membership-function-dependent analysis offers a new research direction for fuzzy-model-based control systems by taking into account the characteristics and information of the membership functions in the stability analysis. The book presents, on a research level, the most recent and advanced research results, promotes the research of polynomial-fuzzy-model-based control systems, and provides theoretical support and points out a research direction for postgraduate students and fellow researchers. Each chapter provides numerical examples to verify the analysis results, demonstrate the effectiveness of the proposed polynomial fuzzy control schemes, and explain the design procedure. The book is comprehensively written, enclosing detailed derivation steps and mathematical derivations also for read...

  9. A CAD based geometry model for simulation and analysis of particle detector data

    Energy Technology Data Exchange (ETDEWEB)

    Milde, Michael; Losekamm, Martin; Poeschl, Thomas; Greenwald, Daniel; Paul, Stephan [Technische Universitaet Muenchen, 85748 Garching (Germany)

    2016-07-01

    The development of a new particle detector requires a good understanding of its setup. A detailed model of the detector's geometry is not only needed during construction, but also for simulation and data analysis. To arrive at a consistent description of the detector geometry, a representation is needed that can be easily implemented in the different software tools used during data analysis. We developed a geometry representation based on CAD files that can be easily used within the Geant4 simulation framework and analysis tools based on the ROOT framework. This talk presents the structure of the geometry model and shows its implementation using the example of the event reconstruction developed for the Multi-purpose Active-target Particle Telescope (MAPT). The detector consists of scintillating plastic fibers and can be used as a tracking detector and calorimeter with omnidirectional acceptance. To optimize the angular resolution and the energy reconstruction of measured particles, a detailed detector model is needed at all stages of the reconstruction.

  10. From free energy to expected energy: Improving energy-based value function approximation in reinforcement learning.

    Science.gov (United States)

    Elfwing, Stefan; Uchibe, Eiji; Doya, Kenji

    2016-12-01

    Free-energy based reinforcement learning (FERL) was proposed for learning in high-dimensional state and action spaces. However, the FERL method only really works well with binary, or close to binary, state input, where the number of active states is smaller than the number of non-active states. In the FERL method, the value function is approximated by the negative free energy of a restricted Boltzmann machine (RBM). In our earlier study, we demonstrated that the performance and the robustness of the FERL method can be improved by scaling the free energy by a constant that is related to the size of the network. In this study, we propose that RBM function approximation can be further improved by approximating the value function by the negative expected energy (EERL), instead of the negative free energy, as well as being able to handle continuous state input. We validate our proposed method by demonstrating that EERL: (1) outperforms FERL, as well as standard neural network and linear function approximation, for three versions of a gridworld task with high-dimensional image state input; (2) achieves new state-of-the-art results in stochastic SZ-Tetris in both model-free and model-based learning settings; and (3) significantly outperforms FERL and standard neural network function approximation for a robot navigation task with raw and noisy RGB images as state input and a large number of actions. Copyright © 2016 The Author(s). Published by Elsevier Ltd.. All rights reserved.
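
    For reference, the two value estimates named in the abstract can be written down directly for a binary-hidden RBM. The sketch below computes both for an arbitrary visible vector; the network size and random weights are placeholders, and no learning is shown.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rbm_value_estimates(v, W, b_vis, b_hid):
    """Negative free energy (FERL) and negative expected energy (EERL) of a
    binary-hidden RBM for a visible (state-action) vector v."""
    pre = b_hid + W @ v                                   # hidden pre-activations
    free_energy = -b_vis @ v - np.sum(np.logaddexp(0.0, pre))
    p_hid = sigmoid(pre)                                  # E[h_j | v]
    expected_energy = -b_vis @ v - p_hid @ pre            # E_h[ E(v, h) | v ]
    return -free_energy, -expected_energy                 # the two value estimates

rng = np.random.default_rng(0)
n_vis, n_hid = 8, 5
W = rng.normal(scale=0.1, size=(n_hid, n_vis))
b_vis, b_hid = rng.normal(scale=0.1, size=n_vis), rng.normal(scale=0.1, size=n_hid)
v = rng.random(n_vis)                                     # continuous state input
print(rbm_value_estimates(v, W, b_vis, b_hid))
```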

  11. THE REACH AND RICHNESS OF WIKINOMICS: IS THE FREE WEB-BASED ENCYCLOPEDIA WIKIPEDIA ONLY FOR RICH COUNTRIES?

    DEFF Research Database (Denmark)

    Rask, Morten

    2007-01-01

    In this article, a model of the patterns of correlation in Wikipedia, reach and richness, lays the foundation for studying whether the free Web-based encyclopedia Wikipedia is only for developed countries. Based on data from 12 different Wikipedia language editions, the author finds that the central structural effect is on the level of human development in the current country. In other words, Wikipedia is in general more for rich countries than for less developed countries. It is suggested that policy makers make investments in increasing the general level of literacy, education, and standard...

  12. Application of thermodynamics-based rate-dependent constitutive models of concrete in the seismic analysis of concrete dams

    Directory of Open Access Journals (Sweden)

    Leng Fei

    2008-09-01

    Full Text Available This paper discusses the seismic analysis of concrete dams with consideration of material nonlinearity. Based on a consistent rate-dependent model and two thermodynamics-based models, two thermodynamics-based rate-dependent constitutive models were developed with consideration of the influence of the strain rate. They can describe the dynamic behavior of concrete and be applied to nonlinear seismic analysis of concrete dams taking into account the rate sensitivity of concrete. With the two models, a nonlinear analysis of the seismic response of the Koyna Gravity Dam and the Dagangshan Arch Dam was conducted. The results were compared with those of a linear elastic model and two rate-independent thermodynamics-based constitutive models, and the influences of constitutive models and strain rate on the seismic response of concrete dams were discussed. It can be concluded from the analysis that, during seismic response, the tensile stress is the control stress in the design and seismic safety evaluation of concrete dams. In different models, the plastic strain and plastic strain rate of concrete dams show a similar distribution. When the influence of the strain rate is considered, the maximum plastic strain and plastic strain rate decrease.

  13. Agent-based financial dynamics model from stochastic interacting epidemic system and complexity analysis

    International Nuclear Information System (INIS)

    Lu, Yunfan; Wang, Jun; Niu, Hongli

    2015-01-01

    An agent-based financial stock price model is developed and investigated by a stochastic interacting epidemic system, which is one of the statistical physics systems and has been used to model the spread of an epidemic or a forest fire. Numerical and statistical analysis are performed on the simulated returns of the proposed financial model. Complexity properties of the financial time series are explored by calculating the correlation dimension and using the modified multiscale entropy method. In order to verify the rationality of the financial model, the real stock market indexes, Shanghai Composite Index and Shenzhen Component Index, are studied in comparison with the simulation data of the proposed model for the different infectiousness parameters. The empirical research reveals that this financial model can reproduce some important features of the real stock markets. - Highlights: • A new agent-based financial price model is developed by stochastic interacting epidemic system. • The structure of the proposed model allows to simulate the financial dynamics. • Correlation dimension and MMSE are applied to complexity analysis of financial time series. • Empirical results show the rationality of the proposed financial model
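
    Of the two complexity measures named, the correlation dimension has a compact Grassberger-Procaccia style estimator. The sketch below applies it to a synthetic heavy-tailed return series; the embedding dimension, radii and the series itself are placeholders, not the paper's settings.

```python
import numpy as np

def correlation_dimension(series, emb_dim=5, n_radii=10):
    """Grassberger-Procaccia estimate: slope of log C(r) versus log r, where C(r)
    is the fraction of delay-embedded vector pairs closer than r."""
    x = np.asarray(series, dtype=float)
    n = len(x) - emb_dim + 1
    vectors = np.stack([x[i:i + n] for i in range(emb_dim)], axis=1)  # delay embedding
    dists = np.linalg.norm(vectors[:, None, :] - vectors[None, :, :], axis=-1)
    dists = dists[np.triu_indices(n, k=1)]
    radii = np.logspace(np.log10(np.percentile(dists, 5)),
                        np.log10(np.percentile(dists, 50)), n_radii)
    log_c = np.log([np.mean(dists < r) for r in radii])
    slope, _ = np.polyfit(np.log(radii), log_c, 1)
    return slope

rng = np.random.default_rng(0)
returns = 0.01 * rng.standard_t(df=4, size=1000)          # stand-in for simulated returns
print("correlation dimension ~", round(correlation_dimension(returns), 2))
```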

  14. Agent-based financial dynamics model from stochastic interacting epidemic system and complexity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Yunfan, E-mail: yunfanlu@yeah.net; Wang, Jun; Niu, Hongli

    2015-06-12

    An agent-based financial stock price model is developed and investigated by a stochastic interacting epidemic system, which is one of the statistical physics systems and has been used to model the spread of an epidemic or a forest fire. Numerical and statistical analysis are performed on the simulated returns of the proposed financial model. Complexity properties of the financial time series are explored by calculating the correlation dimension and using the modified multiscale entropy method. In order to verify the rationality of the financial model, the real stock market indexes, Shanghai Composite Index and Shenzhen Component Index, are studied in comparison with the simulation data of the proposed model for the different infectiousness parameters. The empirical research reveals that this financial model can reproduce some important features of the real stock markets. - Highlights: • A new agent-based financial price model is developed by stochastic interacting epidemic system. • The structure of the proposed model allows to simulate the financial dynamics. • Correlation dimension and MMSE are applied to complexity analysis of financial time series. • Empirical results show the rationality of the proposed financial model.

  15. Free Vibration Analysis of Composite Plates via Refined Theories Accounting for Uncertainties

    Directory of Open Access Journals (Sweden)

    G. Giunta

    2011-01-01

    Full Text Available The free vibration analysis of composite thin and relatively thick plates accounting for uncertainty is addressed in this work. Classical and refined two-dimensional models derived via Carrera's Unified Formulation (CUF) are considered. Material properties and geometrical parameters are supposed to be random. The fundamental frequency related to the first bending eigenmode is stochastically described in terms of the mean value, the standard deviation, the related confidence intervals and the cumulative distribution function. The Monte Carlo Method is employed to account for uncertainty. Cross-ply, simply supported, orthotropic plates are accounted for. Symmetric and anti-symmetric lay-ups are investigated. Displacement-based and mixed two-dimensional theories are adopted. Equivalent single-layer and layer-wise approaches are considered. A Navier type solution is assumed. The conducted analyses have shown that, for the considered cases, the fundamental natural frequency is not very sensitive to the uncertainty in the material parameters, while uncertainty in the geometrical parameters should be accounted for. In the case of thin plates, all the considered models yield statistically matching results. For relatively thick plates, the difference in the mean value of the natural frequency is due to the different number of degrees of freedom in the model.
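
    As a stand-in for the CUF models of the paper, the classical thin, simply supported, isotropic plate formula is enough to show the Monte Carlo workflow described: sample the uncertain inputs, propagate them through the frequency expression, and report mean, standard deviation and confidence intervals. All distributions and dimensions below are illustrative assumptions.

```python
import numpy as np

def fundamental_frequency(E, nu, rho, h, a, b):
    """First bending frequency (rad/s) of a thin, simply supported isotropic plate."""
    D = E * h**3 / (12.0 * (1.0 - nu**2))                  # flexural rigidity
    return np.pi**2 * ((1.0 / a)**2 + (1.0 / b)**2) * np.sqrt(D / (rho * h))

rng = np.random.default_rng(0)
n = 20_000
E = rng.normal(70e9, 0.05 * 70e9, n)                       # 5 % material uncertainty
h = rng.normal(2e-3, 0.05 * 2e-3, n)                       # 5 % thickness uncertainty
f = fundamental_frequency(E, 0.3, 2700.0, h, 0.4, 0.3) / (2.0 * np.pi)

print(f"mean = {f.mean():.1f} Hz, std = {f.std():.1f} Hz, "
      f"95% interval = [{np.percentile(f, 2.5):.1f}, {np.percentile(f, 97.5):.1f}] Hz")
```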

  16. Reynolds-Averaged Navier-Stokes Modeling of Turbulent Free Shear Layers

    Science.gov (United States)

    Schilling, Oleg

    2017-11-01

    Turbulent mixing of gases in free shear layers is simulated using a weighted essentially nonoscillatory implementation of ɛ- and L-based Reynolds-averaged Navier-Stokes models. Specifically, the air/air shear layer with velocity ratio 0.6 studied experimentally by Bell and Mehta (1990) is modeled. The detailed predictions of turbulent kinetic energy dissipation rate and lengthscale models are compared to one another, and to the experimental data. The role of analytical, self-similar solutions for model calibration and physical insights is also discussed. It is shown that turbulent lengthscale-based models are unable to predict both the growth parameter (spreading rate) and turbulent kinetic energy normalized by the square of the velocity difference of the streams. The terms in the K, ɛ, and L equation budgets are compared between the models, and it is shown that the production and destruction mechanisms are substantially different in the ɛ and L equations. Application of the turbulence models to the Brown and Roshko (1974) experiments with streams having various velocity and density ratios is also briefly discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  17. A baseline-free procedure for transformation models under interval censorship.

    Science.gov (United States)

    Gu, Ming Gao; Sun, Liuquan; Zuo, Guoxin

    2005-12-01

    An important property of the Cox regression model is that the estimation of regression parameters using the partial likelihood procedure does not depend on its baseline survival function. We call such a procedure baseline-free. Using marginal likelihood, we show that a baseline-free procedure can be derived for a class of general transformation models under the interval censoring framework. The baseline-free procedure results in a simplified and stable computation algorithm for some complicated and important semiparametric models, such as frailty models and heteroscedastic hazard/rank regression models, where the estimation procedures so far available involve estimation of the infinite-dimensional baseline function. A detailed computational algorithm using Markov Chain Monte Carlo stochastic approximation is presented. The proposed procedure is demonstrated through extensive simulation studies, showing the validity of asymptotic consistency and normality. We also illustrate the procedure with a real data set from a study of breast cancer. A heuristic argument showing that the score function is a mean zero martingale is provided.

  18. A multidimensional subdiffusion model: An arbitrage-free market

    International Nuclear Information System (INIS)

    Li Guo-Hua; Zhang Hong; Luo Mao-Kang

    2012-01-01

    To capture the subdiffusive characteristics of financial markets, the subordinated process, directed by the inverse α-stable subordinator S_α(t) for 0 < α < 1, has been employed as the model of asset prices. In this article, we introduce a multidimensional subdiffusion model that has a bond and K correlated stocks. The stock price process is a multidimensional subdiffusion process directed by the inverse α-stable subordinator. This model describes the period of stagnation for each stock and the behavior of the dependency between multiple stocks. Moreover, we derive the multidimensional fractional backward Kolmogorov equation for the subordinated process using the Laplace transform technique. Finally, using a martingale approach, we prove that the multidimensional subdiffusion model is arbitrage-free, and we also give an arbitrage-free pricing rule for contingent claims associated with the martingale measure. (interdisciplinary physics and related areas of science and technology)

  19. g-PRIME: A Free, Windows Based Data Acquisition and Event Analysis Software Package for Physiology in Classrooms and Research Labs.

    Science.gov (United States)

    Lott, Gus K; Johnson, Bruce R; Bonow, Robert H; Land, Bruce R; Hoy, Ronald R

    2009-01-01

    We present g-PRIME, a software-based tool for physiology data acquisition, analysis, and stimulus generation in education and research. This software was developed in an undergraduate neurophysiology course and strongly influenced by instructor and student feedback. g-PRIME is a free, stand-alone, Windows application coded and "compiled" in Matlab (does not require a Matlab license). g-PRIME supports many data acquisition interfaces, from the PC sound card to expensive high throughput calibrated equipment. The program is designed as a software oscilloscope with standard trigger modes, multi-channel visualization controls, and data logging features. Extensive analysis options allow real time and offline filtering of signals, multi-parameter threshold-and-window based event detection, and two-dimensional display of a variety of parameters including event time, energy density, maximum FFT frequency component, max/min amplitudes, and inter-event rate and intervals. The software also correlates detected events with another simultaneously acquired source (event triggered average) in real time or offline. g-PRIME supports parameter histogram production and a variety of elegant publication quality graphics outputs. A major goal of this software is to merge powerful engineering acquisition and analysis tools with a biological approach to studies of nervous system function.

  20. Rural Teachers' Views: What Are Gender-Based Challenges Facing Free Primary Education in Lesotho?

    Science.gov (United States)

    Morojele, Pholoho

    2013-01-01

    This paper gives prominence to rural teachers' accounts of gender-based challenges facing Free Primary Education in Lesotho. It draws on feminist interpretations of social constructionism to discuss factors within the Basotho communities that affect gender equality in the schools. The inductive analysis offered makes use of the data generated from…

  1. A Free Energy Model for Hysteresis Ferroelectric Materials

    National Research Council Canada - National Science Library

    Smith, Ralph C; Ounaies, Zoubeida; Seelecke, Stefan; Smith, Joshua

    2003-01-01

    This paper provides a theory for quantifying the hysteresis and constitutive nonlinearities inherent to piezoceramic compounds through a combination of free energy analysis and stochastic homogenization techniques...

  2. Multicomponent quantitative spectroscopic analysis without reference substances based on ICA modelling.

    Science.gov (United States)

    Monakhova, Yulia B; Mushtakova, Svetlana P

    2017-05-01

    A fast and reliable spectroscopic method for multicomponent quantitative analysis of targeted compounds with overlapping signals in complex mixtures has been established. The innovative analytical approach is based on the preliminary chemometric extraction of qualitative and quantitative information from UV-vis and IR spectral profiles of a calibration system using independent component analysis (ICA). Using this quantitative model and the ICA resolution results of spectral profiling of "unknown" model mixtures, the absolute analyte concentrations in multicomponent mixtures and authentic samples were then calculated without reference solutions. Good recoveries, generally between 95% and 105%, were obtained. The method can be applied to any spectroscopic data that obey the Beer-Lambert-Bouguer law. The proposed method was tested on the analysis of vitamins and caffeine in energy drinks and of aromatic hydrocarbons in motor fuel, with about 10% error. The results demonstrated that the proposed method is a promising tool for rapid simultaneous multicomponent analysis in the case of spectral overlap and the absence/inaccessibility of reference materials.
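
    A minimal sketch of the general workflow, assuming synthetic Gaussian-band spectra and scikit-learn's FastICA: resolve calibration mixtures into component scores, regress the known concentrations onto those scores, and then quantify an "unknown" mixture without reference solutions. The calibration design and noise level are placeholders, not the paper's.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
wavelengths = np.linspace(200, 400, 300)
band = lambda c, w: np.exp(-0.5 * ((wavelengths - c) / w) ** 2)
pure = np.vstack([band(260, 12), band(300, 18)])            # two synthetic analyte spectra

# calibration mixtures with known concentrations
conc_cal = rng.uniform(0.1, 1.0, size=(15, 2))
spectra_cal = conc_cal @ pure + rng.normal(0, 0.003, (15, wavelengths.size))

ica = FastICA(n_components=2, random_state=0)
scores_cal = ica.fit_transform(spectra_cal)                  # ICA resolution of the mixtures
model = LinearRegression().fit(scores_cal, conc_cal)         # map scores -> concentrations

# quantify an "unknown" mixture without reference solutions
conc_true = np.array([[0.35, 0.70]])
spectrum_unknown = conc_true @ pure + rng.normal(0, 0.003, (1, wavelengths.size))
conc_pred = model.predict(ica.transform(spectrum_unknown))
print("true:", conc_true.ravel(), "predicted:", conc_pred.ravel())
```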

  3. Model-free adaptive speed control on travelling wave ultrasonic motor

    Science.gov (United States)

    Di, Sisi; Li, Huafeng

    2018-01-01

    This paper introduces a new data-driven control (DDC) method for the speed control of an ultrasonic motor (USM). The model-free adaptive control (MFAC) strategy is presented in terms of its principles, algorithms, and parameter selection. To verify the efficiency of the proposed method, a speed-frequency-time model, which contains all the measurable nonlinearities and uncertainties based on experimental data, was established for simulation to mimic the USM operation system. Furthermore, the model was identified using the particle swarm optimization (PSO) method. Then, the control of the simulated system using MFAC was evaluated under different expectations in terms of overshoot, rise time and steady-state error. Finally, the MFAC results were compared with those of proportional-integral-derivative (PID) control to demonstrate its advantages in controlling a general random system.

  4. Probabilistic analysis of free ways for maintenance

    International Nuclear Information System (INIS)

    Torres V, A.; Rivero O, J.J.

    2004-01-01

    Safety during maintenance interventions is treated in a limited manner and, in general, independently of maintenance management systems. This variable is affected by multiple technical or human factors, often subjective and difficult to quantify, which limits the design of preventive plans. However, some factors constitute common points: the isolation configurations during the free ways (bank drafts in the oil industry) and the human errors associated with their violation. This characteristic allowed the analysis of such situations to be developed through the fault tree methodology, which links equipment faults and human errors cohesively. The methodology has been automated within the MOSEG Win Ver 1.0 code and can cover anything from the analysis of a particular free way situation to that of a complete maintenance strategy from the point of view of the safety of the maintenance personnel. (Author)

  5. Giant magnetoresistance of hysteresis-free Cu/Co-based multilayers

    International Nuclear Information System (INIS)

    Huetten, A.; Hempel, T.; Schepper, W.; Kleineberg, U.; Reiss, G.

    2001-01-01

    It has been demonstrated that hysteresis-free multilayers based on {Cu/Co} and {Cu/Ni₅₇Co₄₃} can be experimentally realized, obtaining room temperature GMR effect amplitudes from 6.5% up to 20%. A critical window of layer thickness for hysteresis-free GMR curves can be achieved for both systems, ranging from 0.38 to 0.45 nm and 0.59 to 0.7 nm, respectively. The corresponding sensitivities range from 0.075 up to 0.114%/Oe, but are still below that of normal {Cu/Co} multilayers. Hysteresis-free multilayers based on these systems are stable up to 180 deg. C upon isochronal annealing. It is shown that hysteresis-free {Cu/Co or Ni₅₇Co₄₃} multilayers are neither a solution to achieve good temperature stability nor a higher sensitivity compared with normal ones, and hence are not candidates for application.

  6. A Continuous-Exchange Cell-Free Protein Synthesis System Based on Extracts from Cultured Insect Cells

    Science.gov (United States)

    Stech, Marlitt; Quast, Robert B.; Sachse, Rita; Schulze, Corina; Wüstenhagen, Doreen A.; Kubick, Stefan

    2014-01-01

    In this study, we present a novel technique for the synthesis of complex prokaryotic and eukaryotic proteins by using a continuous-exchange cell-free (CECF) protein synthesis system based on extracts from cultured insect cells. Our approach consists of two basic elements: First, protein synthesis is performed in insect cell lysates which harbor endogenous microsomal vesicles, enabling a translocation of de novo synthesized target proteins into the lumen of the insect vesicles or, in the case of membrane proteins, their embedding into a natural membrane scaffold. Second, cell-free reactions are performed in a two chamber dialysis device for 48 h. The combination of the eukaryotic cell-free translation system based on insect cell extracts and the CECF translation system results in significantly prolonged reaction life times and increased protein yields compared to conventional batch reactions. In this context, we demonstrate the synthesis of various representative model proteins, among them cytosolic proteins, pharmacological relevant membrane proteins and glycosylated proteins in an endotoxin-free environment. Furthermore, the cell-free system used in this study is well-suited for the synthesis of biologically active tissue-type-plasminogen activator, a complex eukaryotic protein harboring multiple disulfide bonds. PMID:24804975

  7. Gradient-based model calibration with proxy-model assistance

    Science.gov (United States)

    Burrows, Wesley; Doherty, John

    2016-02-01

    Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where a lack of integrity in finite-difference derivative calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model, and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
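
    The division of labour described above (proxy model fills the Jacobian, original model tests each upgrade) can be sketched as a modified Gauss-Newton loop. Both models below are toy stand-ins; the real proxy in the paper is a set of analytical functions fitted to the groundwater model, and the PEST implementation is not reproduced here.

```python
import numpy as np

def gauss_newton_with_proxy(expensive_model, proxy_model, obs, theta0,
                            n_iter=10, step=1e-4):
    """Gauss-Newton calibration in which finite-difference Jacobians come from the
    cheap proxy, while each candidate upgrade is tested with the expensive model."""
    theta = np.asarray(theta0, dtype=float)
    best_phi = np.sum((expensive_model(theta) - obs) ** 2)
    for _ in range(n_iter):
        r = proxy_model(theta) - obs
        J = np.empty((len(obs), len(theta)))
        for j in range(len(theta)):                            # proxy runs only
            t = theta.copy(); t[j] += step
            J[:, j] = (proxy_model(t) - obs - r) / step
        delta = np.linalg.lstsq(J, -r, rcond=None)[0]
        candidate = theta + delta
        phi = np.sum((expensive_model(candidate) - obs) ** 2)  # expensive run
        if phi < best_phi:
            theta, best_phi = candidate, phi
    return theta

# toy stand-ins: the proxy is a slightly biased copy of the "expensive" model
x = np.linspace(0.0, 1.0, 30)
expensive_model = lambda p: p[0] * np.exp(-p[1] * x)
proxy_model = lambda p: (p[0] + 0.02) * np.exp(-0.98 * p[1] * x)
obs = expensive_model([2.0, 3.0]) + np.random.default_rng(0).normal(0, 0.01, x.size)
print(gauss_newton_with_proxy(expensive_model, proxy_model, obs, theta0=[1.0, 1.0]))
```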

  8. Lattice Boltzmann model for thermal free surface flows with liquid-solid phase transition

    International Nuclear Information System (INIS)

    Attar, Elham; Koerner, Carolin

    2011-01-01

    Purpose: The main objective of this work is to develop an algorithm to use the Lattice Boltzmann method for solving free surface thermal flow problems with solid/liquid phase changes. Approach: A multi-distribution function model is applied to simulate hydrodynamic flow and the coupled thermal diffusion-convection problem. Findings: The free surface problem, i.e. the reconstruction of the missing distribution functions at the interface, can be solved by applying a physically transparent momentum and heat flux based methodology. The developed method is subsequently applied to some test cases in order to assess its computational potentials. Practical implications: Many industrial processes involve problems where non-isothermal motion and simultaneous solidification of fluids with free surface is important. Examples are all casting processes and especially foaming processes, which are characterized by a huge and strongly changing surface. Value: A reconstruction algorithm to treat a thermal hydrodynamic problem with free surfaces is presented which is physically transparent and easy to implement.

  9. Study on reliability analysis based on multilevel flow models and fault tree method

    International Nuclear Information System (INIS)

    Chen Qiang; Yang Ming

    2014-01-01

    Multilevel flow models (MFM) and the fault tree method describe system knowledge in different forms, so the two methods express an equivalent logic of system reliability under the same boundary conditions and assumptions. Based on this, and combined with the characteristics of MFM, a method for mapping MFM to fault trees was put forward, thus providing a way to establish fault trees rapidly and realizing qualitative reliability analysis based on MFM. Taking the safety injection system of a pressurized water reactor nuclear power plant as an example, its MFM was established and its reliability was analyzed qualitatively. The analysis result shows that the logic of mapping MFM to fault trees is correct. The MFM is easily understood, created and modified. Compared with traditional fault tree analysis, the workload is greatly reduced and modeling time is saved. (authors)

  10. Safety assessment technology on the free drop impact and puncture analysis of the cask for radioactive material transport

    International Nuclear Information System (INIS)

    Lee, Dew Hey; Lee, Young Shin; Ryu, Chung Hyun; Kim, Hyun Su; Lee, Ho Chul; Hong, Song Jin; Choi, Young Jin; Lee, Jae Hyung; Na, Jae Yun

    2001-03-01

    In this study, the regulatory conditions and analysis conditions are analyzed for the free drop and puncture impact analysis to develop the safety assessment technology. The impact analysis is performed with the finite element method, which is one of the many analysis methods for the shipping cask. LS-DYNA3D and ABAQUS are suitable for the free drop and the puncture impact analysis of the shipping cask. For the analysis model, the KSC-4, a shipping cask for transporting spent nuclear fuel, is investigated. The results of LS-DYNA3D and ABAQUS correspond completely, and the integrity of the shipping cask is verified. Through this study, reliable safety assessment technology is supplied to the staff, and efficient and reliable regulatory tasks can be performed using the standard safety assessment technology.

  11. Safety assessment technology on the free drop impact and puncture analysis of the cask for radioactive material transport

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Dew Hey [Korea Institute of Nuclear Safety, Taejon (Korea, Republic of); Lee, Young Shin; Ryu, Chung Hyun; Kim, Hyun Su; Lee, Ho Chul; Hong, Song Jin; Choi, Young Jin; Lee, Jae Hyung; Na, Jae Yun [Chungnam National Univ., Taejon (Korea, Republic of)

    2001-03-15

    In this study, the regulatory conditions and analysis conditions are analyzed for the free drop and puncture impact analysis to develop the safety assessment technology. The impact analysis is performed with the finite element method, which is one of the many analysis methods for the shipping cask. LS-DYNA3D and ABAQUS are suitable for the free drop and the puncture impact analysis of the shipping cask. For the analysis model, the KSC-4, a shipping cask for transporting spent nuclear fuel, is investigated. The results of LS-DYNA3D and ABAQUS correspond completely, and the integrity of the shipping cask is verified. Through this study, reliable safety assessment technology is supplied to the staff, and efficient and reliable regulatory tasks can be performed using the standard safety assessment technology.

  12. Simultaneous determination of reference free-stream temperature and convective heat transfer coefficients

    International Nuclear Information System (INIS)

    Jeong, Gi Ho; Song, Ki Bum; Kim, Kui Soon

    2001-01-01

    This paper deals with the development of a new method that can obtain the heat transfer coefficient and the reference free-stream temperature simultaneously. The method is based on transient heat transfer experiments using two narrow-band TLCs. The method is validated through error analysis in terms of the random uncertainties in the measured temperatures. It is shown how the uncertainties in the heat transfer coefficient and the free-stream temperature can be reduced. The general method described in this paper is applicable to many heat transfer models with unknown free-stream temperature.
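
    A minimal sketch of the underlying idea, assuming the classical one-dimensional semi-infinite-solid step response: each liquid-crystal colour-change event gives one equation, and two events suffice to solve simultaneously for the heat transfer coefficient and the unknown free-stream temperature. Material properties and event data below are placeholders, generated from assumed "true" values so the inverse solution can be checked.

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.special import erfc

# wall properties (placeholder values, roughly those of an acrylic test plate)
k, alpha = 0.19, 1.1e-7          # conductivity [W/(m K)], diffusivity [m^2/s]
T_init = 20.0                    # initial wall temperature [deg C]

def surface_temp(h, T_inf, t):
    """Surface temperature of a semi-infinite solid after a step change in
    free-stream temperature, with convection coefficient h."""
    beta = h * np.sqrt(alpha * t) / k
    return T_init + (T_inf - T_init) * (1.0 - np.exp(beta**2) * erfc(beta))

# two synthetic TLC colour-change events generated from known values (round-trip check)
h_true, T_inf_true = 80.0, 60.0
event_times = (10.0, 60.0)
events = [(surface_temp(h_true, T_inf_true, t), t) for t in event_times]

def residuals(unknowns):
    h, T_inf = unknowns
    return [surface_temp(h, T_inf, t) - T_event for T_event, t in events]

h_sol, T_inf_sol = fsolve(residuals, x0=[100.0, 70.0])
print(f"h = {h_sol:.1f} W/(m^2 K), free-stream temperature = {T_inf_sol:.1f} deg C")
```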

  13. Team-Based Models for End-of-Life Care: An Evidence-Based Analysis

    Science.gov (United States)

    2014-01-01

    Background End of life refers to the period when people are living with advanced illness that will not stabilize and from which they will not recover and will eventually die. It is not limited to the period immediately before death. Multiple services are required to support people and their families during this time period. The model of care used to deliver these services can affect the quality of the care they receive. Objectives Our objective was to determine whether an optimal team-based model of care exists for service delivery at end of life. In systematically reviewing such models, we considered their core components: team membership, services offered, modes of patient contact, and setting. Data Sources A literature search was performed on October 14, 2013, using Ovid MEDLINE, Ovid MEDLINE In-Process and Other Non-Indexed Citations, Ovid Embase, EBSCO Cumulative Index to Nursing & Allied Health Literature (CINAHL), and EBM Reviews, for studies published from January 1, 2000, to October 14, 2013. Review Methods Abstracts were reviewed by a single reviewer and full-text articles were obtained that met the inclusion criteria. Studies were included if they evaluated a team model of care compared with usual care in an end-of-life adult population. A team was defined as having at least 2 health care disciplines represented. Studies were limited to English publications. A meta-analysis was completed to obtain pooled effect estimates where data permitted. The GRADE quality of the evidence was evaluated. Results Our literature search located 10 randomized controlled trials which, among them, evaluated the following 6 team-based models of care: hospital, direct contact; home, direct contact; home, indirect contact; comprehensive, indirect contact; comprehensive, direct contact; and comprehensive, direct, and early contact. Direct contact is when team members see the patient; indirect contact is when they advise another health care practitioner (e.g., a family doctor) who sees the patient.

  14. Predicting future conflict between team-members with parameter-free models of social networks

    Science.gov (United States)

    Rovira-Asenjo, Núria; Gumí, Tània; Sales-Pardo, Marta; Guimerà, Roger

    2013-06-01

    Despite the well-documented benefits of working in teams, teamwork also results in communication, coordination and management costs, and may lead to personal conflict between team members. In a context where teams play an increasingly important role, it is of major importance to understand conflict and to develop diagnostic tools to avert it. Here, we investigate empirically whether it is possible to quantitatively predict future conflict in small teams using parameter-free models of social network structure. We analyze data of conflict appearance and resolution between 86 team members in 16 small teams, all working in a real project for nine consecutive months. We find that group-based models of complex networks successfully anticipate conflict in small teams whereas micro-based models of structural balance, which have been traditionally used to model conflict, do not.

  15. Pressure Control in Distillation Columns: A Model-Based Analysis

    DEFF Research Database (Denmark)

    Mauricio Iglesias, Miguel; Bisgaard, Thomas; Kristensen, Henrik

    2014-01-01

    A comprehensive assessment of pressure control in distillation columns is presented, including the consequences for composition control and energy consumption. Two types of representative control structures are modeled, analyzed, and benchmarked. A detailed simulation test, based on a real industrial distillation column, is used to assess the differences between the two control structures and to demonstrate the benefits of pressure control in the operation. In the second part of the article, a thermodynamic analysis is carried out to establish the influence of pressure on relative volatility...

  16. Improved Model for Predicting the Free Energy Contribution of Dinucleotide Bulges to RNA Duplex Stability.

    Science.gov (United States)

    Tomcho, Jeremy C; Tillman, Magdalena R; Znosko, Brent M

    2015-09-01

    Predicting the secondary structure of RNA is an intermediate step in predicting RNA three-dimensional structure. Commonly, determining RNA secondary structure from sequence uses free energy minimization and nearest neighbor parameters. Current algorithms utilize a sequence-independent model to predict free energy contributions of dinucleotide bulges. To determine if a sequence-dependent model would be more accurate, short RNA duplexes containing dinucleotide bulges with different sequences and nearest neighbor combinations were optically melted to derive thermodynamic parameters. These data suggested energy contributions of dinucleotide bulges were sequence-dependent, and a sequence-dependent model was derived. This model assigns free energy penalties based on the identity of nucleotides in the bulge (3.06 kcal/mol for two purines, 2.93 kcal/mol for two pyrimidines, 2.71 kcal/mol for 5'-purine-pyrimidine-3', and 2.41 kcal/mol for 5'-pyrimidine-purine-3'). The predictive model also includes a 0.45 kcal/mol penalty for an A-U pair adjacent to the bulge and a -0.28 kcal/mol bonus for a G-U pair adjacent to the bulge. The new sequence-dependent model results in predicted values within, on average, 0.17 kcal/mol of experimental values, a significant improvement over the sequence-independent model. This model and new experimental values can be incorporated into algorithms that predict RNA stability and secondary structure from sequence.
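
    The parameters quoted above are enough for a small worked example. The helper below is hypothetical (not from the paper) and assumes the A-U/G-U adjustments apply once per adjacent closing pair:

```python
def dinucleotide_bulge_penalty(bulge, pair_5prime, pair_3prime):
    """Free energy penalty (kcal/mol) of a dinucleotide bulge, using the
    sequence-dependent parameters quoted in the abstract.

    bulge: the two bulged nucleotides read 5'->3', e.g. "AG"
    pair_5prime / pair_3prime: the closing base pairs flanking the bulge, e.g. "AU"
    """
    purine = set("AG")
    first, second = (n in purine for n in bulge)
    if first and second:
        dg = 3.06                      # two purines
    elif not first and not second:
        dg = 2.93                      # two pyrimidines
    elif first:
        dg = 2.71                      # 5'-purine-pyrimidine-3'
    else:
        dg = 2.41                      # 5'-pyrimidine-purine-3'
    for pair in (pair_5prime, pair_3prime):   # assumption: applied per adjacent pair
        if set(pair) == {"A", "U"}:
            dg += 0.45                 # A-U pair adjacent to the bulge
        elif set(pair) == {"G", "U"}:
            dg -= 0.28                 # G-U pair adjacent to the bulge
    return dg

# a 5'-UA-3' bulge flanked by an A-U and a G-C closing pair: 2.41 + 0.45 = 2.86 kcal/mol
print(dinucleotide_bulge_penalty("UA", "AU", "GC"))
```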

  17. Free material stiffness design of laminated composite structures using commercial finite element analysis codes

    DEFF Research Database (Denmark)

    Henrichsen, Søren Randrup; Lindgaard, Esben; Lund, Erik

    2015-01-01

    In this work optimum stiffness design of laminated composite structures is performed using the commercially available programs ANSYS and MATLAB. Within these programs a Free Material Optimization algorithm is implemented based on an optimality condition and a heuristic update scheme. The heuristic...... update scheme is needed because commercially available finite element analysis software is used. When using a commercial finite element analysis code it is not straightforward to implement a computationally efficient gradient-based optimization algorithm. Examples considered in this work are a clamped......, where full access to the finite element analysis core is granted. This comparison displays the possibility of using commercially available programs for stiffness design of laminated composite structures....

  18. Identification of DNA-binding protein target sequences by physical effective energy functions: free energy analysis of lambda repressor-DNA complexes.

    Directory of Open Access Journals (Sweden)

    Caselle Michele

    2007-09-01

    Full Text Available Abstract Background Specific binding of proteins to DNA is one of the most common ways gene expression is controlled. Although general rules for the DNA-protein recognition can be derived, the ambiguous and complex nature of this mechanism precludes a simple recognition code, therefore the prediction of DNA target sequences is not straightforward. DNA-protein interactions can be studied using computational methods which can complement the current experimental methods and offer some advantages. In the present work we use physical effective potentials to evaluate the DNA-protein binding affinities for the λ repressor-DNA complex for which structural and thermodynamic experimental data are available. Results The binding free energy of two molecules can be expressed as the sum of an intermolecular energy (evaluated using a molecular mechanics forcefield), a solvation free energy term and an entropic term. Different solvation models are used including distance dependent dielectric constants, solvent accessible surface tension models and the Generalized Born model. The effect of conformational sampling by Molecular Dynamics simulations on the computed binding energy is assessed; results show that this effect is in general negative and the reproducibility of the experimental values decreases with the increase of simulation time considered. The free energy of binding for non-specific complexes, estimated using the best energetic model, agrees with earlier theoretical suggestions. As a result of these analyses, we propose a protocol for the prediction of DNA-binding target sequences. The possibility of searching regulatory elements within the bacteriophage λ genome using this protocol is explored. Our analysis shows good prediction capabilities, even in the absence of any thermodynamic data and information on the naturally recognized sequence. Conclusion This study supports the conclusion that physics-based methods can offer a completely complementary

  19. Analysis technology in the thick plate free drop impact, heat and thermal stress of the cask for radioactive material transport

    International Nuclear Information System (INIS)

    Lee, Dew Hey; Lee, Young Shin; Ryu, Chung Hyun; Kim, Hyun Su; Choi, Kyung Joo; Choi, Young Jin; Lee, Jae Hyung; Na, Jae Yun; Kim, Seong Jong

    2002-03-01

    In this study, the regulatory conditions and analysis conditions for thick plate free drop, heat and thermal stress analyses are reviewed to develop safety assessment technology. The analysis is performed with the finite element method, which is one of the many analysis methods for shipping casks. ANSYS, LS-DYNA3D and ABAQUS are suitable for the thick plate free drop, heat and thermal stress analyses of the shipping cask. For the analysis model, the KSC-4 shipping cask for transporting spent nuclear fuel is investigated. The results of LS-DYNA3D and ABAQUS for the thick plate free drop, and the results of ANSYS, LS-DYNA3D and ABAQUS for the heat and thermal stress analyses, are in complete agreement, and the integrity of the shipping cask is verified. Through this study, reliable safety assessment technology is provided to the staff, and efficient and reliable regulatory tasks can be performed using the standard safety assessment technology.

  20. Analysis technology in the thick plate free drop impact, heat and thermal stress of the cask for radioactive material transport

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Dew Hey [Korea Institute of Nuclear and Safety, Taejon (Korea, Republic of); Lee, Young Shin; Ryu, Chung Hyun; Kim, Hyun Su; Choi, Kyung Joo; Choi, Young Jin; Lee, Jae Hyung; Na, Jae Yun; Kim, Seong Jong [Chungnam National Univ., Taejon (Korea, Republic of)

    2002-03-15

    In this study, the regulatory conditions and analysis conditions for thick plate free drop, heat and thermal stress analyses are reviewed to develop safety assessment technology. The analysis is performed with the finite element method, which is one of the many analysis methods for shipping casks. ANSYS, LS-DYNA3D and ABAQUS are suitable for the thick plate free drop, heat and thermal stress analyses of the shipping cask. For the analysis model, the KSC-4 shipping cask for transporting spent nuclear fuel is investigated. The results of LS-DYNA3D and ABAQUS for the thick plate free drop, and the results of ANSYS, LS-DYNA3D and ABAQUS for the heat and thermal stress analyses, are in complete agreement, and the integrity of the shipping cask is verified. Through this study, reliable safety assessment technology is provided to the staff, and efficient and reliable regulatory tasks can be performed using the standard safety assessment technology.

  1. Pricing Decision under Dual-Channel Structure considering Fairness and Free-Riding Behavior

    Directory of Open Access Journals (Sweden)

    Yongmei Liu

    2014-01-01

    Full Text Available Under a dual-channel structure, free-riding behavior based on the different service levels of the online and offline channels cannot be avoided, which leads to channel unfairness. In this study, the dual-channel supply chain consists of an online channel controlled by the manufacturer and a traditional channel controlled by the retailer. Under this channel structure, we rebuild the linear demand function to account for free-riding behavior and modify the pricing model based on channel fairness. The influences of the fairness factor and free-riding behavior on the manufacturer's and retailer's pricing and performance are then discussed. Finally, we present a numerical analysis that provides recommendations to help the manufacturer and retailer improve channel management performance.

  2. Betweenness-based algorithm for a partition scale-free graph

    International Nuclear Information System (INIS)

    Zhang Bai-Da; Wu Jun-Jie; Zhou Jing; Tang Yu-Hua

    2011-01-01

    Many real-world networks are found to be scale-free. However, graph partition technology, as a technology capable of parallel computing, performs poorly when scale-free graphs are provided. The reason for this is that traditional partitioning algorithms are designed for random networks and regular networks, rather than for scale-free networks. Multilevel graph-partitioning algorithms are currently considered to be the state of the art and are used extensively. In this paper, we analyse the reasons why traditional multilevel graph-partitioning algorithms perform poorly and present a new multilevel graph-partitioning paradigm, top-down partitioning, which derives its name from the comparison with the traditional bottom-up partitioning. A new multilevel partitioning algorithm, named the betweenness-based partitioning algorithm, is also presented as an implementation of the top-down partitioning paradigm. An experimental evaluation of seven different real-world scale-free networks shows that the betweenness-based partitioning algorithm significantly outperforms the existing state-of-the-art approaches. (interdisciplinary physics and related areas of science and technology)
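
    The top-down, betweenness-driven idea can be illustrated with a generic Girvan-Newman-style bisection in Python: repeatedly remove the edge with the highest betweenness until the graph splits. This is only a sketch of the principle on a synthetic Barabasi-Albert graph, not the authors' multilevel partitioning algorithm.

    ```python
    # Illustrative top-down split driven by edge betweenness (Girvan-Newman style).
    import networkx as nx

    def betweenness_bisect(graph, max_removals=10000):
        """Remove highest-betweenness edges until the graph falls into two parts."""
        g = graph.copy()
        for _ in range(max_removals):
            components = list(nx.connected_components(g))
            if len(components) >= 2:
                return components
            edge_bc = nx.edge_betweenness_centrality(g)
            g.remove_edge(*max(edge_bc, key=edge_bc.get))
        return list(nx.connected_components(g))

    g = nx.barabasi_albert_graph(200, 2, seed=1)  # synthetic scale-free test graph
    print([len(part) for part in betweenness_bisect(g)])
    ```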

  3. Free-tropospheric BrO investigations based on GOME

    Science.gov (United States)

    Post, P.; van Roozendael, M.; Backman, L.; Damski, J.; Thölix, L.; Fayt, C.; Taalas, P.

    2003-04-01

    Bromine compounds contribute significantly to stratospheric ozone depletion. However, measurements of most bromine compounds are sparse or non-existent, and experimental studies essentially rely on BrO observations. The differences between balloon and ground-based measurements of stratospheric BrO columns and satellite total column measurements are too large to be explained by measurement uncertainties. Therefore, it has been assumed that there is a concentration of BrO in the free troposphere of about 1-3 ppt. In a previous work, we have calculated the tropospheric BrO abundance as the difference between total BrO and stratospheric BrO columns. The total vertical column densities of BrO are extracted from GOME measurements using IASB-BIRA algorithms. The stratospheric amount has been calculated using chemical transport models (CTMs); results from SLIMCAT and FinROSE simulations are used for this purpose. SLIMCAT is a widely used 3D CTM that has been tested against balloon measurements. FinROSE is a 3D CTM developed at FMI. We have tried several different tropospheric BrO profiles. Our results show that a profile with high BrO concentrations in the boundary layer usually gives unrealistically high tropospheric column values over areas of low albedo (such as oceans). This suggests that the tropospheric BrO would be predominantly distributed in the free troposphere. In this work, attempts are made to identify the signature of a free-tropospheric BrO content when comparing cloudy and non-cloudy scenes. The possible impact of orography on measured BrO columns is also investigated.

  4. Material model of pelvic bone based on modal analysis: a study on the composite bone.

    Science.gov (United States)

    Henyš, Petr; Čapek, Lukáš

    2017-02-01

    Digital models based on finite element (FE) analysis are widely used in orthopaedics to predict the stress or strain in the bone due to bone-implant interaction. The usability of the model depends strongly on the bone material description. The material model that is most commonly used is based on a constant Young's modulus or on the apparent density of bone obtained from computer tomography (CT) data. The Young's modulus of bone is described in many experimental works with large variations in the results. The concept of measuring and validating the material model of the pelvic bone based on modal analysis is introduced in this pilot study. The modal frequencies, damping, and shapes of the composite bone were measured precisely by an impact hammer at 239 points. An FE model was built using the data pertaining to the geometry and apparent density obtained from the CT of the composite bone. The isotropic homogeneous Young's modulus and Poisson's ratio of the cortical and trabecular bone were estimated from the optimisation procedure including Gaussian statistical properties. The performance of the updated model was investigated through the sensitivity analysis of the natural frequencies with respect to the material parameters. The maximal error between the numerical and experimental natural frequencies of the bone reached 1.74 % in the first modal shape. Finally, the optimised parameters were matched with the data sheets of the composite bone. The maximal difference between the calibrated material properties and that obtained from the data sheet was 34 %. The optimisation scheme of the FE model based on the modal analysis data provides extremely useful calibration of the FE models with the uncertainty bounds and without the influence of the boundary conditions.

  5. An aggregated perylene-based broad-spectrum, efficient and label-free quencher for multiplexed fluorescent bioassays.

    Science.gov (United States)

    Liu, Tao; Hu, Rong; Lv, Yi-Fan; Wu, Yuan; Liang, Hao; Huan, Shuang-Yan; Zhang, Xiao-Bing; Tan, Weihong; Yu, Ru-Qin

    2014-08-15

    Fluorescent sensing systems based on the quenching of fluorophores have found wide applications in bioassays. An efficient quencher will endow the sensing system with high sensitivity. The frequently used quenchers are based on organic molecules or nanomaterials, which usually need tedious synthesizing and modifying steps and exhibit different quenching efficiencies for different fluorophores. In this work, we report for the first time that an aggregated perylene derivative can serve as a broad-spectrum and label-free quencher that is able to efficiently quench a variety of fluorophores, such as green, red and far-red dyes labeled on DNA. By choosing nucleases as model biomolecules, such a broad-spectrum quencher was then employed to construct a multiplexed bioassay platform in a label-free manner. Due to the high quenching efficiency of the aggregated perylene, the proposed platform could detect nuclease with high sensitivity, with a detection limit of 0.03 U/mL for EcoRV and 0.05 U/mL for EcoRI. The perylene quencher does not affect the activity of nuclease, which makes it possible to design a post-addition type bioassay platform. Moreover, the proposed platform allows simultaneous and multicolor analysis of nucleases in homogeneous solution, demonstrating its potential for application in rapid screening of multiple bio-targets. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Analysis of the Bogoliubov free energy functional

    DEFF Research Database (Denmark)

    Reuvers, Robin

    In this thesis, we analyse a variational reformulation of the Bogoliubov approximation that is used to describe weakly-interacting translationally-invariant Bose gases. For the resulting model, the `Bogoliubov free energy functional', we demonstrate existence of minimizers as well as the presence...

  7. A rigorous methodology for development and uncertainty analysis of group contribution based property models

    DEFF Research Database (Denmark)

    Frutiger, Jerome; Abildskov, Jens; Sin, Gürkan

    ) weighted-least-squares regression. 3) Initialization of the estimation using linear algebra to provide a first guess. 4) Sequential and simultaneous GC parameter estimation using four different minimization algorithms. 5) Thorough uncertainty analysis: a) based on an asymptotic approximation of the parameter...... covariance matrix, b) based on the bootstrap method, providing 95%-confidence intervals of the parameters and the predicted property. 6) Performance statistics analysis and model application. The application of the methodology is shown for a new GC model built to predict the lower flammability limit (LFL) of refrigerants...... their credibility and robustness in wider industrial and scientific applications....
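
    The bootstrap part of step 5 can be sketched in a few lines of Python: resample the regression data with replacement, refit, and read 95%-confidence intervals off the percentiles of the refitted parameters. The linear model and the data below are placeholders, not the actual GC/LFL model.

    ```python
    # Bootstrap confidence intervals for regression parameters (placeholder data).
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 1, size=(200, 3))               # placeholder group counts
    y = x @ np.array([1.0, -0.5, 2.0]) + rng.normal(0, 0.1, size=200)

    boot_thetas = []
    for _ in range(2000):
        idx = rng.integers(0, len(y), size=len(y))     # resample with replacement
        theta, *_ = np.linalg.lstsq(x[idx], y[idx], rcond=None)
        boot_thetas.append(theta)

    low, high = np.percentile(boot_thetas, [2.5, 97.5], axis=0)
    print("95% CIs:", list(zip(low.round(3), high.round(3))))
    ```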

  8. Spatial Bayesian latent factor regression modeling of coordinate-based meta-analysis data.

    Science.gov (United States)

    Montagna, Silvia; Wager, Tor; Barrett, Lisa Feldman; Johnson, Timothy D; Nichols, Thomas E

    2018-03-01

    Now over 20 years old, functional MRI (fMRI) has a large and growing literature that is best synthesised with meta-analytic tools. As most authors do not share image data, only the peak activation coordinates (foci) reported in the article are available for Coordinate-Based Meta-Analysis (CBMA). Neuroimaging meta-analysis is used to (i) identify areas of consistent activation; and (ii) build a predictive model of task type or cognitive process for new studies (reverse inference). To simultaneously address these aims, we propose a Bayesian point process hierarchical model for CBMA. We model the foci from each study as a doubly stochastic Poisson process, where the study-specific log intensity function is characterized as a linear combination of a high-dimensional basis set. A sparse representation of the intensities is guaranteed through latent factor modeling of the basis coefficients. Within our framework, it is also possible to account for the effect of study-level covariates (meta-regression), significantly expanding the capabilities of the current neuroimaging meta-analysis methods available. We apply our methodology to synthetic data and neuroimaging meta-analysis datasets. © 2017, The International Biometric Society.

  9. Spatial Bayesian Latent Factor Regression Modeling of Coordinate-based Meta-analysis Data

    Science.gov (United States)

    Montagna, Silvia; Wager, Tor; Barrett, Lisa Feldman; Johnson, Timothy D.; Nichols, Thomas E.

    2017-01-01

    Summary Now over 20 years old, functional MRI (fMRI) has a large and growing literature that is best synthesised with meta-analytic tools. As most authors do not share image data, only the peak activation coordinates (foci) reported in the paper are available for Coordinate-Based Meta-Analysis (CBMA). Neuroimaging meta-analysis is used to 1) identify areas of consistent activation; and 2) build a predictive model of task type or cognitive process for new studies (reverse inference). To simultaneously address these aims, we propose a Bayesian point process hierarchical model for CBMA. We model the foci from each study as a doubly stochastic Poisson process, where the study-specific log intensity function is characterised as a linear combination of a high-dimensional basis set. A sparse representation of the intensities is guaranteed through latent factor modeling of the basis coefficients. Within our framework, it is also possible to account for the effect of study-level covariates (meta-regression), significantly expanding the capabilities of the current neuroimaging meta-analysis methods available. We apply our methodology to synthetic data and neuroimaging meta-analysis datasets. PMID:28498564

  10. Minimal Self-Models and the Free Energy Principle

    Directory of Open Access Journals (Sweden)

    Jakub eLimanowski

    2013-09-01

    Full Text Available The term "minimal phenomenal selfhood" describes the basic, pre-reflective experience of being a self (Blanke & Metzinger, 2009). Theoretical accounts of the minimal self have long recognized the importance and the ambivalence of the body as both part of the physical world, and the enabling condition for being in this world (Gallagher, 2005; Grafton, 2009). A recent account of minimal phenomenal selfhood (MPS; Metzinger, 2004a) centers on the consideration that minimal selfhood emerges as the result of basic self-modeling mechanisms, thereby being founded on pre-reflective bodily processes. The free energy principle (FEP; Friston, 2010) is a novel unified theory of cortical function that builds upon the imperative that self-organizing systems entail hierarchical generative models of the causes of their sensory input, which are optimized by minimizing free energy as an approximation of the log-likelihood of the model. The implementation of the FEP via predictive coding mechanisms and in particular the active inference principle emphasizes the role of embodiment for predictive self-modeling, which has been appreciated in recent publications. In this review, we provide an overview of these conceptions and thereby illustrate the potential power of the FEP in explaining the mechanisms underlying minimal selfhood and its key constituents, multisensory integration, interoception, agency, perspective, and the experience of mineness. We conclude that the conceptualization of MPS can be well mapped onto a hierarchical generative model furnished by the free energy principle and may constitute the basis for higher-level, cognitive forms of self-referral, as well as the understanding of other minds.

  11. Kinetics of the Thermal Degradation of Granulated Scrap Tyres: a Model-free Analysis

    Directory of Open Access Journals (Sweden)

    Félix A. LÓPEZ

    2013-12-01

    Full Text Available Pyrolysis is a technology with a promising future in the recycling of scrap tyres. This paper determines the thermal decomposition behaviour and kinetics of granulated scrap tyres (GST) by examining the thermogravimetric/derivative thermogravimetric (TGA/DTG) data obtained during their pyrolysis in an inert atmosphere at different heating rates. The model-free methods of Friedman, Flynn-Wall-Ozawa and Coats-Redfern were used to determine the reaction kinetics from the DTG data. The apparent activation energy and pre-exponential factor for the degradation of GST were calculated. A comparison with the results obtained by other authors was made. DOI: http://dx.doi.org/10.5755/j01.ms.19.4.2947
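
    As a worked illustration of the model-free (isoconversional) idea, the Flynn-Wall-Ozawa method fits ln(beta) against 1/T at a fixed conversion; the slope is approximately -1.052*Ea/R, so the apparent activation energy follows from a straight-line fit. The heating rates and temperatures below are placeholder values, not the paper's TGA data.

    ```python
    # Flynn-Wall-Ozawa estimate of the apparent activation energy at one conversion.
    import numpy as np

    R = 8.314                                            # J/(mol K)
    heating_rates = np.array([5.0, 10.0, 20.0, 40.0])    # K/min (placeholder)
    T_at_alpha = np.array([600.0, 612.0, 625.0, 639.0])  # K at alpha = 0.5 (placeholder)

    slope, _ = np.polyfit(1.0 / T_at_alpha, np.log(heating_rates), 1)
    Ea = -slope * R / 1.052                              # apparent activation energy, J/mol
    print(f"Ea ~ {Ea / 1000:.0f} kJ/mol at alpha = 0.5")
    ```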

  12. Multi-dimensional scavenging analysis of a free-piston linear alternator based on numerical simulation

    Energy Technology Data Exchange (ETDEWEB)

    Mao, Jinlong; Zuo, Zhengxing; Li, Wen; Feng, Huihua [School of Mechanical Engineering, Beijing Institute of Technology, Beijing 100081 (China)

    2011-04-15

    A free-piston linear alternator (FPLA) is being developed by the Beijing Institute of Technology to improve the thermal efficiency relative to conventional crank-driven engines. A two-stroke scavenging process recharges the engine and is crucial to realizing the continuous operation of a free-piston engine. In order to study the FPLA scavenging process, the scavenging system was configured using computational fluid dynamics. As the piston dynamics of the FPLA are different to conventional crank-driven two-stroke engines, a time-based numerical simulation program was built using Matlab to define the piston's motion profiles. A wide range of design and operating options were investigated including effective stroke length, valve overlapping distance, operating frequency and charging pressure to find out their effects on the scavenging performance. The results indicate that a combination of high effective stroke length to bore ratio and long valve overlapping distance with a low supercharging pressure has the potential to achieve high scavenging and trapping efficiencies with low short-circuiting losses. (author)

  13. Rasch model based analysis of the Force Concept Inventory

    Directory of Open Access Journals (Sweden)

    Maja Planinic

    2010-03-01

    Full Text Available The Force Concept Inventory (FCI) is an important diagnostic instrument which is widely used in the field of physics education research. It is therefore very important to evaluate and monitor its functioning using different tools for statistical analysis. One such tool is the stochastic Rasch model, which enables construction of linear measures for persons and items from raw test scores and which can provide important insight into the structure and functioning of the test (how item difficulties are distributed within the test, how well the items fit the model, and how well the items work together to define the underlying construct). The data for the Rasch analysis come from the large-scale research conducted in 2006-07, which investigated Croatian high school students’ conceptual understanding of mechanics on a representative sample of 1676 students (age 17–18 years). The instrument used in the research was the FCI. The average FCI score for the whole sample was found to be (27.7 ± 0.4)%, indicating that most of the students were still non-Newtonians at the end of high school, despite the fact that physics is a compulsory subject in Croatian schools. The large set of obtained data was analyzed with the Rasch measurement computer software WINSTEPS 3.66. Since the FCI is routinely used as pretest and post-test on two very different types of population (non-Newtonian and predominantly Newtonian), an additional predominantly Newtonian sample (N=141, average FCI score of 64.5%) of first-year students enrolled in an introductory physics course at the University of Zagreb was also analyzed. The Rasch model based analysis suggests that the FCI has succeeded in defining a sufficiently unidimensional construct for each population. The analysis of fit of data to the model found no grossly misfitting items which would degrade measurement. Some items with larger misfit and items with significantly different difficulties in the two samples of students do require further
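
    For reference, the dichotomous Rasch model behind this kind of analysis gives the probability of a correct response as a logistic function of the difference between person ability and item difficulty, both expressed on the same logit scale; the values below are purely illustrative.

    ```python
    # Rasch model: P(correct) = 1 / (1 + exp(-(theta - b))).
    import numpy as np

    def rasch_probability(theta, b):
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    abilities = np.array([-1.0, 0.0, 1.5])    # three persons (logits)
    difficulty = 0.5                          # one FCI-like item (logits)
    print(rasch_probability(abilities, difficulty))
    ```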

  14. Driver-centred vehicle automation: using network analysis for agent-based modelling of the driver in highly automated driving systems.

    Science.gov (United States)

    Banks, Victoria A; Stanton, Neville A

    2016-11-01

    To the average driver, the concept of automation in driving implies that they can become completely 'hands and feet free'. This is a common misconception, however, and one that has been shown through the application of Network Analysis to new Cruise Assist technologies that may feature on our roads by 2020. Through the adoption of a Systems Theoretic approach, this paper introduces the concept of driver-initiated automation, which reflects the role of the driver in highly automated driving systems. Using a combination of traditional task analysis and the application of quantitative network metrics, this agent-based modelling paper shows how the role of the driver remains an integral part of the driving system, implying that designers need to ensure drivers are provided with the tools necessary to remain actively in-the-loop despite being given increasing opportunities to delegate control to the automated subsystems. Practitioner Summary: This paper describes and analyses a driver-initiated command and control system of automation using representations afforded by task and social networks to understand how drivers remain actively involved in the task. A network analysis of different driver commands suggests that such a strategy does maintain the driver in the control loop.
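
    A toy sketch of the network-metric step: represent the driver and automated subsystems as nodes of a small network and compute standard centrality metrics for the driver node. The nodes and links are invented for illustration and are not the paper's Cruise Assist model.

    ```python
    # Simple centrality metrics for a hypothetical driver-automation network.
    import networkx as nx

    g = nx.Graph()
    g.add_edges_from([
        ("driver", "longitudinal_controller"),
        ("driver", "lateral_controller"),
        ("driver", "hmi_display"),
        ("longitudinal_controller", "radar"),
        ("lateral_controller", "camera"),
        ("hmi_display", "longitudinal_controller"),
    ])

    print(nx.degree_centrality(g)["driver"])
    print(nx.betweenness_centrality(g)["driver"])
    ```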

  15. Funding analysis of bilateral autologous free-flap breast reconstructions in Australia.

    Science.gov (United States)

    Sinha, Shiba; Ruskin, Olivia; McCombe, David; Morrison, Wayne; Webb, Angela

    2015-08-01

    Bilateral breast reconstructions are being increasingly performed. Autologous free-flap reconstructions represent the gold standard for post-mastectomy breast reconstruction but are resource intensive. This study aims to investigate the difference between hospital reimbursement and true cost of bilateral autologous free-flap reconstructions. Retrospective analysis of patients who underwent bilateral autologous free-flap reconstructions at a single Australian tertiary referral centre was performed. Hospital reimbursement was determined from coding analysis. A true cost analysis was also performed. Comparisons were made considering the effect of timing, indication and complications of the procedure. Forty-six bilateral autologous free-flap procedures were performed (87 deep inferior epigastric perforators (DIEPs), four superficial inferior epigastric artery perforator flaps (SIEAs) and one muscle-sparing free transverse rectus abdominis myocutaneous flap (MS-TRAM)). The mean funding discrepancy between hospital reimbursement and actual cost was $12,137 ± $8539 (mean ± standard deviation (SD)) (n = 46). Twenty-four per cent (n = 11) of the cases had been coded inaccurately. If these cases were excluded from analysis, the mean funding discrepancy per case was $9168 ± $7453 (n = 35). Minor and major complications significantly increased the true cost and funding discrepancy (p = 0.02). Bilateral free-flap breast reconstructions performed in Australian public hospitals result in a funding discrepancy. Failure to be economically viable threatens the provision of this procedure in the public system. Plastic surgeons and hospital managers need to adopt measures in order to make these gold-standard procedures cost neutral. Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  16. Analysis of trait-based models in marine ecosystems

    DEFF Research Database (Denmark)

    Heilmann, Irene Louise Torpe

    -temporal pattern formation in a predator–prey system where animals move towards higher fitness. Reaction-diffusion systems have been used extensively to describe spatio-temporal patterns in a variety of systems. However, animals rarely move completely at random, as expressed by diffusion. This has led to models...... with taxis terms, describing individuals moving in the direction of an attractant. An example is chemotaxis models, where bacteria are attracted to a chemical substance. From an evolutionary perspective, it is expected that animals act so as to optimize their fitness. Based on this principle, a predator......–prey system with fitness taxis and diffusion is proposed. Here, fitness taxis refer to animals moving towards higher values of fitness, and the specific growth rates of the populations are used as a measure of the fitness values. To determine the conditions for pattern formation, a linear stability analysis

  17. Processing of free radical damaged DNA bases

    International Nuclear Information System (INIS)

    Wallace, S.

    2003-01-01

    Free radicals produced during the radiolysis of water give rise to a plethora of DNA damages including single strand breaks, sites of base loss and a wide variety of purine and pyrimidine base lesions. All these damages are processed in cells by base excision repair. The oxidative DNA glycosylases which catalyze the first step in the removal of a base damage during base excision repair evolved primarily to protect the cells from the deleterious mutagenic effects of single free radical-induced DNA lesions arising during oxidative metabolism. This is evidenced by the high spontaneous mutation rate in bacterial mutants lacking the oxidative DNA glycosylases. However, when a low LET photon traverses the DNA molecule, a burst of free radicals is produced during the radiolysis of water that leads to the formation of clustered damages in the DNA molecule, which are recognized by the oxidative DNA glycosylases. When substrates containing two closely opposed sugar damages or base and sugar damages are incubated with the oxidative DNA glycosylases in vitro, one strand is readily incised by the lyase activity of the DNA glycosylase. Whether or not the second strand is incised depends on the distance between the strand break resulting from the incised first strand and the remaining DNA lesion on the other strand. If the lesions are more than two or three base pairs apart, the second strand is readily cleaved by the DNA glycosylase, giving rise to a double strand break. Even if the entire base excision repair system is reconstituted in vitro, whether or not a double strand break ensues depends solely upon the ability of the DNA glycosylase to cleave the second strand. These data predicted that cells deficient in the oxidative DNA glycosylases would be radioresistant while those that overproduce an oxidative DNA glycosylase would be radiosensitive. This prediction was indeed borne out in Escherichia coli, that is, mutants lacking the oxidative DNA glycosylases are radioresistant

  18. Prediction Model of Machining Failure Trend Based on Large Data Analysis

    Science.gov (United States)

    Li, Jirong

    2017-12-01

    Mechanical machining is highly complex, strongly coupled and governed by many control factors, which makes the process prone to failure. To improve the accuracy of fault detection for large mechanical equipment, a machining fault trend prediction model based on fault data is developed. Machining data are characterized using genetic-algorithm-based K-means clustering, and machining features reflecting the correlation dimension of faults are extracted. The spectral characteristics of abnormal vibration during the machining of complex mechanical parts are analysed, and features of the abnormal vibration signals are extracted using multi-component spectral decomposition and Hilbert-based empirical mode decomposition. The extracted features and decomposition results form the database of an intelligent expert system, which is combined with big data analysis methods to realize machining fault trend prediction. The simulation results show that the proposed method predicts machining fault trends more accurately, judges faults in the machining process reliably, and has good application value for analysis and fault diagnosis in the machining process.
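
    A rough sketch of the feature-clustering step: vibration-derived feature vectors are grouped with K-means (the genetic-algorithm initialization used in the paper is omitted here), so that feature vectors drifting away from the healthy cluster indicate a developing fault trend. The feature values are random placeholders.

    ```python
    # K-means clustering of placeholder machining feature vectors.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(2)
    features = np.vstack([
        rng.normal(0.0, 0.3, size=(100, 4)),   # placeholder "healthy" features
        rng.normal(1.5, 0.3, size=(30, 4)),    # placeholder "fault-trend" features
    ])

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
    print(np.bincount(labels))
    ```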

  19. Wayside Bearing Fault Diagnosis Based on a Data-Driven Doppler Effect Eliminator and Transient Model Analysis

    Science.gov (United States)

    Liu, Fang; Shen, Changqing; He, Qingbo; Zhang, Ao; Liu, Yongbin; Kong, Fanrang

    2014-01-01

    A fault diagnosis strategy based on the wayside acoustic monitoring technique is investigated for locomotive bearing fault diagnosis. Inspired by the transient modeling analysis method based on correlation filtering analysis, a so-called Parametric-Mother-Doppler-Wavelet (PMDW) is constructed with six parameters, including a center characteristic frequency and five kinematic model parameters. A Doppler effect eliminator containing a PMDW generator, a correlation filtering analysis module, and a signal resampler is invented to eliminate the Doppler effect embedded in the acoustic signal of the recorded bearing. Through the Doppler effect eliminator, the five kinematic model parameters can be identified based on the signal itself. Then, the signal resampler is applied to eliminate the Doppler effect using the identified parameters. With the ability to detect early bearing faults, the transient model analysis method is employed to detect localized bearing faults after the embedded Doppler effect is eliminated. The effectiveness of the proposed fault diagnosis strategy is verified via simulation studies and applications to diagnose locomotive roller bearing defects. PMID:24803197

  20. Uncertainty analysis in agent-based modelling and consequential life cycle assessment coupled models : a critical review

    NARCIS (Netherlands)

    Baustert, P.M.; Benetto, E.

    2017-01-01

    The evolution of life cycle assessment (LCA) from a merely comparative tool for the assessment of products to a policy analysis tool proceeds by incorporating increasingly complex modelling approaches. In more recent studies of complex systems, such as the agriculture sector or mobility, agent-based

  1. Support vector machine learning-based fMRI data group analysis.

    Science.gov (United States)

    Wang, Ze; Childress, Anna R; Wang, Jiongjiong; Detre, John A

    2007-07-15

    To explore the multivariate nature of fMRI data and to consider the inter-subject brain response discrepancies, a multivariate and brain response model-free method is fundamentally required. Two such methods are presented in this paper by integrating a machine learning algorithm, the support vector machine (SVM), and the random effect model. Without any brain response modeling, SVM was used to extract a whole brain spatial discriminance map (SDM), representing the brain response difference between the contrasted experimental conditions. Population inference was then obtained through the random effect analysis (RFX) or permutation testing (PMU) on the individual subjects' SDMs. Applied to arterial spin labeling (ASL) perfusion fMRI data, SDM RFX yielded lower false-positive rates in the null hypothesis test and higher detection sensitivity for synthetic activations with varying cluster size and activation strengths, compared to the univariate general linear model (GLM)-based RFX. For a sensory-motor ASL fMRI study, both SDM RFX and SDM PMU yielded similar activation patterns to GLM RFX and GLM PMU, respectively, but with higher t values and cluster extensions at the same significance level. Capitalizing on the absence of temporal noise correlation in ASL data, this study also incorporated PMU in the individual-level GLM and SVM analyses accompanied by group-level analysis through RFX or group-level PMU. Providing inferences on the probability of being activated or deactivated at each voxel, these individual-level PMU-based group analysis methods can be used to threshold the analysis results of GLM RFX, SDM RFX or SDM PMU.
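
    A compact sketch of the subject-level SDM plus group random-effects idea: fit a linear SVM per subject, keep its weight vector as the spatial discriminance map, and run a voxel-wise one-sample t-test across subjects. The data are synthetic, and the permutation-testing variant described above is omitted.

    ```python
    # Subject-level SVM discriminance maps followed by a group-level t-test (synthetic data).
    import numpy as np
    from scipy import stats
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    n_subjects, n_trials, n_voxels = 12, 40, 500

    sdm = np.zeros((n_subjects, n_voxels))
    for s in range(n_subjects):
        labels = np.repeat([0, 1], n_trials // 2)
        data = rng.normal(size=(n_trials, n_voxels))
        data[labels == 1, :50] += 0.8              # synthetic "activation" in 50 voxels
        clf = LinearSVC(C=1.0).fit(data, labels)
        sdm[s] = clf.coef_.ravel()                 # subject-level discriminance map

    t_map, p_map = stats.ttest_1samp(sdm, popmean=0.0, axis=0)
    print("voxels with p < 0.001:", int((p_map < 0.001).sum()))
    ```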

  2. A fast-running core prediction model based on neural networks for load-following operations in a soluble boron-free reactor

    Energy Technology Data Exchange (ETDEWEB)

    Jang, Jin-wook [Korea Atomic Energy Research Institute, P.O. Box 105, Yusong, Daejon 305-600 (Korea, Republic of)], E-mail: Jinwook@kaeri.re.kr; Seong, Seung-Hwan [Korea Atomic Energy Research Institute, P.O. Box 105, Yusong, Daejon 305-600 (Korea, Republic of)], E-mail: shseong@kaeri.re.kr; Lee, Un-Chul [Department of Nuclear Engineering, Seoul National University, Shinlim-Dong, Gwanak-Gu, Seoul 151-742 (Korea, Republic of)

    2007-09-15

    A fast prediction model for load-following operations in a soluble boron-free reactor has been proposed, which can predict the core status when three or more control rod groups are moved at a time. This prediction model consists of two multilayer feedforward neural network models to retrieve the axial offset and the reactivity, and compensation models to compensate for the reactivity and axial offset arising from the xenon transient. The neural network training data were generated by taking various overlaps among the control rod groups into consideration for training the neural network models, and the accuracy of the constructed neural network models was verified. Validation results of predicting load following operations for a soluble boron-free reactor show that this model has a good capability to predict the positions of the control rods for sustaining the criticality of a core during load-following operations to ensure that the tolerable axial offset band is not exceeded and it can provide enough corresponding time for the operators to take the necessary actions to prevent a deviation from the tolerable operating band.
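
    The two-network structure can be sketched with off-the-shelf feedforward regressors: one network maps control-rod group positions to reactivity and another to axial offset. The training data below are smooth placeholder functions, not the core-simulator data used in the paper, and the xenon compensation models are omitted.

    ```python
    # Toy sketch: two feedforward networks predicting reactivity and axial offset.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    rod_positions = rng.uniform(0.0, 1.0, size=(2000, 3))       # three rod groups

    reactivity = 1.0 - rod_positions.sum(axis=1) / 3.0          # placeholder "physics"
    axial_offset = rod_positions[:, 0] - rod_positions[:, 2]    # placeholder "physics"

    reactivity_net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000).fit(rod_positions, reactivity)
    offset_net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000).fit(rod_positions, axial_offset)

    query = np.array([[0.5, 0.3, 0.8]])
    print(reactivity_net.predict(query), offset_net.predict(query))
    ```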

  3. A fast-running core prediction model based on neural networks for load-following operations in a soluble boron-free reactor

    International Nuclear Information System (INIS)

    Jang, Jin-wook; Seong, Seung-Hwan; Lee, Un-Chul

    2007-01-01

    A fast prediction model for load-following operations in a soluble boron-free reactor has been proposed, which can predict the core status when three or more control rod groups are moved at a time. This prediction model consists of two multilayer feedforward neural network models to retrieve the axial offset and the reactivity, and compensation models to compensate for the reactivity and axial offset arising from the xenon transient. The neural network training data were generated by taking various overlaps among the control rod groups into consideration for training the neural network models, and the accuracy of the constructed neural network models was verified. Validation results of predicting load following operations for a soluble boron-free reactor show that this model has a good capability to predict the positions of the control rods for sustaining the criticality of a core during load-following operations to ensure that the tolerable axial offset band is not exceeded and it can provide enough corresponding time for the operators to take the necessary actions to prevent a deviation from the tolerable operating band

  4. Analysis of the free-energy surface of proteins from reversible folding simulations.

    Directory of Open Access Journals (Sweden)

    Lucy R Allen

    2009-07-01

    Full Text Available Computer generated trajectories can, in principle, reveal the folding pathways of a protein at atomic resolution and possibly suggest general and simple rules for predicting the folded structure of a given sequence. While such reversible folding trajectories can only be determined ab initio using all-atom transferable force-fields for a few small proteins, they can be determined for a large number of proteins using coarse-grained and structure-based force-fields, in which a known folded structure is by construction the absolute energy and free-energy minimum. Here we use a model of the fast folding helical lambda-repressor protein to generate trajectories in which native and non-native states are in equilibrium and transitions are accurately sampled. Yet, representation of the free-energy surface, which underlies the thermodynamic and dynamic properties of the protein model, from such a trajectory remains a challenge. Projections over one or a small number of arbitrarily chosen progress variables often hide the most important features of such surfaces. The results unequivocally show that an unprojected representation of the free-energy surface provides important and unbiased information and allows a simple and meaningful description of many-dimensional, heterogeneous trajectories, providing new insight into the possible mechanisms of fast-folding proteins.

  5. Solution phase and membrane immobilized iron-based free radical reactions: Fundamentals and applications for water treatment

    Science.gov (United States)

    Lewis, Scott Romak

    Membrane-based separation processes have been used extensively for drinking water purification, wastewater treatment, and numerous other applications. Reactive membranes synthesized through functionalization of the membrane pores offer enhanced reactivity due to increased surface area at the polymer-solution interface and low diffusion limitations. Oxidative techniques utilizing free radicals have proven effective for both the destruction of toxic organics and non-environmental applications. Most previous work focuses on reactions in the homogeneous phase; however, the immobilization of reactants in membrane pores offers several advantages. The use of polyanions immobilized in a membrane or chelates in solution prevents ferric hydroxide precipitation at near-neutral pH, a common limitation of iron(Fe(II/III))-catalyzed hydrogen peroxide (H 2O2) decomposition. The objectives of this research are to develop a membrane-based platform for the generation of free radicals, degrade toxic organic compounds using this and similar solution-based reactions, degrade toxic organic compounds in droplet form, quantify hydroxyl radical production in these reactions, and develop kinetic models for both processes. In this study, a functionalized membrane containing poly(acrylic acid) (PAA) was used to immobilize iron ions and conduct free radical reactions by permeating H2O2 through the membrane. The membrane's responsive behavior to pH and divalent cations was investigated and modeled. The conversion of Fe(II) to Fe(III) in the membrane and its effect on the decomposition of hydrogen peroxide were monitored and used to develop kinetic models for predicting H2O2 decomposition in these systems. The rate of hydroxyl radical production, and hence contaminant degradation can be varied by changing the residence time, H2O2 concentration, and/or iron loading. Using these membrane-immobilized systems, successful removal of toxic organic compounds, such as pentachlorophenol (PCP), from water

  6. Development and validation of a free-piston engine generator numerical model

    International Nuclear Information System (INIS)

    Jia, Boru; Zuo, Zhengxing; Tian, Guohong; Feng, Huihua; Roskilly, A.P.

    2015-01-01

    Highlights: • Detailed numerical model of free-piston engine generator is presented. • Sub models for both starting process and steady operation are derived. • Simulation results show good agreement with prototype test data. • Engine performance with different starting motor force and varied loads are simulated. • The efficiency of the prototype is estimated to be 31.5% at a power output of 4 kW under full load. - Abstract: This paper focuses on the numerical modelling of a spark ignited free-piston engine generator and the model validation with test results. Detailed sub-models for both starting process and steady operation were derived. The compression and expansion processes were not regarded as ideal gas isentropic processes; both heat transfer and air leakage were taken into consideration. The simulation results show good agreement with the prototype test data for both the starting process and steady operation. During the starting process, the difference of the in-cylinder gas pressure can be controlled within 1 bar for every running cycle. For the steady operation process, the difference was less than 5% and the areas enclosed on the pressure–volume diagram were similar, indicating that the power produced by the engine and the engine efficiency could be predicted by this model. Based on this model, the starting process with different starting motor forces and the combustion process with various throttle openings were simulated. The engine performance during stable operation at 100% engine load was predicted, and the efficiency of the prototype was estimated to be 31.5% at power output of 4 kW

  7. Free energy landscape and transition pathways from Watson–Crick to Hoogsteen base pairing in free duplex DNA

    Science.gov (United States)

    Yang, Changwon; Kim, Eunae; Pak, Youngshang

    2015-01-01

    Hoogsteen (HG) base pairing plays a central role in the DNA binding of proteins and small ligands. Probing the detailed transition mechanism from Watson–Crick (WC) to HG base pair (bp) formation in duplex DNAs is of fundamental importance in terms of revealing intrinsic functions of double helical DNAs beyond their sequence determined functions. We investigated a free energy landscape of a free B-DNA with an adenosine–thymine (A–T) rich sequence to probe its conformational transition pathways from WC to HG base pairing. The free energy landscape was computed with a state-of-the-art two-dimensional umbrella molecular dynamics simulation at the all-atom level. The present simulation showed that in an isolated duplex DNA, the spontaneous transition from WC to HG bp takes place via multiple pathways. Notably, base flipping into the major and minor grooves was found to play an important role in forming these multiple transition pathways. This finding suggests that naked B-DNA under normal conditions has an inherent ability to form HG bps via spontaneous base opening events. PMID:26250116

  8. Students learn systems-based care and facilitate system change as stakeholders in a free clinic experience.

    Science.gov (United States)

    Colbert, Colleen Y; Ogden, Paul E; Lowe, Darla; Moffitt, Michael J

    2010-10-01

    Systems-based practice (SBP) is rarely taught or evaluated during medical school, yet is one of the required competencies once students enter residency. We believe Texas A&M College of Medicine students learn about systems issues informally, as they care for patients at a free clinic in Temple, TX. The mandatory free clinic rotation is part of the Internal Medicine clerkship and does not include formal instruction in SBP. During 2008-2009, a sample of students (n = 31) on the IMED clerkship's free clinic rotation participated in a program evaluation/study regarding their experiences. Focus groups (M = 5 students/group) were held at the end of each outpatient rotation. Students were asked: "Are you aware of any system issues which can affect either the delivery of or access to care at the free clinic?" Data saturation was reached after six focus groups, when investigators noted a repetition of responses. Based upon investigator consensus opinion, data collection was discontinued. Based upon a content analysis, six themes were identified: access to specialists, including OB-GYN, was limited; cost containment; lack of resources affects delivery of care; delays in care due to lack of insurance; understanding of larger healthcare system and free clinic role; and delays in tests due to language barriers. Medical students were able to learn about SBP issues during free clinic rotations. Students experienced how SBP issues affected the health care of uninsured individuals. We believe these findings may be transferable to medical schools with mandatory free clinic rotations.

  9. Gibbs Sampler-Based λ-Dynamics and Rao-Blackwell Estimator for Alchemical Free Energy Calculation.

    Science.gov (United States)

    Ding, Xinqiang; Vilseck, Jonah Z; Hayes, Ryan L; Brooks, Charles L

    2017-06-13

    λ-dynamics is a generalized ensemble method for alchemical free energy calculations. In traditional λ-dynamics, the alchemical switch variable λ is treated as a continuous variable ranging from 0 to 1 and an empirical estimator is utilized to approximate the free energy. In the present article, we describe an alternative formulation of λ-dynamics that utilizes the Gibbs sampler framework, which we call Gibbs sampler-based λ-dynamics (GSLD). GSLD, like traditional λ-dynamics, can be readily extended to calculate free energy differences between multiple ligands in one simulation. We also introduce a new free energy estimator, the Rao-Blackwell estimator (RBE), for use in conjunction with GSLD. Compared with the current empirical estimator, the advantage of RBE is that RBE is an unbiased estimator and its variance is usually smaller than the current empirical estimator. We also show that the multistate Bennett acceptance ratio equation or the unbinned weighted histogram analysis method equation can be derived using the RBE. We illustrate the use and performance of this new free energy computational framework by application to a simple harmonic system as well as relevant calculations of small molecule relative free energies of solvation and binding to a protein receptor. Our findings demonstrate consistent and improved performance compared with conventional alchemical free energy methods.

  10. Asymptotically Free Natural Supersymmetric Twin Higgs Model

    Science.gov (United States)

    Badziak, Marcin; Harigaya, Keisuke

    2018-05-01

    Twin Higgs (TH) models explain the absence of new colored particles responsible for natural electroweak symmetry breaking (EWSB). All known ultraviolet completions of TH models require some nonperturbative dynamics below the Planck scale. We propose a supersymmetric model in which the TH mechanism is introduced by a new asymptotically free gauge interaction. The model features natural EWSB for squarks and gluino heavier than 2 TeV even if supersymmetry breaking is mediated around the Planck scale, and has interesting flavor phenomenology including the top quark decay into the Higgs boson and the up quark which may be discovered at the LHC.

  11. Stepwise Analysis of Differential Item Functioning Based on Multiple-Group Partial Credit Model.

    Science.gov (United States)

    Muraki, Eiji

    1999-01-01

    Extended an Item Response Theory (IRT) method for detection of differential item functioning to the partial credit model and applied the method to simulated data using a stepwise procedure. Then applied the stepwise DIF analysis based on the multiple-group partial credit model to writing trend data from the National Assessment of Educational…

  12. Coupling compositional liquid gas Darcy and free gas flows at porous and free-flow domains interface

    Energy Technology Data Exchange (ETDEWEB)

    Masson, R., E-mail: roland.masson@unice.fr [LJAD, University Nice Sophia Antipolis, CNRS UMR 7351 (France); Team COFFEE INRIA Sophia Antipolis Méditerranée (France); Trenty, L., E-mail: laurent.trenty@andra.fr [Andra, Chatenay Malabry (France); Zhang, Y., E-mail: yumeng.zhang@unice.fr [LJAD, University Nice Sophia Antipolis, CNRS UMR 7351 (France); Team COFFEE INRIA Sophia Antipolis Méditerranée (France)

    2016-09-15

    This paper proposes an efficient splitting algorithm to solve coupled liquid gas Darcy and free gas flows at the interface between a porous medium and a free-flow domain. This model is compared to the reduced model introduced in [6] using a 1D approximation of the gas free flow. For that purpose, the gas molar fraction diffusive flux at the interface in the free-flow domain is approximated by a two point flux approximation based on a low-frequency diagonal approximation of a Steklov–Poincaré type operator. The splitting algorithm and the reduced model are applied in particular to the modelling of the mass exchanges at the interface between the storage and the ventilation galleries in radioactive waste deposits.

  13. Genetic Algorithms for Agent-Based Infrastructure Interdependency Modeling and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    May Permann

    2007-03-01

    Today’s society relies greatly upon an array of complex national and international infrastructure networks such as transportation, electric power, telecommunication, and financial networks. This paper describes initial research combining agent-based infrastructure modeling software and genetic algorithms (GAs) to help optimize infrastructure protection and restoration decisions. This research proposes to apply GAs to the problem of infrastructure modeling and analysis in order to determine the optimum assets to restore or protect from attack or other disaster. This research is just commencing and therefore the focus of this paper is the integration of a GA optimization method with a simulation through the simulation’s agents.

  14. Comparative assessment of computational methods for the determination of solvation free energies in alcohol-based molecules.

    Science.gov (United States)

    Martins, Silvia A; Sousa, Sergio F

    2013-06-05

    The determination of differences in solvation free energies between related drug molecules remains an important challenge in computational drug optimization, when fast and accurate calculation of differences in binding free energy are required. In this study, we have evaluated the performance of five commonly used polarized continuum model (PCM) methodologies in the determination of solvation free energies for 53 typical alcohol and alkane small molecules. In addition, the performance of these PCM methods, of a thermodynamic integration (TI) protocol and of the Poisson-Boltzmann (PB) and generalized Born (GB) methods, were tested in the determination of solvation free energies changes for 28 common alkane-alcohol transformations, by the substitution of an hydrogen atom for a hydroxyl substituent. The results show that the solvation model D (SMD) performs better among the PCM-based approaches in estimating solvation free energies for alcohol molecules, and solvation free energy changes for alkane-alcohol transformations, with an average error below 1 kcal/mol for both quantities. However, for the determination of solvation free energy changes on alkane-alcohol transformation, PB and TI yielded better results. TI was particularly accurate in the treatment of hydroxyl groups additions to aromatic rings (0.53 kcal/mol), a common transformation when optimizing drug-binding in computer-aided drug design. Copyright © 2013 Wiley Periodicals, Inc.
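
    A minimal sketch of the thermodynamic-integration estimate compared above: the free-energy change is the integral over lambda of the ensemble average of dU/dlambda, here evaluated with the trapezoid rule on placeholder per-window averages rather than real simulation output.

    ```python
    # Thermodynamic integration by the trapezoid rule over lambda windows.
    import numpy as np

    lambdas = np.linspace(0.0, 1.0, 11)
    mean_dU_dlambda = -5.0 + 3.0 * lambdas**2      # placeholder <dU/dlambda>, kcal/mol

    delta_G = np.trapz(mean_dU_dlambda, lambdas)
    print(f"Delta G ~ {delta_G:.2f} kcal/mol")
    ```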

  15. Stochastic analysis of laminated composite plates on elastic foundation: The cases of post-buckling behavior and nonlinear free vibration

    International Nuclear Information System (INIS)

    Singh, B.N.; Lal, Achchhe

    2010-01-01

    This study deals with the stochastic post-buckling and nonlinear free vibration analysis of a laminated composite plate resting on a two-parameter Pasternak foundation with Winkler cubic nonlinearity, having uncertain system properties. The system properties are modeled as basic random variables. A C0 nonlinear finite element formulation of the random problem based on higher-order shear deformation theory in the von Karman sense is presented. A direct iterative method in conjunction with a stochastic nonlinear finite element method proposed earlier by the authors is extended to analyze the effect of uncertainty in system properties on the post-buckling and nonlinear free vibration of composite plates having a Winkler type of geometric nonlinearity. The mean as well as the standard deviation of the responses have been obtained for various combinations of geometric parameters, foundation parameters, stacking sequences and boundary conditions, and compared with those available in the literature and with Monte Carlo simulation.

  16. Numerical solution of quadratic matrix equations for free vibration analysis of structures

    Science.gov (United States)

    Gupta, K. K.

    1975-01-01

    This paper is concerned with the efficient and accurate solution of the eigenvalue problem represented by quadratic matrix equations. Such matrix forms are obtained in connection with the free vibration analysis of structures, discretized by finite 'dynamic' elements, resulting in frequency-dependent stiffness and inertia matrices. The paper presents a new numerical solution procedure of the quadratic matrix equations, based on a combined Sturm sequence and inverse iteration technique enabling economical and accurate determination of a few required eigenvalues and associated vectors. An alternative procedure based on a simultaneous iteration procedure is also described when only the first few modes are the usual requirement. The employment of finite dynamic elements in conjunction with the presently developed eigenvalue routines results in a most significant economy in the dynamic analysis of structures.
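
    The Sturm-sequence and inverse-iteration scheme of the paper is not reproduced here, but the sketch below illustrates, under simplified assumptions, how the same quadratic eigenvalue problem (λ²M + λC + K)x = 0 can be reduced to a standard generalized eigenproblem by companion linearization and solved with off-the-shelf routines. The matrices are small hypothetical examples, not frequency-dependent dynamic-element matrices.

```python
# Quadratic eigenvalue problem (lam^2*M + lam*C + K) x = 0 solved via the
# standard first companion linearization A z = lam B z, with z = [x; lam*x].
import numpy as np
from scipy.linalg import eig

n = 3
M = np.eye(n)                                  # hypothetical mass matrix
K = np.diag([4.0, 9.0, 16.0])                  # hypothetical stiffness matrix
C = 0.05 * K                                   # small frequency-dependent/damping term

A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-K,               -C]])
B = np.block([[np.eye(n),        np.zeros((n, n))],
              [np.zeros((n, n)), M]])

eigvals, eigvecs = eig(A, B)                   # generalized eigenproblem of size 2n
print(np.sort_complex(eigvals))                # the 2n eigenvalues of the quadratic pencil
```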

  17. A Practical Cryogen-Free CO2 Purification and Freezing Technique for Stable Isotope Analysis.

    Science.gov (United States)

    Sakai, Saburo; Matsuda, Shinichi

    2017-04-18

    Since isotopic analysis by mass spectrometry began in the early 1900s, sample gas for light-element isotopic measurements has been purified by the use of cryogens and vacuum-line systems. However, this conventional purification technique can achieve only certain temperatures that depend on the cryogens and can be sustained only as long as there is a continuous cryogen supply. Here, we demonstrate a practical cryogen-free CO2 purification technique using an electrically operated cryocooler for stable isotope analysis. This approach is based on portable free-piston Stirling cooling technology and controls the temperature to an accuracy of 0.1 °C in a range from room temperature to -196 °C (liquid-nitrogen temperature). The lowest temperature can be achieved in as little as 10 min. We successfully purified CO2 gas generated by the reaction of carbonates with phosphoric acid and found its sublimation point to be -155.6 °C at 0.1 Torr in the vacuum line. This means that the temperature required for CO2 trapping is much higher than the liquid-nitrogen temperature. Our portable cooling system offers the ability to be free from the inconvenience of cryogen use for stable isotope analysis. It also offers a new cooling method applicable to a number of fields that use gas measurements.

  18. Global model of zenith tropospheric delay proposed based on EOF analysis

    Science.gov (United States)

    Sun, Langlang; Chen, Peng; Wei, Erhu; Li, Qinzheng

    2017-07-01

    Tropospheric delay is one of the main error budgets in Global Navigation Satellite System (GNSS) measurements. Many empirical correction models have been developed to compensate for this delay, and models which do not require meteorological parameters have received the most attention. This study established a global troposphere zenith total delay (ZTD) model, called Global Empirical Orthogonal Function Troposphere (GEOFT), based on the empirical orthogonal function (EOF, also known as geographically weighted PCAs) analysis method and the Global Geodetic Observing System (GGOS) Atmosphere data from 2012 to 2015. The results showed that ZTD variation could be well represented by the characteristics of the EOF base functions Ek and associated coefficients Pk. Here, E1 mainly signifies the equatorial anomaly; E2 represents north-south asymmetry, and E3 and E4 reflect regional variation. Moreover, P1 mainly reflects annual and semiannual variation components; P2 and P3 mainly contain annual variation components, and P4 displays semiannual variation components. We validated the proposed GEOFT model using GGOS ZTD grid data and the tropospheric product of the International GNSS Service (IGS) over the year 2016. The results showed that the GEOFT model has high accuracy, with bias and RMS of -0.3 and 3.9 cm, respectively, with respect to the GGOS ZTD data, and of -0.8 and 4.1 cm, respectively, with respect to the global IGS tropospheric product. The accuracy of GEOFT demonstrates that the use of the EOF analysis method to characterize ZTD variation is reasonable.
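
    To make the EOF step concrete, the sketch below decomposes a synthetic time-by-grid ZTD anomaly matrix with an SVD, giving spatial base functions Ek and time coefficients Pk analogous to those discussed above. The data are fabricated for illustration and are not GGOS grids.

```python
# Illustrative EOF decomposition of a (time x grid-point) field via SVD.
import numpy as np

rng = np.random.default_rng(0)
n_epochs, n_grid = 365, 200
t = np.arange(n_epochs)[:, None]
ztd = (2.3 + 0.05 * np.sin(2 * np.pi * t / 365.25)        # annual signal (synthetic, in m)
       + 0.01 * rng.standard_normal((n_epochs, n_grid)))   # noise

anom = ztd - ztd.mean(axis=0)                  # remove the temporal mean at each grid point
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
E = Vt                                         # spatial base functions E_k (rows)
P = U * s                                      # associated time coefficients P_k (columns)
explained = s**2 / np.sum(s**2)
print("variance explained by the first 4 EOFs:", np.round(explained[:4], 3))
```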

  19. Groundwater potentiality mapping using geoelectrical-based aquifer hydraulic parameters: A GIS-based multi-criteria decision analysis modeling approach

    Directory of Open Access Journals (Sweden)

    Kehinde Anthony Mogaji Hwee San Lim

    2017-01-01

    This study conducted a robust analysis of acquired 2D resistivity imaging data and borehole pumping test (BPT) records to optimize groundwater potentiality mapping in Perak province, Malaysia, using derived aquifer hydraulic properties. The transverse resistance (TR) parameter was determined from the interpreted 2D resistivity imaging data by applying the Dar-Zarrouk parameter equation. Linear regression and GIS techniques were used to regress the estimated TR values against the aquifer transmissivity values extracted from the geospatially produced BPT-based aquifer transmissivity map, yielding the aquifer transmissivity parameter predictive (ATPP) model. The ATPP model, whose reliability was evaluated using the Theil inequality coefficient measurement approach, was used to establish geoelectrical-based hydraulic parameter (GHP) modeling equations for the transmissivity (Tr), hydraulic conductivity (K), storativity (St), and hydraulic diffusivity (D) properties. Applying the GHP modeling equations to the delineated aquifer media produced aquifer potential conditioning factor maps for Tr, K, St, and D. These maps were combined into an aquifer potential mapping index (APMI) model by applying the multi-criteria decision analysis-analytic hierarchy process (AHP) principle. The area groundwater reservoir productivity potential map produced from the processed APMI estimates in the GIS environment was found to be 71% accurate. This study establishes a good alternative approach for determining aquifer hydraulic parameters, even in areas where pumping test information is unavailable, using cost-effective geophysical data. The produced map can be used to support hydrological decision making.
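
    The multi-criteria step can be illustrated with a minimal AHP sketch: priority weights for the four conditioning factors are taken from the principal eigenvector of a pairwise comparison matrix and then used in a weighted overlay. The pairwise judgements and the tiny raster layers below are hypothetical, not values from the study.

```python
# AHP weight derivation and weighted overlay (illustrative values only).
import numpy as np

# Pairwise comparison of Tr, K, St, D on the Saaty 1-9 scale (hypothetical judgements)
A = np.array([[1.0, 2.0, 3.0, 4.0],
              [1/2, 1.0, 2.0, 3.0],
              [1/3, 1/2, 1.0, 2.0],
              [1/4, 1/3, 1/2, 1.0]])
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()                                 # priority weights of the four factors

# Weighted overlay of normalised conditioning-factor "rasters" (toy 2x2 grids)
layers = np.random.default_rng(1).random((4, 2, 2))
apmi = np.tensordot(w, layers, axes=1)          # aquifer potential mapping index
print("weights:", np.round(w, 3))
print("APMI grid:\n", np.round(apmi, 3))
```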

  20. Sensitivity analysis of an individual-based model for simulation of influenza epidemics.

    Directory of Open Access Journals (Sweden)

    Elaine O Nsoesie

    Individual-based epidemiology models are increasingly used in the study of influenza epidemics. Several studies on influenza dynamics and evaluation of intervention measures have used the same incubation and infectious period distribution parameters based on the natural history of influenza. A sensitivity analysis evaluating the influence of slight changes to these parameters (in addition to the transmissibility) would be useful for future studies and real-time modeling during an influenza pandemic. In this study, we examined individual and joint effects of parameters and ranked parameters based on their influence on the dynamics of simulated epidemics. We also compared the sensitivity of the model across synthetic social networks for Montgomery County in Virginia and New York City (and surrounding metropolitan regions), which have demographic and rural-urban differences. In addition, we studied the effects of changing the mean infectious period on age-specific epidemics. The research was performed from a public health standpoint using three relevant measures: time to peak, peak infected proportion and total attack rate. We also used statistical methods in the design and analysis of the experiments. The results showed that: (i) minute changes in the transmissibility and mean infectious period significantly influenced the attack rate; (ii) the mean of the incubation period distribution appeared to be sufficient for determining its effects on the dynamics of epidemics; (iii) the infectious period distribution had the strongest influence on the structure of the epidemic curves; (iv) the sensitivity of the individual-based model was consistent across social networks investigated in this study and (v) age-specific epidemics were sensitive to changes in the mean infectious period irrespective of the susceptibility of the other age groups. These findings suggest that small changes in some of the disease model parameters can significantly influence the uncertainty

  1. Blind Separation of Acoustic Signals Combining SIMO-Model-Based Independent Component Analysis and Binary Masking

    Directory of Open Access Journals (Sweden)

    Hiekata Takashi

    2006-01-01

    A new two-stage blind source separation (BSS) method for convolutive mixtures of speech is proposed, in which single-input multiple-output (SIMO) model-based independent component analysis (ICA) and a new SIMO-model-based binary masking are combined. SIMO-model-based ICA enables us to separate the mixed signals, not into monaural source signals but into SIMO-model-based signals from independent sources in their original form at the microphones. Thus, the separated signals of SIMO-model-based ICA can maintain the spatial qualities of each sound source. Owing to this attractive property, our novel SIMO-model-based binary masking can be applied to efficiently remove the residual interference components after SIMO-model-based ICA. The experimental results reveal that the separation performance can be considerably improved by the proposed method compared with that achieved by conventional BSS methods. In addition, the real-time implementation of the proposed BSS is illustrated.

  2. Model Predictive Control for Offset-Free Reference Tracking

    Czech Academy of Sciences Publication Activity Database

    Belda, Květoslav

    2016-01-01

    Roč. 5, č. 1 (2016), s. 8-13 ISSN 1805-3386 Institutional support: RVO:67985556 Keywords : offset-free reference tracking * predictive control * ARX model * state-space model * multi-input multi-output system * robotic system * mechatronic system Subject RIV: BC - Control Systems Theory http://library.utia.cas.cz/separaty/2016/AS/belda-0458355.pdf

  3. Model based fault diagnosis in a centrifugal pump application using structural analysis

    DEFF Research Database (Denmark)

    Kallesøe, C. S.; Izadi-Zamanabadi, Roozbeh; Rasmussen, Henrik

    2004-01-01

    A model based approach for fault detection and isolation in a centrifugal pump is proposed in this paper. The fault detection algorithm is derived using a combination of structural analysis, Analytical Redundant Relations (ARR) and observer designs. Structural considerations on the system are used...

  4. Model Based Fault Diagnosis in a Centrifugal Pump Application using Structural Analysis

    DEFF Research Database (Denmark)

    Kallesøe, C. S.; Izadi-Zamanabadi, Roozbeh; Rasmussen, Henrik

    2004-01-01

    A model based approach for fault detection and isolation in a centrifugal pump is proposed in this paper. The fault detection algorithm is derived using a combination of structural analysis, Analytical Redundant Relations (ARR) and observer designs. Structural considerations on the system are used...

  5. TAD-free analysis of architectural proteins and insulators.

    Science.gov (United States)

    Mourad, Raphaël; Cuvier, Olivier

    2018-03-16

    The three-dimensional (3D) organization of the genome is intimately related to numerous key biological functions including gene expression and DNA replication regulations. The mechanisms by which molecular drivers functionally organize the 3D genome, such as topologically associating domains (TADs), remain to be explored. Current approaches consist in assessing the enrichments or influences of proteins at TAD borders. Here, we propose a TAD-free model to directly estimate the blocking effects of architectural proteins, insulators and DNA motifs on long-range contacts, making the model intuitive and biologically meaningful. In addition, the model allows analyzing the whole Hi-C information content (2D information) instead of only focusing on TAD borders (1D information). The model outperforms multiple logistic regression at TAD borders in terms of parameter estimation accuracy and is validated by enhancer-blocking assays. In Drosophila, the results support the insulating role of simple sequence repeats and suggest that the blocking effects depend on the number of repeats. Motif analysis uncovered the roles of the transcriptional factors pannier and tramtrack in blocking long-range contacts. In human, the results suggest that the blocking effects of the well-known architectural proteins CTCF, cohesin and ZNF143 depend on the distance between loci, where each protein may participate at different scales of the 3D chromatin organization.

  6. Fatigue-free PZT-based nanocomposites

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, H J; Sando, M [Nat. Ind. Res. Inst., Nagoya (Japan); Tajima, K [Synergy Ceramics Lab., Fine Ceramics Research Association, Nagoya (Japan); Niihara, K [ISIR, Osaka Univ., Mihogaoka, Ibaraki (Japan)

    1999-03-01

    The goal of this study is to fabricate fatigue-free piezoelectrics-based nanocomposites. Lead zirconate titanate (PZT) and metallic platinum (Pt) were selected as a matrix and secondary phase dispersoid. Fine Pt particles were homogeneously dispersed in the PZT matrix. Fatigue properties of the unpoled PZT-based nanocomposite under electrical cyclic loading were investigated. The electrical-field-induced crack growth was monitored by an optical microscope, and it depended on the number of cycles the sample was subjected to. Resistance to fatigue was significantly enhanced in the nanocomposite. The excellent fatigue behavior of the PZT/Pt nanocomposites may result from the grain boundary strengthening due to the interaction between the matrix and Pt particles. (orig.) 8 refs.

  7. A game theory-based trust measurement model for social networks.

    Science.gov (United States)

    Wang, Yingjie; Cai, Zhipeng; Yin, Guisheng; Gao, Yang; Tong, Xiangrong; Han, Qilong

    2016-01-01

    In social networks, trust is a complex social construct. Participants in online social networks want to share information and experiences with as many reliable users as possible. However, the modeling of trust is complicated and application dependent. Modeling trust needs to consider interaction history, recommendation, user behaviors and so on. Therefore, modeling trust is an important focus for online social networks. We propose a game theory-based trust measurement model for social networks. The trust degree is calculated from three aspects (service reliability, feedback effectiveness, and recommendation credibility) to obtain a more accurate result. In addition, to alleviate the free-riding problem, we propose a game theory-based punishment mechanism for specific trust and global trust, respectively. We prove that the proposed trust measurement model is effective. The free-riding problem can be resolved effectively by adding the proposed punishment mechanism.

  8. Gauge coupling unification in realistic free-fermionic string models

    International Nuclear Information System (INIS)

    Dienes, K.R.; Faraggi, A.E.

    1995-01-01

    We discuss the unification of gauge couplings within the framework of a wide class of realistic free-fermionic string models which have appeared in the literature, including the flipped SU(5), SO(6)xSO(4), and various SU(3)xSU(2)xU(1) models. If the matter spectrum below the string scale is that of the Minimal Supersymmetric Standard Model (MSSM), then string unification is in disagreement with experiment. We therefore examine several effects that may modify the minimal string predictions. First, we develop a systematic procedure for evaluating the one-loop heavy string threshold corrections in free-fermionic string models, and we explicitly evaluate these corrections for each of the realistic models. We find that these string threshold corrections are small, and we provide general arguments explaining why such threshold corrections are suppressed in string theory. Thus heavy thresholds cannot resolve the disagreement with experiment. We also study the effect of non-standard hypercharge normalizations, light SUSY thresholds, and intermediate-scale gauge structure, and similarly conclude that these effects cannot resolve the disagreement with low-energy data. Finally, we examine the effects of additional color triplets and electroweak doublets beyond the MSSM. Although not required in ordinary grand unification scenarios, such states generically appear within the context of certain realistic free-fermionic string models. We show that if these states exist at the appropriate thresholds, then the gauge couplings will indeed unify at the string scale. Thus, within these string models, string unification can be in agreement with low-energy data. (orig.)

  9. Hydrocarbon Fuel Thermal Performance Modeling based on Systematic Measurement and Comprehensive Chromatographic Analysis

    Science.gov (United States)

    2016-07-31

    ...of vital importance for hydrocarbon-fueled propulsion systems: fuel thermal performance as indicated by physical and chemical effects of cooling passage... analysis. The selection and acquisition of a set of chemically diverse fuels is pivotal for a successful outcome since test method validation and...

  10. K2 and K2*: efficient alignment-free sequence similarity measurement based on Kendall statistics.

    Science.gov (United States)

    Lin, Jie; Adjeroh, Donald A; Jiang, Bing-Hua; Jiang, Yue

    2018-05-15

    Alignment-free sequence comparison methods can compute the pairwise similarity between a huge number of sequences much faster than sequence-alignment based methods. We propose a new non-parametric alignment-free sequence comparison method, called K2, based on the Kendall statistics. Compared with other state-of-the-art alignment-free comparison methods, K2 demonstrates competitive performance in generating the phylogenetic tree, in evaluating functionally related regulatory sequences, and in computing the edit distance (similarity/dissimilarity) between sequences. Furthermore, the K2 approach is much faster than the other methods. An improved method, K2*, is also proposed, which is able to determine the appropriate algorithmic parameter (length) automatically, without first considering different values. Comparative analysis with the state-of-the-art alignment-free sequence similarity methods demonstrates the superiority of the proposed approaches, especially with increasing sequence length, or increasing dataset sizes. The K2 and K2* approaches are implemented in the R language as a package and are freely available for open access (http://community.wvu.edu/daadjeroh/projects/K2/K2_1.0.tar.gz). yueljiang@163.com. Supplementary data are available at Bioinformatics online.
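
    As an illustration of the underlying idea (not the exact K2 statistic or its R implementation), the sketch below compares the k-mer count profiles of two short sequences with Kendall's tau.

```python
# Alignment-free similarity in the spirit of K2: rank-correlate k-mer count
# profiles of two sequences with Kendall's tau. Illustrative sketch only.
from itertools import product
from scipy.stats import kendalltau

def kmer_counts(seq, k=3):
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    return [sum(1 for i in range(len(seq) - k + 1) if seq[i:i + k] == km)
            for km in kmers]

s1 = "ACGTACGTGGTACCAGTACGTTACG"
s2 = "ACGTACCTGGTACCAGTACGTAACG"
tau, p = kendalltau(kmer_counts(s1), kmer_counts(s2))
print(f"Kendall tau similarity: {tau:.3f} (p = {p:.3g})")
```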

  11. Toward the M(F)--Theory Embedding of Realistic Free-Fermion Models

    CERN Document Server

    Berglund, P; Faraggi, A E; Nanopoulos, Dimitri V; Qiu, Z; Berglund, Per; Ellis, John; Faraggi, Alon E.; Qiu, Zongan

    1998-01-01

    We construct a Landau-Ginzburg model with the same data and symmetries as a $Z_2\times Z_2$ orbifold that corresponds to a class of realistic free-fermion models. Within the class of interest, we show that this orbifolding connects different $Z_2\times Z_2$ orbifold models and commutes with the mirror symmetry. Our work suggests that duality symmetries previously discussed in the context of specific $M$ and $F$ theory compactifications may be extended to the special $Z_2\times Z_2$ orbifold that characterizes realistic free-fermion models.

  12. Equation-free analysis of two-component system signalling model reveals the emergence of co-existing phenotypes in the absence of multistationarity.

    Directory of Open Access Journals (Sweden)

    Rebecca B Hoyle

    Phenotypic differences of genetically identical cells under the same environmental conditions have been attributed to the inherent stochasticity of biochemical processes. Various mechanisms have been suggested, including the existence of alternative steady states in regulatory networks that are reached by means of stochastic fluctuations, long transient excursions from a stable state to an unstable excited state, and the switching on and off of a reaction network according to the availability of a constituent chemical species. Here we analyse a detailed stochastic kinetic model of two-component system signalling in bacteria, and show that alternative phenotypes emerge in the absence of these features. We perform a bifurcation analysis of deterministic reaction rate equations derived from the model, and find that they cannot reproduce the whole range of qualitative responses to external signals demonstrated by direct stochastic simulations. In particular, the mixed mode, where stochastic switching and a graded response are seen simultaneously, is absent. However, probabilistic and equation-free analyses of the stochastic model that calculate stationary states for the mean of an ensemble of stochastic trajectories reveal that slow transcription of either response regulator or histidine kinase leads to the coexistence of an approximate basal solution and a graded response that combine to produce the mixed mode, thus establishing its essential stochastic nature. The same techniques also show that stochasticity results in the observation of an all-or-none bistable response over a much wider range of external signals than would be expected on deterministic grounds. Thus we demonstrate the application of numerical equation-free methods to a detailed biochemical reaction network model, and show that it can provide new insight into the role of stochasticity in the emergence of phenotypic diversity.

  13. Calibration and analysis of genome-based models for microbial ecology.

    Science.gov (United States)

    Louca, Stilianos; Doebeli, Michael

    2015-10-16

    Microbial ecosystem modeling is complicated by the large number of unknown parameters and the lack of appropriate calibration tools. Here we present a novel computational framework for modeling microbial ecosystems, which combines genome-based model construction with statistical analysis and calibration to experimental data. Using this framework, we examined the dynamics of a community of Escherichia coli strains that emerged in laboratory evolution experiments, during which an ancestral strain diversified into two coexisting ecotypes. We constructed a microbial community model comprising the ancestral and the evolved strains, which we calibrated using separate monoculture experiments. Simulations reproduced the successional dynamics in the evolution experiments, and pathway activation patterns observed in microarray transcript profiles. Our approach yielded detailed insights into the metabolic processes that drove bacterial diversification, involving acetate cross-feeding and competition for organic carbon and oxygen. Our framework provides a missing link towards a data-driven mechanistic microbial ecology.

  14. The Analysis of Organizational Diagnosis on Based Six Box Model in Universities

    Science.gov (United States)

    Hamid, Rahimi; Siadat, Sayyed Ali; Reza, Hoveida; Arash, Shahin; Ali, Nasrabadi Hasan; Azizollah, Arbabisarjou

    2011-01-01

    Purpose: The analysis of organizational diagnosis based on the six box model at universities. Research method: The research method was descriptive-survey. The statistical population consisted of 1544 faculty members of universities, from which 218 persons were chosen as the sample through a random stratified sampling method. The research instruments were organizational…

  15. Keratinocytes propagated in serum-free, feeder-free culture conditions fail to form stratified epidermis in a reconstituted skin model.

    Directory of Open Access Journals (Sweden)

    Rebecca Lamb

    Primary human epidermal stem cells isolated from skin tissues and subsequently expanded in tissue culture are used therapeutically to reconstitute skin on patients and to generate artificial skin in culture for academic and commercial research. Classically, epidermal cells, known as keratinocytes, required fibroblast feeder support and serum-containing media for serial propagation. In alignment with global efforts to remove potential animal contaminants, many serum-free, feeder-free culture methods have been developed that support derivation and growth of these cells in 2-dimensional culture. Here we show that keratinocytes grown continually in serum-free and feeder-free conditions were unable to form a stratified, mature epidermis in a skin equivalent model. This is not due to loss of cell potential as keratinocytes propagated in serum-free, feeder-free conditions retain their ability to form stratified epidermis when re-introduced to classic serum-containing media. Extracellular calcium supplementation failed to improve epidermis development. In contrast, the addition of serum to commercial, growth media developed for serum-free expansion of keratinocytes facilitated 3-dimensional stratification in our skin equivalent model. Moreover, the addition of heat-inactivated serum improved the epidermis structure and thickness, suggesting that serum contains factors that both aid and inhibit stratification.

  16. Free-drop analysis of the transport container for hydrogen isotopes

    International Nuclear Information System (INIS)

    Lee, M. S.; Hong, C. S.; Baek, S. W.; Ahn, D. H.; Kim, K. R.; Lee, S. H.; Lim, S. P.; Jung, H. S.

    2002-01-01

    The vessel used for the transport of radioactive materials containing hydrogen isotopes is evaluated for hypothetical accident conditions according to national regulations. The computational analysis is a cost-effective tool to minimize testing and streamline the regulatory procedures, and it supports experimental programs to qualify the container for the safe transport of radioactive materials. The numerical analysis of a 9 m free drop onto a flat, unyielding, horizontal surface has been performed using the explicit finite element computer program ABAQUS. In particular, free-drop simulations for the 30 deg tilted condition are estimated precisely.

  17. Is tuberculosis treatment really free in China? A study comparing two areas with different management models.

    Directory of Open Access Journals (Sweden)

    Sangsang Qiu

    China has implemented a free-service policy for tuberculosis. However, patients still have to pay a substantial proportion of their annual income for treatment of this disease. This study describes the economic burden on patients with tuberculosis; identifies related factors by comparing two areas with different management models; and provides policy recommendations for tuberculosis control reform in China. There are three tuberculosis management models in China: the tuberculosis dispensary model, the specialist model and the integrated model. We selected Zhangjiagang (ZJG) and Taixing (TX) as the study sites, which correspond to areas implementing the integrated model and the dispensary model, respectively. Patients diagnosed and treated for tuberculosis since January 2010 were recruited as study subjects. A total of 590 patients (316 patients from ZJG and 274 patients from TX) were interviewed, with a response rate of 81%. The economic burden attributed to tuberculosis, including direct costs and indirect costs, was estimated and compared between the two study sites. The Mann-Whitney U test was used to compare the cost differences between the two groups. Potential factors related to the total out-of-pocket costs were analyzed based on a step-by-step multivariate linear regression model after logarithmic transformation of the costs. The average (median, interquartile range) total cost was 18793.33 (9965, 3200-24400) CNY for patients in ZJG, which was significantly higher than for patients in TX (mean: 6598.33, median: 2263, interquartile range: 983-6688) (Z = 10.42, P < 0.001). After excluding expenses covered by health insurance, the average out-of-pocket costs were 14304.4 CNY in ZJG and 5639.2 CNY in TX. Based on the multivariable linear regression analysis, factors related to the total out-of-pocket costs were study site, age, number of clinical visits, residence, diagnosis delay, hospitalization, intake of liver protective drugs and use of the second
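
    The cost comparison above relies on the Mann-Whitney U test; a minimal sketch with small synthetic cost vectors (not the study data) is shown below.

```python
# Mann-Whitney U comparison of out-of-pocket costs between two sites.
# The cost vectors are short synthetic examples, not the study's data.
from scipy.stats import mannwhitneyu

costs_zjg = [9965, 3200, 24400, 15800, 7200, 31000]   # hypothetical CNY values
costs_tx = [2263, 983, 6688, 1500, 4100, 900]
u, p = mannwhitneyu(costs_zjg, costs_tx, alternative="two-sided")
print(f"U = {u}, p = {p:.4f}")
```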

  18. A statistical mechanics model for free-for-all airplane passenger boarding

    Science.gov (United States)

    Steffen, Jason H.

    2008-12-01

    I discuss a model for free-for-all passenger boarding which is employed by some discount air carriers. The model is based on the principles of statistical mechanics, where each seat in the aircraft has an associated energy which reflects the preferences of travelers. As each passenger enters the airplane they select their seats using Boltzmann statistics, proceed to that location, load their luggage, sit down, and the partition function seen by remaining passengers is modified to reflect this fact. I discuss the various model parameters and make qualitative comparisons of this passenger boarding model with those that involve assigned seats. The model can be used to predict the probability that certain seats will be occupied at different times during the boarding process. These results might provide a useful description of this boarding method. The model is a relatively unusual application of undergraduate level physics and describes a situation familiar to many students and faculty.
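
    A minimal sketch of the seat-selection rule described above: each unoccupied seat carries an energy reflecting traveler preference and is chosen with probability proportional to exp(-E/T). The seat energies, cabin layout, and temperature below are illustrative assumptions, not the paper's calibrated values.

```python
# Boltzmann seat selection: each free seat i is picked with probability
# proportional to exp(-E_i / T). Energies and T are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
rows, cols = 30, 6
col_pref = np.array([0.0, 0.5, 1.0, 1.0, 0.5, 0.0])    # window/aisle preferred over middle
E = np.add.outer(np.arange(rows) * 0.1, col_pref)       # seat "energies": front rows preferred
occupied = np.zeros((rows, cols), dtype=bool)
T = 1.0                                                  # "temperature" (choice randomness)

order = []
for _ in range(rows * cols):                             # each passenger boards in turn
    w = np.where(occupied, 0.0, np.exp(-E / T)).ravel()
    seat = rng.choice(rows * cols, p=w / w.sum())        # Boltzmann-weighted seat choice
    occupied.flat[seat] = True
    order.append(divmod(seat, cols))
print("first five seats taken (row, col):", order[:5])
```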

  19. A statistical mechanics model for free-for-all airplane passenger boarding

    International Nuclear Information System (INIS)

    Steffen, Jason H.; Fermilab

    2008-01-01

    I discuss a model for free-for-all passenger boarding which is employed by some discount air carriers. The model is based on the principles of statistical mechanics where each seat in the aircraft has an associated energy which reflects the preferences of travelers. As each passenger enters the airplane they select their seats using Boltzmann statistics, proceed to that location, load their luggage, sit down, and the partition function seen by remaining passengers is modified to reflect this fact. I discuss the various model parameters and make qualitative comparisons of this passenger boarding model with those that involve assigned seats. The model can be used to predict the probability that certain seats will be occupied at different times during the boarding process. These results might provide a useful description of this boarding method. The model is a relatively unusual application of undergraduate level physics and describes a situation familiar to many students and faculty

  20. A statistical mechanics model for free-for-all airplane passenger boarding

    Energy Technology Data Exchange (ETDEWEB)

    Steffen, Jason H.; /Fermilab

    2008-08-01

    I discuss a model for free-for-all passenger boarding which is employed by some discount air carriers. The model is based on the principles of statistical mechanics where each seat in the aircraft has an associated energy which reflects the preferences of travelers. As each passenger enters the airplane they select their seats using Boltzmann statistics, proceed to that location, load their luggage, sit down, and the partition function seen by remaining passengers is modified to reflect this fact. I discuss the various model parameters and make qualitative comparisons of this passenger boarding model with those that involve assigned seats. The model can be used to predict the probability that certain seats will be occupied at different times during the boarding process. These results might provide a useful description of this boarding method. The model is a relatively unusual application of undergraduate level physics and describes a situation familiar to many students and faculty.

  1. Extended nonabelian symmetries for free fermionic model

    International Nuclear Information System (INIS)

    Zaikov, R.P.

    1993-08-01

    The higher spin symmetry for both Dirac and Majorana massless free fermionic field models are considered. An infinite Lie algebra which is a linear realization of the higher spin extension of the cross products of the Virasoro and affine Kac-Moody algebras is obtained. The corresponding current algebra is closed which is not the case of analogous current algebra in the WZNW model. The gauging procedure for the higher spin symmetry is also given. (author). 12 refs

  2. A Requirements Analysis Model Based on QFD

    Institute of Scientific and Technical Information of China (English)

    TANG Zhi-wei; Nelson K.H.Tang

    2004-01-01

    The enterprise resource planning (ERP) system has emerged to offer an integrated IT solution, and more and more enterprises are adopting this system and regarding it as an important innovation. However, there is already evidence of high failure risks in ERP project implementation; one major reason is poor analysis of the requirements for system implementation. In this paper, the importance of requirements analysis for ERP project implementation is highlighted, and a requirements analysis model applying quality function deployment (QFD) is presented, which supports conducting requirements analysis for ERP projects.

  3. LFQuant: a label-free fast quantitative analysis tool for high-resolution LC-MS/MS proteomics data.

    Science.gov (United States)

    Zhang, Wei; Zhang, Jiyang; Xu, Changming; Li, Ning; Liu, Hui; Ma, Jie; Zhu, Yunping; Xie, Hongwei

    2012-12-01

    Database searching based methods for label-free quantification aim to reconstruct the peptide extracted ion chromatogram based on the identification information, which can limit the search space and thus make the data processing much faster. The random effect of the MS/MS sampling can be remedied by cross-assignment among different runs. Here, we present a new label-free fast quantitative analysis tool, LFQuant, for high-resolution LC-MS/MS proteomics data based on database searching. It is designed to accept raw data in two common formats (mzXML and Thermo RAW), and database search results from mainstream tools (MASCOT, SEQUEST, and X!Tandem), as input data. LFQuant can handle large-scale label-free data with fractionation such as SDS-PAGE and 2D LC. It is easy to use and provides handy user interfaces for data loading, parameter setting, quantitative analysis, and quantitative data visualization. LFQuant was compared with two common quantification software packages, MaxQuant and IDEAL-Q, on the replication data set and the UPS1 standard data set. The results show that LFQuant performs better than them in terms of both precision and accuracy, and consumes significantly less processing time. LFQuant is freely available under the GNU General Public License v3.0 at http://sourceforge.net/projects/lfquant/. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Modeling and Analysis of 3d Printing Ws-Bpel Business Processes Based on Servicenet

    Directory of Open Access Journals (Sweden)

    Zhang Cheng-Lei

    2017-01-01

    To determine whether Web services described with the Web Services Business Process Execution Language (WS-BPEL) are interaction-compatible, a method of WS-BPEL parsing and execution was proposed. A service compatibility checking algorithm based on the Mediation model provides multi-level compatibility checking and supports the goals of service cooperation and value-added services. Based on the BPMN specification, a task modeling and management tool was proposed to support assembling service components. It supports automatic Web service retrieval and service content analysis based on QoS information, and transforms the task execution model between BPMN task descriptions into a BPEL task description model. Finally, a model transformation strategy based on meta-model mapping was put forward. The algorithm was designed and examples were given to demonstrate the efficiency of 3D printing WS-BPEL business processes.

  5. Free energy landscape and transition pathways from Watson-Crick to Hoogsteen base pairing in free duplex DNA.

    Science.gov (United States)

    Yang, Changwon; Kim, Eunae; Pak, Youngshang

    2015-09-18

    Hoogsteen (HG) base pairing plays a central role in the DNA binding of proteins and small ligands. Probing the detailed transition mechanism from Watson-Crick (WC) to HG base pair (bp) formation in duplex DNAs is of fundamental importance in terms of revealing intrinsic functions of double helical DNAs beyond their sequence determined functions. We investigated the free energy landscape of a free B-DNA with an adenosine-thymine (A-T) rich sequence to probe its conformational transition pathways from WC to HG base pairing. The free energy landscape was computed with a state-of-the-art two-dimensional umbrella molecular dynamics simulation at the all-atom level. The present simulation showed that in an isolated duplex DNA, the spontaneous transition from WC to HG bp takes place via multiple pathways. Notably, base flipping into the major and minor grooves was found to play an important role in forming these multiple transition pathways. This finding suggests that naked B-DNA under normal conditions has an inherent ability to form HG bps via spontaneous base opening events. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  6. Temporal expression-based analysis of metabolism.

    Directory of Open Access Journals (Sweden)

    Sara B Collins

    Metabolic flux is frequently rerouted through cellular metabolism in response to dynamic changes in the intra- and extra-cellular environment. Capturing the mechanisms underlying these metabolic transitions in quantitative and predictive models is a prominent challenge in systems biology. Progress in this regard has been made by integrating high-throughput gene expression data into genome-scale stoichiometric models of metabolism. Here, we extend previous approaches to perform a Temporal Expression-based Analysis of Metabolism (TEAM). We apply TEAM to understanding the complex metabolic dynamics of the respiratorily versatile bacterium Shewanella oneidensis grown under aerobic, lactate-limited conditions. TEAM predicts temporal metabolic flux distributions using time-series gene expression data. Increased predictive power is achieved by supplementing these data with a large reference compendium of gene expression, which allows us to take into account the unique character of the distribution of expression of each individual gene. We further propose a straightforward method for studying the sensitivity of TEAM to changes in its fundamental free threshold parameter θ, and reveal that discrete zones of distinct metabolic behavior arise as this parameter is changed. By comparing the qualitative characteristics of these zones to additional experimental data, we are able to constrain the range of θ to a small, well-defined interval. In parallel, the sensitivity analysis reveals the inherently difficult nature of dynamic metabolic flux modeling: small errors early in the simulation propagate to relatively large changes later in the simulation. We expect that handling such "history-dependent" sensitivities will be a major challenge in the future development of dynamic metabolic-modeling techniques.

  7. From Creatures of Habit to Goal-Directed Learners: Tracking the Developmental Emergence of Model-Based Reinforcement Learning.

    Science.gov (United States)

    Decker, Johannes H; Otto, A Ross; Daw, Nathaniel D; Hartley, Catherine A

    2016-06-01

    Theoretical models distinguish two decision-making strategies that have been formalized in reinforcement-learning theory. A model-based strategy leverages a cognitive model of potential actions and their consequences to make goal-directed choices, whereas a model-free strategy evaluates actions based solely on their reward history. Research in adults has begun to elucidate the psychological mechanisms and neural substrates underlying these learning processes and factors that influence their relative recruitment. However, the developmental trajectory of these evaluative strategies has not been well characterized. In this study, children, adolescents, and adults performed a sequential reinforcement-learning task that enabled estimation of model-based and model-free contributions to choice. Whereas a model-free strategy was apparent in choice behavior across all age groups, a model-based strategy was absent in children, became evident in adolescents, and strengthened in adults. These results suggest that recruitment of model-based valuation systems represents a critical cognitive component underlying the gradual maturation of goal-directed behavior. © The Author(s) 2016.
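
    A compact sketch of the kind of hybrid valuation typically fit to such sequential (two-stage) tasks is shown below: first-stage values mix a model-free TD estimate with a model-based estimate computed from a known transition model, weighted by a parameter w. The task structure and all parameter values are illustrative assumptions, not the study's fitted model.

```python
# Hybrid model-based / model-free valuation in a toy two-stage task.
import numpy as np

rng = np.random.default_rng(0)
alpha, w = 0.3, 0.5                      # learning rate; model-based weight
P = np.array([[0.7, 0.3],                # P(second-stage state | first-stage action)
              [0.3, 0.7]])
Q_mf = np.zeros(2)                       # model-free values of the first-stage actions
Q_stage2 = np.zeros(2)                   # learned values of the two second-stage states
reward_prob = np.array([0.8, 0.2])       # hypothetical reward probabilities

for trial in range(500):
    Q_mb = P @ Q_stage2                          # model-based values via the transition model
    Q = w * Q_mb + (1 - w) * Q_mf                # hybrid first-stage valuation
    a = int(rng.random() < 0.5) if trial < 10 else int(np.argmax(Q))
    s2 = int(rng.random() < P[a, 1])             # sample the second-stage state
    r = float(rng.random() < reward_prob[s2])    # sample the reward
    Q_stage2[s2] += alpha * (r - Q_stage2[s2])   # second-stage TD update
    Q_mf[a] += alpha * (r - Q_mf[a])             # model-free first-stage update

print("hybrid first-stage values:", np.round(w * (P @ Q_stage2) + (1 - w) * Q_mf, 3))
```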

  8. A photometric analysis of ZZ Ceti stars: A parameter-free temperature indicator?

    Energy Technology Data Exchange (ETDEWEB)

    Bergeron, P [Departement de Physique, Universite de Montreal, C.P. 6128, Succ. Centre-Ville, Montreal, Quebec H3C 3J7 (Canada); Leggett, S K [Gemini Observatory, Northern Operations Center, 670 North A' ohoku Place, Hilo, Hawaii 96720 (United States); Harris, H C, E-mail: bergeron@astro.umontreal.c, E-mail: sleggett@gemini.ed, E-mail: hch@nofs.navy.mi [US Naval Observatory, Flagstaff Station, Flagstaff, Arizona 86001 (United States)

    2009-06-01

    We present a model atmosphere analysis of optical VRI and infrared JHK photometric data of about two dozen ZZ Ceti stars. We first show from a theoretical point of view that the resulting energy distributions are not particularly sensitive to surface gravity or to the assumed convective efficiency, a result which suggests a parameter-free effective temperature indicator for ZZ Ceti stars. We then fit the observed energy distributions with our grid of model atmospheres and compare the photometric effective temperatures with the spectroscopic values obtained from fits to the hydrogen line profiles. Our results are finally discussed in the context of the determination of the empirical boundaries of the ZZ Ceti instability strip.

  9. Label-Free Electrical Detection Using Carbon Nanotube-Based Biosensors

    Directory of Open Access Journals (Sweden)

    Kenzo Maehashi

    2009-07-01

    Label-free detections of biomolecules have attracted great attention in many life science fields such as genomics, clinical diagnosis and practical pharmacy. In this article, we review amperometric and potentiometric biosensors based on carbon nanotubes (CNTs). In amperometric detections, CNT-modified electrodes were used as working electrodes to significantly enhance the electroactive surface area. In contrast, the potentiometric biosensors were based on aptamer-modified CNT field-effect transistors (CNTFETs). Since aptamers are artificial oligonucleotides and thus are smaller than the Debye length, proteins can be detected with high sensitivity. In this review, we discuss the technology, characteristics and developments for commercialization in label-free CNT-based biosensors.

  10. Model Based Analysis of Insider Threats

    DEFF Research Database (Denmark)

    Chen, Taolue; Han, Tingting; Kammueller, Florian

    2016-01-01

    In order to detect malicious insider attacks it is important to model and analyse infrastructures and policies of organisations and the insiders acting within them. We extend formal approaches that allow modelling such scenarios with quantitative aspects to enable a precise analysis of security... designs. Our framework enables quantitative evaluation of the risk that an insider attack will happen. The framework first identifies an insider's intention to perform an inside attack, using Bayesian networks, and in a second phase computes the probability of success for an inside attack by this actor, using...

  11. Studies on surface tension effect for free surface flow around floating models; Futai mokei mawari no jiyu hyomenryu ni oyobosu hyomen choryoku no eikyo ni kansuru kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, K [Yokohama National Univ., Yokohama (Japan). Faculty of Engineering; Akiba, H [Toyo Construction Co. Ltd., Tokyo (Japan)

    1997-12-31

    The effect of surface tension on free surface flow around floating models is discussed experimentally and numerically. Three-dimensional free surface flow around vertical circular cylinders floating in a circulating water channel was visually observed, where a surface-active agent was added to water. The results are analyzed using Weber number. The numerical analysis was done for vertical cylinder and CY100 models using the Rankine source method. Weber number of at least around 120 is necessary to eliminate the effect of surface tension from free surface flow around the CY100 model. The numerical analysis for the cylinder model needs simulation with wavelength shorter than that of free surface wave used by the Rankine source method. The model for the resistance test should be at least around 7m long to eliminate the effect of surface tension at Froude number of 0.1 or higher. 15 refs., 12 figs., 2 tabs.

  12. Studies on surface tension effect for free surface flow around floating models; Futai mokei mawari no jiyu hyomenryu ni oyobosu hyomen choryoku no eikyo ni kansuru kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, K. [Yokohama National Univ., Yokohama (Japan). Faculty of Engineering; Akiba, H. [Toyo Construction Co. Ltd., Tokyo (Japan)

    1996-12-31

    The effect of surface tension on free surface flow around floating models is discussed experimentally and numerically. Three-dimensional free surface flow around vertical circular cylinders floating in a circulating water channel was visually observed, where a surface-active agent was added to water. The results are analyzed using Weber number. The numerical analysis was done for vertical cylinder and CY100 models using the Rankine source method. Weber number of at least around 120 is necessary to eliminate the effect of surface tension from free surface flow around the CY100 model. The numerical analysis for the cylinder model needs simulation with wavelength shorter than that of free surface wave used by the Rankine source method. The model for the resistance test should be at least around 7m long to eliminate the effect of surface tension at Froude number of 0.1 or higher. 15 refs., 12 figs., 2 tabs.

  13. Free Vibration Analyses of FGM Thin Plates by Isogeometric Analysis Based on Classical Plate Theory and Physical Neutral Surface

    Directory of Open Access Journals (Sweden)

    Shuohui Yin

    2013-01-01

    The isogeometric analysis with nonuniform rational B-splines (NURBS) based on the classical plate theory (CPT) is developed for free vibration analyses of functionally graded material (FGM) thin plates. The objective of this work is to provide an efficient and accurate numerical simulation approach for nonhomogeneous thin plates and shells. Higher-order basis functions can be easily obtained in IGA; thus the formulation of CPT based on the IGA can be simplified. For the FGM thin plates, the material property gradient in the thickness direction is unsymmetrical about the midplane, so the effects of midplane displacements cannot be ignored, whereas the CPT neglects midplane displacements. To eliminate the effects of midplane displacements without introducing new unknown variables, the physical neutral surface is introduced into the CPT. The approximation of the deflection field and the geometric description are performed by using the NURBS basis functions. Compared with the first-order shear deformation theory, the present method has lower memory consumption and higher efficiency. Several numerical results show that the present method yields highly accurate solutions.

  14. Analysis for Ad Hoc Network Attack-Defense Based on Stochastic Game Model

    Directory of Open Access Journals (Sweden)

    Yuanjie LI

    2014-06-01

    The analysis of attack actions in Ad Hoc networks can provide a reference for the design of security mechanisms. This paper presents an analysis method for the security of Ad Hoc networks based on Stochastic Game Nets (SGN). The method can establish an SGN model of Ad Hoc networks and compute the Nash equilibrium strategy. After transforming the SGN model into a continuous-time Markov Chain (CTMC), the security of Ad Hoc networks can be evaluated and analyzed quantitatively by calculating the stationary probability of the CTMC. Finally, the Matlab simulation results show that the probability of successful attack is related to the attack intensity and expected payoffs, but not the attack rate.
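
    The CTMC step can be illustrated by computing a stationary distribution from a generator matrix: solve πQ = 0 with the probabilities summing to one. The 3-state generator below is a hypothetical example, not the SGN-derived model.

```python
# Stationary distribution of a CTMC from its generator matrix Q (rows sum to zero).
import numpy as np

Q = np.array([[-0.6,  0.5,  0.1],
              [ 0.2, -0.4,  0.2],
              [ 0.1,  0.3, -0.4]])
n = Q.shape[0]
# Solve pi Q = 0 together with sum(pi) = 1 by replacing one balance equation
A = np.vstack([Q.T[:-1], np.ones(n)])
b = np.zeros(n); b[-1] = 1.0
pi = np.linalg.solve(A, b)
print("stationary probabilities:", np.round(pi, 4))
```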

  15. Economical analysis based on a simplified model of micro distillery

    International Nuclear Information System (INIS)

    Bristoti, A.; Adams, R.

    1987-01-01

    The investment costs of a hydrated-alcohol distillery, as well as the energy balance of its production, have made its diffusion on a small scale inviable. The present economic analysis is based on a simplified model of a micro distillery, in which the reduction in investment costs relies on the possibility of using a hydrated alcohol with a higher water content than usual, i.e. around 85° GL. The engineering design of this plant eliminates all pumps; all liquids (water, sugar cane, syrup and wine) are fed by gravity. The bagasse is considered a noble by-product and is utilized in the poultry industry as a substitute for wood dust. All the heat needs of the micro distillery are supplied by wood. (author)

  16. Sensitivity analysis and calibration of a dynamic physically based slope stability model

    Science.gov (United States)

    Zieher, Thomas; Rutzinger, Martin; Schneider-Muntau, Barbara; Perzl, Frank; Leidinger, David; Formayer, Herbert; Geitner, Clemens

    2017-06-01

    Physically based modelling of slope stability on a catchment scale is still a challenging task. When applying a physically based model on such a scale (1 : 10 000 to 1 : 50 000), parameters with a high impact on the model result should be calibrated to account for (i) the spatial variability of parameter values, (ii) shortcomings of the selected model, (iii) uncertainties of laboratory tests and field measurements or (iv) parameters that cannot be derived experimentally or measured in the field (e.g. calibration constants). While systematic parameter calibration is a common task in hydrological modelling, this is rarely done using physically based slope stability models. In the present study a dynamic, physically based, coupled hydrological-geomechanical slope stability model is calibrated based on a limited number of laboratory tests and a detailed multitemporal shallow landslide inventory covering two landslide-triggering rainfall events in the Laternser valley, Vorarlberg (Austria). Sensitive parameters are identified based on a local one-at-a-time sensitivity analysis. These parameters (hydraulic conductivity, specific storage, angle of internal friction for effective stress, cohesion for effective stress) are systematically sampled and calibrated for a landslide-triggering rainfall event in August 2005. The identified model ensemble, including 25 behavioural model runs with the highest portion of correctly predicted landslides and non-landslides, is then validated with another landslide-triggering rainfall event in May 1999. The identified model ensemble correctly predicts the location and the supposed triggering timing of 73.0 % of the observed landslides triggered in August 2005 and 91.5 % of the observed landslides triggered in May 1999. Results of the model ensemble driven with raised precipitation input reveal a slight increase in areas potentially affected by slope failure. At the same time, the peak run-off increases more markedly, suggesting that
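
    A schematic of the one-at-a-time step is sketched below: each parameter is perturbed by +10 % around a baseline and ranked by the normalised change in a simple infinite-slope factor of safety. The function and the parameter values are simplified illustrations, not the coupled hydrological-geomechanical model of the study.

```python
# Local one-at-a-time (OAT) sensitivity sketch around a baseline parameter set,
# using a simplified infinite-slope factor-of-safety as the model output.
import math

base = {"cohesion": 5.0e3,        # effective cohesion, Pa
        "phi": 30.0,              # effective friction angle, degrees
        "sat": 0.8,               # relative saturation (hypothetical proxy)
        "depth": 1.5}             # failure-plane depth, m

def factor_of_safety(p, slope=35.0, gamma=19e3, gamma_w=9.81e3):
    beta = math.radians(slope)
    sigma = p["depth"] * gamma * math.cos(beta) ** 2          # normal stress
    pore = p["sat"] * p["depth"] * gamma_w * math.cos(beta) ** 2  # pore pressure
    tau = p["depth"] * gamma * math.sin(beta) * math.cos(beta)    # shear stress
    return (p["cohesion"] + (sigma - pore) * math.tan(math.radians(p["phi"]))) / tau

fs0 = factor_of_safety(base)
for name in base:
    up = dict(base); up[name] *= 1.10                 # +10 % perturbation
    s = (factor_of_safety(up) - fs0) / (0.10 * fs0)   # normalised local sensitivity
    print(f"{name:9s} sensitivity: {s:+.2f}")
```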

  17. Data from quantitative label free proteomics analysis of rat spleen.

    Science.gov (United States)

    Dudekula, Khadar; Le Bihan, Thierry

    2016-09-01

    The dataset presented in this work has been obtained using a label-free quantitative proteomic analysis of rat spleen. A robust method for extraction of proteins from rat spleen tissue and LC-MS-MS analysis was developed using a urea and SDS-based buffer. Different fractionation methods were compared. A total of 3484 different proteins were identified from the pool of all experiments run in this study (a total of 2460 proteins with at least two peptides). A total of 1822 proteins were identified from nine non-fractionated pulse gels, 2288 proteins and 2864 proteins were identified by SDS-PAGE fractionation into three and five fractions respectively. The proteomics data are deposited in ProteomeXchange Consortium via PRIDE PXD003520, Progenesis and Maxquant output are presented in the supporting information. The generated list of proteins under different regimes of fractionation allows assessing the nature of the identified proteins and the variability in the quantitative analysis associated with the different sampling strategies, and allows defining a proper number of replicates for future quantitative analysis.

  18. A semi-analytical three-dimensional free vibration analysis of functionally graded curved panels

    Energy Technology Data Exchange (ETDEWEB)

    Zahedinejad, P. [Department of Mechanical Engineering, Islamic Azad University, Branch of Shiraz, Shiraz (Iran, Islamic Republic of); Malekzadeh, P., E-mail: malekzadeh@pgu.ac.i [Department of Mechanical Engineering, Persian Gulf University, Persian Gulf University Boulevard, Bushehr 75168 (Iran, Islamic Republic of); Center of Excellence for Computational Mechanics, Shiraz University, Shiraz (Iran, Islamic Republic of); Farid, M. [Department of Mechanical Engineering, Islamic Azad University, Branch of Shiraz, Shiraz (Iran, Islamic Republic of); Karami, G. [Department of Mechanical Engineering and Applied Mechanics, North Dakota State University, Fargo, ND 58105-5285 (United States)

    2010-08-15

    Based on the three-dimensional elasticity theory, free vibration analysis of functionally graded (FG) curved thick panels under various boundary conditions is studied. Panels with two opposite edges simply supported and arbitrary boundary conditions at the other edges are considered. Two different models of material properties variations based on the power law distribution in terms of the volume fractions of the constituents and the exponential distribution of the material properties through the thickness are considered. Differential quadrature method in conjunction with the trigonometric functions is used to discretize the governing equations. With a continuous material properties variation assumption through the thickness of the curved panel, differential quadrature method is efficiently used to discretize the governing equations and to implement the related boundary conditions at the top and bottom surfaces of the curved panel and in strong form. The convergence of the method is demonstrated and to validate the results, comparisons are made with the solutions for isotropic and FG curved panels. By examining the results of thick FG curved panels for various geometrical and material parameters and subjected to different boundary conditions, the influence of these parameters, and in particular those due to the functionally graded material parameters, is studied.

  19. Cybernetic modeling based on pathway analysis for Penicillium chrysogenum fed-batch fermentation.

    Science.gov (United States)

    Geng, Jun; Yuan, Jingqi

    2010-08-01

    A macrokinetic model employing cybernetic methodology is proposed to describe mycelium growth and penicillin production. Based on the primordial and complete metabolic network of Penicillium chrysogenum found in the literature, the modeling procedure is guided by metabolic flux analysis and cybernetic modeling framework. The abstracted cybernetic model describes the transients of the consumption rates of the substrates, the assimilation rates of intermediates, the biomass growth rate, as well as the penicillin formation rate. Combined with the bioreactor model, these reaction rates are linked with the most important state variables, i.e., mycelium, substrate and product concentrations. Simplex method is used to estimate the sensitive parameters of the model. Finally, validation of the model is carried out with 20 batches of industrial-scale penicillin cultivation.

  20. Modeling paraxial wave propagation in free-electron laser oscillators

    NARCIS (Netherlands)

    Karssenberg, J.G.; van der Slot, Petrus J.M.; Volokhine, I.; Verschuur, Jeroen W.J.; Boller, Klaus J.

    2006-01-01

    Modeling free-electron laser (FEL) oscillators requires calculation of both the light-beam interaction within the undulator and the light propagation outside the undulator. We have developed a paraxial optical propagation code that can be combined with various existing models of gain media, for

  1. Analysis of free-surface flows through energy considerations: Single-phase versus two-phase modeling.

    Science.gov (United States)

    Marrone, Salvatore; Colagrossi, Andrea; Di Mascio, Andrea; Le Touzé, David

    2016-05-01

    The study of energetic free-surface flows is challenging because of the large range of interface scales involved, due to multiple fragmentations and reconnections of the air-water interface with the formation of drops and bubbles. Because of this complexity, the investigation of such phenomena through numerical simulation has increased considerably in recent years. In the last decades, different numerical models have been developed to study these flows, especially in the context of particle methods, in which a single-phase approximation is usually adopted to reduce the computational cost and model complexity. While it is well known that the role of air strongly affects the local flow evolution, it is still not clear whether this single-phase approximation can predict global flow features such as the evolution of the global mechanical energy dissipation. The present work addresses this topic through the study of a selected problem simulated with both single-phase and two-phase models. It is shown that, interestingly, even though the flow evolutions are different, the energy evolutions can be similar whether or not the presence of air is included. This is remarkable since, in the problem considered, about half of the energy is lost in the air phase with the two-phase model, whereas in the single-phase model the energy is mainly dissipated by cavity collapses.

  2. Automated Analysis of Corpora Callosa

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Davies, Rhodri H.

    2003-01-01

    This report describes and evaluates the steps needed to perform modern model-based interpretation of the corpus callosum in MRI. The process is discussed from the initial landmark-free contours to full-fledged statistical models based on the Active Appearance Models framework. Topics treated include landmark placement, background modelling and multi-resolution analysis. Preliminary quantitative and qualitative validation in a cross-sectional study shows that fully automated analysis and segmentation of the corpus callosum are feasible.

  3. Rapid and interference-free analysis of nine B-group vitamins in energy drinks using trilinear component modeling of liquid chromatography-mass spectrometry data.

    Science.gov (United States)

    Hu, Yong; Wu, Hai-Long; Yin, Xiao-Li; Gu, Hui-Wen; Xiao, Rong; Xie, Li-Xia; Liu, Zhi; Fang, Huan; Wang, Li; Yu, Ru-Qin

    2018-04-01

    The aim of the present work was to develop a rapid and interference-free method based on liquid chromatography-mass spectrometry (LC-MS) for the simultaneous determination of nine B-group vitamins in various energy drinks. A smart and green strategy was developed that models the three-way LC-MS data array with second-order calibration methods based on the alternating trilinear decomposition (ATLD) and alternating penalty trilinear decomposition (APTLD) algorithms. By virtue of "mathematical separation" and the "second-order advantage", the proposed strategy successfully resolved co-eluted peaks and unknown interferents in the LC-MS analysis, with an elution time of less than 4.5 min and simple sample preparation. Satisfactory quantitative results were obtained with the ATLD-LC-MS and APTLD-LC-MS methods for the spiked recovery assays, with average spiked recoveries of 87.2-113.9% and 92.0-111.7%, respectively. These results were confirmed by an LC-MS/MS method and showed good consistency with it. Altogether, the results demonstrate that the developed chemometrics-assisted LC-MS strategy is rapid, green, accurate and low-cost, and that it could be an attractive alternative for the determination of multiple vitamins in complex food matrices, requiring no laborious sample preparation, tedious condition optimization or sophisticated instrumentation. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. An Exploration of the System Dynamics Field : A Model-Based Policy Analysis

    NARCIS (Netherlands)

    Rose, A.C.

    2014-01-01

    This report presents a first look study at the field of System Dynamics. The objective of the study is to perform a model-based policy analysis in order to investigate the future advancement of the System Dynamics field. The aim of this investigation is to determine what this advancement should look

  5. A Nationwide Analysis of Cost Variation for Autologous Free Flap Breast Reconstruction.

    Science.gov (United States)

    Billig, Jessica I; Lu, Yiwen; Momoh, Adeyiza O; Chung, Kevin C

    2017-11-01

    Cost variation among hospitals has been demonstrated for surgical procedures. Uncovering these differences has helped guide measures taken to reduce health care spending. To date, the fiscal consequence of hospital variation for autologous free flap breast reconstruction is unknown. The aim was to investigate factors that influence cost variation for autologous free flap breast reconstruction. A secondary cross-sectional analysis was performed using the Healthcare Cost and Utilization Project National Inpatient Sample database from 2008 to 2010. The dates of analysis were September 2016 to February 2017. The setting was a stratified sample of all US community hospitals. Participants were female patients who were diagnosed as having breast cancer or were at high risk for breast cancer and underwent autologous free flap breast reconstruction. Variables of interest included demographic data, hospital characteristics, length of stay, complications (surgical and systemic), and inpatient cost. The study used univariate and generalized linear mixed models to examine associations between patient and hospital characteristics and cost. A total of 3302 patients were included in the study, with a median age of 50 years (interquartile range, 44-57 years). The mean cost for autologous free flap breast reconstruction was $22 677 (interquartile range, $14 907-$33 391). Flap reconstructions performed at high-volume hospitals were significantly more costly than those performed at low-volume hospitals ($24 360 vs $18 918). Logistic regression demonstrated that hospital volume correlated with increased cost (Exp[β], 1.06; 95% CI, 1.02-1.11; P = .003). Fewer surgical complications were observed at high-volume hospitals (16.4% [169 of 1029] vs 23.7% [278 of 1174]). There is considerable cost variation among patients undergoing autologous free flap breast reconstruction. Experience, as measured by a hospital's volume, provides quality health care with fewer complications but is more costly. Longer length of stay contributed to regional

  6. Comprehensive ecosystem model-experiment synthesis using multiple datasets at two temperate forest free-air CO2 enrichment experiments: model performance and compensating biases

    Energy Technology Data Exchange (ETDEWEB)

    Walker, Anthony P [ORNL; Hanson, Paul J [ORNL; DeKauwe, Martin G [Macquarie University; Medlyn, Belinda [Macquarie University; Zaehle, S [Max Planck Institute for Biogeochemistry; Asao, Shinichi [Colorado State University, Fort Collins; Dietze, Michael [University of Illinois, Urbana-Champaign; Hickler, Thomas [Goethe University, Frankfurt, Germany; Huntinford, Chris [Centre for Ecology and Hydrology, Wallingford, United Kingdom; Iversen, Colleen M [ORNL; Jain, Atul [University of Illinois, Urbana-Champaign; Lomas, Mark [University of Sheffield; Luo, Yiqi [University of Oklahoma; McCarthy, Heather R [Duke University; Parton, William [Colorado State University, Fort Collins; Prentice, I. Collin [Macquarie University; Thornton, Peter E [ORNL; Wang, Shusen [Canada Centre for Remote Sensing (CCRS); Wang, Yingping [CSIRO Marine and Atmospheric Research; Warlind, David [Lund University, Sweden; Weng, Ensheng [University of Oklahoma, Norman; Warren, Jeffrey [ORNL; Woodward, F. Ian [University of Sheffield; Oren, Ram [Duke University; Norby, Richard J [ORNL

    2014-01-01

    Free-Air CO2 Enrichment (FACE) experiments provide a remarkable wealth of data with which to test the sensitivities of terrestrial ecosystem models (TEMs). In this study, a broad set of 11 TEMs was compared to 22 years of data from two contrasting FACE experiments in temperate forests of the south-eastern US: the evergreen Duke Forest and the deciduous Oak Ridge forest. We evaluated the models' ability to reproduce observed net primary productivity (NPP), transpiration and leaf area index (LAI) in the ambient CO2 treatments. Encouragingly, many models simulated annual NPP and transpiration within observed uncertainty. Daily transpiration errors were often related to errors in leaf area phenology and peak LAI. Our analysis demonstrates that the simulation of LAI often drives the simulation of transpiration, and hence there is a need to adopt the most appropriate hypothesis-driven methods to simulate and predict LAI. Of the three competing hypotheses for determining peak LAI, namely (1) optimisation to maximise carbon export, (2) increasing SLA with canopy depth, and (3) the pipe model, the pipe model produced LAI closest to the observations. Modelled phenology was either prescribed or based on broader empirical calibrations to climate. In some cases, simulation accuracy was achieved through compensating biases in component variables. For example, NPP accuracy was sometimes achieved with counter-balancing biases in nitrogen use efficiency and nitrogen uptake. Combined analysis of parallel measurements aids the identification of offsetting biases; without it, over-confidence in model abilities to predict ecosystem function may emerge, potentially leading to erroneous predictions of change under future climates.

  7. Designing novel cellulase systems through agent-based modeling and global sensitivity analysis

    Science.gov (United States)

    Apte, Advait A; Senger, Ryan S; Fong, Stephen S

    2014-01-01

    Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement. PMID:24830736

  8. Designing novel cellulase systems through agent-based modeling and global sensitivity analysis.

    Science.gov (United States)

    Apte, Advait A; Senger, Ryan S; Fong, Stephen S

    2014-01-01

    Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement.
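
    Neither abstract includes code; the following is a minimal, hypothetical sketch of the kind of sampling-based global sensitivity screening described above, written in plain NumPy. The stand-in model function, parameter names, and ranges are illustrative only and are not taken from the paper.

```python
import numpy as np

# Hypothetical black-box stand-in for an agent-based model of cellulose hydrolysis:
# returns "glucose released" as a function of three illustrative parameters.
def hydrolysis_model(half_life, exo_activity, complexed_fraction):
    return (np.log1p(half_life) * exo_activity) * (1.0 + 0.5 * complexed_fraction)

# Illustrative parameter ranges (not taken from the paper).
ranges = {
    "half_life":          (1.0, 48.0),   # hours
    "exo_activity":       (0.1, 2.0),    # relative units
    "complexed_fraction": (0.0, 1.0),
}

rng = np.random.default_rng(0)
n = 2000
samples = {name: rng.uniform(lo, hi, n) for name, (lo, hi) in ranges.items()}
y = hydrolysis_model(samples["half_life"],
                     samples["exo_activity"],
                     samples["complexed_fraction"])

# Crude first-order sensitivity proxy: squared correlation of each input with the output.
for name, x in samples.items():
    r = np.corrcoef(x, y)[0, 1]
    print(f"{name:20s} R^2 ~ {r**2:.3f}")
```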

  9. Linear models of coregionalization for multivariate lattice data: Order-dependent and order-free cMCARs.

    Science.gov (United States)

    MacNab, Ying C

    2016-08-01

    This paper is concerned with multivariate conditional autoregressive models defined by linear combinations of independent or correlated underlying spatial processes. Known as linear models of coregionalization, the approach offers a systematic and unified way of formulating multivariate extensions to a broad range of univariate conditional autoregressive models. The resulting multivariate spatial models represent classes of coregionalized multivariate conditional autoregressive models that enable flexible modelling of multivariate spatial interactions, yielding coregionalization models with symmetric or asymmetric cross-covariances of different spatial variation and smoothness. In the context of multivariate disease mapping, for example, they facilitate borrowing strength both over space and across variables, allowing for more flexible multivariate spatial smoothing. Specifically, we present a broadened coregionalization framework that includes order-dependent, order-free, and order-robust multivariate models; a new class of order-free coregionalized multivariate conditional autoregressive models is introduced. We tackle computational challenges and present solutions that are integral to Bayesian analysis of these models. We also discuss two ways of computing the deviance information criterion for comparison among competing hierarchical models with or without unidentifiable prior parameters. The models and related methodology are developed in the broad context of modelling multivariate data on a spatial lattice and are illustrated in the context of multivariate disease mapping. The coregionalization framework and related methods also present a general approach for building spatially structured cross-covariance functions for multivariate geostatistics. © The Author(s) 2016.

  10. A generalized mean-squared displacement from inelastic fixed window scans of incoherent neutron scattering as a model-free indicator of anomalous diffusion confinement

    International Nuclear Information System (INIS)

    Roosen-Runge, F.; Seydel, T.

    2015-01-01

    Elastic fixed window scans of incoherent neutron scattering are an established and frequently employed method to study dynamical changes, usually over a broad temperature range or during a process such as a conformational change in the sample. In particular, the apparent mean-squared displacement can be extracted via a model-free analysis based on a solid physical interpretation as an effective amplitude of molecular motions. Here, we provide a new account of elastic and inelastic fixed window scans, defining a generalized mean-squared displacement for all fixed energy transfers. We show that this generalized mean-squared displacement in principle contains all information on the real mean-squared displacement accessible in the instrumental time window. The derived formula provides a clear understanding of the effects of instrumental resolution on the apparent mean-squared displacement. Finally, we show that the generalized mean-squared displacement can be used as a model-free indicator of confinement effects within the instrumental time window. (authors)
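
    For context, the apparent mean-squared displacement from purely elastic scans is commonly obtained from the Gaussian approximation below; the generalized, energy-resolved quantity defined by the authors extends this idea to non-zero energy transfers (their exact formula is given in the paper and is not reproduced here):

```latex
% Gaussian approximation for the elastic fixed-window intensity and the resulting
% model-free estimate of the mean-squared displacement <u^2>:
S_{\mathrm{el}}(q, \Delta E = 0) \;\propto\; \exp\!\left(-\frac{q^{2}\langle u^{2}\rangle}{3}\right)
\quad\Longrightarrow\quad
\langle u^{2}\rangle \;\approx\; -3\,\frac{\partial \ln S_{\mathrm{el}}}{\partial(q^{2})}
```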

  11. Modeling and Analysis of Space Based Transceivers

    Science.gov (United States)

    Moore, Michael S.; Price, Jeremy C.; Abbott, Ben; Liebetreu, John; Reinhart, Richard C.; Kacpura, Thomas J.

    2007-01-01

    This paper presents the tool chain, methodology, and initial results of a study to provide a thorough, objective, and quantitative analysis of the design alternatives for space Software Defined Radio (SDR) transceivers. The approach taken was to develop a set of models and tools for describing communications requirements, the algorithm resource requirements, the available hardware, and the alternative software architectures, and generate analysis data necessary to compare alternative designs. The Space Transceiver Analysis Tool (STAT) was developed to help users identify and select representative designs, calculate the analysis data, and perform a comparative analysis of the representative designs. The tool allows the design space to be searched quickly while permitting incremental refinement in regions of higher payoff.

  12. An alternative model of free fall

    Science.gov (United States)

    Lattery, Mark

    2018-03-01

    In Two World Systems (Galileo 1632/1661 Dialogues Concerning Two New Sciences (New York: Prometheus)), Galileo attempted to unify terrestrial and celestial motions using the Aristotelian principle of circularity. The result was a model of free fall that correctly predicts the linear increase of the velocity of an object released from rest near the surface of the Earth. This historical episode provides an opportunity to communicate the nature of science to students.

  13. 3D Building Models Segmentation Based on K-Means++ Cluster Analysis

    Science.gov (United States)

    Zhang, C.; Mao, B.

    2016-10-01

    3D mesh model segmentation has drawn increasing attention in the digital geometry processing field in recent years. The original 3D mesh model needs to be divided into separate meaningful parts or surface patches, based on certain standards, to support reconstruction, compression, texture mapping, model retrieval, and so on. Segmentation is therefore a key problem in 3D mesh model processing. In this paper, we propose a method to segment Collada (a type of mesh model) 3D building models into meaningful parts using cluster analysis. Common clustering methods segment 3D mesh models by K-means, whose performance depends heavily on the randomized initial seed points (i.e., centroids); different randomized centroids can give quite different results. We therefore improved the existing method and used the K-means++ clustering algorithm to solve this problem. Our experiments show that K-means++ improves both the speed and the accuracy of K-means and achieves good and meaningful results.
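
    As an illustration of the clustering step only (not the authors' full pipeline), the sketch below segments hypothetical per-face features of a building mesh with k-means++ seeding as implemented in scikit-learn; the feature construction and the number of clusters are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative stand-in for per-face features of a Collada building mesh:
# face centroids concatenated with face normals (n_faces x 6).
rng = np.random.default_rng(0)
face_features = np.hstack([rng.uniform(0, 10, (500, 3)),   # centroids
                           rng.normal(size=(500, 3))])     # normals

# K-means++ seeding (the default 'init' in scikit-learn) avoids the poor,
# purely random centroid initialisation discussed in the abstract.
km = KMeans(n_clusters=6, init="k-means++", n_init=10, random_state=0)
labels = km.fit_predict(face_features)
print(np.bincount(labels))   # number of faces assigned to each segment
```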

  14. 3D BUILDING MODELS SEGMENTATION BASED ON K-MEANS++ CLUSTER ANALYSIS

    Directory of Open Access Journals (Sweden)

    C. Zhang

    2016-10-01

    3D mesh model segmentation has drawn increasing attention in the digital geometry processing field in recent years. The original 3D mesh model needs to be divided into separate meaningful parts or surface patches, based on certain standards, to support reconstruction, compression, texture mapping, model retrieval, and so on. Segmentation is therefore a key problem in 3D mesh model processing. In this paper, we propose a method to segment Collada (a type of mesh model) 3D building models into meaningful parts using cluster analysis. Common clustering methods segment 3D mesh models by K-means, whose performance depends heavily on the randomized initial seed points (i.e., centroids); different randomized centroids can give quite different results. We therefore improved the existing method and used the K-means++ clustering algorithm to solve this problem. Our experiments show that K-means++ improves both the speed and the accuracy of K-means and achieves good and meaningful results.

  15. HMM-based lexicon-driven and lexicon-free word recognition for online handwritten Indic scripts.

    Science.gov (United States)

    Bharath, A; Madhvanath, Sriganesh

    2012-04-01

    Research on recognizing online handwritten words in Indic scripts is at an early stage compared to Latin and Oriental scripts. In this paper, we address this problem specifically for two major Indic scripts--Devanagari and Tamil. In contrast to previous approaches, the techniques we propose are largely data driven and script independent. We propose two different techniques for word recognition based on Hidden Markov Models (HMM): lexicon driven and lexicon free. The lexicon-driven technique models each word in the lexicon as a sequence of symbol HMMs according to a standard symbol writing order derived from the phonetic representation. The lexicon-free technique uses a novel Bag-of-Symbols representation of the handwritten word that is independent of symbol order and allows rapid pruning of the lexicon. On handwritten Devanagari word samples featuring both standard and nonstandard symbol writing orders, a combination of lexicon-driven and lexicon-free recognizers significantly outperforms either of them used in isolation. In contrast, most Tamil word samples feature the standard symbol order, and the lexicon-driven recognizer outperforms the lexicon-free one as well as their combination. The best recognition accuracies obtained for 20,000-word lexicons are 87.13 percent for Devanagari when the two recognizers are combined, and 91.8 percent for Tamil using the lexicon-driven technique.

  16. Fourier-based linear systems description of free-breathing pulmonary magnetic resonance imaging

    Science.gov (United States)

    Capaldi, D. P. I.; Svenningsen, S.; Cunningham, I. A.; Parraga, G.

    2015-03-01

    Fourier decomposition of free-breathing pulmonary magnetic resonance imaging (FDMRI) was recently piloted as a way to provide rapid quantitative pulmonary maps of ventilation and perfusion without the use of exogenous contrast agents. This method exploits fast pulmonary MRI acquisition of free-breathing proton (1H) pulmonary images and non-rigid registration to compensate for changes in position and shape of the thorax associated with breathing. In this way, ventilation imaging using conventional MRI systems can be undertaken, but there has been no systematic evaluation of fundamental image quality measurements based on linear systems theory. We investigated the performance of free-breathing pulmonary ventilation imaging using a Fourier-based linear systems description of each operation required to generate FDMRI ventilation maps. Twelve subjects with chronic obstructive pulmonary disease (COPD) or bronchiectasis underwent pulmonary function tests and MRI. Non-rigid registration was used to co-register the temporal series of pulmonary images. Pulmonary voxel intensities were aligned along a time axis and discrete Fourier transforms were performed on the periodic signal intensity pattern to generate frequency spectra. We determined the signal-to-noise ratio (SNR) of the FDMRI ventilation maps using a conventional approach (SNRC) and using the Fourier-based description (SNRF). Mean SNR was 4.7 ± 1.3 for subjects with bronchiectasis and 3.4 ± 1.8 for COPD subjects (p > .05). SNRF was significantly different from SNRC (p < .01) and was approximately 50% of SNRC, suggesting that the conventional approach overestimates SNR relative to the Fourier-based linear systems estimate.
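
    A minimal sketch of the Fourier-decomposition step is given below, assuming an already registered free-breathing image series; the frame rate, breathing and cardiac frequencies, and the synthetic data are illustrative only.

```python
import numpy as np

# Sketch of Fourier decomposition of a registered free-breathing series:
# 'series' is a (n_frames, ny, nx) array of co-registered lung images (synthetic here).
fs = 3.0                       # frames per second (illustrative acquisition rate)
n_frames, ny, nx = 240, 64, 64
t = np.arange(n_frames) / fs
rng = np.random.default_rng(0)
series = (1.0
          + 0.10 * np.sin(2 * np.pi * 0.25 * t)[:, None, None]   # ~breathing (0.25 Hz)
          + 0.03 * np.sin(2 * np.pi * 1.00 * t)[:, None, None]   # ~cardiac (1 Hz)
          + 0.01 * rng.normal(size=(n_frames, ny, nx)))

# Per-voxel spectrum along the time axis after removing the temporal mean.
spectra = np.abs(np.fft.rfft(series - series.mean(axis=0), axis=0))
freqs = np.fft.rfftfreq(n_frames, d=1 / fs)

ventilation_map = spectra[np.argmin(np.abs(freqs - 0.25))]   # amplitude at breathing frequency
perfusion_map   = spectra[np.argmin(np.abs(freqs - 1.00))]   # amplitude at cardiac frequency
print(ventilation_map.mean(), perfusion_map.mean())
```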

  17. Video Quality Prediction Models Based on Video Content Dynamics for H.264 Video over UMTS Networks

    Directory of Open Access Journals (Sweden)

    Asiya Khan

    2010-01-01

    The aim of this paper is to present video quality prediction models for objective, non-intrusive prediction of H.264 encoded video for all content types, combining parameters from both the physical and application layers, over Universal Mobile Telecommunication System (UMTS) networks. To characterize the Quality of Service (QoS) level, a learning model based on the Adaptive Neural Fuzzy Inference System (ANFIS) and a second model based on non-linear regression analysis are proposed to predict video quality in terms of the Mean Opinion Score (MOS). The objective of the paper is two-fold: first, to determine the impact of QoS parameters on end-to-end video quality for H.264 encoded video; second, to develop learning models based on ANFIS and non-linear regression analysis to predict video quality over UMTS networks while accounting for the impact of radio link loss models. The loss models considered are 2-state Markov models. Both models are trained with a combination of physical and application layer parameters and validated on an unseen dataset. Preliminary results show that good prediction accuracy was obtained from both models. This work should help in the development of a reference-free video prediction model and QoS control methods for video over UMTS networks.
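
    As a sketch of the non-linear regression variant only (ANFIS is not shown), the example below fits a hypothetical MOS model to two illustrative QoS parameters with SciPy; the functional form, parameter names, and synthetic data are assumptions, not the paper's model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical non-linear regression of MOS against two QoS parameters
# (send bitrate SBR and packet error rate PER); form and data are illustrative.
def mos_model(X, a, b, c, d):
    sbr, per = X
    return a + b * np.log(sbr) + c * np.exp(-d * per)

rng = np.random.default_rng(0)
sbr = rng.uniform(100, 1000, 200)   # kbit/s
per = rng.uniform(0.0, 0.2, 200)    # packet error rate
mos = 1.2 + 0.45 * np.log(sbr) + 1.5 * np.exp(-12 * per) + 0.1 * rng.normal(size=200)

params, _ = curve_fit(mos_model, (sbr, per), mos, p0=[1.0, 0.5, 1.0, 10.0])
print("fitted coefficients:", params)
```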

  18. On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.

    Science.gov (United States)

    Yamazaki, Keisuke

    2012-07-01

    Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as required for example in model selection, is still time-consuming even though effective algorithms based on dynamic programming exist. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying the data is a common technique in feature selection and dimension reduction, though an oversimplified space leads to adverse learning results. Therefore, we mathematically investigate a condition on the feature map under which the estimated parameters have an asymptotically equivalent convergence point; such a map is referred to as a vicarious map. As a demonstration of finding vicarious maps, we consider the feature space that limits the length of the data and derive the necessary length for parameter learning in hidden Markov models. Copyright © 2012 Elsevier Ltd. All rights reserved.
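
    The likelihood computation discussed above is the standard forward recursion for an HMM; the sketch below evaluates it on a toy model and on a length-limited version of the same observation sequence, illustrating the kind of simplified feature space considered in the paper. The model parameters and the cut-off length are illustrative.

```python
import numpy as np

def hmm_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm. obs: list of symbol indices; pi: initial probabilities (K,);
    A: transition matrix (K, K); B: emission matrix (K, M)."""
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        log_lik += np.log(s)
        alpha /= s
    return log_lik

# Toy two-state, three-symbol HMM (illustrative parameters).
pi = np.array([0.6, 0.4])
A  = np.array([[0.7, 0.3], [0.4, 0.6]])
B  = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
obs = [0, 1, 2, 2, 1, 0, 0, 2, 1, 1]

L = 5  # simplified feature space: keep only the first L symbols of each sequence
print(hmm_log_likelihood(obs, pi, A, B), hmm_log_likelihood(obs[:L], pi, A, B))
```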

  19. Analysis of laser remote fusion cutting based on a mathematical model

    Energy Technology Data Exchange (ETDEWEB)

    Matti, R. S. [Department of Engineering Sciences and Mathematics, Luleå University of Technology, S-971 87 Luleå (Sweden); Department of Mechanical Engineering, College of Engineering, University of Mosul, Mosul (Iraq); Ilar, T.; Kaplan, A. F. H. [Department of Engineering Sciences and Mathematics, Luleå University of Technology, S-971 87 Luleå (Sweden)

    2013-12-21

    Laser remote fusion cutting is analyzed with the aid of a semi-analytical mathematical model of the processing front. By local calculation of the energy balance between the absorbed laser beam and the heat losses, the three-dimensional vaporization front can be calculated. Based on an empirical model for the melt flow field, from a mass balance, the melt film and the melting front can be derived, although only in a simplified manner and for quasi-steady-state conditions. Front waviness and multiple reflections are not modelled. The model enables comparison of the similarities, differences, and limits between laser remote fusion cutting, laser remote ablation cutting, and even laser keyhole welding. In contrast to the upper part of the vaporization front, the major part varies only slightly with respect to heat flux, laser power density, absorptivity, and angle of front inclination. Statistical analysis shows that for high cutting speed, the domains of high laser power density contribute much more to the formation of the front than for low speed. The semi-analytical modelling approach offers the flexibility to simplify part of the process physics while, for example, sophisticated modelling of the complex focused fibre-guided laser beam is taken into account to enable deeper analysis of the beam interaction. Mechanisms such as recast layer generation, absorptivity at a wavy processing front, and melt film formation are also studied.

  20. Analysis of laser remote fusion cutting based on a mathematical model

    International Nuclear Information System (INIS)

    Matti, R. S.; Ilar, T.; Kaplan, A. F. H.

    2013-01-01

    Laser remote fusion cutting is analyzed with the aid of a semi-analytical mathematical model of the processing front. By local calculation of the energy balance between the absorbed laser beam and the heat losses, the three-dimensional vaporization front can be calculated. Based on an empirical model for the melt flow field, from a mass balance, the melt film and the melting front can be derived, although only in a simplified manner and for quasi-steady-state conditions. Front waviness and multiple reflections are not modelled. The model enables comparison of the similarities, differences, and limits between laser remote fusion cutting, laser remote ablation cutting, and even laser keyhole welding. In contrast to the upper part of the vaporization front, the major part varies only slightly with respect to heat flux, laser power density, absorptivity, and angle of front inclination. Statistical analysis shows that for high cutting speed, the domains of high laser power density contribute much more to the formation of the front than for low speed. The semi-analytical modelling approach offers the flexibility to simplify part of the process physics while, for example, sophisticated modelling of the complex focused fibre-guided laser beam is taken into account to enable deeper analysis of the beam interaction. Mechanisms such as recast layer generation, absorptivity at a wavy processing front, and melt film formation are also studied.

  1. Preliminary Results for a Monocular Marker-Free Gait Measurement System

    Directory of Open Access Journals (Sweden)

    Jane Courtney

    2006-01-01

    This paper presents results from a novel monocular marker-free gait measurement system. The system was designed for physical and occupational therapists to monitor the progress of patients through therapy. It is based on a novel human motion capture method derived from model-based tracking. Testing is performed on two monocular, sagittal-view, sample gait videos – one with both the environment and the subject’s appearance and movement restricted and one in a natural environment with unrestricted clothing and motion. Results of the modelling, tracking and analysis stages are presented along with standard gait graphs and parameters.

  2. In-silico oncology: an approximate model of brain tumor mass effect based on directly manipulated free form deformation

    Energy Technology Data Exchange (ETDEWEB)

    Becker, Stefan; Mang, Andreas; Toma, Alina; Buzug, Thorsten M. [University of Luebeck (Germany). Institute of Medical Engineering

    2010-12-15

    The present work introduces a novel method for approximating mass effect of primary brain tumors. The spatio-temporal dynamics of cancerous cells are modeled by means of a deterministic reaction-diffusion equation. Diffusion tensor information obtained from a probabilistic diffusion tensor imaging atlas is incorporated into the model to simulate anisotropic diffusion of cancerous cells. To account for the expansive nature of the tumor, the computed net cell density of malignant cells is linked to a parametric deformation model. This mass effect model is based on the so-called directly manipulated free form deformation. Spatial correspondence between two successive simulation steps is established by tracking landmarks, which are attached to the boundary of the gross tumor volume. The movement of these landmarks is used to compute the new configuration of the control points and, hence, determines the resulting deformation. To prevent a deformation of rigid structures (i.e. the skull), fixed shielding landmarks are introduced. In a refinement step, an adaptive landmark scheme ensures a dense sampling of the tumor isosurface, which in turn allows for an appropriate representation of the tumor shape. The influence of different parameters on the model is demonstrated by a set of simulations. Additionally, simulation results are qualitatively compared to an exemplary set of clinical magnetic resonance images of patients diagnosed with high-grade glioma. Careful visual inspection of the results demonstrates the potential of the implemented model and provides first evidence that the computed approximation of tumor mass effect is sensible. The shape of diffusive brain tumors (glioblastoma multiforme) can be recovered and approximately matches the observations in real clinical data. (orig.)

  3. Parameter estimation of a nonlinear Burger's model using nanoindentation and finite element-based inverse analysis

    Science.gov (United States)

    Hamim, Salah Uddin Ahmed

    Nanoindentation involves probing a hard diamond tip into a material, while the load and the displacement experienced by the tip are recorded continuously. This load-displacement data is a direct function of the material's innate stress-strain behavior. Thus, it is theoretically possible to extract the mechanical properties of a material through nanoindentation. However, due to the various nonlinearities associated with nanoindentation, the process of interpreting load-displacement data into material properties is difficult. Although simple elastic behavior can be characterized easily, a method to characterize complicated material behavior such as nonlinear viscoelasticity is still lacking. In this study, a nanoindentation-based material characterization technique is developed to characterize soft materials exhibiting nonlinear viscoelasticity. The nanoindentation experiment was modeled in finite element analysis software (ABAQUS), where the nonlinear viscoelastic behavior was incorporated using a user-defined subroutine (UMAT). The model parameters were calibrated using a process called inverse analysis. In this study, a surrogate model-based approach was used for the inverse analysis. The different factors affecting the surrogate model performance are analyzed in order to optimize the performance with respect to the computational cost.
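
    The sketch below illustrates the surrogate-model-based inverse analysis idea in one dimension: an expensive simulator is sampled at a few design points, a cheap polynomial surrogate is fitted, and the surrogate is inverted against a measured response. The simulator stand-in, parameter, and measured value are purely illustrative and do not reproduce the ABAQUS/UMAT setup of the study.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def expensive_simulation(g):
    # Stand-in for a finite element nanoindentation run: peak load as a
    # function of a single (hypothetical) viscoelastic parameter g.
    return 120.0 / (1.0 + g) + 5.0

g_samples = np.linspace(0.1, 2.0, 8)                      # design of experiments
responses = np.array([expensive_simulation(g) for g in g_samples])

coeffs = np.polyfit(g_samples, responses, deg=3)          # cheap polynomial surrogate
surrogate = lambda g: np.polyval(coeffs, g)

measured_peak_load = 65.0                                 # hypothetical experimental value
fit = minimize_scalar(lambda g: (surrogate(g) - measured_peak_load) ** 2,
                      bounds=(0.1, 2.0), method="bounded")
print("identified parameter:", fit.x)
```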

  4. Finite element analysis of vibration energy harvesting using lead-free piezoelectric materials: A comparative study

    Directory of Open Access Journals (Sweden)

    Anuruddh Kumar

    2014-06-01

    In this article, the performance of various piezoelectric materials is simulated for a unimorph cantilever-type piezoelectric energy harvester. The finite element method (FEM) is used to model the piezolaminated unimorph cantilever structure. First-order shear deformation theory (FSDT) and linear piezoelectric theory are implemented in the finite element simulations. A genetic algorithm (GA) optimization approach is used to optimize the structural parameters of the mechanical-energy-based harvester for maximum power density and power output. The numerical simulations demonstrate the performance of lead-free piezoelectric materials in the unimorph cantilever-based energy harvester. The lead-free piezoelectric material K0.5Na0.5NbO3-LiSbO3-CaTiO3 (2 wt.%) demonstrated the maximum mean power and maximum mean power density for a piezoelectric energy harvester in the ambient frequency range of 90–110 Hz. Overall, the lead-free piezoelectric materials of the K0.5Na0.5NbO3-LiSbO3 (KNN-LS) family have shown better performance than the conventional lead-based piezoelectric material lead zirconate titanate (PZT) in the context of piezoelectric energy harvesting devices.

  5. The structure of the solution obtained with Reynolds-stress-transport models at the free-stream edges of turbulent flows

    Science.gov (United States)

    Cazalbou, J.-B.; Chassaing, P.

    2002-02-01

    The behavior of Reynolds-stress-transport models at the free-stream edges of turbulent flows is investigated. Current turbulent-diffusion models are found to produce propagative (possibly weak) solutions of the same type as those reported earlier by Cazalbou, Spalart, and Bradshaw [Phys. Fluids 6, 1797 (1994)] for two-equation models. As in the latter study, an analysis is presented that provides qualitative information on the flow structure predicted near the edge if a condition on the values of the diffusion constants is satisfied. In this case, the solution appears to be fairly insensitive to the residual free-stream turbulence levels needed with conventional numerical methods. The main specific result is that, depending on the diffusion model, the propagative solution can force turbulence toward definite and rather extreme anisotropy states at the edge (one- or two-component limit). This is not the case with the model of Daly and Harlow [Phys. Fluids 13, 2634 (1970)]; it may be one of the reasons why this "old" scheme is still the most widely used, even in recent Reynolds-stress-transport models. In addition, the analysis helps us to interpret some difficulties encountered in computing even very simple flows with Lumley's pressure-diffusion model [Adv. Appl. Mech. 18, 123 (1978)]. A new realizability condition, according to which the diffusion model should not globally become "anti-diffusive," is introduced, and a recalibration of Lumley's model satisfying this condition is performed using information drawn from the analysis.

  6. Development and validation of an automated and marker-free CT-based spatial analysis method (CTSA) for assessment of femoral hip implant migration: In vitro accuracy and precision comparable to that of radiostereometric analysis (RSA).

    Science.gov (United States)

    Scheerlinck, Thierry; Polfliet, Mathias; Deklerck, Rudi; Van Gompel, Gert; Buls, Nico; Vandemeulebroucke, Jef

    2016-01-01

    We developed a marker-free automated CT-based spatial analysis (CTSA) method to detect stem-bone migration in consecutive CT datasets and assessed the accuracy and precision in vitro. Our aim was to demonstrate that in vitro accuracy and precision of CTSA is comparable to that of radiostereometric analysis (RSA). Stem and bone were segmented in 2 CT datasets and both were registered pairwise. The resulting rigid transformations were compared and transferred to an anatomically sound coordinate system, taking the stem as reference. This resulted in 3 translation parameters and 3 rotation parameters describing the relative amount of stem-bone displacement, and it allowed calculation of the point of maximal stem migration. Accuracy was evaluated in 39 comparisons by imposing known stem migration on a stem-bone model. Precision was estimated in 20 comparisons based on a zero-migration model, and in 5 patients without stem loosening. Limits of the 95% tolerance intervals (TIs) for accuracy did not exceed 0.28 mm for translations and 0.20° for rotations (largest standard deviation of the signed error (SD(SE)): 0.081 mm and 0.057°). In vitro, limits of the 95% TI for precision in a clinically relevant setting (8 comparisons) were below 0.09 mm and 0.14° (largest SD(SE): 0.012 mm and 0.020°). In patients, the precision was lower, but acceptable, and dependent on CT scan resolution. CTSA allows detection of stem-bone migration with an accuracy and precision comparable to that of RSA. It could be valuable for evaluation of subtle stem loosening in clinical practice.

  7. Three-dimensional free vibration analysis of thick laminated circular ...

    African Journals Online (AJOL)

    Dr Oke

    1,2 Department of Mechanical Engineering, Maulana Azad National Institute of Technology, Bhopal-462003, INDIA ... In this communication, a numerical analysis regarding free vibration of thick laminated .... ANSYS finite element software.

  8. Cobalt-free nickel-base superalloys

    International Nuclear Information System (INIS)

    Koizumi, Yutaka; Yamazaki, Michio; Harada, Hiroshi

    1979-01-01

    Cobalt-free nickel-base cast superalloys have been developed. Cobalt is considered a beneficial strengthening element, but it must be eliminated in alloys to be used for direct-cycle helium turbines driven by helium gas from an HTGR (high-temperature gas reactor). The elimination of cobalt is required to avoid the formation of radioactive 60Co from debris or scales of the alloys. Cobalt-free alloys are also desirable from another viewpoint: recently, the shortage of the element has become a serious problem in industry. Cobalt-free Mar-M200 type alloys modified by additions of 0.15-0.2 wt% B and 1-1.5 wt% Hf were found to have a creep rupture strength superior or comparable to that of the original cobalt-bearing Mar-M200 alloy. The ductility in tensile tests at 800 °C, as cast or after prolonged heating at 900 °C (the tensile test was done without removing the surface layer affected by the heating), was also improved by the additions of 0.15-0.2% B and 1-1.5% Hf. The morphology of the grain boundaries became so intricate with the additions of 0.15-0.2% B and 1-1.5% Hf that one can hardly distinguish the grain boundaries under a microscope. This change in grain boundary morphology is considered, as suggested previously by one of the authors (M.Y.), to be the reason for the improvements in creep rupture strength and tensile ductility. (author)

  9. A stochastic multicriteria model for evidence-based decision making in drug benefit-risk analysis.

    Science.gov (United States)

    Tervonen, Tommi; van Valkenhoef, Gert; Buskens, Erik; Hillege, Hans L; Postmus, Douwe

    2011-05-30

    Drug benefit-risk (BR) analysis is based on firm clinical evidence regarding various safety and efficacy outcomes. In this paper, we propose a new and more formal approach for constructing a supporting multi-criteria model that fully takes into account the evidence on efficacy and adverse drug reactions. Our approach is based on the stochastic multi-criteria acceptability analysis methodology, which allows us to compute the typical value judgments that support a decision, to quantify decision uncertainty, and to compute a comprehensive BR profile. We construct a multi-criteria model for the therapeutic group of second-generation antidepressants. We assess fluoxetine and venlafaxine together with placebo according to incidence of treatment response and three common adverse drug reactions by using data from a published study. Our model shows that there are clear trade-offs among the treatment alternatives. Copyright © 2011 John Wiley & Sons, Ltd.
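
    The sketch below illustrates the stochastic multicriteria acceptability analysis idea: criteria weights are sampled uniformly from the simplex and rank-acceptability indices are estimated by Monte Carlo. The alternatives, criteria values, and their scaling are illustrative and are not the trial data used in the paper.

```python
import numpy as np

# Rows: alternatives; columns: partial values in [0, 1], higher = better,
# with adverse-event criteria assumed to be already reversed. Values are illustrative.
values = np.array([
    [0.70, 0.55, 0.60, 0.50],   # drug A
    [0.75, 0.40, 0.45, 0.55],   # drug B
    [0.30, 0.90, 0.90, 0.90],   # placebo
])

rng = np.random.default_rng(0)
n_mc = 20000
# Uniform weights on the simplex: Dirichlet(1, ..., 1).
weights = rng.dirichlet(np.ones(values.shape[1]), size=n_mc)
scores = weights @ values.T                         # (n_mc, n_alternatives)
ranks = (-scores).argsort(axis=1).argsort(axis=1)   # 0 = best rank per Monte Carlo draw

for i, name in enumerate(["drug A", "drug B", "placebo"]):
    print(f"{name}: first-rank acceptability ~ {(ranks[:, i] == 0).mean():.2f}")
```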

  10. Comparative analysis of Brassica napus plasma membrane proteins under phosphorus deficiency using label-free and MaxQuant-based proteomics approaches.

    Science.gov (United States)

    Chen, Shuisen; Luo, Ying; Ding, Guangda; Xu, Fangsen

    2016-02-05

    Phosphorus (P) deficiency is a primary constraint on plant growth in terrestrial ecosystems. To better understand the genotypic differences in the adaptation of Brassica napus to P deficiency, we purified the plasma membrane (PM) from the roots of two genotypes: the P-efficient "Eyou Changjia" and the P-inefficient "B104-2". Combining label-free quantitative proteomics with the MaxQuant approach, a total of 71 proteins whose abundance changed significantly in response to P-free starvation were identified in the two genotypes, including 31 in "Eyou Changjia" and 40 in "B104-2". Based on a comparative genomics study, 28 proteins were mapped to the confidence intervals of quantitative trait loci (QTLs) for P-efficiency-related traits. Seven proteins of decreased abundance with transporter activity were found by subcellular localization analyses to be located in the PM. These proteins, involved in intracellular protein transport and ATP-hydrolysis-coupled proton transport, were mapped to the QTLs for P content and dry weight. Compared with "B104-2", more proteins of decreased abundance related to transporter activity were found in "Eyou Changjia", showing that substance exchange was reduced in response to short-term P-free starvation. Together with the larger number of decreased proteins functioning in signal transduction and protein synthesis/degradation, this suggests that "Eyou Changjia" can slow its growth and save more P in response to short-term P-free starvation. P deficiency seriously limits the production and quality of B. napus. Roots absorb water and nutrients and anchor the plant in the soil. Therefore, studying the root PM proteome under P stress is helpful for understanding the adaptation mechanism for P deficiency. However, PM proteome analysis in B. napus has seldom been reported owing to the high hydrophobicity and low abundance of PM proteins. We therefore investigated the PM proteome alteration in the roots of two B. napus genotypes, with different P deficiency tolerances, in

  11. A blocked takeover in the Polish power sector: A model-based analysis

    International Nuclear Information System (INIS)

    Kamiński, Jacek

    2014-01-01

    As the President of the Office of Competition and Consumer Protection refused to approve a government-initiated takeover in the Polish power sector and the Court of Competition and Consumer Protection did not make a ruling on the case, the takeover was finally prohibited. In this context, the main aim of this paper is to carry out a quantitative analysis of the impact of the takeover in question on electricity prices and quantities, consumer and producer surpluses, deadweight loss, and emissions. The scope of the study covers the Polish power generation sector and the analysis was carried out for 2009. A game-theory-based electricity market equilibrium model developed for Poland was applied. The model includes several country-specific conditions, such as a coal-based power generation fuel mix and a large share of biomass co-combustion. For the sake of clarity, only four scenarios are assumed. The paper concludes that the declared synergy savings did not compensate for the increase in deadweight loss and the transfer of surplus from consumers to producers caused by increased market power. - Highlights: • A takeover blocked by the President of the Office of Competition and Consumer Protection was analysed. • A game-theory-based model of the Polish wholesale electricity market was applied. • The impact of the takeover on electricity prices and generation levels, surplus transfers and deadweight loss was estimated. • The results were compared with the declared synergy savings.

  12. Free-fermion descriptions of parafermion chains and string-net models

    Science.gov (United States)

    Meichanetzidis, Konstantinos; Turner, Christopher J.; Farjami, Ashk; Papić, Zlatko; Pachos, Jiannis K.

    2018-03-01

    Topological phases of matter remain a focus of interest due to their unique properties: fractionalization, ground-state degeneracy, and exotic excitations. While some of these properties can occur in systems of free fermions, their emergence is generally associated with interactions between particles. Here, we quantify the role of interactions in general classes of topological states of matter in one and two spatial dimensions, including parafermion chains and string-net models. Surprisingly, we find that certain topological states can be exactly described by free fermions, while others saturate the maximum possible distance from their optimal free-fermion description [C. J. Turner et al., Nat. Commun. 8, 14926 (2017), 10.1038/ncomms14926]. Our work opens the door to understanding the complexity of topological models by establishing new types of fermionization procedures to describe their low-energy physics, thus making them amenable to experimental realizations.

  13. Emerging infectious diseases in free-ranging wildlife-Australian zoo based wildlife hospitals contribute to national surveillance.

    Directory of Open Access Journals (Sweden)

    Keren Cox-Witton

    Emerging infectious diseases increasingly originate from wildlife. Many of these diseases have significant impacts on human health, domestic animal health, and biodiversity, and surveillance is the key to their early detection. A zoo-based wildlife disease surveillance program developed in Australia incorporates disease information from free-ranging wildlife into the existing national wildlife health information system. The program uses a collaborative approach and provides a strong model for a disease surveillance program for free-ranging wildlife that enhances the national capacity for early detection of emerging diseases.

  14. Model-free approach to the estimation of radiation hazards. I. Theory

    International Nuclear Information System (INIS)

    Zaider, M.; Brenner, D.J.

    1986-01-01

    The experience of the Japanese atomic bomb survivors constitutes to date the major database for evaluating the effects of low doses of ionizing radiation on human populations. Although numerous analyses of this experience have been performed and published, it is clear that no consensus has emerged as to the conclusions that may be drawn to assist in setting realistic radiation protection guidelines. In part this is an inherent consequence of the rather limited amount of data available. In this paper the authors address an equally important problem, namely, the use of arbitrary parametric risk models which have little theoretical foundation yet almost totally determine the final conclusions drawn. They propose the use of a model-free approach to the estimation of radiation hazards.

  15. Model-based control of observer bias for the analysis of presence-only data in ecology.

    Directory of Open Access Journals (Sweden)

    David I Warton

    Presence-only data, where information is available concerning species presence but not species absence, are subject to bias due to observers being more likely to visit and record sightings at some locations than others (hereafter "observer bias"). In this paper, we describe and evaluate a model-based approach to accounting for observer bias directly: presence locations are modelled as a function of known observer-bias variables (such as accessibility variables) in addition to environmental variables, and predictions of species occurrence are then made by conditioning on a common level of bias, so that they are free of such observer bias. We implement this idea using point process models with a LASSO penalty, a new presence-only method related to maximum entropy modelling, which implicitly addresses the "pseudo-absence problem" of where to locate pseudo-absences (and how many). The proposed bias-correction method is evaluated using systematically collected presence/absence data for 62 plant species endemic to the Blue Mountains near Sydney, Australia. It is shown that modelling and controlling for observer bias significantly improves the accuracy of predictions made using presence-only data, and usually improves predictions compared to pseudo-absence or "inventory" methods of bias correction based on absences from non-target species. Future research will consider the potential for improving the proposed bias-correction approach by estimating the observer bias simultaneously across multiple species.
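
    The bias-correction idea can be sketched with a penalized logistic regression on presence versus background points, as below; this is a simplification of the authors' point process model with a LASSO penalty, and the covariates, their effect sizes, and the synthetic data are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Fit presence vs. background points on environmental + accessibility covariates,
# then predict with the accessibility covariate fixed at a common level so the
# predictions are free of the observer-bias component.
rng = np.random.default_rng(0)
n = 5000
env = rng.normal(size=n)                      # environmental covariate (illustrative)
access = rng.normal(size=n)                   # e.g. proximity to roads (observer bias)
p = 1 / (1 + np.exp(-(-2.0 + 1.5 * env + 1.0 * access)))   # sightings depend on both
y = rng.binomial(1, p)

X = np.column_stack([env, access])
model = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(X, y)

# Bias-corrected prediction: condition on a common accessibility level (here its mean).
X_pred = np.column_stack([env, np.full(n, access.mean())])
suitability = model.predict_proba(X_pred)[:, 1]
print(model.coef_, suitability[:5])
```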

  16. Application of a free parameter model to plastic scintillation samples

    Energy Technology Data Exchange (ETDEWEB)

    Tarancon Sanz, Alex, E-mail: alex.tarancon@ub.edu [Departament de Quimica Analitica, Universitat de Barcelona, Diagonal 647, E-08028 Barcelona (Spain); Kossert, Karsten, E-mail: Karsten.Kossert@ptb.de [Physikalisch-Technische Bundesanstalt (PTB), Bundesallee 100, 38116 Braunschweig (Germany)

    2011-08-21

    In liquid scintillation (LS) counting, the CIEMAT/NIST efficiency tracing method and the triple-to-double coincidence ratio (TDCR) method have proved their worth for reliable activity measurements of a number of radionuclides. In this paper, an extended approach to apply a free-parameter model to samples containing a mixture of solid plastic scintillation microspheres and radioactive aqueous solutions is presented. Several beta-emitting radionuclides were measured in a TDCR system at PTB. For the application of the free parameter model, the energy loss in the aqueous phase must be taken into account, since this portion of the particle energy does not contribute to the creation of scintillation light. The energy deposit in the aqueous phase is determined by means of Monte Carlo calculations applying the PENELOPE software package. To this end, great efforts were made to model the geometry of the samples. Finally, a new geometry parameter was defined, which was determined by means of a tracer radionuclide with known activity. This makes the analysis of experimental TDCR data of other radionuclides possible. The deviations between the determined activity concentrations and reference values were found to be lower than 3%. The outcome of this research work is also important for a better understanding of liquid scintillation counting. In particular the influence of (inverse) micelles, i.e. the aqueous spaces embedded in the organic scintillation cocktail, can be investigated. The new approach makes clear that it is important to take the energy loss in the aqueous phase into account. In particular for radionuclides emitting low-energy electrons (e.g. M-Auger electrons from 125I), this effect can be very important.

  17. Skull base chordomas: analysis of dose-response characteristics

    International Nuclear Information System (INIS)

    Niemierko, Andrzej; Terahara, Atsuro; Goitein, Michael

    1997-01-01

    Objective: To extract dose-response characteristics from dose-volume histograms and the corresponding actuarial survival statistics for 115 patients with skull base chordomas. Materials and Methods: We analyzed data for 115 patients with skull base chordoma treated with combined photon and proton conformal radiotherapy to doses in the range 66.6-79.2 Gy. The data set for each patient included gender, histology, age, tumor volume, prescribed dose, overall treatment time, time to recurrence or time to last observation, target dose-volume histogram, and several dosimetric parameters (minimum/mean/median/maximum target dose, percent of the target volume receiving the prescribed dose, dose to 90% of the target volume, and the equivalent uniform dose (EUD)). Data were analyzed using the Kaplan-Meier survivor function estimate, the proportional hazards (Cox) model, and parametric modeling of the actuarial probability of recurrence. Parameters of the dose-response characteristics were obtained using the maximum likelihood method. Results: Local failure developed in 42 patients (36%), with an actuarial local control rate at 5 years of 59.2%. The proportional hazards model revealed a significant dependence on gender, with female patients having a significantly poorer prognosis (hazard ratio of 2.3 with a p value of 0.008). The Wilcoxon and log-rank tests on the corresponding Kaplan-Meier recurrence-free survival curves confirmed the statistical significance of this effect. The Cox model with stratification by gender showed significance of tumor volume (p=0.01), the minimum target dose (p=0.02), and the EUD (p=0.02). Other parameters were not significant at the α level of 0.05, including the prescribed dose (p=0.21). Parametric analysis using a combined model of tumor control probability (to account for non-uniformity of the target dose distribution) and the Weibull failure time model (to account for censoring) allowed us to estimate
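
    The equivalent uniform dose used as a covariate above is computed from the dose-volume histogram; a widely used generalized form is shown below, though the exact definition adopted in this study may differ:

```latex
% Generalized equivalent uniform dose from a dose-volume histogram, with v_i the
% fractional volume receiving dose D_i and a a tissue-specific parameter:
\mathrm{EUD} = \left( \sum_i v_i \, D_i^{\,a} \right)^{1/a}
```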

  18. INFLUENCE ANALYSIS OF WATERLOGGING BASED ON DEEP LEARNING MODEL IN WUHAN

    Directory of Open Access Journals (Sweden)

    Y. Pan

    2017-09-01

    This paper analyses in depth a large number of factors related to the influence degree of urban waterlogging, and constructs a Stack Autoencoder model to explore the relationship between the influence degree of waterlogging points and their surrounding spatial data, which is used for a comprehensive analysis of the influence of waterlogging on the work and life of residents. The model is validated using data from the July 2016 rainstorm waterlogging in Wuhan. The experimental results show that the model achieves higher accuracy than a traditional linear regression model. Based on the experimental model and the distribution of waterlogging points in Wuhan over the years, the influence degree of different waterlogging points can be quantitatively described, which will be beneficial to the formulation of urban flood control measures and provide a reference for the design of the city drainage pipe network.
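
    A minimal sketch of a stacked-autoencoder regressor of the kind described above is given below, using Keras: the autoencoder is pre-trained to reconstruct the input features, and a small supervised head then predicts the influence degree. The feature set, layer sizes, and synthetic data are illustrative and not those of the study.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical spatial features per waterlogging point (rainfall, elevation,
# drainage density, ...) and an influence degree scaled to [0, 1].
n_samples, n_features = 1000, 12
rng = np.random.default_rng(0)
X = rng.normal(size=(n_samples, n_features)).astype("float32")
y = rng.uniform(0, 1, size=(n_samples, 1)).astype("float32")

inputs = keras.Input(shape=(n_features,))
h = layers.Dense(8, activation="relu")(inputs)      # first encoding layer
code = layers.Dense(4, activation="relu")(h)        # bottleneck
h_dec = layers.Dense(8, activation="relu")(code)
recon = layers.Dense(n_features, name="recon")(h_dec)

autoencoder = keras.Model(inputs, recon)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=5, batch_size=32, verbose=0)   # unsupervised pre-training

out = layers.Dense(1, activation="sigmoid")(code)           # supervised fine-tuning head
regressor = keras.Model(inputs, out)                        # shares the pre-trained encoder
regressor.compile(optimizer="adam", loss="mse")
regressor.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(regressor.predict(X[:3], verbose=0))
```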

  19. THE INFLUENCE OF SPATIAL RESOLUTION ON NONLINEAR FORCE-FREE MODELING

    Energy Technology Data Exchange (ETDEWEB)

    DeRosa, M. L.; Schrijver, C. J. [Lockheed Martin Solar and Astrophysics Laboratory, 3251 Hanover St. B/252, Palo Alto, CA 94304 (United States); Wheatland, M. S.; Gilchrist, S. A. [Sydney Institute for Astronomy, School of Physics, The University of Sydney, Sydney, NSW 2006 (Australia); Leka, K. D.; Barnes, G. [NorthWest Research Associates, 3380 Mitchell Ln., Boulder, CO 80301 (United States); Amari, T.; Canou, A. [CNRS, Centre de Physique Théorique de l’École Polytechnique, F-91128, Palaiseau Cedex (France); Thalmann, J. K. [Institute of Physics/IGAM, University of Graz, Universitätsplatz 5, A-8010 Graz (Austria); Valori, G. [Mullard Space Science Laboratory, University College London, Holmbury St. Mary, Dorking, Surrey, RH5 6NT (United Kingdom); Wiegelmann, T. [Max-Planck-Institut für Sonnensystemforschung, Justus-von-Liebig-Weg 3, D-37077, Göttingen (Germany); Malanushenko, A. [Department of Physics, Montana State University, Bozeman, MT 59717 (United States); Sun, X. [W. W. Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA 94305 (United States); Régnier, S. [Department of Mathematics and Information Sciences, Faculty of Engineering and Environment, Northumbria University, Newcastle-Upon-Tyne, NE1 8ST (United Kingdom)

    2015-10-01

    The nonlinear force-free field (NLFFF) model is often used to describe the solar coronal magnetic field; however, a series of earlier studies revealed difficulties in the numerical solution of the model when applied to photospheric boundary data. We investigate the sensitivity of the modeling to the spatial resolution of the boundary data by applying multiple codes that numerically solve the NLFFF model to a sequence of vector magnetogram data at different resolutions, prepared from a single Hinode/Solar Optical Telescope Spectro-Polarimeter scan of NOAA Active Region 10978 on 2007 December 13. We analyze the resulting energies and relative magnetic helicities, employ a Helmholtz decomposition to characterize divergence errors, and quantify the changes made by the codes to the vector magnetogram boundary data in order to be compatible with the force-free model. This study shows that NLFFF modeling results depend quantitatively on the spatial resolution of the input boundary data, and that using more highly resolved boundary data yields more self-consistent results. The free energies of the resulting solutions generally trend higher with increasing resolution, while relative magnetic helicity values vary significantly between resolutions for all methods. All methods require changing the horizontal components, and for some methods also the vertical components, of the vector magnetogram boundary field in excess of the nominal uncertainties in the data. The solutions produced by the various methods are significantly different at each resolution level. We continue to recommend verifying agreement between the modeled field lines and corresponding coronal loop images before any NLFFF model is used in a scientific setting.
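
    Two of the simpler diagnostics mentioned above, the magnetic (free) energy and a divergence-error measure, can be sketched as follows for fields given on a uniform Cartesian grid; the arrays, grid spacing, and units are illustrative, and the participating codes use their own, more careful implementations (including the Helmholtz-decomposition-based characterization of divergence errors).

        import numpy as np

        def magnetic_energy(bx, by, bz, dx):
            # E = sum(B^2) * dV / (8*pi) in Gaussian (cgs) units on a uniform grid.
            return (bx**2 + by**2 + bz**2).sum() * dx**3 / (8.0 * np.pi)

        def mean_abs_divergence(bx, by, bz, dx):
            # Volume-averaged |div B| as a crude self-consistency check.
            div = (np.gradient(bx, dx, axis=0) +
                   np.gradient(by, dx, axis=1) +
                   np.gradient(bz, dx, axis=2))
            return np.abs(div).mean()

        # Hypothetical extrapolated (NLFFF) and potential fields on a 64^3 grid:
        rng = np.random.default_rng(0)
        b_nlfff = [rng.normal(size=(64, 64, 64)) for _ in range(3)]
        b_pot = [rng.normal(scale=0.8, size=(64, 64, 64)) for _ in range(3)]

        e_free = magnetic_energy(*b_nlfff, dx=1.0) - magnetic_energy(*b_pot, dx=1.0)
        print(e_free, mean_abs_divergence(*b_nlfff, dx=1.0))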

  20. Bound Flavin-Cytochrome Model of Extracellular Electron Transfer in Shewanella oneidensis: Analysis by Free Energy Molecular (Postprint)

    Science.gov (United States)

    2016-06-06

    ...cathodic conditions, oxidized and reduced heme states were assumed, respectively. The calculated results are summarized in Table 2. The solvation free... ...reports favor a flavin-bound model, proposing two one-electron reductions of flavin, namely, oxidized (Ox) to semiquinone (Sq) and semiquinone to hydroquinone (Hq), at anodic and cathodic conditions, respectively. ...In this work, to provide a mechanistic understanding of riboflavin (RF) binding at...
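
    As a back-of-the-envelope companion to the free-energy analysis sketched in the fragments above, the standard relation between a redox potential difference and a reaction free energy, ΔG = -nFΔE, can be written as follows; the potentials are placeholders and not values from the paper.

        F = 96485.0  # Faraday constant, C/mol

        def reaction_free_energy_kj(e_acceptor_v, e_donor_v, n_electrons=1):
            # Delta G = -n * F * (E_acceptor - E_donor), returned in kJ/mol.
            return -n_electrons * F * (e_acceptor_v - e_donor_v) / 1000.0

        # Hypothetical one-electron transfer from a reduced flavin couple (-0.26 V)
        # to an outer-membrane heme couple (-0.10 V): about -15 kJ/mol, i.e. downhill.
        print(reaction_free_energy_kj(e_acceptor_v=-0.10, e_donor_v=-0.26))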