WorldWideScience

Sample records for modeling analysis showed

  1. Time dependent patient no-show predictive modelling development.

    Science.gov (United States)

    Huang, Yu-Li; Hanauer, David A

    2016-05-09

    Purpose - The purpose of this paper is to develop evidence-based predictive no-show models considering each of a patient's past appointment statuses, a time-dependent component, as an independent predictor to improve predictability. Design/methodology/approach - A ten-year retrospective data set was extracted from a pediatric clinic. It consisted of 7,291 distinct patients who had at least two visits along with their appointment characteristics, patient demographics, and insurance information. Logistic regression was adopted to develop no-show models using two-thirds of the data for training and the remaining data for validation. The no-show threshold was then determined based on minimizing the misclassification of show/no-show assignments. A total of 26 predictive models were developed based on the number of available past appointments. Simulation was employed to test the effectiveness of each model on the costs of patient wait time, physician idle time, and overtime. Findings - The results demonstrated that the misclassification rate and the area under the curve of the receiver operating characteristic gradually improved as more appointment history was included, until around the 20th predictive model. The overbooking method with no-show predictive models suggested incorporating up to the 16th model and outperformed other overbooking methods by as much as 9.4 per cent in the cost per patient while allowing two additional patients in a clinic day. Research limitations/implications - The challenge now is to actually implement the no-show predictive model systematically to further demonstrate its robustness and simplicity in various scheduling systems. Originality/value - This paper provides examples of how to build no-show predictive models with time-dependent components to improve the overbooking policy. Accurately identifying scheduled patients' show/no-show status allows clinics to proactively schedule patients to reduce the negative impact of patient no-shows.
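
    The pipeline this abstract describes - logistic regression trained on two-thirds of the data, with a show/no-show threshold chosen to minimize misclassification on the held-out third - can be sketched in a few lines. A minimal illustration, not the authors' code; the features and synthetic data below are hypothetical.

    ```python
    # Minimal sketch of the no-show modelling step: logistic regression plus
    # threshold selection that minimizes misclassification on a validation split.
    # Feature names and data are hypothetical, not from the study.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 3000
    X = np.column_stack([
        rng.integers(0, 2, n),      # status of most recent past appointment (1 = no-show)
        rng.integers(0, 60, n),     # lead time in days
        rng.integers(2, 18, n),     # patient age
    ])
    logit = -2.0 + 1.5 * X[:, 0] + 0.02 * X[:, 1]
    y = rng.random(n) < 1 / (1 + np.exp(-logit))   # synthetic no-show outcome

    # Two-thirds for training, one-third for validation, as in the paper.
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, train_size=2/3, random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)
    p_va = model.predict_proba(X_va)[:, 1]

    # Pick the probability threshold that minimizes misclassified show/no-show labels.
    thresholds = np.linspace(0.05, 0.95, 91)
    errors = [np.mean((p_va >= t) != y_va) for t in thresholds]
    best = thresholds[int(np.argmin(errors))]
    print(f"chosen threshold: {best:.2f}, misclassification rate: {min(errors):.3f}")
    ```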

  2. Duchenne muscular dystrophy models show their age

    OpenAIRE

    Chamberlain, Jeffrey S.

    2010-01-01

    The lack of appropriate animal models has hampered efforts to develop therapies for Duchenne muscular dystrophy (DMD). A new mouse model lacking both dystrophin and telomerase (Sacco et al., 2010) closely mimics the pathological progression of human DMD and shows that muscle stem cell activity is a key determinant of disease severity.

  3. Importance Performance Analysis as a Trade Show Performance Evaluation and Benchmarking Tool

    OpenAIRE

    Tafesse, Wondwesen; Skallerud, Kåre; Korneliussen, Tor

    2010-01-01

    Author's accepted version (post-print). The purpose of this study is to introduce importance performance analysis as a trade show performance evaluation and benchmarking tool. Importance performance analysis considers exhibitors’ performance expectation and perceived performance in unison to evaluate and benchmark trade show performance. The present study uses data obtained from exhibitors of an international trade show to demonstrate how importance performance analysis can be used to eval...

  4. Dynamic Chest Image Analysis: Model-Based Perfusion Analysis in Dynamic Pulmonary Imaging

    Directory of Open Access Journals (Sweden)

    Kiuru Aaro

    2003-01-01

    The "Dynamic Chest Image Analysis" project aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected with the dynamic pulmonary imaging technique. We have proposed and evaluated a multiresolutional method with an explicit ventilation model for ventilation analysis. This paper presents a new model-based method for pulmonary perfusion analysis. According to perfusion properties, we first devise a novel mathematical function to form a perfusion model. A simple yet accurate approach is further introduced to extract cardiac systolic and diastolic phases from the heart, so that this cardiac information may be utilized to accelerate the perfusion analysis and improve its sensitivity in detecting pulmonary perfusion abnormalities. This makes perfusion analysis not only fast but also robust in computation; consequently, perfusion analysis becomes computationally feasible without using contrast media. Our clinical case studies with 52 patients show that this technique is effective for detecting pulmonary embolism even without using contrast media, demonstrating consistent correlations with computed tomography (CT) and nuclear medicine (NM) studies. This fluoroscopic examination takes only about 2 seconds for a perfusion study, with a low radiation dose to the patient, and involves no preparation, no radioactive isotopes, and no contrast media.

  5. Model shows future cut in U.S. ozone levels

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    A joint U.S. auto-oil industry research program says modeling shows that changing gasoline composition can reduce ozone levels for Los Angeles in 2010 and for New York City and Dallas-Fort Worth in 2005. The air quality modeling was based on vehicle emissions research data released late last year (OGJ, Dec. 24, 1990, p. 20). The effort is sponsored by the big three auto manufacturers and 14 oil companies. Sponsors say the cars and small trucks account for about one third of ozone generated in the three cities studied but by 2005-10 will account for only 5-9%.

  6. Analysis on the crime model using dynamical approach

    Science.gov (United States)

    Mohammad, Fazliza; Roslan, Ummu'Atiqah Mohd

    2017-08-01

    This research analyzes a dynamical model of the spread of crime. A simplified two-dimensional model is used. The objectives of this research are to investigate the stability of the crime-spread model, to summarize the stability by using a bifurcation analysis, and to study the relationship of the basic reproduction number, R0, with the parameters in the model. Our results for the stability of equilibrium points show two types of equilibria, asymptotically stable and saddle node. The bifurcation analysis shows that the numbers of criminally active and incarcerated individuals increase as we increase the value of a parameter in the model. The result for the relationship of R0 with the parameter shows that as the parameter increases, R0 increases, and the rate of crime increases too.
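
    The stability classification reported here (asymptotically stable vs. saddle) follows the standard recipe: find the equilibria, evaluate the Jacobian at each, and inspect the eigenvalues. A generic SymPy sketch; the right-hand sides and parameter values are illustrative placeholders, not the paper's crime-spread equations.

    ```python
    # Generic 2-D stability analysis: find equilibria, evaluate the Jacobian,
    # classify by eigenvalues. The right-hand sides below are placeholders,
    # not the actual crime-spread model from the paper.
    import sympy as sp

    C, I = sp.symbols('C I', real=True)       # criminally active, incarcerated
    beta, gamma, delta = 0.5, 0.3, 0.2        # illustrative parameter values

    f = beta * C * (1 - C) - gamma * C        # recruitment minus incarceration
    g = gamma * C - delta * I                 # incarceration minus release

    equilibria = sp.solve([f, g], [C, I], dict=True)
    J = sp.Matrix([f, g]).jacobian([C, I])

    for eq in equilibria:
        eigs = list(J.subs(eq).eigenvals())
        if all(sp.re(e) < 0 for e in eigs):
            kind = "asymptotically stable"
        elif any(sp.re(e) > 0 for e in eigs) and any(sp.re(e) < 0 for e in eigs):
            kind = "saddle"
        else:
            kind = "unstable"
        print(eq, eigs, kind)
    ```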

  7. Using structural equation modeling for network meta-analysis.

    Science.gov (United States)

    Tu, Yu-Kang; Wu, Yun-Chun

    2017-07-14

    Network meta-analysis overcomes the limitations of traditional pair-wise meta-analysis by incorporating all available evidence into a general statistical framework for simultaneous comparisons of several treatments. Currently, network meta-analyses are undertaken either within Bayesian hierarchical linear models or frequentist generalized linear mixed models. Structural equation modeling (SEM) is a statistical method originally developed for modeling causal relations among observed and latent variables. As the random effect is explicitly modeled as a latent variable in SEM, it is very flexible for analysts to specify complex random effect structures and to place linear and nonlinear constraints on parameters. The aim of this article is to show how to undertake a network meta-analysis within the statistical framework of SEM. We used an example dataset to demonstrate that the standard fixed and random effect network meta-analysis models can be easily implemented in SEM. It contains results of 26 studies that directly compared three treatment groups A, B and C for prevention of first bleeding in patients with liver cirrhosis. We also showed that a new approach to network meta-analysis based on the technique of the unrestricted weighted least squares (UWLS) method can be undertaken using SEM. For both the fixed and random effect network meta-analysis, SEM yielded similar coefficients and confidence intervals to those reported in the previous literature. The point estimates of the two UWLS models were identical to those in the fixed effect model but the confidence intervals were greater. This is consistent with results from the traditional pairwise meta-analyses. Compared to the UWLS model with a common variance adjustment factor, the UWLS model with unique variance adjustment factors has greater confidence intervals when the heterogeneity is larger in the pairwise comparison; the UWLS model with unique variance adjustment factors thus reflects the difference in heterogeneity within each comparison.
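
    The fixed-effect network meta-analysis that the authors reproduce in SEM reduces to inverse-variance weighted least squares on a design matrix of treatment contrasts, so the point estimates can be checked in a few lines of NumPy. A sketch under assumed data; the study-level effects and variances are invented, with treatment A as reference.

    ```python
    # Fixed-effect network meta-analysis as weighted least squares.
    # Each row is one pairwise study contrast; the design matrix encodes
    # which treatment effects (relative to reference A) the contrast involves.
    # The effect sizes and variances below are invented for illustration.
    import numpy as np

    # Contrasts: B-A, B-A, C-A, C-B (log odds ratios and their variances)
    y = np.array([-0.4, -0.3, -0.6, -0.2])
    v = np.array([0.05, 0.08, 0.06, 0.09])

    # Columns: d_AB, d_AC. A C-vs-B contrast equals d_AC - d_AB.
    X = np.array([
        [1.0, 0.0],
        [1.0, 0.0],
        [0.0, 1.0],
        [-1.0, 1.0],
    ])

    W = np.diag(1.0 / v)                       # inverse-variance weights
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    se = np.sqrt(np.diag(np.linalg.inv(X.T @ W @ X)))
    for name, b, s in zip(["d_AB", "d_AC"], beta, se):
        print(f"{name} = {b:.3f} (95% CI {b - 1.96*s:.3f} to {b + 1.96*s:.3f})")
    ```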

  8. Discrete Discriminant analysis based on tree-structured graphical models

    DEFF Research Database (Denmark)

    Perez de la Cruz, Gonzalo; Eslava, Guillermina

    The purpose of this paper is to illustrate the potential use of discriminant analysis based on tree-structured graphical models for discrete variables. This is done by comparing its empirical performance using estimated error rates for real and simulated data. The results show that discriminant analysis based on tree-structured graphical models is a simple nonlinear method competitive with, and sometimes superior to, other well-known linear methods like those assuming mutual independence between variables and linear logistic regression.

  9. Visualizing Three-dimensional Slab Geometries with ShowEarthModel

    Science.gov (United States)

    Chang, B.; Jadamec, M. A.; Fischer, K. M.; Kreylos, O.; Yikilmaz, M. B.

    2017-12-01

    Seismic data that characterize the morphology of modern subducted slabs on Earth suggest that a two-dimensional paradigm is no longer adequate to describe the subduction process. Here we demonstrate the effect of data exploration of three-dimensional (3D) global slab geometries with the open source program ShowEarthModel. ShowEarthModel was designed specifically to support data exploration, by focusing on interactivity and real-time response using the Vrui toolkit. Sixteen movies are presented that explore the 3D complexity of modern subduction zones on Earth. The first movie provides a guided tour through the Earth's major subduction zones, comparing the global slab geometry data sets of Gudmundsson and Sambridge (1998), Syracuse and Abers (2006), and Hayes et al. (2012). Fifteen regional movies explore the individual subduction zones and regions intersecting slabs, using the Hayes et al. (2012) slab geometry models where available and the Engdahl and Villasenor (2002) global earthquake data set. Viewing the subduction zones in this way provides an improved conceptualization of the 3D morphology within a given subduction zone as well as the 3D spatial relations between the intersecting slabs. This approach provides a powerful tool for rendering earth properties and broadening capabilities in both Earth Science research and education by allowing for whole earth visualization. The 3D characterization of global slab geometries is placed in the context of 3D slab-driven mantle flow and observations of shear wave splitting in subduction zones. These visualizations contribute to the paradigm shift from a 2D to 3D subduction framework by facilitating the conceptualization of the modern subduction system on Earth in 3D space.

  10. Conformational analysis of lignin models

    International Nuclear Information System (INIS)

    Santos, Helio F. dos

    2001-01-01

    The conformational equilibrium for two 5,5'-biphenyl lignin models has been analyzed using a quantum mechanical semiempirical method. The gas phase and solution structures are discussed based on the NMR and X-ray experimental data. The results obtained showed that the observed conformations are solvent-dependent, with the geometries and thermodynamic properties correlating with the experimental information. This study shows how a systematic theoretical conformational analysis can help to understand chemical processes at a molecular level. (author)

  11. Automated Techniques for the Qualitative Analysis of Ecological Models: Continuous Models

    Directory of Open Access Journals (Sweden)

    Lynn van Coller

    1997-06-01

    The mathematics required for a detailed analysis of the behavior of a model can be formidable. In this paper, I demonstrate how various computer packages can aid qualitative analyses by implementing techniques from dynamical systems theory. Because computer software is used to obtain the results, the techniques can be used by nonmathematicians as well as mathematicians. In-depth analyses of complicated models that were previously very difficult to study can now be done. Because the paper is intended as an introduction to applying the techniques to ecological models, I have included an appendix describing some of the ideas and terminology. A second appendix shows how the techniques can be applied to a fairly simple predator-prey model and establishes the reliability of the computer software. The main body of the paper discusses a ratio-dependent model. The new techniques highlight some limitations of isocline analyses in this three-dimensional setting and show that the model is structurally unstable. Another appendix describes a larger model of a sheep-pasture-hyrax-lynx system. Dynamical systems techniques are compared with a traditional sensitivity analysis and are found to give more information. As a result, an incomplete relationship in the model is highlighted. I also discuss the resilience of these models to both parameter and population perturbations.

  12. Small GSK-3 Inhibitor Shows Efficacy in a Motor Neuron Disease Murine Model Modulating Autophagy.

    Directory of Open Access Journals (Sweden)

    Estefanía de Munck

    Amyotrophic lateral sclerosis (ALS) is a progressive motor neuron degenerative disease that has no effective treatment to date. Drug discovery has been hampered by the lack of knowledge of its molecular etiology together with the limited animal models available for research. Recently, a motor neuron disease animal model has been developed using β-N-methylamino-L-alanine (L-BMAA), a neurotoxic amino acid related to the appearance of ALS. In the present work, the neuroprotective role of VP2.51, a small heterocyclic GSK-3 inhibitor, is analysed in this novel murine model together with the analysis of autophagy. VP2.51 daily administration for two weeks, starting the first day after L-BMAA treatment, leads to total recovery of neurological symptoms and prevents the activation of autophagic processes in rats. These results show that the L-BMAA murine model can be used to test the efficacy of new drugs. In addition, the results confirm the therapeutic potential of GSK-3 inhibitors, and especially VP2.51, for the disease-modifying future treatment of motor neuron disorders like ALS.

  13. Single-channel model for steady thermal-hydraulic analysis in nuclear reactor

    International Nuclear Information System (INIS)

    Zhang Xiaoying; Huang Yuanyuan

    2010-01-01

    This article establishes a single-channel model for steady-state analysis of the reactor, and an example of thermal-hydraulic analysis was carried out using this model, covering the maximum heat flux density of the fuel element, enthalpy, coolant flow, various kinds of pressure drop, and the enthalpy increase in the average and hot channels. The coolant temperature distribution and the fuel element temperature distribution were also obtained, and the final results were analysed. The results show that the relevant parameters obtained in this paper coincide well with the actual operating parameters, and that the single-channel model can be used for steady-state thermal-hydraulic analysis. (authors)

  14. The ATLAS Analysis Model

    CERN Multimedia

    Amir Farbin

    The ATLAS Analysis Model is a continually developing vision of how to reconcile physics analysis requirements with the ATLAS offline software and computing model constraints. In the past year this vision has influenced the evolution of the ATLAS Event Data Model, the Athena software framework, and physics analysis tools. These developments, along with the October Analysis Model Workshop and the planning for CSC analyses have led to a rapid refinement of the ATLAS Analysis Model in the past few months. This article introduces some of the relevant issues and presents the current vision of the future ATLAS Analysis Model. Event Data Model The ATLAS Event Data Model (EDM) consists of several levels of details, each targeted for a specific set of tasks. For example the Event Summary Data (ESD) stores calorimeter cells and tracking system hits thereby permitting many calibration and alignment tasks, but will be only accessible at particular computing sites with potentially large latency. In contrast, the Analysis...

  15. Dynamical system analysis of interacting models

    Science.gov (United States)

    Carneiro, S.; Borges, H. A.

    2018-01-01

    We perform a dynamical system analysis of a cosmological model with linear dependence between the vacuum density and the Hubble parameter, with constant-rate creation of dark matter. We show that the de Sitter spacetime is an asymptotically stable critical point, future limit of any expanding solution. Our analysis also shows that the Minkowski spacetime is an unstable critical point, which eventually collapses to a singularity. In this way, such a prescription for the vacuum decay not only predicts the correct future de Sitter limit, but also forbids the existence of a stable Minkowski universe. We also study the effect of matter creation on the growth of structures and their peculiar velocities, showing that it is inside the current errors of redshift space distortions observations.

  16. Energy-Water Modeling and Analysis | Energy Analysis | NREL

    Science.gov (United States)

    NREL's energy-water modeling and analysis examines U.S. energy sector vulnerabilities to climate change and extreme weather arising from various factors, including water. Example projects include the Renewable Electricity Futures Study and renewable electricity generation analysis with the ReEDS model.

  17. Trojan detection model based on network behavior analysis

    International Nuclear Information System (INIS)

    Liu Junrong; Liu Baoxu; Wang Wenjin

    2012-01-01

    Based on an analysis of existing Trojan detection technology, this paper presents a Trojan detection model based on network behavior analysis. First of all, we give an abstract description of Trojan network behavior, then establish a characteristic behavior library according to certain rules, and then use the support vector machine algorithm to determine whether a Trojan intrusion has occurred. Finally, intrusion detection experiments show that this model can effectively detect Trojans. (authors)
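
    The decision step - classifying a host's network-behavior features as Trojan or benign with a support vector machine - maps onto a standard scikit-learn workflow. A minimal sketch; the three features stand in for entries of the paper's characteristic behavior library and the data are synthetic.

    ```python
    # Sketch of the SVM decision step for Trojan detection. The three features
    # are hypothetical stand-ins for entries in a characteristic behavior library.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    n = 500
    # Features: outbound connection rate, upload/download ratio, beaconing regularity.
    benign = np.column_stack([rng.normal(5, 2, n), rng.normal(0.3, 0.1, n), rng.normal(0.1, 0.05, n)])
    trojan = np.column_stack([rng.normal(9, 2, n), rng.normal(0.8, 0.1, n), rng.normal(0.7, 0.1, n)])
    X = np.vstack([benign, trojan])
    y = np.r_[np.zeros(n), np.ones(n)]        # 1 = Trojan behavior

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
    sample = [[8.5, 0.75, 0.65]]              # one observed host's behavior vector
    print("Trojan intrusion" if clf.predict(sample)[0] == 1 else "benign")
    ```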

  18. Application of linearized model to the stability analysis of the pressurized water reactor

    International Nuclear Information System (INIS)

    Li Haipeng; Huang Xiaojin; Zhang Liangju

    2008-01-01

    A Linear Time-Invariant model of the Pressurized Water Reactor is formulated through linearization of the nonlinear model. The model simulation results show that the linearized model agrees well with the nonlinear model under small perturbations. Based upon Lyapunov's first method, the linearized model is applied to the stability analysis of the Pressurized Water Reactor. The calculation results show that applying linearization to stability analysis is convenient and feasible. (authors)
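
    Lyapunov's first method, as used here, amounts to checking that every eigenvalue of the linearized system matrix has a negative real part. A generic sketch; the state matrix below is an arbitrary stable example, not the reactor model.

    ```python
    # Lyapunov's first method on a linearized model dx/dt = A x: the steady state
    # is asymptotically stable iff every eigenvalue of A has negative real part.
    # The matrix A here is an arbitrary illustrative example, not the PWR model.
    import numpy as np

    A = np.array([
        [-0.5,  0.2,  0.0],
        [ 0.1, -1.0,  0.3],
        [ 0.0,  0.4, -0.8],
    ])
    eigs = np.linalg.eigvals(A)
    print("eigenvalues:", np.round(eigs, 3))
    print("asymptotically stable:", bool(np.all(eigs.real < 0)))
    ```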

  19. Neutrosophic Logic for Mental Model Elicitation and Analysis

    Directory of Open Access Journals (Sweden)

    Karina Pérez-Teruel

    2014-03-01

    Mental models are personal, internal representations of external reality that people use to interact with the world around them. They are useful in multiple situations such as multicriteria decision making, knowledge management, and complex system learning and analysis. In this paper a framework for mental model elicitation and analysis based on neutrosophic logic is presented. An illustrative example is provided to show the applicability of the proposal. The paper ends with conclusions and future research directions.

  20. [Analysis of the stability and adaptability of near infrared spectra qualitative analysis model].

    Science.gov (United States)

    Cao, Wu; Li, Wei-jun; Wang, Ping; Zhang, Li-ping

    2014-06-01

    The stability and adaptability of models for near infrared spectra qualitative analysis were studied. The separate modeling method can significantly improve the stability and adaptability of a model, but its ability to improve adaptability is limited. The joint modeling method can improve not only the adaptability of the model but also its stability; at the same time, compared to separate modeling, it can shorten the modeling time, reduce the modeling workload, extend the term of validity of the model, and improve modeling efficiency. The model adaptability experiment shows that the correct recognition rate of the separate modeling method is relatively low and cannot meet the requirements of application, whereas the joint modeling method can reach a correct recognition rate of 90% and significantly enhances the recognition effect. The model stability experiment shows that the identification results of the jointly built model are better than those of the separately built model, and the method has good application value.

  1. Application of parameters space analysis tools for empirical model validation

    Energy Technology Data Exchange (ETDEWEB)

    Paloma del Barrio, E. [LEPT-ENSAM UMR 8508, Talence (France); Guyon, G. [Electricite de France, Moret-sur-Loing (France)

    2004-01-01

    A new methodology for empirical model validation has been proposed in the framework of Task 22 (Building Energy Analysis Tools) of the International Energy Agency. It involves two main steps: checking model validity and diagnosis. Both steps, as well as the underlying methods, have been presented in the first part of the paper. In this part, they are applied for testing modelling hypotheses in the framework of the thermal analysis of an actual building. Sensitivity analysis tools were first used to identify the parts of the model that can really be tested on the available data. A preliminary diagnosis is then supplied by principal components analysis. Useful information for model behaviour improvement was finally obtained by optimisation techniques. This example of application shows how model parameter space analysis is a powerful tool for empirical validation. In particular, diagnosis possibilities are largely increased in comparison with residuals analysis techniques. (author)

  2. Spatial Heterodyne Observations of Water (SHOW) vapour in the upper troposphere and lower stratosphere from a high altitude aircraft: Modelling and sensitivity analysis

    Science.gov (United States)

    Langille, J. A.; Letros, D.; Zawada, D.; Bourassa, A.; Degenstein, D.; Solheim, B.

    2018-04-01

    A spatial heterodyne spectrometer (SHS) has been developed to measure the vertical distribution of water vapour in the upper troposphere and the lower stratosphere with a high vertical resolution (∼500 m). The Spatial Heterodyne Observations of Water (SHOW) instrument combines an imaging system with a monolithic field-widened SHS to observe limb scattered sunlight in a vibrational band of water (1363 nm-1366 nm). The instrument has been optimized for observations from NASA's ER-2 aircraft as a proof-of-concept for a future low earth orbit satellite deployment. A robust model has been developed to simulate SHOW ER-2 limb measurements and retrievals. This paper presents the simulation of the SHOW ER-2 limb measurements along a hypothetical flight track and examines the sensitivity of the measurement and retrieval approach. Water vapour fields from an Environment and Climate Change Canada forecast model are used to represent realistic spatial variability along the flight path. High spectral resolution limb scattered radiances are simulated using the SASKTRAN radiative transfer model. It is shown that the SHOW instrument onboard the ER-2 is capable of resolving the water vapour variability in the UTLS from approximately 12 km - 18 km with ±1 ppm accuracy. Vertical resolutions between 500 m and 1 km are feasible. The along track sampling capability of the instrument is also discussed.

  3. ROCK PROPERTIES MODEL ANALYSIS MODEL REPORT

    International Nuclear Information System (INIS)

    Clinton Lum

    2002-01-01

    (4) Generation of derivative property models via linear coregionalization with porosity; (5) Post-processing of the simulated models to impart desired secondary geologic attributes and to create summary and uncertainty models; and (6) Conversion of the models into real-world coordinates. The conversion to real world coordinates is performed as part of the integration of the RPM into the Integrated Site Model (ISM) 3.1; this activity is not part of the current analysis. The ISM provides a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site and consists of three components: (1) Geologic Framework Model (GFM); (2) RPM, which is the subject of this AMR; and (3) Mineralogic Model. The interrelationship of the three components of the ISM and their interface with downstream uses are illustrated in Figure 1. Figure 2 shows the geographic boundaries of the RPM and other component models of the ISM.

  4. Bayesian analysis of repairable systems showing a bounded failure intensity

    International Nuclear Information System (INIS)

    Guida, Maurizio; Pulcini, Gianpaolo

    2006-01-01

    The failure pattern of repairable mechanical equipment subject to deterioration phenomena sometimes shows a finite bound for the increasing failure intensity. A non-homogeneous Poisson process with bounded increasing failure intensity is then illustrated and its characteristics are discussed. A Bayesian procedure, based on prior information on model-free quantities, is developed in order to allow technical information on the failure process to be incorporated into the inferential procedure and to improve the inference accuracy. Posterior estimation of the model-free quantities and of other quantities of interest (such as the optimal replacement interval) is provided, and predictions of the waiting time to the next failure and of the number of failures in a future time interval are given. Finally, numerical examples are given to illustrate the proposed inferential procedure.
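
    A non-homogeneous Poisson process with a bounded increasing intensity is easy to simulate by thinning, since the bound itself dominates the intensity. The sketch below uses lambda(t) = a(1 - exp(-b t)), a common bounded-intensity form chosen purely for illustration; the abstract does not print the paper's actual intensity function.

    ```python
    # Thinning (Lewis-Shedler) simulation of an NHPP whose failure intensity
    # increases toward a finite bound: lambda(t) = a * (1 - exp(-b * t)).
    # This functional form is an illustrative choice, not necessarily the paper's.
    import numpy as np

    def simulate_nhpp(a, b, t_end, rng):
        lam_max = a                       # the bound dominates lambda(t) everywhere
        t, events = 0.0, []
        while True:
            t += rng.exponential(1.0 / lam_max)
            if t > t_end:
                return np.array(events)
            if rng.random() < (a * (1 - np.exp(-b * t))) / lam_max:
                events.append(t)          # accept with probability lambda(t)/lam_max

    rng = np.random.default_rng(42)
    failures = simulate_nhpp(a=0.5, b=0.1, t_end=200.0, rng=rng)
    # Expected count is the integral of lambda(t): a*(T - (1 - exp(-b*T))/b).
    print(f"{failures.size} failures; expected ~{0.5*(200 - (1 - np.exp(-0.1*200))/0.1):.0f}")
    ```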

  5. Fat stigmatization in television shows and movies: a content analysis.

    Science.gov (United States)

    Himes, Susan M; Thompson, J Kevin

    2007-03-01

    To examine the phenomenon of fat stigmatization messages presented in television shows and movies, a content analysis was used to quantify and categorize fat-specific commentary and humor. Fat stigmatization vignettes were identified using a targeted sampling procedure, and 135 scenes were excised from movies and television shows. The material was coded by trained raters. Reliability indices were uniformly high for the seven categories (percentage agreement ranged from 0.90 to 0.98; kappas ranged from 0.66 to 0.94). Results indicated that fat stigmatization commentary and fat humor were often verbal, directed toward another person, and often presented directly in the presence of the overweight target. Results also indicated that male characters were three times more likely to engage in fat stigmatization commentary or fat humor than female characters. To our knowledge, these findings provide the first information regarding the specific gender, age, and types of fat stigmatization that occur frequently in movies and television shows. The stimuli should prove useful in future research examining the role of individual difference factors (e.g., BMI) in the reaction to viewing such vignettes.

  6. Human Commercial Models' Eye Colour Shows Negative Frequency-Dependent Selection.

    Directory of Open Access Journals (Sweden)

    Isabela Rodrigues Nogueira Forti

    In this study we investigated the eye colour of human commercial models registered in the UK (400 female and 400 male) and Brazil (400 female and 400 male) to test the hypothesis that model eye colour frequency was the result of negative frequency-dependent selection. The eye colours of the models were classified as blue, brown or intermediate. Chi-square analyses of data for countries separated by sex showed that in the United Kingdom brown eyes and intermediate colours were significantly more frequent than expected in comparison to the general United Kingdom population (P<0.001). In Brazil, the most frequent eye colour, brown, was significantly less frequent than expected in comparison to the general Brazilian population. These results support the hypothesis that model eye colour is the result of negative frequency-dependent selection. This could be the result of people using eye colour as a marker of genetic diversity and finding rarer eye colours more attractive because of the potential advantage of the more genetically diverse offspring that could result from such a choice. Eye colour may be important because, in comparison to many other physical traits (e.g., hair colour), it is hard to modify, hide or disguise, and it is highly polymorphic.
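
    The chi-square comparison - observed model eye-colour counts against expectations from general-population frequencies - is straightforward to reproduce with SciPy. The counts and population proportions below are invented placeholders, not the study's data.

    ```python
    # Chi-square goodness-of-fit: observed eye-colour counts among models vs.
    # counts expected from general-population frequencies. Numbers are invented.
    from scipy.stats import chisquare

    observed = [120, 210, 70]                      # blue, brown, intermediate among 400 models
    population_props = [0.40, 0.45, 0.15]          # hypothetical population frequencies
    expected = [400 * p for p in population_props]

    stat, p = chisquare(observed, f_exp=expected)
    print(f"chi2 = {stat:.2f}, p = {p:.4f}")       # small p => frequencies differ from population
    ```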

  7. Electromagnetic modeling method for eddy current signal analysis

    International Nuclear Information System (INIS)

    Lee, D. H.; Jung, H. K.; Cheong, Y. M.; Lee, Y. S.; Huh, H.; Yang, D. J.

    2004-10-01

    An electromagnetic modeling method for eddy current signal analysis is necessary before an experiment is performed. Electromagnetic modeling methods consist of analytical methods and numerical methods. The numerical methods can be divided into the Finite Element Method (FEM), the Boundary Element Method (BEM) and the Volume Integral Method (VIM). Each modeling method has its merits and demerits, so a suitable modeling method can be chosen by considering the characteristics of each. This report explains the principle and application of each modeling method and compares the modeling programs.

  8. Comparative analysis of used car price evaluation models

    Science.gov (United States)

    Chen, Chuancan; Hao, Lulu; Xu, Cong

    2017-05-01

    An accurate used car price evaluation is a catalyst for the healthy development of the used car market. Data mining has been applied to predict used car prices in several articles. However, little has been done to compare different algorithms for used car price estimation. This paper collects more than 100,000 used car dealing records throughout China for an empirical, thorough comparison of two algorithms: linear regression and random forest. These two algorithms are used to predict used car price in three different models: a model for a certain car make, a model for a certain car series, and a universal model. Results show that random forest has a stable but not ideal effect in the price evaluation model for a certain car make, but it shows a great advantage in the universal model compared with linear regression. This indicates that random forest is an optimal algorithm when handling complex models with a large number of variables and samples, yet it shows no obvious advantage when coping with simple models with fewer variables.
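
    The comparison itself is a routine scikit-learn exercise: fit both algorithms on the same split and compare held-out error. A sketch on synthetic data; the two features and the price function are hypothetical, standing in for the 100,000 dealing records.

    ```python
    # Sketch of the linear regression vs. random forest comparison on used-car
    # prices. Features and synthetic data are hypothetical, not the dealing records.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)
    n = 5000
    age = rng.uniform(0, 10, n)                   # years since registration
    km = rng.uniform(0, 200_000, n)               # mileage
    # Nonlinear depreciation, which favors the tree ensemble over the linear fit.
    price = 20_000 * np.exp(-0.15 * age) - 0.02 * km + rng.normal(0, 800, n)
    X = np.column_stack([age, km])

    X_tr, X_te, y_tr, y_te = train_test_split(X, price, random_state=0)
    for model in (LinearRegression(), RandomForestRegressor(n_estimators=200, random_state=0)):
        model.fit(X_tr, y_tr)
        mae = mean_absolute_error(y_te, model.predict(X_te))
        print(f"{type(model).__name__}: MAE = {mae:,.0f}")
    ```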

  9. Sensitivity analysis of Takagi-Sugeno-Kang rainfall-runoff fuzzy models

    Directory of Open Access Journals (Sweden)

    A. P. Jacquin

    2009-01-01

    This paper is concerned with the sensitivity analysis of the model parameters of the Takagi-Sugeno-Kang fuzzy rainfall-runoff models previously developed by the authors. These models fall into two types of fuzzy models, where the first type is intended to account for the effect of changes in catchment wetness and the second type incorporates seasonality as a source of non-linearity. The sensitivity analysis is performed using two global sensitivity analysis methods, namely Regional Sensitivity Analysis and Sobol's variance decomposition. Data from six catchments of different geographical locations and sizes are used in the sensitivity analysis. The sensitivity of the model parameters is analysed in terms of several measures of goodness of fit, assessing the model performance from different points of view. These measures include the Nash-Sutcliffe criterion, volumetric errors and peak errors. The results show that the sensitivity of the model parameters depends on both the catchment type and the measure used to assess the model performance.
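
    Sobol's variance decomposition, one of the two global methods used here, is available off the shelf in the SALib package. The sketch below ranks three stand-in parameters of a toy response function; the actual fuzzy-model parameters and goodness-of-fit measures are not reproduced in this abstract.

    ```python
    # Sobol variance decomposition with SALib on a stand-in model function.
    # The three "parameters" and the response function are illustrative only.
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {
        "num_vars": 3,
        "names": ["storage_coeff", "wetness_thresh", "seasonal_amp"],
        "bounds": [[0.1, 1.0], [0.0, 0.5], [0.0, 2.0]],
    }

    X = saltelli.sample(problem, 1024)
    # Stand-in response: a goodness-of-fit-like scalar for each parameter set.
    Y = np.sin(X[:, 0] * np.pi) + 0.5 * X[:, 1] ** 2 + 0.1 * X[:, 2] * X[:, 0]

    Si = sobol.analyze(problem, Y)
    for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
        print(f"{name}: first-order {s1:.2f}, total {st:.2f}")
    ```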

  10. Regression analysis of a chemical reaction fouling model

    International Nuclear Information System (INIS)

    Vasak, F.; Epstein, N.

    1996-01-01

    A previously reported mathematical model for the initial chemical reaction fouling of a heated tube is critically examined in the light of the experimental data for which it was developed. A regression analysis of the model with respect to that data shows that the reference point upon which the two adjustable parameters of the model were originally based was well chosen, albeit fortuitously. (author). 3 refs., 2 tabs., 2 figs

  11. Similar words analysis based on POS-CBOW language model

    Directory of Open Access Journals (Sweden)

    Dongru RUAN

    2015-10-01

    Similar words analysis is one of the important aspects in the field of natural language processing, and it has important research and application value in text classification, machine translation and information recommendation. Focusing on the features of Sina Weibo's short texts, this paper presents a language model named POS-CBOW, a continuous bag-of-words language model with a filtering layer and a part-of-speech tagging layer. The proposed approach can adjust the word vectors' similarity according to the cosine similarity and the word vectors' part-of-speech metrics. It can also filter the similar word set on the basis of the statistical analysis model. The experimental results show that the similar words analysis algorithm based on the proposed POS-CBOW language model is better than that based on the traditional CBOW language model.
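
    The core similarity step is cosine similarity between word vectors, gated by the part-of-speech layer so that only same-tag words are compared. A minimal sketch with toy vectors and tags; real POS-CBOW embeddings would come from training on the Weibo corpus.

    ```python
    # Cosine similarity between word vectors, filtered by part-of-speech tag,
    # as in the POS-CBOW similar-words step. Vectors and tags are toy values.
    import numpy as np

    vectors = {
        "happy":  (np.array([0.90, 0.10, 0.30]), "ADJ"),
        "glad":   (np.array([0.80, 0.20, 0.35]), "ADJ"),
        "run":    (np.array([0.10, 0.90, 0.20]), "VERB"),
        "joyful": (np.array([0.85, 0.15, 0.40]), "ADJ"),
    }

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    def similar_words(query, top_k=3):
        q_vec, q_pos = vectors[query]
        candidates = [
            (w, cosine(q_vec, vec))
            for w, (vec, pos) in vectors.items()
            if w != query and pos == q_pos      # POS layer: keep same-tag words only
        ]
        return sorted(candidates, key=lambda t: -t[1])[:top_k]

    print(similar_words("happy"))   # adjectives ranked by cosine similarity
    ```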

  12. #13ReasonsWhy Health Professionals and Educators are Tweeting: A Systematic Analysis of Uses and Perceptions of Show Content and Learning Outcomes.

    Science.gov (United States)

    Walker, Kimberly K; Burns, Kelli

    2018-04-27

    This study is a content analysis of health professionals' and educators' tweets about a popular Netflix show that depicts teen suicide: 13 Reasons Why. A content analysis of 740 tweets was conducted to determine the main themes associated with professionals' and educators' tweets about the show, as well as the valence of the tweets. Additionally, a thematic analysis of linked content in tweets (n = 178) was conducted to explore additional content shared about the show and modeling outcomes. Results indicated the largest percentage of tweets was related to social learning, particularly about outcomes that could occur from viewing the show. The valence of the tweets about outcomes was more positive than negative. However, linked materials commonly circulated in tweets signified greater concern with unintended learning outcomes. Some of the linked content included media guidelines for reporting on suicide with recommendations that entertainment producers follow the guidelines. This study emphasizes the importance of including social learning objectives in future typologies of Twitter uses and demonstrates the importance of examining linked content in Twitter studies.

  13. Constraints based analysis of extended cybernetic models.

    Science.gov (United States)

    Mandli, Aravinda R; Venkatesh, Kareenhalli V; Modak, Jayant M

    2015-11-01

    The cybernetic modeling framework provides an interesting approach to model the regulatory phenomena occurring in microorganisms. In the present work, we adopt a constraints based approach to analyze the nonlinear behavior of the extended equations of the cybernetic model. We first show that the cybernetic model exhibits linear growth behavior under the constraint of no resource allocation for the induction of the key enzyme. We then quantify the maximum achievable specific growth rate of microorganisms on mixtures of substitutable substrates under various kinds of regulation and show its use in gaining an understanding of the regulatory strategies of microorganisms. Finally, we show that Saccharomyces cerevisiae exhibits suboptimal dynamic growth with a long diauxic lag phase when growing on a mixture of glucose and galactose and discuss on its potential to achieve optimal growth with a significantly reduced diauxic lag period. The analysis carried out in the present study illustrates the utility of adopting a constraints based approach to understand the dynamic growth strategies of microorganisms. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  14. Model of the synthesis of trisporic acid in Mucorales showing bistability.

    Science.gov (United States)

    Werner, S; Schroeter, A; Schimek, C; Vlaic, S; Wöstemeyer, J; Schuster, S

    2012-12-01

    An important substance in the signalling between individuals of Mucor-like fungi is trisporic acid (TA). This compound, together with some of its precursors, serves as a pheromone in mating between (+)- and (-)-mating types. Moreover, intermediates of the TA pathway are exchanged between the two mating partners. Based on differential equations, mathematical models of the synthesis pathways of TA in the two mating types of an idealised Mucor-fungus are here presented. These models include the positive feedback of TA on its own synthesis. The authors compare three sub-models in view of bistability, robustness and the reversibility of transitions. The proposed modelling study showed that, in a system where intermediates are exchanged, a reversible transition between the two stable steady states occurs, whereas an exchange of the end product leads to an irreversible transition. The reversible transition is physiologically favoured, because the high-production state of TA must come to an end eventually. Moreover, the exchange of intermediates and TA is compared with the 3-way handshake widely used by computers linked in a network.

  15. Application of autoregressive moving average model in reactor noise analysis

    International Nuclear Information System (INIS)

    Tran Dinh Tri

    1993-01-01

    The application of autoregressive (AR) models to estimating noise measurements has achieved many successes in reactor noise analysis in the last ten years. The physical processes that take place in a nuclear reactor, however, are described by an autoregressive moving average (ARMA) model rather than by an AR model. Consequently, more accurate results could be obtained by applying the ARMA model instead of the AR model to reactor noise analysis. In this paper the system of generalised Yule-Walker equations is derived from the equation of an ARMA model, and a method for its solution is given. Numerical results show the applications of the proposed method. (author)
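
    Fitting an ARMA model to a measured noise record - the step this paper solves via the generalized Yule-Walker equations - can be prototyped with statsmodels, which estimates by state-space maximum likelihood; this is a practical stand-in, not the paper's method.

    ```python
    # Fit an ARMA(2,1) model to a synthetic noise record. statsmodels estimates
    # by state-space maximum likelihood, a practical stand-in for the paper's
    # generalized Yule-Walker solution.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA
    from statsmodels.tsa.arima_process import arma_generate_sample

    ar = [1, -0.6, 0.2]          # AR polynomial 1 - 0.6L + 0.2L^2 (statsmodels sign convention)
    ma = [1, 0.4]                # MA polynomial 1 + 0.4L
    y = arma_generate_sample(ar, ma, nsample=2000)

    res = ARIMA(y, order=(2, 0, 1)).fit()
    print(res.params)            # estimated AR/MA coefficients and innovation variance
    ```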

  16. Multi-model analysis of terrestrial carbon cycles in Japan: limitations and implications of model calibration using eddy flux observations

    Directory of Open Access Journals (Sweden)

    K. Ichii

    2010-07-01

    Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and future climate change. To reduce uncertainties and improve the understanding of their carbon budgets, we investigated the utility of the eddy flux datasets to improve model simulations and reduce variabilities among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine - based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models using default model settings showed large deviations in model outputs from observation with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs. This study demonstrated that careful validation and calibration of models with available eddy flux data reduced model-by-model differences. Yet, site history, analysis of model structure changes, and a more objective procedure of model calibration should be included in the further analysis.

  17. Multi-model analysis of terrestrial carbon cycles in Japan: limitations and implications of model calibration using eddy flux observations

    Science.gov (United States)

    Ichii, K.; Suzuki, T.; Kato, T.; Ito, A.; Hajima, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.; Ohtani, Y.; Takagi, K.

    2010-07-01

    Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and future climate change. To reduce uncertainties and improve the understanding of their carbon budgets, we investigated the utility of the eddy flux datasets to improve model simulations and reduce variabilities among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine - based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models using default model settings showed large deviations in model outputs from observation with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs. This study demonstrated that careful validation and calibration of models with available eddy flux data reduced model-by-model differences. Yet, site history, analysis of model structure changes, and more objective procedure of model calibration should be included in the further analysis.
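
    The calibration step - tuning model parameters so simulated fluxes track the eddy covariance observations - can be prototyped with SciPy's least-squares optimizer. The sketch uses a deliberately simple light-use-efficiency stand-in for a terrestrial biosphere model; the model form, parameter names and data are illustrative.

    ```python
    # Prototype of eddy-flux calibration: tune the parameters of a toy
    # light-use-efficiency GPP model against (synthetic) flux-tower GPP.
    # The model form and parameter names are illustrative stand-ins.
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(3)
    par = rng.uniform(0, 60, 365)                 # daily PAR, MJ m-2 d-1
    temp = 15 + 10 * np.sin(np.linspace(0, 2 * np.pi, 365))

    def gpp_model(theta, par, temp):
        lue, t_opt, t_width = theta
        f_temp = np.exp(-((temp - t_opt) / t_width) ** 2)
        return lue * par * f_temp

    true_theta = (1.2, 18.0, 8.0)
    obs = gpp_model(true_theta, par, temp) + rng.normal(0, 2.0, 365)   # "tower GPP"

    fit = least_squares(lambda th: gpp_model(th, par, temp) - obs,
                        x0=[0.5, 10.0, 15.0])
    print("calibrated parameters:", np.round(fit.x, 2))   # should approach true_theta
    ```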

  18. Data analysis and approximate models model choice, location-scale, analysis of variance, nonparametric regression and image analysis

    CERN Document Server

    Davies, Patrick Laurie

    2014-01-01

    Table of contents: Introduction; Approximate Models; Notation; Two Modes of Statistical Analysis; Towards One Mode of Analysis; Approximation, Randomness, Chaos, Determinism; Approximation; A Concept of Approximation; Approximation; Approximating a Data Set by a Model; Approximation Regions; Functionals and Equivariance; Regularization and Optimality; Metrics and Discrepancies; Strong and Weak Topologies; On Being (almost) Honest; Simulations and Tables; Degree of Approximation and p-values; Scales; Stability of Analysis; The Choice of En(α, P); Independence; Procedures, Approximation and Vagueness; Discrete Models; The Empirical Density; Metrics and Discrepancies; The Total Variation Metric; The Kullback-Leibler and Chi-Squared Discrepancies; The Po(λ) Model; The b(k, p) and nb(k, p) Models; The Flying Bomb Data; The Student Study Times Data; Outliers; Outliers, Data Analysis and Models; Breakdown Points and Equivariance; Identifying Outliers and Breakdown; Outliers in Multivariate Data; Outliers in Linear Regression; Outliers in Structured Data; The Location...

  19. Dynamical analysis of cigarette smoking model with a saturated incidence rate

    Science.gov (United States)

    Zeb, Anwar; Bano, Ayesha; Alzahrani, Ebraheem; Zaman, Gul

    2018-04-01

    In this paper, we consider a delayed smoking model in which the potential smokers are assumed to satisfy the logistic equation. We discuss the dynamical behavior of our proposed model in the form of delayed differential equations (DDEs) and show conditions for asymptotic stability of the model in steady state. We also discuss the Hopf bifurcation analysis of the considered model. Finally, we use a nonstandard finite difference (NSFD) scheme to show the results graphically with the help of MATLAB.
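
    The nonstandard finite difference idea mentioned at the end is to discretize so that positivity and the fixed points of the continuous equation are preserved; for a logistic equation this is done by evaluating the nonlinear term semi-implicitly. A sketch in Python rather than MATLAB, with illustrative parameter values:

    ```python
    # Nonstandard finite difference (NSFD) scheme for the logistic equation
    # dP/dt = r P (1 - P/K): the nonlinear term is evaluated semi-implicitly,
    # which keeps iterates positive and preserves the fixed points 0 and K.
    # Parameter values are illustrative, not taken from the smoking model.
    r, K = 0.4, 1000.0          # growth rate, carrying capacity (illustrative)
    h = 0.5                     # step size; NSFD stays well behaved even for large h
    P = 10.0
    for n in range(100):
        P = P * (1 + h * r) / (1 + h * r * P / K)

    print(f"P after {100 * h:.0f} time units: {P:.1f} (analytic limit K = {K:.0f})")
    ```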

  20. Thermodynamic modeling of transcription: sensitivity analysis differentiates biological mechanism from mathematical model-induced effects.

    Science.gov (United States)

    Dresch, Jacqueline M; Liu, Xiaozhou; Arnosti, David N; Ay, Ahmet

    2010-10-24

    Quantitative models of gene expression generate parameter values that can shed light on biological features such as transcription factor activity, cooperativity, and local effects of repressors. An important element in such investigations is sensitivity analysis, which determines how strongly a model's output reacts to variations in parameter values. Parameters of low sensitivity may not be accurately estimated, leading to unwarranted conclusions. Low sensitivity may reflect the nature of the biological data, or it may be a result of the model structure. Here, we focus on the analysis of thermodynamic models, which have been used extensively to analyze gene transcription. Extracted parameter values have been interpreted biologically, but until now little attention has been given to parameter sensitivity in this context. We apply local and global sensitivity analyses to two recent transcriptional models to determine the sensitivity of individual parameters. We show that in one case, values for repressor efficiencies are very sensitive, while values for protein cooperativities are not, and provide insights on why these differential sensitivities stem from both biological effects and the structure of the applied models. In a second case, we demonstrate that parameters that were thought to prove the system's dependence on activator-activator cooperativity are relatively insensitive. We show that there are numerous parameter sets that do not satisfy the relationships proffered as the optimal solutions, indicating that structural differences between the two types of transcriptional enhancers analyzed may not be as simple as altered activator cooperativity. Our results emphasize the need for sensitivity analysis to examine model construction and forms of biological data used for modeling transcriptional processes, in order to determine the significance of estimated parameter values for thermodynamic models. Knowledge of parameter sensitivities can provide the necessary

  1. Critical analysis of algebraic collective models

    International Nuclear Information System (INIS)

    Moshinsky, M.

    1986-01-01

    The author understands by algebraic collective models all those based on specific Lie algebras, whether the latter are suggested through simple shell model considerations, as in the case of the Interacting Boson Approximation (IBA), or have a detailed microscopic foundation, as in the symplectic model. To analyze these models critically, it is convenient to take a simple conceptual example in which all steps can be implemented analytically or through elementary numerical analysis. In this note the author takes as an example the symplectic model in a two-dimensional space, i.e. based on an sp(4,R) Lie algebra, and shows how, through its complete discussion, one can get a clearer understanding of the structure of algebraic collective models of nuclei. In particular, the association of Hamiltonians related to maximal subalgebras of the basic Lie algebra with specific types of spectra is discussed, along with the connections between spectra and shapes.

  2. Systems thinking, the Swiss Cheese Model and accident analysis: a comparative systemic analysis of the Grayrigg train derailment using the ATSB, AcciMap and STAMP models.

    Science.gov (United States)

    Underwood, Peter; Waterson, Patrick

    2014-07-01

    The Swiss Cheese Model (SCM) is the most popular accident causation model and is widely used throughout various industries. A debate exists in the research literature over whether the SCM remains a viable tool for accident analysis. Critics of the model suggest that it provides a sequential, oversimplified view of accidents. Conversely, proponents suggest that it embodies the concepts of systems theory, as per the contemporary systemic analysis techniques. The aim of this paper was to consider whether the SCM can provide a systems thinking approach and remain a viable option for accident analysis. To achieve this, the train derailment at Grayrigg was analysed with an SCM-based model (the ATSB accident investigation model) and two systemic accident analysis methods (AcciMap and STAMP). The analysis outputs and usage of the techniques were compared. The findings of the study showed that each model applied the systems thinking approach. However, the ATSB model and AcciMap graphically presented their findings in a more succinct manner, whereas STAMP more clearly embodied the concepts of systems theory. The study suggests that, whilst the selection of an analysis method is subject to trade-offs that practitioners and researchers must make, the SCM remains a viable model for accident analysis. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Failure analysis and modeling of a multicomputer system. M.S. Thesis

    Science.gov (United States)

    Subramani, Sujatha Srinivasan

    1990-01-01

    This thesis describes the results of an extensive measurement-based analysis of real error data collected from a 7-machine DEC VaxCluster multicomputer system. In addition to evaluating basic system error and failure characteristics, we develop reward models to analyze the impact of failures and errors on the system. The results show that, although 98 percent of errors in the shared resources recover, they result in 48 percent of all system failures. The analysis of rewards shows that the expected reward rate for the VaxCluster decreases to 0.5 in 100 days for a 3-out-of-7 model, which is well over 100 times that for a 7-out-of-7 model. A comparison of the reward rates for a range of k-out-of-n models indicates that the maximum increase in reward rate (0.25) occurs in going from the 6-out-of-7 model to the 5-out-of-7 model. The analysis also shows that software errors have the lowest reward (0.2 vs. 0.91 for network errors). The large loss in reward rate for software errors is due to the fact that a large proportion (94 percent) of software errors lead to failure. In comparison, the high reward rate for network errors is due to fast recovery from a majority of these errors (median recovery duration is 0 seconds).
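
    The k-out-of-n comparison can be made concrete: if each of the n machines were up independently with probability p, the probability that at least k are up is a binomial tail. A sketch with a hypothetical per-machine availability; the thesis's reward rates come from measured failure and recovery data, so the numbers differ, but the shape of the comparison is the same.

    ```python
    # Probability that at least k of n machines are up, for a k-out-of-n
    # availability comparison. p is a hypothetical per-machine availability,
    # not a figure measured in the thesis.
    from scipy.stats import binom

    n, p = 7, 0.90
    for k in range(7, 2, -1):
        print(f"{k}-out-of-{n}: P(at least {k} up) = {binom.sf(k - 1, n, p):.4f}")
    ```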

  4. Fluctuation microscopy analysis of amorphous silicon models

    Energy Technology Data Exchange (ETDEWEB)

    Gibson, J.M., E-mail: jmgibson@fsu.edu [Northeastern University, Department of Physics, Boston MA 02115 (United States); FAMU/FSU Joint College of Engineering, 225 Pottsdamer Street, Tallahassee, FL 32310 (United States); Treacy, M.M.J. [Arizona State University, Department of Physics, Tempe AZ 85287 (United States)

    2017-05-15

    Highlights: • Studied competing computer models for amorphous silicon and simulated fluctuation microscopy data. • Show that only paracrystalline/random network composite can fit published data. • Specifically show that pure random network or random network with void models do not fit available data. • Identify a new means to measure volume fraction of ordered material. • Identify unreported limitations of the Debye model for simulating fluctuation microscopy data. - Abstract: Using computer-generated models we discuss the use of fluctuation electron microscopy (FEM) to identify the structure of amorphous silicon. We show that a combination of variable resolution FEM to measure the correlation length, with correlograph analysis to obtain the structural motif, can pin down structural correlations. We introduce the method of correlograph variance as a promising means of independently measuring the volume fraction of a paracrystalline composite. From comparisons with published data, we affirm that only a composite material of paracrystalline and continuous random network that is substantially paracrystalline could explain the existing experimental data, and point the way to more precise measurements on amorphous semiconductors. The results are of general interest for other classes of disordered materials.

  5. Fluctuation microscopy analysis of amorphous silicon models

    International Nuclear Information System (INIS)

    Gibson, J.M.; Treacy, M.M.J.

    2017-01-01

    Highlights: • Studied competing computer models for amorphous silicon and simulated fluctuation microscopy data. • Show that only paracrystalline/random network composite can fit published data. • Specifically show that pure random network or random network with void models do not fit available data. • Identify a new means to measure volume fraction of ordered material. • Identify unreported limitations of the Debye model for simulating fluctuation microscopy data. - Abstract: Using computer-generated models we discuss the use of fluctuation electron microscopy (FEM) to identify the structure of amorphous silicon. We show that a combination of variable resolution FEM to measure the correlation length, with correlograph analysis to obtain the structural motif, can pin down structural correlations. We introduce the method of correlograph variance as a promising means of independently measuring the volume fraction of a paracrystalline composite. From comparisons with published data, we affirm that only a composite material of paracrystalline and continuous random network that is substantially paracrystalline could explain the existing experimental data, and point the way to more precise measurements on amorphous semiconductors. The results are of general interest for other classes of disordered materials.

  6. Predictive models for monitoring and analysis of the total zooplankton

    Directory of Open Access Journals (Sweden)

    Obradović Milica

    2014-01-01

    In recent years, modeling and prediction of total zooplankton abundance have been performed with various tools and techniques, among which data mining tools have been less frequent. The purpose of this paper is to automatically determine the degree of dependency and the influence of physical, chemical and biological parameters on total zooplankton abundance, through the design of specific data mining models. For this purpose, an analysis of key influencers was used. The analysis is based on data obtained from the SeLaR information system - specifically, data from two reservoirs (Gruža and Grošnica) with different morphometric characteristics and trophic states. The data are transformed into an optimal structure for data analysis, upon which a data mining model based on the Naïve Bayes algorithm is constructed. The results of the analysis imply that in both reservoirs, parameters of groups and species of zooplankton have the greatest influence on total zooplankton abundance. If these inputs (groups and zooplankton species) are left out, differences in the impact of physical, chemical and other biological parameters between the reservoirs can be noted. In the Grošnica reservoir, the analysis showed that the temporal dimension (month), nitrates, water temperature, chemical oxygen demand, chlorophyll and chlorides had the key influence, with a strong relative impact. In the Gruža reservoir, the key influence parameters for total zooplankton are the spatial dimension (location), water temperature and physiological groups of bacteria. The results show that the presented data mining model is usable on any kind of aquatic ecosystem and can also serve for the detection of inputs which could be the basis for future analysis and modeling.
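
    A key-influencer analysis over a Naïve Bayes model can be roughly emulated by scoring each discretized input on its own and ranking by predictive strength. A sketch with scikit-learn; the feature names and synthetic data are placeholders for the SeLaR variables.

    ```python
    # Rough key-influencer ranking: score each discretized input on its own
    # with a Naive Bayes classifier. Features and data stand in for SeLaR variables.
    import numpy as np
    from sklearn.naive_bayes import CategoricalNB
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(5)
    n = 800
    month = rng.integers(0, 12, n)
    temp_band = rng.integers(0, 4, n)          # binned water temperature
    nitrate_band = rng.integers(0, 4, n)       # binned nitrate concentration
    # Synthetic target: high/low total zooplankton, driven mostly by temperature.
    y = ((temp_band >= 2) & (rng.random(n) < 0.8)).astype(int)

    features = {"month": month, "temp_band": temp_band, "nitrate_band": nitrate_band}
    for name, col in features.items():
        score = cross_val_score(CategoricalNB(), col.reshape(-1, 1), y, cv=5).mean()
        print(f"{name}: single-feature CV accuracy = {score:.2f}")
    ```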

  7. Robust bayesian analysis of an autoregressive model with ...

    African Journals Online (AJOL)

    In this work, robust Bayesian analysis of the Bayesian estimation of an autoregressive model with exponential innovations is performed. Using a Bayesian robustness methodology, we show that, using a suitable generalized quadratic loss, we obtain optimal Bayesian estimators of the parameters corresponding to the ...

  8. Sensitivity Analysis in Sequential Decision Models.

    Science.gov (United States)

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

Sequential decision problems, which are commonly solved using Markov decision processes (MDPs), are frequently encountered in medical decision making. Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically on the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in it for a given willingness-to-pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
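A minimal sketch of the multivariate idea, under invented assumptions: sample the uncertain parameters of a tiny two-state MDP, solve it by value iteration for each draw, and tally how often each policy comes out optimal, which is the empirical confidence the abstract describes.

```python
# Hedged sketch: sample uncertain MDP parameters, solve the MDP for each
# draw, and report how often each policy is optimal. The tiny 2-state,
# 2-action MDP and its parameter distributions are invented for
# illustration; this is not the study's case study.
import numpy as np

rng = np.random.default_rng(2)
n_states, n_actions, gamma = 2, 2, 0.95

def solve_policy(P, R, tol=1e-8):
    """Value iteration; returns the greedy policy for transitions
    P[a, s, s'] and rewards R[a, s]."""
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * np.einsum("ast,t->as", P, V)
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return tuple(Q.argmax(axis=0))
        V = V_new

counts = {}
for _ in range(1000):
    p = rng.beta(8, 2)                       # uncertain success probability
    P = np.array([[[p, 1 - p], [0.3, 0.7]],  # action 0
                  [[0.5, 0.5], [p, 1 - p]]]) # action 1
    R = np.array([[1.0, 0.0], [0.5, rng.normal(0.8, 0.2)]])
    pol = solve_policy(P, R)
    counts[pol] = counts.get(pol, 0) + 1

for pol, c in sorted(counts.items(), key=lambda t: -t[1]):
    print(pol, c / 1000)   # empirical confidence in each policy
```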

  9. Factor analysis shows association between family activity environment and children's health behaviour.

    Science.gov (United States)

    Hendrie, Gilly A; Coveney, John; Cox, David N

    2011-12-01

To characterise the family activity environment in a questionnaire format, assess the questionnaire's reliability and describe its predictive ability by examining the relationships between the family activity environment and children's health behaviours - physical activity, screen time and fruit and vegetable intake. This paper describes the creation of a tool, based on previously validated scales, adapted from the food domain. Data are from 106 children and their parents (Adelaide, South Australia). Factor analysis was used to characterise factors within the family activity environment. Pearson product-moment correlations between the family environment and child outcomes, controlling for demographic variation, were examined. Three factors described the family activity environment - parental activity involvement, opportunity for role modelling and parental support for physical activity - and explained 37.6% of the variance. Controlling for demographic factors, the scale was significantly correlated with children's health behaviour - physical activity (r=0.27), screen time (r=-0.24) and fruit and vegetable intake (r=0.34). The family activity environment questionnaire shows high internal consistency and moderate predictive ability. This study has built on previous research by taking a more comprehensive approach to measuring the family activity environment. This research suggests the family activity environment should be considered in family-based health promotion interventions. © 2011 The Authors. ANZJPH © 2011 Public Health Association of Australia.

  10. Power Grid Modelling From Wind Turbine Perspective Using Principal Component Analysis

    DEFF Research Database (Denmark)

    Farajzadehbibalan, Saber; Ramezani, Mohammad Hossein; Nielsen, Peter

    2015-01-01

In this study, we derive an eigenvector-based multivariate model of a power grid from the wind farm's standpoint using dynamic principal component analysis (DPCA). The main advantages of our model over previously developed models are that it is more realistic and has lower complexity. We show that th...

  11. Sensitivity Analysis for Urban Drainage Modeling Using Mutual Information

    Directory of Open Access Journals (Sweden)

    Chuanqi Li

    2014-11-01

Full Text Available The intention of this paper is to evaluate the sensitivity of the Storm Water Management Model (SWMM) output to its input parameters. A global parameter sensitivity analysis is conducted in order to determine which parameters most affect the model simulation results. Two different methods of sensitivity analysis are applied in this study. The first one is the partial rank correlation coefficient (PRCC), which measures nonlinear but monotonic relationships between model inputs and outputs. The second one is based on mutual information, which provides a general measure of the strength of the non-monotonic association between two variables. Both methods are based on Latin Hypercube Sampling (LHS) of the parameter space, and thus the same datasets can be used to obtain both measures of sensitivity. The utility of the PRCC and mutual information analysis methods is illustrated by analyzing a complex SWMM model. The sensitivity analysis revealed that only a few key input variables contribute significantly to the model outputs; PRCCs and mutual information are calculated and used to determine and rank the importance of these key parameters. This study shows that the partial rank correlation coefficient and mutual information analysis can be considered effective methods for assessing the sensitivity of the SWMM model to the uncertainty in its input parameters.
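The two measures can be illustrated on a toy function standing in for a SWMM run; in this hedged sketch PRCC is computed as the correlation of rank-transformed residuals and mutual information via scikit-learn, with plain random sampling replacing LHS for brevity. Note how the non-monotonic input gets a near-zero PRCC but a high mutual information score, which is exactly the contrast the abstract draws.

```python
# Hedged sketch of the two sensitivity measures named above, applied to a
# toy function in place of a SWMM run: PRCC via rank-transformed partial
# correlation, and mutual information via scikit-learn.
import numpy as np
from scipy.stats import rankdata
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(2000, 3))                # 3 "model parameters"
y = 2 * X[:, 0] + np.sin(2 * np.pi * X[:, 1]) + rng.normal(0, 0.1, 2000)

def prcc(X, y, i):
    """Partial rank correlation of parameter i with output y."""
    R = np.apply_along_axis(rankdata, 0, X)
    ry = rankdata(y)
    others = np.delete(R, i, axis=1)
    A = np.column_stack([np.ones(len(y)), others])
    res_x = R[:, i] - A @ np.linalg.lstsq(A, R[:, i], rcond=None)[0]
    res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
    return np.corrcoef(res_x, res_y)[0, 1]

mi = mutual_info_regression(X, y, random_state=0)
for i in range(3):
    # x1 is non-monotonic: expect PRCC near 0 but high mutual information
    print(f"x{i}: PRCC={prcc(X, y, i):+.2f}  MI={mi[i]:.2f}")
```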

  12. Logic flowgraph model for disturbance analysis of a PWR pressurizer system

    International Nuclear Information System (INIS)

    Guarro, S.; Okrent, D.

    1984-01-01

The Logic Flowgraph Methodology (LFM) has been developed as a synthetic simulation language for process reliability or disturbance analysis applications. A Disturbance Analysis System (DAS) using the LFM models can store the necessary information concerning a given process in an efficient way, and automatically construct in real time the diagnostic tree(s) showing the root cause(s) of occurring disturbances. A comprehensive LFM model for a PWR pressurizer system is presented and discussed, and the latest version of the LFM tree synthesis routine, optimized to reduce computer memory usage, is used to show the LFM diagnoses of selected hypothetical disturbances.

  13. Predicate Argument Structure Analysis for Use Case Description Modeling

    Science.gov (United States)

    Takeuchi, Hironori; Nakamura, Taiga; Yamaguchi, Takahira

    In a large software system development project, many documents are prepared and updated frequently. In such a situation, support is needed for looking through these documents easily to identify inconsistencies and to maintain traceability. In this research, we focus on the requirements documents such as use cases and consider how to create models from the use case descriptions in unformatted text. In the model construction, we propose a few semantic constraints based on the features of the use cases and use them for a predicate argument structure analysis to assign semantic labels to actors and actions. With this approach, we show that we can assign semantic labels without enhancing any existing general lexical resources such as case frame dictionaries and design a less language-dependent model construction architecture. By using the constructed model, we consider a system for quality analysis of the use cases and automated test case generation to keep the traceability between document sets. We evaluated the reuse of the existing use cases and generated test case steps automatically with the proposed prototype system from real-world use cases in the development of a system using a packaged application. Based on the evaluation, we show how to construct models with high precision from English and Japanese use case data. Also, we could generate good test cases for about 90% of the real use cases through the manual improvement of the descriptions based on the feedback from the quality analysis system.

  14. Sensitivity analysis practices: Strategies for model-based inference

    Energy Technology Data Exchange (ETDEWEB)

Saltelli, Andrea [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)]. E-mail: andrea.saltelli@jrc.it; Ratto, Marco [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy); Tarantola, Stefano [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy); Campolongo, Francesca [Institute for the Protection and Security of the Citizen (IPSC), European Commission, Joint Research Centre, TP 361, 21020 Ispra (VA) (Italy)

    2006-10-15

Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 (System analysis at molecular scale, by H. Rabitz) we search Science Online to identify and then review all recent articles having 'sensitivity analysis' as a keyword. In spite of the considerable developments which have taken place in this discipline, of the good practices which have emerged, and of existing guidelines for SA issued on both sides of the Atlantic, we could find in our review nothing other than very primitive SA tools, based on 'one-factor-at-a-time' (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance based measures and others, are able to overcome OAT shortcomings and are easy to implement. These methods also allow the concept of factor importance to be defined rigorously, thus making the factor-importance ranking univocal. We analyse the requirements of SA in the context of modelling, and present best available practices on the basis of an elementary model. We also point the reader to available recipes for a rigorous SA.

  15. Sensitivity analysis practices: Strategies for model-based inference

    International Nuclear Information System (INIS)

    Saltelli, Andrea; Ratto, Marco; Tarantola, Stefano; Campolongo, Francesca

    2006-01-01

Fourteen years after Science's review of sensitivity analysis (SA) methods in 1989 (System analysis at molecular scale, by H. Rabitz) we search Science Online to identify and then review all recent articles having 'sensitivity analysis' as a keyword. In spite of the considerable developments which have taken place in this discipline, of the good practices which have emerged, and of existing guidelines for SA issued on both sides of the Atlantic, we could find in our review nothing other than very primitive SA tools, based on 'one-factor-at-a-time' (OAT) approaches. In the context of model corroboration or falsification, we demonstrate that this use of OAT methods is illicit and unjustified, unless the model under analysis is proved to be linear. We show that available good practices, such as variance based measures and others, are able to overcome OAT shortcomings and are easy to implement. These methods also allow the concept of factor importance to be defined rigorously, thus making the factor-importance ranking univocal. We analyse the requirements of SA in the context of modelling, and present best available practices on the basis of an elementary model. We also point the reader to available recipes for a rigorous SA.
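A variance-based alternative to OAT takes only a few lines: the sketch below estimates first-order Sobol' indices with a pick-and-freeze (Saltelli-style) estimator on the Ishigami function, a standard sensitivity-analysis test case. This is a generic textbook setup, not the authors' code.

```python
# Hedged sketch of a variance-based measure: first-order Sobol index
# estimates via the pick-and-freeze scheme on the Ishigami function.
import numpy as np

rng = np.random.default_rng(4)
N, k = 100_000, 3

def ishigami(X, a=7.0, b=0.1):
    return (np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2
            + b * X[:, 2] ** 4 * np.sin(X[:, 0]))

A = rng.uniform(-np.pi, np.pi, (N, k))
B = rng.uniform(-np.pi, np.pi, (N, k))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]          # A with column i replaced from B
    fABi = ishigami(ABi)
    Si = np.mean(fB * (fABi - fA)) / var   # Saltelli-2010-style estimator
    print(f"S{i + 1} = {Si:.2f}")          # analytic: 0.31, 0.44, 0.00
```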

  16. ModelMate - A graphical user interface for model analysis

    Science.gov (United States)

    Banta, Edward R.

    2011-01-01

    ModelMate is a graphical user interface designed to facilitate use of model-analysis programs with models. This initial version of ModelMate supports one model-analysis program, UCODE_2005, and one model software program, MODFLOW-2005. ModelMate can be used to prepare input files for UCODE_2005, run UCODE_2005, and display analysis results. A link to the GW_Chart graphing program facilitates visual interpretation of results. ModelMate includes capabilities for organizing directories used with the parallel-processing capabilities of UCODE_2005 and for maintaining files in those directories to be identical to a set of files in a master directory. ModelMate can be used on its own or in conjunction with ModelMuse, a graphical user interface for MODFLOW-2005 and PHAST.

  17. Mathematical Model and Stability Analysis of Inverter-Based Distributed Generator

    Directory of Open Access Journals (Sweden)

    Alireza Khadem Abbasi

    2013-01-01

Full Text Available This paper presents a mathematical (small-signal) model of an electronically interfaced distributed generator (DG), considering the effect of voltage and frequency variations of the prime source. Dynamic equations are found by linearization about an operating point. In this study, the dynamics of the DC part of the interface are included in the model. The stability analysis shows that, with proper selection of system parameters, the system is stable during steady-state and dynamic situations, and oscillatory modes are well damped. The proposed model is useful for studying the stability of a standalone DG or a Microgrid.
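The linearization-plus-eigenvalue workflow the abstract relies on can be sketched generically; the two-state dynamics below are invented and are not the DG interface model, but the recipe (numerical Jacobian at an operating point, then checking that all eigenvalues have negative real part) is the same.

```python
# Hedged sketch of small-signal stability checking: numerically linearize
# a nonlinear state model about an operating point and inspect the
# eigenvalues of the Jacobian. The toy dynamics are invented.
import numpy as np

def f(x):
    """Toy nonlinear dynamics dx/dt = f(x)."""
    x1, x2 = x
    return np.array([-2.0 * x1 + x2 ** 2, -x2 + 0.5 * np.sin(x1)])

def jacobian(f, x0, eps=1e-6):
    """Central-difference Jacobian of f at x0."""
    n = len(x0)
    J = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        J[:, j] = (f(x0 + dx) - f(x0 - dx)) / (2 * eps)
    return J

x_op = np.array([0.0, 0.0])        # operating point (an equilibrium here)
eigs = np.linalg.eigvals(jacobian(f, x_op))
print("eigenvalues:", eigs)
print("stable:", np.all(eigs.real < 0))
```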

  18. Generalized linear models with random effects unified analysis via H-likelihood

    CERN Document Server

    Lee, Youngjo; Pawitan, Yudi

    2006-01-01

    Since their introduction in 1972, generalized linear models (GLMs) have proven useful in the generalization of classical normal models. Presenting methods for fitting GLMs with random effects to data, Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood explores a wide range of applications, including combining information over trials (meta-analysis), analysis of frailty models for survival data, genetic epidemiology, and analysis of spatial and temporal models with correlated errors.Written by pioneering authorities in the field, this reference provides an introduction to various theories and examines likelihood inference and GLMs. The authors show how to extend the class of GLMs while retaining as much simplicity as possible. By maximizing and deriving other quantities from h-likelihood, they also demonstrate how to use a single algorithm for all members of the class, resulting in a faster algorithm as compared to existing alternatives. Complementing theory with examples, many of...

  19. Statistical analysis tolerance using jacobian torsor model based on uncertainty propagation method

    Directory of Open Access Journals (Sweden)

    W Ghie

    2016-04-01

Full Text Available One risk inherent in the use of assembly components is that the behaviour of these components is discovered only at the moment an assembly is being carried out. The objective of our work is to enable designers to use known component tolerances as parameters in models that can be used to predict properties at the assembly level. In this paper we present a statistical approach to assemblability evaluation, based on tolerance and clearance propagations. This new statistical analysis method for tolerance is based on the Jacobian-Torsor model and the uncertainty measurement approach. We show how this can be accomplished by modeling the distribution of manufactured dimensions through applying a probability density function. By presenting an example we show how statistical tolerance analysis should be used in the Jacobian-Torsor model. This work is supported by previous efforts aimed at developing a new generation of computational tools for tolerance analysis and synthesis, using the Jacobian-Torsor approach. This approach is illustrated on a simple three-part assembly, demonstrating the method's capability in handling three-dimensional geometry.
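A hedged sketch of the statistical idea, reduced to a one-dimensional stack-up in place of the full Jacobian-Torsor formulation: manufactured dimensions are drawn from assumed normal probability density functions and propagated to the assembly clearance by Monte Carlo. The three-part stack and its tolerances are hypothetical.

```python
# Hedged sketch: propagate manufactured-dimension distributions through a
# simple one-dimensional gap equation by Monte Carlo. The stack and its
# tolerances are hypothetical, not the paper's three-part assembly.
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

# nominal dimensions with normal scatter (sigma = tolerance / 3)
part_a = rng.normal(20.0, 0.05 / 3, n)
part_b = rng.normal(30.0, 0.08 / 3, n)
part_c = rng.normal(10.0, 0.04 / 3, n)
housing = rng.normal(60.3, 0.10 / 3, n)

gap = housing - (part_a + part_b + part_c)   # assembly clearance

print(f"mean gap  : {gap.mean():.3f}")
print(f"std dev   : {gap.std():.4f}")
print(f"P(interference) = {np.mean(gap < 0):.2e}")
```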

  20. Models of Economic Analysis

    OpenAIRE

    Adrian Ioana; Tiberiu Socaciu

    2013-01-01

The article presents specific aspects of management and models for economic analysis. Thus, we present the main types of economic analysis: statistical analysis, dynamic analysis, static analysis, mathematical analysis, psychological analysis. We also present the main objects of the analysis: the technological activity analysis of a company, the analysis of the production costs, the economic activity analysis of a company, the analysis of equipment, the analysis of labor productivity, the anal...

  1. Evolution analysis of the states of the EZ model

    International Nuclear Information System (INIS)

    Qing-Hua, Chen; Yi-Ming, Ding; Hong-Guang, Dong

    2009-01-01

Based on a suitable choice of states, this paper studies the stability of the equilibrium state of the EZ model by regarding the evolution of the EZ model as a Markov chain and by showing that the Markov chain is ergodic. The Markov analysis is applied to the EZ model with a small number of agents; the exact equilibrium state for N = 5 and numerical results for N = 18 are obtained. (cross-disciplinary physics and related areas of science and technology)
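The equilibrium state of an ergodic chain can be computed as the left eigenvector of the transition matrix for eigenvalue 1; the sketch below does this for an illustrative 3-state matrix, not the EZ model's actual chain.

```python
# Hedged sketch: stationary distribution of a finite Markov chain as the
# left eigenvector of the transition matrix for eigenvalue 1. The 3-state
# matrix is illustrative only.
import numpy as np

P = np.array([[0.8, 0.15, 0.05],
              [0.2, 0.70, 0.10],
              [0.1, 0.30, 0.60]])   # rows sum to 1

eigvals, eigvecs = np.linalg.eig(P.T)
i = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, i])
pi /= pi.sum()                      # normalize to a probability vector

print("stationary distribution:", pi)
print("check pi @ P == pi:", np.allclose(pi @ P, pi))
```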

  2. A model for website analysis and conception: the Website Canvas Model applied to Eldiario.es

    Directory of Open Access Journals (Sweden)

    Carles Sanabre Vives

    2015-11-01

Full Text Available This article presents the model of ideation and analysis called the Website Canvas Model. It allows identifying the key aspects for a website to be successful, and shows how it has been applied to Eldiario.es. As a result, the key factors prompting the success of this digital newspaper have been identified.

  3. Rich analysis and rational models: Inferring individual behavior from infant looking data

    Science.gov (United States)

    Piantadosi, Steven T.; Kidd, Celeste; Aslin, Richard

    2013-01-01

Studies of infant looking times over the past 50 years have provided profound insights about cognitive development, but their dependent measures and analytic techniques are quite limited. In the context of infants' attention to discrete sequential events, we show how a Bayesian data analysis approach can be combined with a rational cognitive model to create a rich data analysis framework for infant looking times. We formalize (i) a statistical learning model, (ii) a parametric linking between the learning model's beliefs and infants' looking behavior, and (iii) a data analysis model that infers parameters of the cognitive model and linking function for groups and individuals. Using this approach, we show that recent findings from Kidd, Piantadosi, and Aslin (2012) of a U-shaped relationship between look-away probability and stimulus complexity hold even within infants and are not due to averaging subjects with different types of behavior. Our results indicate that individual infants prefer stimuli of intermediate complexity, reserving attention for events that are moderately predictable given their probabilistic expectations about the world. PMID:24750256

  4. Classifying multi-model wheat yield impact response surfaces showing sensitivity to temperature and precipitation change

    Czech Academy of Sciences Publication Activity Database

    Fronzek, S.; Pirttioja, N. K.; Carter, T. R.; Bindi, M.; Hoffmann, H.; Palosuo, T.; Ruiz-Ramos, M.; Tao, F.; Trnka, Miroslav; Acutis, M.; Asseng, S.; Baranowski, P.; Basso, B.; Bodin, P.; Buis, S.; Cammarano, D.; Deligios, P.; Destain, M. F.; Dumont, B.; Ewert, F.; Ferrise, R.; Francois, L.; Gaiser, T.; Hlavinka, Petr; Jacquemin, I.; Kersebaum, K. C.; Kollas, C.; Krzyszczak, J.; Lorite, I. J.; Minet, J.; Ines Minguez, M.; Montesino, M.; Moriondo, M.; Mueller, C.; Nendel, C.; Öztürk, I.; Perego, A.; Rodriguez, A.; Ruane, A. C.; Ruget, F.; Sanna, M.; Semenov, M. A.; Slawinski, C.; Stratonovitch, P.; Supit, I.; Waha, K.; Wang, E.; Wu, L.; Zhao, Z.; Rötter, R.

    2018-01-01

    Roč. 159, jan (2018), s. 209-224 ISSN 0308-521X Institutional support: RVO:86652079 Keywords : climate - change * crop models * probabilistic assessment * simulating impacts * british catchments * uncertainty * europe * productivity * calibration * adaptation * Classification * Climate change * Crop model * Ensemble * Sensitivity analysis * Wheat Subject RIV: GC - Agronomy OBOR OECD: Agronomy, plant breeding and plant protection Impact factor: 2.571, year: 2016

  5. Information-theoretic analysis of the dynamics of an executable biological model.

    Directory of Open Access Journals (Sweden)

    Avital Sadot

Full Text Available To facilitate analysis and understanding of biological systems, large-scale data are often integrated into models using a variety of mathematical and computational approaches. Such models describe the dynamics of the biological system and can be used to study the changes in the state of the system over time. For many model classes, such as discrete or continuous dynamical systems, there exist appropriate frameworks and tools for analyzing system dynamics. However, the heterogeneous information that encodes and bridges molecular and cellular dynamics, inherent to fine-grained molecular simulation models, presents significant challenges to the study of system dynamics. In this paper, we present an algorithmic-information-theory-based approach for the analysis and interpretation of the dynamics of such executable models of biological systems. We apply a normalized compression distance (NCD) analysis to the state representations of a model that simulates immune decision making and immune cell behavior. We show that this analysis successfully captures the essential information in the dynamics of the system, which results from a variety of events including proliferation, differentiation, or perturbations such as gene knock-outs. We demonstrate that this approach can be used for the analysis of executable models, regardless of the modeling framework, and for making experimentally quantifiable predictions.
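The NCD itself is compact enough to sketch directly with zlib as the compressor; the two "state trajectories" below are invented strings, not output of the immune model.

```python
# Hedged sketch of the normalized compression distance used above:
# NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), with zlib as
# the compressor. The two state sequences are invented.
import zlib

def c(data: bytes) -> int:
    return len(zlib.compress(data, level=9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

baseline = b"A B A B A B C A B A B" * 50   # stand-in state sequence
knockout = b"A A A A C C C A A A A" * 50   # perturbed dynamics

print(f"NCD(baseline, baseline) = {ncd(baseline, baseline):.3f}")
print(f"NCD(baseline, knockout) = {ncd(baseline, knockout):.3f}")
```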

  6. Cost-effectiveness Analysis in R Using a Multi-state Modeling Survival Analysis Framework: A Tutorial.

    Science.gov (United States)

    Williams, Claire; Lewsey, James D; Briggs, Andrew H; Mackay, Daniel F

    2017-05-01

This tutorial provides a step-by-step guide to performing cost-effectiveness analysis using a multi-state modeling approach. Alongside the tutorial, we provide easy-to-use functions in the statistics package R. We argue that this multi-state modeling approach using a package such as R has advantages over approaches where models are built in a spreadsheet package. In particular, using a syntax-based approach means there is a written record of what was done and the calculations are transparent. Reproducing the analysis is straightforward as the syntax just needs to be run again. The approach can be thought of as an alternative way to build a Markov decision-analytic model, which also has the option to use a state-arrival extended approach. In the state-arrival extended multi-state model, a covariate that represents patients' history is included, allowing the Markov property to be tested. We illustrate the building of multi-state survival models, making predictions from the models and assessing fits. We then proceed to perform a cost-effectiveness analysis, including deterministic and probabilistic sensitivity analyses. Finally, we show how to create two common visualizations of the results - namely, cost-effectiveness planes and cost-effectiveness acceptability curves. The analysis is implemented entirely within R. It is based on adaptations to functions in the existing R package mstate to accommodate parametric multi-state modeling that facilitates extrapolation of survival curves.
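As a language-neutral illustration of the final step, the Python sketch below builds a cost-effectiveness acceptability curve from synthetic probabilistic sensitivity analysis draws; in the tutorial these draws would come from the fitted multi-state models, and the R workflow itself is not reproduced here.

```python
# Hedged sketch of a cost-effectiveness acceptability curve computed from
# probabilistic sensitivity analysis draws. The incremental cost and QALY
# samples are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(6)
n = 5000
d_cost = rng.normal(12_000, 4_000, n)    # incremental cost per patient
d_qaly = rng.normal(0.8, 0.3, n)         # incremental QALYs per patient

for wtp in (5_000, 10_000, 20_000, 30_000, 50_000):
    # net monetary benefit: NMB = wtp * dQALY - dCost
    p_ce = np.mean(wtp * d_qaly - d_cost > 0)
    print(f"WTP {wtp:>6,}: P(cost-effective) = {p_ce:.2f}")
```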

  7. Optical model analysis of intermediate energy p-4He scattering

    International Nuclear Information System (INIS)

    Greben, J.M.; Gourishankar, R.

    1983-03-01

Recent Wolfenstein R-parameter data are used to explain and resolve previous problems with optical model descriptions of p-4He elastic scattering at 500 MeV. An essential component in this optical model analysis is a qualitative interpretation of different features of the elastic data in terms of the Born approximation. First we show that the R-data require the real spin-orbit potential to have certain geometrical properties which were missing in previous analyses. We then show that the fast fall-off of the cross-section at small angles, together with the rapid increase and subsequent decrease of the polarization, establishes the need for an attractive tail in the real central potential. Further properties of the potentials can also be inferred from this qualitative analysis, in particular a strong reduction of the spin-orbit potential. Our final potential reduces the χ²/datapoint by about 20 in comparison to previous potentials, and underlines the usefulness of the qualitative Born analysis.

  8. Using multi-criteria analysis of simulation models to understand complex biological systems

    Science.gov (United States)

    Maureen C. Kennedy; E. David. Ford

    2011-01-01

    Scientists frequently use computer-simulation models to help solve complex biological problems. Typically, such models are highly integrated, they produce multiple outputs, and standard methods of model analysis are ill suited for evaluating them. We show how multi-criteria optimization with Pareto optimality allows for model outputs to be compared to multiple system...

  9. Standard model for safety analysis report of fuel fabrication plants

    International Nuclear Information System (INIS)

    1980-09-01

A standard model for a safety analysis report of fuel fabrication plants is established. This model shows the presentation format, the origin, and the details of the minimal information required by CNEN (Comissao Nacional de Energia Nuclear) for evaluating requests for construction permits and operation licenses made according to the legislation in force. (E.G.) [pt

  10. Standard model for safety analysis report of fuel reprocessing plants

    International Nuclear Information System (INIS)

    1979-12-01

A standard model for a safety analysis report of fuel reprocessing plants is established. This model shows the presentation format, the origin, and the details of the minimal information required by CNEN (Comissao Nacional de Energia Nuclear) for evaluating requests for construction permits and operation licenses made according to the legislation in force. (E.G.) [pt

  11. Urban Saturated Power Load Analysis Based on a Novel Combined Forecasting Model

    Directory of Open Access Journals (Sweden)

    Huiru Zhao

    2015-03-01

Full Text Available Analysis of urban saturated power loads is helpful for coordinating urban power grid construction and economic and social development. There are two different kinds of forecasting models: the logistic curve model focuses on the growth law of the data itself, while the multi-dimensional forecasting model considers several influencing factors as the input variables. To improve forecasting performance, a novel combined forecasting model for saturated power load analysis was proposed in this paper, combining the above two models. Meanwhile, the weights of these two models in the combined forecasting model were optimized by employing a fruit fly optimization algorithm. Using Hubei Province as the example, the effectiveness of the proposed combined forecasting model was verified, demonstrating a higher forecasting accuracy. The analysis result shows that the power load of Hubei Province will reach saturation in 2039, and the annual maximum power load will reach about 78,630 MW. The results obtained from this proposed hybrid urban saturated power load analysis model can serve as a reference for sustainable development for urban power grids, regional economies, and society at large.

  12. Showing that the race model inequality is not violated

    DEFF Research Database (Denmark)

    Gondan, Matthias; Riehl, Verena; Blurton, Steven Paul

    2012-01-01

    important being race models and coactivation models. Redundancy gains consistent with the race model have an upper limit, however, which is given by the well-known race model inequality (Miller, 1982). A number of statistical tests have been proposed for testing the race model inequality in single...... participants and groups of participants. All of these tests use the race model as the null hypothesis, and rejection of the null hypothesis is considered evidence in favor of coactivation. We introduce a statistical test in which the race model prediction is the alternative hypothesis. This test controls...
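A hedged sketch of the inequality check itself: compare the empirical CDF of redundant-condition reaction times against the sum of the single-condition CDFs, F_AV(t) <= F_A(t) + F_V(t). The three RT samples are simulated, and the redundant sample is built as a race-model-like minimum, so no violation is expected here.

```python
# Hedged sketch of checking the race model inequality (Miller, 1982) on
# reaction times: F_AV(t) <= F_A(t) + F_V(t) for all t. The samples are
# simulated, not real data.
import numpy as np

rng = np.random.default_rng(7)
rt_a = rng.normal(320, 40, 400)               # single-modality condition A
rt_v = rng.normal(340, 45, 400)               # single-modality condition V
rt_av = np.minimum(rng.normal(320, 40, 400),
                   rng.normal(340, 45, 400))  # race-model-like redundant RTs

def ecdf(sample, t):
    """Empirical CDF of sample evaluated at each point in t."""
    return np.mean(sample[:, None] <= t, axis=0)

t = np.linspace(150, 500, 200)
violation = ecdf(rt_av, t) - (ecdf(rt_a, t) + ecdf(rt_v, t))

print("max violation:", violation.max())  # > 0 would suggest coactivation
```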

  13. [Simulation and data analysis of stereological modeling based on virtual slices].

    Science.gov (United States)

    Wang, Hao; Shen, Hong; Bai, Xiao-yan

    2008-05-01

    To establish a computer-assisted stereological model for simulating the process of slice section and evaluate the relationship between section surface and estimated three-dimensional structure. The model was designed by mathematic method as a win32 software based on the MFC using Microsoft visual studio as IDE for simulating the infinite process of sections and analysis of the data derived from the model. The linearity of the fitting of the model was evaluated by comparison with the traditional formula. The win32 software based on this algorithm allowed random sectioning of the particles distributed randomly in an ideal virtual cube. The stereological parameters showed very high throughput (>94.5% and 92%) in homogeneity and independence tests. The data of density, shape and size of the section were tested to conform to normal distribution. The output of the model and that from the image analysis system showed statistical correlation and consistency. The algorithm we described can be used for evaluating the stereologic parameters of the structure of tissue slices.

  14. Urban Sprawl Analysis and Modeling in Asmara, Eritrea

    Directory of Open Access Journals (Sweden)

    Mussie G. Tewolde

    2011-09-01

Full Text Available The extension of the urban perimeter markedly cuts into available productive land. Hence, studies in urban sprawl analysis and modeling play an important role in ensuring sustainable urban development. The urbanization pattern of the Greater Asmara Area (GAA), the capital of Eritrea, was studied. Satellite images and geospatial tools were employed to analyze the spatiotemporal urban landuse changes. Object-Based Image Analysis (OBIA), Landuse Cover Change (LUCC) analysis and urban sprawl analysis using Shannon Entropy were carried out. The Land Change Modeler (LCM) was used to develop a model of urban growth. The Multi-layer Perceptron Neural Network was employed to model the transition potential maps with an accuracy of 85.9%, and these were used as an input for the 'actual' urban modeling with Markov chains. Model validation was assessed and a scenario of urban land use change of the GAA up to the year 2020 was presented. The results of the study indicated that the built-up area tripled in size (an increase of 4,441 ha) between 1989 and 2009. Especially after the year 2000, urban sprawl in the GAA caused large-scale encroachment on high-potential agricultural lands and plantation cover. The scenario for the year 2020 shows an increase of the built-up areas by 1,484 ha (25%), which may cause further loss. The study indicated that the land allocation system in the GAA overrode the landuse plan, which caused the loss of agricultural land and plantation cover. The recommended policy options might support decision makers in preventing further loss of agricultural land and plantation cover and in achieving sustainable urban development planning in the GAA.
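Shannon entropy as a sprawl measure is simple to compute: H = -Σ p_i ln(p_i) over the built-up share of each zone, with values near ln(n) indicating dispersed (sprawling) growth. The zone areas below are hypothetical, not the GAA data.

```python
# Hedged sketch of Shannon entropy as a sprawl measure over built-up area
# per concentric zone. Zone areas are hypothetical.
import numpy as np

built_up_1989 = np.array([900.0, 520.0, 210.0, 80.0, 30.0])   # ha per zone
built_up_2009 = np.array([950.0, 900.0, 820.0, 760.0, 700.0])

def shannon_entropy(areas):
    p = areas / areas.sum()
    return -np.sum(p * np.log(p))

n = len(built_up_1989)
for year, areas in (("1989", built_up_1989), ("2009", built_up_2009)):
    h = shannon_entropy(areas)
    # relative entropy near 1 indicates dispersed (sprawling) growth
    print(f"{year}: H = {h:.3f}  (relative H/ln(n) = {h / np.log(n):.2f})")
```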

  15. Post-dryout heat transfer analysis model with droplet Lagrangian simulation

    International Nuclear Information System (INIS)

    Keizo Matsuura; Isao Kataoka; Kaichiro Mishima

    2005-01-01

Post-dryout heat transfer analysis was carried out considering droplet behavior by using the Lagrangian simulation method. Post-dryout heat transfer is an important heat transfer mechanism in many industrial appliances. In particular, in recent Japanese BWR licensing, the standard for assessing the integrity of fuel that has experienced boiling transition is being examined. Although post-dryout heat transfer analysis is important for predicting wall temperature, it is difficult to accurately predict the heat transfer coefficient in the post-dryout regime because of the many heat transfer paths and the non-equilibrium between droplets and vapor. Recently, an analysis model that deals with many heat transfer paths, including droplet direct contact heat transfer, was developed and its results showed good agreement with experimental results. The model also showed that heat transfer by droplets cannot be neglected at low mass flux. However, the model treats droplet deposition behavior with an experimental droplet deposition correlation, so it cannot estimate the effect of droplet flow on the turbulent flow field and heat transfer. Therefore, in this study we treat the droplets individually by using the Lagrangian simulation method and hence estimate the effect of droplet flow on the turbulent flow field. We analyzed post-dryout experiments and found that the analysis results correlated well with the measurements. (authors)

  16. An effective convolutional neural network model for Chinese sentiment analysis

    Science.gov (United States)

    Zhang, Yu; Chen, Mengdong; Liu, Lianzhong; Wang, Yadong

    2017-06-01

Microblogging is becoming more and more popular. People are increasingly accustomed to expressing their opinions on Twitter, Facebook and Sina Weibo. Sentiment analysis of microblogs has received significant attention, both in academia and in industry. So far, Chinese microblog exploration still needs much further work. In recent years CNNs have also been used for NLP tasks, and have already achieved good results. However, these methods ignore the effective use of a large number of existing sentiment resources. For this purpose, we propose a Lexicon-based Sentiment Convolutional Neural Network (LSCNN) model focused on Weibo sentiment analysis, which combines two CNNs, trained individually on sentiment features and word embeddings, at the fully connected hidden layer. The experimental results show that our model outperforms a CNN model using only word embedding features on the microblog sentiment analysis task.
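A plausible rendering of the two-branch idea in PyTorch is sketched below: one convolutional branch over word embeddings, one over lexicon-derived sentiment features, merged at a fully connected layer. Sizes are arbitrary and this is not the LSCNN architecture itself.

```python
# Hedged sketch: two convolutional branches (word embeddings and lexicon
# sentiment features) fused at a fully connected layer. Dimensions are
# arbitrary placeholders, not the paper's configuration.
import torch
import torch.nn as nn

class TwoBranchCNN(nn.Module):
    def __init__(self, emb_dim=128, lex_dim=8, n_classes=2):
        super().__init__()
        self.word_branch = nn.Sequential(
            nn.Conv1d(emb_dim, 64, kernel_size=3, padding=1),
            nn.ReLU(), nn.AdaptiveMaxPool1d(1))
        self.lex_branch = nn.Sequential(
            nn.Conv1d(lex_dim, 16, kernel_size=3, padding=1),
            nn.ReLU(), nn.AdaptiveMaxPool1d(1))
        self.fc = nn.Linear(64 + 16, n_classes)   # fusion happens here

    def forward(self, emb, lex):
        # emb: (batch, emb_dim, seq_len); lex: (batch, lex_dim, seq_len)
        h = torch.cat([self.word_branch(emb).squeeze(-1),
                       self.lex_branch(lex).squeeze(-1)], dim=1)
        return self.fc(h)

model = TwoBranchCNN()
logits = model(torch.randn(4, 128, 50), torch.randn(4, 8, 50))
print(logits.shape)   # torch.Size([4, 2])
```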

  17. Survival analysis models and applications

    CERN Document Server

    Liu, Xian

    2012-01-01

    Survival analysis concerns sequential occurrences of events governed by probabilistic laws.  Recent decades have witnessed many applications of survival analysis in various disciplines. This book introduces both classic survival models and theories along with newly developed techniques. Readers will learn how to perform analysis of survival data by following numerous empirical illustrations in SAS. Survival Analysis: Models and Applications: Presents basic techniques before leading onto some of the most advanced topics in survival analysis.Assumes only a minimal knowledge of SAS whilst enablin

  18. SDI CFD MODELING ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S.

    2011-05-05

The Savannah River Remediation (SRR) Organization requested that Savannah River National Laboratory (SRNL) develop a Computational Fluid Dynamics (CFD) method to mix and blend the miscible contents of the blend tanks to ensure the contents are properly blended before they are transferred from the blend tank, such as Tank 50H, to the Salt Waste Processing Facility (SWPF) feed tank. The work described here consists of two modeling areas: the mixing analysis during the miscible liquid blending operation, and the flow pattern analysis during the transfer operation of the blended liquid. The transient CFD governing equations, consisting of three momentum equations, one mass balance, two turbulence transport equations for kinetic energy and dissipation rate, and one species transport equation, were solved by an iterative technique until the species concentrations of the tank fluid were in equilibrium. The steady-state flow solutions for the entire tank fluid were used for flow pattern analysis, for velocity scaling analysis, and as the initial conditions for transient blending calculations. A series of modeling calculations was performed to estimate the blending times for various jet flow conditions, and to investigate the impact of the cooling coils on the blending time of the tank contents. The modeling results were benchmarked against pilot-scale test results. All of the flow and mixing models were run with the nozzles installed at mid-elevation and parallel to the tank wall. From the CFD modeling calculations, the main results are summarized as follows: (1) The benchmark analyses for the CFD flow velocity and blending models demonstrate their consistency with Engineering Development Laboratory (EDL) and literature test results in terms of local velocity measurements and experimental observations. Thus, an application of the established criterion to the SRS full-scale tank will provide a better, physically-based estimate of the required mixing time, and

  19. Practical Soil-Shallow Foundation Model for Nonlinear Structural Analysis

    Directory of Open Access Journals (Sweden)

    Moussa Leblouba

    2016-01-01

Full Text Available Soil-shallow foundation interaction models that are incorporated into most structural analysis programs generally lack accuracy and efficiency or neglect some aspects of foundation behavior. For instance, soil-shallow foundation systems have been observed to show both small and large loops under increasing-amplitude load reversals. This paper presents a practical macroelement model of the soil-shallow foundation system and its stability under simultaneous horizontal and vertical loads. The model comprises three spring elements: nonlinear horizontal, nonlinear rotational, and linear vertical springs. The proposed macroelement model was verified using experimental test results from large-scale model foundations subjected to small and large cyclic loading cases.

  20. Static aeroelastic analysis including geometric nonlinearities based on reduced order model

    Directory of Open Access Journals (Sweden)

    Changchuan Xie

    2017-04-01

Full Text Available This paper describes a method for modeling large deflections of aircraft in nonlinear aeroelastic analysis by developing a reduced order model (ROM). The method is applied to solving the static aeroelastic and static aeroelastic trim problems of flexible aircraft containing geometric nonlinearities; meanwhile, the non-planar effects of aerodynamics and the follower force effect have been considered. ROMs are computationally inexpensive mathematical representations compared to the traditional nonlinear finite element method (FEM), especially in aeroelastic solutions. The approach to structure modeling presented here is based on the combined modal/finite element (MFE) method, which characterizes the stiffness nonlinearities, and we apply that structure modeling method as the ROM in aeroelastic analysis. Moreover, the non-planar aerodynamic force is computed by the non-planar vortex lattice method (VLM). Structure and aerodynamics can be coupled with the surface spline method. The results show that both the static aeroelastic analysis and the trim analysis of aircraft based on the structure ROM achieve good agreement with analysis based on the FEM and with experimental results.

  1. Sensitivity analysis of an Advanced Gas-cooled Reactor control rod model

    International Nuclear Information System (INIS)

    Scott, M.; Green, P.L.; O’Driscoll, D.; Worden, K.; Sims, N.D.

    2016-01-01

    Highlights: • A model was made of the AGR control rod mechanism. • The aim was to better understand the performance when shutting down the reactor. • The model showed good agreement with test data. • Sensitivity analysis was carried out. • The results demonstrated the robustness of the system. - Abstract: A model has been made of the primary shutdown system of an Advanced Gas-cooled Reactor nuclear power station. The aim of this paper is to explore the use of sensitivity analysis techniques on this model. The two motivations for performing sensitivity analysis are to quantify how much individual uncertain parameters are responsible for the model output uncertainty, and to make predictions about what could happen if one or several parameters were to change. Global sensitivity analysis techniques were used based on Gaussian process emulation; the software package GEM-SA was used to calculate the main effects, the main effect index and the total sensitivity index for each parameter and these were compared to local sensitivity analysis results. The results suggest that the system performance is resistant to adverse changes in several parameters at once.

  2. [Sensitivity analysis of AnnAGNPS model's hydrology and water quality parameters based on the perturbation analysis method].

    Science.gov (United States)

    Xi, Qing; Li, Zhao-Fu; Luo, Chuan

    2014-05-01

Sensitivity analysis of hydrology and water quality parameters has great significance for an integrated model's construction and application. Based on the AnnAGNPS model's mechanism, 31 parameters in four major categories - terrain, hydrology and meteorology, field management, and soil - were selected for sensitivity analysis in the Zhongtian river watershed, a typical small watershed of the hilly region around Taihu Lake; the perturbation method was then used to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that: among the 11 terrain parameters, LS was sensitive to all the model results, while RMN, RS and RVC were generally sensitive and less sensitive to the sediment output but insensitive to the remaining results. For hydrometeorological parameters, CN was more sensitive to runoff and sediment and relatively sensitive to the remaining results. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were less sensitive to sediment and particulate pollutants, and the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. For soil parameters, K was quite sensitive to all the results except runoff, while the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were less sensitive to the corresponding results. The simulation and verification results of runoff in the Zhongtian watershed show good accuracy, with deviations of less than 10% during 2005-2010. The research results provide a direct reference for the AnnAGNPS model's parameter selection and calibration. The runoff simulation results of the study area also proved that the sensitivity analysis was practicable for parameter adjustment, showed the model's adaptability to hydrological simulation in the Taihu Lake basin's hilly region, and provide a reference for the model's wider application in China.
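The perturbation method reduces to a relative-sensitivity index S = (ΔY/Y)/(ΔX/X); the sketch below applies it to a toy response function standing in for an AnnAGNPS run, with invented parameter names.

```python
# Hedged sketch of the perturbation method: perturb one parameter by a
# small fraction, rerun the model, and report the relative-sensitivity
# index S = (dY/Y) / (dX/X). The "model" is a stand-in, not AnnAGNPS.
def model(cn, k, ls):
    """Toy runoff-like response in place of an AnnAGNPS simulation."""
    return 0.02 * cn ** 1.8 + 5.0 * k + 1.2 * ls

base = {"cn": 75.0, "k": 0.3, "ls": 2.0}
y0 = model(**base)

for name in base:
    perturbed = dict(base)
    perturbed[name] *= 1.10                      # +10% perturbation
    dy_rel = (model(**perturbed) - y0) / y0
    print(f"{name:3s}: S = {dy_rel / 0.10:+.2f}")
```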

  3. Multiscale Signal Analysis and Modeling

    CERN Document Server

    Zayed, Ahmed

    2013-01-01

    Multiscale Signal Analysis and Modeling presents recent advances in multiscale analysis and modeling using wavelets and other systems. This book also presents applications in digital signal processing using sampling theory and techniques from various function spaces, filter design, feature extraction and classification, signal and image representation/transmission, coding, nonparametric statistical signal processing, and statistical learning theory. This book also: Discusses recently developed signal modeling techniques, such as the multiscale method for complex time series modeling, multiscale positive density estimations, Bayesian Shrinkage Strategies, and algorithms for data adaptive statistics Introduces new sampling algorithms for multidimensional signal processing Provides comprehensive coverage of wavelets with presentations on waveform design and modeling, wavelet analysis of ECG signals and wavelet filters Reviews features extraction and classification algorithms for multiscale signal and image proce...

  4. Modeling and analysis of advanced binary cycles

    Energy Technology Data Exchange (ETDEWEB)

    Gawlik, K.

    1997-12-31

A computer model (Cycle Analysis Simulation Tool, CAST) and a methodology have been developed to perform value analysis for small, low- to moderate-temperature binary geothermal power plants. The value analysis method allows for incremental changes in the levelized electricity cost (LEC) to be determined between a baseline plant and a modified plant. Thermodynamic cycle analyses and component sizing are carried out in the model followed by economic analysis which provides LEC results. The emphasis of the present work is on evaluating the effect of mixed working fluids instead of pure fluids on the LEC of a geothermal binary plant that uses a simple Organic Rankine Cycle. Four resources were studied spanning the range of 265°F to 375°F. A variety of isobutane and propane based mixtures, in addition to pure fluids, were used as working fluids. This study shows that the use of propane mixtures at a 265°F resource can reduce the LEC by 24% when compared to a base case value that utilizes commercial isobutane as its working fluid. The cost savings drop to 6% for a 375°F resource, where an isobutane mixture is favored. Supercritical cycles were found to have the lowest cost at all resources.

  5. Static analysis of a Model of the LDL degradation pathway

    DEFF Research Database (Denmark)

    Pilegaard, Henrik; Nielson, Flemming; Nielson, Hanne Riis

    2005-01-01

BioAmbients is a derivative of mobile ambients that has shown promise of describing interesting features of the behaviour of biological systems. As for other ambient calculi, static program analysis can be used to compute safe approximations of the behavior of modelled systems. We use these tools ... to model and analyse the production of cholesterol in living cells and show that we are able to pinpoint the difference in behaviour between models of healthy systems and models of mutated systems giving rise to known diseases....

  6. Modeling Patient No-Show History and Predicting Future Outpatient Appointment Behavior in the Veterans Health Administration.

    Science.gov (United States)

    Goffman, Rachel M; Harris, Shannon L; May, Jerrold H; Milicevic, Aleksandra S; Monte, Robert J; Myaskovsky, Larissa; Rodriguez, Keri L; Tjader, Youxu C; Vargas, Dominic L

    2017-05-01

    Missed appointments reduce the efficiency of the health care system and negatively impact access to care for all patients. Identifying patients at risk for missing an appointment could help health care systems and providers better target interventions to reduce patient no-shows. Our aim was to develop and test a predictive model that identifies patients that have a high probability of missing their outpatient appointments. Demographic information, appointment characteristics, and attendance history were drawn from the existing data sets from four Veterans Affairs health care facilities within six separate service areas. Past attendance behavior was modeled using an empirical Markov model based on up to 10 previous appointments. Using logistic regression, we developed 24 unique predictive models. We implemented the models and tested an intervention strategy using live reminder calls placed 24, 48, and 72 hours ahead of time. The pilot study targeted 1,754 high-risk patients, whose probability of missing an appointment was predicted to be at least 0.2. Our results indicate that three variables were consistently related to a patient's no-show probability in all 24 models: past attendance behavior, the age of the appointment, and having multiple appointments scheduled on that day. After the intervention was implemented, the no-show rate in the pilot group was reduced from the expected value of 35% to 12.16% (p value < 0.0001). The predictive model accurately identified patients who were more likely to miss their appointments. Applying the model in practice enables clinics to apply more intensive intervention measures to high-risk patients. Reprint & Copyright © 2017 Association of Military Surgeons of the U.S.
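A hedged sketch of a model in this spirit: logistic regression on a past-attendance rate, appointment lead time, and a multiple-appointments flag, with synthetic data and a 0.2 risk threshold mirroring the pilot's cutoff. The actual VA models are not reproduced.

```python
# Hedged sketch: logistic regression on synthetic features echoing the
# three consistently predictive variables named above (attendance history,
# appointment age, multiple appointments that day). Data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
n = 5000
past_no_show_rate = rng.beta(2, 8, n)          # history over <= 10 visits
lead_time_days = rng.integers(1, 120, n)
multi_appt_day = rng.integers(0, 2, n)

# hypothetical data-generating process for illustration only
logit = (-2.0 + 3.0 * past_no_show_rate + 0.01 * lead_time_days
         - 0.4 * multi_appt_day)
no_show = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([past_no_show_rate, lead_time_days, multi_appt_day])
X_tr, X_te, y_tr, y_te = train_test_split(X, no_show, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

risk = clf.predict_proba(X_te)[:, 1]
print("patients flagged high-risk (p >= 0.2):", int(np.sum(risk >= 0.2)))
```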

  7. Sensitivity Analysis of a Riparian Vegetation Growth Model

    Directory of Open Access Journals (Sweden)

    Michael Nones

    2016-11-01

Full Text Available The paper presents a sensitivity analysis of two main parameters used in a mathematical model able to evaluate the effects of changing hydrology on the growth of riparian vegetation along rivers and its effects on the cross-section width. Due to a lack of data in the existing literature, in a past study the schematization proposed here was applied only to two large rivers, assuming steady conditions for the vegetational carrying capacity and coupling the vegetation model with a 1D description of the river morphology. In this paper, the limitation set by steady conditions is overcome by making the vegetation evolution dependent upon the initial plant population and the growth rate, which represents the potential growth of the overall vegetation along the watercourse. The sensitivity analysis shows that, regardless of the initial population density, the growth rate can be considered the main parameter defining the development of riparian vegetation, but its effects are site-specific, with significant differences between large and small rivers. Despite the numerous simplifications adopted and the small database analyzed, the comparison between measured and computed river widths shows quite good capability of the model in representing the typical interactions between riparian vegetation and water flow occurring along watercourses. After thorough calibration, the relatively simple structure of the code permits further developments and applications to a wide range of alluvial rivers.
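The growth law implied by "initial population plus growth rate toward a carrying capacity" is commonly written as logistic growth, dP/dt = rP(1 - P/K); the sketch below integrates it with a simple Euler scheme under illustrative parameter values.

```python
# Hedged sketch: logistic growth of vegetation density P toward carrying
# capacity K, dP/dt = r * P * (1 - P/K), integrated with a simple Euler
# scheme. Parameter values are illustrative, not from the paper.
r = 0.8          # growth rate [1/yr], the key parameter in the analysis
K = 1.0          # (normalized) vegetational carrying capacity
P = 0.05         # initial plant population density
dt, years = 0.01, 15

steps = int(years / dt)
for step in range(1, steps + 1):
    P += dt * r * P * (1.0 - P / K)
    if step % 500 == 0:                     # every 5 simulated years
        print(f"t = {step * dt:4.1f} yr  P = {P:.3f}")
```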

  8. Development and analysis of a twelfth degree and order gravity model for Mars

    Science.gov (United States)

    Christensen, E. J.; Balmino, G.

    1979-01-01

    Satellite geodesy techniques previously applied to artificial earth satellites have been extended to obtain a high-resolution gravity field for Mars. Two-way Doppler data collected by 10 Deep Space Network (DSN) stations during Mariner 9 and Viking 1 and 2 missions have been processed to obtain a twelfth degree and order spherical harmonic model for the martian gravitational potential. The quality of this model was evaluated by examining the rms residuals within the fit and the ability of the model to predict the spacecraft state beyond the fit. Both indicators show that more data and higher degree and order harmonics will be required to further refine our knowledge of the martian gravity field. The model presented shows much promise, since it resolves local gravity features which correlate highly with the martian topography. An isostatic analysis based on this model, as well as an error analysis, shows rather complete compensation on a global (long wavelength) scale. Though further model refinements are necessary to be certain, local (short wavelength) features such as the shield volcanos in Tharsis appear to be uncompensated. These are interpreted to place some bounds on the internal structure of Mars.

  9. Rotor-Flying Manipulator: Modeling, Analysis, and Control

    Directory of Open Access Journals (Sweden)

    Bin Yang

    2014-01-01

Full Text Available Equipping multijoint manipulators on a mobile robot is a typical redesign scheme to make the latter able to actively influence its surroundings, and it has been extensively used for many ground robots, underwater robots, and space robotic systems. However, such a redesign is difficult for the rotor-flying robot (RFR). This is mainly because the motion of the manipulator introduces heavy coupling between itself and the RFR system, which makes the system model highly complicated and the controller design difficult. Thus, in this paper, the modeling, analysis, and control of the combined system, called the rotor-flying multijoint manipulator (RF-MJM), are conducted. Firstly, the detailed dynamics model is constructed and analyzed. Subsequently, a full-state feedback linear quadratic regulator (LQR) controller is designed by obtaining a linearized model near the steady state. Finally, simulations are conducted and the results are analyzed to show the basic control performance.
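The LQR design step can be sketched generically: solve the continuous-time algebraic Riccati equation and form the gain K = R⁻¹BᵀP. The double-integrator matrices below are a toy stand-in, not the RF-MJM dynamics.

```python
# Hedged sketch of full-state-feedback LQR design: solve the continuous
# algebraic Riccati equation for a toy linearized model and check that
# the closed loop is stable. Matrices are placeholders.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])       # linearized state matrix (double integrator)
B = np.array([[0.0],
              [1.0]])            # input matrix
Q = np.diag([10.0, 1.0])         # state weighting
R = np.array([[0.1]])            # input weighting

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)  # LQR gain: u = -K x

eigs = np.linalg.eigvals(A - B @ K)
print("gain K:", K)
print("closed-loop eigenvalues:", eigs)   # should all have Re < 0
```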

  10. Multi-model analysis of terrestrial carbon cycles in Japan: reducing uncertainties in model outputs among different terrestrial biosphere models using flux observations

    Science.gov (United States)

    Ichii, K.; Suzuki, T.; Kato, T.; Ito, A.; Hajima, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.; Ohtani, Y.; Takagi, K.

    2009-08-01

    Terrestrial biosphere models show large uncertainties when simulating carbon and water cycles, and reducing these uncertainties is a priority for developing more accurate estimates of both terrestrial ecosystem statuses and future climate changes. To reduce uncertainties and improve the understanding of these carbon budgets, we investigated the ability of flux datasets to improve model simulations and reduce variabilities among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine-based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and an improved model (based on calibration using flux observations). Generally, models using default model settings showed large deviations in model outputs from observation with large model-by-model variability. However, after we calibrated the model parameters using flux observations (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs, and model calibration using flux observations significantly improved the model outputs. These results show that to reduce uncertainties among terrestrial biosphere models, we need to conduct careful validation and calibration with available flux observations. Flux observation data significantly improved terrestrial biosphere models, not only on a point scale but also on spatial scales.

  11. Dynamic sensitivity analysis of long running landslide models through basis set expansion and meta-modelling

    Science.gov (United States)

    Rohmer, Jeremy

    2016-04-01

Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. Global sensitivity analysis requires running the landslide model a large number of times (> 1000), which may become impracticable when the landslide model has a high computational cost (> several hours); 2. Landslide model outputs are not scalar, but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them being interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model by a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long-running simulations. In particular, I identify the parameters which trigger the occurrence of a turning point marking a shift between a regime of low landslide displacement values and one of high values.
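A hedged sketch of the "basis set expansion + meta-model" pipeline on a toy time-dependent model: compress the output time series with PCA, then fit a cheap surrogate per principal component; the Sobol' index step and the landslide model itself are omitted.

```python
# Hedged sketch: run a toy time-dependent model a few hundred times,
# compress the output time series with PCA, and fit a cheap regression
# surrogate per principal component. Not the La Frasse model.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(9)
n_runs, n_times = 300, 200
t = np.linspace(0, 1, n_times)

theta = rng.uniform(0, 1, size=(n_runs, 2))       # 2 uncertain parameters
# toy "displacement" time series driven by the parameters
Y = (theta[:, [0]] * t + np.sin(2 * np.pi * theta[:, [1]] * t)
     + rng.normal(0, 0.01, (n_runs, n_times)))

pca = PCA(n_components=3).fit(Y)
scores = pca.transform(Y)                         # runs x components
print("explained variance:", pca.explained_variance_ratio_.round(3))

surrogates = [RandomForestRegressor(random_state=0).fit(theta, scores[:, j])
              for j in range(3)]
# each surrogate predicts one temporal mode from the parameters, cheaply
# enough for Monte Carlo sensitivity estimation downstream
print("surrogate R^2 per mode:",
      [round(s.score(theta, scores[:, j]), 2)
       for j, s in enumerate(surrogates)])
```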

  12. KIC 8164262: a heartbeat star showing tidally induced pulsations with resonant locking

    Science.gov (United States)

    Hambleton, K.; Fuller, J.; Thompson, S.; Prša, A.; Kurtz, D. W.; Shporer, A.; Isaacson, H.; Howard, A. W.; Endl, M.; Cochran, W.; Murphy, S. J.

    2018-02-01

    We present the analysis of KIC 8164262, a heartbeat star with a high-amplitude (∼1 mmag), tidally resonant pulsation (a mode in resonance with the orbit) at 229 times the orbital frequency and a plethora of tidally induced g-mode pulsations (modes excited by the orbit). The analysis combines Kepler light curves with follow-up spectroscopic data from the Keck telescope, the KPNO (Kitt Peak National Observatory) 4-m Mayall telescope and the 2.7-m telescope at McDonald Observatory. We apply the binary modelling software PHOEBE to the Kepler light curve and radial velocity data to determine a detailed binary star model that includes the prominent pulsation and Doppler boosting, alongside the usual attributes of a binary star model (including tidal distortion and reflection). The results show that the system contains a slightly evolved F star with an M-type secondary companion in a highly eccentric orbit (e = 0.886). We use the results of the binary star model in a companion paper (Fuller), where we show that the prominent pulsation can be explained by a tidally excited oscillation mode held near resonance by a resonance-locking mechanism.
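
    For readers unfamiliar with such highly eccentric orbits, the sketch below computes a Keplerian radial-velocity curve by Newton iteration at e = 0.886; the period, semi-amplitude and argument of periastron used here are illustrative placeholders, not the fitted PHOEBE solution.

        import numpy as np

        def radial_velocity(t, P, e, K, omega, t0=0.0):
            M = 2.0 * np.pi * (t - t0) / P                     # mean anomaly
            E = M.copy()
            for _ in range(50):                                # Newton solve of Kepler's equation
                E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
            nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                                  np.sqrt(1 - e) * np.cos(E / 2))  # true anomaly
            return K * (np.cos(nu + omega) + e * np.cos(omega))

        t = np.linspace(0.0, 90.0, 1000)                       # days (placeholder period)
        rv = radial_velocity(t, P=90.0, e=0.886, K=25.0, omega=1.5)
        print(rv.min(), rv.max())                              # sharp spike near periastron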

  13. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    Science.gov (United States)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g., the parameters) have on the model output (e.g., simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models of increasing complexity (Hymod, HBV and SWAT) and tested three widely used sensitivity analysis methods (the Elementary Effects Test, or method of Morris; Regional Sensitivity Analysis; and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: convergence of the values of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the values of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical sample sizes reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
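
    A minimal sketch of the bootstrap convergence check described above, applied to standardised regression coefficients of a toy model (the functions, sample sizes and tolerance are illustrative assumptions): the maximum bootstrap confidence-interval width shrinks as the sample size grows and can serve as a convergence criterion.

        import numpy as np

        rng = np.random.default_rng(1)

        def model(X):                                  # toy stand-in for the simulator
            return X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * X[:, 2]

        def src_indices(X, y):
            # Absolute standardised regression coefficients as a cheap sensitivity measure.
            design = np.column_stack([np.ones(len(X)), X])
            b = np.linalg.lstsq(design, y, rcond=None)[0]
            return np.abs(b[1:]) * X.std(axis=0) / y.std()

        for n in (100, 400, 1600):
            X = rng.uniform(0.0, 1.0, size=(n, 3))
            y = model(X)
            # 200 bootstrap resamples of the sensitivity indices at this sample size.
            boot = np.array([src_indices(X[idx], y[idx])
                             for idx in rng.integers(0, n, size=(200, n))])
            width = np.percentile(boot, 97.5, axis=0) - np.percentile(boot, 2.5, axis=0)
            print(n, "max 95% CI width:", width.max())  # converged once below a tolerance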

  14. Therapeutic Implications from Sensitivity Analysis of Tumor Angiogenesis Models

    Science.gov (United States)

    Poleszczuk, Jan; Hahnfeldt, Philip; Enderling, Heiko

    2015-01-01

    Anti-angiogenic cancer treatments induce tumor starvation and regression by targeting the tumor vasculature that delivers oxygen and nutrients. Mathematical models prove to be valuable tools for studying the proof-of-concept, efficacy and underlying mechanisms of such treatment approaches. The effects of parameter value uncertainties for two models of tumor development under angiogenic signaling and anti-angiogenic treatment are studied. Data fitting is performed to compare the predictions of both models and to obtain nominal parameter values for sensitivity analysis. Sensitivity analysis reveals that the success of different cancer treatments depends on tumor size and tumor-intrinsic parameters. In particular, we show that tumors with ample vascular support can be successfully targeted with conventional cytotoxic treatments. On the other hand, tumors with curtailed vascular support are not limited by their growth rate, and therefore interruption of neovascularization emerges as the most promising treatment target. PMID:25785600

  15. Comparative analysis of calculation models of railway subgrade

    Directory of Open Access Journals (Sweden)

    I.O. Sviatko

    2013-08-01

    Full Text Available Purpose. In the design of transport engineering structures, the primary task is to determine the parameters of the foundation soil and the nuances of its behaviour under loads. When calculating the interaction between the soil subgrade and the upper track structure, it is very important to determine the shear resistance parameters and the parameters governing the development of deep deformations in foundation soils. The purpose is to find generalized numerical methods for modelling the behaviour of embankment foundation soil that cover not only the analysis of the foundation's stress state but also of its deformed state. Methodology. An analysis of existing modern and classical methods for the numerical simulation of soil samples under static load was made. Findings. With traditional methods of analysing the behaviour of soil masses, limiting and qualitatively estimating subgrade deformations is possible only indirectly, through estimating stresses and comparing the obtained values with boundary values. Originality. A new computational model was proposed that applies not only the classical analysis of the soil subgrade stress state but also takes its deformed state into account. Practical value. The analysis showed that an accurate analysis of the behaviour of soil masses requires a generalized methodology for analysing the rolling stock - railway subgrade interaction, one that uses not only the classical approach of analysing the soil subgrade stress state but also takes its deformed state into account.

  16. Numeric-modeling sensitivity analysis of the performance of wind turbine arrays

    Energy Technology Data Exchange (ETDEWEB)

    Lissaman, P.B.S.; Gyatt, G.W.; Zalay, A.D.

    1982-06-01

    An evaluation of the numerical model created by Lissaman for predicting the performance of wind turbine arrays has been made. Model predictions of the wake parameters have been compared with both full-scale and wind tunnel measurements. Only limited, full-scale data were available, while wind tunnel studies showed difficulties in representing real meteorological conditions. Nevertheless, several modifications and additions have been made to the model using both theoretical and empirical techniques and the new model shows good correlation with experiment. The larger wake growth rate and shorter near wake length predicted by the new model lead to reduced interference effects on downstream turbines and hence greater array efficiencies. The array model has also been re-examined and now incorporates the ability to show the effects of real meteorological conditions such as variations in wind speed and unsteady winds. The resulting computer code has been run to show the sensitivity of array performance to meteorological, machine, and array parameters. Ambient turbulence and windwise spacing are shown to dominate, while hub height ratio is seen to be relatively unimportant. Finally, a detailed analysis of the Goodnoe Hills wind farm in Washington has been made to show how power output can be expected to vary with ambient turbulence, wind speed, and wind direction.

  17. Hypersonic - Model Analysis as a Service

    DEFF Research Database (Denmark)

    Acretoaie, Vlad; Störrle, Harald

    2014-01-01

    Hypersonic is a Cloud-based tool that proposes a new approach to the deployment of model analysis facilities. It is implemented as a RESTful Web service API offering analysis features such as model clone detection. This approach allows the migration of resource intensive analysis algorithms from...

  18. Urine proteome analysis in Dent's disease shows high selective changes potentially involved in chronic renal damage.

    Science.gov (United States)

    Santucci, Laura; Candiano, Giovanni; Anglani, Franca; Bruschi, Maurizio; Tosetto, Enrica; Cremasco, Daniela; Murer, Luisa; D'Ambrosio, Chiara; Scaloni, Andrea; Petretto, Andrea; Caridi, Gianluca; Rossi, Roberta; Bonanni, Alice; Ghiggeri, Gian Marco

    2016-01-01

    Definition of the urinary protein composition would represent a potential diagnostic tool in many clinical conditions. The use of new proteomic technologies allows detection of genetic and post-translational variants, which increases the sensitivity of the approach but complicates comparison within a heterogeneous patient population. Overall, this limits research into urinary biomarkers. Monogenic diseases are useful models to address this issue, since genetic variability is reduced among first- and second-degree relatives of the same family. We applied this concept to Dent's disease, a monogenic condition characterised by low-molecular-weight proteinuria that is inherited as an X-linked trait. Results are presented here on a combined proteomic approach (LC-mass spectrometry, Western blot and zymograms for proteases and inhibitors) to characterise urine proteins in a large family (18 members: 6 hemizygous patients, 6 carrier females, and 6 normals) with Dent's disease due to the 1070G>T mutation of CLCN5. Gene ontology analysis on more than 1,000 proteins showed that several clusters of proteins characterised the urine of affected patients compared to carrier females and normal subjects: proteins involved in extracellular matrix remodelling were the major group. Specific analysis of metalloproteases and their inhibitors underscored unexpected mechanisms potentially involved in renal fibrosis. Studying the members of a large family with Dent's disease sharing the same molecular defect with new-generation proteomic techniques yielded highly reproducible results that justify the conclusions. The identification in urine of proteins actively involved in interstitial matrix remodelling poses the question of active anti-fibrotic drugs in Dent's patients. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Sensitivity analysis approaches applied to systems biology models.

    Science.gov (United States)

    Zi, Z

    2011-11-01

    With the rising application of systems biology, sensitivity analysis methods have been widely applied to study biological systems, including metabolic networks, signalling pathways and genetic circuits. Sensitivity analysis can provide valuable insights about how robust biological responses are with respect to changes in biological parameters and which model inputs are the key factors affecting the model outputs. In addition, sensitivity analysis is valuable for guiding experimental analysis, model reduction and parameter estimation. Local and global sensitivity analysis approaches are the two types of sensitivity analysis commonly applied in systems biology. Local sensitivity analysis is a classic method that studies the impact of small perturbations on the model outputs. On the other hand, global sensitivity analysis approaches have been applied to understand how the model outputs are affected by large variations of the model input parameters. In this review, the author introduces the basic concepts of sensitivity analysis approaches applied to systems biology models. Moreover, the author discusses the advantages and disadvantages of different sensitivity analysis methods, how to choose a proper sensitivity analysis approach, the available sensitivity analysis tools for systems biology models, and the caveats in the interpretation of sensitivity analysis results.

  20. Human eyeball model reconstruction and quantitative analysis.

    Science.gov (United States)

    Xing, Qi; Wei, Qi

    2014-01-01

    Determining the shape of the eyeball is important for diagnosing eyeball diseases like myopia. In this paper, we present an automatic approach to precisely reconstruct the three-dimensional geometric shape of the eyeball from MR images. The model development pipeline involved image segmentation, registration, B-spline surface fitting and subdivision surface fitting, none of which required manual interaction. From the high-resolution resultant models, geometric characteristics of the eyeball can be accurately quantified and analyzed. In addition to the eight metrics commonly used by existing studies, we proposed two novel metrics, Gaussian curvature analysis and sphere distance deviation, to quantify the cornea shape and the whole eyeball surface, respectively. The experimental results showed that the reconstructed eyeball models accurately represent the complex morphology of the eye. The ten metrics parameterize the eyeball among different subjects, which can potentially be used for eye disease diagnosis.

  1. Integrating Household Risk Mitigation Behavior in Flood Risk Analysis: An Agent-Based Model Approach.

    Science.gov (United States)

    Haer, Toon; Botzen, W J Wouter; de Moel, Hans; Aerts, Jeroen C J H

    2017-10-01

    Recent studies showed that climate change and socioeconomic trends are expected to increase flood risks in many regions. However, in these studies, human behavior is commonly assumed to be constant, which neglects interaction and feedback loops between human and environmental systems. This neglect of human adaptation leads to a misrepresentation of flood risk. This article presents an agent-based model that incorporates human decision making in flood risk analysis. In particular, household investments in loss-reducing measures are examined under three economic decision models: (1) expected utility theory, which is the traditional economic model of rational agents; (2) prospect theory, which takes account of bounded rationality; and (3) a prospect theory model that accounts for changing risk perceptions and social interactions through a process of Bayesian updating. We show that neglecting human behavior in flood risk assessment studies can result in a considerable misestimation of future flood risk, which in our case study is an overestimation by a factor of two. Furthermore, we show how behavior models can support flood risk analysis under different behavioral assumptions, illustrating the need to include the dynamic adaptive human behavior of, for instance, households, insurers, and governments. The method presented here provides a solid basis for exploring human behavior and the resulting flood risk with respect to low-probability/high-impact risks. © 2016 The Authors Risk Analysis published by Wiley Periodicals, Inc. on behalf of Society for Risk Analysis.
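
    The sketch below contrasts the first two decision rules for a single household deciding whether to invest in a loss-reducing measure; the functions are textbook forms (logarithmic utility; Kahneman-Tversky value and probability-weighting functions with typical literature parameters), and all monetary values are illustrative, not those of the case study.

        import math

        p_flood, damage, cost, reduction = 0.04, 60_000.0, 2_000.0, 0.6

        def invests_under_expected_utility(w=100_000.0):
            u = math.log                              # simple risk-averse utility
            eu_no = p_flood * u(w - damage) + (1 - p_flood) * u(w)
            eu_yes = (p_flood * u(w - cost - damage * (1 - reduction))
                      + (1 - p_flood) * u(w - cost))
            return eu_yes > eu_no

        def invests_under_prospect_theory(alpha=0.88, lam=2.25, gamma=0.69):
            v = lambda loss: -lam * loss ** alpha     # value of a monetary loss
            w = lambda p: p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)
            pt_no = w(p_flood) * v(damage)
            pt_yes = w(p_flood) * v(damage * (1 - reduction)) + v(cost)  # cost is certain
            return pt_yes > pt_no

        print("EU household invests:", invests_under_expected_utility())
        print("PT household invests:", invests_under_prospect_theory())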

  2. Thermodynamic analysis of regulation in metabolic networks using constraint-based modeling

    Directory of Open Access Journals (Sweden)

    Mahadevan Radhakrishnan

    2010-05-01

    Full Text Available Abstract Background Geobacter sulfurreducens is a member of the Geobacter species, which are capable of oxidizing organic waste coupled to the reduction of heavy metals and electrodes, with applications in bioremediation and bioenergy generation. While the metabolism of this organism has been studied through the development of a stoichiometry-based genome-scale metabolic model, the associated regulatory network has not yet been well studied. In this manuscript, we report on the implementation of a thermodynamics-based metabolic flux model for Geobacter sulfurreducens. We use this updated model to identify reactions that are subject to regulatory control in the metabolic network of G. sulfurreducens using thermodynamic variability analysis. Findings As a first step, we validated the regulatory sites and bottleneck reactions predicted by the thermodynamic flux analysis in E. coli by evaluating the expression ranges of the corresponding genes. We then identified ten reactions in the metabolic network of G. sulfurreducens that are predicted to be candidates for regulation. We then compared the free energy ranges for these reactions with the corresponding gene expression fold changes under conditions of different environmental and genetic perturbations and show that the model predictions of regulation are consistent with the data. In addition, we identify reactions that operate close to equilibrium and show that the experimentally determined exchange coefficient (a measure of reversibility) is significant for these reactions. Conclusions Application of the thermodynamic constraints resulted in the identification of potential bottleneck reactions not only from central metabolism but also from the nucleotide and amino acid subsystems, thereby showing the highly coupled nature of the thermodynamic constraints. In addition, thermodynamic variability analysis serves as a valuable tool for estimating the range of ΔrG' of every reaction in the model.

  3. Nonlinear dynamic mechanism of vocal tremor from voice analysis and model simulations

    Science.gov (United States)

    Zhang, Yu; Jiang, Jack J.

    2008-09-01

    Nonlinear dynamic analysis and model simulations are used to study the nonlinear dynamic characteristics of vocal folds with vocal tremor, which can typically be characterized by low-frequency modulation and aperiodicity. Tremor voices from patients with disorders such as paresis, Parkinson's disease, hyperfunction, and adductor spasmodic dysphonia show low-dimensional characteristics, differing from random noise. Correlation dimension analysis statistically distinguishes tremor voices from normal voices. Furthermore, a nonlinear tremor model is proposed to study the vibrations of the vocal folds with vocal tremor. Fractal dimensions and positive Lyapunov exponents demonstrate the evidence of chaos in the tremor model, where amplitude and frequency play important roles in governing vocal fold dynamics. Nonlinear dynamic voice analysis and vocal fold modeling may provide a useful set of tools for understanding the dynamic mechanism of vocal tremor in patients with laryngeal diseases.

  4. Development of local TDC model in core thermal hydraulic analysis

    International Nuclear Information System (INIS)

    Kwon, H.S.; Park, J.R.; Hwang, D.H.; Lee, S.K.

    2004-01-01

    The local TDC model, consisting of natural mixing and forced mixing parts, was developed to obtain more realistic local fluid properties in core subchannel analysis. To evaluate the performance of the local TDC model, the CHF prediction capability was tested with various CHF correlations and local fluid properties at the CHF location based on the local TDC model. The results show that the standard deviation of the measured-to-predicted CHF ratio (M/P) based on the local TDC model can be reduced by about 7% compared to that based on the global TDC model when the CHF correlation has no term to account for distance from the spacer grid. (author)

  5. Structural modeling and in silico analysis of human superoxide dismutase 2.

    Directory of Open Access Journals (Sweden)

    Mariana Dias Castela de Carvalho

    Full Text Available Aging in the world population has increased every year. Superoxide dismutase 2 (Mn-SOD or SOD2 protects against oxidative stress, a main factor influencing cellular longevity. Polymorphisms in SOD2 have been associated with the development of neurodegenerative diseases, such as Alzheimer's and Parkinson's disease, as well as psychiatric disorders, such as schizophrenia, depression and bipolar disorder. In this study, all of the described natural variants (S10I, A16V, E66V, G76R, I82T and R156W of SOD2 were subjected to in silico analysis using eight different algorithms: SNPeffect, PolyPhen-2, PhD-SNP, PMUT, SIFT, SNAP, SNPs&GO and nsSNPAnalyzer. This analysis revealed disparate results for a few of the algorithms. The results showed that, from at least one algorithm, each amino acid substitution appears to harmfully affect the protein. Structural theoretical models were created for variants through comparative modelling performed using the MHOLline server (which includes MODELLER and PROCHECK and ab initio modelling, using the I-Tasser server. The predicted models were evaluated using TM-align, and the results show that the models were constructed with high accuracy. The RMSD values of the modelled mutants indicated likely pathogenicity for all missense mutations. Structural phylogenetic analysis using ConSurf revealed that human SOD2 is highly conserved. As a result, a human-curated database was generated that enables biologists and clinicians to explore SOD2 nsSNPs, including predictions of their effects and visualisation of the alignment of both the wild-type and mutant structures. The database is freely available at http://bioinfogroup.com/database and will be regularly updated.

  6. Statistical Modelling of Wind Profiles - Data Analysis and Modelling

    DEFF Research Database (Denmark)

    Jónsson, Tryggvi; Pinson, Pierre

    The aim of the analysis presented in this document is to investigate whether statistical models can be used to make very short-term predictions of wind profiles.

  7. Corpus Callosum Analysis using MDL-based Sequential Models of Shape and Appearance

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Davies, Rhodri H.; Ryberg, Charlotte

    2004-01-01

    are proposed, but all remain applicable to other domain problems. The well-known multi-resolution AAM optimisation is extended to include sequential relaxations on texture resolution, model coverage and model parameter constraints. Fully unsupervised analysis is obtained by exploiting model parameter...... that show that the method produces accurate, robust and rapid segmentations in a cross sectional study of 17 subjects, establishing its feasibility as a fully automated clinical tool for analysis and segmentation.......This paper describes a method for automatically analysing and segmenting the corpus callosum from magnetic resonance images of the brain based on the widely used Active Appearance Models (AAMs) by Cootes et al. Extensions of the original method, which are designed to improve this specific case...

  8. Stochastic modeling analysis and simulation

    CERN Document Server

    Nelson, Barry L

    1995-01-01

    A coherent introduction to the techniques for modeling dynamic stochastic systems, this volume also offers a guide to the mathematical, numerical, and simulation tools of systems analysis. Suitable for advanced undergraduates and graduate-level industrial engineers and management science majors, it proposes modeling systems in terms of their simulation, regardless of whether simulation is employed for analysis. Beginning with a view of the conditions that permit a mathematical-numerical analysis, the text explores Poisson and renewal processes, Markov chains in discrete and continuous time, se

  9. Modelling and analysis of turbulent datasets using Auto Regressive Moving Average processes

    International Nuclear Information System (INIS)

    Faranda, Davide; Dubrulle, Bérengère; Daviaud, François; Pons, Flavio Maria Emanuele; Saint-Michel, Brice; Herbert, Éric; Cortet, Pierre-Philippe

    2014-01-01

    We introduce a novel way to extract information from turbulent datasets by applying an Auto Regressive Moving Average (ARMA) statistical analysis. Such analysis goes well beyond the analysis of the mean flow and of the fluctuations and links the behavior of the recorded time series to a discrete version of a stochastic differential equation which is able to describe the correlation structure of the dataset. We introduce a new index Υ that measures the difference between the resulting analysis and the Obukhov model of turbulence, the simplest stochastic model reproducing both the Richardson law and the Kolmogorov spectrum. We test the method on datasets measured in a von Kármán swirling flow experiment. We found that the ARMA analysis correlates well with spatial structures of the flow and can discriminate between two different flows with comparable mean velocities, obtained by changing the forcing. Moreover, we show that Υ is highest in regions where shear-layer vortices are present, thereby establishing a link between deviations from the Kolmogorov model and coherent structures. These deviations are consistent with those observed by computing the Hurst exponents for the same time series. We show that some salient features of the analysis are preserved when considering global instead of local observables. Finally, we analyze flow configurations with multistability features, where the ARMA technique is efficient in discriminating different stability branches of the system.
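
    As an illustration of the basic fitting step (not the authors' full pipeline or their Υ index), the sketch below fits an ARMA(1,1) model to a synthetic AR(1) series using statsmodels, assuming that library is available.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(2)
        x = np.zeros(5000)                       # synthetic stand-in for a velocity series
        for t in range(1, 5000):
            x[t] = 0.7 * x[t - 1] + rng.normal()

        fit = ARIMA(x, order=(1, 0, 1)).fit()    # ARMA(1, 1): differencing order d = 0
        print(fit.params)                        # AR/MA coefficients and noise variance
        print(fit.aic)                           # compare candidate orders via AIC/BIC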

  10. Signal analysis of accelerometry data using gravity-based modeling

    Science.gov (United States)

    Davey, Neil P.; James, Daniel A.; Anderson, Megan E.

    2004-03-01

    Triaxial accelerometers have been used to measure human movement parameters in swimming. Interpretation of the data is difficult due to interference sources, including the interaction of external bodies. In this investigation the authors developed a model to simulate the physical movement of the lower back. Theoretical accelerometry outputs were derived, thus giving an ideal, or noiseless, dataset. An experimental data collection apparatus was developed by adapting a system to the aquatic environment for the investigation of swimming. Model data were compared against recorded data and showed strong correlation. Comparison of recorded and modeled data can be used to identify changes in body movement; this is especially useful when cyclic patterns are present in the activity. Strong correlations between the data sets allowed the development of signal processing algorithms for swimming stroke analysis, developed first on the pure noiseless data set and then applied to performance data. Video analysis was also used to validate the study results and has shown potential to provide acceptable results.

  11. A regional scale modeling framework combining biogeochemical model with life cycle and economic analysis for integrated assessment of cropping systems.

    Science.gov (United States)

    Tabatabaie, Seyed Mohammad Hossein; Bolte, John P; Murthy, Ganti S

    2018-06-01

    The goal of this study was to integrate a crop model, DNDC (DeNitrification-DeComposition), with life cycle assessment (LCA) and economic analysis models using a GIS-based integrated platform, ENVISION. The integrated model enables LCA practitioners to conduct integrated economic analysis and LCA on a regional scale while capturing the variability of soil emissions due to variation in regional factors during the production of crops and biofuel feedstocks. In order to evaluate the integrated model, the corn-soybean cropping system in Eagle Creek Watershed, Indiana was studied, and the integrated model was used to first model the soil emissions and then conduct the LCA as well as the economic analysis. The results showed that the variation in soil emissions due to variation in weather is high, causing some locations to be carbon sinks in some years and sources of CO2 in other years. In order to test the model under different scenarios, two tillage scenarios were defined and analyzed with the model: (1) conventional tillage (CT) and (2) no tillage (NT). The overall GHG emissions for the corn-soybean cropping system were simulated, and the results showed that the NT scenario resulted in lower soil GHG emissions than the CT scenario. Moreover, the global warming potential (GWP) of corn ethanol from well to pump varied between 57 and 92 g CO2-eq./MJ, while the GWP under the NT system was lower than that of the CT system. The cost break-even point was calculated as $3,612.5/ha in a two-year corn-soybean cropping system, and the results showed that under low and medium prices for corn and soybean most of the farms did not meet the break-even point. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Parametric sensitivity analysis of an agro-economic model of management of irrigation water

    Science.gov (United States)

    El Ouadi, Ihssan; Ouazar, Driss; El Menyari, Younesse

    2015-04-01

    The current work aims to build an analysis and decision support tool for policy options concerning the optimal allocation of water resources, while allowing a better reflection on the issue of the valuation of water by the agricultural sector in particular. Thus, a model disaggregated by farm type was developed for the rural town of Ait Ben Yacoub, located in eastern Morocco. This model integrates economic, agronomic and hydraulic data and simulates the agricultural gross margin across this area, taking into consideration changes in public policy and climatic conditions as well as the competition for collective resources. To identify the model input parameters that influence the results of the model, a parametric sensitivity analysis was performed using the "One-Factor-At-A-Time" approach within the "Screening Designs" method. Preliminary results of this analysis show that, among the 10 parameters analyzed, 6 significantly affect the objective function of the model; in order of influence, these are: (i) the coefficient of crop yield response to water, (ii) the average daily weight gain of livestock, (iii) the livestock reproduction rate, (iv) the maximum yield of crops, (v) the supply of irrigation water and (vi) precipitation. These 6 parameters register sensitivity indexes ranging between 0.22 and 1.28. These results show large uncertainties in these parameters that can dramatically skew the results of the model, and indicate the need to pay particular attention to their estimates. Keywords: water, agriculture, modeling, optimal allocation, parametric sensitivity analysis, Screening Designs, One-Factor-At-A-Time, agricultural policy, climate change.
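
    A minimal One-Factor-At-A-Time sketch in the spirit of the screening described above; the placeholder objective function, factor names and perturbation size are illustrative assumptions, not the agro-economic model itself.

        import numpy as np

        def gross_margin(p):                     # illustrative placeholder model
            return p[0] * p[1] - 0.3 * p[2] ** 2 + 5.0 * p[3]

        nominal = np.array([1.0, 2.0, 1.5, 0.8])
        names = ["yield_response", "daily_weight_gain", "water_supply", "precipitation"]
        delta = 0.10                             # perturb each factor by +10 % in turn

        y0 = gross_margin(nominal)
        for i, name in enumerate(names):
            p = nominal.copy()
            p[i] *= 1.0 + delta                  # one factor at a time
            s = (gross_margin(p) - y0) / y0 / delta   # normalised sensitivity index
            print(f"{name:18s} sensitivity = {s:+.2f}")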

  13. Finite element modelling for fatigue stress analysis of large suspension bridges

    Science.gov (United States)

    Chan, Tommy H. T.; Guo, L.; Li, Z. X.

    2003-03-01

    Fatigue is an important failure mode for large suspension bridges under traffic loadings. However, large suspension bridges have so many attributes that it is difficult to analyze their fatigue damage using experimental measurement methods. Numerical simulation is a feasible method of studying such fatigue damage. In British standards, the finite element method is recommended as a rigorous method for steel bridge fatigue analysis. This paper aims at developing a finite element (FE) model of a large suspension steel bridge for fatigue stress analysis. As a case study, a FE model of the Tsing Ma Bridge is presented. The verification of the model is carried out with the help of the measured bridge modal characteristics and the online data measured by the structural health monitoring system installed on the bridge. The results show that the constructed FE model is efficient for bridge dynamic analysis. Global structural analyses using the developed FE model are presented to determine the components of the nominal stress generated by railway loadings and some typical highway loadings. The critical locations in the bridge main span are also identified with the numerical results of the global FE stress analysis. Local stress analysis of a typical weld connection is carried out to obtain the hot-spot stresses in the region. These results provide a basis for evaluating fatigue damage and predicting the remaining life of the bridge.

  14. Uncertainty analysis of hydrological modeling in a tropical area using different algorithms

    Science.gov (United States)

    Rafiei Emam, Ammar; Kappas, Martin; Fassnacht, Steven; Linh, Nguyen Hoang Khanh

    2018-01-01

    Hydrological modeling outputs are subject to uncertainty resulting from different sources of error (e.g., errors in input data, model structure, and model parameters), making quantification of uncertainty in hydrological modeling imperative in order to improve the reliability of modeling results. Uncertainty analysis must also address difficulties in the calibration of hydrological models, which further increase in areas with data scarcity. The purpose of this study is to apply four uncertainty analysis algorithms to a semi-distributed hydrological model, quantifying different sources of uncertainty (especially parameter uncertainty) and evaluating their performance. In this study, the Soil and Water Assessment Tool (SWAT) eco-hydrological model was implemented for a watershed in the center of Vietnam. The sensitivity of the parameters was analyzed, and the model was calibrated. The uncertainty analysis for the hydrological model was conducted with four algorithms: Generalized Likelihood Uncertainty Estimation (GLUE), Sequential Uncertainty Fitting (SUFI), the Parameter Solution method (ParaSol) and Particle Swarm Optimization (PSO). The performance of the algorithms was compared using the P-factor and R-factor, the coefficient of determination (R²), the Nash-Sutcliffe coefficient of efficiency (NSE) and percent bias (PBIAS). The results showed the high performance of SUFI and PSO, with P-factor > 0.83, R-factor < 0.91, NSE > 0.89, and PBIAS < 0.18 in the uncertainty analysis. Indeed, uncertainty analysis must be accounted for when the outcomes of the model are used for policy or management decisions.
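
    The sketch below illustrates the GLUE idea on a toy model (all functions, thresholds and data are illustrative assumptions, not the SWAT setup): parameter sets are sampled, those exceeding an NSE acceptance threshold are kept as "behavioural", and their simulations form uncertainty bounds.

        import numpy as np

        rng = np.random.default_rng(4)

        def model(k, s):                          # stand-in for the hydrological model
            t = np.arange(5)
            return k * np.exp(-s * t) + s * t

        obs = model(3.0, 0.8) + rng.normal(0, 0.1, 5)   # synthetic "observations"

        def nse(sim):                             # Nash-Sutcliffe efficiency
            return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

        theta = rng.uniform([0.5, 0.1], [5.0, 2.0], size=(5000, 2))
        sims = np.array([model(k, s) for k, s in theta])
        scores = np.array([nse(s) for s in sims])

        behavioural = scores > 0.5                # GLUE acceptance threshold
        # Equal weights for simplicity; GLUE often weights by the informal likelihood.
        lo, hi = np.percentile(sims[behavioural], [2.5, 97.5], axis=0)
        print(f"{behavioural.sum()} behavioural sets kept")
        print("95% band per time step:", np.round(lo, 2), np.round(hi, 2))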

  15. A Bayesian analysis of inflationary primordial spectrum models using Planck data

    Science.gov (United States)

    Santos da Costa, Simony; Benetti, Micol; Alcaniz, Jailson

    2018-03-01

    The currently available Cosmic Microwave Background (CMB) data show an anomalously low value of the CMB temperature fluctuations at large angular scales (low multipoles l). This lack of power is not explained by the minimal ΛCDM model, and one of the possible mechanisms explored in the literature to address this problem is the presence of features in the primordial power spectrum (PPS) motivated by early universe physics. In this paper, we analyse a set of cutoff inflationary PPS models using a Bayesian model comparison approach in light of the latest CMB data from the Planck Collaboration. Our results show that the standard power-law parameterisation is preferred over all models considered in the analysis, which motivates the search for alternative explanations for the observed lack of power in the CMB anisotropy spectrum.

  16. Comparison of global sensitivity analysis methods – Application to fuel behavior modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ikonen, Timo, E-mail: timo.ikonen@vtt.fi

    2016-02-15

    Highlights: • Several global sensitivity analysis methods are compared. • The methods’ applicability to nuclear fuel performance simulations is assessed. • The implications of large input uncertainties and complex models are discussed. • Alternative strategies to perform sensitivity analyses are proposed. - Abstract: Fuel performance codes have two characteristics that make their sensitivity analysis challenging: large uncertainties in input parameters and complex, non-linear and non-additive structure of the models. The complex structure of the code leads to interactions between inputs that show as cross terms in the sensitivity analysis. Due to the large uncertainties of the inputs these interactions are significant, sometimes even dominating the sensitivity analysis. For the same reason, standard linearization techniques do not usually perform well in the analysis of fuel performance codes. More sophisticated methods are typically needed in the analysis. To this end, we compare the performance of several sensitivity analysis methods in the analysis of a steady state FRAPCON simulation. The comparison of importance rankings obtained with the various methods shows that even the simplest methods can be sufficient for the analysis of fuel maximum temperature. However, the analysis of the gap conductance requires more powerful methods that take into account the interactions of the inputs. In some cases, moment-independent methods are needed. We also investigate the computational cost of the various methods and present recommendations as to which methods to use in the analysis.

  17. [Model-based biofuels system analysis: a review].

    Science.gov (United States)

    Chang, Shiyan; Zhang, Xiliang; Zhao, Lili; Ou, Xunmin

    2011-03-01

    Model-based system analysis is an important tool for evaluating the potential and impacts of biofuels, and for drafting biofuels technology roadmaps and targets. The broad reach of the biofuels supply chain requires that biofuels system analyses span a range of disciplines, including agriculture/forestry, energy, economics, and the environment. Here we reviewed various models developed for or applied to modeling biofuels, and presented a critical analysis of Agriculture/Forestry System Models, Energy System Models, Integrated Assessment Models, Micro-level Cost, Energy and Emission Calculation Models, and Specific Macro-level Biofuel Models. We focused on the models' strengths, weaknesses, and applicability, facilitating the selection of a suitable type of model for specific issues. Such an analysis was a prerequisite for future biofuels system modeling, and represented a valuable resource for researchers and policy makers.

  18. Beta-binomial model for meta-analysis of odds ratios.

    Science.gov (United States)

    Bakbergenuly, Ilyas; Kulinskaya, Elena

    2017-05-20

    In meta-analysis of odds ratios (ORs), heterogeneity between the studies is usually modelled via the additive random effects model (REM). An alternative, multiplicative REM for ORs uses overdispersion. The multiplicative factor in this overdispersion model (ODM) can be interpreted as an intra-class correlation (ICC) parameter. This model naturally arises when the probabilities of an event in one or both arms of a comparative study are themselves beta-distributed, resulting in beta-binomial distributions. We propose two new estimators of the ICC for meta-analysis in this setting. One is based on the inverted Breslow-Day test, and the other on the improved gamma approximation by Kulinskaya and Dollinger (2015, p. 26) to the distribution of Cochran's Q. The performance of these and several other estimators of the ICC in terms of bias and coverage is studied by simulation. Additionally, the Mantel-Haenszel approach to estimation of ORs is extended to the beta-binomial model, and we study the performance of various ICC estimators when used in the Mantel-Haenszel or the inverse-variance method to combine ORs in meta-analysis. The results of the simulations show that the improved gamma-based estimator of the ICC is superior for small sample sizes, and the Breslow-Day-based estimator is the best for n ≥ 100. The Mantel-Haenszel-based estimator of the OR is very biased and is not recommended. The inverse-variance approach is also somewhat biased for ORs ≠ 1, but this bias is not large in practical settings. The developed methods and R programs, provided in the Web Appendix, make the beta-binomial model a feasible alternative to the standard REM for meta-analysis of ORs. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
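
    A small simulation sketch of the beta-binomial setup described above: per-study event probabilities are drawn from a Beta(a, b) distribution, so the implied ICC is ρ = 1/(a + b + 1), and a method-of-moments estimate is recovered from the overdispersion of the observed proportions. The moment estimator shown is a generic one, not either of the paper's two proposals, and all values are illustrative.

        import numpy as np

        rng = np.random.default_rng(3)
        a, b, n, k = 4.0, 12.0, 80, 200          # Beta(a, b); n subjects/arm; k studies
        p = rng.beta(a, b, size=k)               # study-level event probabilities
        events = rng.binomial(n, p)              # beta-binomial event counts

        rho_true = 1.0 / (a + b + 1.0)           # ICC implied by the beta mixing
        phat = events / n
        pbar = phat.mean()
        # Method of moments, from Var(phat) = pbar (1 - pbar) [1 + (n - 1) rho] / n.
        rho_mom = (phat.var(ddof=1) - pbar * (1 - pbar) / n) / (
            pbar * (1 - pbar) * (1.0 - 1.0 / n))
        print(f"true ICC = {rho_true:.3f}, moment estimate = {rho_mom:.3f}")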

  19. Model Construction and Analysis of Respiration in Halobacterium salinarum.

    Directory of Open Access Journals (Sweden)

    Cherryl O Talaue

    Full Text Available The archaeon Halobacterium salinarum can produce energy using three different processes, namely photosynthesis, oxidative phosphorylation and fermentation of arginine, and is thus a model organism in bioenergetics. Compared to its bacteriorhodopsin-driven photosynthesis, less attention has been devoted to modeling its respiratory pathway. We created a system of ordinary differential equations that models its oxidative phosphorylation. The model consists of the electron transport chain, the ATP synthase, the potassium uniport and the sodium-proton antiport. By fitting the model parameters to experimental data, we show that the model can explain data on proton motive force generation, ATP production, and the charge balancing of ions between the sodium-proton antiporter and the potassium uniport. We performed sensitivity analysis of the model parameters to determine how the model will respond to perturbations in parameter values. The model and the parameters we derived provide a resource that can be used for analytical studies of the bioenergetics of H. salinarum.

  20. Model Construction and Analysis of Respiration in Halobacterium salinarum.

    Science.gov (United States)

    Talaue, Cherryl O; del Rosario, Ricardo C H; Pfeiffer, Friedhelm; Mendoza, Eduardo R; Oesterhelt, Dieter

    2016-01-01

    The archaeon Halobacterium salinarum can produce energy using three different processes, namely photosynthesis, oxidative phosphorylation and fermentation of arginine, and is thus a model organism in bioenergetics. Compared to its bacteriorhodopsin-driven photosynthesis, less attention has been devoted to modeling its respiratory pathway. We created a system of ordinary differential equations that models its oxidative phosphorylation. The model consists of the electron transport chain, the ATP synthase, the potassium uniport and the sodium-proton antiport. By fitting the model parameters to experimental data, we show that the model can explain data on proton motive force generation, ATP production, and the charge balancing of ions between the sodium-proton antiporter and the potassium uniport. We performed sensitivity analysis of the model parameters to determine how the model will respond to perturbations in parameter values. The model and the parameters we derived provide a resource that can be used for analytical studies of the bioenergetics of H. salinarum.

  1. Sensitivity analysis of the terrestrial food chain model FOOD III

    International Nuclear Information System (INIS)

    Zach, Reto.

    1980-10-01

    As a first step in constructing a terrestrial food chain model suitable for long-term waste management situations, a numerical sensitivity analysis of FOOD III was carried out to identify important model parameters. The analysis involved 42 radionuclides, four pathways, 14 food types, 93 parameters and three percentages of parameter variation. We also investigated the importance of radionuclides, pathways and food types. The analysis involved a simple contamination model to render results from individual pathways comparable. The analysis showed that radionuclides vary greatly in their dose contribution to each of the four pathways, but relative contributions to each pathway are very similar. Man's and animals' drinking water pathways are much more important than the leaf and root pathways. However, this result depends on the contamination model used. All the pathways contain unimportant food types. Considering the number of parameters involved, FOOD III has too many different food types. Many of the parameters of the leaf and root pathways are important. However, this is true for only a few of the parameters of the animals' drinking water pathway, and for neither of the two parameters of man's drinking water pathway. The radiological decay constant increases the variability of these results. The dose factor is consistently the most important variable, and it explains most of the variability of radionuclide doses within pathways. Consideration of the variability of dose factors is important in contemporary as well as long-term waste management assessment models, if realistic estimates are to be made. (auth)

  2. Fluid Dynamic Models for Bhattacharyya-Based Discriminant Analysis.

    Science.gov (United States)

    Noh, Yung-Kyun; Hamm, Jihun; Park, Frank Chongwoo; Zhang, Byoung-Tak; Lee, Daniel D

    2018-01-01

    Classical discriminant analysis attempts to discover a low-dimensional subspace where class label information is maximally preserved under projection. Canonical methods for estimating the subspace optimize an information-theoretic criterion that measures the separation between the class-conditional distributions. Unfortunately, direct optimization of the information-theoretic criteria is generally non-convex and intractable in high-dimensional spaces. In this work, we propose a novel, tractable algorithm for discriminant analysis that considers the class-conditional densities as interacting fluids in the high-dimensional embedding space. We use the Bhattacharyya criterion as a potential function that generates forces between the interacting fluids, and derive a computationally tractable method for finding the low-dimensional subspace that optimally constrains the resulting fluid flow. We show that this model properly reduces to the optimal solution for homoscedastic data as well as for heteroscedastic Gaussian distributions with equal means. We also extend this model to discover optimal filters for discriminating Gaussian processes and provide experimental results and comparisons on a number of datasets.
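
    For reference, the sketch below evaluates the Bhattacharyya distance between two Gaussian class-conditional densities, the criterion that generates the interaction "forces" in the model above; the means and covariances are arbitrary illustrative values.

        import numpy as np

        def bhattacharyya_gaussian(mu1, S1, mu2, S2):
            # Bhattacharyya distance between N(mu1, S1) and N(mu2, S2).
            S = 0.5 * (S1 + S2)
            dmu = mu1 - mu2
            term_mean = 0.125 * dmu @ np.linalg.solve(S, dmu)
            term_cov = 0.5 * np.log(np.linalg.det(S) /
                                    np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)))
            return term_mean + term_cov

        mu1, mu2 = np.array([0.0, 0.0]), np.array([2.0, 0.0])
        S1 = np.eye(2)
        S2 = np.diag([1.0, 4.0])                 # heteroscedastic example
        print(bhattacharyya_gaussian(mu1, S1, mu2, S2))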

  3. Social network analysis shows direct evidence for social transmission of tool use in wild chimpanzees.

    Directory of Open Access Journals (Sweden)

    Catherine Hobaiter

    2014-09-01

    Full Text Available Social network analysis methods have made it possible to test whether novel behaviors in animals spread through individual or social learning. To date, however, social network analysis of wild populations has been limited to static models that cannot precisely reflect the dynamics of learning, for instance, the impact of multiple observations across time. Here, we present a novel dynamic version of network analysis that is capable of capturing temporal aspects of acquisition--that is, how successive observations by an individual influence its acquisition of the novel behavior. We apply this model to studying the spread of two novel tool-use variants, "moss-sponging" and "leaf-sponge re-use," in the Sonso chimpanzee community of Budongo Forest, Uganda. Chimpanzees are widely considered the most "cultural" of all animal species, with 39 behaviors suspected as socially acquired, most of them in the domain of tool-use. The cultural hypothesis is supported by experimental data from captive chimpanzees and a range of observational data. However, for wild groups, there is still no direct experimental evidence for social learning, nor has there been any direct observation of social diffusion of behavioral innovations. Here, we tested both a static and a dynamic network model and found strong evidence that diffusion patterns of moss-sponging, but not leaf-sponge re-use, were significantly better explained by social than individual learning. The most conservative estimate of social transmission accounted for 85% of observed events, with an estimated 15-fold increase in learning rate for each time a novice observed an informed individual moss-sponging. We conclude that group-specific behavioral variants in wild chimpanzees can be socially learned, adding to the evidence that this prerequisite for culture originated in a common ancestor of great apes and humans, long before the advent of modern humans.

  4. Social network analysis shows direct evidence for social transmission of tool use in wild chimpanzees.

    Science.gov (United States)

    Hobaiter, Catherine; Poisot, Timothée; Zuberbühler, Klaus; Hoppitt, William; Gruber, Thibaud

    2014-09-01

    Social network analysis methods have made it possible to test whether novel behaviors in animals spread through individual or social learning. To date, however, social network analysis of wild populations has been limited to static models that cannot precisely reflect the dynamics of learning, for instance, the impact of multiple observations across time. Here, we present a novel dynamic version of network analysis that is capable of capturing temporal aspects of acquisition--that is, how successive observations by an individual influence its acquisition of the novel behavior. We apply this model to studying the spread of two novel tool-use variants, "moss-sponging" and "leaf-sponge re-use," in the Sonso chimpanzee community of Budongo Forest, Uganda. Chimpanzees are widely considered the most "cultural" of all animal species, with 39 behaviors suspected as socially acquired, most of them in the domain of tool-use. The cultural hypothesis is supported by experimental data from captive chimpanzees and a range of observational data. However, for wild groups, there is still no direct experimental evidence for social learning, nor has there been any direct observation of social diffusion of behavioral innovations. Here, we tested both a static and a dynamic network model and found strong evidence that diffusion patterns of moss-sponging, but not leaf-sponge re-use, were significantly better explained by social than individual learning. The most conservative estimate of social transmission accounted for 85% of observed events, with an estimated 15-fold increase in learning rate for each time a novice observed an informed individual moss-sponging. We conclude that group-specific behavioral variants in wild chimpanzees can be socially learned, adding to the evidence that this prerequisite for culture originated in a common ancestor of great apes and humans, long before the advent of modern humans.

  5. Proteome Analysis of the Plant Pathogenic Fungus Monilinia laxa Showing Host Specificity

    Directory of Open Access Journals (Sweden)

    Olja Bregar

    2012-01-01

    Full Text Available Brown rot fungus Monilinia laxa (Aderh. & Ruhl.) Honey is an important plant pathogen of stone and pome fruits in Europe. We applied a proteomic approach in a study of M. laxa isolates obtained from apples and apricots in order to show the host specificity of the isolates and to analyse differentially expressed proteins in terms of host specificity, fungal pathogenicity and the identification of candidate proteins for diagnostic marker development. Extracted mycelium proteins were separated by 2-D electrophoresis (2-DE) and visualized by Coomassie staining in a non-linear pH range of 3–11 and Mr of 14–116 kDa. We set up a 2-DE reference map of M. laxa, resolving up to 800 protein spots, and used it for image analysis. The average technical coefficient of variance (13 %) demonstrated high reproducibility of protein extraction and 2-D polyacrylamide gel electrophoresis (2-DE PAGE), and the average biological coefficient of variance (23 %) enabled differential proteomic analysis of the isolates. Multivariate statistical analysis (principal component analysis) discriminated isolates from the two different hosts, providing new data that support the existence of a M. laxa specialized form f. sp. mali, which infects only apples. A total of 50 differentially expressed proteins were further analysed by LC-MS/MS, yielding 41 positive identifications. The identified mycelial proteins were functionally classified into 6 groups: amino acid and protein metabolism, energy production, carbohydrate metabolism, stress response, fatty acid metabolism and other proteins. Some proteins expressed only in apple isolates have been described as virulence factors in other fungi. Acetolactate synthase was almost 11-fold more abundant in apple-specific isolates than in apricot isolates and might be implicated in M. laxa host specificity. Ten proteins identified only in apple isolates are potential candidates for the development of M. laxa host-specific diagnostic markers.

  6. What type of statistical model to choose for the analysis of radioimmunoassays

    International Nuclear Information System (INIS)

    Huet, S.

    1984-01-01

    The current techniques used for the statistical analysis of radioimmunoassays are not very satisfactory for either the statistician or the biologist. They are based on an attempt to make the response curve linear in order to avoid complicated computations. The present article shows that this practice has considerable, often neglected, effects on the statistical assumptions which must be formulated. A stricter analysis is proposed, applying the four-parameter logistic model. The advantages of this method are that the statistical assumptions formulated are based on observed data, and that the model can be applied to almost all radioimmunoassays [fr]
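
    A minimal sketch of fitting the four-parameter logistic model to radioimmunoassay calibration data with SciPy's curve_fit; the dose-response values and starting guesses are illustrative, not from the article.

        import numpy as np
        from scipy.optimize import curve_fit

        def four_pl(x, a, b, c, d):
            # a: response at zero dose, d: response at infinite dose,
            # c: mid-point (ED50), b: slope factor.
            return d + (a - d) / (1.0 + (x / c) ** b)

        dose = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
        resp = np.array([0.95, 0.90, 0.78, 0.55, 0.30, 0.15, 0.08])  # bound fraction

        popt, pcov = curve_fit(four_pl, dose, resp, p0=[1.0, 1.0, 3.0, 0.05])
        print("a, b, c, d =", popt)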

  7. Image analysis and modeling in medical image computing. Recent developments and advances.

    Science.gov (United States)

    Handels, H; Deserno, T M; Meinzer, H-P; Tolxdorff, T

    2012-01-01

    Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice, e.g., to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the degree of automation, accuracy, reproducibility and robustness of medical image computing methods has to be increased to meet the requirements of clinical routine. In this focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models into the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility and robustness. Furthermore, model-based image computing techniques open up new perspectives for the prediction of organ changes and risk analysis of patients. Selected contributions are assembled to present the latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications, and medical images like radiographic images, dual-energy CT images, MR images, diffusion tensor images as well as microscopic images are analyzed. The applications emphasize the high potential and the wide application range of these methods. The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body.

  8. Intercity Travel Demand Analysis Model

    Directory of Open Access Journals (Sweden)

    Ming Lu

    2014-01-01

    Full Text Available It is well known that intercity travel is an important component of travel demand and typically takes place in short-distance corridors. The conventional four-step method is no longer suitable for short-distance corridor travel demand analysis, because the time spent in urban traffic has a great impact on the traveler's main mode choice. To solve this problem, the author studied the existing intercity travel demand analysis models, improved them on the basis of this study, and finally established a combined model of main mode choice and access mode choice. An integrated multilevel nested logit model structure system was then built. The model system includes trip generation, destination choice, and mode-route choice based on a multinomial logit model, and it achieves linkage and feedback between its parts through the logsum variable. As a case study, this model was applied to forecast intercity railway passenger demand in Shenzhen in 2010. The forecast results were consistent with observed demand, verifying the model's correctness and feasibility.
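
    The sketch below shows the logsum linkage the abstract refers to, in a reduced form: the access-mode nest's expected utility (logsum) feeds the main-mode utility, and a multinomial logit then gives the mode shares. All utilities, scale parameters and coefficients are illustrative assumptions, not the calibrated Shenzhen model.

        import numpy as np

        def logsum(v, mu=1.0):
            # Expected maximum utility of a nest with scale parameter mu.
            return mu * np.log(np.sum(np.exp(np.asarray(v) / mu)))

        v_access = [-1.2, -0.8, -2.0]                   # walk, bus, taxi to the station
        v_rail = 1.0 + 0.6 * logsum(v_access, mu=0.5)   # logsum feeds the main mode
        v_car, v_coach = 0.4, -0.2

        v = np.array([v_rail, v_car, v_coach])
        shares = np.exp(v) / np.exp(v).sum()            # multinomial logit mode shares
        print(dict(zip(["rail", "car", "coach"], shares.round(3))))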

  9. KEEFEKTIFAN MODEL SHOW NOT TELL DAN MIND MAP PADA PEMBELAJARAN MENULIS TEKS EKSPOSISI BERDASARKAN MINAT PESERTA DIDIK KELAS X SMK

    Directory of Open Access Journals (Sweden)

    Wiwit Lili Sokhipah

    2015-03-01

    Full Text Available The aims of this study were (1) to determine the effectiveness of the show-not-tell model in teaching exposition text writing skills based on the interest of grade X vocational school (SMK) students, (2) to determine the effectiveness of the mind map model in teaching exposition text writing skills based on the interest of grade X SMK students, and (3) to determine the effectiveness of the interaction between the show-not-tell and mind map models in teaching exposition text writing skills based on the interest of grade X SMK students. This study used a quasi-experimental design (pretest-posttest control group design). The design comprised two experimental groups: application of the show-not-tell model in teaching exposition text writing to students with high interest, and application of the mind map model in teaching exposition text writing to students with low interest. The results were: (1) the show-not-tell model is effective for teaching exposition text writing to students with high interest, (2) the mind map model is effective for teaching exposition text writing to students with low interest, and (3) the show-not-tell model is more effective for teaching exposition text writing to students with high interest, while the mind map model is effective for teaching exposition text writing to students with low interest.

  10. Advanced Online Survival Analysis Tool for Predictive Modelling in Clinical Data Science.

    Science.gov (United States)

    Montes-Torres, Julio; Subirats, José Luis; Ribelles, Nuria; Urda, Daniel; Franco, Leonardo; Alba, Emilio; Jerez, José Manuel

    2016-01-01

    One of the prevailing applications of machine learning is the use of predictive modelling in clinical survival analysis. In this work, we present our view of the current situation of computer tools for survival analysis, stressing the need to transfer the latest results in the field of machine learning to biomedical researchers. We propose a web-based software for survival analysis called OSA (Online Survival Analysis), which has been developed as an open-access and user-friendly option to obtain discrete-time predictive survival models at the individual level using machine learning techniques, and to perform standard survival analysis. OSA employs an Artificial Neural Network (ANN) based method to produce the predictive survival models. Additionally, the software can easily generate survival and hazard curves with multiple options to personalise the plots, obtain contingency tables from the uploaded data to perform different tests, and fit a Cox regression model from a number of predictor variables. In the Materials and Methods section, we depict the general architecture of the application and introduce the mathematical background of each of the implemented methods. The study concludes with examples of use showing the results obtained with public datasets.
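
    OSA itself is a web application; as an offline illustration of the standard analyses it offers, the following sketch fits a Cox regression and a Kaplan-Meier curve with the open-source lifelines package (an assumption on my part; OSA's own ANN-based method is not reproduced here). The toy data are invented.

        import pandas as pd
        from lifelines import CoxPHFitter, KaplanMeierFitter

        # Toy dataset: duration in months, event indicator, two covariates.
        df = pd.DataFrame({
            "duration": [5, 8, 12, 20, 24, 30, 36, 40],
            "event":    [1, 1, 0, 1, 0, 1, 0, 1],
            "age":      [62, 55, 70, 48, 66, 59, 73, 51],
            "stage":    [2, 1, 3, 1, 2, 3, 2, 1],
        })

        cph = CoxPHFitter()
        cph.fit(df, duration_col="duration", event_col="event")
        cph.print_summary()                      # hazard ratios for age and stage

        km = KaplanMeierFitter()
        km.fit(df["duration"], event_observed=df["event"])
        print(km.survival_function_)             # standard survival curve estimate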

  11. A sensitivity analysis of regional and small watershed hydrologic models

    Science.gov (United States)

    Ambaruch, R.; Salomonson, V. V.; Simmons, J. W.

    1975-01-01

    Continuous simulation models of the hydrologic behavior of watersheds are important tools in several practical applications such as hydroelectric power planning, navigation, and flood control. Several recent studies have addressed the feasibility of using remote earth observations as sources of input data for hydrologic models. The objective of the study reported here was to determine how accurate remotely sensed measurements must be to provide inputs to hydrologic models of watersheds, within the tolerances needed for acceptably accurate synthesis of streamflow by the models. The study objective was achieved by performing a series of sensitivity analyses using continuous simulation models of three watersheds. The sensitivity analysis showed quantitatively how variations in each of 46 model inputs and parameters affect simulation accuracy with respect to five different performance indices.
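
    The question posed above, how accurate an input must be for acceptably accurate streamflow synthesis, can be illustrated with a toy water-balance model: perturb the precipitation input by growing relative errors and record the resulting streamflow error. The model and error levels below are illustrative only.

        import numpy as np

        def toy_runoff(precip, evap_coeff=0.6, storage_coeff=0.2):
            """Very simple water balance: effective rainfall routed through
            a linear reservoir (illustrative only)."""
            effective = (1.0 - evap_coeff) * precip
            runoff = np.zeros_like(effective)
            store = 0.0
            for t, inflow in enumerate(effective):
                store += inflow
                runoff[t] = storage_coeff * store
                store -= runoff[t]
            return runoff

        rng = np.random.default_rng(0)
        precip = rng.gamma(2.0, 5.0, size=365)       # synthetic daily rainfall
        baseline = toy_runoff(precip)

        for err in (0.05, 0.10, 0.20, 0.40):         # relative input error levels
            perturbed = toy_runoff(precip * (1.0 + err))
            rel = np.abs(perturbed - baseline).sum() / baseline.sum()
            print(f"{err:>4.0%} input error -> {rel:.1%} streamflow volume error")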

  12. Tokyo Motor Show 2003

    Energy Technology Data Exchange (ETDEWEB)

    Joly, E.

    2004-01-01

    The text which follows presents the different techniques exhibited during the 37th Tokyo Motor Show. The report points out the major development trends of the Japanese automobile industry. Hybrid electric-powered vehicles and vehicles equipped with fuel cells were highlighted by the Japanese manufacturers, which devote considerable budgets to research on less polluting vehicles. The exhibited models, although all different according to the manufacturer, always use a hybrid system: fuel cell/battery. The manufacturers also stressed intelligent systems for navigation and safety, as well as design and comfort. (O.M.)

  13. Bayesian analysis of data and model error in rainfall-runoff hydrological models

    Science.gov (United States)

    Kavetski, D.; Franks, S. W.; Kuczera, G.

    2004-12-01

    A major unresolved issue in the identification and use of conceptual hydrologic models is realistic description of uncertainty in the data and model structure. In particular, hydrologic parameters often cannot be measured directly and must be inferred (calibrated) from observed forcing/response data (typically, rainfall and runoff). However, rainfall varies significantly in space and time, yet is often estimated from sparse gauge networks. Recent work showed that current calibration methods (e.g., standard least squares, multi-objective calibration, generalized likelihood uncertainty estimation) ignore forcing uncertainty and assume that the rainfall is known exactly. Consequently, they can yield strongly biased and misleading parameter estimates. This deficiency confounds attempts to reliably test model hypotheses, to generalize results across catchments (the regionalization problem) and to quantify predictive uncertainty when the hydrologic model is extrapolated. This paper continues the development of a Bayesian total error analysis (BATEA) methodology for the calibration and identification of hydrologic models, which explicitly incorporates the uncertainty in both the forcing and response data, and allows systematic model comparison based on residual model errors and formal Bayesian hypothesis testing (e.g., using Bayes factors). BATEA is based on explicit stochastic models for both forcing and response uncertainty, whereas current techniques focus solely on response errors. Hence, unlike existing methods, the BATEA parameter equations directly reflect the modeler's confidence in all the data. We compare several approaches to approximating the parameter distributions: a) full Markov Chain Monte Carlo methods and b) simplified approaches based on linear approximations. Studies using synthetic and real data from the US and Australia show that BATEA systematically reduces the parameter bias, leads to more meaningful model fits and allows model comparison taking residual model errors into account.
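
    A heavily simplified sketch of the Bayesian calibration idea (not the full BATEA hierarchy with latent forcing multipliers): a random-walk Metropolis sampler for a single runoff-coefficient parameter under Gaussian residuals, on synthetic data.

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic "observed" runoff from a linear store y = k * x + noise.
        x = rng.gamma(2.0, 3.0, size=100)     # forcing (e.g., a rainfall index)
        k_true, sigma = 0.45, 0.5
        y = k_true * x + rng.normal(0.0, sigma, size=100)

        def log_post(k):
            if k <= 0:                        # flat prior on k > 0 (assumed)
                return -np.inf
            resid = y - k * x
            return -0.5 * np.sum((resid / sigma) ** 2)

        # Random-walk Metropolis over k.
        k, lp, samples = 1.0, log_post(1.0), []
        for _ in range(20000):
            prop = k + rng.normal(0.0, 0.05)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                k, lp = prop, lp_prop
            samples.append(k)

        post = np.array(samples[5000:])       # drop burn-in
        print(f"posterior mean k = {post.mean():.3f} +/- {post.std():.3f}")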

  14. Comparative uncertainty analysis of copper loads in stormwater systems using GLUE and grey-box modeling

    DEFF Research Database (Denmark)

    Lindblom, Erik Ulfson; Madsen, Henrik; Mikkelsen, Peter Steen

    2007-01-01

    With the proposed model and input data, the GLUE analysis shows that the total sampled copper mass can be predicted within a range of +/- 50% of the median value (385 g), whereas the grey-box analysis showed a prediction uncertainty of less than +/- 30%. Future work will clarify the pros and cons of the two methods...

  15. Daily supplementation of D-ribose shows no therapeutic benefits in the MHC-I transgenic mouse model of inflammatory myositis.

    Directory of Open Access Journals (Sweden)

    William Coley

    Full Text Available BACKGROUND: Current treatments for idiopathic inflammatory myopathies (collectively called myositis) focus on the suppression of an autoimmune inflammatory response within the skeletal muscle. However, it has been observed that there is a poor correlation between the successful suppression of muscle inflammation and an improvement in muscle function. Some evidence in the literature suggests that metabolic abnormalities in the skeletal muscle underlie the weakness that continues despite successful immunosuppression. We have previously shown that decreased expression of a purine nucleotide cycle enzyme, adenosine monophosphate deaminase (AMPD1), leads to muscle weakness in a mouse model of myositis and may provide a mechanistic basis for muscle weakness. One of the downstream metabolites of this pathway, D-ribose, has been reported to alleviate symptoms of myalgia in patients with a congenital loss of AMPD1. Therefore, we hypothesized that supplementing exogenous D-ribose would improve muscle function in the mouse model of myositis. We treated normal and myositis mice with daily doses of D-ribose (4 mg/kg) over a 6-week time period and assessed its effects using a battery of behavioral, functional, histological and molecular measures. RESULTS: Treatment with D-ribose was found to have no statistically significant effects on body weight, grip strength, open field behavioral activity, maximal and specific forces of EDL, soleus muscles, or histological features. Histological and gene expression analysis indicated that muscle tissues remained inflamed despite treatment. Gene expression analysis also suggested that low levels of the ribokinase enzyme in the skeletal muscle might prevent skeletal muscle tissue from effectively utilizing D-ribose. CONCLUSIONS: Treatment with daily oral doses of D-ribose showed no significant effect on either disease progression or muscle function in the mouse model of myositis.

  16. Daily Supplementation of D-ribose Shows No Therapeutic Benefits in the MHC-I Transgenic Mouse Model of Inflammatory Myositis

    Science.gov (United States)

    Coley, William; Rayavarapu, Sree; van der Meulen, Jack H.; Duba, Ayyappa S.; Nagaraju, Kanneboyina

    2013-01-01

    Background Current treatments for idiopathic inflammatory myopathies (collectively called myositis) focus on the suppression of an autoimmune inflammatory response within the skeletal muscle. However, it has been observed that there is a poor correlation between the successful suppression of muscle inflammation and an improvement in muscle function. Some evidence in the literature suggests that metabolic abnormalities in the skeletal muscle underlie the weakness that continues despite successful immunosuppression. We have previously shown that decreased expression of a purine nucleotide cycle enzyme, adenosine monophosphate deaminase (AMPD1), leads to muscle weakness in a mouse model of myositis and may provide a mechanistic basis for muscle weakness. One of the downstream metabolites of this pathway, D-ribose, has been reported to alleviate symptoms of myalgia in patients with a congenital loss of AMPD1. Therefore, we hypothesized that supplementing exogenous D-ribose would improve muscle function in the mouse model of myositis. We treated normal and myositis mice with daily doses of D-ribose (4 mg/kg) over a 6-week time period and assessed its effects using a battery of behavioral, functional, histological and molecular measures. Results Treatment with D-ribose was found to have no statistically significant effects on body weight, grip strength, open field behavioral activity, maximal and specific forces of EDL, soleus muscles, or histological features. Histological and gene expression analysis indicated that muscle tissues remained inflamed despite treatment. Gene expression analysis also suggested that low levels of the ribokinase enzyme in the skeletal muscle might prevent skeletal muscle tissue from effectively utilizing D-ribose. Conclusions Treatment with daily oral doses of D-ribose showed no significant effect on either disease progression or muscle function in the mouse model of myositis. PMID:23785461

  17. Dimensional Model for Estimating Factors influencing Childhood Obesity: Path Analysis Based Modeling

    Directory of Open Access Journals (Sweden)

    Maryam Kheirollahpour

    2014-01-01

    Full Text Available The main objective of this study is to identify and develop a comprehensive model which estimates and evaluates the overall relations among the factors that lead to weight gain in children by using structural equation modeling. The proposed models in this study explore the connection among the socioeconomic status of the family, parental feeding practice, and physical activity. Six structural models were tested to identify the direct and indirect relationships between socioeconomic status, parental feeding practice, general level of physical activity, and weight status of children. Finally, a comprehensive model was devised to show how these factors relate to each other as well as to the body mass index (BMI) of the children simultaneously. Concerning the methodology of the current study, confirmatory factor analysis (CFA) was applied to reveal the hidden (secondary) effect of socioeconomic factors on feeding practice and ultimately on the weight status of the children and also to determine the degree of model fit. The comprehensive structural model tested in this study suggested that there are significant direct and indirect relationships among variables of interest. Moreover, the results suggest that parental feeding practice and physical activity are mediators in the structural model.
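
    The elementary building block of such path models is the product-of-coefficients estimate of an indirect (mediated) effect. The sketch below computes it with ordinary least squares on synthetic data; the variable names and effect sizes are invented, and the study itself used full SEM/CFA machinery rather than this two-regression shortcut.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n = 500
        ses = rng.normal(size=n)                              # socioeconomic status
        feeding = 0.5 * ses + rng.normal(size=n)              # parental feeding practice
        bmi = 0.4 * feeding + 0.1 * ses + rng.normal(size=n)  # child BMI (z-score)

        # Path a: SES -> feeding practice
        a = sm.OLS(feeding, sm.add_constant(ses)).fit().params[1]

        # Paths b (feeding -> BMI) and c' (direct SES -> BMI)
        X = sm.add_constant(np.column_stack([feeding, ses]))
        fit = sm.OLS(bmi, X).fit()
        b, c_direct = fit.params[1], fit.params[2]

        print(f"indirect effect a*b = {a * b:.3f}, direct effect c' = {c_direct:.3f}")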

  18. Bifurcation analysis of dengue transmission model in Baguio City, Philippines

    Science.gov (United States)

    Libatique, Criselda P.; Pajimola, Aprimelle Kris J.; Addawe, Joel M.

    2017-11-01

    In this study, we formulate a deterministic model for the transmission dynamics of dengue fever in Baguio City, Philippines. We analyzed the existence of the equilibria of the dengue model and obtained conditions for the existence of the equilibrium states. Stability analysis is carried out for the disease-free equilibrium. We showed that the system becomes stable under certain conditions on the parameters. Taking a particular parameter and using centre manifold theory, the proposed model demonstrates a bifurcation phenomenon. We performed numerical simulations to verify the analytical results.
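
    For illustration, a minimal host-vector model of the kind described can be simulated with SciPy. The parameter values and the square-root form of the reproduction number below are generic textbook choices, not the Baguio City estimates.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Illustrative host-vector parameters (fractions of each population).
        beta_h, beta_v = 0.30, 0.25   # transmission: mosquito->human, human->mosquito
        gamma, mu_v    = 0.10, 0.071  # human recovery rate, mosquito mortality rate

        def dengue(t, s):
            Sh, Ih, Sv, Iv = s
            return [-beta_h * Sh * Iv,
                     beta_h * Sh * Iv - gamma * Ih,
                     mu_v - beta_v * Sv * Ih - mu_v * Sv,   # constant recruitment
                     beta_v * Sv * Ih - mu_v * Iv]

        # One common form of R0 for vector-borne transmission.
        R0 = np.sqrt(beta_h * beta_v / (gamma * mu_v))
        sol = solve_ivp(dengue, (0, 365), [0.99, 0.01, 0.95, 0.05])
        print(f"R0 = {R0:.2f}; final infectious human fraction = {sol.y[1, -1]:.4f}")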

  19. Modal Analysis and Model Correlation of the Mir Space Station

    Science.gov (United States)

    Kim, Hyoung M.; Kaouk, Mohamed

    2000-01-01

    This paper will discuss on-orbit dynamic tests, modal analysis, and model refinement studies performed as part of the Mir Structural Dynamics Experiment (MiSDE). Mir is the Russian permanently manned Space Station whose construction first started in 1986. The MiSDE was sponsored by the NASA International Space Station (ISS) Phase 1 Office and was part of the Shuttle-Mir Risk Mitigation Experiment (RME). One of the main objectives for MiSDE is to demonstrate the feasibility of performing on-orbit modal testing on large space structures to extract modal parameters that will be used to correlate mathematical models. The experiment was performed over a one-year span on the Mir-alone and Mir with a Shuttle docked. A total of 45 test sessions were performed including: Shuttle and Mir thruster firings, Shuttle-Mir and Progress-Mir dockings, crew exercise and pushoffs, and ambient noise during night-to-day and day-to-night orbital transitions. Test data were recorded with a variety of existing and new instrumentation systems that included: the MiSDE Mir Auxiliary Sensor Unit (MASU), the Space Acceleration Measurement System (SAMS), the Russian Mir Structural Dynamic Measurement System (SDMS), the Mir and Shuttle Inertial Measurement Units (IMUs), and the Shuttle payload bay video cameras. Modal analysis was performed on the collected test data to extract modal parameters, i.e. frequencies, damping factors, and mode shapes. A special time-domain modal identification procedure was used on free-decay structural responses. The results from this study show that modal testing and analysis of large space structures is feasible within operational constraints. Model refinements were performed on both the Mir alone and the Shuttle-Mir mated configurations. The design sensitivity approach was used for refinement, which adjusts structural properties in order to match analytical and test modal parameters. To verify the refinement results, the analytical responses calculated using the refined models were compared with the measured test data.

  20. Cost-effectiveness analysis of countermeasures using accident consequence assessment models

    International Nuclear Information System (INIS)

    Alonso, A.; Gallego, E.

    1987-01-01

    In the event of a large release of radionuclides from a nuclear power plant, protective actions for the population potentially affected must be implemented. Cost-effectiveness analysis will be useful to define the countermeasures and the criteria needed to implement them. This paper shows the application of Accident Consequence Assessment (ACA) models to cost-effectiveness analysis of emergency and long-term countermeasures, making use of the different relationships between dose, contamination levels, affected areas and population distribution, included in such a model. The procedure is illustrated with the new Melcor Accident Consequence Code System (MACCS 1.3), developed at Sandia National Laboratories (USA), for a fixed accident scenario. Different alternative actions are evaluated with regard to their radiological and economical impact, searching for an 'optimum' strategy. (author)

  1. Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models

    Science.gov (United States)

    Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.

    2014-01-01

    This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based "local" methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative "bucket-style" hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
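
    The core DELSA idea, derivative-based local sensitivities evaluated at many points of the parameter space, fits in a few lines for a toy two-parameter model. The normalization by prior parameter variances follows the spirit of the method; the model and scaling choices here are illustrative.

        import numpy as np

        def model(theta):
            """Toy nonlinear reservoir: output depends on two parameters."""
            k, n = theta
            return k * np.tanh(n) + k * n ** 2

        rng = np.random.default_rng(3)
        samples = rng.uniform([0.1, 0.1], [2.0, 2.0], size=(1000, 2))

        eps = 1e-4
        sens = np.empty_like(samples)
        for i, theta in enumerate(samples):
            for j in range(2):               # central finite-difference derivative
                d = np.zeros(2)
                d[j] = eps
                sens[i, j] = (model(theta + d) - model(theta - d)) / (2 * eps)

        # First-order DELSA-style measure: each parameter's variance contribution,
        # normalized at every sample point (scaling assumptions are illustrative).
        var_j = (sens ** 2) * samples.var(axis=0)
        delsa = var_j / var_j.sum(axis=1, keepdims=True)
        print("median importance:", np.median(delsa, axis=0))
        print("10-90% range, parameter 1:", np.percentile(delsa[:, 0], [10, 90]))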

  2. Modeling, Testing, and Characteristic Analysis of a Planetary Flywheel Inerter

    Directory of Open Access Journals (Sweden)

    Zheng Ge

    2018-01-01

    Full Text Available We propose the planetary flywheel inerter, which is a new type of ball screw inerter. A planetary flywheel consists of several planetary gears mounted on a flywheel bracket. When the flywheel bracket is driven by a screw and rotating, each planetary gear meshing with an outer ring gear generates a compound motion composed of revolution and rotation. Theoretical analysis shows that the output force of the planetary flywheel inerter is proportional to the relative acceleration between its two terminals. Optimizing the gear ratio of the planetary gears to the ring gear allows the planetary flywheel to be lighter than its traditional counterpart, without any loss of inertance. According to the structure of the planetary flywheel inerter, nonlinear factors of the inerter are analyzed, and a nonlinear dynamical model of the inerter is established. Then the parameters in the model are identified and the accuracy of the model is validated by experiment. Theoretical analysis and experimental data show that the dynamical characteristics of a planetary flywheel inerter and those of a traditional flywheel inerter are basically the same. It is concluded that a planetary flywheel can completely replace a traditional flywheel, making the inerter lighter.
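
    In equations, the inerter's defining property and a commonly quoted ball-screw inertance read as follows; the planetary generalization in the last line is only an illustrative form (the ratio r, planet count N, and inertias J_c, J_p are assumed symbols, not the paper's notation):

        % Defining property of any inerter: force proportional to the
        % relative acceleration across its two terminals.
        F = b\,(\ddot{x}_1 - \ddot{x}_2)

        % Commonly quoted inertance of a ball-screw inerter with screw
        % lead p and flywheel moment of inertia J:
        b = \left(\frac{2\pi}{p}\right)^{2} J

        % Illustrative planetary form: carrier inertia J_c plus N planets
        % of inertia J_p sped up by an effective ratio r (assumed symbols):
        b \approx \left(\frac{2\pi}{p}\right)^{2}\left(J_c + r^{2} N J_p\right)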

  3. Meta-analysis a structural equation modeling approach

    CERN Document Server

    Cheung, Mike W-L

    2015-01-01

    Presents a novel approach to conducting meta-analysis using structural equation modeling. Structural equation modeling (SEM) and meta-analysis are two powerful statistical methods in the educational, social, behavioral, and medical sciences. They are often treated as two unrelated topics in the literature. This book presents a unified framework on analyzing meta-analytic data within the SEM framework, and illustrates how to conduct meta-analysis using the metaSEM package in the R statistical environment. Meta-Analysis: A Structural Equation Modeling Approach begins by introducing the impo

  4. Prediction Model of Collapse Risk Based on Information Entropy and Distance Discriminant Analysis Method

    Directory of Open Access Journals (Sweden)

    Hujun He

    2017-01-01

    Full Text Available The prediction and risk classification of collapse is an important issue in the process of highway construction in mountainous regions. Based on the principles of information entropy and Mahalanobis distance discriminant analysis, we have produced a collapse hazard prediction model. We used the entropy measure method to reduce the influence indexes of the collapse activity and extracted the nine main indexes affecting collapse activity as the discriminant factors of the distance discriminant analysis model (i.e., slope shape, aspect, gradient, and height, along with exposure of the structural face, stratum lithology, relationship between weakness face and free face, vegetation cover rate, and degree of rock weathering). We employ post-earthquake collapse data relating to construction of the Yingxiu-Wolong highway, Hanchuan County, China, as training samples for analysis. The results were analyzed using the back-substitution estimation method, showing high accuracy with no errors, and matching the prediction results of the uncertainty-measure method. Results show that the classification model based on information entropy and distance discriminant analysis achieves the purpose of index optimization and has excellent performance, high prediction accuracy, and a zero false-positive rate. The model can be used as a tool for future evaluation of collapse risk.
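
    A bare-bones version of the distance discriminant step, assuming Python with NumPy: class means and a pooled covariance are estimated from training samples, and a new site is assigned to the class with the smallest Mahalanobis distance. The paper's entropy-based index weighting is omitted, and the data are synthetic.

        import numpy as np

        rng = np.random.default_rng(4)

        # Toy training data: 9 standardized indexes per site, 3 risk classes.
        classes = {}
        for label, shift in (("low", -1.0), ("medium", 0.0), ("high", 1.0)):
            classes[label] = rng.normal(shift, 1.0, size=(40, 9))

        # Pooled within-class covariance, as in linear discriminant analysis.
        pooled = sum(np.cov(X, rowvar=False) * (len(X) - 1) for X in classes.values())
        pooled /= sum(len(X) for X in classes.values()) - len(classes)
        inv_cov = np.linalg.inv(pooled)
        means = {k: X.mean(axis=0) for k, X in classes.items()}

        def classify(x):
            d2 = {k: (x - m) @ inv_cov @ (x - m) for k, m in means.items()}
            return min(d2, key=d2.get)        # smallest Mahalanobis distance wins

        print(classify(rng.normal(0.9, 1.0, size=9)))   # likely "high"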

  5. A framework for 2-stage global sensitivity analysis of GastroPlus™ compartmental models.

    Science.gov (United States)

    Scherholz, Megerle L; Forder, James; Androulakis, Ioannis P

    2018-04-01

    Parameter sensitivity and uncertainty analysis for physiologically based pharmacokinetic (PBPK) models are becoming an important consideration for regulatory submissions, requiring further evaluation to establish the need for global sensitivity analysis. To demonstrate the benefits of an extensive analysis, global sensitivity was implemented for the GastroPlus™ model, a well-known commercially available platform, using four example drugs: acetaminophen, risperidone, atenolol, and furosemide. The capabilities of GastroPlus were expanded by developing an integrated framework to automate the GastroPlus graphical user interface with AutoIt and to execute the sensitivity analysis in MATLAB®. Global sensitivity analysis was performed in two stages, using the Morris method to screen over 50 parameters for significant factors, followed by quantitative assessment of variability using Sobol's sensitivity analysis. The 2-staged approach significantly reduced computational cost for the larger model without sacrificing interpretation of model behavior, showing that the sensitivity results were well aligned with the biopharmaceutical classification system. Both methods detected nonlinearities and parameter interactions that would have otherwise been missed by local approaches. Future work includes further exploration of how the input domain influences the calculated global sensitivity measures as well as extending the framework to consider a whole-body PBPK model.
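
    The same two-stage screening-then-quantification pattern can be sketched with the open-source SALib package in Python (an assumption on my part; the paper's own framework drives GastroPlus via AutoIt and MATLAB). The model function and parameter names below are placeholders.

        import numpy as np
        from SALib.sample import morris as morris_sample
        from SALib.sample import saltelli
        from SALib.analyze import morris as morris_analyze
        from SALib.analyze import sobol

        # Stand-in for the PBPK simulation: any function mapping a matrix of
        # parameter sets to an output of interest (e.g., Cmax).
        def model(X):
            return X[:, 0] ** 2 + X[:, 1] * X[:, 2] + 0.1 * X[:, 3]

        problem = {
            "num_vars": 4,
            "names": ["clearance", "ka", "fup", "dose"],   # illustrative names
            "bounds": [[0.1, 1.0]] * 4,
        }

        # Stage 1: Morris screening to discard unimportant factors cheaply.
        Xm = morris_sample.sample(problem, N=100, num_levels=4)
        res_m = morris_analyze.analyze(problem, Xm, model(Xm), num_levels=4)
        print("mu* :", res_m["mu_star"])

        # Stage 2: Sobol indices on the (here: all) retained factors.
        Xs = saltelli.sample(problem, 1024)
        res_s = sobol.analyze(problem, model(Xs))
        print("S1  :", res_s["S1"])
        print("ST  :", res_s["ST"])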

  6. Analysis and Modeling of Vapor Recompressive Distillation Using ASPEN-HYSYS

    Directory of Open Access Journals (Sweden)

    Cinthujaa C. Sivanantha

    2011-10-01

    Full Text Available HYSYS process modeling software was used to analyze the effect of reflux ratio and number of trays on the purity of ethylene in a vapor recompression distillation column and in an ordinary distillation column. Analysis of the data showed that with increased pressure a higher reflux ratio is needed to obtain a purity of 99.9% for both towers. In addition, the number of trays was varied to examine its effect on purity. The analysis showed that purity increases with the number of trays.

  7. Advanced spatial metrics analysis in cellular automata land use and cover change modeling

    International Nuclear Information System (INIS)

    Zamyatin, Alexander; Cabral, Pedro

    2011-01-01

    This paper proposes an approach for a more effective definition of cellular automata transition rules for landscape change modeling using an advanced spatial metrics analysis. This approach considers a four-stage methodology based on: (i) the search for the appropriate spatial metrics with minimal correlations; (ii) the selection of the appropriate neighborhood size; (iii) the selection of the appropriate technique for spatial metrics application; and (iv) the analysis of the contribution level of each spatial metric for joint use. The case study uses an initial set of 7 spatial metrics of which 4 are selected for modeling. Results show a better model performance when compared to modeling without any spatial metrics or with the initial set of 7 metrics.

  8. Modeled hydrologic metrics show links between hydrology and the functional composition of stream assemblages.

    Science.gov (United States)

    Patrick, Christopher J; Yuan, Lester L

    2017-07-01

    Flow alteration is widespread in streams, but current understanding of the effects of differences in flow characteristics on stream biological communities is incomplete. We tested hypotheses about the effect of variation in hydrology on stream communities by using generalized additive models to relate watershed information to the values of different flow metrics at gauged sites. Flow models accounted for 54-80% of the spatial variation in flow metric values among gauged sites. We then used these models to predict flow metrics in 842 ungauged stream sites in the mid-Atlantic United States that were sampled for fish, macroinvertebrates, and environmental covariates. Fish and macroinvertebrate assemblages were characterized in terms of a suite of metrics that quantified aspects of community composition, diversity, and functional traits that were expected to be associated with differences in flow characteristics. We related modeled flow metrics to biological metrics in a series of stressor-response models. Our analyses identified both drying and base flow instability as explaining 30-50% of the observed variability in fish and invertebrate community composition. Variations in community composition were related to variations in the prevalence of dispersal traits in invertebrates and trophic guilds in fish. The results demonstrate that we can use statistical models to predict hydrologic conditions at bioassessment sites, which, in turn, we can use to estimate relationships between flow conditions and biological characteristics. This analysis provides an approach to quantify the effects of spatial variation in flow metrics using readily available biomonitoring data. © 2017 by the Ecological Society of America.

  9. Integrating model checking with HiP-HOPS in model-based safety analysis

    International Nuclear Information System (INIS)

    Sharvia, Septavera; Papadopoulos, Yiannis

    2015-01-01

    The ability to perform an effective and robust safety analysis on the design of modern safety-critical systems is crucial. Model-based safety analysis (MBSA) has been introduced in recent years to support the assessment of complex system design by focusing on the system model as the central artefact, and by automating the synthesis and analysis of failure-extended models. Model checking and failure logic synthesis and analysis (FLSA) are two prominent MBSA paradigms. Extensive research has placed emphasis on the development of these techniques, but discussion on their integration remains limited. In this paper, we propose a technique in which model checking and Hierarchically Performed Hazard Origin and Propagation Studies (HiP-HOPS) – an advanced FLSA technique – can be applied synergistically with benefit for the MBSA process. The application of the technique is illustrated through an example of a brake-by-wire system. - Highlights: • We propose a technique to integrate HiP-HOPS and model checking. • State machines can be systematically constructed from HiP-HOPS. • The strengths of different MBSA techniques are combined. • Demonstrated through modeling and analysis of a brake-by-wire system. • Root cause analysis is automated and system dynamic behaviors analyzed and verified

  10. Elastic-plastic analysis of AS4/PEEK composite laminate using a one-parameter plasticity model

    Science.gov (United States)

    Sun, C. T.; Yoon, K. J.

    1992-01-01

    A one-parameter plasticity model was shown to adequately describe the plastic deformation of AS4/PEEK (APC-2) unidirectional thermoplastic composite. This model was verified further for unidirectional and laminated composite panels with and without a hole. The elastic-plastic stress-strain relations of coupon specimens were measured and compared with those predicted by the finite element analysis using the one-parameter plasticity model. The results show that the one-parameter plasticity model is suitable for the analysis of elastic-plastic deformation of AS4/PEEK composite laminates.

  11. Nonlinear analysis of AS4/PEEK thermoplastic composite laminate using a one parameter plasticity model

    Science.gov (United States)

    Sun, C. T.; Yoon, K. J.

    1990-01-01

    A one-parameter plasticity model was shown to adequately describe the orthotropic plastic deformation of AS4/PEEK (APC-2) unidirectional thermoplastic composite. This model was verified further for unidirectional and laminated composite panels with and without a hole. The nonlinear stress-strain relations were measured and compared with those predicted by the finite element analysis using the one-parameter elastic-plastic constitutive model. The results show that the one-parameter orthotropic plasticity model is suitable for the analysis of elastic-plastic deformation of AS4/PEEK composite laminates.

  12. Use of multivariate extensions of generalized linear models in the analysis of data from clinical trials

    OpenAIRE

    ALONSO ABAD, Ariel; Rodriguez, O.; TIBALDI, Fabian; CORTINAS ABRAHANTES, Jose

    2002-01-01

    In medical studies, categorical endpoints are quite common. Even though some models for handling these multicategorical variables have been developed, their use is not yet widespread. This work shows an application of multivariate generalized linear models to the analysis of clinical trials data. After a theoretical introduction, models for ordinal and nominal responses are applied and the main results are discussed. multivariate analysis; multivariate logistic regression; multicategor...

  13. Rasch model analysis of the Depression, Anxiety and Stress Scales (DASS).

    Science.gov (United States)

    Shea, Tracey L; Tennant, Alan; Pallant, Julie F

    2009-05-09

    There is a growing awareness of the need for easily administered, psychometrically sound screening tools to identify individuals with elevated levels of psychological distress. Although support has been found for the psychometric properties of the Depression, Anxiety and Stress Scales (DASS) using classical test theory approaches it has not been subjected to Rasch analysis. The aim of this study was to use Rasch analysis to assess the psychometric properties of the DASS-21 scales, using two different administration modes. The DASS-21 was administered to 420 participants with half the sample responding to a web-based version and the other half completing a traditional pencil-and-paper version. Conformity of DASS-21 scales to a Rasch partial credit model was assessed using the RUMM2020 software. To achieve adequate model fit it was necessary to remove one item from each of the DASS-21 subscales. The reduced scales showed adequate internal consistency reliability, unidimensionality and freedom from differential item functioning for sex, age and mode of administration. Analysis of all DASS-21 items combined did not support its use as a measure of general psychological distress. A scale combining the anxiety and stress items showed satisfactory fit to the Rasch model after removal of three items. The results provide support for the measurement properties, internal consistency reliability, and unidimensionality of three slightly modified DASS-21 scales, across two different administration methods. The further use of Rasch analysis on the DASS-21 in larger and broader samples is recommended to confirm the findings of the current study.
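
    As a toy illustration of Rasch-family estimation (the dichotomous model rather than the partial credit model used for the DASS-21, and joint rather than conditional maximum likelihood), the following sketch recovers item difficulties from simulated responses.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(5)

        # Simulate dichotomous responses under the Rasch model:
        # P(X=1) = exp(theta - b) / (1 + exp(theta - b)).
        n_persons, n_items = 100, 5
        theta_true = rng.normal(0.0, 1.0, n_persons)     # person locations
        b_true = np.linspace(-1.5, 1.5, n_items)         # item difficulties
        p = 1.0 / (1.0 + np.exp(-(theta_true[:, None] - b_true[None, :])))
        X = (rng.uniform(size=p.shape) < p).astype(float)

        def neg_loglik(params):
            th, b = params[:n_persons], params[n_persons:]
            logit = th[:, None] - b[None, :]
            return -(X * logit - np.log1p(np.exp(logit))).sum()

        fit = minimize(neg_loglik, np.zeros(n_persons + n_items), method="L-BFGS-B")
        b_hat = fit.x[n_persons:]
        b_hat -= b_hat.mean()            # fix the scale origin for identifiability
        print("estimated item difficulties:", b_hat.round(2))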

  14. Rasch model analysis of the Depression, Anxiety and Stress Scales (DASS)

    Science.gov (United States)

    Shea, Tracey L; Tennant, Alan; Pallant, Julie F

    2009-01-01

    Background There is a growing awareness of the need for easily administered, psychometrically sound screening tools to identify individuals with elevated levels of psychological distress. Although support has been found for the psychometric properties of the Depression, Anxiety and Stress Scales (DASS) using classical test theory approaches it has not been subjected to Rasch analysis. The aim of this study was to use Rasch analysis to assess the psychometric properties of the DASS-21 scales, using two different administration modes. Methods The DASS-21 was administered to 420 participants with half the sample responding to a web-based version and the other half completing a traditional pencil-and-paper version. Conformity of DASS-21 scales to a Rasch partial credit model was assessed using the RUMM2020 software. Results To achieve adequate model fit it was necessary to remove one item from each of the DASS-21 subscales. The reduced scales showed adequate internal consistency reliability, unidimensionality and freedom from differential item functioning for sex, age and mode of administration. Analysis of all DASS-21 items combined did not support its use as a measure of general psychological distress. A scale combining the anxiety and stress items showed satisfactory fit to the Rasch model after removal of three items. Conclusion The results provide support for the measurement properties, internal consistency reliability, and unidimensionality of three slightly modified DASS-21 scales, across two different administration methods. The further use of Rasch analysis on the DASS-21 in larger and broader samples is recommended to confirm the findings of the current study. PMID:19426512

  15. Communicating systems with UML 2 modeling and analysis of network protocols

    CERN Document Server

    Barrera, David Garduno

    2013-01-01

    This book gives a practical approach to modeling and analyzing communication protocols using UML 2. Network protocols are usually presented from a point of view focusing on partial mechanisms and starting models. This book aims at giving the basis needed for anybody to model and validate their own protocols. It follows a practical approach and gives many examples for the description and analysis of well-known basic network mechanisms for protocols. The book first shows how to describe and validate the main protocol issues (such as synchronization problems, client-server interactions, layer

  16. Conclusion of LOD-score analysis for family data generated under two-locus models.

    Science.gov (United States)

    Dizier, M H; Babron, M C; Clerget-Darpoux, F

    1996-06-01

    The power to detect linkage by the LOD-score method is investigated here for diseases that depend on the effects of two genes. The classical strategy is, first, to detect a major-gene (MG) effect by segregation analysis and, second, to seek linkage with genetic markers by the LOD-score method using the MG parameters. We already showed that segregation analysis can lead to evidence for an MG effect for many two-locus models, with the estimates of the MG parameters being very different from those of the two genes involved in the disease. We show here that use of these MG parameter estimates in the LOD-score analysis may lead to a failure to detect linkage for some two-locus models. For these models, use of the sib-pair method gives a non-negligible increase of power to detect linkage. The linkage-homogeneity test among subsamples differing in the familial disease distribution provides evidence of parameter misspecification when the MG parameters are used. Moreover, for most of the models, use of the MG parameters in LOD-score analysis leads to a large bias in estimation of the recombination fraction and sometimes also to a rejection of linkage for the true recombination fraction. A final important point is that strong evidence of an MG effect, obtained by segregation analysis, does not necessarily imply that linkage will be detected for at least one of the two genes, even with the true parameters and with a close informative marker.

  17. Conclusions of LOD-score analysis for family data generated under two-locus models

    Energy Technology Data Exchange (ETDEWEB)

    Dizier, M.H.; Babron, M.C.; Clerget-Darpoux, F. [Unite de Recherches d'Epidemiologie Genetique, Paris (France)

    1996-06-01

    The power to detect linkage by the LOD-score method is investigated here for diseases that depend on the effects of two genes. The classical strategy is, first, to detect a major-gene (MG) effect by segregation analysis and, second, to seek linkage with genetic markers by the LOD-score method using the MG parameters. We already showed that segregation analysis can lead to evidence for an MG effect for many two-locus models, with the estimates of the MG parameters being very different from those of the two genes involved in the disease. We show here that use of these MG parameter estimates in the LOD-score analysis may lead to a failure to detect linkage for some two-locus models. For these models, use of the sib-pair method gives a non-negligible increase of power to detect linkage. The linkage-homogeneity test among subsamples differing in the familial disease distribution provides evidence of parameter misspecification when the MG parameters are used. Moreover, for most of the models, use of the MG parameters in LOD-score analysis leads to a large bias in estimation of the recombination fraction and sometimes also to a rejection of linkage for the true recombination fraction. A final important point is that strong evidence of an MG effect, obtained by segregation analysis, does not necessarily imply that linkage will be detected for at least one of the two genes, even with the true parameters and with a close informative marker. 17 refs., 3 tabs.

  18. Thermal-Hydraulics analysis of pressurized water reactor core by using single heated channel model

    Directory of Open Access Journals (Sweden)

    Reza Akbari

    2017-08-01

    Full Text Available Thermal hydraulics of a nuclear reactor, as a basis of reactor safety, has a very important role in reactor design and control. The thermal-hydraulic analysis provides input data to the reactor-physics analysis, whereas the latter gives information about the distribution of heat sources, which is needed to perform the thermal-hydraulic analysis. In this study, a single heated channel model, a very fast model for predicting the thermal-hydraulic behavior of a pressurized water reactor core, has been developed. For verifying the results of this model, we used the RELAP5 code, a thermal-hydraulics code approved by the US nuclear regulator. The results of the developed single heated channel model have been checked against RELAP5 results for a WWER-1000. This comparison shows the capability of the single heated channel model for predicting the thermal-hydraulic behavior of a reactor core.
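
    The essence of a single heated channel model is an axial energy balance on the coolant. A minimal sketch under a chopped-cosine power shape follows; all numbers are illustrative, not WWER-1000 design data.

        import numpy as np

        # Axial coolant temperature rise in a single heated channel:
        # m_dot * cp * dT/dz = q'(z), with a cosine linear heat rate shape.
        L     = 3.5        # heated length, m (illustrative)
        q0    = 40e3       # peak linear heat rate, W/m (illustrative)
        m_dot = 0.3        # channel mass flow rate, kg/s
        cp    = 5500.0     # coolant specific heat, J/(kg K)
        T_in  = 290.0      # inlet temperature, deg C

        z = np.linspace(0.0, L, 200)
        q_lin = q0 * np.cos(np.pi * (z - L / 2) / L)     # chopped-cosine shape
        T = T_in + np.cumsum(q_lin) * (z[1] - z[0]) / (m_dot * cp)

        print(f"outlet temperature ~ {T[-1]:.1f} C, rise = {T[-1] - T_in:.1f} K")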

  19. Contingency analysis modeling for superfund sites and other sources. Final report

    International Nuclear Information System (INIS)

    Christensen, D.; Kaiser, G.D.

    1993-01-01

    The report provides information on contingency modeling for a wide range of accidental release scenarios of hazardous air pollutants that might take place at Superfund and other sites. The scenarios are used to illustrate how atmospheric dispersion models, including dense gas models, should be applied. Particular emphasis is placed on the input data needed for proper application of the models. Flow charts direct the user to specific sections where various scenarios are discussed. A checklist of items that should be considered before running a model is provided. Several examples specifically show how to apply the models so as to produce a credible analysis for a particular release scenario

  20. An analysis of single amino acid repeats as use case for application specific background models

    Directory of Open Access Journals (Sweden)

    Sykacek Peter

    2011-05-01

    Full Text Available Abstract Background Sequence analysis aims to identify biologically relevant signals against a backdrop of functionally meaningless variation. Increasingly, it is recognized that the quality of the background model directly affects the performance of analyses. State-of-the-art approaches rely on classical sequence models that are adapted to the studied dataset. Although performing well in the analysis of globular protein domains, these models break down in regions of stronger compositional bias or low complexity. While these regions are typically filtered, there is increasing anecdotal evidence of functional roles. This motivates an exploration of more complex sequence models and application-specific approaches for the investigation of biased regions. Results Traditional Markov-chains and application-specific regression models are compared using the example of predicting runs of single amino acids, a particularly simple class of biased regions. Cross-fold validation experiments reveal that the alternative regression models capture the multi-variate trends well, despite their low dimensionality and in contrast even to higher-order Markov-predictors. We show how the significance of unusual observations can be computed for such empirical models. The power of a dedicated model in the detection of biologically interesting signals is then demonstrated in an analysis identifying the unexpected enrichment of contiguous leucine-repeats in signal-peptides. Considering different reference sets, we show how the question examined actually defines what constitutes the 'background'. Results can thus be highly sensitive to the choice of appropriate model training sets. Conversely, the choice of reference data determines the questions that can be investigated in an analysis. Conclusions Using a specific case of studying biased regions as an example, we have demonstrated that the construction of application-specific background models is both necessary and
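
    As a minimal example of an application-specific background model, the sketch below trains a first-order Markov chain on a toy reference set and scores candidate segments by log-likelihood; unusually low scores flag compositionally biased regions such as single amino acid runs. The training data and smoothing choice are illustrative.

        import numpy as np

        AA = "ACDEFGHIKLMNPQRSTVWY"
        IDX = {a: i for i, a in enumerate(AA)}

        def train_markov1(seqs):
            """First-order Markov background: P(next residue | current residue)."""
            counts = np.ones((20, 20))               # add-one smoothing (assumed)
            for s in seqs:
                for a, b in zip(s, s[1:]):
                    counts[IDX[a], IDX[b]] += 1
            return counts / counts.sum(axis=1, keepdims=True)

        def loglik(seq, P):
            return sum(np.log(P[IDX[a], IDX[b]]) for a, b in zip(seq, seq[1:]))

        # Toy training set standing in for a real reference proteome.
        P = train_markov1(["MKTAYIAKQR", "AVILMFWYDE", "GSTNQHKRED"])

        # Score candidate regions against the background; unusually low values
        # flag biased segments such as single amino acid repeats.
        print(loglik("LLLLLL", P), "vs", loglik("AKQRMT", P))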

  1. A catalog of automated analysis methods for enterprise models.

    Science.gov (United States)

    Florez, Hector; Sánchez, Mario; Villalobos, Jorge

    2016-01-01

    Enterprise models are created for documenting and communicating the structure and state of Business and Information Technologies elements of an enterprise. After models are completed, they are mainly used to support analysis. Model analysis is an activity typically based on human skills and due to the size and complexity of the models, this process can be complicated and omissions or miscalculations are very likely. This situation has fostered the research of automated analysis methods, for supporting analysts in enterprise analysis processes. By reviewing the literature, we found several analysis methods; nevertheless, they are based on specific situations and different metamodels; then, some analysis methods might not be applicable to all enterprise models. This paper presents the work of compilation (literature review), classification, structuring, and characterization of automated analysis methods for enterprise models, expressing them in a standardized modeling language. In addition, we have implemented the analysis methods in our modeling tool.

  2. Comparing sensitivity analysis methods to advance lumped watershed model identification and evaluation

    Directory of Open Access Journals (Sweden)

    Y. Tang

    2007-01-01

    Full Text Available This study seeks to identify sensitivity tools that will advance our understanding of lumped hydrologic models for the purposes of model improvement, calibration efficiency and improved measurement schemes. Four sensitivity analysis methods were tested: (1) local analysis using parameter estimation software (PEST), (2) regional sensitivity analysis (RSA), (3) analysis of variance (ANOVA), and (4) Sobol's method. The methods' relative efficiencies and effectiveness have been analyzed and compared. These four sensitivity methods were applied to the lumped Sacramento soil moisture accounting model (SAC-SMA) coupled with SNOW-17. Results from this study characterize model sensitivities for two medium sized watersheds within the Juniata River Basin in Pennsylvania, USA. Comparative results for the 4 sensitivity methods are presented for a 3-year time series with 1 h, 6 h, and 24 h time intervals. The results of this study show that model parameter sensitivities are heavily impacted by the choice of analysis method as well as the model time interval. Differences between the two adjacent watersheds also suggest strong influences of local physical characteristics on the sensitivity methods' results. This study also contributes a comprehensive assessment of the repeatability, robustness, efficiency, and ease-of-implementation of the four sensitivity methods. Overall, ANOVA and Sobol's method were shown to be superior to RSA and PEST. Relative to one another, ANOVA has reduced computational requirements and Sobol's method yielded more robust sensitivity rankings.

  3. Intercity Travel Demand Analysis Model

    OpenAIRE

    Ming Lu; Hai Zhu; Xia Luo; Lei Lei

    2014-01-01

    It is well known that intercity travel is an important component of travel demand which belongs to short distance corridor travel. The conventional four-step method is no longer suitable for short distance corridor travel demand analysis for the time spent on urban traffic has a great impact on traveler's main mode choice. To solve this problem, the author studied the existing intercity travel demand analysis model, then improved it based on the study, and finally established a combined model...

  4. Uncertainty analysis of environmental models

    International Nuclear Information System (INIS)

    Monte, L.

    1990-01-01

    In the present paper, an evaluation of the output uncertainty of an environmental model for assessing the transfer of 137Cs and 131I in the human food chain is carried out on the basis of a statistical analysis of data reported in the literature. The uncertainty analysis offers the opportunity of obtaining valuable information about the uncertainty of models predicting the migration of non-radioactive substances in the environment, mainly in relation to dry and wet deposition

  5. Structural Modeling and Analysis on Dynamic Characteristics of Antenna Pedestal in Airborne SAR

    Directory of Open Access Journals (Sweden)

    He Li-ping

    2012-06-01

    Full Text Available Finite element modeling and the structural dynamic characteristics of the antenna pedestal in an airborne SAR were studied in this paper. The finite element model of the antenna pedestal was set up on the basis of structural dynamic theory; the key technologies of dynamic simulation were then pointed out, and modal analysis and transient analysis were carried out. Simulation results show that the dynamic characteristics of the antenna pedestal can meet the requirements of servo bandwidth and structural strength. The fast finite element modeling and simulation method proposed in this paper is of great significance to the weight-reduction design of antenna pedestals in airborne SAR.

  6. Nephrus: expert system model in intelligent multilayers for evaluation of urinary system based on scintigraphic image analysis

    International Nuclear Information System (INIS)

    Silva, Jorge Wagner Esteves da; Schirru, Roberto; Boasquevisque, Edson Mendes

    1999-01-01

    Renal function can be measured noninvasively with radionuclides in an extremely safe way compared to other diagnostic techniques. Nevertheless, because radioactive materials are used in this procedure, it is necessary to maximize its benefits; all efforts are therefore justifiable in the development of data analysis support tools for this diagnostic modality. The objective of this work is to develop a prototype for a system model based on Artificial Intelligence devices able to perform functions related to scintigraphic image analysis of the urinary system. Rules used by medical experts in the analysis of images obtained with 99m Tc+DTPA and/or 99m Tc+DMSA were modeled, and a Neural Network diagnosis technique was implemented. Special attention was given to designing the program's user interface. Human Factors Engineering techniques were taken into account, allowing friendliness and robustness. The image segmentation adopts a model based on ideal ROIs, which represent the normal anatomic concept for urinary system organs. Results obtained using Artificial Neural Networks for qualitative image analysis, together with the constructed knowledge model, show the feasibility of an Artificial Intelligence implementation that uses the inherent abilities of each technique in medical diagnostic image analysis. (author)

  7. Analysis of CP^{N-1} sigma models via projective structures

    International Nuclear Information System (INIS)

    Post, S; Grundland, A M

    2012-01-01

    This paper represents a study of projector solutions to the Euclidean CP^{N-1} sigma model in two dimensions and their associated surfaces immersed in the su(N) Lie algebra. Any solution of the CP^{N-1} sigma model defined on the extended complex plane with finite action can be written as a raising operator acting on a holomorphic one. Here the proof is formulated in terms of rank-1 projectors, so it is explicitly gauge invariant. We apply these results to the analysis of surfaces associated with the CP^{N-1} models defined using the generalized Weierstrass formula for immersion. We show that the surfaces are conformally parametrized by the Lagrangian density, with finite area equal to the action of the model, and express several other geometrical characteristics of the surface in terms of the physical quantities of the model. Finally, we provide necessary and sufficient conditions that a surface be related to a CP^{N-1} sigma model

  8. Application of the Periodic Average System Model in Dam Deformation Analysis

    Directory of Open Access Journals (Sweden)

    Yueqian Shen

    2015-01-01

    Full Text Available Dams are among the most important hydraulic engineering facilities used for water supply, flood control, and hydroelectric power. Monitoring of dams is crucial since deformation may occur. How to obtain deformation information and then judge the safety condition is a key and difficult problem in the dam deformation monitoring field. This paper proposes the periodic average system model and introduces the concept of “settlement activity” for the dam deformation problem. Long-term deformation monitoring was carried out at a pumped-storage power station; this model, combined with settlement activity, is used for single-point deformation analysis, and the whole settlement activity profile is then drawn by clustering analysis. Considering the cumulative settlement value of every point, the dam deformation trend is analyzed in an intuitive way, realizing an analysis mode that combines single points with multiple points. The results show that the key deformation information of the dam can be easily grasped by applying the periodic average system model combined with the distribution diagram of settlement activity. Above all, the ideas of this research provide an effective method for dam deformation analysis.

  9. PVeStA: A Parallel Statistical Model Checking and Quantitative Analysis Tool

    KAUST Repository

    AlTurki, Musab

    2011-01-01

    Statistical model checking is an attractive formal analysis method for probabilistic systems such as, for example, cyber-physical systems which are often probabilistic in nature. This paper is about drastically increasing the scalability of statistical model checking, and making such scalability of analysis available to tools like Maude, where probabilistic systems can be specified at a high level as probabilistic rewrite theories. It presents PVeStA, an extension and parallelization of the VeStA statistical model checking tool [10]. PVeStA supports statistical model checking of probabilistic real-time systems specified as either: (i) discrete or continuous Markov Chains; or (ii) probabilistic rewrite theories in Maude. Furthermore, the properties that it can model check can be expressed in either: (i) PCTL/CSL, or (ii) the QuaTEx quantitative temporal logic. As our experiments show, the performance gains obtained from parallelization can be very high. © 2011 Springer-Verlag.

  10. Suffering by comparison: Twitter users' reactions to the Victoria's Secret Fashion Show.

    Science.gov (United States)

    Chrisler, Joan C; Fung, Kaitlin T; Lopez, Alexandra M; Gorman, Jennifer A

    2013-09-01

    Social comparison theory suggests that evaluating the self in comparison with others (e.g., peers, celebrities, models) can influence body image. Experimental studies that have tested effects of viewing idealized images in the media often show that women feel worse about themselves after seeing images that illustrate the beauty ideal. Twitter presents a naturally occurring opportunity to study viewers' reactions. An analysis was conducted of 977 tweets sent immediately before and during the 2011 Victoria's Secret Fashion Show that reference the show. Although the majority were idiosyncratic remarks, many tweets contain evidence of upward social comparisons to the fashion models. There were tweets about body image, eating disorders, weight, desires for food or alcohol, and thoughts about self-harm. The results support social comparison theory, and suggest that vulnerable viewers could experience negative affect, or even engage in harmful behaviors, during or after viewing the show or others like it. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Simulation modeling and analysis with Arena

    CERN Document Server

    Altiok, Tayfur

    2007-01-01

    Simulation Modeling and Analysis with Arena is a highly readable textbook which treats the essentials of the Monte Carlo discrete-event simulation methodology, and does so in the context of the popular Arena simulation environment. It treats simulation modeling as an in-vitro laboratory that facilitates the understanding of complex systems and experimentation with what-if scenarios in order to estimate their performance metrics. The book contains chapters on the simulation modeling methodology and the underpinnings of discrete-event systems, as well as the relevant underlying probability, statistics, stochastic processes, input analysis, model validation and output analysis. All simulation-related concepts are illustrated in numerous Arena examples, encompassing production lines, manufacturing and inventory systems, transportation systems, and computer information systems in networked settings.· Introduces the concept of discrete event Monte Carlo simulation, the most commonly used methodology for modeli...

  12. A three-dimensional model for thermal analysis in a vanadium flow battery

    International Nuclear Information System (INIS)

    Zheng, Qiong; Zhang, Huamin; Xing, Feng; Ma, Xiangkun; Li, Xianfeng; Ning, Guiling

    2014-01-01

    Highlights: • A three-dimensional model for thermal analysis in a VFB has been developed. • A quasi-static thermal behavior and the temperature spatial distribution were shown. • Ohmic heat becomes dominant in heat generation if the applied current density is large enough. • A lower porosity or a faster flow shows a more uniform temperature distribution. • The model shows good prospects for heat and temperature management in a VFB. - Abstract: A three-dimensional model for thermal analysis has been developed to gain a better understanding of thermal behavior in a vanadium flow battery (VFB). The model is based on a comprehensive description of mass, momentum, charge and energy transport and conservation, combined with a global kinetic model for reactions involving all vanadium species. The emphasis in this paper is placed on the heat losses inside a cell. A quasi-static behavior of temperature and the temperature spatial distribution were characterized via the thermal model. The simulations also indicate that the heat generation exhibits a strong dependence on the applied current density. The reaction rate and the overpotential rise with an increased applied current density, so the electrochemical reaction heat rises proportionally and the activation heat rises at a parabolic rate. Based on Ohm's law, the ohmic heat rises at a parabolic rate when the applied current density increases. As a result, the determining heat source varies when the applied current density changes. While the relative contribution of the three types of heat depends on the cell materials and cell geometry, the regularities of heat losses can also be attained via the model. In addition, the electrochemical reaction heat and activation heat show little sensitivity to the porosity and flow rate, whereas an obvious increase of ohmic heat has been observed with the rise of the porosity. A lower porosity or a faster flow shows a better uniformity of temperature distribution in the cell.

  13. Modelling and analysis of global coal markets

    International Nuclear Information System (INIS)

    Trueby, Johannes

    2013-01-01

    The thesis comprises four interrelated essays featuring modelling and analysis of coal markets. Each of the four essays has a dedicated chapter in this thesis. Chapters 2 to 4 have, from a topical perspective, a backward-looking focus and deal with explaining recent market outcomes in the international coal trade. The findings of those essays may serve as guidance for assessing current coal market outcomes as well as expected market outcomes in the near to medium-term future. Chapter 5 has a forward-looking focus and builds a bridge between explaining recent market outcomes and projecting long-term market equilibria. Chapter 2, Strategic Behaviour in International Metallurgical Coal Markets, deals with market conduct of large exporters in the market of coals used in steel-making in the period 2008 to 2010. In this essay I analyse whether prices and trade-flows in the international market for metallurgical coals were subject to non-competitive conduct in the period 2008 to 2010. To do so, I develop mathematical programming models - a Stackelberg model, two varieties of a Cournot model, and a perfect competition model - for computing spatial equilibria in international resource markets. Results are analysed with various statistical measures to assess the prediction accuracy of the models. The results show that real market equilibria cannot be reproduced with a competitive model. However, real market outcomes can be accurately simulated with the non-competitive models, suggesting that market equilibria in the international metallurgical coal trade were subject to the strategic behaviour of coal exporters. Chapter 3 and chapter 4 deal with market power issues in the steam coal trade in the period 2006 to 2008. Steam coals are typically used to produce steam either for electricity generation or for heating purposes. In Chapter 3 we analyse market behaviour of key exporting countries in the steam coal trade. This chapter features the essay Market Structure Scenarios in
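
    To illustrate the Cournot variety described above (a sketch only; the thesis's actual models are spatial programming formulations fitted to real trade data), a two-exporter Cournot equilibrium with linear inverse demand and constant marginal costs can be computed by best-response iteration. All numbers below are invented.

```python
def cournot_equilibrium(a=100.0, b=1.0, costs=(10.0, 20.0), iters=200):
    """Best-response iteration for an n-firm Cournot game.

    Inverse demand: p = a - b * total_quantity; firm i maximizes
    (p - costs[i]) * q_i, giving best response
    q_i = max(0, (a - costs[i] - b * sum_{j != i} q_j) / (2 * b)).
    """
    q = [0.0] * len(costs)
    for _ in range(iters):
        for i, c in enumerate(costs):
            others = sum(q) - q[i]
            q[i] = max(0.0, (a - c - b * others) / (2.0 * b))
    price = a - b * sum(q)
    return q, price

quantities, price = cournot_equilibrium()
# Analytic check: q1 = (a - 2*c1 + c2)/(3b) = 100/3, q2 = (a - 2*c2 + c1)/(3b) = 70/3
print(quantities, price)
```

    With these illustrative parameters the iteration converges to the analytic duopoly equilibrium, which is the kind of computed market outcome that can then be compared against observed trade flows.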

  15. Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2009-01-01

    This contribution presents an overview of sensitivity analysis of simulation models, including the estimation of gradients. It covers classic designs and their corresponding (meta)models; namely, resolution-III designs including fractional-factorial two-level designs for first-order polynomial

  16. Next-generation sequence analysis of cancer xenograft models.

    Directory of Open Access Journals (Sweden)

    Fernando J Rossello

    Full Text Available Next-generation sequencing (NGS) studies in cancer are limited by the amount, quality and purity of tissue samples. In this situation, primary xenografts have proven to be useful preclinical models. However, the presence of mouse-derived stromal cells represents a technical challenge to their use in NGS studies. We examined this problem in an established primary xenograft model of small cell lung cancer (SCLC), a malignancy often diagnosed from small biopsy or needle aspirate samples. Using an in silico strategy that assigns reads according to species of origin, we prospectively compared NGS data from primary xenograft models with matched cell lines and with published datasets. We show here that low-coverage whole-genome analysis demonstrated remarkable concordance between published genome data and internal controls, despite the presence of mouse genomic DNA. Exome capture sequencing revealed that this enrichment procedure was highly species-specific, with less than 4% of reads aligning to the mouse genome. Human-specific expression profiling with RNA-Seq replicated array-based gene expression experiments, whereas mouse-specific transcript profiles correlated with published datasets from human cancer stroma. We conclude that primary xenografts represent a useful platform for complex NGS analysis in cancer research for tumours with limited sample resources, or those with prominent stromal cell populations.
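
    The species-of-origin assignment can be sketched as follows: align each read to both the human and mouse references and keep only reads that align clearly better to one genome. The helper below is hypothetical; in practice the scores would be parsed from the alignments (e.g., BAM files), and the margin is an illustrative choice.

```python
def assign_species(read_scores, margin=5):
    """Assign each read to 'human', 'mouse', or 'ambiguous'.

    read_scores maps read_id -> (human_alignment_score, mouse_alignment_score),
    where higher scores mean better alignments. Reads whose best scores differ
    by less than `margin` are flagged ambiguous and excluded downstream.
    """
    assignment = {}
    for read_id, (h, m) in read_scores.items():
        if h - m >= margin:
            assignment[read_id] = "human"
        elif m - h >= margin:
            assignment[read_id] = "mouse"
        else:
            assignment[read_id] = "ambiguous"
    return assignment

scores = {"r1": (60, 12), "r2": (20, 58), "r3": (40, 38)}
print(assign_species(scores))  # {'r1': 'human', 'r2': 'mouse', 'r3': 'ambiguous'}
```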

  17. Systematic analysis of DEMETER-like DNA glycosylase genes shows lineage-specific Smi-miR7972 involved in SmDML1 regulation in Salvia miltiorrhiza.

    Science.gov (United States)

    Li, Jiang; Li, Caili; Lu, Shanfa

    2018-05-08

    DEMETER-like DNA glycosylases (DMLs) initiate base excision repair-dependent DNA demethylation to regulate a wide range of biological processes in plants. Six putative SmDML genes, termed SmDML1-SmDML6, were identified from the genome of S. miltiorrhiza, an emerging model plant for Traditional Chinese Medicine (TCM) studies. Integrated analysis of gene structures, sequence features, conserved domains and motifs, phylogenetic relationships and differential expression showed the conservation and divergence of SmDMLs. SmDML1, SmDML2 and SmDML4 were significantly down-regulated by treatment with 5-Aza-dC, a general DNA methylation inhibitor, suggesting the involvement of SmDMLs in genome-wide DNA methylation change. SmDML1 was predicted and experimentally validated to be a target of Smi-miR7972. Computational analysis of forty whole-genome sequences and almost all available RNA-seq data from the Lamiids revealed that MIR7972s are distributed only in some plants of three orders, Lamiales, Solanales and Boraginales, and that the number of MIR7972 genes varies among species. This suggests that MIR7972 genes underwent expansion and loss during the evolution of some Lamiids species. Phylogenetic analysis of MIR7972s showed closer evolutionary relationships between MIR7972s in Boraginales and Solanales in comparison with Lamiales. These results provide a valuable resource for elucidating the DNA demethylation mechanism in S. miltiorrhiza.

  18. Evaluations of the CCFL and critical flow models in TRACE for PWR LBLOCA analysis

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Jung-Hua; Lin, Hao Tzu [National Tsing Hua Univ., HsinChu, Taiwan (China). Dept. of Engineering and System Science; Wang, Jong-Rong [Atomic Energy Council, Taoyuan County, Taiwan (China). Inst. of Nuclear Energy Research; Shih, Chunkuan [National Tsing Hua Univ., HsinChu, Taiwan (China). Inst. of Nuclear Engineering and Science

    2012-12-15

    This study aims to develop a Maanshan Pressurized Water Reactor (PWR) analysis model using the TRACE (TRAC/RELAP Advanced Computational Engine) code. By analyzing the Large Break Loss of Coolant Accident (LBLOCA) sequence, the results are compared with the Maanshan Final Safety Analysis Report (FSAR) data. The critical flow and Counter Current Flow Limitation (CCFL) play an important role in the overall performance of the TRACE LBLOCA prediction. Therefore, a sensitivity study on the discharge coefficients of the critical flow model and on CCFL modeling in different regions is also discussed. The current conclusions show that modeling CCFL in the downcomer has a more significant impact on the peak cladding temperature than modeling CCFL in the hot legs does. No CCFL phenomena occurred in the pressurizer surge line. The best value for the multipliers of the critical flow model is 0.5, with which TRACE consistently predicts the break flow rate in the LBLOCA analysis as reported in the FSAR. (orig.)

  19. A three-dimensional cohesive sediment transport model with data assimilation: Model development, sensitivity analysis and parameter estimation

    Science.gov (United States)

    Wang, Daosheng; Cao, Anzhou; Zhang, Jicai; Fan, Daidu; Liu, Yongzhi; Zhang, Yue

    2018-06-01

    Based on the theory of inverse problems, a three-dimensional sigma-coordinate cohesive sediment transport model with the adjoint data assimilation is developed. In this model, the physical processes of cohesive sediment transport, including deposition, erosion and advection-diffusion, are parameterized by corresponding model parameters. These parameters are usually poorly known and have traditionally been assigned empirically. By assimilating observations into the model, the model parameters can be estimated using the adjoint method; meanwhile, the data misfit between model results and observations can be decreased. The model developed in this work contains numerous parameters; therefore, it is necessary to investigate the parameter sensitivity of the model, which is assessed by calculating a relative sensitivity function and the gradient of the cost function with respect to each parameter. The results of parameter sensitivity analysis indicate that the model is sensitive to the initial conditions, inflow open boundary conditions, suspended sediment settling velocity and resuspension rate, while the model is insensitive to horizontal and vertical diffusivity coefficients. A detailed explanation of the pattern of sensitivity analysis is also given. In ideal twin experiments, constant parameters are estimated by assimilating 'pseudo' observations. The results show that the sensitive parameters are estimated more easily than the insensitive parameters. The conclusions of this work can provide guidance for the practical applications of this model to simulate sediment transport in the study area.
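
    The twin-experiment logic can be sketched with a toy forward model standing in for the full transport code: generate 'pseudo' observations with a known parameter, then recover it by descending the gradient of the model-data misfit. A real implementation would obtain the gradient efficiently from the adjoint model; a finite difference stands in for it here, and the settling model and numbers are invented.

```python
import numpy as np

def forward(settling_velocity, t):
    """Toy forward model: suspended concentration decaying by settling."""
    return np.exp(-settling_velocity * t)

t = np.linspace(0.0, 5.0, 50)
obs = forward(0.8, t)                  # 'pseudo' observations, true w_s = 0.8

def cost(ws):
    """Model-data misfit (the cost function to be minimized)."""
    return 0.5 * np.sum((forward(ws, t) - obs) ** 2)

ws, lr, eps = 0.2, 0.02, 1e-6          # first guess, step size, FD increment
for _ in range(200):
    grad = (cost(ws + eps) - cost(ws - eps)) / (2 * eps)  # stand-in for adjoint gradient
    ws -= lr * grad

print(f"estimated settling velocity: {ws:.3f} (true 0.8)")
```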

  20. Sensitivity analysis for thermo-hydraulics model of a Westinghouse type PWR. Verification of the simulation results

    Energy Technology Data Exchange (ETDEWEB)

    Farahani, Aref Zarnooshe [Islamic Azad Univ., Tehran (Iran, Islamic Republic of). Dept. of Nuclear Engineering, Science and Research Branch; Yousefpour, Faramarz [Nuclear Science and Technology Research Institute, Tehran (Iran, Islamic Republic of); Hoseyni, Seyed Mohsen [Islamic Azad Univ., Tehran (Iran, Islamic Republic of). Dept. of Basic Sciences; Islamic Azad Univ., Tehran (Iran, Islamic Republic of). Young Researchers and Elite Club

    2017-07-15

    Development of a steady-state model is the first step in nuclear safety analysis. The developed model should first be qualitatively analyzed; then a sensitivity analysis on the number of nodes is required for the models of the different systems to ensure the reliability of the obtained results. This contribution aims to show, through sensitivity analysis, the independence of the modeling results from the number of nodes in a qualified MELCOR model for a Westinghouse type pressurized water reactor plant. For this purpose, and to minimize user error, the nuclear analysis software SNAP is employed. Different sensitivity cases were developed by modifying the existing model and refining the nodes for the simulated systems, including the steam generators, the reactor coolant system, and the reactor core and its connecting flow paths. Comparing the obtained results with those of the original model shows no significant difference, which indicates that the results are independent of finer nodalization.

  1. A Numerical Procedure for Model Identifiability Analysis Applied to Enzyme Kinetics

    DEFF Research Database (Denmark)

    Van Daele, Timothy; Van Hoey, Stijn; Gernaey, Krist

    2015-01-01

    The proper calibration of models describing enzyme kinetics can be quite challenging. In the literature, different procedures are available to calibrate these enzymatic models in an efficient way. However, in most cases the model structure is already decided on prior to the actual calibration...... and Pronzato (1997) and which can be easily set up for any type of model. In this paper the proposed approach is applied to the forward reaction rate of the enzyme kinetics proposed by Shin and Kim (1998). Structural identifiability analysis showed that no local structural model problems were occurring......) identifiability problems. By using the presented approach it is possible to detect potential identifiability problems and avoid pointless calibration (and experimental!) effort....
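
    For context, calibrating a forward enzymatic rate usually means fitting a Michaelis-Menten law to rate measurements, and inspecting the parameter covariance is a quick practical-identifiability check. A minimal sketch with synthetic data (not the Shin and Kim kinetics):

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Forward reaction rate v = Vmax * S / (Km + S)."""
    return vmax * s / (km + s)

# Synthetic substrate concentrations and noisy rate observations
rng = np.random.default_rng(0)
s = np.linspace(0.1, 10.0, 20)
v_obs = michaelis_menten(s, vmax=2.0, km=1.5) + rng.normal(0.0, 0.02, s.size)

popt, pcov = curve_fit(michaelis_menten, s, v_obs, p0=[1.0, 1.0])
perr = np.sqrt(np.diag(pcov))  # large or strongly correlated errors hint at identifiability problems
print(f"Vmax = {popt[0]:.2f} +/- {perr[0]:.2f}, Km = {popt[1]:.2f} +/- {perr[1]:.2f}")
```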

  2. Incoherent SSI Analysis of Reactor Building using 2007 Hard-Rock Coherency Model

    International Nuclear Information System (INIS)

    Kang, Joo-Hyung; Lee, Sang-Hoon

    2008-01-01

    Many strong-earthquake recordings show the response motions at building foundations to be less intense than the corresponding free-field motions. To account for this phenomenon, the concept of spatial variation, or wave incoherence, was introduced. Several approaches for its application to practical analysis and design as part of the soil-structure interaction (SSI) effect have been developed. However, conventional wave incoherency models did not reflect the characteristics of earthquake data from hard-rock sites, and their application to practical nuclear structures on hard-rock sites was not sufficiently justified. This paper focuses on the impact of the hard-rock coherency model proposed in 2007 on the incoherent SSI analysis results of a nuclear power plant (NPP) structure. A typical reactor building of a pressurized water reactor (PWR) type NPP is modeled with both surface and embedded foundations. The model is also assumed to be located on medium-hard rock and hard-rock sites. The SSI analysis results are obtained and compared for coherent and incoherent input motions. The structural responses considering rocking and torsion effects are also investigated.

  3. Airline Overbooking Problem with Uncertain No-Shows

    Directory of Open Access Journals (Sweden)

    Chunxiao Zhang

    2014-01-01

    Full Text Available This paper considers an airline overbooking problem for a new single-leg flight with discount fares. Because historical no-show data are absent for a new flight, and because various uncertain human behaviors or unexpected events cause some passengers to miss boarding their aircraft on time, the probability distribution of no-shows cannot be obtained. In this case, the airline has to invite domain experts to provide belief degrees of no-shows in order to estimate the distribution. However, human beings often overestimate unlikely events, which makes the variance of the belief degree much greater than that of the frequency. If we still regard the belief degree as a subjective probability, the derived results will exceed our expectations. To deal with this uncertainty, the number of no-shows on the new flight is modeled as an uncertain variable in this paper. Given a chance constraint on social reputation, an overbooking model with discount fares is developed to maximize the profit rate based on uncertain programming theory. Finally, the analytic expression of the optimal booking limit is obtained through a numerical example, and the results of a sensitivity analysis indicate that the optimal booking limit is significantly affected by flight capacity, discount, confidence level, and the parameters of the uncertainty distribution.
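
    For contrast with the paper's uncertain-programming formulation, the classical probabilistic version of the booking-limit problem is easy to sketch: assume independent shows with a known probability (exactly the assumption the paper argues is unavailable for a new flight) and search for the limit that maximizes expected profit. All fares and penalties below are invented.

```python
from math import comb

def expected_profit(limit, capacity=100, show_prob=0.9,
                    fare=200.0, bump_cost=800.0):
    """Expected profit when `limit` nonrefundable tickets are sold and each
    passenger shows up independently with probability `show_prob` (binomial
    model). Passengers beyond capacity are denied boarding at `bump_cost` each."""
    profit = 0.0
    for shows in range(limit + 1):
        p = comb(limit, shows) * show_prob**shows * (1 - show_prob)**(limit - shows)
        bumped = max(0, shows - capacity)
        profit += p * (fare * limit - bump_cost * bumped)
    return profit

best = max(range(100, 121), key=expected_profit)
print(best, round(expected_profit(best), 1))  # optimal limit exceeds capacity
```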

  4. Reliability analysis and operator modelling

    International Nuclear Information System (INIS)

    Hollnagel, Erik

    1996-01-01

    The paper considers the state of operator modelling in reliability analysis. Operator models are needed in reliability analysis because operators are needed in process control systems. HRA methods must therefore be able to account both for human performance variability and for the dynamics of the interaction. A selected set of first generation HRA approaches is briefly described in terms of the operator model they use, their classification principle, and the actual method they propose. In addition, two examples of second generation methods are also considered. It is concluded that first generation HRA methods generally have very simplistic operator models, either referring to the time-reliability relationship or to elementary information processing concepts. It is argued that second generation HRA methods must recognise that cognition is embedded in a context, and be able to account for that in the way human reliability is analysed and assessed

  5. Sensitivity analysis of a modified energy model

    International Nuclear Information System (INIS)

    Suganthi, L.; Jagadeesan, T.R.

    1997-01-01

    Sensitivity analysis is carried out to validate the model formulation. A modified model has been developed to predict the future energy requirements for coal, oil and electricity, considering price, income, technological and environmental factors. The impact and sensitivity of the independent variables on the dependent variable are analysed. The error distribution pattern in the modified model, as compared with a conventional time series model, indicated the absence of clusters. The residual plot of the modified model showed no distinct pattern of variation. The percentage variation of error in the conventional time series model for coal and oil ranges from -20% to +20%, while for electricity it ranges from -80% to +20%. In the modified model, however, the percentage variation in error is greatly reduced: for coal it ranges from -0.25% to +0.15%, for oil from -0.6% to +0.6%, and for electricity from -10% to +10%. The upper- and lower-limit consumption levels at 95% confidence are determined. Consumption at varying percentage changes in price and population is analysed. The gap between the modified model's predictions at varying percentage changes in price and population over the years 1990 to 2001 is found to be increasing. This is because of the increasing rate of energy consumption over the years, and because the confidence level decreases as the projection extends further into the future. (author)

  6. Atmospheric Dispersion Modelling and Spatial Analysis to Evaluate Population Exposure to Pesticides from Farming Processes

    Directory of Open Access Journals (Sweden)

    Sofia Costanzini

    2018-01-01

    Full Text Available This work originates from an epidemiological study aimed at assessing the correlation between population exposure to pesticides used in agriculture and adverse health effects. In support of the population exposure evaluation, two models implemented by the authors were applied: a GIS-based proximity model and the CAREA atmospheric dispersion model. In this work, the results of the two models are presented and compared. Although proximity analysis is widely used for these kinds of studies, we investigated how meteorology could affect the exposure assessment. Both models were applied to pesticides emitted by 1519 agricultural fields, considering 2584 receptors distributed over an area of 8430 km2. The CAREA output shows a considerable increase in the percentage of exposed receptors, from 4% under the proximity model to 54% under the CAREA model. Moreover, spatial analysis of the results at a specific test site showed that the meteorological effects considered by CAREA led to an anisotropic exposure distribution that differs considerably from the symmetric distribution produced by the proximity model. In addition, the results of a field campaign for defining and planning ground-based concentration measurements for the validation of CAREA are presented. The preliminary results showed that, during treatments, pesticide concentrations far from the fields were significantly higher than background values.
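
    The difference between the two models can be sketched with a textbook Gaussian plume, the simplest dispersion model that weights receptors by wind direction. This is not CAREA's formulation; the sigma curves below are rough illustrative power laws, not a real stability-class scheme.

```python
import numpy as np

def gaussian_plume(q, u, x, y, z=0.0, h=2.0):
    """Ground-level concentration from a textbook Gaussian plume.

    q: emission rate (g/s), u: wind speed (m/s), x: downwind distance (m),
    y: crosswind distance (m), z: receptor height, h: release height (m).
    Power-law sigma curves stand in for a real stability-class scheme.
    """
    x = np.maximum(x, 1.0)                 # avoid the singularity at the source
    sigma_y = 0.08 * x**0.9                # illustrative dispersion coefficients
    sigma_z = 0.06 * x**0.85
    return (q / (2 * np.pi * u * sigma_y * sigma_z)
            * np.exp(-0.5 * (y / sigma_y) ** 2)
            * (np.exp(-0.5 * ((z - h) / sigma_z) ** 2)
               + np.exp(-0.5 * ((z + h) / sigma_z) ** 2)))  # ground reflection

print(gaussian_plume(q=1.0, u=3.0, x=500.0, y=0.0))   # 500 m downwind
print(gaussian_plume(q=1.0, u=3.0, x=1.0, y=500.0))   # 500 m crosswind: ~0
```

    With the wind along x, a receptor 500 m downwind receives a substantial concentration while one 500 m crosswind receives essentially none, whereas a proximity model would score both identically; this is the anisotropy the study reports.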

  7. Modeling and Analysis of Wrinkled Membranes: An Overview

    Science.gov (United States)

    Yang, B.; Ding, H.; Lou, M.; Fang, H.; Broduer, Steve (Technical Monitor)

    2001-01-01

    Thin-film membranes are basic elements of a variety of space inflatable/deployable structures. Wrinkling degrades the performance and reliability of these membrane structures, and hence has been a topic of continued interest. Wrinkling analysis of membranes with general geometry and arbitrary boundary conditions is quite challenging. The objective of this presentation is two-fold. First, the existing models of wrinkled membranes and related numerical solution methods are reviewed. The important issues to be discussed are the capability of a membrane model to characterize the taut, wrinkled and slack states of membranes in a consistent and physically reasonable manner; the ability of a wrinkling analysis method to predict the formation and growth of wrinkled regions, and to determine out-of-plane deformation and wrinkle waves; the convergence of a numerical solution method for wrinkling analysis; and the compatibility of a wrinkling analysis with general-purpose finite element codes. Following this review, several open issues in the modeling and analysis of wrinkled membranes to be addressed in future research are summarized. The second objective of this presentation is to introduce a newly developed membrane model with two viable parameters (2-VP model) and an associated parametric finite element method (PFEM) for wrinkling analysis. The innovations and advantages of the proposed membrane model and PFEM-based wrinkling analysis are: (1) via a unified stress-strain relation, the 2-VP model treats the taut, wrinkled, and slack states of membranes consistently; (2) the PFEM-based wrinkling analysis has guaranteed convergence; (3) the 2-VP model along with the PFEM is capable of predicting membrane out-of-plane deformations; and (4) the PFEM can be integrated into any existing finite element code. Preliminary numerical examples are also included in this presentation to demonstrate the 2-VP model and the PFEM-based wrinkling analysis approach.

  8. Pumps modelling of a sodium fast reactor design and analysis of hydrodynamic behavior

    Directory of Open Access Journals (Sweden)

    Ordóñez Ródenas José

    2016-01-01

    Full Text Available One of the goals of Generation IV reactors is to increase safety relative to previous generations. Different research platforms have identified the need to improve the reliability of the simulation tools so as to ensure the capability of the plant to accommodate the design basis transients established in preliminary safety studies. The paper describes the modelling of the primary pumps of advanced sodium cooled reactors using the TRACE code. Following the implementation of the models, the results obtained in the analysis of different design basis transients are compared with the simplifying approximations used in reference models. The paper shows the process of obtaining a consistent pump model for the ESFR (European Sodium Fast Reactor) design and the analysis of loss-of-flow transients triggered by pump coast-down, analysing the coupled thermal-hydraulic and neutronic system response. A sensitivity analysis of the effect of the system pressure drops, and of the other relevant parameters that influence natural convection after pump coast-down, is also included.

  9. Beyond the scope of Free-Wilson analysis: building interpretable QSAR models with machine learning algorithms.

    Science.gov (United States)

    Chen, Hongming; Carlsson, Lars; Eriksson, Mats; Varkonyi, Peter; Norinder, Ulf; Nilsson, Ingemar

    2013-06-24

    A novel methodology was developed to build Free-Wilson-like local QSAR models by combining R-group signatures and the SVM algorithm. Unlike Free-Wilson analysis, this method is able to make predictions for compounds with R-groups not present in the training set. Eleven public data sets were chosen as test cases for comparing the performance of the new method with several other traditional modeling strategies, including Free-Wilson analysis. Our results show that the R-group signature SVM models generally achieve better prediction accuracy than Free-Wilson analysis. Moreover, the predictions of the R-group signature models are also comparable to those of models using ECFP6 fingerprints and signatures for the whole compound. Most importantly, R-group contributions to the SVM model can be obtained by calculating the gradient with respect to the R-group signatures. For most of the studied data sets, a significant correlation with the corresponding Free-Wilson analysis is shown. These results suggest that the R-group contribution can be used to interpret bioactivity data, and highlight that the R-group signature based SVM modeling method is as interpretable as Free-Wilson analysis. Hence the signature SVM model can be a useful modeling tool for any drug discovery project.
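
    The modeling recipe, fixed-length R-group descriptors fed to an SVM regressor with per-group contributions read off the fitted model's gradient, can be sketched with scikit-learn. The random binary "signature" features below are placeholders for real R-group signature descriptors, and a linear kernel is used so that the gradient is simply the coefficient vector.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(200, 32)).astype(float)   # placeholder R-group signature bits
w_true = rng.normal(0, 1, 32)
y = X @ w_true + rng.normal(0, 0.1, 200)                # synthetic 'activity'

model = SVR(kernel="linear", C=10.0).fit(X, y)

# For a linear kernel the gradient w.r.t. each signature bit is constant,
# giving a Free-Wilson-like per-feature contribution:
contributions = model.coef_.ravel()
print(np.corrcoef(contributions, w_true)[0, 1])  # should be close to 1
```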

  10. Rasch model analysis of the Depression, Anxiety and Stress Scales (DASS)

    Directory of Open Access Journals (Sweden)

    Tennant Alan

    2009-05-01

    Full Text Available Abstract Background There is a growing awareness of the need for easily administered, psychometrically sound screening tools to identify individuals with elevated levels of psychological distress. Although support has been found for the psychometric properties of the Depression, Anxiety and Stress Scales (DASS) using classical test theory approaches, it has not been subjected to Rasch analysis. The aim of this study was to use Rasch analysis to assess the psychometric properties of the DASS-21 scales, using two different administration modes. Methods The DASS-21 was administered to 420 participants, with half the sample responding to a web-based version and the other half completing a traditional pencil-and-paper version. Conformity of the DASS-21 scales to a Rasch partial credit model was assessed using the RUMM2020 software. Results To achieve adequate model fit it was necessary to remove one item from each of the DASS-21 subscales. The reduced scales showed adequate internal consistency reliability, unidimensionality and freedom from differential item functioning for sex, age and mode of administration. Analysis of all DASS-21 items combined did not support its use as a measure of general psychological distress. A scale combining the anxiety and stress items showed satisfactory fit to the Rasch model after removal of three items. Conclusion The results provide support for the measurement properties, internal consistency reliability, and unidimensionality of three slightly modified DASS-21 scales, across two different administration methods. Further use of Rasch analysis on the DASS-21 in larger and broader samples is recommended to confirm the findings of the current study.

  11. A threat-vulnerability based risk analysis model for cyber physical system security

    CSIR Research Space (South Africa)

    Ledwaba, Lehlogonolo

    2017-01-01

    Full Text Available model. An analysis of the Natanz system shows that, with an actual-case security-risk score at Mitigation level 5, the infected facilities barely avoided a situation worse than the one which occurred. The paper concludes with a discussion on the need...

  12. Seismic simulation analysis of nuclear reactor building by soil-building interaction model

    International Nuclear Information System (INIS)

    Muto, K.; Kobayashi, T.; Motohashi, S.; Kusano, N.; Mizuno, N.; Sugiyama, N.

    1981-01-01

    Seismic simulation analyses were performed to evaluate soil-structure interaction effects with an analytical approach using a 'Lattice Model' developed by the authors. The purpose of this paper is to check the adequacy of this procedure for analyzing soil-structure interaction by comparing computed results with recorded ones. The 'Lattice Model' approach employs a lumped-mass interactive model, in which not only the structure but also the underlying and/or surrounding soil are modeled as discretized elements. The analytical model used for this study extends about 310 m in the horizontal direction and about 103 m in depth. The reactor building is modeled as three shearing-bending sticks (outer wall, inner wall and shield wall), and the underlying and surrounding soil are divided into four shearing sticks (the column directly beneath the reactor building, and adjacent, near and distant columns). The corresponding input base motion for the 'Lattice Model' was determined by a deconvolution analysis using a motion recorded at elevation -18.5 m in the free field. The results of this simulation analysis were shown to be in reasonably good agreement with the recorded ones in the form of the distribution of ground motions and structural responses, acceleration time histories and related response spectra. These results showed that the 'Lattice Model' approach is an appropriate one for estimating soil-structure interaction effects. (orig./HP)

  13. Bayesian uncertainty analysis with applications to turbulence modeling

    International Nuclear Information System (INIS)

    Cheung, Sai Hung; Oliver, Todd A.; Prudencio, Ernesto E.; Prudhomme, Serge; Moser, Robert D.

    2011-01-01

    In this paper, we apply Bayesian uncertainty quantification techniques to the processes of calibrating complex mathematical models and predicting quantities of interest (QoIs) with such models. These techniques also enable the systematic comparison of competing model classes. The processes of calibration and comparison constitute the building blocks of a larger validation process, the goal of which is to accept or reject a given mathematical model for the prediction of a particular QoI in a particular scenario. In this work, we take the first step in this process by applying the methodology to the analysis of the Spalart-Allmaras turbulence model in the context of incompressible boundary layer flows. Three competing model classes based on the Spalart-Allmaras model are formulated, calibrated against experimental data, and used to issue predictions with quantified uncertainty. The model classes are compared in terms of their posterior probabilities and their predictions of the QoIs. The model posterior probability represents the relative plausibility of a model class given the data, and thus incorporates the model's ability to fit experimental observations. Alternatively, comparing models using the predicted QoI connects the process to the needs of decision makers who use the results of the model. We show that by using both the model plausibility and the predicted QoI, one has the opportunity to reject some model classes after calibration, before subjecting the remaining classes to additional validation challenges.

  14. Bayesian sensitivity analysis of a 1D vascular model with Gaussian process emulators.

    Science.gov (United States)

    Melis, Alessandro; Clayton, Richard H; Marzo, Alberto

    2017-12-01

    One-dimensional models of the cardiovascular system can capture the physics of pulse waves but involve many parameters. Since these may vary among individuals, patient-specific models are difficult to construct. Sensitivity analysis can be used to rank model parameters by their effect on outputs and to quantify how uncertainty in parameters influences output uncertainty. This type of analysis is often conducted with a Monte Carlo method, where large numbers of model runs are used to assess input-output relations. The aim of this study was to demonstrate the computational efficiency of variance-based sensitivity analysis of 1D vascular models using Gaussian process emulators, compared to a standard Monte Carlo approach. The methodology was tested on four vascular networks of increasing complexity to analyse its scalability. The computational time needed to perform the sensitivity analysis with an emulator was reduced by 99.96% compared to the Monte Carlo approach. Despite the reduced computational time, the sensitivity indices obtained using the two approaches were comparable. The scalability study showed that the number of mechanistic simulations needed to train a Gaussian process for sensitivity analysis was of order O(d), rather than the O(d × 10³) needed for Monte Carlo analysis (where d is the number of parameters in the model). The efficiency of this approach, combined with the capacity to estimate the impact of uncertain parameters on model outputs, will enable the development of patient-specific models of the vascular system, and has the potential to produce results with clinical relevance. © 2017 The Authors International Journal for Numerical Methods in Biomedical Engineering Published by John Wiley & Sons Ltd.
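
    The emulator idea is to spend the expensive mechanistic runs on training a Gaussian process and then do the heavy Monte Carlo sampling on the cheap surrogate. A minimal sketch on a toy two-parameter function (not a vascular model) follows; the freeze-one-input variance measure is a crude first-order estimate valid for near-additive functions, standing in for proper Sobol indices.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_model(x):                  # stand-in for a 1D vascular simulation
    return np.sin(x[:, 0]) + 0.3 * x[:, 1] ** 2

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(30, 2))    # ~O(d) mechanistic runs
emulator = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(
    X_train, expensive_model(X_train))

# Monte Carlo sensitivity on the emulator: freeze one input at a time
# and see how much of the output variance that removes.
X_mc = rng.uniform(-1, 1, size=(10_000, 2))
total_var = emulator.predict(X_mc).var()
for i in range(2):
    X_frozen = X_mc.copy()
    X_frozen[:, i] = 0.0
    share = 1 - emulator.predict(X_frozen).var() / total_var
    print(f"input {i}: variance share ~ {share:.2f}")
```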

  15. Functional linear models for association analysis of quantitative traits.

    Science.gov (United States)

    Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao

    2013-11-01

    Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants, common variants, or a combination of the two. By treating the multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both the linkage and linkage disequilibrium (LD) information of the genetic markers. Using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants, adjusting for covariates. Extensive simulation analysis shows that the F-distributed tests of the proposed fixed effect functional linear models have higher power than the sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to their optimal utilization of both the genetic linkage and LD information of multiple genetic variants in a genome and the similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher-order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small-sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study. © 2013 WILEY

  16. Hierarchical modeling and analysis for spatial data

    CERN Document Server

    Banerjee, Sudipto; Gelfand, Alan E

    2003-01-01

    Among the many uses of hierarchical modeling, its application to the statistical analysis of spatial and spatio-temporal data from areas such as epidemiology and environmental science has proven particularly fruitful. Yet to date, the few books that address the subject have been either too narrowly focused on specific aspects of spatial analysis, or written at a level often inaccessible to those lacking a strong background in mathematical statistics. Hierarchical Modeling and Analysis for Spatial Data is the first accessible, self-contained treatment of hierarchical methods, modeling, and dat

  17. Sensitivity study of CFD turbulent models for natural convection analysis

    International Nuclear Information System (INIS)

    Park, Yu Sun

    2007-01-01

    The buoyancy-driven convective flow fields are steady circulatory flows established between surfaces maintained at two fixed temperatures. They are ubiquitous in nature and play an important role in many engineering applications. Application of natural convection can reduce costs and effort remarkably. This paper focuses on a sensitivity study of turbulence analysis using CFD (Computational Fluid Dynamics) for natural convection in a closed rectangular cavity. Using the commercial CFD code FLUENT, various turbulence models were applied to the turbulent flow. The results from the CFD models are compared with each other from the viewpoints of grid resolution and flow characteristics. It has been shown that: -) obtaining the general flow characteristics is possible with a relatively coarse grid; -) there is no significant difference between results from grid resolutions finer than a certain y+ value, where y+ is defined as y+ = ρ*u*y/μ, u being the wall friction velocity, y the normal distance from the center of the cell to the wall, and ρ and μ respectively the fluid density and the fluid viscosity; -) the K-ε models show different flow characteristics from the K-ω models or from the Reynolds Stress Model (RSM); and -) the y+ parameter is crucial for the selection of the appropriate turbulence model to apply within the simulation

  18. Microarray profiling shows distinct differences between primary tumors and commonly used preclinical models in hepatocellular carcinoma

    International Nuclear Information System (INIS)

    Wang, Weining; Iyer, N. Gopalakrishna; Tay, Hsien Ts’ung; Wu, Yonghui; Lim, Tony K. H.; Zheng, Lin; Song, In Chin; Kwoh, Chee Keong; Huynh, Hung; Tan, Patrick O. B.; Chow, Pierce K. H.

    2015-01-01

    Despite advances in therapeutics, outcomes for hepatocellular carcinoma (HCC) remain poor and there is an urgent need for efficacious systemic therapy. Unfortunately, drugs that are successful in preclinical studies often fail in the clinical setting, and we hypothesize that this is due to functional differences between primary tumors and commonly used preclinical models. In this study, we attempt to answer this question by comparing tumor morphology and gene expression profiles between primary tumors, xenografts and HCC cell lines. HepG2 cells and tumor cells from patient tumor explants were injected subcutaneously (ectopically) into the flank and orthotopically into the liver parenchyma of Mus musculus SCID mice. The mice were euthanized after two weeks. RNA was extracted from the tumors, and gene expression profiling was performed using the GeneChip Human Genome U133 Plus 2.0. Principal component analyses (PCA) and construction of dendrograms were conducted using the Partek Genomics Suite. PCA showed that the commonly used HepG2 cell line model and its xenograft counterparts were vastly different from all fresh primary tumors. Expression profiles of primary tumors were also significantly divergent from their counterpart patient-derived xenograft (PDX) models, regardless of the site of implantation. Xenografts from the same primary tumors were more likely to cluster together regardless of site of implantation, although heat maps showed distinct differences in gene expression profiles between orthotopic and ectopic models. The data presented here challenge the utility of routinely used preclinical models. Models using HepG2 were vastly different from primary tumors and PDXs, suggesting that this model is not clinically representative. Surprisingly, the site of implantation (orthotopic versus ectopic) had limited impact on gene expression profiles, and in both scenarios the xenografts differed significantly from the original primary tumors, challenging the long

  19. Modeling of asphalt-rubber rotational viscosity by statistical analysis and neural networks

    Directory of Open Access Journals (Sweden)

    Luciano Pivoto Specht

    2007-03-01

    Full Text Available It is of great importance to know a binder's viscosity in order to carry out the handling, mixing and application processes and the compaction of asphalt mixes in highway surfacing. This paper presents the results of viscosity measurements on asphalt-rubber binders prepared in the laboratory. The binders were prepared varying the rubber content, rubber particle size, and mixing duration and temperature, all following a statistical design plan. Statistical analysis and artificial neural networks were used to create mathematical models for predicting the binders' viscosity. The comparison between experimental data and results simulated with the generated models showed better performance of the neural network analysis than of the statistical models. The results indicated that the rubber content and mixing duration have the greatest influence on the observed viscosity within the considered interval of parameter variation.

  20. Evaluation of Thermal Margin Analysis Models for SMART

    International Nuclear Information System (INIS)

    Seo, Kyong Won; Kwon, Hyuk; Hwang, Dae Hyun

    2011-01-01

    The thermal margin of SMART is analyzed by three different methods. The first method is subchannel analysis with the MATRA-S code, which provides the reference data for the other two methods. The second method is an on-line few-channel analysis with the FAST code, which is to be integrated into SCOPS/SCOMS. The last one is a single-channel module analysis used in the safety analysis. Several thermal margin analysis models for the SMART reactor core based on subchannel analysis were set up and tested. We adopted a single-stage strategy for the thermal analysis of the SMART reactor core. The model should represent the characteristics of the SMART reactor core, including the hot channel, and should be as simple as possible so that it can be evaluated within reasonable time and cost

  1. A Costing Analysis for Decision Making Grid Model in Failure-Based Maintenance

    Directory of Open Access Journals (Sweden)

    Burhanuddin M. A.

    2011-01-01

    Full Text Available Background. In the current economic downturn, industries have to maintain good control over production costs to preserve their profit margins. The maintenance department, as an essential unit in industry, should gather all maintenance data, process the information promptly, and subsequently transform it into useful decisions, then act on the chosen alternative to reduce production cost. The Decision Making Grid model is used to identify strategies for maintenance decisions. However, the model has a limitation, as it considers only two factors: downtime and frequency of failures. In this study we consider a third factor, cost, for failure-based maintenance. The objective of this paper is to introduce formulae to estimate maintenance cost. Methods. Fishbone analysis conducted with the Ishikawa model and the Decision Making Grid method are used in this study to reveal underlying risk factors that delay failure-based maintenance. The goal of the study is to estimate this risk factor, repair cost, so that it fits into the Decision Making Grid model. The Decision Making Grid model considers two variables, frequency of failure and downtime, in the analysis. This paper introduces a third variable, repair cost, for the Decision Making Grid model. This approach gives better results in categorizing the machines, reducing cost, and boosting earnings for the manufacturing plant. Results. We collected data from one of the food processing factories in Malaysia. From our empirical results, based on the costing analysis, Machine C, Machine D, Machine F, and Machine I must be in the Decision Making Grid model even though their frequencies of failures and downtimes are less than those of Machine B and Machine N. The case study and experimental results show that the cost analysis in the Decision Making Grid model gives more promising strategies in failure-based maintenance. Conclusions. The improvement of the Decision Making Grid model for decision analysis with costing analysis is our contribution in this paper for

  2. A dynamic model of liquid containers (tanks) with legs and probability analysis of response to simulated earthquake

    International Nuclear Information System (INIS)

    Fujita, Takafumi; Shimosaka, Haruo

    1980-01-01

    This paper describes the results of an analysis of the response of liquid containers (tanks) to earthquakes. Sine-wave oscillation was applied experimentally to model tanks with legs. A model with one degree of freedom proved sufficient for the analysis. To investigate the reason for this, the response multiplication factor of tank displacement was analysed. The shapes of the model tanks were rectangular and cylindrical. Analyses were made using potential theory. The experimental studies showed that the attenuation characteristics of the oscillation were non-linear. A model analysis of this non-linear attenuation was also performed, and good agreement between the experimental and the analytical results was recognized. A probability analysis of the response to earthquakes with simulated shock waves was performed using the above-mentioned model, and good agreement between experiment and analysis was obtained. (Kato, T.)

  3. Conformational analysis of lignin models; Analise conformacional de modelos de lignina

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Helio F. dos [Juiz de Fora Univ., MG (Brazil). Dept. de Quimica]. E-mail: helius@quimica.ufjf.br

    2001-08-01

    The conformational equilibrium for two 5,5'-biphenyl lignin models has been analyzed using a quantum mechanical semiempirical method. The gas phase and solution structures are discussed based on NMR and X-ray experimental data. The results obtained showed that the observed conformations are solvent-dependent, with the geometries and thermodynamic properties correlating with the experimental information. This study shows how a systematic theoretical conformational analysis can help to understand chemical processes at a molecular level. (author)

  4. Domain specific modeling and analysis

    NARCIS (Netherlands)

    Jacob, Joost Ferdinand

    2008-01-01

    It is desirable to model software systems in such a way that analysis of the systems, and tool development for such analysis, is readily possible and feasible in the context of large scientific research projects. This thesis emphasizes the methodology that serves as a basis for such developments.

  5. Seismic Response Analysis and Test of 1/8 Scale Model for a Spent Fuel Storage Cask

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Han; Park, C. G.; Koo, G. H.; Seo, G. S. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Yeom, S. H. [Chungnam Univ., Daejeon (Korea, Republic of); Choi, B. I.; Cho, Y. D. [Korea Hydro and Nuclear Power Co. Ltd., Daejeon (Korea, Republic of)

    2005-07-15

    Seismic response tests of a 1/8-scale spent fuel dry storage cask model were performed for the typical 1940 El Centro and Kobe earthquakes. This report first focuses on data generation from seismic response tests of a free-standing storage cask model, to check the overturning possibility of a storage cask and the slipping displacement on a concrete slab bed. Variations in seismic load magnitude and cask/bed interface friction were considered in the tests. The test results show that the model gives an overturning response only under an extreme condition. A FEM model of the 1/8-scale spent fuel dry storage cask test model was built using the available 3D contact conditions in ABAQUS/Explicit. The input load for this analysis is the El Centro earthquake, and the friction coefficients are obtained from the test results. The penalty and kinematic contact methods of ABAQUS are used for the mechanical contact formulation. The analysis method was verified against the rocking angle obtained in the seismic response tests. The kinematic contact method with an adequate normal contact stiffness showed good agreement with the tests. Based on the established analysis method for the 1/8-scale model, seismic response analyses of a full-scale model were performed for design and beyond-design seismic loads.

  6. Analysis hierarchical model for discrete event systems

    Science.gov (United States)

    Ciortea, E. M.

    2015-11-01

    This paper presents a hierarchical model based on discrete event networks for robotic systems. Following the hierarchical approach, the Petri net is analysed as a network spanning from the highest conceptual level down to the lowest level of local control, and extended Petri nets are used for the modelling and control of complex robotic systems. Such a system is structured, controlled and analysed in this paper using the Visual Object Net++ package, which is relatively simple and easy to use, and the results are shown as representations that are easy to interpret. The hierarchical structure of the robotic system is implemented on computers and analysed using specialized programs. Implementation of the hierarchical discrete event model as a real-time system on a computer network connected via a serial bus is possible, where each computer is dedicated to the local Petri model of one subsystem of the global robotic system. Since Petri models are simple enough to run on general-purpose computers, the analysis, modelling and control of complex manufacturing systems can be achieved using Petri nets. Discrete event modelling is a pragmatic tool for industrial systems, and Petri nets are used here because the system under study is a discrete event system. To capture the timing behaviour, the timed Petri model of the transport stream is divided into hierarchical levels whose sections are analysed successively. The proposed simulation of the robotic system using timed Petri nets offers the opportunity to observe the system's timing. From transport and transmission times measured on the spot, graphics showing the average time of the transport activity are obtained for individual parameter sets of finished products.

  7. Model selection for convolutive ICA with an application to spatiotemporal analysis of EEG

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, S.; Hansen, Lars Kai

    2007-01-01

    We present a new algorithm for maximum likelihood convolutive independent component analysis (ICA) in which components are unmixed using stable autoregressive filters determined implicitly by estimating a convolutive model of the mixing process. By introducing a convolutive mixing model...... for the components, we show how the order of the filters in the model can be correctly detected using Bayesian model selection. We demonstrate a framework for deconvolving a subspace of independent components in electroencephalography (EEG). Initial results suggest that in some cases, convolutive mixing may...

  8. Premium analysis for copula model: A case study for Malaysian motor insurance claims

    Science.gov (United States)

    Resti, Yulia; Ismail, Noriszura; Jaaman, Saiful Hafizah

    2014-06-01

    This study performs premium analysis for copula models with regression marginals. For illustration purposes, the copula models are fitted to Malaysian motor insurance claims data. In this study, we consider copula models from the Archimedean and Elliptical families, and marginal distributions from the Gamma and Inverse Gaussian regression models. The simulated results from the independent model, obtained by fitting regression models separately to each claim category, and the dependent model, obtained by fitting copula models to all claim categories jointly, are compared. The results show that the dependent model using the Frank copula is the best model, since the risk premiums estimated under this model most closely approximate the actual claims experience relative to the other copula models.
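
    The dependent model's key step, simulating claim categories jointly through a copula while keeping regression-style marginals, can be sketched as follows. A Gaussian copula is used here purely for ease of sampling (the study's best fit was the Frank copula), and all parameters are invented.

```python
import numpy as np
from scipy import stats

def gaussian_copula_claims(n, rho=0.6, shapes=(2.0, 3.0), scales=(500.0, 300.0), seed=0):
    """Simulate two dependent claim-severity categories.

    Dependence comes from a Gaussian copula with correlation rho;
    marginals are Gamma, mimicking Gamma regression marginals.
    """
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    u = stats.norm.cdf(z)                       # uniforms carrying the copula dependence
    claims = np.column_stack([
        stats.gamma.ppf(u[:, j], a=shapes[j], scale=scales[j]) for j in range(2)
    ])
    return claims

claims = gaussian_copula_claims(100_000)
print(claims.mean(axis=0))           # pure premiums per category under the dependent model
print(np.corrcoef(claims.T)[0, 1])   # induced dependence between categories
```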

  9. Pilot-model analysis and simulation study of effect of control task desired control response

    Science.gov (United States)

    Adams, J. J.; Gera, J.; Jaudon, J. B.

    1978-01-01

    A pilot model analysis was performed that relates pilot control compensation, pilot-aircraft system response, and aircraft response characteristics for longitudinal control. The results show that a higher aircraft short-period frequency is required to achieve superior pilot-aircraft system response in an altitude control task than is required in an attitude control task. These results were confirmed by a simulation study of target tracking. It was concluded that the pilot model analysis provides a theoretical basis for determining the effect of the control task on pilot opinions.

  10. Sensitivity of a numerical wave model on wind re-analysis datasets

    Science.gov (United States)

    Lavidas, George; Venugopal, Vengatesan; Friedrich, Daniel

    2017-03-01

    Wind is the dominant process for wave generation. Detailed evaluation of metocean conditions strengthens our understanding of issues concerning potential offshore applications. However, the scarcity of buoys and the high cost of monitoring systems pose a barrier to properly defining offshore conditions. Through the use of numerical wave models, metocean conditions can be hindcasted and forecasted, providing reliable characterisations. This study reports the sensitivity of a numerical wave model to wind inputs for the Scottish region. Two re-analysis wind datasets with different spatio-temporal characteristics are used: the ERA-Interim Re-Analysis and the CFSR-NCEP Re-Analysis dataset. Different wind products alter the results, affecting the accuracy obtained. The scope of this study is to assess the available wind databases and provide information concerning the most appropriate wind dataset for the specific region, in temporal, spatial and geographic terms, for wave modelling and offshore applications. Both wind input datasets delivered numerical wave model results with good correlation. Wave results from the 1-h dataset have higher peaks and lower biases, at the expense of a higher scatter index. On the other hand, the 6-h dataset has lower scatter but higher biases. The study shows how the wind dataset affects numerical wave modelling performance, and that, depending on location and study needs, different wind inputs should be considered.

  11. Replica analysis of partition-function zeros in spin-glass models

    International Nuclear Information System (INIS)

    Takahashi, Kazutaka

    2011-01-01

    We study the partition-function zeros in mean-field spin-glass models. We show that the replica method is useful to find the locations of zeros in a complex parameter plane. For the random energy model, we obtain the phase diagram in the plane and find that there are two types of distributions of zeros: two-dimensional distribution within a phase and one-dimensional one on a phase boundary. Phases with a two-dimensional distribution are characterized by a novel order parameter defined in the present replica analysis. We also discuss possible patterns of distributions by studying several systems.

  12. Stability Analysis of a Susceptible, Exposed, Infected, Recovered (SEIR) Model for the Spread of Dengue Fever in Medan

    Science.gov (United States)

    Side, Syafruddin; Molliq Rangkuti, Yulita; Gerhana Pane, Dian; Setia Sinaga, Marlina

    2018-01-01

    Dengue fever is an endemic disease which spreads through a vector, Aedes aegypti. The disease is found in more than 100 countries across the Americas, Africa and Asia, especially in countries with a tropical climate. The mathematical modeling in this paper addresses the speed of the spread of dengue fever. The adopted model divides the population into four classes: Susceptible (S), Exposed (E), Infected (I) and Recovered (R). The SEIR model is further analyzed to determine the basic reproduction number based on the number of reported dengue cases in Medan city. The stability analysis of the system in this study distinguishes an asymptotically stable case, indicating an endemic situation, from an unstable case, which also shows endemic behavior. Simulation of the SEIR mathematical model showed that a very long time is required before infected humans become free of dengue virus infection. This happens because dengue virus infection occurs continuously between the human and vector populations.
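
    A minimal numerical sketch of a host-only SEIR system (without the explicit vector dynamics, and with illustrative parameters rather than values fitted to the Medan data) is:

```python
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma):
    """Standard SEIR right-hand side in population fractions."""
    s, e, i, r = y
    return [-beta * s * i,             # susceptibles becoming exposed
            beta * s * i - sigma * e,  # exposed progressing to infectious
            sigma * e - gamma * i,     # infectious recovering
            gamma * i]

beta, sigma, gamma = 0.8, 1 / 5.5, 1 / 7.0   # illustrative rates (per day)
y0 = [0.99, 0.0, 0.01, 0.0]
t = np.linspace(0.0, 365.0, 1000)
s, e, i, r = odeint(seir, y0, t, args=(beta, sigma, gamma)).T

print(f"R0 ~ {beta / gamma:.2f}, peak infectious fraction: {i.max():.3f}")
```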

  13. Computational fluid dynamics application: slosh analysis of a fuel tank model

    International Nuclear Information System (INIS)

    Iu, H.S.; Cleghorn, W.L.; Mills, J.K.

    2004-01-01

    This paper presents the analysis of fluid slosh behaviour inside a fuel tank model. The fuel tank model was a simplified version of a stock fuel tank that has a sloshing noise problem. A commercial CFD software package, FLOW-3D, was used to simulate the slosh behaviour. Slosh experiments were performed to verify the computer simulation results. High-speed video equipment enhanced with a data acquisition system was used to record the slosh experiments and to obtain the instantaneous sound level of each video frame. Five baffle configurations, including the no-baffle configuration, were considered in the computer simulations and the experiments. The simulation results showed that the best baffle configuration can reduce the mean kinetic energy by 80% relative to the no-baffle configuration in a certain slosh situation. The experimental results showed that a 15 dB(A) noise reduction can be achieved by the best baffle configuration. The correlation analysis between the mean kinetic energy and the noise level showed that high mean kinetic energy of the fluid does not always correspond to high sloshing noise. High correlation between them occurs only for slosh situations where the fluid hits the top of the tank and creates noise. (author)

  14. Invariant density analysis: modeling and analysis of the postural control system using Markov chains.

    Science.gov (United States)

    Hur, Pilwon; Shorter, K Alex; Mehta, Prashant G; Hsiao-Wecksler, Elizabeth T

    2012-04-01

    In this paper, a novel analysis technique, invariant density analysis (IDA), is introduced. IDA quantifies steady-state behavior of the postural control system using center of pressure (COP) data collected during quiet standing. IDA relies on the analysis of a reduced-order finite Markov model to characterize stochastic behavior observed during postural sway. Five IDA parameters characterize the model and offer physiological insight into the long-term dynamical behavior of the postural control system. Two studies were performed to demonstrate the efficacy of IDA. Study 1 showed that multiple short trials can be concatenated to create a dataset suitable for IDA. Study 2 demonstrated that IDA was effective at distinguishing age-related differences in postural control behavior among young, middle-aged, and older adults. These results suggest that the postural control system of young adults converges more quickly to its steady-state behavior, while maintaining the COP nearer an overall centroid, than that of either the middle-aged or older adults. Additionally, larger entropy values for older adults indicate that their COP follows a more stochastic path, while smaller entropy values for young adults indicate a more deterministic path. These results illustrate the potential of IDA as a quantitative tool for the assessment of the quiet-standing postural control system.
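
    The core computation behind such an approach can be sketched as follows, with synthetic data standing in for COP recordings and a deliberately simplified radial binning (the published IDA pipeline differs in its state definition and parameters):

    ```python
    import numpy as np

    # Toy sway-like series standing in for a COP displacement recording.
    rng = np.random.default_rng(1)
    cop = np.cumsum(rng.normal(0.0, 0.1, 20000)) * 0.05
    cop -= cop.mean()

    # Discretize |COP| into 10 radial states via quantile bin edges.
    edges = np.quantile(np.abs(cop), np.linspace(0.0, 1.0, 11))
    states = np.clip(np.searchsorted(edges, np.abs(cop)) - 1, 0, 9)

    # Estimate the Markov transition matrix from observed state pairs.
    P = np.zeros((10, 10))
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1
    P /= P.sum(axis=1, keepdims=True)

    # The invariant density is the left eigenvector of P for eigenvalue 1.
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    pi /= pi.sum()
    print("invariant density over radial bins:", np.round(pi, 3))
    ```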

  15. Sensitivity analysis of Smith's AMRV model

    International Nuclear Information System (INIS)

    Ho, Chih-Hsiang

    1995-01-01

    Multiple-expert hazard/risk assessments have considerable precedent, particularly in the Yucca Mountain site characterization studies. In this paper, we present a Bayesian approach to statistical modeling in volcanic hazard assessment for the Yucca Mountain site. Specifically, we show that the expert opinion on the site disruption parameter p is elicited on the prior distribution, π(p), based on the geological information that is available. Moreover, π(p) can combine all available geological information motivated by conflicting but realistic arguments (e.g., simulation, cluster analysis, structural control, etc.). The incorporated uncertainties about the probability of repository disruption p will eventually be averaged out by taking the expectation over π(p). We use the following priors in the analysis: priors chosen for mathematical convenience, Beta(r, s) for (r, s) = (2, 2), (3, 3), (5, 5), (2, 1), (2, 8), (8, 2), and (1, 1); and three priors motivated by expert knowledge. Sensitivity analysis is performed for each prior distribution. Estimated values of hazard based on the priors chosen for mathematical simplicity are uniformly higher than those obtained based on the priors motivated by expert knowledge. The model using the prior Beta(8, 2) yields the highest hazard (= 2.97 × 10⁻²). The minimum hazard is produced by the "three-expert prior" (i.e., values of p are equally likely at 10⁻³, 10⁻², and 10⁻¹). The estimate of this hazard is 1.39 × 10⁻³, which is only about one order of magnitude smaller than the maximum value. The term "hazard" is defined as the probability of at least one disruption of a repository at the Yucca Mountain site by basaltic volcanism for the next 10,000 years
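
    The prior-sensitivity exercise is easy to mimic in outline. The sketch below averages a hazard function over several Beta priors; the hazard form and the rescaling of p are illustrative assumptions, not Ho's exact formulation:

    ```python
    import numpy as np
    from scipy import stats

    def hazard(p, years=10_000):
        # Probability of at least one disruptive event over the horizon,
        # treating p as an annual disruption probability (assumed form).
        return 1.0 - (1.0 - p) ** years

    priors = {"Beta(2,2)": (2, 2), "Beta(8,2)": (8, 2),
              "Beta(2,8)": (2, 8), "Beta(1,1)": (1, 1)}

    rng = np.random.default_rng(0)
    for name, (r, s) in priors.items():
        # Rescale Beta draws to a small annual probability (assumed factor).
        p = stats.beta(r, s).rvs(100_000, random_state=rng) * 1e-5
        print(name, "-> E[hazard] ~", round(hazard(p).mean(), 4))
    ```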

  16. Multivariate analysis: models and method

    International Nuclear Information System (INIS)

    Sanz Perucha, J.

    1990-01-01

    Data treatment techniques are increasingly used as computer methods become more widely accessible. Multivariate analysis consists of a group of statistical methods that are applied to study objects or samples characterized by multiple values. The final goal is decision making. The paper describes the models and methods of multivariate analysis.

  17. Ferrofluids: Modeling, numerical analysis, and scientific computation

    Science.gov (United States)

    Tomas, Ignacio

    This dissertation presents some developments in the numerical analysis of partial differential equations (PDEs) describing the behavior of ferrofluids. The most widely accepted PDE model for ferrofluids is the micropolar model proposed by R.E. Rosensweig. The Micropolar Navier-Stokes Equations (MNSE) are a subsystem of PDEs within the Rosensweig model. Being a simplified version of the much bigger system of PDEs proposed by Rosensweig, the MNSE are a natural starting point of this thesis. The MNSE couple linear velocity u, angular velocity w, and pressure p. We propose and analyze a first-order semi-implicit fully-discrete scheme for the MNSE, which decouples the computation of the linear and angular velocities, is unconditionally stable, and delivers optimal convergence rates under assumptions analogous to those used for the Navier-Stokes equations. Moving on to the much more complex Rosensweig model, we provide a definition (approximation) for the effective magnetizing field h and explain the assumptions behind this definition. Unlike previous definitions available in the literature, this new definition is able to accommodate the effect of external magnetic fields. Using this definition we set up the system of PDEs coupling linear velocity u, pressure p, angular velocity w, magnetization m, and magnetic potential ϕ. We show that this system is energy-stable and devise a numerical scheme that mimics the same stability property. We prove that solutions of the numerical scheme always exist and, under certain simplifying assumptions, that the discrete solutions converge. A notable outcome of the analysis of the numerical scheme for the Rosensweig model is the choice of finite element spaces that allow the construction of an energy-stable scheme. Finally, with the lessons learned from the Rosensweig model, we develop a diffuse-interface model describing the behavior of two-phase ferrofluid flows and present an energy-stable numerical scheme for this model.

  18. Spatial occupancy models applied to atlas data show Southern Ground Hornbills strongly depend on protected areas.

    Science.gov (United States)

    Broms, Kristin M; Johnson, Devin S; Altwegg, Res; Conquest, Loveday L

    2014-03-01

    Determining the range of a species and exploring species-habitat associations are central questions in ecology and can be answered by analyzing presence-absence data. Often, both the sampling of sites and the desired area of inference involve neighboring sites; thus, positive spatial autocorrelation between these sites is expected. Using survey data for the Southern Ground Hornbill (Bucorvus leadbeateri) from the Southern African Bird Atlas Project, we compared advantages and disadvantages of three increasingly complex models for species occupancy: an occupancy model that accounted for nondetection but assumed all sites were independent, and two spatial occupancy models that accounted for both nondetection and spatial autocorrelation. We modeled the spatial autocorrelation with an intrinsic conditional autoregressive (ICAR) model and with a restricted spatial regression (RSR) model. Both spatial models can readily be applied to any other gridded, presence-absence data set using a newly introduced R package. The RSR model provided the best inference and was able to capture small-scale variation that the other models did not. It showed that ground hornbills are strongly dependent on protected areas in the north of their South African range, but less so further south. The ICAR models did not capture any spatial autocorrelation in the data, and they took an order of magnitude longer than the RSR models to run. Thus, the RSR occupancy model appears to be an attractive choice for modeling occurrences over large spatial domains, while accounting for imperfect detection and spatial autocorrelation.

  19. Linearization effect in multifractal analysis: Insights from the Random Energy Model

    Science.gov (United States)

    Angeletti, Florian; Mézard, Marc; Bertin, Eric; Abry, Patrice

    2011-08-01

    The analysis of the linearization effect in multifractal analysis, and hence of the estimation of moments for multifractal processes, is revisited borrowing concepts from the statistical physics of disordered systems, notably from the analysis of the so-called Random Energy Model. Considering a standard multifractal process (compound Poisson motion), chosen as a simple representative example, we show the following: (i) the existence of a critical order q∗ beyond which moments, though finite, cannot be estimated through empirical averages, irrespective of the sample size of the observation; (ii) multifractal exponents necessarily behave linearly in q for q > q∗. Tailoring the analysis conducted for the Random Energy Model to that of compound Poisson motion, we provide explanatory and quantitative predictions for the value of q∗ and for the slope controlling the linear behavior of the multifractal exponents. These quantities are shown to be related only to the definition of the multifractal process and not to depend on the sample size of the observation. Monte Carlo simulations, conducted over a large number of large sample size realizations of compound Poisson motion, support and extend these analyses.

  20. Modeling and analysis of power extraction circuits for passive UHF RFID applications

    International Nuclear Information System (INIS)

    Fan Bo; Dai Yujie; Zhang Xiaoxing; Lue Yingjie

    2009-01-01

    Modeling and analysis of far-field power extraction circuits for passive UHF RF identification (RFID) applications are presented. A mathematical model is derived to predict the complex nonlinear performance of a UHF voltage multiplier using Schottky diodes. To reduce the complexity of the proposed model, a simple linear approximation for the Schottky diode is introduced. Measurement results show considerable agreement with the values calculated by the proposed model. With the derived model, optimization of the stage number of the voltage multiplier to achieve maximum power conversion efficiency is discussed. Furthermore, according to the Bode-Fano criterion and the proposed model, a limitation on the maximum power-up range for passive UHF RFID power extraction circuits is also studied.

  1. Dynamical Analysis of bantam-Regulated Drosophila Circadian Rhythm Model

    Science.gov (United States)

    Li, Ying; Liu, Zengrong

    MicroRNAs (miRNAs) interact with 3′ untranslated region (UTR) elements of target genes to regulate mRNA stability or translation, and play a crucial role in regulating many different biological processes. bantam, a conserved miRNA, is involved in several functions, such as regulating Drosophila growth and circadian rhythm. Recently, it has been discovered that bantam plays a crucial role in the core circadian pacemaker. In this paper, based on experimental observations, a detailed dynamical model of the bantam-regulated circadian clock system is developed to show the post-transcriptional behaviors in the modulation of the Drosophila circadian rhythm, in which the regulation of bantam is incorporated into a classical model. The dynamical behaviors of the model are consistent with the experimental observations, which shows that bantam is an important regulator of the Drosophila circadian rhythm. The sensitivity analysis of parameters demonstrates that with the regulation of bantam the system is more sensitive to perturbations, indicating that bantam regulation makes it easier for the organism to modulate its period against environmental perturbations. The effectiveness in rescuing the locomotor activity rhythms of mutated flies shows that bantam is necessary for strong and sustained rhythms. In addition, the biological mechanisms of bantam regulation are analyzed, which may help us more clearly understand the Drosophila circadian rhythm regulated by other miRNAs.

  2. A Framework for Bioacoustic Vocalization Analysis Using Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Ebenezer Out-Nyarko

    2009-11-01

    Using Hidden Markov Models (HMMs) as a recognition framework for automatic classification of animal vocalizations has a number of benefits, including the ability to handle duration variability through nonlinear time alignment, the ability to incorporate complex language or recognition constraints, and easy extendibility to continuous recognition and detection domains. In this work, we apply HMMs to several different species and bioacoustic tasks using generalized spectral features that can be easily adjusted across species, and HMM network topologies suited to each task. This experimental work includes a simple call-type classification task using one HMM per vocalization for repertoire analysis of Asian elephants, a language-constrained song recognition task using syllable models as base units for ortolan bunting vocalizations, and a stress stimulus differentiation task in poultry vocalizations using a non-sequential model via a one-state HMM with Gaussian mixtures. Results show strong performance across all tasks and illustrate the flexibility of the HMM framework for a variety of species, vocalization types, and analysis tasks.
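
    The one-model-per-call-type pattern described above can be sketched with hmmlearn; the Gaussian features below are synthetic stand-ins for the paper's generalized spectral features:

    ```python
    import numpy as np
    from hmmlearn import hmm

    rng = np.random.default_rng(0)

    # Synthetic training sequences: rows are frames, columns are features.
    train = {
        "call_A": rng.normal(0.0, 1.0, size=(200, 6)),
        "call_B": rng.normal(2.0, 1.0, size=(200, 6)),
    }

    # Train one Gaussian HMM per call type.
    models = {}
    for name, X in train.items():
        m = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
        m.fit(X)
        models[name] = m

    # Classify a test sequence by the highest log-likelihood model.
    test = rng.normal(2.0, 1.0, size=(50, 6))   # drawn like call_B
    scores = {name: m.score(test) for name, m in models.items()}
    print("classified as:", max(scores, key=scores.get))
    ```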

  3. Gait analysis in a pre- and post-ischemic stroke biomedical pig model.

    Science.gov (United States)

    Duberstein, Kylee Jo; Platt, Simon R; Holmes, Shannon P; Dove, C Robert; Howerth, Elizabeth W; Kent, Marc; Stice, Steven L; Hill, William D; Hess, David C; West, Franklin D

    2014-02-10

    Severity of neural injury, including stroke, in human patients, as well as recovery from injury, can be assessed through changes in the gait patterns of affected individuals. Similar quantification of motor function deficits has been measured in rodent models of such injuries. However, due to differences in the fundamental structure of human and rodent brains, there is a need to develop a large animal model to facilitate treatment development for neurological conditions. Porcine brain structure is similar to that of humans, and therefore the pig may make a more clinically relevant animal model. The current study was undertaken to determine key gait characteristics in normal biomedical miniature pigs and the dynamic changes that occur post-neural injury in a porcine middle cerebral artery (MCA) occlusion ischemic stroke model. Yucatan miniature pigs were trained to walk through a semi-circular track and were recorded with high-speed cameras to detect changes in key gait parameters. Analysis of normal pigs showed overall symmetry in hindlimb swing and stance times and forelimb stance time, along with step length, step velocity, and maximum hoof height on both fore- and hindlimbs. A subset of pigs was again recorded at 7, 5 and 3 days prior to MCA occlusion and then at 1, 3, 5, 7, 14 and 30 days following surgery. MRI analysis showed that MCA occlusion resulted in significant infarction. Gait analysis indicated that stroke resulted in notable asymmetries in both temporal and spatial variables. Pigs exhibited lower maximum front hoof height on the paretic side, as well as shorter swing time and longer stance time on the paretic hindlimb. These results support gait analysis as a highly sensitive method for detecting post-stroke changes in gait parameters in the pig. Copyright © 2013 Elsevier Inc. All rights reserved.

  4. A discrete-time Bayesian network reliability modeling and analysis framework

    International Nuclear Information System (INIS)

    Boudali, H.; Dugan, J.B.

    2005-01-01

    Dependability tools are becoming indispensable for modeling and analyzing (critical) systems. However, the growing complexity of such systems calls for increasing sophistication of these tools. Dependability tools need to not only capture the complex dynamic behavior of the system components, but must also be easy to use, intuitive, and computationally efficient. In general, current tools have a number of shortcomings, including lack of modeling power, an incapacity to efficiently handle general component failure distributions, and ineffectiveness in solving large models that exhibit complex dependencies between their components. We propose a novel reliability modeling and analysis framework based on the Bayesian network (BN) formalism. The overall approach is to investigate timed Bayesian networks and to find a suitable reliability framework for dynamic systems. We have applied our methodology to two example systems and the preliminary results are promising. We have defined a discrete-time BN reliability formalism and demonstrated its capabilities from a modeling and analysis point of view. This research shows that a BN-based reliability formalism is a powerful potential solution for modeling and analyzing various kinds of system component behaviors and interactions. Moreover, being based on the BN formalism, the framework is easy to use and intuitive for non-experts, and provides a basis for more advanced and useful analyses such as system diagnosis
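
    As a minimal static illustration of BN-based reliability (the paper's discrete-time formalism additionally models time slices), a hypothetical two-component parallel system can be encoded with pgmpy:

    ```python
    from pgmpy.models import BayesianNetwork
    from pgmpy.factors.discrete import TabularCPD
    from pgmpy.inference import VariableElimination

    # State 0 = up, state 1 = failed; failure probabilities are hypothetical.
    model = BayesianNetwork([("A", "Sys"), ("B", "Sys")])

    cpd_a = TabularCPD("A", 2, [[0.9], [0.1]])
    cpd_b = TabularCPD("B", 2, [[0.8], [0.2]])
    # Parallel system: Sys fails only if both A and B have failed.
    cpd_s = TabularCPD("Sys", 2,
                       [[1, 1, 1, 0],
                        [0, 0, 0, 1]],
                       evidence=["A", "B"], evidence_card=[2, 2])
    model.add_cpds(cpd_a, cpd_b, cpd_s)

    infer = VariableElimination(model)
    print(infer.query(["Sys"]))   # P(Sys up) = 1 - 0.1 * 0.2 = 0.98
    ```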

  5. Global model of zenith tropospheric delay proposed based on EOF analysis

    Science.gov (United States)

    Sun, Langlang; Chen, Peng; Wei, Erhu; Li, Qinzheng

    2017-07-01

    Tropospheric delay is one of the main error budgets in Global Navigation Satellite System (GNSS) measurements. Many empirical correction models have been developed to compensate for this delay, and models that do not require meteorological parameters have received the most attention. This study established a global troposphere zenith total delay (ZTD) model, called Global Empirical Orthogonal Function Troposphere (GEOFT), based on the empirical orthogonal function (EOF, also known as geographically weighted PCA) analysis method and Global Geodetic Observing System (GGOS) Atmosphere data from 2012 to 2015. The results showed that ZTD variation could be well represented by the characteristics of the EOF base functions Ek and associated coefficients Pk. Here, E1 mainly signifies the equatorial anomaly; E2 represents north-south asymmetry; and E3 and E4 reflect regional variation. Moreover, P1 mainly reflects annual and semiannual variation components; P2 and P3 mainly contain annual variation components; and P4 displays semiannual variation components. We validated the proposed GEOFT model using GGOS ZTD grid data and the tropospheric product of the International GNSS Service (IGS) over the year 2016. The results showed that the GEOFT model has high accuracy, with bias and RMS of -0.3 and 3.9 cm, respectively, with respect to the GGOS ZTD data, and of -0.8 and 4.1 cm, respectively, with respect to the global IGS tropospheric product. The accuracy of GEOFT demonstrates that the use of the EOF analysis method to characterize ZTD variation is reasonable.
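
    Generic EOF decomposition reduces to an SVD of the space-time anomaly matrix; the sketch below uses a synthetic annual signal in place of the GGOS ZTD grids:

    ```python
    import numpy as np

    # Synthetic field: rows are daily epochs, columns are flattened grid cells.
    rng = np.random.default_rng(0)
    t = np.arange(1461)                                    # ~4 years of days
    annual = np.sin(2 * np.pi * t / 365.25)
    field = (annual[:, None] * rng.normal(1.0, 0.2, 500)   # spatial pattern
             + 0.1 * rng.normal(size=(1461, 500)))         # noise

    anom = field - field.mean(axis=0)                      # remove time mean
    U, s, Vt = np.linalg.svd(anom, full_matrices=False)

    E = Vt                 # EOF base functions E_k (spatial patterns)
    P = U * s              # associated coefficients P_k (time series)
    explained = s**2 / (s**2).sum()
    print("variance explained by first 4 EOFs:", np.round(explained[:4], 3))
    ```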

  6. A New Computationally Frugal Method For Sensitivity Analysis Of Environmental Models

    Science.gov (United States)

    Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A.; Teuling, R.; Borgonovo, E.; Uijlenhoet, R.

    2013-12-01

    Effective and efficient parameter sensitivity analysis methods are crucial to understand the behaviour of complex environmental models and for the use of models in risk assessment. This paper proposes a new computationally frugal method for analyzing parameter sensitivity: the Distributed Evaluation of Local Sensitivity Analysis (DELSA). The DELSA method can be considered a hybrid of local and global methods, and focuses explicitly on multiscale evaluation of parameter sensitivity across the parameter space. Results of the DELSA method are compared with the popular global, variance-based Sobol' method and the delta method. We assess the parameter sensitivity of both (1) a simple non-linear reservoir model with only two parameters, and (2) five different "bucket-style" hydrologic models applied to a medium-sized catchment (200 km²) in the Belgian Ardennes. Results show that in both the synthetic and real-world examples, the global Sobol' method and the DELSA method provide similar sensitivities, with the DELSA method providing more detailed insight at much lower computational cost. The ability to understand how sensitivity measures vary through parameter space with modest computational requirements provides exciting new opportunities.
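
    A schematic of the DELSA idea under toy assumptions (a hypothetical two-parameter model and assumed prior parameter variances; not the authors' code): evaluate scaled local first-order sensitivities at many points spread across the parameter space.

    ```python
    import numpy as np

    def model(theta):
        # Toy nonlinear two-parameter model standing in for a reservoir model.
        k, n = theta
        return k * np.exp(-n) + n**2

    rng = np.random.default_rng(0)
    samples = rng.uniform([0.1, 0.1], [2.0, 2.0], size=(500, 2))
    prior_var = np.array([0.5, 0.5])        # assumed parameter variances
    h = 1e-4                                # finite-difference step

    S = np.empty_like(samples)
    for i, th in enumerate(samples):
        grad = np.array([(model(th + h * e) - model(th - h * e)) / (2 * h)
                         for e in np.eye(2)])
        contrib = grad**2 * prior_var       # first-order variance contributions
        S[i] = contrib / contrib.sum()      # local sensitivity indices

    # DELSA examines the distribution of local indices, not a single number:
    print("mean local index per parameter:", S.mean(axis=0).round(3))
    ```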

  7. Sparse multivariate factor analysis regression models and its applications to integrative genomics analysis.

    Science.gov (United States)

    Zhou, Yan; Wang, Pei; Wang, Xianlong; Zhu, Ji; Song, Peter X-K

    2017-01-01

    The multivariate regression model is a useful tool to explore complex associations between two kinds of molecular markers, which enables the understanding of the biological pathways underlying disease etiology. For a set of correlated response variables, accounting for such dependency can increase statistical power. Motivated by integrative genomic data analyses, we propose a new methodology, the sparse multivariate factor analysis regression model (smFARM), in which correlations of response variables are assumed to follow a factor analysis model with latent factors. The proposed method allows us not only to address the challenge that the number of association parameters is larger than the sample size, but also to adjust for unobserved genetic and/or nongenetic factors that potentially conceal the underlying response-predictor associations. The proposed smFARM is implemented via the EM algorithm and the blockwise coordinate descent algorithm. The proposed methodology is evaluated and compared to existing methods through extensive simulation studies. Our results show that accounting for latent factors through the proposed smFARM can improve the sensitivity of signal detection and the accuracy of sparse association map estimation. We illustrate smFARM with two integrative genomics analysis examples, a breast cancer dataset and an ovarian cancer dataset, assessing the relationship between DNA copy numbers and gene expression arrays to understand genetic regulatory patterns relevant to the disease. We identify two trans-hub regions: one in cytoband 17q12, whose amplification influences the RNA expression levels of important breast cancer genes, and the other in cytoband 9q21.32-33, which is associated with chemoresistance in ovarian cancer. © 2016 WILEY PERIODICALS, INC.

  8. Analysis for Ad Hoc Network Attack-Defense Based on Stochastic Game Model

    Directory of Open Access Journals (Sweden)

    Yuanjie LI

    2014-06-01

    The analysis of attack actions in Ad Hoc networks can provide a reference for the design of security mechanisms. This paper presents a method for analyzing the security of Ad Hoc networks based on Stochastic Game Nets (SGN). The method establishes an SGN model of the Ad Hoc network and computes the Nash equilibrium strategy. After transforming the SGN model into a continuous-time Markov chain (CTMC), the security of the Ad Hoc network can be evaluated and analyzed quantitatively by calculating the stationary probabilities of the CTMC. Finally, Matlab simulation results show that the probability of a successful attack is related to the attack intensity and expected payoffs, but not to the attack rate.
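
    The final quantitative step, computing the stationary distribution of a CTMC, is generic; here with a toy 3-state generator matrix rather than the paper's Ad Hoc model:

    ```python
    import numpy as np

    # Toy infinitesimal generator Q (rows sum to zero).
    Q = np.array([[-0.5,  0.3,  0.2],
                  [ 0.1, -0.4,  0.3],
                  [ 0.2,  0.2, -0.4]])

    # Solve pi Q = 0 with sum(pi) = 1 via a stacked least-squares system.
    A = np.vstack([Q.T, np.ones(3)])
    b = np.array([0.0, 0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("stationary distribution:", pi.round(4))
    ```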

  9. Developmental trajectories of adolescent popularity: a growth curve modelling analysis.

    Science.gov (United States)

    Cillessen, Antonius H N; Borch, Casey

    2006-12-01

    Growth curve modelling was used to examine developmental trajectories of sociometric and perceived popularity across eight years in adolescence, and the effects of gender, overt aggression, and relational aggression on these trajectories. Participants were 303 initially popular students (167 girls, 136 boys) for whom sociometric data were available in Grades 5-12. The popularity and aggression constructs were stable but non-overlapping developmental dimensions. Growth curve models were run with SAS MIXED in the framework of the multilevel model for change [Singer, J. D., & Willett, J. B. (2003). Applied longitudinal data analysis. Oxford, UK: Oxford University Press]. Sociometric popularity showed a linear change trajectory; perceived popularity showed nonlinear change. Overt aggression predicted low sociometric popularity but an increase in perceived popularity in the second half of the study. Relational aggression predicted a decrease in sociometric popularity, especially for girls, and continued high perceived popularity for both genders. The effect of relational aggression on perceived popularity was strongest around the transition from middle to high school. The importance of growth curve models for understanding adolescent social development is discussed, as well as specific issues and challenges of growth curve analyses with sociometric data.

  10. Model-based analysis and control of axial and torsional stick-slip oscillations in drilling systems

    NARCIS (Netherlands)

    Besselink, B.; Wouw, van de N.; Nijmeijer, H.

    2011-01-01

    The mechanisms leading to torsional vibrations in drilling systems are considered in this paper. To this end, a drill-string model of the axial and torsional dynamics is proposed, in which coupling is provided by a rate-independent bit-rock interaction law. Analysis of this model shows that the fast axial

  11. TIME SERIES ANALYSIS USING A UNIQUE MODEL OF TRANSFORMATION

    Directory of Open Access Journals (Sweden)

    Goran Klepac

    2007-12-01

    The REFII model is an original mathematical model for time series data mining. The main purpose of the model is to automate time series analysis through a unique transformation model of time series. An advantage of this approach to time series analysis is that it links different methods for time series analysis, connects traditional data mining tools to time series, and supports the construction of new algorithms for analyzing time series. It is worth mentioning that the REFII model is not a closed system, i.e., it is not restricted to a finite set of methods. First of all, it is a model for the transformation of time series values, which prepares the data used by different sets of methods based on the same model of transformation in the problem-space domain. The REFII model offers a new approach to time series analysis based on a unique model of transformation, which serves as a basis for all kinds of time series analysis. The advantage of the REFII model is its possible application in many different areas, such as finance, medicine, voice recognition, face recognition and text mining.

  12. Applied research in uncertainty modeling and analysis

    CERN Document Server

    Ayyub, Bilal

    2005-01-01

    Uncertainty has been a concern to engineers, managers, and scientists for many years. For a long time uncertainty has been considered synonymous with random, stochastic, statistical, or probabilistic. Since the early sixties views on uncertainty have become more heterogeneous. In the past forty years numerous tools that model uncertainty, above and beyond statistics, have been proposed by several engineers and scientists. The tool/method used to model uncertainty in a specific context should really be chosen by considering the features of the phenomenon under consideration, not independently of what is known about the system and what causes uncertainty. In this fascinating overview of the field, the authors provide broad coverage of uncertainty analysis/modeling and its application. Applied Research in Uncertainty Modeling and Analysis presents the perspectives of various researchers and practitioners on uncertainty analysis and modeling outside their own fields and domain expertise. Rather than focusing explicitly on...

  13. Combining Generated Data Models with Formal Invalidation for Insider Threat Analysis

    DEFF Research Database (Denmark)

    Kammuller, Florian; Probst, Christian W.

    2014-01-01

    In this paper we revisit the advances made on invalidation policies to explore attack possibilities in organizational models. One aspect that has so far eluded systematic analysis of insider threat is the integration of data into attack scenarios and its exploitation for analyzing the models. We draw from recent insights into the generation of insider data to complement a logic-based mechanical approach. We show how insider analysis can be traced back to the early days of security verification and the Lowe attack on NSPK. The invalidation of policies allows model checking organizational structures to detect insider attacks. Integration of higher-order logic specification techniques allows the use of data refinement to explore attack possibilities beyond the initial system specification. We illustrate this combined invalidation technique on the classical example of the naughty lottery fairy.

  14. A global sensitivity analysis approach for morphogenesis models

    KAUST Repository

    Boas, Sonja E. M.

    2015-11-21

    Background: Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. Results: To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. Conclusions: We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operating mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  15. A global sensitivity analysis approach for morphogenesis models.

    Science.gov (United States)

    Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G

    2015-11-21

    Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operating mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
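
    The generic shape of such a workflow can be sketched with SALib, using a cheap toy function in place of the cellular Potts model and hypothetical parameter names:

    ```python
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {
        "num_vars": 3,
        "names": ["adhesion", "chemotaxis", "stiffness"],  # hypothetical
        "bounds": [[0.0, 1.0]] * 3,
    }

    # Saltelli sampling, then run the "model" on every sample.
    X = saltelli.sample(problem, 1024)
    rng = np.random.default_rng(0)
    Y = X[:, 0] + 2.0 * X[:, 1] * X[:, 2] + rng.normal(0.0, 0.01, len(X))

    Si = sobol.analyze(problem, Y)
    print("first-order indices:", np.round(Si["S1"], 3))
    print("total-order indices:", np.round(Si["ST"], 3))
    ```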

  16. Path analysis and multi-criteria decision making: an approach for multivariate model selection and analysis in health.

    Science.gov (United States)

    Vasconcelos, A G; Almeida, R M; Nobre, F F

    2001-08-01

    This paper introduces an approach that includes non-quantitative factors for the selection and assessment of multivariate complex models in health. A goodness-of-fit based methodology combined with a fuzzy multi-criteria decision-making approach is proposed for model selection. Models were obtained using the Path Analysis (PA) methodology in order to explain the interrelationship between health determinants and the post-neonatal component of infant mortality in 59 municipalities of Brazil in the year 1991. Socioeconomic and demographic factors were used as exogenous variables, and environmental, health service and agglomeration factors as endogenous variables. Five PA models were developed and accepted by statistical criteria of goodness-of-fit. These models were then submitted to a group of experts, seeking to characterize their preferences according to predefined criteria that tried to evaluate model relevance and plausibility. Fuzzy set techniques were used to rank the alternative models according to the number of times a model "dominated" (was superior to) the others. The best-ranked model explained over 90% of the variation of the endogenous variables and showed the favorable influences of income and education levels on post-neonatal mortality. It also showed the unfavorable effect on mortality of fast population growth, through precarious dwelling conditions and decreased access to sanitation. It was possible to aggregate expert opinions in model evaluation. The proposed procedure for model selection allowed the inclusion of subjective information in a clear and systematic manner.

  17. Experimental Design for Sensitivity Analysis of Simulation Models

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2001-01-01

    This introductory tutorial gives a survey on the use of statistical designs for what-if or sensitivity analysis in simulation. This analysis uses regression analysis to approximate the input/output transformation that is implied by the simulation model; the resulting regression model is also known as a metamodel.

  18. Standard model for safety analysis report of hexafluoride production plants from natural uranium

    International Nuclear Information System (INIS)

    1983-01-01

    The standard model for the safety analysis report of hexafluoride production plants from natural uranium is presented, specifying the form of presentation, the nature, and the degree of detail of the minimum information required by the Brazilian Nuclear Energy Commission - CNEN. (E.G.) [pt]

  19. Measuring performance at trade shows

    DEFF Research Database (Denmark)

    Hansen, Kåre

    2004-01-01

    Trade shows are an increasingly important marketing activity for many companies, but current measures of trade show performance do not adequately capture dimensions important to exhibitors. Based on the marketing literature's outcome- and behavior-based control system taxonomy, a model is built that captures an outcome-based sales dimension and four behavior-based dimensions (i.e. information-gathering, relationship-building, image-building, and motivation activities). A 16-item instrument is developed for assessing exhibitors' perceptions of their trade show performance. The paper presents evidence...

  20. Landscape evolution models using the stream power incision model show unrealistic behavior when m ∕ n equals 0.5

    Directory of Open Access Journals (Sweden)

    J. S. Kwang

    2017-12-01

    Landscape evolution models often utilize the stream power incision model to simulate river incision: E = K A^m S^n, where E is the vertical incision rate, K is the erodibility constant, A is the upstream drainage area, S is the channel gradient, and m and n are exponents. This simple but useful law has been employed with an imposed rock uplift rate to gain insight into steady-state landscapes. The most common choice of exponents satisfies m/n = 0.5. Yet all models have limitations. Here, we show that when hillslope diffusion (which operates only on small scales) is neglected, the choice m/n = 0.5 yields a curiously unrealistic result: the predicted landscape is invariant to horizontal stretching. That is, the steady-state landscape for a 10 km² horizontal domain can be stretched so that it is identical to the corresponding landscape for a 1000 km² domain.
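
    The invariance is easy to verify numerically along a single channel profile under illustrative assumptions (Hack-type area growth A = x², uniform uplift U balancing incision at steady state, hypothetical K and U values):

    ```python
    import numpy as np

    def relief(x0, L, K=1e-5, U=1e-3, m=0.5, n=1.0, nx=100_000):
        # Steady state: E = U, so S = (U / (K * A**m))**(1/n).
        x = np.linspace(x0, L, nx)      # distance from the divide
        A = x**2                        # drainage area (assumed scaling)
        S = (U / (K * A**m)) ** (1.0 / n)
        return np.trapz(S, x)           # total relief along the profile

    # With m/n = 0.5, slope falls as 1/x, so relief depends only on L/x0:
    print("relief on [1, 1e4]:  ", round(relief(1.0, 1e4), 1))
    print("relief on [10, 1e5]: ", round(relief(10.0, 1e5), 1))  # 10x stretch
    ```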

  1. A stock market forecasting model combining two-directional two-dimensional principal component analysis and radial basis function neural network.

    Science.gov (United States)

    Guo, Zhiqiang; Wang, Huaiqing; Yang, Jie; Miller, David J

    2015-01-01

    In this paper, we propose and implement a hybrid model combining two-directional two-dimensional principal component analysis ((2D)2PCA) and a Radial Basis Function Neural Network (RBFNN) to forecast stock market behavior. First, 36 stock market technical variables are selected as the input features, and a sliding window is used to obtain the input data of the model. Next, (2D)2PCA is utilized to reduce the dimension of the data and extract its intrinsic features. Finally, an RBFNN accepts the data processed by (2D)2PCA to forecast the next day's stock price or movement. The proposed model is applied to the Shanghai stock market index, and the experiments show that the model achieves a good level of fit. The proposed model is then compared with models that use the traditional dimension reduction methods of principal component analysis (PCA) and independent component analysis (ICA). The empirical results show that the proposed model outperforms the PCA-based model, as well as alternative models based on ICA and on the multilayer perceptron.
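
    The pipeline's shape can be sketched with standard PCA standing in for (2D)2PCA and a bare-bones RBF network (k-means centers plus a linear least-squares readout); the features and target below are synthetic placeholders for the 36 technical indicators:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 36))                      # windows x indicators
    y = 0.5 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(0, 0.1, 500)

    # Dimension reduction (ordinary PCA as a stand-in for (2D)2PCA).
    Z = PCA(n_components=8).fit_transform(X)

    # RBF network: k-means centers, a shared width, linear output weights.
    centers = KMeans(n_clusters=20, n_init=10, random_state=0).fit(Z).cluster_centers_
    width = np.median(np.linalg.norm(Z[:, None, :] - centers[None, :, :], axis=2))

    def rbf_features(Z):
        d = np.linalg.norm(Z[:, None, :] - centers[None, :, :], axis=2)
        return np.exp(-((d / width) ** 2))

    W, *_ = np.linalg.lstsq(rbf_features(Z), y, rcond=None)
    pred = rbf_features(Z) @ W
    print("in-sample RMSE:", round(float(np.sqrt(np.mean((pred - y) ** 2))), 3))
    ```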

  2. Analysis of Piscirickettsia salmonis Metabolism Using Genome-Scale Reconstruction, Modeling, and Testing

    Directory of Open Access Journals (Sweden)

    María P. Cortés

    2017-12-01

    Piscirickettsia salmonis is an intracellular bacterial fish pathogen that causes piscirickettsiosis, a disease with a highly adverse impact on the Chilean salmon farming industry. The development of effective treatment and control methods for piscirickettsiosis is still a challenge. To meet it, the number of studies on P. salmonis has grown in the last couple of years, but many aspects of the pathogen's biology are still poorly understood. Studies on its metabolism are scarce and only recently was a metabolic model for reference strain LF-89 developed. We present a new genome-scale model for P. salmonis LF-89 with more than twice as many genes as in the previous model and incorporating specific elements of the fish pathogen's metabolism. Comparative analysis with models of different bacterial pathogens revealed a lower flexibility in the P. salmonis metabolic network. Through constraint-based analysis, we determined essential metabolites required for its growth and showed that it can benefit from different carbon sources tested experimentally in new defined media. We also built an additional model for strain A1-15972, and together with an analysis of the P. salmonis pangenome, we identified metabolic features that differentiate the two main species clades. Both models constitute a knowledge base for P. salmonis metabolism and can be used to guide the efficient culture of the pathogen and the identification of specific drug targets.
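
    The constraint-based part of such a study follows a standard COBRApy pattern. The published P. salmonis reconstructions are not bundled with the library, so the sketch below loads COBRApy's built-in E. coli core model as a stand-in; the exchange-reaction swap mimics a carbon-source test:

    ```python
    from cobra.io import load_model

    # Bundled E. coli core model as a placeholder for a P. salmonis model.
    model = load_model("textbook")

    solution = model.optimize()              # flux balance analysis
    print("growth on glucose:", round(solution.objective_value, 3))

    # Swap carbon source: close glucose uptake, open L-glutamate uptake.
    model.reactions.get_by_id("EX_glc__D_e").lower_bound = 0.0
    model.reactions.get_by_id("EX_glu__L_e").lower_bound = -10.0
    print("growth on glutamate:", round(model.optimize().objective_value, 3))
    ```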

  3. Breast Cancer-Derived Lung Metastases Show Increased Pyruvate Carboxylase-Dependent Anaplerosis

    Directory of Open Access Journals (Sweden)

    Stefan Christen

    2016-10-01

    Cellular proliferation depends on refilling the tricarboxylic acid (TCA) cycle to support biomass production (anaplerosis). The two major anaplerotic pathways in cells are pyruvate conversion to oxaloacetate via pyruvate carboxylase (PC) and glutamine conversion to α-ketoglutarate. Cancers often show an organ-specific reliance on either pathway. However, it remains unknown whether they adapt their mode of anaplerosis when metastasizing to a distant organ. We measured PC-dependent anaplerosis in breast-cancer-derived lung metastases compared to their primary cancers using in vivo 13C tracer analysis. We discovered that lung metastases have higher PC-dependent anaplerosis compared to primary breast cancers. Based on in vitro analysis and a mathematical model for the determination of compartment-specific metabolite concentrations, we found that mitochondrial pyruvate concentrations can promote PC-dependent anaplerosis via enzyme kinetics. In conclusion, we show that breast cancer cells proliferating as lung metastases activate PC-dependent anaplerosis in response to the lung microenvironment.

  4. Static/dynamic fluid-structure interaction analysis for 3-D rotary blade model

    International Nuclear Information System (INIS)

    Kim, Dong Hyun; Kim, Yu Sung; Kim, Dong Man; Park, Kang Kyun

    2009-01-01

    In this study, static/dynamic fluid-structure interaction analyses have been conducted for a 3D rotary blade model, such as a turbomachinery or wind turbine blade. An advanced computational analysis system based on Computational Fluid Dynamics (CFD) and Computational Structural Dynamics (CSD) has been developed in order to investigate the detailed dynamic responses of rotary-type models. Fluid domains are modeled using a computational grid system with local grid-deforming techniques. Reynolds-averaged Navier-Stokes equations with various turbulence models are solved for unsteady flow problems of the rotating blade model. Detailed static/dynamic responses and instantaneous pressure contours on the blade surfaces considering flow-separation effects are presented to show the multi-physical phenomena of the rotating blades.

  5. Performance analysis of NOAA tropospheric signal delay model

    International Nuclear Information System (INIS)

    Ibrahim, Hassan E; El-Rabbany, Ahmed

    2011-01-01

    Tropospheric delay is one of the dominant Global Positioning System (GPS) errors, degrading positioning accuracy. Recent developments in tropospheric modeling rely on the implementation of more accurate numerical weather prediction (NWP) models. In North America one of the NWP-based tropospheric correction models is the NOAA Tropospheric Signal Delay Model (NOAATrop), which was developed by the US National Oceanic and Atmospheric Administration (NOAA). Because of its potential to improve GPS positioning accuracy, the NOAATrop model became the focus of many researchers. In this paper, we analyzed the performance of the NOAATrop model and examined its effect on the ionosphere-free-based precise point positioning (PPP) solution. We generated 3-year-long tropospheric zenith total delay (ZTD) data series for the NOAATrop model, the Hopfield model, and the International GNSS Service (IGS) final tropospheric correction product, respectively. These data sets were generated at ten IGS reference stations spanning Canada and the United States. We analyzed the NOAATrop ZTD data series and compared them with those of the Hopfield model, using the IGS final tropospheric product as a reference. The analysis shows that the performance of the NOAATrop model is a function of both season (time of the year) and geographical location. However, its performance was superior to that of the Hopfield model in all cases. We further investigated the effect of implementing the NOAATrop model on the convergence and accuracy of the ionosphere-free-based PPP solution. It is shown that the use of the NOAATrop model improved the PPP solution convergence by 1%, 10% and 15% for the latitude, longitude and height components, respectively

  6. The speed of memory errors shows the influence of misleading information: Testing the diffusion model and discrete-state models.

    Science.gov (United States)

    Starns, Jeffrey J; Dubé, Chad; Frelinger, Matthew E

    2018-05-01

    In this report, we evaluate single-item and forced-choice recognition memory for the same items and use the resulting accuracy and reaction time data to test the predictions of discrete-state and continuous models. For the single-item trials, participants saw a word and indicated whether or not it was studied on a previous list. The forced-choice trials had one studied and one non-studied word that both appeared in the earlier single-item trials and both received the same response. Thus, forced-choice trials always had one word with a previous correct response and one with a previous error. Participants were asked to select the studied word regardless of whether they previously called both words "studied" or "not studied." The diffusion model predicts that forced-choice accuracy should be lower when the word with a previous error had a fast versus a slow single-item RT, because fast errors are associated with more compelling misleading memory retrieval. The two-high-threshold (2HT) model does not share this prediction because all errors are guesses, so error RT is not related to memory strength. A low-threshold version of the discrete state approach predicts an effect similar to the diffusion model, because errors are a mixture of responses based on misleading retrieval and guesses, and the guesses should tend to be slower. Results showed that faster single-trial errors were associated with lower forced-choice accuracy, as predicted by the diffusion and low-threshold models. Copyright © 2018 Elsevier Inc. All rights reserved.

  7. Analyzing availability using transfer function models and cross spectral analysis

    International Nuclear Information System (INIS)

    Singpurwalla, N.D.

    1980-01-01

    The paper shows how the methods of multivariate time series analysis can be used in a novel way to investigate the interrelationships between a series of operating (running) times and a series of maintenance (down) times of a complex system. Specifically, the techniques of cross spectral analysis are used to help obtain a Box-Jenkins type transfer function model for the running times and the down times of a nuclear reactor. A knowledge of the interrelationships between the running times and the down times is useful for an evaluation of maintenance policies, for replacement policy decisions, and for evaluating the availability and the readiness of complex systems
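
    The cross-spectral step is standard; a sketch with SciPy, using synthetic series in place of the reactor's running-time and down-time series:

    ```python
    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(0)
    n = 2048
    x = rng.normal(size=n)                                 # "running time" series
    y = np.convolve(x, [0.5, 0.3, 0.2], mode="same") + rng.normal(0, 0.5, n)

    # Cross-spectral density and coherence guide the transfer-function fit.
    f, Pxy = signal.csd(x, y, fs=1.0, nperseg=256)
    f, Cxy = signal.coherence(x, y, fs=1.0, nperseg=256)

    print("peak coherence:", round(float(Cxy.max()), 3))
    print("phase at lowest nonzero frequency (rad):", round(float(np.angle(Pxy[1])), 3))
    ```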

  8. Pumps modelling of a sodium fast reactor design and analysis of hydrodynamic behavior - 15294

    International Nuclear Information System (INIS)

    Ordonez, J.; Lazaro, A.; Martorell, S.

    2015-01-01

    One of the goals of Generation IV reactors is to increase safety relative to previous generations. Different research platforms have identified the need to improve the reliability of simulation tools to ensure the capability of the plant to accommodate the design basis transients established in preliminary safety studies. The paper describes the modeling of recirculation pumps in advanced sodium cooled reactors using the TRACE code. Following the implementation of the models, the results obtained in the analysis of different design basis transients are compared with the simplifying approximations used in reference models. The paper shows the process of obtaining a consistent pump model for the ESFR (European Sodium Fast Reactor) design and the analysis of loss-of-flow transients triggered by pump coast-down, examining the coupled thermal-hydraulic and neutronic system response. A sensitivity analysis of the effect of system pressure drops and of the other relevant parameters that influence natural convection after pump coast-down is also included. (authors)

  9. An Overview of Soil Models for Earthquake Response Analysis

    Directory of Open Access Journals (Sweden)

    Halida Yunita

    2015-01-01

    Earthquakes can damage thousands of buildings and infrastructure as well as cause the loss of thousands of lives. During an earthquake, the damage to buildings is mostly caused by the effect of local soil conditions. Depending on the soil type, the earthquake waves propagating from the epicenter to the ground surface will result in various behaviors of the soil. Several studies have been conducted to accurately obtain the soil response during an earthquake. The soil model used must be able to characterize the stress-strain behavior of the soil during the earthquake. This paper compares equivalent linear and nonlinear soil model responses. Analysis was performed on two soil types, Site Class D and Site Class E. An equivalent linear soil model leads to a constant value of the shear modulus, while in a nonlinear soil model the shear modulus changes continually, depending on the stress level, and shows inelastic behavior. The results from a comparison of both soil models are displayed in the form of maximum acceleration profiles and stress-strain curves.

  10. FAME, the Flux Analysis and Modeling Environment

    Directory of Open Access Journals (Sweden)

    Boele Joost

    2012-01-01

    Background: The creation and modification of genome-scale metabolic models is a task that requires specialized software tools. While these are available, subsequently running or visualizing a model often relies on disjoint code, which adds additional actions to the analysis routine and, in our experience, renders these applications suboptimal for routine use by (systems) biologists. Results: The Flux Analysis and Modeling Environment (FAME) is the first web-based modeling tool that combines the tasks of creating, editing, running, and analyzing/visualizing stoichiometric models into a single program. Analysis results can be automatically superimposed on familiar KEGG-like maps. FAME is written in PHP and uses the Python-based PySCeS-CBM for its linear solving capabilities. It comes with a comprehensive manual and a quick-start tutorial, and can be accessed online at http://f-a-m-e.org/. Conclusions: With FAME, we present the community with an open source, user-friendly, web-based "one stop shop" for stoichiometric modeling. We expect the application will be of substantial use to investigators and educators alike.

  11. Hierarchical linear modeling of longitudinal pedigree data for genetic association analysis

    DEFF Research Database (Denmark)

    Tan, Qihua; B Hjelmborg, Jacob V; Thomassen, Mads

    2014-01-01

    Genetic association analysis on complex phenotypes under a longitudinal design involving pedigrees encounters the problem of correlation within pedigrees, which could affect statistical assessment of the genetic effects. Approaches have been proposed to integrate kinship correlation into mixed-effect models to explicitly model the genetic relationship. These have proved to be an efficient way of dealing with sample clustering in pedigree data. Although current algorithms implemented in popular statistical packages are useful for adjusting relatedness in the mixed modeling of genetic effects... The application identified associations with blood pressure with estimated inflation factors of 0.99, suggesting that our modeling of random effects efficiently handles the genetic relatedness in pedigrees. Application to simulated data captures important variants specified in the simulation. Our results show that the method is useful...

  12. System Testability Analysis for Complex Electronic Devices Based on Multisignal Model

    International Nuclear Information System (INIS)

    Long, B; Tian, S L; Huang, J G

    2006-01-01

    It is necessary to consider system testability problems for electronic devices during their early design phase, because modern electronic devices are becoming smaller and more highly integrated while their function and structure grow more complex. The multisignal model, combining the advantages of the structure model and the dependency model, is used to describe the fault dependency relationships of complex electronic devices, and the main testability indexes used to evaluate testability (including the optimal test program, fault detection rate, fault isolation rate, etc.) and the corresponding algorithms are given. The system testability analysis process is illustrated for a USB-GPIB interface circuit with the TEAMS toolbox. The experimental results show that the modelling method is simple, the computation is fast, and the method significantly improves the diagnostic capability for complex electronic devices

  13. Plot showing ATLAS limits on Standard Model Higgs production in the mass range 100-600 GeV

    CERN Multimedia

    ATLAS Collaboration

    2011-01-01

    The combined upper limit on the Standard Model Higgs boson production cross section divided by the Standard Model expectation as a function of mH is indicated by the solid line. This is a 95% CL limit using the CLs method over the entire mass range. The dotted line shows the median expected limit in the absence of a signal, and the green and yellow bands reflect the corresponding 68% and 95% expected ranges.

  14. Plot showing ATLAS limits on Standard Model Higgs production in the mass range 110-150 GeV

    CERN Multimedia

    ATLAS Collaboration

    2011-01-01

    The combined upper limit on the Standard Model Higgs boson production cross section divided by the Standard Model expectation as a function of mH is indicated by the solid line. This is a 95% CL limit using the CLs method in the low mass range. The dotted line shows the median expected limit in the absence of a signal, and the green and yellow bands reflect the corresponding 68% and 95% expected ranges.

  15. A CAD based geometry model for simulation and analysis of particle detector data

    Energy Technology Data Exchange (ETDEWEB)

    Milde, Michael; Losekamm, Martin; Poeschl, Thomas; Greenwald, Daniel; Paul, Stephan [Technische Universitaet Muenchen, 85748 Garching (Germany)

    2016-07-01

    The development of a new particle detector requires a good understanding of its setup. A detailed model of the detector's geometry is needed not only during construction, but also for simulation and data analysis. To arrive at a consistent description of the detector geometry, a representation is needed that can be easily implemented in the different software tools used during data analysis. We developed a geometry representation based on CAD files that can be easily used within the Geant4 simulation framework and analysis tools based on the ROOT framework. This talk presents the structure of the geometry model and shows its implementation using the example of the event reconstruction developed for the Multi-purpose Active-target Particle Telescope (MAPT). The detector consists of scintillating plastic fibers and can be used as a tracking detector and calorimeter with omnidirectional acceptance. To optimize the angular resolution and the energy reconstruction of measured particles, a detailed detector model is needed at all stages of the reconstruction.

  16. Distributed activation energy model for kinetic analysis of multi-stage hydropyrolysis of coal

    Energy Technology Data Exchange (ETDEWEB)

    Liu, X.; Li, W.; Wang, N.; Li, B. [Chinese Academy of Sciences, Taiyuan (China). Inst. of Coal Chemistry

    2003-07-01

    Based on a new analysis of the distributed activation energy model, a bicentral distribution model was introduced for the analysis of multi-stage hydropyrolysis of coal. Hydropyrolysis under linear temperature programming, with and without a holding stage, was described mathematically and the corresponding kinetic expressions were derived. Based on these kinetics, the hydropyrolysis (HyPy) and multi-stage hydropyrolysis (MHyPy) of Xundian brown coal were simulated. The results show that both the Mo catalyst and two-stage holding can lower the apparent activation energy of hydropyrolysis and narrow the activation energy distribution. In addition, there exists an optimum Mo loading of 0.2% for HyPy of Xundian lignite. 10 refs.

  17. Odor-conditioned rheotaxis of the sea lamprey: modeling, analysis and validation

    International Nuclear Information System (INIS)

    Choi, Jongeun; Jeon, Soo; Johnson, Nicholas S; Brant, Cory O; Li, Weiming

    2013-01-01

    Mechanisms for orienting toward and locating an odor source are sought in both biology and engineering. Chemical ecology studies have demonstrated that adult female sea lamprey show rheotaxis in response to a male pheromone, with dichotomous outcomes: sexually mature females locate the source of the pheromone, whereas immature females swim past the source and continue moving upstream. Here we introduce a simple switching mechanism modeled after odor-conditioned rheotaxis for sea lamprey searching for the source of a pheromone in a one-dimensional riverine environment. In this strategy, the females move upstream only if they detect that the pheromone concentration is higher than a threshold value, and drift downstream (turning off the control action to save energy) otherwise. In addition, we propose various uncertainty models, including measurement noise, actuator disturbance, and a probabilistic model of the concentration field in turbulent flow. Based on the proposed model with uncertainties, a convergence analysis showed that, with this simple switching mechanism, the lamprey converges to the source location on average in spite of all such uncertainties. Furthermore, a slightly modified model and its extensive simulation results explain the behaviors of immature female lamprey near the source location. (paper)
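
    A minimal one-dimensional sketch of the switching rule described above, with an assumed exponentially decaying mean concentration field, Gaussian measurement noise, and illustrative parameter values (none of these specifics come from the paper):

      import numpy as np

      rng = np.random.default_rng(0)

      SOURCE, THRESHOLD = 0.0, 0.2      # source position and switching threshold
      UP, DOWN = 0.05, 0.02             # upstream step and downstream drift per step
      x = 3.0                           # start downstream of the source

      def concentration(pos):
          """Assumed mean pheromone field: present only at and below the source."""
          return np.exp(-0.3 * (pos - SOURCE)) if pos >= SOURCE else 0.0

      for _ in range(2000):
          measured = concentration(x) + rng.normal(0.0, 0.05)  # noisy measurement
          if measured > THRESHOLD:
              x -= UP      # detected: swim upstream, toward the source
          else:
              x += DOWN    # not detected: turn off control and drift downstream

      print(f"final position after 2000 steps: {x:.2f} (source at {SOURCE})")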

  18. SWAT meta-modeling as support of the management scenario analysis in large watersheds.

    Science.gov (United States)

    Azzellino, A; Çevirgen, S; Giupponi, C; Parati, P; Ragusa, F; Salvetti, R

    2015-01-01

    In the last two decades, numerous models and modeling techniques have been developed to simulate nonpoint source pollution effects. Most models simulate the hydrological, chemical, and physical processes involved in the entrainment and transport of sediment, nutrients, and pesticides. Very often these models require a distributed modeling approach and are limited in scope by the requirement of homogeneity and by the need to manipulate extensive data sets. Physically based models are extensively used in this field as decision support for managing nonpoint source emissions. A common characteristic of this type of model is its demanding input of numerous state variables, which makes calibration, and the effort and cost of implementing any simulation scenario, more difficult. In this study the USDA Soil and Water Assessment Tool (SWAT) was used to model the Venice Lagoon Watershed (VLW), Northern Italy. A Multi-Layer Perceptron (MLP) network was trained on SWAT simulations and used as a meta-model for scenario analysis. The MLP meta-model was successfully trained and showed an overall accuracy higher than 70% on both the training and the evaluation sets, allowing a significant simplification in conducting scenario analysis.
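
    The meta-modelling idea, train a cheap emulator on input/output pairs generated by the expensive simulator and then query the emulator for scenario analysis, can be sketched as follows. The quadratic toy "simulator" and all parameter values are stand-ins for SWAT, which is far too heavy to reproduce here.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(42)

      def expensive_simulator(x):
          """Stand-in for a SWAT run: maps scenario parameters to a nutrient load."""
          return x[:, 0] ** 2 + 0.5 * x[:, 1] - 0.2 * x[:, 0] * x[:, 1]

      X = rng.uniform(-1, 1, size=(500, 2))          # sampled management scenarios
      y = expensive_simulator(X)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      meta = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                          random_state=0).fit(X_tr, y_tr)

      print(f"meta-model R^2 on held-out scenarios: {meta.score(X_te, y_te):.3f}")
      # New scenarios can now be screened at negligible cost:
      print(meta.predict(np.array([[0.3, -0.4]])))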

  19. Rotordynamic analysis for stepped-labyrinth gas seals using moody's friction-factor model

    International Nuclear Information System (INIS)

    Ha, Tae Woong

    2001-01-01

    The governing equations are derived for the analysis of a stepped labyrinth gas seal, as generally used in high-performance compressors, gas turbines, and steam turbines. Bulk flow is assumed for a single-cavity control volume set up in a stepped labyrinth cavity, and the flow is assumed to be completely turbulent in the circumferential direction. Moody's wall-friction-factor model is used for the calculation of wall shear stresses in the single-cavity control volume. For the reaction force developed by the stepped labyrinth gas seal, linearized zeroth-order and first-order perturbation equations are developed for small motion about a centered position. Integration of the resultant first-order pressure distribution along and around the seal defines the rotordynamic coefficients of the stepped labyrinth gas seal. The resulting leakage and rotordynamic characteristics of the stepped labyrinth gas seal are presented and compared with Scharrer's theoretical analysis using Blasius' wall-friction-factor model. The present analysis shows good qualitative agreement of the leakage characteristics with Scharrer's analysis, but underpredicts them by about 20%. For the rotordynamic coefficients, the present analysis generally yields smaller predicted values than Scharrer's analysis.
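
    For reference, Moody's explicit friction-factor approximation (the closed-form curve fit commonly attributed to Moody, 1947) can be evaluated as below; whether the seal code uses exactly this fit, and the sample roughness and Reynolds number, are assumptions.

      def moody_friction_factor(reynolds, rel_roughness):
          """Moody's explicit approximation to the Darcy friction factor.

          f = 0.0055 * (1 + (2e4 * e/D + 1e6 / Re) ** (1/3)),
          usually quoted for 4e3 < Re < 1e7 and e/D < 0.01.
          """
          return 0.0055 * (1.0 + (2.0e4 * rel_roughness
                                  + 1.0e6 / reynolds) ** (1.0 / 3.0))

      # Illustrative values only: a smooth-ish cavity wall at moderate Reynolds number.
      print(f"f = {moody_friction_factor(5.0e4, 1.0e-4):.4f}")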

  20. Modeling Phase-transitions Using a High-performance, Isogeometric Analysis Framework

    KAUST Repository

    Vignal, Philippe

    2014-06-06

    In this paper, we present PetIGA, a high-performance framework for solving partial differential equations using Isogeometric Analysis, and show how it can be used to solve phase-field problems. We specifically chose the Cahn-Hilliard equation and the phase-field crystal equation as test cases. These two models allow us to highlight some of the main advantages that PetIGA offers for scientific computing.
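
    For context, the Cahn-Hilliard equation mentioned above is commonly written in mixed form with a double-well potential (a standard textbook statement; the particular potential and notation are not taken from the paper):

      \frac{\partial c}{\partial t} = \nabla \cdot \left( M \, \nabla \mu \right),
      \qquad
      \mu = f'(c) - \epsilon^2 \nabla^2 c,
      \qquad
      f(c) = \frac{1}{4} \left( c^2 - 1 \right)^2

    Here c is the phase variable, M the mobility, and \epsilon an interface-width parameter; the fourth-order character of the combined equation is one reason the higher continuity offered by isogeometric discretizations is attractive.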

  1. Construction Process Simulation and Safety Analysis Based on Building Information Model and 4D Technology

    Institute of Scientific and Technical Information of China (English)

    HU Zhenzhong; ZHANG Jianping; DENG Ziyin

    2008-01-01

    Time-dependent structural analysis theory has been proved to be more accurate and reliable than the methods commonly used during construction. However, so far applications have been limited to partial periods and parts of the structure because of immeasurable artificial intervention. Based on the building information model (BIM) and four-dimensional (4D) technology, this paper proposes an improved structural analysis method, which can generate the structural geometry, resistance model, and loading conditions automatically through a close interlinking of the schedule information, architectural model, and material properties. The method was applied to a safety analysis during a continuous and dynamic simulation of the entire construction process. The results show that the organic combination of BIM, 4D technology, construction simulation, and safety analysis of time-dependent structures is feasible and practical. This research also lays a foundation for further research on building lifecycle management by combining architectural design, structural analysis, and construction management.

  2. Representing Uncertainty on Model Analysis Plots

    Science.gov (United States)

    Smith, Trevor I.

    2016-01-01

    Model analysis provides a mechanism for representing student learning as measured by standard multiple-choice surveys. The model plot contains information regarding both how likely students in a particular class are to choose the correct answer and how likely they are to choose an answer consistent with a well-documented conceptual model.…

  3. Transcriptome analysis in non-model species: a new method for the analysis of heterologous hybridization on microarrays

    Directory of Open Access Journals (Sweden)

    Jouventin Pierre

    2010-05-01

    Full Text Available Abstract Background Recent developments in high-throughput methods for analyzing transcriptomic profiles are promising for many areas of biology, including ecophysiology. However, although commercial microarrays are available for most common laboratory models, transcriptome analysis in non-traditional model species still remains a challenge. Indeed, the signal resulting from heterologous hybridization is low and difficult to interpret because of the weak complementarity between probe and target sequences, especially when no microarray dedicated to a genetically close species is available. Results We show here that transcriptome analysis in a species genetically distant from laboratory models is made possible by using MAXRS, a new method for analyzing heterologous hybridization on microarrays. This method takes advantage of the design of several commercial microarrays, in which different probes target the same transcript. To illustrate and test this method, we analyzed the transcriptome of king penguin pectoralis muscle hybridized to Affymetrix chicken microarrays, the two organisms being separated by an evolutionary distance of approximately 100 million years. The differential gene expression observed between different physiological situations computed by MAXRS was confirmed by real-time PCR for 10 of the 11 genes tested. Conclusions MAXRS appears to be an appropriate method for gene expression analysis under heterologous hybridization conditions.

  4. Heterogeneous modelling and finite element analysis of the femur

    Directory of Open Access Journals (Sweden)

    Zhang Binkai

    2017-01-01

    Full Text Available As the largest and longest bone in the human body, the femur has important research value and application prospects. This paper introduces a fast reconstruction method using the Mimics and ANSYS software packages to realize heterogeneous modelling of the femur according to the HU (Hounsfield unit) distribution of the CT series, and simulates it in various situations by finite element analysis to study the mechanical characteristics of the femur. The heterogeneous femoral model shows the distribution of bone mineral density and material properties, which can be used to support the diagnosis and treatment of bone diseases. The stress concentration positions of the femur under different conditions can be calculated in the simulation, which can provide a reference for the design and material selection of prostheses.

  5. Sensitivity analysis using two-dimensional models of the Whiteshell geosphere

    Energy Technology Data Exchange (ETDEWEB)

    Scheier, N. W.; Chan, T.; Stanchell, F. W.

    1992-12-01

    As part of the assessment of the environmental impact of disposing of immobilized nuclear fuel waste in a vault deep within plutonic rock, detailed modelling of groundwater flow, heat transport, and contaminant transport through the geosphere is being performed using the MOTIF finite-element computer code. The first geosphere model is being developed using data from the Whiteshell Research Area, with a hypothetical disposal vault at a depth of 500 m. This report briefly describes the conceptual model and then describes in detail the two-dimensional simulations used to help initially define an adequate three-dimensional representation, select a suitable form for the simplified model to be used in the overall systems assessment with the SYVAC computer code, and perform some sensitivity analysis. The sensitivity analysis considers variations in the rock layer properties, variations in fracture zone configurations, the impact of grouting a vault/fracture zone intersection, and variations in boundary conditions. This study shows that the configuration of major fracture zones can have a major influence on groundwater flow patterns. The flows in the major fracture zones can have high velocities and large volumes. The proximity of the radionuclide source to a major fracture zone may strongly influence the time it takes for a radionuclide to be transported to the surface. (auth)

  6. Comparing model-based and model-free analysis methods for QUASAR arterial spin labeling perfusion quantification.

    Science.gov (United States)

    Chappell, Michael A; Woolrich, Mark W; Petersen, Esben T; Golay, Xavier; Payne, Stephen J

    2013-05-01

    Amongst the various implementations of arterial spin labeling MRI methods for quantifying cerebral perfusion, the QUASAR method is unique. By using a combination of labeling with and without flow suppression gradients, the QUASAR method offers the separation of macrovascular and tissue signals. This permits local arterial input functions to be defined and "model-free" analysis, using numerical deconvolution, to be used. However, it remains unclear whether arterial spin labeling data are best treated using model-free or model-based analysis. This work provides a critical comparison of these two approaches for QUASAR arterial spin labeling in the healthy brain. An existing two-component (arterial and tissue) model was extended to the mixed flow suppression scheme of QUASAR to provide an optimal model-based analysis. The model-based analysis was extended to incorporate dispersion of the labeled bolus, generally regarded as the major source of discrepancy between the two analysis approaches. Model-free and model-based analyses were compared for perfusion quantification including absolute measurements, uncertainty estimation, and spatial variation in cerebral blood flow estimates. Major sources of discrepancies between model-free and model-based analysis were attributed to the effects of dispersion and the degree to which the two methods can separate macrovascular and tissue signal. Copyright © 2012 Wiley Periodicals, Inc.

  7. ANALYSIS AND MODELING OF GENEVA MECHANISM

    Directory of Open Access Journals (Sweden)

    HARAGA Georgeta

    2015-06-01

    Full Text Available The paper presents some theoretical and practical aspects of the finite element analysis and modelling of a Geneva mechanism with four slots, using the CATIA graphics program. This type of mechanism is an example of intermittent gearing that translates a continuous rotation into an intermittent rotary motion. It consists of alternate periods of motion and rest without reversing direction. In this paper, the design parameters that specify a Geneva mechanism, such as the number of driving cranks, number of slots, wheel diameter, and pin diameter, are defined precisely. Finite element analysis (FEA) can be used for creating a finite element model (preprocessing), visualizing the analysis results (postprocessing), and invoking other solvers for processing.

  8. Estimating carbon and showing impacts of drought using satellite data in regression-tree models

    Science.gov (United States)

    Boyte, Stephen; Wylie, Bruce K.; Howard, Danny; Dahal, Devendra; Gilmanov, Tagir G.

    2018-01-01

    Integrating spatially explicit biogeophysical and remotely sensed data into regression-tree models enables the spatial extrapolation of training data over large geographic spaces, allowing a better understanding of broad-scale ecosystem processes. The current study presents annual gross primary production (GPP) and annual ecosystem respiration (RE) for 2000–2013 in several short-statured vegetation types, using carbon flux data from towers located strategically across the conterminous United States (CONUS). We calculate carbon fluxes (annual net ecosystem production [NEP]) for each year in our study period, which includes 2012, when drought and higher-than-normal temperatures influenced vegetation productivity in large parts of the study area. We present and analyse carbon flux dynamics in the CONUS to better understand how drought affects GPP, RE, and NEP. Model accuracy metrics show strong correlation coefficients (r ≥ 94%) between training and estimated data for both GPP and RE. Overall, average annual GPP, RE, and NEP are relatively constant throughout the study period except during 2012, when almost 60% less carbon was sequestered than normal. These results allow us to conclude that this modelling method effectively estimates carbon dynamics through time and allows the exploration of the impacts of meteorological anomalies and vegetation types on carbon dynamics.
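
    The core regression-tree step, learning a piecewise-constant mapping from biogeophysical predictors at flux towers to an annual flux and then extrapolating it over gridded predictors, can be sketched as follows; the synthetic predictors and the specific tree settings are illustrative only.

      import numpy as np
      from sklearn.tree import DecisionTreeRegressor

      rng = np.random.default_rng(7)

      # Synthetic "tower" training data: predictors could be NDVI, precipitation,
      # temperature, etc.; the target is an annual flux such as GPP.
      X_towers = rng.uniform(0, 1, size=(200, 3))
      gpp = (800 * X_towers[:, 0] + 300 * X_towers[:, 1]
             - 150 * X_towers[:, 2] + rng.normal(0, 30, size=200))

      tree = DecisionTreeRegressor(max_depth=6, min_samples_leaf=5, random_state=0)
      tree.fit(X_towers, gpp)
      print(f"training r = {np.corrcoef(tree.predict(X_towers), gpp)[0, 1]:.2f}")

      # Spatial extrapolation: apply the trained tree to every grid cell's predictors.
      X_grid = rng.uniform(0, 1, size=(10_000, 3))
      gpp_map = tree.predict(X_grid)            # one GPP estimate per grid cell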

  9. Model Performance Evaluation and Scenario Analysis (MPESA)

    Science.gov (United States)

    Model Performance Evaluation and Scenario Analysis (MPESA) assesses the performance with which models predict time series data. The tool was developed for the Hydrological Simulation Program-Fortran (HSPF) and the Stormwater Management Model (SWMM).

  10. Development of Wolsong Unit 2 Containment Analysis Model

    Energy Technology Data Exchange (ETDEWEB)

    Hoon, Choi [Korea Hydro and Nuclear Power Co., Ltd., Daejeon (Korea, Republic of); Jin, Ko Bong; Chan, Park Young [Hanbat National Univ., Daejeon (Korea, Republic of)

    2014-05-15

    To prepare for the full-scope safety analysis of Wolsong unit 2 with modified fuel, input decks for various objectives, readable by GOTHIC 7.2b(QA), were developed and tested in steady-state simulations. A detailed nodalization of 39 control volumes and 92 flow paths was constructed to determine the differential pressure across internal walls and the hydrogen concentration and distribution inside containment. A lumped model with 15 control volumes and 74 flow paths has also been developed to reduce the computer run time for assessments in which the analysis results are not sensitive to the detailed thermal-hydraulic distribution inside containment, such as peak pressure, pressure-dependent signals, and radionuclide release. The input data files provide simplified representations of the geometric layout of the containment building (volumes, dimensions, flow paths, doors, panels, etc.) and the performance characteristics of the various containment subsystems. The parameter values are based on best-estimate or design values, while the analysis values are determined conservatively and may differ between analysis objectives. Basic input decks of Wolsong unit 2 were thus developed for various analysis purposes with GOTHIC 7.2b(QA), with two types of models prepared depending on the objective. The detailed model represents each confined room in the containment as a separate node, with all geometric data based on the drawings of Wolsong unit 2. The developed containment models simulate the steady state well for the designated initial conditions. These base models will be used for Wolsong unit 2 whenever a full-scope safety analysis is needed.

  11. Regression and regression analysis time series prediction modeling on climate data of quetta, pakistan

    International Nuclear Information System (INIS)

    Jafri, Y.Z.; Kamal, L.

    2007-01-01

    Various statistical techniques were applied to five years of data (1998-2002) on average humidity, rainfall, and maximum and minimum temperatures. Relationships for regression analysis time series (RATS) were developed to determine the overall trends of these climate parameters, on the basis of which forecast models can be corrected and modified. We computed the coefficient of determination as a measure of goodness of fit for our polynomial regression analysis time series (PRATS). Multiple linear regression (MLR) and multiple linear regression analysis time series (MLRATS) correlations were also developed for deciphering the interdependence of the weather parameters. Spearman's rank correlation and the Goldfeld-Quandt test were used to check the uniformity or non-uniformity of variances in our fit to the polynomial regression (PR). The Breusch-Pagan test was applied to MLR and MLRATS, respectively, and indicated homoscedasticity. We also employed Bartlett's test for homogeneity of variances on five years of rainfall and humidity data, respectively, which showed that the variances in the rainfall data were not homogeneous, while those for humidity were homogeneous. Our results on regression and regression analysis time series show the best fit for prediction modelling of the climate data of Quetta, Pakistan. (author)
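
    A minimal sketch of the polynomial-trend-with-R² step on a monthly series (synthetic data standing in for the Quetta observations):

      import numpy as np

      rng = np.random.default_rng(0)
      months = np.arange(60)                                   # five years of months
      humidity = (40 + 0.1 * months
                  + 5 * np.sin(2 * np.pi * months / 12)
                  + rng.normal(0, 2, size=60))                 # synthetic series

      coeffs = np.polyfit(months, humidity, deg=3)             # cubic trend fit
      fitted = np.polyval(coeffs, months)

      ss_res = np.sum((humidity - fitted) ** 2)
      ss_tot = np.sum((humidity - humidity.mean()) ** 2)
      r_squared = 1 - ss_res / ss_tot                          # goodness of fit
      print(f"R^2 of the cubic trend: {r_squared:.3f}")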

  12. Degradation Assessment and Fault Diagnosis for Roller Bearing Based on AR Model and Fuzzy Cluster Analysis

    Directory of Open Access Journals (Sweden)

    Lingli Jiang

    2011-01-01

    Full Text Available This paper proposes a new approach combining an autoregressive (AR) model and fuzzy cluster analysis for bearing fault diagnosis and degradation assessment. The AR model is an effective approach for extracting fault features, and is generally applied to stationary signals. However, the fault vibration signals of a roller bearing are non-stationary and non-Gaussian. To address this problem, the parameters of the AR model are estimated based on higher-order cumulants. The AR parameters are then taken as the feature vectors, and fuzzy cluster analysis is applied to perform classification and pattern recognition. Experimental results show that the proposed method can be used to identify various types and severities of bearing faults. This study is significant for non-stationary and non-Gaussian signal analysis, fault diagnosis, and degradation assessment.
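
    A simplified sketch of the feature-extraction half of this pipeline: fit AR coefficients to each vibration segment and cluster the coefficient vectors. K-means stands in here for fuzzy c-means, and ordinary least squares stands in for the higher-order-cumulant estimator; both substitutions, and all signal parameters, are illustrative.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(1)

      def ar_features(signal, order=4):
          """Least-squares AR(order) coefficients of a 1-D signal."""
          n = len(signal)
          X = np.column_stack([signal[order - k - 1 : n - k - 1]
                               for k in range(order)])
          y = signal[order:]
          coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
          return coeffs

      def make_segment(pole):
          """Synthetic AR(1)-like vibration segment; 'pole' mimics fault severity."""
          s = np.zeros(1024)
          for t in range(1, 1024):
              s[t] = pole * s[t - 1] + rng.normal()
          return s

      # Two simulated conditions: healthy-ish (pole 0.3) and faulty-ish (pole 0.9).
      segments = ([make_segment(0.3) for _ in range(20)]
                  + [make_segment(0.9) for _ in range(20)])
      features = np.array([ar_features(s) for s in segments])

      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
      print(labels)   # the two conditions should separate into the two clusters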

  13. Agent-based financial dynamics model from stochastic interacting epidemic system and complexity analysis

    International Nuclear Information System (INIS)

    Lu, Yunfan; Wang, Jun; Niu, Hongli

    2015-01-01

    An agent-based financial stock price model is developed and investigated using a stochastic interacting epidemic system, one of the systems of statistical physics that has been used to model the spread of an epidemic or a forest fire. Numerical and statistical analyses are performed on the simulated returns of the proposed financial model. Complexity properties of the financial time series are explored by calculating the correlation dimension and using the modified multiscale entropy method. In order to verify the rationality of the financial model, the real stock market indexes Shanghai Composite Index and Shenzhen Component Index are studied in comparison with the simulation data of the proposed model for different infectiousness parameters. The empirical research reveals that this financial model can reproduce some important features of the real stock markets. - Highlights: • A new agent-based financial price model is developed from a stochastic interacting epidemic system. • The structure of the proposed model allows simulation of the financial dynamics. • Correlation dimension and MMSE are applied to the complexity analysis of financial time series. • Empirical results show the rationality of the proposed financial model.

  14. Agent-based financial dynamics model from stochastic interacting epidemic system and complexity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Yunfan, E-mail: yunfanlu@yeah.net; Wang, Jun; Niu, Hongli

    2015-06-12

    An agent-based financial stock price model is developed and investigated using a stochastic interacting epidemic system, one of the systems of statistical physics that has been used to model the spread of an epidemic or a forest fire. Numerical and statistical analyses are performed on the simulated returns of the proposed financial model. Complexity properties of the financial time series are explored by calculating the correlation dimension and using the modified multiscale entropy method. In order to verify the rationality of the financial model, the real stock market indexes Shanghai Composite Index and Shenzhen Component Index are studied in comparison with the simulation data of the proposed model for different infectiousness parameters. The empirical research reveals that this financial model can reproduce some important features of the real stock markets. - Highlights: • A new agent-based financial price model is developed from a stochastic interacting epidemic system. • The structure of the proposed model allows simulation of the financial dynamics. • Correlation dimension and MMSE are applied to the complexity analysis of financial time series. • Empirical results show the rationality of the proposed financial model.

  15. Model Selection in Data Analysis Competitions

    DEFF Research Database (Denmark)

    Wind, David Kofoed; Winther, Ole

    2014-01-01

    The use of data analysis competitions for selecting the most appropriate model for a problem is a recent innovation in the field of predictive machine learning. Two of the most well-known examples of this trend are the Netflix Competition and, more recently, the competitions hosted on the online platform Kaggle. In this paper, we state and try to verify a set of qualitative hypotheses about predictive modelling, both in general and in the scope of data analysis competitions. To verify our hypotheses, we look at previous competitions and their outcomes, use qualitative interviews with top performers from Kaggle, and draw on previous personal experiences from competing in Kaggle competitions. The stated hypotheses about feature engineering, ensembling, overfitting, model complexity, and evaluation metrics give indications and guidelines on how to select a proper model for performing well in a competition.

  16. Experimental analysis of a nuclear reactor prestressed concrete pressure vessels model

    International Nuclear Information System (INIS)

    Vallin, C.

    1980-01-01

    A comprehensive analysis was made of the performance of each set of sensors used to measure the strain and displacement of a 1/20-scale prestressed concrete pressure vessel (PCPV) model tested at the Instituto de Pesquisas Energeticas e Nucleares (IPEN). Among the three kinds of sensors used (strain gages, displacement transducers, and load cells), the displacement transducers showed the best behavior. The displacement transducer data were statistically analysed, and a linear behavior of the model was observed during the first pressurization tests. By means of a linear statistical correlation between experimental and expected theoretical data, it was found that the model loses linearity at a pressure between 110 and 125 atm. (Author) [pt

  17. Model Based Analysis and Test Generation for Flight Software

    Science.gov (United States)

    Pasareanu, Corina S.; Schumann, Johann M.; Mehlitz, Peter C.; Lowry, Mike R.; Karsai, Gabor; Nine, Harmon; Neema, Sandeep

    2009-01-01

    We describe a framework for model-based analysis and test case generation in the context of a heterogeneous model-based development paradigm that uses and combines MathWorks and UML 2.0 models and the associated code generation tools. This paradigm poses novel challenges to analysis and test case generation that, to the best of our knowledge, have not been addressed before. The framework is based on a common intermediate representation for different modeling formalisms, and it leverages and extends model checking and symbolic execution tools for model analysis and test case generation, respectively. We discuss the application of our framework to software models for a NASA flight mission.

  18. Clinical laboratory as an economic model for business performance analysis.

    Science.gov (United States)

    Buljanović, Vikica; Patajac, Hrvoje; Petrovecki, Mladen

    2011-08-15

    To perform a SWOT (strengths, weaknesses, opportunities, and threats) analysis of a clinical laboratory as an economic model that may be used to improve the business performance of laboratories by removing weaknesses, minimizing threats, and using external opportunities and internal strengths. The impact of possible threats and weaknesses on the business performance of the Clinical Laboratory at Našice General County Hospital, and the use of strengths and opportunities to improve operating profit, were simulated using models created on the basis of the SWOT analysis results. The operating profit, as a measure of the profitability of the clinical laboratory, was defined as total revenue minus total expenses and presented using a profit and loss account. Changes in the input parameters of the profit and loss account for 2008 were determined using opportunities and potential threats, and an economic sensitivity analysis was made using changes in the key parameters. The profit and loss account and the economic sensitivity analysis were tools for quantifying the impact of changes in revenues and expenses on the business operations of the clinical laboratory. The results of the simulation models showed that the operating profit of €470,723 in 2008 could be reduced to only €21,542 if all possible threats became a reality and the current weaknesses remained the same. Operating profit could instead be increased to €535,804 if the laboratory's strengths and opportunities were utilized as much as possible. If both the opportunities and threats became a reality, the operating profit would decrease by €384,465. This type of modeling may be used to monitor the business operations of any clinical laboratory and improve its financial situation by implementing changes in the next fiscal year.

  19. The null hypothesis of GSEA, and a novel statistical model for competitive gene set analysis

    DEFF Research Database (Denmark)

    Debrabant, Birgit

    2017-01-01

    MOTIVATION: Competitive gene set analysis intends to assess whether a specific set of genes is more associated with a trait than the remaining genes. However, the statistical models assumed to date to underlie these methods do not enable a clear-cut formulation of the competitive null hypothesis. This is a major handicap to the interpretation of results obtained from a gene set analysis. RESULTS: This work presents a hierarchical statistical model based on the notion of dependence measures, which overcomes this problem. The two levels of the model naturally reflect the modular structure of many gene set analysis methods. We apply the model to show that the popular GSEA method, which recently has been claimed to test the self-contained null hypothesis, actually tests the competitive null if the weight parameter is zero. However, for this result to hold strictly, the choice of the dependence measures...

  20. Perturbation analysis of nonlinear matrix population models

    Directory of Open Access Journals (Sweden)

    Hal Caswell

    2008-03-01

    Full Text Available Perturbation analysis examines the response of a model to changes in its parameters. It is commonly applied to population growth rates calculated from linear models, but there has been no general approach to the analysis of nonlinear models. Nonlinearities in demographic models may arise due to density-dependence, frequency-dependence (in 2-sex models), feedback through the environment or the economy, and recruitment subsidy due to immigration, or from the scaling inherent in calculations of proportional population structure. This paper uses matrix calculus to derive the sensitivity and elasticity of equilibria, cycles, ratios (e.g. dependency ratios), age averages and variances, temporal averages and variances, life expectancies, and population growth rates, for both age-classified and stage-classified models. Examples are presented, applying the results to both human and non-human populations.
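
    For the linear case that this paper generalizes, the classical sensitivity and elasticity of the population growth rate \lambda (the dominant eigenvalue of the projection matrix A) to an entry a_{ij} are standard results, quoted here for orientation rather than taken from the paper:

      s_{ij} = \frac{\partial \lambda}{\partial a_{ij}} = \frac{v_i w_j}{\langle \mathbf{v}, \mathbf{w} \rangle},
      \qquad
      e_{ij} = \frac{a_{ij}}{\lambda} \, \frac{\partial \lambda}{\partial a_{ij}}

    where \mathbf{w} and \mathbf{v} are the right and left eigenvectors of A (the stable stage distribution and the reproductive values). The paper's matrix-calculus approach extends such derivatives to the nonlinear quantities listed above.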

  1. Binding free energy analysis of protein-protein docking model structures by evERdock.

    Science.gov (United States)

    Takemura, Kazuhiro; Matubayasi, Nobuyuki; Kitao, Akio

    2018-03-14

    To aid the evaluation of protein-protein complex model structures generated by protein docking prediction (decoys), we previously developed a method to calculate the binding free energies of complexes. The method combines a short (2 ns) all-atom molecular dynamics simulation with explicit solvent and solution theory in the energy representation (ER). We showed that this method successfully selects structures similar to the native complex structure (near-native decoys) as the lowest-binding-free-energy structures. In our current work, we applied this method (evERdock) to 100 or 300 model structures of four protein-protein complexes. The crystal structures and the near-native decoys showed the lowest binding free energies of all the examined structures, indicating that evERdock can successfully evaluate decoys. Several decoys that show a low interface root-mean-square distance but a relatively high binding free energy were also identified. Analysis of the fraction of native contacts, hydrogen bonds, and salt bridges at the protein-protein interface indicated that these decoys were insufficiently optimized at the interface. After optimizing the interactions around the interface by including interfacial water molecules, the binding free energies of these decoys improved. We also investigated the effect of solute entropy on the binding free energy and found that including the entropy term, calculated by normal mode analysis, does not necessarily improve the evaluation of decoys.

  2. Quantitative 2- and 3-dimensional analysis of pharmacokinetic model-derived variables for breast lesions in dynamic, contrast-enhanced MR mammography

    International Nuclear Information System (INIS)

    Hauth, E.A.M.; Jaeger, H.J.; Maderwald, S.; Muehler, A.; Kimmig, R.; Forsting, M.

    2008-01-01

    Purpose: Two- and three-dimensional evaluation of quantitative pharmacokinetic parameters derived from the Tofts model of dynamic contrast enhancement of lesions in MR mammography. Materials and methods: In 95 patients, MR mammography revealed 127 suspicious lesions. The initial rate of enhancement was coded by color intensity, and the post-initial enhancement change by color hue. 2D and 3D analyses of the distribution of color hue and intensity, vascular permeability, and extracellular volume were performed. Results: In 2D, malignant lesions showed significantly higher numbers of bright red, medium red, dark red, bright green, medium green, dark green, and bright blue pixels than benign lesions. In 3D, statistically significant differences between malignant and benign lesions were found for all of these parameters. Vascular permeability was significantly higher in malignant lesions than in benign lesions. A regression model using the 3D data found that the best discriminator between malignant and benign lesions was the combined number of voxels and medium green pixels, with a sensitivity of 79.4% and a specificity of 83.1%. Conclusions: Quantitative analysis of pharmacokinetic variables of contrast kinetics showed significant differences between malignant and benign lesions. 3D analysis provided better diagnostic differentiation between malignant and benign lesions than 2D analysis. The parametric analysis using a pharmacokinetic model allows objective analysis of contrast enhancement in breast lesions.

  3. Machine learning of frustrated classical spin models. I. Principal component analysis

    Science.gov (United States)

    Wang, Ce; Zhai, Hui

    2017-10-01

    This work aims at determining whether artificial intelligence can recognize a phase transition without prior human knowledge. If this were successful, it could be applied to, for instance, analyzing data from the quantum simulation of unsolved physical models. Toward this goal, we first need to apply the machine learning algorithm to well-understood models and see whether the outputs are consistent with our prior knowledge, which serves as the benchmark for this approach. In this work, we feed the computer data generated by the classical Monte Carlo simulation of the XY model on frustrated triangular and union jack lattices, which has two order parameters and exhibits two phase transitions. We show that the outputs of the principal component analysis agree very well with our understanding of the different orders in the different phases, and that the temperature dependences of the major components detect the nature and the locations of the phase transitions. Our work offers promise for using machine learning techniques to study sophisticated statistical models, and our results can be further improved by using principal component analysis with kernel tricks and the neural network method.
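
    The basic workflow, flattening Monte Carlo spin configurations into feature vectors and inspecting the leading principal components as a function of temperature, can be sketched as below. The random "configurations" here are placeholders for actual Monte Carlo samples of the frustrated XY model, which would require a full simulation.

      import numpy as np
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(3)
      L = 16                     # linear lattice size; L*L XY spins per sample

      def fake_samples(temperature, n=100):
          """Placeholder for Monte Carlo sampling: ordered at low T, disordered at high T.

          Each XY spin is an angle; (cos, sin) pairs are used as PCA features so
          the representation is single-valued in the angle.
          """
          spread = temperature                  # crude stand-in for thermal disorder
          theta = rng.normal(0.0, spread, size=(n, L * L))
          return np.concatenate([np.cos(theta), np.sin(theta)], axis=1)

      temperatures = [0.1, 0.5, 1.0, 2.0]
      data = np.vstack([fake_samples(T) for T in temperatures])

      pca = PCA(n_components=2).fit(data)
      components = pca.transform(data)

      # The leading component separates ordered from disordered samples; its
      # temperature dependence is the kind of signal used to locate a transition.
      for i, T in enumerate(temperatures):
          block = components[i * 100:(i + 1) * 100, 0]
          print(f"T = {T:4.1f}:  <|PC1|> = {np.abs(block).mean():.3f}")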

  4. Global sensitivity analysis of computer models with functional inputs

    International Nuclear Information System (INIS)

    Iooss, Bertrand; Ribatet, Mathieu

    2009-01-01

    Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes with scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on computer codes with large CPU times, which need a preliminary metamodeling step before the sensitivity analysis can be performed. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' makes it possible to estimate the sensitivity indices of each scalar model input, while the 'dispersion model' makes it possible to derive the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates nuclear fuel irradiation.
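
    For reference, the first-order and total Sobol indices of a scalar input X_i for an output Y = f(X_1, \dots, X_d) are defined as follows (standard definitions, not specific to this paper):

      S_i = \frac{\operatorname{Var}\left( \mathbb{E}\left[ Y \mid X_i \right] \right)}{\operatorname{Var}(Y)},
      \qquad
      S_{T_i} = 1 - \frac{\operatorname{Var}\left( \mathbb{E}\left[ Y \mid X_{\sim i} \right] \right)}{\operatorname{Var}(Y)}

    where X_{\sim i} denotes all inputs except X_i. In the joint-modeling approach above, the mean model yields estimates of the scalar inputs' S_i, while the dispersion model accounts for the output variance attributed to the functional inputs.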

  5. Structural modeling and analysis of an effluent treatment process for electroplating--a graph theoretic approach.

    Science.gov (United States)

    Kumar, Abhishek; Clement, Shibu; Agrawal, V P

    2010-07-15

    An attempt is made to address a few ecological and environmental issues by developing different structural models for an effluent treatment system for electroplating. The effluent treatment system is defined with the help of different subsystems contributing to waste minimization. A hierarchical tree and a block diagram showing all possible interactions among subsystems are proposed. These non-mathematical diagrams are converted into mathematical models for design improvement, analysis, comparison, storage and retrieval, and commercial off-the-shelf purchases of different subsystems. This is achieved by developing a graph theoretic model, matrix models, and a variable permanent function model. Analysis is carried out by permanent function, hierarchical tree, and block diagram methods. Storage and retrieval are done using the matrix models. The methodology is illustrated with the help of an example. Benefits to the electroplaters/end users are identified. 2010 Elsevier B.V. All rights reserved.
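
    The "permanent function" referred to above is the permanent of a system matrix, which, unlike the determinant, sums all permutation terms with positive signs and so retains every subsystem interaction. A naive sketch (fine for the small matrices typical of such system models, but exponential in n) is shown below; the example matrix is hypothetical.

      from itertools import permutations
      from math import prod

      def permanent(a):
          """Permanent of a square matrix: like the determinant but with all + signs."""
          n = len(a)
          return sum(prod(a[i][p[i]] for i in range(n))
                     for p in permutations(range(n)))

      # Hypothetical 3-subsystem matrix: diagonal = subsystem measures,
      # off-diagonal = interaction strengths.
      A = [[5, 1, 0],
           [1, 4, 2],
           [0, 2, 6]]
      print(permanent(A))   # a single scalar index characterizing the system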

  6. OpenFLUX: efficient modelling software for 13C-based metabolic flux analysis

    Directory of Open Access Journals (Sweden)

    Nielsen Lars K

    2009-05-01

    Full Text Available Abstract Background The quantitative analysis of metabolic fluxes, i.e., the in vivo activities of intracellular enzymes and pathways, provides key information on biological systems in systems biology and metabolic engineering. It is based on a comprehensive approach combining (i) tracer cultivation on 13C substrates, (ii) 13C labelling analysis by mass spectrometry, and (iii) mathematical modelling for experimental design, data processing, flux calculation, and statistics. Whereas the cultivation and the analytical parts are fairly advanced, a lack of appropriate modelling software solutions for all modelling aspects in flux studies is limiting the application of metabolic flux analysis. Results We have developed OpenFLUX as a user-friendly, yet flexible, software application for small- and large-scale 13C metabolic flux analysis. The application is based on the new Elementary Metabolite Unit (EMU) framework, significantly enhancing computation speed for flux calculation. From a simple notation of metabolic reaction networks defined in a spreadsheet, the OpenFLUX parser automatically generates MATLAB-readable metabolite and isotopomer balances, thus strongly facilitating model creation. The model can be used to perform experimental design, parameter estimation, and sensitivity analysis, either using the built-in gradient-based search or Monte Carlo algorithms or in user-defined algorithms. Exemplified for a microbial flux study with 71 reactions, 8 free flux parameters, and the mass isotopomer distributions of 10 metabolites, OpenFLUX allowed the EMU-based model to be compiled automatically from an Excel file containing the metabolic reactions and carbon transfer mechanisms, showing its user-friendliness. It reliably reproduced the published data, and optimum flux distributions for the network under study were found quickly. Conclusion We have developed a fast, accurate application to perform steady-state 13C metabolic flux analysis. OpenFLUX will strongly facilitate and...

  7. 3D Building Models Segmentation Based on K-Means++ Cluster Analysis

    Science.gov (United States)

    Zhang, C.; Mao, B.

    2016-10-01

    3D mesh model segmentation has been drawing increasing attention in the digital geometry processing field in recent years. An original 3D mesh model needs to be divided into separate meaningful parts or surface patches based on certain standards to support reconstruction, compression, texture mapping, model retrieval, etc. Segmentation is therefore a key problem in 3D mesh model processing. In this paper, we propose a method to segment Collada (a type of mesh model) 3D building models into meaningful parts using cluster analysis. Common clustering methods segment 3D mesh models by K-means, whose performance heavily depends on the randomized initial seed points (i.e., centroids); different randomized centroids can give quite different results. Therefore, we improved the existing method and used the K-means++ clustering algorithm to solve this problem. Our experiments show that K-means++ improves both the speed and the accuracy of K-means, and achieves good and meaningful results.
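
    The key difference from plain K-means is the seeding step: K-means++ picks the first centroid uniformly at random and each subsequent centroid with probability proportional to the squared distance to the nearest centroid already chosen. A minimal NumPy sketch of that seeding follows (illustrative data; a real pipeline would feed in per-face or per-vertex features of the mesh):

      import numpy as np

      def kmeanspp_seeds(points, k, rng):
          """K-means++ seeding: spread initial centroids out proportionally to D(x)^2."""
          seeds = [points[rng.integers(len(points))]]          # first seed: uniform
          for _ in range(k - 1):
              d2 = np.min(
                  [np.sum((points - s) ** 2, axis=1) for s in seeds], axis=0
              )                                                # sq. dist to nearest seed
              probs = d2 / d2.sum()
              seeds.append(points[rng.choice(len(points), p=probs)])
          return np.array(seeds)

      rng = np.random.default_rng(0)
      # Stand-in for mesh-derived features (e.g., face centroids of a building model).
      points = np.vstack([rng.normal(c, 0.3, size=(100, 3))
                          for c in ([0, 0, 0], [5, 0, 0], [0, 5, 0])])
      print(kmeanspp_seeds(points, k=3, rng=rng))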

  8. 3D BUILDING MODELS SEGMENTATION BASED ON K-MEANS++ CLUSTER ANALYSIS

    Directory of Open Access Journals (Sweden)

    C. Zhang

    2016-10-01

    Full Text Available 3D mesh model segmentation has been drawing increasing attention in the digital geometry processing field in recent years. An original 3D mesh model needs to be divided into separate meaningful parts or surface patches based on certain standards to support reconstruction, compression, texture mapping, model retrieval, etc. Segmentation is therefore a key problem in 3D mesh model processing. In this paper, we propose a method to segment Collada (a type of mesh model) 3D building models into meaningful parts using cluster analysis. Common clustering methods segment 3D mesh models by K-means, whose performance heavily depends on the randomized initial seed points (i.e., centroids); different randomized centroids can give quite different results. Therefore, we improved the existing method and used the K-means++ clustering algorithm to solve this problem. Our experiments show that K-means++ improves both the speed and the accuracy of K-means, and achieves good and meaningful results.

  9. About the use of rank transformation in sensitivity analysis of model output

    International Nuclear Information System (INIS)

    Saltelli, Andrea; Sobol', Ilya M

    1995-01-01

    Rank transformations are frequently employed in numerical experiments involving a computational model, especially in the context of sensitivity and uncertainty analyses. Response surface replacement and parameter screening are tasks which may benefit from a rank transformation. Ranks can cope with nonlinear (albeit monotonic) input-output distributions, allowing the use of linear regression techniques. Rank-transformed statistics are more robust, and provide a useful solution in the presence of long-tailed input and output distributions. As is known to practitioners, care must be employed when interpreting the results of such analyses, as any conclusion drawn using ranks does not translate easily to the original model. In the present note a heuristic approach is taken to explore, by way of practical examples, the effect of a rank transformation on the outcome of a sensitivity analysis. An attempt is made to identify trends, and to correlate these effects to a model taxonomy. Employing sensitivity indices, whereby the total variance of the model output is decomposed into a sum of terms of increasing dimensionality, we show that the main effect of the rank transformation is to increase the relative weight of the first-order terms (the 'main effects'), at the expense of the 'interactions' and 'higher-order interactions'. As a result, the influence of those parameters which influence the output mostly by way of interactions may be overlooked in an analysis based on the ranks. This difficulty increases with the dimensionality of the problem, and may lead to the failure of a rank-based sensitivity analysis. We suggest that models can be ranked, with respect to the complexity of their input-output relationship, by means of an 'Association' index I_y. I_y may complement the usual model coefficient of determination R_y^2 as a measure of model complexity for the purpose of uncertainty and sensitivity analysis.

  10. Sensitive analysis and modifications to reflood-related constitutive models of RELAP5

    International Nuclear Information System (INIS)

    Li Dong; Liu Xiaojing; Yang Yanhua

    2014-01-01

    Previous system code calculations revealed that the cladding temperature is underestimated and the quench front appears too early during the reflood process. To find out which parameters have an important effect on the results, a sensitivity analysis was performed on the parameters of the constitutive physical models. Based on phenomenological and theoretical analysis, four parameters were selected: the wall-to-vapor film boiling heat transfer coefficient, the wall-to-liquid film boiling heat transfer coefficient, the dry-wall interfacial friction coefficient, and the minimum droplet diameter. In order to improve the reflood simulation capability of the RELAP5 code, the film boiling heat transfer model and the dry-wall interfacial friction model, which are the models corresponding to those influential parameters, were studied. Modifications were made and installed into the RELAP5 code. Six FEBA tests were simulated with RELAP5 to study the predictive capability of the reflood-related physical models. A dispersed flow film boiling (DFFB) heat transfer model is applied when the void fraction is above 0.9, and a factor is multiplied into the post-CHF drag coefficient to fit the experiments better. Finally, the six FEBA tests were calculated again in order to assess the modifications. Better results were obtained, which demonstrates the advantage of the modified models. (author)

  11. Horizontal crash testing and analysis of model flatrols

    International Nuclear Information System (INIS)

    Dowler, H.J.; Soanes, T.P.T.

    1985-01-01

    To assess the behaviour of a full-scale flask and flatrol during a proposed demonstration impact into a tunnel abutment, a mathematical modelling technique was developed and validated. The work was performed at quarter scale and comprised both scale-model tests and mathematical analysis in one and two dimensions. Good agreement between the model test results of the 26.8 m/s (60 mph) abutment impacts and the mathematical analysis validated the modelling techniques. The modelling method may be used with confidence to predict the outcome of the proposed full-scale demonstration. (author)

  12. Economic analysis model for total energy and economic systems

    International Nuclear Information System (INIS)

    Shoji, Katsuhiko; Yasukawa, Shigeru; Sato, Osamu

    1980-09-01

    This report describes the framing of an economic analysis model developed as a tool for total energy systems. To forecast and analyze future energy systems, it is important to analyze the relationship between the energy system and the economic structure, and we prepared an economic analysis model suited for this purpose. Distinguishing features of our model are that it can analyze energy-related matters in more detail than other economic models, and that it can forecast long-term economic progress rather than short-term economic fluctuations. From the viewpoint of economics, our model is a long-term, multi-sectoral economic analysis model of the open Leontief type. The model gave appropriate results in fitting tests and forecasting estimations. (author)

  13. Scaling analysis and model estimation of solar corona index

    Science.gov (United States)

    Ray, Samujjwal; Ray, Rajdeep; Khondekar, Mofazzal Hossain; Ghosh, Koushik

    2018-04-01

    A monthly average solar green coronal index time series for the period from January 1939 to December 2008, collected from NOAA (the National Oceanic and Atmospheric Administration), is analysed in this paper from the perspective of scaling analysis and modelling. Smoothing and de-noising were performed using a suitable mother wavelet as a prerequisite. The Finite Variance Scaling Method (FVSM), the Higuchi method, rescaled range (R/S) analysis, and a generalized method were applied to calculate the scaling exponents and fractal dimensions of the time series. The autocorrelation function (ACF) is used to identify the autoregressive (AR) process, and the partial autocorrelation function (PACF) is used to determine the order of the AR model. Finally, a best-fit model is proposed using the Yule-Walker method, with supporting results on goodness of fit and the wavelet spectrum. The results reveal anti-persistent, short-range-dependent (SRD), self-similar behaviour with signatures of non-causality, non-stationarity, and nonlinearity in the data series. The model shows the best fit to the data under observation.
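
    The Yule-Walker step fits AR(p) coefficients by solving the Toeplitz system built from the sample autocovariances. A compact NumPy/SciPy sketch on a synthetic series (the order and data are illustrative, not the coronal index itself):

      import numpy as np
      from scipy.linalg import solve_toeplitz

      def yule_walker(x, order):
          """Solve the Yule-Walker equations for AR(order) coefficients."""
          x = np.asarray(x, dtype=float) - np.mean(x)
          n = len(x)
          # Biased sample autocovariances r[0..order]
          r = np.array([np.dot(x[: n - k], x[k:]) / n for k in range(order + 1)])
          phi = solve_toeplitz(r[:-1], r[1:])      # Toeplitz system R @ phi = r[1:]
          sigma2 = r[0] - np.dot(phi, r[1:])       # innovation variance
          return phi, sigma2

      # Synthetic AR(2) test series with known coefficients (0.6, -0.3).
      rng = np.random.default_rng(0)
      x = np.zeros(5000)
      for t in range(2, 5000):
          x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()

      phi, sigma2 = yule_walker(x, order=2)
      print(phi)      # should be close to [0.6, -0.3]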

  14. An Improved Rigid Multibody Model for the Dynamic Analysis of the Planetary Gearbox in a Wind Turbine

    Directory of Open Access Journals (Sweden)

    Wenguang Yang

    2016-01-01

    Full Text Available This paper proposes an improved rigid multibody model for the dynamic analysis of the planetary gearbox in a wind turbine. The improvements mainly include choosing the inertia frame as the reference frame of the carrier, the ring, and the sun, and adding a new degree of freedom for each planet. An element assembly method is introduced to build the model, and a time-varying mesh stiffness model is presented. A planetary gear case study is employed to verify the validity of the improved model. Comparisons between the improved model and the traditional model show that the natural characteristics are very close; the improved model obtains the correct equivalent moment of inertia of the planetary gear in the transient simulation, and all the rotation speeds satisfy the transmission relationships well; harmonic resonance and resonance modulation phenomena can be found in their vibration signals. The improved model is applied to a multistage gearbox dynamics analysis to show the prospects of the model. Modal analysis and transient analysis, with and without the time-varying mesh stiffness considered, are conducted. The rotation speeds from the transient analysis are consistent with theory, and resonance modulation can be found in the vibration signals.

  15. Development and Analysis of Patient-Based Complete Conducting Airways Models.

    Directory of Open Access Journals (Sweden)

    Rafel Bordas

    Full Text Available The analysis of high-resolution computed tomography (CT) images of the lung depends on inter-subject differences in airway geometry. The application of computational models in understanding the significance of these differences has previously been shown to be a useful tool in biomedical research. Studies using image-based geometries alone are limited to the analysis of the central airways, down to generations 6-10, as other airways are not visible on high-resolution CT. However, airways distal to this, often termed the small airways, are known to play a crucial role in common airway diseases such as asthma and chronic obstructive pulmonary disease (COPD). Other studies have incorporated an algorithmic approach to extrapolate CT-segmented airways in order to obtain a complete conducting airway tree down to the level of the acinus. These models have typically been used for mechanistic studies, but they also have the potential to be used in a patient-specific setting. In the current study, an image analysis and modelling pipeline was developed and applied to a number of healthy (n = 11) and asthmatic (n = 24) CT patient scans to produce complete patient-based airway models to the acinar level (mean terminal generation 15.8 ± 0.47). The resulting models are analysed in terms of morphometric properties and are seen to be consistent with previous work. A number of global clinical lung function measures are compared to resistance predictions in the models to assess their suitability for use in a patient-specific setting. We show a significant difference (p < 0.01) in airway resistance at all tested flow rates in complete airway trees built using CT data from severe asthmatics (GINA 3-5) versus healthy subjects. Further, model predictions of airway resistance at all flow rates are shown to correlate with patient forced expiratory volume in one second (FEV1) (Spearman ρ = -0.65, p < 0.001) and, at low flow rates (0.00017 L/s), with FEV1 over forced vital capacity (FEV1/FVC).

  16. Analysis of laser remote fusion cutting based on a mathematical model

    Energy Technology Data Exchange (ETDEWEB)

    Matti, R. S. [Department of Engineering Sciences and Mathematics, Luleå University of Technology, S-971 87 Luleå (Sweden); Department of Mechanical Engineering, College of Engineering, University of Mosul, Mosul (Iraq); Ilar, T.; Kaplan, A. F. H. [Department of Engineering Sciences and Mathematics, Luleå University of Technology, S-971 87 Luleå (Sweden)

    2013-12-21

    Laser remote fusion cutting is analyzed with the aid of a semi-analytical mathematical model of the processing front. By local calculation of the energy balance between the absorbed laser beam and the heat losses, the three-dimensional vaporization front can be calculated. Based on an empirical model for the melt flow field, the melt film and the melting front can be derived from a mass balance, though only in a simplified manner and for quasi-steady-state conditions. Front waviness and multiple reflections are not modelled. The model enables comparison of the similarities, differences, and limits between laser remote fusion cutting, laser remote ablation cutting, and even laser keyhole welding. In contrast to the upper part of the vaporization front, the major part varies only slightly with heat flux, laser power density, absorptivity, and the angle of front inclination. Statistical analysis shows that for high cutting speeds, the domains of high laser power density contribute much more to the formation of the front than for low speeds. The semi-analytical modelling approach offers the flexibility to simplify parts of the process physics while, for example, sophisticated modelling of the complex focused fibre-guided laser beam is taken into account to enable deeper analysis of the beam interaction. Mechanisms such as recast layer generation, absorptivity at a wavy processing front, and melt film formation are studied too.

  17. Analysis of laser remote fusion cutting based on a mathematical model

    International Nuclear Information System (INIS)

    Matti, R. S.; Ilar, T.; Kaplan, A. F. H.

    2013-01-01

    Laser remote fusion cutting is analyzed with the aid of a semi-analytical mathematical model of the processing front. By local calculation of the energy balance between the absorbed laser beam and the heat losses, the three-dimensional vaporization front can be calculated. Based on an empirical model for the melt flow field, the melt film and the melting front can be derived from a mass balance, though only in a simplified manner and for quasi-steady-state conditions. Front waviness and multiple reflections are not modelled. The model enables comparison of the similarities, differences, and limits between laser remote fusion cutting, laser remote ablation cutting, and even laser keyhole welding. In contrast to the upper part of the vaporization front, the major part varies only slightly with heat flux, laser power density, absorptivity, and the angle of front inclination. Statistical analysis shows that for high cutting speeds, the domains of high laser power density contribute much more to the formation of the front than for low speeds. The semi-analytical modelling approach offers the flexibility to simplify parts of the process physics while, for example, sophisticated modelling of the complex focused fibre-guided laser beam is taken into account to enable deeper analysis of the beam interaction. Mechanisms such as recast layer generation, absorptivity at a wavy processing front, and melt film formation are studied too.

  18. Study on reliability analysis based on multilevel flow models and fault tree method

    International Nuclear Information System (INIS)

    Chen Qiang; Yang Ming

    2014-01-01

    Multilevel flow models (MFM) and the fault tree method describe system knowledge in different forms, so the two methods express equivalent logic of system reliability under the same boundary conditions and assumptions. Based on this, and combined with the characteristics of MFM, a method for mapping MFM to fault trees was put forward, providing a way to establish fault trees rapidly and to realize qualitative reliability analysis based on MFM. Taking the safety injection system of a pressurized water reactor nuclear power plant as an example, its MFM was established and its reliability was analyzed qualitatively. The analysis result shows that the logic of mapping MFM to fault trees is correct. The MFM is easily understood, created, and modified. Compared with traditional fault tree analysis, the workload is greatly reduced and modeling time is saved. (authors)
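
    The mapping itself is defined in the paper; as a generic illustration, a fault tree produced by such a mapping can be evaluated with simple AND/OR gate probabilities, assuming independent basic events. The tree below is hypothetical, not the safety injection system model.

    ```python
    from functools import reduce

    def or_gate(*p):
        """Failure probability of an OR gate (any input failure fails the gate),
        assuming independent basic events."""
        return 1.0 - reduce(lambda acc, pi: acc * (1.0 - pi), p, 1.0)

    def and_gate(*p):
        """Failure probability of an AND gate (all inputs must fail),
        assuming independent basic events."""
        return reduce(lambda acc, pi: acc * pi, p, 1.0)

    # Hypothetical example: a transport function fails if its source fails OR
    # both redundant pump trains fail.
    p_source = 1e-3
    p_pump = 5e-2
    p_top = or_gate(p_source, and_gate(p_pump, p_pump))
    print(f"top event probability: {p_top:.2e}")  # ~3.5e-3
    ```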

  19. AN ANALYSIS ON THE DECISION MODEL OF SMART PLUS INSURANCE PRODUCT PURCHASE

    Directory of Open Access Journals (Sweden)

    Fitry Primadona

    2016-09-01

    The purposes of this study were (1) to analyze the decision model of Smart Plus insurance product purchase and (2) to determine the criteria, sub-criteria, and alternative priorities in the Smart Plus purchase decision model. The methods utilized in the study included a survey and in-depth interviews, using AHP (Analytical Hierarchy Process) analysis and the "Expert Choice" processing software. The result of the first analysis indicated the four marketing mixes that had been performed (Price, Product, Process, and Place); the second showed that the purchase of the Smart Plus product is based on factors with the following levels of importance: benefit (36.3%), premium (35.7%), membership process (14.6%), and provider (13.4%). The result of the second analysis revealed the important sub-criteria, including premium offer, additional benefits, membership card, and temporary certificate from the medical specialist. Keywords: AHP, life insurance, marketing mix, purchase decision
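
    A minimal sketch of the AHP step, assuming a hypothetical pairwise comparison matrix rather than the study's survey data: the priority vector is the normalized principal eigenvector, and the consistency ratio checks the judgments.

    ```python
    import numpy as np

    # Hypothetical pairwise comparison matrix over the four criteria
    # (benefit, premium, membership process, provider); not the study's data.
    A = np.array([[1.0, 1.0, 3.0, 3.0],
                  [1.0, 1.0, 3.0, 2.0],
                  [1/3, 1/3, 1.0, 1.0],
                  [1/3, 1/2, 1.0, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                          # priority vector (criterion weights)

    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)  # consistency index
    cr = ci / 0.90                        # consistency ratio, RI = 0.90 for n = 4
    print("weights:", np.round(w, 3), "CR:", round(cr, 3))
    ```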

  20. Comparative analysis of numerical models of pipe handling equipment used in offshore drilling applications

    Energy Technology Data Exchange (ETDEWEB)

    Pawlus, Witold, E-mail: witold.p.pawlus@ieee.org; Ebbesen, Morten K.; Hansen, Michael R.; Choux, Martin; Hovland, Geir [Department of Engineering Sciences, University of Agder, PO Box 509, N-4898 Grimstad (Norway)

    2016-06-08

    Design of offshore drilling equipment is a task that involves not only analysis of strict machine specifications and safety requirements but also consideration of changeable weather conditions and a harsh environment. These challenges call for a multidisciplinary approach and make the design process complex. Various modeling software products are currently available to aid design engineers in their effort to test and redesign equipment before it is manufactured. However, given the number of available modeling tools and methods, the choice of the proper modeling methodology is not obvious and, in some cases, troublesome. Therefore, we present a comparative analysis of two popular approaches used in modeling and simulation of mechanical systems: multibody and analytical modeling. A gripper arm of an offshore vertical pipe handling machine is selected as a case study for which both models are created. In contrast to some other works, the current paper shows verification of both systems by benchmarking their simulation results against each other. Criteria such as modeling effort and result accuracy are evaluated to assess which modeling strategy is the most suitable given its eventual application.
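
    As a toy analogue of the paper's cross-benchmarking idea (not the gripper-arm models themselves), one can benchmark an analytical, linearized model against a numerical integration of the full nonlinear equations and quantify the deviation:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Benchmark an analytical (small-angle) pendulum model against a numerical
    # integration of the full nonlinear equations; RMS deviation is the criterion.
    g, L = 9.81, 1.0
    theta0 = 0.3  # initial angle, rad

    def nonlinear(t, y):
        th, om = y
        return [om, -(g / L) * np.sin(th)]

    t = np.linspace(0.0, 10.0, 1000)
    num = solve_ivp(nonlinear, (0, 10), [theta0, 0.0], t_eval=t, rtol=1e-9).y[0]
    ana = theta0 * np.cos(np.sqrt(g / L) * t)  # linearized analytical solution

    rms = np.sqrt(np.mean((num - ana) ** 2))
    print(f"RMS deviation between the two models: {rms:.4f} rad")
    ```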

  1. Global sensitivity analysis for an integrated model for simulation of nitrogen dynamics under the irrigation with treated wastewater.

    Science.gov (United States)

    Sun, Huaiwei; Zhu, Yan; Yang, Jinzhong; Wang, Xiugui

    2015-11-01

    As the amount of water resources that can be utilized for agricultural production is limited, the reuse of treated wastewater (TWW) for irrigation is a practical solution to alleviate the water crisis in China. Process-based models, which estimate nitrogen dynamics under irrigation, are widely used to investigate the best irrigation and fertilization management practices in developed and developing countries. However, for modeling such a complex system for wastewater reuse, it is critical to conduct a sensitivity analysis to determine which of the numerous input parameters, and which of their interactions, contribute most to the variance of the model output. In this study, the application of a comprehensive global sensitivity analysis (GSA) for nitrogen dynamics is reported. The objective was to compare different GSA methods on the key parameters for different model predictions of the nitrogen and crop growth modules. The analysis was performed in two steps. First, the Morris screening method, one of the most commonly used screening methods, was carried out to select the most influential parameters; then, a variance-based global sensitivity analysis method (the extended Fourier amplitude sensitivity test, EFAST) was used to investigate more thoroughly the effects of the selected parameters on the model predictions. The results of the GSA showed that strong parameter interactions exist in the crop nitrogen uptake, nitrogen denitrification, crop yield, and evapotranspiration modules. Among all parameters, one soil physical parameter, the van Genuchten air-entry parameter, showed the largest sensitivity effects on the major model predictions. These results verified that more effort should be focused on quantifying soil parameters for more accurate nitrogen- and crop-related predictions, and they stress the need to better calibrate the model in a global sense. This study demonstrates the advantages of the GSA on a
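
    A hedged sketch of the two-step GSA workflow using the SALib library: Morris screening followed by eFAST on the retained parameters. The parameter names, bounds, and the placeholder model below are assumptions, not the study's nitrogen-model inputs.

    ```python
    import numpy as np
    from SALib.sample.morris import sample as morris_sample
    from SALib.analyze.morris import analyze as morris_analyze
    from SALib.sample.fast_sampler import sample as fast_sample
    from SALib.analyze.fast import analyze as fast_analyze

    # Hypothetical parameter set; a real study would list all model inputs.
    problem = {
        "num_vars": 3,
        "names": ["vg_alpha", "Ks", "n_uptake"],
        "bounds": [[0.01, 0.15], [0.1, 2.0], [0.5, 1.5]],
    }

    def model(X):
        # Placeholder response; a real study runs the process-based simulation.
        return X[:, 0] ** 2 + X[:, 0] * X[:, 1] + 0.3 * X[:, 2]

    # Step 1: Morris screening to rank parameters by mu*.
    X = morris_sample(problem, N=200, num_levels=4)
    Si_m = morris_analyze(problem, X, model(X), num_levels=4)
    print("Morris mu*:", dict(zip(problem["names"], np.round(Si_m["mu_star"], 3))))

    # Step 2: eFAST for first-order and total-order variance-based indices.
    Xf = fast_sample(problem, N=1000)
    Si_f = fast_analyze(problem, model(Xf))
    print("eFAST S1:", np.round(Si_f["S1"], 3), "ST:", np.round(Si_f["ST"], 3))
    ```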

  2. Applied data analysis and modeling for energy engineers and scientists

    CERN Document Server

    Reddy, T Agami

    2011-01-01

    ""Applied Data Analysis and Modeling for Energy Engineers and Scientists"" discusses mathematical models, data analysis, and decision analysis in modeling. The approach taken in this volume focuses on the modeling and analysis of thermal systems in an engineering environment, while also covering a number of other critical areas. Other material covered includes the tools that researchers and engineering professionals will need in order to explore different analysis methods, use critical assessment skills and reach sound engineering conclusions. The book also covers process and system design and

  3. Improvement of molten core-concrete interaction model of the debris spreading analysis model in the SAMPSON code - 15193

    International Nuclear Information System (INIS)

    Hidaka, M.; Fujii, T.; Sakai, T.

    2015-01-01

    A debris spreading analysis (DSA) module has been developed and improved. The module is used in the severe accident analysis code SAMPSON and it has models for 3-dimensional natural convection with simultaneous spreading, melting and solidification. The existing analysis method of the quasi-3D boundary transportation to simulate downward concrete erosion for evaluation of molten-core concrete interaction (MCCI) was improved to full-3D to solve, for instance, debris lateral erosion under concrete floors at the bottom of the sump pit. In the advanced MCCI model, buffer cells were defined in order to solve numerical problems in case of trammel formation. Mass, momentum, and the advection term of energy between the debris melt cells and the buffer cells are solved. On the other hand, only the heat transfer and thermal conduction are solved between the debris melt cells and the structure cells, and the crust cells and the structure cells. As a preliminary analysis, a validation calculation was performed for erosion that occurred in the core-concrete interaction (CCI-2) test in the OECD/MCCI program. Comparison between the calculation and the CCI-2 test results showed the analysis has the ability to simulate debris lateral erosion under concrete floors. (authors)

  4. A reactive transport model for mercury fate in contaminated soil--sensitivity analysis.

    Science.gov (United States)

    Leterme, Bertrand; Jacques, Diederik

    2015-11-01

    We present a sensitivity analysis of a reactive transport model of mercury (Hg) fate in contaminated soil systems. The one-dimensional model, presented in Leterme et al. (2014), couples water flow in variably saturated conditions with Hg physico-chemical reactions. The sensitivity of Hg leaching and volatilisation to parameter uncertainty is examined using the elementary effect method. A test case is built using a hypothetical 1-m depth sandy soil and a 50-year time series of daily precipitation and evapotranspiration. Hg anthropogenic contamination is simulated in the topsoil by separately considering three different sources: cinnabar, non-aqueous phase liquid and aqueous mercuric chloride. The model sensitivity to a set of 13 input parameters is assessed, using three different model outputs (volatilized Hg, leached Hg, Hg still present in the contaminated soil horizon). Results show that dissolved organic matter (DOM) concentration in soil solution and the binding constant to DOM thiol groups are critical parameters, as well as parameters related to Hg sorption to humic and fulvic acids in solid organic matter. Initial Hg concentration is also identified as a sensitive parameter. The sensitivity analysis also brings out non-monotonic model behaviour for certain parameters.

  5. Application of Statistical Model in Wastewater Treatment Process Modeling Using Data Analysis

    Directory of Open Access Journals (Sweden)

    Alireza Raygan Shirazinezhad

    2015-06-01

    Background: Wastewater treatment includes very complex and interrelated physical, chemical, and biological processes which, using data analysis techniques, can be rigorously modeled by non-complex mathematical models. Materials and Methods: In this study, data on wastewater treatment processes from the water and wastewater company of Kohgiluyeh and Boyer Ahmad were used. A total of 3,306 records of COD, TSS, pH, and turbidity were collected and then analyzed with SPSS 16 (descriptive statistics) and IBM SPSS Modeler 14.2, using nine algorithms. Results: Logistic regression, neural networks, Bayesian networks, discriminant analysis, the C5 decision tree, the C&R tree, CHAID, QUEST, and SVM achieved accuracies of 90.16, 94.17, 81.37, 70.48, 97.89, 96.56, 96.46, 96.84, and 88.92 per cent, respectively. Discussion and conclusion: The C5 algorithm was chosen as the best and most applicable algorithm for modeling wastewater treatment processes, with an accuracy of 97.89; the most influential variables in this model were pH, COD, TSS, and turbidity.
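
    A sketch of the winning approach with an open-source stand-in: scikit-learn has no C5.0 implementation, so CART (DecisionTreeClassifier) is used here as an analogous decision-tree learner on synthetic data; the column layout mirrors the study's variables.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    # Synthetic stand-in for the 3,306 plant records (columns: COD, TSS, pH,
    # turbidity); the label rule below is invented for illustration.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(3306, 4))
    y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = DecisionTreeClassifier(max_depth=5).fit(X_tr, y_tr)
    print("accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
    ```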

  6. Using Evidence Credibility Decay Model for dependence assessment in human reliability analysis

    International Nuclear Information System (INIS)

    Guo, Xingfeng; Zhou, Yanhui; Qian, Jin; Deng, Yong

    2017-01-01

    Highlights: • A new computational model is proposed for dependence assessment in HRA. • Three factors, “CT”, “TR”, and “SP”, are combined within Dempster–Shafer theory. • The BBA of “SP” is reconstructed with a discounting rate based on the ECDM. • Simulation experiments illustrate the efficiency of the proposed method. - Abstract: Dependence assessment among human errors plays an important role in human reliability analysis. When dependence between two sequential tasks exists in human reliability analysis and the preceding task fails, the failure probability of the following task is higher than if the preceding task had succeeded. Typically, three major factors are considered: “Closeness in Time” (CT), “Task Relatedness” (TR) and “Similarity of Performers” (SP). Assuming TR is unchanged, both SP and CT influence the degree of dependence, and in this paper SP is discounted over time as the result of combining the two factors. A new computational model is proposed based on the Dempster–Shafer Evidence Theory (DSET) and the Evidence Credibility Decay Model (ECDM) to assess the dependence between tasks in human reliability analysis. First, the influencing factors among human tasks are identified and the basic belief assignments (BBAs) of each factor are constructed based on expert evaluation. Then, the BBA of SP is discounted as the result of combining the two factors and reconstructed by using the ECDM, and the factors are integrated into a fused BBA. Finally, the dependence level is calculated based on the fused BBA. Experimental results demonstrate that the proposed model not only quantitatively describes how the input factors influence the dependence level, but also shows exactly how the dependence level changes under different input-factor situations.
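
    The building block behind such credibility decay is classical Shafer discounting: a BBA is weakened by a reliability factor, with the removed mass transferred to the whole frame. The exponential decay law below is an illustrative assumption, not the paper's calibrated ECDM.

    ```python
    import math

    def discount(bba, alpha, theta):
        """Shafer discounting of a BBA (dict: frozenset -> mass) by reliability
        alpha: m'(A) = alpha*m(A) for A != Theta, m'(Theta) absorbs the rest."""
        out = {A: alpha * m for A, m in bba.items() if A != theta}
        out[theta] = 1.0 - alpha + alpha * bba.get(theta, 0.0)
        return out

    # Hypothetical frame and BBA for the "SP" factor (dependence low/high).
    theta = frozenset({"low", "high"})
    m_sp = {frozenset({"high"}): 0.7, frozenset({"low"}): 0.1, theta: 0.2}

    lam, dt = 0.1, 5.0           # assumed decay rate and elapsed time ("CT")
    alpha = math.exp(-lam * dt)  # time-decayed credibility of the SP evidence
    print(discount(m_sp, alpha, theta))
    ```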

  7. Interactive Visual Analysis within Dynamic Ocean Models

    Science.gov (United States)

    Butkiewicz, T.

    2012-12-01

    The many observation and simulation based ocean models available today can provide crucial insights for all fields of marine research and can serve as valuable references when planning data collection missions. However, the increasing size and complexity of these models makes leveraging their contents difficult for end users. Through a combination of data visualization techniques, interactive analysis tools, and new hardware technologies, the data within these models can be made more accessible to domain scientists. We present an interactive system that supports exploratory visual analysis within large-scale ocean flow models. The currents and eddies within the models are illustrated using effective, particle-based flow visualization techniques. Stereoscopic displays and rendering methods are employed to ensure that the user can correctly perceive the complex 3D structures of depth-dependent flow patterns. Interactive analysis tools are provided which allow the user to experiment through the introduction of their customizable virtual dye particles into the models to explore regions of interest. A multi-touch interface provides natural, efficient interaction, with custom multi-touch gestures simplifying the otherwise challenging tasks of navigating and positioning tools within a 3D environment. We demonstrate the potential applications of our visual analysis environment with two examples of real-world significance: Firstly, an example of using customized particles with physics-based behaviors to simulate pollutant release scenarios, including predicting the oil plume path for the 2010 Deepwater Horizon oil spill disaster. Secondly, an interactive tool for plotting and revising proposed autonomous underwater vehicle mission pathlines with respect to the surrounding flow patterns predicted by the model; as these survey vessels have extremely limited energy budgets, designing more efficient paths allows for greater survey areas.

  8. Sensitivity analysis of a new dual-porosity hydrological model coupled with the SOSlope model for the numerical simulations of rainfall-triggered shallow landslides.

    Science.gov (United States)

    Schwarz, Massimiliano; Cohen, Denis

    2017-04-01

    The morphology and extent of hydrological pathways, in combination with the spatio-temporal variability of rainfall events and the heterogeneities of the hydro-mechanical properties of soils, have a major impact on the hydrological conditions that locally determine the triggering of shallow landslides. Coupling these processes at different spatial scales is an enormous challenge for slope stability modeling at the catchment scale. In this work we present a sensitivity analysis of a new dual-porosity hydrological model implemented in the hydro-mechanical model SOSlope for the modeling of shallow landslides on vegetated hillslopes. The proposed model links the calculation of the saturation dynamics of preferential flow paths, based on hydrological and topographical characteristics of the landscape, to the hydro-mechanical behavior of the soil along a potential failure surface due to changes in soil matrix saturation. Furthermore, the hydro-mechanical changes of soil conditions are linked to the local stress-strain properties of the (rooted) soil that ultimately determine the force redistribution and related deformations at the hillslope scale. The model considers forces to be redistributed through three types of loading: tension, compression, and shearing. The present analysis shows how the conditions of deformation due to the passive earth pressure mobilized at the toe of the landslide are particularly important in defining the timing and extent of shallow landslides. The model also shows that, on densely rooted hillslopes, lateral force redistribution under tension through the root network may substantially contribute to stabilizing slopes, avoiding crack formation and large deformations. The results of the sensitivity analysis are discussed in the context of protection forest management and bioengineering techniques.
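
    For orientation, the simplest quantitative stand-in for such a slope-stability calculation is the infinite-slope factor of safety with an added root-cohesion term; SOSlope's force redistribution is far richer, and all values below are illustrative.

    ```python
    import math

    # Infinite-slope factor of safety with root cohesion (illustrative values):
    # FS = (c_s + c_r + (gamma*z*cos(b)^2 - u)*tan(phi)) / (gamma*z*sin(b)*cos(b))
    gamma = 18.0e3              # unit weight of soil, N/m^3
    z = 1.5                     # failure-surface depth, m
    beta = math.radians(35.0)   # slope angle
    phi = math.radians(32.0)    # friction angle
    c_s = 2.0e3                 # soil cohesion, Pa
    c_r = 4.0e3                 # assumed root cohesion, Pa
    u = 5.0e3                   # pore-water pressure on the failure surface, Pa

    tau = gamma * z * math.sin(beta) * math.cos(beta)            # driving stress
    res = c_s + c_r + (gamma * z * math.cos(beta) ** 2 - u) * math.tan(phi)
    print(f"FS = {res / tau:.2f}")   # FS < 1 would indicate failure
    ```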

  9. Ductile failure analysis of high strength steel in hot forming based on micromechanical damage model

    Directory of Open Access Journals (Sweden)

    Ying Liang

    2016-01-01

    The damage evolution of high strength steel at elevated temperature is investigated by using the Gurson-Tvergaard-Needleman (GTN) model. A hybrid method integrating thermal tensile tests and numerical techniques is employed to identify the damage parameters. The analysis results show that the damage parameters differ at different temperatures owing to the variation of the tested material's microstructure. Furthermore, the calibrated damage parameters are applied to simulate a bulging forming process at elevated temperature. The experimental results demonstrate the applicability of the GTN damage model for analyzing sheet formability in hot forming.
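
    The GTN yield function referred to above has a standard closed form; the sketch below evaluates it with common literature defaults (q1 = 1.5, q2 = 1.0, q3 = q1^2), not the calibrated hot-forming parameter set.

    ```python
    import numpy as np

    # Standard GTN yield function:
    # Phi = (s_eq/s_y)^2 + 2*q1*f*cosh(3*q2*s_m/(2*s_y)) - (1 + q3*f^2)
    def gtn_phi(s_eq, s_m, s_y, f_star, q1=1.5, q2=1.0, q3=2.25):
        return ((s_eq / s_y) ** 2
                + 2.0 * q1 * f_star * np.cosh(3.0 * q2 * s_m / (2.0 * s_y))
                - (1.0 + q3 * f_star ** 2))

    # Yielding occurs where Phi = 0; growing porosity f* shrinks the yield
    # surface. Stresses below are illustrative (MPa).
    for f in (0.0, 0.01, 0.05):
        print(f, gtn_phi(s_eq=300.0, s_m=200.0, s_y=320.0, f_star=f))
    ```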

  10. Implicit methods for equation-free analysis: convergence results and analysis of emergent waves in microscopic traffic models

    DEFF Research Database (Denmark)

    Marschler, Christian; Sieber, Jan; Berkemer, Rainer

    2014-01-01

    We introduce a general formulation for an implicit equation-free method in the setting of slow-fast systems. First, we give a rigorous convergence result for equation-free analysis showing that the implicitly defined coarse-level time stepper converges to the true dynamics on the slow manifold...... against the direction of traffic. Equation-free analysis enables us to investigate the behavior of the microscopic traffic model on a macroscopic level. The standard deviation of cars' headways is chosen as the macroscopic measure of the underlying dynamics such that traveling wave solutions correspond...... to equilibria on the macroscopic level in the equation-free setup. The collapse of the traffic jam to the free flow then corresponds to a saddle-node bifurcation of this macroscopic equilibrium. We continue this bifurcation in two parameters using equation-free analysis....

  11. Analysis of the influence of quantile regression model on mainland tourists' service satisfaction performance.

    Science.gov (United States)

    Wang, Wen-Cheng; Cho, Wen-Chien; Chen, Yin-Jen

    2014-01-01

    It is estimated that mainland Chinese tourists travelling to Taiwan can bring annual revenues of 400 billion NTD to the Taiwan economy. Thus, how the Taiwanese Government formulates relevant measures to satisfy both sides is the focus of most concern. Taiwan must improve the facilities and service quality of its tourism industry so as to attract more mainland tourists. This paper conducted a questionnaire survey of mainland tourists and used grey relational analysis in grey mathematics to analyze the satisfaction performance of all satisfaction question items. The first eight satisfaction items were used as independent variables, and the overall satisfaction performance was used as a dependent variable for quantile regression model analysis to discuss the relationship between the dependent variable under different quantiles and independent variables. Finally, this study further discussed the predictive accuracy of the least mean regression model and each quantile regression model, as a reference for research personnel. The analysis results showed that other variables could also affect the overall satisfaction performance of mainland tourists, in addition to occupation and age. The overall predictive accuracy of quantile regression model Q0.25 was higher than that of the other three models.
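
    A minimal sketch of the quantile regression step with statsmodels, using simulated stand-ins for the questionnaire items rather than the survey data:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated satisfaction items; a real analysis would use the eight survey
    # items as regressors and overall satisfaction as the response.
    rng = np.random.default_rng(1)
    df = pd.DataFrame({"item1": rng.uniform(1, 5, 300),
                       "item2": rng.uniform(1, 5, 300)})
    df["overall"] = 0.6 * df["item1"] + 0.3 * df["item2"] + rng.normal(0, 0.3, 300)

    # Fit the conditional quantiles studied in the paper, including Q0.25.
    for q in (0.25, 0.50, 0.75):
        res = smf.quantreg("overall ~ item1 + item2", df).fit(q=q)
        print(f"Q{q}:", res.params.round(3).to_dict())
    ```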

  12. Analysis of the Influence of Quantile Regression Model on Mainland Tourists' Service Satisfaction Performance

    Science.gov (United States)

    Wang, Wen-Cheng; Cho, Wen-Chien; Chen, Yin-Jen

    2014-01-01

    It is estimated that mainland Chinese tourists travelling to Taiwan can bring annual revenues of 400 billion NTD to the Taiwan economy. Thus, how the Taiwanese Government formulates relevant measures to satisfy both sides is the focus of most concern. Taiwan must improve the facilities and service quality of its tourism industry so as to attract more mainland tourists. This paper conducted a questionnaire survey of mainland tourists and used grey relational analysis in grey mathematics to analyze the satisfaction performance of all satisfaction question items. The first eight satisfaction items were used as independent variables, and the overall satisfaction performance was used as a dependent variable for quantile regression model analysis to discuss the relationship between the dependent variable under different quantiles and independent variables. Finally, this study further discussed the predictive accuracy of the least mean regression model and each quantile regression model, as a reference for research personnel. The analysis results showed that other variables could also affect the overall satisfaction performance of mainland tourists, in addition to occupation and age. The overall predictive accuracy of quantile regression model Q0.25 was higher than that of the other three models. PMID:24574916

  13. Analysis of the Influence of Quantile Regression Model on Mainland Tourists’ Service Satisfaction Performance

    Directory of Open Access Journals (Sweden)

    Wen-Cheng Wang

    2014-01-01

    It is estimated that mainland Chinese tourists travelling to Taiwan can bring annual revenues of 400 billion NTD to the Taiwan economy. Thus, how the Taiwanese Government formulates relevant measures to satisfy both sides is the focus of most concern. Taiwan must improve the facilities and service quality of its tourism industry so as to attract more mainland tourists. This paper conducted a questionnaire survey of mainland tourists and used grey relational analysis in grey mathematics to analyze the satisfaction performance of all satisfaction question items. The first eight satisfaction items were used as independent variables, and the overall satisfaction performance was used as a dependent variable for quantile regression model analysis to discuss the relationship between the dependent variable under different quantiles and independent variables. Finally, this study further discussed the predictive accuracy of the least mean regression model and each quantile regression model, as a reference for research personnel. The analysis results showed that other variables could also affect the overall satisfaction performance of mainland tourists, in addition to occupation and age. The overall predictive accuracy of quantile regression model Q0.25 was higher than that of the other three models.

  14. Probabilistic modelling and analysis of stand-alone hybrid power systems

    International Nuclear Information System (INIS)

    Lujano-Rojas, Juan M.; Dufo-López, Rodolfo; Bernal-Agustín, José L.

    2013-01-01

    As a part of the Hybrid Intelligent Algorithm, a model based on an ANN (artificial neural network) has been proposed in this paper to represent hybrid system behaviour considering the uncertainty related to wind speed and solar radiation, battery bank lifetime, and fuel prices. The Hybrid Intelligent Algorithm suggests a combination of probabilistic analysis based on a Monte Carlo simulation approach and artificial neural network training embedded in a genetic algorithm optimisation model. The installation of a typical hybrid system was analysed. Probabilistic analysis was used to generate an input–output dataset of 519 samples that was later used to train the ANNs to reduce the computational effort required. The generalisation ability of the ANNs was measured in terms of RMSE (Root Mean Square Error), MBE (Mean Bias Error), MAE (Mean Absolute Error), and R-squared estimators using another data group of 200 samples. The results obtained from the estimation of the expected energy not supplied, the probability of a determined reliability level, and the estimation of the expected value of the net present cost show that the presented model is able to represent the main characteristics of a typical hybrid power system under uncertain operating conditions. - Highlights: • This paper presents a probabilistic model for a stand-alone hybrid power system. • The model considers the main sources of uncertainty related to renewable resources. • The Hybrid Intelligent Algorithm has been applied to represent hybrid system behaviour. • The installation of a typical hybrid system was analysed. • The results obtained from the case study validate the presented model.
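
    The four generalisation estimators used to validate the ANN surrogate can be written out explicitly; this is a plain restatement of the standard definitions (y: reference values, yhat: ANN predictions).

    ```python
    import numpy as np

    def rmse(y, yhat):
        return np.sqrt(np.mean((yhat - y) ** 2))

    def mbe(y, yhat):
        return np.mean(yhat - y)           # mean bias error (signed)

    def mae(y, yhat):
        return np.mean(np.abs(yhat - y))

    def r_squared(y, yhat):
        ss_res = np.sum((y - yhat) ** 2)
        ss_tot = np.sum((y - np.mean(y)) ** 2)
        return 1.0 - ss_res / ss_tot

    y = np.array([1.0, 2.0, 3.0])          # toy reference values
    yhat = np.array([1.1, 1.9, 3.2])       # toy surrogate predictions
    print(rmse(y, yhat), mbe(y, yhat), mae(y, yhat), r_squared(y, yhat))
    ```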

  15. 3D Product authenticity model for online retail: An invariance analysis

    Directory of Open Access Journals (Sweden)

    Algharabat, R.

    2010-01-01

    This study investigates the effects of different levels of invariance analysis on three-dimensional (3D) product authenticity model (3DPAM) constructs in the e-retailing context. A hypothetical retailer website presents a variety of laptops using 3D product visualisations. The proposed conceptual model achieves acceptable fit and the hypothesised paths are all valid. We empirically investigate the invariance across subgroups to validate the results of our 3DPAM. We conclude that the 3D product authenticity model construct was invariant for our sample across different genders, levels of education, and study backgrounds. These findings suggest that all our subgroups conceptualised the 3DPAM similarly. The results also show some non-invariance for the structural and latent mean models: the gender groups show non-invariance in the latent mean model, and the study-background groups reveal non-invariance in the structural model. These findings allow us to understand the 3DPAM's validity in the e-retail context. Managerial implications are explained.

  16. INFLUENCE ANALYSIS OF WATERLOGGING BASED ON DEEP LEARNING MODEL IN WUHAN

    Directory of Open Access Journals (Sweden)

    Y. Pan

    2017-09-01

    This paper analyses in depth a large number of factors related to the influence degree of urban waterlogging and constructs a stacked autoencoder model to explore the relationship between waterlogging points' influence degree and their surrounding spatial data, which is used for a comprehensive analysis of the influence of waterlogging on residents' work and life. The model is validated with data from the July 2016 rainstorm waterlogging in Wuhan. The experimental results show that the model has higher accuracy than a traditional linear regression model. Based on the experimental model and the distribution of waterlogging points in Wuhan over the years, the influence degree of different waterlogging points can be quantitatively described, which will be beneficial to the formulation of urban flood control measures and provide a reference for the design of the city's drainage pipe network.
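
    A minimal stacked-autoencoder sketch in PyTorch; the layer sizes and synthetic inputs are assumptions, not the Wuhan feature set, and a real pipeline would add a regression head on the learned codes.

    ```python
    import torch
    import torch.nn as nn

    class StackedAutoencoder(nn.Module):
        def __init__(self, n_features=12):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_features, 8), nn.ReLU(),
                                         nn.Linear(8, 3), nn.ReLU())
            self.decoder = nn.Sequential(nn.Linear(3, 8), nn.ReLU(),
                                         nn.Linear(8, n_features))

        def forward(self, x):
            return self.decoder(self.encoder(x))

    x = torch.randn(256, 12)        # surrogate spatial features (assumed)
    model = StackedAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(200):            # unsupervised reconstruction pre-training
        opt.zero_grad()
        loss = loss_fn(model(x), x)
        loss.backward()
        opt.step()
    print("reconstruction loss:", float(loss))
    # The learned 3-dim codes, model.encoder(x), would then feed a regressor
    # that predicts each waterlogging point's influence degree.
    ```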

  17. Rubber particle proteins, HbREF and HbSRPP, show different interactions with model membranes.

    Science.gov (United States)

    Berthelot, Karine; Lecomte, Sophie; Estevez, Yannick; Zhendre, Vanessa; Henry, Sarah; Thévenot, Julie; Dufourc, Erick J; Alves, Isabel D; Peruch, Frédéric

    2014-01-01

    The biomembrane surrounding rubber particles from Hevea latex is well known for its content of numerous allergenic proteins. HbREF (Hevb1) and HbSRPP (Hevb3) are major components bound to rubber particles, and they have been shown to be involved in rubber synthesis or quality (mass regulation), but their exact function remains to be determined. In this study we highlighted the different modes of interaction of both recombinant proteins with various membrane models (lipid monolayers, liposomes or supported bilayers, and multilamellar vesicles) to mimic the latex particle membrane. We combined various biophysical methods (polarization-modulation infrared reflection-absorption spectroscopy (PM-IRRAS)/ellipsometry, attenuated total reflectance Fourier-transform infrared spectroscopy (ATR-FTIR), solid-state nuclear magnetic resonance (NMR), plasmon waveguide resonance (PWR), and fluorescence spectroscopy) to elucidate their interactions. The small rubber particle protein (SRPP) shows less affinity than the rubber elongation factor (REF) for the membranes but displays a kind of "covering" effect on the lipid headgroups without disturbing membrane integrity. Its structure is conserved in the presence of lipids. By contrast, REF demonstrates higher membrane affinity with changes in its aggregation properties; the amyloid nature of REF, which we previously reported, is not favored in the presence of lipids. REF binds to and inserts into membranes. The membrane integrity is highly perturbed, and we suspect that REF is even able to remove lipids from the membrane, leading to the formation of mixed micelles. These two homologous proteins thus show affinity for all membrane models tested but differ clearly in their modes of interaction. This could imply differential roles on the surface of rubber particles.

  18. Comparative Analysis and Modeling of the Severity of Steatohepatitis in DDC-Treated Mouse Strains

    Science.gov (United States)

    Pandey, Vikash; Sultan, Marc; Kashofer, Karl; Ralser, Meryem; Amstislavskiy, Vyacheslav; Starmann, Julia; Osprian, Ingrid; Grimm, Christina; Hache, Hendrik; Yaspo, Marie-Laure; Sültmann, Holger; Trauner, Michael; Denk, Helmut; Zatloukal, Kurt; Lehrach, Hans; Wierling, Christoph

    2014-01-01

    Background: Non-alcoholic fatty liver disease (NAFLD) has a broad spectrum of disease states, ranging from mild steatosis characterized by an abnormal retention of lipids within liver cells to steatohepatitis (NASH) showing fat accumulation, inflammation, ballooning and degradation of hepatocytes, and fibrosis. Ultimately, steatohepatitis can result in liver cirrhosis and hepatocellular carcinoma. Methodology and Results: In this study we have analyzed three different mouse strains, A/J, C57BL/6J, and PWD/PhJ, that show different degrees of steatohepatitis when administered a 3,5-diethoxycarbonyl-1,4-dihydrocollidine (DDC) containing diet. RNA-Seq gene expression analysis, protein analysis, and metabolic profiling were applied to identify differentially expressed genes/proteins and perturbed metabolite levels in mouse liver samples upon DDC treatment. Pathway analysis revealed alteration of arachidonic acid (AA) and S-adenosylmethionine (SAMe) metabolism, among other pathways. To understand the metabolic changes of arachidonic acid metabolism in the light of disease expression profiles, a kinetic model of this pathway was developed and optimized according to metabolite levels. Subsequently, the model was used to study in silico effects of potential drug targets for steatohepatitis. Conclusions: We identified AA/eicosanoid metabolism as highly perturbed in DDC-induced mice using a combination of an experimental and in silico approach. Our analysis of the AA/eicosanoid metabolic pathway suggests that 5-hydroxyeicosatetraenoic acid (5-HETE), 15-hydroxyeicosatetraenoic acid (15-HETE) and prostaglandin D2 (PGD2) are perturbed in DDC mice. We further demonstrate that a dynamic model can be used for qualitative prediction of metabolic changes based on transcriptomics data in a disease-related context. Furthermore, SAMe metabolism was identified as being perturbed due to DDC treatment. Several genes as well as some metabolites of this module show differences between A/J and C57BL/6J

  19. Comparative analysis and modeling of the severity of steatohepatitis in DDC-treated mouse strains.

    Science.gov (United States)

    Pandey, Vikash; Sultan, Marc; Kashofer, Karl; Ralser, Meryem; Amstislavskiy, Vyacheslav; Starmann, Julia; Osprian, Ingrid; Grimm, Christina; Hache, Hendrik; Yaspo, Marie-Laure; Sültmann, Holger; Trauner, Michael; Denk, Helmut; Zatloukal, Kurt; Lehrach, Hans; Wierling, Christoph

    2014-01-01

    Non-alcoholic fatty liver disease (NAFLD) has a broad spectrum of disease states ranging from mild steatosis characterized by an abnormal retention of lipids within liver cells to steatohepatitis (NASH) showing fat accumulation, inflammation, ballooning and degradation of hepatocytes, and fibrosis. Ultimately, steatohepatitis can result in liver cirrhosis and hepatocellular carcinoma. In this study we have analyzed three different mouse strains, A/J, C57BL/6J, and PWD/PhJ, that show different degrees of steatohepatitis when administered a 3,5-diethoxycarbonyl-1,4-dihydrocollidine (DDC) containing diet. RNA-Seq gene expression analysis, protein analysis and metabolic profiling were applied to identify differentially expressed genes/proteins and perturbed metabolite levels in mouse liver samples upon DDC treatment. Pathway analysis revealed alteration of arachidonic acid (AA) and S-adenosylmethionine (SAMe) metabolism, among other pathways. To understand metabolic changes of arachidonic acid metabolism in the light of disease expression profiles, a kinetic model of this pathway was developed and optimized according to metabolite levels. Subsequently, the model was used to study in silico effects of potential drug targets for steatohepatitis. We identified AA/eicosanoid metabolism as highly perturbed in DDC-induced mice using a combination of an experimental and in silico approach. Our analysis of the AA/eicosanoid metabolic pathway suggests that 5-hydroxyeicosatetraenoic acid (5-HETE), 15-hydroxyeicosatetraenoic acid (15-HETE) and prostaglandin D2 (PGD2) are perturbed in DDC mice. We further demonstrate that a dynamic model can be used for qualitative prediction of metabolic changes based on transcriptomics data in a disease-related context. Furthermore, SAMe metabolism was identified as being perturbed due to DDC treatment. Several genes as well as some metabolites of this module show differences between A/J and C57BL/6J on the one hand and PWD/PhJ on the other.

  20. Comparative analysis and modeling of the severity of steatohepatitis in DDC-treated mouse strains.

    Directory of Open Access Journals (Sweden)

    Vikash Pandey

    BACKGROUND: Non-alcoholic fatty liver disease (NAFLD) has a broad spectrum of disease states ranging from mild steatosis characterized by an abnormal retention of lipids within liver cells to steatohepatitis (NASH) showing fat accumulation, inflammation, ballooning and degradation of hepatocytes, and fibrosis. Ultimately, steatohepatitis can result in liver cirrhosis and hepatocellular carcinoma. METHODOLOGY AND RESULTS: In this study we have analyzed three different mouse strains, A/J, C57BL/6J, and PWD/PhJ, that show different degrees of steatohepatitis when administered a 3,5-diethoxycarbonyl-1,4-dihydrocollidine (DDC) containing diet. RNA-Seq gene expression analysis, protein analysis and metabolic profiling were applied to identify differentially expressed genes/proteins and perturbed metabolite levels in mouse liver samples upon DDC treatment. Pathway analysis revealed alteration of arachidonic acid (AA) and S-adenosylmethionine (SAMe) metabolism, among other pathways. To understand metabolic changes of arachidonic acid metabolism in the light of disease expression profiles, a kinetic model of this pathway was developed and optimized according to metabolite levels. Subsequently, the model was used to study in silico effects of potential drug targets for steatohepatitis. CONCLUSIONS: We identified AA/eicosanoid metabolism as highly perturbed in DDC-induced mice using a combination of an experimental and in silico approach. Our analysis of the AA/eicosanoid metabolic pathway suggests that 5-hydroxyeicosatetraenoic acid (5-HETE), 15-hydroxyeicosatetraenoic acid (15-HETE) and prostaglandin D2 (PGD2) are perturbed in DDC mice. We further demonstrate that a dynamic model can be used for qualitative prediction of metabolic changes based on transcriptomics data in a disease-related context. Furthermore, SAMe metabolism was identified as being perturbed due to DDC treatment. Several genes as well as some metabolites of this module show differences between A

  1. Advancing cloud lifecycle representation in numerical models using innovative analysis methods that bridge ARM observations over a breadth of scales

    Energy Technology Data Exchange (ETDEWEB)

    Tselioudis, George [Columbia Univ., New York, NY (United States)

    2016-03-04

    From its location on the subtropics-midlatitude boundary, the Azores is influenced by both the subtropical high pressure and the midlatitude baroclinic storm regimes, and therefore experiences a wide range of cloud structures, from fair-weather scenes to stratocumulus sheets to deep convective systems. This project combined three types of data sets to study cloud variability in the Azores: a satellite analysis of cloud regimes, a reanalysis characterization of storminess, and a 19-month field campaign that occurred on Graciosa Island. Combined analysis of the three data sets provides a detailed picture of cloud variability and the respective dynamic influences, with emphasis on low clouds that constitute a major uncertainty source in climate model simulations. The satellite cloud regime analysis shows that the Azores cloud distribution is similar to the mean global distribution and can therefore be used to evaluate cloud simulation in global models. Regime analysis of low clouds shows that stratocumulus decks occur under the influence of the Azores high-pressure system, while shallow cumulus clouds are sustained by cold-air outbreaks, as revealed by their preference for post-frontal environments and northwesterly flows. An evaluation of CMIP5 climate model cloud regimes over the Azores shows that all models severely underpredict shallow cumulus clouds, while most models also underpredict the occurrence of stratocumulus cloud decks. It is demonstrated that carefully selected case studies can be related through regime analysis to climatological cloud distributions, and a methodology is suggested utilizing process-resolving model simulations of individual cases to better understand cloud-dynamics interactions and attempt to explain and correct climate model cloud deficiencies.

  2. Data analysis and source modelling for LISA

    International Nuclear Information System (INIS)

    Shang, Yu

    2014-01-01

    Gravitational waves (GWs) are among the most important predictions of general relativity. Beyond indirect evidence for the existence of GWs, there are already several ground-based detectors (such as LIGO and GEO) and a planned future space mission (LISA) which aim to detect GWs directly. A GW carries a large amount of information about its source; extracting this information can reveal the physical properties of the source and even open a new window on the Universe. Hence, GW data analysis is a challenging task in the search for GWs. In this thesis, I present two works on data analysis for LISA. In the first work, we introduce an extended multimodal genetic algorithm which utilizes the properties of the signal and the detector response function to analyze the data from the third round of the Mock LISA Data Challenge. We found all five sources present in the data and recovered the coalescence time, chirp mass, mass ratio, and sky location with reasonable accuracy. As for the orbital angular momentum and the two spins of the black holes, we found a large number of widely separated modes in the parameter space with similar maximum likelihood values. The performance of this method is comparable to, if not better than, already existing algorithms. In the second work, we introduce a new phenomenological waveform model for extreme-mass-ratio inspiral (EMRI) systems. This waveform consists of a set of harmonics with constant amplitude and slowly evolving phase which we decompose in a Taylor series. We use these phenomenological templates to detect the signal in simulated data and then, assuming a particular EMRI model, estimate the physical parameters of the binary with high precision. The results show that our phenomenological waveform performs well in the data analysis of EMRI signals.

  3. Proteomic Analysis Shows Constitutive Secretion of MIF and p53-associated Activity of COX-2−/− Lung Fibroblasts

    Directory of Open Access Journals (Sweden)

    Mandar Dave

    2017-12-01

    The differential expression of two closely associated cyclooxygenase isozymes, COX-1 and COX-2, exhibits functions beyond eicosanoid metabolism. We hypothesized that COX-1 or COX-2 knockout lung fibroblasts may display altered protein profiles which may allow us to further differentiate the functional roles of these isozymes at the molecular level. Proteomic analysis shows constitutive production of macrophage migration inhibitory factor (MIF) in lung fibroblasts derived from COX-2−/− but not wild-type (WT) or COX-1−/− mice. MIF was spontaneously released at high levels into the extracellular milieu of COX-2−/− fibroblasts, seemingly from preformed intracellular stores, with no change in the basal gene expression of MIF. The secretion and regulation of MIF in COX-2−/− cells was "prostaglandin-independent." GO analysis showed that, concurrent with the upregulation of MIF, there is a significant surge in the expression of genes related to fibroblast growth, FK506 binding proteins, and isomerase activity in COX-2−/− cells. Furthermore, COX-2−/− fibroblasts also exhibit a significant increase in the transcriptional activity of various regulators, antagonists, and co-modulators of p53, as well as in the expression of oncogenes and related transcripts. Integrative Oncogenomics Cancer Browser (IntroGen) analysis shows downregulation of COX-2 and amplification of MIF and/or p53 activity during the development of glioblastomas, ependymoma, and colon adenomas. These data indicate the functional role of the MIF-COX-p53 axis in inflammation and cancer at the genomic and proteomic levels in COX-2-ablated cells. This systematic analysis not only shows the proinflammatory state but also unveils a molecular signature of a pro-oncogenic state of COX-1 in COX-2-ablated cells.

  4. Parameter sensitivity and uncertainty analysis for a storm surge and wave model

    Directory of Open Access Journals (Sweden)

    L. A. Bastidas

    2016-09-01

    Development and simulation of synthetic hurricane tracks is a common methodology used to estimate hurricane hazards in the absence of empirical coastal surge and wave observations. Such methods typically rely on numerical models to translate stochastically generated hurricane wind and pressure forcing into coastal surge and wave estimates. The model output uncertainty associated with the selection of appropriate model parameters must therefore be addressed. The computational overburden of probabilistic surge hazard estimates is exacerbated by the high dimensionality of numerical surge and wave models. We present a model parameter sensitivity analysis of the Delft3D model for the simulation of the hazards posed by Hurricane Bob (1991), utilizing three theoretical wind distributions (NWS23, modified Rankine, and Holland). The sensitive model parameters (of 11 total considered) include the wind drag, the depth-induced breaking parameter γB, and the bottom roughness. Several parameters show no sensitivity (threshold depth, eddy viscosity, wave triad parameters, and the depth-induced breaking parameter αB) and can therefore be excluded to reduce the computational overburden of probabilistic surge hazard estimates. The sensitive model parameters also demonstrate a large number of interactions between parameters and a nonlinear model response. While model outputs showed sensitivity to several parameters, the ability of these parameters to act as tuning parameters for calibration is somewhat limited, as proper model calibration is strongly reliant on accurate wind and pressure forcing data. A comparison of the model performance with forcings from the different wind models is also presented.

  5. A P-value model for theoretical power analysis and its applications in multiple testing procedures

    Directory of Open Access Journals (Sweden)

    Fengqing Zhang

    2016-10-01

    Background: Power analysis is a critical aspect of the design of experiments to detect an effect of a given size. When multiple hypotheses are tested simultaneously, multiplicity adjustments to p-values should be taken into account in power analysis. There are a limited number of studies on power analysis in multiple testing procedures. For some methods, the theoretical analysis is difficult and extensive numerical simulations are often needed, while other methods oversimplify the information under the alternative hypothesis. To this end, this paper aims to develop a new statistical model for power analysis in multiple testing procedures. Methods: We propose a step-function-based p-value model under the alternative hypothesis, which is simple enough to perform power analysis without simulations, but not so simple as to lose the information from the alternative hypothesis. The first step is to transform the distributions of different test statistics (e.g., t, chi-square, or F) into the distributions of the corresponding p-values. We then use a step function to approximate each p-value distribution by matching the mean and variance. Lastly, the step-function-based p-value model can be used for theoretical power analysis. Results: The proposed model is applied to problems in multiple testing procedures. We first show how the most powerful critical constants can be chosen using the step-function-based p-value model. Our model is then applied to the field of multiple testing procedures to explain the assumption of monotonicity of the critical constants. Lastly, we apply our model to a behavioral weight loss and maintenance study to select the optimal critical constants. Conclusions: The proposed model is easy to implement and preserves the information from the alternative hypothesis.
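
    A worked example of the underlying p-value transform for a one-sided z-test, with an assumed effect size; the two-step approximation at the end is only in the spirit of the paper's step-function model, with an arbitrary breakpoint.

    ```python
    from scipy import stats

    # For a one-sided z-test: under H0 the p-value p = 1 - Phi(Z) is uniform;
    # under H1 with effect mu, Z ~ N(mu, 1) and the p-value CDF is
    # P(p <= t) = 1 - Phi(Phi^{-1}(1 - t) - mu). Power is that CDF at alpha.
    mu, alpha = 2.0, 0.05      # assumed standardized effect and test level

    def p_cdf_alt(t, mu):
        return 1.0 - stats.norm.cdf(stats.norm.ppf(1.0 - t) - mu)

    print(f"power at alpha={alpha}: {p_cdf_alt(alpha, mu):.3f}")   # ~0.639

    # Crude two-step approximation of the alternative p-value distribution,
    # in the spirit of the step-function model (breakpoint chosen arbitrarily):
    t0 = 0.10
    print(f"step mass on [0, {t0}]: {p_cdf_alt(t0, mu):.3f}")
    ```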

  6. Modeling of human operator dynamics in simple manual control utilizing time series analysis. [tracking (position)

    Science.gov (United States)

    Agarwal, G. C.; Osafo-Charles, F.; Oneill, W. D.; Gottlieb, G. L.

    1982-01-01

    Time series analysis is applied to model human operator dynamics in pursuit and compensatory tracking modes. The normalized residual criterion is used as a one-step analytical tool to encompass the processes of identification, estimation, and diagnostic checking. A parameter constraining technique is introduced to develop more reliable models of human operator dynamics. The human operator is adequately modeled by a second order dynamic system both in pursuit and compensatory tracking modes. In comparing the data sampling rates, 100 msec between samples is adequate and is shown to provide better results than 200 msec sampling. The residual power spectrum and eigenvalue analysis show that the human operator is not a generator of periodic characteristics.
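
    In the same spirit, a second-order AR model can be fitted to an operator-response record with statsmodels; the series below is simulated, whereas the study used sampled tracking data.

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    # Synthetic AR(2) "operator" output (stable coefficients, assumed values).
    rng = np.random.default_rng(2)
    n = 500
    y = np.zeros(n)
    for t in range(2, n):
        y[t] = 1.2 * y[t - 1] - 0.5 * y[t - 2] + rng.normal(0, 0.1)

    # Fit a second-order AR model and inspect the residuals, mirroring the
    # identification / estimation / diagnostic-checking loop described above.
    res = ARIMA(y, order=(2, 0, 0)).fit()
    print(res.params.round(3))
    print("residual variance:", round(res.resid.var(), 5))
    ```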

  7. A Cross-Cultural Analysis of Personality Structure Through the Lens of the HEXACO Model.

    Science.gov (United States)

    Ion, Andrei; Iliescu, Dragos; Aldhafri, Said; Rana, Neeti; Ratanadilok, Kattiya; Widyanti, Ari; Nedelcea, Cătălin

    2017-01-01

    Across 5 different samples, totaling more than 1,600 participants from India, Indonesia, Oman, Romania, and Thailand, the authors address the question of cross-cultural replicability of a personality structure, while exploring the utility of exploratory structural equation modeling (ESEM) as a data analysis technique in cross-cultural personality research. Personality was measured with an alternative, non-Five-Factor Model (FFM) personality framework, provided by the HEXACO-PI (Lee & Ashton, 2004). The results show that the HEXACO framework was replicated in some of the investigated cultures. The ESEM data analysis technique proved to be especially useful in investigating the between-group measurement equivalence of broad personality measures across different cultures.

  8. Modeling issues in nuclear plant fire risk analysis

    International Nuclear Information System (INIS)

    Siu, N.

    1989-01-01

    This paper discusses various issues associated with current models for analyzing the risk due to fires in nuclear power plants. Particular emphasis is placed on the fire growth and suppression models, these being unique to the fire portion of the overall risk analysis. Potentially significant modeling improvements are identified; also discussed are a variety of modeling issues where improvements will help the credibility of the analysis, without necessarily changing the computed risk significantly. The mechanistic modeling of fire initiation is identified as a particularly promising improvement for reducing the uncertainties in the predicted risk. 17 refs., 5 figs. 2 tabs

  9. Models as Tools of Analysis of a Network Organisation

    Directory of Open Access Journals (Sweden)

    Wojciech Pająk

    2013-06-01

    The paper presents models which may be applied as tools for the analysis of a network organisation. The starting point of the discussion is defining the following terms: supply chain and network organisation. Further parts of the paper present the basic assumptions of the analysis of a network organisation. The study then characterises the best-known models utilised in the analysis of a network organisation. The purpose of the article is to define the notion and the essence of network organisations and to present the models used for their analysis.

  10. Sensitivity Analysis of Launch Vehicle Debris Risk Model

    Science.gov (United States)

    Gee, Ken; Lawrence, Scott L.

    2010-01-01

    As part of an analysis of the loss of crew risk associated with an ascent abort system for a manned launch vehicle, a model was developed to predict the impact risk of the debris resulting from an explosion of the launch vehicle on the crew module. The model consisted of a debris catalog describing the number, size and imparted velocity of each piece of debris, a method to compute the trajectories of the debris and a method to calculate the impact risk given the abort trajectory of the crew module. The model provided a point estimate of the strike probability as a function of the debris catalog, the time of abort and the delay time between the abort and destruction of the launch vehicle. A study was conducted to determine the sensitivity of the strike probability to the various model input parameters and to develop a response surface model for use in the sensitivity analysis of the overall ascent abort risk model. The results of the sensitivity analysis and the response surface model are presented in this paper.
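
    A toy Monte Carlo strike-probability estimate (not NASA's debris model): each fragment receives a random imparted velocity, and a strike is counted when the straight-line miss distance at the crew module's range falls within an assumed module radius.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_frag, n_trials = 200, 2000
    sep_range = 500.0        # assumed separation distance at destruction, m
    module_radius = 3.0      # assumed crew-module effective radius, m

    strikes = 0
    for _ in range(n_trials):
        # Random isotropic direction and speed for each fragment.
        u = rng.normal(size=(n_frag, 3))
        u /= np.linalg.norm(u, axis=1, keepdims=True)
        speed = rng.uniform(10.0, 150.0, size=(n_frag, 1))
        v = u * speed
        # Lateral miss distance when a fragment reaches the module's range,
        # counting only fragments moving toward the module (+x direction);
        # gravity and drag are deliberately ignored in this sketch.
        toward = v[:, 0] > 0
        t_hit = sep_range / v[toward, 0]
        miss = np.linalg.norm(v[toward, 1:] * t_hit[:, None], axis=1)
        strikes += np.any(miss < module_radius)

    print("estimated strike probability:", strikes / n_trials)
    ```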

  11. Model Independent Analysis of Beam Centroid Dynamics in Accelerators

    International Nuclear Information System (INIS)

    Wang, Chun-xi

    2003-01-01

    Fundamental issues in Beam-Position-Monitor (BPM)-based beam dynamics observations are studied in this dissertation. The major topic is the Model-Independent Analysis (MIA) of beam centroid dynamics. Conventional beam dynamics analysis requires a certain machine model, which itself often needs to be refined by beam measurements. Instead of using any particular machine model, MIA relies on a statistical analysis of the vast amount of BPM data that often can be collected non-invasively during normal machine operation. There are two major parts in MIA. One is noise reduction and degrees-of-freedom analysis using a singular value decomposition of a BPM-data matrix, which constitutes a principal component analysis of BPM data. The other is a physical base decomposition of the BPM-data matrix based on the time structure of pulse-by-pulse beam and/or machine parameters. The combination of these two methods allows one to break the resolution limit set by individual BPMs and observe beam dynamics at more accurate levels. A physical base decomposition is particularly useful for understanding various beam dynamics issues. MIA improves observation and analysis of beam dynamics and thus leads to better understanding and control of beams in both linacs and rings. The statistical nature of MIA makes it potentially useful in other fields. Another important topic discussed in this dissertation is the measurement of a nonlinear Poincare section (one-turn) map in circular accelerators. The beam dynamics in a ring is intrinsically nonlinear. In fact, nonlinearities are a major factor that limits stability and influences the dynamics of halos. The Poincare section map plays a basic role in characterizing and analyzing such a periodic nonlinear system. Although many kinds of nonlinear beam dynamics experiments have been conducted, no direct measurement of a nonlinear map has been reported for a ring in normal operation mode. This dissertation analyzes various issues concerning map

  12. Model Independent Analysis of Beam Centroid Dynamics in Accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Chun-xi

    2003-04-21

    Fundamental issues in Beam-Position-Monitor (BPM)-based beam dynamics observations are studied in this dissertation. The major topic is the Model-Independent Analysis (MIA) of beam centroid dynamics. Conventional beam dynamics analysis requires a certain machine model, which itself often needs to be refined by beam measurements. Instead of using any particular machine model, MIA relies on a statistical analysis of the vast amount of BPM data that often can be collected non-invasively during normal machine operation. There are two major parts in MIA. One is noise reduction and degrees-of-freedom analysis using a singular value decomposition of a BPM-data matrix, which constitutes a principal component analysis of BPM data. The other is a physical base decomposition of the BPM-data matrix based on the time structure of pulse-by-pulse beam and/or machine parameters. The combination of these two methods allows one to break the resolution limit set by individual BPMs and observe beam dynamics at more accurate levels. A physical base decomposition is particularly useful for understanding various beam dynamics issues. MIA improves observation and analysis of beam dynamics and thus leads to better understanding and control of beams in both linacs and rings. The statistical nature of MIA makes it potentially useful in other fields. Another important topic discussed in this dissertation is the measurement of a nonlinear Poincare section (one-turn) map in circular accelerators. The beam dynamics in a ring is intrinsically nonlinear. In fact, nonlinearities are a major factor that limits stability and influences the dynamics of halos. The Poincare section map plays a basic role in characterizing and analyzing such a periodic nonlinear system. Although many kinds of nonlinear beam dynamics experiments have been conducted, no direct measurement of a nonlinear map has been reported for a ring in normal operation mode. This dissertation analyzes various issues concerning map
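
    The core of MIA's noise reduction can be sketched with a plain SVD: a few coherent degrees of freedom stand out from uncorrelated BPM noise in the singular value spectrum, and truncation denoises the data matrix. The matrix below is synthetic (a rank-2 betatron-like signal plus noise).

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_bpm, n_pulse = 60, 1000
    phase = rng.uniform(0, 2 * np.pi, n_bpm)
    t = np.arange(n_pulse)
    # Rank-2 oscillation pattern standing in for betatron motion.
    signal = (np.outer(np.cos(phase), np.cos(0.31 * t))
              + np.outer(np.sin(phase), np.sin(0.31 * t)))
    B = signal + 0.05 * rng.normal(size=(n_bpm, n_pulse))

    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    print("leading singular values:", s[:5].round(1))  # two dominate -> 2 d.o.f.

    k = 2
    B_clean = U[:, :k] * s[:k] @ Vt[:k]                # truncated-SVD denoising
    print("residual rms:", np.sqrt(np.mean((B_clean - signal) ** 2)).round(4))
    ```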

  13. Analysis and modelling of the energy consumption of chemical batch plants

    Energy Technology Data Exchange (ETDEWEB)

    Bieler, P.S.

    2004-07-01

    This report for the Swiss Federal Office of Energy (SFOE) describes two different approaches to the energy analysis and modelling of chemical batch plants. A top-down model consisting of a linear equation based on the specific energy consumption per ton of production output and the base consumption of the plant is postulated. The model is shown to be applicable to single-product and multi-product batch plants with a constant production mix, and to multi-purpose batch plants in which only similar chemicals are produced. For multi-purpose batch plants with highly varying production processes and a changing production mix, the top-down model produced inaccurate results, so a bottom-up model is postulated for such plants. The results show that the electricity consumption of infrastructure equipment was significant and responsible for about 50% of total electricity consumption. The specific energy consumption of the different buildings was related to the degree of automation and the production processes. More detailed analyses of the energy consumption of the apparatus groups show that about 30 to 40% of steam energy is lost, so a large potential for optimisation exists. Various potentials for savings, ranging from the elimination of reflux conditions to the development of a new heating/cooling system for a generic batch reactor, are identified.
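
    A minimal sketch of fitting the top-down model E = E_base + e_spec * output by least squares; the monthly figures are invented for illustration.

    ```python
    import numpy as np

    production = np.array([120.0, 150.0, 90.0, 200.0, 170.0, 140.0])  # t/month
    energy = np.array([1.9, 2.2, 1.6, 2.7, 2.45, 2.1])                # GWh/month

    # Design matrix [1, production] fits base consumption plus specific term.
    A = np.column_stack([np.ones_like(production), production])
    (base, specific), *_ = np.linalg.lstsq(A, energy, rcond=None)
    print(f"base consumption: {base:.2f} GWh/month, "
          f"specific: {specific * 1000:.1f} MWh per ton")
    ```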

  14. Modeling, Analysis, and Optimization Issues for Large Space Structures

    Science.gov (United States)

    Pinson, L. D. (Compiler); Amos, A. K. (Compiler); Venkayya, V. B. (Compiler)

    1983-01-01

    Topics concerning the modeling, analysis, and optimization of large space structures are discussed including structure-control interaction, structural and structural dynamics modeling, thermal analysis, testing, and design.

  15. Hydraulic head interpolation using ANFIS—model selection and sensitivity analysis

    Science.gov (United States)

    Kurtulus, Bedri; Flipo, Nicolas

    2012-01-01

    The aim of this study is to investigate the efficiency of ANFIS (adaptive neuro fuzzy inference system) for interpolating hydraulic head in a 40-km² agricultural watershed of the Seine basin (France). Inputs of ANFIS are Cartesian coordinates and the elevation of the ground. Hydraulic head was measured at 73 locations during a snapshot campaign in September 2009, which characterizes the low-water-flow regime in the aquifer unit. The dataset was then split into three subsets using a square-based selection method: a calibration one (55%), a training one (27%), and a test one (18%). First, a method is proposed to select the best ANFIS model, which corresponds to a sensitivity analysis of ANFIS to the type and number of membership functions (MF). Triangular, Gaussian, general bell, and spline-based MF are used with 2, 3, 4, and 5 MF per input node. Performance criteria on the test subset are used to select the 5 best ANFIS models among 16. Each of these is then used to interpolate the hydraulic head distribution on a (50×50)-m grid, which is compared to the soil elevation. The cells where the hydraulic head is higher than the soil elevation are counted as "error cells." The ANFIS model that exhibits the fewest "error cells" is selected as the best ANFIS model. The model selection reveals that ANFIS models are very sensitive to the type and number of MF. Finally, a sensitivity analysis of the best ANFIS model with four triangular MF is performed on the interpolation grid, which shows that ANFIS remains stable to error propagation with a higher sensitivity to soil elevation.
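    The "error cell" criterion lends itself to a one-line test once the interpolated head and soil elevation grids are available. A small sketch with synthetic stand-in grids (not the study's data):

```python
import numpy as np

# Count grid cells where the interpolated hydraulic head exceeds the soil
# elevation, i.e. physically implausible cells on the interpolation grid.
rng = np.random.default_rng(1)
soil_elevation = 100.0 + rng.random((200, 200)) * 20.0
hydraulic_head = soil_elevation - 5.0 + rng.normal(0.0, 3.0, (200, 200))

error_cells = np.count_nonzero(hydraulic_head > soil_elevation)
print(f"error cells: {error_cells} of {hydraulic_head.size}")
# The candidate ANFIS model with the fewest error cells would be selected.
```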

  16. Application of thermodynamics-based rate-dependent constitutive models of concrete in the seismic analysis of concrete dams

    Directory of Open Access Journals (Sweden)

    Leng Fei

    2008-09-01

    This paper discusses the seismic analysis of concrete dams with consideration of material nonlinearity. Based on a consistent rate-dependent model and two thermodynamics-based models, two thermodynamics-based rate-dependent constitutive models were developed with consideration of the influence of the strain rate. They can describe the dynamic behavior of concrete and be applied to nonlinear seismic analysis of concrete dams taking into account the rate sensitivity of concrete. With the two models, a nonlinear analysis of the seismic response of the Koyna Gravity Dam and the Dagangshan Arch Dam was conducted. The results were compared with those of a linear elastic model and two rate-independent thermodynamics-based constitutive models, and the influences of the constitutive models and the strain rate on the seismic response of concrete dams were discussed. It can be concluded from the analysis that, during the seismic response, the tensile stress is the controlling stress in the design and seismic safety evaluation of concrete dams. In the different models, the plastic strain and plastic strain rate of concrete dams show a similar distribution. When the influence of the strain rate is considered, the maximum plastic strain and plastic strain rate decrease.

  17. The uncertainty analysis of model results a practical guide

    CERN Document Server

    Hofer, Eduard

    2018-01-01

    This book is a practical guide to the uncertainty analysis of computer model applications. Used in many areas, such as engineering, ecology and economics, computer models are subject to various uncertainties at the level of model formulations, parameter values and input data. Naturally, it would be advantageous to know the combined effect of these uncertainties on the model results as well as whether the state of knowledge should be improved in order to reduce the uncertainty of the results most effectively. The book supports decision-makers, model developers and users in their argumentation for an uncertainty analysis and assists them in the interpretation of the analysis results.
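    As a flavor of the kind of analysis the book covers, here is a minimal Monte Carlo propagation sketch on a toy model; the model and the parameter distributions are invented for illustration:

```python
import numpy as np

# Sample uncertain parameters, run the model, summarize the spread of the
# result: the basic Monte Carlo approach to uncertainty analysis.
rng = np.random.default_rng(42)
n = 10_000
k = rng.normal(0.8, 0.1, n)        # uncertain rate constant
x0 = rng.uniform(9.0, 11.0, n)     # uncertain initial condition

y = x0 * np.exp(-k * 2.0)          # toy model output at t = 2

lo, hi = np.percentile(y, [2.5, 97.5])
print(f"mean = {y.mean():.3f}, 95% interval = [{lo:.3f}, {hi:.3f}]")
```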

  18. A Conceptual Model for Multidimensional Analysis of Documents

    Science.gov (United States)

    Ravat, Franck; Teste, Olivier; Tournier, Ronan; Zurlfluh, Gilles

    Data warehousing and OLAP are mainly used for the analysis of transactional data. Nowadays, with the evolution of the Internet and the development of semi-structured data exchange formats (such as XML), it is possible to consider entire fragments of data, such as documents, as analysis sources. As a consequence, an adapted multidimensional analysis framework needs to be provided. In this paper, we introduce an OLAP multidimensional conceptual model without facts. This model is based on the unique concept of dimensions and is adapted for multidimensional document analysis. We also provide a set of manipulation operations.

  19. An improved state-parameter analysis of ecosystem models using data assimilation

    Science.gov (United States)

    Chen, M.; Liu, S.; Tieszen, L.L.; Hollinger, D.Y.

    2008-01-01

    Much of the effort spent in developing data assimilation methods for carbon dynamics analysis has focused on estimating optimal values for either model parameters or state variables. The main weakness of estimating parameter values alone (i.e., without considering state variables) is that all errors from input, output, and model structure are attributed to model parameter uncertainties. On the other hand, the accuracy of estimating state variables may be lowered if the temporal evolution of parameter values is not incorporated. This research develops a smoothed ensemble Kalman filter (SEnKF) by combining the ensemble Kalman filter with a kernel smoothing technique. The SEnKF has the following characteristics: (1) it estimates model states and parameters simultaneously by concatenating unknown parameters and state variables into a joint state vector; (2) it mitigates dramatic, sudden changes of parameter values during parameter sampling and evolution, and controls the narrowing of parameter variance, which causes filter divergence, by adjusting the smoothing factor in the kernel smoothing algorithm; (3) it assimilates data into the model recursively and thus detects possible time variation of parameters; and (4) it properly addresses the various sources of uncertainty stemming from input, output and parameter uncertainties. The SEnKF is tested by assimilating observed fluxes of carbon dioxide and environmental driving factor data from an AmeriFlux forest station located near Howland, Maine, USA, into a partition eddy flux model. Our analysis demonstrates that model parameters, such as light use efficiency, respiration coefficients, minimum and optimum temperatures for photosynthetic activity, and others, are highly constrained by eddy flux data at daily-to-seasonal time scales. The SEnKF stabilizes parameter values quickly regardless of the initial values of the parameters. Potential ecosystem light use efficiency demonstrates a strong seasonality. Results show that the
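    A heavily simplified sketch of the joint state-parameter idea (not the SEnKF implementation of the paper): a scalar toy model, an augmented ensemble, Liu-West-style kernel shrinkage of the parameter ensemble to prevent variance collapse, and a standard EnKF update with perturbed observations. All numbers are invented.

```python
import numpy as np

# Toy model x_{t+1} = a * x_t stands in for the partitioned eddy-flux model;
# the unknown parameter a is appended to the state vector.
rng = np.random.default_rng(7)
n_ens, n_steps = 100, 200
a_true, obs_err = 0.95, 0.5

# Joint ensemble: row 0 = state x, row 1 = parameter a.
ens = np.vstack([rng.normal(0.0, 1.0, n_ens), rng.normal(0.5, 0.3, n_ens)])
x_truth, h = 10.0, 0.98                      # h: kernel smoothing factor

for t in range(n_steps):
    x_truth = a_true * x_truth + rng.normal(0.0, 0.1)
    y_obs = x_truth + rng.normal(0.0, obs_err)

    # Forecast: propagate states; shrink parameters toward the ensemble mean.
    ens[0] = ens[1] * ens[0] + rng.normal(0.0, 0.1, n_ens)
    a_bar = ens[1].mean()
    ens[1] = (h * ens[1] + (1 - h) * a_bar +
              rng.normal(0.0, (1 - h**2) ** 0.5 * ens[1].std(), n_ens))

    # EnKF analysis step with a scalar observation of the state.
    C = np.cov(ens)                          # 2x2 joint covariance
    K = C[:, 0] / (C[0, 0] + obs_err**2)     # gain for state and parameter
    innov = y_obs + rng.normal(0.0, obs_err, n_ens) - ens[0]  # perturbed obs
    ens += np.outer(K, innov)

print(f"estimated a = {ens[1].mean():.3f} (truth {a_true})")
```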

  20. Evaluation of Cost Models and Needs & Gaps Analysis

    DEFF Research Database (Denmark)

    Kejser, Ulla Bøgvad

    2014-01-01

    This report, 'D3.1—Evaluation of Cost Models and Needs & Gaps Analysis', provides an analysis of existing research related to the economics of digital curation and cost & benefit modelling. It reports upon the investigation of how well current models and tools meet stakeholders' needs for calculating and comparing financial information, and how they break down costs. This is followed by an in-depth analysis of stakeholders' needs for financial information derived from the 4C project stakeholder consultation. The stakeholders' needs analysis indicated that models should: • support accounting, but more importantly they should enable budgeting • be able … Based on this evaluation, the report aims to point out gaps that need to be bridged in order to increase the uptake of cost & benefit modelling and good practices that will enable costing and comparison of the costs of alternative scenarios—which in turn provides a starting point …

  1. Documentation and analysis of a global CO{sub 2} model developed by Peng et al. (1983)

    Energy Technology Data Exchange (ETDEWEB)

    Jager, H.I.; Peng, T.H.; King, A.W.; Sale, M.J.

    1990-07-01

    A global carbon model, the Peng '83 model, has been standardized according to protocols developed for an intermodel comparison. The first part of this document describes the model as received, and the second part describes a standardized version of the model, which has been parameterized according to the protocols described. Model performance was evaluated according to defined criteria, and a sensitivity analysis of the model was conducted to identify the most important parameters. The standardized model was supplemented with a calibration routine to define reasonable combinations of initial conditions. This improved the ability of the model to hold an initial equilibrium state. Sensitivity analysis showed a shift in parameter importance with time. The initial conditions were of greatest importance over the length of these simulations, but their importance declined in longer simulations. With the initial pCO{sub 2} excluded from the sensitivity analysis, ocean surface area (used to extrapolate results) was second in importance. While the CO{sub 2} exchange rates were initially most important, the model projections of atmospheric CO{sub 2} soon became more sensitive to the alkalinity of the ocean.

  2. Predict-first experimental analysis using automated and integrated magnetohydrodynamic modeling

    Science.gov (United States)

    Lyons, B. C.; Paz-Soldan, C.; Meneghini, O.; Lao, L. L.; Weisberg, D. B.; Belli, E. A.; Evans, T. E.; Ferraro, N. M.; Snyder, P. B.

    2018-05-01

    An integrated-modeling workflow has been developed for the purpose of performing predict-first analysis of transient-stability experiments. Starting from an existing equilibrium reconstruction from a past experiment, the workflow couples together the EFIT Grad-Shafranov solver [L. Lao et al., Fusion Sci. Technol. 48, 968 (2005)], the EPED model for the pedestal structure [P. B. Snyder et al., Phys. Plasmas 16, 056118 (2009)], and the NEO drift-kinetic-equation solver [E. A. Belli and J. Candy, Plasma Phys. Controlled Fusion 54, 015015 (2012)] (for bootstrap current calculations) in order to generate equilibria with self-consistent pedestal structures as the plasma shape and various scalar parameters (e.g., normalized β, pedestal density, and edge safety factor [q95]) are changed. These equilibria are then analyzed using automated M3D-C1 extended-magnetohydrodynamic modeling [S. C. Jardin et al., Comput. Sci. Discovery 5, 014002 (2012)] to compute the plasma response to three-dimensional magnetic perturbations. This workflow was created in conjunction with a DIII-D experiment examining the effect of triangularity on the 3D plasma response. Several versions of the workflow were developed, and the initial ones were used to help guide experimental planning (e.g., determining the plasma current necessary to maintain the constant edge safety factor in various shapes). Subsequent validation with the experimental results was then used to revise the workflow, ultimately resulting in the complete model presented here. We show that quantitative agreement was achieved between the M3D-C1 plasma response calculated for equilibria generated by the final workflow and equilibria reconstructed from experimental data. A comparison of results from earlier workflows is used to show the importance of properly matching certain experimental parameters in the generated equilibria, including the normalized β, pedestal density, and q95. On the other hand, the details of the pedestal

  3. Modeling and Analysis of the Motivations of Fast Fashion Consumers in Relation to Innovativeness

    Directory of Open Access Journals (Sweden)

    Saricam Canan

    2016-12-01

    In this study, the fast fashion concept is investigated in order to understand the motivations that make consumers adopt these products through their willingness for innovativeness. The relationship between the motivational factors, named "Social or status image" and "Uniqueness" as expressions of individuality and "Conformity", and the willingness for "Innovativeness" is analyzed using a conceptual model. Exploratory factor analysis, confirmatory factor analysis and structural equation modeling were used to analyze and validate the model. The data used for the study were obtained from 244 people living in Turkey. The findings showed that the motivational factors "Social or status image" and "Uniqueness", as expressions of individuality, are influential on consumers' willingness for "Innovativeness".

  4. A Bayesian Nonparametric Meta-Analysis Model

    Science.gov (United States)

    Karabatsos, George; Talbott, Elizabeth; Walker, Stephen G.

    2015-01-01

    In a meta-analysis, it is important to specify a model that adequately describes the effect-size distribution of the underlying population of studies. The conventional normal fixed-effect and normal random-effects models assume a normal effect-size population distribution, conditionally on parameters and covariates. For estimating the mean overall…

  5. The Performance of Structure-Controller Coupled Systems Analysis Using Probabilistic Evaluation and Identification Model Approach

    Directory of Open Access Journals (Sweden)

    Mosbeh R. Kaloop

    2017-01-01

    This study evaluates the performance of a passively controlled steel frame building under dynamic loads using time series analysis. A novel application of time- and frequency-domain evaluation is utilized to analyze the behavior of the control systems. In addition, autoregressive moving average (ARMA) neural networks are employed to identify the performance of the controller system. Three passive vibration control devices are utilized in this study, namely, the tuned mass damper (TMD), tuned liquid damper (TLD), and tuned liquid column damper (TLCD). The results show that the TMD control system is a more reliable controller than the TLD and TLCD systems in terms of vibration mitigation. The probabilistic evaluation and identification model showed that the probability analysis and the ARMA neural network model are suitable for evaluating and predicting the response of coupled building-controller systems.
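    As a simpler classical stand-in for the ARMA neural network used in the study, the sketch below fits an ARMA model to a synthetic response with statsmodels and checks the recovered coefficients; the series is simulated, not data from the building tests.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Simulate an AR(2)-like structural response and identify it from the data.
rng = np.random.default_rng(3)
n = 2000
y = np.zeros(n)
for t in range(2, n):
    y[t] = 1.5 * y[t - 1] - 0.7 * y[t - 2] + rng.normal(scale=0.5)

fit = ARIMA(y, order=(2, 0, 0), trend="n").fit()
print(fit.params)          # AR coefficients should be close to (1.5, -0.7)
y_hat = fit.predict()      # in-sample one-step-ahead predictions
print("one-step RMS error:", float(np.sqrt(np.mean((y - y_hat) ** 2))))
```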

  6. A Development of Nonstationary Regional Frequency Analysis Model with Large-scale Climate Information: Its Application to Korean Watershed

    Science.gov (United States)

    Kim, Jin-Young; Kwon, Hyun-Han; Kim, Hung-Soo

    2015-04-01

    The existing regional frequency analysis has the disadvantage that it is difficult to consider geographical characteristics in estimating areal rainfall. In this regard, this study aims to develop a hierarchical Bayesian model based nonstationary regional frequency analysis in which spatial patterns of the design rainfall with geographical information (e.g. latitude, longitude and altitude) are explicitly incorporated. This study assumes that the parameters of the Gumbel (or GEV) distribution are a function of geographical characteristics within a general linear regression framework. Posterior distributions of the regression parameters are estimated by the Bayesian Markov Chain Monte Carlo (MCMC) method, and the identified functional relationship is used to spatially interpolate the parameters of the distributions by using digital elevation models (DEM) as inputs. The proposed model is applied to derive design rainfalls over the entire Han-river watershed. It was found that the proposed Bayesian regional frequency analysis model showed similar results compared to L-moment based regional frequency analysis. In addition, the model showed an advantage in terms of quantifying the uncertainty of the design rainfall and estimating the areal rainfall considering geographical information. Finally, a comprehensive discussion on design rainfall in the context of nonstationarity will be presented. KEYWORDS: Regional frequency analysis, Nonstationary, Spatial information, Bayesian. Acknowledgement: This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.

  7. Differential expression analysis for RNAseq using Poisson mixed models.

    Science.gov (United States)

    Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny; Zhou, Xiang

    2017-06-20

    Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three datasets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
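    MACAU itself fits a Poisson mixed model with random effects for relatedness; as a much simpler baseline that handles only independent over-dispersion, one can fit a negative binomial GLM. A sketch on simulated counts, assuming statsmodels is available:

```python
import numpy as np
import statsmodels.api as sm

# Simulate over-dispersed counts for one gene across n samples and test the
# covariate effect with a negative binomial GLM (no relatedness modeled).
rng = np.random.default_rng(5)
n = 200
x = rng.normal(size=n)                                  # condition / covariate
mu = np.exp(1.0 + 0.5 * x)                              # true mean model
counts = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))  # mean mu, over-dispersed

X = sm.add_constant(x)
fit = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(fit.params)    # intercept and covariate effect
print(fit.pvalues)   # Wald p-values
```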

  8. MARS approach for global sensitivity analysis of differential equation models with applications to dynamics of influenza infection.

    Science.gov (United States)

    Lee, Yeonok; Wu, Hulin

    2012-01-01

    Differential equation models are widely used for the study of natural phenomena in many fields. The study usually involves unknown factors such as initial conditions and/or parameters. It is important to investigate the impact of unknown factors (parameters and initial conditions) on model outputs in order to better understand the system the model represents. Apportioning the uncertainty (variation) of the output variables of a model according to the input factors is referred to as sensitivity analysis. In this paper, we focus on the global sensitivity analysis of ordinary differential equation (ODE) models over a time period using the multivariate adaptive regression spline (MARS) as a metamodel, based on the concept of the variance of conditional expectation (VCE). We suggest evaluating the VCE analytically using the MARS model structure of univariate tensor-product functions, which is more computationally efficient. Our simulation studies show that the MARS model approach performs very well and helps to significantly reduce the computational cost. We present an application example of sensitivity analysis of ODE models for influenza infection to further illustrate the usefulness of the proposed method.
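    The first-order sensitivity index behind the VCE concept is Var(E[Y|Xi])/Var(Y). The paper evaluates the conditional expectation analytically from the fitted MARS structure; the sketch below instead approximates it crudely by binning Monte Carlo samples of a toy function:

```python
import numpy as np

# Estimate Var(E[Y | X1]) / Var(Y) by conditioning on bins of X1.
rng = np.random.default_rng(11)
n = 100_000
x1, x2 = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
y = np.sin(2 * np.pi * x1) + 0.3 * x2**2          # toy stand-in for an ODE output

bins = np.linspace(0, 1, 51)
idx = np.digitize(x1, bins) - 1                   # bin index 0..49 for each sample
cond_mean = np.array([y[idx == b].mean() for b in range(50)])
counts = np.array([np.sum(idx == b) for b in range(50)])

vce = np.sum(counts * (cond_mean - y.mean()) ** 2) / n
print(f"first-order index S1 ~ {vce / y.var():.3f}")   # close to 1: x1 dominates
```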

  9. Personalization of models with many model parameters: an efficient sensitivity analysis approach.

    Science.gov (United States)

    Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T

    2015-10-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. Copyright © 2015 John Wiley & Sons, Ltd.
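    A sketch of the screening step (step one of the two-step approach) using the SALib implementation of the Morris method on a toy function; the vascular access model and its actual parameters are not reproduced here:

```python
import numpy as np
from SALib.sample.morris import sample as morris_sample
from SALib.analyze import morris

# Hypothetical three-parameter problem; bounds and names are invented.
problem = {
    "num_vars": 3,
    "names": ["k1", "k2", "k3"],
    "bounds": [[0.0, 1.0]] * 3,
}

X = morris_sample(problem, N=100)                           # Morris trajectories
Y = np.sin(X[:, 0]) + 5.0 * X[:, 1] ** 2 + 0.01 * X[:, 2]   # toy model output

res = morris.analyze(problem, X, Y)
for name, mu_star in zip(res["names"], res["mu_star"]):
    print(f"{name}: mu* = {mu_star:.3f}")   # large mu* -> keep for the gPCE step
```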

  10. Integration of Design and Control through Model Analysis

    DEFF Research Database (Denmark)

    Russel, Boris Mariboe; Henriksen, Jens Peter; Jørgensen, Sten Bay

    2002-01-01

    A systematic computer aided analysis of the process model is proposed as a pre-solution step for integration of design and control problems. The process model equations are classified in terms of balance equations, constitutive equations and conditional equations. Analysis of the phenomena models representing the constitutive equations identifies the relationships between the important process and design variables, which helps to understand, define and address some of the issues related to integration of design and control. Furthermore, the analysis is able to identify a set of process (control) variables … (structure selection) issues for the integrated problems are considered. (C) 2002 Elsevier Science Ltd. All rights reserved.

  11. Statistical power analysis a simple and general model for traditional and modern hypothesis tests

    CERN Document Server

    Murphy, Kevin R; Wolach, Allen

    2014-01-01

    Noted for its accessible approach, this text applies the latest approaches of power analysis to both null hypothesis and minimum-effect testing using the same basic unified model. Through the use of a few simple procedures and examples, the authors show readers with little expertise in statistical analysis how to obtain the values needed to carry out the power analysis for their research. Illustrations of how these analyses work and how they can be used to choose the appropriate criterion for defining statistically significant outcomes are sprinkled throughout. The book presents a simple and g
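    A conventional power calculation of the kind the book generalizes, using statsmodels; the effect size, alpha, and power targets below are illustrative:

```python
from statsmodels.stats.power import TTestIndPower

# Sample size per group for an independent-samples t test with a medium
# effect (d = 0.5), alpha = 0.05, and target power of 0.80.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"required n per group: {n_per_group:.1f}")   # roughly 64

# Achieved power for a fixed design, e.g. n = 40 per group.
power_40 = analysis.solve_power(effect_size=0.5, nobs1=40, alpha=0.05)
print(f"power at n=40: {power_40:.2f}")
```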

  12. Representing uncertainty on model analysis plots

    Directory of Open Access Journals (Sweden)

    Trevor I. Smith

    2016-09-01

    Model analysis provides a mechanism for representing student learning as measured by standard multiple-choice surveys. The model plot contains information regarding both how likely students in a particular class are to choose the correct answer and how likely they are to choose an answer consistent with a well-documented conceptual model. Unfortunately, Bao’s original presentation of the model plot did not include a way to represent uncertainty in these measurements. I present details of a method to add error bars to model plots by expanding the work of Sommer and Lindell. I also provide a template for generating model plots with error bars.

  13. Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach.

    Science.gov (United States)

    Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao

    2016-01-15

    When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach and has several attractive features compared with existing models such as the bivariate generalized linear mixed model (Chu and Cole, 2006) and the Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having a closed-form expression of the likelihood function, and no constraints on the correlation parameter. More importantly, because the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. Copyright © 2015 John Wiley & Sons, Ltd.
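    The marginal idea can be sketched for a single margin: fit a beta-binomial to per-study event counts by maximum (composite) likelihood. The data below are invented, and scipy's betabinom supplies the marginal pmf:

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

# Hypothetical sensitivity data: true positives and diseased subjects per study.
events = np.array([15, 22, 9, 30, 12])
totals = np.array([20, 30, 12, 40, 18])

def neg_loglik(params):
    a, b = np.exp(params)               # enforce positivity of the beta parameters
    return -np.sum(stats.betabinom.logpmf(events, totals, a, b))

fit = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
a, b = np.exp(fit.x)
print(f"pooled sensitivity = {a / (a + b):.3f}")
```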

  14. Histidine decarboxylase knockout mice, a genetic model of Tourette syndrome, show repetitive grooming after induced fear.

    Science.gov (United States)

    Xu, Meiyu; Li, Lina; Ohtsu, Hiroshi; Pittenger, Christopher

    2015-05-19

    Tics, such as are seen in Tourette syndrome (TS), are common and can cause profound morbidity, but they are poorly understood. Tics are potentiated by psychostimulants, stress, and sleep deprivation. Mutations in the gene histidine decarboxylase (Hdc) have been implicated as a rare genetic cause of TS, and Hdc knockout mice have been validated as a genetic model that recapitulates phenomenological and pathophysiological aspects of the disorder. Tic-like stereotypies in this model have not been observed at baseline but emerge after acute challenge with the psychostimulant d-amphetamine. We tested the ability of an acute stressor to stimulate stereotypies in this model, using tone fear conditioning. Hdc knockout mice acquired conditioned fear normally, as manifested by freezing during the presentation of a tone 48h after it had been paired with a shock. During the 30min following tone presentation, knockout mice showed increased grooming. Heterozygotes exhibited normal freezing and intermediate grooming. These data validate a new paradigm for the examination of tic-like stereotypies in animals without pharmacological challenge and enhance the face validity of the Hdc knockout mouse as a pathophysiologically grounded model of tic disorders. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  15. Moderation analysis using a two-level regression model.

    Science.gov (United States)

    Yuan, Ke-Hai; Cheng, Ying; Maxwell, Scott

    2014-10-01

    Moderation analysis is widely used in social and behavioral research. The most commonly used model for moderation analysis is moderated multiple regression (MMR) in which the explanatory variables of the regression model include product terms, and the model is typically estimated by least squares (LS). This paper argues for a two-level regression model in which the regression coefficients of a criterion variable on predictors are further regressed on moderator variables. An algorithm for estimating the parameters of the two-level model by normal-distribution-based maximum likelihood (NML) is developed. Formulas for the standard errors (SEs) of the parameter estimates are provided and studied. Results indicate that, when heteroscedasticity exists, NML with the two-level model gives more efficient and more accurate parameter estimates than the LS analysis of the MMR model. When error variances are homoscedastic, NML with the two-level model leads to essentially the same results as LS with the MMR model. Most importantly, the two-level regression model permits estimating the percentage of variance of each regression coefficient that is due to moderator variables. When applied to data from General Social Surveys 1991, NML with the two-level model identified a significant moderation effect of race on the regression of job prestige on years of education while LS with the MMR model did not. An R package is also developed and documented to facilitate the application of the two-level model.
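    For contrast with the two-level model, the ordinary MMR baseline is a least-squares regression with a product term. A sketch on simulated data (all coefficients invented):

```python
import numpy as np
import statsmodels.api as sm

# Criterion y regressed on predictor x, moderator z, and their product x*z;
# the product-term coefficient estimates the moderation effect.
rng = np.random.default_rng(8)
n = 500
x = rng.normal(size=n)                     # predictor (e.g., years of education)
z = rng.binomial(1, 0.3, size=n)           # moderator (e.g., group indicator)
y = 1.0 + 0.5 * x + 0.2 * z + 0.4 * x * z + rng.normal(scale=1.0, size=n)

X = sm.add_constant(np.column_stack([x, z, x * z]))
fit = sm.OLS(y, X).fit()
print(fit.params)          # last coefficient ~ 0.4, the moderation effect
print(fit.pvalues[-1])     # significance of the product term
```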

  16. ANALYSIS MODELS OF THE BANKRUPTCY RISK IN ROMANIA’S ENERGY SECTOR

    Directory of Open Access Journals (Sweden)

    MIRON VASILE CRISTIAN IOACHIM

    2015-12-01

    The risk, as a concept found in the economic sphere of business, represents an area of analysis often approached by researchers in the finance and accounting field. Although it is often seen, along with cost-effectiveness and value, as a fundamental element of finance (Stancu, I., 2007), risk often has facets that make it useful also in analyzing other sides of the economic sphere of business, such as its financial position and economic performance. Of these meanings, we believe that the most suitable for this purpose is the one through which the ability of an entity to avoid bankruptcy is analyzed. The present study has as its main objectives the presentation of the bankruptcy risk of an entity from a theoretical point of view and the analysis (from an empirical and comparative point of view, through the scoring method) of the implementation of various models for analyzing the risk of bankruptcy (Altman, Conan-Holder, Taffler, Robertson) in Romania’s energy sector, in order to issue an opinion regarding the optimal method for analyzing the bankruptcy risk in the energy sector. The results show that there are significant differences in the analysis of bankruptcy risk through the application of different models, proposing the Conan-Holder model as the most appropriate for this sector.
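    One of the scoring models compared in the study, the original Altman (1968) Z-score, is simple enough to state directly; the ratio values below are invented for illustration:

```python
# Altman Z-score for publicly traded manufacturers:
# Z = 1.2*X1 + 1.4*X2 + 3.3*X3 + 0.6*X4 + 1.0*X5

def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """X1..X5: working capital, retained earnings, EBIT, market value of
    equity (over total liabilities), and sales, each scaled by total assets
    except X4."""
    return (1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta +
            0.6 * mve_tl + 1.0 * sales_ta)

z = altman_z(wc_ta=0.12, re_ta=0.25, ebit_ta=0.10, mve_tl=0.90, sales_ta=1.10)
# Conventional cut-offs: Z > 2.99 "safe", 1.81-2.99 "grey zone", Z < 1.81 "distress".
print(f"Z = {z:.2f}")
```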

  17. Cryogenic Fuel Tank Draining Analysis Model

    Science.gov (United States)

    Greer, Donald

    1999-01-01

    One of the technological challenges in designing advanced hypersonic aircraft and the next generation of spacecraft is developing reusable flight-weight cryogenic fuel tanks. As an aid in the design and analysis of these cryogenic tanks, a computational fluid dynamics (CFD) model has been developed specifically for the analysis of flow in a cryogenic fuel tank. This model employs the full set of Navier-Stokes equations, except that viscous dissipation is neglected in the energy equation. An explicit finite difference technique in two-dimensional generalized coordinates, approximated to second-order accuracy in both space and time is used. The stiffness resulting from the low Mach number is resolved by using artificial compressibility. The model simulates the transient, two-dimensional draining of a fuel tank cross section. To calculate the slosh wave dynamics the interface between the ullage gas and liquid fuel is modeled as a free surface. Then, experimental data for free convection inside a horizontal cylinder are compared with model results. Finally, cryogenic tank draining calculations are performed with three different wall heat fluxes to demonstrate the effect of wall heat flux on the internal tank flow field.

  18. Validation and uncertainty analysis of a pre-treatment 2D dose prediction model

    Science.gov (United States)

    Baeza, Jose A.; Wolfs, Cecile J. A.; Nijsten, Sebastiaan M. J. J. G.; Verhaegen, Frank

    2018-02-01

    Independent verification of complex treatment delivery with megavolt photon beam radiotherapy (RT) has been effectively used to detect and prevent errors. This work presents the validation and uncertainty analysis of a model that predicts 2D portal dose images (PDIs) without a patient or phantom in the beam. The prediction model is based on an exponential point dose model with separable primary and secondary photon fluence components. The model includes a scatter kernel, off-axis ratio map, transmission values and penumbra kernels for beam-delimiting components. These parameters were derived through a model fitting procedure supplied with point dose and dose profile measurements of radiation fields. The model was validated against a treatment planning system (TPS; Eclipse) and radiochromic film measurements for complex clinical scenarios, including volumetric modulated arc therapy (VMAT). Confidence limits on fitted model parameters were calculated based on simulated measurements. A sensitivity analysis was performed to evaluate the effect of the parameter uncertainties on the model output. For the maximum uncertainty, the maximum deviating measurement sets were propagated through the fitting procedure and the model. The overall uncertainty was assessed using all simulated measurements. The validation of the prediction model against the TPS and the film showed good agreement, with on average 90.8% and 90.5% of pixels passing a (2%, 2 mm) global gamma analysis, respectively, with a low-dose threshold of 10%. The maximum and overall uncertainties of the model are dependent on the type of clinical plan used as input. The results can be used to study the robustness of the model. A model for predicting accurate 2D pre-treatment PDIs in complex RT scenarios can be used clinically and its uncertainties can be taken into account.

  19. On the stability analysis of a general discrete-time population model involving predation and Allee effects

    International Nuclear Information System (INIS)

    Merdan, H.; Duman, O.

    2009-01-01

    This paper presents the stability analysis of the equilibrium points of a general discrete-time population dynamics model involving predation, with and without Allee effects, which occur at low population density. The mathematical analysis and numerical simulations show that the Allee effect has a stabilizing role on the local stability of the positive equilibrium points of this model.
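    The local stability criterion used in such analyses can be checked numerically: a fixed point x* of x_{t+1} = f(x_t) is locally asymptotically stable when |f'(x*)| < 1. A sketch on a toy map with an Allee-type factor (not the general model of the paper):

```python
# Toy discrete map with an Allee-type factor x/(a + x); parameters invented.
r, a, k = 3.0, 0.2, 1.0

def f(x):
    return r * x * (x / (a + x)) * (1.0 - x / k)

# Locate a positive fixed point by iterating from a moderate density...
x = 0.5
for _ in range(1000):
    x = f(x)

# ...and evaluate |f'(x*)| by a central finite difference.
h = 1e-6
fprime = (f(x + h) - f(x - h)) / (2 * h)
print(f"fixed point ~ {x:.4f}, |f'(x*)| = {abs(fprime):.4f} "
      f"({'stable' if abs(fprime) < 1 else 'unstable'})")
```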

  20. Modelling wedding marketing strategies: An fsQCA Analysis

    Directory of Open Access Journals (Sweden)

    Anestis Fotiadis

    2018-05-01

    The aim of this study is to develop a model delineating customer perceptions of wedding marketing strategies in Kaohsiung, Taiwan. The main objective of this paper is to analyse a category of special events, the wedding market sector in Kaohsiung, Taiwan, by examining how firms in this sector attract consumers with their marketing strategies, using the method of fuzzy-set Qualitative Comparative Analysis (fsQCA). Based on a survey of married, in-relationship and single local citizens of Taiwan, the relationships of impressions, importance and push factors with decision making were explored. To test the hypotheses of the proposed model, a primary research study was conducted employing a mall intercept technique via distribution of a self-administered questionnaire within a cross-sectional, on-site field research context. An fsQCA modelling approach was employed in order to measure, estimate and confirm the different causal path constructs, as well as to test the significance of the paths between different segments of the wedding industry. Our findings reveal that the presence of importance, push factors and decision making determines the level of consumer perception performance. However, impressions do not show a significant impact on consumer perceptions.
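    The core fsQCA sufficiency measures are easy to compute once fuzzy membership scores are calibrated. A sketch with invented scores:

```python
import numpy as np

# For fuzzy memberships X (a candidate condition/recipe) and Y (the outcome,
# e.g. positive consumer perception): consistency = sum(min(X,Y)) / sum(X),
# coverage = sum(min(X,Y)) / sum(Y).
X = np.array([0.9, 0.7, 0.4, 0.8, 0.2, 0.6])   # membership in the recipe
Y = np.array([0.8, 0.9, 0.3, 0.7, 0.4, 0.8])   # membership in the outcome

overlap = np.minimum(X, Y).sum()
consistency = overlap / X.sum()
coverage = overlap / Y.sum()
print(f"consistency = {consistency:.2f}, coverage = {coverage:.2f}")
# In practice a recipe with consistency above ~0.80 is treated as sufficient.
```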

  1. Data Structure Analysis to Represent Basic Models of Finite State Automata

    Directory of Open Access Journals (Sweden)

    V. V. Gurenko

    2015-01-01

    Complex system engineering based on automaton models requires a reasoned data structure selection to implement them. The problem of automaton representation and of selecting the data structure to be used for it has been understudied. Arbitrary data structure selection for automaton model software implementation leads to unnecessary computational burden and reduces the efficiency of the developed system. This article proposes an approach to the reasoned selection of data structures to represent basic models of finite algorithmic automata and gives practical considerations based on it. Static and dynamic data structures are proposed for the three main ways to define Mealy and Moore automata: a transition table, a coupling matrix and a transition graph. A three-dimensional array, a rectangular matrix and a matrix of lists are the static structures. The dynamic structures are list-oriented structures: two-level and three-level Iliffe vectors and a multi-linked list. These structures allow us to store all required information about finite state automaton model components - characteristic set cardinalities and data of transition and output functions. A criterion system is proposed for comparative evaluation of the data structures with respect to the algorithmic features of automata theory problems. The criteria focus on the space and time computational complexity of operations performed in tasks such as equivalent automaton conversions, proving automaton equivalence and isomorphism, and automaton minimization. A comparative analysis of both the static and dynamic data structures was done based on the criterion system. The analysis showed advantages of the three-dimensional array, the matrix and the two-level Iliffe vector. These are the structures that define an automaton by its transition table. For these structures an experiment was done to measure the execution time of automaton operations included in the criterion system. The analysis of the experiment results showed that a dynamic structure - two
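    A minimal sketch of the static transition-table representation discussed above, for an invented Mealy automaton: two rectangular matrices indexed by (state, input), one for the transition function and one for the output function.

```python
# Invented 3-state, 2-input Mealy automaton stored as transition tables.
N_STATES, N_INPUTS = 3, 2

delta = [[1, 2],    # transitions: delta[state][input] -> next state
         [2, 0],
         [0, 1]]
lam = [["a", "b"],  # outputs: lam[state][input] -> output symbol
       ["b", "a"],
       ["a", "a"]]

def run(word, state=0):
    out = []
    for sym in word:                 # O(1) table lookup per step
        out.append(lam[state][sym])
        state = delta[state][sym]
    return "".join(out), state

print(run([0, 1, 1, 0]))             # -> ('aaba', 0)
```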

  2. NEPHRUS: model of intelligent multilayers expert system for evaluation of the renal system based on scintigraphic images analysis

    International Nuclear Information System (INIS)

    Silva, Jose W.E. da; Schirru, Roberto; Boasquevisque, Edson M.

    1997-01-01

    This work develops a prototype for a system model based on Artificial Intelligence techniques able to perform functions related to scintigraphic image analysis of the urinary system. Criteria used by medical experts for analyzing images obtained with 99mTc+DTPA and/or 99mTc+DMSA were modeled and a multiresolution diagnosis technique was implemented. Special attention was given to the design of the program's user interface. Human Factors Engineering techniques were considered so as to combine friendliness with robustness. The results obtained using Artificial Neural Networks for the qualitative image analysis, together with the knowledge model constructed, show the feasibility of an Artificial Intelligence implementation that uses the 'inherent' abilities of each technique in the resolution of diagnostic image analysis problems. (author). 12 refs., 2 figs., 2 tabs

  3. Adaptive streaming applications : analysis and implementation models

    NARCIS (Netherlands)

    Zhai, Jiali Teddy

    2015-01-01

    This thesis presents a highly automated design framework, called DaedalusRT, and several novel techniques. As the foundation of the DaedalusRT design framework, two types of dataflow Models-of-Computation (MoC) are used, one as the timing analysis model and the other as the implementation model. The

  4. Classifying Multi-Model Wheat Yield Impact Response Surfaces Showing Sensitivity to Temperature and Precipitation Change

    Science.gov (United States)

    Fronzek, Stefan; Pirttioja, Nina; Carter, Timothy R.; Bindi, Marco; Hoffmann, Holger; Palosuo, Taru; Ruiz-Ramos, Margarita; Tao, Fulu; Trnka, Miroslav; Acutis, Marco

    2017-01-01

    Crop growth simulation models can differ greatly in their treatment of key processes and hence in their response to environmental conditions. Here, we used an ensemble of 26 process-based wheat models applied at sites across a European transect to compare their sensitivity to changes in temperature (−2 to +9 °C) and precipitation (−50 to +50%). Model results were analysed by plotting them as impact response surfaces (IRSs), classifying the IRS patterns of individual model simulations, describing these classes and analysing factors that may explain the major differences in model responses. The model ensemble was used to simulate yields of winter and spring wheat at four sites in Finland, Germany and Spain. Results were plotted as IRSs that show changes in yields relative to the baseline with respect to temperature and precipitation. IRSs of 30-year means and selected extreme years were classified using two approaches describing their pattern. The expert diagnostic approach (EDA) combines two aspects of IRS patterns: location of the maximum yield (nine classes) and strength of the yield response with respect to climate (four classes), resulting in a total of 36 combined classes defined using criteria pre-specified by experts. The statistical diagnostic approach (SDA) groups IRSs by comparing their pattern and magnitude, without attempting to interpret these features. It applies a hierarchical clustering method, grouping response patterns using a distance metric that combines the spatial correlation and Euclidean distance between IRS pairs. The two approaches were used to investigate whether different patterns of yield response could be related to different properties of the crop models, specifically their genealogy, calibration and process description. Although no single model property across a large model ensemble was found to explain the integrated yield response to temperature and precipitation perturbations, the

  5. MSSV Modeling for Wolsong-1 Safety Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Moon, Bok Ja; Choi, Chul Jin; Kim, Seoung Rae [KEPCO EandC, Daejeon (Korea, Republic of)

    2010-10-15

    The main steam safety valves (MSSVs) are installed on the main steam line to prevent overpressurization of the system. MSSVs are held in the closed position by spring force and pop open by internal force when the main steam pressure increases to the opening set pressure. When the overpressure condition is relieved, the valves begin to close. For the safety analysis of anticipated accident conditions, the safety systems are modeled conservatively so as to make the simulated accident condition more severe. MSSVs are likewise modeled conservatively for the analysis of over-pressurization accidents. In this paper, the pressure transient is analyzed under over-pressurization conditions to evaluate the conservatism of the MSSV models

  6. Development and Sensitivity Analysis of a Fully Kinetic Model of Sequential Reductive Dechlorination in Groundwater

    DEFF Research Database (Denmark)

    Malaguerra, Flavio; Chambon, Julie Claire Claudia; Bjerg, Poul Løgstrup

    2011-01-01

    experiments of complete trichloroethene (TCE) degradation in natural sediments. Global sensitivity analysis was performed using the Morris method and Sobol sensitivity indices to identify the most influential model parameters. Results show that the sulfate concentration and fermentation kinetics are the most...

  7. 68Ga/177Lu-labeled DOTA-TATE shows similar imaging and biodistribution in neuroendocrine tumor model.

    Science.gov (United States)

    Liu, Fei; Zhu, Hua; Yu, Jiangyuan; Han, Xuedi; Xie, Qinghua; Liu, Teli; Xia, Chuanqin; Li, Nan; Yang, Zhi

    2017-06-01

    Somatostatin receptors are overexpressed in neuroendocrine tumors, whose endogenous ligands are somatostatins. DOTA-TATE is an analogue of somatostatin, which shows high binding affinity to somatostatin receptors. We aim to evaluate the 68Ga/177Lu-labeled DOTA-TATE kit in a neuroendocrine tumor model for molecular imaging and to attempt human positron emission tomography/computed tomography imaging of 68Ga-DOTA-TATE in neuroendocrine tumor patients. DOTA-TATE kits were formulated and radiolabeled with 68Ga/177Lu for 68Ga/177Lu-DOTA-TATE (M-DOTA-TATE). In vitro and in vivo stability tests of 177Lu-DOTA-TATE were performed. Nude mice bearing human tumors were injected with 68Ga-DOTA-TATE or 177Lu-DOTA-TATE for micro-positron emission tomography and micro-single-photon emission computed tomography/computed tomography imaging, respectively, and clinical positron emission tomography/computed tomography images of 68Ga-DOTA-TATE were obtained at 1 h post-intravenous injection from patients with neuroendocrine tumors. Micro-positron emission tomography and micro-single-photon emission computed tomography/computed tomography imaging of 68Ga-DOTA-TATE and 177Lu-DOTA-TATE both showed clear tumor uptake, which could be blocked by excess DOTA-TATE. In addition, 68Ga-DOTA-TATE positron emission tomography/computed tomography imaging in neuroendocrine tumor patients could show primary and metastatic lesions. 68Ga-DOTA-TATE and 177Lu-DOTA-TATE accumulate in tumors in animal models, paving the way for better clinical peptide receptor radionuclide therapy for neuroendocrine tumor patients in the Asian population.

  8. Nonlinear analysis of pre-stressed concrete containment vessel (PCCV) using the damage plasticity model

    Energy Technology Data Exchange (ETDEWEB)

    Shokoohfar, Ahmad; Rahai, Alireza, E-mail: rahai@aut.ac.ir

    2016-03-15

    Highlights: • This paper describes nonlinear analyses of a 1:4 scale model of a PCCV. • Coupled temperature-displacement analysis and concrete damage plasticity are considered. • Temperature has limited effects on correct failure mode estimation. • Higher pre-stressing forces have limited effects on ultimate radial displacements. • Anchorage details of the liner plates lead to prediction of the correct failure mode. - Abstract: This paper describes the nonlinear analyses of a 1:4 scale model of a pre-stressed concrete containment vessel (PCCV). The analyses are performed under pressure and high temperature effects, considering the anchorage details of the liner plate. The temperature-time history of the model test is considered as an input boundary condition in the coupled temperature-displacement analysis. The constitutive model developed by Chang and Mander (1994) is adopted in the model as the basis for the concrete stress–strain relation. To trace the crack pattern of the PCCV concrete faces, the concrete damage plasticity model is applied. This study includes the results of the thermal and mechanical behavior of the PCCV subjected to temperature loading and internal pressure at the same time. The test results are compared with the analysis results. The analysis results show that the temperature has little impact on the ultimate pressure capacity of the PCCV. To simulate the exact failure mode of the PCCV, the anchorage details of the liner plates around openings should be maintained in the analytical models. Also, the failure mode of the PCCV structure is not influenced by variations in the hoop tendon pre-stressing force.

  9. Modelling pesticides volatilisation in greenhouses: Sensitivity analysis of a modified PEARL model.

    Science.gov (United States)

    Houbraken, Michael; Doan Ngoc, Kim; van den Berg, Frederik; Spanoghe, Pieter

    2017-12-01

    The application of the existing PEARL model was extended to include estimations of the concentration of crop protection products in greenhouse (indoor) air due to volatilisation from the plant surface. The model was modified to include the processes of ventilation of the greenhouse air to the outside atmosphere and transformation in the air. A sensitivity analysis of the model was performed by varying selected input parameters on a one-by-one basis and comparing the model outputs with the outputs of the reference scenarios. The sensitivity analysis indicates that - in addition to vapour pressure - the model output varied most with the ventilation rate and the thickness of the boundary layer on the day of application. On the days after application, the competing processes, degradation and uptake in the plant, become more important. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Model reduction using a posteriori analysis

    KAUST Repository

    Whiteley, Jonathan P.

    2010-05-01

    Mathematical models in biology and physiology are often represented by large systems of non-linear ordinary differential equations. In many cases, an observed behaviour may be written as a linear functional of the solution of this system of equations. A technique is presented in this study for automatically identifying key terms in the system of equations that are responsible for a given linear functional of the solution. This technique is underpinned by ideas drawn from a posteriori error analysis. This concept has been used in finite element analysis to identify regions of the computational domain and components of the solution where a fine computational mesh should be used to ensure accuracy of the numerical solution. We use this concept to identify regions of the computational domain and components of the solution where accurate representation of the mathematical model is required for accuracy of the functional of interest. The technique presented is demonstrated by application to a model problem, and then to automatically deduce known results from a cell-level cardiac electrophysiology model. © 2010 Elsevier Inc.

  11. Temporal diagnostic analysis of the SWAT model to detect dominant periods of poor model performance

    Science.gov (United States)

    Guse, Björn; Reusser, Dominik E.; Fohrer, Nicola

    2013-04-01

    Hydrological models generally include thresholds and non-linearities, such as snow-rain-temperature thresholds, non-linear reservoirs, infiltration thresholds and the like. When relating observed variables to modelling results, formal methods often calculate performance metrics over long periods, reporting model performance with only a few numbers. Such approaches are not well suited to comparing the dominating processes between reality and model and to better understanding when thresholds and non-linearities are driving model results. We present a combination of two temporally resolved model diagnostic tools to answer when a model is performing (not so) well and what the dominant processes are during these periods. We look at the temporal dynamics of parameter sensitivities and model performance to answer this question. For this, the eco-hydrological SWAT model is applied in the Treene lowland catchment in Northern Germany. As a first step, the temporal dynamics of parameter sensitivities are analyzed using the Fourier Amplitude Sensitivity Test (FAST). The sensitivities of the eight model parameters investigated show strong temporal variations. High sensitivities were detected for two groundwater parameters (GW_DELAY, ALPHA_BF) and one evaporation parameter (ESCO) most of the time. The periods of high parameter sensitivity can be related to different phases of the hydrograph, with dominance of the groundwater parameters in the recession phases and of ESCO in baseflow and resaturation periods. Surface runoff parameters show high parameter sensitivities in phases of a precipitation event in combination with high soil water contents. The dominant parameters give an indication of the controlling processes during a given period for the hydrological catchment. The second step included the temporal analysis of model performance. For each time step, model performance was characterized with a "finger print" consisting of a large set of performance measures. These finger prints were clustered into

  12. Model parameter uncertainty analysis for annual field-scale P loss model

    Science.gov (United States)

    Phosphorous (P) loss models are important tools for developing and evaluating conservation practices aimed at reducing P losses from agricultural fields. All P loss models, however, have an inherent amount of uncertainty associated with them. In this study, we conducted an uncertainty analysis with ...

  13. Observability analysis for model-based fault detection and sensor selection in induction motors

    International Nuclear Information System (INIS)

    Nakhaeinejad, Mohsen; Bryant, Michael D

    2011-01-01

    Sensors in different types and configurations provide information on the dynamics of a system. For a specific task, the question is whether the measurements have enough information or whether the sensor configuration can be changed to improve performance or to reduce costs. Observability analysis may answer these questions. This paper presents a general algorithm for nonlinear observability analysis with application to model-based diagnostics and sensor selection in three-phase induction motors. A bond graph model of the motor is developed and verified with experiments. A nonlinear observability matrix based on Lie derivatives is obtained from the state equations. An observability index based on the singular value decomposition of the observability matrix is obtained. Singular values and singular vectors are used to identify the most and least observable configurations of sensors and parameters. A complex step derivative technique is used in the calculation of Jacobians to improve the computational performance of the observability analysis. The proposed algorithm of observability analysis can be applied to any nonlinear system to select the best configuration of sensors for applications of model-based diagnostics and observer-based control, or to determine the level of sensor redundancy. Observability analysis on induction motors provides various sensor configurations with corresponding observability indices. Results show the redundancy levels for different sensors and provide a sensor selection guideline for model-based diagnostics and for observer-based controllers. The results can also be used for sensor fault detection and to improve the reliability of the system by increasing the redundancy level in measurements
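    For the linear(ized) case, the observability matrix and its singular values can be computed directly. The two-state system below is a placeholder, not the induction motor model:

```python
import numpy as np

# Rank sensor configurations by the conditioning of the observability matrix
# O = [C; CA; CA^2; ...]; a small ratio of smallest to largest singular value
# indicates a poorly observable configuration (0 means unobservable).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])

def observability_index(C):
    n = A.shape[0]
    O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])
    s = np.linalg.svd(O, compute_uv=False)
    return s[-1] / s[0] if s[0] > 0 else 0.0

for name, C in {"sensor on x1": np.array([[1.0, 0.0]]),
                "sensor on x2": np.array([[0.0, 1.0]])}.items():
    print(f"{name}: index = {observability_index(C):.3f}")
```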

  14. Modelling structural systems for transient response analysis

    International Nuclear Information System (INIS)

    Melosh, R.J.

    1975-01-01

    This paper introduces and reports the success of a direct means of determining the time periods in which a structural system behaves as a linear system. Numerical results are based on post-fracture transient analyses of simplified nuclear piping systems. Knowledge of the linear response ranges will lead to improved analysis-test correlation and more efficient analyses. It permits direct use of data from physical tests in analysis and simplification of the analytical model and interpretation of its behavior. The paper presents a procedure for deducing linearity based on transient responses. Given the forcing functions and responses of discrete points of the system at various times, the process produces evidence of linearity and quantifies an adequate set of equations of motion. Results of using the process with linear and nonlinear analyses of piping systems with damping illustrate its success. Results cover the application to data from mathematical system responses. The process is successful with mathematical models. In loading ranges in which all modes are excited, eight-digit accuracy of predictions is obtained from the equations of motion deduced. Small changes (less than 0.01%) in the norm of the transfer matrices are produced by manipulation errors for linear systems, yielding evidence that nonlinearity is easily distinguished. Significant changes (greater than 5%) are coincident with relatively large norms of the equilibrium correction vector in nonlinear analyses. The paper shows that deducing linearity and, when admissible, quantifying linear equations of motion from transient response data for piping systems can be achieved with accuracy comparable to that of the response data

  16. Clinical laboratory as an economic model for business performance analysis

    Science.gov (United States)

    Buljanović, Vikica; Patajac, Hrvoje; Petrovečki, Mladen

    2011-01-01

    Aim To perform a SWOT (strengths, weaknesses, opportunities, and threats) analysis of a clinical laboratory as an economic model that may be used to improve the business performance of laboratories by removing weaknesses, minimizing threats, and using external opportunities and internal strengths. Methods The impact of possible threats and weaknesses on the business performance of the Clinical Laboratory at Našice General County Hospital, and the use of strengths and opportunities to improve operating profit, were simulated using models created on the basis of the SWOT analysis results. Operating profit, as a measure of the profitability of the clinical laboratory, was defined as total revenue minus total expenses and was presented using a profit and loss account. Changes in the input parameters of the profit and loss account for 2008 were determined from the opportunities and potential threats, and an economic sensitivity analysis was made by varying the key parameters. The profit and loss account and the economic sensitivity analysis were the tools for quantifying the impact of changes in revenues and expenses on the business operations of the clinical laboratory. Results The simulation models showed that the operating profit of €470 723 in 2008 could be reduced to only €21 542 if all possible threats became a reality and current weaknesses remained the same. Operating profit could be increased to €535 804 if the laboratory's strengths and opportunities were utilized. If both the opportunities and threats became a reality, the operating profit would decrease by €384 465. Conclusion The operating profit of the clinical laboratory could be significantly reduced if all threats became a reality and the current weaknesses remained the same, and it could be increased by utilizing strengths and opportunities as much as possible. This type of modeling may be used to monitor the business operations of any clinical laboratory and to improve its financial situation by removing weaknesses, minimizing threats, and exploiting strengths and opportunities.

  17. Corrected Statistical Energy Analysis Model for Car Interior Noise

    Directory of Open Access Journals (Sweden)

    A. Putra

    2015-01-01

    Statistical energy analysis (SEA) is a well-known method for analyzing the flow of acoustic and vibration energy in a complex structure. In an acoustic space where significant absorptive materials are present, the direct field component from the sound source dominates the total sound field rather than the reverberant field, although the latter is the basis on which the conventional SEA model is constructed. Such an environment is found in a car interior, and a corrected SEA model is therefore proposed here to handle this situation. The model is developed by eliminating the direct field component from the total sound field, so that only the power after the first reflection is considered. A test car cabin was divided into two subsystems and, using a loudspeaker as a sound source, the power injection method in SEA was employed to obtain the corrected coupling loss factor and the damping loss factor of the corrected SEA model. These parameters were then used to predict the sound pressure level in the interior cabin from the input power injected by the engine. The results show satisfactory agreement with the directly measured SPL.
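
    For two subsystems, the power injection method named above reduces to a small linear solve. A minimal sketch with hypothetical energy and power values, not the paper's cabin measurements:

    ```python
    # Power injection method: P = omega * A @ E, so A = P @ inv(E) / omega,
    # where E[j, i] is the energy in subsystem j when unit power is injected
    # into subsystem i, and A packs damping and coupling loss factors.
    import numpy as np

    omega = 2 * np.pi * 1000.0                   # band centre frequency [rad/s]
    E = np.array([[2.0e-2, 0.2e-2],              # hypothetical energies [J]
                  [0.15e-2, 1.5e-2]])
    P = np.eye(2)                                # unit injected powers [W]

    A = P @ np.linalg.inv(E) / omega
    eta12, eta21 = -A[1, 0], -A[0, 1]            # coupling loss factors
    dlf = np.diag(A) - np.array([eta12, eta21])  # damping loss factors
    print(eta12, eta21, dlf)
    ```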

  18. Development of Test-Analysis Models (TAM) for correlation of dynamic test and analysis results

    Science.gov (United States)

    Angelucci, Filippo; Javeed, Mehzad; Mcgowan, Paul

    1992-01-01

    The primary objective of structural analysis for aerospace applications is to obtain a verified finite element model (FEM). The verified FEM can be used for loads analysis, evaluation of structural modifications, or control system design. Verification of the FEM is generally obtained by correlating test results with the FEM. A test-analysis model (TAM) is very useful in this correlation process. A TAM is essentially a FEM reduced to the size of the test model in a way that attempts to preserve the dynamic characteristics of the original FEM in the analysis range of interest. Numerous methods for generating TAMs have been developed in the literature. The major emphasis of this paper is a description of the procedures necessary for creating a TAM and correlating the reduced models with the FEM or the test results. Three methods are discussed herein, namely Guyan reduction, the Improved Reduced System (IRS), and the Hybrid method. Also included are the procedures for performing these analyses using MSC/NASTRAN. Finally, the TAM process is demonstrated on an experimental test configuration of a ten-bay cantilevered truss structure.
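
    Of the three reduction methods named, Guyan reduction is the simplest to state: slave DOFs are condensed statically onto the retained (test) DOFs. A minimal numpy sketch on a hypothetical 3-DOF spring-mass chain, not the truss model from the paper:

    ```python
    # Guyan (static) reduction: x = T x_a with T = [I; -Kss^-1 Ksa].
    import numpy as np

    def guyan(K, M, a_idx, s_idx):
        Kss = K[np.ix_(s_idx, s_idx)]
        Ksa = K[np.ix_(s_idx, a_idx)]
        # Transformation slaving s-DOFs statically to the retained a-DOFs.
        T = np.vstack([np.eye(len(a_idx)), -np.linalg.solve(Kss, Ksa)])
        # Reorder K and M to the [a; s] ordering used by T before projecting.
        order = np.concatenate([a_idx, s_idx])
        Ko, Mo = K[np.ix_(order, order)], M[np.ix_(order, order)]
        return T.T @ Ko @ T, T.T @ Mo @ T

    K = np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  1.0]])
    M = np.eye(3)
    Kr, Mr = guyan(K, M, a_idx=np.array([0, 2]), s_idx=np.array([1]))
    print(Kr, Mr)   # 2x2 reduced stiffness and mass matrices
    ```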

  19. SBKF Modeling and Analysis Plan: Buckling Analysis of Compression-Loaded Orthogrid and Isogrid Cylinders

    Science.gov (United States)

    Lovejoy, Andrew E.; Hilburger, Mark W.

    2013-01-01

    This document outlines a Modeling and Analysis Plan (MAP) to be followed by the SBKF analysts. It includes instructions on modeling and analysis formulation and execution, model verification and validation, identification of sources of error and uncertainty, and documentation. The goal of this MAP is to provide a standardized procedure that ensures the uniformity and quality of the results produced by the project and of the corresponding documentation.

  20. Renewable Energy and Efficiency Modeling Analysis Partnership (REMAP): An Analysis of How Different Energy Models Addressed a Common High Renewable Energy Penetration Scenario in 2025

    Energy Technology Data Exchange (ETDEWEB)

    Blair, N.; Jenkin, T.; Milford, J.; Short, W.; Sullivan, P.; Evans, D.; Lieberman, E.; Goldstein, G.; Wright, E.; Jayaraman, K. R.; Venkatesh, B.; Kleiman, G.; Namovicz, C.; Smith, B.; Palmer, K.; Wiser, R.; Wood, F.

    2009-09-01

    Energy system modeling can be intentionally or unintentionally misused by decision-makers. This report describes how both can be minimized through careful use of models and thorough understanding of their underlying approaches and assumptions. The analysis summarized here assesses the impact that model and data choices have on forecasting energy systems by comparing seven different electric-sector models. This analysis was coordinated by the Renewable Energy and Efficiency Modeling Analysis Partnership (REMAP), a collaboration among governmental, academic, and nongovernmental participants.

  1. A primer for biomedical scientists on how to execute model II linear regression analysis.

    Science.gov (United States)

    Ludbrook, John

    2012-04-01

    1. There are two very different ways of executing linear regression analysis. One is Model I, when the x-values are fixed by the experimenter. The other is Model II, in which the x-values are free to vary and are subject to error. 2. I have received numerous complaints from biomedical scientists that they have great difficulty in executing Model II linear regression analysis. This may explain the results of a Google Scholar search, which showed that the authors of articles in journals of physiology, pharmacology and biochemistry rarely use Model II regression analysis. 3. I repeat my previous arguments in favour of using least products linear regression analysis for Model II regressions. I review three methods for executing ordinary least products (OLP) and weighted least products (WLP) regression analysis: (i) scientific calculator and/or computer spreadsheet; (ii) specific purpose computer programs; and (iii) general purpose computer programs. 4. Using a scientific calculator and/or computer spreadsheet, it is easy to obtain correct values for OLP slope and intercept, but the corresponding 95% confidence intervals (CI) are inaccurate. 5. Using specific purpose computer programs, the freeware computer program smatr gives the correct OLP regression coefficients and obtains 95% CI by bootstrapping. In addition, smatr can be used to compare the slopes of OLP lines. 6. When using general purpose computer programs, I recommend the commercial programs systat and Statistica for those who regularly undertake linear regression analysis and I give step-by-step instructions in the Supplementary Information as to how to use loss functions. © 2011 The Author. Clinical and Experimental Pharmacology and Physiology. © 2011 Blackwell Publishing Asia Pty Ltd.
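
    The OLP coefficients described above are short computations, and bootstrapping gives usable confidence intervals. A numpy-only illustrative sketch (this is not the smatr implementation):

    ```python
    # Ordinary least products (geometric mean) regression with a
    # percentile-bootstrap CI for the slope.
    import numpy as np

    def olp(x, y):
        r = np.corrcoef(x, y)[0, 1]
        b = np.sign(r) * np.std(y, ddof=1) / np.std(x, ddof=1)
        return np.mean(y) - b * np.mean(x), b     # intercept, slope

    rng = np.random.default_rng(0)
    x = rng.normal(10, 2, 50)
    y = 1.5 * x + rng.normal(0, 1, 50)            # synthetic Model II data
    a, b = olp(x, y)

    idx = (rng.integers(0, x.size, x.size) for _ in range(2000))
    boot = [olp(x[i], y[i])[1] for i in idx]      # resampled slopes
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"slope={b:.3f}, 95% CI=({lo:.3f}, {hi:.3f})")
    ```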

  2. Robust Linear Models for Cis-eQTL Analysis.

    Science.gov (United States)

    Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C

    2015-01-01

    Expression quantitative trait loci (eQTL) analysis enables characterisation of functional genetic variation influencing the expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates and assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution, and they often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives) and, to some extent, type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not of type II errors, since it is generally too time-consuming to carefully check every model with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce the adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results than conventional linear models, particularly in reducing type II errors due to non-Gaussian noise. Post-genomic data, such as those generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
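
    A hedged sketch of a robust eQTL-style fit: Huber M-estimation via statsmodels RLM versus OLS on data with heavy-tailed noise. The variable names (dosage, expr) are illustrative, not from the paper's pipeline.

    ```python
    # Robust linear model vs. OLS under t-distributed (heavy-tailed) errors.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    dosage = rng.integers(0, 3, 200).astype(float)        # allelic dosage 0/1/2
    expr = 0.3 * dosage + rng.standard_t(df=3, size=200)  # heavy-tailed noise
    X = sm.add_constant(dosage)

    ols = sm.OLS(expr, X).fit()
    rlm = sm.RLM(expr, X, M=sm.robust.norms.HuberT()).fit()
    print(ols.params, ols.bse)   # OLS estimates and standard errors
    print(rlm.params, rlm.bse)   # robust estimates, less outlier-driven
    ```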

  3. Advertising Discourse Analysis of FES stores: Killing Love, Cowards Show

    Directory of Open Access Journals (Sweden)

    Cristian Venegas Ahumada

    2013-08-01

    The objective is to analyze the structural and photographic discourse of the Autumn-Winter 2008 campaign of FES stores aimed at young people. This was done with a semiotic theory and a critical-structural methodology of discourse analysis. Four advertising photographs were analyzed, together with the discourse "FES says no to violence against Women", which states the campaign's target. The result is that the discourse was subject to its conditions of production (a society of control) and makes advertising a means of homogenizing the subjectivity of the masses toward consumption. The conditions of recognition demonstrate that this advertising discourse of symbolic violence amounts to a violation of the rights of men and women. Addressing this requires the commitment of psychology to promoting humanizing social change through university teaching and professional practice.

  4. Water Management in the Camargue Biosphere Reserve: Insights from Comparative Mental Models Analysis

    Directory of Open Access Journals (Sweden)

    Raphael Mathevet

    2011-03-01

    Mental models are the cognitive representations of the world that frame how people interact with the world. Learning implies changing these mental models. The successful management of complex social-ecological systems requires the coordination of actions to achieve shared goals. The coordination of actions requires a level of shared understanding of the system or situation; a shared or common mental model. We first describe the elicitation and analysis of the mental models of different stakeholder groups associated with water management in the Camargue Biosphere Reserve in the Rhône River delta on the French Mediterranean coast. We use cultural consensus analysis to explore the degree to which different groups share mental models of the whole system, of stakeholders, of resources, of processes, and of the interactions among these last three. The analysis of the data elicited from this group structure enabled us to tentatively explore the evidence for learning in the nonstatutory Water Board, which comprises important stakeholders in the management of the central Rhône delta. The results indicate that learning does occur and results in richer mental models that are more likely to be shared among group members. However, the results also show lower-than-expected levels of agreement with these consensual mental models. Based on this result, we argue that careful process and facilitation design can greatly enhance the functioning of the participatory process in the Water Board. We conclude that this methodology holds promise for eliciting and comparing mental models. It enriches group model building and participatory approaches with a broader view of social learning and knowledge-sharing issues.

  5. Verification and Validation of FAARR Model and Data Envelopment Analysis Models for United States Army Recruiting

    National Research Council Canada - National Science Library

    Piskator, Gene

    1998-01-01

    ...) model and to develop a Data Envelopment Analysis (DEA) modeling strategy. First, the FAARR model was verified using a simulation of a known production function and validated using sensitivity analysis and ex-post forecasts...

  6. Analysis of perceived risk among construction workers: a cross-cultural study and reflection on the Hofstede model.

    Science.gov (United States)

    Martinez-Fiestas, Myriam; Rodríguez-Garzón, Ignacio; Delgado-Padial, Antonio; Lucas-Ruiz, Valeriano

    2017-09-01

    This article presents a cross-cultural study of perceived risk in the construction industry. Worker samples from three countries were studied: Spain, Peru and Nicaragua. The main goal was to explain how construction workers perceive their occupational hazards and to analyze how this perception relates to their national culture. The model used to measure perceived risk was the psychometric paradigm. The results show three very similar profiles, indicating that risk perception is independent of nationality. A cultural analysis was then conducted using the Hofstede model. The results of this analysis and its relation to perceived risk showed that risk perception in construction is independent of national culture. Finally, a multiple linear regression analysis was conducted to determine which qualitative attributes could predict the overall quantitative size of risk perception. All of these findings have important implications for the management of safety in the workplace.

  7. Derivation of Continuum Models from An Agent-based Cancer Model: Optimization and Sensitivity Analysis.

    Science.gov (United States)

    Voulgarelis, Dimitrios; Velayudhan, Ajoy; Smith, Frank

    2017-01-01

    Agent-based models provide a formidable tool for exploring the complex and emergent behaviour of biological systems, and they produce accurate results, but with the drawback of requiring substantial computational power and time for subsequent analysis. Equation-based models, on the other hand, can more easily be used for complex analysis on a much shorter timescale. This paper formulates an ordinary differential equation (ODE) and a stochastic differential equation (SDE) model to capture the behaviour of an existing agent-based model of tumour cell reprogramming, and applies them to the optimization of possible treatments as well as dosage sensitivity analysis. For certain values of the parameter space a close match between the equation-based and agent-based models is achieved. The need for a division of labour between the two approaches is explored. Copyright© Bentham Science Publishers.
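
    A toy illustration of the ODE/SDE pairing described above. Logistic growth stands in for the paper's tumour-reprogramming dynamics, which are not specified here: a deterministic ODE integrated with scipy, plus an Euler-Maruyama SDE analogue with multiplicative noise.

    ```python
    # Deterministic logistic ODE and its Euler-Maruyama SDE counterpart.
    import numpy as np
    from scipy.integrate import solve_ivp

    r, K = 0.5, 1000.0                       # growth rate, carrying capacity
    ode = solve_ivp(lambda t, n: r * n * (1 - n / K), (0, 20), [10.0],
                    t_eval=np.linspace(0, 20, 201))

    # Euler-Maruyama for dN = r N (1 - N/K) dt + sigma N dW
    rng = np.random.default_rng(2)
    dt, sigma = 0.1, 0.1
    N = np.empty(201)
    N[0] = 10.0
    for k in range(200):
        dW = rng.normal(0, np.sqrt(dt))
        N[k + 1] = max(N[k] + r * N[k] * (1 - N[k] / K) * dt + sigma * N[k] * dW, 0.0)

    print(ode.y[0, -1], N[-1])   # deterministic vs. one stochastic endpoint
    ```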

  8. Quantitative Analysis of the Security of Software-Defined Network Controller Using Threat/Effort Model

    Directory of Open Access Journals (Sweden)

    Zehui Wu

    2017-01-01

    The SDN controller, which is responsible for the configuration and management of the network, is the core of software-defined networks. Current methods, which focus on the security mechanism, use qualitative analysis to estimate the security of controllers, frequently leading to inaccurate results. In this paper, we employ a quantitative approach to overcome this shortcoming. From an analysis of the controller threat model, we give formal models of the APIs, the protocol interfaces, and the data items of the controller, and we further provide our Threat/Effort quantitative calculation model. With the help of the Threat/Effort model, we are able to compare not only the security of different versions of the same controller but also that of different kinds of controllers, providing a basis for controller selection and secure development. We evaluated our approach on four widely used SDN controllers: POX, OpenDaylight, Floodlight, and Ryu. The tests, whose outcomes are consistent with those of traditional qualitative analysis, demonstrate that our approach yields specific security values for different controllers and produces more accurate results.

  9. A goal programming model for environmental policy analysis: Application to Spain

    International Nuclear Information System (INIS)

    San Cristóbal, José Ramón

    2012-01-01

    Sustainable development has become an important part of international and national approaches to integrating economic, environmental, social and ethical considerations so that a good quality of life can be enjoyed by current and future generations for as long as possible. Today, however, sustainable development is threatened by industrial pollution emissions, which cause serious environmental problems. Given the lack of adequate quantitative models for environmental policy analysis, there is a strong need for analytical models to assess the effects of environmental policies. In the present paper, a goal programming model, based on an environmental/input-output linear programming model, is developed and applied to the Spanish economy. The model combines relations between economic, energy, social and environmental effects, providing valuable information for policy-makers to define and examine the different goals that must be implemented to reach sustainability. Highlights: a goal programming model is developed; the model considers environmental, energy, social and economic goals; it shows the effects of a reduction in greenhouse gas emissions and energy requirements; it is applied to the Spanish economy.
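
    A minimal weighted goal program in the spirit described above, with hypothetical coefficients (the paper's Spanish input-output data are not reproduced): minimize weighted deviations from an emissions goal and an output goal for two sectors, using scipy's linprog.

    ```python
    # Weighted goal programming as a linear program.
    import numpy as np
    from scipy.optimize import linprog

    # Variables: x1, x2 (sector outputs), d1m, d1p, d2m, d2p (goal deviations)
    # Goal 1 (emissions):  2*x1 + 1*x2 + d1m - d1p = 80
    # Goal 2 (output):     1*x1 + 1*x2 + d2m - d2p = 60
    c = np.array([0, 0, 0, 5, 1, 0])   # penalize emissions overshoot d1p (w=5)
                                       # and output shortfall d2m (w=1)
    A_eq = np.array([[2, 1, 1, -1, 0,  0],
                     [1, 1, 0,  0, 1, -1]])
    b_eq = np.array([80, 60])

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6)
    print(res.x[:2], res.fun)   # sector outputs and total weighted deviation
    ```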

  10. Analysis of genetic effects of nuclear-cytoplasmic interaction on quantitative traits: genetic model for diploid plants.

    Science.gov (United States)

    Han, Lide; Yang, Jian; Zhu, Jun

    2007-06-01

    A genetic model was proposed for simultaneously analyzing genetic effects of nuclear, cytoplasm, and nuclear-cytoplasmic interaction (NCI) as well as their genotype by environment (GE) interaction for quantitative traits of diploid plants. In the model, the NCI effects were further partitioned into additive and dominance nuclear-cytoplasmic interaction components. Mixed linear model approaches were used for statistical analysis. On the basis of diallel cross designs, Monte Carlo simulations showed that the genetic model was robust for estimating variance components under several situations without specific effects. Random genetic effects were predicted by an adjusted unbiased prediction (AUP) method. Data on four quantitative traits (boll number, lint percentage, fiber length, and micronaire) in Upland cotton (Gossypium hirsutum L.) were analyzed as a worked example to show the effectiveness of the model.

  11. Bias and inference from misspecified mixed-effect models in stepped wedge trial analysis.

    Science.gov (United States)

    Thompson, Jennifer A; Fielding, Katherine L; Davey, Calum; Aiken, Alexander M; Hargreaves, James R; Hayes, Richard J

    2017-10-15

    Many stepped wedge trials (SWTs) are analysed by using a mixed-effect model with a random intercept and fixed effects for the intervention and time periods (referred to here as the standard model). However, it is not known whether this model is robust to misspecification. We simulated SWTs with three groups of clusters and two time periods; one group received the intervention during the first period and two groups in the second period. We simulated period and intervention effects that were either common to all clusters or varied between clusters. Data were analysed with the standard model or with additional random effects for the period effect or the intervention effect. In a second simulation study, we explored the weight given to within-cluster comparisons by simulating a larger intervention effect in the group of the trial that experienced both the control and intervention conditions and applying the three analysis models described previously. Across 500 simulations, we computed the bias and confidence interval coverage of the estimated intervention effect. We found up to 50% bias in intervention effect estimates when period or intervention effects varied between clusters and were treated as fixed effects in the analysis. All misspecified models showed undercoverage of 95% confidence intervals, particularly the standard model. A large weight was given to within-cluster comparisons in the standard model. In the SWTs simulated here, mixed-effect models were highly sensitive to departures from the model assumptions, which can be explained by the high dependence on within-cluster comparisons. Trialists should consider including a random effect for time period in their SWT analysis model. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
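
    A sketch of the closing recommendation, assuming simulated data rather than the paper's setup: fit a stepped wedge analysis with a random cluster intercept plus a random period effect (variance component) using statsmodels MixedLM.

    ```python
    # Simulated two-period stepped wedge; 'treat' and 'period' fixed effects,
    # random cluster intercept plus a cluster-by-period variance component.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    rows = []
    for cluster in range(9):
        u = rng.normal(0, 1.0)                 # cluster intercept
        switch = 1 if cluster < 3 else 2       # period in which cluster switches
        for period in (1, 2):
            v = rng.normal(0, 0.5)             # cluster-by-period deviation
            treat = int(period >= switch)
            for _ in range(20):
                rows.append(dict(cluster=cluster, period=period, treat=treat,
                                 y=u + v + 0.5 * treat + rng.normal()))
    df = pd.DataFrame(rows)

    # The vc_formula term adds the random period effect recommended above.
    fit = smf.mixedlm("y ~ treat + C(period)", df, groups="cluster",
                      vc_formula={"period": "0 + C(period)"}).fit()
    print(fit.params["treat"])   # intervention effect estimate (true value 0.5)
    ```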

  12. Model-based safety analysis of a control system using Simulink and Simscape extended models

    Directory of Open Access Journals (Sweden)

    Shao Nian

    2017-01-01

    The aircraft and system safety assessment process is an integral part of the overall aircraft development cycle. It is usually characterized by considerable time and financial effort and can become a critical design driver in certain cases. Therefore, an increasing demand for effective methods to assist the safety assessment process has arisen within the aerospace community. One approach is the utilization of model-based technology, already well established in system development, for safety assessment purposes. This paper describes a new tool for model-based safety analysis. A formal model of an example system is generated and enriched with extended models. System safety analyses are then performed on the model with the assistance of automation tools and compared to the results of a manual analysis. The objective of this paper is to improve the development process of increasingly complex aircraft systems. The paper develops a new model-based analysis tool in the Simulink/Simscape environment.

  13. A Hierarchical Visualization Analysis Model of Power Big Data

    Science.gov (United States)

    Li, Yongjie; Wang, Zheng; Hao, Yang

    2018-01-01

    Based on the concept of integrating VR scenes with power big data analysis, a hierarchical visualization analysis model of power big data is proposed, in which levels are designed for different abstract modules such as transaction, engine, computation, control, and storage. The traditionally separate modules of power data storage, data mining and analysis, and data visualization are integrated into one platform by this model. It provides a visual analysis solution for power big data.

  14. Competing risk models in reliability systems, a Weibull distribution model with Bayesian analysis approach

    International Nuclear Information System (INIS)

    Iskandar, Ismed; Gondokaryono, Yudi Satria

    2016-01-01

    In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. A weakness of most reliability theories is that systems are described and explained as simply functioning or failed, whereas in many real situations failures may arise from many causes, depending on the age and the environment of the system and its components. Another problem in reliability theory is estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases. Bayesian estimation allows us to combine past knowledge or experience, in the form of an a priori distribution, with life test data to make inferences about the parameter of interest. In this paper, we have investigated the application of Bayesian estimation to competing-risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation was conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample sizes. The simulation data were analyzed using both Bayesian and maximum likelihood analyses. The simulation results show that a change in the true value of one parameter relative to another changes the value of the standard deviation in the opposite direction. Given good prior information, the Bayesian estimates are better than the maximum likelihood ones. The sensitivity analyses show some sensitivity to shifts of the prior locations, and they also show the robustness of the Bayesian analysis within the range considered.
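
    A small competing-risks sketch under the independence assumption stated above (two independent Weibull failure causes); here the cause-specific parameters are recovered by maximum likelihood rather than the paper's full Bayesian analysis.

    ```python
    # Two independent Weibull failure causes; the observed time is the minimum,
    # with the failing cause recorded. MLE recovers the (shape, scale) pairs.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(4)
    t1 = 10.0 * rng.weibull(1.5, 500)        # cause 1: shape 1.5, scale 10
    t2 = 12.0 * rng.weibull(2.5, 500)        # cause 2: shape 2.5, scale 12
    t = np.minimum(t1, t2)
    cause = (t2 < t1).astype(int)            # 0 -> cause 1 failed, 1 -> cause 2

    def nll(p):
        if np.any(np.asarray(p) <= 0):
            return np.inf                    # keep shapes/scales positive
        total = 0.0
        for j, (k, lam) in enumerate([(p[0], p[1]), (p[2], p[3])]):
            h = (k / lam) * (t / lam) ** (k - 1)   # cause-j hazard
            H = (t / lam) ** k                     # cause-j cumulative hazard
            total -= np.sum((cause == j) * np.log(h)) - np.sum(H)
        return total

    fit = minimize(nll, x0=[1.0, 5.0, 1.0, 5.0], method="Nelder-Mead")
    print(fit.x)   # approximately [1.5, 10, 2.5, 12]
    ```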

  15. Traffic analysis toolbox volume XI : weather and traffic analysis, modeling and simulation.

    Science.gov (United States)

    2010-12-01

    This document presents a weather module for the traffic analysis tools program. It provides traffic engineers, transportation modelers and decision-makers with a guide for incorporating weather impacts into transportation system analysis and modeling.

  16. Bayesian analysis of CCDM models

    Science.gov (United States)

    Jesus, J. F.; Valentim, R.; Andrade-Oliveira, F.

    2017-09-01

    Creation of Cold Dark Matter (CCDM), in the context of the Einstein field equations, produces a negative pressure term which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models of matter creation using statistical criteria, in light of SNe Ia data: the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the Bayesian Evidence (BE). These criteria allow models to be compared on goodness of fit and number of free parameters, penalizing excess complexity. We find that the JO model is slightly favoured over the LJO/ΛCDM model; however, neither of these, nor the Γ = 3αH₀ model, can be discarded by the current analysis. Three other scenarios are discarded, either because of poor fitting or because of an excess of free parameters. A method of increasing the Bayesian evidence through reparameterization, in order to reduce parameter degeneracy, is also developed.

  17. Bayesian analysis of CCDM models

    Energy Technology Data Exchange (ETDEWEB)

    Jesus, J.F. [Universidade Estadual Paulista (Unesp), Câmpus Experimental de Itapeva, Rua Geraldo Alckmin 519, Vila N. Sra. de Fátima, Itapeva, SP, 18409-010 Brazil (Brazil); Valentim, R. [Departamento de Física, Instituto de Ciências Ambientais, Químicas e Farmacêuticas—ICAQF, Universidade Federal de São Paulo (UNIFESP), Unidade José Alencar, Rua São Nicolau No. 210, Diadema, SP, 09913-030 Brazil (Brazil); Andrade-Oliveira, F., E-mail: jfjesus@itapeva.unesp.br, E-mail: valentim.rodolfo@unifesp.br, E-mail: felipe.oliveira@port.ac.uk [Institute of Cosmology and Gravitation—University of Portsmouth, Burnaby Road, Portsmouth, PO1 3FX United Kingdom (United Kingdom)

    2017-09-01

    Creation of Cold Dark Matter (CCDM), in the context of the Einstein field equations, produces a negative pressure term which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models of matter creation using statistical criteria, in light of SNe Ia data: the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC) and the Bayesian Evidence (BE). These criteria allow models to be compared on goodness of fit and number of free parameters, penalizing excess complexity. We find that the JO model is slightly favoured over the LJO/ΛCDM model; however, neither of these, nor the Γ = 3αH₀ model, can be discarded by the current analysis. Three other scenarios are discarded, either because of poor fitting or because of an excess of free parameters. A method of increasing the Bayesian evidence through reparameterization, in order to reduce parameter degeneracy, is also developed.
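
    For reference, the two information criteria named in both records above are simple functions of the maximized likelihood; the numbers below are hypothetical, not the paper's fits.

    ```python
    # AIC = 2k - 2 ln L and BIC = k ln n - 2 ln L; lower is better, and the
    # k-dependent terms penalize excess model complexity.
    import numpy as np

    def aic(lnL, k):
        return 2 * k - 2 * lnL

    def bic(lnL, k, n):
        return k * np.log(n) - 2 * lnL

    # Two hypothetical fits to n = 580 supernova-like data points:
    print(aic(-280.5, k=1), aic(-279.8, k=2))          # extra parameter costs 2
    print(bic(-280.5, k=1, n=580), bic(-279.8, k=2, n=580))
    ```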

  18. Chaotic convective behavior and stability analysis of a fractional viscoelastic fluids model in porous media

    KAUST Repository

    N'Doye, Ibrahima

    2015-05-25

    In this paper, a dynamical model of fractional viscoelastic fluid convection in porous media is proposed and its chaotic behavior is studied. An equilibrium-point analysis indicates the conditions under which chaotic dynamics can be observed, and it shows the existence of chaos. The behavior and stability analysis of the system at integer order, and at fractional commensurate and non-commensurate orders, where it exhibits chaos, are presented as well.

  19. Kinetic Analysis of 2-[11C]Thymidine PET Imaging Studies of Malignant Brain Tumors: Compartmental Model Investigation and Mathematical Analysis

    Directory of Open Access Journals (Sweden)

    Joanne M. Wells

    2002-07-01

    2-[11C]Thymidine (TdR), a PET tracer for cellular proliferation, may be advantageous for monitoring brain tumor progression and response to therapy. We previously described and validated a five-compartment model for thymidine incorporation into DNA in somatic tissues, but the effect of the blood–brain barrier on the transport of TdR and its metabolites necessitated further validation before it could be applied to brain tumors. Methods: We investigated the behavior of the model under conditions experienced in the normal brain and brain tumors, performed sensitivity and identifiability analysis to determine the ability of the model to estimate the model parameters, and conducted simulations to determine whether it can distinguish between thymidine transport and retention. Results: Sensitivity and identifiability analysis suggested that the non-CO2 metabolite parameters could be fixed without significantly affecting thymidine parameter estimation. Simulations showed that K1t and KTdR could be estimated accurately (r = .97 and .98 for estimated vs. true parameters) with standard errors < 15%. The model was able to separate increased transport from the increased retention associated with tumor proliferation. Conclusion: Our model adequately describes normal brain and brain tumor kinetics for thymidine and its metabolites, and it can provide an estimate of the rate of cellular proliferation in brain tumors.
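
    A reduced two-tissue compartment sketch (the paper's five-compartment thymidine model is more elaborate and is not reproduced here): an exchangeable tracer pool C1 and a retained pool C2 driven by an assumed plasma input Cp(t), with hypothetical rate constants.

    ```python
    # Two-tissue compartment model: C1' = K1*Cp - (k2+k3)*C1, C2' = k3*C1.
    import numpy as np
    from scipy.integrate import solve_ivp

    K1, k2, k3 = 0.1, 0.15, 0.05                # hypothetical rates [1/min]
    Cp = lambda t: t * np.exp(-t / 2.0)         # toy plasma input function

    def rhs(t, C):
        C1, C2 = C
        return [K1 * Cp(t) - (k2 + k3) * C1, k3 * C1]

    sol = solve_ivp(rhs, (0, 60), [0.0, 0.0], t_eval=np.linspace(0, 60, 121))
    flux = K1 * k3 / (k2 + k3)                  # net retention (Ki-style) constant
    print(flux, sol.y[:, -1])                   # flux and final pool contents
    ```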

  20. Energy Systems Modelling Research and Analysis

    DEFF Research Database (Denmark)

    Møller Andersen, Frits; Alberg Østergaard, Poul

    2015-01-01

    This editorial introduces the seventh volume of the International Journal of Sustainable Energy Planning and Management. The volume presents part of the outcome of the project Energy Systems Modelling Research and Analysis (ENSYMORA), funded by the Danish Innovation Fund. The project, carried out by 11 university and industry partners, has improved the basis for decision-making within energy planning and energy scenario making by providing new and improved tools and methods for energy systems analyses.

  1. Parametric Analysis of Flexible Logic Control Model

    Directory of Open Access Journals (Sweden)

    Lihua Fu

    2013-01-01

    Based on a deep analysis of the essential relation between the two input variables of a normal two-dimensional fuzzy controller, we used the universal combinatorial operation model to describe the logical relationship and gave a flexible logic control method to realize effective control of complex systems. In practical control applications, how to determine the general correlation coefficient of the flexible logic control model is a problem for further study. The conventional universal combinatorial operation model is limited to the interval [0,1]. Consequently, this paper studies a universal combinatorial operation model based on the interval [a,b], and some important theorems are given and proved, which provide a foundation for the flexible logic control method. To deal reasonably with the complex relations of all factors in a complex system, a universal combinatorial operation model with unequal weights is put forward. This paper then carries out a parametric analysis of the flexible logic control model, and some research results are given, which offer important guidance for determining the values of the general correlation coefficients in practical control applications.

  2. Models and Stability Analysis of Boiling Water Reactors

    Energy Technology Data Exchange (ETDEWEB)

    John Dorning

    2002-04-15

    We have studied the nuclear-coupled thermal-hydraulic stability of boiling water reactors (BWRs) using a model that includes: space-time modal neutron kinetics based on spatial ω-modes; single- and two-phase flow in parallel boiling channels; fuel rod heat conduction dynamics; and a simple model of the recirculation loop. The BWR model is represented by a set of time-dependent nonlinear ordinary differential equations and is studied as a dynamical system using modern bifurcation theory and nonlinear dynamical systems analysis. We first determine the stability boundary (SB), or Hopf bifurcation set, in the most relevant parameter plane, the inlet-subcooling-number/external-pressure-drop plane, for a fixed control-rod-induced external reactivity equal to the 100% rod line value; we then transform the SB to the practical power-flow map used by BWR operating engineers and regulatory agencies. Using this SB, we show that the normal operating point at 100% power is very stable, that the stability of points on the 100% rod line decreases as the flow rate is reduced, and that operating points in the low-flow/high-power region are least stable. We also determine the SB that results when the modal kinetics is replaced by simple point reactor kinetics, and we thereby show that the first harmonic mode does not have a significant effect on the SB. However, we later show that it nevertheless has a significant effect on stability because it affects the basin of attraction of stable operating points. Using numerical simulations we show that, in the important low-flow/high-power region, the Hopf bifurcation that occurs as the SB is crossed is subcritical; hence, growing oscillations can result following small finite perturbations of stable steady states on the 100% rod line at points in the low-flow/high-power region. Numerical simulations are also performed to calculate the decay ratios (DRs) and frequencies of oscillation for various points on the 100% rod line.

  3. Models and Stability Analysis of Boiling Water Reactors

    International Nuclear Information System (INIS)

    Dorning, John

    2002-01-01

    We have studied the nuclear-coupled thermal-hydraulic stability of boiling water reactors (BWRs) using a model that includes: space-time modal neutron kinetics based on spatial ω-modes; single- and two-phase flow in parallel boiling channels; fuel rod heat conduction dynamics; and a simple model of the recirculation loop. The BWR model is represented by a set of time-dependent nonlinear ordinary differential equations and is studied as a dynamical system using modern bifurcation theory and nonlinear dynamical systems analysis. We first determine the stability boundary (SB), or Hopf bifurcation set, in the most relevant parameter plane, the inlet-subcooling-number/external-pressure-drop plane, for a fixed control-rod-induced external reactivity equal to the 100% rod line value; we then transform the SB to the practical power-flow map used by BWR operating engineers and regulatory agencies. Using this SB, we show that the normal operating point at 100% power is very stable, that the stability of points on the 100% rod line decreases as the flow rate is reduced, and that operating points in the low-flow/high-power region are least stable. We also determine the SB that results when the modal kinetics is replaced by simple point reactor kinetics, and we thereby show that the first harmonic mode does not have a significant effect on the SB. However, we later show that it nevertheless has a significant effect on stability because it affects the basin of attraction of stable operating points. Using numerical simulations we show that, in the important low-flow/high-power region, the Hopf bifurcation that occurs as the SB is crossed is subcritical; hence, growing oscillations can result following small finite perturbations of stable steady states on the 100% rod line at points in the low-flow/high-power region. Numerical simulations are also performed to calculate the decay ratios (DRs) and frequencies of oscillation for various points on the 100% rod line.
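
    The stability-boundary computation described in the two records above can be illustrated numerically on a toy system (not the BWR model itself): sweep a parameter of a two-state oscillator and locate where the equilibrium's eigenvalues cross the imaginary axis, which is the Hopf point.

    ```python
    # Parameter sweep locating a Hopf bifurcation via linear stability.
    import numpy as np

    def jacobian(mu):
        # Hopf normal-form example: x' = mu*x - y - x*(x^2 + y^2),
        #                           y' = x + mu*y - y*(x^2 + y^2).
        # Jacobian at the origin:
        return np.array([[mu, -1.0],
                         [1.0,  mu]])

    for mu in np.linspace(-0.2, 0.2, 9):
        re = np.linalg.eigvals(jacobian(mu)).real.max()
        print(f"mu={mu:+.2f}  max Re(lambda)={re:+.3f}",
              "unstable" if re > 0 else "stable")
    # The sign change at mu = 0 marks the Hopf bifurcation set in this toy case.
    ```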

  4. Time-Dependent Global Sensitivity Analysis for Long-Term Degeneracy Model Using Polynomial Chaos

    Directory of Open Access Journals (Sweden)

    Jianbin Guo

    2014-07-01

    Global sensitivity analysis is generally used to quantify the influence of uncertain model inputs on the output variability of static models. However, very few approaches can be applied to the sensitivity analysis of long-term degeneracy models, as far as time-dependent reliability is concerned, because a static sensitivity may not reflect the complete sensitivity over the entire life cycle. This paper presents time-dependent global sensitivity analysis for long-term degeneracy models based on polynomial chaos expansion (PCE). Sobol' indices are employed as the time-dependent global sensitivity measure since they provide accurate information on the selected uncertain inputs. In order to compute Sobol' indices more efficiently, this paper proposes a moving least squares (MLS) method to obtain the time-dependent PCE coefficients with acceptable simulation effort. Sobol' indices can then be calculated analytically as a postprocessing of the time-dependent PCE coefficients at almost no additional cost. A test case shows how to conduct the proposed method, and the approach is then applied to an engineering case, obtaining the time-dependent global sensitivity for a long-term degeneracy mechanism model.
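
    A compact version of the PCE-to-Sobol idea, at a single time instant and with Legendre polynomials for two uniform inputs (the paper's moving-least-squares time dependence is not reproduced): fit the expansion by least squares, then read the Sobol' indices off the squared coefficients.

    ```python
    # Legendre PCE fit and analytic Sobol' indices for a toy 2-input model.
    import numpy as np
    from numpy.polynomial import legendre as L

    rng = np.random.default_rng(5)
    x = rng.uniform(-1, 1, (2000, 2))
    y = x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.2 * x[:, 0] * x[:, 1]

    # Basis: products of Legendre polynomials up to total degree 2.
    degs = [(0, 0), (1, 0), (0, 1), (2, 0), (0, 2), (1, 1)]
    def leg(n, t):
        return L.legval(t, [0] * n + [1])       # evaluates P_n(t)
    Phi = np.column_stack([leg(i, x[:, 0]) * leg(j, x[:, 1]) for i, j in degs])
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)

    # Variance of each basis term on U(-1,1): product of 1/(2n+1) factors.
    var_term = np.array([np.prod([1.0 / (2 * n + 1) for n in d]) for d in degs])
    V = c[1:] ** 2 * var_term[1:]               # drop the mean term
    total = V.sum()
    S1 = V[[0, 2]].sum() / total                # terms involving only x1
    S2 = V[[1, 3]].sum() / total                # terms involving only x2
    print(S1, S2, V[4] / total)                 # first-order and interaction
    ```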

  5. Towards Improving the Efficiency of Bayesian Model Averaging Analysis for Flow in Porous Media via the Probabilistic Collocation Method

    Directory of Open Access Journals (Sweden)

    Liang Xue

    2018-04-01

    The characterization of flow in subsurface porous media is associated with high uncertainty. To better quantify the uncertainty of groundwater systems, it is necessary to consider model uncertainty. Multi-model uncertainty analysis can be performed in the Bayesian model averaging (BMA) framework. However, BMA analysis via the Monte Carlo method is time-consuming because it requires many forward model evaluations. A computationally efficient BMA analysis framework is proposed here that uses the probabilistic collocation method to construct a response surface model, in which the log hydraulic conductivity field and the hydraulic head are expanded into polynomials through Karhunen-Loeve and polynomial chaos methods. A synthetic test is designed to validate the proposed response surface analysis method. The results show that the posterior model weights and the key statistics in the BMA framework can be accurately estimated: the relative errors of the mean and total variance in the BMA analysis results are only approximately 0.013% and 1.18%, while the proposed method can be 16 times more computationally efficient than the traditional BMA method.
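
    A hedged sketch of the BMA combination step described above: given each model's posterior weight (however obtained), mix the per-model means and variances into the BMA mean and total variance, which includes both within- and between-model contributions. The numbers are hypothetical.

    ```python
    # Bayesian model averaging of predictive means and variances.
    import numpy as np

    w = np.array([0.5, 0.3, 0.2])         # posterior model weights, sum to 1
    mu = np.array([1.00, 1.10, 0.90])     # per-model predictive means
    var = np.array([0.04, 0.05, 0.03])    # per-model predictive variances

    mean_bma = np.sum(w * mu)
    # Total variance = within-model variance + between-model spread.
    var_bma = np.sum(w * var) + np.sum(w * (mu - mean_bma) ** 2)
    print(mean_bma, var_bma)
    ```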

  6. One dimensional analysis model for condensation heat transfer in feed water heater

    International Nuclear Information System (INIS)

    Murase, Michio; Takamori, Kazuhide; Aihara, Tsuyoshi

    1998-01-01

    In order to simplify condensation heat transfer calculations for feed water heaters, one-dimensional (1D) analyses were compared with three-dimensional (3D) analyses. The results showed that the average condensation heat transfer coefficients from 1D analyses with 1/2 rows of heat transfer tubes agreed with those from 3D analyses within 7%. Using the 1D analysis model, the effects of the pitch of the heat transfer tubes were evaluated. The results showed that the pitch did not have a large effect on heat transfer rates and that the size of the heat transfer tube bundle could be decreased by using a small pitch. (author)

  7. Global thermal niche models of two European grasses show high invasion risks in Antarctica.

    Science.gov (United States)

    Pertierra, Luis R; Aragón, Pedro; Shaw, Justine D; Bergstrom, Dana M; Terauds, Aleks; Olalla-Tárraga, Miguel Ángel

    2017-07-01

    The two non-native grasses that have established long-term populations in Antarctica (Poa pratensis and Poa annua) were studied from a global multidimensional thermal niche perspective to address the biological invasion risk to Antarctica. These two species exhibit contrasting introduction histories and reproductive strategies and represent two referential case studies of biological invasion processes. We used a multistep process with a range of species distribution modelling techniques (ecological niche factor analysis, multidimensional envelopes, distance/entropy algorithms) together with a suite of thermoclimatic variables, to characterize the potential ranges of these species. Their native bioclimatic thermal envelopes in Eurasia, together with the different naturalized populations across continents, were then compared. The potential niche of P. pratensis was wider at the cold extremes; however, the life history attributes of P. annua enable it to be a more successful colonizer. We observe that particularly cold summers are a key aspect of the unique Antarctic environment. In consequence, ruderals such as P. annua can quickly expand under such harsh conditions, whereas the more stress-tolerant P. pratensis endures and persists through steady growth. Compiled data on human pressure at the Antarctic Peninsula allowed us to provide site-specific biosecurity risk indicators. We conclude that several areas across the region are vulnerable to invasions from these and other similar species. This can only be visualized in species distribution models (SDMs) when accounting for founder populations that reveal nonanalogous conditions. The results reinforce the need for strict management practices to minimize introductions. Furthermore, our novel set of temperature-based bioclimatic GIS layers for ice-free terrestrial Antarctica provides a mechanism for regional and global species distribution models to be built for other potentially invasive species. © 2017 John Wiley & Sons Ltd.

  8. GPU-powered model analysis with PySB/cupSODA.

    Science.gov (United States)

    Harris, Leonard A; Nobile, Marco S; Pino, James C; Lubbock, Alexander L R; Besozzi, Daniela; Mauri, Giancarlo; Cazzaniga, Paolo; Lopez, Carlos F

    2017-11-01

    A major barrier to the practical utilization of large, complex models of biochemical systems is the lack of open-source computational tools to evaluate model behaviors over high-dimensional parameter spaces. This is due to the high computational expense of performing thousands to millions of model simulations required for statistical analysis. To address this need, we have implemented a user-friendly interface between cupSODA, a GPU-powered kinetic simulator, and PySB, a Python-based modeling and simulation framework. For three example models of varying size, we show that for large numbers of simulations PySB/cupSODA achieves order-of-magnitude speedups relative to a CPU-based ordinary differential equation integrator. The PySB/cupSODA interface has been integrated into the PySB modeling framework (version 1.4.0), which can be installed from the Python Package Index (PyPI) using a Python package manager such as pip. cupSODA source code and precompiled binaries (Linux, Mac OS/X, Windows) are available at github.com/aresio/cupSODA (requires an Nvidia GPU; developer.nvidia.com/cuda-gpus). Additional information about PySB is available at pysb.org. paolo.cazzaniga@unibg.it or c.lopez@vanderbilt.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
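
    A minimal PySB model and ODE simulation to show the workflow; swapping ScipyOdeSimulator for pysb.simulator.CupSodaSimulator is the GPU path the paper describes (that path requires an Nvidia GPU and the cupSODA binaries). The toy degradation model here is illustrative, not one of the paper's three example models.

    ```python
    # One-species degradation model built and simulated with PySB.
    import numpy as np
    from pysb import Model, Monomer, Parameter, Initial, Rule, Observable
    from pysb.simulator import ScipyOdeSimulator

    Model()                                   # PySB self-exports 'model' etc.
    Monomer('A')
    Parameter('k_deg', 1e-2)
    Parameter('A_0', 100)
    Initial(A(), A_0)
    Rule('A_degrades', A() >> None, k_deg)    # first-order degradation
    Observable('A_total', A())

    tspan = np.linspace(0, 500, 101)
    result = ScipyOdeSimulator(model, tspan=tspan).run()
    print(result.observables['A_total'][-1])  # ~ A_0 * exp(-k_deg * t_end)
    ```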

  9. International Space Station Model Correlation Analysis

    Science.gov (United States)

    Laible, Michael R.; Fitzpatrick, Kristin; Hodge, Jennifer; Grygier, Michael

    2018-01-01

    This paper summarizes the on-orbit structural dynamic data and the related modal analysis, model validation and correlation performed for the International Space Station (ISS) configuration Stage ULF7, 2015 Dedicated Thruster Firing (DTF). The objective of this analysis is to validate and correlate the analytical models used to calculate the ISS internal dynamic loads, and to compare the 2015 DTF with previous tests. For the ISS configurations under consideration, on-orbit dynamic measurements were collected using the three main ISS instrumentation systems: the Internal Wireless Instrumentation System (IWIS), the External Wireless Instrumentation System (EWIS) and the Structural Dynamic Measurement System (SDMS). The measurements were recorded during several nominal on-orbit DTF tests on August 18, 2015. Experimental modal analyses were performed on the measured data to extract modal parameters, including frequency, damping, and mode shape information. Correlations and comparisons between test and analytical frequencies and mode shapes were performed to assess the accuracy of the analytical models for the configurations under consideration; these mode shapes were also compared to earlier tests. Based on the frequency comparisons, the accuracy of the mathematical models is assessed and model refinement recommendations are given. In particular, results for the first fundamental mode are discussed, nonlinear results are shown, and accelerometer placement is assessed.

  10. Survival analysis of clinical mastitis data using a nested frailty Cox model fit as a mixed-effects Poisson model.

    Science.gov (United States)

    Elghafghuf, Adel; Dufour, Simon; Reyher, Kristen; Dohoo, Ian; Stryhn, Henrik

    2014-12-01

    Mastitis is a complex disease affecting dairy cows and is considered to be the most costly disease of dairy herds. The hazard of mastitis is a function of many factors, both managerial and environmental, making its control a difficult issue for milk producers. Observational studies of clinical mastitis (CM) often generate datasets with a number of characteristics which influence the analysis of those data: the outcome of interest may be the time to occurrence of a case of mastitis, predictors may change over time (time-dependent predictors), the effects of factors may change over time (time-dependent effects), there are usually multiple hierarchical levels, and datasets may be very large. Analysis of such data often requires expansion of the data into the counting-process format, leading to larger datasets, thus complicating the analysis and requiring excessive computing time. In this study, a nested frailty Cox model with time-dependent predictors and effects was applied to Canadian Bovine Mastitis Research Network data in which 10,831 lactations of 8035 cows from 69 herds were followed through lactation until the first occurrence of CM. The model was fit to the data as a Poisson model with nested normally distributed random effects at the cow and herd levels. Risk factors associated with the hazard of CM during lactation were identified, such as parity, calving season, herd somatic cell score, pasture access, fore-stripping, and the proportion of treated cases of CM in a herd. The analysis showed that most of the predictors had a strong effect early in lactation and also demonstrated substantial variation in the baseline hazard among cows and between herds. A small simulation study for a setting similar to the real data was conducted to evaluate the Poisson maximum likelihood estimation approach with both the Gaussian quadrature method and the Laplace approximation. Further, the performance of the two methods was compared with that of a widely used estimation approach.

  11. Dynamical analysis of a PWR internals using super-elements in an integrated 3-D model. Part 2: dynamical tests and seismic analysis

    International Nuclear Information System (INIS)

    Jesus Miranda, C.A. de.

    1992-01-01

    The results of the test analysis (frequencies) for the isolated super-elements and for the developed 3-D model of the internal core support structures of a PWR research reactor are presented. Once the effectiveness of the model for this type of analysis was confirmed, a seismic spectral analysis was performed. The results show that the structures are rigid under this load, whether isolated or together with the others in the 3-D model, and that there are no impacts among them during the earthquake (OBE). (author)

  12. Evaluation of RCAS Inflow Models for Wind Turbine Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Tangler, J.; Bir, G.

    2004-02-01

    The finite element structural modeling in the Rotorcraft Comprehensive Analysis System (RCAS) provides a state-of-the-art approach to aeroelastic analysis. This, coupled with its ability to model all turbine components, results in a methodology that can simulate the complex system interactions characteristic of large wind turbines. In addition, RCAS is uniquely capable of modeling advanced control algorithms and the resulting dynamic responses.

  13. Topic Modeling in Sentiment Analysis: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Toqir Ahmad Rana

    2016-06-01

    With the expansion and acceptance of the World Wide Web, sentiment analysis has become an increasingly popular research area in information retrieval and web data analysis. Owing to the huge amount of user-generated content on blogs, forums, social media, etc., sentiment analysis has attracted researchers in both academia and industry, since it deals with the extraction of opinions and sentiments. In this paper, we present a review of topic modeling, especially LDA-based techniques, in sentiment analysis. We present a detailed analysis of diverse approaches and techniques and compare the accuracy of the different systems. The results of the different approaches are summarized, analyzed and presented in a structured fashion. This is a substantial effort to explore different topic modeling techniques for sentiment analysis and to provide a comprehensive comparison among them.
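
    A minimal LDA pass over toy review snippets with scikit-learn, the kind of building block the surveyed LDA-based sentiment systems share (real systems couple the topics with sentiment lexicons or labels, which is not shown here):

    ```python
    # Bag-of-words + LDA: recover two latent topics from four tiny documents.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = ["battery life is great", "battery drains too fast",
            "screen is bright and sharp", "screen cracked, awful quality"]
    vec = CountVectorizer(stop_words='english').fit(docs)
    dtm = vec.transform(docs)                       # document-term matrix

    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)
    terms = vec.get_feature_names_out()
    for k, comp in enumerate(lda.components_):
        top = [terms[i] for i in comp.argsort()[-3:][::-1]]
        print(f"topic {k}: {top}")                  # top words per topic
    ```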

  14. Rigid-body-spring model numerical analysis of joint performance of engineered cementitious composites and concrete

    Science.gov (United States)

    Khmurovska, Y.; Štemberk, P.; Křístek, V.

    2017-09-01

    This paper presents a numerical investigation of the effectiveness of using engineered cementitious composites with polyvinyl alcohol fibers for concrete cover-layer repair. A numerical model of a monolithic concave L-shaped concrete structural detail, strengthened with a layer of engineered cementitious composite with polyvinyl alcohol fibers, is created and loaded in bending. The numerical analysis employs a nonlinear 3-D rigid-body-spring model. The proposed material model shows reliable results and can be used in further studies. The engineered cementitious composite performs extremely well in tension due to the strain-hardening effect. Since the durability of the bond can be decreased significantly by degradation under thermal loading, this effect should also be taken into account in future work, together with the experimental investigation needed to validate the proposed numerical model.

  15. Finite element analysis of a model scale footing on clean and oil contaminated sand

    International Nuclear Information System (INIS)

    Evgin, E.; Boulon, M.; Das, B.M.

    1995-01-01

    The effects of oil contamination on the behavior of a model-scale footing are determined. Tests were carried out with both clean and oil-contaminated sand. The data show that the bearing capacity of the footing is reduced significantly as a result of oil contamination. A finite element analysis was performed to calculate the bearing capacity of the footing, and the results are compared with the experimental data. The significance of using an interface element in the analysis is discussed.

  16. Analytic uncertainty and sensitivity analysis of models with input correlations

    Science.gov (United States)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

    Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in the input variables is specified through distribution laws, and its contribution to the uncertainty in the model response is usually analyzed by assuming that the input variables are independent of each other. However, correlated parameters are often encountered in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With this method it is straightforward to identify the importance of the independence and the correlations of the input variables in determining the model response, which allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of the analytic method for general models, and a practical application of the method to the uncertainty and sensitivity analysis of a deterministic HIV model is also proposed.
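
    The linear special case of the idea above makes the correlation contribution visible: for y = aᵀx with correlated inputs x ~ N(m, S), Var(y) = aᵀSa exactly, and a Monte Carlo check confirms it. The coefficients and covariance below are hypothetical.

    ```python
    # Analytic output variance under input correlation vs. Monte Carlo check.
    import numpy as np

    a = np.array([1.0, 2.0])              # linear model coefficients
    S = np.array([[1.0, 0.6],
                  [0.6, 0.5]])            # input covariance with correlation

    analytic = a @ S @ a                  # = 1 + 2*(1*2*0.6) + 4*0.5 = 5.4

    rng = np.random.default_rng(6)
    x = rng.multivariate_normal([0.0, 0.0], S, size=200_000)
    print(analytic, (x @ a).var())        # the two values should nearly match
    ```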

  17. Two sustainable energy system analysis models

    DEFF Research Database (Denmark)

    Lund, Henrik; Goran Krajacic, Neven Duic; da Graca Carvalho, Maria

    2005-01-01

    This paper presents a comparative study of two energy system analysis models both designed with the purpose of analysing electricity systems with a substantial share of fluctuating renewable energy....

  18. Analysis on reduced chemical kinetic model of N-heptane for HCCI combustion. Paper no. IGEC-1-072

    International Nuclear Information System (INIS)

    Yao, M.; Zheng, Z.

    2005-01-01

    Because of their high complexity coupled with multidimensional fluid dynamics, it is difficult to apply detailed chemical kinetic models to simulations of practical engines. A reduced model of n-heptane has been developed on the basis of a detailed mechanism, using sensitivity analysis and reaction path analysis of every stage of combustion. The new reduced mechanism consists of 35 species and 41 reactions, and it is effective under engine conditions. The results show that it gives predictions similar to the detailed model for ignition timing, in-cylinder temperature and pressure. Furthermore, the reduced mechanism simulates the boundary condition of partial combustion in good agreement with the detailed mechanism. (author)

  19. A sensitivity analysis of the WIPP disposal room model: Phase 1

    Energy Technology Data Exchange (ETDEWEB)

    Labreche, D.A.; Beikmann, M.A. [RE/SPEC, Inc., Albuquerque, NM (United States); Osnes, J.D. [RE/SPEC, Inc., Rapid City, SD (United States); Butcher, B.M. [Sandia National Labs., Albuquerque, NM (United States)

    1995-07-01

    The WIPP Disposal Room Model (DRM) is a numerical model with three major components -- constitutive models of TRU waste, crushed salt backfill, and intact halite -- and several secondary components, including air gap elements, slidelines, and assumptions on symmetry and geometry. A sensitivity analysis of the Disposal Room Model was initiated on two of the three major components (waste and backfill models) and on several secondary components as a group. The immediate goal of this component sensitivity analysis (Phase I) was to sort (rank) model parameters in terms of their relative importance to model response so that a Monte Carlo analysis on a reduced set of DRM parameters could be performed under Phase II. The goal of the Phase II analysis will be to develop a probabilistic definition of a disposal room porosity surface (porosity, gas volume, time) that could be used in WIPP Performance Assessment analyses. This report documents a literature survey which quantifies the relative importance of the secondary room components to room closure, a differential analysis of the creep consolidation model and definition of a follow-up Monte Carlo analysis of the model, and an analysis and refitting of the waste component data on which a volumetric plasticity model of TRU drum waste is based. A summary, evaluation of progress, and recommendations for future work conclude the report.

  20. Personalization of models with many model parameters : an efficient sensitivity analysis approach

    NARCIS (Netherlands)

    Donders, W.P.; Huberts, W.; van de Vosse, F.N.; Delhaas, T.

    2015-01-01

    Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of

  1. Credit Risk Evaluation : Modeling - Analysis - Management

    OpenAIRE

    Wehrspohn, Uwe

    2002-01-01

    An analysis and further development of the building blocks of modern credit risk management: -Definitions of default -Estimation of default probabilities -Exposures -Recovery Rates -Pricing -Concepts of portfolio dependence -Time horizons for risk calculations -Quantification of portfolio risk -Estimation of risk measures -Portfolio analysis and portfolio improvement -Evaluation and comparison of credit risk models -Analytic portfolio loss distributions. The thesis contributes to the evaluation...

  2. Multi model and data analysis of terrestrial carbon cycle in Asia: From 2001 to 2006

    Science.gov (United States)

    Ichii, K.; Takahashi, K.; Suzuki, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.

    2009-12-01

    Accurate monitoring and modeling of the current status of the terrestrial carbon cycle, and of the causes of its interannual variations, are important. Recently, many studies have used multiple methods (e.g. satellite data and ecosystem models) to clarify the underlying mechanisms and recent trends, since each single methodology contains its own biases. The multi-model and data ensemble approach is a powerful method to clarify the current status and the underlying mechanisms. So far, many studies using multiple sources of data and models have been conducted in North America, Europe, Africa, Amazonia, and Japan; however, studies in monsoon Asia are lacking. In this study, we analyzed interannual variations in terrestrial carbon cycles in monsoon Asia, and evaluated the current capability of remote sensing and ecosystem models to capture them based on multiple model and data sources: flux observations, remote sensing (e.g. MODIS, AVHRR, and VGT), and ecosystem models (e.g. SVM, BEAMS, CASA, Biome-BGC, LPJ, and TRIFFID). The satellite observations and ecosystem models show clear characteristics in the interannual variability of satellite-based NDVI and model-based GPP. These are characterized by (1) spring NDVI and modeled GPP anomalies related to temperature anomalies in mid- and high-latitude areas (positive anomalies in 2002 and 2005 and a negative one in 2006), (2) NDVI and GPP anomalies in southeastern and central Asia related to precipitation (e.g. India from 2003-2006), and (3) summer NDVI and GPP anomalies in 2003 related to strong anomalies in solar radiation. NDVI anomalies related to radiation anomalies (summer 2003) were not accurately captured by the terrestrial ecosystem models. For example, the LPJ model instead shows positive GPP anomalies in Far East Asia, probably caused by positive precipitation anomalies. Further analysis requires improvement of the models to reproduce more consistent spatial patterns in NDVI anomalies, and longer-term analysis (e.g. after 1982).

  3. Modeling and analysis of cell membrane systems with probabilistic model checking

    Science.gov (United States)

    2011-01-01

    Background Recently there has been a growing interest in the application of Probabilistic Model Checking (PMC) for the formal specification of biological systems. PMC is able to exhaustively explore all states of a stochastic model and can provide valuable insights into its behavior that are more difficult to obtain using only traditional methods for system analysis such as deterministic and stochastic simulation. In this work we propose a stochastic model for the description and analysis of the sodium-potassium exchange pump. The sodium-potassium pump is a membrane transport system present in all animal cells and capable of moving sodium and potassium ions against their concentration gradients. Results We present a quantitative formal specification of the pump mechanism in the PRISM language, taking into consideration a discrete chemistry approach and the Law of Mass Action. We also present an analysis of the system using quantitative properties in order to verify the pump's reversibility and understand the pump's behavior using trend labels for the transition rates of the pump reactions. Conclusions Probabilistic model checking can be used along with other well-established approaches such as simulation and differential equations to better understand pump behavior. Using PMC we can determine whether specific events happen, such as whether the potassium outside the cell is depleted in all model traces. We can also gain a more detailed perspective on its behavior, such as determining its reversibility and why its normal operation becomes slow over time. This knowledge can be used to direct experimental research and make it more efficient, leading to faster and more accurate scientific discoveries. PMID:22369714
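
    As a rough companion to this record, the "discrete chemistry" viewpoint can be illustrated with a Gillespie-type stochastic simulation of a toy reversible exchange reaction. This is a generic sketch, not the paper's PRISM specification of the pump; the species and rate constants are hypothetical:

```python
import numpy as np

# Gillespie stochastic simulation of a toy reversible exchange A <-> B,
# obeying the Law of Mass Action with discrete molecule counts.
rng = np.random.default_rng(3)
k_f, k_r = 1.0, 0.5            # hypothetical forward/reverse rate constants
a, b, t, t_end = 100, 0, 0.0, 20.0
trace = [(t, a, b)]
while t < t_end:
    rates = np.array([k_f * a, k_r * b])   # propensities of the two reactions
    total = rates.sum()
    if total == 0:
        break
    t += rng.exponential(1.0 / total)      # time to the next reaction event
    if rng.random() < rates[0] / total:
        a, b = a - 1, b + 1                # forward exchange event
    else:
        a, b = a + 1, b - 1                # reverse exchange event
    trace.append((t, a, b))
# The counts fluctuate around the mass-action equilibrium ratio a/b = k_r/k_f
```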

  4. Modelling, simulation and dynamic analysis of the time delay model of the recuperative heat exchanger

    Directory of Open Access Journals (Sweden)

    Debeljković Dragutin Lj.

    2016-01-01

    The heat exchangers are frequently used as constructive elements in various plants and their dynamics is very important. Their operation is usually controlled by manipulating inlet fluid temperatures or mass flow rates. On the basis of the accepted and critically clarified assumptions, a linearized mathematical model of the cross-flow heat exchanger has been derived, taking into account the wall dynamics. The model is based on the fundamental law of energy conservation, covers all heat accumulation storages in the process, and leads to a set of partial differential equations (PDEs) whose solution is not possible in closed form. In order to overcome this problem, an approach based on physical discretization was applied, with an associated time delay at the positions where it was necessary and unavoidable. This is quite a new approach, which represents a further extension of previous results that did not include the significant time delay existing in the working media. Simulation results were derived, showing progress in building a model suitable for further treatment from the standpoint of analysis as well as the needs of the control synthesis problem.
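
    To make the modeling idea concrete - lumped (physically discretized) cells plus a pure transport delay - the following sketch propagates an inlet step change through a chain of well-mixed cells with a dead time on the inlet. It is a minimal illustration with hypothetical parameters, not the paper's cross-flow exchanger model:

```python
import numpy as np

# Lumped single-pass exchanger: N well-mixed cells in series for the fluid,
# plus a pure transport (time) delay on the inlet disturbance.
N, dt, t_end = 10, 0.05, 60.0      # cells, time step [s], horizon [s]
tau_cell, delay = 2.0, 5.0         # cell time constant [s], dead time [s]
T_wall, k = 80.0, 0.3              # constant wall temp [degC] (simplification),
                                   # wall-to-fluid exchange coefficient [1/s]
d = int(delay / dt)                # dead time expressed in time steps
T = np.full(N, 20.0)               # initial fluid temperature in each cell
inlet_hist = [20.0] * (d + 1)      # buffer realizing the transport delay
outlet = []
for n in range(int(t_end / dt)):
    inlet_hist.append(60.0 if n * dt > 1.0 else 20.0)  # inlet step at t = 1 s
    T_in = inlet_hist[-1 - d]                          # delayed inlet value
    upstream = np.concatenate(([T_in], T[:-1]))
    # advection between cells + heat exchange with the wall
    T = T + dt * ((upstream - T) / tau_cell + k * (T_wall - T))
    outlet.append(T[-1])
# 'outlet' shows the delayed, smoothed step response typical of such exchangers
```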

  5. Sensitivity analysis technique for application to deterministic models

    International Nuclear Information System (INIS)

    Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.

    1987-01-01

    The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize an RSM, but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method.

  6. Dynamic data analysis modeling data with differential equations

    CERN Document Server

    Ramsay, James

    2017-01-01

    This text focuses on the use of smoothing methods for developing and estimating differential equations following recent developments in functional data analysis and building on techniques described in Ramsay and Silverman (2005) Functional Data Analysis. The central concept of a dynamical system as a buffer that translates sudden changes in input into smooth controlled output responses has led to applications of previously analyzed data, opening up entirely new opportunities for dynamical systems. The technical level has been kept low so that those with little or no exposure to differential equations as modeling objects can be brought into this data analysis landscape. There are already many texts on the mathematical properties of ordinary differential equations, or dynamic models, and there is a large literature distributed over many fields on models for real world processes consisting of differential equations. However, a researcher interested in fitting such a model to data, or a statistician interested in...

  7. Program impact pathway analysis of a social franchise model shows potential to improve infant and young child feeding practices in Vietnam.

    Science.gov (United States)

    Nguyen, Phuong H; Menon, Purnima; Keithly, Sarah C; Kim, Sunny S; Hajeebhoy, Nemat; Tran, Lan M; Ruel, Marie T; Rawat, Rahul

    2014-10-01

    By mapping the mechanisms through which interventions are expected to achieve impact, program impact pathway (PIP) analysis lays out the theoretical causal links between program activities, outcomes, and impacts. This study examines the pathways through which the Alive & Thrive (A&T) social franchise model is intended to improve infant and young child feeding (IYCF) practices in Vietnam. Mixed methods were used, including qualitative interviews with franchise management board members (n = 12), surveys with health providers (n = 120), counseling observations (n = 160), and household surveys (n = 2045). Six PIP components were assessed: 1) franchise management, 2) training and IYCF knowledge of health providers, 3) service delivery, 4) program exposure and utilization, 5) maternal behavioral determinants (knowledge, beliefs, and intentions) toward optimal IYCF practices, and 6) IYCF practices. Data were collected from A&T-intensive areas (A&T-I; mass media + social franchise) and A&T-nonintensive areas (A&T-NI; mass media only) by using a cluster-randomized controlled trial design. Data from 2013 were compared with baseline where similar measures were available. Results indicate that mechanisms are in place for effective management of the franchise system, despite challenges to routine monitoring. A&T training was associated with increased capacity of providers, resulting in higher-quality IYCF counseling (greater technical knowledge and communication skills during counseling) in A&T-I areas. Franchise utilization increased from 10% in 2012 to 45% in 2013 but fell below the expected frequency of 9-15 contacts per mother-child dyad. Improvements in breastfeeding knowledge, beliefs, intentions, and practices were greater among mothers in A&T-I areas than among those in A&T-NI areas. In conclusion, there are many positive changes along the impact pathway of the franchise services, but challenges in utilization and demand creation should be addressed to achieve the full potential of the program.

  8. Spectral analysis of surface waves method to assess shear wave velocity within centrifuge models

    OpenAIRE

    MURILLO, Carol Andrea; THOREL, Luc; CAICEDO, Bernardo

    2009-01-01

    The method of the spectral analysis of surface waves (SASW) is tested out on reduced scale centrifuge models, with a specific device, called the mini Falling Weight, developed for this purpose. Tests are performed on layered materials made of a mixture of sand and clay. The shear wave velocity VS determined within the models using the SASW is compared with the laboratory measurements carried out using the bender element test. The results show that the SASW technique applied to centrifuge test...

  9. A mathematical model of the hypothalamo-pituitary-adrenocortical system and its stability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Savic, Danka [Vinca Institute of Nuclear Sciences, Laboratory for Theoretical and Condensed Matter Physics, P.O. Box 522, Belgrade 11001 (Serbia and Montenegro)] e-mail: dankasav@eunet.yu; Jelic, Smiljana [Vinca Institute of Nuclear Sciences, Laboratory for Theoretical and Condensed Matter Physics, P.O. Box 522, Belgrade 11001 (Serbia and Montenegro)

    2005-10-01

    It is commonly assumed that the hypothalamo-pituitary-adrenocortical (HPA) axis generates oscillations, because a regular daily rhythm of its component hormones is observed. We offer another plausible explanation of the origin of its circadian oscillations: the HPA axis just responds to an independent external pacemaker (the suprachiasmatic nucleus, SCN). Five versions (with and without time delay) of a qualitative non-phenomenological mathematical model of the HPA axis as a feedback mechanism are constructed, wherein all the terms in the equations are introduced according to the rules of chemical kinetics, i.e. are physicochemically interpretable. The dynamics of the HPA axis model was examined using linear stability analysis. The results show stability of this system, meaning that it does not generate diurnal oscillations. Computer simulation based on this model shows oscillations that are the system's response to an external pulsing activator (the SCN), implying that the observed time-periodic pattern does not have to be an intrinsic property of the HPA axis.

  10. A mathematical model of the hypothalamo-pituitary-adrenocortical system and its stability analysis

    International Nuclear Information System (INIS)

    Savic, Danka; Jelic, Smiljana

    2005-01-01

    It is commonly assumed that the hypothalamo-pituitary-adrenocortical (HPA) axis generates oscillations, because a regular daily rhythm of its component hormones is observed. We offer another plausible explanation of the origin of its circadian oscillations: the HPA axis just responds to an independent external pacemaker (the suprachiasmatic nucleus, SCN). Five versions (with and without time delay) of a qualitative non-phenomenological mathematical model of the HPA axis as a feedback mechanism are constructed, wherein all the terms in the equations are introduced according to the rules of chemical kinetics, i.e. are physicochemically interpretable. The dynamics of the HPA axis model was examined using linear stability analysis. The results show stability of this system, meaning that it does not generate diurnal oscillations. Computer simulation based on this model shows oscillations that are the system's response to an external pulsing activator (the SCN), implying that the observed time-periodic pattern does not have to be an intrinsic property of the HPA axis.
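
    The linear stability test described in these two records follows a generic recipe: find the equilibrium of the feedback model, form the Jacobian there, and check the signs of the real parts of its eigenvalues. The sketch below applies it to an illustrative three-stage negative-feedback hormone loop with hypothetical rates, not the authors' HPA equations:

```python
import numpy as np
from scipy.optimize import fsolve

# Illustrative three-stage negative-feedback loop (a stand-in for
# CRH -> ACTH -> cortisol, with cortisol inhibiting the first stage).
a, b, c, d, e, f, h = 1.0, 0.5, 1.0, 0.5, 1.0, 0.5, 2.0  # hypothetical rates

def rhs(u):
    x, y, z = u
    return [a / (1 + z**h) - b * x, c * x - d * y, e * y - f * z]

eq = fsolve(rhs, [1.0, 1.0, 1.0])          # equilibrium point

def jacobian(u, eps=1e-6):                 # numerical Jacobian at u
    J = np.zeros((3, 3))
    f0 = np.array(rhs(u))
    for j in range(3):
        up = u.copy(); up[j] += eps
        J[:, j] = (np.array(rhs(up)) - f0) / eps
    return J

ev = np.linalg.eigvals(jacobian(eq))
print("stable" if np.all(ev.real < 0) else "unstable", ev)
```

    With these illustrative values all eigenvalues have negative real parts: the loop is stable and cannot sustain oscillations on its own, which mirrors the qualitative conclusion of the records above - any daily rhythm must be driven by an external pacemaker.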

  11. Three-dimensional model analysis and processing

    CERN Document Server

    Yu, Faxin; Luo, Hao; Wang, Pinghui

    2011-01-01

    This book focuses on five hot research directions in 3D model analysis and processing in computer science:  compression, feature extraction, content-based retrieval, irreversible watermarking and reversible watermarking.

  12. Aircraft/Air Traffic Management Functional Analysis Model: Technical Description. 2.0

    Science.gov (United States)

    Etheridge, Melvin; Plugge, Joana; Retina, Nusrat

    1998-01-01

    The Aircraft/Air Traffic Management Functional Analysis Model, Version 2.0 (FAM 2.0), is a discrete event simulation model designed to support analysis of alternative concepts in air traffic management and control. FAM 2.0 was developed by the Logistics Management Institute (LMI) under a National Aeronautics and Space Administration (NASA) contract. This document provides a technical description of FAM 2.0 and its computer files to enable the modeler and programmer to make enhancements or modifications to the model. Those interested in a guide for using the model in analysis should consult the companion document, Aircraft/Air Traffic Management Functional Analysis Model, Version 2.0 Users Manual.

  13. Study on the systematic approach of Markov modeling for dependability analysis of complex fault-tolerant features with voting logics

    International Nuclear Information System (INIS)

    Son, Kwang Seop; Kim, Dong Hoon; Kim, Chang Hwoi; Kang, Hyun Gook

    2016-01-01

    The Markov analysis is a technique for modeling system state transitions and calculating the probability of reaching various system states. While it is a proper tool for modeling complex system designs involving timing, sequencing, repair, redundancy, and fault tolerance, as the complexity or size of the system increases, so does the number of states of interest, leading to difficulty in constructing and solving the Markov model. This paper introduces a systematic approach of Markov modeling to analyze the dependability of a complex fault-tolerant system. The method is based on the decomposition of the system into independent subsystem sets, and on the system-level failure and unavailability rates for the decomposed subsystems. A Markov model for the target system is easily constructed using the system-level failure and unavailability rates for the subsystems, which can be treated separately. This approach can decrease the number of states to consider simultaneously in the target system by building Markov models of the independent subsystems stage by stage, and it results in an exact solution for the Markov model of the whole target system. To apply this method we construct a Markov model for the reactor protection system found in nuclear power plants, a system configured with four identical channels and various fault-tolerant architectures. The results show that the proposed method treats the complex architecture of the system in an efficient manner, using the merits of the Markov model such as time-dependent analysis and sequential process analysis.
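
    A minimal sketch of the building blocks involved (not the paper's reactor protection system model): one two-state repairable-channel Markov model solved on its own, and a combination of the resulting channel unavailability across a 2-out-of-4 voting logic. All rates are hypothetical:

```python
import numpy as np
from math import comb
from scipy.linalg import expm

lam, mu = 1e-4, 1e-1                     # hypothetical failure/repair rates [1/h]

# Two-state (0 = up, 1 = down) continuous-time Markov chain for one channel
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])
p_t = np.array([1.0, 0.0]) @ expm(Q * 1000.0)  # state probabilities at t = 1000 h
U = lam / (lam + mu)                           # steady-state channel unavailability

# System-level combination for a 2-out-of-4 voting logic: the system works
# if at least 2 of 4 independent, identical channels are up.
A = 1.0 - U
U_sys = sum(comb(4, k) * A**k * U**(4 - k) for k in range(2))
print(p_t, U, U_sys)
```

    Treating each subsystem separately and combining the closed-form results, as above, is what keeps the state space small; the paper's contribution is doing this decomposition systematically for more complex fault-tolerant architectures.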

  14. Cortical surface-based analysis reduces bias and variance in kinetic modeling of brain PET data

    DEFF Research Database (Denmark)

    Greve, Douglas N; Svarer, Claus; Fisher, Patrick M

    2014-01-01

    Exploratory (i.e., voxelwise) spatial methods are commonly used in neuroimaging to identify areas that show an effect when a region-of-interest (ROI) analysis cannot be performed because no strong a priori anatomical hypothesis exists. However, noise at a single voxel is much higher than noise in a ROI, making noise management critical to successful exploratory analysis. This work explores how preprocessing choices affect the bias and variability of voxelwise kinetic modeling analysis of brain positron emission tomography (PET) data. These choices include the use of volume- or cortical surface-based preprocessing...

  15. Model-based analysis and control of a network of basal ganglia spiking neurons in the normal and Parkinsonian states

    Science.gov (United States)

    Liu, Jianbo; Khalil, Hassan K.; Oweiss, Karim G.

    2011-08-01

    Controlling the spatiotemporal firing pattern of an intricately connected network of neurons through microstimulation is highly desirable in many applications. We investigated in this paper the feasibility of using a model-based approach to the analysis and control of a basal ganglia (BG) network model of Hodgkin-Huxley (HH) spiking neurons through microstimulation. Detailed analysis of this network model suggests that it can reproduce the experimentally observed characteristics of BG neurons under a normal and a pathological Parkinsonian state. A simplified neuronal firing rate model, identified from the detailed HH network model, is shown to capture the essential network dynamics. Mathematical analysis of the simplified model reveals the presence of a systematic relationship between the network's structure and its dynamic response to spatiotemporally patterned microstimulation. We show that both the network synaptic organization and the local mechanism of microstimulation can impose tight constraints on the possible spatiotemporal firing patterns that can be generated by the microstimulated network, which may hinder the effectiveness of microstimulation to achieve a desired objective under certain conditions. Finally, we demonstrate that the feedback control design aided by the mathematical analysis of the simplified model is indeed effective in driving the BG network in the normal and Parkinsonian states to follow a prescribed spatiotemporal firing pattern. We further show that the rhythmic/oscillatory patterns that characterize a dopamine-depleted BG network can be suppressed as a direct consequence of controlling the spatiotemporal pattern of a subpopulation of the output Globus Pallidus internalis (GPi) neurons in the network. This work may provide plausible explanations for the mechanisms underlying the therapeutic effects of deep brain stimulation (DBS) in Parkinson's disease and pave the way towards a model-based, network-level analysis and closed-loop control.

  16. Global analysis of dynamical decision-making models through local computation around the hidden saddle.

    Directory of Open Access Journals (Sweden)

    Laura Trotta

    Bistable dynamical switches are frequently encountered in mathematical modeling of biological systems because binary decisions are at the core of many cellular processes. Bistable switches present two stable steady-states, each of them corresponding to a distinct decision. In response to a transient signal, the system can flip back and forth between these two stable steady-states, switching between both decisions. Understanding which parameters and states affect this switch between stable states may shed light on the mechanisms underlying the decision-making process. Yet, answering such a question involves analyzing the global dynamical (i.e., transient) behavior of a nonlinear, possibly high-dimensional model. In this paper, we show how a local analysis at a particular equilibrium point of bistable systems is highly relevant to understanding the global properties of the switching system. The local analysis is performed at the saddle point, an often disregarded equilibrium point of bistable models which is shown here to be a key ruler of the decision-making process. Results are illustrated on three previously published models of biological switches: two models of apoptosis (programmed cell death) and one model of long-term potentiation (a phenomenon underlying synaptic plasticity).
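
    The saddle-centered local analysis can be sketched on a textbook bistable system. Below, a genetic toggle switch (an illustrative model, not one of the paper's three case studies) is used: equilibria are located numerically and classified by the eigenvalues of the Jacobian, exposing the saddle that sits between the two stable states:

```python
import numpy as np
from scipy.optimize import fsolve

# Classic genetic toggle switch; a = 3, n = 2 gives two stable states
# separated by a saddle on the diagonal.
a, n = 3.0, 2.0
rhs = lambda u: [a / (1 + u[1]**n) - u[0], a / (1 + u[0]**n) - u[1]]

def jacobian(u, eps=1e-7):
    J = np.zeros((2, 2)); f0 = np.array(rhs(u))
    for j in range(2):
        up = np.array(u, float); up[j] += eps
        J[:, j] = (np.array(rhs(up)) - f0) / eps
    return J

for guess in ([2.6, 0.4], [0.4, 2.6], [1.2, 1.2]):
    eq = fsolve(rhs, guess)
    ev = np.linalg.eigvals(jacobian(eq)).real
    kind = "saddle" if (ev > 0).any() and (ev < 0).any() else "stable"
    print(np.round(eq, 3), kind)
```

    The saddle's unstable eigenvector indicates the directions along which a transient signal most easily flips the decision, which is why a local computation at this often "hidden" point is informative about the global switch.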

  17. A stochastic multicriteria model for evidence-based decision making in drug benefit-risk analysis.

    Science.gov (United States)

    Tervonen, Tommi; van Valkenhoef, Gert; Buskens, Erik; Hillege, Hans L; Postmus, Douwe

    2011-05-30

    Drug benefit-risk (BR) analysis is based on firm clinical evidence regarding various safety and efficacy outcomes. In this paper, we propose a new and more formal approach for constructing a supporting multi-criteria model that fully takes into account the evidence on efficacy and adverse drug reactions. Our approach is based on the stochastic multi-criteria acceptability analysis methodology, which allows us to compute the typical value judgments that support a decision, to quantify decision uncertainty, and to compute a comprehensive BR profile. We construct a multi-criteria model for the therapeutic group of second-generation antidepressants. We assess fluoxetine and venlafaxine together with placebo according to incidence of treatment response and three common adverse drug reactions by using data from a published study. Our model shows that there are clear trade-offs among the treatment alternatives. Copyright © 2011 John Wiley & Sons, Ltd.
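
    The stochastic multicriteria acceptability analysis machinery behind such a model can be sketched with a small Monte Carlo loop: sample uncertain criteria measurements and uniformly distributed weights, score the alternatives with an additive value model, and tally how often each alternative ranks first. The alternatives, criteria, and numbers below are hypothetical, not the fluoxetine/venlafaxine data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical benefit-risk table: 3 alternatives x 4 criteria (e.g. response
# rate plus three adverse-reaction criteria), pre-scaled so higher = better.
mean = np.array([[0.60, 0.70, 0.80, 0.50],
                 [0.65, 0.60, 0.70, 0.60],
                 [0.50, 0.90, 0.90, 0.40]])
sd = 0.05                                  # measurement uncertainty
n_iter, (m, k) = 10_000, mean.shape

rank1 = np.zeros(m)
for _ in range(n_iter):
    x = rng.normal(mean, sd)               # sampled criteria values
    w = rng.dirichlet(np.ones(k))          # uniform weights on the simplex
    rank1[np.argmax(x @ w)] += 1           # additive value model, best wins
print("rank-1 acceptability:", rank1 / n_iter)
```

    The resulting rank acceptability indices quantify decision uncertainty in the same spirit as the comprehensive benefit-risk profile described above.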

  18. FAME, the flux analysis and modelling environment

    NARCIS (Netherlands)

    Boele, J.; Olivier, B.G.; Teusink, B.

    2012-01-01

    Background: The creation and modification of genome-scale metabolic models is a task that requires specialized software tools. While these are available, subsequently running or visualizing a model often relies on disjoint code, which adds additional actions to the analysis routine and, in our

  19. How Many Separable Sources? Model Selection In Independent Components Analysis

    DEFF Research Database (Denmark)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though more costly, alternative where the Akaike Information Criterion might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.

  20. Segmentation of 3d Models for Cultural Heritage Structural Analysis - Some Critical Issues

    Science.gov (United States)

    Gonizzi Barsanti, S.; Guidi, G.; De Luca, L.

    2017-08-01

    Cultural Heritage documentation and preservation has become a fundamental concern in this historical period. 3D modelling offers a perfect aid to record ancient buildings and artefacts and can be used as a valid starting point for restoration, conservation and structural analysis, which can be performed by using Finite Element Methods (FEA). The models derived from reality-based techniques, made up of the exterior surfaces of the objects captured at high resolution, are - for this reason - made of millions of polygons. Such meshes are not directly usable in structural analysis packages and need to be properly pre-processed in order to be transformed into volumetric meshes suitable for FEA. In addition, when dealing with ancient objects, a proper segmentation of 3D volumetric models is needed to analyse the behaviour of the structure with the most suitable level of detail for the different sections of the structure under analysis. Segmentation of 3D models is still an open issue, especially when dealing with ancient, complicated and geometrically complex objects that imply the presence of anomalies and gaps, due to environmental agents such as earthquakes, pollution, wind and rain, or human factors. The aim of this paper is to critically analyse some of the different methodologies and algorithms available to segment a 3D point cloud or a mesh, identifying difficulties and problems by showing examples on different structures.

  1. Comparative Analysis of Investment Decision Models

    Directory of Open Access Journals (Sweden)

    Ieva Kekytė

    2017-06-01

    The rapid development of financial markets has created new challenges for both investors and investment issues. This has increased demand for innovative, modern investment and portfolio management decisions adequate for market conditions. Financial markets receive special attention in the creation of new models that include financial risk management and investment decision support systems. Researchers recognize the need to deal with financial problems using models consistent with reality and based on sophisticated quantitative analysis techniques. Thus, the role of mathematical modeling in finance becomes important. This article deals with various investment decision-making models, which include forecasting, optimization, stochastic processes, artificial intelligence, etc., and which become useful tools for investment decisions.

  2. Perturbation analysis for Monte Carlo continuous cross section models

    International Nuclear Information System (INIS)

    Kennedy, Chris B.; Abdel-Khalik, Hany S.

    2011-01-01

    Sensitivity analysis, including both its forward and adjoint applications, collectively referred to hereinafter as Perturbation Analysis (PA), is an essential tool to complete Uncertainty Quantification (UQ) and Data Assimilation (DA). PA-assisted UQ and DA have traditionally been carried out for reactor analysis problems using deterministic as opposed to stochastic models for radiation transport. This is because PA requires many model executions to quantify how variations in input data, primarily cross sections, affect variations in a model's responses, e.g. detector readings, flux distribution, multiplication factor, etc. Although stochastic models are often sought for their higher accuracy, their repeated execution is at best computationally expensive and in reality intractable for typical reactor analysis problems involving many input data and output responses. Deterministic methods, however, achieve the computational efficiency needed to carry out the PA analysis by reducing problem dimensionality via various spatial and energy homogenization assumptions. This, however, introduces modeling error components into the PA results which propagate to the following UQ and DA analyses. The introduced errors are problem specific and therefore are expected to limit the applicability of UQ and DA analyses to reactor systems that satisfy the introduced assumptions. This manuscript introduces a new method to complete PA employing a continuous cross section stochastic model and performed in a computationally efficient manner. If successful, the modeling error components introduced by deterministic methods could be eliminated, thereby allowing for wider applicability of DA and UQ results. Two MCNP models demonstrate the application of the new method: a critical Pu sphere (Jezebel) and a Pu fast metal array (the Russian BR-1). The PA is completed for reaction rate densities, reaction rate ratios, and the multiplication factor. (author)

  3. Bayesian Sensitivity Analysis of a Nonlinear Dynamic Factor Analysis Model with Nonparametric Prior and Possible Nonignorable Missingness.

    Science.gov (United States)

    Tang, Niansheng; Chow, Sy-Miin; Ibrahim, Joseph G; Zhu, Hongtu

    2017-12-01

    Many psychological concepts are unobserved and usually represented as latent factors apprehended through multiple observed indicators. When multiple-subject multivariate time series data are available, dynamic factor analysis models with random effects offer one way of modeling patterns of within- and between-person variations by combining factor analysis and time series analysis at the factor level. Using the Dirichlet process (DP) as a nonparametric prior for individual-specific time series parameters further allows the distributional forms of these parameters to deviate from commonly imposed (e.g., normal or other symmetric) functional forms, arising as a result of these parameters' restricted ranges. Given the complexity of such models, a thorough sensitivity analysis is critical but computationally prohibitive. We propose a Bayesian local influence method that allows for simultaneous sensitivity analysis of multiple modeling components within a single fitting of the model of choice. Five illustrations and an empirical example are provided to demonstrate the utility of the proposed approach in facilitating the detection of outlying cases and common sources of misspecification in dynamic factor analysis models, as well as identification of modeling components that are sensitive to changes in the DP prior specification.

  4. Modeling and analysis of stochastic systems

    CERN Document Server

    Kulkarni, Vidyadhar G

    2011-01-01

    Based on the author's more than 25 years of teaching experience, Modeling and Analysis of Stochastic Systems, Second Edition covers the most important classes of stochastic processes used in the modeling of diverse systems, from supply chains and inventory systems to genetics and biological systems. For each class of stochastic process, the text includes its definition, characterization, applications, transient and limiting behavior, first passage times, and cost/reward models. Along with reorganizing the material, this edition revises and adds new exercises and examples. New to the second edition...

  5. Independent Component Analysis in Multimedia Modeling

    DEFF Research Database (Denmark)

    Larsen, Jan

    2003-01-01

    Modeling of multimedia and multimodal data becomes increasingly important with the digitalization of the world. The objective of this paper is to demonstrate the potential of independent component analysis and blind source separation methods for modeling and understanding of multimedia data, which largely refers to text, images/video, audio and combinations of such data. We review a number of applications within single and combined media with the hope that this might provide inspiration for further research in this area. Finally, we provide a detailed presentation of our own recent work on modeling...

  6. Modeling and Analysis of Component Faults and Reliability

    DEFF Research Database (Denmark)

    Le Guilly, Thibaut; Olsen, Petur; Ravn, Anders Peter

    2016-01-01

    This chapter presents a process to design and validate models of reactive systems in the form of communicating timed automata. The models are extended with faults associated with probabilities of occurrence. This enables a fault tree analysis of the system using minimal cut sets that are automatically generated. The stochastic information on the faults is used to estimate the reliability of the fault-affected system. The reliability is given with respect to properties of the system state space. We illustrate the process on a concrete example using the Uppaal model checker for validating the ideal system model and the fault modeling. Then the statistical version of the tool, UppaalSMC, is used to find reliability estimates.

  7. Genomic analysis of codon usage shows influence of mutation pressure, natural selection, and host features on Marburg virus evolution.

    Science.gov (United States)

    Nasrullah, Izza; Butt, Azeem M; Tahir, Shifa; Idrees, Muhammad; Tong, Yigang

    2015-08-26

    The Marburg virus (MARV) has a negative-sense single-stranded RNA genome, belongs to the family Filoviridae, and is responsible for several outbreaks of highly fatal hemorrhagic fever. Codon usage patterns of viruses reflect a series of evolutionary changes that enable viruses to shape their survival rates and fitness toward the external environment and, most importantly, their hosts. To understand the evolution of MARV at the codon level, we report a comprehensive analysis of synonymous codon usage patterns in MARV genomes. Multiple codon analysis approaches and statistical methods were performed to determine overall codon usage patterns, biases in codon usage, and the influence of various factors, including mutation pressure, natural selection, and its two hosts, Homo sapiens and Rousettus aegyptiacus. Nucleotide composition and relative synonymous codon usage (RSCU) analysis revealed that MARV shows mutation bias and prefers U- and A-ended codons to code amino acids. Effective number of codons analysis indicated that overall codon usage among MARV genomes is slightly biased. The Parity Rule 2 plot analysis showed that GC and AU nucleotides were not used proportionally, which accounts for the presence of natural selection. Codon usage patterns of MARV were also found to be influenced by its hosts, indicating that MARV has evolved codon usage patterns that are specific to both of its hosts. Moreover, selection pressure from R. aegyptiacus on the MARV RSCU patterns was found to be dominant compared with that from H. sapiens. Overall, mutation pressure was found to be the most important and dominant force shaping codon usage patterns in MARV. To our knowledge, this is the first detailed codon usage analysis of MARV and it extends our understanding of the mechanisms that contribute to codon usage and evolution of MARV.
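
    Relative synonymous codon usage, the core statistic of this analysis, is straightforward to compute: each codon's count is divided by the average count of its synonymous family, so values above 1 mark preferred codons. A minimal sketch on a toy sequence (only two amino-acid families shown; a full analysis covers the whole genetic code):

```python
from collections import Counter

# Synonymous codon families of the standard genetic code (subset for the demo)
SYN = {
    "Leu": ["UUA", "UUG", "CUU", "CUC", "CUA", "CUG"],
    "Lys": ["AAA", "AAG"],
}

def rscu(seq):
    """RSCU: observed codon count divided by the mean count of its
    synonymous family; 1.0 means no bias, >1.0 means preferred."""
    codons = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]
    counts = Counter(codons)
    out = {}
    for fam in SYN.values():
        total = sum(counts[c] for c in fam)
        if total:
            for c in fam:
                out[c] = counts[c] * len(fam) / total
    return out

print(rscu("AAAAAGAAAUUACUG"))  # toy mRNA fragment
```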

  8. Comparative analysis of elements and models of implementation in local-level spatial plans in Serbia

    Directory of Open Access Journals (Sweden)

    Stefanović Nebojša

    2017-01-01

    Implementation of local-level spatial plans is of paramount importance to the development of the local community. This paper aims to demonstrate the importance of and offer further directions for research into the implementation of spatial plans by presenting the results of a study on models of implementation. The paper describes the basic theoretical postulates of a model for implementing spatial plans. A comparative analysis of the application of elements and models of implementation of plans in practice was conducted based on the spatial plans for the local municipalities of Arilje, Lazarevac and Sremska Mitrovica. The analysis includes four models of implementation: the strategy and policy of spatial development; spatial protection; the implementation of planning solutions of a technical nature; and the implementation of rules of use, arrangement and construction of spaces. The main results of the analysis are presented and used to give recommendations for improving the elements and models of implementation. Final deliberations show that models of implementation are generally used in practice and combined in spatial plans. Based on the analysis of how models of implementation are applied in practice, a general conclusion concerning the complex character of the local level of planning is presented and elaborated. [Project of the Serbian Ministry of Education, Science and Technological Development, Grant no. TR 36035: Spatial, Environmental, Energy and Social Aspects of Developing Settlements and Climate Change - Mutual Impacts and Grant no. III 47014: The Role and Implementation of the National Spatial Plan and Regional Development Documents in Renewal of Strategic Research, Thinking and Governance in Serbia

  9. Coping with Complexity Model Reduction and Data Analysis

    CERN Document Server

    Gorban, Alexander N

    2011-01-01

    This volume contains the extended version of selected talks given at the international research workshop 'Coping with Complexity: Model Reduction and Data Analysis', Ambleside, UK, August 31 - September 4, 2009. This book is deliberately broad in scope and aims at promoting new ideas and methodological perspectives. The topics of the chapters range from theoretical analysis of complex and multiscale mathematical models to applications in e.g., fluid dynamics and chemical kinetics.

  10. CPR in medical TV shows: non-health care student perspective.

    Science.gov (United States)

    Alismail, Abdullah; Meyer, Nicole C; Almutairi, Waleed; Daher, Noha S

    2018-01-01

    There are over a dozen medical shows airing on television, many of them during prime time. Researchers have recently become more interested in the role of these shows in raising awareness of cardiopulmonary resuscitation. Several cases have been reported where a lay person resuscitated a family member using medical TV shows as a reference. The purpose of this study is to examine and evaluate college students' perceptions of cardiopulmonary resuscitation, and of when to shock using an automated external defibrillator, based on their experience of watching medical TV shows. A total of 170 students (nonmedical majors) were surveyed in four different colleges in the United States. The survey consisted of questions reflecting the perceptions and knowledge acquired from watching medical TV shows. A stepwise regression was used to determine the significant predictors of "How often do you watch medical drama TV shows", in addition to chi-square analysis for nominal variables. The regression model showed a significant effect: TV shows changed students' perceptions positively (p<0.001), and students were more likely to select shock on asystole as the frequency of watching increased (p=0.023). The findings of this study show that a high percentage of nonmedical college students are influenced significantly by medical shows. One particular influence is the false belief about when a shock using the automated external defibrillator (AED) is appropriate, as it is portrayed falsely in most medical shows. This finding raises a concern about how these shows portray basic life support, especially when not following American Heart Association (AHA) guidelines. We recommend that the medical advisors on these shows follow AHA guidelines and that the AHA expand its expenditures to include medical shows, in order to educate the public on the appropriate action to rescue an out-of-hospital cardiac arrest patient.

  11. MODELING AND ANALYSIS OF UNSTEADY FLOW BEHAVIOR IN DEEPWATER CONTROLLED MUD-CAP DRILLING

    Directory of Open Access Journals (Sweden)

    Jiwei Li

    A new mathematical model was developed in this study to simulate the unsteady flow in controlled mud-cap drilling systems. The model can predict the time-dependent flow inside the drill string and annulus after a circulation break. The model consists of the continuity and momentum equations solved using the explicit Euler method. It considers both Newtonian and non-Newtonian fluids flowing inside the drill string and annular space. The model predicts the transient flow velocity of the mud, the equilibrium time, and the change in the bottom hole pressure (BHP) during the unsteady flow. The model was verified using data from U-tube flow experiments reported in the literature. The result shows that the model is accurate, with a maximum average error of 3.56% for the velocity prediction. Together with the measured data, the computed transient flow behavior can be used to better detect well kick and loss of circulation after the mud pump is shut down. The model sensitivity analysis shows that the water depth, mud density and drill string size are the three major factors affecting the fluctuation of the BHP after a circulation break. These factors should be carefully examined in well design and drilling operations to minimize BHP fluctuation and well kick. This study provides the fundamentals for designing a safe system in controlled mud-cap drilling operations.
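
    The U-tube behavior after a circulation break can be sketched with the same explicit-Euler idea in a drastically simplified form: a single liquid column with a level imbalance and lumped friction. This illustrates the integration scheme only, not the paper's full drill-string/annulus model with non-Newtonian rheology; all parameters are hypothetical:

```python
# Idealized U-tube after a circulation break, integrated with explicit Euler.
g, L = 9.81, 2000.0        # gravity [m/s2], total liquid column length [m]
k_f = 0.05                 # lumped turbulent friction coefficient [1/m]
dt, t_end = 0.1, 600.0     # time step and horizon [s]

v, dh = 0.0, 50.0          # initial velocity [m/s] and level difference [m]
history = []
for step in range(int(t_end / dt)):
    accel = -g * dh / L - k_f * v * abs(v)   # restoring head minus friction
    v += accel * dt
    dh += 2.0 * v * dt                       # the two levels move oppositely
    history.append((step * dt, v, dh))
# v decays in damped oscillations toward zero; the time at which it settles
# corresponds to the 'equilibrium time' such a model is meant to predict
```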

  12. Trajectory modeling of gestational weight: A functional principal component analysis approach.

    Directory of Open Access Journals (Sweden)

    Menglu Che

    Suboptimal gestational weight gain (GWG), which is linked to increased risk of adverse outcomes for a pregnant woman and her infant, is prevalent. In the study of a large cohort of Canadian pregnant women, our goals are to estimate the individual weight growth trajectory using sparsely collected bodyweight data, and to identify the factors affecting the weight change during pregnancy, such as prepregnancy body mass index (BMI), dietary intakes and physical activity. The first goal was achieved through functional principal component analysis (FPCA) by conditional expectation. For the second goal, we used linear regression with the total weight gain as the response variable. The trajectory modeling through FPCA had a significantly smaller root mean square error (RMSE) and improved adaptability compared with the classic nonlinear mixed-effect models, demonstrating a novel tool that can be used to facilitate real-time monitoring and interventions of GWG. Our regression analysis showed that prepregnancy BMI had a high predictive value for the weight changes during pregnancy, which agrees with the published weight gain guideline.
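
    On a dense common grid, FPCA reduces to an eigendecomposition of the sample covariance of the centered curves. The sketch below illustrates that dense-grid case on synthetic trajectories; the paper instead uses FPCA by conditional expectation (the PACE approach) precisely because its per-subject weight measurements are sparse:

```python
import numpy as np

rng = np.random.default_rng(2)
n_subj, grid = 100, np.linspace(0, 40, 41)   # gestational weeks 0-40

# Synthetic weight-gain curves: subject-specific slope + curvature + noise
curves = (rng.normal(0.35, 0.08, (n_subj, 1)) * grid
          + rng.normal(0.004, 0.002, (n_subj, 1)) * grid**2
          + rng.normal(0.0, 0.3, (n_subj, grid.size)))

mean_curve = curves.mean(axis=0)
centered = curves - mean_curve
cov = centered.T @ centered / (n_subj - 1)   # sample covariance function
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

scores = centered @ eigvec[:, :2]            # first two FPC scores per subject
print("variance explained by FPC1-2:", np.round(eigval[:2] / eigval.sum(), 3))
```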

  13. Sensitivity analysis and optimization of system dynamics models : Regression analysis and statistical design of experiments

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This tutorial discusses what-if analysis and optimization of System Dynamics models. These problems are solved, using the statistical techniques of regression analysis and design of experiments (DOE). These issues are illustrated by applying the statistical techniques to a System Dynamics model for

  14. Modeling time-to-event (survival) data using classification tree analysis.

    Science.gov (United States)

    Linden, Ariel; Yarnold, Paul R

    2017-12-01

    Time to the occurrence of an event is often studied in health research. Survival analysis differs from other designs in that follow-up times for individuals who do not experience the event by the end of the study (called censored) are accounted for in the analysis. Cox regression is the standard method for analysing censored data, but the assumptions required of these models are easily violated. In this paper, we introduce classification tree analysis (CTA) as a flexible alternative for modelling censored data. Classification tree analysis is a "decision-tree"-like classification model that provides parsimonious, transparent (ie, easy to visually display and interpret) decision rules that maximize predictive accuracy, derives exact P values via permutation tests, and evaluates model cross-generalizability. Using empirical data, we identify all statistically valid, reproducible, longitudinally consistent, and cross-generalizable CTA survival models and then compare their predictive accuracy to estimates derived via Cox regression and an unadjusted naïve model. Model performance is assessed using integrated Brier scores and a comparison between estimated survival curves. The Cox regression model best predicts average incidence of the outcome over time, whereas CTA survival models best predict either relatively high, or low, incidence of the outcome over time. Classification tree analysis survival models offer many advantages over Cox regression, such as explicit maximization of predictive accuracy, parsimony, statistical robustness, and transparency. Therefore, researchers interested in accurate prognoses and clear decision rules should consider developing models using the CTA-survival framework. © 2017 John Wiley & Sons, Ltd.

  15. Parameter identification and global sensitivity analysis of Xin'anjiang model using meta-modeling approach

    Directory of Open Access Journals (Sweden)

    Xiao-meng Song

    2013-01-01

    Parameter identification, model calibration, and uncertainty quantification are important steps in the model-building process, and are necessary for obtaining credible results and valuable information. Sensitivity analysis of a hydrological model is a key step in model uncertainty quantification, which can identify the dominant parameters, reduce the model calibration uncertainty, and enhance the model optimization efficiency. There are, however, some shortcomings in classical approaches, including the long duration of time and high computational cost required to quantitatively assess the sensitivity of a multiple-parameter hydrological model. For this reason, a two-step statistical evaluation framework using global techniques is presented. It is based on (1) a screening method (Morris) for qualitative ranking of parameters, and (2) a variance-based method integrated with a meta-model for quantitative sensitivity analysis, i.e., the Sobol method integrated with the response surface model (RSMSobol). First, the Morris screening method was used to qualitatively identify the parameters' sensitivity, and then ten parameters were selected for quantification of the sensitivity indices. Subsequently, the RSMSobol method was used to quantify the sensitivity, i.e., the first-order and total sensitivity indices based on the response surface model (RSM) were calculated. The RSMSobol method can not only quantify the sensitivity, but also reduce the computational cost, with good accuracy compared to the classical approaches. This approach will be effective and reliable in the global sensitivity analysis of complex large-scale distributed hydrological models.
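
    Step (1), the Morris-style screening, can be sketched with a crude one-at-a-time elementary-effects routine. This is a simplification of the full Morris trajectory design and is unrelated to the RSMSobol implementation; a large mean |EE| flags an important parameter, while a large spread flags nonlinearity or interactions:

```python
import numpy as np

def elementary_effects(f, bounds, r=20, delta=0.1, rng=None):
    """Crude Morris-style screening via one-at-a-time perturbations
    from r random base points (a sketch, not full Morris trajectories)."""
    rng = np.random.default_rng(rng)
    k = len(bounds)
    lo = np.array([b[0] for b in bounds]); hi = np.array([b[1] for b in bounds])
    ee = np.zeros((r, k))
    for i in range(r):
        x = lo + rng.random(k) * (1 - delta) * (hi - lo)  # stay inside bounds
        y0 = f(x)
        for j in range(k):
            xp = x.copy(); xp[j] += delta * (hi[j] - lo[j])
            ee[i, j] = (f(xp) - y0) / delta
    return np.abs(ee).mean(axis=0), ee.std(axis=0)   # mu*, sigma

# Demo on a toy model with one dominant input
f = lambda x: 4 * x[0] + x[1]**2 + 0.1 * x[2]
mu_star, sigma = elementary_effects(f, bounds=[(0, 1)] * 3, r=30, rng=0)
print(mu_star.round(2), sigma.round(2))
```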

  16. Inter-subchannel heat transfer modeling for a subchannel analysis of liquid metal-cooled reactors

    International Nuclear Information System (INIS)

    Hae-Yong, Jeong; Kwi-Seok, Ha; Young-Min, Kwon; Yong-Bum, Lee; Dohee, Hahn

    2007-01-01

    In a subchannel approach, the temperature, pressure and velocity in a subchannel are averaged, and one representative thermal-hydraulic condition specifies the state of a subchannel. To enhance the predictability of a subchannel analysis code, it is required to model the inter-subchannel heat transfer between adjacent subchannels as accurately as possible. One of the critical parameters determining the thermal-hydraulic behavior of the coolant in subchannels is the heat conduction between two neighboring subchannels. This portion of the heat transfer becomes more important in the design of an LMR (Liquid Metal-cooled Reactor) because of the high heat capacity of the liquid metal coolant. The other important part of the heat transfer is the mixing of flow in the form of cross flow. In particular, the turbulent mixing caused by the eddy motion of fluid across the gap between subchannels enhances the exchange of momentum and energy through the gap with no net transport of mass. Major results of recent efforts on this modeling have been implemented in the subchannel analysis code MATRA-LMR-FB. The analysis shows that the accuracy of a subchannel analysis code is improved by enhancing the models describing the conduction heat transfer and the cross-flow mixing, especially at low flow rates. (authors)
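
    For orientation, the two gap-exchange terms discussed in this record are commonly closed in subchannel codes with expressions of the following generic textbook form (not necessarily the exact MATRA-LMR-FB closures):

```latex
% Conduction through the gap between adjacent subchannels i and j:
q^{\mathrm{cond}}_{ij} = k \, \frac{s_{ij}}{l_{ij}} \, (T_j - T_i)

% Turbulent mixing across the gap (energy exchange, no net mass transport):
q^{\mathrm{mix}}_{ij} = w'_{ij} \, (h_j - h_i), \qquad w'_{ij} = \beta \, s_{ij} \, \bar{G}
```

    Here $s_{ij}$ is the gap width, $l_{ij}$ the centroid-to-centroid distance, $k$ the coolant thermal conductivity, $h$ the enthalpy, $\bar{G}$ the average axial mass flux, and $\beta$ an empirical turbulent mixing parameter.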

  17. Structural equation modeling analysis of factors influencing architects' trust in project design teams

    Institute of Scientific and Technical Information of China (English)

    DING Zhi-kun; NG Fung-fai; WANG Jia-yuan

    2009-01-01

    This paper describes a structural equation modeling (SEM) analysis of factors influencing architects' trust in project design teams. We undertook a survey of architects, during which we distributed 193 questionnaires in 29 A-level architectural firms. We used Amos 6.0 for SEM to identify significant personal-construct-based factors affecting interpersonal trust. The results show that only social interaction between architects significantly affects their interpersonal trust. The explained variance of trust is not very high in the model; therefore, future research should add more factors to the current model. The practical implication is that team managers should promote social interactions between team members so that the interpersonal trust level between team members can be improved.

  18. Bayesian nonparametric meta-analysis using Polya tree mixture models.

    Science.gov (United States)

    Branscum, Adam J; Hanson, Timothy E

    2008-09-01

    Summary. A common goal in meta-analysis is estimation of a single effect measure using data from several studies that are each designed to address the same scientific inquiry. Because studies are typically conducted in geographically disperse locations, recent developments in the statistical analysis of meta-analytic data involve the use of random effects models that account for study-to-study variability attributable to differences in environments, demographics, genetics, and other sources that lead to heterogeneity in populations. Stemming from asymptotic theory, study-specific summary statistics are modeled according to normal distributions with means representing latent true effect measures. A parametric approach subsequently models these latent measures using a normal distribution, which is strictly a convenient modeling assumption absent of theoretical justification. To eliminate the influence of overly restrictive parametric models on inferences, we consider a broader class of random effects distributions. We develop a novel hierarchical Bayesian nonparametric Polya tree mixture (PTM) model. We present methodology for testing the PTM versus a normal random effects model. These methods provide researchers a straightforward approach for conducting a sensitivity analysis of the normality assumption for random effects. An application involving meta-analysis of epidemiologic studies designed to characterize the association between alcohol consumption and breast cancer is presented, which together with results from simulated data highlight the performance of PTMs in the presence of nonnormality of effect measures in the source population.

  19. Removal of Cr(III) ions from salt solution by nanofiltration: experimental and modelling analysis

    Directory of Open Access Journals (Sweden)

    Kowalik-Klimczak Anna

    2016-09-01

    The aim of this study was an experimental and modelling analysis of the nanofiltration process used for the removal of chromium(III) ions from a salt solution characterized by low pH. The experimental results were interpreted with the Donnan and Steric Partitioning Pore (DSP) model based on the extended Nernst-Planck equation. In this model, one of the main parameters describing the retention of ions by the membrane is the pore dielectric constant. In this work, it was identified for various process pressures and feed compositions. The obtained results showed satisfactory agreement between the experimental and modelling data. This means that the DSP model may be helpful for the monitoring of nanofiltration processes applied for the treatment of chromium tannery wastewater.
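
    For reference, the extended Nernst-Planck pore flux on which DSP-type models are built is commonly written as follows (a textbook form; the notation is not claimed to match this paper's exactly):

```latex
% Flux of ion i through the membrane pores:
% diffusion + electromigration + convection
j_i = -D_{i,p} \, \frac{\mathrm{d}c_i}{\mathrm{d}x}
      - \frac{z_i c_i D_{i,p} F}{R T} \, \frac{\mathrm{d}\psi}{\mathrm{d}x}
      + K_{i,c} \, c_i \, J_v
```

    Here $D_{i,p}$ is the hindered pore diffusivity, $z_i$ the ion charge number, $\psi$ the electric potential, $K_{i,c}$ the convective hindrance factor, and $J_v$ the permeate volume flux; the pore dielectric constant identified in the study enters through the ion partitioning at the pore entrance.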

  20. Hidden-Markov-Model Analysis Of Telemanipulator Data

    Science.gov (United States)

    Hannaford, Blake; Lee, Paul

    1991-01-01

    Mathematical model and procedure based on hidden-Markov-model concept undergoing development for use in analysis and prediction of outputs of force and torque sensors of telerobotic manipulators. In model, overall task broken down into subgoals, and transition probabilities encode ease with which operator completes each subgoal. Process portion of model encodes task-sequence/subgoal structure, and probability-density functions for forces and torques associated with each state of manipulation encode sensor signals that one expects to observe at subgoal. Parameters of model constructed from engineering knowledge of task.
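
    The evaluation step of such a model - scoring how well an observed sensor sequence matches the encoded task structure - is the classic forward algorithm. A discrete-output sketch follows; the telemanipulation model itself uses continuous force/torque probability densities, and the toy states and numbers here are assumptions:

```python
import numpy as np

def forward_likelihood(pi, A, B, obs):
    """Likelihood P(obs | HMM) via the forward algorithm.
    pi: initial state probabilities (n,); A: transition matrix (n, n);
    B: per-state observation probabilities (n, n_symbols); obs: symbol indices."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

# Toy 2-state task model: state 0 = "free motion", state 1 = "in contact"
pi = np.array([1.0, 0.0])
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],    # symbol 0 = low force, symbol 1 = high force
              [0.1, 0.9]])
print(forward_likelihood(pi, A, B, [0, 0, 1, 1, 1]))
```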

  1. Analysis of the quench propagation along Nb3Sn Rutherford cables with the THELMA code. Part I: Geometric and thermal models

    Science.gov (United States)

    Manfreda, G.; Bellina, F.

    2016-12-01

    The paper describes the new lumped thermal model recently implemented in the THELMA code for the coupled electromagnetic-thermal analysis of superconducting cables. A new geometrical model is also presented, which describes the Rutherford cables used for accelerator magnets. A first validation of these models was provided by the analysis of the longitudinal quench propagation velocity in the Nb3Sn prototype coil SMC3, built and tested within the framework of the EUCARD project for the development of high-field magnets for the LHC machine. This paper presents the models in detail, while their application to the quench propagation analysis is presented in a companion paper.

  2. Comparison of Prediction Model for Cardiovascular Autonomic Dysfunction Using Artificial Neural Network and Logistic Regression Analysis

    Science.gov (United States)

    Zeng, Fangfang; Li, Zhongtao; Yu, Xiaoling; Zhou, Linuo

    2013-01-01

    Background This study aimed to develop artificial neural network (ANN) and multivariable logistic regression (LR) analyses for prediction modeling of cardiovascular autonomic (CA) dysfunction in the general population, and to compare the prediction models built with the two approaches. Methods and Materials We analyzed a previous dataset based on a Chinese population sample consisting of 2,092 individuals aged 30–80 years. The prediction models were derived from an exploratory set using ANN and LR analysis, and were tested in the validation set. The performances of these prediction models were then compared. Results Univariate analysis indicated that 14 risk factors showed statistically significant associations with the prevalence of CA dysfunction (P<0.05). The mean area under the receiver-operating curve was 0.758 (95% CI 0.724–0.793) for LR and 0.762 (95% CI 0.732–0.793) for ANN analysis, and a noninferiority result was found (P<0.001). Similar results were found in comparisons of sensitivity, specificity, and predictive values between the LR and ANN prediction models. Conclusion The prediction models for CA dysfunction were developed using ANN and LR. ANN and LR are two effective tools for developing prediction models based on our dataset. PMID:23940593
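
    For orientation, a minimal sketch of the general workflow described above: fit a logistic model on two-thirds of the data and evaluate discrimination on the remaining third. The data are synthetic stand-ins for the 14 risk factors; scikit-learn is assumed, and nothing here reproduces the authors' analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2092, 14))                    # 14 hypothetical risk factors
y = (X @ rng.normal(size=14) + rng.normal(size=2092) > 0).astype(int)

# exploratory set (2/3) for fitting, validation set (1/3) for testing
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("validation AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```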

  3. Seismic soil structure interaction: analysis and centrifuge model studies

    International Nuclear Information System (INIS)

    Finn, W.D.L.; Ledbetter, R.H.; Beratan, L.L.

    1985-01-01

    A method for non-linear dynamic effective stress analysis applicable to soil-structure interaction problems is introduced. Full interaction, including slip between structure and foundation, is taken into account, and the major factors that must be considered when computing dynamic soil response are included. An experimental investigation was conducted using simulated earthquake tests on centrifuged geotechnical models in order to obtain prototype response data of foundation soils carrying both surface and embedded structures and to validate the dynamic effective stress analysis. Horizontal and vertical accelerations were measured at various points on structures and in the sand foundation. Seismically-induced pore water pressure changes were also measured at various locations in the foundation. Computer plots of the data were obtained while the centrifuge was in flight, and representative samples are presented. The results show clearly the pronounced effect that increasing pore water pressures have on dynamic response. It is demonstrated that a coherent picture of the dynamic response of soil-structure systems is provided by dynamic effective stress non-linear analysis. Based on preliminary results, it appears that the pore water pressure effects can be predicted.

  4. Seismic soil-structure interaction: Analysis and centrifuge model studies

    International Nuclear Information System (INIS)

    Finn, W.D.L.; Ledbetter, R.H.; Beratan, L.L.

    1986-01-01

    A method for nonlinear dynamic effective stress analysis applicable to soil-structure interaction problems is introduced. Full interaction including slip between structure and foundation is taken into account and the major factors that must be considered when computing dynamic soil response are included. An experimental investigation using simulated earthquake tests on centrifuged geotechnical models was conducted to obtain prototype response data of foundation soils carrying both surface and embedded structures and to validate the dynamic effective stress analysis. The centrifuge tests were conducted in the Geotechnical Centrifuge at Cambridge University, England. Horizontal and vertical accelerations were measured at various points on structures and in the sand foundation. Seismically induced pore water pressure changes were also measured at various locations in the foundation. Computer plots of the data were obtained while the centrifuge was in flight and representative samples are presented. The results clearly show the pronounced effect of increasing pore water pressures on dynamic response. It is demonstrated that a coherent picture of dynamic response of soil-structure systems is provided by dynamic effective stress nonlinear analysis. On the basis of preliminary results, it appears that the effects of pore water pressure can be predicted. (orig.)

  5. How Many Separable Sources? Model Selection In Independent Components Analysis

    Science.gov (United States)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988

  6. Ignalina NPP Safety Analysis: Models and Results

    International Nuclear Information System (INIS)

    Uspuras, E.

    1999-01-01

    The research directions of the scientific safety analysis group, linked to the safety assessment of the Ignalina NPP, are presented: thermal-hydraulic analysis of accidents and operational transients; thermal-hydraulic assessment of the Ignalina NPP Accident Localization System and other compartments; structural analysis of plant components, piping and other parts of the Main Circulation Circuit; assessment of the RBMK-1500 reactor core; and others. The models and the main work carried out last year are described. (author)

  7. Harmonic Instability Analysis of Single-Phase Grid Connected Converter using Harmonic State Space (HSS) modeling method

    DEFF Research Database (Denmark)

    Kwon, Jun Bum; Wang, Xiongfei; Bak, Claus Leth

    2015-01-01

    The increasing number of renewable energy sources at the distribution grid is becoming a major issue for utility companies, since the grid-connected converters are operating at different operating points due to the probabilistic characteristics of renewable energy. Besides, the harmonics and impedance from other renewable energy sources are typically not taken carefully into account in the installation and design. However, this may bring an unknown harmonic instability into the multiple-power-sourced system and also make the analysis difficult due to the complexity of the grid network. This paper proposes a new model of a single-phase grid-connected renewable energy source using the Harmonic State Space modeling approach, which is able to identify such problems; the model can be extended to the analysis of multiple connected converters. The modeling results show the different harmonic …

  8. Automatic differentiation algorithms in model analysis

    NARCIS (Netherlands)

    Huiskes, M.J.

    2002-01-01

    Title: Automatic differentiation algorithms in model analysis
    Author: M.J. Huiskes
    Date: 19 March, 2002

    In this thesis automatic differentiation algorithms and derivative-based methods
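
    Although the record is truncated, the core idea of forward-mode automatic differentiation can be shown in a few lines. The dual-number class below is an illustrative toy, not taken from the thesis: it carries a value and its derivative through arithmetic so that derivatives come out exact, not approximated.

```python
class Dual:
    """Minimal forward-mode automatic differentiation via dual numbers."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2

y = f(Dual(2.0, 1.0))              # seed the input derivative with 1
print(y.val, y.der)                # 17.0 14.0
```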

  9. The selection pressures induced non-smooth infectious disease model and bifurcation analysis

    International Nuclear Information System (INIS)

    Qin, Wenjie; Tang, Sanyi

    2014-01-01

    Highlights: • A non-smooth infectious disease model describing selection pressure is developed. • The effect of selection pressure on infectious disease transmission is addressed. • The key factors related to the threshold value are determined. • The stabilities and bifurcations of the model are revealed in detail. • Strategies for the prevention of emerging infectious disease are proposed. - Abstract: Mathematical models can assist in the design of strategies to control emerging infectious disease. This paper deduces a non-smooth infectious disease model induced by selection pressures. Analysis of this model reveals rich dynamics, including local and global stability of equilibria and local sliding bifurcations. Model solutions ultimately stabilize at either one real equilibrium or the pseudo-equilibrium on the switching surface of the model, depending on a threshold value determined by some related parameters. Our main results show that reducing the threshold value to an appropriate level could contribute to the efficacy of prevention and treatment of emerging infectious disease, which indicates that selection pressures can be beneficial for preventing emerging infectious disease under medical resource limitations.
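
    A minimal sketch of what "non-smooth" means here: the right-hand side of the system switches when the infected fraction crosses a threshold, as in a Filippov-type model. The SIR structure, the switching rule and all parameter values below are invented for illustration and are not the model of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.4, 0.1   # transmission and recovery rates (hypothetical)
I_c = 0.05               # switching threshold on the infected fraction
eps = 0.15               # control effort applied only above the threshold

def sir(t, y):
    S, I = y
    # selection pressure: control switches on only when I exceeds I_c
    b = beta * (1 - eps) if I > I_c else beta
    return [-b * S * I, b * S * I - gamma * I]

sol = solve_ivp(sir, (0, 200), [0.99, 0.01])
print("final infected fraction:", sol.y[1, -1])
```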

  10. A proposal for a determination method of element division on an analytical model for finite element elastic waves propagation analysis

    International Nuclear Information System (INIS)

    Ishida, Hitoshi; Meshii, Toshiyuki

    2010-01-01

    This study proposes an element size selection method named the 'Impact-Meshing (IM) method' for finite element wave propagation analysis models, which is characterized by (1) determination of the element division of the model based on the strain energy in the whole model, and (2) a static analysis (a dynamic analysis in a single time step) with boundary conditions that give the maximum change of displacement in the time increment and the inertial (impact) force caused by the displacement change. In this paper, an example of the application of the IM method to a 3D ultrasonic wave propagation problem in an elastic solid is described. The example showed that the analysis results with a model determined by the IM method converged, and that the calculation time for determining the element subdivision was reduced to about 1/6, because the IM method did not require determining the element subdivision through a dynamic transient analysis with 100 time steps. (author)

  11. Simplified Models for Analysis and Design of the Control System Main Loops of CAREM Reactor

    International Nuclear Information System (INIS)

    Etchepareborda, Andres; Flury, Celso

    2000-01-01

    The target of this work is to present the models developed for the control analysis and design of the CAREM reactor's main control loops over a broad power range (between 40% and 100%). On one side, the main features of an analytic model programmed in MATLAB are shown; this model is based on fitting steady-state points at different power levels of CAREM's RETRAN model. On the other side, linear black-box models are shown that describe the perturbed behavior of the system at each power level; these models are identified from the temporal responses of CAREM's RETRAN model to perturbed input signals around the different steady power levels. The dynamics of these models are then verified by contrasting the temporal responses of the RETRAN model with those of the MATLAB model and the identified models at each steady power level, and by contrasting the frequency response of the linearized MATLAB model with that of the identified models. Both the MATLAB model and the identified models are good enough for the control analysis and design of the three main control loops. The MATLAB model shows some differences from the RETRAN model in the primary pressure output variable, which must be taken into account in the design of this control loop if the model is used. The aim of these models is to represent the dynamics of the plant satisfactorily for the subsequent control analysis and design of the control loops in a frequency range between 0.01 rad/s and 0.3 rad/s and a power range between 40% and 100%.

  12. Model parameter uncertainty analysis for an annual field-scale phosphorus loss model

    Science.gov (United States)

    Phosphorus (P) loss models are important tools for developing and evaluating conservation practices aimed at reducing P losses from agricultural fields. All P loss models, however, have an inherent amount of uncertainty associated with them. In this study, we conducted an uncertainty analysis with ...

  13. Supercritical kinetic analysis in simplified system of fuel debris using integral kinetic model

    International Nuclear Information System (INIS)

    Tuya, Delgersaikhan; Obara, Toru

    2016-01-01

    Highlights: • Kinetic analysis in a simplified weakly coupled fuel debris system was performed. • The integral kinetic model was used to simulate criticality accidents. • The fission power and released energy during the simulated accident were obtained. • The coupling between debris regions and its effect on the fission power was obtained. - Abstract: Preliminary prompt supercritical kinetic analyses in a simplified coupled system of fuel debris, designed to roughly resemble a melted core of a nuclear reactor, were performed using an integral kinetic model. The integral kinetic model, which can describe the region- and time-dependent fission rate in a coupled system of arbitrary geometry, was used because the fuel debris system is weakly coupled in terms of neutronics. The results revealed some important characteristics of coupled systems, such as the coupling between debris regions and the effect of the coupling on the fission rate and released energy in each debris region during the simulated criticality accident. In brief, this study showed that the integral kinetic model can be applied to supercritical kinetic analysis in fuel debris systems and that it can be a useful tool for investigating the effect of the coupling on the consequences of a supercritical accident.

  14. Finite element analysis and modeling of temperature distribution in turning of titanium alloys

    Directory of Open Access Journals (Sweden)

    Moola Mohan Reddy

    2018-04-01

    Full Text Available Titanium alloys (Ti-6Al-4V) have been widely used in aerospace and medical applications, and the demand is ever-growing due to their outstanding properties. In this paper, finite element modeling of the machinability of Ti-6Al-4V using cubic boron nitride and polycrystalline diamond tools in a dry turning environment was investigated. This research was carried out to generate mathematical models, at a 95% confidence level, for cutting force and temperature distribution as functions of cutting speed, feed rate and depth of cut. The Box-Behnken design of experiments was used as the Response Surface Model to generate combinations of cutting variables for modeling. Then, finite element simulation was performed using AdvantEdge®. The influence of each cutting parameter on the cutting responses was investigated using Analysis of Variance. The analysis shows that depth of cut is the most influential parameter on the resultant cutting force, whereas feed rate is the most influential parameter on cutting temperature. The effect of the cutting-edge radius was also investigated for both tools. This research would help to maximize tool life and to improve surface finish.

  15. Modeling and analysis of long term energy demands in residential sector of pakistan

    International Nuclear Information System (INIS)

    Rashid, T.; Sahir, M.H.

    2015-01-01

    The residential sector is the core energy demand sector in Pakistan. Currently, various techniques are being used worldwide to assess future energy demands, including integrated system modeling (ISM). The current study is therefore focused on the implementation of the ISM approach for future energy demand analysis of Pakistan's residential sector in terms of population growth, rapid urbanization, household size and type, and increase/decrease in GDP. A detailed business-as-usual (BAU) model is formulated in the TIMES energy modeling framework using different factors such as growth in future energy services, end-use technology characterization, and restricted fuel supplies. Additionally, the developed model is capable of comparing the projected energy demand under different scenarios, e.g. strong economy, weak economy and energy efficiency. The implementation of ISM proved to be a viable approach to predict the future energy demands of Pakistan's residential sector. Furthermore, the analysis shows that the energy consumption in the residential sector would be 46.5 Mtoe (Million Tonnes of Oil Equivalent) in 2040, compared to 23 Mtoe in the base year (2007), along with a 600% increase in electricity demand. The study further maps potential residential energy policies to meet the future demands. (author)

  16. Towards the generation of a parametric foot model using principal component analysis: A pilot study.

    Science.gov (United States)

    Scarton, Alessandra; Sawacha, Zimi; Cobelli, Claudio; Li, Xinshan

    2016-06-01

    Recent developments in patient-specific models, together with the increase in computational power, show their potential to provide more information on human pathophysiology. However, they are not yet successfully applied in a clinical setting. One of the main challenges is the time required for mesh creation, which is difficult to automate. The development of parametric models by means of Principal Component Analysis (PCA) represents an appealing solution. In this study PCA has been applied to the feet of a small cohort of diabetic and healthy subjects, in order to evaluate the possibility of developing parametric foot models, and to use them to identify variations and similarities between the two populations. Both the skin and the first metatarsal bones have been examined. Despite the reduced sample of subjects considered in the analysis, the results demonstrated that the method adopted herein constitutes a first step towards the realization of parametric foot models for biomechanical analysis. Furthermore, the study showed that the methodology can successfully describe features in the foot and evaluate differences in the shape of healthy and diabetic subjects. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
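
    A minimal sketch of the statistical core of such a parametric shape model: PCA of aligned coordinates via the SVD, with random placeholder data standing in for the registered foot surfaces. The synthesize helper is hypothetical; it shows how new shapes are generated as the mean plus weighted modes.

```python
import numpy as np

# shapes: (n_subjects, n_landmarks * 3) flattened coordinates, assumed
# already aligned (e.g. by Procrustes registration)
rng = np.random.default_rng(1)
shapes = rng.normal(size=(20, 300))            # placeholder for real foot data

mean_shape = shapes.mean(axis=0)
U, s, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
mode_std = s / np.sqrt(len(shapes) - 1)        # per-mode standard deviations

def synthesize(b):
    """New shape from coefficients b (in standard-deviation units)."""
    k = len(b)
    return mean_shape + (np.asarray(b) * mode_std[:k]) @ Vt[:k]

variant = synthesize([2.0, -1.0])              # +2 SD on mode 1, -1 SD on mode 2
```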

  17. Integrated dynamic modeling and management system mission analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, A.K.

    1994-12-28

    This document summarizes the mission analysis performed on the Integrated Dynamic Modeling and Management System (IDMMS). The IDMMS will be developed to provide the modeling and analysis capability required to understand the TWRS system behavior in terms of the identified TWRS performance measures. The IDMMS will be used to demonstrate in a verified and validated manner the satisfactory performance of the TWRS system configuration and assurance that the requirements have been satisfied.

  18. Integrated dynamic modeling and management system mission analysis

    International Nuclear Information System (INIS)

    Lee, A.K.

    1994-01-01

    This document summarizes the mission analysis performed on the Integrated Dynamic Modeling and Management System (IDMMS). The IDMMS will be developed to provide the modeling and analysis capability required to understand the TWRS system behavior in terms of the identified TWRS performance measures. The IDMMS will be used to demonstrate in a verified and validated manner the satisfactory performance of the TWRS system configuration and assurance that the requirements have been satisfied

  19. A hierarchical analysis of terrestrial ecosystem model Biome-BGC: Equilibrium analysis and model calibration

    Energy Technology Data Exchange (ETDEWEB)

    Thornton, Peter E [ORNL; Wang, Weile [ORNL; Law, Beverly E. [Oregon State University; Nemani, Ramakrishna R [NASA Ames Research Center

    2009-01-01

    The increasing complexity of ecosystem models represents a major difficulty in tuning model parameters and analyzing simulated results. To address this problem, this study develops a hierarchical scheme that simplifies the Biome-BGC model into three functionally cascaded tiers and analyzes them sequentially. The first-tier model focuses on leaf-level ecophysiological processes; it simulates evapotranspiration and photosynthesis with prescribed leaf area index (LAI). The restriction on LAI is then lifted in the following two model tiers, which analyze how carbon and nitrogen are cycled at the whole-plant level (the second tier) and in all litter/soil pools (the third tier) to dynamically support the prescribed canopy. In particular, this study analyzes the steady state of these two model tiers with a set of equilibrium equations that are derived from Biome-BGC algorithms and are based on the principle of mass balance. Instead of spinning up the model for thousands of climate years, these equations are able to estimate the carbon/nitrogen stocks and fluxes of the target (steady-state) ecosystem directly from the results obtained by the first-tier model. The model hierarchy is examined with model experiments at four AmeriFlux sites. The results indicate that the proposed scheme can effectively calibrate Biome-BGC to simulate observed fluxes of evapotranspiration and photosynthesis, and that the carbon/nitrogen stocks estimated by the equilibrium analysis approach are highly consistent with the results of model simulations. Therefore, the scheme developed in this study may serve as a practical guide to calibrate/analyze Biome-BGC; it also provides an efficient way to solve the problem of model spin-up, especially for applications over large regions. The same methodology may help analyze other similar ecosystem models as well.

  20. Radiographic analysis of odontogenic cysts showing displacement of the mandibular canal

    International Nuclear Information System (INIS)

    Cho, Bong Hae

    2003-01-01

    To assess the radiographic findings of odontogenic cysts showing displacement of the mandibular canal using computed tomographic (CT) and panoramic images. CT and panoramic images of 63 odontogenic cysts (27 dentigerous cysts, 16 odontogenic keratocysts, and 20 radicular cysts) were analyzed to evaluate the following parameters: the dimension and shape of the cysts, and the effect of the cysts on the mandibular canal and cortical plates. Of the 63 cysts examined in the study, 35 (55.6%) showed inferior displacement of the mandibular canal and 46 (73.0%) showed perforation of the canal. There were statistically significant differences between CT and panoramic images in depicting displacement and perforation of the mandibular canal. Cortical expansion was seen in 46 cases (73.0%) and cortical perforation in 23 cases (36.5%). The radicular cysts showed cortical expansion and perforation less frequently than the other cyst groups. Large cysts of the mandible should be evaluated with multiplanar CT images in order to detect mandibular canal and cortical bone involvement.

  1. Models of alien species richness show moderate predictive accuracy and poor transferability

    Directory of Open Access Journals (Sweden)

    César Capinha

    2018-06-01

    Full Text Available Robust predictions of alien species richness are useful to assess global biodiversity change. Nevertheless, the capacity to predict spatial patterns of alien species richness remains largely unassessed. Using 22 data sets of alien species richness from diverse taxonomic groups and covering various parts of the world, we evaluated whether different statistical models were able to provide useful predictions of absolute and relative alien species richness, as a function of explanatory variables representing geographical, environmental and socio-economic factors. Five state-of-the-art count data modelling techniques were used and compared: Poisson and negative binomial generalised linear models (GLMs), multivariate adaptive regression splines (MARS), random forests (RF) and boosted regression trees (BRT). We found that predictions of absolute alien species richness had low to moderate accuracy in the region where the models were developed and consistently poor accuracy in new regions. Predictions of relative richness performed better in both geographical settings, but still were not good. Flexible tree-ensemble techniques (RF and BRT) were shown to be significantly better at modelling alien species richness than parametric linear models (such as GLMs), despite the latter being more commonly applied for this purpose. Importantly, the poor spatial transferability of the models also warrants caution in assuming the generality of the relationships they identify, e.g. by applying projections under future scenario conditions. Ultimately, our results strongly suggest that the predictability of spatial variation in alien species richness is limited. The somewhat more robust ability to rank regions according to the number of aliens they have (i.e. relative richness) suggests that models of alien species richness may be useful for prioritising and comparing regions, but not for predicting exact species numbers.
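
    To make the model comparison concrete, here is a sketch on synthetic count data, with scikit-learn's Poisson GLM and random forest standing in for two of the five techniques. The data, predictors and settings are invented; it only illustrates the kind of head-to-head evaluation the study performs.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 5))   # stand-ins for geographic/socio-economic factors
y = rng.poisson(np.exp(0.8 * X[:, 0] + 0.5 * X[:, 1] ** 2))  # richness counts

for name, model in [("Poisson GLM", PoissonRegressor(max_iter=300)),
                    ("Random forest", RandomForestRegressor(n_estimators=200))]:
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean cross-validated R^2 = {score:.2f}")
```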

  2. Plectasin shows intracellular activity against Staphylococcus aureus in human THP-1 monocytes and in a mouse peritonitis model

    DEFF Research Database (Denmark)

    Brinch, Karoline Sidelmann; Sandberg, Anne; Baudoux, Pierre

    2009-01-01

    … was maintained (maximal relative efficacy [E(max)], 1.0- to 1.3-log reduction in CFU), even though efficacy was inferior to that of extracellular killing (E(max), >4.5-log CFU reduction). Animal studies included a novel use of the mouse peritonitis model, exploiting extra- and intracellular differentiation assays … concentration. These findings stress the importance of performing studies of extra- and intracellular activity, since these features cannot be predicted from traditional MIC and killing kinetic studies. Application of both the THP-1 and the mouse peritonitis models showed that the in vitro results were similar …

  3. Nonlinear analysis of an extended traffic flow model in ITS environment

    Energy Technology Data Exchange (ETDEWEB)

    Yu Lei [College of Automation, Northwestern Polytechnical University, Xi' an, Shaanxi 710072 (China)], E-mail: yuleijk@126.com; Shi Zhongke [College of Automation, Northwestern Polytechnical University, Xi' an, Shaanxi 710072 (China)

    2008-05-15

    An extended traffic flow model is proposed by introducing into the Newell-Whitham-type car-following model the relative velocities of an arbitrary number of cars that precede and that follow. The stability condition of this model is obtained by using linear stability theory. The results show that the stability of traffic flow is improved by taking into account the relative velocities of the cars ahead and behind. By applying nonlinear analysis, the modified Korteweg-de Vries (mKdV) equation is derived to describe the traffic behavior near the critical point. The kink-antikink soliton, a solution of the mKdV equation, is obtained to describe the traffic jams. The numerical simulation shows that traffic jams are suppressed efficiently by taking into account the relative velocities of the cars ahead and behind. The analytical results are consistent with the simulation ones.
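
    For reference, the mKdV equation arrived at in such reductive perturbation analyses is typically of the generic form below, together with its kink solution; the exact coefficients depend on the car-following model and are not reproduced here.

```latex
\partial_T R = \partial_X^3 R - \partial_X R^3,
\qquad
R(X,T) = \sqrt{c}\,\tanh\!\left(\sqrt{\tfrac{c}{2}}\,(X - cT)\right)
```

    where c is the propagation velocity of the kink-antikink front; substituting the tanh profile into the equation verifies the solution term by term, and in the traffic context the kink connects the free-flow and jammed headways.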

  4. Nonlinear analysis of an extended traffic flow model in ITS environment

    International Nuclear Information System (INIS)

    Yu Lei; Shi Zhongke

    2008-01-01

    An extended traffic flow model is proposed by introducing into the Newell-Whitham-type car-following model the relative velocities of an arbitrary number of cars that precede and that follow. The stability condition of this model is obtained by using linear stability theory. The results show that the stability of traffic flow is improved by taking into account the relative velocities of the cars ahead and behind. By applying nonlinear analysis, the modified Korteweg-de Vries (mKdV) equation is derived to describe the traffic behavior near the critical point. The kink-antikink soliton, a solution of the mKdV equation, is obtained to describe the traffic jams. The numerical simulation shows that traffic jams are suppressed efficiently by taking into account the relative velocities of the cars ahead and behind. The analytical results are consistent with the simulation ones.

  5. Container Throughput Forecasting Using Dynamic Factor Analysis and ARIMAX Model

    Directory of Open Access Journals (Sweden)

    Marko Intihar

    2017-11-01

    Full Text Available The paper examines the impact of the integration of macroeconomic indicators on the accuracy of a container throughput time series forecasting model. For this purpose, Dynamic factor analysis and an AutoRegressive Integrated Moving-Average model with eXogenous inputs (ARIMAX) are used. Both methodologies are integrated into a novel four-stage heuristic procedure. Firstly, dynamic factors are extracted from external macroeconomic indicators influencing the observed throughput. Secondly, a family of ARIMAX models of different orders is generated based on the derived factors. In the third stage, diagnostic and goodness-of-fit testing is applied, which includes statistical criteria such as fit performance, information criteria, and parsimony. Finally, the best model is heuristically selected and tested on real data from the Port of Koper. The results show that by incorporating macroeconomic indicators into the forecasting model, more accurate future throughput forecasts can be achieved. The model is also used to produce forecasts for the next four years, indicating more oscillatory behaviour in 2018-2020. Hence, care must be taken concerning any major investment decisions initiated by management. It is believed that the proposed model might be a useful reinforcement of the existing forecasting module in the observed port.
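
    As an illustration of the ARIMAX idea, an ARIMA model driven by exogenous factors, here is a sketch using statsmodels' SARIMAX on synthetic data. The series, the single macroeconomic factor and the (1,1,1) order are invented and do not reflect the paper's selected model.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(3)
n = 120                                          # ten years of monthly data
factor = np.cumsum(rng.normal(0.2, 1.0, n))      # hypothetical macro factor
teu = 50 + 0.8 * factor + rng.normal(0, 1.5, n)  # synthetic container throughput

fit = SARIMAX(teu, exog=factor, order=(1, 1, 1)).fit(disp=False)
# forecasting requires (assumed) future values of the exogenous factor
forecast = fit.forecast(steps=12, exog=factor[-12:])
print(forecast[:3])
```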

  6. Bayesian Sensitivity Analysis of a Cardiac Cell Model Using a Gaussian Process Emulator

    Science.gov (United States)

    Chang, Eugene T Y; Strong, Mark; Clayton, Richard H

    2015-01-01

    Models of electrical activity in cardiac cells have become important research tools as they can provide a quantitative description of detailed and integrative physiology. However, cardiac cell models have many parameters, and how uncertainties in these parameters affect the model output is difficult to assess without undertaking large numbers of model runs. In this study we show that a surrogate statistical model of a cardiac cell model (the Luo-Rudy 1991 model) can be built using Gaussian process (GP) emulators. Using this approach we examined how eight outputs describing the action potential shape and action potential duration restitution depend on six inputs, which we selected to be the maximum conductances in the Luo-Rudy 1991 model. We found that the GP emulators could be fitted to a small number of model runs, and behaved as would be expected based on the underlying physiology that the model represents. We have shown that an emulator approach is a powerful tool for uncertainty and sensitivity analysis in cardiac cell models. PMID:26114610
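
    A minimal sketch of the emulator idea: fit a GP to a few dozen runs of an expensive simulator, then query the cheap surrogate with uncertainty. A trivial analytic function stands in for the Luo-Rudy model, and the kernel choice and design are illustrative, not those of the study.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def cell_model(g):
    """Stand-in for an expensive simulator mapping conductances to an output."""
    return 300.0 - 40.0 * g[..., 0] + 25.0 * np.sin(3.0 * g[..., 1])

rng = np.random.default_rng(4)
G_train = rng.uniform(0, 1, size=(30, 2))     # a small design over 2 conductances
y_train = cell_model(G_train)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                              normalize_y=True).fit(G_train, y_train)

G_new = rng.uniform(0, 1, size=(5, 2))
mean, std = gp.predict(G_new, return_std=True)  # surrogate prediction + uncertainty
print(np.c_[mean, std])
```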

  7. Prior Sensitivity Analysis in Default Bayesian Structural Equation Modeling.

    Science.gov (United States)

    van Erp, Sara; Mulder, Joris; Oberski, Daniel L

    2017-11-27

    Bayesian structural equation modeling (BSEM) has recently gained popularity because it enables researchers to fit complex models and solve some of the issues often encountered in classical maximum likelihood estimation, such as nonconvergence and inadmissible solutions. An important component of any Bayesian analysis is the prior distribution of the unknown model parameters. Often, researchers rely on default priors, which are constructed in an automatic fashion without requiring substantive prior information. However, the prior can have a serious influence on the estimation of the model parameters, which affects the mean squared error, bias, coverage rates, and quantiles of the estimates. In this article, we investigate the performance of three different default priors: noninformative improper priors, vague proper priors, and empirical Bayes priors, with the latter being novel in the BSEM literature. Based on a simulation study, we find that these three default BSEM methods may perform very differently, especially with small samples. A careful prior sensitivity analysis is therefore needed when performing a default BSEM analysis. For this purpose, we provide a practical step-by-step guide for practitioners on conducting a prior sensitivity analysis in default BSEM. Our recommendations are illustrated using a well-known case study from the structural equation modeling literature, and all code for conducting the prior sensitivity analysis is available in the online supplemental materials. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  8. Global plastic models for computerized structural analysis

    International Nuclear Information System (INIS)

    Roche, R.L.; Hoffmann, A.

    1977-01-01

    In many types of structures, it is possible to use generalized stresses (such as membrane forces, bending moments, torsion moments...) to define a yield surface for a part of the structure. Analysis can be achieved by using Hill's principle and a hardening rule. The whole formulation is called a 'Global Plastic Model'. Two different global models are used in the CEASEMT system for structural analysis: one for shell analysis and the other for piping analysis (in the plastic or creep field). In shell analysis, the generalized stresses chosen are the membrane forces and bending (including torsion) moments. There is only one yield condition for a normal to the middle surface, and no integration along the thickness is required. In piping analysis, the chosen generalized stresses are the bending moments, torsional moment, hoop stress and tension stress. There is only one set of stresses for a cross section, and no integration over the cross-section area is needed. The associated strains are the axis curvature, torsion and uniform strains. The definition of the yield surface is the most important item. A practical way is to use a diagonal quadratic function of the stress components, but the coefficients depend on the shape of the pipe element, especially for curved segments. Indications are given on the yield functions used. Some examples of applications in structural analysis are added to the text.

  9. MMA, A Computer Code for Multi-Model Analysis

    Science.gov (United States)

    Poeter, Eileen P.; Hill, Mary C.

    2007-01-01

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations. Many applications of MMA will
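
    Although the record is cut off, the conversion from information criteria to model weights used in this kind of model averaging is short. The formula below is the standard Akaike-weight construction; the criterion values are hypothetical.

```python
import numpy as np

def model_weights(criteria):
    """Weights from information criteria (AIC/AICc/BIC/KIC treated alike)."""
    delta = np.asarray(criteria) - np.min(criteria)  # differences from best model
    w = np.exp(-0.5 * delta)
    return w / w.sum()                               # normalized model probabilities

aic = [210.3, 212.1, 218.9]          # hypothetical criteria for three models
w = model_weights(aic)
print(w)                             # use as weights for model-averaged predictions
```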

  10. Guidelines for system modeling: fault tree analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yoon Hwan; Yang, Joon Eon; Kang, Dae Il; Hwang, Mee Jeong

    2004-07-01

    This document, the guidelines for system modeling related to Fault Tree Analysis (FTA), is intended to provide analysts with guidelines for constructing fault trees at the level of capability category II of the ASME PRA standard. In particular, it provides the essential and basic guidelines and related contents to be used in support of revising the Ulchin 3 and 4 PSA model for the risk monitor within capability category II of the ASME PRA standard. Normally, the main objective of system analysis is to assess the reliability of the systems modeled in the Event Tree Analysis (ETA). A variety of analytical techniques can be used for system analysis; however, the FTA method is used in this procedures guide. FTA is the method used for representing the failure logic of plant systems deductively using AND, OR or NOT gates. The fault tree should reflect all possible failure modes that may contribute to the system unavailability. This should include contributions due to mechanical failures of the components, common cause failures (CCFs), human errors, and outages for testing and maintenance. This document identifies and describes the definitions and general procedures of FTA and the essential and basic guidelines for revising the fault trees. Accordingly, the guidelines will be capable of guiding the FTA to the level of capability category II of the ASME PRA standard.

  11. Guidelines for system modeling: fault tree analysis

    International Nuclear Information System (INIS)

    Lee, Yoon Hwan; Yang, Joon Eon; Kang, Dae Il; Hwang, Mee Jeong

    2004-07-01

    This document, the guidelines for system modeling related to Fault Tree Analysis (FTA), is intended to provide analysts with guidelines for constructing fault trees at the level of capability category II of the ASME PRA standard. In particular, it provides the essential and basic guidelines and related contents to be used in support of revising the Ulchin 3 and 4 PSA model for the risk monitor within capability category II of the ASME PRA standard. Normally, the main objective of system analysis is to assess the reliability of the systems modeled in the Event Tree Analysis (ETA). A variety of analytical techniques can be used for system analysis; however, the FTA method is used in this procedures guide. FTA is the method used for representing the failure logic of plant systems deductively using AND, OR or NOT gates. The fault tree should reflect all possible failure modes that may contribute to the system unavailability. This should include contributions due to mechanical failures of the components, common cause failures (CCFs), human errors, and outages for testing and maintenance. This document identifies and describes the definitions and general procedures of FTA and the essential and basic guidelines for revising the fault trees. Accordingly, the guidelines will be capable of guiding the FTA to the level of capability category II of the ASME PRA standard.
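
    A toy illustration of the failure logic described above: a TOP event expressed through AND/OR gates over basic events, with the exact unavailability computed by brute-force enumeration. The gate structure and probabilities are invented; real PSA fault trees are evaluated with dedicated codes, typically via minimal cut sets.

```python
from itertools import product

basic_events = {"A": 1e-2, "B": 1e-2, "C": 1e-4}  # hypothetical failure probabilities

def top(state):
    """TOP fails if (A AND B) OR C, mirroring an AND gate under an OR gate."""
    return (state["A"] and state["B"]) or state["C"]

q = 0.0
for bits in product([0, 1], repeat=len(basic_events)):
    state = dict(zip(basic_events, bits))
    if top(state):
        p = 1.0
        for event, failed in state.items():
            p *= basic_events[event] if failed else 1 - basic_events[event]
        q += p
print(f"TOP unavailability: {q:.6g}")   # P(A)P(B) + P(C) minus the overlap
```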

  12. Vortexlet models of flapping flexible wings show tuning for force production and control

    International Nuclear Information System (INIS)

    Mountcastle, A M; Daniel, T L

    2010-01-01

    Insect wings are compliant structures that experience deformations during flight. Such deformations have recently been shown to substantially affect induced flows, with appreciable consequences to flight forces. However, there are open questions related to the aerodynamic mechanisms underlying the performance benefits of wing deformation, as well as the extent to which such deformations are determined by the boundary conditions governing wing actuation together with mechanical properties of the wing itself. Here we explore aerodynamic performance parameters of compliant wings under periodic oscillations, subject to changes in phase between wing elevation and pitch, and magnitude and spatial pattern of wing flexural stiffness. We use a combination of computational structural mechanics models and a 2D computational fluid dynamics approach to ask how aerodynamic force production and control potential are affected by pitch/elevation phase and variations in wing flexural stiffness. Our results show that lift and thrust forces are highly sensitive to flexural stiffness distributions, with performance optima that lie in different phase regions. These results suggest a control strategy for both flying animals and engineering applications of micro-air vehicles.

  13. Interacting price model and fluctuation behavior analysis from Lempel–Ziv complexity and multi-scale weighted-permutation entropy

    Energy Technology Data Exchange (ETDEWEB)

    Li, Rui, E-mail: lirui1401@bjtu.edu.cn; Wang, Jun

    2016-01-08

    A financial price model is developed based on the voter interacting system in this work. The Lempel–Ziv complexity is introduced to analyze the complex behaviors of the stock market. Some stock market stylized facts, including fat tails, absence of autocorrelation and volatility clustering, are first investigated for the proposed price model. Then the complexity of the fluctuation behaviors of the real stock markets and the proposed price model is explored by Lempel–Ziv complexity (LZC) analysis and multi-scale weighted-permutation entropy (MWPE) analysis. A series of LZC analyses of the returns and the absolute returns of daily closing prices and moving average prices are performed. Moreover, the complexity of the returns, the absolute returns and their corresponding intrinsic mode functions (IMFs) derived from the empirical mode decomposition (EMD) is also investigated with MWPE. The numerical empirical study shows similar statistical and complex behaviors between the proposed price model and the real stock markets, which indicates that the proposed model is feasible to some extent. - Highlights: • A financial price dynamical model is developed based on the voter interacting system. • Lempel–Ziv complexity is first applied to investigate the stock market dynamical system. • MWPE is employed to explore the complex fluctuation behaviors of the stock market. • Empirical results show the feasibility of the proposed financial model.

  14. Interacting price model and fluctuation behavior analysis from Lempel–Ziv complexity and multi-scale weighted-permutation entropy

    International Nuclear Information System (INIS)

    Li, Rui; Wang, Jun

    2016-01-01

    A financial price model is developed based on the voter interacting system in this work. The Lempel–Ziv complexity is introduced to analyze the complex behaviors of the stock market. Some stock market stylized facts, including fat tails, absence of autocorrelation and volatility clustering, are first investigated for the proposed price model. Then the complexity of the fluctuation behaviors of the real stock markets and the proposed price model is explored by Lempel–Ziv complexity (LZC) analysis and multi-scale weighted-permutation entropy (MWPE) analysis. A series of LZC analyses of the returns and the absolute returns of daily closing prices and moving average prices are performed. Moreover, the complexity of the returns, the absolute returns and their corresponding intrinsic mode functions (IMFs) derived from the empirical mode decomposition (EMD) is also investigated with MWPE. The numerical empirical study shows similar statistical and complex behaviors between the proposed price model and the real stock markets, which indicates that the proposed model is feasible to some extent. - Highlights: • A financial price dynamical model is developed based on the voter interacting system. • Lempel–Ziv complexity is first applied to investigate the stock market dynamical system. • MWPE is employed to explore the complex fluctuation behaviors of the stock market. • Empirical results show the feasibility of the proposed financial model.
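
    For concreteness, a small sketch of a Lempel-Ziv-style complexity count on a binarized return series. This is an LZ78-style phrase count used as a simple stand-in for the LZC measure of the papers; the return values are invented.

```python
def lz_complexity(s):
    """LZ78-style phrase count of a symbol string (a simple complexity proxy)."""
    phrases, phrase, count = set(), "", 0
    for ch in s:
        phrase += ch
        if phrase not in phrases:        # a new phrase ends here
            phrases.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)  # count any trailing partial phrase

# binarize returns: 1 for an up move, 0 for a down move, then count phrases
returns = [0.4, -0.1, 0.2, 0.2, -0.3, 0.1, -0.2, 0.5]
symbols = "".join("1" if r > 0 else "0" for r in returns)
print(symbols, lz_complexity(symbols))   # more phrases = more complex dynamics
```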

  15. Conceptual models for waste tank mechanistic analysis

    International Nuclear Information System (INIS)

    Allemann, R.T.; Antoniak, Z.I.; Eyler, L.L.; Liljegren, L.M.; Roberts, J.S.

    1992-02-01

    Pacific Northwest Laboratory (PNL) is conducting a study for Westinghouse Hanford Company (Westinghouse Hanford), a contractor for the US Department of Energy (DOE). The purpose of the work is to study possible mechanisms and fluid dynamics contributing to the periodic release of gases from double-shell waste storage tanks at the Hanford Site in Richland, Washington. This interim report emphasizing the modeling work follows two other interim reports, Mechanistic Analysis of Double-Shell Tank Gas Release Progress Report -- November 1990 and Collection and Analysis of Existing Data for Waste Tank Mechanistic Analysis Progress Report -- December 1990, that emphasized data correlation and mechanisms. The approach in this study has been to assemble and compile data that are pertinent to the mechanisms, analyze the data, evaluate physical properties and parameters, evaluate hypothetical mechanisms, and develop mathematical models of mechanisms

  16. Pleiotropy analysis of quantitative traits at gene level by multivariate functional linear models.

    Science.gov (United States)

    Wang, Yifan; Liu, Aiyi; Mills, James L; Boehnke, Michael; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao; Wu, Colin O; Fan, Ruzong

    2015-05-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits, adjusting for covariates, for a unified analysis. Three types of approximate F-distribution tests based on the Pillai-Bartlett trace, the Hotelling-Lawley trace, and Wilks's Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than F-tests of univariate analysis and the optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than the F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models, which in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more associations than SKAT-O in the univariate case. © 2015 WILEY PERIODICALS, INC.

  17. Online Statistical Modeling (Regression Analysis) for Independent Responses

    Science.gov (United States)

    Made Tirta, I.; Anggraeni, Dian; Pandutama, Martinus

    2017-06-01

    Regression analysis (statistical modelling) is among the statistical methods most frequently needed in analyzing quantitative data, especially to model the relationship between response and explanatory variables. Nowadays, statistical models have been developed in various directions to model various types of complex relationships in data. A rich variety of advanced and recent statistical models is available mostly in open source software (one of them being R). However, these advanced statistical models are not very friendly to novice R users, since they are based on programming scripts or a command line interface. Our research aims to develop a web interface (based on R and Shiny) so that the most recent and advanced statistical models are readily available, accessible and applicable on the web. We have previously made an interface in the form of an e-tutorial for several modern and advanced statistical models in R, especially for independent responses (including linear models/LM, generalized linear models/GLM, generalized additive models/GAM and generalized additive models for location, scale and shape/GAMLSS). In this research we unified them in the form of data analysis, including models using computer-intensive statistics (bootstrap and Markov Chain Monte Carlo/MCMC). All are readily accessible on our online Virtual Statistics Laboratory. The web interface makes statistical modeling easier to apply and easier to compare across models in order to find the most appropriate model for the data.

  18. The stability of the extended model of hypothalamic-pituitary-adrenal axis examined by stoichiometric network analysis

    Science.gov (United States)

    Marković, V. M.; Čupić, Ž.; Ivanović, A.; Kolar-Anić, Lj.

    2011-12-01

    Stoichiometric network analysis (SNA) represents a powerful mathematical tool for the stability analysis of complex stoichiometric networks. Recently, an important improvement of the method has been made, according to which instability relations can be expressed entirely via reaction rates, instead of the thus-far-used current rates, which are in the general case undefined. This improved SNA methodology was applied to the determination of exact instability conditions of the extended model of the hypothalamic-pituitary-adrenal (HPA) axis, a neuroendocrinological system whose hormone concentrations exhibit complex oscillatory evolution. For the emergence of oscillations, the Hopf bifurcation condition was utilized. The instability relations predicted by SNA showed good correlation with numerical simulation data of the HPA axis model.

  19. Sensitivity analysis using the FRAPCON-1/EM: development of a calculation model for licensing

    International Nuclear Information System (INIS)

    Chapot, J.L.C.

    1985-01-01

    The FRAPCON-1/EM is a version of the FRAPCON-1 code which analyses fuel rod performance under normal operating conditions. This version yields conservative results and is used by the NRC in its licensing activities. A sensitivity analysis was made to determine the combination of models from the FRAPCON-1/EM which yields the most conservative results for a typical Angra-1 reactor fuel rod. The present analysis showed that this code can be used as a calculation tool for the licensing of the Angra-1 reload. (F.E.) [pt

  20. The economics of natural gas infrastructure investments. Theory and model-based analysis for Europe

    Energy Technology Data Exchange (ETDEWEB)

    Lochner, Stefan

    2012-07-01

    Changing supply structures, security of supply threats, and efforts to eliminate bottlenecks and increase competition in the European gas market potentially warrant infrastructure investments. However, which investments are actually efficient is unclear. From a theoretical perspective, concepts from other sectors regarding the estimation of congestion cost and efficient investment can be applied, with some extensions, to natural gas markets. Investigations in a simple analytical framework show that congestion does not necessarily imply that investment is efficient, and that there are multiple interdependencies between investments in different infrastructure elements (pipeline grid, gas storage, import terminals for liquefied natural gas (LNG)) which need to be considered in an applied analysis. Such interdependencies strengthen the case for a model-based analysis. An optimization model minimizing costs can illustrate the first-best solution with respect to investments in natural gas infrastructure; gas market characteristics such as temperature-dependent stochasticity of demand or the lumpiness of investments can be included. Scenario analyses help to show the effects of changing the underlying model assumptions. Hence, results are projections subject to data and model assumptions, not forecasts. However, as they depict the optimal, cost-minimizing outcome, the results provide a guideline to policymakers and regulators regarding the desirable market outcome. A stochastic mixed-integer dispatch and investment model for the European natural gas infrastructure is developed as an optimization model taking the theoretical interdependencies into account. It is based on an extensive infrastructure database including long-distance transmission pipelines, LNG terminals and gas storage sites with a high level of spatial granularity. It is parameterized with assumptions on supply and demand developments as well as empirically derived infrastructure extension costs.