WorldWideScience

Sample records for complexes model experiments

  1. Surface complexation modelling: Experiments on the sorption of nickel on quartz

    International Nuclear Information System (INIS)

    Puukko, E.; Hakanen, M.

    1995-10-01

    Assessing the safety of a final repository for nuclear wastes requires knowledge concerning the way in which the radionuclides released are retarded in the geosphere. The aim of the work is to acquire empirical knowledge of the methods by repeating the experiments on the sorption of nickel on quartz described in the reports published by the British Geological Survey (BGS). The experimental results were modelled with computer models at the Technical Research Centre of Finland (VTT Chemical Technology). The results showed that experimental knowledge of the sorption of Ni on quartz has been achieved by repeating the experiments of the BGS. Experiments made with the two quartz types, Min-U-Sil 5 (MUS) and Nilsiae (NLS), showed a difference in the sorption of Ni in the low ionic strength solution (0.001 M NaNO3). The sorption of Ni on MUS was higher than predicted by the Surface Complexation Model (SCM). The phenomenon was also observed by the BGS, and may be due to the different amounts of impurities in the MUS and in the NLS. In other respects, the results of the sorption experiments fitted quite well with those predicted by the SCM. (8 refs., 8 figs., 11 tabs.)
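
    For orientation, the mass-action core of such an SCM fit can be sketched in a few lines. This is a minimal single-site model (>SOH + Ni2+ <=> >SONi+ + H+) solved by bisection; it omits the electrostatic terms of a full SCM, and every constant below is invented for illustration, not taken from the BGS/VTT data.

```python
# Minimal single-site sorption sketch (no electrostatic correction).
# Reaction: >SOH + Ni2+ <=> >SONi+ + H+,  K = [>SONi][H+] / ([>SOH][Ni])
# All parameter values are illustrative, not fitted to the BGS/VTT data.

def sorbed_fraction(K, pH, site_total, ni_total):
    h = 10.0 ** (-pH)

    def residual(x):               # x = concentration of sorbed Ni, mol/L
        return K * (site_total - x) * (ni_total - x) - x * h

    lo, hi = 0.0, min(site_total, ni_total)
    for _ in range(100):           # bisection on the mass-action balance
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) / ni_total

for pH in (4.0, 6.0, 8.0):
    f = sorbed_fraction(K=1e-4, pH=pH, site_total=1e-4, ni_total=1e-6)
    print(f"pH {pH}: {100 * f:.1f}% Ni sorbed")
```

    The pH edge this produces (negligible sorption at pH 4, roughly half at pH 8) is the qualitative behaviour such models are fitted to.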

  2. Historical and idealized climate model experiments: an intercomparison of Earth system models of intermediate complexity

    DEFF Research Database (Denmark)

    Eby, M.; Weaver, A. J.; Alexander, K.

    2013-01-01

    Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE... and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20...

  3. Dynamics of vortices in complex wakes: Modeling, analysis, and experiments

    Science.gov (United States)

    Basu, Saikat

    The thesis develops singly-periodic mathematical models for complex laminar wakes which are formed behind vortex-shedding bluff bodies. These wake structures exhibit a variety of patterns as the bodies oscillate or are in close proximity of one another. The most well-known formation comprises two counter-rotating vortices in each shedding cycle and is popularly known as the von Karman vortex street. Of the more complex configurations, as a specific example, this thesis investigates one of the most commonly occurring wake arrangements, which consists of two pairs of vortices in each shedding period. The paired vortices are, in general, counter-rotating and belong to a more general definition of the 2P mode, which involves periodic release of four vortices into the flow. The 2P arrangement can, primarily, be sub-classed into two types: one with a symmetric orientation of the two vortex pairs about the streamwise direction in a periodic domain and the other in which the two vortex pairs per period are placed in a staggered geometry about the wake centerline. The thesis explores the governing dynamics of such wakes and characterizes the corresponding relative vortex motion. In general, for both the symmetric as well as the staggered four vortex periodic arrangements, the thesis develops two-dimensional potential flow models (consisting of an integrable Hamiltonian system of point vortices) that consider spatially periodic arrays of four vortices with their strengths being +/-Gamma1 and +/-Gamma2. Vortex formations observed in the experiments inspire the assumed spatial symmetry. The models demonstrate a number of dynamic modes that are classified using a bifurcation analysis of the phase space topology, consisting of level curves of the Hamiltonian. Despite the vortex strengths in each pair being unequal in magnitude, some initial conditions lead to relative equilibrium when the vortex configuration moves with invariant size and shape. The scaled comparisons of the
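
    The free-space analogue of such point-vortex models is easy to state in code. The sketch below integrates four vortices with strengths +/-Gamma1 and +/-Gamma2 using RK4 and the standard Biot-Savart sum; the thesis works with singly-periodic arrays, where the 1/(z_k - z_j) kernel is replaced by a cotangent kernel, which this sketch omits. Initial positions and strengths are illustrative.

```python
import numpy as np

gamma = np.array([1.0, -1.0, 0.6, -0.6])   # +/-Gamma1, +/-Gamma2 (illustrative)
z = np.array([0.0 + 0.5j, 0.0 - 0.5j, 0.3 + 0.2j, 0.3 - 0.2j])  # x + iy positions

def velocity(z):
    """Complex velocities dz/dt from the Biot-Savart sum over the other vortices."""
    dz = z[:, None] - z[None, :]
    with np.errstate(divide="ignore", invalid="ignore"):
        contrib = gamma[None, :] / dz          # diagonal is 0/0, masked next
    np.fill_diagonal(contrib, 0.0)             # no self-induction
    return np.conj(contrib.sum(axis=1) / (2j * np.pi))

dt = 0.01
for _ in range(2000):                          # classic RK4 stepping
    k1 = velocity(z)
    k2 = velocity(z + 0.5 * dt * k1)
    k3 = velocity(z + 0.5 * dt * k2)
    k4 = velocity(z + dt * k3)
    z = z + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

print("final vortex positions:", np.round(z, 3))
```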

  4. Complexity effects in choice experiments-based models

    NARCIS (Netherlands)

    Dellaert, B.G.C.; Donkers, B.; van Soest, A.H.O.

    2012-01-01

    Many firms rely on choice experiment–based models to evaluate future marketing actions under various market conditions. This research investigates choice complexity (i.e., number of alternatives, number of attributes, and utility similarity between the most attractive alternatives) and individual
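
    The workhorse behind most choice-experiment models is the multinomial logit, P(i) = exp(V_i) / sum_j exp(V_j); the record's complexity dimensions map directly onto its ingredients (rows = alternatives, columns = attributes, utility similarity = near-equal V_i). A minimal sketch with made-up attribute levels and taste weights:

```python
import numpy as np

X = np.array([[1.0, 0.2, 3.0],     # rows: alternatives, cols: attributes
              [0.8, 0.9, 2.0],
              [0.9, 0.8, 2.5]])
beta = np.array([1.5, 1.0, -0.4])  # part-worth utilities (illustrative)

v = X @ beta                       # deterministic utilities
p = np.exp(v - v.max())            # subtract max for numerical stability
p /= p.sum()
print("choice probabilities:", np.round(p, 3))
```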

  5. Complex terrain experiments in the New European Wind Atlas

    DEFF Research Database (Denmark)

    Mann, Jakob; Angelou, Nikolas; Arnqvist, Johan

    2017-01-01

    The New European Wind Atlas project will create a freely accessible wind atlas covering Europe and Turkey, develop the model chain to create the atlas and perform a series of experiments on flow in many different kinds of complex terrain to validate the models. This paper describes the experiment...

  6. Complex terrain experiments in the New European Wind Atlas

    Science.gov (United States)

    Angelou, N.; Callies, D.; Cantero, E.; Arroyo, R. Chávez; Courtney, M.; Cuxart, J.; Dellwik, E.; Gottschall, J.; Ivanell, S.; Kühn, P.; Lea, G.; Matos, J. C.; Palma, J. M. L. M.; Peña, A.; Rodrigo, J. Sanz; Söderberg, S.; Vasiljevic, N.; Rodrigues, C. Veiga

    2017-01-01

    The New European Wind Atlas project will create a freely accessible wind atlas covering Europe and Turkey, develop the model chain to create the atlas and perform a series of experiments on flow in many different kinds of complex terrain to validate the models. This paper describes the experiments of which some are nearly completed while others are in the planning stage. All experiments focus on the flow properties that are relevant for wind turbines, so the main focus is the mean flow and the turbulence at heights between 40 and 300 m. Also extreme winds, wind shear and veer, and diurnal and seasonal variations of the wind are of interest. Common to all the experiments is the use of Doppler lidar systems to supplement and in some cases replace completely meteorological towers. Many of the lidars will be equipped with scan heads that will allow for arbitrary scan patterns by several synchronized systems. Two pilot experiments, one in Portugal and one in Germany, show the value of using multiple synchronized, scanning lidar, both in terms of the accuracy of the measurements and the atmospheric physical processes that can be studied. The experimental data will be used for validation of atmospheric flow models and will by the end of the project be freely available. This article is part of the themed issue ‘Wind energy in complex terrains’. PMID:28265025

  7. Low frequency complex dielectric (conductivity) response of dilute clay suspensions: Modeling and experiments.

    Science.gov (United States)

    Hou, Chang-Yu; Feng, Ling; Seleznev, Nikita; Freed, Denise E

    2018-04-11

    In this work, we establish an effective medium model to describe the low-frequency complex dielectric (conductivity) dispersion of dilute clay suspensions. We use previously obtained low-frequency polarization coefficients for a charged oblate spheroidal particle immersed in an electrolyte as the building block for the Maxwell Garnett mixing formula to model the dilute clay suspension. The complex conductivity phase dispersion exhibits a near-resonance peak when the clay grains have a narrow size distribution. The peak frequency is associated with the size distribution as well as the shape of clay grains and is often referred to as the characteristic frequency. In contrast, if the size of the clay grains has a broad distribution, the phase peak is broadened and can disappear into the background of the canonical phase response of the brine. To benchmark our model, the low-frequency dispersion of the complex conductivity of dilute clay suspensions is measured using a four-point impedance measurement, which can be reliably calibrated in the frequency range between 0.1 Hz and 10 kHz. By using a minimal number of fitting parameters when reliable information is available as input for the model and carefully examining the issue of potential over-fitting, we found that our model can be used to fit the measured dispersion of the complex conductivity with reasonable parameters. The good match between the modeled and experimental complex conductivity dispersion allows us to argue that our simplified model captures the essential physics for describing the low-frequency dispersion of the complex conductivity of dilute clay suspensions. Copyright © 2018 Elsevier Inc. All rights reserved.
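
    The structure of the effective-medium step can be shown compactly. The sketch below uses the classical Maxwell Garnett formula for spherical inclusions with complex conductivities; the paper replaces the spherical polarization factor with coefficients for charged oblate spheroids, and all material values here are invented placeholders.

```python
import numpy as np

def maxwell_garnett(s_incl, s_host, f):
    """Effective complex conductivity, spherical-inclusion Maxwell Garnett."""
    d = s_incl - s_host
    return s_host * (s_incl + 2 * s_host + 2 * f * d) / (s_incl + 2 * s_host - f * d)

eps0 = 8.854e-12
for f_hz in np.logspace(-1, 4, 6):            # 0.1 Hz .. 10 kHz, as in the measurements
    omega = 2 * np.pi * f_hz
    s_host = 1.0 + 1j * omega * 80 * eps0     # brine: conductivity + i*omega*eps
    s_clay = 0.01 + 1j * omega * 5 * eps0     # illustrative grain response
    s_eff = maxwell_garnett(s_clay, s_host, f=0.05)
    print(f"{f_hz:10.1f} Hz   phase {1e3 * np.angle(s_eff):9.5f} mrad")
```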

  8. Applicability of surface complexation modelling in TVO's studies on sorption of radionuclides

    International Nuclear Information System (INIS)

    Carlsson, T.

    1994-03-01

    The report focuses on the possibility of applying surface complexation theories to the conditions at a potential repository site in Finland and of doing proper experimental work in order to determine necessary constants for the models. The report provides background information on: (1) what type of experiments should be carried out in order to produce data for surface complexation modelling of sorption phenomena under potential Finnish repository conditions, and (2) how to design and perform such experiments properly, in order to gather data, develop models or both. The report does not describe in detail how proper surface complexation experiments or modelling should be carried out. The work contains several examples of information that may be valuable in both modelling and experimental work. (51 refs., 6 figs., 4 tabs.)

  9. Synchronization Experiments With A Global Coupled Model of Intermediate Complexity

    Science.gov (United States)

    Selten, Frank; Hiemstra, Paul; Shen, Mao-Lin

    2013-04-01

    In the super modeling approach, an ensemble of imperfect models is connected through nudging terms that nudge the solution of each model to the solution of all other models in the ensemble. The goal is to obtain a synchronized state, through a proper choice of connection strengths, that closely tracks the trajectory of the true system. For the super modeling approach to be successful, the connections should be dense and strong enough for synchronization to occur. In this study we analyze the behavior of an ensemble of connected global atmosphere-ocean models of intermediate complexity. All atmosphere models are connected to the same ocean model through the surface fluxes of heat, water and momentum; the ocean is integrated using weighted-average surface fluxes. In particular we analyze the degree of synchronization between the atmosphere models and the characteristics of the ensemble mean solution. The results are interpreted using a low order atmosphere-ocean toy model.
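
    The nudging idea reduces to a few lines for toy dynamics. Below, two Lorenz-63 systems with deliberately different parameters stand in for the imperfect models and are connected by a term c*(other - self); with c large enough they synchronize. Everything here is illustrative, not the coupled atmosphere-ocean configuration of the study.

```python
import numpy as np

def lorenz(s, sigma, rho, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, c = 0.01, 2.0                        # c = connection (nudging) strength
a = np.array([1.0, 1.0, 20.0])           # model A state
b = np.array([-3.0, 2.0, 25.0])          # model B state

for _ in range(5000):                    # forward Euler keeps the sketch short
    da = lorenz(a, sigma=10.0, rho=28.0) + c * (b - a)
    db = lorenz(b, sigma=11.0, rho=26.0) + c * (a - b)   # imperfect twin
    a, b = a + dt * da, b + dt * db

print("synchronization error |a - b| =", np.linalg.norm(a - b))
```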

  10. A business process modeling experience in a complex information system re-engineering.

    Science.gov (United States)

    Bernonville, Stéphanie; Vantourout, Corinne; Fendeler, Geneviève; Beuscart, Régis

    2013-01-01

    This article aims to share a business process modeling experience in a re-engineering project of a medical records department in a 2,965-bed hospital. It presents the modeling strategy, an extract of the results and the feedback experience.

  11. Experiment planning using high-level component models at W7-X

    International Nuclear Information System (INIS)

    Lewerentz, Marc; Spring, Anett; Bluhm, Torsten; Heimann, Peter; Hennig, Christine; Kühner, Georg; Kroiss, Hugo; Krom, Johannes G.; Laqua, Heike; Maier, Josef; Riemann, Heike; Schacht, Jörg; Werner, Andreas; Zilker, Manfred

    2012-01-01

    Highlights: ► Introduction of models for an abstract description of fusion experiments. ► Component models support creating feasible experiment programs at planning time. ► Component models contain knowledge about physical and technical constraints. ► Generated views on models allow the presentation of crucial information. - Abstract: The superconducting stellarator Wendelstein 7-X (W7-X) is a fusion device capable of steady-state operation. Furthermore, W7-X is a very complex technical system. To cope with these requirements, a modular and strongly hierarchical component-based control and data acquisition system has been designed. The behavior of W7-X is characterized by thousands of technical parameters of the participating components. The intended sequential change of those parameters during an experiment is defined in an experiment program. Planning such an experiment program is a crucial and complex task. To reduce this complexity, an abstract, more physics-oriented high-level layer has been introduced earlier. The so-called high-level (physics) parameters are used to encapsulate technical details. This contribution will focus on the extension of this layer to a high-level component model. It completely describes the behavior of a component for a certain period of time. It allows defining not only simple value ranges but also complex dependencies between physics parameters. These can be dependencies within components, dependencies between components, or temporal dependencies. Component models can now be analyzed to generate various views of an experiment. A first implementation of such an analysis process is already finished. A graphical preview of a planned discharge can be generated from a chronological sequence of component models. This allows physicists to survey complex planned experiment programs at a glance.
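
    Conceptually, such a component model is a set of allowed parameter ranges plus dependency rules that can be checked at planning time. A toy sketch of that idea (the parameter names, limits and rule are hypothetical, not actual W7-X settings):

```python
# Hypothetical high-level component model: value ranges plus dependency rules.
component_model = {
    "ranges": {"heating_power_MW": (0.0, 10.0), "density_1e19m3": (0.1, 20.0)},
    "rules": [
        # dependency between physics parameters: high power needs enough density
        lambda p: p["heating_power_MW"] <= 2.0 or p["density_1e19m3"] >= 2.0,
    ],
}

def validate(model, params):
    errors = [f"{k}={params[k]} outside [{lo}, {hi}]"
              for k, (lo, hi) in model["ranges"].items()
              if not lo <= params[k] <= hi]
    errors += [f"rule {i} violated" for i, rule in enumerate(model["rules"])
               if not rule(params)]
    return errors

planned_segment = {"heating_power_MW": 5.0, "density_1e19m3": 0.5}
print(validate(component_model, planned_segment))    # -> ['rule 0 violated']
```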

  12. Prediction of homoprotein and heteroprotein complexes by protein docking and template-based modeling: A CASP-CAPRI experiment

    KAUST Repository

    Lensink, Marc F.; Velankar, Sameer; Kryshtafovych, Andriy; Huang, Shen-You; Schneidman-Duhovny, Dina; Sali, Andrej; Segura, Joan; Fernandez-Fuentes, Narcis; Viswanath, Shruthi; Elber, Ron; Grudinin, Sergei; Popov, Petr; Neveu, Emilie; Lee, Hasup; Baek, Minkyung; Park, Sangwoo; Heo, Lim; Rie Lee, Gyu; Seok, Chaok; Qin, Sanbo; Zhou, Huan-Xiang; Ritchie, David W.; Maigret, Bernard; Devignes, Marie-Dominique; Ghoorah, Anisah; Torchala, Mieczyslaw; Chaleil, Raphaël A.G.; Bates, Paul A.; Ben-Zeev, Efrat; Eisenstein, Miriam; Negi, Surendra S.; Weng, Zhiping; Vreven, Thom; Pierce, Brian G.; Borrman, Tyler M.; Yu, Jinchao; Ochsenbein, Françoise; Guerois, Raphaël; Vangone, Anna; Rodrigues, João P.G.L.M.; van Zundert, Gydo; Nellen, Mehdi; Xue, Li; Karaca, Ezgi; Melquiond, Adrien S.J.; Visscher, Koen; Kastritis, Panagiotis L.; Bonvin, Alexandre M.J.J.; Xu, Xianjin; Qiu, Liming; Yan, Chengfei; Li, Jilong; Ma, Zhiwei; Cheng, Jianlin; Zou, Xiaoqin; Shen, Yang; Peterson, Lenna X.; Kim, Hyung-Rae; Roy, Amit; Han, Xusi; Esquivel-Rodriguez, Juan; Kihara, Daisuke; Yu, Xiaofeng; Bruce, Neil J.; Fuller, Jonathan C.; Wade, Rebecca C.; Anishchenko, Ivan; Kundrotas, Petras J.; Vakser, Ilya A.; Imai, Kenichiro; Yamada, Kazunori; Oda, Toshiyuki; Nakamura, Tsukasa; Tomii, Kentaro; Pallara, Chiara; Romero-Durana, Miguel; Jiménez-García, Brian; Moal, Iain H.; Fernández-Recio, Juan; Joung, Jong Young; Kim, Jong Yun; Joo, Keehyoung; Lee, Jooyoung; Kozakov, Dima; Vajda, Sandor; Mottarella, Scott; Hall, David R.; Beglov, Dmitri; Mamonov, Artem; Xia, Bing; Bohnuud, Tanggis; Del Carpio, Carlos A.; Ichiishi, Eichiro; Marze, Nicholas; Kuroda, Daisuke; Roy Burman, Shourya S.; Gray, Jeffrey J.; Chermak, Edrisse; Cavallo, Luigi; Oliva, Romina; Tovchigrechko, Andrey; Wodak, Shoshana J.

    2016-01-01

    We present the results for CAPRI Round 30, the first joint CASP-CAPRI experiment, which brought together experts from the protein structure prediction and protein-protein docking communities. The Round comprised 25 targets from amongst those submitted for the CASP11 prediction experiment of 2014. The targets included mostly homodimers, a few homotetramers, and two heterodimers, and comprised protein chains that could readily be modeled using templates from the Protein Data Bank. On average 24 CAPRI groups and 7 CASP groups submitted docking predictions for each target, and 12 CAPRI groups per target participated in the CAPRI scoring experiment. In total more than 9500 models were assessed against the 3D structures of the corresponding target complexes. Results show that the prediction of homodimer assemblies by homology modeling techniques and docking calculations is quite successful for targets featuring large enough subunit interfaces to represent stable associations. Targets with ambiguous or inaccurate oligomeric state assignments, often featuring crystal contact-sized interfaces, represented a confounding factor. For those, a much poorer prediction performance was achieved, while nonetheless often providing helpful clues on the correct oligomeric state of the protein. The prediction performance was very poor for genuine tetrameric targets, where the inaccuracy of the homology-built subunit models and the smaller pair-wise interfaces severely limited the ability to derive the correct assembly mode. Our analysis also shows that docking procedures tend to perform better than standard homology modeling techniques and that highly accurate models of the protein components are not always required to identify their association modes with acceptable accuracy. © 2016 Wiley Periodicals, Inc.

  13. Prediction of homoprotein and heteroprotein complexes by protein docking and template-based modeling: A CASP-CAPRI experiment

    KAUST Repository

    Lensink, Marc F.

    2016-04-28

    We present the results for CAPRI Round 30, the first joint CASP-CAPRI experiment, which brought together experts from the protein structure prediction and protein-protein docking communities. The Round comprised 25 targets from amongst those submitted for the CASP11 prediction experiment of 2014. The targets included mostly homodimers, a few homotetramers, and two heterodimers, and comprised protein chains that could readily be modeled using templates from the Protein Data Bank. On average 24 CAPRI groups and 7 CASP groups submitted docking predictions for each target, and 12 CAPRI groups per target participated in the CAPRI scoring experiment. In total more than 9500 models were assessed against the 3D structures of the corresponding target complexes. Results show that the prediction of homodimer assemblies by homology modeling techniques and docking calculations is quite successful for targets featuring large enough subunit interfaces to represent stable associations. Targets with ambiguous or inaccurate oligomeric state assignments, often featuring crystal contact-sized interfaces, represented a confounding factor. For those, a much poorer prediction performance was achieved, while nonetheless often providing helpful clues on the correct oligomeric state of the protein. The prediction performance was very poor for genuine tetrameric targets, where the inaccuracy of the homology-built subunit models and the smaller pair-wise interfaces severely limited the ability to derive the correct assembly mode. Our analysis also shows that docking procedures tend to perform better than standard homology modeling techniques and that highly accurate models of the protein components are not always required to identify their association modes with acceptable accuracy. © 2016 Wiley Periodicals, Inc.

  14. Predictive modelling of complex agronomic and biological systems.

    Science.gov (United States)

    Keurentjes, Joost J B; Molenaar, Jaap; Zwaan, Bas J

    2013-09-01

    Biological systems are tremendously complex in their functioning and regulation. Studying the multifaceted behaviour and describing the performance of such complexity has challenged the scientific community for years. The reduction of real-world intricacy into simple descriptive models has therefore convinced many researchers of the usefulness of introducing mathematics into biological sciences. Predictive modelling takes such an approach another step further in that it takes advantage of existing knowledge to project the performance of a system in alternating scenarios. The ever growing amounts of available data generated by assessing biological systems at increasingly higher detail provide unique opportunities for future modelling and experiment design. Here we aim to provide an overview of the progress made in modelling over time and the currently prevalent approaches for iterative modelling cycles in modern biology. We will further argue for the importance of versatility in modelling approaches, including parameter estimation, model reduction and network reconstruction. Finally, we will discuss the difficulties in overcoming the mathematical interpretation of in vivo complexity and address some of the future challenges lying ahead. © 2013 John Wiley & Sons Ltd.

  15. A Practical Philosophy of Complex Climate Modelling

    Science.gov (United States)

    Schmidt, Gavin A.; Sherwood, Steven

    2014-01-01

    We give an overview of the practice of developing and using complex climate models, as seen from experiences in a major climate modelling center and through participation in the Coupled Model Intercomparison Project (CMIP). We discuss the construction and calibration of models; their evaluation, especially through use of out-of-sample tests; and their exploitation in multi-model ensembles to identify biases and make predictions. We stress that adequacy or utility of climate models is best assessed via their skill against more naive predictions. The framework we use for making inferences about reality using simulations is naturally Bayesian (in an informal sense), and has many points of contact with more familiar examples of scientific epistemology. While the use of complex simulations in science is a development that changes much in how science is done in practice, we argue that the concepts being applied fit very much into traditional practices of the scientific method, albeit those more often associated with laboratory work.
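
    The "skill against more naive predictions" criterion has a standard quantitative form: skill = 1 - MSE(model) / MSE(naive), where the naive forecast is, for example, climatology. A self-contained sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
obs = np.sin(np.linspace(0, 6 * np.pi, 200)) + 0.1 * rng.standard_normal(200)
sim = obs + 0.3 * rng.standard_normal(200)      # an imperfect 'simulation'

def mse(pred):
    return np.mean((pred - obs) ** 2)

naive = np.full_like(obs, obs.mean())           # climatological baseline
skill = 1.0 - mse(sim) / mse(naive)
print(f"skill = {skill:.2f}  (1 = perfect, <= 0 = no better than climatology)")
```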

  16. Modeling Users' Experiences with Interactive Systems

    CERN Document Server

    Karapanos, Evangelos

    2013-01-01

    Over the past decade the field of Human-Computer Interaction has evolved from the study of the usability of interactive products towards a more holistic understanding of how they may mediate desired human experiences. This book identifies the notion of diversity in users' experiences with interactive products and proposes methods and tools for modeling this along two levels: (a) interpersonal diversity in users' responses to early conceptual designs, and (b) the dynamics of users' experiences over time. The Repertory Grid Technique is proposed as an alternative to standardized psychometric scales for modeling interpersonal diversity in users' responses to early concepts in the design process, and new Multi-Dimensional Scaling procedures are introduced for modeling such complex quantitative data. iScale, a tool for the retrospective assessment of users' experiences over time, is proposed as an alternative to longitudinal field studies, and a semi-automated technique for the analysis of the elicited exper...

  17. Extracting Models in Single Molecule Experiments

    Science.gov (United States)

    Presse, Steve

    2013-03-01

    Single molecule experiments can now monitor the journey of a protein from its assembly near a ribosome to its proteolytic demise. Ideally, all single molecule data should be self-explanatory. However, data originating from single molecule experiments are particularly challenging to interpret on account of fluctuations and noise at such small scales. Realistically, basic understanding comes from models carefully extracted from the noisy data. Statistical mechanics, and maximum entropy in particular, provide a powerful framework for accomplishing this task in a principled fashion. Here I will discuss our work in extracting conformational memory from single molecule force spectroscopy experiments on large biomolecules. One clear advantage of this method is that we let the data tend towards the correct model; we do not fit the data. I will show that the dynamical model of the single molecule dynamics which emerges from this analysis is often more textured and complex than could otherwise come from fitting the data to a preconceived model.

  18. Experiment Design for Complex VTOL Aircraft with Distributed Propulsion and Tilt Wing

    Science.gov (United States)

    Murphy, Patrick C.; Landman, Drew

    2015-01-01

    Selected experimental results from a wind tunnel study of a subscale VTOL concept with distributed propulsion and tilt lifting surfaces are presented. The vehicle complexity and automated test facility were ideal for use with a randomized designed experiment. Design of Experiments and Response Surface Methods were invoked to produce run efficient, statistically rigorous regression models with minimized prediction error. Static tests were conducted at the NASA Langley 12-Foot Low-Speed Tunnel to model all six aerodynamic coefficients over a large flight envelope. This work supports investigations at NASA Langley in developing advanced configurations, simulations, and advanced control systems.
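
    At its core, the Response Surface step is a low-order polynomial fitted to designed test points by least squares. A sketch for one coefficient over two factors, with synthetic stand-ins for the wind tunnel data:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = rng.uniform(-10, 10, 40)      # angle of attack, deg (illustrative design)
tilt = rng.uniform(0, 90, 40)         # wing tilt, deg
CL = 0.08 * alpha + 0.01 * tilt - 1e-4 * tilt**2 + 0.02 * rng.standard_normal(40)

# full quadratic design matrix: 1, a, t, a^2, t^2, a*t
X = np.column_stack([np.ones_like(alpha), alpha, tilt,
                     alpha**2, tilt**2, alpha * tilt])
coef, *_ = np.linalg.lstsq(X, CL, rcond=None)

a, t = 4.0, 30.0                      # predict at an untested condition
x = np.array([1.0, a, t, a**2, t**2, a * t])
print("fitted coefficients:", np.round(coef, 4))
print("predicted CL at alpha=4, tilt=30:", round(float(x @ coef), 3))
```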

  19. Modeling of Complex Life Cycle Prediction Based on Cell Division

    Directory of Open Access Journals (Sweden)

    Fucheng Zhang

    2017-01-01

    Effective fault diagnosis and reasonable life expectancy prediction are of great significance and practical engineering value for the safety, reliability, and maintenance cost of equipment and working environments. Current equipment life prediction methods include prediction based on condition monitoring, combined forecasting models, and data-driven approaches, and most of them require large amounts of data. To address this issue, we propose learning from the mechanism of cell division in organisms. By studying the complex multifactor correlation life model, we have established a life prediction model of moderate complexity. In this paper, we model life prediction on cell division. Experiments show that our model can effectively simulate the state of cell division. Using this model as a reference, we will apply it to life prediction for complex equipment.

  20. Model complexity in carbon sequestration: A design of experiment and response surface uncertainty analysis

    Science.gov (United States)

    Zhang, Y.; Li, S.

    2014-12-01

    Geologic carbon sequestration (GCS) is proposed for the Nugget Sandstone in Moxa Arch, a regional saline aquifer with a large storage potential. For a proposed storage site, this study builds a suite of increasingly complex conceptual "geologic" model families, using subsets of the site characterization data: a homogeneous model family, a stationary petrophysical model family, a stationary facies model family with sub-facies petrophysical variability, and a non-stationary facies model family (with sub-facies variability) conditioned to soft data. These families, representing alternative conceptual site models built with increasing data, were simulated with the same CO2 injection test (50 years at 1/10 Mt per year), followed by 2950 years of monitoring. Using the Design of Experiment, an efficient sensitivity analysis (SA) is conducted for all families, systematically varying uncertain input parameters. Results are compared among the families to identify parameters that have 1st order impact on predicting the CO2 storage ratio (SR) at both end of injection and end of monitoring. At this site, geologic modeling factors do not significantly influence the short-term prediction of the storage ratio, although they become important over monitoring time, but only for those families where such factors are accounted for. Based on the SA, a response surface analysis is conducted to generate prediction envelopes of the storage ratio, which are compared among the families at both times. Results suggest a large uncertainty in the predicted storage ratio given the uncertainties in model parameters and modeling choices: SR varies from 5-60% (end of injection) to 18-100% (end of monitoring), although its variation among the model families is relatively minor. Moreover, long-term leakage risk is considered small at the proposed site. In the lowest-SR scenarios, all families predict gravity-stable supercritical CO2 migrating toward the bottom of the aquifer. In the highest
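
    The Design-of-Experiment sensitivity ranking can be illustrated in miniature with a two-level full factorial: run the model at all coded-level combinations and rank inputs by main effect. The response function below is a stand-in surrogate, not the actual storage-ratio model.

```python
import itertools
import numpy as np

def storage_ratio(perm, poro, kv_kh):         # illustrative surrogate response
    return 0.30 + 0.25 * perm + 0.10 * poro + 0.02 * kv_kh + 0.05 * perm * poro

factors = ["perm", "poro", "kv_kh"]
runs = list(itertools.product([-1, 1], repeat=len(factors)))   # 2^3 design
y = np.array([storage_ratio(*run) for run in runs])

for i, name in enumerate(factors):
    level = np.array([run[i] for run in runs])
    effect = y[level == 1].mean() - y[level == -1].mean()      # main effect
    print(f"main effect of {name}: {effect:+.3f}")
```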

  1. Complexity and formative experiences

    Directory of Open Access Journals (Sweden)

    Roque Strieder

    2017-12-01

    Contemporaneity is characterized by instability and diversity, calling into question the certainties and truths proposed by modernity. We recognize that the reality of things and phenomena takes effect as a set of events, interactions, retroactions and chances. This different frame extends the need to revise the epistemological foundations that sustain educational practices and give them sense. Complex thinking is an alternative that acts as a counterpoint to classical science, with its reductionist logic and compartmentalization of knowledge, and that answers contemporary epistemological and educational challenges. It aims to associate different areas and forms of knowledge without, however, merging them, distinguishing without separating the several disciplines and instances of reality. This theoretical study highlights the relevance of complex approaches in supporting formative experiences, because they are also able to produce complexity in reflections on educational issues. We conclude that formative possibilities grounded in complexity potentiate the resignification of the conception of the human and the understanding of its singularity in interdependence; the understanding that pedagogical and educational activity is a constant interrogation of the possibilities of knowing knowledge and reframing learning, far beyond knowing its functions and utilitarian purposes; and, as a formative possibility, they place us on the trail of responsibility, not as something occasional, but as present and indicative of the freedom to choose to stay or to go beyond.

  2. The task complexity experiment 2003/2004

    International Nuclear Information System (INIS)

    Laumann, Karin; Braarud, Per Oeivind; Svengren, Haakan

    2005-08-01

    The purpose of this experiment was to explore how additional tasks added to base case scenarios affected the operators' performance of the main tasks. In different scenario variants, these additional tasks were intended to cause high time pressure, high information load, and high masking. The experiment was run in Halden Man-Machine Laboratory's BWR simulator. Seven crews participated, each for one week. There were three operators in each crew. Five main types of scenarios and 20 scenario variants were run. The data from the experiment were analysed by completion time for important actions and by in-depth qualitative analyses of the crews' communications. The results showed that high time pressure reduced the performance of some crews in the scenarios. When a crew had problems in solving a task for which the time pressure was high, they had even more problems in solving other important tasks. High information load did not affect the operators' performance much, and in general the crews were very good at selecting the most important tasks in the scenarios. The scenarios that included both high time pressure and high information load resulted in a greater reduction in crew performance than the scenarios that included only high time pressure. The total number of tasks to perform and the information load to attend to seemed to affect the crews' performance. To solve the scenarios with high time pressure well, it was important to have good communication and good allocation of tasks within the crew. Furthermore, the results showed that scenarios with an added complex, masked task created problems for some crews when solving a relatively simple main task. Overall, the results confirmed that complicating but secondary tasks that are not normally taken into account when modelling the primary tasks in a PRA scenario can adversely affect the performance of the main tasks modelled in the PRA scenario. (Author)

  3. Reassessing Geophysical Models of the Bushveld Complex in 3D

    Science.gov (United States)

    Cole, J.; Webb, S. J.; Finn, C.

    2012-12-01

    Conceptual geophysical models of the Bushveld Igneous Complex show three possible geometries for its mafic component: 1) Separate intrusions with vertical feeders for the eastern and western lobes (Cousins, 1959) 2) Separate dipping sheets for the two lobes (Du Plessis and Kleywegt, 1987) 3) A single saucer-shaped unit connected at depth in the central part between the two lobes (Cawthorn et al, 1998) Model three incorporates isostatic adjustment of the crust in response to the weight of the dense mafic material. The model was corroborated by results of a broadband seismic array over southern Africa, known as the Southern African Seismic Experiment (SASE) (Nguuri, et al, 2001; Webb et al, 2004). This new information about the crustal thickness only became available in the last decade and could not be considered in the earlier models. Nevertheless, there is still on-going debate as to which model is correct. All of the models published up to now have been done in 2 or 2.5 dimensions. This is not well suited to modelling the complex geometry of the Bushveld intrusion. 3D modelling takes into account effects of variations in geometry and geophysical properties of lithologies in a full three dimensional sense and therefore affects the shape and amplitude of calculated fields. The main question is how the new knowledge of the increased crustal thickness, as well as the complexity of the Bushveld Complex, will impact on the gravity fields calculated for the existing conceptual models, when modelling in 3D. The three published geophysical models were remodelled using full 3D potential field modelling software, and including crustal thickness obtained from the SASE. The aim was not to construct very detailed models, but to test the existing conceptual models in an equally conceptual way. Firstly a specific 2D model was recreated in 3D, without crustal thickening, to establish the difference between 2D and 3D results. Then the thicker crust was added. Including the less
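
    The forward step all three conceptual models rely on is, in its most reduced form, computing the gravity field of a buried body and comparing it with observation. A sketch for the simplest body, a buried sphere; real 3D modelling replaces the sphere with meshed polyhedra, and all values below are illustrative:

```python
import numpy as np

# Vertical gravity of a buried sphere: gz(x) = G * M * z / (x^2 + z^2)^(3/2)
G = 6.674e-11                       # m^3 kg^-1 s^-2
drho = 300.0                        # density contrast, kg/m^3 (illustrative)
R, depth = 5e3, 15e3                # sphere radius and depth, m
M = 4.0 / 3.0 * np.pi * R**3 * drho

for x in np.linspace(-50e3, 50e3, 5):          # profile across the body
    gz_mgal = 1e5 * G * M * depth / (x**2 + depth**2) ** 1.5   # 1 mGal = 1e-5 m/s^2
    print(f"x = {x / 1e3:6.1f} km   gz = {gz_mgal:.3f} mGal")
```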

  4. Computations, Complexity, Experiments, and the World Outside Physics

    International Nuclear Information System (INIS)

    Kadanoff, L.P

    2009-01-01

    Computer Models in the Sciences and Social Sciences. 1. Simulation and Prediction in Complex Systems: the Good, the Bad and the Awful. This lecture deals with the history of large-scale computer modeling, mostly in the context of the U.S. Department of Energy's sponsorship of modeling for weapons development and innovation in energy sources. 2. Complexity: Making a Splash-Breaking a Neck - The Making of Complexity in Physical Systems. For ages thinkers have been asking how complexity arises. The laws of physics are very simple. How come we are so complex? This lecture tries to approach this question by asking how complexity arises in physical fluids. 3. Forrester et al.: Social and Biological Model-Making. The partial collapse of the world's economy has raised the question of whether we could improve the performance of economic and social systems by a major effort at creating understanding via large-scale computer models. (author)

  5. System Testability Analysis for Complex Electronic Devices Based on Multisignal Model

    International Nuclear Information System (INIS)

    Long, B; Tian, S L; Huang, J G

    2006-01-01

    It is necessary to consider system testability problems for electronic devices during their early design phase, because modern electronic devices are becoming smaller and more highly integrated while their function and structure grow more complex. The multisignal model, which combines the advantages of the structure model and the dependency model, is used to describe the fault dependency relationships of complex electronic devices, and the main testability indexes (including the optimal test program, fault detection rate, fault isolation rate, etc.) for evaluating testability, together with the corresponding algorithms, are given. The system testability analysis process is illustrated for a USB-GPIB interface circuit with the TEAMS toolbox. The experimental results show that the modelling method is simple, computation is rapid, and the method significantly improves diagnostic capability for complex electronic devices.
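
    The testability indexes mentioned here are computed from a fault-test dependency matrix. A minimal sketch (the matrix and fault rates are invented, not the USB-GPIB circuit model):

```python
import numpy as np

D = np.array([[1, 0, 1],            # rows: faults, cols: tests
              [1, 1, 0],            # D[i, j] = 1 if test j detects fault i
              [0, 1, 0],
              [0, 0, 0]])           # fault 3 is undetectable
rates = np.array([0.4, 0.3, 0.2, 0.1])   # relative fault occurrence rates

detected = D.any(axis=1)
fdr = rates[detected].sum() / rates.sum()            # fault detection rate

# a fault is isolatable if its test signature is unique among detected faults
sigs = [tuple(row) for row in D]
isolated = detected & np.array([sigs.count(s) == 1 for s in sigs])
fir = rates[isolated].sum() / rates[detected].sum()  # fault isolation rate

print(f"FDR = {fdr:.2f}, FIR = {fir:.2f}")
```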

  6. Complexity in Climate Change Manipulation Experiments

    DEFF Research Database (Denmark)

    Kreyling, Juergen; Beier, Claus

    2014-01-01

    Climate change goes beyond gradual changes in mean conditions. It involves increased variability in climatic drivers and increased frequency and intensity of extreme events. Climate manipulation experiments are one major tool to explore the ecological impacts of climate change. Until now, precipitation experiments have dealt with temporal variability or extreme events, such as drought, resulting in a multitude of approaches and scenarios with limited comparability among studies. Temperature manipulations have mainly been focused only on warming, resulting in better comparability among studies... variability in temperature are ecologically important. Embracing complexity in future climate change experiments in general is therefore crucial.

  7. Numerical Modeling of Fluid-Structure Interaction with Rheologically Complex Fluids

    OpenAIRE

    Chen, Xingyuan

    2014-01-01

    In the present work the interaction between rheologically complex fluids and elastic solids is studied by means of numerical modeling. The investigated complex fluids are non-Newtonian viscoelastic fluids. The fluid-structure interaction (FSI) of this kind is frequently encountered in injection molding, food processing, pharmaceutical engineering and biomedicine. The investigation via experiments is costly, difficult or in some cases, even impossible. Therefore, research is increasingly aided...

  8. Surface complexation modelling applied to the sorption of nickel on silica

    International Nuclear Information System (INIS)

    Olin, M.

    1995-10-01

    The modelling, based on a mechanistic approach, of a sorption experiment is presented in the report. The system chosen for the experiments (nickel + silica) is modelled using literature values for some parameters, the remainder being fitted to existing experimental results. All calculations are performed with HYDRAQL, a model designed especially for surface complexation modelling. Almost all the calculations use the Triple-Layer Model (TLM) approach, which appeared to be sufficiently flexible for the silica system. The report includes a short description of mechanistic sorption models, input data, experimental results and modelling results (mostly graphical presentations). (13 refs., 40 figs., 4 tabs.)

  9. Modeling Complex Time Limits

    Directory of Open Access Journals (Sweden)

    Oleg Svatos

    2013-01-01

    In this paper we analyze the complexity of the time limits found especially in regulated processes of public administration. First we review the most popular process modeling languages. An example scenario based on current Czech legislation is defined and then captured in the discussed process modeling languages. The analysis shows that contemporary process modeling languages support the capture of time limits only partially. This causes trouble for analysts and unnecessary complexity in the models. Given these unsatisfying results of the contemporary process modeling languages, we analyze the complexity of the time limits in greater detail and outline the lifecycles of a time limit using the multiple dynamic generalizations pattern. As an alternative to the popular process modeling languages, we present the PSD process modeling language, which supports the defined lifecycles of a time limit natively and therefore allows keeping the models simple and easy to understand.

  10. Modeling Complex Systems

    CERN Document Server

    Boccara, Nino

    2010-01-01

    Modeling Complex Systems, 2nd Edition, explores the process of modeling complex systems, providing examples from such diverse fields as ecology, epidemiology, sociology, seismology, and economics. It illustrates how models of complex systems are built and provides indispensable mathematical tools for studying their dynamics. This vital introductory text is useful for advanced undergraduate students in various scientific disciplines, and serves as an important reference book for graduate students and young researchers. This enhanced second edition includes: recent research results and bibliographic references; extra footnotes which provide biographical information on cited scientists who have made significant contributions to the field; new and improved worked-out examples to aid a student's comprehension of the content; and exercises to challenge the reader and complement the material. Nino Boccara is also the author of Essentials of Mathematica: With Applications to Mathematics and Physics (Springer, 2007).

  11. Coherent operation of detector systems and their readout electronics in a complex experiment control environment

    Energy Technology Data Exchange (ETDEWEB)

    Koestner, Stefan [CERN (Switzerland)], E-mail: koestner@mpi-halle.mpg.de

    2009-09-11

    With the increasing size and degree of complexity of today's experiments in high energy physics, the amount of work and complexity required to integrate a complete subdetector into an experiment control system is often underestimated. We report here on the layered software structure and protocols used by the LHCb experiment to control its detectors and readout boards. The experiment control system of LHCb is based on the commercial SCADA system PVSS II. Readout boards which are outside the radiation area are accessed via embedded credit-card-sized PCs which are connected to a large local area network. The SPECS protocol is used for control of the front-end electronics. Finite state machines are introduced to facilitate the control of a large number of electronic devices and to model the whole experiment at the level of an expert system.

  12. Design of experiments for identification of complex biochemical systems with applications to mitochondrial bioenergetics.

    Science.gov (United States)

    Vinnakota, Kalyan C; Beard, Daniel A; Dash, Ranjan K

    2009-01-01

    Identification of a complex biochemical system model requires appropriate experimental data. Models constructed on the basis of data from the literature often contain parameters that are not identifiable with high sensitivity and therefore require additional experimental data to identify those parameters. Here we report the application of a local sensitivity analysis to design experiments that will improve the identifiability of previously unidentifiable model parameters in a model of mitochondrial oxidative phosphorylation and the tricarboxylic acid cycle. Experiments were designed based on measurable biochemical reactants in a dilute suspension of purified cardiac mitochondria with experimentally feasible perturbations to this system. Experimental perturbations and variables yielding the most number of parameters above a 5% sensitivity level are presented and discussed.
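
    The selection criterion described here can be sketched with finite differences: perturb each parameter, form the normalized sensitivity S = (p/y) * dy/dp for each measurable output, and flag parameters whose largest |S| clears the 5% level. The three-parameter 'model' below is a stand-in, not the oxidative phosphorylation model itself.

```python
import numpy as np

def model(p):                          # toy observables vs. parameters
    k1, k2, k3 = p
    return np.array([k1 * k2 / (1.0 + k3), k2 + 0.01 * k3])

p0 = np.array([2.0, 0.5, 10.0])
y0 = model(p0)

for j, name in enumerate(["k1", "k2", "k3"]):
    dp = np.zeros_like(p0)
    dp[j] = 1e-6 * p0[j]               # small relative perturbation
    S = (model(p0 + dp) - y0) / dp[j] * (p0[j] / y0)   # normalized sensitivity
    verdict = "identifiable" if np.max(np.abs(S)) > 0.05 else "needs new data"
    print(f"{name}: S = {np.round(S, 3)} -> {verdict}")
```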

  13. Modeling Complex Systems

    International Nuclear Information System (INIS)

    Schreckenberg, M

    2004-01-01

    This book by Nino Boccara presents a compilation of model systems commonly termed 'complex'. It starts with a definition of the systems under consideration and how to build up a model to describe the complex dynamics. The subsequent chapters are devoted to various categories of mean-field type models (differential and recurrence equations, chaos) and of agent-based models (cellular automata, networks and power-law distributions). Each chapter is supplemented by a number of exercises and their solutions. The table of contents looks a little arbitrary, but the author took the most prominent model systems investigated over the years (and up until now there has been no unified theory covering the various aspects of complex dynamics). The model systems are explained by looking at a number of applications in various fields. The book is written as a textbook for interested students as well as serving as a comprehensive reference for experts. It is an ideal source for topics to be presented in a lecture on dynamics of complex systems. This is the first book on this 'wide' topic and I have long awaited such a book (in fact I planned to write it myself but this is much better than I could ever have written it!). Only section 6 on cellular automata is a little too limited to the author's point of view, and one would have expected more about the famous Domany-Kinzel model (and more accurate citation!). In my opinion this is one of the best textbooks published during the last decade, and even experts can learn a lot from it. Hopefully there will be an updated edition after, say, five years, since this field is growing so quickly. The price is too high for students but this, unfortunately, is the normal case today. Nevertheless I think it will be a great success! (book review)

  14. Complexities in barrier island response to sea level rise: Insights from numerical model experiments, North Carolina Outer Banks

    Science.gov (United States)

    Moore, Laura J.; List, Jeffrey H.; Williams, S. Jeffress; Stolper, David

    2010-09-01

    Using a morphological-behavior model to conduct sensitivity experiments, we investigate the sea level rise response of a complex coastal environment to changes in a variety of factors. Experiments reveal that substrate composition, followed in rank order by substrate slope, sea level rise rate, and sediment supply rate, are the most important factors in determining barrier island response to sea level rise. We find that geomorphic threshold crossing, defined as a change in state (e.g., from landward migrating to drowning) that is irreversible over decadal to millennial time scales, is most likely to occur in muddy coastal systems where the combination of substrate composition, depth-dependent limitations on shoreface response rates, and substrate erodibility may prevent sand from being liberated rapidly enough, or in sufficient quantity, to maintain a subaerial barrier. Analyses indicate that factors affecting sediment availability such as low substrate sand proportions and high sediment loss rates cause a barrier to migrate landward along a trajectory having a lower slope than average barrier island slope, thereby defining an "effective" barrier island slope. Other factors being equal, such barriers will tend to be smaller and associated with a more deeply incised shoreface, thereby requiring less migration per sea level rise increment to liberate sufficient sand to maintain subaerial exposure than larger, less incised barriers. As a result, the evolution of larger/less incised barriers is more likely to be limited by shoreface erosion rates or substrate erodibility making them more prone to disintegration related to increasing sea level rise rates than smaller/more incised barriers. Thus, the small/deeply incised North Carolina barriers are likely to persist in the near term (although their long-term fate is less certain because of the low substrate slopes that will soon be encountered). In aggregate, results point to the importance of system history (e

  15. Complexities in barrier island response to sea level rise: Insights from numerical model experiments, North Carolina Outer Banks

    Science.gov (United States)

    Moore, Laura J.; List, Jeffrey H.; Williams, S. Jeffress; Stolper, David

    2010-01-01

    Using a morphological-behavior model to conduct sensitivity experiments, we investigate the sea level rise response of a complex coastal environment to changes in a variety of factors. Experiments reveal that substrate composition, followed in rank order by substrate slope, sea level rise rate, and sediment supply rate, are the most important factors in determining barrier island response to sea level rise. We find that geomorphic threshold crossing, defined as a change in state (e.g., from landward migrating to drowning) that is irreversible over decadal to millennial time scales, is most likely to occur in muddy coastal systems where the combination of substrate composition, depth-dependent limitations on shoreface response rates, and substrate erodibility may prevent sand from being liberated rapidly enough, or in sufficient quantity, to maintain a subaerial barrier. Analyses indicate that factors affecting sediment availability such as low substrate sand proportions and high sediment loss rates cause a barrier to migrate landward along a trajectory having a lower slope than average barrier island slope, thereby defining an “effective” barrier island slope. Other factors being equal, such barriers will tend to be smaller and associated with a more deeply incised shoreface, thereby requiring less migration per sea level rise increment to liberate sufficient sand to maintain subaerial exposure than larger, less incised barriers. As a result, the evolution of larger/less incised barriers is more likely to be limited by shoreface erosion rates or substrate erodibility making them more prone to disintegration related to increasing sea level rise rates than smaller/more incised barriers. Thus, the small/deeply incised North Carolina barriers are likely to persist in the near term (although their long-term fate is less certain because of the low substrate slopes that will soon be encountered). In aggregate, results point to the importance of system history (e

  16. Adolescents' experience of complex persistent pain.

    Science.gov (United States)

    Sørensen, Kari; Christiansen, Bjørg

    2017-04-01

    Persistent (chronic) pain is a common phenomenon in adolescents. When young people are referred to a pain clinic, they usually have amplified pain signals, with pain syndromes of unconfirmed etiology, such as fibromyalgia and complex regional pain syndrome (CRPS). Pain is complex and seems to be related to a combination of illness, injury, psychological distress, and environmental factors. These young people are found to have higher levels of distress, anxiety, sleep disturbance, and lower mood than their peers and may be in danger of entering adulthood with mental and physical problems. In order to understand the complexity of persistent pain in adolescents, there seems to be a need for further qualitative research into their lived experiences. The aim of this study was to explore adolescents' experiences of complex persistent pain and its impact on everyday life. The study has an exploratory design with individual in-depth interviews with six youths aged 12-19, recruited from a pain clinic at a main referral hospital in Norway. A narrative approach allowed the informants to give voice to their experiences concerning complex persistent pain. A hermeneutic analysis was used, where the research question was the basis for a reflective interpretation. Three main themes were identified: (1) a life with pain and unpleasant bodily expressions; (2) an altered emotional wellbeing; and (3) the struggle to keep up with everyday life. The pain was experienced as extremely strong, emerging from a minor injury or without any obvious causation, and not always being recognised by healthcare providers. The pain intensity increased as the suffering got worse, and the sensation was hard to describe with words. Parts of their body could change in appearance, and some described having pain-attacks or fainting. The feeling of anxiety was strongly connected to the pain. Despair and uncertainty contributed to physical disability, major sleep problems, school absence, and withdrawal from

  17. Stern-Gerlach Experiments and Complex Numbers in Quantum Physics

    OpenAIRE

    Sivakumar, S.

    2012-01-01

    It is often stated that complex numbers are essential in quantum theory. In this article, the need for complex numbers in quantum theory is motivated using the results of tandem Stern-Gerlach experiments.
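
    The core of the argument can be checked numerically: the spin states along x and y are both equal-weight superpositions of the z basis states, so only the relative complex phase (the i in |+y>) distinguishes them, and tandem Stern-Gerlach probabilities come out right only with that phase. A small sketch:

```python
import numpy as np

up_z, dn_z = np.array([1, 0], complex), np.array([0, 1], complex)
plus_x = (up_z + dn_z) / np.sqrt(2)
plus_y = (up_z + 1j * dn_z) / np.sqrt(2)     # the i is essential

def prob(a, b):
    return abs(np.vdot(a, b)) ** 2           # Born rule |<a|b>|^2

print("P(+x | +z):", prob(plus_x, up_z))     # 0.5
print("P(+y | +z):", prob(plus_y, up_z))     # 0.5
print("P(+y | +x):", prob(plus_y, plus_x))   # 0.5, only because of the phase
# With real amplitudes an equal-weight state would coincide with |+x> or
# |-x>, forcing the last probability to 0 or 1, contrary to experiment.
```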

  18. Complexity and agent-based modelling in urban research

    DEFF Research Database (Denmark)

    Fertner, Christian

    Urbanisation processes are results of a broad variety of actors or actor groups and their behaviour and decisions, based on different experiences, knowledge, resources, values etc. The decisions made are often on a micro/individual level but result in macro/collective behaviour. In urban research... influence on the bigger system. Traditional scientific methods or theories often tried to simplify, not accounting for the complex relations of actors and decision-making. The introduction of computers in simulation made new approaches in modelling possible, as for example agent-based modelling (ABM), dealing...
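
    How micro-level decisions produce macro-level patterns is easy to demonstrate with a toy ABM. The 1-D Schelling-style sketch below (not a calibrated urban model) lets mildly intolerant agents relocate until like-typed clusters emerge:

```python
import random

random.seed(0)
grid = [random.choice("AB.") for _ in range(60)]   # two groups plus vacancies '.'

def unhappy(i):
    if grid[i] == ".":
        return False
    nbrs = [grid[j] for j in (i - 1, i + 1)
            if 0 <= j < len(grid) and grid[j] != "."]
    return bool(nbrs) and sum(n == grid[i] for n in nbrs) / len(nbrs) < 0.5

print("before:", "".join(grid))
for _ in range(2000):                              # random serial updating
    i = random.randrange(len(grid))
    if unhappy(i):
        j = random.choice([k for k, c in enumerate(grid) if c == "."])
        grid[j], grid[i] = grid[i], "."            # move to a random vacancy
print("after: ", "".join(grid))                    # clusters emerge from local rules
```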

  19. Sorption of phosphate onto calcite; results from batch experiments and surface complexation modeling

    DEFF Research Database (Denmark)

    Sø, Helle Ugilt; Postma, Dieke; Jakobsen, Rasmus

    2011-01-01

    The adsorption of phosphate onto calcite was studied in a series of batch experiments. To avoid the precipitation of phosphate-containing minerals, the experiments were conducted using a short reaction time (3 h) and low concentrations of phosphate (⩽50 μM). Sorption of phosphate on calcite was stud... of a high degree of super-saturation with respect to hydroxyapatite (SIHAP⩽7.83). The amount of phosphate adsorbed varied with the solution composition; in particular, adsorption increases as the CO32- activity decreases (at constant pH) and as pH increases (at constant CO32- activity). The primary effect of ionic strength on phosphate sorption onto calcite is its influence on the activity of the different aqueous phosphate species. The experimental results were modeled satisfactorily using the constant capacitance model with >CaPO4Ca0 and either >CaHPO4Ca+ or >CaHPO4- as the adsorbed surface species...

  20. Nonparametric Bayesian Modeling of Complex Networks

    DEFF Research Database (Denmark)

    Schmidt, Mikkel Nørgaard; Mørup, Morten

    2013-01-01

    Modeling structure in complex networks using Bayesian nonparametrics makes it possible to specify flexible model structures and infer the adequate model complexity from the observed data. This article provides a gentle introduction to nonparametric Bayesian modeling of complex networks: using an infinite mixture model as running example, we go through the steps of deriving the model as an infinite limit of a finite parametric model, inferring the model parameters by Markov chain Monte Carlo, and checking the model's fit and predictive performance. We explain how advanced nonparametric models...
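
    The 'infinite limit of a finite mixture' has a compact generative form, the Chinese restaurant process, which is the prior over partitions behind such infinite mixture models; the number of clusters is inferred rather than fixed. A sketch of draws from it:

```python
import random

def crp(n, alpha, seed=0):
    """Sample a partition of n items from a Chinese restaurant process prior."""
    random.seed(seed)
    tables = []                        # tables[k] = size of cluster k
    for i in range(n):
        # new cluster with prob alpha/(i+alpha), else join one by size
        weights = tables + [alpha]
        k = random.choices(range(len(weights)), weights=weights)[0]
        if k == len(tables):
            tables.append(1)
        else:
            tables[k] += 1
    return tables

for n in (10, 100, 1000):
    print(f"{n:5d} observations -> {len(crp(n, alpha=1.0))} clusters")
```

    In the full model, Markov chain Monte Carlo alternates this seating step with likelihood terms for the observed network.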

  1. Clinical Complexity in Medicine: A Measurement Model of Task and Patient Complexity.

    Science.gov (United States)

    Islam, R; Weir, C; Del Fiol, G

    2016-01-01

    Complexity in medicine needs to be reduced to simple components in a way that is comprehensible to researchers and clinicians. Few studies in the current literature propose a measurement model that addresses both task and patient complexity in medicine. The objective of this paper is to develop an integrated approach to understand and measure clinical complexity by incorporating both task and patient complexity components focusing on the infectious disease domain. The measurement model was adapted and modified for the healthcare domain. Three clinical infectious disease teams were observed, audio-recorded and transcribed. Each team included an infectious diseases expert, one infectious diseases fellow, one physician assistant and one pharmacy resident fellow. The transcripts were parsed and the authors independently coded complexity attributes. This baseline measurement model of clinical complexity was modified in an initial set of coding processes and further validated in a consensus-based iterative process that included several meetings and email discussions by three clinical experts from diverse backgrounds from the Department of Biomedical Informatics at the University of Utah. Inter-rater reliability was calculated using Cohen's kappa. The proposed clinical complexity model consists of two separate components. The first is a clinical task complexity model with 13 clinical complexity-contributing factors and 7 dimensions. The second is the patient complexity model with 11 complexity-contributing factors and 5 dimensions. The measurement model for complexity encompassing both task and patient complexity will be a valuable resource for future researchers and industry to measure and understand complexity in healthcare.
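
    The inter-rater check reported here is Cohen's kappa, kappa = (p_o - p_e) / (1 - p_e), i.e., observed agreement corrected for chance. A self-contained sketch with made-up labels from two coders:

```python
from collections import Counter

coder1 = ["task", "patient", "task", "task", "patient", "task", "patient", "task"]
coder2 = ["task", "patient", "task", "patient", "patient", "task", "task", "task"]

n = len(coder1)
p_o = sum(a == b for a, b in zip(coder1, coder2)) / n     # observed agreement
c1, c2 = Counter(coder1), Counter(coder2)
p_e = sum(c1[l] * c2[l] for l in set(c1) | set(c2)) / n**2  # chance agreement

kappa = (p_o - p_e) / (1 - p_e)
print(f"observed {p_o:.2f}, chance {p_e:.2f}, kappa {kappa:.2f}")
```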

  2. Software complex for developing dynamically packed program system for experiment automation

    International Nuclear Information System (INIS)

    Baluka, G.; Salamatin, I.M.

    1985-01-01

    A software complex for developing dynamically packed program systems for experiment automation is considered. The complex includes general-purpose programming systems, represented by the RT-11 standard operating system, and specially developed problem-oriented modules providing execution of certain jobs. The described complex is implemented in the PASKAL' and MAKRO-2 languages and is rather flexible with respect to variations in the experimental technique.

  3. Complex matrix model duality

    International Nuclear Information System (INIS)

    Brown, T.W.

    2010-11-01

    The same complex matrix model calculates both tachyon scattering for the c=1 non-critical string at the self-dual radius and certain correlation functions of half-BPS operators in N=4 super-Yang-Mills. It is dual to another complex matrix model where the couplings of the first model are encoded in the Kontsevich-like variables of the second. The duality between the theories is mirrored by the duality of their Feynman diagrams. Analogously to the Hermitian Kontsevich-Penner model, the correlation functions of the second model can be written as sums over discrete points in subspaces of the moduli space of punctured Riemann surfaces. (orig.)

  4. Complex matrix model duality

    Energy Technology Data Exchange (ETDEWEB)

    Brown, T.W.

    2010-11-15

    The same complex matrix model calculates both tachyon scattering for the c=1 non-critical string at the self-dual radius and certain correlation functions of half-BPS operators in N=4 super-Yang-Mills. It is dual to another complex matrix model where the couplings of the first model are encoded in the Kontsevich-like variables of the second. The duality between the theories is mirrored by the duality of their Feynman diagrams. Analogously to the Hermitian Kontsevich-Penner model, the correlation functions of the second model can be written as sums over discrete points in subspaces of the moduli space of punctured Riemann surfaces. (orig.)

  5. Turbulence modeling needs of commercial CFD codes: Complex flows in the aerospace and automotive industries

    Science.gov (United States)

    Befrui, Bizhan A.

    1995-01-01

    This viewgraph presentation discusses the following: STAR-CD computational features; STAR-CD turbulence models; common features of industrial complex flows; industry-specific CFD development requirements; applications and experiences of industrial complex flows, including flow in rotating disc cavities, diffusion hole film cooling, internal blade cooling, and external car aerodynamics; and conclusions on turbulence modeling needs.

  6. Simulation in Complex Modelling

    DEFF Research Database (Denmark)

    Nicholas, Paul; Ramsgaard Thomsen, Mette; Tamke, Martin

    2017-01-01

    This paper will discuss the role of simulation in extended architectural design modelling. As a framing paper, the aim is to present and discuss the role of integrated design simulation and feedback between design and simulation in a series of projects under the Complex Modelling framework. Complex … performance, engage with high degrees of interdependency and allow the emergence of design agency and feedback between the multiple scales of architectural construction. This paper presents examples for integrated design simulation from a series of projects including Lace Wall, A Bridge Too Far and Inflated Restraint, developed for the research exhibition Complex Modelling, Meldahls Smedie Gallery, Copenhagen, in 2016. Where the direct project aims and outcomes have been reported elsewhere, the aim of this paper is to discuss overarching strategies for working with design-integrated simulation.

  7. Bayesian model calibration of computational models in velocimetry diagnosed dynamic compression experiments.

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Justin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hund, Lauren [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    Dynamic compression experiments are being performed on complicated materials using increasingly complex drivers. The data produced in these experiments are beginning to reach a regime where traditional analysis techniques break down, requiring the solution of an inverse problem. A common measurement in dynamic experiments is an interface velocity as a function of time, and often this functional output can be simulated using a hydrodynamics code. Bayesian model calibration is a statistical framework to estimate inputs into a computational model in the presence of multiple uncertainties, making it well suited to measurements of this type. In this article, we apply Bayesian model calibration to high pressure (250 GPa) ramp compression measurements in tantalum. We address several issues specific to this calibration, including the functional nature of the output as well as parameter and model discrepancy identifiability. Specifically, we propose scaling the likelihood function by an effective sample size rather than modeling the autocorrelation function to accommodate the functional output, and propose sensitivity analyses using the notion of 'modularization' to assess the impact of experiment-specific nuisance input parameters on estimates of material properties. We conclude that the proposed Bayesian model calibration procedure results in simple, fast, and valid inferences on the equation of state parameters for tantalum.

  8. Modelling and simulation of gas explosions in complex geometries

    Energy Technology Data Exchange (ETDEWEB)

    Saeter, Olav

    1998-12-31

    This thesis presents a three-dimensional Computational Fluid Dynamics (CFD) code (EXSIM94) for modelling and simulation of gas explosions in complex geometries. It gives the theory and validates the following sub-models: (1) the flow resistance and turbulence generation model for densely packed regions, (2) the flow resistance and turbulence generation model for single objects, and (3) the quasi-laminar combustion model. It is found that a simple model for flow resistance and turbulence generation in densely packed beds is able to reproduce the medium and large scale MERGE explosion experiments of the Commission of the European Communities (CEC) within a band of factor 2. The model for single-object representation is found to predict explosion pressure in better agreement with the experiments when combined with a modified k-{epsilon} model. This modification also gives a slightly improved grid independence for realistic gas explosion approaches. One laminar model is found unsuitable for gas explosion modelling because of strong grid dependence. Another laminar model is found to be relatively grid independent and to work well in harmony with the turbulent combustion model. The code is validated against 40 realistic gas explosion experiments. It is relatively grid independent in predicting explosion pressure in different offshore geometries. It can predict the influence of ignition point location, vent arrangements, different geometries, scaling effects and gas reactivity. The validation study concludes with statistical and uncertainty analyses of the code performance. 98 refs., 96 figs, 12 tabs.

  9. Modeling complexes of modeled proteins.

    Science.gov (United States)

    Anishchenko, Ivan; Kundrotas, Petras J; Vakser, Ilya A

    2017-03-01

    Structural characterization of proteins is essential for understanding life processes at the molecular level. However, only a fraction of known proteins have experimentally determined structures. This fraction is even smaller for protein-protein complexes. Thus, structural modeling of protein-protein interactions (docking) primarily has to rely on modeled structures of the individual proteins, which typically are less accurate than the experimentally determined ones. Such "double" modeling is the Grand Challenge of structural reconstruction of the interactome. Yet it remains so far largely untested in a systematic way. We present a comprehensive validation of template-based and free docking on a set of 165 complexes, where each protein model has six levels of structural accuracy, from 1 to 6 Å Cα RMSD. Many template-based docking predictions fall into the acceptable quality category, according to the CAPRI criteria, even for highly inaccurate proteins (5-6 Å RMSD), although the number of such models (and, consequently, the docking success rate) drops significantly for models with RMSD > 4 Å. The results show that the existing docking methodologies can be successfully applied to protein models with a broad range of structural accuracy, and the template-based docking is much less sensitive to inaccuracies of protein models than the free docking. Proteins 2017; 85:470-478. © 2016 Wiley Periodicals, Inc.
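
    The Cα RMSD used above to grade model accuracy is conventionally computed after optimal superposition of the two structures. A generic sketch of the Kabsch algorithm, not the authors' code; the coordinates below are synthetic:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (N, 3) coordinate arrays (e.g. C-alpha atoms)
    after optimal rigid-body superposition via the Kabsch algorithm."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # optimal rotation matrix
    P_rot = P @ R.T
    return np.sqrt(((P_rot - Q) ** 2).sum() / len(P))

# Toy example: a "native" structure and a perturbed "model" of it
rng = np.random.default_rng(0)
native = rng.normal(size=(50, 3))
model = native + rng.normal(scale=0.5, size=(50, 3))
print(f"Ca RMSD = {kabsch_rmsd(model, native):.2f} A")
```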

  10. Simulation model 'methane' as a tool for effective biogas production during anaerobic conversion of complex organic matter

    Energy Technology Data Exchange (ETDEWEB)

    Vavilin, V A; Vasiliev, V B; Ponomarev, A V; Rytow, S V [Russian Academy of Sciences, Moscow (Russian Federation). Water Problems Inst.

    1994-01-01

    A universal basic model of the anaerobic conversion of complex organic material is suggested. The model can be used for investigating start-up experiments for food industry wastewater. General results obtained with the model agreed with the experimental data. An explanation of the complex dynamic behaviour of the anaerobic system is suggested. (author)

  11. Mathematical modelling of complex contagion on clustered networks

    Science.gov (United States)

    O'Sullivan, David J.; O'Keeffe, Gary; Fennell, Peter; Gleeson, James

    2015-09-01

    The spreading of behavior, such as the adoption of a new innovation, is influenced by the structure of social networks that interconnect the population. In the experiments of Centola (Science, 2010), adoption of new behavior was shown to spread further and faster across clustered-lattice networks than across corresponding random networks. This implies that the “complex contagion” effects of social reinforcement are important in such diffusion, in contrast to “simple” contagion models of disease-spread which predict that epidemics would grow more efficiently on random networks than on clustered networks. To accurately model complex contagion on clustered networks remains a challenge because the usual assumptions (e.g. of mean-field theory) regarding tree-like networks are invalidated by the presence of triangles in the network; the triangles are, however, crucial to the social reinforcement mechanism, which posits an increased probability of a person adopting behavior that has been adopted by two or more neighbors. In this paper we modify the analytical approach that was introduced by Hébert-Dufresne et al. (Phys. Rev. E, 2010) to study disease-spread on clustered networks. We show how the approximation method can be adapted to a complex contagion model, and confirm the accuracy of the method with numerical simulations. The analytical results of the model enable us to quantify the level of social reinforcement that is required to observe—as in Centola’s experiments—faster diffusion on clustered topologies than on random networks.
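
    The social-reinforcement mechanism described above is often formalized as a threshold model: a node adopts only once enough distinct neighbors have adopted. A minimal simulation sketch, generic rather than the paper's analytical method; the ring-lattice topology and threshold of two are illustrative:

```python
def complex_contagion(adjacency, seeds, threshold=2, max_steps=100):
    """Threshold model of complex contagion: a node adopts once at least
    `threshold` of its neighbours have adopted (social reinforcement)."""
    adopted = set(seeds)
    for _ in range(max_steps):
        frontier = {
            node for node, nbrs in adjacency.items()
            if node not in adopted
            and sum(nbr in adopted for nbr in nbrs) >= threshold
        }
        if not frontier:
            break
        adopted |= frontier
    return adopted

# Clustered ring lattice (each node linked to two neighbours on each side):
# adjacent seeds give shared neighbours the two exposures they need, so the
# cascade wraps around the whole ring, unlike on a sparse random graph.
n = 30
ring = {i: {(i - 2) % n, (i - 1) % n, (i + 1) % n, (i + 2) % n}
        for i in range(n)}
print(len(complex_contagion(ring, seeds={0, 1})))   # -> 30
```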

  12. Mathematical modelling of complex contagion on clustered networks

    Directory of Open Access Journals (Sweden)

    David J. P. O'Sullivan

    2015-09-01

    Full Text Available The spreading of behavior, such as the adoption of a new innovation, is influenced by the structure of social networks that interconnect the population. In the experiments of Centola (Science, 2010), adoption of new behavior was shown to spread further and faster across clustered-lattice networks than across corresponding random networks. This implies that the complex contagion effects of social reinforcement are important in such diffusion, in contrast to simple contagion models of disease-spread which predict that epidemics would grow more efficiently on random networks than on clustered networks. To accurately model complex contagion on clustered networks remains a challenge because the usual assumptions (e.g. of mean-field theory) regarding tree-like networks are invalidated by the presence of triangles in the network; the triangles are, however, crucial to the social reinforcement mechanism, which posits an increased probability of a person adopting behavior that has been adopted by two or more neighbors. In this paper we modify the analytical approach that was introduced by Hébert-Dufresne et al. (Phys. Rev. E, 2010) to study disease-spread on clustered networks. We show how the approximation method can be adapted to a complex contagion model, and confirm the accuracy of the method with numerical simulations. The analytical results of the model enable us to quantify the level of social reinforcement that is required to observe—as in Centola’s experiments—faster diffusion on clustered topologies than on random networks.

  13. Assessment for Complex Learning Resources: Development and Validation of an Integrated Model

    Directory of Open Access Journals (Sweden)

    Gudrun Wesiak

    2013-01-01

    Full Text Available Today’s e-learning systems meet the challenge to provide interactive, personalized environments that support self-regulated learning as well as social collaboration and simulation. At the same time, assessment procedures have to be adapted to the new learning environments by moving from isolated summative assessments to integrated assessment forms. Therefore, learning experiences enriched with complex didactic resources - such as virtualized collaborations and serious games - have emerged. In this extension of [1] an integrated model for e-assessment (IMA) is outlined, which incorporates complex learning resources and assessment forms as main components for the development of an enriched learning experience. For a validation the IMA was presented to a group of experts from the fields of cognitive science, pedagogy, and e-learning. The findings from the validation led to several refinements of the model, which mainly concern the component forms of assessment and the integration of social aspects. Both aspects are accounted for in the revised model, the former by providing a detailed sub-model for assessment forms.

  14. Model complexity control for hydrologic prediction

    NARCIS (Netherlands)

    Schoups, G.; Van de Giesen, N.C.; Savenije, H.H.G.

    2008-01-01

    A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore needed.
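
    One standard, systematic way to penalize excess complexity, offered here only as a generic illustration rather than the authors' method, is information-criterion model selection: both AIC and BIC trade goodness of fit against the number of parameters.

```python
import numpy as np

def aic_bic(residuals, n_params):
    """Gaussian-likelihood AIC and BIC for a fitted model: both penalise
    fit quality by parameter count, BIC more strongly for large samples."""
    n = len(residuals)
    sigma2 = np.mean(residuals ** 2)
    log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * n_params - 2 * log_lik, n_params * np.log(n) - 2 * log_lik

# Fit polynomials of increasing complexity to noisy data and compare:
# past the "sweet spot", extra parameters stop paying for themselves.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 100)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)
for degree in (1, 3, 5, 9):
    res = y - np.polyval(np.polyfit(x, y, degree), x)
    aic, bic = aic_bic(res, degree + 1)
    print(f"degree {degree}: AIC={aic:7.1f}  BIC={bic:7.1f}")
```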

  15. PeTTSy: a computational tool for perturbation analysis of complex systems biology models.

    Science.gov (United States)

    Domijan, Mirela; Brown, Paul E; Shulgin, Boris V; Rand, David A

    2016-03-10

    Over the last decade, sensitivity analysis techniques have been shown to be very useful for analysing complex and high-dimensional Systems Biology models. However, many of the currently available toolboxes have either used parameter sampling, been focused on a restricted set of model observables of interest, studied optimisation of an objective function, or have not dealt with multiple simultaneous model parameter changes where the changes can be permanent or temporary. Here we introduce our new, freely downloadable toolbox, PeTTSy (Perturbation Theory Toolbox for Systems). PeTTSy is a package for MATLAB which implements a wide array of techniques for the perturbation theory and sensitivity analysis of large and complex ordinary differential equation (ODE) based models. PeTTSy is a comprehensive modelling framework that introduces a number of new approaches and that fully addresses analysis of oscillatory systems. It examines sensitivity analysis of the models to perturbations of parameters, where the perturbation timing, strength, length and overall shape can be controlled by the user. This can be done in a system-global setting, namely, the user can determine how many parameters to perturb, by how much and for how long. PeTTSy also offers the user the ability to explore the effect of the parameter perturbations on many different types of outputs: period, phase (timing of peak) and model solutions. PeTTSy can be employed on a wide range of mathematical models including free-running and forced oscillators and signalling systems. To enable experimental optimisation using the Fisher Information Matrix it efficiently allows one to combine multiple variants of a model (i.e. a model with multiple experimental conditions) in order to determine the value of new experiments. It is especially useful in the analysis of large and complex models involving many variables and parameters. PeTTSy is a comprehensive tool for analysing large and complex models of regulatory and…
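
    PeTTSy itself is a MATLAB package implementing perturbation-theory machinery; the underlying idea of output sensitivity to parameter perturbations can be sketched with plain finite differences. A toy stand-in in Python, not PeTTSy's algorithm; the damped-oscillator model and step size are assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

def simulate(params, t_eval):
    """Toy damped oscillator standing in for a systems-biology ODE model."""
    k, c = params
    rhs = lambda t, y: [y[1], -k * y[0] - c * y[1]]
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), [1.0, 0.0], t_eval=t_eval)
    return sol.y[0]

def sensitivities(params, t_eval, rel_step=1e-4):
    """Finite-difference sensitivity of the model output to each parameter."""
    base = simulate(params, t_eval)
    sens = []
    for i, p in enumerate(params):
        dp = rel_step * abs(p)
        bumped = list(params)
        bumped[i] = p + dp
        sens.append((simulate(bumped, t_eval) - base) / dp)
    return np.array(sens)            # shape: (n_params, n_times)

t = np.linspace(0, 10, 200)
S = sensitivities([4.0, 0.3], t)
print("max |dY/dk| =", np.abs(S[0]).max(), " max |dY/dc| =", np.abs(S[1]).max())
```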

  16. Multifaceted Modelling of Complex Business Enterprises.

    Science.gov (United States)

    Chakraborty, Subrata; Mengersen, Kerrie; Fidge, Colin; Ma, Lin; Lassen, David

    2015-01-01

    We formalise and present a new generic multifaceted complex system approach for modelling complex business enterprises. Our method has a strong focus on integrating the various data types available in an enterprise which represent the diverse perspectives of various stakeholders. We explain the challenges faced and define a novel approach to converting diverse data types into usable Bayesian probability forms. The data types that can be integrated include historic data, survey data, and management planning data, expert knowledge and incomplete data. The structural complexities of the complex system modelling process, based on various decision contexts, are also explained along with a solution. This new application of complex system models as a management tool for decision making is demonstrated using a railway transport case study. The case study demonstrates how the new approach can be utilised to develop a customised decision support model for a specific enterprise. Various decision scenarios are also provided to illustrate the versatility of the decision model at different phases of enterprise operations such as planning and control.

  17. Multifaceted Modelling of Complex Business Enterprises

    Science.gov (United States)

    2015-01-01

    We formalise and present a new generic multifaceted complex system approach for modelling complex business enterprises. Our method has a strong focus on integrating the various data types available in an enterprise which represent the diverse perspectives of various stakeholders. We explain the challenges faced and define a novel approach to converting diverse data types into usable Bayesian probability forms. The data types that can be integrated include historic data, survey data, and management planning data, expert knowledge and incomplete data. The structural complexities of the complex system modelling process, based on various decision contexts, are also explained along with a solution. This new application of complex system models as a management tool for decision making is demonstrated using a railway transport case study. The case study demonstrates how the new approach can be utilised to develop a customised decision support model for a specific enterprise. Various decision scenarios are also provided to illustrate the versatility of the decision model at different phases of enterprise operations such as planning and control. PMID:26247591

  18. Using Models to Inform Policy: Insights from Modeling the Complexities of Global Polio Eradication

    Science.gov (United States)

    Thompson, Kimberly M.

    Drawing on over 20 years of experience modeling risks in complex systems, this talk will challenge SBP participants to develop models that provide timely and useful answers to critical policy questions when decision makers need them. The talk will include reflections on the opportunities and challenges associated with developing integrated models for complex problems and communicating their results effectively. Dr. Thompson will focus the talk largely on collaborative modeling related to global polio eradication and the application of system dynamics tools. After successful global eradication of wild polioviruses, live polioviruses will still present risks that could potentially lead to paralytic polio cases. This talk will present the insights of efforts to use integrated dynamic, probabilistic risk, decision, and economic models to address critical policy questions related to managing global polio risks. Using a dynamic disease transmission model combined with probabilistic model inputs that characterize uncertainty for a stratified world to account for variability, we find that global health leaders will face some difficult choices, but that they can take actions that will manage the risks effectively. The talk will emphasize the need for true collaboration between modelers and subject matter experts, and the importance of working with decision makers as partners to ensure the development of useful models that actually get used.

  19. Capturing the experiences of patients across multiple complex interventions: a meta-qualitative approach.

    Science.gov (United States)

    Webster, Fiona; Christian, Jennifer; Mansfield, Elizabeth; Bhattacharyya, Onil; Hawker, Gillian; Levinson, Wendy; Naglie, Gary; Pham, Thuy-Nga; Rose, Louise; Schull, Michael; Sinha, Samir; Stergiopoulos, Vicky; Upshur, Ross; Wilson, Lynn

    2015-09-08

    The perspectives, needs and preferences of individuals with complex health and social needs can be overlooked in the design of healthcare interventions. This study was designed to provide new insights on patient perspectives drawing from the qualitative evaluation of 5 complex healthcare interventions. Patients and their caregivers were recruited from 5 interventions based in primary, hospital and community care in Ontario, Canada. We included 62 interviews from 44 patients and 18 non-clinical caregivers. Our team analysed the transcripts from 5 distinct projects. This approach to qualitative meta-evaluation identifies common issues described by a diverse group of patients, therefore providing potential insights into systems issues. This study is a secondary analysis of qualitative data; therefore, no outcome measures were identified. We identified 5 broad themes that capture the patients' experience and highlight issues that might not be adequately addressed in complex interventions. In our study, we found that: (1) the emergency department is the unavoidable point of care; (2) patients and caregivers are part of complex and variable family systems; (3) non-medical issues mediate patients' experiences of health and healthcare delivery; (4) the unanticipated consequences of complex healthcare interventions are often the most valuable; and (5) patient experiences are shaped by the healthcare discourses on medically complex patients. Our findings suggest that key assumptions about patients that inform intervention design need to be made explicit in order to build capacity to better understand and support patients with multiple chronic diseases. Across many health systems internationally, multiple models are being implemented simultaneously that may have shared features and target similar patients, and a qualitative meta-evaluation approach, thus offers an opportunity for cumulative learning at a system level in addition to informing intervention design and

  20. Analysis of Operators Comments on the PSF Questionnaire of the Task Complexity Experiment 2003/2004

    Energy Technology Data Exchange (ETDEWEB)

    Torralba, B.; Martinez-Arias, R.

    2007-07-01

    Human Reliability Analysis (HRA) methods usually take into account the effect of Performance Shaping Factors (PSF). Therefore, the adequate treatment of PSFs in HRA of Probabilistic Safety Assessment (PSA) models has a crucial importance. There is an important need for collecting PSF data based on simulator experiments. During the task complexity experiment 2003-2004, carried out in the BWR simulator of the Halden Man-Machine Laboratory (HAMMLAB), data on PSFs were collected by means of a PSF Questionnaire. Seven crews (each composed of shift supervisor, reactor operator and turbine operator) from Swedish Nuclear Power Plants participated in the experiment. The PSF Questionnaire collected data on the factors: procedures, training and experience, indications, controls, team management, team communication, individual work practice, available time for the tasks, number of tasks or information load, masking and seriousness. The main statistically significant results are presented in the report on Performance Shaping Factors data collection and analysis of the task complexity experiment 2003/2004 (HWR-810). This paper describes the analysis of the comments about PSFs which were provided by operators on the PSF Questionnaire. The comments provided for each PSF in the scenarios have been summarised using a content analysis technique. (Author)

  1. Analysis of Operators Comments on the PSF Questionnaire of the Task Complexity Experiment 2003/2004

    International Nuclear Information System (INIS)

    Torralba, B.; Martinez-Arias, R.

    2007-01-01

    Human Reliability Analysis (HRA) methods usually take into account the effect of Performance Shaping Factors (PSF). Therefore, the adequate treatment of PSFs in HRA of Probabilistic Safety Assessment (PSA) models has a crucial importance. There is an important need for collecting PSF data based on simulator experiments. During the task complexity experiment 2003-2004, carried out in the BWR simulator of the Halden Man-Machine Laboratory (HAMMLAB), data on PSFs were collected by means of a PSF Questionnaire. Seven crews (each composed of shift supervisor, reactor operator and turbine operator) from Swedish Nuclear Power Plants participated in the experiment. The PSF Questionnaire collected data on the factors: procedures, training and experience, indications, controls, team management, team communication, individual work practice, available time for the tasks, number of tasks or information load, masking and seriousness. The main statistically significant results are presented in the report on Performance Shaping Factors data collection and analysis of the task complexity experiment 2003/2004 (HWR-810). This paper describes the analysis of the comments about PSFs which were provided by operators on the PSF Questionnaire. The comments provided for each PSF in the scenarios have been summarised using a content analysis technique. (Author)

  2. Graduate Social Work Education and Cognitive Complexity: Does Prior Experience Really Matter?

    Science.gov (United States)

    Simmons, Chris

    2014-01-01

    This study examined the extent to which age, education, and practice experience among social work graduate students (N = 184) predicted cognitive complexity, an essential aspect of critical thinking. In the regression analysis, education accounted for more of the variance associated with cognitive complexity than age and practice experience. When…

  3. Complex networks-based energy-efficient evolution model for wireless sensor networks

    Energy Technology Data Exchange (ETDEWEB)

    Zhu Hailin [Beijing Key Laboratory of Intelligent Telecommunications Software and Multimedia, Beijing University of Posts and Telecommunications, P.O. Box 106, Beijing 100876 (China)], E-mail: zhuhailin19@gmail.com; Luo Hong [Beijing Key Laboratory of Intelligent Telecommunications Software and Multimedia, Beijing University of Posts and Telecommunications, P.O. Box 106, Beijing 100876 (China); Peng Haipeng; Li Lixiang; Luo Qun [Information Secure Center, State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, P.O. Box 145, Beijing 100876 (China)

    2009-08-30

    Based on complex networks theory, we present two self-organized energy-efficient models for wireless sensor networks in this paper. The first model constructs the wireless sensor networks according to the connectivity and remaining energy of each sensor node, thus it can produce scale-free networks which have a performance of random error tolerance. In the second model, we not only consider the remaining energy, but also introduce the constraint of links to each node. This model can make the energy consumption of the whole network more balanced. Finally, we present the numerical experiments of the two models.
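
    The first model's growth rule, attachment probability proportional to connectivity times remaining energy, can be sketched in a few lines. An illustrative reimplementation, not the authors' code; the energy distribution and per-node link count are assumptions:

```python
import random

def grow_wsn(n_nodes, m_links=2, seed=42):
    """Grow a sensor network in which each new node links to m existing
    nodes chosen with probability proportional to degree * remaining
    energy (in the spirit of the paper's first model)."""
    rng = random.Random(seed)
    energy = [rng.uniform(0.5, 1.0) for _ in range(2)]  # placeholder energies
    degree = [1, 1]                                     # start from link 0-1
    edges = [(0, 1)]
    for new in range(2, n_nodes):
        weights = [d * e for d, e in zip(degree, energy)]
        targets = set()
        while len(targets) < min(m_links, new):
            targets.add(rng.choices(range(new), weights=weights)[0])
        for v in targets:
            edges.append((new, v))
            degree[v] += 1
        degree.append(len(targets))
        energy.append(rng.uniform(0.5, 1.0))
    return edges, degree

edges, degree = grow_wsn(500)
print("max degree:", max(degree))   # heavy-tailed: a few energy-rich hubs
```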

  4. Complex networks-based energy-efficient evolution model for wireless sensor networks

    International Nuclear Information System (INIS)

    Zhu Hailin; Luo Hong; Peng Haipeng; Li Lixiang; Luo Qun

    2009-01-01

    Based on complex networks theory, we present two self-organized energy-efficient models for wireless sensor networks in this paper. The first model constructs the wireless sensor networks according to the connectivity and remaining energy of each sensor node, thus it can produce scale-free networks which have a performance of random error tolerance. In the second model, we not only consider the remaining energy, but also introduce the constraint of links to each node. This model can make the energy consumption of the whole network more balanced. Finally, we present the numerical experiments of the two models.

  5. The general theory of the Quasi-reproducible experiments: How to describe the measured data of complex systems?

    Science.gov (United States)

    Nigmatullin, Raoul R.; Maione, Guido; Lino, Paolo; Saponaro, Fabrizio; Zhang, Wei

    2017-01-01

    In this paper, we suggest a general theory that enables the description of experiments associated with reproducible or quasi-reproducible data reflecting the dynamical and self-similar properties of a wide class of complex systems. By a complex system we understand a system for which a model based on microscopic principles and suppositions about the nature of the matter is absent. Such a microscopic model is usually determined as "the best fit" model. The behavior of the complex system relative to a control variable (time, frequency, wavelength, etc.) can be described in terms of the so-called intermediate model (IM). One can prove that the fitting parameters of the IM are associated with the amplitude-frequency response of the segment of the Prony series. The segment of the Prony series, including the set of decomposition coefficients and the set of exponential functions (with k = 1, 2, …, K), is limited by the final mode K. The exponential functions of this decomposition depend on time and are found by the original algorithm described in the paper. This approach serves as a logical continuation of the results obtained earlier in [Nigmatullin RR, Zhang W and Striccoli D. General theory of experiment containing reproducible data: The reduction to an ideal experiment. Commun Nonlinear Sci Numer Simul, 27, (2015), pp 175-192] for reproducible experiments and includes the previous results as a partial case. In this paper, we consider a more complex case, when the available data form short samplings or exhibit some instability during the process of measurements. We give justified evidence and conditions proving the validity of this theory for the description of a wide class of complex systems in terms of the reduced set of fitting parameters belonging to the segment of the Prony series. The elimination of uncontrollable factors, expressed in the form of the apparatus function, is discussed. To illustrate how to apply the theory and take advantage of its…
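
    The Prony-segment idea, describing data as a finite sum of decaying exponentials, can be illustrated with a deliberately simplified fit in which the decay rates are fixed on a grid and only the amplitudes are solved by least squares. The paper's algorithm also determines the exponentials themselves; everything below is a toy stand-in:

```python
import numpy as np

def fit_exponential_segment(t, y, rates):
    """Least-squares fit of y(t) ~ sum_k a_k * exp(-lambda_k * t) with the
    decay rates lambda_k fixed on a user-supplied grid."""
    basis = np.exp(-np.outer(t, rates))          # shape (n_times, K)
    coeffs, *_ = np.linalg.lstsq(basis, y, rcond=None)
    return coeffs, basis @ coeffs

t = np.linspace(0, 5, 200)
y_true = 2.0 * np.exp(-0.7 * t) + 0.5 * np.exp(-3.0 * t)
y_meas = y_true + np.random.default_rng(0).normal(scale=0.01, size=t.size)
rates = np.linspace(0.1, 5.0, 20)                # candidate lambda_k grid
coeffs, y_fit = fit_exponential_segment(t, y_meas, rates)
print("max fit error:", np.abs(y_fit - y_meas).max())
```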

  6. Post-mining water treatment. Nanofiltration of uranium-contaminated drainage. Experiments and modeling

    Energy Technology Data Exchange (ETDEWEB)

    Hoyer, Michael

    2017-07-01

    Nanofiltration of real uranium-contaminated mine drainage was successfully investigated in experiments and modeling. For the simulation, a renowned model capable of describing multi-component solutions was adapted. Although the description of synthetic multi-component solutions with a limited number of components has been performed before ([Garcia-Aleman2004], [Geraldes2006], [Bandini2003]), the results of this work show that the adapted model is capable of describing the very complex solution. The model developed here is based on: the Donnan-Steric Partitioning Pore Model incorporating Dielectric Exclusion - DSPM and DE, ref. [Bowen1997], [Bandini2003], [Bowen2002], [Vezzani2002]; and the steric, electric, and dielectric exclusion model - SEDE, ref. [Szymczyk2005]. The developed modeling approach is capable of describing multi-component transport, and is based on the pore radius, membrane thickness, and volumetric membrane charge density as physically relevant membrane parameters instead of mere fitting parameters, which allows conclusions concerning membrane modification or process design. The experiments involve typical commercially available membranes in combination with a water sample of industrial relevance in the mining sector. Furthermore, it has been shown experimentally that uranium speciation influences its retention. Hence, all experiments consider the speciation of uranium when assessing its charge and size. In the simulation, 10 different ionic components have been taken into account. By freely fitting 4 parameters in parallel (pore radius, membrane thickness, membrane charge, relative permittivity of the oriented water layer at the pore wall), an excellent agreement between experiment and simulation was obtained. Moreover, the determined membrane thickness and pore radius are in close agreement with the values obtained by independent membrane characterization using pure water permeability and glucose retention. On the other hand, the fitted and the literature…

  7. Post-mining water treatment. Nanofiltration of uranium-contaminated drainage. Experiments and modeling

    International Nuclear Information System (INIS)

    Hoyer, Michael

    2017-01-01

    Nanofiltration of real uranium-contaminated mine drainage was successfully investigated in experiments and modeling. For the simulation, a renowned model capable of describing multi-component solutions was adapted. Although the description of synthetic multi-component solutions with a limited number of components has been performed before ([Garcia-Aleman2004], [Geraldes2006], [Bandini2003]), the results of this work show that the adapted model is capable of describing the very complex solution. The model developed here is based on: the Donnan-Steric Partitioning Pore Model incorporating Dielectric Exclusion - DSPM and DE, ref. [Bowen1997], [Bandini2003], [Bowen2002], [Vezzani2002]; and the steric, electric, and dielectric exclusion model - SEDE, ref. [Szymczyk2005]. The developed modeling approach is capable of describing multi-component transport, and is based on the pore radius, membrane thickness, and volumetric membrane charge density as physically relevant membrane parameters instead of mere fitting parameters, which allows conclusions concerning membrane modification or process design. The experiments involve typical commercially available membranes in combination with a water sample of industrial relevance in the mining sector. Furthermore, it has been shown experimentally that uranium speciation influences its retention. Hence, all experiments consider the speciation of uranium when assessing its charge and size. In the simulation, 10 different ionic components have been taken into account. By freely fitting 4 parameters in parallel (pore radius, membrane thickness, membrane charge, relative permittivity of the oriented water layer at the pore wall), an excellent agreement between experiment and simulation was obtained. Moreover, the determined membrane thickness and pore radius are in close agreement with the values obtained by independent membrane characterization using pure water permeability and glucose retention. On the other hand, the fitted and the literature…

  8. Modelling the structure of complex networks

    DEFF Research Database (Denmark)

    Herlau, Tue

    …networks have been independently studied as mathematical objects in their own right. As such, there has been both an increased demand for statistical methods for complex networks as well as a quickly growing mathematical literature on the subject. In this dissertation we explore aspects of modelling complex… The next chapters will treat some of the various symmetries, representer theorems and probabilistic structures often deployed in the modelling of complex networks, the construction of sampling methods and various network models. The introductory chapters will serve to provide context for the included written…

  9. Multi-Sensor As-Built Models of Complex Industrial Architectures

    Directory of Open Access Journals (Sweden)

    Jean-François Hullo

    2015-12-01

    Full Text Available In the context of increased maintenance operations and generational renewal work, a nuclear owner and operator, like Electricité de France (EDF), is invested in the scaling-up of tools and methods of “as-built virtual reality” for whole buildings and large audiences. In this paper, we first present the state of the art of scanning tools and methods used to represent a very complex architecture. Then, we propose a methodology and assess it in a large experiment carried out on the most complex building of a 1300-megawatt power plant, an 11-floor reactor building. We also present several developments that made possible the acquisition, processing and georeferencing of multiple data sources (1000+ 3D laser scans and RGB panoramics, total-station surveying, 2D floor plans) and the 3D reconstruction of CAD as-built models. In addition, we introduce new concepts for user interaction with complex architecture, elaborated during the development of an application that allows a painless exploration of the whole dataset by professionals unfamiliar with such data types. Finally, we discuss the main feedback items from this large experiment, the remaining issues for the generalization of such large-scale surveys, and the future technical and scientific challenges in the field of industrial “virtual reality”.

  10. Updating the debate on model complexity

    Science.gov (United States)

    Simmons, Craig T.; Hunt, Randall J.

    2012-01-01

    As scientists who are trying to understand a complex natural world that cannot be fully characterized in the field, how can we best inform the society in which we live? This founding context was addressed in a special session, “Complexity in Modeling: How Much is Too Much?” convened at the 2011 Geological Society of America Annual Meeting. The session had a variety of thought-provoking presentations—ranging from philosophy to cost-benefit analyses—and provided some areas of broad agreement that were not evident in discussions of the topic in 1998 (Hunt and Zheng, 1999). The session began with a short introduction during which model complexity was framed borrowing from an economic concept, the Law of Diminishing Returns, and an example of enjoyment derived by eating ice cream. Initially, there is increasing satisfaction gained from eating more ice cream, to a point where the gain in satisfaction starts to decrease, ending at a point when the eater sees no value in eating more ice cream. A traditional view of model complexity is similar—understanding gained from modeling can actually decrease if models become unnecessarily complex. However, oversimplified models—those that omit important aspects of the problem needed to make a good prediction—can also limit and confound our understanding. Thus, the goal of all modeling is to find the “sweet spot” of model sophistication—regardless of whether complexity was added sequentially to an overly simple model or collapsed from an initial highly parameterized framework that uses mathematics and statistics to attain an optimum (e.g., Hunt et al., 2007). Thus, holistic parsimony is attained, incorporating “as simple as possible,” as well as the equally important corollary “but no simpler.”

  11. THE COMPLEX OF EMOTIONAL EXPERIENCES, RELEVANT MANIFESTATIONS OF INSPIRATION

    Directory of Open Access Journals (Sweden)

    Pavel A. Starikov

    2015-01-01

    Full Text Available The aim of the study is to investigate the structure of emotional experiences relevant to manifestations of inspiration in the creative activities of students.Methods. Methods of mathematical statistics (correlation analysis, factor analysis, multidimensional scaling) are applied.Results and scientific novelty. The use of factor analysis and multidimensional scaling allowed revealing a consistent set of positive experiences of the students relevant to the experience of inspiration in creative activities. The «operational» experiences described by M. Csikszentmihalyi («feeling of full involvement and dissolution in what you do», «feeling of concentration, perfect clarity of purpose, complete control and a feeling of total immersion in a job that does not require special efforts») and experiences of a «spiritual» nature, closer to the peak experiences of A. Maslow («feeling of love for all existing, all life»; «a deep sense of self importance, the inner feeling of approval of self»; «feeling of unity with the whole world»; «acute perception of the beauty of the world of nature, “beautiful instant”»; «feeling of lightness, flowing»), are included in this complex in accordance with the study results. The interrelation between the degree of expressiveness of the given complex of experiences and the experience of inspiration is considered.Practical significance. The results of the study show the structure of emotional experiences relevant to manifestations of inspiration. Research materials can be useful both to psychologists and to experts in the field of pedagogy of creative activity.

  12. Decomposition studies of group 6 hexacarbonyl complexes. Pt. 2. Modelling of the decomposition process

    Energy Technology Data Exchange (ETDEWEB)

    Usoltsev, Ilya; Eichler, Robert; Tuerler, Andreas [Paul Scherrer Institut (PSI), Villigen (Switzerland); Bern Univ. (Switzerland)

    2016-11-01

    The decomposition behavior of group 6 metal hexacarbonyl complexes (M(CO){sub 6}) in a tubular flow reactor is simulated. A microscopic Monte-Carlo based model is presented for assessing the first bond dissociation enthalpy of M(CO){sub 6} complexes. The suggested approach superimposes a microscopic model of gas adsorption chromatography with a first-order heterogeneous decomposition model. The experimental data on the decomposition of Mo(CO){sub 6} and W(CO){sub 6} are successfully simulated by introducing available thermodynamic data. Thermodynamic data predicted by relativistic density functional theory is used in our model to deduce the most probable experimental behavior of the corresponding Sg carbonyl complex. Thus, the design of a chemical experiment with Sg(CO){sub 6} is suggested, which is sensitive to benchmark our theoretical understanding of the bond stability in carbonyl compounds of the heaviest elements.
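
    The superposition of gas adsorption chromatography with first-order heterogeneous decomposition described above can be sketched as a simple Monte Carlo over adsorption events. An illustrative toy, not the paper's model; the adsorption enthalpy, attempt frequency, hop count and decomposition rate are all assumed values:

```python
import math
import random

def survival_fraction(n_molecules, n_hops, temp_k, k_dec,
                      dH_ads=-50e3, nu=1e13, seed=1):
    """Toy Monte Carlo: a volatile complex migrates along a column through
    n_hops adsorption events; each surface residence time is exponential
    with the Frenkel mean tau = (1/nu) * exp(-dH_ads / (R*T)), and the
    molecule may decompose on the surface with first-order rate k_dec."""
    R = 8.314
    rng = random.Random(seed)
    tau = math.exp(-dH_ads / (R * temp_k)) / nu   # mean residence time [s]
    survived = 0
    for _ in range(n_molecules):
        for _ in range(n_hops):
            t_ads = rng.expovariate(1.0 / tau)    # one adsorption event
            if rng.random() < 1.0 - math.exp(-k_dec * t_ads):
                break                             # decomposed on the surface
        else:
            survived += 1
    return survived / n_molecules

# With k_dec held fixed, survival rises with temperature because shorter
# residence times leave less time to decompose (a real k_dec would itself
# be temperature-dependent via an Arrhenius law).
for T in (250, 300, 350):
    print(T, survival_fraction(5000, n_hops=1000, temp_k=T, k_dec=10.0))
```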

  13. Modeling and experiments of biomass combustion in a large-scale grate boiler

    DEFF Research Database (Denmark)

    Yin, Chungen; Rosendahl, Lasse; Kær, Søren Knudsen

    2007-01-01

    …is inherently more difficult due to the complexity of the solid biomass fuel bed on the grate, the turbulent reacting flow in the combustion chamber and the intensive interaction between them. This paper presents the CFD validation efforts for a modern large-scale biomass-fired grate boiler. Modeling and experiments are both done for the grate boiler. The comparison between them shows an overall acceptable agreement in tendency. However, at some measuring ports, big discrepancies between the modeling and the experiments are observed, mainly because the modeling-based boundary conditions (BCs) could differ…

  14. A unified modeling approach for physical experiment design and optimization in laser driven inertial confinement fusion

    Energy Technology Data Exchange (ETDEWEB)

    Li, Haiyan [Mechatronics Engineering School of Guangdong University of Technology, Guangzhou 510006 (China); Huang, Yunbao, E-mail: Huangyblhy@gmail.com [Mechatronics Engineering School of Guangdong University of Technology, Guangzhou 510006 (China); Jiang, Shaoen, E-mail: Jiangshn@vip.sina.com [Laser Fusion Research Center, China Academy of Engineering Physics, Mianyang 621900 (China); Jing, Longfei, E-mail: scmyking_2008@163.com [Laser Fusion Research Center, China Academy of Engineering Physics, Mianyang 621900 (China); Tianxuan, Huang; Ding, Yongkun [Laser Fusion Research Center, China Academy of Engineering Physics, Mianyang 621900 (China)

    2015-11-15

    Highlights: • A unified modeling approach for physical experiment design is presented. • Any laser facility can be flexibly defined and included with two scripts. • Complex targets and laser beams can be parametrically modeled for optimization. • Automatic mapping of laser beam energy facilitates target shape optimization. - Abstract: Physical experiment design and optimization is essential for laser driven inertial confinement fusion due to the high cost of each shot. However, only limited experiments with simple structures or shapes can be designed and evaluated in available codes on several laser facilities, and targets are usually defined by programming, which makes the design and optimization of complex-shape targets on arbitrary laser facilities difficult. A unified modeling approach for physical experiment design and optimization on any laser facility is presented in this paper. Its core ideas include: (1) any laser facility can be flexibly defined and included with two scripts, (2) complex-shape targets and laser beams can be parametrically modeled based on features, (3) an automatic scheme for mapping laser beam energy onto discrete mesh elements of targets enables targets or laser beams to be optimized without any additional interactive modeling or programming, and (4) efficient computation algorithms are additionally presented to evaluate radiation symmetry on the target. Finally, examples are demonstrated to validate the significance of such a unified modeling approach for physical experiment design and optimization in laser driven inertial confinement fusion.

  15. Predicting the future completing models of observed complex systems

    CERN Document Server

    Abarbanel, Henry

    2013-01-01

    Predicting the Future: Completing Models of Observed Complex Systems provides a general framework for the discussion of model building and validation across a broad spectrum of disciplines. This is accomplished through the development of an exact path integral for use in transferring information from observations to a model of the observed system. Through many illustrative examples drawn from models in neuroscience, fluid dynamics, geosciences, and nonlinear electrical circuits, the concepts are exemplified in detail. Practical numerical methods for approximate evaluations of the path integral are explored, and their use in designing experiments and determining a model's consistency with observations is investigated. Using highly instructive examples, the problems of data assimilation and the means to treat them are clearly illustrated. This book will be useful for students and practitioners of physics, neuroscience, regulatory networks, meteorology and climate science, network dynamics, fluid dynamics, and o...

  16. Evaluation of a micro-scale wind model's performance over realistic building clusters using wind tunnel experiments

    Science.gov (United States)

    Zhang, Ning; Du, Yunsong; Miao, Shiguang; Fang, Xiaoyi

    2016-08-01

    The simulation performance over complex building clusters of a wind simulation model (Wind Information Field Fast Analysis model, WIFFA) in a micro-scale air pollutant dispersion model system (Urban Microscale Air Pollution dispersion Simulation model, UMAPS) is evaluated using various wind tunnel experimental data, including the CEDVAL (Compilation of Experimental Data for Validation of Micro-Scale Dispersion Models) wind tunnel experiment data and the NJU-FZ experiment data (Nanjing University-Fang Zhuang neighborhood wind tunnel experiment data). The results show that the wind model can reproduce the vortices triggered by urban buildings well, and the flow patterns in urban street canyons and building clusters can also be represented. Due to the complex shapes of buildings and their distributions, the simulation deviations from the measurements are usually caused by the simplification of the building shapes and the determination of the key zone sizes. The computational efficiencies of different cases are also discussed in this paper. The model has a high computational efficiency compared to traditional numerical models that solve the Navier-Stokes equations, and can produce very high-resolution (1-5 m) wind fields of a complex neighborhood-scale urban building canopy (~1 km × 1 km) in less than 3 min when run on a personal computer.

  17. Complexity of Choice: Teachers' and Students' Experiences Implementing a Choice-Based Comprehensive School Health Model

    Science.gov (United States)

    Sulz, Lauren; Gibbons, Sandra; Naylor, Patti-Jean; Wharf Higgins, Joan

    2016-01-01

    Background: Comprehensive School Health models offer a promising strategy to elicit changes in student health behaviours. To maximise the effect of such models, the active involvement of teachers and students in the change process is recommended. Objective: The goal of this project was to gain insight into the experiences and motivations of…

  18. Modeling uranium(VI) adsorption onto montmorillonite under varying carbonate concentrations: A surface complexation model accounting for the spillover effect on surface potential

    Science.gov (United States)

    Tournassat, C.; Tinnacher, R. M.; Grangeon, S.; Davis, J. A.

    2018-01-01

    The prediction of U(VI) adsorption onto montmorillonite clay is confounded by the complexities of: (1) the montmorillonite structure in terms of adsorption sites on basal and edge surfaces, and the complex interactions between the electrical double layers at these surfaces, and (2) U(VI) solution speciation, which can include cationic, anionic and neutral species. Previous U(VI)-montmorillonite adsorption and modeling studies have typically expanded classical surface complexation modeling approaches, initially developed for simple oxides, to include both cation exchange and surface complexation reactions. However, previous models have not taken into account the unique characteristics of electrostatic surface potentials that occur at montmorillonite edge sites, where the electrostatic surface potential of basal plane cation exchange sites influences the surface potential of neighboring edge sites ('spillover' effect). A series of U(VI) - Na-montmorillonite batch adsorption experiments was conducted as a function of pH, with variable U(VI), Ca, and dissolved carbonate concentrations. Based on the experimental data, a new type of surface complexation model (SCM) was developed for montmorillonite, that specifically accounts for the spillover effect using the edge surface speciation model by Tournassat et al. (2016a). The SCM allows for a prediction of U(VI) adsorption under varying chemical conditions with a minimum number of fitting parameters, not only for our own experimental results, but also for a number of published data sets. The model agreed well with many of these datasets without introducing a second site type or including the formation of ternary U(VI)-carbonato surface complexes. The model predictions were greatly impacted by utilizing analytical measurements of dissolved inorganic carbon (DIC) concentrations in individual sample solutions rather than assuming solution equilibration with a specific partial pressure of CO2, even when the gas phase was

  19. Evidence of complex contagion of information in social media: An experiment using Twitter bots.

    Directory of Open Access Journals (Sweden)

    Bjarke Mønsted

    Full Text Available It has recently become possible to study the dynamics of information diffusion in techno-social systems at scale, due to the emergence of online platforms, such as Twitter, with millions of users. One question that systematically recurs is whether information spreads according to simple or complex dynamics: does each exposure to a piece of information have an independent probability of a user adopting it (simple contagion), or does this probability depend instead on the number of sources of exposure, increasing above some threshold (complex contagion)? Most studies to date are observational and, therefore, unable to disentangle the effects of confounding factors such as social reinforcement, homophily, limited attention, or network community structure. Here we describe a novel controlled experiment that we performed on Twitter using 'social bots' deployed to carry out coordinated attempts at spreading information. We propose two Bayesian statistical models describing simple and complex contagion dynamics, and test the competing hypotheses. We provide experimental evidence that the complex contagion model describes the observed information diffusion behavior more accurately than simple contagion. Future applications of our results include more effective defenses against malicious propaganda campaigns on social media, improved marketing and advertisement strategies, and design of effective network intervention techniques.
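
    The two competing hypotheses can be made concrete with toy likelihoods: under simple contagion each exposure converts independently, while under complex contagion the adoption probability jumps once a source threshold is reached. A hedged sketch, not the paper's Bayesian models; the data and parameter values are invented:

```python
import numpy as np

def p_adopt_simple(k, p):
    """Simple contagion: each of k exposures converts independently with prob p."""
    return 1.0 - (1.0 - p) ** k

def p_adopt_complex(k, p_low, p_high, threshold=2):
    """Complex contagion (minimal two-level version): adoption probability
    jumps once the number of distinct sources reaches the threshold."""
    return p_low if k < threshold else p_high

# Toy data: number of distinct exposure sources k and adopted yes/no
k_obs = np.array([1, 1, 2, 3, 1, 2, 4, 1, 3, 2])
adopted = np.array([0, 0, 1, 1, 0, 1, 1, 0, 1, 0])

def log_lik(p_model, **kw):
    p = np.array([p_model(k, **kw) for k in k_obs])
    return np.sum(adopted * np.log(p) + (1 - adopted) * np.log(1 - p))

# The model with the higher log-likelihood explains the adoptions better
print("simple :", log_lik(p_adopt_simple, p=0.15))
print("complex:", log_lik(p_adopt_complex, p_low=0.05, p_high=0.7))
```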

  20. Evidence of complex contagion of information in social media: An experiment using Twitter bots.

    Science.gov (United States)

    Mønsted, Bjarke; Sapieżyński, Piotr; Ferrara, Emilio; Lehmann, Sune

    2017-01-01

    It has recently become possible to study the dynamics of information diffusion in techno-social systems at scale, due to the emergence of online platforms, such as Twitter, with millions of users. One question that systematically recurs is whether information spreads according to simple or complex dynamics: does each exposure to a piece of information have an independent probability of a user adopting it (simple contagion), or does this probability depend instead on the number of sources of exposure, increasing above some threshold (complex contagion)? Most studies to date are observational and, therefore, unable to disentangle the effects of confounding factors such as social reinforcement, homophily, limited attention, or network community structure. Here we describe a novel controlled experiment that we performed on Twitter using 'social bots' deployed to carry out coordinated attempts at spreading information. We propose two Bayesian statistical models describing simple and complex contagion dynamics, and test the competing hypotheses. We provide experimental evidence that the complex contagion model describes the observed information diffusion behavior more accurately than simple contagion. Future applications of our results include more effective defenses against malicious propaganda campaigns on social media, improved marketing and advertisement strategies, and design of effective network intervention techniques.

  1. Complex fluids in biological systems experiment, theory, and computation

    CERN Document Server

    2015-01-01

    This book serves as an introduction to the continuum mechanics and mathematical modeling of complex fluids in living systems. The form and function of living systems are intimately tied to the nature of surrounding fluid environments, which commonly exhibit nonlinear and history dependent responses to forces and displacements. With ever-increasing capabilities in the visualization and manipulation of biological systems, research on the fundamental phenomena, models, measurements, and analysis of complex fluids has taken a number of exciting directions. In this book, many of the world’s foremost experts explore key topics such as: Macro- and micro-rheological techniques for measuring the material properties of complex biofluids and the subtleties of data interpretation Experimental observations and rheology of complex biological materials, including mucus, cell membranes, the cytoskeleton, and blood The motility of microorganisms in complex fluids and the dynamics of active suspensions Challenges and solut...

  2. Evaluation of soil flushing of complex contaminated soil: An experimental and modeling simulation study

    Energy Technology Data Exchange (ETDEWEB)

    Yun, Sung Mi; Kang, Christina S. [Department of Environmental Engineering, Konkuk University, 120 Neungdong-ro, Gwangjin-gu, Seoul 143-701 (Korea, Republic of); Kim, Jonghwa [Department of Industrial Engineering, Konkuk University, 120 Neungdong-ro, Gwangjin-gu, Seoul 143-701 (Korea, Republic of); Kim, Han S., E-mail: hankim@konkuk.ac.kr [Department of Environmental Engineering, Konkuk University, 120 Neungdong-ro, Gwangjin-gu, Seoul 143-701 (Korea, Republic of)

    2015-04-28

    Highlights: • Remediation of complex contaminated soil achieved by sequential soil flushing. • Removal of Zn, Pb, and heavy petroleum oils using 0.05 M citric acid and 2% SDS. • Unified desorption distribution coefficients modeled and experimentally determined. • Nonequilibrium models for the transport behavior of complex contaminants in soils. - Abstract: The removal of heavy metals (Zn and Pb) and heavy petroleum oils (HPOs) from a soil with complex contamination was examined by soil flushing. Desorption and transport behaviors of the complex contaminants were assessed by batch and continuous flow reactor experiments and through modeling simulations. Flushing a one-dimensional flow column packed with complex contaminated soil sequentially with citric acid then a surfactant resulted in the removal of 85.6% of Zn, 62% of Pb, and 31.6% of HPO. The desorption distribution coefficients, K_Ubatch and K_Lbatch, converged to constant values as C_e increased. An equilibrium model (ADR) and nonequilibrium models (TSNE and TRNE) were used to predict the desorption and transport of complex contaminants. The nonequilibrium models demonstrated better fits with the experimental values obtained from the column test than the equilibrium model. The ranges of K_Ubatch and K_Lbatch were very close to those of K_Ufit and K_Lfit determined from model simulations. The parameters (R, β, ω, α, and f) determined from model simulations were useful for characterizing the transport of contaminants within the soil matrix. The results of this study provide useful information for the operational parameters of the flushing process for soils with complex contamination.
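
    For illustration, a minimal sketch of the equilibrium advection-dispersion transport that the ADR model represents, with a linear retardation factor R = 1 + ρ_b·K_d/θ; the nonequilibrium TSNE/TRNE variants add kinetic sorption sites on top of this. All parameter values below are hypothetical, not the study's.

```python
import numpy as np

# Illustrative parameters (hypothetical, not from the study)
L, nx = 0.3, 60                 # column length [m], grid cells
v, D = 1e-5, 5e-8               # pore velocity [m/s], dispersion [m^2/s]
rho_b, theta, Kd = 1500.0, 0.4, 1e-4   # bulk density, porosity, sorption Kd
R = 1.0 + rho_b * Kd / theta           # linear retardation factor

dx = L / nx
dt = 0.5 * min(dx / v, dx**2 / (2.0 * D))  # respect advection/diffusion limits
C = np.zeros(nx)                            # initial aqueous concentration
C_in = 1.0                                  # flushing solution at the inlet

def step(C):
    """One explicit upwind step of the retarded advection-dispersion eq.:
       R dC/dt = -v dC/dx + D d2C/dx2"""
    Cn = C.copy()
    adv = -v * (C[1:-1] - C[:-2]) / dx                  # upwind advection
    dif = D * (C[2:] - 2.0 * C[1:-1] + C[:-2]) / dx**2  # dispersion
    Cn[1:-1] = C[1:-1] + dt * (adv + dif) / R
    Cn[0] = C_in                                        # inlet boundary
    Cn[-1] = Cn[-2]                                     # zero-gradient outlet
    return Cn

nsteps = 300
for _ in range(nsteps):
    C = step(C)
print(f"effluent relative concentration after {nsteps*dt/3600:.1f} h: {C[-1]:.3f}")
```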

  3. Beyond-the-Standard Model Higgs Physics using the ATLAS Experiment

    CERN Document Server

    Madsen, Alexander; The ATLAS collaboration

    2015-01-01

    The discovery of a Higgs boson with a mass of about 125 GeV has prompted the question of whether or not this particle is part of a larger and more complex Higgs sector than that envisioned in the Standard Model. In this talk, the latest results from the ATLAS experiment on Beyond-the-Standard Model (BSM) Higgs searches are outlined. Searches for additional Higgs bosons are presented and interpreted in well-motivated BSM Higgs frameworks, such as two-Higgs-doublet Models and the Minimal and Next to Minimal Supersymmetric Standard Model.

  4. Beyond-the-Standard Model Higgs Physics using the ATLAS Experiment

    CERN Document Server

    Vanadia, Marco; The ATLAS collaboration

    2015-01-01

    The discovery of a Higgs boson with a mass of about 125 GeV has prompted the question of whether or not this particle is part of a larger and more complex Higgs sector than that envisioned in the Standard Model. In this talk, the latest Run 1 results from the ATLAS Experiment on Beyond-the-Standard Model (BSM) Higgs searches are outlined. Searches for additional Higgs bosons are presented and interpreted in well-motivated BSM Higgs frameworks, including the two-Higgs-doublet Models and the Minimal and Next to Minimal Supersymmetric Standard Model.

  5. Beyond-the-Standard Model Higgs Physics using the ATLAS Experiment

    CERN Document Server

    Scutti, Federico; The ATLAS collaboration

    2015-01-01

    The discovery of a Higgs boson with a mass of about 125 GeV has prompted the question of whether or not this particle is part of a larger and more complex Higgs sector than that envisioned in the Standard Model. In this talk, the current results from the ATLAS experiment on Beyond-the-Standard Model (BSM) Higgs searches are summarized. Searches for additional Higgs bosons are presented and interpreted in well-motivated BSM Higgs frameworks, such as two-Higgs-doublet Models and the Minimal and Next to Minimal Supersymmetric Standard Model.

  6. Beyond-the-Standard Model Higgs Physics using the ATLAS Experiment

    CERN Document Server

    Nagata, Kazuki; The ATLAS collaboration

    2014-01-01

    The discovery of a Higgs boson with a mass of about 125 GeV has prompted the question of whether or not this particle is part of a larger and more complex Higgs sector than that envisioned in the Standard Model. In this talk, the current results from the ATLAS experiment on Beyond-the-Standard Model (BSM) Higgs searches are outlined. Searches for additional Higgs bosons are presented and interpreted in well-motivated BSM Higgs frameworks, such as two-Higgs-doublet Models and the Minimal and Next to Minimal Supersymmetric Standard Model.

  7. Beyond-the-Standard Model Higgs physics using the ATLAS experiment

    CERN Document Server

    Ernis, G; The ATLAS collaboration

    2014-01-01

    The discovery of a Higgs boson with a mass of about 125 GeV has prompted the question of whether or not this particle is part of a larger and more complex Higgs sector than that envisioned in the Standard Model. In this talk, the current results from the ATLAS experiment on Beyond-the-Standard Model (BSM) Higgs searches are outlined. Searches for additional Higgs bosons are presented and interpreted in well-motivated BSM Higgs frameworks, such as two-Higgs-doublet Models and the Minimal and Next to Minimal Supersymmetric Standard Model.

  8. Complexity, Modeling, and Natural Resource Management

    Directory of Open Access Journals (Sweden)

    Paul Cilliers

    2013-09-01

    Full Text Available This paper contends that natural resource management (NRM) issues are, by their very nature, complex and that both scientists and managers in this broad field will benefit from a theoretical understanding of complex systems. It starts off by presenting the core features of a view of complexity that not only deals with the limits to our understanding, but also points toward a responsible and motivating position. Everything we do involves explicit or implicit modeling, and as we can never have comprehensive access to any complex system, we need to be aware both of what we leave out as we model and of the implications of the choice of our modeling framework. One vantage point is never sufficient, as complexity necessarily implies that multiple (independent) conceptualizations are needed to engage the system adequately. We use two South African cases as examples of complex systems - restricting the case narratives mainly to the biophysical domain associated with NRM issues - to make the point that even the behavior of the biophysical subsystems themselves is already complex. From the insights into complex systems discussed in the first part of the paper and the lessons emerging from the way these cases have been dealt with in reality, we extract five interrelated generic principles for practicing science and management in complex NRM environments. These principles are then further elucidated using four further South African case studies - organized as two contrasting pairs - now focusing on the more difficult organizational and social side and comparing the human organizational endeavors in managing such systems.

  9. Humic Acid Complexation of Th, Hf and Zr in Ligand Competition Experiments: Metal Loading and pH Effects

    Science.gov (United States)

    Stern, Jennifer C.; Foustoukos, Dionysis I.; Sonke, Jeroen E.; Salters, Vincent J. M.

    2014-01-01

    The mobility of metals in soils and subsurface aquifers is strongly affected by sorption and complexation with dissolved organic matter, oxyhydroxides, clay minerals, and inorganic ligands. Humic substances (HS) are organic macromolecules with functional groups that have a strong affinity for binding metals, such as actinides. Thorium, often studied as an analog for tetravalent actinides, has also been shown to strongly associate with dissolved and colloidal HS in natural waters. The effects of HS on the mobilization dynamics of actinides are of particular interest in risk assessment of nuclear waste repositories. Here, we present conditional equilibrium binding constants (K_c,MHA) of thorium-, hafnium-, and zirconium-humic acid complexes from ligand competition experiments using capillary electrophoresis coupled with ICP-MS (CE-ICP-MS). Equilibrium dialysis ligand exchange (EDLE) experiments using size exclusion via a 1000 Da membrane were also performed to validate the CE-ICP-MS analysis. Experiments were performed at pH 3.5-7 with solutions containing one tetravalent metal (Th, Hf, or Zr), Elliot soil humic acid (EHA) or Pahokee peat humic acid (PHA), and EDTA. CE-ICP-MS and EDLE experiments yielded nearly identical binding constants for the metal-humic acid complexes, indicating that both methods are appropriate for examining metal speciation at conditions lower than neutral pH. We find that tetravalent metals form strong complexes with humic acids, with K_c,MHA several orders of magnitude above REE-humic complexes. Experiments were conducted at a range of dissolved HA concentrations to examine the effect of [HA]/[Th] molar ratio on K_c,MHA. At low metal loading conditions (i.e. elevated [HA]/[Th] ratios) the Th-HA binding constant reached values that were not affected by the relative abundance of humic acid and thorium. The importance of [HA]/[Th] molar ratios on constraining the equilibrium of MHA complexation is apparent when our estimated K_c,MHA values...
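
    As a worked illustration of what a conditional binding constant expresses (hypothetical numbers, not the study's data): K_c,MHA is the mass-action quotient of the complexed metal over the free metal and free ligand concentrations.

```python
import math

# Hypothetical dialysis-type numbers, for illustration only.
Th_total = 1.0e-9      # mol/L total thorium
f_bound = 0.95         # fraction measured as Th-HA complex
HA_free = 5.0e-6       # mol/L free humic acid binding sites (assumed)

Th_HA = f_bound * Th_total
Th_free = (1.0 - f_bound) * Th_total

# Conditional binding constant K_c,MHA = [Th-HA] / ([Th][HA])
Kc = Th_HA / (Th_free * HA_free)
print(f"log10 K_c,ThHA = {math.log10(Kc):.2f}")
```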

  10. Sutherland models for complex reflection groups

    International Nuclear Information System (INIS)

    Crampe, N.; Young, C.A.S.

    2008-01-01

    There are known to be integrable Sutherland models associated to every real root system, or, which is almost equivalent, to every real reflection group. Real reflection groups are special cases of complex reflection groups. In this paper we associate certain integrable Sutherland models to the classical family of complex reflection groups. Internal degrees of freedom are introduced, defining dynamical spin chains, and the freezing limit is taken to obtain static chains of Haldane-Shastry type. By considering the relation of these models to the usual BC_N case, we are led to systems with both real and complex reflection groups as symmetries. We demonstrate their integrability by means of new Dunkl operators, associated to wreath products of dihedral groups.
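
    For orientation, the textbook A_{N-1} trigonometric Sutherland Hamiltonian that these constructions generalize (a standard form; the paper's complex-reflection-group versions and the new Dunkl operators are not reproduced here):

```latex
H \;=\; -\frac{1}{2}\sum_{i=1}^{N}\frac{\partial^{2}}{\partial x_{i}^{2}}
\;+\;\sum_{1\le i<j\le N}
\frac{\beta(\beta-1)\,\pi^{2}/L^{2}}{\sin^{2}\bigl(\pi(x_{i}-x_{j})/L\bigr)}
```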

  11. Contrasting model complexity under a changing climate in a headwaters catchment.

    Science.gov (United States)

    Foster, L.; Williams, K. H.; Maxwell, R. M.

    2017-12-01

    Alpine, snowmelt-dominated catchments are the source of water for more than 1/6th of the world's population. These catchments are topographically complex, leading to steep weather gradients and nonlinear relationships between water and energy fluxes. Recent evidence suggests that alpine systems are more sensitive to climate warming, but these regions are vastly simplified in climate models and operational water management tools due to computational limitations. Simultaneously, point-scale observations are often extrapolated to larger regions where feedbacks can either exacerbate or mitigate locally observed changes. It is critical to determine whether projected climate impacts are robust to different methodologies, including model complexity. Using high performance computing and an integrated model of a representative headwater catchment we determined the hydrologic response from 30 projected climate changes to precipitation, temperature and vegetation for the Rocky Mountains. Simulations were run with 100m and 1km resolution, and with and without lateral subsurface flow in order to vary model complexity. We found that model complexity alters nonlinear relationships between water and energy fluxes. Higher-resolution models predicted larger changes per degree of temperature increase than lower resolution models, suggesting that reductions to snowpack, surface water, and groundwater due to warming may be underestimated in simple models. Increases in temperature were found to have a larger impact on water fluxes and stores than changes in precipitation, corroborating previous research showing that mountain systems are significantly more sensitive to temperature changes than to precipitation changes and that increases in winter precipitation are unlikely to compensate for increased evapotranspiration in a higher energy environment. These numerical experiments help to (1) bracket the range of uncertainty in published literature of climate change impacts on headwater...

  12. Complexity, parameter sensitivity and parameter transferability in the modelling of floodplain inundation

    Science.gov (United States)

    Bates, P. D.; Neal, J. C.; Fewtrell, T. J.

    2012-12-01

    In this paper we consider two related questions. First, we address the issue of how much physical complexity is necessary in a model in order to simulate floodplain inundation to within validation data error. This is achieved through development of a single code/multiple physics hydraulic model (LISFLOOD-FP) where different degrees of complexity can be switched on or off. Different configurations of this code are applied to four benchmark test cases, and compared to the results of a number of industry standard models. Second, we address the issue of how parameter sensitivity and transferability change with increasing complexity using numerical experiments with models of different physical and geometric intricacy. Hydraulic models are a good example system with which to address such generic modelling questions as: (1) they have a strong physical basis; (2) there is only one set of equations to solve; (3) they require only topography and boundary conditions as input data; and (4) they typically require only a single free parameter, namely boundary friction. In terms of the complexity required, we show that for the problem of sub-critical floodplain inundation a number of codes of different dimensionality and resolution can be found to fit uncertain model validation data equally well, and that in this situation Occam's razor emerges as a useful logic to guide model selection. We also find that model skill usually improves more rapidly with increases in model spatial resolution than increases in physical complexity, and that standard approaches to testing hydraulic models against laboratory data or analytical solutions may fail to identify this important fact. Lastly, we find that in benchmark testing studies significant differences can exist between codes with identical numerical solution techniques as a result of auxiliary choices regarding the specifics of model implementation that are frequently unreported by code developers. As a consequence, making sound...
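
    One concrete example of a reduced-physics scheme of the kind LISFLOOD-FP can switch on is the local inertial flux update of Bates et al. (2010). The sketch below is an illustrative reimplementation under assumed parameters, not the LISFLOOD-FP source code.

```python
import numpy as np

def inertial_flux(q, h_flow, dwl_dx, n_man, g=9.81, dt=1.0):
    """Local inertial update for the unit-width flux q [m^2/s] across a cell
    interface, in the style of Bates et al. (2010); a sketch, not the
    LISFLOOD-FP source. dwl_dx is the water-surface slope (dimensionless),
    h_flow the effective flow depth [m], n_man the Manning coefficient."""
    num = q - g * h_flow * dt * dwl_dx
    den = 1.0 + g * dt * n_man**2 * np.abs(q) / h_flow**(7.0 / 3.0)
    return num / den

# Example: water surface falling downstream at 1e-3 over a 0.5 m flow depth
q = 0.0
for _ in range(500):
    q = inertial_flux(q, h_flow=0.5, dwl_dx=-1e-3, n_man=0.03)
print(f"steady unit-width flux: {q:.3f} m^2/s")  # ~ Manning's equation value
```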

  13. Mechanical Interaction in Pressurized Pipe Systems: Experiments and Numerical Models

    Directory of Open Access Journals (Sweden)

    Mariana Simão

    2015-11-01

    Full Text Available The dynamic interaction between the unsteady flow occurrence and the resulting vibration of the pipe are analyzed based on experiments and numerical models. Waterhammer, structural dynamics and fluid–structure interaction (FSI) are the main subjects dealt with in this study. Firstly, a 1D model is developed based on the method of characteristics (MOC) using specific damping coefficients for initial components associated with rheological pipe material behavior, structural and fluid deformation, and type of anchored structural supports. Secondly, a 3D coupled complex model based on Computational Fluid Dynamics (CFD), using a Finite Element Method (FEM), is also applied to predict and distinguish the FSI events. Herein, a specific hydrodynamic model of viscosity to replicate the operation of a valve was also developed to minimize the number of mesh elements and the complexity of the system. The importance of integrated analysis of fluid–structure interaction, especially in non-rigidly anchored pipe systems, is equally emphasized. The developed models are validated through experimental tests.
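
    A minimal sketch of the classic method of characteristics (MOC) interior-node update that such 1D waterhammer models build on (Wylie-Streeter form, without the paper's rheological damping coefficients or FSI coupling; pipe parameters are hypothetical):

```python
import numpy as np

# Hypothetical pipe and grid parameters (illustrative only)
a = 1200.0                       # pressure wave speed [m/s]
g = 9.81
D, A = 0.1, np.pi * 0.05**2      # diameter [m], cross-section area [m^2]
f = 0.02                         # Darcy-Weisbach friction factor
nx = 21
dx = 100.0 / (nx - 1)
dt = dx / a                      # Courant condition Cr = 1

B = a / (g * A)                  # characteristic impedance
Rf = f * dx / (2.0 * g * D * A**2)   # lumped friction resistance term

H = np.full(nx, 50.0)            # piezometric head [m]
Q = np.full(nx, 0.01)            # discharge [m^3/s]

def moc_interior(H, Q):
    """One MOC time step for interior nodes using the C+ and C-
    characteristics; boundary nodes must be handled separately."""
    Hn, Qn = H.copy(), Q.copy()
    Cp = H[:-2] + B * Q[:-2] - Rf * Q[:-2] * np.abs(Q[:-2])   # C+ from i-1
    Cm = H[2:] - B * Q[2:] + Rf * Q[2:] * np.abs(Q[2:])       # C- from i+1
    Hn[1:-1] = 0.5 * (Cp + Cm)
    Qn[1:-1] = (Cp - Cm) / (2.0 * B)
    return Hn, Qn

H, Q = moc_interior(H, Q)
print(f"mid-pipe head {H[nx // 2]:.2f} m, discharge {Q[nx // 2]:.4f} m^3/s")
```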

  14. The complexity of role balance: support for the Model of Juggling Occupations.

    Science.gov (United States)

    Evans, Kiah L; Millsteed, Jeannine; Richmond, Janet E; Falkmer, Marita; Falkmer, Torbjorn; Girdler, Sonya J

    2014-09-01

    This pilot study aimed to establish the appropriateness of the Model of Juggling Occupations in exploring the complex experience of role balance amongst working women with family responsibilities living in Perth, Australia. In meeting this aim, an evaluation was conducted of a case study design, where data were collected through a questionnaire, time diary, and interview. Overall role balance varied over time and across participants. Positive indicators of role balance occurred frequently in the questionnaires and time diaries, despite the interviews revealing a predominance of negative evaluations of role balance. Between-role balance was achieved through compatible role overlap, buffering, and renewal. An exploration of within-role balance factors demonstrated that occupational participation, values, interests, personal causation, and habits were related to role balance. This pilot study concluded that the Model of Juggling Occupations is an appropriate conceptual framework to explore the complex and dynamic experience of role balance amongst working women with family responsibilities. It was also confirmed that the case study design, including the questionnaire, time diary, and interview methods, is suitable for researching role balance from this perspective.

  15. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soil waters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  16. Creation of a simplified benchmark model for the neptunium sphere experiment

    International Nuclear Information System (INIS)

    Mosteller, Russell D.; Loaiza, David J.; Sanchez, Rene G.

    2004-01-01

    Although neptunium is produced in significant amounts by nuclear power reactors, its critical mass is not well known. In addition, sizeable uncertainties exist for its cross sections. As an important step toward resolution of these issues, a critical experiment was conducted in 2002 at the Los Alamos Critical Experiments Facility. In the experiment, a 6-kg sphere of 237Np was surrounded by nested hemispherical shells of highly enriched uranium. The shells were required in order to reach a critical condition. Subsequently, a detailed model of the experiment was developed. This model faithfully reproduces the components of the experiment, but it is geometrically complex. Furthermore, the isotopics analysis upon which that model is based omits nearly 1% of the mass of the sphere. A simplified benchmark model has been constructed that retains all of the neutronically important aspects of the detailed model and substantially reduces the computer resources required for the calculation. The reactivity impact of each of the simplifications is quantified, including the effect of the missing mass. A complete set of specifications for the benchmark is included in the full paper. Both the detailed and simplified benchmark models underpredict k_eff by more than 1% Δk. This discrepancy supports the suspicion that better cross sections are needed for 237Np.

  17. Are Model Transferability And Complexity Antithetical? Insights From Validation of a Variable-Complexity Empirical Snow Model in Space and Time

    Science.gov (United States)

    Lute, A. C.; Luce, Charles H.

    2017-11-01

    The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modeling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and Snow Residence Time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low to moderate complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.
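
    The following sketch illustrates the paper's central idea of non-random (space-for-time) validation, using synthetic data: models of increasing complexity are fitted on warmer sites and tested on the coldest quartile, where extrapolation is required. Data, predictors, and model forms are stand-ins, not the SNOTEL analysis itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for site predictors: mean winter temperature [degC]
# and cumulative winter precipitation [mm]; response mimics April 1 SWE.
n = 300
T = rng.uniform(-10.0, 4.0, n)
P = rng.uniform(200.0, 1500.0, n)
swe = np.maximum(0.0, 0.6 * P - 25.0 * T + rng.normal(0.0, 60.0, n))

# Non-random split: calibrate on warmer sites, validate on the coldest
# quartile -- a space-for-time transfer that forces extrapolation.
cold = T < np.quantile(T, 0.25)
Xtr = np.column_stack([T[~cold], P[~cold]]); ytr = swe[~cold]
Xte = np.column_stack([T[cold], P[cold]]);   yte = swe[cold]

mu, sd = Xtr.mean(axis=0), Xtr.std(axis=0)   # standardize with training stats
Ztr, Zte = (Xtr - mu) / sd, (Xte - mu) / sd

def design(Z, degree):
    """Polynomial features per predictor (cross terms omitted for brevity)."""
    cols = [np.ones(len(Z))]
    for d in range(1, degree + 1):
        cols += [Z[:, 0] ** d, Z[:, 1] ** d]
    return np.column_stack(cols)

for degree in (1, 2, 4, 6):                  # increasing model complexity
    coef, *_ = np.linalg.lstsq(design(Ztr, degree), ytr, rcond=None)
    rmse = np.sqrt(np.mean((design(Zte, degree) @ coef - yte) ** 2))
    print(f"degree {degree}: transfer RMSE = {rmse:.1f} (SWE units)")
```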

  18. Implementation of a complex multi-phase equation of state for cerium and its correlation with experiment

    Energy Technology Data Exchange (ETDEWEB)

    Cherne, Frank J [Los Alamos National Laboratory; Jensen, Brian J [Los Alamos National Laboratory; Elkin, Vyacheslav M [VNIITF

    2009-01-01

    The complexity of cerium combined with its interesting material properties makes it a desirable material to examine dynamically. Characteristics such as the softening of the material before the phase change, low pressure solid-solid phase change, predicted low pressure melt boundary, and the solid-solid critical point add complexity to the construction of its equation of state. Currently, we are incorporating a feedback loop between a theoretical understanding of the material and an experimental understanding. Using a model equation of state for cerium we compare calculated wave profiles with experimental wave profiles for a number of front surface impact (cerium impacting a plated window) experiments. Using the calculated release isentrope we predict the temperature of the observed rarefaction shock. These experiments showed that the release state occurs at different magnitudes, thus allowing us to infer where the dynamic γ-α phase boundary is.

  19. Numerical experiments on 2D strongly coupled complex plasmas

    International Nuclear Information System (INIS)

    Hou Lujing; Ivlev, A V; Thomas, H M; Morfill, G E

    2010-01-01

    The Brownian Dynamics simulation method is briefly reviewed first and then applied to study some non-equilibrium phenomena in strongly coupled complex plasmas, such as heat transfer processes, shock wave excitation/propagation and particle trapping, by directly mimicking the real experiments.
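
    A minimal Brownian Dynamics sketch of the kind reviewed: overdamped Langevin integration of particles interacting through a Yukawa (screened Coulomb) potential, the standard pair interaction in complex-plasma simulations. Units and parameters are dimensionless and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Dimensionless 2D Brownian dynamics with Yukawa (screened Coulomb) repulsion
N, L = 64, 20.0                  # particles, periodic box edge
kappa = 1.0                      # screening parameter
gamma, kT, dt = 1.0, 0.05, 1e-3  # friction, temperature, time step
pos = rng.uniform(0.0, L, (N, 2))

def yukawa_forces(pos):
    """Pair forces for U(r) = exp(-kappa*r)/r with the minimum-image
    convention; each force is directed away from the neighbour (repulsive)."""
    F = np.zeros_like(pos)
    for i in range(N - 1):
        d = pos[i + 1:] - pos[i]             # vectors from i to each j > i
        d -= L * np.round(d / L)             # minimum image
        r = np.linalg.norm(d, axis=1)
        mag = np.exp(-kappa * r) * (1.0 + kappa * r) / r**3
        F[i] -= (mag[:, None] * d).sum(axis=0)   # push i away from the j's
        F[i + 1:] += mag[:, None] * d            # equal and opposite on j's
    return F

for _ in range(2000):
    F = yukawa_forces(pos)
    noise = rng.normal(0.0, np.sqrt(2.0 * kT * dt / gamma), pos.shape)
    pos = (pos + dt * F / gamma + noise) % L     # overdamped Langevin step

print("sample particle positions after relaxation:\n", pos[:3])
```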

  20. Beyond-the-Standard Model Higgs Physics using the ATLAS Experiment

    CERN Document Server

    Vanadia, Marco; The ATLAS collaboration

    2015-01-01

    The discovery of a Higgs boson with a mass of about 125 GeV/c² has prompted the question of whether or not this particle is part of a larger and more complex Higgs sector than that envisioned in the Standard Model. In this report, the latest Run 1 results from the ATLAS Experiment on Beyond-the-Standard Model (BSM) Higgs searches are outlined. Searches for additional Higgs bosons are presented and interpreted in well-motivated BSM Higgs frameworks, including the two-Higgs-doublet Models and the Minimal and Next to Minimal Supersymmetric Standard Model.

  1. Modeling Musical Complexity: Commentary on Eerola (2016)

    Directory of Open Access Journals (Sweden)

    Joshua Albrecht

    2016-07-01

    Full Text Available In his paper, "Expectancy violation and information-theoretic models of melodic complexity," Eerola compares a number of models that correlate musical features of monophonic melodies with participant ratings of perceived melodic complexity. He finds that fairly strong results can be achieved using several different approaches to modeling perceived melodic complexity. The data used in this study are gathered from several previously published studies that use widely different types of melodies, including isochronous folk melodies, isochronous 12-tone rows, and rhythmically complex African folk melodies. This commentary first briefly reviews the article's method and main findings, then suggests a rethinking of the theoretical framework of the study. Finally, some of the methodological issues of the study are discussed.
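
    One toy example of the information-theoretic family of complexity measures discussed (not Eerola's actual model): the Shannon entropy of a melody's interval distribution, which is low for repetitive melodies and higher for tone-row-like material.

```python
import math
from collections import Counter

def interval_entropy(midi_pitches):
    """Shannon entropy (bits) of the melodic-interval distribution --
    a toy information-theoretic complexity proxy, not Eerola's model."""
    intervals = [b - a for a, b in zip(midi_pitches, midi_pitches[1:])]
    counts = Counter(intervals)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A repetitive melody vs. a 12-tone-row-like line (MIDI note numbers)
print(interval_entropy([60, 62, 60, 62, 60, 62, 60]))       # low complexity
print(interval_entropy([60, 66, 61, 69, 63, 70, 64, 71]))   # higher complexity
```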

  2. Effect of glutamic acid on copper sorption onto kaolinite. Batch experiments and surface complexation modeling

    Energy Technology Data Exchange (ETDEWEB)

    Karimzadeh, Lotfallah; Barthen, Robert; Gruendig, Marion; Franke, Karsten; Lippmann-Pipke, Johanna [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany). Reactive Transport; Stockmann, Madlen [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany). Surface Processes

    2017-06-01

    In this work, we study the mobility behavior of Cu(II) under conditions related to an alternative, neutrophile biohydrometallurgical Cu(II) leaching approach. Sorption of copper onto kaolinite influenced by glutamic acid (Glu) was investigated in the presence of 0.01 M NaClO4 by means of binary and ternary batch adsorption measurements over a pH range of 4 to 9 and surface complexation modeling.

  3. Effect of glutamic acid on copper sorption onto kaolinite. Batch experiments and surface complexation modeling

    International Nuclear Information System (INIS)

    Karimzadeh, Lotfallah; Barthen, Robert; Gruendig, Marion; Franke, Karsten; Lippmann-Pipke, Johanna; Stockmann, Madlen

    2017-01-01

    In this work, we study the mobility behavior of Cu(II) under conditions related to an alternative, neutrophile biohydrometallurgical Cu(II) leaching approach. Sorption of copper onto kaolinite influenced by glutamic acid (Glu) was investigated in the presence of 0.01 M NaClO_4 by means of binary and ternary batch adsorption measurements over a pH range of 4 to 9 and surface complexation modeling.

  4. Adsorption of uranium(VI) to manganese oxides: X-ray absorption spectroscopy and surface complexation modeling.

    Science.gov (United States)

    Wang, Zimeng; Lee, Sung-Woo; Catalano, Jeffrey G; Lezama-Pacheco, Juan S; Bargar, John R; Tebo, Bradley M; Giammar, Daniel E

    2013-01-15

    The mobility of hexavalent uranium in soil and groundwater is strongly governed by adsorption to mineral surfaces. As strong naturally occurring adsorbents, manganese oxides may significantly influence the fate and transport of uranium. Models for U(VI) adsorption over a broad range of chemical conditions can improve predictive capabilities for uranium transport in the subsurface. This study integrated batch experiments of U(VI) adsorption to synthetic and biogenic MnO2, surface complexation modeling, ζ-potential analysis, and molecular-scale characterization of adsorbed U(VI) with extended X-ray absorption fine structure (EXAFS) spectroscopy. The surface complexation model included inner-sphere monodentate and bidentate surface complexes and a ternary uranyl-carbonato surface complex, which was consistent with the EXAFS analysis. The model could successfully simulate adsorption results over a broad range of pH and dissolved inorganic carbon concentrations. U(VI) adsorption to synthetic δ-MnO2 appears to be stronger than to biogenic MnO2, and the differences in adsorption affinity and capacity are not associated with any substantial difference in U(VI) coordination.
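
    The core calculation inside any surface complexation model is a coupled mass-action/mass-balance solve. The sketch below strips this down to a single monodentate site with no electrostatic correction and an assumed conditional constant, so it is far simpler than the paper's model (which includes bidentate and ternary carbonato complexes constrained by EXAFS):

```python
from scipy.optimize import brentq

# Hypothetical constants for a single reaction  >SOH + UO2^2+ = >SOUO2+ + H+
logK = 2.5            # conditional mass-action constant (assumed)
site_tot = 1.0e-5     # total surface sites [mol/L]
U_tot = 1.0e-6        # total U(VI) [mol/L]
pH = 6.0
H = 10.0 ** (-pH)
K = 10.0 ** logK

def residual(U_free):
    """Mass balance on U: bound = site_tot * x/(1+x) with x = K*[U]/[H],
    a Langmuir-type isotherm that follows from the mass-action law."""
    x = K * U_free / H
    bound = site_tot * x / (1.0 + x)
    return U_free + bound - U_tot

U_free = brentq(residual, 1e-20, U_tot)   # solve for free U(VI)
frac_ads = 1.0 - U_free / U_tot
print(f"fraction of U(VI) adsorbed at pH {pH}: {frac_ads:.2%}")
```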

  5. INPUT DATA OF BURNING WOOD FOR CFD MODELLING USING SMALL-SCALE EXPERIMENTS

    Directory of Open Access Journals (Sweden)

    Petr Hejtmánek

    2017-12-01

    Full Text Available The paper presents an option for acquiring simplified input data for the modelling of burning wood in CFD programmes. The option lies in combining data from small- and molecular-scale experiments in order to describe the material as a one-reaction material property. Such a virtual material would spread fire, develop the fire according to the surrounding environment, and could be extinguished without using a complex molecular reaction description. A series of experiments including elemental analysis, thermogravimetric analysis, differential thermal analysis, and combustion analysis was performed. An FDS model of burning pine wood in a cone calorimeter was then built, in which those values were used. The model was validated against the HRR (Heat Release Rate) from the real cone calorimeter experiment. The results show that for the purpose of CFD modelling the effective heat of combustion, which is one of the basic material properties for fire modelling affecting the total intensity of burning, should be used. Using the net heat of combustion in the model leads to higher values of HRR in comparison to the real experiment data. Considering all the results shown in this paper, it is possible to simulate the burning of wood using the extrapolated data obtained in small-scale experiments.

  6. The effects of model and data complexity on predictions from species distributions models

    DEFF Research Database (Denmark)

    García-Callejas, David; Bastos, Miguel

    2016-01-01

    How complex does a model need to be to provide useful predictions is a matter of continuous debate across environmental sciences. In the species distributions modelling literature, studies have demonstrated that more complex models tend to provide better fits. However, studies have also shown...... that predictive performance does not always increase with complexity. Testing of species distributions models is challenging because independent data for testing are often lacking, but a more general problem is that model complexity has never been formally described in such studies. Here, we systematically...

  7. A Primer for Model Selection: The Decisive Role of Model Complexity

    Science.gov (United States)

    Höge, Marvin; Wöhling, Thomas; Nowak, Wolfgang

    2018-03-01

    Selecting a "best" model among several competing candidate models poses an often encountered problem in water resources modeling (and other disciplines which employ models). For a modeler, the best model fulfills a certain purpose best (e.g., flood prediction), which is typically assessed by comparing model simulations to data (e.g., stream flow). Model selection methods find the "best" trade-off between good fit with data and model complexity. In this context, the interpretations of model complexity implied by different model selection methods are crucial, because they represent different underlying goals of modeling. Over the last decades, numerous model selection criteria have been proposed, but modelers who primarily want to apply a model selection criterion often face a lack of guidance for choosing the right criterion that matches their goal. We propose a classification scheme for model selection criteria that helps to find the right criterion for a specific goal, i.e., which employs the correct complexity interpretation. We identify four model selection classes which seek to achieve high predictive density, low predictive error, high model probability, or shortest compression of data. These goals can be achieved by following either nonconsistent or consistent model selection and by either incorporating a Bayesian parameter prior or not. We allocate commonly used criteria to these four classes, analyze how they represent model complexity and what this means for the model selection task. Finally, we provide guidance on choosing the right type of criteria for specific model selection tasks. (A quick guide through all key points is given at the end of the introduction.)
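
    As a concrete illustration of two criteria from the classes described, the sketch below computes AIC (oriented toward low predictive error, nonconsistent) and BIC (oriented toward high model probability, consistent) for nested polynomial models on synthetic data; additive constants independent of the model are dropped, as is conventional.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 80
x = np.linspace(0.0, 1.0, n)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0.0, 0.2, n)  # true model: quadratic

print("deg   AIC      BIC")
for deg in range(1, 7):                     # polynomial degree = complexity
    X = np.vander(x, deg + 1)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ coef) ** 2)
    p = deg + 1                             # number of fitted parameters
    aic = n * np.log(rss / n) + 2 * p           # Akaike information criterion
    bic = n * np.log(rss / n) + p * np.log(n)   # Bayesian/Schwarz criterion
    print(f"{deg}   {aic:8.1f} {bic:8.1f}")   # both should bottom out at deg 2
```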

  8. Stated Choice Experiments with Complex Ecosystem Changes: The Effect of Information Formats on Estimated Variances and Choice Parameters

    OpenAIRE

    Hoehn, John P.; Lupi, Frank; Kaplowitz, Michael D.

    2010-01-01

    Stated choice experiments about ecosystem changes involve complex information. This study examines whether the format in which ecosystem information is presented to respondents affects stated choice outcomes. Our analysis develops a utility-maximizing model to describe respondent behavior. The model shows how alternative questionnaire formats alter respondents’ use of filtering heuristics and result in differences in preference estimates. Empirical results from a large-scale stated choice e...

  9. Uranium(VI) retention on quartz and kaolinite. Experiments and modelling

    International Nuclear Information System (INIS)

    Mignot, G.

    2001-01-01

    The behaviour of uranium in the geosphere is an important issue for safety performance assessment of nuclear waste repositories, or in the context of sites contaminated by mining activity related to the nuclear field. Under aerobic conditions, the fate of uranium is mainly governed by the ability of minerals to sorb U(VI) aqueous species. Hence, a thorough understanding of U(VI) sorption processes on minerals is required to provide a valuable prediction of U(VI) migration in the environment. In this study, we performed sorption/desorption experiments of U(VI) on quartz and kaolinite, for systems favouring the formation in solution (i) of UO2^2+ and monomeric hydrolysis products, or (ii) of di-/tri-meric uranyl aqueous species, and/or U(VI) colloids or UO2(OH)2 precipitates, or (iii) of uranyl-carbonate complexes. Particular attention was paid to determining the surface characteristics of the solids and their modification due to dissolution/precipitation processes during the experiments. A double layer surface complexation model was applied to our experimental data in order to derive surface complexation equilibria and intrinsic constants which allow a valuable description of U(VI) retention over a wide range of pH, ionic strength, initial concentration of uranium [0.1-10 μM] and solid-solution equilibration time. U(VI) sorption on quartz was successfully modeled by using two sets of adsorption equilibria, assuming (i) the formation of the surface complexes SiOUO2^+, SiOUO2OH and SiO(UO2)3(OH)5, or (ii) the formation of the mono-dentate complex SiO(UO2)3(OH)5 and of the bidentate complex (SiO)2UO2. Assumptions on the density of each type of surface site of kaolinite and on their acid-base properties were made from potentiometric titrations of kaolinite suspensions. We proposed on such a basis a set of surface complexation equilibria which accounts for U(VI) uptake on kaolinite over a wide range of chemical conditions, with aluminol edge sites as...

  10. Epidemic modeling in complex realities.

    Science.gov (United States)

    Colizza, Vittoria; Barthélemy, Marc; Barrat, Alain; Vespignani, Alessandro

    2007-04-01

    In our global world, the increasing complexity of social relations and transport infrastructures are key factors in the spread of epidemics. In recent years, the increasing availability of computer power has made it possible both to obtain reliable data quantifying the complexity of the networks on which epidemics may propagate and to envision computational tools able to tackle the analysis of such propagation phenomena. These advances have made evident the limits of homogeneous assumptions and simple spatial diffusion approaches, and have stimulated the inclusion of complex features and heterogeneities relevant to the description of epidemic diffusion. In this paper, we review recent progress in integrating complex systems and network analysis with epidemic modelling, and focus on the impact of the various complex features of real systems on the dynamics of epidemic spreading.
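
    A minimal sketch of the network-epidemic simulations this line of work relies on: discrete-time SIR dynamics on a random contact network, where each susceptible node's infection probability depends on its number of infected neighbours. The network and rates are illustrative stand-ins for the data-driven structures the review describes.

```python
import numpy as np

rng = np.random.default_rng(7)

# Erdos-Renyi contact network (a stand-in for the heterogeneous empirical
# mobility/contact networks discussed in the review)
N, p_edge = 500, 0.02
A = rng.random((N, N)) < p_edge
A = np.triu(A, 1)
A = A | A.T                                # symmetric, no self-loops

beta, mu = 0.06, 0.2                       # per-contact infection, recovery
state = np.zeros(N, dtype=int)             # 0 = S, 1 = I, 2 = R
state[rng.choice(N, 5, replace=False)] = 1 # seed a few infected nodes

history = []
for t in range(100):
    infected = state == 1
    # each S node is infected w.p. 1 - (1 - beta)^(# infected neighbours)
    n_inf_nbrs = A[:, infected].sum(axis=1)
    p_inf = 1.0 - (1.0 - beta) ** n_inf_nbrs
    new_inf = (state == 0) & (rng.random(N) < p_inf)
    new_rec = infected & (rng.random(N) < mu)
    state[new_inf] = 1
    state[new_rec] = 2
    history.append(int((state == 1).sum()))

print("peak prevalence:", max(history),
      "| final recovered:", int((state == 2).sum()))
```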

  11. The Kuramoto model in complex networks

    Science.gov (United States)

    Rodrigues, Francisco A.; Peron, Thomas K. DM.; Ji, Peng; Kurths, Jürgen

    2016-01-01

    Synchronization of an ensemble of oscillators is an emergent phenomenon present in several complex systems, ranging from social and physical to biological and technological systems. The most successful approach to describing how coherent behavior emerges in these complex systems is given by the paradigmatic Kuramoto model. This model has traditionally been studied in complete graphs. However, besides being intrinsically dynamical, complex systems present very heterogeneous structure, which can be represented as complex networks. This report is dedicated to reviewing the main contributions in the field of synchronization in networks of Kuramoto oscillators. In particular, we provide an overview of the impact of network patterns on the local and global dynamics of coupled phase oscillators. We cover many relevant topics, which encompass a description of the most used analytical approaches and the analysis of several numerical results. Furthermore, we discuss recent developments on variations of the Kuramoto model in networks, including the presence of noise and inertia. The rich potential for applications is discussed for special fields in engineering, neuroscience, physics and Earth science. Finally, we conclude by discussing problems that remain open after the last decade of intensive research on the Kuramoto model and point out some promising directions for future research.
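
    A minimal numerical sketch of the networked Kuramoto model reviewed here: Euler integration of dθ_i/dt = ω_i + λ Σ_j A_ij sin(θ_j − θ_i) on a random graph, with the usual global order parameter r measuring coherence. The graph, coupling strength, and integration settings are illustrative choices, not values from the report.

```python
import numpy as np

rng = np.random.default_rng(3)

# Random (Erdos-Renyi) coupling network
N, p_edge, lam = 200, 0.1, 0.3          # oscillators, edge prob., coupling/link
A = rng.random((N, N)) < p_edge
A = np.triu(A, 1)
A = (A | A.T).astype(float)             # symmetric adjacency, no self-loops

omega = rng.normal(0.0, 1.0, N)         # natural frequencies
theta = rng.uniform(0.0, 2.0 * np.pi, N)
dt = 0.01

for _ in range(5000):
    # dtheta_i/dt = omega_i + lam * sum_j A_ij * sin(theta_j - theta_i)
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta += dt * (omega + lam * coupling)

r = np.abs(np.exp(1j * theta).mean())   # order parameter: 0 incoherent, 1 locked
print(f"order parameter r = {r:.2f}")
```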

  12. Computational model of dose response for low-LET-induced complex chromosomal aberrations

    International Nuclear Information System (INIS)

    Eidelman, Y.A.; Andreev, S.G.

    2015-01-01

    Experiments with full-colour mFISH chromosome painting have revealed a high yield of radiation-induced complex chromosomal aberrations (CAs). The ratio of complex to simple aberrations is dependent on cell type and linear energy transfer. Theoretical analysis has demonstrated that the mechanism of CA formation as a result of interaction between lesions at the surface of chromosome territories does not explain the high complexes-to-simples ratio in human lymphocytes. The possible origin of high yields of γ-induced complex CAs was investigated in the present work by computer simulation. CAs were studied on the basis of chromosome structure and dynamics modelling and the hypothesis of CA formation on nuclear centres. The spatial organisation of all chromosomes in a human interphase nucleus was predicted by simulation of the mitosis-to-interphase chromosome structure transition. Two scenarios of CA formation were analysed, 'static' (existing in a nucleus prior to irradiation) centres and 'dynamic' (formed in response to irradiation) centres. The modelling results reveal that under certain conditions, both scenarios explain quantitatively the dose-response relationships for both simple and complex γ-induced inter-chromosomal exchanges observed by mFISH chromosome painting in the first post-irradiation mitosis in human lymphocytes. (authors)

  13. Chromate adsorption on selected soil minerals: Surface complexation modeling coupled with spectroscopic investigation

    Energy Technology Data Exchange (ETDEWEB)

    Veselská, Veronika, E-mail: veselskav@fzp.czu.cz [Department of Environmental Geosciences, Faculty of Environmental Sciences, Czech University of Life Sciences Prague, Kamýcka 129, CZ-16521, Prague (Czech Republic); Fajgar, Radek [Department of Analytical and Material Chemistry, Institute of Chemical Process Fundamentals of the CAS, v.v.i., Rozvojová 135/1, CZ-16502, Prague (Czech Republic); Číhalová, Sylva [Department of Environmental Geosciences, Faculty of Environmental Sciences, Czech University of Life Sciences Prague, Kamýcka 129, CZ-16521, Prague (Czech Republic); Bolanz, Ralph M. [Institute of Geosciences, Friedrich-Schiller-University Jena, Carl-Zeiss-Promenade 10, DE-07745, Jena (Germany); Göttlicher, Jörg; Steininger, Ralph [ANKA Synchrotron Radiation Facility, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, DE-76344, Eggenstein-Leopoldshafen (Germany); Siddique, Jamal A.; Komárek, Michael [Department of Environmental Geosciences, Faculty of Environmental Sciences, Czech University of Life Sciences Prague, Kamýcka 129, CZ-16521, Prague (Czech Republic)

    2016-11-15

    Highlights: • Study of Cr(VI) adsorption on soil minerals over a large range of conditions. • Combined surface complexation modeling and spectroscopic techniques. • Diffuse-layer and triple-layer models used to obtain fits to experimental data. • Speciation of Cr(VI) and Cr(III) was assessed. - Abstract: This study investigates the mechanisms of Cr(VI) adsorption on natural clay (illite and kaolinite) and synthetic (birnessite and ferrihydrite) minerals, including its speciation changes, combining quantitative thermodynamically based mechanistic surface complexation models (SCMs) with spectroscopic measurements. Series of adsorption experiments have been performed at different pH values (3–10), ionic strengths (0.001–0.1 M KNO3), sorbate concentrations (10^-4, 10^-5, and 10^-6 M Cr(VI)), and sorbate/sorbent ratios (50–500). Fourier transform infrared spectroscopy, X-ray photoelectron spectroscopy, and X-ray absorption spectroscopy were used to determine the surface complexes, including surface reactions. Adsorption of Cr(VI) is strongly ionic strength dependent. For ferrihydrite at pH <7, a simple diffuse-layer model provides a reasonable prediction of adsorption. For birnessite, bidentate inner-sphere complexes of chromate and dichromate resulted in a better diffuse-layer model fit. For kaolinite, outer-sphere complexation prevails mainly at lower Cr(VI) loadings. Dissolution of solid phases needs to be considered for better SCM fits. The coupled SCM and spectroscopic approach is thus useful for investigating the individual minerals responsible for Cr(VI) retention in soils, and for improving handling and remediation processes.

  14. Beyond the Standard Model Higgs boson searches using the ATLAS Experiment

    CERN Document Server

    Tsukerman, Ilya; The ATLAS collaboration

    2014-01-01

    The discovery of a Higgs boson with a mass of about 125 GeV has prompted the question of whether or not this particle is part of a larger and more complex Higgs sector than that envisioned in the Standard Model. In this talk, the current results from the ATLAS experiment on Beyond the Standard Model (BSM) Higgs boson searches are outlined. The results are interpreted in well-motivated BSM Higgs frameworks.

  15. Cumulative Adverse Childhood Experiences and Sexual Satisfaction in Sex Therapy Patients: What Role for Symptom Complexity?

    Science.gov (United States)

    Bigras, Noémie; Godbout, Natacha; Hébert, Martine; Sabourin, Stéphane

    2017-03-01

    Patients consulting for sexual difficulties frequently present additional personal or relational disorders and symptoms. This is especially the case when they have experienced cumulative adverse childhood experiences (CACEs), which are associated with symptom complexity. CACEs refer to the extent to which an individual has experienced an accumulation of different types of adverse childhood experiences including sexual, physical, and psychological abuse; neglect; exposure to inter-parental violence; and bullying. However, past studies have not examined how symptom complexity might relate to CACEs and sexual satisfaction, and even less so in samples of adults consulting for sex therapy. The aim was to document the presence of CACEs in a sample of individuals consulting for sexual difficulties and its potential association with sexual satisfaction through the development of symptom complexity operationalized through well-established clinically significant indicators of individual and relationship distress. Men and women (n = 307) aged 18 years and older consulting for sexual difficulties completed a set of questionnaires during their initial assessment, comprising (i) the Global Measure of Sexual Satisfaction Scale, (ii) the Dyadic Adjustment Scale-4, (iii) the Experiences in Close Relationships-12, (iv) the Beck Depression Inventory-13, (v) the Trauma Symptom Inventory-2, and (vi) the Psychiatric Symptom Inventory-14. Results showed that 58.1% of women and 51.9% of men reported at least four forms of childhood adversity. The average number of CACEs was 4.10 (SD = 2.23) in women and 3.71 (SD = 2.08) in men. Structural equation modeling showed that CACEs contribute directly and indirectly to sexual satisfaction in adults consulting for sex therapy through clinically significant individual and relational symptom complexities. The findings underscore the relevance of addressing clinically significant psychological and relational symptoms that can stem from CACEs when treating sexual difficulties in adults seeking sex therapy.

  16. Some atmospheric tracer experiments in complex terrain at LASL: experimental design and data

    International Nuclear Information System (INIS)

    Archuleta, J.; Barr, S.; Clements, W.E.; Gedayloo, T.; Wilson, S.K.

    1978-03-01

    Two series of atmospheric tracer experiments were conducted in complex terrain situations in and around the Los Alamos Scientific Laboratory. Fluorescent particle tracers were used to investigate nighttime drainage flow in Los Alamos Canyon and daytime flow across the local canyon-mesa complex. This report describes the details of these experiments and presents a summary of the data collected. A subsequent report will discuss the analysis of these data.

  17. Socio-Environmental Resilience and Complex Urban Systems Modeling

    Science.gov (United States)

    Deal, Brian; Petri, Aaron; Pan, Haozhi; Goldenberg, Romain; Kalantari, Zahra; Cvetkovic, Vladimir

    2017-04-01

    The increasing pressure of climate change has inspired two normative agendas; socio-technical transitions and socio-ecological resilience, both sharing a complex-systems epistemology (Gillard et al. 2016). Socio-technical solutions include a continuous, massive data gathering exercise now underway in urban places under the guise of developing a 'smart'(er) city. This has led to the creation of data-rich environments where large data sets have become central to monitoring and forming a response to anomalies. Some have argued that these kinds of data sets can help in planning for resilient cities (Norberg and Cumming 2008; Batty 2013). In this paper, we focus on a more nuanced, ecologically based, socio-environmental perspective of resilience planning that is often given less consideration. Here, we broadly discuss (and model) the tightly linked, mutually influenced, social and biophysical subsystems that are critical for understanding urban resilience. We argue for the need to incorporate these subsystem linkages into the resilience planning lexicon through the integration of systems models and planning support systems. We make our case by first providing a context for urban resilience from a socio-ecological and planning perspective. We highlight the data needs for this type of resilient planning and compare it to currently collected data streams in various smart city efforts. This helps to define an approach for operationalizing socio-environmental resilience planning using robust systems models and planning support systems. For this, we draw from our experiences in coupling a spatio-temporal land use model (the Landuse Evolution and impact Assessment Model (LEAM)) with water quality and quantity models in Stockholm Sweden. We describe the coupling of these systems models using a robust Planning Support System (PSS) structural framework. We use the coupled model simulations and PSS to analyze the connection between urban land use transformation (social) and water...

  18. Real-time modeling of complex atmospheric releases in urban areas

    International Nuclear Information System (INIS)

    Baskett, R.L.; Ellis, J.S.; Sullivan, T.J.

    1994-08-01

    If a nuclear installation in or near an urban area has a venting, fire, or explosion, airborne radioactivity becomes the major concern. Dispersion models are the immediate tool for estimating the dose and contamination. Responses in urban areas depend on knowledge of the amount of the release, representative meteorological data, and the ability of the dispersion model to simulate the complex flows as modified by terrain or local wind conditions. A centralized dispersion modeling system can produce realistic assessments of radiological accidents anywhere in a country within several minutes if it is computer-automated. The system requires source-term, terrain, mapping and dose-factor databases, real-time meteorological data acquisition, three-dimensional atmospheric transport and dispersion models, and experienced staff. Experience with past responses in urban areas by the Atmospheric Release Advisory Capability (ARAC) program at Lawrence Livermore National Laboratory illustrates the challenges for three-dimensional dispersion models.

  19. Real-time modelling of complex atmospheric releases in urban areas

    International Nuclear Information System (INIS)

    Baskett, R.L.; Ellis, J.S.; Sullivan, T.J.

    2000-01-01

    If a nuclear installation in or near an urban area has a venting, fire, or explosion, airborne radioactivity becomes the major concern. Dispersion models are the immediate tool for estimating the dose and contamination. Responses in urban areas depend on knowledge of the amount of the release, representative meteorological data, and the ability of the dispersion model to simulate the complex flows as modified by terrain or local wind conditions. A centralised dispersion modelling system can produce realistic assessments of radiological accidents anywhere in a country within several minutes if it is computer-automated. The system requires source-term, terrain, mapping and dose-factor databases, real-time meteorological data acquisition, three-dimensional atmospheric transport and dispersion models, and experienced staff. Experience with past responses in urban areas by the Atmospheric Release Advisory Capability (ARAC) program at Lawrence Livermore National Laboratory illustrates the challenges for three-dimensional dispersion models. (author)
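
    For contrast with the three-dimensional complex-terrain models described, the flat-terrain baseline they improve upon is the steady-state Gaussian plume with ground reflection; a sketch with illustrative (not regulatory) dispersion coefficients:

```python
import numpy as np

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration [Bq/m^3] with ground
    reflection (image-source term). Flat terrain, uniform wind: exactly
    the simplification that 3D complex-terrain models must go beyond."""
    lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2.0 * sigma_z**2)) +
                np.exp(-(z + H)**2 / (2.0 * sigma_z**2)))
    return Q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Rough Briggs-style dispersion coefficients at x = 1 km downwind
# (illustrative values; real assessments use stability-class tables)
x = 1000.0
sigma_y = 0.08 * x / np.sqrt(1.0 + 0.0001 * x)
sigma_z = 0.06 * x / np.sqrt(1.0 + 0.0015 * x)

chi = gaussian_plume(Q=1e9, u=3.0, y=0.0, z=1.5, H=30.0,
                     sigma_y=sigma_y, sigma_z=sigma_z)
print(f"ground-level concentration 1 km downwind: {chi:.3e} Bq/m^3")
```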

  20. Surface complexation modeling of groundwater arsenic mobility: Results of a forced gradient experiment in a Red River flood plain aquifer, Vietnam

    DEFF Research Database (Denmark)

    Jessen, Søren; Postma, Dieke; Larsen, Flemming

    2012-01-01

    Three surface complexation models (SCMs) developed for, respectively, ferrihydrite, goethite and sorption data for a Pleistocene oxidized aquifer sediment from Bangladesh were used to explore the effect of multicomponent adsorption processes on As mobility in a reduced Holocene floodplain aquifer. ... As(III) while PO43− and Fe(II) form the predominant surface species. The SCM for the Pleistocene aquifer sediment most resembles the goethite SCM but shows more Si sorption. Compiled As(III) adsorption data for Holocene sediment was also well described by the SCM determined for the Pleistocene aquifer sediment, suggesting a comparable As(III) affinity of Holocene and Pleistocene aquifer sediments. A forced gradient field experiment was conducted in a bank aquifer adjacent to a tributary channel to the Red River, and the passage in the aquifer of mixed groundwater containing up to 74% channel water was observed...

  1. Linking Complexity and Sustainability Theories: Implications for Modeling Sustainability Transitions

    Directory of Open Access Journals (Sweden)

    Camaren Peter

    2014-03-01

    Full Text Available In this paper, we deploy complexity theory as the foundation for the integration of different theoretical approaches to sustainability and develop a rationale for a complexity-based framework for modeling transitions to sustainability. We propose a framework based on a comparison of the complex systems’ properties that characterize the different theories that deal with transitions to sustainability. We argue that adopting a complexity-theory-based approach for modeling transitions requires going beyond deterministic frameworks, by adopting a probabilistic, integrative, inclusive and adaptive approach that can support transitions. We also illustrate how this complexity-based modeling framework can be implemented, i.e., how it can be used to select modeling techniques that address particular properties of complex systems that we need to understand in order to model transitions to sustainability. In doing so, we establish a complexity-based approach towards modeling sustainability transitions that caters for the broad range of complex systems’ properties that are required to model transitions to sustainability.

  2. Experimental and Numerical Modelling of Flow over Complex Terrain: The Bolund Hill

    Science.gov (United States)

    Conan, Boris; Chaudhari, Ashvinkumar; Aubrun, Sandrine; van Beeck, Jeroen; Hämäläinen, Jari; Hellsten, Antti

    2016-02-01

    In the wind-energy sector, wind-power forecasting, turbine siting, and turbine-design selection are all highly dependent on a precise evaluation of atmospheric wind conditions. On-site measurements provide reliable data; however, in complex terrain and at the scale of a wind farm, local measurements may be insufficient for a detailed site description. On highly variable terrain, numerical models are commonly used but still constitute a challenge regarding simulation and interpretation. We propose a joint state-of-the-art study of two approaches to modelling atmospheric flow over the Bolund hill: a wind-tunnel test and a large-eddy simulation (LES). A distinctive feature of the approach is that the two methods are described in parallel in order to highlight their similarities and differences. The work provides a first detailed comparison between field measurements, wind-tunnel experiments and numerical simulations. The systematic and quantitative approach used for the comparison contributes to a better understanding of the strengths and weaknesses of each model and, therefore, to their enhancement. Despite fundamental modelling differences, both techniques result in only a 5% difference in the mean wind speed and 15% in the turbulent kinetic energy (TKE). The joint comparison makes it possible to identify the features that are most difficult to model: the near-ground flow and the wake of the hill. When compared to field data, both models reach 11% error for the mean wind speed, which is close to the best performance reported in the literature. For the TKE, a great improvement is found using the LES model compared to previous studies (20% error). Wind-tunnel results are in the low range of error when compared to experiments reported previously (40% error). This comparison highlights the potential of such approaches and gives directions for the improvement of complex-flow modelling.

  3. Hohlraum modeling for opacity experiments on the National Ignition Facility

    Science.gov (United States)

    Dodd, E. S.; DeVolder, B. G.; Martin, M. E.; Krasheninnikova, N. S.; Tregillis, I. L.; Perry, T. S.; Heeter, R. F.; Opachich, Y. P.; Moore, A. S.; Kline, J. L.; Johns, H. M.; Liedahl, D. A.; Cardenas, T.; Olson, R. E.; Wilde, B. H.; Urbatsch, T. J.

    2018-06-01

    This paper discusses the modeling of experiments that measure iron opacity in local thermodynamic equilibrium (LTE) using laser-driven hohlraums at the National Ignition Facility (NIF). A previous set of experiments fielded at Sandia's Z facility [Bailey et al., Nature 517, 56 (2015)] has shown discrepancies of up to a factor of two between theory and experiment, casting doubt on the validity of the opacity models. The purpose of the new experiments is to make corroborating measurements at the same densities and temperatures, with the initial measurements made at a temperature of 160 eV and an electron density of 0.7 × 10²² cm⁻³. The X-ray hot spots of a laser-driven hohlraum are not in LTE, and the iron must be shielded from a direct line of sight to obtain the data [Perry et al., Phys. Rev. B 54, 5617 (1996)]. This shielding is provided either with internal structure (e.g., baffles) or with external wall shapes that divide the hohlraum into a laser-heated portion and an LTE portion. In contrast, most inertial confinement fusion hohlraums are simple cylinders lacking complex gold walls, and the design codes are not typically applied to targets like those for the opacity experiments. We will discuss the initial basis for the modeling using LASNEX, and the subsequent modeling of five different hohlraum geometries that have been fielded on the NIF to date. This includes a comparison of calculated and measured radiation temperatures.

  4. Complexity-aware simple modeling.

    Science.gov (United States)

    Gómez-Schiavon, Mariana; El-Samad, Hana

    2018-02-26

    Mathematical models continue to be essential for deepening our understanding of biology. On one extreme, simple or small-scale models help delineate general biological principles. However, the parsimony of detail in these models, as well as their assumption of modularity and insulation, makes them inaccurate for describing quantitative features. On the other extreme, large-scale and detailed models can quantitatively recapitulate a phenotype of interest, but have to rely on many unknown parameters, making them often difficult to parse mechanistically and to use for extracting general principles. We discuss some examples of a new approach, complexity-aware simple modeling, that can bridge the gap between the small-scale and large-scale approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. Computational models of complex systems

    CERN Document Server

    Dabbaghian, Vahid

    2014-01-01

    Computational and mathematical models provide us with opportunities to investigate the complexities of real-world problems. They allow us to apply our best analytical methods to define problems in a clearly mathematical manner and exhaustively test our solutions before committing expensive resources. This is made possible by assuming parameters in a bounded environment, allowing for controllable experimentation, which is not always possible in live scenarios. For example, simulation of computational models allows the testing of theories in a manner that is both fundamentally deductive and experimental in nature. The main ingredients for such research ideas come from multiple disciplines, and the importance of interdisciplinary research is well recognized by the scientific community. This book provides a window onto the novel endeavours of research communities, highlighting the value of computational modelling as a research tool when investigating complex systems. We hope that the reader...

  6. Design and implementation of new designs of numerical experiments for nonlinear models

    International Nuclear Information System (INIS)

    Gazut, St.

    2007-03-01

    This thesis addresses the problem of the construction of surrogate models in numerical simulation. Whenever numerical experiments are costly, the simulation model is complex and difficult to use. It is then important to select the numerical experiments as efficiently as possible in order to minimize their number. In statistics, the selection of experiments is known as optimal experimental design. In the context of numerical simulation, where no measurement uncertainty is present, we describe an alternative approach based on statistical learning theory and re-sampling techniques. The surrogate models are constructed using neural networks, and the generalization error is estimated by leave-one-out, cross-validation and bootstrap. It is shown that the bootstrap can control over-fitting and extend the concept of leverage to surrogate models that are nonlinear in their parameters. The thesis describes an iterative method called LDR, for Learner Disagreement from experiment Re-sampling, based on active learning using several surrogate models constructed on bootstrap samples. The method consists of adding new experiments where the predictors constructed from bootstrap samples disagree most. We compare the LDR method with other methods of experimental design such as D-optimal selection. (author)
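
    The core of the LDR loop is easy to sketch. The following is a minimal illustration, not the thesis code: it assumes a cheap stand-in simulator and uses scikit-learn's MLPRegressor in place of the thesis's neural networks; all names and parameter values are hypothetical:

        # LDR sketch: train surrogates on bootstrap resamples and place the next
        # numerical experiment where the bootstrap predictors disagree most.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        simulator = lambda x: np.sin(3 * x) + 0.5 * x**2   # stand-in for a costly code

        X = rng.uniform(-2, 2, size=8)                     # initial design
        y = simulator(X)
        grid = np.linspace(-2, 2, 401)

        for it in range(5):
            preds = []
            for b in range(10):                            # one surrogate per resample
                idx = rng.integers(0, len(X), size=len(X)) # bootstrap indices
                m = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=b)
                m.fit(X[idx, None], y[idx])
                preds.append(m.predict(grid[:, None]))
            disagreement = np.var(preds, axis=0)           # committee variance
            x_new = grid[np.argmax(disagreement)]          # most disputed location
            X = np.append(X, x_new)                        # run one more "experiment"
            y = np.append(y, simulator(x_new))
            print(f"iteration {it}: added x = {x_new:.3f}")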

  7. Software complex AS (automation of spectrometry). User interface of experiment automation system implementation

    International Nuclear Information System (INIS)

    Astakhova, N.V.; Beskrovnyj, A.I.; Bogdzel', A.A.; Butorin, P.E.; Vasilovskij, S.G.; Gundorin, N.A.; Zlokazov, V.B.; Kutuzov, S.A.; Salamatin, I.M.; Shvetsov, V.N.

    2003-01-01

    An instrumental software complex for automation of spectrometry (AS) has been developed that enables the prompt realization of experiment automation systems for spectrometers which use data buffering. The development employed new methods of programming and building automation systems, together with novel net technologies. It is suggested that programs to schedule and conduct experiments be based on a parametric model of the spectrometer, an approach that makes it possible to write programs suitable for any FLNP (Frank Laboratory of Neutron Physics) spectrometer and experimental technique, and to use different hardware interfaces for feeding the spectrometric data into the data acquisition system. The article describes the facilities provided to the user for scheduling and controlling the experiment, viewing data, and controlling the spectrometer parameters. The current spectrometer state, the programs and the experimental data can be presented on the Internet in the form of dynamically formed protocols and graphs, and the experiment can be controlled via the Internet. On the client side, no special application programs are needed to use these Internet facilities; it suffices to know how to use two programs to carry out experiments in automated mode. The package is designed for experiments in condensed matter and nuclear physics and is ready for use. (author)

  8. Physical approach to air pollution climatological modelling in a complex site

    Energy Technology Data Exchange (ETDEWEB)

    Bonino, G. [Torino, Universita; CNR, Istituto di Cosmo-Geofisica, Turin, Italy]; Longhetto, A. [Ente Nazionale per l'Energia Elettrica, Centro di Ricerca Termica e Nucleare, Milan; CNR, Istituto di Cosmo-Geofisica, Turin, Italy]; Runca, E. [International Institute for Applied Systems Analysis, Laxenburg, Austria]

    1980-09-01

    A Gaussian climatological model which takes into account physical factors affecting air pollutant dispersion, such as nocturnal radiative inversion and mixing height evolution, associated with land breeze and sea breeze regimes, has been applied to the topographically complex area of La Spezia. The measurements of the dynamic and thermodynamic structure of the lower atmosphere obtained by field experiments are utilized in the model to calculate the SO₂ seasonal average concentrations. The model has been tested on eight three-monthly periods by comparing the simulated values with the ones measured at the SO₂ stations of the local air pollution monitoring network. Comparison of simulated and measured values was satisfactory and proved the applicability of the model for urban planning and establishment of air quality strategies.
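
    For readers unfamiliar with the model class, the kernel of any Gaussian climatological model is the steady-state plume relation sketched below; the power-law dispersion coefficients are an illustrative assumption, and the inversion, mixing-height and breeze-regime effects described above are not reproduced:

        # Ground-level concentration from a continuous elevated point source.
        import numpy as np

        def plume_glc(x, y, Q, u, H, a=0.08, b=0.06):
            """C(x, y, z=0): x, y downwind/crosswind distance (m), Q source (g/s),
            u wind speed (m/s), H stack height (m); sigma_y = a*x and sigma_z = b*x
            are crude neutral-stability fits (assumption)."""
            sig_y, sig_z = a * x, b * x
            return (Q / (2 * np.pi * u * sig_y * sig_z)
                    * np.exp(-y**2 / (2 * sig_y**2))
                    * 2 * np.exp(-H**2 / (2 * sig_z**2)))  # ground reflection doubles the term

        # Example: 100 g/s SO2 source, 50 m stack, 4 m/s wind, receptor 2 km downwind.
        print(plume_glc(x=2000.0, y=0.0, Q=100.0, u=4.0, H=50.0), "g/m^3")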

  9. Elements of complexity in subsurface modeling, exemplified with three case studies

    Energy Technology Data Exchange (ETDEWEB)

    Freedman, Vicky L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Truex, Michael J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rockhold, Mark [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bacon, Diana H. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Freshley, Mark D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wellman, Dawn M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-04-03

    There are complexity elements to consider when applying subsurface flow and transport models to support environmental analyses. Modelers balance the benefits and costs of modeling along the spectrum of complexity, taking into account the attributes of simpler models (e.g., lower cost, faster execution, easier to explain, less mechanistic) and the attributes of more complex models (higher cost, slower execution, harder to explain, more mechanistic and technically defensible). In this paper, modeling complexity is examined with respect to this balance. The discussion of modeling complexity is organized into three primary elements: 1) modeling approach, 2) description of process, and 3) description of heterogeneity. Three examples are used to examine these complexity elements. Two of the examples use simulations generated from a complex model to develop simpler models for efficient use in model applications. The first example is designed to support performance evaluation of soil vapor extraction remediation in terms of groundwater protection. The second example investigates the importance of simulating different categories of geochemical reactions for carbon sequestration and selecting appropriate simplifications for use in evaluating sequestration scenarios. In the third example, the modeling history of a uranium-contaminated site demonstrates that conservative parameter estimates were inadequate surrogates for complex, critical processes, and the selection of more appropriate model complexity for this application is discussed. All three examples highlight how complexity considerations are essential to create scientifically defensible models that achieve a balance between model simplification and complexity.

  10. Historical and idealized climate model experiments: an EMIC intercomparison

    DEFF Research Database (Denmark)

    Eby, M.; Weaver, A. J.; Alexander, K.

    2012-01-01

    Both historical and idealized climate model experiments are performed with a variety of Earth System Models of Intermediate Complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE...... and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land-use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures...... the Medieval Climate Anomaly and the Little Ice Age estimated from paleoclimate reconstructions. This in turn could be a result of errors in the reconstructions of volcanic and/or solar radiative forcing used to drive the models or the incomplete representation of certain processes or variability within...

  11. Cumulative complexity: a functional, patient-centered model of patient complexity can improve research and practice.

    Science.gov (United States)

    Shippee, Nathan D; Shah, Nilay D; May, Carl R; Mair, Frances S; Montori, Victor M

    2012-10-01

    To design a functional, patient-centered model of patient complexity with practical applicability to analytic design and clinical practice. Existing literature on patient complexity has mainly identified its components descriptively and in isolation, lacking clarity as to their combined functions in disrupting care or as to how complexity changes over time. The authors developed a cumulative complexity model, which integrates existing literature and emphasizes how clinical and social factors accumulate and interact to complicate patient care. A narrative literature review is used to explicate the model. The model emphasizes a core, patient-level mechanism whereby complicating factors impact care and outcomes: the balance between the patient's workload of demands and the patient's capacity to address those demands. Workload encompasses the demands on the patient's time and energy, including demands of treatment, self-care, and life in general. Capacity concerns the ability to handle work (e.g., functional morbidity, financial/social resources, literacy). Workload-capacity imbalances comprise the mechanism driving patient complexity. Treatment and illness burdens serve as feedback loops, linking negative outcomes to further imbalances, such that complexity may accumulate over time. With its components largely supported by existing literature, the model has implications for analytic design, clinical epidemiology, and clinical practice. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. On spin and matrix models in the complex plane

    International Nuclear Information System (INIS)

    Damgaard, P.H.; Heller, U.M.

    1993-01-01

    We describe various aspects of statistical mechanics defined in the complex temperature or coupling-constant plane. Using exactly solvable models, we analyse such aspects as renormalization group flows in the complex plane, the distribution of partition function zeros, and the question of new coupling-constant symmetries of complex-plane spin models. The double-scaling form of matrix models is shown to be exactly equivalent to finite-size scaling of two-dimensional spin systems. This is used to show that the string susceptibility exponents derived from matrix models can be obtained numerically with very high accuracy from the scaling of finite-N partition function zeros in the complex plane. (orig.)
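
    As a toy numerical illustration of partition function zeros (not the matrix-model computation itself), the Lee-Yang zeros of a small one-dimensional Ising chain can be found as polynomial roots in the complex fugacity plane, where the Lee-Yang theorem places them on the unit circle:

        # Z as a polynomial in the fugacity z = exp(-2*beta*h) for an open Ising chain.
        import itertools
        import numpy as np

        N, beta, J = 10, 0.5, 1.0
        coeffs = np.zeros(N + 1)        # coeffs[k] sums configurations with k down spins
        for spins in itertools.product([1, -1], repeat=N):
            E = -J * sum(spins[i] * spins[i + 1] for i in range(N - 1))
            coeffs[spins.count(-1)] += np.exp(-beta * E)
        zeros = np.roots(coeffs[::-1])  # np.roots expects highest degree first
        print(np.abs(zeros))            # all moduli are 1, as the theorem requires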

  13. Multi-scale modelling for HEDP experiments on Orion

    Science.gov (United States)

    Sircombe, N. J.; Ramsay, M. G.; Hughes, S. J.; Hoarty, D. J.

    2016-05-01

    The Orion laser at AWE couples high-energy long-pulse lasers with high-intensity short pulses, allowing material to be compressed beyond solid density and heated isochorically. This experimental capability has been demonstrated as a platform for conducting High Energy Density Physics material-properties experiments. A clear understanding of the physics in experiments at this scale, combined with a robust, flexible and predictive modelling capability, is an important step towards more complex experimental platforms and ICF schemes which rely on high-power lasers to achieve ignition. These experiments present a significant modelling challenge: the system is characterised by hydrodynamic effects over nanoseconds, driven by long-pulse lasers or the pre-pulse of the petawatt beams, and by fast-electron generation, transport, and heating effects over picoseconds, driven by short-pulse high-intensity lasers. We describe the approach taken at AWE: integrating a number of codes which capture the detailed physics at each spatial and temporal scale. Simulations of the heating of buried aluminium microdot targets are discussed, and we consider the role such tools can play in understanding the impact of changes to the laser parameters, such as frequency and pre-pulse, as well as understanding effects which are difficult to observe experimentally.

  14. Deterministic ripple-spreading model for complex networks.

    Science.gov (United States)

    Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Hines, Evor L; Di Paolo, Ezequiel

    2011-04-01

    This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input, they cannot output a unique topology. In contrast, the proposed ripple-spreading model can uniquely determine the final network topology, while the stochastic feature of complex networks is captured by randomly initializing the ripple-spreading related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading related parameters to precisely describe a network topology, which is more memory-efficient than a traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has very good potential for both extensions and applications.
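
    A minimal sketch of the ripple-spreading construction follows; the linear energy-decay law and the parameter ranges are illustrative assumptions. The key property is visible in the code: once node positions and ripple parameters are fixed, the topology is uniquely determined, and randomness enters only through the initial draw:

        import numpy as np

        def ripple_network(xy, epicenters, e0, decay):
            """Connect node j to epicenter i if the ripple started at i still has
            positive energy (e0[i] - decay*distance) when its front reaches j."""
            n = len(xy)
            A = np.zeros((n, n), dtype=int)
            for i in epicenters:
                d = np.linalg.norm(xy - xy[i], axis=1)
                reached = (e0[i] - decay * d) > 0
                A[i, reached] = A[reached, i] = 1
            np.fill_diagonal(A, 0)
            return A

        rng = np.random.default_rng(1)
        xy = rng.uniform(0, 10, size=(30, 2))               # node positions
        epicenters = rng.choice(30, size=5, replace=False)  # ripple sources
        e0 = rng.uniform(3, 6, size=30)                     # initial ripple energies
        print("edges:", ripple_network(xy, epicenters, e0, decay=1.0).sum() // 2)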

  15. Modeling OPC complexity for design for manufacturability

    Science.gov (United States)

    Gupta, Puneet; Kahng, Andrew B.; Muddu, Swamy; Nakagawa, Sam; Park, Chul-Hong

    2005-11-01

    Increasing design complexity in sub-90nm designs results in increased mask complexity and cost. Resolution enhancement techniques (RET) such as assist feature addition, phase shifting (attenuated PSM) and aggressive optical proximity correction (OPC) help preserve feature fidelity in silicon but increase mask complexity and cost. The growth in data volume with rising mask complexity is becoming prohibitive for manufacturing. Mask cost is determined by mask write time and mask inspection time, which are directly related to the complexity of the features printed on the mask. Aggressive RET increase complexity by adding assist features and by modifying existing features. Passing design intent to OPC has been identified in several recent works as a solution for reducing mask complexity and cost. The goal of design-aware OPC is to relax the OPC tolerances of layout features to minimize mask cost without sacrificing parametric yield. To convey optimal OPC tolerances for manufacturing, design optimization should drive OPC tolerance optimization using models of mask cost for devices and wires, and should be aware of the impact of OPC correction levels on mask cost and on the performance of the design. This work introduces mask cost characterization (MCC), which quantifies OPC complexity, measured in terms of the fracture count of the mask, for different OPC tolerances. MCC with different OPC tolerances is a critical step in linking design and manufacturing. In this paper, we present an MCC methodology that provides models of the fracture count of standard cells and wire patterns for use in design optimization. MCC cannot be performed by designers, as they do not have access to foundry OPC recipes and RET tools. To build a fracture count model, we perform OPC and fracturing on a limited set of standard cells and wire configurations with all tolerance combinations. Separately, we identify the characteristics of the layout that impact fracture count. Based on the fracture count (FC) data

  16. Theory Meets Experiment: Metal Ion Effects in HCV Genomic RNA Kissing Complex Formation

    Directory of Open Access Journals (Sweden)

    Li-Zhen Sun

    2017-12-01

    Full Text Available The long-range base pairing between the 5BSL3.2 and 3′X domains in hepatitis C virus (HCV) genomic RNA is essential for viral replication. Experimental evidence points to the critical role of metal ions, especially Mg2+ ions, in the formation of the 5BSL3.2:3′X kissing complex. Furthermore, NMR studies suggested an important ion-dependent conformational switch in the kissing process. However, for a long time, mechanistic understanding of the ion effects for the process has been unclear. Recently, computational modeling based on the Vfold RNA folding model and the partial charge-based tightly bound ion (PCTBI) model, in combination with the NMR data, revealed novel physical insights into the role of metal ions in the 5BSL3.2-3′X system. The use of the PCTBI model, which accounts for ion correlation and fluctuation, gives reliable predictions for the ion-dependent electrostatic free energy landscape and the ion-induced population shift of the 5BSL3.2:3′X kissing complex. Furthermore, the predicted ion binding sites offer insights into how ion-RNA interactions shift the conformational equilibrium. The integrated theory-experiment study shows that Mg2+ ions may be essential for HCV viral replication. Moreover, the observed Mg2+-dependent conformational equilibrium may be an adaptive property of the HCV genomic RNA, such that the equilibrium is optimized to the intracellular Mg2+ concentration in liver cells for efficient viral replication.

  17. Complex terrain and wind lidars

    Energy Technology Data Exchange (ETDEWEB)

    Bingoel, F.

    2009-08-15

    This thesis presents the results of a PhD study on complex terrain and wind lidars, focusing mostly on hilly and forested areas. Lidars have been used in combination with cups, sonics and vanes to reach the desired vertical measurement heights. Several experiments were performed at complex-terrain sites, and the measurements are compared with two different flow models: a linearised flow model, LINCOM, and the specialised forest model SCADIS. With respect to lidar performance in complex terrain, the results showed that horizontal wind speed errors measured by a conically scanning lidar can be of the order of 3-4% in moderately complex terrain and up to 10% in complex terrain. The findings were based on experiments involving collocated lidars and meteorological masts, together with flow calculations over the same terrains. The lidar performance was also simulated with the commercial software WAsP Engineering 2.0 and was well predicted except for some sectors where the terrain is particularly steep. Subsequently, two experiments were performed in forested areas, with measurements recorded at a location deep in the forest and at the forest edge. Both sites were modelled with the flow models, and comparison of the measurement data with the model outputs showed that the mean wind speed calculated by the LINCOM model was only reliable between 1 and 2 tree heights (h) above the canopy. The SCADIS model correlated better with the measurements in the forest, up to approximately 6h. At the forest edge, the LINCOM model was used by allocating a slope half-in, half-out of the forest, based on the suggestions of previous studies; the optimum slope angle was reported as 17 deg. Thus, it was suggested that WAsP Engineering 2.0 be used for forest-edge modelling with the known limitations and the applied method. The SCADIS model worked better than the LINCOM model at the forest edge, but it reported results closer to the measurements upwind than downwind, and this should be

  18. Modeling the Propagation of Mobile Phone Virus under Complex Network

    Science.gov (United States)

    Yang, Wei; Wei, Xi-liang; Guo, Hao; An, Gang; Guo, Lei

    2014-01-01

    A mobile phone virus is a rogue program written to propagate from one phone to another, which can take control of a mobile device by exploiting its vulnerabilities. In this paper, the propagation of mobile phone viruses is modelled to understand how particular factors affect propagation and to design effective containment strategies for suppressing mobile phone viruses. Two different propagation models of mobile phone viruses on complex networks are proposed: one describes the propagation of user-tricking viruses, and the other describes the propagation of vulnerability-exploiting viruses. Building on traditional epidemic models, the characteristics of mobile phone viruses and the network topology are incorporated into our models. A detailed analysis of the propagation models is conducted; the stable infection-free equilibrium point and the stability condition are derived. Finally, taking the network topology into account, numerical and simulation experiments are carried out. Results indicate that both models are correct and suitable for describing the spread of the two different mobile phone viruses, respectively. PMID:25133209
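
    The models build on standard network epidemics. A generic discrete-time SIR process on a scale-free contact graph, of which the paper's two virus models are specializations (their MMS- and Bluetooth-specific mechanics are not reproduced here), can be sketched as:

        import random
        import networkx as nx

        random.seed(2)
        G = nx.barabasi_albert_graph(n=1000, m=3, seed=2)  # scale-free contact graph
        state = {v: "S" for v in G}                        # states: S, I, R
        for v in random.sample(list(G), 5):                # seed infections
            state[v] = "I"

        beta, gamma = 0.08, 0.05    # per-contact infection and recovery probabilities
        for t in range(100):
            new = dict(state)
            for v in G:
                if state[v] == "I":
                    if random.random() < gamma:
                        new[v] = "R"
                    for u in G.neighbors(v):
                        if state[u] == "S" and random.random() < beta:
                            new[u] = "I"
            state = new
        print({s: sum(1 for v in G if state[v] == s) for s in "SIR"})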

  19. Computational Experiment Study on Selection Mechanism of Project Delivery Method Based on Complex Factors

    Directory of Open Access Journals (Sweden)

    Xiang Ding

    2014-01-01

    Full Text Available Project delivery planning is a key stage used by the project owner (or project investor) for organizing the design, construction, and other operations in a construction project. The main task in this stage is to select an appropriate project delivery method (PDM). In order to analyze the different factors affecting PDM selection, this paper establishes a multi-agent model mainly to show how project complexity, governance strength, and the market environment affect the project owner's decision on PDM. Experiment results show that the project owner usually chooses the Design-Build method when the project is very complex, within a certain range. Moreover, the paper points out that the Design-Build method will be the preferred choice when the potential contractors develop quickly. This paper provides owners with methods and suggestions by showing how these factors affect PDM selection, and it may improve project performance.

  20. Wind Tunnel Modeling Of Wind Flow Over Complex Terrain

    Science.gov (United States)

    Banks, D.; Cochran, B.

    2010-12-01

    This presentation will describe the findings of an atmospheric boundary layer (ABL) wind-tunnel study conducted as part of the Bolund Experiment. This experiment was sponsored by Risø DTU (National Laboratory for Sustainable Energy, Technical University of Denmark) during the fall of 2009 to enable a blind comparison of various air flow models, in an attempt to validate their performance in predicting airflow over complex terrain. Bolund hill sits 12 m above the water level at the end of a narrow isthmus. The island features a steep escarpment on one side, over which the airflow can be expected to separate. The island was equipped with several anemometer towers, and the approach flow over the water was well characterized. This study was one of only two physical model studies included in the blind model comparison, the other being a water flume study. The remainder were computational fluid dynamics (CFD) simulations, including both RANS and LES. Physical modeling of air flow over topographical features has been used since the middle of the 20th century, and the methods required are well understood and well documented. Several books have been written describing how to properly perform ABL wind-tunnel studies, including ASCE Manual of Engineering Practice 67. Boundary layer wind tunnel tests are the only modelling method deemed acceptable in ASCE 7-10, the most recent edition of the American Society of Civil Engineers standard that provides wind loads for buildings and other structures for building codes across the US. Since the 1970s, most tall structures have undergone testing in a boundary layer wind tunnel to accurately determine the wind-induced loading. When compared to CFD, the US EPA considers a properly executed wind tunnel study to be equivalent to a CFD model with infinitesimal grid resolution and near-infinite memory. One key reason for this widespread acceptance is that properly executed ABL wind tunnel studies will accurately simulate flow separation

  1. NHL and RCGA Based Multi-Relational Fuzzy Cognitive Map Modeling for Complex Systems

    Directory of Open Access Journals (Sweden)

    Zhen Peng

    2015-11-01

    Full Text Available In order to model complex systems with multiple dimensions and multiple granularities, this paper first proposes a kind of multi-relational Fuzzy Cognitive Map (FCM) to simulate such multi-relational systems, together with an automatic construction algorithm integrating Nonlinear Hebbian Learning (NHL) and the Real Code Genetic Algorithm (RCGA). The multi-relational FCM is suited to modeling complex systems with multiple dimensions and granularities, and the automatic construction algorithm can learn the multi-relational FCM from multi-relational data resources, eliminating human intervention. The Multi-Relational Data Mining (MRDM) algorithm integrates multi-instance-oriented NHL and the RCGA of the FCM. NHL is extended to mine the causal relationships between a coarse-granularity concept and its fine-granularity concepts, driven by multiple instances in the multi-relational system. The RCGA is used to establish a high-quality, high-level FCM driven by data. The multi-relational FCM and the integrating algorithm have been applied to the complex system of Mutagenesis. The experiments demonstrate not only better classification accuracy but also the causal relationships among the concepts of the system.
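
    The NHL half of the pipeline can be sketched as below. The sigmoid inference rule is the standard FCM update; the Oja-style weight update, the learning rates and the decay factor are common choices from the FCM literature rather than the paper's exact formulation:

        import numpy as np

        def sigmoid(x, lam=1.0):
            return 1.0 / (1.0 + np.exp(-lam * x))

        def nhl_step(A, W, eta=0.05, gamma=0.98):
            """A: concept activations (n,); W[i, j]: influence of concept i on j."""
            A_new = sigmoid(A + A @ W)                  # FCM inference rule
            mask = W != 0                               # learn only existing edges
            dW = eta * np.outer(A_new, A_new) - eta * (A_new**2)[:, None] * W
            W = np.where(mask, gamma * W + dW, 0.0)     # Hebbian update with decay
            return A_new, np.clip(W, -1.0, 1.0)

        rng = np.random.default_rng(3)
        W = rng.uniform(-0.5, 0.5, (5, 5)); np.fill_diagonal(W, 0)
        A = rng.uniform(0, 1, 5)
        for _ in range(50):
            A, W = nhl_step(A, W)
        print(np.round(A, 3))                           # converged concept values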

  2. Development and assessment of modular models of calculation for the interpretation of rod-melting experiments

    International Nuclear Information System (INIS)

    Tuerk, W.

    1980-01-01

    Using recalculations of rod-melting experiments as an example, it is shown how a modular simulation model for complex systems can be formulated within the scope of RSYST1. The procedure of code development as well as the physical and numerical methods and approximations of the simulation model are described. A code module is assigned to each important physical process. The individual modules describe heat production, rod heat-up, rod oxidation, the rod environment, rod deformation by thermal expansion and can buckling, melting of the rod, rod failure, and flowing off of the melted mass. A comparison of the results of the overall model with the results of different experiments indicates that the phenomena during heat-up and melting of the rod are treated in agreement with the experiments. The results of the calculation model and its submodels are thus largely supported by experiments, so that further predictions with a high level of confidence can be made with the model within the scope of reactor safety research. (orig.)

  3. Modeling the Structure and Complexity of Engineering Routine Design Problems

    NARCIS (Netherlands)

    Jauregui Becker, Juan Manuel; Wits, Wessel Willems; van Houten, Frederikus J.A.M.

    2011-01-01

    This paper proposes a model to structure routine design problems as well as a model of their design complexity. The idea is that having a proper model of the structure of such problems enables understanding their complexity, and likewise, a proper understanding of their complexity enables the development

  4. ASYMMETRIC PRICE TRANSMISSION MODELING: THE IMPORTANCE OF MODEL COMPLEXITY AND THE PERFORMANCE OF THE SELECTION CRITERIA

    Directory of Open Access Journals (Sweden)

    Henry de-Graft Acquah

    2013-01-01

    Full Text Available Information criteria provide an attractive basis for selecting the best model from a set of competing asymmetric price transmission models or theories. However, little is understood about the sensitivity of the model selection methods to model complexity. This study therefore fits competing asymmetric price transmission models that differ in complexity to simulated data and evaluates the ability of the model selection methods to recover the true model. The results of Monte Carlo experimentation suggest that, in general, BIC, CAIC and DIC were superior to AIC when the true data generating process was the standard error correction model, whereas AIC was more successful when the true model was the complex error correction model. It is also shown that the model selection methods performed better in large samples for a complex asymmetric data generating process than for a standard asymmetric data generating process. Except for complex models, AIC's performance did not gain substantially in recovery rates as the sample size increased. The research findings demonstrate the influence of model complexity on asymmetric price transmission model comparison and selection.
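
    The recovery-rate mechanics of such a Monte Carlo study are easy to reproduce. The sketch below substitutes generic nested linear models for the paper's asymmetric error correction models, so it illustrates only how criterion-based recovery rates are computed:

        import numpy as np

        rng = np.random.default_rng(4)

        def ic(y, X):
            """Return (AIC, BIC) of an OLS fit with a Gaussian likelihood."""
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            rss = np.sum((y - X @ beta) ** 2)
            n, k = X.shape
            ll = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
            return -2 * ll + 2 * k, -2 * ll + k * np.log(n)

        wins, n, reps = {"AIC": 0, "BIC": 0}, 100, 500
        for _ in range(reps):
            x1, x2 = rng.normal(size=(2, n))
            y = 1.0 + 0.8 * x1 + 0.3 * x2 + rng.normal(size=n)       # truth: "complex"
            a_s, b_s = ic(y, np.column_stack([np.ones(n), x1]))      # simple model
            a_c, b_c = ic(y, np.column_stack([np.ones(n), x1, x2]))  # complex model
            wins["AIC"] += a_c < a_s
            wins["BIC"] += b_c < b_s
        print({k: v / reps for k, v in wins.items()})  # recovery rate of the truth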

  5. Historical and idealized climate model experiments: an EMIC intercomparison

    DEFF Research Database (Denmark)

    Eby, M.; Weaver, A. J.; Alexander, K.

    2012-01-01

    Both historical and idealized climate model experiments are performed with a variety of Earth System Models of Intermediate Complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land-use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures......, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The response of surface air temperature is the linear sum of the individual forcings, while the carbon cycle response shows considerable synergy between land-use change and CO2...

  6. Primary care providers' experiences caring for complex patients in primary care: a qualitative study.

    Science.gov (United States)

    Loeb, Danielle F; Bayliss, Elizabeth A; Candrian, Carey; deGruy, Frank V; Binswanger, Ingrid A

    2016-03-22

    Complex patients are increasingly common in primary care and often have poor clinical outcomes. Healthcare system barriers to effective care for complex patients have been previously described, but less is known about the potential impact and meaning of caring for complex patients on a daily basis for primary care providers (PCPs). Our objective was to describe PCPs' experiences providing care for complex patients, including their experiences of health system barriers and facilitators and their strategies to enhance the provision of effective care. Using a general inductive approach, our qualitative research study was guided by an interpretive epistemology, or way of knowing. Our method for understanding included semi-structured in-depth interviews with internal medicine PCPs from two university-based and three community health clinics. We developed an interview guide, which included questions on PCPs' experiences, perceived system barriers and facilitators, and strategies to improve their ability to effectively treat complex patients. To focus interviews on real cases, providers were asked to bring de-identified clinical notes from patients they considered complex to the interview. Interview transcripts were coded and analyzed to develop categories from the raw data, which were then conceptualized into broad themes after team-based discussion. PCPs (N = 15) described complex patients with multidimensional needs, such as socio-economic, medical, and mental health needs. A vision of optimal care emerged from the data, which included coordinating care, preventing hospitalizations, and developing patient trust. PCPs relied on professional values and individual care strategies to overcome local and system barriers. Team-based approaches were endorsed to improve the management of complex patients. Given the barriers to effective care described by PCPs, individual PCP efforts alone are unlikely to meet the needs of complex patients. To fulfill PCPs' expressed concepts of

  7. A summary of computational experience at GE Aircraft Engines for complex turbulent flows in gas turbines

    Science.gov (United States)

    Zerkle, Ronald D.; Prakash, Chander

    1995-01-01

    This viewgraph presentation summarizes CFD experience at GE Aircraft Engines for flows in the primary gas path of a gas turbine engine and in turbine blade cooling passages. It is concluded that application of the standard k-epsilon turbulence model with wall functions is not adequate for accurate CFD simulation of aerodynamic performance and heat transfer in the primary gas path of a gas turbine engine. New models are required in the near-wall region which include more physics than wall functions. The two-layer modeling approach appears attractive in terms of computational complexity. In addition, improved CFD simulation of film cooling and of turbine blade internal cooling passages will require anisotropic turbulence models. New turbulence models must be practical in order to have a significant impact on the engine design process. A coordinated turbulence modeling effort between NASA centers would be beneficial to the gas turbine industry.

  8. Modelling of information processes management of educational complex

    Directory of Open Access Journals (Sweden)

    Оксана Николаевна Ромашкова

    2014-12-01

    Full Text Available This work concerns an information model of an educational complex that includes several schools. A classification of the educational complexes formed in Moscow is given. The existing organizational structure of the educational complex is considered, and a matrix management structure is suggested. The basic information processes for managing the educational complex are conceptualized.

  9. Complex networks under dynamic repair model

    Science.gov (United States)

    Chaoqi, Fu; Ying, Wang; Kun, Zhao; Yangjun, Gao

    2018-01-01

    Invulnerability is not the only factor of importance when considering complex networks' security. It is also critical to have an effective and reasonable repair strategy. Existing research on network repair is confined to the static model. The dynamic model makes better use of the redundant capacity of repaired nodes and repairs the damaged network more efficiently than the static model; however, the dynamic repair model is complex and polytropic. In this paper, we construct a dynamic repair model and systematically describe the energy-transfer relationships between nodes in the repair process of the failure network. Nodes are divided into three types, corresponding to three structures. We find that the strong coupling structure is responsible for secondary failure of the repaired nodes and propose an algorithm that can select the most suitable targets (nodes or links) to repair the failure network with minimal cost. Two types of repair strategies are identified, with different effects under the two energy-transfer rules. The research results enable a more flexible approach to network repair.

  10. Does model performance improve with complexity? A case study with three hydrological models

    Science.gov (United States)

    Orth, Rene; Staudinger, Maria; Seneviratne, Sonia I.; Seibert, Jan; Zappa, Massimiliano

    2015-04-01

    In recent decades considerable progress has been made in climate model development. Following the massive increase in computational power, models became more sophisticated. At the same time also simple conceptual models have advanced. In this study we validate and compare three hydrological models of different complexity to investigate whether their performance varies accordingly. For this purpose we use runoff and also soil moisture measurements, which allow a truly independent validation, from several sites across Switzerland. The models are calibrated in similar ways with the same runoff data. Our results show that the more complex models HBV and PREVAH outperform the simple water balance model (SWBM) in case of runoff but not for soil moisture. Furthermore the most sophisticated PREVAH model shows an added value compared to the HBV model only in case of soil moisture. Focusing on extreme events we find generally improved performance of the SWBM during drought conditions and degraded agreement with observations during wet extremes. For the more complex models we find the opposite behavior, probably because they were primarily developed for prediction of runoff extremes. As expected given their complexity, HBV and PREVAH have more problems with over-fitting. All models show a tendency towards better performance in lower altitudes as opposed to (pre-) alpine sites. The results vary considerably across the investigated sites. In contrast, the different metrics we consider to estimate the agreement between models and observations lead to similar conclusions, indicating that the performance of the considered models is similar at different time scales as well as for anomalies and long-term means. We conclude that added complexity does not necessarily lead to improved performance of hydrological models, and that performance can vary greatly depending on the considered hydrological variable (e.g. runoff vs. soil moisture) or hydrological conditions (floods vs. droughts).
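
    To make the simplest member of such a hierarchy concrete, a one-bucket water balance model fits in a few lines. The runoff and evapotranspiration closures and all parameter values below are illustrative assumptions, not the calibrated configuration used in the study:

        import numpy as np

        def swbm(precip, pet, w_max=300.0, alpha=2.0, w0=150.0):
            """precip, pet: daily forcing (mm); returns runoff and soil moisture (mm)."""
            w, runoff, storage = w0, [], []
            for p, e in zip(precip, pet):
                q = p * (w / w_max) ** alpha     # wetter soil sheds more rain as runoff
                et = e * (w / w_max)             # ET limited by available moisture
                w = min(max(w + p - q - et, 0.0), w_max)
                runoff.append(q); storage.append(w)
            return np.array(runoff), np.array(storage)

        rng = np.random.default_rng(5)
        precip = rng.gamma(0.8, 4.0, 365)        # synthetic daily precipitation (mm)
        pet = 2.0 + 1.5 * np.sin(np.arange(365) / 365 * 2 * np.pi)
        q, w = swbm(precip, pet)
        print(f"mean runoff {q.mean():.2f} mm/day, mean storage {w.mean():.0f} mm")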

  11. Computational Modeling of Complex Protein Activity Networks

    NARCIS (Netherlands)

    Schivo, Stefano; Leijten, Jeroen; Karperien, Marcel; Post, Janine N.; Prignet, Claude

    2017-01-01

    Because of the numerous entities interacting, the complexity of the networks that regulate cell fate makes it impossible to analyze and understand them using the human brain alone. Computational modeling is a powerful method to unravel complex systems. We recently described the development of a

  12. Fiction and reality in the modelling world - Balance between simplicity and complexity, calibration and identifiability, verification and falsification

    DEFF Research Database (Denmark)

    Harremoës, P.; Madsen, H.

    1999-01-01

    Where is the balance between simplicity and complexity in model prediction of urban drainage structures? The calibration/verification approach to testing model performance gives an exaggerated sense of certainty. Frequently, the model structure and the parameters are not identifiable by calibration/verification on the basis of the data series available, which generates elements of sheer guessing - unless the universality of the model is based on induction, i.e. experience from the sum of all previous investigations. There is a need to deal more explicitly with uncertainty...

  13. Extensive video-game experience alters cortical networks for complex visuomotor transformations.

    Science.gov (United States)

    Granek, Joshua A; Gorbet, Diana J; Sergio, Lauren E

    2010-10-01

    Using event-related functional magnetic resonance imaging (fMRI), we examined the effect of video-game experience on the neural control of increasingly complex visuomotor tasks. Previously, skilled individuals have demonstrated the use of a more efficient movement control brain network, including the prefrontal, premotor, primary sensorimotor and parietal cortices. Our results extend and generalize this finding by documenting additional prefrontal cortex activity in experienced video gamers planning for complex eye-hand coordination tasks that are distinct from actual video-game play. These changes in activation between non-gamers and extensive gamers are putatively related to the increased online control and spatial attention required for complex visually guided reaching. These data suggest that the basic cortical network for processing complex visually guided reaching is altered by extensive video-game play. Crown Copyright © 2009. Published by Elsevier Srl. All rights reserved.

  14. Foundations for Streaming Model Transformations by Complex Event Processing.

    Science.gov (United States)

    Dávid, István; Ráth, István; Varró, Dániel

    2018-01-01

    Streaming model transformations represent a novel class of transformations to manipulate models whose elements are continuously produced or modified in high volume and with rapid rate of change. Executing streaming transformations requires efficient techniques to recognize activated transformation rules over a live model and a potentially infinite stream of events. In this paper, we propose foundations of streaming model transformations by innovatively integrating incremental model query, complex event processing (CEP) and reactive (event-driven) transformation techniques. Complex event processing allows to identify relevant patterns and sequences of events over an event stream. Our approach enables event streams to include model change events which are automatically and continuously populated by incremental model queries. Furthermore, a reactive rule engine carries out transformations on identified complex event patterns. We provide an integrated domain-specific language with precise semantics for capturing complex event patterns and streaming transformations together with an execution engine, all of which is now part of the Viatra reactive transformation framework. We demonstrate the feasibility of our approach with two case studies: one in an advanced model engineering workflow; and one in the context of on-the-fly gesture recognition.

  15. Different Epidemic Models on Complex Networks

    International Nuclear Information System (INIS)

    Zhang Haifeng; Small, Michael; Fu Xinchu

    2009-01-01

    Models for disease spreading are not limited to SIS or SIR. For instance, for the spreading of AIDS/HIV, the susceptible individuals can be classified into different cases according to their immunity, and similarly, the infected individuals can be sorted into different classes according to their infectivity. Moreover, some diseases may develop through several stages. Many authors have shown that the relations between individuals can be viewed as a complex network. In this paper, in order to better explain the dynamical behavior of epidemics, we therefore consider different epidemic models on complex networks and obtain the epidemic threshold for each case. Finally, we present numerical simulations for each case to verify our results.
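
    A concrete instance of such threshold results is the heterogeneous mean-field SIS threshold, lambda_c = <k>/<k^2>, which nearly vanishes on networks with heavy-tailed degree distributions. The degree sequences below are synthetic examples:

        import numpy as np

        def sis_threshold(degrees):
            """Heterogeneous mean-field SIS epidemic threshold: <k> / <k^2>."""
            k = np.asarray(degrees, dtype=float)
            return k.mean() / (k**2).mean()

        rng = np.random.default_rng(6)
        homogeneous = rng.poisson(8, 10000)                    # ER-like contacts
        scale_free = np.round(2 * rng.pareto(1.5, 10000) + 2)  # heavy-tailed degrees
        print("homogeneous lambda_c:", sis_threshold(homogeneous))
        print("scale-free  lambda_c:", sis_threshold(scale_free))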

  16. Modeling the Internet of Things, Self-Organizing and Other Complex Adaptive Communication Networks: A Cognitive Agent-Based Computing Approach.

    Directory of Open Access Journals (Sweden)

    Samreen Laghari

    Full Text Available Computer networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices ranging from mobile phones to other household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of Agent-based Modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a complex communication network problem. We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. The conducted experiments demonstrated two important results: primarily, a CABC-based modeling approach such as Agent-based Modeling can be an effective approach to modeling complex problems in the domain of IoT; secondly, the specific problem of managing the carbon footprint can be solved using a multi-agent system approach.

  17. Modeling the Internet of Things, Self-Organizing and Other Complex Adaptive Communication Networks: A Cognitive Agent-Based Computing Approach.

    Science.gov (United States)

    Laghari, Samreen; Niazi, Muaz A

    2016-01-01

    Computer Networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices ranging from mobile phones to other household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of Agent-based Modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a Complex communication network problem. We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. The conducted experiments demonstrated two important results: Primarily CABC-based modeling approach such as using Agent-based Modeling can be an effective approach to modeling complex problems in the domain of IoT. Secondly, the specific problem of managing the Carbon footprint can be solved using a multiagent system approach.

  18. PODIO: An Event-Data-Model Toolkit for High Energy Physics Experiments

    Science.gov (United States)

    Gaede, F.; Hegner, B.; Mato, P.

    2017-10-01

    PODIO is a C++ library that supports the automatic creation of event data models (EDMs) and efficient I/O code for HEP experiments. It is developed as a new EDM Toolkit for future particle physics experiments in the context of the AIDA2020 EU programme. Experience from LHC and the linear collider community shows that existing solutions partly suffer from overly complex data models with deep object-hierarchies or unfavorable I/O performance. The PODIO project was created in order to address these problems. PODIO is based on the idea of employing plain-old-data (POD) data structures wherever possible, while avoiding deep object-hierarchies and virtual inheritance. At the same time it provides the necessary high-level interface towards the developer physicist, such as the support for inter-object relations and automatic memory-management, as well as a Python interface. To simplify the creation of efficient data models PODIO employs code generation from a simple yaml-based markup language. In addition, it was developed with concurrency in mind in order to support the use of modern CPU features, for example giving basic support for vectorization techniques.

  19. Mathematical approaches for complexity/predictivity trade-offs in complex system models : LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Goldsby, Michael E.; Mayo, Jackson R.; Bhattacharyya, Arnab (Massachusetts Institute of Technology, Cambridge, MA); Armstrong, Robert C.; Vanderveen, Keith

    2008-09-01

    The goal of this research was to examine foundational methods, both computational and theoretical, that can improve the veracity of entity-based complex system models and increase confidence in their predictions for emergent behavior. The strategy was to seek insight and guidance from simplified yet realistic models, such as cellular automata and Boolean networks, whose properties can be generalized to production entity-based simulations. We have explored the usefulness of renormalization-group methods for finding reduced models of such idealized complex systems. We have prototyped representative models that are both tractable and relevant to Sandia mission applications, and quantified the effect of computational renormalization on the predictive accuracy of these models, finding good predictivity from renormalized versions of cellular automata and Boolean networks. Furthermore, we have theoretically analyzed the robustness properties of certain Boolean networks, relevant for characterizing organic behavior, and obtained precise mathematical constraints on systems that are robust to failures. In combination, our results provide important guidance for more rigorous construction of entity-based models, which currently are often devised in an ad-hoc manner. Our results can also help in designing complex systems with the goal of predictable behavior, e.g., for cybersecurity.
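
    The robustness question can be probed numerically with a Derrida-style one-bit perturbation experiment on a random Boolean network. The generic Kauffman-style construction below is only an illustration of that probe, not the report's specific ensemble or its analytical constraints:

        import numpy as np

        rng = np.random.default_rng(8)
        N, K, T = 200, 2, 30                        # nodes, inputs per node, steps
        inputs = np.array([rng.choice(N, K, replace=False) for _ in range(N)])
        tables = rng.integers(0, 2, size=(N, 2**K)) # random Boolean update rules

        def step(state):
            idx = (state[inputs] * (2 ** np.arange(K))).sum(axis=1)
            return tables[np.arange(N), idx]

        x = rng.integers(0, 2, N)
        y = x.copy(); y[0] ^= 1                     # flip a single node
        for _ in range(T):
            x, y = step(x), step(y)
        print("final Hamming distance:", int(np.sum(x != y)))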

  20. Smart modeling and simulation for complex systems practice and theory

    CERN Document Server

    Ren, Fenghui; Zhang, Minjie; Ito, Takayuki; Tang, Xijin

    2015-01-01

    This book aims to provide a description of new Artificial Intelligence technologies and approaches to the modeling and simulation of complex systems, as well as an overview of the latest scientific efforts in this field, such as the platforms and/or software tools for the smart modeling and simulation of complex systems. These tasks are difficult to accomplish using traditional computational approaches due to the complex relationships of components and the distributed features of resources, as well as the dynamic work environments. In order to model complex systems effectively, intelligent technologies such as multi-agent systems and smart grids are employed to model and simulate complex systems in the areas of ecosystems, social and economic organization, web-based grid services, transportation systems, power systems and evacuation systems.

  1. Universal correlators for multi-arc complex matrix models

    International Nuclear Information System (INIS)

    Akemann, G.

    1997-01-01

    The correlation functions of the multi-arc complex matrix model are shown to be universal for any finite number of arcs. The universality classes are characterized by the support of the eigenvalue density and are conjectured to fall into the same classes as the ones recently found for the Hermitian model. This is explicitly shown to be true for the case of two arcs, apart from the known result for one arc. The basic tool is the iterative solution of the loop equation for the complex matrix model with multiple arcs, which provides all multi-loop correlators up to an arbitrary genus. Explicit results for genus one are given for any number of arcs. The two-arc solution is investigated in detail, including the double-scaling limit. In addition universal expressions for the string susceptibility are given for both the complex and Hermitian model. (orig.)

  2. Understanding complex urban systems multidisciplinary approaches to modeling

    CERN Document Server

    Gurr, Jens; Schmidt, J

    2014-01-01

    Understanding Complex Urban Systems takes as its point of departure the insight that the challenges of global urbanization and the complexity of urban systems cannot be understood – let alone ‘managed’ – by sectoral and disciplinary approaches alone. But while there has recently been significant progress in broadening and refining the methodologies for the quantitative modeling of complex urban systems, in deepening the theoretical understanding of cities as complex systems, or in illuminating the implications for urban planning, there is still a lack of well-founded conceptual thinking on the methodological foundations and the strategies of modeling urban complexity across the disciplines. Bringing together experts from the fields of urban and spatial planning, ecology, urban geography, real estate analysis, organizational cybernetics, stochastic optimization, and literary studies, as well as specialists in various systems approaches and in transdisciplinary methodologies of urban analysis, the volum...

  3. Modelling of turbulence and combustion for simulation of gas explosions in complex geometries

    Energy Technology Data Exchange (ETDEWEB)

    Arntzen, Bjoern Johan

    1998-12-31

    This thesis analyses and presents new models for turbulent reactive flows for CFD (Computational Fluid Dynamics) simulation of gas explosions in complex geometries like offshore modules. The course of a gas explosion in a complex geometry is largely determined by the development of turbulence and the accompanying increased combustion rate. To model the process, it is necessary to start from a CFD code provided with suitable turbulence and combustion models. The modelling and calculations are done in a three-dimensional finite-volume CFD code, where complex geometries are represented by a porosity concept, which gives porosity on the grid cell faces depending on what is inside the cell. The turbulent flow field is modelled with a k-ε turbulence model. Subgrid models are used for production of turbulence from geometry not fully resolved on the grid. Results from laser Doppler anemometry measurements around obstructions in steady and transient flows have been analysed, and the turbulence models have been improved to handle transient, subgrid and reactive flows. The combustion is modelled with a burning velocity model and a flame model which incorporates the burning velocity into the code. Two different flame models have been developed: SIF (Simple Interface Flame model), which treats the flame as an interface between reactants and products, and the β-model, where the reaction zone is resolved with about three grid cells. The flame normally starts with a quasi-laminar burning velocity, due to flame instabilities, modelled as a function of flame radius and laminar burning velocity. As the flow field becomes turbulent, the flame uses a turbulent burning velocity model based on experimental data and dependent on turbulence parameters and laminar burning velocity. The laminar burning velocity is modelled as a function of gas mixture, equivalence ratio, pressure and temperature in the reactant. Simulations agree well with experiments.

  4. A complex autoregressive model and application to monthly temperature forecasts

    Directory of Open Access Journals (Sweden)

    X. Gu

    2005-11-01

    Full Text Available A complex autoregressive model was established based on the mathematical derivation of least squares for the complex number domain, referred to here as complex least squares. The model differs from the conventional approach in which the real and imaginary parts are calculated separately. An application of this new model yields better forecasts of monthly temperature anomalies in July at 160 meteorological stations in mainland China than conventional statistical models. The conventional statistical models include an autoregressive model in which the real and imaginary parts are treated separately, an autoregressive model in the real number domain, and a persistence-forecast model.
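
    Complex least squares of this kind can be sketched directly with NumPy, whose lstsq solves complex-valued systems without splitting real and imaginary parts; the synthetic AR(2) series below is illustrative, not the station data of the paper:

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic complex AR(2) series: z_t = a1*z_{t-1} + a2*z_{t-2} + noise
        a_true = np.array([0.6 - 0.2j, -0.3 + 0.1j])
        z = np.zeros(500, dtype=complex)
        for t in range(2, z.size):
            noise = rng.normal(scale=0.1) + 1j * rng.normal(scale=0.1)
            z[t] = a_true[0] * z[t - 1] + a_true[1] * z[t - 2] + noise

        # One regression in the complex domain, rather than separate fits
        # for the real and imaginary parts.
        X = np.column_stack([z[1:-1], z[:-2]])
        y = z[2:]
        a_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        print("estimated coefficients:", np.round(a_hat, 3))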

  5. Verification of atmospheric diffusion models with data of atmospheric diffusion experiments

    International Nuclear Information System (INIS)

    Hato, Shinji; Homma, Toshimitsu

    2009-02-01

    The atmospheric diffusion experiments were carried out by the Japan Atomic Energy Research Institute (JAERI) around Mount Tsukuba in 1989 and 1990, and tracer gas concentrations were monitored. In this study, the Gaussian plume model and RAMS/HYPACT, a meteorological forecast code coupled with an atmospheric diffusion code based on detailed physical laws, are compared against the monitored concentrations. In conclusion, the Gaussian plume model performs better than RAMS/HYPACT, even over complex topography, when the receptor is within a few tens of kilometres of the release point and the weather remains steady over short periods. The reason is the difference between the RAMS wind fields and the observed winds. (author)
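
    For reference, the Gaussian plume model compared here reduces to a closed-form expression; a minimal sketch with illustrative power-law dispersion coefficients (real applications use Pasquill-Gifford curves for the prevailing stability class, not these placeholder values):

        import numpy as np

        def gaussian_plume(q, u, x, y, z, h, a=0.08, b=0.06):
            """Ground-reflected Gaussian plume concentration.
            q: emission rate (g/s), u: wind speed (m/s), h: release height (m),
            (x, y, z): downwind, crosswind and vertical receptor coordinates (m).
            sigma_y and sigma_z use illustrative power laws of x."""
            sigma_y = a * x ** 0.9
            sigma_z = b * x ** 0.85
            lateral = np.exp(-y ** 2 / (2 * sigma_y ** 2))
            vertical = (np.exp(-(z - h) ** 2 / (2 * sigma_z ** 2))
                        + np.exp(-(z + h) ** 2 / (2 * sigma_z ** 2)))  # image source
            return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

        # 1 g/s tracer released at 50 m height, sampled 5 km downwind at ground level
        print(gaussian_plume(q=1.0, u=3.0, x=5000.0, y=0.0, z=0.0, h=50.0))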

  6. Surface complexation modeling calculation of Pb(II) adsorption onto the calcined diatomite

    Science.gov (United States)

    Ma, Shu-Cui; Zhang, Ji-Lin; Sun, De-Hui; Liu, Gui-Xia

    2015-12-01

    Removal of noxious heavy metal ions (e.g. Pb(II)) by surface adsorption on minerals (e.g. diatomite) is an important means of controlling aqueous environmental pollution, so it is essential to understand the surface adsorption behavior and mechanism. In this work, the apparent surface complexation equilibrium constants of Pb(II) on calcined diatomite and the distributions of Pb(II) surface species were investigated through model calculations based on a diffuse double layer model (DLM) with three amphoteric sites. Batch experiments were used to study the adsorption of Pb(II) onto the calcined diatomite as a function of pH (3.0-7.0) and ionic strength (0.05 and 0.1 mol L-1 NaCl) under ambient atmosphere. Adsorption of Pb(II) is well described by a Freundlich isotherm model. The apparent surface complexation equilibrium constants (log K) were obtained by fitting the batch experimental data using the PEST 13.0 and PHREEQC 3.1.2 codes, with good agreement between measured and predicted data. The distribution of Pb(II) surface species on the diatomite calculated with PHREEQC 3.1.2 indicates that impurity cations (e.g. Al3+, Fe3+) in the diatomite play a leading role in Pb(II) adsorption, and that formation of surface complexes, together with additional electrostatic interaction, is the main adsorption mechanism of Pb(II) on diatomite under weakly acidic conditions.
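
    The Freundlich description used above is straightforward to reproduce; a sketch fitting q = K_F * C^(1/n) to synthetic batch data (the numbers are illustrative, not the paper's measurements):

        import numpy as np
        from scipy.optimize import curve_fit

        def freundlich(c_eq, k_f, n):
            """Freundlich isotherm: adsorbed amount q = K_F * C_eq**(1/n)."""
            return k_f * c_eq ** (1.0 / n)

        # Synthetic equilibrium data (C_eq in mg/L, q in mg/g), illustration only
        c_eq = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
        q_obs = np.array([1.2, 1.7, 2.4, 3.8, 5.2, 7.4])

        (k_f, n), _ = curve_fit(freundlich, c_eq, q_obs, p0=(1.0, 1.0))
        print(f"K_F = {k_f:.2f}, n = {n:.2f}")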

  7. Design of experiments and springback prediction for AHSS automotive components with complex geometry

    International Nuclear Information System (INIS)

    Asgari, A.; Pereira, M.; Rolfe, B.; Dingle, M.; Hodgson, P.

    2005-01-01

    With the drive towards implementing Advanced High Strength Steels (AHSS) in the automotive industry, stamping engineers need to quickly answer questions about forming these strong materials into elaborate shapes. Commercially available codes have been successfully used to accurately predict formability, thickness and strains in complex parts. However, springback and twisting are still challenging subjects in numerical simulations of AHSS components. Design of Experiments (DOE) has been used in this paper to study the sensitivity of the implicit and explicit numerical results with respect to certain arrays of user input parameters in the forming of an AHSS component. Numerical results were compared to experimental measurements of parts stamped on an industrial production line. The forming predictions of the implicit and explicit codes were in good agreement with the experimental measurements for the conventional steel grade, while lower accuracies were observed for the springback predictions. The forming predictions of the complex component with an AHSS material were also in good correlation with the respective experimental measurements; however, much lower accuracies were observed in its springback predictions. The number of integration points through the thickness and the tool offset were found to be of significant importance, while the coefficient of friction and Young's modulus (modeling input parameters) had no significant effect on the accuracy of the predictions for the complex geometry.

  8. Geometric Modelling with α-Complexes

    NARCIS (Netherlands)

    Gerritsen, B.H.M.; Werff, K. van der; Veltkamp, R.C.

    2001-01-01

    The shape of real objects can be so complicated that only a sampled data point set can represent them accurately; analytic descriptions are too complicated or impossible. Natural objects, for example, can be vague and rough, with many holes. For this kind of modelling, α-complexes offer advantages.
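
    For a hands-on flavour of the approach, assuming the GUDHI library (one of several α-complex implementations; the point cloud is arbitrary):

        import numpy as np
        import gudhi  # assumed available: pip install gudhi

        # The alpha complex is a filtration of the Delaunay triangulation whose
        # sub-complexes capture the shape of the sample at different scales.
        rng = np.random.default_rng(1)
        points = rng.random((200, 3))

        alpha = gudhi.AlphaComplex(points=points)
        st = alpha.create_simplex_tree()  # filtration values = squared circumradii
        print(st.num_vertices(), "vertices,", st.num_simplices(), "simplices")

        # Keep only simplices up to a chosen alpha^2 to model shape at that scale
        st.prune_above_filtration(0.01)
        print("after pruning:", st.num_simplices(), "simplices")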

  9. Adapting APSIM to model the physiology and genetics of complex adaptive traits in field crops.

    Science.gov (United States)

    Hammer, Graeme L; van Oosterom, Erik; McLean, Greg; Chapman, Scott C; Broad, Ian; Harland, Peter; Muchow, Russell C

    2010-05-01

    Progress in molecular plant breeding is limited by the ability to predict plant phenotype based on its genotype, especially for complex adaptive traits. Suitably constructed crop growth and development models have the potential to bridge this predictability gap. A generic cereal crop growth and development model is outlined here. It is designed to exhibit reliable predictive skill at the crop level while also introducing sufficient physiological rigour for complex phenotypic responses to become emergent properties of the model dynamics. The approach quantifies capture and use of radiation, water, and nitrogen within a framework that predicts the realized growth of major organs based on their potential and whether the supply of carbohydrate and nitrogen can satisfy that potential. The model builds on existing approaches within the APSIM software platform. Experiments on diverse genotypes of sorghum that underpin the development and testing of the adapted crop model are detailed. Genotypes differing in height were found to differ in biomass partitioning among organs and a tall hybrid had significantly increased radiation use efficiency: a novel finding in sorghum. Introducing these genetic effects associated with plant height into the model generated emergent simulated phenotypic differences in green leaf area retention during grain filling via effects associated with nitrogen dynamics. The relevance to plant breeding of this capability in complex trait dissection and simulation is discussed.

  10. Bim Automation: Advanced Modeling Generative Process for Complex Structures

    Science.gov (United States)

    Banfi, F.; Fai, S.; Brumana, R.

    2017-08-01

    The new paradigm of the complexity of modern and historic structures, which are characterised by complex forms and morphological and typological variables, is one of the greatest challenges for building information modelling (BIM). Generation of complex parametric models needs new scientific knowledge concerning new digital technologies. These elements are helpful to store a vast quantity of information during the life cycle of buildings (LCB). The latest developments of parametric applications do not provide advanced tools, resulting in time-consuming work for the generation of models. This paper presents a method capable of processing and creating complex parametric Building Information Models (BIM) with Non-Uniform Rational Basis Splines (NURBS) at multiple levels of detail (Mixed and Reverse LoD), based on accurate 3D photogrammetric and laser scanning surveys. Complex 3D elements are converted into parametric BIM software and finite element applications (BIM to FEA) using specific exchange formats and new modelling tools. The proposed approach has been applied to different case studies: the BIM of the modern structure for the courtyard of the West Block on Parliament Hill in Ottawa (Ontario) and the BIM of Castel Masegra in Sondrio (Italy), encouraging the dissemination and interaction of scientific results without losing information during the generative process.

  11. Surface complexation models for uranium adsorption in the sub-surface environment

    International Nuclear Information System (INIS)

    Payne, T.E.

    2007-01-01

    Adsorption experiments with soil component minerals under a range of conditions are being used to develop models of uranium(VI) uptake in the sub-surface environment. The results show that adsorption of U on iron oxides and clay minerals is influenced by chemical factors including the pH, partial pressure of CO2, and the presence of ligands such as phosphate. Surface complexation models (SCMs) can be used to simulate U adsorption on these minerals. The SCMs are based on plausible mechanistic assumptions and describe the experimental data more adequately than Kd values or sorption isotherms. It is conceptually possible to simulate U sorption data on complex natural samples by combining SCMs for individual component minerals. This approach was used to develop a SCM for U adsorption to mineral assemblages from Koongarra (Australia), and produced a reasonable description of U uptake. In order to assess the applicability of experimental data to the field situation, in-situ measurements of U distributions between solid and liquid phases were undertaken at the Koongarra U deposit. This field partitioning data showed a satisfactory agreement with laboratory sorption data obtained under comparable conditions. (author)

  12. Coping with Complexity Model Reduction and Data Analysis

    CERN Document Server

    Gorban, Alexander N

    2011-01-01

    This volume contains the extended version of selected talks given at the international research workshop 'Coping with Complexity: Model Reduction and Data Analysis', Ambleside, UK, August 31 - September 4, 2009. This book is deliberately broad in scope and aims at promoting new ideas and methodological perspectives. The topics of the chapters range from theoretical analysis of complex and multiscale mathematical models to applications in e.g., fluid dynamics and chemical kinetics.

  13. Using advanced surface complexation models for modelling soil chemistry under forests: Solling forest, Germany

    Energy Technology Data Exchange (ETDEWEB)

    Bonten, Luc T.C., E-mail: luc.bonten@wur.nl [Alterra-Wageningen UR, Soil Science Centre, P.O. Box 47, 6700 AA Wageningen (Netherlands); Groenenberg, Jan E. [Alterra-Wageningen UR, Soil Science Centre, P.O. Box 47, 6700 AA Wageningen (Netherlands); Meesenburg, Henning [Northwest German Forest Research Station, Abt. Umweltkontrolle, Sachgebiet Intensives Umweltmonitoring, Goettingen (Germany); Vries, Wim de [Alterra-Wageningen UR, Soil Science Centre, P.O. Box 47, 6700 AA Wageningen (Netherlands)

    2011-10-15

    Various dynamic soil chemistry models have been developed to gain insight into impacts of atmospheric deposition of sulphur, nitrogen and other elements on soil and soil solution chemistry. Sorption parameters for anions and cations are generally calibrated for each site, which hampers extrapolation in space and time. On the other hand, recently developed surface complexation models (SCMs) have been successful in predicting ion sorption for static systems using generic parameter sets. This study reports the inclusion of an assemblage of these SCMs in the dynamic soil chemistry model SMARTml and applies this model to a spruce forest site in Solling Germany. Parameters for SCMs were taken from generic datasets and not calibrated. Nevertheless, modelling results for major elements matched observations well. Further, trace metals were included in the model, also using the existing framework of SCMs. The model predicted sorption for most trace elements well. - Highlights: > Surface complexation models can be well applied in field studies. > Soil chemistry under a forest site is adequately modelled using generic parameters. > The model is easily extended with extra elements within the existing framework. > Surface complexation models can show the linkages between major soil chemistry and trace element behaviour. - Surface complexation models with generic parameters make calibration of sorption superfluous in dynamic modelling of deposition impacts on soil chemistry under nature areas.

  15. A marketing mix model for a complex and turbulent environment

    Directory of Open Access Journals (Sweden)

    R. B. Mason

    2007-12-01

    Full Text Available Purpose: This paper is based on the proposition that the choice of marketing tactics is determined, or at least significantly influenced, by the nature of the company’s external environment. It aims to illustrate the type of marketing mix tactics that are suggested for a complex and turbulent environment when marketing and the environment are viewed through a chaos and complexity theory lens. Design/Methodology/Approach: Since chaos and complexity theories are proposed as a good means of understanding the dynamics of complex and turbulent markets, a comprehensive review and analysis of literature on the marketing mix and marketing tactics from a chaos and complexity viewpoint was conducted. From this literature review, a marketing mix model was conceptualised. Findings: A marketing mix model considered appropriate for success in complex and turbulent environments was developed. In such environments, the literature suggests destabilising marketing activities are more effective, whereas stabilising type activities are more effective in simple, stable environments. Therefore the model proposes predominantly destabilising type tactics as appropriate for a complex and turbulent environment such as is currently being experienced in South Africa. Implications: This paper is of benefit to marketers by emphasising a new way to consider the future marketing activities of their companies. How this model can assist marketers and suggestions for research to develop and apply this model are provided. It is hoped that the model suggested will form the basis of empirical research to test its applicability in the turbulent South African environment. Originality/Value: Since businesses and markets are complex adaptive systems, using complexity theory to understand how to cope in complex, turbulent environments is necessary, but has not been widely researched. In fact, most chaos and complexity theory work in marketing has concentrated on marketing strategy, with

  16. Design of an intermediate-scale experiment to validate unsaturated- zone transport models

    International Nuclear Information System (INIS)

    Siegel, M.D.; Hopkins, P.L.; Glass, R.J.; Ward, D.B.

    1991-01-01

    An intermediate-scale experiment is being carried out to evaluate instrumentation and models that might be used for transport-model validation for the Yucca Mountain Site Characterization Project. The experimental test bed is a 6-m-high by 3-m-diameter caisson filled with quartz sand, with a sorbing layer at an intermediate depth. The experiment involves the detection and prediction of the migration of fluid and tracers through an unsaturated porous medium. Pre-test design requires estimation of physical properties of the porous medium, such as the relative permeability, saturation/pressure relations, porosity, and saturated hydraulic conductivity, as well as geochemical properties such as surface complexation constants and empirical Kd values. The pre-test characterization data will be used as input to several computer codes to predict the fluid flow and tracer migration. These include a coupled chemical-reaction/transport model, a stochastic model, and a deterministic model using retardation factors. The calculations will be completed prior to elution of the tracers, providing a basis for validation by comparing the predictions to observed moisture and tracer behavior.
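
    The deterministic retardation-factor model mentioned last is the simplest of the three; a minimal sketch of the standard relation R = 1 + (rho_b/theta)*Kd, with illustrative sand-like values rather than the caisson's measured properties:

        def retardation_factor(bulk_density, water_content, kd):
            """R = 1 + (rho_b / theta) * Kd for linear, reversible sorption.
            bulk_density in kg/L, water_content (theta) dimensionless, kd in L/kg."""
            return 1.0 + (bulk_density / water_content) * kd

        # Illustrative values: quartz sand with a weakly sorbing tracer
        r = retardation_factor(bulk_density=1.6, water_content=0.3, kd=2.0)
        print(f"R = {r:.1f}: the tracer front moves {r:.1f}x slower than the water")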

  17. Complexation Effect on Redox Potential of Iron(III)-Iron(II) Couple: A Simple Potentiometric Experiment

    Science.gov (United States)

    Rizvi, Masood Ahmad; Syed, Raashid Maqsood; Khan, Badruddin

    2011-01-01

    A titration curve with multiple inflection points results when a mixture of two or more reducing agents with sufficiently different reduction potentials is titrated. In this experiment, iron(II) complexes are combined into a mixture of reducing agents and are oxidized to the corresponding iron(III) complexes. As all of the complexes involve the…
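
    The underlying shift of the formal potential follows from the Nernst equation and the stability constants of the two oxidation states; a minimal sketch with hypothetical log beta values (not the complexes of this experiment):

        def conditional_potential(e0, log_beta_iii, log_beta_ii, n=1):
            """Formal potential of an Fe(III)/Fe(II) couple whose ions are fully
            complexed by the same 1:1 ligand (25 C, Nernst slope 59.16 mV).
            Stronger Fe(III) binding (log_beta_iii > log_beta_ii) lowers E."""
            return e0 - (0.05916 / n) * (log_beta_iii - log_beta_ii)

        e0_aqua = 0.771  # V, the aqua Fe3+/Fe2+ couple
        # Hypothetical stability constants, for illustration only
        print(f"E0' = {conditional_potential(e0_aqua, 25.1, 14.3):.3f} V")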

  18. Bourbaki's structure theory in the problem of complex systems simulation models synthesis and model-oriented programming

    Science.gov (United States)

    Brodsky, Yu. I.

    2015-01-01

    The work is devoted to the application of Bourbaki's structure theory to substantiate the synthesis of simulation models of complex multicomponent systems, where every component may be a complex system itself. An application of Bourbaki's structure theory offers a new approach to the design and computer implementation of simulation models of complex multicomponent systems: model synthesis and model-oriented programming. It differs from the traditional object-oriented approach. The central concept of this new approach, and at the same time the basic building block for the construction of more complex structures, is the concept of the model-component. A model-component is endowed with a more complicated structure than, for example, an object in object-oriented analysis. This structure provides the model-component with independent behavior: the ability to respond in a standard way to standard requests from its internal and external environment. At the same time, the computer implementation of a model-component's behavior is invariant under the integration of model-components into complexes. This fact allows one, firstly, to construct fractal models of any complexity and, secondly, to implement the computational process for such constructions uniformly, by a single universal program. In addition, the proposed paradigm allows one to exclude imperative programming and to generate computer code with a high degree of parallelism.

  19. Morphogenesis and pattern formation in biological systems experiments and models

    CERN Document Server

    Noji, Sumihare; Ueno, Naoto; Maini, Philip

    2003-01-01

    A central goal of current biology is to decode the mechanisms that underlie the processes of morphogenesis and pattern formation. Concerned with the analysis of those phenomena, this book covers a broad range of research fields, including developmental biology, molecular biology, plant morphogenesis, ecology, epidemiology, medicine, paleontology, evolutionary biology, mathematical biology, and computational biology. In Morphogenesis and Pattern Formation in Biological Systems: Experiments and Models, experimental and theoretical aspects of biology are integrated for the construction and investigation of models of complex processes. This collection of articles on the latest advances by leading researchers not only brings together work from a wide spectrum of disciplines, but also provides a stepping-stone to the creation of new areas of discovery.

  20. Surface Complexation Modeling in Variable Charge Soils: Prediction of Cadmium Adsorption

    Directory of Open Access Journals (Sweden)

    Giuliano Marchi

    2015-10-01

    Full Text Available ABSTRACT Intrinsic equilibrium constants for 22 representative Brazilian Oxisols were estimated from a cadmium adsorption experiment. Equilibrium constants were fitted to two surface complexation models: diffuse layer and constant capacitance. Intrinsic equilibrium constants were optimized by FITEQL and by hand calculation using Visual MINTEQ in sweep mode, and Excel spreadsheets. Data from both models were incorporated into Visual MINTEQ. Constants estimated by FITEQL and incorporated in Visual MINTEQ software failed to predict observed data accurately. However, FITEQL raw output data rendered good results when predicted values were directly compared with observed values, instead of incorporating the estimated constants into Visual MINTEQ. Intrinsic equilibrium constants optimized by hand calculation and incorporated in Visual MINTEQ reliably predicted Cd adsorption reactions on soil surfaces under changing environmental conditions.

  1. Exploring the Impact of Visual Complexity Levels in 3D City Models on the Accuracy of Individuals' Orientation and Cognitive Maps

    Science.gov (United States)

    Rautenbach, V.; Çöltekin, A.; Coetzee, S.

    2015-08-01

    In this paper we report results from a qualitative user experiment (n=107) designed to contribute to understanding the impact of various levels of complexity (mainly based on levels of detail, i.e., LoD) in 3D city models, specifically on the participants' orientation and cognitive (mental) maps. The experiment consisted of a number of tasks motivated by spatial cognition theory where participants (among other things) were given orientation tasks, and in one case also produced sketches of a path they `travelled' in a virtual environment. The experiments were conducted in groups, where individuals provided responses on an answer sheet. The preliminary results based on descriptive statistics and qualitative sketch analyses suggest that very little information (i.e., a low LoD model of a smaller area) might have a negative impact on the accuracy of cognitive maps constructed based on a virtual experience. Building an accurate cognitive map is an inherently desired effect of the visualizations in planning tasks, thus the findings are important for understanding how to develop better-suited 3D visualizations such as 3D city models. In this study, we specifically discuss the suitability of different levels of visual complexity for development planning (urban planning), one of the domains where 3D city models are most relevant.

  2. Polystochastic Models for Complexity

    CERN Document Server

    Iordache, Octavian

    2010-01-01

    This book is devoted to complexity understanding and management, considered as the main source of efficiency and prosperity for the next decades. Divided into six chapters, the book begins with a presentation of basic concepts as complexity, emergence and closure. The second chapter looks to methods and introduces polystochastic models, the wave equation, possibilities and entropy. The third chapter focusing on physical and chemical systems analyzes flow-sheet synthesis, cyclic operations of separation, drug delivery systems and entropy production. Biomimetic systems represent the main objective of the fourth chapter. Case studies refer to bio-inspired calculation methods, to the role of artificial genetic codes, neural networks and neural codes for evolutionary calculus and for evolvable circuits as biomimetic devices. The fifth chapter, taking its inspiration from systems sciences and cognitive sciences looks to engineering design, case base reasoning methods, failure analysis, and multi-agent manufacturing...

  3. Dynamic and impact contact mechanics of geologic materials: Grain-scale experiments and modeling

    International Nuclear Information System (INIS)

    Cole, David M.; Hopkins, Mark A.; Ketcham, Stephen A.

    2013-01-01

    High fidelity treatments of the generation and propagation of seismic waves in naturally occurring granular materials are becoming more practical given recent advancements in our ability to model complex particle shapes and their mechanical interaction. Of particular interest are the grain-scale processes that are activated by impact events and the characteristics of force transmission through grain contacts. To address this issue, we have developed a physics-based approach that involves laboratory experiments to quantify the dynamic contact and impact behavior of granular materials, and incorporation of the observed behavior in discrete element models. The dynamic experiments do not involve particle damage, and emphasis is placed on measured values of contact stiffness and frictional loss. The normal stiffnesses observed in dynamic contact experiments at low frequencies (e.g., 10 Hz) are shown to be in good agreement with quasistatic experiments on quartz sand. The results of impact experiments, which involve moderate to extensive levels of particle damage, are presented for several types of naturally occurring granular materials (several quartz sands, magnesite and calcium carbonate ooids). Implementation of the experimental findings in discrete element models is discussed, and the results of impact simulations involving up to 5 × 10⁵ grains are presented.
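
    The measured normal contact stiffness is typically represented in discrete element codes by a Hertzian law; a sketch of the force and tangent stiffness for two identical elastic spheres, with quartz-like properties used purely for illustration:

        import numpy as np

        def hertz_normal(delta, r1, r2, e, nu):
            """Hertzian normal contact between two spheres of the same material.
            delta: overlap (m); r1, r2: radii (m); e: Young's modulus (Pa).
            F = (4/3) * E_eff * sqrt(R_eff) * delta**1.5, k_n = dF/d(delta)."""
            r_eff = r1 * r2 / (r1 + r2)
            e_eff = e / (2.0 * (1.0 - nu ** 2))  # combined modulus, same material
            force = (4.0 / 3.0) * e_eff * np.sqrt(r_eff) * delta ** 1.5
            k_n = 2.0 * e_eff * np.sqrt(r_eff * delta)  # tangent stiffness
            return force, k_n

        # 0.5 mm quartz-like grains with 1 micron overlap (illustrative values)
        f, k = hertz_normal(1e-6, 2.5e-4, 2.5e-4, 95e9, 0.08)
        print(f"F = {f:.3e} N, k_n = {k:.3e} N/m")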

  5. Complexation and molecular modeling studies of europium(III)-gallic acid-amino acid complexes.

    Science.gov (United States)

    Taha, Mohamed; Khan, Imran; Coutinho, João A P

    2016-04-01

    With many metal-based drugs extensively used today in the treatment of cancer, attention has focused on the development of new coordination compounds with antitumor activity, with europium(III) complexes recently introduced as novel anticancer drugs. The aim of this work is to design new Eu(III) complexes with gallic acid, an antioxidant phenolic compound. Gallic acid was chosen because it shows anticancer activity without harming healthy cells. As an antioxidant, it helps to protect human cells against the oxidative damage implicated in DNA damage, cancer, and accelerated cell aging. In this work, the formation of binary and ternary complexes of Eu(III) with gallic acid as primary ligand and the amino acids alanine, leucine, isoleucine, and tryptophan was studied by glass electrode potentiometry in aqueous solution containing 0.1 M NaNO3 at (298.2 ± 0.1) K. The overall stability constants were evaluated and the concentration distributions of the complex species in solution were calculated. The protonation constants of gallic acid and the amino acids were also determined under the same experimental conditions and compared with those predicted by the conductor-like screening model for realistic solvation (COSMO-RS). The geometries of the Eu(III)-gallic acid complexes were characterized by density functional theory (DFT). UV-visible and photoluminescence measurements confirm the formation of Eu(III)-gallic acid complexes in aqueous solution.
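
    The protonation constants referred to above fix the ligand's speciation as a function of pH; a sketch computing the mole fractions of the four protonation states of a triprotic ligand, with hypothetical pKa values standing in for the fitted constants:

        import numpy as np

        def speciation_fractions(ph, pkas):
            """Mole fractions of H3L, H2L-, HL2-, L3- for a triprotic ligand,
            built from cumulative deprotonation terms 1, Ka1/[H], Ka1*Ka2/[H]^2, ..."""
            h = 10.0 ** (-np.asarray(ph, dtype=float))
            terms = [np.ones_like(h)]
            for pka in pkas:
                terms.append(terms[-1] * 10.0 ** (-pka) / h)
            terms = np.array(terms)
            return terms / terms.sum(axis=0)

        # Hypothetical gallic-acid-like pKa values, for illustration only
        for ph in (3.0, 7.0, 10.0):
            print(ph, np.round(speciation_fractions(ph, [4.4, 8.7, 11.4]).ravel(), 3))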

  6. The 'Model Omitron' proposed experiment

    International Nuclear Information System (INIS)

    Sestero, A.

    1997-05-01

    The Model Omitron is a compact tokamak experiment designed by the Fusion Engineering Unit of ENEA and the CITIF Consortium. Building Model Omitron would allow full testing of Omitron engineering and partial testing of Omitron physics, at about 1/20 of the cost estimated for the larger parent machine. In particular, owing to the unusually large ohmic power densities (up to 100 times the nominal value in the Frascati FTU experiment), the radial energy flux in Model Omitron reaches values comparable to, or higher than, those envisaged for the larger ignition experiments Omitron, Ignitor and ITER. Consequently, conditions are expected to occur at the plasma border in the scrape-off layer of Model Omitron that are representative of the quoted larger experiments. Moreover, since all this will occur under ohmic heating alone, one will hopefully be able to derive an energy transport model for the ohmic heating regime that is valid over a wider range of plasma parameters (in particular, of temperature) than was possible before. Finally, by reducing the plasma current and/or the toroidal field down to, say, 1/3 or 1/4 of the nominal values, additional topics can be tackled in the Model Omitron experiment, such as: large safety-factor configurations (of interest for improving confinement), large aspect-ratio configurations (of interest for the investigation of advanced tokamak concepts), high beta with RF heating (also of interest for advanced tokamak concepts), and long-pulse discharges (of interest for demonstrating stationary conditions in the current profile).

  7. Usage of a statistical method of designing factorial experiments in the mechanical activation of a complex CuPbZn sulphide concentrate

    Directory of Open Access Journals (Sweden)

    Baláž, Peter

    2003-09-01

    Full Text Available Mechanical activation belongs to the innovative procedures that intensify technological processes by creating new surfaces and a defective structure of the solid phase. Mechanical impact on the solid phase is a suitable way to ensure the mobility of its structural elements and to accumulate mechanical energy that is later used in the processes of leaching. The aim of this study was to carry out the mechanical activation of a complex CuPbZn sulphide concentrate (Slovak deposit) in an attritor, using statistical methods for the design of factorial experiments, and to determine the conditions for preparing the optimum mechanically activated sample of the studied concentrate. The following attritor parameters were studied as variables: the sample-to-steel-ball weight ratio (degree of mill filling), the number of revolutions of the milling shaft, and the time of mechanical activation. The influence of the chosen variables on the mechanical activation of the complex CuPbZn concentrate was likewise interpreted using statistical methods of factorial experiment design. The linear model (a 2³ factorial experiment) does not directly support the search for an optimum, so it was extended to a nonlinear model by means of a second-order orthogonal polynomial. This nonlinear model does not adequately describe the formation of new surface area during mechanical activation of the studied concentrate; it would be necessary to extend it to a third-order nonlinear model or to choose another model. With regard to economy, in the sense of minimal energy input, the sample with an energy input of 524 kWh t⁻¹ and the maximum specific surface area of 8.59 m² g⁻¹ (the response of the factorial experiment) was chosen as the optimum mechanically activated sample of the complex CuPbZn sulphide concentrate.
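
    The 2³ factorial machinery described above is compact enough to sketch: build the design matrix in coded units and estimate main effects and interactions by least squares (the response values below are placeholders, not the measured specific surface areas):

        import itertools
        import numpy as np

        # 2^3 full factorial in coded units (-1/+1) for the three attritor factors:
        # x1 = ball-to-sample weight ratio, x2 = shaft speed, x3 = milling time
        runs = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)

        # Placeholder responses (e.g., specific surface area in m2/g), one per run
        y = np.array([2.1, 3.0, 2.6, 3.9, 4.4, 6.2, 5.1, 8.6])

        # Model matrix: intercept, main effects, two- and three-factor interactions
        x1, x2, x3 = runs.T
        X = np.column_stack([np.ones(8), x1, x2, x3,
                             x1 * x2, x1 * x3, x2 * x3, x1 * x2 * x3])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        labels = ["b0", "b1", "b2", "b3", "b12", "b13", "b23", "b123"]
        print(dict(zip(labels, np.round(coef, 3))))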

  8. Meeting the Next Generation Science Standards Through "Rediscovered" Climate Model Experiments

    Science.gov (United States)

    Sohl, L. E.; Chandler, M. A.; Zhou, J.

    2013-12-01

    Since the Educational Global Climate Model (EdGCM) Project made its debut in January 2005, over 150 institutions have employed EdGCM software for a variety of uses ranging from short lab exercises to semester-long and year-long thesis projects. The vast majority of these EdGCM adoptees have been at the undergraduate and graduate levels, with few users at the K-12 level. The K-12 instructors who have worked with EdGCM in professional development settings have commented that, although EdGCM can be used to illustrate a number of the Disciplinary Core Ideas and connects to many of the Common Core State Standards across subjects and grade levels, significant hurdles preclude easy integration of EdGCM into their curricula. Time constraints, a scarcity of curriculum materials, and classroom technology are often mentioned as obstacles in providing experiences to younger grade levels in realistic climate modeling research. Given that the NGSS incorporates student performance expectations relating to Earth System Science, and to climate science and the human dimension in particular, we feel that a streamlined version of EdGCM -- one that eliminates the need to run the climate model on limited computing resources, and provides a more guided climate modeling experience -- would be highly beneficial for the K-12 community. This new tool currently under development, called EzGCM, functions through a browser interface, and presents "rediscovery experiments" that allow students to do their own exploration of model output from published climate experiments, or from sensitivity experiments designed to illustrate how climate models as well as the climate system work. The experiments include background information and sample questions, with more extensive notes for instructors so that the instructors can design their own reflection questions or follow-on activities relating to physical or human impacts, as they choose. An added benefit of the EzGCM tool is that, like EdGCM, it helps

  9. Simulation and Analysis of Complex Biological Processes: an Organisation Modelling Perspective

    NARCIS (Netherlands)

    Bosse, T.; Jonker, C.M.; Treur, J.

    2005-01-01

    This paper explores how the dynamics of complex biological processes can be modelled and simulated as an organisation of multiple agents. This modelling perspective identifies organisational structure occurring in complex decentralised processes and handles the complexity of analysing their dynamics.

  10. Elements of a pragmatic approach for dealing with bias and uncertainty in experiments through predictions: experiment design and data conditioning; "real space" model validation and conditioning; hierarchical modeling and extrapolative prediction.

    Energy Technology Data Exchange (ETDEWEB)

    Romero, Vicente Jose

    2011-11-01

    This report explores some important considerations in devising a practical and consistent framework and methodology for utilizing experiments and experimental data to support modeling and prediction. A pragmatic and versatile 'Real Space' approach is outlined for confronting experimental and modeling bias and uncertainty to mitigate risk in modeling and prediction. The elements of experiment design and data analysis, data conditioning, model conditioning, model validation, hierarchical modeling, and extrapolative prediction under uncertainty are examined. An appreciation can be gained for the constraints and difficulties at play in devising a viable end-to-end methodology. Rationale is given for the various choices underlying the Real Space end-to-end approach. The approach adopts and refines some elements and constructs from the literature and adds pivotal new elements and constructs. Crucially, the approach reflects a pragmatism and versatility derived from working many industrial-scale problems involving complex physics and constitutive models, steady-state and time-varying nonlinear behavior and boundary conditions, and various types of uncertainty in experiments and models. The framework benefits from a broad exposure to integrated experimental and modeling activities in the areas of heat transfer, solid and structural mechanics, irradiated electronics, and combustion in fluids and solids.

  11. The database for reaching experiments and models.

    Directory of Open Access Journals (Sweden)

    Ben Walker

    Full Text Available Reaching is one of the central experimental paradigms in the field of motor control, and many computational models of reaching have been published. While most of these models try to explain subject data (such as movement kinematics, reaching performance, forces, etc.) from only a single experiment, distinct experiments often share experimental conditions and record similar kinematics. This suggests that reaching models could be applied to (and falsified by) multiple experiments. However, using multiple datasets is difficult because experimental data formats vary widely. Standardizing data formats promises to enable scientists to test model predictions against many experiments and to compare experimental results across labs. Here we report on the development of a new resource available to scientists: a database of reaching called the Database for Reaching Experiments And Models (DREAM). DREAM collects both experimental datasets and models and facilitates their comparison by standardizing formats. The DREAM project promises to be useful for experimentalists who want to understand how their data relates to models, for modelers who want to test their theories, and for educators who want to help students better understand reaching experiments, models, and data analysis.

  12. Dynamic complexities in a parasitoid-host-parasitoid ecological model

    International Nuclear Information System (INIS)

    Yu Hengguo; Zhao Min; Lv Songjuan; Zhu Lili

    2009-01-01

    Chaotic dynamics have been observed in a wide range of population models. In this study, the complex dynamics in a discrete-time ecological model of parasitoid-host-parasitoid are presented. The model shows that the superiority coefficient not only stabilizes the dynamics, but may strongly destabilize them as well. Many forms of complex dynamics were observed, including pitchfork bifurcation with quasi-periodicity, period-doubling cascade, chaotic crisis, chaotic bands with narrow or wide periodic window, intermittent chaos, and supertransient behavior. Furthermore, computation of the largest Lyapunov exponent demonstrated the chaotic dynamic behavior of the model
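
    The largest-Lyapunov-exponent computation mentioned at the end is easy to sketch for a one-dimensional discrete map; the logistic map below is a stand-in for the parasitoid-host map, which would need the full Jacobian along the orbit:

        import numpy as np

        def largest_lyapunov(r, n_transient=1000, n_iter=100_000):
            """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x):
            lambda = <ln|f'(x)|> along the orbit; lambda > 0 indicates chaos."""
            x = 0.4
            for _ in range(n_transient):      # discard transient behavior
                x = r * x * (1.0 - x)
            total = 0.0
            for _ in range(n_iter):
                x = r * x * (1.0 - x)
                total += np.log(abs(r * (1.0 - 2.0 * x)))
            return total / n_iter

        for r in (3.2, 3.5, 3.9):             # period-2, period-4, chaotic
            print(r, round(largest_lyapunov(r), 4))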

  14. Reducing Spatial Data Complexity for Classification Models

    International Nuclear Information System (INIS)

    Ruta, Dymitr; Gabrys, Bogdan

    2007-01-01

    Intelligent data analytics is gradually becoming a day-to-day reality of today's businesses. However, despite rapidly increasing storage and computational power, current state-of-the-art predictive models still cannot handle massive and noisy corporate data warehouses. What is more, adaptive and real-time operational environments require multiple models to be frequently retrained, which further hinders their use. Various data reduction techniques, ranging from data sampling up to density retention models, attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. Our response is a proposition of a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. On the example of the Parzen Labelled Data Compressor (PLDC) we demonstrate a simulated data condensation process directly inspired by electrostatic field interaction, where the data are moved and merged following attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data, which acts as a class-sensitive potential field ensuring preservation of the original class density distributions, yet allowing data to rearrange and merge, joining together their soft class partitions. As a result we achieved a model that reduces labelled datasets much further than any competitive approach, yet with maximum retention of the original class densities and hence the classification performance. PLDC leaves the reduced dataset with soft accumulative class weights allowing for efficient online updates and, as shown in a series of experiments, if coupled with the Parzen Density Classifier (PDC) it significantly outperforms competitive data condensation methods in terms of classification performance at the
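
    The Parzen Density Classifier that PLDC is coupled with can be sketched with one kernel density estimate per class and a Bayes decision rule (scikit-learn assumed; the two Gaussian classes are synthetic):

        import numpy as np
        from sklearn.neighbors import KernelDensity

        rng = np.random.default_rng(0)
        x0 = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(200, 2))  # class 0
        x1 = rng.normal(loc=[2.0, 2.0], scale=1.0, size=(200, 2))  # class 1

        # Parzen density classifier: per-class KDE, predict the class whose
        # (equal-prior) density is largest at the query point.
        kde0 = KernelDensity(bandwidth=0.5).fit(x0)
        kde1 = KernelDensity(bandwidth=0.5).fit(x1)

        queries = np.array([[1.0, 1.0], [-1.0, 0.0], [2.5, 2.0]])
        pred = (kde1.score_samples(queries) > kde0.score_samples(queries)).astype(int)
        print(pred)  # predicted class labels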

  16. Modeling a High Explosive Cylinder Experiment

    Science.gov (United States)

    Zocher, Marvin A.

    2017-06-01

    Cylindrical assemblies constructed from high explosives encased in an inert confining material are often used in experiments aimed at calibrating and validating continuum level models for the so-called equation of state (the constitutive model for the spherical part of the Cauchy stress tensor). Such is the case in the work to be discussed here. In particular, work will be described involving the modeling of a series of experiments involving PBX-9501 encased in a copper cylinder. The objective of the work is to test and perhaps refine a set of phenomenological parameters for the Wescott-Stewart-Davis reactive burn model. The focus of this talk will be on modeling the experiments, which turned out to be non-trivial. The modeling is conducted using ALE methodology.
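
    Calibrations of this kind revolve around a products equation of state; the JWL form is the textbook example and is sketched below with clearly hypothetical constants (the work itself uses the Wescott-Stewart-Davis model with its own calibrated parameter set):

        import math

        def jwl_pressure(v, e, a=8.5e11, b=1.8e10, r1=4.6, r2=1.3, omega=0.38):
            """JWL products EOS:
            p = A*(1 - w/(R1*v))*exp(-R1*v) + B*(1 - w/(R2*v))*exp(-R2*v) + w*e/v,
            with v the relative volume and e the energy per unit initial volume (Pa).
            All constants here are hypothetical placeholders, not a PBX-9501 set."""
            return (a * (1.0 - omega / (r1 * v)) * math.exp(-r1 * v)
                    + b * (1.0 - omega / (r2 * v)) * math.exp(-r2 * v)
                    + omega * e / v)

        print(f"p(v=1.0, e=1e10) = {jwl_pressure(1.0, 1e10):.3e} Pa")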

  17. Looping and clustering model for the organization of protein-DNA complexes on the bacterial genome

    Science.gov (United States)

    Walter, Jean-Charles; Walliser, Nils-Ole; David, Gabriel; Dorignac, Jérôme; Geniet, Frédéric; Palmeri, John; Parmeggiani, Andrea; Wingreen, Ned S.; Broedersz, Chase P.

    2018-03-01

    The bacterial genome is organized by a variety of associated proteins inside a structure called the nucleoid. These proteins can form complexes on DNA that play a central role in various biological processes, including chromosome segregation. A prominent example is the large ParB-DNA complex, which forms an essential component of the segregation machinery in many bacteria. ChIP-Seq experiments show that ParB proteins localize around centromere-like parS sites on the DNA to which ParB binds specifically, and spreads from there over large sections of the chromosome. Recent theoretical and experimental studies suggest that DNA-bound ParB proteins can interact with each other to condense into a coherent 3D complex on the DNA. However, the structural organization of this protein-DNA complex remains unclear, and a predictive quantitative theory for the distribution of ParB proteins on DNA is lacking. Here, we propose the looping and clustering model, which employs a statistical physics approach to describe protein-DNA complexes. The looping and clustering model accounts for the extrusion of DNA loops from a cluster of interacting DNA-bound proteins that is organized around a single high-affinity binding site. Conceptually, the structure of the protein-DNA complex is determined by a competition between attractive protein interactions and loop closure entropy of this protein-DNA cluster on the one hand, and the positional entropy for placing loops within the cluster on the other. Indeed, we show that the protein interaction strength determines the ‘tightness’ of the loopy protein-DNA complex. Thus, our model provides a theoretical framework for quantitatively computing the binding profiles of ParB-like proteins around a cognate (parS) binding site.

  18. Design and implementation of new methods of constructing designs of experiments for the learning of nonlinear models

    Energy Technology Data Exchange (ETDEWEB)

    Gazut, St

    2007-03-15

    This thesis addresses the problem of constructing surrogate models in numerical simulation. Whenever numerical experiments are costly and the simulation model is complex and difficult to use, it is important to select the numerical experiments as efficiently as possible in order to minimize their number. In statistics, the selection of experiments is known as optimal experimental design. In the context of numerical simulation, where no measurement uncertainty is present, we describe an alternative approach based on statistical learning theory and resampling techniques. The surrogate models are constructed using neural networks, and the generalization error is estimated by leave-one-out, cross-validation and bootstrap. It is shown that the bootstrap can control over-fitting and extends the concept of leverage to surrogate models that are nonlinear in their parameters. The thesis describes an iterative method called LDR (Learner Disagreement from experiment Re-sampling), based on active learning using several surrogate models constructed on bootstrap samples. The method consists in adding new experiments where the predictors constructed from bootstrap samples disagree most. We compare the LDR method with other methods of experimental design such as D-optimal selection. (author)
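
    A compact sketch of the LDR idea: train a committee of surrogates on bootstrap resamples and add the candidate experiment where their predictions disagree most (scikit-learn MLPs stand in for the thesis's neural networks; the one-dimensional "simulator" is a toy):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        simulator = lambda x: np.sin(3 * x) + 0.5 * x     # toy costly experiment

        x_train = rng.uniform(-2, 2, size=8)
        y_train = simulator(x_train)
        candidates = np.linspace(-2, 2, 200)

        for it in range(5):                               # active-learning loop
            preds = []
            for _ in range(10):                           # bootstrap committee
                idx = rng.integers(0, x_train.size, x_train.size)
                model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                                     random_state=int(rng.integers(1_000_000)))
                model.fit(x_train[idx, None], y_train[idx])
                preds.append(model.predict(candidates[:, None]))
            disagreement = np.var(preds, axis=0)
            x_new = candidates[np.argmax(disagreement)]   # largest disagreement
            x_train = np.append(x_train, x_new)
            y_train = np.append(y_train, simulator(x_new))
            print(f"iteration {it}: added x = {x_new:.3f}")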

  19. In situ SAXS experiment during DNA and liposome complexation

    Energy Technology Data Exchange (ETDEWEB)

    Gasperini, A.A.; Cavalcanti, L.P. [Laboratorio Nacional de Luz Sincrotron (LNLS), Campinas, SP (Brazil); Balbino, T.A.; Torre, L.G. de la [Universidade Estadual de Campinas (UNICAMP), SP (Brazil); Oliveira, C.L.P. [Universidade de Sao Paulo (USP), Sao Paulo, SP (Brazil)

    2012-07-01

    Full text: Gene therapy is an exciting research area that allows the treatment of different diseases. Basically, an engineered DNA that codes for a protein is the therapeutic drug that has to be delivered to the cell nucleus. After that, the DNA transfection process allows protein production using the cell machinery. However, efficient delivery requires protecting the DNA against nucleases and interstitial fluids. In this context, the use of cationic liposome/DNA complexes is a promising strategy for non-viral gene therapy. Liposomes are lipid systems that self-assemble into bilayers, and the use of cationic lipids allows electrostatic complexation with DNA. In this work, we used the SAXS technique to study the complexation kinetics between cationic liposomes and plasmid DNA and to evaluate the structural modifications of the liposomes in the presence of DNA. Liposomes were prepared according to [1], using as plasmid DNA vector model a modified version of pVAX1-GFP with luciferase as reporter gene [2]. The complexation was promoted in a SAXS sample holder containing a microchannel giving access to the compartment between two mica windows, through which the X-ray beam passes [3]. We obtained in situ complexation using this sample holder coupled to a fed-batch reactor through a peristaltic pump. The scattering curves were recorded every 30 seconds during the cycles. The DNA was added up to a previously determined final ratio between surface charges. We used a form and structure factor model for the liposome bilayer to fit the scattering curves [4]. Structural information such as the bilayer electronic density profiles, the number of bilayers and the fluidity was determined as a function of complexation with DNA. These differences can be reflected in distinct in vitro and in vivo effects. [1] L. G. de la Torre et al. Colloids and Surfaces B: Biointerfaces, 73, 175 (2009) [2] A. R. Azzoni et al. The Journal of Gene Medicine, 9, 392 (2007) [3] L. P. Cavalcanti et al. Review of

  20. What do we gain from simplicity versus complexity in species distribution models?

    Science.gov (United States)

    Merow, Cory; Smith, Matthew J.; Edwards, Thomas C.; Guisan, Antoine; McMahon, Sean M.; Normand, Signe; Thuiller, Wilfried; Wuest, Rafael O.; Zimmermann, Niklaus E.; Elith, Jane

    2014-01-01

    Species distribution models (SDMs) are widely used to explain and predict species ranges and environmental niches. They are most commonly constructed by inferring species' occurrence–environment relationships using statistical and machine-learning methods. The variety of methods that can be used to construct SDMs (e.g. generalized linear/additive models, tree-based models, maximum entropy, etc.), and the variety of ways that such models can be implemented, permits substantial flexibility in SDM complexity. Building models with an appropriate amount of complexity for the study objectives is critical for robust inference. We characterize complexity as the shape of the inferred occurrence–environment relationships and the number of parameters used to describe them, and search for insights into whether additional complexity is informative or superfluous. By building ‘under fit’ models, having insufficient flexibility to describe observed occurrence–environment relationships, we risk misunderstanding the factors shaping species distributions. By building ‘over fit’ models, with excessive flexibility, we risk inadvertently ascribing pattern to noise or building opaque models. However, model selection can be challenging, especially when comparing models constructed under different modeling approaches. Here we argue for a more pragmatic approach: researchers should constrain the complexity of their models based on study objective, attributes of the data, and an understanding of how these interact with the underlying biological processes. We discuss guidelines for balancing under fitting with over fitting and consequently how complexity affects decisions made during model building. Although some generalities are possible, our discussion reflects differences in opinions that favor simpler versus more complex models. We conclude that combining insights from both simple and complex SDM building approaches best advances our knowledge of current and future species
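
    The simplicity-versus-complexity tradeoff can be made concrete by fitting the same presence/absence data with models of increasing flexibility and comparing cross-validated discrimination (scikit-learn; purely synthetic data, with a linear GLM standing in for 'simple' and a random forest for 'complex'):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        env = rng.normal(size=(500, 3))                  # environmental covariates
        # True occurrence responds non-linearly to the first covariate
        p = 1.0 / (1.0 + np.exp(-(1.5 * env[:, 0] - env[:, 0] ** 2 + 0.5)))
        occ = (rng.random(500) < p).astype(int)          # presence/absence

        models = [("GLM, linear terms only", LogisticRegression()),
                  ("random forest", RandomForestClassifier(n_estimators=200,
                                                           random_state=0))]
        for name, model in models:
            auc = cross_val_score(model, env, occ, cv=5, scoring="roc_auc")
            print(f"{name}: AUC = {auc.mean():.3f} +/- {auc.std():.3f}")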

  1. Algebraic computability and enumeration models recursion theory and descriptive complexity

    CERN Document Server

    Nourani, Cyrus F

    2016-01-01

    This book, Algebraic Computability and Enumeration Models: Recursion Theory and Descriptive Complexity, presents new techniques with functorial models to address important areas on pure mathematics and computability theory from the algebraic viewpoint. The reader is first introduced to categories and functorial models, with Kleene algebra examples for languages. Functorial models for Peano arithmetic are described toward important computational complexity areas on a Hilbert program, leading to computability with initial models. Infinite language categories are also introduced to explain descriptive complexity with recursive computability with admissible sets and urelements. Algebraic and categorical realizability is staged on several levels, addressing new computability questions with omitting types realizably. Further applications to computing with ultrafilters on sets and Turing degree computability are examined. Functorial models computability is presented with algebraic trees realizing intuitionistic type...

  2. Modeling complex work systems - method meets reality

    NARCIS (Netherlands)

    van der Veer, Gerrit C.; Hoeve, Machteld; Lenting, Bert

    1996-01-01

    Modeling an existing task situation is often a first phase in the (re)design of information systems. For complex systems design, this model should consider both the people and the organization involved, the work, and situational aspects. Groupware Task Analysis (GTA) as part of a method for the

  3. Intrinsic Uncertainties in Modeling Complex Systems.

    Energy Technology Data Exchange (ETDEWEB)

    Cooper, Curtis S; Bramson, Aaron L.; Ames, Arlo L.

    2014-09-01

    Models are built to understand and predict the behaviors of both natural and artificial systems. Because it is always necessary to abstract away aspects of any non-trivial system being modeled, we know models can potentially leave out important, even critical elements. This reality of the modeling enterprise forces us to consider the prospective impacts of those effects completely left out of a model - either intentionally or unconsidered. Insensitivity to new structure is an indication of diminishing returns. In this work, we represent a hypothetical unknown effect on a validated model as a finite perturbation whose amplitude is constrained within a control region. We find robustly that without further constraints, no meaningful bounds can be placed on the amplitude of a perturbation outside of the control region. Thus, forecasting into unsampled regions is a very risky proposition. We also present inherent difficulties with proper time discretization of models and representing inherently discrete quantities. We point out potentially worrisome uncertainties, arising from mathematical formulation alone, which modelers can inadvertently introduce into models of complex systems. Acknowledgements This work has been funded under early-career LDRD project #170979, entitled "Quantifying Confidence in Complex Systems Models Having Structural Uncertainties", which ran from 04/2013 to 09/2014. We wish to express our gratitude to the many researchers at Sandia who contributed ideas to this work, as well as feedback on the manuscript. In particular, we would like to mention George Barr, Alexander Outkin, Walt Beyeler, Eric Vugrin, and Laura Swiler for providing invaluable advice and guidance through the course of the project. We would also like to thank Steven Kleban, Amanda Gonzales, Trevor Manzanares, and Sarah Burwell for their assistance in managing project tasks and resources.

  4. Fatigue modeling of materials with complex microstructures

    DEFF Research Database (Denmark)

    Qing, Hai; Mishnaevsky, Leon

    2011-01-01

    with the phenomenological model of fatigue damage growth. As a result, the fatigue lifetime of materials with complex structures can be determined as a function of the parameters of their structures. As an example, the fatigue lifetimes of wood modeled as a cellular material with multilayered, fiber reinforced walls were...

  5. One-year experience of a regional service model of teleconsultation for planning and treatment of complex thoracoabdominal aortic disease.

    Science.gov (United States)

    Chisci, Emiliano; de Donato, Gianmarco; Fargion, Aaron; Ventoruzzo, Giorgio; Parlani, Gianbattista; Setacci, Carlo; Ercolini, Leonardo; Michelagnoli, Stefano

    2018-03-01

    The objective of this study was to report the methodology and 1-year experience of a regional service model of teleconsultation for planning and treatment of complex thoracoabdominal aortic disease (TAAD). Complex TAADs without a feasible conventional surgical repair were prospectively evaluated by vascular surgeons of the same public health service (National Health System) located in a large region of 22,994 km² with 3.7 million inhabitants and 11 tertiary hospitals. Surgeons evaluated computed tomography scans and clinical details that were placed on a web platform (Google Drive; Google, Mountain View, Calif) and shared by all surgeons. Patients gave informed consent for the teleconsultation. The surgeon who submits a case discusses it in detail and proposes a possible therapeutic strategy. The other surgeons suggest other solutions and options in terms of grafts, techniques, or access to be used. Computed tomography angiography, angiography, and clinical outcomes of cases are then presented at the following telemeetings, and a final agreement on the operative strategy is evaluated. Teleconsultation is performed every month using a web conference service (WebConference.com; Avaya Inc, Basking Ridge, NJ). An inter-rater agreement statistic was calculated, and the κ value was interpreted according to Altman's criteria for computed tomography angiography measurements. The rate of participation was constant (mean number of surgeons, 11; range, 9-15). Twenty-four complex TAAD cases were discussed for planning and operation during the study period. The interobserver reliability recorded was moderate (κ = 0.41-0.60) to good (κ = 0.61-0.80) for measurements of proximal and distal sealing, and very good (κ = 0.81-1) for detection of any target vessel angulation >60 degrees, significant (circumferential) calcification, and thrombus presence (>50%). The concordance for planning and therapeutic strategy among all participants was complete in 16 cases. In
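    For readers unfamiliar with the agreement statistic used above, a minimal sketch of Cohen's κ for two raters follows; the labels and values are hypothetical, not the study's data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (p_observed - p_expected) / (1 - p_expected),
    where p_expected is the chance agreement implied by each rater's
    marginal label frequencies."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_exp = sum(freq_a[label] * freq_b[label] for label in freq_a) / n ** 2
    return (p_obs - p_exp) / (1.0 - p_exp)

# Hypothetical gradings of 8 CT findings by two surgeons
a = ["ok", "ok", "calcified", "ok", "thrombus", "ok", "calcified", "ok"]
b = ["ok", "calcified", "calcified", "ok", "thrombus", "ok", "ok", "ok"]
print(round(cohens_kappa(a, b), 2))  # ~0.53: moderate on Altman's scale
```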

  6. Stability of Rotor Systems: A Complex Modelling Approach

    DEFF Research Database (Denmark)

    Kliem, Wolfhard; Pommer, Christian; Stoustrup, Jakob

    1996-01-01

    A large class of rotor systems can be modelled by a complex matrix differential equation of second order. The angular velocity of the rotor plays the role of a parameter. We apply the Lyapunov matrix equation in a complex setting and prove two new stability results which are compared...
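    The record gives the model class but not its form. One conventional complex-valued second-order formulation for rotor dynamics reads as follows; the matrix split and signs vary between authors, so this is an illustrative assumption rather than the paper's equation.

```latex
% z(t) \in \mathbb{C}^n : complex rotor coordinates (two transverse
% directions combined as x + iy); \omega : angular velocity (the parameter).
% D: damping, G: gyroscopic, K: stiffness, N: circulatory matrices.
M\ddot{z} + (D + \mathrm{i}\,\omega G)\,\dot{z} + (K + \mathrm{i}\,\omega N)\,z = 0
```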

  7. Complex Systems and Self-organization Modelling

    CERN Document Server

    Bertelle, Cyrille; Kadri-Dahmani, Hakima

    2009-01-01

    The concern of this book is the use of emergent computing and self-organization modelling within various applications of complex systems. The authors focus their attention both on the innovative concepts and implementations in order to model self-organizations, but also on the relevant applicative domains in which they can be used efficiently. This book is the outcome of a workshop meeting within ESM 2006 (Eurosis), held in Toulouse, France in October 2006.

  8. Experience economy meets business model design

    DEFF Research Database (Denmark)

    Gudiksen, Sune Klok; Smed, Søren Graakjær; Poulsen, Søren Bolvig

    2012-01-01

    Through the last decade the experience economy has found solid ground and manifested itself as a parameter where businesses and organizations can differentiate themselves from competitors. The fundamental premise is the one found in Pine & Gilmore's model from 1999 of 'the progression of economic value', where...... produced, designed or staged experience that gains the most profit or creates return on investment. It becomes more obvious that other parameters in the future can be a vital part of the experience economy and one of these is business model innovation. Business model innovation is about continuous...

  9. Understanding complex urban systems integrating multidisciplinary data in urban models

    CERN Document Server

    Gebetsroither-Geringer, Ernst; Atun, Funda; Werner, Liss

    2016-01-01

    This book is devoted to the modeling and understanding of complex urban systems. This second volume of Understanding Complex Urban Systems focuses on the challenges of the modeling tools, concerning, e.g., the quality and quantity of data and the selection of an appropriate modeling approach. It is meant to support urban decision-makers—including municipal politicians, spatial planners, and citizen groups—in choosing an appropriate modeling approach for their particular modeling requirements. The contributors to this volume are from different disciplines, but all share the same goal: optimizing the representation of complex urban systems. They present and discuss a variety of approaches for dealing with data-availability problems and finding appropriate modeling approaches—and not only in terms of computer modeling. The selection of articles featured in this volume reflects a broad variety of new and established modeling approaches such as: - An argument for using Big Data methods in conjunction with Age...

  10. Minimum-complexity helicopter simulation math model

    Science.gov (United States)

    Heffley, Robert K.; Mnich, Marc A.

    1988-01-01

    An example of a minimal-complexity helicopter simulation math model is presented. Motivating factors are the computational delays, cost, and inflexibility of the very sophisticated math models now in common use. A helicopter model form is given which addresses each of these factors and provides better engineering understanding of the specific handling qualities features which are apparent to the simulator pilot. The technical approach begins with specification of the features which are to be modeled, followed by a build-up of the individual vehicle components and definition of the equations. Model matching and estimation procedures are given which enable the modeling of specific helicopters from basic data sources such as flight manuals. Checkout procedures are given which provide for total model validation. A number of possible model extensions and refinements are discussed. Math model computer programs are defined and listed.

  11. Vibration behavior of PWR reactor internals Model experiments and analysis

    International Nuclear Information System (INIS)

    Assedo, R.; Dubourg, M.; Livolant, M.; Epstein, A.

    1975-01-01

    In late 1971, the CEA and FRAMATOME decided to undertake a comprehensive joint program to study the vibration behavior of the PWR internals of the 900 MWe, 50 cycle, 3 loop reactor series being built by FRAMATOME in France. The PWR reactor internals are subjected to several sources of excitation during normal operation. Two main sources of excitation may affect the internals' behavior: the large flow turbulences, which can generate various instabilities such as vortex shedding, and the pump pressure fluctuations, which can generate acoustic noise in the circuit at frequencies corresponding to shaft speed frequencies or blade passing frequencies, and their respective harmonics. The flow-induced vibrations are of a complex nature, and the approach selected for this comprehensive program is semi-empirical, based on both theoretical analysis and experiments on a reduced scale model and full scale internals. The experimental support of this program consists of: the SAFRAN test loop, an hydroelastic similitude of a 1/8 scale model of a PWR; harmonic vibration tests in air performed on full scale reactor internals in the manufacturing shop; the GENNEVILLIERS facility, a full flow test facility for the primary pump; and the measurements carried out during start-up of the Tihange reactor. This program will be completed in April 1975. The results of this program, the originality of which consists of studying separately the effects of random excitations and acoustic noises on the internals' behavior and of establishing a comparison between experiments and analysis, will bring a major contribution to explaining the complex vibration phenomena occurring in a PWR

  12. Elastic Network Model of a Nuclear Transport Complex

    Science.gov (United States)

    Ryan, Patrick; Liu, Wing K.; Lee, Dockjin; Seo, Sangjae; Kim, Young-Jin; Kim, Moon K.

    2010-05-01

    The structure of Kap95p was obtained from the Protein Data Bank (www.pdb.org) and analyzed. RanGTP plays an important role in both nuclear protein import and export cycles. In the nucleus, RanGTP releases macromolecular cargoes from importins and conversely facilitates cargo binding to exportins. Although the crystal structure of the nuclear import complex formed by importin Kap95p and RanGTP was recently identified, its molecular mechanism still remains unclear. To understand the relationship between structure and function of a nuclear transport complex, a structure-based mechanical model of the Kap95p:RanGTP complex is introduced. In this model, a protein structure is simply modeled as an elastic network in which a set of coarse-grained point masses are connected by linear springs representing biochemical interactions at the atomic level. Harmonic normal mode analysis (NMA) and anharmonic elastic network interpolation (ENI) are performed to predict the modes of vibrations and a feasible pathway between locked and unlocked conformations of Kap95p, respectively. Simulation results imply that the binding of RanGTP to Kap95p induces the release of the cargo in the nucleus as well as prevents any new cargo from attaching to the Kap95p:RanGTP complex.
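    As a sketch of the elastic-network construction described above (point masses joined by identical linear springs within a cutoff), the anisotropic-network Hessian and its normal modes can be computed as below; the coordinates, cutoff and spring constant are placeholders, and a real analysis would start from the Kap95p:RanGTP coordinates in the PDB.

```python
import numpy as np

def anm_modes(coords, cutoff=15.0, gamma=1.0):
    """Anisotropic network model: C-alpha coordinates (N x 3) become point
    masses joined by springs of stiffness gamma within `cutoff` Angstrom.
    Returns eigenvalues/eigenvectors of the 3N x 3N Hessian; the six
    near-zero eigenvalues correspond to rigid-body motions."""
    n = len(coords)
    hessian = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(i + 1, n):
            d = coords[j] - coords[i]
            r2 = d @ d
            if r2 > cutoff ** 2:
                continue
            block = -gamma * np.outer(d, d) / r2     # off-diagonal super-element
            hessian[3*i:3*i+3, 3*j:3*j+3] = block
            hessian[3*j:3*j+3, 3*i:3*i+3] = block
            hessian[3*i:3*i+3, 3*i:3*i+3] -= block   # diagonal accumulates -sum
            hessian[3*j:3*j+3, 3*j:3*j+3] -= block
    return np.linalg.eigh(hessian)

# Usage with placeholder coordinates (a real run would parse the PDB file):
rng = np.random.default_rng(0)
eigenvalues, modes = anm_modes(rng.uniform(0.0, 60.0, size=(100, 3)))
```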

  13. In Situ Experiment and Numerical Model Validation of a Borehole Heat Exchanger in Shallow Hard Crystalline Rock

    Directory of Open Access Journals (Sweden)

    Mateusz Janiszewski

    2018-04-01

    Full Text Available Accurate and fast numerical modelling of the borehole heat exchanger (BHE is required for simulation of long-term thermal energy storage in rocks using boreholes. The goal of this study was to conduct an in situ experiment to validate the proposed numerical modelling approach. In the experiment, hot water was circulated for 21 days through a single U-tube BHE installed in an underground research tunnel located at a shallow depth in crystalline rock. The results of the simulations using the proposed model were validated against the measurements. The numerical model simulated the BHE’s behaviour accurately and compared well with two other modelling approaches from the literature. The model is capable of replicating the complex geometrical arrangement of the BHE and is considered to be more appropriate for simulations of BHE systems with complex geometries. The results of the sensitivity analysis of the proposed model have shown that low thermal conductivity, high density, and high heat capacity of rock are essential for maximising the storage efficiency of a borehole thermal energy storage system. Other characteristics of BHEs, such as a high thermal conductivity of the grout, a large radius of the pipe, and a large distance between the pipes, are also preferred for maximising efficiency.

  14. Sparse Estimation Using Bayesian Hierarchical Prior Modeling for Real and Complex Linear Models

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand; Manchón, Carles Navarro; Badiu, Mihai Alin

    2015-01-01

    In sparse Bayesian learning (SBL), Gaussian scale mixtures (GSMs) have been used to model sparsity-inducing priors that realize a class of concave penalty functions for the regression task in real-valued signal models. Motivated by the relative scarcity of formal tools for SBL in complex-valued m...... error, and robustness in low and medium signal-to-noise ratio regimes....

  15. Complex Networks in Psychological Models

    Science.gov (United States)

    Wedemann, R. S.; Carvalho, L. S. A. V. D.; Donangelo, R.

    We develop schematic, self-organizing, neural-network models to describe mechanisms associated with mental processes, by a neurocomputational substrate. These models are examples of real world complex networks with interesting general topological structures. Considering dopaminergic signal-to-noise neuronal modulation in the central nervous system, we propose neural network models to explain development of cortical map structure and dynamics of memory access, and unify different mental processes into a single neurocomputational substrate. Based on our neural network models, neurotic behavior may be understood as an associative memory process in the brain, and the linguistic, symbolic associative process involved in psychoanalytic working-through can be mapped onto a corresponding process of reconfiguration of the neural network. The models are illustrated through computer simulations, where we varied dopaminergic modulation and observed the self-organizing emergent patterns at the resulting semantic map, interpreting them as different manifestations of mental functioning, from psychotic through to normal and neurotic behavior, and creativity.

  16. Complex scaling in the cluster model

    International Nuclear Information System (INIS)

    Kruppa, A.T.; Lovas, R.G.; Gyarmati, B.

    1987-01-01

    To find the positions and widths of resonances, a complex scaling of the intercluster relative coordinate is introduced into the resonating-group model. In the generator-coordinate technique used to solve the resonating-group equation the complex scaling requires minor changes in the formulae and code. The finding of the resonances does not need any preliminary guess or explicit reference to any asymptotic prescription. The procedure is applied to the resonances in the relative motion of two ground-state α clusters in ⁸Be, but is appropriate for any system consisting of two clusters. (author) 23 refs.; 5 figs

  17. Higher genus correlators for the complex matrix model

    International Nuclear Information System (INIS)

    Ambjorn, J.; Kristhansen, C.F.; Makeenko, Y.M.

    1992-01-01

    In this paper, the authors describe an iterative scheme which allows us to calculate any multi-loop correlator for the complex matrix model to any genus using only the first in the chain of loop equations. The method works for a completely general potential and the results contain no explicit reference to the couplings. The genus g contribution to the m-loop correlator depends on a finite number of parameters, namely at most 4g - 2 + m. The authors find the generating functional explicitly up to genus three. The authors show as well that the model is equivalent to an external field problem for the complex matrix model with a logarithmic potential

  18. The unique field experiments on the assessment of accident consequences at industrial enterprises of gas-chemical complexes

    International Nuclear Information System (INIS)

    Belov, N.S.; Trebin, I.S.; Sorokovikova, O.

    1998-01-01

    Sour natural gas fields are a unique raw material base for setting up such large enterprises as gas-chemical complexes. The presence of highly toxic H₂S in natural gas widens the range of dangerous and harmful factors for the biosphere. Emission of such gases into the atmosphere during accidents at gas wells and gas pipelines is especially dangerous for the environment and, above all, for people. Development of mathematical forecast models for the assessment of accident progression and consequences is one of the main elements of work on safety analysis and risk assessment. The critical step in the development of such models is their validation against experimental material. Full-scale experiments have been conducted by the All-Union Scientific Research Institute of Natural Gases and Gas Technology (VNIIGAZ) to establish the sizes of hazard zones in case of severe accidents involving gas pipelines. The source of the emergency gas release was a working gas pipeline of 100 mm diameter and 110 km length. This pipeline was used for transportation of natural gas with a significant amount of hydrogen sulphide. During these experiments significant quantities of gas including H₂S were released into the atmosphere, and the concentrations of gas and H₂S were then measured in the accident region. The results of these experiments are used for validation of atmospheric dispersion models, including a new Lagrangian stochastic trace model that takes into account a wide range of meteorological factors. This model was developed as part of a computer system for decision-making support in case of accidental releases of toxic gases into the atmosphere at enterprises of the Russian gas industry. (authors)

  19. Complex Dynamics in Nonequilibrium Economics and Chemistry

    Science.gov (United States)

    Wen, Kehong

    Complex dynamics provides a new approach to dealing with economic complexity. We study interactively the empirical and theoretical aspects of business cycles. The way of exploring complexity is similar to that in the study of an oscillatory chemical system (the BZ system), a model system for complex behavior. We contribute by qualitatively simulating the complex periodic patterns observed in controlled BZ experiments, narrowing the gap between modeling and experiment. The gap between theory and reality is much wider in economics, which involves studies of human expectations and decisions, the essential difference from the natural sciences. Our empirical and theoretical studies make substantial progress in closing this gap. With help from new developments in nonequilibrium physics, i.e., the complex spectral theory, we advance our technique for detecting characteristic time scales from empirical economic data. We obtain correlation resonances, which give oscillating modes with decays for correlation decomposition, from different time series including the S&P 500, M2, crude oil spot prices, and GNP. The time scales found are strikingly compatible with business experience and other studies of business cycles. They reveal the non-Markovian nature of coherent markets. The resonances enhance the evidence of economic chaos obtained by using other tests. The evolving multi-humped distributions produced by the moving-time-window technique reveal the nonequilibrium nature of economic behavior. They reproduce the American economic history of booms and busts. The studies seem to provide a way out of the debate on chaos versus noise and to unify the cyclical and stochastic approaches to explaining business fluctuations. Based on these findings and a new expectation formulation, we construct a business cycle model which gives patterns qualitatively compatible with those found empirically. The soft-bouncing oscillator model provides a better alternative than the harmonic oscillator

  20. Simulation - modeling - experiment; Simulation - modelisation - experience

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-07-01

    After two workshops held in 2001 on the same topics, and in order to take stock of the advances in the domains of simulation and measurement, the main goals proposed for this workshop are: the presentation of the state-of-the-art of tools, methods and experiments in the domains of interest of the Gedepeon research group, and the exchange of information about the possibilities of use of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 of the 19 presentations (slides) given at this workshop, dealing with: the deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); advancement status of the MCNP/TRIO-U neutronic/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); perturbation methods and sensitivity analysis of nuclear data in reactor physics, with application to the VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte-Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high energy transport codes: status and perspective (Leray S.); other ways of investigation for spallation (Audoin L.); neutrons and light particles production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Le Colley F

  1. Assessing Complexity in Learning Outcomes--A Comparison between the SOLO Taxonomy and the Model of Hierarchical Complexity

    Science.gov (United States)

    Stålne, Kristian; Kjellström, Sofia; Utriainen, Jukka

    2016-01-01

    An important aspect of higher education is to educate students who can manage complex relationships and solve complex problems. Teachers need to be able to evaluate course content with regard to complexity, as well as evaluate students' ability to assimilate complex content and express it in the form of a learning outcome. One model for evaluating…

  2. Spectroscopic studies of molybdenum complexes as models for nitrogenase

    International Nuclear Information System (INIS)

    Walker, T.P.

    1981-05-01

    Because biological nitrogen fixation requires Mo, there is an interest in inorganic Mo complexes which mimic the reactions of nitrogen-fixing enzymes. Two such complexes are the dimer Mo₂O₄(cysteine)₂²⁻ and trans-Mo(N₂)₂(dppe)₂ (dppe = 1,2-bis(diphenylphosphino)ethane). The ¹H and ¹³C NMR of solutions of Mo₂O₄(cys)₂²⁻ are described. It is shown that in aqueous solution the cysteine ligands assume at least three distinct configurations. A step-wise dissociation of the cysteine ligand is proposed to explain the data. The Extended X-ray Absorption Fine Structure (EXAFS) of trans-Mo(N₂)₂(dppe)₂ is described and compared to the EXAFS of MoH₄(dppe)₂. The spectra are fitted to amplitude and phase parameters developed at Bell Laboratories. On the basis of this analysis, one can determine (1) that the dinitrogen complex contains nitrogen and the hydride complex does not and (2) the correct Mo-N distance. This is significant because the Mo in both complexes is coordinated by four P atoms which dominate the EXAFS. A similar sort of interference is present in nitrogenase due to S coordination of the Mo in the enzyme. This model experiment indicates that, given adequate signal-to-noise ratios, the presence or absence of dinitrogen coordination to Mo in the enzyme may be determined by EXAFS using existing data analysis techniques. A new reaction between Mo₂O₄(cys)₂²⁻ and acetylene is described to the extent it is presently understood. A strong EPR signal is observed, suggesting the production of stable Mo(V) monomers. EXAFS studies support this suggestion. The Mo K-edge is described. The edge data suggest Mo(VI) is also produced in the reaction. Ultraviolet spectra suggest that cysteine is released in the course of the reaction

  3. Quantum interference experiments with complex organic molecules

    International Nuclear Information System (INIS)

    Eibenberger, S. I.

    2015-01-01

    Matter-wave interference with complex particles is a thriving field in experimental quantum physics. The quest for testing the quantum superposition principle with highly complex molecules has motivated the development of the Kapitza-Dirac-Talbot-Lau interferometer (KDTLI). This interferometer has enabled quantum interference with large organic molecules in an unprecedented mass regime. In this doctoral thesis I describe quantum superposition experiments which we were able to successfully realize with molecules of masses beyond 10 000 amu and consisting of more than 800 atoms. The typical de Broglie wavelengths of all particles in this thesis are in the order of 0.3-5 pm. This is significantly smaller than any molecular extension (nanometers) or the delocalization length in our interferometer (hundreds of nanometers). Many vibrational and rotational states are populated since the molecules are thermally highly excited (300-1000 K). And yet, high-contrast quantum interference patterns could be observed. The visibility and position of these matter-wave interference patterns is highly sensitive to external perturbations. This sensitivity has opened the path to extensive studies of the influence of internal molecular properties on the coherence of their associated matter waves. In addition, it enables a new approach to quantum-assisted metrology. Quantum interference imprints a high-contrast nano-structured density pattern onto the molecular beam which allows us to resolve tiny shifts and dephasing of the molecular beam. I describe how KDTL interferometry can be used to investigate a number of different molecular properties. We have studied vibrationally-induced conformational changes of floppy molecules and permanent electric dipole moments using matter-wave deflectometry in an external electric field. We have developed a new method for optical absorption spectroscopy which uses the recoil of the molecules upon absorption of individual photons. This allows us to
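    The quoted picometre wavelengths follow directly from the de Broglie relation; the speed inserted below is an illustrative assumption in the range typical of such molecular beams.

```latex
\lambda_{\mathrm{dB}} = \frac{h}{mv}
\;\approx\;
\frac{6.63\times10^{-34}\ \mathrm{J\,s}}
     {(10^{4}\ \mathrm{amu}\times 1.66\times10^{-27}\ \mathrm{kg/amu})
      \,(200\ \mathrm{m/s})}
\;\approx\; 0.2\ \mathrm{pm}
```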

  4. Mathematical Models to Determine Stable Behavior of Complex Systems

    Science.gov (United States)

    Sumin, V. I.; Dushkin, A. V.; Smolentseva, T. E.

    2018-05-01

    The paper analyzes the possibility of predicting the functioning of a complex dynamic system with a significant amount of circulating information and a large number of random factors impacting its functioning. The functioning of the complex dynamic system is described as a chaotic state, self-organized criticality and bifurcation. This problem may be resolved by modeling such systems as dynamic ones, without applying stochastic models, and taking strange attractors into account.

  5. Facing urban complexity : towards cognitive modelling. Part 1. Modelling as a cognitive mediator

    Directory of Open Access Journals (Sweden)

    Sylvie Occelli

    2002-03-01

    Full Text Available Over the last twenty years, complexity issues have been a central theme of enquiry for the modelling field. While contributing both to a critical revisiting of existing methods and to opening new ways of reasoning, the effectiveness (and sense) of modelling activity was rarely questioned. The acknowledgment of complexity, however, has been a fruitful spur to new and more sophisticated methods intended to improve understanding and advance the geographical sciences. Yet its contribution to tackling urban problems in everyday life has been rather poor and mainly limited to rhetorical claims about the potentialities of the new approach. We argue that although complexity has put the classical modelling activity in serious distress, it is disclosing new potentialities which are still largely unnoticed. These are primarily related to what the authors have called the structural cognitive shift, which involves both the contents and the role of modelling activity. This paper is the first part of a work aimed at illustrating the main features of this shift and discussing its main consequences for the modelling activity. We contend that a most relevant aspect of novelty lies in the new role of modelling as a cognitive mediator, i.e. as a kind of interface between the various components of a modelling process and the external environment to which a model application belongs.

  6. BlenX-based compositional modeling of complex reaction mechanisms

    Directory of Open Access Journals (Sweden)

    Judit Zámborszky

    2010-02-01

    Full Text Available Molecular interactions are wired in a fascinating way resulting in complex behavior of biological systems. Theoretical modeling provides a useful framework for understanding the dynamics and the function of such networks. The complexity of the biological networks calls for conceptual tools that manage the combinatorial explosion of the set of possible interactions. A suitable conceptual tool to attack complexity is compositionality, already successfully used in the process algebra field to model computer systems. We rely on the BlenX programming language, originated by the beta-binders process calculus, to specify and simulate high-level descriptions of biological circuits. The Gillespie's stochastic framework of BlenX requires the decomposition of phenomenological functions into basic elementary reactions. Systematic unpacking of complex reaction mechanisms into BlenX templates is shown in this study. The estimation/derivation of missing parameters and the challenges emerging from compositional model building in stochastic process algebras are discussed. A biological example on circadian clock is presented as a case study of BlenX compositionality.
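    For readers unfamiliar with the Gillespie framework the record relies on, a minimal sketch of the direct stochastic simulation algorithm over elementary reactions follows; the toy network and rate constant are assumptions for illustration, not BlenX code.

```python
import random

def gillespie(state, reactions, t_end):
    """Gillespie's direct method (stochastic simulation algorithm).
    `state` maps species names to integer counts; `reactions` is a list of
    (propensity_fn, state_change_dict) pairs."""
    t, trajectory = 0.0, [(0.0, dict(state))]
    while t < t_end:
        propensities = [a(state) for a, _ in reactions]
        a0 = sum(propensities)
        if a0 == 0.0:                              # no reaction can fire any more
            break
        t += random.expovariate(a0)                # exponential waiting time
        r, acc = random.uniform(0.0, a0), 0.0
        for prop, (_, change) in zip(propensities, reactions):
            acc += prop
            if r <= acc:                           # pick reaction ~ its propensity
                for species, delta in change.items():
                    state[species] += delta
                break
        trajectory.append((t, dict(state)))
    return trajectory

# Toy elementary network (an illustrative assumption): A + B -> C, rate k
k = 0.01
reactions = [(lambda s: k * s["A"] * s["B"], {"A": -1, "B": -1, "C": +1})]
print(gillespie({"A": 100, "B": 80, "C": 0}, reactions, t_end=50.0)[-1])
```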

  7. Barrier experiment: Shock initiation under complex loading

    Energy Technology Data Exchange (ETDEWEB)

    Menikoff, Ralph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-01-12

    The barrier experiments are a variant of the gap test; a detonation wave in a donor HE impacts a barrier and drives a shock wave into an acceptor HE. The question we ask is: what is the trade-off between the barrier material and the threshold barrier thickness needed to prevent the acceptor from detonating? This can be viewed from the perspective of shock initiation of the acceptor subject to a complex pressure drive condition. Here we consider key factors which affect whether or not the acceptor undergoes a shock-to-detonation transition. These include the following: shock impedance matches for the donor detonation wave into the barrier and then the barrier shock into the acceptor, the pressure gradient behind the donor detonation wave, and the curvature of the detonation front in the donor. Numerical simulations are used to illustrate how these factors affect the reaction in the acceptor.

  8. Complex fluids modeling and algorithms

    CERN Document Server

    Saramito, Pierre

    2016-01-01

    This book presents a comprehensive overview of the modeling of complex fluids, including many common substances, such as toothpaste, hair gel, mayonnaise, liquid foam, cement and blood, which cannot be described by Navier-Stokes equations. It also offers an up-to-date mathematical and numerical analysis of the corresponding equations, as well as several practical numerical algorithms and software solutions for the approximation of the solutions. It discusses industrial (molten plastics, forming process), geophysical (mud flows, volcanic lava, glaciers and snow avalanches), and biological (blood flows, tissues) modeling applications. This book is a valuable resource for undergraduate students and researchers in applied mathematics, mechanical engineering and physics.

  9. CFD [computational fluid dynamics] And Safety Factors. Computer modeling of complex processes needs old-fashioned experiments to stay in touch with reality.

    Energy Technology Data Exchange (ETDEWEB)

    Leishear, Robert A.; Lee, Si Y.; Poirier, Michael R.; Steeper, Timothy J.; Ervin, Robert C.; Giddings, Billy J.; Stefanko, David B.; Harp, Keith D.; Fowley, Mark D.; Van Pelt, William B.

    2012-10-07

    Computational fluid dynamics (CFD) is recognized as a powerful engineering tool. That is, CFD has advanced over the years to the point where it can now give us deep insight into the analysis of very complex processes. There is a danger, though, that an engineer can place too much confidence in a simulation. If a user is not careful, it is easy to believe that if you plug in the numbers, the answer comes out, and you are done. This assumption can lead to significant errors. As we discovered in the course of a study on behalf of the Department of Energy's Savannah River Site in South Carolina, CFD models fail to capture some of the large variations inherent in complex processes. These variations, or scatter, in experimental data emerge from physical tests and are inadequately captured or expressed by calculated mean values for a process. This anomaly between experiment and theory can lead to serious errors in engineering analysis and design unless a correction factor, or safety factor, is experimentally validated. For this study, blending times for the mixing of salt solutions in large storage tanks were the process of concern under investigation. This study focused on the blending processes needed to mix salt solutions to ensure homogeneity within waste tanks, where homogeneity is required to control radioactivity levels during subsequent processing. Two of the requirements for this task were to determine the minimum number of submerged, centrifugal pumps required to blend the salt mixtures in a full-scale tank in half a day or less, and to recommend reasonable blending times to achieve nearly homogeneous salt mixtures. A full-scale, low-flow pump with a total discharge flow rate of 500 to 800 gpm was recommended with two opposing 2.27-inch diameter nozzles. To make this recommendation, both experimental and CFD modeling were performed. Lab researchers found that, although CFD provided good estimates of an average blending time, experimental blending times varied

  10. Modeling complex flow structures and drag around a submerged plant of varied posture

    Science.gov (United States)

    Boothroyd, Richard J.; Hardy, Richard J.; Warburton, Jeff; Marjoribanks, Timothy I.

    2017-04-01

    Although vegetation is present in many rivers, the bulk of past work concerned with modeling the influence of vegetation on flow has considered vegetation to be morphologically simple and has generally neglected the complexity of natural plants. Here we report on a combined flume and numerical model experiment which incorporates time-averaged plant posture, collected through terrestrial laser scanning, into a computational fluid dynamics model to predict flow around a submerged riparian plant. For three depth-limited flow conditions (Reynolds number = 65,000-110,000), plant dynamics were recorded through high-definition video imagery, and the numerical model was validated against flow velocities collected with an acoustic Doppler velocimeter. The plant morphology shows an 18% reduction in plant height and a 14% increase in plant length, compressing and reducing the volumetric canopy morphology as the Reynolds number increases. Plant shear layer turbulence is dominated by Kelvin-Helmholtz type vortices generated through shear instability, the frequency of which is estimated to be between 0.20 and 0.30 Hz, increasing with Reynolds number. These results demonstrate the significant effect that the complex morphology of natural plants has on in-stream drag, and allow a physically determined, species-dependent drag coefficient to be calculated. Given the importance of vegetation in river corridor management, the approach developed here demonstrates the necessity to account for plant motion when calculating vegetative resistance.
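    The species-dependent drag coefficient mentioned above is conventionally back-calculated from a measured drag force via the bulk drag law; the symbols below are the standard ones, not values from the study.

```latex
F_D = \tfrac{1}{2}\,\rho\,C_d\,A_p\,U^{2}
\quad\Longrightarrow\quad
C_d = \frac{2\,F_D}{\rho\,A_p\,U^{2}}
```

    Here F_D is the measured drag force, ρ the water density, A_p the (posture-dependent) projected plant area and U the approach velocity.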

  11. The utility of Earth system Models of Intermediate Complexity

    NARCIS (Netherlands)

    Weber, S.L.

    2010-01-01

    Intermediate-complexity models are models which describe the dynamics of the atmosphere and/or ocean in less detail than conventional General Circulation Models (GCMs). At the same time, they go beyond the approach taken by atmospheric Energy Balance Models (EBMs) or ocean box models by

  12. Liquid jets for experiments on complex fluids

    International Nuclear Information System (INIS)

    Steinke, Ingo

    2015-02-01

    The ability of modern storage rings and free-electron lasers to produce intense X-ray beams that can be focused down to μm and nm sizes offers the possibility to study soft condensed matter systems on small length and short time scales. Gas dynamic virtual nozzles (GDVN) offer the unique possibility to investigate complex fluids spatially confined in a μm-sized liquid jet with high flow rates, high pressures and shear stress distributions. In this thesis two different applications of liquid jet injection systems have been studied. The influence of the shear flow present in a liquid jet on colloidal dispersions was investigated via small angle X-ray scattering, and a coherent wide angle X-ray scattering experiment on a liquid water jet was performed. For these purposes, liquid jet setups suitable for X-ray scattering experiments were developed and the manufacturing of gas dynamic virtual nozzles was realized. The flow properties of a liquid jet and their influence on the liquid were studied with two different colloidal dispersions at beamline P10 at the storage ring PETRA III. The results show that the high shear flows present in a liquid jet lead to compressions and expansions of the particle structure and to particle alignment. The shear rate in the liquid jet used could be estimated as γ ≥ 5.4 × 10⁴ Hz. The feasibility of rheology studies with a liquid jet injection system, and the advantages of combining the two, is discussed. The coherent X-ray scattering experiment on a water jet was performed at the XCS instrument at the free-electron laser LCLS. First coherent single-shot diffraction patterns from water were taken to investigate the feasibility of measuring speckle patterns from water.

  13. Atmospheric statistical dynamic models. Climate experiments: albedo experiments with a zonal atmospheric model

    International Nuclear Information System (INIS)

    Potter, G.L.; Ellsaesser, H.W.; MacCracken, M.C.; Luther, F.M.

    1978-06-01

    The zonal model experiments with modified surface boundary conditions suggest an initial chain of feedback processes that is largest at the site of the perturbation: deforestation and/or desertification → increased surface albedo → reduced surface absorption of solar radiation → surface cooling and reduced evaporation → reduced convective activity → reduced precipitation and latent heat release → cooling of upper troposphere and increased tropospheric lapse rates → general global cooling and reduced precipitation. As indicated above, although the two experiments give similar overall global results, the location of the perturbation plays an important role in determining the response of the global circulation. These two-dimensional model results are also consistent with three-dimensional model experiments. These results have tempted us to consider the possibility that self-induced growth of the subtropical deserts could serve as a possible mechanism to cause the initial global cooling that then initiates a glacial advance thus activating the positive feedback loop involving ice-albedo feedback (also self-perpetuating). Reversal of the cycle sets in when the advancing ice cover forces the wave-cyclone tracks far enough equatorward to quench (revegetate) the subtropical deserts

  14. Modelling, Estimation and Control of Networked Complex Systems

    CERN Document Server

    Chiuso, Alessandro; Frasca, Mattia; Rizzo, Alessandro; Schenato, Luca; Zampieri, Sandro

    2009-01-01

    The paradigm of complexity is pervading both science and engineering, leading to the emergence of novel approaches oriented at the development of a systemic view of the phenomena under study; the definition of powerful tools for modelling, estimation, and control; and the cross-fertilization of different disciplines and approaches. This book is devoted to networked systems which are one of the most promising paradigms of complexity. It is demonstrated that complex, dynamical networks are powerful tools to model, estimate, and control many interesting phenomena, like agent coordination, synchronization, social and economics events, networks of critical infrastructures, resources allocation, information processing, or control over communication networks. Moreover, it is shown how the recent technological advances in wireless communication and decreasing in cost and size of electronic devices are promoting the appearance of large inexpensive interconnected systems, each with computational, sensing and mobile cap...

  15. EVALUATING THE NOVEL METHODS ON SPECIES DISTRIBUTION MODELING IN COMPLEX FOREST

    Directory of Open Access Journals (Sweden)

    C. H. Tu

    2012-07-01

    Full Text Available The prediction of species distribution has become a focus in ecology. For predicting a result more effectively and accurately, some novel methods have been proposed recently, like the support vector machine (SVM) and maximum entropy (MAXENT). However, high complexity in the forest, like that in Taiwan, makes the modeling even harder. In this study, we aim to explore which method is more applicable to species distribution modeling in complex forest. Castanopsis carlesii (long-leaf chinkapin, LLC), growing widely in Taiwan, was chosen as the target species because its seeds are an important food source for animals. We overlaid the tree samples on the layers of altitude, slope, aspect, terrain position, and vegetation index derived from SPOT-5 images, and developed three models, MAXENT, SVM, and decision tree (DT), to predict the potential habitat of LLCs. We evaluated these models with two sets of independent samples from different sites and assessed the effect of forest complexity by changing the background sample size (BSZ). In the forest with low complexity (small BSZ), the accuracies of the SVM (kappa = 0.87) and DT (0.86) models were slightly higher than that of MAXENT (0.84). In the more complex situation (large BSZ), MAXENT kept a high kappa value (0.85), whereas the SVM (0.61) and DT (0.57) models dropped significantly due to limiting the habitat close to samples. Therefore, the MAXENT model was more applicable for predicting species' potential habitat in the complex forest, whereas the SVM and DT models tended to underestimate the potential habitat of LLCs.

  16. Generalized complex geometry, generalized branes and the Hitchin sigma model

    International Nuclear Information System (INIS)

    Zucchini, Roberto

    2005-01-01

    Hitchin's generalized complex geometry has been shown to be relevant in compactifications of superstring theory with fluxes and is expected to lead to a deeper understanding of mirror symmetry. Gualtieri's notion of generalized complex submanifold seems to be a natural candidate for the description of branes in this context. Recently, we introduced a Batalin-Vilkovisky field theoretic realization of generalized complex geometry, the Hitchin sigma model, extending the well known Poisson sigma model. In this paper, exploiting Gualtieri's formalism, we incorporate branes into the model. A detailed study of the boundary conditions obeyed by the world sheet fields is provided. Finally, it is found that, when branes are present, the classical Batalin-Vilkovisky cohomology contains an extra sector that is related non trivially to a novel cohomology associated with the branes as generalized complex submanifolds. (author)

  17. Narrowing the gap between network models and real complex systems

    OpenAIRE

    Viamontes Esquivel, Alcides

    2014-01-01

    Simple network models that focus only on graph topology or, at best, basic interactions are often insufficient to capture all the aspects of a dynamic complex system. In this thesis, I explore those limitations, and some concrete methods of resolving them. I argue that, in order to succeed at interpreting and influencing complex systems, we need to take into account slightly more complex parts, interactions and information flows in our models. This thesis supports that affirmation with five a...

  18. Validation of the containment code Sirius: interpretation of an explosion experiment on a scale model

    International Nuclear Information System (INIS)

    Blanchet, Y.; Obry, P.; Louvet, J.; Deshayes, M.; Phalip, C.

    1979-01-01

    The explicit 2-D axisymmetric Lagrangian code SIRIUS, developed at the CEA/DRNR, Cadarache, deals with transient compressive flows in deformable primary tanks with more or less complex internal component geometries. This code has been subjected to a two-year intensive validation program on scale model experiments and a number of improvements have been incorporated. This paper presents a recent calculation of one of these experiments using the SIRIUS code, and the comparison with experimental results shows the encouraging possibilities of this Lagrangian code

  19. Cloud chamber experiments on the origin of ice crystal complexity in cirrus clouds

    Directory of Open Access Journals (Sweden)

    M. Schnaiter

    2016-04-01

    Full Text Available This study reports on the origin of small-scale ice crystal complexity and its influence on the angular light scattering properties of cirrus clouds. Cloud simulation experiments were conducted at the AIDA (Aerosol Interactions and Dynamics in the Atmosphere) cloud chamber of the Karlsruhe Institute of Technology (KIT). A new experimental procedure was applied to grow and sublimate ice particles at defined super- and subsaturated ice conditions and for temperatures in the −40 to −60 °C range. The experiments were performed for ice clouds generated via homogeneous and heterogeneous initial nucleation. Small-scale ice crystal complexity was deduced from measurements of spatially resolved single particle light scattering patterns by the latest version of the Small Ice Detector (SID-3). It was found that a high crystal complexity dominates the microphysics of the simulated clouds and the degree of this complexity is dependent on the available water vapor during the crystal growth. Indications were found that the small-scale crystal complexity is influenced by unfrozen H2SO4/H2O residuals in the case of homogeneous initial ice nucleation. Angular light scattering functions of the simulated ice clouds were measured by the two currently available airborne polar nephelometers: the polar nephelometer (PN) probe of the Laboratoire de Météorologie Physique (LaMP) and the Particle Habit Imaging and Polar Scattering (PHIPS-HALO) probe of KIT. The measured scattering functions are featureless and flat in the side and backward scattering directions. It was found that these functions have a rather low sensitivity to the small-scale crystal complexity for ice clouds that were grown under typical atmospheric conditions. These results have implications for the microphysical properties of cirrus clouds and for the radiative transfer through these clouds.

  20. Amblypygids: Model Organisms for the Study of Arthropod Navigation Mechanisms in Complex Environments?

    Directory of Open Access Journals (Sweden)

    Daniel D Wiegmann

    2016-03-01

    Full Text Available Navigation is an ideal behavioral model for the study of sensory system integration and the neural substrates associated with complex behavior. For this broader purpose, however, it may be profitable to develop new model systems that are both tractable and sufficiently complex to ensure that information derived from a single sensory modality and path integration are inadequate to locate a goal. Here, we discuss some recent discoveries related to navigation by amblypygids, nocturnal arachnids that inhabit the tropics and sub-tropics. Nocturnal displacement experiments under the cover of a tropical rainforest reveal that these animals possess navigational abilities that are reminiscent, albeit on a smaller spatial scale, of true-navigating vertebrates. Specialized legs, called antenniform legs, which possess hundreds of olfactory and tactile sensory hairs, and vision appear to be involved. These animals also have enormous mushroom bodies, higher-order brain regions that, in insects, integrate contextual cues and may be involved in spatial memory. In amblypygids, the complexity of a nocturnal rainforest may impose navigational challenges that favor the integration of information derived from multimodal cues. Moreover, the movement of these animals is easily studied in the laboratory and putative neural integration sites of sensory information can be manipulated. Thus, amblypygids could serve as a model system for the discovery of neural substrates associated with a unique and potentially sophisticated navigational capability. The diversity of habitats in which amblypygids are found also offers an opportunity for comparative studies of sensory integration and ecological selection pressures on navigation mechanisms.

  1. Modeling geophysical complexity: a case for geometric determinism

    Directory of Open Access Journals (Sweden)

    C. E. Puente

    2007-01-01

    Full Text Available It has been customary in the last few decades to employ stochastic models to represent complex data sets encountered in geophysics, particularly in hydrology. This article reviews a deterministic geometric procedure to data modeling, one that represents whole data sets as derived distributions of simple multifractal measures via fractal functions. It is shown how such a procedure may lead to faithful holistic representations of existing geophysical data sets that, while complementing existing representations via stochastic methods, may also provide a compact language for geophysical complexity. The implications of these ideas, both scientific and philosophical, are stressed.

  2. Complex versus simple models: ion-channel cardiac toxicity prediction.

    Science.gov (United States)

    Mistry, Hitesh B

    2018-01-01

    There is growing interest in applying detailed mathematical models of the heart for ion-channel related cardiac toxicity prediction. However, a debate exists as to whether such complex models are required. Here an assessment of the predictive performance between two established large-scale biophysical cardiac models and a simple linear model Bnet was conducted. Three ion-channel data-sets were extracted from the literature. Each compound was designated a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via a leave-one-out cross validation. Overall the Bnet model performed equally as well as the leading cardiac models in two of the data-sets and outperformed both cardiac models on the latest. These results highlight the importance of benchmarking complex versus simple models but also encourage the development of simple models.

  3. Complex versus simple models: ion-channel cardiac toxicity prediction

    Directory of Open Access Journals (Sweden)

    Hitesh B. Mistry

    2018-02-01

    Full Text Available There is growing interest in applying detailed mathematical models of the heart for ion-channel related cardiac toxicity prediction. However, a debate as to whether such complex models are required exists. Here an assessment in the predictive performance between two established large-scale biophysical cardiac models and a simple linear model Bnet was conducted. Three ion-channel data-sets were extracted from literature. Each compound was designated a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via a leave-one-out cross validation. Overall the Bnet model performed equally as well as the leading cardiac models in two of the data-sets and outperformed both cardiac models on the latest. These results highlight the importance of benchmarking complex versus simple models but also encourage the development of simple models.
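    Both records above assess models by leave-one-out cross validation; a minimal sketch of that procedure follows. The 1-nearest-neighbour stand-in, the feature triples and the risk labels are placeholders, not the Bnet model or the CredibleMeds categories.

```python
def leave_one_out(fit, predict, X, y):
    """Leave-one-out CV: train on all-but-one compound, predict the
    held-out one, and report the fraction classified correctly."""
    hits = 0
    for i in range(len(X)):
        model = fit(X[:i] + X[i+1:], y[:i] + y[i+1:])
        hits += predict(model, X[i]) == y[i]
    return hits / len(X)

# Placeholder 1-nearest-neighbour "model" over ion-channel block features
def fit(X, y):
    return list(zip(X, y))

def predict(model, x):
    return min(model, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[1]

# Each row: hypothetical (hERG, Na, Ca) block fractions; label: risk category
X = [(0.8, 0.1, 0.2), (0.1, 0.0, 0.1), (0.7, 0.3, 0.5), (0.2, 0.1, 0.0)]
y = ["high", "low", "high", "low"]
print(leave_one_out(fit, predict, X, y))
```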

  4. Knowledge-based inspection:modelling complex processes with the integrated Safeguards Modelling Method (iSMM)

    International Nuclear Information System (INIS)

    Abazi, F.

    2011-01-01

    The increased level of complexity in almost every discipline and operation today raises the demand for knowledge in order to successfully run an organization, whether to generate profit or to attain a non-profit mission. The traditional way of transferring knowledge to information systems rich in data structures and complex algorithms continues to hinder the ability to swiftly turn concepts into operations. Diagrammatic modelling, commonly applied in engineering to represent concepts or reality, remains an excellent way of converging knowledge from domain experts. The nuclear verification domain is ever more a matter of great importance to world safety and security. Demand for knowledge about nuclear processes and the verification activities used to offset potential misuse of nuclear technology will intensify with the growth of the subject technology. This doctoral thesis contributes a model-based approach for representing complex processes such as nuclear inspections. The work presented also contributes to other domains characterized by knowledge-intensive and complex processes. Based on the characteristics of a complex process, a conceptual framework was established as the theoretical basis for creating a number of modelling languages to represent the domain. The integrated Safeguards Modelling Method (iSMM) is formalized through an integrated meta-model. The diagrammatic modelling languages represent the verification domain and relevant nuclear verification aspects. Such a meta-model conceptualizes the relation between practices of process management, knowledge management and domain-specific verification principles. This fusion is considered necessary in order to create quality processes. The study also extends the formalization achieved through the meta-model by contributing a formalization language based on Pattern Theory. Through the use of graphical and mathematical constructs of the theory, process structures are formalized, enhancing

  5. Network-oriented modeling addressing complexity of cognitive, affective and social interactions

    CERN Document Server

    Treur, Jan

    2016-01-01

    This book presents a new approach that can be applied to complex, integrated individual and social human processes. It provides an alternative means of addressing complexity, better suited to this purpose than, and effectively complementing, traditional strategies involving isolation and separation assumptions. Network-oriented modeling allows high-level cognitive, affective and social models in the form of (cyclic) graphs to be constructed, which can be automatically transformed into executable simulation models. The modeling format used makes it easy to take into account theories and findings about complex cognitive and social processes, which often involve dynamics based on interrelating cycles. Accordingly, it makes it possible to address complex phenomena such as the integration of emotions within cognitive processes of all kinds, internal simulations of the mental processes of others, and social phenomena such as shared understandings and collective actions. A variety of sample models – including ...

  6. CamOptimus: a tool for exploiting complex adaptive evolution to optimize experiments and processes in biotechnology.

    Science.gov (United States)

    Cankorur-Cetinkaya, Ayca; Dias, Joao M L; Kludas, Jana; Slater, Nigel K H; Rousu, Juho; Oliver, Stephen G; Dikicioglu, Duygu

    2017-06-01

    Multiple interacting factors affect the performance of engineered biological systems in synthetic biology projects. The complexity of these biological systems means that experimental design should often be treated as a multiparametric optimization problem. However, the available methodologies are either impractical, due to a combinatorial explosion in the number of experiments to be performed, or are inaccessible to most experimentalists due to the lack of publicly available, user-friendly software. Although evolutionary algorithms may be employed as alternative approaches to optimize experimental design, the lack of simple-to-use software again restricts their use to specialist practitioners. In addition, the lack of subsidiary approaches to further investigate critical factors and their interactions prevents the full analysis and exploitation of the biotechnological system. We have addressed these problems and, here, provide a simple-to-use and freely available graphical user interface to empower a broad range of experimental biologists to employ complex evolutionary algorithms to optimize their experimental designs. Our approach exploits a Genetic Algorithm to discover the subspace containing the optimal combination of parameters, and Symbolic Regression to construct a model to evaluate the sensitivity of the experiment to each parameter under investigation. We demonstrate the utility of this method using an example in which the culture conditions for the microbial production of a bioactive human protein are optimized. CamOptimus is available through: (https://doi.org/10.17863/CAM.10257).
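
    To make the evolutionary-algorithm idea concrete, here is a minimal genetic-algorithm sketch for a multiparametric design problem of the kind CamOptimus targets. The response surface, parameter bounds and GA settings are illustrative assumptions, not the tool's actual implementation; in a real application each fitness evaluation would be a wet-lab experiment.

```python
# Hedged sketch: a minimal genetic algorithm for multiparametric
# experimental design. The fitness function is a hypothetical stand-in
# for a measured response (e.g. protein yield).
import random

BOUNDS = [(20.0, 40.0), (4.0, 8.0), (0.1, 2.0)]  # e.g. temp, pH, inducer conc.

def fitness(params):
    # Hypothetical response surface with an optimum inside the bounds.
    t, ph, ind = params
    return -((t - 30) ** 2 + 4 * (ph - 6.5) ** 2 + 10 * (ind - 0.8) ** 2)

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def crossover(a, b):
    # Uniform crossover: each parameter comes from one of the two parents.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(ind, rate=0.2):
    return [random.uniform(lo, hi) if random.random() < rate else x
            for x, (lo, hi) in zip(ind, BOUNDS)]

pop = [random_individual() for _ in range(20)]
for generation in range(30):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                      # truncation selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(10)]
    pop = parents + children

best = max(pop, key=fitness)
print("best conditions:", [round(x, 2) for x in best])
```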

  7. Building a pseudo-atomic model of the anaphase-promoting complex

    International Nuclear Information System (INIS)

    Kulkarni, Kiran; Zhang, Ziguo; Chang, Leifu; Yang, Jing; Fonseca, Paula C. A. da; Barford, David

    2013-01-01

    This article describes an example of molecular replacement in which atomic models are used to interpret electron-density maps determined using single-particle electron-microscopy data. The anaphase-promoting complex (APC/C) is a large E3 ubiquitin ligase that regulates progression through specific stages of the cell cycle by coordinating the ubiquitin-dependent degradation of cell-cycle regulatory proteins. Depending on the species, the active form of the APC/C consists of 14–15 different proteins that assemble into a 20-subunit complex with a mass of approximately 1.3 MDa. A hybrid approach of single-particle electron microscopy and protein crystallography of individual APC/C subunits has been applied to generate pseudo-atomic models of various functional states of the complex. Three approaches for assigning regions of the EM-derived APC/C density map to specific APC/C subunits are described. This information was used to dock atomic models of APC/C subunits, determined either by protein crystallography or homology modelling, to specific regions of the APC/C EM map, allowing the generation of a pseudo-atomic model corresponding to 80% of the entire complex.

  8. Modelling the complex dynamics of vegetation, livestock and rainfall ...

    African Journals Online (AJOL)

    In this paper, we present mathematical models that incorporate ideas from complex systems theory to integrate several strands of rangeland theory in a hierarchical framework. ... Keywords: catastrophe theory; complexity theory; disequilibrium; hysteresis; moving attractors

  9. Modeling Complex Nesting Structures in International Business Research

    DEFF Research Database (Denmark)

    Nielsen, Bo Bernhard; Nielsen, Sabina

    2013-01-01

    While hierarchical random coefficient models (RCM) are often used for the analysis of multilevel phenomena, IB issues often result in more complex nested structures. This paper illustrates how cross-nested multilevel modeling, allowing for predictor variables and cross-level interactions at multiple (crossed) levels...

  10. Surface complexation modeling of U(VI) sorption on GMZ bentonite in the presence of fulvic acid

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Jie [Lanzhou Univ. (China). Radiochemistry Laboratory; Ministry of Industry and Information Technology, Guangzhou (China). The 5th Electronics Research Inst.; Luo, Daojun [Ministry of Industry and Information Technology, Guangzhou (China). The 5th Electronics Research Inst.; Qiao, Yahua; Wang, Liang; Zhang, Chunming [Ministry of Environmental Protection, Beijing (China). Nuclear and Radiation Safety Center; Wu, Wangsuo [Lanzhou Univ. (China). Radiochemistry Laboratory; Ye, Yuanlv [Ministry of Environmental Protection, Beijing (China). Nuclear and Radiation Safety Center; Lanzhou Univ. (China). Radiochemistry Laboratory

    2017-03-01

    In this work, experiments and modeling of the interactions between the uranyl ion and GMZ bentonite in the presence of fulvic acid (FA) are presented. The results demonstrated that FA is strongly bound to GMZ bentonite and that these molecules have a very large effect on U(VI) sorption. The results also demonstrated that U(VI) sorption to GMZ bentonite in the presence and absence of sorbed FA can be well predicted by combining the SHM and the DLM. According to the model calculations, the nature of the interaction between FA and U(VI) at the GMZ bentonite surface is mainly surface complexation. This is the first attempt to simulate clay interaction with humus using the SHM model.

  11. Atmospheric dispersion experiments over complex terrain in a Spanish valley site (Guardo-90)

    International Nuclear Information System (INIS)

    Ibarra, J.I.

    1991-01-01

    An intensive field experimental campaign was conducted in Spain to quantify atmospheric diffusion within a deep, steep-walled valley in rough, mountainous terrain. The program was sponsored by the Spanish electricity companies and is intended to validate existing plume models and to provide the scientific basis for future model development. The atmospheric dispersion and transport processes in a 40x40 km domain were studied in order to evaluate SO2 and SF6 releases from an existing 185 m chimney and from ground-level sources in a complex-terrain valley site. Emphasis was placed on local mesoscale flows and light-wind stable conditions. Although the measuring program was intensified during daytime for dual tracking of SO2/SF6 from an elevated source, nighttime experiments were conducted to characterize mountain-valley flows. Two principal objectives were pursued: the impaction of plumes upon elevated terrain, and the diffusion of gases within the valley versus diffusion over flat, open terrain. Artificial smoke flow visualizations provided qualitative information; quantitative diffusion measurements were obtained using sulfur hexafluoride gas, with analysis by highly sensitive electron-capture gas chromatograph systems. Fourteen two-hour gaseous tracer releases were conducted

  12. ANS main control complex three-dimensional computer model development

    International Nuclear Information System (INIS)

    Cleaves, J.E.; Fletcher, W.M.

    1993-01-01

    A three-dimensional (3-D) computer model of the Advanced Neutron Source (ANS) main control complex is being developed. The main control complex includes the main control room, the technical support center, the materials irradiation control room, computer equipment rooms, communications equipment rooms, cable-spreading rooms, and some support offices and breakroom facilities. The model will be used to provide facility designers and operations personnel with capabilities for fit-up/interference analysis, visual 'walk-throughs' for optimizing maintainability, and human-factors and operability analyses. It will be used to determine performance design characteristics, to generate construction drawings, and to integrate control room layout, equipment mounting, grounding equipment, electrical cabling, and utility services into ANS building designs. This paper describes the development of the initial phase of the 3-D computer model for the ANS main control complex and plans for its development and use.

  13. Surface-complexation models for sorption onto heterogeneous surfaces

    International Nuclear Information System (INIS)

    Harvey, K.B.

    1997-10-01

    This report provides a description of the discrete-logK spectrum model, together with a description of its derivation, and of its place in the larger context of surface-complexation modelling. The tools necessary to apply the discrete-logK spectrum model are discussed, and background information appropriate to this discussion is supplied as appendices. (author)

  14. The Complexity of Constructing Evolutionary Trees Using Experiments

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Fagerberg, Rolf; Pedersen, Christian Nørgaard Storm

    2001-01-01

    We present tight upper and lower bounds for the problem of constructing evolutionary trees in the experiment model. We describe an algorithm which constructs an evolutionary tree of n species in time O(nd log_d n) using at most n⌈d/2⌉(log_{2⌈d/2⌉−1} n + O(1)) experiments for d > 2, and at most n(log n + O(1)) experiments for d = 2, where d is the degree of the tree. This improves the previous best upper bound by a factor Θ(log d). For d = 2 the previously best algorithm with running time O(n log n) had a bound of 4n log n on the number of experiments. By an explicit adversary argument, we show an Ω(nd log_d n) lower bound, matching our upper bounds and improving the previous best lower bound by a factor Θ(log_d n). Central to our algorithm is the construction and maintenance of separator trees of small height, which may be of independent interest.
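
    The experiment model can be made concrete with a small sketch for the d = 2 case: an experiment on three species reports the closest pair, and each new species is inserted by walking down the current tree with one experiment per level, giving O(log n) experiments per insertion on balanced trees (the paper's separator trees keep the height small in the worst case). The oracle below is a hypothetical stand-in driven by a hidden true tree.

```python
# Hedged sketch: evolutionary-tree construction in the experiment model
# for degree d = 2. The oracle is driven by a hidden true tree that the
# algorithm never sees directly.
class Node:
    def __init__(self, left=None, right=None, name=None):
        self.left, self.right, self.name = left, right, name

    def a_leaf(self):
        # Any representative leaf of this subtree.
        return self if self.name else self.left.a_leaf()

# Hidden truth for the oracle: similarity = depth of the lowest common
# ancestor in the (unknown to the algorithm) tree (((A,B),C),(D,E)).
SIM = {frozenset("AB"): 3, frozenset("AC"): 2, frozenset("BC"): 2,
       frozenset("DE"): 2, frozenset("AD"): 1, frozenset("AE"): 1,
       frozenset("BD"): 1, frozenset("BE"): 1, frozenset("CD"): 1,
       frozenset("CE"): 1}

def experiment(x, y, z):
    """Return the closest pair among three species (one 'experiment')."""
    pairs = [(x, y), (x, z), (y, z)]
    return max(pairs, key=lambda p: SIM[frozenset(p)])

def insert(tree, x):
    if tree.name:                       # single leaf: x becomes its sibling
        return Node(tree, Node(name=x))
    a, b = tree.left.a_leaf().name, tree.right.a_leaf().name
    closest = set(experiment(a, b, x))
    if closest == {a, b}:               # x is an outgroup of this subtree
        return Node(tree, Node(name=x))
    if a in closest:                    # x belongs with the left subtree
        tree.left = insert(tree.left, x)
    else:                               # x belongs with the right subtree
        tree.right = insert(tree.right, x)
    return tree

def show(t):
    return t.name if t.name else f"({show(t.left)},{show(t.right)})"

tree = Node(name="A")
for s in "BCDE":
    tree = insert(tree, s)
print(show(tree))   # reconstructs the hidden topology (((A,B),C),(D,E))
```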

  15. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations.

    Science.gov (United States)

    Hu, Eric Y; Bouteiller, Jean-Marie C; Song, Dong; Baudry, Michel; Berger, Theodore W

    2015-01-01

    Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model, capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating the complex nonlinear dynamics represented in the original mechanistic model, and they provide a method for replicating complex and diverse synaptic transmission within neuron network simulations.
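
    The core idea, capturing a mechanistic model's input-output relationship with a Volterra functional power series, can be sketched as follows. The "mechanistic" synapse here is a hypothetical saturating nonlinearity on a filtered spike train, and the second-order kernels are fitted by ordinary least squares rather than the authors' estimation machinery.

```python
# Hedged sketch: a second-order discrete Volterra input-output model.
# The mechanistic stand-in is a filtered input with a saturating
# nonlinearity; the Volterra kernels are fitted by least squares.
import numpy as np

rng = np.random.default_rng(0)
M = 8                                     # kernel memory (samples)
x = rng.poisson(0.3, 2000).astype(float)  # spike-train-like input

# Hypothetical mechanistic model (stands in for the detailed synapse).
h = np.exp(-np.arange(M) / 2.0)
u = np.convolve(x, h)[: len(x)]
y = u / (1.0 + 0.5 * u)                   # saturation -> nonlinear dynamics

# Regressors: constant, first-order lags, and second-order lag products
# (upper triangle only, by symmetry of the second-order kernel k2).
lags = np.stack([np.roll(x, k) for k in range(M)], axis=1)
lags[:M, :] = 0                           # discard wrap-around samples
quad = np.stack([lags[:, i] * lags[:, j]
                 for i in range(M) for j in range(i, M)], axis=1)
X = np.hstack([np.ones((len(x), 1)), lags, quad])

theta, *_ = np.linalg.lstsq(X, y, rcond=None)   # k0, k1, k2 estimates
y_hat = X @ theta
err = np.sqrt(np.mean((y - y_hat) ** 2)) / np.std(y)
print(f"normalized RMSE of 2nd-order Volterra fit: {err:.3f}")
```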

  16. Simulation - modeling - experiment

    International Nuclear Information System (INIS)

    2004-01-01

    After two workshops held in 2001 on the same topics, and in order to take stock of the advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state of the art of tools, methods and experiments in the domains of interest of the Gedepeon research group, and the exchange of information about the possible uses of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 presentations (slides) among the 19 given at this workshop, dealing with: deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); advanced status of the MCNP/TRIO-U neutronics/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); perturbation and sensitivity analysis methods for nuclear data in reactor physics, with application to the VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high-energy transport codes: status and perspective (Leray S.); other avenues of investigation for spallation (Audoin L.); neutron and light-particle production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Le Colley F

  17. Research on the experience of leading foreign countries in managing the construction complex

    OpenAIRE

    Borovik, Yu

    2010-01-01

    This article explores the experience of leading foreign countries in managing the construction industry and the possibilities of applying it to the management of the transport construction complex of Ukraine.

  18. Validation and Analysis of Forward Osmosis CFD Model in Complex 3D Geometries

    Science.gov (United States)

    Gruber, Mathias F.; Johnson, Carl J.; Tang, Chuyang; Jensen, Mogens H.; Yde, Lars; Hélix-Nielsen, Claus

    2012-01-01

    In forward osmosis (FO), an osmotic pressure gradient generated across a semi-permeable membrane is used to generate water transport from a dilute feed solution into a concentrated draw solution. This principle has shown great promise in the areas of water purification, wastewater treatment, seawater desalination and power generation. To ease optimization and increase understanding of membrane systems, it is desirable to have a comprehensive model that allows for easy investigation of all the major parameters in the separation process. Here we present experimental validation of a computational fluid dynamics (CFD) model developed to simulate FO experiments with asymmetric membranes. Simulations are compared with experimental results obtained from using two distinctly different complex three-dimensional membrane chambers. It is found that the CFD model accurately describes the solute separation process and water permeation through membranes under various flow conditions. It is furthermore demonstrated how the CFD model can be used to optimize membrane geometry in such a way as to promote the mass transfer. PMID:24958428

  19. Validation and Analysis of Forward Osmosis CFD Model in Complex 3D Geometries

    Directory of Open Access Journals (Sweden)

    Lars Yde

    2012-11-01

    Full Text Available In forward osmosis (FO), an osmotic pressure gradient generated across a semi-permeable membrane is used to generate water transport from a dilute feed solution into a concentrated draw solution. This principle has shown great promise in the areas of water purification, wastewater treatment, seawater desalination and power generation. To ease optimization and increase understanding of membrane systems, it is desirable to have a comprehensive model that allows for easy investigation of all the major parameters in the separation process. Here we present experimental validation of a computational fluid dynamics (CFD) model developed to simulate FO experiments with asymmetric membranes. Simulations are compared with experimental results obtained from using two distinctly different complex three-dimensional membrane chambers. It is found that the CFD model accurately describes the solute separation process and water permeation through membranes under various flow conditions. It is furthermore demonstrated how the CFD model can be used to optimize membrane geometry in such a way as to promote the mass transfer.

  20. Design and Validation of 3D Printed Complex Bone Models with Internal Anatomic Fidelity for Surgical Training and Rehearsal.

    Science.gov (United States)

    Unger, Bertram J; Kraut, Jay; Rhodes, Charlotte; Hochman, Jordan

    2014-01-01

    Physical models of complex bony structures can be used for surgical skills training. Current models focus on surface rendering but suffer from a lack of internal accuracy due to limitations in the manufacturing process. We describe a technique for generating internally accurate rapid-prototyped anatomical models with solid and hollow structures from clinical and microCT data using a 3D printer. In a face validation experiment, otolaryngology residents drilled a cadaveric bone and its corresponding printed model. The printed bone models were deemed highly realistic representations across all measured parameters and the educational value of the models was strongly appreciated.

  1. Model-Based Approach to the Evaluation of Task Complexity in Nuclear Power Plant

    International Nuclear Information System (INIS)

    Ham, Dong Han

    2007-02-01

    This study developed a model-based method for evaluating task complexity and examined ways of evaluating the complexity of tasks designed for abnormal situations and daily task situations in NPPs. The main results of this study can be summarised as follows. First, this study developed a conceptual framework for studying complexity factors and a model of complexity factors that classifies them according to the types of knowledge that human operators use. Second, this study developed a more practical model of task complexity factors and identified twenty-one complexity factors based on the model. The model emphasizes that a task is a system to be designed and that its complexity has several dimensions. Third, we developed a method of identifying task complexity factors and evaluating task complexity qualitatively based on the developed model of task complexity factors. This method can be widely used in various task situations. Fourth, this study examined the applicability of TACOM to abnormal situations and daily task situations, such as maintenance, and confirmed that it can reasonably be used in those situations. Fifth, we developed application examples to demonstrate the use of the theoretical results of this study. Lastly, this study reinterpreted well-known principles for designing information displays in NPPs in terms of task complexity and suggested a way of evaluating the conceptual design of displays analytically by using the concept of task complexity. All of these results will be used as a basis when evaluating the complexity of tasks designed in procedures or information displays and when designing ways of improving human performance in NPPs.

  2. Entropies from Markov Models as Complexity Measures of Embedded Attractors

    Directory of Open Access Journals (Sweden)

    Julián D. Arias-Londoño

    2015-06-01

    Full Text Available This paper addresses the problem of measuring complexity from embedded attractors as a way to characterize changes in the dynamical behavior of different types of systems with a quasi-periodic behavior by observing their outputs. With the aim of measuring the stability of the trajectories of the attractor along time, this paper proposes three new estimations of entropy that are derived from a Markov model of the embedded attractor. The proposed estimators are compared with traditional nonparametric entropy measures, such as approximate entropy, sample entropy and fuzzy entropy, which only take into account the spatial dimension of the trajectory. The method proposes the use of an unsupervised algorithm to find the principal curve, which is considered as the “profile trajectory”, that will serve to adjust the Markov model. The new entropy measures are evaluated using three synthetic experiments and three datasets of physiological signals. In terms of consistency and discrimination capabilities, the results show that the proposed measures perform better than the other entropy measures used for comparison purposes.
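
    A minimal sketch of one such Markov-model entropy estimate: coarse-grain a time-delay embedding of the signal into discrete states, fit a first-order transition matrix, and compute the entropy rate H = −Σᵢ πᵢ Σⱼ Pᵢⱼ log Pᵢⱼ from the stationary distribution π. The test signal, the grid partition and the smoothing below are illustrative choices, not the paper's exact estimators.

```python
# Hedged sketch: entropy rate from a first-order Markov model of a
# coarse-grained embedded attractor. The signal and the 2-D delay
# embedding with a grid partition are illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(5000) * 0.05
signal = np.sin(t) + 0.1 * rng.standard_normal(len(t))   # quasi-periodic

tau, bins = 10, 8                        # delay and cells per axis
emb = np.stack([signal[:-tau], signal[tau:]], axis=1)    # 2-D embedding
edges = np.linspace(emb.min(), emb.max(), bins + 1)
codes = np.clip(np.digitize(emb, edges) - 1, 0, bins - 1)
states = codes[:, 0] * bins + codes[:, 1]                # one id per cell

# Transition matrix with tiny additive smoothing so log(P) stays finite.
n = bins * bins
P = np.full((n, n), 1e-12)
for a, b in zip(states[:-1], states[1:]):
    P[a, b] += 1.0
P /= P.sum(axis=1, keepdims=True)

# Stationary distribution = leading left eigenvector of P.
w, v = np.linalg.eig(P.T)
pi = np.abs(np.real(v[:, np.argmax(np.real(w))]))
pi /= pi.sum()

H = -np.sum(pi[:, None] * P * np.log(P))
print(f"Markov entropy rate: {H:.3f} nats/step")
```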

  3. A new decision sciences for complex systems

    OpenAIRE

    Lempert, Robert J.

    2002-01-01

    Models of complex systems can capture much useful information but can be difficult to apply to real-world decision-making because the type of information they contain is often inconsistent with that required for traditional decision analysis. New approaches, which use inductive reasoning over large ensembles of computational experiments, now make possible systematic comparison of alternative policy options using models of complex systems. This article describes Computer-Assisted Reasoning, an...

  4. Analogue experiments as benchmarks for models of lava flow emplacement

    Science.gov (United States)

    Garel, F.; Kaminski, E. C.; Tait, S.; Limare, A.

    2013-12-01

    ... experimental observations of the effect of wind on the surface thermal structure of a viscous flow, which could be used to benchmark a thermal heat-loss model. We will also briefly present more complex analogue experiments using wax material. These experiments exhibit discontinuous advance behavior and a dual surface thermal structure, with regions of low (solidified) versus high (hot liquid exposed at the surface) surface temperatures. Emplacement models should aim to reproduce these two features, which are also observed on lava flows, to better predict the hazard of lava inundation.

  5. Understanding the implementation of complex interventions in health care: the normalization process model

    Directory of Open Access Journals (Sweden)

    Rogers Anne

    2007-09-01

    Full Text Available Abstract Background The Normalization Process Model is a theoretical model that assists in explaining the processes by which complex interventions become routinely embedded in health care practice. It offers a framework for process evaluation and also for comparative studies of complex interventions. It focuses on the factors that promote or inhibit the routine embedding of complex interventions in health care practice. Methods A formal theory structure is used to define the model, and its internal causal relations and mechanisms. The model is broken down to show that it is consistent and adequate in generating accurate description, systematic explanation, and the production of rational knowledge claims about the workability and integration of complex interventions. Results The model explains the normalization of complex interventions by reference to four factors demonstrated to promote or inhibit the operationalization and embedding of complex interventions (interactional workability, relational integration, skill-set workability, and contextual integration). Conclusion The model is consistent and adequate. Repeated calls for theoretically sound process evaluations in randomized controlled trials of complex interventions, and policy-makers who call for a proper understanding of implementation processes, emphasize the value of conceptual tools like the Normalization Process Model.

  6. Membrane-elasticity model of coatless vesicle budding induced by ESCRT complexes.

    Directory of Open Access Journals (Sweden)

    Bartosz Różycki

    Full Text Available The formation of vesicles is essential for many biological processes, in particular for the trafficking of membrane proteins within cells. The Endosomal Sorting Complex Required for Transport (ESCRT) directs membrane budding away from the cytosol. Unlike other vesicle formation pathways, ESCRT-mediated budding occurs without a protein coat. Here, we propose a minimal model of ESCRT-induced vesicle budding. Our model is based on recent experimental observations from direct fluorescence microscopy imaging that show ESCRT proteins colocalized only in the neck region of membrane buds. The model, cast in the framework of membrane elasticity theory, reproduces the experimentally observed vesicle morphologies with physically meaningful parameters. In this parameter range, the minimum-energy configurations of the membrane are coatless buds with ESCRTs localized in the bud neck, consistent with experiment. The minimum-energy configurations agree with those seen in the fluorescence images, with respect to both bud shapes and ESCRT protein localization. On the basis of our model, we identify distinct mechanistic pathways for the ESCRT-mediated budding process. The bud size is determined by membrane material parameters, explaining the narrow yet different bud size distributions in vitro and in vivo. Our membrane elasticity model thus sheds light on the energetics and possible mechanisms of ESCRT-induced membrane budding.

  7. Effect on tracer concentrations of ABL depth models in complex terrain

    Energy Technology Data Exchange (ETDEWEB)

    Galmarini, S.; Salin, P. [Joint Research Center Ispra (Italy); Anfossi, D.; Trini-Castelli, S. [CNR-ICGF, Turin (Italy); Schayes, G. [Univ. Louvain-la-Neuve, Louvain (Belgium)

    1997-10-01

    In this preliminary study we use different ABL (atmospheric boundary layer) depth formulations to study atmospheric dispersion in complex-terrain conditions. The flow in an Alpine valley during the tracer experiment TRANSALP is simulated by means of a mesoscale model, and the tracer dispersion is reproduced using a Lagrangian particle model. The ABL depth enters as a key parameter in the particle model's turbulent-dispersion formulation. The preliminary results reveal that the ABL depth parameter can influence the dispersion process, but that in the case of dispersion in a daytime valley flow the results depend much more strongly on the model's horizontal and vertical resolution. A relatively coarse horizontal resolution implies a considerable smoothing of the topography, which largely affects the dispersion characteristics. The vertical resolution does not allow one to resolve in sufficient detail the rapid and large variations of the flow characteristics as the terrain features vary. Two of the methods used to determine the ABL depth depend strongly on the resolution. The method that instead depends only on surface parameters, such as heat flux and surface-based stability, allowed us to obtain results that can be considered satisfactory as far as the dispersion process is concerned: quite consistent with the flow model results, less dependent on the numerics and more physically sound. (LN)

  8. Modeling the propagation of mobile malware on complex networks

    Science.gov (United States)

    Liu, Wanping; Liu, Chao; Yang, Zheng; Liu, Xiaoyang; Zhang, Yihao; Wei, Zuxue

    2016-08-01

    In this paper, the spreading behavior of malware across mobile devices is addressed. By introducing complex networks to model mobile networks, which follow a power-law degree distribution, a novel epidemic model for mobile malware propagation is proposed. The spreading threshold that determines the dynamics of the model is calculated. Theoretically, the asymptotic stability of the malware-free equilibrium is confirmed when the threshold is below unity, and global stability is further proved under some sufficient conditions. The influences of different model parameters as well as the network topology on malware propagation are also analyzed. Our theoretical studies and numerical simulations show that networks with higher heterogeneity are conducive to the diffusion of malware, and that complex networks with lower power-law exponents benefit malware spreading.
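
    The heterogeneity effect can be illustrated with the standard degree-based mean-field result for SIS-type spreading, where the epidemic threshold is λ_c = ⟨k⟩/⟨k²⟩: as the degree distribution becomes more heterogeneous, ⟨k²⟩ grows and the threshold falls. The sketch below shows this drift numerically; the network sizes and the use of Barabási-Albert graphs are illustrative choices, not the paper's model.

```python
# Hedged sketch: degree-based mean-field spreading threshold for an
# SIS-type malware model, lambda_c = <k> / <k^2>, computed on
# Barabasi-Albert scale-free graphs of increasing size.
import networkx as nx
import numpy as np

for m in (2, 3):
    for n in (2000, 8000):
        g = nx.barabasi_albert_graph(n, m, seed=42)
        k = np.array([d for _, d in g.degree()])
        lam_c = k.mean() / (k ** 2).mean()
        print(f"n={n:5d}, m={m}: <k>={k.mean():.1f}, "
              f"threshold lambda_c={lam_c:.4f}")
# As n grows, <k^2> grows for these heavy-tailed graphs, so the epidemic
# threshold drifts toward zero: heterogeneity aids malware diffusion.
```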

  9. Nostradamus 2014 prediction, modeling and analysis of complex systems

    CERN Document Server

    Suganthan, Ponnuthurai; Chen, Guanrong; Snasel, Vaclav; Abraham, Ajith; Rössler, Otto

    2014-01-01

    The prediction of the behavior of complex systems, and the analysis and modeling of their structure, is a vitally important problem in engineering, economics and science generally today. Examples of such systems can be seen in the world around us (including our bodies) and of course in almost every scientific discipline, including such “exotic” domains as the earth’s atmosphere, turbulent fluids, economics (exchange rates and stock markets), population growth, physics (control of plasma), information flow in social networks and its dynamics, chemistry and complex networks. To understand such complex dynamics, which often exhibit strange behavior, and to use it in research or industrial applications, it is paramount to create models of it. For this purpose there exists a rich spectrum of methods, from classical ones such as ARMA models or the Box-Jenkins method to modern ones like evolutionary computation, neural networks, fuzzy logic, geometry, deterministic chaos, amongst others. This proceedings book is a collection of accepted ...

  10. Glass Durability Modeling, Activated Complex Theory (ACT)

    International Nuclear Information System (INIS)

    Jantzen, Carol

    2005-01-01

    The most important requirement for high-level waste glass acceptance for disposal in a geological repository is the chemical durability, expressed as a glass dissolution rate. During the early stages of glass dissolution in near-static conditions that represent a repository disposal environment, a gel layer resembling a membrane forms on the glass surface, through which ions exchange between the glass and the leachant. The hydrated gel layer exhibits acid/base properties which are manifested as the pH dependence of the thickness and nature of the gel layer. The gel layer has been found to age into either clay mineral assemblages or zeolite mineral assemblages. The formation of one phase preferentially over the other has been experimentally related to changes in the pH of the leachant and to the relative amounts of Al3+ and Fe3+ in a glass. The formation of clay mineral assemblages on the leached glass surface layers (lower-pH and Fe3+-rich glasses) causes the dissolution rate to slow to a long-term steady-state rate. The formation of zeolite mineral assemblages (higher-pH and Al3+-rich glasses) on leached glass surface layers causes the dissolution rate to increase and return to the initial high forward rate. The return to the forward dissolution rate is undesirable for long-term performance of glass in a disposal environment. An investigation into the role of glass stoichiometry, in terms of the quasi-crystalline mineral species in a glass, has shown that the chemistry and structure in the parent glass appear to control the activated surface complexes that form in the leached layers, and these mineral complexes (some Fe3+-rich and some Al3+-rich) play a role in whether clays or zeolites are the dominant species formed on the leached glass surface. The chemistry and structure, in terms of Q distributions of the parent glass, are well represented by the atomic ratios of the glass-forming components. Thus, glass dissolution modeling using simple

  11. Stability of rotor systems: A complex modelling approach

    DEFF Research Database (Denmark)

    Kliem, Wolfhard; Pommer, Christian; Stoustrup, Jakob

    1998-01-01

    The dynamics of a large class of rotor systems can be modelled by a linearized complex matrix differential equation of second order, Mz̈ + (D + iG)ż + (K + iN)z = 0, where the system matrices M, D, G, K and N are real symmetric. Moreover, M and K are assumed to be positive definite and D ... approach applying bounds of appropriate Rayleigh quotients. The rotor systems tested are: a simple Laval rotor, a Laval rotor with additional elasticity and damping in the bearings, and a number of rotor systems with complex symmetric 4 × 4 randomly generated matrices.
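
    A stability check of the kind the abstract discusses can be sketched numerically: linearize the quadratic eigenvalue problem λ²M + λ(D + iG) + (K + iN) into a 2n × 2n companion matrix and test whether all eigenvalues have negative real part. The 2 × 2 matrices below are illustrative, not taken from the paper's test systems.

```python
# Hedged sketch: numerical stability check for
#   M z'' + (D + iG) z' + (K + iN) z = 0
# via companion-form linearization of the quadratic eigenvalue problem.
import numpy as np

n = 2
M = np.eye(n)                                # mass
D = np.array([[0.4, 0.0], [0.0, 0.3]])       # damping (real symmetric)
G = np.array([[2.0, 0.1], [0.1, 2.0]])       # gyroscopic, enters as iG
K = np.array([[4.0, 0.5], [0.5, 6.0]])       # stiffness (pos. definite)
N = np.array([[0.1, 0.0], [0.0, 0.1]])       # circulatory, enters as iN

C = D + 1j * G
Kc = K + 1j * N

# Companion matrix: eigenvalues of A are the quadratic eigenvalues lambda.
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.linalg.solve(M, Kc), -np.linalg.solve(M, C)]])
eig = np.linalg.eigvals(A)
print("eigenvalues:", np.round(eig, 3))
print("asymptotically stable:", bool(np.all(eig.real < 0)))
```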

  12. Theoretical Simulations and Ultrafast Pump-probe Spectroscopy Experiments in Pigment-protein Photosynthetic Complexes

    Energy Technology Data Exchange (ETDEWEB)

    Buck, D. R. [Iowa State Univ., Ames, IA (United States)

    2000-09-12

    Theoretical simulations and ultrafast pump-probe laser spectroscopy experiments were used to study photosynthetic pigment-protein complexes and antennae found in green sulfur bacteria such as Prosthecochloris aestuarii, Chloroflexus aurantiacus, and Chlorobium tepidum. The work focused on understanding structure-function relationships in the energy transfer processes of these complexes through experiments, and on modelling those data as we tested our theoretical assumptions with calculations. Theoretical exciton calculations on tubular pigment aggregates yield electronic absorption spectra that are superimpositions of linear J-aggregate spectra. The electronic spectroscopy of BChl c/d/e antennae in light-harvesting chlorosomes from Chloroflexus aurantiacus differs considerably from J-aggregate spectra. Strong symmetry breaking is needed if we hope to simulate the absorption spectra of the BChl c antenna. The theory for simulating absorption difference spectra in strongly coupled photosynthetic antennae is described, first for a relatively simple heterodimer, then for the general N-pigment system. The theory is applied to the Fenna-Matthews-Olson (FMO) BChl a protein trimers from Prosthecochloris aestuarii and then compared with experimental low-temperature absorption difference spectra of FMO trimers from Chlorobium tepidum. Circular dichroism spectra of the FMO trimer are unusually sensitive to diagonal energy disorder. Substantial differences occur between CD spectra in exciton simulations performed with and without realistic inhomogeneous distribution functions for the input pigment diagonal energies. Anisotropic absorption difference spectroscopy measurements are less consistent with 21-pigment trimer simulations than with 7-pigment monomer simulations, which assume that the laser-prepared states are localized within a subunit of the trimer. Experimental anisotropies from real samples likely arise from statistical averaging over states with diagonal energies shifted by

  13. Risk Modeling of Interdependent Complex Systems of Systems: Theory and Practice.

    Science.gov (United States)

    Haimes, Yacov Y

    2018-01-01

    The emergence of the complexity characterizing our systems of systems (SoS) requires a reevaluation of the way we model, assess, manage, communicate, and analyze the risk thereto. Current models for risk analysis of emergent complex SoS are insufficient because too often they rely on the same risk functions and models used for single systems. These models commonly fail to incorporate the complexity derived from the networks of interdependencies and interconnectedness (I-I) characterizing SoS. There is a need to reevaluate currently practiced risk analysis to respond to this reality by examining, and thus comprehending, what makes emergent SoS complex. The key to evaluating the risk to SoS lies in understanding the genesis of characterizing I-I of systems manifested through shared states and other essential entities within and among the systems that constitute SoS. The term "essential entities" includes shared decisions, resources, functions, policies, decisionmakers, stakeholders, organizational setups, and others. This undertaking can be accomplished by building on state-space theory, which is fundamental to systems engineering and process control. This article presents a theoretical and analytical framework for modeling the risk to SoS with two case studies performed with the MITRE Corporation and demonstrates the pivotal contributions made by shared states and other essential entities to modeling and analysis of the risk to complex SoS. A third case study highlights the multifarious representations of SoS, which require harmonizing the risk analysis process currently applied to single systems when applied to complex SoS. © 2017 Society for Risk Analysis.

  14. Surface Complexation Modeling of Fluoride Adsorption by Soil and the Role of Dissolved Aluminum on Adsorption

    Science.gov (United States)

    Padhi, S.; Tokunaga, T.

    2017-12-01

    Adsorption of fluoride (F) on soil can control the mobility of F and subsequent contamination of groundwater. Hence, an accurate evaluation of the adsorption equilibrium is a prerequisite for understanding the transport and fate of F in the subsurface. While there have been studies of the adsorption behavior of F on single mineral constituents based on surface complexation models (SCM), F adsorption to natural soil in the presence of complexing agents needs further investigation. We evaluated the adsorption of F on a natural granitic soil from Tsukuba, Japan, as a function of initial F concentration, ionic strength, and initial pH, and developed an SCM to model the F adsorption behavior. Four possible surface complexation reactions were postulated, with and without including dissolved aluminum (Al) and Al-F complex sorption. A decrease in F adsorption with increasing initial pH was observed in the initial pH range of 4 to 9, and a decrease in the rate of reduction of adsorbed F with respect to the increase in initial pH was observed in the initial pH range of 5 to 7. Ionic strength variation in the range of 0 to 100 mM had an insignificant effect on F removal. Changes in solution pH were observed by comparing the solution before and after the F adsorption experiments: at acidic pH the solution pH increased, whereas at alkaline pH the solution pH decreased after equilibrium. The SCM including dissolved Al and the adsorption of the Al-F complex simulated the experimental results quite successfully, and including these terms also explained the change in solution pH after F adsorption.
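
    A stripped-down illustration of the pH dependence an SCM produces is given below. It uses a single hypothetical surface site with protonation, deprotonation and ligand-exchange reactions and assumed constants; it is not the calibrated four-reaction model of the paper, and it omits the electrostatic terms and Al-F complex sorption.

```python
# Hedged sketch: a minimal one-site surface complexation model for
# pH-dependent fluoride adsorption. Hypothetical reactions and constants:
#   SOH + H+       <-> SOH2+       (Kp)
#   SOH            <-> SO- + H+    (Kd)
#   SOH + H+ + F-  <-> SF + H2O    (Kf, ligand exchange)
# Fluoride is assumed in excess, so the free F- activity is ~ constant.
import numpy as np

Kp, Kd, Kf = 10 ** 5.0, 10 ** -9.0, 10 ** 9.5   # illustrative constants
site_total = 1e-4        # mol sites / L
F_free = 1e-3            # mol/L free fluoride (excess assumption)

for pH in range(4, 10):
    H = 10.0 ** -pH
    # Mole fractions of the four surface species from mass action.
    denom = 1.0 + Kp * H + Kd / H + Kf * H * F_free
    adsorbed = site_total * Kf * H * F_free / denom
    print(f"pH {pH}: adsorbed F = {adsorbed:.2e} mol/L "
          f"({100 * adsorbed / site_total:.1f}% of sites)")
# The ligand-exchange term Kf*H*F makes adsorption fall as pH rises,
# matching the qualitative trend reported in the abstract.
```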

  15. Electrostatic Model Applied to ISS Charged Water Droplet Experiment

    Science.gov (United States)

    Stevenson, Daan; Schaub, Hanspeter; Pettit, Donald R.

    2015-01-01

    The electrostatic force can be used to create novel relative motion between charged bodies if it can be isolated from the stronger gravitational and dissipative forces. Recently, Coulomb orbital motion was demonstrated on the International Space Station by releasing charged water droplets in the vicinity of a charged knitting needle. In this investigation, the Multi-Sphere Method, an electrostatic model developed to study active spacecraft position control by Coulomb charging, is used to simulate the complex orbital motion of the droplets. When atmospheric drag is introduced, the simulated motion closely mimics that seen in the video footage of the experiment. The electrostatic force's inverse dependency on separation distance near the center of the needle lends itself to analytic predictions of the radial motion.
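
    The essence of a Multi-Sphere-Method force evaluation is a straightforward Coulomb sum over the spheres used to represent the charged body, here combined with a linear drag term for the droplet. The geometry, charges, drag coefficient and integrator below are illustrative assumptions, not the parameters of the ISS experiment.

```python
# Hedged sketch of a Multi-Sphere-Method-style simulation: the charged
# needle is a line of spheres with fixed charges, and the droplet feels
# the summed Coulomb force plus linear atmospheric drag.
import numpy as np

K_E = 8.9875e9                          # Coulomb constant [N m^2 / C^2]
needle = np.linspace([-0.1, 0.0], [0.1, 0.0], 21)   # sphere centers [m]
q_sphere = -5e-9 / len(needle)          # total needle charge, split evenly
q_drop, m_drop = 2e-12, 5e-9            # droplet charge [C] and mass [kg]
c_drag = 2e-8                           # linear drag coefficient [kg/s]

def coulomb_force(pos):
    r = pos - needle                    # vectors sphere -> droplet
    d = np.maximum(np.linalg.norm(r, axis=1, keepdims=True), 5e-3)
    return (K_E * q_sphere * q_drop * r / d ** 3).sum(axis=0)  # softened

# Semi-implicit Euler integration of the droplet motion.
pos, vel = np.array([0.0, 0.03]), np.array([0.05, 0.0])
dt = 1e-3
for _ in range(3000):
    acc = (coulomb_force(pos) - c_drag * vel) / m_drop
    vel = vel + acc * dt
    pos = pos + vel * dt
print("final droplet position [m]:", np.round(pos, 4))
```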

  16. BRISENT: An Entropy-Based Model for Bridge-Pier Scour Estimation under Complex Hydraulic Scenarios

    Directory of Open Access Journals (Sweden)

    Alonso Pizarro

    2017-11-01

    Full Text Available The goal of this paper is to introduce the first clear-water scour model based on both the informational entropy concept and the principle of maximum entropy, showing that a variational approach is ideal for describing erosional processes under complex situations. The proposed bridge-pier scour entropic (BRISENT) model is capable of reproducing the main dynamics of scour depth evolution under steady hydraulic conditions, step-wise hydrographs, and flood waves. For the calibration process, 266 clear-water scour experiments from 20 precedent studies were considered, in which the dimensionless parameters varied widely. Simple formulations are proposed to estimate BRISENT's fitting coefficients, in which the ratio between pier diameter and sediment size was the most critical physical characteristic controlling scour model parametrization. A validation process considering highly unsteady and multi-peaked hydrographs was carried out, showing that the proposed BRISENT model reproduces scour evolution with high accuracy.

  17. A multi-element cosmological model with a complex space-time topology

    Science.gov (United States)

    Kardashev, N. S.; Lipatova, L. N.; Novikov, I. D.; Shatskiy, A. A.

    2015-02-01

    Wormhole models with a complex topology having one entrance and two exits into the same space-time of another universe are considered, as well as models with two entrances from the same space-time and one exit to another universe. These models are used to build a model of a multi-sheeted universe (a multi-element model of the "Multiverse") with a complex topology. Spherical symmetry is assumed in all the models. A Reissner-Nordström black-hole model having no singularity beyond the horizon is constructed. The strength of the central singularity of the black hole is analyzed.

  18. Systems Engineering Metrics: Organizational Complexity and Product Quality Modeling

    Science.gov (United States)

    Mog, Robert A.

    1997-01-01

    Innovative organizational complexity and product quality models applicable to performance metrics for NASA-MSFC's Systems Analysis and Integration Laboratory (SAIL) missions and objectives are presented. An intensive research effort focuses on the synergistic combination of stochastic process modeling, nodal and spatial decomposition techniques, organizational and computational complexity, systems science and metrics, chaos, and proprietary statistical tools for accelerated risk assessment. This is followed by the development of a preliminary model, which is uniquely applicable and robust for quantitative purposes. Exercise of the preliminary model using a generic system hierarchy and the AXAF-I architectural hierarchy is provided. The Kendall test for positive dependence provides an initial verification and validation of the model. Finally, the research and development of the innovation is revisited prior to peer review. This research and development effort results in near-term, measurable SAIL organizational and product quality methodologies, enhanced organizational risk assessment and evolutionary modeling results, and improved statistical quantification of SAIL productivity interests.

  19. Post-closure biosphere assessment modelling: comparison of complex and more stylised approaches.

    Science.gov (United States)

    Walke, Russell C; Kirchner, Gerald; Xu, Shulan; Dverstorp, Björn

    2015-10-01

    Geological disposal facilities are the preferred option for high-level radioactive waste, due to their potential to provide isolation from the surface environment (biosphere) on very long timescales. Assessments need to strike a balance between stylised models and more complex approaches that draw more extensively on site-specific information. This paper explores the relative merits of complex versus more stylised biosphere models in the context of a site-specific assessment. The more complex biosphere modelling approach was developed by the Swedish Nuclear Fuel and Waste Management Co (SKB) for the Forsmark candidate site for a spent nuclear fuel repository in Sweden. SKB's approach is built on a landscape development model, whereby radionuclide releases to distinct hydrological basins/sub-catchments (termed 'objects') are represented as they evolve through land rise and climate change. Each of seventeen of these objects is represented with more than 80 site-specific parameters, about 22 of which are time-dependent, resulting in over 5000 input values per object. The more stylised biosphere models developed for this study represent releases to individual ecosystems without environmental change and include the most plausible transport processes. In the context of the regulatory review of the landscape modelling approach adopted in the SR-Site assessment in Sweden, the more stylised representation has helped to build understanding of the more complex modelling approaches by providing bounding results, checking the reasonableness of the more complex modelling, highlighting uncertainties introduced through conceptual assumptions and helping to quantify the conservatisms involved. The more stylised biosphere models are also shown to be capable of reproducing the results of more complex approaches. A major recommendation is that biosphere assessments need to justify the degree of complexity in modelling approaches as well as simplifying and conservative assumptions. In light of

  20. DEVELOPING INDUSTRIAL ROBOT SIMULATION MODEL TUR10-K USING “UNIVERSAL MECHANISM” SOFTWARE COMPLEX

    Directory of Open Access Journals (Sweden)

    Vadim Vladimirovich Chirkov

    2018-02-01

    Full Text Available Manipulation robots are complex spatial mechanical systems having five or six degrees of freedom, and sometimes more. For this reason, modeling the movement of manipulation robots, even in the kinematic formulation, is a complex mathematical task. If one moves from kinematic to dynamic modeling of motion, then the inertial properties of the object being modeled must also be taken into account. In this case, analytically constructing a mathematical model of such a complex object as a manipulation robot becomes practically impossible. Therefore, special computer-aided design systems, called CAE systems, are used for modeling complex mechanical systems. The purpose of the paper is the construction of a simulation model of a complex mechanical system, the industrial robot TUR10-K, to obtain its dynamic characteristics. Developing such models makes it possible to reduce the complexity of the process of designing complex systems and to obtain the necessary characteristics. Purpose: developing a simulation model of the industrial robot TUR10-K and obtaining the dynamic characteristics of the mechanism. Methodology: the article uses a computer simulation method. Results: a simulation model of the robot and its dynamic characteristics are obtained. Practical implications: the results can be used in the design of mechanical systems and in various simulation models.

  1. User Defined Data in the New Analysis Model of the BaBar Experiment

    Energy Technology Data Exchange (ETDEWEB)

    De Nardo, G.

    2005-04-06

    The BaBar experiment has recently revised its Analysis Model. One of the key ingredients of the new BaBar Analysis Model is support for the capability to add user-defined data to the Event Store; such data can be the output of complex computations performed at an advanced stage of a physics analysis, and are associated with analysis objects. In order to provide flexibility and extensibility with respect to object types, template generic programming has been adopted. In this way the model is non-intrusive with respect to the reconstruction and analysis objects it manages, requiring no changes to their interfaces and implementations. Technological details are hidden as much as possible from the user, providing a simple interface. In this paper we present some of the limitations of the old model and how they are addressed by the new Analysis Model.

  2. Robotic general surgery experience: a gradual progress from simple to more complex procedures.

    Science.gov (United States)

    Al-Naami, M; Anjum, M N; Aldohayan, A; Al-Khayal, K; Alkharji, H

    2013-12-01

    Robotic surgery was introduced at our institution in 2003, and we used a progressive approach, advancing from simple to more complex procedures. A retrospective chart review was performed; a total of 129 cases were included. Set-up and operative times have improved over time and with experience. Conversion rates to standard laparoscopic or open techniques were 4.7% and 1.6%, respectively. Intraoperative complications (6.2%), blood loss and hospital stay were directly proportional to complexity. There were no mortalities, and the postoperative complication rate (13.2%) was within accepted norms. Our findings suggest that robot technology is presently most useful in cases tailored toward its advantages, i.e. those confined to a single space, those that require the performance of complex tasks, and re-do procedures. Copyright © 2013 John Wiley & Sons, Ltd.

  3. New approaches in agent-based modeling of complex financial systems

    Science.gov (United States)

    Chen, Ting-Ting; Zheng, Bo; Li, Yan; Jiang, Xiong-Fei

    2017-12-01

    Agent-based modeling is a powerful simulation technique for understanding the collective behavior and microscopic interactions in complex financial systems. Recently, the concept of determining the key parameters of agent-based models from empirical data, instead of setting them artificially, was suggested. We first review several agent-based models and the new approaches used to determine the key model parameters from historical market data. Based on the agents' behaviors with heterogeneous personal preferences and interactions, these models are successful in explaining the microscopic origination of the temporal and spatial correlations of financial markets. We then present a novel paradigm combining big-data analysis with agent-based modeling. Specifically, from internet query and stock market data, we extract the information driving forces and develop an agent-based model to simulate the dynamic behaviors of complex financial systems.
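
    As a minimal example of the agent-based mechanisms reviewed, the sketch below implements Kirman-style recruitment herding, a classic building block in financial ABMs: agents switch opinion either spontaneously or by meeting an agent of the opposite camp, and the order imbalance drives a "return" series with fat tails. All parameter values are illustrative, not calibrated from empirical data as in the paper.

```python
# Hedged sketch: Kirman-style herding as a minimal financial ABM.
# An agent switches opinion spontaneously (eps) or by recruitment (conv);
# the order imbalance drives the return series.
import numpy as np

rng = np.random.default_rng(7)
N, steps = 200, 20000
eps, conv = 0.002, 0.3        # spontaneous switch / conversion probability
k = N // 2                    # agents currently optimistic (buyers)
imbalance = np.empty(steps)

for t in range(steps):
    if rng.random() < k / N:              # picked an optimist
        if rng.random() < eps + conv * (N - k) / (N - 1):
            k -= 1
    else:                                 # picked a pessimist
        if rng.random() < eps + conv * k / (N - 1):
            k += 1
    imbalance[t] = 2 * k / N - 1

returns = np.diff(imbalance)
kurt = np.mean(returns ** 4) / np.var(returns) ** 2
print(f"excess kurtosis of returns: {kurt - 3:.1f}  (fat tails > 0)")
```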

  4. Modeling of laser-driven hydrodynamics experiments

    Science.gov (United States)

    di Stefano, Carlos; Doss, Forrest; Rasmus, Alex; Flippo, Kirk; Desjardins, Tiffany; Merritt, Elizabeth; Kline, John; Hager, Jon; Bradley, Paul

    2017-10-01

    Correct interpretation of hydrodynamics experiments driven by a laser-produced shock depends strongly on an understanding of the time-dependent effect of the irradiation conditions on the flow. In this talk, we discuss the modeling of such experiments using the RAGE radiation-hydrodynamics code. The focus is an instability experiment consisting of a period of relatively steady shock conditions, in which the Richtmyer-Meshkov process dominates, followed by a period of decaying flow conditions, in which the dominant growth process changes to Rayleigh-Taylor instability. The use of a laser model is essential for capturing the transition.

  5. Uncertainty and validation. Effect of model complexity on uncertainty estimates

    Energy Technology Data Exchange (ETDEWEB)

    Elert, M. [Kemakta Konsult AB, Stockholm (Sweden)] [ed.

    1996-09-01

    In the Model Complexity subgroup of BIOMOVS II, models of varying complexity have been applied to the problem of downward transport of radionuclides in soils. A scenario describing a case of surface contamination of a pasture soil was defined. Three different radionuclides with different environmental behavior and radioactive half-lives were considered: Cs-137, Sr-90 and I-129. The intention was to give a detailed specification of the parameters required by different kinds of model, together with reasonable values for the parameter uncertainty. A total of seven modelling teams participated in the study, using 13 different models. Four of the modelling groups performed uncertainty calculations using nine different modelling approaches. The models used range in complexity from analytical solutions of a 2-box model using annual average data to numerical models coupling hydrology and transport using data varying on a daily basis. The complex models needed to consider all aspects of radionuclide transport in a soil with a variable hydrology are often impractical to use in safety assessments. Instead simpler models, often box models, are preferred. The comparison of predictions made with the complex models and the simple models for this scenario shows that the predictions are in many cases very similar, e.g. in the predictions of the evolution of the root zone concentration. However, in other cases differences of many orders of magnitude can appear. One example is the prediction of the flux to the groundwater of radionuclides being transported through the soil column. Some issues that have come into focus in this study: there are large differences in the predicted soil hydrology, and as a consequence also in the radionuclide transport, which suggests that there are large uncertainties in the calculation of effective precipitation and evapotranspiration. The approach used for modelling the water transport in the root zone has an impact on the predictions of the decline in root

  6. Advances in dynamic network modeling in complex transportation systems

    CERN Document Server

    Ukkusuri, Satish V

    2013-01-01

    This book focuses on the latest in dynamic network modeling, including route guidance and traffic control in transportation systems and other complex infrastructure networks. Covers dynamic traffic assignment, flow modeling, mobile sensor deployment and more.

  7. Developing an agent-based model on how different individuals solve complex problems

    Directory of Open Access Journals (Sweden)

    Ipek Bozkurt

    2015-01-01

    Full Text Available Purpose: Research that focuses on the emotional, mental, behavioral and cognitive capabilities of individuals has been abundant within disciplines such as psychology, sociology, and anthropology, among others. However, when facing complex problems, a new perspective for understanding individuals is necessary. The main purpose of this paper is to develop an agent-based model and simulation to gain understanding of the decision-making and problem-solving abilities of individuals. Design/Methodology/approach: The micro-level modeling and simulation paradigm of Agent-Based Modeling is used. Through the use of Agent-Based Modeling, insight is gained into how different individuals with different profiles deal with complex problems. Using previous literature from different bodies of knowledge, established theories and certain assumptions as input parameters, a model is built and executed through a computer simulation. Findings: The results indicate that individuals with certain profiles have better capabilities to deal with complex problems. Moderate profiles could solve the entire complex problem, whereas profiles at extreme conditions could not. This indicates that having a strong predisposition is not ideal when approaching complex problems, and there should always be a component from the other perspective. The probability that an individual may use the capabilities provided by the opposite predisposition proves to be a useful option. Originality/value: The originality of the present research stems from how individuals are profiled, and from the model and simulation built to understand how they solve complex problems. The development of the agent-based model adds value to the existing body of knowledge within both the social sciences and modeling and simulation.

  8. Ride comfort optimization of a multi-axle heavy motorized wheel dump truck based on virtual and real prototype experiment integrated Kriging model

    Directory of Open Access Journals (Sweden)

    Bian Gong

    2015-06-01

    The optimization of the hydro-pneumatic suspension parameters of a multi-axle heavy motorized wheel dump truck is carried out in this article based on a virtual and real prototype experiment integrated Kriging model. The root mean square of the vertical vibration acceleration at the center of the sprung mass is assigned as the optimization objective. The constraints are the natural frequency, the working stroke, and the dynamic load of the wheels. The suspension structure for the truck is an adjustable hydro-pneumatic suspension with ideal vehicle nonlinear characteristics, integrating elastic and damping elements; in addition, the hydraulic systems of two adjacent hydro-pneumatic suspensions are interconnected. Considering the high complexity of the engineering model, a novel kind of meta-model, called virtual and real prototype experiment integrated Kriging, is proposed in this article. The interpolation principle and the construction of the virtual and real prototype experiment integrated Kriging model are elucidated. Unlike traditional Kriging, virtual and real prototype experiment integrated Kriging combines the respective advantages of actual tests and Computer Aided Engineering simulation. Based on this model, the optimization results, confirmed by experimental verification, showed a significant improvement in ride comfort: 12.48% for the front suspension and 11.79% for the rear suspension. Compared with traditional Kriging, the optimization effect was improved by 3.05% and 3.38%, respectively. Virtual and real prototype experiment integrated Kriging provides an effective way to approach the optimal solution in the optimization of high-complexity engineering problems.

  9. Between Complexity and Parsimony: Can Agent-Based Modelling Resolve the Trade-off

    DEFF Research Database (Denmark)

    Nielsen, Helle Ørsted; Malawska, Anna Katarzyna

    2013-01-01

    While Herbert Simon espoused the development of general models of behavior, he also strongly advocated that these models be based on realistic assumptions about humans and therefore reflect the complexity of human cognition and social systems (Simon 1997). Hence, the model of bounded rationality... One approach to BR-based policy studies would be to couple research on bounded rationality with agent-based modeling. Agent-based models (ABMs) are computational models for simulating the behavior and interactions of any number of decision makers in a dynamic system. Agent-based models are better suited than general equilibrium models for capturing behavior patterns of complex systems. ABMs may have the potential to represent complex systems without oversimplifying them. At the same time, research in bounded rationality and behavioral economics has already yielded many insights that could inform the modeling...

  10. Emulation of a complex global aerosol model to quantify sensitivity to uncertain parameters

    Directory of Open Access Journals (Sweden)

    L. A. Lee

    2011-12-01

    Sensitivity analysis of atmospheric models is necessary to identify the processes that lead to uncertainty in model predictions, to help understand model diversity through comparison of driving processes, and to prioritise research. Assessing the effect of parameter uncertainty in complex models is challenging and often limited by CPU constraints. Here we present a cost-effective application of variance-based sensitivity analysis to quantify the sensitivity of a 3-D global aerosol model to uncertain parameters. A Gaussian process emulator is used to estimate the model output across multi-dimensional parameter space, using information from a small number of model runs at points chosen using a Latin hypercube space-filling design. Gaussian process emulation is a Bayesian approach that uses information from the model runs along with some prior assumptions about the model behaviour to predict model output everywhere in the uncertainty space. We use the Gaussian process emulator to calculate the percentage of expected output variance explained by uncertainty in global aerosol model parameters and their interactions. To demonstrate the technique, we show examples of cloud condensation nuclei (CCN) sensitivity to 8 model parameters in polluted and remote marine environments as a function of altitude. In the polluted environment 95% of the variance of CCN concentration is described by uncertainty in the 8 parameters (excluding their interaction effects) and is dominated by the uncertainty in the sulphur emissions, which explains 80% of the variance. However, in the remote region parameter interaction effects become important, accounting for up to 40% of the total variance. Some parameters are shown to have a negligible individual effect but a substantial interaction effect. Such sensitivities would not be detected in the commonly used single-parameter perturbation experiments, which would therefore underpredict total uncertainty. Gaussian process
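
    The emulation-plus-sensitivity workflow described here is easy to prototype. The sketch below stands in for the expensive aerosol model with a cheap analytic function (an assumption made purely for illustration), trains a Gaussian process on a small Latin hypercube design, and then estimates one first-order variance contribution from emulator predictions alone.

```python
# Hedged sketch of Gaussian-process emulation for variance-based
# sensitivity analysis. The "model" is a cheap stand-in function;
# in practice each evaluation would be one expensive simulator run.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def model(x):  # stand-in for the expensive simulator (assumption)
    return 4.0 * x[:, 0] + x[:, 1] * x[:, 2] + 0.5 * np.sin(6 * x[:, 1])

# Small space-filling design, as in the paper's Latin hypercube step.
X_train = qmc.LatinHypercube(d=3, seed=0).random(n=60)
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.3] * 3),
                              normalize_y=True).fit(X_train, model(X_train))

# First-order sensitivity of parameter 0: Var(E[Y|x0]) / Var(Y),
# both estimated from cheap emulator predictions, not simulator runs.
X_mc = rng.random((20000, 3))
var_total = gp.predict(X_mc).var()
x0_grid = np.linspace(0, 1, 50)
cond_means = [gp.predict(np.column_stack([np.full(2000, x0),
                                          rng.random((2000, 2))])).mean()
              for x0 in x0_grid]
print("first-order index S1(x0) ~", np.var(cond_means) / var_total)
```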

  11. Complex groundwater flow systems as traveling agent models

    Directory of Open Access Journals (Sweden)

    Oliver López Corona

    2014-10-01

    Analyzing field data from pumping tests, we show that, as with many other natural phenomena, groundwater flow exhibits complex dynamics described by a 1/f power spectrum. This result is studied theoretically from an agent perspective. Using a traveling agent model, we prove that this statistical behavior emerges when the medium is complex. Some heuristic reasoning is provided to justify both spatial and dynamic complexity as the result of the superposition of an infinite number of stochastic processes. Moreover, we show that this implies that non-Kolmogorovian probability is needed for its study, and we provide a set of new partial differential equations for groundwater flow.
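
    Operationally, the 1/f claim reduces to estimating the slope of the power spectrum on log-log axes. The sketch below applies that check to a synthetic series shaped to have an approximately 1/f spectrum; real use would substitute the pumping-test time series.

```python
# Estimate the spectral exponent of a time series: fit log P(f) = -beta log f + c.
# The series here is synthetic (white noise shaped in Fourier space), used only
# to demonstrate the check; a 1/f process should give beta close to 1.
import numpy as np

rng = np.random.default_rng(1)
n = 2**14
freqs = np.fft.rfftfreq(n, d=1.0)
spectrum = np.zeros(freqs.size, dtype=complex)
spectrum[1:] = (rng.standard_normal(freqs.size - 1)
                + 1j * rng.standard_normal(freqs.size - 1)) / np.sqrt(freqs[1:])
series = np.fft.irfft(spectrum, n=n)

power = np.abs(np.fft.rfft(series)) ** 2      # periodogram
mask = freqs > 0
slope, _ = np.polyfit(np.log(freqs[mask]), np.log(power[mask]), 1)
print(f"estimated spectral exponent: {-slope:.2f} (1/f noise gives ~1)")
```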

  12. Complex data modeling and computationally intensive methods for estimation and prediction

    CERN Document Server

    Secchi, Piercesare; Advances in Complex Data Modeling and Computational Methods in Statistics

    2015-01-01

    The book is addressed to statisticians working at the forefront of the statistical analysis of complex and high dimensional data and offers a wide variety of statistical models, computer-intensive methods and applications: network inference from the analysis of high dimensional data; new developments for bootstrapping complex data; regression analysis for measuring downside reputational risk; statistical methods for research on the human genome dynamics; inference in non-Euclidean settings and for shape data; Bayesian methods for reliability and the analysis of complex data; methodological issues in using administrative data for clinical and epidemiological research; regression models with differential regularization; geostatistical methods for mobility analysis through mobile phone data exploration. This volume is the result of a careful selection among the contributions presented at the conference "S.Co.2013: Complex data modeling and computationally intensive methods for estimation and prediction" held...

  13. Modeling Air-Quality in Complex Terrain Using Mesoscale and ...

    African Journals Online (AJOL)

    Air-quality in a complex terrain (Colorado-River-Valley/Grand-Canyon Area, Southwest U.S.) is modeled using a higher-order closure mesoscale model and a higher-order closure dispersion model. Non-reactive tracers have been released in the Colorado-River valley, during winter and summer 1992, to study the ...

  14. Cement/clay interactions: feedback on the increasing complexity of modeling assumptions

    International Nuclear Information System (INIS)

    Marty, Nicolas C.M.; Gaucher, Eric C.; Tournassat, Christophe; Gaboreau, Stephane; Vong, Chan Quang; Claret, F.; Munier, Isabelle; Cochepin, Benoit

    2012-01-01

    Document available in extended abstract form only. Cementitious materials will be widely used in the French concept of radioactive waste repositories. During their degradation over time, in contact with geological pore water, they will release hyper-alkaline fluids rich in calcium and alkaline cations. The chemical gradient likely to develop at cement/clay interfaces will induce geochemical transformations. The first simplified calculations, based mainly on simple mass balance, led to a very pessimistic understanding of the real expansion mechanism of the alkaline plume. However, the geochemical and migration processes are much more complex because of the dissolution of the barrier's accessory phases and the precipitation of secondary minerals. To describe and understand this complexity, coupled geochemistry and transport calculations are a useful and indeed mandatory tool. Furthermore, such models, when properly calibrated on experimental results, are able to give insights on larger time scales unreachable with experiments. For approximately 20 years, numerous papers have described the results of reactive transport modeling of cement/clay interactions with various numerical assumptions. For example, some authors selected a purely thermodynamic approach while others preferred a coupled thermodynamic/kinetic approach. Unfortunately, most of these studies used different, non-comparable parameters such as space discretization, initial and boundary conditions, thermodynamic databases, and clayey and cementitious materials. This study revisits the types of simulations proposed in the past to represent the effect of an alkaline perturbation with regard to the degree of complexity that was considered. The main goal of the study is to perform simulations with a consistent set of data and an increasing complexity. In doing so, the analysis of numerical results will give a clear vision of the key parameters driving the expansion of alteration fronts and

  15. A Range Based Method for Complex Facade Modeling

    Science.gov (United States)

    Adami, A.; Fregonese, L.; Taffurelli, L.

    2011-09-01

    3D modelling of Architectural Heritage does not follow a single well-defined path, but goes through different algorithms and digital forms according to the shape complexity of the object, the main goal of the representation, and the starting data. Even if the process starts from the same data, such as a point cloud acquired by laser scanner, there are different possibilities to realize a digital model. In particular we can choose between two different approaches: the mesh and the solid model. In the first case the complexity of the architecture is represented by a dense net of triangular surfaces which approximates the real surface of the object. In the opposite case, the 3D digital model can be realized by the use of simple geometrical shapes, sweeping algorithms and Boolean operations. Obviously these two models are not the same, and each one is characterized by peculiarities concerning the way of modelling (the choice of a particular triangulation algorithm, or quasi-automatic modelling by known shapes) and the final result (a more detailed and complex mesh versus an approximate and simpler solid model). Usually the expected final representation and the possibility of publishing lead to one way or the other. In this paper we suggest a semiautomatic process to build 3D digital models of the facades of complex architecture, to be used for example in city models or in other large-scale representations. This way of modelling also guarantees small files, suitable for publishing on the web or for transmission. The modelling procedure starts from laser scanner data which can be processed in the well-known way. Usually more than one scan is necessary to describe a complex architecture and to avoid shadows on the facades. These have to be registered in a single reference system by the use of targets which are surveyed by topography, and then filtered in order to obtain a well-controlled and homogeneous point cloud of

  16. A RANGE BASED METHOD FOR COMPLEX FACADE MODELING

    Directory of Open Access Journals (Sweden)

    A. Adami

    2012-09-01

    3D modelling of Architectural Heritage does not follow a single well-defined path, but goes through different algorithms and digital forms according to the shape complexity of the object, the main goal of the representation, and the starting data. Even if the process starts from the same data, such as a point cloud acquired by laser scanner, there are different possibilities to realize a digital model. In particular we can choose between two different approaches: the mesh and the solid model. In the first case the complexity of the architecture is represented by a dense net of triangular surfaces which approximates the real surface of the object. In the opposite case, the 3D digital model can be realized by the use of simple geometrical shapes, sweeping algorithms and Boolean operations. Obviously these two models are not the same, and each one is characterized by peculiarities concerning the way of modelling (the choice of a particular triangulation algorithm, or quasi-automatic modelling by known shapes) and the final result (a more detailed and complex mesh versus an approximate and simpler solid model). Usually the expected final representation and the possibility of publishing lead to one way or the other. In this paper we suggest a semiautomatic process to build 3D digital models of the facades of complex architecture, to be used for example in city models or in other large-scale representations. This way of modelling also guarantees small files, suitable for publishing on the web or for transmission. The modelling procedure starts from laser scanner data which can be processed in the well-known way. Usually more than one scan is necessary to describe a complex architecture and to avoid shadows on the facades. These have to be registered in a single reference system by the use of targets which are surveyed by topography, and then filtered in order to obtain a well-controlled and

  17. Atomic level insights into realistic molecular models of dendrimer-drug complexes through MD simulations

    Science.gov (United States)

    Jain, Vaibhav; Maiti, Prabal K.; Bharatam, Prasad V.

    2016-09-01

    Computational studies performed on dendrimer-drug complexes usually consider 1:1 stoichiometry, which is far from reality, since in experiments a larger number of drug molecules get encapsulated inside a dendrimer. In the present study, molecular dynamics (MD) simulations were employed to characterize more realistic molecular models of dendrimer-drug complexes (1:n stoichiometry), in order to understand the effect of high drug loading on the structural properties and to unveil the atomistic-level details. For this purpose, possible inclusion complexes of the model drug Nateglinide (Ntg) (an antidiabetic belonging to Biopharmaceutics Classification System class II) with amine- and acetyl-terminated G4 poly(amidoamine) (G4 PAMAM(NH2) and G4 PAMAM(Ac)) dendrimers at neutral and low pH conditions are explored in this work. MD simulation analysis of the dendrimer-drug complexes revealed that the drug encapsulation efficiency of the G4 PAMAM(NH2) and G4 PAMAM(Ac) dendrimers at neutral pH was 6 and 5, respectively, while at low pH it was 12 and 13, respectively. Center-of-mass distance analysis showed that most of the drug molecules are located in the interior hydrophobic pockets of G4 PAMAM(NH2) at both pH values, while in the case of G4 PAMAM(Ac) most of them are distributed near the surface at neutral pH and in the interior hydrophobic pockets at low pH. Structural properties such as the radius of gyration, shape, radial density distribution, and solvent accessible surface area of the dendrimer-drug complexes were also assessed and compared with those of the unloaded dendrimers. Further, binding energy calculations using the molecular mechanics Poisson-Boltzmann surface area approach revealed that the location of drug molecules in the dendrimer is not the decisive factor for the higher or lower binding affinity of the complex, but the charged state of dendrimer and drug, intermolecular interactions, pH-induced conformational changes, and surface groups of the dendrimer do play an

  18. CFD and FEM modeling of PPOOLEX experiments

    Energy Technology Data Exchange (ETDEWEB)

    Paettikangas, T.; Niemi, J.; Timperi, A. (VTT Technical Research Centre of Finland (Finland))

    2011-01-15

    A large-break LOCA experiment performed with the PPOOLEX experimental facility is analysed with CFD calculations. The first 100 seconds of the experiment are simulated using the Euler-Euler two-phase model of FLUENT 6.3. In wall condensation, the condensing water forms a film layer on the wall surface, which is modelled by mass transfer from the gas phase to the liquid water phase in the near-wall grid cell. The direct-contact condensation in the wetwell is modelled with simple correlations. The wall condensation and direct-contact condensation models are implemented as user-defined functions in FLUENT. Fluid-Structure Interaction (FSI) calculations of the PPOOLEX experiments and of a realistic BWR containment are also presented. Two-way coupled FSI calculations of the experiments have been numerically unstable with explicit coupling; a linear perturbation method (LPM) is therefore used to prevent the numerical instability. The method is first validated against numerical data and against the PPOOLEX experiments. Preliminary FSI calculations are then performed for a realistic BWR containment by modeling a sector of the containment and one blowdown pipe. For the BWR containment, one- and two-way coupled calculations as well as calculations with the LPM are carried out. (Author)

  19. A cognitive model for software architecture complexity

    NARCIS (Netherlands)

    Bouwers, E.; Lilienthal, C.; Visser, J.; Van Deursen, A.

    2010-01-01

    Evaluating the complexity of the architecture of a software system is a difficult task. Many aspects have to be considered to come to a balanced assessment. Several architecture evaluation methods have been proposed, but very few define a quality model to be used during the evaluation process.

  20. Model Simulations of a Field Experiment on Cation Exchange-affected Multicomponent Solute Transport in a Sandy Aquifer

    DEFF Research Database (Denmark)

    Bjerg, Poul Løgstrup; Ammentorp, Hans Christian; Christensen, Thomas Højlund

    1993-01-01

    A large-scale and long-term field experiment on cation exchange in a sandy aquifer has been modelled by a three-dimensional geochemical transport model. The geochemical model includes cation-exchange processes using a Gaines-Thomas expression, the closed carbonate system and the effects of ionic... by batch experiments and by the composition of the cations on the exchange complex. Potassium showed a non-ideal exchange behaviour, with K-Ca selectivity coefficients indicating dependency on equivalent fraction and K+ concentration in the aqueous phase. The model simulations over a distance of 35 m... and a period of 250 days described accurately the observed attenuation of Na and the expelled amounts of Ca and Mg. Also, model predictions of plateau zones, formed by interaction with the background groundwater, in general agreed satisfactorily with the observations. Transport of K was simulated over a period...

  1. GOTHiC, a probabilistic model to resolve complex biases and to identify real interactions in Hi-C data.

    Directory of Open Access Journals (Sweden)

    Borbala Mifsud

    Hi-C is one of the main methods for investigating spatial co-localisation of DNA in the nucleus. However, the raw sequencing data obtained from Hi-C experiments suffer from large biases and spurious contacts, making it difficult to identify true interactions. Existing methods use complex models to account for biases and do not provide a significance threshold for detecting interactions. Here we introduce a simple binomial probabilistic model that resolves complex biases and distinguishes between true and false interactions. The model corrects biases of known and unknown origin and yields a p-value for each interaction, providing a reliable threshold based on significance. We demonstrate this experimentally by testing the method against a random ligation dataset. Our method outperforms previous methods and provides a statistical framework for further data analysis, such as comparisons of Hi-C interactions between different conditions. GOTHiC is available as a BioConductor package (http://www.bioconductor.org/packages/release/bioc/html/GOTHiC.html).
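
    The core of the method is a per-pair binomial test. The sketch below is a stripped-down version of that idea, assuming the expected contact probability of a fragment pair is proportional to the product of the fragments' relative coverages; the actual GOTHiC implementation is the BioConductor package cited above, and its normalization details may differ.

```python
# Toy binomial interaction test on a symmetric Hi-C count matrix.
# Expected pair probability is taken as 2 * cov_i * cov_j under an
# independence assumption (a simplification made for this sketch).
import numpy as np
from scipy.stats import binom

def interaction_pvalues(counts):
    """counts: symmetric matrix of raw Hi-C read counts."""
    n = int(counts.sum() / 2)                  # total read pairs
    cov = counts.sum(axis=0) / counts.sum()    # relative coverage per fragment
    i, j = np.triu_indices_from(counts, k=1)
    p_exp = 2 * cov[i] * cov[j]                # expected pair probability
    # one-sided p-value: P(X >= observed) under Binomial(n, p_exp)
    return i, j, binom.sf(counts[i, j] - 1, n, p_exp)

counts = np.array([[0, 50, 5, 3],
                   [50, 0, 4, 2],
                   [5, 4, 0, 40],
                   [3, 2, 40, 0]])
for a, b, p in zip(*interaction_pvalues(counts)):
    print(f"fragments {a}-{b}: p = {p:.3g}")   # small p => candidate interaction
```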

  2. GOTHiC, a probabilistic model to resolve complex biases and to identify real interactions in Hi-C data.

    Science.gov (United States)

    Mifsud, Borbala; Martincorena, Inigo; Darbo, Elodie; Sugar, Robert; Schoenfelder, Stefan; Fraser, Peter; Luscombe, Nicholas M

    2017-01-01

    Hi-C is one of the main methods for investigating spatial co-localisation of DNA in the nucleus. However, the raw sequencing data obtained from Hi-C experiments suffer from large biases and spurious contacts, making it difficult to identify true interactions. Existing methods use complex models to account for biases and do not provide a significance threshold for detecting interactions. Here we introduce a simple binomial probabilistic model that resolves complex biases and distinguishes between true and false interactions. The model corrects biases of known and unknown origin and yields a p-value for each interaction, providing a reliable threshold based on significance. We demonstrate this experimentally by testing the method against a random ligation dataset. Our method outperforms previous methods and provides a statistical framework for further data analysis, such as comparisons of Hi-C interactions between different conditions. GOTHiC is available as a BioConductor package (http://www.bioconductor.org/packages/release/bioc/html/GOTHiC.html).

  3. Is there a Complex Trauma Experience typology for Australians experiencing extreme social disadvantage and low housing stability?

    Science.gov (United States)

    Keane, Carol A; Magee, Christopher A; Kelly, Peter J

    2016-11-01

    Traumatic childhood experiences predict many adverse outcomes in adulthood, including Complex-PTSD. Understanding complex trauma within socially disadvantaged populations has important implications for policy development and intervention implementation. This paper examined the nature of complex trauma experienced by disadvantaged individuals using a latent class analysis (LCA) approach. Data were collected through the large-scale Journeys Home Study (N=1682), utilising a representative sample of individuals experiencing low housing stability. Data on adverse childhood experiences, adulthood interpersonal trauma and relevant covariates were collected through interviews at baseline (Wave 1). Latent class analysis was conducted to identify distinct classes of childhood trauma history, which included physical assault, neglect, and sexual abuse. Multinomial logistic regression investigated childhood-relevant factors associated with class membership, such as the biological relationship of the primary carer at age 14 years and the number of times in foster care. Of the total sample, 99% reported traumatic adverse childhood experiences; the most common included witnessing of violence, threat/experience of physical abuse, and sexual assault. LCA identified six distinct childhood trauma history classes, including high violence and multiple traumas. Significant covariate differences between classes included gender, the biological relationship of the primary carer at age 14 years, and time in foster care. The identification of six distinct childhood trauma history profiles suggests that there might be unique treatment implications for individuals living in extreme social disadvantage. Further research is required to examine the relationship between these classes of experience, their consequent impact on adulthood engagement, and future transitions through homelessness.

  4. Merging experiences and perspectives in the complexity of cross-cultural design

    DEFF Research Database (Denmark)

    Winschiers-Theophilus, Heike; Bidwell, Nicola; Blake, Edward

    2010-01-01

    While our cross-cultural IT research continuously strives to contribute towards the development of appropriate cross-cultural HCI models and best practices, we are aware of the specificity of each development context and the influence of each participant. Uncovering the complexity within our...

  5. Reduced Complexity Volterra Models for Nonlinear System Identification

    Directory of Open Access Journals (Sweden)

    Hacıoğlu Rıfat

    2001-01-01

    A broad class of nonlinear systems and filters can be modeled by the Volterra series representation. However, its practical use in nonlinear system identification is sometimes limited due to the large number of parameters associated with the Volterra filter's structure. The parametric complexity also complicates design procedures based upon such a model. This limitation for system identification is addressed in this paper using a Fixed Pole Expansion Technique (FPET) within the Volterra model structure. The FPET approach employs orthonormal basis functions derived from fixed (real or complex) pole locations to expand the Volterra kernels and reduce the number of estimated parameters. That FPET can considerably reduce the number of estimated parameters is demonstrated by a digital satellite channel example, in which we use the proposed method to identify the channel dynamics. Furthermore, a gradient-descent procedure that adaptively selects the pole locations in the FPET structure is developed in the paper.
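
    To see the parameter-count problem that FPET attacks, consider a direct truncated second-order Volterra filter: with memory M it already carries M + M(M+1)/2 coefficients. The sketch below implements that direct form with arbitrary illustrative kernels; it does not implement FPET itself, only the structure whose parameters FPET compresses.

```python
# Direct truncated second-order Volterra filter:
#   y[n] = sum_k h1[k] x[n-k] + sum_{k<=l} h2[k,l] x[n-k] x[n-l]
# Kernels here are random placeholders, purely for illustration.
import numpy as np

def volterra2(x, h1, h2):
    M = len(h1)
    xpad = np.concatenate([np.zeros(M - 1), x])
    y = np.zeros(len(x))
    for n in range(len(x)):
        w = xpad[n:n + M][::-1]          # x[n], x[n-1], ..., x[n-M+1]
        y[n] = h1 @ w + w @ h2 @ w       # linear + quadratic terms
    return y

M = 8
rng = np.random.default_rng(2)
h1 = rng.standard_normal(M) / M
h2 = np.triu(rng.standard_normal((M, M))) / M**2  # upper-triangular kernel, k <= l
x = rng.standard_normal(100)
print("first output samples:", volterra2(x, h1, h2)[:5].round(3))
print("free parameters at memory", M, ":", M + M * (M + 1) // 2)
```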

  6. Approaches to surface complexation modeling of Uranium(VI) adsorption on aquifer sediments

    Science.gov (United States)

    Davis, J.A.; Meece, D.E.; Kohler, M.; Curtis, G.P.

    2004-01-01

    Uranium(VI) adsorption onto aquifer sediments was studied in batch experiments as a function of pH and U(VI) and dissolved carbonate concentrations in artificial groundwater solutions. The sediments were collected from an alluvial aquifer at a location upgradient of contamination from a former uranium mill operation at Naturita, Colorado (USA). The ranges of aqueous chemical conditions used in the U(VI) adsorption experiments (pH 6.9 to 7.9; U(VI) concentration 2.5 × 10⁻⁸ to 1 × 10⁻⁵ M; partial pressure of carbon dioxide gas 0.05 to 6.8%) were based on the spatial variation in chemical conditions observed in 1999-2000 in the Naturita alluvial aquifer. The major minerals in the sediments were quartz, feldspars, and calcite, with minor amounts of magnetite and clay minerals. Quartz grains commonly exhibited coatings that were greater than 10 nm in thickness and composed of an illite-smectite clay with occluded ferrihydrite and goethite nanoparticles. Chemical extractions of quartz grains removed from the sediments were used to estimate the masses of iron and aluminum present in the coatings. Various surface complexation modeling approaches were compared in terms of the ability to describe the U(VI) experimental data and the data requirements for model application to the sediments. Published models for U(VI) adsorption on reference minerals were applied to predict U(VI) adsorption based on assumptions about the sediment surface composition and physical properties (e.g., surface area and electrical double layer). Predictions from these models were highly variable, with results overpredicting or underpredicting the experimental data, depending on the assumptions used to apply the model. Although the models for reference minerals are supported by detailed experimental studies (and in ideal cases, surface spectroscopy), the results suggest that errors are caused in applying the models directly to the sediments by uncertain knowledge of: 1) the proportion and types of

  7. Infinite Multiple Membership Relational Modeling for Complex Networks

    DEFF Research Database (Denmark)

    Mørup, Morten; Schmidt, Mikkel Nørgaard; Hansen, Lars Kai

    Learning latent structure in complex networks has become an important problem fueled by many types of networked data originating from practically all fields of science. In this paper, we propose a new non-parametric Bayesian multiple-membership latent feature model for networks. Contrary to existing multiple-membership models that scale quadratically in the number of vertices, the proposed model scales linearly in the number of links, admitting multiple-membership analysis in large-scale networks. We demonstrate a connection between the single-membership relational model and multiple-membership models and show...

  8. A methodology for the design of experiments in computational intelligence with multiple regression models.

    Science.gov (United States)

    Fernandez-Lozano, Carlos; Gestal, Marcos; Munteanu, Cristian R; Dorado, Julian; Pazos, Alejandro

    2016-01-01

    The design of experiments and the validation of the results achieved with them are vital in any research study. This paper focuses on the use of different Machine Learning approaches for regression tasks in the field of Computational Intelligence, and especially on a correct comparison between the different results provided by different methods, as those techniques are complex systems that require further study to be fully understood. A methodology commonly accepted in Computational Intelligence is implemented in an R package called RRegrs. This package includes ten simple and complex regression models to carry out predictive modeling using Machine Learning and well-known regression algorithms. The framework for experimental design presented herein is evaluated and validated against RRegrs. Our results differ for three out of five state-of-the-art simple datasets, and it can be stated that the selection of the best model according to our proposal is statistically significant and relevant. It is important to use a statistical approach to indicate whether the differences are statistically significant when using this kind of algorithm. Furthermore, our results with three real complex datasets report different best models than the previously published methodology. Our final goal is to provide a complete methodology for comparing the results obtained in Computational Intelligence problems, as well as in other fields such as bioinformatics, cheminformatics, etc., given that our proposal is open and modifiable.
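
    The comparison protocol described here (several simple and complex regressors scored on identical cross-validation splits, followed by a statistical test) can be sketched in a few lines. The version below is a Python/scikit-learn analogue assembled for illustration; the paper's RRegrs package itself is an R implementation, and the dataset here is synthetic.

```python
# Compare several regressors on shared CV folds, then run a paired
# Wilcoxon signed-rank test between the two best models.
import numpy as np
from scipy.stats import wilcoxon
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso, LinearRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.svm import SVR

X, y = make_regression(n_samples=300, n_features=20, noise=10, random_state=0)
cv = KFold(n_splits=10, shuffle=True, random_state=0)  # identical splits for all

models = {"linear": LinearRegression(),
          "lasso": Lasso(alpha=0.1),
          "svr": SVR(C=10.0),
          "rf": RandomForestRegressor(n_estimators=200, random_state=0)}
scores = {name: cross_val_score(m, X, y, cv=cv, scoring="r2")
          for name, m in models.items()}

for name, s in scores.items():
    print(f"{name:6s} R2 = {s.mean():.3f} +/- {s.std():.3f}")
best, second = sorted(scores, key=lambda k: -scores[k].mean())[:2]
print(best, "vs", second,
      "paired Wilcoxon p =", wilcoxon(scores[best], scores[second]).pvalue)
```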

  9. A methodology for the design of experiments in computational intelligence with multiple regression models

    Directory of Open Access Journals (Sweden)

    Carlos Fernandez-Lozano

    2016-12-01

    The design of experiments and the validation of the results achieved with them are vital in any research study. This paper focuses on the use of different Machine Learning approaches for regression tasks in the field of Computational Intelligence, and especially on a correct comparison between the different results provided by different methods, as those techniques are complex systems that require further study to be fully understood. A methodology commonly accepted in Computational Intelligence is implemented in an R package called RRegrs. This package includes ten simple and complex regression models to carry out predictive modeling using Machine Learning and well-known regression algorithms. The framework for experimental design presented herein is evaluated and validated against RRegrs. Our results differ for three out of five state-of-the-art simple datasets, and it can be stated that the selection of the best model according to our proposal is statistically significant and relevant. It is important to use a statistical approach to indicate whether the differences are statistically significant when using this kind of algorithm. Furthermore, our results with three real complex datasets report different best models than the previously published methodology. Our final goal is to provide a complete methodology for comparing the results obtained in Computational Intelligence problems, as well as in other fields such as bioinformatics, cheminformatics, etc., given that our proposal is open and modifiable.

  10. The Bolund experiment: Overview and background; Wind conditions in complex terrain

    Energy Technology Data Exchange (ETDEWEB)

    Bechmann, A.; Berg, J.; Courtney, M.S.; Joergensen, Hans E.; Mann, J.; Soerensen, Niels N.

    2009-07-15

    The Bolund experiment is a measuring campaign performed in 2007 and 2008. The aim of the experiment is to measure the flow field around the Bolund hill in order to provide a dataset for validating numerical flow models. The present report gives an overview of the whole experiment, including a description of the orography, the instrumentation used, and the data processing. The actual measurements are available from a database, which is also described. (au)

  11. Surface complexation modeling of groundwater arsenic mobility: Results of a forced gradient experiment in a Red River flood plain aquifer, Vietnam

    Science.gov (United States)

    Jessen, Søren; Postma, Dieke; Larsen, Flemming; Nhan, Pham Quy; Hoa, Le Quynh; Trang, Pham Thi Kim; Long, Tran Vu; Viet, Pham Hung; Jakobsen, Rasmus

    2012-12-01

    Three surface complexation models (SCMs), developed for ferrihydrite, goethite, and sorption data for a Pleistocene oxidized aquifer sediment from Bangladesh, respectively, were used to explore the effect of multicomponent adsorption processes on As mobility in a reduced Holocene floodplain aquifer along the Red River, Vietnam. The SCMs for ferrihydrite and goethite yielded very different results. The ferrihydrite SCM favors As(III) over As(V) and has carbonate and silica species as the main competitors for surface sites. In contrast, the goethite SCM has a greater affinity for As(V) over As(III), while PO4^3- and Fe(II) form the predominant surface species. The SCM for the Pleistocene aquifer sediment most resembles the goethite SCM but shows more Si sorption. Compiled As(III) adsorption data for Holocene sediment were also well described by the SCM determined for the Pleistocene aquifer sediment, suggesting a comparable As(III) affinity of Holocene and Pleistocene aquifer sediments. A forced gradient field experiment was conducted in a bank aquifer adjacent to a tributary channel of the Red River, and the passage in the aquifer of mixed groundwater containing up to 74% channel water was observed. The ferrihydrite SCM correctly predicts desorption for As(III), but for Si and PO4^3- it predicts an increased adsorption instead of desorption. The goethite SCM correctly predicts desorption of both As(III) and PO4^3- but fails in the prediction of Si desorption. These results indicate that the prediction of As mobility using SCMs for synthetic Fe-oxides will be strongly dependent on the model chosen. The SCM based on the Pleistocene aquifer sediment predicts the desorption of As(III), PO4^3- and Si far better than the SCMs for ferrihydrite and goethite, even though Si desorption is still somewhat under-predicted. The observation that an SCM calibrated on a different sediment can predict our field results so well suggests that sediment-based SCMs may be a

  12. The Complex Trauma Questionnaire (ComplexTQ): Development and preliminary psychometric properties of an instrument for measuring early relational trauma

    Directory of Open Access Journals (Sweden)

    Carola Maggiora Vergano

    2015-09-01

    Research on the etiology of adult psychopathology and its relationship with childhood trauma has focused primarily on specific forms of maltreatment. This study developed an instrument for the assessment of childhood and adolescence trauma that would aid in identifying the role of co-occurring childhood stressors and chronic adverse conditions. The Complex Trauma Questionnaire (ComplexTQ), in both clinician and self-report versions, is a measure for the assessment of multi-type maltreatment: physical, psychological, and sexual abuse; physical and emotional neglect; as well as other traumatic experiences, such as rejection, role reversal, witnessing domestic violence, separations, and losses. The four-point Likert scale allows one to indicate specifically with which caregiver the traumatic experience occurred. A total of 229 participants, a sample of 79 nonclinical and 150 high-risk and clinical participants, were assessed with the ComplexTQ clinician version applied to Adult Attachment Interview (AAI) transcripts. Initial analyses indicate acceptable inter-rater reliability. A good fit to a 6-factor model regarding the experience with the mother and to a 5-factor model regarding the experience with the father was obtained; the internal consistency of the derived factors was good. Convergent validity was provided with the AAI scales. ComplexTQ factors discriminated normative from high-risk and clinical samples. The findings suggest a promising, reliable, and valid measurement of early relational trauma; furthermore, the instrument is easy to complete and useful for both research and clinical practice.

  13. Dynamical phase separation using a microfluidic device: experiments and modeling

    Science.gov (United States)

    Aymard, Benjamin; Vaes, Urbain; Radhakrishnan, Anand; Pradas, Marc; Gavriilidis, Asterios; Kalliadasis, Serafim; Complex Multiscale Systems Team

    2017-11-01

    We study the dynamical phase separation of a binary fluid by a microfluidic device from both the experimental and the modeling points of view. The experimental device consists of a main channel (600 μm wide) leading into an array of 276 trapezoidal capillaries of 5 μm width arranged on both sides, separating the lateral channels from the main channel. Due to geometrical effects as well as the wetting properties of the substrate, and under well-chosen pressure boundary conditions, a multiphase flow introduced into the main channel gets separated at the capillaries. Understanding this dynamics via modeling and numerical simulation is a crucial step in designing future efficient micro-separators. We propose a diffuse-interface model, based on the classical Cahn-Hilliard-Navier-Stokes system, with a new nonlinear mobility and new wetting boundary conditions. We also propose a novel numerical method using a finite-element approach, together with an adaptive mesh refinement strategy. The complex geometry is captured using the same computer-aided design files as the ones adopted in the fabrication of the actual device. Numerical simulations reveal a very good qualitative agreement between model and experiments, demonstrating also a clear separation of phases.
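
    As a toy illustration of the diffuse-interface ingredient named above, the sketch below integrates the plain 1-D Cahn-Hilliard equation with unit mobility on a periodic domain using explicit finite differences. The study's actual solver is finite-element, coupled to Navier-Stokes, with nonlinear mobility and wetting boundary conditions, none of which is attempted here.

```python
# 1-D Cahn-Hilliard, c_t = lap(c^3 - c - gamma * lap(c)), periodic domain,
# explicit Euler with a conservatively small time step. Spinodal
# decomposition drives the near-uniform mixture toward c ~ +/-1 phases.
import numpy as np

def lap(u, dx):
    return (np.roll(u, 1) + np.roll(u, -1) - 2 * u) / dx**2

n, dx, dt, gamma = 256, 1.0, 0.02, 1.0
rng = np.random.default_rng(6)
c = 0.05 * rng.standard_normal(n)          # slightly perturbed mixture
for _ in range(50000):
    mu = c**3 - c - gamma * lap(c, dx)     # chemical potential
    c = c + dt * lap(mu, dx)               # unit mobility (simplification)

print("phase fractions:", (c > 0).mean(), (c < 0).mean())
print(f"c range after separation: [{c.min():.2f}, {c.max():.2f}]")
```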

  14. Decision dynamics of departure times: Experiments and modeling

    Science.gov (United States)

    Sun, Xiaoyan; Han, Xiao; Bao, Jian-Zhang; Jiang, Rui; Jia, Bin; Yan, Xiaoyong; Zhang, Boyu; Wang, Wen-Xu; Gao, Zi-You

    2017-10-01

    A fundamental problem in traffic science is to understand user-choice behaviors that account for the emergence of complex traffic phenomena. Despite much effort devoted to theoretically exploring departure time choice behaviors, relatively large-scale and systematic experimental tests of theoretical predictions are still lacking. In this paper, we aim to offer a more comprehensive understanding of departure time choice behaviors in terms of a series of laboratory experiments under different traffic conditions and feedback information provided to commuters. In the experiment, the number of recruited players is much larger than the number of choices to better mimic the real scenario, in which a large number of commuters will depart simultaneously in a relatively small time window. Sufficient numbers of rounds are conducted to ensure the convergence of collective behavior. Experimental results demonstrate that collective behavior is close to the user equilibrium, regardless of different scales and traffic conditions. Moreover, the amount of feedback information has a negligible influence on collective behavior but has a relatively stronger effect on individual choice behaviors. Reinforcement learning and Fermi learning models are built to reproduce the experimental results and uncover the underlying mechanism. Simulation results are in good agreement with the experimentally observed collective behaviors.
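
    A minimal version of the reinforcement-learning account used to reproduce these experiments might look like the sketch below: commuters repeatedly pick a departure slot, payoffs fall with congestion in the chosen slot, and choice propensities are reinforced by realized payoffs. The payoff function and parameter values are invented for illustration, not taken from the paper.

```python
# Repeated departure-time game with simple reinforcement learning.
# With identical slots, the user equilibrium is a roughly uniform load.
import numpy as np

rng = np.random.default_rng(3)
N, T, ROUNDS = 300, 6, 500
capacity = N / T                       # load a slot serves comfortably (assumed)
propensity = np.ones((N, T))           # initial choice propensities

for _ in range(ROUNDS):
    probs = propensity / propensity.sum(axis=1, keepdims=True)
    choices = np.array([rng.choice(T, p=p) for p in probs])
    load = np.bincount(choices, minlength=T)
    # payoff shrinks with overload of the chosen slot (illustrative form)
    payoff = 1.0 / (1.0 + np.maximum(0, load - capacity) / capacity)
    propensity[np.arange(N), choices] += payoff[choices]  # reinforcement

print("final loads per slot:", load, "(user equilibrium ~ uniform)")
```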

  15. Some Comparisons of Complexity in Dictionary-Based and Linear Computational Models

    Czech Academy of Sciences Publication Activity Database

    Gnecco, G.; Kůrková, Věra; Sanguineti, M.

    2011-01-01

    Roč. 24, č. 2 (2011), s. 171-182 ISSN 0893-6080 R&D Project s: GA ČR GA201/08/1744 Grant - others:CNR - AV ČR project 2010-2012(XE) Complexity of Neural-Network and Kernel Computational Models Institutional research plan: CEZ:AV0Z10300504 Keywords : linear approximation schemes * variable-basis approximation schemes * model complexity * worst-case errors * neural networks * kernel models Subject RIV: IN - Informatics, Computer Science Impact factor: 2.182, year: 2011

  16. Refining Grasp Affordance Models by Experience

    DEFF Research Database (Denmark)

    Detry, Renaud; Kraft, Dirk; Buch, Anders Glent

    2010-01-01

    We present a method for learning object grasp affordance models in 3D from experience, and demonstrate its applicability through extensive testing and evaluation on a realistic and largely autonomous platform. Grasp affordance refers here to relative object-gripper configurations that yield stable...... with a visual model of the object they characterize. We explore a batch-oriented, experience-based learning paradigm where grasps sampled randomly from a density are performed, and an importance-sampling algorithm learns a refined density from the outcomes of these experiences. The first such learning cycle...... is bootstrapped with a grasp density formed from visual cues. We show that the robot effectively applies its experience by downweighting poor grasp solutions, which results in increased success rates at subsequent learning cycles. We also present success rates in a practical scenario where a robot needs...

  17. Design of Computer Experiments

    DEFF Research Database (Denmark)

    Dehlendorff, Christian

    The main topic of this thesis is design and analysis of computer and simulation experiments and is dealt with in six papers and a summary report. Simulation and computer models have in recent years received increasingly more attention due to their increasing complexity and usability. Software...... packages make the development of rather complicated computer models using predefined building blocks possible. This implies that the range of phenomenas that are analyzed by means of a computer model has expanded significantly. As the complexity grows so does the need for efficient experimental designs...... and analysis methods, since the complex computer models often are expensive to use in terms of computer time. The choice of performance parameter is an important part of the analysis of computer and simulation models and Paper A introduces a new statistic for waiting times in health care units. The statistic...

  18. Agent-Based and Macroscopic Modeling of the Complex Socio-Economic Systems

    Directory of Open Access Journals (Sweden)

    Aleksejus Kononovičius

    2013-08-01

    Purpose – The focus of this contribution is the correspondence between collective behavior and inter-individual interactions in complex socio-economic systems. There is currently a wide selection of papers proposing various models for both collective behavior and inter-individual interactions in complex socio-economic systems, yet papers directly relating these two concepts are still quite rare. By studying this correspondence we discuss a cutting-edge approach to the modeling of complex socio-economic systems. Design/methodology/approach – Collective behavior is often modeled using stochastic and ordinary calculus, while inter-individual interactions are modeled using agent-based models. In order to obtain the ideal model, one should start from one of these frameworks and build a bridge to reach the other. This is a formidable task if we consider the top-down approach, namely starting from the collective behavior and moving towards inter-individual interactions. The bottom-up approach also fails if complex inter-individual interaction models are considered, yet in this case we can start with simple models and increase the complexity as needed. Findings – The bottom-up approach, considering a simple agent-based herding model as a model for the inter-individual interactions, allows us to derive certain macroscopic models of complex socio-economic systems from the agent-based perspective. This provides interesting insights into the collective behavior patterns observed in complex socio-economic systems. Research limitations/implications – The simplicity of the agent-based herding model might be considered somewhat limiting, yet this simplicity implies that the model is highly universal: it reproduces universal features of social behavior and can be further extended to fit different socio-economic scenarios. Practical implications – Insights provided in this contribution might be used to modify existing
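
    The "simple agent-based herding model" invoked here is of the Kirman type: agents switch between two states either idiosyncratically or by imitating a randomly met agent. The sketch below simulates such a model with assumed rates; deriving the corresponding macroscopic stochastic differential equation is the step the contribution discusses.

```python
# Kirman-type two-state herding model, simulated as a discrete-time
# approximation of the continuous-time switching rates. All rates are
# illustrative assumptions, not the contribution's calibration.
import numpy as np

rng = np.random.default_rng(4)
N, STEPS = 1000, 200000
eps1, eps2, h = 0.2, 0.2, 5.0           # idiosyncratic and herding rates (assumed)
dt = 1.0 / (N * (eps1 + eps2 + 2 * h))  # keeps per-step probabilities below 1
x = N // 2                              # agents currently in state 1
frac = np.empty(STEPS)

for t in range(STEPS):
    lam_up = (N - x) * (eps1 + h * x / N)        # a state-2 agent switches to 1
    lam_dn = x * (eps2 + h * (N - x) / N)        # a state-1 agent switches to 2
    r = rng.random()
    if r < lam_up * dt:
        x += 1
    elif r < (lam_up + lam_dn) * dt:
        x -= 1
    frac[t] = x / N

print("mean fraction in state 1:", round(frac.mean(), 3))
print("std of fraction (herding-driven swings):", round(frac.std(), 3))
```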

  19. Diffusion and sorption on hardened cement pastes - experiments and modelling results

    Energy Technology Data Exchange (ETDEWEB)

    Jakob, A.; Sarott, F.-A.; Spieler, P.

    1999-08-01

    Large parts of repositories for low and intermediate level radioactive waste consist of cementitious materials. Radionuclides are transported by diffusion in the cement matrix or, in case of fractured or highly permeable cement, by advection and dispersion. In this work we aim at a mechanistic understanding of diffusion processes of some reactive tracers. On the laboratory scale, ten through-diffusion experiments were performed to study these processes for Cl⁻, I⁻, Cs⁺ and Ni²⁺ ions in a Sulphate Resisting Portland Cement (SRPC) equilibrated with an artificial pore water. Some of the experiments continued up to nearly three years with daily measurements. In all the experiments, a cement disk initially saturated with an artificial pore water was exposed on one side to a highly diluted solution containing the species of interest. On the second side, a near-zero concentration boundary was maintained to drive through-diffusion of the tracer. The changes of concentrations on both sides of the samples were monitored, allowing careful mass balances. From these data, values of the diffusive flux and the mass of tracer taken up by the cementitious material were determined as a function of time. In the subsequent modelling, the time histories of these tracer breakthroughs were fitted using five different models. The simplest model neglects all retarding mechanisms except pure diffusion. More complex models either account for instantaneous equilibrium sorption in form of linear or non-linear (Freundlich) sorption or for first-order sorption kinetics where the forward reaction may be linear or non-linear according to the Freundlich isotherm, while the back-reaction is linear. Hence, the analysis allows the extraction of the diffusion coefficient and parameter values for the sorption isotherm or rate-constants for sorption and desorption. The fits to the experimental data were carried out by an automated Marquardt-Levenberg procedure yielding error

  20. Observation-Driven Configuration of Complex Software Systems

    Science.gov (United States)

    Sage, Aled

    2010-06-01

    The ever-increasing complexity of software systems makes them hard to comprehend, predict and tune, due to emergent properties and non-deterministic behaviour. Complexity arises from the size of software systems and the wide variety of possible operating environments: the increasing choice of platforms and communication policies leads to ever more complex performance characteristics. In addition, software systems exhibit different behaviour under different workloads. Many software systems are designed to be configurable so that policies can be chosen to meet the needs of various stakeholders. For complex software systems it can be difficult to accurately predict the effects of a change and to know which configuration is most appropriate. This thesis demonstrates that it is useful to run automated experiments that measure a selection of system configurations. Experiments can find configurations that meet the stakeholders' needs, find interesting behavioural characteristics, and help produce predictive models of the system's behaviour. The design and use of ACT (Automated Configuration Tool) for running such experiments is described, in combination with a number of search strategies for deciding on the configurations to measure. Design Of Experiments (DOE) is discussed, with emphasis on Taguchi Methods. These statistical methods have been used extensively in manufacturing, but have not previously been used for configuring software systems. The novel contribution here is an industrial case study, applying the combination of ACT and Taguchi Methods to DC-Directory, a product from Data Connection Ltd (DCL). The case study investigated the applicability of Taguchi Methods for configuring complex software systems. Taguchi Methods were found to be useful for modelling and configuring DC-Directory, making them a valuable addition to the techniques available to system administrators and developers.
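
    The Taguchi step reduces, in its simplest form, to running an orthogonal array of configurations and averaging the measured response at each factor level. The sketch below does exactly that for an L4(2^3) array; the factor names and latency numbers are invented for illustration, and DC-Directory's real configuration space is of course far larger.

```python
# Taguchi-style main-effects screening with an L4(2^3) orthogonal array:
# three two-level factors covered in four runs, effects from level means.
import numpy as np

# L4 orthogonal array: rows = runs, columns = factor levels (0/1).
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])
factors = ["cache_policy", "thread_pool", "batch_commit"]   # hypothetical names
response = np.array([120.0, 95.0, 150.0, 110.0])            # mean latency (ms), invented

for f, name in enumerate(factors):
    lo = response[L4[:, f] == 0].mean()
    hi = response[L4[:, f] == 1].mean()
    print(f"{name:12s} level-0 mean = {lo:6.1f}  level-1 mean = {hi:6.1f}"
          f"  effect = {hi - lo:+6.1f} ms")
```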

  1. A low-complexity interacting multiple model filter for maneuvering target tracking

    KAUST Repository

    Khalid, Syed Safwan; Abrar, Shafayat

    2017-01-01

    In this work, we address the target tracking problem for a coordinate-decoupled Markovian jump-mean-acceleration based maneuvering mobility model. A novel low-complexity alternative to the conventional interacting multiple model (IMM) filter is proposed for this class of mobility models. The proposed tracking algorithm utilizes a bank of interacting filters where the interactions are limited to the mixing of the mean estimates, and it exploits a fixed off-line computed Kalman gain matrix for the entire filter bank. Consequently, the proposed filter does not require matrix inversions during on-line operation which significantly reduces its complexity. Simulation results show that the performance of the low-complexity proposed scheme remains comparable to that of the traditional (highly-complex) IMM filter. Furthermore, we derive analytical expressions that iteratively evaluate the transient and steady-state performance of the proposed scheme, and establish the conditions that ensure the stability of the proposed filter. The analytical findings are in close accordance with the simulated results.
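
    The structure of the proposed filter can be sketched compactly: a bank of mode-matched filters sharing one offline-computed steady-state Kalman gain, with interaction limited to mixing of the mean estimates. The toy version below tracks a 1-D constant-velocity target whose maneuver modes differ only in mean acceleration; all matrices and rates are illustrative assumptions, not the paper's design.

```python
# Two-mode, mean-mixing filter bank with a shared fixed Kalman gain,
# so the online loop needs no matrix inversions.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity dynamics
G = np.array([0.5 * dt**2, dt])              # acceleration input vector
H = np.array([[1.0, 0.0]])                   # position-only measurement
R, q = 1.0, 0.5                              # noise levels (assumed)
accel = [0.0, 2.0]                           # mode-dependent mean acceleration
PI = np.array([[0.95, 0.05], [0.05, 0.95]])  # mode transition probabilities

# Offline: steady-state gain via the Riccati recursion.
P = np.eye(2)
for _ in range(500):
    P = F @ P @ F.T + q * np.outer(G, G)
    S = (H @ P @ H.T + R).item()
    K = (P @ H.T / S).ravel()
    P = (np.eye(2) - np.outer(K, H.ravel())) @ P

rng = np.random.default_rng(5)
x_true = np.array([0.0, 1.0])
means = np.zeros((2, 2))                     # one mean estimate per mode
mu = np.array([0.5, 0.5])                    # mode probabilities

for step in range(60):
    a = accel[1] if 25 <= step < 45 else accel[0]   # true maneuver phase
    x_true = F @ x_true + G * a
    z = x_true[0] + rng.normal(scale=np.sqrt(R))
    # Interaction limited to mixing of the means (the paper's simplification).
    mix = PI.T * mu                          # mix[j, i] = PI[i, j] * mu[i]
    mu_pred = mix.sum(axis=1)
    mixed = (mix @ means) / mu_pred[:, None]
    lik = np.empty(2)
    for j in range(2):
        xp = F @ mixed[j] + G * accel[j]
        innov = z - xp[0]
        lik[j] = np.exp(-0.5 * innov**2 / S) / np.sqrt(2 * np.pi * S)
        means[j] = xp + K * innov            # shared fixed gain for the bank
    mu = mu_pred * lik
    mu /= mu.sum()

print("true state:", x_true.round(2), " fused estimate:", (mu @ means).round(2))
```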

  2. A low-complexity interacting multiple model filter for maneuvering target tracking

    KAUST Repository

    Khalid, Syed Safwan

    2017-01-22

    In this work, we address the target tracking problem for a coordinate-decoupled Markovian jump-mean-acceleration based maneuvering mobility model. A novel low-complexity alternative to the conventional interacting multiple model (IMM) filter is proposed for this class of mobility models. The proposed tracking algorithm utilizes a bank of interacting filters where the interactions are limited to the mixing of the mean estimates, and it exploits a fixed off-line computed Kalman gain matrix for the entire filter bank. Consequently, the proposed filter does not require matrix inversions during on-line operation which significantly reduces its complexity. Simulation results show that the performance of the low-complexity proposed scheme remains comparable to that of the traditional (highly-complex) IMM filter. Furthermore, we derive analytical expressions that iteratively evaluate the transient and steady-state performance of the proposed scheme, and establish the conditions that ensure the stability of the proposed filter. The analytical findings are in close accordance with the simulated results.

  3. Modeling of anaerobic digestion of complex substrates

    International Nuclear Information System (INIS)

    Keshtkar, A. R.; Abolhamd, G.; Meyssami, B.; Ghaforian, H.

    2003-01-01

    A structured mathematical model of the anaerobic conversion of complex organic materials to biogas in non-ideally mixed cyclic-batch reactors has been developed. The model is based on multiple-reaction stoichiometry (enzymatic hydrolysis, acidogenesis, acetogenesis and methanogenesis), microbial growth kinetics, conventional material balances in the liquid and gas phases for a cyclic-batch reactor, liquid-gas interactions, liquid-phase equilibrium reactions, and a simple mixing model which divides the reactor volume into two separate sections: the flow-through and the retention regions. The dynamic model describes the effects of reactant distribution resulting from the mixing conditions, the time interval of feeding, the hydraulic retention time and the mixing parameters on the process performance. The model is applied to the simulation of anaerobic digestion of cattle manure under different operating conditions. The model is compared with experimental data and good correlations are obtained.

  4. CALCULATION OF CHEMICAL ATMOSPHERE ESTIMATION GIVEN THE COMPLEX TERRAIN

    Directory of Open Access Journals (Sweden)

    M. M. Biliaiev

    2010-06-01

    A 3D numerical model was used to simulate toxic gas dispersion over complex terrain after an accidental spillage. The model is based on the K-gradient transport model and a potential flow model. The results of a numerical experiment are presented.

  5. Parametric Linear Hybrid Automata for Complex Environmental Systems Modeling

    Directory of Open Access Journals (Sweden)

    Samar Hayat Khan Tareen

    2015-07-01

    Environmental systems, whether they be weather patterns or predator-prey relationships, are dependent on a number of different variables, each directly or indirectly affecting the system at large. Since not all of these factors are known, such systems take on non-linear dynamics, making it difficult to accurately predict meaningful behavioral trends far into the future. However, such dynamics do not warrant complete ignorance of efforts to understand and model close approximations of these systems. Towards this end, we have applied a logical modeling approach to model and analyze the behavioral trends and systematic trajectories that these systems exhibit, without delving into their quantification. This approach, formalized by René Thomas for discrete logical modeling of Biological Regulatory Networks (BRNs) and further extended in our previous studies as parametric biological linear hybrid automata (Bio-LHA), has previously been employed for the analyses of different molecular regulatory interactions occurring across various cells and microbial species. As relationships between different interacting components of a system can be simplified as positive or negative influences, we can employ the Bio-LHA framework to represent the different components of an environmental system as positive or negative feedbacks. In the present study, we highlight the benefits of hybrid (discrete/continuous) modeling, which leads to refinements among the forecast behaviors in order to find out which ones are actually possible. We have taken two case studies, an interaction of three microbial species in a freshwater pond and a more complex atmospheric system, to show the applications of the Bio-LHA methodology for the timed hybrid modeling of environmental systems. Results show that the approach using the Bio-LHA is a viable method for behavioral modeling of complex environmental systems, finding timing constraints while keeping the complexity of the model

  6. Gallium sorption on montmorillonite and illite colloids: Experimental study and modelling by ionic exchange and surface complexation

    International Nuclear Information System (INIS)

    Benedicto, Ana; Degueldre, Claude; Missana, Tiziana

    2014-01-01

    Highlights: • Ga sorption onto illite and montmorillonite was studied and modelled for the first time. • The developed sorption model explains Ga sorption well in both clays. • The number of free parameters was reduced by applying the linear free energy relationship. • Cationic exchange dominates sorption at pH < 4.5; surface complexation dominates at higher pH. - Abstract: The migration of metals such as gallium (Ga) in the environment is strongly influenced by their sorption on clay minerals such as montmorillonite and illite. Given the increasing use of gallium in industry and medicine, Ga-bearing waste may give rise to environmental problems. Ga sorption experiments were carried out on montmorillonite and illite colloids over a wide range of pH, ionic strength and Ga concentration. A Ga sorption model was developed combining ionic exchange and surface complexation on the edge sites (silanol- and aluminol-like) of the clay sheets. The complexation constants were estimated as far as possible from the Ga hydrolysis constants by applying the linear free energy relationship (LFER), which reduced the number of free parameters in the model. The Ga sorption behaviour was very similar on illite and montmorillonite: a decreasing tendency with pH and a dependency on ionic strength under very acidic conditions. The modelling of the experimental data suggests that the Ga sorption reactions prevent the Ga precipitation that is predicted, in the absence of clay colloids, between pH 3.5 and 5.5. Under this hypothesis, clay colloids would affect Ga aqueous speciation, favouring sorption over precipitation. Ga sorption on montmorillonite and illite can be explained by three main reactions: Ga3+ exchange under very acidic conditions (pH < ∼3.8); Ga(OH)4− complexation on protonated weak sites under acidic-neutral conditions (between pH ∼5.2 and pH ∼7.9); and Ga(OH)3 complexation on strong sites under basic conditions (pH > ∼7.9).

  7. Argonne Bubble Experiment Thermal Model Development

    Energy Technology Data Exchange (ETDEWEB)

    Buechler, Cynthia Eileen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-12-03

    This report describes the Computational Fluid Dynamics (CFD) model that was developed to calculate the temperatures and gas volume fractions in the solution vessel during irradiation. It is based on the model used to calculate temperatures and volume fractions in an annular vessel containing an aqueous solution of uranium. The experiment was repeated at several electron beam power levels, but the CFD analysis was performed only for the 12 kW irradiation, because this experiment came closest to reaching a steady-state condition. The aim of the study is to compare the calculated results with experimental measurements in order to determine the validity of the CFD model.

  8. Modeling and complexity of stochastic interacting Lévy type financial price dynamics

    Science.gov (United States)

    Wang, Yiduan; Zheng, Shenzhou; Zhang, Wei; Wang, Jun; Wang, Guochao

    2018-06-01

    In an attempt to reproduce and investigate the nonlinear dynamics of security markets, a novel nonlinear randomly interacting price dynamics, considered as a Lévy-type process, is developed and investigated through the combination of lattice-oriented percolation and Potts dynamics, which account for the intrinsic random fluctuation and for the fluctuation caused by the spread of the investors' trading attitudes, respectively. To better understand the fluctuation complexity of the proposed model, complexity analyses of the random logarithmic price returns and the corresponding volatility series are performed, including power-law distribution, Lempel-Ziv complexity and fractional sample entropy. To verify the rationality of the proposed model, corresponding studies of actual security market datasets are also carried out for comparison. The empirical results reveal that this financial price model can reproduce some important complexity features of actual security markets to some extent. The complexity of the returns decreases as the parameters γ1 and β increase; furthermore, the volatility series exhibit lower complexity than the return series.
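
    The Lempel-Ziv complexity mentioned above can be approximated with an LZ78-style phrase count on a binarized return series. The sketch below (synthetic Gaussian returns, not the model's output) is one common variant, not necessarily the exact measure used by the authors:

        # LZ78-style phrase counting as a simple complexity proxy.
        import numpy as np

        def lz_phrase_count(seq):
            """Number of distinct phrases in a greedy left-to-right parse."""
            phrases, cur = set(), ""
            for ch in seq:
                cur += ch
                if cur not in phrases:
                    phrases.add(cur)
                    cur = ""
            return len(phrases) + (1 if cur else 0)

        rng = np.random.default_rng(0)
        returns = rng.normal(0.0, 0.01, 1000)    # stand-in for model/market returns
        symbols = "".join("1" if r > np.median(returns) else "0" for r in returns)
        print("LZ phrase count:", lz_phrase_count(symbols))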

  9. Surface complexation modeling of zinc sorption onto ferrihydrite.

    Science.gov (United States)

    Dyer, James A; Trivedi, Paras; Scrivner, Noel C; Sparks, Donald L

    2004-02-01

    A previous study involving lead(II) [Pb(II)] sorption onto ferrihydrite over a wide range of conditions highlighted the advantages of combining molecular- and macroscopic-scale investigations with surface complexation modeling to predict Pb(II) speciation and partitioning in aqueous systems. In this work, an extensive collection of new macroscopic and spectroscopic data was used to assess the ability of the modified triple-layer model (TLM) to predict single-solute zinc(II) [Zn(II)] sorption onto 2-line ferrihydrite in NaNO3 solutions as a function of pH, ionic strength, and concentration. Regression of constant-pH isotherm data, together with potentiometric titration and pH-edge data, was a much more rigorous test of the modified TLM than fitting pH-edge data alone. When coupled with valuable input from spectroscopic analyses, good fits of the isotherm data were obtained with a one-species, one-Zn-sorption-site model using the bidentate-mononuclear surface complex (≡FeO)2Zn; however, surprisingly, both the density of Zn(II) sorption sites and the value of the best-fit equilibrium "constant" for the bidentate-mononuclear complex had to be adjusted with pH to adequately fit the isotherm data. Although spectroscopy provided some evidence for multinuclear surface complex formation at surface loadings approaching site saturation at pH ≥ 6.5, the assumption of a bidentate-mononuclear surface complex provided acceptable fits of the sorption data over the entire range of conditions studied. Regressing edge data in the absence of isotherm and spectroscopic data resulted in a fair number of surface-species/site-type combinations that provided acceptable fits of the edge data, but unacceptable fits of the isotherm data. A linear relationship between log K((≡FeO)2Zn) and pH was found, given by log K((≡FeO)2Zn, at 1 g/L) = 2.058 pH − 6.131. In addition, a surface activity coefficient term was introduced to the model to reduce the ionic strength
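
    The reported regression can be transcribed directly; only the helper name below is ours:

        # log K of the (≡FeO)2Zn surface complex at 1 g/L sorbent, from the
        # reported linear relationship log K = 2.058 pH - 6.131.
        def log_k_feo2zn(pH):
            return 2.058 * pH - 6.131

        for pH in (5.0, 6.5, 8.0):
            print(f"pH {pH}: log K = {log_k_feo2zn(pH):.3f}")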

  10. Applications of Nonlinear Dynamics Model and Design of Complex Systems

    CERN Document Server

    In, Visarath; Palacios, Antonio

    2009-01-01

    This edited book is aimed at interdisciplinary, device-oriented applications of nonlinear science theory and methods in complex systems, in particular applications directed at nonlinear phenomena with space and time characteristics. Examples include: complex networks of magnetic sensor systems, coupled nano-mechanical oscillators, nano-detectors, microscale devices, stochastic resonance in multi-dimensional chaotic systems, biosensors, and stochastic signal quantization. "Applications of Nonlinear Dynamics: Model and Design of Complex Systems" brings together the work of scientists and engineers who are applying ideas and methods from nonlinear dynamics to the design and fabrication of complex systems.

  11. Passengers, Crowding and Complexity : Models for passenger oriented public transport

    NARCIS (Netherlands)

    P.C. Bouman (Paul)

    2017-01-01

    Passengers, Crowding and Complexity was written as part of the Complexity in Public Transport (ComPuTr) project funded by the Netherlands Organisation for Scientific Research (NWO). This thesis studies in three parts how microscopic data can be used in models that have the potential

  12. FRAM Modelling Complex Socio-technical Systems

    CERN Document Server

    Hollnagel, Erik

    2012-01-01

    There has not yet been a comprehensive method that goes behind 'human error' and beyond the failure concept, and various complicated accidents have accentuated the need for one. The Functional Resonance Analysis Method (FRAM) fulfils that need. This book presents a detailed and tested method that can be used to model how complex and dynamic socio-technical systems work, and to understand both why things sometimes go wrong and why they normally succeed.

  13. Are more complex physiological models of forest ecosystems better choices for plot and regional predictions?

    Science.gov (United States)

    Wenchi Jin; Hong S. He; Frank R. Thompson

    2016-01-01

    Process-based forest ecosystem models range from simple physiological, through complex physiological, to hybrid empirical-physiological models. Previous studies indicate that complex models provide the best predictions at plot scale over temporal extents of less than 10 years; however, it is largely untested whether complex models outperform the other two types of models...

  14. Complexation of metal ions with humic acid: charge neutralization model

    International Nuclear Information System (INIS)

    Kim, J.I.; Czerwinski, K.R.

    1995-01-01

    A number of different approaches are being used to describe the complexation equilibrium of actinide ions with humic or fulvic acid. The approach chosen and verified experimentally by the TU München will be discussed with notable examples from experiment. This approach is based on the concept that a given actinide ion is charge-neutralized upon complexation with the functional groups of humic or fulvic acid, e.g. carboxylic and phenolic groups, these acids being heterogeneously cross-linked polyelectrolytes. The photon energy transfer experiment with laser light excitation has shown that the binding of the actinide ion to the functional groups is a chelation process accompanied by charge neutralization of the metal ion. This is in accordance with the experimental evidence for the postulated thermodynamic equilibrium reaction. The experimental results are found to be independent of the origin of the humic or fulvic acid and applicable over a broad range of pH. (authors). 23 refs., 7 figs., 1 tab

  15. Case study method and problem-based learning: utilizing the pedagogical model of progressive complexity in nursing education.

    Science.gov (United States)

    McMahon, Michelle A; Christopher, Kimberly A

    2011-08-19

    As the complexity of health care delivery continues to increase, educators are challenged to determine the educational best practices that prepare BSN students for the ambiguous clinical practice setting. Integrative, active, and student-centered curricular methods are encouraged to foster students' ability to use clinical judgment for problem solving and informed clinical decision making. The proposed pedagogical model of progressive complexity in nursing education suggests gradually introducing students to complex and multi-contextual clinical scenarios through the use of case studies and problem-based learning activities, with the intention of transitioning nursing students into autonomous learners and well-prepared practitioners by the culmination of a nursing program. Exemplar curricular activities are suggested to potentiate students' development of a transferable problem-solving skill set and a flexible knowledge base that better prepare them for practice in future novel clinical experiences, a mutual goal for both educators and students.

  16. A SIMULATION MODEL OF THE GAS COMPLEX

    Directory of Open Access Journals (Sweden)

    Sokolova G. E.

    2016-06-01

    Full Text Available The article considers the dynamics of gas production in Russia, the structure of sales in the different market segments, and the comparative dynamics of selling prices in these segments. It addresses the design of a gas complex using a simulation model that makes it possible to estimate the efficiency of the project and to determine the stability region of the obtained solutions. The presented model takes into account the repayment schedule of the loan, making it possible to determine, from the first year of simulation, whether the loan can be repaid. The modelled object is a group of gas fields, characterized by the minimum flow rate above which the project is cost-effective. In determining the minimum source flow rate, the discount rate is taken as a generalized weighted average percentage on debt and equity, taking risk premiums into account; it also serves as the lower barrier for the internal rate of return, below which the project is rejected as ineffective. Analysis of the dynamics, together with expert evaluation methods, makes it possible to determine the intervals of variation of the simulated parameters, such as the gas price and the time at which the gas complex reaches projected capacity. For each random realization of the model, values of the simulated parameters calculated with the Monte Carlo method yield an optimal minimum well flow rate, and the ensemble of realizations allows the stability region of the solution to be determined.
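
    A hedged sketch of the Monte Carlo step described above: sample the uncertain gas price, compute a simple NPV for a candidate flow rate at the given discount rate, and estimate how often the project is effective. All numbers below are assumed stand-ins:

        import numpy as np

        rng = np.random.default_rng(42)
        n, years = 10_000, 15
        discount = 0.12                    # weighted cost of debt/equity + risk premium
        capex = 5.0e8                      # upfront investment ($), assumed
        flow = 2.0e8                       # annual gas output (m3/yr) for a candidate rate
        price = rng.normal(0.10, 0.02, n)  # $/m3, assumed distribution
        t = np.arange(1, years + 1)

        # Discounted revenue over the project horizon, one row per realization.
        npv = (price[:, None] * flow / (1 + discount) ** t).sum(axis=1) - capex
        print("P(NPV > 0) =", (npv > 0).mean())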

  17. Understanding large multiprotein complexes: applying a multiple allosteric networks model to explain the function of the Mediator transcription complex.

    Science.gov (United States)

    Lewis, Brian A

    2010-01-15

    The regulation of transcription and of many other cellular processes involves large multi-subunit protein complexes. In the context of transcription, it is known that these complexes serve as regulatory platforms that connect activator DNA-binding proteins to a target promoter. However, there is still a lack of understanding regarding the function of these complexes. Why do multi-subunit complexes exist? What is the molecular basis of the function of their constituent subunits, and how are these subunits organized within a complex? What is the reason for physical connections between certain subunits and not others? In this article, I address these issues through a model of network allostery and its application to the eukaryotic RNA polymerase II Mediator transcription complex. The multiple allosteric networks model (MANM) suggests that protein complexes such as Mediator exist not only as physical but also as functional networks of interconnected proteins through which information is transferred from subunit to subunit by the propagation of an allosteric state known as conformational spread. Additionally, there are multiple distinct sub-networks within the Mediator complex that can be defined by their connections to different subunits; these sub-networks have discrete functions that are activated when specific subunits interact with other activator proteins.

  18. The effects of model complexity and calibration period on groundwater recharge simulations

    Science.gov (United States)

    Moeck, Christian; Van Freyberg, Jana; Schirmer, Mario

    2017-04-01

    A significant number of groundwater recharge models exist that vary in complexity (i.e., structure and parametrization). Typically, model selection and conceptualization are very subjective and can be a key source of uncertainty in recharge simulations. Another source of uncertainty is the implicit assumption that model parameters calibrated over historical periods are also valid for the simulation period. To the best of our knowledge, there is no systematic evaluation of the effect of model complexity and calibration strategy on the performance of recharge models. To address this gap, we utilized a long-term (20-year) recharge data set from a large weighing lysimeter. We performed a differential split-sample test with four groundwater recharge models of varying complexity. They were calibrated over six calibration periods with climatically contrasting conditions in a constrained Monte Carlo approach. Despite the climatically contrasting conditions, all models performed similarly well during calibration. During validation, however, a clear effect of model structure on model performance was evident. The more complex, physically based models predicted recharge best, even when the calibration and prediction periods had very different climatic conditions. In contrast, the simpler soil-water balance and lumped models performed poorly under such conditions, and for these models we found a strong dependency on the chosen calibration period. In particular, our analysis showed that this can have relevant implications when recharge models are used as decision-making tools in a broad range of applications (e.g., water availability, climate change impact studies, water resource management, etc.).
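
    The differential split-sample idea can be sketched with a one-parameter toy recharge model: draw parameter values in a Monte Carlo loop, keep the best fit on the calibration period, and score it on a contrasting validation period. The data below are synthetic, not the lysimeter record:

        import numpy as np

        rng = np.random.default_rng(1)
        precip = rng.gamma(2.0, 2.0, 7300)                  # daily precipitation, 20 yr
        recharge = 0.35 * precip + rng.normal(0, 0.3, precip.size)  # "observed" recharge

        cal, val = slice(0, 3650), slice(3650, 7300)        # contrasting halves
        f_samples = rng.uniform(0.0, 1.0, 5000)             # constrained Monte Carlo draws
        rmse_cal = [np.sqrt(np.mean((recharge[cal] - f * precip[cal]) ** 2))
                    for f in f_samples]
        f_best = f_samples[int(np.argmin(rmse_cal))]
        rmse_val = np.sqrt(np.mean((recharge[val] - f_best * precip[val]) ** 2))
        print(f"best f = {f_best:.3f}, validation RMSE = {rmse_val:.3f}")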

  19. Sensitivity of the Gravity Recovery and Climate Experiment (GRACE) to the complexity of aquifer systems for monitoring of groundwater

    Science.gov (United States)

    Katpatal, Yashwant B.; Rishma, C.; Singh, Chandan K.

    2018-05-01

    The Gravity Recovery and Climate Experiment (GRACE) satellite mission is aimed at the assessment of groundwater storage under different terrestrial conditions. The main objective of the presented study is to highlight the significance of aquifer complexity for the performance of GRACE in monitoring groundwater. The Vidarbha region of Maharashtra, central India, was selected as the study area, since it comprises a simple aquifer system in the west and a complex aquifer system in the east. Groundwater-level trends of the different aquifer systems and the spatial and temporal variation of the terrestrial water storage anomaly were analysed to understand the groundwater scenario. For the field application of GRACE, four pixels of the GRACE output with different aquifer systems were selected, each encompassing 50-90 monitoring wells. Groundwater storage anomalies (GWSA) were derived for each pixel for the period 2002 to 2015 using the Release 05 (RL05) monthly GRACE gravity models and the Global Land Data Assimilation System (GLDAS) land-surface models (GWSAGRACE), as well as from the actual field data (GWSAActual). Correlation analysis between GWSAGRACE and GWSAActual was performed using linear regression. The Pearson and Spearman methods show that the performance of GRACE is good in the region with simple aquifers but poorer in the region with multiple aquifer systems. The study highlights the importance of incorporating the sensitivity of GRACE to aquifer complexity when estimating groundwater storage in complex aquifer systems in future studies.
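
    The correlation step is straightforward to reproduce; the sketch below applies scipy's Pearson and Spearman tests to synthetic monthly GWSA series standing in for one pixel:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        months = 168  # 2002-2015, monthly
        gwsa_actual = np.sin(np.linspace(0, 8 * np.pi, months)) + rng.normal(0, 0.3, months)
        gwsa_grace = 0.8 * gwsa_actual + rng.normal(0, 0.5, months)  # noisier satellite estimate

        r, p_r = stats.pearsonr(gwsa_grace, gwsa_actual)
        rho, p_rho = stats.spearmanr(gwsa_grace, gwsa_actual)
        print(f"Pearson r = {r:.2f} (p = {p_r:.1e}), Spearman rho = {rho:.2f} (p = {p_rho:.1e})")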

  20. Entropy, complexity, and Markov diagrams for random walk cancer models.

    Science.gov (United States)

    Newton, Paul K; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter

    2014-12-19

    The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer corresponds to a directed graph model in which nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values (skin, breast, kidney, and lung cancers being prototypical high-entropy cancers; stomach, uterine, pancreatic and ovarian being mid-level entropy cancers; and colorectal, cervical, bladder, and prostate cancers being prototypical low-entropy cancers) provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.
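
    The entropy calculation can be illustrated on a toy transition matrix (three anatomical sites, made-up probabilities); the steady state is the left eigenvector of the matrix for eigenvalue 1:

        import numpy as np

        P = np.array([[0.1, 0.6, 0.3],   # toy 3-site transition matrix (rows sum to 1)
                      [0.4, 0.2, 0.4],
                      [0.3, 0.5, 0.2]])

        # Steady state: left eigenvector of P for eigenvalue 1, normalized.
        w, v = np.linalg.eig(P.T)
        pi = np.real(v[:, np.argmin(np.abs(w - 1))])
        pi /= pi.sum()

        entropy = -np.sum(pi * np.log2(pi))
        print("steady state:", np.round(pi, 3), " entropy (bits):", round(entropy, 3))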

  1. The use of workflows in the design and implementation of complex experiments in macromolecular crystallography

    International Nuclear Information System (INIS)

    Brockhauser, Sandor; Svensson, Olof; Bowler, Matthew W.; Nanao, Max; Gordon, Elspeth; Leal, Ricardo M. F.; Popov, Alexander; Gerring, Matthew; McCarthy, Andrew A.; Gotz, Andy

    2012-01-01

    A powerful and easy-to-use workflow environment has been developed at the ESRF for combining experiment control with online data analysis on synchrotron beamlines. This tool provides the possibility of automating complex experiments without the need for expertise in instrumentation control and programming, but rather by accessing defined beamline services. The automation of beam delivery, sample handling and data analysis, together with increasing photon flux, diminishing focal spot size and the appearance of fast-readout detectors on synchrotron beamlines, have changed the way that many macromolecular crystallography experiments are planned and executed. Screening for the best diffracting crystal, or even the best diffracting part of a selected crystal, has been enabled by the development of microfocus beams, precise goniometers and fast-readout detectors that all require rapid feedback from the initial processing of images in order to be effective. All of these advances require the coupling of data feedback to the experimental control system and depend on immediate online data-analysis results during the experiment. To facilitate this, a Data Analysis WorkBench (DAWB) for the flexible creation of complex automated protocols has been developed. Here, example workflows designed and implemented using DAWB are presented for enhanced multi-step crystal characterizations, experiments involving crystal reorientation with kappa goniometers, crystal-burning experiments for empirically determining the radiation sensitivity of a crystal system and the application of mesh scans to find the best location of a crystal to obtain the highest diffraction quality. Beamline users interact with the prepared workflows through a specific brick within the beamline-control GUI MXCuBE

  2. Micro-meteorological data from the Guardo dispersion experiment in complex terrain

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, M.; Mikkelsen, T.

    1992-11-01

    The present report contains micrometeorological data from an atmospheric dispersion experiment in complex terrain. The experiment took place near the Guardo power plant, Palencia, Spain under various atmospheric conditions during the month of November 1990. It consisted of 14 tracer releases either from the power plant chimney or from the valley floor north of the town. Two kinds of observations are presented: (1) The 25 m meteorological mast at the Vivero site in the central part of the experimental area measured surface-layer profiles of wind velocity, wind direction, temperature and thermal stability together with turbulent wind and temperature fluctuations at the top level. (2) A radiosonde on a tethered balloon was launched at Camporredondo de Alba in the northern part of the area and measured boundary-layer profiles of pressure, temperature, humidity, wind speed and wind direction. (au) (4 tabs., 227 ills., 7 refs.).

  3. Estimating the complexity of 3D structural models using machine learning methods

    Science.gov (United States)

    Mejía-Herrera, Pablo; Kakurina, Maria; Royer, Jean-Jacques

    2016-04-01

    Quantifying the complexity of 3D geological structural models can play a major role in natural resources exploration surveys, in predicting environmental hazards, and in forecasting fossil resources. This paper proposes a structural complexity index which can be used to help define the degree of effort necessary to build a 3D model to a given degree of confidence, and also to identify locations where additional effort is required to meet a given acceptable risk of uncertainty. In this work, it is considered that the structural complexity index can be estimated using machine learning methods on raw geo-data. More precisely, the metric for measuring complexity can be approximated as the degree of difficulty associated with predicting the distribution of geological objects from partial information on the actual structural distribution of materials. The proposed methodology is tested on a set of 3D synthetic structural models, for which the degree of effort during their building is assessed using various parameters (such as the number of faults, the number of parts in a surface object, the number of borders, ...), the rank of the geological elements contained in each model, and, finally, their level of deformation (folding and faulting). The results show how the estimated complexity of a 3D model can be approximated by the quantity of partial data necessary to simulate, at a given precision and without error, the actual 3D model using machine learning algorithms.

  4. Assessment of wear dependence parameters in complex model of cutting tool wear

    Science.gov (United States)

    Antsev, A. V.; Pasko, N. I.; Antseva, N. V.

    2018-03-01

    This paper addresses the wear dependence in the generic efficient life period of cutting tools, taken as an aggregate of the law of tool wear rate distribution and the dependence of this law's parameters on the cutting mode, factoring in random effects, as exemplified by the complex model of wear. The complex model of wear takes into account the variance of cutting properties within one batch of tools, the variance in machinability within one batch of workpieces, and the stochastic nature of the wear process itself. A technique for assessing the wear dependence parameters in a complex model of cutting tool wear is provided and supported by a numerical example.

  5. Designing Experiments to Discriminate Families of Logic Models.

    Science.gov (United States)

    Videla, Santiago; Konokotina, Irina; Alexopoulos, Leonidas G; Saez-Rodriguez, Julio; Schaub, Torsten; Siegel, Anne; Guziolowski, Carito

    2015-01-01

    Logic models are a promising way of building effective in silico functional models of a cell, in particular of its signaling pathways. The automated learning of Boolean logic models describing signaling pathways can be achieved by training to phosphoproteomics data, which is particularly useful if the data are measured upon different combinations of perturbations in a high-throughput fashion. In practice, however, the number and type of allowed perturbations are not exhaustive, and experimental data are unavoidably subject to noise. As a result, the learning process yields a family of feasible logical networks rather than a single model. This family is composed of logic models implementing different internal wirings for the system, and therefore the predictions of experiments from this family may present a significant level of variability, and hence uncertainty. In this paper, we introduce a method based on Answer Set Programming to propose an optimal experimental design that aims to narrow down the variability (in terms of input-output behaviors) within families of logical models learned from experimental data. We study how the fit to the data can be improved after an optimal selection of signaling perturbations, and how optimal logic models can be learned with a minimal number of experiments. The methods are applied to signaling pathways in human liver cells and phosphoproteomics experimental data. Using 25% of the experiments, we obtained logical models with fitness scores (mean squared error) within 15% of those obtained using all experiments, illustrating the impact that our approach can have on the design of experiments for efficient model calibration.
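
    The core idea of discriminating experiments can be sketched without Answer Set Programming: given a family of candidate Boolean models, choose the perturbation on which their predictions disagree most evenly. The three toy models below are ours, not the paper's networks:

        from itertools import product

        # Candidate Boolean models: perturbation (a, b, c) -> readout.
        models = [
            lambda a, b, c: a and (b or c),
            lambda a, b, c: a and b,
            lambda a, b, c: (a or b) and not c,
        ]

        def disagreement(inp):
            """Most informative when the family's predictions split evenly."""
            outs = [m(*inp) for m in models]
            return min(sum(outs), len(outs) - sum(outs))

        best = max(product([0, 1], repeat=3), key=disagreement)
        print("most discriminating perturbation (a, b, c):", best,
              " predictions:", [m(*best) for m in models])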

  6. Large scale FCI experiments in subassembly geometry. Test facility and model experiments

    International Nuclear Information System (INIS)

    Beutel, H.; Gast, K.

    A program is outlined for the study of fuel/coolant interaction under SNR conditions. The program consists of (a) underwater explosion experiments with full-size models of the SNR core, in which the fuel/coolant system is simulated by a pyrotechnic mixture, and (b) large-scale fuel/coolant interaction experiments with up to 5 kg of molten UO2 interacting with liquid sodium at 300 °C to 600 °C in a highly instrumented test facility simulating an SNR subassembly. The experimental results will be compared with theoretical models under development at Karlsruhe. Commencement of the experiments is expected at the beginning of 1975

  7. Mathematical model and coordination algorithms for ensuring complex security of an organization

    Science.gov (United States)

    Novoseltsev, V. I.; Orlova, D. E.; Dubrovin, A. S.; Irkhin, V. P.

    2018-03-01

    The mathematical model of coordination for ensuring the complex security of an organization is considered. On the basis of a random search method, three types of effective coordination algorithms, each adequate to a different level of mismatch concerning security, are developed: a coordination algorithm where the coordinator's instructions dominate; a coordination algorithm where the performers' decisions dominate; and a coordination algorithm with parity between the interests of the coordinator and the performers. The convergence of these algorithms was assessed by means of a computational experiment. The described coordination algorithms possess the convergence property in the sense stated above, and the following regularity is revealed: the structurally simpler the algorithm, the fewer iterations are needed for its convergence.

  8. Strain Localization and Weakening Processes in Viscously Deforming Rocks: Numerical Modeling Based on Laboratory Torsion Experiments

    Science.gov (United States)

    Doehmann, M.; Brune, S.; Nardini, L.; Rybacki, E.; Dresen, G.

    2017-12-01

    Strain localization is a ubiquitous process in earth materials, observed over a broad range of scales in space and time. Localized deformation and the formation of shear zones and faults typically involve material softening by various processes, such as shear heating and grain size reduction. Numerical modeling enables us to study these complex physical and chemical weakening processes by separating the effects of individual parameters and boundary conditions. Using simple piece-wise linear functions for the parametrization of weakening processes allows a system to be studied at a chosen (lower) level of complexity (e.g. Cyprych et al., 2016). In this study, we utilize a finite element model to test two weakening laws that reduce the strength of the material depending on either (I) the amount of accumulated strain or (II) the deformational work. Our 2D Cartesian models are benchmarked against single-inclusion torsion experiments performed at elevated temperatures of 900 °C and pressures of up to 400 MPa (Rybacki et al., 2014). The experiments were performed on Carrara marble samples containing a weak Solnhofen limestone inclusion at a maximum strain rate of 2.0 × 10⁻⁴ s⁻¹. Our models are designed to reproduce shear deformation of a hollow cylinder equivalent to the laboratory setup, such that, with periodic boundary conditions, material leaving one side of the model in the shear direction re-enters on the opposite side. As in the laboratory tests, we applied constant strain rate and constant stress boundary conditions. We use our model to investigate the time-dependent distribution of stress and strain and the effect of different parameters. For instance, inclusion rotation is shown to depend strongly on the viscosity ratio between matrix and inclusion, and stronger ductile weakening increases the localization rate while decreasing the shear zone width. The most suitable weakening law for the representation of ductile rock is determined by combining the results of parameter tests with
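
    Weakening law (I) can be parametrized with a piece-wise linear ramp in accumulated strain, in the spirit described above; the thresholds and viscosity values below are assumed:

        import numpy as np

        def weakened_viscosity(strain, eta0=1e19, weak_factor=0.1,
                               strain_on=0.5, strain_off=1.5):
            """Viscosity (Pa s) vs accumulated strain: constant, linear ramp, constant."""
            frac = np.clip((strain - strain_on) / (strain_off - strain_on), 0.0, 1.0)
            return eta0 * (1.0 - frac * (1.0 - weak_factor))

        for s in (0.0, 1.0, 2.0):
            print(f"strain {s}: eta = {weakened_viscosity(s):.2e} Pa s")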

  9. Uncertainty and validation. Effect of model complexity on uncertainty estimates

    International Nuclear Information System (INIS)

    Elert, M.

    1996-09-01

    In the Model Complexity subgroup of BIOMOVS II, models of varying complexity have been applied to the problem of downward transport of radionuclides in soils. A scenario describing a case of surface contamination of a pasture soil was defined. Three radionuclides with different environmental behavior and radioactive half-lives were considered: Cs-137, Sr-90 and I-129. The intention was to give a detailed specification of the parameters required by the different kinds of model, together with reasonable values for the parameter uncertainty. A total of seven modelling teams participated in the study, using 13 different models. Four of the modelling groups performed uncertainty calculations using nine different modelling approaches. The models used range in complexity from analytical solutions of a 2-box model using annual average data to numerical models coupling hydrology and transport using data varying on a daily basis. The complex models needed to consider all aspects of radionuclide transport in a soil with variable hydrology are often impractical to use in safety assessments; instead, simpler models, often box models, are preferred. The comparison of predictions made with the complex models and with the simple models for this scenario shows that the predictions are in many cases very similar, e.g. in the predicted evolution of the root-zone concentration. In other cases, however, differences of many orders of magnitude can appear; one example is the prediction of the flux to the groundwater of radionuclides being transported through the soil column. Some issues that have come into focus in this study: there are large differences in the predicted soil hydrology and, as a consequence, also in the radionuclide transport, which suggests that there are large uncertainties in the calculation of effective precipitation and evapotranspiration. The approach used for modelling the water transport in the root zone has an impact on the predictions of the decline in root

  10. Complex Road Intersection Modelling Based on Low-Frequency GPS Track Data

    Science.gov (United States)

    Huang, J.; Deng, M.; Zhang, Y.; Liu, H.

    2017-09-01

    It is widely accepted that digital maps have become an indispensable guide for human daily travel. Traditional road network maps are produced in time-consuming and labour-intensive ways, such as digitizing printed maps and extraction from remote sensing images. At present, the large volume of GPS trajectory data collected by floating vehicles makes it practical to extract highly detailed and up-to-date road network information. Road intersections are often accident-prone areas and are critical to route planning, and the connectivity of road networks is mainly determined by the topological geometry of road intersections. A few studies have paid attention to detecting complex road intersections and mining the attached traffic information (e.g., connectivity, topology and turning restrictions) from massive GPS traces. To the authors' knowledge, recent studies have mainly used high-frequency (1 s sampling rate) trajectory data to detect crossroad regions or to extract rough intersection models, and it remains difficult to use low-frequency (20-100 s), easily available trajectory data to model complex road intersections geometrically and semantically. This paper thus attempts to construct precise models of complex road intersections from low-frequency GPS traces. We propose to first extract the complex road intersections with an LCSS-based (Longest Common Subsequence) trajectory clustering method, then delineate the geometric shapes of the complex road intersections with a K-segment principal curve algorithm, and finally infer the traffic constraint rules inside the complex intersections.
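
    The LCSS similarity used for the trajectory clustering can be sketched as a standard dynamic program in which two GPS points match when they lie within a spatial threshold eps; the normalization and threshold values below are common choices, not necessarily the authors':

        def lcss(traj_a, traj_b, eps=50.0):
            """traj_*: lists of (x, y) in metres; returns normalized LCSS similarity."""
            n, m = len(traj_a), len(traj_b)
            dp = [[0] * (m + 1) for _ in range(n + 1)]
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    (xa, ya), (xb, yb) = traj_a[i - 1], traj_b[j - 1]
                    if ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5 <= eps:
                        dp[i][j] = dp[i - 1][j - 1] + 1   # points match within eps
                    else:
                        dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
            return dp[n][m] / min(n, m)

        a = [(0, 0), (100, 10), (200, 20), (300, 30)]
        b = [(5, 5), (110, 15), (290, 35)]
        print("LCSS similarity:", round(lcss(a, b), 2))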

  11. Theory and experiments in model-based space system anomaly management

    Science.gov (United States)

    Kitts, Christopher Adam

    This research program consists of an experimental study of model-based reasoning methods for detecting, diagnosing and resolving anomalies that occur when operating a comprehensive space system. Using a first principles approach, several extensions were made to the existing field of model-based fault detection and diagnosis in order to develop a general theory of model-based anomaly management. Based on this theory, a suite of algorithms were developed and computationally implemented in order to detect, diagnose and identify resolutions for anomalous conditions occurring within an engineering system. The theory and software suite were experimentally verified and validated in the context of a simple but comprehensive, student-developed, end-to-end space system, which was developed specifically to support such demonstrations. This space system consisted of the Sapphire microsatellite which was launched in 2001, several geographically distributed and Internet-enabled communication ground stations, and a centralized mission control complex located in the Space Technology Center in the NASA Ames Research Park. Results of both ground-based and on-board experiments demonstrate the speed, accuracy, and value of the algorithms compared to human operators, and they highlight future improvements required to mature this technology.

  12. Trispyrazolylborate Complexes: An Advanced Synthesis Experiment Using Paramagnetic NMR, Variable-Temperature NMR, and EPR Spectroscopies

    Science.gov (United States)

    Abell, Timothy N.; McCarrick, Robert M.; Bretz, Stacey Lowery; Tierney, David L.

    2017-01-01

    A structured inquiry experiment for inorganic synthesis has been developed to introduce undergraduate students to advanced spectroscopic techniques including paramagnetic nuclear magnetic resonance and electron paramagnetic resonance. Students synthesize multiple complexes with unknown first row transition metals and identify the unknown metals by…

  13. Structured analysis and modeling of complex systems

    Science.gov (United States)

    Strome, David R.; Dalrymple, Mathieu A.

    1992-01-01

    The Aircrew Evaluation Sustained Operations Performance (AESOP) facility at Brooks AFB, Texas, combines the realism of an operational environment with the control of a research laboratory. In recent studies we collected extensive data from the Airborne Warning and Control Systems (AWACS) Weapons Directors subjected to high and low workload Defensive Counter Air Scenarios. A critical and complex task in this environment involves committing a friendly fighter against a hostile fighter. Structured Analysis and Design techniques and computer modeling systems were applied to this task as tools for analyzing subject performance and workload. This technology is being transferred to the Man-Systems Division of NASA Johnson Space Center for application to complex mission related tasks, such as manipulating the Shuttle grappler arm.

  14. Modeling complexity in engineered infrastructure system: Water distribution network as an example

    Science.gov (United States)

    Zeng, Fang; Li, Xiang; Li, Ke

    2017-02-01

    The complex topology and adaptive behavior of infrastructure systems are driven both by self-organization of demand and by rigid engineering solutions. Engineering complex systems therefore requires a method balancing holism and reductionism. To model the growth of water distribution networks, a complex network model was developed that combines local optimization rules with engineering considerations. The demand node generation is dynamic and follows the scaling law of urban growth. The proposed model can generate a water distribution network (WDN) similar to reported real-world WDNs in terms of several structural properties. Comparison with different modeling approaches indicates that a realistic demand node distribution and the co-evolution of demand nodes and network are important for the simulation of real complex networks. The simulation results indicate that the efficiency of water distribution networks is exponentially affected by the urban growth pattern. By contrast, the improvement in efficiency achievable by engineering optimization is limited and relatively insignificant. The redundancy and robustness, on the other hand, can be significantly improved through engineering methods.

  15. Variation in predicted internal concentrations in relation to PBPK model complexity for rainbow trout

    Energy Technology Data Exchange (ETDEWEB)

    Salmina, E.S.; Wondrousch, D. [UFZ Department of Ecological Chemistry, Helmholtz Centre for Environmental Research, Permoserstr. 15, 04318 Leipzig (Germany); Institute for Organic Chemistry, Technical University Bergakademie Freiberg, Leipziger Str. 29, 09596 Freiberg (Germany); Kühne, R. [UFZ Department of Ecological Chemistry, Helmholtz Centre for Environmental Research, Permoserstr. 15, 04318 Leipzig (Germany); Potemkin, V.A. [Department of Chemistry, South Ural State Medical University, Vorovskogo 64, 454048, Chelyabinsk (Russian Federation); Schüürmann, G. [UFZ Department of Ecological Chemistry, Helmholtz Centre for Environmental Research, Permoserstr. 15, 04318 Leipzig (Germany); Institute for Organic Chemistry, Technical University Bergakademie Freiberg, Leipziger Str. 29, 09596 Freiberg (Germany)

    2016-04-15

    The present study is motivated by the increasing demand to consider internal partitioning into tissues, instead of exposure concentrations, for environmental toxicity assessment. To this end, physiologically based pharmacokinetic (PBPK) models can be applied. We evaluated the variation in accuracy of PBPK model outcomes depending on the tissue constituents modeled as sorptive phases and the chemical distribution tendencies addressed by molecular descriptors. The model performance was examined using data from 150 experiments for 28 chemicals collected from US EPA databases. The simplest PBPK model is based on the “Kow-lipid content” approach traditional in environmental toxicology. The most elaborate one considers five biological sorptive phases (polar and non-polar lipids, water, albumin and the remaining proteins) and makes use of LSER (linear solvation energy relationship) parameters to describe the compound partitioning behavior. The “Kow-lipid content”-based PBPK model shows more than one order of magnitude difference between predicted and measured values for 37% of the studied exposure experiments, while for the most elaborate model this happens for only 7%. It is shown that further improvements could be achieved by introducing corrections for metabolic biotransformation and for the hindrance of compound transmission through the cellular membrane. The analysis of the interface distribution tendencies shows that polar tissue constituents, namely water, polar lipids and proteins, play an important role in the accumulation behavior of polar compounds with H-bond-donating functional groups. For compounds without H-bond-donating fragments, the preferred accumulation phases are storage lipids and water, depending on compound polarity. - Highlights: • For reliable predictions, models of a certain complexity should be compared. • For reliable predictions non-lipid fish tissue constituents should be considered. • H-donor compounds preferably accumulate in water

  16. Variation in predicted internal concentrations in relation to PBPK model complexity for rainbow trout

    International Nuclear Information System (INIS)

    Salmina, E.S.; Wondrousch, D.; Kühne, R.; Potemkin, V.A.; Schüürmann, G.

    2016-01-01

    The present study is motivated by the increasing demand to consider internal partitioning into tissues, instead of exposure concentrations, for environmental toxicity assessment. To this end, physiologically based pharmacokinetic (PBPK) models can be applied. We evaluated the variation in accuracy of PBPK model outcomes depending on the tissue constituents modeled as sorptive phases and the chemical distribution tendencies addressed by molecular descriptors. The model performance was examined using data from 150 experiments for 28 chemicals collected from US EPA databases. The simplest PBPK model is based on the “Kow-lipid content” approach traditional in environmental toxicology. The most elaborate one considers five biological sorptive phases (polar and non-polar lipids, water, albumin and the remaining proteins) and makes use of LSER (linear solvation energy relationship) parameters to describe the compound partitioning behavior. The “Kow-lipid content”-based PBPK model shows more than one order of magnitude difference between predicted and measured values for 37% of the studied exposure experiments, while for the most elaborate model this happens for only 7%. It is shown that further improvements could be achieved by introducing corrections for metabolic biotransformation and for the hindrance of compound transmission through the cellular membrane. The analysis of the interface distribution tendencies shows that polar tissue constituents, namely water, polar lipids and proteins, play an important role in the accumulation behavior of polar compounds with H-bond-donating functional groups. For compounds without H-bond-donating fragments, the preferred accumulation phases are storage lipids and water, depending on compound polarity. - Highlights: • For reliable predictions, models of a certain complexity should be compared. • For reliable predictions non-lipid fish tissue constituents should be considered. • H-donor compounds preferably accumulate in water, polar

  17. Erratum to: Modeling of complex interfaces for pendant drop experiments (Rheologica Acta, 55, 10, (801-822), 10.1007/s00397-016-0956-1)

    NARCIS (Netherlands)

    Balemans, C.; Hulsen, M.A.; Tervoort, T.A.; Anderson, P.D.

    2017-01-01

    The original version of this article unfortunately contained mistakes. Theo A. Tervoort was not listed among the authors. The correct information is given above. In Balemans et al. (2016), an axisymmetric finite element model is presented to study the behaviour of complex interfaces in pendant drop

  18. A computational framework for modeling targets as complex adaptive systems

    Science.gov (United States)

    Santos, Eugene; Santos, Eunice E.; Korah, John; Murugappan, Vairavan; Subramanian, Suresh

    2017-05-01

    Modeling large military targets is a challenge as they can be complex systems encompassing myriad combinations of human, technological, and social elements that interact, leading to complex behaviors. Moreover, such targets have multiple components and structures, extending across multiple spatial and temporal scales, and are in a state of change, either in response to events in the environment or changes within the system. Complex adaptive system (CAS) theory can help in capturing the dynamism, interactions, and more importantly various emergent behaviors, displayed by the targets. However, a key stumbling block is incorporating information from various intelligence, surveillance and reconnaissance (ISR) sources, while dealing with the inherent uncertainty, incompleteness and time criticality of real world information. To overcome these challenges, we present a probabilistic reasoning network based framework called complex adaptive Bayesian Knowledge Base (caBKB). caBKB is a rigorous, overarching and axiomatic framework that models two key processes, namely information aggregation and information composition. While information aggregation deals with the union, merger and concatenation of information and takes into account issues such as source reliability and information inconsistencies, information composition focuses on combining information components where such components may have well defined operations. Since caBKBs can explicitly model the relationships between information pieces at various scales, it provides unique capabilities such as the ability to de-aggregate and de-compose information for detailed analysis. Using a scenario from the Network Centric Operations (NCO) domain, we will describe how our framework can be used for modeling targets with a focus on methodologies for quantifying NCO performance metrics.

  19. Model-based safety architecture framework for complex systems

    NARCIS (Netherlands)

    Schuitemaker, Katja; Rajabali Nejad, Mohammadreza; Braakhuis, J.G.; Podofillini, Luca; Sudret, Bruno; Stojadinovic, Bozidar; Zio, Enrico; Kröger, Wolfgang

    2015-01-01

    The shift to transparency and rising need of the general public for safety, together with the increasing complexity and interdisciplinarity of modern safety-critical Systems of Systems (SoS) have resulted in a Model-Based Safety Architecture Framework (MBSAF) for capturing and sharing architectural

  20. Modeling and simulation for fewer-axis grinding of complex surface

    Science.gov (United States)

    Li, Zhengjian; Peng, Xiaoqiang; Song, Ci

    2017-10-01

    As the basis of fewer-axis grinding of complex surfaces, the grinding mathematical model is of great importance. A mathematical model of the grinding wheel was established, from which the coordinates and normal vector of the wheel profile can be calculated. Through normal-vector matching at the cutter contact point and coordinate-system transformation, the grinding mathematical model was established to work out the coordinates of the cutter location point. Based on the model, an interference analysis was simulated to find the right position and posture of the workpiece for grinding. Then the positioning errors of the workpiece, including the translational positioning error and the rotational positioning error, were analyzed, and the main locating datum was obtained. According to the analysis results, the grinding tool path was planned and generated to grind the complex surface, and good form accuracy was obtained. The grinding mathematical model is simple, feasible and can be widely applied.

  1. Nonlinear model of epidemic spreading in a complex social network.

    Science.gov (United States)

    Kosiński, Robert A; Grabowski, A

    2007-10-01

    The spreading of an epidemic in a human society is a complex process that can be described on the basis of a nonlinear mathematical model. In such an approach, the complex and hierarchical structure of the social network (which has implications for the spreading of pathogens and can be treated as a complex network) can be taken into account. In our model each individual is in one of the permitted states: susceptible, infected, infective, unsusceptible or dead. This refers to the SEIR model used in epidemiology. The state of an individual changes in time, depending on the previous state and the interactions with other individuals. The description of the interpersonal contacts is based on experimental observations of the social relations in the community, including the spatial localization of the individuals and the hierarchical structure of interpersonal interactions. Numerical simulations were performed for different types of epidemics, giving the progress of the spreading process and typical relationships (e.g. range of the epidemic in time, the epidemic curve). The spreading process has a complex and spatially chaotic character. The time dependence of the number of infective individuals shows the nonlinear character of the spreading process. We investigate the influence of preventive vaccinations on the spreading process; in particular, for a critical fraction of preventively vaccinated individuals the percolation threshold is observed and the epidemic is suppressed.
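
    A minimal network version of the state machine described above (susceptible -> infected/latent -> infective -> unsusceptible) can be sketched as follows; the random contact graph and all rates are assumptions, not the paper's hierarchical social topology:

        import random

        random.seed(3)
        N, p_edge, p_inf = 200, 0.03, 0.1   # population, contact density, daily infection prob.
        LATENT, INFECTIVE_T = 2, 5          # days spent latent / infective, assumed
        nbrs = {i: [j for j in range(N) if j != i and random.random() < p_edge]
                for i in range(N)}          # random (directed) contact graph

        state = {i: "S" for i in range(N)}  # S, E (infected/latent), I (infective), R
        timer = {i: 0 for i in range(N)}
        state[0] = "I"                      # index case
        for day in range(60):
            new = []
            for i in range(N):
                if state[i] == "I":         # infectives expose susceptible contacts
                    new += [j for j in nbrs[i] if state[j] == "S" and random.random() < p_inf]
            for i in range(N):
                timer[i] += state[i] in ("E", "I")
                if state[i] == "E" and timer[i] >= LATENT:
                    state[i], timer[i] = "I", 0
                elif state[i] == "I" and timer[i] >= INFECTIVE_T:
                    state[i] = "R"
            for j in set(new):
                state[j], timer[j] = "E", 0
        print("final sizes:", {s: sum(v == s for v in state.values()) for s in "SEIR"})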

  2. Rapid prototyping of a complex model for the manufacture of plaster molds for slip casting ceramic

    Directory of Open Access Journals (Sweden)

    D. P. C. Velazco

    2014-12-01

    Full Text Available Computer-assisted design (CAD) has been well known for several decades and has been employed in ceramic manufacturing almost from the beginning, but usually only in the early, ideational part of the design process, not in the prototyping or manufacturing stages. Rapid prototyping machines, also known as 3D printers, can produce in a few hours real pieces in plastic materials of high resistance, with great precision and great similarity with respect to the original, based on digital models produced with specific design software or through the digitalization of existing parts using so-called 3D scanners. The main objective of this work is to develop a methodology for the entire process of building a ceramic piece, drawing on the interplay between traditional techniques and new prototyping technologies, and to take advantage of the benefits that this new reproduction technology allows. The experience was based on the generation of a complex piece, in digital format, which served as the model. A regular 15 cm icosahedron presented features complex enough to discourage producing the model by the traditional (manual or mechanical) techniques of ceramics. From this digital model, a plaster mold was made in the traditional way in order to slip cast clay-based slurries, freely dried in air, and fired and glazed in the traditional way. This experience confirmed the working hypothesis and opens up new lines of work at academic and technological levels that will be explored in the near future. This technology provides a wide range of options for addressing the formal aspect of a piece to be produced in the fields of design, architecture, industrial design, traditional pottery, ceramic art, etc., which amplify the formal possibilities and save time, and therefore costs, when drafting the necessary and appropriate matrixes.

  3. Service user experiences of REFOCUS: a process evaluation of a pro-recovery complex intervention.

    Science.gov (United States)

    Wallace, Genevieve; Bird, Victoria; Leamy, Mary; Bacon, Faye; Le Boutillier, Clair; Janosik, Monika; MacPherson, Rob; Williams, Julie; Slade, Mike

    2016-09-01

    Policy is increasingly focused on implementing a recovery orientation within mental health services, yet the subjective experience of individuals receiving a pro-recovery intervention is under-studied. The aim of this study was to explore the service user experience of receiving a complex, pro-recovery intervention (REFOCUS), which aimed to encourage the use of recovery-supporting tools and to support recovery-promoting relationships. Interviews (n = 24) and two focus groups (n = 13) were conducted as part of a process evaluation and included a purposive sample of service users who received the complex, pro-recovery intervention within the REFOCUS randomised controlled trial (ISRCTN02507940). Thematic analysis was used to analyse the data. Participants reported that the intervention supported the development of an open and collaborative relationship with staff, with new conversations around values, strengths and goals. This was experienced as hope-inspiring and empowering. However, others described how the recovery tools were used without context, meaning participants were unclear about their purpose and did not see their benefit. During the interviews, some individuals struggled to report any new tasks or conversations occurring during the intervention. Recovery-supporting tools can support the development of a recovery-promoting relationship, which can contribute to positive outcomes for individuals. The tools should be used in a collaborative and flexible manner. Information exchanged around values, strengths and goals should be used in care planning. As some service users struggled to report their experience of the intervention, alternative evaluation approaches need to be considered if the service user experience is to be fully captured.

  4. TF insert experiment log book. 2nd Experiment of CS model coil

    International Nuclear Information System (INIS)

    Sugimoto, Makoto; Isono, Takaaki; Matsui, Kunihiro

    2001-12-01

    The cool-down of the CS model coil and TF insert started on August 20, 2001. It took almost one month, and coil charging started on September 17, 2001. The charge test of the TF insert and CS model coil was completed on October 19, 2001. In this campaign the total number of shots was 88, and the size of the data files in the DAS (Data Acquisition System) was about 4 GB. This report is a database that consists of the log list and the log sheets of every shot. It is the experiment logbook for the 2nd experiment (charge test) of the CS model coil and TF insert. (author)

  5. Modelling of laboratory high-pressure infiltration experiments

    International Nuclear Information System (INIS)

    Smith, P.A.

    1992-02-01

    This report describes the modelling of break-through curves from a series of two-tracer dynamic infiltration experiments, which are intended to complement larger scale experiments at the Nagra Grimsel Test Site. The tracers are ⁸²Br, which is expected to be non-sorbing, and ²⁴Na, which is weakly sorbing. The ²⁴Na concentration is well below the natural Na concentration in the infiltration fluid, so that sorption on the rock is governed by isotopic exchange, exhibiting a linear isotherm. The rock specimens are sub-samples (cores) of granodiorite from the Grimsel Test Site, each containing a distinct shear zone. Best-fits to the break-through curves using single-porosity and dual-porosity transport models are compared and several physical parameters are extracted. It is shown that the dual-porosity model is required in order to reproduce the tailing part of the break-through curves for the non-sorbing tracer. The single-porosity model is sufficient to reproduce the break-through curves for the sorbing tracer within the estimated experimental errors. Extracted K_d values are shown to agree well with a field rock-water interaction experiment and in situ migration experiments. Static, laboratory batch-sorption experiments give a larger K_d, but this difference could be explained by the larger surface area available for sorption in the artificially crushed samples used in the laboratory and by a slightly different water chemistry. (author) 13 figs., tabs., 19 refs
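    To make the single-porosity picture concrete, the sketch below evaluates a textbook analytical breakthrough curve for one-dimensional advection-dispersion with linear sorption. It is an illustration under assumed parameter values, not the report's code, and it deliberately omits the matrix diffusion that produces the observed t^(-3/2) tailing.

```python
# Minimal sketch (not the report's code): analytical breakthrough curve for a
# single-porosity advection-dispersion model with linear sorption (retardation
# factor R). Matrix diffusion, which produces the t^(-3/2) tailing discussed
# in the report, is deliberately absent, so this model cannot reproduce it.
import numpy as np

def breakthrough(t, x, v, D, R=1.0, m=1.0):
    """Resident concentration at distance x after an instantaneous pulse.

    t: times [s]; x: travel distance [m]; v: pore-water velocity [m/s];
    D: dispersion coefficient [m^2/s]; R: retardation (R = 1, conservative);
    m: injected mass per unit cross-sectional pore area.
    """
    vr, Dr = v / R, D / R  # retarded velocity and dispersion
    return m / np.sqrt(4 * np.pi * Dr * t) * np.exp(-(x - vr * t) ** 2 / (4 * Dr * t))

t = np.linspace(1.0, 2.0e4, 500)
c_cons = breakthrough(t, x=2.0, v=1e-3, D=5e-5)         # conservative tracer
c_sorb = breakthrough(t, x=2.0, v=1e-3, D=5e-5, R=4.0)  # weakly sorbing tracer
print(f"peak arrival: {t[c_cons.argmax()]:.0f} s (R=1) vs {t[c_sorb.argmax()]:.0f} s (R=4)")
```

    The sorbing tracer's peak arrives R times later, which is why a single-porosity model with a fitted retardation factor can match the sorbing-tracer curves while failing on the conservative tracer's tail.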

  6. Inverse Modeling of Water-Rock-CO2 Batch Experiments: Potential Impacts on Groundwater Resources at Carbon Sequestration Sites.

    Science.gov (United States)

    Yang, Changbing; Dai, Zhenxue; Romanak, Katherine D; Hovorka, Susan D; Treviño, Ramón H

    2014-01-01

    This study developed a multicomponent geochemical model to interpret responses of water chemistry to introduction of CO2 into six water-rock batches with sedimentary samples collected from representative potable aquifers in the Gulf Coast area. The model simulated CO2 dissolution in groundwater, aqueous complexation, mineral reactions (dissolution/precipitation), and surface complexation on clay mineral surfaces. An inverse method was used to estimate mineral surface area, the key parameter for describing kinetic mineral reactions. Modeling results suggested that reductions in groundwater pH were more significant in the carbonate-poor aquifers than in the carbonate-rich aquifers, resulting in potential groundwater acidification. Modeled concentrations of major ions showed overall increasing trends, depending on the mineralogy of the sediments, especially carbonate content. The geochemical model confirmed that mobilization of trace metals was likely caused by mineral dissolution and surface complexation on clay mineral surfaces. Although dissolved inorganic carbon and pH may be used as indicative parameters in potable aquifers, selection of geochemical parameters for CO2 leakage detection is site-specific and a stepwise procedure may be followed. A combined study of the geochemical models with the laboratory batch experiments improves our understanding of the mechanisms that dominate responses of water chemistry to CO2 leakage and also provides a frame of reference for designing monitoring strategies in potable aquifers.
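    As an illustration of the inverse step, the hedged sketch below fits an effective reactive surface area in a toy first-order kinetic dissolution law to synthetic concentration data; the rate law, constants and data are hypothetical stand-ins for the paper's multicomponent model.

```python
# Illustrative sketch only: estimating an effective mineral surface area by
# fitting a first-order kinetic dissolution model to observed concentration
# data. The rate law, parameter values and data are hypothetical stand-ins
# for the multicomponent inverse modeling described in the abstract.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

k = 1e-6    # intrinsic rate constant [mol m^-2 s^-1], assumed
ceq = 5e-3  # solubility-controlled equilibrium concentration [mol/L], assumed

def model(t, area):
    # dC/dt = k * A * (1 - C/Ceq): dissolution slows approaching saturation
    dcdt = lambda c, _t: k * area * (1.0 - c / ceq)
    return odeint(dcdt, 0.0, t).ravel()

t_obs = np.linspace(0.0, 3600.0, 20)  # hypothetical sampling times [s]
rng = np.random.default_rng(4)
c_obs = model(t_obs, 0.8) + rng.normal(0.0, 5e-5, t_obs.size)

(area_fit,), _cov = curve_fit(model, t_obs, c_obs, p0=[0.1])
print(f"estimated reactive surface area: {area_fit:.2f} m^2/L")
```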

  7. Deciphering the complexity of acute inflammation using mathematical models.

    Science.gov (United States)

    Vodovotz, Yoram

    2006-01-01

    Various stresses elicit an acute, complex inflammatory response, leading to healing but sometimes also to organ dysfunction and death. We constructed both equation-based models (EBM) and agent-based models (ABM) of various degrees of granularity--which encompass the dynamics of relevant cells, cytokines, and the resulting global tissue dysfunction--in order to begin to unravel these inflammatory interactions. The EBMs describe and predict various features of septic shock and trauma/hemorrhage (including the response to anthrax, preconditioning phenomena, and irreversible hemorrhage) and were used to simulate anti-inflammatory strategies in clinical trials. The ABMs that describe the interrelationship between inflammation and wound healing yielded insights into intestinal healing in necrotizing enterocolitis, vocal fold healing during phonotrauma, and skin healing in the setting of diabetic foot ulcers. Modeling may help in understanding the complex interactions among the components of inflammation and response to stress, and therefore aid in the development of novel therapies and diagnostics.

  8. Bridging Mechanistic and Phenomenological Models of Complex Biological Systems.

    Science.gov (United States)

    Transtrum, Mark K; Qiu, Peng

    2016-05-01

    The inherent complexity of biological systems gives rise to complicated mechanistic models with a large number of parameters. On the other hand, the collective behavior of these systems can often be characterized by a relatively small number of phenomenological parameters. We use the Manifold Boundary Approximation Method (MBAM) as a tool for deriving simple phenomenological models from complicated mechanistic models. The resulting models are not black boxes, but remain expressed in terms of the microscopic parameters. In this way, we explicitly connect the macroscopic and microscopic descriptions, characterize the equivalence class of distinct systems exhibiting the same range of collective behavior, and identify the combinations of components that function as tunable control knobs for the behavior. We demonstrate the procedure for adaptation behavior exhibited by the EGFR pathway. From a 48 parameter mechanistic model, the system can be effectively described by a single adaptation parameter τ characterizing the ratio of time scales for the initial response and recovery time of the system which can in turn be expressed as a combination of microscopic reaction rates, Michaelis-Menten constants, and biochemical concentrations. The situation is not unlike modeling in physics in which microscopically complex processes can often be renormalized into simple phenomenological models with only a few effective parameters. The proposed method additionally provides a mechanistic explanation for non-universal features of the behavior.

  9. Green IT engineering concepts, models, complex systems architectures

    CERN Document Server

    Kondratenko, Yuriy; Kacprzyk, Janusz

    2017-01-01

    This volume provides a comprehensive state-of-the-art overview of a series of advanced trends and concepts that have recently been proposed in the area of green information technologies engineering, as well as of design and development methodologies for models and complex systems architectures and their intelligent components. The contributions included in the volume have their roots in the authors' presentations, and the vivid discussions that followed the presentations, at a series of workshops and seminars held within the international TEMPUS project GreenCo in the United Kingdom, Italy, Portugal, Sweden and Ukraine during 2013-2015, and at the 1st-5th Workshops on Green and Safe Computing (GreenSCom) held in Russia, Slovakia and Ukraine. The book presents a systematic exposition of research on principles, models, components and complex systems and a description of industry- and society-oriented aspects of green IT engineering. A chapter-oriented structure has been adopted for this book ...

  10. An Ontology for Modeling Complex Inter-relational Organizations

    Science.gov (United States)

    Wautelet, Yves; Neysen, Nicolas; Kolp, Manuel

    This paper presents an ontology for organizational modeling through multiple complementary aspects. The primary goal of the ontology is to provide an adequate set of related concepts for studying complex organizations involved in many relationships at the same time. In this paper, we define complex organizations as networked organizations involved in a market eco-system that are playing several roles simultaneously. In such a context, traditional approaches focus on the macro analytic level of transactions; this is supplemented here with a micro analytic study of the actors' rationale. First, the paper reviews the enterprise ontology literature to position our proposal and expose its contributions and limitations. The ontology is then brought to an advanced level of formalization: a meta-model in the form of a UML class diagram provides an overview of the ontology concepts and their relationships, which are formally defined. Finally, the paper presents the case study on which the ontology has been validated.

  11. Efficient Simulation Modeling of an Integrated High-Level-Waste Processing Complex

    International Nuclear Information System (INIS)

    Gregory, Michael V.; Paul, Pran K.

    2000-01-01

    An integrated computational tool named the Production Planning Model (ProdMod) has been developed to simulate the operation of the entire high-level-waste complex (HLW) at the Savannah River Site (SRS) over its full life cycle. ProdMod is used to guide SRS management in operating the waste complex in an economically efficient and environmentally sound manner. SRS HLW operations are modeled using coupled algebraic equations. The dynamic nature of plant processes is modeled in the form of a linear construct in which the time dependence is implicit. Batch processes are modeled in discrete event-space, while continuous processes are modeled in time-space. The ProdMod methodology maps between event-space and time-space such that the inherent mathematical discontinuities in batch process simulation are avoided without sacrificing any of the necessary detail in the batch recipe steps. Modeling the processes separately in event- and time-space using linear constructs, and then coupling the two spaces, has accelerated the speed of simulation compared to a typical dynamic simulation. The ProdMod simulator models have been validated against operating data and other computer codes. Case studies have demonstrated the usefulness of the ProdMod simulator in developing strategies that demonstrate significant cost savings in operating the SRS HLW complex and in verifying the feasibility of newly proposed processes

  12. Dynamic crack initiation toughness : experiments and peridynamic modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Foster, John T.

    2009-10-01

    This is a dissertation on research studying the dynamic crack initiation toughness of a 4340 steel. Researchers have been conducting experimental testing of dynamic crack initiation toughness, K_Ic, for many years, using many experimental techniques with vastly different trends in the results when reporting K_Ic as a function of loading rate. The dissertation describes a novel experimental technique for measuring K_Ic in metals using the Kolsky bar. The method borrows from improvements made in recent years in traditional Kolsky bar testing by using pulse shaping techniques to ensure a constant loading rate applied to the sample before crack initiation. Dynamic crack initiation measurements were reported on a 4340 steel at two different loading rates. The steel was shown to exhibit a rate dependence, with the recorded values of K_Ic being much higher at the higher loading rate. Using the knowledge of this rate dependence as a motivation in attempting to model the fracture events, a viscoplastic constitutive model was implemented into a peridynamic computational mechanics code. Peridynamics is a newly developed theory in solid mechanics that replaces the classical partial differential equations of motion with integral-differential equations which do not require the existence of spatial derivatives in the displacement field. This allows for the straightforward modeling of unguided crack initiation and growth. To date, peridynamic implementations have used severely restricted constitutive models. This research represents the first implementation of a complex material model and its validation. After showing results comparing deformations to experimental Taylor anvil impact tests for the viscoplastic material model, a novel failure criterion is introduced to model the dynamic crack initiation toughness experiments. The failure model is based on an energy criterion and uses the K_Ic values recorded experimentally as an input. The failure model
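    For readers unfamiliar with the method, the following is a generic one-dimensional bond-based peridynamics sketch with assumed material values. It shows how motion is computed from pairwise bond forces without spatial derivatives; it does not reproduce the dissertation's viscoplastic model or its energy-based failure criterion.

```python
# Generic 1D bond-based peridynamics sketch (illustrative assumptions only).
# Each node interacts with all neighbors within a "horizon"; the bond force is
# proportional to the bond stretch, so no spatial derivatives are needed.
import numpy as np

n, dx = 200, 1e-3                  # nodes and spacing [m]
horizon = 3 * dx                   # peridynamic horizon
E, rho = 200e9, 7800.0             # steel-like modulus [Pa], density [kg/m^3]
c = 2 * E / horizon**2             # 1D bond micro-modulus (one common choice)
dt = 0.5 * dx / np.sqrt(E / rho)   # stable time step (CFL-like)

x = np.arange(n) * dx              # reference positions
u = np.zeros(n)                    # displacements
v = np.zeros(n)                    # velocities
v[:5] = 10.0                       # impact-like initial velocity on one end

# precompute the bond list: ordered pairs (i, j) with 0 < |x_j - x_i| <= horizon
pairs = [(i, j) for i in range(n) for j in range(n)
         if i != j and abs(x[j] - x[i]) <= horizon]

for step in range(500):
    f = np.zeros(n)
    for i, j in pairs:
        xi = x[j] - x[i]                                 # reference bond vector
        eta = u[j] - u[i]                                # relative displacement
        stretch = (abs(xi + eta) - abs(xi)) / abs(xi)    # bond strain
        f[i] += c * stretch * np.sign(xi + eta) * dx     # pairwise force density
    v += dt * f / rho                                    # explicit integration
    u += dt * v

print(f"max displacement after {step + 1} steps: {u.max():.3e} m")
```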

  13. Modeling data irregularities and structural complexities in data envelopment analysis

    CERN Document Server

    Zhu, Joe

    2007-01-01

    In a relatively short period of time, Data Envelopment Analysis (DEA) has grown into a powerful quantitative, analytical tool for measuring and evaluating performance. It has been successfully applied to a whole variety of problems in many different contexts worldwide. This book deals with the micro aspects of handling and modeling data issues in modeling DEA problems. DEA's use has grown with its capability of dealing with complex "service industry" and the "public service domain" types of problems that require modeling of both qualitative and quantitative data. This handbook treatment deals with specific data problems including: imprecise or inaccurate data; missing data; qualitative data; outliers; undesirable outputs; quality data; statistical analysis; software and other data aspects of modeling complex DEA problems. In addition, the book will demonstrate how to visualize DEA results when the data is more than 3-dimensional, and how to identify efficiency units quickly and accurately.

  14. Firn Model Intercomparison Experiment (FirnMICE)

    DEFF Research Database (Denmark)

    Lundin, Jessica M.D.; Stevens, C. Max; Arthern, Robert

    2017-01-01

    Evolution of cold dry snow and firn plays important roles in glaciology; however, the physical formulation of a densification law is still an active research topic. We forced eight firn-densification models and one seasonal-snow model in six different experiments by imposing step changes in temperature ...

  15. Model-based identification and use of task complexity factors of human integrated systems

    International Nuclear Information System (INIS)

    Ham, Dong-Han; Park, Jinkyun; Jung, Wondea

    2012-01-01

    Task complexity is one of the conceptual constructs that are critical to explain and predict human performance in human integrated systems. A basic approach to evaluating the complexity of tasks is to identify task complexity factors and measure them. Although a great many task complexity factors have been studied, there is still a lack of conceptual frameworks for identifying and organizing them analytically, which can be generally used irrespective of the types of domains and tasks. This study proposes a model-based approach to identifying and using task complexity factors, which has two facets—the design aspects of a task and complexity dimensions. Three levels of design abstraction, which are functional, behavioral, and structural aspects of a task, characterize the design aspect of a task. The behavioral aspect is further classified into five cognitive processing activity types. The complexity dimensions explain a task complexity from different perspectives, which are size, variety, and order/organization. Twenty-one task complexity factors are identified by the combination of the attributes of each facet. Identification and evaluation of task complexity factors based on this model are believed to give insights for improving the design quality of tasks. This model of complexity factors can also be used as a referential framework for allocating tasks and designing information aids. The proposed approach is applied to procedure-based tasks of nuclear power plants (NPPs) as a case study to demonstrate its use. Last, we compare the proposed approach with other studies and then suggest some future research directions.

  16. Capturing complexity in work disability research: application of system dynamics modeling methodology.

    Science.gov (United States)

    Jetha, Arif; Pransky, Glenn; Hettinger, Lawrence J

    2016-01-01

    Work disability (WD) is characterized by variable and occasionally undesirable outcomes. The underlying determinants of WD outcomes include patterns of dynamic relationships among health, personal, organizational and regulatory factors that have been challenging to characterize, and inadequately represented by contemporary WD models. System dynamics modeling (SDM) methodology applies a sociotechnical systems thinking lens to view WD systems as comprising a range of influential factors linked by feedback relationships. SDM can potentially overcome limitations in contemporary WD models by uncovering causal feedback relationships, and conceptualizing dynamic system behaviors. It employs a collaborative and stakeholder-based model building methodology to create a visual depiction of the system as a whole. SDM can also enable researchers to run dynamic simulations to provide evidence of anticipated or unanticipated outcomes that could result from policy and programmatic intervention. SDM may advance rehabilitation research by providing greater insights into the structure and dynamics of WD systems while helping to understand inherent complexity. Challenges related to data availability, determining validity, and the extensive time and technical skill requirements for model building may limit SDM's use in the field and should be considered. Contemporary work disability (WD) models provide limited insight into complexity associated with WD processes. System dynamics modeling (SDM) has the potential to capture complexity through a stakeholder-based approach that generates a simulation model consisting of multiple feedback loops. SDM may enable WD researchers and practitioners to understand the structure and behavior of the WD system as a whole, and inform development of improved strategies to manage straightforward and complex WD cases.

  17. Low-complexity Behavioral Model for Predictive Maintenance of Railway Turnouts

    DEFF Research Database (Denmark)

    Barkhordari, Pegah; Galeazzi, Roberto; Tejada, Alejandro de Miguel

    2017-01-01

    Maintenance of railway infrastructures represents a major cost driver for any infrastructure manager, since reliability and dependability must be guaranteed at all times. Implementation of predictive maintenance policies relies on the availability of condition monitoring systems able to assess the infrastructure health state. The core of any condition monitoring system is the a-priori knowledge about the process to be monitored, in the form of either mathematical models of different complexity or signal features characterizing the healthy/faulty behavior. This study investigates the identification ... together with the Eigensystem Realization Algorithm – a type of subspace identification – to identify a fourth order model of the infrastructure. The robustness and predictive capability of the low-complexity behavioral model to reproduce track responses under different types of train excitations have been ...
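    A minimal sketch of the Eigensystem Realization Algorithm mentioned above is given below; the Hankel sizes, model order and synthetic impulse response are assumptions for illustration, not the study's track data.

```python
# Minimal Eigensystem Realization Algorithm (ERA) sketch: recover a low-order
# state-space model (A, B, C) from impulse-response (Markov) parameters via a
# Hankel-matrix SVD. All sizes and data here are illustrative.
import numpy as np

def era(markov, order, rows=20, cols=20):
    """markov[k] = C A^k B (scalar output here); returns discrete-time A, B, C."""
    H0 = np.array([[markov[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0)
    Ur, Vr = U[:, :order], Vt[:order, :]
    Sr = np.diag(np.sqrt(s[:order]))          # Sigma^(1/2)
    Sinv = np.linalg.inv(Sr)                  # Sigma^(-1/2)
    A = Sinv @ Ur.T @ H1 @ Vr.T @ Sinv
    B = (Sr @ Vr)[:, :1]                      # first input column
    C = (Ur @ Sr)[:1, :]                      # first output row
    return A, B, C

# A synthetic 4th-order test system generates the "measured" impulse response.
rng = np.random.default_rng(0)
A_true = np.diag([0.9, 0.8, -0.7, 0.5])
B_true = rng.standard_normal((4, 1))
C_true = rng.standard_normal((1, 4))
markov = [float(C_true @ np.linalg.matrix_power(A_true, k) @ B_true)
          for k in range(60)]

A, B, C = era(markov, order=4)
print("identified eigenvalues:", np.sort(np.linalg.eigvals(A).real))
```

    The singular values of the Hankel matrix indicate how many states the data actually support, which is how a deliberately low-order (here fourth-order) model is justified.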

  18. Evaluation of integrated assessment model hindcast experiments: a case study of the GCAM 3.0 land use module

    Science.gov (United States)

    Snyder, Abigail C.; Link, Robert P.; Calvin, Katherine V.

    2017-11-01

    Hindcasting experiments (conducting a model forecast for a time period in which observational data are available) are being undertaken increasingly often by the integrated assessment model (IAM) community, across many scales of models. When they are undertaken, the results are often evaluated using global aggregates or otherwise highly aggregated skill scores that mask deficiencies. We select a set of deviation-based measures that can be applied on different spatial scales (regional versus global) to make evaluating the large number of variable-region combinations in IAMs more tractable. We also identify performance benchmarks for these measures, based on the statistics of the observational dataset, that allow a model to be evaluated in absolute terms rather than relative to the performance of other models at similar tasks. An ideal evaluation method for hindcast experiments in IAMs would feature both absolute measures for evaluation of a single experiment for a single model and relative measures to compare the results of multiple experiments for a single model or the same experiment repeated across multiple models, such as in community intercomparison studies. The performance benchmarks highlight the use of this scheme for model evaluation in absolute terms, providing information about the reasons a model may perform poorly on a given measure and therefore identifying opportunities for improvement. To demonstrate the use of and types of results possible with the evaluation method, the measures are applied to the results of a past hindcast experiment focusing on land allocation in the Global Change Assessment Model (GCAM) version 3.0. The question of how to more holistically evaluate models as complex as IAMs is an area for future research. We find quantitative evidence that global aggregates alone are not sufficient for evaluating IAMs that require global supply to equal global demand at each time period, such as GCAM. The results of this work indicate it is
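    The toy sketch below illustrates the core point about aggregation, using generic bias and RMSE as stand-ins for the paper's deviation-based measures: regional errors of opposite sign can cancel in a global aggregate.

```python
# Sketch of the general idea (measure definitions are illustrative, not the
# paper's exact deviation-based measures): a model can look skillful in a
# global aggregate while regional errors of opposite sign cancel out.
import numpy as np

obs = {"region_A": np.array([10.0, 12.0, 14.0]),   # hypothetical observations
       "region_B": np.array([20.0, 22.0, 24.0])}
mod = {"region_A": np.array([14.0, 16.0, 18.0]),   # model biased high in A ...
       "region_B": np.array([16.0, 18.0, 20.0])}   # ... and low in B

def bias(m, o):
    return float(np.mean(m - o))

def rmse(m, o):
    return float(np.sqrt(np.mean((m - o) ** 2)))

for region in obs:
    print(f"{region}: bias={bias(mod[region], obs[region]):+.1f}, "
          f"rmse={rmse(mod[region], obs[region]):.1f}")

glob_m = sum(mod.values())                 # global total = sum over regions
glob_o = sum(obs.values())
print(f"global: bias={bias(glob_m, glob_o):+.1f}, rmse={rmse(glob_m, glob_o):.1f}")
```

    Here both regional biases are ±4 units, yet the global totals match exactly, which is the masking effect the regional measures are designed to expose.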

  19. Modelling the self-organization and collapse of complex networks

    Indian Academy of Sciences (India)

    Sanjay Jain, Department of Physics and Astrophysics, University of Delhi; Jawaharlal Nehru Centre for Advanced Scientific Research, Bangalore; Santa Fe Institute, Santa Fe, New Mexico.

  20. Modeling the Nab Experiment Electronics in SPICE

    Science.gov (United States)

    Blose, Alexander; Crawford, Christopher; Sprow, Aaron; Nab Collaboration

    2017-09-01

    The goal of the Nab experiment is to measure the neutron decay coefficients a, the electron-neutrino correlation, as well as b, the Fierz interference term to precisely test the Standard Model, as well as probe for Beyond the Standard Model physics. In this experiment, protons from the beta decay of the neutron are guided through a magnetic field into a Silicon detector. Event reconstruction will be achieved via time-of-flight measurement for the proton and direct measurement of the coincident electron energy in highly segmented silicon detectors, so the amplification circuitry needs to preserve fast timing, provide good amplitude resolution, and be packaged in a high-density format. We have designed a SPICE simulation to model the full electronics chain for the Nab experiment in order to understand the contributions of each stage and optimize them for performance. Additionally, analytic solutions to each of the components have been determined where available. We will present a comparison of the output from the SPICE model, analytic solution, and empirically determined data.

  1. From complex to simple: interdisciplinary stochastic models

    International Nuclear Information System (INIS)

    Mazilu, D A; Zamora, G; Mazilu, I

    2012-01-01

    We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions for certain physical quantities, such as the time dependence of the length of the microtubules, and diffusion coefficients. The second one is a stochastic adsorption model with applications in surface deposition, epidemics and voter systems. We introduce the ‘empty interval method’ and show sample calculations for the time-dependent particle density. These models can serve as an introduction to the field of non-equilibrium statistical physics, and can also be used as a pedagogical tool to exemplify standard statistical physics concepts, such as random walks or the kinetic approach of the master equation. (paper)
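    In the spirit of the first model, the sketch below simulates an ensemble of biased one-dimensional random walks (a toy stand-in for microtubule length dynamics, with illustrative parameters) and compares the sample mean and variance with the analytical drift and diffusion predictions.

```python
# Toy sketch in the spirit of the first model: microtubule length dynamics as
# a biased 1D random walk (+1 monomer with probability p, -1 with probability
# 1-p). Parameter values are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
p, steps, walkers = 0.6, 1000, 5000
increments = rng.choice([1, -1], size=(walkers, steps), p=[p, 1 - p])
lengths = np.cumsum(increments, axis=1)            # walk trajectories

mean_obs = lengths[:, -1].mean()
var_obs = lengths[:, -1].var()
print(f"mean length: {mean_obs:7.1f}  (theory {(2 * p - 1) * steps:.0f})")
print(f"variance:    {var_obs:7.1f}  (theory {4 * p * (1 - p) * steps:.0f})")
```

    The agreement with the analytical drift (2p-1)t and variance 4p(1-p)t is the kind of closed-form check the abstract's random-walk treatment makes possible.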

  2. Mathematical modeling of complexing in the scandium-salicylic acid-isoamyl alcohol system

    International Nuclear Information System (INIS)

    Evseev, A.M.; Smirnova, N.S.; Fadeeva, V.I.; Tikhomirova, T.I.; Kir'yanov, Yu.A.

    1984-01-01

    Mathematical modeling of an equilibrium multicomponent physicochemical system for the extraction of Sc salicylate complexes by isoamyl alcohol was conducted. To calculate the equilibrium concentrations of Sc complexes differing in content and composition, a system of nonlinear algebraic mass-balance equations was solved. Experimental data on the extraction of Sc salicylates by isoamyl alcohol versus the pH of the solution, at constant Sc concentration and different concentrations of salicylate ions, were used to construct the mathematical model. The stability constants of the ScHSal2+, Sc(HSal)3, ScOH(HSal)+ and ScOH(HSal)2 complexes were calculated
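    The sketch below shows the same type of calculation on a deliberately simple system: nonlinear mass-balance equations for a metal forming ML and ML2 complexes, solved numerically. The stability constants and total concentrations are hypothetical, not the Sc salicylate values.

```python
# Illustrative speciation sketch (constants are hypothetical, not the paper's):
# solve nonlinear mass-balance equations for a metal M forming ML and ML2
# complexes with a ligand L — the same type of system solved for Sc salicylates.
import numpy as np
from scipy.optimize import fsolve

M_tot, L_tot = 1e-4, 5e-4     # total concentrations [mol/L], assumed
beta1, beta2 = 1e4, 1e7       # cumulative stability constants, assumed

def balances(free):
    m, l = free
    ml = beta1 * m * l
    ml2 = beta2 * m * l**2
    return [m + ml + ml2 - M_tot,        # metal mass balance
            l + ml + 2 * ml2 - L_tot]    # ligand mass balance

m, l = fsolve(balances, [M_tot, L_tot])
print(f"free M = {m:.3e}, ML = {beta1*m*l:.3e}, ML2 = {beta2*m*l**2:.3e} mol/L")
```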

  3. Numerical modeling of optical coherent transient processes with complex configurations - II. Angled beams with arbitrary phase modulations

    International Nuclear Information System (INIS)

    Chang Tiejun; Tian Mingzhen; Barber, Zeb W.; Randall Babbitt, Wm.

    2004-01-01

    This work is a continuation of the development of the theoretical model for optical coherent transient (OCT) processes with complex configurations. A theoretical model for angled beams with arbitrary phase modulation has been developed based on the model presented in our previous work for the angled beam geometry. A numerical tool has been devised to simulate the OCT processes involving angled beams with the frequency detuning, chirped, and phase-modulated laser pulses. The simulations for pulse shaping and arbitrary waveform generation (AWG) using OCT processes have been performed. The theoretical analysis of programming and probe schemes for pulse shaper and AWG is also presented including the discussions on the rephasing condition and the phase compensation. The results from the analysis, the simulation, and the experiment show very good agreement

  4. A Peep into the Uncertainty-Complexity-Relevance Modeling Trilemma through Global Sensitivity and Uncertainty Analysis

    Science.gov (United States)

    Munoz-Carpena, R.; Muller, S. J.; Chu, M.; Kiker, G. A.; Perz, S. G.

    2014-12-01

    Model complexity resulting from the need to integrate environmental system components cannot be overstated. In particular, additional emphasis is urgently needed on rational approaches to guide decision making through uncertainties surrounding the integrated system across decision-relevant scales. However, in spite of the difficulties that the consideration of modeling uncertainty represents for the decision process, it should not be avoided, or the value and science behind the models will be undermined. These two issues, i.e., the need for coupled models that can answer the pertinent questions and the need for models that do so with sufficient certainty, are the key indicators of a model's relevance. Model relevance is inextricably linked with model complexity. Although model complexity has advanced greatly in recent years, there has been little work to rigorously characterize the threshold of relevance in integrated and complex models. Formally assessing the relevance of the model in the face of increasing complexity would be valuable because there is growing unease among developers and users of complex models about the cumulative effects of various sources of uncertainty on model outputs. In particular, this issue has prompted doubt over whether the considerable effort going into further elaborating complex models will in fact yield the expected payback. New approaches have been proposed recently to evaluate the uncertainty-complexity-relevance modeling trilemma (Muller, Muñoz-Carpena and Kiker, 2011) by incorporating state-of-the-art global sensitivity and uncertainty analysis (GSA/UA) in every step of the model development so as to quantify not only the uncertainty introduced by the addition of new environmental components, but the effect that these new components have over existing components (interactions, non-linear responses). Outputs from the analysis can also be used to quantify system resilience (stability, alternative states, thresholds or tipping points

  5. A new decision sciences for complex systems.

    Science.gov (United States)

    Lempert, Robert J

    2002-05-14

    Models of complex systems can capture much useful information but can be difficult to apply to real-world decision-making because the type of information they contain is often inconsistent with that required for traditional decision analysis. New approaches, which use inductive reasoning over large ensembles of computational experiments, now make possible systematic comparison of alternative policy options using models of complex systems. This article describes Computer-Assisted Reasoning, an approach to decision-making under conditions of deep uncertainty that is ideally suited to applying complex systems to policy analysis. The article demonstrates the approach on the policy problem of global climate change, with a particular focus on the role of technology policies in a robust, adaptive strategy for greenhouse gas abatement.

  6. On the yield stress of complex materials

    Science.gov (United States)

    Calderas, F.; Herrera-Valencia, E. E.; Sanchez-Solis, A.; Manero, O.; Medina-Torres, L.; Renteria, A.; Sanchez-Olivares, G.

    2013-11-01

    In the present work, the yield stress of complex materials is analyzed and modeled using the Bautista-Manero-Puig (BMP) constitutive equation, consisting of the upper-convected Maxwell equation coupled to a kinetic equation to account for the breakdown and reformation of the fluid structure. BMP model predictions for a complex fluid in different flow situations are analyzed and compared with yield stress predictions of other rheological models, and with experiments on fluids that exhibit yield stresses. It is shown that one of the main features of the BMP model is that it predicts a real yield stress (elastic solid or Hookean behavior) as one of the material parameters, the zero shear-rate fluidity, is zero. In addition, the transition to fluid-like behavior is continuous, as opposed to predictions of more empirical models.
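    For orientation, the BMP equations are commonly written in the rheology literature in the following form (a hedged paraphrase from the literature, not copied from this paper; σ is the stress tensor, D the rate-of-deformation tensor, φ = 1/η the fluidity, G0 the elastic modulus, λ the structure-recovery time and k the kinetic breakdown constant):

```latex
% BMP model, in a form commonly written in the literature (paraphrased here;
% notation may differ from the paper's). \varphi = 1/\eta is the fluidity.
\sigma + \frac{1}{G_0 \varphi}\,\overset{\nabla}{\sigma} = \frac{2}{\varphi}\,D
% Structure kinetics: recovery on time scale \lambda; breakdown driven by the
% mechanical dissipation \sigma : D, between the rest fluidity \varphi_0 and
% the fully broken-down fluidity \varphi_\infty.
\frac{d\varphi}{dt} = \frac{\varphi_0 - \varphi}{\lambda}
                      + k \left( \varphi_\infty - \varphi \right) \left( \sigma : D \right)
```

    With φ0 = 0 the material at rest has zero fluidity (infinite viscosity), which is how the model produces Hookean solid behavior below a finite yield stress, consistent with the abstract.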

  7. Kolmogorov complexity, pseudorandom generators and statistical models testing

    Czech Academy of Sciences Publication Activity Database

    Šindelář, Jan; Boček, Pavel

    2002-01-01

    Roč. 38, č. 6 (2002), s. 747-759 ISSN 0023-5954 R&D Projects: GA ČR GA102/99/1564 Institutional research plan: CEZ:AV0Z1075907 Keywords : Kolmogorov complexity * pseudorandom generators * statistical models testing Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.341, year: 2002

  8. Tapping to a slow tempo in the presence of simple and complex meters reveals experience-specific biases for processing music.

    Directory of Open Access Journals (Sweden)

    Sangeeta Ullal-Gupta

    Musical meters vary considerably across cultures, yet relatively little is known about how culture-specific experience influences metrical processing. In Experiment 1, we compared American and Indian listeners' synchronous tapping to slow sequences. Inter-tone intervals contained silence or to-be-ignored rhythms that were designed to induce a simple meter (familiar to Americans and Indians) or a complex meter (familiar only to Indians). A subset of trials contained an abrupt switch from one rhythm to another to assess the disruptive effects of contradicting the initially implied meter. In the unfilled condition, both groups tapped earlier than the target and showed large tap-tone asynchronies (measured in relative phase). When inter-tone intervals were filled with simple-meter rhythms, American listeners tapped later than targets, but their asynchronies were smaller and declined more rapidly. Likewise, asynchronies rose sharply following a switch away from simple-meter but not from complex-meter rhythm. By contrast, Indian listeners performed similarly across all rhythm types, with asynchronies rapidly declining over the course of complex- and simple-meter trials. For these listeners, a switch from either simple or complex meter increased asynchronies. Experiment 2 tested American listeners but doubled the duration of the synchronization phase prior to (and after) the switch. Here, compared with simple meters, complex-meter rhythms elicited larger asynchronies that declined at a slower rate, however, asynchronies increased after the switch for all conditions. Our results provide evidence that ease of meter processing depends to a great extent on the amount of experience with specific meters.

  9. Tapping to a slow tempo in the presence of simple and complex meters reveals experience-specific biases for processing music.

    Science.gov (United States)

    Ullal-Gupta, Sangeeta; Hannon, Erin E; Snyder, Joel S

    2014-01-01

    Musical meters vary considerably across cultures, yet relatively little is known about how culture-specific experience influences metrical processing. In Experiment 1, we compared American and Indian listeners' synchronous tapping to slow sequences. Inter-tone intervals contained silence or to-be-ignored rhythms that were designed to induce a simple meter (familiar to Americans and Indians) or a complex meter (familiar only to Indians). A subset of trials contained an abrupt switch from one rhythm to another to assess the disruptive effects of contradicting the initially implied meter. In the unfilled condition, both groups tapped earlier than the target and showed large tap-tone asynchronies (measured in relative phase). When inter-tone intervals were filled with simple-meter rhythms, American listeners tapped later than targets, but their asynchronies were smaller and declined more rapidly. Likewise, asynchronies rose sharply following a switch away from simple-meter but not from complex-meter rhythm. By contrast, Indian listeners performed similarly across all rhythm types, with asynchronies rapidly declining over the course of complex- and simple-meter trials. For these listeners, a switch from either simple or complex meter increased asynchronies. Experiment 2 tested American listeners but doubled the duration of the synchronization phase prior to (and after) the switch. Here, compared with simple meters, complex-meter rhythms elicited larger asynchronies that declined at a slower rate, however, asynchronies increased after the switch for all conditions. Our results provide evidence that ease of meter processing depends to a great extent on the amount of experience with specific meters.

  10. Coupled economic-ecological models for ecosystem-based fishery management: Exploration of trade-offs between model complexity and management needs

    DEFF Research Database (Denmark)

    Thunberg, Eric; Holland, Dan; Nielsen, J. Rasmus

    2012-01-01

    Ecosystem based fishery management has moved beyond rhetorical statements calling for a more holistic approach to resource management to implementing decisions on resource use that are compatible with goals of maintaining ecosystem health and resilience. Coupled economic-ecological models are a primary tool for informing these decisions. Recognizing the importance of these models, the International Council for the Exploration of the Sea (ICES) formed a Study Group on Integration of Economics, Stock Assessment and Fisheries Management (SGIMM) to explore alternative modelling approaches. Since economic and ecological systems are inherently complex, models are abstractions of these systems incorporating varying levels of complexity depending on available data and the management issues to be addressed. The objective of this special session was to assess the pros and cons of increasing model complexity.

  11. Experiments and modeling of single plastic particle conversion in suspension

    DEFF Research Database (Denmark)

    Nakhaei, Mohammadhadi; Wu, Hao; Grévain, Damien

    2018-01-01

    Conversion of single high density polyethylene (PE) particles has been studied by experiments and modeling. The experiments were carried out in a single particle combustor for five different shapes and masses of particles at temperature conditions of 900 and 1100°C. Each experiment was recorded ... against the experiments as well as literature data. Furthermore, a simplified isothermal model appropriate for CFD applications was developed in order to model the combustion of plastic particles in cement calciners. By comparing predictions with the isothermal and the non-isothermal models under typical ...

  12. GEOQUIMICO : an interactive tool for comparing sorption conceptual models (surface complexation modeling versus K[D])

    International Nuclear Information System (INIS)

    Hammond, Glenn E.; Cygan, Randall Timothy

    2007-01-01

    Within reactive geochemical transport, several conceptual models exist for simulating sorption processes in the subsurface. Historically, the K_D approach has been the method of choice due to its ease of implementation within a reactive transport model and straightforward comparison with experimental data. However, for modeling complex sorption phenomena (e.g. sorption of radionuclides onto mineral surfaces), this approach does not systematically account for variations in location, time, or chemical conditions, and more sophisticated methods such as a surface complexation model (SCM) must be utilized. It is critical to determine which conceptual model to use; that is, when the material variation becomes important to regulatory decisions. The geochemical transport tool GEOQUIMICO has been developed to assist in this decision-making process. GEOQUIMICO provides a user-friendly framework for comparing the accuracy and performance of sorption conceptual models. The model currently supports the K_D and SCM conceptual models. The code is written in the object-oriented Java programming language to facilitate model development and improve code portability. The basic theory underlying geochemical transport and the sorption conceptual models noted above is presented in this report. Explanations are provided of how these physicochemical processes are instrumented in GEOQUIMICO, and a brief verification study comparing GEOQUIMICO results to data found in the literature is given
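    The contrast between the two conceptual models can be sketched very simply. In the hedged toy example below, a constant K_D is compared with the apparent K_D implied by a one-site surface-complexation-style mass action law, whose value shifts by orders of magnitude with pH (all constants assumed; this is not GEOQUIMICO code).

```python
# Conceptual sketch (chemistry simplified; values hypothetical): a constant
# K_D isotherm versus a one-site surface-complexation-style mass action law,
# >SOH + M2+ <-> >SOM+ + H+, whose apparent K_D depends on pH.

KD = 2.0             # constant distribution coefficient [L/kg], assumed
K_SC = 10 ** -1.5    # mass-action constant for the surface reaction, assumed
SITE_DENSITY = 1e-3  # mol of >SOH sites per kg of solid, assumed

def kd_apparent_scm(pH):
    """Apparent K_D [L/kg] for a trace metal: sorbed/dissolved ratio."""
    h = 10.0 ** (-pH)
    return K_SC * SITE_DENSITY / h   # trace-metal limit of the mass action law

for pH in (4.0, 6.0, 8.0):
    print(f"pH {pH}: constant K_D = {KD:.1f} L/kg, "
          f"SCM-style apparent K_D = {kd_apparent_scm(pH):.2e} L/kg")
```

    The three-orders-of-magnitude spread across pH 4-8 in the toy SCM is exactly the kind of variation a fixed K_D cannot represent, which is the decision the tool is meant to inform.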

  13. A coupled mass transfer and surface complexation model for uranium (VI) removal from wastewaters

    International Nuclear Information System (INIS)

    Lenhart, J.; Figueroa, L.A.; Honeyman, B.D.

    1994-01-01

    A remediation technique has been developed for removing uranium (VI) from complex contaminated groundwater using flake chitin as a biosorbent in batch and continuous flow configurations. With this system, U(VI) removal efficiency can be predicted using a model that integrates surface complexation models, mass transport limitations and sorption kinetics. This integration allows the reactor model to predict removal efficiencies for complex groundwaters with variable U(VI) concentrations and other constituents. The system has been validated using laboratory-derived kinetic data in batch and CSTR systems to verify the model predictions of U(VI) uptake from simulated contaminated groundwater

  14. Structure and reactivity of oxalate surface complexes on lepidocrocite derived from infrared spectroscopy, DFT-calculations, adsorption, dissolution and photochemical experiments

    Science.gov (United States)

    Borowski, Susan C.; Biswakarma, Jagannath; Kang, Kyounglim; Schenkeveld, Walter D. C.; Hering, Janet G.; Kubicki, James D.; Kraemer, Stephan M.; Hug, Stephan J.

    2018-04-01

    Oxalate, together with other ligands, plays an important role in the dissolution of iron (hydr)oxides and the bio-availability of iron. The formation and properties of oxalate surface complexes on lepidocrocite were studied with a combination of infrared spectroscopy (IR), density functional theory (DFT) calculations, dissolution, and photochemical experiments. IR spectra measured as a function of time, concentration, and pH (50-200 μM oxalate, pH 3-7) showed that several surface complexes are formed at different rates and in different proportions. Measured spectra could be separated into three contributions described by Gaussian line shapes, with frequencies that agreed well with the theoretical frequencies of three different surface complexes: an outer-sphere complex (OS), an inner-sphere monodentate mononuclear complex (MM), and a bidentate mononuclear complex (BM) involving one O atom from each carboxylate group. At pH 6, OS was formed at the highest rate. The contribution of BM increased with decreasing pH. In dissolution experiments, lepidocrocite was dissolved at rates proportional to the surface concentration of BM, rather than to the total adsorbed concentration. Under UV-light (365 nm), BM was photolyzed at a higher rate than MM and OS. Although the comparison of measured spectra with calculated frequencies cannot exclude additional possible structures, the combined results allowed the assignment of three main structures with different reactivities consistent with experiments. The results illustrate the importance of the surface speciation of adsorbed ligands in dissolution and photochemical reactions.
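    The decomposition step described above amounts to multi-peak fitting. The sketch below fits a sum of three Gaussian line shapes to a synthetic band envelope with scipy; band positions and widths are placeholders, not the measured oxalate frequencies.

```python
# Sketch of the spectral decomposition step (synthetic data; band positions
# and widths are placeholders, not the measured oxalate frequencies): fit a
# band envelope as a sum of three Gaussian line shapes.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, center, width):
    return amp * np.exp(-((x - center) ** 2) / (2.0 * width ** 2))

def three_gaussians(x, *p):
    return sum(gaussian(x, *p[3 * i:3 * i + 3]) for i in range(3))

wavenumber = np.linspace(1200, 1500, 400)              # cm^-1, illustrative range
true = [1.0, 1270, 12, 0.6, 1320, 15, 0.8, 1410, 10]   # three hypothetical bands
rng = np.random.default_rng(2)
spectrum = three_gaussians(wavenumber, *true) + rng.normal(0, 0.01, wavenumber.size)

p0 = [1, 1260, 10, 1, 1330, 10, 1, 1400, 10]           # initial guesses
popt, _ = curve_fit(three_gaussians, wavenumber, spectrum, p0=p0)
for i in range(3):
    amp, center, width = popt[3 * i:3 * i + 3]
    print(f"band {i + 1}: center {center:.1f} cm^-1, amplitude {amp:.2f}")
```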

  15. A Framework for Modeling and Analyzing Complex Distributed Systems

    National Research Council Canada - National Science Library

    Lynch, Nancy A; Shvartsman, Alex Allister

    2005-01-01

    Report developed under STTR contract for topic AF04-T023. This Phase I project developed a modeling language and laid a foundation for computational support tools for specifying, analyzing, and verifying complex distributed system designs...

  16. Grimsel Test Site: modelling radionuclide migration field experiments

    International Nuclear Information System (INIS)

    Heer, W.; Hadermann, J.

    1994-09-01

    In the migration field experiments at Nagra's Grimsel Test Site, the processes of nuclide transport through a well defined fractured shear-zone in crystalline rock are being investigated. For these experiments, model calculations have been performed to obtain indications of the validity and limitations of the model applied and of the data deduced under field conditions. The model consists of a hydrological part, where the dipole flow fields of the experiments are determined, and a nuclide transport part, where the flow-field-driven nuclide propagation through the shear-zone is calculated. In addition to the description of the model, analytical expressions are given to guide the interpretation of experimental results. From the analysis of experimental breakthrough curves for the conservative uranine, weakly sorbing sodium and more strongly sorbing strontium tracers, the following main results can be derived: i) The model is able to represent the breakthrough curves of the migration field experiments to a high degree of accuracy. ii) The process of matrix diffusion is manifest through the tails of the breakthrough curves decreasing with time as t^(-3/2) and through the special shape of the tail ends, both confirmed by the experiments. iii) For nuclides sorbing rapidly, not too strongly and linearly, and exhibiting a reversible cation exchange process on fault gouge, the laboratory sorption coefficient can reasonably well be extrapolated to field conditions. Adequate care in selecting and preparing the rock samples is, of course, a necessary requirement. Using the parameters determined in the previous analysis, predictions are made for experiments in a smaller and faster flow field. For conservative uranine and weakly sorbing sodium the agreement of predicted and measured breakthrough curves is good; for the more strongly sorbing strontium it is reasonable, confirming that the model describes the main nuclide transport processes adequately. (author) figs., tabs., 29 refs

  17. Modelling the dynamics of the health-production complex in livestock herds

    DEFF Research Database (Denmark)

    Sørensen, J.T.; Enevoldsen, Carsten

    1992-01-01

    This paper reviews how the dynamics of the health-production complex in livestock herds is mimicked by livestock herd simulation models. Twelve models simulating the dynamics of dairy, beef, sheep and sow herds were examined. All models basically included options to alter input and output...

  18. Modeling Stochastic Complexity in Complex Adaptive Systems: Non-Kolmogorov Probability and the Process Algebra Approach.

    Science.gov (United States)

    Sulis, William H

    2017-10-01

    Walter Freeman III pioneered the application of nonlinear dynamical systems theories and methodologies in his work on mesoscopic brain dynamics. Sadly, mainstream psychology and psychiatry still cling to linear correlation based data analysis techniques, which threaten to subvert the process of experimentation and theory building. In order to progress, it is necessary to develop tools capable of managing the stochastic complexity of complex biopsychosocial systems, which includes multilevel feedback relationships, nonlinear interactions, chaotic dynamics and adaptability. In addition, however, these systems exhibit intrinsic randomness, non-Gaussian probability distributions, non-stationarity, contextuality, and non-Kolmogorov probabilities, as well as the absence of mean and/or variance and conditional probabilities. These properties and their implications for statistical analysis are discussed. An alternative approach, the Process Algebra approach, is described. It is a generative model, capable of generating non-Kolmogorov probabilities. It has proven useful in addressing fundamental problems in quantum mechanics and in the modeling of developing psychosocial systems.

  19. Modeling the surface tension of complex, reactive organic-inorganic mixtures

    Science.gov (United States)

    Schwier, A. N.; Viglione, G. A.; Li, Z.; McNeill, V. Faye

    2013-11-01

    Atmospheric aerosols can contain thousands of organic compounds which impact aerosol surface tension, affecting aerosol properties such as heterogeneous reactivity, ice nucleation, and cloud droplet formation. We present new experimental data for the surface tension of complex, reactive organic-inorganic aqueous mixtures mimicking tropospheric aerosols. Each solution contained 2-6 organic compounds, including methylglyoxal, glyoxal, formaldehyde, acetaldehyde, oxalic acid, succinic acid, leucine, alanine, glycine, and serine, with and without ammonium sulfate. We test two semi-empirical surface tension models and find that most reactive, complex, aqueous organic mixtures which do not contain salt are well described by a weighted Szyszkowski-Langmuir (S-L) model which was first presented by Henning et al. (2005). Two approaches for modeling the effects of salt were tested: (1) the Tuckermann approach (an extension of the Henning model with an additional explicit salt term), and (2) a new implicit method proposed here which employs experimental surface tension data obtained for each organic species in the presence of salt used with the Henning model. We recommend the use of method (2) for surface tension modeling of aerosol systems because the Henning model (using data obtained from organic-inorganic systems) and Tuckermann approach provide similar modeling results and goodness-of-fit (χ2) values, yet the Henning model is a simpler and more physical approach to modeling the effects of salt, requiring less empirically determined parameters.
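    For reference, the single-solute Szyszkowski-Langmuir form on which such models build can be fitted in a few lines. The sketch below uses hypothetical measurements and empirical parameters a and b; it does not reproduce the paper's weighted multi-component treatment or the salt-correction methods it compares.

```python
# Minimal sketch of a Szyszkowski-Langmuir surface tension fit for a single
# organic solute in water. The multi-component, weighted version of Henning
# et al. (2005) used in the paper extends this form; the data and parameter
# values here are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

SIGMA_WATER = 72.0  # surface tension of pure water [mN/m], near 25 degC

def szyszkowski(c, a, b):
    """Surface tension [mN/m] versus organic concentration c [mol/L]."""
    return SIGMA_WATER - a * np.log(1.0 + b * c)

# hypothetical measurements for one organic solute
c_data = np.array([0.0, 0.01, 0.05, 0.1, 0.5, 1.0])
sigma_data = np.array([72.0, 70.5, 67.8, 66.0, 61.2, 58.9])

(a_fit, b_fit), _ = curve_fit(szyszkowski, c_data, sigma_data, p0=[5.0, 10.0])
print(f"fitted a = {a_fit:.2f} mN/m, b = {b_fit:.2f} L/mol")
```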

  20. Model for measuring complex performance in an aviation environment

    International Nuclear Information System (INIS)

    Hahn, H.A.

    1988-01-01

    An experiment was conducted to identify models of pilot performance through the attainment and analysis of concurrent verbal protocols. Sixteen models were identified. Novice and expert pilots differed with respect to the models they used. Models were correlated to performance, particularly in the case of expert subjects. Models were not correlated to performance shaping factors (i.e. workload). 3 refs., 1 tab

  1. A comprehensive model of anaerobic bioconversion of complex substrates to biogas

    DEFF Research Database (Denmark)

    Angelidaki, Irini; Ellegaard, Lars; Ahring, Birgitte Kiær

    1999-01-01

    A dynamic model describing the anaerobic degradation of complex material, and codigestion of different types of wastes, was developed based on a model previously described (Angelidaki et al., 1993). In the model, the substrate is described by its composition of basic organic components, i.e., carbohydrates ...

  2. Incorporating soil variability in continental soil water modelling: a trade-off between data availability and model complexity

    Science.gov (United States)

    Peeters, L.; Crosbie, R. S.; Doble, R.; van Dijk, A. I. J. M.

    2012-04-01

    Developing a continental land surface model implies finding a balance between the complexity in representing the system processes and the availability of reliable data to drive, parameterise and calibrate the model. While a high level of process understanding at plot or catchment scales may warrant a complex model, such data are not available at the continental scale. This data sparsity is especially an issue for the Australian Water Resources Assessment system, AWRA-L, a land-surface model designed to estimate the components of the water balance for the Australian continent. This study focuses on the conceptualization and parametrization of the soil drainage process in AWRA-L. Traditionally, soil drainage is simulated with Richards' equation, which is highly non-linear. As general analytic solutions are not available, this equation is usually solved numerically. In AWRA-L, however, we introduce a simpler function based on simulation experiments that solve Richards' equation. In the simplified function, the soil drainage rate, i.e. the ratio of drainage D over storage S, decreases exponentially with relative water content and is controlled by three parameters: the soil water storage at field capacity S_FC, the drainage fraction at field capacity K_FC, and a drainage function exponent β: D/S = K_FC · exp(−β (1 − S/S_FC)). To obtain spatially variable estimates of these three parameters, the Atlas of Australian Soils is used, which lists soil hydraulic properties for each soil profile type. For each soil profile type in the Atlas, 10 days of draining an initially fully saturated, freely draining soil is simulated using HYDRUS-1D. With field capacity defined as the volume of water in the soil after 1 day, the remaining parameters can be obtained by fitting the AWRA-L soil drainage function to the HYDRUS-1D results. This model conceptualisation fully exploits the data available in the Atlas of Australian Soils, without the need to solve the non-linear Richards' equation.
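    A short sketch of this fitting step is given below: the drainage function above is evaluated and its two free parameters recovered from a synthetic stand-in for the HYDRUS-1D drainage run (S_FC fixed from the day-1 storage; all values illustrative).

```python
# Sketch of the AWRA-L drainage function described above and the fitting step.
# The "simulated" drainage data are a synthetic stand-in for HYDRUS-1D output;
# all parameter values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def drainage_rate(S, K_FC, beta, S_FC):
    """D/S: fraction of storage S draining per day."""
    return K_FC * np.exp(-beta * (1.0 - S / S_FC))

S_FC = 150.0                           # storage at field capacity [mm], assumed
S_sim = np.linspace(50.0, 150.0, 30)   # storages along a synthetic drainage run
rng = np.random.default_rng(3)
D_over_S = drainage_rate(S_sim, 0.05, 4.0, S_FC) * rng.lognormal(0.0, 0.05, S_sim.size)

# S_FC is fixed from the day-1 storage, so only K_FC and beta are fitted
(K_fit, b_fit), _ = curve_fit(lambda S, K, b: drainage_rate(S, K, b, S_FC),
                              S_sim, D_over_S, p0=[0.1, 1.0])
print(f"fitted K_FC = {K_fit:.3f} /day, beta = {b_fit:.2f}")
```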

  3. A range of complex probabilistic models for RNA secondary structure prediction that includes the nearest-neighbor model and more.

    Science.gov (United States)

    Rivas, Elena; Lang, Raymond; Eddy, Sean R

    2012-02-01

    The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic models are generally easier to train and to use. To explore a range of probabilistic models of increasing complexity, and to directly compare probabilistic, thermodynamic, and discriminative approaches, we created TORNADO, a computational tool that can parse a wide spectrum of RNA grammar architectures (including the standard nearest-neighbor model and more) using a generalized super-grammar that can be parameterized with probabilities, energies, or arbitrary scores. By using TORNADO, we find that probabilistic nearest-neighbor models perform comparably to (but not significantly better than) discriminative methods. We find that complex statistical models are prone to overfitting RNA structure and that evaluations should use structurally nonhomologous training and test data sets. Overfitting has affected at least one published method (ContextFold). The most important barrier to improving statistical approaches for RNA secondary structure prediction is the lack of diversity of well-curated single-sequence RNA secondary structures in current RNA databases.

  4. Fission gas release modelling: developments arising from instrumented fuel assemblies, out-of-pile experiments and microstructural observations

    International Nuclear Information System (INIS)

    Leech, N.A.; Smith, M.R.; Pearce, J.H.; Ellis, W.E.; Beatham, N.

    1990-01-01

    This paper reviews the development of fission gas release modelling in thermal reactor fuel (both steady-state and transient) and in particular, illustrates the way in which experimental data have been, and continue to be, the main driving force behind model development. To illustrate this point various aspects of fuel performance are considered: temperature calculation, steady-state and transient fission gas release, grain boundary gas atom capacity and microstructural phenomena. The sources of experimental data discussed include end-of-life fission gas release measurements, instrumented fuel assemblies (e.g. rods with internal pressure transducers, fuel centre thermocouples), swept capsule experiments, out-of-pile annealing experiments and microstructural techniques applied during post-irradiation evaluation. In the case of the latter, the benefit of applying many observation and analysis techniques on the same fuel samples (the approach adopted at NRL Windscale) is emphasized. This illustrates a shift of emphasis in the modelling field from the development of large, complex thermo-mechanical computer codes to the assessment of key experimental data in order to develop and evaluate sub-models which correctly predict the observed behaviour. (author)

  5. Simulation-based modeling of building complexes construction management

    Science.gov (United States)

    Shepelev, Aleksandr; Severova, Galina; Potashova, Irina

    2018-03-01

    The study reported here examines the experience in the development and implementation of business simulation games based on network planning and management of high-rise construction. Appropriate network models of different types and levels of detail have been developed; a simulation model including 51 blocks (11 stages combined in 4 units) is proposed.

  6. Testing complex networks of interaction at the onset of the Near Eastern Neolithic using modelling of obsidian exchange.

    Science.gov (United States)

    Ibáñez, Juan José; Ortega, David; Campos, Daniel; Khalidi, Lamya; Méndez, Vicenç

    2015-06-06

    In this paper, we explore the conditions that led to the origins and development of the Near Eastern Neolithic using mathematical modelling of obsidian exchange. The analysis presented expands on previous research, which established that the down-the-line model could not explain long-distance obsidian distribution across the Near East during this period. Drawing from outcomes of new simulations and their comparison with archaeological data, we provide results that illuminate the presence of complex networks of interaction among the earliest farming societies. We explore a network prototype of obsidian exchange with distant links which replicates the long-distance movement of ideas, goods and people during the Early Neolithic. Our results support the idea that during the first (Pre-Pottery Neolithic A) and second (Pre-Pottery Neolithic B) phases of the Early Neolithic, the complexity of obsidian exchange networks gradually increased. We propose then a refined model (the optimized distant link model) whereby long-distance exchange was largely operated by certain interconnected villages, resulting in the appearance of a relatively homogeneous Neolithic cultural sphere. We hypothesize that the appearance of complex interaction and exchange networks reduced risks of isolation caused by restricted mobility as groups settled and argue that these networks partially triggered and were crucial for the success of the Neolithic Revolution. Communities became highly dynamic through the sharing of experiences and objects, while the networks that developed acted as a repository of innovations, limiting the risk of involution. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  7. Kinetics of the reactions of hydrated electrons with metal complexes

    International Nuclear Information System (INIS)

    Korsse, J.

    1983-01-01

    The reactivity of the hydrated electron towards metal complexes is considered. Experiments are described involving metal EDTA and similar complexes. The metal ions studied are mainly Ni²⁺, Co²⁺ and Cu²⁺. Rates of the reactions of the complexes with e⁻(aq) were measured using the pulse radiolysis technique. It is shown that the reactions of e⁻(aq) with the copper complexes display unusually small kinetic salt effects. The results suggest long-range electron transfer by tunneling. A tunneling model is presented and the experimental results are discussed in terms of this model. Results of approximate molecular orbital calculations of some redox potentials are given, for EDTA chelates as well as for series of hexacyano and hexaquo complexes. Finally, equilibrium constants for the formation of ternary complexes are reported. (Auth./G.J.P.)
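
    For orientation, tunneling models of long-range electron transfer generically assume an exponential distance dependence of the rate constant; a common form (not necessarily the exact parameterization used in this work) is

    \[ k(r) = k_0 \exp\left[-\beta\,(r - r_0)\right], \]

    where r is the donor-acceptor separation, r_0 a contact distance, and β an attenuation coefficient typically of order 1 Å⁻¹ in aqueous media. The unusually small kinetic salt effects reported above are consistent with electron transfer at separations beyond contact distance.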

  8. RATING MODELS AND INFORMATION TECHNOLOGIES APPLICATION FOR MANAGEMENT OF ADMINISTRATIVE-TERRITORIAL COMPLEXES

    Directory of Open Access Journals (Sweden)

    O. M. Pshinko

    2016-12-01

    Purpose. The paper aims to develop rating models and related information technologies designed to support strategic planning of the development of administrative-territorial units, as well as multi-criteria control of the operation of inhomogeneous, multiparameter objects. Methodology. To solve these strategic planning and management problems, a set of coordinated methods is used: multi-criteria analysis of the properties of the objects being planned and managed, diagnostics of state parameters, and forecasting and management of complex systems of different classes, whose states are estimated by sets of heterogeneous quality indicators and represented by individual models of the operation process. A new information technology is proposed and created to implement the strategic planning and management tasks; it uses procedures for solving typical tasks that are implemented in MS SQL Server. Findings. A new approach to developing rating-based models for the analysis and management of classes of complex systems has been proposed. Rating models for the analysis of multi-criteria, multiparameter systems have been obtained; these systems are managed on the basis of current and predicted state parameters through a non-uniform distribution of resources. A procedure for analyzing the sensitivity of the rating model to changes in the parameters of the non-uniform resource distribution has been developed. An information technology for strategic planning and management of heterogeneous classes of objects based on the rating model has been created. Originality. The article proposes a new approach that uses rating indicators as a general model for strategic planning of the development and management of heterogeneous objects that can be characterized by sets of parameters measured on different scales

  9. The complex formation-partition and partition-association models of solvent extraction of ions

    International Nuclear Information System (INIS)

    Siekierski, S.

    1976-01-01

    Two models of the extraction process have been proposed. In the first model it is assumed that the partitioning neutral species is at first formed in the aqueous phase and then transferred into the organic phase. The second model is based on the assumption that equivalent amounts of cations are at first transferred from the aqueous into the organic phase and then associated to form a neutral molecule. The role of the solubility parameter in extraction and the relation between the solubility of liquid organic substances in water and the partition of complexes have been discussed. The extraction of simple complexes and complexes with organic ligands has been discussed using the first model. Partition coefficients have been calculated theoretically and compared with experimental values in some very simple cases. The extraction of ion pairs has been discussed using the partition-association model and the concept of single-ion partition coefficients. (author)

  10. Developing and Modeling Complex Social Interventions: Introducing the Connecting People Intervention

    Science.gov (United States)

    Webber, Martin; Reidy, Hannah; Ansari, David; Stevens, Martin; Morris, David

    2016-01-01

    Objectives: Modeling the processes involved in complex social interventions is important in social work practice, as it facilitates their implementation and translation into different contexts. This article reports the process of developing and modeling the connecting people intervention (CPI), a model of practice that supports people with mental…

  11. Are our dynamic water quality models too complex? A comparison of a new parsimonious phosphorus model, SimplyP, and INCA-P

    Science.gov (United States)

    Jackson-Blake, L. A.; Sample, J. E.; Wade, A. J.; Helliwell, R. C.; Skeffington, R. A.

    2017-07-01

    Catchment-scale water quality models are increasingly popular tools for exploring the potential effects of land management, land use change and climate change on water quality. However, the dynamic, catchment-scale nutrient models in common usage are complex, with many uncertain parameters requiring calibration, limiting their usability and robustness. A key question is whether this complexity is justified. To explore this, we developed a parsimonious phosphorus model, SimplyP, incorporating a rainfall-runoff model and a biogeochemical model able to simulate daily streamflow, suspended sediment, and particulate and dissolved phosphorus dynamics. The model's complexity was compared to that of one popular nutrient model, INCA-P, and the performance of the two models was compared in a small rural catchment in northeast Scotland. For three land use classes, fewer than six SimplyP parameters must be determined through calibration; the rest may be based on measurements, while INCA-P has around 40 unmeasurable parameters. Despite substantially simpler process representation, SimplyP performed comparably to INCA-P in both calibration and validation and produced similar long-term projections in response to changes in land management. The results support the hypothesis that INCA-P is overly complex for the study catchment. We hope our findings will help prompt wider model comparison exercises, as well as debate among the water quality modelling community as to whether today's models are fit for purpose. Simpler models such as SimplyP have the potential to be useful management and research tools, building blocks for future model development (prototype code is freely available), or benchmarks against which more complex models can be evaluated.
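
    To make the notion of parsimony concrete, the sketch below shows the kind of structure a model like SimplyP has: a one-parameter linear-reservoir flow store coupled to an EPC0-style dissolved phosphorus term. It is a toy under assumed names and values, not SimplyP's actual code or equations.

```python
import numpy as np

def toy_p_model(rain, s0=10.0, k=0.2, epc0=0.06, alpha=0.5):
    """Toy daily rainfall-runoff plus dissolved-P sketch (not SimplyP itself).

    rain  : daily rainfall (mm); s0 : initial soil-water store (mm)
    k     : linear-reservoir outflow coefficient (1/day)
    epc0  : equilibrium P concentration of topsoil water (mg/l)
    alpha : fraction of flow interacting with the P-rich topsoil
    """
    s, q, tdp = s0, [], []
    for r in rain:
        s += r                            # rainfall tops up the store
        flow = k * s                      # linear-reservoir streamflow (mm/day)
        s -= flow
        q.append(flow)
        tdp.append(alpha * flow * epc0)   # topsoil flow leaves at ~EPC0
    return np.array(q), np.array(tdp)

rain = np.random.default_rng(1).gamma(2.0, 2.0, 30)   # synthetic month of rain
q, tdp = toy_p_model(rain)
```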

  12. Cyclodextrin--piroxicam inclusion complexes: analyses by mass spectrometry and molecular modelling

    Science.gov (United States)

    Gallagher, Richard T.; Ball, Christopher P.; Gatehouse, Deborah R.; Gates, Paul J.; Lobell, Mario; Derrick, Peter J.

    1997-11-01

    Mass spectrometry has been used to investigate the nature of the non-covalent complexes formed between the anti-inflammatory drug piroxicam and α-, β- and γ-cyclodextrins. Energies of these complexes have been calculated by means of molecular modelling. There is a correlation between peak intensities in the mass spectra and the calculated energies.

  13. Complex singlet extension of the standard model

    International Nuclear Information System (INIS)

    Barger, Vernon; McCaskey, Mathew; Langacker, Paul; Ramsey-Musolf, Michael; Shaughnessy, Gabe

    2009-01-01

    We analyze a simple extension of the standard model (SM) obtained by adding a complex singlet to the scalar sector (cxSM). We show that the cxSM can contain one or two viable cold dark matter candidates and analyze the conditions on the parameters of the scalar potential that yield the observed relic density. When the cxSM potential contains a global U(1) symmetry that is both softly and spontaneously broken, it contains both a viable dark matter candidate and the ingredients necessary for a strong first order electroweak phase transition as needed for electroweak baryogenesis. We also study the implications of the model for discovery of a Higgs boson at the Large Hadron Collider.
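
    A representative scalar potential for this class of models, written in a commonly used parameterization (coefficient names and normalizations vary between papers; this is a sketch for orientation rather than a quotation of the paper's exact potential), is

    \[ V(H,S) = \frac{m^2}{2}\,H^\dagger H + \frac{\lambda}{4}\,(H^\dagger H)^2 + \frac{\delta_2}{2}\,H^\dagger H\,|S|^2 + \frac{b_2}{2}\,|S|^2 + \frac{d_2}{4}\,|S|^4 + \left(a_1 S + \frac{b_1}{4}\,S^2 + \mathrm{c.c.}\right), \]

    where the a_1 and b_1 terms softly break the global U(1) acting on the complex singlet S, and a vacuum expectation value of S breaks it spontaneously.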

  14. Complex negotiations: the lived experience of enacting agency after a stroke.

    Science.gov (United States)

    Bergström, Aileen L; Eriksson, Gunilla; Asaba, Eric; Erikson, Anette; Tham, Kerstin

    2015-01-01

    This qualitative, longitudinal, descriptive study aimed to understand the lived experience of enacting agency, and to describe the phenomenon of agency and the meaning structure of the phenomenon during the year after a stroke. Agency is defined as making things happen in everyday life through one's actions. The study followed six persons (three men and three women, aged 63 to 89), interviewed on four separate occasions. Interview data were analysed using the Empirical Phenomenological Psychological method. The main findings showed that the participants experienced enacting agency in their everyday lives after stroke as negotiating different characteristics, over a span of time, across a range of difficulty, and in a number of activities, which made these negotiations complex. The four characteristics described how the participants made things happen in their everyday lives by managing their disrupted bodies, taking into account their pasts and envisioning their futures, dealing with the world outside themselves, and negotiating through internal dialogues. This empirical evidence regarding negotiations challenges traditional definitions of agency, and a new definition of agency is proposed. Understanding clients' complex negotiations and offering innovative solutions to train in real-life situations may help in the process of enabling occupations after a stroke.

  15. A structural model of the E. coli PhoB Dimer in the transcription initiation complex

    Directory of Open Access Journals (Sweden)

    Tung Chang-Shung

    2012-03-01

    Abstract Background More than 78,000 protein and/or nucleic acid structures have been determined experimentally. Only a small portion of these structures corresponds to protein complexes. While homology modeling is able to exploit knowledge-based potentials of side-chain rotamers and backbone motifs to infer structures for new proteins, no such general method exists to extend our understanding of protein interaction motifs to novel protein complexes. Results We use a Motif Binding Geometries (MBG) approach to infer the structure of a protein complex from the database of complexes of homologous proteins taken from other contexts (such as the helix-turn-helix motif binding double-stranded DNA), and demonstrate its utility on one of the more important regulatory complexes in biology, that of the RNA polymerase initiating transcription under conditions of phosphate starvation. The modeled PhoB/RNAP/σ-factor/DNA complex is stereochemically reasonable, has sufficient interfacial Solvent Excluded Surface Areas (SESAs) to provide adequate binding strength, is physically meaningful for transcription regulation, and is consistent with a variety of known experimental constraints. Conclusions Based on a straightforward and easy-to-comprehend concept, "proteins and protein domains that fold similarly could interact similarly", a structural model of the PhoB dimer in the transcription initiation complex has been developed. This approach could be extended to enable structural modeling and prediction of other biomolecular complexes. Just as models of individual proteins provide insight into molecular recognition, catalytic mechanism, and substrate specificity, models of protein complexes will provide understanding of the combinatorial rules of cellular regulation and signaling.

  16. A framework for modelling the complexities of food and water security under globalisation

    Science.gov (United States)

    Dermody, Brian J.; Sivapalan, Murugesu; Stehfest, Elke; van Vuuren, Detlef P.; Wassen, Martin J.; Bierkens, Marc F. P.; Dekker, Stefan C.

    2018-01-01

    We present a new framework for modelling the complexities of food and water security under globalisation. The framework sets out a method to capture regional and sectoral interdependencies and cross-scale feedbacks within the global food system that contribute to emergent water use patterns. The framework integrates aspects of existing models and approaches in the fields of hydrology and integrated assessment modelling. The core of the framework is a multi-agent network of city agents connected by infrastructural trade networks. Agents receive socio-economic and environmental constraint information from integrated assessment models and hydrological models respectively and simulate complex, socio-environmental dynamics that operate within those constraints. The emergent changes in food and water resources are aggregated and fed back to the original models with minimal modification of the structure of those models. It is our conviction that the framework presented can form the basis for a new wave of decision tools that capture complex socio-environmental change within our globalised world. In doing so they will contribute to illuminating pathways towards a sustainable future for humans, ecosystems and the water they share.

  17. Comparing flood loss models of different complexity

    Science.gov (United States)

    Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Riggelsen, Carsten; Scherbaum, Frank; Merz, Bruno

    2013-04-01

    Any deliberation on flood risk requires the consideration of potential flood losses. In particular, reliable flood loss models are needed to evaluate the cost-effectiveness of mitigation measures, to assess vulnerability, and for comparative risk analysis and financial appraisal during and after floods. In recent years, considerable improvements have been made both in the data basis and in the methodological approaches used for the development of flood loss models. Despite this, flood loss models remain an important source of uncertainty, and their temporal and spatial transferability is still limited. This contribution investigates the predictive capability of flood loss models of different complexity in a split-sample, cross-regional validation approach. For this purpose, flood loss models of different complexity, i.e. based on different numbers of explanatory variables, are learned from a set of damage records obtained from a survey after the Elbe flood in 2002. The validation of model predictions is carried out for different flood events in the Elbe and Danube river basins in 2002, 2005 and 2006, for which damage records are available from post-event surveys. The models investigated are a stage-damage model, the rule-based model FLEMOps+r, as well as novel model approaches derived using the data-mining techniques of regression trees and Bayesian networks. The Bayesian network approach to flood loss modelling provides attractive additional information concerning the probability distribution of both model predictions and explanatory variables.
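
    As a concrete instance of the data-mining side of this comparison, the sketch below fits a regression tree to synthetic damage records; the predictors, data and parameter choices are invented for illustration, whereas the models in the study are trained on the post-flood survey data described above.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 500
depth = rng.uniform(0.0, 3.0, n)          # inundation depth (m)
area = rng.uniform(50.0, 300.0, n)        # building footprint (m^2)
contam = rng.integers(0, 2, n)            # contamination indicator (0/1)
loss = np.clip(0.2 * depth + 0.1 * contam + rng.normal(0, 0.05, n), 0, 1)

X = np.column_stack([depth, area, contam])
tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=20).fit(X, loss)
print(tree.predict([[1.5, 120.0, 1]]))    # predicted relative loss fraction
```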

  18. Modeling and simulation of complex systems a framework for efficient agent-based modeling and simulation

    CERN Document Server

    Siegfried, Robert

    2014-01-01

    Robert Siegfried presents a framework for efficient agent-based modeling and simulation of complex systems. He compares different approaches for describing the structure and dynamics of agent-based models in detail. Based on this evaluation, the author introduces the "General Reference Model for Agent-based Modeling and Simulation" (GRAMS). Furthermore, he presents parallel and distributed simulation approaches for the execution of agent-based models, from small scale to very large scale. The author shows how agent-based models may be executed by different simulation engines that utilize underlying hard

  19. Post-closure biosphere assessment modelling: comparison of complex and more stylised approaches

    Energy Technology Data Exchange (ETDEWEB)

    Walke, Russell C. [Quintessa Limited, The Hub, 14 Station Road, Henley-on-Thames (United Kingdom); Kirchner, Gerald [University of Hamburg, ZNF, Beim Schlump 83, 20144 Hamburg (Germany); Xu, Shulan; Dverstorp, Bjoern [Swedish Radiation Safety Authority, SE-171 16 Stockholm (Sweden)

    2014-07-01

    Geological facilities are the preferred option for disposal of high-level radioactive waste, due to their potential to provide isolation from the surface environment (biosphere) on very long time scales. Safety cases developed in support of geological disposal include assessment of potential impacts on humans and wildlife in order to demonstrate compliance with regulatory criteria. As disposal programmes move from site-independent/generic assessments through site selection to applications for construction/operation and closure, the degree of understanding of the present-day site increases, together with the amount of site-specific information. Assessments need to strike a balance between simple models and more complex approaches that draw more extensively on this site-specific information. This paper explores the relative merits of complex versus more stylised biosphere models in the context of a site-specific assessment. The complex biosphere model was developed by the Swedish Nuclear Fuel and Waste Management Co (SKB) for the Forsmark candidate site for a spent nuclear fuel repository in Sweden. SKB's model is built on a landscape evolution model, whereby radionuclide releases to distinct hydrological basins/sub-catchments (termed 'objects') are represented as they evolve through land rise and climate change. The site is located on the Baltic coast, with a terrestrial landscape including lakes, mires, forest and agriculture. The land at the site is projected to continue rising due to post-glacial uplift, driving ecosystem transitions over time scales in excess of ten thousand years. The simple biosphere models developed for this study include the most plausible transport processes and represent various types of ecosystem. The complex biosphere models adopt a relatively coarse representation of the near-surface strata, which is shown to be conservative, but also to under-estimate the time scale required for potential doses to reach equilibrium with radionuclide fluxes

  20. Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models

    Science.gov (United States)

    Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana

    2014-05-01

    Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support for catchment management decisions. As the questions asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests are increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining quantified proportions of these source soils, and fingerprint properties were selected using different statistical procedures (the Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or a correlation matrix). Summary results for the use of the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the known initial mixing percentages. Given the experimental nature of the work and dry mixing of materials, geochemically conservative behavior was assumed for all elements, even for those that might be disregarded in aquatic systems (e.g. P). In general, the best fits between actual and modeled proportions were found using a set of nine tracer properties (Sr, Rb, Fe, Ti, Ca, Al, P, Si, K) derived using DFA coupled with a multivariate stepwise algorithm, with errors between real and estimated values that did not exceed 6.7% and goodness of fit (GOF) values above 94.5%. The second set of experiments aimed to explore the sensitivity of model output to variability in the particle size of source materials assuming that a degree of
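
    The core unmixing step can be sketched as a constrained least-squares problem: find non-negative source proportions summing to one that best reproduce the mixture's tracer signature. The numbers below are invented for illustration; the sum-to-one constraint is imposed through a heavily weighted extra row, a common device with non-negative least squares.

```python
import numpy as np
from scipy.optimize import nnls

A = np.array([[120.0,  95.0, 150.0],   # tracer 1 (e.g. Sr) in sources 1..3
              [ 40.0,  70.0,  55.0],   # tracer 2 (e.g. Rb)
              [  3.1,   2.4,   4.0]])  # tracer 3 (e.g. Fe, %)
b = np.array([118.0, 52.0, 3.2])       # measured signature of the mixture

w = 1e3                                # weight enforcing sum(p) ~ 1
A_aug = np.vstack([A, w * np.ones(3)])
b_aug = np.append(b, w)

p, _ = nnls(A_aug, b_aug)              # proportions p >= 0, sum(p) ~ 1
print(p.round(3), p.sum().round(3))
```

    In practice the tracer rows are usually normalized to comparable scales before fitting, so that no single element dominates the fit.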

  1. The complex sine-Gordon model on a half line

    International Nuclear Information System (INIS)

    Tzamtzis, Georgios

    2003-01-01

    In this thesis, we study the complex sine-Gordon model on a half line. The model in the bulk is an integrable (1+1) dimensional field theory which is U(1) gauge invariant and comprises a generalisation of the sine-Gordon theory. It accepts soliton and breather solutions. By introducing suitably selected boundary conditions we may consider the model on a half line. Through such conditions the model can be shown to remain integrable and various aspects of the boundary theory can be examined. The first chapter serves as a brief introduction to some basic concepts of integrability and soliton solutions. As an example of an integrable system with soliton solutions, the sine-Gordon model is presented both in the bulk and on a half line. These results will serve as a useful guide for the model at hand. The introduction finishes with a brief overview of the two methods that will be used on the fourth chapter in order to obtain the quantum spectrum of the boundary complex sine-Gordon model. In the second chapter the model is properly introduced along with a brief literature review. Different realisations of the model and their connexions are discussed. The vacuum of the theory is investigated. Soliton solutions are given and a discussion on the existence of breathers follows. Finally the collapse of breather solutions to single solitons is demonstrated and the chapter concludes with a different approach to the breather problem. In the third chapter, we construct the lowest conserved currents and through them we find suitable boundary conditions that allow for their conservation in the presence of a boundary. The boundary term is added to the Lagrangian and the vacuum is reexamined in the half line case. The reflection process of solitons from the boundary is studied and the time-delay is calculated. Finally we address the existence of boundary-bound states. In the fourth chapter we study the quantum complex sine-Gordon model. We begin with a brief overview of the theory in
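
    For reference, one form of the bulk complex sine-Gordon Lagrangian that appears in the literature (conventions and coupling normalizations vary, so this is quoted for orientation rather than as the thesis's exact choice) is

    \[ \mathcal{L} = \frac{\partial_\mu \psi \, \partial^\mu \psi^{*}}{1 - \psi\psi^{*}} - m^2\, \psi\psi^{*} . \]

    Restricting to real fields, ψ = sin(u/2), reduces this to the ordinary sine-Gordon Lagrangian up to normalization, which is one way to see the model as a generalisation of the sine-Gordon theory.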

  2. DEM Calibration Approach: design of experiment

    Science.gov (United States)

    Boikov, A. V.; Savelev, R. V.; Payor, V. A.

    2018-05-01

    The problem of calibrating DEM models is considered in this article. It is proposed to divide the models' input parameters into those that require iterative calibration and those that can be measured directly. A new calibration method, based on design of experiment for the iteratively calibrated parameters, is proposed. The experiment is conducted using a specially designed stand, and the results are processed with computer vision algorithms. Approximating functions are obtained and the error of the implemented software and hardware complex is estimated. The prospects of the obtained results are discussed.
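
    A minimal sketch of such a design-of-experiment calibration loop is given below: a full factorial design over two assumed DEM parameters, a placeholder standing in for the actual simulation runs, and a quadratic response surface as the approximating function. All names and values are hypothetical.

```python
import itertools
import numpy as np

frictions = [0.2, 0.4, 0.6]            # sliding-friction levels to test
restitutions = [0.1, 0.3, 0.5]         # restitution-coefficient levels

def run_dem(mu, e):
    """Placeholder for a DEM run returning e.g. a measured repose angle."""
    return 30.0 + 25.0 * mu - 10.0 * e + np.random.normal(0.0, 0.5)

design = list(itertools.product(frictions, restitutions))    # full factorial
response = np.array([run_dem(mu, e) for mu, e in design])

# Fit a quadratic response surface, the "approximating function"; inverting
# it then yields parameter values that match a measured target response.
X = np.array([[1.0, mu, e, mu * e, mu**2, e**2] for mu, e in design])
coef, *_ = np.linalg.lstsq(X, response, rcond=None)
print(coef.round(2))
```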

  3. On the general procedure for modelling complex ecological systems

    International Nuclear Information System (INIS)

    He Shanyu.

    1987-12-01

    In this paper, the principle of a general procedure for modelling complex ecological systems, the Adaptive Superposition Procedure (ASP), is briefly stated. The result of applying ASP in a national project for ecological regionalization is also described. (author). 3 refs

  4. Aespoe Pillar Stability Experiment. Final coupled 3D thermo-mechanical modeling. Preliminary particle mechanical modeling

    International Nuclear Information System (INIS)

    Wanne, Toivo; Johansson, Erik; Potyondy, David

    2004-02-01

    SKB is planning to perform a large-scale pillar stability experiment called APSE (Aespoe Pillar Stability Experiment) at Aespoe HRL. The study is focused on understanding and controlling progressive rock failure in hard crystalline rock and damage caused by high stresses. The elastic thermo-mechanical modeling was carried out in three dimensions, because of the complex test geometry and in-situ stress tensor, using the finite-difference modeling software FLAC3D. Cracking and damage formation were modeled in the area of interest (the pillar between two large-scale holes) in two dimensions using the Particle Flow Code (PFC), which is based on particle mechanics. FLAC and PFC were coupled to minimize the computer resources and computing time. According to the modeling, the temperature rises from its initial 15 deg C to about 65 deg C in the pillar area during the 120-day heating period. The rising temperature induces thermal-expansion stresses in the pillar area; after 120 days of heating, the stresses have increased by about 33%, from the excavation-induced maximum of 150 MPa to 200 MPa at the end of the heating period. The FLAC3D results identified only regions where the crack initiation stress was exceeded; these extended to about two meters down the hole wall and can be considered the areas where damage may occur during the in-situ test. When the other hole is pressurized with a 0.8 MPa confining pressure, about 5 MPa more stress is needed to damage the rock than without confining pressure, which makes the damaged area somewhat smaller. High compressive stresses, in addition to some tensile stresses, might induce acoustic emission (AE) activity in the upper part of the hole from the very beginning of the test; these are thus potential areas where AE may be detected. Acoustic emissions will be monitored during test execution. The 2D coupled PFC-FLAC modeling indicated that

  5. Aespoe Pillar Stability Experiment. Final coupled 3D thermo-mechanical modeling. Preliminary particle mechanical modeling

    Energy Technology Data Exchange (ETDEWEB)

    Wanne, Toivo; Johansson, Erik; Potyondy, David [Saanio and Riekkola Oy, Helsinki (Finland)

    2004-02-01

    SKB is planning to perform a large-scale pillar stability experiment called APSE (Aespoe Pillar Stability Experiment) at Aespoe HRL. The study is focused on understanding and controlling progressive rock failure in hard crystalline rock and damage caused by high stresses. The elastic thermo-mechanical modeling was carried out in three dimensions, because of the complex test geometry and in-situ stress tensor, using the finite-difference modeling software FLAC3D. Cracking and damage formation were modeled in the area of interest (the pillar between two large-scale holes) in two dimensions using the Particle Flow Code (PFC), which is based on particle mechanics. FLAC and PFC were coupled to minimize the computer resources and computing time. According to the modeling, the temperature rises from its initial 15 deg C to about 65 deg C in the pillar area during the 120-day heating period. The rising temperature induces thermal-expansion stresses in the pillar area; after 120 days of heating, the stresses have increased by about 33%, from the excavation-induced maximum of 150 MPa to 200 MPa at the end of the heating period. The FLAC3D results identified only regions where the crack initiation stress was exceeded; these extended to about two meters down the hole wall and can be considered the areas where damage may occur during the in-situ test. When the other hole is pressurized with a 0.8 MPa confining pressure, about 5 MPa more stress is needed to damage the rock than without confining pressure, which makes the damaged area somewhat smaller. High compressive stresses, in addition to some tensile stresses, might induce acoustic emission (AE) activity in the upper part of the hole from the very beginning of the test; these are thus potential areas where AE may be detected. Acoustic emissions will be monitored during test execution. The 2D coupled PFC-FLAC modeling indicated that

  6. CLINICAL EXPERIENCE OF CANCER IMMUNOTHERAPY INTEGRATED WITH OLEIC ACID COMPLEXED WITH DE-GLYCOSYLATED VITAMIN D BINDING PROTEIN

    OpenAIRE

    Emma Ward; Rodney Smith; Jacopo J.V. Branca; David Noakes; Gabriele Morucci; Lynda Thyer

    2014-01-01

    Proteins highly represented in milk such as α-lactalbumin and lactoferrin bind Oleic Acid (OA) to form complexes with selective anti-tumor activity. A protein present in milk, colostrum and blood, vitamin D binding protein is the precursor of a potent Macrophage Activating Factor (GcMAF) and in analogy with other OA-protein complexes, we proposed that OA-GcMAF could demonstrate a greater immunotherapeutic activity than that of GcMAF alone. We describe a preliminary experience treating p...

  7. Uncertainties in model-independent extractions of amplitudes from complete experiments

    International Nuclear Information System (INIS)

    Hoblit, S.; Sandorfi, A.M.; Kamano, H.; Lee, T.-S.H.

    2012-01-01

    A new generation of over-complete experiments is underway, with the goal of performing a high-precision extraction of pseudoscalar meson photo-production amplitudes. Such experimentally determined amplitudes can be used both as a test to validate models and as a starting point for an analytic continuation in the complex plane to search for poles. Of crucial importance for both is the level of uncertainty in the extracted multipoles. We have probed these uncertainties by analyses of pseudo-data for KΛ photoproduction, first for the set of 8 observables that have been published for the K⁺Λ channel and then for pseudo-data on a complete set of 16 observables with the uncertainties expected from analyses of ongoing CLAS experiments. In fitting multipoles, we have used a combined Monte Carlo sampling of the amplitude space, with gradient minimization, and have found a shallow χ² valley pitted with a large number of local minima. This results in bands of solutions that are experimentally indistinguishable. All ongoing experiments will measure observables with limited statistics. We have found a dependence on the particular random choice of values of Gaussian-distributed pseudo-data, due to the presence of multiple local minima. This results in actual uncertainties for reconstructed multipoles that are often considerably larger than those returned by gradient minimization routines such as Minuit, which find a single local minimum. As intuitively expected, this additional level of uncertainty decreases as larger numbers of observables are included.

  8. Thermal-hydraulic Experiments for Advanced Physical Model Development

    International Nuclear Information System (INIS)

    Song, Chulhwa

    2012-04-01

    Improved prediction models are needed to enhance safety analysis capability through an experimental database of local phenomena. To improve the two-phase interfacial area transport model, various experiments were carried out with local two-phase interfacial structure test facilities. 2 × 2 and 6 × 6 rod bundle test facilities were used for experiments on droplet behavior inside a heated rod bundle geometry. Experiments using GIRLS and JICO, together with CFD analyses, were carried out to understand the local condensation of a steam jet, the turbulent jet induced by condensation, and thermal mixing in a pool. In order to develop models for the key phenomena of newly adopted safety systems, experiments on boiling inside a pool and condensation in a horizontal channel have been performed. An experimental database of critical heat flux (CHF) and post-dryout (PDO) behavior was constructed. The mechanism of heat transfer enhancement by surface modification in nanofluids was investigated in boiling mode and rapid quenching mode. Special measurement techniques were developed: the double-sensor optical void probe, the Optic Rod, the PIV technique and the UBIM system.

  9. QMU as an approach to strengthening the predictive capabilities of complex models.

    Energy Technology Data Exchange (ETDEWEB)

    Gray, Genetha Anne.; Boggs, Paul T.; Grace, Matthew D.

    2010-09-01

    Complex systems are made up of multiple interdependent parts, and the behavior of the entire system cannot always be directly inferred from the behavior of the individual parts. They are nonlinear, and system responses are not necessarily additive. Examples of complex systems include energy, cyber and telecommunication infrastructures, human and animal social structures, and biological structures such as cells. To meet the goals of infrastructure development, maintenance, and protection for cyber-related complex systems, novel modeling and simulation (M&S) technology is needed. Sandia has shown success using M&S in the nuclear weapons (NW) program. However, complex systems represent a significant challenge and a relative departure from classical M&S exercises, and many of the scientific and mathematical M&S processes must be re-envisioned. Specifically, in the NW program, requirements and acceptable margins for performance, resilience, and security are well defined and given quantitatively from the start. The Quantification of Margins and Uncertainties (QMU) process helps to assess whether or not these safety, reliability and performance requirements have been met after a system has been developed. In this sense, QMU is used as a check that requirements have been met once the development process is completed. In contrast, performance requirements and margins may not have been defined a priori for many complex systems (e.g. the Internet, electrical distribution grids), particularly not in quantitative terms. This project addresses this fundamental difference by investigating the use of QMU at the start of the design process for complex systems. Three major tasks were completed. First, the characteristics of the cyber infrastructure problem were collected and considered in the context of QMU-based tools. Second, UQ methodologies for the quantification of model discrepancies were considered in the context of statistical models of cyber activity. Third

  10. First results from the International Urban Energy Balance Model Comparison: Model Complexity

    Science.gov (United States)

    Blackett, M.; Grimmond, S.; Best, M.

    2009-04-01

    A great variety of urban energy balance models has been developed. These vary in complexity from simple schemes that represent the city as a slab, through those which model various facets (i.e. road, walls and roof), to more complex urban forms (including street canyons with intersections) and features (such as vegetation cover and anthropogenic heat fluxes). Some schemes also incorporate detailed representations of momentum and energy fluxes distributed throughout various layers of the urban canopy layer. The models differ in the parameters they require to describe the site and in the demands they make on computational processing power. Many of these models have been evaluated using observational datasets but, to date, no controlled comparisons have been conducted. Urban surface energy balance models provide a means to predict the energy exchange processes which influence factors such as urban temperature, humidity, atmospheric stability and winds. These all need to be modelled accurately to capture features such as the urban heat island effect and to provide key information for dispersion and air quality modelling. A comparison of the various models available will assist in improving current and future models and in formulating research priorities for future observational campaigns within urban areas. In this presentation we summarise the initial results of this international urban energy balance model comparison. In particular, the relative performance of the models involved is compared based on their degree of complexity. These results will inform us on ways in which we can improve the modelling of air quality within, and climate impacts of, global megacities. The methodology employed in conducting this comparison followed that used in PILPS (the Project for Intercomparison of Land-Surface Parameterization Schemes), which is also endorsed by the GEWEX Global Land Atmosphere System Study (GLASS) panel. In all cases, models were run

  11. Modelling wetting and drying effects over complex topography

    Science.gov (United States)

    Tchamen, G. W.; Kahawita, R. A.

    1998-06-01

    The numerical simulation of free surface flows that alternately flood and dry out over complex topography is a formidable task. The model equation set generally used for this purpose is the two-dimensional (2D) shallow water wave model (SWWM). Simplified forms of this system, such as the zero inertia model (ZIM), can accommodate specific situations like slowly evolving floods over gentle slopes. Classical numerical techniques, such as finite differences (FD) and finite elements (FE), have been used for their integration over the last 20-30 years. Most of these schemes experience some kind of instability and usually fail when some particular domain under specific flow conditions is treated. The numerical instability generally manifests itself in the form of an unphysical negative depth that subsequently causes a run-time error at the computation of the celerity and/or the friction slope. The origins of this behaviour are diverse and may generally be attributed to:

    1. The use of a scheme that is inappropriate for such complex flow conditions (mixed regimes).
    2. Improper treatment of a friction source term or a large local curvature in topography.
    3. Mishandling of a cell that is partially wet/dry.

    In this paper, a tentative attempt has been made to gain a better understanding of the genesis of the instabilities, their implications and the limits to the proposed solutions. Frequently, the enforcement of robustness is made at the expense of accuracy. The need for a positive scheme, that is, a scheme that always predicts positive depths when run within the constraints of some practical stability limits, is fundamental. It is shown here how a carefully chosen scheme (in this case, an adaptation of the solver to the SWWM) can preserve positive values of water depth under both explicit and implicit time integration, high velocities and complex topography that may include dry areas. However, the treatment of the source terms: friction, Coriolis and particularly the bathymetry
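
    To illustrate what a positive scheme must guarantee, the sketch below applies a generic positivity limiter to a 1D conservative update: the outgoing fluxes of each cell are scaled so the cell cannot be drained below zero, and near-dry cells are flagged dry. This is a standard device written with assumed variable names, not the particular adaptation developed in the paper.

```python
import numpy as np

H_DRY = 1e-6   # depth below which a cell is treated as dry (m)

def wetdry_step(h, q, dx, dt):
    """One conservative 1D finite-volume update with a positivity limiter.

    h : cell depths, shape (n,); q : interface fluxes, shape (n+1,),
    positive rightward.
    """
    n = len(h)
    # total outgoing flux of each cell (right outflow plus left outflow)
    out = np.maximum(q[1:], 0.0) + np.maximum(-q[:-1], 0.0)
    # per-cell scale factor guaranteeing h >= 0 after the step
    with np.errstate(divide="ignore", invalid="ignore"):
        theta = np.where(out > 0, np.minimum(1.0, h * dx / (dt * out)), 1.0)
    ql = q.astype(float).copy()
    for i in range(n + 1):         # scale each flux by its upwind cell's factor
        if ql[i] > 0 and i >= 1:
            ql[i] *= theta[i - 1]  # rightward flux drains cell i-1
        elif ql[i] < 0 and i <= n - 1:
            ql[i] *= theta[i]      # leftward flux drains cell i
    h_new = h + dt / dx * (ql[:-1] - ql[1:])
    return np.where(h_new < H_DRY, 0.0, h_new)   # clamp tiny residuals to dry

h = np.array([1.0, 0.5, 0.0, 0.2])
q = np.array([0.0, 0.3, 0.4, -0.1, 0.0])
print(wetdry_step(h, q, dx=1.0, dt=1.0))
```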

  12. Surface complexation modeling of uranyl adsorption on corrensite from the Waste Isolation Pilot Plant Site

    Energy Technology Data Exchange (ETDEWEB)

    Park, Sang-Won; Leckie, J.O. [Stanford Univ., CA (United States); Siegel, M.D. [Sandia National Labs., Albuquerque, NM (United States)

    1995-09-01

    Corrensite is the dominant clay mineral in the Culebra Dolomite at the Waste Isolation Pilot Plant. The surface characteristics of corrensite, a mixed chlorite/smectite clay mineral, have been studied. Zeta potential measurements and titration experiments suggest that the corrensite surface contains a mixture of permanent charge sites on the basal plane and SiOH and AlOH sites with a net pH-dependent charge at the edge of the clay platelets. Triple-layer model parameters were determined by the double extrapolation technique for use in chemical speciation calculations of adsorption reactions using the computer program HYDRAQL. Batch adsorption studies showed that corrensite is an effective adsorbent for uranyl. The pH-dependent adsorption behavior indicates that adsorption occurs at the edge sites. Adsorption studies were also conducted in the presence of competing cations and complexing ligands. The cations did not affect uranyl adsorption in the range studied. This observation lends support to the hypothesis that uranyl adsorption occurs at the edge sites. Uranyl adsorption was significantly hindered by carbonate. It is proposed that the formation of carbonate uranyl complexes inhibits uranyl adsorption and that only the carbonate-free species adsorb to the corrensite surface. The presence of the organic complexing agents EDTA and oxine also inhibits uranyl sorption.

  13. Surface complexation modeling of uranyl adsorption on corrensite from the Waste Isolation Pilot Plant Site

    International Nuclear Information System (INIS)

    Park, Sang-Won; Leckie, J.O.; Siegel, M.D.

    1995-09-01

    Corrensite is the dominant clay mineral in the Culebra Dolomite at the Waste Isolation Pilot Plant. The surface characteristics of corrensite, a mixed chlorite/smectite clay mineral, have been studied. Zeta potential measurements and titration experiments suggest that the corrensite surface contains a mixture of permanent charge sites on the basal plane and SiOH and AlOH sites with a net pH-dependent charge at the edge of the clay platelets. Triple-layer model parameters were determined by the double extrapolation technique for use in chemical speciation calculations of adsorption reactions using the computer program HYDRAQL. Batch adsorption studies showed that corrensite is an effective adsorbent for uranyl. The pH-dependent adsorption behavior indicates that adsorption occurs at the edge sites. Adsorption studies were also conducted in the presence of competing cations and complexing ligands. The cations did not affect uranyl adsorption in the range studied. This observation lends support to the hypothesis that uranyl adsorption occurs at the edge sites. Uranyl adsorption was significantly hindered by carbonate. It is proposed that the formation of carbonate uranyl complexes inhibits uranyl adsorption and that only the carbonate-free species adsorb to the corrensite surface. The presence of the organic complexing agents EDTA and oxine also inhibits uranyl sorption

  14. Dynamical Behaviors in Complex-Valued Love Model With or Without Time Delays

    Science.gov (United States)

    Deng, Wei; Liao, Xiaofeng; Dong, Tao

    2017-12-01

    In this paper, a novel nonlinear model is proposed: a complex-valued love model with two time delays between two individuals in a love affair. A notable feature of this model is that the emotion of each individual is separated into real and imaginary parts, representing the variation and complexity of psychophysiological emotion in a romantic relationship rather than confining it to the real domain, which brings the model much closer to reality. This is because love is a complicated cognitive and social phenomenon, full of complexity, diversity and unpredictability, involving the coexistence of different aspects of feelings, states and attitudes ranging from joy and trust to sadness and disgust. By analyzing the associated characteristic equation of the linearized equations for the model, it is found that a Hopf bifurcation occurs when the sum of the time delays passes through a sequence of critical values. The stability of the bifurcating cyclic love dynamics is also derived by applying normal form theory and the center manifold theorem. In addition, it is shown that, for some appropriately chosen parameters, chaotic behavior can appear even without time delay.
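
    In the delay-free limit the qualitative behaviour of such a model is easy to explore numerically by splitting each complex emotion into real and imaginary parts; the right-hand side and parameters below are hypothetical stand-ins, not the paper's exact equations, and the time delays are omitted for simplicity.

```python
import numpy as np
from scipy.integrate import solve_ivp

a1, a2 = -0.2, -0.1                    # return-to-baseline rates
b1, b2 = 1.0 + 0.5j, -0.8 + 0.3j       # complex mutual-influence coefficients

def rhs(t, y):
    z1 = y[0] + 1j * y[1]              # emotion of individual 1
    z2 = y[2] + 1j * y[3]              # emotion of individual 2
    dz1 = a1 * z1 + b1 * z2            # linear coupled dynamics
    dz2 = a2 * z2 + b2 * z1
    return [dz1.real, dz1.imag, dz2.real, dz2.imag]

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0, 0.0, 0.5])
print(sol.y[:, -1])                    # final real/imaginary emotion components
```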

  15. Thermal model of in-situ experiment of 32nd level Mysore mine, KGF. Report No. 1

    International Nuclear Information System (INIS)

    Narayan, P.K.; Mathur, R.K.; Godse, V.B.; Sunder Rajan, N.S.

    1985-01-01

    Canisters with immobilised high-level radioactive wastes require isolation from the biosphere and need to be disposed of in a deep geological medium so that the radionuclides in the waste are contained in the medium for extended periods of time. Several countries are evaluating various host rocks for their suitability as the location of a geological repository for such waste. One of the main thrusts of the present work in these countries is conducting in-situ heater experiments to study the behaviour of the host rock at elevated temperature. The main purpose of such an experiment is to evaluate the integrity of the rock by observing the propagation of microfractures consequent to thermal loading. This type of study leads to optimisation of the spacing and depth of the boreholes and thus economic usage of space deep underground. In India such thermomechanical experiments have been planned in abandoned chambers of the Mysore and Nundydoorg mines of Kolar Gold Fields at depths of about 1000-1500 m. The scope of the work detailed in this report is to provide guidance to the in-situ experiment by developing thermal and thermomechanical models and to generate a field database of the various thermal properties of the host rock to facilitate validation of the models. In addition, it is intended to form a basis for developing other, more complex models which can predict stresses and displacements. (author)

  16. Surface complexation modelling: Experiments on sorption of nickel on quartz, goethite and kaolinite and preliminary tests on sorption of thorium on quartz

    Energy Technology Data Exchange (ETDEWEB)

    Puukko, E.; Hakanen, M. [Univ. of Helsinki (Finland). Dept. of Chemistry. Lab. of Radiochemistry

    1997-09-01

    The aim of the work was to study the sorption behaviour of Ni on quartz, goethite and kaolinite at different pH levels and in electrolyte solutions of different strengths. In addition, preliminary experiments were made to study the sorption of thorium on quartz. The MUS quartz and Nilsiae quartz were analysed for MnO₂ by neutron activation analysis (NAA), and the experimental results were modelled with the HYDRAQL computer model. 9 refs.

  17. MODELS AND METHODS OF SAFETY-ORIENTED PROJECT MANAGEMENT OF DEVELOPMENT OF COMPLEX SYSTEMS: METHODOLOGICAL APPROACH

    Directory of Open Access Journals (Sweden)

    Олег Богданович ЗАЧКО

    2016-03-01

    Methods and models of safety-oriented project management for the development of complex systems are proposed, resulting from the convergence of existing approaches in project management, in contrast to the mechanism of value-oriented management. A cognitive model of safety-oriented project management of the development of complex systems is developed, which provides a synergistic effect: moving the system from its original (pre-project) condition to one that is optimal from the viewpoint of life safety (the post-project state). An approach to assessing project complexity is proposed, which takes into account the seasonal component of the time characteristics of the life cycles of complex organizational and technical systems with occupancy. This made it possible to incorporate the seasonal component into simulation models of the product-operation life cycle in a complex organizational and technical system, to model the critical points of operation of systems with occupancy, and to form a new methodology for safety-oriented management of projects, programs and portfolios of projects with formalized elements of complexity.

  18. Where to from here? Future applications of mental models of complex performance

    International Nuclear Information System (INIS)

    Hahn, H.A.; Nelson, W.R.; Blackman, H.S.

    1988-01-01

    The purpose of this paper is to raise issues for discussion regarding the applications of mental models in the study of complex performance. Applications for training, expert systems and decision aids, job selection, workstation design, and other complex environments are considered. 1 ref

  19. Experiments and Modeling in Support of Generic Salt Repository Science

    International Nuclear Information System (INIS)

    Bourret, Suzanne Michelle; Stauffer, Philip H.; Weaver, Douglas James; Caporuscio, Florie Andre; Otto, Shawn; Boukhalfa, Hakim; Jordan, Amy B.; Chu, Shaoping; Zyvoloski, George Anthony; Johnson, Peter Jacob

    2017-01-01

    Salt is an attractive material for the disposition of heat-generating nuclear waste (HGNW) because of its self-sealing, viscoplastic, and reconsolidation properties (Hansen and Leigh, 2012). The rate at which salt consolidates and the properties of the consolidated salt depend on the composition of the salt, including its content of accessory minerals and moisture, and on the temperature under which consolidation occurs. Physicochemical processes, such as mineral hydration/dehydration and salt dissolution/precipitation, play a significant role in defining the rate of salt structure changes. Understanding the behavior of these complex processes is paramount when considering safe design for disposal of HGNW in salt formations, so experimentation and modeling are underway to characterize these processes. This report presents experiments and simulations in support of the DOE-NE Used Fuel Disposition Campaign (UFDC) for development of drift-scale, in-situ field testing of HGNW in salt formations.

  20. Experiments and Modeling in Support of Generic Salt Repository Science

    Energy Technology Data Exchange (ETDEWEB)

    Bourret, Suzanne Michelle [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Stauffer, Philip H. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Weaver, Douglas James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Caporuscio, Florie Andre [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Otto, Shawn [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boukhalfa, Hakim [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Jordan, Amy B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Chu, Shaoping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Zyvoloski, George Anthony [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Johnson, Peter Jacob [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-01-19

    Salt is an attractive material for the disposition of heat-generating nuclear waste (HGNW) because of its self-sealing, viscoplastic, and reconsolidation properties (Hansen and Leigh, 2012). The rate at which salt consolidates and the properties of the consolidated salt depend on the composition of the salt, including its content of accessory minerals and moisture, and on the temperature under which consolidation occurs. Physicochemical processes, such as mineral hydration/dehydration and salt dissolution/precipitation, play a significant role in defining the rate of salt structure changes. Understanding the behavior of these complex processes is paramount when considering safe design for disposal of HGNW in salt formations, so experimentation and modeling are underway to characterize these processes. This report presents experiments and simulations in support of the DOE-NE Used Fuel Disposition Campaign (UFDC) for development of drift-scale, in-situ field testing of HGNW in salt formations.

  1. Expectancy-Violation and Information-Theoretic Models of Melodic Complexity

    Directory of Open Access Journals (Sweden)

    Tuomas Eerola

    2016-07-01

    The present study assesses two types of models of melodic complexity: one based on expectancy violations and the other related to an information-theoretic account of redundancy in music. Seven datasets spanning artificial sequences, folk and pop songs were used to refine and assess the models. The refinement eliminated unnecessary components from both types of models. The final analysis pitted three variants of the two model types against each other and explained 46-74% of the variance in the ratings across the datasets. The most parsimonious models were identified with an information-theoretic criterion, which suggested that the simplified expectancy-violation models were the most efficient for these sets of data. However, the differences between the optimized models were subtle in terms of both performance and simplicity.
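
    One simple instance of the information-theoretic (redundancy) account is the Shannon entropy of a melody's pitch-interval distribution; the melody below and the measure itself are illustrative, and the paper's models are considerably more elaborate.

```python
import math
from collections import Counter

melody = [60, 62, 64, 62, 60, 67, 65, 64, 62, 60]       # MIDI pitches

intervals = [b - a for a, b in zip(melody, melody[1:])]  # successive intervals
counts = Counter(intervals)
total = sum(counts.values())
entropy = -sum(c / total * math.log2(c / total) for c in counts.values())
print(f"interval entropy: {entropy:.2f} bits")           # higher = less redundant
```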

  2. Per Aspera ad Astra: Through Complex Population Modeling to Predictive Theory.

    Science.gov (United States)

    Topping, Christopher J; Alrøe, Hugo Fjelsted; Farrell, Katharine N; Grimm, Volker

    2015-11-01

    Population models in ecology are often not good at predictions, even if they are complex and seem to be realistic enough. The reason for this might be that Occam's razor, which is key for minimal models exploring ideas and concepts, has been too uncritically adopted for more realistic models of systems. This can tie models too closely to certain situations, thereby preventing them from predicting the response to new conditions. We therefore advocate a new kind of parsimony to improve the application of Occam's razor. This new parsimony balances two contrasting strategies for avoiding errors in modeling: avoiding inclusion of nonessential factors (false inclusions) and avoiding exclusion of sometimes-important factors (false exclusions). It involves a synthesis of traditional modeling and analysis, used to describe the essentials of mechanistic relationships, with elements that are included in a model because they have been reported to be or can arguably be assumed to be important under certain conditions. The resulting models should be able to reflect how the internal organization of populations change and thereby generate representations of the novel behavior necessary for complex predictions, including regime shifts.

  3. Holocene glacier variability: three case studies using an intermediate-complexity climate model

    NARCIS (Netherlands)

    Weber, S.L.; Oerlemans, J.

    2003-01-01

    Synthetic glacier length records are generated for the Holocene epoch using a process-based glacier model coupled to the intermediate-complexity climate model ECBilt. The glacier model consists of a massbalance component and an ice-flow component. The climate model is forced by the insolation change

  4. Inclusion Complexes of Sunscreen Agents with β-Cyclodextrin: Spectroscopic and Molecular Modeling Studies

    Directory of Open Access Journals (Sweden)

    Nathir A. F. Al-Rawashdeh

    2013-01-01

    Full Text Available The inclusion complexes of selected sunscreen agents, namely, oxybenzone (Oxy, octocrylene (Oct, and ethylhexyl-methoxycinnamate (Cin with β-cyclodextrin (β-CD were studied by UV-Vis spectroscopy, differential scanning calorimetry (DSC, 13C NMR techniques, and molecular mechanics (MM calculations and modeling. Molecular modeling (MM study of the entire process of the formation of 1 : 1 stoichiometry sunscreen agent/β-cyclodextrin structures has been used to contribute to the understanding and rationalization of the experimental results. Molecular mechanics calculations, together with 13C NMR measurements, for the complex with β-CD have been used to describe details of the structural, energetic, and dynamic features of host-guest complex. Accurate structures of CD inclusion complexes have been derived from molecular mechanics (MM calculations and modeling. The photodegradation reaction of the sunscreen agents' molecules in lotion was explored using UV-Vis spectroscopy. It has been demonstrated that the photostability of these selected sunscreen agents has been enhanced upon forming inclusion complexes with β-CD in lotion. The results of this study demonstrate that β-CD can be utilized as photostabilizer additive for enhancing the photostability of the selected sunscreen agents' molecules.

  5. The Model of Complex Structure of Quark

    Science.gov (United States)

    Liu, Rongwu

    2017-09-01

    In Quantum Chromodynamics, quark is known as a kind of point-like fundamental particle which carries mass, charge, color, and flavor, strong interaction takes place between quarks by means of exchanging intermediate particles-gluons. An important consequence of this theory is that, strong interaction is a kind of short-range force, and it has the features of ``asymptotic freedom'' and ``quark confinement''. In order to reveal the nature of strong interaction, the ``bag'' model of vacuum and the ``string'' model of string theory were proposed in the context of quantum mechanics, but neither of them can provide a clear interaction mechanism. This article formulates a new mechanism by proposing a model of complex structure of quark, it can be outlined as follows: (1) Quark (as well as electron, etc) is a kind of complex structure, it is composed of fundamental particle (fundamental matter mass and electricity) and fundamental volume field (fundamental matter flavor and color) which exists in the form of limited volume; fundamental particle lies in the center of fundamental volume field, forms the ``nucleus'' of quark. (2) As static electric force, the color field force between quarks has classical form, it is proportional to the square of the color quantity carried by each color field, and inversely proportional to the area of cross section of overlapping color fields which is along force direction, it has the properties of overlap, saturation, non-central, and constant. (3) Any volume field undergoes deformation when interacting with other volume field, the deformation force follows Hooke's law. (4) The phenomena of ``asymptotic freedom'' and ``quark confinement'' are the result of color field force and deformation force.

  6. On sampling and modeling complex systems

    International Nuclear Information System (INIS)

    Marsili, Matteo; Mastromatteo, Iacopo; Roudi, Yasser

    2013-01-01

    The study of complex systems is limited by the fact that only a few variables are accessible for modeling and sampling, and these are not necessarily the most relevant ones for explaining the system behavior. In addition, empirical data typically undersample the space of possible states. We study a generic framework where a complex system is seen as a system of many interacting degrees of freedom, which are known only in part, that optimize a given function. We show that the underlying distribution with respect to the known variables has the Boltzmann form, with a temperature that depends on the number of unknown variables. In particular, when the influence of the unknown degrees of freedom on the known variables is not too irregular, the temperature decreases as the number of variables increases. This suggests that models can be predictable only when the number of relevant variables is less than a critical threshold. Concerning sampling, we argue that the information that a sample contains on the behavior of the system is quantified by the entropy of the frequency with which different states occur. This allows us to characterize the properties of maximally informative samples: within a simple approximation, the most informative frequency size distributions have power law behavior and Zipf's law emerges at the crossover between the undersampled regime and the regime where the sample contains enough statistics to make inferences on the behavior of the system. These ideas are illustrated in some applications, showing that they can be used to identify relevant variables or to select the most informative representations of data, e.g. in data clustering. (paper)
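    As a rough illustration of the sampling idea above, the sketch below computes the entropy of the frequency with which distinct states occur in a sample. The function name and toy data are our own; the paper's actual estimator may differ in detail.

    ```python
    from collections import Counter
    from math import log

    def frequency_entropy(samples):
        """Entropy of the empirical frequency of states in a sample -- a crude
        proxy for the informativeness measure described above (assumption:
        this simplification omits the paper's finite-size corrections)."""
        counts = Counter(samples)
        total = sum(counts.values())
        return -sum((k / total) * log(k / total) for k in counts.values())

    # A broad, Zipf-like sample is more informative than a concentrated one.
    print(frequency_entropy("aabbbcccc"))  # several states, spread out
    print(frequency_entropy("aaaaaaaaa"))  # one state only -> zero entropy
    ```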

  7. Complex Coronary Hemodynamics - Simple Analog Modelling as an Educational Tool.

    Science.gov (United States)

    Parikh, Gaurav R; Peter, Elvis; Kakouros, Nikolaos

    2017-01-01

    Invasive coronary angiography remains the cornerstone for evaluation of coronary stenoses despite there being a poor correlation between luminal loss assessment by coronary luminography and myocardial ischemia. This is especially true for coronary lesions deemed moderate by visual assessment. Coronary pressure-derived fractional flow reserve (FFR) has emerged as the gold standard for the evaluation of hemodynamic significance of coronary artery stenosis, which is cost effective and leads to improved patient outcomes. There are, however, several limitations to the use of FFR including the evaluation of serial stenoses. In this article, we discuss the electronic-hydraulic analogy and the utility of simple electrical modelling to mimic the coronary circulation and coronary stenoses. We exemplify the effect of tandem coronary lesions on the FFR by modelling of a patient with sequential disease segments and complex anatomy. We believe that such computational modelling can serve as a powerful educational tool to help clinicians better understand the complexity of coronary hemodynamics and improve patient care.
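    To make the electronic-hydraulic analogy concrete, here is a minimal sketch (our own, with hypothetical pressures and resistances, not the article's patient model) that treats tandem stenoses as series resistors and computes FFR as distal over aortic pressure:

    ```python
    def ffr_series(p_aortic, p_venous, r_stenoses, r_microvascular):
        """Hydraulic-electrical analogy: pressure ~ voltage, flow ~ current.

        Stenoses act as resistors in series with the microvascular bed;
        FFR is the distal coronary pressure divided by aortic pressure.
        """
        r_total = sum(r_stenoses) + r_microvascular
        flow = (p_aortic - p_venous) / r_total        # Ohm's law analogue
        p_distal = p_aortic - flow * sum(r_stenoses)  # drop across all lesions
        return p_distal / p_aortic

    # Hypothetical tandem lesions: each looks moderate alone, together significant.
    print(ffr_series(100, 5, [10, 12], 60))   # ~0.75
    ```

    With two individually moderate lesions in series, the combined FFR falls below the commonly used 0.80 treatment threshold, which is exactly the tandem-lesion effect the article sets out to illustrate.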

  8. EVALUATING PREDICTIVE ERRORS OF A COMPLEX ENVIRONMENTAL MODEL USING A GENERAL LINEAR MODEL AND LEAST SQUARE MEANS

    Science.gov (United States)

    A General Linear Model (GLM) was used to evaluate the deviation of predicted values from expected values for a complex environmental model. For this demonstration, we used the default level interface of the Regional Mercury Cycling Model (R-MCM) to simulate epilimnetic total mer...

  9. A viscoelastic-viscoplastic model for short-fibre reinforced polymers with complex fibre orientations

    Directory of Open Access Journals (Sweden)

    Nciri M.

    2015-01-01

    Full Text Available This paper presents an innovative approach for modelling the viscous behaviour of short-fibre reinforced composites (SFRC) with complex distributions of fibre orientations and over a wide range of strain rates. As an alternative to more complex homogenisation methods, the model is based on an additive decomposition of the state potential for the computation of the composite's macroscopic behaviour. Thus, the composite material is seen as the assembly of a matrix medium and several linear elastic fibre media. The division of the short fibres into several families means that complex or random orientation distributions can be easily modelled. The matrix behaviour is strain-rate sensitive, i.e. viscoelastic and/or viscoplastic. The viscoelastic constitutive laws are based on a generalised linear Maxwell model, and the modelling of the viscoplasticity is based on an overstress approach. The model is tested for the case of a polypropylene reinforced with short glass fibres with distributed orientations and subjected to uniaxial tensile tests, in different loading directions and under different strain rates. Results demonstrate the efficiency of the model over a wide range of strain rates.
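    As a small illustration of the viscoelastic ingredient, the sketch below evaluates the relaxation modulus of a generalised Maxwell (Prony series) model. The branch moduli and relaxation times are hypothetical placeholders, not the paper's identified parameters.

    ```python
    import math

    def relaxation_modulus(t, e_inf, branches):
        """Generalised Maxwell model: E(t) = E_inf + sum_i E_i * exp(-t / tau_i).

        `branches` is a list of (E_i, tau_i) pairs, one per Maxwell element.
        All parameter values below are illustrative assumptions.
        """
        return e_inf + sum(e * math.exp(-t / tau) for e, tau in branches)

    branches = [(800.0, 0.1), (400.0, 10.0)]   # (MPa, s) per branch
    for t in (0.0, 0.1, 1.0, 100.0):
        print(t, relaxation_modulus(t, e_inf=1200.0, branches=branches))
    ```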

  10. A framework for modelling the complexities of food and water security under globalisation

    Directory of Open Access Journals (Sweden)

    B. J. Dermody

    2018-01-01

    Full Text Available We present a new framework for modelling the complexities of food and water security under globalisation. The framework sets out a method to capture regional and sectoral interdependencies and cross-scale feedbacks within the global food system that contribute to emergent water use patterns. The framework integrates aspects of existing models and approaches in the fields of hydrology and integrated assessment modelling. The core of the framework is a multi-agent network of city agents connected by infrastructural trade networks. Agents receive socio-economic and environmental constraint information from integrated assessment models and hydrological models respectively and simulate complex, socio-environmental dynamics that operate within those constraints. The emergent changes in food and water resources are aggregated and fed back to the original models with minimal modification of the structure of those models. It is our conviction that the framework presented can form the basis for a new wave of decision tools that capture complex socio-environmental change within our globalised world. In doing so they will contribute to illuminating pathways towards a sustainable future for humans, ecosystems and the water they share.

  11. A density-based clustering model for community detection in complex networks

    Science.gov (United States)

    Zhao, Xiang; Li, Yantao; Qu, Zehui

    2018-04-01

    Network clustering (or graph partitioning) is an important technique for uncovering the underlying community structures in complex networks, and it has been widely applied in various fields including astronomy, bioinformatics, sociology, and bibliometrics. In this paper, we propose a density-based clustering model for community detection in complex networks (DCCN). The key idea is to find group centers that have a higher density than their neighbors and a relatively large integrated distance from nodes of higher density. The experimental results indicate that our approach is efficient and effective for community detection in complex networks.
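    This record does not spell out the algorithm, but the stated key idea matches the familiar density-peak pattern. A minimal sketch under that assumption (degree as density, hop distance as the distance measure; both choices are ours, not necessarily DCCN's):

    ```python
    from collections import deque

    def bfs_dist(adj, src):
        """Hop distances from src in an unweighted graph (adjacency dict)."""
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return dist

    def density_peak_centers(adj, n_centers=2):
        """Density-peak sketch: density = degree; delta = distance to the
        nearest node of higher density; centers score high on both."""
        rho = {u: len(adj[u]) for u in adj}
        delta = {}
        for u in adj:
            d = bfs_dist(adj, u)
            higher = [d[v] for v in adj if rho[v] > rho[u] and v in d]
            delta[u] = min(higher) if higher else max(d.values())
        score = {u: rho[u] * delta[u] for u in adj}
        return sorted(adj, key=score.get, reverse=True)[:n_centers]

    # Two triangles joined by one bridge edge -> one center per triangle.
    adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
    print(density_peak_centers(adj))   # -> [2, 3]
    ```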

  12. Induced polarization of clay-sand mixtures: experiments and modeling

    International Nuclear Information System (INIS)

    Okay, G.; Leroy, P.; Tournassat, C.; Ghorbani, A.; Jougnot, D.; Cosenza, P.; Camerlynck, C.; Cabrera, J.; Florsch, N.; Revil, A.

    2012-01-01

    Document available in extended abstract form only. Frequency-domain induced polarization (IP) measurements consist of imposing an alternating sinusoidal electrical current (AC) at a given frequency and measuring the resulting electrical potential difference between two other non-polarizing electrodes. The magnitude of the conductivity and the phase lag between the current and the potential difference can be expressed as a complex conductivity, with the in-phase component representing electromigration and the quadrature component representing the reversible storage of electrical charges (capacitive effect) in the porous material. Induced polarization has become an increasingly popular geophysical method for hydrogeological and environmental applications. These applications include, for instance, the characterization of clay materials used as permeability barriers in landfills or to contain various types of contaminants, including radioactive wastes. The goal of our study is to gain a better understanding of the influence of clay content, clay mineralogy, and pore-water salinity on complex conductivity measurements of saturated clay-sand mixtures in the frequency range ∼1 mHz-12 kHz. The complex conductivity of saturated unconsolidated sand-clay mixtures was experimentally investigated using two types of clay minerals, kaolinite and smectite, in the frequency range 1.4 mHz-12 kHz. Four different types of samples were used: two containing mainly kaolinite (80% of the mass, the remainder containing 15% smectite and 5% illite/muscovite; 95% kaolinite and 5% illite/muscovite), and two containing mainly Na-smectite or Na-Ca-smectite (95% of the mass; bentonite). The experiments were performed with various clay contents (1, 5, 20, and 100% by volume of the sand-clay mixture) and salinities (distilled water, 0.1 g/L, 1 g/L, and 10 g/L NaCl solution). In total, 44 saturated clay or clay-sand mixtures were prepared. Induced polarization measurements
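    For readers unfamiliar with the quantity being measured, the decomposition is straightforward. The snippet below (with made-up magnitude and phase values, not the study's data) converts magnitude and phase lag into in-phase and quadrature conductivities:

    ```python
    import numpy as np

    # Sketch: decompose IP data into in-phase and quadrature conductivity.
    # |sigma| in S/m and phase lag in mrad are hypothetical example values.
    magnitude = np.array([0.052, 0.050, 0.049])   # S/m
    phase_mrad = np.array([-2.1, -4.8, -9.5])     # mrad, negative by convention

    sigma = magnitude * np.exp(1j * phase_mrad * 1e-3)
    in_phase = sigma.real      # electromigration (conduction)
    quadrature = sigma.imag    # reversible charge storage (polarization)
    print(in_phase, quadrature)
    ```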

  13. Applying complexity theory: A primer for identifying and modeling firm anomalies

    Directory of Open Access Journals (Sweden)

    Arch G. Woodside

    2018-01-01

    Full Text Available This essay elaborates on the usefulness of embracing complexity theory, modeling outcomes rather than directionality, and modeling complex rather than simple outcomes in strategic management. Complexity theory includes the tenet that most antecedent conditions are neither sufficient nor necessary for the occurrence of a specific outcome. Identifying a firm by individual antecedents (i.e., non-innovative versus highly innovative, small versus large size in sales or number of employees, or serving local versus international markets) provides shallow information in modeling specific outcomes (e.g., high sales growth or high profitability)—even if directional analyses (e.g., regression analysis, including structural equation modeling) indicate that the independent (main) effects of the individual antecedents relate to outcomes directionally—because firm (case) anomalies almost always occur to main effects. Examples: a number of highly innovative firms have low sales while others have high sales, and a number of non-innovative firms have low sales while others have high sales. Breaking away from the current dominant logic of directionality testing—null hypothesis statistical testing (NHST)—to embrace somewhat precise outcome testing (SPOT) is necessary for extracting highly useful information about the causes of anomalies—associations opposite to expected and "statistically significant" main effects. The study of anomalies extends to identifying the occurrences of four-corner strategy outcomes: firms doing well in favorable circumstances, firms doing badly in favorable circumstances, firms doing well in unfavorable circumstances, and firms doing badly in unfavorable circumstances. Models of four-corner strategy outcomes advance strategic management beyond the current dominant logic of directional modeling of single outcomes.

  14. Modeling experiments using quantum and Kolmogorov probability

    International Nuclear Information System (INIS)

    Hess, Karl

    2008-01-01

    Criteria are presented that permit a straightforward partition of experiments into sets that can be modeled using both quantum probability and the classical probability framework of Kolmogorov. These new criteria concentrate on the operational aspects of the experiments and lead beyond the commonly appreciated partition by relating experiments to commuting and non-commuting quantum operators as well as non-entangled and entangled wavefunctions. In other words the space of experiments that can be understood using classical probability is larger than usually assumed. This knowledge provides advantages for areas such as nanoscience and engineering or quantum computation.

  15. Simulation of the last glacial cycle with a coupled climate ice-sheet model of intermediate complexity

    Directory of Open Access Journals (Sweden)

    A. Ganopolski

    2010-04-01

    Full Text Available A new version of the Earth system model of intermediate complexity, CLIMBER-2, which includes the three-dimensional polythermal ice-sheet model SICOPOLIS, is used to simulate the last glacial cycle forced by variations of the Earth's orbital parameters and atmospheric concentration of major greenhouse gases. The climate and ice-sheet components of the model are coupled bi-directionally through a physically-based surface energy and mass balance interface. The model accounts for the time-dependent effect of aeolian dust on planetary and snow albedo. The model successfully simulates the temporal and spatial dynamics of the major Northern Hemisphere (NH ice sheets, including rapid glacial inception and strong asymmetry between the ice-sheet growth phase and glacial termination. Spatial extent and elevation of the ice sheets during the last glacial maximum agree reasonably well with palaeoclimate reconstructions. A suite of sensitivity experiments demonstrates that simulated ice-sheet evolution during the last glacial cycle is very sensitive to some parameters of the surface energy and mass-balance interface and dust module. The possibility of a considerable acceleration of the climate ice-sheet model is discussed.

  16. Predictive modeling of liquid-sodium thermal–hydraulics experiments and computations

    International Nuclear Information System (INIS)

    Arslan, Erkan; Cacuci, Dan G.

    2014-01-01

    Highlights: • We applied the predictive modeling method of Cacuci and Ionescu-Bujor (2010). • We assimilated data from sodium flow experiments. • We used computational fluid dynamics simulations of sodium experiments. • The predictive modeling method greatly reduced uncertainties in predicted results. - Abstract: This work applies the predictive modeling procedure formulated by Cacuci and Ionescu-Bujor (2010) to assimilate data from liquid-sodium thermal–hydraulics experiments in order to reduce systematically the uncertainties in the predictions of computational fluid dynamics (CFD) simulations. The predicted CFD-results for the best-estimate model parameters and results describing sodium-flow velocities and temperature distributions are shown to be significantly more precise than the original computations and experiments, in that the predicted uncertainties for the best-estimate results and model parameters are significantly smaller than both the originally computed and the experimental uncertainties
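    The full procedure of Cacuci and Ionescu-Bujor is beyond a few lines, but the headline effect, namely predicted uncertainties smaller than both the computed and the experimental ones, already appears in plain inverse-variance data assimilation. A toy sketch, with hypothetical numbers and standing in for, not reproducing, their method:

    ```python
    def assimilate(x_model, var_model, x_exp, var_exp):
        """Textbook inverse-variance weighting of a model prediction and a
        measurement. The combined variance is always smaller than either
        input variance, which is the effect the abstract describes.
        """
        w_m, w_e = 1.0 / var_model, 1.0 / var_exp
        x_best = (w_m * x_model + w_e * x_exp) / (w_m + w_e)
        var_best = 1.0 / (w_m + w_e)
        return x_best, var_best

    # Hypothetical sodium outlet temperature (K): CFD vs. measurement.
    print(assimilate(820.0, 25.0, 812.0, 16.0))   # -> (~815.1, ~9.8)
    ```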

  17. arXiv Spin models in complex magnetic fields: a hard sign problem

    CERN Document Server

    de Forcrand, Philippe

    2018-01-01

    Coupling spin models to complex external fields can give rise to interesting phenomena like zeroes of the partition function (Lee-Yang zeroes, edge singularities) or oscillating propagators. Unfortunately, it usually also leads to a severe sign problem that can be overcome only in special cases; if the partition function has zeroes, the sign problem is even representation-independent at these points. In this study, we couple the N-state Potts model in different ways to a complex external magnetic field and discuss the above-mentioned phenomena and their relations based on analytic calculations (1D) and results obtained using a modified cluster algorithm (general D) that in many cases either cures or at least drastically reduces the sign problem induced by the complex external field.
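    For the 1D case, the partition function with a complex field can be computed exactly with a transfer matrix. A minimal sketch follows; the Hamiltonian convention and all parameter values are our own choices and may differ from the paper's:

    ```python
    import numpy as np

    def potts_partition(n_states, length, beta, J, h):
        """Z for the 1D N-state Potts chain with periodic boundaries via the
        transfer matrix; h may be complex.

        Assumed convention: H = -J*sum_i d(s_i, s_{i+1}) - h*sum_i d(s_i, 0).
        """
        s = np.arange(n_states)
        delta_bond = (s[:, None] == s[None, :]).astype(complex)
        field = 0.5 * ((s[:, None] == 0).astype(float)
                       + (s[None, :] == 0).astype(float))  # field split over bonds
        T = np.exp(beta * (J * delta_bond + h * field))
        eigvals = np.linalg.eigvals(T)
        return np.sum(eigvals ** length)                   # Z = Tr(T^L)

    # Scan an imaginary field line to look for small |Z| (near Lee-Yang zeroes).
    for im_h in (0.5, 1.0, 1.5):
        z = potts_partition(n_states=3, length=16, beta=1.0, J=1.0, h=1j * im_h)
        print(im_h, abs(z))
    ```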

  18. Experiments and Modeling of G-Jitter Fluid Mechanics

    Science.gov (United States)

    Leslie, F. W.; Ramachandran, N.; Whitaker, Ann F. (Technical Monitor)

    2002-01-01

    While there is a general understanding of the acceleration environment onboard an orbiting spacecraft, past research efforts in the modeling and analysis area have still not produced a general theory that predicts the effects of multi-spectral periodic accelerations on a general class of experiments, nor have they produced scaling laws that a prospective experimenter can use to assess how an experiment might be affected by this acceleration environment. Furthermore, there are no actual flight experimental data that correlate heat or mass transport with measurements of the periodic acceleration environment. The present investigation approaches this problem with carefully conducted terrestrial experiments and rigorous numerical modeling to better understand the effects of residual gravity and g-jitter on experiments. The approach is to use magnetic fluids that respond to an imposed magnetic field gradient in much the same way as fluid density responds to a gravitational field. By utilizing a programmable power source in conjunction with an electromagnet, both static and dynamic body forces can be simulated in lab experiments. The paper provides an overview of the technique and includes recent results from the experiments.

  19. Predicting protein complexes using a supervised learning method combined with local structural information.

    Science.gov (United States)

    Dong, Yadong; Sun, Yongqi; Qin, Chao

    2018-01-01

    The existing protein complex detection methods can be broadly divided into two categories: unsupervised and supervised learning methods. Most of the unsupervised learning methods assume that protein complexes are in dense regions of protein-protein interaction (PPI) networks even though many true complexes are not dense subgraphs. Supervised learning methods utilize the informative properties of known complexes; they often extract features from existing complexes and then use the features to train a classification model. The trained model is used to guide the search process for new complexes. However, insufficient extracted features, noise in the PPI data and the incompleteness of complex data make the classification model imprecise. Consequently, the classification model is not sufficient for guiding the detection of complexes. Therefore, we propose a new robust score function that combines the classification model with local structural information. Based on the score function, we provide a search method that works both forwards and backwards. The results from experiments on six benchmark PPI datasets and three protein complex datasets show that our approach can achieve better performance compared with the state-of-the-art supervised, semi-supervised and unsupervised methods for protein complex detection, occasionally significantly outperforming such methods.

  20. Cooling tower plume - model and experiment

    Science.gov (United States)

    Cizek, Jan; Gemperle, Jiri; Strob, Miroslav; Nozicka, Jiri

    The paper discusses a simple model of the so-called steam plume, which in many cases forms during the operation of the evaporative cooling systems of power plants or large technological units. The model is based on semi-empirical equations that describe the behaviour of a mixture of two gases in the case of a free jet stream. In the conclusion of the paper, a simple experiment is presented through which the results of the designed model will be validated in subsequent work.

  1. A preliminary investigation of the applicability of surface complexation modeling to the understanding of transportation cask weeping

    International Nuclear Information System (INIS)

    Granstaff, V.E.; Chambers, W.B.; Doughty, D.H.

    1994-01-01

    A new application for surface complexation modeling is described. These models, which describe chemical equilibria among aqueous and adsorbed species, have typically been used for predicting groundwater transport of contaminants by modeling the natural adsorbents as various metal oxides. Our experiments suggest that this type of modeling can also explain stainless steel surface contamination and decontamination mechanisms. Stainless steel transportation casks, when submerged in a spent fuel storage pool at nuclear power stations, can become contaminated with radionuclides such as ¹³⁷Cs, ¹³⁴Cs, and ⁶⁰Co. Subsequent release or desorption of these contaminants under varying environmental conditions occasionally results in the phenomenon known as "cask weeping." We have postulated that contaminants in the storage pool adsorb onto the hydrous metal oxide surface of the passivated stainless steel and are subsequently released (by conversion from a fixed to a removable form) during transportation, due to varying environmental factors such as humidity, road salt, dirt, and acid rain. It is well known that 304 stainless steel has a chromium-enriched passive surface layer; thus its adsorption behavior should be similar to that of a mixed chromium/iron oxide. To help us interpret our studies of reversible binding of dissolved metals on stainless steel surfaces, we have studied the adsorption of Co²⁺ on Cr₂O₃. The data are interpreted using electrostatic surface complexation models. The FITEQL computer program was used to obtain the model binding constants and site densities from the experimental data. The MINTEQA2 computer speciation model was used, with the fitted constants, in an attempt to validate this approach.

  2. Quantitative analysis of surface deformation and ductile flow in complex analogue geodynamic models based on PIV method.

    Science.gov (United States)

    Krýza, Ondřej; Lexa, Ondrej; Závada, Prokop; Schulmann, Karel; Gapais, Denis; Cosgrove, John

    2017-04-01

    The PIV (particle image velocimetry) analysis method is an optical method widely used in many technical branches where material flow visualization and quantification are important. Typical examples are studies of liquid flow through complex channel systems, gas spreading, or combustion problems. In our current research we used this method to investigate two types of complex analogue geodynamic and tectonic experiments. The first class of experiments aims to model large-scale oroclinal buckling as an analogue of the late Paleozoic to early Mesozoic evolution of the Central Asian Orogenic Belt (CAOB) resulting from northward drift of the North-China craton towards the Siberian craton. Here we studied the relationship between lower crustal and lithospheric mantle flows and upper crustal deformation, respectively. The second class of experiments is focused on a more general study of lower crustal flow in indentation systems, which represent a major component of some large hot orogens (e.g., the Bohemian massif). Most of the simulations in both cases show that the shape of the brittle structures situated in the upper crust depends strongly on the folding style of the middle and lower ductile layers, which is influenced by the rheological, geometrical, and thermal conditions of different parts across the shortened domain. The purpose of the PIV application is to quantify material redistribution in critical domains of the model. Deriving flow directions and calculating strain-rate and total displacement fields in analogue experiments is generally difficult and time-consuming, and is often performed only on the basis of visual evaluation. The PIV method operates on a set of images in which small tracer particles are seeded within the modeled domain and are assumed to faithfully follow the material flow. On the basis of pixel coordinate estimation, the material displacement field, velocity field, strain rate, vorticity, tortuosity, etc. are calculated. In our experiments we used velocity field divergence to
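    The displacement estimation at the heart of PIV reduces to locating the cross-correlation peak between two interrogation windows. A self-contained sketch on synthetic images (not the experimental data of this study):

    ```python
    import numpy as np

    def piv_displacement(win_a, win_b):
        """Single-window PIV sketch: FFT cross-correlation of two interrogation
        windows; the correlation peak gives the shift of win_b relative to win_a.
        """
        a = win_a - win_a.mean()
        b = win_b - win_b.mean()
        corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        n, m = corr.shape
        if dy > n // 2: dy -= n   # unwrap cyclic shifts to signed offsets
        if dx > m // 2: dx -= m
        return dy, dx

    # Synthetic check: a random "particle image" shifted by (3, -2) pixels.
    rng = np.random.default_rng(0)
    frame1 = rng.random((32, 32))
    frame2 = np.roll(frame1, shift=(3, -2), axis=(0, 1))
    print(piv_displacement(frame1, frame2))   # -> (3, -2)
    ```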

  3. Complex Constructivism: A Theoretical Model of Complexity and Cognition

    Science.gov (United States)

    Doolittle, Peter E.

    2014-01-01

    Education has long been driven by its metaphors for teaching and learning. These metaphors have influenced both educational research and educational practice. Complexity and constructivism are two theories that provide functional and robust metaphors. Complexity provides a metaphor for the structure of myriad phenomena, while constructivism…

  4. Validation Study for an Atmospheric Dispersion Model, Using Effective Source Heights Determined from Wind Tunnel Experiments in Nuclear Safety Analysis

    Directory of Open Access Journals (Sweden)

    Masamichi Oura

    2018-03-01

    Full Text Available For more than fifty years, atmospheric dispersion predictions based on the joint use of a Gaussian plume model and wind tunnel experiments have been applied in both Japan and the U.K. for the evaluation of public radiation exposure in nuclear safety analysis. The effective source height used in the Gaussian model is determined from ground-level concentration data obtained in a wind tunnel experiment using a scaled terrain and site model. In the present paper, the concentrations calculated by this method are compared with data observed over complex terrain in the field, under a number of meteorological conditions. Good agreement was confirmed under near-neutral and unstable stability conditions. However, it was found necessary to reduce the effective source height by 50% in order to achieve a conservative estimate of the field observations in a stable atmosphere.
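    The role of the effective source height He can be seen directly in the standard ground-level Gaussian plume formula. The sketch below uses simple linear sigma(x) fits purely for illustration; in the operational method He comes from the wind tunnel data and the sigmas from the stability class:

    ```python
    import math

    def ground_conc(q, u, x, y, he, a=0.08, b=0.06):
        """Ground-level concentration from a Gaussian plume with effective
        source height `he` (m). sigma_y = a*x and sigma_z = b*x are crude
        illustrative fits, not the study's dispersion parameters.
        """
        sig_y, sig_z = a * x, b * x
        return (q / (math.pi * u * sig_y * sig_z)
                * math.exp(-y**2 / (2 * sig_y**2))
                * math.exp(-he**2 / (2 * sig_z**2)))

    # Halving the effective source height raises the near-field maximum,
    # which is why the 50% reduction yields a conservative estimate.
    for he in (100.0, 50.0):   # m
        print(he, ground_conc(q=1.0, u=3.0, x=1000.0, y=0.0, he=he))
    ```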

  5. FACET: A simulation software framework for modeling complex societal processes and interactions

    Energy Technology Data Exchange (ETDEWEB)

    Christiansen, J. H.

    2000-06-02

    FACET, the Framework for Addressing Cooperative Extended Transactions, was developed at Argonne National Laboratory to address the need for a simulation software architecture in the style of an agent-based approach, but with sufficient robustness, expressiveness, and flexibility to be able to deal with the levels of complexity seen in real-world social situations. FACET is an object-oriented software framework for building models of complex, cooperative behaviors of agents. It can be used to implement simulation models of societal processes such as the complex interplay of participating individuals and organizations engaged in multiple concurrent transactions in pursuit of their various goals. These transactions can be patterned on, for example, clinical guidelines and procedures, business practices, government and corporate policies, etc. FACET can also address other complex behaviors such as biological life cycles or manufacturing processes. To date, for example, FACET has been applied to such areas as land management, health care delivery, avian social behavior, and interactions between natural and social processes in ancient Mesopotamia.

  6. Development of structural model of adaptive training complex in ergatic systems for professional use

    Science.gov (United States)

    Obukhov, A. D.; Dedov, D. L.; Arkhipov, A. E.

    2018-03-01

    The article considers the structural model of an adaptive training complex (ATC), which reflects the interrelations between the hardware, software, and mathematical model of the ATC and describes the processes in this subject area. A description of the main components of the software and hardware complex, their interaction, and their functioning within the common system is given. The article also briefly describes the mathematical models of personnel activity, the technical system, and the influences whose interactions formalize the regularities of ATC functioning. Studying the main objects of training complexes and the connections between them will make a practical implementation of the ATC in ergatic systems for professional use possible.

  7. Tracer simulation using a global general circulation model: Results from a midlatitude instantaneous source experiment

    International Nuclear Information System (INIS)

    Mahlman, J.D.; Moxim, W.J.

    1978-01-01

    An 11-level general circulation model with seasonal variation is used to perform an experiment on the dispersion of passive tracers. Specially constructed time-dependent winds from this model are used as input to a separate tracer model. The methodologies employed to construct the tracer model are described. The experiment presented is the evolution of a hypothetical instantaneous source of tracer on 1 January with maximum initial concentration at 65 mb, 36°N, 180°E. The tracer is assumed to have no sources or sinks in the stratosphere, but is subject to removal processes in the lower troposphere. The experimental results reveal a number of similarities to observed tracer behavior, including the average poleward-downward slope of mixing ratio isopleths, strong tracer gradients across the tropopause, intrusion of tracer into the Southern Hemisphere lower stratosphere, and the long-term interhemispheric exchange rate. The model residence times show behavior intermediate between those exhibited by particulate radioactive debris and gaseous ¹⁴CO₂. This suggests that caution should be employed when either radioactive debris or ¹⁴CO₂ data are used to develop empirical models for prediction of gaseous tracers which are efficiently removed in the troposphere. In this experiment, the tracer mixing ratio and potential vorticity evolve to very high correlations. Mechanisms for this correlation are discussed. The zonal mean tracer balances exhibit complex behavior among the various transport terms. At early stages, the tracer evolution is dominated by eddy effects. Later, a very large degree of self-cancellation between mean cell and eddy effects is observed. During seasonal transitions, however, this self-cancellation diminishes markedly, leading to significant changes in the zonal mean tracer distribution. A possible theoretical explanation is presented

  8. Towards a Unified Theory of Health-Disease: I. Health as a complex model-object

    Directory of Open Access Journals (Sweden)

    Naomar Almeida-Filho

    2013-06-01

    Full Text Available Theory building is one of the most crucial challenges faced by basic, clinical, and population research, which form the scientific foundations of health practices in contemporary societies. The objective of this study is to propose a Unified Theory of Health-Disease as a conceptual tool for modeling health-disease-care in the light of complexity approaches. With this aim, the epistemological basis of theoretical work in the health field and concepts from complexity theory as they relate to health problems are discussed. Secondly, the concepts of model-object, multi-planes of occurrence, modes of health, and the disease-illness-sickness complex are introduced and integrated into a unified theoretical framework. Finally, in the light of recent epistemological developments, the concept of Health-Disease-Care Integrals is updated as a complex reference object fit for modeling health-related processes and phenomena.

  9. Modelling and simulating in-stent restenosis with complex automata

    NARCIS (Netherlands)

    Hoekstra, A.G.; Lawford, P.; Hose, R.

    2010-01-01

    In-stent restenosis, the maladaptive response of a blood vessel to injury caused by the deployment of a stent, is a multiscale system involving a large number of biological and physical processes. We describe a Complex Automata Model for in-stent restenosis, coupling bulk flow, drug diffusion, and

  10. Surface complexation modeling of the effects of phosphate on uranium(VI) adsorption

    Energy Technology Data Exchange (ETDEWEB)

    Romero-Gonzalez, M.R.; Cheng, T.; Barnett, M.O. [Auburn Univ., AL (United States). Dept. of Civil Engeneering; Roden, E.E. [Wisconsin Univ., Madison, WI (United States). Dept. of Geology and Geophysics

    2007-07-01

    Previously published data for the adsorption of U(VI) and/or phosphate onto amorphous Fe(III) oxides (hydrous ferric oxide, HFO) and crystalline Fe(III) oxides (goethite) were examined. These data were then used to test the ability of a commonly-used surface complexation model (SCM) to describe the adsorption of U(VI) and phosphate onto pure amorphous and crystalline Fe(III) oxides and synthetic goethite-coated sand, a surrogate for a natural Fe(III)-coated material, using the component additivity (CA) approach. Our modeling results show that this model was able to describe U(VI) adsorption onto both amorphous and crystalline Fe(III) oxides and also goethite-coated sand quite well in the absence of phosphate. However, because phosphate adsorption exhibits a stronger dependence on Fe(III) oxide type than U(VI) adsorption, we could not use this model to consistently describe phosphate adsorption onto both amorphous and crystalline Fe(III) oxides and goethite-coated sand. Nevertheless, the effects of phosphate on U(VI) adsorption could be incorporated into the model to describe U(VI) adsorption onto both amorphous and crystalline Fe(III) oxides and goethite-coated sand, at least as an initial approximation. These results illustrate both the potential and the limitations of using surface complexation models developed from pure systems to describe metal/radionuclide adsorption under more complex conditions. (orig.)

  11. Surface complexation modeling of the effects of phosphate on uranium(VI) adsorption

    International Nuclear Information System (INIS)

    Romero-Gonzalez, M.R.; Cheng, T.; Barnett, M.O.; Roden, E.E.

    2007-01-01

    Previously published data for the adsorption of U(VI) and/or phosphate onto amorphous Fe(III) oxides (hydrous ferric oxide, HFO) and crystalline Fe(III) oxides (goethite) were examined. These data were then used to test the ability of a commonly-used surface complexation model (SCM) to describe the adsorption of U(VI) and phosphate onto pure amorphous and crystalline Fe(III) oxides and synthetic goethite-coated sand, a surrogate for a natural Fe(III)-coated material, using the component additivity (CA) approach. Our modeling results show that this model was able to describe U(VI) adsorption onto both amorphous and crystalline Fe(III) oxides and also goethite-coated sand quite well in the absence of phosphate. However, because phosphate adsorption exhibits a stronger dependence on Fe(III) oxide type than U(VI) adsorption, we could not use this model to consistently describe phosphate adsorption onto both amorphous and crystalline Fe(III) oxides and goethite-coated sand. Nevertheless, the effects of phosphate on U(VI) adsorption could be incorporated into the model to describe U(VI) adsorption onto both amorphous and crystalline Fe(III) oxides and goethite-coated sand, at least as an initial approximation. These results illustrate both the potential and the limitations of using surface complexation models developed from pure systems to describe metal/radionuclide adsorption under more complex conditions. (orig.)

  12. COMPLEX OF NUMERICAL MODELS FOR COMPUTATION OF AIR ION CONCENTRATION IN PREMISES

    Directory of Open Access Journals (Sweden)

    M. M. Biliaiev

    2016-04-01

    Full Text Available Purpose. The article addresses the creation of a complex of numerical models for calculating ion concentration fields in premises of various purposes and in work areas. The developed complex should take into account the main physical factors influencing the formation of the ion concentration field: the aerodynamics of air jets in the room, the presence of furniture and equipment, the placement of ventilation openings, the ventilation mode, the location of ionization sources, the transfer of ions under the effect of the electric field, and other factors determining the intensity and shape of the ion concentration field. In addition, the complex of numerical models has to allow express calculation of the ion concentration in the premises, enabling quick sorting of possible variants and an «enlarged» evaluation of the air ion concentration in the premises. Methodology. A complex of numerical models for calculating the air ion regime in premises is developed. The CFD numerical model is based on the aerodynamics, electrostatics, and mass transfer equations, and takes into account the effect of air flows caused by the ventilation operation, diffusion, electric field effects, as well as the interaction of ions of different polarities with each other and with dust particles. The proposed balance model for computation of the air ion regime indoors allows operative calculation of the ion concentration field considering pulsed operation of the ionizer. Findings. Calculated data are presented from which one can estimate the ion concentration anywhere in premises with artificial air ionization. An example of calculating the negative ion concentration on the basis of the CFD numerical model in premises undergoing reengineering transformations is given. On the basis of the developed balance model, the air ion concentration in the room volume was calculated. Originality. Results of the air ion regime computation in premise, which
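    The balance model itself is not specified in this record; the sketch below shows the general shape such a lumped balance usually takes, with a pulsed source and a single effective loss time. All names and values are illustrative assumptions, not the paper's model.

    ```python
    def ion_balance(t_end, dt, tau, source):
        """Minimal lumped balance for the mean ion concentration n(t) in a room:
        dn/dt = q(t) - n / tau, where q(t) is a pulsed ioniser emission rate
        and tau lumps ventilation, recombination, and deposition losses.
        """
        n, t, out = 0.0, 0.0, []
        while t < t_end:
            n += dt * (source(t) - n / tau)   # explicit Euler step
            t += dt
            out.append((round(t, 2), round(n)))
        return out

    # Ioniser on for the first 10 s of every 60 s cycle (pulsed operation).
    pulse = lambda t: 5.0e4 if (t % 60.0) < 10.0 else 0.0   # ions cm^-3 s^-1
    print(ion_balance(t_end=180.0, dt=0.5, tau=30.0, source=pulse)[-1])
    ```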

  13. Computer Simulations and Theoretical Studies of Complex Systems: from complex fluids to frustrated magnets

    Science.gov (United States)

    Choi, Eunsong

    Computer simulations are an integral part of research in modern condensed matter physics; they serve as a direct bridge between theory and experiment by systematically applying a microscopic model to a collection of particles that effectively imitate a macroscopic system. In this thesis, we study two very different condensed matter systems, namely complex fluids and frustrated magnets, primarily by simulating the classical dynamics of each system. In the first part of the thesis, we focus on ionic liquids (ILs) and polymers--the two complementary classes of materials that can be combined to provide various unique properties. The properties of polymer/IL systems, such as conductivity, viscosity, and miscibility, can be fine-tuned by choosing an appropriate combination of cations, anions, and polymers. However, designing a system that meets a specific need requires a concrete understanding of the physics and chemistry that dictate the complex interplay between polymers and ionic liquids. In this regard, molecular dynamics (MD) simulation is an efficient tool that provides a molecular-level picture of such complex systems. We study the behavior of poly(ethylene oxide) (PEO) and imidazolium-based ionic liquids using MD simulations and statistical mechanics. We also discuss our efforts to develop reliable and efficient classical force fields for PEO and the ionic liquids. The second part is devoted to studies of geometrically frustrated magnets. In particular, a microscopic model which gives rise to an incommensurate spiral magnetic ordering observed in a pyrochlore antiferromagnet is investigated. The model is validated via a comparison of the spin-wave spectra with the neutron scattering data. Since the standard Holstein-Primakoff method is difficult to employ in such a complex ground-state structure with a large unit cell, we carry out classical spin dynamics simulations to compute spin-wave spectra directly from the Fourier transform of spin trajectories. We

  14. Modeling of modification experiments involving neutral-gas release

    International Nuclear Information System (INIS)

    Bernhardt, P.A.

    1983-01-01

    Many experiments involve the injection of neutral gases into the upper atmosphere. Examples are critical velocity experiments, MHD wave generation, ionospheric hole production, plasma striation formation, and ion tracing. Many of these experiments are discussed in other sessions of the Active Experiments Conference. This paper limits its discussion to: (1) the modeling of the neutral gas dynamics after injection, (2) subsequent formation of ionosphere holes, and (3) use of such holes as experimental tools

  15. Interacting price model and fluctuation behavior analysis from Lempel–Ziv complexity and multi-scale weighted-permutation entropy

    Energy Technology Data Exchange (ETDEWEB)

    Li, Rui, E-mail: lirui1401@bjtu.edu.cn; Wang, Jun

    2016-01-08

    A financial price model is developed based on the voter interacting system in this work. The Lempel–Ziv complexity is introduced to analyze the complex behaviors of the stock market. Some stock market stylized facts, including fat tails, absence of autocorrelation, and volatility clustering, are first investigated for the proposed price model. Then the complexity of the fluctuation behaviors of the real stock markets and the proposed price model is explored by Lempel–Ziv complexity (LZC) analysis and multi-scale weighted-permutation entropy (MWPE) analysis. A series of LZC analyses of the returns and the absolute returns of daily closing prices and moving average prices are performed. Moreover, the complexity of the returns, the absolute returns, and their corresponding intrinsic mode functions (IMFs) derived from empirical mode decomposition (EMD) is also investigated with MWPE. The numerical empirical study shows similar statistical and complex behaviors between the proposed price model and the real stock markets, which indicates that the proposed model is feasible to some extent. - Highlights: • A financial price dynamical model is developed based on the voter interacting system. • Lempel–Ziv complexity is applied for the first time to investigate stock market dynamics. • MWPE is employed to explore the complexity of the fluctuation behaviors of the stock market. • Empirical results show the feasibility of the proposed financial model.
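    For reference, here is a compact implementation of the Lempel–Ziv (LZ76) complexity used in such analyses. In practice a return series is first coarse-grained into symbols (for example, by the sign of the return); that preprocessing choice is our assumption, as this record does not state it.

    ```python
    def lz_complexity(s):
        """Lempel-Ziv (LZ76) complexity: the number of distinct phrases found
        when scanning the sequence left to right. (A common normalisation
        divides by n / log2(n); omitted here for brevity.)"""
        i, c, n = 0, 0, len(s)
        while i < n:
            l = 1
            # grow the phrase while it still occurs earlier in the sequence
            while i + l <= n and s[i:i + l] in s[:i + l - 1]:
                l += 1
            c += 1
            i += l
        return c

    # A periodic symbol series is far less complex than an irregular one.
    print(lz_complexity("01" * 32))                    # low: repeating pattern
    print(lz_complexity("0110111001011101101000011"))  # higher: irregular
    ```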

  16. Interacting price model and fluctuation behavior analysis from Lempel–Ziv complexity and multi-scale weighted-permutation entropy

    International Nuclear Information System (INIS)

    Li, Rui; Wang, Jun

    2016-01-01

    A financial price model is developed based on the voter interacting system in this work. The Lempel–Ziv complexity is introduced to analyze the complex behaviors of the stock market. Some stock market stylized facts, including fat tails, absence of autocorrelation, and volatility clustering, are first investigated for the proposed price model. Then the complexity of the fluctuation behaviors of the real stock markets and the proposed price model is explored by Lempel–Ziv complexity (LZC) analysis and multi-scale weighted-permutation entropy (MWPE) analysis. A series of LZC analyses of the returns and the absolute returns of daily closing prices and moving average prices are performed. Moreover, the complexity of the returns, the absolute returns, and their corresponding intrinsic mode functions (IMFs) derived from empirical mode decomposition (EMD) is also investigated with MWPE. The numerical empirical study shows similar statistical and complex behaviors between the proposed price model and the real stock markets, which indicates that the proposed model is feasible to some extent. - Highlights: • A financial price dynamical model is developed based on the voter interacting system. • Lempel–Ziv complexity is applied for the first time to investigate stock market dynamics. • MWPE is employed to explore the complexity of the fluctuation behaviors of the stock market. • Empirical results show the feasibility of the proposed financial model.

  17. A computational approach to modeling cellular-scale blood flow in complex geometry

    Science.gov (United States)

    Balogh, Peter; Bagchi, Prosenjit

    2017-04-01

    We present a computational methodology for modeling cellular-scale blood flow in arbitrary and highly complex geometry. Our approach is based on immersed-boundary methods, which allow modeling flows in arbitrary geometry while resolving the large deformation and dynamics of every blood cell with high fidelity. The present methodology seamlessly integrates different modeling components dealing with stationary rigid boundaries of complex shape, moving rigid bodies, and highly deformable interfaces governed by nonlinear elasticity. Thus it enables us to simulate 'whole' blood suspensions flowing through physiologically realistic microvascular networks that are characterized by multiple bifurcating and merging vessels, as well as geometrically complex lab-on-chip devices. The focus of the present work is on the development of a versatile numerical technique that is able to consider deformable cells and rigid bodies flowing in three-dimensional arbitrarily complex geometries over a diverse range of scenarios. After describing the methodology, a series of validation studies are presented against analytical theory, experimental data, and previous numerical results. Then, the capability of the methodology is demonstrated by simulating flows of deformable blood cells and heterogeneous cell suspensions in both physiologically realistic microvascular networks and geometrically intricate microfluidic devices. It is shown that the methodology can predict several complex microhemodynamic phenomena observed in vascular networks and microfluidic devices. The present methodology is robust and versatile, and has the potential to scale up to very large microvascular networks at organ levels.

  18. A mouse model of mitochondrial complex III dysfunction induced by myxothiazol

    Energy Technology Data Exchange (ETDEWEB)

    Davoudi, Mina [Pediatrics, Department of Clinical Sciences, Lund, Lund University, Lund 22185 (Sweden); Kallijärvi, Jukka; Marjavaara, Sanna [Folkhälsan Research Center, Biomedicum Helsinki, University of Helsinki, 00014 (Finland); Kotarsky, Heike; Hansson, Eva [Pediatrics, Department of Clinical Sciences, Lund, Lund University, Lund 22185 (Sweden); Levéen, Per [Pediatrics, Department of Clinical Sciences, Lund, Lund University, Lund 22185 (Sweden); Folkhälsan Research Center, Biomedicum Helsinki, University of Helsinki, 00014 (Finland); Fellman, Vineta, E-mail: Vineta.Fellman@med.lu.se [Pediatrics, Department of Clinical Sciences, Lund, Lund University, Lund 22185 (Sweden); Folkhälsan Research Center, Biomedicum Helsinki, University of Helsinki, 00014 (Finland); Children’s Hospital, Helsinki University Hospital, University of Helsinki, Helsinki 00029 (Finland)

    2014-04-18

    Highlights: • Reversible chemical inhibition of complex III in wild type mouse. • Myxothiazol causes decreased complex III activity in mouse liver. • The model is useful for therapeutic trials to improve mitochondrial function. - Abstract: Myxothiazol is a respiratory chain complex III (CIII) inhibitor that binds to the ubiquinol oxidation site Qo of CIII. It blocks electron transfer from ubiquinol to cytochrome b and thus inhibits CIII activity. It has been utilized as a tool in studies of respiratory chain function in in vitro and cell culture models. We developed a mouse model of biochemically induced and reversible CIII inhibition using myxothiazol. We administered myxothiazol intraperitoneally at a dose of 0.56 mg/kg to C57Bl/J6 mice every 24 h and assessed CIII activity, histology, lipid content, supercomplex formation, and gene expression in the livers of the mice. A reversible CIII activity decrease to 50% of control value occurred at 2 h post-injection. At 74 h only minor histological changes in the liver were found, supercomplex formation was preserved and no significant changes in the expression of genes indicating hepatotoxicity or inflammation were found. Thus, myxothiazol-induced CIII inhibition can be induced in mice for four days in a row without overt hepatotoxicity or lethality. This model could be utilized in further studies of respiratory chain function and pharmacological approaches to mitochondrial hepatopathies.

  19. Using multi-criteria analysis of simulation models to understand complex biological systems

    Science.gov (United States)

    Maureen C. Kennedy; E. David. Ford

    2011-01-01

    Scientists frequently use computer-simulation models to help solve complex biological problems. Typically, such models are highly integrated, they produce multiple outputs, and standard methods of model analysis are ill suited for evaluating them. We show how multi-criteria optimization with Pareto optimality allows for model outputs to be compared to multiple system...

  20. Evaluating the suitability of the SWAN/COSMO-2 model system to simulate short-crested surface waves for a narrow lake with complex bathymetry

    Directory of Open Access Journals (Sweden)

    Michael Graf

    2013-07-01

    Full Text Available The spectral wave model SWAN (Simulating Waves Nearshore was applied to Lake Zurich, a narrow pre-Alpine lake in Switzerland. The aim of the study is to investigate whether the model system consisting of SWAN and the numerical weather prediction model COSMO-2 is a suitable tool for wave forecasts for the pre-Alpine Lake Zurich. SWAN is able to simulate short-crested wind-generated surface waves. The model was forced with a time varying wind field taken from COSMO-2 with hourly outputs. Model simulations were compared with measured wave data at one near-shore site during a frontal passage associated with strong on-shore winds. The overall course of the measured wave height is well captured in the SWAN simulation: the wave amplitude significantly increases during the frontal passage followed by a transient drop in amplitude. The wave pattern on Lake Zurich is quite complex. It strongly depends on the inherent variability of the wind field and on the external forcing due to the surrounding complex topography. The influence of the temporal wind resolution is further studied with two sensitivity experiments. The first one considers a low-pass filtered wind field, based on a 2-h running mean of COSMO-2 output, and the second experiment uses simple synthetic gusts, which are implemented into the SWAN model and take into account short-term fluctuations of wind speed at 1-sec resolution. The wave field significantly differs for the 1-h and 2-h simulations, but is only negligibly affected by the gusts.

  1. Rhythmic complexity and predictive coding: a novel approach to modeling rhythm and meter perception in music

    Science.gov (United States)

    Vuust, Peter; Witek, Maria A. G.

    2014-01-01

    Musical rhythm, consisting of apparently abstract intervals of accented temporal events, has a remarkable capacity to move our minds and bodies. How does the cognitive system enable our experiences of rhythmically complex music? In this paper, we describe some common forms of rhythmic complexity in music and propose the theory of predictive coding (PC) as a framework for understanding how rhythm and rhythmic complexity are processed in the brain. We also consider why we feel so compelled by rhythmic tension in music. First, we consider theories of rhythm and meter perception, which provide hierarchical and computational approaches to modeling. Second, we present the theory of PC, which posits a hierarchical organization of brain responses reflecting fundamental, survival-related mechanisms associated with predicting future events. According to this theory, perception and learning is manifested through the brain’s Bayesian minimization of the error between the input to the brain and the brain’s prior expectations. Third, we develop a PC model of musical rhythm, in which rhythm perception is conceptualized as an interaction between what is heard (“rhythm”) and the brain’s anticipatory structuring of music (“meter”). Finally, we review empirical studies of the neural and behavioral effects of syncopation, polyrhythm and groove, and propose how these studies can be seen as special cases of the PC theory. We argue that musical rhythm exploits the brain’s general principles of prediction and propose that pleasure and desire for sensorimotor synchronization from musical rhythm may be a result of such mechanisms. PMID:25324813

  2. Rhythmic complexity and predictive coding: A novel approach to modeling rhythm and meter perception in music

    Directory of Open Access Journals (Sweden)

    Peter eVuust

    2014-10-01

    Full Text Available Musical rhythm, consisting of apparently abstract intervals of accented temporal events, has a remarkable capacity to move our minds and bodies. How does the cognitive system enable our experiences of rhythmically complex music? In this paper, we describe some common forms of rhythmic complexity in music and propose the theory of predictive coding as a framework for understanding how rhythm and rhythmic complexity are processed in the brain. We also consider why we feel so compelled by rhythmic tension in music. First, we consider theories of rhythm and meter perception, which provide hierarchical and computational approaches to modeling. Second, we present the theory of predictive coding, which posits a hierarchical organization of brain responses reflecting fundamental, survival-related mechanisms associated with predicting future events. According to this theory, perception and learning is manifested through the brain’s Bayesian minimization of the error between the input to the brain and the brain’s prior expectations. Third, we develop a predictive coding model of musical rhythm, in which rhythm perception is conceptualized as an interaction between what is heard (‘rhythm’) and the brain’s anticipatory structuring of music (‘meter’). Finally, we review empirical studies of the neural and behavioral effects of syncopation, polyrhythm and groove, and propose how these studies can be seen as special cases of the predictive coding theory. We argue that musical rhythm exploits the brain’s general principles of prediction and propose that pleasure and desire for sensorimotor synchronization from musical rhythm may be a result of such mechanisms.

  3. Cooling tower plume - model and experiment

    Directory of Open Access Journals (Sweden)

    Cizek Jan

    2017-01-01

    Full Text Available The paper discusses a simple model of the so-called steam plume, which in many cases forms during the operation of the evaporative cooling systems of power plants or large technological units. The model is based on semi-empirical equations that describe the behaviour of a mixture of two gases in the case of a free jet stream. In the conclusion of the paper, a simple experiment is presented through which the results of the designed model will be validated in subsequent work.

  4. On Measuring the Complexity of Networks: Kolmogorov Complexity versus Entropy

    Directory of Open Access Journals (Sweden)

    Mikołaj Morzy

    2017-01-01

    Full Text Available One of the most popular methods of estimating the complexity of networks is to measure the entropy of network invariants, such as adjacency matrices or degree sequences. Unfortunately, entropy and all entropy-based information-theoretic measures have several vulnerabilities. These measures are neither independent of a particular representation of the network nor able to capture the properties of the generative process which produces the network. Instead, we advocate the use of algorithmic entropy as the basis of a complexity definition for networks. Algorithmic entropy (also known as Kolmogorov complexity or K-complexity for short) evaluates the complexity of the description required for a lossless recreation of the network. This measure is not affected by a particular choice of network features and does not depend on the method of network representation. We perform experiments on Shannon entropy and K-complexity for gradually evolving networks. The results of these experiments point to K-complexity as the more robust and reliable measure of network complexity. The original contribution of the paper includes the introduction of several new entropy-deceiving networks and the empirical comparison of entropy and K-complexity as fundamental quantities for constructing complexity measures for networks.
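    The two quantities contrasted here can each be approximated in a few lines. The sketch below is our own: it uses the degree-sequence entropy as the entropy-based invariant and zlib compression length as the usual computable stand-in for the (uncomputable) Kolmogorov complexity; the paper's exact estimators may differ.

    ```python
    import zlib
    from collections import Counter
    from math import log2

    def degree_entropy(adj):
        """Shannon entropy of the degree sequence (an entropy-based invariant)."""
        degs = Counter(len(v) for v in adj.values())
        n = len(adj)
        return -sum((c / n) * log2(c / n) for c in degs.values())

    def k_complexity_proxy(adj):
        """Compressed length of the flattened adjacency matrix: a crude
        upper-bound proxy for Kolmogorov complexity."""
        nodes = sorted(adj)
        bits = "".join("1" if v in adj[u] else "0" for u in nodes for v in nodes)
        return len(zlib.compress(bits.encode()))

    ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}         # regular
    star = {0: [1, 2, 3, 4, 5, 6, 7], **{i: [0] for i in range(1, 8)}}
    for name, g in (("ring", ring), ("star", star)):
        print(name, degree_entropy(g), k_complexity_proxy(g))
    ```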

  5. Experimental determination and modeling of arsenic complexation with humic and fulvic acids.

    Science.gov (United States)

    Fakour, Hoda; Lin, Tsair-Fuh

    2014-08-30

    The complexation of humic acid (HA) and fulvic acid (FA) with arsenic (As) in water was studied. Experimental results indicate that arsenic may form complexes with HA and FA, with a higher affinity for arsenate than for arsenite. In the presence of iron-oxide-based adsorbents, binding of arsenic to HA/FA in water was significantly suppressed, probably due to adsorption of As and HA/FA. A two-site ligand binding model, considering only strong and weak binding site types, was successfully developed to describe the complexation of arsenic with the two natural organic fractions. The model showed that the number of weak sites was more than 10 times that of strong sites on both HA and FA for both arsenic species studied. The numbers of both types of binding sites were found to be proportional to the HA concentration, while the apparent stability constants, defined to describe the binding affinity between arsenic and the sites, are independent of the HA concentration. To the best of our knowledge, this is the first study to characterize the impact of HA concentrations on the applicability of the ligand binding model, and to extrapolate the model to FA.
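    A minimal sketch of a two-site (strong/weak) ligand binding isotherm of the kind described follows. The site densities and apparent stability constants are hypothetical; the paper's fitted values are not reproduced in this record.

    ```python
    def bound_arsenic(free_as, sites):
        """Two-site ligand binding sketch: total bound As is a sum of
        independent Langmuir-type terms, one per site class.
        `sites` holds (site density, apparent stability constant) pairs."""
        return sum(b_max * k * free_as / (1.0 + k * free_as)
                   for b_max, k in sites)

    # Few strong sites, >10x more weak sites (hypothetical values).
    sites = [(0.5, 2.0e5), (6.0, 8.0e2)]   # (density, K): strong, weak
    for c in (1e-6, 1e-5, 1e-4):           # free As concentration (mol/L)
        print(c, bound_arsenic(c, sites))
    ```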

  6. Beam model for seismic analysis of complex shear wall structure based on the strain energy equivalence

    International Nuclear Information System (INIS)

    Reddy, G.R.; Mahajan, S.C.; Suzuki, Kohei

    1997-01-01

    A nuclear reactor building structure consists of shear walls with complex geometry, beams and columns. The complexity of the structure is explained in the section Introduction. Seismic analysis of the complex reactor building structure using the continuum mechanics approach may produce good results, but this method is very difficult to apply. Hence, the finite element approach is found to be a useful technique for solving the dynamic equations of the reactor building structure. In this approach, a model which uses finite elements such as brick, plate and shell elements may produce accurate results. However, this model also poses some difficulties, which are explained in the section Modeling Techniques. Therefore, seismic analysis of complex structures is generally carried out using a lumped-mass beam model. This model is preferred because of its simplicity and economy. Nevertheless, mathematical modeling of a shear wall structure as a beam requires specialized skill and a thorough understanding of the structure. For accurate seismic analysis, it is necessary to model the stiffness, mass and damping realistically. In linear seismic analysis, modeling of the mass and damping poses fewer problems than modeling of the stiffness. When used to represent a complex structure, the stiffness of the beam is directly related to the shear wall section properties such as area, shear area and moment of inertia. Various beam models, classified by the method of stiffness evaluation, are also explained in the section Modeling Techniques. In the section Case Studies the accuracy and simplicity of the beam models are explained. Among the various beam models, the one which evaluates the stiffness using strain energy equivalence proves to be the simplest and most accurate for modeling the complex shear wall structure. (author)
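    As a hedged illustration of how the section properties named above enter a beam idealization, the sketch below combines bending and shear flexibilities for a single cantilever wall segment; the formulation and all numbers are illustrative assumptions, not the authors' strain-energy procedure.

    ```python
    # Illustrative sketch: lateral stiffness of one shear-wall segment modeled
    # as a cantilever beam with both bending and shear flexibility. A
    # strain-energy-equivalent model would calibrate I and As so the 1-D beam
    # stores the same strain energy as the 3-D wall.
    def storey_lateral_stiffness(E, G, I, As, L):
        f_bend = L**3 / (3.0 * E * I)  # bending flexibility of the segment
        f_shear = L / (G * As)         # shear flexibility of the segment
        return 1.0 / (f_bend + f_shear)

    # Hypothetical concrete wall segment (E, G in Pa; I in m^4; As in m^2; L in m):
    print(storey_lateral_stiffness(E=30e9, G=12.5e9, I=2.0, As=1.5, L=4.0))
    ```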

  7. The relation between geometry, hydrology and stability of complex hillslopes examined using low-dimensional hydrological models

    NARCIS (Netherlands)

    Talebi, A.

    2008-01-01

    Key words: Hillslope geometry, Hillslope hydrology, Hillslope stability, Complex hillslopes, Modeling shallow landslides, HSB model, HSB-SM model.

    The hydrologic response of a hillslope to rainfall involves a complex, transient saturated-unsaturated interaction that usually leads to a

  8. The Complexity of Developmental Predictions from Dual Process Models

    Science.gov (United States)

    Stanovich, Keith E.; West, Richard F.; Toplak, Maggie E.

    2011-01-01

    Drawing developmental predictions from dual-process theories is more complex than is commonly realized. Overly simplified predictions drawn from such models may lead to premature rejection of the dual process approach as one of many tools for understanding cognitive development. Misleading predictions can be avoided by paying attention to several…

  9. The fence experiment - a first evaluation of shelter models

    DEFF Research Database (Denmark)

    Peña, Alfredo; Bechmann, Andreas; Conti, Davide

    2016-01-01

    We present a preliminary evaluation of shelter models of different degrees of complexity using full-scale lidar measurements of the shelter on a vertical plane behind and orthogonal to a fence. Model results accounting for the distribution of the relative wind direction within the observed direct...

  10. Development and evaluation of a musculoskeletal model of the elbow joint complex

    Science.gov (United States)

    Gonzalez, Roger V.; Hutchins, E. L.; Barr, Ronald E.; Abraham, Lawrence D.

    1993-01-01

    This paper describes the development and evaluation of a musculoskeletal model that represents human elbow flexion-extension and forearm pronation-supination. The length, velocity, and moment arm for each of the eight musculotendon actuators were based on skeletal anatomy and position. Musculotendon parameters were determined for each actuator and verified by comparing analytical torque-angle curves with experimental joint torque data. The parameters and skeletal geometry were also utilized in the musculoskeletal model for the analysis of ballistic elbow joint complex movements. The key objective was to develop a computational model, guided by parameterized optimal control, to investigate the relationship among patterns of muscle excitation, individual muscle forces, and movement kinematics. The model was verified using experimental kinematic, torque, and electromyographic data from volunteer subjects performing ballistic elbow joint complex movements.
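    The torque-angle verification step described above can be sketched as force times moment arm, both varying with joint angle; the force-length relation, path geometry and every parameter below are hypothetical, not the paper's musculotendon parameters.

    ```python
    # Hypothetical single-actuator sketch of an analytical torque-angle curve:
    # joint torque = muscle force (a function of musculotendon length) times
    # the moment arm. None of these values come from the paper.
    import math

    def muscle_force(length, f_max=500.0, l_opt=0.12):
        """Bell-shaped active force-length relation (N)."""
        return f_max * math.exp(-(((length - l_opt) / (0.5 * l_opt)) ** 2))

    def musculotendon_length(theta, l0=0.25, r=0.03):
        """Path length (m) that shortens as the elbow flexes (theta in rad)."""
        return l0 - r * theta

    def moment_arm(theta, r=0.03):
        """Constant moment arm (m); real models vary this with angle."""
        return r

    for deg in range(0, 150, 30):
        th = math.radians(deg)
        torque = muscle_force(musculotendon_length(th)) * moment_arm(th)
        print(deg, round(torque, 2))  # N*m at each flexion angle
    ```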

  11. Conflict and complexity countering terrorism, insurgency, ethnic and regional violence

    CERN Document Server

    Bar-Yam, Yaneer; Minai, Ali

    2015-01-01

    Complexity science affords a number of novel tools for examining terrorism, particularly network analysis and NK-Boolean fitness landscapes as well as other tools drawn from non-linear dynamical systems modeling. This book follows the methodologies of complex adaptive systems research in their application to addressing the problems of terrorism, specifically terrorist networks, their structure and various methods of mapping and interdicting them as well as exploring the complex landscape of network-centric and irregular warfare. A variety of new models and approaches are presented here, including Dynamic Network Analysis, DIME/PMESII models, percolation models and emergent models of insurgency. In addition, the analysis is informed by practical experience, with analytical and policy guidance from authors who have served within the U.S. Department of Defense, the British Ministry of Defence as well as those who have served in a civilian capacity as advisors on terrorism and counter-terrorism.

  12. Simulating Complex, Cold-region Process Interactions Using a Multi-scale, Variable-complexity Hydrological Model

    Science.gov (United States)

    Marsh, C.; Pomeroy, J. W.; Wheater, H. S.

    2017-12-01

    Accurate management of water resources is necessary for social, economic, and environmental sustainability worldwide. In locations with seasonal snowcovers, the accurate prediction of these water resources is further complicated by frozen soils, solid-phase precipitation, blowing snow transport, and snowcover-vegetation-atmosphere interactions. Complex process interactions and feedbacks are a key feature of hydrological systems and may result in emergent phenomena, i.e., the arising of novel and unexpected properties within a complex system. One example is the feedback associated with blowing snow redistribution, which can lead to drifts that cause locally increased soil moisture, thus increasing plant growth, which in turn impacts snow redistribution, creating larger drifts. Attempting to simulate these emergent behaviours is a significant challenge, however, and there is concern that process conceptualizations within current models are too incomplete to represent the needed interactions. An improved understanding of the role of emergence in hydrological systems often requires high-resolution distributed numerical hydrological models that incorporate the relevant process dynamics. The Canadian Hydrological Model (CHM) provides a novel tool for examining cold region hydrological systems. Key features include an efficient terrain representation that allows simulations at various spatial scales, reduced computational overhead, and a modular process representation that allows for an alternative-hypothesis framework. Using both physics-based and conceptual process representations, sourced from long-term process studies and the current cold regions literature, allows for comparison of process representations and, importantly, of their ability to produce emergent behaviours. Examining the system in a holistic, process-based manner can hopefully yield important insights and aid in the development of improved process representations.
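    The drift-vegetation feedback described above can be caricatured in a few lines; this toy loop is an illustration of emergence, not CHM code, and every coefficient is hypothetical.

    ```python
    # Toy illustration (not CHM code) of the blowing-snow feedback: deeper
    # drifts -> wetter soil -> taller vegetation -> more trapped snow.
    drift, veg = 0.1, 0.1  # drift depth (m) and vegetation height (m)
    for year in range(30):
        moisture = 0.5 * drift                # melt from the drift wets the soil
        veg = min(1.0, veg + 0.3 * moisture)  # growth, saturating at 1 m
        drift = 0.1 + 0.8 * veg               # taller plants trap more snow
    print(round(drift, 2), round(veg, 2))
    # The coupled system settles near a 0.9 m drift, far above the 0.1 m
    # baseline forced without vegetation: behaviour neither process shows alone.
    ```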

  13. Evaluation of the WRF model for precipitation downscaling on orographic complex islands

    Science.gov (United States)

    Díaz, Juan P.; González, Albano; Expósito, Francisco; Pérez, Juan C.

    2010-05-01

    General Circulation Models (GCMs) have proven to be an effective tool to simulate many aspects of large-scale and global climate. However, their applicability to climate impact studies is limited by their capability to resolve regional-scale situations. In this sense, dynamical downscaling techniques are an appropriate alternative for estimating high-resolution regional climatologies. In this work, the Weather Research and Forecasting model (WRF) has been used to simulate precipitation over the Canary Islands region during 2009. The precipitation patterns over the Canary Islands, located in the North Atlantic region, show large gradients over a relatively small geographical area, due to large-scale factors such as the Trade Winds regime predominant in the area and mesoscale factors mainly due to the complex terrain. A sensitivity study of the simulated WRF precipitation to variations in model setup and parameterizations was carried out. WRF experiments were performed using two-way nesting at 3 km horizontal grid spacing and 28 vertical levels in the Canaries inner domain. The initial, lateral, and lower boundary conditions for the outer domain were provided at 6-hourly intervals by NCEP FNL (Final) Operational Global Analysis data at 1.0x1.0 degree resolution, interpolated onto the WRF model grid. Numerous model options were tested, including different microphysics schemes, cumulus parameterizations and nudging configurations. A positive-definite moisture advection condition was also checked. Two integration approaches were analyzed: a 1-year continuous long-term integration and a consecutive short-term monthly reinitialized integration. To assess the accuracy of our simulations, model results are compared against observational datasets obtained from a network of meteorological stations in the region. In general, we observe that the regional model is able to reproduce the spatial distribution of precipitation, but overestimates rainfall, mainly during strong

  14. Topographic evolution of sandbars: Flume experiment and computational modeling

    Science.gov (United States)

    Kinzel, Paul J.; Nelson, Jonathan M.; McDonald, Richard R.; Logan, Brandy L.

    2010-01-01

    Measurements of sandbar formation and evolution were carried out in a laboratory flume, and the topographic characteristics of these barforms were compared to predictions from a computational flow and sediment transport model with bed evolution. The flume experiment produced sandbars of approximately mode 2, whereas the numerical simulations produced a bed morphology better approximated as alternate bars of mode 1. In addition, bar formation occurred more rapidly in the laboratory channel than in the model channel. This paper focuses on a steady-flow laboratory experiment without upstream sediment supply. Future experiments will examine the effects of unsteady flow and sediment supply and the use of numerical models to simulate the response of barform topography to these influences.

  15. Logic-based hierarchies for modeling behavior of complex dynamic systems with applications

    International Nuclear Information System (INIS)

    Hu, Y.S.; Modarres, M.

    2000-01-01

    Most complex systems are best represented in the form of a hierarchy. The Goal Tree Success Tree and Master Logic Diagram (GTST-MLD) are proven, powerful hierarchic methods for representing a complex snapshot of plant knowledge. To represent the dynamic behaviors of complex systems, fuzzy logic is applied in place of binary logic, extending the power of GTST-MLD. Such a fuzzy-logic-based hierarchy is called a Dynamic Master Logic Diagram (DMLD). This chapter compares the use of GTST-DMLD as a modeling tool for systems whose relationships are physical, binary-logical, or fuzzy-logical. This is shown by applying GTST-DMLD to the Direct Containment Heating (DCH) phenomenon at pressurized water reactors, an important safety issue being addressed by the nuclear industry. (orig.)
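    The binary-to-fuzzy extension can be sketched with min/max operators, a common choice for fuzzy conjunction and disjunction; the tiny goal tree below is a hypothetical example, not the DCH application.

    ```python
    # Minimal sketch: replacing Boolean AND/OR in a GTST-style hierarchy with
    # min/max so node "success" takes degrees in [0, 1] (the DMLD idea).
    def fuzzy_and(values):
        return min(values)  # fuzzy conjunction

    def fuzzy_or(values):
        return max(values)  # fuzzy disjunction

    # Hypothetical goal: subsystem A must perform AND pump B1 OR pump B2.
    a, b1, b2 = 0.9, 0.4, 0.7  # degrees of performance in [0, 1]
    print(fuzzy_and([a, fuzzy_or([b1, b2])]))  # 0.7, limited by the pumps

    # Binary logic is recovered as the special case of crisp 0/1 inputs.
    print(fuzzy_and([1, fuzzy_or([0, 1])]))    # 1
    ```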

  16. Comparison of new generation low-complexity flood inundation mapping tools with a hydrodynamic model

    Science.gov (United States)

    Afshari, Shahab; Tavakoly, Ahmad A.; Rajib, Mohammad Adnan; Zheng, Xing; Follum, Michael L.; Omranian, Ehsan; Fekete, Balázs M.

    2018-01-01

    The objective of this study is to compare two new-generation low-complexity tools, AutoRoute and Height Above the Nearest Drainage (HAND), with a two-dimensional hydrodynamic model (Hydrologic Engineering Center-River Analysis System, HEC-RAS 2D). The assessment was conducted on two hydrologically different and geographically distant test cases in the United States: the 16,900 km2 Cedar River (CR) watershed in Iowa and a 62 km2 domain along the Black Warrior River (BWR) in Alabama. For BWR, twelve different configurations were set up for each of the models, including four different terrain setups (e.g. with and without channel bathymetry and a levee) and three flooding conditions representing moderate to extreme hazards at 10-, 100-, and 500-year return periods. For the CR watershed, the models were compared with a simplistic terrain setup (without bathymetry or any form of hydraulic control) and one flooding condition (100-year return period). Input streamflow forcing data representing these hypothetical events were constructed by applying a new fusion approach to National Water Model outputs. Simulated inundation extent and depth from AutoRoute, HAND, and HEC-RAS 2D were compared with one another and with the corresponding FEMA reference estimates. Irrespective of the configurations, the low-complexity models were able to produce inundation extents similar to HEC-RAS 2D, with AutoRoute showing slightly higher accuracy than the HAND model. Among the four terrain setups, the one including both the levee and channel bathymetry showed the lowest fitness score for spatial agreement of inundation extent, owing to the weaker physical representation of the low-complexity models compared to a hydrodynamic model. For inundation depth, the low-complexity models tended to overestimate, especially in the deeper segments of the channel. Given such reasonably good predictive skill, low-complexity flood models can be considered a suitable alternative for fast
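    The core of the HAND approach is simple to sketch: a cell is inundated when its height above the nearest drainage falls below the water stage, and depth is the difference. The raster and stage below are hypothetical.

    ```python
    # Minimal sketch of HAND-based inundation mapping: flood where height
    # above nearest drainage <= stage. Hypothetical 3x3 raster, in metres.
    hand = [
        [0.2, 0.8, 2.5],
        [0.0, 1.1, 3.0],
        [0.4, 1.9, 4.2],
    ]
    stage = 1.0  # water level above the drainage network (m)

    extent = [[h <= stage for h in row] for row in hand]
    depth = [[max(stage - h, 0.0) for h in row] for row in hand]

    print(extent)  # inundation extent (True = wet)
    print(depth)   # inundation depth (m)
    ```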

  17. Understanding Coupled Earth-Surface Processes through Experiments and Models (Invited)

    Science.gov (United States)

    Overeem, I.; Kim, W.

    2013-12-01

    Traditionally, both numerical models and experiments have been purposefully designed to 'isolate' singular components or certain processes of a larger mountain-to-deep-ocean interconnected source-to-sink (S2S) transport system. Controlling factors driven by processes outside the domain of immediate interest were treated and simplified as input or as boundary conditions. Increasingly, earth-surface process scientists appreciate feedbacks and explore them with more dynamically coupled approaches to their experiments and models. Here, we discuss key concepts and recent advances made in coupled modeling and experimental setups. In addition, we emphasize challenges and new frontiers for coupled experiments. Experiments have highlighted the important role of self-organization; river and delta systems do not always need to be forced by external processes to change or to develop characteristic morphologies. Similarly, modeling has shown, for example, that intricate networks in tidal deltas are stable because of the interplay between river avulsions and tidal current scouring, with both processes being important to develop and maintain the dendritic networks. Both models and experiments have demonstrated that seemingly stable systems can be perturbed slightly and show dramatic responses. Source-to-sink models were developed for both the Fly River system in Papua New Guinea and the Waipaoa River in New Zealand. These models pointed to the importance of upstream-downstream effects and reinforced our view of the S2S system as a signal transfer and dampening conveyor belt. Coupled modeling showed that deforestation had extreme effects on sediment fluxes draining from the catchment of the Waipaoa River in New Zealand, and that this increase in sediment production rapidly shifted the locus of offshore deposition. The challenge in designing coupled models and experiments is both technological and intellectual. Our community advances to make numerical model coupling more

  18. Recording information on protein complexes in an information management system.

    Science.gov (United States)

    Savitsky, Marc; Diprose, Jonathan M; Morris, Chris; Griffiths, Susanne L; Daniel, Edward; Lin, Bill; Daenke, Susan; Bishop, Benjamin; Siebold, Christian; Wilson, Keith S; Blake, Richard; Stuart, David I; Esnouf, Robert M

    2011-08-01

    The Protein Information Management System (PiMS) is a laboratory information management system (LIMS) designed for use with the production of proteins in a research environment. The software is distributed under the CCP4 licence, and so is available free of charge to academic laboratories. As in most LIMS, the underlying PiMS data model originally had no support for protein-protein complexes. To support the SPINE2-Complexes project, the developers have extended PiMS to meet these requirements. The modifications to PiMS, described here, include data model changes, additional protocols, some user interface changes and functionality to detect when an experiment may have formed a complex. Example data are shown for the production of a crystal of a protein complex. Integration with the SPINE2-Complexes Target Tracker application is also described. Copyright © 2011 Elsevier Inc. All rights reserved.

  19. Data from the Hot Serial Cereal Experiment for modeling wheat response to temperature: field experiments and AgMIP-Wheat multi-model simulations

    NARCIS (Netherlands)

    Martre, Pierre; Kimball, Bruce A.; Ottman, Michael J.; Wall, Gerard W.; White, Jeffrey W.; Asseng, Senthold; Ewert, Frank; Cammarano, Davide; Maiorano, Andrea; Aggarwal, Pramod K.; Supit, I.; Wolf, J.

    2018-01-01

    The dataset reported here includes the part of the Hot Serial Cereal (HSC) experiment recently used in the AgMIP-Wheat project to analyze the uncertainty of 30 wheat models and quantify their response to temperature. The HSC experiment was conducted in an open field in a semiarid

  20. Chaos from simple models to complex systems

    CERN Document Server

    Cencini, Massimo; Vulpiani, Angelo

    2010-01-01

    Chaos: from simple models to complex systems aims to guide science and engineering students through chaos and nonlinear dynamics, from classical examples to the most recent fields of research. The first part, intended for undergraduate and graduate students, is a gentle and self-contained introduction to the concepts and main tools for the characterization of deterministic chaotic systems, with emphasis on statistical approaches. The second part can be used as a reference by researchers, as it focuses on more advanced topics, including the characterization of chaos with tools of information theory