Dynamics of vortices in complex wakes: Modeling, analysis, and experiments
Basu, Saikat
The thesis develops singly periodic mathematical models for the complex laminar wakes that form behind vortex-shedding bluff bodies. These wake structures exhibit a variety of patterns as the bodies oscillate or are in close proximity to one another. The most well-known formation comprises two counter-rotating vortices in each shedding cycle and is popularly known as the von Kármán vortex street. Of the more complex configurations, this thesis investigates, as a specific example, one of the most commonly occurring wake arrangements, which consists of two pairs of vortices in each shedding period. The paired vortices are, in general, counter-rotating and belong to a more general definition of the 2P mode, which involves the periodic release of four vortices into the flow. The 2P arrangement can primarily be sub-classed into two types: one with a symmetric orientation of the two vortex pairs about the streamwise direction in a periodic domain, and the other in which the two vortex pairs per period are placed in a staggered geometry about the wake centerline. The thesis explores the governing dynamics of such wakes and characterizes the corresponding relative vortex motion. For both the symmetric and the staggered four-vortex periodic arrangements, the thesis develops two-dimensional potential flow models (consisting of an integrable Hamiltonian system of point vortices) that consider spatially periodic arrays of four vortices with strengths ±Γ1 and ±Γ2. Vortex formations observed in the experiments inspire the assumed spatial symmetry. The models demonstrate a number of dynamic modes that are classified using a bifurcation analysis of the phase space topology, consisting of level curves of the Hamiltonian. Despite the vortex strengths in each pair being unequal in magnitude, some initial conditions lead to relative equilibrium, in which the vortex configuration moves with invariant size and shape. The scaled comparisons of the
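The building block of such singly periodic point-vortex models is the classical complex velocity induced by an infinite row of identical point vortices with spacing L (a standard potential-flow result, not the thesis's four-vortex model itself); a minimal sketch, with all numerical values chosen only for illustration:

```python
import cmath
import math

def periodic_row_velocity(z, z0, gamma, L):
    """Complex velocity u - i*v induced at z by a singly periodic row of
    point vortices of strength gamma at z0 + n*L (n integer), from the
    complex potential w(z) = -i*gamma/(2*pi) * ln sin(pi*(z - z0)/L)."""
    return -1j * gamma / (2 * L) / cmath.tan(math.pi * (z - z0) / L)

# Far above a row of positive (counterclockwise) vortices, the induced flow
# tends to a uniform stream of speed gamma/(2*L) in the -x direction.
w = periodic_row_velocity(0.5 + 10j, 0.0, 1.0, 1.0)  # w ≈ -0.5 + 0j
```

A four-vortex periodic model superposes four such rows (strengths ±Γ1 and ±Γ2) and advects each vortex with the velocity induced by all the other rows.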
Synchronization Experiments With A Global Coupled Model of Intermediate Complexity
Selten, Frank; Hiemstra, Paul; Shen, Mao-Lin
2013-04-01
In the super modeling approach, an ensemble of imperfect models is connected through nudging terms that nudge the solution of each model toward the solutions of all other models in the ensemble. The goal is to obtain, through a proper choice of connection strengths, a synchronized state that closely tracks the trajectory of the true system. For the super modeling approach to be successful, the connections should be dense and strong enough for synchronization to occur. In this study we analyze the behavior of an ensemble of connected global atmosphere-ocean models of intermediate complexity. All atmosphere models are connected to the same ocean model through the surface fluxes of heat, water and momentum; the ocean is integrated using weighted-averaged surface fluxes. In particular, we analyze the degree of synchronization between the atmosphere models and the characteristics of the ensemble mean solution. The results are interpreted using a low-order atmosphere-ocean toy model.
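As a toy illustration of the nudging idea (Lorenz-63 standing in for the intermediate-complexity models; the coupling strength and parameter mismatch are arbitrary illustrative choices), two imperfect models with different rho values can be mutually nudged until their trajectories synchronize:

```python
def lorenz(state, rho):
    # classic Lorenz-63 tendencies with sigma = 10, beta = 8/3
    x, y, z = state
    return (10.0 * (y - x), x * (rho - z) - y, x * y - (8.0 / 3.0) * z)

def run(coupling, steps=20000, dt=0.001):
    """Integrate two 'imperfect' Lorenz models (different rho) whose
    tendencies are mutually nudged toward each other, mimicking the
    super-model connection terms; returns their final separation."""
    a, b = (1.0, 1.0, 1.0), (-5.0, -5.0, 20.0)
    for _ in range(steps):
        da, db = lorenz(a, 28.0), lorenz(b, 29.0)
        # forward Euler step with the nudging term added to each tendency
        a = tuple(s + dt * (d + coupling * (t - s)) for s, d, t in zip(a, da, b))
        b = tuple(s + dt * (d + coupling * (t - s)) for s, d, t in zip(b, db, a))
    return max(abs(p - q) for p, q in zip(a, b))
```

With the nudging switched on (e.g., `run(20.0)`), the two trajectories stay close despite the parameter mismatch; with `run(0.0)` they diverge chaotically.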
Complexity effects in choice experiments-based models
Dellaert, B.G.C.; Donkers, B.; van Soest, A.H.O.
2012-01-01
Many firms rely on choice experiment–based models to evaluate future marketing actions under various market conditions. This research investigates choice complexity (i.e., number of alternatives, number of attributes, and utility similarity between the most attractive alternatives) and individual
DEFF Research Database (Denmark)
Eby, M.; Weaver, A. J.; Alexander, K.
2013-01-01
Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE...... and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20...
Surface complexation modelling: Experiments on the sorption of nickel on quartz
International Nuclear Information System (INIS)
Puukko, E.; Hakanen, M.
1995-10-01
Assessing the safety of a final repository for nuclear wastes requires knowledge of the way in which released radionuclides are retarded in the geosphere. The aim of the work was to acquire knowledge of empirical methods by repeating the experiments on the sorption of nickel on quartz described in reports published by the British Geological Survey (BGS). The experimental results were modelled with computer models at the Technical Research Centre of Finland (VTT Chemical Technology). The results showed that experimental knowledge of the sorption of Ni on quartz was achieved by repeating the experiments of the BGS. Experiments made with the two quartz types, Min-U-Sil 5 (MUS) and Nilsiae (NLS), showed a difference in the sorption of Ni in the low ionic strength solution (0.001 M NaNO3). The sorption of Ni on MUS was higher than predicted by the Surface Complexation Model (SCM). This phenomenon was also observed by the BGS, and may be due to the different amounts of impurities in the MUS and in the NLS. In other respects, the results of the sorption experiments fitted quite well with those predicted by the SCM. (8 refs., 8 figs., 11 tabs.)
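The mass-action core of such a surface complexation calculation can be sketched for a single hypothetical sorption reaction with made-up constants (a non-electrostatic simplification, not the BGS/VTT parameterization):

```python
def ni_sorbed_fraction(pH, log_k=-4.0, site_conc=1e-3):
    """Fraction of trace Ni(2+) sorbed for the single hypothetical reaction
    >SOH + Ni2+ <-> >SONi+ + H+ with equilibrium constant 10**log_k,
    assuming surface sites in excess and no electrostatic correction."""
    h = 10.0 ** (-pH)
    ratio = (10.0 ** log_k) * site_conc / h  # [>SONi+] / [Ni2+]
    return ratio / (1.0 + ratio)

# sorption edge: uptake rises steeply with pH, as in the batch experiments
edge = [ni_sorbed_fraction(pH) for pH in (4.0, 6.0, 7.0, 8.0, 10.0)]
```

Full SCM codes add electrostatic correction terms and solve the coupled site and charge balances numerically; this sketch only shows why sorption appears as a pH "edge".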
Zhang, Y.; Li, S.
2014-12-01
Geologic carbon sequestration (GCS) is proposed for the Nugget Sandstone in Moxa Arch, a regional saline aquifer with a large storage potential. For a proposed storage site, this study builds a suite of increasingly complex conceptual "geologic" model families, using subsets of the site characterization data: a homogeneous model family, a stationary petrophysical model family, a stationary facies model family with sub-facies petrophysical variability, and a non-stationary facies model family (with sub-facies variability) conditioned to soft data. These families, representing alternative conceptual site models built with increasing data, were simulated with the same CO2 injection test (50 years at 1/10 Mt per year), followed by 2950 years of monitoring. Using Design of Experiments, an efficient sensitivity analysis (SA) is conducted for all families, systematically varying uncertain input parameters. Results are compared among the families to identify parameters that have a first-order impact on predicting the CO2 storage ratio (SR) at both the end of injection and the end of monitoring. At this site, geologic modeling factors do not significantly influence the short-term prediction of the storage ratio, although they become important over monitoring time, but only for those families where such factors are accounted for. Based on the SA, a response surface analysis is conducted to generate prediction envelopes of the storage ratio, which are compared among the families at both times. Results suggest a large uncertainty in the predicted storage ratio given the uncertainties in model parameters and modeling choices: SR varies from 5-60% (end of injection) to 18-100% (end of monitoring), although its variation among the model families is relatively minor. Moreover, long-term leakage risk is considered small at the proposed site. In the lowest-SR scenarios, all families predict gravity-stable supercritical CO2 migrating toward the bottom of the aquifer. In the highest
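The main-effect computation behind such a design-of-experiments sensitivity analysis can be sketched for a two-level full factorial; the response function below is a made-up stand-in for the reservoir simulator, with hypothetical factor names:

```python
from itertools import product

def main_effects(names, response):
    """Two-level full-factorial design: the main effect of each factor is
    the mean response at its +1 level minus the mean at its -1 level."""
    runs = [dict(zip(names, levels)) for levels in product((-1, 1), repeat=len(names))]
    effects = {}
    for name in names:
        hi = [response(r) for r in runs if r[name] == 1]
        lo = [response(r) for r in runs if r[name] == -1]
        effects[name] = sum(hi) / len(hi) - sum(lo) / len(lo)
    return effects

# hypothetical storage-ratio response: permeability dominates, porosity is
# minor, with a small interaction term
def storage_ratio(r):
    return 50.0 + 20.0 * r["perm"] + 2.0 * r["poro"] + 0.5 * r["perm"] * r["poro"]

effects = main_effects(["perm", "poro"], storage_ratio)  # perm: 40.0, poro: 4.0
```

Ranking the factors by the magnitude of these effects identifies the first-order parameters; a response surface is then fitted over only those.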
Complexity and formative experiences
Directory of Open Access Journals (Sweden)
Roque Strieder
2017-12-01
Contemporary times are characterized by instability and diversity, calling into question the certainties and truths proposed by modernity. We recognize that the reality of things and phenomena takes effect as a set of events, interactions, retroactions and chance occurrences. This different frame extends the need to revise the epistemological foundations that sustain educational practices and give them sense. Complex thinking is an alternative option, acting as a counterpoint to classical science with its reductionist logic and compartmentalization of knowledge, and responding to contemporary epistemological and educational challenges. It aims to associate different areas and forms of knowledge without merging them, distinguishing without separating the several disciplines and instances of reality. Drawing on theoretical references, this study highlights the relevance of complex approaches in supporting formative experiences, as they are also able to produce complexity in reflections on educational issues. We conclude that formative possibilities grounded in complexity potentialize the resignification of the conception of the human being and the understanding of its singularity in interdependence; the understanding that pedagogical and educational activity is a constant interrogation of the possibilities of knowing knowledge and reframing learning, far beyond knowing its functions and utilitarian purposes; and, as a formative possibility, they place us on the trail of responsibility, not as something occasional, but as something present and indicative of the freedom to choose to stay or to go beyond.
A business process modeling experience in a complex information system re-engineering.
Bernonville, Stéphanie; Vantourout, Corinne; Fendeler, Geneviève; Beuscart, Régis
2013-01-01
This article aims to share a business process modeling experience in a re-engineering project of a medical records department in a 2,965-bed hospital. It presents the modeling strategy, an extract of the results and the feedback experience.
Hou, Chang-Yu; Feng, Ling; Seleznev, Nikita; Freed, Denise E
2018-04-11
In this work, we establish an effective medium model to describe the low-frequency complex dielectric (conductivity) dispersion of dilute clay suspensions. We use previously obtained low-frequency polarization coefficients for a charged oblate spheroidal particle immersed in an electrolyte as the building block for the Maxwell Garnett mixing formula to model the dilute clay suspension. The complex conductivity phase dispersion exhibits a near-resonance peak when the clay grains have a narrow size distribution. The peak frequency is associated with the size distribution as well as the shape of clay grains and is often referred to as the characteristic frequency. In contrast, if the size of the clay grains has a broad distribution, the phase peak is broadened and can disappear into the background of the canonical phase response of the brine. To benchmark our model, the low-frequency dispersion of the complex conductivity of dilute clay suspensions is measured using a four-point impedance measurement, which can be reliably calibrated in the frequency range between 0.1 Hz and 10 kHz. By using a minimal number of fitting parameters when reliable information is available as input for the model and carefully examining the issue of potential over-fitting, we found that our model can be used to fit the measured dispersion of the complex conductivity with reasonable parameters. The good match between the modeled and experimental complex conductivity dispersion allows us to argue that our simplified model captures the essential physics for describing the low-frequency dispersion of the complex conductivity of dilute clay suspensions. Copyright © 2018 Elsevier Inc. All rights reserved.
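The Maxwell Garnett mixing step can be sketched for the simplest case of spherical inclusions (the paper's model replaces the spherical polarization factor with the previously obtained coefficients for charged oblate spheroids):

```python
def maxwell_garnett(sigma_host, sigma_incl, f):
    """Maxwell Garnett effective complex conductivity for a dilute
    suspension of spherical inclusions (volume fraction f) in a host
    medium; spheres are the simplest special case of the spheroidal
    polarization coefficients used in the paper."""
    delta = (sigma_incl - sigma_host) / (sigma_incl + 2 * sigma_host)
    return sigma_host * (1 + 2 * f * delta) / (1 - f * delta)

# illustrative values: brine host with poorly conducting, polarizable grains
s_eff = maxwell_garnett(1.0 + 0.0j, 0.01 + 0.05j, 0.05)
```

The formula correctly reduces to the host conductivity at f = 0 and to the inclusion conductivity at f = 1; frequency dependence enters through the complex arguments.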
Sulz, Lauren; Gibbons, Sandra; Naylor, Patti-Jean; Wharf Higgins, Joan
2016-01-01
Background: Comprehensive School Health models offer a promising strategy to elicit changes in student health behaviours. To maximise the effect of such models, the active involvement of teachers and students in the change process is recommended. Objective: The goal of this project was to gain insight into the experiences and motivations of…
Energy Technology Data Exchange (ETDEWEB)
Karimzadeh, Lotfallah; Barthen, Robert; Gruendig, Marion; Franke, Karsten; Lippmann-Pipke, Johanna [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany). Reactive Transport; Stockmann, Madlen [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany). Surface Processes
2017-06-01
In this work, we study the mobility behavior of Cu(II) under conditions related to an alternative, neutrophile biohydrometallurgical Cu(II) leaching approach. Sorption of copper onto kaolinite influenced by glutamic acid (Glu) was investigated in the presence of 0.01 M NaClO4 by means of binary and ternary batch adsorption measurements over a pH range of 4 to 9 and surface complexation modeling.
Sorption of phosphate onto calcite; results from batch experiments and surface complexation modeling
DEFF Research Database (Denmark)
Sø, Helle Ugilt; Postma, Dieke; Jakobsen, Rasmus
2011-01-01
The adsorption of phosphate onto calcite was studied in a series of batch experiments. To avoid the precipitation of phosphate-containing minerals, the experiments were conducted using a short reaction time (3 h) and low concentrations of phosphate (⩽50 μM). Sorption of phosphate on calcite was studied in spite of a high degree of super-saturation with respect to hydroxyapatite (SI_HAP ⩽ 7.83). The amount of phosphate adsorbed varied with the solution composition; in particular, adsorption increases as the CO3^2- activity decreases (at constant pH) and as pH increases (at constant CO3^2- activity). The primary effect of ionic strength on phosphate sorption onto calcite is its influence on the activity of the different aqueous phosphate species. The experimental results were modeled satisfactorily using the constant capacitance model with >CaPO4Ca^0 and either >CaHPO4Ca^+ or >CaHPO4^- as the adsorbed surface species.
Lensink, Marc F.
2016-04-28
We present the results for CAPRI Round 30, the first joint CASP-CAPRI experiment, which brought together experts from the protein structure prediction and protein-protein docking communities. The Round comprised 25 targets from amongst those submitted for the CASP11 prediction experiment of 2014. The targets included mostly homodimers, a few homotetramers, and two heterodimers, and comprised protein chains that could readily be modeled using templates from the Protein Data Bank. On average 24 CAPRI groups and 7 CASP groups submitted docking predictions for each target, and 12 CAPRI groups per target participated in the CAPRI scoring experiment. In total more than 9500 models were assessed against the 3D structures of the corresponding target complexes. Results show that the prediction of homodimer assemblies by homology modeling techniques and docking calculations is quite successful for targets featuring large enough subunit interfaces to represent stable associations. Targets with ambiguous or inaccurate oligomeric state assignments, often featuring crystal contact-sized interfaces, represented a confounding factor. For those, a much poorer prediction performance was achieved, while nonetheless often providing helpful clues on the correct oligomeric state of the protein. The prediction performance was very poor for genuine tetrameric targets, where the inaccuracy of the homology-built subunit models and the smaller pair-wise interfaces severely limited the ability to derive the correct assembly mode. Our analysis also shows that docking procedures tend to perform better than standard homology modeling techniques and that highly accurate models of the protein components are not always required to identify their association modes with acceptable accuracy. © 2016 Wiley Periodicals, Inc.
Lensink, Marc F.; Velankar, Sameer; Kryshtafovych, Andriy; Huang, Shen-You; Schneidman-Duhovny, Dina; Sali, Andrej; Segura, Joan; Fernandez-Fuentes, Narcis; Viswanath, Shruthi; Elber, Ron; Grudinin, Sergei; Popov, Petr; Neveu, Emilie; Lee, Hasup; Baek, Minkyung; Park, Sangwoo; Heo, Lim; Rie Lee, Gyu; Seok, Chaok; Qin, Sanbo; Zhou, Huan-Xiang; Ritchie, David W.; Maigret, Bernard; Devignes, Marie-Dominique; Ghoorah, Anisah; Torchala, Mieczyslaw; Chaleil, Raphaël A.G.; Bates, Paul A.; Ben-Zeev, Efrat; Eisenstein, Miriam; Negi, Surendra S.; Weng, Zhiping; Vreven, Thom; Pierce, Brian G.; Borrman, Tyler M.; Yu, Jinchao; Ochsenbein, Françoise; Guerois, Raphaël; Vangone, Anna; Rodrigues, João P.G.L.M.; van Zundert, Gydo; Nellen, Mehdi; Xue, Li; Karaca, Ezgi; Melquiond, Adrien S.J.; Visscher, Koen; Kastritis, Panagiotis L.; Bonvin, Alexandre M.J.J.; Xu, Xianjin; Qiu, Liming; Yan, Chengfei; Li, Jilong; Ma, Zhiwei; Cheng, Jianlin; Zou, Xiaoqin; Shen, Yang; Peterson, Lenna X.; Kim, Hyung-Rae; Roy, Amit; Han, Xusi; Esquivel-Rodriguez, Juan; Kihara, Daisuke; Yu, Xiaofeng; Bruce, Neil J.; Fuller, Jonathan C.; Wade, Rebecca C.; Anishchenko, Ivan; Kundrotas, Petras J.; Vakser, Ilya A.; Imai, Kenichiro; Yamada, Kazunori; Oda, Toshiyuki; Nakamura, Tsukasa; Tomii, Kentaro; Pallara, Chiara; Romero-Durana, Miguel; Jiménez-García, Brian; Moal, Iain H.; Férnandez-Recio, Juan; Joung, Jong Young; Kim, Jong Yun; Joo, Keehyoung; Lee, Jooyoung; Kozakov, Dima; Vajda, Sandor; Mottarella, Scott; Hall, David R.; Beglov, Dmitri; Mamonov, Artem; Xia, Bing; Bohnuud, Tanggis; Del Carpio, Carlos A.; Ichiishi, Eichiro; Marze, Nicholas; Kuroda, Daisuke; Roy Burman, Shourya S.; Gray, Jeffrey J.; Chermak, Edrisse; Cavallo, Luigi; Oliva, Romina; Tovchigrechko, Andrey; Wodak, Shoshana J.
2016-01-01
We present the results for CAPRI Round 30, the first joint CASP-CAPRI experiment, which brought together experts from the protein structure prediction and protein-protein docking communities. The Round comprised 25 targets from amongst those submitted for the CASP11 prediction experiment of 2014. The targets included mostly homodimers, a few homotetramers, and two heterodimers, and comprised protein chains that could readily be modeled using templates from the Protein Data Bank. On average 24 CAPRI groups and 7 CASP groups submitted docking predictions for each target, and 12 CAPRI groups per target participated in the CAPRI scoring experiment. In total more than 9500 models were assessed against the 3D structures of the corresponding target complexes. Results show that the prediction of homodimer assemblies by homology modeling techniques and docking calculations is quite successful for targets featuring large enough subunit interfaces to represent stable associations. Targets with ambiguous or inaccurate oligomeric state assignments, often featuring crystal contact-sized interfaces, represented a confounding factor. For those, a much poorer prediction performance was achieved, while nonetheless often providing helpful clues on the correct oligomeric state of the protein. The prediction performance was very poor for genuine tetrameric targets, where the inaccuracy of the homology-built subunit models and the smaller pair-wise interfaces severely limited the ability to derive the correct assembly mode. Our analysis also shows that docking procedures tend to perform better than standard homology modeling techniques and that highly accurate models of the protein components are not always required to identify their association modes with acceptable accuracy. © 2016 Wiley Periodicals, Inc.
Moore, Laura J.; List, Jeffrey H.; Williams, S. Jeffress; Stolper, David
2010-09-01
Using a morphological-behavior model to conduct sensitivity experiments, we investigate the sea level rise response of a complex coastal environment to changes in a variety of factors. Experiments reveal that substrate composition, followed in rank order by substrate slope, sea level rise rate, and sediment supply rate, are the most important factors in determining barrier island response to sea level rise. We find that geomorphic threshold crossing, defined as a change in state (e.g., from landward migrating to drowning) that is irreversible over decadal to millennial time scales, is most likely to occur in muddy coastal systems where the combination of substrate composition, depth-dependent limitations on shoreface response rates, and substrate erodibility may prevent sand from being liberated rapidly enough, or in sufficient quantity, to maintain a subaerial barrier. Analyses indicate that factors affecting sediment availability such as low substrate sand proportions and high sediment loss rates cause a barrier to migrate landward along a trajectory having a lower slope than average barrier island slope, thereby defining an "effective" barrier island slope. Other factors being equal, such barriers will tend to be smaller and associated with a more deeply incised shoreface, thereby requiring less migration per sea level rise increment to liberate sufficient sand to maintain subaerial exposure than larger, less incised barriers. As a result, the evolution of larger/less incised barriers is more likely to be limited by shoreface erosion rates or substrate erodibility making them more prone to disintegration related to increasing sea level rise rates than smaller/more incised barriers. Thus, the small/deeply incised North Carolina barriers are likely to persist in the near term (although their long-term fate is less certain because of the low substrate slopes that will soon be encountered). In aggregate, results point to the importance of system history (e
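The notion of an "effective" barrier island slope suggests a purely geometric toy relation (not the authors' morphological-behavior model): the landward migration required per increment of sea level rise scales inversely with that slope, and a barrier that cannot liberate sand fast enough to migrate at that rate drowns. A sketch with arbitrary illustrative numbers:

```python
def barrier_response(slr_rate, effective_slope, max_migration_rate):
    """Toy classification: the barrier must migrate landward at
    slr_rate / effective_slope to keep pace with sea level rise; if that
    exceeds the rate at which the shoreface can liberate sand
    (max_migration_rate), the barrier drowns in place."""
    required = slr_rate / effective_slope
    state = "drowns" if required > max_migration_rate else "migrates"
    return state, required
```

Lower effective slopes (e.g., sand-poor, muddy substrates) raise the required migration rate, which is the geometric intuition behind the threshold-crossing behavior described above.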
Directory of Open Access Journals (Sweden)
Oleg Svatos
2013-01-01
In this paper we analyze the complexity of the time limits found especially in regulated processes of public administration. First we review the most popular process modeling languages. An example scenario based on current Czech legislation is defined and then captured in the discussed process modeling languages. The analysis shows that contemporary process modeling languages support capturing time limits only partially. This causes trouble for analysts and unnecessary complexity in the models. Given the unsatisfying results of the contemporary process modeling languages, we analyze the complexity of time limits in greater detail and outline the lifecycles of a time limit using the multiple dynamic generalizations pattern. As an alternative to the popular process modeling languages, we present the PSD process modeling language, which supports the defined time limit lifecycles natively and therefore allows keeping the models simple and easy to understand.
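The lifecycle of a time limit discussed above can be illustrated with a minimal state machine; the states and transitions here are a hypothetical simplification for illustration, not the PSD language's actual constructs:

```python
class TimeLimit:
    """Minimal lifecycle of a statutory time limit:
    running -> suspended (and back), running -> expired, any -> met."""

    def __init__(self, days):
        self.remaining = days
        self.state = "running"

    def suspend(self):  # e.g., proceedings paused pending another decision
        if self.state == "running":
            self.state = "suspended"

    def resume(self):
        if self.state == "suspended":
            self.state = "running"

    def tick(self, days=1):  # time passes; a suspended limit does not run down
        if self.state == "running":
            self.remaining -= days
            if self.remaining <= 0:
                self.state = "expired"

    def fulfil(self):  # the required act is performed in time
        if self.state in ("running", "suspended"):
            self.state = "met"
```

Capturing exactly these transitions (suspension, expiry, fulfilment) is what the paper argues popular process modeling languages support only partially.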
International Nuclear Information System (INIS)
Brown, T.W.
2010-11-01
The same complex matrix model calculates both tachyon scattering for the c=1 non-critical string at the self-dual radius and certain correlation functions of half-BPS operators in N=4 super- Yang-Mills. It is dual to another complex matrix model where the couplings of the first model are encoded in the Kontsevich-like variables of the second. The duality between the theories is mirrored by the duality of their Feynman diagrams. Analogously to the Hermitian Kontsevich- Penner model, the correlation functions of the second model can be written as sums over discrete points in subspaces of the moduli space of punctured Riemann surfaces. (orig.)
Chisci, Emiliano; de Donato, Gianmarco; Fargion, Aaron; Ventoruzzo, Giorgio; Parlani, Gianbattista; Setacci, Carlo; Ercolini, Leonardo; Michelagnoli, Stefano
2018-03-01
The objective of this study was to report the methodology and 1-year experience of a regional service model of teleconsultation for planning and treatment of complex thoracoabdominal aortic disease (TAAD). Complex TAADs without a feasible conventional surgical repair were prospectively evaluated by vascular surgeons of the same public health service (National Health System) located in a huge area of 22,994 km² with 3.7 million inhabitants and 11 tertiary hospitals. Surgeons evaluated computed tomography scans and clinical details that were placed on a web platform (Google Drive; Google, Mountain View, Calif) and shared by all surgeons. Patients gave informed consent for the teleconsultation. The surgeon who submits a case discusses it in detail and proposes a possible therapeutic strategy. The other surgeons suggest other solutions and options in terms of grafts, techniques, or access to be used. Computed tomography angiography, angiography, and clinical outcomes of cases are then presented at the following telemeetings, and a final agreement on the operative strategy is evaluated. Teleconsultation is performed using a web conference service (WebConference.com; Avaya Inc, Basking Ridge, NJ) every month. An inter-rater agreement statistic was calculated, and the κ value was interpreted according to Altman's criteria for computed tomography angiography measurements. The rate of participation was constant (mean number of surgeons, 11; range, 9-15). Twenty-four complex TAAD cases were discussed for planning and operation during the study period. The interobserver reliability recorded was moderate (κ = 0.41-0.60) to good (κ = 0.61-0.80) for measurements of proximal and distal sealing and very good (κ = 0.81-1) for detection of any target vessel angulation >60 degrees, significant calcification (circumferential), and thrombus presence (>50%). The concordance for planning and therapeutic strategy among all participants was complete in 16 cases. In
Simulation in Complex Modelling
DEFF Research Database (Denmark)
Nicholas, Paul; Ramsgaard Thomsen, Mette; Tamke, Martin
2017-01-01
This paper will discuss the role of simulation in extended architectural design modelling. As a framing paper, the aim is to present and discuss the role of integrated design simulation and feedback between design and simulation in a series of projects under the Complex Modelling framework. Complex...... performance, engage with high degrees of interdependency and allow the emergence of design agency and feedback between the multiple scales of architectural construction. This paper presents examples for integrated design simulation from a series of projects including Lace Wall, A Bridge Too Far and Inflated...... Restraint developed for the research exhibition Complex Modelling, Meldahls Smedie Gallery, Copenhagen in 2016. Where the direct project aims and outcomes have been reported elsewhere, the aim for this paper is to discuss overarching strategies for working with design integrated simulation....
Boccara, Nino
2010-01-01
Modeling Complex Systems, 2nd Edition, explores the process of modeling complex systems, providing examples from such diverse fields as ecology, epidemiology, sociology, seismology, and economics. It illustrates how models of complex systems are built and provides indispensable mathematical tools for studying their dynamics. This vital introductory text is useful for advanced undergraduate students in various scientific disciplines, and serves as an important reference book for graduate students and young researchers. This enhanced second edition includes: recent research results and bibliographic references; extra footnotes which provide biographical information on cited scientists who have made significant contributions to the field; new and improved worked-out examples to aid a student's comprehension of the content; and exercises to challenge the reader and complement the material. Nino Boccara is also the author of Essentials of Mathematica: With Applications to Mathematics and Physics (Springer, 2007).
International Nuclear Information System (INIS)
Schreckenberg, M
2004-01-01
This book by Nino Boccara presents a compilation of model systems commonly termed 'complex'. It starts with a definition of the systems under consideration and how to build up a model to describe the complex dynamics. The subsequent chapters are devoted to various categories of mean-field type models (differential and recurrence equations, chaos) and of agent-based models (cellular automata, networks and power-law distributions). Each chapter is supplemented by a number of exercises and their solutions. The table of contents looks a little arbitrary but the author took the most prominent model systems investigated over the years (and up until now there has been no unified theory covering the various aspects of complex dynamics). The model systems are explained by looking at a number of applications in various fields. The book is written as a textbook for interested students as well as serving as a comprehensive reference for experts. It is an ideal source for topics to be presented in a lecture on dynamics of complex systems. This is the first book on this 'wide' topic and I have long awaited such a book (in fact I planned to write it myself but this is much better than I could ever have written it!). Only section 6 on cellular automata is a little too limited to the author's point of view and one would have expected more about the famous Domany-Kinzel model (and more accurate citations!). In my opinion this is one of the best textbooks published during the last decade and even experts can learn a lot from it. Hopefully there will be an updated edition after, say, five years, since this field is growing so quickly. The price is too high for students but this, unfortunately, is the normal case today. Nevertheless I think it will be a great success! (book review)
Modeling complexes of modeled proteins.
Anishchenko, Ivan; Kundrotas, Petras J; Vakser, Ilya A
2017-03-01
Structural characterization of proteins is essential for understanding life processes at the molecular level. However, only a fraction of known proteins have experimentally determined structures. This fraction is even smaller for protein-protein complexes. Thus, structural modeling of protein-protein interactions (docking) primarily has to rely on modeled structures of the individual proteins, which typically are less accurate than the experimentally determined ones. Such "double" modeling is the Grand Challenge of structural reconstruction of the interactome. Yet it remains so far largely untested in a systematic way. We present a comprehensive validation of template-based and free docking on a set of 165 complexes, where each protein model has six levels of structural accuracy, from 1 to 6 Å Cα RMSD. Many template-based docking predictions fall into the acceptable quality category, according to the CAPRI criteria, even for highly inaccurate proteins (5-6 Å RMSD), although the number of such models (and, consequently, the docking success rate) drops significantly for models with RMSD > 4 Å. The results show that the existing docking methodologies can be successfully applied to protein models with a broad range of structural accuracy, and that template-based docking is much less sensitive to inaccuracies of protein models than free docking. Proteins 2017; 85:470-478. © 2016 Wiley Periodicals, Inc.
Predictive Surface Complexation Modeling
Energy Technology Data Exchange (ETDEWEB)
Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences
2016-11-29
Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.
Polystochastic Models for Complexity
Iordache, Octavian
2010-01-01
This book is devoted to complexity understanding and management, considered as the main source of efficiency and prosperity for the next decades. Divided into six chapters, the book begins with a presentation of basic concepts such as complexity, emergence and closure. The second chapter turns to methods and introduces polystochastic models, the wave equation, possibilities and entropy. The third chapter, focusing on physical and chemical systems, analyzes flow-sheet synthesis, cyclic operations of separation, drug delivery systems and entropy production. Biomimetic systems are the main objective of the fourth chapter. Case studies refer to bio-inspired calculation methods, to the role of artificial genetic codes, neural networks and neural codes for evolutionary calculus and for evolvable circuits as biomimetic devices. The fifth chapter, taking its inspiration from systems sciences and cognitive sciences, looks to engineering design, case-based reasoning methods, failure analysis, and multi-agent manufacturing...
Complex terrain experiments in the New European Wind Atlas
DEFF Research Database (Denmark)
Mann, Jakob; Angelou, Nikolas; Arnqvist, Johan
2017-01-01
The New European Wind Atlas project will create a freely accessible wind atlas covering Europe and Turkey, develop the model chain to create the atlas and perform a series of experiments on flow in many different kinds of complex terrain to validate the models. This paper describes the experiment...
Lensink, Marc F.; Velankar, Sameer; Kryshtafovych, Andriy; Huang, Shen You; Schneidman-Duhovny, Dina; Sali, Andrej; Segura, Joan; Fernandez-Fuentes, Narcis; Viswanath, Shruthi; Elber, Ron; Grudinin, Sergei; Popov, Petr; Neveu, Emilie; Lee, Hasup; Baek, Minkyung; Park, Sangwoo; Heo, Lim; Rie Lee, Gyu; Seok, Chaok; Qin, Sanbo; Zhou, Huan Xiang; Ritchie, David W.; Maigret, Bernard; Devignes, Marie Dominique; Ghoorah, Anisah; Torchala, Mieczyslaw; Chaleil, Raphaël A G; Bates, Paul A.; Ben-Zeev, Efrat; Eisenstein, Miriam; Negi, Surendra S.; Weng, Zhiping; Vreven, Thom; Pierce, Brian G.; Borrman, Tyler M.; Yu, Jinchao; Ochsenbein, Françoise; Guerois, Raphaël; Vangone, Anna; Garcia Lopes Maia Rodrigues, João; van Zundert, Gydo; Nellen, Mehdi; Xue, Li; Karaca, Ezgi; Melquiond, Adrien S J; Visscher, Koen; Kastritis, Panagiotis L.; Bonvin, Alexandre M J J; Xu, Xianjin; Qiu, Liming; Yan, Chengfei; Li, Jilong; Ma, Zhiwei; Cheng, Jianlin; Zou, Xiaoqin; Shen, Yang; Peterson, Lenna X.; Kim, Hyung Rae; Roy, Amit; Han, Xusi; Esquivel-Rodriguez, Juan; Kihara, Daisuke; Yu, Xiaofeng; Bruce, Neil J.; Fuller, Jonathan C.; Wade, Rebecca C.; Anishchenko, Ivan; Kundrotas, Petras J.; Vakser, Ilya A.; Imai, Kenichiro; Yamada, Kazunori; Oda, Toshiyuki; Nakamura, Tsukasa; Tomii, Kentaro; Pallara, Chiara; Romero-Durana, Miguel; Jiménez-García, Brian; Moal, Iain H.; Férnandez-Recio, Juan; Joung, Jong Young; Kim, Jong Yun; Joo, Keehyoung; Lee, Jooyoung; Kozakov, Dima; Vajda, Sandor; Mottarella, Scott; Hall, David R.; Beglov, Dmitri; Mamonov, Artem; Xia, Bing; Bohnuud, Tanggis; Del Carpio, Carlos A.; Ichiishi, Eichiro; Marze, Nicholas; Kuroda, Daisuke; Roy Burman, Shourya S.; Gray, Jeffrey J.; Chermak, Edrisse; Cavallo, Luigi; Oliva, Romina; Tovchigrechko, Andrey; Wodak, Shoshana J.
2016-01-01
We present the results for CAPRI Round 30, the first joint CASP-CAPRI experiment, which brought together experts from the protein structure prediction and protein-protein docking communities. The Round comprised 25 targets from amongst those submitted for the CASP11 prediction experiment of 2014.
The task complexity experiment 2003/2004
International Nuclear Information System (INIS)
Laumann, Karin; Braarud, Per Oeivind; Svengren, Haakan
2005-08-01
The purpose of this experiment was to explore how additional tasks added to base case scenarios affected the operators' performance of the main tasks. In different scenario variants, these additional tasks were intended to cause high time pressure, high information load, and high masking. The experiment was run in the Halden Man-Machine Laboratory's BWR simulator. Seven crews participated, each for one week, with three operators in each crew. Five main types of scenarios and 20 scenario variants were run. The data from the experiment were analysed by completion time for important actions and by in-depth qualitative analyses of the crews' communications. The results showed that high time pressure degraded some crews' performance in the scenarios. When a crew had problems in solving a task for which the time pressure was high, they had even more problems in solving other important tasks. High information load did not affect the operators' performance much, and in general the crews were very good at selecting the most important tasks in the scenarios. The scenarios that included both high time pressure and high information load resulted in poorer crew performance than the scenarios that included only high time pressure. The total amount of tasks to do and information load to attend to seemed to affect the crews' performance. To solve the scenarios with high time pressure well, it was important to have good communication and good allocation of tasks within the crew. Furthermore, the results showed that scenarios with an added complex, masked task created problems for some crews when solving a relatively simple main task. Overall, the results confirmed that complicating but secondary tasks that are not normally taken into account when modelling the primary tasks in a PRA scenario can adversely affect the performance of the main tasks modelled in the PRA scenario. (Author)
DEFF Research Database (Denmark)
Sø, Helle Ugilt
different calcite-equilibrated solutions that varied in pH, PCO2, ionic strength and activity of Ca^2+, CO3^2- and HCO3^-. To avoid the precipitation of phosphate or arsenic-containing minerals the experiments were conducted using a short reaction time (generally 3 h) and a low concentration of phosphate... adsorption affinity for calcite is greater as compared to arsenate and the phosphate sorption isotherms are more strongly curved. However, the amount of both arsenate and phosphate adsorbed varied with the solution composition in the same manner. In particular, adsorption increased as the CO3^2- activity decreased (at constant pH) and as pH increased (at constant CO3^2- activity). The dependency on the carbonate activity indicates competition for sorption sites between carbonate and arsenate/phosphate, whereas the pH dependency is likely a response to changes in arsenate and phosphate speciation...
Appropriate complexity landscape modeling
Larsen, Laurel G.; Eppinga, Maarten B.; Passalacqua, Paola; Getz, Wayne M.; Rose, Kenneth A.; Liang, Man
Advances in computing technology, new and ongoing restoration initiatives, concerns about climate change's effects, and the increasing interdisciplinarity of research have encouraged the development of landscape-scale mechanistic models of coupled ecological-geophysical systems. However,
Complexity in Climate Change Manipulation Experiments
DEFF Research Database (Denmark)
Kreyling, Juergen; Beier, Claus
2014-01-01
Climate change goes beyond gradual changes in mean conditions. It involves increased variability in climatic drivers and increased frequency and intensity of extreme events. Climate manipulation experiments are one major tool to explore the ecological impacts of climate change. Until now, precipitation experiments have dealt with temporal variability or extreme events, such as drought, resulting in a multitude of approaches and scenarios with limited comparability among studies. Temperature manipulations have mainly been focused only on warming, resulting in better comparability among studies... variability in temperature are ecologically important. Embracing complexity in future climate change experiments in general is therefore crucial.
DEFF Research Database (Denmark)
Jessen, Søren; Postma, Dieke; Larsen, Flemming
2012-01-01
Three surface complexation models (SCMs) developed for, respectively, ferrihydrite, goethite and sorption data for a Pleistocene oxidized aquifer sediment from Bangladesh were used to explore the effect of multicomponent adsorption processes on As mobility in a reduced Holocene floodplain aquifer... As(III) while PO4^3- and Fe(II) form the predominant surface species. The SCM for Pleistocene aquifer sediment resembles most the goethite SCM but shows more Si sorption. Compiled As(III) adsorption data for Holocene sediment was also well described by the SCM determined for Pleistocene aquifer sediment, suggesting a comparable As(III) affinity of Holocene and Pleistocene aquifer sediments. A forced gradient field experiment was conducted in a bank aquifer adjacent to a tributary channel to the Red River, and the passage in the aquifer of mixed groundwater containing up to 74% channel water was observed...
Energy Technology Data Exchange (ETDEWEB)
Leishear, Robert A.; Lee, Si Y.; Poirier, Michael R.; Steeper, Timothy J.; Ervin, Robert C.; Giddings, Billy J.; Stefanko, David B.; Harp, Keith D.; Fowley, Mark D.; Van Pelt, William B.
2012-10-07
Computational fluid dynamics (CFD) is recognized as a powerful engineering tool. That is, CFD has advanced over the years to the point where it can now give us deep insight into the analysis of very complex processes. There is a danger, though, that an engineer can place too much confidence in a simulation. If a user is not careful, it is easy to believe that if you plug in the numbers, the answer comes out, and you are done. This assumption can lead to significant errors. As we discovered in the course of a study on behalf of the Department of Energy's Savannah River Site in South Carolina, CFD models fail to capture some of the large variations inherent in complex processes. These variations, or scatter, in experimental data emerge from physical tests and are inadequately captured or expressed by calculated mean values for a process. This anomaly between experiment and theory can lead to serious errors in engineering analysis and design unless a correction factor, or safety factor, is experimentally validated. For this study, blending times for the mixing of salt solutions in large storage tanks were the process of concern under investigation. This study focused on the blending processes needed to mix salt solutions to ensure homogeneity within waste tanks, where homogeneity is required to control radioactivity levels during subsequent processing. Two of the requirements for this task were to determine the minimum number of submerged, centrifugal pumps required to blend the salt mixtures in a full-scale tank in half a day or less, and to recommend reasonable blending times to achieve nearly homogeneous salt mixtures. A full-scale, low-flow pump with a total discharge flow rate of 500 to 800 gpm was recommended with two opposing 2.27-inch diameter nozzles. To make this recommendation, both experimental and CFD modeling were performed. Lab researchers found that, although CFD provided good estimates of an average blending time, experimental blending times varied
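The report's central point, that CFD mean values must be corrected by an experimentally validated safety factor before use in design, can be sketched as a small calculation. This is an illustrative assumption of how such a factor might be formed (upper experimental quantile over CFD mean); the numbers and the 95th-percentile choice are invented, not the Savannah River procedure.

```python
import math

def percentile(xs, q):
    """Conservative (ceiling-rank) percentile of a sample, 0 <= q <= 1."""
    ys = sorted(xs)
    idx = min(len(ys) - 1, math.ceil(q * (len(ys) - 1)))
    return ys[idx]

def blending_safety_factor(measured_times, cfd_mean_time, q=0.95):
    """Ratio of an upper experimental quantile to the CFD mean blending
    time; multiplying CFD predictions by this factor gives a design
    estimate that covers the observed scatter."""
    return percentile(measured_times, q) / cfd_mean_time

if __name__ == "__main__":
    # Hypothetical scaled-tank blending times (minutes) scattered around
    # a CFD-predicted mean of 5.0 minutes.
    times = [3.9, 4.4, 4.8, 5.1, 5.3, 5.9, 6.4, 7.2, 8.0, 9.5]
    print(round(blending_safety_factor(times, 5.0), 2))
```

The ceiling-rank percentile deliberately errs on the conservative side for small samples, which matches the spirit of applying a safety factor at all.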
Epidemic modeling in complex realities.
Colizza, Vittoria; Barthélemy, Marc; Barrat, Alain; Vespignani, Alessandro
2007-04-01
In our global world, the increasing complexity of social relations and transport infrastructures are key factors in the spread of epidemics. In recent years, the increasing availability of computer power has made it possible both to obtain reliable data quantifying the complexity of the networks on which epidemics may propagate and to envision computational tools able to tackle the analysis of such propagation phenomena. These advances have highlighted the limits of homogeneous assumptions and simple spatial diffusion approaches, and stimulated the inclusion of complex features and heterogeneities relevant in the description of epidemic diffusion. In this paper, we review recent progress that integrates complex systems and networks analysis with epidemic modelling, and focus on the impact of the various complex features of real systems on the dynamics of epidemic spreading.
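The network effects such reviews discuss can be illustrated with a minimal discrete-time SIR simulation on an arbitrary contact network. This is a generic textbook sketch, not the authors' model; the network constructions and parameter values in the demo are invented for illustration.

```python
import random

def sir_on_network(adj, beta, gamma, seed_node, rng):
    """Discrete-time SIR on a contact network given as an adjacency list.

    States: 0 = susceptible, 1 = infected, 2 = recovered.
    Each step, every infected node transmits to each susceptible
    neighbour with probability beta, then recovers with probability
    gamma (gamma must be > 0 for the loop to terminate).
    Returns the final epidemic size (total ever infected)."""
    state = [0] * len(adj)
    state[seed_node] = 1
    total_infected = 1
    while any(s == 1 for s in state):
        new_state = state[:]
        for node, s in enumerate(state):
            if s != 1:
                continue
            for nb in adj[node]:
                if state[nb] == 0 and new_state[nb] == 0 and rng.random() < beta:
                    new_state[nb] = 1
                    total_infected += 1
            if rng.random() < gamma:
                new_state[node] = 2
        state = new_state
    return total_infected

if __name__ == "__main__":
    n = 200
    rng = random.Random(42)
    # Homogeneous contacts: a ring where every node has exactly 2 neighbours.
    ring = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
    # Heterogeneous contacts with a similar edge count: one hub linked to
    # all others, a crude stand-in for a heavy-tailed degree distribution
    # (seeding at the hub makes the heterogeneity maximally visible).
    star = [[j for j in range(1, n)] if i == 0 else [0] for i in range(n)]
    beta, gamma = 0.3, 0.5
    print("ring epidemic size:", sir_on_network(ring, beta, gamma, 0, rng))
    print("star epidemic size:", sir_on_network(star, beta, gamma, 0, rng))
```

Running the demo typically shows the hub-dominated network producing a far larger outbreak at the same transmission rate, which is the qualitative failure of homogeneous-mixing assumptions that the review describes.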
Adolescents' experience of complex persistent pain.
Sørensen, Kari; Christiansen, Bjørg
2017-04-01
Persistent (chronic) pain is a common phenomenon in adolescents. When young people are referred to a pain clinic, they usually have amplified pain signals, with pain syndromes of unconfirmed etiology, such as fibromyalgia and complex regional pain syndrome (CRPS). Pain is complex and seems to be related to a combination of illness, injury, psychological distress, and environmental factors. These young people are found to have higher levels of distress, anxiety, sleep disturbance, and lower mood than their peers and may be in danger of entering adulthood with mental and physical problems. In order to understand the complexity of persistent pain in adolescents, there seems to be a need for further qualitative research into their lived experiences. The aim of this study was to explore adolescents' experiences of complex persistent pain and its impact on everyday life. The study has an exploratory design with individual in-depth interviews with six youths aged 12-19, recruited from a pain clinic at a main referral hospital in Norway. A narrative approach allowed the informants to give voice to their experiences concerning complex persistent pain. A hermeneutic analysis was used, where the research question was the basis for a reflective interpretation. Three main themes were identified: (1) a life with pain and unpleasant bodily expressions; (2) an altered emotional wellbeing; and (3) the struggle to keep up with everyday life. The pain was experienced as extremely strong, emerging from a minor injury or without any obvious causation, and not always being recognised by healthcare providers. The pain intensity increased as the suffering got worse, and the sensation was hard to describe with words. Parts of their body could change in appearance, and some described having pain-attacks or fainting. The feeling of anxiety was strongly connected to the pain. Despair and uncertainty contributed to physical disability, major sleep problems, school absence, and withdrawal from
Computational models of complex systems
Dabbaghian, Vahid
2014-01-01
Computational and mathematical models provide us with the opportunities to investigate the complexities of real world problems. They allow us to apply our best analytical methods to define problems in a clearly mathematical manner and exhaustively test our solutions before committing expensive resources. This is made possible by assuming parameter(s) in a bounded environment, allowing for controllable experimentation, not always possible in live scenarios. For example, simulation of computational models allows the testing of theories in a manner that is both fundamentally deductive and experimental in nature. The main ingredients for such research ideas come from multiple disciplines and the importance of interdisciplinary research is well recognized by the scientific community. This book provides a window to the novel endeavours of the research communities to present their works by highlighting the value of computational modelling as a research tool when investigating complex systems. We hope that the reader...
Barrier experiment: Shock initiation under complex loading
Energy Technology Data Exchange (ETDEWEB)
Menikoff, Ralph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-01-12
The barrier experiments are a variant of the gap test; a detonation wave in a donor HE impacts a barrier and drives a shock wave into an acceptor HE. The question we ask is: what is the trade-off between the barrier material and the threshold barrier thickness that prevents the acceptor from detonating? This can be viewed from the perspective of shock initiation of the acceptor subject to a complex pressure drive condition. Here we consider key factors which affect whether or not the acceptor undergoes a shock-to-detonation transition. These include the following: shock impedance matches for the donor detonation wave into the barrier and then the barrier shock into the acceptor, the pressure gradient behind the donor detonation wave, and the curvature of the detonation front in the donor. Numerical simulations are used to illustrate how these factors affect the reaction in the acceptor.
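As a rough orientation for the impedance-match factor mentioned above, linear acoustics gives the pressure transmitted across an interface between barrier (material 1) and acceptor (material 2). This is only a first estimate under an assumed small-amplitude approximation; detonation-driven shocks are strongly nonlinear and the actual simulations resolve the full match:

```latex
% Acoustic estimate of transmitted pressure at a material interface,
% with impedances Z_k = \rho_k c_k (density times sound speed):
P_t \approx \frac{2 Z_2}{Z_1 + Z_2}\, P_i , \qquad Z_k = \rho_k c_k .
```

In this approximation a high-impedance barrier passes a reduced pressure into a lower-impedance acceptor, which is one side of the material-versus-thickness trade-off the experiments probe.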
Balemans, C.; Hulsen, M.A.; Tervoort, T.A.; Anderson, P.D.
2017-01-01
The original version of this article unfortunately contained mistakes. Theo A. Tervoort was not listed among the authors. The correct information is given above. In Balemans et al. (2016), an axisymmetric finite element model is presented to study the behaviour of complex interfaces in pendant drop
Jessen, Søren; Postma, Dieke; Larsen, Flemming; Nhan, Pham Quy; Hoa, Le Quynh; Trang, Pham Thi Kim; Long, Tran Vu; Viet, Pham Hung; Jakobsen, Rasmus
2012-12-01
Three surface complexation models (SCMs) developed for, respectively, ferrihydrite, goethite and sorption data for a Pleistocene oxidized aquifer sediment from Bangladesh were used to explore the effect of multicomponent adsorption processes on As mobility in a reduced Holocene floodplain aquifer along the Red River, Vietnam. The SCMs for ferrihydrite and goethite yielded very different results. The ferrihydrite SCM favors As(III) over As(V) and has carbonate and silica species as the main competitors for surface sites. In contrast, the goethite SCM has a greater affinity for As(V) over As(III) while PO4^3- and Fe(II) form the predominant surface species. The SCM for Pleistocene aquifer sediment resembles most the goethite SCM but shows more Si sorption. Compiled As(III) adsorption data for Holocene sediment was also well described by the SCM determined for Pleistocene aquifer sediment, suggesting a comparable As(III) affinity of Holocene and Pleistocene aquifer sediments. A forced gradient field experiment was conducted in a bank aquifer adjacent to a tributary channel to the Red River, and the passage in the aquifer of mixed groundwater containing up to 74% channel water was observed. The concentrations of As (SCM correctly predicts desorption for As(III) but for Si and PO4^3- it predicts an increased adsorption instead of desorption. The goethite SCM correctly predicts desorption of both As(III) and PO4^3- but failed in the prediction of Si desorption. These results indicate that the prediction of As mobility, by using SCMs for synthetic Fe-oxides, will be strongly dependent on the model chosen. The SCM based on the Pleistocene aquifer sediment predicts the desorption of As(III), PO4^3- and Si markedly better than the SCMs for ferrihydrite and goethite, even though Si desorption is still somewhat under-predicted. The observation that a SCM calibrated on a different sediment can predict our field results so well suggests that sediment based SCMs may be a
Complexity-aware simple modeling.
Gómez-Schiavon, Mariana; El-Samad, Hana
2018-02-26
Mathematical models continue to be essential for deepening our understanding of biology. On one extreme, simple or small-scale models help delineate general biological principles. However, the parsimony of detail in these models as well as their assumption of modularity and insulation make them inaccurate for describing quantitative features. On the other extreme, large-scale and detailed models can quantitatively recapitulate a phenotype of interest, but have to rely on many unknown parameters, making them often difficult to parse mechanistically and to use for extracting general principles. We discuss some examples of a new approach-complexity-aware simple modeling-that can bridge the gap between the small-scale and large-scale approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.
Complex Networks in Psychological Models
Wedemann, R. S.; Carvalho, L. S. A. V. D.; Donangelo, R.
We develop schematic, self-organizing, neural-network models to describe mechanisms associated with mental processes, by a neurocomputational substrate. These models are examples of real world complex networks with interesting general topological structures. Considering dopaminergic signal-to-noise neuronal modulation in the central nervous system, we propose neural network models to explain development of cortical map structure and dynamics of memory access, and unify different mental processes into a single neurocomputational substrate. Based on our neural network models, neurotic behavior may be understood as an associative memory process in the brain, and the linguistic, symbolic associative process involved in psychoanalytic working-through can be mapped onto a corresponding process of reconfiguration of the neural network. The models are illustrated through computer simulations, where we varied dopaminergic modulation and observed the self-organizing emergent patterns at the resulting semantic map, interpreting them as different manifestations of mental functioning, from psychotic through to normal and neurotic behavior, and creativity.
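The associative-memory reading of mental processes above can be made concrete with a classic Hopfield-style network. This is a standard textbook construction, not the authors' specific model; the stored pattern and network size below are arbitrary:

```python
def train_hopfield(patterns):
    """Hebbian outer-product learning; the weight matrix is symmetric
    with zero diagonal, each pattern a list of +1/-1 values."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, cue, sweeps=5):
    """Asynchronous sign updates; each sweep is one full pass over the
    neurons, descending the network's energy toward a stored attractor."""
    s = list(cue)
    n = len(s)
    for _ in range(sweeps):
        for i in range(n):
            h = sum(w[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

if __name__ == "__main__":
    memory = [1, -1, 1, -1, 1, -1, 1, -1]
    w = train_hopfield([memory])
    corrupted = list(memory)
    corrupted[0] = -corrupted[0]           # flip one "neuron" in the cue
    print(recall(w, corrupted) == memory)  # the stored pattern is retrieved
```

With a single stored pattern the corrupted cue settles back onto the memory within one sweep; as more patterns are stored (beyond roughly 0.14n), spurious attractors appear, which is the kind of reconfiguration-of-attractors picture the abstract invokes for working-through.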
Liquid jets for experiments on complex fluids
International Nuclear Information System (INIS)
Steinke, Ingo
2015-02-01
The ability of modern storage rings and free-electron lasers to produce intense X-ray beams that can be focused down to μm and nm sizes offers the possibility to study soft condensed matter systems on small length and short time scales. Gas dynamic virtual nozzles (GDVN) offer the unique possibility to investigate complex fluids spatially confined in a μm sized liquid jet with high flow rates, high pressures and shear stress distributions. In this thesis two different applications of liquid jet injection systems have been studied. The influence of the shear flow present in a liquid jet on colloidal dispersions was investigated via small angle X-ray scattering, and a coherent wide angle X-ray scattering experiment on a liquid water jet was performed. For these purposes, liquid jet setups suitable for X-ray scattering experiments have been developed and the manufacturing of gas dynamic virtual nozzles was realized. The flow properties of a liquid jet and their influences on the liquid were studied with two different colloidal dispersions at beamline P10 at the storage ring PETRA III. The results show that high shear flows present in a liquid jet lead to compressions and expansions of the particle structure and to particle alignments. The shear rate in the used liquid jet could be estimated to γ ≥ 5.4 × 10^4 Hz. The feasibility of rheology studies with a liquid jet injection system and the combined advantages are discussed. The coherent X-ray scattering experiment on a water jet was performed at the XCS instrument at the free-electron laser LCLS. The first coherent single-shot diffraction patterns from water were taken to investigate the feasibility of measuring speckle patterns from water.
Complex fluids modeling and algorithms
Saramito, Pierre
2016-01-01
This book presents a comprehensive overview of the modeling of complex fluids, including many common substances, such as toothpaste, hair gel, mayonnaise, liquid foam, cement and blood, which cannot be described by Navier-Stokes equations. It also offers an up-to-date mathematical and numerical analysis of the corresponding equations, as well as several practical numerical algorithms and software solutions for the approximation of the solutions. It discusses industrial (molten plastics, forming process), geophysical (mud flows, volcanic lava, glaciers and snow avalanches), and biological (blood flows, tissues) modeling applications. This book is a valuable resource for undergraduate students and researchers in applied mathematics, mechanical engineering and physics.
Complex terrain experiments in the New European Wind Atlas
Angelou, N.; Callies, D.; Cantero, E.; Arroyo, R. Chávez; Courtney, M.; Cuxart, J.; Dellwik, E.; Gottschall, J.; Ivanell, S.; Kühn, P.; Lea, G.; Matos, J. C.; Palma, J. M. L. M.; Peña, A.; Rodrigo, J. Sanz; Söderberg, S.; Vasiljevic, N.; Rodrigues, C. Veiga
2017-01-01
The New European Wind Atlas project will create a freely accessible wind atlas covering Europe and Turkey, develop the model chain to create the atlas and perform a series of experiments on flow in many different kinds of complex terrain to validate the models. This paper describes the experiments of which some are nearly completed while others are in the planning stage. All experiments focus on the flow properties that are relevant for wind turbines, so the main focus is the mean flow and the turbulence at heights between 40 and 300 m. Also extreme winds, wind shear and veer, and diurnal and seasonal variations of the wind are of interest. Common to all the experiments is the use of Doppler lidar systems to supplement and in some cases replace completely meteorological towers. Many of the lidars will be equipped with scan heads that will allow for arbitrary scan patterns by several synchronized systems. Two pilot experiments, one in Portugal and one in Germany, show the value of using multiple synchronized, scanning lidar, both in terms of the accuracy of the measurements and the atmospheric physical processes that can be studied. The experimental data will be used for validation of atmospheric flow models and will by the end of the project be freely available. This article is part of the themed issue ‘Wind energy in complex terrains’. PMID:28265025
Quantum interference experiments with complex organic molecules
International Nuclear Information System (INIS)
Eibenberger, S. I.
2015-01-01
Matter-wave interference with complex particles is a thriving field in experimental quantum physics. The quest for testing the quantum superposition principle with highly complex molecules has motivated the development of the Kapitza-Dirac-Talbot-Lau interferometer (KDTLI). This interferometer has enabled quantum interference with large organic molecules in an unprecedented mass regime. In this doctoral thesis I describe quantum superposition experiments which we were able to successfully realize with molecules of masses beyond 10 000 amu and consisting of more than 800 atoms. The typical de Broglie wavelengths of all particles in this thesis are in the order of 0.3-5 pm. This is significantly smaller than any molecular extension (nanometers) or the delocalization length in our interferometer (hundreds of nanometers). Many vibrational and rotational states are populated since the molecules are thermally highly excited (300-1000 K). And yet, high-contrast quantum interference patterns could be observed. The visibility and position of these matter-wave interference patterns is highly sensitive to external perturbations. This sensitivity has opened the path to extensive studies of the influence of internal molecular properties on the coherence of their associated matter waves. In addition, it enables a new approach to quantum-assisted metrology. Quantum interference imprints a high-contrast nano-structured density pattern onto the molecular beam which allows us to resolve tiny shifts and dephasing of the molecular beam. I describe how KDTL interferometry can be used to investigate a number of different molecular properties. We have studied vibrationally-induced conformational changes of floppy molecules and permanent electric dipole moments using matter-wave deflectometry in an external electric field. We have developed a new method for optical absorption spectroscopy which uses the recoil of the molecules upon absorption of individual photons. This allows us to
Energy Technology Data Exchange (ETDEWEB)
Puukko, E.; Hakanen, M. [Univ. of Helsinki (Finland). Dept. of Chemistry. Lab. of Radiochemistry]
1997-09-01
The aim of the work was to study the sorption behaviour of Ni on quartz, goethite and kaolinite at different pH levels and in electrolyte solutions of different strength. In addition, preliminary experiments were made to study the sorption of thorium on quartz. The MUS quartz and Nilsiae quartz were analysed for MnO2 by neutron activation analysis (NAA), and the experimental results were modelled with the HYDRAQL computer model. 9 refs.
Simulation - modeling - experiment
International Nuclear Information System (INIS)
2004-01-01
After two workshops held in 2001 on the same topics, and in order to make a status of the advances in the domain of simulation and measurements, the main goals proposed for this workshop are: the presentation of the state-of-the-art of tools, methods and experiments in the domains of interest of the Gedepeon research group, the exchange of information about the possibilities of use of computer codes and facilities, about the understanding of physical and chemical phenomena, and about development and experiment needs. This document gathers 18 presentations (slides) among the 19 given at this workshop and dealing with: the deterministic and stochastic codes in reactor physics (Rimpault G.); MURE: an evolution code coupled with MCNP (Meplan O.); neutronic calculation of future reactors at EdF (Lecarpentier D.); advanced status of the MCNP/TRIO-U neutronic/thermal-hydraulics coupling (Nuttin A.); the FLICA4/TRIPOLI4 thermal-hydraulics/neutronics coupling (Aniel S.); perturbation methods and sensitivity analysis of nuclear data in reactor physics, application to the VENUS-2 experimental reactor (Bidaud A.); modeling for the reliability improvement of an ADS accelerator (Biarotte J.L.); residual gas compensation of the space charge of intense beams (Ben Ismail A.); experimental determination and numerical modeling of phase equilibrium diagrams of interest in nuclear applications (Gachon J.C.); modeling of irradiation effects (Barbu A.); elastic limit and irradiation damage in Fe-Cr alloys: simulation and experiment (Pontikis V.); experimental measurements of spallation residues, comparison with Monte-Carlo simulation codes (Fallot M.); the spallation target-reactor coupling (Rimpault G.); tools and data (Grouiller J.P.); models in high energy transport codes: status and perspective (Leray S.); other ways of investigation for spallation (Audoin L.); neutrons and light particles production at intermediate energies (20-200 MeV) with iron, lead and uranium targets (Le Colley F
Model complexity control for hydrologic prediction
Schoups, G.; Van de Giesen, N.C.; Savenije, H.H.G.
2008-01-01
A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore
Stern-Gerlach Experiments and Complex Numbers in Quantum Physics
Sivakumar, S.
2012-01-01
It is often stated that complex numbers are essential in quantum theory. In this article, the need for complex numbers in quantum theory is motivated using the results of tandem Stern-Gerlach experiments
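The role of complex numbers can be made concrete numerically. The sketch below is an illustration, not taken from the article: it represents spin-1/2 states along the z, x and y axes as complex 2-vectors, and the factor i in the y-state is what makes all three tandem Stern-Gerlach pass-through probabilities equal to 1/2 simultaneously, something no purely real assignment of amplitudes can achieve.

```python
import numpy as np

# Spin-1/2 "up" states along the z, x and y axes as complex 2-vectors.
up_z = np.array([1, 0], dtype=complex)
up_x = np.array([1, 1], dtype=complex) / np.sqrt(2)
up_y = np.array([1, 1j], dtype=complex) / np.sqrt(2)  # the imaginary unit is essential here

def prob(prepared, measured):
    """Probability of passing a Stern-Gerlach filter for `measured`
    when the beam was prepared in state `prepared` (Born rule)."""
    return abs(np.vdot(measured, prepared)) ** 2

# Tandem Stern-Gerlach: any two of the three orthogonal axes give 50/50 outcomes.
for pair in [(up_z, up_x), (up_z, up_y), (up_x, up_y)]:
    print(round(prob(*pair), 3))  # each prints 0.5
```

Replacing `1j` with any real number breaks at least one of the three 50/50 relations, which is the numerical core of the argument that complex amplitudes are needed.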
A Practical Philosophy of Complex Climate Modelling
Schmidt, Gavin A.; Sherwood, Steven
2014-01-01
We give an overview of the practice of developing and using complex climate models, as seen from experiences in a major climate modelling center and through participation in the Coupled Model Intercomparison Project (CMIP). We discuss the construction and calibration of models; their evaluation, especially through use of out-of-sample tests; and their exploitation in multi-model ensembles to identify biases and make predictions. We stress that adequacy or utility of climate models is best assessed via their skill against more naive predictions. The framework we use for making inferences about reality using simulations is naturally Bayesian (in an informal sense), and has many points of contact with more familiar examples of scientific epistemology. While the use of complex simulations in science is a development that changes much in how science is done in practice, we argue that the concepts being applied fit very much into traditional practices of the scientific method, albeit those more often associated with laboratory work.
Nonparametric Bayesian Modeling of Complex Networks
DEFF Research Database (Denmark)
Schmidt, Mikkel Nørgaard; Mørup, Morten
2013-01-01
Modeling structure in complex networks using Bayesian nonparametrics makes it possible to specify flexible model structures and infer the adequate model complexity from the observed data. This article provides a gentle introduction to nonparametric Bayesian modeling of complex networks: Using an infinite mixture model as running example, we go through the steps of deriving the model as an infinite limit of a finite parametric model, inferring the model parameters by Markov chain Monte Carlo, and checking the model's fit and predictive performance. We explain how advanced nonparametric models ...
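The infinite-limit construction mentioned above can be illustrated with the partition prior it produces: the Chinese restaurant process. The sketch below is hypothetical example code, not from the article; `crp_partition` and its parameters are my own naming.

```python
import random

def crp_partition(n, alpha, seed=0):
    """Sample a partition of n items from a Chinese restaurant process with
    concentration alpha -- the prior that arises as the infinite limit of a
    finite mixture model."""
    rng = random.Random(seed)
    tables = []       # tables[k] = number of items currently in cluster k
    assignment = []   # assignment[i] = cluster label of item i
    for i in range(n):
        # Item i joins cluster k with prob n_k / (i + alpha),
        # or opens a new cluster with prob alpha / (i + alpha).
        weights = tables + [alpha]
        r = rng.uniform(0, sum(weights))
        acc = 0.0
        for k, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        if k == len(tables):
            tables.append(1)   # new cluster
        else:
            tables[k] += 1
        assignment.append(k)
    return assignment

print(crp_partition(10, alpha=1.0))
```

Because new clusters are always appended, the labels come out contiguous, which is convenient when the sample seeds a Gibbs sampler over cluster assignments.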
MARKETING MODELS APPLICATION EXPERIENCE
Directory of Open Access Journals (Sweden)
A. Yu. Rymanov
2011-01-01
Marketing models are used for the assessment of such marketing elements as sales volume, market share, market attractiveness, advertising costs, product pushing and selling, profit, and profitability. A classification of buying-process decision-making models is presented. SWOT- and GAP-based models are best for selling assessments. Lately, there is a tendency to move from assessment on the basis of financial indices to assessment on the basis of non-financial ones. From the marketing viewpoint, most important are long-term company activity and consumer-drawing models, as well as operative models of market attractiveness.
DEFF Research Database (Denmark)
Jantzen, Christian; Vetner, Mikael
2008-01-01
How can urban designers develop an emotionally satisfying environment not only for today's users but also for coming generations? Which devices can they use to elicit interesting and relevant urban experiences? This paper attempts to answer these questions by analyzing the design of Zuidas, a new...
Computations, Complexity, Experiments, and the World Outside Physics
International Nuclear Information System (INIS)
Kadanoff, L.P
2009-01-01
Computer Models in the Sciences and Social Sciences. 1. Simulation and Prediction in Complex Systems: the Good, the Bad and the Awful. This lecture deals with the history of large-scale computer modeling, mostly in the context of the U.S. Department of Energy's sponsorship of modeling for weapons development and innovation in energy sources. 2. Complexity: Making a Splash - Breaking a Neck - The Making of Complexity in Physical Systems. For ages thinkers have been asking how complexity arises. The laws of physics are very simple; how come we are so complex? This lecture tries to approach this question by asking how complexity arises in physical fluids. 3. Forrester et al.: Social and Biological Model-Making. The partial collapse of the world's economy has raised the question of whether we could improve the performance of economic and social systems by a major effort on creating understanding via large-scale computer models. (author)
Osteosarcoma models : understanding complex disease
Mohseny, Alexander Behzad
2012-01-01
A mesenchymal stem cell (MSC) based osteosarcoma model was established. The model provided evidence for a MSC origin of osteosarcoma. Normal MSCs transformed spontaneously to osteosarcoma-like cells, which was always accompanied by genomic instability and loss of the Cdkn2a locus. Accordingly, loss of
Thermodynamic modeling of complex systems
DEFF Research Database (Denmark)
Liang, Xiaodong
... after an oil spill. Engineering thermodynamics could be applied in state-of-the-art sonar products through advanced artificial technology, if the speed of sound, solubility and density of oil-seawater systems could be satisfactorily modelled. The addition of methanol or glycols into unprocessed well ... The PC-SAFT EOS is successfully applied to model the phase behaviour of water-, chemical- and hydrocarbon (oil)-containing systems, with newly developed pure-component parameters for water and chemicals and characterization procedures for petroleum fluids. The performance of the PC-SAFT EOS on liquid-liquid equilibria of water with hydrocarbons has been under debate for some years. An interactive step-wise procedure is proposed to fit the model parameters for small associating fluids by taking the liquid-liquid equilibrium data into account. It is still far from a simple task to apply PC-SAFT in routine PVT simulations and phase ...
Role models for complex networks
Reichardt, J.; White, D. R.
2007-11-01
We present a framework for automatically decomposing (“block-modeling”) the functional classes of agents within a complex network. These classes are represented by the nodes of an image graph (“block model”) depicting the main patterns of connectivity and thus functional roles in the network. Using a first principles approach, we derive a measure for the fit of a network to any given image graph allowing objective hypothesis testing. From the properties of an optimal fit, we derive how to find the best fitting image graph directly from the network and present a criterion to avoid overfitting. The method can handle both two-mode and one-mode data, directed and undirected as well as weighted networks and allows for different types of links to be dealt with simultaneously. It is non-parametric and computationally efficient. The concepts of structural equivalence and modularity are found as special cases of our approach. We apply our method to the world trade network and analyze the roles individual countries play in the global economy.
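The notion of fitting a network to an image graph can be sketched with a toy score. This is an assumption for illustration only, not the first-principles measure derived in the paper: it simply counts the fraction of ordered node pairs whose link state agrees with the image-graph entry for their blocks.

```python
import numpy as np

def blockmodel_fit(A, blocks, image):
    """Fraction of ordered dyads (i, j), i != j, whose link state in the
    adjacency matrix A matches the image graph entry for their blocks.
    A toy stand-in for the paper's fit measure, for illustration only."""
    A = np.asarray(A)
    n = len(A)
    agree = 0
    for i in range(n):
        for j in range(n):
            if i != j and A[i, j] == image[blocks[i]][blocks[j]]:
                agree += 1
    return agree / (n * (n - 1))

# A perfectly bipartite network matches the two-block image [[0, 1], [1, 0]].
A = [[0, 0, 1, 1],
     [0, 0, 1, 1],
     [1, 1, 0, 0],
     [1, 1, 0, 0]]
print(blockmodel_fit(A, [0, 0, 1, 1], [[0, 1], [1, 0]]))  # 1.0
```

Searching over block assignments to maximize such a score is the computational heart of block-modeling; the paper's contribution is a principled version of the score plus an overfitting criterion.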
Modelling the structure of complex networks
DEFF Research Database (Denmark)
Herlau, Tue
... networks have been independently studied as mathematical objects in their own right. As such, there has been both an increased demand for statistical methods for complex networks as well as a quickly growing mathematical literature on the subject. In this dissertation we explore aspects of modelling complex networks ... The next chapters treat some of the various symmetries, representer theorems and probabilistic structures often deployed in the modelling of complex networks, the construction of sampling methods, and various network models. The introductory chapters serve to provide context for the included written ...
Extracting Models in Single Molecule Experiments
Presse, Steve
2013-03-01
Single molecule experiments can now monitor the journey of a protein from its assembly near a ribosome to its proteolytic demise. Ideally, all single molecule data should be self-explanatory. However, data originating from single molecule experiments are particularly challenging to interpret on account of fluctuations and noise at such small scales. Realistically, basic understanding comes from models carefully extracted from the noisy data. Statistical mechanics, and maximum entropy in particular, provide a powerful framework for accomplishing this task in a principled fashion. Here I will discuss our work in extracting conformational memory from single molecule force spectroscopy experiments on large biomolecules. One clear advantage of this method is that we let the data tend towards the correct model; we do not fit the data. I will show that the dynamical model of the single molecule dynamics which emerges from this analysis is often more textured and complex than could otherwise come from fitting the data to a pre-conceived model.
Reducing Spatial Data Complexity for Classification Models
International Nuclear Information System (INIS)
Ruta, Dymitr; Gabrys, Bogdan
2007-01-01
Intelligent data analytics is gradually becoming a day-to-day reality of today's businesses. However, despite rapidly increasing storage and computational power, current state-of-the-art predictive models still cannot handle massive and noisy corporate data warehouses. What is more, adaptive and real-time operational environments require multiple models to be frequently retrained, which further hinders their use. Various data reduction techniques, ranging from data sampling up to density retention models, attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. Our response is to propose a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. On the example of the Parzen Labelled Data Compressor (PLDC), we demonstrate a simulatory data condensation process directly inspired by electrostatic field interaction, where the data are moved and merged following the attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data, which acts as a class-sensitive potential field ensuring preservation of the original class density distributions, yet allowing data to rearrange and merge, joining together their soft class partitions. As a result we achieved a model that reduces labelled datasets much further than any competitive approaches, yet with maximum retention of the original class densities and hence the classification performance. PLDC leaves the reduced dataset with soft accumulative class weights, allowing for efficient online updates, and, as shown in a series of experiments, if coupled with the Parzen Density Classifier (PDC) it significantly outperforms competitive data condensation methods in terms of classification performance at the
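A toy version of weight-preserving data condensation can be sketched as follows. This is illustrative only: it merges nearby same-class points into weighted prototypes, which is far simpler than the electrostatic PLDC process described in the abstract.

```python
import numpy as np

def condense(X, y, merge_radius):
    """Toy condensation: repeatedly merge same-class points closer than
    merge_radius into a weighted prototype at their centre of mass.
    Illustrative sketch only -- not the authors' PLDC algorithm."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    w = np.ones(len(X))  # accumulated class weight of each prototype
    merged = True
    while merged:
        merged = False
        for i in range(len(X)):
            for j in range(i + 1, len(X)):
                if y[i] == y[j] and np.linalg.norm(X[i] - X[j]) < merge_radius:
                    # Weighted merge preserves the total class mass.
                    X[i] = (w[i] * X[i] + w[j] * X[j]) / (w[i] + w[j])
                    w[i] += w[j]
                    X = np.delete(X, j, axis=0)
                    w = np.delete(w, j)
                    y = np.delete(y, j)
                    merged = True
                    break
            if merged:
                break
    return X, y, w

X = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]]
y = [0, 0, 1]
Xc, yc, wc = condense(X, y, merge_radius=0.5)
print(len(Xc), wc.tolist())  # 2 [2.0, 1.0]
```

The accumulated weights `w` play the role of PLDC's soft class weights: a classifier trained on the condensed set can weight each prototype by the mass it absorbed.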
Computational Modeling of Complex Protein Activity Networks
Schivo, Stefano; Leijten, Jeroen; Karperien, Marcel; Post, Janine N.; Prignet, Claude
2017-01-01
Because of the numerous entities interacting, the complexity of the networks that regulate cell fate makes it impossible to analyze and understand them using the human brain alone. Computational modeling is a powerful method to unravel complex systems. We recently described the development of a
Models of complex attitude systems
DEFF Research Database (Denmark)
Sørensen, Bjarne Taulo
Existing research on public attitudes towards agricultural production systems is largely descriptive, abstracting from the processes through which members of the general public generate their evaluations of such systems. The present paper adopts a systems perspective on such evaluations ... that evaluative affect propagates through the system in such a way that the system becomes evaluatively consistent and operates as a schema for the generation of evaluative judgments. In the empirical part of the paper, the causal structure of an attitude system from which people derive their evaluations of pork ... search algorithms and structural equation models. The results suggest that evaluative judgments of the importance of production system attributes are generated in a schematic manner, driven by personal value orientations. The effect of personal value orientations was strong and largely unmediated ...
Complex fluids in biological systems experiment, theory, and computation
2015-01-01
This book serves as an introduction to the continuum mechanics and mathematical modeling of complex fluids in living systems. The form and function of living systems are intimately tied to the nature of surrounding fluid environments, which commonly exhibit nonlinear and history-dependent responses to forces and displacements. With ever-increasing capabilities in the visualization and manipulation of biological systems, research on the fundamental phenomena, models, measurements, and analysis of complex fluids has taken a number of exciting directions. In this book, many of the world's foremost experts explore key topics such as: macro- and micro-rheological techniques for measuring the material properties of complex biofluids and the subtleties of data interpretation; experimental observations and rheology of complex biological materials, including mucus, cell membranes, the cytoskeleton, and blood; the motility of microorganisms in complex fluids and the dynamics of active suspensions; and challenges and solut...
Modeling Musical Complexity: Commentary on Eerola (2016)
Directory of Open Access Journals (Sweden)
Joshua Albrecht
2016-07-01
In his paper, "Expectancy violation and information-theoretic models of melodic complexity," Eerola compares a number of models that correlate musical features of monophonic melodies with participant ratings of perceived melodic complexity. He finds that fairly strong results can be achieved using several different approaches to modeling perceived melodic complexity. The data used in this study are gathered from several previously published studies that use widely different types of melodies, including isochronous folk melodies, isochronous 12-tone rows, and rhythmically complex African folk melodies. This commentary first briefly reviews the article's method and main findings, then suggests a rethinking of the theoretical framework of the study. Finally, some of the methodological issues of the study are discussed.
Modeling of Complex Life Cycle Prediction Based on Cell Division
Directory of Open Access Journals (Sweden)
Fucheng Zhang
2017-01-01
Effective fault diagnosis and reasonable life expectancy are of great significance and practical engineering value for the safety, reliability, and maintenance cost of equipment and the working environment. At present, equipment life prediction methods include prediction based on condition monitoring, combined forecasting models, and data-driven models. Most of them require large amounts of data. To address this issue, we propose learning from the mechanism of cell division in organisms. By studying the complex multifactor correlation life model, we have established a life prediction model of moderate complexity. In this paper, we model life prediction based on cell division. Experiments show that our model can effectively simulate the state of cell division. Using this model as a reference, we will apply it to complex equipment life prediction.
Numerical experiments on 2D strongly coupled complex plasmas
International Nuclear Information System (INIS)
Hou Lujing; Ivlev, A V; Thomas, H M; Morfill, G E
2010-01-01
The Brownian Dynamics simulation method is briefly reviewed at first and then applied to study some non-equilibrium phenomena in strongly coupled complex plasmas, such as heat transfer processes, shock wave excitation/propagation and particle trapping, by directly mimicking the real experiments.
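A minimal sketch of the kind of Brownian Dynamics update used for 2D Yukawa (screened-Coulomb) particle systems is given below. The overdamped form, the dimensionless units and parameter values, and the absence of boundaries are illustrative assumptions, not the setup of the paper.

```python
import numpy as np

def bd_step(pos, dt, gamma, T, kappa, rng):
    """One overdamped Brownian-dynamics step for a 2D Yukawa system:
    deterministic drift from pairwise screened-Coulomb repulsion plus
    Gaussian thermal noise (dimensionless units; illustrative sketch)."""
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[i] - pos[j]
            r = np.linalg.norm(d)
            # Yukawa force magnitude: -d/dr [exp(-kappa r)/r]
            f = np.exp(-kappa * r) * (1 + kappa * r) / r**2
            forces[i] += f * d / r
            forces[j] -= f * d / r
    # Fluctuation-dissipation: noise variance 2*T*dt/gamma per coordinate.
    noise = rng.normal(0.0, np.sqrt(2 * T * dt / gamma), size=pos.shape)
    return pos + forces * dt / gamma + noise

rng = np.random.default_rng(0)
pos = rng.uniform(0, 10, size=(16, 2))  # 16 particles in a 10x10 region
for _ in range(100):
    pos = bd_step(pos, dt=0.01, gamma=1.0, T=0.01, kappa=1.0, rng=rng)
print(pos.shape)  # (16, 2)
```

Real complex-plasma simulations add confinement or periodic boundaries and often inertia (full Langevin dynamics); the point here is only the structure of the drift-plus-noise update.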
Modeling complex work systems - method meets reality
van der Veer, Gerrit C.; Hoeve, Machteld; Lenting, Bert
1996-01-01
Modeling an existing task situation is often a first phase in the (re)design of information systems. For complex systems design, this model should consider both the people and the organization involved, the work, and situational aspects. Groupware Task Analysis (GTA) as part of a method for the
Fatigue modeling of materials with complex microstructures
DEFF Research Database (Denmark)
Qing, Hai; Mishnaevsky, Leon
2011-01-01
... with the phenomenological model of fatigue damage growth. As a result, the fatigue lifetime of materials with complex structures can be determined as a function of the parameters of their structures. As an example, the fatigue lifetimes of wood, modeled as a cellular material with multilayered, fiber-reinforced walls, were ...
Updating the debate on model complexity
Simmons, Craig T.; Hunt, Randall J.
2012-01-01
As scientists who are trying to understand a complex natural world that cannot be fully characterized in the field, how can we best inform the society in which we live? This founding context was addressed in a special session, “Complexity in Modeling: How Much is Too Much?” convened at the 2011 Geological Society of America Annual Meeting. The session had a variety of thought-provoking presentations—ranging from philosophy to cost-benefit analyses—and provided some areas of broad agreement that were not evident in discussions of the topic in 1998 (Hunt and Zheng, 1999). The session began with a short introduction during which model complexity was framed borrowing from an economic concept, the Law of Diminishing Returns, and an example of enjoyment derived by eating ice cream. Initially, there is increasing satisfaction gained from eating more ice cream, to a point where the gain in satisfaction starts to decrease, ending at a point when the eater sees no value in eating more ice cream. A traditional view of model complexity is similar—understanding gained from modeling can actually decrease if models become unnecessarily complex. However, oversimplified models—those that omit important aspects of the problem needed to make a good prediction—can also limit and confound our understanding. Thus, the goal of all modeling is to find the “sweet spot” of model sophistication—regardless of whether complexity was added sequentially to an overly simple model or collapsed from an initial highly parameterized framework that uses mathematics and statistics to attain an optimum (e.g., Hunt et al., 2007). Thus, holistic parsimony is attained, incorporating “as simple as possible,” as well as the equally important corollary “but no simpler.”
Complexity, Modeling, and Natural Resource Management
Directory of Open Access Journals (Sweden)
Paul Cilliers
2013-09-01
This paper contends that natural resource management (NRM) issues are, by their very nature, complex and that both scientists and managers in this broad field will benefit from a theoretical understanding of complex systems. It starts off by presenting the core features of a view of complexity that not only deals with the limits to our understanding, but also points toward a responsible and motivating position. Everything we do involves explicit or implicit modeling, and as we can never have comprehensive access to any complex system, we need to be aware both of what we leave out as we model and of the implications of the choice of our modeling framework. One vantage point is never sufficient, as complexity necessarily implies that multiple (independent) conceptualizations are needed to engage the system adequately. We use two South African cases as examples of complex systems - restricting the case narratives mainly to the biophysical domain associated with NRM issues - that make the point that even the behavior of the biophysical subsystems themselves is already complex. From the insights into complex systems discussed in the first part of the paper and the lessons emerging from the way these cases have been dealt with in reality, we extract five interrelated generic principles for practicing science and management in complex NRM environments. These principles are then further elucidated using four further South African case studies - organized as two contrasting pairs - and now focusing on the more difficult organizational and social side, comparing the human organizational endeavors in managing such systems.
Multifaceted Modelling of Complex Business Enterprises.
Chakraborty, Subrata; Mengersen, Kerrie; Fidge, Colin; Ma, Lin; Lassen, David
2015-01-01
We formalise and present a new generic multifaceted complex system approach for modelling complex business enterprises. Our method has a strong focus on integrating the various data types available in an enterprise which represent the diverse perspectives of various stakeholders. We explain the challenges faced and define a novel approach to converting diverse data types into usable Bayesian probability forms. The data types that can be integrated include historic data, survey data, and management planning data, expert knowledge and incomplete data. The structural complexities of the complex system modelling process, based on various decision contexts, are also explained along with a solution. This new application of complex system models as a management tool for decision making is demonstrated using a railway transport case study. The case study demonstrates how the new approach can be utilised to develop a customised decision support model for a specific enterprise. Various decision scenarios are also provided to illustrate the versatility of the decision model at different phases of enterprise operations such as planning and control.
Modeling OPC complexity for design for manufacturability
Gupta, Puneet; Kahng, Andrew B.; Muddu, Swamy; Nakagawa, Sam; Park, Chul-Hong
2005-11-01
Increasing design complexity in sub-90nm designs results in increased mask complexity and cost. Resolution enhancement techniques (RETs) such as assist feature addition, phase shifting (attenuated PSM) and aggressive optical proximity correction (OPC) help in preserving feature fidelity in silicon but increase mask complexity and cost. The increase in data volume with rising mask complexity is becoming prohibitive for manufacturing. Mask cost is determined by mask write time and mask inspection time, which are directly related to the complexity of features printed on the mask. Aggressive RETs increase complexity by adding assist features and by modifying existing features. Passing design intent to OPC has been identified as a solution for reducing mask complexity and cost in several recent works. The goal of design-aware OPC is to relax OPC tolerances of layout features to minimize mask cost, without sacrificing parametric yield. To convey optimal OPC tolerances for manufacturing, design optimization should drive OPC tolerance optimization using models of mask cost for devices and wires. Design optimization should be aware of the impact of OPC correction levels on mask cost and performance of the design. This work introduces mask cost characterization (MCC), which quantifies OPC complexity, measured in terms of the fracture count of the mask, for different OPC tolerances. MCC with different OPC tolerances is a critical step in linking design and manufacturing. In this paper, we present a MCC methodology that provides models of fracture count of standard cells and wire patterns for use in design optimization. MCC cannot be performed by designers as they do not have access to foundry OPC recipes and RET tools. To build a fracture count model, we perform OPC and fracturing on a limited set of standard cells and wire configurations with all tolerance combinations. Separately, we identify the characteristics of the layout that impact fracture count. Based on the fracture count (FC) data
The 'Model Omitron' proposed experiment
International Nuclear Information System (INIS)
Sestero, A.
1997-05-01
The Model Omitron is a compact tokamak experiment designed by the Fusion Engineering Unit of ENEA and CITIF CONSORTIUM. Building Model Omitron would allow for full testing of Omitron engineering, and partial testing of Omitron physics, at about 1/20 of the cost that has been estimated for the larger parent machine. In particular, due to the unusually large ohmic power densities (up to 100 times the nominal value in the Frascati FTU experiment), in Model Omitron the radial energy flux reaches values comparable to or higher than those envisaged for the larger ignition experiments Omitron, Ignitor and ITER. Consequently, conditions are expected to occur at the plasma border in the scrape-off layer of Model Omitron that are representative of the quoted larger experiments. Moreover, since all this will occur under ohmic heating alone, one will hopefully be able to derive an energy transport model for the ohmic heating regime that is valid over a wider range of plasma parameters (in particular, of the temperature parameter) than was possible before. Finally, in the Model Omitron experiment - by reducing the plasma current and/or the toroidal field down to, say, 1/3 or 1/4 of the nominal values - additional topics can be tackled, such as: large safety-factor configurations (of interest for improving confinement), large aspect-ratio configurations (of interest for the investigation of advanced concepts in tokamaks), high beta (with RF heating, also of interest for the investigation of advanced concepts in tokamaks), and long-pulse discharges (of interest for demonstrating stationary conditions in the current profile)
Sutherland models for complex reflection groups
International Nuclear Information System (INIS)
Crampe, N.; Young, C.A.S.
2008-01-01
There are known to be integrable Sutherland models associated to every real root system or, which is almost equivalent, to every real reflection group. Real reflection groups are special cases of complex reflection groups. In this paper we associate certain integrable Sutherland models to the classical family of complex reflection groups. Internal degrees of freedom are introduced, defining dynamical spin chains, and the freezing limit is taken to obtain static chains of Haldane-Shastry type. By considering the relation of these models to the usual BC_N case, we are led to systems with both real and complex reflection groups as symmetries. We demonstrate their integrability by means of new Dunkl operators associated to wreath products of dihedral groups
Minimum-complexity helicopter simulation math model
Heffley, Robert K.; Mnich, Marc A.
1988-01-01
An example of a minimal complexity simulation helicopter math model is presented. Motivating factors are the computational delays, cost, and inflexibility of the very sophisticated math models now in common use. A helicopter model form is given which addresses each of these factors and provides better engineering understanding of the specific handling qualities features which are apparent to the simulator pilot. The technical approach begins with specification of the features which are to be modeled, followed by a build-up of individual vehicle components and definition of equations. Model matching and estimation procedures are given which enable the modeling of specific helicopters from basic data sources such as flight manuals. Checkout procedures are given which provide for total model validation. A number of possible model extensions and refinements are discussed. Math model computer programs are defined and listed.
THE COMPLEX OF EMOTIONAL EXPERIENCES, RELEVANT MANIFESTATIONS OF INSPIRATION
Directory of Open Access Journals (Sweden)
Pavel A. Starikov
2015-01-01
The aim of the study is to investigate the structure of emotional experiences relevant to manifestations of inspiration in the creative activities of students. Methods. Methods of mathematical statistics (correlation analysis, factor analysis, multidimensional scaling) are applied. Results and scientific novelty. The use of factor analysis and multidimensional scaling revealed a consistent complex of positive experiences of students relevant to the experience of inspiration in creative activities. In accordance with the study results, this complex includes the "operational" experiences described by M. Csikszentmihalyi ("a feeling of full involvement and dissolution in what you do"; "a feeling of concentration, perfect clarity of purpose, complete control, and total immersion in work that requires no special effort") and experiences of a more "spiritual" nature, closer to the peak experiences of A. Maslow ("a feeling of love for all that exists, all life"; "a deep sense of self-worth, an inner feeling of self-approval"; "a feeling of unity with the whole world"; "an acute perception of the beauty of the natural world, the 'beautiful instant'"; "a feeling of lightness, of flowing"). The interrelation between the degree of expression of this complex of experiences and the experience of inspiration is considered. Practical significance. The results of the study show the structure of emotional experiences relevant to manifestations of inspiration. The research materials can be useful both to psychologists and to experts in the pedagogy of creative activity.
Complex Systems and Self-organization Modelling
Bertelle, Cyrille; Kadri-Dahmani, Hakima
2009-01-01
The concern of this book is the use of emergent computing and self-organization modelling within various applications of complex systems. The authors focus their attention both on the innovative concepts and implementations in order to model self-organizations, but also on the relevant applicative domains in which they can be used efficiently. This book is the outcome of a workshop meeting within ESM 2006 (Eurosis), held in Toulouse, France in October 2006.
Geometric Modelling with α-Complexes
Gerritsen, B.H.M.; Werff, K. van der; Veltkamp, R.C.
2001-01-01
The shape of real objects can be so complicated that only a sampled data point set can accurately represent them. Analytic descriptions are too complicated or impossible. Natural objects, for example, can be vague and rough with many holes. For this kind of modelling, α-complexes offer advantages
The Kuramoto model in complex networks
Rodrigues, Francisco A.; Peron, Thomas K. DM.; Ji, Peng; Kurths, Jürgen
2016-01-01
Synchronization of an ensemble of oscillators is an emergent phenomenon present in several complex systems, ranging from social and physical to biological and technological systems. The most successful approach to describe how coherent behavior emerges in these complex systems is given by the paradigmatic Kuramoto model. This model has been traditionally studied in complete graphs. However, besides being intrinsically dynamical, complex systems present very heterogeneous structure, which can be represented as complex networks. This report is dedicated to review main contributions in the field of synchronization in networks of Kuramoto oscillators. In particular, we provide an overview of the impact of network patterns on the local and global dynamics of coupled phase oscillators. We cover many relevant topics, which encompass a description of the most used analytical approaches and the analysis of several numerical results. Furthermore, we discuss recent developments on variations of the Kuramoto model in networks, including the presence of noise and inertia. The rich potential for applications is discussed for special fields in engineering, neuroscience, physics and Earth science. Finally, we conclude by discussing problems that remain open after the last decade of intensive research on the Kuramoto model and point out some promising directions for future research.
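The phase dynamics reviewed in this report can be sketched in a few lines. The following is a minimal illustration (not taken from the report itself) of Kuramoto oscillators coupled through an adjacency matrix, here a size-normalized complete graph, integrated with a simple forward Euler step; the order parameter r measures global coherence, with r close to 1 meaning synchronization.

```python
import numpy as np

def kuramoto_step(theta, omega, K, A, dt):
    """One Euler step of the Kuramoto model on a network with adjacency A:
    dtheta_i/dt = omega_i + K * sum_j A_ij * sin(theta_j - theta_i)."""
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    return theta + dt * (omega + K * coupling)

def order_parameter(theta):
    """Global coherence r in [0, 1]; r -> 1 indicates synchronization."""
    return abs(np.exp(1j * theta).mean())

rng = np.random.default_rng(0)
n = 20
A = np.ones((n, n)) - np.eye(n)        # complete graph for illustration
A /= n                                 # normalize coupling by system size
omega = rng.normal(0.0, 0.1, n)        # natural frequencies
theta = rng.uniform(0, 2 * np.pi, n)   # random initial phases

r0 = order_parameter(theta)
for _ in range(2000):
    theta = kuramoto_step(theta, omega, K=2.0, A=A, dt=0.01)
r1 = order_parameter(theta)
```

With coupling K well above the spread of natural frequencies, the ensemble locks and r grows from its small random-phase value toward 1; replacing A with a heterogeneous network topology is exactly the setting the review surveys.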
A cognitive model for software architecture complexity
Bouwers, E.; Lilienthal, C.; Visser, J.; Van Deursen, A.
2010-01-01
Evaluating the complexity of the architecture of a software system is a difficult task. Many aspects have to be considered to come to a balanced assessment. Several architecture evaluation methods have been proposed, but very few define a quality model to be used during the evaluation process. In
Reassessing Geophysical Models of the Bushveld Complex in 3D
Cole, J.; Webb, S. J.; Finn, C.
2012-12-01
Conceptual geophysical models of the Bushveld Igneous Complex show three possible geometries for its mafic component: 1) separate intrusions with vertical feeders for the eastern and western lobes (Cousins, 1959); 2) separate dipping sheets for the two lobes (Du Plessis and Kleywegt, 1987); 3) a single saucer-shaped unit connected at depth in the central part between the two lobes (Cawthorn et al, 1998). Model three incorporates isostatic adjustment of the crust in response to the weight of the dense mafic material. The model was corroborated by results of a broadband seismic array over southern Africa, known as the Southern African Seismic Experiment (SASE) (Nguuri et al, 2001; Webb et al, 2004). This new information about the crustal thickness only became available in the last decade and could not be considered in the earlier models. Nevertheless, there is still ongoing debate as to which model is correct. All of the models published up to now have been done in 2 or 2.5 dimensions, which is not well suited to modelling the complex geometry of the Bushveld intrusion. 3D modelling takes into account the effects of variations in geometry and the geophysical properties of lithologies in a full three-dimensional sense, and therefore affects the shape and amplitude of calculated fields. The main question is how the new knowledge of the increased crustal thickness, as well as the complexity of the Bushveld Complex, will impact the gravity fields calculated for the existing conceptual models when modelling in 3D. The three published geophysical models were remodelled using full 3D potential-field modelling software, including the crustal thickness obtained from the SASE. The aim was not to construct very detailed models, but to test the existing conceptual models in an equally conceptual way. First, a specific 2D model was recreated in 3D, without crustal thickening, to establish the difference between 2D and 3D results. Then the thicker crust was added. Including the less
Comparing flood loss models of different complexity
Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Riggelsen, Carsten; Scherbaum, Frank; Merz, Bruno
2013-04-01
Any deliberation on flood risk requires the consideration of potential flood losses. In particular, reliable flood loss models are needed to evaluate the cost-effectiveness of mitigation measures, to assess vulnerability, and for comparative risk analysis and financial appraisal during and after floods. In recent years, considerable improvements have been made both in the data basis and in the methodological approaches used for the development of flood loss models. Despite that, flood loss models remain an important source of uncertainty. Likewise, the temporal and spatial transferability of flood loss models is still limited. This contribution investigates the predictive capability of different flood loss models in a split-sample, cross-regional validation approach. For this purpose, flood loss models of different complexity, i.e. based on different numbers of explaining variables, are learned from a set of damage records obtained from a survey after the Elbe flood in 2002. The validation of model predictions is carried out for different flood events in the Elbe and Danube river basins in 2002, 2005 and 2006, for which damage records are available from surveys after the flood events. The models investigated are a stage-damage model, the rule-based model FLEMOps+r, as well as novel model approaches derived using the data mining techniques of regression trees and Bayesian networks. The Bayesian network approach to flood loss modelling provides attractive additional information concerning the probability distribution of both model predictions and explaining variables.
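As a toy illustration of the difference between a single-variable stage-damage model and a multi-variable rule-based model of the FLEMOps+r kind, the sketch below uses invented, uncalibrated numbers; the actual models are fitted to the post-flood survey damage records described in the abstract.

```python
def stage_damage(depth_m):
    """Toy stage-damage curve: relative building loss as a function of
    water depth alone (illustrative values, not calibrated to real data)."""
    if depth_m <= 0.0:
        return 0.0
    if depth_m >= 3.0:
        return 0.6
    return 0.2 * depth_m   # linear ramp: 60% loss at 3 m depth

def rule_based_loss(depth_m, contaminated, precaution):
    """Toy multi-variable rule model: additional explaining variables
    (contamination, private precaution) shift the stage-damage estimate."""
    loss = stage_damage(depth_m)
    if contaminated:
        loss *= 1.3        # contamination tends to increase losses
    if precaution:
        loss *= 0.8        # precautionary measures tend to reduce losses
    return min(loss, 1.0)
```

The point of the comparison in the paper is exactly this trade-off: extra explaining variables can improve predictions but must be validated across regions and events to check transferability.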
Complex scaling in the cluster model
International Nuclear Information System (INIS)
Kruppa, A.T.; Lovas, R.G.; Gyarmati, B.
1987-01-01
To find the positions and widths of resonances, a complex scaling of the intercluster relative coordinate is introduced into the resonating-group model. In the generator-coordinate technique used to solve the resonating-group equation the complex scaling requires minor changes in the formulae and code. The finding of the resonances does not need any preliminary guess or explicit reference to any asymptotic prescription. The procedure is applied to the resonances in the relative motion of two ground-state α clusters in 8Be, but is appropriate for any system consisting of two clusters. (author) 23 refs.; 5 figs
Modeling of anaerobic digestion of complex substrates
International Nuclear Information System (INIS)
Keshtkar, A. R.; Abolhamd, G.; Meyssami, B.; Ghaforian, H.
2003-01-01
A structured mathematical model of the anaerobic conversion of complex organic materials in non-ideally mixed cyclic-batch reactors for biogas production has been developed. The model is based on multiple-reaction stoichiometry (enzymatic hydrolysis, acidogenesis, acetogenesis and methanogenesis), microbial growth kinetics, conventional material balances in the liquid and gas phases for a cyclic-batch reactor, liquid-gas interactions, liquid-phase equilibrium reactions and a simple mixing model that considers the reactor volume in two separate sections: the flow-through and the retention regions. The dynamic model describes the effects of reactant distribution resulting from the mixing conditions, the time interval of feeding, the hydraulic retention time and the mixing parameters on process performance. The model is applied to the simulation of anaerobic digestion of cattle manure under different operating conditions. The model is compared with experimental data and good correlations are obtained
Modelling and simulation of gas explosions in complex geometries
Energy Technology Data Exchange (ETDEWEB)
Saeter, Olav
1998-12-31
This thesis presents a three-dimensional Computational Fluid Dynamics (CFD) code (EXSIM94) for modelling and simulation of gas explosions in complex geometries. It gives the theory and validates the following sub-models: (1) the flow resistance and turbulence generation model for densely packed regions, (2) the flow resistance and turbulence generation model for single objects, and (3) the quasi-laminar combustion model. It is found that a simple model for flow resistance and turbulence generation in densely packed beds is able to reproduce the medium and large scale MERGE explosion experiments of the Commission of the European Communities (CEC) within a band of a factor of 2. The model for a single object representation is found to predict explosion pressure in better agreement with the experiments with a modified k-ε model. This modification also gives slightly improved grid independence for realistic gas explosion approaches. One laminar model is found unsuitable for gas explosion modelling because of strong grid dependence. Another laminar model is found to be relatively grid independent and to work well in harmony with the turbulent combustion model. The code is validated against 40 realistic gas explosion experiments. It is relatively grid independent in predicting explosion pressure in different offshore geometries. It can predict the influence of ignition point location, vent arrangements, different geometries, scaling effects and gas reactivity. The validation study concludes with statistical and uncertainty analyses of the code performance. 98 refs., 96 figs, 12 tabs.
Modeling Users' Experiences with Interactive Systems
Karapanos, Evangelos
2013-01-01
Over the past decade the field of Human-Computer Interaction has evolved from the study of the usability of interactive products towards a more holistic understanding of how they may mediate desired human experiences. This book identifies the notion of diversity in users' experiences with interactive products and proposes methods and tools for modeling this along two levels: (a) interpersonal diversity in users' responses to early conceptual designs, and (b) the dynamics of users' experiences over time. The Repertory Grid Technique is proposed as an alternative to standardized psychometric scales for modeling interpersonal diversity in users' responses to early concepts in the design process, and new Multi-Dimensional Scaling procedures are introduced for modeling such complex quantitative data. iScale, a tool for the retrospective assessment of users' experiences over time, is proposed as an alternative to longitudinal field studies, and a semi-automated technique for the analysis of the elicited exper...
Intrinsic Uncertainties in Modeling Complex Systems.
Energy Technology Data Exchange (ETDEWEB)
Cooper, Curtis S; Bramson, Aaron L.; Ames, Arlo L.
2014-09-01
Models are built to understand and predict the behaviors of both natural and artificial systems. Because it is always necessary to abstract away aspects of any non-trivial system being modeled, we know models can potentially leave out important, even critical elements. This reality of the modeling enterprise forces us to consider the prospective impacts of those effects completely left out of a model - either intentionally or unconsidered. Insensitivity to new structure is an indication of diminishing returns. In this work, we represent a hypothetical unknown effect on a validated model as a finite perturbation whose amplitude is constrained within a control region. We find robustly that without further constraints, no meaningful bounds can be placed on the amplitude of a perturbation outside of the control region. Thus, forecasting into unsampled regions is a very risky proposition. We also present inherent difficulties with proper time discretization of models and representing inherently discrete quantities. We point out potentially worrisome uncertainties, arising from mathematical formulation alone, which modelers can inadvertently introduce into models of complex systems. Acknowledgements This work has been funded under early-career LDRD project #170979, entitled "Quantifying Confidence in Complex Systems Models Having Structural Uncertainties", which ran from 04/2013 to 09/2014. We wish to express our gratitude to the many researchers at Sandia who contributed ideas to this work, as well as feedback on the manuscript. In particular, we would like to mention George Barr, Alexander Outkin, Walt Beyeler, Eric Vugrin, and Laura Swiler for providing invaluable advice and guidance through the course of the project. We would also like to thank Steven Kleban, Amanda Gonzales, Trevor Manzanares, and Sarah Burwell for their assistance in managing project tasks and resources.
Different Epidemic Models on Complex Networks
International Nuclear Information System (INIS)
Zhang Haifeng; Small, Michael; Fu Xinchu
2009-01-01
Models for disease spreading are not limited to SIS or SIR. For instance, for the spreading of AIDS/HIV, susceptible individuals can be classified into different cases according to their immunity, and similarly, infected individuals can be sorted into different classes according to their infectivity. Moreover, some diseases may develop through several stages. Many authors have shown that the relations between individuals can be viewed as a complex network. So in this paper, in order to better explain the dynamical behavior of epidemics, we consider different epidemic models on complex networks and obtain the epidemic threshold for each case. Finally, we present numerical simulations for each case to verify our results.
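One standard result of the kind this paper generalizes is the mean-field SIS epidemic threshold on an arbitrary network, which is set by the largest eigenvalue of the adjacency matrix: the epidemic persists when the effective spreading rate beta/gamma exceeds 1/lambda_max. The sketch below (an illustration, not the paper's multi-class models) checks this on a star graph, where lambda_max equals the square root of the number of leaves.

```python
import numpy as np

def sis_threshold(A):
    """Mean-field SIS epidemic threshold on a network with adjacency A:
    the epidemic persists when beta/gamma > 1 / lambda_max(A)."""
    lam_max = np.linalg.eigvalsh(A).max()   # A is symmetric
    return 1.0 / lam_max

# Star network: one hub connected to k leaves; lambda_max = sqrt(k).
k = 16
A = np.zeros((k + 1, k + 1))
A[0, 1:] = A[1:, 0] = 1.0
tau_c = sis_threshold(A)   # expected: 1 / sqrt(16) = 0.25
```

Heterogeneous networks (hubs) push lambda_max up and the threshold down, which is why epidemics persist so easily on scale-free networks; the paper's contribution is extending threshold results of this type to richer compartment structures.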
FRAM Modelling Complex Socio-technical Systems
Hollnagel, Erik
2012-01-01
There has not yet been a comprehensive method that goes behind 'human error' and beyond the failure concept, and various complicated accidents have accentuated the need for it. The Functional Resonance Analysis Method (FRAM) fulfils that need. This book presents a detailed and tested method that can be used to model how complex and dynamic socio-technical systems work, and understand both why things sometimes go wrong but also why they normally succeed.
Complex Constructivism: A Theoretical Model of Complexity and Cognition
Doolittle, Peter E.
2014-01-01
Education has long been driven by its metaphors for teaching and learning. These metaphors have influenced both educational research and educational practice. Complexity and constructivism are two theories that provide functional and robust metaphors. Complexity provides a metaphor for the structure of myriad phenomena, while constructivism…
Energy Technology Data Exchange (ETDEWEB)
Koestner, Stefan [CERN (Switzerland)], E-mail: koestner@mpi-halle.mpg.de
2009-09-11
With the increasing size and degree of complexity of today's experiments in high energy physics, the amount of work and complexity required to integrate a complete subdetector into an experiment control system is often underestimated. We report here on the layered software structure and protocols used by the LHCb experiment to control its detectors and readout boards. The experiment control system of LHCb is based on the commercial SCADA system PVSS II. Readout boards outside the radiation area are accessed via embedded credit-card-sized PCs connected to a large local area network. The SPECS protocol is used for control of the front-end electronics. Finite state machines are introduced to facilitate the control of a large number of electronic devices and to model the whole experiment at the level of an expert system.
Complex networks under dynamic repair model
Chaoqi, Fu; Ying, Wang; Kun, Zhao; Yangjun, Gao
2018-01-01
Invulnerability is not the only factor of importance when considering complex networks' security. It is also critical to have an effective and reasonable repair strategy. Existing research on network repair is confined to the static model. The dynamic model makes better use of the redundant capacity of repaired nodes and repairs the damaged network more efficiently than the static model; however, the dynamic repair model is complex and polytropic. In this paper, we construct a dynamic repair model and systematically describe the energy-transfer relationships between nodes in the repair process of the failure network. Nodes are divided into three types, corresponding to three structures. We find that the strong coupling structure is responsible for secondary failure of the repaired nodes and propose an algorithm that can select the most suitable targets (nodes or links) to repair the failure network with minimal cost. Two types of repair strategies are identified, with different effects under the two energy-transfer rules. The research results enable a more flexible approach to network repair.
Bridging experiments, models and simulations
DEFF Research Database (Denmark)
Carusi, Annamaria; Burrage, Kevin; Rodríguez, Blanca
2012-01-01
Computational models in physiology often integrate functional and structural information from a large range of spatiotemporal scales from the ionic to the whole organ level. Their sophistication raises both expectations and skepticism concerning how computational methods can improve our … understanding of living organisms and also how they can reduce, replace, and refine animal experiments. A fundamental requirement to fulfill these expectations and achieve the full potential of computational physiology is a clear understanding of what models represent and how they can be validated. The present … that contributes to defining the specific aspects of cardiac electrophysiology the MSE system targets, rather than being only an external test, and that this is driven by advances in experimental and computational methods and the combination of both.
From complex to simple: interdisciplinary stochastic models
International Nuclear Information System (INIS)
Mazilu, D A; Zamora, G; Mazilu, I
2012-01-01
We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions for certain physical quantities, such as the time dependence of the length of the microtubules, and diffusion coefficients. The second one is a stochastic adsorption model with applications in surface deposition, epidemics and voter systems. We introduce the ‘empty interval method’ and show sample calculations for the time-dependent particle density. These models can serve as an introduction to the field of non-equilibrium statistical physics, and can also be used as a pedagogical tool to exemplify standard statistical physics concepts, such as random walks or the kinetic approach of the master equation. (paper)
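The first model's analytical results rest on the theory of one-dimensional random walks, where the mean-squared displacement after n unbiased unit steps equals n. A quick Monte Carlo check of that baseline (an illustration of the underlying tool, not the authors' microtubule model itself):

```python
import random

def random_walk_msd(n_walkers, n_steps, seed=0):
    """Monte Carlo estimate of the mean-squared displacement of a 1D
    unbiased random walk; theory predicts <x^2> = n_steps for unit steps."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walkers):
        x = 0
        for _ in range(n_steps):
            x += rng.choice((-1, 1))   # one unit step left or right
        total += x * x
    return total / n_walkers

# Should be close to 100 for 100 steps.
msd = random_walk_msd(n_walkers=5000, n_steps=100)
```

Linear growth of the mean-squared displacement with step count is the signature of diffusive motion; from it one reads off a diffusion coefficient D = a²/(2Δt) for step length a and step time Δt, the kind of quantity the microtubule model computes analytically.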
A SIMULATION MODEL OF THE GAS COMPLEX
Directory of Open Access Journals (Sweden)
Sokolova G. E.
2016-06-01
The article considers the dynamics of gas production in Russia, the structure of sales in different market segments, and the comparative dynamics of selling prices in these segments. It then addresses the problem of planning a gas complex using a simulation model that allows the efficiency of the project to be estimated and the stability region of the obtained solutions to be determined. The presented model takes into account repayment of the loan, making it possible to determine, from the first year of simulation onwards, whether the loan can be repaid. The model object is a group of gas fields, for which a minimum flow rate is determined above which the project is cost-effective. In determining the minimum flow rate, the discount rate is taken as the generalized weighted average cost of debt and equity, taking risk premiums into account. It also serves as the lower barrier for the internal rate of return, below which the project is rejected as ineffective. Analysis of the dynamics, together with expert evaluation, determines the intervals of variation of the simulated parameters, such as the gas price and the time at which the gas complex reaches projected capacity. For each random realization of the model, the parameter values simulated with the Monte Carlo method yield the minimum viable well flow rate for that realization and allow the stability region of the solution to be determined.
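The workflow described, solving for the minimum cost-effective flow rate inside a Monte Carlo loop over uncertain parameters, can be sketched as follows. All figures here (capex, per-unit opex, the price range, and a 12% discount rate standing in for the weighted cost of capital plus risk premium) are invented for illustration only.

```python
import random

def npv(flow_rate, price, discount, capex=1000.0, opex=2.0, years=10):
    """Toy net present value of a gas project: annual net cash flow is
    flow_rate * (price - opex), discounted over the project lifetime."""
    cash = sum(flow_rate * (price - opex) / (1 + discount) ** t
               for t in range(1, years + 1))
    return cash - capex

def min_flow_rate(price, discount, tol=1e-3):
    """Bisect for the minimum flow rate at which NPV >= 0 (NPV is
    monotonically increasing in flow rate when price > opex)."""
    lo, hi = 0.0, 1e6
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, price, discount) >= 0:
            hi = mid
        else:
            lo = mid
    return hi

def monte_carlo_min_rate(n, seed=0):
    """Sample the uncertain gas price and collect, per realization,
    the minimum flow rate that keeps the project cost-effective."""
    rng = random.Random(seed)
    return [min_flow_rate(rng.uniform(8.0, 12.0), 0.12) for _ in range(n)]

rates = monte_carlo_min_rate(200)
```

The spread of the resulting minimum flow rates across realizations is a simple proxy for the "stability region" the article discusses: if the field's actual flow rate exceeds the worst-case minimum, the project stays viable over the sampled price scenarios.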
Structured analysis and modeling of complex systems
Strome, David R.; Dalrymple, Mathieu A.
1992-01-01
The Aircrew Evaluation Sustained Operations Performance (AESOP) facility at Brooks AFB, Texas, combines the realism of an operational environment with the control of a research laboratory. In recent studies we collected extensive data from the Airborne Warning and Control Systems (AWACS) Weapons Directors subjected to high and low workload Defensive Counter Air Scenarios. A critical and complex task in this environment involves committing a friendly fighter against a hostile fighter. Structured Analysis and Design techniques and computer modeling systems were applied to this task as tools for analyzing subject performance and workload. This technology is being transferred to the Man-Systems Division of NASA Johnson Space Center for application to complex mission related tasks, such as manipulating the Shuttle grappler arm.
Glass Durability Modeling, Activated Complex Theory (ACT)
International Nuclear Information System (INIS)
CAROL, JANTZEN
2005-01-01
The most important requirement for high-level waste glass acceptance for disposal in a geological repository is the chemical durability, expressed as a glass dissolution rate. During the early stages of glass dissolution in near-static conditions that represent a repository disposal environment, a gel layer resembling a membrane forms on the glass surface, through which ions exchange between the glass and the leachant. The hydrated gel layer exhibits acid/base properties which are manifested as the pH dependence of the thickness and nature of the gel layer. The gel layer has been found to age into either clay mineral assemblages or zeolite mineral assemblages. The formation of one phase preferentially over the other has been experimentally related to changes in the pH of the leachant and to the relative amounts of Al3+ and Fe3+ in a glass. The formation of clay mineral assemblages on the leached glass surface layers (lower-pH and Fe3+-rich glasses) causes the dissolution rate to slow to a long-term steady-state rate. The formation of zeolite mineral assemblages (higher-pH and Al3+-rich glasses) on leached glass surface layers causes the dissolution rate to increase and return to the initial high forward rate. The return to the forward dissolution rate is undesirable for the long-term performance of glass in a disposal environment. An investigation into the role of glass stoichiometry, in terms of the quasi-crystalline mineral species in a glass, has shown that the chemistry and structure of the parent glass appear to control the activated surface complexes that form in the leached layers, and these mineral complexes (some Fe3+-rich and some Al3+-rich) play a role in whether clays or zeolites are the dominant species formed on the leached glass surface. The chemistry and structure, in terms of Q distributions of the parent glass, are well represented by the atomic ratios of the glass-forming components. Thus, glass dissolution modeling using simple
Predictive modelling of complex agronomic and biological systems.
Keurentjes, Joost J B; Molenaar, Jaap; Zwaan, Bas J
2013-09-01
Biological systems are tremendously complex in their functioning and regulation. Studying the multifaceted behaviour and describing the performance of such complexity has challenged the scientific community for years. The reduction of real-world intricacy into simple descriptive models has therefore convinced many researchers of the usefulness of introducing mathematics into biological sciences. Predictive modelling takes such an approach another step further in that it takes advantage of existing knowledge to project the performance of a system in alternating scenarios. The ever growing amounts of available data generated by assessing biological systems at increasingly higher detail provide unique opportunities for future modelling and experiment design. Here we aim to provide an overview of the progress made in modelling over time and the currently prevalent approaches for iterative modelling cycles in modern biology. We will further argue for the importance of versatility in modelling approaches, including parameter estimation, model reduction and network reconstruction. Finally, we will discuss the difficulties in overcoming the mathematical interpretation of in vivo complexity and address some of the future challenges lying ahead. © 2013 John Wiley & Sons Ltd.
In situ SAXS experiment during DNA and liposome complexation
Energy Technology Data Exchange (ETDEWEB)
Gasperini, A.A.; Cavalcanti, L.P. [Laboratorio Nacional de Luz Sincrotron (LNLS), Campinas, SP (Brazil); Balbino, T.A.; Torre, L.G. de la [Universidade Estadual de Campinas (UNICAMP), SP (Brazil); Oliveira, C.L.P. [Universidade de Sao Paulo (USP), Sao Paulo, SP (Brazil)
2012-07-01
Full text: Gene therapy is an exciting research area that allows the treatment of different diseases. Basically, an engineered DNA that codes for a protein is the therapeutic drug that has to be delivered to the cell nucleus. After that, the DNA transfection process allows protein production using the cell machinery. However, efficient delivery requires DNA protection against nucleases and interstitial fluids. In this context, the use of cationic liposome/DNA complexes is a promising strategy for non-viral gene therapy. Liposomes are lipid systems that self-aggregate into bilayers, and the use of cationic lipids allows electrostatic complexation with DNA. In this work, we used the SAXS technique to study the complexation kinetics between cationic liposomes and plasmid DNA and to evaluate the liposome structural modifications in the presence of DNA. Liposomes were prepared according to [1], using as the plasmid DNA vector model a modified version of pVAX1-GFP with luciferase as the reporter gene [2]. The complexation was promoted in a SAXS sample holder containing a microchannel giving access to the compartment between two mica windows through which the X-ray beam could pass [3]. We obtained in situ complexation using such a sample holder coupled to a fed-batch reactor through a peristaltic pump. The scattering curves were recorded every 30 seconds during the cycles. The DNA was added up to a previously determined final ratio between surface charges. We studied the form and structure factor model for the liposome bilayer to fit the scattering curves [4]. Structural information such as the bilayer electronic density profiles, the number of bilayers and the fluidity were determined as a function of the complexation with DNA. These differences can be reflected in distinct in vitro and in vivo effects. [1] L. G. de la Torre et al. Colloids and Surfaces B: Biointerfaces, 73, 175 (2009) [2] A. R. Azzoni et al. The Journal of Gene Medicine, 9, 392 (2007) [3] L. P. Cavalcanti et al. Review of
Chaos from simple models to complex systems
Cencini, Massimo; Vulpiani, Angelo
2010-01-01
Chaos: from simple models to complex systems aims to guide science and engineering students through chaos and nonlinear dynamics, from classical examples to the most recent fields of research. The first part, intended for undergraduate and graduate students, is a gentle and self-contained introduction to the concepts and main tools for the characterization of deterministic chaotic systems, with emphasis on statistical approaches. The second part can be used as a reference by researchers as it focuses on more advanced topics, including the characterization of chaos with tools of information theory.
Deep ocean model penetrator experiments
International Nuclear Information System (INIS)
Freeman, T.J.; Burdett, J.R.F.
1986-01-01
Preliminary trials of experimental model penetrators in the deep ocean have been conducted as an international collaborative exercise by participating members (national bodies and the CEC) of the Engineering Studies Task Group of the Nuclear Energy Agency's Seabed Working Group. This report describes and gives the results of these experiments, which were conducted at two deep ocean study areas in the Atlantic: Great Meteor East and the Nares Abyssal Plain. Velocity profiles of penetrators of differing dimensions and weights have been determined as they free-fell through the water column and impacted the sediment. These velocity profiles are used to determine the final embedment depth of the penetrators and the resistance to penetration offered by the sediment. The results are compared with predictions of embedment depth derived from elementary models of a penetrator impacting with a sediment. It is tentatively concluded that once the resistance to penetration offered by a sediment at a particular site has been determined, this quantity can be used to successfully predict the embedment that penetrators of differing sizes and weights would achieve at the same site.
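An elementary embedment model of the kind the abstract compares against can be sketched as a constant-deceleration estimate; the function and all parameter values below are illustrative assumptions, not data from the trials.

```python
def embedment_depth(mass, impact_velocity, resistance, submerged_weight):
    """Elementary penetration model: once in the sediment the penetrator
    decelerates uniformly under the net of sediment resistance minus its
    submerged weight; embedment depth follows from v^2 = 2*a*d."""
    net_decel = (resistance - submerged_weight) / mass  # m/s^2
    if net_decel <= 0:
        raise ValueError("resistance must exceed submerged weight")
    return impact_velocity ** 2 / (2.0 * net_decel)  # metres

# illustrative numbers: a 2-tonne penetrator striking the seabed at 50 m/s
depth = embedment_depth(2000.0, 50.0, 120e3, 15e3)
```

A calibrated site resistance (measured from one penetrator's velocity profile) would then let the same formula predict embedment for other sizes and weights, which is the tentative conclusion of the report.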
Complex singlet extension of the standard model
International Nuclear Information System (INIS)
Barger, Vernon; McCaskey, Mathew; Langacker, Paul; Ramsey-Musolf, Michael; Shaughnessy, Gabe
2009-01-01
We analyze a simple extension of the standard model (SM) obtained by adding a complex singlet to the scalar sector (cxSM). We show that the cxSM can contain one or two viable cold dark matter candidates and analyze the conditions on the parameters of the scalar potential that yield the observed relic density. When the cxSM potential contains a global U(1) symmetry that is both softly and spontaneously broken, it contains both a viable dark matter candidate and the ingredients necessary for a strong first order electroweak phase transition as needed for electroweak baryogenesis. We also study the implications of the model for discovery of a Higgs boson at the Large Hadron Collider.
Extension of association models to complex chemicals
DEFF Research Database (Denmark)
Avlund, Ane Søgaard
Summary of “Extension of association models to complex chemicals”. Ph.D. thesis by Ane Søgaard Avlund The subject of this thesis is application of SAFT type equations of state (EoS). Accurate and predictive thermodynamic models are important in many industries including the petroleum industry......; CPA and sPC-SAFT. Phase equilibrium and monomer fraction calculations with sPC-SAFT for methanol are used in the thesis to illustrate the importance of parameter estimation when using SAFT. Different parameter sets give similar pure component vapor pressure and liquid density results, whereas very...... association is presented in the thesis, and compared to the corresponding lattice theory. The theory for intramolecular association is then applied in connection with sPC-SAFT for mixtures containing glycol ethers. Calculations with sPC-SAFT (without intramolecular association) are presented for comparison...
Complexity and agent-based modelling in urban research
DEFF Research Database (Denmark)
Fertner, Christian
influence on the bigger system. Traditional scientific methods or theories often tried to simplify, not accounting for the complex relations between actors and decision-making. The introduction of computers in simulation made new approaches in modelling possible, such as agent-based modelling (ABM), dealing......Urbanisation processes are the result of a broad variety of actors or actor groups and of their behaviour and decisions, which are based on different experiences, knowledge, resources, values etc. Decisions are often made on a micro/individual level but result in macro/collective behaviour. In urban research...
Experiment Design for Complex VTOL Aircraft with Distributed Propulsion and Tilt Wing
Murphy, Patrick C.; Landman, Drew
2015-01-01
Selected experimental results from a wind tunnel study of a subscale VTOL concept with distributed propulsion and tilt lifting surfaces are presented. The vehicle complexity and automated test facility were ideal for use with a randomized designed experiment. Design of Experiments and Response Surface Methods were invoked to produce run-efficient, statistically rigorous regression models with minimized prediction error. Static tests were conducted at the NASA Langley 12-Foot Low-Speed Tunnel to model all six aerodynamic coefficients over a large flight envelope. This work supports investigations at NASA Langley in developing advanced configurations, simulations, and advanced control systems.
The Complexity of Constructing Evolutionary Trees Using Experiments
DEFF Research Database (Denmark)
Brodal, Gerth Stølting; Fagerberg, Rolf; Pedersen, Christian Nørgaard Storm
2001-01-01
We present tight upper and lower bounds for the problem of constructing evolutionary trees in the experiment model. We describe an algorithm which constructs an evolutionary tree of n species in time O(nd log_d n) using at most n⌈d/2⌉(log_{2⌈d/2⌉-1} n + O(1)) experiments for d > 2, and at most n(log n + O(1)) experiments for d = 2, where d is the degree of the tree. This improves the previous best upper bound by a factor Θ(log d). For d = 2 the previously best algorithm with running time O(n log n) had a bound of 4n log n on the number of experiments. By an explicit adversary argument, we show an Ω(nd log_d n) lower bound, matching our upper bounds and improving the previous best lower bound by a factor Θ(log_d n). Central to our algorithm is the construction and maintenance of separator trees of small height, which may be of independent interest.
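As a toy illustration of the experiment model for d = 2 (not the separator-tree algorithm of the paper), a rooted binary tree can be grown by inserting species one at a time, where each "experiment" is a triplet query returning the closest pair; the species names and the distance oracle below are invented for the sketch.

```python
class Node:
    def __init__(self, leaf=None, children=None):
        self.leaf = leaf              # species name at a leaf, else None
        self.children = children or []

def a_leaf(node):
    """Any leaf representative below this node."""
    return node.leaf if node.leaf is not None else a_leaf(node.children[0])

def insert(node, x, experiment):
    """Insert species x into the tree, guided by triplet experiments."""
    if node.leaf is not None:
        return Node(children=[node, Node(leaf=x)])
    a, b = a_leaf(node.children[0]), a_leaf(node.children[1])
    pair = experiment(x, a, b)
    if set(pair) == {a, b}:           # x branches off above this node
        return Node(children=[node, Node(leaf=x)])
    side = 0 if a in pair else 1      # descend toward the closer child
    node.children[side] = insert(node.children[side], x, experiment)
    return node

def show(n):
    return n.leaf if n.leaf is not None else \
        "(%s)" % ",".join(show(c) for c in n.children)

# hypothetical ultrametric distances encoding the tree (((A,B),C),D)
d = {frozenset(p): v for p, v in
     [(("A", "B"), 2), (("A", "C"), 4), (("B", "C"), 4),
      (("A", "D"), 6), (("B", "D"), 6), (("C", "D"), 6)]}

def experiment(x, a, b):
    """The 'experiment': report the closest pair among {x, a, b}."""
    pairs = [(d[frozenset((x, a))], (x, a)),
             (d[frozenset((x, b))], (x, b)),
             (d[frozenset((a, b))], (a, b))]
    return min(pairs)[1]

root = Node(leaf="A")
for s in "BCD":
    root = insert(root, s, experiment)
```

Each insertion costs one experiment per level descended, so keeping the tree shallow (the role of the paper's separator trees) is what bounds the total number of experiments.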
Predicting the future completing models of observed complex systems
Abarbanel, Henry
2013-01-01
Predicting the Future: Completing Models of Observed Complex Systems provides a general framework for the discussion of model building and validation across a broad spectrum of disciplines. This is accomplished through the development of an exact path integral for use in transferring information from observations to a model of the observed system. Through many illustrative examples drawn from models in neuroscience, fluid dynamics, geosciences, and nonlinear electrical circuits, the concepts are exemplified in detail. Practical numerical methods for approximate evaluations of the path integral are explored, and their use in designing experiments and determining a model's consistency with observations is investigated. Using highly instructive examples, the problems of data assimilation and the means to treat them are clearly illustrated. This book will be useful for students and practitioners of physics, neuroscience, regulatory networks, meteorology and climate science, network dynamics, fluid dynamics, and o...
The Model of Complex Structure of Quark
Liu, Rongwu
2017-09-01
In Quantum Chromodynamics, the quark is known as a kind of point-like fundamental particle which carries mass, charge, color, and flavor; strong interaction takes place between quarks by means of exchanging intermediate particles (gluons). An important consequence of this theory is that strong interaction is a short-range force with the features of ``asymptotic freedom'' and ``quark confinement''. In order to reveal the nature of strong interaction, the ``bag'' model of vacuum and the ``string'' model of string theory were proposed in the context of quantum mechanics, but neither of them can provide a clear interaction mechanism. This article formulates a new mechanism by proposing a model of complex structure of the quark, which can be outlined as follows: (1) The quark (as well as the electron, etc.) is a kind of complex structure composed of a fundamental particle (fundamental matter with mass and electricity) and a fundamental volume field (fundamental matter with flavor and color) which exists in the form of a limited volume; the fundamental particle lies in the center of the fundamental volume field and forms the ``nucleus'' of the quark. (2) Like the static electric force, the color field force between quarks has a classical form: it is proportional to the square of the color quantity carried by each color field, and inversely proportional to the area of the cross section of the overlapping color fields along the force direction; it has the properties of overlap, saturation, non-centrality, and constancy. (3) Any volume field undergoes deformation when interacting with another volume field, and the deformation force follows Hooke's law. (4) The phenomena of ``asymptotic freedom'' and ``quark confinement'' are the result of the color field force and the deformation force.
Experiments beyond the standard model
International Nuclear Information System (INIS)
Perl, M.L.
1984-09-01
This paper is based upon lectures in which I have described and explored the ways in which experimenters can try to find answers, or at least clues toward answers, to some of the fundamental questions of elementary particle physics. All of these experimental techniques and directions have been discussed fully in other papers, for example: searches for heavy charged leptons, tests of quantum chromodynamics, searches for Higgs particles, searches for particles predicted by supersymmetric theories, searches for particles predicted by technicolor theories, searches for proton decay, searches for neutrino oscillations, monopole searches, studies of low transfer momentum hadron physics at very high energies, and elementary particle studies using cosmic rays. Each of these subjects requires several lectures by itself to do justice to the large amount of experimental work and theoretical thought which has been devoted to them. My approach in these tutorial lectures is to describe general ways to experiment beyond the standard model, using some of the topics listed to illustrate these general ways. Also, in these lectures I present some dreams and challenges about new techniques in experimental particle physics and accelerator technology; I call these Experimental Needs. 92 references
Clinical Complexity in Medicine: A Measurement Model of Task and Patient Complexity.
Islam, R; Weir, C; Del Fiol, G
2016-01-01
Complexity in medicine needs to be reduced to simple components in a way that is comprehensible to researchers and clinicians. Few studies in the current literature propose a measurement model that addresses both task and patient complexity in medicine. The objective of this paper is to develop an integrated approach to understand and measure clinical complexity by incorporating both task and patient complexity components focusing on the infectious disease domain. The measurement model was adapted and modified for the healthcare domain. Three clinical infectious disease teams were observed, audio-recorded and transcribed. Each team included an infectious diseases expert, one infectious diseases fellow, one physician assistant and one pharmacy resident fellow. The transcripts were parsed and the authors independently coded complexity attributes. This baseline measurement model of clinical complexity was modified in an initial set of coding processes and further validated in a consensus-based iterative process that included several meetings and email discussions by three clinical experts from diverse backgrounds from the Department of Biomedical Informatics at the University of Utah. Inter-rater reliability was calculated using Cohen's kappa. The proposed clinical complexity model consists of two separate components. The first is a clinical task complexity model with 13 clinical complexity-contributing factors and 7 dimensions. The second is the patient complexity model with 11 complexity-contributing factors and 5 dimensions. The measurement model for complexity encompassing both task and patient complexity will be a valuable resource for future researchers and industry to measure and understand complexity in healthcare.
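The inter-rater reliability statistic used in the study above can be computed as follows; the two label sequences are made up purely for illustration.

```python
def cohens_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two raters
    coding the same sequence of items."""
    n = len(rater1)
    assert n == len(rater2) and n > 0
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    labels = set(rater1) | set(rater2)
    expected = sum((rater1.count(l) / n) * (rater2.count(l) / n)
                   for l in labels)
    return (observed - expected) / (1 - expected)

# two hypothetical coders labelling four transcript segments
k = cohens_kappa(["task", "task", "patient", "patient"],
                 ["task", "task", "patient", "task"])
```

Here observed agreement is 0.75 and chance agreement is 0.5, giving kappa = 0.5, conventionally read as moderate agreement.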
On sampling and modeling complex systems
International Nuclear Information System (INIS)
Marsili, Matteo; Mastromatteo, Iacopo; Roudi, Yasser
2013-01-01
The study of complex systems is limited by the fact that only a few variables are accessible for modeling and sampling, which are not necessarily the most relevant ones to explain the system behavior. In addition, empirical data typically undersample the space of possible states. We study a generic framework where a complex system is seen as a system of many interacting degrees of freedom, which are known only in part, that optimize a given function. We show that the underlying distribution with respect to the known variables has the Boltzmann form, with a temperature that depends on the number of unknown variables. In particular, when the influence of the unknown degrees of freedom on the known variables is not too irregular, the temperature decreases as the number of variables increases. This suggests that models can be predictable only when the number of relevant variables is less than a critical threshold. Concerning sampling, we argue that the information that a sample contains on the behavior of the system is quantified by the entropy of the frequency with which different states occur. This allows us to characterize the properties of maximally informative samples: within a simple approximation, the most informative frequency size distributions have power-law behavior and Zipf’s law emerges at the crossover between the undersampled regime and the regime where the sample contains enough statistics to make inferences on the behavior of the system. These ideas are illustrated in some applications, showing that they can be used to identify relevant variables or to select the most informative representations of data, e.g. in data clustering. (paper)
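The sample-information idea above can be sketched as the entropy of the empirical state frequencies and, for the maximally-informative-sample argument, the entropy of the frequency of frequencies; the toy samples are invented.

```python
from collections import Counter
from math import log

def state_entropy(sample):
    """Shannon entropy (nats) of the empirical frequencies of observed states."""
    n = len(sample)
    return -sum((c / n) * log(c / n) for c in Counter(sample).values())

def relevance(sample):
    """Entropy of the frequency with which different frequencies occur,
    one way to quantify how informative an undersampled data set is."""
    n = len(sample)
    freq_of_freq = Counter(Counter(sample).values())  # m_k: states seen k times
    return -sum((k * m / n) * log(k * m / n)
                for k, m in freq_of_freq.items())
```

A sample in which every state appears once maximizes state entropy but carries no frequency structure; samples near the Zipf crossover balance the two.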
Mathematical modelling of complex contagion on clustered networks
O'sullivan, David J.; O'Keeffe, Gary; Fennell, Peter; Gleeson, James
2015-09-01
The spreading of behavior, such as the adoption of a new innovation, is influenced by the structure of social networks that interconnect the population. In the experiments of Centola (Science, 2010), adoption of new behavior was shown to spread further and faster across clustered-lattice networks than across corresponding random networks. This implies that the “complex contagion” effects of social reinforcement are important in such diffusion, in contrast to “simple” contagion models of disease-spread which predict that epidemics would grow more efficiently on random networks than on clustered networks. To accurately model complex contagion on clustered networks remains a challenge because the usual assumptions (e.g. of mean-field theory) regarding tree-like networks are invalidated by the presence of triangles in the network; the triangles are, however, crucial to the social reinforcement mechanism, which posits an increased probability of a person adopting behavior that has been adopted by two or more neighbors. In this paper we modify the analytical approach that was introduced by Hebert-Dufresne et al. (Phys. Rev. E, 2010), to study disease-spread on clustered networks. We show how the approximation method can be adapted to a complex contagion model, and confirm the accuracy of the method with numerical simulations. The analytical results of the model enable us to quantify the level of social reinforcement that is required to observe—as in Centola’s experiments—faster diffusion on clustered topologies than on random networks.
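The social-reinforcement mechanism described above can be illustrated with a minimal deterministic threshold model (a node adopts once at least two neighbours have adopted); the two small graphs below are invented to contrast a clustered patch with a tree-like path.

```python
def spread(adj, seeds, threshold=2):
    """Complex contagion: a node adopts once at least `threshold` of its
    neighbours have adopted; iterate the update to a fixed point."""
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for node, nbrs in adj.items():
            if node not in adopted and \
               sum(nb in adopted for nb in nbrs) >= threshold:
                adopted.add(node)
                changed = True
    return adopted

# a clustered patch (a chain of triangles) versus a tree-like path
clustered = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3, 4],
             3: [1, 2, 4], 4: [2, 3]}
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
```

Seeding nodes 0 and 1 saturates the clustered graph (triangles supply the second reinforcing neighbour) but stalls immediately on the path, which is the qualitative point of Centola's experiments.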
Befrui, Bizhan A.
1995-01-01
This viewgraph presentation discusses the following: STAR-CD computational features; STAR-CD turbulence models; common features of industrial complex flows; industry-specific CFD development requirements; applications and experiences of industrial complex flows, including flow in rotating disc cavities, diffusion hole film cooling, internal blade cooling, and external car aerodynamics; and conclusions on turbulence modeling needs.
Modeling the Structure and Complexity of Engineering Routine Design Problems
Jauregui Becker, Juan Manuel; Wits, Wessel Willems; van Houten, Frederikus J.A.M.
2011-01-01
This paper proposes a model to structure routine design problems as well as a model of their design complexity. The idea is that a proper model of the structure of such problems enables understanding their complexity and, likewise, a proper understanding of their complexity enables the development
Complex licences: a decade of experience with internal inspections
International Nuclear Information System (INIS)
Boersma, Hielke Freerk; Bunskoeke, Erik J.
2008-01-01
Full text: In 2008 the University of Groningen reached ten years of experience with the system of complex broad licences. This system was introduced on a larger scale in the Netherlands in the last decade of the twentieth century. Its main characteristics are an internal radiation protection organization and a system of internal permits or licences for all applications of ionizing radiation, along with periodical inspections. Since 1998/9 we have conducted yearly inspections of all users of ionizing radiation within the University of Groningen. In our presentation we will discuss the general scheme for these inspections as well as the development of their results over the past decade. Following a period of habituation, the last few years show an adequate and continuous level of radiation protection. It is also concluded that continuation of the inspection projects is a prerequisite for preserving this situation. Combined with the inspection project of 2007, an informal study of 'customer' satisfaction with respect to various aspects of the radiation protection organization was performed. Our data were collected using a questionnaire filled out by local radiation safety officers. In this contribution detailed results of this investigation will be presented. Preliminary results show that the overall appreciation of the radiation protection organization is qualified as 'good'. (author)
Employers' experience of employees with cancer: trajectories of complex communication.
Tiedtke, C M; Dierckx de Casterlé, B; Frings-Dresen, M H W; De Boer, A G E M; Greidanus, M A; Tamminga, S J; De Rijk, A E
2017-10-01
Remaining in paid work is of great importance for cancer survivors, and employers play a crucial role in achieving this. Return to work (RTW) is best seen as a process. This study aims to provide insight into (1) Dutch employers' experiences with RTW of employees with cancer and (2) the employers' needs for support regarding this process. Thirty employer representatives of medium and large for-profit and non-profit organizations were interviewed to investigate their experiences and needs in relation to employees with cancer. A Grounded Theory approach was used. We revealed a trajectory of complex communication and decision-making during different stages, from the moment the employee disclosed that they had been diagnosed to the period after RTW, permanent disability, or the employee's passing away. Employers found this process demanding due to various dilemmas. Dealing with an unfavorable diagnosis and balancing both the employer's and the employee's interests were found to be challenging. Two types of approach to support RTW of employees with cancer were distinguished: (1) a business-oriented approach and (2) a care-oriented approach. Differences in approach were related to differences in organizational structure and employer and employee characteristics. Employers expressed a need for communication skills, information, and decision-making skills to support employees with cancer. The employers interviewed stated that dealing with an employee with cancer is demanding and that the extensive Dutch legislation on RTW did not offer all the support needed. We recommend providing them with easily accessible information on communication and leadership training to better support employees with cancer. • Supporting employers by training communication and decision-making skills and providing information on cancer will contribute to improving RTW support for employees with cancer. • Knowing that the employer will usually be empathic when an employee reveals that they have
Modeling the Experience of Emotion
Broekens, Joost
2009-01-01
Affective computing has proven to be a viable field of research, comprising a large number of multidisciplinary researchers whose work is widely published. The majority of this work consists of computational models of emotion recognition, computational modeling of causal factors of emotion, and emotion expression through rendered and robotic faces. A smaller part is concerned with modeling the effects of emotion, formal modeling of cognitive appraisal theory, and models of emergent...
Vinnakota, Kalyan C; Beard, Daniel A; Dash, Ranjan K
2009-01-01
Identification of a complex biochemical system model requires appropriate experimental data. Models constructed on the basis of data from the literature often contain parameters that are not identifiable with high sensitivity and therefore require additional experimental data to identify those parameters. Here we report the application of a local sensitivity analysis to design experiments that will improve the identifiability of previously unidentifiable model parameters in a model of mitochondrial oxidative phosphorylation and the tricarboxylic acid cycle. Experiments were designed based on measurable biochemical reactants in a dilute suspension of purified cardiac mitochondria with experimentally feasible perturbations to this system. Experimental perturbations and variables yielding the largest number of parameters above a 5% sensitivity level are presented and discussed.
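A minimal version of such a local sensitivity analysis can be sketched with central finite differences; the rate law and parameter values below are stand-ins for illustration, not the mitochondrial model of the paper.

```python
def local_sensitivities(model, params, rel_step=1e-4):
    """Normalized local sensitivities s_i = (p_i / y) * dy/dp_i,
    estimated with central finite differences at the nominal point."""
    y0 = model(params)
    sens = []
    for i, p in enumerate(params):
        h = rel_step * (abs(p) or rel_step)
        up, dn = list(params), list(params)
        up[i], dn[i] = p + h, p - h
        dydp = (model(up) - model(dn)) / (2.0 * h)
        sens.append(p * dydp / y0)
    return sens

# toy saturating rate law y = k * s / (km + s), evaluated at s = 2
rate = lambda p: p[0] * 2.0 / (p[1] + 2.0)
sens = local_sensitivities(rate, [1.0, 0.5])
```

Parameters whose normalized sensitivity stays below a chosen cutoff (5% in the study) under every feasible perturbation are flagged as needing new experiments.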
Modeling competitive substitution in a polyelectrolyte complex
International Nuclear Information System (INIS)
Peng, B.; Muthukumar, M.
2015-01-01
We have simulated the invasion of a polyelectrolyte complex made of a polycation chain and a polyanion chain, by another longer polyanion chain, using the coarse-grained united atom model for the chains and the Langevin dynamics methodology. Our simulations reveal many intricate details of the substitution reaction in terms of conformational changes of the chains and competition between the invading chain and the chain being displaced for the common complementary chain. We show that the invading chain is required to be sufficiently longer than the chain being displaced for effecting the substitution. Yet, making the invading chain longer than a certain threshold value does not reduce the substitution time much further. While most of the simulations were carried out in salt-free conditions, we show that the presence of salt facilitates the substitution reaction and reduces the substitution time. Analysis of our data shows that the dominant driving force for the substitution process involving polyelectrolytes lies in the release of counterions during the substitution.
Zerkle, Ronald D.; Prakash, Chander
1995-01-01
This viewgraph presentation summarizes some CFD experience at GE Aircraft Engines for flows in the primary gaspath of a gas turbine engine and in turbine blade cooling passages. It is concluded that application of the standard k-epsilon turbulence model with wall functions is not adequate for accurate CFD simulation of aerodynamic performance and heat transfer in the primary gas path of a gas turbine engine. New models are required in the near-wall region which include more physics than wall functions. The two-layer modeling approach appears attractive because of its modest computational complexity. In addition, improved CFD simulation of film cooling and turbine blade internal cooling passages will require anisotropic turbulence models. New turbulence models must be practical in order to have a significant impact on the engine design process. A coordinated turbulence modeling effort between NASA centers would be beneficial to the gas turbine industry.
Modelling of information processes management of educational complex
Directory of Open Access Journals (Sweden)
Оксана Николаевна Ромашкова
2014-12-01
Full Text Available This work concerns an information model of an educational complex which includes several schools. A classification of the educational complexes formed in Moscow is given. The existing organizational structure of the educational complex is considered, and a matrix management structure is suggested. The basic management information processes of the educational complex are conceptualized.
Sandpile model for relaxation in complex systems
International Nuclear Information System (INIS)
Vazquez, A.; Sotolongo-Costa, O.; Brouers, F.
1997-10-01
The relaxation in complex systems is, in general, nonexponential. After an initial rapid decay the system relaxes slowly, following a long-time tail. In the present paper a sandpile model of relaxation in complex systems is analysed. Complexity is introduced by a process of avalanches in the Bethe lattice and a feedback mechanism which leads to slower decay with increasing time. In this way, some features of relaxation in complex systems, namely long-time-tail relaxation, aging, and a fractal distribution of characteristic times, are obtained by simple computer simulations. (author)
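A one-dimensional caricature of the avalanche process can be written as follows; it omits the Bethe-lattice geometry and the feedback mechanism of the paper and serves only to show how toppling propagates.

```python
def relax(grid):
    """Topple every site holding >= 2 grains, sending one grain to each
    neighbour; grains falling off either end leave the system.
    Returns the avalanche size (total number of topplings)."""
    topplings = 0
    while True:
        unstable = [i for i, h in enumerate(grid) if h >= 2]
        if not unstable:
            return topplings
        for i in unstable:
            grid[i] -= 2
            if i > 0:
                grid[i - 1] += 1
            if i < len(grid) - 1:
                grid[i + 1] += 1
            topplings += 1

# a single overloaded site relaxes by one toppling
g = [0, 3, 0]
size = relax(g)
```

Driving such a pile slowly (adding one grain at a time and relaxing) yields the broad distribution of avalanche sizes characteristic of self-organized criticality.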
Webster, Fiona; Christian, Jennifer; Mansfield, Elizabeth; Bhattacharyya, Onil; Hawker, Gillian; Levinson, Wendy; Naglie, Gary; Pham, Thuy-Nga; Rose, Louise; Schull, Michael; Sinha, Samir; Stergiopoulos, Vicky; Upshur, Ross; Wilson, Lynn
2015-09-08
The perspectives, needs and preferences of individuals with complex health and social needs can be overlooked in the design of healthcare interventions. This study was designed to provide new insights on patient perspectives drawing from the qualitative evaluation of 5 complex healthcare interventions. Patients and their caregivers were recruited from 5 interventions based in primary, hospital and community care in Ontario, Canada. We included 62 interviews from 44 patients and 18 non-clinical caregivers. Our team analysed the transcripts from 5 distinct projects. This approach to qualitative meta-evaluation identifies common issues described by a diverse group of patients, therefore providing potential insights into systems issues. This study is a secondary analysis of qualitative data; therefore, no outcome measures were identified. We identified 5 broad themes that capture the patients' experience and highlight issues that might not be adequately addressed in complex interventions. In our study, we found that: (1) the emergency department is the unavoidable point of care; (2) patients and caregivers are part of complex and variable family systems; (3) non-medical issues mediate patients' experiences of health and healthcare delivery; (4) the unanticipated consequences of complex healthcare interventions are often the most valuable; and (5) patient experiences are shaped by the healthcare discourses on medically complex patients. Our findings suggest that key assumptions about patients that inform intervention design need to be made explicit in order to build capacity to better understand and support patients with multiple chronic diseases. Across many health systems internationally, multiple models are being implemented simultaneously that may have shared features and target similar patients, and a qualitative meta-evaluation approach, thus offers an opportunity for cumulative learning at a system level in addition to informing intervention design and
An Experiment on Isomerism in Metal-Amino Acid Complexes.
Harrison, R. Graeme; Nolan, Kevin B.
1982-01-01
Background information, laboratory procedures, and discussion of results are provided for syntheses of cobalt (III) complexes, I-III, illustrating three possible bonding modes of glycine to a metal ion (the complex cations II and III being linkage/geometric isomers). Includes spectrophotometric and potentiometric methods to distinguish among the…
Modeling Complex Chemical Systems: Problems and Solutions
van Dijk, Jan
2016-09-01
Non-equilibrium plasmas in complex gas mixtures are at the heart of numerous contemporary technologies. They typically contain dozens to hundreds of species, involved in hundreds to thousands of reactions. Chemists and physicists have always been interested in what are now called chemical reduction techniques (CRT's). The idea of such CRT's is that they reduce the number of species that need to be considered explicitly without compromising the validity of the model. This is usually achieved on the basis of an analysis of the reaction time scales of the system under study, which identifies species that are in partial equilibrium after a given time span. The first such CRT that has been widely used in plasma physics was developed in the 1960's and resulted in the concept of effective ionization and recombination rates. It was later generalized to systems in which multiple levels are affected by transport. In recent years there has been a renewed interest in tools for chemical reduction and reaction pathway analysis. An example of the latter is the PumpKin tool. Another trend is that techniques that have previously been developed in other fields of science are adapted as to be able to handle the plasma state of matter. Examples are the Intrinsic Low-Dimensional Manifold (ILDM) method and its derivatives, which originate from combustion engineering, and the general-purpose Principal Component Analysis (PCA) technique. In this contribution we will provide an overview of the most common reduction techniques, then critically assess the pros and cons of the methods that have gained most popularity in recent years. Examples will be provided for plasmas in argon and carbon dioxide.
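The classic effective-rate reduction mentioned above (collapsing an excited level in partial equilibrium into an effective ionization coefficient) can be sketched as follows; the symbols and rate values are illustrative assumptions, not taken from any specific plasma model.

```python
def effective_ionization(k_ge, k_ei, a_eg):
    """Two-step ionization (ground -> excited -> ion) with the excited
    level in partial equilibrium (quasi-steady state):
        dn_e/dt = k_ge*n_g - (k_ei + a_eg)*n_e ~ 0
     => n_e    = k_ge*n_g / (k_ei + a_eg)
     => k_eff  = k_ge * k_ei / (k_ei + a_eg)
    so the excited level no longer needs to be tracked explicitly."""
    return k_ge * k_ei / (k_ei + a_eg)

# illustrative rate coefficients (arbitrary units)
k_eff = effective_ionization(1.0, 50.0, 200.0)
```

In the limit of no radiative decay (a_eg = 0) every excitation leads to ionization and k_eff reduces to k_ge, as expected.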
Modeling the Chemical Complexity in Titan's Atmosphere
Vuitton, Veronique; Yelle, Roger; Klippenstein, Stephen J.; Horst, Sarah; Lavvas, Panayotis
2018-06-01
Titan's atmospheric chemistry is extremely complicated because of the multiplicity of chemical as well as physical processes involved. Chemical processes begin with the dissociation and ionization of the most abundant species, N2 and CH4, by a variety of energy sources (i.e. solar UV and X-ray photons and suprathermal electrons), followed by reactions involving radicals as well as positive and negative ions, all possibly in excited electronic and vibrational states. Heterogeneous chemistry at the surface of the aerosols could also play a significant role. The efficiency and outcome of these reactions depend strongly on the physical characteristics of the atmosphere, namely pressure and temperature, ranging from 1.5×10³ to 10⁻¹⁰ mbar and from 70 to 200 K, respectively. Moreover, the distribution of the species is affected by molecular diffusion and winds, as well as by escape from the top of the atmosphere and condensation in the lower stratosphere. Photochemical and microphysical models are the keystones of our understanding of Titan's atmospheric chemistry. Their main objective is to compute the distribution and nature of minor chemical species (typically containing up to 6 carbon atoms) and haze particles, respectively. Density profiles are compared to the available observations, allowing us to identify important processes and to highlight those that remain to be constrained in the laboratory, experimentally and/or theoretically. We argue that positive ion chemistry is at the origin of complex organic molecules, such as benzene, ammonia and hydrogen isocyanide, while neutral-neutral radiative association reactions are a significant source of alkanes. We find that negatively charged macromolecules (m/z ~100) attract the abundant positive ions, which ultimately leads to the formation of the aerosols. We also discuss the possibility that an incoming flux of oxygen from Enceladus, another satellite of Saturn, is responsible for the presence of oxygen-bearing species in Titan's reducing atmosphere.
Modelling the complex dynamics of vegetation, livestock and rainfall ...
African Journals Online (AJOL)
In this paper, we present mathematical models that incorporate ideas from complex systems theory to integrate several strands of rangeland theory in a hierarchical framework. ... Keywords: catastrophe theory; complexity theory; disequilibrium; hysteresis; moving attractors
Analysis of Operators Comments on the PSF Questionnaire of the Task Complexity Experiment 2003/2004
Energy Technology Data Exchange (ETDEWEB)
Torralba, B.; Martinez-Arias, R.
2007-07-01
Human Reliability Analysis (HRA) methods usually take into account the effect of Performance Shaping Factors (PSFs). Therefore, the adequate treatment of PSFs in the HRA of Probabilistic Safety Assessment (PSA) models is of crucial importance. There is an important need for collecting PSF data based on simulator experiments. During the task complexity experiment 2003-2004, carried out in the BWR simulator of the Halden Man-Machine Laboratory (HAMMLAB), PSF data were collected by means of a PSF Questionnaire. Seven crews (each composed of a shift supervisor, a reactor operator and a turbine operator) from Swedish Nuclear Power Plants participated in the experiment. The PSF Questionnaire collected data on the factors: procedures, training and experience, indications, controls, team management, team communication, individual work practice, available time for the tasks, number of tasks or information load, masking, and seriousness. The main statistically significant results are presented in 'Performance Shaping Factors data collection and analysis of the task complexity experiment 2003/2004' (HWR-810). This report describes the analysis of the comments about PSFs which were provided by operators on the PSF Questionnaire. The comments provided for each PSF on the scenarios have been summarised using a content analysis technique. (Author)
Analysis of Operators Comments on the PSF Questionnaire of the Task Complexity Experiment 2003/2004
International Nuclear Information System (INIS)
Torralba, B.; Martinez-Arias, R.
2007-01-01
Human Reliability Analysis (HRA) methods usually take into account the effect of Performance Shaping Factors (PSFs). Therefore, the adequate treatment of PSFs in the HRA of Probabilistic Safety Assessment (PSA) models is of crucial importance. There is an important need for collecting PSF data based on simulator experiments. During the task complexity experiment 2003-2004, carried out in the BWR simulator of the Halden Man-Machine Laboratory (HAMMLAB), PSF data were collected by means of a PSF Questionnaire. Seven crews (each composed of a shift supervisor, a reactor operator and a turbine operator) from Swedish Nuclear Power Plants participated in the experiment. The PSF Questionnaire collected data on the factors: procedures, training and experience, indications, controls, team management, team communication, individual work practice, available time for the tasks, number of tasks or information load, masking, and seriousness. The main statistically significant results are presented in 'Performance Shaping Factors data collection and analysis of the task complexity experiment 2003/2004' (HWR-810). This report describes the analysis of the comments about PSFs which were provided by operators on the PSF Questionnaire. The comments provided for each PSF on the scenarios have been summarised using a content analysis technique. (Author)
Generative complexity of Gray-Scott model
Adamatzky, Andrew
2018-03-01
In the Gray-Scott reaction-diffusion system, one reactant is constantly fed into the system; another reactant is reproduced by consuming the supplied reactant and is also converted to an inert product. The rate at which one reactant is fed into the system and the rate at which the other is removed from it determine the configurations of the concentration profiles: stripes, spots, waves. We calculate the generative complexity (a morphological complexity of concentration profiles grown from a point-wise perturbation of the medium) of the Gray-Scott system for a range of the feeding and removal rates. The morphological complexity is evaluated using Shannon entropy, Simpson diversity, an approximation of Lempel-Ziv complexity, and expressivity (Shannon entropy divided by space-filling). We analyse the behaviour of the systems with the highest values of generative morphological complexity and show that the Gray-Scott systems expressing the highest levels of complexity are composed of wave-fragments (similar to wave-fragments in sub-excitable media) and travelling localisations (similar to quasi-dissipative solitons and gliders in Conway's Game of Life).
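As an illustrative aside (not from the record above): the Gray-Scott dynamics and the Shannon-entropy measure it mentions can be sketched in a few lines. This is a minimal 1-D version with explicit Euler time-stepping; the feed/removal values, grid size, and binning are all hypothetical choices, not the paper's parameter sweep.

```python
import math

def gray_scott_1d(n=128, steps=2000, f=0.04, k=0.06, d_u=0.16, d_v=0.08, dt=1.0):
    """Evolve a 1-D Gray-Scott system from a point-wise perturbation.

    u is the constantly fed reactant, v the autocatalytic one:
        du/dt = d_u*lap(u) - u*v^2 + f*(1 - u)
        dv/dt = d_v*lap(v) + u*v^2 - (f + k)*v
    """
    u, v = [1.0] * n, [0.0] * n
    for i in range(n // 2 - 2, n // 2 + 3):  # point-wise seed in the middle
        u[i], v[i] = 0.5, 0.25
    for _ in range(steps):
        lap = lambda a, i: a[(i - 1) % n] + a[(i + 1) % n] - 2 * a[i]
        u_new = [u[i] + dt * (d_u * lap(u, i) - u[i] * v[i] ** 2 + f * (1 - u[i]))
                 for i in range(n)]
        v_new = [v[i] + dt * (d_v * lap(v, i) + u[i] * v[i] ** 2 - (f + k) * v[i])
                 for i in range(n)]
        u, v = u_new, v_new
    return u, v

def shannon_entropy(profile, bins=8):
    """Shannon entropy (bits) of a concentration profile coarse-grained into bins."""
    lo, hi = min(profile), max(profile)
    if hi == lo:
        return 0.0
    counts = [0] * bins
    for x in profile:
        counts[min(int((x - lo) / (hi - lo) * bins), bins - 1)] += 1
    total = len(profile)
    return -sum(c / total * math.log2(c / total) for c in counts if c)
```

Sweeping (f, k) and mapping the entropy of the grown profile is, in miniature, the kind of "generative complexity" scan the abstract describes (the paper additionally uses Simpson diversity, Lempel-Ziv approximation, and expressivity).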
Modeling of microgravity combustion experiments
Buckmaster, John
1995-01-01
This program started in February 1991 and is designed to improve our understanding of basic combustion phenomena through the modeling of various configurations undergoing experimental study by others. Results through 1992 were reported at the second workshop. Work since that time has examined the following topics: flame balls; intrinsic and acoustic instabilities in multiphase mixtures; radiation effects in premixed combustion; and smouldering, both forward and reverse, as well as two-dimensional smoulder.
Spectroscopic studies of molybdenum complexes as models for nitrogenase
International Nuclear Information System (INIS)
Walker, T.P.
1981-05-01
Because biological nitrogen fixation requires Mo, there is an interest in inorganic Mo complexes which mimic the reactions of nitrogen-fixing enzymes. Two such complexes are the dimer Mo₂O₄(cysteine)₂²⁻ and trans-Mo(N₂)₂(dppe)₂ (dppe = 1,2-bis(diphenylphosphino)ethane). The ¹H and ¹³C NMR of solutions of Mo₂O₄(cys)₂²⁻ are described. It is shown that in aqueous solution the cysteine ligands assume at least three distinct configurations. A step-wise dissociation of the cysteine ligand is proposed to explain the data. The Extended X-ray Absorption Fine Structure (EXAFS) of trans-Mo(N₂)₂(dppe)₂ is described and compared to the EXAFS of MoH₄(dppe)₂. The spectra are fitted to amplitude and phase parameters developed at Bell Laboratories. On the basis of this analysis, one can determine (1) that the dinitrogen complex contains nitrogen and the hydride complex does not, and (2) the correct Mo-N distance. This is significant because the Mo in both complexes is coordinated by four P atoms which dominate the EXAFS. A similar sort of interference is present in nitrogenase due to S coordination of the Mo in the enzyme. This model experiment indicates that, given adequate signal-to-noise ratios, the presence or absence of dinitrogen coordination to Mo in the enzyme may be determined by EXAFS using existing data-analysis techniques. A new reaction between Mo₂O₄(cys)₂²⁻ and acetylene is described to the extent it is presently understood. A strong EPR signal is observed, suggesting the production of stable Mo(V) monomers. EXAFS studies support this suggestion. The Mo K-edge is described. The edge data suggest that Mo(VI) is also produced in the reaction. Ultraviolet spectra suggest that cysteine is released in the course of the reaction
Modeling the Propagation of Mobile Phone Virus under Complex Network
Yang, Wei; Wei, Xi-liang; Guo, Hao; An, Gang; Guo, Lei
2014-01-01
A mobile phone virus is a rogue program written to propagate from one phone to another, which can take control of a mobile device by exploiting its vulnerabilities. In this paper, the propagation of mobile phone viruses is modeled in order to understand how particular factors affect propagation and to design effective containment strategies to suppress such viruses. Two propagation models of mobile phone viruses under complex network topologies are proposed. One is intended to describe the propagation of user-tricking viruses, and the other describes the propagation of vulnerability-exploiting viruses. Based on traditional epidemic models, the characteristics of mobile phone viruses and the network topology structure are incorporated into our models. A detailed analysis of the propagation models is conducted. Through this analysis, the stable infection-free equilibrium point and the stability condition are derived. Finally, taking the network topology into account, numerical and simulation experiments are carried out. Results indicate that both models are correct and suitable for describing the spread of the two different mobile phone viruses, respectively. PMID:25133209
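As an illustrative aside (not the paper's models): the "traditional epidemic models" that the record builds on reduce, in the well-mixed case, to the classic SIR equations, whose infection-free equilibrium is stable when R0 = beta/gamma < 1. The sketch below is that baseline only; the contact rate, recovery rate, and time step are hypothetical, and the network topology the paper adds is deliberately omitted.

```python
def sir_step(s, i, r, beta, gamma, dt):
    """One explicit-Euler step of the classic SIR epidemic model:
        ds/dt = -beta*s*i,  di/dt = beta*s*i - gamma*i,  dr/dt = gamma*i
    """
    new_inf = beta * s * i * dt
    recovered = gamma * i * dt
    return s - new_inf, i + new_inf - recovered, r + recovered

def simulate_sir(beta, gamma, i0=1e-3, dt=0.1, steps=2000):
    """Return (final_s, final_i, final_r, peak_i) for a well-mixed population."""
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(steps):
        s, i, r = sir_step(s, i, r, beta, gamma, dt)
        peak = max(peak, i)
    return s, i, r, peak
```

With beta/gamma < 1 the infection dies out from any small seed (the infection-free equilibrium of the abstract); with beta/gamma > 1 an outbreak peaks and then burns out as susceptibles are depleted.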
The database for reaching experiments and models.
Directory of Open Access Journals (Sweden)
Ben Walker
Reaching is one of the central experimental paradigms in the field of motor control, and many computational models of reaching have been published. While most of these models try to explain subject data (such as movement kinematics, reaching performance, forces, etc.) from only a single experiment, distinct experiments often share experimental conditions and record similar kinematics. This suggests that reaching models could be applied to (and falsified by) multiple experiments. However, using multiple datasets is difficult because experimental data formats vary widely. Standardizing data formats promises to enable scientists to test model predictions against many experiments and to compare experimental results across labs. Here we report on the development of a new resource available to scientists: a database of reaching called the Database for Reaching Experiments And Models (DREAM). DREAM collects both experimental datasets and models and facilitates their comparison by standardizing formats. The DREAM project promises to be useful for experimentalists who want to understand how their data relates to models, for modelers who want to test their theories, and for educators who want to help students better understand reaching experiments, models, and data analysis.
HISTORICAL EXPERIENCE OF RUSSIA-KAZAKHSTAN GEO-ENERGY COMPLEX
Directory of Open Access Journals (Sweden)
Владимир Ильич Цай
2015-12-01
The article analyzes the perspective directions of cooperation between the Russian Federation and the Republic of Kazakhstan in the sphere of the fuel and energy complex. The authors give particular examples of the joint implementation of the adopted documents aimed at strengthening the two countries' cooperation in the exploration and production of oil and gas over the past decade. Particular attention is paid to Russia-Kazakhstan cooperation in the spheres of nuclear power, oil, and oil products. These areas are considered through the example of the largest enterprises of the fuel and energy complexes of Russia and Kazakhstan. One of the main components of the fuel and energy complex is the petroleum industry.
Evidence of complex contagion of information in social media: An experiment using Twitter bots.
Directory of Open Access Journals (Sweden)
Bjarke Mønsted
It has recently become possible to study the dynamics of information diffusion in techno-social systems at scale, due to the emergence of online platforms, such as Twitter, with millions of users. One question that systematically recurs is whether information spreads according to simple or complex dynamics: does each exposure to a piece of information have an independent probability of a user adopting it (simple contagion), or does this probability depend instead on the number of sources of exposure, increasing above some threshold (complex contagion)? Most studies to date are observational and, therefore, unable to disentangle the effects of confounding factors such as social reinforcement, homophily, limited attention, or network community structure. Here we describe a novel controlled experiment that we performed on Twitter using 'social bots' deployed to carry out coordinated attempts at spreading information. We propose two Bayesian statistical models describing simple and complex contagion dynamics, and test the competing hypotheses. We provide experimental evidence that the complex contagion model describes the observed information diffusion behavior more accurately than simple contagion. Future applications of our results include more effective defenses against malicious propaganda campaigns on social media, improved marketing and advertisement strategies, and design of effective network intervention techniques.
Evidence of complex contagion of information in social media: An experiment using Twitter bots.
Mønsted, Bjarke; Sapieżyński, Piotr; Ferrara, Emilio; Lehmann, Sune
2017-01-01
It has recently become possible to study the dynamics of information diffusion in techno-social systems at scale, due to the emergence of online platforms, such as Twitter, with millions of users. One question that systematically recurs is whether information spreads according to simple or complex dynamics: does each exposure to a piece of information have an independent probability of a user adopting it (simple contagion), or does this probability depend instead on the number of sources of exposure, increasing above some threshold (complex contagion)? Most studies to date are observational and, therefore, unable to disentangle the effects of confounding factors such as social reinforcement, homophily, limited attention, or network community structure. Here we describe a novel controlled experiment that we performed on Twitter using 'social bots' deployed to carry out coordinated attempts at spreading information. We propose two Bayesian statistical models describing simple and complex contagion dynamics, and test the competing hypotheses. We provide experimental evidence that the complex contagion model describes the observed information diffusion behavior more accurately than simple contagion. Future applications of our results include more effective defenses against malicious propaganda campaigns on social media, improved marketing and advertisement strategies, and design of effective network intervention techniques.
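As an illustrative aside (a toy, not the paper's Bayesian models): the simple-versus-complex distinction in the abstract can be made concrete with two adoption-probability curves. Under simple contagion each exposure converts independently; under complex contagion the per-exposure probability jumps once the number of distinct sources crosses a threshold. All probabilities and the threshold below are hypothetical.

```python
def p_adopt_simple(p, exposures):
    """Simple contagion: each exposure independently converts with probability p,
    so adoption probability saturates smoothly with exposure count."""
    return 1.0 - (1.0 - p) ** exposures

def p_adopt_complex(p_below, p_above, exposures, threshold=2):
    """Complex contagion (toy): the per-exposure probability jumps once the
    number of distinct exposing sources reaches a threshold."""
    p = p_above if exposures >= threshold else p_below
    return 1.0 - (1.0 - p) ** exposures
```

Under simple contagion the marginal effect of each extra exposure shrinks monotonically, whereas the complex curve shows a discontinuous boost at the threshold; distinguishing those two signatures in observed adoption data is, in essence, what the bots experiment tests.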
Modeling Complex Workflow in Molecular Diagnostics
Gomah, Mohamed E.; Turley, James P.; Lu, Huimin; Jones, Dan
2010-01-01
One of the hurdles to achieving personalized medicine has been implementing the laboratory processes for performing and reporting complex molecular tests. The rapidly changing test rosters and complex analysis platforms in molecular diagnostics have meant that many clinical laboratories still use labor-intensive manual processing and testing without the level of automation seen in high-volume chemistry and hematology testing. We provide here a discussion of design requirements and the results of implementation of a suite of lab management tools that incorporate the many elements required for use of molecular diagnostics in personalized medicine, particularly in cancer. These applications provide the functionality required for sample accessioning and tracking, material generation, and testing that are particular to the evolving needs of individualized molecular diagnostics. On implementation, the applications described here resulted in improvements in the turn-around time for reporting of more complex molecular test sets, and significant changes in the workflow. Therefore, careful mapping of workflow can permit design of software applications that simplify even the complex demands of specialized molecular testing. By incorporating design features for order review, software tools can permit a more personalized approach to sample handling and test selection without compromising efficiency. PMID:20007844
Complex systems modeling by cellular automata
Kroc, J.; Sloot, P.M.A.; Rabuñal Dopico, J.R.; Dorado de la Calle, J.; Pazos Sierra, A.
2009-01-01
In recent years, the notion of complex systems proved to be a very useful concept to define, describe, and study various natural phenomena observed in a vast number of scientific disciplines. Examples of scientific disciplines that highly benefit from this concept range from physics, mathematics,
Modeling pitch perception of complex tones
Houtsma, A.J.M.
1986-01-01
When one listens to a series of harmonic complex tones that have no acoustic energy at their fundamental frequencies, one usually still hears a melody that corresponds to those missing fundamentals. Since it has become evident some two decades ago that neither Helmholtz's difference tone theory nor
Experience economy meets business model design
DEFF Research Database (Denmark)
Gudiksen, Sune Klok; Smed, Søren Graakjær; Poulsen, Søren Bolvig
2012-01-01
Through the last decade the experience economy has found solid ground and manifested itself as a parameter where businesses and organizations can differentiate themselves from competitors. The fundamental premise is the one found in Pine & Gilmore's model from 1999 of 'the progression of economic value', where...... produced, designed or staged experience that gains the most profit or creates return on investment. It becomes more obvious that other parameters can in the future be a vital part of the experience economy, and one of these is business model innovation. Business model innovation is about continuous
Applicability of surface complexation modelling in TVO's studies on sorption of radionuclides
International Nuclear Information System (INIS)
Carlsson, T.
1994-03-01
The report focuses on the possibility of applying surface complexation theories to the conditions at a potential repository site in Finland and of doing proper experimental work in order to determine the necessary constants for the models. The report provides background information on: (1) what type of experiments should be carried out in order to produce data for surface complexation modelling of sorption phenomena under potential Finnish repository conditions, and (2) how to design and perform such experiments properly, in order to gather data, develop models, or both. The report does not describe in detail how proper surface complexation experiments or modelling should be carried out. The work contains several examples of information that may be valuable in both modelling and experimental work. (51 refs., 6 figs., 4 tabs.)
Experience of web-complex development of NPP thermophysical optimization
International Nuclear Information System (INIS)
Nikolaev, M.A.
2014-01-01
The current state of development of a computational web complex (CWC) for thermophysical optimization of nuclear power plants is described. The main databases of the CWC are realized on the MySQL platform. The CWC information architecture, its functionality, its optimization algorithms and the CWC user interface are under consideration.
International Nuclear Information System (INIS)
Astakhova, N.V.; Beskrovnyj, A.I.; Bogdzel', A.A.; Butorin, P.E.; Vasilovskij, S.G.; Gundorin, N.A.; Zlokazov, V.B.; Kutuzov, S.A.; Salamatin, I.M.; Shvetsov, V.N.
2003-01-01
An instrumental software complex for automation of spectrometry (AS) has been developed that enables the prompt realization of experiment automation systems for spectrometers which use data bufferisation. In the development, new methods of programming and building automation systems, together with novel net technologies, were employed. It is suggested that programs to schedule and conduct experiments should be based on a parametric model of the spectrometer, an approach that will make it possible to write programs suitable for any FLNP (Frank Laboratory of Neutron Physics) spectrometer and experimental technique applied, and to use different hardware interfaces for introducing the spectrometric data into the data acquisition system. The article describes the possibilities provided to the user in the fields of scheduling and control of the experiment, data viewing, and control of the spectrometer parameters. The possibility of presenting the current spectrometer state, the programs and the experimental data on the Internet in the form of dynamically formed protocols and graphs, as well as of controlling the experiment via the Internet, has been realized. To use the means of the Internet on the client side, applied programs are not needed. It suffices to know how to use the two programs to carry out experiments in the automated mode. The package is designed for experiments in condensed matter and nuclear physics and is ready for use. (author)
Modeling a High Explosive Cylinder Experiment
Zocher, Marvin A.
2017-06-01
Cylindrical assemblies constructed from high explosives encased in an inert confining material are often used in experiments aimed at calibrating and validating continuum-level models for the so-called equation of state (the constitutive model for the spherical part of the Cauchy stress tensor). Such is the case in the work to be discussed here. In particular, work will be described involving the modeling of a series of experiments involving PBX-9501 encased in a copper cylinder. The objective of the work is to test and perhaps refine a set of phenomenological parameters for the Wescott-Stewart-Davis reactive burn model. The focus of this talk will be on modeling the experiments, which turned out to be non-trivial. The modeling is conducted using ALE methodology.
Surface complexation modelling applied to the sorption of nickel on silica
International Nuclear Information System (INIS)
Olin, M.
1995-10-01
The modelling, based on a mechanistic approach, of a sorption experiment is presented in this report. The system chosen for the experiments (nickel + silica) is modelled using literature values for some parameters, the remainder being fitted to existing experimental results. All calculations are performed with HYDRAQL, a code designed especially for surface complexation modelling. Almost all the calculations use the Triple-Layer Model (TLM) approach, which proved sufficiently flexible for the silica system. The report includes a short description of mechanistic sorption models, input data, experimental results and modelling results (mostly graphical presentations). (13 refs., 40 figs., 4 tabs.)
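As an illustrative aside (far simpler than the Triple-Layer Model or HYDRAQL): at its core, surface complexation modelling couples a mass-action law for a surface reaction to mass balances. The sketch below solves a hypothetical one-site reaction >S + M <-> >SM by bisection; the equilibrium constant and total concentrations are made-up values, and real TLM calculations additionally solve for electrostatic potentials.

```python
def sorbed_fraction(k_eq, site_total, metal_total):
    """Fraction of metal bound for a single surface reaction  >S + M <-> >SM
    with conditional constant  k_eq = [>SM] / ([>S][M]).

    Solves the metal mass balance
        m + k_eq*m*site_total/(1 + k_eq*m) = metal_total
    for the free-metal concentration m by bisection (the residual is
    monotone increasing in m, so bisection on [0, metal_total] converges).
    """
    def residual(m):
        bound = k_eq * m * site_total / (1.0 + k_eq * m)
        return m + bound - metal_total

    lo, hi = 0.0, metal_total
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    m_free = 0.5 * (lo + hi)
    return (metal_total - m_free) / metal_total
```

Fitting k_eq (and, in full models, site densities and capacitances) to batch sorption data is the kind of parameter estimation the report describes for the nickel + silica system.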
The utility of Earth system Models of Intermediate Complexity
Weber, S.L.
2010-01-01
Intermediate-complexity models are models which describe the dynamics of the atmosphere and/or ocean in less detail than conventional General Circulation Models (GCMs). At the same time, they go beyond the approach taken by atmospheric Energy Balance Models (EBMs) or ocean box models by
Advances in dynamic network modeling in complex transportation systems
Ukkusuri, Satish V
2013-01-01
This book focuses on the latest in dynamic network modeling, including route guidance and traffic control in transportation systems and other complex infrastructure networks. Covers dynamic traffic assignment, flow modeling, mobile sensor deployment and more.
Directory of Open Access Journals (Sweden)
Xiang Ding
2014-01-01
Project delivery planning is a key stage used by the project owner (or project investor) for organizing design, construction, and other operations in a construction project. The main task in this stage is to select an appropriate project delivery method (PDM). In order to analyze the different factors affecting PDM selection, this paper establishes a multiagent model, mainly to show how project complexity, governance strength, and market environment affect the project owner's decision on the PDM. Experimental results show that project owners usually choose the Design-Build method when the project is very complex, within a certain range. Besides, this paper points out that the Design-Build method will be the preferred choice when the potential contractors develop quickly. This paper provides owners with methods and suggestions by showing how these factors affect PDM selection, and it may improve project performance.
Narrowing the gap between network models and real complex systems
Viamontes Esquivel, Alcides
2014-01-01
Simple network models that focus only on graph topology or, at best, basic interactions are often insufficient to capture all the aspects of a dynamic complex system. In this thesis, I explore those limitations, and some concrete methods of resolving them. I argue that, in order to succeed at interpreting and influencing complex systems, we need to take into account slightly more complex parts, interactions and information flows in our models. This thesis supports that affirmation with five a...
Graduate Social Work Education and Cognitive Complexity: Does Prior Experience Really Matter?
Simmons, Chris
2014-01-01
This study examined the extent to which age, education, and practice experience among social work graduate students (N = 184) predicted cognitive complexity, an essential aspect of critical thinking. In the regression analysis, education accounted for more of the variance associated with cognitive complexity than age and practice experience. When…
Uncertainty and Complexity in Mathematical Modeling
Cannon, Susan O.; Sanders, Mark
2017-01-01
Modeling is an effective tool to help students access mathematical concepts. Finding a math teacher who has not drawn a fraction bar or pie chart on the board would be difficult, as would finding students who have not been asked to draw models and represent numbers in different ways. In this article, the authors will discuss: (1) the properties of…
Information, complexity and efficiency: The automobile model
Energy Technology Data Exchange (ETDEWEB)
Allenby, B. (Lucent Technologies, United States; Lawrence Livermore National Lab., CA, United States)
1996-08-08
The new, rapidly evolving field of industrial ecology - the objective, multidisciplinary study of industrial and economic systems and their linkages with fundamental natural systems - provides strong ground for believing that a more environmentally and economically efficient economy will be more information intensive and complex. Information and intellectual capital will be substituted for the more traditional inputs of materials and energy in producing a desirable, yet sustainable, quality of life. While at this point this remains a strong hypothesis, the evolution of the automobile industry can be used to illustrate how such substitution may, in fact, already be occurring in an environmentally and economically critical sector.
Rizvi, Masood Ahmad; Syed, Raashid Maqsood; Khan, Badruddin
2011-01-01
A titration curve with multiple inflection points results when a mixture of two or more reducing agents with sufficiently different reduction potentials are titrated. In this experiment iron(II) complexes are combined into a mixture of reducing agents and are oxidized to the corresponding iron(III) complexes. As all of the complexes involve the…
Socio-Environmental Resilience and Complex Urban Systems Modeling
Deal, Brian; Petri, Aaron; Pan, Haozhi; Goldenberg, Romain; Kalantari, Zahra; Cvetkovic, Vladimir
2017-04-01
The increasing pressure of climate change has inspired two normative agendas: socio-technical transitions and socio-ecological resilience, both sharing a complex-systems epistemology (Gillard et al. 2016). Socio-technical solutions include a continuous, massive data-gathering exercise now underway in urban places under the guise of developing a 'smart'(er) city. This has led to the creation of data-rich environments where large data sets have become central to monitoring and forming a response to anomalies. Some have argued that these kinds of data sets can help in planning for resilient cities (Norberg and Cumming 2008; Batty 2013). In this paper, we focus on a more nuanced, ecologically based, socio-environmental perspective of resilience planning that is often given less consideration. Here, we broadly discuss (and model) the tightly linked, mutually influenced social and biophysical subsystems that are critical for understanding urban resilience. We argue for the need to incorporate these subsystem linkages into the resilience-planning lexicon through the integration of systems models and planning support systems. We make our case by first providing a context for urban resilience from a socio-ecological and planning perspective. We highlight the data needs for this type of resilience planning and compare them to the data streams currently collected in various smart-city efforts. This helps to define an approach for operationalizing socio-environmental resilience planning using robust systems models and planning support systems. For this, we draw from our experiences in coupling a spatio-temporal land use model (the Landuse Evolution and impact Assessment Model (LEAM)) with water quality and quantity models in Stockholm, Sweden. We describe the coupling of these systems models using a robust Planning Support System (PSS) structural framework. We use the coupled model simulations and PSS to analyze the connection between urban land use transformation (social) and water
Employers' experience of employees with cancer: trajectories of complex communication
Tiedtke, C. M.; Dierckx de Casterlé, B.; Frings-Dresen, M. H. W.; de Boer, A. G. E. M.; Greidanus, M. A.; Tamminga, S. J.; de Rijk, A. E.
2017-01-01
Purpose Remaining in paid work is of great importance for cancer survivors, and employers play a crucial role in achieving this. Return to work (RTW) is best seen as a process. This study aims to provide insight into (1) Dutch employers' experiences with RTW of employees with cancer and (2) the
Modeling of laser-driven hydrodynamics experiments
di Stefano, Carlos; Doss, Forrest; Rasmus, Alex; Flippo, Kirk; Desjardins, Tiffany; Merritt, Elizabeth; Kline, John; Hager, Jon; Bradley, Paul
2017-10-01
Correct interpretation of hydrodynamics experiments driven by a laser-produced shock depends strongly on an understanding of the time-dependent effect of the irradiation conditions on the flow. In this talk, we discuss the modeling of such experiments using the RAGE radiation-hydrodynamics code. The focus is an instability experiment consisting of a period of relatively steady shock conditions, in which the Richtmyer-Meshkov process dominates, followed by a period of decaying flow conditions, in which the dominant growth process changes to Rayleigh-Taylor instability. The use of a laser model is essential for capturing the transition. (Also University of Michigan.)
Modeling Power Systems as Complex Adaptive Systems
Energy Technology Data Exchange (ETDEWEB)
Chassin, David P.; Malard, Joel M.; Posse, Christian; Gangopadhyaya, Asim; Lu, Ning; Katipamula, Srinivas; Mallow, J V.
2004-12-30
Physical analogs have shown considerable promise for understanding the behavior of complex adaptive systems, including macroeconomics, biological systems, social networks, and electric power markets. Many of today's most challenging technical and policy questions can be reduced to a distributed economic control problem. Indeed, economically based control of large-scale systems is founded on the conjecture that price-based regulation (e.g., auctions, markets) results in an optimal allocation of resources and emergent optimal system control. This report explores state-of-the-art physical analogs for understanding the behavior of some econophysical systems and for deriving stable and robust control strategies for them. We review and discuss applications of analytic methods based on a thermodynamic metaphor, according to which the interplay between system entropy and conservation laws gives rise to intuitive and governing global properties of complex systems that cannot otherwise be understood. We apply these methods to the question of how power markets can be expected to behave under a variety of conditions.
Linking Complexity and Sustainability Theories: Implications for Modeling Sustainability Transitions
Directory of Open Access Journals (Sweden)
Camaren Peter
2014-03-01
In this paper, we deploy complexity theory as the foundation for integrating different theoretical approaches to sustainability and develop a rationale for a complexity-based framework for modeling transitions to sustainability. We propose a framework based on a comparison of the complex-systems properties that characterize the different theories dealing with transitions to sustainability. We argue that adopting a complexity-theory-based approach for modeling transitions requires going beyond deterministic frameworks, by adopting a probabilistic, integrative, inclusive, and adaptive approach that can support transitions. We also illustrate how this complexity-based modeling framework can be implemented, i.e., how it can be used to select modeling techniques that address the particular properties of complex systems that must be understood in order to model transitions to sustainability. In doing so, we establish a complexity-based approach to modeling sustainability transitions that caters for the broad range of complex-systems properties involved.
Modeling experiments using quantum and Kolmogorov probability
International Nuclear Information System (INIS)
Hess, Karl
2008-01-01
Criteria are presented that permit a straightforward partition of experiments into sets that can be modeled using both quantum probability and the classical probability framework of Kolmogorov. These new criteria concentrate on the operational aspects of the experiments and lead beyond the commonly appreciated partition by relating experiments to commuting and non-commuting quantum operators, as well as to non-entangled and entangled wavefunctions. In other words, the space of experiments that can be understood using classical probability is larger than usually assumed. This knowledge provides advantages for areas such as nanoscience and engineering or quantum computation.
Software complex for developing dynamically packed program system for experiment automation
International Nuclear Information System (INIS)
Baluka, G.; Salamatin, I.M.
1985-01-01
A software complex for developing dynamically packed program systems for experiment automation is considered. The complex includes general-purpose programming systems, represented by the RT-11 standard operating system, and specially developed problem-oriented modules providing execution of specific jobs. The described complex is implemented in the PASCAL and MACRO-2 languages and is rather flexible with respect to variations in the experimental technique.
Mathematical modeling and optimization of complex structures
Repin, Sergey; Tuovinen, Tero
2016-01-01
This volume contains selected papers in three closely related areas: mathematical modeling in mechanics, numerical analysis, and optimization methods. The papers are based upon talks presented on the International Conference for Mathematical Modeling and Optimization in Mechanics, held in Jyväskylä, Finland, March 6-7, 2014 dedicated to Prof. N. Banichuk on the occasion of his 70th birthday. The articles are written by well-known scientists working in computational mechanics and in optimization of complicated technical models. Also, the volume contains papers discussing the historical development, the state of the art, new ideas, and open problems arising in modern continuum mechanics and applied optimization problems. Several papers are concerned with mathematical problems in numerical analysis, which are also closely related to important mechanical models. The main topics treated include: * Computer simulation methods in mechanics, physics, and biology; * Variational problems and methods; minimiz...
Hierarchical Models of the Nearshore Complex System
National Research Council Canada - National Science Library
Werner, Brad
2004-01-01
.... This grant was termination funding for the Werner group, specifically aimed at finishing up and publishing research related to synoptic imaging of near shore bathymetry, testing models for beach cusp...
Complex models of nodal nuclear data
International Nuclear Information System (INIS)
Dufek, Jan
2011-01-01
During core simulations, nuclear data are required at various nodal thermal-hydraulic and fuel-burnup conditions. The nodal data are also partially affected by thermal-hydraulic and fuel-burnup conditions in surrounding nodes, as these change the neutron energy spectrum in the node. The nodal data are therefore functions of many parameters (state variables), and the more state variables the nodal data models consider, the more accurate and flexible the models become. The existing table and polynomial regression models, however, cannot reflect the data dependences on many state variables. For the table models, the number of mesh points (and necessary lattice calculations) grows exponentially with the number of variables. For the polynomial regression models, the number of possible multivariate polynomials exceeds the limits of existing selection algorithms, which should identify the few dozen most important polynomials. Also, the standard scheme of lattice calculations is not convenient for modelling the data dependences on various burnup conditions, since it performs only a single or a few burnup calculations at fixed nominal conditions. We suggest a new efficient algorithm for selecting the most important multivariate polynomials for the polynomial regression models so that dependences on many state variables can be considered. We also present a new scheme for lattice calculations in which a large number of burnup histories are computed at varied nodal conditions. The number of lattice calculations performed and the number of polynomials analysed are controlled and minimised while building nodal data models of the required accuracy. (author)
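The term-selection problem this abstract describes can be illustrated with a toy sketch. The following is not Dufek's algorithm, only a generic greedy forward selection over multivariate monomial candidates; it conveys why a selection step is needed when the candidate set grows combinatorially with the number of state variables:

```python
import itertools
import numpy as np

def candidate_terms(n_vars, max_degree):
    """All multivariate monomial exponent tuples up to max_degree."""
    return [e for e in itertools.product(range(max_degree + 1), repeat=n_vars)
            if 0 < sum(e) <= max_degree]

def forward_select(X, y, terms, n_select):
    """Greedily add the monomial that most reduces the residual error."""
    cols = [np.prod(X ** np.array(e), axis=1) for e in terms]
    chosen, design = [], [np.ones(len(y))]        # start from an intercept
    for _ in range(n_select):
        best, best_err = None, np.inf
        for i, c in enumerate(cols):
            if i in chosen:
                continue
            A = np.column_stack(design + [c])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            err = np.linalg.norm(A @ coef - y)
            if err < best_err:
                best, best_err = i, err
        chosen.append(best)
        design.append(cols[best])
    return [terms[i] for i in chosen]

# Toy data: y depends only on x0*x1 and x2^2, out of 9 degree-<=2 candidates.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 3))
y = 2.0 * X[:, 0] * X[:, 1] + 0.5 * X[:, 2] ** 2
picked = forward_select(X, y, candidate_terms(3, 2), 2)
print(picked)
```

With only three variables and degree two there are already nine candidate monomials; the exponential growth in candidates with more state variables is what motivates an efficient selection algorithm.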
Integrated Modeling of Complex Optomechanical Systems
Andersen, Torben; Enmark, Anita
2011-09-01
Mathematical modeling and performance simulation are playing an increasing role in large, high-technology projects. There are two reasons: first, projects are now larger than before, and the high cost calls for detailed performance prediction prior to construction. Second, in particular for space-related designs, it is often difficult to test systems under realistic conditions beforehand, and mathematical modeling is then needed to verify in advance that a system will work as planned. Computers have become much more powerful, permitting calculations that were not possible before. At the same time, mathematical tools have been further developed and have found acceptance in the community. Particular progress has been made in the fields of structural mechanics, optics, and control engineering, where new methods have gained importance over the last few decades. Also, methods for combining optical, structural, and control-system models into global models have found widespread use. Such combined models are usually called integrated models and were the subject of this symposium. The objective was to bring together people working in the fields of ground-based optical telescopes, ground-based radio telescopes, and space telescopes. We succeeded in doing so and had 39 interesting presentations and many fruitful discussions during coffee and lunch breaks and social arrangements. We are grateful that so many top-ranked specialists found their way to Kiruna, and we believe that these proceedings will prove valuable during much future work.
Wind Tunnel Modeling Of Wind Flow Over Complex Terrain
Banks, D.; Cochran, B.
2010-12-01
This presentation will describe the findings of an atmospheric boundary layer (ABL) wind tunnel study conducted as part of the Bolund Experiment. This experiment was sponsored by Risø DTU (National Laboratory for Sustainable Energy, Technical University of Denmark) during the fall of 2009 to enable a blind comparison of various airflow models in an attempt to validate their performance in predicting airflow over complex terrain. Bolund hill sits 12 m above the water level at the end of a narrow isthmus. The island features a steep escarpment on one side, over which the airflow can be expected to separate. The island was equipped with several anemometer towers, and the approach flow over the water was well characterized. This study was one of only two physical model studies included in the blind model comparison, the other being a water plume study. The remainder were computational fluid dynamics (CFD) simulations, including both RANS and LES. Physical modeling of airflow over topographical features has been used since the middle of the 20th century, and the methods required are well understood and well documented. Several books have been written describing how to properly perform ABL wind tunnel studies, including ASCE Manual of Engineering Practice 67. Boundary layer wind tunnel tests are the only modeling method deemed acceptable in ASCE 7-10, the most recent edition of the American Society of Civil Engineers standard that provides wind loads for buildings and other structures for building codes across the US. Since the 1970s, most tall structures have undergone testing in a boundary layer wind tunnel to accurately determine the wind-induced loading. When compared to CFD, the US EPA considers a properly executed wind tunnel study to be equivalent to a CFD model with infinitesimal grid resolution and near-infinite memory. One key reason for this widespread acceptance is that properly executed ABL wind tunnel studies will accurately simulate flow separation
Using Models to Inform Policy: Insights from Modeling the Complexities of Global Polio Eradication
Thompson, Kimberly M.
Drawing on over 20 years of experience modeling risks in complex systems, this talk will challenge SBP participants to develop models that provide timely and useful answers to critical policy questions when decision makers need them. The talk will include reflections on the opportunities and challenges associated with developing integrated models for complex problems and communicating their results effectively. Dr. Thompson will focus the talk largely on collaborative modeling related to global polio eradication and the application of system dynamics tools. After successful global eradication of wild polioviruses, live polioviruses will still present risks that could potentially lead to paralytic polio cases. This talk will present insights from efforts to use integrated dynamic, probabilistic risk, decision, and economic models to address critical policy questions related to managing global polio risks. Using a dynamic disease transmission model combined with probabilistic model inputs that characterize uncertainty, for a stratified world to account for variability, we find that global health leaders will face some difficult choices, but that they can take actions that will manage the risks effectively. The talk will emphasize the need for true collaboration between modelers and subject matter experts, and the importance of working with decision makers as partners to ensure the development of useful models that actually get used.
ZZ SIESTA, Atmospheric Dispersion Experiment over Complex Terrain
International Nuclear Information System (INIS)
2000-01-01
1 - Name of experiment: SIESTA. 2 - Computer for which program is designed and other machine version packages available: Program name: ZZ-SIESTA; Package-ID Status: NEA-1617/01 Tested; Machines used: Package-ID: NEA-1617/01; Orig. Computer: DEC VAX 6000; Test Computer: DEC VAX 6000. 3 - Purpose and phenomena tested: The aim of the project was to obtain knowledge of the general nature of the turbulence, advection and atmospheric dispersion in the two flow regimes parallel to the Swiss Jura ridge, which represent the most frequent wind systems occurring on the Swiss Plain. 4 - Description of the experimental set-up used: The atmospheric dispersion process was investigated by carrying out SF6 tracer experiments. The tracer was released about 6 m above ground level near the Goesgen meteo tower. Sampling units were placed on ellipses around the release point. Total sampling time was at least one hour. Tracer concentrations were determined after each experiment by gas chromatography. 5 - Special features: Because of the uncertainty in the transport direction of the tracer plume, a mobile tracer analyzing system was used. 6 - Description of experiment and analysis: To investigate the flow field in the test region, the following measuring setups were used: (1) three tethered-balloon sounding systems to measure temperature, humidity, wind speed and direction; (2) a meteo tower to measure 10-minute averages of wind direction and velocity at two fixed heights; (3) sonic anemometers to measure heat flux, friction velocity, Monin-Obukhov length, and wind speed at the release point and at a certain distance; (4) 2-m masts to measure wind speed and direction continuously. The wind flow system was measured by radar-tracked tetroons
Firn Model Intercomparison Experiment (FirnMICE)
DEFF Research Database (Denmark)
Lundin, Jessica M.D.; Stevens, C. Max; Arthern, Robert
2017-01-01
Evolution of cold dry snow and firn plays important roles in glaciology; however, the physical formulation of a densification law is still an active research topic. We forced eight firn-densification models and one seasonal-snow model in six different experiments by imposing step changes in tempe...
Bigras, Noémie; Godbout, Natacha; Hébert, Martine; Sabourin, Stéphane
2017-03-01
Patients consulting for sexual difficulties frequently present additional personal or relational disorders and symptoms. This is especially the case when they have experienced cumulative adverse childhood experiences (CACEs), which are associated with symptom complexity. CACEs refer to the extent to which an individual has experienced an accumulation of different types of adverse childhood experiences, including sexual, physical, and psychological abuse; neglect; exposure to inter-parental violence; and bullying. However, past studies have not examined how symptom complexity might relate to CACEs and sexual satisfaction, and even less so in samples of adults consulting for sex therapy. Aim: To document the presence of CACEs in a sample of individuals consulting for sexual difficulties and its potential association with sexual satisfaction through the development of symptom complexity, operationalized through well-established clinically significant indicators of individual and relationship distress. Methods: Men and women (n = 307) aged 18 years and older consulting for sexual difficulties completed a set of questionnaires during their initial assessment. Measures: (i) Global Measure of Sexual Satisfaction Scale, (ii) Dyadic Adjustment Scale-4, (iii) Experiences in Close Relationships-12, (iv) Beck Depression Inventory-13, (v) Trauma Symptom Inventory-2, and (vi) Psychiatric Symptom Inventory-14. Results: 58.1% of women and 51.9% of men reported at least four forms of childhood adversity. The average number of CACEs was 4.10 (SD = 2.23) in women and 3.71 (SD = 2.08) in men. Structural equation modeling showed that CACEs contribute directly and indirectly to sexual satisfaction in adults consulting for sex therapy through clinically significant individual and relational symptom complexities. The findings underscore the relevance of addressing clinically significant psychological and relational symptoms that can stem from CACEs when treating sexual difficulties in adults seeking sex
Argonne Bubble Experiment Thermal Model Development
Energy Technology Data Exchange (ETDEWEB)
Buechler, Cynthia Eileen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-12-03
This report describes the Computational Fluid Dynamics (CFD) model that was developed to calculate the temperatures and gas volume fractions in the solution vessel during irradiation. It is based on the model used to calculate temperatures and volume fractions in an annular vessel containing an aqueous solution of uranium. The experiment was repeated at several electron-beam power levels, but the CFD analysis was performed only for the 12 kW irradiation, because this experiment came the closest to reaching a steady-state condition. The aim of the study is to compare the results of the calculation with experimental measurements to determine the validity of the CFD model.
Smart modeling and simulation for complex systems practice and theory
Ren, Fenghui; Zhang, Minjie; Ito, Takayuki; Tang, Xijin
2015-01-01
This book aims to provide a description of new Artificial Intelligence technologies and approaches for the modeling and simulation of complex systems, as well as an overview of the latest scientific efforts in this field, such as platforms and/or software tools for smart modeling and simulation of complex systems. These tasks are difficult to accomplish using traditional computational approaches due to the complex relationships among components and the distributed nature of resources, as well as dynamic work environments. In order to effectively model complex systems, intelligent technologies such as multi-agent systems and smart grids are employed to model and simulate complex systems in the areas of ecosystems, social and economic organization, web-based grid services, transportation systems, power systems and evacuation systems.
A hedonic analysis of the complex hunting experience
DEFF Research Database (Denmark)
Lundhede, Thomas; Jacobsen, Jette Bredahl; Thorsen, Bo Jellesmark
2015-01-01
In Denmark, the right to hunt is vested with the landowner but can be transferred to others and is traded on a well-established market. The dominant form of hunting lease is a time-limited contract transferring the hunting rights on a piece of land to one or more persons. We analyze this market for hunting leases using the hedonic method on a rich set of data obtained from Danish hunters. We hypothesize and show that the price of a hunting lease reflects that hunting is a composite experience, and also reflects aspects relating to the landowner's cost of leasing out hunting. Thus, the value
Energy Technology Data Exchange (ETDEWEB)
Vavilin, V A; Vasiliev, V B; Ponomarev, A V; Rytow, S V [Russian Academy of Sciences, Moscow (Russian Federation). Water Problems Inst.
1994-01-01
A universal basic model of anaerobic conversion of complex organic material is suggested. The model can be used for investigating start-up experiments for food-industry wastewater. General results obtained with the model agreed with the experimental data. An explanation of the complex dynamic behaviour of the anaerobic system is suggested. (author)
The sigma model on complex projective superspaces
Energy Technology Data Exchange (ETDEWEB)
Candu, Constantin; Mitev, Vladimir; Schomerus, Volker [DESY, Hamburg (Germany). Theory Group; Quella, Thomas [Amsterdam Univ. (Netherlands). Inst. for Theoretical Physics; Saleur, Hubert [CEA Saclay, 91 - Gif-sur-Yvette (France). Inst. de Physique Theorique; USC, Los Angeles, CA (United States). Physics Dept.
2009-08-15
The sigma model on projective superspaces CP^(S-1|S) gives rise to a continuous family of interacting 2D conformal field theories which are parametrized by the curvature radius R and the theta angle θ. Our main goal is to determine the spectrum of the model, non-perturbatively, as a function of both parameters. We succeed in doing so for all open boundary conditions preserving the full global symmetry of the model. In string theory parlance, these correspond to volume-filling branes that are equipped with a monopole line bundle and connection. The paper consists of two parts. In the first part, we approach the problem within the continuum formulation. Combining combinatorial arguments with perturbative studies and some simple free-field calculations, we determine a closed formula for the partition function of the theory. This is then tested numerically in the second part. There we propose a spin-chain regularization of the CP^(S-1|S) model with open boundary conditions and use it to determine the spectrum at the conformal fixed point. The numerical results are in remarkable agreement with the continuum analysis. (orig.)
A complex autoregressive model and application to monthly temperature forecasts
Directory of Open Access Journals (Sweden)
X. Gu
2005-11-01
A complex autoregressive model was established based on a mathematical derivation of least squares for the complex number domain, referred to as complex least squares. The model differs from the conventional approach, in which the real and imaginary parts are calculated separately. An application of this new model shows better forecasts than those from other conventional statistical models in predicting monthly temperature anomalies in July at 160 meteorological stations in mainland China. The conventional statistical models include an autoregressive model in which the real and imaginary parts are treated separately, an autoregressive model in the real number domain, and a persistence-forecast model.
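The idea of fitting an autoregressive model directly in the complex domain, rather than fitting real and imaginary parts separately, can be sketched as follows. This is a minimal illustration, not the authors' implementation; NumPy's least-squares solver already accepts complex-valued design matrices, so the lag-regression reads the same as in the real case:

```python
import numpy as np

def fit_complex_ar(z, p):
    """Fit AR(p) coefficients a_1..a_p of a complex series z by
    complex least squares: z[t] ~ sum_k a_k * z[t-k]."""
    # Lagged design matrix, built directly in the complex domain.
    X = np.column_stack([z[p - k:len(z) - k] for k in range(1, p + 1)])
    y = z[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)  # complex-valued solve
    return a

def forecast_one_step(z, a):
    p = len(a)
    return np.dot(a, z[-1:-p - 1:-1])  # newest lag first

# Synthetic complex AR(1) series with a known coefficient 0.8+0.1j.
rng = np.random.default_rng(0)
true_a = 0.8 + 0.1j
z = np.zeros(500, dtype=complex)
for t in range(1, 500):
    noise = (rng.standard_normal() + 1j * rng.standard_normal()) * 0.1
    z[t] = true_a * z[t - 1] + noise
a_hat = fit_complex_ar(z, 1)
print(a_hat)  # estimate should be close to true_a
```

The single complex solve couples the real and imaginary parts through one set of complex coefficients, which is the distinction the abstract draws against treating the two parts as separate real regressions.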
Understanding complex urban systems integrating multidisciplinary data in urban models
Gebetsroither-Geringer, Ernst; Atun, Funda; Werner, Liss
2016-01-01
This book is devoted to the modeling and understanding of complex urban systems. This second volume of Understanding Complex Urban Systems focuses on the challenges of the modeling tools, concerning, e.g., the quality and quantity of data and the selection of an appropriate modeling approach. It is meant to support urban decision-makers—including municipal politicians, spatial planners, and citizen groups—in choosing an appropriate modeling approach for their particular modeling requirements. The contributors to this volume are from different disciplines, but all share the same goal: optimizing the representation of complex urban systems. They present and discuss a variety of approaches for dealing with data-availability problems and finding appropriate modeling approaches—and not only in terms of computer modeling. The selection of articles featured in this volume reflect a broad variety of new and established modeling approaches such as: - An argument for using Big Data methods in conjunction with Age...
CFD and FEM modeling of PPOOLEX experiments
Energy Technology Data Exchange (ETDEWEB)
Paettikangas, T.; Niemi, J.; Timperi, A. (VTT Technical Research Centre of Finland (Finland))
2011-01-15
A large-break LOCA experiment performed with the PPOOLEX experimental facility is analysed with CFD calculations. Simulation of the first 100 seconds of the experiment is performed using the Euler-Euler two-phase model of FLUENT 6.3. In wall condensation, the condensing water forms a film layer on the wall surface, which is modelled by mass transfer from the gas phase to the liquid water phase in the near-wall grid cell. The direct-contact condensation in the wetwell is modelled with simple correlations. The wall-condensation and direct-contact condensation models are implemented with user-defined functions in FLUENT. Fluid-Structure Interaction (FSI) calculations of the PPOOLEX experiments and of a realistic BWR containment are also presented. Two-way coupled FSI calculations of the experiments have been numerically unstable with explicit coupling. A linear perturbation method (LPM) is therefore used to prevent the numerical instability. The method is first validated against numerical data and against the PPOOLEX experiments. Preliminary FSI calculations are then performed for a realistic BWR containment by modeling a sector of the containment and one blowdown pipe. For the BWR containment, one- and two-way coupled calculations as well as calculations with LPM are carried out. (Author)
Cyclic complex loading of 316 stainless steel: Experiments and calculations
International Nuclear Information System (INIS)
Jacquelin, B.; Hourlier, F.; Dang Van, K.; Stolz, C.
1981-01-01
To test the ability of cyclic constitutive laws established by means of uniaxial tests, a benchmark is proposed. The calculated results using the model of Chaboche-Cordier-Dang Van are compared with experimental data obtained on cylindrical specimens undergoing simultaneously a constant torque and cyclic tension. (orig.)
Refining Grasp Affordance Models by Experience
DEFF Research Database (Denmark)
Detry, Renaud; Kraft, Dirk; Buch, Anders Glent
2010-01-01
We present a method for learning object grasp affordance models in 3D from experience, and demonstrate its applicability through extensive testing and evaluation on a realistic and largely autonomous platform. Grasp affordance refers here to relative object-gripper configurations that yield stable ... with a visual model of the object they characterize. We explore a batch-oriented, experience-based learning paradigm where grasps sampled randomly from a density are performed, and an importance-sampling algorithm learns a refined density from the outcomes of these experiences. The first such learning cycle is bootstrapped with a grasp density formed from visual cues. We show that the robot effectively applies its experience by down-weighting poor grasp solutions, which results in increased success rates at subsequent learning cycles. We also present success rates in a practical scenario where a robot needs
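The batch learning loop described in this abstract can be caricatured in a few lines. The sketch below is a strong simplification under assumed forms (a single 1D grasp parameter, a Gaussian density, and a hypothetical success model), not the paper's nonparametric grasp densities; it only illustrates how sampling from a density and refitting on the successful outcomes raises the success rate over cycles:

```python
import numpy as np

rng = np.random.default_rng(2)

def true_success_prob(angle):
    # Hypothetical ground truth: grasps succeed mostly near angle 0.8 rad.
    return np.exp(-((angle - 0.8) ** 2) / (2 * 0.1 ** 2))

def refine(samples, outcomes):
    """Refinement step: refit a Gaussian to the successful grasps
    (a crude stand-in for the paper's importance-sampling update
    of a nonparametric grasp density)."""
    winners = samples[outcomes]
    return winners.mean(), winners.std() + 1e-3

# Bootstrap density "from visual cues": broad and poorly centered.
mu, sigma = 0.0, 1.0
rates = []
for cycle in range(3):
    samples = rng.normal(mu, sigma, 300)   # grasps tried in this batch
    outcomes = rng.random(300) < true_success_prob(samples)
    rates.append(outcomes.mean())
    mu, sigma = refine(samples, outcomes)
print(rates)  # success rate should rise as poor grasps are down-weighted
```

The essential point carried over from the abstract is the feedback structure: each cycle's density is learned from the previous cycle's execution outcomes rather than from visual cues alone.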
Model for measuring complex performance in an aviation environment
International Nuclear Information System (INIS)
Hahn, H.A.
1988-01-01
An experiment was conducted to identify models of pilot performance through the collection and analysis of concurrent verbal protocols. Sixteen models were identified. Novice and expert pilots differed with respect to the models they used. Models were correlated with performance, particularly in the case of expert subjects. Models were not correlated with performance-shaping factors (i.e., workload). 3 refs., 1 tab
Fluid flow modeling in complex areas*, **
Directory of Open Access Journals (Sweden)
Poullet Pascal
2012-04-01
We show first results of a 3D simulation of sea currents in a realistic context. We use the full Navier-Stokes equations for an incompressible viscous fluid. The problem is solved using a second-order incremental projection method associated with the finite-volume staggered (MAC) scheme for the spatial discretization. After validation on classical cases, it is used in a numerical simulation of the Pointe à Pitre harbour area. The use of the fictitious domain method permits us to take into account the complexity of the bathymetric data and allows us to work with regular meshes, thus preserving the efficiency essential for a 3D code. In this study, we present the first results of simulating an incompressible viscous flow in a real environmental context. The approach uses a fictitious-domain method to account for a highly irregular three-dimensional physical domain. The numerical scheme combines an incremental projection scheme with finite volumes on control volumes adapted to a staggered mesh. Validation tests are carried out for the two-sided lid-driven cavity and for flow in a channel with an asymmetrically placed obstacle.
Shippee, Nathan D; Shah, Nilay D; May, Carl R; Mair, Frances S; Montori, Victor M
2012-10-01
To design a functional, patient-centered model of patient complexity with practical applicability to analytic design and clinical practice. Existing literature on patient complexity has mainly identified its components descriptively and in isolation, lacking clarity as to their combined functions in disrupting care or to how complexity changes over time. The authors developed a cumulative complexity model, which integrates existing literature and emphasizes how clinical and social factors accumulate and interact to complicate patient care. A narrative literature review is used to explicate the model. The model emphasizes a core, patient-level mechanism whereby complicating factors impact care and outcomes: the balance between patient workload of demands and patient capacity to address demands. Workload encompasses the demands on the patient's time and energy, including demands of treatment, self-care, and life in general. Capacity concerns ability to handle work (e.g., functional morbidity, financial/social resources, literacy). Workload-capacity imbalances comprise the mechanism driving patient complexity. Treatment and illness burdens serve as feedback loops, linking negative outcomes to further imbalances, such that complexity may accumulate over time. With its components largely supported by existing literature, the model has implications for analytic design, clinical epidemiology, and clinical practice.
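The workload-capacity mechanism in this abstract can be illustrated with a toy discrete-time simulation. All functional forms and parameter values below are illustrative assumptions, not part of the authors' conceptual model; the sketch only shows how a burden feedback loop lets complexity accumulate once workload exceeds capacity:

```python
# Toy illustration of the cumulative complexity mechanism: when patient
# workload exceeds capacity, adherence drops, and the resulting treatment
# and illness burden feeds back into workload while eroding capacity.
# All coefficients are made-up assumptions for demonstration only.

def simulate(workload, capacity, steps=10, feedback=0.5):
    trajectory = []
    for _ in range(steps):
        imbalance = max(0.0, workload - capacity)
        adherence = 1.0 / (1.0 + imbalance)   # imbalance disrupts care
        workload += feedback * imbalance      # burden feedback loop
        capacity -= 0.1 * imbalance           # illness erodes capacity
        trajectory.append((workload, capacity, adherence))
    return trajectory

balanced = simulate(workload=1.0, capacity=2.0)
overloaded = simulate(workload=3.0, capacity=2.0)
print(balanced[-1], overloaded[-1])
```

In the balanced case the state never changes, while in the overloaded case workload and capacity diverge and adherence decays, which is the qualitative accumulation dynamic the model describes.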
Experience from the ECORS program in regions of complex geology
Damotte, B.
1993-04-01
The French ECORS program was launched in 1983 by a cooperation agreement between universities and petroleum companies. Crustal surveys have tried to find explanations for the formation of geological features, such as rifts, mountain ranges or subsidence in sedimentary basins. Several seismic surveys were carried out, some across areas with complex geological structures. The seismic techniques and equipment used were those developed by petroleum geophysicists, adapted to the target depths (30-50 km) and to the various physical constraints encountered in the field. In France, ECORS has recorded 850 km of deep seismic lines onshore across plains and mountains, on various kinds of geological formations. Different variations of the seismic method (reflection, refraction, long-offset seismic) were used, often simultaneously. Multiple coverage profiling constitutes the essential part of this data acquisition. Vibrators and dynamite shots were employed with a spread generally 15 km long, but sometimes 100 km long. Some typical seismic examples show that obtaining crustal reflections essentially depends on two factors: (1) the type and structure of shallow formations, and (2) the sources used. Thus, when seismic energy is strongly absorbed across the first kilometers in shallow formations, or when these formations are highly structured, standard multiple-coverage profiling is not able to provide results beyond a few seconds. In this case, it is recommended to simultaneously carry out long-offset seismic in low multiple coverage. Other more methodological examples show: how the impact on the crust of a surface fault may be evaluated according to the seismic method implemented (VIBROSEIS 96-fold coverage or single dynamite shot); that vibrators make it possible to implement wide-angle seismic surveying with an 80 km offset; how to implement the seismic reflection method on complex formations in high mountains. All data were processed using industrial seismic software.
Passengers, Crowding and Complexity : Models for passenger oriented public transport
P.C. Bouman (Paul)
2017-01-01
Passengers, Crowding and Complexity was written as part of the Complexity in Public Transport (ComPuTr) project funded by the Netherlands Organisation for Scientific Research (NWO). This thesis studies in three parts how microscopic data can be used in models that have the potential
Stability of Rotor Systems: A Complex Modelling Approach
DEFF Research Database (Denmark)
Kliem, Wolfhard; Pommer, Christian; Stoustrup, Jakob
1996-01-01
A large class of rotor systems can be modelled by a complex matrix differential equation of second order. The angular velocity of the rotor plays the role of a parameter. We apply the Lyapunov matrix equation in a complex setting and prove two new stability results which are compared...
Complex versus simple models: ion-channel cardiac toxicity prediction.
Mistry, Hitesh B
2018-01-01
There is growing interest in applying detailed mathematical models of the heart for ion-channel related cardiac toxicity prediction. However, there is debate as to whether such complex models are required. Here an assessment of the predictive performance of two established large-scale biophysical cardiac models and a simple linear model, Bnet, was conducted. Three ion-channel data-sets were extracted from the literature. Each compound was designated a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via a leave-one-out cross validation. Overall the Bnet model performed as well as the leading cardiac models on two of the data-sets and outperformed both cardiac models on the third. These results highlight the importance of benchmarking complex versus simple models but also encourage the development of simple models.
Complex versus simple models: ion-channel cardiac toxicity prediction
Directory of Open Access Journals (Sweden)
Hitesh B. Mistry
2018-02-01
Full Text Available There is growing interest in applying detailed mathematical models of the heart for ion-channel related cardiac toxicity prediction. However, there is debate as to whether such complex models are required. Here an assessment of the predictive performance of two established large-scale biophysical cardiac models and a simple linear model, Bnet, was conducted. Three ion-channel data-sets were extracted from the literature. Each compound was designated a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via a leave-one-out cross validation. Overall the Bnet model performed as well as the leading cardiac models on two of the data-sets and outperformed both cardiac models on the third. These results highlight the importance of benchmarking complex versus simple models but also encourage the development of simple models.
Modeling Air-Quality in Complex Terrain Using Mesoscale and ...
African Journals Online (AJOL)
Air-quality in a complex terrain (Colorado-River-Valley/Grand-Canyon Area, Southwest U.S.) is modeled using a higher-order closure mesoscale model and a higher-order closure dispersion model. Non-reactive tracers have been released in the Colorado-River valley, during winter and summer 1992, to study the ...
Surface-complexation models for sorption onto heterogeneous surfaces
International Nuclear Information System (INIS)
Harvey, K.B.
1997-10-01
This report provides a description of the discrete-logK spectrum model, together with a description of its derivation, and of its place in the larger context of surface-complexation modelling. The tools necessary to apply the discrete-logK spectrum model are discussed, and background information appropriate to this discussion is supplied as appendices. (author)
On spin and matrix models in the complex plane
International Nuclear Information System (INIS)
Damgaard, P.H.; Heller, U.M.
1993-01-01
We describe various aspects of statistical mechanics defined in the complex temperature or coupling-constant plane. Using exactly solvable models, we analyse such aspects as renormalization group flows in the complex plane, the distribution of partition function zeros, and the question of new coupling-constant symmetries of complex-plane spin models. The double-scaling form of matrix models is shown to be exactly equivalent to finite-size scaling of two-dimensional spin systems. This is used to show that the string susceptibility exponents derived from matrix models can be obtained numerically with very high accuracy from the scaling of finite-N partition function zeros in the complex plane. (orig.)
A Framework for Modeling and Analyzing Complex Distributed Systems
National Research Council Canada - National Science Library
Lynch, Nancy A; Shvartsman, Alex Allister
2005-01-01
Report developed under STTR contract for topic AF04-T023. This Phase I project developed a modeling language and laid a foundation for computational support tools for specifying, analyzing, and verifying complex distributed system designs...
Modelling the self-organization and collapse of complex networks
Indian Academy of Sciences (India)
Modelling the self-organization and collapse of complex networks. Sanjay Jain, Department of Physics and Astrophysics, University of Delhi; Jawaharlal Nehru Centre for Advanced Scientific Research, Bangalore; Santa Fe Institute, Santa Fe, New Mexico.
Design of experiments and springback prediction for AHSS automotive components with complex geometry
International Nuclear Information System (INIS)
Asgari, A.; Pereira, M.; Rolfe, B.; Dingle, M.; Hodgson, P.
2005-01-01
With the drive towards implementing Advanced High Strength Steels (AHSS) in the automotive industry, stamping engineers need to quickly answer questions about forming these strong materials into elaborate shapes. Commercially available codes have been successfully used to accurately predict formability, thickness and strains in complex parts. However, springback and twisting are still challenging subjects in numerical simulations of AHSS components. Design of Experiments (DOE) has been used in this paper to study the sensitivity of the implicit and explicit numerical results with respect to certain arrays of user input parameters in the forming of an AHSS component. Numerical results were compared to experimental measurements of the parts stamped in an industrial production line. The forming predictions of the implicit and explicit codes were in good agreement with the experimental measurements for the conventional steel grade, while lower accuracies were observed for the springback predictions. The forming predictions of the complex component with an AHSS material were also in good correlation with the respective experimental measurements; however, much lower accuracies were observed in its springback predictions. The number of integration points through the thickness and the tool offset were found to be of significant importance, while the coefficient of friction and Young's modulus (modeling input parameters) had no significant effect on the accuracy of the predictions for the complex geometry
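The sensitivity screening described above rests on standard two-level factorial designs. A minimal sketch of how main effects are estimated from such a design follows; the synthetic response stands in for the actual springback simulations, and the factor count and coefficients are illustrative assumptions:

```python
from itertools import product

def full_factorial(k):
    """All 2^k runs of a two-level design, with factors coded -1/+1."""
    return [list(run) for run in product((-1, 1), repeat=k)]

def main_effect(design, response, factor):
    """Average response at the +1 level minus the average at -1."""
    hi = [y for run, y in zip(design, response) if run[factor] == 1]
    lo = [y for run, y in zip(design, response) if run[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

# Synthetic response standing in for a springback measurement:
# strongly driven by factor 0, weakly by factor 1, not at all by factor 2.
design = full_factorial(3)
response = [3.0 * a + 0.5 * b + 0.0 * c for a, b, c in design]

effects = [main_effect(design, response, f) for f in range(3)]
print(effects)  # factor 0 dominates
```

Ranking the factors by the magnitude of their main effects is exactly the kind of screening that identifies, say, integration points and tool offset as significant while ruling out friction and Young's modulus.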
Entropies from Markov Models as Complexity Measures of Embedded Attractors
Directory of Open Access Journals (Sweden)
Julián D. Arias-Londoño
2015-06-01
Full Text Available This paper addresses the problem of measuring complexity from embedded attractors as a way to characterize changes in the dynamical behavior of different types of systems with quasi-periodic behavior by observing their outputs. With the aim of measuring the stability of the trajectories of the attractor over time, this paper proposes three new estimations of entropy that are derived from a Markov model of the embedded attractor. The proposed estimators are compared with traditional nonparametric entropy measures, such as approximate entropy, sample entropy and fuzzy entropy, which only take into account the spatial dimension of the trajectory. The method uses an unsupervised algorithm to find the principal curve, taken as the “profile trajectory”, to which the Markov model is fitted. The new entropy measures are evaluated using three synthetic experiments and three datasets of physiological signals. In terms of consistency and discrimination capabilities, the results show that the proposed measures perform better than the other entropy measures used for comparison.
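The Markov-model entropy idea above can be illustrated in a few lines. In this sketch, uniform binning of one embedding coordinate stands in for the paper's principal-curve "profile trajectory", and all parameters (embedding dimension, delay, number of states) are illustrative assumptions:

```python
import numpy as np

def delay_embed(x, dim=3, tau=1):
    """Takens delay embedding of a scalar series into R^dim."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def markov_entropy_rate(x, dim=3, tau=1, n_states=8):
    """Entropy rate (bits) of a Markov chain fitted to a discretized
    embedded attractor: predictable dynamics give low values."""
    emb = delay_embed(np.asarray(x, float), dim, tau)
    proj = emb[:, 0]                       # crude 1-D projection
    edges = np.linspace(proj.min(), proj.max(), n_states + 1)
    states = np.clip(np.digitize(proj, edges) - 1, 0, n_states - 1)
    T = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        T[a, b] += 1                       # transition counts
    row = T.sum(axis=1, keepdims=True)
    P = np.divide(T, row, out=np.zeros_like(T), where=row > 0)
    pi = row.ravel() / row.sum()           # empirical state occupancy
    with np.errstate(divide="ignore"):
        logP = np.where(P > 0, np.log2(P), 0.0)
    return -float((pi[:, None] * P * logP).sum())

# A noiseless sine is highly predictable; white noise is not.
t = np.linspace(0, 40 * np.pi, 4000)
h_sine = markov_entropy_rate(np.sin(t))
h_noise = markov_entropy_rate(np.random.default_rng(0).standard_normal(4000))
print(h_sine, h_noise)
```

The contrast between the two values is the point of such estimators: they measure the predictability of transitions along the trajectory, not just the spatial spread that approximate or sample entropy capture.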
Hoehn, John P.; Lupi, Frank; Kaplowitz, Michael D.
2010-01-01
Stated choice experiments about ecosystem changes involve complex information. This study examines whether the format in which ecosystem information is presented to respondents affects stated choice outcomes. Our analysis develops a utility-maximizing model to describe respondent behavior. The model shows how alternative questionnaire formats alter respondents' use of filtering heuristics and result in differences in preference estimates. Empirical results from a large-scale stated choice e...
Size and complexity in model financial systems
Arinaminpathy, Nimalan; Kapadia, Sujit; May, Robert M.
2012-01-01
The global financial crisis has precipitated an increasing appreciation of the need for a systemic perspective toward financial stability. For example: What role do large banks play in systemic risk? How should capital adequacy standards recognize this role? How is stability shaped by concentration and diversification in the financial system? We explore these questions using a deliberately simplified, dynamic model of a banking system that combines three different channels for direct transmission of contagion from one bank to another: liquidity hoarding, asset price contagion, and the propagation of defaults via counterparty credit risk. Importantly, we also introduce a mechanism for capturing how swings in “confidence” in the system may contribute to instability. Our results highlight that the importance of relatively large, well-connected banks in system stability scales more than proportionately with their size: the impact of their collapse arises not only from their connectivity, but also from their effect on confidence in the system. Imposing tougher capital requirements on larger banks than smaller ones can thus enhance the resilience of the system. Moreover, these effects are more pronounced in more concentrated systems, and continue to apply, even when allowing for potential diversification benefits that may be realized by larger banks. We discuss some tentative implications for policy, as well as conceptual analogies in ecosystem stability and in the control of infectious diseases. PMID:23091020
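The default-propagation channel described above can be reduced to a short fixed-point iteration. This sketch covers only the counterparty credit-risk channel, not the liquidity-hoarding, asset-price, or confidence mechanisms of the paper, and all balance-sheet numbers are illustrative:

```python
import random

def default_cascade(n_banks, capital, exposures, initial_default):
    """Counterparty-loss cascade: a bank defaults once its losses on
    loans to defaulted counterparties exceed its capital buffer.
    exposures[i][j] = amount bank i has lent to bank j."""
    defaulted = {initial_default}
    changed = True
    while changed:
        changed = False
        for i in range(n_banks):
            if i in defaulted:
                continue
            loss = sum(exposures[i][j] for j in defaulted)
            if loss > capital[i]:
                defaulted.add(i)
                changed = True
    return defaulted

random.seed(1)
n = 20
capital = [1.0] * n                       # illustrative buffers
exposures = [[0.6 if random.random() < 0.15 and i != j else 0.0
              for j in range(n)] for i in range(n)]
hit = default_cascade(n, capital, exposures, initial_default=0)
print(len(hit), "of", n, "banks default")
```

With these numbers a bank survives the loss of one counterparty but not two, so whether the initial failure cascades depends on the network's connectivity, which is the structural point the paper makes about large, well-connected banks.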
Algebraic computability and enumeration models recursion theory and descriptive complexity
Nourani, Cyrus F
2016-01-01
This book, Algebraic Computability and Enumeration Models: Recursion Theory and Descriptive Complexity, presents new techniques with functorial models to address important areas on pure mathematics and computability theory from the algebraic viewpoint. The reader is first introduced to categories and functorial models, with Kleene algebra examples for languages. Functorial models for Peano arithmetic are described toward important computational complexity areas on a Hilbert program, leading to computability with initial models. Infinite language categories are also introduced to explain descriptive complexity with recursive computability with admissible sets and urelements. Algebraic and categorical realizability is staged on several levels, addressing new computability questions with omitting types realizably. Further applications to computing with ultrafilters on sets and Turing degree computability are examined. Functorial models computability is presented with algebraic trees realizing intuitionistic type...
Applications of Nonlinear Dynamics Model and Design of Complex Systems
In, Visarath; Palacios, Antonio
2009-01-01
This edited book is aimed at interdisciplinary, device-oriented applications of nonlinear science theory and methods in complex systems, in particular applications directed to nonlinear phenomena with space and time characteristics. Examples include: complex networks of magnetic sensor systems, coupled nano-mechanical oscillators, nano-detectors, microscale devices, stochastic resonance in multi-dimensional chaotic systems, biosensors, and stochastic signal quantization. “Applications of Nonlinear Dynamics: Model and Design of Complex Systems” brings together the work of scientists and engineers who are applying ideas and methods from nonlinear dynamics to design and fabricate complex systems.
Coping with Complexity Model Reduction and Data Analysis
Gorban, Alexander N
2011-01-01
This volume contains the extended version of selected talks given at the international research workshop 'Coping with Complexity: Model Reduction and Data Analysis', Ambleside, UK, August 31 - September 4, 2009. This book is deliberately broad in scope and aims at promoting new ideas and methodological perspectives. The topics of the chapters range from theoretical analysis of complex and multiscale mathematical models to applications in e.g., fluid dynamics and chemical kinetics.
Mathematical Models to Determine Stable Behavior of Complex Systems
Sumin, V. I.; Dushkin, A. V.; Smolentseva, T. E.
2018-05-01
The paper analyzes the possibility of predicting the functioning of a complex dynamic system with a significant amount of circulating information and a large number of random factors affecting its operation. The functioning of the complex dynamic system is described in terms of chaotic states, self-organized criticality, and bifurcations. The problem may be addressed by modeling such systems as dynamic ones, taking strange attractors into account, without resorting to stochastic models.
Understanding complex urban systems multidisciplinary approaches to modeling
Gurr, Jens; Schmidt, J
2014-01-01
Understanding Complex Urban Systems takes as its point of departure the insight that the challenges of global urbanization and the complexity of urban systems cannot be understood – let alone ‘managed’ – by sectoral and disciplinary approaches alone. But while there has recently been significant progress in broadening and refining the methodologies for the quantitative modeling of complex urban systems, in deepening the theoretical understanding of cities as complex systems, or in illuminating the implications for urban planning, there is still a lack of well-founded conceptual thinking on the methodological foundations and the strategies of modeling urban complexity across the disciplines. Bringing together experts from the fields of urban and spatial planning, ecology, urban geography, real estate analysis, organizational cybernetics, stochastic optimization, and literary studies, as well as specialists in various systems approaches and in transdisciplinary methodologies of urban analysis, the volum...
Dynamic complexities in a parasitoid-host-parasitoid ecological model
International Nuclear Information System (INIS)
Yu Hengguo; Zhao Min; Lv Songjuan; Zhu Lili
2009-01-01
Chaotic dynamics have been observed in a wide range of population models. In this study, the complex dynamics in a discrete-time ecological model of parasitoid-host-parasitoid are presented. The model shows that the superiority coefficient not only stabilizes the dynamics, but may strongly destabilize them as well. Many forms of complex dynamics were observed, including pitchfork bifurcation with quasi-periodicity, period-doubling cascade, chaotic crisis, chaotic bands with narrow or wide periodic window, intermittent chaos, and supertransient behavior. Furthermore, computation of the largest Lyapunov exponent demonstrated the chaotic dynamic behavior of the model.
Dynamic complexities in a parasitoid-host-parasitoid ecological model
Energy Technology Data Exchange (ETDEWEB)
Yu Hengguo [School of Mathematic and Information Science, Wenzhou University, Wenzhou, Zhejiang 325035 (China); Zhao Min [School of Life and Environmental Science, Wenzhou University, Wenzhou, Zhejiang 325027 (China)], E-mail: zmcn@tom.com; Lv Songjuan; Zhu Lili [School of Mathematic and Information Science, Wenzhou University, Wenzhou, Zhejiang 325035 (China)
2009-01-15
Chaotic dynamics have been observed in a wide range of population models. In this study, the complex dynamics in a discrete-time ecological model of parasitoid-host-parasitoid are presented. The model shows that the superiority coefficient not only stabilizes the dynamics, but may strongly destabilize them as well. Many forms of complex dynamics were observed, including pitchfork bifurcation with quasi-periodicity, period-doubling cascade, chaotic crisis, chaotic bands with narrow or wide periodic window, intermittent chaos, and supertransient behavior. Furthermore, computation of the largest Lyapunov exponent demonstrated the chaotic dynamic behavior of the model.
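The largest Lyapunov exponent invoked in the abstract above can be estimated for any smooth one-dimensional map by averaging log|f'| along a typical orbit. A minimal sketch follows, using the logistic map as a stand-in (the actual parasitoid-host-parasitoid model is higher-dimensional and is not reproduced here):

```python
import math

def largest_lyapunov(f, df, x0, n_iter=50_000, n_transient=1000):
    """Largest Lyapunov exponent of a 1-D map: the orbit average of
    log|f'(x)|. Positive values indicate chaotic dynamics."""
    x = x0
    for _ in range(n_transient):      # discard the transient
        x = f(x)
    acc = 0.0
    for _ in range(n_iter):
        acc += math.log(abs(df(x)))
        x = f(x)
    return acc / n_iter

# Logistic map at r = 4, a standard chaotic benchmark whose exponent
# is known analytically to equal ln 2.
f = lambda x: 4.0 * x * (1.0 - x)
df = lambda x: 4.0 - 8.0 * x
lam = largest_lyapunov(f, df, x0=0.1)
print(lam)
```

For multi-dimensional maps the same idea applies to the Jacobian along the orbit, with periodic re-orthonormalization of tangent vectors to keep the computation stable.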
Research on the experience of leading foreign countries in managing the building complex
Borovik, Yu
2010-01-01
The article explores the experience of leading foreign countries in managing the building industry and the possibilities of applying it to the management of the transport construction complex of Ukraine.
Hohlraum modeling for opacity experiments on the National Ignition Facility
Dodd, E. S.; DeVolder, B. G.; Martin, M. E.; Krasheninnikova, N. S.; Tregillis, I. L.; Perry, T. S.; Heeter, R. F.; Opachich, Y. P.; Moore, A. S.; Kline, J. L.; Johns, H. M.; Liedahl, D. A.; Cardenas, T.; Olson, R. E.; Wilde, B. H.; Urbatsch, T. J.
2018-06-01
This paper discusses the modeling of experiments that measure iron opacity in local thermodynamic equilibrium (LTE) using laser-driven hohlraums at the National Ignition Facility (NIF). A previous set of experiments fielded at Sandia's Z facility [Bailey et al., Nature 517, 56 (2015)] have shown up to factors of two discrepancies between the theory and experiment, casting doubt on the validity of the opacity models. The purpose of the new experiments is to make corroborating measurements at the same densities and temperatures, with the initial measurements made at a temperature of 160 eV and an electron density of 0.7 × 10^22 cm^-3. The X-ray hot spots of a laser-driven hohlraum are not in LTE, and the iron must be shielded from a direct line-of-sight to obtain the data [Perry et al., Phys. Rev. B 54, 5617 (1996)]. This shielding is provided either with the internal structure (e.g., baffles) or external wall shapes that divide the hohlraum into a laser-heated portion and an LTE portion. In contrast, most inertial confinement fusion hohlraums are simple cylinders lacking complex gold walls, and the design codes are not typically applied to targets like those for the opacity experiments. We will discuss the initial basis for the modeling using LASNEX, and the subsequent modeling of five different hohlraum geometries that have been fielded on the NIF to date. This includes a comparison of calculated and measured radiation temperatures.
A marketing mix model for a complex and turbulent environment
Directory of Open Access Journals (Sweden)
R. B. Mason
2007-12-01
Full Text Available Purpose: This paper is based on the proposition that the choice of marketing tactics is determined, or at least significantly influenced, by the nature of the company's external environment. It aims to illustrate the type of marketing mix tactics that are suggested for a complex and turbulent environment when marketing and the environment are viewed through a chaos and complexity theory lens. Design/Methodology/Approach: Since chaos and complexity theories are proposed as a good means of understanding the dynamics of complex and turbulent markets, a comprehensive review and analysis of literature on the marketing mix and marketing tactics from a chaos and complexity viewpoint was conducted. From this literature review, a marketing mix model was conceptualised. Findings: A marketing mix model considered appropriate for success in complex and turbulent environments was developed. In such environments, the literature suggests destabilising marketing activities are more effective, whereas stabilising type activities are more effective in simple, stable environments. Therefore the model proposes predominantly destabilising type tactics as appropriate for a complex and turbulent environment such as is currently being experienced in South Africa. Implications: This paper is of benefit to marketers by emphasising a new way to consider the future marketing activities of their companies. How this model can assist marketers and suggestions for research to develop and apply this model are provided. It is hoped that the model suggested will form the basis of empirical research to test its applicability in the turbulent South African environment. Originality/Value: Since businesses and markets are complex adaptive systems, using complexity theory to understand how to cope in complex, turbulent environments is necessary, but has not been widely researched. In fact, most chaos and complexity theory work in marketing has concentrated on marketing strategy, with
Some atmospheric tracer experiments in complex terrain at LASL: experimental design and data
International Nuclear Information System (INIS)
Archuleta, J.; Barr, S.; Clements, W.E.; Gedayloo, T.; Wilson, S.K.
1978-03-01
Two series of atmospheric tracer experiments were conducted in complex terrain situations in and around the Los Alamos Scientific Laboratory. Fluorescent particle tracers were used to investigate nighttime drainage flow in Los Alamos Canyon and daytime flow across the local canyon-mesa complex. This report describes the details of these experiments and presents a summary of the data collected. A subsequent report will discuss the analysis of these data
Generalized complex geometry, generalized branes and the Hitchin sigma model
International Nuclear Information System (INIS)
Zucchini, Roberto
2005-01-01
Hitchin's generalized complex geometry has been shown to be relevant in compactifications of superstring theory with fluxes and is expected to lead to a deeper understanding of mirror symmetry. Gualtieri's notion of a generalized complex submanifold seems to be a natural candidate for the description of branes in this context. Recently, we introduced a Batalin-Vilkovisky field theoretic realization of generalized complex geometry, the Hitchin sigma model, extending the well-known Poisson sigma model. In this paper, exploiting Gualtieri's formalism, we incorporate branes into the model. A detailed study of the boundary conditions obeyed by the world sheet fields is provided. Finally, it is found that, when branes are present, the classical Batalin-Vilkovisky cohomology contains an extra sector that is related non-trivially to a novel cohomology associated with the branes as generalized complex submanifolds. (author)
Cooling tower plume - model and experiment
Cizek, Jan; Gemperle, Jiri; Strob, Miroslav; Nozicka, Jiri
The paper presents a simple model of the so-called steam plume, which in many cases forms during the operation of the evaporative cooling systems of power plants or large technological units. The model is based on semi-empirical equations that describe the behaviour of a mixture of two gases in the case of a free jet stream. In the conclusion of the paper, a simple experiment is presented through which the results of the proposed model will be validated in subsequent work.
Cooling tower plume - model and experiment
Directory of Open Access Journals (Sweden)
Cizek Jan
2017-01-01
Full Text Available The paper presents a simple model of the so-called steam plume, which in many cases forms during the operation of the evaporative cooling systems of power plants or large technological units. The model is based on semi-empirical equations that describe the behaviour of a mixture of two gases in the case of a free jet stream. In the conclusion of the paper, a simple experiment is presented through which the results of the proposed model will be validated in subsequent work.
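Semi-empirical free-jet relations of the kind such a plume model builds on have a classic inverse-distance form for the centerline decay. A minimal sketch follows; the decay constant and the geometry are illustrative assumptions, not the authors' calibration:

```python
def centerline_velocity(u0, d, x, b=6.0):
    """Far-field centerline velocity of a round free jet: the classic
    semi-empirical decay u_c/u0 = b*d/x, valid beyond the potential
    core (x > b*d); b of order 5-7 is typical of experiments."""
    if x <= b * d:
        return u0                 # inside the potential core
    return u0 * b * d / x

u0, d = 10.0, 0.5                 # exit velocity [m/s], nozzle diameter [m]
for x in (1.0, 5.0, 10.0, 20.0):
    print(x, centerline_velocity(u0, d, x))
```

The same 1/x structure governs centerline dilution of an admixed gas, which is why such relations are a natural backbone for a two-gas plume model.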
Numerical Modeling of Fluid-Structure Interaction with Rheologically Complex Fluids
Chen, Xingyuan
2014-01-01
In the present work the interaction between rheologically complex fluids and elastic solids is studied by means of numerical modeling. The investigated complex fluids are non-Newtonian viscoelastic fluids. The fluid-structure interaction (FSI) of this kind is frequently encountered in injection molding, food processing, pharmaceutical engineering and biomedicine. The investigation via experiments is costly, difficult or in some cases, even impossible. Therefore, research is increasingly aided...
Theory Meets Experiment: Metal Ion Effects in HCV Genomic RNA Kissing Complex Formation
Directory of Open Access Journals (Sweden)
Li-Zhen Sun
2017-12-01
Full Text Available The long-range base pairing between the 5BSL3.2 and 3′X domains in hepatitis C virus (HCV) genomic RNA is essential for viral replication. Experimental evidence points to the critical role of metal ions, especially Mg2+ ions, in the formation of the 5BSL3.2:3′X kissing complex. Furthermore, NMR studies suggested an important ion-dependent conformational switch in the kissing process. However, for a long time, mechanistic understanding of the ion effects for the process has been unclear. Recently, computational modeling based on the Vfold RNA folding model and the partial charge-based tightly bound ion (PCTBI) model, in combination with the NMR data, revealed novel physical insights into the role of metal ions in the 5BSL3.2-3′X system. The use of the PCTBI model, which accounts for the ion correlation and fluctuation, gives reliable predictions for the ion-dependent electrostatic free energy landscape and ion-induced population shift of the 5BSL3.2:3′X kissing complex. Furthermore, the predicted ion binding sites offer insights about how ion-RNA interactions shift the conformational equilibrium. The integrated theory-experiment study shows that Mg2+ ions may be essential for HCV viral replication. Moreover, the observed Mg2+-dependent conformational equilibrium may be an adaptive property of the HCV genomic RNA such that the equilibrium is optimized to the intracellular Mg2+ concentration in liver cells for efficient viral replication.
Thermal experiments in the ADS target model
International Nuclear Information System (INIS)
Efanov, A.D.; Orlov, Yu.I.; Sorokin, A.P.; Ivanov, E.F.; Bogoslovskaya, G.P.; Li, N.
2002-01-01
Experiments were conducted to develop the target heat model and a method for investigating heat exchange in the target, with the aim of analyzing the thermomechanical and strength characteristics of the device; experimental data on the temperature distribution in the coolant and the membrane were obtained. The data demonstrate that the temperature non-uniformities of the membrane and coolant are connected with the variability of the temperature distribution near the membrane. Notable features of the experiment are the maximal temperature oscillations at the highest point of the membrane and the most energetic temperature oscillations in the range 0-1 Hz.
Complexation and molecular modeling studies of europium(III)-gallic acid-amino acid complexes.
Taha, Mohamed; Khan, Imran; Coutinho, João A P
2016-04-01
With many metal-based drugs extensively used today in the treatment of cancer, attention has focused on the development of new coordination compounds with antitumor activity, with europium(III) complexes recently introduced as novel anticancer drugs. The aim of this work is to design new Eu(III) complexes with gallic acid, an antioxidant phenolic compound. Gallic acid was chosen because it shows anticancer activity without harming healthy cells. As an antioxidant, it helps to protect human cells against the oxidative damage that is implicated in DNA damage, cancer, and accelerated cell aging. In this work, the formation of binary and ternary complexes of Eu(III) with gallic acid as the primary ligand and the amino acids alanine, leucine, isoleucine, and tryptophan was studied by glass electrode potentiometry in aqueous solution containing 0.1 M NaNO3 at (298.2 ± 0.1) K. Their overall stability constants were evaluated and the concentration distributions of the complex species in solution were calculated. The protonation constants of gallic acid and the amino acids were also determined under our experimental conditions and compared with those predicted by the conductor-like screening model for realistic solvation (COSMO-RS). The geometries of the Eu(III)-gallic acid complexes were characterized by density functional theory (DFT). Spectroscopic UV-visible and photoluminescence measurements were carried out to confirm the formation of Eu(III)-gallic acid complexes in aqueous solutions.
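Overall stability constants of the kind evaluated above feed directly into speciation calculations. A minimal sketch for a single 1:1 metal-ligand equilibrium follows; the constant and concentrations are illustrative values, not those determined in the paper:

```python
import math

def complex_conc(beta, m_total, l_total):
    """Equilibrium concentration of a 1:1 complex ML given the overall
    stability constant beta = [ML]/([M][L]) and total (analytical)
    concentrations. From the two mass balances M_T = [M] + c and
    L_T = [L] + c, c satisfies beta*c^2 - (beta*(M_T+L_T)+1)*c
    + beta*M_T*L_T = 0; the smaller root is the physical one."""
    a = beta
    b = -(beta * (m_total + l_total) + 1.0)
    c0 = beta * m_total * l_total
    disc = math.sqrt(b * b - 4.0 * a * c0)
    return (-b - disc) / (2.0 * a)

# A strong complex (log beta = 8): nearly all of the limiting metal
# ends up bound to the ligand.
c = complex_conc(beta=1e8, m_total=1e-3, l_total=2e-3)
print(c)
```

Repeating this over a grid of pH values, with the protonation constants folded into conditional stability constants, is how the concentration-distribution diagrams mentioned in the abstract are produced.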
Modeling Complex Nesting Structures in International Business Research
DEFF Research Database (Denmark)
Nielsen, Bo Bernhard; Nielsen, Sabina
2013-01-01
hierarchical random coefficient models (RCM) are often used for the analysis of multilevel phenomena, IB issues often result in more complex nested structures. This paper illustrates how cross-nested multilevel modeling allowing for predictor variables and cross-level interactions at multiple (crossed) levels...
Foundations for Streaming Model Transformations by Complex Event Processing.
Dávid, István; Ráth, István; Varró, Dániel
2018-01-01
Streaming model transformations represent a novel class of transformations to manipulate models whose elements are continuously produced or modified in high volume and with rapid rate of change. Executing streaming transformations requires efficient techniques to recognize activated transformation rules over a live model and a potentially infinite stream of events. In this paper, we propose foundations of streaming model transformations by innovatively integrating incremental model query, complex event processing (CEP) and reactive (event-driven) transformation techniques. Complex event processing allows to identify relevant patterns and sequences of events over an event stream. Our approach enables event streams to include model change events which are automatically and continuously populated by incremental model queries. Furthermore, a reactive rule engine carries out transformations on identified complex event patterns. We provide an integrated domain-specific language with precise semantics for capturing complex event patterns and streaming transformations together with an execution engine, all of which is now part of the Viatra reactive transformation framework. We demonstrate the feasibility of our approach with two case studies: one in an advanced model engineering workflow; and one in the context of on-the-fly gesture recognition.
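The "followed-by within a window" patterns central to complex event processing can be matched with a small amount of state. A minimal sketch in plain Python follows; this is not the Viatra DSL, and the event types and timestamps are illustrative:

```python
from collections import deque

def match_followed_by(stream, first, second, window):
    """Minimal complex-event pattern: emit a match whenever an event of
    type `second` arrives within `window` time units after an event of
    type `first`. Events are (timestamp, type) pairs in time order."""
    pending = deque()            # timestamps of unmatched `first` events
    matches = []
    for ts, kind in stream:
        # drop pending `first` events whose window has expired
        while pending and ts - pending[0] > window:
            pending.popleft()
        if kind == first:
            pending.append(ts)
        elif kind == second and pending:
            matches.append((pending.popleft(), ts))
    return matches

# Model-change events, as an incremental query engine might emit them.
events = [(1, "created"), (2, "modified"), (9, "created"),
          (30, "modified"), (31, "deleted")]
print(match_followed_by(events, "created", "modified", window=5))
```

Bounding the pending set by the window is what keeps such matchers viable over the potentially infinite event streams the abstract describes.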
Modeling the Nab Experiment Electronics in SPICE
Blose, Alexander; Crawford, Christopher; Sprow, Aaron; Nab Collaboration
2017-09-01
The goal of the Nab experiment is to measure the neutron beta-decay coefficients a (the electron-neutrino correlation) and b (the Fierz interference term), both to precisely test the Standard Model and to probe for Beyond the Standard Model physics. In this experiment, protons from the beta decay of the neutron are guided through a magnetic field into a silicon detector. Event reconstruction will be achieved via time-of-flight measurement for the proton and direct measurement of the coincident electron energy in highly segmented silicon detectors, so the amplification circuitry needs to preserve fast timing, provide good amplitude resolution, and be packaged in a high-density format. We have designed a SPICE simulation to model the full electronics chain for the Nab experiment in order to understand the contributions of each stage and optimize them for performance. Additionally, analytic solutions to each of the components have been determined where available. We will present a comparison of the output from the SPICE model, the analytic solutions, and empirically determined data.
Universal correlators for multi-arc complex matrix models
International Nuclear Information System (INIS)
Akemann, G.
1997-01-01
The correlation functions of the multi-arc complex matrix model are shown to be universal for any finite number of arcs. The universality classes are characterized by the support of the eigenvalue density and are conjectured to fall into the same classes as the ones recently found for the Hermitian model. This is explicitly shown to be true for the case of two arcs, apart from the known result for one arc. The basic tool is the iterative solution of the loop equation for the complex matrix model with multiple arcs, which provides all multi-loop correlators up to an arbitrary genus. Explicit results for genus one are given for any number of arcs. The two-arc solution is investigated in detail, including the double-scaling limit. In addition universal expressions for the string susceptibility are given for both the complex and Hermitian model. (orig.)
Modeling Hemispheric Detonation Experiments in 2-Dimensions
Energy Technology Data Exchange (ETDEWEB)
Howard, W M; Fried, L E; Vitello, P A; Druce, R L; Phillips, D; Lee, R; Mudge, S; Roeske, F
2006-06-22
Experiments have been performed with LX-17 (92.5% TATB and 7.5% Kel-F 800 binder) to study scaling of detonation waves using a dimensional scaling in a hemispherical divergent geometry. We model these experiments using an arbitrary Lagrange-Eulerian (ALE3D) hydrodynamics code, with reactive flow models based on the thermo-chemical code, Cheetah. The thermo-chemical code Cheetah provides a pressure-dependent kinetic rate law, along with an equation of state based on exponential-6 fluid potentials for individual detonation product species, calibrated to high pressures ({approx} few Mbars) and high temperatures (20000 K). The parameters for these potentials are fit to a wide variety of experimental data, including shock, compression and sound speed data. For the un-reacted high explosive equation of state we use a modified Murnaghan form. We model the detonator (including the flyer plate) and initiation system in detail. The detonator is composed of LX-16, for which we use a program burn model. Steinberg-Guinan models are used for the metal components of the detonator. The booster and high explosive are LX-10 and LX-17, respectively. For both the LX-10 and LX-17, we use a pressure dependent rate law, coupled with a chemical equilibrium equation of state based on Cheetah. For LX-17, the kinetic model includes carbon clustering on the nanometer size scale.
Modeling variability in porescale multiphase flow experiments
Ling, Bowen; Bao, Jie; Oostrom, Mart; Battiato, Ilenia; Tartakovsky, Alexandre M.
2017-07-01
Microfluidic devices and porescale numerical models are commonly used to study multiphase flow in biological, geological, and engineered porous materials. In this work, we perform a set of drainage and imbibition experiments in six identical microfluidic cells to study the reproducibility of multiphase flow experiments. We observe significant variations in the experimental results, which are smaller during the drainage stage and larger during the imbibition stage. We demonstrate that these variations are due to sub-porescale geometry differences in microcells (because of manufacturing defects) and variations in the boundary condition (i.e., fluctuations in the injection rate inherent to syringe pumps). Computational simulations are conducted using commercial software STAR-CCM+, both with constant and randomly varying injection rates. Stochastic simulations are able to capture variability in the experiments associated with the varying pump injection rate.
Bim Automation: Advanced Modeling Generative Process for Complex Structures
Banfi, F.; Fai, S.; Brumana, R.
2017-08-01
The new paradigm of the complexity of modern and historic structures, which are characterised by complex forms, morphological and typological variables, is one of the greatest challenges for building information modelling (BIM). Generation of complex parametric models needs new scientific knowledge concerning new digital technologies. These elements are helpful to store a vast quantity of information during the life cycle of buildings (LCB). The latest developments of parametric applications do not provide advanced tools, resulting in time-consuming work for the generation of models. This paper presents a method capable of processing and creating complex parametric Building Information Models (BIM) with non-uniform rational basis splines (NURBS) and multiple levels of detail (Mixed and Reverse LoD) based on accurate 3D photogrammetric and laser scanning surveys. Complex 3D elements are converted into parametric BIM software and finite element applications (BIM to FEA) using specific exchange formats and new modelling tools. The proposed approach has been applied to different case studies: the BIM of a modern structure for the courtyard of West Block on Parliament Hill in Ottawa (Ontario) and the BIM of Masegra Castel in Sondrio (Italy), encouraging the dissemination and interaction of scientific results without losing information during the generative process.
Systems Engineering Metrics: Organizational Complexity and Product Quality Modeling
Mog, Robert A.
1997-01-01
Innovative organizational complexity and product quality models applicable to performance metrics for NASA-MSFC's Systems Analysis and Integration Laboratory (SAIL) missions and objectives are presented. An intensive research effort focuses on the synergistic combination of stochastic process modeling, nodal and spatial decomposition techniques, organizational and computational complexity, systems science and metrics, chaos, and proprietary statistical tools for accelerated risk assessment. This is followed by the development of a preliminary model, which is uniquely applicable and robust for quantitative purposes. Exercise of the preliminary model using a generic system hierarchy and the AXAF-I architectural hierarchy is provided. The Kendall test for positive dependence provides an initial verification and validation of the model. Finally, the research and development of the innovation is revisited, prior to peer review. This research and development effort results in near-term, measurable SAIL organizational and product quality methodologies, enhanced organizational risk assessment and evolutionary modeling results, and improved statistical quantification of SAIL productivity interests.
Background modeling for the GERDA experiment
Becerici-Schmidt, N.; Gerda Collaboration
2013-08-01
The neutrinoless double beta (0νββ) decay experiment GERDA at the LNGS of INFN has started physics data taking in November 2011. This paper presents an analysis aimed at understanding and modeling the observed background energy spectrum, which plays an essential role in searches for a rare signal like 0νββ decay. A very promising preliminary model has been obtained, with the systematic uncertainties still under study. Important information can be deduced from the model such as the expected background and its decomposition in the signal region. According to the model the main background contributions around Qββ come from 214Bi, 228Th, 42K, 60Co and α emitting isotopes in the 226Ra decay chain, with a fraction depending on the assumed source positions.
Background modeling for the GERDA experiment
Energy Technology Data Exchange (ETDEWEB)
Becerici-Schmidt, N. [Max-Planck-Institut für Physik, München (Germany); Collaboration: GERDA Collaboration
2013-08-08
The neutrinoless double beta (0νββ) decay experiment GERDA at the LNGS of INFN has started physics data taking in November 2011. This paper presents an analysis aimed at understanding and modeling the observed background energy spectrum, which plays an essential role in searches for a rare signal like 0νββ decay. A very promising preliminary model has been obtained, with the systematic uncertainties still under study. Important information can be deduced from the model such as the expected background and its decomposition in the signal region. According to the model the main background contributions around Q{sub ββ} come from {sup 214}Bi, {sup 228}Th, {sup 42}K, {sup 60}Co and α emitting isotopes in the {sup 226}Ra decay chain, with a fraction depending on the assumed source positions.
Complex groundwater flow systems as traveling agent models
Directory of Open Access Journals (Sweden)
Oliver López Corona
2014-10-01
Full Text Available Analyzing field data from pumping tests, we show that, as with many other natural phenomena, groundwater flow exhibits complex dynamics described by a 1/f power spectrum. This result is theoretically studied within an agent perspective. Using a traveling agent model, we prove that this statistical behavior emerges when the medium is complex. Some heuristic reasoning is provided to justify both spatial and dynamic complexity, as the result of the superposition of an infinite number of stochastic processes. Moreover, we show that this implies that non-Kolmogorovian probability is needed for its study, and we provide a set of new partial differential equations for groundwater flow.
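The 1/f signature mentioned above is usually diagnosed by fitting the slope of log power versus log frequency in a periodogram: slope near 0 means white noise, near -1 means flicker (1/f) dynamics, near -2 means a random walk. A stdlib Python sketch of that estimator (the naive O(n²) DFT is fine for short records; the random-walk example below only exercises the fit and is not the paper's data):

```python
import cmath, math, random

def periodogram(x):
    """Naive DFT periodogram (O(n^2); adequate for short records)."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]
    spec = []
    for k in range(1, n // 2):  # power at frequencies k/n
        z = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        spec.append(abs(z) ** 2 / n)
    return spec

def spectral_slope(x):
    """Least-squares slope of log power vs log frequency."""
    pts = [(math.log(k + 1), math.log(p))
           for k, p in enumerate(periodogram(x)) if p > 0]
    n = len(pts)
    mx = sum(a for a, _ in pts) / n
    my = sum(b for _, b in pts) / n
    num = sum((a - mx) * (b - my) for a, b in pts)
    den = sum((a - mx) ** 2 for a, _ in pts)
    return num / den

# A seeded random walk has theoretical spectral slope -2.
random.seed(1)
walk, s = [], 0.0
for _ in range(256):
    s += random.gauss(0, 1)
    walk.append(s)
print("spectral slope:", round(spectral_slope(walk), 2))
```

A 1/f process would sit roughly halfway between the white-noise and random-walk cases, which is why the exponent is a compact summary of "complexity" in such field data.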
Modelization of ratcheting in biaxial experiments
International Nuclear Information System (INIS)
Guionnet, C.
1989-08-01
A new unified viscoplastic constitutive equation has been developed in order to interpret ratcheting experiments on mechanical structures of fast reactors. The model is based essentially on a generalized Armstrong-Frederick equation for the kinematic variable; the coefficient of the dynamic recovery term in this equation is a function of both the instantaneous and the accumulated inelastic strain, which is allowed to vary in an appropriate manner in order to reproduce the experimental ratcheting rate. The validity of the model is verified by comparing predictions with experimental results for austenitic stainless steel (17-12 SPH) tubular specimens subjected to cyclic torsional loading under constant tensile stress at 600 °C. [fr]
Data assimilation and model evaluation experiment datasets
Lai, Chung-Cheng A.; Qian, Wen; Glenn, Scott M.
1994-01-01
The Institute for Naval Oceanography, in cooperation with Naval Research Laboratories and universities, executed the Data Assimilation and Model Evaluation Experiment (DAMEE) for the Gulf Stream region during fiscal years 1991-1993. Enormous effort has gone into the preparation of several high-quality and consistent datasets for model initialization and verification. This paper describes the preparation process, the temporal and spatial scopes, the contents, the structure, etc., of these datasets. The goal of DAMEE and the need of data for the four phases of experiment are briefly stated. The preparation of DAMEE datasets consisted of a series of processes: (1) collection of observational data; (2) analysis and interpretation; (3) interpolation using the Optimum Thermal Interpolation System package; (4) quality control and re-analysis; and (5) data archiving and software documentation. The data products from these processes included a time series of 3D fields of temperature and salinity, 2D fields of surface dynamic height and mixed-layer depth, analysis of the Gulf Stream and rings system, and bathythermograph profiles. To date, these are the most detailed and high-quality data for mesoscale ocean modeling, data assimilation, and forecasting research. Feedback from ocean modeling groups who tested this data was incorporated into its refinement. Suggestions for DAMEE data usages include (1) ocean modeling and data assimilation studies, (2) diagnosis and theoretical studies, and (3) comparisons with locally detailed observations.
Energy Technology Data Exchange (ETDEWEB)
Goldsby, Michael E.; Mayo, Jackson R.; Bhattacharyya, Arnab (Massachusetts Institute of Technology, Cambridge, MA); Armstrong, Robert C.; Vanderveen, Keith
2008-09-01
The goal of this research was to examine foundational methods, both computational and theoretical, that can improve the veracity of entity-based complex system models and increase confidence in their predictions for emergent behavior. The strategy was to seek insight and guidance from simplified yet realistic models, such as cellular automata and Boolean networks, whose properties can be generalized to production entity-based simulations. We have explored the usefulness of renormalization-group methods for finding reduced models of such idealized complex systems. We have prototyped representative models that are both tractable and relevant to Sandia mission applications, and quantified the effect of computational renormalization on the predictive accuracy of these models, finding good predictivity from renormalized versions of cellular automata and Boolean networks. Furthermore, we have theoretically analyzed the robustness properties of certain Boolean networks, relevant for characterizing organic behavior, and obtained precise mathematical constraints on systems that are robust to failures. In combination, our results provide important guidance for more rigorous construction of entity-based models, which currently are often devised in an ad-hoc manner. Our results can also help in designing complex systems with the goal of predictable behavior, e.g., for cybersecurity.
ANS main control complex three-dimensional computer model development
International Nuclear Information System (INIS)
Cleaves, J.E.; Fletcher, W.M.
1993-01-01
A three-dimensional (3-D) computer model of the Advanced Neutron Source (ANS) main control complex is being developed. The main control complex includes the main control room, the technical support center, the materials irradiation control room, computer equipment rooms, communications equipment rooms, cable-spreading rooms, and some support offices and breakroom facilities. The model will be used to provide facility designers and operations personnel with capabilities for fit-up/interference analysis, visual "walk-throughs" for optimizing maintainability, and human factors and operability analyses. It will be used to determine performance design characteristics, to generate construction drawings, and to integrate control room layout, equipment mounting, grounding equipment, electrical cabling, and utility services into ANS building designs. This paper describes the development of the initial phase of the 3-D computer model for the ANS main control complex and plans for its development and use.
Nostradamus 2014 prediction, modeling and analysis of complex systems
Suganthan, Ponnuthurai; Chen, Guanrong; Snasel, Vaclav; Abraham, Ajith; Rössler, Otto
2014-01-01
The prediction of the behavior of complex systems, and the analysis and modeling of their structure, is a vitally important problem in engineering, economics and, generally, in science today. Examples of such systems can be seen in the world around us (including our bodies) and of course in almost every scientific discipline, including such "exotic" domains as the earth's atmosphere, turbulent fluids, economics (exchange rates and stock markets), population growth, physics (control of plasma), information flow in social networks and its dynamics, chemistry and complex networks. To understand such complex dynamics, which often exhibit strange behavior, and to use it in research or industrial applications, it is paramount to create models of it. For this purpose there exists a rich spectrum of methods, from classical ones such as ARMA models or the Box-Jenkins method to modern ones like evolutionary computation, neural networks, fuzzy logic, geometry, and deterministic chaos, amongst others. This proceedings book is a collection of accepted ...
Historical and idealized climate model experiments: an EMIC intercomparison
DEFF Research Database (Denmark)
Eby, M.; Weaver, A. J.; Alexander, K.
2012-01-01
Both historical and idealized climate model experiments are performed with a variety of Earth System Models of Intermediate Complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE...... and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land-use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures...... the Medieval Climate Anomaly and the Little Ice Age estimated from paleoclimate reconstructions. This in turn could be a result of errors in the reconstructions of volcanic and/or solar radiative forcing used to drive the models or the incomplete representation of certain processes or variability within...
The effects of model and data complexity on predictions from species distributions models
DEFF Research Database (Denmark)
García-Callejas, David; Bastos, Miguel
2016-01-01
How complex does a model need to be to provide useful predictions is a matter of continuous debate across environmental sciences. In the species distributions modelling literature, studies have demonstrated that more complex models tend to provide better fits. However, studies have also shown...... that predictive performance does not always increase with complexity. Testing of species distributions models is challenging because independent data for testing are often lacking, but a more general problem is that model complexity has never been formally described in such studies. Here, we systematically...
Modeling geophysical complexity: a case for geometric determinism
Directory of Open Access Journals (Sweden)
C. E. Puente
2007-01-01
Full Text Available It has been customary in the last few decades to employ stochastic models to represent complex data sets encountered in geophysics, particularly in hydrology. This article reviews a deterministic geometric procedure for data modeling, one that represents whole data sets as derived distributions of simple multifractal measures via fractal functions. It is shown how such a procedure may lead to faithful holistic representations of existing geophysical data sets that, while complementing existing representations via stochastic methods, may also provide a compact language for geophysical complexity. The implications of these ideas, both scientific and philosophical, are stressed.
Deterministic ripple-spreading model for complex networks.
Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Hines, Evor L; Di Paolo, Ezequiel
2011-04-01
This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes, and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input, they cannot output a unique topology. By contrast, the proposed ripple-spreading model can uniquely determine the final network topology, and at the same time, the stochastic feature of complex networks is captured by randomly initializing ripple-spreading related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading related parameters to precisely describe a network topology, which is more memory efficient when compared with a traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has a very good potential for both extensions and applications.
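The determinism claimed in point (ii) is easy to exhibit in a toy version: with fixed node positions and parameters, the generated edge set is unique. The stdlib Python sketch below is a loose caricature of the ripple idea (an expanding front whose energy decays with distance, nodes activating above a threshold); the specific rules are my simplification, not the authors' algorithm:

```python
import math, random

def ripple_network(points, decay=0.9, init_energy=3.0, threshold=1.0):
    """Deterministic toy ripple-spreading generator: a ripple expands from
    node 0, its energy decays exponentially with travelled distance, and a
    node links to the epicenter when the arriving energy still exceeds its
    activation threshold. Same inputs always yield the same edge set."""
    edges = set()
    order = sorted(range(1, len(points)),
                   key=lambda i: math.dist(points[0], points[i]))
    for i in order:  # the ripple reaches nodes in order of distance
        r = math.dist(points[0], points[i])
        if init_energy * decay ** r > threshold:
            edges.add((0, i))
    return edges

# Stochasticity enters only through the initial conditions, as in (ii):
random.seed(7)
pts = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(12)]
print(len(ripple_network(pts)), "edges")
```

Rerunning `ripple_network(pts)` on the same points returns an identical edge set, which is the property that distinguishes this family of models from stochastic attachment rules.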
Modelling wetting and drying effects over complex topography
Tchamen, G. W.; Kahawita, R. A.
1998-06-01
The numerical simulation of free surface flows that alternately flood and dry out over complex topography is a formidable task. The model equation set generally used for this purpose is the two-dimensional (2D) shallow water wave model (SWWM). Simplified forms of this system, such as the zero inertia model (ZIM), can accommodate specific situations like slowly evolving floods over gentle slopes. Classical numerical techniques, such as finite differences (FD) and finite elements (FE), have been used for their integration over the last 20-30 years. Most of these schemes experience some kind of instability and usually fail when some particular domain under specific flow conditions is treated. The numerical instability generally manifests itself in the form of an unphysical negative depth that subsequently causes a run-time error at the computation of the celerity and/or the friction slope. The origins of this behaviour are diverse and may generally be attributed to: (1) the use of a scheme that is inappropriate for such complex flow conditions (mixed regimes); (2) improper treatment of a friction source term or a large local curvature in topography; (3) mishandling of a cell that is partially wet/dry. In this paper, an attempt has been made to gain a better understanding of the genesis of the instabilities, their implications and the limits to the proposed solutions. Frequently, the enforcement of robustness is made at the expense of accuracy. The need for a positive scheme, that is, a scheme that always predicts positive depths when run within the constraints of some practical stability limits, is fundamental. It is shown here how a carefully chosen scheme (in this case, an adaptation of the solver to the SWWM) can preserve positive values of water depth under both explicit and implicit time integration, high velocities and complex topography that may include dry areas. However, the treatment of the source terms: friction, Coriolis and particularly the bathymetry
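One standard remedy for the negative-depth failure described above is a flux limiter that caps each cell's outgoing volume at the volume the cell actually holds. The 1-D stdlib Python sketch below (upwind fluxes, closed boundaries) is a generic textbook-style positivity safeguard under assumed simplifications, not the paper's solver:

```python
def limited_fluxes(h, u, dx, dt):
    """One explicit update of water depth h (cell-centered, with
    cell-centered velocities u) using upwind face fluxes, with each cell's
    outgoing flux scaled so the update can never drain more water than the
    cell contains. Faces 0 and n are closed walls (zero flux)."""
    n = len(h)
    flux = [0.0] * (n + 1)  # flux[i] across the face between cells i-1, i
    for i in range(1, n):
        up = i - 1 if 0.5 * (u[i - 1] + u[i]) >= 0 else i  # upwind cell
        flux[i] = h[up] * 0.5 * (u[i - 1] + u[i])
    for i in range(n):  # limit outgoing volume to the available volume
        out = max(flux[i + 1], 0.0) + max(-flux[i], 0.0)
        if out * dt / dx > h[i] > 0.0:
            scale = h[i] * dx / (dt * out)
            if flux[i + 1] > 0:
                flux[i + 1] *= scale
            if flux[i] < 0:
                flux[i] *= scale
    return [h[i] - dt / dx * (flux[i + 1] - flux[i]) for i in range(n)]

# A shallow cell upstream of a fast flow: without the limiter the first
# cell would go negative; with it, the cell is drained to exactly zero.
print(limited_fluxes([0.1, 1.0], [5.0, 5.0], dx=1.0, dt=0.5))
```

Because each face flux is "outgoing" for exactly one cell, every cell's final depth is bounded below by zero while total volume is conserved, which is precisely the positivity property the abstract calls fundamental.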
Argonne Bubble Experiment Thermal Model Development III
Energy Technology Data Exchange (ETDEWEB)
Buechler, Cynthia Eileen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2018-01-11
This report describes the continuation of the work reported in “Argonne Bubble Experiment Thermal Model Development” and “Argonne Bubble Experiment Thermal Model Development II”. The experiment was performed at Argonne National Laboratory (ANL) in 2014. A rastered 35 MeV electron beam deposited power in a solution of uranyl sulfate, generating heat and radiolytic gas bubbles. Irradiations were performed at beam power levels between 6 and 15 kW. Solution temperatures were measured by thermocouples, and gas bubble behavior was recorded. The previous report described the Monte-Carlo N-Particle (MCNP) calculations and Computational Fluid Dynamics (CFD) analysis performed on the as-built solution vessel geometry. The CFD simulations in the current analysis were performed using Ansys Fluent, Ver. 17.2. The same power profiles determined from MCNP calculations in earlier work were used for the 12 and 15 kW simulations. The primary goal of the current work is to calculate the temperature profiles for the 12 and 15 kW cases using reasonable estimates for the gas generation rate, based on images of the bubbles recorded during the irradiations. Temperature profiles resulting from the CFD calculations are compared to experimental measurements.
Energy Technology Data Exchange (ETDEWEB)
Usoltsev, Ilya; Eichler, Robert; Tuerler, Andreas [Paul Scherrer Institut (PSI), Villigen (Switzerland); Bern Univ. (Switzerland)
2016-11-01
The decomposition behavior of group 6 metal hexacarbonyl complexes (M(CO){sub 6}) in a tubular flow reactor is simulated. A microscopic Monte-Carlo based model is presented for assessing the first bond dissociation enthalpy of M(CO){sub 6} complexes. The suggested approach superimposes a microscopic model of gas adsorption chromatography with a first-order heterogeneous decomposition model. The experimental data on the decomposition of Mo(CO){sub 6} and W(CO){sub 6} are successfully simulated by introducing available thermodynamic data. Thermodynamic data predicted by relativistic density functional theory is used in our model to deduce the most probable experimental behavior of the corresponding Sg carbonyl complex. Thus, the design of a chemical experiment with Sg(CO){sub 6} is suggested, which is sensitive to benchmark our theoretical understanding of the bond stability in carbonyl compounds of the heaviest elements.
System Testability Analysis for Complex Electronic Devices Based on Multisignal Model
International Nuclear Information System (INIS)
Long, B; Tian, S L; Huang, J G
2006-01-01
It is necessary to consider system testability problems for electronic devices during their early design phase, because modern electronic devices become smaller and more highly integrated while their function and structure grow more complex. The multisignal model, which combines the advantages of structure models and dependency models, is used to describe the fault dependency relationships of complex electronic devices, and the main testability indexes (including optimal test program, fault detection rate, fault isolation rate, etc.) used to evaluate testability, together with the corresponding algorithms, are given. The system testability analysis process is illustrated for a USB-GPIB interface circuit with the TEAMS toolbox. The experimental results show that the modelling method is simple, the computation is rapid, and the method is of practical significance for improving the diagnostic capability of complex electronic devices
Complex networks-based energy-efficient evolution model for wireless sensor networks
Energy Technology Data Exchange (ETDEWEB)
Zhu Hailin [Beijing Key Laboratory of Intelligent Telecommunications Software and Multimedia, Beijing University of Posts and Telecommunications, P.O. Box 106, Beijing 100876 (China)], E-mail: zhuhailin19@gmail.com; Luo Hong [Beijing Key Laboratory of Intelligent Telecommunications Software and Multimedia, Beijing University of Posts and Telecommunications, P.O. Box 106, Beijing 100876 (China); Peng Haipeng; Li Lixiang; Luo Qun [Information Secure Center, State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, P.O. Box 145, Beijing 100876 (China)
2009-08-30
Based on complex networks theory, we present two self-organized energy-efficient models for wireless sensor networks in this paper. The first model constructs the wireless sensor networks according to the connectivity and remaining energy of each sensor node, thus it can produce scale-free networks which have a performance of random error tolerance. In the second model, we not only consider the remaining energy, but also introduce the constraint of links to each node. This model can make the energy consumption of the whole network more balanced. Finally, we present the numerical experiments of the two models.
Complex networks-based energy-efficient evolution model for wireless sensor networks
International Nuclear Information System (INIS)
Zhu Hailin; Luo Hong; Peng Haipeng; Li Lixiang; Luo Qun
2009-01-01
Based on complex networks theory, we present two self-organized energy-efficient models for wireless sensor networks in this paper. The first model constructs the wireless sensor networks according to the connectivity and remaining energy of each sensor node, thus it can produce scale-free networks which have a performance of random error tolerance. In the second model, we not only consider the remaining energy, but also introduce the constraint of links to each node. This model can make the energy consumption of the whole network more balanced. Finally, we present the numerical experiments of the two models.
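The energy-balancing idea in the second model can be caricatured in a few lines: attachment probability proportional to degree times remaining energy, plus a hard cap on links per node so depleted hubs stop attracting traffic. The stdlib Python sketch below uses made-up parameters (`max_links`, the 0.9 per-link energy cost) and is not the paper's equations:

```python
import random

def grow(n, m=2, max_links=5, seed=42):
    """Toy energy-aware preferential attachment: each new node draws m
    targets with probability proportional to degree x remaining energy,
    nodes already at max_links are excluded, and every accepted link
    costs the target a fraction of its energy."""
    rng = random.Random(seed)
    deg = {0: 1, 1: 1}
    energy = {0: 1.0, 1: 1.0}
    edges = [(0, 1)]
    for new in range(2, n):
        deg[new], energy[new] = 0, 1.0
        cand = [v for v in deg if v != new and deg[v] < max_links]
        w = [deg[v] * energy[v] + 1e-9 for v in cand]
        for v in set(rng.choices(cand, weights=w, k=m)):
            edges.append((new, v))
            deg[new] += 1
            deg[v] += 1
            energy[v] *= 0.9  # each accepted link depletes the target

    return edges, deg

edges, deg = grow(30)
print(len(edges), "edges; max degree", max(deg.values()))
```

The cap plus the energy discount is what flattens the degree distribution relative to pure preferential attachment, which is the "more balanced energy consumption" effect the abstract describes.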
Complexation of metal ions with humic acid: charge neutralization model
International Nuclear Information System (INIS)
Kim, J.I.; Czerwinski, K.R.
1995-01-01
A number of different approaches are being used for describing the complexation equilibrium of actinide ions with humic or fulvic acid. The approach chosen and verified experimentally by TU München will be discussed, with notable examples from experiment. This approach is based on the conception that a given actinide ion is neutralized upon complexation with functional groups of humic or fulvic acid, e.g. carboxylic and phenolic groups, which are known as heterogeneously cross-linked polyelectrolytes. The photon energy transfer experiment with laser light excitation has shown that the actinide ion binding with the functional groups is certainly a chelation process accompanied by metal ion charge neutralization. This fact is in accordance with the experimental evidence of the postulated thermodynamic equilibrium reaction. The experimental results are found to be independent of the origin of the humic or fulvic acid and applicable over a broad range of pH. (authors). 23 refs., 7 figs., 1 tab
The Geodynamo: Models and supporting experiments
International Nuclear Information System (INIS)
Mueller, U.; Stieglitz, R.
2003-03-01
The magnetic field is a characteristic feature of our planet Earth. It shields the biosphere against particle radiation from space and, through its direction, provides orientation to living creatures. The question of its origin has challenged scientists to find sound explanations. Major progress has been achieved during the last two decades in developing dynamo models and performing corroborating laboratory experiments to explain convincingly the principle of the Earth's magnetic field. The article reports some significant steps towards our present understanding of this subject and outlines in particular the relevant experiments, which either substantiate crucial elements of self-excitation of magnetic fields or demonstrate dynamo action completely. The authors are aware that they have not addressed all aspects of geomagnetic studies; rather, they have selected the material from the huge amount of literature so as to motivate the recently growing interest in experimental dynamo research. (orig.)
Stålne, Kristian; Kjellström, Sofia; Utriainen, Jukka
2016-01-01
An important aspect of higher education is to educate students who can manage complex relationships and solve complex problems. Teachers need to be able to evaluate course content with regard to complexity, as well as evaluate students' ability to assimilate complex content and express it in the form of a learning outcome. One model for evaluating…
The 'OMITRON' and 'MODEL OMITRON' proposed experiments
International Nuclear Information System (INIS)
Sestero, A.
1997-12-01
In the present paper the main features of the OMITRON and MODEL OMITRON proposed high-field tokamaks are illustrated. Of the two, OMITRON is an ambitious experiment, aimed at attaining plasma burning conditions. Its key physics issues are discussed, and a comparison is carried out with the corresponding physics features in ignition experiments such as IGNITOR and ITER. The chief asset, and chief challenge, of both OMITRON and MODEL OMITRON is the conspicuous 20 tesla toroidal field on the plasma axis. The advanced engineering features that permit such toroidal magnet performance are discussed in convenient depth and detail. As for the small, propaedeutic device MODEL OMITRON, among its goals is the in vivo testing of key engineering issues that are vital for the larger and more expensive parent device. Besides that, however, as indicated by scoping studies performed ad hoc, the smaller machine is found capable of a number of quite interesting physics investigations in its own right
Argonne Bubble Experiment Thermal Model Development II
Energy Technology Data Exchange (ETDEWEB)
Buechler, Cynthia Eileen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-07-01
This report describes the continuation of the work reported in “Argonne Bubble Experiment Thermal Model Development”. The experiment was performed at Argonne National Laboratory (ANL) in 2014. A rastered 35 MeV electron beam deposited power in a solution of uranyl sulfate, generating heat and radiolytic gas bubbles. Irradiations were performed at three beam power levels, 6, 12 and 15 kW. Solution temperatures were measured by thermocouples, and gas bubble behavior was observed. This report will describe the Computational Fluid Dynamics (CFD) model that was developed to calculate the temperatures and gas volume fractions in the solution vessel during the irradiations. The previous report described an initial analysis performed on a geometry that had not been updated to reflect the as-built solution vessel. Here, the as-built geometry is used. Monte-Carlo N-Particle (MCNP) calculations were performed on the updated geometry, and these results were used to define the power deposition profile for the CFD analyses, which were performed using Fluent, Ver. 16.2. CFD analyses were performed for the 12 and 15 kW irradiations, and further improvements to the model were incorporated, including the consideration of power deposition in nearby vessel components, gas mixture composition, and bubble size distribution. The temperature results of the CFD calculations are compared to experimental measurements.
Modelling, Estimation and Control of Networked Complex Systems
Chiuso, Alessandro; Frasca, Mattia; Rizzo, Alessandro; Schenato, Luca; Zampieri, Sandro
2009-01-01
The paradigm of complexity is pervading both science and engineering, leading to the emergence of novel approaches oriented toward the development of a systemic view of the phenomena under study; the definition of powerful tools for modelling, estimation, and control; and the cross-fertilization of different disciplines and approaches. This book is devoted to networked systems, which are one of the most promising paradigms of complexity. It is demonstrated that complex, dynamical networks are powerful tools to model, estimate, and control many interesting phenomena, like agent coordination, synchronization, social and economic events, networks of critical infrastructures, resource allocation, information processing, or control over communication networks. Moreover, it is shown how the recent technological advances in wireless communication and the decrease in cost and size of electronic devices are promoting the appearance of large inexpensive interconnected systems, each with computational, sensing and mobile cap...
Infinite Multiple Membership Relational Modeling for Complex Networks
DEFF Research Database (Denmark)
Mørup, Morten; Schmidt, Mikkel Nørgaard; Hansen, Lars Kai
Learning latent structure in complex networks has become an important problem fueled by many types of networked data originating from practically all fields of science. In this paper, we propose a new non-parametric Bayesian multiple-membership latent feature model for networks. Contrary to existing ... multiple-membership models that scale quadratically in the number of vertices, the proposed model scales linearly in the number of links, admitting multiple-membership analysis in large-scale networks. We demonstrate a connection between the single-membership relational model and multiple-membership models and show ...
Modeling data irregularities and structural complexities in data envelopment analysis
Zhu, Joe
2007-01-01
In a relatively short period of time, Data Envelopment Analysis (DEA) has grown into a powerful quantitative, analytical tool for measuring and evaluating performance. It has been successfully applied to a whole variety of problems in many different contexts worldwide. This book deals with the micro aspects of handling and modeling data issues in modeling DEA problems. DEA's use has grown with its capability of dealing with complex "service industry" and "public service domain" types of problems that require modeling of both qualitative and quantitative data. This handbook treatment deals with specific data problems including: imprecise or inaccurate data; missing data; qualitative data; outliers; undesirable outputs; quality data; statistical analysis; software; and other data aspects of modeling complex DEA problems. In addition, the book demonstrates how to visualize DEA results when the data are more than 3-dimensional, and how to identify efficient units quickly and accurately.
Modeling the propagation of mobile malware on complex networks
Liu, Wanping; Liu, Chao; Yang, Zheng; Liu, Xiaoyang; Zhang, Yihao; Wei, Zuxue
2016-08-01
In this paper, the spreading behavior of malware across mobile devices is addressed. By introducing complex networks following a power-law degree distribution to model mobile networks, a novel epidemic model for mobile malware propagation is proposed. The spreading threshold that determines the dynamics of the model is calculated. Theoretically, the asymptotic stability of the malware-free equilibrium is confirmed when the threshold is below unity, and global stability is further proved under some sufficient conditions. The influences of different model parameters as well as the network topology on malware propagation are also analyzed. Our theoretical studies and numerical simulations show that networks with higher heterogeneity are conducive to the diffusion of malware, and complex networks with lower power-law exponents benefit malware spreading.
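For degree-based mean-field epidemic models of the kind described above, the spreading threshold is commonly the ratio of the first two moments of the degree distribution, ⟨k⟩/⟨k²⟩ (the classical heterogeneous mean-field result; the paper's exact threshold may differ). A minimal sketch, assuming a truncated power-law degree distribution:

```python
import numpy as np

def epidemic_threshold(gamma, k_min=1, k_max=1000):
    """Degree-based mean-field spreading threshold <k>/<k^2>
    for a truncated power-law degree distribution P(k) ~ k^-gamma."""
    k = np.arange(k_min, k_max + 1, dtype=float)
    p = k ** (-gamma)
    p /= p.sum()                      # normalize the distribution
    k_mean = (k * p).sum()            # <k>
    k2_mean = (k ** 2 * p).sum()      # <k^2>
    return k_mean / k2_mean

# Heavier tails (smaller gamma) give a lower threshold, i.e. malware
# spreads more easily on more heterogeneous networks.
for g in (2.1, 2.5, 3.0):
    print(g, epidemic_threshold(g))
```

The decreasing threshold with decreasing exponent mirrors the abstract's conclusion that lower power-law exponents benefit malware spreading.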
Simulation-based modeling of building complexes construction management
Shepelev, Aleksandr; Severova, Galina; Potashova, Irina
2018-03-01
The study reported here examines the experience in the development and implementation of business simulation games based on network planning and management of high-rise construction. Appropriate network models of different types and levels of detail have been developed; a simulation model including 51 blocks (11 stages combined in 4 units) is proposed.
Uncertainty and validation. Effect of model complexity on uncertainty estimates
International Nuclear Information System (INIS)
Elert, M.
1996-09-01
In the Model Complexity subgroup of BIOMOVS II, models of varying complexity have been applied to the problem of downward transport of radionuclides in soils. A scenario describing a case of surface contamination of a pasture soil was defined. Three different radionuclides with different environmental behavior and radioactive half-lives were considered: Cs-137, Sr-90 and I-129. The intention was to give a detailed specification of the parameters required by different kinds of model, together with reasonable values for the parameter uncertainty. A total of seven modelling teams participated in the study using 13 different models. Four of the modelling groups performed uncertainty calculations using nine different modelling approaches. The models used range in complexity from analytical solutions of a 2-box model using annual average data to numerical models coupling hydrology and transport using data varying on a daily basis. The complex models needed to consider all aspects of radionuclide transport in a soil with a variable hydrology are often impractical to use in safety assessments. Instead simpler models, often box models, are preferred. The comparison of predictions made with the complex models and the simple models for this scenario shows that the predictions in many cases are very similar, e.g. in the predictions of the evolution of the root zone concentration. However, in other cases differences of many orders of magnitude can appear. One example is the prediction of the flux to the groundwater of radionuclides being transported through the soil column. Some issues that have come into focus in this study: There are large differences in the predicted soil hydrology and as a consequence also in the radionuclide transport, which suggests that there are large uncertainties in the calculation of effective precipitation and evapotranspiration. The approach used for modelling the water transport in the root zone has an impact on the predictions of the decline in root
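The simple end of the model spectrum described above can be illustrated with a two-box compartment model of downward radionuclide migration with radioactive decay. The transfer and leaching rates below are invented illustrative values, not parameters from the BIOMOVS II study:

```python
import numpy as np

def two_box_model(years, half_life, k12=0.1, k_leach=0.02, dt=0.01):
    """Two-box soil model: box 1 = root zone, box 2 = deeper soil.
    k12: transfer rate root zone -> deeper soil (1/yr, assumed)
    k_leach: leaching rate deeper soil -> groundwater (1/yr, assumed)
    Returns final inventories and cumulative groundwater flux
    (explicit Euler integration of unit surface contamination)."""
    lam = np.log(2) / half_life       # radioactive decay constant (1/yr)
    c1, c2, flux_gw = 1.0, 0.0, 0.0
    for _ in range(int(years / dt)):
        d1 = -(lam + k12) * c1                    # decay + downward transfer
        d2 = k12 * c1 - (lam + k_leach) * c2      # inflow - decay - leaching
        flux_gw += k_leach * c2 * dt              # cumulative flux to groundwater
        c1 += d1 * dt
        c2 += d2 * dt
    return c1, c2, flux_gw

c1, c2, gw = two_box_model(years=30, half_life=30.0)  # Cs-137-like half-life
print(c1, c2, gw)
```

The root-zone inventory decays as exp(−(λ + k12)t), so even this minimal model reproduces the qualitative decline of root-zone concentration discussed in the intercomparison.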
Energy Technology Data Exchange (ETDEWEB)
Yun, Sung Mi; Kang, Christina S. [Department of Environmental Engineering, Konkuk University, 120 Neungdong-ro, Gwangjin-gu, Seoul 143-701 (Korea, Republic of); Kim, Jonghwa [Department of Industrial Engineering, Konkuk University, 120 Neungdong-ro, Gwangjin-gu, Seoul 143-701 (Korea, Republic of); Kim, Han S., E-mail: hankim@konkuk.ac.kr [Department of Environmental Engineering, Konkuk University, 120 Neungdong-ro, Gwangjin-gu, Seoul 143-701 (Korea, Republic of)
2015-04-28
Highlights:
• Remediation of complex contaminated soil achieved by sequential soil flushing.
• Removal of Zn, Pb, and heavy petroleum oils using 0.05 M citric acid and 2% SDS.
• Unified desorption distribution coefficients modeled and experimentally determined.
• Nonequilibrium models for the transport behavior of complex contaminants in soils.
Abstract: The removal of heavy metals (Zn and Pb) and heavy petroleum oils (HPOs) from a soil with complex contamination was examined by soil flushing. Desorption and transport behaviors of the complex contaminants were assessed by batch and continuous flow reactor experiments and through modeling simulations. Flushing a one-dimensional flow column packed with complex contaminated soil sequentially with citric acid then a surfactant resulted in the removal of 85.6% of Zn, 62% of Pb, and 31.6% of HPO. The desorption distribution coefficients, K_Ubatch and K_Lbatch, converged to constant values as C_e increased. An equilibrium model (ADR) and nonequilibrium models (TSNE and TRNE) were used to predict the desorption and transport of complex contaminants. The nonequilibrium models demonstrated better fits with the experimental values obtained from the column test than the equilibrium model. The ranges of K_Ubatch and K_Lbatch were very close to those of K_Ufit and K_Lfit determined from model simulations. The parameters (R, β, ω, α, and f) determined from model simulations were useful for characterizing the transport of contaminants within the soil matrix. The results of this study provide useful information for the operational parameters of the flushing process for soils with complex contamination.
Atmospheric dispersion experiments over complex terrain in a spanish valley site (Guardo-90)
International Nuclear Information System (INIS)
Ibarra, J.I.
1991-01-01
An intensive field experimental campaign was conducted in Spain to quantify atmospheric diffusion within a deep, steep-walled valley in rough, mountainous terrain. The program was sponsored by the Spanish electricity companies and is intended to validate existing plume models and to provide the scientific basis for future model development. The atmospheric dispersion and transport processes in a 40x40 km domain were studied in order to evaluate SO2 and SF6 releases from an existing 185 m chimney and ground level sources in a complex terrain valley site. Emphasis was placed on the local mesoscale flows and light wind stable conditions. Although the measuring program was intensified during daytime for dual tracking of SO2/SF6 from an elevated source, nighttime experiments were conducted for characterization of mountain-valley flows. Two principal objectives were pursued: impaction of plumes upon elevated terrain, and diffusion of gases within the valley versus diffusion over flat, open terrain. Artificial smoke flow visualizations provided qualitative information; quantitative diffusion measurements were obtained using sulfur hexafluoride gas with analysis by highly sensitive electron capture gas chromatograph systems. Fourteen 2-hour gaseous tracer releases were conducted.
Modelling and simulating in-stent restenosis with complex automata
Hoekstra, A.G.; Lawford, P.; Hose, R.
2010-01-01
In-stent restenosis, the maladaptive response of a blood vessel to injury caused by the deployment of a stent, is a multiscale system involving a large number of biological and physical processes. We describe a Complex Automata Model for in-stent restenosis, coupling bulk flow, drug diffusion, and
The Complexity of Developmental Predictions from Dual Process Models
Stanovich, Keith E.; West, Richard F.; Toplak, Maggie E.
2011-01-01
Drawing developmental predictions from dual-process theories is more complex than is commonly realized. Overly simplified predictions drawn from such models may lead to premature rejection of the dual process approach as one of many tools for understanding cognitive development. Misleading predictions can be avoided by paying attention to several…
Constructive Lower Bounds on Model Complexity of Shallow Perceptron Networks
Czech Academy of Sciences Publication Activity Database
Kůrková, Věra
2018-01-01
Roč. 29, č. 7 (2018), s. 305-315 ISSN 0941-0643 R&D Projects: GA ČR GA15-18108S Institutional support: RVO:67985807 Keywords : shallow and deep networks * model complexity and sparsity * signum perceptron networks * finite mappings * variational norms * Hadamard matrices Subject RIV: IN - Informatics, Computer Science Impact factor: 2.505, year: 2016
Kolmogorov complexity, pseudorandom generators and statistical models testing
Czech Academy of Sciences Publication Activity Database
Šindelář, Jan; Boček, Pavel
2002-01-01
Roč. 38, č. 6 (2002), s. 747-759 ISSN 0023-5954 R&D Projects: GA ČR GA102/99/1564 Institutional research plan: CEZ:AV0Z1075907 Keywords : Kolmogorov complexity * pseudorandom generators * statistical models testing Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.341, year: 2002
Framework for Modelling Multiple Input Complex Aggregations for Interactive Installations
DEFF Research Database (Denmark)
Padfield, Nicolas; Andreasen, Troels
2012-01-01
on fuzzy logic and provides a method for variably balancing interaction and user input with the intention of the artist or director. An experimental design is presented, demonstrating an intuitive interface for parametric modelling of a complex aggregation function. The aggregation function unifies...
Model-based safety architecture framework for complex systems
Schuitemaker, Katja; Rajabali Nejad, Mohammadreza; Braakhuis, J.G.; Podofillini, Luca; Sudret, Bruno; Stojadinovic, Bozidar; Zio, Enrico; Kröger, Wolfgang
2015-01-01
The shift to transparency and rising need of the general public for safety, together with the increasing complexity and interdisciplinarity of modern safety-critical Systems of Systems (SoS) have resulted in a Model-Based Safety Architecture Framework (MBSAF) for capturing and sharing architectural
A binary logistic regression model with complex sampling design of ...
African Journals Online (AJOL)
2017-09-03
Bi-variable and multi-variable binary logistic regression models with complex sampling design were fitted. Data were entered into STATA-12 and analyzed using SPSS-21.
On the general procedure for modelling complex ecological systems
International Nuclear Information System (INIS)
He Shanyu.
1987-12-01
In this paper, the principle of a general procedure for modelling complex ecological systems, i.e. the Adaptive Superposition Procedure (ASP), is briefly stated. The result of applying ASP in a national project for ecological regionalization is also described. (author). 3 refs
The dynamic complexity of a three species food chain model
International Nuclear Information System (INIS)
Lv Songjuan; Zhao Min
2008-01-01
In this paper, a three-species food chain model is investigated analytically, on the basis of ecological theory, and by numerical simulation. Bifurcation diagrams are obtained for biologically feasible parameters. The results show that the system exhibits rich complex features, including stable, periodic and chaotic dynamics.
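The kind of dynamics such models exhibit can be illustrated with a Hastings–Powell-type three-species food chain with Holling type II functional responses, a standard model of this class (not necessarily the specific model analyzed in the paper); the parameter values below are the commonly used dimensionless ones:

```python
import numpy as np

def food_chain(T=500.0, dt=0.001, a1=5.0, b1=3.0, a2=0.1, b2=2.0,
               d1=0.4, d2=0.01, x0=0.8, y0=0.2, z0=9.0):
    """Hastings-Powell-type food chain: prey x, consumer y, top
    predator z, with Holling type II responses, integrated by a
    simple explicit Euler scheme.  Returns a subsampled trajectory."""
    steps = int(T / dt)
    x, y, z = x0, y0, z0
    traj = np.empty((steps // 100, 3))
    for i in range(steps):
        f1 = a1 * x / (1 + b1 * x)       # prey -> consumer response
        f2 = a2 * y / (1 + b2 * y)       # consumer -> predator response
        dx = x * (1 - x) - f1 * y        # logistic prey growth - predation
        dy = f1 * y - f2 * z - d1 * y    # gain - predation - mortality
        dz = f2 * z - d2 * z             # gain - mortality
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        if i % 100 == 0:
            traj[i // 100] = (x, y, z)
    return traj

traj = food_chain()
print(traj.min(axis=0), traj.max(axis=0))
```

Sweeping a parameter such as b1 and recording the local extrema of one species is the standard way the bifurcation diagrams mentioned in the abstract are produced.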
Loeb, Danielle F; Bayliss, Elizabeth A; Candrian, Carey; deGruy, Frank V; Binswanger, Ingrid A
2016-03-22
Complex patients are increasingly common in primary care and often have poor clinical outcomes. Healthcare system barriers to effective care for complex patients have been previously described, but less is known about the potential impact and meaning of caring for complex patients on a daily basis for primary care providers (PCPs). Our objective was to describe PCPs' experiences providing care for complex patients, including their experiences of health system barriers and facilitators and their strategies to enhance provision of effective care. Using a general inductive approach, our qualitative research study was guided by an interpretive epistemology, or way of knowing. Our method for understanding included semi-structured in-depth interviews with internal medicine PCPs from two university-based and three community health clinics. We developed an interview guide, which included questions on PCPs' experiences, perceived system barriers and facilitators, and strategies to improve their ability to effectively treat complex patients. To focus interviews on real cases, providers were asked to bring de-identified clinical notes from patients they considered complex to the interview. Interview transcripts were coded and analyzed to develop categories from the raw data, which were then conceptualized into broad themes after team-based discussion. PCPs (N = 15) described complex patients with multidimensional needs, such as socio-economic, medical, and mental health. A vision of optimal care emerged from the data, which included coordinating care, preventing hospitalizations, and developing patient trust. PCPs relied on professional values and individual care strategies to overcome local and system barriers. Team-based approaches were endorsed to improve the management of complex patients. Given the barriers to effective care described by PCPs, individual PCP efforts alone are unlikely to meet the needs of complex patients. To fulfill PCPs' expressed concepts of
A proposed experiment on ball lightning model
International Nuclear Information System (INIS)
Ignatovich, Vladimir K.; Ignatovich, Filipp V.
2011-01-01
Highlights:
→ We propose to put a glass sphere inside an excited gas.
→ Then to send a light ray into the glass in a whispering gallery mode.
→ If the light is resonant with the gas excitation, it will be amplified at every reflection.
→ In ms time the light in the glass will be amplified, and will melt the glass.
→ A liquid shell held together by electrostriction forces is the ball lightning model.
Abstract: We propose an experiment on strong light amplification at multiple total reflections from an active gaseous medium.
Multiaxial behavior of foams - Experiments and modeling
Maheo, Laurent; Guérard, Sandra; Rio, Gérard; Donnard, Adrien; Viot, Philippe
2015-09-01
The behavior of cellular materials is strongly related to the pressure level inside the material. It is therefore important to use experiments which can highlight (i) the pressure-volume behavior and (ii) the shear-shape behavior at different pressure levels. The authors propose to use hydrostatic compressive, shear and combined pressure-shear tests to determine cellular material behavior. Finite element modeling must take these behavioral specificities into account. The authors chose to use a behavior law with hyperelastic, viscous and hysteretic contributions. Specific developments have been performed on the hyperelastic contribution by separating the spherical and deviatoric parts, to account for the volume-change and shape-change characteristics of cellular materials.
Mechanical Interaction in Pressurized Pipe Systems: Experiments and Numerical Models
Directory of Open Access Journals (Sweden)
Mariana Simão
2015-11-01
Full Text Available The dynamic interaction between unsteady flow occurrence and the resulting vibration of the pipe is analyzed based on experiments and numerical models. Waterhammer, structural dynamics and fluid–structure interaction (FSI) are the main subjects dealt with in this study. Firstly, a 1D model is developed based on the method of characteristics (MOC), using specific damping coefficients for initial components associated with rheological pipe material behavior, structural and fluid deformation, and the type of anchored structural supports. Secondly, a 3D coupled complex model based on Computational Fluid Dynamics (CFD), using a Finite Element Method (FEM), is also applied to predict and distinguish the FSI events. Herein, a specific hydrodynamic model of viscosity to replicate the operation of a valve was also developed to minimize the number of mesh elements and the complexity of the system. The importance of integrated analysis of fluid–structure interaction, especially in non-rigidly anchored pipe systems, is equally emphasized. The developed models are validated through experimental tests.
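The 1D method-of-characteristics approach mentioned above can be sketched for the simplest case: a frictionless reservoir-pipe-valve system with instantaneous valve closure. All dimensions and the wave speed below are illustrative assumptions, and the study's damping terms and FSI coupling are omitted:

```python
import numpy as np

def moc_valve_closure(L=100.0, a=1000.0, V0=1.0, H0=50.0,
                      n=20, t_end=0.4, g=9.81):
    """Frictionless 1D waterhammer by the method of characteristics:
    reservoir at x=0 (fixed head H0), valve at x=L closed at t=0.
    Returns the maximum head observed at the valve."""
    dx = L / n
    dt = dx / a                      # Courant number = 1 (exact scheme)
    B = a / g                        # characteristic impedance (head/velocity)
    H = np.full(n + 1, H0)           # steady-state head (no friction)
    V = np.full(n + 1, V0)           # steady-state velocity
    max_head = H0
    for _ in range(int(t_end / dt)):
        Hn, Vn = H.copy(), V.copy()
        for i in range(1, n):        # interior: intersect C+ and C- lines
            cp = H[i - 1] + B * V[i - 1]   # C+ invariant from upstream
            cm = H[i + 1] - B * V[i + 1]   # C- invariant from downstream
            Hn[i] = 0.5 * (cp + cm)
            Vn[i] = (cp - cm) / (2 * B)
        Hn[0] = H0                                   # reservoir holds head
        Vn[0] = (H0 - (H[1] - B * V[1])) / B
        Vn[n] = 0.0                                  # closed valve: V = 0
        Hn[n] = H[n - 1] + B * V[n - 1]
        H, V = Hn, Vn
        max_head = max(max_head, H[n])
    return max_head

surge = moc_valve_closure() - 50.0          # head rise at the valve
print(surge, 1000.0 * 1.0 / 9.81)           # compare with Joukowsky a*V0/g
```

At Courant number 1 the frictionless scheme reproduces the Joukowsky surge aV0/g at the valve exactly, which makes this a convenient sanity check before damping terms are added.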
Historical and idealized climate model experiments: an EMIC intercomparison
DEFF Research Database (Denmark)
Eby, M.; Weaver, A. J.; Alexander, K.
2012-01-01
Both historical and idealized climate model experiments are performed with a variety of Earth System Models of Intermediate Complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land-use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures ... are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The response of surface air temperature is the linear sum of the individual forcings, while the carbon cycle response shows considerable synergy between land-use change and CO2 ...
Sulis, William H
2017-10-01
Walter Freeman III pioneered the application of nonlinear dynamical systems theories and methodologies in his work on mesoscopic brain dynamics. Sadly, mainstream psychology and psychiatry still cling to linear correlation-based data analysis techniques, which threaten to subvert the process of experimentation and theory building. In order to progress, it is necessary to develop tools capable of managing the stochastic complexity of complex biopsychosocial systems, which includes multilevel feedback relationships, nonlinear interactions, chaotic dynamics and adaptability. In addition, however, these systems exhibit intrinsic randomness, non-Gaussian probability distributions, non-stationarity, contextuality, and non-Kolmogorov probabilities, as well as the absence of mean and/or variance and conditional probabilities. These properties and their implications for statistical analysis are discussed. An alternative approach, the Process Algebra approach, is described. It is a generative model, capable of generating non-Kolmogorov probabilities. It has proven useful in addressing fundamental problems in quantum mechanics and in the modeling of developing psychosocial systems.
Automation of program model developing for complex structure control objects
International Nuclear Information System (INIS)
Ivanov, A.P.; Sizova, T.B.; Mikhejkina, N.D.; Sankovskij, G.A.; Tyufyagin, A.N.
1991-01-01
A brief description is given of software for automated development of models: an integrating modular programming system, a program module generator, and a program module library providing thermal-hydraulic calculation of process dynamics in power unit equipment components and on-line control system operation simulation. Technical recommendations for model development are based on experience in creating concrete models of NPP power units. 8 refs., 1 tab., 4 figs
Implementation of the model project: Ghanaian experience
International Nuclear Information System (INIS)
Schandorf, C.; Darko, E.O.; Yeboah, J.; Asiamah, S.D.
2003-01-01
Upgrading of the legal infrastructure has been the most time-consuming and frustrating part of the implementation of the Model Project, due to the unstable system of governance and rule of law, coupled with the low priority given to legislation in technical areas such as safe applications of nuclear science and technology in medicine, industry, research and teaching. Dwindling governmental financial support militated against physical and human resource infrastructure development and operational effectiveness. The trend over the last five years has been to strengthen the revenue generation base of the Radiation Protection Institute through good management practices, to ensure cost-effective use of the limited available resources for a self-reliant and sustainable radiation and waste safety programme. The Ghanaian experience regarding the positive and negative aspects of the implementation of the Model Project is highlighted. (author)
Forces between permanent magnets: experiments and model
International Nuclear Information System (INIS)
González, Manuel I
2017-01-01
This work describes a very simple, low-cost experimental setup designed for measuring the force between permanent magnets. The experiment consists of placing one of the magnets on a balance, attaching the other magnet to a vertical height gauge, aligning carefully both magnets and measuring the load on the balance as a function of the gauge reading. A theoretical model is proposed to compute the force, assuming uniform magnetisation and based on laws and techniques accessible to undergraduate students. A comparison between the model and the experimental results is made, and good agreement is found at all distances investigated. In particular, it is also found that the force behaves as r^-4 at large distances, as expected. (paper)
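The r^-4 behavior at large distances follows from treating each magnet as a point dipole. A minimal sketch of the coaxial dipole-dipole force (the moments and distances below are arbitrary illustrative values, not the paper's measured ones):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (T*m/A)

def coaxial_dipole_force(m1, m2, r):
    """Force (N) between two coaxial magnetic point dipoles of
    moments m1, m2 (A*m^2) separated by a distance r (m):
    F = 3*mu0*m1*m2 / (2*pi*r^4)."""
    return 3 * MU0 * m1 * m2 / (2 * math.pi * r ** 4)

# Doubling the separation reduces the force by 2^4 = 16,
# the r^-4 scaling observed in the experiment at large distances.
f1 = coaxial_dipole_force(1.0, 1.0, 0.05)
f2 = coaxial_dipole_force(1.0, 1.0, 0.10)
print(f1 / f2)  # ratio is 2**4
```

At short range the uniform-magnetisation model of the paper deviates from this point-dipole limit, which is why the full model is needed for the close-distance comparison.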
Bucky gel actuator displacement: experiment and model
International Nuclear Information System (INIS)
Ghamsari, A K; Zegeye, E; Woldesenbet, E; Jin, Y
2013-01-01
Bucky gel actuator (BGA) is a dry electroactive nanocomposite which is driven with a few volts. BGA’s remarkable features make this tri-layered actuator a potential candidate for morphing applications. However, most of these applications would require a better understanding of the effective parameters that influence the BGA displacement. In this study, various sets of experiments were designed to investigate the effect of several parameters on the maximum lateral displacement of BGA. Two input parameters, voltage and frequency, and three material/design parameters, carbon nanotube type, thickness, and weight fraction of constituents were selected. A new thickness ratio term was also introduced to study the role of individual layers on BGA displacement. A model was established to predict BGA maximum displacement based on the effect of these parameters. This model showed good agreement with reported results from the literature. In addition, an important factor in the design of BGA-based devices, lifetime, was investigated. (paper)
Energy Technology Data Exchange (ETDEWEB)
Buck, D. R. [Iowa State Univ., Ames, IA (United States)
2000-09-12
Theoretical simulations and ultrafast pump-probe laser spectroscopy experiments were used to study photosynthetic pigment-protein complexes and antennae found in green sulfur bacteria such as Prosthecochloris aestuarii, Chloroflexus aurantiacus, and Chlorobium tepidum. The work focused on understanding structure-function relationships in energy transfer processes in these complexes through experiments and trying to model that data as we tested our theoretical assumptions with calculations. Theoretical exciton calculations on tubular pigment aggregates yield electronic absorption spectra that are superimpositions of linear J-aggregate spectra. The electronic spectroscopy of BChl c/d/e antennae in light harvesting chlorosomes from Chloroflexus aurantiacus differs considerably from J-aggregate spectra. Strong symmetry breaking is needed if we hope to simulate the absorption spectra of the BChl c antenna. The theory for simulating absorption difference spectra in strongly coupled photosynthetic antenna is described, first for a relatively simple heterodimer, then for the general N-pigment system. The theory is applied to the Fenna-Matthews-Olson (FMO) BChl a protein trimers from Prosthecochloris aestuarii and then compared with experimental low-temperature absorption difference spectra of FMO trimers from Chlorobium tepidum. Circular dichroism spectra of the FMO trimer are unusually sensitive to diagonal energy disorder. Substantial differences occur between CD spectra in exciton simulations performed with and without realistic inhomogeneous distribution functions for the input pigment diagonal energies. Anisotropic absorption difference spectroscopy measurements are less consistent with 21-pigment trimer simulations than 7-pigment monomer simulations which assume that the laser-prepared states are localized within a subunit of the trimer. Experimental anisotropies from real samples likely arise from statistical averaging over states with diagonal energies shifted by
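The exciton calculations referred to above amount, in the simplest case, to diagonalizing a site-basis Hamiltonian whose diagonal holds pigment site energies and whose off-diagonal holds the inter-pigment couplings. A minimal two-pigment (heterodimer) sketch with invented BChl-like site energies and coupling, not values fitted to the FMO complex:

```python
import numpy as np

def dimer_exciton_states(e1, e2, J):
    """Eigenstates of a two-pigment Frenkel exciton Hamiltonian with
    site energies e1, e2 and electronic coupling J (all in cm^-1).
    Returns exciton energies (ascending) and eigenvectors."""
    H = np.array([[e1, J],
                  [J, e2]], dtype=float)
    energies, vecs = np.linalg.eigh(H)  # symmetric eigenproblem
    return energies, vecs

# Illustrative numbers only: site energies 12100 and 12400 cm^-1,
# coupling -100 cm^-1 (roughly BChl-scale magnitudes).
E, C = dimer_exciton_states(12100.0, 12400.0, -100.0)
splitting = E[1] - E[0]
print(E, splitting)
```

The splitting follows the analytic result 2*sqrt(((e1-e2)/2)^2 + J^2); for the 7- or 21-pigment FMO systems in the abstract the same diagonalization is simply carried out on a larger matrix, with disorder added to the diagonal energies.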
Modeling reproducibility of porescale multiphase flow experiments
Ling, B.; Tartakovsky, A. M.; Bao, J.; Oostrom, M.; Battiato, I.
2017-12-01
Multi-phase flow in porous media is widely encountered in geological systems. Understanding immiscible fluid displacement is crucial for processes including, but not limited to, CO2 sequestration, non-aqueous phase liquid contamination and oil recovery. Microfluidic devices and porescale numerical models are commonly used to study multiphase flow in biological, geological, and engineered porous materials. In this work, we perform a set of drainage and imbibition experiments in six identical microfluidic cells to study the reproducibility of multiphase flow experiments. We observe significant variations in the experimental results, which are smaller during the drainage stage and larger during the imbibition stage. We demonstrate that these variations are due to sub-porescale geometry differences in microcells (because of manufacturing defects) and variations in the boundary condition (i.e., fluctuations in the injection rate inherent to syringe pumps). Computational simulations are conducted using the commercial software STAR-CCM+, both with constant and randomly varying injection rates. Stochastic simulations are able to capture the variability in the experiments associated with the varying pump injection rate.
Hill, Renee J.; Chopra, Pradeep; Richardi, Toni
2012-01-01
Explaining the etiology of Complex Regional Pain Syndrome (CRPS) from the psychogenic model is exceedingly unsophisticated, because neurocognitive deficits, neuroanatomical abnormalities, and distortions in cognitive mapping are features of CRPS pathology. More importantly, many people who have developed CRPS have no history of mental illness. The psychogenic model offers comfort to physicians and mental health practitioners (MHPs) who have difficulty understanding pain maintained by newly uncovered neuroinflammatory processes. With increased education about CRPS through a biopsychosocial perspective, both physicians and MHPs can better diagnose, treat, and manage CRPS symptomatology. PMID:24223338
Directory of Open Access Journals (Sweden)
Henry de-Graft Acquah
2013-01-01
Information criteria provide an attractive basis for selecting the best model from a set of competing asymmetric price transmission models or theories. However, little is understood about the sensitivity of the model selection methods to model complexity. This study therefore fits competing asymmetric price transmission models that differ in complexity to simulated data and evaluates the ability of the model selection methods to recover the true model. The results of Monte Carlo experimentation suggest that, in general, BIC, CAIC, and DIC were superior to AIC when the true data-generating process was the standard error correction model, whereas AIC was more successful when the true model was the complex error correction model. It is also shown that the model selection methods performed better in large samples for a complex asymmetric data-generating process than for a standard asymmetric data-generating process. Except for complex models, AIC's performance did not make substantial gains in recovery rates as sample size increased. The research findings demonstrate the influence of model complexity on asymmetric price transmission model comparison and selection.
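The different penalties these criteria place on model complexity can be illustrated with a minimal sketch. The log-likelihoods and parameter counts below are hypothetical, not results from the study; the point is only that BIC's ln(n) penalty can favor the simpler model where AIC's fixed penalty of 2 per parameter does not.

```python
import numpy as np

def aic(log_lik, k):
    # Akaike information criterion: 2k - 2 ln L
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    # Bayesian information criterion: k ln(n) - 2 ln L
    return k * np.log(n) - 2 * log_lik

# Hypothetical fits of two competing models to n = 200 observations:
# a "standard" model (4 parameters) and a "complex" one (8 parameters).
n = 200
fits = {"standard": (-310.0, 4), "complex": (-305.0, 8)}

scores = {name: (aic(ll, k), bic(ll, k, n)) for name, (ll, k) in fits.items()}
best_aic = min(scores, key=lambda m: scores[m][0])  # lower is better
best_bic = min(scores, key=lambda m: scores[m][1])
```

With these (made-up) numbers the complex model wins under AIC but loses under BIC, mirroring the study's finding that the criteria disagree depending on the true data-generating process.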
Higher genus correlators for the complex matrix model
International Nuclear Information System (INIS)
Ambjorn, J.; Kristjansen, C.F.; Makeenko, Y.M.
1992-01-01
In this paper, the authors describe an iterative scheme which allows one to calculate any multi-loop correlator for the complex matrix model to any genus, using only the first in the chain of loop equations. The method works for a completely general potential, and the results contain no explicit reference to the couplings. The genus g contribution to the m-loop correlator depends on a finite number of parameters, namely at most 4g - 2 + m. The authors find the generating functional explicitly up to genus three. They show as well that the model is equivalent to an external field problem for the complex matrix model with a logarithmic potential.
Reduced Complexity Volterra Models for Nonlinear System Identification
Directory of Open Access Journals (Sweden)
Hacıoğlu Rıfat
2001-01-01
A broad class of nonlinear systems and filters can be modeled by the Volterra series representation. However, its practical use in nonlinear system identification is sometimes limited due to the large number of parameters associated with the Volterra filter's structure. The parametric complexity also complicates design procedures based upon such a model. This limitation for system identification is addressed in this paper using a Fixed Pole Expansion Technique (FPET) within the Volterra model structure. The FPET approach employs orthonormal basis functions, derived from fixed (real or complex) pole locations, to expand the Volterra kernels and reduce the number of estimated parameters. That FPET can considerably reduce the number of estimated parameters is demonstrated by a digital satellite channel example, in which the proposed method is used to identify the channel dynamics. Furthermore, a gradient-descent procedure that adaptively selects the pole locations in the FPET structure is developed in the paper.
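The parametric blow-up that motivates the FPET can be seen in a direct second-order Volterra filter: a memory of M samples already requires M linear plus M*M quadratic kernel coefficients. A minimal sketch follows; the kernels are arbitrary illustrative values, not a satellite-channel model, and no pole-expansion is implemented here.

```python
import numpy as np

def volterra2(x, h1, h2):
    # Truncated second-order Volterra filter:
    #   y[n] = sum_i h1[i] x[n-i] + sum_{i,j} h2[i,j] x[n-i] x[n-j]
    # h1 has M entries, h2 is M x M: M + M^2 parameters in total.
    M = len(h1)
    y = np.zeros(len(x))
    xp = np.concatenate([np.zeros(M - 1), x])  # zero initial conditions
    for n in range(len(x)):
        w = xp[n:n + M][::-1]  # x[n], x[n-1], ..., x[n-M+1]
        y[n] = h1 @ w + w @ h2 @ w
    return y

# Arbitrary kernels with memory M = 2 (hypothetical, for illustration only):
h1 = np.array([1.0, 0.5])
h2 = np.array([[0.2, 0.0],
               [0.0, -0.1]])
y = volterra2(np.array([1.0, -1.0, 2.0]), h1, h2)
```

Already at M = 2 the quadratic kernel contributes four coefficients against two linear ones; for realistic memories the quadratic term dominates the parameter count, which is the estimation burden the fixed-pole basis expansion is designed to reduce.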
Deciphering the complexity of acute inflammation using mathematical models.
Vodovotz, Yoram
2006-01-01
Various stresses elicit an acute, complex inflammatory response, leading to healing but sometimes also to organ dysfunction and death. We constructed both equation-based models (EBM) and agent-based models (ABM) of various degrees of granularity--which encompass the dynamics of relevant cells, cytokines, and the resulting global tissue dysfunction--in order to begin to unravel these inflammatory interactions. The EBMs describe and predict various features of septic shock and trauma/hemorrhage (including the response to anthrax, preconditioning phenomena, and irreversible hemorrhage) and were used to simulate anti-inflammatory strategies in clinical trials. The ABMs that describe the interrelationship between inflammation and wound healing yielded insights into intestinal healing in necrotizing enterocolitis, vocal fold healing during phonotrauma, and skin healing in the setting of diabetic foot ulcers. Modeling may help in understanding the complex interactions among the components of inflammation and response to stress, and therefore aid in the development of novel therapies and diagnostics.
Nonlinear model of epidemic spreading in a complex social network.
Kosiński, Robert A; Grabowski, A
2007-10-01
The epidemic spreading in a human society is a complex process, which can be described on the basis of a nonlinear mathematical model. In such an approach the complex and hierarchical structure of the social network (which has implications for the spreading of pathogens and can be treated as a complex network) can be taken into account. In our model each individual is in one of five permitted states: susceptible, infected, infective, unsusceptible, or dead. This refers to the SEIR model used in epidemiology. The state of an individual changes in time, depending on the previous state and the interactions with other individuals. The description of the interpersonal contacts is based on experimental observations of the social relations in the community. It includes spatial localization of the individuals and the hierarchical structure of interpersonal interactions. Numerical simulations were performed for different types of epidemics, giving the progress of a spreading process and typical relationships (e.g. range of epidemic in time, the epidemic curve). The spreading process has a complex and spatially chaotic character. The time dependence of the number of infective individuals shows the nonlinear character of the spreading process. We investigate the influence of preventive vaccinations on the spreading process. In particular, for a critical value of preventively vaccinated individuals the percolation threshold is observed and the epidemic is suppressed.
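The SEIR mechanism the abstract refers to can be sketched as a minimal deterministic compartmental simulation. The rate parameters below are hypothetical, and the sketch deliberately ignores the network structure, spatial localization, and death state of the actual model; it shows only the susceptible, exposed (infected), infective, removed bookkeeping.

```python
def seir_step(S, E, I, R, beta, sigma, gamma, N):
    # One Euler step (dt = 1) of the deterministic SEIR model:
    # beta: transmission rate, sigma: rate of becoming infective,
    # gamma: removal (recovery) rate, N: total population.
    new_exposed = beta * S * I / N
    new_infective = sigma * E
    new_removed = gamma * I
    return (S - new_exposed,
            E + new_exposed - new_infective,
            I + new_infective - new_removed,
            R + new_removed)

N = 10_000.0
S, E, I, R = N - 10.0, 0.0, 10.0, 0.0  # seed 10 infective individuals
history = []
for _ in range(200):
    S, E, I, R = seir_step(S, E, I, R, beta=0.3, sigma=0.2, gamma=0.1, N=N)
    history.append(I)
```

With these illustrative rates (basic reproduction number beta/gamma = 3) the infective count rises well above the initial seed and then declines, producing the characteristic epidemic curve the abstract mentions.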
Elastic Network Model of a Nuclear Transport Complex
Ryan, Patrick; Liu, Wing K.; Lee, Dockjin; Seo, Sangjae; Kim, Young-Jin; Kim, Moon K.
2010-05-01
RanGTP plays an important role in both nuclear protein import and export cycles. In the nucleus, RanGTP releases macromolecular cargoes from importins and conversely facilitates cargo binding to exportins. Although the crystal structure of the nuclear import complex formed by importin Kap95p and RanGTP was recently identified, its molecular mechanism still remains unclear. To understand the relationship between structure and function of a nuclear transport complex, a structure-based mechanical model of the Kap95p:RanGTP complex is introduced; the structure of Kap95p was obtained from the Protein Data Bank (www.pdb.org). In this model, a protein structure is modeled simply as an elastic network in which a set of coarse-grained point masses are connected by linear springs representing biochemical interactions at the atomic level. Harmonic normal mode analysis (NMA) and anharmonic elastic network interpolation (ENI) are performed to predict the modes of vibration and a feasible pathway between the locked and unlocked conformations of Kap95p, respectively. Simulation results imply that the binding of RanGTP to Kap95p induces the release of the cargo in the nucleus and prevents any new cargo from attaching to the Kap95p:RanGTP complex.
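The elastic-network idea (point masses joined by identical springs, with collective motions obtained from a harmonic analysis) can be sketched in its simplest Gaussian-network form. The bead coordinates and cutoff below are toy values, not the Kap95p structure, and this sketch covers only the NMA side, not the anharmonic interpolation.

```python
import numpy as np

def gnm_kirchhoff(coords, cutoff=7.0):
    # Gaussian network model: connect every residue pair within the
    # cutoff distance by a unit spring and build the Kirchhoff
    # (connectivity) matrix, a graph Laplacian of the contact network.
    n = len(coords)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(coords[i] - coords[j]) <= cutoff:
                K[i, j] = K[j, i] = -1.0
    K[np.diag_indices(n)] = -K.sum(axis=1)  # rows sum to zero
    return K

# Toy "structure": 10 beads along a line, 3.8 apart (a typical
# alpha-carbon spacing in angstroms), purely for illustration.
coords = np.array([[3.8 * i, 0.0, 0.0] for i in range(10)])
K = gnm_kirchhoff(coords)
evals, evecs = np.linalg.eigh(K)
# evals[0] is the zero (rigid-body) mode; the slowest nonzero modes
# approximate the dominant collective fluctuations of the chain.
```

The eigenvectors of the low-frequency nonzero modes are what an NMA of a real coarse-grained structure would use to propose functional motions such as the locked-to-unlocked transition of Kap95p.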
Entropy, complexity, and Markov diagrams for random walk cancer models.
Newton, Paul K; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter
2014-12-19
The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix for each cancer corresponds to a directed graph model in which nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high-entropy cancers, stomach, uterine, pancreatic, and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low-entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.
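The steady-state and entropy machinery described here is straightforward to sketch. The 3-site transition matrix below is hypothetical, not fitted autopsy data; it only illustrates computing the stationary distribution of a Markov chain and its Shannon entropy.

```python
import numpy as np

def steady_state(P):
    # Stationary distribution of a row-stochastic matrix P: the left
    # eigenvector for eigenvalue 1, normalized to sum to one.
    evals, evecs = np.linalg.eig(P.T)
    v = np.real(evecs[:, np.argmax(np.real(evals))])
    v = np.abs(v)
    return v / v.sum()

def shannon_entropy(p):
    # Entropy in bits; zero-probability entries contribute nothing.
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Hypothetical 3-site progression chain (sites standing in for
# anatomical locations); rows are transition probabilities.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.8, 0.1],
              [0.2, 0.2, 0.6]])
pi = steady_state(P)
H = shannon_entropy(pi)
```

In the paper's framework the stationary distribution plays the role of the autopsy-data distribution, and comparing H across chains is the kind of entropy ranking used to group cancers.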
BlenX-based compositional modeling of complex reaction mechanisms
Directory of Open Access Journals (Sweden)
Judit Zámborszky
2010-02-01
Molecular interactions are wired in a fascinating way, resulting in complex behavior of biological systems. Theoretical modeling provides a useful framework for understanding the dynamics and the function of such networks. The complexity of biological networks calls for conceptual tools that manage the combinatorial explosion of the set of possible interactions. A suitable conceptual tool to attack complexity is compositionality, already successfully used in the process algebra field to model computer systems. We rely on the BlenX programming language, which originated from the beta-binders process calculus, to specify and simulate high-level descriptions of biological circuits. The Gillespie stochastic framework of BlenX requires the decomposition of phenomenological functions into basic elementary reactions. Systematic unpacking of complex reaction mechanisms into BlenX templates is shown in this study. The estimation/derivation of missing parameters and the challenges emerging from compositional model building in stochastic process algebras are discussed. A biological example on the circadian clock is presented as a case study of BlenX compositionality.
Mathematical Model of Nicholson’s Experiment
Directory of Open Access Journals (Sweden)
Sergey D. Glyzin
2017-01-01
A mathematical model of insect population dynamics is considered, and an attempt is made to explain the classical experimental results of Nicholson with its help. In the first section of the paper Nicholson's experiment is described and dynamic equations for its modeling are chosen. A priori estimates for the model parameters can be made more precise by means of a local analysis of the dynamical system, which is carried out in the second section. For the parameter values found there, the loss of stability of the equilibrium of the problem leads to the bifurcation of a stable two-dimensional torus. Numerical simulations based on the estimates from the second section allow us to explain the classical Nicholson experiment, whose detailed theoretical substantiation is given in the last section. There, the largest Lyapunov exponent is computed for an attractor of the system; the way this exponent changes allows us to further narrow the region of the model parameter search. Justification of this experiment was made possible only by the combination of analytical and numerical methods in studying the equations of insect population dynamics. At the same time, the analytical approach made it possible to perform the numerical analysis in a rather narrow region of the parameter space; it is not possible to reach this region based only on general considerations.
Stern, Jennifer C.; Foustoukos, Dionysis I.; Sonke, Jeroen E.; Salters, Vincent J. M.
2014-01-01
The mobility of metals in soils and subsurface aquifers is strongly affected by sorption and complexation with dissolved organic matter, oxyhydroxides, clay minerals, and inorganic ligands. Humic substances (HS) are organic macromolecules with functional groups that have a strong affinity for binding metals, such as actinides. Thorium, often studied as an analog for tetravalent actinides, has also been shown to strongly associate with dissolved and colloidal HS in natural waters. The effects of HS on the mobilization dynamics of actinides are of particular interest in risk assessment of nuclear waste repositories. Here, we present conditional equilibrium binding constants (Kc,MHA) of thorium-, hafnium-, and zirconium-humic acid complexes from ligand competition experiments using capillary electrophoresis coupled with ICP-MS (CE-ICP-MS). Equilibrium dialysis ligand exchange (EDLE) experiments using size exclusion via a 1000 Da membrane were also performed to validate the CE-ICP-MS analysis. Experiments were performed at pH 3.5-7 with solutions containing one tetravalent metal (Th, Hf, or Zr), Elliot soil humic acid (EHA) or Pahokee peat humic acid (PHA), and EDTA. CE-ICP-MS and EDLE experiments yielded nearly identical binding constants for the metal-humic acid complexes, indicating that both methods are appropriate for examining metal speciation at conditions below neutral pH. We find that tetravalent metals form strong complexes with humic acids, with Kc,MHA several orders of magnitude above REE-humic complexes. Experiments were conducted at a range of dissolved HA concentrations to examine the effect of the [HA]/[Th] molar ratio on Kc,MHA. At low metal loading conditions (i.e., elevated [HA]/[Th] ratios) the Th-HA binding constant reached values that were not affected by the relative abundance of humic acid and thorium. The importance of [HA]/[Th] molar ratios in constraining the equilibrium of MHA complexation is apparent when our estimated Kc,MHA values
Multiscale modeling of complex materials phenomenological, theoretical and computational aspects
Trovalusci, Patrizia
2014-01-01
The papers in this volume deal with materials science, theoretical mechanics, and experimental and computational techniques at multiple scales, providing a sound base and a framework for many applications which are hitherto treated in a phenomenological sense. The basic principles of multiscale modeling strategies are formulated for modern complex multiphase materials subjected to various types of mechanical and thermal loading and environmental effects. The focus is on problems where mechanics is highly coupled with other concurrent physical phenomena. Attention is also focused on the historical origins of multiscale modeling and the foundations of continuum mechanics currently adopted to model non-classical continua with substructure, for which internal length scales play a crucial role.
Energy Technology Data Exchange (ETDEWEB)
Cherne, Frank J [Los Alamos National Laboratory; Jensen, Brian J [Los Alamos National Laboratory; Elkin, Vyacheslav M [VNIITF
2009-01-01
The complexity of cerium combined with its interesting material properties makes it a desirable material to examine dynamically. Characteristics such as the softening of the material before the phase change, the low-pressure solid-solid phase change, the predicted low-pressure melt boundary, and the solid-solid critical point add complexity to the construction of its equation of state. Currently, we are incorporating a feedback loop between a theoretical understanding of the material and an experimental understanding. Using a model equation of state for cerium, we compare calculated wave profiles with experimental wave profiles for a number of front-surface impact (cerium impacting a plated window) experiments. Using the calculated release isentrope, we predict the temperature of the observed rarefaction shock. These experiments showed that the release state occurs at different magnitudes, thus allowing us to infer where the dynamic γ-α phase boundary is.
Building Better Ecological Machines: Complexity Theory and Alternative Economic Models
Directory of Open Access Journals (Sweden)
Jess Bier
2016-12-01
Computer models of the economy are regularly used to predict economic phenomena and set financial policy. However, the conventional macroeconomic models are currently being reimagined after they failed to foresee the current economic crisis, the outlines of which began to be understood only in 2007-2008. In this article we analyze the most prominent of these reimaginings: agent-based models (ABMs). ABMs are an influential alternative to standard economic models, and they are one focus of complexity theory, a discipline that is a more open successor to the conventional chaos and fractal modeling of the 1990s. The modelers who create ABMs claim that their models depict markets as ecologies, and that they are more responsive than conventional models that depict markets as machines. We challenge this presentation, arguing instead that recent modeling efforts amount to the creation of models as ecological machines. Our paper aims to contribute to an understanding of the organizing metaphors of macroeconomic models, which we argue is relevant both conceptually and politically, e.g., when models are used for regulatory purposes.
Parametric Linear Hybrid Automata for Complex Environmental Systems Modeling
Directory of Open Access Journals (Sweden)
Samar Hayat Khan Tareen
2015-07-01
Environmental systems, whether they be weather patterns or predator-prey relationships, are dependent on a number of different variables, each directly or indirectly affecting the system at large. Since not all of these factors are known, these systems take on non-linear dynamics, making it difficult to accurately predict meaningful behavioral trends far into the future. However, such dynamics do not warrant complete ignorance of different efforts to understand and model close approximations of these systems. Towards this end, we have applied a logical modeling approach to model and analyze the behavioral trends and systematic trajectories that these systems exhibit without delving into their quantification. This approach, formalized by René Thomas for discrete logical modeling of Biological Regulatory Networks (BRNs) and further extended in our previous studies as parametric biological linear hybrid automata (Bio-LHA), has been previously employed for the analyses of different molecular regulatory interactions occurring across various cells and microbial species. As relationships between different interacting components of a system can be simplified as positive or negative influences, we can employ the Bio-LHA framework to represent different components of the environmental system as positive or negative feedbacks. In the present study, we highlight the benefits of hybrid (discrete/continuous) modeling, which leads to refinements among the forecasted behaviors in order to find out which ones are actually possible. We have taken two case studies, an interaction of three microbial species in a freshwater pond and a more complex atmospheric system, to show the applications of the Bio-LHA methodology for the timed hybrid modeling of environmental systems. Results show that the Bio-LHA approach is a viable method for behavioral modeling of complex environmental systems, finding timing constraints while keeping the complexity of the model
Bridging Mechanistic and Phenomenological Models of Complex Biological Systems.
Transtrum, Mark K; Qiu, Peng
2016-05-01
The inherent complexity of biological systems gives rise to complicated mechanistic models with a large number of parameters. On the other hand, the collective behavior of these systems can often be characterized by a relatively small number of phenomenological parameters. We use the Manifold Boundary Approximation Method (MBAM) as a tool for deriving simple phenomenological models from complicated mechanistic models. The resulting models are not black boxes, but remain expressed in terms of the microscopic parameters. In this way, we explicitly connect the macroscopic and microscopic descriptions, characterize the equivalence class of distinct systems exhibiting the same range of collective behavior, and identify the combinations of components that function as tunable control knobs for the behavior. We demonstrate the procedure for adaptation behavior exhibited by the EGFR pathway. From a 48 parameter mechanistic model, the system can be effectively described by a single adaptation parameter τ characterizing the ratio of time scales for the initial response and recovery time of the system which can in turn be expressed as a combination of microscopic reaction rates, Michaelis-Menten constants, and biochemical concentrations. The situation is not unlike modeling in physics in which microscopically complex processes can often be renormalized into simple phenomenological models with only a few effective parameters. The proposed method additionally provides a mechanistic explanation for non-universal features of the behavior.
Abell, Timothy N.; McCarrick, Robert M.; Bretz, Stacey Lowery; Tierney, David L.
2017-01-01
A structured inquiry experiment for inorganic synthesis has been developed to introduce undergraduate students to advanced spectroscopic techniques including paramagnetic nuclear magnetic resonance and electron paramagnetic resonance. Students synthesize multiple complexes with unknown first row transition metals and identify the unknown metals by…
Micro Wire-Drawing: Experiments And Modelling
International Nuclear Information System (INIS)
Berti, G. A.; Monti, M.; Bietresato, M.; D'Angelo, L.
2007-01-01
In the paper, the authors propose to adopt micro wire-drawing as a key process for investigating models of micro forming. The reason for this choice is that the process can be considered quasi-stationary, with tribological conditions at the interface between the material and the die assumed to be constant during the whole deformation. Two different materials have been investigated: i) a low-carbon steel and ii) a nonferrous metal (copper). The micro hardness and tensile tests performed on each drawn wire show a thin hardened layer (more evident than in macro wires) on the external surface of the wire, with hardening decreasing rapidly from the surface layer to the center. For the copper wire this effect is reduced, and a traditional material constitutive model seems adequate to predict the experiments. For the low-carbon steel a modified constitutive material model has been proposed and implemented in a FE code, giving better agreement with the experiments.
The semiotics of control and modeling relations in complex systems.
Joslyn, C
2001-01-01
We provide a conceptual analysis of ideas and principles from the systems theory discourse which underlie Pattee's semantic or semiotic closure, which is itself foundational for a school of theoretical biology derived from systems theory and cybernetics, and is now being related to biological semiotics and explicated in the relational biological school of Rashevsky and Rosen. Atomic control systems and models are described as the canonical forms of semiotic organization, sharing measurement relations, but differing topologically in that control systems are circularly and models linearly related to their environments. Computation in control systems is introduced, motivating hierarchical decomposition, hybrid modeling and control systems, and anticipatory or model-based control. The semiotic relations in complex control systems are described in terms of relational constraints, and rules and laws are distinguished as contingent and necessary functional entailments, respectively. Finally, selection as a meta-level of constraint is introduced as the necessary condition for semantic relations in control systems and models.
Extensive video-game experience alters cortical networks for complex visuomotor transformations.
Granek, Joshua A; Gorbet, Diana J; Sergio, Lauren E
2010-10-01
Using event-related functional magnetic resonance imaging (fMRI), we examined the effect of video-game experience on the neural control of increasingly complex visuomotor tasks. Previously, skilled individuals have demonstrated the use of a more efficient movement control brain network, including the prefrontal, premotor, primary sensorimotor and parietal cortices. Our results extend and generalize this finding by documenting additional prefrontal cortex activity in experienced video gamers planning for complex eye-hand coordination tasks that are distinct from actual video-game play. These changes in activation between non-gamers and extensive gamers are putatively related to the increased online control and spatial attention required for complex visually guided reaching. These data suggest that the basic cortical network for processing complex visually guided reaching is altered by extensive video-game play. Crown Copyright © 2009. Published by Elsevier Srl. All rights reserved.
Photogrammetry experiments with a model eye.
Rosenthal, A R; Falconer, D G; Pieper, I
1980-01-01
Digital photogrammetry was performed on stereophotographs of the optic nerve head of a modified Zeiss model eye in which optic cups of varying depths could be simulated. Experiments were undertaken to determine the impact of both photographic and ocular variables on the photogrammetric measurements of cup depth. The photogrammetric procedure tolerates refocusing, repositioning, and realignment as well as small variations in the geometric position of the camera. Progressive underestimation of cup depth was observed with increasing myopia, while progressive overestimation was noted with increasing hyperopia. High cylindrical errors at axis 90 degrees led to significant errors in cup depth estimates, while high cylindrical errors at axis 180 degrees did not materially affect the accuracy of the analysis. Finally, cup depths were seriously underestimated when the pupil diameter was less than 5.0 mm. PMID:7448139
Pipe missile impact experiments on concrete models
International Nuclear Information System (INIS)
McHugh, S.; Gupta, Y.; Seaman, L.
1981-06-01
The experiments described in this study are part of SRI studies for EPRI on the local response of reinforced concrete panels to missile impacts. The objectives of this task were to determine the feasibility of using scale model tests to reproduce the impact response of reinforced concrete panels observed in full-scale tests with pipe missiles, and to evaluate the effect of concrete strength on the impact response. The experimental approach consisted of replica scaling: the missile and target materials were similar to those used in the full-scale tests, with all dimensions scaled by 5/32. Four criteria were selected for comparing the scaled and full-scale test results: front-face penetration, back-face scabbing threshold, internal cracking in the panel, and missile deformation.
Josephson cross-sectional model experiment
International Nuclear Information System (INIS)
Ketchen, M.B.; Herrell, D.J.; Anderson, C.J.
1985-01-01
This paper describes the electrical design and evaluation of the Josephson cross-sectional model (CSM) experiment. The experiment served as a test vehicle to verify the operation at liquid-helium temperatures of Josephson circuits integrated in a package environment suitable for high-performance digital applications. The CSM consisted of four circuit chips assembled on two cards in a three-dimensional card-on-board package. The chips (package) were fabricated in a 2.5-μm (5-μm) minimum-linewidth Pb-alloy technology. A hierarchy of solder and pluggable connectors was used to attach the parts together and to provide electrical interconnections between parts. A data path which simulated a jump control sequence and a cache access in each machine cycle was successfully operated with cycle times down to 3.7 ns. The CSM incorporated the key components of the logic, power, and package of a prototype Josephson signal processor and demonstrated the feasibility of making such a processor with a sub-4-ns cycle time.
Morphogenesis and pattern formation in biological systems experiments and models
Noji, Sumihare; Ueno, Naoto; Maini, Philip
2003-01-01
A central goal of current biology is to decode the mechanisms that underlie the processes of morphogenesis and pattern formation. Concerned with the analysis of those phenomena, this book covers a broad range of research fields, including developmental biology, molecular biology, plant morphogenesis, ecology, epidemiology, medicine, paleontology, evolutionary biology, mathematical biology, and computational biology. In Morphogenesis and Pattern Formation in Biological Systems: Experiments and Models, experimental and theoretical aspects of biology are integrated for the construction and investigation of models of complex processes. This collection of articles on the latest advances by leading researchers not only brings together work from a wide spectrum of disciplines, but also provides a stepping-stone to the creation of new areas of discovery.
An Ontology for Modeling Complex Inter-relational Organizations
Wautelet, Yves; Neysen, Nicolas; Kolp, Manuel
This paper presents an ontology for organizational modeling through multiple complementary aspects. The primary goal of the ontology is to provide an adequate set of related concepts for studying complex organizations involved in many relationships at the same time. In this paper, we define complex organizations as networked organizations involved in a market eco-system that are playing several roles simultaneously. In such a context, traditional approaches focus on the macro analytic level of transactions; this is supplemented here with a micro analytic study of the actors' rationale. At first, the paper overviews the enterprise ontology literature to position our proposal and exposes its contributions and limitations. The ontology is then brought to an advanced level of formalization: a meta-model in the form of a UML class diagram gives an overview of the ontology concepts and their relationships, which are formally defined. Finally, the paper presents the case study on which the ontology has been validated.
A computational framework for modeling targets as complex adaptive systems
Santos, Eugene; Santos, Eunice E.; Korah, John; Murugappan, Vairavan; Subramanian, Suresh
2017-05-01
Modeling large military targets is a challenge as they can be complex systems encompassing myriad combinations of human, technological, and social elements that interact, leading to complex behaviors. Moreover, such targets have multiple components and structures, extending across multiple spatial and temporal scales, and are in a state of change, either in response to events in the environment or changes within the system. Complex adaptive system (CAS) theory can help in capturing the dynamism, interactions, and more importantly various emergent behaviors, displayed by the targets. However, a key stumbling block is incorporating information from various intelligence, surveillance and reconnaissance (ISR) sources, while dealing with the inherent uncertainty, incompleteness and time criticality of real world information. To overcome these challenges, we present a probabilistic reasoning network based framework called complex adaptive Bayesian Knowledge Base (caBKB). caBKB is a rigorous, overarching and axiomatic framework that models two key processes, namely information aggregation and information composition. While information aggregation deals with the union, merger and concatenation of information and takes into account issues such as source reliability and information inconsistencies, information composition focuses on combining information components where such components may have well defined operations. Since caBKBs can explicitly model the relationships between information pieces at various scales, it provides unique capabilities such as the ability to de-aggregate and de-compose information for detailed analysis. Using a scenario from the Network Centric Operations (NCO) domain, we will describe how our framework can be used for modeling targets with a focus on methodologies for quantifying NCO performance metrics.
Fundamentals of complex networks models, structures and dynamics
Chen, Guanrong; Li, Xiang
2014-01-01
Complex networks such as the Internet, WWW, transportation networks, power grids, biological neural networks, and scientific cooperation networks of all kinds provide challenges for future technological development. In particular, advanced societies have become dependent on large infrastructural networks to an extent beyond our capability to plan (modeling) and to operate (control). The recent spate of collapses in power grids and ongoing virus attacks on the Internet illustrate the need for knowledge about modeling, analysis of behaviors, optimized planning and performance control in such networks.
Model Complexities of Shallow Networks Representing Highly Varying Functions
Czech Academy of Sciences Publication Activity Database
Kůrková, Věra; Sanguineti, M.
2016-01-01
Roč. 171, 1 January (2016), s. 598-604 ISSN 0925-2312 R&D Projects: GA MŠk(CZ) LD13002 Grant - others:grant for Visiting Professors(IT) GNAMPA-INdAM Institutional support: RVO:67985807 Keywords : shallow networks * model complexity * highly varying functions * Chernoff bound * perceptrons * Gaussian kernel units Subject RIV: IN - Informatics, Computer Science Impact factor: 3.317, year: 2016
Modelling of turbulence and combustion for simulation of gas explosions in complex geometries
Energy Technology Data Exchange (ETDEWEB)
Arntzen, Bjoern Johan
1998-12-31
This thesis analyses and presents new models for turbulent reactive flows for CFD (Computational Fluid Dynamics) simulation of gas explosions in complex geometries like offshore modules. The course of a gas explosion in a complex geometry is largely determined by the development of turbulence and the accompanying increased combustion rate. To be able to model the process it is necessary to use a CFD code as a starting point, provided with a suitable turbulence and combustion model. The modelling and calculations are done in a three-dimensional finite-volume CFD code, where complex geometries are represented by a porosity concept, which gives porosity on the grid cell faces depending on what is inside the cell. The turbulent flow field is modelled with a k-ε turbulence model. Subgrid models are used for production of turbulence from geometry not fully resolved on the grid. Results from laser Doppler anemometry measurements around obstructions in steady and transient flows have been analysed and the turbulence models have been improved to handle transient, subgrid and reactive flows. The combustion is modelled with a burning velocity model and a flame model which incorporates the burning velocity into the code. Two different flame models have been developed: SIF (Simple Interface Flame model), which treats the flame as an interface between reactants and products, and the β-model, where the reaction zone is resolved with about three grid cells. The flame normally starts with a quasi-laminar burning velocity, due to flame instabilities, modelled as a function of flame radius and laminar burning velocity. As the flow field becomes turbulent, the flame uses a turbulent burning velocity model based on experimental data and dependent on turbulence parameters and laminar burning velocity. The laminar burning velocity is modelled as a function of gas mixture, equivalence ratio, pressure and temperature in the reactants. Simulations agree well with experiments.
Energy Technology Data Exchange (ETDEWEB)
Brown, Justin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hund, Lauren [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-02-01
Dynamic compression experiments are being performed on complicated materials using increasingly complex drivers. The data produced in these experiments are beginning to reach a regime where traditional analysis techniques break down, requiring the solution of an inverse problem. A common measurement in dynamic experiments is an interface velocity as a function of time, and often this functional output can be simulated using a hydrodynamics code. Bayesian model calibration is a statistical framework to estimate inputs into a computational model in the presence of multiple uncertainties, making it well suited to measurements of this type. In this article, we apply Bayesian model calibration to high pressure (250 GPa) ramp compression measurements in tantalum. We address several issues specific to this calibration, including the functional nature of the output as well as parameter and model discrepancy identifiability. Specifically, we propose scaling the likelihood function by an effective sample size rather than modeling the autocorrelation function to accommodate the functional output, and propose sensitivity analyses using the notion of 'modularization' to assess the impact of experiment-specific nuisance input parameters on estimates of material properties. We conclude that the proposed Bayesian model calibration procedure results in simple, fast, and valid inferences on the equation of state parameters for tantalum.
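The likelihood-scaling idea in this abstract can be sketched in a few lines. The recipe below is a generic effective-sample-size estimate for autocorrelated residuals, applied as a multiplier on an independent-Gaussian log-likelihood; the truncation rule, the Gaussian form, and all names are illustrative assumptions, not details taken from the article:

```python
import numpy as np

def effective_sample_size(residuals):
    # Crude ESS for an autocorrelated series: n_eff = n / (1 + 2 * sum(acf)),
    # summing positive autocorrelations and stopping at the first
    # non-positive lag (one common truncation heuristic).
    r = np.asarray(residuals, dtype=float)
    r = r - r.mean()
    n = len(r)
    var = np.dot(r, r) / n
    s = 0.0
    for lag in range(1, n // 2):
        acf = np.dot(r[:-lag], r[lag:]) / (n * var)
        if acf <= 0:
            break
        s += acf
    return n / (1 + 2 * s)

def scaled_log_lik(residuals, sigma):
    # Independent-Gaussian log-likelihood, down-weighted by n_eff / n
    # instead of modelling the full autocorrelation structure.
    r = np.asarray(residuals, dtype=float)
    n = len(r)
    ll = -0.5 * n * np.log(2 * np.pi * sigma**2) - np.dot(r, r) / (2 * sigma**2)
    return (effective_sample_size(r) / n) * ll
```

Strongly autocorrelated residuals yield a much smaller n_eff than white noise of the same length, which is exactly the down-weighting such a calibration relies on.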
Unified Model for Generation Complex Networks with Utility Preferential Attachment
International Nuclear Information System (INIS)
Wu Jianjun; Gao Ziyou; Sun Huijun
2006-01-01
In this paper, based on utility preferential attachment, we propose a new unified model to generate different network topologies such as scale-free, small-world and random networks. Moreover, a new network structure, named the super scale network, is found, which exhibits a monopoly characteristic in our simulation experiments. Finally, the characteristics of this new network are given.
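As a rough illustration of how preferential-attachment generators of this kind work: the abstract does not give the authors' utility function, so the degree-power weight `alpha` below is a stand-in assumption (alpha = 1 recovers classic scale-free growth, alpha = 0 random attachment), not the paper's model.

```python
import random

def grow_network(n, m=2, alpha=1.0, seed=0):
    # Grow an undirected network node by node; each new node attaches to
    # m distinct existing nodes chosen with probability proportional to
    # degree**alpha (the assumed "utility" weight).
    rng = random.Random(seed)
    # Start from a small clique of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    degree = {i: m for i in range(m + 1)}
    for new in range(m + 1, n):
        weights = {v: degree[v] ** alpha for v in degree}
        targets = set()
        while len(targets) < m:
            total = sum(weights.values())
            r = rng.uniform(0, total)
            acc = 0.0
            for v, w in weights.items():
                acc += w
                if acc >= r:
                    targets.add(v)
                    break
        for t in targets:
            edges.append((new, t))
            degree[t] += 1
        degree[new] = m
    return edges, degree

edges, degree = grow_network(200, m=2, alpha=1.0)
```

Sweeping `alpha` between 0 and 1 interpolates between random and scale-free topologies, which is the kind of unification the paper's utility term provides.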
Methodology and Results of Mathematical Modelling of Complex Technological Processes
Mokrova, Nataliya V.
2018-03-01
The methodology of system analysis allows us to construct a mathematical model of a complex technological process. A mathematical description of the plasma-chemical process is proposed. The importance of the quenching rate and of the initial temperature decrease time is confirmed for producing the maximum amount of the target product. The results of numerical integration of the system of differential equations can be used to describe reagent concentrations, plasma jet rate and temperature in order to achieve the optimal hardening mode. Such models are applicable both for solving control problems and for predicting future states of sophisticated technological systems.
The complex sine-Gordon model on a half line
International Nuclear Information System (INIS)
Tzamtzis, Georgios
2003-01-01
In this thesis, we study the complex sine-Gordon model on a half line. The model in the bulk is an integrable (1+1)-dimensional field theory which is U(1) gauge invariant and comprises a generalisation of the sine-Gordon theory. It admits soliton and breather solutions. By introducing suitably selected boundary conditions we may consider the model on a half line. Through such conditions the model can be shown to remain integrable, and various aspects of the boundary theory can be examined. The first chapter serves as a brief introduction to some basic concepts of integrability and soliton solutions. As an example of an integrable system with soliton solutions, the sine-Gordon model is presented both in the bulk and on a half line. These results will serve as a useful guide for the model at hand. The introduction finishes with a brief overview of the two methods that will be used in the fourth chapter in order to obtain the quantum spectrum of the boundary complex sine-Gordon model. In the second chapter the model is properly introduced along with a brief literature review. Different realisations of the model and their connexions are discussed. The vacuum of the theory is investigated. Soliton solutions are given and a discussion on the existence of breathers follows. Finally, the collapse of breather solutions to single solitons is demonstrated and the chapter concludes with a different approach to the breather problem. In the third chapter, we construct the lowest conserved currents and through them we find suitable boundary conditions that allow for their conservation in the presence of a boundary. The boundary term is added to the Lagrangian and the vacuum is reexamined in the half-line case. The reflection process of solitons from the boundary is studied and the time delay is calculated. Finally, we address the existence of boundary-bound states. In the fourth chapter we study the quantum complex sine-Gordon model. We begin with a brief overview of the theory in
Multi-scale modelling for HEDP experiments on Orion
Sircombe, N. J.; Ramsay, M. G.; Hughes, S. J.; Hoarty, D. J.
2016-05-01
The Orion laser at AWE couples high-energy long-pulse lasers with high-intensity short pulses, allowing material to be compressed beyond solid density and heated isochorically. This experimental capability has been demonstrated as a platform for conducting High Energy Density Physics material properties experiments. A clear understanding of the physics in experiments at this scale, combined with a robust, flexible and predictive modelling capability, is an important step towards more complex experimental platforms and ICF schemes which rely on high-power lasers to achieve ignition. These experiments present a significant modelling challenge: the system is characterised by hydrodynamic effects over nanoseconds, driven by long-pulse lasers or the pre-pulse of the petawatt beams, and by fast-electron generation, transport, and heating effects over picoseconds, driven by short-pulse high-intensity lasers. We describe the approach taken at AWE: integrating a number of codes which capture the detailed physics for each spatial and temporal scale. Simulations of the heating of buried aluminium microdot targets are discussed, and we consider the role such tools can play in understanding the impact of changes to the laser parameters, such as frequency and pre-pulse, as well as understanding effects which are difficult to observe experimentally.
Nigmatullin, Raoul R.; Maione, Guido; Lino, Paolo; Saponaro, Fabrizio; Zhang, Wei
2017-01-01
In this paper, we suggest a general theory that enables one to describe experiments associated with reproducible or quasi-reproducible data reflecting the dynamical and self-similar properties of a wide class of complex systems. By a complex system we understand a system for which a model based on microscopic principles and suppositions about the nature of the matter is absent. This microscopic model is usually determined as "the best fit" model. The behavior of the complex system relative to a control variable (time, frequency, wavelength, etc.) can be described in terms of the so-called intermediate model (IM). One can prove that the fitting parameters of the IM are associated with the amplitude-frequency response of the segment of the Prony series. The segment of the Prony series, including the set of the decomposition coefficients and the set of the exponential functions (with k = 1,2,…,K), is limited by the final mode K. The exponential functions of this decomposition depend on time and are found by the original algorithm described in the paper. This approach serves as a logical continuation of the results obtained earlier in the paper [Nigmatullin RR, Zhang W and Striccoli D. General theory of experiment containing reproducible data: The reduction to an ideal experiment. Commun Nonlinear Sci Numer Simul, 27, (2015), pp 175-192] for reproducible experiments and includes the previous results as a partial case. In this paper, we consider a more complex case, when the available data can create short samplings or exhibit some instability during the process of measurements. We give some justified evidence and conditions proving the validity of this theory for the description of a wide class of complex systems in terms of the reduced set of the fitting parameters belonging to the segment of the Prony series. The elimination of uncontrollable factors expressed in the form of the apparatus function is discussed. To illustrate how to apply the theory and take advantage of its
Extending a configuration model to find communities in complex networks
International Nuclear Information System (INIS)
Jin, Di; Hu, Qinghua; He, Dongxiao; Yang, Bo; Baquero, Carlos
2013-01-01
Discovery of communities in complex networks is a fundamental data analysis task in various domains. Generative models are a promising class of techniques for identifying modular properties from networks, which has been actively discussed recently. However, most of them cannot preserve the degree sequence of networks, which will distort the community detection results. Rather than using a blockmodel as most current works do, here we generalize a configuration model, namely, a null model of modularity, to solve this problem. Towards decomposing and combining sub-graphs according to the soft community memberships, our model incorporates the ability to describe community structures, something the original model does not have. Also, it has the property, as with the original model, that it fixes the expected degree sequence to be the same as that of the observed network. We combine both the community property and degree sequence preserving into a single unified model, which gives better community results compared with other models. Thereafter, we learn the model using a technique of nonnegative matrix factorization and determine the number of communities by applying consensus clustering. We test this approach both on synthetic benchmarks and on real-world networks, and compare it with two similar methods. The experimental results demonstrate the superior performance of our method over competing methods in detecting both disjoint and overlapping communities. (paper)
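The configuration-model null term that the authors generalize is the familiar expected-edge weight k_i·k_j/(2m) from Newman's modularity. A minimal sketch of that baseline quantity follows (plain modularity only, not the authors' NMF-based soft-membership extension):

```python
from collections import defaultdict

def modularity(edges, communities):
    # Newman modularity Q = (1/2m) * sum over same-community pairs (i, j)
    # of (A_ij - k_i*k_j / 2m), where k_i*k_j / 2m is the expected edge
    # weight under the degree-preserving configuration (null) model.
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    m = len(edges)
    comm = {n: c for c, group in enumerate(communities) for n in group}
    adj = set()
    for u, v in edges:
        adj.add((u, v))
        adj.add((v, u))
    nodes = list(degree)
    q = 0.0
    for i in nodes:
        for j in nodes:
            if comm[i] == comm[j]:
                a = 1.0 if (i, j) in adj else 0.0
                q += a - degree[i] * degree[j] / (2 * m)
    return q / (2 * m)

# Two triangles joined by a single bridge edge:
tri2 = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
q = modularity(tri2, [{0, 1, 2}, {3, 4, 5}])  # ~0.357 (= 5/14)
```

Because the null term fixes the expected degree sequence to the observed one, dense groups score well only when they are denser than their degrees alone would predict; that is the property the paper preserves while adding soft community memberships.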
The Bolund experiment: Overview and background; Wind conditions in complex terrain
Energy Technology Data Exchange (ETDEWEB)
Bechmann, A.; Berg, J.; Courtney, M.S.; Joergensen, Hans E.; Mann, J.; Soerensen, Niels N.
2009-07-15
The Bolund experiment is a measuring campaign performed in 2007 and 2008. The aim of the experiment is to measure the flow field around the Bolund hill in order to provide a dataset for validating numerical flow models. The present report gives an overview of the whole experiment, including a description of the orography, the instrumentation used and the data processing. The actual measurements are available from a database, which is also described. (au)
Analogue experiments as benchmarks for models of lava flow emplacement
Garel, F.; Kaminski, E. C.; Tait, S.; Limare, A.
2013-12-01
experimental observations of the effect of wind on the surface thermal structure of a viscous flow, which could be used to benchmark a thermal heat-loss model. We will also briefly present more complex analogue experiments using wax material. These experiments exhibit discontinuous advance behavior, and a dual surface thermal structure with low (solidified) vs. high (hot liquid exposed at the surface) surface-temperature regions. Emplacement models should tend to reproduce these two features, also observed on lava flows, to better predict the hazard of lava inundation.
Using model complexes to augment and advance metalloproteinase inhibitor design.
Jacobsen, Faith E; Cohen, Seth M
2004-05-17
The tetrahedral zinc complex [(Tp(Ph,Me))ZnOH] (Tp(Ph,Me) = hydrotris(3,5-phenylmethylpyrazolyl)borate) was combined with 2-thenylmercaptan, ethyl 4,4,4-trifluoroacetoacetate, salicylic acid, salicylamide, thiosalicylic acid, thiosalicylamide, methyl salicylate, methyl thiosalicyliate, and 2-hydroxyacetophenone to form the corresponding [(Tp(Ph,Me))Zn(ZBG)] complexes (ZBG = zinc-binding group). X-ray crystal structures of these complexes were obtained to determine the mode of binding for each ZBG, several of which had been previously studied with SAR by NMR (structure-activity relationship by nuclear magnetic resonance) as potential ligands for use in matrix metalloproteinase inhibitors. The [(Tp(Ph,Me))Zn(ZBG)] complexes show that hydrogen bonding and donor atom acidity have a pronounced effect on the mode of binding for this series of ligands. The results of these studies give valuable insight into how ligand protonation state and intramolecular hydrogen bonds can influence the coordination mode of metal-binding proteinase inhibitors. The findings here suggest that model-based approaches can be used to augment drug discovery methods applied to metalloproteins and can aid second-generation drug design.
Semiotic aspects of control and modeling relations in complex systems
Energy Technology Data Exchange (ETDEWEB)
Joslyn, C.
1996-08-01
A conceptual analysis of the semiotic nature of control is provided with the goal of elucidating its nature in complex systems. Control is identified as a canonical form of semiotic relation of a system to its environment. As a form of constraint between a system and its environment, its necessary and sufficient conditions are established, and the stabilities resulting from control are distinguished from other forms of stability. These result from the presence of semantic coding relations, and thus the class of control systems is hypothesized to be equivalent to that of semiotic systems. Control systems are contrasted with models, which, while they have the same measurement functions as control systems, do not necessarily require semantic relations because of the lack of the requirement of an interpreter. A hybrid construction of models in control systems is detailed. Towards the goal of considering the nature of control in complex systems, the possible relations among collections of control systems are considered. Powers arguments on conflict among control systems and the possible nature of control in social systems are reviewed, and reconsidered based on our observations about hierarchical control. Finally, we discuss the necessary semantic functions which must be present in complex systems for control in this sense to be present at all.
Stability of rotor systems: A complex modelling approach
DEFF Research Database (Denmark)
Kliem, Wolfhard; Pommer, Christian; Stoustrup, Jakob
1998-01-01
The dynamics of a large class of rotor systems can be modelled by a linearized complex matrix differential equation of second order, M z'' + (D + iG) z' + (K + iN) z = 0, where the system matrices M, D, G, K and N are real symmetric. Moreover, M and K are assumed to be positive definite and D ... approach applying bounds of appropriate Rayleigh quotients. The rotor systems tested are: a simple Laval rotor, a Laval rotor with additional elasticity and damping in the bearings, and a number of rotor systems with complex symmetric 4 x 4 randomly generated matrices.
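The stability question in this abstract reduces to the eigenvalues of a quadratic eigenvalue problem in the complex matrices. A sketch via companion linearization (matrix values below are illustrative placeholders, not taken from the paper):

```python
import numpy as np

def rotor_eigenvalues(M, D, G, K, N):
    # Eigenvalues of M z'' + (D + iG) z' + (K + iN) z = 0 via the
    # first-order companion form; the system is asymptotically stable
    # when every eigenvalue has negative real part.
    M, D, G, K, N = (np.asarray(X, dtype=complex) for X in (M, D, G, K, N))
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([
        [np.zeros((n, n)), np.eye(n)],
        [-Minv @ (K + 1j * N), -Minv @ (D + 1j * G)],
    ])
    return np.linalg.eigvals(A)

# Lightly damped 2-dof example: z'' + 0.2 z' + z = 0 per coordinate,
# so the exact roots are -0.1 +/- i*sqrt(0.99) (each twice).
I2 = np.eye(2)
lam = rotor_eigenvalues(I2, 0.2 * I2, 0 * I2, I2, 0 * I2)
```

The Rayleigh-quotient bounds mentioned in the abstract give sufficient stability conditions without computing these eigenvalues explicitly; the direct eigenvalue check above is the brute-force counterpart.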
PeTTSy: a computational tool for perturbation analysis of complex systems biology models.
Domijan, Mirela; Brown, Paul E; Shulgin, Boris V; Rand, David A
2016-03-10
Over the last decade, sensitivity analysis techniques have been shown to be very useful to analyse complex and high-dimensional Systems Biology models. However, many of the currently available toolboxes have either used parameter sampling, been focused on a restricted set of model observables of interest, studied optimisation of an objective function, or have not dealt with multiple simultaneous model parameter changes where the changes can be permanent or temporary. Here we introduce our new, freely downloadable toolbox, PeTTSy (Perturbation Theory Toolbox for Systems). PeTTSy is a package for MATLAB which implements a wide array of techniques for the perturbation theory and sensitivity analysis of large and complex ordinary differential equation (ODE) based models. PeTTSy is a comprehensive modelling framework that introduces a number of new approaches and that fully addresses analysis of oscillatory systems. It examines sensitivity of the models to perturbations of parameters, where the perturbation timing, strength, length and overall shape can be controlled by the user. This can be done in a system-global setting, namely, the user can determine how many parameters to perturb, by how much and for how long. PeTTSy also offers the user the ability to explore the effect of the parameter perturbations on many different types of outputs: period, phase (timing of peak) and model solutions. PeTTSy can be employed on a wide range of mathematical models including free-running and forced oscillators and signalling systems. To enable experimental optimisation using the Fisher Information Matrix, it efficiently allows one to combine multiple variants of a model (i.e. a model with multiple experimental conditions) in order to determine the value of new experiments. It is especially useful in the analysis of large and complex models involving many variables and parameters. PeTTSy is a comprehensive tool for analysing large and complex models of regulatory and
Surface complexation modeling of zinc sorption onto ferrihydrite.
Dyer, James A; Trivedi, Paras; Scrivner, Noel C; Sparks, Donald L
2004-02-01
A previous study involving lead(II) [Pb(II)] sorption onto ferrihydrite over a wide range of conditions highlighted the advantages of combining molecular- and macroscopic-scale investigations with surface complexation modeling to predict Pb(II) speciation and partitioning in aqueous systems. In this work, an extensive collection of new macroscopic and spectroscopic data was used to assess the ability of the modified triple-layer model (TLM) to predict single-solute zinc(II) [Zn(II)] sorption onto 2-line ferrihydrite in NaNO3 solutions as a function of pH, ionic strength, and concentration. Regression of constant-pH isotherm data, together with potentiometric titration and pH edge data, was a much more rigorous test of the modified TLM than fitting pH edge data alone. When coupled with valuable input from spectroscopic analyses, good fits of the isotherm data were obtained with a one-species, one-Zn-sorption-site model using the bidentate-mononuclear surface complex, (≡FeO)2Zn; however, surprisingly, both the density of Zn(II) sorption sites and the value of the best-fit equilibrium "constant" for the bidentate-mononuclear complex had to be adjusted with pH to adequately fit the isotherm data. Although spectroscopy provided some evidence for multinuclear surface complex formation at surface loadings approaching site saturation at pH ≥ 6.5, the assumption of a bidentate-mononuclear surface complex provided acceptable fits of the sorption data over the entire range of conditions studied. Regressing edge data in the absence of isotherm and spectroscopic data resulted in a fair number of surface-species/site-type combinations that provided acceptable fits of the edge data, but unacceptable fits of the isotherm data. A linear relationship between logK((≡FeO)2Zn) and pH was found, given by logK((≡FeO)2Zn, at 1 g/L) = 2.058(pH) - 6.131. In addition, a surface activity coefficient term was introduced to the model to reduce the ionic strength
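The reported regression can be applied directly. A one-liner for the pH-dependent constant (the 1 g/L solids loading is as stated in the abstract; applying the fit outside the study's pH range would be an extrapolation the source does not support):

```python
def log_k_feo2zn(pH):
    # Best-fit logK for the bidentate (=FeO)2Zn surface complex at
    # 1 g/L ferrihydrite, from the reported fit logK = 2.058*pH - 6.131.
    return 2.058 * pH - 6.131

# Sorption strengthens steeply with pH: about two log units per pH unit.
print(round(log_k_feo2zn(6.5), 3))  # 7.246
```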
Modelling of the quenching process in complex superconducting magnet systems
International Nuclear Information System (INIS)
Hagedorn, D.; Rodriguez-Mateos, F.
1992-01-01
This paper reports that the superconducting twin-bore dipole magnet for the proposed Large Hadron Collider (LHC) at CERN shows a complex winding structure consisting of eight compact layers, each of them electromagnetically and thermally coupled with the others. This magnet is only one part of an electrical circuit; test and operation conditions are characterized by different circuits. In order to study the quenching process in this complex system, design adequate protection schemes, and provide a basis for the dimensioning of protection devices such as heaters, current breakers and dump resistors, a general simulation tool called QUABER has been developed using the analog system analysis program SABER. A complete set of electro-thermal models has been created for the propagation of normal regions. Any network extension or modification is easy to implement without rewriting the whole set of differential equations.
A Primer for Model Selection: The Decisive Role of Model Complexity
Höge, Marvin; Wöhling, Thomas; Nowak, Wolfgang
2018-03-01
Selecting a "best" model among several competing candidate models poses an often encountered problem in water resources modeling (and other disciplines which employ models). For a modeler, the best model fulfills a certain purpose best (e.g., flood prediction), which is typically assessed by comparing model simulations to data (e.g., stream flow). Model selection methods find the "best" trade-off between good fit with data and model complexity. In this context, the interpretations of model complexity implied by different model selection methods are crucial, because they represent different underlying goals of modeling. Over the last decades, numerous model selection criteria have been proposed, but modelers who primarily want to apply a model selection criterion often face a lack of guidance for choosing the right criterion that matches their goal. We propose a classification scheme for model selection criteria that helps to find the right criterion for a specific goal, i.e., which employs the correct complexity interpretation. We identify four model selection classes which seek to achieve high predictive density, low predictive error, high model probability, or shortest compression of data. These goals can be achieved by following either nonconsistent or consistent model selection and by either incorporating a Bayesian parameter prior or not. We allocate commonly used criteria to these four classes, analyze how they represent model complexity and what this means for the model selection task. Finally, we provide guidance on choosing the right type of criteria for specific model selection tasks. (A quick guide through all key points is given at the end of the introduction.)
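The complexity trade-off the authors classify can be made concrete with the two most common criteria. A toy comparison (the log-likelihood values are hypothetical, and AIC/BIC represent only two of the four criterion classes the paper discusses):

```python
import math

def aic(log_lik, k):
    # Akaike information criterion: complexity enters only as the
    # parameter count k (a nonconsistent, predictive-density-type criterion).
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    # Bayesian information criterion: the log(n)*k penalty grows with
    # sample size, making it a consistent, model-probability-type criterion.
    return math.log(n) * k - 2 * log_lik

# Model B fits slightly better (higher log-likelihood) but spends three
# extra parameters; lower criterion values are preferred.
n = 100
aic_a, aic_b = aic(-120.0, 2), aic(-118.5, 5)
bic_a, bic_b = bic(-120.0, 2, n), bic(-118.5, 5, n)
```

Here both criteria prefer the simpler model A, but because the BIC penalty scales with log(n), the two classes can disagree at other sample sizes, which is exactly why matching the criterion to the modeling goal matters.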
Bates, P. D.; Neal, J. C.; Fewtrell, T. J.
2012-12-01
In this paper we consider two related questions. First, we address the issue of how much physical complexity is necessary in a model in order to simulate floodplain inundation to within validation data error. This is achieved through development of a single-code/multiple-physics hydraulic model (LISFLOOD-FP) where different degrees of complexity can be switched on or off. Different configurations of this code are applied to four benchmark test cases, and compared to the results of a number of industry-standard models. Second, we address the issue of how parameter sensitivity and transferability change with increasing complexity, using numerical experiments with models of different physical and geometric intricacy. Hydraulic models are a good example system with which to address such generic modelling questions as: (1) they have a strong physical basis; (2) there is only one set of equations to solve; (3) they require only topography and boundary conditions as input data; and (4) they typically require only a single free parameter, namely boundary friction. In terms of the complexity required, we show that for the problem of sub-critical floodplain inundation a number of codes of different dimensionality and resolution can be found to fit uncertain model validation data equally well, and that in this situation Occam's razor emerges as a useful logic to guide model selection. We also find that model skill usually improves more rapidly with increases in model spatial resolution than with increases in physical complexity, and that standard approaches to testing hydraulic models against laboratory data or analytical solutions may fail to identify this important fact. Lastly, we find that in benchmark testing studies significant differences can exist between codes with identical numerical solution techniques as a result of auxiliary choices regarding the specifics of model implementation that are frequently unreported by code developers. As a consequence, making sound
Diffusion in higher dimensional SYK model with complex fermions
Cai, Wenhe; Ge, Xian-Hui; Yang, Guo-Hong
2018-01-01
We construct a new higher-dimensional SYK model with complex fermions on bipartite lattices. As an extension of the original zero-dimensional SYK model, we focus on the one-dimensional case; a similar Hamiltonian can be obtained in higher dimensions. This model has a conserved U(1) fermion number Q and a conjugate chemical potential μ. We evaluate the thermal and charge diffusion constants via a large-q expansion in the low-temperature limit. The results show that the diffusivity depends on the ratio of free Majorana fermions to Majorana fermions with SYK interactions. The transport properties and the butterfly velocity are accordingly calculated at low temperature. The specific heat and the thermal conductivity are proportional to the temperature. The electrical resistivity also has a term with linear temperature dependence.
3D model of amphioxus steroid receptor complexed with estradiol
Energy Technology Data Exchange (ETDEWEB)
Baker, Michael E., E-mail: mbaker@ucsd.edu [Department of Medicine, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0693 (United States); Chang, David J. [Department of Biology, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0693 (United States)
2009-08-28
The origins of signaling by vertebrate steroids are not fully understood. An important advance was the report that an estrogen-binding steroid receptor [SR] is present in amphioxus, a basal chordate with a body plan similar to that of vertebrates. To investigate the evolution of estrogen binding to steroid receptors, we constructed a 3D model of the amphioxus SR complexed with estradiol. This 3D model indicates that although the SR is activated by estradiol, some interactions between estradiol and human ERα are not conserved in the SR, which can explain the low affinity of estradiol for the SR. These differences between the SR and ERα in the steroid-binding domain are sufficient to suggest that another steroid is the physiological regulator of the SR. The 3D model predicts that mutation of Glu-346 to Gln will increase the affinity of testosterone for the amphioxus SR and elucidate the evolution of steroid binding to nuclear receptors.
International Nuclear Information System (INIS)
Bonten, Luc T.C.; Groenenberg, Jan E.; Meesenburg, Henning; Vries, Wim de
2011-01-01
Various dynamic soil chemistry models have been developed to gain insight into impacts of atmospheric deposition of sulphur, nitrogen and other elements on soil and soil solution chemistry. Sorption parameters for anions and cations are generally calibrated for each site, which hampers extrapolation in space and time. On the other hand, recently developed surface complexation models (SCMs) have been successful in predicting ion sorption for static systems using generic parameter sets. This study reports the inclusion of an assemblage of these SCMs in the dynamic soil chemistry model SMARTml and applies this model to a spruce forest site in Solling Germany. Parameters for SCMs were taken from generic datasets and not calibrated. Nevertheless, modelling results for major elements matched observations well. Further, trace metals were included in the model, also using the existing framework of SCMs. The model predicted sorption for most trace elements well. - Highlights: → Surface complexation models can be well applied in field studies. → Soil chemistry under a forest site is adequately modelled using generic parameters. → The model is easily extended with extra elements within the existing framework. → Surface complexation models can show the linkages between major soil chemistry and trace element behaviour. - Surface complexation models with generic parameters make calibration of sorption superfluous in dynamic modelling of deposition impacts on soil chemistry under nature areas.
Energy Technology Data Exchange (ETDEWEB)
Bonten, Luc T.C., E-mail: luc.bonten@wur.nl [Alterra-Wageningen UR, Soil Science Centre, P.O. Box 47, 6700 AA Wageningen (Netherlands); Groenenberg, Jan E. [Alterra-Wageningen UR, Soil Science Centre, P.O. Box 47, 6700 AA Wageningen (Netherlands); Meesenburg, Henning [Northwest German Forest Research Station, Abt. Umweltkontrolle, Sachgebiet Intensives Umweltmonitoring, Goettingen (Germany); Vries, Wim de [Alterra-Wageningen UR, Soil Science Centre, P.O. Box 47, 6700 AA Wageningen (Netherlands)
2011-10-15
Various dynamic soil chemistry models have been developed to gain insight into impacts of atmospheric deposition of sulphur, nitrogen and other elements on soil and soil solution chemistry. Sorption parameters for anions and cations are generally calibrated for each site, which hampers extrapolation in space and time. On the other hand, recently developed surface complexation models (SCMs) have been successful in predicting ion sorption for static systems using generic parameter sets. This study reports the inclusion of an assemblage of these SCMs in the dynamic soil chemistry model SMARTml and applies this model to a spruce forest site in Solling, Germany. Parameters for SCMs were taken from generic datasets and not calibrated. Nevertheless, modelling results for major elements matched observations well. Further, trace metals were included in the model, also using the existing framework of SCMs. The model predicted sorption for most trace elements well. - Highlights: → Surface complexation models can be well applied in field studies. → Soil chemistry under a forest site is adequately modelled using generic parameters. → The model is easily extended with extra elements within the existing framework. → Surface complexation models can show the linkages between major soil chemistry and trace element behaviour. - Surface complexation models with generic parameters make calibration of sorption superfluous in dynamic modelling of deposition impacts on soil chemistry under nature areas.
Dynamic crack initiation toughness: experiments and peridynamic modeling.
Energy Technology Data Exchange (ETDEWEB)
Foster, John T.
2009-10-01
This is a dissertation on research conducted studying the dynamic crack initiation toughness of a 4340 steel. Researchers have been conducting experimental testing of dynamic crack initiation toughness, K_Ic, for many years, using many experimental techniques with vastly different trends in the results when reporting K_Ic as a function of loading rate. The dissertation describes a novel experimental technique for measuring K_Ic in metals using the Kolsky bar. The method borrows from improvements made in recent years in traditional Kolsky bar testing by using pulse shaping techniques to ensure a constant loading rate applied to the sample before crack initiation. Dynamic crack initiation measurements were reported on a 4340 steel at two different loading rates. The steel was shown to exhibit a rate dependence, with the recorded values of K_Ic being much higher at the higher loading rate. Using the knowledge of this rate dependence as a motivation in attempting to model the fracture events, a viscoplastic constitutive model was implemented into a peridynamic computational mechanics code. Peridynamics is a newly developed theory in solid mechanics that replaces the classical partial differential equations of motion with integro-differential equations which do not require the existence of spatial derivatives in the displacement field. This allows for the straightforward modeling of unguided crack initiation and growth. To date, peridynamic implementations have used severely restricted constitutive models. This research represents the first implementation of a complex material model and its validation. After showing results comparing deformations to experimental Taylor anvil impact for the viscoplastic material model, a novel failure criterion is introduced to model the dynamic crack initiation toughness experiments. The failure model is based on an energy criterion and uses the K_Ic values recorded experimentally as an input. The failure model
Robotic general surgery experience: a gradual progress from simple to more complex procedures.
Al-Naami, M; Anjum, M N; Aldohayan, A; Al-Khayal, K; Alkharji, H
2013-12-01
Robotic surgery was introduced at our institution in 2003, and we used a progressive approach advancing from simple to more complex procedures. A retrospective chart review was performed; 129 cases were included in total. Set-up and operative times have improved over time and with experience. Conversion rates to standard laparoscopic or open techniques were 4.7% and 1.6%, respectively. Intraoperative complications (6.2%), blood loss and hospital stay were directly proportional to complexity. There were no mortalities, and the postoperative complication rate (13.2%) was within accepted norms. Our findings suggest that robot technology is presently most useful in cases tailored toward its advantages, i.e. those confined to a single space, those that require performance of complex tasks, and re-do procedures. Copyright © 2013 John Wiley & Sons, Ltd.
Cloud chamber experiments on the origin of ice crystal complexity in cirrus clouds
Directory of Open Access Journals (Sweden)
M. Schnaiter
2016-04-01
This study reports on the origin of small-scale ice crystal complexity and its influence on the angular light scattering properties of cirrus clouds. Cloud simulation experiments were conducted at the AIDA (Aerosol Interactions and Dynamics in the Atmosphere) cloud chamber of the Karlsruhe Institute of Technology (KIT). A new experimental procedure was applied to grow and sublimate ice particles at defined super- and subsaturated ice conditions and for temperatures in the −40 to −60 °C range. The experiments were performed for ice clouds generated via homogeneous and heterogeneous initial nucleation. Small-scale ice crystal complexity was deduced from measurements of spatially resolved single particle light scattering patterns by the latest version of the Small Ice Detector (SID-3). It was found that a high crystal complexity dominates the microphysics of the simulated clouds, and the degree of this complexity is dependent on the available water vapor during the crystal growth. Indications were found that the small-scale crystal complexity is influenced by unfrozen H2SO4/H2O residuals in the case of homogeneous initial ice nucleation. Angular light scattering functions of the simulated ice clouds were measured by the two currently available airborne polar nephelometers: the polar nephelometer (PN) probe of Laboratoire de Météorologie Physique (LaMP) and the Particle Habit Imaging and Polar Scattering (PHIPS-HALO) probe of KIT. The measured scattering functions are featureless and flat in the side and backward scattering directions. It was found that these functions have a rather low sensitivity to the small-scale crystal complexity for ice clouds that were grown under typical atmospheric conditions. These results have implications for the microphysical properties of cirrus clouds and for the radiative transfer through these clouds.
Experiment planning using high-level component models at W7-X
International Nuclear Information System (INIS)
Lewerentz, Marc; Spring, Anett; Bluhm, Torsten; Heimann, Peter; Hennig, Christine; Kühner, Georg; Kroiss, Hugo; Krom, Johannes G.; Laqua, Heike; Maier, Josef; Riemann, Heike; Schacht, Jörg; Werner, Andreas; Zilker, Manfred
2012-01-01
Highlights: ► Introduction of models for an abstract description of fusion experiments. ► Component models support creating feasible experiment programs at planning time. ► Component models contain knowledge about physical and technical constraints. ► Generated views on models allow crucial information to be presented. - Abstract: The superconducting stellarator Wendelstein 7-X (W7-X) is a fusion device capable of steady-state operation. Furthermore, W7-X is a very complex technical system. To cope with these requirements, a modular and strongly hierarchical component-based control and data acquisition system has been designed. The behavior of W7-X is characterized by thousands of technical parameters of the participating components. The intended sequential change of those parameters during an experiment is defined in an experiment program. Planning such an experiment program is a crucial and complex task. To reduce the complexity, an abstract, more physics-oriented high-level layer was introduced earlier. The so-called high-level (physics) parameters are used to encapsulate technical details. This contribution focuses on the extension of this layer to a high-level component model, which completely describes the behavior of a component for a certain period of time. It allows defining not only simple value ranges but also complex dependencies between physics parameters: dependencies within components, dependencies between components, or temporal dependencies. Component models can now be analyzed to generate various views of an experiment. A first implementation of such an analysis process is already finished: a graphical preview of a planned discharge can be generated from a chronological sequence of component models. This allows physicists to survey complex planned experiment programs at a glance.
Multiagent model and mean field theory of complex auction dynamics
Chen, Qinghua; Huang, Zi-Gang; Wang, Yougui; Lai, Ying-Cheng
2015-09-01
Recent years have witnessed a growing interest in analyzing a variety of socio-economic phenomena using methods from statistical and nonlinear physics. We study a class of complex systems arising from economics: lowest unique bid auction (LUBA) systems, a recently emerged class of online auction games. Through analyzing large empirical data sets of LUBA, we identify a general feature of the bid price distribution: an inverted J-shaped function with exponential decay in the large bid price region. To account for the distribution, we propose a multi-agent model in which each agent bids stochastically in the field of the winner's attractiveness, and develop a theoretical framework to obtain analytic solutions of the model based on mean field analysis. The theory produces bid-price distributions that are in excellent agreement with those from the real data. Our model and theory capture the essential features of human behaviors in the competitive environment as exemplified by LUBA, and may provide significant quantitative insights into complex socio-economic phenomena.
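The auction mechanism itself is easy to state in code. The sketch below implements only the LUBA winner rule plus a crude decaying bid generator; it is a toy stand-in, not the authors' stochastic multi-agent model or its mean-field solution.

```python
import random
from collections import Counter

def luba_winner(bids):
    """Lowest unique bid auction: the winner is the smallest bid
    submitted by exactly one agent (None if no bid is unique)."""
    counts = Counter(bids)
    unique = [b for b, k in counts.items() if k == 1]
    return min(unique) if unique else None

# Toy bid generator: low bids are popular, high bids exponentially rare,
# loosely echoing the inverted-J distribution reported in the abstract.
random.seed(0)
bids = [int(random.expovariate(1 / 15)) + 1 for _ in range(50)]
print(luba_winner(bids))
```

The strategic tension is visible even in this toy: the lowest bids are the most attractive, but their popularity destroys their uniqueness, which is what pushes probability mass into the decaying tail.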
Complex Coronary Hemodynamics - Simple Analog Modelling as an Educational Tool.
Parikh, Gaurav R; Peter, Elvis; Kakouros, Nikolaos
2017-01-01
Invasive coronary angiography remains the cornerstone for evaluation of coronary stenoses, despite a poor correlation between luminal loss assessment by coronary luminography and myocardial ischemia. This is especially true for coronary lesions deemed moderate by visual assessment. Coronary pressure-derived fractional flow reserve (FFR) has emerged as the gold standard for evaluating the hemodynamic significance of coronary artery stenosis; it is cost effective and leads to improved patient outcomes. There are, however, several limitations to the use of FFR, including the evaluation of serial stenoses. In this article, we discuss the electronic-hydraulic analogy and the utility of simple electrical modelling to mimic the coronary circulation and coronary stenoses. We exemplify the effect of tandem coronary lesions on the FFR by modelling a patient with sequential disease segments and complex anatomy. We believe that such computational modelling can serve as a powerful educational tool to help clinicians better understand the complexity of coronary hemodynamics and improve patient care.
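The electronic-hydraulic analogy described above maps pressure to voltage, flow to current, and each stenosis to a resistor. A minimal DC sketch for tandem (series) lesions follows, with illustrative resistances rather than patient data.

```python
# Resistive (DC) analog of the coronary circulation: pressure ~ voltage,
# flow ~ current, each stenosis ~ a series resistor feeding a
# microvascular resistance. All values are illustrative.

def series_pressures(p_aortic, resistances, r_micro, p_venous=0.0):
    """Node pressures after each series stenosis for one steady flow."""
    r_total = sum(resistances) + r_micro
    flow = (p_aortic - p_venous) / r_total        # Ohm's-law analog
    pressures, p = [], p_aortic
    for r in resistances:
        p -= flow * r                             # drop across each lesion
        pressures.append(p)
    return pressures

p_a = 100.0                                       # mean aortic pressure, mmHg
tandem = series_pressures(p_a, resistances=[0.2, 0.3], r_micro=1.0)
ffr = tandem[-1] / p_a                            # pressure-derived FFR analog
print(tandem, ffr)
```

Because both lesions share one flow, the gradient across each depends on the other: treating one lesion raises the flow and unmasks a larger gradient across the remaining one, which is the serial-stenosis limitation of FFR noted in the abstract.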
Merging experiences and perspectives in the complexity of cross-cultural design
DEFF Research Database (Denmark)
Winschiers-Theophilus, Heike; Bidwell, Nicola; Blake, Edward
2010-01-01
While our cross-cultural IT research continuously strives to contribute towards the development of HCI-appropriate cross-cultural models and best practices, we are aware of the specificity of each development context and the influence of each participant. Uncovering the complexity within our...
Wenchi Jin; Hong S. He; Frank R. Thompson
2016-01-01
Process-based forest ecosystem models vary from simple physiological, to complex physiological, to hybrid empirical-physiological models. Previous studies indicate that complex models provide the best prediction at plot scale with a temporal extent of less than 10 years; however, it is largely untested whether complex models outperform the other two types of models...
Modelling study of sea breezes in a complex coastal environment
Cai, X.-M.; Steyn, D. G.
This study investigates mesoscale modelling of sea breezes blowing from a narrow strait into the lower Fraser valley (LFV), British Columbia, Canada, during 17-20 July 1985. Without a nudging scheme in the inner grid, the CSU-RAMS model produces satisfactory wind and temperature fields during the daytime. In comparison with observation, the agreement indices for surface wind and temperature during daytime reach about 0.6 and 0.95, respectively, while the agreement indices drop to 0.4 at night. In the vertical, profiles of modelled wind and temperature generally agree with tethersonde data collected on 17 and 19 July. The study demonstrates that in late afternoon, the model does not capture the advection of an elevated warm layer which originated from land surfaces outside of the inner grid. Mixed layer depth (MLD) is calculated from model output of the turbulent kinetic energy field. Comparison of MLD results with observation shows that the method generates a reliable MLD during the daytime, and that accurate estimates of MLD near the coast require the correct simulation of wind conditions over the sea. The study has shown that for a complex coastal environment like the LFV, a reliable modelling study depends not only on local surface fluxes but also on elevated layers transported from remote land surfaces. This dependence is especially important when local forcings are weak, for example during late afternoon and at night.
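The "agreement index" quoted above is commonly computed as Willmott's index of agreement d (assumed here; the paper's exact definition may differ), which scales squared model-observation errors by deviations from the observed mean, giving 1 for a perfect match and values near 0 for no skill.

```python
def willmott_d(model, obs):
    """Willmott's index of agreement between model and observed series."""
    mean_obs = sum(obs) / len(obs)
    num = sum((m - o) ** 2 for m, o in zip(model, obs))
    den = sum((abs(m - mean_obs) + abs(o - mean_obs)) ** 2
              for m, o in zip(model, obs))
    return 1.0 - num / den

obs = [2.0, 3.5, 5.0, 4.0, 3.0]       # e.g. observed surface wind (m/s)
mod = [2.5, 3.0, 4.5, 4.5, 2.5]       # modelled values (illustrative)
print(willmott_d(mod, obs))
```

On this illustrative series the index is about 0.93, i.e. in the range the study reports for daytime temperature rather than for nighttime winds.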
Physical modelling of flow and dispersion over complex terrain
Cermak, J. E.
1984-09-01
Atmospheric motion and dispersion over topography characterized by irregular (or regular) hill-valley or mountain-valley distributions are strongly dependent upon three general sets of variables. These are variables that describe topographic geometry, synoptic-scale winds and surface-air temperature distributions. In addition, pollutant concentration distributions also depend upon location and physical characteristics of the pollutant source. Overall fluid-flow complexity and variability from site to site have stimulated the development and use of physical modelling for determination of flow and dispersion in many wind-engineering applications. Models with length scales as small as 1:12,000 have been placed in boundary-layer wind tunnels to study flows in which forced convection by synoptic winds is of primary significance. Flows driven primarily by forces arising from temperature differences (gravitational or free convection) have been investigated by small-scale physical models placed in an isolated space (gravitational convection chamber). Similarity criteria and facilities for both forced and gravitational-convection flow studies are discussed. Forced-convection modelling is illustrated by application to dispersion of air pollutants by unstable flow near a paper mill in the state of Maryland and by stable flow over Point Arguello, California. Gravitational-convection modelling is demonstrated by a study of drainage flow and pollutant transport from a proposed mining operation in the Rocky Mountains of Colorado. Other studies in which field data are available for comparison with model data are reviewed.
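A central similarity criterion for the stratified, gravitational-convection studies described above is the densimetric Froude number; keeping it equal between model and prototype fixes the model wind speed. The numbers below are illustrative and not taken from the cited facilities.

```python
import math

def froude(u, g_reduced, length):
    """Densimetric Froude number Fr = U / sqrt(g' L)."""
    return u / math.sqrt(g_reduced * length)

g_prime = 0.05                         # reduced gravity g*dT/T (m/s^2), illustrative
L_full, U_full = 1000.0, 5.0           # full-scale terrain height and wind speed
scale = 1 / 1000                       # geometric model scale (1:1000)
L_model = L_full * scale
U_model = U_full * math.sqrt(scale)    # Froude-matching tunnel wind speed
print(froude(U_full, g_prime, L_full), froude(U_model, g_prime, L_model))
```

The square-root scaling is why very small geometric scales (down to 1:12,000 in the abstract) remain workable: the required facility wind speeds shrink only as the square root of the length-scale ratio.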
Animal Models of Lymphangioleiomyomatosis (LAM) and Tuberous Sclerosis Complex (TSC)
2010-01-01
Animal models of lymphangioleiomyomatosis (LAM) and tuberous sclerosis complex (TSC) are highly desired to enable detailed investigation of the pathogenesis of these diseases. Multiple rat and mouse models have been generated in which a mutation similar to those occurring in TSC patients is present in an allele of Tsc1 or Tsc2. Unfortunately, these animals do not develop pathologic lesions that match those seen in LAM or TSC. However, these Tsc rodent models have been useful in confirming the two-hit model of tumor development in TSC, and in providing systems in which therapeutic trials (e.g., rapamycin) can be performed. In addition, conditional alleles of both Tsc1 and Tsc2 have provided the opportunity to target loss of these genes to specific tissues and organs, to probe the in vivo function of these genes, and to attempt to generate better models. Efforts to generate an authentic LAM model are impeded by a lack of understanding of the cell of origin of this process. However, ongoing studies provide hope that such a model will be generated in the coming years. PMID:20235887
Keane, Carol A; Magee, Christopher A; Kelly, Peter J
2016-11-01
Traumatic childhood experiences predict many adverse outcomes in adulthood, including Complex PTSD. Understanding complex trauma within socially disadvantaged populations has important implications for policy development and intervention implementation. This paper examined the nature of complex trauma experienced by disadvantaged individuals using a latent class analysis (LCA) approach. Data were collected through the large-scale Journeys Home Study (N=1682), utilising a representative sample of individuals experiencing low housing stability. Data on adverse childhood experiences, adulthood interpersonal trauma and relevant covariates were collected through interviews at baseline (Wave 1). Latent class analysis was conducted to identify distinct classes of childhood trauma history, which included physical assault, neglect, and sexual abuse. Multinomial logistic regression investigated childhood-relevant factors associated with class membership, such as the biological relationship of the primary carer at age 14 years and the number of times in foster care. Of the total sample (N=1682), 99% reported traumatic adverse childhood experiences. The most common included witnessing of violence, threat/experience of physical abuse, and sexual assault. LCA identified six distinct childhood trauma history classes, including high violence and multiple traumas. Significant covariate differences between classes included gender, biological relationship of the primary carer at age 14 years, and time in foster care. Identification of six distinct childhood trauma history profiles suggests there might be unique treatment implications for individuals living in extreme social disadvantage. Further research is required to examine the relationship between these classes of experience, consequent impact on adulthood engagement, and future transitions through homelessness. Copyright © 2016 Elsevier Ltd. All rights reserved.
Complex Data Modeling and Computationally Intensive Statistical Methods
Mantovan, Pietro
2010-01-01
The last years have seen the advent and development of many devices able to record and store an ever increasing amount of complex and high dimensional data: 3D images generated by medical scanners or satellite remote sensing, DNA microarrays, real-time financial data, and system control datasets. The analysis of these data poses new challenging problems and requires the development of novel statistical models and computational methods, fueling many fascinating and fast growing research areas of modern statistics. The book offers a wide variety of statistical methods and is addressed to statisticians...
Complex Dynamics of an Adnascent-Type Game Model
Directory of Open Access Journals (Sweden)
Baogui Xin
2008-01-01
The paper presents a nonlinear discrete game model for two oligopolistic firms whose products are adnascent. (In biology, the term adnascent has only one sense: "growing to or on something else," e.g., "moss is an adnascent plant." See Webster's Revised Unabridged Dictionary, published in 1913 by C. & G. Merriam Co., edited by Noah Porter.) The bifurcation of its Nash equilibrium is analyzed with the Schwarzian derivative and normal form theory. Its complex dynamics is demonstrated by means of the largest Lyapunov exponents, fractal dimensions, bifurcation diagrams, and phase portraits. Finally, bifurcation and chaos anti-control of this system are studied.
Experiments and Modeling in Support of Generic Salt Repository Science
International Nuclear Information System (INIS)
Bourret, Suzanne Michelle; Stauffer, Philip H.; Weaver, Douglas James; Caporuscio, Florie Andre; Otto, Shawn; Boukhalfa, Hakim; Jordan, Amy B.; Chu, Shaoping; Zyvoloski, George Anthony; Johnson, Peter Jacob
2017-01-01
Salt is an attractive material for the disposition of heat-generating nuclear waste (HGNW) because of its self-sealing, viscoplastic, and reconsolidation properties (Hansen and Leigh, 2012). The rate at which salt consolidates and the properties of the consolidated salt depend on the composition of the salt, including its content in accessory minerals and moisture, and the temperature under which consolidation occurs. Physicochemical processes, such as mineral hydration/dehydration, salt dissolution, and precipitation, play a significant role in defining the rate of salt structure changes. Understanding the behavior of these complex processes is paramount when considering safe design for disposal of HGNW in salt formations, so experimentation and modeling are underway to characterize these processes. This report presents experiments and simulations in support of the DOE-NE Used Fuel Disposition Campaign (UFDC) for development of drift-scale, in-situ field testing of HGNW in salt formations.
Electrostatic Model Applied to ISS Charged Water Droplet Experiment
Stevenson, Daan; Schaub, Hanspeter; Pettit, Donald R.
2015-01-01
The electrostatic force can be used to create novel relative motion between charged bodies if it can be isolated from the stronger gravitational and dissipative forces. Recently, Coulomb orbital motion was demonstrated on the International Space Station by releasing charged water droplets in the vicinity of a charged knitting needle. In this investigation, the Multi-Sphere Method, an electrostatic model developed to study active spacecraft position control by Coulomb charging, is used to simulate the complex orbital motion of the droplets. When atmospheric drag is introduced, the simulated motion closely mimics that seen in the video footage of the experiment. The electrostatic force's inverse dependency on separation distance near the center of the needle lends itself to analytic predictions of the radial motion.
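The idea of the Multi-Sphere Method (representing an elongated charged body as a small set of point charges, or "spheres") can be sketched with a planar two-body integration. All charges, masses and geometry below are illustrative, not the parameters of the ISS experiment, and the drag term that the study uses to match the footage is omitted so the orbit stays closed.

```python
import math

K_E = 8.9875e9                     # Coulomb constant (N m^2 / C^2)
# The "needle" as five collinear point charges; the droplet is oppositely charged
spheres = [(sx, 0.0, -2e-9) for sx in (-0.004, -0.002, 0.0, 0.002, 0.004)]
q_drop, m_drop = 1e-11, 5e-7       # droplet charge (C) and mass (kg)

def coulomb_accel(px, py):
    """Droplet acceleration from the summed sphere charges."""
    ax = ay = 0.0
    for sx, sy, q in spheres:
        dx, dy = px - sx, py - sy
        r3 = (dx * dx + dy * dy) ** 1.5
        ax += K_E * q * q_drop * dx / (r3 * m_drop)
        ay += K_E * q * q_drop * dy / (r3 * m_drop)
    return ax, ay

# Near-circular start: v ~ sqrt(K |Q_total| q / (m r)) for the needle's total charge
pos, vel, dt = [0.0, 0.03], [0.245, 0.0], 1e-4
for _ in range(10000):             # semi-implicit (symplectic) Euler over ~1 s
    ax, ay = coulomb_accel(pos[0], pos[1])
    vel[0] += ax * dt
    vel[1] += ay * dt
    pos[0] += vel[0] * dt
    pos[1] += vel[1] * dt
print(pos, math.hypot(pos[0], pos[1]))
```

Adding a velocity-proportional drag acceleration inside the loop makes the orbit spiral slowly inward, qualitatively reproducing the motion reported from the video footage.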
Experiments and Modeling in Support of Generic Salt Repository Science
Energy Technology Data Exchange (ETDEWEB)
Bourret, Suzanne Michelle [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Stauffer, Philip H. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Weaver, Douglas James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Caporuscio, Florie Andre [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Otto, Shawn [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boukhalfa, Hakim [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Jordan, Amy B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Chu, Shaoping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Zyvoloski, George Anthony [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Johnson, Peter Jacob [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-01-19
Salt is an attractive material for the disposition of heat-generating nuclear waste (HGNW) because of its self-sealing, viscoplastic, and reconsolidation properties (Hansen and Leigh, 2012). The rate at which salt consolidates and the properties of the consolidated salt depend on the composition of the salt, including its content in accessory minerals and moisture, and the temperature under which consolidation occurs. Physicochemical processes, such as mineral hydration/dehydration, salt dissolution, and precipitation, play a significant role in defining the rate of salt structure changes. Understanding the behavior of these complex processes is paramount when considering safe design for disposal of HGNW in salt formations, so experimentation and modeling are underway to characterize these processes. This report presents experiments and simulations in support of the DOE-NE Used Fuel Disposition Campaign (UFDC) for development of drift-scale, in-situ field testing of HGNW in salt formations.
International Nuclear Information System (INIS)
Potter, G.L.; Ellsaesser, H.W.; MacCracken, M.C.; Luther, F.M.
1978-06-01
The zonal model experiments with modified surface boundary conditions suggest an initial chain of feedback processes that is largest at the site of the perturbation: deforestation and/or desertification → increased surface albedo → reduced surface absorption of solar radiation → surface cooling and reduced evaporation → reduced convective activity → reduced precipitation and latent heat release → cooling of upper troposphere and increased tropospheric lapse rates → general global cooling and reduced precipitation. As indicated above, although the two experiments give similar overall global results, the location of the perturbation plays an important role in determining the response of the global circulation. These two-dimensional model results are also consistent with three-dimensional model experiments. These results have tempted us to consider the possibility that self-induced growth of the subtropical deserts could serve as a mechanism to cause the initial global cooling that initiates a glacial advance, thus activating the positive feedback loop involving ice-albedo feedback (also self-perpetuating). Reversal of the cycle sets in when the advancing ice cover forces the wave-cyclone tracks far enough equatorward to quench (revegetate) the subtropical deserts.
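The first link of the feedback chain, higher surface albedo leading to less absorbed solar radiation and a cooler equilibrium, can be illustrated with a zero-dimensional energy balance. This is a textbook toy, not the study's two-dimensional zonal model; the effective emissivity below stands in for the greenhouse effect and is illustrative.

```python
SIGMA = 5.670e-8      # Stefan-Boltzmann constant (W m^-2 K^-4)
S0 = 1361.0           # solar constant (W m^-2)

def equilibrium_temp(albedo, emissivity=0.61):
    """Solve eps * sigma * T^4 = S0 * (1 - albedo) / 4 for T (K)."""
    absorbed = S0 * (1.0 - albedo) / 4.0
    return (absorbed / (SIGMA * emissivity)) ** 0.25

t_before = equilibrium_temp(albedo=0.30)   # present-day planetary albedo
t_after = equilibrium_temp(albedo=0.32)    # desertification raises albedo
print(t_before, t_after, t_before - t_after)
```

Even a two-point albedo increase cools this toy planet by roughly 2 K, which is the sense (though not the spatial structure) of the cooling chain the zonal experiments describe.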
Vibration behavior of PWR reactor internals Model experiments and analysis
International Nuclear Information System (INIS)
Assedo, R.; Dubourg, M.; Livolant, M.; Epstein, A.
1975-01-01
In late 1971, the CEA and FRAMATOME decided to undertake a comprehensive joint program studying the vibration behavior of PWR internals of the 900 MWe, 50 Hz, 3-loop reactor series being built by FRAMATOME in France. The PWR reactor internals are submitted to several sources of excitation during normal operation. Two main sources of excitation may affect the internals' behavior: the large flow turbulences, which can generate various instabilities such as vortex shedding; and the pump pressure fluctuations, which can generate acoustic noise in the circuit at frequencies corresponding to the shaft speed or blade passing frequencies and their respective harmonics. The flow-induced vibrations are of complex nature, and the approach selected for this comprehensive program is semi-empirical, based on both theoretical analysis and experiments on a reduced-scale model and full-scale internals. The experimental support of this program consists of: the SAFRAN test loop, an hydroelastic similitude of a 1/8 scale model of a PWR; harmonic vibration tests in air performed on full-scale reactor internals in the manufacturing shop; the GENNEVILLIERS facility, a full-flow test facility for the primary pump; and the measurements carried out during start-up on the Tihange reactor. This program will be completed in April 1975. The results of this program, the originality of which consists of studying separately the effects of random excitations and acoustic noises on the internals' behavior and of establishing a comparison between experiments and analysis, will bring a major contribution toward explaining the complex vibration phenomena occurring in a PWR
Dynamical phase separation using a microfluidic device: experiments and modeling
Aymard, Benjamin; Vaes, Urbain; Radhakrishnan, Anand; Pradas, Marc; Gavriilidis, Asterios; Kalliadasis, Serafim; Complex Multiscale Systems Team
2017-11-01
We study the dynamical phase separation of a binary fluid by a microfluidic device from both the experimental and the modeling points of view. The experimental device consists of a main channel (600 μm wide) leading into an array of 276 trapezoidal capillaries of 5 μm width arranged on both sides and separating the lateral channels from the main channel. Due to geometrical effects as well as wetting properties of the substrate, and under well-chosen pressure boundary conditions, a multiphase flow introduced into the main channel gets separated at the capillaries. Understanding this dynamics via modeling and numerical simulation is a crucial step in designing future efficient micro-separators. We propose a diffuse-interface model, based on the classical Cahn-Hilliard-Navier-Stokes system, with a new nonlinear mobility and new wetting boundary conditions. We also propose a novel numerical method using a finite-element approach, together with an adaptive mesh refinement strategy. The complex geometry is captured using the same computer-aided design files as the ones adopted in the fabrication of the actual device. Numerical simulations reveal very good qualitative agreement between model and experiments, demonstrating also a clear separation of phases.
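A stripped-down, one-dimensional Cahn-Hilliard integration shows the diffuse-interface mechanism at work: a weakly perturbed mixed state spontaneously separates into domains near c = ±1 while conserving total mass. Constant mobility, periodic boundaries, and an explicit scheme are simplifications; the actual study couples Cahn-Hilliard to Navier-Stokes with a nonlinear mobility and wetting boundary conditions, none of which is reproduced here.

```python
import numpy as np

def laplacian(f, dx):
    """Second difference with periodic boundaries."""
    return (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / dx**2

n, dx, dt = 128, 1.0, 0.01
gamma, M = 1.0, 1.0                       # interface energy and mobility
rng = np.random.default_rng(1)
c = 0.05 * rng.standard_normal(n)          # small noise around the mixed state

mass0 = c.sum()                            # Cahn-Hilliard conserves this
for _ in range(20000):
    mu = c**3 - c - gamma * laplacian(c, dx)   # chemical potential
    c += dt * M * laplacian(mu, dx)            # dc/dt = M * lap(mu)
print(c.min(), c.max(), abs(c.sum() - mass0))
```

Because the order parameter evolves as the Laplacian of a chemical potential, mass is conserved exactly (up to rounding), which is the property that makes the diffuse-interface formulation attractive for two-phase microfluidics.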
A Range Based Method for Complex Facade Modeling
Adami, A.; Fregonese, L.; Taffurelli, L.
2011-09-01
3D modelling of Architectural Heritage does not follow a very well-defined way; it goes through different algorithms and digital forms according to the shape complexity of the object, the main goal of the representation, and the starting data. Even if the process starts from the same data, such as a point cloud acquired by laser scanner, there are different possibilities to realize a digital model. In particular we can choose between two different approaches: the mesh and the solid model. In the first case the complexity of architecture is represented by a dense net of triangular surfaces which approximates the real surface of the object. In the other (opposite) case the 3D digital model can be realized by the use of simple geometrical shapes, sweeping algorithms, and Boolean operations. Obviously these two models are not the same, and each one is characterized by some peculiarities concerning the way of modelling (the choice of a particular triangulation algorithm or the quasi-automatic modelling by known shapes) and the final results (a more detailed and complex mesh versus an approximate and simpler solid model). Usually the expected final representation and the possibility of publishing lead to one way or the other. In this paper we suggest a semiautomatic process to build 3D digital models of the facades of complex architecture to be used, for example, in city models or in other large-scale representations. This way of modelling also guarantees small files to be published on the web or transmitted. The modelling procedure starts from laser scanner data which can be processed in the well-known way. Usually more than one scan is necessary to describe a complex architecture and to avoid shadows on the facades. These have to be registered in a single reference system by the use of targets which are surveyed by topography, and then filtered in order to obtain a well-controlled and homogeneous point cloud of...
A RANGE BASED METHOD FOR COMPLEX FACADE MODELING
Directory of Open Access Journals (Sweden)
A. Adami
2012-09-01
3d modelling of Architectural Heritage does not follow a well-defined path: it proceeds through different algorithms and digital forms according to the shape complexity of the object, the main goal of the representation and the starting data. Even if the process starts from the same data, such as a pointcloud acquired by laser scanner, there are different possibilities to realize a digital model. In particular we can choose between two different approaches: the mesh and the solid model. In the first case the complexity of architecture is represented by a dense net of triangular surfaces which approximates the real surface of the object. In the other -opposite- case the 3d digital model can be realized by the use of simple geometrical shapes, sweeping algorithms and Boolean operations. Obviously these two models are not the same, and each one is characterized by some peculiarities concerning the way of modelling (the choice of a particular triangulation algorithm, or the quasi-automatic modelling by known shapes) and the final results (a more detailed and complex mesh versus an approximate and simpler solid model). Usually the expected final representation and the possibility of publishing lead to one way or the other. In this paper we suggest a semiautomatic process to build 3d digital models of the facades of complex architecture, to be used for example in city models or in other large-scale representations. This way of modelling also guarantees small files to be published on the web or to be transmitted. The modelling procedure starts from laser scanner data which can be processed in the well-known way. Usually more than one scan is necessary to describe a complex architecture and to avoid shadows on the facades. These have to be registered in a single reference system by the use of targets which are surveyed by topography, and then filtered in order to obtain a well controlled and
A subsurface model of the beaver meadow complex
Nash, C.; Grant, G.; Flinchum, B. A.; Lancaster, J.; Holbrook, W. S.; Davis, L. G.; Lewis, S.
2015-12-01
Wet meadows are a vital component of arid and semi-arid environments. These valley spanning, seasonally inundated wetlands provide critical habitat and refugia for wildlife, and may potentially mediate catchment-scale hydrology in otherwise "water challenged" landscapes. In the last 150 years, these meadows have begun incising rapidly, causing the wetlands to drain and much of the ecological benefit to be lost. The mechanisms driving this incision are poorly understood, with proposed means ranging from cattle grazing to climate change, to the removal of beaver. There is considerable interest in identifying cost-effective strategies to restore the hydrologic and ecological conditions of these meadows at a meaningful scale, but effective process based restoration first requires a thorough understanding of the constructional history of these ubiquitous features. There is emerging evidence to suggest that the North American beaver may have had a considerable role in shaping this landscape through the building of dams. This "beaver meadow complex hypothesis" posits that as beaver dams filled with fine-grained sediments, they became large wet meadows on which new dams, and new complexes, were formed, thereby aggrading valley bottoms. A pioneering study done in Yellowstone indicated that 32-50% of the alluvial sediment was deposited in ponded environments. The observed aggradation rates were highly heterogeneous, suggesting spatial variability in the depositional process - all consistent with the beaver meadow complex hypothesis (Polvi and Wohl, 2012). To expand on this initial work, we have probed deeper into these meadow complexes using a combination of geophysical techniques, coring methods and numerical modeling to create a 3-dimensional representation of the subsurface environments. This imaging has given us a unique view into the patterns and processes responsible for the landforms, and may shed further light on the role of beaver in shaping these landscapes.
Adapting APSIM to model the physiology and genetics of complex adaptive traits in field crops.
Hammer, Graeme L; van Oosterom, Erik; McLean, Greg; Chapman, Scott C; Broad, Ian; Harland, Peter; Muchow, Russell C
2010-05-01
Progress in molecular plant breeding is limited by the ability to predict plant phenotype based on its genotype, especially for complex adaptive traits. Suitably constructed crop growth and development models have the potential to bridge this predictability gap. A generic cereal crop growth and development model is outlined here. It is designed to exhibit reliable predictive skill at the crop level while also introducing sufficient physiological rigour for complex phenotypic responses to become emergent properties of the model dynamics. The approach quantifies capture and use of radiation, water, and nitrogen within a framework that predicts the realized growth of major organs based on their potential and whether the supply of carbohydrate and nitrogen can satisfy that potential. The model builds on existing approaches within the APSIM software platform. Experiments on diverse genotypes of sorghum that underpin the development and testing of the adapted crop model are detailed. Genotypes differing in height were found to differ in biomass partitioning among organs and a tall hybrid had significantly increased radiation use efficiency: a novel finding in sorghum. Introducing these genetic effects associated with plant height into the model generated emergent simulated phenotypic differences in green leaf area retention during grain filling via effects associated with nitrogen dynamics. The relevance to plant breeding of this capability in complex trait dissection and simulation is discussed.
Simple models for studying complex spatiotemporal patterns of animal behavior
Tyutyunov, Yuri V.; Titova, Lyudmila I.
2017-06-01
Minimal mathematical models able to explain complex patterns of animal behavior are essential parts of simulation systems describing large-scale spatiotemporal dynamics of trophic communities, particularly those with wide-ranging species, such as occur in pelagic environments. We present results obtained with three different modelling approaches: (i) an individual-based model of animal spatial behavior; (ii) a continuous taxis-diffusion-reaction system of partial differential equations; (iii) a 'hybrid' approach combining the individual-based algorithm of organism movements with explicit description of decay and diffusion of the movement stimuli. Though the models are based on extremely simple rules, they all allow description of spatial movements of animals in a predator-prey system within a closed habitat, reproducing some typical patterns of the pursuit-evasion behavior observed in natural populations. In all three models, at each spatial position the animal movements are determined by local conditions only, so the pattern of collective behavior emerges due to self-organization. The movement velocities of animals are proportional to the density gradients of specific cues emitted by individuals of the antagonistic species (pheromones, exometabolites or mechanical waves in the medium, e.g., sound). These cues play the role of taxis stimuli: prey attract predators, while predators repel prey. Depending on the nature and the properties of the movement stimulus we propose using either a simplified individual-based model, a continuous taxis pursuit-evasion system, or a somewhat more detailed 'hybrid' approach that combines simulation of the individual movements with the continuous model describing diffusion and decay of the stimuli in an explicit way. These can be used to improve movement models for many species, including large marine predators.
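The 'hybrid' pursuit scheme in this abstract can be sketched in a few lines. The following 1-D illustration is an assumption-laden toy, not the paper's model: a static prey deposits a cue that diffuses and decays on a periodic grid, and a predator moves one cell per step in the direction of the local cue gradient. All parameter values, the grid size, and the one-cell movement rule are choices made for this sketch.

```python
import numpy as np

# 1-D sketch of the 'hybrid' approach: a static prey at cell `prey` emits a
# cue that diffuses and decays on a periodic grid; the predator moves one
# cell per step up the local cue gradient. All parameter values here are
# illustrative assumptions, not taken from the paper.
def step(cue, prey, pred, D=0.5, decay=0.1, emit=1.0, dt=0.1):
    n = cue.size
    lap = np.roll(cue, 1) - 2 * cue + np.roll(cue, -1)      # periodic Laplacian
    cue = cue + dt * (D * lap - decay * cue)                # diffusion + decay
    cue[prey] += emit * dt                                  # prey deposits cue
    grad = (cue[(pred + 1) % n] - cue[(pred - 1) % n]) / 2  # central difference
    return cue, (pred + int(np.sign(grad))) % n             # taxis move

cue, prey, pred = np.zeros(50), 10, 40
for _ in range(400):
    cue, pred = step(cue, prey, pred)
print(pred)  # the predator settles at or next to the prey
```

Even in this stripped-down form the key point of the abstract survives: the predator uses only local gradient information, yet ends up taking the shorter (wrapped) path to the prey, so pursuit emerges by self-organization rather than global planning.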
Sparse Estimation Using Bayesian Hierarchical Prior Modeling for Real and Complex Linear Models
DEFF Research Database (Denmark)
Pedersen, Niels Lovmand; Manchón, Carles Navarro; Badiu, Mihai Alin
2015-01-01
In sparse Bayesian learning (SBL), Gaussian scale mixtures (GSMs) have been used to model sparsity-inducing priors that realize a class of concave penalty functions for the regression task in real-valued signal models. Motivated by the relative scarcity of formal tools for SBL in complex-valued models … error, and robustness in low and medium signal-to-noise ratio regimes.
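For readers unfamiliar with the real-valued baseline this abstract builds on, here is a minimal SBL sketch with a Gaussian prior per weight and MacKay-style evidence updates of the per-weight precisions (the hierarchical, sparsity-inducing part). The problem sizes, noise level, and the fixed noise precision are assumptions of this illustration, not details from the paper.

```python
import numpy as np

# Minimal real-valued sparse Bayesian learning: per-weight precisions
# `alpha` are re-estimated from the evidence, pruning irrelevant weights.
# Sizes, noise level and fixed noise precision `beta` are assumed here.
rng = np.random.default_rng(0)
N, M = 60, 20
Phi = rng.standard_normal((N, M))                 # design matrix
w_true = np.zeros(M); w_true[2], w_true[7] = 1.5, -2.0
y = Phi @ w_true + 0.05 * rng.standard_normal(N)

alpha, beta = np.ones(M), 100.0                   # weight / noise precisions
for _ in range(50):
    Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)
    mu = beta * Sigma @ Phi.T @ y                 # posterior mean of weights
    gamma = 1.0 - alpha * np.diag(Sigma)          # "well-determinedness"
    alpha = gamma / (mu ** 2 + 1e-12)             # evidence (type-II ML) update

print(np.round(mu, 2))                            # most weights pruned to ~0
```

The abstract's contribution, as far as the truncated text shows, is extending this machinery to complex-valued signal models via GSM priors; the update structure above is only the real-valued starting point.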
Automated sensitivity analysis: New tools for modeling complex dynamic systems
International Nuclear Information System (INIS)
Pin, F.G.
1987-01-01
Sensitivity analysis is an established methodology used by researchers in almost every field to gain essential insight in design and modeling studies and in performance assessments of complex systems. Conventional sensitivity analysis methodologies, however, have not enjoyed the widespread use they deserve considering the wealth of information they can provide, partly because of their prohibitive cost or the large initial analytical investment they require. Automated systems have recently been developed at ORNL to eliminate these drawbacks. Compilers such as GRESS and EXAP now allow automatic and cost effective calculation of sensitivities in FORTRAN computer codes. In this paper, these and other related tools are described and their impact and applicability in the general areas of modeling, performance assessment and decision making for radioactive waste isolation problems are discussed
Complex system modelling and control through intelligent soft computations
Azar, Ahmad
2015-01-01
The book offers a snapshot of the theories and applications of soft computing in the area of complex systems modeling and control. It presents the most important findings discussed during the 5th International Conference on Modelling, Identification and Control, held in Cairo, from August 31-September 2, 2013. The book consists of twenty-nine selected contributions, which have been thoroughly reviewed and extended before their inclusion in the volume. The different chapters, written by active researchers in the field, report on both current theories and important applications of soft-computing. Besides providing the readers with soft-computing fundamentals, and soft-computing based inductive methodologies/algorithms, the book also discusses key industrial soft-computing applications, as well as multidisciplinary solutions developed for a variety of purposes, like windup control, waste management, security issues, biomedical applications and many others. It is a perfect reference guide for graduate students, r...
Does model performance improve with complexity? A case study with three hydrological models
Orth, Rene; Staudinger, Maria; Seneviratne, Sonia I.; Seibert, Jan; Zappa, Massimiliano
2015-04-01
In recent decades considerable progress has been made in climate model development. Following the massive increase in computational power, models became more sophisticated. At the same time also simple conceptual models have advanced. In this study we validate and compare three hydrological models of different complexity to investigate whether their performance varies accordingly. For this purpose we use runoff and also soil moisture measurements, which allow a truly independent validation, from several sites across Switzerland. The models are calibrated in similar ways with the same runoff data. Our results show that the more complex models HBV and PREVAH outperform the simple water balance model (SWBM) in case of runoff but not for soil moisture. Furthermore the most sophisticated PREVAH model shows an added value compared to the HBV model only in case of soil moisture. Focusing on extreme events we find generally improved performance of the SWBM during drought conditions and degraded agreement with observations during wet extremes. For the more complex models we find the opposite behavior, probably because they were primarily developed for prediction of runoff extremes. As expected given their complexity, HBV and PREVAH have more problems with over-fitting. All models show a tendency towards better performance in lower altitudes as opposed to (pre-) alpine sites. The results vary considerably across the investigated sites. In contrast, the different metrics we consider to estimate the agreement between models and observations lead to similar conclusions, indicating that the performance of the considered models is similar at different time scales as well as for anomalies and long-term means. We conclude that added complexity does not necessarily lead to improved performance of hydrological models, and that performance can vary greatly depending on the considered hydrological variable (e.g. runoff vs. soil moisture) or hydrological conditions (floods vs. droughts).
Looping and clustering model for the organization of protein-DNA complexes on the bacterial genome
Walter, Jean-Charles; Walliser, Nils-Ole; David, Gabriel; Dorignac, Jérôme; Geniet, Frédéric; Palmeri, John; Parmeggiani, Andrea; Wingreen, Ned S.; Broedersz, Chase P.
2018-03-01
The bacterial genome is organized by a variety of associated proteins inside a structure called the nucleoid. These proteins can form complexes on DNA that play a central role in various biological processes, including chromosome segregation. A prominent example is the large ParB-DNA complex, which forms an essential component of the segregation machinery in many bacteria. ChIP-Seq experiments show that ParB proteins localize around centromere-like parS sites on the DNA to which ParB binds specifically, and spreads from there over large sections of the chromosome. Recent theoretical and experimental studies suggest that DNA-bound ParB proteins can interact with each other to condense into a coherent 3D complex on the DNA. However, the structural organization of this protein-DNA complex remains unclear, and a predictive quantitative theory for the distribution of ParB proteins on DNA is lacking. Here, we propose the looping and clustering model, which employs a statistical physics approach to describe protein-DNA complexes. The looping and clustering model accounts for the extrusion of DNA loops from a cluster of interacting DNA-bound proteins that is organized around a single high-affinity binding site. Conceptually, the structure of the protein-DNA complex is determined by a competition between attractive protein interactions and loop closure entropy of this protein-DNA cluster on the one hand, and the positional entropy for placing loops within the cluster on the other. Indeed, we show that the protein interaction strength determines the ‘tightness’ of the loopy protein-DNA complex. Thus, our model provides a theoretical framework for quantitatively computing the binding profiles of ParB-like proteins around a cognate (parS) binding site.
International Nuclear Information System (INIS)
Brockhauser, Sandor; Svensson, Olof; Bowler, Matthew W.; Nanao, Max; Gordon, Elspeth; Leal, Ricardo M. F.; Popov, Alexander; Gerring, Matthew; McCarthy, Andrew A.; Gotz, Andy
2012-01-01
A powerful and easy-to-use workflow environment has been developed at the ESRF for combining experiment control with online data analysis on synchrotron beamlines. This tool provides the possibility of automating complex experiments without the need for expertise in instrumentation control and programming, but rather by accessing defined beamline services. The automation of beam delivery, sample handling and data analysis, together with increasing photon flux, diminishing focal spot size and the appearance of fast-readout detectors on synchrotron beamlines, have changed the way that many macromolecular crystallography experiments are planned and executed. Screening for the best diffracting crystal, or even the best diffracting part of a selected crystal, has been enabled by the development of microfocus beams, precise goniometers and fast-readout detectors that all require rapid feedback from the initial processing of images in order to be effective. All of these advances require the coupling of data feedback to the experimental control system and depend on immediate online data-analysis results during the experiment. To facilitate this, a Data Analysis WorkBench (DAWB) for the flexible creation of complex automated protocols has been developed. Here, example workflows designed and implemented using DAWB are presented for enhanced multi-step crystal characterizations, experiments involving crystal reorientation with kappa goniometers, crystal-burning experiments for empirically determining the radiation sensitivity of a crystal system and the application of mesh scans to find the best location of a crystal to obtain the highest diffraction quality. Beamline users interact with the prepared workflows through a specific brick within the beamline-control GUI MXCuBE
Accurate modeling and evaluation of microstructures in complex materials
Tahmasebi, Pejman
2018-02-01
Accurate characterization of heterogeneous materials is of great importance for different fields of science and engineering. Such a goal can be achieved through imaging. Acquiring three- or two-dimensional images under different conditions is not, however, always plausible. On the other hand, accurate characterization of complex and multiphase materials requires various digital images (I) under different conditions. An ensemble method is presented that can take one single (or a set of) I(s) and stochastically produce several similar models of the given disordered material. The method is based on a successive calculating of a conditional probability by which the initial stochastic models are produced. Then, a graph formulation is utilized for removing unrealistic structures. A distance transform function for the Is with highly connected microstructure and long-range features is considered which results in a new I that is more informative. Reproduction of the I is also considered through a histogram matching approach in an iterative framework. Such an iterative algorithm avoids reproduction of unrealistic structures. Furthermore, a multiscale approach, based on pyramid representation of the large Is, is presented that can produce materials with millions of pixels in a matter of seconds. Finally, the nonstationary systems—those for which the distribution of data varies spatially—are studied using two different methods. The method is tested on several complex and large examples of microstructures. The produced results are all in excellent agreement with the utilized Is and the similarities are quantified using various correlation functions.
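One ingredient named in this abstract, histogram matching, is easy to isolate: the candidate image's pixel values are replaced, in rank order, by the reference image's sorted values, so the output inherits the reference histogram exactly. This sketch uses synthetic arrays and an assumed function name, and omits the paper's iterative, graph-based and multiscale machinery; it also assumes both images have the same pixel count.

```python
import numpy as np

# Rank-order histogram matching, one ingredient of the iterative framework
# described above (function name and synthetic arrays are assumptions of
# this sketch; both images must have equal pixel counts in this version).
def match_histogram(candidate, reference):
    flat = candidate.ravel()
    order = np.argsort(flat, kind="stable")       # pixel ranks in the candidate
    matched = np.empty_like(flat)
    matched[order] = np.sort(reference.ravel())   # impose reference histogram
    return matched.reshape(candidate.shape)

rng = np.random.default_rng(2)
cand = rng.normal(0.0, 1.0, (32, 32))             # stand-in candidate image
ref = rng.uniform(10.0, 20.0, (32, 32))           # stand-in reference image
out = match_histogram(cand, ref)
print(out.min(), out.max())                       # now spans the reference range
```

Because the mapping only permutes the reference's values, the output's value distribution matches the reference exactly while preserving the candidate's spatial rank structure, which is what lets the paper apply it iteratively without drifting toward unrealistic gray levels.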
Green IT engineering concepts, models, complex systems architectures
Kondratenko, Yuriy; Kacprzyk, Janusz
2017-01-01
This volume provides a comprehensive state of the art overview of a series of advanced trends and concepts that have recently been proposed in the area of green information technologies engineering as well as of design and development methodologies for models and complex systems architectures and their intelligent components. The contributions included in the volume have their roots in the authors’ presentations, and vivid discussions that have followed the presentations, at a series of workshop and seminars held within the international TEMPUS-project GreenCo project in United Kingdom, Italy, Portugal, Sweden and the Ukraine, during 2013-2015 and at the 1st - 5th Workshops on Green and Safe Computing (GreenSCom) held in Russia, Slovakia and the Ukraine. The book presents a systematic exposition of research on principles, models, components and complex systems and a description of industry- and society-oriented aspects of the green IT engineering. A chapter-oriented structure has been adopted for this book ...
Modelling methodology for engineering of complex sociotechnical systems
CSIR Research Space (South Africa)
Oosthuizen, R
2014-10-01
Different systems engineering techniques and approaches are applied to design and develop complex sociotechnical systems for complex problems. In a complex sociotechnical system cognitive and social humans use information technology to make sense...
Computational model of dose response for low-LET-induced complex chromosomal aberrations
International Nuclear Information System (INIS)
Eidelman, Y.A.; Andreev, S.G.
2015-01-01
Experiments with full-colour mFISH chromosome painting have revealed high yield of radiation-induced complex chromosomal aberrations (CAs). The ratio of complex to simple aberrations is dependent on cell type and linear energy transfer. Theoretical analysis has demonstrated that the mechanism of CA formation as a result of interaction between lesions at a surface of chromosome territories does not explain high complexes-to-simples ratio in human lymphocytes. The possible origin of high yields of γ-induced complex CAs was investigated in the present work by computer simulation. CAs were studied on the basis of chromosome structure and dynamics modelling and the hypothesis of CA formation on nuclear centres. The spatial organisation of all chromosomes in a human interphase nucleus was predicted by simulation of mitosis-to-interphase chromosome structure transition. Two scenarios of CA formation were analysed, 'static' (existing in a nucleus prior to irradiation) centres and 'dynamic' (formed in response to irradiation) centres. The modelling results reveal that under certain conditions, both scenarios explain quantitatively the dose-response relationships for both simple and complex γ-induced inter-chromosomal exchanges observed by mFISH chromosome painting in the first post-irradiation mitosis in human lymphocytes. (authors)
Contrasting model complexity under a changing climate in a headwaters catchment.
Foster, L.; Williams, K. H.; Maxwell, R. M.
2017-12-01
Alpine, snowmelt-dominated catchments are the source of water for more than 1/6th of the world's population. These catchments are topographically complex, leading to steep weather gradients and nonlinear relationships between water and energy fluxes. Recent evidence suggests that alpine systems are more sensitive to climate warming, but these regions are vastly simplified in climate models and operational water management tools due to computational limitations. Simultaneously, point-scale observations are often extrapolated to larger regions where feedbacks can both exacerbate or mitigate locally observed changes. It is critical to determine whether projected climate impacts are robust to different methodologies, including model complexity. Using high performance computing and an integrated model of a representative headwater catchment we determined the hydrologic response from 30 projected climate changes to precipitation, temperature and vegetation for the Rocky Mountains. Simulations were run with 100m and 1km resolution, and with and without lateral subsurface flow in order to vary model complexity. We found that model complexity alters nonlinear relationships between water and energy fluxes. Higher-resolution models predicted larger changes per degree of temperature increase than lower resolution models, suggesting that reductions to snowpack, surface water, and groundwater due to warming may be underestimated in simple models. Increases in temperature were found to have a larger impact on water fluxes and stores than changes in precipitation, corroborating previous research showing that mountain systems are significantly more sensitive to temperature changes than to precipitation changes and that increases in winter precipitation are unlikely to compensate for increased evapotranspiration in a higher energy environment. These numerical experiments help to (1) bracket the range of uncertainty in published literature of climate change impacts on headwater
A modeling process to understand complex system architectures
Robinson, Santiago Balestrini
2009-12-01
In recent decades, several tools have been developed by the armed forces, and their contractors, to test the capability of a force. These campaign level analysis tools, often times characterized as constructive simulations are generally expensive to create and execute, and at best they are extremely difficult to verify and validate. This central observation, that the analysts are relying more and more on constructive simulations to predict the performance of future networks of systems, leads to the two central objectives of this thesis: (1) to enable the quantitative comparison of architectures in terms of their ability to satisfy a capability without resorting to constructive simulations, and (2) when constructive simulations must be created, to quantitatively determine how to spend the modeling effort amongst the different system classes. The first objective led to Hypothesis A, the first main hypotheses, which states that by studying the relationships between the entities that compose an architecture, one can infer how well it will perform a given capability. The method used to test the hypothesis is based on two assumptions: (1) the capability can be defined as a cycle of functions, and that it (2) must be possible to estimate the probability that a function-based relationship occurs between any two types of entities. If these two requirements are met, then by creating random functional networks, different architectures can be compared in terms of their ability to satisfy a capability. In order to test this hypothesis, a novel process for creating representative functional networks of large-scale system architectures was developed. The process, named the Digraph Modeling for Architectures (DiMA), was tested by comparing its results to those of complex constructive simulations. Results indicate that if the inputs assigned to DiMA are correct (in the tests they were based on time-averaged data obtained from the ABM), DiMA is able to identify which of any two
Equation-free model reduction for complex dynamical systems
International Nuclear Information System (INIS)
Le Maitre, O. P.; Mathelin, L.; Le Maitre, O. P.
2010-01-01
This paper presents a reduced model strategy for simulation of complex physical systems. A classical reduced basis is first constructed relying on proper orthogonal decomposition of the system. Then, unlike the alternative approaches, such as Galerkin projection schemes for instance, an equation-free reduced model is constructed. It consists in the determination of an explicit transformation, or mapping, for the evolution over a coarse time-step of the projection coefficients of the system state on the reduced basis. The mapping is expressed as an explicit polynomial transformation of the projection coefficients and is computed once and for all in a pre-processing stage using the detailed model equation of the system. The reduced system can then be advanced in time by successive applications of the mapping. The CPU cost of the method lies essentially in the mapping approximation which is performed offline, in a parallel fashion, and only once. Subsequent application of the mapping to perform a time-integration is carried out at a low cost thanks to its explicit character. Application of the method is considered for the 2-D flow around a circular cylinder. We investigate the effectiveness of the reduced model in rendering the dynamics for both asymptotic state and transient stages. It is shown that the method leads to a stable and accurate time-integration for only a fraction of the cost of a detailed simulation, provided that the mapping is properly approximated and the reduced basis remains relevant for the dynamics investigated. (authors)
Real-time modeling of complex atmospheric releases in urban areas
International Nuclear Information System (INIS)
Baskett, R.L.; Ellis, J.S.; Sullivan, T.J.
1994-08-01
If a nuclear installation in or near an urban area has a venting, fire, or explosion, airborne radioactivity becomes the major concern. Dispersion models are the immediate tool for estimating the dose and contamination. Responses in urban areas depend on knowledge of the amount of the release, representative meteorological data, and the ability of the dispersion model to simulate the complex flows as modified by terrain or local wind conditions. A centralized dispersion modeling system can produce realistic assessments of radiological accidents anywhere in a country within several minutes if it is computer-automated. The system requires source-term, terrain, mapping and dose-factor databases, real-time meteorological data acquisition, three-dimensional atmospheric transport and dispersion models, and experienced staff. Experience with past responses in urban areas by the Atmospheric Release Advisory Capability (ARAC) program at Lawrence Livermore National Laboratory illustrate the challenges for three-dimensional dispersion models
Real-time modelling of complex atmospheric releases in urban areas
International Nuclear Information System (INIS)
Baskett, R.L.; Ellis, J.S.; Sullivan, T.J.
2000-01-01
If a nuclear installation in or near an urban area has a venting, fire, or explosion, airborne radioactivity becomes the major concern. Dispersion models are the immediate tool for estimating the dose and contamination. Responses in urban areas depend on knowledge of the amount of the release, representative meteorological data, and the ability of the dispersion model to simulate the complex flows as modified by terrain or local wind conditions. A centralised dispersion modelling system can produce realistic assessments of radiological accidents anywhere in a country within several minutes if it is computer-automated. The system requires source-term, terrain, mapping and dose-factor databases, real-time meteorological data acquisition, three-dimensional atmospheric transport and dispersion models, and experienced staff. Experience with past responses in urban areas by the Atmospheric Release Advisory Capability (ARAC) program at Lawrence Livermore National Laboratory illustrate the challenges for three-dimensional dispersion models. (author)
Katpatal, Yashwant B.; Rishma, C.; Singh, Chandan K.
2018-05-01
The Gravity Recovery and Climate Experiment (GRACE) satellite mission is aimed at assessment of groundwater storage under different terrestrial conditions. The main objective of the presented study is to highlight the significance of aquifer complexity to improve the performance of GRACE in monitoring groundwater. The Vidarbha region of Maharashtra, central India, was selected as the study area, since it comprises a simple aquifer system in the western region and a complex aquifer system in the eastern region. Groundwater-level-trend analyses of the different aquifer systems and spatial and temporal variation of the terrestrial water storage anomaly were studied to understand the groundwater scenario. To compare GRACE with field observations, four pixels covering different aquifer systems were selected from the GRACE output, each encompassing 50-90 monitoring wells. Groundwater storage anomalies (GWSA) are derived for each pixel for the period 2002 to 2015 using the Release 05 (RL05) monthly GRACE gravity models and the Global Land Data Assimilation System (GLDAS) land-surface models (GWSAGRACE) as well as the actual field data (GWSAActual). Correlation analysis between GWSAGRACE and GWSAActual was performed using linear regression. The Pearson and Spearman methods show that the performance of GRACE is good in the region with simple aquifers; however, performance is poorer in the region with multiple aquifer systems. The study highlights the importance of incorporating the sensitivity of GRACE in estimation of groundwater storage in complex aquifer systems in future studies.
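The pixel-level comparison described in this abstract reduces to correlating two monthly GWSA time series. A schematic with synthetic series might look like the following; the study itself derives GWSAGRACE from GRACE RL05 minus GLDAS surface components and GWSAActual from 50-90 wells per pixel, and all values below are invented stand-ins.

```python
import numpy as np

# Schematic of the per-pixel GWSA comparison: GRACE-like terrestrial water
# storage minus surface components, correlated against "field" GWSA via
# Pearson and (rank-based) Spearman coefficients. All series are synthetic.
rng = np.random.default_rng(3)
months = np.arange(156)                                   # 2002-2015, monthly
gwsa_actual = 5 * np.sin(2 * np.pi * months / 12) - 0.02 * months
surface = 3 * np.cos(2 * np.pi * months / 12)             # soil moisture, snow...
tws = gwsa_actual + surface + 0.5 * rng.standard_normal(156)  # GRACE-like TWS
gwsa_grace = tws - surface                                # GLDAS-corrected GWSA

def rank(x):                                              # simple rank transform
    r = np.empty(x.size)
    r[np.argsort(x)] = np.arange(x.size)
    return r

pearson = np.corrcoef(gwsa_grace, gwsa_actual)[0, 1]
spearman = np.corrcoef(rank(gwsa_grace), rank(gwsa_actual))[0, 1]
print(round(pearson, 3), round(spearman, 3))              # both close to 1 here
```

In the study's terms, a pixel over the simple western aquifer would behave like this well-correlated example, while a pixel over the multi-aquifer eastern region would show weaker agreement between the two series.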
Energy Technology Data Exchange (ETDEWEB)
Li, Haiyan [Mechatronics Engineering School of Guangdong University of Technology, Guangzhou 510006 (China); Huang, Yunbao, E-mail: Huangyblhy@gmail.com [Mechatronics Engineering School of Guangdong University of Technology, Guangzhou 510006 (China); Jiang, Shaoen, E-mail: Jiangshn@vip.sina.com [Laser Fusion Research Center, China Academy of Engineering Physics, Mianyang 621900 (China); Jing, Longfei, E-mail: scmyking_2008@163.com [Laser Fusion Research Center, China Academy of Engineering Physics, Mianyang 621900 (China); Tianxuan, Huang; Ding, Yongkun [Laser Fusion Research Center, China Academy of Engineering Physics, Mianyang 621900 (China)
2015-11-15
Highlights: • A unified modeling approach for physical experiment design is presented. • Any laser facility can be flexibly defined and included with two scripts. • Complex targets and laser beams can be parametrically modeled for optimization. • Automatic mapping of laser beam energy facilitates target shape optimization. - Abstract: Physical experiment design and optimization is essential for laser-driven inertial confinement fusion due to the high cost of each shot. However, only limited experiments with simple structures or shapes on several laser facilities can be designed and evaluated in available codes, and targets are usually defined by programming, which makes complex-shape target design and optimization on arbitrary laser facilities difficult. A unified modeling approach for physical experiment design and optimization on any laser facility is presented in this paper. Its core ideas are: (1) any laser facility can be flexibly defined and included with two scripts; (2) complex-shape targets and laser beams can be parametrically modeled based on features; (3) an automatic scheme for mapping laser beam energy onto the discrete mesh elements of targets enables targets or laser beams to be optimized without any additional interactive modeling or programming; and (4) computation algorithms are additionally presented to efficiently evaluate radiation symmetry on the target. Finally, examples are demonstrated to validate the significance of this unified modeling approach for physical experiment design and optimization in laser-driven inertial confinement fusion.
Educational complex of light-colored modeling of urban environment
Directory of Open Access Journals (Sweden)
Karpenko Vladimir E.
2018-01-01
Full Text Available Mechanisms, methodological tools and the structure of a training complex of light-colored modeling of the urban environment are developed in this paper. The following results of the practical work of students are presented: light composition and installation, media facades, lighting of building facades, city streets and embankments. As a result of the modeling, the structure of the light form is determined. Light-transmitting materials, characteristic optical illusions, light-visual and light-dynamic effects (video dynamics and photostatics), and basic compositional techniques of the light form are revealed. The main elements of the light installation are studied, including light projection, the electronic device, the interactivity and relationality of the installation, and the mechanical device which becomes a part of the installation composition. The essence of modern media-facade technology is the transformation of external building structures and their facades into a changing information cover, a translator of media content using LED technology. Light tectonics and the light rhythm of the plastics of an architectural object are built up through point and local illumination; modeling of an urban ensemble assumes the structural interaction of several light building models with special light-composition techniques. When modeling the social and pedestrian environment, the lighting parameters depend on the scale of the chosen space and are adapted to the visual perception of the pedestrian, and the atmospheric effects of comfort and safety of the environment are achieved with the help of special light-composition techniques. To realize the tasks of light modeling, a methodology has been created that includes the mechanisms of models, variability and complementarity. Perspectives of light modeling in the context of the structural elements of the city, neuropsychology, and wireless and bioluminescence technologies are proposed.
Energy Technology Data Exchange (ETDEWEB)
Veselská, Veronika, E-mail: veselskav@fzp.czu.cz [Department of Environmental Geosciences, Faculty of Environmental Sciences, Czech University of Life Sciences Prague, Kamýcka 129, CZ-16521, Prague (Czech Republic); Fajgar, Radek [Department of Analytical and Material Chemistry, Institute of Chemical Process Fundamentals of the CAS, v.v.i., Rozvojová 135/1, CZ-16502, Prague (Czech Republic); Číhalová, Sylva [Department of Environmental Geosciences, Faculty of Environmental Sciences, Czech University of Life Sciences Prague, Kamýcka 129, CZ-16521, Prague (Czech Republic); Bolanz, Ralph M. [Institute of Geosciences, Friedrich-Schiller-University Jena, Carl-Zeiss-Promenade 10, DE-07745, Jena (Germany); Göttlicher, Jörg; Steininger, Ralph [ANKA Synchrotron Radiation Facility, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, DE-76344, Eggenstein-Leopoldshafen (Germany); Siddique, Jamal A.; Komárek, Michael [Department of Environmental Geosciences, Faculty of Environmental Sciences, Czech University of Life Sciences Prague, Kamýcka 129, CZ-16521, Prague (Czech Republic)
2016-11-15
Highlights: • Study of Cr(VI) adsorption on soil minerals over a large range of conditions. • Combined surface complexation modeling and spectroscopic techniques. • Diffuse-layer and triple-layer models used to obtain fits to experimental data. • Speciation of Cr(VI) and Cr(III) was assessed. - Abstract: This study investigates the mechanisms of Cr(VI) adsorption on natural clay (illite and kaolinite) and synthetic (birnessite and ferrihydrite) minerals, including its speciation changes, and combining quantitative thermodynamically based mechanistic surface complexation models (SCMs) with spectroscopic measurements. Series of adsorption experiments have been performed at different pH values (3–10), ionic strengths (0.001–0.1 M KNO{sub 3}), sorbate concentrations (10{sup −4}, 10{sup −5}, and 10{sup −6} M Cr(VI)), and sorbate/sorbent ratios (50–500). Fourier transform infrared spectroscopy, X-ray photoelectron spectroscopy, and X-ray absorption spectroscopy were used to determine the surface complexes, including surface reactions. Adsorption of Cr(VI) is strongly ionic strength dependent. For ferrihydrite at pH <7, a simple diffuse-layer model provides a reasonable prediction of adsorption. For birnessite, bidentate inner-sphere complexes of chromate and dichromate resulted in a better diffuse-layer model fit. For kaolinite, outer-sphere complexation prevails mainly at lower Cr(VI) loadings. Dissolution of solid phases needs to be considered for better SCMs fits. The coupled SCM and spectroscopic approach is thus useful for investigating individual minerals responsible for Cr(VI) retention in soils, and improving the handling and remediation processes.
Adaptive Surface Modeling of Soil Properties in Complex Landforms
Directory of Open Access Journals (Sweden)
Wei Liu
2017-06-01
Full Text Available Abstract: Spatial discontinuity often causes poor accuracy when a single model is used for the surface modeling of soil properties in complex geomorphic areas. Here we present a method for adaptive surface modeling with combined secondary variables to improve prediction accuracy during the interpolation of soil properties (ASM-SP). Using various secondary variables and multiple base interpolation models, ASM-SP was used to interpolate soil K+ in a typical complex geomorphic area (Qinghai Lake Basin, China). Five methods, including inverse distance weighting (IDW), ordinary kriging (OK), and OK combined with different secondary variables (e.g., OK-Landuse, OK-Geology, and OK-Soil), were used to validate the proposed method. The mean error (ME), mean absolute error (MAE), root mean square error (RMSE), mean relative error (MRE), and accuracy (AC) were used as evaluation indicators. Results showed that: (1) The OK interpolation result is spatially smooth with a weak bull's-eye effect, while IDW has a relatively stronger bull's-eye effect; both have obvious deficiencies in depicting the spatial variability of soil K+. (2) The methods incorporating combinations of different secondary variables (e.g., ASM-SP, OK-Landuse, OK-Geology, and OK-Soil) were associated with lower estimation bias. Compared with IDW, OK, OK-Landuse, OK-Geology, and OK-Soil, the accuracy of ASM-SP increased by 13.63%, 10.85%, 9.98%, 8.32%, and 7.66%, respectively. Furthermore, ASM-SP was more stable, with lower MEs, MAEs, RMSEs, and MREs. (3) ASM-SP presents more detail than the other methods at abrupt boundaries, which renders the result consistent with the true secondary variables. In conclusion, ASM-SP can not only consider the nonlinear relationship between secondary variables and soil properties, but can also adaptively combine the advantages of multiple models, which contributes to making the spatial interpolation of soil K+ more reasonable.
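The evaluation indicators named in the record above (ME, MAE, RMSE, MRE) have standard definitions; a minimal sketch, with a function name and toy validation values of our own invention:

```python
import numpy as np

def interpolation_errors(observed, predicted):
    """Cross-validation indicators for spatial interpolation.

    ME   - mean error (bias);      MAE - mean absolute error;
    RMSE - root mean square error; MRE - mean relative error.
    """
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = predicted - observed
    return {
        "ME": err.mean(),
        "MAE": np.abs(err).mean(),
        "RMSE": np.sqrt((err ** 2).mean()),
        "MRE": np.abs(err / observed).mean(),  # assumes nonzero observations
    }

# Toy soil K+ values (mg/kg) at four validation points
obs = [120.0, 150.0, 95.0, 180.0]
pred = [115.0, 160.0, 100.0, 170.0]
print(interpolation_errors(obs, pred))
```

Note that ME can be zero while MAE and RMSE are not, which is why the study reports all four indicators rather than bias alone.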
Modeling Cu{sup 2+}-Aβ complexes from computational approaches
Energy Technology Data Exchange (ETDEWEB)
Alí-Torres, Jorge [Departamento de Química, Universidad Nacional de Colombia- Sede Bogotá, 111321 (Colombia); Mirats, Andrea; Maréchal, Jean-Didier; Rodríguez-Santiago, Luis; Sodupe, Mariona, E-mail: Mariona.Sodupe@uab.cat [Departament de Química, Universitat Autònoma de Barcelona, 08193 Bellaterra, Barcelona (Spain)
2015-09-15
Amyloid plaque formation and oxidative stress are two key events in the pathology of Alzheimer disease (AD), in which metal cations have been shown to play an important role. In particular, the interaction of the redox-active Cu{sup 2+} metal cation with Aβ has been found to interfere with amyloid aggregation and to lead to reactive oxygen species (ROS). A detailed knowledge of the electronic and molecular structure of Cu{sup 2+}-Aβ complexes is thus important to get a better understanding of the role of these complexes in the development and progression of AD. The computational treatment of these systems requires a combination of several available computational methodologies, because two fundamental aspects have to be addressed: the metal coordination sphere and the conformation adopted by the peptide upon copper binding. In this paper we review the main computational strategies used to deal with Cu{sup 2+}-Aβ coordination and to build plausible Cu{sup 2+}-Aβ models that will afterwards allow determining physicochemical properties of interest, such as their redox potential.
Siegfried, Robert
2014-01-01
Robert Siegfried presents a framework for efficient agent-based modeling and simulation of complex systems. He compares different approaches for describing structure and dynamics of agent-based models in detail. Based on this evaluation the author introduces the "General Reference Model for Agent-based Modeling and Simulation" (GRAMS). Furthermore he presents parallel and distributed simulation approaches for execution of agent-based models -from small scale to very large scale. The author shows how agent-based models may be executed by different simulation engines that utilize underlying hard
Evidence of complex contagion of information in social media: An experiment using Twitter bots
DEFF Research Database (Denmark)
Mønsted, Bjarke Mørch; Sapiezynski, Piotr; Ferrara, Emilio
2017-01-01
It has recently become possible to study the dynamics of information diffusion in techno-social systems at scale, due to the emergence of online platforms, such as Twitter, with millions of users. One question that systematically recurs is whether information spreads according to simple or complex......, therefore, unable to disentangle the effects of confounding factors such as social reinforcement, homophily, limited attention, or network community structure. Here we describe a novel controlled experiment that we performed on Twitter using 'social bots' deployed to carry out coordinated attempts...
Toxicological risk assessment of complex mixtures through the Wtox model
Directory of Open Access Journals (Sweden)
William Gerson Matias
2015-01-01
Full Text Available Mathematical models are important tools for environmental management and risk assessment. Predictions about the toxicity of chemical mixtures must be enhanced due to the complexity of effects that can be caused to living species. In this work, the environmental risk was assessed by addressing the need to study the relationship between the organism and xenobiotics. Therefore, five toxicological endpoints were applied through the WTox Model, and with this methodology we obtained the risk classification of potentially toxic substances. Acute and chronic toxicity, cytotoxicity and genotoxicity were observed in the organisms Daphnia magna, Vibrio fischeri and Oreochromis niloticus. A case study was conducted with solid wastes from the textile, metal-mechanic, and pulp and paper industries. The results have shown that several industrial wastes induced mortality, reproductive effects, micronucleus formation and increases in the rate of lipid peroxidation and DNA methylation in the organisms tested. These results, analyzed together through the WTox Model, allowed the classification of the environmental risk of industrial wastes. The evaluation showed that the toxicological environmental risk of the samples analyzed can be classified as significant or critical.
Integrated modeling tool for performance engineering of complex computer systems
Wright, Gary; Ball, Duane; Hoyt, Susan; Steele, Oscar
1989-01-01
This report summarizes Advanced System Technologies' accomplishments on the Phase 2 SBIR contract NAS7-995. The technical objectives of the report are: (1) to develop an evaluation version of a graphical, integrated modeling language according to the specification resulting from the Phase 2 research; and (2) to determine the degree to which the language meets its objectives by evaluating ease of use, utility of two sets of performance predictions, and the power of the language constructs. The technical approach followed to meet these objectives was to design, develop, and test an evaluation prototype of a graphical, performance prediction tool. The utility of the prototype was then evaluated by applying it to a variety of test cases found in the literature and in AST case histories. Numerous models were constructed and successfully tested. The major conclusion of this Phase 2 SBIR research and development effort is that complex, real-time computer systems can be specified in a non-procedural manner using combinations of icons, windows, menus, and dialogs. Such a specification technique provides an interface that system designers and architects find natural and easy to use. In addition, PEDESTAL's multiview approach provides system engineers with the capability to perform the trade-offs necessary to produce a design that meets timing performance requirements. Sample system designs analyzed during the development effort showed that models could be constructed in a fraction of the time required by non-visual system design capture tools.
Atmospheric dispersion modelling over complex terrain at small scale
Nosek, S.; Janour, Z.; Kukacka, L.; Jurcakova, K.; Kellnerova, R.; Gulikova, E.
2014-03-01
A previous study concerned with qualitative modelling of neutrally stratified flow over an open-cut coal mine and the important surrounding topography at meso-scale (1:9000) revealed an important area for quantitative modelling of atmospheric dispersion at small scale (1:3300). The selected area includes a necessary part of the coal mine topography with respect to its future expansion, as well as the surrounding populated areas. At this small scale, simultaneous measurements of velocity components and concentrations at specified points of vertical and horizontal planes were performed by two-dimensional Laser Doppler Anemometry (LDA) and a Fast-Response Flame Ionization Detector (FFID), respectively. The impact of the complex terrain on passive pollutant dispersion with respect to the prevailing wind direction was observed, and the prediction of air quality at the populated areas is discussed. The measured data will be used for comparison with another model taking into account the future coal mine transformation. Thus, the impact of the coal mine transformation on pollutant dispersion can be observed.
Padhi, S.; Tokunaga, T.
2017-12-01
Adsorption of fluoride (F) on soil can control the mobility of F and the subsequent contamination of groundwater. Hence, accurate evaluation of adsorption equilibrium is a prerequisite for understanding the transport and fate of F in the subsurface. While there have been studies of the adsorption behavior of F with respect to single mineral constituents based on surface complexation models (SCM), F adsorption on natural soil in the presence of complexing agents needs further investigation. We evaluated the adsorption processes of F on a natural granitic soil from Tsukuba, Japan, as a function of initial F concentration, ionic strength, and initial pH. A SCM was developed to model the F adsorption behavior. Four possible surface complexation reactions were postulated, with and without including dissolved aluminum (Al) and Al-F complex sorption. F adsorption decreased with increasing initial pH over the initial pH range of 4 to 9, and the rate of this reduction decreased over the initial pH range of 5 to 7. Ionic strength variation in the range of 0 to 100 mM had an insignificant effect on F removal. Changes in solution pH were observed by comparing the solution before and after the F adsorption experiments. At acidic pH, the solution pH increased, whereas at alkaline pH, the solution pH decreased after equilibrium. The SCM including dissolved Al and the adsorption of the Al-F complex can simulate the experimental results quite successfully. Including dissolved Al and the adsorption of the Al-F complex in the model also explained the change in solution pH after F adsorption.
Complex accident scenarios modelled and analysed by Stochastic Petri Nets
International Nuclear Information System (INIS)
Nývlt, Ondřej; Haugen, Stein; Ferkl, Lukáš
2015-01-01
This paper is focused on the usage of Petri nets for effective modelling and simulation of complicated accident scenarios, where the order of events can vary and some events may occur anywhere in an event chain. These cases are hardly manageable by traditional methods such as event trees – e.g. one pivotal event must often be inserted several times into one branch of the tree. Our approach is based on Stochastic Petri Nets with Predicates and Assertions and on an idea which comes from the area of Programmable Logic Controllers: an accident scenario is described as a net of interconnected blocks, which represent parts of the scenario. The scenario is first divided into parts, which are then modelled by Petri nets. Every block can easily be interconnected with other blocks by input/output variables to create complex ones. In the presented approach, every event or part of a scenario is modelled only once, independently of the number of its occurrences in the scenario. The final model is much more transparent than the corresponding event tree. The method is shown in two case studies, of which the advanced one contains dynamic behavior. - Highlights: • Event & fault trees have problems with scenarios where the order of events can vary. • The paper presents a method for modelling and analysis of dynamic accident scenarios. • The presented method is based on Petri nets. • The proposed method solves the mentioned problems of traditional approaches. • The method is shown in two case studies: simple and advanced (with dynamic behavior)
Experiments and modeling of single plastic particle conversion in suspension
DEFF Research Database (Denmark)
Nakhaei, Mohammadhadi; Wu, Hao; Grévain, Damien
2018-01-01
Conversion of single high density polyethylene (PE) particles has been studied by experiments and modeling. The experiments were carried out in a single particle combustor for five different shapes and masses of particles at temperature conditions of 900 and 1100°C. Each experiment was recorded...... against the experiments as well as literature data. Furthermore, a simplified isothermal model appropriate for CFD applications was developed, in order to model the combustion of plastic particles in cement calciners. By comparing predictions with the isothermal and the non–isothermal models under typical...
3D modeling and visualization software for complex geometries
International Nuclear Information System (INIS)
Guse, Guenter; Klotzbuecher, Michael; Mohr, Friedrich
2011-01-01
Reactor safety depends on reliable nondestructive testing of reactor components. For a 100% detection probability of flaws and the determination of their size using ultrasonic methods, the ultrasonic waves have to hit the flaws within specific incidence and squint angle ranges. For complex test geometries, such as testing of nozzle welds from the outside of the component, these angular ranges can only be determined using elaborate mathematical calculations. The authors developed a 3D modeling and visualization software tool that allows ultrasonic measurement data to be integrated into and presented within the 3D geometry. The software package was verified using 1:1 test samples (examples: testing of the nozzle edge of the feedwater nozzle of a steam generator from the outside; testing of the reactor pressure vessel nozzle edge from the inside).
The inherent complexity in nonlinear business cycle model in resonance
International Nuclear Information System (INIS)
Ma Junhai; Sun Tao; Liu Lixia
2008-01-01
Based on Abraham C.-L. Chian's research, we applied nonlinear dynamical systems theory to study the first-order and second-order approximate solutions of one category of nonlinear business cycle model under resonance conditions. We have also analyzed the relation between the amplitude and phase of the second-order approximate solutions, as well as the relation between the outer excitement's amplitude, frequency, approximate solutions, and the system bifurcation parameters. Then we studied the system's quasi-periodic solutions, annulus periodic solutions, and the paths leading to system bifurcation and chaotic states with different parameter combinations. Finally, we conducted numerical simulations for various complicated circumstances. This research will therefore lay a solid foundation for detecting the complexity of business cycles and systems in the future.
Decision dynamics of departure times: Experiments and modeling
Sun, Xiaoyan; Han, Xiao; Bao, Jian-Zhang; Jiang, Rui; Jia, Bin; Yan, Xiaoyong; Zhang, Boyu; Wang, Wen-Xu; Gao, Zi-You
2017-10-01
A fundamental problem in traffic science is to understand user-choice behaviors that account for the emergence of complex traffic phenomena. Despite much effort devoted to theoretically exploring departure time choice behaviors, relatively large-scale and systematic experimental tests of theoretical predictions are still lacking. In this paper, we aim to offer a more comprehensive understanding of departure time choice behaviors in terms of a series of laboratory experiments under different traffic conditions and feedback information provided to commuters. In the experiment, the number of recruited players is much larger than the number of choices to better mimic the real scenario, in which a large number of commuters will depart simultaneously in a relatively small time window. Sufficient numbers of rounds are conducted to ensure the convergence of collective behavior. Experimental results demonstrate that collective behavior is close to the user equilibrium, regardless of different scales and traffic conditions. Moreover, the amount of feedback information has a negligible influence on collective behavior but has a relatively stronger effect on individual choice behaviors. Reinforcement learning and Fermi learning models are built to reproduce the experimental results and uncover the underlying mechanism. Simulation results are in good agreement with the experimentally observed collective behaviors.
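The Fermi learning model mentioned in the record above is typically built on a logistic (Fermi) switching probability between a commuter's current departure-time choice and an alternative; a minimal sketch under that assumption (the function name, payoffs, and selection intensity are illustrative, not the paper's exact formulation):

```python
import numpy as np

def fermi_switch_probability(payoff_current, payoff_alternative, beta=1.0):
    """Probability of switching to the alternative choice under the Fermi rule.

    beta is the selection intensity: beta -> 0 gives random switching (p = 0.5),
    while large beta makes switching nearly deterministic toward higher payoff.
    """
    return 1.0 / (1.0 + np.exp(-beta * (payoff_alternative - payoff_current)))

# A commuter whose current slot yields payoff 2 considers a slot yielding 5
p = fermi_switch_probability(2.0, 5.0, beta=1.0)
print(f"switch probability = {p:.3f}")
```

Because the rule is probabilistic, a population of such commuters keeps exploring departure times even near equilibrium, which is one way to reproduce the round-to-round fluctuations seen in the experiments.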
Micro-meteorological data from the Guardo dispersion experiment in complex terrain
Energy Technology Data Exchange (ETDEWEB)
Nielsen, M.; Mikkelsen, T.
1992-11-01
The present report contains micrometeorological data from an atmospheric dispersion experiment in complex terrain. The experiment took place near the Guardo power plant, Palencia, Spain under various atmospheric conditions during the month of November 1990. It consisted of 14 tracer releases either from the power plant chimney or from the valley floor north of the town. Two kinds of observations are presented: (1) The 25 m meteorological mast at the Vivero site in the central part of the experimental area measured surface-layer profiles of wind velocity, wind direction, temperature and thermal stability together with turbulent wind and temperature fluctuations at the top level. (2) A radiosonde on a tethered balloon was launched at Camporredondo de Alba in the northern part of the area and measured boundary-layer profiles of pressure, temperature, humidity, wind speed and wind direction. (au) (4 tabs., 227 ills., 7 refs.).
Emulation of a complex global aerosol model to quantify sensitivity to uncertain parameters
Directory of Open Access Journals (Sweden)
L. A. Lee
2011-12-01
Full Text Available Sensitivity analysis of atmospheric models is necessary to identify the processes that lead to uncertainty in model predictions, to help understand model diversity through comparison of driving processes, and to prioritise research. Assessing the effect of parameter uncertainty in complex models is challenging and often limited by CPU constraints. Here we present a cost-effective application of variance-based sensitivity analysis to quantify the sensitivity of a 3-D global aerosol model to uncertain parameters. A Gaussian process emulator is used to estimate the model output across multi-dimensional parameter space, using information from a small number of model runs at points chosen using a Latin hypercube space-filling design. Gaussian process emulation is a Bayesian approach that uses information from the model runs along with some prior assumptions about the model behaviour to predict model output everywhere in the uncertainty space. We use the Gaussian process emulator to calculate the percentage of expected output variance explained by uncertainty in global aerosol model parameters and their interactions. To demonstrate the technique, we show examples of cloud condensation nuclei (CCN) sensitivity to 8 model parameters in polluted and remote marine environments as a function of altitude. In the polluted environment 95 % of the variance of CCN concentration is described by uncertainty in the 8 parameters (excluding their interaction effects) and is dominated by the uncertainty in the sulphur emissions, which explains 80 % of the variance. However, in the remote region parameter interaction effects become important, accounting for up to 40 % of the total variance. Some parameters are shown to have a negligible individual effect but a substantial interaction effect. Such sensitivities would not be detected in the commonly used single parameter perturbation experiments, which would therefore underpredict total uncertainty. Gaussian process
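The emulation workflow this record describes, a Latin hypercube design, a small number of model runs, then a Gaussian process fit that predicts output everywhere in parameter space, can be sketched with a cheap stand-in model; the toy function, kernel choice, and length scale are our assumptions, not those of the aerosol study:

```python
import numpy as np
from scipy.stats import qmc

def expensive_model(x):
    """Cheap stand-in for the global aerosol model: 2 parameters -> 1 output."""
    return np.sin(3.0 * x[:, 0]) + 0.5 * x[:, 1] ** 2

# Space-filling Latin hypercube design: only a few runs of the "expensive" model
sampler = qmc.LatinHypercube(d=2, seed=1)
X_train = sampler.random(n=30)
y_train = expensive_model(X_train)

def rbf_kernel(A, B, length=0.3):
    """Squared-exponential covariance between two point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-0.5 * d2 / length ** 2)

# Zero-mean Gaussian process regression; the jitter keeps the solve stable
K = rbf_kernel(X_train, X_train) + 1e-6 * np.eye(len(X_train))
alpha = np.linalg.solve(K, y_train)

def emulate(X_new):
    """Predict the model output anywhere in parameter space at negligible cost."""
    return rbf_kernel(X_new, X_train) @ alpha

# The cheap emulator can then stand in for the model in variance-based analysis
X_test = sampler.random(n=100)
err = np.abs(emulate(X_test) - expensive_model(X_test))
print(f"mean |error| = {err.mean():.4f}, max |error| = {err.max():.4f}")
```

Once the emulator is trained, the thousands of evaluations needed for variance decomposition cost only matrix-vector products, which is the point of the approach under CPU constraints.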
Experiments and Modelling of Coal Devolatilization
Institute of Scientific and Technical Information of China (English)
Qiu Kuanrong; Liu Qianxin
1994-01-01
The coal devolatilization process of different coals was studied by means of the thermogravimetric analysis method. The experimental results and the kinetic parameters of devolatilization, K and E, have been obtained. A mathematical model for coal devolatilization has been proposed, and the model is simple and practical. The predictions of the model are shown to be in agreement with experimental results.
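A common single-step kinetic model for thermogravimetric devolatilization, consistent with the rate constant K and activation energy E mentioned above, treats volatile release as first order in the remaining volatiles with an Arrhenius rate constant; the parameter values below are illustrative, not those fitted in the paper:

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

def released_volatiles(t, T, A, E, V_star=1.0):
    """Isothermal solution of dV/dt = k (V* - V) with k = A exp(-E / (R T)).

    Returns the released volatile fraction V(t) = V* (1 - exp(-k t)).
    """
    k = A * np.exp(-E / (R * T))
    return V_star * (1.0 - np.exp(-k * t))

# Illustrative kinetics: A = 1e5 1/s, E = 100 kJ/mol, held at T = 800 K
t = np.linspace(0.0, 60.0, 7)  # time, s
V = released_volatiles(t, T=800.0, A=1.0e5, E=100.0e3)
print(np.round(V, 3))
```

Fitting A and E then amounts to matching such curves (or their non-isothermal generalisation with a heating rate) to the measured thermogravimetric mass-loss data.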
Calibration of two complex ecosystem models with different likelihood functions
Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán
2014-05-01
The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate-change-induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive regarding the model output. At the same time, there are several input parameters for which accurate values are hard to obtain directly from experiments, or no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can be experienced if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance, the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of terrestrial ecosystems (in this research a developed version of Biome-BGC is used, referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via calculating a likelihood function (the degree of goodness-of-fit between simulated and measured data). In our research different likelihood function formulations were used in order to examine the effect of the different model
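The goodness-of-fit step described above is often implemented as a Gaussian log-likelihood of the measurements given the simulated values; the formulation and toy data below are an illustrative assumption, not the exact likelihood formulations compared for PaSim and Biome-BGC:

```python
import numpy as np

def gaussian_log_likelihood(measured, simulated, sigma):
    """Log-likelihood of the measurements given model output, assuming
    independent Gaussian observation errors with standard deviation sigma."""
    measured = np.asarray(measured, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    resid = measured - simulated
    n = measured.size
    return (-0.5 * n * np.log(2.0 * np.pi * sigma ** 2)
            - 0.5 * np.sum(resid ** 2) / sigma ** 2)

# Toy flux measurements and two candidate parameterizations of a model:
# the closer simulation earns the higher likelihood in the calibration step.
measured = [2.1, 2.9, 4.2, 5.1]
sim_good = [2.0, 3.0, 4.0, 5.0]
sim_poor = [1.0, 2.0, 3.0, 4.0]
ll_good = gaussian_log_likelihood(measured, sim_good, sigma=0.5)
ll_poor = gaussian_log_likelihood(measured, sim_poor, sigma=0.5)
print(f"ll_good={ll_good:.2f}, ll_poor={ll_poor:.2f}")
```

Different likelihood formulations (e.g. different error models or sigma treatments) reweight the same residuals differently, which is precisely why the study compares their effect on the calibrated parameters.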
Service user experiences of REFOCUS: a process evaluation of a pro-recovery complex intervention.
Wallace, Genevieve; Bird, Victoria; Leamy, Mary; Bacon, Faye; Le Boutillier, Clair; Janosik, Monika; MacPherson, Rob; Williams, Julie; Slade, Mike
2016-09-01
Policy is increasingly focused on implementing a recovery-orientation within mental health services, yet the subjective experience of individuals receiving a pro-recovery intervention is under-studied. The aim of this study was to explore the service user experience of receiving a complex, pro-recovery intervention (REFOCUS), which aimed to encourage the use of recovery-supporting tools and support recovery-promoting relationships. Interviews (n = 24) and two focus groups (n = 13) were conducted as part of a process evaluation and included a purposive sample of service users who received the complex, pro-recovery intervention within the REFOCUS randomised controlled trial (ISRCTN02507940). Thematic analysis was used to analyse the data. Participants reported that the intervention supported the development of an open and collaborative relationship with staff, with new conversations around values, strengths and goals. This was experienced as hope-inspiring and empowering. However, others described how the recovery tools were used without context, meaning participants were unclear of their purpose and did not see their benefit. During the interviews, some individuals struggled to report any new tasks or conversations occurring during the intervention. Recovery-supporting tools can support the development of a recovery-promoting relationship, which can contribute to positive outcomes for individuals. The tools should be used in a collaborative and flexible manner. Information exchanged around values, strengths and goals should be used in care-planning. As some service users struggled to report their experience of the intervention, alternative evaluation approaches need to be considered if the service user experience is to be fully captured.
De Angelis, S.; Rietbrock, A.; Lavallée, Y.; Lamb, O. D.; Lamur, A.; Kendrick, J. E.; Hornby, A. J.; von Aulock, F. W.; Chigna, G.
2016-12-01
Understanding the complex processes that drive volcanic unrest is crucial to effective risk mitigation. Characterization of these processes, and the mechanisms of volcanic eruptions, is only possible when high-resolution geophysical and geological observations are available over comparatively long periods of time. In November 2014, the Liverpool Earth Observatory, UK, in collaboration with the Instituto Nacional de Sismologia, Meteorologia e Hidrologia (INSIVUMEH), Guatemala, established a multi-parameter geophysical network at Santiaguito, one of the most active volcanoes in Guatemala. Activity at Santiaguito throughout the past decade, until the summer of 2015, was characterized by nearly continuous lava dome extrusion accompanied by frequent and regular small-to-moderate gas or gas-and-ash explosions. Over the past two years, our network collected a wealth of seismic, acoustic and deformation data, complemented by campaign visual and thermal infrared measurements, and rock and ash samples. Here we present preliminary results from the analysis of this unique dataset. Using acoustic and thermal data collected during 2014-2015 we were able to assess volume fractions of ash and gas in the eruptive plumes. The small proportion of ash inferred in the plumes confirms estimates from previous, independent, studies, and suggests that these events did not involve significant magma fragmentation in the conduit. The results also agree with the suggestion that sacrificial fragmentation along fault zones in the conduit region, due to shear-induced thermal vesiculation, may be at the origin of such events. Finally, starting in the summer of 2015, our experiment captured the transition to a new phase of activity characterized by vigorous vulcanian-style explosions producing large, ash-rich, plumes and frequent hazardous pyroclastic flows, as well as the formation of a large summit crater. We present evidence of this transition in the geophysical and geological data, and discuss its
Optimum coagulant forecasting by modeling jar test experiments using ANNs
Haghiri, Sadaf; Daghighi, Amin; Moharramzadeh, Sina
2018-01-01
Currently, the proper utilization and optimization of water treatment plants is of particular importance. Coagulation and flocculation in water treatment are the common ways through which the use of coagulants leads to destabilization of particles and the formation of larger and heavier particles, resulting in improvement of sedimentation and filtration processes. Determination of the optimum dose of such a coagulant is of particular significance. A high dose, in addition to adding costs, can cause the sediment to remain in the filtrate, a dangerous condition according to the standards, while a sub-adequate dose of coagulants can fail to achieve the required quality and acceptable performance of the coagulation process. Although jar tests are used for testing coagulants, such experiments face many constraints with respect to evaluating the results produced by sudden changes in input water because of their significant costs, long time requirements, and complex relationships among the many factors (turbidity, temperature, pH, alkalinity, etc.) that can influence the efficiency of the coagulant and test results. Modeling can be used to overcome these limitations; in this research study, an artificial neural network (ANN) multi-layer perceptron (MLP) with one hidden layer has been used for modeling the jar test to determine the dosage of coagulant used in water treatment processes. The data used in this research were obtained from the drinking water treatment plant located in Ardabil province in Iran. To evaluate the performance of the model, the mean squared error (MSE) and correlation coefficient (R2) parameters have been used. The obtained values are within an acceptable range that demonstrates the high accuracy of the models with respect to the estimation of water-quality characteristics and the optimal dosages of coagulants; so using these models will allow operators to not only reduce costs and time taken to perform experimental jar tests
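A minimal sketch of the kind of model the abstract describes: a one-hidden-layer MLP trained by gradient descent to map raw-water quality (turbidity, temperature, pH, alkalinity) to a coagulant dose. This is not the paper's actual network; the data and the dose rule below are synthetic stand-ins for the jar-test records.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for jar-test records: [turbidity NTU, temp C, pH, alkalinity mg/L]
X = rng.uniform([5, 5, 6.5, 50], [500, 30, 8.5, 300], size=(200, 4))
# Fabricated dose rule, used only so the example has a target to fit
y = 0.05 * X[:, 0] + 0.5 * (8.0 - X[:, 2]) + rng.normal(0.0, 0.5, 200)

# Standardize inputs and target so one learning rate suits all weights
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()

h = 8                                      # hidden units
W1 = rng.normal(0, 0.5, (4, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.5, h);      b2 = 0.0

lr = 0.05
for _ in range(3000):
    H = np.tanh(Xs @ W1 + b1)              # hidden layer
    pred = H @ W2 + b2                     # linear output
    err = pred - ys
    # Backpropagation for the squared-error loss
    gW2 = H.T @ err / len(ys); gb2 = err.mean()
    dH = np.outer(err, W2) * (1 - H**2)    # tanh' = 1 - tanh^2
    gW1 = Xs.T @ dH / len(ys); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((pred - ys) ** 2))     # training MSE, one of the paper's metrics
```

In practice one would also hold out a test set and report R2 alongside the MSE, as the study does.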
Complex Environmental Data Modelling Using Adaptive General Regression Neural Networks
Kanevski, Mikhail
2015-04-01
The research deals with an adaptation and application of Adaptive General Regression Neural Networks (GRNN) to high dimensional environmental data. GRNN [1,2,3] are efficient modelling tools both for spatial and temporal data and are based on nonparametric kernel methods closely related to the classical Nadaraya-Watson estimator. Adaptive GRNN, using anisotropic kernels, can also be applied for feature selection tasks when working with high dimensional data [1,3]. In the present research, Adaptive GRNN are used to study geospatial data predictability and relevant feature selection using both simulated and real data case studies. The original raw data were either three dimensional monthly precipitation data or monthly wind speeds embedded into a 13 dimensional space constructed from geographical coordinates and geo-features calculated from a digital elevation model. GRNN were applied in two different ways: 1) adaptive GRNN with the resulting list of features ordered according to their relevancy; and 2) adaptive GRNN applied to evaluate all possible models N [in case of wind fields N=(2^13 -1)=8191] and rank them according to the cross-validation error. In both cases training was carried out using a leave-one-out procedure. An important result of the study is that the set of the most relevant features depends on the month (strong seasonal effect) and year. The predictabilities of precipitation and wind field patterns, estimated using the cross-validation and testing errors of raw and shuffled data, were studied in detail. The results of both approaches were qualitatively and quantitatively compared. In conclusion, Adaptive GRNN, with their ability to select features and efficiently model complex high dimensional data, can be widely used in automatic/on-line mapping and as an integrated part of environmental decision support systems. 1. Kanevski M., Pozdnoukhov A., Timonin V. Machine Learning for Spatial Environmental Data. Theory, applications and software. EPFL Press
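An assumption-level illustration of the estimator this abstract builds on (not Kanevski's actual code): Nadaraya-Watson kernel regression with an anisotropic Gaussian kernel, one bandwidth per feature. A very large bandwidth effectively switches a feature off, which is the mechanism by which adaptive GRNN performs feature selection.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, bandwidths):
    """Kernel-weighted average with per-feature (anisotropic) bandwidths."""
    h = np.asarray(bandwidths, dtype=float)
    # Squared anisotropic distances, shape (n_query, n_train)
    d2 = (((X_query[:, None, :] - X_train[None, :, :]) / h) ** 2).sum(axis=-1)
    w = np.exp(-0.5 * d2)
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(300, 2))
y = np.sin(3 * X[:, 0])                    # only feature 0 carries signal
# In-sample fit with feature 1 smoothed away by a huge bandwidth
pred = grnn_predict(X, y, X, bandwidths=[0.1, 10.0])
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

Ranking feature subsets by leave-one-out error, as in approach 2) of the abstract, would simply wrap this predictor in a loop over bandwidth configurations.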
Assessment for Complex Learning Resources: Development and Validation of an Integrated Model
Directory of Open Access Journals (Sweden)
Gudrun Wesiak
2013-01-01
Full Text Available Today’s e-learning systems meet the challenge to provide interactive, personalized environments that support self-regulated learning as well as social collaboration and simulation. At the same time assessment procedures have to be adapted to the new learning environments by moving from isolated summative assessments to integrated assessment forms. Therefore, learning experiences enriched with complex didactic resources - such as virtualized collaborations and serious games - have emerged. In this extension of [1] an integrated model for e-assessment (IMA is outlined, which incorporates complex learning resources and assessment forms as main components for the development of an enriched learning experience. For a validation the IMA was presented to a group of experts from the fields of cognitive science, pedagogy, and e-learning. The findings from the validation lead to several refinements of the model, which mainly concern the component forms of assessment and the integration of social aspects. Both aspects are accounted for in the revised model, the former by providing a detailed sub-model for assessment forms.
Some Experiences with Numerical Modelling of Overflows
DEFF Research Database (Denmark)
Larsen, Torben; Nielsen, L.; Jensen, B.
2007-01-01
across the edge of the overflow. To ensure critical flow across the edge, the upstream flow must be subcritical whereas the downstream flow is either supercritical or a free jet. Experimentally overflows are well studied. Based on laboratory experiments and Froude number scaling, numerous accurate...
Quality of experience models for multimedia streaming
Menkovski, V.; Exarchakos, G.; Liotta, A.; Cuadra Sánchez, A.
2010-01-01
Understanding how quality is perceived by viewers of multimedia streaming services is essential for efficient management of those services. Quality of Experience (QoE) is a subjective metric that quantifies the perceived quality, which is crucial in the process of optimizing tradeoff between quality
A Cross-Discipline Modeling Capstone Experience
Frazier, Marian L.; LoFaro, Thomas; Pillers Dobler, Carolyn
2018-01-01
The Mathematical Association of America (MAA) and the American Statistical Association (ASA) have both updated and revised their curriculum guidelines. The guidelines of both associations recommend that students engage in a "capstone" experience, be exposed to applications, and have opportunities to communicate mathematical and…
Elements of complexity in subsurface modeling, exemplified with three case studies
Energy Technology Data Exchange (ETDEWEB)
Freedman, Vicky L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Truex, Michael J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rockhold, Mark [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bacon, Diana H. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Freshley, Mark D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wellman, Dawn M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2017-04-03
There are complexity elements to consider when applying subsurface flow and transport models to support environmental analyses. Modelers balance the benefits and costs of modeling along the spectrum of complexity, taking into account the attributes of simpler models (e.g., lower cost, faster execution, easier to explain, less mechanistic) and the attributes of more complex models (higher cost, slower execution, harder to explain, more mechanistic and technically defensible). In this paper, modeling complexity is examined with respect to striking this balance. The discussion of modeling complexity is organized into three primary elements: 1) modeling approach, 2) description of process, and 3) description of heterogeneity. Three examples are used to examine these complexity elements. Two of the examples use simulations generated from a complex model to develop simpler models for efficient use in model applications. The first example is designed to support performance evaluation of soil vapor extraction remediation in terms of groundwater protection. The second example investigates the importance of simulating different categories of geochemical reactions for carbon sequestration and selecting appropriate simplifications for use in evaluating sequestration scenarios. In the third example, the modeling history for a uranium-contaminated site demonstrates that conservative parameter estimates were inadequate surrogates for complex, critical processes and there is discussion on the selection of more appropriate model complexity for this application. All three examples highlight how complexity considerations are essential to create scientifically defensible models that achieve a balance between model simplification and complexity.
Mathematical Modeling: Are Prior Experiences Important?
Czocher, Jennifer A.; Moss, Diana L.
2017-01-01
Why are math modeling problems the source of such frustration for students and teachers? The conceptual understanding that students have when engaging with a math modeling problem varies greatly. They need opportunities to make their own assumptions and design the mathematics to fit these assumptions (CCSSI 2010). Making these assumptions is part…
Towards Generic Models of Player Experience
DEFF Research Database (Denmark)
Shaker, Noor; Shaker, Mohammad; Abou-Zleikha, Mohamed
2015-01-01
Context personalisation is a flourishing area of research with many applications. Context personalisation systems usually employ a user model to predict the appeal of the context to a particular user given a history of interactions. Most of the models used are context-dependent and their applicab...
BRISENT: An Entropy-Based Model for Bridge-Pier Scour Estimation under Complex Hydraulic Scenarios
Directory of Open Access Journals (Sweden)
Alonso Pizarro
2017-11-01
The goal of this paper is to introduce the first clear-water scour model based on both the informational entropy concept and the principle of maximum entropy, showing that a variational approach is ideal for describing erosional processes under complex situations. The proposed bridge–pier scour entropic (BRISENT) model is capable of reproducing the main dynamics of scour depth evolution under steady hydraulic conditions, step-wise hydrographs, and flood waves. For the calibration process, 266 clear-water scour experiments from 20 previous studies were considered, where the dimensionless parameters varied widely. Simple formulations are proposed to estimate BRISENT's fitting coefficients, in which the ratio between pier-diameter and sediment-size was the most critical physical characteristic controlling scour model parametrization. A validation process considering highly unsteady and multi-peaked hydrographs was carried out, showing that the proposed BRISENT model reproduces scour evolution with high accuracy.
Physical approach to air pollution climatological modelling in a complex site
Energy Technology Data Exchange (ETDEWEB)
Bonino, G. [Università di Torino; CNR, Istituto di Cosmo-Geofisica, Turin, Italy]; Longhetto, A. [Ente Nazionale per l'Energia Elettrica, Centro di Ricerca Termica e Nucleare, Milan, Italy; CNR, Istituto di Cosmo-Geofisica, Turin, Italy]; Runca, E. [International Institute for Applied Systems Analysis, Laxenburg, Austria]
1980-09-01
A Gaussian climatological model which takes into account physical factors affecting air pollutant dispersion, such as nocturnal radiative inversion and mixing height evolution, associated with land breeze and sea breeze regimes, has been applied to the topographically complex area of La Spezia. The measurements of the dynamic and thermodynamic structure of the lower atmosphere obtained by field experiments are utilized in the model to calculate the SO2 seasonal average concentrations. The model has been tested on eight three-monthly periods by comparing the simulated values with the ones measured at the SO2 stations of the local air pollution monitoring network. Comparison of simulated and measured values was satisfactory and proved the applicability of the model for urban planning and establishment of air quality strategies.
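A hedged sketch of the ground-level Gaussian plume concentration formula that climatological models of this kind average over wind and stability statistics. All numerical values below are illustrative, not values from the La Spezia study.

```python
import math

def ground_level_conc(Q, u, y, H, sigma_y, sigma_z):
    """Ground-level concentration (g/m^3) downwind of an elevated point source.
    Q: emission rate (g/s), u: wind speed (m/s), y: crosswind offset (m),
    H: effective source height (m), sigma_y/sigma_z: dispersion lengths (m)
    evaluated at the downwind distance of interest. The factor 2 accounts
    for reflection of the plume at the ground (z = 0)."""
    return (Q / (2.0 * math.pi * u * sigma_y * sigma_z)
            * math.exp(-0.5 * (y / sigma_y) ** 2)
            * 2.0 * math.exp(-0.5 * (H / sigma_z) ** 2))

# Illustrative source: 100 g/s of SO2, 3 m/s wind, 50 m effective height,
# sigmas typical of ~1 km downwind in neutral conditions
c = ground_level_conc(Q=100.0, u=3.0, y=0.0, H=50.0, sigma_y=80.0, sigma_z=40.0)
```

A climatological model then weights such single-plume estimates by the joint frequency of wind direction, speed class, and stability class to obtain seasonal averages.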
An artificial intelligence tool for complex age-depth models
Bradley, E.; Anderson, K. A.; de Vesine, L. R.; Lai, V.; Thomas, M.; Nelson, T. H.; Weiss, I.; White, J. W. C.
2017-12-01
CSciBox is an integrated software system for age modeling of paleoenvironmental records. It incorporates an array of data-processing and visualization facilities, ranging from 14C calibrations to sophisticated interpolation tools. Using CSciBox's GUI, a scientist can build custom analysis pipelines by composing these built-in components or adding new ones. Alternatively, she can employ CSciBox's automated reasoning engine, Hobbes, which uses AI techniques to perform an in-depth, autonomous exploration of the space of possible age-depth models and presents the results—both the models and the reasoning that was used in constructing and evaluating them—to the user for her inspection. Hobbes accomplishes this using a rulebase that captures the knowledge of expert geoscientists, which was collected over the course of more than 100 hours of interviews. It works by using these rules to generate arguments for and against different age-depth model choices for a given core. Given a marine-sediment record containing uncalibrated 14C dates, for instance, Hobbes tries CALIB-style calibrations using a choice of IntCal curves, with reservoir age correction values chosen from the 14CHRONO database using the lat/long information provided with the core, and finally composes the resulting age points into a full age model using different interpolation methods. It evaluates each model—e.g., looking for outliers or reversals—and uses that information to guide the next steps of its exploration, and presents the results to the user in human-readable form. The most powerful of CSciBox's built-in interpolation methods is BACON, a Bayesian sedimentation-rate algorithm—a powerful but complex tool that can be difficult to use. Hobbes adjusts BACON's many parameters autonomously to match the age model to the expectations of expert geoscientists, as captured in its rulebase. It then checks the model against the data and iteratively re-calculates until it is a good fit to the data.
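A toy version of the pipeline's final stage (a generic sketch, not CSciBox/Hobbes code): compose a few dated horizons into a full age-depth model by interpolation, then check for age reversals, one of the evaluations Hobbes applies to candidate models. Depths and ages below are hypothetical.

```python
import numpy as np

depth_cm = np.array([0.0, 55.0, 120.0, 260.0])   # dated horizons (hypothetical core)
age_ka = np.array([0.1, 2.4, 5.9, 11.7])         # calibrated ages, kyr BP

query = np.arange(0.0, 261.0, 10.0)              # depths at which ages are wanted
model_age = np.interp(query, depth_cm, age_ka)   # linear age-depth model

# Age should increase monotonically with depth; a reversal flags a bad model
reversal = bool(np.any(np.diff(model_age) < 0))
```

CSciBox's built-in interpolators (up to Bayesian methods like BACON) replace the `np.interp` step; the reversal and outlier checks are what the rulebase reasons about.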
International Nuclear Information System (INIS)
Belov, N.S.; Trebin, I.S.; Sorokovikova, O.
1998-01-01
Sour natural gas fields are the unique raw material base for setting up such large enterprises as gas chemical complexes. The presence of highly toxic H2S in natural gas widens the range of dangerous and harmful factors for the biosphere. Emission of such gases into the atmosphere during accidents at gas wells and gas pipelines is especially dangerous for the environment and, first of all, for people. Development of mathematical forecast models for assessment of accident progression and consequences is one of the main elements of work on safety analysis and risk assessment. The critical step in the development of such models is their validation using experimental material. Full-scale experiments have been conducted by the All-Union Scientific-Research Institute of Natural Gases and Gas Technology (VNIIGAZ) to substantiate the sizes of hazard zones in case of severe accidents involving gas pipelines. The source of the emergency gas release was a working gas pipeline of 100 mm diameter and 110 km length. This pipeline was used for transportation of natural gas with a significant amount of hydrogen sulphide. During these experiments significant quantities of the gas, including H2S, were released into the atmosphere, and concentrations of gas and H2S were then measured in the accident region. The results of these experiments are used for validation of atmospheric dispersion models, including the new Lagrangian stochastic trace model that takes into account a wide range of meteorological factors. This model was developed as part of a computer system for decision-making support in case of accidental release of toxic gases into the atmosphere at the enterprises of the Russian gas industry. (authors)
Current experiments using polarized beams of the JINR LHE accelerator complex
International Nuclear Information System (INIS)
Lehar, F.
2001-01-01
The present review is devoted to the spin-dependent experiments carried out or prepared at the JINR LHE Synchrophasotron. The acceleration of polarized deuterons, and experiments using the internal targets, the beam extraction and the polarimetry are briefly described. Then, representative experiments using either the extracted deuteron beam or secondary beams of polarized nucleons produced by polarized deuterons are treated. Three current experiments: 'DELTA-SIGMA', 'DELTA' and 'pp-SINGLET', require the polarized nucleon beams in conjunction with the Dubna polarized proton target. Already available Δσ_L(np) results from the first experiment show unexpected energy dependence. Experiment 'DELTA' should investigate nucleon strangeness. The aim of the third experiment is to study a possible resonant behavior of the spin-singlet pp scattering amplitude. For all other Dubna experiments unpolarized nucleon or nuclei targets are used. The polarized deuteron beam allows determining the spin-dependent observables necessary for understanding the deuteron structure, as well as the nucleon substructure. One part of investigations concerns deuteron break-up reactions and deuteron proton backward elastic scattering. A considerable amount of data was obtained in this domain. Another part is dedicated to the measurements of the same spin-dependent observables in a 'cumulative' region. Interesting results were obtained for proton or pion production in inclusive and semi-inclusive measurements. In the field of inelastic deuteron reactions, the analyzing power measurements were performed in the region covering Roper resonances. Many existing models are in disagreement with the observed momentum dependences of different results. Finally, the proton-carbon analyzing power measurements extended the momentum region of rescattering observables. Some inclusive Dubna results are compared to exclusive Saclay data, and to lepton-deuteron measurements. Most of the JINR LHE experiments are
Kathawate, Laxmi; Gejji, Shridhar P.; Yeole, Sachin D.; Verma, Prakash L.; Puranik, Vedavati G.; Salunke-Gawali, Sunita
2015-05-01
Synthesis and characterization of the potassium complex of 2-hydroxy-3-methyl-1,4-naphthoquinone (phthiocol), the vitamin K3 analog, has been carried out using FT-IR, UV-Vis, 1H and 13C NMR, EPR, cyclic voltammetry and single crystal X-ray diffraction experiments combined with the density functional theory. It has been observed that naphthosemiquinone binds to two K+ ions extending the polymeric chain through bridging oxygens O(2) and O(3). The crystal network possesses hydrogen bonding interactions from coordinated water molecules showing water channels along the c-axis. 13C NMR spectra revealed that the complexation of phthiocol with the potassium ion engenders deshielding of C(2) signals, which appear at δ ∼ 14.6 ppm, whereas those of C(3) exhibit up-field signals near δ ∼ 6.9 ppm. These inferences are supported by the M06-2x based density functional theory. Electrochemical experiments further suggest that reduction of naphthosemiquinone results in only a cathodic peak from catechol. A triplet state arising from interactions between neighboring phthiocol anions leads to a half-field signal at g = 4.1 in the polycrystalline X-band EPR spectra at 133 K.
Cherian, Mathew P; Yadav, Manish Kumar; Mehta, Pankaj; Vijayan, K; Arulselvan, V; Jayabalan, Suresh
2014-01-01
Flow diversion is a novel method of therapy wherein an endoluminal sleeve, the flow diverter stent, is placed across the neck of complex aneurysms to curatively reconstruct the abnormal vasculature. We present the first Indian single-center experience with the pipeline embolization device (PED) and 6-month follow-up results of 5 patients. Five complex or recurrent intracranial aneurysms in five patients were treated with the PED. The patients were followed up with magnetic resonance angiography (MRA) after 4 weeks and conventional angiography after 6 months. Feasibility, complications, clinical outcome, early 1-month MRA and 6-month conventional angiographic follow-up results were analyzed. Of the five aneurysms treated, four were in the anterior circulation and one in the posterior circulation. All five patients were treated with a single PED each, and coils were additionally used in one patient. At 1-month MRA follow-up, complete occlusion was seen in 2 (40%) of the five cases. Conventional angiography at 6 months showed complete occlusion of the aneurysm sac in all five cases (100%). Side-branch ostia were covered in three patients, all of which were patent (100%). There was no incidence of major neurological morbidity or mortality. One patient (20%), who had a basilar top aneurysm, experienced minor neurological disability after 5 days, which partially improved. The pipeline embolization device for complex and recurrent aneurysms is technically feasible and safe, offers a low complication rate, and provides definitive vascular reconstruction. The PED can be used without fear of occlusion of covered eloquent side branches and perforators.
Experimental and Numerical Modelling of Flow over Complex Terrain: The Bolund Hill
Conan, Boris; Chaudhari, Ashvinkumar; Aubrun, Sandrine; van Beeck, Jeroen; Hämäläinen, Jari; Hellsten, Antti
2016-02-01
In the wind-energy sector, wind-power forecasting, turbine siting, and turbine-design selection are all highly dependent on a precise evaluation of atmospheric wind conditions. On-site measurements provide reliable data; however, in complex terrain and at the scale of a wind farm, local measurements may be insufficient for a detailed site description. On highly variable terrain, numerical models are commonly used but still constitute a challenge regarding simulation and interpretation. We propose a joint state-of-the-art study of two approaches to modelling atmospheric flow over the Bolund hill: a wind-tunnel test and a large-eddy simulation (LES). The approach is distinctive in describing both methods in parallel in order to highlight their similarities and differences. The work provides a first detailed comparison between field measurements, wind-tunnel experiments and numerical simulations. The systematic and quantitative approach used for the comparison contributes to a better understanding of the strengths and weaknesses of each model and, therefore, to their enhancement. Despite fundamental modelling differences, both techniques result in only a 5 % difference in the mean wind speed and 15 % in the turbulent kinetic energy (TKE). The joint comparison makes it possible to identify the most difficult features to model: the near-ground flow and the wake of the hill. When compared to field data, both models reach 11 % error for the mean wind speed, which is close to the best performance reported in the literature. For the TKE, a great improvement is found using the LES model compared to previous studies (20 % error). Wind-tunnel results are in the low range of error when compared to experiments reported previously (40 % error). This comparison highlights the potential of such approaches and gives directions for the improvement of complex flow modelling.
Lute, A. C.; Luce, Charles H.
2017-11-01
The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modeling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and Snow Residence Time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low to moderate complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.
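A synthetic illustration of the non-random cross-validation idea above (not the SNOTEL data): fit statistical models of increasing complexity on cold sites only, then score them on held-out warm sites, forcing the extrapolation in the input variable that a transfer to a warmer climate would require.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(2)
temp = np.sort(rng.uniform(-10.0, 5.0, 400))               # mean winter temperature (C)
swe = 600.0 - 35.0 * temp + rng.normal(0.0, 40.0, 400)     # synthetic April 1 SWE (mm)

train = temp < 2.0                                         # calibrate on colder sites only
test = ~train                                              # validate on the warmest sites

rmse = {}
for degree in (1, 3, 9):                                   # increasing model complexity
    fit = Polynomial.fit(temp[train], swe[train], degree)  # least-squares polynomial
    resid = fit(temp[test]) - swe[test]
    rmse[degree] = float(np.sqrt(np.mean(resid ** 2)))
# rmse maps model complexity to extrapolation error on the held-out warm sites
```

Comparing `rmse` across degrees is a small-scale version of the paper's complexity-versus-transferability assessment; a random split would hide the extrapolation penalty that the non-random split exposes.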
Robustness and Optimization of Complex Networks : Reconstructability, Algorithms and Modeling
Liu, D.
2013-01-01
The infrastructure networks, including the Internet, telecommunication networks, electrical power grids, transportation networks (road, railway, waterway, and airway networks), gas networks and water networks, are becoming more and more complex. The complex infrastructure networks are crucial to our
Jain, Vaibhav; Maiti, Prabal K.; Bharatam, Prasad V.
2016-09-01
Computational studies performed on dendrimer-drug complexes usually consider 1:1 stoichiometry, which is far from reality, since in experiments a larger number of drug molecules are encapsulated inside a dendrimer. In the present study, molecular dynamics (MD) simulations were implemented to characterize more realistic molecular models of dendrimer-drug complexes (1:n stoichiometry) in order to understand the effect of high drug loading on the structural properties and also to unveil the atomistic-level details. For this purpose, possible inclusion complexes of the model drug Nateglinide (Ntg) (antidiabetic, belongs to Biopharmaceutics Classification System class II) with amine- and acetyl-terminated G4 poly(amidoamine) (G4 PAMAM(NH2) and G4 PAMAM(Ac)) dendrimers at neutral and low pH conditions are explored in this work. MD simulation analysis on dendrimer-drug complexes revealed that the drug encapsulation efficiency of G4 PAMAM(NH2) and G4 PAMAM(Ac) dendrimers at neutral pH was 6 and 5, respectively, while at low pH it was 12 and 13, respectively. Center-of-mass distance analysis showed that most of the drug molecules are located in the interior hydrophobic pockets of G4 PAMAM(NH2) at both pH conditions; while in the case of G4 PAMAM(Ac), most of them are distributed near the surface at neutral pH and in the interior hydrophobic pockets at low pH. Structural properties such as radius of gyration, shape, radial density distribution, and solvent accessible surface area of dendrimer-drug complexes were also assessed and compared with those of the drug-unloaded dendrimers. Further, binding energy calculations using the molecular mechanics Poisson-Boltzmann surface area approach revealed that the location of drug molecules in the dendrimer is not the decisive factor for the higher and lower binding affinity of the complex, but the charged state of dendrimer and drug, intermolecular interactions, pH-induced conformational changes, and surface groups of dendrimer do play an
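A generic sketch of one structural property named above, computed the way MD analyses do: the radius of gyration as the mass-weighted RMS distance of atoms from the center of mass. The coordinates are random stand-ins, not an actual PAMAM dendrimer.

```python
import numpy as np

rng = np.random.default_rng(3)
coords = rng.normal(0.0, 1.0, size=(500, 3))   # bead positions (nm), stand-in structure
masses = np.full(500, 12.0)                    # uniform masses for simplicity

# Center of mass, then mass-weighted RMS distance from it
com = np.average(coords, axis=0, weights=masses)
sq_dist = np.sum((coords - com) ** 2, axis=1)
rg = float(np.sqrt(np.average(sq_dist, weights=masses)))
```

In a real analysis this is evaluated per trajectory frame and averaged, and compared between loaded and unloaded dendrimers as the abstract describes.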
Analysis of a Mouse Skin Model of Tuberous Sclerosis Complex.
Directory of Open Access Journals (Sweden)
Yanan Guo
Tuberous Sclerosis Complex (TSC) is an autosomal dominant tumor suppressor gene syndrome in which patients develop several types of tumors, including facial angiofibroma, subungual fibroma, Shagreen patch, angiomyolipomas, and lymphangioleiomyomatosis. It is due to inactivating mutations in TSC1 or TSC2. We sought to generate a mouse model of one or more of these tumor types by targeting deletion of the Tsc1 gene to fibroblasts using the Fsp-Cre allele. Mutant Tsc1c/c Fsp-Cre+ mice survived a median of nearly a year, and developed tumors in multiple sites but did not develop angiomyolipoma or lymphangioleiomyomatosis. They did develop a prominent skin phenotype with marked thickening of the dermis and accumulation of mast cells, which was minimally responsive to systemic rapamycin therapy, and was quite different from the pathology seen in human TSC skin lesions. Recombination and loss of Tsc1 was demonstrated in skin fibroblasts in vivo and in cultured skin fibroblasts. Loss of Tsc1 in fibroblasts in mice does not lead to a model of angiomyolipoma or lymphangioleiomyomatosis.
Electromagnetic modelling of Ground Penetrating Radar responses to complex targets
Pajewski, Lara; Giannopoulos, Antonis
2014-05-01
This work deals with the electromagnetic modelling of composite structures for Ground Penetrating Radar (GPR) applications. It was developed within the Short-Term Scientific Mission ECOST-STSM-TU1208-211013-035660, funded by COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar". The Authors define a set of test concrete structures, hereinafter called cells. The size of each cell is 60 x 100 x 18 cm and the content varies with growing complexity, from a simple cell with few rebars of different diameters embedded in concrete at increasing depths, to a final cell with a quite complicated pattern, including a layer of tendons between two overlying meshes of rebars. Other cells, of intermediate complexity, contain PVC ducts (air filled or hosting rebars), steel objects commonly used in civil engineering (as a pipe, an angle bar, a box section and a u-channel), as well as void and honeycombing defects. One of the cells has a steel mesh embedded in it, overlying two rebars placed diagonally across the corners of the structure. Two cells include a couple of rebars bent into a right angle and placed on top of each other, with a square/round circle lying at the base of the concrete slab. Inspiration for some of these cells is taken from the very interesting experimental work presented in Ref. [1]. For each cell, a subset of models with growing complexity is defined, starting from a simple representation of the cell and ending with a more realistic one. In particular, the model's complexity increases from the geometrical point of view, as well as in terms of how the constitutive parameters of involved media and GPR antennas are described. Some cells can be simulated in both two and three dimensions; the concrete slab can be approximated as a finite-thickness layer having infinite extension on the transverse plane, thus neglecting how edges affect radargrams, or else its finite size can be fully taken into account. The permittivity of concrete can be
Finite element modeling of piezoelectric elements with complex electrode configuration
International Nuclear Information System (INIS)
Paradies, R; Schläpfer, B
2009-01-01
It is well known that the material properties of piezoelectric materials strongly depend on the state of polarization of the individual element. While an unpolarized material exhibits mechanically isotropic material properties in the absence of global piezoelectric capabilities, the piezoelectric material properties become transversally isotropic with respect to the polarization direction after polarization. Therefore, for evaluating piezoelectric elements the material properties, including the coupling between the mechanical and the electromechanical behavior, should be addressed correctly. This is of special importance for the micromechanical description of piezoelectric elements with interdigitated electrodes (IDEs). The best known representatives of this group are active fiber composites (AFCs), macro fiber composites (MFCs) and the radial field diaphragm (RFD), respectively. While the material properties are available for a piezoelectric wafer with a homogeneous polarization perpendicular to its plane as postulated in the so-called uniform field model (UFM), the same information is missing for piezoelectric elements with more complex electrode configurations like the above-mentioned ones with IDEs. This is due to the inhomogeneous field distribution which does not automatically allow for the correct assignment of the material, i.e. orientation and property. A variation of the material orientation as well as the material properties can be accomplished by including the polarization process of the piezoelectric transducer in the finite element (FE) simulation prior to the actual load case to be investigated. A corresponding procedure is presented which automatically assigns the piezoelectric material properties, e.g. elasticity matrix, permittivity, and charge vector, for finite element models (FEMs) describing piezoelectric transducers according to the electric field distribution (field orientation and strength) in the structure. A corresponding code has been
Cankorur-Cetinkaya, Ayca; Dias, Joao M L; Kludas, Jana; Slater, Nigel K H; Rousu, Juho; Oliver, Stephen G; Dikicioglu, Duygu
2017-06-01
Multiple interacting factors affect the performance of engineered biological systems in synthetic biology projects. The complexity of these biological systems means that experimental design should often be treated as a multiparametric optimization problem. However, the available methodologies are either impractical, due to a combinatorial explosion in the number of experiments to be performed, or are inaccessible to most experimentalists due to the lack of publicly available, user-friendly software. Although evolutionary algorithms may be employed as alternative approaches to optimize experimental design, the lack of simple-to-use software again restricts their use to specialist practitioners. In addition, the lack of subsidiary approaches to further investigate critical factors and their interactions prevents the full analysis and exploitation of the biotechnological system. We have addressed these problems and, here, provide a simple-to-use and freely available graphical user interface to empower a broad range of experimental biologists to employ complex evolutionary algorithms to optimize their experimental designs. Our approach exploits a Genetic Algorithm to discover the subspace containing the optimal combination of parameters, and Symbolic Regression to construct a model to evaluate the sensitivity of the experiment to each parameter under investigation. We demonstrate the utility of this method using an example in which the culture conditions for the microbial production of a bioactive human protein are optimized. CamOptimus is available through: (https://doi.org/10.17863/CAM.10257).
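The Genetic Algorithm workflow described in the abstract above (evolve a population of candidate parameter combinations, keep the fittest, recombine and mutate) can be sketched as follows. This is a toy illustration, not the CamOptimus implementation: the fitness function, parameter names (temperature, pH, inducer concentration) and their ranges are all invented stand-ins for a real culture-condition experiment.

```python
import random

random.seed(0)

# Invented surrogate fitness: protein yield peaks at 30 C, pH 7, inducer 0.5.
# In a real workflow this would be the measured experimental response.
def fitness(params):
    temp, ph, inducer = params
    return -((temp - 30.0) ** 2 / 100 + (ph - 7.0) ** 2 + (inducer - 0.5) ** 2)

BOUNDS = [(20.0, 40.0), (5.0, 9.0), (0.0, 1.0)]  # (temp, pH, inducer), invented

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def mutate(ind, rate=0.2):
    # Gaussian perturbation, clamped back into the allowed range.
    return [min(hi, max(lo, g + random.gauss(0, 0.5))) if random.random() < rate else g
            for g, (lo, hi) in zip(ind, BOUNDS)]

def crossover(a, b):
    # Uniform crossover: each gene comes from either parent.
    return [ga if random.random() < 0.5 else gb for ga, gb in zip(a, b)]

def evolve(pop_size=40, generations=60):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 4]  # selection: keep the top quarter
        pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

best = evolve()
```

In CamOptimus the evaluated combinations additionally feed a Symbolic Regression step that builds an explicit model of the response, so the sensitivity to each parameter can be inspected; that step is omitted here.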
Hybrid rocket engine, theoretical model and experiment
Chelaru, Teodor-Viorel; Mingireanu, Florin
2011-06-01
The purpose of this paper is to build a theoretical model for the hybrid rocket engine/motor and to validate it using experimental results. The work approaches the main problems of the hybrid motor: the scalability, the stability/controllability of the operating parameters and the increasing of the solid fuel regression rate. At first, we focus on theoretical models for the hybrid rocket motor and compare the results with already available experimental data from various research groups. A primary computation model is presented together with results from a numerical algorithm based on a computational model. We present theoretical predictions for several commercial hybrid rocket motors of different scales and compare them with experimental measurements of those hybrid rocket motors. Next, the paper focuses on the tribrid rocket motor concept, which can improve thrust controllability through supplementary liquid fuel injection. A complementary computation model is also presented to estimate the regression rate increase of solid fuel doped with oxidizer. Finally, the stability of the hybrid rocket motor is investigated using Lyapunov theory. The stability coefficients obtained depend on the burning parameters, and the stability and command matrices are identified. The paper presents thoroughly the input data of the model, which ensures the reproducibility of the numerical results by independent researchers.
Facing urban complexity : towards cognitive modelling. Part 1. Modelling as a cognitive mediator
Directory of Open Access Journals (Sweden)
Sylvie Occelli
2002-03-01
Full Text Available Over the last twenty years, complexity issues have been a central theme of enquiry for the modelling field. While contributing both to a critical revisiting of existing methods and to opening new ways of reasoning, the effectiveness (and sense) of modelling activity was rarely questioned. Acknowledgment of complexity, however, has been a fruitful spur to new and more sophisticated methods intended to improve understanding and advance the geographical sciences. However, its contribution to tackling urban problems in everyday life has been rather poor and mainly limited to rhetorical claims about the potentialities of the new approach. We argue that although complexity has put classical modelling activity in serious distress, it is disclosing new potentialities which are still largely unnoticed. These are primarily related to what the authors have called the structural cognitive shift, which involves both the contents and the role of modelling activity. This paper is the first part of a work aimed at illustrating the main features of this shift and discussing its main consequences for modelling activity. We contend that a most relevant aspect of novelty lies in the new role of modelling as a cognitive mediator, i.e. as a kind of interface between the various components of a modelling process and the external environment to which a model application belongs.
Large scale experiments as a tool for numerical model development
DEFF Research Database (Denmark)
Kirkegaard, Jens; Hansen, Erik Asp; Fuchs, Jesper
2003-01-01
Experimental modelling is an important tool for study of hydrodynamic phenomena. The applicability of experiments can be expanded by the use of numerical models and experiments are important for documentation of the validity of numerical tools. In other cases numerical tools can be applied...
Effects of complex magnetic ripple on fast ions in JFT-2M ferritic insert experiments
International Nuclear Information System (INIS)
Shinohara, Kouji; Kawashima, H.; Tsuzuki, K.
2003-01-01
In JFT-2M, ferritic steel plates (FPs) were installed all over the inside of the vacuum vessel, a configuration named the Ferritic Inside Wall (FIW), as the third step of the Advanced Material Tokamak Experiment (AMTEX) program. The toroidal field ripple was reduced; however, the magnetic field structure became a complex ripple structure with a non-periodic feature in the toroidal direction, because other components and ports limit the periodic installation of FPs. Under this complex magnetic ripple, we investigated its effect on the heat flux to the first wall due to fast ion loss. A small heat flux was observed as a result of the magnetic ripple reduced by the FIW. Additional FPs were also installed outside the vacuum vessel to produce a localized larger ripple. A small ripple-trapped loss was observed when a shallow ripple well exists in the poloidal cross section, and a large ripple-trapped loss was observed when the ripple well hollows out the plasma region deeply. The experimental results were almost consistent with the newly developed Fully three-Dimensional magnetic field Orbit-Following Monte-Carlo (F3D OFMC) code, which includes the three-dimensional complex structure of the toroidal field ripple and the non-axisymmetric first wall geometry. Using F3D OFMC, we investigated in detail the effect of the localized larger ripple produced by the FPs on the ripple-trapped loss. The ripple well structure, e.g. the thickness of the ripple well, is more important for ripple-trapped loss in a complex magnetic ripple than the value defined at one position in a poloidal cross section. (author)
Roubelakis, Apostolos; Karangelis, Dimos; Sadeque, Syed; Yanagawa, Bobby; Modi, Amit; Barlow, Clifford W; Livesey, Steven A; Ohri, Sunil K
2017-07-01
The treatment of complex prosthetic valve endocarditis (PVE) with aortic root abscess remains a surgical challenge. Several studies support the use of biological tissues to minimize the risk of recurrent infection. We present our initial surgical experience with the use of an aortic xenograft conduit for aortic valve and root replacement. Between October 2013 and August 2015, 15 xenograft bioconduits were implanted for complex PVE with abscess (13.3% female). In 6 patients, concomitant procedures were performed: coronary bypass (n=1), mitral valve replacement (n=5) and tricuspid annuloplasty (n=1). The mean age at operation was 60.3±15.5 years. The mean Logistic European system for cardiac operating risk evaluation (EuroSCORE) was 46.6±23.6. The median follow-up time was 607±328 days (range: 172-1074 days). There were two in-hospital deaths (14.3% mortality), two strokes (14.3%) and seven patients required permanent pacemaker insertion for conduction abnormalities (46.7%). The mean length of hospital stay was 26 days. At pre-discharge echocardiography, the conduit mean gradient was 9.3±3.3mmHg and there was either none (n=6), trace (n=6) or mild aortic insufficiency (n=1). There was no incidence of mid-term death, prosthesis-related complications or recurrent endocarditis. Xenograft bioconduits may be safe and effective for aortic valve and root replacement for complex PVE with aortic root abscess. Although excess early mortality reflects the complexity of the patient population, there was good valve hemodynamics, with no incidence of recurrent endocarditis or prosthesis failure in the mid-term. Our data support the continued use and evaluation of this biological prosthesis in this high-risk patient cohort.
The Noah's Ark experiment: species dependent biodistributions of cationic 99mTc complexes
International Nuclear Information System (INIS)
Deutsch, Edward; Ketring, A.R.; Libson, Karen; Vanderheyden, J.-L.; Hirth, W.W.
1989-01-01
The time-dependent biodistributions of three related 99mTc complexes of 1,2-bis(dimethylphosphino)ethane (DMPE) were evaluated in several animal species including humans: trans-[99mTc(V)(DMPE)2O2]+, trans-[99mTc(III)(DMPE)2Cl2]+ and [99mTc(I)(DMPE)3]+. Imaging studies were performed in 10 animal species to evaluate these complexes as myocardial perfusion imaging agents. Animal models adequately predict the uninteresting behaviour of the Tc(V) cation in humans, predict to only a very limited extent the behaviour of the Tc(III) cation in humans, and totally fail to predict the behaviour of the Tc(I) cation in humans. (U.K.)
Realistic modelling of observed seismic motion in complex sedimentary basins
International Nuclear Information System (INIS)
Faeh, D.; Panza, G.F.
1994-03-01
Three applications of a numerical technique are illustrated to model realistically the seismic ground motion for complex two-dimensional structures. First we consider a sedimentary basin in the Friuli region, and we model strong motion records from an aftershock of the 1976 earthquake. Then we simulate the ground motion caused in Rome by the 1915 Fucino (Italy) earthquake, and we compare our modelling with the damage distribution observed in the town. Finally we deal with the interpretation of ground motion recorded in Mexico City, as a consequence of earthquakes in the Mexican subduction zone. The synthetic signals explain the major characteristics (relative amplitudes, spectral amplification, frequency content) of the considered seismograms, and the space distribution of the available macroseismic data. For the sedimentary basin in the Friuli area, parametric studies demonstrate the relevant sensitivity of the computed ground motion to small changes in the subsurface topography of the sedimentary basin, and in the velocity and quality factor of the sediments. The total energy of ground motion, determined from our numerical simulation in Rome, is in very good agreement with the distribution of damage observed during the Fucino earthquake. For epicentral distances in the range 50 km-100 km, the source location, and not only the local soil conditions, controls the local effects. For Mexico City, the observed ground motion can be explained as resonance effects and as excitation of local surface waves, and the theoretical and the observed maximum spectral amplifications are very similar. In general, our numerical simulations permit the estimation of the maximum and average spectral amplification for specific sites, i.e. they are a very powerful tool for accurate micro-zonation. (author). 38 refs, 19 figs, 1 tab
Indian Ocean experiments with a coupled model
Energy Technology Data Exchange (ETDEWEB)
Wainer, I. [Sao Paulo, Univ. (Brazil). Dept. of Oceanography
1997-03-01
A coupled ocean-atmosphere model is used to investigate the equatorial Indian Ocean response to the seasonally varying monsoon winds. Special attention is given to the oceanic response to the spatial distribution and changes in direction of the zonal winds. The Indian Ocean is surrounded by an Asian land mass to the north and an African land mass to the west. The model extends latitudinally between 41 N and 41 S. The asymmetric atmospheric model is driven by a mass source/sink term that is proportional to the sea surface temperature (SST) over the oceans and the heat balance over the land. The ocean is modeled using the Anderson and McCreary reduced-gravity transport model, which includes a prognostic equation for the SST. The coupled system is driven by the annual cycle as manifested by zonally symmetric and asymmetric land and ocean heating. The authors explored the different nature of the equatorial ocean response to various patterns of zonal wind stress forcing in order to isolate the impact of the remote response on the Somali current. The major conclusions are: (i) the equatorial response is fundamentally different for easterlies and westerlies, (ii) the impact of the remote forcing on the Somali current is a function of the annual cycle, (iii) the size of the basin sets the phase of the interference of the remote forcing on the Somali current relative to the local forcing.
Unger, Bertram J; Kraut, Jay; Rhodes, Charlotte; Hochman, Jordan
2014-01-01
Physical models of complex bony structures can be used for surgical skills training. Current models focus on surface rendering but suffer from a lack of internal accuracy due to limitations in the manufacturing process. We describe a technique for generating internally accurate rapid-prototyped anatomical models with solid and hollow structures from clinical and microCT data using a 3D printer. In a face validation experiment, otolaryngology residents drilled a cadaveric bone and its corresponding printed model. The printed bone models were deemed highly realistic representations across all measured parameters and the educational value of the models was strongly appreciated.
Modeling and experiments of biomass combustion in a large-scale grate boiler
DEFF Research Database (Denmark)
Yin, Chungen; Rosendahl, Lasse; Kær, Søren Knudsen
2007-01-01
is inherently more difficult due to the complexity of the solid biomass fuel bed on the grate, the turbulent reacting flow in the combustion chamber and the intensive interaction between them. This paper presents the CFD validation efforts for a modern large-scale biomass-fired grate boiler. Modeling...... and experiments are both done for the grate boiler. The comparison between them shows an overall acceptable agreement in tendency. However at some measuring ports, big discrepancies between the modeling and the experiments are observed, mainly because the modeling-based boundary conditions (BCs) could differ...
Model Experiments for the Determination of Airflow in Large Spaces
DEFF Research Database (Denmark)
Nielsen, Peter V.
Model experiments are one of the methods used for the determination of airflow in large spaces. This paper will discuss the formation of the governing dimensionless numbers. It is shown that experiments with a reduced scale often will necessitate a fully developed turbulence level of the flow....... Details of the flow from supply openings are very important for the determination of room air distribution. It is in some cases possible to make a simplified supply opening for the model experiment....
Silicon Carbide Derived Carbons: Experiments and Modeling
Energy Technology Data Exchange (ETDEWEB)
Kertesz, Miklos [Georgetown University, Washington DC 20057
2011-02-28
The main results of the computational modeling were: 1. Development of a new genealogical algorithm to generate vacancy clusters in diamond starting from monovacancies, combined with energy criteria based on TBDFT energetics. The method revealed that for smaller vacancy clusters the energetically optimal shapes are compact, but for larger sizes they tend to show graphitized regions. In fact, clusters as small as 12 vacancies already show signatures of this graphitization. The modeling gives a firm basis for the slit-pore modeling of porous carbon materials and explains some of their properties. 2. We discovered small vacancy clusters and their physical characteristics that can be used to spectroscopically identify them. 3. We found low-barrier pathways for vacancy migration in diamond-like materials by obtaining for the first time optimized reaction pathways.
Deformation of wrought uranium: Experiments and modeling
Energy Technology Data Exchange (ETDEWEB)
McCabe, R.J., E-mail: rmccabe@lanl.gov [Materials Science and Technology Division, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Capolungo, L. [Materials Science and Technology Division, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)] [UMI 2958 Georgia Tech - CNRS, 57070 Metz (France); Marshall, P.E.; Cady, C.M.; Tome, C.N. [Materials Science and Technology Division, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)
2010-09-15
The room temperature deformation behavior of wrought polycrystalline uranium is studied using a combination of experimental techniques and polycrystal modeling. Electron backscatter diffraction is used to analyze the primary deformation twinning modes for wrought alpha-uranium. The {130}<310> twinning mode is found to be the most prominent twinning mode, with minor contributions from the {172}<312> and {112}<372> twin modes. Because of the large number of deformation modes, each with limited deformation systems, a polycrystalline model is employed to identify and quantify the activity of each mode. Model predictions of the deformation behavior and texture development agree reasonably well with experimental measures and provide reliable information about deformation systems.
Lewis, Brian A
2010-01-15
The regulation of transcription and of many other cellular processes involves large multi-subunit protein complexes. In the context of transcription, it is known that these complexes serve as regulatory platforms that connect activator DNA-binding proteins to a target promoter. However, there is still a lack of understanding regarding the function of these complexes. Why do multi-subunit complexes exist? What is the molecular basis of the function of their constituent subunits, and how are these subunits organized within a complex? What is the reason for physical connections between certain subunits and not others? In this article, I address these issues through a model of network allostery and its application to the eukaryotic RNA polymerase II Mediator transcription complex. The multiple allosteric networks model (MANM) suggests that protein complexes such as Mediator exist not only as physical but also as functional networks of interconnected proteins through which information is transferred from subunit to subunit by the propagation of an allosteric state known as conformational spread. Additionally, there are multiple distinct sub-networks within the Mediator complex that can be defined by their connections to different subunits; these sub-networks have discrete functions that are activated when specific subunits interact with other activator proteins.
Modeling of modification experiments involving neutral-gas release
International Nuclear Information System (INIS)
Bernhardt, P.A.
1983-01-01
Many experiments involve the injection of neutral gases into the upper atmosphere. Examples are critical velocity experiments, MHD wave generation, ionospheric hole production, plasma striation formation, and ion tracing. Many of these experiments are discussed in other sessions of the Active Experiments Conference. This paper limits its discussion to: (1) the modeling of the neutral gas dynamics after injection, (2) subsequent formation of ionosphere holes, and (3) use of such holes as experimental tools
Directory of Open Access Journals (Sweden)
Daniel D Wiegmann
2016-03-01
Full Text Available Navigation is an ideal behavioral model for the study of sensory system integration and the neural substrates associated with complex behavior. For this broader purpose, however, it may be profitable to develop new model systems that are both tractable and sufficiently complex to ensure that information derived from a single sensory modality and path integration are inadequate to locate a goal. Here, we discuss some recent discoveries related to navigation by amblypygids, nocturnal arachnids that inhabit the tropics and sub-tropics. Nocturnal displacement experiments under the cover of a tropical rainforest reveal that these animals possess navigational abilities that are reminiscent, albeit on a smaller spatial scale, of true-navigating vertebrates. Specialized legs, called antenniform legs, which possess hundreds of olfactory and tactile sensory hairs, and vision appear to be involved. These animals also have enormous mushroom bodies, higher-order brain regions that, in insects, integrate contextual cues and may be involved in spatial memory. In amblypygids, the complexity of a nocturnal rainforest may impose navigational challenges that favor the integration of information derived from multimodal cues. Moreover, the movement of these animals is easily studied in the laboratory and putative neural integration sites of sensory information can be manipulated. Thus, amblypygids could serve as a model system for the discovery of neural substrates associated with a unique and potentially sophisticated navigational capability. The diversity of habitats in which amblypygids are found also offers an opportunity for comparative studies of sensory integration and ecological selection pressures on navigation mechanisms.
Model of an Evaporating Drop Experiment
Rodriguez, Nicolas
2017-11-01
A computational model of an experimental procedure to measure vapor distributions surrounding sessile drops is developed to evaluate the uncertainty in the experimental results. Methanol, which is expected to have predominantly diffusive vapor transport, is chosen as a validation test for our model. The experimental process first uses a Fourier transform infrared spectrometer to measure the absorbance along lines passing through the vapor cloud. Since the measurement contains some errors, our model allows adding random noises to the computational integrated absorbance to mimic this. Then the resulting data are interpolated before passing through a computed tomography routine to generate the vapor distribution. Next, the gradients of the vapor distribution are computed along a given control volume surrounding the drop so that the diffusive flux can be evaluated as the net rate of diffusion out of the control volume. Our model of methanol evaporation shows that the accumulated errors of the whole experimental procedure affect the diffusive fluxes at different control volumes and are sensitive to how the noisy data of integrated absorbance are interpolated. This indicates the importance of investigating a variety of data fitting methods to choose which is best to present the data. Trinity University Mach Fellowship.
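The control-volume step described above (take gradients of the reconstructed vapor field, integrate the diffusive flux over the faces of a box around the drop) can be sketched numerically. The concentration field below is an invented logarithmic test field, the 2D analogue of a steady diffusion solution, for which the exact net outflux per unit depth is 2*pi*D regardless of control-volume size; the grid size, "drop radius" and diffusivity value are illustrative assumptions, not values from the experiment.

```python
import numpy as np

# Synthetic vapor concentration field c(x, y) on a uniform grid, standing in
# for the tomographically reconstructed field.
n, L = 201, 0.02                       # grid points per axis, domain half-width [m]
x = np.linspace(-L, L, n)
X, Y = np.meshgrid(x, x, indexing="ij")
r = np.hypot(X, Y)
a = 0.002                              # "drop radius" [m], invented
c = np.log((2 * L) / np.maximum(r, a)) # 2D steady-diffusion-like field, c ~ ln(1/r)

D = 1.5e-5                             # approx. diffusivity of methanol vapor in air [m^2/s]
dx = x[1] - x[0]
dcdx, dcdy = np.gradient(c, dx, dx)    # axis 0 is x, axis 1 is y

def net_outflux(half_width):
    """Net diffusive rate out of a square control volume (per unit depth),
    summing the Fickian flux -D * grad(c) . n over the four faces."""
    i0 = int(np.searchsorted(x, -half_width))
    i1 = int(np.searchsorted(x, half_width))
    s = slice(i0, i1 + 1)
    left   = np.sum( D * dcdx[i0, s]) * dx   # outward normal -x
    right  = np.sum(-D * dcdx[i1, s]) * dx   # outward normal +x
    bottom = np.sum( D * dcdy[s, i0]) * dx   # outward normal -y
    top    = np.sum(-D * dcdy[s, i1]) * dx   # outward normal +y
    return left + right + bottom + top

# For purely diffusive transport the net outflux should be (nearly)
# independent of the control-volume size, as the abstract exploits.
f_small, f_large = net_outflux(0.005), net_outflux(0.015)
```

Checking that the outflux is the same for different control volumes, as done here with `f_small` and `f_large`, is exactly the consistency test that makes the control-volume approach sensitive to interpolation noise in the measured data.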
Evaporation experiments and modelling for glass melts
Limpt, J.A.C. van; Beerkens, R.G.C.
2007-01-01
A laboratory test facility has been developed to measure evaporation rates of different volatile components from commercial and model glass compositions. In the set-up the furnace atmosphere, temperature level, gas velocity and batch composition are controlled. Evaporation rates have been measured
International Nuclear Information System (INIS)
Hammond, Glenn E.; Cygan, Randall Timothy
2007-01-01
Within reactive geochemical transport, several conceptual models exist for simulating sorption processes in the subsurface. Historically, the K_D approach has been the method of choice due to its ease of implementation within a reactive transport model and straightforward comparison with experimental data. However, for modeling complex sorption phenomena (e.g. sorption of radionuclides onto mineral surfaces), this approach does not systematically account for variations in location, time, or chemical conditions, and more sophisticated methods such as a surface complexation model (SCM) must be utilized. It is critical to determine which conceptual model to use; that is, when the material variation becomes important to regulatory decisions. The geochemical transport tool GEOQUIMICO has been developed to assist in this decision-making process. GEOQUIMICO provides a user-friendly framework for comparing the accuracy and performance of sorption conceptual models. The model currently supports the K_D and SCM conceptual models. The code is written in the object-oriented Java programming language to facilitate model development and improve code portability. The basic theory underlying geochemical transport and the sorption conceptual models noted above is presented in this report. Explanations are provided of how these physicochemical processes are implemented in GEOQUIMICO, and a brief verification study comparing GEOQUIMICO results to data found in the literature is given
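The appeal of the K_D approach mentioned above is that a single distribution coefficient turns directly into a retardation factor R = 1 + (rho_b / theta) * K_D that scales the solute velocity relative to the groundwater. A minimal sketch follows; all parameter values are illustrative, not taken from the report.

```python
# Linear-sorption (K_D) retardation of a dissolved species relative to
# groundwater flow. Units: rho_b [kg/L], theta [-], k_d [L/kg].
def retardation_factor(rho_b, theta, k_d):
    return 1.0 + (rho_b / theta) * k_d

rho_b = 1.6      # bulk density [kg/L], illustrative
theta = 0.3      # porosity, illustrative
k_d   = 5.0      # distribution coefficient [L/kg], moderately sorbing species

R = retardation_factor(rho_b, theta, k_d)

v_water  = 10.0          # groundwater velocity [m/yr], invented
v_solute = v_water / R   # retarded solute transport velocity
```

An SCM replaces the constant `k_d` with a surface reaction network whose effective partitioning varies with pH, ionic strength and mineral surface area, which is precisely the variability the abstract says the K_D approach cannot capture.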
Micro- and nanoflows modeling and experiments
Rudyak, Valery Ya; Maslov, Anatoly A; Minakov, Andrey V; Mironov, Sergey G
2018-01-01
This book describes physical, mathematical and experimental methods to model flows in micro- and nanofluidic devices. It considers flows in channels with characteristic sizes from several hundred micrometers down to several nanometers. Methods based on solving kinetic equations, coupled kinetic-hydrodynamic descriptions, and the molecular dynamics method are used. Based on detailed measurements of pressure distributions along straight and bent microchannels, the hydraulic resistance coefficients are refined. Flows of disperse fluids (including disperse nanofluids) are considered in detail. Results of hydrodynamic modeling of the simplest micromixers are reported. Mixing of fluids in Y-type and T-type micromixers is considered. The authors present a systematic study of jet flows, jet structure and laminar-turbulent transition. The influence of sound on the microjet structure is considered. New phenomena associated with turbulization and relaminarization of the mixing layer of microjets are di...
Previous Experience a Model of Practice UNAE
Ormary Barberi Ruiz; María Dolores Pesántez Palacios
2017-01-01
The statements presented in this article represent a preliminary version of the proposed model of pre-professional practices (PPP) of the National University of Education (UNAE) of Ecuador; an urgent institutional necessity is revealed in the descriptive analyses conducted from technical-administrative support (reports, interviews, testimonials) and the pedagogical foundations of UNAE (curricular directionality, transverse axes in practice, career plan, approach and diagnostic examination as subj...
Pyroelectric Energy Harvesting: Model and Experiments
2016-05-01
...consisting of a current source for the pyroelectric current, a dielectric capacitor for the adiabatic charging and discharging, and optionally a resistor to... polarization in a piezoelectric material. To extract work from the pyroelectric effect, the material acts as the dielectric in a capacitor that is... amplifier was chosen for the setup. The pyroelectric element is commonly modeled as a dielectric capacitor and a current source in parallel, as seen in
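The lumped circuit named in the fragments above (a current source in parallel with the element capacitance, with the pyroelectric current i_p = p * A * dT/dt) can be sketched as a simple time-stepped simulation. All parameter values below are illustrative assumptions, not values from the report.

```python
import math

# Lumped pyroelectric harvester: current source i_p(t) = p * A * dT/dt in
# parallel with the element capacitance C_e, driving a load resistor R_L.
p   = 3.8e-4   # pyroelectric coefficient [C/(m^2 K)], PZT-like, illustrative
A   = 1e-4     # electrode area [m^2], illustrative
C_e = 2.0e-9   # element capacitance [F], illustrative
R_L = 1.0e6    # load resistance [ohm], illustrative

# Sinusoidal temperature oscillation T(t) = T0 + dT * sin(2 pi f t)
dT, f = 5.0, 1.0   # amplitude [K] and frequency [Hz], invented

def simulate(t_end=10.0, steps=100_000):
    """Explicit-Euler integration of C_e * dV/dt = i_p(t) - V / R_L."""
    dt = t_end / steps
    v, energy = 0.0, 0.0
    for k in range(steps):
        t = k * dt
        i_p = p * A * dT * 2 * math.pi * f * math.cos(2 * math.pi * f * t)
        v += dt * (i_p - v / R_L) / C_e
        energy += dt * v * v / R_L   # energy dissipated in the load
    return v, energy

v_final, harvested = simulate()
```

With these numbers the circuit is nearly resistive at 1 Hz (omega * R_L * C_e << 1), so the load voltage tracks i_p * R_L, on the order of a volt, and the harvested energy over 10 s is in the microjoule range.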
Multi-Sensor As-Built Models of Complex Industrial Architectures
Directory of Open Access Journals (Sweden)
Jean-François Hullo
2015-12-01
Full Text Available In the context of increased maintenance operations and generational renewal work, a nuclear owner and operator like Electricité de France (EDF) is invested in the scaling-up of tools and methods of "as-built virtual reality" for whole buildings and large audiences. In this paper, we first present the state of the art of scanning tools and methods used to represent a very complex architecture. Then, we propose a methodology and assess it in a large experiment carried out on the most complex building of a 1300-megawatt power plant, an 11-floor reactor building. We also present several developments that made possible the acquisition, processing and georeferencing of multiple data sources (1000+ 3D laser scans and RGB panoramas, total-station surveying, 2D floor plans) and the 3D reconstruction of CAD as-built models. In addition, we introduce new concepts for user interaction with complex architecture, elaborated during the development of an application that allows a painless exploration of the whole dataset by professionals unfamiliar with such data types. Finally, we discuss the main feedback items from this large experiment, the remaining issues for the generalization of such large-scale surveys and the future technical and scientific challenges in the field of industrial "virtual reality".
Beyond the Standard Model Higgs boson searches using the ATLAS Experiment
Tsukerman, Ilya; The ATLAS collaboration
2014-01-01
The discovery of a Higgs boson with a mass of about 125 GeV has prompted the question of whether or not this particle is part of a larger and more complex Higgs sector than that envisioned in the Standard Model. In this talk, the current results from the ATLAS experiment on Beyond the Standard Model (BSM) Higgs boson searches are outlined. The results are interpreted in well-motivated BSM Higgs frameworks.
Monitoring complex detectors: the uSOP approach in the Belle II experiment
International Nuclear Information System (INIS)
Capua, F. Di; Aloisio, A.; Giordano, R.; Ameli, F.; Anastasio, A.; Izzo, V.; Tortone, G.; Branchini, P.
2017-01-01
uSOP is a general-purpose single-board computer designed for deep embedded applications in the control and monitoring of detectors, sensors and complex laboratory equipment. It is based on the AM3358 (1 GHz ARM Cortex-A8 processor), equipped with USB and Ethernet interfaces. On-board RAM and solid-state storage allow hosting a full Linux distribution. In this paper we discuss the main aspects of the hardware and software design and the expandable peripheral architecture built around field busses. We report on several applications of the uSOP system in the Belle II experiment, presently under construction at KEK (Tsukuba, Japan). In particular, we report on the deployment of uSOP in the monitoring system framework of the endcap electromagnetic calorimeter.
Monitoring complex detectors: the uSOP approach in the Belle II experiment
Di Capua, F.; Aloisio, A.; Ameli, F.; Anastasio, A.; Branchini, P.; Giordano, R.; Izzo, V.; Tortone, G.
2017-08-01
uSOP is a general-purpose single-board computer designed for deep embedded applications in the control and monitoring of detectors, sensors and complex laboratory equipment. It is based on the AM3358 (1 GHz ARM Cortex-A8 processor), equipped with USB and Ethernet interfaces. On-board RAM and solid-state storage allow hosting a full Linux distribution. In this paper we discuss the main aspects of the hardware and software design and the expandable peripheral architecture built around field busses. We report on several applications of the uSOP system in the Belle II experiment, presently under construction at KEK (Tsukuba, Japan). In particular, we report on the deployment of uSOP in the monitoring system framework of the endcap electromagnetic calorimeter.
Kondo, Yoshiko; Takeda, Shigenobu; Nishioka, Jun; Obata, Hajime; Furuya, Ken; Johnson, William Keith; Wong, C. S.
2008-06-01
Complexation of iron(III) with natural organic ligands was investigated during a mesoscale iron-enrichment experiment in the western subarctic North Pacific (SEEDS II). After the iron infusions, ligand concentrations increased rapidly, with subsequent decreases. While the increases in ligands might have been partly influenced by the formation of amorphous iron colloids (12-29%), most of the increases were attributable to in-situ production. Dilution of the fertilized patch may have contributed to the rapid decreases of the ligands. During the bloom decline, ligand concentration increased again, and the high concentrations persisted for 10 days. The conditional stability constant did not differ between the inside and outside of the fertilized patch. These results suggest that the chemical speciation of the released iron was strongly affected by the formation of the ligands; the production of ligands observed during the bloom decline will strongly impact the iron cycle and iron bioavailability in the surface water.
Experiment and computation: a combined approach to study the van der Waals complexes
Directory of Open Access Journals (Sweden)
Surin L.A.
2017-01-01
A review of recent results on the millimetre-wave spectroscopy of weakly bound van der Waals complexes, mostly those which contain H2 and He, is presented. In our work, we compared the experimental spectra to theoretical bound-state results, thus providing a critical test of the quality of the M–H2 and M–He potential energy surfaces (PESs), which are a key issue for reliable computations of the collisional excitation and de-excitation of molecules (M = CO, NH3, H2O) in the dense interstellar medium. The intermolecular interactions with He and H2 also play an important role in high-resolution spectroscopy of helium or para-hydrogen clusters doped by a probe molecule (CO, HCN). Such experiments are directed at detecting the superfluid response of molecular rotation in He and p-H2 clusters.
DEFF Research Database (Denmark)
Harremoës, P.; Madsen, H.
1999-01-01
Where is the balance between simplicity and complexity in model prediction of urban drainage structures? The calibration/verification approach to testing of model performance gives an exaggerated sense of certainty. Frequently, the model structure and the parameters are not identifiable by calibration/verification on the basis of the data series available, which generates elements of sheer guessing - unless the universality of the model is based on induction, i.e. experience from the sum of all previous investigations. There is a need to deal more explicitly with uncertainty...
The fence experiment - a first evaluation of shelter models
DEFF Research Database (Denmark)
Peña, Alfredo; Bechmann, Andreas; Conti, Davide
2016-01-01
We present a preliminary evaluation of shelter models of different degrees of complexity using full-scale lidar measurements of the shelter on a vertical plane behind and orthogonal to a fence. Model results accounting for the distribution of the relative wind direction within the observed direct...
Complex negotiations: the lived experience of enacting agency after a stroke.
Bergström, Aileen L; Eriksson, Gunilla; Asaba, Eric; Erikson, Anette; Tham, Kerstin
2015-01-01
This qualitative, longitudinal, descriptive study aimed to understand the lived experience of enacting agency, and to describe the phenomenon of agency and the meaning structure of the phenomenon during the year after a stroke. Agency is defined as making things happen in everyday life through one's actions. This study followed six persons (three men and three women, ages 63 to 89), interviewed on four separate occasions. Interview data were analysed using the Empirical Phenomenological Psychological method. The main findings showed that the participants experienced enacting agency in their everyday lives after stroke as negotiating different characteristics over a span of time, a range of difficulty, and in a number of activities, making these negotiations complex. The four characteristics described how the participants made things happen in their everyday lives through managing their disrupted bodies, taking into account their past and envisioning their futures, dealing with the world outside themselves, and negotiating through internal dialogues. This empirical evidence regarding negotiations challenges traditional definitions of agency and a new definition of agency is proposed. Understanding clients' complex negotiations and offering innovative solutions to train in real-life situations may help in the process of enabling occupations after a stroke.
Use of penile skin flap in complex anterior urethral stricture repair: our experience
International Nuclear Information System (INIS)
Nadeem, A.; Asghar, M.; Kiani, F.; Alvi, M.S.
2017-01-01
Objective: To present our experience of treating complex anterior urethral strictures using a penile skin flap. Study Design: Descriptive case series. Place and Duration of Study: Department of Urology, Combined Military Hospital Malir Cantonment, Karachi, and Armed Forces Institute of Urology, Rawalpindi, from Jan 2012 to Feb 2014. Material and Methods: A total of 18 patients with complex anterior urethral strictures and combined anterior and bulbar urethral strictures were included. Patients underwent repair using an Orandi or circular fascio-cutaneous penile skin flap depending upon the size and site of the stricture. The first dressing was changed after two days, and an indwelling silicone two-way Foley catheter was kept in place for three weeks. The flap was assessed with regard to local infection, fistula formation and re-stricturing. Re-stricture was assessed by performing uroflowmetry at 6 months and 1 year. Ascending urethrogram was reserved for cases with a Qmax of less than 10 ml/sec on uroflowmetry. Repair failure was considered when there was a need for any subsequent urethral procedure such as urethral dilatation, direct visual internal urethrotomy, or urethroplasty. Results: The overall success rate was 83.3 percent. Of all the patients operated on, 1 (5.6 percent) had infection with loss of the flap, 3 (16.7 percent) had urethral fistula, and none had re-stricture confirmed by uroflowmetry. Conclusion: In our study, the excellent results of the penile skin flap in both anterior urethral strictures and combined anterior and bulbar urethral strictures are quite encouraging. It is easy to harvest and seems anatomically more logical. (author)
Previous Experience as a Model of Practice at UNAE
Directory of Open Access Journals (Sweden)
Ormary Barberi Ruiz
2017-02-01
The statements presented in this article represent a preliminary version of the proposed model of pre-professional practices (PPP) of the National University of Education (UNAE) of Ecuador. An urgent institutional need was revealed by the descriptive analyses conducted on technical-administrative support material (reports, interviews, testimonials), on the pedagogical foundations of UNAE (curricular direction, transverse axes in practice, career plan, approach and diagnostic examination) as the subject matter of the pre-professional practice, and on the demands of the socio-educational contexts in which the practices have been emerging, in order to redimension them. Relating these elements made it possible to conceive a model of the pre-professional practice processes for developing the professional skills of future teachers through four components: contextual-projective, implementation (tutoring), accompaniment (teaching pairs) and monitoring (meetings at the beginning, during and at the end of the practice). The initial training of teachers is inherent to teaching (academic and professional training), research and links with the community; these are fundamental pillars of Ecuadorian higher education.
The OECI model: the CRO Aviano experience.
Da Pieve, Lucia; Collazzo, Raffaele; Masutti, Monica; De Paoli, Paolo
2015-01-01
In 2012, the "Centro di Riferimento Oncologico" (CRO) National Cancer Institute joined the accreditation program of the Organisation of European Cancer Institutes (OECI) and was one of the first institutes in Italy to receive recognition as a Comprehensive Cancer Center. At the end of the project, a strengths, weaknesses, opportunities, and threats (SWOT) analysis aimed at identifying the pros and cons, both for the institute and for the accreditation model in general, was performed. The analysis shows significant strengths, such as the affinity with other improvement systems and current regulations, and the focus on a multidisciplinary approach. The proposed suggestions for improvement concern mainly the structure of the standards and aim to facilitate the assessment, benchmarking, and sharing of best practices. The OECI accreditation model provided a valuable executive tool and a framework in which we can identify several important development projects. An additional impact for our institute is the participation in the project BenchCan, of which the OECI is lead partner.
Simulation and Analysis of Complex Biological Processes: an Organisation Modelling Perspective
Bosse, T.; Jonker, C.M.; Treur, J.
2005-01-01
This paper explores how the dynamics of complex biological processes can be modelled and simulated as an organisation of multiple agents. This modelling perspective identifies organisational structure occurring in complex decentralised processes and handles the complexity of the analysis of the dynamics.
Divertor plasma studies on DIII-D: Experiment and modeling
International Nuclear Information System (INIS)
West, W.P.; Brooks, N.H.; Allen, S.L.
1996-09-01
In a magnetically diverted tokamak, the scrape-off layer (SOL) and divertor plasma provide separation between the first wall and the core plasma, intercepting impurities generated at the wall before they reach the core plasma. The divertor plasma can also serve to spread the heat and particle flux over a large area of the divertor structure using impurity radiation and neutral charge exchange, thus reducing peak heat and particle fluxes at the divertor strike plate. Such a reduction will be required in the next generation of tokamaks, for without it the divertor engineering requirements are very demanding. To successfully demonstrate a radiative divertor, a highly radiative condition with significant volume recombination must be achieved in the divertor, while maintaining a low impurity content in the core plasma. Divertor plasma properties are determined by a complex interaction of classical parallel transport, anomalous perpendicular transport, impurity transport and radiation, and plasma-wall interaction. In this paper the authors describe a set of experiments on DIII-D designed to provide detailed two-dimensional documentation of the divertor and SOL plasma. Measurements have been made in operating modes where the plasma is attached to the divertor strike plate and in highly radiating cases where the plasma is detached from the divertor strike plate. They also discuss the results of experiments designed to influence the distribution of impurities in the plasma using enhanced SOL plasma flow. Extensive modeling efforts are described, which successfully reproduce attached plasma conditions and help to elucidate the important plasma and atomic physics involved in the detachment process.
Hydrodynamics of Explosion Experiments and Models
Kedrinskii, Valery K
2005-01-01
Hydrodynamics of Explosions presents the research results for the problems of underwater explosions and contains a detailed analysis of the structure and the parameters of the wave fields generated by explosions of cord and spiral charges, a description of the formation mechanisms for a wide range of cumulative flows at underwater explosions near the free surface, and the relevant mathematical models. Shock-wave transformation in bubbly liquids, shock-wave amplification due to collision and focusing, and the formation of bubble detonation waves in reactive bubbly liquids are studied in detail. Particular emphasis is placed on the investigation of wave processes in cavitating liquids, which incorporates the concepts of the strength of real liquids containing natural microinhomogeneities, the relaxation of tensile stress, and the cavitation fracture of a liquid as the inversion of its two-phase state under impulsive (explosive) loading. The problems are classed among essentially nonlinear processes that occur unde...
Causal Inference and Model Selection in Complex Settings
Zhao, Shandong
Propensity score methods have become a part of the standard toolkit for applied researchers who wish to ascertain causal effects from observational data. While they were originally developed for binary treatments, several researchers have proposed generalizations of the propensity score methodology for non-binary treatment regimes. In this article, we first review three main methods that generalize propensity scores in this direction, namely, inverse propensity weighting (IPW), the propensity function (P-FUNCTION), and the generalized propensity score (GPS), along with recent extensions of the GPS that aim to improve its robustness. We compare the assumptions, theoretical properties, and empirical performance of these methods. We propose three new methods that provide robust causal estimation based on the P-FUNCTION and GPS. While our proposed P-FUNCTION-based estimator performs well, we generally advise caution in that all available methods can be biased by model misspecification and extrapolation. In a related line of research, we consider adjustment for posttreatment covariates in causal inference. Even in a randomized experiment, observations might have different compliance performance under treatment and control assignment. This posttreatment covariate cannot be adjusted for using standard statistical methods. We review the principal stratification framework, which allows for modeling this effect as part of its Bayesian hierarchical models. We generalize the current model to add the possibility of adjusting for pretreatment covariates. We also propose a new estimator of the average treatment effect over the entire population. In a third line of research, we discuss the spectral line detection problem in high-energy astrophysics. We carefully review how this problem can be statistically formulated as a precise hypothesis test with a point null hypothesis, why a usual likelihood ratio test does not apply for a problem of this nature, and a doable fix to correctly
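The binary-treatment IPW idea reviewed above can be sketched in a few lines. This is a minimal Hajek-style (self-normalized) estimator on synthetic data; the function name and the toy data-generating process are illustrative, not taken from the dissertation.

```python
import random

def ipw_ate(y, t, e):
    """Hajek-style IPW estimate of the average treatment effect.
    y: outcomes, t: binary treatment indicators, e: propensity scores P(T=1|X)."""
    w1 = [ti / ei for ti, ei in zip(t, e)]          # weights for treated units
    w0 = [(1 - ti) / (1 - ei) for ti, ei in zip(t, e)]  # weights for controls
    mu1 = sum(wi * yi for wi, yi in zip(w1, y)) / sum(w1)
    mu0 = sum(wi * yi for wi, yi in zip(w0, y)) / sum(w0)
    return mu1 - mu0

# Toy check: treatment adds exactly 2.0 to the outcome, propensity known.
random.seed(0)
t = [int(random.random() < 0.4) for _ in range(5000)]
y = [1.0 + 2.0 * ti + random.gauss(0, 0.1) for ti in t]
e = [0.4] * len(t)
print(ipw_ate(y, t, e))  # close to the true effect of 2.0
```

Self-normalizing by the weight sums (rather than dividing by n) keeps the estimate stable when a few propensity scores are near 0 or 1, which is the extrapolation risk the abstract cautions about.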
The complexity of role balance: support for the Model of Juggling Occupations.
Evans, Kiah L; Millsteed, Jeannine; Richmond, Janet E; Falkmer, Marita; Falkmer, Torbjorn; Girdler, Sonya J
2014-09-01
This pilot study aimed to establish the appropriateness of the Model of Juggling Occupations in exploring the complex experience of role balance amongst working women with family responsibilities living in Perth, Australia. In meeting this aim, an evaluation was conducted of a case study design, where data were collected through a questionnaire, time diary, and interview. Overall role balance varied over time and across participants. Positive indicators of role balance occurred frequently in the questionnaires and time diaries, despite the interviews revealing a predominance of negative evaluations of role balance. Between-role balance was achieved through compatible role overlap, buffering, and renewal. An exploration of within-role balance factors demonstrated that occupational participation, values, interests, personal causation, and habits were related to role balance. This pilot study concluded that the Model of Juggling Occupations is an appropriate conceptual framework to explore the complex and dynamic experience of role balance amongst working women with family responsibilities. It was also confirmed that the case study design, including the questionnaire, time diary, and interview methods, is suitable for researching role balance from this perspective.
NHL and RCGA Based Multi-Relational Fuzzy Cognitive Map Modeling for Complex Systems
Directory of Open Access Journals (Sweden)
Zhen Peng
2015-11-01
In order to model complex systems with multiple dimensions and multiple granularities, this paper first proposes a multi-relational Fuzzy Cognitive Map (FCM) to simulate the multi-relational system, together with an automatic construction algorithm integrating Nonlinear Hebbian Learning (NHL) and a Real-Code Genetic Algorithm (RCGA). The multi-relational FCM is suited to modeling complex systems with multiple dimensions and granularities. The automatic construction algorithm can learn the multi-relational FCM from multi-relational data resources, eliminating human intervention. The Multi-Relational Data Mining (MRDM) algorithm integrates multi-instance-oriented NHL and the RCGA of the FCM. NHL is extended to mine the causal relationships between a coarse-granularity concept and its fine-granularity concepts, driven by multiple instances in the multi-relational system. The RCGA is used to establish a high-quality high-level FCM driven by data. The multi-relational FCM and the integrating algorithm have been applied to the complex system of Mutagenesis. The experiment demonstrates not only better classification accuracy but also the causal relationships among the concepts of the system.
Modeling complex flow structures and drag around a submerged plant of varied posture
Boothroyd, Richard J.; Hardy, Richard J.; Warburton, Jeff; Marjoribanks, Timothy I.
2017-04-01
Although vegetation is present in many rivers, the bulk of past work concerned with modeling the influence of vegetation on flow has considered vegetation to be morphologically simple and has generally neglected the complexity of natural plants. Here we report on a combined flume and numerical model experiment which incorporates time-averaged plant posture, collected through terrestrial laser scanning, into a computational fluid dynamics model to predict flow around a submerged riparian plant. For three depth-limited flow conditions (Reynolds number = 65,000-110,000), plant dynamics were recorded through high-definition video imagery, and the numerical model was validated against flow velocities collected with an acoustic Doppler velocimeter. The plant morphology shows an 18% reduction in plant height and a 14% increase in plant length, compressing and reducing the volumetric canopy morphology as the Reynolds number increases. Plant shear layer turbulence is dominated by Kelvin-Helmholtz type vortices generated through shear instability, the frequency of which is estimated to be between 0.20 and 0.30 Hz, increasing with Reynolds number. These results demonstrate the significant effect that the complex morphology of natural plants has on in-stream drag, and allow a physically determined, species-dependent drag coefficient to be calculated. Given the importance of vegetation in river corridor management, the approach developed here demonstrates the necessity to account for plant motion when calculating vegetative resistance.
The Complex Point Cloud for the Knowledge of the Architectural Heritage. Some Experiences
Aveta, C.; Salvatori, M.; Vitelli, G. P.
2017-05-01
The present paper aims to present a series of experiences and experimentations that a group of PhD researchers from the University of Naples Federico II conducted over the past decade. This work has concerned the survey and graphic restitution of monuments and works of art, aimed at their conservation. The targeted querying of complex point clouds acquired by 3D scanners, integrated with photo sensors and thermal imaging, has allowed new possibilities of investigation to be explored. In particular, we present the scientific results of the experiments carried out on some important historical artifacts with distinct morphological and typological characteristics. According to the aims and needs that emerged during the connotative process, with the support of archival and iconographic historical research, laser scanner technology has been used in many different ways. New forms of representation, obtained directly from the point cloud, have been tested for the elaboration of thematic studies documenting the pathologies and decay of materials, and for correlating visible aspects with invisible aspects of the artifact.
Rapid prototyping of a complex model for the manufacture of plaster molds for slip casting ceramic
Directory of Open Access Journals (Sweden)
D. P. C. Velazco
2014-12-01
Computer-assisted design (CAD) has been well known for several decades and employed in ceramic manufacturing almost since its beginning, but usually only in the first part of the project ideation process, not in the prototyping or manufacturing stages. Rapid prototyping machines, also known as 3D printers, can produce real pieces in a few hours using plastic materials of high resistance, with great precision and similarity with respect to the original, based on digital models produced either by modeling with specific design software or by digitizing existing parts using so-called 3D scanners. The main objective of this work is to develop the methodology used in the entire process of building a ceramic piece from the interrelationship between traditional techniques and new technologies for the manufacture of prototypes, and to take advantage of the benefits that this new reproduction technology allows. The experience was based on the generation of a complex piece, in digital format, which served as the model. A regular 15 cm icosahedron presented features complex enough to discourage producing the model by the traditional (manual or mechanical) techniques of ceramics. From this digital model, a plaster mold was made in the traditional way in order to slip cast clay-based slurries, which were freely dried in air, then fired and glazed in the traditional way. This experience confirmed the working hypothesis and opens up new lines of work at academic and technological levels that will be explored in the near future. This technology provides a wide range of options to address the formal aspect of a piece to be produced for the fields of design, architecture, industrial design, traditional pottery, ceramic art, etc., amplifying the formal possibilities and saving time, and therefore cost, when producing the necessary and appropriate matrixes.
Dynamics of Symmetric Conserved Mass Aggregation Model on Complex Networks
Institute of Scientific and Technical Information of China (English)
HUA Da-Yin
2009-01-01
We investigate the dynamical behaviour of the aggregation process in the symmetric conserved mass aggregation model under three different topological structures. The dispersion σ(t, L) = (Σ_i (m_i - ρ0)^2 / L)^(1/2) is defined to describe the dynamical behaviour, where ρ0 is the density of particles and m_i is the particle number on site i. It is found numerically that for a regular lattice and a scale-free network, σ(t, L) follows a power-law scaling σ(t, L) ~ t^δ1 and σ(t, L) ~ t^δ4, respectively, from a random initial condition to the stationary states. However, for a small-world network, there are two power-law scaling regimes: σ(t, L) ~ t^δ2 when t < T and σ(t, L) ~ t^δ3 when t > T. Moreover, it is found numerically that δ2 is close to δ1 for small rewiring probability q, and that δ3 hardly changes with varying q and is almost the same as δ4. We speculate that the aggregation of the connection degree accelerates the mass aggregation in the initial relaxation stage, and that the existence of long-distance interactions in the complex networks results in the acceleration of the mass aggregation when t > T for the small-world networks. We also show that the relaxation time T follows a power-law scaling T ~ L^z, and that σ(t, L) in the stationary state follows a power-law σ_s(L) ~ L^α for the three different structures.
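The dispersion defined above is straightforward to compute from a mass configuration. The sketch below pairs it with a deliberately simple conserved-mass toy dynamics (random single-unit hops on a ring, an illustrative stand-in for the model's actual chipping/diffusion rates) just to show the quantity being tracked:

```python
import random

def dispersion(masses, rho0):
    """sigma(t, L) = (sum_i (m_i - rho0)^2 / L)^(1/2) for site masses m_i."""
    L = len(masses)
    return (sum((m - rho0) ** 2 for m in masses) / L) ** 0.5

# Conserved-mass toy dynamics on a ring: move one unit of mass to a random
# neighbour each step; total mass (and hence rho0) stays fixed.
random.seed(1)
L, rho0 = 64, 4
m = [rho0] * L  # uniform start, so sigma = 0 initially
for step in range(5000):
    i = random.randrange(L)
    if m[i] > 0:
        j = (i + random.choice((-1, 1))) % L
        m[i] -= 1
        m[j] += 1
print(dispersion(m, rho0))  # grows from 0 as mass fluctuations build up
```

Measuring dispersion(m, rho0) at logarithmically spaced times and fitting log σ against log t is how the exponents δ1...δ4 quoted in the abstract would be extracted.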
Induced polarization of clay-sand mixtures: experiments and modeling
International Nuclear Information System (INIS)
Okay, G.; Leroy, P.; Tournassat, C.; Ghorbani, A.; Jougnot, D.; Cosenza, P.; Camerlynck, C.; Cabrera, J.; Florsch, N.; Revil, A.
2012-01-01
Document available in extended abstract form only. Frequency-domain induced polarization (IP) measurements consist of imposing an alternating sinusoidal electrical current (AC) at a given frequency and measuring the resulting electrical potential difference between two other non-polarizing electrodes. The magnitude of the conductivity and the phase lag between the current and the potential difference can be expressed as a complex conductivity, with the in-phase component representing electro-migration and the quadrature component representing the reversible storage of electrical charges (capacitive effect) in the porous material. Induced polarization has become an increasingly popular geophysical method for hydrogeological and environmental applications. These applications include, for instance, the characterization of clay materials used as permeability barriers in landfills or to contain various types of contaminants, including radioactive wastes. The goal of our study is to get a better understanding of the influence of clay content, clay mineralogy, and pore-water salinity upon complex conductivity measurements of saturated clay-sand mixtures in the frequency range ~1 mHz-12 kHz. The complex conductivity of saturated unconsolidated sand-clay mixtures was experimentally investigated using two types of clay minerals, kaolinite and smectite, in the frequency range 1.4 mHz-12 kHz. Four different types of samples were used: two containing mainly kaolinite (80% of the mass, with the remainder 15% smectite and 5% illite/muscovite; and 95% kaolinite with 5% illite/muscovite), and two containing mainly Na-smectite or Na-Ca-smectite (95% of the mass; bentonite). The experiments were performed with various clay contents (1, 5, 20, and 100% by volume of the sand-clay mixture) and salinities (distilled water, and 0.1 g/L, 1 g/L, and 10 g/L NaCl solutions). In total, 44 saturated clay or clay-sand mixtures were prepared. Induced polarization measurements
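The conversion described above, from a measured conductivity magnitude and phase lag to in-phase and quadrature components, is a single complex exponential. A minimal sketch with illustrative values (the 0.05 S/m magnitude and 10 mrad phase are invented, not from the study):

```python
import cmath
import math

def complex_conductivity(magnitude, phase_rad):
    """sigma* = |sigma| * exp(i*phi): the real part is the in-phase
    (electro-migration) conductivity, the imaginary part the quadrature
    (capacitive polarization) conductivity."""
    return magnitude * cmath.exp(1j * phase_rad)

# Illustrative IP reading: |sigma| = 0.05 S/m, phase lag of 10 mrad.
sigma = complex_conductivity(0.05, 0.010)
print(sigma.real, sigma.imag)  # quadrature part much smaller than in-phase
```

For the small phase lags typical of IP (tens of mrad), the quadrature part is approximately |sigma| * phi, which is why phase is often reported directly in mrad.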
Probabilistic Multi-Factor Interaction Model for Complex Material Behavior
Abumeri, Galib H.; Chamis, Christos C.
2010-01-01
Complex material behavior is represented by a single equation of product form to account for interaction among the various factors. The factors are selected by the physics of the problem and the environment that the model is to represent. For example, different factors will be required to represent temperature, moisture, erosion, corrosion, etc. It is important that the equation represent the physics of the behavior in its entirety accurately. The Multi-Factor Interaction Model (MFIM) is used to evaluate the divot weight (foam weight ejected) from the external launch tanks. The multi-factor equation has sufficient degrees of freedom to evaluate a large number of factors that may contribute to the divot ejection. It also accommodates all interactions by its product form. Each factor has an exponent that satisfies only two points - the initial and final points. The exponent describes a monotonic path from the initial condition to the final one. The exponent values are selected so that the described path makes sense in the absence of experimental data. In the present investigation, the data used were obtained by testing simulated specimens in launching conditions. Results show that the MFIM is an effective method of describing the divot weight ejected under the conditions investigated. The problem lies in how to represent the divot weight with a single equation. A unique solution to this problem is a multi-factor equation of product form. Each factor is of the form (1 - x_i/x_f)^e_i, where x_i is the initial value, usually at ambient conditions, x_f the final value, and e_i the exponent that makes the represented curve unimodal while meeting the initial and final values. The exponents are evaluated either from test data or by technical judgment. A minor disadvantage may be the selection of exponents in the absence of any empirical data. This form has been used successfully in describing the foam ejected in simulated space environmental conditions. Seven factors were required
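The product form described above can be written down directly. The sketch below evaluates a product of (1 - x/x_f)^e terms; the two factors, their final values, and the exponents are illustrative placeholders, not the seven-factor divot-weight calibration from the paper:

```python
def mfim(factors):
    """Multi-Factor Interaction Model: product of (1 - x/x_f)**e terms.
    factors: list of (x, x_f, e) with x the current value of a factor,
    x_f its final value, and e an exponent fitted to data or chosen by
    technical judgment."""
    result = 1.0
    for x, x_f, e in factors:
        result *= (1.0 - x / x_f) ** e
    return result

# Two illustrative factors: each term equals 1 at x = 0 and decreases
# monotonically toward 0 as x approaches its final value x_f.
print(mfim([(0.0, 1.0, 0.5), (0.0, 300.0, 2.0)]))  # 1.0
print(mfim([(0.5, 1.0, 0.5), (150.0, 300.0, 2.0)]))
```

Because the model is a pure product, any factor reaching its final value drives the whole response to zero, which is what lets one equation capture interactions among all the factors at once.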
International Nuclear Information System (INIS)
Ouyang, Min; Zhao, Lijing; Hong, Liu; Pan, Zhezhe
2014-01-01
Recently, numerous studies have applied complex-network-based models to study the performance and vulnerability of infrastructure systems under various types of attacks and hazards. But how effective these models are at capturing real performance response is still a question worthy of research. Taking the Chinese railway system as an example, this paper selects three typical complex-network-based models, including the purely topological model (PTM), the purely shortest path model (PSPM), and the weight (link length) based shortest path model (WBSPM), to analyze railway accessibility and flow-based vulnerability and compare their results with those from the real train flow model (RTFM). The results show that the WBSPM can produce train routes with 83% of stations and 77% of railway links identical to the real routes, and that it approaches the RTFM best for railway vulnerability under both single and multiple component failures. The correlation coefficient for accessibility vulnerability from the WBSPM and the RTFM under single station failures is 0.96, while it is 0.92 for flow-based vulnerability; under multiple station failures, where each station has the same failure probability fp, the WBSPM produces almost identical vulnerability results to those from the RTFM under almost all failure scenarios when fp is larger than 0.62 for accessibility vulnerability and 0.86 for flow-based vulnerability
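The weight-based shortest path model (WBSPM) compared above routes traffic along length-weighted shortest paths, and a station failure shows up as a longer (or lost) route. A minimal sketch with a stdlib Dijkstra on a toy network (the stations and link lengths are invented for illustration):

```python
import heapq

def shortest_path_length(graph, src, dst):
    """Dijkstra over a dict graph: node -> list of (neighbour, link_length)."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")  # dst unreachable

# Toy rail network: removing station C forces the longer A-D-B route,
# which is how vulnerability to a station failure shows up in the WBSPM.
rail = {"A": [("C", 100), ("D", 250)], "C": [("B", 120)], "D": [("B", 200)]}
print(shortest_path_length(rail, "A", "B"))  # 220.0 via C
no_C = {"A": [("D", 250)], "D": [("B", 200)]}
print(shortest_path_length(no_C, "A", "B"))  # 450.0 via D
```

Aggregating such before/after route lengths (or unreachable pairs) over many failure scenarios yields the accessibility and flow-based vulnerability measures the paper compares across models.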
An experiment on a ball-lightning model
International Nuclear Information System (INIS)
Ignatovich, F.V.; Ignatovich, V.K.
2010-01-01
We discuss total internal reflection (TIR) from an interface between glass and a gaseous gain medium and propose an experiment for strong light amplification related to the investigation of a ball-lightning model
Model and Computing Experiment for Research and Aerosols Usage Management
Directory of Open Access Journals (Sweden)
Daler K. Sharipov
2012-09-01
Full Text Available The article deals with a mathematical model for the research and management of aerosols released into the atmosphere, as well as a numerical algorithm implemented in hardware and software systems for conducting computing experiments.
Modeling and experiment to threshing unit of stripper combine ...
African Journals Online (AJOL)
Modeling and experiment to threshing unit of stripper combine. ... were conducted with the different feed rates and drum rotator speeds for the rice stripped mixtures. ... and damage as well as for threshing unit design and process optimization.
Reactive transport modeling of the ABM experiment with Comsol Multiphysics
International Nuclear Information System (INIS)
Pekala, Marek; Idiart, Andres; Arcos, David
2012-01-01
solution) in a stack of 30 bentonite blocks of 11 distinct initial compositions. In the model, ion diffusion is allowed between the individual bentonite blocks and between the bentonite blocks and a sand layer filling the bentonite-rock gap. The effective diffusion coefficient values for individual bentonite blocks were estimated based on the dry density of the bentonite, and the temperature-dependent evolution of the diffusion coefficients is approximated in the course of the simulation. In order to solve the problem, a set of non-linear algebraic equations (mass action law for the cation-exchange reactions, and charge and mass balance equations) have been coupled with Fickian diffusion equations. As mentioned above, the Finite Element code COMSOL Multiphysics has been used to carry out the simulations. Preliminary results for the studied problem indicate that the effect of diffusion for the studied cations and chloride is significant and has the potential to explain quantitatively the observed patterns of homogenisation in the chemical composition in the bentonite package. However, the work is currently in progress and further analyses, including a sensitivity study of variables such as diffusion coefficients and boundary conditions, are on-going. A model simulating coupled cation-exchange and diffusion of major ions in the Package 1 of the ABM field experiment has been developed. This work demonstrates the feasibility of implementing a reactive transport model directly into Comsol Multiphysics using conservation and mass action equations. Comsol offers an intuitive and at the same time powerful modelling environment for simulating coupled multiphase, multi-species reactive transport phenomena and mechanical effects in complex geometries. For this reason, Amphos 21 has been involved in work aiming to couple Comsol with other codes such as the geochemical code PHREEQC. 
Such code integration has the potential to provide tools uniquely suited to solving complicated reactive
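The Fickian-diffusion step at the core of such reactive transport models can be sketched with explicit finite differences over a stack of blocks. The block count, effective diffusion coefficient, and concentrations below are assumptions for illustration, not parameters of the ABM experiment or the Comsol model.

```python
# Minimal sketch of 1D Fickian diffusion across a stack of blocks with
# closed (zero-flux) ends, in the spirit of the inter-block ion diffusion
# described above. All parameter values are illustrative assumptions.

def diffuse(c, De, dx, dt, steps):
    """March dc/dt = De * d2c/dx2; the symmetric exchange between
    neighbouring cells conserves total mass exactly."""
    c = list(c)
    r = De * dt / dx ** 2
    assert r <= 0.5, "explicit scheme needs De*dt/dx^2 <= 0.5"
    for _ in range(steps):
        nxt = c[:]
        nxt[0] = c[0] + r * (c[1] - c[0])          # zero-flux left end
        nxt[-1] = c[-1] + r * (c[-2] - c[-1])      # zero-flux right end
        for i in range(1, len(c) - 1):
            nxt[i] = c[i] + r * (c[i - 1] - 2 * c[i] + c[i + 1])
        c = nxt
    return c

# 10 blocks, half initially at high solute concentration (arbitrary units).
c0 = [10.0] * 5 + [1.0] * 5
c = diffuse(c0, De=1e-10, dx=0.01, dt=1e5, steps=2000)
print(c)
```

A full reactive transport model couples this transport step with the mass-action and charge/mass-balance equations solved at each cell, which is the coupling the record above implements in Comsol Multiphysics.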
Beyond-the-Standard Model Higgs Physics using the ATLAS Experiment
Madsen, Alexander; The ATLAS collaboration
2015-01-01
The discovery of a Higgs boson with a mass of about 125 GeV has prompted the question of whether or not this particle is part of a larger and more complex Higgs sector than that envisioned in the Standard Model. In this talk, the latest results from the ATLAS experiment on Beyond-the-Standard Model (BSM) Higgs searches are outlined. Searches for additional Higgs bosons are presented and interpreted in well-motivated BSM Higgs frameworks, such as two-Higgs-doublet Models and the Minimal and Next to Minimal Supersymmetric Standard Model.
Beyond-the-Standard Model Higgs Physics using the ATLAS Experiment
Vanadia, Marco; The ATLAS collaboration
2015-01-01
The discovery of a Higgs boson with a mass of about 125 GeV has prompted the question of whether or not this particle is part of a larger and more complex Higgs sector than that envisioned in the Standard Model. In this talk, the latest Run 1 results from the ATLAS Experiment on Beyond-the-Standard Model (BSM) Higgs searches are outlined. Searches for additional Higgs bosons are presented and interpreted in well-motivated BSM Higgs frameworks, including the two-Higgs-doublet Models and the Minimal and Next to Minimal Supersymmetric Standard Model.
Beyond-the-Standard Model Higgs Physics using the ATLAS Experiment
Scutti, Federico; The ATLAS collaboration
2015-01-01
The discovery of a Higgs boson with a mass of about 125 GeV has prompted the question of whether or not this particle is part of a larger and more complex Higgs sector than that envisioned in the Standard Model. In this talk, the current results from the ATLAS experiment on Beyond-the-Standard Model (BSM) Higgs searches are summarized. Searches for additional Higgs bosons are presented and interpreted in well-motivated BSM Higgs frameworks, such as two-Higgs-doublet Models and the Minimal and Next to Minimal Supersymmetric Standard Model.
Beyond-the-Standard Model Higgs Physics using the ATLAS Experiment
Vanadia, Marco; The ATLAS collaboration
2015-01-01
The discovery of a Higgs boson with a mass of about 125 GeV/$c^2$ has prompted the question of whether or not this particle is part of a larger and more complex Higgs sector than that envisioned in the Standard Model. In this report, the latest Run 1 results from the ATLAS Experiment on Beyond-the-Standard Model (BSM) Higgs searches are outlined. Searches for additional Higgs bosons are presented and interpreted in well-motivated BSM Higgs frameworks, including the two-Higgs-doublet Models and the Minimal and Next to Minimal Supersymmetric Standard Model.
Beyond-the-Standard Model Higgs Physics using the ATLAS Experiment
Nagata, Kazuki; The ATLAS collaboration
2014-01-01
The discovery of a Higgs boson with a mass of about 125 GeV has prompted the question of whether or not this particle is part of a larger and more complex Higgs sector than that envisioned in the Standard Model. In this talk, the current results from the ATLAS experiment on Beyond-the-Standard Model (BSM) Higgs searches are outlined. Searches for additional Higgs bosons are presented and interpreted in well-motivated BSM Higgs frameworks, such as two-Higgs-doublet Models and the Minimal and Next to Minimal Supersymmetric Standard Model.
Beyond-the-Standard Model Higgs physics using the ATLAS experiment
Ernis, G; The ATLAS collaboration
2014-01-01
The discovery of a Higgs boson with a mass of about 125 GeV has prompted the question of whether or not this particle is part of a larger and more complex Higgs sector than that envisioned in the Standard Model. In this talk, the current results from the ATLAS experiment on Beyond-the-Standard Model (BSM) Higgs searches are outlined. Searches for additional Higgs bosons are presented and interpreted in well-motivated BSM Higgs frameworks, such as two-Higgs-doublet Models and the Minimal and Next to Minimal Supersymmetric Standard Model.
Imidazole-based Vanadium Complexes as Haloperoxidase Models ...
African Journals Online (AJOL)
NICO
Excellent conversions of thioanisole (100 %) were obtained under mild room temperature conditions. ... cluding that of sulphides, alkanes, alkenes and alcohols.5,6,10,11 ... This complex was prepared according to a literature method but with ...
Mental Models and the Control of Actions in Complex Environments
DEFF Research Database (Denmark)
Rasmussen, Jens
1987-01-01
of human activities. The need for analysis of complex work scenarios is discussed, together with the necessity of considering several levels of cognitive control depending upon different kinds of internal representations. The development of mental representations during learning and adaptation...
Complex Automated Negotiations Theories, Models, and Software Competitions
Zhang, Minjie; Robu, Valentin; Matsuo, Tokuro
2013-01-01
Complex Automated Negotiations are a widely studied, emerging area in the field of Autonomous Agents and Multi-Agent Systems. In general, automated negotiations can be complex, since many factors characterize such negotiations. For this book, we solicited papers on all aspects of such complex automated negotiations, which are studied in the field of Autonomous Agents and Multi-Agent Systems. This book includes two parts: Part I, Agent-based Complex Automated Negotiations, and Part II, Automated Negotiation Agents Competition. Each chapter in Part I is an extended version of an ACAN 2011 paper after peer review by three PC members. Part II covers ANAC 2011 (the Second Automated Negotiating Agents Competition), in which automated agents with different negotiation strategies, implemented by different developers, negotiate automatically in several negotiation domains. ANAC is an international competition in which automated negotiation strategies, submitted by a number of...
Engineering teacher training models and experiences
González-Tirados, R. M.
2009-04-01
Education Area, we renewed the programme, content and methodology, teaching the course under the name of "Initial Teacher Training Course within the framework of the European Higher Education Area". Continuous Training means learning throughout one's life as an Engineering teacher. They are actions designed to update and improve teaching staff, and are systematically offered on the current issues of: Teaching Strategies, training for research, training for personal development, classroom innovations, etc. They are activities aimed at conceptual change, changing the way of teaching and bringing teaching staff up-to-date. At the same time, the Institution is at the disposal of all teaching staff as a meeting point to discuss issues in common, attend conferences, department meetings, etc. In this Congress we present a justification of both training models and their design together with some results obtained on: training needs, participation, how it is developing and to what extent students are profiting from it.
Surface Complexation Modeling in Variable Charge Soils: Prediction of Cadmium Adsorption
Directory of Open Access Journals (Sweden)
Giuliano Marchi
2015-10-01
Full Text Available ABSTRACT Intrinsic equilibrium constants for 22 representative Brazilian Oxisols were estimated from a cadmium adsorption experiment. Equilibrium constants were fitted to two surface complexation models: diffuse layer and constant capacitance. Intrinsic equilibrium constants were optimized by FITEQL and by hand calculation using Visual MINTEQ in sweep mode, and Excel spreadsheets. Data from both models were incorporated into Visual MINTEQ. Constants estimated by FITEQL and incorporated in Visual MINTEQ software failed to predict observed data accurately. However, FITEQL raw output data rendered good results when predicted values were directly compared with observed values, instead of incorporating the estimated constants into Visual MINTEQ. Intrinsic equilibrium constants optimized by hand calculation and incorporated in Visual MINTEQ reliably predicted Cd adsorption reactions on soil surfaces under changing environmental conditions.
Energy Technology Data Exchange (ETDEWEB)
Salmina, E.S.; Wondrousch, D. [UFZ Department of Ecological Chemistry, Helmholtz Centre for Environmental Research, Permoserstr. 15, 04318 Leipzig (Germany); Institute for Organic Chemistry, Technical University Bergakademie Freiberg, Leipziger Str. 29, 09596 Freiberg (Germany); Kühne, R. [UFZ Department of Ecological Chemistry, Helmholtz Centre for Environmental Research, Permoserstr. 15, 04318 Leipzig (Germany); Potemkin, V.A. [Department of Chemistry, South Ural State Medical University, Vorovskogo 64, 454048, Chelyabinsk (Russian Federation); Schüürmann, G. [UFZ Department of Ecological Chemistry, Helmholtz Centre for Environmental Research, Permoserstr. 15, 04318 Leipzig (Germany); Institute for Organic Chemistry, Technical University Bergakademie Freiberg, Leipziger Str. 29, 09596 Freiberg (Germany)
2016-04-15
The present study is motivated by the increasing demand to consider internal partitioning into tissues, instead of exposure concentrations, for environmental toxicity assessment. To this end, physiologically based pharmacokinetic (PBPK) models can be applied. We evaluated the variation in accuracy of PBPK model outcomes depending on the tissue constituents modeled as sorptive phases and the chemical distribution tendencies addressed by molecular descriptors. The model performance was examined using data from 150 experiments for 28 chemicals collected from US EPA databases. The simplest PBPK model is based on the “K_ow-lipid content” approach, as is traditional for environmental toxicology. The most elaborated one considers five biological sorptive phases (polar and non-polar lipids, water, albumin and the remaining proteins) and makes use of LSER (linear solvation energy relationship) parameters to describe the compound partitioning behavior. The “K_ow-lipid content”-based PBPK model shows more than one order of magnitude difference between predicted and measured values for 37% of the studied exposure experiments, while for the most elaborated model this happens for only 7%. It is shown that further improvements could be achieved by introducing corrections for metabolic biotransformation and compound transmission hindrance through a cellular membrane. The analysis of the interface distribution tendencies shows that polar tissue constituents, namely water, polar lipids and proteins, play an important role in the accumulation behavior of polar compounds with H-bond donating functional groups. For compounds without H-bond donating fragments the preferred accumulation phases are storage lipids and water, depending on compound polarity. - Highlights: • For reliable predictions, models of a certain complexity should be compared. • For reliable predictions non-lipid fish tissue constituents should be considered. • H-donor compounds preferably accumulate in water
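The simplest "K_ow-lipid content" approach can be sketched as a one-line partition estimate. The lipid and water fractions below are illustrative defaults, not the phase compositions used in the study.

```python
# Sketch of the simplest 'Kow-lipid content' tissue/water partition estimate:
# P = f_lipid * Kow + f_water. Phase fractions are illustrative assumptions.
import math

def log10_partition(log_kow, f_lipid=0.05, f_water=0.80):
    """log10 of a lipid-content-based tissue/water partition coefficient."""
    return math.log10(f_lipid * 10.0 ** log_kow + f_water)

for lk in (2.0, 4.0, 6.0):
    print(lk, round(log10_partition(lk), 2))
```

For hydrophobic compounds (high log Kow) the lipid term dominates, which is why the single-phase model works best for non-polar chemicals; the elaborated five-phase model adds protein and water phases precisely to capture the polar, H-bond-donating compounds where this sketch fails.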
International Nuclear Information System (INIS)
Salmina, E.S.; Wondrousch, D.; Kühne, R.; Potemkin, V.A.; Schüürmann, G.
2016-01-01
The present study is motivated by the increasing demand to consider internal partitioning into tissues, instead of exposure concentrations, for environmental toxicity assessment. To this end, physiologically based pharmacokinetic (PBPK) models can be applied. We evaluated the variation in accuracy of PBPK model outcomes depending on the tissue constituents modeled as sorptive phases and the chemical distribution tendencies addressed by molecular descriptors. The model performance was examined using data from 150 experiments for 28 chemicals collected from US EPA databases. The simplest PBPK model is based on the “K_ow-lipid content” approach, as is traditional for environmental toxicology. The most elaborated one considers five biological sorptive phases (polar and non-polar lipids, water, albumin and the remaining proteins) and makes use of LSER (linear solvation energy relationship) parameters to describe the compound partitioning behavior. The “K_ow-lipid content”-based PBPK model shows more than one order of magnitude difference between predicted and measured values for 37% of the studied exposure experiments, while for the most elaborated model this happens for only 7%. It is shown that further improvements could be achieved by introducing corrections for metabolic biotransformation and compound transmission hindrance through a cellular membrane. The analysis of the interface distribution tendencies shows that polar tissue constituents, namely water, polar lipids and proteins, play an important role in the accumulation behavior of polar compounds with H-bond donating functional groups. For compounds without H-bond donating fragments the preferred accumulation phases are storage lipids and water, depending on compound polarity. - Highlights: • For reliable predictions, models of a certain complexity should be compared. • For reliable predictions non-lipid fish tissue constituents should be considered. • H-donor compounds preferably accumulate in water, polar
Large scale FCI experiments in subassembly geometry. Test facility and model experiments
International Nuclear Information System (INIS)
Beutel, H.; Gast, K.
A program is outlined for the study of fuel/coolant interaction under SNR conditions. The program consists of (a) underwater explosion experiments with full-size models of the SNR core, in which the fuel/coolant system is simulated by a pyrotechnic mixture, and (b) large-scale fuel/coolant interaction experiments with up to 5 kg of molten UO2 interacting with liquid sodium at 300 °C to 600 °C in a highly instrumented test facility simulating an SNR subassembly. The experimental results will be compared with theoretical models under development at Karlsruhe. Commencement of the experiments is expected at the beginning of 1975
International Nuclear Information System (INIS)
Guilmette, Raymond A.; Parkhurst, Mary Ann
2007-01-01
Because of the lack of existing information needed to evaluate the risks from inhalation exposures to depleted uranium (DU) aerosols of US soldiers during the 1991 Persian Gulf War, the US Department of Defense funded an experimental study to measure the characteristics of DU aerosols created when Abrams tanks and Bradley fighting vehicles are struck with large-caliber DU penetrators, and a dose and risk assessment for individuals present in such vehicles. This paper describes some of the difficulties experienced in dose assessment modelling of the very complex DU aerosols created in the Capstone studies, e.g. high concentrations, heterogeneous aerosol properties, non-lognormal particle size distributions, triphasic in vitro dissolution and rapid time-varying functions of both DU air concentration and particle size. The approaches used to solve these problems along with example results are presented. (authors)
INPUT DATA OF BURNING WOOD FOR CFD MODELLING USING SMALL-SCALE EXPERIMENTS
Directory of Open Access Journals (Sweden)
Petr Hejtmánek
2017-12-01
Full Text Available The paper presents an option for acquiring simplified input data for modelling of burning wood in CFD programmes. The option lies in combining data from small- and molecular-scale experiments in order to describe the material as a one-reaction material property. Such a virtual material would spread fire and develop the fire according to the surrounding environment, and it could be extinguished, without using a complex molecular reaction description. A series of experiments including elemental analysis, thermogravimetric analysis, differential thermal analysis and combustion analysis was performed. An FDS model of burning pine wood in a cone calorimeter was then built using those values. The model was validated against the HRR (Heat Release Rate) from the real cone calorimeter experiment. The results show that for the purpose of CFD modelling the effective heat of combustion, one of the basic material properties for fire modelling affecting the total intensity of burning, should be used. Using the net heat of combustion in the model leads to higher HRR values than the real experiment data. Considering all the results shown in this paper, it is possible to simulate the burning of wood using the extrapolated data obtained in small-scale experiments.
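The effective heat of combustion distinguished above is, in cone-calorimeter practice, the total heat released divided by the total mass loss. A minimal sketch, with flat, invented traces rather than data from the experiment:

```python
# Effective heat of combustion from cone-calorimeter time series: integrated
# heat release over integrated mass loss. The traces below are illustrative
# constants, not measurements from the pine-wood experiment described above.

def effective_heat_of_combustion(hrr, mlr, dt):
    """HRR samples in kW, mass-loss-rate samples in g/s, interval dt in s.
    Returns kJ/g, which equals MJ/kg."""
    total_heat = sum(hrr) * dt   # kJ released
    total_mass = sum(mlr) * dt   # g of fuel consumed
    return total_heat / total_mass

hrr = [130.0] * 300   # kW, 300 one-second samples
mlr = [10.0] * 300    # g/s
print(effective_heat_of_combustion(hrr, mlr, dt=1.0))  # 13.0 MJ/kg
```

The effective value is lower than the net heat of combustion because combustion is incomplete, which is why using the net value in the FDS model overpredicts HRR.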
Numerical simulations and mathematical models of flows in complex geometries
DEFF Research Database (Denmark)
Hernandez Garcia, Anier
The research work of the present thesis was mainly aimed at exploiting one of the strengths of the Lattice Boltzmann methods, namely, the ability to handle complicated geometries to accurately simulate flows in complex geometries. In this thesis, we perform a very detailed theoretical analysis...... and through the Chapman-Enskog multi-scale expansion technique the dependence of the kinetic viscosity on each scheme is investigated. Seeking for optimal numerical schemes to eciently simulate a wide range of complex flows a variant of the finite element, off-lattice Boltzmann method [5], which uses...... the characteristic based integration is also implemented. Using the latter scheme, numerical simulations are conducted in flows of different complexities: flow in a (real) porous network and turbulent flows in ducts with wall irregularities. From the simulations of flows in porous media driven by pressure gradients...
Complexity of repeated game model in electric power triopoly
International Nuclear Information System (INIS)
Ma Junhai; Ji Weizhuo
2009-01-01
Building on the repeated game model of the electric power duopoly, a triopoly output game model is presented. On the basis of some hypotheses, the dynamic characteristics are demonstrated with theoretical analysis and numerical simulations. The results show that the triopoly model is a chaotic system and that it is better than the duopoly model in applications.
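A bounded-rationality output-adjustment map of the kind analyzed in such triopoly models can be sketched as follows. The demand, cost, and adjustment-speed parameters are illustrative assumptions, not those of the cited model.

```python
# Sketch of a bounded-rationality Cournot triopoly map: each firm adjusts
# output in proportion to its marginal profit. Parameters are illustrative
# assumptions, not the values used in the paper.

def step(q, a=10.0, b=1.0, c=1.0, alpha=0.1):
    """One iteration q_i <- q_i + alpha * q_i * dprofit_i/dq_i, with inverse
    demand p = a - b*(q1+q2+q3) and constant marginal cost c."""
    total = sum(q)
    return tuple(qi + alpha * qi * (a - c - b * (total + qi)) for qi in q)

q = (0.5, 0.6, 0.7)
for _ in range(200):
    q = step(q)
# With this small adjustment speed the outputs settle at the symmetric Nash
# equilibrium (a - c) / (4 * b) = 2.25; increasing alpha destabilizes the
# fixed point, which is the route to the chaotic regimes such papers study.
print(q)
```

Bifurcation diagrams of this map as alpha grows are the standard way the period-doubling route to chaos is exhibited in these duopoly and triopoly studies.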
Information Geometric Complexity of a Trivariate Gaussian Statistical Model
Directory of Open Access Journals (Sweden)
Domenico Felice
2014-05-01
Full Text Available We evaluate the information geometric complexity of entropic motion on low-dimensional Gaussian statistical manifolds in order to quantify how difficult it is to make macroscopic predictions about systems in the presence of limited information. Specifically, we observe that the complexity of such entropic inferences not only depends on the amount of available pieces of information but also on the manner in which such pieces are correlated. Finally, we uncover that, for certain correlational structures, the impossibility of reaching the most favorable configuration from an entropic inference viewpoint seems to lead to an information geometric analog of the well-known frustration effect that occurs in statistical physics.
TF insert experiment log book. 2nd Experiment of CS model coil
International Nuclear Information System (INIS)
Sugimoto, Makoto; Isono, Takaaki; Matsui, Kunihiro
2001-12-01
The cool down of the CS model coil and TF insert started on August 20, 2001. It took almost one month, and coil charging started immediately, on September 17, 2001. The charge test of the TF insert and CS model coil was completed on October 19, 2001. In this campaign, 88 shots were taken in total, and the size of the data file in the DAS (Data Acquisition System) was about 4 GB. This report is a database that consists of the log list and the log sheets of every shot. This is the experiment logbook for the 2nd experiment (charge test) of the CS model coil and TF insert. (author)
Energy Technology Data Exchange (ETDEWEB)
Park, Sang-Won; Leckie, J.O. [Stanford Univ., CA (United States); Siegel, M.D. [Sandia National Labs., Albuquerque, NM (United States)
1995-09-01
Corrensite is the dominant clay mineral in the Culebra Dolomite at the Waste Isolation Pilot Plant. The surface characteristics of corrensite, a mixed chlorite/smectite clay mineral, have been studied. Zeta potential measurements and titration experiments suggest that the corrensite surface contains a mixture of permanent charge sites on the basal plane and SiOH and AlOH sites with a net pH-dependent charge at the edge of the clay platelets. Triple-layer model parameters were determined by the double extrapolation technique for use in chemical speciation calculations of adsorption reactions using the computer program HYDRAQL. Batch adsorption studies showed that corrensite is an effective adsorbent for uranyl. The pH-dependent adsorption behavior indicates that adsorption occurs at the edge sites. Adsorption studies were also conducted in the presence of competing cations and complexing ligands. The cations did not affect uranyl adsorption in the range studied. This observation lends support to the hypothesis that uranyl adsorption occurs at the edge sites. Uranyl adsorption was significantly hindered by carbonate. It is proposed that the formation of carbonate uranyl complexes inhibits uranyl adsorption and that only the carbonate-free species adsorb to the corrensite surface. The presence of the organic complexing agents EDTA and oxine also inhibits uranyl sorption.
International Nuclear Information System (INIS)
Park, Sang-Won; Leckie, J.O.; Siegel, M.D.
1995-09-01
Corrensite is the dominant clay mineral in the Culebra Dolomite at the Waste Isolation Pilot Plant. The surface characteristics of corrensite, a mixed chlorite/smectite clay mineral, have been studied. Zeta potential measurements and titration experiments suggest that the corrensite surface contains a mixture of permanent charge sites on the basal plane and SiOH and AlOH sites with a net pH-dependent charge at the edge of the clay platelets. Triple-layer model parameters were determined by the double extrapolation technique for use in chemical speciation calculations of adsorption reactions using the computer program HYDRAQL. Batch adsorption studies showed that corrensite is an effective adsorbent for uranyl. The pH-dependent adsorption behavior indicates that adsorption occurs at the edge sites. Adsorption studies were also conducted in the presence of competing cations and complexing ligands. The cations did not affect uranyl adsorption in the range studied. This observation lends support to the hypothesis that uranyl adsorption occurs at the edge sites. Uranyl adsorption was significantly hindered by carbonate. It is proposed that the formation of carbonate uranyl complexes inhibits uranyl adsorption and that only the carbonate-free species adsorb to the corrensite surface. The presence of the organic complexing agents EDTA and oxine also inhibits uranyl sorption
Brandt, Alexander Sascha; von Rundstedt, F-C; Lazica, D A; Roth, S
2010-07-01
The rendezvous procedure for re-establishing ureteral continuity after complex ureteral injuries is introduced, and we present our experience with this technique. Aspects of the technique are described in a detailed step-by-step instruction using intraoperative radiographs. We evaluated our patient data from 1998 until 2009 for cases in which the rendezvous procedure was attempted. The rendezvous procedure was used in a total of 11 patients. Realignment was successful in 10 cases (90.9%) and the initial nephrostomy could be removed. In 3 of 7 cases postoperative removal of the JJ ureteric stent was successful. In 7 patients the final surgical ureter reconstruction was performed after a mean period of 7 months: 5 cases of ureteroneocystostomy and 2 cases of reconstruction of the ureter with either colon or ileum segments were accomplished. In 1 patient permanent maintenance of the DJ ureteral stent was necessary. Ureteral realignment with the rendezvous procedure enables placement of the ureteral stent in many cases in which exclusively antegrade or retrograde procedures have failed. By this means nephrostomy can be spared as a temporary or permanent solution, and a better chance of restitutio ad integrum can be realised. Georg Thieme Verlag KG Stuttgart · New York.
Experiences in the D&D of the EBWR reactor complex at Argonne National Laboratory
International Nuclear Information System (INIS)
Bhattacharyya, S.K.; Boing, L.E.; Fellhauer, C.R.
1995-02-01
EBWR went critical in December 1957 at 20 MW(t) and was later upgraded to 100 MW(t) operation. EBWR was shut down in July 1967 and placed in dry lay-up. In 1986, the D&D work was planned in 4 phases: final planning and preparations for D&D, removal of reactor systems, removal of the reactor vessel complex, and final decontamination and project closeout. Despite precautions, there was an uptake of 241Am by D&D workers following underwater plasma arc cutting within the pool; the cause was traced to an experimental 241Pu foil (200 μg) that was lost in the mid-1960s in the reactor vessel. Several major lessons were learned from this episode, among which is the fact that research facilities often involve unusual experiments which may not be recorded. Safety analysis and review procedures for D&D operations need to be carefully considered, since they represent considerably different situations than reactor operations. EBWR is one of the very few cases of a prototypic reactor facility designed, operated, tested and now D&D'd by one organization
International Nuclear Information System (INIS)
Hembree, D.M.; Carter, J.A.; Hinton, E.R. Jr.
2002-01-01
Full text: The Oak Ridge Y-12 National Security Complex has been involved in the U.S. nuclear weapons program since the program's inception in the 1940's. Known as the U.S. 'Fort Knox of uranium', the site is also a repository of unique expertise and experience related to enriched uranium and other weapons-related materials. Y-12's Analytical Chemistry Organization (ACO) contains a wide range of analytical instrumentation that has demonstrated the ability to provide important forensic information in a short period of time. This rapid response capability is in part due to having all of the analytical instrumentation and expertise contained in one building, within one organization. Rapid-response teams are easily formed to quickly obtain key information. The infrastructure to handle nuclear materials, e.g. chain-of-custody, radiological control, information management, etc. is maintained for normal operations. As a result, the laboratory has demonstrated the capability for rapid response times for nuclear forensic samples. This poster presentation will discuss Y-12's analytical capabilities and the importance of key instruments and highly trained personnel in providing critical information. The laboratory has collaborated with both state and federal law enforcement agencies to analyze non-nuclear forensic evidence. Y-12's participation in two nuclear forensic events, as part of multi-laboratory teams, will be described. (author)