The First Example of a Nitrile Hydratase Model Complex that Reversibly Binds Nitriles
Shearer, Jason; Jackson, Henry L.; Schweitzer, Dirk; Rittenberg, Durrell K.; Leavy, Tanya M.; Kaminsky, Werner; Scarrow, Robert C.; Kovacs, Julie A.
2015-01-01
Nitrile hydratase (NHase) is an iron-containing metalloenzyme that converts nitriles to amides. The mechanism by which this biochemical reaction occurs is unknown. One proposed mechanism involves nucleophilic attack of an Fe-bound nitrile by water (or hydroxide). Reported herein is a five-coordinate model compound ([FeIII(S2Me2N3(Et,Pr))]+) containing Fe(III) in an environment resembling that of NHase, which reversibly binds a variety of nitriles, alcohols, amines, and thiocyanate. XAS shows that five-coordinate [FeIII(S2Me2N3(Et,Pr))]+ reacts with both methanol and acetonitrile to afford a six-coordinate solvent-bound complex. Competitive binding studies demonstrate that MeCN preferentially binds over ROH, suggesting that nitriles would be capable of displacing the H2O coordinated to the iron site of NHase. Thermodynamic parameters were determined for acetonitrile (ΔH = −6.2(±0.2) kcal/mol, ΔS = −29.4(±0.8) eu), benzonitrile (ΔH = −4.2(±0.6) kcal/mol, ΔS = −18(±3) eu), and pyridine (ΔH = −8(±1) kcal/mol, ΔS = −41(±6) eu) binding to [FeIII(S2Me2N3(Et,Pr))]+ using variable-temperature electronic absorption spectroscopy. Ligand-exchange kinetics were examined for acetonitrile, iso-propylnitrile, benzonitrile, and 4-tert-butylpyridine at a variety of temperatures using 13C NMR line-broadening analysis. Activation parameters for ligand exchange were determined to be ΔH‡ = 7.1(±0.8) kcal/mol, ΔS‡ = −10(±1) eu (acetonitrile); ΔH‡ = 5.4(±0.6) kcal/mol, ΔS‡ = −17(±2) eu (iso-propylnitrile); ΔH‡ = 4.9(±0.8) kcal/mol, ΔS‡ = −20(±3) eu (benzonitrile); and ΔH‡ = 4.7(±1.4) kcal/mol, ΔS‡ = −18(±2) eu (4-tert-butylpyridine). The thermodynamic parameters for pyridine binding to a related complex, [FeIII(S2Me2N3(Pr,Pr))]+ (ΔH = −5.9(±0.8) kcal/mol, ΔS = −24(±3) eu), are also reported, as well as kinetic parameters for 4-tert-butylpyridine exchange (ΔH‡ = 3.1(±0.8) kcal/mol, ΔS‡ = −25(±3) eu).
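The reported enthalpy-driven, entropy-opposed binding can be sanity-checked with a quick van't Hoff estimate. The sketch below (not from the paper; the two temperatures are illustrative) computes the binding constant for MeCN from the reported ΔH and ΔS, showing that binding is favorable only at low temperature:

```python
import math

R = 0.0019872  # gas constant, kcal/(mol·K)

def binding_K(dH_kcal, dS_eu, T):
    """Equilibrium constant from ΔH (kcal/mol) and ΔS (eu = cal/(mol·K))
    via ΔG = ΔH − TΔS and K = exp(−ΔG/RT)."""
    dG = dH_kcal - T * (dS_eu / 1000.0)  # kcal/mol
    return math.exp(-dG / (R * T))

# Reported values for MeCN binding to [FeIII(S2Me2N3(Et,Pr))]+
dH, dS = -6.2, -29.4

K_room = binding_K(dH, dS, 298.0)  # near room temperature: K < 1
K_low = binding_K(dH, dS, 200.0)   # low temperature: K > 1
print(K_room, K_low)
```

The crossover reflects the large negative ΔS: the −TΔS penalty overwhelms the favorable ΔH as the temperature rises.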
Is the tungsten(IV) complex (NEt4)2[WO(mnt)2] a functional analogue of acetylene hydratase?
Directory of Open Access Journals (Sweden)
Matthias Schreyer
2017-11-01
The tungsten(IV) complex (Et4N)2[W(O)(mnt)2] (1; mnt = maleonitriledithiolate) was proposed (Sarkar et al., J. Am. Chem. Soc. 1997, 119, 4315) to be a functional analogue of the active center of the enzyme acetylene hydratase from Pelobacter acetylenicus, which hydrates acetylene (ethyne; 2) to acetaldehyde (ethanal; 3). In the absence of a satisfactory mechanistic proposal for the hydration reaction, we considered the possibility of a metal–vinylidene type activation mode, as it is well established for ruthenium-based alkyne hydration catalysts with anti-Markovnikov regioselectivity. To validate the hypothesis, the regioselectivity of tungsten-catalyzed alkyne hydration of a terminal, higher alkyne had to be determined. However, complex 1 was not a competent catalyst for the hydration of 1-octyne under the conditions tested. Furthermore, we could not observe the earlier reported hydration activity of complex 1 towards acetylene. A critical assessment of, and a possible explanation for, the earlier reported results are offered. The title question is answered with "no".
Kumar, Davinder; Nguyen, Tho N; Grapperhaus, Craig A
2014-12-01
Kinetic investigations inspired by the metalloenzyme nitrile hydratase were performed on a series of ruthenium(II) complexes to determine the effect of sulfur oxidation on catalytic nitrile hydration. The rate of benzonitrile hydration was quantified as a function of catalyst, nitrile, and water concentrations. Precatalysts L(n)RuPPh3 (n = 1-3; L(1) = 4,7-bis(2'-methyl-2'-mercapto-propyl)-1-thia-4,7-diazacyclononane; L(2) = 4-(2'-methyl-2'-sulfinatopropyl)-7-(2'-methyl-2'-mercapto-propyl)-1-thia-4,7-diazacyclononane; L(3) = 4-(2'-methyl-2'-sulfinatopropyl)-7-(2'-methyl-2'-sulfenato-propyl)-1-thia-4,7-diazacyclononane) were activated by substitution of triphenylphosphine with substrate in hot dimethylformamide solution. Rate measurements are consistent with a dynamic equilibrium between inactive aqua (L(n)Ru-OH2) and active nitrile (L(n)Ru-NCR) derivatives with K = 21 ± 1, 9 ± 0.9, and 23 ± 3 for L(1) to L(3), respectively. Subsequent hydration of the L(n)Ru-NCR intermediate yields the amide product with measured hydration rate constants (k's) of 0.37 ± 0.01, 0.82 ± 0.07, and 1.59 ± 0.12 M(-1) h(-1) for L(1) to L(3), respectively. Temperature dependent studies reveal that sulfur oxidation lowers the enthalpic barrier by 27 kJ/mol, but increases the entropic barrier by 65 J/(mol K). Density functional theory (DFT) calculations (B3LYP/LanL2DZ (Ru); 6-31G(d) (all other atoms)) support a nitrile bound catalytic cycle with lowering of the reaction barrier as a consequence of sulfur oxidation through enhanced nitrile binding and attack of the water nucleophile through a highly organized transition state.
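The reported barrier changes can be combined into a net free-energy effect on the rate. The sketch below (a back-of-envelope illustration, not from the study; the 298 K reference temperature is an assumption) shows that the 27 kJ/mol enthalpic gain outweighs the 65 J/(mol·K) entropic penalty, giving a net acceleration:

```python
import math

R = 8.314e-3  # gas constant, kJ/(mol·K)

def net_acceleration(d_dH, d_dS, T):
    """Rate enhancement exp(−ΔΔG‡/RT) from transition-state theory,
    given ΔΔH‡ (kJ/mol) and ΔΔS‡ (kJ/(mol·K))."""
    d_dG = d_dH - T * d_dS  # ΔΔG‡ = ΔΔH‡ − T·ΔΔS‡
    return math.exp(-d_dG / (R * T))

# Sulfur oxidation: enthalpic barrier lowered by 27 kJ/mol,
# entropic barrier raised by 65 J/(mol·K), i.e. ΔΔS‡ = −0.065 kJ/(mol·K)
factor = net_acceleration(-27.0, -0.065, 298.0)
print(factor)  # > 1: enthalpy wins, hydration is faster after oxidation
```

This is consistent in direction with the measured rate constants (k rising from 0.37 to 1.59 M⁻¹ h⁻¹ across L(1) to L(3)), though the exact magnitude depends on the reaction temperature.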
Bedoyan, Jirair K.; Yang, Samuel P.; Ferdinandusse, Sacha; Jack, Rhona M.; Miron, Alexander; Grahame, George; DeBrosse, Suzanne D.; Hoppel, Charles L.; Kerr, Douglas S.; Wanders, Ronald J. A.
2017-01-01
Mutations in ECHS1 result in short-chain enoyl-CoA hydratase (SCEH) deficiency which mainly affects the catabolism of various amino acids, particularly valine. We describe a case compound heterozygous for ECHS1 mutations c.836T>C (novel) and c.8C>A identified by whole exome sequencing of proband and
Martínková, Ludmila; Chmátal, Martin
2016-10-01
The aim of this study was to design an effective method for the bioremediation of coking wastewaters, specifically for the concurrent elimination of their highly toxic components: cyanide and phenols. Almost full degradation of free cyanide (0.32-20 mM; 8.3-520 mg L(-1)) in model and real coking wastewaters was achieved by using a recombinant cyanide hydratase in the first step. The removal of cyanide, a strong inhibitor of tyrosinase, enabled an effective degradation of phenols by this enzyme in the second step. Phenol (16.5 mM, 1,552 mg L(-1)) was completely removed from a real coking wastewater within 20 h, and cresols (5.0 mM, 540 mg L(-1)) were removed by 66% under the same conditions. The integration of cyanide hydratase and tyrosinase opens up new possibilities for the bioremediation of wastewaters with complex pollution.
Directory of Open Access Journals (Sweden)
Matthias Engleder
Kievitone hydratase catalyzes the addition of water to the double bond of the prenyl moiety of the plant isoflavonoid kievitone and thereby forms the tertiary alcohol hydroxy-kievitone. In nature, this conversion is associated with a defense mechanism of fungal pathogens against phytoalexins generated by host plants after infection. As of today, a gene sequence coding for kievitone hydratase activity has only been identified and characterized in Fusarium solani f. sp. phaseoli. Here, we report on the identification of a putative kievitone hydratase sequence in Nectria haematococca (NhKHS), the teleomorph state of F. solani, based on in silico sequence analyses. After heterologous expression of the enzyme in the methylotrophic yeast Pichia pastoris, we confirmed its kievitone hydration activity and assessed its biochemical properties and substrate specificity. Purified recombinant NhKHS appears to be a homodimeric glycoprotein. Due to its good activity for the readily available chalcone derivative xanthohumol (XN), this compound was selected as a model substrate for biochemical studies. The optimal pH and temperature for hydratase activity were 6.0 and 35°C, respectively, and the apparent Vmax and Km values for hydration of XN were 7.16 μmol min-1 mg-1 and 0.98 ± 0.13 mM, respectively. Due to its catalytic properties and apparent substrate promiscuity, NhKHS is a promising enzyme for the biocatalytic production of tertiary alcohols.
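The reported apparent Vmax and Km fully parameterize a Michaelis-Menten rate law for XN hydration. A minimal sketch (the substrate concentrations are illustrative, not from the study):

```python
def mm_rate(s_mM, vmax=7.16, km_mM=0.98):
    """Michaelis-Menten rate (μmol·min⁻¹·mg⁻¹) for XN hydration by NhKHS,
    using the reported apparent Vmax and Km."""
    return vmax * s_mM / (km_mM + s_mM)

half = mm_rate(0.98)  # at [S] = Km the rate is exactly Vmax/2 = 3.58
sat = mm_rate(50.0)   # approaches but never reaches Vmax at saturation
print(half, sat)
```

The half-maximal point at [S] = Km is a direct check that the two reported parameters are being used consistently.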
Genetics Home Reference: 3-methylglutaconyl-CoA hydratase deficiency
... provide energy for cells. This amino acid is broken down in cell structures called mitochondria , which convert ... 3-methylglutaconyl-CoA hydratase, leucine is not properly broken down, which leads to a buildup of related ...
International Nuclear Information System (INIS)
Phan, Isabelle; Subramanian, Sandhya; Olsen, Christian; Edwards, Thomas E.; Guo, Wenjin; Zhang, Yang; Van Voorhis, Wesley C.; Stewart, Lance J.; Myler, Peter J.
2011-01-01
Fumarate hydratase is an enzyme of the tricarboxylic acid cycle, one of the metabolic pathways characteristic of the mitochondria. The structure of R. prowazekii class II fumarate hydratase is reported at 2.4 Å resolution and is compared with the available structure of the human homolog. Rickettsiae are obligate intracellular parasites of eukaryotic cells that are the causative agents responsible for spotted fever and typhus. Their small genome (about 800 protein-coding genes) is highly conserved across species and has been postulated as the ancestor of the mitochondria. No genes that are required for glycolysis are found in the Rickettsia prowazekii or mitochondrial genomes, but a complete set of genes encoding components of the tricarboxylic acid cycle and the respiratory-chain complex is found in both. A 2.4 Å resolution crystal structure of R. prowazekii fumarate hydratase, an enzyme catalyzing the third step of the tricarboxylic acid cycle pathway that ultimately converts phosphoenolpyruvate into succinyl-CoA, has been solved. A structure alignment with human mitochondrial fumarate hydratase highlights the close similarity between R. prowazekii and mitochondrial enzymes
The aconitate hydratase family from Citrus
Directory of Open Access Journals (Sweden)
Cercos Manuel
2010-10-01
Background: Research on citrus fruit ripening has received considerable attention because of the importance of citrus fruits for the human diet. Organic acids are among the main determinants of taste and organoleptic quality of fruits, and hence the control of fruit acidity loss has strong economic relevance. In citrus, organic acids accumulate in the juice sac cells of developing fruits and are catabolized thereafter during ripening. Aconitase, which transforms citrate to isocitrate, catalyzes the first step of citric acid catabolism and is a major component of the citrate utilization machinery. In this work, the citrus aconitase gene family was first characterized, and a phylogenetic analysis was then carried out in order to understand the evolutionary history of this family in plants. Gene expression analyses of the citrus aconitase family were subsequently performed in several acidic and acidless genotypes to elucidate their involvement in acid homeostasis. Results: Analysis of 460,000 citrus ESTs, followed by sequencing of complete cDNA clones, identified in citrus three transcription units coding for putatively active aconitate hydratase proteins, named CcAco1, CcAco2 and CcAco3. A phylogenetic study carried out on the Aco family in 14 plant species shows the presence of five Aco subfamilies, and that the ancestor of monocot and dicot species shared at least one Aco gene. Real-time RT-PCR expression analyses of the three citrus aconitase genes were performed in pulp tissues along fruit development in acidic and acidless citrus varieties such as mandarins, oranges and lemons. While CcAco3 expression was always low, the CcAco1 and CcAco2 genes were generally induced during the rapid phase of fruit growth, along with the maximum in acidity and the beginning of the acid reduction. Two exceptions to this general pattern were found: 1) the Clemenules mandarin failed to induce CcAco2 although acid levels were rapidly reduced; and 2) the acidless "Sucreña" orange
57Fe Moessbauer spectroscopic studies on photosensitive nitrile hydratase (NHase)
International Nuclear Information System (INIS)
Kobayashi, Yoshio; Odaka, Masafumi
2001-01-01
57Fe Moessbauer spectroscopy is a very useful technique for elucidating the chemical properties and biological changes of Fe species located at the reaction centers in various biological systems. We have applied 57Fe Moessbauer spectroscopy to study the mechanism of photoactivation and the structural change caused by light irradiation of nitrile hydratase (NHase).
The Application of Nitrile Hydratases in Organic Synthesis
Van Pelt, S.
2010-01-01
Nitrile hydratases (NHases, E.C. 4.2.1.84) catalyse the transformation of nitriles into the corresponding amides and were first discovered 30 years ago in studies on the microbial degradation of toxic cyano-group-containing compounds. The use of NHases in synthetic chemistry is especially
Directory of Open Access Journals (Sweden)
Oleg Svatos
2013-01-01
In this paper we analyze the complexity of time limits found especially in regulated processes of public administration. First we review the most popular process modeling languages. We define an example scenario based on current Czech legislation and capture it in each of the discussed process modeling languages. The analysis shows that contemporary process modeling languages support the capture of time limits only partially, which causes trouble for analysts and adds unnecessary complexity to the models. Given these unsatisfactory results, we analyze the complexity of time limits in greater detail and outline the lifecycle of a time limit using the multiple dynamic generalizations pattern. As an alternative to the popular process modeling languages, we present the PSD process modeling language, which supports the defined time limit lifecycles natively and therefore keeps the models simple and easy to understand.
International Nuclear Information System (INIS)
Brown, T.W.
2010-11-01
The same complex matrix model calculates both tachyon scattering for the c=1 non-critical string at the self-dual radius and certain correlation functions of half-BPS operators in N=4 super-Yang-Mills. It is dual to another complex matrix model where the couplings of the first model are encoded in the Kontsevich-like variables of the second. The duality between the theories is mirrored by the duality of their Feynman diagrams. Analogously to the Hermitian Kontsevich-Penner model, the correlation functions of the second model can be written as sums over discrete points in subspaces of the moduli space of punctured Riemann surfaces.
Simulation in Complex Modelling
DEFF Research Database (Denmark)
Nicholas, Paul; Ramsgaard Thomsen, Mette; Tamke, Martin
2017-01-01
This paper will discuss the role of simulation in extended architectural design modelling. As a framing paper, the aim is to present and discuss the role of integrated design simulation and feedback between design and simulation in a series of projects under the Complex Modelling framework. … performance, engage with high degrees of interdependency and allow the emergence of design agency and feedback between the multiple scales of architectural construction. This paper presents examples of integrated design simulation from a series of projects including Lace Wall, A Bridge Too Far and Inflated Restraint, developed for the research exhibition Complex Modelling, Meldahls Smedie Gallery, Copenhagen, in 2016. Where the direct project aims and outcomes have been reported elsewhere, the aim of this paper is to discuss overarching strategies for working with design-integrated simulation.
Boccara, Nino
2010-01-01
Modeling Complex Systems, 2nd Edition, explores the process of modeling complex systems, providing examples from such diverse fields as ecology, epidemiology, sociology, seismology, and economics. It illustrates how models of complex systems are built and provides indispensable mathematical tools for studying their dynamics. This vital introductory text is useful for advanced undergraduate students in various scientific disciplines, and serves as an important reference book for graduate students and young researchers. This enhanced second edition includes: . -recent research results and bibliographic references -extra footnotes which provide biographical information on cited scientists who have made significant contributions to the field -new and improved worked-out examples to aid a student’s comprehension of the content -exercises to challenge the reader and complement the material Nino Boccara is also the author of Essentials of Mathematica: With Applications to Mathematics and Physics (Springer, 2007).
International Nuclear Information System (INIS)
Schreckenberg, M
2004-01-01
This book by Nino Boccara presents a compilation of model systems commonly termed 'complex'. It starts with a definition of the systems under consideration and how to build up a model to describe the complex dynamics. The subsequent chapters are devoted to various categories of mean-field type models (differential and recurrence equations, chaos) and of agent-based models (cellular automata, networks and power-law distributions). Each chapter is supplemented by a number of exercises and their solutions. The table of contents looks a little arbitrary, but the author took the most prominent model systems investigated over the years (and up until now there has been no unified theory covering the various aspects of complex dynamics). The model systems are explained by looking at a number of applications in various fields. The book is written as a textbook for interested students as well as serving as a comprehensive reference for experts. It is an ideal source for topics to be presented in a lecture on the dynamics of complex systems. This is the first book on this 'wide' topic and I have long awaited such a book (in fact I planned to write it myself, but this is much better than I could ever have written it!). Only section 6 on cellular automata is a little too limited to the author's point of view, and one would have expected more about the famous Domany-Kinzel model (and more accurate citation!). In my opinion this is one of the best textbooks published during the last decade and even experts can learn a lot from it. Hopefully there will be an updated edition after, say, five years, since this field is growing so quickly. The price is too high for students but this, unfortunately, is the normal case today. Nevertheless I think it will be a great success! (book review)
Modeling complexes of modeled proteins.
Anishchenko, Ivan; Kundrotas, Petras J; Vakser, Ilya A
2017-03-01
Structural characterization of proteins is essential for understanding life processes at the molecular level. However, only a fraction of known proteins have experimentally determined structures. This fraction is even smaller for protein-protein complexes. Thus, structural modeling of protein-protein interactions (docking) primarily has to rely on modeled structures of the individual proteins, which typically are less accurate than the experimentally determined ones. Such "double" modeling is the Grand Challenge of structural reconstruction of the interactome. Yet it remains so far largely untested in a systematic way. We present a comprehensive validation of template-based and free docking on a set of 165 complexes, where each protein model has six levels of structural accuracy, from 1 to 6 Å Cα RMSD. Many template-based docking predictions fall into the acceptable quality category, according to the CAPRI criteria, even for highly inaccurate proteins (5-6 Å RMSD), although the number of such models (and, consequently, the docking success rate) drops significantly for models with RMSD > 4 Å. The results show that the existing docking methodologies can be successfully applied to protein models with a broad range of structural accuracy, and that template-based docking is much less sensitive to inaccuracies of protein models than free docking. Proteins 2017; 85:470-478. © 2016 Wiley Periodicals, Inc.
Predictive Surface Complexation Modeling
Energy Technology Data Exchange (ETDEWEB)
Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences
2016-11-29
Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.
Polystochastic Models for Complexity
Iordache, Octavian
2010-01-01
This book is devoted to complexity understanding and management, considered as the main source of efficiency and prosperity for the next decades. Divided into six chapters, the book begins with a presentation of basic concepts as complexity, emergence and closure. The second chapter looks to methods and introduces polystochastic models, the wave equation, possibilities and entropy. The third chapter focusing on physical and chemical systems analyzes flow-sheet synthesis, cyclic operations of separation, drug delivery systems and entropy production. Biomimetic systems represent the main objective of the fourth chapter. Case studies refer to bio-inspired calculation methods, to the role of artificial genetic codes, neural networks and neural codes for evolutionary calculus and for evolvable circuits as biomimetic devices. The fifth chapter, taking its inspiration from systems sciences and cognitive sciences looks to engineering design, case base reasoning methods, failure analysis, and multi-agent manufacturing...
Germline fumarate hydratase mutations in patients with ovarian mucinous cystadenoma
DEFF Research Database (Denmark)
Ylisaukko-oja, Sanna K.; Cybulski, Cezary; Lehtonen, Rainer
2006-01-01
Germline mutations in the fumarate hydratase (FH) gene were recently shown to predispose to the dominantly inherited syndrome hereditary leiomyomatosis and renal cell cancer (HLRCC). HLRCC is characterized by benign leiomyomas of the skin and the uterus, renal cell carcinoma, and uterine leiomyosarcoma. The aim of this study was to identify new families with FH mutations, and to further examine the tumor spectrum associated with FH mutations. FH germline mutations were screened from 89 patients with RCC, skin leiomyomas or ovarian tumors. Subsequently, 13 ovarian and 48 bladder carcinomas were…
Rosner, Bettina M.; Schink, Bernhard
1995-01-01
Acetylene hydratase of the mesophilic fermenting bacterium Pelobacter acetylenicus catalyzes the hydration of acetylene to acetaldehyde. Growth of P. acetylenicus with acetylene and specific acetylene hydratase activity depended on tungstate or, to a lesser degree, molybdate supply in the medium. The specific enzyme activity in cell extract was highest after growth in the presence of tungstate. Enzyme activity was stable even after prolonged storage of the cell extract or of the purified prote...
Zhang, Yu; Zeng, Zhuotong; Zeng, Guangming; Liu, Xuanming; Chen, Ming; Liu, Lifeng; Liu, Zhifeng; Xie, Gengxin
2013-08-01
The continuing discharge of nitriles in various industrial processes has caused serious environmental consequences of nitrile pollution. Microorganisms possess several nitrile-degrading pathways by direct interactions of nitriles with nitrile-degrading enzymes. However, these interactions are largely unknown and difficult to experimentally determine but important for interpretation of nitrile metabolisms and design of nitrile-degrading enzymes with better nitrile-converting activity. Here, we undertook a molecular modeling study of enzyme-substrate binding modes in the bi-enzyme pathway for degradation of nitrile to acid. Docking results showed that the top substrates having favorable interactions with nitrile hydratase from Rhodococcus erythropolis AJ270 (ReNHase), nitrile hydratase from Pseudonocardia thermophila JCM 3095 (PtNHase), and amidase from Rhodococcus sp. N-771 (RhAmidase) were benzonitrile, 3-cyanopyridine, and L-methioninamide, respectively. We further analyzed the interactional profiles of these top poses with corresponding enzymes, showing that specific residues within the enzyme's binding pockets formed diverse contacts with substrates. This information on binding landscapes and interactional profiles is of great importance for the design of nitrile-degrading enzyme mutants with better oxidation activity toward nitriles or amides in the process of pollutant treatments.
Self-subunit swapping occurs in another gene type of cobalt nitrile hydratase.
Directory of Open Access Journals (Sweden)
Yi Liu
Self-subunit swapping is one of the post-translational maturation steps of the cobalt-containing nitrile hydratase (Co-NHase) family of enzymes. All of these NHases possess a gene organization that allows the activator protein to easily form a mediatory complex with the α-subunit of the NHase after translation. Here, we discovered that the incorporation of cobalt into another gene type of Co-NHase was also dependent on self-subunit swapping. We successfully isolated a recombinant NHase activator protein (P14K) of Pseudomonas putida NRRL-18668 by adding a Strep-tag at the N-terminus of the P14K gene. P14K was found to form a complex [α(Strep)P14K2] with the α-subunit of the NHase. The incorporation of cobalt into the NHase of P. putida was confirmed to be dependent on α-subunit substitution between the cobalt-containing α(Strep)P14K2 and the cobalt-free NHase. Cobalt was inserted into cobalt-free α(Strep)P14K2 but not into cobalt-free NHase, suggesting that P14K functions not only as a self-subunit swapping chaperone but also as a metallochaperone. In addition, the NHase from P. putida was also expressed from a mutant gene designed with a different gene order. Our findings expand the general features of self-subunit swapping maturation.
Appropriate complexity landscape modeling
Larsen, Laurel G.; Eppinga, Maarten B.; Passalacqua, Paola; Getz, Wayne M.; Rose, Kenneth A.; Liang, Man
Advances in computing technology, new and ongoing restoration initiatives, concerns about climate change's effects, and the increasing interdisciplinarity of research have encouraged the development of landscape-scale mechanistic models of coupled ecological-geophysical systems. However,
Peplowski, Lukasz; Kubiak, Karina; Nowak, Wieslaw
2007-07-01
Nitrile hydratase (NHase) is an enzyme containing non-corrin Co3+ in a non-standard active site. The NHase from Pseudonocardia thermophila JCM 3095 catalyses the hydration of nitriles to the corresponding amides. The efficiency of the enzyme is 100 times higher for aliphatic nitriles than for aromatic ones. In order to better understand this selectivity, dockings of a series of aliphatic and aromatic nitriles and related amides into a model protein based on an X-ray structure were performed. Substantial differences in binding modes were observed, showing greater conformational freedom for aliphatic compounds. Distinct interactions with the post-translationally modified cysteines present in the active site of the enzyme were observed. Modeling shows that a water molecule activated by the metal ion may directly attack the docked acrylonitrile to transform this molecule into acrylamide. Thus docking studies provide support for one of the reaction mechanisms discussed in the literature.
Epidemic modeling in complex realities.
Colizza, Vittoria; Barthélemy, Marc; Barrat, Alain; Vespignani, Alessandro
2007-04-01
In our global world, the increasing complexity of social relations and transport infrastructures is a key factor in the spread of epidemics. In recent years, increasing computer power has made it possible both to obtain reliable data quantifying the complexity of the networks on which epidemics may propagate and to develop computational tools able to tackle the analysis of such propagation phenomena. These advances have exposed the limits of homogeneous assumptions and simple spatial diffusion approaches, and have stimulated the inclusion of complex features and heterogeneities relevant to the description of epidemic diffusion. In this paper, we review recent progress that integrates complex systems and network analysis with epidemic modelling, and focus on the impact of the various complex features of real systems on the dynamics of epidemic spreading.
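The limits of homogeneous assumptions mentioned above can be made concrete with the standard degree-based epidemic threshold for an uncorrelated network, λc = ⟨k⟩/(⟨k²⟩ − ⟨k⟩): a heavy-tailed degree distribution drives the threshold down. A minimal sketch (the two degree sequences are illustrative, not from the paper):

```python
def sir_threshold(degrees):
    """Epidemic threshold λc = <k> / (<k²> − <k>) for an SIR-type process
    on an uncorrelated network with the given degree sequence."""
    n = len(degrees)
    k1 = sum(degrees) / n
    k2 = sum(d * d for d in degrees) / n
    return k1 / (k2 - k1)

homogeneous = [6] * 1000                 # every node has degree 6
heterogeneous = [2] * 990 + [400] * 10   # a few highly connected hubs

# Hubs inflate <k²> and depress the threshold: the heterogeneous
# network is invaded at far lower transmissibility
print(sir_threshold(homogeneous), sir_threshold(heterogeneous))
```

Even though both networks have a similar mean degree, the hub-dominated one has a threshold roughly fifty times smaller, which is the core reason homogeneous mixing models underestimate spread on real contact networks.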
Directory of Open Access Journals (Sweden)
Youfeng Yang
synthesis for rapid proliferation and is essential for defense against increased oxidative stress. This increased NADPH producing PPP activity was shown to be a strong consistent feature in both fumarate hydratase deficient tumors and cell line models.
Biotransformation of benzonitrile herbicides via the nitrile hydratase-amidase pathway in rhodococci
Czech Academy of Sciences Publication Activity Database
Veselá, Alicja Barbara; Pelantová, Helena; Šulc, Miroslav; Macková, M.; Lovecká, P.; Thimová, P.; Pasquarelli, F.; Pičmanová, Martina; Pátek, Miroslav; Bhalla, T. C.; Martínková, Ludmila
2012-01-01
Roč. 39, č. 12 (2012), s. 1811-1819 ISSN 1367-5435 R&D Projects: GA MŠk OC09046; GA ČR(CZ) GAP504/11/0394; GA ČR GD305/09/H008 Keywords : Nitrile hydratase * Amidase * Benzonitrile herbicides Subject RIV: EE - Microbiology, Virology Impact factor: 2.321, year: 2012
Czech Academy of Sciences Publication Activity Database
Martínková, Ludmila; Veselá, Alicja Barbara; Rinágelová, Anna; Chmátal, Martin
2015-01-01
Roč. 99, č. 21 (2015), s. 8875-8882 ISSN 0175-7598 R&D Projects: GA TA ČR TA01021368; GA ČR(CZ) GAP504/11/0394 Institutional support: RVO:61388971 Keywords : Cyanide hydratase * Cyanide dihydratase * Enzyme production Subject RIV: CE - Biochemistry Impact factor: 3.376, year: 2015
Nitrile Hydratase CLEAs: The immobilization and stabilization of an industrially important enzyme
Czech Academy of Sciences Publication Activity Database
van Pelt, S.; Quignard, S.; Kubáč, David; Sorokin, D. Y.; van Rantwijk, F.; Sheldon, R. A.
2008-01-01
Roč. 10, č. 4 (2008), s. 395-400 ISSN 1463-9262 R&D Projects: GA MŠk OC D25.002 Institutional research plan: CEZ:AV0Z50200510 Keywords : nitrile hydratase * clea * cross-linking Subject RIV: CE - Biochemistry Impact factor: 4.542, year: 2008
Computational models of complex systems
Dabbaghian, Vahid
2014-01-01
Computational and mathematical models provide us with the opportunities to investigate the complexities of real world problems. They allow us to apply our best analytical methods to define problems in a clearly mathematical manner and exhaustively test our solutions before committing expensive resources. This is made possible by assuming parameter(s) in a bounded environment, allowing for controllable experimentation, not always possible in live scenarios. For example, simulation of computational models allows the testing of theories in a manner that is both fundamentally deductive and experimental in nature. The main ingredients for such research ideas come from multiple disciplines and the importance of interdisciplinary research is well recognized by the scientific community. This book provides a window to the novel endeavours of the research communities to present their works by highlighting the value of computational modelling as a research tool when investigating complex systems. We hope that the reader...
Complexity-aware simple modeling.
Gómez-Schiavon, Mariana; El-Samad, Hana
2018-02-26
Mathematical models continue to be essential for deepening our understanding of biology. On one extreme, simple or small-scale models help delineate general biological principles. However, the parsimony of detail in these models as well as their assumption of modularity and insulation make them inaccurate for describing quantitative features. On the other extreme, large-scale and detailed models can quantitatively recapitulate a phenotype of interest, but have to rely on many unknown parameters, making them often difficult to parse mechanistically and to use for extracting general principles. We discuss some examples of a new approach-complexity-aware simple modeling-that can bridge the gap between the small-scale and large-scale approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.
Complex Networks in Psychological Models
Wedemann, R. S.; Carvalho, L. S. A. V. D.; Donangelo, R.
We develop schematic, self-organizing, neural-network models to describe mechanisms associated with mental processes, by a neurocomputational substrate. These models are examples of real world complex networks with interesting general topological structures. Considering dopaminergic signal-to-noise neuronal modulation in the central nervous system, we propose neural network models to explain development of cortical map structure and dynamics of memory access, and unify different mental processes into a single neurocomputational substrate. Based on our neural network models, neurotic behavior may be understood as an associative memory process in the brain, and the linguistic, symbolic associative process involved in psychoanalytic working-through can be mapped onto a corresponding process of reconfiguration of the neural network. The models are illustrated through computer simulations, where we varied dopaminergic modulation and observed the self-organizing emergent patterns at the resulting semantic map, interpreting them as different manifestations of mental functioning, from psychotic through to normal and neurotic behavior, and creativity.
Complex fluids modeling and algorithms
Saramito, Pierre
2016-01-01
This book presents a comprehensive overview of the modeling of complex fluids, including many common substances, such as toothpaste, hair gel, mayonnaise, liquid foam, cement and blood, which cannot be described by Navier-Stokes equations. It also offers an up-to-date mathematical and numerical analysis of the corresponding equations, as well as several practical numerical algorithms and software solutions for the approximation of the solutions. It discusses industrial (molten plastics, forming process), geophysical (mud flows, volcanic lava, glaciers and snow avalanches), and biological (blood flows, tissues) modeling applications. This book is a valuable resource for undergraduate students and researchers in applied mathematics, mechanical engineering and physics.
Maksimov, A Iu; Kuznetsova, M V; Ovechkina, G V; Kozlov, S V; Maksimova, Iu G; Demakov, V A
2003-01-01
Effects of selected nitriles and amides, as well as glucose and ammonium, on the growth and nitrile hydratase (EC 4.2.1.84) activity of the Rhodococcus sp. strain gt1 isolated from soil were studied. Nitrile hydratase activity depended mainly on the carbon and nitrogen supply to the cells: it was high in the presence of moderate concentrations of glucose and ammonium, and decreased at glucose concentrations above 0.3%. Saturated, unsubstituted aliphatic nitriles and amides were found to be good sources of nitrogen and carbon. However, the presence of nitriles and amides in the medium was not strictly necessary for expression of nitrile hydratase activity in the Rhodococcus sp. strain gt1.
Rucká, Lenka; Volkova, Olga; Pavlík, Adam; Kaplan, Ondřej; Kracík, Martin; Nešvera, Jan; Martínková, Ludmila; Pátek, Miroslav
2014-06-01
Bacterial amidases and nitrile hydratases can be used for the synthesis of various intermediates and products in the chemical and pharmaceutical industries and for the bioremediation of toxic pollutants. The aim of this study was to analyze the expression of the amidase and nitrile hydratase genes of Rhodococcus erythropolis and to test the stereospecific nitrile hydratase and amidase activities on chiral cyanohydrins. The nucleotide sequences of the gene clusters containing the oxd (aldoxime dehydratase), ami (amidase), nha1, nha2 (subunits of the nitrile hydratase), and nhr1, nhr2, nhr3 and nhr4 (putative regulatory proteins) genes of two R. erythropolis strains, A4 and CCM2595, were determined. All genes of both clusters are transcribed in the same direction. RT-PCR analysis, primer extension and promoter fusions with the gfp reporter gene showed that the ami, nha1 and nha2 genes of R. erythropolis A4 form an operon transcribed from the Pami promoter and an internal Pnha promoter. The activity of Pami was weakly induced when the cells grew in the presence of acetonitrile, whereas the Pnha promoter was moderately induced by either acetonitrile or acetamide used in place of an inorganic nitrogen source. However, R. erythropolis A4 cells showed no increase in amidase and nitrile hydratase activities in the presence of acetamide or acetonitrile in the medium. R. erythropolis A4 nitrile hydratase and amidase were found to be effective at hydrolysing cyanohydrins and 2-hydroxyamides, respectively.
Model complexity control for hydrologic prediction
Schoups, G.; Van de Giesen, N.C.; Savenije, H.H.G.
2008-01-01
A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore
Directory of Open Access Journals (Sweden)
Jake A. LeVieux
2016-12-01
Polycyclic aromatic hydrocarbons (PAHs) are highly toxic, pervasive environmental pollutants with mutagenic, teratogenic, and carcinogenic properties. There is interest in exploiting the nutritional capabilities of microbes to remove PAHs from various environments, including those impacted by improper disposal or spills. Although there is a considerable body of literature on PAH degradation, the substrates and products for many of the enzymes have never been identified, and many proposed activities have never been confirmed. This is particularly true for high molecular weight PAHs (e.g., phenanthrene, fluoranthene, and pyrene). As a result, pathways for the degradation of these compounds are proposed to follow the one elucidated for naphthalene, with limited experimental verification. In this pathway, ring fission produces a species that can undergo a non-enzymatic cyclization reaction. An isomerase opens the ring and catalyzes a cis-to-trans double bond isomerization. The resulting product is the substrate for a hydratase-aldolase, which catalyzes the addition of water to the double bond of an α,β-unsaturated ketone, followed by a retro-aldol cleavage. Initial kinetic and mechanistic studies of the hydratase-aldolase in the naphthalene pathway (designated NahE) and two hydratase-aldolases in the phenanthrene pathway (PhdG and PhdJ) have been completed. Crystallographic work on two of the enzymes (NahE and PhdJ) provides a rudimentary picture of the mechanism and a platform for future work to identify the structural basis for catalysis and the individual specificities of these hydratase-aldolases.
An Aeroplysinin-1 Specific Nitrile Hydratase Isolated from the Marine Sponge Aplysina cavernicola
Directory of Open Access Journals (Sweden)
Peter Proksch
2013-08-01
A nitrile hydratase (NHase) that specifically accepts the nitrile aeroplysinin-1 (1) as a substrate and converts it into the dienone amide verongiaquinol (7) was isolated, partially purified and characterized from the Mediterranean sponge Aplysina cavernicola, although it is currently not known whether the enzyme is of sponge origin or produced by its symbiotic microorganisms. The formation of aeroplysinin-1 and of the corresponding dienone amide is part of the chemical defence system of A. cavernicola. The latter two compounds, which show strong antibiotic activity, originate from brominated isoxazoline alkaloids that are thought to protect the sponges from invasion by bacterial pathogens. The sponge was shown to contain at least two NHases, as two excised protein bands from a non-denaturing Blue Native gel showed nitrile hydratase activity, which was not observed for control samples. The enzymes were shown to be manganese-dependent, although cobalt and nickel ions were also able to recover the activity of the nitrile hydratases. The temperature and pH optima of the studied enzymes were found to be 41 °C and pH 7.8. The enzymes showed high substrate specificity towards the physiological substrate aeroplysinin-1 (1), since none of the substrate analogues prepared either by partial or by total synthesis were converted in an in vitro assay. Moreover, de novo sequencing by mass spectrometry was employed to obtain information about the primary structure of the studied NHases, which did not reveal any homology to known NHases.
Ferrous and ferric ions-based high-throughput screening strategy for nitrile hydratase and amidase.
Lin, Zhi-Jian; Zheng, Ren-Chao; Lei, Li-Hua; Zheng, Yu-Guo; Shen, Yin-Chu
2011-06-01
Rapid and direct screening of nitrile-converting enzymes is of great importance in the development of industrial biocatalytic processes for pharmaceuticals and fine chemicals. In this paper, a combination of ferrous and ferric ions was used to establish a novel colorimetric screening method for nitrile hydratase and amidase, with α-amino nitriles and α-amino amides as substrates, respectively. Ferrous and ferric ions reacted sequentially with the cyanide dissociated spontaneously from the α-amino nitrile solution, forming a characteristic deep blue precipitate. They were also sensitive to the weak basicity due to the presence of the amino amide, resulting in a yellow precipitate. When the amino amide was further hydrolyzed to the amino acid, it gave a light yellow solution. Mechanisms for the color changes were further proposed. Using this method, two isolates with nitrile hydratase activity towards 2-amino-2,3-dimethylbutyronitrile, one strain capable of hydrating 2-amino-4-(hydroxymethylphosphinyl)butyronitrile and another microbe exhibiting amidase activity against 2-amino-4-methylsulfanylbutyramide were obtained from soil samples and culture collections of our laboratory. The versatility of this method makes it the first direct and inexpensive high-throughput screening system for both nitrile hydratase and amidase. Copyright © 2011 Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
Yuan Yao Chen
2016-10-01
Lactobacilli convert linoleic acid to the antifungal compound 10-hydroxy-12-octadecenoic acid (10-HOE) by linoleate 10-hydratase (10-LAH). However, the effect of this conversion on cellular membrane physiology and the properties of the cell surface had not been demonstrated. Moreover, L. plantarum produces 13-hydroxy-9-octadecenoic acid (13-HOE) in addition to 10-HOE, but the antifungal activity of 13-HOE was unknown. Phylogenetic analyses conducted in this study did not differentiate between 10-LAH and linoleate 13-hydratase (13-LAH); thus, linoleate hydratases (LAHs) must be characterized through differences in their linoleate-converting activities. Four genes encoding putative LAHs from lactobacilli were cloned, heterologously expressed, purified and identified as FAD-dependent 10-LAHs. The unsaturated fatty acid substrates stimulated the growth of lactobacilli. We also investigated the role of 10-LAH in ethanol tolerance, membrane fluidity and cell-surface hydrophobicity in lactobacilli by disruption of 10-lah. Compared with the L. plantarum 10-lah deficient strain, 10-LAH in the wild-type strain did not affect cell survival or membrane fluidity under ethanol stress, but influenced cell-surface hydrophobicity. Moreover, deletion of 10-LAH in L. plantarum facilitated purification of 13-HOE and demonstration of its antifungal activity against Penicillium roquefortii and Aspergillus niger.
Nonparametric Bayesian Modeling of Complex Networks
DEFF Research Database (Denmark)
Schmidt, Mikkel Nørgaard; Mørup, Morten
2013-01-01
Modeling structure in complex networks using Bayesian nonparametrics makes it possible to specify flexible model structures and infer the adequate model complexity from the observed data. This article provides a gentle introduction to nonparametric Bayesian modeling of complex networks: using an infinite mixture model as a running example, we go through the steps of deriving the model as an infinite limit of a finite parametric model, inferring the model parameters by Markov chain Monte Carlo, and checking the model's fit and predictive performance. We explain how advanced nonparametric models …
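The infinite-limit construction described in this abstract can be illustrated with a short sketch: the Chinese restaurant process (CRP) is what a symmetric finite mixture becomes as the number of components goes to infinity, and it can be sampled directly. This is a generic Python illustration, not code from the article; the function name and the concentration parameter `alpha` are our own.

```python
import random
from collections import Counter

def crp_partition(n_items, alpha, seed=0):
    """Assign items to clusters via the Chinese restaurant process,
    the infinite limit of a symmetric finite mixture model."""
    rng = random.Random(seed)
    assignments = []
    counts = []  # counts[k] = number of items already in cluster k
    for i in range(n_items):
        # Item i joins existing cluster k w.p. counts[k]/(i + alpha)
        # and opens a new cluster w.p. alpha/(i + alpha).
        weights = counts + [alpha]
        k = rng.choices(range(len(weights)), weights=weights)[0]
        if k == len(counts):
            counts.append(1)   # new cluster opened
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments

labels = crp_partition(100, alpha=2.0)
print(Counter(labels))  # cluster sizes; larger alpha -> more clusters
```

Larger `alpha` yields more, smaller clusters; in a network model the cluster labels would then parameterize block-wise link probabilities inferred by MCMC.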
Czech Academy of Sciences Publication Activity Database
Kubáč, David; Kaplan, Ondřej; Elišáková, Veronika; Pátek, Miroslav; Vejvoda, Vojtěch; Slámová, Kristýna; Tóthová, A.; Lemaire, M.; Gallienne, E.; Lutz-Wahl, S.; Fischer, L.; Kuzma, Marek; Pelantová, Helena; van Pelt, S.; Bolte, J.; Křen, Vladimír; Martínková, Ludmila
2008-01-01
Roč. 50, 2-4 (2008), s. 107-113 ISSN 1381-1177 R&D Projects: GA ČR GA203/05/2267; GA MŠk(CZ) LC06010; GA MŠk OC 171 Grant - others:XE(XE) ESF COST D25/0002/02; CZ(CZ) D10-CZ25/06-07; CZ(CZ) D-25 Institutional research plan: CEZ:AV0Z50200510 Keywords : rhodococcus erythropolis * nitrile hydratase * amidase Subject RIV: EE - Microbiology, Virology Impact factor: 2.015, year: 2008
Identification and characterization of an oleate hydratase-encoding gene from Bifidobacterium breve.
O'Connell, Kerry Joan; Motherway, Mary O'Connell; Hennessey, Alan A; Brodhun, Florian; Ross, R Paul; Feussner, Ivo; Stanton, Catherine; Fitzgerald, Gerald F; van Sinderen, Douwe
2013-01-01
Bifidobacteria are common commensals of the mammalian gastrointestinal tract. Previous studies have suggested that a bifidobacterial myosin cross-reactive antigen (MCRA) protein plays a role in bacterial stress tolerance, while this protein has also been linked to the biosynthesis of conjugated linoleic acid (CLA) in bifidobacteria. In order to increase our understanding of the role of MCRA in bifidobacteria, we created and analyzed an insertion mutant of the MCRA-encoding gene of B. breve NCFB 2258. Our results demonstrate that the MCRA protein of B. breve NCFB 2258 does not appear to play a role in CLA production, yet is an oleate hydratase, which contributes to bifidobacterial solvent stress protection.
Osteosarcoma models : understanding complex disease
Mohseny, Alexander Behzad
2012-01-01
A mesenchymal stem cell (MSC) based osteosarcoma model was established. The model provided evidence for a MSC origin of osteosarcoma. Normal MSCs transformed spontaneously to osteosarcoma-like cells which was always accompanied by genomic instability and loss of the Cdkn2a locus. Accordingly loss of
Thermodynamic modeling of complex systems
DEFF Research Database (Denmark)
Liang, Xiaodong
after an oil spill. Engineering thermodynamics could be applied in state-of-the-art sonar products through advanced artificial technology, if the speed of sound, solubility and density of oil-seawater systems could be satisfactorily modelled. The addition of methanol or glycols into unprocessed well … is successfully applied to model the phase behaviour of water-, chemical- and hydrocarbon (oil)-containing systems with newly developed pure component parameters for water and chemicals and characterization procedures for petroleum fluids. The performance of the PC-SAFT EOS on liquid-liquid equilibria of water with hydrocarbons has been under debate for some years. An interactive step-wise procedure is proposed to fit the model parameters for small associating fluids by taking the liquid-liquid equilibrium data into account. It is still far from a simple task to apply PC-SAFT in routine PVT simulations and phase …
Johnson, William H; Wang, Susan C; Stanley, Thanuja M; Czerwinski, Robert M; Almrud, Jeffrey J; Poelarends, Gerrit J; Murzin, Alexey G; Whitman, Christian P
2004-01-01
A series of 2-fluoro-4-alkene and 2-fluoro-4-alkyne substrate analogues were synthesized and examined as potential inhibitors of three enzymes: 4-oxalocrotonate tautomerase (4-OT) and vinylpyruvate hydratase (VPH) from the catechol meta-fission pathway and a closely related 4-OT homologue found in
Czech Academy of Sciences Publication Activity Database
Rinágelová, Anna; Kaplan, Ondřej; Veselá, Alicja Barbara; Chmátal, Martin; Křenková, Alena; Plíhal, Ondřej; Pasquarelli, Fabrizia; Cantarella, M.; Martínková, Ludmila
2014-01-01
Roč. 49, č. 3 (2014), s. 445-450 ISSN 1359-5113 R&D Projects: GA ČR(CZ) GAP504/11/0394; GA TA ČR TA01021368 Institutional support: RVO:61388971 Keywords : Cyanide hydratase * Nitrilase * Aspergillus niger Subject RIV: CE - Biochemistry Impact factor: 2.516, year: 2014
Czech Academy of Sciences Publication Activity Database
Rucká, Lenka; Volkova, Olga; Pavlík, Adam; Kaplan, Ondřej; Kracík, M.; Nešvera, Jan; Martínková, Ludmila; Pátek, Miroslav
2014-01-01
Roč. 105, č. 6 (2014), s. 1179-1190 ISSN 0003-6072 R&D Projects: GA MŠk(CZ) LC06010; GA ČR(CZ) GAP504/11/0394 Institutional support: RVO:61388971 Keywords : Rhodococcus erythropolis * Amidase * Nitrile hydratase Subject RIV: EE - Microbiology, Virology Impact factor: 1.806, year: 2014
Czech Academy of Sciences Publication Activity Database
Martínková, Ludmila; Chmátal, Martin
2016-01-01
Roč. 102, October (2016), s. 90-95 ISSN 0043-1354 R&D Projects: GA TA ČR TA01021368; GA TA ČR(CZ) TA04021212; GA MŠk(CZ) LD12049 Institutional support: RVO:61388971 Keywords : Cyanide hydratase * Tyrosinase * Cyanide Subject RIV: CE - Biochemistry Impact factor: 6.942, year: 2016
Role models for complex networks
Reichardt, J.; White, D. R.
2007-11-01
We present a framework for automatically decomposing (“block-modeling”) the functional classes of agents within a complex network. These classes are represented by the nodes of an image graph (“block model”) depicting the main patterns of connectivity and thus functional roles in the network. Using a first principles approach, we derive a measure for the fit of a network to any given image graph allowing objective hypothesis testing. From the properties of an optimal fit, we derive how to find the best fitting image graph directly from the network and present a criterion to avoid overfitting. The method can handle both two-mode and one-mode data, directed and undirected as well as weighted networks and allows for different types of links to be dealt with simultaneously. It is non-parametric and computationally efficient. The concepts of structural equivalence and modularity are found as special cases of our approach. We apply our method to the world trade network and analyze the roles individual countries play in the global economy.
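As a rough illustration of the block-modeling idea in this abstract, the sketch below scores how well a role assignment and an image graph reproduce a network's links: an ordered pair (i, j) "fits" if the image graph allows a link between the roles of i and j exactly when a link is present. This is a simplified agreement score of our own, not the first-principles fit measure derived by Reichardt and White.

```python
import numpy as np

def blockmodel_fit(A, roles, B):
    """Fraction of ordered node pairs (i != j) whose (non-)link agrees
    with the image graph: a link i->j 'fits' iff B[roles[i], roles[j]] == 1.
    Simplified agreement score, not the authors' exact quality function."""
    A = np.asarray(A)
    n = A.shape[0]
    pred = B[np.ix_(roles, roles)]    # link pattern predicted by the roles
    agree = (A == pred)
    np.fill_diagonal(agree, False)    # ignore self-pairs
    return agree.sum() / (n * (n - 1))

# Toy two-role network with a perfectly bipartite-like image graph
A = np.array([[0, 0, 1, 1],
              [0, 0, 1, 1],
              [1, 1, 0, 0],
              [1, 1, 0, 0]])
B = np.array([[0, 1],
              [1, 0]])
roles = [0, 0, 1, 1]
print(blockmodel_fit(A, roles, B))  # 1.0 for a perfect fit
```

Maximizing such a score over role assignments (and comparing against a null model to avoid overfitting) is the spirit of the framework the abstract describes.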
Latimer, Scott; Li, Yubing; Nguyen, Thuong T H; Soubeyrand, Eric; Fatihi, Abdelhak; Elowsky, Christian G; Block, Anna; Pichersky, Eran; Basset, Gilles J
2018-05-09
The proteinogenic branched-chain amino acids (BCAAs) leucine, isoleucine and valine are essential nutrients for mammals. In plants, BCAAs double as alternative energy sources when carbohydrates become limiting, the catabolism of BCAAs providing electrons to the respiratory chain and intermediates to the tricarboxylic acid cycle. Yet, the actual architecture of the degradation pathways of BCAAs is not well understood. In this study, gene network modeling in Arabidopsis and rice, and plant-prokaryote comparative genomics, detected candidates for 3-methylglutaconyl-CoA hydratase (EC 4.2.1.18), one of the missing plant enzymes of leucine catabolism. Alignments of these protein candidates sampled from various spermatophytes revealed non-homologous N-terminal extensions that are lacking in their bacterial counterparts, and green fluorescent protein-fusion experiments demonstrated that the Arabidopsis protein, product of gene At4g16800, is targeted to mitochondria. Recombinant At4g16800 catalyzed the dehydration of 3-hydroxymethylglutaryl-CoA into 3-methylglutaconyl-CoA, and displayed kinetic features similar to those of its prokaryotic homolog. When at4g16800 knockout plants were subjected to dark-induced carbon starvation, their rosette leaves displayed accelerated senescence as compared to control plants, and this phenotype was paralleled by a marked increase in the accumulation of free and total leucine, isoleucine and valine. The seeds of the at4g16800 mutant showed a similar accumulation of free BCAAs. These data suggest that 3-methylglutaconyl-CoA hydratase is not solely involved in the degradation of leucine, but is also a significant contributor to that of isoleucine and valine. Furthermore, evidence is shown that, unlike the situation observed in Trypanosomatidae, leucine catabolism does not contribute to the formation of the terpenoid precursor mevalonate. This article is protected by copyright. All rights reserved.
Modelling the structure of complex networks
DEFF Research Database (Denmark)
Herlau, Tue
networks has been independently studied as mathematical objects in their own right. As such, there has been both an increased demand for statistical methods for complex networks and a quickly growing mathematical literature on the subject. In this dissertation we explore aspects of modelling complex … The next chapters will treat some of the various symmetries, representer theorems and probabilistic structures often deployed in the modelling of complex networks, the construction of sampling methods and various network models. The introductory chapters will serve to provide context for the included written …
Wang, Huizheng; Zhang, Kai; Zhu, Jie; Song, Weiwei; Zhao, Li; Zhang, Xiuguo
2013-01-01
Polyhydroxyalkanoates (PHAs) have attracted increasing attention as "green plastics" due to their biodegradable, biocompatible, thermoplastic, and mechanical properties, and considerable research has been undertaken to develop low-cost/high-efficiency processes for the production of PHAs. MaoC-like hydratase (MaoC), an (R)-hydratase involved in linking the β-oxidation and PHA biosynthetic pathways, has been identified recently. Understanding the regulatory mechanisms of (R)-hydratase catalysis is critical for efficient production of PHAs, which promise an environmentally friendly plastic. We have determined the crystal structure of a new MaoC identified from Phytophthora capsici. The crystal structure of the enzyme was solved at 2.00 Å resolution. The structure shows that MaoC has a canonical (R)-hydratase fold with an N-domain and a C-domain. Consistent with the dimer observed in the structure, MaoC forms a stable homodimer in solution. Mutations that disrupt the dimeric MaoC result in a complete loss of activity toward crotonyl-CoA, indicating that dimerization is required for the enzymatic activity of MaoC. Importantly, structure comparison reveals that a loop unique to MaoC interacts with an α-helix that harbors the catalytic residues of MaoC. Deletion of the loop enhances the enzymatic activity of MaoC, suggesting an inhibitory role in regulating the activity of MaoC. The data in our study reveal the regulatory mechanism of an (R)-hydratase, providing information for enzyme engineering to produce low-cost PHAs.
Computational Modeling of Complex Protein Activity Networks
Schivo, Stefano; Leijten, Jeroen; Karperien, Marcel; Post, Janine N.; Prignet, Claude
2017-01-01
Because of the numerous entities interacting, the complexity of the networks that regulate cell fate makes it impossible to analyze and understand them using the human brain alone. Computational modeling is a powerful method to unravel complex systems. We recently described the development of a
Models of complex attitude systems
DEFF Research Database (Denmark)
Sørensen, Bjarne Taulo
Existing research on public attitudes towards agricultural production systems is largely descriptive, abstracting from the processes through which members of the general public generate their evaluations of such systems. The present paper adopts a systems perspective on such evaluations … that evaluative affect propagates through the system in such a way that the system becomes evaluatively consistent and operates as a schema for the generation of evaluative judgments. In the empirical part of the paper, the causal structure of an attitude system from which people derive their evaluations of pork … search algorithms and structural equation models. The results suggest that evaluative judgments of the importance of production system attributes are generated in a schematic manner, driven by personal value orientations. The effect of personal value orientations was strong and largely unmediated …
Directory of Open Access Journals (Sweden)
Mitchell Douglas A
2010-05-01
Background: A new family of natural products has been described in which cysteine, serine and threonine from ribosomally produced peptides are converted to thiazoles, oxazoles and methyloxazoles, respectively. These metabolites and their biosynthetic gene clusters are now referred to as thiazole/oxazole-modified microcins (TOMM). As exemplified by microcin B17 and streptolysin S, TOMM precursors contain an N-terminal leader sequence and a C-terminal core peptide. The leader sequence contains binding sites for the posttranslational modifying enzymes, which subsequently act upon the core peptide. TOMM peptides are small and highly variable, frequently missed by gene-finders and occasionally situated far from the thiazole/oxazole-forming genes. Thus, locating a substrate for a particular TOMM pathway can be a challenging endeavor. Results: Examination of candidate TOMM precursors has revealed a subclass with an uncharacteristically long leader sequence closely related to the enzyme nitrile hydratase. Members of this nitrile hydratase leader peptide (NHLP) family lack the metal-binding residues required for catalysis. Instead, NHLP sequences display the classic Gly-Gly cleavage motif and have C-terminal regions rich in heterocyclizable residues. The NHLP family exhibits a correlated species distribution and local clustering with an ABC transport system. This study also provides evidence that a separate family, annotated as Nif11 nitrogen-fixing proteins, can serve as natural product precursors (N11P), but not always of the TOMM variety. Indeed, a number of cyanobacterial genomes show extensive N11P paralogous expansion, such as Nostoc, Prochlorococcus and Cyanothece, which replace the TOMM cluster with lanthionine biosynthetic machinery. Conclusions: This study has united numerous TOMM gene clusters with their cognate substrates. These results suggest that two large protein families, the nitrile hydratases and Nif11, have been retailored for
Modeling Musical Complexity: Commentary on Eerola (2016)
Directory of Open Access Journals (Sweden)
Joshua Albrecht
2016-07-01
In his paper, "Expectancy violation and information-theoretic models of melodic complexity," Eerola compares a number of models that correlate musical features of monophonic melodies with participant ratings of perceived melodic complexity. He finds that fairly strong results can be achieved using several different approaches to modeling perceived melodic complexity. The data used in this study are gathered from several previously published studies that use widely different types of melodies, including isochronous folk melodies, isochronous 12-tone rows, and rhythmically complex African folk melodies. This commentary first briefly reviews the article's method and main findings, then suggests a rethinking of the theoretical framework of the study. Finally, some of the methodological issues of the study are discussed.
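As a toy example of the information-theoretic approach discussed in the commentary, the Shannon entropy of a melody's interval distribution gives a crude complexity score: repetitive contours score low, varied ones high. This sketch is illustrative only and is not one of the models Eerola evaluates.

```python
import math
from collections import Counter

def interval_entropy(pitches):
    """Shannon entropy (bits) of the melodic-interval distribution:
    a minimal information-theoretic complexity measure (illustrative,
    not one of the models compared in the commented-on paper)."""
    intervals = [b - a for a, b in zip(pitches, pitches[1:])]
    counts = Counter(intervals)
    total = len(intervals)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

flat = [60, 62, 64, 62, 60, 62, 64, 62, 60]    # repetitive contour (MIDI pitches)
jumpy = [60, 67, 61, 72, 58, 70, 59, 66, 55]   # varied intervals
print(interval_entropy(flat) < interval_entropy(jumpy))  # True
```

Real expectancy-violation models condition each note on its context rather than using a static distribution, but the same information-theoretic quantity (surprisal/entropy) underlies them.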
A Role for Cytosolic Fumarate Hydratase in Urea Cycle Metabolism and Renal Neoplasia
Directory of Open Access Journals (Sweden)
Julie Adam
2013-05-01
The identification of mutated metabolic enzymes in hereditary cancer syndromes has established a direct link between metabolic dysregulation and cancer. Mutations in the Krebs cycle enzyme fumarate hydratase (FH) predispose affected individuals to leiomyomas, renal cysts, and cancers, though the respective pathogenic roles of the mitochondrial and cytosolic FH isoforms remain undefined. On the basis of comprehensive metabolomic analyses, we demonstrate that FH1-deficient cells and tissues exhibit defects in the urea cycle/arginine metabolism. Remarkably, transgenic re-expression of cytosolic FH ameliorated both renal cyst development and urea cycle defects associated with renal-specific FH1 deletion in mice. Furthermore, acute arginine depletion significantly reduced the viability of FH1-deficient cells in comparison to controls. Our findings highlight the importance of extramitochondrial metabolic pathways in FH-associated oncogenesis and the urea cycle/arginine metabolism as a potential therapeutic target.
Modeling complex work systems - method meets reality
van der Veer, Gerrit C.; Hoeve, Machteld; Lenting, Bert
1996-01-01
Modeling an existing task situation is often a first phase in the (re)design of information systems. For complex systems design, this model should consider both the people and the organization involved, the work, and situational aspects. Groupware Task Analysis (GTA) as part of a method for the
Fatigue modeling of materials with complex microstructures
DEFF Research Database (Denmark)
Qing, Hai; Mishnaevsky, Leon
2011-01-01
with the phenomenological model of fatigue damage growth. As a result, the fatigue lifetime of materials with complex structures can be determined as a function of the parameters of their structures. As an example, the fatigue lifetimes of wood modeled as a cellular material with multilayered, fiber reinforced walls were...
Updating the debate on model complexity
Simmons, Craig T.; Hunt, Randall J.
2012-01-01
As scientists who are trying to understand a complex natural world that cannot be fully characterized in the field, how can we best inform the society in which we live? This founding context was addressed in a special session, “Complexity in Modeling: How Much is Too Much?” convened at the 2011 Geological Society of America Annual Meeting. The session had a variety of thought-provoking presentations—ranging from philosophy to cost-benefit analyses—and provided some areas of broad agreement that were not evident in discussions of the topic in 1998 (Hunt and Zheng, 1999). The session began with a short introduction during which model complexity was framed borrowing from an economic concept, the Law of Diminishing Returns, and an example of enjoyment derived by eating ice cream. Initially, there is increasing satisfaction gained from eating more ice cream, to a point where the gain in satisfaction starts to decrease, ending at a point when the eater sees no value in eating more ice cream. A traditional view of model complexity is similar—understanding gained from modeling can actually decrease if models become unnecessarily complex. However, oversimplified models—those that omit important aspects of the problem needed to make a good prediction—can also limit and confound our understanding. Thus, the goal of all modeling is to find the “sweet spot” of model sophistication—regardless of whether complexity was added sequentially to an overly simple model or collapsed from an initial highly parameterized framework that uses mathematics and statistics to attain an optimum (e.g., Hunt et al., 2007). Thus, holistic parsimony is attained, incorporating “as simple as possible,” as well as the equally important corollary “but no simpler.”
Linehan, W. Marston; Rouault, Tracey A.
2015-01-01
Hereditary leiomyomatosis and renal cell carcinoma (HLRCC) is a hereditary cancer syndrome in which affected individuals are at risk for development of cutaneous and uterine leiomyomas and an aggressive form of type II papillary kidney cancer. HLRCC is characterized by germline mutation of the tricarboxylic acid cycle (TCA) enzyme, fumarate hydratase (FH). FH-deficient kidney cancer is characterized by impaired oxidative phosphorylation and a metabolic shift to aerobic glycolysis, a form of metabolic reprogramming referred to as the Warburg effect. Increased glycolysis generates ATP needed for increased cell proliferation. In FH-deficient kidney cancer, levels of AMPK, a cellular energy sensor, are decreased, resulting in diminished p53 levels, decreased expression of the iron importer DMT1 (leading to low cellular iron levels), and enhanced fatty acid synthesis through diminished phosphorylation of acetyl CoA carboxylase, a rate-limiting step for fatty acid synthesis. Increased fumarate and decreased iron levels in FH-deficient kidney cancer cells inactivate prolyl hydroxylases, leading to stabilization of HIF1α and increased expression of genes such as vascular endothelial growth factor (VEGF) and GLUT1 to provide fuel needed for rapid growth demands. Several therapeutic approaches for targeting the metabolic basis of FH-deficient kidney cancer are under development or are being evaluated in clinical trials, including the use of agents such as metformin, which would reverse the inactivation of AMPK; approaches to inhibit glucose transport, LDH-A, the anti-oxidant response pathway, and the heme oxygenase pathway; and approaches to target the tumor vasculature and glucose transport with agents such as bevacizumab and erlotinib. These same types of metabolic shifts, to aerobic glycolysis with decreased oxidative phosphorylation, have been found in a wide variety of other cancer types. Targeting the metabolic basis of a rare cancer such as fumarate hydratase
Rosner, B M; Schink, B
1995-10-01
Acetylene hydratase of the mesophilic fermenting bacterium Pelobacter acetylenicus catalyzes the hydration of acetylene to acetaldehyde. Growth of P. acetylenicus with acetylene and specific acetylene hydratase activity depended on tungstate or, to a lesser degree, molybdate supply in the medium. The specific enzyme activity in cell extract was highest after growth in the presence of tungstate. Enzyme activity was stable even after prolonged storage of the cell extract or of the purified protein under air. However, enzyme activity could be measured only in the presence of a strong reducing agent such as titanium(III) citrate or dithionite. The enzyme was purified 240-fold by ammonium sulfate precipitation, anion-exchange chromatography, size exclusion chromatography, and a second anion-exchange chromatography step, with a yield of 36%. The protein was a monomer with an apparent molecular mass of 73 kDa, as determined by sodium dodecyl sulfate-polyacrylamide gel electrophoresis. The isoelectric point was at pH 4.2. Per mol of enzyme, 4.8 mol of iron, 3.9 mol of acid-labile sulfur, and 0.4 mol of tungsten, but no molybdenum, were detected. The Km for acetylene, as assayed in a coupled photometric test with yeast alcohol dehydrogenase and NADH, was 14 µM, and the Vmax was 69 µmol·min⁻¹·mg of protein⁻¹. The optimum temperature for activity was 50 °C, and the apparent pH optimum was 6.0 to 6.5. The N-terminal amino acid sequence gave no indication of resemblance to any enzyme protein described so far.
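Using the constants reported above (Km = 14 µM, Vmax = 69 µmol·min⁻¹·mg⁻¹), the standard Michaelis-Menten rate law can be evaluated directly; a minimal sketch (the function name and the unit conventions are ours, not the paper's):

```python
# Michaelis-Menten rate curve with the kinetic constants reported for
# acetylene hydratase (Km = 14 uM, Vmax = 69 umol.min^-1.mg^-1).
def mm_rate(s_uM, km_uM=14.0, vmax=69.0):
    """Initial rate (umol/min/mg) at substrate concentration s_uM (uM)."""
    return vmax * s_uM / (km_uM + s_uM)

# At [acetylene] = Km the rate is half of Vmax by definition.
half = mm_rate(14.0)   # 34.5
sat = mm_rate(1400.0)  # ~99% of Vmax at 100 x Km
```

This is useful for sanity-checking the reported coupled-assay numbers: the enzyme is essentially saturated well below millimolar acetylene.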
Complexity, Modeling, and Natural Resource Management
Directory of Open Access Journals (Sweden)
Paul Cilliers
2013-09-01
This paper contends that natural resource management (NRM) issues are, by their very nature, complex and that both scientists and managers in this broad field will benefit from a theoretical understanding of complex systems. It starts off by presenting the core features of a view of complexity that not only deals with the limits to our understanding, but also points toward a responsible and motivating position. Everything we do involves explicit or implicit modeling, and as we can never have comprehensive access to any complex system, we need to be aware both of what we leave out as we model and of the implications of the choice of our modeling framework. One vantage point is never sufficient, as complexity necessarily implies that multiple (independent) conceptualizations are needed to engage the system adequately. We use two South African cases as examples of complex systems - restricting the case narratives mainly to the biophysical domain associated with NRM issues - to make the point that even the behavior of the biophysical subsystems themselves is already complex. From the insights into complex systems discussed in the first part of the paper and the lessons emerging from the way these cases have been dealt with in reality, we extract five interrelated generic principles for practicing science and management in complex NRM environments. These principles are then further elucidated using four further South African case studies - organized as two contrasting pairs - now focusing on the more difficult organizational and social side, and comparing the human organizational endeavors in managing such systems.
Multifaceted Modelling of Complex Business Enterprises.
Chakraborty, Subrata; Mengersen, Kerrie; Fidge, Colin; Ma, Lin; Lassen, David
2015-01-01
We formalise and present a new generic multifaceted complex-system approach for modelling complex business enterprises. Our method has a strong focus on integrating the various data types available in an enterprise, which represent the diverse perspectives of various stakeholders. We explain the challenges faced and define a novel approach to converting diverse data types into usable Bayesian probability forms. The data types that can be integrated include historic data, survey data, management planning data, expert knowledge and incomplete data. The structural complexities of the complex-system modelling process, based on various decision contexts, are also explained along with a solution. This new application of complex-system models as a management tool for decision making is demonstrated using a railway transport case study. The case study demonstrates how the new approach can be utilised to develop a customised decision support model for a specific enterprise. Various decision scenarios are also provided to illustrate the versatility of the decision model at different phases of enterprise operations such as planning and control.
Coffey, Lee; Owens, Erica; Tambling, Karen; O'Neill, David; O'Connor, Laura; O'Reilly, Catherine
2010-11-01
Nitriles are widespread in the environment as a result of biological and industrial activity. Nitrile hydratases catalyse the hydration of nitriles to the corresponding amide and are often associated with amidases, which catalyze the conversion of amides to the corresponding acids. Nitrile hydratases have potential as biocatalysts in bioremediation and biotransformation applications, and several successful examples demonstrate the advantages. In this work a real-time PCR assay was designed for the detection of Fe-type nitrile hydratase genes from environmental isolates purified from nitrile-enriched soils and seaweeds. Specific PCR primers were also designed for amplification and sequencing of the genes. Identical or highly homologous nitrile hydratase genes were detected from isolates of numerous genera from geographically diverse sites, as were numerous novel genes. The genes were also detected from isolates of genera not previously reported to harbour nitrile hydratases. The results provide further evidence that many bacteria have acquired the genes via horizontal gene transfer. The real-time PCR assay should prove useful in searching for nitrile hydratases that could have novel substrate specificities and therefore potential in industrial applications.
Kim, Kyoung-Rok; Oh, Hye-Jin; Park, Chul-Soon; Hong, Seung-Hye; Park, Ji-Young; Oh, Deok-Kun
2015-11-01
This study provides the first demonstration of a cis-12 regio-selective linoleate double-bond hydratase. Hydroxylation of fatty acids, an abundant feedstock in nature, is an emerging alternative route to many petroleum-replaceable products through hydroxy fatty acids, carboxylic acids, and lactones. However, chemical routes for selective hydroxylation remain quite challenging owing to low selectivity and many environmental concerns. Hydroxylation of fatty acids by hydroxy fatty acid-forming enzymes is an important route for selective biocatalytic oxyfunctionalization of fatty acids. Therefore, novel fatty acid hydroxylation enzymes should be discovered. Two hydratase genes of Lactobacillus acidophilus were identified by genomic analysis, and the two expressed recombinant hydratases were identified as cis-9 and cis-12 double-bond selective linoleate hydratases by in vitro functional validation, including the identification of products and the determination of regio-selectivity, substrate specificity, and kinetic parameters. These two different linoleate hydratases are the enzymes involved in 10,13-dihydroxyoctadecanoic acid biosynthesis. Linoleate 13-hydratase (LHT-13) selectively converted 10 mM linoleic acid to 13S-hydroxy-9(Z)-octadecenoic acid with high titer (8.1 mM) and yield (81%). Our study expands knowledge of microbial fatty acid-hydroxylation enzymes and will facilitate the designed production of regio-selective hydroxy fatty acids as useful chemicals from polyunsaturated fatty acid feedstocks. © 2015 Wiley Periodicals, Inc.
Modeling OPC complexity for design for manufacturability
Gupta, Puneet; Kahng, Andrew B.; Muddu, Swamy; Nakagawa, Sam; Park, Chul-Hong
2005-11-01
Increasing design complexity in sub-90nm designs results in increased mask complexity and cost. Resolution enhancement techniques (RET) such as assist feature addition, phase shifting (attenuated PSM) and aggressive optical proximity correction (OPC) help preserve feature fidelity in silicon but increase mask complexity and cost. The growth in data volume with rising mask complexity is becoming prohibitive for manufacturing. Mask cost is determined by mask write time and mask inspection time, which are directly related to the complexity of features printed on the mask. Aggressive RETs increase complexity by adding assist features and by modifying existing features. Passing design intent to OPC has been identified as a solution for reducing mask complexity and cost in several recent works. The goal of design-aware OPC is to relax OPC tolerances of layout features to minimize mask cost, without sacrificing parametric yield. To convey optimal OPC tolerances for manufacturing, design optimization should drive OPC tolerance optimization using models of mask cost for devices and wires. Design optimization should be aware of the impact of OPC correction levels on mask cost and performance of the design. This work introduces mask cost characterization (MCC), which quantifies OPC complexity, measured in terms of the fracture count of the mask, for different OPC tolerances. MCC with different OPC tolerances is a critical step in linking design and manufacturing. In this paper, we present a MCC methodology that provides models of the fracture count of standard cells and wire patterns for use in design optimization. MCC cannot be performed by designers, as they do not have access to foundry OPC recipes and RET tools. To build a fracture count model, we perform OPC and fracturing on a limited set of standard cells and wire configurations with all tolerance combinations. Separately, we identify the characteristics of the layout that impact fracture count. Based on the fracture count (FC) data
Sutherland models for complex reflection groups
International Nuclear Information System (INIS)
Crampe, N.; Young, C.A.S.
2008-01-01
There are known to be integrable Sutherland models associated to every real root system, or, which is almost equivalent, to every real reflection group. Real reflection groups are special cases of complex reflection groups. In this paper we associate certain integrable Sutherland models to the classical family of complex reflection groups. Internal degrees of freedom are introduced, defining dynamical spin chains, and the freezing limit is taken to obtain static chains of Haldane-Shastry type. By considering the relation of these models to the usual BC_N case, we are led to systems with both real and complex reflection groups as symmetries. We demonstrate their integrability by means of new Dunkl operators, associated to wreath products of dihedral groups.
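For orientation, the Sutherland model attached to the A_{N-1} root system (the standard textbook case that such constructions generalize, not the complex-reflection-group models introduced in the paper) is, in one common normalization,

```latex
H \;=\; -\sum_{i=1}^{N} \frac{\partial^2}{\partial x_i^2}
\;+\; \sum_{1 \le i < j \le N} \frac{g(g-1)}{\sin^{2}(x_i - x_j)} ,
```

where \(g\) is the coupling constant; integrability is typically exhibited through Dunkl operators, the same tool used in the paper's complex-reflection-group setting.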
Minimum-complexity helicopter simulation math model
Heffley, Robert K.; Mnich, Marc A.
1988-01-01
An example of a minimal-complexity simulation helicopter math model is presented. Motivating factors are the computational delays, cost, and inflexibility of the very sophisticated math models now in common use. A helicopter model form is given which addresses each of these factors and provides better engineering understanding of the specific handling-qualities features which are apparent to the simulator pilot. The technical approach begins with specification of the features which are to be modeled, followed by a build-up of individual vehicle components and definition of equations. Model matching and estimation procedures are given which enable the modeling of specific helicopters from basic data sources such as flight manuals. Checkout procedures are given which provide for total model validation. A number of possible model extensions and refinements are discussed. Math model computer programs are defined and listed.
Complex Systems and Self-organization Modelling
Bertelle, Cyrille; Kadri-Dahmani, Hakima
2009-01-01
The concern of this book is the use of emergent computing and self-organization modelling within various applications of complex systems. The authors focus their attention both on the innovative concepts and implementations in order to model self-organizations, but also on the relevant applicative domains in which they can be used efficiently. This book is the outcome of a workshop meeting within ESM 2006 (Eurosis), held in Toulouse, France in October 2006.
Geometric Modelling with α-Complexes
Gerritsen, B.H.M.; Werff, K. van der; Veltkamp, R.C.
2001-01-01
The shape of real objects can be so complicated that only a sampled data point set can accurately represent them. Analytic descriptions are too complicated or impossible. Natural objects, for example, can be vague and rough, with many holes. For this kind of modelling, α-complexes offer advantages
The Kuramoto model in complex networks
Rodrigues, Francisco A.; Peron, Thomas K. DM.; Ji, Peng; Kurths, Jürgen
2016-01-01
Synchronization of an ensemble of oscillators is an emergent phenomenon present in several complex systems, ranging from social and physical to biological and technological systems. The most successful approach to describing how coherent behavior emerges in these complex systems is given by the paradigmatic Kuramoto model. This model has traditionally been studied in complete graphs. However, besides being intrinsically dynamical, complex systems present very heterogeneous structure, which can be represented as complex networks. This report is dedicated to reviewing the main contributions in the field of synchronization in networks of Kuramoto oscillators. In particular, we provide an overview of the impact of network patterns on the local and global dynamics of coupled phase oscillators. We cover many relevant topics, which encompass a description of the most used analytical approaches and the analysis of several numerical results. Furthermore, we discuss recent developments on variations of the Kuramoto model in networks, including the presence of noise and inertia. The rich potential for applications is discussed for special fields in engineering, neuroscience, physics and Earth science. Finally, we conclude by discussing problems that remain open after the last decade of intensive research on the Kuramoto model and point out some promising directions for future research.
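A minimal numerical sketch of the Kuramoto model the report reviews, in its classic complete-graph form with the usual order parameter r (the coupling strength, frequency spread and integration scheme below are illustrative choices, not taken from the report):

```python
import numpy as np

# Euler integration of Kuramoto phase oscillators on a coupling matrix.
# The order parameter r = |<exp(i*theta)>| measures phase coherence.
def kuramoto(adj, omega, theta0, K=2.0, dt=0.01, steps=2000):
    theta = theta0.copy()
    for _ in range(steps):
        # pairwise sine coupling, restricted to the network's edges
        diff = np.sin(theta[None, :] - theta[:, None])
        theta += dt * (omega + K * (adj * diff).sum(axis=1))
    return theta

def order_parameter(theta):
    return abs(np.exp(1j * theta).mean())

rng = np.random.default_rng(0)
n = 20
adj = (np.ones((n, n)) - np.eye(n)) / n   # complete graph, 1/N scaling
omega = rng.normal(0.0, 0.1, n)           # narrow natural-frequency spread
theta = kuramoto(adj, omega, rng.uniform(0, 2 * np.pi, n), K=2.0)
r = order_parameter(theta)                # well above threshold: r near 1
```

Replacing `adj` with the adjacency matrix of a heterogeneous network is exactly the generalization the report surveys.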
A cognitive model for software architecture complexity
Bouwers, E.; Lilienthal, C.; Visser, J.; Van Deursen, A.
2010-01-01
Evaluating the complexity of the architecture of a software system is a difficult task. Many aspects have to be considered to come to a balanced assessment. Several architecture evaluation methods have been proposed, but very few define a quality model to be used during the evaluation process. In
Comparing flood loss models of different complexity
Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Riggelsen, Carsten; Scherbaum, Frank; Merz, Bruno
2013-04-01
Any deliberation on flood risk requires the consideration of potential flood losses. In particular, reliable flood loss models are needed to evaluate the cost-effectiveness of mitigation measures, to assess vulnerability, and for comparative risk analysis and financial appraisal during and after floods. In recent years, considerable improvements have been made both concerning the data basis and the methodological approaches used for the development of flood loss models. Despite that, flood loss models remain an important source of uncertainty. Likewise, the temporal and spatial transferability of flood loss models is still limited. This contribution investigates the predictive capability of flood loss models of different complexity in a split-sample, cross-regional validation approach. For this purpose, flood loss models of different complexity, i.e. based on different numbers of explanatory variables, are learned from a set of damage records obtained from a survey after the Elbe flood in 2002. The validation of model predictions is carried out for different flood events in the Elbe and Danube river basins in 2002, 2005 and 2006, for which damage records are available from surveys after the flood events. The models investigated are a stage-damage model, the rule-based model FLEMOps+r, as well as novel model approaches derived using the data mining techniques of regression trees and Bayesian networks. The Bayesian network approach to flood loss modelling provides attractive additional information concerning the probability distribution of both model predictions and explanatory variables.
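To make the simplest model family concrete: a stage-damage model maps inundation depth to relative loss through a single curve. The function below is a hypothetical illustration with an invented saturating shape and constants; real models such as FLEMOps+r also condition on building type, quality, and contamination:

```python
# Hypothetical stage-damage curve: relative loss in [0, 1] as a
# saturating function of water depth (illustrative constants only).
def stage_damage(depth_m, max_loss=1.0, scale=2.0):
    """Relative building loss at inundation depth depth_m (meters)."""
    if depth_m <= 0:
        return 0.0
    return max_loss * depth_m / (depth_m + scale)

loss = stage_damage(1.5)   # ~0.43 relative loss at 1.5 m depth
```

Models of higher complexity in the study's sense add further explanatory variables to such a depth-only predictor.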
Complex scaling in the cluster model
International Nuclear Information System (INIS)
Kruppa, A.T.; Lovas, R.G.; Gyarmati, B.
1987-01-01
To find the positions and widths of resonances, a complex scaling of the intercluster relative coordinate is introduced into the resonating-group model. In the generator-coordinate technique used to solve the resonating-group equation, the complex scaling requires minor changes in the formulae and code. The finding of the resonances does not need any preliminary guess or explicit reference to any asymptotic prescription. The procedure is applied to the resonances in the relative motion of two ground-state α clusters in ⁸Be, but is appropriate for any system consisting of two clusters. (author) 23 refs.; 5 figs
Modeling of anaerobic digestion of complex substrates
International Nuclear Information System (INIS)
Keshtkar, A. R.; Abolhamd, G.; Meyssami, B.; Ghaforian, H.
2003-01-01
A structured mathematical model of the anaerobic conversion of complex organic materials to biogas in non-ideally mixed cyclic-batch reactors has been developed. The model is based on multiple-reaction stoichiometry (enzymatic hydrolysis, acidogenesis, acetogenesis and methanogenesis), microbial growth kinetics, conventional material balances in the liquid and gas phases for a cyclic-batch reactor, liquid-gas interactions, liquid-phase equilibrium reactions and a simple mixing model which divides the reactor volume into two separate sections: the flow-through and the retention regions. The dynamic model describes the effects of reactants' distribution resulting from the mixing conditions, the time interval of feeding, the hydraulic retention time and the mixing parameters on process performance. The model is applied in the simulation of anaerobic digestion of cattle manure under different operating conditions. The model is compared with experimental data, and good correlations are obtained
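The microbial growth kinetics in such structured digestion models are typically of Monod type; a minimal single-substrate sketch (all names and constants below are illustrative, not the paper's parameterization):

```python
# Monod growth kinetics, the standard building block of structured
# anaerobic-digestion models. Units and constants are illustrative.
def step(x, s, mu_max=0.4, ks=0.5, y=0.1, dt=0.01):
    """One Euler step: biomass x grows on substrate s (arbitrary units)."""
    mu = mu_max * s / (ks + s)   # specific growth rate, 1/day
    dx = mu * x
    ds = -dx / y                 # substrate consumed per unit biomass yield
    return x + dt * dx, s + dt * ds

x, s = 0.05, 10.0
for _ in range(5000):            # simulate 50 days
    x, s = step(x, s)
# substrate is drawn down as biomass accumulates toward x0 + y*s0
```

A full digestion model couples several such growth terms (hydrolysis, acidogenesis, acetogenesis, methanogenesis) through shared intermediates and gas-phase balances.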
A Practical Philosophy of Complex Climate Modelling
Schmidt, Gavin A.; Sherwood, Steven
2014-01-01
We give an overview of the practice of developing and using complex climate models, as seen from experiences in a major climate modelling center and through participation in the Coupled Model Intercomparison Project (CMIP). We discuss the construction and calibration of models; their evaluation, especially through use of out-of-sample tests; and their exploitation in multi-model ensembles to identify biases and make predictions. We stress that adequacy or utility of climate models is best assessed via their skill against more naive predictions. The framework we use for making inferences about reality using simulations is naturally Bayesian (in an informal sense), and has many points of contact with more familiar examples of scientific epistemology. While the use of complex simulations in science is a development that changes much in how science is done in practice, we argue that the concepts being applied fit very much into traditional practices of the scientific method, albeit those more often associated with laboratory work.
Intrinsic Uncertainties in Modeling Complex Systems.
Energy Technology Data Exchange (ETDEWEB)
Cooper, Curtis S; Bramson, Aaron L.; Ames, Arlo L.
2014-09-01
Models are built to understand and predict the behaviors of both natural and artificial systems. Because it is always necessary to abstract away aspects of any non-trivial system being modeled, we know models can potentially leave out important, even critical elements. This reality of the modeling enterprise forces us to consider the prospective impacts of those effects completely left out of a model - either intentionally or unconsidered. Insensitivity to new structure is an indication of diminishing returns. In this work, we represent a hypothetical unknown effect on a validated model as a finite perturbation whose amplitude is constrained within a control region. We find robustly that without further constraints, no meaningful bounds can be placed on the amplitude of a perturbation outside of the control region. Thus, forecasting into unsampled regions is a very risky proposition. We also present inherent difficulties with proper time discretization of models and with representing inherently discrete quantities. We point out potentially worrisome uncertainties, arising from mathematical formulation alone, which modelers can inadvertently introduce into models of complex systems. Acknowledgements: This work has been funded under early-career LDRD project #170979, entitled "Quantifying Confidence in Complex Systems Models Having Structural Uncertainties", which ran from 04/2013 to 09/2014. We wish to express our gratitude to the many researchers at Sandia who contributed ideas to this work, as well as feedback on the manuscript. In particular, we would like to mention George Barr, Alexander Outkin, Walt Beyeler, Eric Vugrin, and Laura Swiler for providing invaluable advice and guidance through the course of the project. We would also like to thank Steven Kleban, Amanda Gonzales, Trevor Manzanares, and Sarah Burwell for their assistance in managing project tasks and resources.
Different Epidemic Models on Complex Networks
International Nuclear Information System (INIS)
Zhang Haifeng; Small, Michael; Fu Xinchu
2009-01-01
Models for disease spreading are not limited to SIS or SIR. For instance, for the spread of AIDS/HIV, susceptible individuals can be classified into different cases according to their immunity, and similarly, infected individuals can be sorted into different classes according to their infectivity. Moreover, some diseases may develop through several stages. Many authors have shown that individuals' relations can be viewed as a complex network. So in this paper, in order to better explain the dynamical behavior of epidemics, we consider different epidemic models on complex networks, and obtain the epidemic threshold for each case. Finally, we present numerical simulations for each case to verify our results.
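As background for the thresholds this abstract refers to: a standard mean-field result for SIS dynamics on a network (general textbook material, not a result of this paper) places the epidemic threshold at the inverse of the adjacency matrix's largest eigenvalue. A small sketch:

```python
import numpy as np

# Mean-field SIS epidemic threshold on an undirected network:
# tau_c = 1 / lambda_max(A), where A is the adjacency matrix.
def sis_threshold(adj):
    eigs = np.linalg.eigvalsh(adj)   # symmetric matrix -> real spectrum
    return 1.0 / eigs.max()

# Star graph on n+1 nodes: lambda_max = sqrt(n), so a hub lowers
# the threshold compared with, e.g., a ring (lambda_max = 2).
n = 16
star = np.zeros((n + 1, n + 1))
star[0, 1:] = star[1:, 0] = 1.0
tau = sis_threshold(star)            # 1/sqrt(16) = 0.25
```

The refined models discussed in the abstract (staged infection, heterogeneous infectivity) modify this threshold but keep its dependence on network structure.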
FRAM Modelling Complex Socio-technical Systems
Hollnagel, Erik
2012-01-01
There has not yet been a comprehensive method that goes behind 'human error' and beyond the failure concept, and various complicated accidents have accentuated the need for it. The Functional Resonance Analysis Method (FRAM) fulfils that need. This book presents a detailed and tested method that can be used to model how complex and dynamic socio-technical systems work, and understand both why things sometimes go wrong but also why they normally succeed.
Complex Constructivism: A Theoretical Model of Complexity and Cognition
Doolittle, Peter E.
2014-01-01
Education has long been driven by its metaphors for teaching and learning. These metaphors have influenced both educational research and educational practice. Complexity and constructivism are two theories that provide functional and robust metaphors. Complexity provides a metaphor for the structure of myriad phenomena, while constructivism…
Complex networks under dynamic repair model
Chaoqi, Fu; Ying, Wang; Kun, Zhao; Yangjun, Gao
2018-01-01
Invulnerability is not the only factor of importance when considering complex networks' security. It is also critical to have an effective and reasonable repair strategy. Existing research on network repair is confined to the static model. The dynamic model makes better use of the redundant capacity of repaired nodes and repairs the damaged network more efficiently than the static model; however, the dynamic repair model is complex and polytropic. In this paper, we construct a dynamic repair model and systematically describe the energy-transfer relationships between nodes in the repair process of the failure network. Nodes are divided into three types, corresponding to three structures. We find that the strong coupling structure is responsible for secondary failure of the repaired nodes and propose an algorithm that can select the most suitable targets (nodes or links) to repair the failure network with minimal cost. Two types of repair strategies are identified, with different effects under the two energy-transfer rules. The research results enable a more flexible approach to network repair.
From complex to simple: interdisciplinary stochastic models
International Nuclear Information System (INIS)
Mazilu, D A; Zamora, G; Mazilu, I
2012-01-01
We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions for certain physical quantities, such as the time dependence of the length of the microtubules, and diffusion coefficients. The second one is a stochastic adsorption model with applications in surface deposition, epidemics and voter systems. We introduce the ‘empty interval method’ and show sample calculations for the time-dependent particle density. These models can serve as an introduction to the field of non-equilibrium statistical physics, and can also be used as a pedagogical tool to exemplify standard statistical physics concepts, such as random walks or the kinetic approach of the master equation. (paper)
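The random-walk machinery behind the first model can be illustrated directly: for an unbiased one-dimensional lattice walk the mean-square displacement grows linearly in time, ⟨x²⟩ = 2Dt with D = 1/2 in lattice units (standard theory; the microtubule-specific results in the abstract build on this):

```python
import random

# Monte Carlo estimate of the mean-square displacement of an unbiased
# 1D random walk (step +/-1 per unit time), averaged over many walkers.
def msd(n_steps, n_walkers=4000, seed=1):
    rng = random.Random(seed)
    total = 0
    for _ in range(n_walkers):
        x = 0
        for _ in range(n_steps):
            x += rng.choice((-1, 1))
        total += x * x
    return total / n_walkers

m = msd(100)   # expected near 100, i.e. 2 * (1/2) * 100
```

The same simulation skeleton, with state-dependent step rates, underlies master-equation treatments like the empty interval method mentioned above.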
Martínková, Ludmila; Veselá, Alicja Barbara; Rinágelová, Anna; Chmátal, Martin
2015-11-01
The purpose of this study is to summarize the current knowledge of the enzymes which are involved in the hydrolysis of cyanide, i.e., cyanide hydratases (CHTs; EC 4.2.1.66) and cyanide dihydratases (CynD; EC 3.5.5.1). CHTs are probably exclusively produced by filamentous fungi and widely occur in these organisms; in contrast, CynDs were only found in a few bacterial genera. CHTs differ from CynDs in their reaction products (formamide vs. formic acid and ammonia, respectively). Several CHTs were also found to transform nitriles but with lower relative activities compared to HCN. Mutants of CynDs and CHTs were constructed to study the structure-activity relationships in these enzymes or to improve their catalytic properties. The effect of the C-terminal part of the protein on the enzyme activity was determined by constructing the corresponding deletion mutants. CynDs are less active at alkaline pH than CHTs. To improve its bioremediation potential, CynD from Bacillus pumilus was engineered by directed evolution combined with site-directed mutagenesis, and its operation at pH 10 was thus enabled. Some of the enzymes have been tested for their potential to eliminate cyanide from cyanide-containing wastewaters. CynDs were also used to construct cyanide biosensors.
Fumarate hydratase is a critical metabolic regulator of hematopoietic stem cell functions.
Guitart, Amelie V; Panagopoulou, Theano I; Villacreces, Arnaud; Vukovic, Milica; Sepulveda, Catarina; Allen, Lewis; Carter, Roderick N; van de Lagemaat, Louie N; Morgan, Marcos; Giles, Peter; Sas, Zuzanna; Gonzalez, Marta Vila; Lawson, Hannah; Paris, Jasmin; Edwards-Hicks, Joy; Schaak, Katrin; Subramani, Chithra; Gezer, Deniz; Armesilla-Diaz, Alejandro; Wills, Jimi; Easterbrook, Aaron; Coman, David; So, Chi Wai Eric; O'Carroll, Donal; Vernimmen, Douglas; Rodrigues, Neil P; Pollard, Patrick J; Morton, Nicholas M; Finch, Andrew; Kranc, Kamil R
2017-03-06
Strict regulation of stem cell metabolism is essential for tissue functions and tumor suppression. In this study, we investigated the role of fumarate hydratase (Fh1), a key component of the mitochondrial tricarboxylic acid (TCA) cycle and cytosolic fumarate metabolism, in normal and leukemic hematopoiesis. Hematopoiesis-specific Fh1 deletion (resulting in endogenous fumarate accumulation and a genetic TCA cycle block reflected by decreased maximal mitochondrial respiration) caused lethal fetal liver hematopoietic defects and hematopoietic stem cell (HSC) failure. Reexpression of extramitochondrial Fh1 (which normalized fumarate levels but not maximal mitochondrial respiration) rescued these phenotypes, indicating the causal role of cellular fumarate accumulation. However, HSCs lacking mitochondrial Fh1 (which had normal fumarate levels but defective maximal mitochondrial respiration) failed to self-renew and displayed lymphoid differentiation defects. In contrast, leukemia-initiating cells lacking mitochondrial Fh1 efficiently propagated Meis1/Hoxa9-driven leukemia. Thus, we identify novel roles for fumarate metabolism in HSC maintenance and hematopoietic differentiation and reveal a differential requirement for mitochondrial Fh1 in normal hematopoiesis and leukemia propagation. © 2017 Guitart et al.
Fumarate Hydratase Deletion in Pancreatic β Cells Leads to Progressive Diabetes
Directory of Open Access Journals (Sweden)
Julie Adam
2017-09-01
We explored the role of the Krebs cycle enzyme fumarate hydratase (FH) in glucose-stimulated insulin secretion (GSIS). Mice lacking Fh1 in pancreatic β cells (Fh1βKO mice) appear normal for 6–8 weeks but then develop progressive glucose intolerance and diabetes. Glucose tolerance is rescued by expression of mitochondrial or cytosolic FH but not by deletion of Hif1α or Nrf2. Progressive hyperglycemia in Fh1βKO mice led to dysregulated metabolism in β cells: a decrease in glucose-induced ATP production, electrical activity, cytoplasmic [Ca2+]i elevation, and GSIS. Fh1 loss resulted in elevated intracellular fumarate, promoting succination of critical cysteines in GAPDH, GMPR, and PARK7/DJ-1 and cytoplasmic acidification. Intracellular fumarate levels were increased in islets exposed to high glucose and in islets from human donors with type 2 diabetes (T2D). The impaired GSIS in islets from diabetic Fh1βKO mice was ameliorated after culture under normoglycemic conditions. These studies highlight the role of FH and dysregulated mitochondrial metabolism in T2D.
Liu, Yi; Liu, Ping; Lin, Lu; Zhao, Yueqin; Zhong, Wenjuan; Wu, Lunjie; Zhou, Zhemin; Sun, Weifeng
2016-09-01
The maturation mechanism of the nitrile hydratase (NHase) of Pseudomonas putida NRRL-18668 was discovered previously and named "self-subunit swapping." Since the NHase of Bordetella petrii DSM 12804 is similar to that of P. putida, NHase maturation in B. petrii is proposed to proceed in the same way. However, these findings had not yet been translated into applications of NHase. Here, we rapidly purified NHase and its activator via an affinity His-tag and found that cell extracts of NHase contained multiple protein species, including α, β, α2β2, and α(P14K)2, which exist in chemical equilibrium. Furthermore, the activity was significantly enhanced by adding extra α(P14K)2 to the cell extracts, exploiting this chemical equilibrium. Our findings are useful for enhancing the activity of multi-subunit enzymes and, for the first time, significantly increased NHase activity by shifting the chemical equilibrium.
A SIMULATION MODEL OF THE GAS COMPLEX
Directory of Open Access Journals (Sweden)
Sokolova G. E.
2016-06-01
The article considers the dynamics of gas production in Russia, the structure of sales in different market segments, and the comparative dynamics of selling prices across these segments. It then develops a simulation model of a gas complex that makes it possible to estimate the efficiency of the project and determine the stability region of the obtained solutions. The model accounts for scheduled loan repayment, so that the possibility of repaying the loan can be assessed from the first year of the simulation. The modeled object is a group of gas fields, characterized by the minimum flow rate above which the project is cost-effective. In determining the minimum flow rate, the discount rate is taken as the weighted average cost of debt and equity, including risk premiums; it also serves as the lower barrier for the internal rate of return, below which the project is rejected as ineffective. Analysis of historical dynamics and expert evaluation determine the intervals over which the simulated parameters vary, such as the gas price and the time for the complex to reach projected capacity. For each random realization of the simulated parameter values, Monte Carlo calculation yields the minimum well yield at which the project remains viable, and allows the stability region of the solution to be determined.
Structured analysis and modeling of complex systems
Strome, David R.; Dalrymple, Mathieu A.
1992-01-01
The Aircrew Evaluation Sustained Operations Performance (AESOP) facility at Brooks AFB, Texas, combines the realism of an operational environment with the control of a research laboratory. In recent studies we collected extensive data from the Airborne Warning and Control Systems (AWACS) Weapons Directors subjected to high and low workload Defensive Counter Air Scenarios. A critical and complex task in this environment involves committing a friendly fighter against a hostile fighter. Structured Analysis and Design techniques and computer modeling systems were applied to this task as tools for analyzing subject performance and workload. This technology is being transferred to the Man-Systems Division of NASA Johnson Space Center for application to complex mission related tasks, such as manipulating the Shuttle grappler arm.
Glass Durability Modeling, Activated Complex Theory (ACT)
International Nuclear Information System (INIS)
CAROL, JANTZEN
2005-01-01
The most important requirement for acceptance of high-level waste glass for disposal in a geological repository is chemical durability, expressed as a glass dissolution rate. During the early stages of glass dissolution under the near-static conditions that represent a repository disposal environment, a gel layer resembling a membrane forms on the glass surface, through which ions exchange between the glass and the leachant. The hydrated gel layer exhibits acid/base properties, which are manifested as the pH dependence of the thickness and nature of the gel layer. The gel layer has been found to age into either clay mineral assemblages or zeolite mineral assemblages. The preferential formation of one phase over the other has been experimentally related to changes in the pH of the leachant and to the relative amounts of Al3+ and Fe3+ in a glass. The formation of clay mineral assemblages on the leached glass surface layers (lower-pH, Fe3+-rich glasses) causes the dissolution rate to slow to a long-term steady-state rate. The formation of zeolite mineral assemblages on leached glass surface layers (higher-pH, Al3+-rich glasses) causes the dissolution rate to increase and return to the initial high forward rate. The return to the forward dissolution rate is undesirable for the long-term performance of glass in a disposal environment. An investigation into the role of glass stoichiometry, in terms of the quasi-crystalline mineral species in a glass, has shown that the chemistry and structure of the parent glass appear to control the activated surface complexes that form in the leached layers, and these mineral complexes (some Fe3+-rich and some Al3+-rich) play a role in whether clays or zeolites are the dominant species formed on the leached glass surface. The chemistry and structure, in terms of Q distributions of the parent glass, are well represented by the atomic ratios of the glass-forming components. Thus, glass dissolution modeling using simple
Chaos from simple models to complex systems
Cencini, Massimo; Vulpiani, Angelo
2010-01-01
Chaos: from simple models to complex systems aims to guide science and engineering students through chaos and nonlinear dynamics from classical examples to the most recent fields of research. The first part, intended for undergraduate and graduate students, is a gentle and self-contained introduction to the concepts and main tools for the characterization of deterministic chaotic systems, with emphasis on statistical approaches. The second part can be used as a reference by researchers as it focuses on more advanced topics including the characterization of chaos with tools of information theory
Complex singlet extension of the standard model
International Nuclear Information System (INIS)
Barger, Vernon; McCaskey, Mathew; Langacker, Paul; Ramsey-Musolf, Michael; Shaughnessy, Gabe
2009-01-01
We analyze a simple extension of the standard model (SM) obtained by adding a complex singlet to the scalar sector (cxSM). We show that the cxSM can contain one or two viable cold dark matter candidates and analyze the conditions on the parameters of the scalar potential that yield the observed relic density. When the cxSM potential contains a global U(1) symmetry that is both softly and spontaneously broken, it contains both a viable dark matter candidate and the ingredients necessary for a strong first order electroweak phase transition as needed for electroweak baryogenesis. We also study the implications of the model for discovery of a Higgs boson at the Large Hadron Collider.
Extension of association models to complex chemicals
DEFF Research Database (Denmark)
Avlund, Ane Søgaard
Summary of “Extension of association models to complex chemicals”. Ph.D. thesis by Ane Søgaard Avlund The subject of this thesis is application of SAFT type equations of state (EoS). Accurate and predictive thermodynamic models are important in many industries including the petroleum industry......; CPA and sPC-SAFT. Phase equilibrium and monomer fraction calculations with sPC-SAFT for methanol are used in the thesis to illustrate the importance of parameter estimation when using SAFT. Different parameter sets give similar pure component vapor pressure and liquid density results, whereas very...... association is presented in the thesis, and compared to the corresponding lattice theory. The theory for intramolecular association is then applied in connection with sPC-SAFT for mixtures containing glycol ethers. Calculations with sPC-SAFT (without intramolecular association) are presented for comparison...
The Model of Complex Structure of Quark
Liu, Rongwu
2017-09-01
In quantum chromodynamics, the quark is a point-like fundamental particle that carries mass, charge, color, and flavor; strong interaction between quarks takes place by exchanging intermediate particles, the gluons. An important consequence of this theory is that strong interaction is a short-range force with the features of "asymptotic freedom" and "quark confinement". To reveal the nature of strong interaction, the "bag" model of the vacuum and the "string" model of string theory were proposed in the context of quantum mechanics, but neither provides a clear interaction mechanism. This article formulates a new mechanism by proposing a model of the complex structure of the quark, which can be outlined as follows: (1) The quark (as well as the electron, etc.) is a complex structure composed of a fundamental particle (fundamental matter of mass and electricity) and a fundamental volume field (fundamental matter of flavor and color) which exists in the form of a limited volume; the fundamental particle lies at the center of the fundamental volume field and forms the "nucleus" of the quark. (2) Like the static electric force, the color field force between quarks has a classical form: it is proportional to the square of the color quantity carried by each color field and inversely proportional to the cross-sectional area of the overlapping color fields along the force direction; it has the properties of overlap, saturation, non-centrality, and constancy. (3) Any volume field deforms when interacting with another volume field, and the deformation force follows Hooke's law. (4) The phenomena of "asymptotic freedom" and "quark confinement" are the result of the color field force and the deformation force.
Clinical Complexity in Medicine: A Measurement Model of Task and Patient Complexity.
Islam, R; Weir, C; Del Fiol, G
2016-01-01
Complexity in medicine needs to be reduced to simple components in a way that is comprehensible to researchers and clinicians. Few studies in the current literature propose a measurement model that addresses both task and patient complexity in medicine. The objective of this paper is to develop an integrated approach to understand and measure clinical complexity by incorporating both task and patient complexity components focusing on the infectious disease domain. The measurement model was adapted and modified for the healthcare domain. Three clinical infectious disease teams were observed, audio-recorded and transcribed. Each team included an infectious diseases expert, one infectious diseases fellow, one physician assistant and one pharmacy resident fellow. The transcripts were parsed and the authors independently coded complexity attributes. This baseline measurement model of clinical complexity was modified in an initial set of coding processes and further validated in a consensus-based iterative process that included several meetings and email discussions by three clinical experts from diverse backgrounds from the Department of Biomedical Informatics at the University of Utah. Inter-rater reliability was calculated using Cohen's kappa. The proposed clinical complexity model consists of two separate components. The first is a clinical task complexity model with 13 clinical complexity-contributing factors and 7 dimensions. The second is the patient complexity model with 11 complexity-contributing factors and 5 dimensions. The measurement model for complexity encompassing both task and patient complexity will be a valuable resource for future researchers and industry to measure and understand complexity in healthcare.
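The abstract reports inter-rater reliability via Cohen's kappa. As a minimal sketch of that statistic alone (not the authors' coding pipeline), chance-corrected agreement between two raters over the same items can be computed as follows:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is the agreement expected by chance from each
    rater's marginal label frequencies."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[label] * cb[label] for label in set(ca) | set(cb)) / n ** 2
    return (observed - expected) / (1 - expected)
```

Perfect agreement gives kappa = 1, while agreement no better than chance gives kappa near 0.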
Reducing Spatial Data Complexity for Classification Models
International Nuclear Information System (INIS)
Ruta, Dymitr; Gabrys, Bogdan
2007-01-01
Intelligent data analytics gradually becomes a day-to-day reality of today's businesses. However, despite rapidly increasing storage and computational power, current state-of-the-art predictive models still cannot handle massive and noisy corporate data warehouses. What is more, adaptive and real-time operational environments require multiple models to be frequently retrained, which further hinders their use. Various data reduction techniques, ranging from data sampling up to density retention models, attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. Our response is a proposition of a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. On the example of the Parzen Labelled Data Compressor (PLDC) we demonstrate a simulatory data condensation process directly inspired by electrostatic field interaction, where the data are moved and merged following the attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data, which acts as a class-sensitive potential field ensuring preservation of the original class density distributions, yet allowing data to rearrange and merge, joining together their soft class partitions. As a result we achieved a model that reduces the labelled datasets much further than any competitive approaches, yet with maximum retention of the original class densities and hence the classification performance. PLDC leaves the reduced dataset with soft accumulative class weights allowing for efficient online updates, and, as shown in a series of experiments, if coupled with the Parzen Density Classifier (PDC) it significantly outperforms competitive data condensation methods in terms of classification performance at the
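The abstract couples the condensed data with a Parzen Density Classifier (PDC). A minimal sketch of Parzen-window classification in one dimension with Gaussian kernels (the PLDC condensation step itself is not reproduced; function names are illustrative):

```python
import math

def parzen_density(x, data, h):
    """Parzen-window (Gaussian kernel) density estimate at point x,
    with bandwidth h, from a list of 1D samples."""
    norm = 1.0 / (len(data) * h * math.sqrt(2 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data)

def parzen_classify(x, classes, h=0.5):
    """Assign x to the class whose kernel density estimate is highest.
    `classes` maps class label -> list of training samples."""
    return max(classes, key=lambda label: parzen_density(x, classes[label], h))
```

With the condensed, weighted dataset the abstract describes, the per-class sample lists would simply carry the accumulated soft class weights; the plain unweighted version above conveys the classification rule.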
On sampling and modeling complex systems
International Nuclear Information System (INIS)
Marsili, Matteo; Mastromatteo, Iacopo; Roudi, Yasser
2013-01-01
The study of complex systems is limited by the fact that only a few variables are accessible for modeling and sampling, which are not necessarily the most relevant ones to explain the system behavior. In addition, empirical data typically undersample the space of possible states. We study a generic framework where a complex system is seen as a system of many interacting degrees of freedom, which are known only in part, that optimize a given function. We show that the underlying distribution with respect to the known variables has the Boltzmann form, with a temperature that depends on the number of unknown variables. In particular, when the influence of the unknown degrees of freedom on the known variables is not too irregular, the temperature decreases as the number of variables increases. This suggests that models can be predictable only when the number of relevant variables is less than a critical threshold. Concerning sampling, we argue that the information that a sample contains on the behavior of the system is quantified by the entropy of the frequency with which different states occur. This allows us to characterize the properties of maximally informative samples: within a simple approximation, the most informative frequency size distributions have power law behavior and Zipf’s law emerges at the crossover between the undersampled regime and the regime where the sample contains enough statistics to make inferences on the behavior of the system. These ideas are illustrated in some applications, showing that they can be used to identify relevant variables or to select the most informative representations of data, e.g. in data clustering.
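As a small illustration of the sampling criterion described above (assumed notation, not the authors' code), the information a sample carries can be scored by the Shannon entropy of the empirical frequencies with which its states occur:

```python
import math
from collections import Counter

def sample_entropy(samples):
    """Shannon entropy (in nats) of the empirical state-frequency
    distribution of a sample: H = -sum_s p(s) ln p(s)."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in counts.values())
```

A sample in which every observation is a distinct state reaches the maximum entropy ln(n), while a sample stuck in a single state has entropy 0; the informative regime the abstract describes sits between these extremes.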
Modeling the Structure and Complexity of Engineering Routine Design Problems
Jauregui Becker, Juan Manuel; Wits, Wessel Willems; van Houten, Frederikus J.A.M.
2011-01-01
This paper proposes a model to structure routine design problems as well as a model of its design complexity. The idea is that having a proper model of the structure of such problems enables understanding its complexity, and likewise, a proper understanding of its complexity enables the development
Modeling competitive substitution in a polyelectrolyte complex
International Nuclear Information System (INIS)
Peng, B.; Muthukumar, M.
2015-01-01
We have simulated the invasion of a polyelectrolyte complex made of a polycation chain and a polyanion chain, by another longer polyanion chain, using the coarse-grained united atom model for the chains and the Langevin dynamics methodology. Our simulations reveal many intricate details of the substitution reaction in terms of conformational changes of the chains and competition between the invading chain and the chain being displaced for the common complementary chain. We show that the invading chain is required to be sufficiently longer than the chain being displaced for effecting the substitution. Yet, having the invading chain to be longer than a certain threshold value does not reduce the substitution time much further. While most of the simulations were carried out in salt-free conditions, we show that presence of salt facilitates the substitution reaction and reduces the substitution time. Analysis of our data shows that the dominant driving force for the substitution process involving polyelectrolytes lies in the release of counterions during the substitution
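A minimal sketch of the Langevin dynamics methodology the abstract relies on, reduced to a single overdamped degree of freedom in a harmonic well (the authors' coarse-grained chains, electrostatics, and counterions are omitted; all parameters are illustrative):

```python
import math
import random

def langevin_trajectory(steps=20000, dt=0.01, k=1.0, gamma=1.0, kT=1.0, seed=1):
    """Overdamped Langevin dynamics in a harmonic well U(x) = k x^2 / 2:
        x(t+dt) = x - (k x / gamma) dt + sqrt(2 kT dt / gamma) * N(0, 1).
    At equilibrium the sampled positions should have variance kT / k."""
    rng = random.Random(seed)
    noise = math.sqrt(2 * kT * dt / gamma)
    x, xs = 0.0, []
    for _ in range(steps):
        x += -(k * x / gamma) * dt + noise * rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs
```

The same update rule, applied per bead with bonded and electrostatic forces added to the gradient term, is the basis of the chain simulations the abstract describes.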
Modelling of information processes management of educational complex
Directory of Open Access Journals (Sweden)
Оксана Николаевна Ромашкова
2014-12-01
This work concerns the information model of an educational complex that includes several schools. A classification of the educational complexes formed in Moscow is given. The existing organizational structure of the educational complex is examined, and a matrix management structure is proposed. The basic management information processes of the educational complex are conceptualized.
Sandpile model for relaxation in complex systems
International Nuclear Information System (INIS)
Vazquez, A.; Sotolongo-Costa, O.; Brouers, F.
1997-10-01
The relaxation in complex systems is, in general, nonexponential. After an initial rapid decay the system relaxes slowly, following a long-time tail. In the present paper a sandpile model of relaxation in complex systems is analysed. Complexity is introduced by a process of avalanches in the Bethe lattice and a feedback mechanism which leads to slower decay with increasing time. In this way, some features of relaxation in complex systems (long-time-tail relaxation, aging, and a fractal distribution of characteristic times) are obtained by simple computer simulations.
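A minimal sketch of sandpile-style avalanche relaxation, on a 1D chain with open boundaries rather than the Bethe lattice with feedback used in the paper (names and the threshold are illustrative):

```python
def topple(grid, threshold=2):
    """Relax a 1D sandpile in place: any site holding at least `threshold`
    grains sends one grain to each neighbour; grains pushed past the ends
    leave the system. Returns the avalanche size (number of topplings)."""
    size = 0
    active = True
    while active:
        active = False
        for i, h in enumerate(grid):
            if h >= threshold:
                grid[i] -= 2
                if i > 0:
                    grid[i - 1] += 1
                if i < len(grid) - 1:
                    grid[i + 1] += 1
                size += 1
                active = True
    return size
```

Driving such a pile by adding grains at random sites and recording avalanche sizes yields the broad distributions of relaxation events characteristic of this class of models.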
Modeling Complex Chemical Systems: Problems and Solutions
van Dijk, Jan
2016-09-01
Non-equilibrium plasmas in complex gas mixtures are at the heart of numerous contemporary technologies. They typically contain dozens to hundreds of species, involved in hundreds to thousands of reactions. Chemists and physicists have always been interested in what are now called chemical reduction techniques (CRT's). The idea of such CRT's is that they reduce the number of species that need to be considered explicitly without compromising the validity of the model. This is usually achieved on the basis of an analysis of the reaction time scales of the system under study, which identifies species that are in partial equilibrium after a given time span. The first such CRT that has been widely used in plasma physics was developed in the 1960's and resulted in the concept of effective ionization and recombination rates. It was later generalized to systems in which multiple levels are affected by transport. In recent years there has been a renewed interest in tools for chemical reduction and reaction pathway analysis. An example of the latter is the PumpKin tool. Another trend is that techniques that have previously been developed in other fields of science are adapted as to be able to handle the plasma state of matter. Examples are the Intrinsic Low Dimension Manifold (ILDM) method and its derivatives, which originate from combustion engineering, and the general-purpose Principle Component Analysis (PCA) technique. In this contribution we will provide an overview of the most common reduction techniques, then critically assess the pros and cons of the methods that have gained most popularity in recent years. Examples will be provided for plasmas in argon and carbon dioxide.
Modeling the Chemical Complexity in Titan's Atmosphere
Vuitton, Veronique; Yelle, Roger; Klippenstein, Stephen J.; Horst, Sarah; Lavvas, Panayotis
2018-06-01
Titan's atmospheric chemistry is extremely complicated because of the multiplicity of chemical as well as physical processes involved. Chemical processes begin with the dissociation and ionization of the most abundant species, N2 and CH4, by a variety of energy sources, i.e. solar UV and X-ray photons and suprathermal electrons, followed by reactions involving radicals as well as positive and negative ions, all possibly in excited electronic and vibrational states. Heterogeneous chemistry at the surface of the aerosols could also play a significant role. The efficiency and outcome of these reactions depend strongly on the physical characteristics of the atmosphere, namely pressure and temperature, ranging from 1.5×10^3 to 10^-10 mbar and from 70 to 200 K, respectively. Moreover, the distribution of the species is affected by molecular diffusion and winds as well as escape from the top of the atmosphere and condensation in the lower stratosphere. Photochemical and microphysical models are the keystones of our understanding of Titan's atmospheric chemistry. Their main objective is to compute the distribution and nature of minor chemical species (typically containing up to 6 carbon atoms) and haze particles, respectively. Density profiles are compared to the available observations, allowing important processes to be identified and highlighting those that remain to be constrained in the laboratory, experimentally and/or theoretically. We argue that positive ion chemistry is at the origin of complex organic molecules, such as benzene, ammonia and hydrogen isocyanide, while neutral-neutral radiative association reactions are a significant source of alkanes. We find that negatively charged macromolecules (m/z ~100) attract the abundant positive ions, which ultimately leads to the formation of the aerosols. We also discuss the possibility that an incoming flux of oxygen from Enceladus, another satellite of Saturn, is responsible for the presence of oxygen-bearing species in Titan's reductive atmosphere.
Modelling the complex dynamics of vegetation, livestock and rainfall ...
African Journals Online (AJOL)
In this paper, we present mathematical models that incorporate ideas from complex systems theory to integrate several strands of rangeland theory in a hierarchical framework. ... Keywords: catastrophe theory; complexity theory; disequilibrium; hysteresis; moving attractors
Generative complexity of Gray-Scott model
Adamatzky, Andrew
2018-03-01
In the Gray-Scott reaction-diffusion system one reactant is constantly fed into the system, while another reactant is reproduced by consuming the supplied reactant and is also converted to an inert product. The rate of feeding one reactant into the system and the rate of removing the other from the system determine the configurations of the concentration profiles: stripes, spots, waves. We calculate the generative complexity (a morphological complexity of concentration profiles grown from a point-wise perturbation of the medium) of the Gray-Scott system for a range of the feeding and removal rates. The morphological complexity is evaluated using Shannon entropy, Simpson diversity, an approximation of Lempel-Ziv complexity, and expressivity (Shannon entropy divided by space-filling). We analyse the behaviour of the systems with the highest values of generative morphological complexity and show that the Gray-Scott systems expressing the highest levels of complexity are composed of wave-fragments (similar to wave-fragments in sub-excitable media) and travelling localisations (similar to quasi-dissipative solitons and gliders in Conway's Game of Life).
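A minimal sketch of the two ingredients named in the abstract, assuming a 1D periodic domain and illustrative parameter values (the paper works in 2D and uses several complexity measures; only the Shannon-entropy scoring of a profile is shown):

```python
import math

def gray_scott_step(u, v, Du=0.16, Dv=0.08, F=0.035, k=0.065, dt=1.0):
    """One explicit Euler step of the 1D Gray-Scott model with periodic
    boundaries:  du/dt = Du*lap(u) - u*v^2 + F*(1 - u)
                 dv/dt = Dv*lap(v) + u*v^2 - (F + k)*v"""
    n = len(u)
    lap = lambda a, i: a[(i - 1) % n] - 2 * a[i] + a[(i + 1) % n]
    new_u = [u[i] + dt * (Du * lap(u, i) - u[i] * v[i] ** 2 + F * (1 - u[i]))
             for i in range(n)]
    new_v = [v[i] + dt * (Dv * lap(v, i) + u[i] * v[i] ** 2 - (F + k) * v[i])
             for i in range(n)]
    return new_u, new_v

def shannon_entropy(values, bins=8):
    """Shannon entropy (nats) of a binned concentration profile: the
    simplest of the morphological complexity measures mentioned above."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for x in values:
        counts[min(int((x - lo) / width), bins - 1)] += 1
    n = len(values)
    return -sum(c / n * math.log(c / n) for c in counts if c)
```

Sweeping F and k while scoring the grown profiles with such measures reproduces, in spirit, the complexity maps described in the abstract; a flat (unperturbed) profile scores zero entropy.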
Modeling Complex Workflow in Molecular Diagnostics
Gomah, Mohamed E.; Turley, James P.; Lu, Huimin; Jones, Dan
2010-01-01
One of the hurdles to achieving personalized medicine has been implementing the laboratory processes for performing and reporting complex molecular tests. The rapidly changing test rosters and complex analysis platforms in molecular diagnostics have meant that many clinical laboratories still use labor-intensive manual processing and testing without the level of automation seen in high-volume chemistry and hematology testing. We provide here a discussion of design requirements and the results of implementation of a suite of lab management tools that incorporate the many elements required for use of molecular diagnostics in personalized medicine, particularly in cancer. These applications provide the functionality required for sample accessioning and tracking, material generation, and testing that are particular to the evolving needs of individualized molecular diagnostics. On implementation, the applications described here resulted in improvements in the turn-around time for reporting of more complex molecular test sets, and significant changes in the workflow. Therefore, careful mapping of workflow can permit design of software applications that simplify even the complex demands of specialized molecular testing. By incorporating design features for order review, software tools can permit a more personalized approach to sample handling and test selection without compromising efficiency. PMID:20007844
Complex systems modeling by cellular automata
Kroc, J.; Sloot, P.M.A.; Rabuñal Dopico, J.R.; Dorado de la Calle, J.; Pazos Sierra, A.
2009-01-01
In recent years, the notion of complex systems proved to be a very useful concept to define, describe, and study various natural phenomena observed in a vast number of scientific disciplines. Examples of scientific disciplines that highly benefit from this concept range from physics, mathematics,
Modeling pitch perception of complex tones
Houtsma, A.J.M.
1986-01-01
When one listens to a series of harmonic complex tones that have no acoustic energy at their fundamental frequencies, one usually still hears a melody that corresponds to those missing fundamentals. Since it has become evident some two decades ago that neither Helmholtz's difference tone theory nor
The utility of Earth system Models of Intermediate Complexity
Weber, S.L.
2010-01-01
Intermediate-complexity models are models which describe the dynamics of the atmosphere and/or ocean in less detail than conventional General Circulation Models (GCMs). At the same time, they go beyond the approach taken by atmospheric Energy Balance Models (EBMs) or ocean box models by
Advances in dynamic network modeling in complex transportation systems
Ukkusuri, Satish V
2013-01-01
This book focuses on the latest in dynamic network modeling, including route guidance and traffic control in transportation systems and other complex infrastructure networks. Covers dynamic traffic assignment, flow modeling, mobile sensor deployment and more.
Narrowing the gap between network models and real complex systems
Viamontes Esquivel, Alcides
2014-01-01
Simple network models that focus only on graph topology or, at best, basic interactions are often insufficient to capture all the aspects of a dynamic complex system. In this thesis, I explore those limitations, and some concrete methods of resolving them. I argue that, in order to succeed at interpreting and influencing complex systems, we need to take into account slightly more complex parts, interactions and information flows in our models. This thesis supports that claim with five a...
Uncertainty and Complexity in Mathematical Modeling
Cannon, Susan O.; Sanders, Mark
2017-01-01
Modeling is an effective tool to help students access mathematical concepts. Finding a math teacher who has not drawn a fraction bar or pie chart on the board would be difficult, as would finding students who have not been asked to draw models and represent numbers in different ways. In this article, the authors will discuss: (1) the properties of…
Information, complexity and efficiency: The automobile model
Energy Technology Data Exchange (ETDEWEB)
Allenby, B. [Lucent Technologies (United States); Lawrence Livermore National Lab., CA (United States)]
1996-08-08
The new, rapidly evolving field of industrial ecology - the objective, multidisciplinary study of industrial and economic systems and their linkages with fundamental natural systems - provides strong ground for believing that a more environmentally and economically efficient economy will be more information intensive and complex. Information and intellectual capital will be substituted for the more traditional inputs of materials and energy in producing a desirable, yet sustainable, quality of life. While at this point this remains a strong hypothesis, the evolution of the automobile industry can be used to illustrate how such substitution may, in fact, already be occurring in an environmentally and economically critical sector.
Modeling Power Systems as Complex Adaptive Systems
Energy Technology Data Exchange (ETDEWEB)
Chassin, David P.; Malard, Joel M.; Posse, Christian; Gangopadhyaya, Asim; Lu, Ning; Katipamula, Srinivas; Mallow, J V.
2004-12-30
Physical analogs have shown considerable promise for understanding the behavior of complex adaptive systems, including macroeconomics, biological systems, social networks, and electric power markets. Many of today's most challenging technical and policy questions can be reduced to a distributed economic control problem. Indeed, economically based control of large-scale systems is founded on the conjecture that the price-based regulation (e.g., auctions, markets) results in an optimal allocation of resources and emergent optimal system control. This report explores the state-of-the-art physical analogs for understanding the behavior of some econophysical systems and deriving stable and robust control strategies for using them. We review and discuss applications of some analytic methods based on a thermodynamic metaphor, according to which the interplay between system entropy and conservation laws gives rise to intuitive and governing global properties of complex systems that cannot be otherwise understood. We apply these methods to the question of how power markets can be expected to behave under a variety of conditions.
Linking Complexity and Sustainability Theories: Implications for Modeling Sustainability Transitions
Directory of Open Access Journals (Sweden)
Camaren Peter
2014-03-01
In this paper, we deploy complexity theory as the foundation for integrating different theoretical approaches to sustainability, and develop a rationale for a complexity-based framework for modeling transitions to sustainability. We propose a framework based on a comparison of the complex-systems properties that characterize the different theories dealing with transitions to sustainability. We argue that adopting a complexity-theory-based approach for modeling transitions requires going beyond deterministic frameworks, by adopting a probabilistic, integrative, inclusive and adaptive approach that can support transitions. We also illustrate how this complexity-based modeling framework can be implemented, i.e., how it can be used to select modeling techniques that address particular properties of complex systems that must be understood in order to model transitions to sustainability. In doing so, we establish a complexity-based approach to modeling sustainability transitions that caters for the broad range of complex-systems properties required to model such transitions.
International Nuclear Information System (INIS)
Hourai, Shinji; Ishii, Takeshi; Miki, Misao; Takashima, Yoshiki; Mitsuda, Satoshi; Yanagi, Kazunori
2005-01-01
The nitrile hydratase from the thermophilic B. smithii SC-J05-1 (Bs NHase) has been purified, cloned and crystallized. Nitrile hydratase (NHase) converts nitriles to the corresponding amides and is recognized as having important industrial applications. Purification, cloning, crystallization and initial crystallographic studies of the NHase from Bacillus smithii SC-J05-1 (Bs NHase) were conducted to analyze the activity, specificity and thermal stability of this hydrolytic enzyme. Bs NHase was purified to homogeneity from microbial cells of B. smithii SC-J05-1 and the nucleotide sequences of both the α- and β-subunits were determined. Purified Bs NHase was used for crystallization and several crystal forms were obtained by the vapour-diffusion method. Microseeding and the addition of magnesium ions were essential for obtaining crystals suitable for X-ray diffraction analysis
Gorbenko, M V; Popova, T N; Shul'gin, K K; Popov, S S; Agarkov, A A
2014-01-01
The influence of melaxen and valdoxan on biochemiluminescence parameters, aconitate hydratase activity and citrate level in rat heart and liver during the development of experimental hyperthyroidism has been investigated. Administration of these substances promoted a decrease in the biochemiluminescence parameters, which had been elevated in rat tissues in response to the oxidative stress that develops under hyperthyroidism. Aconitate hydratase activity and citrate concentration in rat liver and heart, which increased under the pathological conditions, shifted toward control values after administration of these melatonin-correcting drugs. The results indicate a positive effect of valdoxan and melaxen on the oxidative status of the organism during the development of experimental hyperthyroidism, associated with the antioxidant action of melatonin.
Mathematical modeling and optimization of complex structures
Repin, Sergey; Tuovinen, Tero
2016-01-01
This volume contains selected papers in three closely related areas: mathematical modeling in mechanics, numerical analysis, and optimization methods. The papers are based upon talks presented on the International Conference for Mathematical Modeling and Optimization in Mechanics, held in Jyväskylä, Finland, March 6-7, 2014 dedicated to Prof. N. Banichuk on the occasion of his 70th birthday. The articles are written by well-known scientists working in computational mechanics and in optimization of complicated technical models. Also, the volume contains papers discussing the historical development, the state of the art, new ideas, and open problems arising in modern continuum mechanics and applied optimization problems. Several papers are concerned with mathematical problems in numerical analysis, which are also closely related to important mechanical models. The main topics treated include: * Computer simulation methods in mechanics, physics, and biology; * Variational problems and methods; minimiz...
Hierarchical Models of the Nearshore Complex System
National Research Council Canada - National Science Library
Werner, Brad
2004-01-01
.... This grant was termination funding for the Werner group, specifically aimed at finishing up and publishing research related to synoptic imaging of near shore bathymetry, testing models for beach cusp...
Complex models of nodal nuclear data
International Nuclear Information System (INIS)
Dufek, Jan
2011-01-01
During core simulations, nuclear data are required at various nodal thermal-hydraulic and fuel burnup conditions. The nodal data are also partially affected by the thermal-hydraulic and fuel burnup conditions in surrounding nodes, as these change the neutron energy spectrum in the node. The nodal data are therefore functions of many parameters (state variables), and the more state variables a nodal data model takes into account, the more accurate and flexible the model becomes. The existing table and polynomial regression models, however, cannot reflect the data dependences on many state variables. As for the table models, the number of mesh points (and necessary lattice calculations) grows exponentially with the number of variables. As for the polynomial regression models, the number of possible multivariate polynomials exceeds the limits of existing selection algorithms, which should identify a few dozen of the most important polynomials. Also, the standard scheme of lattice calculations is not convenient for modelling the data dependences on various burnup conditions, since it performs only a single or a few burnup calculations at fixed nominal conditions. We suggest a new efficient algorithm for selecting the most important multivariate polynomials for the polynomial regression models so that dependences on many state variables can be considered. We also present a new scheme for lattice calculations in which a large number of burnup histories are accomplished at varied nodal conditions. The number of lattice calculations performed and the number of polynomials analysed are controlled and minimised while building nodal data models of a required accuracy. (author)
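The selection problem described in this abstract, identifying a handful of important multivariate polynomials among many candidates, can be sketched with a simple greedy forward-selection loop. The synthetic data, the candidate monomial set, and the correlation-based selection criterion below are illustrative assumptions, not the authors' algorithm:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
# two "state variables" and a synthetic nodal response
x1, x2 = rng.uniform(size=(2, 200))
y = 1.0 + 2.0 * x1 - 3.0 * x1 * x2 + 0.01 * rng.normal(size=200)

# candidate multivariate monomials x1^i * x2^j up to total degree 3
cands = {(i, j): x1**i * x2**j
         for i, j in itertools.product(range(4), repeat=2) if 0 < i + j <= 3}

chosen, residual = [], y.copy()
for _ in range(3):  # greedily pick the three most explanatory monomials
    best = max(cands, key=lambda k: abs(np.corrcoef(cands[k], residual)[0, 1]))
    chosen.append(best)
    # refit on all chosen monomials plus an intercept, then update the residual
    X = np.column_stack([cands[k] for k in chosen] + [np.ones_like(y)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    residual = y - X @ coef
print(chosen, np.linalg.norm(residual))
```

Greedy selection avoids enumerating all candidate polynomials, which is the combinatorial blow-up the abstract identifies.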
Integrated Modeling of Complex Optomechanical Systems
Andersen, Torben; Enmark, Anita
2011-09-01
Mathematical modeling and performance simulation are playing an increasing role in large, high-technology projects. There are two reasons; first, projects are now larger than they were before, and the high cost calls for detailed performance prediction before construction. Second, in particular for space-related designs, it is often difficult to test systems under realistic conditions beforehand, and mathematical modeling is then needed to verify in advance that a system will work as planned. Computers have become much more powerful, permitting calculations that were not possible before. At the same time mathematical tools have been further developed and found acceptance in the community. Particular progress has been made in the fields of structural mechanics, optics and control engineering, where new methods have gained importance over the last few decades. Also, methods for combining optical, structural and control system models into global models have found widespread use. Such combined models are usually called integrated models and were the subject of this symposium. The objective was to bring together people working in the fields of ground-based optical telescopes, ground-based radio telescopes, and space telescopes. We succeeded in doing so and had 39 interesting presentations and many fruitful discussions during coffee and lunch breaks and social arrangements. We are grateful that so many top-ranked specialists found their way to Kiruna and we believe that these proceedings will prove valuable during much future work.
Smart modeling and simulation for complex systems practice and theory
Ren, Fenghui; Zhang, Minjie; Ito, Takayuki; Tang, Xijin
2015-01-01
This book aims to provide a description of these new Artificial Intelligence technologies and approaches to the modeling and simulation of complex systems, as well as an overview of the latest scientific efforts in this field such as the platforms and/or the software tools for smart modeling and simulating complex systems. These tasks are difficult to accomplish using traditional computational approaches due to the complex relationships of components and distributed features of resources, as well as the dynamic work environments. In order to effectively model the complex systems, intelligent technologies such as multi-agent systems and smart grids are employed to model and simulate the complex systems in the areas of ecosystem, social and economic organization, web-based grid service, transportation systems, power systems and evacuation systems.
The sigma model on complex projective superspaces
Energy Technology Data Exchange (ETDEWEB)
Candu, Constantin; Mitev, Vladimir; Schomerus, Volker [DESY, Hamburg (Germany). Theory Group; Quella, Thomas [Amsterdam Univ. (Netherlands). Inst. for Theoretical Physics; Saleur, Hubert [CEA Saclay, 91 - Gif-sur-Yvette (France). Inst. de Physique Theorique; USC, Los Angeles, CA (United States). Physics Dept.
2009-08-15
The sigma model on projective superspaces CP^(S-1|S) gives rise to a continuous family of interacting 2D conformal field theories which are parametrized by the curvature radius R and the theta angle θ. Our main goal is to determine the spectrum of the model, non-perturbatively as a function of both parameters. We succeed in doing so for all open boundary conditions preserving the full global symmetry of the model. In string theory parlance, these correspond to volume-filling branes that are equipped with a monopole line bundle and connection. The paper consists of two parts. In the first part, we approach the problem within the continuum formulation. Combining combinatorial arguments with perturbative studies and some simple free field calculations, we determine a closed formula for the partition function of the theory. This is then tested numerically in the second part. There we propose a spin chain regularization of the CP^(S-1|S) model with open boundary conditions and use it to determine the spectrum at the conformal fixed point. The numerical results are in remarkable agreement with the continuum analysis. (orig.)
A complex autoregressive model and application to monthly temperature forecasts
Directory of Open Access Journals (Sweden)
X. Gu
2005-11-01
A complex autoregressive model was established based on a mathematical derivation of least squares in the complex number domain, referred to as complex least squares. The model differs from the conventional approach in which the real and imaginary parts are calculated separately. An application of this new model shows better forecasts than those from other conventional statistical models in predicting monthly temperature anomalies in July at 160 meteorological stations in mainland China. The conventional statistical models include an autoregressive model in which the real and imaginary parts are treated separately, an autoregressive model in the real number domain, and a persistence-forecast model.
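The core idea, fitting autoregressive coefficients directly in the complex domain rather than splitting real and imaginary parts, can be sketched on a synthetic complex AR(1) series. The series, noise level, and coefficient below are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic complex AR(1) series: z_t = a * z_{t-1} + complex noise
a_true = 0.6 + 0.3j
n = 2000
z = np.zeros(n, dtype=complex)
for t in range(1, n):
    noise = rng.normal(scale=0.1) + 1j * rng.normal(scale=0.1)
    z[t] = a_true * z[t - 1] + noise

# complex least squares in one step: np.linalg.lstsq handles complex dtypes,
# so real and imaginary parts need not be fitted separately
X = z[:-1].reshape(-1, 1)
y = z[1:]
a_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(a_hat[0])  # estimate of a_true
```

Solving the least-squares problem once over complex numbers couples the real and imaginary parts through a single coefficient, which is exactly what the separate-domain approach cannot do.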
Understanding complex urban systems integrating multidisciplinary data in urban models
Gebetsroither-Geringer, Ernst; Atun, Funda; Werner, Liss
2016-01-01
This book is devoted to the modeling and understanding of complex urban systems. This second volume of Understanding Complex Urban Systems focuses on the challenges of the modeling tools, concerning, e.g., the quality and quantity of data and the selection of an appropriate modeling approach. It is meant to support urban decision-makers—including municipal politicians, spatial planners, and citizen groups—in choosing an appropriate modeling approach for their particular modeling requirements. The contributors to this volume are from different disciplines, but all share the same goal: optimizing the representation of complex urban systems. They present and discuss a variety of approaches for dealing with data-availability problems and finding appropriate modeling approaches—and not only in terms of computer modeling. The selection of articles featured in this volume reflects a broad variety of new and established modeling approaches such as: - An argument for using Big Data methods in conjunction with Age...
Fluid flow modeling in complex areas*, **
Directory of Open Access Journals (Sweden)
Poullet Pascal
2012-04-01
We show first results of a 3D simulation of sea currents in a realistic context. We use the full Navier–Stokes equations for an incompressible viscous fluid. The problem is solved using a second-order incremental projection method associated with a finite-volume discretization on a staggered (MAC) grid. After validation on classical test cases (the lid-driven cavity and flow in a channel with an asymmetrically placed obstacle), the method is applied in a numerical simulation of the Pointe-à-Pitre harbour area. The use of the fictitious domain method permits us to take into account the complexity of the bathymetric data, and allows us to work with regular meshes, thus preserving the efficiency essential for a 3D code.
Shippee, Nathan D; Shah, Nilay D; May, Carl R; Mair, Frances S; Montori, Victor M
2012-10-01
To design a functional, patient-centered model of patient complexity with practical applicability to analytic design and clinical practice. Existing literature on patient complexity has mainly identified its components descriptively and in isolation, lacking clarity as to their combined functions in disrupting care or to how complexity changes over time. The authors developed a cumulative complexity model, which integrates existing literature and emphasizes how clinical and social factors accumulate and interact to complicate patient care. A narrative literature review is used to explicate the model. The model emphasizes a core, patient-level mechanism whereby complicating factors impact care and outcomes: the balance between patient workload of demands and patient capacity to address demands. Workload encompasses the demands on the patient's time and energy, including demands of treatment, self-care, and life in general. Capacity concerns ability to handle work (e.g., functional morbidity, financial/social resources, literacy). Workload-capacity imbalances comprise the mechanism driving patient complexity. Treatment and illness burdens serve as feedback loops, linking negative outcomes to further imbalances, such that complexity may accumulate over time. With its components largely supported by existing literature, the model has implications for analytic design, clinical epidemiology, and clinical practice. Copyright © 2012 Elsevier Inc. All rights reserved.
Passengers, Crowding and Complexity : Models for passenger oriented public transport
P.C. Bouman (Paul)
2017-01-01
Passengers, Crowding and Complexity was written as part of the Complexity in Public Transport (ComPuTr) project funded by the Netherlands Organisation for Scientific Research (NWO). This thesis studies in three parts how microscopic data can be used in models that have the potential
Stability of Rotor Systems: A Complex Modelling Approach
DEFF Research Database (Denmark)
Kliem, Wolfhard; Pommer, Christian; Stoustrup, Jakob
1996-01-01
A large class of rotor systems can be modelled by a complex matrix differential equation of second order. The angular velocity of the rotor plays the role of a parameter. We apply the Lyapunov matrix equation in a complex setting and prove two new stability results which are compared...
Complex versus simple models: ion-channel cardiac toxicity prediction.
Mistry, Hitesh B
2018-01-01
There is growing interest in applying detailed mathematical models of the heart for ion-channel-related cardiac toxicity prediction. However, there is debate as to whether such complex models are required. Here, the predictive performance of two established large-scale biophysical cardiac models was assessed against a simple linear model, Bnet. Three ion-channel data-sets were extracted from the literature. Each compound was assigned a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set, for each classification scheme, was assessed via leave-one-out cross-validation. Overall, the Bnet model performed as well as the leading cardiac models on two of the data-sets and outperformed both cardiac models on the most recent one. These results highlight the importance of benchmarking complex versus simple models, and also encourage the development of simple models.
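The leave-one-out evaluation of a simple net-block-style linear score can be sketched as below. The channel weights, the threshold rule, and the synthetic block fractions and risk labels are illustrative assumptions; the actual Bnet definition and data-sets are in the cited paper:

```python
import numpy as np

rng = np.random.default_rng(2)
# hypothetical ion-channel block fractions (columns: hERG, Na, Ca) and risk labels
X = rng.uniform(size=(30, 3))
labels = (X[:, 0] - 0.5 * X[:, 1] > 0.4).astype(int)  # synthetic risk label

def net_block_score(x, w=(1.0, -0.5, -0.5)):
    # net-block-style score: inward-current block subtracted from hERG block
    # (weights are assumptions for illustration)
    return float(np.dot(w, x))

# leave-one-out cross-validation of a simple threshold classifier
correct = 0
for i in range(len(X)):
    train = np.delete(np.arange(len(X)), i)
    scores = np.array([net_block_score(x) for x in X[train]])
    thr = scores[labels[train] == 1].min() if labels[train].any() else scores.max()
    pred = int(net_block_score(X[i]) >= thr)
    correct += int(pred == labels[i])
print(correct / len(X))  # fraction of correct LOO predictions
```

The point of leave-one-out here is that every compound is scored by a model that never saw it, which is what makes the simple-versus-complex comparison fair.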
Directory of Open Access Journals (Sweden)
Tomlinson Ian PM
2008-03-01
Background: Fumarate hydratase (HGNC-approved gene symbol FH), also known as fumarase, is an enzyme of the tricarboxylic acid (TCA) cycle, involved in fundamental cellular energy production. First described by Zinn et al in 1986, deficiency of FH results in early-onset, severe encephalopathy. In 2002, the Multiple Leiomyoma Consortium identified heterozygous germline mutations of FH in patients with multiple cutaneous and uterine leiomyomas (MCUL; OMIM 150800). In some families renal cell cancer also forms a component of the complex, and as such the condition has been described as hereditary leiomyomatosis and renal cell cancer (HLRCC; OMIM 605839). The identification of FH as a tumor suppressor was an unexpected finding and, following the identification of subunits of succinate dehydrogenase in 2000 and 2001, was only the second description of the involvement of an enzyme of intermediary metabolism in tumorigenesis. Description: The FH mutation database is part of the TCA cycle gene mutation database (formerly the succinate dehydrogenase gene mutation database) and is based on the Leiden Open-source Variation Database (LOVD) system. The variants included in the database were derived from the published literature and annotated to conform to current mutation nomenclature. The FH database applies HGVS nomenclature guidelines, and will assist researchers in applying these guidelines when directly submitting new sequence variants online. Since the first molecular characterization of an FH mutation by Bourgeron et al in 1994, a series of reports of both FH-deficiency patients and patients with MCUL/HLRCC have described 107 variants, of which 93 are thought to be pathogenic. The most common type of mutation is missense (57%), followed by frameshifts and nonsense (27%), and diverse deletions, insertions and duplications. Here we introduce an online database detailing all reported FH sequence variants. Conclusion: The FH mutation database strives to systematically
Modeling Air-Quality in Complex Terrain Using Mesoscale and ...
African Journals Online (AJOL)
Air-quality in a complex terrain (Colorado-River-Valley/Grand-Canyon Area, Southwest U.S.) is modeled using a higher-order closure mesoscale model and a higher-order closure dispersion model. Non-reactive tracers have been released in the Colorado-River valley, during winter and summer 1992, to study the ...
Surface-complexation models for sorption onto heterogeneous surfaces
International Nuclear Information System (INIS)
Harvey, K.B.
1997-10-01
This report provides a description of the discrete-logK spectrum model, together with a description of its derivation, and of its place in the larger context of surface-complexation modelling. The tools necessary to apply the discrete-logK spectrum model are discussed, and background information appropriate to this discussion is supplied as appendices. (author)
On spin and matrix models in the complex plane
International Nuclear Information System (INIS)
Damgaard, P.H.; Heller, U.M.
1993-01-01
We describe various aspects of statistical mechanics defined in the complex temperature or coupling-constant plane. Using exactly solvable models, we analyse such aspects as renormalization group flows in the complex plane, the distribution of partition function zeros, and the question of new coupling-constant symmetries of complex-plane spin models. The double-scaling form of matrix models is shown to be exactly equivalent to finite-size scaling of two-dimensional spin systems. This is used to show that the string susceptibility exponents derived from matrix models can be obtained numerically with very high accuracy from the scaling of finite-N partition function zeros in the complex plane. (orig.)
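The notion of partition-function zeros in the complex plane can be illustrated on the exactly solvable 1D Ising chain (chosen here for simplicity; this is not one of the models analysed in the paper). With periodic boundaries, Z_N = (2 cosh βJ)^N (1 + u^N) in the variable u = tanh(βJ), so the zeros satisfy u^N = -1 and lie on the unit circle:

```python
import numpy as np

N = 16  # chain length
# Z_N vanishes where u^N + 1 = 0, u = tanh(beta * J);
# find the roots of the polynomial u^N + 1
zeros = np.roots([1.0] + [0.0] * (N - 1) + [1.0])
print(np.allclose(np.abs(zeros), 1.0))  # all Fisher zeros lie on the unit circle
```

As N grows, the zeros accumulate toward the real axis, which is the mechanism by which finite-size zero distributions encode critical behavior, the same scaling information the abstract extracts from finite-N matrix-model partition functions.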
A Framework for Modeling and Analyzing Complex Distributed Systems
National Research Council Canada - National Science Library
Lynch, Nancy A; Shvartsman, Alex Allister
2005-01-01
Report developed under STTR contract for topic AF04-T023. This Phase I project developed a modeling language and laid a foundation for computational support tools for specifying, analyzing, and verifying complex distributed system designs...
Modelling the self-organization and collapse of complex networks
Indian Academy of Sciences (India)
Modelling the self-organization and collapse of complex networks. Sanjay Jain Department of Physics and Astrophysics, University of Delhi Jawaharlal Nehru Centre for Advanced Scientific Research, Bangalore Santa Fe Institute, Santa Fe, New Mexico.
Size and complexity in model financial systems
Arinaminpathy, Nimalan; Kapadia, Sujit; May, Robert M.
2012-01-01
The global financial crisis has precipitated an increasing appreciation of the need for a systemic perspective toward financial stability. For example: What role do large banks play in systemic risk? How should capital adequacy standards recognize this role? How is stability shaped by concentration and diversification in the financial system? We explore these questions using a deliberately simplified, dynamic model of a banking system that combines three different channels for direct transmission of contagion from one bank to another: liquidity hoarding, asset price contagion, and the propagation of defaults via counterparty credit risk. Importantly, we also introduce a mechanism for capturing how swings in “confidence” in the system may contribute to instability. Our results highlight that the importance of relatively large, well-connected banks in system stability scales more than proportionately with their size: the impact of their collapse arises not only from their connectivity, but also from their effect on confidence in the system. Imposing tougher capital requirements on larger banks than smaller ones can thus enhance the resilience of the system. Moreover, these effects are more pronounced in more concentrated systems, and continue to apply, even when allowing for potential diversification benefits that may be realized by larger banks. We discuss some tentative implications for policy, as well as conceptual analogies in ecosystem stability and in the control of infectious diseases. PMID:23091020
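A toy version of the counterparty-default contagion channel described above can be sketched as follows. The network density, the 100% loss-given-default, and the capital buffers are illustrative assumptions, not the authors' calibration, and the liquidity-hoarding, asset-price, and confidence channels are omitted:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50  # number of banks
# sparse random interbank exposures: exposure[i, j] is bank i's claim on bank j
exposure = rng.uniform(size=(n, n)) * (rng.random((n, n)) < 0.1)
np.fill_diagonal(exposure, 0.0)
capital = 0.25 * exposure.sum(axis=1)  # thin capital buffers (assumption)

failed = np.zeros(n, dtype=bool)
failed[0] = True  # idiosyncratic initial default
while True:       # propagate counterparty losses until no new failures occur
    losses = exposure[:, failed].sum(axis=1)  # full loss on claims against failed banks
    newly = (~failed) & (losses > capital)
    if not newly.any():
        break
    failed |= newly
print(failed.sum(), "of", n, "banks failed")
```

Rerunning such a simulation while varying the size distribution of exposures is one simple way to probe the abstract's claim that large, well-connected banks matter more than proportionately.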
Algebraic computability and enumeration models recursion theory and descriptive complexity
Nourani, Cyrus F
2016-01-01
This book, Algebraic Computability and Enumeration Models: Recursion Theory and Descriptive Complexity, presents new techniques with functorial models to address important areas on pure mathematics and computability theory from the algebraic viewpoint. The reader is first introduced to categories and functorial models, with Kleene algebra examples for languages. Functorial models for Peano arithmetic are described toward important computational complexity areas on a Hilbert program, leading to computability with initial models. Infinite language categories are also introduced to explain descriptive complexity with recursive computability with admissible sets and urelements. Algebraic and categorical realizability is staged on several levels, addressing new computability questions with omitting types realizably. Further applications to computing with ultrafilters on sets and Turing degree computability are examined. Functorial models computability is presented with algebraic trees realizing intuitionistic type...
Modeling of Complex Life Cycle Prediction Based on Cell Division
Directory of Open Access Journals (Sweden)
Fucheng Zhang
2017-01-01
Effective fault diagnosis and reasonable life expectancy prediction are of great significance and practical engineering value for the safety, reliability, and maintenance cost of equipment and working environments. At present, equipment life prediction methods include life prediction based on condition monitoring, combined forecasting models, and data-driven approaches, most of which rely on large amounts of data. To address this issue, we propose learning from the mechanism of cell division in organisms. We establish a life prediction model of moderate complexity by studying a complex multifactor correlation life model. In this paper, we model life prediction on cell division. Experiments show that our model can effectively simulate the state of cell division. This reference model can then be used for complex equipment life prediction.
Applications of Nonlinear Dynamics Model and Design of Complex Systems
In, Visarath; Palacios, Antonio
2009-01-01
This edited book is aimed at interdisciplinary, device-oriented applications of nonlinear science theory and methods in complex systems, in particular applications directed at nonlinear phenomena with space and time characteristics. Examples include: complex networks of magnetic sensor systems, coupled nano-mechanical oscillators, nano-detectors, microscale devices, stochastic resonance in multi-dimensional chaotic systems, biosensors, and stochastic signal quantization. "Applications of Nonlinear Dynamics: Model and Design of Complex Systems" brings together the work of scientists and engineers who are applying ideas and methods from nonlinear dynamics to the design and fabrication of complex systems.
Coping with Complexity Model Reduction and Data Analysis
Gorban, Alexander N
2011-01-01
This volume contains the extended version of selected talks given at the international research workshop 'Coping with Complexity: Model Reduction and Data Analysis', Ambleside, UK, August 31 - September 4, 2009. This book is deliberately broad in scope and aims at promoting new ideas and methodological perspectives. The topics of the chapters range from theoretical analysis of complex and multiscale mathematical models to applications in e.g., fluid dynamics and chemical kinetics.
Mathematical Models to Determine Stable Behavior of Complex Systems
Sumin, V. I.; Dushkin, A. V.; Smolentseva, T. E.
2018-05-01
The paper analyzes the possibility of predicting the functioning of a complex dynamic system with a significant amount of circulating information and a large number of random factors impacting its functioning. The functioning of the complex dynamic system is described in terms of chaotic states, self-organized criticality, and bifurcation. This problem may be resolved by modeling such systems as dynamic ones, without applying stochastic models, and by taking strange attractors into account.
Understanding complex urban systems multidisciplinary approaches to modeling
Gurr, Jens; Schmidt, J
2014-01-01
Understanding Complex Urban Systems takes as its point of departure the insight that the challenges of global urbanization and the complexity of urban systems cannot be understood – let alone ‘managed’ – by sectoral and disciplinary approaches alone. But while there has recently been significant progress in broadening and refining the methodologies for the quantitative modeling of complex urban systems, in deepening the theoretical understanding of cities as complex systems, or in illuminating the implications for urban planning, there is still a lack of well-founded conceptual thinking on the methodological foundations and the strategies of modeling urban complexity across the disciplines. Bringing together experts from the fields of urban and spatial planning, ecology, urban geography, real estate analysis, organizational cybernetics, stochastic optimization, and literary studies, as well as specialists in various systems approaches and in transdisciplinary methodologies of urban analysis, the volum...
Dynamic complexities in a parasitoid-host-parasitoid ecological model
International Nuclear Information System (INIS)
Yu Hengguo; Zhao Min; Lv Songjuan; Zhu Lili
2009-01-01
Chaotic dynamics have been observed in a wide range of population models. In this study, the complex dynamics in a discrete-time ecological model of parasitoid-host-parasitoid are presented. The model shows that the superiority coefficient not only stabilizes the dynamics, but may strongly destabilize them as well. Many forms of complex dynamics were observed, including pitchfork bifurcation with quasi-periodicity, period-doubling cascade, chaotic crisis, chaotic bands with narrow or wide periodic windows, intermittent chaos, and supertransient behavior. Furthermore, computation of the largest Lyapunov exponent demonstrated the chaotic dynamic behavior of the model.
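The largest-Lyapunov-exponent computation mentioned above can be sketched for a one-dimensional discrete-time map. Since the parasitoid-host model's equations are not reproduced in this abstract, the logistic map is used below as a hypothetical stand-in, and `largest_lyapunov` is an illustrative name, not from the paper:

```python
import math

def largest_lyapunov(f, dfdx, x0, n_transient=1000, n_iter=10000):
    """Estimate the largest Lyapunov exponent of a 1-D map x_{t+1} = f(x_t)
    by averaging log|f'(x_t)| along a trajectory (positive => chaos)."""
    x = x0
    for _ in range(n_transient):  # discard the transient
        x = f(x)
    total = 0.0
    for _ in range(n_iter):
        total += math.log(abs(dfdx(x)))
        x = f(x)
    return total / n_iter

# Stand-in: the logistic map at r = 4, whose exact exponent is ln 2 ~ 0.693.
r = 4.0
lam = largest_lyapunov(lambda x: r * x * (1.0 - x),
                       lambda x: r * (1.0 - 2.0 * x),
                       x0=0.3)
print(f"estimated largest Lyapunov exponent: {lam:.3f}")
```

A positive estimate, as here, is the usual numerical evidence for the chaotic behavior the abstract reports.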
Dynamic complexities in a parasitoid-host-parasitoid ecological model
Energy Technology Data Exchange (ETDEWEB)
Yu Hengguo [School of Mathematic and Information Science, Wenzhou University, Wenzhou, Zhejiang 325035 (China); Zhao Min [School of Life and Environmental Science, Wenzhou University, Wenzhou, Zhejiang 325027 (China)], E-mail: zmcn@tom.com; Lv Songjuan; Zhu Lili [School of Mathematic and Information Science, Wenzhou University, Wenzhou, Zhejiang 325035 (China)
2009-01-15
Chaotic dynamics have been observed in a wide range of population models. In this study, the complex dynamics in a discrete-time ecological model of parasitoid-host-parasitoid are presented. The model shows that the superiority coefficient not only stabilizes the dynamics, but may strongly destabilize them as well. Many forms of complex dynamics were observed, including pitchfork bifurcation with quasi-periodicity, period-doubling cascade, chaotic crisis, chaotic bands with narrow or wide periodic window, intermittent chaos, and supertransient behavior. Furthermore, computation of the largest Lyapunov exponent demonstrated the chaotic dynamic behavior of the model.
A marketing mix model for a complex and turbulent environment
Directory of Open Access Journals (Sweden)
R. B. Mason
2007-12-01
Full Text Available Purpose: This paper is based on the proposition that the choice of marketing tactics is determined, or at least significantly influenced, by the nature of the company's external environment. It aims to illustrate the type of marketing mix tactics that are suggested for a complex and turbulent environment when marketing and the environment are viewed through a chaos and complexity theory lens. Design/Methodology/Approach: Since chaos and complexity theories are proposed as a good means of understanding the dynamics of complex and turbulent markets, a comprehensive review and analysis of literature on the marketing mix and marketing tactics from a chaos and complexity viewpoint was conducted. From this literature review, a marketing mix model was conceptualised. Findings: A marketing mix model considered appropriate for success in complex and turbulent environments was developed. In such environments, the literature suggests destabilising marketing activities are more effective, whereas stabilising-type activities are more effective in simple, stable environments. Therefore the model proposes predominantly destabilising-type tactics as appropriate for a complex and turbulent environment such as is currently being experienced in South Africa. Implications: This paper is of benefit to marketers by emphasising a new way to consider the future marketing activities of their companies. How this model can assist marketers and suggestions for research to develop and apply this model are provided. It is hoped that the model suggested will form the basis of empirical research to test its applicability in the turbulent South African environment. Originality/Value: Since businesses and markets are complex adaptive systems, using complexity theory to understand how to cope in complex, turbulent environments is necessary, but has not been widely researched. In fact, most chaos and complexity theory work in marketing has concentrated on marketing strategy, with
Generalized complex geometry, generalized branes and the Hitchin sigma model
International Nuclear Information System (INIS)
Zucchini, Roberto
2005-01-01
Hitchin's generalized complex geometry has been shown to be relevant in compactifications of superstring theory with fluxes and is expected to lead to a deeper understanding of mirror symmetry. Gualtieri's notion of generalized complex submanifold seems to be a natural candidate for the description of branes in this context. Recently, we introduced a Batalin-Vilkovisky field theoretic realization of generalized complex geometry, the Hitchin sigma model, extending the well known Poisson sigma model. In this paper, exploiting Gualtieri's formalism, we incorporate branes into the model. A detailed study of the boundary conditions obeyed by the world sheet fields is provided. Finally, it is found that, when branes are present, the classical Batalin-Vilkovisky cohomology contains an extra sector that is related non trivially to a novel cohomology associated with the branes as generalized complex submanifolds. (author)
Reassessing Geophysical Models of the Bushveld Complex in 3D
Cole, J.; Webb, S. J.; Finn, C.
2012-12-01
Conceptual geophysical models of the Bushveld Igneous Complex show three possible geometries for its mafic component: 1) Separate intrusions with vertical feeders for the eastern and western lobes (Cousins, 1959) 2) Separate dipping sheets for the two lobes (Du Plessis and Kleywegt, 1987) 3) A single saucer-shaped unit connected at depth in the central part between the two lobes (Cawthorn et al, 1998) Model three incorporates isostatic adjustment of the crust in response to the weight of the dense mafic material. The model was corroborated by results of a broadband seismic array over southern Africa, known as the Southern African Seismic Experiment (SASE) (Nguuri, et al, 2001; Webb et al, 2004). This new information about the crustal thickness only became available in the last decade and could not be considered in the earlier models. Nevertheless, there is still on-going debate as to which model is correct. All of the models published up to now have been done in 2 or 2.5 dimensions. This is not well suited to modelling the complex geometry of the Bushveld intrusion. 3D modelling takes into account the effects of variations in geometry and geophysical properties of lithologies in a full three-dimensional sense and therefore affects the shape and amplitude of calculated fields. The main question is how the new knowledge of the increased crustal thickness, as well as the complexity of the Bushveld Complex, will impact on the gravity fields calculated for the existing conceptual models when modelling in 3D. The three published geophysical models were remodelled using full 3D potential field modelling software, including the crustal thickness obtained from the SASE. The aim was not to construct very detailed models, but to test the existing conceptual models in an equally conceptual way. Firstly, a specific 2D model was recreated in 3D, without crustal thickening, to establish the difference between 2D and 3D results. Then the thicker crust was added. Including the less
Liu, Yi; Cui, Wenjing; Liu, Zhongmei; Cui, Youtian; Xia, Yuanyuan; Kobayashi, Michihiko; Zhou, Zhemin
2014-09-01
Self-assembling amphipathic peptides (SAPs) are peptides that can spontaneously assemble into ordered nanostructures. It has been reported that the attachment of SAPs to the N- or C-terminus of an enzyme can benefit the thermo-stability of the enzyme. Here, we discovered that the thermo-stability and product tolerance of nitrile hydratase (NHase) were enhanced by fusion with two of the SAPs (EAK16 and ELK16). When ELK16 was fused to the N-terminus of the β-subunit, the resultant NHase (SAP-NHase-2) became an active inclusion body; EAK16 fused to the N-terminus of the β-subunit (SAP-NHase-1) and ELK16 fused to the C-terminus of the β-subunit (SAP-NHase-10) did not affect NHase solubility. Whereas the wild-type NHase was deactivated after 30 min incubation at 50°C, SAP-NHase-1, SAP-NHase-2 and SAP-NHase-10 retained 45%, 30% and 50% activity; after treatment in buffer containing 10% acrylamide, the wild-type retained 30% activity, while SAP-NHase-1, SAP-NHase-2 and SAP-NHase-10 retained 52%, 42% and 55% activity. These SAP-NHases with enhanced thermo-stability and product tolerance would be helpful for further industrial applications of NHase. Copyright © 2014 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
Kamble, Ashwini L; Banoth, Linga; Meena, Vachan Singh; Singh, Amit; Chisti, Yusuf; Banerjee, U C
2013-08-01
The intracellular cobalt-type nitrile hydratase was purified from the bacterium Rhodococcus erythropolis. The pure enzyme consisted of two subunits of 29 and 30 kDa. The molecular weight of the native enzyme was estimated to be 65 kDa. At 25 °C the enzyme had a half-life of 25 h. The Michaelis-Menten constants Km and Vmax for the enzyme were 0.624 mM and 5.12 μmol/min/mg, respectively, using 3-cyanopyridine as the substrate. The enzyme-containing freely-suspended bacterial cells and the cells immobilized within alginate beads were evaluated for converting various nitriles to amides. In a packed bed reactor, alginate beads (2 % alginate; 3 mm bead diameter) containing 200 mg/mL of cells achieved a conversion of >90 % for benzonitrile and 4-cyanopyridine in 38 h (25 °C, pH 7.0) at a feed substrate concentration of 100 mM. The beads could be reused for up to six reaction cycles.
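The reported kinetic constants plug directly into the Michaelis-Menten rate law v = Vmax·[S]/(Km + [S]). A minimal sketch using the Km and Vmax values quoted above for 3-cyanopyridine (the helper name and the sampled substrate concentrations are illustrative):

```python
def michaelis_menten(s, vmax, km):
    """Michaelis-Menten rate: v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

KM = 0.624   # mM, for 3-cyanopyridine (value from the abstract)
VMAX = 5.12  # umol/min/mg (value from the abstract)

# At [S] = Km the rate is exactly Vmax / 2; at high [S] it saturates near Vmax.
for s in (0.1, KM, 10.0, 100.0):
    v = michaelis_menten(s, VMAX, KM)
    print(f"[S] = {s:7.3f} mM -> v = {v:.3f} umol/min/mg")
```

At the 100 mM feed concentration used in the packed bed reactor, the enzyme is operating essentially at saturation.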
Directory of Open Access Journals (Sweden)
Dobrovic Alexander
2009-09-01
Full Text Available Abstract Background Succinate dehydrogenase (SDH) and fumarate hydratase (FH) are tricarboxylic acid (TCA) cycle enzymes that are also known to act as tumour suppressor genes. Increased succinate or fumarate levels as a consequence of SDH and FH deficiency inhibit hypoxia-inducible factor-1α (HIF-1α) prolyl hydroxylases, leading to sustained HIF-1α expression in tumours. Since HIF-1α is frequently expressed in breast carcinomas, DNA methylation at the promoter regions of the SDHA, SDHB, SDHC and SDHD and FH genes was evaluated as a possible mechanism in silencing of SDH and FH expression in breast carcinomas. Findings No DNA methylation was identified in the promoter regions of the SDHA, SDHB, SDHC, SDHD and FH genes in 72 breast carcinomas and 10 breast cancer cell lines using methylation-sensitive high resolution melting, which detects both homogeneous and heterogeneous methylation. Conclusion These results show that inactivation via DNA methylation of the promoter CpG islands of SDH and FH is unlikely to play a major role in sporadic breast carcinomas.
Complexation and molecular modeling studies of europium(III)-gallic acid-amino acid complexes.
Taha, Mohamed; Khan, Imran; Coutinho, João A P
2016-04-01
With many metal-based drugs extensively used today in the treatment of cancer, attention has focused on the development of new coordination compounds with antitumor activity, with europium(III) complexes recently introduced as novel anticancer drugs. The aim of this work is to design new Eu(III) complexes with gallic acid, an antioxidant phenolic compound. Gallic acid was chosen because it shows anticancer activity without harming healthy cells. As an antioxidant, it helps to protect human cells against oxidative damage that is implicated in DNA damage, cancer, and accelerated cell aging. In this work, the formation of binary and ternary complexes of Eu(III) with gallic acid as the primary ligand and the amino acids alanine, leucine, isoleucine, and tryptophan was studied by glass electrode potentiometry in aqueous solution containing 0.1 M NaNO3 at (298.2 ± 0.1) K. Their overall stability constants were evaluated and the concentration distributions of the complex species in solution were calculated. The protonation constants of gallic acid and the amino acids were also determined under our experimental conditions and compared with those predicted by the conductor-like screening model for realistic solvation (COSMO-RS). The geometries of the Eu(III)-gallic acid complexes were characterized by density functional theory (DFT). Spectroscopic UV-visible and photoluminescence measurements were carried out to confirm the formation of Eu(III)-gallic acid complexes in aqueous solutions. Copyright © 2016 Elsevier Inc. All rights reserved.
Modeling Complex Nesting Structures in International Business Research
DEFF Research Database (Denmark)
Nielsen, Bo Bernhard; Nielsen, Sabina
2013-01-01
While hierarchical random coefficient models (RCM) are often used for the analysis of multilevel phenomena, international business (IB) issues often result in more complex nested structures. This paper illustrates how cross-nested multilevel modeling allowing for predictor variables and cross-level interactions at multiple (crossed) levels...
Foundations for Streaming Model Transformations by Complex Event Processing.
Dávid, István; Ráth, István; Varró, Dániel
2018-01-01
Streaming model transformations represent a novel class of transformations to manipulate models whose elements are continuously produced or modified in high volume and with a rapid rate of change. Executing streaming transformations requires efficient techniques to recognize activated transformation rules over a live model and a potentially infinite stream of events. In this paper, we propose foundations of streaming model transformations by innovatively integrating incremental model query, complex event processing (CEP) and reactive (event-driven) transformation techniques. Complex event processing makes it possible to identify relevant patterns and sequences of events over an event stream. Our approach enables event streams to include model change events which are automatically and continuously populated by incremental model queries. Furthermore, a reactive rule engine carries out transformations on identified complex event patterns. We provide an integrated domain-specific language with precise semantics for capturing complex event patterns and streaming transformations together with an execution engine, all of which is now part of the Viatra reactive transformation framework. We demonstrate the feasibility of our approach with two case studies: one in an advanced model engineering workflow; and one in the context of on-the-fly gesture recognition.
Universal correlators for multi-arc complex matrix models
International Nuclear Information System (INIS)
Akemann, G.
1997-01-01
The correlation functions of the multi-arc complex matrix model are shown to be universal for any finite number of arcs. The universality classes are characterized by the support of the eigenvalue density and are conjectured to fall into the same classes as the ones recently found for the Hermitian model. This is explicitly shown to be true for the case of two arcs, apart from the known result for one arc. The basic tool is the iterative solution of the loop equation for the complex matrix model with multiple arcs, which provides all multi-loop correlators up to an arbitrary genus. Explicit results for genus one are given for any number of arcs. The two-arc solution is investigated in detail, including the double-scaling limit. In addition universal expressions for the string susceptibility are given for both the complex and Hermitian model. (orig.)
Bim Automation: Advanced Modeling Generative Process for Complex Structures
Banfi, F.; Fai, S.; Brumana, R.
2017-08-01
The new paradigm of the complexity of modern and historic structures, which are characterised by complex forms and morphological and typological variables, is one of the greatest challenges for building information modelling (BIM). Generation of complex parametric models needs new scientific knowledge concerning new digital technologies. These elements are helpful to store a vast quantity of information during the life cycle of buildings (LCB). The latest developments of parametric applications do not provide advanced tools, resulting in time-consuming work for the generation of models. This paper presents a method capable of processing and creating complex parametric Building Information Models (BIM) with Non-Uniform Rational Basis Splines (NURBS) with multiple levels of detail (Mixed and Reverse LoD) based on accurate 3D photogrammetric and laser scanning surveys. Complex 3D elements are converted into parametric BIM software and finite element applications (BIM to FEA) using specific exchange formats and new modelling tools. The proposed approach has been applied to different case studies: the BIM of the modern structure for the courtyard of the West Block on Parliament Hill in Ottawa (Ontario) and the BIM of Masegra Castel in Sondrio (Italy), encouraging the dissemination and interaction of scientific results without losing information during the generative process.
Systems Engineering Metrics: Organizational Complexity and Product Quality Modeling
Mog, Robert A.
1997-01-01
Innovative organizational complexity and product quality models applicable to performance metrics for NASA-MSFC's Systems Analysis and Integration Laboratory (SAIL) missions and objectives are presented. An intensive research effort focuses on the synergistic combination of stochastic process modeling, nodal and spatial decomposition techniques, organizational and computational complexity, systems science and metrics, chaos, and proprietary statistical tools for accelerated risk assessment. This is followed by the development of a preliminary model, which is uniquely applicable and robust for quantitative purposes. Exercise of the preliminary model using a generic system hierarchy and the AXAF-I architectural hierarchy is provided. The Kendall test for positive dependence provides an initial verification and validation of the model. Finally, the research and development of the innovation is revisited, prior to peer review. This research and development effort results in near-term, measurable SAIL organizational and product quality methodologies, enhanced organizational risk assessment and evolutionary modeling results, and improved statistical quantification of SAIL productivity interests.
Complex groundwater flow systems as traveling agent models
Directory of Open Access Journals (Sweden)
Oliver López Corona
2014-10-01
Full Text Available Analyzing field data from pumping tests, we show that, as with many other natural phenomena, groundwater flow exhibits complex dynamics described by a 1/f power spectrum. This result is theoretically studied within an agent perspective. Using a traveling agent model, we prove that this statistical behavior emerges when the medium is complex. Some heuristic reasoning is provided to justify both spatial and dynamic complexity, as the result of the superposition of an infinite number of stochastic processes. Moreover, we show that this implies that non-Kolmogorovian probability is needed for its study, and provide a set of new partial differential equations for groundwater flow.
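A 1/f power spectrum of the kind described can be checked numerically by synthesizing pink noise and fitting the log-log slope of its periodogram. This is a generic sketch of that diagnostic, not the authors' pumping-test analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 14

# Synthesize 1/f ("pink") noise by shaping white noise in the frequency domain:
# amplitude ~ f^(-1/2) gives power ~ 1/f.
freqs = np.fft.rfftfreq(n, d=1.0)
spectrum = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)
spectrum[0] = 0.0
spectrum[1:] /= np.sqrt(freqs[1:])
signal = np.fft.irfft(spectrum, n)

# Periodogram and log-log fit of the spectral exponent.
power = np.abs(np.fft.rfft(signal)) ** 2
slope, _intercept = np.polyfit(np.log(freqs[1:]), np.log(power[1:]), 1)
print(f"fitted spectral exponent: {slope:.2f}")  # close to -1 for 1/f noise
```

Applied to measured drawdown series instead of synthetic noise, the same fit recovers the spectral exponent the abstract refers to.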
Synchronization Experiments With A Global Coupled Model of Intermediate Complexity
Selten, Frank; Hiemstra, Paul; Shen, Mao-Lin
2013-04-01
In the super modeling approach, an ensemble of imperfect models is connected through nudging terms that nudge the solution of each model toward the solutions of all other models in the ensemble. The goal is to obtain a synchronized state, through a proper choice of connection strengths, that closely tracks the trajectory of the true system. For the super modeling approach to be successful, the connections should be dense and strong enough for synchronization to occur. In this study we analyze the behavior of an ensemble of connected global atmosphere-ocean models of intermediate complexity. All atmosphere models are connected to the same ocean model through the surface fluxes of heat, water and momentum; the ocean is integrated using weighted-average surface fluxes. In particular we analyze the degree of synchronization between the atmosphere models and the characteristics of the ensemble mean solution. The results are interpreted using a low-order atmosphere-ocean toy model.
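The nudging scheme described, with each model pulled toward the others through connection terms, can be illustrated on a toy pair of "imperfect" Lorenz models with mismatched parameters. The coupling strength, parameter values, and integration settings below are assumptions for illustration, not those of the intermediate-complexity climate models:

```python
import numpy as np

def lorenz_rhs(state, sigma, rho, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz-63 system."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def run(k, steps=20000, dt=0.001):
    """Integrate two mismatched Lorenz 'models' nudged toward each other with
    strength k (forward Euler), and return their final separation."""
    a = np.array([1.0, 1.0, 1.0])     # model 1 state
    b = np.array([-3.0, 2.0, 25.0])   # model 2 state
    for _ in range(steps):
        da = lorenz_rhs(a, sigma=10.0, rho=28.0) + k * (b - a)
        db = lorenz_rhs(b, sigma=10.5, rho=27.5) + k * (a - b)
        a, b = a + dt * da, b + dt * db
    return float(np.linalg.norm(a - b))

print(f"no nudging (k=0) : separation = {run(0.0):.2f}")   # chaotic trajectories stay apart
print(f"nudged (k=10)    : separation = {run(10.0):.4f}")  # near-synchronized
```

The residual separation under strong nudging reflects the parameter mismatch, which is exactly the situation super modeling exploits: the synchronized state can lie closer to the truth than either imperfect model alone.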
Energy Technology Data Exchange (ETDEWEB)
Goldsby, Michael E.; Mayo, Jackson R.; Bhattacharyya, Arnab (Massachusetts Institute of Technology, Cambridge, MA); Armstrong, Robert C.; Vanderveen, Keith
2008-09-01
The goal of this research was to examine foundational methods, both computational and theoretical, that can improve the veracity of entity-based complex system models and increase confidence in their predictions for emergent behavior. The strategy was to seek insight and guidance from simplified yet realistic models, such as cellular automata and Boolean networks, whose properties can be generalized to production entity-based simulations. We have explored the usefulness of renormalization-group methods for finding reduced models of such idealized complex systems. We have prototyped representative models that are both tractable and relevant to Sandia mission applications, and quantified the effect of computational renormalization on the predictive accuracy of these models, finding good predictivity from renormalized versions of cellular automata and Boolean networks. Furthermore, we have theoretically analyzed the robustness properties of certain Boolean networks, relevant for characterizing organic behavior, and obtained precise mathematical constraints on systems that are robust to failures. In combination, our results provide important guidance for more rigorous construction of entity-based models, which currently are often devised in an ad-hoc manner. Our results can also help in designing complex systems with the goal of predictable behavior, e.g., for cybersecurity.
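The Boolean-network analyses mentioned can be illustrated with a minimal random NK Boolean network whose attractor is found by iterating until a state repeats. The network size, connectivity, and seed below are arbitrary choices for the sketch, not values from the report:

```python
import random

random.seed(1)
N, K = 8, 2  # 8 nodes, each reading K = 2 inputs (a classic random NK network)

# Random wiring and random Boolean update tables (one truth table per node).
inputs = [random.sample(range(N), K) for _ in range(N)]
tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

def step(state):
    """Synchronously update every node from its truth table."""
    return tuple(
        tables[i][sum(state[w] << b for b, w in enumerate(inputs[i]))]
        for i in range(N)
    )

# Iterate from a random initial state until a state repeats; the loop between
# the two visits is an attractor of the deterministic dynamics.
state = tuple(random.randint(0, 1) for _ in range(N))
seen, t = {}, 0
while state not in seen:
    seen[state] = t
    state, t = step(state), t + 1
print(f"attractor period = {t - seen[state]} (state space size {2 ** N})")
```

Robustness questions of the kind the report analyzes can then be probed by flipping a node's state or deleting a node and observing whether the trajectory returns to the same attractor.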
ANS main control complex three-dimensional computer model development
International Nuclear Information System (INIS)
Cleaves, J.E.; Fletcher, W.M.
1993-01-01
A three-dimensional (3-D) computer model of the Advanced Neutron Source (ANS) main control complex is being developed. The main control complex includes the main control room, the technical support center, the materials irradiation control room, computer equipment rooms, communications equipment rooms, cable-spreading rooms, and some support offices and breakroom facilities. The model will be used to provide facility designers and operations personnel with capabilities for fit-up/interference analysis, visual ''walk-throughs'' for optimizing maintainability, and human factors and operability analyses. It will be used to determine performance design characteristics, to generate construction drawings, and to integrate control room layout, equipment mounting, grounding equipment, electrical cabling, and utility services into ANS building designs. This paper describes the development of the initial phase of the 3-D computer model for the ANS main control complex and plans for its development and use.
Nostradamus 2014 prediction, modeling and analysis of complex systems
Suganthan, Ponnuthurai; Chen, Guanrong; Snasel, Vaclav; Abraham, Ajith; Rössler, Otto
2014-01-01
The prediction of the behavior of complex systems and the analysis and modeling of their structure is a vitally important problem in engineering, economy and generally in science today. Examples of such systems can be seen in the world around us (including our bodies) and of course in almost every scientific discipline including such “exotic” domains as the earth’s atmosphere, turbulent fluids, economics (exchange rates and stock markets), population growth, physics (control of plasma), information flow in social networks and its dynamics, chemistry and complex networks. To understand such complex dynamics, which often exhibit strange behavior, and to use it in research or industrial applications, it is paramount to create its models. For this purpose there exists a rich spectrum of methods, from classical ones such as ARMA models or the Box-Jenkins method to modern ones like evolutionary computation, neural networks, fuzzy logic, geometry, deterministic chaos amongst others. This proceedings book is a collection of accepted ...
The effects of model and data complexity on predictions from species distributions models
DEFF Research Database (Denmark)
García-Callejas, David; Bastos, Miguel
2016-01-01
How complex does a model need to be to provide useful predictions is a matter of continuous debate across environmental sciences. In the species distributions modelling literature, studies have demonstrated that more complex models tend to provide better fits. However, studies have also shown that predictive performance does not always increase with complexity. Testing of species distributions models is challenging because independent data for testing are often lacking, but a more general problem is that model complexity has never been formally described in such studies. Here, we systematically...
Modeling geophysical complexity: a case for geometric determinism
Directory of Open Access Journals (Sweden)
C. E. Puente
2007-01-01
Full Text Available It has been customary in the last few decades to employ stochastic models to represent complex data sets encountered in geophysics, particularly in hydrology. This article reviews a deterministic geometric procedure for data modeling, one that represents whole data sets as derived distributions of simple multifractal measures via fractal functions. It is shown how such a procedure may lead to faithful holistic representations of existing geophysical data sets that, while complementing existing representations via stochastic methods, may also provide a compact language for geophysical complexity. The implications of these ideas, both scientific and philosophical, are stressed.
Deterministic ripple-spreading model for complex networks.
Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Hines, Evor L; Di Paolo, Ezequiel
2011-04-01
This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes, and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input, they cannot output a unique topology. Differently, the proposed ripple-spreading model can uniquely determine the final network topology, and at the same time, the stochastic feature of complex networks is captured by randomly initializing ripple-spreading related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading related parameters to precisely describe a network topology, which is more memory efficient when compared with traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has a very good potential for both extensions and applications.
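A drastically simplified sketch of the deterministic ripple-spreading idea: every node emits a ripple whose energy decays with radius, and an edge is created when a ripple reaches another node with residual energy above a threshold. The decay law, parameter values, and function name are assumptions for illustration; the published model is richer (temporal point-by-point spreading of ripple fronts), but the sketch shares its key property that a fixed input determines a unique topology:

```python
import math

def ripple_network(points, energies, decay=1.0, threshold=0.2):
    """Toy deterministic ripple-spreading sketch: node i's ripple starts with
    energy E_i and loses energy linearly with radius; an edge (i, j) is created
    when the ripple reaches node j with residual energy above the threshold."""
    edges = set()
    for i, (xi, yi) in enumerate(points):
        for j, (xj, yj) in enumerate(points):
            if i == j:
                continue
            dist = math.hypot(xi - xj, yi - yj)
            if energies[i] - decay * dist > threshold:
                edges.add(tuple(sorted((i, j))))
    return edges

points = [(0, 0), (1, 0), (0, 1), (2, 2)]
energies = [1.5, 0.8, 0.8, 2.0]
print(sorted(ripple_network(points, energies)))  # → [(0, 1), (0, 2)]
```

As in the paper's model, randomness enters only through the initial parameters (here, positions and energies); once they are fixed, the topology is fully determined.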
Predictive modelling of complex agronomic and biological systems.
Keurentjes, Joost J B; Molenaar, Jaap; Zwaan, Bas J
2013-09-01
Biological systems are tremendously complex in their functioning and regulation. Studying the multifaceted behaviour and describing the performance of such complexity has challenged the scientific community for years. The reduction of real-world intricacy into simple descriptive models has therefore convinced many researchers of the usefulness of introducing mathematics into biological sciences. Predictive modelling takes such an approach another step further in that it takes advantage of existing knowledge to project the performance of a system in alternating scenarios. The ever growing amounts of available data generated by assessing biological systems at increasingly higher detail provide unique opportunities for future modelling and experiment design. Here we aim to provide an overview of the progress made in modelling over time and the currently prevalent approaches for iterative modelling cycles in modern biology. We will further argue for the importance of versatility in modelling approaches, including parameter estimation, model reduction and network reconstruction. Finally, we will discuss the difficulties in overcoming the mathematical interpretation of in vivo complexity and address some of the future challenges lying ahead. © 2013 John Wiley & Sons Ltd.
Stålne, Kristian; Kjellström, Sofia; Utriainen, Jukka
2016-01-01
An important aspect of higher education is to educate students who can manage complex relationships and solve complex problems. Teachers need to be able to evaluate course content with regard to complexity, as well as evaluate students' ability to assimilate complex content and express it in the form of a learning outcome. One model for evaluating…
Modelling, Estimation and Control of Networked Complex Systems
Chiuso, Alessandro; Frasca, Mattia; Rizzo, Alessandro; Schenato, Luca; Zampieri, Sandro
2009-01-01
The paradigm of complexity is pervading both science and engineering, leading to the emergence of novel approaches oriented at the development of a systemic view of the phenomena under study; the definition of powerful tools for modelling, estimation, and control; and the cross-fertilization of different disciplines and approaches. This book is devoted to networked systems which are one of the most promising paradigms of complexity. It is demonstrated that complex, dynamical networks are powerful tools to model, estimate, and control many interesting phenomena, like agent coordination, synchronization, social and economics events, networks of critical infrastructures, resources allocation, information processing, or control over communication networks. Moreover, it is shown how the recent technological advances in wireless communication and decreasing in cost and size of electronic devices are promoting the appearance of large inexpensive interconnected systems, each with computational, sensing and mobile cap...
Infinite Multiple Membership Relational Modeling for Complex Networks
DEFF Research Database (Denmark)
Mørup, Morten; Schmidt, Mikkel Nørgaard; Hansen, Lars Kai
Learning latent structure in complex networks has become an important problem, fueled by many types of networked data originating from practically all fields of science. In this paper, we propose a new non-parametric Bayesian multiple-membership latent feature model for networks. Contrary to existing multiple-membership models that scale quadratically in the number of vertices, the proposed model scales linearly in the number of links, admitting multiple-membership analysis in large-scale networks. We demonstrate a connection between the single-membership relational model and multiple-membership models and show…
Modeling data irregularities and structural complexities in data envelopment analysis
Zhu, Joe
2007-01-01
In a relatively short period of time, Data Envelopment Analysis (DEA) has grown into a powerful quantitative, analytical tool for measuring and evaluating performance. It has been successfully applied to a whole variety of problems in many different contexts worldwide. This book deals with the micro aspects of handling and modeling data issues in modeling DEA problems. DEA's use has grown with its capability of dealing with complex "service industry" and the "public service domain" types of problems that require modeling of both qualitative and quantitative data. This handbook treatment deals with specific data problems including: imprecise or inaccurate data; missing data; qualitative data; outliers; undesirable outputs; quality data; statistical analysis; software and other data aspects of modeling complex DEA problems. In addition, the book will demonstrate how to visualize DEA results when the data is more than 3-dimensional, and how to identify efficiency units quickly and accurately.
Modeling the propagation of mobile malware on complex networks
Liu, Wanping; Liu, Chao; Yang, Zheng; Liu, Xiaoyang; Zhang, Yihao; Wei, Zuxue
2016-08-01
In this paper, the spreading behavior of malware across mobile devices is addressed. By introducing complex networks, which follow a power-law degree distribution, to model mobile networks, a novel epidemic model for mobile malware propagation is proposed. The spreading threshold that governs the dynamics of the model is calculated. Theoretically, the asymptotic stability of the malware-free equilibrium is confirmed when the threshold is below unity, and global stability is further proved under some sufficient conditions. The influences of different model parameters as well as the network topology on malware propagation are also analyzed. Our theoretical studies and numerical simulations show that networks with higher heterogeneity are conducive to the diffusion of malware, and that complex networks with lower power-law exponents benefit malware spreading.
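The abstract states that a spreading threshold exists but not its form. A standard degree-based mean-field result for SIS-type epidemics on uncorrelated networks is the threshold λ_c = ⟨k⟩/⟨k²⟩; the sketch below (an assumption, not the paper's expression) evaluates it for truncated power-law degree distributions P(k) ∝ k^(−γ) to show why lower exponents favor spreading.

```python
# Hedged sketch: degree-based mean-field epidemic threshold lambda_c = <k>/<k^2>
# for a truncated power-law degree distribution P(k) ~ k^(-gamma). This standard
# formula is an illustrative stand-in; the paper's own threshold is not given
# in the abstract.
def threshold(gamma, k_min=1, k_max=1000):
    ks = range(k_min, k_max + 1)
    weights = [k ** -gamma for k in ks]          # unnormalized P(k)
    z = sum(weights)
    mean_k = sum(k * w for k, w in zip(ks, weights)) / z
    mean_k2 = sum(k * k * w for k, w in zip(ks, weights)) / z
    return mean_k / mean_k2

# Lower exponents give heavier tails, a larger <k^2>, and a smaller threshold,
# consistent with the claim that such networks benefit malware spreading.
for g in (2.1, 2.5, 3.0, 3.5):
    print(g, threshold(g))
```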
Uncertainty and validation. Effect of model complexity on uncertainty estimates
International Nuclear Information System (INIS)
Elert, M.
1996-09-01
In the Model Complexity subgroup of BIOMOVS II, models of varying complexity have been applied to the problem of downward transport of radionuclides in soils. A scenario describing a case of surface contamination of a pasture soil was defined. Three different radionuclides with different environmental behavior and radioactive half-lives were considered: Cs-137, Sr-90 and I-129. The intention was to give a detailed specification of the parameters required by different kinds of model, together with reasonable values for the parameter uncertainty. A total of seven modelling teams participated in the study using 13 different models. Four of the modelling groups performed uncertainty calculations using nine different modelling approaches. The models used range in complexity from analytical solutions of a 2-box model using annual average data to numerical models coupling hydrology and transport using data varying on a daily basis. The complex models needed to consider all aspects of radionuclide transport in a soil with a variable hydrology are often impractical to use in safety assessments. Instead, simpler models, often box models, are preferred. The comparison of predictions made with the complex models and the simple models for this scenario shows that the predictions in many cases are very similar, e.g., in the predictions of the evolution of the root zone concentration. However, in other cases differences of many orders of magnitude can appear. One example is the prediction of the flux to the groundwater of radionuclides being transported through the soil column. Some issues that have come into focus in this study: There are large differences in the predicted soil hydrology and, as a consequence, also in the radionuclide transport, which suggests that there are large uncertainties in the calculation of effective precipitation and evapotranspiration. The approach used for modelling the water transport in the root zone has an impact on the predictions of the decline in root
Modelling and simulating in-stent restenosis with complex automata
Hoekstra, A.G.; Lawford, P.; Hose, R.
2010-01-01
In-stent restenosis, the maladaptive response of a blood vessel to injury caused by the deployment of a stent, is a multiscale system involving a large number of biological and physical processes. We describe a Complex Automata Model for in-stent restenosis, coupling bulk flow, drug diffusion, and
The Complexity of Developmental Predictions from Dual Process Models
Stanovich, Keith E.; West, Richard F.; Toplak, Maggie E.
2011-01-01
Drawing developmental predictions from dual-process theories is more complex than is commonly realized. Overly simplified predictions drawn from such models may lead to premature rejection of the dual process approach as one of many tools for understanding cognitive development. Misleading predictions can be avoided by paying attention to several…
Constructive Lower Bounds on Model Complexity of Shallow Perceptron Networks
Czech Academy of Sciences Publication Activity Database
Kůrková, Věra
2018-01-01
Roč. 29, č. 7 (2018), s. 305-315 ISSN 0941-0643 R&D Projects: GA ČR GA15-18108S Institutional support: RVO:67985807 Keywords : shallow and deep networks * model complexity and sparsity * signum perceptron networks * finite mappings * variational norms * Hadamard matrices Subject RIV: IN - Informatics, Computer Science Impact factor: 2.505, year: 2016
Complexity effects in choice experiments-based models
Dellaert, B.G.C.; Donkers, B.; van Soest, A.H.O.
2012-01-01
Many firms rely on choice experiment–based models to evaluate future marketing actions under various market conditions. This research investigates choice complexity (i.e., number of alternatives, number of attributes, and utility similarity between the most attractive alternatives) and individual
Kolmogorov complexity, pseudorandom generators and statistical models testing
Czech Academy of Sciences Publication Activity Database
Šindelář, Jan; Boček, Pavel
2002-01-01
Roč. 38, č. 6 (2002), s. 747-759 ISSN 0023-5954 R&D Projects: GA ČR GA102/99/1564 Institutional research plan: CEZ:AV0Z1075907 Keywords : Kolmogorov complexity * pseudorandom generators * statistical models testing Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.341, year: 2002
Framework for Modelling Multiple Input Complex Aggregations for Interactive Installations
DEFF Research Database (Denmark)
Padfield, Nicolas; Andreasen, Troels
2012-01-01
on fuzzy logic and provides a method for variably balancing interaction and user input with the intention of the artist or director. An experimental design is presented, demonstrating an intuitive interface for parametric modelling of a complex aggregation function. The aggregation function unifies...
Model-based safety architecture framework for complex systems
Schuitemaker, Katja; Rajabali Nejad, Mohammadreza; Braakhuis, J.G.; Podofillini, Luca; Sudret, Bruno; Stojadinovic, Bozidar; Zio, Enrico; Kröger, Wolfgang
2015-01-01
The shift to transparency and the general public's rising need for safety, together with the increasing complexity and interdisciplinarity of modern safety-critical Systems of Systems (SoS), have resulted in a Model-Based Safety Architecture Framework (MBSAF) for capturing and sharing architectural
A binary logistic regression model with complex sampling design of ...
African Journals Online (AJOL)
2017-09-03
Bi-variable and multi-variable binary logistic regression models with complex sampling design were fitted. Data were entered into STATA-12 and analyzed using SPSS-21.
On the general procedure for modelling complex ecological systems
International Nuclear Information System (INIS)
He Shanyu.
1987-12-01
In this paper, the principle of a general procedure for modelling complex ecological systems, i.e. the Adaptive Superposition Procedure (ASP), is briefly stated. The result of applying ASP in a national project for ecological regionalization is also described. (author). 3 refs
The dynamic complexity of a three species food chain model
International Nuclear Information System (INIS)
Lv Songjuan; Zhao Min
2008-01-01
In this paper, a three-species food chain model is investigated analytically, using theories of ecology, and by numerical simulation. Bifurcation diagrams are obtained for biologically feasible parameters. The results show that the system exhibits rich complexity features such as stable, periodic and chaotic dynamics.
Sulis, William H
2017-10-01
Walter Freeman III pioneered the application of nonlinear dynamical systems theories and methodologies in his work on mesoscopic brain dynamics. Sadly, mainstream psychology and psychiatry still cling to linear, correlation-based data analysis techniques, which threaten to subvert the process of experimentation and theory building. In order to progress, it is necessary to develop tools capable of managing the stochastic complexity of complex biopsychosocial systems, which includes multilevel feedback relationships, nonlinear interactions, chaotic dynamics and adaptability. In addition, however, these systems exhibit intrinsic randomness, non-Gaussian probability distributions, non-stationarity, contextuality, and non-Kolmogorov probabilities, as well as the absence of mean and/or variance and conditional probabilities. These properties and their implications for statistical analysis are discussed. An alternative approach, the Process Algebra approach, is described. It is a generative model, capable of generating non-Kolmogorov probabilities. It has proven useful in addressing fundamental problems in quantum mechanics and in the modeling of developing psychosocial systems.
Hill, Renee J.; Chopra, Pradeep; Richardi, Toni
2012-01-01
Explaining the etiology of Complex Regional Pain Syndrome (CRPS) from the psychogenic model is exceedingly unsophisticated, because neurocognitive deficits, neuroanatomical abnormalities, and distortions in cognitive mapping are features of CRPS pathology. More importantly, many people who have developed CRPS have no history of mental illness. The psychogenic model offers comfort to physicians and mental health practitioners (MHPs) who have difficulty understanding pain maintained by newly uncovered neuroinflammatory processes. With increased education about CRPS through a biopsychosocial perspective, both physicians and MHPs can better diagnose, treat, and manage CRPS symptomatology. PMID:24223338
Directory of Open Access Journals (Sweden)
Henry de-Graft Acquah
2013-01-01
Information Criteria provide an attractive basis for selecting the best model from a set of competing asymmetric price transmission models or theories. However, little is understood about the sensitivity of the model selection methods to model complexity. This study therefore fits competing asymmetric price transmission models that differ in complexity to simulated data and evaluates the ability of the model selection methods to recover the true model. The results of Monte Carlo experimentation suggest that, in general, BIC, CAIC and DIC were superior to AIC when the true data-generating process was the standard error correction model, whereas AIC was more successful when the true model was the complex error correction model. It is also shown that the model selection methods performed better in large samples for a complex asymmetric data-generating process than with a standard asymmetric data-generating process. Except for complex models, AIC's performance did not make substantial gains in recovery rates as sample size increased. The research findings demonstrate the influence of model complexity on asymmetric price transmission model comparison and selection.
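The criteria being compared have simple closed forms (DIC excepted, since it needs a posterior sample). As a quick reference, a minimal sketch with illustrative numbers:

```python
import math

# Illustration only: the information criteria compared in the study, computed
# from a model's maximized log-likelihood logL, parameter count k, and sample
# size n. (DIC is omitted since it requires a posterior sample.)
def aic(logL, k):
    return -2.0 * logL + 2.0 * k

def bic(logL, k, n):
    return -2.0 * logL + k * math.log(n)

def caic(logL, k, n):
    return -2.0 * logL + k * (math.log(n) + 1.0)

# BIC and CAIC penalize extra parameters more heavily than AIC once n >= 8,
# which is consistent with their tendency to favor the simpler standard model.
print(aic(-100.0, 5), bic(-100.0, 5, 200), caic(-100.0, 5, 200))
```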
Higher genus correlators for the complex matrix model
International Nuclear Information System (INIS)
Ambjorn, J.; Kristhansen, C.F.; Makeenko, Y.M.
1992-01-01
In this paper, the authors describe an iterative scheme which allows us to calculate any multi-loop correlator for the complex matrix model to any genus using only the first in the chain of loop equations. The method works for a completely general potential and the results contain no explicit reference to the couplings. The genus g contribution to the m-loop correlator depends on a finite number of parameters, namely at most 4g - 2 + m. The authors find the generating functional explicitly up to genus three. The authors show as well that the model is equivalent to an external field problem for the complex matrix model with a logarithmic potential
Reduced Complexity Volterra Models for Nonlinear System Identification
Directory of Open Access Journals (Sweden)
Hacıoğlu Rıfat
2001-01-01
A broad class of nonlinear systems and filters can be modeled by the Volterra series representation. However, its practical use in nonlinear system identification is sometimes limited due to the large number of parameters associated with the Volterra filter's structure. The parametric complexity also complicates design procedures based upon such a model. This limitation for system identification is addressed in this paper using a Fixed Pole Expansion Technique (FPET) within the Volterra model structure. The FPET approach employs orthonormal basis functions derived from fixed (real or complex) pole locations to expand the Volterra kernels and reduce the number of estimated parameters. That FPET can considerably reduce the number of estimated parameters is demonstrated by a digital satellite channel example in which we use the proposed method to identify the channel dynamics. Furthermore, a gradient-descent procedure that adaptively selects the pole locations in the FPET structure is developed in the paper.
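The parameter-count problem being addressed can be seen directly from the Volterra representation itself. The sketch below (kernel sizes and values are illustrative, and it does not implement FPET) computes the output of a discrete second-order Volterra filter:

```python
# Minimal sketch (kernel sizes and values are illustrative): the output of a
# discrete second-order Volterra filter, the representation whose parameter
# count grows quickly with memory length M and which FPET is designed to tame.
def volterra2(x, h1, h2):
    M = len(h1)
    y = []
    for n in range(len(x)):
        # Most recent M input samples, zero-padded before the signal start.
        past = [x[n - i] if n - i >= 0 else 0.0 for i in range(M)]
        linear = sum(h1[i] * past[i] for i in range(M))
        quadratic = sum(h2[i][j] * past[i] * past[j]
                        for i in range(M) for j in range(M))
        y.append(linear + quadratic)
    return y

# A memory-M quadratic filter already carries M + M*M coefficients; FPET
# shrinks this count by expanding the kernels in a fixed-pole basis.
print(volterra2([1.0, 0.0, 0.0], h1=[0.5, 0.25], h2=[[0.1, 0.0], [0.0, 0.0]]))
```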
Deciphering the complexity of acute inflammation using mathematical models.
Vodovotz, Yoram
2006-01-01
Various stresses elicit an acute, complex inflammatory response, leading to healing but sometimes also to organ dysfunction and death. We constructed both equation-based models (EBM) and agent-based models (ABM) of various degrees of granularity--which encompass the dynamics of relevant cells, cytokines, and the resulting global tissue dysfunction--in order to begin to unravel these inflammatory interactions. The EBMs describe and predict various features of septic shock and trauma/hemorrhage (including the response to anthrax, preconditioning phenomena, and irreversible hemorrhage) and were used to simulate anti-inflammatory strategies in clinical trials. The ABMs that describe the interrelationship between inflammation and wound healing yielded insights into intestinal healing in necrotizing enterocolitis, vocal fold healing during phonotrauma, and skin healing in the setting of diabetic foot ulcers. Modeling may help in understanding the complex interactions among the components of inflammation and response to stress, and therefore aid in the development of novel therapies and diagnostics.
Nonlinear model of epidemic spreading in a complex social network.
Kosiński, Robert A; Grabowski, A
2007-10-01
The epidemic spreading in a human society is a complex process, which can be described on the basis of a nonlinear mathematical model. In such an approach the complex and hierarchical structure of the social network (which has implications for the spreading of pathogens and can be treated as a complex network) can be taken into account. In our model each individual has one of five permitted states: susceptible, infected, infective, unsusceptible, or dead. This refers to the SEIR model used in epidemiology. The state of an individual changes in time, depending on the previous state and the interactions with other individuals. The description of the interpersonal contacts is based on experimental observations of the social relations in the community. It includes spatial localization of the individuals and the hierarchical structure of interpersonal interactions. Numerical simulations were performed for different types of epidemics, giving the progress of a spreading process and typical relationships (e.g. range of epidemic in time, the epidemic curve). The spreading process has a complex and spatially chaotic character. The time dependence of the number of infective individuals shows the nonlinear character of the spreading process. We investigate the influence of preventive vaccinations on the spreading process. In particular, for a critical value of preventively vaccinated individuals the percolation threshold is observed and the epidemic is suppressed.
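The qualitative effect of preventive vaccination can be reproduced even in a deliberately simplified, well-mixed caricature of the compartments above. The paper's model is agent-based on a hierarchical social network, so the sketch below is only a hedged mean-field stand-in; the rates and the vaccination coverage are illustrative values, not taken from the paper.

```python
# Well-mixed SEIR-style sketch with Euler integration. The rates beta, sigma,
# gamma and the vaccination coverage `vacc` are illustrative assumptions.
def seir(vacc=0.0, beta=0.4, sigma=0.2, gamma=0.1, days=300, dt=0.1):
    S, E, I, R = 0.999 * (1.0 - vacc), 0.0, 0.001, 0.0
    peak = I
    for _ in range(int(days / dt)):
        dS = -beta * S * I
        dE = beta * S * I - sigma * E
        dI = sigma * E - gamma * I
        dR = gamma * I
        S += dt * dS; E += dt * dE; I += dt * dI; R += dt * dR
        peak = max(peak, I)
    return R, peak

# Raising preventive vaccination coverage suppresses the outbreak; past a
# critical coverage (about 1 - gamma/beta here) the epidemic cannot take off,
# echoing the percolation-threshold observation in the abstract.
print(seir(0.0)[1], seir(0.5)[1], seir(0.8)[1])
```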
Elastic Network Model of a Nuclear Transport Complex
Ryan, Patrick; Liu, Wing K.; Lee, Dockjin; Seo, Sangjae; Kim, Young-Jin; Kim, Moon K.
2010-05-01
RanGTP plays an important role in both nuclear protein import and export cycles. In the nucleus, RanGTP releases macromolecular cargoes from importins and conversely facilitates cargo binding to exportins. Although the crystal structure of the nuclear import complex formed by the importin Kap95p and RanGTP was recently identified, its molecular mechanism still remains unclear. To understand the relationship between structure and function of a nuclear transport complex, a structure-based mechanical model of the Kap95p:RanGTP complex is introduced; the structure of Kap95p was obtained from the Protein Data Bank (www.pdb.org). In this model, a protein structure is simply modeled as an elastic network in which a set of coarse-grained point masses are connected by linear springs representing biochemical interactions at the atomic level. Harmonic normal mode analysis (NMA) and anharmonic elastic network interpolation (ENI) are performed to predict the modes of vibration and a feasible pathway between the locked and unlocked conformations of Kap95p, respectively. Simulation results imply that the binding of RanGTP to Kap95p induces the release of the cargo in the nucleus as well as preventing any new cargo from attaching to the Kap95p:RanGTP complex.
Entropy, complexity, and Markov diagrams for random walk cancer models.
Newton, Paul K; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter
2014-12-19
The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer defines a directed graph model in which nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.
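The two quantities linking the Markov model to the autopsy data, the steady-state distribution and its entropy, can be sketched on a toy matrix. The three sites and probabilities below are invented for illustration, not the paper's 12-cancer data:

```python
import math

# Steady state of a row-stochastic Markov transition matrix by power
# iteration, and the Shannon entropy of that distribution, the quantity
# used in the paper to rank cancers by complexity.
def steady_state(P, iters=2000):
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def entropy(pi):
    return -sum(p * math.log2(p) for p in pi if p > 0)

# Rows are "from" sites, columns are "to" sites; each row sums to 1.
P = [[0.1, 0.6, 0.3],
     [0.4, 0.2, 0.4],
     [0.5, 0.3, 0.2]]
pi = steady_state(P)
print(pi, entropy(pi))
```

In the paper's setting the steady state is matched to the autopsy distribution; a more even steady state gives higher entropy, up to log2 of the number of sites.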
BlenX-based compositional modeling of complex reaction mechanisms
Directory of Open Access Journals (Sweden)
Judit Zámborszky
2010-02-01
Molecular interactions are wired in a fascinating way, resulting in complex behavior of biological systems. Theoretical modeling provides a useful framework for understanding the dynamics and the function of such networks. The complexity of biological networks calls for conceptual tools that manage the combinatorial explosion of the set of possible interactions. A suitable conceptual tool to attack complexity is compositionality, already successfully used in the process algebra field to model computer systems. We rely on the BlenX programming language, which originated from the beta-binders process calculus, to specify and simulate high-level descriptions of biological circuits. The Gillespie stochastic framework of BlenX requires the decomposition of phenomenological functions into basic elementary reactions. Systematic unpacking of complex reaction mechanisms into BlenX templates is shown in this study. The estimation/derivation of missing parameters and the challenges emerging from compositional model building in stochastic process algebras are discussed. A biological example on the circadian clock is presented as a case study of BlenX compositionality.
Multiscale modeling of complex materials phenomenological, theoretical and computational aspects
Trovalusci, Patrizia
2014-01-01
The papers in this volume deal with materials science, theoretical mechanics and experimental and computational techniques at multiple scales, providing a sound base and a framework for many applications which are hitherto treated in a phenomenological sense. The basic principles are formulated of multiscale modeling strategies towards modern complex multiphase materials subjected to various types of mechanical, thermal loadings and environmental effects. The focus is on problems where mechanics is highly coupled with other concurrent physical phenomena. Attention is also focused on the historical origins of multiscale modeling and foundations of continuum mechanics currently adopted to model non-classical continua with substructure, for which internal length scales play a crucial role.
Building Better Ecological Machines: Complexity Theory and Alternative Economic Models
Directory of Open Access Journals (Sweden)
Jess Bier
2016-12-01
Computer models of the economy are regularly used to predict economic phenomena and set financial policy. However, the conventional macroeconomic models are currently being reimagined after they failed to foresee the current economic crisis, the outlines of which began to be understood only in 2007-2008. In this article we analyze the most prominent product of this reimagining: agent-based models (ABMs). ABMs are an influential alternative to standard economic models, and they are one focus of complexity theory, a discipline that is a more open successor to the conventional chaos and fractal modeling of the 1990s. The modelers who create ABMs claim that their models depict markets as ecologies, and that they are more responsive than conventional models that depict markets as machines. We challenge this presentation, arguing instead that recent modeling efforts amount to the creation of models as ecological machines. Our paper aims to contribute to an understanding of the organizing metaphors of macroeconomic models, which we argue is relevant conceptually and politically, e.g., when models are used for regulatory purposes.
Modelling and simulation of gas explosions in complex geometries
Energy Technology Data Exchange (ETDEWEB)
Saeter, Olav
1998-12-31
This thesis presents a three-dimensional Computational Fluid Dynamics (CFD) code (EXSIM94) for modelling and simulation of gas explosions in complex geometries. It gives the theory and validates the following sub-models: (1) the flow resistance and turbulence generation model for densely packed regions, (2) the flow resistance and turbulence generation model for single objects, and (3) the quasi-laminar combustion model. It is found that a simple model for flow resistance and turbulence generation in densely packed beds is able to reproduce the medium- and large-scale MERGE explosion experiments of the Commission of the European Communities (CEC) to within a factor of 2. The single-object model is found to predict explosion pressure in better agreement with the experiments when a modified k-ε model is used. This modification also gives slightly improved grid independence for realistic gas explosion approaches. One laminar model is found unsuitable for gas explosion modelling because of strong grid dependence. Another laminar model is found to be relatively grid independent and to work well in harmony with the turbulent combustion model. The code is validated against 40 realistic gas explosion experiments. It is relatively grid independent in predicting explosion pressure in different offshore geometries. It can predict the influence of ignition point location, vent arrangements, different geometries, scaling effects and gas reactivity. The validation study concludes with statistical and uncertainty analyses of the code performance. 98 refs., 96 figs, 12 tabs.
Parametric Linear Hybrid Automata for Complex Environmental Systems Modeling
Directory of Open Access Journals (Sweden)
Samar Hayat Khan Tareen
2015-07-01
Environmental systems, whether they be weather patterns or predator-prey relationships, are dependent on a number of different variables, each directly or indirectly affecting the system at large. Since not all of these factors are known, these systems take on non-linear dynamics, making it difficult to accurately predict meaningful behavioral trends far into the future. However, such dynamics do not warrant complete ignorance of efforts to understand and model close approximations of these systems. Towards this end, we have applied a logical modeling approach to model and analyze the behavioral trends and systematic trajectories that these systems exhibit, without delving into their quantification. This approach, formalized by René Thomas for discrete logical modeling of Biological Regulatory Networks (BRNs) and further extended in our previous studies as parametric biological linear hybrid automata (Bio-LHA), has previously been employed for the analyses of different molecular regulatory interactions occurring across various cells and microbial species. As relationships between different interacting components of a system can be simplified as positive or negative influences, we can employ the Bio-LHA framework to represent different components of the environmental system as positive or negative feedbacks. In the present study, we highlight the benefits of hybrid (discrete/continuous) modeling, which leads to refinements among the forecast behaviors in order to find out which ones are actually possible. We have taken two case studies: an interaction of three microbial species in a freshwater pond, and a more complex atmospheric system, to show the applications of the Bio-LHA methodology for the timed hybrid modeling of environmental systems. Results show that the approach using the Bio-LHA is a viable method for behavioral modeling of complex environmental systems, finding timing constraints while keeping the complexity of the model
Bridging Mechanistic and Phenomenological Models of Complex Biological Systems.
Transtrum, Mark K; Qiu, Peng
2016-05-01
The inherent complexity of biological systems gives rise to complicated mechanistic models with a large number of parameters. On the other hand, the collective behavior of these systems can often be characterized by a relatively small number of phenomenological parameters. We use the Manifold Boundary Approximation Method (MBAM) as a tool for deriving simple phenomenological models from complicated mechanistic models. The resulting models are not black boxes, but remain expressed in terms of the microscopic parameters. In this way, we explicitly connect the macroscopic and microscopic descriptions, characterize the equivalence class of distinct systems exhibiting the same range of collective behavior, and identify the combinations of components that function as tunable control knobs for the behavior. We demonstrate the procedure for adaptation behavior exhibited by the EGFR pathway. From a 48 parameter mechanistic model, the system can be effectively described by a single adaptation parameter τ characterizing the ratio of time scales for the initial response and recovery time of the system which can in turn be expressed as a combination of microscopic reaction rates, Michaelis-Menten constants, and biochemical concentrations. The situation is not unlike modeling in physics in which microscopically complex processes can often be renormalized into simple phenomenological models with only a few effective parameters. The proposed method additionally provides a mechanistic explanation for non-universal features of the behavior.
Mathematical modelling of complex contagion on clustered networks
O'sullivan, David J.; O'Keeffe, Gary; Fennell, Peter; Gleeson, James
2015-09-01
The spreading of behavior, such as the adoption of a new innovation, is influenced by the structure of social networks that interconnect the population. In the experiments of Centola (Science, 2010), adoption of new behavior was shown to spread further and faster across clustered-lattice networks than across corresponding random networks. This implies that the “complex contagion” effects of social reinforcement are important in such diffusion, in contrast to “simple” contagion models of disease-spread which predict that epidemics would grow more efficiently on random networks than on clustered networks. To accurately model complex contagion on clustered networks remains a challenge because the usual assumptions (e.g. of mean-field theory) regarding tree-like networks are invalidated by the presence of triangles in the network; the triangles are, however, crucial to the social reinforcement mechanism, which posits an increased probability of a person adopting behavior that has been adopted by two or more neighbors. In this paper we modify the analytical approach that was introduced by Hebert-Dufresne et al. (Phys. Rev. E, 2010) to study disease-spread on clustered networks. We show how the approximation method can be adapted to a complex contagion model, and confirm the accuracy of the method with numerical simulations. The analytical results of the model enable us to quantify the level of social reinforcement that is required to observe—as in Centola’s experiments—faster diffusion on clustered topologies than on random networks.
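The qualitative effect described in this abstract can be illustrated with a toy simulation: under a threshold-2 adoption rule, a cascade spreads across a clustered ring lattice (where consecutive nodes share neighbours) but stalls on a triangle-free graph of similar size. This is a hedged sketch of the mechanism only; the graph constructions are invented for illustration and are not the approximation scheme of the paper.

```python
# Threshold-2 ("complex") contagion: a node adopts once at least `theta`
# of its neighbours have adopted.

def cascade(adj, seeds, theta=2):
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v not in adopted and len(adj[v] & adopted) >= theta:
                adopted.add(v)
                changed = True
    return adopted

n = 60
# Clustered ring lattice: each node links to its 2 nearest neighbours on
# each side, so consecutive nodes share neighbours (many triangles).
lattice = {i: {(i + d) % n for d in (-2, -1, 1, 2)} for i in range(n)}
# Triangle-free comparison graph: a ring plus long-range chords, so no two
# adjacent adopters ever share a neighbour to reinforce.
chords = {i: {(i - 1) % n, (i + 1) % n, (i + n // 2) % n} for i in range(n)}

full = cascade(lattice, {0, 1})    # reinforcement via shared neighbours
stalled = cascade(chords, {0, 1})  # no node sees two adopted neighbours
```

On the lattice the two seeds jointly activate their common neighbours and the cascade wraps the whole ring; on the triangle-free graph it never leaves the seed set, mirroring why clustering helps complex (but not simple) contagion.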
Mathematical modelling of complex contagion on clustered networks
Directory of Open Access Journals (Sweden)
David J. P. O'Sullivan
2015-09-01
Full Text Available The spreading of behavior, such as the adoption of a new innovation, is influenced by the structure of social networks that interconnect the population. In the experiments of Centola (Science, 2010), adoption of new behavior was shown to spread further and faster across clustered-lattice networks than across corresponding random networks. This implies that the complex contagion effects of social reinforcement are important in such diffusion, in contrast to simple contagion models of disease-spread which predict that epidemics would grow more efficiently on random networks than on clustered networks. To accurately model complex contagion on clustered networks remains a challenge because the usual assumptions (e.g. of mean-field theory) regarding tree-like networks are invalidated by the presence of triangles in the network; the triangles are, however, crucial to the social reinforcement mechanism, which posits an increased probability of a person adopting behavior that has been adopted by two or more neighbors. In this paper we modify the analytical approach that was introduced by Hebert-Dufresne et al. (Phys. Rev. E, 2010) to study disease-spread on clustered networks. We show how the approximation method can be adapted to a complex contagion model, and confirm the accuracy of the method with numerical simulations. The analytical results of the model enable us to quantify the level of social reinforcement that is required to observe—as in Centola’s experiments—faster diffusion on clustered topologies than on random networks.
Directory of Open Access Journals (Sweden)
Ross R
2011-02-01
Full Text Available Abstract Background The aim of this study was to determine the catalytic activity and physiological role of myosin-cross-reactive antigen (MCRA) from Bifidobacterium breve NCIMB 702258. MCRA from B. breve NCIMB 702258 was cloned, sequenced and expressed in heterologous hosts (Lactococcus and Corynebacterium) and the recombinant proteins assessed for enzymatic activity against fatty acid substrates. Results MCRA catalysed the conversion of palmitoleic, oleic and linoleic acids to the corresponding 10-hydroxy fatty acids, but shorter chain fatty acids were not used as substrates, while the presence of trans-double bonds and double bonds beyond the position C12 abolished hydratase activity. The hydroxy fatty acids produced were not metabolised further. We also found that heterologous Lactococcus and Corynebacterium expressing MCRA accumulated increasing amounts of 10-HOA and 10-HOE in the culture medium. Furthermore, the heterologous cultures exhibited less sensitivity to heat and solvent stresses compared to corresponding controls. Conclusions MCRA protein in B. breve can be classified as a FAD-containing double bond hydratase, within the carbon-oxygen lyase family, which may be catalysing the first step in conjugated linoleic acid (CLA) production, and this protein has an additional function in bacterial stress protection.
Rosberg-Cody, Eva; Liavonchanka, Alena; Göbel, Cornelia; Ross, R Paul; O'Sullivan, Orla; Fitzgerald, Gerald F; Feussner, Ivo; Stanton, Catherine
2011-02-17
The aim of this study was to determine the catalytic activity and physiological role of myosin-cross-reactive antigen (MCRA) from Bifidobacterium breve NCIMB 702258. MCRA from B. breve NCIMB 702258 was cloned, sequenced and expressed in heterologous hosts (Lactococcus and Corynebacterium) and the recombinant proteins assessed for enzymatic activity against fatty acid substrates. MCRA catalysed the conversion of palmitoleic, oleic and linoleic acids to the corresponding 10-hydroxy fatty acids, but shorter chain fatty acids were not used as substrates, while the presence of trans-double bonds and double bonds beyond the position C12 abolished hydratase activity. The hydroxy fatty acids produced were not metabolised further. We also found that heterologous Lactococcus and Corynebacterium expressing MCRA accumulated increasing amounts of 10-HOA and 10-HOE in the culture medium. Furthermore, the heterologous cultures exhibited less sensitivity to heat and solvent stresses compared to corresponding controls. MCRA protein in B. breve can be classified as a FAD-containing double bond hydratase, within the carbon-oxygen lyase family, which may be catalysing the first step in conjugated linoleic acid (CLA) production, and this protein has an additional function in bacterial stress protection.
LENUS (Irish Health Repository)
Rosberg-Cody, Eva
2011-02-17
Abstract Background The aim of this study was to determine the catalytic activity and physiological role of myosin-cross-reactive antigen (MCRA) from Bifidobacterium breve NCIMB 702258. MCRA from B. breve NCIMB 702258 was cloned, sequenced and expressed in heterologous hosts (Lactococcus and Corynebacterium) and the recombinant proteins assessed for enzymatic activity against fatty acid substrates. Results MCRA catalysed the conversion of palmitoleic, oleic and linoleic acids to the corresponding 10-hydroxy fatty acids, but shorter chain fatty acids were not used as substrates, while the presence of trans-double bonds and double bonds beyond the position C12 abolished hydratase activity. The hydroxy fatty acids produced were not metabolised further. We also found that heterologous Lactococcus and Corynebacterium expressing MCRA accumulated increasing amounts of 10-HOA and 10-HOE in the culture medium. Furthermore, the heterologous cultures exhibited less sensitivity to heat and solvent stresses compared to corresponding controls. Conclusions MCRA protein in B. breve can be classified as a FAD-containing double bond hydratase, within the carbon-oxygen lyase family, which may be catalysing the first step in conjugated linoleic acid (CLA) production, and this protein has an additional function in bacterial stress protection.
The semiotics of control and modeling relations in complex systems.
Joslyn, C
2001-01-01
We provide a conceptual analysis of ideas and principles from the systems theory discourse which underlie Pattee's semantic or semiotic closure, which is itself foundational for a school of theoretical biology derived from systems theory and cybernetics, and is now being related to biological semiotics and explicated in the relational biological school of Rashevsky and Rosen. Atomic control systems and models are described as the canonical forms of semiotic organization, sharing measurement relations, but differing topologically in that control systems are circularly and models linearly related to their environments. Computation in control systems is introduced, motivating hierarchical decomposition, hybrid modeling and control systems, and anticipatory or model-based control. The semiotic relations in complex control systems are described in terms of relational constraints, and rules and laws are distinguished as contingent and necessary functional entailments, respectively. Finally, selection as a meta-level of constraint is introduced as the necessary condition for semantic relations in control systems and models.
Predicting the future completing models of observed complex systems
Abarbanel, Henry
2013-01-01
Predicting the Future: Completing Models of Observed Complex Systems provides a general framework for the discussion of model building and validation across a broad spectrum of disciplines. This is accomplished through the development of an exact path integral for use in transferring information from observations to a model of the observed system. Through many illustrative examples drawn from models in neuroscience, fluid dynamics, geosciences, and nonlinear electrical circuits, the concepts are exemplified in detail. Practical numerical methods for approximate evaluations of the path integral are explored, and their use in designing experiments and determining a model's consistency with observations is investigated. Using highly instructive examples, the problems of data assimilation and the means to treat them are clearly illustrated. This book will be useful for students and practitioners of physics, neuroscience, regulatory networks, meteorology and climate science, network dynamics, fluid dynamics, and o...
An Ontology for Modeling Complex Inter-relational Organizations
Wautelet, Yves; Neysen, Nicolas; Kolp, Manuel
This paper presents an ontology for organizational modeling through multiple complementary aspects. The primary goal of the ontology is to provide an adequate set of related concepts for studying complex organizations involved in many relationships at the same time. In this paper, we define complex organizations as networked organizations involved in a market eco-system that are playing several roles simultaneously. In such a context, traditional approaches focus on the macro analytic level of transactions; this is supplemented here with a micro analytic study of the actors' rationale. At first, the paper overviews the enterprise ontologies literature to position our proposal and exposes its contributions and limitations. The ontology is then brought to an advanced level of formalization: a meta-model in the form of a UML class diagram gives an overview of the ontology concepts and their relationships, which are formally defined. Finally, the paper presents the case study on which the ontology has been validated.
A computational framework for modeling targets as complex adaptive systems
Santos, Eugene; Santos, Eunice E.; Korah, John; Murugappan, Vairavan; Subramanian, Suresh
2017-05-01
Modeling large military targets is a challenge as they can be complex systems encompassing myriad combinations of human, technological, and social elements that interact, leading to complex behaviors. Moreover, such targets have multiple components and structures, extending across multiple spatial and temporal scales, and are in a state of change, either in response to events in the environment or changes within the system. Complex adaptive system (CAS) theory can help in capturing the dynamism, interactions, and more importantly various emergent behaviors, displayed by the targets. However, a key stumbling block is incorporating information from various intelligence, surveillance and reconnaissance (ISR) sources, while dealing with the inherent uncertainty, incompleteness and time criticality of real world information. To overcome these challenges, we present a probabilistic reasoning network based framework called complex adaptive Bayesian Knowledge Base (caBKB). caBKB is a rigorous, overarching and axiomatic framework that models two key processes, namely information aggregation and information composition. While information aggregation deals with the union, merger and concatenation of information and takes into account issues such as source reliability and information inconsistencies, information composition focuses on combining information components where such components may have well defined operations. Since caBKBs can explicitly model the relationships between information pieces at various scales, it provides unique capabilities such as the ability to de-aggregate and de-compose information for detailed analysis. Using a scenario from the Network Centric Operations (NCO) domain, we will describe how our framework can be used for modeling targets with a focus on methodologies for quantifying NCO performance metrics.
Fundamentals of complex networks models, structures and dynamics
Chen, Guanrong; Li, Xiang
2014-01-01
Complex networks such as the Internet, WWW, transportation networks, power grids, biological neural networks, and scientific cooperation networks of all kinds provide challenges for future technological development. In particular, advanced societies have become dependent on large infrastructural networks to an extent beyond our capability to plan (modeling) and to operate (control). The recent spate of collapses in power grids and ongoing virus attacks on the Internet illustrate the need for knowledge about modeling, analysis of behaviors, optimized planning and performance control in such networks.
Model Complexities of Shallow Networks Representing Highly Varying Functions
Czech Academy of Sciences Publication Activity Database
Kůrková, Věra; Sanguineti, M.
2016-01-01
Roč. 171, 1 January (2016), s. 598-604 ISSN 0925-2312 R&D Projects: GA MŠk(CZ) LD13002 Grant - others:grant for Visiting Professors(IT) GNAMPA-INdAM Institutional support: RVO:67985807 Keywords : shallow networks * model complexity * highly varying functions * Chernoff bound * perceptrons * Gaussian kernel units Subject RIV: IN - Informatics, Computer Science Impact factor: 3.317, year: 2016
Complexity and agent-based modelling in urban research
DEFF Research Database (Denmark)
Fertner, Christian
Urbanisation processes are results of a broad variety of actors or actor groups and their behaviour and decisions based on different experiences, knowledge, resources, values etc. The decisions made are often on a micro/individual level but result in macro/collective behaviour. In urban research … influence on the bigger system. Traditional scientific methods or theories often tried to simplify, not accounting for the complex relations of actors and decision-making. The introduction of computers in simulation made new approaches in modelling, as for example agent-based modelling (ABM), possible, dealing …
Methodology and Results of Mathematical Modelling of Complex Technological Processes
Mokrova, Nataliya V.
2018-03-01
The methodology of system analysis allows us to draw a mathematical model of the complex technological process. The mathematical description of the plasma-chemical process was proposed. The importance of the quenching rate and of the initial temperature decrease time was confirmed for producing the maximum amount of the target product. The results of numerical integration of the system of differential equations can be used to describe reagent concentrations, plasma jet rate and temperature in order to achieve the optimal mode of hardening. Such models are applicable both for solving control problems and for predicting future states of sophisticated technological systems.
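As a hedged illustration of the kind of numerical integration this abstract refers to, the sketch below applies a classical fourth-order Runge-Kutta step to a simple Newtonian-cooling stand-in for quenching, dT/dt = -k(T - T_env). The model form and all constants are placeholders for illustration, not values from the paper.

```python
import math

def rk4(f, t, y, h):
    """One classical RK4 step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Illustrative quench: exponential cooling toward ambient temperature.
k, T_env, T0 = 2.0, 300.0, 1000.0
f = lambda t, T: -k * (T - T_env)

T, h = T0, 0.01
for step in range(100):          # integrate from t = 0 to t = 1
    T = rk4(f, step * h, T, h)

# Closed-form solution for comparison.
exact = T_env + (T0 - T_env) * math.exp(-k * 1.0)
```

For this linear test problem the RK4 trajectory tracks the analytic solution to well below a millidegree, which is the kind of check one would run before trusting the integrator on the full coupled reagent/temperature system.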
The complex sine-Gordon model on a half line
International Nuclear Information System (INIS)
Tzamtzis, Georgios
2003-01-01
In this thesis, we study the complex sine-Gordon model on a half line. The model in the bulk is an integrable (1+1) dimensional field theory which is U(1) gauge invariant and comprises a generalisation of the sine-Gordon theory. It accepts soliton and breather solutions. By introducing suitably selected boundary conditions we may consider the model on a half line. Through such conditions the model can be shown to remain integrable and various aspects of the boundary theory can be examined. The first chapter serves as a brief introduction to some basic concepts of integrability and soliton solutions. As an example of an integrable system with soliton solutions, the sine-Gordon model is presented both in the bulk and on a half line. These results will serve as a useful guide for the model at hand. The introduction finishes with a brief overview of the two methods that will be used on the fourth chapter in order to obtain the quantum spectrum of the boundary complex sine-Gordon model. In the second chapter the model is properly introduced along with a brief literature review. Different realisations of the model and their connexions are discussed. The vacuum of the theory is investigated. Soliton solutions are given and a discussion on the existence of breathers follows. Finally the collapse of breather solutions to single solitons is demonstrated and the chapter concludes with a different approach to the breather problem. In the third chapter, we construct the lowest conserved currents and through them we find suitable boundary conditions that allow for their conservation in the presence of a boundary. The boundary term is added to the Lagrangian and the vacuum is reexamined in the half line case. The reflection process of solitons from the boundary is studied and the time-delay is calculated. Finally we address the existence of boundary-bound states. In the fourth chapter we study the quantum complex sine-Gordon model. We begin with a brief overview of the theory in
Extending a configuration model to find communities in complex networks
International Nuclear Information System (INIS)
Jin, Di; Hu, Qinghua; He, Dongxiao; Yang, Bo; Baquero, Carlos
2013-01-01
Discovery of communities in complex networks is a fundamental data analysis task in various domains. Generative models are a promising class of techniques for identifying modular properties from networks, which has been actively discussed recently. However, most of them cannot preserve the degree sequence of networks, which will distort the community detection results. Rather than using a blockmodel as most current works do, here we generalize a configuration model, namely, a null model of modularity, to solve this problem. By decomposing and combining sub-graphs according to soft community memberships, our model incorporates the ability to describe community structures, something the original model does not have. Also, it has the property, as with the original model, that it fixes the expected degree sequence to be the same as that of the observed network. We combine both the community property and degree sequence preserving into a single unified model, which gives better community results compared with other models. Thereafter, we learn the model using a technique of nonnegative matrix factorization and determine the number of communities by applying consensus clustering. We test this approach both on synthetic benchmarks and on real-world networks, and compare it with two similar methods. The experimental results demonstrate the superior performance of our method over competing methods in detecting both disjoint and overlapping communities. (paper)
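The nonnegative-matrix-factorization learning step mentioned above can be sketched on a toy network: factorize the adjacency matrix A as A ≈ H Hᵀ with H ≥ 0, so each column of H scores one community and the row-wise argmax gives hard labels. This is plain symmetric NMF with damped multiplicative updates, a simplified stand-in for (not the) degree-preserving model of the paper; the network and update count are invented for illustration.

```python
import numpy as np

# Toy network: two 4-node cliques joined by a single bridge edge (3-4).
n = 8
A = np.zeros((n, n))
for block in ([0, 1, 2, 3], [4, 5, 6, 7]):
    for i in block:
        for j in block:
            if i != j:
                A[i, j] = 1.0
A[3, 4] = A[4, 3] = 1.0

rng = np.random.default_rng(0)
H = rng.random((n, 2)) + 0.1                 # nonnegative random start
err0 = np.linalg.norm(A - H @ H.T)           # initial reconstruction error

# Damped multiplicative update for symmetric NMF: H <- H * 0.5*(1 + AH / (H H^T H)).
for _ in range(2000):
    H *= 0.5 * (1.0 + (A @ H) / (H @ (H.T @ H) + 1e-12))

err1 = np.linalg.norm(A - H @ H.T)           # error after fitting
labels = H.argmax(axis=1)                    # soft memberships -> hard labels
```

On this easy instance the factorization recovers the two cliques as the two communities (up to label swap), with the bridge nodes still dominated by their own clique's membership score.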
Tucker, Trudy-Ann; Crow, Sidney A; Pierce, George E
2012-11-01
Rhodococcus is an important industrial microorganism that possesses diverse metabolic capabilities; it also has a cell envelope composed of an outer layer of mycolic acids and glycolipids. Selected Rhodococcus species, when induced, are capable of transforming nitriles to the corresponding amide by the enzyme nitrile hydratase (NHase), and subsequently to the corresponding acid via an amidase. This nitrile biochemistry has generated interest in using the rhodococci as biocatalysts. It was hypothesized that altering sugars in the growth medium might impact cell envelope components and have effects on NHase. When the primary carbon source in growth media was changed from glucose to fructose, maltose, or maltodextrin, the NHase activity increased. Cells grown in the presence of maltose and maltodextrin showed the highest activities against propionitrile, 197 and 202 units/mg cdw, respectively. Stability of NHase was also affected, as cells grown in the presence of maltose and maltodextrin retained more NHase activity at 55 °C (45 and 23 %, respectively) than cells grown in the presence of glucose or fructose (19 and 10 %, respectively). Supplementation of trehalose in the growth media resulted in increased NHase stability at 55 °C, as cells grown in the presence of glucose retained 40 % NHase activity as opposed to 19 % without the presence of trehalose. Changes in cell envelope components, such as mycolic acids and glycolipids, were evaluated by high-performance liquid chromatography (HPLC) and thin-layer chromatography (TLC), respectively. Changing sugars and the addition of inducing components for NHase, such as cobalt and urea in growth media, resulted in changes in mycolic acid profiles. Mycolic acid content increased 5 times when cobalt and urea were added to media with glucose. Glycolipid levels were also affected by the changes in sugars and addition of inducing components. This research demonstrates that carbohydrate selection impacts NHase activity and
Modelling non-redox enzymes: Anaerobic and aerobic acetylene ...
Indian Academy of Sciences (India)
Modelling non-redox enzymes: Anaerobic and aerobic acetylene hydratase. Sabyasachi Sarkar, Department of Chemistry, Indian Institute of Technology, Kanpur 208 016, India. Acetaldehyde is the first metabolite produced during acetylene degradation by bacteria either aerobically or anaerobically. Conversion of ...
Using model complexes to augment and advance metalloproteinase inhibitor design.
Jacobsen, Faith E; Cohen, Seth M
2004-05-17
The tetrahedral zinc complex [(Tp(Ph,Me))ZnOH] (Tp(Ph,Me) = hydrotris(3,5-phenylmethylpyrazolyl)borate) was combined with 2-thenylmercaptan, ethyl 4,4,4-trifluoroacetoacetate, salicylic acid, salicylamide, thiosalicylic acid, thiosalicylamide, methyl salicylate, methyl thiosalicylate, and 2-hydroxyacetophenone to form the corresponding [(Tp(Ph,Me))Zn(ZBG)] complexes (ZBG = zinc-binding group). X-ray crystal structures of these complexes were obtained to determine the mode of binding for each ZBG, several of which had been previously studied with SAR by NMR (structure-activity relationship by nuclear magnetic resonance) as potential ligands for use in matrix metalloproteinase inhibitors. The [(Tp(Ph,Me))Zn(ZBG)] complexes show that hydrogen bonding and donor atom acidity have a pronounced effect on the mode of binding for this series of ligands. The results of these studies give valuable insight into how ligand protonation state and intramolecular hydrogen bonds can influence the coordination mode of metal-binding proteinase inhibitors. The findings here suggest that model-based approaches can be used to augment drug discovery methods applied to metalloproteins and can aid second-generation drug design.
Semiotic aspects of control and modeling relations in complex systems
Energy Technology Data Exchange (ETDEWEB)
Joslyn, C.
1996-08-01
A conceptual analysis of the semiotic nature of control is provided with the goal of elucidating its nature in complex systems. Control is identified as a canonical form of semiotic relation of a system to its environment. As a form of constraint between a system and its environment, its necessary and sufficient conditions are established, and the stabilities resulting from control are distinguished from other forms of stability. These result from the presence of semantic coding relations, and thus the class of control systems is hypothesized to be equivalent to that of semiotic systems. Control systems are contrasted with models, which, while they have the same measurement functions as control systems, do not necessarily require semantic relations because of the lack of the requirement of an interpreter. A hybrid construction of models in control systems is detailed. Towards the goal of considering the nature of control in complex systems, the possible relations among collections of control systems are considered. Powers' arguments on conflict among control systems and the possible nature of control in social systems are reviewed, and reconsidered based on our observations about hierarchical control. Finally, we discuss the necessary semantic functions which must be present in complex systems for control in this sense to be present at all.
Stability of rotor systems: A complex modelling approach
DEFF Research Database (Denmark)
Kliem, Wolfhard; Pommer, Christian; Stoustrup, Jakob
1998-01-01
The dynamics of a large class of rotor systems can be modelled by a linearized complex matrix differential equation of second order, Mz'' + (D + iG)z' + (K + iN)z = 0, where the system matrices M, D, G, K and N are real symmetric. Moreover, M and K are assumed to be positive definite and D … approach applying bounds of appropriate Rayleigh quotients. The rotor systems tested are: a simple Laval rotor, a Laval rotor with additional elasticity and damping in the bearings, and a number of rotor systems with complex symmetric 4 x 4 randomly generated matrices.
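Stability of the complex second-order model Mz'' + (D + iG)z' + (K + iN)z = 0 can also be checked directly by linearizing to a first-order companion system and asking whether every eigenvalue has a negative real part. The sketch below does this for two illustrative 2x2 diagonal examples (not the randomly generated systems of the paper): a damped gyroscopic system with N = 0, and the same system with a strong circulatory term N, which destabilizes it.

```python
import numpy as np

def is_stable(M, DG, KN):
    """Max real part of eigenvalues of the companion form
    [[0, I], [-M^-1 KN, -M^-1 DG]] must be negative for asymptotic stability."""
    m = M.shape[0]
    Minv = np.linalg.inv(M)
    top = np.hstack([np.zeros((m, m)), np.eye(m)])
    bot = np.hstack([-Minv @ KN, -Minv @ DG])
    eig = np.linalg.eigvals(np.vstack([top, bot]))
    return eig.real.max() < 0

I2 = np.eye(2)
# Damped gyroscopic case: D = 0.1*I, G = 0.5*I, N = 0 -> expected stable.
stable_gyro = is_stable(I2, 0.1 * I2 + 0.5j * I2, np.diag([1.0, 2.0]))
# Same damping but a strong circulatory term N = 2*I -> destabilized.
stable_circ = is_stable(I2, 0.1 * I2, np.diag([1.0, 2.0]) + 2.0j * I2)
```

Because the example matrices are diagonal, each companion eigenvalue is a root of a scalar quadratic lambda^2 + (d + ig)lambda + (k + in) = 0, which makes the two outcomes easy to verify by hand.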
Surface complexation modeling of zinc sorption onto ferrihydrite.
Dyer, James A; Trivedi, Paras; Scrivner, Noel C; Sparks, Donald L
2004-02-01
A previous study involving lead(II) [Pb(II)] sorption onto ferrihydrite over a wide range of conditions highlighted the advantages of combining molecular- and macroscopic-scale investigations with surface complexation modeling to predict Pb(II) speciation and partitioning in aqueous systems. In this work, an extensive collection of new macroscopic and spectroscopic data was used to assess the ability of the modified triple-layer model (TLM) to predict single-solute zinc(II) [Zn(II)] sorption onto 2-line ferrihydrite in NaNO3 solutions as a function of pH, ionic strength, and concentration. Regression of constant-pH isotherm data, together with potentiometric titration and pH edge data, was a much more rigorous test of the modified TLM than fitting pH edge data alone. When coupled with valuable input from spectroscopic analyses, good fits of the isotherm data were obtained with a one-species, one-Zn-sorption-site model using the bidentate-mononuclear surface complex (≡FeO)2Zn; however, surprisingly, both the density of Zn(II) sorption sites and the value of the best-fit equilibrium "constant" for the bidentate-mononuclear complex had to be adjusted with pH to adequately fit the isotherm data. Although spectroscopy provided some evidence for multinuclear surface complex formation at surface loadings approaching site saturation at pH ≥ 6.5, the assumption of a bidentate-mononuclear surface complex provided acceptable fits of the sorption data over the entire range of conditions studied. Regressing edge data in the absence of isotherm and spectroscopic data resulted in a fair number of surface-species/site-type combinations that provided acceptable fits of the edge data, but unacceptable fits of the isotherm data. A linear relationship between log K((≡FeO)2Zn) and pH was found, given by log K((≡FeO)2Zn, at 1 g/L) = 2.058(pH) − 6.131. In addition, a surface activity coefficient term was introduced to the model to reduce the ionic strength
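The fitted linear pH dependence reported in this abstract, log K = 2.058(pH) − 6.131 for the bidentate-mononuclear complex at 1 g/L sorbent, can be applied directly; the short sketch below just evaluates it at a couple of pH values to show the scale of the effect (the pH values chosen are illustrative, not from the paper).

```python
def log_k_feo2zn(pH):
    """Best-fit conditional log K for the (≡FeO)2Zn complex at 1 g/L,
    using the linear relation reported in the abstract."""
    return 2.058 * pH - 6.131

# Conditional constant at pH 6.5, and the gain per half pH unit
# (slope 2.058 means roughly one order of magnitude per 0.49 pH units).
lk_65 = log_k_feo2zn(6.5)
gain = log_k_feo2zn(7.0) - log_k_feo2zn(6.5)
```

The strong pH dependence of the best-fit "constant" is exactly the surprise the abstract flags: for a true equilibrium constant, this slope would be zero.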
Modelling of the quenching process in complex superconducting magnet systems
International Nuclear Information System (INIS)
Hagedorn, D.; Rodriguez-Mateos, F.
1992-01-01
This paper reports that the superconducting twin bore dipole magnet for the proposed Large Hadron Collider (LHC) at CERN shows a complex winding structure consisting of eight compact layers, each of them electromagnetically and thermally coupled with the others. This magnet is only one part of an electrical circuit; test and operation conditions are characterized by different circuits. In order to study the quenching process in this complex system, design adequate protection schemes, and provide a basis for the dimensioning of protection devices such as heaters, current breakers and dump resistors, a general simulation tool called QUABER has been developed using the analog system analysis program SABER. A complete set of electro-thermal models has been created for the propagation of normal regions. Any network extension or modification is easy to implement without rewriting the whole set of differential equations.
A Primer for Model Selection: The Decisive Role of Model Complexity
Höge, Marvin; Wöhling, Thomas; Nowak, Wolfgang
2018-03-01
Selecting a "best" model among several competing candidate models poses an often encountered problem in water resources modeling (and other disciplines which employ models). For a modeler, the best model fulfills a certain purpose best (e.g., flood prediction), which is typically assessed by comparing model simulations to data (e.g., stream flow). Model selection methods find the "best" trade-off between good fit with data and model complexity. In this context, the interpretations of model complexity implied by different model selection methods are crucial, because they represent different underlying goals of modeling. Over the last decades, numerous model selection criteria have been proposed, but modelers who primarily want to apply a model selection criterion often face a lack of guidance for choosing the right criterion that matches their goal. We propose a classification scheme for model selection criteria that helps to find the right criterion for a specific goal, i.e., which employs the correct complexity interpretation. We identify four model selection classes which seek to achieve high predictive density, low predictive error, high model probability, or shortest compression of data. These goals can be achieved by following either nonconsistent or consistent model selection and by either incorporating a Bayesian parameter prior or not. We allocate commonly used criteria to these four classes, analyze how they represent model complexity and what this means for the model selection task. Finally, we provide guidance on choosing the right type of criteria for specific model selection tasks. (A quick guide through all key points is given at the end of the introduction.)
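The fit-versus-complexity trade-off discussed in this abstract can be made concrete with two widely used criteria. For least-squares models, common forms are AIC = n·ln(RSS/n) + 2k and BIC = n·ln(RSS/n) + k·ln(n), where n is the number of data points and k the number of parameters. The sketch below uses made-up RSS values (not data from the paper) to show that two criteria with different complexity penalties can select different "best" models.

```python
import math

def aic(n, rss, k):
    # Akaike information criterion (least-squares form): lighter penalty 2k.
    return n * math.log(rss / n) + 2 * k

def bic(n, rss, k):
    # Bayesian information criterion: heavier penalty k*ln(n) for n > 7.
    return n * math.log(rss / n) + k * math.log(n)

n = 50
# Three nested candidates: (residual sum of squares, number of parameters).
candidates = [(10.0, 2), (9.5, 3), (9.4, 4)]

best_aic = min(range(len(candidates)), key=lambda i: aic(n, *candidates[i]))
best_bic = min(range(len(candidates)), key=lambda i: bic(n, *candidates[i]))
```

Here AIC picks the middle model while BIC, whose per-parameter penalty ln(50) ≈ 3.9 exceeds AIC's 2, prefers the simplest candidate, illustrating the abstract's point that each criterion embodies a different interpretation of model complexity and thus a different modeling goal.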
Spectroscopic studies of molybdenum complexes as models for nitrogenase
International Nuclear Information System (INIS)
Walker, T.P.
1981-05-01
Because biological nitrogen fixation requires Mo, there is an interest in inorganic Mo complexes which mimic the reactions of nitrogen-fixing enzymes. Two such complexes are the dimer Mo₂O₄(cysteine)₂²⁻ and trans-Mo(N₂)₂(dppe)₂ (dppe = 1,2-bis(diphenylphosphino)ethane). The ¹H and ¹³C NMR of solutions of Mo₂O₄(cys)₂²⁻ are described. It is shown that in aqueous solution the cysteine ligands assume at least three distinct configurations. A step-wise dissociation of the cysteine ligand is proposed to explain the data. The Extended X-ray Absorption Fine Structure (EXAFS) of trans-Mo(N₂)₂(dppe)₂ is described and compared to the EXAFS of MoH₄(dppe)₂. The spectra are fitted to amplitude and phase parameters developed at Bell Laboratories. On the basis of this analysis, one can determine (1) that the dinitrogen complex contains nitrogen and the hydride complex does not and (2) the correct Mo-N distance. This is significant because the Mo in both complexes is coordinated by four P atoms which dominate the EXAFS. A similar sort of interference is present in nitrogenase due to S coordination of the Mo in the enzyme. This model experiment indicates that, given adequate signal-to-noise ratios, the presence or absence of dinitrogen coordination to Mo in the enzyme may be determined by EXAFS using existing data analysis techniques. A new reaction between Mo₂O₄(cys)₂²⁻ and acetylene is described to the extent it is presently understood. A strong EPR signal is observed, suggesting the production of stable Mo(V) monomers. EXAFS studies support this suggestion. The Mo K-edge is described. The edge data suggest Mo(VI) is also produced in the reaction. Ultraviolet spectra suggest that cysteine is released in the course of the reaction
Diffusion in higher dimensional SYK model with complex fermions
Cai, Wenhe; Ge, Xian-Hui; Yang, Guo-Hong
2018-01-01
We construct a new higher-dimensional SYK model with complex fermions on bipartite lattices. As an extension of the original zero-dimensional SYK model, we focus on the one-dimensional case; a similar Hamiltonian can be obtained in higher dimensions. This model has a conserved U(1) fermion number Q and a conjugate chemical potential μ. We evaluate the thermal and charge diffusion constants via a large-q expansion in the low-temperature limit. The results show that the diffusivity depends on the ratio of free Majorana fermions to Majorana fermions with SYK interactions. The transport properties and the butterfly velocity are calculated accordingly at low temperature. The specific heat and the thermal conductivity are proportional to the temperature. The electrical resistivity also has a linearly temperature-dependent term.
3D model of amphioxus steroid receptor complexed with estradiol
Energy Technology Data Exchange (ETDEWEB)
Baker, Michael E., E-mail: mbaker@ucsd.edu [Department of Medicine, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0693 (United States); Chang, David J. [Department of Biology, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0693 (United States)
2009-08-28
The origins of signaling by vertebrate steroids are not fully understood. An important advance was the report that an estrogen-binding steroid receptor [SR] is present in amphioxus, a basal chordate with a body plan similar to that of vertebrates. To investigate the evolution of estrogen binding to steroid receptors, we constructed a 3D model of the amphioxus SR complexed with estradiol. This 3D model indicates that although the SR is activated by estradiol, some interactions between estradiol and human ERα are not conserved in the SR, which can explain the low affinity of estradiol for the SR. These differences between the SR and ERα in the steroid-binding domain are sufficient to suggest that another steroid is the physiological regulator of the SR. The 3D model predicts that mutation of Glu-346 to Gln will increase the affinity of testosterone for the amphioxus SR, which would help elucidate the evolution of steroid binding to nuclear receptors.
International Nuclear Information System (INIS)
Bonten, Luc T.C.; Groenenberg, Jan E.; Meesenburg, Henning; Vries, Wim de
2011-01-01
Various dynamic soil chemistry models have been developed to gain insight into impacts of atmospheric deposition of sulphur, nitrogen and other elements on soil and soil solution chemistry. Sorption parameters for anions and cations are generally calibrated for each site, which hampers extrapolation in space and time. On the other hand, recently developed surface complexation models (SCMs) have been successful in predicting ion sorption for static systems using generic parameter sets. This study reports the inclusion of an assemblage of these SCMs in the dynamic soil chemistry model SMARTml and applies this model to a spruce forest site in Solling Germany. Parameters for SCMs were taken from generic datasets and not calibrated. Nevertheless, modelling results for major elements matched observations well. Further, trace metals were included in the model, also using the existing framework of SCMs. The model predicted sorption for most trace elements well. - Highlights: → Surface complexation models can be well applied in field studies. → Soil chemistry under a forest site is adequately modelled using generic parameters. → The model is easily extended with extra elements within the existing framework. → Surface complexation models can show the linkages between major soil chemistry and trace element behaviour. - Surface complexation models with generic parameters make calibration of sorption superfluous in dynamic modelling of deposition impacts on soil chemistry under nature areas.
Multiagent model and mean field theory of complex auction dynamics
Chen, Qinghua; Huang, Zi-Gang; Wang, Yougui; Lai, Ying-Cheng
2015-09-01
Recent years have witnessed a growing interest in analyzing a variety of socio-economic phenomena using methods from statistical and nonlinear physics. We study a class of complex systems arising from economics, the lowest unique bid auction (LUBA) systems, a recently emerged class of online auction game systems. Through analyzing large empirical data sets of LUBA, we identify a general feature of the bid price distribution: an inverted J-shaped function with exponential decay in the large bid price region. To account for the distribution, we propose a multi-agent model in which each agent bids stochastically in the field of the winner's attractiveness, and develop a theoretical framework to obtain analytic solutions of the model based on mean field analysis. The theory produces bid-price distributions that are in excellent agreement with those from the real data. Our model and theory capture the essential features of human behaviors in a competitive environment, as exemplified by LUBA, and may provide significant quantitative insights into complex socio-economic phenomena.
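The inverted-J shape can be reproduced qualitatively with a few lines of stochastic sampling. The sketch below is not the paper's multi-agent model; it simply draws bid prices from a weight function with the reported shape, using illustrative (not fitted) parameters:

```python
import math
import random
from collections import Counter

def simulate_bids(n_bids=10000, max_price=60, beta=0.15, seed=42):
    # Toy inverted-J bid-price distribution: weight ~ p * exp(-beta * p),
    # which rises at small prices and decays exponentially at large prices,
    # the qualitative shape reported for empirical LUBA data.
    prices = list(range(1, max_price + 1))
    weights = [p * math.exp(-beta * p) for p in prices]
    rng = random.Random(seed)
    return Counter(rng.choices(prices, weights=weights, k=n_bids))
```

The mode of this toy distribution sits near 1/beta; for large prices the exponential factor dominates, giving the decay seen in the empirical data.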
Complex Coronary Hemodynamics - Simple Analog Modelling as an Educational Tool.
Parikh, Gaurav R; Peter, Elvis; Kakouros, Nikolaos
2017-01-01
Invasive coronary angiography remains the cornerstone for evaluation of coronary stenoses, despite a poor correlation between luminal loss assessed by coronary luminography and myocardial ischemia. This is especially true for coronary lesions deemed moderate by visual assessment. Coronary pressure-derived fractional flow reserve (FFR) has emerged as the gold standard for evaluating the hemodynamic significance of coronary artery stenosis; it is cost-effective and leads to improved patient outcomes. There are, however, several limitations to the use of FFR, including the evaluation of serial stenoses. In this article, we discuss the electronic-hydraulic analogy and the utility of simple electrical modelling to mimic the coronary circulation and coronary stenoses. We exemplify the effect of tandem coronary lesions on the FFR by modelling a patient with sequential disease segments and complex anatomy. We believe that such computational modelling can serve as a powerful educational tool to help clinicians better understand the complexity of coronary hemodynamics and improve patient care.
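The electronic-hydraulic analogy can be reduced to Ohm's law. The sketch below is a teaching illustration with made-up resistance values, not a clinical tool or the authors' circuit: pressure plays the role of voltage, flow of current, each stenosis of a series resistor, and the microvascular bed of a terminal resistance to (zero) venous pressure:

```python
def ffr_series(pa, stenosis_resistances, r_micro):
    # Single coronary branch modeled as resistors in series.
    # pa: aortic pressure; stenosis_resistances: one value per lesion;
    # r_micro: microvascular resistance at maximal hyperemia.
    r_sten = sum(stenosis_resistances)
    flow = pa / (r_sten + r_micro)   # Ohm's law for the whole branch
    pd = pa - flow * r_sten          # pressure distal to all lesions
    return pd / pa                   # FFR = Pd / Pa

# One moderate lesion -> FFR = 0.80; two identical lesions in tandem -> FFR ~ 0.67
single = ffr_series(100.0, [20.0], 80.0)
tandem = ffr_series(100.0, [20.0, 20.0], 80.0)
```

Note that the tandem FFR is not simply the product of the individual FFRs, because each lesion alters the flow through the other; this interaction is exactly why serial stenoses are difficult to assess with a single FFR measurement.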
Wenchi Jin; Hong S. He; Frank R. Thompson
2016-01-01
Process-based forest ecosystem models range from simple physiological, to complex physiological, to hybrid empirical-physiological models. Previous studies indicate that complex models provide the best predictions at plot scale over temporal extents of less than 10 years; however, it is largely untested whether complex models outperform the other two types of models...
Modelling study of sea breezes in a complex coastal environment
Cai, X.-M.; Steyn, D. G.
This study investigates mesoscale modelling of sea breezes blowing from a narrow strait into the lower Fraser valley (LFV), British Columbia, Canada, during the period 17-20 July 1985. Without a nudging scheme in the inner grid, the CSU-RAMS model produces satisfactory wind and temperature fields during the daytime. In comparison with observations, the agreement indices for surface wind and temperature during daytime reach about 0.6 and 0.95, respectively, while the agreement indices drop to 0.4 at night. In the vertical, profiles of modelled wind and temperature generally agree with tethersonde data collected on 17 and 19 July. The study demonstrates that in late afternoon, the model does not capture the advection of an elevated warm layer which originated from land surfaces outside of the inner grid. Mixed layer depth (MLD) is calculated from the model output of the turbulent kinetic energy field. Comparison of the MLD results with observations shows that the method generates a reliable MLD during the daytime, and that accurate estimates of MLD near the coast require correct simulation of wind conditions over the sea. The study shows that for a complex coastal environment like the LFV, a reliable modelling study depends not only on local surface fluxes but also on elevated layers transported from remote land surfaces. This dependence is especially important when local forcings are weak, for example during late afternoon and at night.
Physical modelling of flow and dispersion over complex terrain
Cermak, J. E.
1984-09-01
Atmospheric motion and dispersion over topography characterized by irregular (or regular) hill-valley or mountain-valley distributions are strongly dependent upon three general sets of variables. These are variables that describe topographic geometry, synoptic-scale winds and surface-air temperature distributions. In addition, pollutant concentration distributions also depend upon location and physical characteristics of the pollutant source. Overall fluid-flow complexity and variability from site to site have stimulated the development and use of physical modelling for determination of flow and dispersion in many wind-engineering applications. Models with length scales as small as 1:12,000 have been placed in boundary-layer wind tunnels to study flows in which forced convection by synoptic winds is of primary significance. Flows driven primarily by forces arising from temperature differences (gravitational or free convection) have been investigated by small-scale physical models placed in an isolated space (gravitational convection chamber). Similarity criteria and facilities for both forced and gravitational-convection flow studies are discussed. Forced-convection modelling is illustrated by application to dispersion of air pollutants by unstable flow near a paper mill in the state of Maryland and by stable flow over Point Arguello, California. Gravitational-convection modelling is demonstrated by a study of drainage flow and pollutant transport from a proposed mining operation in the Rocky Mountains of Colorado. Other studies in which field data are available for comparison with model data are reviewed.
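As one concrete example of the similarity criteria mentioned above, gravitational-convection model studies commonly match the Froude number between model and prototype. The helper below is a hedged, generic illustration of the resulting velocity scaling, not a procedure taken from this paper:

```python
import math

def froude_scaled_speed(prototype_speed, length_ratio):
    # Froude-number similarity for gravity-driven flows:
    # matching Fr = U / sqrt(g * L) between model and prototype gives
    # U_model = U_prototype * sqrt(L_model / L_prototype).
    # length_ratio is L_model / L_prototype (e.g., 1/12000 for the
    # smallest scale mentioned in the abstract).
    return prototype_speed * math.sqrt(length_ratio)

# A 12 m/s prototype wind at a 1:10000 length scale maps to 0.12 m/s in the model
model_speed = froude_scaled_speed(12.0, 1.0 / 10000.0)
```

The time scale follows the same square-root ratio, which is why very small geometric scales lead to very slow, hard-to-measure model flows and force a compromise between scale and measurability.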
Modeling the Propagation of Mobile Phone Virus under Complex Network
Yang, Wei; Wei, Xi-liang; Guo, Hao; An, Gang; Guo, Lei
2014-01-01
A mobile phone virus is a rogue program written to propagate from one phone to another, which can take control of a mobile device by exploiting its vulnerabilities. In this paper, the propagation of mobile phone viruses is modelled to understand how particular factors affect propagation and to design effective containment strategies to suppress mobile phone viruses. Two different propagation models of mobile phone viruses under a complex network are proposed. One describes the propagation of user-tricking viruses, and the other describes the propagation of vulnerability-exploiting viruses. Based on traditional epidemic models, the characteristics of mobile phone viruses and the network topology structure are incorporated into our models. A detailed analysis of the propagation models is conducted. Through this analysis, the stable infection-free equilibrium point and the stability condition are derived. Finally, considering the network topology, numerical and simulation experiments are carried out. Results indicate that both models are correct and suitable for describing the spread of the two different mobile phone viruses, respectively. PMID:25133209
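The traditional epidemic models that such work builds on can be sketched in a few lines. The following SIR-type forward-Euler simulation is a generic illustration with illustrative parameters; the paper's own models additionally encode network topology and the two distinct infection mechanisms:

```python
def simulate_infected(beta=0.3, gamma=0.1, i0=0.01, steps=400, dt=0.25):
    # Minimal SIR compartment model: s susceptible, i infected, r recovered
    # (all as population fractions), integrated with explicit Euler steps.
    s, i, r = 1.0 - i0, i0, 0.0
    infected = []
    for _ in range(steps):
        new_inf = beta * s * i * dt   # infections from susceptible-infected contact
        new_rec = gamma * i * dt      # recovery / disinfection of infected phones
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        infected.append(i)
    return infected
```

In this simplified setting the infection-free equilibrium is stable when beta/gamma < 1 (the basic reproduction number below one), which mirrors the kind of stability condition the paper derives for its network-aware models.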
Animal Models of Lymphangioleiomyomatosis (LAM) and Tuberous Sclerosis Complex (TSC)
2010-01-01
Animal models of lymphangioleiomyomatosis (LAM) and tuberous sclerosis complex (TSC) are highly desired to enable detailed investigation of the pathogenesis of these diseases. Multiple rat and mouse models have been generated in which a mutation similar to those occurring in TSC patients is present in an allele of Tsc1 or Tsc2. Unfortunately, these mice do not develop pathologic lesions that match those seen in LAM or TSC. However, these Tsc rodent models have been useful in confirming the two-hit model of tumor development in TSC, and in providing systems in which therapeutic trials (e.g., rapamycin) can be performed. In addition, conditional alleles of both Tsc1 and Tsc2 have provided the opportunity to target loss of these genes to specific tissues and organs, to probe the in vivo function of these genes, and to attempt to generate better models. Efforts to generate an authentic LAM model are impeded by a lack of understanding of the cell of origin of this process. However, ongoing studies provide hope that such a model will be generated in the coming years. PMID:20235887
Complex Data Modeling and Computationally Intensive Statistical Methods
Mantovan, Pietro
2010-01-01
Recent years have seen the advent and development of many devices able to record and store an ever-increasing amount of complex and high-dimensional data: 3D images generated by medical scanners or satellite remote sensing, DNA microarrays, real-time financial data, system control datasets. The analysis of these data poses challenging new problems and requires the development of novel statistical models and computational methods, fueling many fascinating and fast-growing research areas of modern statistics. The book offers a wide variety of statistical methods and is addressed to statisticians
Complex Dynamics of an Adnascent-Type Game Model
Directory of Open Access Journals (Sweden)
Baogui Xin
2008-01-01
The paper presents a nonlinear discrete game model for two oligopolistic firms whose products are adnascent. (In biology, the term adnascent has only one sense, "growing to or on something else," e.g., "moss is an adnascent plant." See Webster's Revised Unabridged Dictionary published in 1913 by C. & G. Merriam Co., edited by Noah Porter.) The bifurcation of its Nash equilibrium is analyzed with the Schwarzian derivative and normal form theory. Its complex dynamics is demonstrated by means of the largest Lyapunov exponents, fractal dimensions, bifurcation diagrams, and phase portraits. Finally, bifurcation and chaos anticontrol of this system are studied.
Socio-Environmental Resilience and Complex Urban Systems Modeling
Deal, Brian; Petri, Aaron; Pan, Haozhi; Goldenberg, Romain; Kalantari, Zahra; Cvetkovic, Vladimir
2017-04-01
The increasing pressure of climate change has inspired two normative agendas, socio-technical transitions and socio-ecological resilience, both sharing a complex-systems epistemology (Gillard et al. 2016). Socio-technical solutions include a continuous, massive data-gathering exercise now underway in urban places under the guise of developing a 'smart'(er) city. This has led to the creation of data-rich environments where large data sets have become central to monitoring and forming a response to anomalies. Some have argued that these kinds of data sets can help in planning for resilient cities (Norberg and Cumming 2008; Batty 2013). In this paper, we focus on a more nuanced, ecologically based, socio-environmental perspective of resilience planning that is often given less consideration. Here, we broadly discuss (and model) the tightly linked, mutually influenced social and biophysical subsystems that are critical for understanding urban resilience. We argue for the need to incorporate these subsystem linkages into the resilience planning lexicon through the integration of systems models and planning support systems. We make our case by first providing a context for urban resilience from a socio-ecological and planning perspective. We highlight the data needs for this type of resilient planning and compare them to currently collected data streams in various smart city efforts. This helps to define an approach for operationalizing socio-environmental resilience planning using robust systems models and planning support systems. For this, we draw from our experiences in coupling a spatio-temporal land use model (the Landuse Evolution and impact Assessment Model (LEAM)) with water quality and quantity models in Stockholm, Sweden. We describe the coupling of these systems models using a robust Planning Support System (PSS) structural framework. We use the coupled model simulations and PSS to analyze the connection between urban land use transformation (social) and water
A Range Based Method for Complex Facade Modeling
Adami, A.; Fregonese, L.; Taffurelli, L.
2011-09-01
3D modelling of Architectural Heritage does not follow a single well-defined path; it proceeds through different algorithms and digital forms according to the shape complexity of the object, the main goal of the representation, and the starting data. Even if the process starts from the same data, such as a point cloud acquired by laser scanner, there are different ways to realize a digital model. In particular, we can choose between two different approaches: the mesh and the solid model. In the first case, the complexity of the architecture is represented by a dense net of triangular surfaces which approximates the real surface of the object. In the other, opposite, case the 3D digital model can be realized by the use of simple geometrical shapes, sweeping algorithms, and Boolean operations. Obviously these two models are not the same, and each is characterized by peculiarities concerning the way of modelling (the choice of a particular triangulation algorithm, or quasi-automatic modelling from known shapes) and the final result (a more detailed and complex mesh versus a more approximate and simple solid model). Usually the expected final representation and the possibility of publishing lead to one way or the other. In this paper we suggest a semiautomatic process to build 3D digital models of the facades of complex architecture, to be used for example in city models or in other large-scale representations. This way of modelling also guarantees small files that can be published on the web or transmitted. The modelling procedure starts from laser scanner data which can be processed in the well-known way. Usually more than one scan is necessary to describe a complex architecture and to avoid shadows on the facades. These have to be registered in a single reference system by the use of targets which are surveyed by topography, and then filtered in order to obtain a well-controlled and homogeneous point cloud of
A subsurface model of the beaver meadow complex
Nash, C.; Grant, G.; Flinchum, B. A.; Lancaster, J.; Holbrook, W. S.; Davis, L. G.; Lewis, S.
2015-12-01
Wet meadows are a vital component of arid and semi-arid environments. These valley spanning, seasonally inundated wetlands provide critical habitat and refugia for wildlife, and may potentially mediate catchment-scale hydrology in otherwise "water challenged" landscapes. In the last 150 years, these meadows have begun incising rapidly, causing the wetlands to drain and much of the ecological benefit to be lost. The mechanisms driving this incision are poorly understood, with proposed means ranging from cattle grazing to climate change, to the removal of beaver. There is considerable interest in identifying cost-effective strategies to restore the hydrologic and ecological conditions of these meadows at a meaningful scale, but effective process based restoration first requires a thorough understanding of the constructional history of these ubiquitous features. There is emerging evidence to suggest that the North American beaver may have had a considerable role in shaping this landscape through the building of dams. This "beaver meadow complex hypothesis" posits that as beaver dams filled with fine-grained sediments, they became large wet meadows on which new dams, and new complexes, were formed, thereby aggrading valley bottoms. A pioneering study done in Yellowstone indicated that 32-50% of the alluvial sediment was deposited in ponded environments. The observed aggradation rates were highly heterogeneous, suggesting spatial variability in the depositional process - all consistent with the beaver meadow complex hypothesis (Polvi and Wohl, 2012). To expand on this initial work, we have probed deeper into these meadow complexes using a combination of geophysical techniques, coring methods and numerical modeling to create a 3-dimensional representation of the subsurface environments. This imaging has given us a unique view into the patterns and processes responsible for the landforms, and may shed further light on the role of beaver in shaping these landscapes.
Simple models for studying complex spatiotemporal patterns of animal behavior
Tyutyunov, Yuri V.; Titova, Lyudmila I.
2017-06-01
Minimal mathematical models able to explain complex patterns of animal behavior are essential parts of simulation systems describing large-scale spatiotemporal dynamics of trophic communities, particularly those with wide-ranging species, such as occur in pelagic environments. We present results obtained with three different modelling approaches: (i) an individual-based model of animal spatial behavior; (ii) a continuous taxis-diffusion-reaction system of partial differential equations; (iii) a 'hybrid' approach combining the individual-based algorithm of organism movements with an explicit description of the decay and diffusion of the movement stimuli. Though the models are based on extremely simple rules, they all allow description of the spatial movements of animals in a predator-prey system within a closed habitat, reproducing some typical patterns of the pursuit-evasion behavior observed in natural populations. In all three models, at each spatial position the animal movements are determined by local conditions only, so the pattern of collective behavior emerges through self-organization. The movement velocities of animals are proportional to the density gradients of specific cues emitted by individuals of the antagonistic species (pheromones, exometabolites, or mechanical waves in the medium, e.g., sound). These cues play the role of taxis stimuli: prey attract predators, while predators repel prey. Depending on the nature and properties of the movement stimulus, we propose using either a simplified individual-based model, a continuous taxis pursuit-evasion system, or a somewhat more detailed 'hybrid' approach that combines simulation of the individual movements with a continuous model describing diffusion and decay of the stimuli in an explicit way. These can be used to improve movement models for many species, including large marine predators.
Sparse Estimation Using Bayesian Hierarchical Prior Modeling for Real and Complex Linear Models
DEFF Research Database (Denmark)
Pedersen, Niels Lovmand; Manchón, Carles Navarro; Badiu, Mihai Alin
2015-01-01
In sparse Bayesian learning (SBL), Gaussian scale mixtures (GSMs) have been used to model sparsity-inducing priors that realize a class of concave penalty functions for the regression task in real-valued signal models. Motivated by the relative scarcity of formal tools for SBL in complex-valued models ... error, and robustness in low and medium signal-to-noise ratio regimes.
Automated sensitivity analysis: New tools for modeling complex dynamic systems
International Nuclear Information System (INIS)
Pin, F.G.
1987-01-01
Sensitivity analysis is an established methodology used by researchers in almost every field to gain essential insight into design and modeling studies and into performance assessments of complex systems. Conventional sensitivity analysis methodologies, however, have not enjoyed the widespread use they deserve considering the wealth of information they can provide, partly because of their prohibitive cost or the large initial analytical investment they require. Automated systems have recently been developed at ORNL to eliminate these drawbacks. Compilers such as GRESS and EXAP now allow automatic and cost-effective calculation of sensitivities in FORTRAN computer codes. In this paper, these and other related tools are described, and their impact and applicability in the general areas of modeling, performance assessment and decision making for radioactive waste isolation problems are discussed.
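For contrast with the automated, code-differentiation approach of tools like GRESS and EXAP, a conventional numerical sensitivity estimate looks like the generic sketch below (not tied to any particular ORNL code):

```python
def sensitivity(f, x, i, h=1e-6):
    # Central-difference estimate of the local sensitivity df/dx_i.
    # Code-differentiation tools instead transform the source program to
    # compute exact derivatives, avoiding the repeated model evaluations
    # and truncation error inherent in this numerical approach.
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (f(xp) - f(xm)) / (2.0 * h)

# Example: f(x) = x0^2 + 3*x1, so df/dx0 at x0=2 is 4 and df/dx1 is 3
f = lambda v: v[0] ** 2 + 3.0 * v[1]
```

Each parameter costs two extra model runs here; for a model with hundreds of parameters that cost is exactly the "prohibitive cost" the abstract refers to, and it is what makes compiler-based sensitivity calculation attractive.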
Complex system modelling and control through intelligent soft computations
Azar, Ahmad
2015-01-01
The book offers a snapshot of the theories and applications of soft computing in the area of complex systems modeling and control. It presents the most important findings discussed during the 5th International Conference on Modelling, Identification and Control, held in Cairo, from August 31-September 2, 2013. The book consists of twenty-nine selected contributions, which have been thoroughly reviewed and extended before their inclusion in the volume. The different chapters, written by active researchers in the field, report on both current theories and important applications of soft-computing. Besides providing the readers with soft-computing fundamentals, and soft-computing based inductive methodologies/algorithms, the book also discusses key industrial soft-computing applications, as well as multidisciplinary solutions developed for a variety of purposes, like windup control, waste management, security issues, biomedical applications and many others. It is a perfect reference guide for graduate students, r...
Does model performance improve with complexity? A case study with three hydrological models
Orth, Rene; Staudinger, Maria; Seneviratne, Sonia I.; Seibert, Jan; Zappa, Massimiliano
2015-04-01
In recent decades considerable progress has been made in climate model development. Following the massive increase in computational power, models became more sophisticated. At the same time also simple conceptual models have advanced. In this study we validate and compare three hydrological models of different complexity to investigate whether their performance varies accordingly. For this purpose we use runoff and also soil moisture measurements, which allow a truly independent validation, from several sites across Switzerland. The models are calibrated in similar ways with the same runoff data. Our results show that the more complex models HBV and PREVAH outperform the simple water balance model (SWBM) in case of runoff but not for soil moisture. Furthermore the most sophisticated PREVAH model shows an added value compared to the HBV model only in case of soil moisture. Focusing on extreme events we find generally improved performance of the SWBM during drought conditions and degraded agreement with observations during wet extremes. For the more complex models we find the opposite behavior, probably because they were primarily developed for prediction of runoff extremes. As expected given their complexity, HBV and PREVAH have more problems with over-fitting. All models show a tendency towards better performance in lower altitudes as opposed to (pre-) alpine sites. The results vary considerably across the investigated sites. In contrast, the different metrics we consider to estimate the agreement between models and observations lead to similar conclusions, indicating that the performance of the considered models is similar at different time scales as well as for anomalies and long-term means. We conclude that added complexity does not necessarily lead to improved performance of hydrological models, and that performance can vary greatly depending on the considered hydrological variable (e.g. runoff vs. soil moisture) or hydrological conditions (floods vs. droughts).
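One common agreement metric for runoff comparisons of this kind is the Nash-Sutcliffe efficiency; the abstract does not name its metrics individually, so the function below is a standard-form illustration rather than the study's exact implementation:

```python
def nse(observed, simulated):
    # Nash-Sutcliffe efficiency: 1.0 is a perfect fit, 0.0 means the model
    # is no better than simply predicting the mean of the observations,
    # and negative values mean it is worse than the mean.
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den
```

Because the squared errors weight high flows heavily, NSE tends to reward models tuned for flood peaks over those tuned for low flows, one reason the comparison above evaluates droughts and wet extremes separately.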
Accurate modeling and evaluation of microstructures in complex materials
Tahmasebi, Pejman
2018-02-01
Accurate characterization of heterogeneous materials is of great importance for different fields of science and engineering. Such a goal can be achieved through imaging. Acquiring three- or two-dimensional images under different conditions is not, however, always feasible. On the other hand, accurate characterization of complex and multiphase materials requires various digital images (I) under different conditions. An ensemble method is presented that can take one single (or a set of) I(s) and stochastically produce several similar models of the given disordered material. The method is based on the successive calculation of a conditional probability by which the initial stochastic models are produced. Then, a graph formulation is utilized to remove unrealistic structures. A distance transform function for the Is with highly connected microstructure and long-range features is considered, which results in a new I that is more informative. Reproduction of the I is also enforced through a histogram matching approach in an iterative framework. Such an iterative algorithm avoids the reproduction of unrealistic structures. Furthermore, a multiscale approach, based on a pyramid representation of the large Is, is presented that can produce materials with millions of pixels in a matter of seconds. Finally, the nonstationary systems, those for which the distribution of data varies spatially, are studied using two different methods. The method is tested on several complex and large examples of microstructures. The produced results are all in excellent agreement with the utilized Is, and the similarities are quantified using various correlation functions.
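The iterative histogram-matching step that the method uses to keep the gray-level statistics of a reproduced image consistent with the input image can be sketched along the following lines. This is an illustrative sketch only; the function name and the sorted-value (quantile) mapping are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def histogram_match(image, reference):
    """Remap gray levels of `image` so its value distribution matches `reference`."""
    src = image.ravel()
    order = np.argsort(src)                      # positions of values, low to high
    ref_sorted = np.sort(reference.ravel())      # target distribution, low to high
    # Quantile-to-quantile mapping; interpolate when the two sizes differ.
    positions = np.linspace(0.0, 1.0, src.size)
    ref_positions = np.linspace(0.0, 1.0, ref_sorted.size)
    matched = np.empty_like(src, dtype=float)
    matched[order] = np.interp(positions, ref_positions, ref_sorted)
    return matched.reshape(image.shape)
```

Applied inside an iterative reconstruction loop, a step like this nudges each stochastic realization back toward the statistics of the training image.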
Green IT engineering concepts, models, complex systems architectures
Kondratenko, Yuriy; Kacprzyk, Janusz
2017-01-01
This volume provides a comprehensive, state-of-the-art overview of a series of advanced trends and concepts that have recently been proposed in the area of green information technologies engineering, as well as of design and development methodologies for models and complex systems architectures and their intelligent components. The contributions included in the volume have their roots in the authors' presentations, and the vivid discussions that followed the presentations, at a series of workshops and seminars held within the international TEMPUS GreenCo project in the United Kingdom, Italy, Portugal, Sweden and Ukraine during 2013-2015, and at the 1st - 5th Workshops on Green and Safe Computing (GreenSCom) held in Russia, Slovakia and Ukraine. The book presents a systematic exposition of research on principles, models, components and complex systems and a description of industry- and society-oriented aspects of green IT engineering. A chapter-oriented structure has been adopted for this book ...
Modelling methodology for engineering of complex sociotechnical systems
CSIR Research Space (South Africa)
Oosthuizen, R
2014-10-01
Full Text Available Different systems engineering techniques and approaches are applied to design and develop complex sociotechnical systems for complex problems. In a complex sociotechnical system cognitive and social humans use information technology to make sense...
Tong, Winghang; Sourbier, Carole; Kovtunovych, Gennadiy; Jeong, Suhyoung; Vira, Manish A.; Ghosh, Manik Chandra; Romero, Vladimir Valera; Sougrat, Rachid; Vaulont, Sophie; Viollet, Benoît; Kim, Yeongsang; Lee, Sunmin; Trepel, Jane B.; Srinivasan, Ramaprasad; Bratslavsky, Gennady; Yang, Youfeng; Linehan, William Marston; Rouault, Tracey A.
2011-01-01
Inactivation of the TCA cycle enzyme, fumarate hydratase (FH), drives a metabolic shift to aerobic glycolysis in FH-deficient kidney tumors and cell lines from patients with hereditary leiomyomatosis renal cell cancer (HLRCC), resulting in decreased levels of AMP-activated kinase (AMPK) and p53 tumor suppressor, and activation of the anabolic factors, acetyl-CoA carboxylase and ribosomal protein S6. Reduced AMPK levels lead to diminished expression of the DMT1 iron transporter, and the resulting cytosolic iron deficiency activates the iron regulatory proteins, IRP1 and IRP2, and increases expression of the hypoxia inducible factor HIF-1α, but not HIF-2α. Silencing of HIF-1α or activation of AMPK diminishes invasive activities, indicating that alterations of HIF-1α and AMPK contribute to the oncogenic growth of FH-deficient cells. © 2011 Elsevier Inc.
A modeling process to understand complex system architectures
Robinson, Santiago Balestrini
2009-12-01
In recent decades, several tools have been developed by the armed forces, and their contractors, to test the capability of a force. These campaign-level analysis tools, oftentimes characterized as constructive simulations, are generally expensive to create and execute, and at best they are extremely difficult to verify and validate. This central observation, that analysts are relying more and more on constructive simulations to predict the performance of future networks of systems, leads to the two central objectives of this thesis: (1) to enable the quantitative comparison of architectures in terms of their ability to satisfy a capability without resorting to constructive simulations, and (2) when constructive simulations must be created, to quantitatively determine how to divide the modeling effort amongst the different system classes. The first objective led to Hypothesis A, the first main hypothesis, which states that by studying the relationships between the entities that compose an architecture, one can infer how well it will perform a given capability. The method used to test the hypothesis is based on two assumptions: (1) that the capability can be defined as a cycle of functions, and (2) that it must be possible to estimate the probability that a function-based relationship occurs between any two types of entities. If these two requirements are met, then by creating random functional networks, different architectures can be compared in terms of their ability to satisfy a capability. In order to test this hypothesis, a novel process for creating representative functional networks of large-scale system architectures was developed. The process, named Digraph Modeling for Architectures (DiMA), was tested by comparing its results to those of complex constructive simulations. Results indicate that if the inputs assigned to DiMA are correct (in the tests they were based on time-averaged data obtained from the ABM), DiMA is able to identify which of any two
Equation-free model reduction for complex dynamical systems
International Nuclear Information System (INIS)
Le Maitre, O. P.; Mathelin, L.; Le Maitre, O. P.
2010-01-01
This paper presents a reduced-model strategy for the simulation of complex physical systems. A classical reduced basis is first constructed relying on proper orthogonal decomposition of the system. Then, unlike alternative approaches such as Galerkin projection schemes, an equation-free reduced model is constructed. It consists of determining an explicit transformation, or mapping, for the evolution over a coarse time-step of the projection coefficients of the system state on the reduced basis. The mapping is expressed as an explicit polynomial transformation of the projection coefficients and is computed once and for all in a pre-processing stage using the detailed model equations of the system. The reduced system can then be advanced in time by successive applications of the mapping. The CPU cost of the method lies essentially in the mapping approximation, which is performed offline, in a parallel fashion, and only once. Subsequent application of the mapping to perform a time integration is carried out at low cost thanks to its explicit character. Application of the method is considered for the 2-D flow around a circular cylinder. We investigate the effectiveness of the reduced model in rendering the dynamics of both the asymptotic state and the transient stages. It is shown that the method leads to a stable and accurate time integration for only a fraction of the cost of a detailed simulation, provided that the mapping is properly approximated and the reduced basis remains relevant for the dynamics investigated. (authors)
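In outline, the offline stage of such an equation-free reduction (a POD basis, then a polynomial map fitted to the coarse-step evolution of the projection coefficients) might look as follows. This is a minimal sketch under simplifying assumptions: a quadratic feature set, a plain least-squares fit to snapshot data, and hypothetical function names; the paper's actual construction uses the detailed model equations.

```python
import numpy as np

def pod_basis(snapshots, r):
    """Reduced basis from proper orthogonal decomposition (thin SVD).
    `snapshots` has one state vector per column."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

def poly_features(a):
    """Quadratic polynomial features [1, a_i, a_i*a_j] of a coefficient vector."""
    quad = np.outer(a, a)[np.triu_indices(a.size)]
    return np.concatenate(([1.0], a, quad))

def fit_mapping(coeffs):
    """Least-squares fit of the coarse-step map a_{k+1} = phi(a_k) @ W.
    `coeffs` has one coefficient vector per row, ordered in time."""
    Phi = np.array([poly_features(a) for a in coeffs[:-1]])
    W, *_ = np.linalg.lstsq(Phi, coeffs[1:], rcond=None)
    return W

def advance(a0, W, steps):
    """Advance the reduced state by repeated application of the explicit map."""
    a = a0
    for _ in range(steps):
        a = poly_features(a) @ W
    return a
```

Once `W` is computed offline, time integration reduces to cheap, explicit applications of the polynomial map.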
Educational complex of light-colored modeling of urban environment
Directory of Open Access Journals (Sweden)
Karpenko Vladimir E.
2018-01-01
Mechanisms, methodological tools and the structure of a training complex of light-colored modeling of the urban environment are developed in this paper. The following results of the students' practical work are presented: light composition and installation, media facades, and lighting of building facades, city streets and an embankment. As a result of modeling, the structure of the light form is determined. Light-transmitting materials, characteristic optical illusions, light-visual and light-dynamic effects (video-dynamics and photostatics), and basic compositional techniques of light form are revealed. The main elements of the light installation are studied, including the light projection, the electronic device, the interactivity and relationality of the installation, and the mechanical device which becomes a part of the installation composition. The meaning of modern media-facade technology is the transformation of external building structures and their facades into a changing information cover, a translator of media content using LED technology. Light tectonics and the light rhythm of the plastics of the architectural object are built up through point and local illumination; modeling of the urban ensemble assumes the structural interaction of several light building models with special light-composition techniques. When modeling the social and pedestrian environment, the lighting parameters depend on the scale of the chosen space and are adapted taking into account the visual perception of the pedestrian, and the atmospheric effects of comfort and safety of the environment are achieved with the help of special light-compositional techniques. With the aim of realizing the tasks of light modeling, a methodology has been created that includes the mechanisms of models, variability and complementarity. Perspectives of light modeling in the context of the structural elements of the city, neuropsychology, and wireless and bioluminescence technologies are proposed.
Adaptive Surface Modeling of Soil Properties in Complex Landforms
Directory of Open Access Journals (Sweden)
Wei Liu
2017-06-01
Spatial discontinuity often causes poor accuracy when a single model is used for the surface modeling of soil properties in complex geomorphic areas. Here we present a method for adaptive surface modeling of combined secondary variables to improve prediction accuracy during the interpolation of soil properties (ASM-SP). Using various secondary variables and multiple base interpolation models, ASM-SP was used to interpolate soil K+ in a typical complex geomorphic area (Qinghai Lake Basin, China). Five methods, including inverse distance weighting (IDW), ordinary kriging (OK), and OK combined with different secondary variables (OK-Landuse, OK-Geology, and OK-Soil), were used to validate the proposed method. The mean error (ME), mean absolute error (MAE), root mean square error (RMSE), mean relative error (MRE), and accuracy (AC) were used as evaluation indicators. Results showed that: (1) The OK interpolation result is spatially smooth with a weak bull's-eye effect, while the IDW result has a relatively stronger bull's-eye effect; both have obvious deficiencies in depicting the spatial variability of soil K+. (2) The methods incorporating combinations of different secondary variables (ASM-SP, OK-Landuse, OK-Geology, and OK-Soil) were associated with lower estimation bias. Compared with IDW, OK, OK-Landuse, OK-Geology, and OK-Soil, the accuracy of ASM-SP increased by 13.63%, 10.85%, 9.98%, 8.32%, and 7.66%, respectively. Furthermore, ASM-SP was more stable, with lower MEs, MAEs, RMSEs, and MREs. (3) ASM-SP presents more detail than the other methods at abrupt boundaries, rendering the result consistent with the true secondary variables. In conclusion, ASM-SP can not only consider the nonlinear relationship between secondary variables and soil properties, but can also adaptively combine the advantages of multiple models, making the spatial interpolation of soil K+ more reasonable.
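For orientation, the IDW baseline and the validation indicators (ME, MAE, RMSE, MRE) used above can be written compactly. This is a generic sketch, not the ASM-SP code; the function names and the distance-smoothing constant `eps` are invented for illustration.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse distance weighted interpolation at query points.
    `xy_known`: (n, 2) coordinates, `z_known`: (n,) values, `xy_query`: (m, 2)."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power          # eps avoids division by zero at exact hits
    return (w @ z_known) / w.sum(axis=1)

def validation_metrics(z_obs, z_pred):
    """ME, MAE, RMSE and MRE of predictions against held-out observations."""
    err = z_pred - z_obs
    return {
        "ME": err.mean(),
        "MAE": np.abs(err).mean(),
        "RMSE": np.sqrt((err ** 2).mean()),
        "MRE": np.abs(err / z_obs).mean(),
    }
```

A query point midway between two samples receives their weighted mean, which is the source of IDW's characteristic bull's-eye pattern around isolated samples.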
Modeling Cu²⁺-Aβ complexes from computational approaches
Energy Technology Data Exchange (ETDEWEB)
Alí-Torres, Jorge [Departamento de Química, Universidad Nacional de Colombia- Sede Bogotá, 111321 (Colombia); Mirats, Andrea; Maréchal, Jean-Didier; Rodríguez-Santiago, Luis; Sodupe, Mariona, E-mail: Mariona.Sodupe@uab.cat [Departament de Química, Universitat Autònoma de Barcelona, 08193 Bellaterra, Barcelona (Spain)
2015-09-15
Amyloid plaque formation and oxidative stress are two key events in the pathology of Alzheimer's disease (AD), in which metal cations have been shown to play an important role. In particular, the interaction of the redox-active Cu²⁺ metal cation with Aβ has been found to interfere with amyloid aggregation and to lead to reactive oxygen species (ROS). A detailed knowledge of the electronic and molecular structure of Cu²⁺-Aβ complexes is thus important for a better understanding of the role of these complexes in the development and progression of AD. The computational treatment of these systems requires a combination of several available computational methodologies, because two fundamental aspects have to be addressed: the metal coordination sphere and the conformation adopted by the peptide upon copper binding. In this paper we review the main computational strategies used to deal with Cu²⁺-Aβ coordination and to build plausible Cu²⁺-Aβ models that will afterwards allow determining physicochemical properties of interest, such as their redox potential.
Siegfried, Robert
2014-01-01
Robert Siegfried presents a framework for efficient agent-based modeling and simulation of complex systems. He compares different approaches for describing the structure and dynamics of agent-based models in detail. Based on this evaluation the author introduces the "General Reference Model for Agent-based Modeling and Simulation" (GRAMS). Furthermore he presents parallel and distributed simulation approaches for the execution of agent-based models, from small scale to very large scale. The author shows how agent-based models may be executed by different simulation engines that utilize the underlying hardware
Wind Tunnel Modeling Of Wind Flow Over Complex Terrain
Banks, D.; Cochran, B.
2010-12-01
This presentation will describe the findings of an atmospheric boundary layer (ABL) wind tunnel study conducted as part of the Bolund Experiment. This experiment was sponsored by Risø DTU (National Laboratory for Sustainable Energy, Technical University of Denmark) during the fall of 2009 to enable a blind comparison of various air flow models in an attempt to validate their performance in predicting airflow over complex terrain. Bolund hill sits 12 m above the water level at the end of a narrow isthmus. The island features a steep escarpment on one side, over which the airflow can be expected to separate. The island was equipped with several anemometer towers, and the approach flow over the water was well characterized. This study was one of only two physical model studies included in the blind model comparison, the other being a water plume study. The remainder were computational fluid dynamics (CFD) simulations, including both RANS and LES. Physical modeling of air flow over topographical features has been used since the middle of the 20th century, and the methods required are well understood and well documented. Several books have been written describing how to properly perform ABL wind tunnel studies, including ASCE manual of engineering practice 67. Boundary layer wind tunnel tests are the only modelling method deemed acceptable in ASCE 7-10, the most recent edition of the American Society of Civil Engineers standard that provides wind loads for building codes across the US. Since the 1970s, most tall structures have undergone testing in a boundary layer wind tunnel to accurately determine the wind-induced loading. When compared to CFD, the US EPA considers a properly executed wind tunnel study to be equivalent to a CFD model with infinitesimal grid resolution and near-infinite memory. One key reason for this widespread acceptance is that properly executed ABL wind tunnel studies will accurately simulate flow separation.
Toxicological risk assessment of complex mixtures through the Wtox model
Directory of Open Access Journals (Sweden)
William Gerson Matias
2015-01-01
Mathematical models are important tools for environmental management and risk assessment. Predictions about the toxicity of chemical mixtures must be enhanced due to the complexity of effects that can be caused to living species. In this work, the environmental risk was assessed by addressing the need to study the relationship between the organism and xenobiotics. Therefore, five toxicological endpoints were applied through the WTox Model, and with this methodology we obtained the risk classification of potentially toxic substances. Acute and chronic toxicity, cytotoxicity and genotoxicity were observed in the organisms Daphnia magna, Vibrio fischeri and Oreochromis niloticus. A case study was conducted with solid wastes from the textile, metal-mechanic, and pulp and paper industries. The results show that several industrial wastes induced mortality, reproductive effects, micronucleus formation, and increases in the rate of lipid peroxidation and DNA methylation in the organisms tested. These results, analyzed together through the WTox Model, allowed classification of the environmental risk of the industrial wastes. The evaluation showed that the toxicological environmental risk of the samples analyzed can be classified as significant or critical.
Integrated modeling tool for performance engineering of complex computer systems
Wright, Gary; Ball, Duane; Hoyt, Susan; Steele, Oscar
1989-01-01
This report summarizes Advanced System Technologies' accomplishments on the Phase 2 SBIR contract NAS7-995. The technical objectives of the report are: (1) to develop an evaluation version of a graphical, integrated modeling language according to the specification resulting from the Phase 2 research; and (2) to determine the degree to which the language meets its objectives by evaluating ease of use, utility of two sets of performance predictions, and the power of the language constructs. The technical approach followed to meet these objectives was to design, develop, and test an evaluation prototype of a graphical, performance prediction tool. The utility of the prototype was then evaluated by applying it to a variety of test cases found in the literature and in AST case histories. Numerous models were constructed and successfully tested. The major conclusion of this Phase 2 SBIR research and development effort is that complex, real-time computer systems can be specified in a non-procedural manner using combinations of icons, windows, menus, and dialogs. Such a specification technique provides an interface that system designers and architects find natural and easy to use. In addition, PEDESTAL's multiview approach provides system engineers with the capability to perform the trade-offs necessary to produce a design that meets timing performance requirements. Sample system designs analyzed during the development effort showed that models could be constructed in a fraction of the time required by non-visual system design capture tools.
Atmospheric dispersion modelling over complex terrain at small scale
Nosek, S.; Janour, Z.; Kukacka, L.; Jurcakova, K.; Kellnerova, R.; Gulikova, E.
2014-03-01
A previous study, concerned with qualitative modelling of neutrally stratified flow over an open-cut coal mine and important surrounding topography at meso-scale (1:9000), revealed an important area for quantitative modelling of atmospheric dispersion at small scale (1:3300). The selected area includes a necessary part of the coal-mine topography with respect to its future expansion, as well as surrounding populated areas. At this small scale, simultaneous measurements of velocity components and concentrations at specified points in vertical and horizontal planes were performed by two-dimensional Laser Doppler Anemometry (LDA) and a Fast-Response Flame Ionization Detector (FFID), respectively. The impact of the complex terrain on passive pollutant dispersion with respect to the prevailing wind direction was observed, and the prediction of air quality at the populated areas is discussed. The measured data will be used for comparison with another model taking into account the future coal-mine transformation. Thus, the impact of the coal-mine transformation on pollutant dispersion can be observed.
Complex accident scenarios modelled and analysed by Stochastic Petri Nets
International Nuclear Information System (INIS)
Nývlt, Ondřej; Haugen, Stein; Ferkl, Lukáš
2015-01-01
This paper is focused on the use of Petri nets for effective modelling and simulation of complicated accident scenarios, where the order of events can vary and some events may occur anywhere in an event chain. Such cases are hardly manageable by traditional methods such as event trees – e.g. one pivotal event must often be inserted several times into one branch of the tree. Our approach is based on Stochastic Petri Nets with Predicates and Assertions and on an idea that comes from the area of Programmable Logic Controllers: an accident scenario is described as a net of interconnected blocks, which represent parts of the scenario. The scenario is first divided into parts, which are then modelled by Petri nets. Every block can easily be interconnected with other blocks by input/output variables to create complex ones. In the presented approach, every event or part of a scenario is modelled only once, independently of the number of its occurrences in the scenario. The final model is much more transparent than the corresponding event tree. The method is shown in two case studies, of which the advanced one contains dynamic behavior. - Highlights: • Event & fault trees have problems with scenarios where the order of events can vary. • The paper presents a method for modelling and analysis of dynamic accident scenarios. • The presented method is based on Petri nets. • The proposed method solves the mentioned problems of traditional approaches. • The method is shown in two case studies: simple and advanced (with dynamic behavior)
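The block idea can be illustrated with a minimal stochastic Petri net simulator using race semantics (each enabled transition draws an exponentially distributed delay and the fastest one fires). This is a toy sketch: the paper's nets additionally carry predicates and assertions, and the class and method names here are hypothetical.

```python
import random

class StochasticPetriNet:
    """Minimal stochastic Petri net with race semantics (exponential delays)."""

    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = []          # (name, inputs, outputs, rate)

    def add_transition(self, name, inputs, outputs, rate):
        self.transitions.append((name, inputs, outputs, rate))

    def enabled(self):
        """Transitions whose input places hold enough tokens."""
        return [t for t in self.transitions
                if all(self.marking.get(p, 0) >= n for p, n in t[1].items())]

    def step(self, rng):
        """Fire one transition; returns (time_advance, name), or None on deadlock."""
        en = self.enabled()
        if not en:
            return None
        delays = [(rng.expovariate(t[3]), t) for t in en]
        dt, (name, ins, outs, _) = min(delays, key=lambda x: x[0])
        for p, n in ins.items():
            self.marking[p] -= n
        for p, n in outs.items():
            self.marking[p] = self.marking.get(p, 0) + n
        return dt, name
```

Scenario blocks can then be composed by letting the output places of one block serve as the input places of the next, so a recurring event is modelled once regardless of how often it occurs.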
3D modeling and visualization software for complex geometries
International Nuclear Information System (INIS)
Guse, Guenter; Klotzbuecher, Michael; Mohr, Friedrich
2011-01-01
Reactor safety depends on reliable nondestructive testing of reactor components. For a 100% probability of detecting flaws and determining their size with ultrasonic methods, the ultrasonic waves have to hit the flaws within specific incidence and squint angles. For complex test geometries, such as testing nozzle welds from the outside of the component, these angular ranges can only be determined using elaborate mathematical calculations. The authors developed a 3D modeling and visualization software tool that allows ultrasonic measurement data to be integrated into and presented within the 3D geometry. The software package was verified using 1:1 test samples (examples: testing the nozzle edge of the feedwater nozzle of a steam generator from the outside; testing the reactor pressure vessel nozzle edge from the inside).
The inherent complexity in nonlinear business cycle model in resonance
International Nuclear Information System (INIS)
Ma Junhai; Sun Tao; Liu Lixia
2008-01-01
Based on Abraham C.-L. Chian's research, we applied nonlinear dynamical systems theory to study the first-order and second-order approximate solutions of one category of nonlinear business cycle model under resonance conditions. We also analyzed the relation between the amplitude and phase of the second-order approximate solutions, as well as the relation between the outer excitation's amplitude and frequency, the approximate solutions, and the system bifurcation parameters. We then studied the system's quasi-periodic solutions and annulus periodic solutions, and the paths leading to system bifurcation and chaotic states under different parameter combinations. Finally, we conducted numerical simulations for various complicated circumstances. This research therefore lays a solid foundation for detecting the complexity of business cycles and systems in the future
Complex Environmental Data Modelling Using Adaptive General Regression Neural Networks
Kanevski, Mikhail
2015-04-01
The research deals with an adaptation and application of Adaptive General Regression Neural Networks (GRNN) to high dimensional environmental data. GRNN [1,2,3] are efficient modelling tools for both spatial and temporal data and are based on nonparametric kernel methods closely related to the classical Nadaraya-Watson estimator. Adaptive GRNN, using anisotropic kernels, can also be applied to feature selection tasks when working with high dimensional data [1,3]. In the present research, Adaptive GRNN are used to study geospatial data predictability and relevant feature selection using both simulated and real data case studies. The original raw data were either three-dimensional monthly precipitation data or monthly wind speeds embedded into a 13-dimensional space constructed from geographical coordinates and geo-features calculated from a digital elevation model. GRNN were applied in two different ways: 1) adaptive GRNN with the resulting list of features ordered according to their relevancy; and 2) adaptive GRNN applied to evaluate all possible models N [in the case of wind fields, N=(2^13 - 1)=8191] and rank them according to the cross-validation error. In both cases training was carried out using a leave-one-out procedure. An important result of the study is that the set of the most relevant features depends on the month (strong seasonal effect) and year. The predictabilities of precipitation and wind field patterns, estimated using the cross-validation and testing errors of raw and shuffled data, were studied in detail. The results of both approaches were qualitatively and quantitatively compared. In conclusion, Adaptive GRNN, with their ability to select features and efficiently model complex high dimensional data, can be widely used in automatic/on-line mapping and as an integrated part of environmental decision support systems. 1. Kanevski M., Pozdnoukhov A., Timonin V. Machine Learning for Spatial Environmental Data. Theory, applications and software. EPFL Press
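The core GRNN estimator is essentially Nadaraya-Watson kernel regression; with per-feature (anisotropic) bandwidths, a very large bandwidth effectively switches a feature off, which is what makes the model usable for feature selection. A minimal sketch with illustrative names, including the leave-one-out error used to rank candidate bandwidths or feature subsets:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    """Nadaraya-Watson / GRNN prediction with an anisotropic Gaussian kernel.
    `sigma` holds one bandwidth per feature; a huge sigma_i removes feature i."""
    sigma = np.asarray(sigma, dtype=float)
    diff = (X_query[:, None, :] - X_train[None, :, :]) / sigma
    w = np.exp(-0.5 * (diff ** 2).sum(axis=2))      # (n_query, n_train) weights
    return (w @ y_train) / w.sum(axis=1)

def loo_rmse(X, y, sigma):
    """Leave-one-out RMSE, the criterion used to compare candidate models."""
    n = len(y)
    preds = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        preds[i] = grnn_predict(X[mask], y[mask], X[i:i + 1], sigma)[0]
    return np.sqrt(((preds - y) ** 2).mean())
```

Ranking all 2^13 - 1 feature subsets then amounts to evaluating `loo_rmse` with the corresponding bandwidth patterns and sorting by the error.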
Elements of complexity in subsurface modeling, exemplified with three case studies
Energy Technology Data Exchange (ETDEWEB)
Freedman, Vicky L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Truex, Michael J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rockhold, Mark [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bacon, Diana H. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Freshley, Mark D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wellman, Dawn M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2017-04-03
There are complexity elements to consider when applying subsurface flow and transport models to support environmental analyses. Modelers balance the benefits and costs of modeling along the spectrum of complexity, taking into account the attributes of more simple models (e.g., lower cost, faster execution, easier to explain, less mechanistic) and the attributes of more complex models (higher cost, slower execution, harder to explain, more mechanistic and technically defensible). In this paper, modeling complexity is examined with respect to considering this balance. The discussion of modeling complexity is organized into three primary elements: 1) modeling approach, 2) description of process, and 3) description of heterogeneity. Three examples are used to examine these complexity elements. Two of the examples use simulations generated from a complex model to develop simpler models for efficient use in model applications. The first example is designed to support performance evaluation of soil vapor extraction remediation in terms of groundwater protection. The second example investigates the importance of simulating different categories of geochemical reactions for carbon sequestration and selecting appropriate simplifications for use in evaluating sequestration scenarios. In the third example, the modeling history for a uranium-contaminated site demonstrates that conservative parameter estimates were inadequate surrogates for complex, critical processes and there is discussion on the selection of more appropriate model complexity for this application. All three examples highlight how complexity considerations are essential to create scientifically defensible models that achieve a balance between model simplification and complexity.
An artificial intelligence tool for complex age-depth models
Bradley, E.; Anderson, K. A.; de Vesine, L. R.; Lai, V.; Thomas, M.; Nelson, T. H.; Weiss, I.; White, J. W. C.
2017-12-01
CSciBox is an integrated software system for age modeling of paleoenvironmental records. It incorporates an array of data-processing and visualization facilities, ranging from 14C calibrations to sophisticated interpolation tools. Using CSciBox's GUI, a scientist can build custom analysis pipelines by composing these built-in components or adding new ones. Alternatively, she can employ CSciBox's automated reasoning engine, Hobbes, which uses AI techniques to perform an in-depth, autonomous exploration of the space of possible age-depth models and presents the results—both the models and the reasoning that was used in constructing and evaluating them—to the user for her inspection. Hobbes accomplishes this using a rulebase that captures the knowledge of expert geoscientists, which was collected over the course of more than 100 hours of interviews. It works by using these rules to generate arguments for and against different age-depth model choices for a given core. Given a marine-sediment record containing uncalibrated 14C dates, for instance, Hobbes tries CALIB-style calibrations using a choice of IntCal curves, with reservoir age correction values chosen from the 14CHRONO database using the lat/long information provided with the core, and finally composes the resulting age points into a full age model using different interpolation methods. It evaluates each model—e.g., looking for outliers or reversals—and uses that information to guide the next steps of its exploration, and presents the results to the user in human-readable form. The most powerful of CSciBox's built-in interpolation methods is BACON, a Bayesian sedimentation-rate algorithm—a powerful but complex tool that can be difficult to use. Hobbes adjusts BACON's many parameters autonomously to match the age model to the expectations of expert geoscientists, as captured in its rulebase. It then checks the model against the data and iteratively re-calculates until it is a good fit to the data.
Dynamics of vortices in complex wakes: Modeling, analysis, and experiments
Basu, Saikat
The thesis develops singly-periodic mathematical models for complex laminar wakes which are formed behind vortex-shedding bluff bodies. These wake structures exhibit a variety of patterns as the bodies oscillate or are in close proximity of one another. The most well-known formation comprises two counter-rotating vortices in each shedding cycle and is popularly known as the von Karman vortex street. Of the more complex configurations, as a specific example, this thesis investigates one of the most commonly occurring wake arrangements, which consists of two pairs of vortices in each shedding period. The paired vortices are, in general, counter-rotating and belong to a more general definition of the 2P mode, which involves periodic release of four vortices into the flow. The 2P arrangement can, primarily, be sub-classed into two types: one with a symmetric orientation of the two vortex pairs about the streamwise direction in a periodic domain and the other in which the two vortex pairs per period are placed in a staggered geometry about the wake centerline. The thesis explores the governing dynamics of such wakes and characterizes the corresponding relative vortex motion. In general, for both the symmetric as well as the staggered four vortex periodic arrangements, the thesis develops two-dimensional potential flow models (consisting of an integrable Hamiltonian system of point vortices) that consider spatially periodic arrays of four vortices with their strengths being +/-Gamma1 and +/-Gamma2. Vortex formations observed in the experiments inspire the assumed spatial symmetry. The models demonstrate a number of dynamic modes that are classified using a bifurcation analysis of the phase space topology, consisting of level curves of the Hamiltonian. Despite the vortex strengths in each pair being unequal in magnitude, some initial conditions lead to relative equilibrium when the vortex configuration moves with invariant size and shape. The scaled comparisons of the
Lute, A. C.; Luce, Charles H.
2017-11-01
The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modeling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and Snow Residence Time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests, with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low to moderate complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.
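The complexity/transferability trade-off described above can be illustrated with a toy space-for-time experiment: fit regression models of increasing polynomial complexity to synthetic "sites", then test them on warmer sites that require extrapolation. All data and model forms below are invented for illustration; the study itself used SNOTEL observations:

```python
# Synthetic illustration of model complexity vs. transferability:
# higher-degree fits reduce calibration error but can degrade badly
# when extrapolated to a new climate "space".
import numpy as np

rng = np.random.default_rng(0)

def make_sites(n, temp_range):
    t = rng.uniform(*temp_range, n)        # mean winter temperature (C)
    p = rng.uniform(200, 1500, n)          # cumulative winter precip (mm)
    swe = 0.6 * p / (1 + np.exp(t))        # toy April-1 SWE response
    return np.column_stack([t, p]), swe + rng.normal(0, 20, n)

def fit_poly(X, y, degree):
    # design matrix: polynomial terms in temperature, linear in precip
    A = np.column_stack([X[:, 0] ** d for d in range(degree + 1)] + [X[:, 1]])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def rmse(X, y, coef, degree):
    A = np.column_stack([X[:, 0] ** d for d in range(degree + 1)] + [X[:, 1]])
    return float(np.sqrt(np.mean((A @ coef - y) ** 2)))

X_cal, y_cal = make_sites(200, (-10, 5))   # calibration sites
X_new, y_new = make_sites(200, (0, 12))    # warmer sites: extrapolation
for deg in (1, 3, 7):
    c = fit_poly(X_cal, y_cal, deg)
    print(deg, rmse(X_cal, y_cal, c, deg), rmse(X_new, y_new, c, deg))
```

Because the degree-3 design matrix contains every column of the degree-1 one, calibration error can only decrease with degree; the transfer error need not, which is the paper's point.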
Robustness and Optimization of Complex Networks : Reconstructability, Algorithms and Modeling
Liu, D.
2013-01-01
The infrastructure networks, including the Internet, telecommunication networks, electrical power grids, transportation networks (road, railway, waterway, and airway networks), gas networks and water networks, are becoming more and more complex. The complex infrastructure networks are crucial to our
Analysis of a Mouse Skin Model of Tuberous Sclerosis Complex.
Directory of Open Access Journals (Sweden)
Yanan Guo
Full Text Available Tuberous Sclerosis Complex (TSC) is an autosomal dominant tumor suppressor gene syndrome in which patients develop several types of tumors, including facial angiofibroma, subungual fibroma, Shagreen patch, angiomyolipomas, and lymphangioleiomyomatosis. It is due to inactivating mutations in TSC1 or TSC2. We sought to generate a mouse model of one or more of these tumor types by targeting deletion of the Tsc1 gene to fibroblasts using the Fsp-Cre allele. Mutant Tsc1ccFsp-Cre+ mice survived a median of nearly a year, and developed tumors in multiple sites but did not develop angiomyolipoma or lymphangioleiomyomatosis. They did develop a prominent skin phenotype, with marked thickening of the dermis and accumulation of mast cells, that was minimally responsive to systemic rapamycin therapy and quite different from the pathology seen in human TSC skin lesions. Recombination and loss of Tsc1 was demonstrated in skin fibroblasts in vivo and in cultured skin fibroblasts. Loss of Tsc1 in fibroblasts in mice does not lead to a model of angiomyolipoma or lymphangioleiomyomatosis.
Entropies from Markov Models as Complexity Measures of Embedded Attractors
Directory of Open Access Journals (Sweden)
Julián D. Arias-Londoño
2015-06-01
Full Text Available This paper addresses the problem of measuring complexity from embedded attractors as a way to characterize changes in the dynamical behavior of different types of systems with a quasi-periodic behavior by observing their outputs. With the aim of measuring the stability of the trajectories of the attractor along time, this paper proposes three new estimations of entropy that are derived from a Markov model of the embedded attractor. The proposed estimators are compared with traditional nonparametric entropy measures, such as approximate entropy, sample entropy and fuzzy entropy, which only take into account the spatial dimension of the trajectory. The method proposes the use of an unsupervised algorithm to find the principal curve, which is considered as the “profile trajectory”, that will serve to adjust the Markov model. The new entropy measures are evaluated using three synthetic experiments and three datasets of physiological signals. In terms of consistency and discrimination capabilities, the results show that the proposed measures perform better than the other entropy measures used for comparison purposes.
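A stripped-down version of the idea (not the paper's estimators) is to discretize a trajectory into states, fit a Markov transition matrix, and take its entropy rate; a regular signal then scores lower than a noisy one:

```python
# Entropy rate of a Markov chain fitted to a discretized time series:
# a rough proxy for trajectory "stability" in the spirit of the abstract.
import math
import random

def markov_entropy_rate(series, n_states=8):
    lo, hi = min(series), max(series)
    states = [min(int((x - lo) / (hi - lo + 1e-12) * n_states), n_states - 1)
              for x in series]
    counts = {}
    for a, b in zip(states, states[1:]):
        counts.setdefault(a, {}).setdefault(b, 0)
        counts[a][b] += 1
    total = len(states) - 1
    h = 0.0
    for a, row in counts.items():
        n_a = sum(row.values())
        p_a = n_a / total              # empirical state probability
        for n_ab in row.values():
            p = n_ab / n_a             # empirical transition probability
            h -= p_a * p * math.log(p)
    return h

random.seed(1)
regular = [math.sin(0.1 * i) for i in range(2000)]
noisy = [math.sin(0.1 * i) + random.gauss(0, 0.5) for i in range(2000)]
print(markov_entropy_rate(regular), markov_entropy_rate(noisy))
```

Unlike spatial measures such as sample entropy, this estimate is built entirely from the temporal transition structure, which is the distinction the paper draws.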
Electromagnetic modelling of Ground Penetrating Radar responses to complex targets
Pajewski, Lara; Giannopoulos, Antonis
2014-05-01
This work deals with the electromagnetic modelling of composite structures for Ground Penetrating Radar (GPR) applications. It was developed within the Short-Term Scientific Mission ECOST-STSM-TU1208-211013-035660, funded by COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar". The authors define a set of test concrete structures, hereinafter called cells. The size of each cell is 60 x 100 x 18 cm and the content varies with growing complexity, from a simple cell with a few rebars of different diameters embedded in concrete at increasing depths, to a final cell with a quite complicated pattern, including a layer of tendons between two overlying meshes of rebars. Other cells, of intermediate complexity, contain PVC ducts (air-filled or hosting rebars), steel objects commonly used in civil engineering (such as a pipe, an angle bar, a box section and a U-channel), as well as void and honeycombing defects. One of the cells has a steel mesh embedded in it, overlying two rebars placed diagonally across the corners of the structure. Two cells include a couple of rebars bent into a right angle and placed on top of each other, with a square/round circle lying at the base of the concrete slab. Inspiration for some of these cells is taken from the very interesting experimental work presented in Ref. [1]. For each cell, a subset of models with growing complexity is defined, starting from a simple representation of the cell and ending with a more realistic one. In particular, the model's complexity increases from the geometrical point of view, as well as in terms of how the constitutive parameters of the involved media and GPR antennas are described. Some cells can be simulated in both two and three dimensions; the concrete slab can be approximated as a finite-thickness layer having infinite extension on the transverse plane, thus neglecting how edges affect radargrams, or else its finite size can be fully taken into account. The permittivity of concrete can be
Finite element modeling of piezoelectric elements with complex electrode configuration
International Nuclear Information System (INIS)
Paradies, R; Schläpfer, B
2009-01-01
It is well known that the material properties of piezoelectric materials strongly depend on the state of polarization of the individual element. While an unpolarized material exhibits mechanically isotropic material properties in the absence of global piezoelectric capabilities, the piezoelectric material properties become transversally isotropic with respect to the polarization direction after polarization. Therefore, for evaluating piezoelectric elements the material properties, including the coupling between the mechanical and the electromechanical behavior, should be addressed correctly. This is of special importance for the micromechanical description of piezoelectric elements with interdigitated electrodes (IDEs). The best known representatives of this group are active fiber composites (AFCs), macro fiber composites (MFCs) and the radial field diaphragm (RFD), respectively. While the material properties are available for a piezoelectric wafer with a homogeneous polarization perpendicular to its plane as postulated in the so-called uniform field model (UFM), the same information is missing for piezoelectric elements with more complex electrode configurations like the above-mentioned ones with IDEs. This is due to the inhomogeneous field distribution which does not automatically allow for the correct assignment of the material, i.e. orientation and property. A variation of the material orientation as well as the material properties can be accomplished by including the polarization process of the piezoelectric transducer in the finite element (FE) simulation prior to the actual load case to be investigated. A corresponding procedure is presented which automatically assigns the piezoelectric material properties, e.g. elasticity matrix, permittivity, and charge vector, for finite element models (FEMs) describing piezoelectric transducers according to the electric field distribution (field orientation and strength) in the structure. A corresponding code has been
Facing urban complexity : towards cognitive modelling. Part 1. Modelling as a cognitive mediator
Directory of Open Access Journals (Sweden)
Sylvie Occelli
2002-03-01
Full Text Available Over the last twenty years, complexity issues have been a central theme of enquiry for the modelling field. While contributing both to a critical revisiting of the existing methods and to opening new ways of reasoning, the effectiveness (and sense) of modelling activity was rarely questioned. Acknowledgment of complexity, however, has been a fruitful spur to new and more sophisticated methods intended to improve understanding and advance the geographical sciences. Yet its contribution to tackling urban problems in everyday life has been rather poor and mainly limited to rhetorical claims about the potentialities of the new approach. We argue that although complexity has put the classical modelling activity in serious distress, it is disclosing new potentialities, which are still largely unnoticed. These are primarily related to what the authors have called the structural cognitive shift, which involves both the contents and the role of modelling activity. This paper is the first part of a work aimed at illustrating the main features of this shift and discussing its main consequences for the modelling activity. We contend that a most relevant aspect of novelty lies in the new role of modelling as a cognitive mediator, i.e. as a kind of interface between the various components of a modelling process and the external environment to which a model application belongs.
Realistic modelling of observed seismic motion in complex sedimentary basins
International Nuclear Information System (INIS)
Faeh, D.; Panza, G.F.
1994-03-01
Three applications of a numerical technique are illustrated to model realistically the seismic ground motion for complex two-dimensional structures. First we consider a sedimentary basin in the Friuli region, and we model strong motion records from an aftershock of the 1976 earthquake. Then we simulate the ground motion caused in Rome by the 1915 Fucino (Italy) earthquake, and we compare our modelling with the damage distribution observed in the town. Finally we deal with the interpretation of ground motion recorded in Mexico City as a consequence of earthquakes in the Mexican subduction zone. The synthetic signals explain the major characteristics (relative amplitudes, spectral amplification, frequency content) of the considered seismograms, and the space distribution of the available macroseismic data. For the sedimentary basin in the Friuli area, parametric studies demonstrate the relevant sensitivity of the computed ground motion to small changes in the subsurface topography of the sedimentary basin, and in the velocity and quality factor of the sediments. The total energy of ground motion, determined from our numerical simulation in Rome, is in very good agreement with the distribution of damage observed during the Fucino earthquake. For epicentral distances in the range 50-100 km, the source location, and not only the local soil conditions, controls the local effects. For Mexico City, the observed ground motion can be explained as resonance effects and as excitation of local surface waves, and the theoretical and observed maximum spectral amplifications are very similar. In general, our numerical simulations permit the estimation of the maximum and average spectral amplification for specific sites, i.e. they are a very powerful tool for accurate micro-zonation. (author). 38 refs, 19 figs, 1 tab
Modelling wetting and drying effects over complex topography
Tchamen, G. W.; Kahawita, R. A.
1998-06-01
The numerical simulation of free surface flows that alternately flood and dry out over complex topography is a formidable task. The model equation set generally used for this purpose is the two-dimensional (2D) shallow water wave model (SWWM). Simplified forms of this system, such as the zero inertia model (ZIM), can accommodate specific situations like slowly evolving floods over gentle slopes. Classical numerical techniques, such as finite differences (FD) and finite elements (FE), have been used for their integration over the last 20-30 years. Most of these schemes experience some kind of instability and usually fail when some particular domain under specific flow conditions is treated. The numerical instability generally manifests itself in the form of an unphysical negative depth that subsequently causes a run-time error at the computation of the celerity and/or the friction slope. The origins of this behaviour are diverse and may be generally attributed to:
1. The use of a scheme that is inappropriate for such complex flow conditions (mixed regimes).
2. Improper treatment of a friction source term or a large local curvature in topography.
3. Mishandling of a cell that is partially wet/dry.
In this paper, a tentative attempt has been made to gain a better understanding of the genesis of the instabilities, their implications and the limits to the proposed solutions. Frequently, the enforcement of robustness is made at the expense of accuracy. The need for a positive scheme, that is, a scheme that always predicts positive depths when run within the constraints of some practical stability limits, is fundamental. It is shown here how a carefully chosen scheme (in this case, an adaptation of the solver to the SWWM) can preserve positive values of water depth under both explicit and implicit time integration, high velocities and complex topography that may include dry areas. However, the treatment of the source terms: friction, Coriolis and particularly the bathymetry
Lewis, Brian A
2010-01-15
The regulation of transcription and of many other cellular processes involves large multi-subunit protein complexes. In the context of transcription, it is known that these complexes serve as regulatory platforms that connect activator DNA-binding proteins to a target promoter. However, there is still a lack of understanding regarding the function of these complexes. Why do multi-subunit complexes exist? What is the molecular basis of the function of their constituent subunits, and how are these subunits organized within a complex? What is the reason for physical connections between certain subunits and not others? In this article, I address these issues through a model of network allostery and its application to the eukaryotic RNA polymerase II Mediator transcription complex. The multiple allosteric networks model (MANM) suggests that protein complexes such as Mediator exist not only as physical but also as functional networks of interconnected proteins through which information is transferred from subunit to subunit by the propagation of an allosteric state known as conformational spread. Additionally, there are multiple distinct sub-networks within the Mediator complex that can be defined by their connections to different subunits; these sub-networks have discrete functions that are activated when specific subunits interact with other activator proteins.
International Nuclear Information System (INIS)
Hammond, Glenn E.; Cygan, Randall Timothy
2007-01-01
Within reactive geochemical transport, several conceptual models exist for simulating sorption processes in the subsurface. Historically, the K_D approach has been the method of choice due to ease of implementation within a reactive transport model and straightforward comparison with experimental data. However, for modeling complex sorption phenomena (e.g. sorption of radionuclides onto mineral surfaces), this approach does not systematically account for variations in location, time, or chemical conditions, and more sophisticated methods such as a surface complexation model (SCM) must be utilized. It is critical to determine which conceptual model to use; that is, when the material variation becomes important to regulatory decisions. The geochemical transport tool GEOQUIMICO has been developed to assist in this decision-making process. GEOQUIMICO provides a user-friendly framework for comparing the accuracy and performance of sorption conceptual models. The model currently supports the K_D and SCM conceptual models. The code is written in the object-oriented Java programming language to facilitate model development and improve code portability. The basic theory underlying geochemical transport and the sorption conceptual models noted above is presented in this report. Explanations are provided of how these physicochemical processes are implemented in GEOQUIMICO, and a brief verification study comparing GEOQUIMICO results to data found in the literature is given
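For context, the K_D approach the report describes reduces sorption to a single retardation factor, R = 1 + (ρb/θ)K_D, applied uniformly to transport. A minimal sketch (parameter values invented; GEOQUIMICO's actual interface is not shown here):

```python
# The constant-K_D conceptual model in two lines: a distribution
# coefficient yields one retardation factor that scales travel time.

def retardation_factor(k_d, bulk_density=1.6, porosity=0.3):
    """k_d in mL/g, bulk_density in g/cm^3, porosity dimensionless."""
    return 1.0 + (bulk_density / porosity) * k_d

def breakthrough_time(length_m, velocity_m_per_yr, k_d):
    """Time (years) for a sorbing solute front to travel length_m."""
    return length_m / velocity_m_per_yr * retardation_factor(k_d)

print(retardation_factor(0.0))          # conservative tracer: 1.0
print(breakthrough_time(100, 10, 1.5))  # retarded solute: 90.0 years
```

The limitation the report raises is visible here: K_D is a fixed number, so nothing in this model responds to changes in pH, competing ions, or surface-site availability, which is exactly what a surface complexation model adds.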
Using Models to Inform Policy: Insights from Modeling the Complexities of Global Polio Eradication
Thompson, Kimberly M.
Drawing on over 20 years of experience modeling risks in complex systems, this talk will challenge SBP participants to develop models that provide timely and useful answers to critical policy questions when decision makers need them. The talk will include reflections on the opportunities and challenges associated with developing integrated models for complex problems and communicating their results effectively. Dr. Thompson will focus the talk largely on collaborative modeling related to global polio eradication and the application of system dynamics tools. After successful global eradication of wild polioviruses, live polioviruses will still present risks that could potentially lead to paralytic polio cases. This talk will present the insights of efforts to use integrated dynamic, probabilistic risk, decision, and economic models to address critical policy questions related to managing global polio risks. Using a dynamic disease transmission model combined with probabilistic model inputs that characterize uncertainty for a stratified world to account for variability, we find that global health leaders will face some difficult choices, but that they can take actions that will manage the risks effectively. The talk will emphasize the need for true collaboration between modelers and subject matter experts, and the importance of working with decision makers as partners to ensure the development of useful models that actually get used.
Simulation and Analysis of Complex Biological Processes: an Organisation Modelling Perspective
Bosse, T.; Jonker, C.M.; Treur, J.
2005-01-01
This paper explores how the dynamics of complex biological processes can be modelled and simulated as an organisation of multiple agents. This modelling perspective identifies organisational structure occurring in complex decentralised processes and handles complexity of the analysis of the dynamics
DEFF Research Database (Denmark)
Eby, M.; Weaver, A. J.; Alexander, K.
2013-01-01
Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE...... and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20...
Dynamics of Symmetric Conserved Mass Aggregation Model on Complex Networks
Institute of Scientific and Technical Information of China (English)
HUA Da-Yin
2009-01-01
We investigate the dynamical behaviour of the aggregation process in the symmetric conserved mass aggregation model under three different topological structures. The dispersion σ(t, L) = (∑i (mi − ρ0)²/L)^(1/2) is defined to describe the dynamical behaviour, where ρ0 is the density of particles and mi is the particle number on site i. It is found numerically that for a regular lattice and a scale-free network, σ(t, L) follows a power-law scaling σ(t, L) ~ t^δ1 and σ(t, L) ~ t^δ4, respectively, from a random initial condition to the stationary states. However, for a small-world network, there are two power-law scaling regimes: σ(t, L) ~ t^δ2 when t < T and σ(t, L) ~ t^δ3 when t > T. Moreover, it is found numerically that δ2 is close to δ1 for small rewiring probability q, and that δ3 hardly changes with varying q and is almost the same as δ4. We speculate that the aggregation of the connection degree accelerates the mass aggregation in the initial relaxation stage, and that the existence of long-distance interactions in the complex networks accelerates the mass aggregation when t > T for the small-world networks. We also show that the relaxation time T follows a power-law scaling T ~ L^z, and that σ(t, L) in the stationary state follows a power law σs(L) ~ L^σ for all three structures.
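The dispersion diagnostic is easy to reproduce on a toy conserved-mass process. The sketch below uses a simplified whole-pile diffusion move on a ring lattice (not the exact dynamics of the model) and shows σ(t, L) growing from zero as mass aggregates:

```python
# sigma(t, L) = (sum_i (m_i - rho0)^2 / L)^(1/2) computed along a
# toy mass-conserving aggregation process on a ring lattice.
import math
import random

def dispersion(masses, rho0):
    L = len(masses)
    return math.sqrt(sum((m - rho0) ** 2 for m in masses) / L)

def step(masses):
    # one sweep: each move relocates a whole pile, conserving total mass
    L = len(masses)
    for _ in range(L):
        i = random.randrange(L)
        if masses[i] > 0:
            j = (i + random.choice((-1, 1))) % L
            masses[j] += masses[i]
            masses[i] = 0
    return masses

random.seed(0)
L, rho0 = 100, 1.0
masses = [1] * L                  # uniform start: sigma = 0
history = []
for t in range(50):
    history.append(dispersion(masses, rho0))
    step(masses)
print(history[0], history[-1])    # dispersion grows as mass aggregates
```

Measuring the exponents δ1-δ4 would require averaging many such runs on the different network topologies; this fragment only demonstrates the diagnostic itself.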
Probabilistic Multi-Factor Interaction Model for Complex Material Behavior
Abumeri, Galib H.; Chamis, Christos C.
2010-01-01
Complex material behavior is represented by a single equation of product form to account for interaction among the various factors. The factors are selected by the physics of the problem and the environment that the model is to represent. For example, different factors will be required to represent temperature, moisture, erosion, corrosion, etc. It is important that the equation represent the physics of the behavior in its entirety accurately. The Multi-Factor Interaction Model (MFIM) is used to evaluate the divot weight (foam weight ejected) from the external launch tanks. The multi-factor model has sufficient degrees of freedom to evaluate a large number of factors that may contribute to the divot ejection. It also accommodates all interactions by its product form. Each factor has an exponent that satisfies only two points, the initial and final points. The exponent describes a monotonic path from the initial condition to the final. The exponent values are selected so that the described path makes sense in the absence of experimental data. In the present investigation, the data used were obtained by testing simulated specimens in launching conditions. Results show that the MFIM is an effective method of describing the divot weight ejected under the conditions investigated. The problem lies in how to represent the divot weight with a single equation. A unique solution to this problem is a multi-factor equation of product form, in which each factor has the form (1 − xi/xf)^ei, where xi is the initial value, usually at ambient conditions, xf is the final value, and ei is the exponent that shapes the curve between the initial and final values. The exponents are either evaluated from test data or by technical judgment. A minor disadvantage may be the selection of exponents in the absence of any empirical data. This form has been used successfully in describing the foam ejected in simulated space environmental conditions. Seven factors were required
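The product form is straightforward to evaluate. A minimal sketch, with factor values and exponents invented for illustration:

```python
# Product-form Multi-Factor Interaction Model:
# behavior = product over factors of (1 - x_i/x_f)^e_i.

def mfim(factors):
    """factors: list of (x_initial, x_final, exponent) per factor."""
    result = 1.0
    for x_i, x_f, e in factors:
        result *= (1.0 - x_i / x_f) ** e
    return result

# e.g. temperature, moisture, and load factors (hypothetical numbers)
factors = [(300.0, 600.0, 0.5), (0.1, 1.0, 1.0), (50.0, 200.0, 2.0)]
print(mfim(factors))
```

Each term contributes a value in (0, 1] that shrinks as x_i approaches x_f, and the product couples every factor to every other, which is how the form "accommodates all interactions".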
International Nuclear Information System (INIS)
Ouyang, Min; Zhao, Lijing; Hong, Liu; Pan, Zhezhe
2014-01-01
Recently numerous studies have applied complex network based models to study the performance and vulnerability of infrastructure systems under various types of attacks and hazards. But how effectively these models capture the real performance response is still a question worthy of research. Taking the Chinese railway system as an example, this paper selects three typical complex network based models, including the purely topological model (PTM), the purely shortest path model (PSPM), and the weight (link length) based shortest path model (WBSPM), to analyze railway accessibility and flow-based vulnerability and compare their results with those from the real train flow model (RTFM). The results show that the WBSPM can produce train routes with 83% of stations and 77% of railway links identical to the real routes, and that it approximates the RTFM best for railway vulnerability under both single and multiple component failures. The correlation coefficient for accessibility vulnerability from the WBSPM and RTFM under single station failures is 0.96, while it is 0.92 for flow-based vulnerability; under multiple station failures, where each station has the same failure probability fp, the WBSPM can produce almost identical vulnerability results to those from the RTFM under almost all failure scenarios when fp is larger than 0.62 for accessibility vulnerability and 0.86 for flow-based vulnerability
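The accessibility-vulnerability comparison can be sketched on a toy weighted graph: compute mean inverse shortest-path length (an accessibility proxy), then its value after a station failure. The network below is invented; the study used the real Chinese railway system:

```python
# Weight-based shortest-path accessibility on a small rail-like graph,
# and its drop when a hub station fails.
import heapq

def dijkstra(graph, src):
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def accessibility(graph):
    """Mean inverse shortest-path length over reachable ordered pairs."""
    nodes = list(graph)
    total, pairs = 0.0, 0
    for u in nodes:
        dist = dijkstra(graph, u)
        for v in nodes:
            if v != u and v in dist:
                total += 1.0 / dist[v]
                pairs += 1
    return total / pairs if pairs else 0.0

def remove_station(graph, x):
    return {u: [(v, w) for v, w in nbrs if v != x]
            for u, nbrs in graph.items() if u != x}

# undirected toy network: link weights stand in for line lengths
edges = [("A", "B", 2), ("B", "C", 3), ("C", "D", 2), ("A", "D", 9), ("B", "D", 6)]
graph = {}
for u, v, w in edges:
    graph.setdefault(u, []).append((v, w))
    graph.setdefault(v, []).append((u, w))

base = accessibility(graph)
hit = accessibility(remove_station(graph, "B"))
print(base, hit)   # accessibility drops when the hub station fails
```

A flow-based vulnerability metric, as used for the RTFM comparison, would additionally weight each origin-destination pair by its actual train flow rather than treating all pairs equally.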
Imidazole-based Vanadium Complexes as Haloperoxidase Models ...
African Journals Online (AJOL)
NICO
Excellent conversions of thioanisole (100 %) were obtained under mild room temperature conditions. ... including that of sulphides, alkanes, alkenes and alcohols.5,6,10,11 ... This complex was prepared according to a literature method but with ...
Mental Models and the Control of Actions in Complex Environments
DEFF Research Database (Denmark)
Rasmussen, Jens
1987-01-01
of human activities. The need for analysis of complex work scenarios is discussed, together with the necessity of considering several levels of cognitive control depending upon different kinds of internal representations. The development of mental representations during learning and adaptation...
Complex Automated Negotiations Theories, Models, and Software Competitions
Zhang, Minjie; Robu, Valentin; Matsuo, Tokuro
2013-01-01
Complex Automated Negotiations are a widely studied, emerging area in the field of Autonomous Agents and Multi-Agent Systems. In general, automated negotiations can be complex, since many factors characterize such negotiations. For this book, we solicited papers on all aspects of such complex automated negotiations as studied in the field of Autonomous Agents and Multi-Agent Systems. This book includes two parts: Part I, Agent-based Complex Automated Negotiations, and Part II, Automated Negotiation Agents Competition. Each chapter in Part I is an extended version of an ACAN 2011 paper, revised after peer review by three PC members. Part II covers ANAC 2011 (the Second Automated Negotiating Agents Competition), in which automated agents with different negotiation strategies, implemented by different developers, negotiate automatically in several negotiation domains. ANAC is an international competition in which automated negotiation strategies, submitted by a number of...
Model for measuring complex performance in an aviation environment
International Nuclear Information System (INIS)
Hahn, H.A.
1988-01-01
An experiment was conducted to identify models of pilot performance through the attainment and analysis of concurrent verbal protocols. Sixteen models were identified. Novice and expert pilots differed with respect to the models they used. Models were correlated to performance, particularly in the case of expert subjects. Models were not correlated to performance shaping factors (i.e. workload). 3 refs., 1 tab
2015-11-01
Hashimoto, Koichi; Suzuki, Hiroyuki; Taniguchi, Kayoko; Noguchi, Takumi; Yohda, Masafumi; Odaka, Masafumi
2008-01-01
Nitrile hydratases (NHases) have an unusual iron or cobalt catalytic center with two oxidized cysteine ligands, cysteine-sulfinic acid and cysteine-sulfenic acid, catalyzing the hydration of nitriles to amides. Recently, we found that the NHase of Rhodococcus erythropolis N771 exhibited an additional catalytic activity, converting tert-butylisonitrile (tBuNC) to tert-butylamine. Taking advantage of the slow reactivity of tBuNC and the photoreactivity of nitrosylated NHase, we present the first structural evidence for the catalytic mechanism of NHase with time-resolved x-ray crystallography. By monitoring the reaction with attenuated total reflectance-Fourier transform infrared spectroscopy, the product from the isonitrile carbon was identified as a CO molecule. Crystals of nitrosylated inactive NHase were soaked with tBuNC. The catalytic reaction was initiated by photo-induced denitrosylation and stopped by flash cooling. tBuNC was first trapped at the hydrophobic pocket above the iron center and then coordinated to the iron ion at 120 min. At 440 min, the electron density of tBuNC was significantly altered, and a new electron density was observed near the isonitrile carbon as well as the sulfenate oxygen of αCys114. These results demonstrate that the substrate was coordinated to the iron and then attacked by a solvent molecule activated by αCys114-SOH. PMID:18948265
Numerical simulations and mathematical models of flows in complex geometries
DEFF Research Database (Denmark)
Hernandez Garcia, Anier
The research work of the present thesis was mainly aimed at exploiting one of the strengths of the Lattice Boltzmann methods, namely, the ability to handle complicated geometries to accurately simulate flows in complex geometries. In this thesis, we perform a very detailed theoretical analysis...... and through the Chapman-Enskog multi-scale expansion technique the dependence of the kinetic viscosity on each scheme is investigated. Seeking optimal numerical schemes to efficiently simulate a wide range of complex flows, a variant of the finite element, off-lattice Boltzmann method [5], which uses...... the characteristic based integration, is also implemented. Using the latter scheme, numerical simulations are conducted in flows of different complexities: flow in a (real) porous network and turbulent flows in ducts with wall irregularities. From the simulations of flows in porous media driven by pressure gradients...
Complexity of repeated game model in electric power triopoly
International Nuclear Information System (INIS)
Ma Junhai; Ji Weizhuo
2009-01-01
Building on the repeated game model of an electric power duopoly, a triopoly output game model is presented. On the basis of some hypotheses, the dynamic characteristics are demonstrated with theoretical analysis and numerical simulations. The results show that the triopoly model is a chaotic system and that it outperforms the duopoly model in applications.
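The dynamics behind such output games can be sketched with a bounded-rationality Cournot triopoly map. The linear demand, cost, and adjustment-speed values below are invented for illustration, not taken from the paper; at small adjustment speeds the map converges to the symmetric Nash output, while larger speeds are the regime in which such maps bifurcate toward chaos.

```python
# Hedged sketch of a bounded-rationality Cournot triopoly map.
# Demand p = a - b*Q, constant marginal cost c; parameters are illustrative.

def triopoly_step(q, a=10.0, b=1.0, c=2.0, v=0.2):
    """Each firm adjusts its output along its own marginal profit."""
    total = sum(q)
    return tuple(qi + v * qi * (a - c - b * (total + qi)) for qi in q)

q = (1.0, 1.1, 0.9)
for _ in range(200):
    q = triopoly_step(q)

nash = (10.0 - 2.0) / (4 * 1.0)   # symmetric Nash output (a - c)/(4b) = 2
print(q, nash)
```

Raising the adjustment speed `v` and iterating the same map is the standard way to produce the bifurcation diagrams and chaotic attractors this kind of study reports.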
Information Geometric Complexity of a Trivariate Gaussian Statistical Model
Directory of Open Access Journals (Sweden)
Domenico Felice
2014-05-01
Full Text Available We evaluate the information geometric complexity of entropic motion on low-dimensional Gaussian statistical manifolds in order to quantify how difficult it is to make macroscopic predictions about systems in the presence of limited information. Specifically, we observe that the complexity of such entropic inferences not only depends on the amount of available pieces of information but also on the manner in which such pieces are correlated. Finally, we uncover that, for certain correlational structures, the impossibility of reaching the most favorable configuration from an entropic inference viewpoint seems to lead to an information geometric analog of the well-known frustration effect that occurs in statistical physics.
SATLC model lesson for teaching and learning complex environmental ...
African Journals Online (AJOL)
IICBA01
Greenhouse-gas-induced temperature increase is one of the main reasons for ... The relation between altitude and density is a fairly complex exponential that has been ... into the ocean, by which water becomes acidic; also, when water is heated it ...
Can Models Capture the Complexity of the Systems Engineering Process?
Boppana, Krishna; Chow, Sam; de Weck, Olivier L.; Lafon, Christian; Lekkakos, Spyridon D.; Lyneis, James; Rinaldi, Matthew; Wang, Zhiyong; Wheeler, Paul; Zborovskiy, Marat; Wojcik, Leonard A.
Many large-scale, complex systems engineering (SE) programs have been problematic; a few examples are listed below (Bar-Yam, 2003 and Cullen, 2004), and many others have been late, well over budget, or have failed: Hilton/Marriott/American Airlines system for hotel reservations and flights; 1988-1992; 125 million; "scrapped"
A Multiscale Model of Morphological Complexity in Cities
DEFF Research Database (Denmark)
Heinrich, Mary Katherine; Ayres, Phil; Bar-Yam, Yaneer
2017-01-01
Approaches from complex systems science can support design decision-making by extracting important information about key dependencies from large, unstructured data sources. This paper presents an initial case study applying such approaches to city structure, by characterising low-level features a...
ABOUT MODELING COMPLEX ASSEMBLIES IN SOLIDWORKS – LARGE AXIAL BEARING
Directory of Open Access Journals (Sweden)
Cătălin IANCU
2017-12-01
Full Text Available This paper presents the modeling strategy used in SOLIDWORKS for modeling special items such as a large axial bearing, and the steps to be taken in order to obtain a better design. The paper presents the features used for modeling the parts, and then the steps that must be taken to obtain the 3D model of a large axial bearing used in bucket-wheel equipment for charcoal moving.
Unsteady panel method for complex configurations including wake modeling
CSIR Research Space (South Africa)
Van Zyl, Lourens H
2008-01-01
Full Text Available implementations of the DLM are however not very versatile in terms of geometries that can be modeled. The ZONA6 code offers a versatile surface panel body model including a separated wake model, but uses a pressure panel method for lifting surfaces. This paper...
DEVELOPING INDUSTRIAL ROBOT SIMULATION MODEL TUR10-K USING “UNIVERSAL MECHANISM” SOFTWARE COMPLEX
Directory of Open Access Journals (Sweden)
Vadim Vladimirovich Chirkov
2018-02-01
Full Text Available Manipulation robots are complex spatial mechanical systems with five or six degrees of freedom, and sometimes more. For this reason, modeling the movement of manipulation robots, even in the kinematic formulation, is a complex mathematical task. Moving from kinematic to dynamic modeling additionally requires taking the inertial properties of the modeled object into account. In this case, analytically constructing the mathematical model of an object as complex as a manipulation robot becomes practically impossible. Therefore, special computer-aided engineering (CAE) systems are used for modeling complex mechanical systems. The purpose of the paper is to construct a simulation model of a complex mechanical system, the industrial robot TUR10-K, and to obtain its dynamic characteristics. Developing such models reduces the complexity of the design process for complex systems and yields the necessary characteristics. Purpose: develop the simulation model of the industrial robot TUR10-K and obtain the dynamic characteristics of the mechanism. Methodology: the article uses a computer simulation method. Results: the simulation model of the robot and its dynamic characteristics are obtained. Practical implications: the results can be used in mechanical system design and in various simulation models.
Complexation of metal ions with humic acid: charge neutralization model
International Nuclear Information System (INIS)
Kim, J.I.; Czerwinski, K.R.
1995-01-01
A number of different approaches are being used for describing the complexation equilibrium of actinide ions with humic or fulvic acid. The approach chosen and verified experimentally by TU München will be discussed with notable examples from experiment. This approach is based on the concept that a given actinide ion is charge-neutralized upon complexation with functional groups of humic or fulvic acid (e.g. carboxylic and phenolic groups), which are known to be heterogeneously cross-linked polyelectrolytes. The photon energy transfer experiment with laser light excitation has shown that the binding of the actinide ion to the functional groups is a chelation process accompanied by charge neutralization of the metal ion. This fact is in accordance with the experimental evidence for the postulated thermodynamic equilibrium reaction. The experimental results are found to be independent of the origin of the humic or fulvic acid and applicable over a broad range of pH. (authors). 23 refs., 7 figs., 1 tab
Model-Based Approach to the Evaluation of Task Complexity in Nuclear Power Plant
International Nuclear Information System (INIS)
Ham, Dong Han
2007-02-01
This study developed a model-based method for evaluating task complexity and examined ways of evaluating the complexity of tasks designed for abnormal situations and daily task situations in NPPs. The main results of this study can be summarised as follows. First, this study developed a conceptual framework for studying complexity factors and a model of complexity factors that classifies them according to the types of knowledge that human operators use. Second, this study developed a more practical model of task complexity factors and identified twenty-one complexity factors based on the model. The model emphasizes that a task is a system to be designed and that its complexity has several dimensions. Third, we developed a method of identifying task complexity factors and evaluating task complexity qualitatively based on the developed model of task complexity factors. This method can be widely used in various task situations. Fourth, this study examined the applicability of TACOM to abnormal situations and daily task situations, such as maintenance, and confirmed that it can be reasonably used in those situations. Fifth, we developed application examples to demonstrate the use of the theoretical results of this study. Lastly, this study reinterpreted well-known principles for designing information displays in NPPs in terms of task complexity and suggested a way of evaluating the conceptual design of displays analytically by using the concept of task complexity. All of the results of this study will be used as a basis when evaluating the complexity of tasks specified in procedures or information displays and when designing ways of improving human performance in NPPs
van Vijfeijken, H.; Kleingeld, A.; van Tuijl, H.; Algera, J.A.; Thierry, Hk.
2002-01-01
A prescriptive model on how to design effective combinations of goal setting and contingent rewards for group performance management is presented. The model incorporates the constructs task complexity, task interdependence, goal interdependence, and reward interdependence and specifies optimal fit
First results from the International Urban Energy Balance Model Comparison: Model Complexity
Blackett, M.; Grimmond, S.; Best, M.
2009-04-01
A great variety of urban energy balance models has been developed. These vary in complexity from simple schemes that represent the city as a slab, through those which model various facets (i.e. road, walls and roof), to more complex urban forms (including street canyons with intersections) and features (such as vegetation cover and anthropogenic heat fluxes). Some schemes also incorporate detailed representations of momentum and energy fluxes distributed throughout various layers of the urban canopy layer. The models differ in the parameters they require to describe the site and in the demands they make on computational processing power. Many of these models have been evaluated using observational datasets but to date, no controlled comparisons have been conducted. Urban surface energy balance models provide a means to predict the energy exchange processes which influence factors such as urban temperature, humidity, atmospheric stability and winds. These all need to be modelled accurately to capture features such as the urban heat island effect and to provide key information for dispersion and air quality modelling. A comparison of the various models available will assist in improving current and future models and will assist in formulating research priorities for future observational campaigns within urban areas. In this presentation we will summarise the initial results of this international urban energy balance model comparison. In particular, the relative performance of the models involved will be compared based on their degree of complexity. These results will inform us on ways in which we can improve the modelling of air quality within, and climate impacts of, global megacities. The methodology employed in conducting this comparison followed that used in PILPS (the Project for Intercomparison of Land-Surface Parameterization Schemes), which is also endorsed by the GEWEX Global Land Atmosphere System Study (GLASS) panel. In all cases, models were run
Complex Price Dynamics in the Modified Kaldorian Model
Czech Academy of Sciences Publication Activity Database
Kodera, Jan; Van Tran, Q.; Vošvrda, Miloslav
2013-01-01
Roč. 22, č. 3 (2013), s. 358-384 ISSN 1210-0455 R&D Projects: GA ČR(CZ) GBP402/12/G097 Institutional support: RVO:67985556 Keywords: price dynamics * numerical examples * two-equation model * four-equation model * nonlinear time series analysis Subject RIV: AH - Economics Impact factor: 0.208, year: 2013 http://library.utia.cas.cz/separaty/2013/E/kodera-model of price dynamics and chaos.pdf
A Complex Network Approach to Distributional Semantic Models.
Directory of Open Access Journals (Sweden)
Akira Utsumi
Full Text Available A number of studies on network analysis have focused on language networks based on free word association, which reflects human lexical knowledge, and have demonstrated the small-world and scale-free properties in the word association network. Nevertheless, there have been very few attempts at applying network analysis to distributional semantic models, despite the fact that these models have been studied extensively as computational or cognitive models of human lexical knowledge. In this paper, we analyze three network properties, namely, small-world, scale-free, and hierarchical properties, of semantic networks created by distributional semantic models. We demonstrate that the created networks generally exhibit the same properties as word association networks. In particular, we show that the distribution of the number of connections in these networks follows the truncated power law, which is also observed in an association network. This indicates that distributional semantic models can provide a plausible model of lexical knowledge. Additionally, the observed differences in the network properties of various implementations of distributional semantic models are consistently explained or predicted by considering the intrinsic semantic features of a word-context matrix and the functions of matrix weighting and smoothing. Furthermore, to simulate a semantic network with the observed network properties, we propose a new growing network model based on the model of Steyvers and Tenenbaum. The idea underlying the proposed model is that both preferential and random attachments are required to reflect different types of semantic relations in the network growth process. We demonstrate that this model provides a better explanation of network behaviors generated by distributional semantic models.
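One common way to create a semantic network from a distributional model, of the kind this abstract analyzes, is to link words whose context vectors exceed a cosine-similarity threshold and then inspect the degree distribution. The toy word-context count matrix and threshold below are invented for illustration.

```python
# Sketch: semantic network from a toy word-context count matrix via a
# cosine-similarity threshold. Matrix and threshold are illustrative only.
import math
from itertools import combinations

words = ["cat", "dog", "car", "bus", "tree"]
# rows: words; columns: co-occurrence counts in four invented contexts
M = {
    "cat":  [8, 2, 0, 1],
    "dog":  [7, 3, 0, 1],
    "car":  [0, 1, 9, 2],
    "bus":  [1, 0, 8, 3],
    "tree": [2, 2, 1, 1],
}

def cosine(u, v):
    dot = sum(a*b for a, b in zip(u, v))
    nu = math.sqrt(sum(a*a for a in u))
    nv = math.sqrt(sum(b*b for b in v))
    return dot / (nu * nv)

THRESHOLD = 0.9
edges = {(a, b) for a, b in combinations(words, 2)
         if cosine(M[a], M[b]) >= THRESHOLD}

# degree distribution of the resulting semantic network
degree = {w: sum(w in e for e in edges) for w in words}
print(sorted(edges))
print(degree)
```

On realistic corpora the same construction yields the large networks whose truncated power-law degree distributions the paper examines; here the two semantically coherent pairs (animals, vehicles) link up while the unrelated word stays isolated.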
Automation of program model developing for complex structure control objects
International Nuclear Information System (INIS)
Ivanov, A.P.; Sizova, T.B.; Mikhejkina, N.D.; Sankovskij, G.A.; Tyufyagin, A.N.
1991-01-01
A brief description is given of software for the automated development of models: an integrating modular programming system, a program module generator, and a program module library providing thermal-hydraulic calculation of process dynamics in power unit equipment components and simulation of on-line control system operation. Technical recommendations for model development are based on experience in creating concrete models of NPP power units. 8 refs., 1 tab., 4 figs
Using Stochastic Model Checking to Provision Complex Business Services
DEFF Research Database (Denmark)
Herbert, Luke Thomas; Sharp, Robin
2012-01-01
bounds on resources consumed during execution of business processes. Accurate resource provisioning is often central to ensuring the safe execution of a process. We first introduce a formalised core subset of the Business Process Modelling and Notation (BPMN), which we extend with probabilistic and non-deterministic branching and reward annotations. We then develop an algorithm for the efficient translation of these models into the guarded command language used by the model checker PRISM, in turn enabling model checking of BPMN processes and allowing for the calculation of a wide range of quantitative properties...
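The flavor of such a translation can be sketched by emitting PRISM guarded commands for a small probabilistic branch. The three-state process, branch probabilities, and reward values below are invented for illustration and are not the paper's algorithm, which handles a full BPMN subset.

```python
# Hedged sketch: turn a tiny probabilistic process into PRISM guarded commands.
# States: 0 = gateway, 1 = fast path, 2 = slow path, 3 = done (all invented).
branches = {0: [(0.9, 1), (0.1, 2)], 1: [(1.0, 3)], 2: [(1.0, 3)]}
cost = {1: 2, 2: 10}   # resource units consumed on entering each task

lines = ["dtmc", "", "module process", "  s : [0..3] init 0;"]
for state, outs in sorted(branches.items()):
    update = " + ".join(f"{p}:(s'={t})" for p, t in outs)
    lines.append(f"  [] s={state} -> {update};")
lines += ["endmodule", "", 'rewards "cost"']
for state, c in sorted(cost.items()):
    lines.append(f"  s={state} : {c};")
lines.append("endrewards")

model = "\n".join(lines)
print(model)
```

PRISM can then answer quantitative queries over such a model (e.g. expected accumulated "cost" reward on reaching the final state), which is the kind of resource-provisioning bound the abstract describes.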
Befrui, Bizhan A.
1995-01-01
This viewgraph presentation discusses the following: STAR-CD computational features; STAR-CD turbulence models; common features of industrial complex flows; industry-specific CFD development requirements; applications and experiences of industrial complex flows, including flow in rotating disc cavities, diffusion hole film cooling, internal blade cooling, and external car aerodynamics; and conclusions on turbulence modeling needs.
Conceptual basis for developing training models in complex ...
African Journals Online (AJOL)
This paper presents the conceptual basis for developing training models of an interactive assembling system for automatic building of application software systems, ... software generation, such as: program module compatibility, formalization of computer interaction, and the choice of a formal model for the human-machine interface.
An advanced modelling tool for simulating complex river systems.
Trancoso, Ana Rosa; Braunschweig, Frank; Chambel Leitão, Pedro; Obermann, Matthias; Neves, Ramiro
2009-04-01
The present paper describes MOHID River Network (MRN), a 1D hydrodynamic model for river networks that is part of the MOHID Water Modelling System, a modular system for the simulation of water bodies (hydrodynamics and water constituents). MRN is capable of simulating water quality in the aquatic and benthic phases, and its development was especially focused on reproducing processes occurring in temporary river networks (flush events, pool formation, and transmission losses). Further, unlike many other models, it allows the quantification of settled materials at the channel bed even over periods when the river falls dry. These features are very important for securing mass conservation in the highly varying flows of temporary rivers. The water quality models existing in MOHID are based on well-known ecological models, such as WASP and ERSEM, the latter allowing explicit parameterization of the C, N, P, Si, and O cycles. MRN can be coupled to the basin model, MOHID Land, which computes runoff and porous-media transport, allowing for the dynamic exchange of water and materials between the river and its surroundings, or it can be used as a standalone model, receiving discharges at any specified node (ASCII files of time series with arbitrary time step). These features account for spatial gradients in precipitation, which can be significant in Mediterranean-like basins. An interface has already been developed for the SWAT basin model.
Computerized models : tools for assessing the future of complex systems?
Ittersum, van M.K.; Sterk, B.
2015-01-01
Models are commonly used to make decisions. At some point all of us will have employed a mental model, that is, a simplification of reality, in an everyday situation. For instance, when we want to make the best decision for the environment and consider whether to buy our vegetables in a large
Simulation-based modeling of building complexes construction management
Shepelev, Aleksandr; Severova, Galina; Potashova, Irina
2018-03-01
The study reported here examines the experience in the development and implementation of business simulation games based on network planning and management of high-rise construction. Appropriate network models of different types and levels of detail have been developed; a simulation model including 51 blocks (11 stages combined in 4 units) is proposed.
Building confidence and credibility amid growing model and computing complexity
Evans, K. J.; Mahajan, S.; Veneziani, C.; Kennedy, J. H.
2017-12-01
As global Earth system models are developed to answer an ever-wider range of science questions, software products that provide robust verification, validation, and evaluation must evolve in tandem. Measuring the degree to which these new models capture past behavior, predict the future, and provide the certainty of predictions is becoming ever more challenging for reasons that are generally well known, yet are still challenging to address. Two specific and divergent needs for analysis of the Accelerated Climate Model for Energy (ACME) model - but with a similar software philosophy - are presented to show how a model developer-based focus can address analysis needs during expansive model changes to provide greater fidelity and execute on multi-petascale computing facilities. A-PRIME is a python script-based quick-look overview of a fully-coupled global model configuration to determine quickly if it captures specific behavior before significant computer time and expense is invested. EVE is an ensemble-based software framework that focuses on verification of performance-based ACME model development, such as compiler or machine settings, to determine the equivalence of relevant climate statistics. The challenges and solutions for analysis of multi-petabyte output data are highlighted from the aspect of the scientist using the software, with the aim of fostering discussion and further input from the community about improving developer confidence and community credibility.
Applications of complex terrain meteorological models to emergency response management
International Nuclear Information System (INIS)
Yamada, Tetsuji; Leone, J.M. Jr.; Rao, K.S.; Dickerson, M.H.; Bader, D.C.; Williams, M.D.
1989-01-01
The Office of Health and Environmental Research (OHER), US Department of Energy (DOE), has supported the development of mesoscale transport and diffusion and meteorological models for several decades. The model development activities are closely tied to the OHER field measurement program which has generated a large amount of meteorological and tracer gas data that have been used extensively to test and improve both meteorological and dispersion models. This paper briefly discusses the history of the model development activities associated with the OHER atmospheric science program. The discussion will then focus on how results from this program have made their way into the emergency response community in the past, and what activities are presently being pursued to improve real-time emergency response capabilities. Finally, fruitful areas of research for improving real-time emergency response modeling capabilities are suggested. 35 refs., 5 figs
Space-time complexity in solid state models
International Nuclear Information System (INIS)
Bishop, A.R.
1985-01-01
In this Workshop on symmetry-breaking it is appropriate to include the evolving fields of nonlinear-nonequilibrium systems in which transitions to and between various degrees of ''complexity'' (including ''chaos'') occur in time or space or both. These notions naturally bring together phenomena of pattern formation and chaos and therefore have ramifications for a huge array of natural sciences - astrophysics, plasmas and lasers, hydrodynamics, field theory, materials and solid state theory, optics and electronics, biology, pattern recognition and evolution, etc. Our particular concerns here are with examples from solid state and condensed matter
Computer modeling of properties of complex molecular systems
Energy Technology Data Exchange (ETDEWEB)
Kulkova, E.Yu. [Moscow State University of Technology “STANKIN”, Vadkovsky per., 1, Moscow 101472 (Russian Federation); Khrenova, M.G.; Polyakov, I.V. [Lomonosov Moscow State University, Chemistry Department, Leninskie Gory 1/3, Moscow 119991 (Russian Federation); Nemukhin, A.V. [Lomonosov Moscow State University, Chemistry Department, Leninskie Gory 1/3, Moscow 119991 (Russian Federation); N.M. Emanuel Institute of Biochemical Physics, Russian Academy of Sciences, Kosygina 4, Moscow 119334 (Russian Federation)
2015-03-10
Large molecular aggregates present important examples of strongly nonhomogeneous systems. We apply combined quantum mechanics / molecular mechanics approaches that assume treatment of a part of the system by quantum-based methods and the rest of the system with conventional force fields. Herein we illustrate these computational approaches with two different examples: (1) large-scale molecular systems mimicking natural photosynthetic centers, and (2) components of prospective solar cells containing titanium dioxide and organic dye molecules. We demonstrate that modern computational tools are capable of predicting structures and spectra of such complex molecular aggregates.
Teleconnections in complex human-Earth system models
Calvin, K. V.; Edmonds, J.
2017-12-01
Human systems and physical Earth systems are closely coupled and interact in complex ways that are sometimes surprising. This presentation discusses a few examples of system interactions. We consider the coupled energy-water-land-economy systems. We show how reductions in fossil fuel emissions are inversely coupled to land rents, food prices, and deforestation. We discuss how water shortages in one part of the world are propagated to other distant parts of the world. We discuss the sensitivity of international trade patterns to energy and land system technologies and markets, and the potentially unanticipated results that can emerge.
Non-consensus Opinion Models on Complex Networks
Li, Qian; Braunstein, Lidia A.; Wang, Huijuan; Shao, Jia; Stanley, H. Eugene; Havlin, Shlomo
2013-04-01
Social dynamic opinion models have been widely studied to understand how interactions among individuals cause opinions to evolve. Most opinion models that utilize spin interaction models usually produce a consensus steady state in which only one opinion exists. Because in reality different opinions usually coexist, we focus on non-consensus opinion models in which above a certain threshold two opinions coexist in a stable relationship. We revisit and extend the non-consensus opinion (NCO) model introduced by Shao et al. (Phys. Rev. Lett. 103:01870, 2009). The NCO model in random networks displays a second order phase transition that belongs to regular mean field percolation and is characterized by the appearance (above a certain threshold) of a large spanning cluster of the minority opinion. We generalize the NCO model by adding a weight factor W to each individual's original opinion when determining their future opinion (NCO W model). We find that as W increases the minority opinion holders tend to form stable clusters with a smaller initial minority fraction than in the NCO model. We also revisit another non-consensus opinion model based on the NCO model, the inflexible contrarian opinion (ICO) model (Li et al. in Phys. Rev. E 84:066101, 2011), which introduces inflexible contrarians to model the competition between two opinions in a steady state. Inflexible contrarians are individuals that never change their original opinion but may influence the opinions of others. To place the inflexible contrarians in the ICO model we use two different strategies, random placement and one in which high-degree nodes are targeted. The inflexible contrarians effectively decrease the size of the largest rival-opinion cluster in both strategies, but the effect is more pronounced under the targeted method. All of the above models have previously been explored in terms of a single network, but human communities are usually interconnected, not isolated. Because opinions propagate not
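The NCO update rule described above (a node adopts the majority opinion of itself plus its neighbors, keeping its own opinion on a tie) is easy to sketch. The random graph, initial minority fraction, and the sequential update order below are illustrative choices, not the paper's exact setup; sequential updates are used here because they are guaranteed to settle into a steady state.

```python
# Hedged sketch of non-consensus opinion (NCO) dynamics on a random graph.
# Graph size, mean degree, and initial minority fraction are invented values.
import random

random.seed(7)
N = 60
p = 4 / (N - 1)                      # Erdos-Renyi-style graph, mean degree ~4
neighbors = {i: set() for i in range(N)}
for i in range(N):
    for j in range(i + 1, N):
        if random.random() < p:
            neighbors[i].add(j)
            neighbors[j].add(i)

# opinions +1 / -1, initial minority fraction 0.3
opinion = {i: (-1 if random.random() < 0.3 else 1) for i in range(N)}

def local_field(i):
    """Own opinion plus the sum of neighbor opinions."""
    return opinion[i] + sum(opinion[j] for j in neighbors[i])

# each flip strictly lowers an energy function, so this loop always terminates
changed = True
while changed:
    changed = False
    for i in range(N):
        s = local_field(i)
        want = opinion[i] if s == 0 else (1 if s > 0 else -1)
        if want != opinion[i]:
            opinion[i] = want
            changed = True

minority = sum(1 for v in opinion.values() if v == -1)
print(minority, N - minority)
```

In the steady state every node agrees with (or ties) its local majority, so minority opinion holders can survive only in clusters, which is exactly the stable-coexistence picture the NCO model formalizes.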
A comprehensive multi-local-world model for complex networks
International Nuclear Information System (INIS)
Fan Zhengping; Chen Guanrong; Zhang Yunong
2009-01-01
The nodes in a community within a network are much more connected to each other than to the others outside the community in the same network. This phenomenon has been commonly observed from many real-world networks, ranging from social to biological even to technical networks. Meanwhile, the number of communities in some real-world networks, such as the Internet and most social networks, are evolving with time. To model this kind of networks, the present Letter proposes a multi-local-world (MLW) model to capture and describe their essential topological properties. Based on the mean-field theory, the degree distribution of this model is obtained analytically, showing that the generated network has a novel topological feature as being not completely random nor completely scale-free but behaving somewhere between them. As a typical application, the MLW model is applied to characterize the Internet against some other models such as the BA, GBA, Fitness and HOT models, demonstrating the superiority of the new model.
A multi-element cosmological model with a complex space-time topology
Kardashev, N. S.; Lipatova, L. N.; Novikov, I. D.; Shatskiy, A. A.
2015-02-01
Wormhole models with a complex topology having one entrance and two exits into the same space-time of another universe are considered, as well as models with two entrances from the same space-time and one exit to another universe. These models are used to build a model of a multi-sheeted universe (a multi-element model of the "Multiverse") with a complex topology. Spherical symmetry is assumed in all the models. A Reissner-Nordström black-hole model having no singularity beyond the horizon is constructed. The strength of the central singularity of the black hole is analyzed.
Complex structure-induced deformations of σ-models
Energy Technology Data Exchange (ETDEWEB)
Bykov, Dmitri [Max-Planck-Institut für Gravitationsphysik, Albert-Einstein-Institut,Am Mühlenberg 1, D-14476 Potsdam-Golm (Germany); Steklov Mathematical Institute of Russ. Acad. Sci.,Gubkina str. 8, 119991 Moscow (Russian Federation)
2017-03-24
We describe a deformation of the principal chiral model (with an even-dimensional target space G) by a B-field proportional to the Kähler form on the target space. The equations of motion of the deformed model admit a zero-curvature representation. As a simplest example, we consider the case of G = S¹ × S³. We also apply a variant of the construction to a deformation of the AdS₃ × S³ × S¹ (super-)σ-model.
Calibration of two complex ecosystem models with different likelihood functions
Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán
2014-05-01
The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate change induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive for the model output. At the same time there are several input parameters for which accurate values are hard to obtain directly from experiments, or for which no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can result if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of terrestrial ecosystems (in this research the developed version of Biome-BGC is used, which is referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via a likelihood function (degree of goodness-of-fit between simulated and measured data). In our research different likelihood function formulations were used in order to examine the effect of the different model
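The core of such a likelihood-based Bayesian calibration can be sketched on a toy model: a Gaussian log-likelihood comparing simulated to "measured" data, sampled with a Metropolis random walk over one unknown parameter. The decay model, noise level, and sampler settings below are invented for illustration; the actual study calibrates many parameters of PaSim and Biome-BGC against eddy-covariance data.

```python
# Hedged sketch: Metropolis sampling of one unknown parameter of a toy model
# under a Gaussian likelihood (implicit flat prior). All values are invented.
import math
import random

random.seed(42)

# synthetic observations from a toy decay model y = exp(-k*t), k_true = 0.5
k_true, sigma = 0.5, 0.02
ts = [0.5 * i for i in range(1, 11)]
obs = [math.exp(-k_true * t) + random.gauss(0, sigma) for t in ts]

def log_likelihood(k):
    """Gaussian goodness-of-fit between simulated and 'measured' data."""
    return -0.5 * sum(((math.exp(-k * t) - y) / sigma) ** 2
                      for t, y in zip(ts, obs))

# Metropolis random walk over the single unknown parameter
k, ll = 0.2, log_likelihood(0.2)
samples = []
for step in range(6000):
    cand = k + random.gauss(0, 0.05)
    ll_cand = log_likelihood(cand)
    if math.log(random.random()) < ll_cand - ll:   # accept/reject
        k, ll = cand, ll_cand
    if step >= 1000:                               # discard burn-in
        samples.append(k)

posterior_mean = sum(samples) / len(samples)
print(round(posterior_mean, 3))
```

Swapping in a different `log_likelihood` formulation while keeping the sampler fixed is exactly the kind of experiment the abstract describes: the posterior, and hence the calibrated parameters, can shift with the likelihood choice.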
What do we gain from simplicity versus complexity in species distribution models?
Merow, Cory; Smith, Matthew J.; Edwards, Thomas C.; Guisan, Antoine; McMahon, Sean M.; Normand, Signe; Thuiller, Wilfried; Wuest, Rafael O.; Zimmermann, Niklaus E.; Elith, Jane
2014-01-01
Species distribution models (SDMs) are widely used to explain and predict species ranges and environmental niches. They are most commonly constructed by inferring species' occurrence–environment relationships using statistical and machine-learning methods. The variety of methods that can be used to construct SDMs (e.g. generalized linear/additive models, tree-based models, maximum entropy, etc.), and the variety of ways that such models can be implemented, permits substantial flexibility in SDM complexity. Building models with an appropriate amount of complexity for the study objectives is critical for robust inference. We characterize complexity as the shape of the inferred occurrence–environment relationships and the number of parameters used to describe them, and search for insights into whether additional complexity is informative or superfluous. By building ‘under fit’ models, having insufficient flexibility to describe observed occurrence–environment relationships, we risk misunderstanding the factors shaping species distributions. By building ‘over fit’ models, with excessive flexibility, we risk inadvertently ascribing pattern to noise or building opaque models. However, model selection can be challenging, especially when comparing models constructed under different modeling approaches. Here we argue for a more pragmatic approach: researchers should constrain the complexity of their models based on study objective, attributes of the data, and an understanding of how these interact with the underlying biological processes. We discuss guidelines for balancing under fitting with over fitting and consequently how complexity affects decisions made during model building. Although some generalities are possible, our discussion reflects differences in opinions that favor simpler versus more complex models. We conclude that combining insights from both simple and complex SDM building approaches best advances our knowledge of current and future species
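The under-fitting/over-fitting trade-off the authors discuss can be made concrete with a small stand-in experiment: fitting polynomials of increasing degree to noisy data and scoring them with AIC. The quadratic data-generating process and all settings below are invented for the sketch; they are not an SDM, but the same logic (penalizing superfluous flexibility) applies.

```python
# Illustration of complexity selection via AIC on invented data.
# True response is quadratic; degree 1 under-fits, high degrees over-fit.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 50)
y = x**2 + rng.normal(0, 1.0, size=x.size)

results = {}
for degree in range(1, 7):
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    k = degree + 2                              # coefficients + noise variance
    aic = x.size * np.log(rss / x.size) + 2 * k
    results[degree] = (rss, aic)

best = min(results, key=lambda d: results[d][1])
print(best, {d: round(a, 1) for d, (r, a) in results.items()})
```

The residual sum of squares can only decrease with added flexibility, so raw fit always rewards complexity; the AIC penalty is what prevents the under-fit line from being beaten by the logic "more parameters, better fit" alone.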
Complexities in coastal sediment transport studies by numerical modelling
Digital Repository Service at National Institute of Oceanography (India)
Ilangovan, D.; ManiMurali, R.
equations arrived based on scientific principles as all natural phenomena are governed by certain rules which can be explained by scientific principles. Efficiency of numerical modeling greatly depends on quality of input parameters. When input parameters...
Modeling of Complex Adaptive Systems in Air Operations
National Research Council Canada - National Science Library
Busch, Timothy E; Trevisani, Dawn A
2006-01-01
.... Model predictive control theory provides the basis for this investigation. Given some set of objectives the military commander must devise a sequence of actions that transform the current state to the desired one...
Equity venture capital platform model based on complex network
Guo, Dongwei; Zhang, Lanshu; Liu, Miao
2018-05-01
This paper uses small-world and random networks to simulate the relationships among investors and constructs a network model of an equity venture capital platform. It explores the impact of the fraud rate and the bankruptcy rate on the robustness of the network model, while also observing how the average path length and average clustering coefficient of the investor relationship network affect the model's income. The study found that when the fraud rate or bankruptcy rate exceeds a certain threshold, the network collapses; that the bankruptcy rate has a strong influence on the platform's income; that a risk premium exists, with better average returns within a certain range of bankruptcy risk; and that the structure of the investor relationship network has no effect on the income of the investment model.
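As a rough illustration of the kind of robustness experiment this abstract describes, the sketch below grows a Watts-Strogatz small-world network of investors and measures how the largest connected component shrinks as randomly chosen nodes "go bankrupt" and leave. All sizes, rates and thresholds here are illustrative, not the paper's.

```python
import random
from collections import deque

def watts_strogatz(n, k, p, rng):
    """Ring lattice of n nodes, each linked to its k nearest neighbours,
    with each edge rewired to a random endpoint with probability p."""
    edges = set()
    for i in range(n):
        for j in range(1, k // 2 + 1):
            edges.add((i, (i + j) % n))
    rewired = set()
    for (a, b) in edges:
        if rng.random() < p:
            c = rng.randrange(n)
            while c == a or (a, c) in rewired or (c, a) in rewired:
                c = rng.randrange(n)
            rewired.add((a, c))
        else:
            rewired.add((a, b))
    return rewired

def largest_component_fraction(n, edges, removed):
    """Size of the largest connected component, as a fraction of the
    original n nodes, after the nodes in `removed` leave the network."""
    adj = {i: [] for i in range(n) if i not in removed}
    for a, b in edges:
        if a not in removed and b not in removed:
            adj[a].append(b)
            adj[b].append(a)
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        queue, comp = deque([start]), 0
        seen.add(start)
        while queue:
            u = queue.popleft()
            comp += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, comp)
    return best / n

rng = random.Random(1)
net = watts_strogatz(200, 4, 0.1, rng)
for rate in (0.05, 0.3, 0.6):   # fraction of investors removed
    removed = set(rng.sample(range(200), int(rate * 200)))
    print(rate, round(largest_component_fraction(200, net, removed), 2))
```

Raising the removal rate past a threshold fragments the network, which is the collapse effect the abstract reports.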
A coupled mass transfer and surface complexation model for uranium (VI) removal from wastewaters
International Nuclear Information System (INIS)
Lenhart, J.; Figueroa, L.A.; Honeyman, B.D.
1994-01-01
A remediation technique has been developed for removing uranium (VI) from complex contaminated groundwater using flake chitin as a biosorbent in batch and continuous flow configurations. With this system, U(VI) removal efficiency can be predicted using a model that integrates surface complexation models, mass transport limitations and sorption kinetics. This integration allows the reactor model to predict removal efficiencies for complex groundwaters with variable U(VI) concentrations and other constituents. The system has been validated using laboratory-derived kinetic data in batch and CSTR systems to verify the model predictions of U(VI) uptake from simulated contaminated groundwater
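The coupled mass-transfer/sorption idea can be caricatured as a pseudo-first-order uptake law driven toward a Langmuir equilibrium loading, closed by a batch mass balance. The rate and isotherm constants below are invented for illustration and are not the paper's fitted values for flake chitin.

```python
def batch_sorption(C0, qmax, k, KL, m, V, dt=0.01, t_end=50.0):
    """Euler integration of pseudo-first-order uptake toward a Langmuir
    equilibrium loading: dq/dt = k*(qe(C) - q), with the batch mass
    balance C = C0 - q*m/V.  All parameter values are illustrative."""
    q, t = 0.0, 0.0
    while t < t_end:
        C = C0 - q * m / V                    # solution-phase concentration
        qe = qmax * KL * C / (1.0 + KL * C)   # equilibrium loading at C
        q += k * (qe - q) * dt
        t += dt
    C = C0 - q * m / V
    return q, C

# hypothetical units: C in mg/L, q in mg/g, m in g, V in L
q, C = batch_sorption(C0=1.0, qmax=2.0, k=0.5, KL=4.0, m=0.3, V=1.0)
print(f"loading={q:.3f} mg/g, residual C={C:.3f} mg/L")
```

The removal efficiency (C0 - C)/C0 is the quantity such a reactor model would predict for a given groundwater composition.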
Modelling H5N1 in Bangladesh across spatial scales: Model complexity and zoonotic transmission risk
Directory of Open Access Journals (Sweden)
Edward M. Hill
2017-09-01
Full Text Available Highly pathogenic avian influenza H5N1 remains a persistent public health threat, capable of causing infection in humans with a high mortality rate while simultaneously negatively impacting the livestock industry. A central question is to determine regions that are likely sources of newly emerging influenza strains with pandemic-causing potential. A suitable candidate is Bangladesh, being one of the most densely populated countries in the world and having an intensifying farming system. It is therefore vital to establish the key factors, specific to Bangladesh, that enable both continued transmission within poultry and spillover across the human–animal interface. We apply a modelling framework to H5N1 epidemics in the Dhaka region of Bangladesh, occurring from 2007 onwards, that resulted in large outbreaks in the poultry sector and a limited number of confirmed human cases. This model consisted of separate poultry transmission and zoonotic transmission components. Utilising poultry farm spatial and population information, a set of competing nested models of varying complexity was fitted to the observed case data, with parameter inference carried out using Bayesian methodology and goodness of fit verified by stochastic simulations. For the poultry transmission component, successfully identifying a model of minimal complexity that enabled accurate prediction of the size and spatial distribution of cases in H5N1 outbreaks was found to depend on the administration level being analysed. A consistent outcome of non-optimal reporting of infected premises materialised in each poultry epidemic of interest, though across the outbreaks analysed there were substantial differences in the estimated transmission parameters. For the zoonotic transmission component, the main contributor to spillover transmission of H5N1 in Bangladesh was found to differ from one poultry epidemic to another. We conclude by discussing possible explanations for
Between Complexity and Parsimony: Can Agent-Based Modelling Resolve the Trade-off
DEFF Research Database (Denmark)
Nielsen, Helle Ørsted; Malawska, Anna Katarzyna
2013-01-01
While Herbert Simon espoused the development of general models of behavior, he also strongly advocated that these models be based on realistic assumptions about humans and therefore reflect the complexity of human cognition and social systems (Simon 1997). Hence, one approach to BR-based policy studies would be to couple research on bounded rationality with agent-based modeling. Agent-based models (ABMs) are computational models for simulating the behavior and interactions of any number of decision makers in a dynamic system, and are better suited than general equilibrium models for capturing the behavior patterns of complex systems. ABMs may have the potential to represent complex systems without oversimplifying them. At the same time, research in bounded rationality and behavioral economics has already yielded many insights that could inform the modeling.
Molecular model for annihilation rates in positron complexes
Energy Technology Data Exchange (ETDEWEB)
Assafrao, Denise [Laboratorio de Atomos e Moleculas Especiais, Departamento de Fisica, ICEx, Universidade Federal de Minas Gerais, P.O. Box 702, 30123-970 Belo Horizonte, MG (Brazil); Department of Applied Mathematics and Theoretical Physics, Queen's University of Belfast, Belfast BT7 1NN, Northern Ireland (United Kingdom); Walters, H.R. James [Department of Applied Mathematics and Theoretical Physics, Queen's University of Belfast, Belfast BT7 1NN, Northern Ireland (United Kingdom); Mohallem, Jose R. [Laboratorio de Atomos e Moleculas Especiais, Departamento de Fisica, ICEx, Universidade Federal de Minas Gerais, P.O. Box 702, 30123-970 Belo Horizonte, MG (Brazil); Department of Applied Mathematics and Theoretical Physics, Queen's University of Belfast, Belfast BT7 1NN, Northern Ireland (United Kingdom)], E-mail: rachid@fisica.ufmg.br
2008-02-15
The molecular approach for positron interaction with atoms is developed further. Potential energy curves for positron motion are obtained. Two procedures accounting for the nonadiabatic effective positron mass are introduced for calculating annihilation rate constants. The first one takes the bound-state energy eigenvalue as an input parameter. The second is a self-contained and self-consistent procedure. The methods are tested with quite different states of the small complexes HPs, e+He (electronic triplet) and e+Be (electronic singlet and triplet). For states yielding the positronium cluster, the annihilation rates are quite stable, irrespective of the accuracy in binding energies. For the e+Be states, annihilation rates are larger and more consistent with qualitative predictions than previously reported ones.
Simulation of complex pharmacokinetic models in Microsoft Excel.
Meineke, Ingolf; Brockmöller, Jürgen
2007-12-01
With the arrival of powerful personal computers in the office, numerical methods are accessible to everybody. Simulation of complex processes has therefore become an indispensable tool in research and education. In this paper Microsoft EXCEL is used as a platform for a universal differential equation solver. The software is designed as an add-in aiming at a minimum of required user input to perform a given task. Four examples are included to demonstrate both the simplicity of use and the versatility of possible applications. While the layout of the program is admittedly geared to the needs of pharmacokineticists, it can be used in any field where sets of differential equations are involved. The software package is available upon request.
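A universal ODE solver of the kind described reduces, at its core, to a generic integration step applied to a user-supplied system. The sketch below shows a classical Runge-Kutta step driving a one-compartment pharmacokinetic model with first-order absorption; the rate constants and dose are illustrative, not taken from the paper.

```python
def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step for the system y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = f(t + h,   [yi + h*ki   for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# one-compartment model with first-order absorption; illustrative constants
ka, ke = 1.0, 0.2          # absorption / elimination rate constants (1/h)

def pk(t, y):
    gut, central = y
    return [-ka * gut, ka * gut - ke * central]

y, t, h = [100.0, 0.0], 0.0, 0.05   # 100 mg dose in the gut at t = 0
peak = 0.0
while t < 24.0:
    y = rk4_step(pk, t, y, h)
    t += h
    peak = max(peak, y[1])          # track the Cmax-like peak amount
print(f"peak central-compartment amount: {peak:.1f} mg")
```

The same `rk4_step` can integrate any user-written right-hand side, which is the design idea behind a general-purpose add-in solver.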
The complex model of risk and progression of AMD estimation
Directory of Open Access Journals (Sweden)
V. S. Akopyan
2012-01-01
Full Text Available Purpose: To develop a method and a statistical model to estimate individual risk of AMD and the risk of progression to advanced AMD using clinical and genetic risk factors. Methods: A statistical risk assessment model was developed using stepwise binary logistic regression analysis. To estimate population differences in the prevalence of allelic variants of genes and to develop models adapted to the population of the Moscow region, genotyping and assessment of the influence of other risk factors were performed in two groups: patients with different stages of AMD (n = 74) and a control group (n = 116). Genetic risk factors included in the study: polymorphisms in the complement system genes (C3 and CFH), genes at the 10q26 locus (ARMS2 and HTRA1), and a polymorphism in the mitochondrial gene MT-ND2. Clinical risk factors included in the study: age, gender, high body mass index, and smoking history. Results: A comprehensive analysis of genetic and clinical risk factors for AMD in the study group was performed. A statistical model for assessing individual risk of AMD was compiled (sensitivity 66.7%, specificity 78.5%, AUC = 0.76), together with a statistical model describing the probability of late AMD (sensitivity 66.7%, specificity 78.3%, AUC = 0.73). The developed system also determines the most likely form of late AMD: dry or wet. Conclusion: The developed test system and mathematical algorithm for determining the risk of AMD and the risk of progression to advanced AMD have fair diagnostic informativeness and are promising for use in clinical practice.
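A minimal sketch of the statistical machinery named in this abstract: fit a binary logistic regression by gradient descent, then report the sensitivity and specificity of the fitted classifier. The single synthetic "risk score" feature and all values are invented and bear no relation to the study's genetic and clinical covariates.

```python
import math, random

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Single-feature logistic regression fitted by batch gradient
    descent; returns (bias, weight).  A stand-in for stepwise binary
    logistic regression, run on synthetic data."""
    b, w = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gb = gw = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b + w * x)))
            gb += (p - y) / n
            gw += (p - y) * x / n
        b -= lr * gb
        w -= lr * gw
    return b, w

rng = random.Random(0)
# synthetic risk score: cases (y=1) shifted above controls (y=0)
xs = [rng.gauss(1.0, 1.0) for _ in range(200)] + \
     [rng.gauss(-1.0, 1.0) for _ in range(200)]
ys = [1] * 200 + [0] * 200
b, w = fit_logistic(xs, ys)

preds = [1 if 1/(1 + math.exp(-(b + w*x))) >= 0.5 else 0 for x in xs]
tp = sum(1 for p, y in zip(preds, ys) if p == 1 and y == 1)
tn = sum(1 for p, y in zip(preds, ys) if p == 0 and y == 0)
sens, spec = tp / 200, tn / 200
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")
```

In the study the same two numbers (plus AUC) summarize how well the fitted model separates AMD patients from controls.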
Research Strategy for Modeling the Complexities of Turbine Heat Transfer
Simoneau, Robert J.
1996-01-01
The subject of this paper is a NASA research program, known as the Coolant Flow Management Program, which focuses on the interaction between the internal coolant channel and the external film cooling of a turbine blade and/or vane in an aircraft gas turbine engine. The turbine gas path is a very complex flow field. The combination of strong pressure gradients, abrupt geometry changes and intersecting surfaces, viscous forces, rotation, and unsteady blade/vane interactions all combine to offer a formidable challenge. To this, in the high pressure turbine, we add the necessity of film cooling. The ultimate goal of the turbine designer is to maintain or increase the high level of turbine performance and at the same time reduce the amount of coolant flow needed to achieve this end. Simply stated, coolant flow is a penalty on the cycle and reduces engine thermal efficiency. Accordingly, understanding the flow field and heat transfer associated with the coolant flow is a priority goal. It is important to understand both the film cooling and the internal coolant flow, particularly their interaction: thus the motivation for the Coolant Flow Management Program. The paper will begin with a brief discussion of the management and research strategy, will then proceed to discuss the current attack from the internal coolant side, and will conclude by looking at the film cooling effort, at all times keeping sight of the primary goal: the interaction between the two. One of the themes of this paper is that complex heat transfer problems of this nature cannot be attacked by single researchers or even groups of researchers, each working alone. It truly needs the combined efforts of a well-coordinated team to make an impact. It is important to note that this is a government/industry/university team effort.
Wind field and dispersion modelling in complex terrain
International Nuclear Information System (INIS)
Bartzis, J.G.; Varvayanni, M.; Catsaros, N.; Konte, K.; Amanatidis, G.
1991-01-01
Dispersion of airborne radioactive material can have an important environmental impact. Its prediction remains a difficult problem, especially over complex and inhomogeneous terrain, or under complicated atmospheric conditions. The ADREA-I code, a three-dimensional transport code especially designed for terrains of high complexity, can be considered a contribution to the solution of the above problem. The code development was initiated within the present CEC Radiation Program. New features are introduced into the code to describe the anomalous topography, the turbulent diffusion, and the numerical solution procedures. In this work, besides a brief presentation of the main features of the code, a number of applications are presented with the aim, on the one hand, of illustrating the capability and reliability of the code and, on the other, of clarifying the effects on wind field and dispersion in special cases of interest. Within the framework of ADREA-I verification studies, a 1-D simulation of the experimental Wangara Day-33 mean boundary layer was attempted, reproducing the daytime wind speeds, temperatures, specific humidities and mixing depths. In order to address the effect of surface irregularities and inhomogeneities on contamination patterns, the flow field and dispersion were analyzed over a 2-D, 1000 m high mountain range, surrounded by sea, with a point source assumed 40 km offshore from one coastline; this terrain was studied as an idealization of the greater Athens area. The effects of a 2-D, 1000 m high mountain range of Gaussian shape on long range transport have also been studied in terms of influence area, wind and concentration profile distortions and dry deposition patterns
Using multi-criteria analysis of simulation models to understand complex biological systems
Maureen C. Kennedy; E. David. Ford
2011-01-01
Scientists frequently use computer-simulation models to help solve complex biological problems. Typically, such models are highly integrated, they produce multiple outputs, and standard methods of model analysis are ill suited for evaluating them. We show how multi-criteria optimization with Pareto optimality allows for model outputs to be compared to multiple system...
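The core operation in multi-criteria comparison with Pareto optimality is extracting the non-dominated set of model outputs. A minimal sketch (lower is better in every objective; the example points are made up):

```python
def pareto_front(points):
    """Return the non-dominated subset of `points`, where each point is a
    tuple of objective values and lower is better in every objective.
    A point is dominated if some other point is at least as good in all
    objectives (and is not an identical point)."""
    front = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# hypothetical (error_on_output_1, error_on_output_2) pairs for candidate
# parameterizations of a simulation model
outputs = [(0.2, 0.9), (0.4, 0.4), (0.9, 0.1), (0.5, 0.5), (0.6, 0.8)]
print(pareto_front(outputs))  # → [(0.2, 0.9), (0.4, 0.4), (0.9, 0.1)]
```

Parameterizations on the front represent the trade-offs between fitting different system outputs; dominated ones can be discarded.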
Model Complexity and Out-of-Sample Performance: Evidence from S&P 500 Index Returns
Kaeck, Andreas; Rodrigues, Paulo; Seeger, Norman J.
We apply a range of out-of-sample specification tests to more than forty competing stochastic volatility models to address how model complexity affects out-of-sample performance. Using daily S&P 500 index returns, model confidence set estimations provide strong evidence that the most important model
Holocene glacier variability: three case studies using an intermediate-complexity climate model
Weber, S.L.; Oerlemans, J.
2003-01-01
Synthetic glacier length records are generated for the Holocene epoch using a process-based glacier model coupled to the intermediate-complexity climate model ECBilt. The glacier model consists of a massbalance component and an ice-flow component. The climate model is forced by the insolation change
Hu, Eric Y; Bouteiller, Jean-Marie C; Song, Dong; Baudry, Michel; Berger, Theodore W
2015-01-01
Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model, capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model, and provide a method to replicate complex and diverse synaptic transmission within neuron network simulations.
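The Volterra functional power series the authors use can be illustrated, truncated at second order, for a discrete spike-train input. The kernels below are tiny made-up arrays, not kernels fitted to the mechanistic synapse model.

```python
def volterra2(u, k0, k1, k2):
    """Evaluate a discrete second-order Volterra series
        y[n] = k0 + sum_i k1[i]*u[n-i] + sum_{i,j} k2[i][j]*u[n-i]*u[n-j]
    with finite memory M = len(k1).  Kernels are illustrative only."""
    M = len(k1)
    y = []
    for n in range(len(u)):
        acc = k0
        for i in range(M):
            if n - i >= 0:
                acc += k1[i] * u[n - i]
        for i in range(M):
            for j in range(M):
                if n - i >= 0 and n - j >= 0:
                    acc += k2[i][j] * u[n - i] * u[n - j]
        y.append(acc)
    return y

k1 = [0.5, 0.25]                   # fading linear memory
k2 = [[0.1, 0.05], [0.05, 0.0]]    # quadratic (paired-pulse) interaction
spikes = [1, 0, 1, 1]              # hypothetical input spike train
print(volterra2(spikes, 0.0, k1, k2))
```

The second-order kernel captures interactions between pairs of past inputs (e.g. paired-pulse facilitation), which no single exponential response can represent.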
A structure for models of hazardous materials with complex behavior
International Nuclear Information System (INIS)
Rodean, H.C.
1991-01-01
Most atmospheric dispersion models used to assess the environmental consequences of accidental releases of hazardous chemicals do not have the capability to simulate the pertinent chemical and physical processes associated with the release of the material and its mixing with the atmosphere. The purpose of this paper is to present a materials sub-model with the flexibility to simulate the chemical and physical behaviour of a variety of materials released into the atmosphere. The model, which is based on thermodynamic equilibrium, incorporates the ideal gas law, temperature-dependent vapor pressure equations, temperature-dependent dissociation reactions, and reactions with atmospheric water vapor. The model equations, written in terms of pressure ratios and dimensionless parameters, are used to construct equilibrium diagrams with temperature and the mass fraction of the material in the mixture as coordinates. The model's versatility is demonstrated by its application to the release of UF6 and N2O4, two materials with very different physical and chemical properties. (author)
Brodsky, Yu. I.
2015-01-01
The work is devoted to the application of Bourbaki's structure theory to substantiate the synthesis of simulation models of complex multicomponent systems, where every component may be a complex system itself. An application of Bourbaki's structure theory offers a new approach to the design and computer implementation of simulation models of complex multicomponent systems: model synthesis and model-oriented programming. It differs from the traditional object-oriented approach. The central concept of this new approach, and at the same time the basic building block for the construction of more complex structures, is the concept of model-components. A model-component is endowed with a more complicated structure than, for example, an object in object-oriented analysis. This structure gives the model-component independent behavior: the ability to respond in a standard way to standard requests from its internal and external environment. At the same time, the computer implementation of a model-component's behavior is invariant under the integration of model-components into complexes. This fact allows one, firstly, to construct fractal models of any complexity and, secondly, to implement the computational process of such constructions uniformly, by a single universal program. In addition, the proposed paradigm allows one to exclude imperative programming and to generate computer code with a high degree of parallelism.
Low Complexity Models to improve Incomplete Sensitivities for Shape Optimization
Stanciu, Mugurel; Mohammadi, Bijan; Moreau, Stéphane
2003-01-01
The present global platform for simulation and design of multi-model configurations treats shape optimization problems in aerodynamics. Flow solvers are coupled with optimization algorithms based on CAD-free and CAD-connected frameworks. Newton methods together with incomplete expressions of gradients are used. Such incomplete sensitivities are improved using reduced models based on physical assumptions. The validity and application of this approach to real-life problems are presented. The numerical examples concern shape optimization for an airfoil, a business jet and a car engine cooling axial fan.
A Simple Model for Complex Fabrication of MEMS based Pressure Sensor: A Challenging Approach
Directory of Open Access Journals (Sweden)
Himani SHARMA
2010-08-01
Full Text Available In this paper we present a simple model for the complex fabrication of a MEMS-based absolute micro pressure sensor. This kind of modeling is extremely useful for determining the complexity of the fabrication steps and provides complete information about the process sequence to be followed during manufacturing. Therefore, the need for test iterations decreases, and cost and time can be reduced significantly. Using the DevEdit tool (part of the SILVACO suite), a behavioral model of the pressure sensor has been presented and implemented.
Hops, J. M.; Sherif, J. S.
1994-01-01
A great deal of effort is now being devoted to the study, analysis, prediction, and minimization of expected software maintenance cost, long before software is delivered to users or customers. It has been estimated that, on average, the effort spent on software maintenance is as costly as the effort spent on all other software activities combined. Software design methods should be the starting point for alleviating the problems of software maintenance complexity and high costs. Two aspects of maintenance deserve attention: (1) protocols for locating and rectifying defects, and for ensuring that no new defects are introduced in the development phase of the software process; and (2) protocols for modification, enhancement, and upgrading. This article focuses primarily on the second aspect, the development of protocols to help increase the quality and reduce the costs associated with modifications, enhancements, and upgrades of existing software. This study developed parsimonious models and a relative complexity metric for complexity measurement of software, which were used to rank the modules in the system relative to one another. Some success was achieved in using the models and the relative metric to identify maintenance-prone modules.
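One simple way to build a relative complexity metric of the kind described is to z-score several raw metrics across modules and average them, then rank modules by the combined score. The metric names and module values below are hypothetical.

```python
def relative_complexity(modules, metric_names):
    """Rank modules by a relative complexity score: z-score each raw
    metric across modules, average the z-scores per module, and sort
    descending.  Input: {module: {metric: value}}."""
    names = list(metric_names)
    stats = {}
    for m in names:
        vals = [modules[mod][m] for mod in modules]
        mean = sum(vals) / len(vals)
        std = (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5
        stats[m] = (mean, std or 1.0)   # guard against zero variance
    scores = {}
    for mod, vals in modules.items():
        zs = [(vals[m] - stats[m][0]) / stats[m][1] for m in names]
        scores[mod] = sum(zs) / len(zs)
    return sorted(scores.items(), key=lambda kv: -kv[1])

# hypothetical raw metrics per module
mods = {
    "parser":    {"loc": 1200, "cyclomatic": 85, "fan_out": 14},
    "logger":    {"loc": 150,  "cyclomatic": 6,  "fan_out": 2},
    "scheduler": {"loc": 700,  "cyclomatic": 40, "fan_out": 9},
}
ranking = relative_complexity(mods, ["loc", "cyclomatic", "fan_out"])
print(ranking[0][0])   # most maintenance-prone module by this score
```

Modules at the top of such a ranking are the candidates for extra review and testing effort.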
Ancel, Ersin; Shih, Ann T.
2014-01-01
This paper highlights the development of a model that is focused on the safety issue of increasing complexity and reliance on automation systems in transport category aircraft. Recent statistics show an increase in mishaps related to manual handling and automation errors due to pilot complacency and over-reliance on automation, loss of situational awareness, automation system failures and/or pilot deficiencies. Consequently, the aircraft can enter a state outside the flight envelope and/or air traffic safety margins which potentially can lead to loss-of-control (LOC), controlled-flight-into-terrain (CFIT), or runway excursion/confusion accidents, etc. The goal of this modeling effort is to provide NASA's Aviation Safety Program (AvSP) with a platform capable of assessing the impacts of AvSP technologies and products towards reducing the relative risk of automation related accidents and incidents. In order to do so, a generic framework, capable of mapping both latent and active causal factors leading to automation errors, is developed. Next, the framework is converted into a Bayesian Belief Network model and populated with data gathered from Subject Matter Experts (SMEs). With the insertion of technologies and products, the model provides individual and collective risk reduction acquired by technologies and methodologies developed within AvSP.
Epidemic Processes on Complex Networks : Modelling, Simulation and Algorithms
Van de Bovenkamp, R.
2015-01-01
Local interactions on a graph will lead to global dynamic behaviour. In this thesis we focus on two types of dynamic processes on graphs: the Susceptible-Infected-Susceptible (SIS) virus spreading model, and gossip-style epidemic algorithms. The largest part of this thesis is devoted to the SIS
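The SIS model referred to here has a well-known mean-field form, di/dt = beta*i*(1 - i) - delta*i, whose endemic infected fraction approaches 1 - delta/beta when the effective spreading rate beta/delta exceeds the epidemic threshold of 1. A short Euler-integration sketch (the rates are illustrative):

```python
def sis_prevalence(beta, delta, i0=0.01, dt=0.01, t_end=200.0):
    """Mean-field SIS dynamics di/dt = beta*i*(1-i) - delta*i,
    Euler-integrated; returns the final infected fraction."""
    i = i0
    for _ in range(int(t_end / dt)):
        i += (beta * i * (1.0 - i) - delta * i) * dt
    return i

print(round(sis_prevalence(beta=0.6, delta=0.2), 3))  # above threshold: endemic
print(round(sis_prevalence(beta=0.1, delta=0.2), 3))  # below threshold: dies out
```

On a graph rather than in mean field, the threshold depends on the network structure, which is precisely what makes SIS processes on complex networks interesting to study.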
Unified Model for Generation Complex Networks with Utility Preferential Attachment
International Nuclear Information System (INIS)
Wu Jianjun; Gao Ziyou; Sun Huijun
2006-01-01
In this paper, based on utility preferential attachment, we propose a new unified model to generate different network topologies such as scale-free, small-world and random networks. Moreover, a new network structure, named the super scale network, is found, which has a monopoly characteristic in our simulation experiments. Finally, the characteristics of this new network are given.
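A utility-flavoured preferential attachment rule can be sketched as follows: each incoming node attaches to existing nodes with probability proportional to degree plus a weighted utility term. This growth rule and its parameters are a plausible reading of the abstract, not the paper's exact model.

```python
import random

def utility_preferential_network(n, m, alpha, rng):
    """Grow a network of n nodes; each new node attaches to m distinct
    existing nodes chosen with probability proportional to
    degree + alpha * utility.  `utility` is a random per-node
    attractiveness; alpha = 0 recovers plain degree-preferential
    (Barabasi-Albert-style) attachment.  Illustrative sketch only."""
    utility = [rng.random() for _ in range(n)]
    degree = [0] * n
    edges = []
    for new in range(m, n):
        targets = set()
        existing = list(range(new))
        while len(targets) < m:
            weights = [degree[v] + alpha * utility[v] + 1e-9
                       for v in existing]
            r = rng.random() * sum(weights)
            for v, wv in zip(existing, weights):
                r -= wv
                if r <= 0:
                    targets.add(v)
                    break
        for v in targets:
            edges.append((new, v))
            degree[new] += 1
            degree[v] += 1
    return edges, degree

rng = random.Random(7)
edges, degree = utility_preferential_network(300, 2, alpha=1.0, rng=rng)
print(len(edges), max(degree))   # edge count and hub degree
```

Tuning alpha (and the utility distribution) shifts the resulting topology between degree-driven scale-free growth and utility-driven attachment.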
Satlc model lesson for teaching and learning complex ...
African Journals Online (AJOL)
Environmental chemistry is one of the disciplines of Science. For the goal of the deep learning of the subject, it is indispensable to present perception and models of chemical behaviour explicitly. This can be accomplished by giving careful consideration to the development of concepts such that newer approaches are given ...
Plane answers to complex questions the theory of linear models
Christensen, Ronald
1987-01-01
This book was written to rigorously illustrate the practical application of the projective approach to linear models. To some, this may seem contradictory. I contend that it is possible to be both rigorous and illustrative and that it is possible to use the projective approach in practical applications. Therefore, unlike many other books on linear models, the use of projections and subspaces does not stop after the general theory. They are used wherever I could figure out how to do it. Solving normal equations and using calculus (outside of maximum likelihood theory) are anathema to me. This is because I do not believe that they contribute to the understanding of linear models. I have similar feelings about the use of side conditions. Such topics are mentioned when appropriate and thenceforward avoided like the plague. On the other side of the coin, I just as strenuously reject teaching linear models with a coordinate-free approach. Although Joe Eaton assures me that the issues in complicated problems freq...
Production compilation : A simple mechanism to model complex skill acquisition
Taatgen, N.A.; Lee, F.J.
2003-01-01
In this article we describe production compilation, a mechanism for modeling skill acquisition. Production compilation has been developed within the ACT-Rational (ACT-R; J. R. Anderson, D. Bothell, M. D. Byrne, & C. Lebiere, 2002) cognitive architecture and consists of combining and specializing
Complex Automata: Multi-scale Modeling with Coupled Cellular Automata
Hoekstra, A.G.; Caiazzo, A.; Lorenz, E.; Falcone, J.-L.; Chopard, B.; Hoekstra, A.G.; Kroc, J.; Sloot, P.M.A.
2010-01-01
Cellular Automata (CA) are generally acknowledged to be a powerful way to describe and model natural phenomena [1-3]. There are even tempting claims that nature itself is one big (quantum) information processing system, e.g. [4], and that CA may actually be nature’s way to do this processing [5-7].
Design and analysis of information model hotel complex
Directory of Open Access Journals (Sweden)
Garyaev Nikolai
2016-01-01
Full Text Available The article analyzes innovations in 3D modeling and the development of process design approaches based on visualization of information technology and computer-aided design systems, as well as the problems arising in modern design and the approaches to addressing them.
Ground model and computer complex for designing underground explosions
Energy Technology Data Exchange (ETDEWEB)
Bashurov, V.V.; Vakhrameev, Yu.S.; Dem'yanovskii, S.V.; Ignatenko, V.V.; Simonova, T.V.
1977-01-01
A description is given of a ground model that accounts for large deformations, their irreversibility, loose rock, breakdown, resistance due to internal friction, and other factors. Calculations of the American Sulky explosion and of camouflet detonations of two spaced explosive charges are cited as examples illustrating the capabilities of the design methods and the suitability of the ground state equations for describing underground detonations.
Model order reduction for complex high-tech systems
Lutowska, A.; Hochstenbach, M.E.; Schilders, W.H.A.; Michielsen, B.; Poirier, J.R.
2012-01-01
This paper presents a computationally efficient model order reduction (MOR) technique for interconnected systems. This MOR technique preserves block structures and zero blocks and exploits separate MOR approximations for the individual sub-systems in combination with low rank approximations for the
Wind field near complex terrain using numerical weather prediction model
Chim, Kin-Sang
The PennState/NCAR MM5 model was modified to simulate an idealized flow passing through a 3D obstacle in the Micro-Alpha Scale domain. The obstacles used were an idealized Gaussian obstacle and the real topography of Lantau Island, Hong Kong. The Froude numbers under study ranged from 0.22 to 1.5. Regime diagrams for both the idealized Gaussian obstacle and Lantau Island were constructed. This work is divided into five parts. The first part is the problem definition and a literature review of the related publications. The second part briefly discusses the PennState/NCAR MM5 model and includes a case study of long-range transport. The third part is devoted to the modification and verification of the PennState/NCAR MM5 model on the Micro-Alpha Scale domain. The implementation of the Orlanski (1976) open boundary condition is included, together with the method of single-sounding initialization of the model. Moreover, an upper dissipative layer (Klemp and Lilly, 1978) is implemented in the model. The simulated results are verified against Automatic Weather Station (AWS) data and wind profiler data. Four different types of planetary boundary layer (PBL) parameterization schemes were investigated in order to find the most suitable one for the Micro-Alpha Scale domain in terms of both accuracy and efficiency; the bulk aerodynamic type of PBL parameterization scheme was found to be the most suitable. The free-slip lower boundary condition was also investigated and the simulated results compared with those obtained with friction. The fourth part is the use of the modified PennState/NCAR MM5 model for an idealized flow simulation. The idealized uniform flow used is nonhydrostatic and has a constant Froude number. Sensitivity tests were performed by varying the Froude number, and the regime diagram was constructed. Moreover, the nondimensional drag was found to be useful for regime identification. The model results are also compared with the analytic
Directory of Open Access Journals (Sweden)
Rogers Anne
2007-09-01
Full Text Available Abstract Background: The Normalization Process Model is a theoretical model that assists in explaining the processes by which complex interventions become routinely embedded in health care practice. It offers a framework for process evaluation and also for comparative studies of complex interventions. It focuses on the factors that promote or inhibit the routine embedding of complex interventions in health care practice. Methods: A formal theory structure is used to define the model and its internal causal relations and mechanisms. The model is broken down to show that it is consistent and adequate in generating accurate description, systematic explanation, and the production of rational knowledge claims about the workability and integration of complex interventions. Results: The model explains the normalization of complex interventions by reference to four factors demonstrated to promote or inhibit the operationalization and embedding of complex interventions (interactional workability, relational integration, skill-set workability, and contextual integration). Conclusion: The model is consistent and adequate. Repeated calls for theoretically sound process evaluations in randomized controlled trials of complex interventions, and policy-makers who call for a proper understanding of implementation processes, emphasize the value of conceptual tools like the Normalization Process Model.
Automated modelling of complex refrigeration cycles through topological structure analysis
International Nuclear Information System (INIS)
Belman-Flores, J.M.; Riesco-Avila, J.M.; Gallegos-Munoz, A.; Navarro-Esbri, J.; Aceves, S.M.
2009-01-01
We have developed a computational method for the analysis of refrigeration cycles. The method is well suited for automated analysis of complex refrigeration systems. The refrigerator is specified through a description of flows representing thermodynamic states at system locations; components that modify the thermodynamic state of a flow; and controls that specify flow characteristics at selected points in the diagram. A system of equations is then established for the refrigerator, based on mass, energy and momentum balances for each of the system components. Controls specify the values of certain system variables, thereby reducing the number of unknowns. It is found that the system of equations for the refrigerator may contain a number of redundant or duplicate equations, and therefore further equations are necessary for a full characterization. The number of additional equations is related to the number of loops in the cycle, and this is calculated by a matrix-based topological method. The methodology is demonstrated through an analysis of a two-stage refrigeration cycle.
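The number of additional equations tied to the loops in the cycle can be computed directly from the diagram's graph structure: the cycle rank equals edges − nodes + connected components. A minimal sketch with a hypothetical two-stage cycle graph (component names are illustrative, not the paper's notation):

```python
# Number of additional equations needed equals the number of independent
# loops in the cycle graph: loops = edges - nodes + connected_components.

def independent_loops(nodes, edges):
    """Cycle rank of an undirected graph given as a node list and edge pairs."""
    parent = {n: n for n in nodes}  # union-find for component counting

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    components = len(nodes)
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            components -= 1
    return len(edges) - len(nodes) + components

# Hypothetical two-stage refrigeration cycle: two loops sharing an intercooler.
nodes = ["comp1", "cond", "valve1", "intercooler", "comp2", "valve2", "evap"]
edges = [("comp1", "cond"), ("cond", "valve1"), ("valve1", "intercooler"),
         ("intercooler", "comp1"),                      # high-stage loop
         ("intercooler", "comp2"), ("comp2", "valve2"),
         ("valve2", "evap"), ("evap", "intercooler")]   # low-stage loop

print(independent_loops(nodes, edges))  # 2 independent loops
```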
Modelling of complex heat transfer systems by the coupling method
Energy Technology Data Exchange (ETDEWEB)
Bacot, P.; Bonfils, R.; Neveu, A.; Ribuot, J. (Centre d' Energetique de l' Ecole des Mines de Paris, 75 (France))
1985-04-01
The coupling method proposed here is designed to reduce the size of the matrices which appear in the modelling of heat transfer systems. It consists of isolating the elements that can be modelled separately and, among the input variables of a component, identifying those which couple it to another component. By grouping these variables, one can identify a so-called coupling matrix of reduced size and relate it to the overall system. This matrix allows the calculation of the coupling temperatures as a function of the external stresses and of the state of the overall system at the previous instant. The internal temperatures of the components are then determined from the previous ones. Two examples of applications are presented, one concerning a dwelling unit, and the second a solar water heater.
Sustaining innovation collaboration models for a complex world
Carleton, Tamara
2012-01-01
In many ways, the process of innovation is a constant social dance, where the best dancers thrive by adapting new steps with multiple partners. The systematic and continuous generation of value in any innovation system relies on collaboration between different groups, who must overcome multiple, often competing agendas and needs to work together fruitfully over the long term. Featuring contributions from leading researchers, business leaders, and policymakers representing North America, Europe, India, Africa, and Australasia, this volume investigates different combinations of collaborative arrangements among innovation actors, many of which are changing conventional expectations of institutional relationships. Collectively, the authors demonstrate that no particular combination has emerged as the most dominant, or even resilient, model of innovation. Several authors expand on our understanding of the triple helix model, with both academics and practitioners looking to the quadruple helix (encompassing busines...
Rumor Spreading Model with Trust Mechanism in Complex Social Networks
International Nuclear Information System (INIS)
Wang Ya-Qi; Yang Xiao-Yuan; Han Yi-Liang; Wang Xu-An
2013-01-01
In this paper, to study rumor spreading, we propose a novel susceptible-infected-removed (SIR) model by introducing the trust mechanism. We derive mean-field equations that describe the dynamics of the SIR model on homogeneous networks and inhomogeneous networks. Then a steady-state analysis is conducted to investigate the critical threshold and the final size of the rumor spreading. We show that the introduction of trust mechanism reduces the final rumor size and the velocity of rumor spreading, but increases the critical thresholds on both networks. Moreover, the trust mechanism not only greatly reduces the maximum rumor influence, but also postpones the rumor terminal time, which provides us with more time to take measures to control the rumor spreading. The theoretical results are confirmed by sufficient numerical simulations. (interdisciplinary physics and related areas of science and technology)
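The qualitative effect of the trust mechanism can be reproduced with a toy mean-field integration. The sketch below simply scales the spreading rate by a trust factor; it illustrates the SIR-type dynamics described, not the paper's exact equations:

```python
# Illustrative mean-field integration of an SIR-type rumor model on a
# homogeneous network with mean degree k. The trust mechanism is sketched
# as a factor that reduces the effective spreading rate (an assumption,
# not the paper's derivation).

def final_rumor_size(lam, alpha, trust, k=6, dt=0.01, steps=200000):
    """Euler-integrate dS/dt = -k*lam_eff*S*I, dI/dt = k*lam_eff*S*I - alpha*I."""
    lam_eff = lam * (1.0 - trust)      # trust reduces effective spreading rate
    s, i, r = 0.999, 0.001, 0.0
    for _ in range(steps):
        new_inf = k * lam_eff * s * i * dt
        rec = alpha * i * dt
        s, i, r = s - new_inf, i + new_inf - rec, r + rec
        if i < 1e-9:
            break
    return r  # final fraction that ever spread (and then stifled) the rumor

no_trust = final_rumor_size(lam=0.3, alpha=0.5, trust=0.0)
with_trust = final_rumor_size(lam=0.3, alpha=0.5, trust=0.4)
print(no_trust > with_trust)  # True: trust shrinks the final rumor size
```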
Rumor Spreading Model with Trust Mechanism in Complex Social Networks
Wang, Ya-Qi; Yang, Xiao-Yuan; Han, Yi-Liang; Wang, Xu-An
2013-04-01
In this paper, to study rumor spreading, we propose a novel susceptible-infected-removed (SIR) model by introducing the trust mechanism. We derive mean-field equations that describe the dynamics of the SIR model on homogeneous networks and inhomogeneous networks. Then a steady-state analysis is conducted to investigate the critical threshold and the final size of the rumor spreading. We show that the introduction of trust mechanism reduces the final rumor size and the velocity of rumor spreading, but increases the critical thresholds on both networks. Moreover, the trust mechanism not only greatly reduces the maximum rumor influence, but also postpones the rumor terminal time, which provides us with more time to take measures to control the rumor spreading. The theoretical results are confirmed by sufficient numerical simulations.
Network-oriented modeling addressing complexity of cognitive, affective and social interactions
Treur, Jan
2016-01-01
This book presents a new approach that can be applied to complex, integrated individual and social human processes. It provides an alternative means of addressing complexity, better suited to its purpose than, and effectively complementing, traditional strategies involving isolation and separation assumptions. Network-oriented modeling allows high-level cognitive, affective and social models in the form of (cyclic) graphs to be constructed, which can be automatically transformed into executable simulation models. The modeling format used makes it easy to take into account theories and findings about complex cognitive and social processes, which often involve dynamics based on interrelating cycles. Accordingly, it makes it possible to address complex phenomena such as the integration of emotions within cognitive processes of all kinds, of internal simulations of the mental processes of others, and of social phenomena such as shared understandings and collective actions. A variety of sample models – including ...
Assessment of wear dependence parameters in complex model of cutting tool wear
Antsev, A. V.; Pasko, N. I.; Antseva, N. V.
2018-03-01
This paper addresses the wear dependence of the generic efficient life period of cutting tools, taken as an aggregate of the distribution law of the tool wear rate and the dependence of that law's parameters on the cutting mode, factoring in randomness as exemplified by the complex model of wear. The complex model of wear takes into account the variance of cutting properties within one batch of tools, the variance in machinability within one batch of workpieces, and the stochastic nature of the wear process itself. A technique for assessing the wear dependence parameters in a complex model of cutting tool wear is provided. The technique is supported by a numerical example.
Development of structural model of adaptive training complex in ergatic systems for professional use
Obukhov, A. D.; Dedov, D. L.; Arkhipov, A. E.
2018-03-01
The article considers the structural model of the adaptive training complex (ATC), which reflects the interrelations between the hardware, software and mathematical model of the ATC and describes the processes in this subject area. A description of the main components of the software and hardware complex, their interaction and their functioning within the common system is given. The article also gives a brief description of the mathematical models of personnel activity, the technical system and the influences, whose interactions formalize the regularities of ATC functioning. The study of the main objects of training complexes and the connections between them will make practical implementation of the ATC in ergatic systems for professional use possible.
Modelling of Octahedral Manganese(II) Complexes with Inorganic Ligands: A Problem with Spin-States
Directory of Open Access Journals (Sweden)
Ludwik Adamowicz
2003-08-01
Abstract: Quantum mechanical ab initio UHF, MP2, MC-SCF and DFT calculations with moderate Gaussian basis sets were performed for MnX6 (X = H2O, F-, CN-) octahedral manganese complexes. The correct spin-state of the complexes was obtained only when the counter-ions neutralizing the entire complex were included in the modelling at the B3LYP level of theory.
Stochastic Modelling and Optimization of Complex Infrastructure Systems
DEFF Research Database (Denmark)
Thoft-Christensen, Palle
In this paper it is shown that recent progress in stochastic modelling and optimization, in combination with advanced computer systems, has now made it possible to improve the design and maintenance strategies for infrastructure systems. The paper concentrates on highway networks and single large bridges. The United States has perhaps the largest highway network in the world, with more than 0.5 million highway bridges; see Chase, S.B. 1999. About 40% of these bridges are considered deficient, and more than $50 billion is estimated to be needed to correct the deficiencies; see Roberts, J.E. 2001...
Multiscale Reduced Order Modeling of Complex Multi-Bay Structures
2013-07-01
Only fragments of this abstract survive extraction: the study builds on the 9-bay fuselage panel examined in [28], for which a finite element model was constructed; two alternatives to reduce the computational time for the solution of these problems are explored; representative loading cases span P = 0.98-1.82 lb/in and P = 1.4-2.6 lb/in, the latter baseline having a mean value of 2 lb/in.
Causal Inference and Model Selection in Complex Settings
Zhao, Shandong
Propensity score methods have become part of the standard toolkit for applied researchers who wish to ascertain causal effects from observational data. While they were originally developed for binary treatments, several researchers have proposed generalizations of the propensity score methodology for non-binary treatment regimes. In this article, we first review three main methods that generalize propensity scores in this direction, namely, inverse propensity weighting (IPW), the propensity function (P-FUNCTION), and the generalized propensity score (GPS), along with recent extensions of the GPS that aim to improve its robustness. We compare the assumptions, theoretical properties, and empirical performance of these methods. We propose three new methods that provide robust causal estimation based on the P-FUNCTION and GPS. While our proposed P-FUNCTION-based estimator performs well, we generally advise caution, in that all available methods can be biased by model misspecification and extrapolation. In a related line of research, we consider adjustment for posttreatment covariates in causal inference. Even in a randomized experiment, observations might have different compliance performance under treatment and control assignment. This posttreatment covariate cannot be adjusted for using standard statistical methods. We review the principal stratification framework, which allows for modeling this effect as part of its Bayesian hierarchical models. We generalize the current model to add the possibility of adjusting for pretreatment covariates. We also propose a new estimator of the average treatment effect over the entire population. In a third line of research, we discuss the spectral line detection problem in high-energy astrophysics. We carefully review how this problem can be statistically formulated as a precise hypothesis test with a point null hypothesis, why a usual likelihood ratio test does not apply for a problem of this nature, and a doable fix to correctly
A linear model for flow over complex terrain
Energy Technology Data Exchange (ETDEWEB)
Frank, H P [Risoe National Lab., Wind Energy and Atmospheric Physics Dept., Roskilde (Denmark)
1999-03-01
A linear flow model similar to WAsP or LINCOM has been developed. Major differences are an isentropic temperature equation, which allows internal gravity waves, and vertical advection of the shear of the mean flow. The importance of these effects is illustrated by examples. Resource maps are calculated from a distribution of geostrophic winds and stratification for Pyhaetunturi Fell in northern Finland and Acqua Spruzza in Italy. Stratification becomes important if the inverse Froude number, formulated with the width of the hill, becomes of order one or greater. (au) EU-JOULE-3. 16 refs.
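The stratification criterion quoted at the end is easy to evaluate: the inverse Froude number formulated with the hill width L is N·L/U, where N is the Brunt-Väisälä frequency. A sketch with illustrative numbers only (not the site values from the paper):

```python
# Inverse Froude number N*L/U: values of order one or larger signal that
# stratification matters for flow over a hill of width L. All numbers
# below are illustrative assumptions, not the paper's site data.
import math

def brunt_vaisala(g=9.81, theta=288.0, dtheta_dz=0.003):
    """N = sqrt((g/theta) * d(theta)/dz) for a stably stratified layer."""
    return math.sqrt(g / theta * dtheta_dz)

def inverse_froude(U, L, N):
    """Dimensionless N*L/U, with wind speed U and hill width L."""
    return N * L / U

N = brunt_vaisala()                              # ~0.01 s^-1
weak = inverse_froude(U=15.0, L=500.0, N=N)      # narrow hill, strong wind
strong = inverse_froude(U=5.0, L=20000.0, N=N)   # wide hill, weak wind
print(weak < 1.0 < strong)  # True: only the wide-hill case is stratification-dominated
```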
Andrianakis, I; Vernon, I; McCreesh, N; McKinley, T J; Oakley, J E; Nsubuga, R N; Goldstein, M; White, R G
2017-08-01
Complex stochastic models are commonplace in epidemiology, but their utility depends on their calibration to empirical data. History matching is a (pre)calibration method that has been applied successfully to complex deterministic models. In this work, we adapt history matching to stochastic models, by emulating the variance in the model outputs, and therefore accounting for its dependence on the model's input values. The method proposed is applied to a real complex epidemiological model of human immunodeficiency virus in Uganda with 22 inputs and 18 outputs, and is found to increase the efficiency of history matching, requiring 70% of the time and 43% fewer simulator evaluations compared with a previous variant of the method. The insight gained into the structure of the human immunodeficiency virus model, and the constraints placed on it, are then discussed.
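The core of history matching is an implausibility measure that discards inputs lying too far from the observed data; for stochastic simulators, the emulated stochastic variance joins the variance budget, as the abstract describes. A hedged toy sketch with hypothetical candidates (not the HIV model's emulator):

```python
# Toy history-matching step: an input x is "non-implausible" if
# |z - E[f(x)]| / sqrt(Var_obs + Var_emulator + Var_stochastic) <= cutoff
# (a cutoff of 3 is a common choice). Candidates and variances are
# hypothetical illustrations, not values from the Uganda HIV model.
import math

def implausibility(z, e_f, var_obs, var_emul, var_stoch):
    """Implausibility of one candidate input for one model output."""
    return abs(z - e_f) / math.sqrt(var_obs + var_emul + var_stoch)

def non_implausible(candidates, z, cutoff=3.0):
    keep = []
    for x, e_f, var_emul, var_stoch in candidates:
        if implausibility(z, e_f, 0.5, var_emul, var_stoch) <= cutoff:
            keep.append(x)
    return keep

# Hypothetical candidate inputs: (name, emulated mean, emulator var, stochastic var)
cands = [("x1", 9.8, 0.2, 0.3), ("x2", 15.0, 0.1, 0.2), ("x3", 10.5, 0.4, 0.1)]
print(non_implausible(cands, z=10.0))  # x2 lies too far from the target
```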
Dispersion in the wake of a model industrial complex
International Nuclear Information System (INIS)
Hatcher, R.V.; Meroney, R.N.; Peterka, J.A.; Kothari, K.
1977-06-01
Models (1:200 scale) of the EOCR reactor building and surrounding silo and tank buildings at the Idaho National Engineering Laboratory, Idaho Falls, Idaho were put into the Meteorological Wind Tunnel at Colorado State University for the purpose of studying the effects of building wakes on dispersion. Flow visualization was done and concentration measurements were taken. The test program consisted of systematic releases from ground, building height, and stack height sources with no appreciable plume rise. The program was repeated for cases of moderately unstable, neutral, moderately stable, and stable conditions in the wind tunnel. Results show that the buildings significantly alter the dispersion patterns and the addition of any extra buildings or slight terrain change in the immediate vicinity of the building has a major effect. In the near wake region the effects of stratification were still evident causing slightly higher concentrations for stable conditions and slightly lower for unstable. Current dispersion models are discussed and evaluated that predict concentrations in the building wake region
International Nuclear Information System (INIS)
Reddy, G.R.; Mahajan, S.C.; Suzuki, Kohei
1997-01-01
A nuclear reactor building structure consists of shear walls with complex geometry, beams and columns. The complexity of the structure is explained in the section Introduction. Seismic analysis of the complex reactor building structure using the continuum mechanics approach may produce good results, but this method is very difficult to apply. Hence, the finite element approach is found to be a useful technique for solving the dynamic equations of the reactor building structure. In this approach, a model using finite elements such as brick, plate and shell elements may produce accurate results. However, this model also poses some difficulties, which are explained in the section Modeling Techniques. Therefore, seismic analysis of complex structures is generally carried out using a lumped-mass beam model. This model is preferred because of its simplicity and economy. Nevertheless, mathematical modeling of a shear wall structure as a beam requires specialized skill and a thorough understanding of the structure. For accurate seismic analysis, it is necessary to model the stiffness, mass and damping realistically. In linear seismic analysis, modeling of the mass and damping may pose fewer problems than modeling the stiffness. When used to represent a complex structure, the stiffness of the beam is directly related to the shear wall section properties such as area, shear area and moment of inertia. The various beam models, classified by the method of stiffness evaluation, are also explained in the section Modeling Techniques. In the section Case Studies, the accuracy and simplicity of the beam models are explained. Among the various beam models, the one which evaluates the stiffness using strain-energy equivalence proves to be the simplest and most accurate for modeling the complex shear wall structure. (author)
Where to from here? Future applications of mental models of complex performance
International Nuclear Information System (INIS)
Hahn, H.A.; Nelson, W.R.; Blackman, H.S.
1988-01-01
The purpose of this paper is to raise issues for discussion regarding the applications of mental models in the study of complex performance. Applications for training, expert systems and decision aids, job selection, workstation design, and other complex environments are considered. 1 ref
Cyclodextrin--piroxicam inclusion complexes: analyses by mass spectrometry and molecular modelling
Gallagher, Richard T.; Ball, Christopher P.; Gatehouse, Deborah R.; Gates, Paul J.; Lobell, Mario; Derrick, Peter J.
1997-11-01
Mass spectrometry has been used to investigate the natures of non-covalent complexes formed between the anti-inflammatory drug piroxicam and α-, β- and γ-cyclodextrins. Energies of these complexes have been calculated by means of molecular modelling. There is a correlation between peak intensities in the mass spectra and the calculated energies.
Mesoscale modeling: solving complex flows in biology and biotechnology.
Mills, Zachary Grant; Mao, Wenbin; Alexeev, Alexander
2013-07-01
Fluids are involved in practically all physiological activities of living organisms. However, biological and biorelated flows are hard to analyze due to the inherent combination of interdependent effects and processes that occur on a multitude of spatial and temporal scales. Recent advances in mesoscale simulations enable researchers to tackle problems that are central for the understanding of such flows. Furthermore, computational modeling effectively facilitates the development of novel therapeutic approaches. Among other methods, dissipative particle dynamics and the lattice Boltzmann method have become increasingly popular during recent years due to their ability to solve a large variety of problems. In this review, we discuss recent applications of these mesoscale methods to several fluid-related problems in medicine, bioengineering, and biotechnology.
Multistability and complex dynamics in a simple discrete economic model
International Nuclear Information System (INIS)
Peng Mingshu; Jiang Zhonghao; Jiang Xiaoxia; Hu Jiping; Qu Youli
2009-01-01
In this paper, we propose a generalized Cournot duopoly model with Z2 symmetry. We demonstrate that cost functions incorporating an interfirm externality lead to a system of coupled one-dimensional maps. In the situation where agents take turns, we find in an analytic way that multiple unstable/stable period-2 cycles or synchronized/asynchronized periodic orbits coexist. Coupled one-dimensional chaos can be observed. In the more general situation, where agents move simultaneously, a closer analysis reveals some well-known local and global bifurcations which typically occur in two-parameter families of two-dimensional discrete-time dynamical systems, including codimension-one (fold, flip, Neimark-Sacker) bifurcations, codimension-two (fold/flip, 1:2 resonance, 1:3 resonance and 1:4 resonance) bifurcations, and heteroclinic and homoclinic bifurcations, etc. Multistability, including the coexistence of synchronized/asynchronized solutions, is also discussed.
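The reduction to coupled one-dimensional maps can be illustrated with a generic Kopel-style duopoly map. The paper's interfirm-externality cost functions are not reproduced here, so the map form and parameters below are assumptions chosen only to exhibit a coexisting synchronized period-2 cycle:

```python
# Generic Kopel-style duopoly map (illustrative, not the paper's model):
# each firm's next output is a quadratic best-response to the rival's output.

def kopel_step(x, y, mu=3.2, rho=1.0):
    """One simultaneous adjustment step of the coupled quadratic map."""
    xn = (1 - rho) * x + rho * mu * y * (1 - y)
    yn = (1 - rho) * y + rho * mu * x * (1 - x)
    return xn, yn

def orbit(x, y, n=2000):
    """Iterate the map n times to settle onto an attractor."""
    for _ in range(n):
        x, y = kopel_step(x, y)
    return x, y

# Starting symmetric, the firms lock onto a synchronized period-2 cycle
# (the stable 2-cycle of the logistic map at mu = 3.2).
x, y = orbit(0.3, 0.3)
x2, y2 = kopel_step(*kopel_step(x, y))
print(abs(x - x2) < 1e-6 and abs(x - y) < 1e-6)  # True: synchronized 2-cycle
```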
General classical solutions of the complex Grassmannian and CP^(N-1) sigma models
International Nuclear Information System (INIS)
Sasaki, Ryu.
1983-05-01
General classical solutions are constructed for the complex Grassmannian non-linear sigma models in two Euclidean dimensions in terms of holomorphic functions. The Grassmannian sigma models are a simple generalization of the well-known CP^(N-1) model in two dimensions, and they share various interesting properties: existence of (anti-)instantons, an infinite number of conserved quantities and complete integrability. (author)
Effects of Task Performance and Task Complexity on the Validity of Computational Models of Attention
Koning, L. de; Maanen, P.P. van; Dongen, K. van
2008-01-01
Computational models of attention can be used as a component of decision support systems. For accurate support, a computational model of attention has to be valid and robust. The effects of task performance and task complexity on the validity of three different computational models of attention were
Modelling the dynamics of the health-production complex in livestock herds
DEFF Research Database (Denmark)
Sørensen, J.T.; Enevoldsen, Carsten
1992-01-01
This paper reviews how the dynamics of the health-production complex in livestock herds is mimicked by livestock herd simulation models. Twelve models simulating the dynamics of dairy, beef, sheep and sow herds were examined. All models basically included options to alter input and output...
Developing and Modeling Complex Social Interventions: Introducing the Connecting People Intervention
Webber, Martin; Reidy, Hannah; Ansari, David; Stevens, Martin; Morris, David
2016-01-01
Objectives: Modeling the processes involved in complex social interventions is important in social work practice, as it facilitates their implementation and translation into different contexts. This article reports the process of developing and modeling the connecting people intervention (CPI), a model of practice that supports people with mental…
A comprehensive model of anaerobic bioconversion of complex substrates to biogas
DEFF Research Database (Denmark)
Angelidaki, Irini; Ellegaard, Lars; Ahring, Birgitte Kiær
1999-01-01
A dynamic model describing the anaerobic degradation of complex material, and codigestion of different types of wastes, was developed based on a model previously described (Angelidaki et al., 1993). in the model, the substrate is described by its composition of basic organic components, i.e., car...
Conceptual and Developmental Analysis of Mental Models: An Example with Complex Change Problems.
Poirier, Louise
Defining better implicit models of children's actions in a series of situations is of paramount importance to understanding how knowledge is constructed. The objective of this study was to analyze the implicit mental models used by children in complex change problems to understand the stability of the models and their evolution with the child's…
A General Linear Model (GLM) was used to evaluate the deviation of predicted values from expected values for a complex environmental model. For this demonstration, we used the default level interface of the Regional Mercury Cycling Model (R-MCM) to simulate epilimnetic total mer...
Czech Academy of Sciences Publication Activity Database
Veselská, V.; Fajgar, Radek; Číhalová, S.; Bolanz, R.M.; Göttlicher, J.; Steininger, R.; Siddique, J.A.; Komárek, M.
2016-01-01
Roč. 318, NOV 15 (2016), s. 433-442 ISSN 0304-3894 Institutional support: RVO:67985858 Keywords : surface complexation modeling * chromate * soil minerals Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 6.065, year: 2016
Virtual enterprise model for the electronic components business in the Nuclear Weapons Complex
Energy Technology Data Exchange (ETDEWEB)
Ferguson, T.J.; Long, K.S.; Sayre, J.A. [Sandia National Labs., Albuquerque, NM (United States); Hull, A.L. [Sandia National Labs., Livermore, CA (United States); Carey, D.A.; Sim, J.R.; Smith, M.G. [Allied-Signal Aerospace Co., Kansas City, MO (United States). Kansas City Div.
1994-08-01
The electronic components business within the Nuclear Weapons Complex spans organizational and Department of Energy contractor boundaries. An assessment of the current processes indicates a need for fundamentally changing the way electronic components are developed, procured, and manufactured. A model is provided based on a virtual enterprise that recognizes distinctive competencies within the Nuclear Weapons Complex and at the vendors. The model incorporates changes that reduce component delivery cycle time and improve cost effectiveness while delivering components of the appropriate quality.
Complex Behavior in an Integrate-and-Fire Neuron Model Based on Small World Networks
International Nuclear Information System (INIS)
Lin Min; Chen Tianlun
2005-01-01
Based on our previously proposed pulse-coupled integrate-and-fire neuron model on small-world networks, we investigate the complex behavior of electroencephalographic (EEG)-like activities produced by such a model. We find that EEG-like activities have obvious chaotic characteristics. We also analyze the complex behaviors of EEG-like signals using spectral analysis, reconstruction of the phase space, the correlation dimension, and so on.
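A minimal, stdlib-only sketch of a pulse-coupled integrate-and-fire population on a small-world ring conveys the model class; the rewiring scheme and all parameters are illustrative assumptions, not those of the cited model:

```python
# Pulse-coupled integrate-and-fire neurons on a Watts-Strogatz-style ring
# (illustrative sketch). Each step: add constant drive, fire at threshold,
# reset, and deliver a pulse to network neighbours. The per-step firing
# count plays the role of a global EEG-like activity trace.
import random

def small_world(n, k=2, p=0.1, rng=None):
    """Ring of n nodes, k neighbours per side, edges rewired with prob p."""
    rng = rng or random.Random(0)
    nbrs = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            j = (i + d) % n
            if rng.random() < p:          # rewire this edge to a random node
                j = rng.randrange(n)
                while j == i:
                    j = rng.randrange(n)
            nbrs[i].add(j)
            nbrs[j].add(i)
    return nbrs

def simulate(nbrs, steps=200, drive=0.05, pulse=0.4, threshold=1.0, rng=None):
    rng = rng or random.Random(1)
    v = {i: rng.random() for i in nbrs}   # membrane potentials
    activity = []
    for _ in range(steps):
        fired = set()
        for i in v:
            v[i] += drive                  # constant external drive
            if v[i] >= threshold:
                fired.add(i)
        for i in fired:
            v[i] = 0.0                     # reset after firing
            for j in nbrs[i]:              # pulse coupling to neighbours
                if j not in fired:
                    v[j] += pulse
        activity.append(len(fired))
    return activity

net = small_world(100)
activity = simulate(net)                   # EEG-like global activity trace
print(len(activity), max(activity) > 0)
```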
Munoz-Carpena, R.; Muller, S. J.; Chu, M.; Kiker, G. A.; Perz, S. G.
2014-12-01
Model complexity resulting from the need to integrate environmental system components cannot be overstated. In particular, additional emphasis is urgently needed on rational approaches to guide decision making through the uncertainties surrounding the integrated system across decision-relevant scales. However, in spite of the difficulties that the consideration of modeling uncertainty represents for the decision process, it should not be avoided, or the value of and science behind the models will be undermined. These two issues, i.e., the need for coupled models that can answer the pertinent questions and the need for models that do so with sufficient certainty, are the key indicators of a model's relevance. Model relevance is inextricably linked with model complexity. Although model complexity has advanced greatly in recent years, there has been little work to rigorously characterize the threshold of relevance in integrated and complex models. Formally assessing the relevance of the model in the face of increasing complexity would be valuable, because there is growing unease among developers and users of complex models about the cumulative effects of various sources of uncertainty on model outputs. In particular, this issue has prompted doubt over whether the considerable effort going into further elaborating complex models will in fact yield the expected payback. New approaches have been proposed recently to evaluate the uncertainty-complexity-relevance modeling trilemma (Muller, Muñoz-Carpena and Kiker, 2011) by incorporating state-of-the-art global sensitivity and uncertainty analysis (GSA/UA) in every step of the model development, so as to quantify not only the uncertainty introduced by the addition of new environmental components, but also the effect that these new components have on existing components (interactions, non-linear responses). Outputs from the analysis can also be used to quantify system resilience (stability, alternative states, thresholds or tipping
International Nuclear Information System (INIS)
Hall, Stephen; Roelich, Katy
2016-01-01
This research investigates the new opportunities that business model innovations are creating in electricity supply markets at the sub-national scale. These local supply business models can offer significant benefits to the electricity system, but also generate economic, social, and environmental values that are not well accounted for in current policy or regulation. This paper uses the UK electricity supply market to investigate new business models which rely on more complex value propositions than the incumbent utility model. Nine archetypal local supply business models are identified and their value propositions, value capture methods, and barriers to market entry are analysed. This analysis defines 'complex value' as a key concept in understanding business model innovation in the energy sector. The process of complex value identification poses a challenge to energy researchers, commercial firms and policymakers in liberalised markets; to investigate the opportunities for system efficiency and diverse outcomes that new supplier business models can offer to the electricity system. - Highlights: •Business models of energy supply markets shape energy transitions. •The British system misses four opportunities of local electricity supply. •Nine new business model archetypes of local supply are analysed. •New electricity business models have complex value propositions. •A process for policy response to business model innovation is presented.
A structural model of the E. coli PhoB Dimer in the transcription initiation complex
Directory of Open Access Journals (Sweden)
Tung Chang-Shung
2012-03-01
Abstract Background There exist > 78,000 experimentally determined protein and/or nucleic acid structures. Only a small portion of these structures corresponds to those of protein complexes. While homology modeling is able to exploit knowledge-based potentials of side-chain rotamers and backbone motifs to infer structures for new proteins, no such general method exists to extend our understanding of protein interaction motifs to novel protein complexes. Results We use a Motif Binding Geometries (MBG) approach to infer the structure of a protein complex from the database of complexes of homologous proteins taken from other contexts (such as the helix-turn-helix motif binding double-stranded DNA), and demonstrate its utility on one of the more important regulatory complexes in biology, that of the RNA polymerase initiating transcription under conditions of phosphate starvation. The modeled PhoB/RNAP/σ-factor/DNA complex is stereochemically reasonable, has sufficient interfacial Solvent Excluded Surface Areas (SESAs) to provide adequate binding strength, is physically meaningful for transcription regulation, and is consistent with a variety of known experimental constraints. Conclusions Based on a straightforward and easy to comprehend concept, "proteins and protein domains that fold similarly could interact similarly", a structural model of the PhoB dimer in the transcription initiation complex has been developed. This approach could be extended to enable structural modeling and prediction of other bio-molecular complexes. Just as models of individual proteins provide insight into molecular recognition, catalytic mechanism, and substrate specificity, models of protein complexes will provide understanding of the combinatorial rules of cellular regulation and signaling.
Li, Shu-Bin; Cao, Dan-Ni; Dang, Wen-Xiu; Zhang, Lin
As a new cross-discipline, complexity science has penetrated into every field of economy and society. With the arrival of big data, research in complexity science has reached its summit again. In recent years, it has offered a new perspective on traffic control through complex networks theory. The interaction of various kinds of information in a traffic system forms a huge complex system. A new mesoscopic traffic flow model is improved by incorporating variable speed limits (VSL), and the simulation process is designed based on complex networks theory combined with the proposed model. This paper studies the effect of VSL on dynamic traffic flow and then analyzes the optimal VSL control strategy in different network topologies. The conclusions of this research are useful for putting forward reasonable transportation plans and developing effective traffic management and control measures.
Mathematical modeling of complexing in the scandium-salicylic acid-isoamyl alcohol system
International Nuclear Information System (INIS)
Evseev, A.M.; Smirnova, N.S.; Fadeeva, V.I.; Tikhomirova, T.I.; Kir'yanov, Yu.A.
1984-01-01
Mathematical modeling of an equilibrium multicomponent physicochemical system for the extraction of Sc salicylate complexes by isoamyl alcohol was conducted. To calculate the equilibrium concentrations of Sc complexes of differing content and composition, a system of nonlinear algebraic mass-balance equations was solved. Experimental data on the extraction of Sc salicylates by isoamyl alcohol versus the pH of the solution, at constant Sc concentration and different concentrations of salicylate ions, were used to construct the mathematical model. The stability constants of the ScHSal2+, Sc(HSal)3, and ScOH(HSal)2 complexes were calculated
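The core computational step in the record above is solving coupled nonlinear mass-balance equations for equilibrium concentrations. The Sc-salicylate system involves several complexes; as a hedged illustration only, the sketch below solves the simplest hypothetical case, a single 1:1 complex M + L ⇌ ML with stability constant K, by bisection on the free metal concentration. All numbers and names here are invented for the example.

```python
# Hypothetical one-complex speciation: m + K*m*l = M_total, l + K*m*l = L_total.
# Subtracting the two balances gives l = m + (L_total - M_total), reducing the
# problem to a single nonlinear equation f(m) = 0 bracketed on [0, M_total].

def solve_equilibrium(K, M_total, L_total, iters=200):
    """Return free [M], free [L], and complex [ML] at equilibrium."""
    def f(m):
        l = m + (L_total - M_total)
        return m + K * m * l - M_total   # residual of the metal mass balance

    lo, hi = 0.0, M_total                # f(lo) <= 0 and f(hi) >= 0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    m = 0.5 * (lo + hi)
    l = m + (L_total - M_total)
    ml = K * m * l                       # complex concentration from K
    return m, l, ml

m, l, ml = solve_equilibrium(K=100.0, M_total=0.01, L_total=0.02)
```

Real multicomponent systems replace the single residual with a vector of balances (one per component) and a Newton-type solver, but the structure of the problem is the same.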
Energy Technology Data Exchange (ETDEWEB)
Yun, Sung Mi; Kang, Christina S. [Department of Environmental Engineering, Konkuk University, 120 Neungdong-ro, Gwangjin-gu, Seoul 143-701 (Korea, Republic of); Kim, Jonghwa [Department of Industrial Engineering, Konkuk University, 120 Neungdong-ro, Gwangjin-gu, Seoul 143-701 (Korea, Republic of); Kim, Han S., E-mail: hankim@konkuk.ac.kr [Department of Environmental Engineering, Konkuk University, 120 Neungdong-ro, Gwangjin-gu, Seoul 143-701 (Korea, Republic of)
2015-04-28
Highlights: • Remediation of complex contaminated soil achieved by sequential soil flushing. • Removal of Zn, Pb, and heavy petroleum oils using 0.05 M citric acid and 2% SDS. • Unified desorption distribution coefficients modeled and experimentally determined. • Nonequilibrium models for the transport behavior of complex contaminants in soils. - Abstract: The removal of heavy metals (Zn and Pb) and heavy petroleum oils (HPOs) from a soil with complex contamination was examined by soil flushing. Desorption and transport behaviors of the complex contaminants were assessed by batch and continuous flow reactor experiments and through modeling simulations. Flushing a one-dimensional flow column packed with complex contaminated soil sequentially with citric acid then a surfactant resulted in the removal of 85.6% of Zn, 62% of Pb, and 31.6% of HPO. The desorption distribution coefficients, K_Ubatch and K_Lbatch, converged to constant values as C_e increased. An equilibrium model (ADR) and nonequilibrium models (TSNE and TRNE) were used to predict the desorption and transport of complex contaminants. The nonequilibrium models demonstrated better fits with the experimental values obtained from the column test than the equilibrium model. The ranges of K_Ubatch and K_Lbatch were very close to those of K_Ufit and K_Lfit determined from model simulations. The parameters (R, β, ω, α, and f) determined from model simulations were useful for characterizing the transport of contaminants within the soil matrix. The results of this study provide useful information for the operational parameters of the flushing process for soils with complex contamination.
Model-based identification and use of task complexity factors of human integrated systems
International Nuclear Information System (INIS)
Ham, Dong-Han; Park, Jinkyun; Jung, Wondea
2012-01-01
Task complexity is one of the conceptual constructs that are critical to explaining and predicting human performance in human integrated systems. A basic approach to evaluating the complexity of tasks is to identify task complexity factors and measure them. Although a great number of task complexity factors have been studied, there is still a lack of conceptual frameworks for identifying and organizing them analytically that can be used generally, irrespective of the types of domains and tasks. This study proposes a model-based approach to identifying and using task complexity factors, which has two facets—the design aspects of a task and complexity dimensions. Three levels of design abstraction, namely the functional, behavioral, and structural aspects of a task, characterize the design aspect of a task. The behavioral aspect is further classified into five cognitive processing activity types. The complexity dimensions explain task complexity from different perspectives: size, variety, and order/organization. Twenty-one task complexity factors are identified by combining the attributes of each facet. Identification and evaluation of task complexity factors based on this model is believed to give insights for improving the design quality of tasks. This model of complexity factors can also be used as a referential framework for allocating tasks and designing information aids. The proposed approach is applied to procedure-based tasks of nuclear power plants (NPPs) as a case study to demonstrate its use. Last, we compare the proposed approach with other studies and then suggest some future research directions.
Dos Passos Menezes, Paula; Dos Santos, Polliana Barbosa Pereira; Dória, Grace Anne Azevedo; de Sousa, Bruna Maria Hipólito; Serafini, Mairim Russo; Nunes, Paula Santos; Quintans-Júnior, Lucindo José; de Matos, Iara Lisboa; Alves, Péricles Barreto; Bezerra, Daniel Pereira; Mendonça Júnior, Francisco Jaime Bezerra; da Silva, Gabriel Francisco; de Aquino, Thiago Mendonça; de Souza Bento, Edson; Scotti, Marcus Tullius; Scotti, Luciana; de Souza Araujo, Adriano Antunes
2017-02-01
This study evaluated three different methods for the formation of an inclusion complex between alpha- and beta-cyclodextrin (α- and β-CD) and limonene (LIM) with the goal of improving the physicochemical properties of limonene. The study samples were prepared through physical mixing (PM), paste complexation (PC), and slurry complexation (SC) methods in the molar ratio of 1:1 (cyclodextrin:limonene). The complexes prepared were evaluated with thermogravimetry/derivative thermogravimetry, infrared spectroscopy, X-ray diffraction, complexation efficiency through gas chromatography/mass spectrometry analyses, molecular modeling, and nuclear magnetic resonance. The results showed that the physical mixing procedure did not produce complexation, but the paste and slurry methods produced inclusion complexes, which demonstrated interactions outside of the cavity of the CDs. However, the paste obtained with β-cyclodextrin did not demonstrate complexation in the gas chromatographic technique because, after extraction, most of the limonene was either surface-adsorbed by β-cyclodextrin or volatilized during the procedure. We conclude that paste complexation and slurry complexation are effective and economic methods to improve the physicochemical character of limonene and could have important applications in pharmacological activities in terms of an increase in solubility.
The complex formation-partition and partition-association models of solvent extraction of ions
International Nuclear Information System (INIS)
Siekierski, S.
1976-01-01
Two models of the extraction process have been proposed. In the first model it is assumed that the partitioning neutral species is at first formed in the aqueous phase and then transferred into the organic phase. The second model is based on the assumption that equivalent amounts of cations are at first transferred from the aqueous into the organic phase and then associated to form a neutral molecule. The role of the solubility parameter in extraction and the relation between the solubility of liquid organic substances in water and the partition of complexes have been discussed. The extraction of simple complexes and complexes with organic ligands has been discussed using the first model. Partition coefficients have been calculated theoretically and compared with experimental values in some very simple cases. The extraction of ion pairs has been discussed using the partition-association model and the concept of single-ion partition coefficients. (author)
Applicability of surface complexation modelling in TVO's studies on sorption of radionuclides
International Nuclear Information System (INIS)
Carlsson, T.
1994-03-01
The report focuses on the possibility of applying surface complexation theories to the conditions at a potential repository site in Finland and of doing proper experimental work in order to determine the necessary constants for the models. The report provides background information on: (1) what types of experiments should be carried out in order to produce data for surface complexation modelling of sorption phenomena under potential Finnish repository conditions, and (2) how to design and perform such experiments properly, in order to gather data, develop models, or both. The report does not describe in detail how proper surface complexation experiments or modelling should be carried out. The work contains several examples of information that may be valuable in both modelling and experimental work. (51 refs., 6 figs., 4 tabs.)
Towards a Unified Theory of Health-Disease: I. Health as a complex model-object
Directory of Open Access Journals (Sweden)
Naomar Almeida-Filho
2013-06-01
Theory building is one of the most crucial challenges faced by basic, clinical and population research, which form the scientific foundations of health practices in contemporary societies. The objective of this study is to propose a Unified Theory of Health-Disease as a conceptual tool for modeling health-disease-care in the light of complexity approaches. With this aim, the epistemological basis of theoretical work in the health field and concepts from complexity theory as they relate to health problems are discussed. Secondly, the concepts of model-object, multi-planes of occurrence, modes of health and the disease-illness-sickness complex are introduced and integrated into a unified theoretical framework. Finally, in the light of recent epistemological developments, the concept of Health-Disease-Care Integrals is updated as a complex reference object fit for modeling health-related processes and phenomena.
Complex data modeling and computationally intensive methods for estimation and prediction
Secchi, Piercesare; Advances in Complex Data Modeling and Computational Methods in Statistics
2015-01-01
The book is addressed to statisticians working at the forefront of the statistical analysis of complex and high dimensional data and offers a wide variety of statistical models, computer intensive methods and applications: network inference from the analysis of high dimensional data; new developments for bootstrapping complex data; regression analysis for measuring the downside reputational risk; statistical methods for research on the human genome dynamics; inference in non-euclidean settings and for shape data; Bayesian methods for reliability and the analysis of complex data; methodological issues in using administrative data for clinical and epidemiological research; regression models with differential regularization; geostatistical methods for mobility analysis through mobile phone data exploration. This volume is the result of a careful selection among the contributions presented at the conference "S.Co.2013: Complex data modeling and computationally intensive methods for estimation and prediction" held...
Directory of Open Access Journals (Sweden)
Mana eKatano
2016-02-01
Efficient use of seed nutrient reserves is crucial for germination and establishment of plant seedlings. Mobilizing seed oil reserves in Arabidopsis involves β-oxidation, the glyoxylate cycle, and gluconeogenesis, which provide essential energy and the carbon skeletons needed to sustain seedling growth until photoautotrophy is acquired. We demonstrated that H+-PPase activity is required for gluconeogenesis. Lack of H+-PPase in fugu5 mutants increases cytosolic pyrophosphate (PPi) levels, which partially reduces de novo sucrose synthesis and inhibits cell division. In contrast, post-mitotic cell expansion in cotyledons was unusually enhanced, a phenotype called compensation. Therefore, it appears that PPi inhibits several cellular functions, including cell cycling, to trigger compensated cell enlargement (CCE). Here, we mutagenized fugu5-1 seeds with 12C6+ heavy-ion irradiation and screened for mutations that restrain CCE to gain insight into the genetic pathway(s) involved in CCE. We isolated A#3-1, in which cell size was severely reduced, but cell number remained similar to that of the original fugu5-1. Moreover, cell number decreased in the A#3-1 single mutant (A#3-1sm), similar to that of fugu5-1, but cell size was almost equal to that of the wild type. Surprisingly, the A#3-1 mutation did not affect CCE in other compensation-exhibiting mutant backgrounds, such as an3-4 and fugu2-1/fas1-6. Subsequent map-based cloning combined with genome sequencing and HRM curve analysis identified enoyl-CoA hydratase 2 (ECH2) as the causal gene of A#3-1. The above phenotypes were consistently observed in the ech2-1 allele, and supplying sucrose restored the morphological and cellular phenotypes in fugu5-1, ech2-1, A#3-1sm, fugu5-1 ech2-1 and A#3-1;fugu5-1. Taken together, these results suggest that defects in either H+-PPase or ECH2 compromise cell proliferation due to defects in mobilizing stored lipids. In contrast, ECH2 alone likely promotes CCE during the post-mitotic cell
Marsh, C.; Pomeroy, J. W.; Wheater, H. S.
2017-12-01
Accurate management of water resources is necessary for social, economic, and environmental sustainability worldwide. In locations with seasonal snowcovers, the accurate prediction of these water resources is further complicated due to frozen soils, solid-phase precipitation, blowing snow transport, and snowcover-vegetation-atmosphere interactions. Complex process interactions and feedbacks are a key feature of hydrological systems and may result in emergent phenomena, i.e., the arising of novel and unexpected properties within a complex system. One example is the feedback associated with blowing snow redistribution, which can lead to drifts that cause locally-increased soil moisture, thus increasing plant growth that in turn subsequently impacts snow redistribution, creating larger drifts. Attempting to simulate these emergent behaviours is a significant challenge, however, and there is concern that process conceptualizations within current models are too incomplete to represent the needed interactions. An improved understanding of the role of emergence in hydrological systems often requires high resolution distributed numerical hydrological models that incorporate the relevant process dynamics. The Canadian Hydrological Model (CHM) provides a novel tool for examining cold region hydrological systems. Key features include efficient terrain representation, allowing simulations at various spatial scales, reduced computational overhead, and a modular process representation allowing for an alternative-hypothesis framework. Using both physics-based and conceptual process representations sourced from long term process studies and the current cold regions literature allows for comparison of process representations and importantly, their ability to produce emergent behaviours. Examining the system in a holistic, process-based manner can hopefully derive important insights and aid in development of improved process representations.
Surface complexation modelling applied to the sorption of nickel on silica
International Nuclear Information System (INIS)
Olin, M.
1995-10-01
The modelling, based on a mechanistic approach, of a sorption experiment is presented in the report. The system chosen for the experiments (nickel + silica) is modelled by using literature values for some parameters, the remainder being fitted to existing experimental results. All calculations are performed with HYDRAQL, a model designed especially for surface complexation modelling. Almost all of the calculations are made using the Triple-Layer Model (TLM) approach, which appeared to be sufficiently flexible for the silica system. The report includes a short description of mechanistic sorption models, input data, experimental results and modelling results (mostly graphical presentations). (13 refs., 40 figs., 4 tabs.)
International Nuclear Information System (INIS)
Grizzi, Fabio; Russo, Carlo; Colombo, Piergiuseppe; Franceschini, Barbara; Frezza, Eldo E; Cobos, Everardo; Chiriva-Internati, Maurizio
2005-01-01
Modeling the complex development and growth of tumor angiogenesis using mathematics and biological data is a burgeoning area of cancer research. Architectural complexity is the main feature of every anatomical system, including organs, tissues, cells and sub-cellular entities. The vascular system is a complex network whose geometrical characteristics cannot be properly defined using the principles of Euclidean geometry, which is only capable of interpreting regular and smooth objects that are almost impossible to find in Nature. However, fractal geometry is a more powerful means of quantifying the spatial complexity of real objects. This paper introduces the surface fractal dimension (D_s) as a numerical index of the two-dimensional (2-D) geometrical complexity of tumor vascular networks, and their behavior during computer-simulated changes in vessel density and distribution. We show that D_s significantly depends on the number of vessels and their pattern of distribution. This demonstrates that the quantitative evaluation of the 2-D geometrical complexity of tumor vascular systems can be useful not only to measure its complex architecture, but also to model its development and growth. Studying the fractal properties of neovascularity induces reflections upon the real significance of the complex form of branched anatomical structures, in an attempt to define more appropriate methods of describing them quantitatively. This knowledge can be used to predict the aggressiveness of malignant tumors and design compounds that can halt the process of angiogenesis and influence tumor growth
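The paper's surface fractal dimension D_s is estimated from images of vascular networks; the standard textbook estimator for the fractal dimension of a 2-D point set is box counting, shown here as a minimal sketch (not the authors' exact procedure). The test pattern, grid sizes, and point set are invented for illustration; a straight line should recover a dimension close to 1.

```python
# Box-counting sketch: estimate fractal dimension as the least-squares slope
# of log N(eps) versus log(1/eps), where N(eps) counts the grid boxes of
# side eps that contain at least one point of the pattern.
import math

def box_counting_dimension(points, sizes):
    xs, ys = [], []
    for eps in sizes:
        boxes = {(int(x // eps), int(y // eps)) for x, y in points}
        xs.append(math.log(1.0 / eps))
        ys.append(math.log(len(boxes)))
    # ordinary least-squares slope of ys against xs
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

# A straight line is a 1-D object, so its box-counting dimension is ~1;
# a filled square would give ~2.
line = [(i, i) for i in range(256)]
d_line = box_counting_dimension(line, sizes=[1, 2, 4, 8, 16, 32, 64])
```

For a real vessel image one would use the set of foreground pixel coordinates as `points` and a geometric ladder of box sizes, exactly as above.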
Estimating the complexity of 3D structural models using machine learning methods
Mejía-Herrera, Pablo; Kakurina, Maria; Royer, Jean-Jacques
2016-04-01
Quantifying the complexity of 3D geological structural models can play a major role in natural resources exploration surveys, in predicting environmental hazards, or in forecasting fossil resources. This paper proposes a structural complexity index which can be used to help define the degree of effort necessary to build a 3D model for a given degree of confidence, and also to identify locations where additional effort is required to meet a given acceptable risk of uncertainty. In this work, it is considered that the structural complexity index can be estimated using machine learning methods on raw geo-data. More precisely, the metrics for measuring the complexity can be approximated as the degree of difficulty associated with predicting the distribution of geological objects, calculated from partial information on the actual structural distribution of materials. The proposed methodology is tested on a set of 3D synthetic structural models for which the degree of effort during their building is assessed using various parameters (such as the number of faults, the number of parts in a surface object, the number of borders, ...), the rank of geological elements contained in each model, and, finally, their level of deformation (folding and faulting). The results show how the estimated complexity of a 3D model can be approximated by the quantity of partial data necessary to reproduce the actual 3D model at a given precision, without error, using machine learning algorithms.
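The idea above, "complexity is how hard it is to predict the structure from partial observations", can be sketched in miniature. The toy below (everything in it is invented: the two synthetic 'geological models', the grid, the sample size, the 1-nearest-neighbour predictor) measures held-out prediction error from a sparse sample; the checkerboard pattern, being structurally more complex than two flat layers, should yield the higher error.

```python
# Toy complexity proxy: held-out error of a 1-NN classifier trained on a
# sparse random sample of each synthetic 2-D 'structural model'.
import random

def layered(x, y):        # simple model: two flat layers
    return 1 if y > 0.5 else 0

def checkerboard(x, y):   # complex model: 8x8 alternating blocks
    return (int(x * 8) + int(y * 8)) % 2

def prediction_error(label, n_train=40, seed=0):
    rng = random.Random(seed)
    grid = [((i + 0.5) / 20, (j + 0.5) / 20)
            for i in range(20) for j in range(20)]
    rng.shuffle(grid)
    train, test = grid[:n_train], grid[n_train:]
    errors = 0
    for x, y in test:
        # predict from the nearest sparse observation ('drill hole')
        nx, ny = min(train, key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)
        if label(nx, ny) != label(x, y):
            errors += 1
    return errors / len(test)

simple_err = prediction_error(layered)
complex_err = prediction_error(checkerboard)
```

The paper's method operates on real raw geo-data and richer learners, but the complexity index plays the same role as `complex_err` here: more partial data is needed before the error drops to an acceptable level.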
International Nuclear Information System (INIS)
Wittek, P.
1985-09-01
Atmospheric dispersion models are reviewed with respect to their application to the consequence assessment within risk studies for nuclear power plants located in complex terrain. This review comprises: seven straight-line Gaussian models, which have been modified in order to take into account in a crude way terrain elevations, enhanced turbulence and some other effects; three trajectory/puff models, which can handle wind direction changes and the resulting plume or puff trajectories; five three-dimensional wind field models, which calculate the wind field in complex terrain for application in a grid model; three grid models; and one Monte-Carlo model. The main features of the computer codes are described, along with some information on the necessary computer time and storage capacity. (orig.)
New approaches in agent-based modeling of complex financial systems
Chen, Ting-Ting; Zheng, Bo; Li, Yan; Jiang, Xiong-Fei
2017-12-01
Agent-based modeling is a powerful simulation technique to understand the collective behavior and microscopic interaction in complex financial systems. Recently, the concept for determining the key parameters of agent-based models from empirical data instead of setting them artificially was suggested. We first review several agent-based models and the new approaches to determine the key model parameters from historical market data. Based on the agents' behaviors with heterogeneous personal preferences and interactions, these models are successful in explaining the microscopic origination of the temporal and spatial correlations of financial markets. We then present a novel paradigm combining big-data analysis with agent-based modeling. Specifically, from internet query and stock market data, we extract the information driving forces and develop an agent-based model to simulate the dynamic behaviors of complex financial systems.
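As a hedged illustration of the kind of agent-based market model the review discusses (not the authors' calibrated models, and with all parameters invented), the sketch below gives each agent a private noisy signal plus a herding term that follows the last return; the price then moves with the aggregate excess demand.

```python
# Minimal agent-based market sketch: N agents buy (+1) or sell (-1) per step
# based on private noise plus herding on the previous return; the log-price
# moves in proportion to the excess demand.
import math
import random

def simulate(n_agents=100, steps=250, herding=0.8, noise=1.0,
             impact=0.05, seed=1):
    rng = random.Random(seed)
    price, last_return = 100.0, 0.0
    prices = [price]
    for _ in range(steps):
        demand = 0
        for _ in range(n_agents):
            signal = rng.gauss(0.0, noise) + herding * last_return
            demand += 1 if signal > 0 else -1      # buy or sell one unit
        ret = impact * demand / n_agents           # price impact of excess demand
        price *= math.exp(ret)                     # geometric price update
        last_return = ret
        prices.append(price)
    return prices

prices = simulate()
```

In the data-driven paradigm the abstract describes, parameters such as `herding` and `impact` would be estimated from historical market (or internet-query) data rather than set by hand as they are here.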
International Nuclear Information System (INIS)
Geroyannis, V.S.
1988-01-01
In this paper, a numerical method is developed for determining the structure distortion of a polytropic star which rotates either uniformly or differentially. This method carries out the required numerical integrations in the complex plane. The method is implemented to compute indicative quantities, such as the critical perturbation parameter which represents an upper limit in the rotational behavior of the star. From such indicative results, it is inferred that this method achieves impressive improvement against other relevant methods; most important, it is comparable to some of the most elaborate and accurate techniques on the subject. It is also shown that the use of this method with Chandrasekhar's first-order perturbation theory yields an immediate drastic improvement of the results. Thus, there is no need - for most applications concerning rotating polytropic models - to proceed to the further use of the method with higher order techniques, unless the maximum accuracy of the method is required. 31 references
Directory of Open Access Journals (Sweden)
Олег Богданович ЗАЧКО
2016-03-01
Methods and models for the safety-oriented project management of the development of complex systems are proposed, resulting from the convergence of existing approaches in project management, in contrast to the mechanism of value-oriented management. A cognitive model of safety-oriented project management of the development of complex systems is developed, which provides a synergistic effect: moving the system from its original (pre-project) state to one that is optimal from the viewpoint of life safety (the post-project state). An approach to assessing project complexity is proposed, which consists in taking into account the seasonal component of the time characteristic of the life cycles of complex organizational and technical systems with occupancy. This made it possible to account for the seasonal component in simulation models of the life cycle of product operation in a complex organizational and technical system, modeling the critical points of operation of systems with occupancy, which forms a new methodology for the safety-oriented management of projects, programs and portfolios of projects with formalization of the elements of complexity.
Kartal, Ozgul; Dunya, Beyza Aksu; Diefes-Dux, Heidi A.; Zawojewski, Judith S.
2016-01-01
Critical to many science, technology, engineering, and mathematics (STEM) career paths is mathematical modeling--specifically, the creation and adaptation of mathematical models to solve problems in complex settings. Conventional standardized measures of mathematics achievement are not structured to directly assess this type of mathematical…
Small System dynamics models for big issues : Triple jump towards real-world complexity
Pruyt, E.
2013-01-01
System Dynamics (SD) is a method to describe, model, simulate and analyze dynamically complex issues and/or systems in terms of the processes, information, organizational boundaries and strategies. Quantitative SD modeling, simulation and analysis facilitates the (re)design of systems and design of
Paving the way towards complex blood-brain barrier models using pluripotent stem cells
DEFF Research Database (Denmark)
Lauschke, Karin; Frederiksen, Lise; Hall, Vanessa Jane
2017-01-01
, it is now possible to produce many cell types from the BBB and even partially recapitulate this complex tissue in vitro. In this review, we summarize the most recent developments in PSC differentiation and modelling of the BBB. We also suggest how patient-specific human induced PSCs could be used to model...
DEFF Research Database (Denmark)
Xu, Chang; Li, Chen Qi; Han, Xing Xing
2015-01-01
Study of the aerodynamic field in complex terrain is significant for wind farm micro-siting and wind power prediction. This paper modeled the wind turbine through an actuator disk model, and solved the aerodynamic field by CFD to study the influence of meshing, boundary conditions and turbulence ...
Some Comparisons of Complexity in Dictionary-Based and Linear Computational Models
Czech Academy of Sciences Publication Activity Database
Gnecco, G.; Kůrková, Věra; Sanguineti, M.
2011-01-01
Roč. 24, č. 2 (2011), s. 171-182 ISSN 0893-6080 R&D Project s: GA ČR GA201/08/1744 Grant - others:CNR - AV ČR project 2010-2012(XE) Complexity of Neural-Network and Kernel Computational Models Institutional research plan: CEZ:AV0Z10300504 Keywords : linear approximation schemes * variable-basis approximation schemes * model complexity * worst-case errors * neural networks * kernel models Subject RIV: IN - Informatics, Computer Science Impact factor: 2.182, year: 2011
Rivas, Elena; Lang, Raymond; Eddy, Sean R
2012-02-01
The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic models are generally easier to train and to use. To explore a range of probabilistic models of increasing complexity, and to directly compare probabilistic, thermodynamic, and discriminative approaches, we created TORNADO, a computational tool that can parse a wide spectrum of RNA grammar architectures (including the standard nearest-neighbor model and more) using a generalized super-grammar that can be parameterized with probabilities, energies, or arbitrary scores. By using TORNADO, we find that probabilistic nearest-neighbor models perform comparably to (but not significantly better than) discriminative methods. We find that complex statistical models are prone to overfitting RNA structure and that evaluations should use structurally nonhomologous training and test data sets. Overfitting has affected at least one published method (ContextFold). The most important barrier to improving statistical approaches for RNA secondary structure prediction is the lack of diversity of well-curated single-sequence RNA secondary structures in current RNA databases.
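TORNADO parses far richer grammar architectures than can be shown here, but the dynamic-programming machinery that grammar-based RNA folding builds on can be illustrated with the classic Nussinov algorithm, which simply maximizes the number of base pairs. This is a standard textbook algorithm, much simpler than the nearest-neighbor and SCFG models in the abstract, and the minimum-loop constant below is a conventional choice, not taken from the paper.

```python
# Nussinov-style dynamic program: dp[i][j] holds the maximum number of
# nested canonical (and G-U wobble) base pairs in seq[i..j], with a
# minimum hairpin loop of 3 unpaired bases between paired positions.

PAIRS = {("A", "U"), ("U", "A"), ("C", "G"), ("G", "C"), ("G", "U"), ("U", "G")}

def nussinov(seq, min_loop=3):
    n = len(seq)
    if n == 0:
        return 0
    dp = [[0] * n for _ in range(n)]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = max(dp[i + 1][j], dp[i][j - 1])        # i or j left unpaired
            if j - i > min_loop and (seq[i], seq[j]) in PAIRS:
                best = max(best, dp[i + 1][j - 1] + 1)    # i pairs with j
            for k in range(i + 1, j):                     # bifurcation
                best = max(best, dp[i][k] + dp[k + 1][j])
            dp[i][j] = best
    return dp[0][n - 1]

count = nussinov("GGGAAAUCCC")   # a simple hairpin: three G-C pairs
```

SCFG methods replace the "count of pairs" objective with log-probabilities of grammar rules (and thermodynamic methods with free energies), but fill a table of exactly this nested structure.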
Energy Technology Data Exchange (ETDEWEB)
Usoltsev, Ilya; Eichler, Robert; Tuerler, Andreas [Paul Scherrer Institut (PSI), Villigen (Switzerland); Bern Univ. (Switzerland)
2016-11-01
The decomposition behavior of group 6 metal hexacarbonyl complexes (M(CO)6) in a tubular flow reactor is simulated. A microscopic Monte-Carlo based model is presented for assessing the first bond dissociation enthalpy of M(CO)6 complexes. The suggested approach superimposes a microscopic model of gas adsorption chromatography with a first-order heterogeneous decomposition model. The experimental data on the decomposition of Mo(CO)6 and W(CO)6 are successfully simulated by introducing available thermodynamic data. Thermodynamic data predicted by relativistic density functional theory are used in our model to deduce the most probable experimental behavior of the corresponding Sg carbonyl complex. Thus, the design of a chemical experiment with Sg(CO)6 is suggested, which is sensitive enough to benchmark our theoretical understanding of the bond stability in carbonyl compounds of the heaviest elements.
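The superposition of adsorption chromatography with first-order decomposition can be caricatured in a few lines. This is a hedged sketch with invented parameters, not the PSI model: each molecule makes a fixed number of adsorption sojourns of exponentially distributed duration, and while adsorbed it decomposes with first-order rate `k_dec`; the surviving fraction at the reactor exit is the observable such simulations compare against experiment.

```python
# Monte-Carlo sketch of transport-with-decomposition: a molecule survives the
# reactor if, in every adsorption sojourn, its exponential decomposition
# lifetime (rate k_dec) exceeds the sojourn duration.
import random

def surviving_fraction(k_dec, n_molecules=5000, n_sojourns=50,
                       mean_sojourn=1.0, seed=7):
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_molecules):
        alive = True
        for _ in range(n_sojourns):
            sojourn = rng.expovariate(1.0 / mean_sojourn)  # time spent adsorbed
            lifetime = rng.expovariate(k_dec)              # decomposition clock
            if lifetime < sojourn:                         # decomposed on surface
                alive = False
                break
        if alive:
            survived += 1
    return survived / n_molecules

stable = surviving_fraction(k_dec=0.01)   # slower decomposition (stronger bond)
fragile = surviving_fraction(k_dec=0.1)   # faster decomposition (weaker bond)
```

In the full model, `k_dec` and the sojourn times follow from the first bond dissociation enthalpy and the adsorption enthalpy via Arrhenius/Frenkel expressions, which is what makes the surviving fraction sensitive to the bond stability.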
System Testability Analysis for Complex Electronic Devices Based on Multisignal Model
International Nuclear Information System (INIS)
Long, B; Tian, S L; Huang, J G
2006-01-01
It is necessary to consider system testability problems for electronic devices during their early design phase, because modern electronic devices are becoming smaller and more highly integrated while their function and structure grow more complex. The multisignal model, combining the advantages of the structure model and the dependency model, is used to describe the fault dependency relationships of complex electronic devices, and the main testability indexes (including optimal test program, fault detection rate, fault isolation rate, etc.) used to evaluate testability, together with the corresponding algorithms, are given. The system testability analysis process is illustrated for a USB-GPIB interface circuit with the TEAMS toolbox. The experimental results show that the modelling method is simple, the computation speed is rapid, and this method has important significance for improving the diagnostic capability of complex electronic devices
Bates, P. D.; Neal, J. C.; Fewtrell, T. J.
2012-12-01
In this paper we consider two related questions. First, we address the issue of how much physical complexity is necessary in a model in order to simulate floodplain inundation to within validation data error. This is achieved through development of a single-code/multiple-physics hydraulic model (LISFLOOD-FP) in which different degrees of complexity can be switched on or off. Different configurations of this code are applied to four benchmark test cases and compared to the results of a number of industry-standard models. Second, we address the issue of how parameter sensitivity and transferability change with increasing complexity, using numerical experiments with models of different physical and geometric intricacy. Hydraulic models are a good example system with which to address such generic modelling questions as: (1) they have a strong physical basis; (2) there is only one set of equations to solve; (3) they require only topography and boundary conditions as input data; and (4) they typically require only a single free parameter, namely boundary friction. In terms of the complexity required, we show that for the problem of sub-critical floodplain inundation a number of codes of different dimensionality and resolution can be found to fit uncertain model validation data equally well, and that in this situation Occam's razor emerges as a useful logic to guide model selection. We also find that model skill usually improves more rapidly with increases in model spatial resolution than with increases in physical complexity, and that standard approaches to testing hydraulic models against laboratory data or analytical solutions may fail to identify this important fact. Lastly, we find that in benchmark testing studies significant differences can exist between codes with identical numerical solution techniques as a result of auxiliary choices regarding the specifics of model implementation that are frequently unreported by code developers. As a consequence, making sound
Developing an agent-based model on how different individuals solve complex problems
Directory of Open Access Journals (Sweden)
Ipek Bozkurt
2015-01-01
Full Text Available Purpose: Research that focuses on the emotional, mental, behavioral and cognitive capabilities of individuals has been abundant within disciplines such as psychology, sociology, and anthropology, among others. However, when facing complex problems, a new perspective to understand individuals is necessary. The main purpose of this paper is to develop an agent-based model and simulation to gain understanding of the decision-making and problem-solving abilities of individuals. Design/Methodology/approach: The micro-level modeling and simulation paradigm of Agent-Based Modeling is used. Through the use of Agent-Based Modeling, insight is gained on how different individuals with different profiles deal with complex problems. Using previous literature from different bodies of knowledge, established theories and certain assumptions as input parameters, a model is built and executed through a computer simulation. Findings: The results indicate that individuals with certain profiles have better capabilities to deal with complex problems. Moderate profiles could solve the entire complex problem, whereas profiles at extreme conditions could not. This indicates that having a strong predisposition is not ideal when approaching complex problems, and there should always be a component from the other perspective. The possibility that an individual may use the capabilities provided by the opposite predisposition proves to be a useful option. Originality/value: The originality of the present research stems from how individuals are profiled, and the model and simulation that are built to understand how they solve complex problems. The development of the agent-based model adds value to the existing body of knowledge within both social sciences, and modeling and simulation.
EVALUATING THE NOVEL METHODS ON SPECIES DISTRIBUTION MODELING IN COMPLEX FOREST
Directory of Open Access Journals (Sweden)
C. H. Tu
2012-07-01
Full Text Available The prediction of species distribution has become a focus in ecology. For predicting a result more effectively and accurately, some novel methods have been proposed recently, like the support vector machine (SVM) and maximum entropy (MAXENT). However, high complexity in the forest, like that in Taiwan, makes the modeling even harder. In this study, we aim to explore which method is more applicable to species distribution modeling in a complex forest. Castanopsis carlesii (long-leaf chinkapin, LLC), growing widely in Taiwan, was chosen as the target species because its seeds are an important food source for animals. We overlaid the tree samples on the layers of altitude, slope, aspect, terrain position, and vegetation index derived from SPOT-5 images, and developed three models, MAXENT, SVM, and decision tree (DT), to predict the potential habitat of LLCs. We evaluated these models by two sets of independent samples at different sites, and assessed the effect of forest complexity by changing the background sample size (BSZ). In the forest with low complexity (small BSZ), the accuracies of the SVM (kappa = 0.87) and DT (0.86) models were slightly higher than that of MAXENT (0.84). In the more complex situation (large BSZ), MAXENT kept a high kappa value (0.85), whereas the SVM (0.61) and DT (0.57) models dropped significantly because they limited the predicted habitat to areas close to the samples. Therefore, the MAXENT model was more applicable for predicting species’ potential habitat in the complex forest, whereas the SVM and DT models would tend to underestimate the potential habitat of LLCs.
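The kappa values quoted above measure agreement between predicted and observed presence/absence beyond chance. A minimal sketch of Cohen's kappa for a 2×2 confusion matrix (the function name and test counts are illustrative, not taken from the paper):

```python
def cohens_kappa(tp, fp, fn, tn):
    """Cohen's kappa for a 2x2 confusion matrix (binary habitat maps)."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n  # observed agreement
    # expected chance agreement, from the marginal totals
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    return (po - pe) / (1 - pe)

# a perfect prediction gives kappa = 1.0
print(cohens_kappa(50, 0, 0, 50))  # → 1.0
```

Values around 0.85, as reported for MAXENT, indicate strong but imperfect agreement.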
Modelling of turbulence and combustion for simulation of gas explosions in complex geometries
Energy Technology Data Exchange (ETDEWEB)
Arntzen, Bjoern Johan
1998-12-31
This thesis analyses and presents new models for turbulent reactive flows for CFD (Computational Fluid Dynamics) simulation of gas explosions in complex geometries like offshore modules. The course of a gas explosion in a complex geometry is largely determined by the development of turbulence and the accompanying increased combustion rate. To be able to model the process it is necessary to use a CFD code as a starting point, provided with a suitable turbulence and combustion model. The modelling and calculations are done in a three-dimensional finite volume CFD code, where complex geometries are represented by a porosity concept, which gives porosity on the grid cell faces, depending on what is inside the cell. The turbulent flow field is modelled with a k-ε turbulence model. Subgrid models are used for production of turbulence from geometry not fully resolved on the grid. Results from laser Doppler anemometry measurements around obstructions in steady and transient flows have been analysed and the turbulence models have been improved to handle transient, subgrid and reactive flows. The combustion is modelled with a burning velocity model and a flame model which incorporates the burning velocity into the code. Two different flame models have been developed: SIF (Simple Interface Flame model), which treats the flame as an interface between reactants and products, and the β-model, where the reaction zone is resolved with about three grid cells. The flame normally starts with a quasi-laminar burning velocity, due to flame instabilities, modelled as a function of flame radius and laminar burning velocity. As the flow field becomes turbulent, the flame uses a turbulent burning velocity model based on experimental data and dependent on turbulence parameters and laminar burning velocity. The laminar burning velocity is modelled as a function of gas mixture, equivalence ratio, pressure and temperature of the reactants. Simulations agree well with experiments.
Energy Technology Data Exchange (ETDEWEB)
Chambers, D H
2009-02-24
A new method of locating structural damage using measured differences in vibrational response and a numerical model of the undamaged structure is presented. This method is particularly suited for complex structures with little or no symmetry. In a prior study the method successfully located simulated damage from measurements of the vibrational response on two simple structures. Here we demonstrate that it can locate simulated damage in a complex structure. A numerical model of a complex structure was used to calculate the structural response before and after the introduction of a void. The method can now be considered for application to structures of programmatic interest. It could be used to monitor the structural integrity of complex mechanical structures and assemblies over their lifetimes. This would allow early detection of damage, when repair is relatively easy and inexpensive. It would also allow one to schedule maintenance based on actual damage instead of a time schedule.
The effects of model complexity and calibration period on groundwater recharge simulations
Moeck, Christian; Van Freyberg, Jana; Schirmer, Mario
2017-04-01
A significant number of groundwater recharge models exist that vary in terms of complexity (i.e., structure and parametrization). Typically, model selection and conceptualization is very subjective and can be a key source of uncertainty in the recharge simulations. Another source of uncertainty is the implicit assumption that model parameters, calibrated over historical periods, are also valid for the simulation period. To the best of our knowledge there is no systematic evaluation of the effect of model complexity and calibration strategy on the performance of recharge models. To address this gap, we utilized a long-term recharge data set (20 years) from a large weighing lysimeter. We performed a differential split-sample test with four groundwater recharge models that vary in terms of complexity. They were calibrated using six calibration periods with climatically contrasting conditions in a constrained Monte Carlo approach. Despite the climatically contrasting conditions, all models performed similarly well during the calibration. However, during validation a clear effect of the model structure on model performance was evident. The more complex, physically-based models predicted recharge best, even when calibration and prediction periods had very different climatic conditions. In contrast, the simpler soil-water balance and lumped models performed poorly under such conditions. For these models we found a strong dependency on the chosen calibration period. In particular, our analysis showed that this can have relevant implications when using recharge models as decision-making tools in a broad range of applications (e.g. water availability, climate change impact studies, water resource management, etc.).
Juckem, Paul F.; Clark, Brian R.; Feinstein, Daniel T.
2017-05-04
The U.S. Geological Survey, National Water-Quality Assessment seeks to map estimated intrinsic susceptibility of the glacial aquifer system of the conterminous United States. Improved understanding of the hydrogeologic characteristics that explain spatial patterns of intrinsic susceptibility, commonly inferred from estimates of groundwater age distributions, is sought so that the methods used for the estimation process are properly equipped. An important step beyond identifying relevant hydrogeologic datasets, such as glacial geology maps, is to evaluate how incorporation of these resources into process-based models using differing levels of detail could affect resulting simulations of groundwater age distributions and, thus, estimates of intrinsic susceptibility. This report describes the construction and calibration of three groundwater-flow models of northeastern Wisconsin that were developed with differing levels of complexity to provide a framework for subsequent evaluations of the effects of process-based model complexity on estimations of groundwater age distributions for withdrawal wells and streams. Preliminary assessments, which focused on the effects of model complexity on simulated water levels and base flows in the glacial aquifer system, illustrate that simulation of vertical gradients using multiple model layers improves simulated heads more in low-permeability units than in high-permeability units. Moreover, simulation of heterogeneous hydraulic conductivity fields in coarse-grained and some fine-grained glacial materials produced a larger improvement in simulated water levels in the glacial aquifer system compared with simulation of uniform hydraulic conductivity within zones. The relation between base flows and model complexity was less clear; however, it generally seemed to follow a similar pattern as the water levels. Although increased model complexity resulted in improved calibrations, future application of the models using simulated particle
THE MODEL OF LIFELONG EDUCATION IN A TECHNICAL UNIVERSITY AS A MULTILEVEL EDUCATIONAL COMPLEX
Directory of Open Access Journals (Sweden)
Svetlana V. Sergeyeva
2016-06-01
Full Text Available Introduction: the current leading trend of educational development is characterised by its continuity. Institutions of higher education as multi-level educational complexes nurture favourable conditions for realisation of the strategy of lifelong education. Today a technical university offering training of future engineers is facing the topical issue of creating a multilevel educational complex. Materials and Methods: this paper is put together on the basis of modern Russian and foreign scientific literature about lifelong education. The authors used theoretical methods of scientific research: system-structural analysis, synthesis, modeling, analysis and generalisation of concepts. Results: the paper presents a model of lifelong education developed by the authors for a technical university as a multilevel educational complex. It is realised through a set of principles: multi-level structure and continuity, integration, conformity and quality, mobility, anticipation, openness, social partnership and feedback. In accordance with the purpose, objectives and principles, the content part of the model is formed. The syllabi following the described model are run in accordance with the training levels undertaken by a technical university as a multilevel educational complex. All syllabi are based on the gradual nature of their implementation. In this regard, the authors highlight three phases: diagnostic; constructive and transformative; assessing. Discussion and Conclusions: the expected result of the created model of lifelong education development in a technical university as a multilevel educational complex is a graduate trained for effective professional activity, competitive, prepared for and sought-after at the regional labour market.
Martinez, Neo D.; Tonin, Perrine; Bauer, Barbara; Rael, Rosalyn C.; Singh, Rahul; Yoon, Sangyuk; Yoon, Ilmi; Dunne, Jennifer A.
2012-01-01
Understanding ecological complexity has stymied scientists for decades. Recent elucidation of the famously coined "devious strategies for stability in enduring natural systems" has opened up a new field of computational analyses of complex ecological networks where the nonlinear dynamics of many interacting species can be more realistically modeled and understood. Here, we describe the first extension of this field to include coupled human-natural systems. This extension elucidates new strat...
Numerical Modeling of Fluid-Structure Interaction with Rheologically Complex Fluids
Chen, Xingyuan
2014-01-01
In the present work the interaction between rheologically complex fluids and elastic solids is studied by means of numerical modeling. The investigated complex fluids are non-Newtonian viscoelastic fluids. The fluid-structure interaction (FSI) of this kind is frequently encountered in injection molding, food processing, pharmaceutical engineering and biomedicine. The investigation via experiments is costly, difficult or in some cases, even impossible. Therefore, research is increasingly aided...
Process modeling of the platform choice for development of the multimedia educational complex
Directory of Open Access Journals (Sweden)
Ірина Олександрівна Бондар
2016-10-01
Full Text Available The article presents a methodical approach to the platform choice as the technological basis for building an open and functional structure and the further implementation of the substantive content of the modules of the network multimedia complex for the discipline. The proposed approach is implemented through the use of mathematical tools. The result of the process modeling is the selection of the most appropriate platform for development of the multimedia complex.
Alexandrov, Natalia (Technical Monitor); Kuby, Michael; Tierney, Sean; Roberts, Tyler; Upchurch, Christopher
2005-01-01
This report reviews six classes of models that are used for studying transportation network topologies. The report is motivated by two main questions. First, what can the "new science" of complex networks (scale-free, small-world networks) contribute to our understanding of transport network structure, compared to more traditional methods? Second, how can geographic information systems (GIS) contribute to studying transport networks? The report defines terms that can be used to classify different kinds of models by their function, composition, mechanism, spatial and temporal dimensions, certainty, linearity, and resolution. Six broad classes of models for analyzing transport network topologies are then explored: GIS; static graph theory; complex networks; mathematical programming; simulation; and agent-based modeling. Each class of models is defined and classified according to the attributes introduced earlier. The paper identifies some typical types of research questions about network structure that have been addressed by each class of model in the literature.
Microscopic universality of complex matrix model correlation functions at weak non-Hermiticity
International Nuclear Information System (INIS)
Akemann, G.
2002-01-01
The microscopic correlation functions of non-chiral random matrix models with complex eigenvalues are analyzed for a wide class of non-Gaussian measures. In the large-N limit of weak non-Hermiticity, where N is the size of the complex matrices, we can prove that all k-point correlation functions including an arbitrary number of Dirac mass terms are universal close to the origin. To this aim we establish the universality of the asymptotics of orthogonal polynomials in the complex plane. The universality of the correlation functions then follows from that of the kernel of orthogonal polynomials and a mapping of massive to massless correlators
A study of the logical model of capital market complexity theories
Institute of Scientific and Technical Information of China (English)
Anonymous
2006-01-01
Analyzes the shortcomings of the classic capital market theories based on EMH and discloses the complexity essence of the capital market. Considering the capital market a complicated, interactive and adaptable dynamic system, with complexity science as the method for researching the operation law of the capital market, this paper constructs a nonlinear logical model to analyze the applied realm, focal point and interrelationship of such theories as dissipative structure theory, chaos theory, fractal theory, synergetics theory, catastrophe theory and scale theory, and summarizes and discusses the achievements and problems of each theory. Based on this research, the paper foretells the future direction of complexity science in capital market research.
Collaborative Management of Complex Major Construction Projects: AnyLogic-Based Simulation Modelling
Directory of Open Access Journals (Sweden)
Na Zhao
2016-01-01
Full Text Available Collaborative management of the complex supply chain system of a major construction project effectively integrates the different participants in the construction project. This paper establishes a simulation model based on AnyLogic to reveal the collaborative elements in the complex supply chain management system and their modes of action, as well as the transmission problems of intent information, thus promoting the participants' development into an organism with coordinated development and coevolution. This study can help improve the efficiency and management of the complex system of major construction projects.
A density-based clustering model for community detection in complex networks
Zhao, Xiang; Li, Yantao; Qu, Zehui
2018-04-01
Network clustering (or graph partitioning) is an important technique for uncovering the underlying community structures in complex networks, which has been widely applied in various fields including astronomy, bioinformatics, sociology, and bibliometrics. In this paper, we propose a density-based clustering model for community detection in complex networks (DCCN). The key idea is to find group centers with a higher density than their neighbors and a relatively large integrated distance from nodes with higher density. The experimental results indicate that our approach is efficient and effective for community detection in complex networks.
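The center-selection rule described above (high density plus large distance to any denser node) follows the density-peak idea. A hedged sketch on a toy graph, using degree as the density and BFS hop count as the distance; this is our reading of the rule, not the authors' code:

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src over an adjacency dict."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def centers(adj, k):
    """Top-k nodes by density * delta, where delta is the distance
    to the nearest node of strictly higher density."""
    density = {u: len(vs) for u, vs in adj.items()}
    delta = {}
    for u in adj:
        d = bfs_dist(adj, u)
        higher = [d[v] for v in adj if density[v] > density[u] and v in d]
        delta[u] = min(higher) if higher else max(d.values())
    score = {u: density[u] * delta[u] for u in adj}
    return sorted(adj, key=score.get, reverse=True)[:k]

# two triangles joined by one bridge edge: the two bridge endpoints
# have the highest degree and the largest delta, so they emerge as centers
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(centers(adj, 2))  # → [2, 3]
```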
Hou, Chang-Yu; Feng, Ling; Seleznev, Nikita; Freed, Denise E
2018-04-11
In this work, we establish an effective medium model to describe the low-frequency complex dielectric (conductivity) dispersion of dilute clay suspensions. We use previously obtained low-frequency polarization coefficients for a charged oblate spheroidal particle immersed in an electrolyte as the building block for the Maxwell Garnett mixing formula to model the dilute clay suspension. The complex conductivity phase dispersion exhibits a near-resonance peak when the clay grains have a narrow size distribution. The peak frequency is associated with the size distribution as well as the shape of the clay grains and is often referred to as the characteristic frequency. In contrast, if the size of the clay grains has a broad distribution, the phase peak is broadened and can disappear into the background of the canonical phase response of the brine. To benchmark our model, the low-frequency dispersion of the complex conductivity of dilute clay suspensions is measured using a four-point impedance measurement, which can be reliably calibrated in the frequency range between 0.1 Hz and 10 kHz. By using a minimal number of fitting parameters when reliable information is available as input for the model and carefully examining the issue of potential over-fitting, we found that our model can be used to fit the measured dispersion of the complex conductivity with reasonable parameters. The good match between the modeled and experimental complex conductivity dispersion allows us to argue that our simplified model captures the essential physics for describing the low-frequency dispersion of the complex conductivity of dilute clay suspensions.
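The Maxwell Garnett mixing formula at the core of the model can be sketched for the simplest case of spherical inclusions; the paper itself uses polarization coefficients for charged oblate spheroids, which this illustration does not reproduce:

```python
def maxwell_garnett(eps_host, eps_incl, f):
    """Effective permittivity of a dilute suspension of spheres with
    volume fraction f, via the classical Maxwell Garnett formula."""
    beta = (eps_incl - eps_host) / (eps_incl + 2 * eps_host)
    return eps_host * (1 + 2 * f * beta) / (1 - f * beta)

# complex permittivities work unchanged; loss enters via the imaginary part
eps_eff = maxwell_garnett(80 - 10j, 5 - 0.1j, 0.02)
print(eps_eff)
```

Sanity checks: at f = 0 the formula returns the host permittivity, and at f = 1 it returns the inclusion permittivity.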
Modeling and complexity of stochastic interacting Lévy type financial price dynamics
Wang, Yiduan; Zheng, Shenzhou; Zhang, Wei; Wang, Jun; Wang, Guochao
2018-06-01
In an attempt to reproduce and investigate nonlinear dynamics of security markets, a novel nonlinear random interacting price dynamics, which is considered as a Lévy type process, is developed and investigated by the combination of lattice-oriented percolation and Potts dynamics, which concern the intrinsic random fluctuation and the fluctuation caused by the spread of the investors' trading attitudes, respectively. To better understand the fluctuation complexity properties of the proposed model, complexity analyses of the random logarithmic price return and corresponding volatility series are performed, including power-law distribution, Lempel-Ziv complexity and fractional sample entropy. In order to verify the rationality of the proposed model, corresponding studies of actual security market datasets are also implemented for comparison. The empirical results reveal that this financial price model can reproduce some important complexity features of actual security markets to some extent. The complexity of returns decreases as the parameters γ1 and β increase, respectively; furthermore, the volatility series exhibit lower complexity than the return series.
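Of the complexity measures listed, Lempel-Ziv complexity is easy to illustrate on a binarized return series. A minimal sketch using an LZ78-style phrase count, one common variant; the paper does not specify its exact parsing, so the names and details below are illustrative:

```python
def lz_complexity(s):
    """LZ78-style complexity: number of distinct phrases in an
    incremental parse of a binary string."""
    phrases, w = set(), ""
    for ch in s:
        w += ch
        if w not in phrases:
            phrases.add(w)  # new phrase ends here
            w = ""
    return len(phrases) + (1 if w else 0)  # count any trailing partial phrase

def binarize(returns):
    """Quantize a return series by sign: up -> '1', down/flat -> '0'."""
    return "".join("1" if r > 0 else "0" for r in returns)

# a strictly periodic series parses into few phrases (low complexity)
print(lz_complexity("0101010101"))  # → 6
```

Less regular series of the same length yield more phrases, which is the sense in which higher parameter values above lower the measured complexity of returns.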
Complex Road Intersection Modelling Based on Low-Frequency GPS Track Data
Huang, J.; Deng, M.; Zhang, Y.; Liu, H.
2017-09-01
It is widely accepted that digital maps have become an indispensable guide for human daily traveling. Traditional road network maps are produced in time-consuming and labour-intensive ways, such as digitizing printed maps and extraction from remote sensing images. At present, the large number of GPS trajectory data collected by floating vehicles makes it a reality to extract high-detailed and up-to-date road network information. Road intersections are often accident-prone areas and are critical to route planning, and the connectivity of a road network is mainly determined by the topological geometry of its intersections. A few studies have paid attention to detecting complex road intersections and mining the attached traffic information (e.g., connectivity, topology and turning restrictions) from massive GPS traces. To the authors' knowledge, recent studies mainly used high-frequency (1 s sampling rate) trajectory data to detect crossroad regions or extract rough intersection models. It is still difficult to make use of low-frequency (20-100 s) and easily available trajectory data to model complex road intersections geometrically and semantically. This paper thus attempts to construct precise models of complex road intersections using low-frequency GPS traces. We propose to firstly extract the complex road intersections by an LCSS-based (Longest Common Subsequence) trajectory clustering method, then delineate the geometry shapes of complex road intersections by a K-segment principal curve algorithm, and finally infer the traffic constraint rules inside the complex intersections.
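The LCSS-based clustering step relies on a longest-common-subsequence similarity between point tracks, where two points match if they fall within a spatial tolerance eps. A hedged dynamic-programming sketch (function names and the tolerance rule are illustrative, not the paper's exact formulation):

```python
def lcss(t1, t2, eps):
    """Length of the longest common subsequence of two 2D point tracks,
    where points match when both coordinate differences are within eps."""
    m, n = len(t1), len(t2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            (x1, y1), (x2, y2) = t1[i - 1], t2[j - 1]
            if abs(x1 - x2) <= eps and abs(y1 - y2) <= eps:
                dp[i][j] = dp[i - 1][j - 1] + 1  # points match: extend
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def lcss_sim(t1, t2, eps):
    """Normalized similarity in [0, 1], robust to unequal track lengths."""
    return lcss(t1, t2, eps) / min(len(t1), len(t2))

a = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
b = [(0.05, 0.0), (1.04, 0.02), (5.0, 5.0)]
print(lcss(a, b, 0.1))  # → 2
```

Tracks whose similarity exceeds a threshold would then be grouped into the same intersection cluster.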
Complex motion of elevators in piecewise map model combined with circle map
Nagatani, Takashi
2013-11-01
We study the dynamic behavior of elevator traffic controlled by capacity when the inflow rate of passengers into the elevators varies periodically with time. The dynamics of the elevators is described by a piecewise map model combined with the circle map. The motion of the elevators depends on the inflow rate, its period, and the number of elevators. The motion in the piecewise map model combined with the circle map shows a complex behavior different from the motion in the piecewise map model alone.
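The circle-map component can be illustrated with the standard sine circle map; the paper's piecewise elevator map and its coupling are not reproduced here, so this sketch only shows the map itself and its winding number, which plateaus at rational values when the dynamics mode-locks:

```python
import math

def circle_map(theta, omega, k):
    """One step of the standard sine circle map, taken mod 1."""
    return (theta + omega - (k / (2 * math.pi)) * math.sin(2 * math.pi * theta)) % 1.0

def winding_number(omega, k, n=10000):
    """Average rotation per step over n iterations."""
    theta, total = 0.0, 0.0
    for _ in range(n):
        step = omega - (k / (2 * math.pi)) * math.sin(2 * math.pi * theta)
        total += step
        theta = (theta + step) % 1.0
    return total / n

# with k = 0 there is no nonlinearity and the winding number equals omega
print(winding_number(0.3, 0.0))
```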
Use of probabilistic relational model (PRM) for dependability analysis of complex systems
Medina-Oliva , Gabriela; Weber , Philippe; Levrat , Eric; Iung , Benoît
2010-01-01
International audience; This paper proposes a methodology to develop a decision-aiding tool for assessing the dependability and performance (i.e. reliability) of an industrial system. This tool is built on a model based on a new formalism, called the probabilistic relational model (PRM), which is adapted to deal with large and complex systems. The model is formalized from functional, dysfunctional and informational studies of the technical industrial systems. An application of this meth...
Trotter, Robert T.; Laurila, Kelly; Alberts, David; Huenneke, Laura F.
2014-01-01
Complex community-oriented health care prevention and intervention partnerships fail or only partially succeed at alarming rates. In light of the current rapid expansion of critically needed programs targeted at health disparities in minority populations, we have designed and are testing a “logic model plus” evaluation model that combines classic logic model and query-based evaluation designs (CDC, NIH, Kellogg Foundation) with advances in community-engaged designs derived from industry-univ...
International Nuclear Information System (INIS)
Abazi, F.
2011-01-01
Increased levels of complexity in almost every discipline and operation today raise the demand for knowledge in order to successfully run an organization, whether to generate profit or to attain a non-profit mission. The traditional way of transferring knowledge to information systems rich in data structures and complex algorithms continues to hinder the ability to swiftly turn concepts into operations. Diagrammatic modelling, commonly applied in engineering in order to represent concepts or reality, remains an excellent way of eliciting knowledge from domain experts. The nuclear verification domain is a matter of ever greater importance to world safety and security. Demand for knowledge about nuclear processes and verification activities used to offset potential misuse of nuclear technology will intensify with the growth of the subject technology. This doctoral thesis contributes a model-based approach for representing complex processes such as nuclear inspections. The work presented also contributes to other domains characterized by knowledge-intensive and complex processes. Based on the characteristics of a complex process, a conceptual framework was established as the theoretical basis for creating a number of modelling languages to represent the domain. The integrated Safeguards Modelling Method (iSMM) is formalized through an integrated meta-model. The diagrammatic modelling languages represent the verification domain and relevant nuclear verification aspects. Such a meta-model conceptualizes the relation between practices of process management, knowledge management and domain-specific verification principles. This fusion is considered necessary in order to create quality processes. The study also extends the formalization achieved through a meta-model by contributing a formalization language based on Pattern Theory. Through the use of graphical and mathematical constructs of the theory, process structures are formalized enhancing
Energy Technology Data Exchange (ETDEWEB)
Tsao, Jeffrey Y. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Trucano, Timothy G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kleban, Stephen D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Naugle, Asmeret Bier [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Verzi, Stephen Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Johnson, Curtis M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Smith, Mark A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Flanagan, Tatiana Paz [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vugrin, Eric D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gabert, Kasimir Georg [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lave, Matthew Samuel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Chen, Wei [Northwestern Univ., Evanston, IL (United States); DeLaurentis, Daniel [Purdue Univ., West Lafayette, IN (United States); Hubler, Alfred [Univ. of Illinois, Urbana, IL (United States); Oberkampf, Bill [WLO Consulting, Austin, TX (United States)
2016-08-01
This report contains the written footprint of a Sandia-hosted workshop held in Albuquerque, New Mexico, June 22-23, 2016 on “Complex Systems Models and Their Applications: Towards a New Science of Verification, Validation and Uncertainty Quantification,” as well as of pre-work that fed into the workshop. The workshop’s intent was to explore and begin articulating research opportunities at the intersection between two important Sandia communities: the complex systems (CS) modeling community, and the verification, validation and uncertainty quantification (VVUQ) community. The overarching research opportunity (and challenge) that we ultimately hope to address is: how can we quantify the credibility of knowledge gained from complex systems models, knowledge that is often incomplete and interim, but will nonetheless be used, sometimes in real-time, by decision makers?
Jannesari, Zahra; Hadadzadeh, Hassan; Amirghofran, Zahra; Simpson, Jim; Khayamian, Taghi; Maleki, Batool
2015-02-01
A new mononuclear Zn(II) complex, trans-[Zn(Pir)2(DMSO)2], where Pir- is 4-hydroxy-2-methyl-N-2-pyridyl-2H-1,2-benzothiazine-3-carboxamide-1,1-dioxide (piroxicam), has been synthesized and characterized. The crystal structure of the complex was obtained by the single-crystal X-ray diffraction technique. The interaction of the complex with DNA and BSA was investigated. The complex interacts with FS-DNA by two binding modes, viz., electrostatic and groove binding (major and minor). The microenvironment and the secondary structure of BSA are changed in the presence of the complex. The anticancer effects of seven complexes of the oxicam family were also determined on human K562 cell lines, and the results showed reasonable cytotoxicities. The interactions of the oxicam complexes with BSA and DNA were modeled by molecular docking and molecular dynamics simulation methods.
Post-closure biosphere assessment modelling: comparison of complex and more stylised approaches.
Walke, Russell C; Kirchner, Gerald; Xu, Shulan; Dverstorp, Björn
2015-10-01
Geological disposal facilities are the preferred option for high-level radioactive waste, due to their potential to provide isolation from the surface environment (biosphere) on very long timescales. Assessments need to strike a balance between stylised models and more complex approaches that draw more extensively on site-specific information. This paper explores the relative merits of complex versus more stylised biosphere models in the context of a site-specific assessment. The more complex biosphere modelling approach was developed by the Swedish Nuclear Fuel and Waste Management Co (SKB) for the Forsmark candidate site for a spent nuclear fuel repository in Sweden. SKB's approach is built on a landscape development model, whereby radionuclide releases to distinct hydrological basins/sub-catchments (termed 'objects') are represented as they evolve through land rise and climate change. Each of seventeen such objects is represented with more than 80 site-specific parameters, of which about 22 are time-dependent, resulting in over 5000 input values per object. The more stylised biosphere models developed for this study represent releases to individual ecosystems without environmental change and include the most plausible transport processes. In the context of regulatory review of the landscape modelling approach adopted in the SR-Site assessment in Sweden, the more stylised representation has helped to build understanding of the more complex modelling approaches by providing bounding results, checking the reasonableness of the more complex modelling, highlighting uncertainties introduced through conceptual assumptions and helping to quantify the conservatisms involved. The more stylised biosphere models are also shown to be capable of reproducing the results of more complex approaches. A major recommendation is that biosphere assessments need to justify the degree of complexity in modelling approaches as well as simplifying and conservative assumptions. In light of
Complex networks-based energy-efficient evolution model for wireless sensor networks
Energy Technology Data Exchange (ETDEWEB)
Zhu Hailin [Beijing Key Laboratory of Intelligent Telecommunications Software and Multimedia, Beijing University of Posts and Telecommunications, P.O. Box 106, Beijing 100876 (China)], E-mail: zhuhailin19@gmail.com; Luo Hong [Beijing Key Laboratory of Intelligent Telecommunications Software and Multimedia, Beijing University of Posts and Telecommunications, P.O. Box 106, Beijing 100876 (China); Peng Haipeng; Li Lixiang; Luo Qun [Information Secure Center, State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, P.O. Box 145, Beijing 100876 (China)
2009-08-30
Based on complex networks theory, we present two self-organized energy-efficient models for wireless sensor networks in this paper. The first model constructs the wireless sensor network according to the connectivity and remaining energy of each sensor node; it can therefore produce scale-free networks, which are tolerant of random errors. In the second model, we consider not only the remaining energy but also a constraint on the number of links per node. This model makes the energy consumption of the whole network more balanced. Finally, we present numerical experiments for the two models.
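The growth rule of the first model can be read as an energy-weighted preferential attachment. The sketch below is one plausible interpretation, not the paper's exact algorithm; the node count, energy range and relaying cost are invented for illustration:

```python
import random

def evolve_wsn(n_nodes=200, m_links=2, seed=1):
    """Grow a sensor network in which each new node attaches to existing
    nodes with probability proportional to degree * remaining energy
    (a hypothetical reading of the first model; parameters invented)."""
    rng = random.Random(seed)
    energy = {0: rng.uniform(0.5, 1.0), 1: rng.uniform(0.5, 1.0)}
    degree = {0: 1, 1: 1}
    edges = [(0, 1)]
    for new in range(2, n_nodes):
        energy[new] = rng.uniform(0.5, 1.0)
        degree[new] = 0
        weights = [degree[i] * energy[i] for i in range(new)]
        total = sum(weights)
        targets = set()
        while len(targets) < min(m_links, new):
            r = rng.uniform(0.0, total)      # roulette-wheel selection
            acc = 0.0
            for i, w in enumerate(weights):
                acc += w
                if acc >= r:
                    targets.add(i)
                    break
        for t in targets:
            edges.append((new, t))
            degree[new] += 1
            degree[t] += 1
            energy[t] = max(energy[t] - 0.001, 0.01)  # relaying drains the battery

    return degree, edges

degree, edges = evolve_wsn()
```

Because attachment probability grows with degree, a few energy-rich nodes accumulate many links, the scale-free signature the abstract refers to; the energy drain term is what the second model constrains more aggressively.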
Complex networks-based energy-efficient evolution model for wireless sensor networks
International Nuclear Information System (INIS)
Zhu Hailin; Luo Hong; Peng Haipeng; Li Lixiang; Luo Qun
2009-01-01
Based on complex networks theory, we present two self-organized energy-efficient models for wireless sensor networks in this paper. The first model constructs the wireless sensor network according to the connectivity and remaining energy of each sensor node; it can therefore produce scale-free networks, which are tolerant of random errors. In the second model, we consider not only the remaining energy but also a constraint on the number of links per node. This model makes the energy consumption of the whole network more balanced. Finally, we present numerical experiments for the two models.
EDM - A model for optimising the short-term power operation of a complex hydroelectric network
International Nuclear Information System (INIS)
Tremblay, M.; Guillaud, C.
1996-01-01
In order to optimize the short-term power operation of a complex hydroelectric network, a new model called EDM was added to PROSPER, a water management analysis system developed by SNC-Lavalin. PROSPER is now divided into three parts: an optimization model (DDDP), a simulation model (ESOLIN), and an economic dispatch model (EDM) for the short-term operation. The operation of the KSEB hydroelectric system (located in southern India) with PROSPER was described. The long-term analysis with monthly time steps is assisted by the DDDP, and the daily analysis with hourly or half-hourly time steps is performed with the EDM model. 3 figs
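The economic-dispatch step of such a system can be illustrated with a minimal merit-order loader: satisfy demand from the cheapest units first. The unit names, capacities and costs below are invented and bear no relation to the actual KSEB system or the EDM model's internals:

```python
def economic_dispatch(demand_mw, units):
    """Greedy merit-order dispatch: load the cheapest units first.
    `units` is a list of (name, capacity_mw, cost_per_mwh) tuples;
    a toy stand-in for one time step of a short-term dispatch model."""
    schedule = {}
    remaining = demand_mw
    for name, cap, cost in sorted(units, key=lambda u: u[2]):
        take = min(cap, remaining)
        schedule[name] = take
        remaining -= take
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError("demand exceeds total capacity")
    return schedule

# hypothetical units: two hydro plants and one thermal backstop
units = [("hydro_A", 120, 4.0), ("hydro_B", 80, 6.5), ("thermal", 200, 21.0)]
plan = economic_dispatch(250, units)
```

A real model like EDM would add hydraulic coupling between reservoirs and hourly or half-hourly time steps; the greedy rule above is only the pricing intuition.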
Risk Modeling of Interdependent Complex Systems of Systems: Theory and Practice.
Haimes, Yacov Y
2018-01-01
The emergence of the complexity characterizing our systems of systems (SoS) requires a reevaluation of the way we model, assess, manage, communicate, and analyze the risk thereto. Current models for risk analysis of emergent complex SoS are insufficient because too often they rely on the same risk functions and models used for single systems. These models commonly fail to incorporate the complexity derived from the networks of interdependencies and interconnectedness (I-I) characterizing SoS. There is a need to reevaluate currently practiced risk analysis to respond to this reality by examining, and thus comprehending, what makes emergent SoS complex. The key to evaluating the risk to SoS lies in understanding the genesis of characterizing I-I of systems manifested through shared states and other essential entities within and among the systems that constitute SoS. The term "essential entities" includes shared decisions, resources, functions, policies, decision-makers, stakeholders, organizational setups, and others. This undertaking can be accomplished by building on state-space theory, which is fundamental to systems engineering and process control. This article presents a theoretical and analytical framework for modeling the risk to SoS with two case studies performed with the MITRE Corporation and demonstrates the pivotal contributions made by shared states and other essential entities to modeling and analysis of the risk to complex SoS. A third case study highlights the multifarious representations of SoS, which require harmonizing the risk analysis process currently applied to single systems when applied to complex SoS. © 2017 Society for Risk Analysis.
Clarity versus complexity: land-use modeling as a practical tool for decision-makers
Sohl, Terry L.; Claggett, Peter
2013-01-01
The last decade has seen a remarkable increase in the number of modeling tools available to examine future land-use and land-cover (LULC) change. Integrated modeling frameworks, agent-based models, cellular automata approaches, and other modeling techniques have substantially improved the representation of complex LULC systems, with each method using a different strategy to address complexity. However, despite the development of new and better modeling tools, the use of these tools is limited for actual planning, decision-making, or policy-making purposes. LULC modelers have become very adept at creating tools for modeling LULC change, but complicated models and lack of transparency limit their utility for decision-makers. The complicated nature of many LULC models also makes it impractical or even impossible to perform a rigorous analysis of modeling uncertainty. This paper provides a review of land-cover modeling approaches and the issues caused by the complicated nature of models, and provides suggestions to facilitate the increased use of LULC models by decision-makers and other stakeholders. The utility of LULC models themselves can be improved by 1) providing model code and documentation, 2) using scenario frameworks to frame overall uncertainties, 3) improving methods for generalizing the key LULC processes most important to stakeholders, and 4) adopting more rigorous standards for validating models and quantifying uncertainty. Communication with decision-makers and other stakeholders can be improved by increasing stakeholder participation in all stages of the modeling process, increasing the transparency of model structure and uncertainties, and developing user-friendly decision-support systems to bridge the link between LULC science and policy. By considering these options, LULC science will be better positioned to support decision-makers and increase real-world application of LULC modeling results.
Yasami, Yasser; Safaei, Farshad
2018-02-01
The traditional complex network theory is particularly focused on network models in which all network constituents are dealt with equivalently, while failing to consider the supplementary information related to the dynamic properties of the network interactions. This is a main constraint leading to incorrect descriptions of some real-world phenomena or to incompletely capturing the details of certain real-life problems. To cope with the problem, this paper addresses the multilayer aspects of dynamic complex networks by analyzing the properties of intrinsically multilayered co-authorship networks, DBLP and Astro Physics, and presenting a novel multilayer model of dynamic complex networks. The model examines the layers' evolution (the layer birth/death process and lifetime) throughout the network evolution. In particular, this paper models the evolution of each node's membership in different layers by an Infinite Factorial Hidden Markov Model considering feature cascade, and thereby formulates the link generation process for intra-layer and inter-layer links. Although adjacency matrices are useful to describe traditional single-layer networks, such a representation is not sufficient to describe and analyze multilayer dynamic networks. This paper also extends a generalized mathematical infrastructure to address the problems posed by multilayer complex networks. The model inference is performed using Markov Chain Monte Carlo sampling strategies, given synthetic and real complex network data. Experimental results indicate a tremendous improvement in the performance of the proposed multilayer model in terms of sensitivity, specificity, positive and negative predictive values, positive and negative likelihood ratios, F1-score, Matthews correlation coefficient, and accuracy for two important applications of missing link prediction and future link forecasting. The experimental results also indicate the strong predictive power of the proposed model for the application of
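The multilayer structure itself (though not the paper's IFHMM inference machinery) is commonly encoded as a supra-adjacency matrix: per-layer adjacency blocks on the diagonal, and inter-layer couplings linking each node to its own replica in other layers. A minimal sketch, assuming a uniform coupling weight `omega`:

```python
import numpy as np

def supra_adjacency(layers, omega=1.0):
    """Stack per-layer adjacency matrices into one supra-adjacency
    matrix, coupling each node to its replica in every other layer
    with weight `omega` (a standard multilayer representation, not
    the paper's layer-membership model)."""
    L = len(layers)
    n = layers[0].shape[0]
    S = np.zeros((L * n, L * n))
    for a, A in enumerate(layers):
        S[a*n:(a+1)*n, a*n:(a+1)*n] = A            # intra-layer links
    for a in range(L):
        for b in range(L):
            if a != b:
                S[a*n:(a+1)*n, b*n:(b+1)*n] = omega * np.eye(n)  # inter-layer links
    return S

# two nodes, two layers: an edge in layer 1 only
A1 = np.array([[0.0, 1.0], [1.0, 0.0]])
A2 = np.zeros((2, 2))
S = supra_adjacency([A1, A2], omega=0.5)
```

Spectral properties of `S` (e.g. of its supra-Laplacian) are what make the plain adjacency-matrix picture insufficient, as the abstract notes.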
Modeling and simulation for fewer-axis grinding of complex surface
Li, Zhengjian; Peng, Xiaoqiang; Song, Ci
2017-10-01
As the basis of fewer-axis grinding of complex surfaces, the grinding mathematical model is of great importance. A mathematical model of the grinding wheel was established, from which the coordinates and normal vector of the wheel profile could be calculated. Through normal-vector matching at the cutter contact point and a coordinate-system transformation, the grinding mathematical model was established to work out the coordinates of the cutter location point. Based on the model, interference analysis was simulated to find the right position and posture of the workpiece for grinding. Then positioning errors of the workpiece, including the translational positioning error and the rotational positioning error, were analyzed, and the main locating datum was obtained. According to the analysis results, the grinding tool path was planned and generated to grind the complex surface, and good form accuracy was obtained. The grinding mathematical model is simple, feasible and can be widely applied.
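The normal-vector matching step amounts to finding a rotation that takes the wheel-profile normal onto the surface normal at the cutter contact point. A generic sketch using Rodrigues' formula (this is the standard construction, not necessarily the paper's specific derivation):

```python
import numpy as np

def rotation_between(n_from, n_to):
    """Rotation matrix taking unit vector n_from onto n_to via
    Rodrigues' formula; a minimal stand-in for the normal-vector
    matching step at the cutter contact point."""
    a = n_from / np.linalg.norm(n_from)
    b = n_to / np.linalg.norm(n_to)
    v = np.cross(a, b)                 # rotation axis (unnormalised)
    c = float(np.dot(a, b))            # cosine of rotation angle
    if np.isclose(c, -1.0):            # antiparallel: rotate 180 degrees
        axis = np.eye(3)[np.argmin(np.abs(a))]
        v = np.cross(a, axis)
        v /= np.linalg.norm(v)
        K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
        return np.eye(3) + 2.0 * K @ K
    K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

# align the wheel axis normal (z) with a surface normal along x
R = rotation_between(np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0]))
```

Composing this rotation with a translation to the contact point gives the coordinate-system transformation the abstract describes for computing the cutter location point.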
New Age of 3D Geological Modelling or Complexity is not an Issue Anymore
Mitrofanov, Aleksandr
2017-04-01
A geological model has significant value in almost all types of research related to regional mapping and geodynamics, and especially to the structural and resource geology of mineral deposits. A well-developed geological model must take into account all vital features of the modelled object without over-simplification, and should also adequately represent the geologist's interpretation. In recent years, with the gradual exhaustion of deposits of relatively simple morphology, geologists all over the world have been faced with the necessity of building representative models of more and more structurally complex objects. Meanwhile, the set of tools used for this has not changed significantly in the last two or three decades. The most widespread method of wireframe geological modelling was developed in the 1990s and is fully based on the engineering-design toolset (so-called CAD). Strings and polygons representing a section-based interpretation are used as an intermediate step in the generation of wireframes. Despite the significant time required, this type of modelling can still provide sufficient results for geological objects of simple and medium complexity. However, as complexity increases, more and more vital features of the deposit are sacrificed because of the fundamental inability (or the much greater modelling time required) of CAD-based explicit techniques to produce wireframes of the appropriate complexity. At the same time, an alternative technology, which is not based on the sectional approach and which uses fundamentally different mathematical algorithms, has been actively developed in a variety of other disciplines: medicine, advanced industrial design, and the game and cinema industries. In recent years this implicit technology has begun to be developed for geological modelling, and it is now represented by a very powerful set of tools that has been integrated into almost all major commercial software packages. Implicit
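The implicit technique alluded to typically interpolates a signed scalar field through the sample points with radial basis functions, so that the surface is the zero level set rather than a hand-digitised wireframe. A toy 2-D sketch with a biharmonic kernel and normal-offset constraints (illustrative only, not any vendor's algorithm):

```python
import numpy as np

def rbf_implicit(points, normals, eps=0.1):
    """Fit an implicit field f(x) = 0 through the data points by RBF
    interpolation, adding off-surface points offset along the normals
    (the classic implicit-modelling construction; kernel and offsets
    are illustrative choices)."""
    X = np.vstack([points,
                   points + eps * normals,    # outside: f = +eps
                   points - eps * normals])   # inside:  f = -eps
    d = np.concatenate([np.zeros(len(points)),
                        np.full(len(points), eps),
                        np.full(len(points), -eps)])
    r = np.sqrt(np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
    w = np.linalg.solve(r, d)                 # biharmonic kernel phi(r) = r

    def f(q):
        rq = np.sqrt(np.sum((X - q) ** 2, axis=-1))
        return float(rq @ w)
    return f

# toy "ore body" outline: points on the unit circle, outward normals
t = np.linspace(0, 2 * np.pi, 12, endpoint=False)
pts = np.c_[np.cos(t), np.sin(t)]
f = rbf_implicit(pts, pts)    # a circle's normals equal the points themselves
```

The surface is then extracted by contouring `f` at zero, which is what frees the geologist from section-by-section string digitising.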
Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems
Koch, Patrick Nathan
Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, which facilitates the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis, (2) statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration, and (3) noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.
Building a pseudo-atomic model of the anaphase-promoting complex
International Nuclear Information System (INIS)
Kulkarni, Kiran; Zhang, Ziguo; Chang, Leifu; Yang, Jing; Fonseca, Paula C. A. da; Barford, David
2013-01-01
This article describes an example of molecular replacement in which atomic models are used to interpret electron-density maps determined using single-particle electron-microscopy data. The anaphase-promoting complex (APC/C) is a large E3 ubiquitin ligase that regulates progression through specific stages of the cell cycle by coordinating the ubiquitin-dependent degradation of cell-cycle regulatory proteins. Depending on the species, the active form of the APC/C consists of 14–15 different proteins that assemble into a 20-subunit complex with a mass of approximately 1.3 MDa. A hybrid approach of single-particle electron microscopy and protein crystallography of individual APC/C subunits has been applied to generate pseudo-atomic models of various functional states of the complex. Three approaches for assigning regions of the EM-derived APC/C density map to specific APC/C subunits are described. This information was used to dock atomic models of APC/C subunits, determined either by protein crystallography or homology modelling, to specific regions of the APC/C EM map, allowing the generation of a pseudo-atomic model corresponding to 80% of the entire complex
Multi-level emulation of complex climate model responses to boundary forcing data
Tran, Giang T.; Oliver, Kevin I. C.; Holden, Philip B.; Edwards, Neil R.; Sóbester, András; Challenor, Peter
2018-04-01
Climate model components involve both high-dimensional input and output fields. It is desirable to efficiently generate spatio-temporal outputs of these models for applications in integrated assessment modelling or to assess the statistical relationship between such sets of inputs and outputs, for example in uncertainty analysis. However, the need for efficiency often compromises the fidelity of output through the use of low-complexity models. Here, we develop a technique which combines statistical emulation with a dimensionality-reduction technique to emulate a wide range of outputs from an atmospheric general circulation model, PLASIM, as functions of the boundary forcing prescribed by the ocean component of a lower-complexity climate model, GENIE-1. Although accurate and detailed spatial information on atmospheric variables such as precipitation and wind speed is well beyond the capability of GENIE-1's energy-moisture balance model of the atmosphere, this study demonstrates that the output of this model is useful in predicting PLASIM's spatio-temporal fields through multi-level emulation. Meaningful information from the fast model, GENIE-1, was extracted by utilising the correlation between variables of the same type in the two models and between variables of different types in PLASIM. We present here the construction and validation of several PLASIM variable emulators and discuss their potential use in developing a hybrid model with statistical components.
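The emulation-plus-dimensionality-reduction idea can be sketched on a toy simulator: project the output fields onto their leading principal components, fit a cheap regression per component score, and reconstruct. Here plain SVD and polynomial regression stand in for the EOF/Gaussian-process machinery such studies typically use; the "simulator" below is invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy "expensive simulator": scalar forcing x -> field over 50 grid cells
grid = np.linspace(0.0, 1.0, 50)
def simulator(x):
    return np.sin(2 * np.pi * grid) * x + 0.5 * np.cos(np.pi * grid) * x**2

X = rng.uniform(0.0, 1.0, 30)                 # design points (forcings)
Y = np.array([simulator(x) for x in X])       # simulated output fields

# dimensionality reduction: project fields onto leading components
mean = Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Y - mean, full_matrices=False)
k = 2
scores = U[:, :k] * s[:k]                     # per-run component scores

# emulate each score as a function of the forcing
# (polynomial regression as a cheap stand-in for a GP emulator)
coef = [np.polyfit(X, scores[:, j], deg=3) for j in range(k)]

def emulate(x):
    z = np.array([np.polyval(c, x) for c in coef])
    return mean + z @ Vt[:k]

pred = emulate(0.5)                           # emulated field at a new forcing
err = np.max(np.abs(pred - simulator(0.5)))
```

In the multi-level setting described above, the regressors would additionally include fields from the fast model (GENIE-1), exploiting the cross-model correlations the abstract mentions.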
Directory of Open Access Journals (Sweden)
O. M. Pshinko
2016-12-01
Purpose. The paper aims to develop rating models and related information technologies designed to solve the tasks of strategic planning of the development of administrative and territorial units, as well as the tasks of multi-criteria control of inhomogeneous multiparameter objects. Methodology. When solving problems of strategic planning of administrative and territorial development and of managing heterogeneous classes of controlled objects, a set of coordinated methods is used, namely multi-criteria analysis of the properties of the objects of planning and management, diagnostics of state parameters, and forecasting and management of complex systems of different classes. Their states are estimated by sets of quality indicators of different kinds, and are represented by individual models of the operating process. A new information technology is proposed and created to implement the strategic planning and management tasks. This technology uses procedures for solving typical tasks that are implemented in MS SQL Server. Findings. A new approach to developing models for the analysis and management of classes of complex systems based on ratings has been proposed. Rating models for the analysis of multi-criteria, multiparameter systems have been obtained. These systems are managed on the basis of the parameters of the current and predicted states under a non-uniform distribution of resources. A procedure for analysing the sensitivity of the rating model to changes in the parameters of the non-uniform distribution of resources has been developed. An information technology for strategic planning and management of heterogeneous classes of objects based on the rating model has been created. Originality. This article proposes a new approach that uses rating indicators as a general model for strategic planning of the development and management of heterogeneous objects that can be characterized by sets of parameters measured on different scales
PeTTSy: a computational tool for perturbation analysis of complex systems biology models.
Domijan, Mirela; Brown, Paul E; Shulgin, Boris V; Rand, David A
2016-03-10
Over the last decade sensitivity analysis techniques have been shown to be very useful to analyse complex and high dimensional Systems Biology models. However, many of the currently available toolboxes have either used parameter sampling, been focused on a restricted set of model observables of interest, studied optimisation of an objective function, or have not dealt with multiple simultaneous model parameter changes where the changes can be permanent or temporary. Here we introduce our new, freely downloadable toolbox, PeTTSy (Perturbation Theory Toolbox for Systems). PeTTSy is a package for MATLAB which implements a wide array of techniques for the perturbation theory and sensitivity analysis of large and complex ordinary differential equation (ODE) based models. PeTTSy is a comprehensive modelling framework that introduces a number of new approaches and that fully addresses analysis of oscillatory systems. It examines sensitivity analysis of the models to perturbations of parameters, where the perturbation timing, strength, length and overall shape can be controlled by the user. This can be done in a system-global setting, namely, the user can determine how many parameters to perturb, by how much and for how long. PeTTSy also offers the user the ability to explore the effect of the parameter perturbations on many different types of outputs: period, phase (timing of peak) and model solutions. PeTTSy can be employed on a wide range of mathematical models including free-running and forced oscillators and signalling systems. To enable experimental optimisation using the Fisher Information Matrix it efficiently allows one to combine multiple variants of a model (i.e. a model with multiple experimental conditions) in order to determine the value of new experiments. It is especially useful in the analysis of large and complex models involving many variables and parameters. PeTTSy is a comprehensive tool for analysing large and complex models of regulatory and
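PeTTSy itself is a MATLAB package; as a language-neutral illustration of the simplest member of its perturbation-analysis family, here is a central-difference parameter sensitivity of a toy ODE trajectory. The logistic model and all parameter values are invented, not taken from the toolbox:

```python
import numpy as np

def integrate(r, K, y0=0.1, dt=0.01, n_steps=1000):
    """Forward-Euler solution of the logistic ODE dy/dt = r*y*(1 - y/K);
    a toy stand-in for a model run inside a sensitivity-analysis loop."""
    y = y0
    traj = []
    for _ in range(n_steps):
        y += dt * r * y * (1.0 - y / K)
        traj.append(y)
    return np.array(traj)

def sensitivity(param, base=dict(r=1.0, K=2.0), h=1e-4):
    """Central-difference sensitivity of the whole trajectory with
    respect to one parameter: d(traj)/d(param)."""
    hi = dict(base); hi[param] += h
    lo = dict(base); lo[param] -= h
    return (integrate(**hi) - integrate(**lo)) / (2.0 * h)

sK = sensitivity("K")   # at steady state y -> K, so dy/dK -> 1
```

Toolboxes like PeTTSy go beyond this by computing sensitivities of derived quantities (period, peak phase) and by letting the perturbation itself be time-limited and shaped, rather than a permanent parameter shift.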
Directory of Open Access Journals (Sweden)
Nathir A. F. Al-Rawashdeh
2013-01-01
The inclusion complexes of selected sunscreen agents, namely oxybenzone (Oxy), octocrylene (Oct), and ethylhexyl-methoxycinnamate (Cin), with β-cyclodextrin (β-CD) were studied by UV-Vis spectroscopy, differential scanning calorimetry (DSC), 13C NMR techniques, and molecular mechanics (MM) calculations and modeling. A molecular modeling (MM) study of the entire process of the formation of 1:1 stoichiometry sunscreen agent/β-cyclodextrin structures has been used to contribute to the understanding and rationalization of the experimental results. Molecular mechanics calculations, together with 13C NMR measurements, for the complexes with β-CD have been used to describe details of the structural, energetic, and dynamic features of the host-guest complexes. Accurate structures of the CD inclusion complexes have been derived from molecular mechanics (MM) calculations and modeling. The photodegradation reaction of the sunscreen agents' molecules in lotion was explored using UV-Vis spectroscopy. It has been demonstrated that the photostability of these selected sunscreen agents is enhanced upon forming inclusion complexes with β-CD in lotion. The results of this study demonstrate that β-CD can be utilized as a photostabilizer additive for enhancing the photostability of the selected sunscreen agents' molecules.
A computational approach to modeling cellular-scale blood flow in complex geometry
Balogh, Peter; Bagchi, Prosenjit
2017-04-01
We present a computational methodology for modeling cellular-scale blood flow in arbitrary and highly complex geometry. Our approach is based on immersed-boundary methods, which allow modeling flows in arbitrary geometry while resolving the large deformation and dynamics of every blood cell with high fidelity. The present methodology seamlessly integrates different modeling components dealing with stationary rigid boundaries of complex shape, moving rigid bodies, and highly deformable interfaces governed by nonlinear elasticity. Thus it enables us to simulate 'whole' blood suspensions flowing through physiologically realistic microvascular networks that are characterized by multiple bifurcating and merging vessels, as well as geometrically complex lab-on-chip devices. The focus of the present work is on the development of a versatile numerical technique that is able to consider deformable cells and rigid bodies flowing in three-dimensional arbitrarily complex geometries over a diverse range of scenarios. After describing the methodology, a series of validation studies are presented against analytical theory, experimental data, and previous numerical results. Then, the capability of the methodology is demonstrated by simulating flows of deformable blood cells and heterogeneous cell suspensions in both physiologically realistic microvascular networks and geometrically intricate microfluidic devices. It is shown that the methodology can predict several complex microhemodynamic phenomena observed in vascular networks and microfluidic devices. The present methodology is robust and versatile, and has the potential to scale up to very large microvascular networks at organ levels.
International Nuclear Information System (INIS)
Klukas, M.H.; Davis, P.A.
2000-01-01
AECL is undertaking the validation of ADDAM, an atmospheric dispersion and dose code based on the Canadian Standards Association model CSA N288.2. The key component of the validation program involves comparison of model-predicted and measured vertical and lateral dispersion parameters, effective release height and air concentrations. A wind tunnel study of the dispersion of exhaust gases from the CANDU complex at Wolsong, Korea provides test data for dispersion over uniform and complex terrain. The test data are for distances close enough to the release points to evaluate the model for exclusion area boundaries (EAB) as small as 500 m. Lateral and vertical dispersion is described well for releases over uniform terrain, but the model tends to over-predict these parameters for complex terrain. Both plume rise and entrainment are modelled conservatively, and the way they are combined in the model produces conservative estimates of the effective release height for low and high wind speeds. Estimates for the medium wind speed case (50-m wind speed of 3.8 m s⁻¹) are conservative when the correction for entrainment is made. For the highest ground-level concentrations, those of greatest interest in a safety analysis, 82% of the predictions were within a factor of 2 of the observed values. The model can be used with confidence to predict air concentrations of exhaust gases at the Wolsong site for neutral conditions, even for flows over the hills to the west, and is unlikely to substantially under-predict concentrations. (author)
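The kind of ground-level air-concentration estimate being validated can be illustrated with the standard Gaussian plume formula for an elevated release with full ground reflection. The numbers below are invented for illustration; they are not the CSA N288.2 parameterisation or the Wolsong test data:

```python
import numpy as np

def ground_conc(Q, u, y, sigma_y, sigma_z, H):
    """Ground-level concentration (g/m^3) downwind of a continuous
    elevated point source: textbook Gaussian plume with full ground
    reflection.  Q release rate (g/s), u wind speed (m/s), y crosswind
    offset (m), sigma_y/sigma_z dispersion parameters (m) at the
    downwind distance of interest, H effective release height (m)."""
    return (Q / (np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2.0 * sigma_y**2))
            * np.exp(-H**2 / (2.0 * sigma_z**2)))

# hypothetical case: 1 g/s release, 3.8 m/s wind, 60 m effective height,
# sigmas loosely typical of neutral stability near 500 m downwind
chi = ground_conc(Q=1.0, u=3.8, y=0.0, sigma_y=36.0, sigma_z=18.0, H=60.0)
```

A validation exercise like the one described compares such predictions against measured concentrations and asks what fraction falls within a factor of 2, as quoted in the abstract.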
Directory of Open Access Journals (Sweden)
Nciri M.
2015-01-01
This paper presents an innovative approach for modelling the viscous behaviour of short-fibre reinforced composites (SFRC) with complex distributions of fibre orientations and over a wide range of strain rates. As an alternative to more complex homogenisation methods, the model is based on an additive decomposition of the state potential for the computation of the composite's macroscopic behaviour. Thus, the composite material is seen as the assembly of a matrix medium and several linear elastic fibre media. The division of short fibres into several families means that complex orientation distributions, or random orientations, can be easily modelled. The matrix behaviour is strain-rate sensitive, i.e. viscoelastic and/or viscoplastic. The viscoelastic constitutive laws are based on a generalised linear Maxwell model, and the modelling of the viscoplasticity is based on an overstress approach. The model is tested for the case of a polypropylene reinforced with short glass fibres with distributed orientations and subjected to uniaxial tensile tests, in different loading directions and under different strain rates. Results demonstrate the efficiency of the model over a wide range of strain rates.
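The generalised linear Maxwell model underlying the viscoelastic part reduces to a Prony series for the relaxation modulus, E(t) = E_inf + Σ E_i·exp(−t/τ_i). A minimal sketch with invented moduli and relaxation times (not the paper's identified material parameters):

```python
import numpy as np

def relaxation_modulus(t, E_inf, branches):
    """Prony-series relaxation modulus of a generalised linear Maxwell
    model: a long-term spring E_inf in parallel with spring-dashpot
    branches (E_i, tau_i).  All values here are illustrative."""
    E = np.full_like(t, E_inf, dtype=float)
    for E_i, tau_i in branches:
        E += E_i * np.exp(-t / tau_i)
    return E

t = np.linspace(0.0, 10.0, 201)                 # time (s)
branches = [(800.0, 0.1), (400.0, 1.0)]         # (stiffness MPa, relax. time s)
E = relaxation_modulus(t, E_inf=1000.0, branches=branches)
```

The strain-rate sensitivity follows directly: fast loading probes the stiff short-time limit E_inf + ΣE_i, while slow loading relaxes toward E_inf, which is the behaviour the composite model inherits in its matrix phase.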
A framework for modelling the complexities of food and water security under globalisation
Dermody, Brian J.; Sivapalan, Murugesu; Stehfest, Elke; van Vuuren, Detlef P.; Wassen, Martin J.; Bierkens, Marc F. P.; Dekker, Stefan C.
2018-01-01
We present a new framework for modelling the complexities of food and water security under globalisation. The framework sets out a method to capture regional and sectoral interdependencies and cross-scale feedbacks within the global food system that contribute to emergent water use patterns. The framework integrates aspects of existing models and approaches in the fields of hydrology and integrated assessment modelling. The core of the framework is a multi-agent network of city agents connected by infrastructural trade networks. Agents receive socio-economic and environmental constraint information from integrated assessment models and hydrological models respectively and simulate complex, socio-environmental dynamics that operate within those constraints. The emergent changes in food and water resources are aggregated and fed back to the original models with minimal modification of the structure of those models. It is our conviction that the framework presented can form the basis for a new wave of decision tools that capture complex socio-environmental change within our globalised world. In doing so they will contribute to illuminating pathways towards a sustainable future for humans, ecosystems and the water they share.
Simple versus complex models of trait evolution and stasis as a response to environmental change
Hunt, Gene; Hopkins, Melanie J.; Lidgard, Scott
2015-04-01
Previous analyses of evolutionary patterns, or modes, in fossil lineages have focused overwhelmingly on three simple models: stasis, random walks, and directional evolution. Here we use likelihood methods to fit an expanded set of evolutionary models to a large compilation of ancestor-descendant series of populations from the fossil record. In addition to the standard three models, we assess more complex models with punctuations and shifts from one evolutionary mode to another. As in previous studies, we find that stasis is common in the fossil record, as is a strict version of stasis that entails no real evolutionary changes. Incidence of directional evolution is relatively low (13%), but higher than in previous studies because our analytical approach can more sensitively detect noisy trends. Complex evolutionary models are often favored, overwhelmingly so for sequences comprising many samples. This finding is consistent with evolutionary dynamics that are, in reality, more complex than any of the models we consider. Furthermore, the timing of shifts in evolutionary dynamics varies among traits measured from the same series. Finally, we use our empirical collection of evolutionary sequences and a long and highly resolved proxy for global climate to inform simulations in which traits adaptively track temperature changes over time. When realistically calibrated, we find that this simple model can reproduce important aspects of our paleontological results. We conclude that observed paleontological patterns, including the prevalence of stasis, need not be inconsistent with adaptive evolution, even in the face of unstable physical environments.
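The model comparison at the core of such studies can be sketched in a few lines: fit stasis and an unbiased random walk to a trait series by maximum likelihood and compare information criteria. This toy version ignores sampling error and small-sample corrections that the published methods handle carefully:

```python
import numpy as np

def log_lik_stasis(x):
    """Stasis: samples are iid Normal(theta, omega) around one optimum
    (profile log-likelihood at the MLEs)."""
    omega = x.var()
    return -0.5 * len(x) * (np.log(2 * np.pi * omega) + 1.0)

def log_lik_random_walk(x):
    """Unbiased random walk: successive differences iid Normal(0, s2)."""
    d = np.diff(x)
    s2 = np.mean(d ** 2)
    return -0.5 * len(d) * (np.log(2 * np.pi * s2) + 1.0)

def aic(ll, k):
    return 2 * k - 2 * ll

rng = np.random.default_rng(42)
walk = np.cumsum(rng.normal(0.0, 1.0, 60))    # simulated drifting lineage
static = rng.normal(5.0, 0.3, 60)             # simulated static lineage

walk_prefers_rw = aic(log_lik_random_walk(walk), 1) < aic(log_lik_stasis(walk), 2)
static_prefers_st = aic(log_lik_stasis(static), 2) < aic(log_lik_random_walk(static), 1)
```

The expanded model set in the study above adds directional drift, punctuations, and mode shifts as further candidates in the same likelihood framework.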
Efficient Simulation Modeling of an Integrated High-Level-Waste Processing Complex
International Nuclear Information System (INIS)
Gregory, Michael V.; Paul, Pran K.
2000-01-01
An integrated computational tool named the Production Planning Model (ProdMod) has been developed to simulate the operation of the entire high-level-waste complex (HLW) at the Savannah River Site (SRS) over its full life cycle. ProdMod is used to guide SRS management in operating the waste complex in an economically efficient and environmentally sound manner. SRS HLW operations are modeled using coupled algebraic equations. The dynamic nature of plant processes is modeled in the form of a linear construct in which the time dependence is implicit. Batch processes are modeled in discrete event-space, while continuous processes are modeled in time-space. The ProdMod methodology maps between event-space and time-space such that the inherent mathematical discontinuities in batch process simulation are avoided without sacrificing any of the necessary detail in the batch recipe steps. Modeling the processes separately in event- and time-space using linear constructs, and then coupling the two spaces, has accelerated the speed of simulation compared to a typical dynamic simulation. The ProdMod simulator models have been validated against operating data and other computer codes. Case studies have demonstrated the usefulness of the ProdMod simulator in developing strategies that demonstrate significant cost savings in operating the SRS HLW complex and in verifying the feasibility of newly proposed processes
A framework for modelling the complexities of food and water security under globalisation
Directory of Open Access Journals (Sweden)
B. J. Dermody
2018-01-01
Full Text Available We present a new framework for modelling the complexities of food and water security under globalisation. The framework sets out a method to capture regional and sectoral interdependencies and cross-scale feedbacks within the global food system that contribute to emergent water use patterns. The framework integrates aspects of existing models and approaches in the fields of hydrology and integrated assessment modelling. The core of the framework is a multi-agent network of city agents connected by infrastructural trade networks. Agents receive socio-economic and environmental constraint information from integrated assessment models and hydrological models respectively and simulate complex, socio-environmental dynamics that operate within those constraints. The emergent changes in food and water resources are aggregated and fed back to the original models with minimal modification of the structure of those models. It is our conviction that the framework presented can form the basis for a new wave of decision tools that capture complex socio-environmental change within our globalised world. In doing so they will contribute to illuminating pathways towards a sustainable future for humans, ecosystems and the water they share.
A dynamic globalization model for large eddy simulation of complex turbulent flow
Energy Technology Data Exchange (ETDEWEB)
Choi, Hae Cheon; Park, No Ma; Kim, Jin Seok [Seoul National Univ., Seoul (Korea, Republic of)]
2005-07-01
A dynamic subgrid-scale model is proposed for large eddy simulation of turbulent flows in complex geometry. The eddy viscosity model by Vreman [Phys. Fluids, 16, 3670 (2004)] is considered as a base model. A priori tests with the original Vreman model show that it predicts the correct profile of subgrid-scale dissipation in turbulent channel flow, but the optimal model coefficient is far from universal. Dynamic procedures for determining the model coefficient are proposed based on the 'global equilibrium' between the subgrid-scale dissipation and the viscous dissipation. An important feature of the proposed procedures is that the model coefficient so determined is globally constant in space and varies only in time. Large eddy simulations with the present dynamic model are conducted for forced isotropic turbulence, turbulent channel flow and flow over a sphere, showing excellent agreement with previous results.
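The 'global equilibrium' idea, a single spatially constant coefficient chosen so that the volume-averaged model dissipation balances a volume-averaged target, can be caricatured as follows (illustrative stand-in fields, not LES data; the actual procedure works with the Vreman model's velocity-gradient invariants):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-cell estimates on a coarse 3-D grid (stand-ins for quantities
# an LES code would compute from the resolved velocity field):
eps_visc = rng.uniform(0.5, 1.5, size=(16, 16, 16))   # target (viscous) dissipation
eps_model = rng.uniform(0.8, 1.2, size=(16, 16, 16))  # model dissipation per unit coefficient

# "Global equilibrium": one coefficient C(t), constant in space, chosen so the
# volume-averaged model dissipation balances the volume-averaged target.
C = eps_visc.mean() / eps_model.mean()
```

The key point of the design is that C varies only in time, which avoids the clipping and averaging difficulties of locally dynamic coefficients in complex geometry.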
Directory of Open Access Journals (Sweden)
Михаил Юрьевич Чернышов
2013-12-01
Full Text Available A software complex (SC) elaborated by the authors on the basis of the language LMPL is described; it is a software tool intended for the synthesis of applied software models and meta-models constructed on the principles of mathematical programming (MP). LMPL provides an explicit, declarative representation of MP-models, supports automatic construction and transformation of models, and allows external software packages to be added. The following software versions of the SC have been implemented: (1) an SC intended for representing the process of choosing an optimal hydroelectric power plant model (on the principles of meta-modeling) and (2) an SC intended for representing the logic-sense relations between the models of a set of discourse formations in the discourse meta-model.
Looping and clustering model for the organization of protein-DNA complexes on the bacterial genome
Walter, Jean-Charles; Walliser, Nils-Ole; David, Gabriel; Dorignac, Jérôme; Geniet, Frédéric; Palmeri, John; Parmeggiani, Andrea; Wingreen, Ned S.; Broedersz, Chase P.
2018-03-01
The bacterial genome is organized by a variety of associated proteins inside a structure called the nucleoid. These proteins can form complexes on DNA that play a central role in various biological processes, including chromosome segregation. A prominent example is the large ParB-DNA complex, which forms an essential component of the segregation machinery in many bacteria. ChIP-Seq experiments show that ParB proteins localize around centromere-like parS sites on the DNA to which ParB binds specifically, and spreads from there over large sections of the chromosome. Recent theoretical and experimental studies suggest that DNA-bound ParB proteins can interact with each other to condense into a coherent 3D complex on the DNA. However, the structural organization of this protein-DNA complex remains unclear, and a predictive quantitative theory for the distribution of ParB proteins on DNA is lacking. Here, we propose the looping and clustering model, which employs a statistical physics approach to describe protein-DNA complexes. The looping and clustering model accounts for the extrusion of DNA loops from a cluster of interacting DNA-bound proteins that is organized around a single high-affinity binding site. Conceptually, the structure of the protein-DNA complex is determined by a competition between attractive protein interactions and loop closure entropy of this protein-DNA cluster on the one hand, and the positional entropy for placing loops within the cluster on the other. Indeed, we show that the protein interaction strength determines the ‘tightness’ of the loopy protein-DNA complex. Thus, our model provides a theoretical framework for quantitatively computing the binding profiles of ParB-like proteins around a cognate (parS) binding site.
Progress on Complex Langevin simulations of a finite density matrix model for QCD
Energy Technology Data Exchange (ETDEWEB)
Bloch, Jacques [Univ. of Regensburg (Germany). Inst. for Theoretical Physics]; Glesaan, Jonas [Swansea Univ., Swansea, U.K.]; Verbaarschot, Jacobus [Stony Brook Univ., NY (United States). Dept. of Physics and Astronomy]; Zafeiropoulos, Savvas [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); College of William and Mary, Williamsburg, VA (United States); Heidelberg Univ. (Germany). Inst. for Theoretical Physics]
2018-04-01
We study the Stephanov model, an RMT model for QCD at finite density, using the Complex Langevin algorithm. A naive implementation of the algorithm shows convergence towards the phase-quenched or quenched theory rather than to the intended theory with dynamical quarks. A detailed analysis of this issue and a potential resolution of the failure of this algorithm are discussed. We study the effect of gauge cooling on the Dirac eigenvalue distribution and the time evolution of the norm for various cooling norms, which were specifically designed to remove the pathologies of the complex Langevin evolution. The cooling is further supplemented with a shifted representation for the random matrices. Unfortunately, none of these modifications generate a substantial improvement of the complex Langevin evolution, and the final results still do not agree with the analytical predictions.
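The complex Langevin idea itself can be demonstrated on a one-variable Gaussian toy model (not the Stephanov matrix model): complexify the variable and evolve it with the drift from the complex action plus real Gaussian noise. For a Gaussian action with positive real part the method is known to converge to the correct expectation values.

```python
import numpy as np

rng = np.random.default_rng(2)

a = 1.0 + 0.5j            # complex "mass": action S(z) = a z^2 / 2
dt, nsteps, ntherm = 0.01, 200_000, 5_000

z = 0.0 + 0.0j
samples = []
for step in range(nsteps):
    # complexified Langevin update: drift -dS/dz = -a z, plus real noise
    z = z - a * z * dt + np.sqrt(2 * dt) * rng.normal()
    if step >= ntherm:
        samples.append(z * z)

z2_est = np.mean(samples)
z2_exact = 1.0 / a        # analytic <z^2> for the Gaussian model
```

The failures discussed in the abstract arise in non-Gaussian models, where the distribution of the complexified variable can develop heavy tails and spoil exactly this kind of agreement.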
arXiv Spin models in complex magnetic fields: a hard sign problem
de Forcrand, Philippe
2018-01-01
Coupling spin models to complex external fields can give rise to interesting phenomena like zeroes of the partition function (Lee-Yang zeroes, edge singularities) or oscillating propagators. Unfortunately, it usually also leads to a severe sign problem that can be overcome only in special cases; if the partition function has zeroes, the sign problem is even representation-independent at these points. In this study, we couple the N-state Potts model in different ways to a complex external magnetic field and discuss the above-mentioned phenomena and their relations, based on analytic calculations (1D) and results obtained using a modified cluster algorithm (general D) that in many cases either cures or at least drastically reduces the sign problem induced by the complex external field.
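In 1D these phenomena are accessible by transfer matrices. The sketch below (the Ising case rather than a general N-state Potts model, for brevity) evaluates the partition function of a periodic chain at purely imaginary field, where Z is real and its sign changes bracket Lee-Yang zeros:

```python
import numpy as np

def Z_ising_1d(L, beta, J, h):
    """Partition function of the periodic 1D Ising chain via the transfer matrix."""
    T = np.array([[np.exp(beta * (J + h)), np.exp(-beta * J)],
                  [np.exp(-beta * J),      np.exp(beta * (J - h))]], dtype=complex)
    return np.trace(np.linalg.matrix_power(T, L))

L, beta, J = 8, 1.0, 0.5
thetas = np.linspace(0.0, np.pi, 400)
Z = np.array([Z_ising_1d(L, beta, J, 1j * t) for t in thetas])

# At purely imaginary field the partition function is real (spin-flip symmetry),
# and it changes sign along the scan: Lee-Yang zeros sit between sign changes.
signs = np.sign(Z.real)
n_crossings = np.count_nonzero(np.diff(signs))
```

At such a zero the average sign of any positive-weight reformulation must vanish, which is the representation-independent sign problem the abstract refers to.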
Logic-based hierarchies for modeling behavior of complex dynamic systems with applications
International Nuclear Information System (INIS)
Hu, Y.S.; Modarres, M.
2000-01-01
Most complex systems are best represented in the form of a hierarchy. The Goal Tree Success Tree and Master Logic Diagram (GTST-MLD) are proven, powerful hierarchical methods for representing a complex snapshot of plant knowledge. To represent the dynamic behavior of complex systems, fuzzy logic is applied in place of binary logic to extend the power of GTST-MLD. Such a fuzzy-logic-based hierarchy is called a Dynamic Master Logic Diagram (DMLD). This chapter compares the use of GTST-DMLD as a modeling tool for systems whose relationships are physical, binary logical or fuzzy logical. This is shown by applying GTST-DMLD to the Direct Containment Heating (DCH) phenomenon at pressurized water reactors, an important safety issue being addressed by the nuclear industry. (orig.)
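Replacing binary by fuzzy logic in such goal hierarchies is often done with min/max combination rules. A minimal sketch (hypothetical component names and degrees of success; min/max is one common convention, not necessarily the chapter's exact choice):

```python
# Degrees of success (0 = failed, 1 = fully functional) for leaf components.
support = {"pump_A": 0.9, "pump_B": 0.4, "power": 0.8, "control": 1.0}

def AND(*xs):   # fuzzy AND: a goal needs all of its sub-goals (min rule)
    return min(xs)

def OR(*xs):    # fuzzy OR: redundancy, any sub-goal suffices (max rule)
    return max(xs)

flow = OR(support["pump_A"], support["pump_B"])     # redundant pumps
goal = AND(flow, support["power"], support["control"])
```

With binary logic the degraded pump_B (0.4) would have to be rounded to failed or working; the fuzzy hierarchy propagates the partial degree of success up the tree instead.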
Modeling the surface tension of complex, reactive organic-inorganic mixtures
Schwier, A. N.; Viglione, G. A.; Li, Z.; McNeill, V. Faye
2013-11-01
Atmospheric aerosols can contain thousands of organic compounds which impact aerosol surface tension, affecting aerosol properties such as heterogeneous reactivity, ice nucleation, and cloud droplet formation. We present new experimental data for the surface tension of complex, reactive organic-inorganic aqueous mixtures mimicking tropospheric aerosols. Each solution contained 2-6 organic compounds, including methylglyoxal, glyoxal, formaldehyde, acetaldehyde, oxalic acid, succinic acid, leucine, alanine, glycine, and serine, with and without ammonium sulfate. We test two semi-empirical surface tension models and find that most reactive, complex, aqueous organic mixtures which do not contain salt are well described by a weighted Szyszkowski-Langmuir (S-L) model which was first presented by Henning et al. (2005). Two approaches for modeling the effects of salt were tested: (1) the Tuckermann approach (an extension of the Henning model with an additional explicit salt term), and (2) a new implicit method proposed here which employs experimental surface tension data, obtained for each organic species in the presence of salt, with the Henning model. We recommend the use of method (2) for surface tension modeling of aerosol systems because the Henning model (using data obtained from organic-inorganic systems) and the Tuckermann approach provide similar modeling results and goodness-of-fit (χ2) values, yet the Henning model is a simpler and more physical approach to modeling the effects of salt, requiring fewer empirically determined parameters.
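A concentration-weighted Szyszkowski-Langmuir mixture rule can be sketched as below (one plausible form with illustrative, unfitted parameters; the weighting used by Henning et al. and in this paper may differ in detail):

```python
import numpy as np

sigma_w = 72.0   # surface tension of pure water near 25 C, mN/m

def weighted_sl(conc, a, b):
    """Concentration-weighted Szyszkowski-Langmuir mixture rule (one plausible
    form): each solute's S-L depression a_i*ln(1 + C_tot/b_i) is weighted by
    its fraction of the total organic concentration."""
    conc, a, b = map(np.asarray, (conc, a, b))
    ctot = conc.sum()
    w = conc / ctot
    return sigma_w - np.sum(w * a * np.log(1.0 + ctot / b))

# Illustrative (not fitted) parameters for a 3-component organic mixture:
sigma = weighted_sl(conc=[0.5, 0.2, 0.1], a=[10.0, 6.0, 4.0], b=[1.0, 0.5, 2.0])
```

Method (2) of the abstract keeps this same functional form and simply feeds it a_i, b_i values fitted from measurements made in the presence of salt, instead of adding an explicit salt term.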
Complex energy eigenstates in a model with two equal mass particles
Energy Technology Data Exchange (ETDEWEB)
Gleiser, R J; Reula, D A; Moreschi, O M [Universidad Nacional de Cordoba (Argentina). Inst. de Matematica, Astronomia y Fisica
1980-09-01
The properties of a simple quantum mechanical model for the decay of two equal mass particles are studied and related to some recent work on complex energy eigenvalues. It consists essentially of a generalization of the Lee-Friedrichs model for an unstable particle and gives a highly idealized version of the K⁰-anti-K⁰ system, including CP violation. The model is completely solvable, thus allowing a comparison with the well known Weisskopf-Wigner formalism for the decay amplitudes. A different model describing the same system is also briefly outlined.
Complex Behavior in a Selective Aging Neuron Model Based on Small World Networks
International Nuclear Information System (INIS)
Zhang Guiqing; Chen Tianlun
2008-01-01
Complex behavior in a selective aging simple neuron model based on small world networks is investigated. The basic elements of the model are endowed with the main features of a neuron function. The structure of the selective aging neuron model is discussed. We also give some properties of the new network and find that the neuron model displays power-law behavior. If the brain network is a small-world-like network, the mean avalanche size is almost the same unless the aging parameter is large enough.
Development and evaluation of a musculoskeletal model of the elbow joint complex
Gonzalez, Roger V.; Hutchins, E. L.; Barr, Ronald E.; Abraham, Lawrence D.
1993-01-01
This paper describes the development and evaluation of a musculoskeletal model that represents human elbow flexion-extension and forearm pronation-supination. The length, velocity, and moment arm for each of the eight musculotendon actuators were based on skeletal anatomy and position. Musculotendon parameters were determined for each actuator and verified by comparing analytical torque-angle curves with experimental joint torque data. The parameters and skeletal geometry were also utilized in the musculoskeletal model for the analysis of ballistic elbow joint complex movements. The key objective was to develop a computational model, guided by parameterized optimal control, to investigate the relationship among patterns of muscle excitation, individual muscle forces, and movement kinematics. The model was verified using experimental kinematic, torque, and electromyographic data from volunteer subjects performing ballistic elbow joint complex movements.
From complex spatial dynamics to simple Markov chain models: do predators and prey leave footprints?
DEFF Research Database (Denmark)
Nachman, Gøsta Støger; Borregaard, Michael Krabbe
2010-01-01
In this paper we present a concept for using presence-absence data to recover information on the population dynamics of predator-prey systems. We use a highly complex and spatially explicit simulation model of a predator-prey mite system to generate simple presence-absence data: the number ... to another, are then depicted in a state transition diagram, constituting the "footprints" of the underlying population dynamics. We investigate to what extent changes in the population processes modeled in the complex simulation (i.e. the predator's functional response and the dispersal rates of both ...) ... of transition probabilities on state variables, and combine this information in a Markov chain transition matrix model. Finally, we use this extended model to predict the long-term dynamics of the system and to reveal its asymptotic steady state properties.
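The end product described in the abstract, a Markov chain transition matrix over patch states and its asymptotic steady state, can be sketched as follows (hypothetical transition probabilities, not values from the mite simulation):

```python
import numpy as np

# Illustrative transition matrix between patch states observed at census times
# (rows: current state, columns: next state). States:
# 0 = empty, 1 = prey only, 2 = predators only, 3 = prey + predators.
P = np.array([[0.70, 0.25, 0.03, 0.02],
              [0.10, 0.60, 0.05, 0.25],
              [0.50, 0.10, 0.30, 0.10],
              [0.05, 0.10, 0.35, 0.50]])
assert np.allclose(P.sum(axis=1), 1.0)   # rows are probability distributions

# Asymptotic steady state: the left eigenvector of P for eigenvalue 1,
# i.e. the long-run fraction of patches in each state.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()
```

The stationary vector pi is the "footprint" summary: changes in the underlying functional response or dispersal rates shift the entries of P and hence the long-run state frequencies.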
Unsymmetrical dizinc complexes as models for the active sites of phosphohydrolases.
Jarenmark, Martin; Csapó, Edit; Singh, Jyoti; Wöckel, Simone; Farkas, Etelka; Meyer, Franc; Haukka, Matti; Nordlander, Ebbe
2010-09-21
The unsymmetrical dinucleating ligand 2-(N-isopropyl-N-((2-pyridyl)methyl)aminomethyl)-6-(N-(carboxylmethyl)-N-((2-pyridyl)methyl)aminomethyl)-4-methylphenol (IPCPMP or L) has been synthesized to model the active site environment of dinuclear metallohydrolases. It has been isolated as the hexafluorophosphate salt H(4)IPCPMP(PF(6))(2) x 2 H(2)O (H(4)L), which has been structurally characterized, and has been used to form two different Zn(II) complexes, [{Zn(2)(IPCPMP)(OAc)}(2)][PF(6)](2) (2) and [{Zn(2)(IPCPMP)(Piv)}(2)][PF(6)](2) (3) (OAc = acetate; Piv = pivalate). The crystal structures of 2 and 3 show that they consist of tetranuclear complexes with very similar structures. Infrared spectroscopy and mass spectrometry indicate that the tetranuclear complexes dissociate into dinuclear complexes in solution. Potentiometric studies of the Zn(II):IPCPMP system in aqueous solution reveal that a mononuclear complex is surprisingly stable at low pH, even at a 2:1 Zn(II):L ratio, but a dinuclear complex dominates at high pH and transforms into a dihydroxido complex by a cooperative deprotonation of two, probably terminally coordinated, water molecules. A kinetic investigation indicates that one of these hydroxides is the active nucleophile in the hydrolysis of bis(2,4-dinitrophenyl)phosphate (BDNPP) enhanced by complex 2, and mechanistic proposals are presented for this reaction as well as the previously reported transesterification of 2-hydroxypropyl p-nitrophenyl phosphate (HPNP) promoted by Zn(II) complexes of IPCPMP.
Jetha, Arif; Pransky, Glenn; Hettinger, Lawrence J
2016-01-01
Work disability (WD) is characterized by variable and occasionally undesirable outcomes. The underlying determinants of WD outcomes include patterns of dynamic relationships among health, personal, organizational and regulatory factors that have been challenging to characterize, and inadequately represented by contemporary WD models. System dynamics modeling (SDM) methodology applies a sociotechnical systems thinking lens to view WD systems as comprising a range of influential factors linked by feedback relationships. SDM can potentially overcome limitations in contemporary WD models by uncovering causal feedback relationships, and conceptualizing dynamic system behaviors. It employs a collaborative and stakeholder-based model building methodology to create a visual depiction of the system as a whole. SDM can also enable researchers to run dynamic simulations to provide evidence of anticipated or unanticipated outcomes that could result from policy and programmatic intervention. SDM may advance rehabilitation research by providing greater insights into the structure and dynamics of WD systems while helping to understand inherent complexity. Challenges related to data availability, determining validity, and the extensive time and technical skill requirements for model building may limit SDM's use in the field and should be considered. Contemporary work disability (WD) models provide limited insight into complexity associated with WD processes. System dynamics modeling (SDM) has the potential to capture complexity through a stakeholder-based approach that generates a simulation model consisting of multiple feedback loops. SDM may enable WD researchers and practitioners to understand the structure and behavior of the WD system as a whole, and inform development of improved strategies to manage straightforward and complex WD cases.
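The stock-and-flow-with-feedback structure at the core of SDM can be illustrated with a deliberately minimal, hypothetical work-disability model: one stock of open cases whose outflow slows as the caseload congests case management (all rates below are invented for illustration).

```python
# Minimal stock-and-flow sketch (hypothetical rates, Euler integration):
# a balancing feedback loop -- the more open cases, the more stretched
# case management, the slower the return-to-work outflow.
dt, horizon = 0.25, 120.0           # time step and horizon, months
onset = 20.0                        # new cases per month (constant inflow)
base_rtw = 0.25                     # baseline return-to-work fraction per month
capacity = 150.0                    # caseload at which management saturates

cases = 40.0
trajectory = [cases]
t = 0.0
while t < horizon:
    rtw_rate = base_rtw / (1.0 + cases / capacity)   # feedback: congestion slows outflow
    outflow = rtw_rate * cases
    cases += dt * (onset - outflow)
    trajectory.append(cases)
    t += dt
```

Even this one-loop sketch shows the characteristic SDM behavior: the stock settles at an equilibrium set by the feedback (here near 171 cases), and a policy lever such as extra capacity changes the equilibrium, not just the transient.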
Sparkle/PM7 Lanthanide Parameters for the Modeling of Complexes and Materials
Dutra, José Diogo L.; Filho, Manoel A. M.; Rocha, Gerd B.; Freire, Ricardo O.; Simas, Alfredo M.; Stewart, James J. P.
2013-01-01
The recently published Parametric Method number 7, PM7, is the first semiempirical method to be successfully tested by modeling crystal structures and heats of formation of solids. PM7 is thus also capable of producing results of useful accuracy for materials science, and constitutes a great improvement over its predecessor, PM6. In this article, we present Sparkle Model parameters to be used with PM7 that allow the prediction of geometries of metal complexes and materials which contain lanth...
Dealing with project complexity by matrix-based propagation modelling for project risk analysis
Fang , Chao; Marle , Franck
2012-01-01
International audience; Engineering projects are facing a growing complexity and are thus exposed to numerous and interdependent risks. In this paper, we present a quantitative method for modelling propagation behaviour in the project risk network. The construction of the network requires the involvement of the project manager and related experts using the Design Structure Matrix (DSM) method. A matrix-based risk propagation model is introduced to calculate risk propagation and thus to re-eva...
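A matrix-based propagation calculation of this general kind (not necessarily the authors' exact formulation) can be sketched by summing direct risks and all chains of knock-on effects with a Neumann series:

```python
import numpy as np

# Illustrative DSM-style propagation matrix: entry P[i, j] is the strength with
# which risk j, once active, contributes to triggering risk i (values hypothetical).
P = np.array([[0.0, 0.3, 0.0, 0.1],
              [0.0, 0.0, 0.4, 0.0],
              [0.2, 0.0, 0.0, 0.0],
              [0.1, 0.2, 0.0, 0.0]])
r = np.array([0.30, 0.10, 0.05, 0.20])   # spontaneous (direct) risk levels

# Propagated exposure: direct risk plus all knock-on chains, summed via the
# Neumann series (I - P)^-1 = I + P + P^2 + ...; requires spectral radius < 1.
assert np.max(np.abs(np.linalg.eigvals(P))) < 1.0
total = np.linalg.solve(np.eye(4) - P, r)
```

Re-evaluating risks this way reorders priorities: a risk with a modest direct level can dominate once the chains feeding into it are accounted for.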
Calibration of a complex activated sludge model for the full-scale wastewater treatment plant
Liwarska-Bizukojc, Ewa; Olejnik, Dorota; Biernacki, Rafal; Ledakowicz, Stanislaw
2011-01-01
In this study, the results of the calibration of a complex activated sludge model, implemented in BioWin software, for a full-scale wastewater treatment plant are presented. As part of the model calibration, a sensitivity analysis of its parameters and of the carbonaceous substrate fractions was performed. In the steady-state and dynamic calibrations, a successful agreement between the measured and simulated values of the output variables was achieved. Sensitivity analysis revealed that u...
Contrasting model complexity under a changing climate in a headwaters catchment.
Foster, L.; Williams, K. H.; Maxwell, R. M.
2017-12-01
Alpine, snowmelt-dominated catchments are the source of water for more than 1/6th of the world's population. These catchments are topographically complex, leading to steep weather gradients and nonlinear relationships between water and energy fluxes. Recent evidence suggests that alpine systems are more sensitive to climate warming, but these regions are vastly simplified in climate models and operational water management tools due to computational limitations. Simultaneously, point-scale observations are often extrapolated to larger regions where feedbacks can both exacerbate or mitigate locally observed changes. It is critical to determine whether projected climate impacts are robust to different methodologies, including model complexity. Using high performance computing and an integrated model of a representative headwater catchment we determined the hydrologic response from 30 projected climate changes to precipitation, temperature and vegetation for the Rocky Mountains. Simulations were run with 100m and 1km resolution, and with and without lateral subsurface flow in order to vary model complexity. We found that model complexity alters nonlinear relationships between water and energy fluxes. Higher-resolution models predicted larger changes per degree of temperature increase than lower resolution models, suggesting that reductions to snowpack, surface water, and groundwater due to warming may be underestimated in simple models. Increases in temperature were found to have a larger impact on water fluxes and stores than changes in precipitation, corroborating previous research showing that mountain systems are significantly more sensitive to temperature changes than to precipitation changes and that increases in winter precipitation are unlikely to compensate for increased evapotranspiration in a higher energy environment. These numerical experiments help to (1) bracket the range of uncertainty in published literature of climate change impacts on headwater
Agent-Based and Macroscopic Modeling of the Complex Socio-Economic Systems
Directory of Open Access Journals (Sweden)
Aleksejus Kononovičius
2013-08-01
Full Text Available Purpose – The focus of this contribution is the correspondence between collective behavior and inter-individual interactions in the complex socio-economic systems. Currently there is a wide selection of papers proposing various models for both the collective behavior and the inter-individual interactions in the complex socio-economic systems. Yet the papers directly relating these two concepts are still quite rare. By studying this correspondence we discuss a cutting edge approach to the modeling of complex socio-economic systems. Design/methodology/approach – The collective behavior is often modeled using stochastic and ordinary calculus, while the inter-individual interactions are modeled using agent-based models. In order to obtain the ideal model, one should start from one of these frameworks and build a bridge to reach the other. This is a formidable task if we consider the top-down approach, namely starting from the collective behavior and moving towards inter-individual interactions. The bottom-up approach also fails if complex inter-individual interaction models are considered, yet in this case we can start with simple models and increase the complexity as needed. Findings – The bottom-up approach, considering a simple agent-based herding model as a model for the inter-individual interactions, allows us to derive certain macroscopic models of the complex socio-economic systems from the agent-based perspective. This provides interesting insights into the collective behavior patterns observed in the complex socio-economic systems. Research limitations/implications – The simplicity of the agent-based herding model might be considered to be somewhat limiting. Yet this simplicity implies that the model is highly universal. It reproduces universal features of social behavior and also can be further extended to fit different socio-economic scenarios. Practical implications – Insights provided in this contribution might be used to modify existing
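A minimal agent-based herding model of the kind referred to above can be sketched as follows (a Kirman-style two-state recruitment scheme with illustrative parameters; the fraction of agents in one state is the macroscopic observable whose stochastic dynamics the bottom-up derivation targets):

```python
import numpy as np

rng = np.random.default_rng(3)

N = 100          # agents choosing between two states (e.g. buy / sell)
eps = 0.002      # idiosyncratic switching rate
h = 0.05         # herding: attraction proportional to the other camp's size

n = N // 2       # agents currently in state 1
steps = 200_000
fractions = np.empty(steps)
for t in range(steps):
    in_state1 = rng.random() < n / N            # pick a random agent
    if in_state1:
        if rng.random() < eps + h * (N - n) / N:
            n -= 1                               # recruited to state 0
    else:
        if rng.random() < eps + h * n / N:
            n += 1                               # recruited to state 1
    fractions[t] = n / N
```

Taking the large-N limit of these switching rates yields a stochastic differential equation for the fraction itself, which is exactly the kind of macroscopic model the abstract derives from the agent-based perspective.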
Experimental determination and modeling of arsenic complexation with humic and fulvic acids.
Fakour, Hoda; Lin, Tsair-Fuh
2014-08-30
The complexation of humic acid (HA) and fulvic acid (FA) with arsenic (As) in water was studied. Experimental results indicate that arsenic may form complexes with HA and FA, with a higher affinity for arsenate than for arsenite. In the presence of iron oxide-based adsorbents, binding of arsenic to HA/FA in water was significantly suppressed, probably due to adsorption of As and HA/FA. A two-site ligand binding model, considering only strong and weak site types of binding affinity, was successfully developed to describe the complexation of arsenic on the two natural organic fractions. The model showed that the numbers of weak sites were more than 10 times those of strong sites on both HA and FA for both arsenic species studied. The numbers of both types of binding sites were found to be proportional to the HA concentrations, while the apparent stability constants, defined to describe the binding affinity between arsenic and the sites, are independent of the HA concentrations. To the best of our knowledge, this is the first study to characterize the impact of HA concentrations on the applicability of the ligand binding model, and to extrapolate the model to FA. The obtained results may give insights into the complexation of arsenic in HA/FA-laden groundwater and into the selection of more effective adsorption-based treatment methods for natural waters. Copyright © 2014 Elsevier B.V. All rights reserved.
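A two-site ligand binding model of this general form has a simple closed expression: two independent Langmuir-like site classes, one strong and scarce, one weak and abundant. A sketch with illustrative (unfitted) parameters:

```python
import numpy as np

def two_site_binding(c_as, n_strong, k_strong, n_weak, k_weak):
    """Bound arsenic per unit organic matter for a two-site ligand binding
    model: independent strong and weak site classes, each Langmuir-like,
    with site numbers n and apparent dissociation constants k."""
    c_as = np.asarray(c_as, dtype=float)
    return (n_strong * c_as / (k_strong + c_as)
            + n_weak * c_as / (k_weak + c_as))

# Illustrative (not fitted) parameters: >10x more weak sites than strong sites,
# with the strong sites binding much more tightly (smaller k).
conc = np.linspace(0.0, 50.0, 6)
bound = two_site_binding(conc, n_strong=2.0, k_strong=0.5, n_weak=30.0, k_weak=50.0)
```

At low arsenic concentration the strong sites dominate the curve; the weak sites only matter at high loading, which is why both site classes are needed to fit the full isotherm.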
The maintenance management framework models and methods for complex systems maintenance
Crespo Márquez, Adolfo
2010-01-01
“The Maintenance Management Framework” describes and reviews the concept, process and framework of modern maintenance management of complex systems; concentrating specifically on modern modelling tools (deterministic and empirical) for maintenance planning and scheduling. It will be bought by engineers and professionals involved in maintenance management, maintenance engineering, operations management, quality, etc. as well as graduate students and researchers in this field.
OPERATING OF MOBILE MACHINE UNITS SYSTEM USING THE MODEL OF MULTICOMPONENT COMPLEX MOVEMENT
Directory of Open Access Journals (Sweden)
A. Lebedev
2015-07-01
Full Text Available To solve the problems of operating mobile machine unit systems, it is proposed to use complex multi-component (composite) movement physical models. Implementation of the proposed method is possible by creating automatic operating systems of fuel supply to the engines using linear accelerometers. Some examples illustrating the proposed method are offered.
Operating of mobile machine units system using the model of multicomponent complex movement
A. Lebedev; R. Kaidalov; N. Artiomov; M. Shulyak; M. Podrigalo; D. Abramov; D. Klets
2015-01-01
To solve the problems of operating mobile machine unit systems, it is proposed to use complex multi-component (composite) movement physical models. Implementation of the proposed method is possible by creating automatic operating systems of fuel supply to the engines using linear accelerometers. Some examples illustrating the proposed method are offered.
Yu, Huixin; van Erp, Nielka; Bins, Sander; Mathijssen, Ron H J; Schellens, Jan H M; Beijnen, Jos H.; Steeghs, Neeltje; Huitema, Alwin D R
Background and Objective: Pazopanib is a multi-targeted anticancer tyrosine kinase inhibitor. This study was conducted to develop a population pharmacokinetic (popPK) model describing the complex pharmacokinetics of pazopanib in cancer patients. Methods: Pharmacokinetic data were available from 96
Yu, H.; Erp, N. van; Bins, S.; Mathijssen, R.H.; Schellens, J.H.; Beijnen, J.H.; Steeghs, N.; Huitema, A.D.
2017-01-01
BACKGROUND AND OBJECTIVE: Pazopanib is a multi-targeted anticancer tyrosine kinase inhibitor. This study was conducted to develop a population pharmacokinetic (popPK) model describing the complex pharmacokinetics of pazopanib in cancer patients. METHODS: Pharmacokinetic data were available from 96
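For contrast with the complex popPK model described above, the usual starting point is a one-compartment model with first-order oral absorption (the Bateman equation). The sketch below uses hypothetical parameter values for illustration only, not pazopanib estimates:

```python
import numpy as np

def conc_oral_1cmt(t, dose, ka, CL, V, F=1.0):
    """Plasma concentration for a one-compartment model with first-order
    absorption (Bateman equation): ka absorption rate, CL clearance,
    V volume of distribution, F bioavailability."""
    ke = CL / V                      # elimination rate constant
    return (F * dose * ka) / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

# Hypothetical parameter values for illustration only:
t = np.linspace(0.0, 48.0, 97)       # hours
c = conc_oral_1cmt(t, dose=800.0, ka=0.4, CL=0.5, V=30.0)
tmax = t[np.argmax(c)]               # time of peak concentration
```

Pazopanib's actual popPK model departs from this baseline (e.g. complex, dose- and time-dependent absorption), which is precisely what motivates a dedicated population analysis.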
A robust interrupted time series model for analyzing complex health care intervention data
Cruz, Maricela
2017-08-29
Current health policy calls for greater use of evidence-based care delivery services to improve patient quality and safety outcomes. Care delivery is complex, with interacting and interdependent components that challenge traditional statistical analytic techniques, in particular, when modeling a time series of outcomes data that might be
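The classic interrupted time series design is a segmented regression with a level change and a slope change at the intervention time; the robust model proposed in this work extends that baseline. A sketch of the baseline version on simulated data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated monthly outcome with an intervention at month 30: a level drop of 5
# and a slope change of -0.2 on top of a baseline trend, plus noise.
n, t0 = 60, 30
t = np.arange(n)
post = (t >= t0).astype(float)
y = 50.0 + 0.3 * t - 5.0 * post - 0.2 * post * (t - t0) + rng.normal(0.0, 1.0, n)

# Classic ITS segmented regression: intercept, baseline slope,
# post-intervention level change, post-intervention slope change.
X = np.column_stack([np.ones(n), t, post, post * (t - t0)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
level_change, slope_change = beta[2], beta[3]
```

The complications that motivate a robust extension (autocorrelated errors, uncertain change-point timing, multiple interacting components) are exactly what this simple ordinary-least-squares version ignores.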
Modeling complexity in engineered infrastructure system: Water distribution network as an example
Zeng, Fang; Li, Xiang; Li, Ke
2017-02-01
The complex topology and adaptive behavior of infrastructure systems are driven both by self-organization of demand and by rigid engineering solutions. Therefore, engineering complex systems requires a method balancing holism and reductionism. To model the growth of water distribution networks, a complex network model was developed combining local optimization rules with engineering considerations. Demand node generation is dynamic and follows the scaling law of urban growth. The proposed model can generate a water distribution network (WDN) similar to reported real-world WDNs in some structural properties. Comparison with different modeling approaches indicates that a realistic demand node distribution and co-evolution of demand nodes and the network are important for the simulation of real complex networks. The simulation results indicate that the efficiency of water distribution networks is exponentially affected by the urban growth pattern. In contrast, the improvement of efficiency achievable by engineering optimization is limited and relatively insignificant. Redundancy and robustness, on the other hand, can be significantly improved through engineering methods.
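The combination of demand-driven node generation and local optimization can be caricatured in a few lines (illustrative rules only, not the paper's model: growth near existing nodes as a crude stand-in for urban expansion, and connection to the nearest existing node as the local cost minimization):

```python
import numpy as np

rng = np.random.default_rng(5)

# Grow a toy water distribution network: each new demand node appears near a
# randomly chosen existing node and connects to whichever existing node
# minimizes a local cost (here, simply pipe length).
nodes = [np.zeros(2)]            # start from a single source node
edges = []                        # (existing_index, new_index) pairs
for k in range(1, 60):
    anchor = nodes[rng.integers(len(nodes))]          # local urban growth
    new = anchor + rng.normal(0.0, 1.0, size=2)       # new demand location
    d = [np.linalg.norm(new - p) for p in nodes]      # candidate pipe lengths
    edges.append((int(np.argmin(d)), k))              # connect to nearest node
    nodes.append(new)

n_nodes, n_edges = len(nodes), len(edges)             # a tree: edges = nodes - 1
```

This greedy rule produces a spanning tree; the looped, redundant structure of real WDNs is what the engineering-optimization layer of the model adds on top of such local growth.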
Modelling of spatially complex human-ecosystem, rural-urban and rich-poor interactions
CSIR Research Space (South Africa)
Naude, AH
2008-06-01
Full Text Available The paper outlines the challenges of modelling and assessing spatially complex human-ecosystem interactions, and the need to simultaneously consider rural-urban and rich-poor interactions. The context for exploring these challenges is South Africa...
Modelling complex systems of heterogeneous agents to better design sustainability transitions policy
Mercure, J.F.A.; Pollitt, H.; Bassi, A.M.; Viñuales, J.E.; Edwards, N.R.
2016-01-01
This article proposes a fundamental methodological shift in the modelling of policy interventions for sustainability transitions in order to account for complexity (e.g. self-reinforcing mechanisms, such as technology lock-ins, arising from multi-agent interactions) and agent heterogeneity (e.g.
In vivo and in situ measurement and modelling of intra-body effective complex permittivity
DEFF Research Database (Denmark)
Nadimi, Esmaeil S; Blanes-Vidal, Victoria; Harslund, Jakob L F
2015-01-01
Radio frequency tracking of medical micro-robots in minimally invasive medicine is usually investigated upon the assumption that the human body is a homogeneous propagation medium. In this Letter, the authors conducted various trial programs to measure and model the effective complex permittivity e...