WorldWideScience

Sample records for hydratase model complex

  1. The First Example of a Nitrile Hydratase Model Complex that Reversibly Binds Nitriles

    Science.gov (United States)

    Shearer, Jason; Jackson, Henry L.; Schweitzer, Dirk; Rittenberg, Durrell K.; Leavy, Tanya M.; Kaminsky, Werner; Scarrow, Robert C.; Kovacs, Julie A.

    2015-01-01

    Nitrile hydratase (NHase) is an iron-containing metalloenzyme that converts nitriles to amides. The mechanism by which this biochemical reaction occurs is unknown. One proposed mechanism involves nucleophilic attack by water (or hydroxide) on an Fe-bound nitrile. Reported herein is a five-coordinate model compound ([FeIII(S2Me2N3(Et,Pr))]+) containing Fe(III) in an environment resembling that of NHase, which reversibly binds a variety of nitriles, alcohols, amines, and thiocyanate. XAS shows that five-coordinate [FeIII(S2Me2N3(Et,Pr))]+ reacts with both methanol and acetonitrile to afford a six-coordinate solvent-bound complex. Competitive binding studies demonstrate that MeCN preferentially binds over ROH, suggesting that nitriles would be capable of displacing the H2O coordinated to the iron site of NHase. Thermodynamic parameters were determined for acetonitrile (ΔH = −6.2(±0.2) kcal/mol, ΔS = −29.4(±0.8) eu), benzonitrile (ΔH = −4.2(±0.6) kcal/mol, ΔS = −18(±3) eu), and pyridine (ΔH = −8(±1) kcal/mol, ΔS = −41(±6) eu) binding to [FeIII(S2Me2N3(Et,Pr))]+ using variable-temperature electronic absorption spectroscopy. Ligand exchange kinetics were examined for acetonitrile, iso-propionitrile, benzonitrile, and 4-tert-butylpyridine at a variety of temperatures using 13C NMR line-broadening analysis. Activation parameters for ligand exchange were determined to be ΔH‡ = 7.1(±0.8) kcal/mol, ΔS‡ = −10(±1) eu (acetonitrile), ΔH‡ = 5.4(±0.6) kcal/mol, ΔS‡ = −17(±2) eu (iso-propionitrile), ΔH‡ = 4.9(±0.8) kcal/mol, ΔS‡ = −20(±3) eu (benzonitrile), and ΔH‡ = 4.7(±1.4) kcal/mol, ΔS‡ = −18(±2) eu (4-tert-butylpyridine). The thermodynamic parameters for pyridine binding to a related complex, [FeIII(S2Me2N3(Pr,Pr))]+ (ΔH = −5.9(±0.8) kcal/mol, ΔS = −24(±3) eu), are also reported, as well as kinetic parameters for 4-tert-butylpyridine exchange (ΔH‡ = 3.1(±0.8) kcal/mol, ΔS‡ = −25(±3) eu).
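
    For context, binding enthalpies and entropies of this kind are extracted from variable-temperature equilibrium constants via a van't Hoff analysis (ln K is linear in 1/T). Below is a minimal sketch of that fit; the K(T) values are made up for illustration and are not the paper's data.

    ```python
    import numpy as np

    R = 1.987e-3  # gas constant in kcal mol^-1 K^-1

    # Illustrative equilibrium constants for nitrile binding at several
    # temperatures (hypothetical values, not measured data).
    T = np.array([233.0, 253.0, 273.0, 298.0])  # K
    K = np.array([50.0, 20.0, 9.0, 3.5])        # binding constants

    # van't Hoff: ln K = -(dH/R)(1/T) + dS/R, i.e. linear in 1/T.
    slope, intercept = np.polyfit(1.0 / T, np.log(K), 1)
    dH = -slope * R            # kcal/mol
    dS = intercept * R * 1e3   # cal mol^-1 K^-1 (eu)
    print(f"dH = {dH:.1f} kcal/mol, dS = {dS:.1f} eu")  # both negative, as above
    ```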

  2. Structure of fumarate hydratase from Rickettsia prowazekii, the agent of typhus and suspected relative of the mitochondria

    International Nuclear Information System (INIS)

    Phan, Isabelle; Subramanian, Sandhya; Olsen, Christian; Edwards, Thomas E.; Guo, Wenjin; Zhang, Yang; Van Voorhis, Wesley C.; Stewart, Lance J.; Myler, Peter J.

    2011-01-01

    Fumarate hydratase is an enzyme of the tricarboxylic acid cycle, one of the metabolic pathways characteristic of the mitochondria. The structure of R. prowazekii class II fumarate hydratase is reported at 2.4 Å resolution and is compared with the available structure of the human homolog. Rickettsiae are obligate intracellular parasites of eukaryotic cells and the causative agents of spotted fever and typhus. Their small genome (about 800 protein-coding genes) is highly conserved across species and has been postulated to be ancestral to the mitochondrial genome. No genes required for glycolysis are found in the Rickettsia prowazekii or mitochondrial genomes, but a complete set of genes encoding components of the tricarboxylic acid cycle and the respiratory-chain complex is found in both. A 2.4 Å resolution crystal structure of R. prowazekii fumarate hydratase, an enzyme catalyzing the third step of the tricarboxylic acid cycle pathway that ultimately converts phosphoenolpyruvate into succinyl-CoA, has been solved. A structure alignment with human mitochondrial fumarate hydratase highlights the close similarity between the R. prowazekii and mitochondrial enzymes.
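
    Structure comparisons like the alignment mentioned above are typically scored as the RMSD of matched Cα atoms after optimal superposition. A self-contained sketch of the standard Kabsch algorithm follows; the coordinates are random stand-ins, not the R. prowazekii or human structures.

    ```python
    import numpy as np

    def kabsch_rmsd(P, Q):
        """RMSD between N x 3 coordinate sets after optimal rigid superposition."""
        P = P - P.mean(axis=0)              # center both sets
        Q = Q - Q.mean(axis=0)
        U, S, Vt = np.linalg.svd(P.T @ Q)   # SVD of the covariance matrix
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # proper rotation (det = +1)
        return np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1)))

    # Demo: a structure and a rotated, noise-perturbed copy of it.
    rng = np.random.default_rng(0)
    P = rng.normal(size=(100, 3))           # stand-in "Calpha" coordinates
    theta = 0.7
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
    Q = P @ Rz.T + rng.normal(scale=0.05, size=P.shape)
    print(f"RMSD = {kabsch_rmsd(P, Q):.3f}")  # small: roughly the noise level
    ```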

  3. Is the tungsten(IV) complex (NEt4)2[WO(mnt)2] a functional analogue of acetylene hydratase?

    Directory of Open Access Journals (Sweden)

    Matthias Schreyer

    2017-11-01

    The tungsten(IV) complex (Et4N)2[W(O)(mnt)2] (1; mnt = maleonitriledithiolate) was proposed (Sarkar et al., J. Am. Chem. Soc. 1997, 119, 4315) to be a functional analogue of the active center of the enzyme acetylene hydratase from Pelobacter acetylenicus, which hydrates acetylene (ethyne; 2) to acetaldehyde (ethanal; 3). In the absence of a satisfactory mechanistic proposal for the hydration reaction, we considered the possibility of a metal–vinylidene type activation mode, as is well established for ruthenium-based alkyne hydration catalysts with anti-Markovnikov regioselectivity. To validate the hypothesis, the regioselectivity of tungsten-catalyzed hydration of a terminal, higher alkyne had to be determined. However, complex 1 was not a competent catalyst for the hydration of 1-octyne under the conditions tested. Furthermore, we could not observe the earlier reported hydration activity of complex 1 towards acetylene. A critical assessment of, and a possible explanation for, the earlier reported results are offered. The title question is answered with "no".

  4. Recombinant expression, purification and biochemical characterization of kievitone hydratase from Nectria haematococca.

    Directory of Open Access Journals (Sweden)

    Matthias Engleder

    Kievitone hydratase catalyzes the addition of water to the double bond of the prenyl moiety of the plant isoflavonoid kievitone and thereby forms the tertiary alcohol hydroxy-kievitone. In nature, this conversion is associated with a defense mechanism of fungal pathogens against phytoalexins generated by host plants after infection. To date, a gene sequence coding for kievitone hydratase activity has only been identified and characterized in Fusarium solani f. sp. phaseoli. Here, we report the identification of a putative kievitone hydratase sequence in Nectria haematococca (NhKHS), the teleomorph state of F. solani, based on in silico sequence analyses. After heterologous expression of the enzyme in the methylotrophic yeast Pichia pastoris, we confirmed its kievitone hydration activity and assessed its biochemical properties and substrate specificity. Purified recombinant NhKHS is a homodimeric glycoprotein. Owing to its good activity towards the readily available chalcone derivative xanthohumol (XN), this compound was selected as a model substrate for biochemical studies. The optimal pH and temperature for hydratase activity were 6.0 and 35°C, respectively, and the apparent Vmax and Km values for hydration of XN were 7.16 μmol min⁻¹ mg⁻¹ and 0.98 ± 0.13 mM, respectively. Given its catalytic properties and apparent substrate promiscuity, NhKHS is a promising enzyme for the biocatalytic production of tertiary alcohols.
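
    Apparent Vmax and Km values such as those quoted for XN are usually obtained by nonlinear least-squares fitting of initial rates to the Michaelis-Menten equation. A sketch follows; the rate data are synthetic, generated to resemble the reported constants rather than taken from the study.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def michaelis_menten(s, vmax, km):
        """Initial rate v at substrate concentration s."""
        return vmax * s / (km + s)

    # Synthetic initial rates for XN hydration (s in mM, v in umol min^-1 mg^-1),
    # chosen to mimic the reported Vmax ~7.2 and Km ~1.0; not measured data.
    s = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 4.0])
    v = np.array([0.66, 1.46, 2.42, 3.62, 4.80, 5.75])

    (vmax, km), cov = curve_fit(michaelis_menten, s, v, p0=(5.0, 1.0))
    vmax_err, km_err = np.sqrt(np.diag(cov))
    print(f"Vmax = {vmax:.2f} +/- {vmax_err:.2f} umol min^-1 mg^-1")
    print(f"Km   = {km:.2f} +/- {km_err:.2f} mM")
    ```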

  5. The integration of cyanide hydratase and tyrosinase catalysts enables effective degradation of cyanide and phenol in coking wastewaters.

    Science.gov (United States)

    Martínková, Ludmila; Chmátal, Martin

    2016-10-01

    The aim of this study was to design an effective method for the bioremediation of coking wastewaters, specifically for the concurrent elimination of their highly toxic components, cyanide and phenols. Almost full degradation of free cyanide (0.32-20 mM; 8.3-520 mg L⁻¹) in model and real coking wastewaters was achieved by using a recombinant cyanide hydratase in the first step. The removal of cyanide, a strong inhibitor of tyrosinase, enabled an effective degradation of phenols by this enzyme in the second step. Phenol (16.5 mM, 1,552 mg L⁻¹) was completely removed from a real coking wastewater within 20 h, and cresols (5.0 mM, 540 mg L⁻¹) were removed by 66% under the same conditions. The integration of cyanide hydratase and tyrosinase opens up new possibilities for the bioremediation of wastewaters with complex pollution.

  6. Real-time PCR detection of Fe-type nitrile hydratase genes from environmental isolates suggests horizontal gene transfer between multiple genera.

    Science.gov (United States)

    Coffey, Lee; Owens, Erica; Tambling, Karen; O'Neill, David; O'Connor, Laura; O'Reilly, Catherine

    2010-11-01

    Nitriles are widespread in the environment as a result of biological and industrial activity. Nitrile hydratases catalyse the hydration of nitriles to the corresponding amide and are often associated with amidases, which catalyse the conversion of amides to the corresponding acids. Nitrile hydratases have potential as biocatalysts in bioremediation and biotransformation applications, and several successful examples demonstrate the advantages. In this work a real-time PCR assay was designed for the detection of Fe-type nitrile hydratase genes from environmental isolates purified from nitrile-enriched soils and seaweeds. Specific PCR primers were also designed for amplification and sequencing of the genes. Identical or highly homologous nitrile hydratase genes were detected from isolates of numerous genera from geographically diverse sites, as were numerous novel genes. The genes were also detected from isolates of genera not previously reported to harbour nitrile hydratases. The results provide further evidence that many bacteria have acquired the genes via horizontal gene transfer. The real-time PCR assay should prove useful in searching for nitrile hydratases that could have novel substrate specificities and therefore potential in industrial applications.

  7. Genetics Home Reference: 3-methylglutaconyl-CoA hydratase deficiency

    Science.gov (United States)

    ... provide energy for cells. This amino acid is broken down in cell structures called mitochondria, which convert ... 3-methylglutaconyl-CoA hydratase, leucine is not properly broken down, which leads to a buildup of related ...

  8. Expression control of nitrile hydratase and amidase genes in Rhodococcus erythropolis and substrate specificities of the enzymes.

    Science.gov (United States)

    Rucká, Lenka; Volkova, Olga; Pavlík, Adam; Kaplan, Ondřej; Kracík, Martin; Nešvera, Jan; Martínková, Ludmila; Pátek, Miroslav

    2014-06-01

    Bacterial amidases and nitrile hydratases can be used for the synthesis of various intermediates and products in the chemical and pharmaceutical industries and for the bioremediation of toxic pollutants. The aim of this study was to analyze the expression of the amidase and nitrile hydratase genes of Rhodococcus erythropolis and to test the stereospecific nitrile hydratase and amidase activities on chiral cyanohydrins. The nucleotide sequences of the gene clusters containing the oxd (aldoxime dehydratase), ami (amidase), nha1, nha2 (subunits of the nitrile hydratase), and nhr1, nhr2, nhr3 and nhr4 (putative regulatory proteins) genes of two R. erythropolis strains, A4 and CCM2595, were determined. All genes of both clusters are transcribed in the same direction. RT-PCR analysis, primer extension and promoter fusions with the gfp reporter gene showed that the ami, nha1 and nha2 genes of R. erythropolis A4 form an operon transcribed from the Pami promoter and an internal Pnha promoter. The activity of Pami was weakly induced when the cells grew in the presence of acetonitrile, whereas the Pnha promoter was moderately induced by either acetonitrile or acetamide used in place of an inorganic nitrogen source. However, R. erythropolis A4 cells showed no increase in amidase and nitrile hydratase activities in the presence of acetamide or acetonitrile in the medium. R. erythropolis A4 nitrile hydratase and amidase were found to be effective at hydrolysing cyanohydrins and 2-hydroxyamides, respectively.

  9. The Application of Nitrile Hydratases in Organic Synthesis

    NARCIS (Netherlands)

    Van Pelt, S.

    2010-01-01

    Nitrile hydratases (NHases, E.C. 4.2.1.84) catalyse the transformation of nitriles into the corresponding amides and were first discovered 30 years ago in studies on the microbial degradation of toxic cyano-group-containing compounds. The use of NHases in synthetic chemistry is especially

  10. Purification and characterization of acetylene hydratase of Pelobacter acetylenicus, a tungsten iron-sulfur protein

    OpenAIRE

    Rosner, Bettina M.; Schink, Bernhard

    1995-01-01

    Acetylene hydratase of the mesophilic fermenting bacterium Pelobacter acetylenicus catalyzes the hydration of acetylene to acetaldehyde. Growth of P. acetylenicus with acetylene and specific acetylene hydratase activity depended on tungstate or, to a lesser degree, molybdate supply in the medium. The specific enzyme activity in cell extract was highest after growth in the presence of tungstate. Enzyme activity was stable even after prolonged storage of the cell extract or of the purified prote...

  11. [Effects of nitriles and amides on the growth and the nitrile hydratase activity of the Rhodococcus sp. strain gt1].

    Science.gov (United States)

    Maksimov, A Iu; Kuznetsova, M V; Ovechkina, G V; Kozlov, S V; Maksimova, Iu G; Demakov, V A

    2003-01-01

    Effects of some nitriles and amides, as well as glucose and ammonium, on the growth and the nitrile hydratase (EC 4.2.1.84) activity of the Rhodococcus sp. strain gt1 isolated from soil were studied. The nitrile hydratase activity depended mainly on the carbon and nitrogen supply to the cells. It was high in the presence of moderate concentrations of glucose and ammonium and decreased at glucose concentrations above 0.3%. Saturated unsubstituted aliphatic nitriles and amides were found to be a good source of nitrogen and carbon. However, the presence of nitriles and amides in the medium was not absolutely necessary for expression of nitrile hydratase activity in the Rhodococcus sp. strain gt1.

  12. The bacterial catabolism of polycyclic aromatic hydrocarbons: Characterization of three hydratase-aldolase-catalyzed reactions

    Directory of Open Access Journals (Sweden)

    Jake A. LeVieux

    2016-12-01

    Polycyclic aromatic hydrocarbons (PAHs) are highly toxic, pervasive environmental pollutants with mutagenic, teratogenic, and carcinogenic properties. There is interest in exploiting the nutritional capabilities of microbes to remove PAHs from various environments, including those impacted by improper disposal or spills. Although there is a considerable body of literature on PAH degradation, the substrates and products for many of the enzymes have never been identified and many proposed activities have never been confirmed. This is particularly true for high molecular weight PAHs (e.g., phenanthrene, fluoranthene, and pyrene). As a result, pathways for the degradation of these compounds are proposed to follow the one elucidated for naphthalene, with limited experimental verification. In this pathway, ring fission produces a species that can undergo a non-enzymatic cyclization reaction. An isomerase opens the ring and catalyzes a cis to trans double bond isomerization. The resulting product is the substrate for a hydratase-aldolase, which catalyzes the addition of water to the double bond of an α,β-unsaturated ketone, followed by a retro-aldol cleavage. Initial kinetic and mechanistic studies of the hydratase-aldolase in the naphthalene pathway (designated NahE) and two hydratase-aldolases in the phenanthrene pathway (PhdG and PhdJ) have been completed. Crystallographic work on two of the enzymes (NahE and PhdJ) provides a rudimentary picture of the mechanism and a platform for future work to identify the structural basis for catalysis and the individual specificities of these hydratase-aldolases.

  13. Characterization of linoleate 10-hydratase of Lactobacillus plantarum and novel antifungal metabolites

    Directory of Open Access Journals (Sweden)

    Yuan Yao Chen

    2016-10-01

    Lactobacilli convert linoleic acid to the antifungal compound 10-hydroxy-12-octadecenoic acid (10-HOE) by linoleate 10-hydratase (10-LAH). However, the effect of this conversion on cellular membrane physiology and the properties of the cell surface had not been demonstrated. Moreover, L. plantarum produces 13-hydroxy-9-octadecenoic acid (13-HOE) in addition to 10-HOE, but the antifungal activity of 13-HOE was unknown. Phylogenetic analyses conducted in this study did not differentiate between 10-LAH and linoleate 13-hydratase (13-LAH). Thus, linoleate hydratases (LAHs) must be characterized through differences in their linoleate-converting activities. Four genes encoding putative LAHs from lactobacilli were cloned, heterologously expressed, purified and identified as FAD-dependent 10-LAHs. The unsaturated fatty acid substrates stimulated the growth of lactobacilli. We also investigated the role of 10-LAH in ethanol tolerance, membrane fluidity and cell surface hydrophobicity in lactobacilli by disruption of 10-lah. Compared with the L. plantarum 10-lah deficient strain, 10-LAH in the wild-type strain did not affect cell survival or membrane fluidity under ethanol stress, but influenced cell surface hydrophobicity. Moreover, deletion of 10-lah in L. plantarum facilitated purification of 13-HOE and demonstration of its antifungal activity against Penicillium roquefortii and Aspergillus niger.

  14. Structure reveals regulatory mechanisms of a MaoC-like hydratase from Phytophthora capsici involved in biosynthesis of polyhydroxyalkanoates (PHAs).

    Science.gov (United States)

    Wang, Huizheng; Zhang, Kai; Zhu, Jie; Song, Weiwei; Zhao, Li; Zhang, Xiuguo

    2013-01-01

    Polyhydroxyalkanoates (PHAs) have attracted increasing attention as "green plastics" due to their biodegradable, biocompatible, thermoplastic, and mechanical properties, and considerable research has been undertaken to develop low-cost/high-efficiency processes for their production. MaoC-like hydratase (MaoC), an (R)-hydratase involved in linking the β-oxidation and PHA biosynthetic pathways, has been identified recently. Understanding the regulatory mechanisms of (R)-hydratase catalysis is critical for the efficient production of PHAs, which promise the synthesis of an environmentally friendly plastic. We have determined the crystal structure of a new MaoC identified from Phytophthora capsici. The crystal structure of the enzyme was solved at 2.00 Å resolution. The structure shows that MaoC has a canonical (R)-hydratase fold with an N-domain and a C-domain. Consistent with the dimerization observed in the structure, MaoC forms a stable homodimer in solution. Mutations that disrupt the dimeric MaoC result in a complete loss of activity toward crotonyl-CoA, indicating that dimerization is required for the enzymatic activity of MaoC. Importantly, structure comparison reveals that a loop unique to MaoC interacts with an α-helix that harbors the catalytic residues of MaoC. Deletion of the loop enhances the enzymatic activity of MaoC, suggesting its inhibitory role in regulating the activity of MaoC. The data in our study reveal the regulatory mechanism of an (R)-hydratase, providing information for enzyme engineering to produce low-cost PHAs.

  15. 57Fe Moessbauer spectroscopic studies on photosensitive nitrile hydratase (NHase)

    International Nuclear Information System (INIS)

    Kobayashi, Yoshio; Odaka, Masafumi

    2001-01-01

    57Fe Moessbauer spectroscopy is a very useful technique for elucidating the chemical properties and biological changes of Fe species located at the reaction centers in various biological systems. We have applied 57Fe Moessbauer spectroscopy to study the mechanism of photoactivation and the structural change caused by light irradiation of nitrile hydratase (NHase). (author)

  16. Lethal neonatal case and review of primary short-chain enoyl-CoA hydratase (SCEH) deficiency associated with secondary lymphocyte pyruvate dehydrogenase complex (PDC) deficiency

    NARCIS (Netherlands)

    Bedoyan, Jirair K.; Yang, Samuel P.; Ferdinandusse, Sacha; Jack, Rhona M.; Miron, Alexander; Grahame, George; DeBrosse, Suzanne D.; Hoppel, Charles L.; Kerr, Douglas S.; Wanders, Ronald J. A.

    2017-01-01

    Mutations in ECHS1 result in short-chain enoyl-CoA hydratase (SCEH) deficiency which mainly affects the catabolism of various amino acids, particularly valine. We describe a case compound heterozygous for ECHS1 mutations c.836T>C (novel) and c.8C>A identified by whole exome sequencing of proband and

  17. Ferrous and ferric ions-based high-throughput screening strategy for nitrile hydratase and amidase.

    Science.gov (United States)

    Lin, Zhi-Jian; Zheng, Ren-Chao; Lei, Li-Hua; Zheng, Yu-Guo; Shen, Yin-Chu

    2011-06-01

    Rapid and direct screening of nitrile-converting enzymes is of great importance in the development of industrial biocatalytic processes for pharmaceuticals and fine chemicals. In this paper, a combination of ferrous and ferric ions was used to establish a novel colorimetric screening method for nitrile hydratase and amidase, with α-amino nitriles and α-amino amides as the respective substrates. Ferrous and ferric ions reacted sequentially with the cyanide that dissociates spontaneously from α-amino nitrile solutions, forming a characteristic deep blue precipitate. They were also sensitive to the weak basicity caused by the presence of amino amide, resulting in a yellow precipitate. When the amino amide was further hydrolyzed to the amino acid, a light yellow solution was obtained. Mechanisms for the color changes are proposed. Using this method, two isolates with nitrile hydratase activity towards 2-amino-2,3-dimethylbutyronitrile, one strain capable of hydrating 2-amino-4-(hydroxymethylphosphinyl)butyronitrile and another microbe exhibiting amidase activity against 2-amino-4-(methylsulfanyl)butyramide were obtained from soil samples and the culture collections of our laboratory. Its versatility makes this method the first direct and inexpensive high-throughput screening system for both nitrile hydratase and amidase.

  18. Unveiling of novel regio-selective fatty acid double bond hydratases from Lactobacillus acidophilus involved in the selective oxyfunctionalization of mono- and di-hydroxy fatty acids.

    Science.gov (United States)

    Kim, Kyoung-Rok; Oh, Hye-Jin; Park, Chul-Soon; Hong, Seung-Hye; Park, Ji-Young; Oh, Deok-Kun

    2015-11-01

    This study provides the first demonstration of a cis-12 regioselective linoleate double-bond hydratase. Hydroxylation of fatty acids, an abundant feedstock in nature, is an emerging alternative route to many petroleum-derived products through hydroxy fatty acids, carboxylic acids, and lactones. However, chemical routes to selective hydroxylation remain quite challenging owing to low selectivity and environmental concerns. Hydroxylation of fatty acids by hydroxy fatty acid-forming enzymes is an important route for the selective biocatalytic oxyfunctionalization of fatty acids; therefore, novel fatty acid-hydroxylating enzymes need to be discovered. Two hydratase genes of Lactobacillus acidophilus were identified by genomic analysis, and the two expressed recombinant hydratases were identified as cis-9 and cis-12 double-bond-selective linoleate hydratases by in vitro functional validation, including identification of products and determination of regioselectivity, substrate specificity, and kinetic parameters. The two different linoleate hydratases are the enzymes involved in 10,13-dihydroxyoctadecanoic acid biosynthesis. Linoleate 13-hydratase (LHT-13) selectively converted 10 mM linoleic acid to 13S-hydroxy-9(Z)-octadecenoic acid with high titer (8.1 mM) and yield (81%). Our study expands knowledge of microbial fatty acid-hydroxylating enzymes and will facilitate the designed production of regioselective hydroxy fatty acids as useful chemicals from polyunsaturated fatty acid feedstocks.

  19. Nitrile Hydratase CLEAs: The immobilization and stabilization of an industrially important enzyme

    Czech Academy of Sciences Publication Activity Database

    van Pelt, S.; Quignard, S.; Kubáč, David; Sorokin, D. Y.; van Rantwijk, F.; Sheldon, R. A.

    2008-01-01

    Vol. 10, No. 4 (2008), pp. 395-400 ISSN 1463-9262 R&D Projects: GA MŠk OC D25.002 Institutional research plan: CEZ:AV0Z50200510 Keywords: nitrile hydratase * CLEA * cross-linking Subject RIV: CE - Biochemistry Impact factor: 4.542, year: 2008

  1. An Aeroplysinin-1 Specific Nitrile Hydratase Isolated from the Marine Sponge Aplysina cavernicola

    Directory of Open Access Journals (Sweden)

    Peter Proksch

    2013-08-01

    A nitrile hydratase (NHase) that specifically accepts the nitrile aeroplysinin-1 (1) as a substrate and converts it into the dienone amide verongiaquinol (7) was isolated, partially purified and characterized from the Mediterranean sponge Aplysina cavernicola, although it is currently not known whether the enzyme is of sponge origin or produced by its symbiotic microorganisms. The formation of aeroplysinin-1 and of the corresponding dienone amide is part of the chemical defence system of A. cavernicola. These two compounds, which show strong antibiotic activity, originate from brominated isoxazoline alkaloids that are thought to protect the sponges from invasion by bacterial pathogens. The sponge was shown to contain at least two NHases, as two excised protein bands from a non-denaturing Blue Native gel showed nitrile hydratase activity, which was not observed for control samples. The enzymes were shown to be manganese dependent, although cobalt and nickel ions were also able to restore the activity of the nitrile hydratases. The temperature and pH optima of the studied enzymes were found at 41 °C and pH 7.8. The enzymes showed high substrate specificity towards the physiological substrate aeroplysinin-1 (1), since none of the substrate analogues prepared either by partial or by total synthesis were converted in an in vitro assay. Moreover, de novo sequencing by mass spectrometry was employed to obtain information about the primary structure of the studied NHases, which did not reveal any homology to known NHases.

  2. Enzyme-substrate binding landscapes in the process of nitrile biodegradation mediated by nitrile hydratase and amidase.

    Science.gov (United States)

    Zhang, Yu; Zeng, Zhuotong; Zeng, Guangming; Liu, Xuanming; Chen, Ming; Liu, Lifeng; Liu, Zhifeng; Xie, Gengxin

    2013-08-01

    The continuing discharge of nitriles by various industrial processes has caused serious nitrile pollution in the environment. Microorganisms possess several nitrile-degrading pathways based on direct interactions of nitriles with nitrile-degrading enzymes. These interactions are largely unknown and difficult to determine experimentally, yet they are important for interpreting nitrile metabolism and for designing nitrile-degrading enzymes with better nitrile-converting activity. Here, we undertook a molecular modeling study of enzyme-substrate binding modes in the bi-enzyme pathway for degradation of nitriles to acids. Docking results showed that the top substrates having favorable interactions with nitrile hydratase from Rhodococcus erythropolis AJ270 (ReNHase), nitrile hydratase from Pseudonocardia thermophila JCM 3095 (PtNHase), and amidase from Rhodococcus sp. N-771 (RhAmidase) were benzonitrile, 3-cyanopyridine, and L-methioninamide, respectively. We further analyzed the interaction profiles of these top poses with the corresponding enzymes, showing that specific residues within the enzymes' binding pockets formed diverse contacts with the substrates. This information on binding landscapes and interaction profiles is of great importance for the design of nitrile-degrading enzyme mutants with improved activity toward nitriles or amides in pollutant treatment processes.
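
    Interaction profiles of docked poses, as described above, reduce to simple geometry once pose coordinates are in hand: a residue is counted as contacting the substrate if any ligand atom falls within a distance cutoff. A minimal sketch with invented coordinates and residue labels (not taken from the ReNHase, PtNHase or RhAmidase structures):

    ```python
    import numpy as np

    # Hypothetical docked-ligand atom coordinates and binding-pocket residue
    # positions, in Angstroms; purely illustrative.
    ligand = np.array([[1.2,  0.4, -0.3],
                       [2.5,  1.1,  0.2],
                       [3.1, -0.6,  0.9]])
    pocket = {
        "TRP120": np.array([3.0,  1.5, 1.0]),
        "ARG56":  np.array([6.8, -2.0, 0.5]),
        "TYR68":  np.array([2.0, -1.2, 1.4]),
    }
    CUTOFF = 4.0  # contact cutoff, Angstroms

    # The minimum ligand-residue distance decides whether a contact is recorded.
    for res, xyz in pocket.items():
        dmin = np.linalg.norm(ligand - xyz, axis=1).min()
        tag = "contact" if dmin <= CUTOFF else "no contact"
        print(f"{res}: min distance {dmin:.2f} A -> {tag}")
    ```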

  3. Biotransformation of benzonitrile herbicides via the nitrile hydratase-amidase pathway in rhodococci

    Czech Academy of Sciences Publication Activity Database

    Veselá, Alicja Barbara; Pelantová, Helena; Šulc, Miroslav; Macková, M.; Lovecká, P.; Thimová, P.; Pasquarelli, F.; Pičmanová, Martina; Pátek, Miroslav; Bhalla, T. C.; Martínková, Ludmila

    2012-01-01

    Vol. 39, No. 12 (2012), pp. 1811-1819 ISSN 1367-5435 R&D Projects: GA MŠk OC09046; GA ČR(CZ) GAP504/11/0394; GA ČR GD305/09/H008 Keywords: Nitrile hydratase * Amidase * Benzonitrile herbicides Subject RIV: EE - Microbiology, Virology Impact factor: 2.321, year: 2012

  4. Cyanide hydratases and cyanide dihydratases: emerging tools in the biodegradation and biodetection of cyanide

    Czech Academy of Sciences Publication Activity Database

    Martínková, Ludmila; Veselá, Alicja Barbara; Rinágelová, Anna; Chmátal, Martin

    2015-01-01

    Vol. 99, No. 21 (2015), pp. 8875-8882 ISSN 0175-7598 R&D Projects: GA TA ČR TA01021368; GA ČR(CZ) GAP504/11/0394 Institutional support: RVO:61388971 Keywords: Cyanide hydratase * Cyanide dihydratase * Enzyme production Subject RIV: CE - Biochemistry Impact factor: 3.376, year: 2015

  5. Germline fumarate hydratase mutations in patients with ovarian mucinous cystadenoma

    DEFF Research Database (Denmark)

    Ylisaukko-oja, Sanna K.; Cybulski, Cezary; Lehtonen, Rainer

    2006-01-01

    Germline mutations in the fumarate hydratase (FH) gene were recently shown to predispose to the dominantly inherited syndrome, hereditary leiomyomatosis and renal cell cancer (HLRCC). HLRCC is characterized by benign leiomyomas of the skin and the uterus, renal cell carcinoma, and uterine leiomyosarcoma. The aim of this study was to identify new families with FH mutations, and to further examine the tumor spectrum associated with FH mutations. FH germline mutations were screened from 89 patients with RCC, skin leiomyomas or ovarian tumors. Subsequently, 13 ovarian and 48 bladder carcinomas were screened.

  6. Kinetic effects of sulfur oxidation on catalytic nitrile hydration: nitrile hydratase insights from bioinspired ruthenium(II) complexes.

    Science.gov (United States)

    Kumar, Davinder; Nguyen, Tho N; Grapperhaus, Craig A

    2014-12-01

    Kinetic investigations inspired by the metalloenzyme nitrile hydratase were performed on a series of ruthenium(II) complexes to determine the effect of sulfur oxidation on catalytic nitrile hydration. The rate of benzonitrile hydration was quantified as a function of catalyst, nitrile, and water concentrations. Precatalysts L(n)RuPPh3 (n = 1-3; L(1) = 4,7-bis(2'-methyl-2'-mercapto-propyl)-1-thia-4,7-diazacyclononane; L(2) = 4-(2'-methyl-2'-sulfinatopropyl)-7-(2'-methyl-2'-mercapto-propyl)-1-thia-4,7-diazacyclononane; L(3) = 4-(2'-methyl-2'-sulfinatopropyl)-7-(2'-methyl-2'-sulfenato-propyl)-1-thia-4,7-diazacyclononane) were activated by substitution of triphenylphosphine with substrate in hot dimethylformamide solution. Rate measurements are consistent with a dynamic equilibrium between inactive aqua (L(n)Ru-OH2) and active nitrile (L(n)Ru-NCR) derivatives with K = 21 ± 1, 9 ± 0.9, and 23 ± 3 for L(1) to L(3), respectively. Subsequent hydration of the L(n)Ru-NCR intermediate yields the amide product with measured hydration rate constants (k's) of 0.37 ± 0.01, 0.82 ± 0.07, and 1.59 ± 0.12 M(-1) h(-1) for L(1) to L(3), respectively. Temperature-dependent studies reveal that sulfur oxidation lowers the enthalpic barrier by 27 kJ/mol, but increases the entropic barrier by 65 J/(mol K). Density functional theory (DFT) calculations (B3LYP/LanL2DZ (Ru); 6-31G(d) (all other atoms)) support a nitrile-bound catalytic cycle with lowering of the reaction barrier as a consequence of sulfur oxidation through enhanced nitrile binding and attack of the water nucleophile through a highly organized transition state.
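
    The kinetic picture above (an aqua/nitrile substitution equilibrium K feeding rate-determining hydration k) implies a simple saturation rate law. The sketch below evaluates it using the reported L(1) constants; the catalyst, nitrile and water concentrations are assumed for illustration.

    ```python
    # Pre-equilibrium model: LRu-OH2 + RCN <=> LRu-NCR + H2O (K), followed by
    # rate-determining hydration, v = k [H2O][LRu-NCR].
    K = 21.0           # equilibrium constant for L(1) with benzonitrile (reported)
    k = 0.37           # hydration rate constant for L(1), M^-1 h^-1 (reported)

    cat_total = 0.005  # total Ru concentration, M (assumed)
    nitrile = 0.50     # nitrile concentration, M (assumed)
    water = 1.0        # water concentration, M (assumed)

    # Fraction of catalyst present as the active nitrile adduct.
    f_nitrile = K * nitrile / (water + K * nitrile)

    rate = k * water * cat_total * f_nitrile  # M h^-1
    print(f"nitrile-bound fraction = {f_nitrile:.2f}, rate = {1e3 * rate:.2f} mM/h")
    ```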

  7. Metabolic reprogramming for producing energy and reducing power in fumarate hydratase null cells from hereditary leiomyomatosis renal cell carcinoma.

    Directory of Open Access Journals (Sweden)

    Youfeng Yang

    synthesis for rapid proliferation and is essential for defense against increased oxidative stress. This increased NADPH-producing PPP activity was shown to be a strong, consistent feature of both fumarate hydratase-deficient tumors and cell line models.

  8. The aconitate hydratase family from Citrus

    Directory of Open Access Journals (Sweden)

    Cercos Manuel

    2010-10-01

    Background: Research on citrus fruit ripening has received considerable attention because of the importance of citrus fruits for the human diet. Organic acids are among the main determinants of taste and organoleptic quality of fruits and hence the control of fruit acidity loss has a strong economical relevance. In citrus, organic acids accumulate in the juice sac cells of developing fruits and are catabolized thereafter during ripening. Aconitase, which transforms citrate to isocitrate, is the first step of citric acid catabolism and a major component of the citrate utilization machinery. In this work, the citrus aconitase gene family was first characterized, and a phylogenetic analysis was then carried out in order to understand the evolutionary history of this family in plants. Gene expression analyses of the citrus aconitase family were subsequently performed in several acidic and acidless genotypes to elucidate their involvement in acid homeostasis. Results: Analysis of 460,000 citrus ESTs, followed by sequencing of complete cDNA clones, identified in citrus 3 transcription units coding for putatively active aconitate hydratase proteins, named CcAco1, CcAco2 and CcAco3. A phylogenetic study carried out on the Aco family in 14 plant species shows the presence of 5 Aco subfamilies, and that the ancestor of monocot and dicot species shared at least one Aco gene. Real-time RT-PCR expression analyses of the three citrus aconitase genes were performed in pulp tissues along fruit development in acidic and acidless citrus varieties such as mandarins, oranges and lemons. While CcAco3 expression was always low, the CcAco1 and CcAco2 genes were generally induced during the rapid phase of fruit growth, along with the maximum in acidity and the beginning of the acid reduction. Two exceptions to this general pattern were found: (1) Clemenules mandarin failed to induce CcAco2 although acid levels were rapidly reduced; and (2) the acidless "Sucreña" orange

  9. [Effect of melaxen and valdoxan on the intensity of free radical processes, aconitate hydratase activity and citrate content in rat tissues under hyperthyroidism].

    Science.gov (United States)

    Gorbenko, M V; Popova, T N; Shul'gin, K K; Popov, S S; Agarkov, A A

    2014-01-01

    The influence of melaxen and valdoxan on biochemiluminescence parameters, aconitate hydratase activity and citrate levels in rat heart and liver during the development of experimental hyperthyroidism was investigated. Administration of these substances promoted a decrease in the biochemiluminescence parameters, which had been elevated in rat tissues in response to the development of oxidative stress under hyperthyroidism. Aconitate hydratase activity and citrate concentration in rat liver and heart, which increased under the pathological conditions, shifted towards control values after administration of these melatonin-correcting drugs. The results indicate a positive effect of valdoxan and melaxen on the oxidative status of the organism during the development of experimental hyperthyroidism, which is associated with the antioxidant action of melatonin.

  10. Expression control of nitrile hydratase and amidase genes in Rhodococcus erythropolis and substrate specificities of the enzymes

    Czech Academy of Sciences Publication Activity Database

    Rucká, Lenka; Volkova, Olga; Pavlík, Adam; Kaplan, Ondřej; Kracík, M.; Nešvera, Jan; Martínková, Ludmila; Pátek, Miroslav

    2014-01-01

    Vol. 105, No. 6 (2014), pp. 1179-1190 ISSN 0003-6072 R&D Projects: GA MŠk(CZ) LC06010; GA ČR(CZ) GAP504/11/0394 Institutional support: RVO:61388971 Keywords: Rhodococcus erythropolis * Amidase * Nitrile hydratase Subject RIV: EE - Microbiology, Virology Impact factor: 1.806, year: 2014

  11. Self-subunit swapping occurs in another gene type of cobalt nitrile hydratase.

    Directory of Open Access Journals (Sweden)

    Yi Liu

    Self-subunit swapping is one of the post-translational maturation processes of the cobalt-containing nitrile hydratase (Co-NHase) family of enzymes. All of these NHases possess a gene organization of <β-subunit gene><α-subunit gene><activator gene>, which allows the activator protein to easily form a mediatory complex with the α-subunit of the NHase after translation. Here, we discovered that the incorporation of cobalt into another type of Co-NHase, with a gene organization of <α-subunit gene><β-subunit gene><activator gene>, was also dependent on self-subunit swapping. We successfully isolated a recombinant NHase activator protein (P14K) of Pseudomonas putida NRRL-18668 by adding a Strep-tag to the N-terminus of P14K. P14K was found to form a complex [α(StrepP14K)2] with the α-subunit of the NHase. The incorporation of cobalt into the NHase of P. putida was confirmed to be dependent on α-subunit substitution between the cobalt-containing α(StrepP14K)2 and the cobalt-free NHase. Cobalt was inserted into cobalt-free α(StrepP14K)2 but not into cobalt-free NHase, suggesting that P14K functions not only as a self-subunit swapping chaperone but also as a metallochaperone. In addition, NHase from P. putida was also expressed by a mutant gene that was designed with a <β-subunit gene><α-subunit gene><P14K> order. Our findings expand the general features of self-subunit swapping maturation.

  12. Expansion of ribosomally produced natural products: a nitrile hydratase- and Nif11-related precursor family

    Directory of Open Access Journals (Sweden)

    Mitchell Douglas A

    2010-05-01

    Background: A new family of natural products has been described in which cysteine, serine and threonine from ribosomally produced peptides are converted to thiazoles, oxazoles and methyloxazoles, respectively. These metabolites and their biosynthetic gene clusters are now referred to as thiazole/oxazole-modified microcins (TOMM). As exemplified by microcin B17 and streptolysin S, TOMM precursors contain an N-terminal leader sequence and a C-terminal core peptide. The leader sequence contains binding sites for the posttranslational modifying enzymes, which subsequently act upon the core peptide. TOMM peptides are small and highly variable, frequently missed by gene-finders and occasionally situated far from the thiazole/oxazole-forming genes. Thus, locating a substrate for a particular TOMM pathway can be a challenging endeavor. Results: Examination of candidate TOMM precursors has revealed a subclass with an uncharacteristically long leader sequence closely related to the enzyme nitrile hydratase. Members of this nitrile hydratase leader peptide (NHLP) family lack the metal-binding residues required for catalysis. Instead, NHLP sequences display the classic Gly-Gly cleavage motif and have C-terminal regions rich in heterocyclizable residues. The NHLP family exhibits a correlated species distribution and local clustering with an ABC transport system. This study also provides evidence that a separate family, annotated as Nif11 nitrogen-fixing proteins, can serve as natural product precursors (N11P), but not always of the TOMM variety. Indeed, a number of cyanobacterial genomes show extensive N11P paralogous expansion, such as Nostoc, Prochlorococcus and Cyanothece, which replace the TOMM cluster with lanthionine biosynthetic machinery. Conclusions: This study has united numerous TOMM gene clusters with their cognate substrates. These results suggest that two large protein families, the nitrile hydratases and Nif11, have been retailored for

  13. The integration of cyanide hydratase and tyrosinase catalysts enables effective degradation of cyanide and phenol in coking wastewaters

    Czech Academy of Sciences Publication Activity Database

    Martínková, Ludmila; Chmátal, Martin

    2016-01-01

    Vol. 102, October (2016), pp. 90-95 ISSN 0043-1354 R&D Projects: GA TA ČR TA01021368; GA TA ČR(CZ) TA04021212; GA MŠk(CZ) LD12049 Institutional support: RVO:61388971 Keywords: Cyanide hydratase * Tyrosinase * Cyanide Subject RIV: CE - Biochemistry Impact factor: 6.942, year: 2016

  14. Insights into catalytic activity of industrial enzyme Co-nitrile hydratase. Docking studies of nitriles and amides.

    Science.gov (United States)

    Peplowski, Lukasz; Kubiak, Karina; Nowak, Wieslaw

    2007-07-01

    Nitrile hydratase (NHase) is an enzyme containing non-corrin Co3+ in a non-standard active site. The NHase from Pseudonocardia thermophila JCM 3095 catalyses the hydration of nitriles to the corresponding amides. The efficiency of the enzyme is 100 times higher for aliphatic nitriles than for aromatic ones. In order to better understand this selectivity, dockings of a series of aliphatic and aromatic nitriles and related amides into a model protein based on an X-ray structure were performed. Substantial differences in binding modes were observed, with aliphatic compounds showing greater conformational freedom. Distinct interactions with the posttranslationally modified cysteines present in the active site of the enzyme were observed. Modeling shows that a water molecule activated by the metal ion may easily attack the docked acrylonitrile directly to transform this molecule into acrylamide. Thus, the docking studies provide support for one of the reaction mechanisms discussed in the literature.

  15. 4-Oxalocrotonate tautomerase, its homologue YwhB, and active vinylpyruvate hydratase: Synthesis and evaluation of 2-fluoro substrate analogues

    NARCIS (Netherlands)

    Johnson, William H; Wang, Susan C; Stanley, Thanuja M; Czerwinski, Robert M; Almrud, Jeffrey J; Poelarends, Gerrit J; Murzin, Alexey G; Whitman, Christian P

    2004-01-01

    A series of 2-fluoro-4-alkene and 2-fluoro-4-alkyne substrate analogues were synthesized and examined as potential inhibitors of three enzymes: 4-oxalocrotonate tautomerase (4-OT) and vinylpyruvate hydratase (VPH) from the catechol meta-fission pathway and a closely related 4-OT homologue found in

  16. Cyanide hydratase from Aspergillus niger K10: Overproduction in Escherichia coli, purification, characterization and use in continuous cyanide degradation

    Czech Academy of Sciences Publication Activity Database

    Rinágelová, Anna; Kaplan, Ondřej; Veselá, Alicja Barbara; Chmátal, Martin; Křenková, Alena; Plíhal, Ondřej; Pasquarelli, Fabrizia; Cantarella, M.; Martínková, Ludmila

    2014-01-01

    Vol. 49, No. 3 (2014), pp. 445-450 ISSN 1359-5113 R&D Projects: GA ČR(CZ) GAP504/11/0394; GA TA ČR TA01021368 Institutional support: RVO:61388971 Keywords: Cyanide hydratase * Nitrilase * Aspergillus niger Subject RIV: CE - Biochemistry Impact factor: 2.516, year: 2014

  17. Cloning, purification, crystallization and preliminary X-ray diffraction analysis of nitrile hydratase from the thermophilic Bacillus smithii SC-J05-1

    International Nuclear Information System (INIS)

    Hourai, Shinji; Ishii, Takeshi; Miki, Misao; Takashima, Yoshiki; Mitsuda, Satoshi; Yanagi, Kazunori

    2005-01-01

    The nitrile hydratase from the thermophilic B. smithii SC-J05-1 (Bs NHase) has been purified, cloned and crystallized. Nitrile hydratase (NHase) converts nitriles to the corresponding amides and is recognized as having important industrial applications. Purification, cloning, crystallization and initial crystallographic studies of the NHase from Bacillus smithii SC-J05-1 (Bs NHase) were conducted to analyze the activity, specificity and thermal stability of this hydrolytic enzyme. Bs NHase was purified to homogeneity from microbial cells of B. smithii SC-J05-1 and the nucleotide sequences of both the α- and β-subunits were determined. Purified Bs NHase was used for crystallization and several crystal forms were obtained by the vapour-diffusion method. Microseeding and the addition of magnesium ions were essential for obtaining crystals suitable for X-ray diffraction analysis.

  18. Identification and characterization of an oleate hydratase-encoding gene from Bifidobacterium breve.

    Science.gov (United States)

    O'Connell, Kerry Joan; Motherway, Mary O'Connell; Hennessey, Alan A; Brodhun, Florian; Ross, R Paul; Feussner, Ivo; Stanton, Catherine; Fitzgerald, Gerald F; van Sinderen, Douwe

    2013-01-01

    Bifidobacteria are common commensals of the mammalian gastrointestinal tract. Previous studies have suggested that a bifidobacterial myosin-cross-reactive antigen (MCRA) protein plays a role in bacterial stress tolerance, while this protein has also been linked to the biosynthesis of conjugated linoleic acid (CLA) in bifidobacteria. In order to increase our understanding of the role of MCRA in bifidobacteria we created and analyzed an insertion mutant of the MCRA-encoding gene of B. breve NCFB 2258. Our results demonstrate that the MCRA protein of B. breve NCFB 2258 does not appear to play a role in CLA production, yet is an oleate hydratase, which contributes to bifidobacterial solvent stress protection.

  19. Purification and characterization of acetylene hydratase of Pelobacter acetylenicus, a tungsten iron-sulfur protein.

    Science.gov (United States)

    Rosner, B M; Schink, B

    1995-10-01

    Acetylene hydratase of the mesophilic fermenting bacterium Pelobacter acetylenicus catalyzes the hydration of acetylene to acetaldehyde. Growth of P. acetylenicus with acetylene and specific acetylene hydratase activity depended on tungstate or, to a lesser degree, molybdate supply in the medium. The specific enzyme activity in cell extract was highest after growth in the presence of tungstate. Enzyme activity was stable even after prolonged storage of the cell extract or of the purified protein under air. However, enzyme activity could be measured only in the presence of a strong reducing agent such as titanium(III) citrate or dithionite. The enzyme was purified 240-fold by ammonium sulfate precipitation, anion-exchange chromatography, size exclusion chromatography, and a second anion-exchange chromatography step, with a yield of 36%. The protein was a monomer with an apparent molecular mass of 73 kDa, as determined by sodium dodecyl sulfate-polyacrylamide gel electrophoresis. The isoelectric point was at pH 4.2. Per mol of enzyme, 4.8 mol of iron, 3.9 mol of acid-labile sulfur, and 0.4 mol of tungsten, but no molybdenum, were detected. The Km for acetylene, as assayed in a coupled photometric test with yeast alcohol dehydrogenase and NADH, was 14 μM, and the Vmax was 69 μmol min⁻¹ (mg protein)⁻¹. The optimum temperature for activity was 50 °C, and the apparent pH optimum was 6.0 to 6.5. The N-terminal amino acid sequence gave no indication of resemblance to any enzyme protein described so far.
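
    The 240-fold purification and 36% yield quoted above follow from specific and total activities tracked across the purification steps (fold = specific-activity ratio, yield = total-activity ratio). A sketch with illustrative numbers chosen to reproduce those figures:

    ```python
    # Illustrative purification-table arithmetic; values are assumptions picked
    # to match the quoted 240-fold purification, 36% yield and ~69 U/mg endpoint.
    extract = {"protein_mg": 1000.0, "activity_U": 290.0}   # crude cell extract
    final   = {"protein_mg": 1.5,    "activity_U": 104.4}   # purified enzyme

    sa_extract = extract["activity_U"] / extract["protein_mg"]  # U/mg
    sa_final = final["activity_U"] / final["protein_mg"]        # U/mg

    fold = sa_final / sa_extract                                # purification factor
    yield_pct = 100.0 * final["activity_U"] / extract["activity_U"]
    print(f"{fold:.0f}-fold purification, {yield_pct:.0f}% yield, "
          f"{sa_final:.1f} U/mg final specific activity")
    ```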

  20. Biotransformation of nitriles to amides using soluble and immobilized nitrile hydratase from Rhodococcus erythropolis A4

    Czech Academy of Sciences Publication Activity Database

    Kubáč, David; Kaplan, Ondřej; Elišáková, Veronika; Pátek, Miroslav; Vejvoda, Vojtěch; Slámová, Kristýna; Tóthová, A.; Lemaire, M.; Gallienne, E.; Lutz-Wahl, S.; Fischer, L.; Kuzma, Marek; Pelantová, Helena; van Pelt, S.; Bolte, J.; Křen, Vladimír; Martínková, Ludmila

    2008-01-01

    Vol. 50, No. 2-4 (2008), pp. 107-113 ISSN 1381-1177 R&D Projects: GA ČR GA203/05/2267; GA MŠk(CZ) LC06010; GA MŠk OC 171 Grant - others: XE(XE) ESF COST D25/0002/02; CZ(CZ) D10-CZ25/06-07; CZ(CZ) D-25 Institutional research plan: CEZ:AV0Z50200510 Keywords: Rhodococcus erythropolis * nitrile hydratase * amidase Subject RIV: EE - Microbiology, Virology Impact factor: 2.015, year: 2008

  1. Molecular Pathways: Fumarate Hydratase-Deficient Kidney Cancer: Targeting the Warburg Effect in Cancer

    Science.gov (United States)

    Linehan, W. Marston; Rouault, Tracey A.

    2015-01-01

    Hereditary leiomyomatosis and renal cell carcinoma (HLRCC) is a hereditary cancer syndrome in which affected individuals are at risk for the development of cutaneous and uterine leiomyomas and an aggressive form of type II papillary kidney cancer. HLRCC is characterized by germline mutation of the tricarboxylic acid cycle (TCA) enzyme fumarate hydratase (FH). FH-deficient kidney cancer is characterized by impaired oxidative phosphorylation and a metabolic shift to aerobic glycolysis, a form of metabolic reprogramming referred to as the Warburg effect. Increased glycolysis generates the ATP needed for increased cell proliferation. In FH-deficient kidney cancer, levels of AMPK, a cellular energy sensor, are decreased, resulting in diminished p53 levels, decreased expression of the iron importer DMT1 (leading to low cellular iron levels), and enhanced fatty acid synthesis through diminished phosphorylation of acetyl-CoA carboxylase, a rate-limiting enzyme of fatty acid synthesis. Increased fumarate and decreased iron levels in FH-deficient kidney cancer cells inactivate prolyl hydroxylases, leading to stabilization of HIF1α and increased expression of genes such as vascular endothelial growth factor (VEGF) and GLUT1 to provide the fuel needed to meet rapid growth demands. Several therapeutic approaches for targeting the metabolic basis of FH-deficient kidney cancer are under development or are being evaluated in clinical trials, including the use of agents such as metformin, which would reverse the inactivation of AMPK; approaches to inhibit glucose transport, LDH-A, the antioxidant response pathway and the heme oxygenase pathway; and approaches to target the tumor vasculature and glucose transport with agents such as bevacizumab and erlotinib. The same types of metabolic shifts, to aerobic glycolysis with decreased oxidative phosphorylation, have been found in a wide variety of other cancer types. Targeting the metabolic basis of a rare cancer such as fumarate hydratase

  2. Metabolic reconstructions identify plant 3-methylglutaconyl-CoA hydratase that is crucial for branched-chain amino acid catabolism in mitochondria.

    Science.gov (United States)

    Latimer, Scott; Li, Yubing; Nguyen, Thuong T H; Soubeyrand, Eric; Fatihi, Abdelhak; Elowsky, Christian G; Block, Anna; Pichersky, Eran; Basset, Gilles J

    2018-05-09

    The proteinogenic branched-chain amino acids (BCAAs) leucine, isoleucine and valine are essential nutrients for mammals. In plants, BCAAs double as alternative energy sources when carbohydrates become limiting, the catabolism of BCAAs providing electrons to the respiratory chain and intermediates to the tricarboxylic acid cycle. Yet, the actual architecture of the degradation pathways of BCAAs is not well understood. In this study, gene network modeling in Arabidopsis and rice, and plant-prokaryote comparative genomics, detected candidates for 3-methylglutaconyl-CoA hydratase (EC 4.2.1.18), one of the missing plant enzymes of leucine catabolism. Alignments of these protein candidates sampled from various spermatophytes revealed non-homologous N-terminal extensions that are lacking in their bacterial counterparts, and green fluorescent protein-fusion experiments demonstrated that the Arabidopsis protein, the product of gene At4g16800, is targeted to mitochondria. Recombinant At4g16800 catalyzed the dehydration of 3-hydroxymethylglutaryl-CoA into 3-methylglutaconyl-CoA and displayed kinetic features similar to those of its prokaryotic homolog. When at4g16800 knockout plants were subjected to dark-induced carbon starvation, their rosette leaves displayed accelerated senescence as compared to control plants, and this phenotype was paralleled by a marked increase in the accumulation of free and total leucine, isoleucine and valine. The seeds of the at4g16800 mutant showed a similar accumulation of free BCAAs. These data suggest that 3-methylglutaconyl-CoA hydratase is not solely involved in the degradation of leucine, but is also a significant contributor to that of isoleucine and valine. Furthermore, evidence is shown that, unlike the situation observed in Trypanosomatidae, leucine catabolism does not contribute to the formation of the terpenoid precursor mevalonate.

  3. Myosin-cross-reactive antigen (MCRA) protein from Bifidobacterium breve is a FAD-dependent fatty acid hydratase which has a function in stress protection

    LENUS (Irish Health Repository)

    Rosberg-Cody, Eva

    2011-02-17

    Background: The aim of this study was to determine the catalytic activity and physiological role of myosin-cross-reactive antigen (MCRA) from Bifidobacterium breve NCIMB 702258. MCRA from B. breve NCIMB 702258 was cloned, sequenced and expressed in heterologous hosts (Lactococcus and Corynebacterium) and the recombinant proteins assessed for enzymatic activity against fatty acid substrates. Results: MCRA catalysed the conversion of palmitoleic, oleic and linoleic acids to the corresponding 10-hydroxy fatty acids, but shorter chain fatty acids were not used as substrates, while the presence of trans-double bonds and double bonds beyond the position C12 abolished hydratase activity. The hydroxy fatty acids produced were not metabolised further. We also found that heterologous Lactococcus and Corynebacterium expressing MCRA accumulated increasing amounts of 10-HOA and 10-HOE in the culture medium. Furthermore, the heterologous cultures exhibited less sensitivity to heat and solvent stresses compared to corresponding controls. Conclusions: MCRA protein in B. breve can be classified as a FAD-containing double bond hydratase, within the carbon-oxygen lyase family, which may be catalysing the first step in conjugated linoleic acid (CLA) production, and this protein has an additional function in bacterial stress protection.

  5. Myosin-cross-reactive antigen (MCRA) protein from Bifidobacterium breve is a FAD-dependent fatty acid hydratase which has a function in stress protection.

    Science.gov (United States)

    Rosberg-Cody, Eva; Liavonchanka, Alena; Göbel, Cornelia; Ross, R Paul; O'Sullivan, Orla; Fitzgerald, Gerald F; Feussner, Ivo; Stanton, Catherine

    2011-02-17

    The aim of this study was to determine the catalytic activity and physiological role of myosin-cross-reactive antigen (MCRA) from Bifidobacterium breve NCIMB 702258. MCRA from B. breve NCIMB 702258 was cloned, sequenced and expressed in heterologous hosts (Lactococcus and Corynebacterium) and the recombinant proteins assessed for enzymatic activity against fatty acid substrates. MCRA catalysed the conversion of palmitoleic, oleic and linoleic acids to the corresponding 10-hydroxy fatty acids, but shorter chain fatty acids were not used as substrates, while the presence of trans-double bonds and double bonds beyond the position C12 abolished hydratase activity. The hydroxy fatty acids produced were not metabolised further. We also found that heterologous Lactococcus and Corynebacterium expressing MCRA accumulated increasing amounts of 10-HOA and 10-HOE in the culture medium. Furthermore, the heterologous cultures exhibited less sensitivity to heat and solvent stresses compared to corresponding controls. MCRA protein in B. breve can be classified as a FAD-containing double bond hydratase, within the carbon-oxygen lyase family, which may be catalysing the first step in conjugated linoleic acid (CLA) production, and this protein has an additional function in bacterial stress protection.

  6. A Role for Cytosolic Fumarate Hydratase in Urea Cycle Metabolism and Renal Neoplasia

    Directory of Open Access Journals (Sweden)

    Julie Adam

    2013-05-01

    Full Text Available The identification of mutated metabolic enzymes in hereditary cancer syndromes has established a direct link between metabolic dysregulation and cancer. Mutations in the Krebs cycle enzyme, fumarate hydratase (FH), predispose affected individuals to leiomyomas, renal cysts, and cancers, though the respective pathogenic roles of mitochondrial and cytosolic FH isoforms remain undefined. On the basis of comprehensive metabolomic analyses, we demonstrate that FH1-deficient cells and tissues exhibit defects in the urea cycle/arginine metabolism. Remarkably, transgenic re-expression of cytosolic FH ameliorated both renal cyst development and urea cycle defects associated with renal-specific FH1 deletion in mice. Furthermore, acute arginine depletion significantly reduced the viability of FH1-deficient cells in comparison to controls. Our findings highlight the importance of extramitochondrial metabolic pathways in FH-associated oncogenesis and the urea cycle/arginine metabolism as a potential therapeutic target.

  7. Fumarate Hydratase Deletion in Pancreatic β Cells Leads to Progressive Diabetes

    Directory of Open Access Journals (Sweden)

    Julie Adam

    2017-09-01

    Full Text Available We explored the role of the Krebs cycle enzyme fumarate hydratase (FH) in glucose-stimulated insulin secretion (GSIS). Mice lacking Fh1 in pancreatic β cells (Fh1βKO mice) appear normal for 6–8 weeks but then develop progressive glucose intolerance and diabetes. Glucose tolerance is rescued by expression of mitochondrial or cytosolic FH but not by deletion of Hif1α or Nrf2. Progressive hyperglycemia in Fh1βKO mice led to dysregulated metabolism in β cells, a decrease in glucose-induced ATP production, electrical activity, cytoplasmic [Ca2+]i elevation, and GSIS. Fh1 loss resulted in elevated intracellular fumarate, promoting succination of critical cysteines in GAPDH, GMPR, and PARK7/DJ-1 and cytoplasmic acidification. Intracellular fumarate levels were increased in islets exposed to high glucose and in islets from human donors with type 2 diabetes (T2D). The impaired GSIS in islets from diabetic Fh1βKO mice was ameliorated after culture under normoglycemic conditions. These studies highlight the role of FH and dysregulated mitochondrial metabolism in T2D.

  8. Fumarate hydratase is a critical metabolic regulator of hematopoietic stem cell functions.

    Science.gov (United States)

    Guitart, Amelie V; Panagopoulou, Theano I; Villacreces, Arnaud; Vukovic, Milica; Sepulveda, Catarina; Allen, Lewis; Carter, Roderick N; van de Lagemaat, Louie N; Morgan, Marcos; Giles, Peter; Sas, Zuzanna; Gonzalez, Marta Vila; Lawson, Hannah; Paris, Jasmin; Edwards-Hicks, Joy; Schaak, Katrin; Subramani, Chithra; Gezer, Deniz; Armesilla-Diaz, Alejandro; Wills, Jimi; Easterbrook, Aaron; Coman, David; So, Chi Wai Eric; O'Carroll, Donal; Vernimmen, Douglas; Rodrigues, Neil P; Pollard, Patrick J; Morton, Nicholas M; Finch, Andrew; Kranc, Kamil R

    2017-03-06

    Strict regulation of stem cell metabolism is essential for tissue functions and tumor suppression. In this study, we investigated the role of fumarate hydratase (Fh1), a key component of the mitochondrial tricarboxylic acid (TCA) cycle and cytosolic fumarate metabolism, in normal and leukemic hematopoiesis. Hematopoiesis-specific Fh1 deletion (resulting in endogenous fumarate accumulation and a genetic TCA cycle block reflected by decreased maximal mitochondrial respiration) caused lethal fetal liver hematopoietic defects and hematopoietic stem cell (HSC) failure. Reexpression of extramitochondrial Fh1 (which normalized fumarate levels but not maximal mitochondrial respiration) rescued these phenotypes, indicating the causal role of cellular fumarate accumulation. However, HSCs lacking mitochondrial Fh1 (which had normal fumarate levels but defective maximal mitochondrial respiration) failed to self-renew and displayed lymphoid differentiation defects. In contrast, leukemia-initiating cells lacking mitochondrial Fh1 efficiently propagated Meis1/Hoxa9-driven leukemia. Thus, we identify novel roles for fumarate metabolism in HSC maintenance and hematopoietic differentiation and reveal a differential requirement for mitochondrial Fh1 in normal hematopoiesis and leukemia propagation. © 2017 Guitart et al.

  9. Activity Enhancement Based on the Chemical Equilibrium of Multiple-Subunit Nitrile Hydratase from Bordetella petrii.

    Science.gov (United States)

    Liu, Yi; Liu, Ping; Lin, Lu; Zhao, Yueqin; Zhong, Wenjuan; Wu, Lunjie; Zhou, Zhemin; Sun, Weifeng

    2016-09-01

    The maturation mechanism of the nitrile hydratase (NHase) of Pseudomonas putida NRRL-18668 was previously discovered and named "self-subunit swapping." Since the NHase of Bordetella petrii DSM 12804 is similar to that of P. putida, NHase maturation in B. petrii is proposed to proceed in the same way. However, these findings had not yet been exploited in applications of NHase. We rapidly purified NHase and its activator via affinity His tags, and found that NHase cell extracts contained multiple protein species, including α, β, α2β2, and α(P14K)2, which existed in a state of chemical equilibrium. Furthermore, NHase activity was significantly enhanced by adding extra α(P14K)2 to the cell extracts, thereby shifting this chemical equilibrium. Our findings are useful for enhancing the activity of multiple-subunit enzymes, and demonstrate for the first time a significant increase in NHase activity achieved by exploiting this equilibrium.

  10. Modelling non-redox enzymes: Anaerobic and aerobic acetylene ...

    Indian Academy of Sciences (India)

    Administrator

    Modelling non-redox enzymes: Anaerobic and aerobic acetylene hydratase. SABYASACHI SARKAR. Department of Chemistry, Indian Institute of Technology, Kanpur 208 016, India. Acetaldehyde is the first metabolite produced during acetylene degradation by bacteria either aerobically or anaerobically. Conversion of ...

  11. Cyanide hydratases and cyanide dihydratases: emerging tools in the biodegradation and biodetection of cyanide.

    Science.gov (United States)

    Martínková, Ludmila; Veselá, Alicja Barbara; Rinágelová, Anna; Chmátal, Martin

    2015-11-01

    The purpose of this study is to summarize the current knowledge of the enzymes which are involved in the hydrolysis of cyanide, i.e., cyanide hydratases (CHTs; EC 4.2.1.66) and cyanide dihydratases (CynD; EC 3.5.5.1). CHTs are probably exclusively produced by filamentous fungi and widely occur in these organisms; in contrast, CynDs were only found in a few bacterial genera. CHTs differ from CynDs in their reaction products (formamide vs. formic acid and ammonia, respectively). Several CHTs were also found to transform nitriles but with lower relative activities compared to HCN. Mutants of CynDs and CHTs were constructed to study the structure-activity relationships in these enzymes or to improve their catalytic properties. The effect of the C-terminal part of the protein on the enzyme activity was determined by constructing the corresponding deletion mutants. CynDs are less active at alkaline pH than CHTs. To improve its bioremediation potential, CynD from Bacillus pumilus was engineered by directed evolution combined with site-directed mutagenesis, and its operation at pH 10 was thus enabled. Some of the enzymes have been tested for their potential to eliminate cyanide from cyanide-containing wastewaters. CynDs were also used to construct cyanide biosensors.

  12. Degradation of the metal-cyano complex tetracyanonickelate (II) by Fusarium oxysporum N-10.

    Science.gov (United States)

    Yanase, H; Sakamoto, A; Okamoto, K; Kita, K; Sato, Y

    2000-03-01

    A fungus with the ability to utilize a metal-cyano compound, tetracyanonickelate(II) {K2[Ni(CN)4]; TCN}, as its sole source of nitrogen was isolated from soil and identified as Fusarium oxysporum N-10. Both intact mycelia and cell-free extract of the strain catalyzed hydrolysis of TCN to formate and ammonia and produced formamide as an intermediate, thereby indicating that a hydratase and an amidase sequentially participated in the degradation of TCN. The enzyme catalyzing the hydration of TCN was purified approximately ten-fold from the cell-free extract of strain N-10 with a yield of 29%. The molecular mass of the active enzyme was estimated to be 160 kDa. The enzyme appears to exist as a homotetramer, each subunit having a molecular mass of 40 kDa. The enzyme also catalyzed the hydration of KCN, with a cyanide-hydrating activity 2 × 10^4 times greater than for TCN. The kinetic parameters for TCN and KCN indicated that the hydratase isolated from F. oxysporum was a cyanide hydratase able to utilize a broad range of cyano compounds and nitriles as substrates.

  13. The glycolytic shift in fumarate-hydratase-deficient kidney cancer lowers AMPK levels, increases anabolic propensities and lowers cellular iron levels

    KAUST Repository

    Tong, Winghang; Sourbier, Carole; Kovtunovych, Gennadiy; Jeong, Suhyoung; Vira, Manish A.; Ghosh, Manik Chandra; Romero, Vladimir Valera; Sougrat, Rachid; Vaulont, Sophie; Viollet, Benoît; Kim, Yeongsang; Lee, Sunmin; Trepel, Jane B.; Srinivasan, Ramaprasad; Bratslavsky, Gennady; Yang, Youfeng; Linehan, William Marston; Rouault, Tracey A.

    2011-01-01

    Inactivation of the TCA cycle enzyme, fumarate hydratase (FH), drives a metabolic shift to aerobic glycolysis in FH-deficient kidney tumors and cell lines from patients with hereditary leiomyomatosis renal cell cancer (HLRCC), resulting in decreased levels of AMP-activated kinase (AMPK) and p53 tumor suppressor, and activation of the anabolic factors, acetyl-CoA carboxylase and ribosomal protein S6. Reduced AMPK levels lead to diminished expression of the DMT1 iron transporter, and the resulting cytosolic iron deficiency activates the iron regulatory proteins, IRP1 and IRP2, and increases expression of the hypoxia inducible factor HIF-1α, but not HIF-2α. Silencing of HIF-1α or activation of AMPK diminishes invasive activities, indicating that alterations of HIF-1α and AMPK contribute to the oncogenic growth of FH-deficient cells. © 2011 Elsevier Inc.

  14. The glycolytic shift in fumarate-hydratase-deficient kidney cancer lowers AMPK levels, increases anabolic propensities and lowers cellular iron levels

    KAUST Repository

    Tong, Winghang

    2011-09-01

    Inactivation of the TCA cycle enzyme, fumarate hydratase (FH), drives a metabolic shift to aerobic glycolysis in FH-deficient kidney tumors and cell lines from patients with hereditary leiomyomatosis renal cell cancer (HLRCC), resulting in decreased levels of AMP-activated kinase (AMPK) and p53 tumor suppressor, and activation of the anabolic factors, acetyl-CoA carboxylase and ribosomal protein S6. Reduced AMPK levels lead to diminished expression of the DMT1 iron transporter, and the resulting cytosolic iron deficiency activates the iron regulatory proteins, IRP1 and IRP2, and increases expression of the hypoxia inducible factor HIF-1α, but not HIF-2α. Silencing of HIF-1α or activation of AMPK diminishes invasive activities, indicating that alterations of HIF-1α and AMPK contribute to the oncogenic growth of FH-deficient cells. © 2011 Elsevier Inc.

  15. No evidence for promoter region methylation of the succinate dehydrogenase and fumarate hydratase tumour suppressor genes in breast cancer

    Directory of Open Access Journals (Sweden)

    Dobrovic Alexander

    2009-09-01

    Full Text Available Abstract Background Succinate dehydrogenase (SDH) and fumarate hydratase (FH) are tricarboxylic acid (TCA) cycle enzymes that are also known to act as tumour suppressor genes. Increased succinate or fumarate levels as a consequence of SDH and FH deficiency inhibit hypoxia inducible factor-1α (HIF-1α) prolyl hydroxylases, leading to sustained HIF-1α expression in tumours. Since HIF-1α is frequently expressed in breast carcinomas, DNA methylation at the promoter regions of the SDHA, SDHB, SDHC, SDHD and FH genes was evaluated as a possible mechanism in silencing of SDH and FH expression in breast carcinomas. Findings No DNA methylation was identified in the promoter regions of the SDHA, SDHB, SDHC, SDHD and FH genes in 72 breast carcinomas and 10 breast cancer cell lines using methylation-sensitive high resolution melting, which detects both homogeneous and heterogeneous methylation. Conclusion These results show that inactivation via DNA methylation of the promoter CpG islands of SDH and FH is unlikely to play a major role in sporadic breast carcinomas.

  16. Nitrile hydratase of Rhodococcus erythropolis: characterization of the enzyme and the use of whole cells for biotransformation of nitriles.

    Science.gov (United States)

    Kamble, Ashwini L; Banoth, Linga; Meena, Vachan Singh; Singh, Amit; Chisti, Yusuf; Banerjee, U C

    2013-08-01

    The intracellular cobalt-type nitrile hydratase was purified from the bacterium Rhodococcus erythropolis. The pure enzyme consisted of two subunits of 29 and 30 kDa. The molecular weight of the native enzyme was estimated to be 65 kDa. At 25 °C the enzyme had a half-life of 25 h. The Michaelis-Menten parameters Km and vmax for the enzyme were 0.624 mM and 5.12 μmol/min/mg, respectively, using 3-cyanopyridine as the substrate. Freely suspended enzyme-containing bacterial cells and cells immobilized within alginate beads were evaluated for converting various nitriles to amides. In a packed bed reactor, alginate beads (2% alginate; 3 mm bead diameter) containing 200 mg/mL of cells achieved a conversion of >90% for benzonitrile and 4-cyanopyridine in 38 h (25 °C, pH 7.0) at a feed substrate concentration of 100 mM. The beads could be reused for up to six reaction cycles.
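    The constants above plug directly into the Michaelis-Menten rate law, v = vmax·[S]/(Km + [S]). As a quick illustration, here is a minimal sketch using the reported constants; the function name and the example concentrations are ours, not the authors':

```python
# Michaelis-Menten rate law evaluated with the constants reported for
# the R. erythropolis nitrile hydratase acting on 3-cyanopyridine.
KM = 0.624    # Michaelis constant, mM
VMAX = 5.12   # maximal rate, umol/min/mg protein

def rate(substrate_mm: float) -> float:
    """Reaction rate (umol/min/mg) at substrate concentration [S] (mM)."""
    return VMAX * substrate_mm / (KM + substrate_mm)

for s in (0.1, 0.624, 5.0, 100.0):
    print(f"[S] = {s:7.3f} mM -> v = {rate(s):4.2f} umol/min/mg")
```

    At [S] = Km the rate is vmax/2 = 2.56 μmol/min/mg, which is the defining property of the Michaelis constant.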

  17. Modeling Complex Time Limits

    Directory of Open Access Journals (Sweden)

    Oleg Svatos

    2013-01-01

    Full Text Available In this paper we analyze the complexity of time limits, which we find especially in regulated processes of public administration. First we review the most popular process modeling languages. We define an example scenario based on current Czech legislation and capture it in the discussed process modeling languages. The analysis shows that contemporary process modeling languages support the capture of time limits only partially, which causes trouble for analysts and unnecessary complexity in the models. Given these unsatisfying results, we analyze the complexity of time limits in greater detail and outline the lifecycle of a time limit using the multiple dynamic generalizations pattern. As an alternative to the popular process modeling languages, we present the PSD process modeling language, which supports the defined time-limit lifecycles natively and therefore keeps the models simple and easy to understand.

  18. Modeling Complex Systems

    CERN Document Server

    Boccara, Nino

    2010-01-01

    Modeling Complex Systems, 2nd Edition, explores the process of modeling complex systems, providing examples from such diverse fields as ecology, epidemiology, sociology, seismology, and economics. It illustrates how models of complex systems are built and provides indispensable mathematical tools for studying their dynamics. This vital introductory text is useful for advanced undergraduate students in various scientific disciplines, and serves as an important reference book for graduate students and young researchers. This enhanced second edition includes: recent research results and bibliographic references; extra footnotes which provide biographical information on cited scientists who have made significant contributions to the field; new and improved worked-out examples to aid a student's comprehension of the content; and exercises to challenge the reader and complement the material. Nino Boccara is also the author of Essentials of Mathematica: With Applications to Mathematics and Physics (Springer, 2007).

  19. Structural studies of MFE-1: the 1.9 Å crystal structure of the dehydrogenase part of rat peroxisomal MFE-1.

    Science.gov (United States)

    Taskinen, Jukka P; Kiema, Tiila R; Hiltunen, J Kalervo; Wierenga, Rik K

    2006-01-27

    The 1.9 Å structure of the C-terminal dehydrogenase part of the rat peroxisomal monomeric multifunctional enzyme type 1 (MFE-1) has been determined. In this construct (residues 260-722, referred to as MFE1-DH) the N-terminal hydratase part of MFE-1 has been deleted. The structure of MFE1-DH shows that it consists of an N-terminal helix, followed by a Rossmann-fold domain (domain C), followed by two tightly associated helical domains (domains D and E), which have similar topology. The structure of MFE1-DH is compared with the two known homologous structures: human mitochondrial 3-hydroxyacyl-CoA dehydrogenase (HAD; sequence identity 33%), which is dimeric and monofunctional, and the dimeric multifunctional α-chain (αFOM; sequence identity 28%) of the bacterial fatty acid β-oxidation α2β2-multienzyme complex. Like MFE-1, αFOM has an N-terminal hydratase part and a C-terminal dehydrogenase part, and the structure comparisons show that the N-terminal helix of MFE1-DH corresponds to the αFOM linker helix, located between its hydratase and dehydrogenase parts. It is also shown that this helix corresponds to the C-terminal helix-10 of the hydratase/isomerase superfamily, suggesting that functionally it belongs to the N-terminal hydratase part of MFE-1.

  20. Modeling Complex Systems

    International Nuclear Information System (INIS)

    Schreckenberg, M

    2004-01-01

    This book by Nino Boccara presents a compilation of model systems commonly termed 'complex'. It starts with a definition of the systems under consideration and how to build up a model to describe their complex dynamics. The subsequent chapters are devoted to various categories of mean-field type models (differential and recurrence equations, chaos) and of agent-based models (cellular automata, networks and power-law distributions). Each chapter is supplemented by a number of exercises and their solutions. The table of contents looks a little arbitrary, but the author took the most prominent model systems investigated over the years (and up until now there has been no unified theory covering the various aspects of complex dynamics). The model systems are explained by looking at a number of applications in various fields. The book is written as a textbook for interested students as well as serving as a comprehensive reference for experts. It is an ideal source for topics to be presented in a lecture on the dynamics of complex systems. This is the first book on this 'wide' topic and I have long awaited such a book (in fact I planned to write it myself, but this is much better than I could ever have written it!). Only section 6 on cellular automata is a little too limited to the author's point of view, and one would have expected more about the famous Domany-Kinzel model (and more accurate citation!). In my opinion this is one of the best textbooks published during the last decade, and even experts can learn a lot from it. Hopefully there will be an updated edition after, say, five years, since this field is growing so quickly. The price is too high for students but this, unfortunately, is the normal case today. Nevertheless I think it will be a great success! (book review)

  1. Nonparametric Bayesian Modeling of Complex Networks

    DEFF Research Database (Denmark)

    Schmidt, Mikkel Nørgaard; Mørup, Morten

    2013-01-01

    Modeling structure in complex networks using Bayesian nonparametrics makes it possible to specify flexible model structures and infer the adequate model complexity from the observed data. This article provides a gentle introduction to nonparametric Bayesian modeling of complex networks: using an infinite mixture model as running example, we go through the steps of deriving the model as an infinite limit of a finite parametric model, inferring the model parameters by Markov chain Monte Carlo, and checking the model's fit and predictive performance. We explain how advanced nonparametric models…
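    The "infinite limit of a finite parametric model" mentioned above is commonly represented by the Chinese restaurant process (CRP) prior over partitions. The sketch below draws cluster assignments from a CRP; it is a generic textbook construction, not code from the article, and the function name is our own:

```python
import random

def crp_assignments(n: int, alpha: float, seed: int = 0) -> list[int]:
    """Draw cluster labels for n items from a Chinese restaurant process.

    Item i joins an existing cluster with probability proportional to that
    cluster's current size, or opens a new cluster with probability
    proportional to the concentration parameter alpha.
    """
    rng = random.Random(seed)
    sizes: list[int] = []      # sizes[k] = number of items in cluster k
    labels: list[int] = []
    for i in range(n):
        r = rng.uniform(0, i + alpha)   # total weight so far is i + alpha
        acc = 0.0
        k = len(sizes)                  # default: open a new cluster
        for j, w in enumerate(sizes):
            acc += w
            if r < acc:
                k = j
                break
        if k == len(sizes):
            sizes.append(1)
        else:
            sizes[k] += 1
        labels.append(k)
    return labels

print(crp_assignments(10, alpha=1.0))
```

    Larger alpha yields more clusters on average; inference then amounts to resampling such assignments by Markov chain Monte Carlo.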

  2. Clinical Complexity in Medicine: A Measurement Model of Task and Patient Complexity.

    Science.gov (United States)

    Islam, R; Weir, C; Del Fiol, G

    2016-01-01

    Complexity in medicine needs to be reduced to simple components in a way that is comprehensible to researchers and clinicians. Few studies in the current literature propose a measurement model that addresses both task and patient complexity in medicine. The objective of this paper is to develop an integrated approach to understand and measure clinical complexity by incorporating both task and patient complexity components focusing on the infectious disease domain. The measurement model was adapted and modified for the healthcare domain. Three clinical infectious disease teams were observed, audio-recorded and transcribed. Each team included an infectious diseases expert, one infectious diseases fellow, one physician assistant and one pharmacy resident fellow. The transcripts were parsed and the authors independently coded complexity attributes. This baseline measurement model of clinical complexity was modified in an initial set of coding processes and further validated in a consensus-based iterative process that included several meetings and email discussions by three clinical experts from diverse backgrounds from the Department of Biomedical Informatics at the University of Utah. Inter-rater reliability was calculated using Cohen's kappa. The proposed clinical complexity model consists of two separate components. The first is a clinical task complexity model with 13 clinical complexity-contributing factors and 7 dimensions. The second is the patient complexity model with 11 complexity-contributing factors and 5 dimensions. The measurement model for complexity encompassing both task and patient complexity will be a valuable resource for future researchers and industry to measure and understand complexity in healthcare.
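    The abstract reports inter-rater reliability via Cohen's kappa, κ = (p_o − p_e)/(1 − p_e), where p_o is the observed agreement and p_e the agreement expected by chance. A minimal illustrative implementation follows (not the authors' code; the toy labels are invented):

```python
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    """Cohen's kappa: (p_o - p_e) / (1 - p_e) for two raters."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n)
              for c in set(freq_a) | set(freq_b))
    return (p_o - p_e) / (1 - p_e)

a = ["task", "patient", "task", "task", "patient"]
b = ["task", "patient", "patient", "task", "patient"]
print(round(cohens_kappa(a, b), 3))  # 0.615
```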

  3. Complex matrix model duality

    International Nuclear Information System (INIS)

    Brown, T.W.

    2010-11-01

    The same complex matrix model calculates both tachyon scattering for the c=1 non-critical string at the self-dual radius and certain correlation functions of half-BPS operators in N=4 super- Yang-Mills. It is dual to another complex matrix model where the couplings of the first model are encoded in the Kontsevich-like variables of the second. The duality between the theories is mirrored by the duality of their Feynman diagrams. Analogously to the Hermitian Kontsevich- Penner model, the correlation functions of the second model can be written as sums over discrete points in subspaces of the moduli space of punctured Riemann surfaces. (orig.)

  4. Complex matrix model duality

    Energy Technology Data Exchange (ETDEWEB)

    Brown, T.W.

    2010-11-15

    The same complex matrix model calculates both tachyon scattering for the c=1 non-critical string at the self-dual radius and certain correlation functions of half-BPS operators in N=4 super- Yang-Mills. It is dual to another complex matrix model where the couplings of the first model are encoded in the Kontsevich-like variables of the second. The duality between the theories is mirrored by the duality of their Feynman diagrams. Analogously to the Hermitian Kontsevich- Penner model, the correlation functions of the second model can be written as sums over discrete points in subspaces of the moduli space of punctured Riemann surfaces. (orig.)

  5. Simulation in Complex Modelling

    DEFF Research Database (Denmark)

    Nicholas, Paul; Ramsgaard Thomsen, Mette; Tamke, Martin

    2017-01-01

    This paper will discuss the role of simulation in extended architectural design modelling. As a framing paper, the aim is to present and discuss the role of integrated design simulation and feedback between design and simulation in a series of projects under the Complex Modelling framework. Complex … performance, engage with high degrees of interdependency and allow the emergence of design agency and feedback between the multiple scales of architectural construction. This paper presents examples of integrated design simulation from a series of projects including Lace Wall, A Bridge Too Far and Inflated Restraint, developed for the research exhibition Complex Modelling, Meldahls Smedie Gallery, Copenhagen, in 2016. Where the direct project aims and outcomes have been reported elsewhere, the aim of this paper is to discuss overarching strategies for working with design-integrated simulation.

  6. Modeling complexes of modeled proteins.

    Science.gov (United States)

    Anishchenko, Ivan; Kundrotas, Petras J; Vakser, Ilya A

    2017-03-01

    Structural characterization of proteins is essential for understanding life processes at the molecular level. However, only a fraction of known proteins have experimentally determined structures. This fraction is even smaller for protein-protein complexes. Thus, structural modeling of protein-protein interactions (docking) primarily has to rely on modeled structures of the individual proteins, which typically are less accurate than the experimentally determined ones. Such "double" modeling is the Grand Challenge of structural reconstruction of the interactome. Yet it remains so far largely untested in a systematic way. We present a comprehensive validation of template-based and free docking on a set of 165 complexes, where each protein model has six levels of structural accuracy, from 1 to 6 Å Cα RMSD. Many template-based docking predictions fall into the acceptable quality category, according to the CAPRI criteria, even for highly inaccurate proteins (5-6 Å RMSD), although the number of such models (and, consequently, the docking success rate) drops significantly for models with RMSD > 4 Å. The results show that the existing docking methodologies can be successfully applied to protein models with a broad range of structural accuracy, and the template-based docking is much less sensitive to inaccuracies of protein models than the free docking. Proteins 2017; 85:470-478. © 2016 Wiley Periodicals, Inc.
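    The accuracy levels quoted above are Cα RMSD values. For reference, RMSD over N already-superposed Cα positions is sqrt((1/N) Σ_i ||x_i − y_i||²); a minimal sketch follows (optimal superposition, e.g. by the Kabsch algorithm, is deliberately omitted):

```python
import numpy as np

def ca_rmsd(coords_a: np.ndarray, coords_b: np.ndarray) -> float:
    """RMSD between two (N, 3) arrays of C-alpha coordinates,
    assumed to be already optimally superposed."""
    diff = coords_a - coords_b
    return float(np.sqrt((diff * diff).sum(axis=1).mean()))

# Toy check: shifting every atom by 1 A along x gives RMSD = 1 A.
a = np.zeros((5, 3))
b = a.copy()
b[:, 0] += 1.0
print(ca_rmsd(a, b))  # 1.0
```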

  7. Enhancement of thermo-stability and product tolerance of Pseudomonas putida nitrile hydratase by fusing with self-assembling peptide.

    Science.gov (United States)

    Liu, Yi; Cui, Wenjing; Liu, Zhongmei; Cui, Youtian; Xia, Yuanyuan; Kobayashi, Michihiko; Zhou, Zhemin

    2014-09-01

    Self-assembling amphipathic peptides (SAPs) are peptides that can spontaneously assemble into ordered nanostructures. It has been reported that attaching SAPs to the N- or C-terminus of an enzyme can improve its thermo-stability. Here, we discovered that the thermo-stability and product tolerance of nitrile hydratase (NHase) were enhanced by fusion with two of the SAPs (EAK16 and ELK16). When ELK16 was fused to the N-terminus of the β-subunit, the resulting NHase (SAP-NHase-2) formed active inclusion bodies; fusion of EAK16 to the N-terminus of the β-subunit (SAP-NHase-1) or of ELK16 to the C-terminus of the β-subunit (SAP-NHase-10) did not affect NHase solubility. Whereas the wild-type NHase was deactivated after 30 min incubation at 50°C, SAP-NHase-1, SAP-NHase-2 and SAP-NHase-10 retained 45%, 30% and 50% activity; after treatment in buffer containing 10% acrylamide, the wild-type retained 30% activity, while SAP-NHase-1, SAP-NHase-2 and SAP-NHase-10 retained 52%, 42% and 55% activity. These SAP-NHases with enhanced thermo-stability and product tolerance should facilitate further industrial applications of NHase. Copyright © 2014 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.

  8. Model complexity control for hydrologic prediction

    NARCIS (Netherlands)

    Schoups, G.; Van de Giesen, N.C.; Savenije, H.H.G.

    2008-01-01

    A common concern in hydrologic modeling is overparameterization of complex models given limited and noisy data. This leads to problems of parameter nonuniqueness and equifinality, which may negatively affect prediction uncertainties. A systematic way of controlling model complexity is therefore needed.

  9. Multifaceted Modelling of Complex Business Enterprises.

    Science.gov (United States)

    Chakraborty, Subrata; Mengersen, Kerrie; Fidge, Colin; Ma, Lin; Lassen, David

    2015-01-01

    We formalise and present a new generic multifaceted complex system approach for modelling complex business enterprises. Our method has a strong focus on integrating the various data types available in an enterprise which represent the diverse perspectives of various stakeholders. We explain the challenges faced and define a novel approach to converting diverse data types into usable Bayesian probability forms. The data types that can be integrated include historic data, survey data, and management planning data, expert knowledge and incomplete data. The structural complexities of the complex system modelling process, based on various decision contexts, are also explained along with a solution. This new application of complex system models as a management tool for decision making is demonstrated using a railway transport case study. The case study demonstrates how the new approach can be utilised to develop a customised decision support model for a specific enterprise. Various decision scenarios are also provided to illustrate the versatility of the decision model at different phases of enterprise operations such as planning and control.

  10. Multifaceted Modelling of Complex Business Enterprises

    Science.gov (United States)

    2015-01-01

    We formalise and present a new generic multifaceted complex system approach for modelling complex business enterprises. Our method has a strong focus on integrating the various data types available in an enterprise which represent the diverse perspectives of various stakeholders. We explain the challenges faced and define a novel approach to converting diverse data types into usable Bayesian probability forms. The data types that can be integrated include historic data, survey data, and management planning data, expert knowledge and incomplete data. The structural complexities of the complex system modelling process, based on various decision contexts, are also explained along with a solution. This new application of complex system models as a management tool for decision making is demonstrated using a railway transport case study. The case study demonstrates how the new approach can be utilised to develop a customised decision support model for a specific enterprise. Various decision scenarios are also provided to illustrate the versatility of the decision model at different phases of enterprise operations such as planning and control. PMID:26247591

  11. Modelling the structure of complex networks

    DEFF Research Database (Denmark)

    Herlau, Tue

    networks has been independently studied as mathematical objects in their own right. As such, there has been both an increased demand for statistical methods for complex networks as well as a quickly growing mathematical literature on the subject. In this dissertation we explore aspects of modelling complex....... The next chapters will treat some of the various symmetries, representer theorems and probabilistic structures often deployed in the modelling complex networks, the construction of sampling methods and various network models. The introductory chapters will serve to provide context for the included written...

  12. Catalytic Mechanism of Nitrile Hydratase Proposed by Time-resolved X-ray Crystallography Using a Novel Substrate, tert-Butylisonitrile

    Science.gov (United States)

    Hashimoto, Koichi; Suzuki, Hiroyuki; Taniguchi, Kayoko; Noguchi, Takumi; Yohda, Masafumi; Odaka, Masafumi

    2008-01-01

    Nitrile hydratases (NHases) have an unusual iron or cobalt catalytic center with two oxidized cysteine ligands, cysteine-sulfinic acid and cysteine-sulfenic acid, catalyzing the hydration of nitriles to amides. Recently, we found that the NHase of Rhodococcus erythropolis N771 exhibited an additional catalytic activity, converting tert-butylisonitrile (tBuNC) to tert-butylamine. Taking advantage of the slow reactivity of tBuNC and the photoreactivity of nitrosylated NHase, we present the first structural evidence for the catalytic mechanism of NHase with time-resolved x-ray crystallography. By monitoring the reaction with attenuated total reflectance-Fourier transform infrared spectroscopy, the product from the isonitrile carbon was identified as a CO molecule. Crystals of nitrosylated inactive NHase were soaked with tBuNC. The catalytic reaction was initiated by photo-induced denitrosylation and stopped by flash cooling. tBuNC was first trapped at the hydrophobic pocket above the iron center and then coordinated to the iron ion at 120 min. At 440 min, the electron density of tBuNC was significantly altered, and a new electron density was observed near the isonitrile carbon as well as the sulfenate oxygen of αCys114. These results demonstrate that the substrate was coordinated to the iron and then attacked by a solvent molecule activated by αCys114-SOH. PMID:18948265

  13. Updating the debate on model complexity

    Science.gov (United States)

    Simmons, Craig T.; Hunt, Randall J.

    2012-01-01

    As scientists who are trying to understand a complex natural world that cannot be fully characterized in the field, how can we best inform the society in which we live? This founding context was addressed in a special session, “Complexity in Modeling: How Much is Too Much?” convened at the 2011 Geological Society of America Annual Meeting. The session had a variety of thought-provoking presentations—ranging from philosophy to cost-benefit analyses—and provided some areas of broad agreement that were not evident in discussions of the topic in 1998 (Hunt and Zheng, 1999). The session began with a short introduction during which model complexity was framed borrowing from an economic concept, the Law of Diminishing Returns, and an example of enjoyment derived by eating ice cream. Initially, there is increasing satisfaction gained from eating more ice cream, to a point where the gain in satisfaction starts to decrease, ending at a point when the eater sees no value in eating more ice cream. A traditional view of model complexity is similar—understanding gained from modeling can actually decrease if models become unnecessarily complex. However, oversimplified models—those that omit important aspects of the problem needed to make a good prediction—can also limit and confound our understanding. Thus, the goal of all modeling is to find the “sweet spot” of model sophistication—regardless of whether complexity was added sequentially to an overly simple model or collapsed from an initial highly parameterized framework that uses mathematics and statistics to attain an optimum (e.g., Hunt et al., 2007). Thus, holistic parsimony is attained, incorporating “as simple as possible,” as well as the equally important corollary “but no simpler.”
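    The diminishing-returns argument can be made concrete with the familiar bias-variance picture: fit to the calibration data improves monotonically with complexity, while performance on withheld data eventually degrades. Below is a hedged sketch with synthetic data and polynomial fits of increasing degree, an illustration of the session's framing rather than anything presented there:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)  # noisy data
x_new = np.linspace(0, 1, 200)                          # held-out grid
y_new = np.sin(2 * np.pi * x_new)                       # noiseless truth

for degree in (1, 3, 6, 9):
    coeffs = np.polyfit(x, y, degree)
    fit = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
    out = np.sqrt(np.mean((np.polyval(coeffs, x_new) - y_new) ** 2))
    print(f"degree {degree}: calibration RMSE {fit:.3f}, "
          f"held-out RMSE {out:.3f}")
```

    Calibration error always shrinks as the degree grows; the held-out error typically bottoms out near the true complexity and then rises, which is the "sweet spot" of the session's introduction.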

  14. Complexity, Modeling, and Natural Resource Management

    Directory of Open Access Journals (Sweden)

    Paul Cilliers

    2013-09-01

    Full Text Available This paper contends that natural resource management (NRM) issues are, by their very nature, complex and that both scientists and managers in this broad field will benefit from a theoretical understanding of complex systems. It starts off by presenting the core features of a view of complexity that not only deals with the limits to our understanding, but also points toward a responsible and motivating position. Everything we do involves explicit or implicit modeling, and as we can never have comprehensive access to any complex system, we need to be aware both of what we leave out as we model and of the implications of the choice of our modeling framework. One vantage point is never sufficient, as complexity necessarily implies that multiple (independent) conceptualizations are needed to engage the system adequately. We use two South African cases as examples of complex systems - restricting the case narratives mainly to the biophysical domain associated with NRM issues - that make the point that even the behavior of the biophysical subsystems themselves is already complex. From the insights into complex systems discussed in the first part of the paper and the lessons emerging from the way these cases have been dealt with in reality, we extract five interrelated generic principles for practicing science and management in complex NRM environments. These principles are then further elucidated using four further South African case studies - organized as two contrasting pairs - and now focusing on the more difficult organizational and social side, comparing the human organizational endeavors in managing such systems.

  15. GenBank blastx search result: AK059236 [KOME

    Lifescience Database Archive (English)

    Full Text Available AK059236 001-024-F01 AB016078.1 Rhodococcus sp. N-771 genes for nitrile hydratase regulator 2 and 1, amidase, nitrile hydratase alpha and beta subunits and nitrile hydratase activator, complete cds.|BCT BCT 8e-18 +3 ...

  16. GenBank blastx search result: AK243402 [KOME

    Lifescience Database Archive (English)

    Full Text Available AK243402 J100064O18 AB016078.1 AB016078 Rhodococcus sp. N-771 genes for nitrile hydratase regulator 2 and 1, amidase, nitrile hydratase alpha and beta subunits and nitrile hydratase activator, complete cds. BCT 1e-27 1 ...

  17. Sutherland models for complex reflection groups

    International Nuclear Information System (INIS)

    Crampe, N.; Young, C.A.S.

    2008-01-01

    There are known to be integrable Sutherland models associated to every real root system, or, which is almost equivalent, to every real reflection group. Real reflection groups are special cases of complex reflection groups. In this paper we associate certain integrable Sutherland models to the classical family of complex reflection groups. Internal degrees of freedom are introduced, defining dynamical spin chains, and the freezing limit taken to obtain static chains of Haldane-Shastry type. By considering the relation of these models to the usual BC_N case, we are led to systems with both real and complex reflection groups as symmetries. We demonstrate their integrability by means of new Dunkl operators, associated to wreath products of dihedral groups.

  18. Predictive Surface Complexation Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, Dimitri A. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Earth and Planetary Sciences

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soil waters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  19. Are Model Transferability And Complexity Antithetical? Insights From Validation of a Variable-Complexity Empirical Snow Model in Space and Time

    Science.gov (United States)

    Lute, A. C.; Luce, Charles H.

    2017-11-01

    The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modeling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and Snow Residence Time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low to moderate complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.
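    The space-for-time models described above regress April 1 SWE on mean winter temperature and cumulative winter precipitation. Below is a hedged sketch of the simplest such empirical model, an ordinary least-squares fit with a spatial hold-out; the data and coefficients are synthetic inventions, not the study's:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 497                                  # one row per hypothetical station
temp = rng.normal(-4.0, 3.0, n)          # mean winter temperature, deg C
precip = rng.gamma(4.0, 150.0, n)        # cumulative winter precip, mm
# Synthetic "truth": SWE rises with precipitation, falls with warmth.
swe = 0.6 * precip - 25.0 * temp + rng.normal(0, 40.0, n)

X = np.column_stack([np.ones(n), temp, precip])
train = np.arange(n) < 400               # fit on 400 "gauged" stations
test = ~train                            # validate on 97 withheld stations
beta, *_ = np.linalg.lstsq(X[train], swe[train], rcond=None)
rmse = np.sqrt(np.mean((X[test] @ beta - swe[test]) ** 2))
print("coefficients:", np.round(beta, 2), "| hold-out RMSE:", round(rmse, 1))
```

    Non-random splits (e.g., withholding the warmest sites) would probe transfer to genuinely new climates rather than mere interpolation.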

  20. Modeling Musical Complexity: Commentary on Eerola (2016

    Directory of Open Access Journals (Sweden)

    Joshua Albrecht

    2016-07-01

    Full Text Available In his paper, "Expectancy violation and information-theoretic models of melodic complexity," Eerola compares a number of models that correlate musical features of monophonic melodies with participant ratings of perceived melodic complexity. He finds that fairly strong results can be achieved using several different approaches to modeling perceived melodic complexity. The data used in this study are gathered from several previously published studies that use widely different types of melodies, including isochronous folk melodies, isochronous 12-tone rows, and rhythmically complex African folk melodies. This commentary first briefly reviews the article's method and main findings, then suggests a rethinking of the theoretical framework of the study. Finally, some of the methodological issues of the study are discussed.

  1. The effects of model and data complexity on predictions from species distributions models

    DEFF Research Database (Denmark)

    García-Callejas, David; Bastos, Miguel

    2016-01-01

    How complex does a model need to be to provide useful predictions is a matter of continuous debate across environmental sciences. In the species distributions modelling literature, studies have demonstrated that more complex models tend to provide better fits. However, studies have also shown...... that predictive performance does not always increase with complexity. Testing of species distributions models is challenging because independent data for testing are often lacking, but a more general problem is that model complexity has never been formally described in such studies. Here, we systematically...

  2. A Primer for Model Selection: The Decisive Role of Model Complexity

    Science.gov (United States)

    Höge, Marvin; Wöhling, Thomas; Nowak, Wolfgang

    2018-03-01

    Selecting a "best" model among several competing candidate models poses an often encountered problem in water resources modeling (and other disciplines which employ models). For a modeler, the best model fulfills a certain purpose best (e.g., flood prediction), which is typically assessed by comparing model simulations to data (e.g., stream flow). Model selection methods find the "best" trade-off between good fit with data and model complexity. In this context, the interpretations of model complexity implied by different model selection methods are crucial, because they represent different underlying goals of modeling. Over the last decades, numerous model selection criteria have been proposed, but modelers who primarily want to apply a model selection criterion often face a lack of guidance for choosing the right criterion that matches their goal. We propose a classification scheme for model selection criteria that helps to find the right criterion for a specific goal, i.e., which employs the correct complexity interpretation. We identify four model selection classes which seek to achieve high predictive density, low predictive error, high model probability, or shortest compression of data. These goals can be achieved by following either nonconsistent or consistent model selection and by either incorporating a Bayesian parameter prior or not. We allocate commonly used criteria to these four classes, analyze how they represent model complexity and what this means for the model selection task. Finally, we provide guidance on choosing the right type of criteria for specific model selection tasks. (A quick guide through all key points is given at the end of the introduction.)

  3. Epidemic modeling in complex realities.

    Science.gov (United States)

    Colizza, Vittoria; Barthélemy, Marc; Barrat, Alain; Vespignani, Alessandro

    2007-04-01

    In our global world, the increasing complexity of social relations and transport infrastructures are key factors in the spread of epidemics. In recent years, the increasing availability of computer power has enabled both to obtain reliable data allowing one to quantify the complexity of the networks on which epidemics may propagate and to envision computational tools able to tackle the analysis of such propagation phenomena. These advances have put in evidence the limits of homogeneous assumptions and simple spatial diffusion approaches, and stimulated the inclusion of complex features and heterogeneities relevant in the description of epidemic diffusion. In this paper, we review recent progresses that integrate complex systems and networks analysis with epidemic modelling and focus on the impact of the various complex features of real systems on the dynamics of epidemic spreading.
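    A minimal example of the kind of network-aware computation this review surveys is a discrete-time SIR simulation on an arbitrary contact network (textbook dynamics under our own assumptions, not the authors' tools):

```python
import random

def sir_on_network(adj, beta, gamma, seed_node=0, rng_seed=0, steps=100):
    """Discrete-time SIR on a contact network given as adjacency lists.

    Each step, every infectious node transmits to each susceptible
    neighbour with probability beta, then recovers with probability gamma.
    Returns the number of nodes ever infected.
    """
    rng = random.Random(rng_seed)
    state = {v: "S" for v in adj}
    state[seed_node] = "I"
    for _ in range(steps):
        infectious = [v for v, s in state.items() if s == "I"]
        if not infectious:
            break
        for v in infectious:
            for u in adj[v]:
                if state[u] == "S" and rng.random() < beta:
                    state[u] = "I"
            if rng.random() < gamma:
                state[v] = "R"
    return sum(s != "S" for s in state.values())

ring = {i: [(i - 1) % 20, (i + 1) % 20] for i in range(20)}  # toy network
print(sir_on_network(ring, beta=0.4, gamma=0.2))
```

    Replacing the toy ring with a heterogeneous (e.g., scale-free) contact network is precisely where homogeneous-mixing approximations break down, which is the point the review emphasizes.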

  4. The Kuramoto model in complex networks

    Science.gov (United States)

    Rodrigues, Francisco A.; Peron, Thomas K. DM.; Ji, Peng; Kurths, Jürgen

    2016-01-01

    Synchronization of an ensemble of oscillators is an emergent phenomenon present in several complex systems, ranging from social and physical to biological and technological systems. The most successful approach to describe how coherent behavior emerges in these complex systems is given by the paradigmatic Kuramoto model. This model has been traditionally studied in complete graphs. However, besides being intrinsically dynamical, complex systems present very heterogeneous structure, which can be represented as complex networks. This report is dedicated to review main contributions in the field of synchronization in networks of Kuramoto oscillators. In particular, we provide an overview of the impact of network patterns on the local and global dynamics of coupled phase oscillators. We cover many relevant topics, which encompass a description of the most used analytical approaches and the analysis of several numerical results. Furthermore, we discuss recent developments on variations of the Kuramoto model in networks, including the presence of noise and inertia. The rich potential for applications is discussed for special fields in engineering, neuroscience, physics and Earth science. Finally, we conclude by discussing problems that remain open after the last decade of intensive research on the Kuramoto model and point out some promising directions for future research.
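    For orientation, the Kuramoto dynamics on a network are usually written as follows (one common normalization; the coupling is sometimes rescaled by node degree instead):

```latex
% Phase oscillator i with natural frequency \omega_i, coupled with
% strength K through the network's adjacency matrix A_{ij}:
\dot{\theta}_i \;=\; \omega_i + \frac{K}{N} \sum_{j=1}^{N} A_{ij}\,\sin(\theta_j - \theta_i),
\qquad
r\,e^{\mathrm{i}\psi} \;=\; \frac{1}{N} \sum_{j=1}^{N} e^{\mathrm{i}\theta_j}
```

    The order parameter r ∈ [0, 1] quantifies global coherence (r ≈ 0 incoherent, r ≈ 1 fully synchronized), and the transition to synchrony is studied as K and the network topology vary.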

  5. The FH mutation database: an online database of fumarate hydratase mutations involved in the MCUL (HLRCC) tumor syndrome and congenital fumarase deficiency

    Directory of Open Access Journals (Sweden)

    Tomlinson Ian PM

    2008-03-01

    Full Text Available Abstract Background Fumarate hydratase (HGNC-approved gene symbol FH), also known as fumarase, is an enzyme of the tricarboxylic acid (TCA) cycle, involved in fundamental cellular energy production. First described by Zinn et al in 1986, deficiency of FH results in early-onset, severe encephalopathy. In 2002, the Multiple Leiomyoma Consortium identified heterozygous germline mutations of FH in patients with multiple cutaneous and uterine leiomyomas (MCUL; OMIM 150800). In some families renal cell cancer also forms a component of the complex, and as such the condition has been described as hereditary leiomyomatosis and renal cell cancer (HLRCC; OMIM 605839). The identification of FH as a tumor suppressor was an unexpected finding and, following the identification of subunits of succinate dehydrogenase in 2000 and 2001, was only the second description of the involvement of an enzyme of intermediary metabolism in tumorigenesis. Description The FH mutation database is a part of the TCA cycle gene mutation database (formerly the succinate dehydrogenase gene mutation database) and is based on the Leiden Open (source) Variation Database (LOVD) system. The variants included in the database were derived from the published literature and annotated to conform to current mutation nomenclature. The FH database applies HGVS nomenclature guidelines, and will assist researchers in applying these guidelines when directly submitting new sequence variants online. Since the first molecular characterization of an FH mutation by Bourgeron et al in 1994, a series of reports of both FH-deficiency patients and patients with MCUL/HLRCC have described 107 variants, of which 93 are thought to be pathogenic. The most common type of mutation is missense (57%), followed by frameshifts and nonsense (27%), and diverse deletions, insertions and duplications. Here we introduce an online database detailing all reported FH sequence variants. Conclusion The FH mutation database strives to systematically…

  6. Linking Complexity and Sustainability Theories: Implications for Modeling Sustainability Transitions

    Directory of Open Access Journals (Sweden)

    Camaren Peter

    2014-03-01

    Full Text Available In this paper, we deploy complexity theory as the foundation for integrating different theoretical approaches to sustainability and develop a rationale for a complexity-based framework for modeling transitions to sustainability. We propose a framework based on a comparison of the complex-systems properties that characterize the different theories dealing with transitions to sustainability. We argue that adopting a complexity-theory-based approach for modeling transitions requires going beyond deterministic frameworks, by adopting a probabilistic, integrative, inclusive and adaptive approach that can support transitions. We also illustrate how this complexity-based modeling framework can be implemented, i.e., how it can be used to select modeling techniques that address the particular properties of complex systems that we need to understand in order to model transitions to sustainability. In doing so, we establish a complexity-based approach towards modeling sustainability transitions that caters for the broad range of complex-systems properties required to model such transitions.

  7. Complexity-aware simple modeling.

    Science.gov (United States)

    Gómez-Schiavon, Mariana; El-Samad, Hana

    2018-02-26

    Mathematical models continue to be essential for deepening our understanding of biology. On one extreme, simple or small-scale models help delineate general biological principles. However, the parsimony of detail in these models as well as their assumption of modularity and insulation make them inaccurate for describing quantitative features. On the other extreme, large-scale and detailed models can quantitatively recapitulate a phenotype of interest, but have to rely on many unknown parameters, making them often difficult to parse mechanistically and to use for extracting general principles. We discuss some examples of a new approach-complexity-aware simple modeling-that can bridge the gap between the small-scale and large-scale approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Computational models of complex systems

    CERN Document Server

    Dabbaghian, Vahid

    2014-01-01

    Computational and mathematical models provide us with the opportunities to investigate the complexities of real world problems. They allow us to apply our best analytical methods to define problems in a clearly mathematical manner and exhaustively test our solutions before committing expensive resources. This is made possible by assuming parameter(s) in a bounded environment, allowing for controllable experimentation, not always possible in live scenarios. For example, simulation of computational models allows the testing of theories in a manner that is both fundamentally deductive and experimental in nature. The main ingredients for such research ideas come from multiple disciplines, and the importance of interdisciplinary research is well recognized by the scientific community. This book provides a window onto the novel endeavours of the research communities, highlighting the value of computational modelling as a research tool when investigating complex systems. We hope that the reader...

  9. Elements of complexity in subsurface modeling, exemplified with three case studies

    Energy Technology Data Exchange (ETDEWEB)

    Freedman, Vicky L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Truex, Michael J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rockhold, Mark [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bacon, Diana H. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Freshley, Mark D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wellman, Dawn M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-04-03

    There are complexity elements to consider when applying subsurface flow and transport models to support environmental analyses. Modelers balance the benefits and costs of modeling along the spectrum of complexity, taking into account the attributes of more simple models (e.g., lower cost, faster execution, easier to explain, less mechanistic) and the attributes of more complex models (higher cost, slower execution, harder to explain, more mechanistic and technically defensible). In this paper, modeling complexity is examined with respect to considering this balance. The discussion of modeling complexity is organized into three primary elements: 1) modeling approach, 2) description of process, and 3) description of heterogeneity. Three examples are used to examine these complexity elements. Two of the examples use simulations generated from a complex model to develop simpler models for efficient use in model applications. The first example is designed to support performance evaluation of soil vapor extraction remediation in terms of groundwater protection. The second example investigates the importance of simulating different categories of geochemical reactions for carbon sequestration and selecting appropriate simplifications for use in evaluating sequestration scenarios. In the third example, the modeling history for a uranium-contaminated site demonstrates that conservative parameter estimates were inadequate surrogates for complex, critical processes and there is discussion on the selection of more appropriate model complexity for this application. All three examples highlight how complexity considerations are essential to create scientifically defensible models that achieve a balance between model simplification and complexity.

  10. Cumulative complexity: a functional, patient-centered model of patient complexity can improve research and practice.

    Science.gov (United States)

    Shippee, Nathan D; Shah, Nilay D; May, Carl R; Mair, Frances S; Montori, Victor M

    2012-10-01

    To design a functional, patient-centered model of patient complexity with practical applicability to analytic design and clinical practice. Existing literature on patient complexity has mainly identified its components descriptively and in isolation, lacking clarity as to their combined functions in disrupting care or as to how complexity changes over time. The authors developed a cumulative complexity model, which integrates existing literature and emphasizes how clinical and social factors accumulate and interact to complicate patient care. A narrative literature review is used to explicate the model. The model emphasizes a core, patient-level mechanism whereby complicating factors impact care and outcomes: the balance between the patient's workload of demands and the patient's capacity to address those demands. Workload encompasses the demands on the patient's time and energy, including demands of treatment, self-care, and life in general. Capacity concerns the ability to handle that work (e.g., functional morbidity, financial/social resources, literacy). Workload-capacity imbalances comprise the mechanism driving patient complexity. Treatment and illness burdens serve as feedback loops, linking negative outcomes to further imbalances, such that complexity may accumulate over time. With its components largely supported by existing literature, the model has implications for analytic design, clinical epidemiology, and clinical practice. Copyright © 2012 Elsevier Inc. All rights reserved.

  11. On spin and matrix models in the complex plane

    International Nuclear Information System (INIS)

    Damgaard, P.H.; Heller, U.M.

    1993-01-01

    We describe various aspects of statistical mechanics defined in the complex temperature or coupling-constant plane. Using exactly solvable models, we analyse such aspects as renormalization group flows in the complex plane, the distribution of partition function zeros, and the question of new coupling-constant symmetries of complex-plane spin models. The double-scaling form of matrix models is shown to be exactly equivalent to finite-size scaling of two-dimensional spin systems. This is used to show that the string susceptibility exponents derived from matrix models can be obtained numerically with very high accuracy from the scaling of finite-N partition function zeros in the complex plane. (orig.)
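    For a concrete, self-contained illustration of partition function zeros in the complex plane (not taken from the paper), the sketch below enumerates the density of states of a tiny 3x3 periodic-boundary Ising model and finds the complex zeros of Z viewed as a polynomial in x = exp(-beta); the lattice size and variable choice are illustrative assumptions.

    ```python
    import itertools
    import numpy as np

    # Exhaustively enumerate a 3x3 Ising model with periodic boundaries and
    # collect the density of states g(E); Z(x) = sum_E g(E) x^E with x = exp(-beta).
    L = 3
    bonds = [((i, j), ((i + 1) % L, j)) for i in range(L) for j in range(L)]
    bonds += [((i, j), (i, (j + 1) % L)) for i in range(L) for j in range(L)]

    dos = {}
    for spins in itertools.product((-1, 1), repeat=L * L):
        s = np.array(spins, dtype=int).reshape(L, L)
        E = -sum(s[a] * s[b] for a, b in bonds)
        dos[E] = dos.get(E, 0) + 1

    Emin = min(dos)
    coeffs = np.zeros(max(dos) - Emin + 1)
    for E, g in dos.items():
        coeffs[E - Emin] = g

    # The complex zeros of the partition function polynomial are the Fisher
    # zeros; their approach to the positive real axis signals a transition.
    zeros = np.roots(coeffs[::-1])  # np.roots wants the highest-order coefficient first
    print(sorted(zeros, key=lambda z: abs(z.imag))[:4])
    ```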

  12. Deterministic ripple-spreading model for complex networks.

    Science.gov (United States)

    Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Hines, Evor L; Di Paolo, Ezequiel

    2011-04-01

    This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input they cannot output a unique topology. By contrast, the proposed ripple-spreading model can uniquely determine the final network topology, while the stochastic feature of complex networks is captured by randomly initializing the ripple-spreading related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading related parameters to precisely describe a network topology, which is more memory efficient when compared with a traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has very good potential for both extensions and applications.
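    One plausible reading of the construction, sketched below under assumed dynamics (exponentially decaying ripple amplitude, per-node response thresholds): randomness enters only through parameter initialization, after which the resulting topology is uniquely determined.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)       # stochasticity lives only in initialization
    n = 60
    pos = rng.random((n, 2))             # node locations in the unit square
    thresh = rng.uniform(0.2, 0.6, n)    # per-node response thresholds
    amp0, decay = 1.0, 2.5               # initial ripple energy and spatial decay

    edges, activated, frontier = [], {0}, [0]   # node 0 emits the first ripple
    while frontier:
        src = frontier.pop(0)
        d = np.linalg.norm(pos - pos[src], axis=1)
        # the ripple's amplitude decays with distance; these nodes it still excites
        reached = np.where((amp0 * np.exp(-decay * d) > thresh) &
                           (np.arange(n) != src))[0]
        for v in reached:
            if int(v) not in activated:
                activated.add(int(v))
                edges.append((src, int(v)))   # link the trigger to the respondent
                frontier.append(int(v))       # an excited node emits its own ripple

    print(len(activated), "nodes activated,", len(edges), "edges")
    ```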

  13. Modeling OPC complexity for design for manufacturability

    Science.gov (United States)

    Gupta, Puneet; Kahng, Andrew B.; Muddu, Swamy; Nakagawa, Sam; Park, Chul-Hong

    2005-11-01

    Increasing design complexity in sub-90nm designs results in increased mask complexity and cost. Resolution enhancement techniques (RET) such as assist feature addition, phase shifting (attenuated PSM) and aggressive optical proximity correction (OPC) help in preserving feature fidelity in silicon but increase mask complexity and cost. The increase in data volume with rising mask complexity is becoming prohibitive for manufacturing. Mask cost is determined by mask write time and mask inspection time, which are directly related to the complexity of features printed on the mask. Aggressive RETs increase complexity by adding assist features and by modifying existing features. Passing design intent to OPC has been identified as a solution for reducing mask complexity and cost in several recent works. The goal of design-aware OPC is to relax OPC tolerances of layout features to minimize mask cost, without sacrificing parametric yield. To convey optimal OPC tolerances for manufacturing, design optimization should drive OPC tolerance optimization using models of mask cost for devices and wires. Design optimization should be aware of the impact of OPC correction levels on mask cost and performance of the design. This work introduces mask cost characterization (MCC), which quantifies OPC complexity, measured in terms of the fracture count of the mask, for different OPC tolerances. MCC with different OPC tolerances is a critical step in linking design and manufacturing. In this paper, we present a MCC methodology that provides models of fracture count of standard cells and wire patterns for use in design optimization. MCC cannot be performed by designers as they do not have access to foundry OPC recipes and RET tools. To build a fracture count model, we perform OPC and fracturing on a limited set of standard cells and wire configurations with all tolerance combinations. Separately, we identify the characteristics of the layout that impact fracture count. Based on the fracture count (FC) data

  14. Modeling the Structure and Complexity of Engineering Routine Design Problems

    NARCIS (Netherlands)

    Jauregui Becker, Juan Manuel; Wits, Wessel Willems; van Houten, Frederikus J.A.M.

    2011-01-01

    This paper proposes a model to structure routine design problems as well as a model of their design complexity. The idea is that having a proper model of the structure of such problems enables understanding their complexity and, likewise, a proper understanding of their complexity enables the development

  15. ASYMMETRIC PRICE TRANSMISSION MODELING: THE IMPORTANCE OF MODEL COMPLEXITY AND THE PERFORMANCE OF THE SELECTION CRITERIA

    Directory of Open Access Journals (Sweden)

    Henry de-Graft Acquah

    2013-01-01

    Information criteria provide an attractive basis for selecting the best model from a set of competing asymmetric price transmission models or theories. However, little is understood about the sensitivity of the model selection methods to model complexity. This study therefore fits competing asymmetric price transmission models that differ in complexity to simulated data and evaluates the ability of the model selection methods to recover the true model. The results of Monte Carlo experimentation suggest that, in general, BIC, CAIC and DIC were superior to AIC when the true data generating process was the standard error correction model, whereas AIC was more successful when the true model was the complex error correction model. It is also shown that the model selection methods performed better in large samples for a complex asymmetric data generating process than with a standard asymmetric data generating process. Except for complex models, AIC's performance did not make substantial gains in recovery rates as sample size increased. The research findings demonstrate the influence of model complexity in asymmetric price transmission model comparison and selection.
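    A minimal Monte Carlo sketch of the kind of recovery-rate experiment described, substituting a plain linear regression for the paper's error correction models (an assumption): data are simulated from a simple true model and AIC/BIC are scored by how often they prefer it over a more complex rival.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def fit_ic(y, X):
        """OLS fit; AIC and BIC from the Gaussian log-likelihood up to constants."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        n, k = X.shape
        aic = n * np.log(rss / n) + 2 * k
        bic = n * np.log(rss / n) + k * np.log(n)
        return aic, bic

    n, trials, hits = 100, 500, {"AIC": 0, "BIC": 0}
    for _ in range(trials):
        x = rng.normal(size=n)
        y = 1.0 + 2.0 * x + rng.normal(size=n)             # the simple model is true
        X1 = np.column_stack([np.ones(n), x])              # candidate: simple
        X2 = np.column_stack([np.ones(n), x, x**2, x**3])  # candidate: complex
        for name, idx in (("AIC", 0), ("BIC", 1)):
            if fit_ic(y, X1)[idx] < fit_ic(y, X2)[idx]:
                hits[name] += 1

    print({k: v / trials for k, v in hits.items()})  # recovery rates of the true model
    ```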

  16. Modelling of information processes management of educational complex

    Directory of Open Access Journals (Sweden)

    Оксана Николаевна Ромашкова

    2014-12-01

    This work concerns an information model of an educational complex which includes several schools. A classification of the educational complexes formed in Moscow is given. The existing organizational structure of the educational complex is considered and a matrix management structure is suggested. The basic management information processes of the educational complex are conceptualized.

  17. Complex networks under dynamic repair model

    Science.gov (United States)

    Chaoqi, Fu; Ying, Wang; Kun, Zhao; Yangjun, Gao

    2018-01-01

    Invulnerability is not the only factor of importance when considering complex networks' security. It is also critical to have an effective and reasonable repair strategy. Existing research on network repair is confined to the static model. The dynamic model makes better use of the redundant capacity of repaired nodes and repairs the damaged network more efficiently than the static model; however, the dynamic repair model is complex and polytropic. In this paper, we construct a dynamic repair model and systematically describe the energy-transfer relationships between nodes in the repair process of the failure network. Nodes are divided into three types, corresponding to three structures. We find that the strong coupling structure is responsible for secondary failure of the repaired nodes and propose an algorithm that can select the most suitable targets (nodes or links) to repair the failure network with minimal cost. Two types of repair strategies are identified, with different effects under the two energy-transfer rules. The research results enable a more flexible approach to network repair.
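    A toy sketch of cost-aware repair-target selection, assuming a greedy score of restored giant-component size per unit repair cost; the graph, the costs, and the scoring rule are illustrative placeholders, not the paper's energy-transfer model.

    ```python
    # Greedy repair ordering on a small damaged graph (all values hypothetical).
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (2, 6), (6, 7)]
    failed = {2, 4, 6}
    cost = {2: 3.0, 4: 1.0, 6: 2.0}

    def giant_component(alive):
        """Size of the largest connected component among alive nodes."""
        adj = {v: [] for v in alive}
        for a, b in edges:
            if a in alive and b in alive:
                adj[a].append(b)
                adj[b].append(a)
        seen, best = set(), 0
        for s in alive:
            if s in seen:
                continue
            stack, size = [s], 0
            seen.add(s)
            while stack:
                v = stack.pop()
                size += 1
                for w in adj[v]:
                    if w not in seen:
                        seen.add(w)
                        stack.append(w)
            best = max(best, size)
        return best

    alive = set(range(8)) - failed
    order = []
    while failed:
        gain = lambda u: (giant_component(alive | {u}) - giant_component(alive)) / cost[u]
        v = max(failed, key=gain)      # best restored connectivity per unit cost
        failed.remove(v)
        alive.add(v)
        order.append(v)
    print("repair order:", order)
    ```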

  18. Predictive modelling of complex agronomic and biological systems.

    Science.gov (United States)

    Keurentjes, Joost J B; Molenaar, Jaap; Zwaan, Bas J

    2013-09-01

    Biological systems are tremendously complex in their functioning and regulation. Studying the multifaceted behaviour and describing the performance of such complexity has challenged the scientific community for years. The reduction of real-world intricacy into simple descriptive models has therefore convinced many researchers of the usefulness of introducing mathematics into biological sciences. Predictive modelling takes such an approach another step further in that it takes advantage of existing knowledge to project the performance of a system in alternating scenarios. The ever growing amounts of available data generated by assessing biological systems at increasingly higher detail provide unique opportunities for future modelling and experiment design. Here we aim to provide an overview of the progress made in modelling over time and the currently prevalent approaches for iterative modelling cycles in modern biology. We will further argue for the importance of versatility in modelling approaches, including parameter estimation, model reduction and network reconstruction. Finally, we will discuss the difficulties in overcoming the mathematical interpretation of in vivo complexity and address some of the future challenges lying ahead. © 2013 John Wiley & Sons Ltd.

  19. Does model performance improve with complexity? A case study with three hydrological models

    Science.gov (United States)

    Orth, Rene; Staudinger, Maria; Seneviratne, Sonia I.; Seibert, Jan; Zappa, Massimiliano

    2015-04-01

    In recent decades considerable progress has been made in climate model development. Following the massive increase in computational power, models became more sophisticated. At the same time also simple conceptual models have advanced. In this study we validate and compare three hydrological models of different complexity to investigate whether their performance varies accordingly. For this purpose we use runoff and also soil moisture measurements, which allow a truly independent validation, from several sites across Switzerland. The models are calibrated in similar ways with the same runoff data. Our results show that the more complex models HBV and PREVAH outperform the simple water balance model (SWBM) in case of runoff but not for soil moisture. Furthermore the most sophisticated PREVAH model shows an added value compared to the HBV model only in case of soil moisture. Focusing on extreme events we find generally improved performance of the SWBM during drought conditions and degraded agreement with observations during wet extremes. For the more complex models we find the opposite behavior, probably because they were primarily developed for prediction of runoff extremes. As expected given their complexity, HBV and PREVAH have more problems with over-fitting. All models show a tendency towards better performance in lower altitudes as opposed to (pre-) alpine sites. The results vary considerably across the investigated sites. In contrast, the different metrics we consider to estimate the agreement between models and observations lead to similar conclusions, indicating that the performance of the considered models is similar at different time scales as well as for anomalies and long-term means. We conclude that added complexity does not necessarily lead to improved performance of hydrological models, and that performance can vary greatly depending on the considered hydrological variable (e.g. runoff vs. soil moisture) or hydrological conditions (floods vs. droughts).

  20. Computational Modeling of Complex Protein Activity Networks

    NARCIS (Netherlands)

    Schivo, Stefano; Leijten, Jeroen; Karperien, Marcel; Post, Janine N.; Prignet, Claude

    2017-01-01

    Because of the numerous entities interacting, the complexity of the networks that regulate cell fate makes it impossible to analyze and understand them using the human brain alone. Computational modeling is a powerful method to unravel complex systems. We recently described the development of a

  1. Foundations for Streaming Model Transformations by Complex Event Processing.

    Science.gov (United States)

    Dávid, István; Ráth, István; Varró, Dániel

    2018-01-01

    Streaming model transformations represent a novel class of transformations to manipulate models whose elements are continuously produced or modified in high volume and with a rapid rate of change. Executing streaming transformations requires efficient techniques to recognize activated transformation rules over a live model and a potentially infinite stream of events. In this paper, we propose foundations of streaming model transformations by innovatively integrating incremental model query, complex event processing (CEP) and reactive (event-driven) transformation techniques. Complex event processing makes it possible to identify relevant patterns and sequences of events over an event stream. Our approach enables event streams to include model change events which are automatically and continuously populated by incremental model queries. Furthermore, a reactive rule engine carries out transformations on identified complex event patterns. We provide an integrated domain-specific language with precise semantics for capturing complex event patterns and streaming transformations, together with an execution engine, all of which is now part of the Viatra reactive transformation framework. We demonstrate the feasibility of our approach with two case studies: one in an advanced model engineering workflow, and one in the context of on-the-fly gesture recognition.
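    A minimal sketch of the core idea: a complex event pattern over a stream of model-change events triggers a reactive transformation rule. The event schema, window policy, and rule are hypothetical, not the Viatra API.

    ```python
    from collections import deque

    WINDOW = 5.0                # seconds a partial match stays alive (assumed)
    recent_creates = deque()    # (timestamp, element_id) partial matches

    def fire_rule(ev):
        # stand-in for a reactive model transformation step
        print(f"transform: newly created {ev['src']} linked to {ev['dst']}")

    def on_event(ev):
        now = ev["t"]
        while recent_creates and now - recent_creates[0][0] > WINDOW:
            recent_creates.popleft()            # expire stale partial matches
        if ev["kind"] == "create":
            recent_creates.append((now, ev["id"]))
        elif ev["kind"] == "link":
            # pattern: a "create" followed by a "link" from it within the window
            if any(eid == ev["src"] for _, eid in recent_creates):
                fire_rule(ev)

    stream = [
        {"t": 0.0, "kind": "create", "id": "A"},
        {"t": 1.5, "kind": "link", "src": "A", "dst": "B"},   # fires
        {"t": 9.0, "kind": "link", "src": "A", "dst": "C"},   # outside the window
    ]
    for ev in stream:
        on_event(ev)
    ```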

  2. Different Epidemic Models on Complex Networks

    International Nuclear Information System (INIS)

    Zhang Haifeng; Small, Michael; Fu Xinchu

    2009-01-01

    Models for disease spreading are not limited to SIS or SIR. For instance, for the spreading of AIDS/HIV, the susceptible individuals can be classified into different cases according to their immunity, and similarly, the infected individuals can be sorted into different classes according to their infectivity. Moreover, some diseases may develop through several stages. Many authors have shown that the relations among individuals can be viewed as a complex network. So in this paper, in order to better explain the dynamical behavior of epidemics, we consider different epidemic models on complex networks, and obtain the epidemic threshold for each case. Finally, we present numerical simulations for each case to verify our results.
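    For orientation, a short sketch of the two standard epidemic-threshold estimates for an SIS process on an uncorrelated network: the heterogeneous mean-field value <k>/<k^2> and the spectral value 1/lambda_max(A). The Erdős–Rényi graph is an illustrative stand-in for the networks studied.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 1000, 0.008
    A = rng.random((n, n)) < p
    A = np.triu(A, 1)
    A = A | A.T                                   # symmetric ER adjacency matrix

    k = A.sum(axis=1).astype(float)
    lam_c_mf = k.mean() / (k ** 2).mean()         # heterogeneous mean-field threshold
    lam_c_sp = 1.0 / np.linalg.eigvalsh(A.astype(float)).max()  # spectral threshold
    print(f"<k>/<k^2> = {lam_c_mf:.4f}, 1/lambda_max = {lam_c_sp:.4f}")
    ```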

  3. Mathematical approaches for complexity/predictivity trade-offs in complex system models : LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Goldsby, Michael E.; Mayo, Jackson R.; Bhattacharyya, Arnab (Massachusetts Institute of Technology, Cambridge, MA); Armstrong, Robert C.; Vanderveen, Keith

    2008-09-01

    The goal of this research was to examine foundational methods, both computational and theoretical, that can improve the veracity of entity-based complex system models and increase confidence in their predictions for emergent behavior. The strategy was to seek insight and guidance from simplified yet realistic models, such as cellular automata and Boolean networks, whose properties can be generalized to production entity-based simulations. We have explored the usefulness of renormalization-group methods for finding reduced models of such idealized complex systems. We have prototyped representative models that are both tractable and relevant to Sandia mission applications, and quantified the effect of computational renormalization on the predictive accuracy of these models, finding good predictivity from renormalized versions of cellular automata and Boolean networks. Furthermore, we have theoretically analyzed the robustness properties of certain Boolean networks, relevant for characterizing organic behavior, and obtained precise mathematical constraints on systems that are robust to failures. In combination, our results provide important guidance for more rigorous construction of entity-based models, which currently are often devised in an ad-hoc manner. Our results can also help in designing complex systems with the goal of predictable behavior, e.g., for cybersecurity.
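    A compact sketch of the damage-spreading experiment that such robustness analyses rest on, using a random K=2 Boolean network with assumed parameters: two copies of the system differing in one node are iterated, and the surviving Hamming distance separates robust from chaotic behaviour.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n, K = 200, 2
    inputs = np.array([rng.choice(n, K, replace=False) for _ in range(n)])
    tables = rng.integers(0, 2, (n, 2 ** K))      # one random truth table per node

    def step(state):
        idx = (state[inputs] * (2 ** np.arange(K))).sum(axis=1)
        return tables[np.arange(n), idx]          # synchronous update

    s = rng.integers(0, 2, n)
    t = s.copy()
    t[0] ^= 1                                     # perturb a single node
    for _ in range(50):
        s, t = step(s), step(t)
    print("final damage:", (s != t).mean())       # spreads if chaotic, dies out if robust
    ```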

  4. Smart modeling and simulation for complex systems practice and theory

    CERN Document Server

    Ren, Fenghui; Zhang, Minjie; Ito, Takayuki; Tang, Xijin

    2015-01-01

    This book aims to provide a description of these new Artificial Intelligence technologies and approaches to the modeling and simulation of complex systems, as well as an overview of the latest scientific efforts in this field such as the platforms and/or the software tools for smart modeling and simulating complex systems. These tasks are difficult to accomplish using traditional computational approaches due to the complex relationships of components and distributed features of resources, as well as the dynamic work environments. In order to effectively model the complex systems, intelligent technologies such as multi-agent systems and smart grids are employed to model and simulate the complex systems in the areas of ecosystem, social and economic organization, web-based grid service, transportation systems, power systems and evacuation systems.

  5. Universal correlators for multi-arc complex matrix models

    International Nuclear Information System (INIS)

    Akemann, G.

    1997-01-01

    The correlation functions of the multi-arc complex matrix model are shown to be universal for any finite number of arcs. The universality classes are characterized by the support of the eigenvalue density and are conjectured to fall into the same classes as the ones recently found for the Hermitian model. This is explicitly shown to be true for the case of two arcs, apart from the known result for one arc. The basic tool is the iterative solution of the loop equation for the complex matrix model with multiple arcs, which provides all multi-loop correlators up to an arbitrary genus. Explicit results for genus one are given for any number of arcs. The two-arc solution is investigated in detail, including the double-scaling limit. In addition universal expressions for the string susceptibility are given for both the complex and Hermitian model. (orig.)
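    For orientation, the universal genus-zero two-loop correlator of the one-cut Hermitian model is commonly quoted in the form below, depending on the potential only through the cut endpoints a and b; this is the sense of "universality" that the paper extends to multiple arcs and to the complex model (formula quoted from the standard one-cut literature, not derived from this abstract).

    ```latex
    W_2(p,q) \;=\; \frac{1}{2(p-q)^2}
      \left( \frac{pq - \tfrac{1}{2}(a+b)(p+q) + ab}
                  {\sqrt{(p-a)(p-b)(q-a)(q-b)}} \;-\; 1 \right)
    ```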

  6. A Practical Philosophy of Complex Climate Modelling

    Science.gov (United States)

    Schmidt, Gavin A.; Sherwood, Steven

    2014-01-01

    We give an overview of the practice of developing and using complex climate models, as seen from experiences in a major climate modelling center and through participation in the Coupled Model Intercomparison Project (CMIP). We discuss the construction and calibration of models; their evaluation, especially through use of out-of-sample tests; and their exploitation in multi-model ensembles to identify biases and make predictions. We stress that adequacy or utility of climate models is best assessed via their skill against more naive predictions. The framework we use for making inferences about reality using simulations is naturally Bayesian (in an informal sense), and has many points of contact with more familiar examples of scientific epistemology. While the use of complex simulations in science is a development that changes much in how science is done in practice, we argue that the concepts being applied fit very much into traditional practices of the scientific method, albeit those more often associated with laboratory work.

  7. Understanding complex urban systems multidisciplinary approaches to modeling

    CERN Document Server

    Gurr, Jens; Schmidt, J

    2014-01-01

    Understanding Complex Urban Systems takes as its point of departure the insight that the challenges of global urbanization and the complexity of urban systems cannot be understood – let alone ‘managed’ – by sectoral and disciplinary approaches alone. But while there has recently been significant progress in broadening and refining the methodologies for the quantitative modeling of complex urban systems, in deepening the theoretical understanding of cities as complex systems, or in illuminating the implications for urban planning, there is still a lack of well-founded conceptual thinking on the methodological foundations and the strategies of modeling urban complexity across the disciplines. Bringing together experts from the fields of urban and spatial planning, ecology, urban geography, real estate analysis, organizational cybernetics, stochastic optimization, and literary studies, as well as specialists in various systems approaches and in transdisciplinary methodologies of urban analysis, the volum...

  8. A complex autoregressive model and application to monthly temperature forecasts

    Directory of Open Access Journals (Sweden)

    X. Gu

    2005-11-01

    A complex autoregressive model was established based on the mathematical derivation of least squares for the complex number domain, referred to as complex least squares. The model differs from the conventional approach, in which the real and imaginary parts are calculated separately. An application of this new model shows better forecasts than those from other conventional statistical models in predicting monthly temperature anomalies in July at 160 meteorological stations in mainland China. The conventional statistical models include an autoregressive model in which the real and imaginary parts are treated separately, an autoregressive model in the real number domain, and a persistence-forecast model.
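    A minimal sketch of complex least squares in this spirit: a complex-valued AR(1) coefficient is estimated in a single solve over the complex field, rather than by fitting the real and imaginary parts separately. The data are synthetic and the true coefficient is an assumption.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n, phi = 400, 0.7 * np.exp(1j * 0.4)          # true complex AR(1) coefficient
    z = np.zeros(n, dtype=complex)
    for t in range(1, n):
        noise = (rng.normal() + 1j * rng.normal()) * 0.1
        z[t] = phi * z[t - 1] + noise

    # complex least squares: one regression over C instead of two over R
    X, y = z[:-1, None], z[1:]
    phi_hat = np.linalg.lstsq(X, y, rcond=None)[0][0]
    print("true:", phi, " estimated:", phi_hat)
    ```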

  9. Reassessing Geophysical Models of the Bushveld Complex in 3D

    Science.gov (United States)

    Cole, J.; Webb, S. J.; Finn, C.

    2012-12-01

    Conceptual geophysical models of the Bushveld Igneous Complex show three possible geometries for its mafic component: 1) separate intrusions with vertical feeders for the eastern and western lobes (Cousins, 1959); 2) separate dipping sheets for the two lobes (Du Plessis and Kleywegt, 1987); 3) a single saucer-shaped unit connected at depth in the central part between the two lobes (Cawthorn et al, 1998). Model three incorporates isostatic adjustment of the crust in response to the weight of the dense mafic material. The model was corroborated by results of a broadband seismic array over southern Africa, known as the Southern African Seismic Experiment (SASE) (Nguuri, et al, 2001; Webb et al, 2004). This new information about the crustal thickness only became available in the last decade and could not be considered in the earlier models. Nevertheless, there is still on-going debate as to which model is correct. All of the models published up to now have been done in 2 or 2.5 dimensions. This is not well suited to modelling the complex geometry of the Bushveld intrusion. 3D modelling takes into account effects of variations in geometry and geophysical properties of lithologies in a full three-dimensional sense and therefore affects the shape and amplitude of calculated fields. The main question is how the new knowledge of the increased crustal thickness, as well as the complexity of the Bushveld Complex, will impact the gravity fields calculated for the existing conceptual models when modelling in 3D. The three published geophysical models were remodelled using full 3D potential field modelling software, and including crustal thickness obtained from the SASE. The aim was not to construct very detailed models, but to test the existing conceptual models in an equally conceptual way. Firstly a specific 2D model was recreated in 3D, without crustal thickening, to establish the difference between 2D and 3D results. Then the thicker crust was added. Including the less

  10. Geometric Modelling with α-Complexes

    NARCIS (Netherlands)

    Gerritsen, B.H.M.; Werff, K. van der; Veltkamp, R.C.

    2001-01-01

    The shape of real objects can be so complicated, that only a sampling data point set can accurately represent them. Analytic descriptions are too complicated or impossible. Natural objects, for example, can be vague and rough with many holes. For this kind of modelling, α-complexes offer advantages
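    A common concrete construction, assumed here since the abstract is truncated: in two dimensions an alpha complex can be obtained by filtering the Delaunay triangulation on circumradius. The point cloud and alpha value are illustrative, and edge/vertex handling is omitted.

    ```python
    import numpy as np
    from scipy.spatial import Delaunay

    def circumradius(p):
        """Circumradius of a 2-D triangle from its three vertices."""
        a = np.linalg.norm(p[0] - p[1])
        b = np.linalg.norm(p[1] - p[2])
        c = np.linalg.norm(p[2] - p[0])
        s = (a + b + c) / 2
        area = max(np.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0)), 1e-12)
        return a * b * c / (4 * area)

    rng = np.random.default_rng(2)
    pts = rng.random((100, 2))                 # sampled data points
    tri = Delaunay(pts)
    alpha = 0.08                               # illustrative alpha parameter
    keep = [t for t in tri.simplices if circumradius(pts[t]) < alpha]
    print(len(tri.simplices), "Delaunay triangles ->", len(keep), "kept in the alpha complex")
    ```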

  11. Bim Automation: Advanced Modeling Generative Process for Complex Structures

    Science.gov (United States)

    Banfi, F.; Fai, S.; Brumana, R.

    2017-08-01

    The new paradigm of the complexity of modern and historic structures, which are characterised by complex forms and morphological and typological variables, is one of the greatest challenges for building information modelling (BIM). Generation of complex parametric models needs new scientific knowledge concerning new digital technologies. These elements are helpful to store a vast quantity of information during the life cycle of buildings (LCB). The latest developments of parametric applications do not provide advanced tools, resulting in time-consuming work for the generation of models. This paper presents a method capable of processing and creating complex parametric Building Information Models (BIM) with Non-Uniform Rational B-Splines (NURBS) at multiple levels of detail (Mixed and ReverseLoD) based on accurate 3D photogrammetric and laser scanning surveys. Complex 3D elements are converted into parametric BIM software and finite element applications (BIM to FEA) using specific exchange formats and new modelling tools. The proposed approach has been applied to different case studies: the BIM of the modern structure for the courtyard of the West Block on Parliament Hill in Ottawa (Ontario) and the BIM of Masegra Castel in Sondrio (Italy), encouraging the dissemination and interaction of scientific results without losing information during the generative process.

  12. Coping with Complexity Model Reduction and Data Analysis

    CERN Document Server

    Gorban, Alexander N

    2011-01-01

    This volume contains the extended version of selected talks given at the international research workshop 'Coping with Complexity: Model Reduction and Data Analysis', Ambleside, UK, August 31 - September 4, 2009. This book is deliberately broad in scope and aims at promoting new ideas and methodological perspectives. The topics of the chapters range from theoretical analysis of complex and multiscale mathematical models to applications in e.g., fluid dynamics and chemical kinetics.

  13. Using advanced surface complexation models for modelling soil chemistry under forests: Solling forest, Germany

    Energy Technology Data Exchange (ETDEWEB)

    Bonten, Luc T.C., E-mail: luc.bonten@wur.nl [Alterra-Wageningen UR, Soil Science Centre, P.O. Box 47, 6700 AA Wageningen (Netherlands); Groenenberg, Jan E. [Alterra-Wageningen UR, Soil Science Centre, P.O. Box 47, 6700 AA Wageningen (Netherlands); Meesenburg, Henning [Northwest German Forest Research Station, Abt. Umweltkontrolle, Sachgebiet Intensives Umweltmonitoring, Goettingen (Germany); Vries, Wim de [Alterra-Wageningen UR, Soil Science Centre, P.O. Box 47, 6700 AA Wageningen (Netherlands)

    2011-10-15

    Various dynamic soil chemistry models have been developed to gain insight into impacts of atmospheric deposition of sulphur, nitrogen and other elements on soil and soil solution chemistry. Sorption parameters for anions and cations are generally calibrated for each site, which hampers extrapolation in space and time. On the other hand, recently developed surface complexation models (SCMs) have been successful in predicting ion sorption for static systems using generic parameter sets. This study reports the inclusion of an assemblage of these SCMs in the dynamic soil chemistry model SMARTml and applies this model to a spruce forest site in Solling, Germany. Parameters for SCMs were taken from generic datasets and not calibrated. Nevertheless, modelling results for major elements matched observations well. Further, trace metals were included in the model, also using the existing framework of SCMs. The model predicted sorption for most trace elements well. Highlights: surface complexation models can be well applied in field studies; soil chemistry under a forest site is adequately modelled using generic parameters; the model is easily extended with extra elements within the existing framework; surface complexation models can show the linkages between major soil chemistry and trace element behaviour. Surface complexation models with generic parameters make calibration of sorption superfluous in dynamic modelling of deposition impacts on soil chemistry under nature areas.

  14. Using advanced surface complexation models for modelling soil chemistry under forests: Solling forest, Germany

    International Nuclear Information System (INIS)

    Bonten, Luc T.C.; Groenenberg, Jan E.; Meesenburg, Henning; Vries, Wim de

    2011-01-01

    Various dynamic soil chemistry models have been developed to gain insight into impacts of atmospheric deposition of sulphur, nitrogen and other elements on soil and soil solution chemistry. Sorption parameters for anions and cations are generally calibrated for each site, which hampers extrapolation in space and time. On the other hand, recently developed surface complexation models (SCMs) have been successful in predicting ion sorption for static systems using generic parameter sets. This study reports the inclusion of an assemblage of these SCMs in the dynamic soil chemistry model SMARTml and applies this model to a spruce forest site in Solling, Germany. Parameters for SCMs were taken from generic datasets and not calibrated. Nevertheless, modelling results for major elements matched observations well. Further, trace metals were included in the model, also using the existing framework of SCMs. The model predicted sorption for most trace elements well. Highlights: surface complexation models can be well applied in field studies; soil chemistry under a forest site is adequately modelled using generic parameters; the model is easily extended with extra elements within the existing framework; surface complexation models can show the linkages between major soil chemistry and trace element behaviour. Surface complexation models with generic parameters make calibration of sorption superfluous in dynamic modelling of deposition impacts on soil chemistry under nature areas.
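    A drastically simplified sketch of the equilibrium bookkeeping behind sorption models of this family: a single-site metal-binding balance solved for the free concentration. The binding constant and totals are hypothetical, and real surface complexation models add electrostatic correction terms omitted here.

    ```python
    from scipy.optimize import brentq

    K, S_T, M_T = 1e5, 1e-3, 5e-4   # binding constant, total sites, total metal (mol/L)

    def residual(m):
        s_free = S_T / (1 + K * m)       # site mass balance for S + M <-> SM
        return m + K * m * s_free - M_T  # metal mass balance

    m = brentq(residual, 1e-15, M_T)     # solve for the free metal concentration
    print(f"free metal {m:.3e} M, sorbed fraction {(M_T - m) / M_T:.2%}")
    ```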

  15. A marketing mix model for a complex and turbulent environment

    Directory of Open Access Journals (Sweden)

    R. B. Mason

    2007-12-01

    Purpose: This paper is based on the proposition that the choice of marketing tactics is determined, or at least significantly influenced, by the nature of the company’s external environment. It aims to illustrate the type of marketing mix tactics that are suggested for a complex and turbulent environment when marketing and the environment are viewed through a chaos and complexity theory lens. Design/Methodology/Approach: Since chaos and complexity theories are proposed as a good means of understanding the dynamics of complex and turbulent markets, a comprehensive review and analysis of literature on the marketing mix and marketing tactics from a chaos and complexity viewpoint was conducted. From this literature review, a marketing mix model was conceptualised. Findings: A marketing mix model considered appropriate for success in complex and turbulent environments was developed. In such environments, the literature suggests destabilising marketing activities are more effective, whereas stabilising type activities are more effective in simple, stable environments. Therefore the model proposes predominantly destabilising type tactics as appropriate for a complex and turbulent environment such as is currently being experienced in South Africa. Implications: This paper is of benefit to marketers by emphasising a new way to consider the future marketing activities of their companies. How this model can assist marketers and suggestions for research to develop and apply this model are provided. It is hoped that the model suggested will form the basis of empirical research to test its applicability in the turbulent South African environment. Originality/Value: Since businesses and markets are complex adaptive systems, using complexity theory to understand how to cope in complex, turbulent environments is necessary, but has not been widely researched. In fact, most chaos and complexity theory work in marketing has concentrated on marketing strategy, with

  16. Bourbaki's structure theory in the problem of complex systems simulation models synthesis and model-oriented programming

    Science.gov (United States)

    Brodsky, Yu. I.

    2015-01-01

    The work is devoted to the application of Bourbaki's structure theory to substantiate the synthesis of simulation models of complex multicomponent systems, where every component may be a complex system itself. An application of Bourbaki's structure theory offers a new approach to the design and computer implementation of simulation models of complex multicomponent systems: model synthesis and model-oriented programming. It differs from the traditional object-oriented approach. The central concept of this new approach, and at the same time the basic building block for the construction of more complex structures, is the concept of the model-component. A model-component is endowed with a more elaborate structure than, for example, an object in object-oriented analysis. This structure provides the model-component with independent behavior: the ability to respond in a standard way to standard requests from its internal and external environment. At the same time, the computer implementation of a model-component's behavior is invariant under the integration of model-components into complexes. This fact allows one, firstly, to construct fractal models of any complexity and, secondly, to implement the computational process for such constructions uniformly, by a single universal program. In addition, the proposed paradigm allows one to exclude imperative programming and to generate computer code with a high degree of parallelism.
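    An illustrative rendering of the model-component idea in code (not the author's implementation): every component, simple or composite, answers the same standard request, so a single universal driver runs arbitrarily nested complexes.

    ```python
    class ModelComponent:
        """A unit with a standard response to the standard 'step' request;
        complexes expose the same interface, so composition is fractal."""

        def __init__(self, name, children=()):
            self.name = name
            self.children = list(children)
            self.state = 0.0

        def step(self, dt):
            for c in self.children:     # one universal driver for any nesting depth
                c.step(dt)
            self.state += dt            # placeholder local dynamics

    root = ModelComponent("system", [
        ModelComponent("subsystem", [ModelComponent("unit")]),
        ModelComponent("sensor"),
    ])
    root.step(0.1)                      # the same call drives the whole complex
    ```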

  17. Polystochastic Models for Complexity

    CERN Document Server

    Iordache, Octavian

    2010-01-01

    This book is devoted to complexity understanding and management, considered as the main source of efficiency and prosperity for the next decades. Divided into six chapters, the book begins with a presentation of basic concepts such as complexity, emergence and closure. The second chapter looks to methods and introduces polystochastic models, the wave equation, possibilities and entropy. The third chapter, focusing on physical and chemical systems, analyzes flow-sheet synthesis, cyclic operations of separation, drug delivery systems and entropy production. Biomimetic systems represent the main objective of the fourth chapter. Case studies refer to bio-inspired calculation methods, to the role of artificial genetic codes, neural networks and neural codes for evolutionary calculus and for evolvable circuits as biomimetic devices. The fifth chapter, taking its inspiration from systems sciences and cognitive sciences, looks to engineering design, case base reasoning methods, failure analysis, and multi-agent manufacturing...

  18. Effect of growth media on cell envelope composition and nitrile hydratase stability in Rhodococcus rhodochrous strain DAP 96253.

    Science.gov (United States)

    Tucker, Trudy-Ann; Crow, Sidney A; Pierce, George E

    2012-11-01

    Rhodococcus is an important industrial microorganism that possesses diverse metabolic capabilities; it also has a cell envelope composed of an outer layer of mycolic acids and glycolipids. Selected Rhodococcus species, when induced, are capable of transforming nitriles to the corresponding amide by the enzyme nitrile hydratase (NHase), and subsequently to the corresponding acid via an amidase. This nitrile biochemistry has generated interest in using the rhodococci as biocatalysts. It was hypothesized that altering sugars in the growth medium might impact cell envelope components and have effects on NHase. When the primary carbon source in growth media was changed from glucose to fructose, maltose, or maltodextrin, the NHase activity increased. Cells grown in the presence of maltose and maltodextrin showed the highest activities against propionitrile, 197 and 202 units/mg cdw, respectively. Stability of NHase was also affected, as cells grown in the presence of maltose and maltodextrin retained more NHase activity at 55 °C (45 and 23 %, respectively) than cells grown in the presence of glucose or fructose (19 and 10 %, respectively). Supplementation of trehalose in the growth media resulted in increased NHase stability at 55 °C, as cells grown in the presence of glucose retained 40 % NHase activity as opposed to 19 % without the presence of trehalose. Changes in cell envelope components, such as mycolic acids and glycolipids, were evaluated by high-performance liquid chromatography (HPLC) and thin-layer chromatography (TLC), respectively. Changing sugars and the addition of inducing components for NHase, such as cobalt and urea, in growth media resulted in changes in mycolic acid profiles. Mycolic acid content increased 5 times when cobalt and urea were added to media with glucose. Glycolipid levels were also affected by the changes in sugars and addition of inducing components. This research demonstrates that carbohydrate selection impacts NHase activity and

  19. NCBI nr-aa BLAST: CBRC-CREM-01-1371 [SEVENS]

    Lifescience Database Archive (English)

    CBRC-CREM-01-1371 ref|YP_926913.1| Phosphopyruvate hydratase [Shewanella amazonensis SB2B] gb|ABL99243.1| Phosphopyruvate hydratase [Shewanella amazonensis SB2B] YP_926913.1 6e-90 71%

  20. Complexation and molecular modeling studies of europium(III)-gallic acid-amino acid complexes.

    Science.gov (United States)

    Taha, Mohamed; Khan, Imran; Coutinho, João A P

    2016-04-01

    With many metal-based drugs extensively used today in the treatment of cancer, attention has focused on the development of new coordination compounds with antitumor activity, with europium(III) complexes recently introduced as novel anticancer drugs. The aim of this work is to design new Eu(III) complexes with gallic acid, an antioxidant phenolic compound. Gallic acid was chosen because it shows anticancer activity without harming healthy cells. As an antioxidant, it helps to protect human cells against the oxidative damage that is implicated in DNA damage, cancer, and accelerated cell aging. In this work, the formation of binary and ternary complexes of Eu(III) with gallic acid as primary ligand and the amino acids alanine, leucine, isoleucine, and tryptophan was studied by glass electrode potentiometry in aqueous solution containing 0.1 M NaNO3 at (298.2 ± 0.1) K. Their overall stability constants were evaluated and the concentration distributions of the complex species in solution were calculated. The protonation constants of gallic acid and the amino acids were also determined at our experimental conditions and compared with those predicted by the conductor-like screening model for realistic solvation (COSMO-RS). The geometries of the Eu(III)-gallic acid complexes were characterized by density functional theory (DFT). Spectroscopic UV-visible and photoluminescence measurements were carried out to confirm the formation of Eu(III)-gallic acid complexes in aqueous solutions. Copyright © 2016 Elsevier Inc. All rights reserved.
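    A small sketch of the speciation arithmetic behind such concentration-distribution calculations: protonation fractions of a diprotic acid as a function of pH. The pKa values are illustrative placeholders, not measured constants for gallic acid.

    ```python
    import numpy as np

    pKa = np.array([4.4, 8.7])          # hypothetical stepwise constants
    pH = np.linspace(2, 12, 6)
    H = 10.0 ** -pH
    Ka = 10.0 ** -pKa

    # alpha fractions of H2A, HA-, A2- from the cumulative equilibria
    denom = H**2 + H * Ka[0] + Ka[0] * Ka[1]
    alpha = np.vstack([H**2, H * Ka[0], Ka[0] * Ka[1]]) / denom
    for row, label in zip(alpha, ("H2A", "HA-", "A2-")):
        print(label, np.round(row, 3))
    ```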

  1. Simulation and Analysis of Complex Biological Processes: an Organisation Modelling Perspective

    NARCIS (Netherlands)

    Bosse, T.; Jonker, C.M.; Treur, J.

    2005-01-01

    This paper explores how the dynamics of complex biological processes can be modelled and simulated as an organisation of multiple agents. This modelling perspective identifies organisational structure occurring in complex decentralised processes and handles complexity of the analysis of the dynamics

  2. Dynamic complexities in a parasitoid-host-parasitoid ecological model

    International Nuclear Information System (INIS)

    Yu Hengguo; Zhao Min; Lv Songjuan; Zhu Lili

    2009-01-01

    Chaotic dynamics have been observed in a wide range of population models. In this study, the complex dynamics in a discrete-time ecological model of parasitoid-host-parasitoid are presented. The model shows that the superiority coefficient not only stabilizes the dynamics, but may strongly destabilize them as well. Many forms of complex dynamics were observed, including pitchfork bifurcation with quasi-periodicity, period-doubling cascade, chaotic crisis, chaotic bands with narrow or wide periodic window, intermittent chaos, and supertransient behavior. Furthermore, computation of the largest Lyapunov exponent demonstrated the chaotic dynamic behavior of the model

  3. Dynamic complexities in a parasitoid-host-parasitoid ecological model

    Energy Technology Data Exchange (ETDEWEB)

    Yu Hengguo [School of Mathematic and Information Science, Wenzhou University, Wenzhou, Zhejiang 325035 (China); Zhao Min [School of Life and Environmental Science, Wenzhou University, Wenzhou, Zhejiang 325027 (China)], E-mail: zmcn@tom.com; Lv Songjuan; Zhu Lili [School of Mathematic and Information Science, Wenzhou University, Wenzhou, Zhejiang 325035 (China)

    2009-01-15

    Chaotic dynamics have been observed in a wide range of population models. In this study, the complex dynamics in a discrete-time ecological model of parasitoid-host-parasitoid are presented. The model shows that the superiority coefficient not only stabilizes the dynamics, but may strongly destabilize them as well. Many forms of complex dynamics were observed, including pitchfork bifurcation with quasi-periodicity, period-doubling cascade, chaotic crisis, chaotic bands with narrow or wide periodic window, intermittent chaos, and supertransient behavior. Furthermore, computation of the largest Lyapunov exponent demonstrated the chaotic dynamic behavior of the model.
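    For orientation, the largest Lyapunov exponent of a one-dimensional map can be estimated by averaging log|f'(x)| along an orbit; the logistic map below is a stand-in for the parasitoid-host-parasitoid map, with assumed parameter values.

    ```python
    import numpy as np

    r, x = 3.9, 0.4                   # chaotic regime of the logistic map
    for _ in range(1000):             # discard the transient
        x = r * x * (1 - x)

    n, lyap = 100_000, 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        lyap += np.log(abs(r * (1 - 2 * x)))   # log of the local stretching rate
    print("largest Lyapunov exponent:", lyap / n)  # > 0 indicates chaos
    ```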

  4. What do we gain from simplicity versus complexity in species distribution models?

    Science.gov (United States)

    Merow, Cory; Smith, Matthew J.; Edwards, Thomas C.; Guisan, Antoine; McMahon, Sean M.; Normand, Signe; Thuiller, Wilfried; Wuest, Rafael O.; Zimmermann, Niklaus E.; Elith, Jane

    2014-01-01

    Species distribution models (SDMs) are widely used to explain and predict species ranges and environmental niches. They are most commonly constructed by inferring species' occurrence–environment relationships using statistical and machine-learning methods. The variety of methods that can be used to construct SDMs (e.g. generalized linear/additive models, tree-based models, maximum entropy, etc.), and the variety of ways that such models can be implemented, permits substantial flexibility in SDM complexity. Building models with an appropriate amount of complexity for the study objectives is critical for robust inference. We characterize complexity as the shape of the inferred occurrence–environment relationships and the number of parameters used to describe them, and search for insights into whether additional complexity is informative or superfluous. By building ‘under fit’ models, having insufficient flexibility to describe observed occurrence–environment relationships, we risk misunderstanding the factors shaping species distributions. By building ‘over fit’ models, with excessive flexibility, we risk inadvertently ascribing pattern to noise or building opaque models. However, model selection can be challenging, especially when comparing models constructed under different modeling approaches. Here we argue for a more pragmatic approach: researchers should constrain the complexity of their models based on study objective, attributes of the data, and an understanding of how these interact with the underlying biological processes. We discuss guidelines for balancing under fitting with over fitting and consequently how complexity affects decisions made during model building. Although some generalities are possible, our discussion reflects differences in opinions that favor simpler versus more complex models. We conclude that combining insights from both simple and complex SDM building approaches best advances our knowledge of current and future species
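    A minimal under-/over-fitting demonstration in the spirit of the argument, using polynomial regression on synthetic data rather than an actual SDM method (an assumption): validation error exposes the excessive flexibility that training error rewards.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    x = np.sort(rng.uniform(-1, 1, 60))
    y = np.sin(3 * x) + rng.normal(0, 0.2, 60)
    xt, yt, xv, yv = x[::2], y[::2], x[1::2], y[1::2]   # train / validation split

    for deg in (1, 3, 9, 13):
        c = np.polyfit(xt, yt, deg)
        err_t = np.mean((np.polyval(c, xt) - yt) ** 2)
        err_v = np.mean((np.polyval(c, xv) - yv) ** 2)
        print(f"degree {deg:2d}: train MSE {err_t:.3f}, validation MSE {err_v:.3f}")
    ```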

  5. Algebraic computability and enumeration models recursion theory and descriptive complexity

    CERN Document Server

    Nourani, Cyrus F

    2016-01-01

    This book, Algebraic Computability and Enumeration Models: Recursion Theory and Descriptive Complexity, presents new techniques with functorial models to address important areas on pure mathematics and computability theory from the algebraic viewpoint. The reader is first introduced to categories and functorial models, with Kleene algebra examples for languages. Functorial models for Peano arithmetic are described toward important computational complexity areas on a Hilbert program, leading to computability with initial models. Infinite language categories are also introduced to explain descriptive complexity with recursive computability with admissible sets and urelements. Algebraic and categorical realizability is staged on several levels, addressing new computability questions with omitting types realizably. Further applications to computing with ultrafilters on sets and Turing degree computability are examined. Functorial models computability is presented with algebraic trees realizing intuitionistic type...

  6. Modeling complex work systems - method meets reality

    NARCIS (Netherlands)

    van der Veer, Gerrit C.; Hoeve, Machteld; Lenting, Bert

    1996-01-01

    Modeling an existing task situation is often a first phase in the (re)design of information systems. For complex systems design, this model should consider both the people and the organization involved, the work, and situational aspects. Groupware Task Analysis (GTA) as part of a method for the

  7. Intrinsic Uncertainties in Modeling Complex Systems.

    Energy Technology Data Exchange (ETDEWEB)

    Cooper, Curtis S; Bramson, Aaron L.; Ames, Arlo L.

    2014-09-01

    Models are built to understand and predict the behaviors of both natural and artificial systems. Because it is always necessary to abstract away aspects of any non-trivial system being modeled, we know models can potentially leave out important, even critical elements. This reality of the modeling enterprise forces us to consider the prospective impacts of those effects completely left out of a model - either intentionally or unconsidered. Insensitivity to new structure is an indication of diminishing returns. In this work, we represent a hypothetical unknown effect on a validated model as a finite perturbation whose amplitude is constrained within a control region. We find robustly that without further constraints, no meaningful bounds can be placed on the amplitude of a perturbation outside of the control region. Thus, forecasting into unsampled regions is a very risky proposition. We also present inherent difficulties with proper time discretization of models and representing inherently discrete quantities. We point out potentially worrisome uncertainties, arising from mathematical formulation alone, which modelers can inadvertently introduce into models of complex systems. Acknowledgements: This work has been funded under early-career LDRD project #170979, entitled "Quantifying Confidence in Complex Systems Models Having Structural Uncertainties", which ran from 04/2013 to 09/2014. We wish to express our gratitude to the many researchers at Sandia who contributed ideas to this work, as well as feedback on the manuscript. In particular, we would like to mention George Barr, Alexander Outkin, Walt Beyeler, Eric Vugrin, and Laura Swiler for providing invaluable advice and guidance through the course of the project. We would also like to thank Steven Kleban, Amanda Gonzales, Trevor Manzanares, and Sarah Burwell for their assistance in managing project tasks and resources.

  8. Fatigue modeling of materials with complex microstructures

    DEFF Research Database (Denmark)

    Qing, Hai; Mishnaevsky, Leon

    2011-01-01

    with the phenomenological model of fatigue damage growth. As a result, the fatigue lifetime of materials with complex structures can be determined as a function of the parameters of their structures. As an example, the fatigue lifetimes of wood modeled as a cellular material with multilayered, fiber reinforced walls were...

  9. Stability of Rotor Systems: A Complex Modelling Approach

    DEFF Research Database (Denmark)

    Kliem, Wolfhard; Pommer, Christian; Stoustrup, Jakob

    1996-01-01

    A large class of rotor systems can be modelled by a complex matrix differential equation of second order. The angular velocity of the rotor plays the role of a parameter. We apply the Lyapunov matrix equation in a complex setting and prove two new stability results which are compared...
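    A sketch of the standard companion linearization used to check the stability of such second-order complex systems: all eigenvalues in the open left half-plane imply asymptotic stability. The matrices below are illustrative, not a calibrated rotor model.

    ```python
    import numpy as np

    # M z'' + D z' + K z = 0 with complex D, K (illustrative values)
    rng = np.random.default_rng(6)
    n = 3
    M = np.eye(n)
    D = np.diag([0.4, 0.5, 0.6]) + 0.10j * (rng.random((n, n)) - 0.5)
    K = np.diag([2.0, 3.0, 4.0]) + 0.05j * (rng.random((n, n)) - 0.5)

    # first-order companion form of the quadratic eigenvalue problem
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.linalg.solve(M, K), -np.linalg.solve(M, D)]])
    eig = np.linalg.eigvals(A)
    print("stable" if np.all(eig.real < 0) else "unstable")
    print(np.round(eig, 3))
    ```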

  10. Complex Systems and Self-organization Modelling

    CERN Document Server

    Bertelle, Cyrille; Kadri-Dahmani, Hakima

    2009-01-01

    The concern of this book is the use of emergent computing and self-organization modelling within various applications of complex systems. The authors focus their attention both on the innovative concepts and implementations in order to model self-organizations, but also on the relevant applicative domains in which they can be used efficiently. This book is the outcome of a workshop meeting within ESM 2006 (Eurosis), held in Toulouse, France in October 2006.

  11. Understanding complex urban systems integrating multidisciplinary data in urban models

    CERN Document Server

    Gebetsroither-Geringer, Ernst; Atun, Funda; Werner, Liss

    2016-01-01

    This book is devoted to the modeling and understanding of complex urban systems. This second volume of Understanding Complex Urban Systems focuses on the challenges of the modeling tools, concerning, e.g., the quality and quantity of data and the selection of an appropriate modeling approach. It is meant to support urban decision-makers—including municipal politicians, spatial planners, and citizen groups—in choosing an appropriate modeling approach for their particular modeling requirements. The contributors to this volume are from different disciplines, but all share the same goal: optimizing the representation of complex urban systems. They present and discuss a variety of approaches for dealing with data-availability problems and finding appropriate modeling approaches—and not only in terms of computer modeling. The selection of articles featured in this volume reflect a broad variety of new and established modeling approaches such as: - An argument for using Big Data methods in conjunction with Age...

  12. Minimum-complexity helicopter simulation math model

    Science.gov (United States)

    Heffley, Robert K.; Mnich, Marc A.

    1988-01-01

    An example of a minimal complexity simulation helicopter math model is presented. Motivating factors are the computational delays, cost, and inflexibility of the very sophisticated math models now in common use. A helicopter model form is given which addresses each of these factors and provides better engineering understanding of the specific handling qualities features which are apparent to the simulator pilot. The technical approach begins with specification of features which are to be modeled, followed by a build up of individual vehicle components and definition of equations. Model matching and estimation procedures are given which enable the modeling of specific helicopters from basic data sources such as flight manuals. Checkout procedures are given which provide for total model validation. A number of possible model extensions and refinement are discussed. Math model computer programs are defined and listed.

  13. Elastic Network Model of a Nuclear Transport Complex

    Science.gov (United States)

    Ryan, Patrick; Liu, Wing K.; Lee, Dockjin; Seo, Sangjae; Kim, Young-Jin; Kim, Moon K.

    2010-05-01

    The structure of Kap95p was obtained from the Protein Data Bank (www.pdb.org) and analyzed. RanGTP plays an important role in both nuclear protein import and export cycles. In the nucleus, RanGTP releases macromolecular cargoes from importins and conversely facilitates cargo binding to exportins. Although the crystal structure of the nuclear import complex formed by importin Kap95p and RanGTP was recently identified, its molecular mechanism still remains unclear. To understand the relationship between structure and function of a nuclear transport complex, a structure-based mechanical model of the Kap95p:RanGTP complex is introduced. In this model, a protein structure is simply modeled as an elastic network in which a set of coarse-grained point masses are connected by linear springs representing biochemical interactions at the atomic level. Harmonic normal mode analysis (NMA) and anharmonic elastic network interpolation (ENI) are performed to predict the modes of vibration and a feasible pathway between the locked and unlocked conformations of Kap95p, respectively. Simulation results imply that the binding of RanGTP to Kap95p induces the release of the cargo in the nucleus as well as prevents any new cargo from attaching to the Kap95p:RanGTP complex.
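    A minimal Gaussian-network-model sketch of the elastic-network idea: springs connect coarse-grained sites within a cutoff, and the slow modes come from the spectrum of the Kirchhoff matrix. Random coordinates stand in for the actual Kap95p:RanGTP structure.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    xyz = rng.random((120, 3)) * 30.0     # stand-in C-alpha coordinates (angstroms)
    cutoff = 7.0                          # typical GNM contact cutoff

    d = np.linalg.norm(xyz[:, None, :] - xyz[None, :, :], axis=-1)
    contact = (d < cutoff) & (d > 0)
    kirchhoff = np.diag(contact.sum(axis=1)) - contact.astype(float)

    w, v = np.linalg.eigh(kirchhoff)      # normal mode analysis of the network
    # w[0] ~ 0 is the trivial mode; the next few are the slow functional modes
    print("lowest nonzero eigenvalues:", np.round(w[1:4], 3))
    ```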

  14. Sparse Estimation Using Bayesian Hierarchical Prior Modeling for Real and Complex Linear Models

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand; Manchón, Carles Navarro; Badiu, Mihai Alin

    2015-01-01

    In sparse Bayesian learning (SBL), Gaussian scale mixtures (GSMs) have been used to model sparsity-inducing priors that realize a class of concave penalty functions for the regression task in real-valued signal models. Motivated by the relative scarcity of formal tools for SBL in complex-valued models ... error, and robustness in low and medium signal-to-noise ratio regimes.

  15. Complex Networks in Psychological Models

    Science.gov (United States)

    Wedemann, R. S.; Carvalho, L. S. A. V. D.; Donangelo, R.

    We develop schematic, self-organizing, neural-network models to describe mechanisms associated with mental processes in terms of a neurocomputational substrate. These models are examples of real-world complex networks with interesting general topological structures. Considering dopaminergic signal-to-noise neuronal modulation in the central nervous system, we propose neural network models to explain the development of cortical map structure and the dynamics of memory access, and to unify different mental processes into a single neurocomputational substrate. Based on our neural network models, neurotic behavior may be understood as an associative memory process in the brain, and the linguistic, symbolic associative process involved in psychoanalytic working-through can be mapped onto a corresponding process of reconfiguration of the neural network. The models are illustrated through computer simulations, where we varied dopaminergic modulation and observed the self-organizing emergent patterns at the resulting semantic map, interpreting them as different manifestations of mental functioning, from psychotic through normal to neurotic behavior, and creativity.

  16. Complex scaling in the cluster model

    International Nuclear Information System (INIS)

    Kruppa, A.T.; Lovas, R.G.; Gyarmati, B.

    1987-01-01

    To find the positions and widths of resonances, a complex scaling of the intercluster relative coordinate is introduced into the resonating-group model. In the generator-coordinate technique used to solve the resonating-group equation the complex scaling requires minor changes in the formulae and code. The finding of the resonances does not need any preliminary guess or explicit reference to any asymptotic prescription. The procedure is applied to the resonances in the relative motion of two ground-state α clusters in ⁸Be, but is appropriate for any system consisting of two clusters. (author) 23 refs.; 5 figs
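
    The essence of the method, in standard textbook form (not taken from this paper), is the rotation of the relative coordinate into the complex plane, under which a resonance becomes a discrete complex eigenvalue:

    ```latex
    r \;\to\; r\,e^{i\theta}, \qquad
    H(\theta)\,\psi = E\,\psi, \qquad
    E = E_{\mathrm{res}} - \tfrac{i}{2}\,\Gamma ,
    ```

    where $E_{\mathrm{res}}$ is the resonance position and $\Gamma$ its width; once the scaling angle $\theta$ is large enough to uncover the resonance, the eigenvalue is independent of $\theta$, which is why no initial guess or asymptotic prescription is needed.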

  17. Higher genus correlators for the complex matrix model

    International Nuclear Information System (INIS)

    Ambjorn, J.; Kristhansen, C.F.; Makeenko, Y.M.

    1992-01-01

    In this paper, the authors describe an iterative scheme which allows the calculation of any multi-loop correlator for the complex matrix model to any genus, using only the first in the chain of loop equations. The method works for a completely general potential and the results contain no explicit reference to the couplings. The genus g contribution to the m-loop correlator depends on a finite number of parameters, namely at most 4g - 2 + m. The authors find the generating functional explicitly up to genus three, and show as well that the model is equivalent to an external field problem for the complex matrix model with a logarithmic potential

  18. Assessing Complexity in Learning Outcomes--A Comparison between the SOLO Taxonomy and the Model of Hierarchical Complexity

    Science.gov (United States)

    Stålne, Kristian; Kjellström, Sofia; Utriainen, Jukka

    2016-01-01

    An important aspect of higher education is to educate students who can manage complex relationships and solve complex problems. Teachers need to be able to evaluate course content with regard to complexity, as well as evaluate students' ability to assimilate complex content and express it in the form of a learning outcome. One model for evaluating…

  19. Modeling of Complex Life Cycle Prediction Based on Cell Division

    Directory of Open Access Journals (Sweden)

    Fucheng Zhang

    2017-01-01

    Full Text Available Effective fault diagnosis and reasonable life prediction are of great significance and practical engineering value for the safety, reliability, and maintenance cost of equipment and its working environment. At present, equipment life prediction methods include life prediction based on condition monitoring, combined forecasting models, and data-driven approaches, most of which need to be based on a large amount of data. To address this issue, we propose learning from the mechanism of cell division in organisms. By studying complex multifactor correlation life models, we have established a life prediction model of moderate complexity. In this paper, we model life prediction on cell division. Experiments show that our model can effectively simulate the state of cell division. Using this model as a reference, we will apply it to complex life prediction for equipment.

  20. Mathematical Models to Determine Stable Behavior of Complex Systems

    Science.gov (United States)

    Sumin, V. I.; Dushkin, A. V.; Smolentseva, T. E.

    2018-05-01

    The paper analyzes the possibility of predicting the functioning of a complex dynamic system with a significant amount of circulating information and a large number of random factors impacting its functioning. The functioning of the complex dynamic system is described in terms of chaotic states, self-organized criticality and bifurcations. This problem may be resolved by modeling such systems as dynamic ones, without applying stochastic models, and by taking strange attractors into account.

  1. Facing urban complexity: towards cognitive modelling. Part 1. Modelling as a cognitive mediator

    Directory of Open Access Journals (Sweden)

    Sylvie Occelli

    2002-03-01

    Full Text Available Over the last twenty years, complexity issues have been a central theme of enquiry for the modelling field. While contributing both to a critical revisiting of the existing methods and to opening new ways of reasoning, the effectiveness (and sense) of modelling activity was rarely questioned. Acknowledgment of complexity, however, has been a fruitful spur to new and more sophisticated methods in order to improve understanding and advance the geographical sciences. However, its contribution to tackling urban problems in everyday life has been rather poor and mainly limited to rhetorical claims about the potentialities of the new approach. We argue that although complexity has put the classical modelling activity in serious distress, it is disclosing new potentialities, which are still largely unnoticed. These are primarily related to what the authors have called the structural cognitive shift, which involves both the contents and the role of modelling activity. This paper is the first part of a work aimed at illustrating the main features of this shift and discussing its main consequences for the modelling activity. We contend that a most relevant aspect of novelty lies in the new role of modelling as a cognitive mediator, i.e. as a kind of interface between the various components of a modelling process and the external environment to which a model application belongs.

  2. BlenX-based compositional modeling of complex reaction mechanisms

    Directory of Open Access Journals (Sweden)

    Judit Zámborszky

    2010-02-01

    Full Text Available Molecular interactions are wired in a fascinating way, resulting in the complex behavior of biological systems. Theoretical modeling provides a useful framework for understanding the dynamics and the function of such networks. The complexity of biological networks calls for conceptual tools that manage the combinatorial explosion of the set of possible interactions. A suitable conceptual tool to attack complexity is compositionality, already successfully used in the process algebra field to model computer systems. We rely on the BlenX programming language, which originated from the beta-binders process calculus, to specify and simulate high-level descriptions of biological circuits. The Gillespie stochastic framework of BlenX requires the decomposition of phenomenological functions into basic elementary reactions. Systematic unpacking of complex reaction mechanisms into BlenX templates is shown in this study. The estimation/derivation of missing parameters and the challenges emerging from compositional model building in stochastic process algebras are discussed. A biological example on the circadian clock is presented as a case study of BlenX compositionality.
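
    To make the stochastic setting concrete, here is a minimal Gillespie simulation of one reversible pair of elementary reactions, the kind of basic step that phenomenological rate laws are unpacked into (a standalone Python sketch under assumed rate constants, not BlenX code):

    ```python
    import numpy as np

    def gillespie(x, k1, k2, t_end=10.0, seed=0):
        """Exact stochastic simulation of A + B -> C (k1) and C -> A + B (k2)."""
        rng = np.random.default_rng(seed)
        t = 0.0
        while t < t_end:
            a = np.array([k1 * x[0] * x[1], k2 * x[2]])  # reaction propensities
            a0 = a.sum()
            if a0 == 0.0:
                break
            t += rng.exponential(1.0 / a0)               # waiting time to next event
            if rng.random() < a[0] / a0:                 # choose which reaction fires
                x += np.array([-1, -1, 1])
            else:
                x += np.array([1, 1, -1])
        return t, x

    t, x = gillespie(np.array([100, 100, 0]), k1=0.005, k2=0.1)
    print(f"t = {t:.2f}, [A, B, C] = {x}")
    ```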

  3. Complex fluids modeling and algorithms

    CERN Document Server

    Saramito, Pierre

    2016-01-01

    This book presents a comprehensive overview of the modeling of complex fluids, including many common substances, such as toothpaste, hair gel, mayonnaise, liquid foam, cement and blood, which cannot be described by Navier-Stokes equations. It also offers an up-to-date mathematical and numerical analysis of the corresponding equations, as well as several practical numerical algorithms and software solutions for the approximation of the solutions. It discusses industrial (molten plastics, forming process), geophysical (mud flows, volcanic lava, glaciers and snow avalanches), and biological (blood flows, tissues) modeling applications. This book is a valuable resource for undergraduate students and researchers in applied mathematics, mechanical engineering and physics.

  4. The utility of Earth system Models of Intermediate Complexity

    NARCIS (Netherlands)

    Weber, S.L.

    2010-01-01

    Intermediate-complexity models are models which describe the dynamics of the atmosphere and/or ocean in less detail than conventional General Circulation Models (GCMs). At the same time, they go beyond the approach taken by atmospheric Energy Balance Models (EBMs) or ocean box models by

  5. Modelling, Estimation and Control of Networked Complex Systems

    CERN Document Server

    Chiuso, Alessandro; Frasca, Mattia; Rizzo, Alessandro; Schenato, Luca; Zampieri, Sandro

    2009-01-01

    The paradigm of complexity is pervading both science and engineering, leading to the emergence of novel approaches oriented at the development of a systemic view of the phenomena under study; the definition of powerful tools for modelling, estimation, and control; and the cross-fertilization of different disciplines and approaches. This book is devoted to networked systems, which are one of the most promising paradigms of complexity. It is demonstrated that complex, dynamical networks are powerful tools to model, estimate, and control many interesting phenomena, like agent coordination, synchronization, social and economic events, networks of critical infrastructures, resource allocation, information processing, or control over communication networks. Moreover, it is shown how the recent technological advances in wireless communication and the decrease in cost and size of electronic devices are promoting the appearance of large inexpensive interconnected systems, each with computational, sensing and mobile cap...

  6. EVALUATING THE NOVEL METHODS ON SPECIES DISTRIBUTION MODELING IN COMPLEX FOREST

    Directory of Open Access Journals (Sweden)

    C. H. Tu

    2012-07-01

    Full Text Available The prediction of species distribution has become a focus in ecology. For predicting results more effectively and accurately, some novel methods have been proposed recently, like support vector machines (SVM) and maximum entropy (MAXENT). However, the high complexity of forests, like those in Taiwan, makes the modeling even harder. In this study, we aim to explore which method is more applicable to species distribution modeling in complex forest. Castanopsis carlesii (long-leaf chinkapin, LLC), which grows widely in Taiwan, was chosen as the target species because its seeds are an important food source for animals. We overlaid the tree samples on layers of altitude, slope, aspect, terrain position, and a vegetation index derived from SPOT-5 images, and developed three models, MAXENT, SVM, and decision tree (DT), to predict the potential habitat of LLC. We evaluated these models with two sets of independent samples from different sites and examined the effect of forest complexity by changing the background sample size (BSZ). In the less complex forest (small BSZ), the accuracies of the SVM (kappa = 0.87) and DT (0.86) models were slightly higher than that of MAXENT (0.84). In the more complex situation (large BSZ), MAXENT kept a high kappa value (0.85), whereas the SVM (0.61) and DT (0.57) models dropped significantly because they limited the predicted habitat to areas close to the samples. Therefore, the MAXENT model was more applicable for predicting species' potential habitat in complex forest, whereas the SVM and DT models tended to underestimate the potential habitat of LLC.
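
    A hedged sketch of the SVM/DT part of such a comparison is given below (MAXENT needs dedicated software, so it is omitted; the synthetic predictors stand in for the altitude, slope, aspect, terrain position and vegetation index layers, and none of the numbers reproduce the study):

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import cohen_kappa_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))          # synthetic stand-ins for 5 environmental layers
    y = (X[:, 0] + 0.5 * X[:, 4] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for name, model in [("SVM", SVC()), ("DT", DecisionTreeClassifier())]:
        model.fit(X_tr, y_tr)
        kappa = cohen_kappa_score(y_te, model.predict(X_te))
        print(f"{name}: kappa = {kappa:.2f}")   # kappa is the agreement metric quoted above
    ```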

  7. Generalized complex geometry, generalized branes and the Hitchin sigma model

    International Nuclear Information System (INIS)

    Zucchini, Roberto

    2005-01-01

    Hitchin's generalized complex geometry has been shown to be relevant in compactifications of superstring theory with fluxes and is expected to lead to a deeper understanding of mirror symmetry. Gualtieri's notion of a generalized complex submanifold seems to be a natural candidate for the description of branes in this context. Recently, we introduced a Batalin-Vilkovisky field-theoretic realization of generalized complex geometry, the Hitchin sigma model, extending the well-known Poisson sigma model. In this paper, exploiting Gualtieri's formalism, we incorporate branes into the model. A detailed study of the boundary conditions obeyed by the world sheet fields is provided. Finally, it is found that, when branes are present, the classical Batalin-Vilkovisky cohomology contains an extra sector that is related non-trivially to a novel cohomology associated with the branes as generalized complex submanifolds. (author)

  8. Narrowing the gap between network models and real complex systems

    OpenAIRE

    Viamontes Esquivel, Alcides

    2014-01-01

    Simple network models that focus only on graph topology or, at best, basic interactions are often insufficient to capture all the aspects of a dynamic complex system. In this thesis, I explore those limitations, and some concrete methods of resolving them. I argue that, in order to succeed at interpreting and influencing complex systems, we need to take into account slightly more complex parts, interactions and information flows in our models. This thesis supports that affirmation with five a...

  9. Modeling geophysical complexity: a case for geometric determinism

    Directory of Open Access Journals (Sweden)

    C. E. Puente

    2007-01-01

    Full Text Available It has been customary in the last few decades to employ stochastic models to represent complex data sets encountered in geophysics, particularly in hydrology. This article reviews a deterministic geometric procedure for data modeling, one that represents whole data sets as derived distributions of simple multifractal measures via fractal functions. It is shown how such a procedure may lead to faithful holistic representations of existing geophysical data sets that, while complementing existing representations via stochastic methods, may also provide a compact language for geophysical complexity. The implications of these ideas, both scientific and philosophical, are stressed.

  10. Complex versus simple models: ion-channel cardiac toxicity prediction.

    Science.gov (United States)

    Mistry, Hitesh B

    2018-01-01

    There is growing interest in applying detailed mathematical models of the heart for ion-channel related cardiac toxicity prediction. However, a debate exists as to whether such complex models are required. Here an assessment of the predictive performance of two established large-scale biophysical cardiac models and a simple linear model, Bnet, was conducted. Three ion-channel data-sets were extracted from the literature. Each compound was designated a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via a leave-one-out cross validation. Overall the Bnet model performed as well as the leading cardiac models in two of the data-sets and outperformed both cardiac models on the latest. These results highlight the importance of benchmarking complex versus simple models but also encourage the development of simple models.

  11. Complex versus simple models: ion-channel cardiac toxicity prediction

    Directory of Open Access Journals (Sweden)

    Hitesh B. Mistry

    2018-02-01

    Full Text Available There is growing interest in applying detailed mathematical models of the heart for ion-channel related cardiac toxicity prediction. However, a debate exists as to whether such complex models are required. Here an assessment of the predictive performance of two established large-scale biophysical cardiac models and a simple linear model, Bnet, was conducted. Three ion-channel data-sets were extracted from the literature. Each compound was designated a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via a leave-one-out cross validation. Overall the Bnet model performed as well as the leading cardiac models in two of the data-sets and outperformed both cardiac models on the latest. These results highlight the importance of benchmarking complex versus simple models but also encourage the development of simple models.
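
    A heavily hedged sketch of a net-block style metric in the spirit of Bnet follows; the exact published formula should be taken from the paper itself, and here we simply assume Hill-equation block fractions at a reference concentration combined into an inward-minus-outward difference (all IC50 values hypothetical):

    ```python
    def hill_block(conc, ic50, h=1.0):
        """Fraction of a channel blocked at drug concentration `conc` (Hill equation)."""
        return 1.0 / (1.0 + (ic50 / conc) ** h)

    def net_block(conc, ic50_herg, ic50_nav, ic50_cav):
        """Assumed net-block score: inward-current block minus outward-current block."""
        inward = hill_block(conc, ic50_nav) + hill_block(conc, ic50_cav)
        outward = hill_block(conc, ic50_herg)   # hERG carries the outward IKr current
        return inward - outward

    # Hypothetical compound: strong hERG block, weak Na/Ca block -> negative score.
    print(net_block(conc=1.0, ic50_herg=0.5, ic50_nav=30.0, ic50_cav=10.0))
    ```

    A single scalar of this kind can then be thresholded or fed to a linear classifier to assign risk categories of the sort described above.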

  12. Knowledge-based inspection:modelling complex processes with the integrated Safeguards Modelling Method (iSMM)

    International Nuclear Information System (INIS)

    Abazi, F.

    2011-01-01

    Increased levels of complexity in almost every discipline and operation today raise the demand for knowledge in order to successfully run an organization, whether to generate profit or to attain a non-profit mission. The traditional way of transferring knowledge to information systems rich in data structures and complex algorithms continues to hinder the ability to swiftly turn concepts into operations. Diagrammatic modelling, commonly applied in engineering to represent concepts or reality, remains an excellent way of converging knowledge from domain experts. The nuclear verification domain represents ever more a matter of great importance to world safety and security. Demand for knowledge about nuclear processes and the verification activities used to offset potential misuse of nuclear technology will intensify with the growth of the subject technology. This doctoral thesis contributes a model-based approach for representing complex processes such as nuclear inspections. The work presented also contributes to other domains characterized by knowledge-intensive and complex processes. Based on the characteristics of a complex process, a conceptual framework was established as the theoretical basis for creating a number of modelling languages to represent the domain. The integrated Safeguards Modelling Method (iSMM) is formalized through an integrated meta-model. The diagrammatic modelling languages represent the verification domain and relevant nuclear verification aspects. Such a meta-model conceptualizes the relation between practices of process management, knowledge management and domain-specific verification principles. This fusion is considered necessary in order to create quality processes. The study also extends the formalization achieved through a meta-model by contributing a formalization language based on Pattern Theory. Through the use of graphical and mathematical constructs of the theory, process structures are formalized enhancing

  13. Network-oriented modeling addressing complexity of cognitive, affective and social interactions

    CERN Document Server

    Treur, Jan

    2016-01-01

    This book presents a new approach that can be applied to complex, integrated individual and social human processes. It provides an alternative means of addressing complexity, better suited for its purpose than, and effectively complementing, traditional strategies involving isolation and separation assumptions. Network-oriented modeling allows high-level cognitive, affective and social models in the form of (cyclic) graphs to be constructed, which can be automatically transformed into executable simulation models. The modeling format used makes it easy to take into account theories and findings about complex cognitive and social processes, which often involve dynamics based on interrelating cycles. Accordingly, it makes it possible to address complex phenomena such as the integration of emotions within cognitive processes of all kinds, of internal simulations of the mental processes of others, and of social phenomena such as shared understandings and collective actions. A variety of sample models – including ...

  14. Building a pseudo-atomic model of the anaphase-promoting complex

    International Nuclear Information System (INIS)

    Kulkarni, Kiran; Zhang, Ziguo; Chang, Leifu; Yang, Jing; Fonseca, Paula C. A. da; Barford, David

    2013-01-01

    This article describes an example of molecular replacement in which atomic models are used to interpret electron-density maps determined using single-particle electron-microscopy data. The anaphase-promoting complex (APC/C) is a large E3 ubiquitin ligase that regulates progression through specific stages of the cell cycle by coordinating the ubiquitin-dependent degradation of cell-cycle regulatory proteins. Depending on the species, the active form of the APC/C consists of 14–15 different proteins that assemble into a 20-subunit complex with a mass of approximately 1.3 MDa. A hybrid approach of single-particle electron microscopy and protein crystallography of individual APC/C subunits has been applied to generate pseudo-atomic models of various functional states of the complex. Three approaches for assigning regions of the EM-derived APC/C density map to specific APC/C subunits are described. This information was used to dock atomic models of APC/C subunits, determined either by protein crystallography or homology modelling, to specific regions of the APC/C EM map, allowing the generation of a pseudo-atomic model corresponding to 80% of the entire complex

  15. Modelling the complex dynamics of vegetation, livestock and rainfall ...

    African Journals Online (AJOL)

    In this paper, we present mathematical models that incorporate ideas from complex systems theory to integrate several strands of rangeland theory in a hierarchical framework. ... Keywords: catastrophe theory; complexity theory; disequilibrium; hysteresis; moving attractors

  16. Modeling Complex Nesting Structures in International Business Research

    DEFF Research Database (Denmark)

    Nielsen, Bo Bernhard; Nielsen, Sabina

    2013-01-01

    While hierarchical random coefficient models (RCM) are often used for the analysis of multilevel phenomena, IB issues often result in more complex nested structures. This paper illustrates how cross-nested multilevel modeling allowing for predictor variables and cross-level interactions at multiple (crossed) levels...

  17. ANS main control complex three-dimensional computer model development

    International Nuclear Information System (INIS)

    Cleaves, J.E.; Fletcher, W.M.

    1993-01-01

    A three-dimensional (3-D) computer model of the Advanced Neutron Source (ANS) main control complex is being developed. The main control complex includes the main control room, the technical support center, the materials irradiation control room, computer equipment rooms, communications equipment rooms, cable-spreading rooms, and some support offices and breakroom facilities. The model will be used to provide facility designers and operations personnel with capabilities for fit-up/interference analysis, visual ''walk-throughs'' for optimizing maintainability, and human factors and operability analyses. It will be used to determine performance design characteristics, to generate construction drawings, and to integrate control room layout, equipment mounting, grounding equipment, electrical cabling, and utility services into ANS building designs. This paper describes the development of the initial phase of the 3-D computer model for the ANS main control complex and plans for its development and use

  18. Surface-complexation models for sorption onto heterogeneous surfaces

    International Nuclear Information System (INIS)

    Harvey, K.B.

    1997-10-01

    This report describes the discrete-logK spectrum model, together with its derivation and its place in the larger context of surface-complexation modelling. The tools necessary to apply the discrete-logK spectrum model are discussed, and background information appropriate to this discussion is supplied as appendices. (author)
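
    Schematically, a discrete-logK spectrum treats the heterogeneous surface as a small set of site classes, each obeying its own mass-action law (an illustrative deprotonation example, not necessarily the reactions used in the report):

    ```latex
    \equiv\!\mathrm{S}_j\mathrm{OH} \;\rightleftharpoons\; \equiv\!\mathrm{S}_j\mathrm{O}^- + \mathrm{H}^+,
    \qquad
    K_j = \frac{[\equiv\!\mathrm{S}_j\mathrm{O}^-]\; a_{\mathrm{H}^+}}{[\equiv\!\mathrm{S}_j\mathrm{OH}]},
    \qquad
    Q_{\mathrm{total}} = \sum_j f_j\,\theta_j(K_j, a_{\mathrm{H}^+}),
    ```

    where $f_j$ is the abundance of site class $j$ and $\theta_j$ its fractional coverage; fitting the discrete set $\{\log K_j, f_j\}$ to titration or sorption data is what gives the model its name.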

  19. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations.

    Science.gov (United States)

    Hu, Eric Y; Bouteiller, Jean-Marie C; Song, Dong; Baudry, Michel; Berger, Theodore W

    2015-01-01

    Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model, capable of capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating the complex nonlinear dynamics that were represented in the original mechanistic model, and they provide a method to replicate complex and diverse synaptic transmission within neuron network simulations.
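
    A minimal sketch of the underlying machinery, a discrete-time Volterra expansion truncated at second order, is shown below (the published model goes to third order and estimates its kernels from the mechanistic synapse model; the decaying kernels here are invented for illustration):

    ```python
    import numpy as np

    def volterra2(x, k0, k1, k2):
        """y[n] = k0 + sum_i k1[i] x[n-i] + sum_{i,j} k2[i,j] x[n-i] x[n-j]."""
        m = len(k1)
        y = np.zeros(len(x))
        for n in range(len(x)):
            w = x[max(0, n - m + 1): n + 1][::-1]   # x[n], x[n-1], ...
            w = np.pad(w, (0, m - len(w)))          # zero-pad early samples
            y[n] = k0 + k1 @ w + w @ k2 @ w
        return y

    m = 8
    k1 = np.exp(-np.arange(m) / 2.0)        # decaying first-order (linear) kernel
    k2 = 0.05 * np.outer(k1, k1)            # separable second-order kernel
    spikes = (np.random.default_rng(1).random(100) < 0.1).astype(float)
    print(volterra2(spikes, k0=0.0, k1=k1, k2=k2)[:10])
    ```

    The appeal is exactly the trade-off described above: once the kernels are estimated, evaluating the expansion costs far less than integrating the full kinetic scheme.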

  20. Model-Based Approach to the Evaluation of Task Complexity in Nuclear Power Plant

    International Nuclear Information System (INIS)

    Ham, Dong Han

    2007-02-01

    This study developed a model-based method for evaluating task complexity and examined ways of evaluating the complexity of tasks designed for abnormal situations and daily task situations in NPPs. The main results of this study can be summarised as follows. First, this study developed a conceptual framework for studying complexity factors and a model of complexity factors that classifies complexity factors according to the types of knowledge that human operators use. Second, this study developed a more practical model of task complexity factors and identified twenty-one complexity factors based on the model. The model emphasizes that a task is a system to be designed and that its complexity has several dimensions. Third, we developed a method of identifying task complexity factors and evaluating task complexity qualitatively based on the developed model of task complexity factors. This method can be widely used in various task situations. Fourth, this study examined the applicability of TACOM to abnormal situations and daily task situations, such as maintenance, and confirmed that it can be reasonably used in those situations. Fifth, we developed application examples to demonstrate the use of the theoretical results of this study. Lastly, this study reinterpreted well-known principles for designing information displays in NPPs in terms of task complexity and suggested a way of evaluating the conceptual design of displays analytically by using the concept of task complexity. All of the results of this study will be used as a basis when evaluating the complexity of tasks designed in procedures or information displays and when designing ways of improving human performance in NPPs

  1. Understanding the implementation of complex interventions in health care: the normalization process model

    Directory of Open Access Journals (Sweden)

    Rogers Anne

    2007-09-01

    Full Text Available Abstract Background The Normalization Process Model is a theoretical model that assists in explaining the processes by which complex interventions become routinely embedded in health care practice. It offers a framework for process evaluation and also for comparative studies of complex interventions. It focuses on the factors that promote or inhibit the routine embedding of complex interventions in health care practice. Methods A formal theory structure is used to define the model, and its internal causal relations and mechanisms. The model is broken down to show that it is consistent and adequate in generating accurate description, systematic explanation, and the production of rational knowledge claims about the workability and integration of complex interventions. Results The model explains the normalization of complex interventions by reference to four factors demonstrated to promote or inhibit the operationalization and embedding of complex interventions (interactional workability, relational integration, skill-set workability, and contextual integration). Conclusion The model is consistent and adequate. Repeated calls for theoretically sound process evaluations in randomized controlled trials of complex interventions, and policy-makers who call for a proper understanding of implementation processes, emphasize the value of conceptual tools like the Normalization Process Model.

  2. Complexity, parameter sensitivity and parameter transferability in the modelling of floodplain inundation

    Science.gov (United States)

    Bates, P. D.; Neal, J. C.; Fewtrell, T. J.

    2012-12-01

    In this paper we consider two related questions. First, we address the issue of how much physical complexity is necessary in a model in order to simulate floodplain inundation to within validation data error. This is achieved through development of a single-code/multiple-physics hydraulic model (LISFLOOD-FP) where different degrees of complexity can be switched on or off. Different configurations of this code are applied to four benchmark test cases, and compared to the results of a number of industry-standard models. Second, we address the issue of how parameter sensitivity and transferability change with increasing complexity, using numerical experiments with models of different physical and geometric intricacy. Hydraulic models are a good example system with which to address such generic modelling questions as: (1) they have a strong physical basis; (2) there is only one set of equations to solve; (3) they require only topography and boundary conditions as input data; and (4) they typically require only a single free parameter, namely boundary friction. In terms of the complexity required, we show that for the problem of sub-critical floodplain inundation a number of codes of different dimensionality and resolution can be found to fit uncertain model validation data equally well, and that in this situation Occam's razor emerges as a useful logic to guide model selection. We also find that model skill usually improves more rapidly with increases in model spatial resolution than with increases in physical complexity, and that standard approaches to testing hydraulic models against laboratory data or analytical solutions may fail to identify this important fact. Lastly, we find that in benchmark testing studies significant differences can exist between codes with identical numerical solution techniques as a result of auxiliary choices regarding the specifics of model implementation that are frequently unreported by code developers. As a consequence, making sound
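
    As one concrete example of an intermediate-complexity formulation, a semi-implicit "inertial" flux update of the kind used in some LISFLOOD-FP configurations can be written as follows (a one-interface sketch under assumed values; variable names and numbers are illustrative, not the production code):

    ```python
    G = 9.81  # gravitational acceleration, m/s^2

    def inertial_flux(q, h_flow, slope, n_manning, dt):
        """Semi-implicit update of unit-width discharge q [m^2/s] at one interface."""
        num = q - G * h_flow * dt * slope                       # inertia + gravity
        den = 1.0 + G * h_flow * dt * n_manning ** 2 * abs(q) / h_flow ** (10.0 / 3.0)
        return num / den                                        # implicit friction term

    q = 0.0
    for _ in range(100):   # march 100 one-second steps toward steady flow
        q = inertial_flux(q, h_flow=2.0, slope=1e-4, n_manning=0.035, dt=1.0)
    print(q)
    ```

    Manning's n here is the single free friction parameter referred to above, which is what makes these codes such clean testbeds for sensitivity and transferability experiments.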

  3. Modeling the propagation of mobile malware on complex networks

    Science.gov (United States)

    Liu, Wanping; Liu, Chao; Yang, Zheng; Liu, Xiaoyang; Zhang, Yihao; Wei, Zuxue

    2016-08-01

    In this paper, the spreading behavior of malware across mobile devices is addressed. By introducing complex networks to model mobile networks, which follows the power-law degree distribution, a novel epidemic model for mobile malware propagation is proposed. The spreading threshold that guarantees the dynamics of the model is calculated. Theoretically, the asymptotic stability of the malware-free equilibrium is confirmed when the threshold is below the unity, and the global stability is further proved under some sufficient conditions. The influences of different model parameters as well as the network topology on malware propagation are also analyzed. Our theoretical studies and numerical simulations show that networks with higher heterogeneity conduce to the diffusion of malware, and complex networks with lower power-law exponents benefit malware spreading.
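
    The heterogeneity effect reported here can be illustrated with the generic degree-based mean-field threshold λ_c = ⟨k⟩/⟨k²⟩ (a standard result for epidemic models on heterogeneous networks; the paper derives its own model-specific threshold):

    ```python
    import numpy as np

    def epidemic_threshold(gamma, k_min=2, k_max=1000):
        """lambda_c = <k> / <k^2> for a truncated power-law degree distribution."""
        k = np.arange(k_min, k_max + 1, dtype=float)
        p = k ** (-gamma)
        p /= p.sum()
        return (p @ k) / (p @ k ** 2)

    for g in (2.1, 2.5, 3.0, 3.5):
        print(f"gamma = {g}: lambda_c ~ {epidemic_threshold(g):.4f}")
    # Lower exponents mean heavier-tailed (more heterogeneous) networks and a
    # smaller threshold -- consistent with heterogeneity aiding malware diffusion.
    ```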

  4. Nostradamus 2014 prediction, modeling and analysis of complex systems

    CERN Document Server

    Suganthan, Ponnuthurai; Chen, Guanrong; Snasel, Vaclav; Abraham, Ajith; Rössler, Otto

    2014-01-01

    The prediction of the behavior of complex systems and the analysis and modeling of their structure are vitally important problems in engineering, in the economy and in science generally today. Examples of such systems can be seen in the world around us (including our bodies) and of course in almost every scientific discipline, including such “exotic” domains as the earth's atmosphere, turbulent fluids, economics (exchange rates and stock markets), population growth, physics (control of plasma), information flow in social networks and its dynamics, chemistry and complex networks. To understand such complex dynamics, which often exhibit strange behavior, and to use it in research or industrial applications, it is paramount to create models of it. For this purpose there exists a rich spectrum of methods, from classical ones such as ARMA models or the Box-Jenkins method to modern ones like evolutionary computation, neural networks, fuzzy logic, geometry and deterministic chaos, amongst others. This proceedings book is a collection of accepted ...

  5. Glass Durability Modeling, Activated Complex Theory (ACT)

    International Nuclear Information System (INIS)

    CAROL, JANTZEN

    2005-01-01

    The most important requirement for high-level waste glass acceptance for disposal in a geological repository is the chemical durability, expressed as a glass dissolution rate. During the early stages of glass dissolution in near-static conditions that represent a repository disposal environment, a gel layer resembling a membrane forms on the glass surface through which ions exchange between the glass and the leachant. The hydrated gel layer exhibits acid/base properties which are manifested as the pH dependence of the thickness and nature of the gel layer. The gel layer has been found to age into either clay mineral assemblages or zeolite mineral assemblages. The formation of one phase preferentially over the other has been experimentally related to changes in the pH of the leachant and to the relative amounts of Al3+ and Fe3+ in a glass. The formation of clay mineral assemblages on the leached glass surface layers (lower pH and Fe3+-rich glasses) causes the dissolution rate to slow to a long-term steady-state rate. The formation of zeolite mineral assemblages (higher pH and Al3+-rich glasses) on leached glass surface layers causes the dissolution rate to increase and return to the initial high forward rate. The return to the forward dissolution rate is undesirable for the long-term performance of glass in a disposal environment. An investigation into the role of glass stoichiometry, in terms of the quasi-crystalline mineral species in a glass, has shown that the chemistry and structure in the parent glass appear to control the activated surface complexes that form in the leached layers, and these mineral complexes (some Fe3+-rich and some Al3+-rich) play a role in whether clays or zeolites are the dominant species formed on the leached glass surface. The chemistry and structure, in terms of the Q distributions of the parent glass, are well represented by the atomic ratios of the glass-forming components. Thus, glass dissolution modeling using simple

  6. Stability of rotor systems: A complex modelling approach

    DEFF Research Database (Denmark)

    Kliem, Wolfhard; Pommer, Christian; Stoustrup, Jakob

    1998-01-01

    The dynamics of a large class of rotor systems can be modelled by a linearized complex matrix differential equation of second order, $M\ddot{z} + (D + iG)\dot{z} + (K + iN)z = 0$, where the system matrices M, D, G, K and N are real symmetric. Moreover, M and K are assumed to be positive definite and D ... approach applying bounds of appropriate Rayleigh quotients. The rotor systems tested are: a simple Laval rotor, a Laval rotor with additional elasticity and damping in the bearings, and a number of rotor systems with complex symmetric 4 x 4 randomly generated matrices.

  7. Risk Modeling of Interdependent Complex Systems of Systems: Theory and Practice.

    Science.gov (United States)

    Haimes, Yacov Y

    2018-01-01

    The emergence of the complexity characterizing our systems of systems (SoS) requires a reevaluation of the way we model, assess, manage, communicate, and analyze the risk thereto. Current models for risk analysis of emergent complex SoS are insufficient because too often they rely on the same risk functions and models used for single systems. These models commonly fail to incorporate the complexity derived from the networks of interdependencies and interconnectedness (I-I) characterizing SoS. There is a need to reevaluate currently practiced risk analysis to respond to this reality by examining, and thus comprehending, what makes emergent SoS complex. The key to evaluating the risk to SoS lies in understanding the genesis of characterizing I-I of systems manifested through shared states and other essential entities within and among the systems that constitute SoS. The term "essential entities" includes shared decisions, resources, functions, policies, decisionmakers, stakeholders, organizational setups, and others. This undertaking can be accomplished by building on state-space theory, which is fundamental to systems engineering and process control. This article presents a theoretical and analytical framework for modeling the risk to SoS with two case studies performed with the MITRE Corporation and demonstrates the pivotal contributions made by shared states and other essential entities to modeling and analysis of the risk to complex SoS. A third case study highlights the multifarious representations of SoS, which require harmonizing the risk analysis process currently applied to single systems when applied to complex SoS. © 2017 Society for Risk Analysis.

  8. Complexity effects in choice experiments-based models

    NARCIS (Netherlands)

    Dellaert, B.G.C.; Donkers, B.; van Soest, A.H.O.

    2012-01-01

    Many firms rely on choice experiment–based models to evaluate future marketing actions under various market conditions. This research investigates choice complexity (i.e., number of alternatives, number of attributes, and utility similarity between the most attractive alternatives) and individual

  9. A multi-element cosmological model with a complex space-time topology

    Science.gov (United States)

    Kardashev, N. S.; Lipatova, L. N.; Novikov, I. D.; Shatskiy, A. A.

    2015-02-01

    Wormhole models with a complex topology having one entrance and two exits into the same space-time of another universe are considered, as well as models with two entrances from the same space-time and one exit to another universe. These models are used to build a model of a multi-sheeted universe (a multi-element model of the "Multiverse") with a complex topology. Spherical symmetry is assumed in all the models. A Reissner-Nordström black-hole model having no singularity beyond the horizon is constructed. The strength of the central singularity of the black hole is analyzed.

  10. Systems Engineering Metrics: Organizational Complexity and Product Quality Modeling

    Science.gov (United States)

    Mog, Robert A.

    1997-01-01

    Innovative organizational complexity and product quality models applicable to performance metrics for NASA-MSFC's Systems Analysis and Integration Laboratory (SAIL) missions and objectives are presented. An intensive research effort focuses on the synergistic combination of stochastic process modeling, nodal and spatial decomposition techniques, organizational and computational complexity, systems science and metrics, chaos, and proprietary statistical tools for accelerated risk assessment. This is followed by the development of a preliminary model, which is uniquely applicable and robust for quantitative purposes. Exercise of the preliminary model using a generic system hierarchy and the AXAF-I architectural hierarchy is provided. The Kendall test for positive dependence provides an initial verification and validation of the model. Finally, the research and development of the innovation is revisited, prior to peer review. This research and development effort results in near-term, measurable SAIL organizational and product quality methodologies, enhanced organizational risk assessment and evolutionary modeling results, and improved statistical quantification of SAIL productivity interests.

  11. Post-closure biosphere assessment modelling: comparison of complex and more stylised approaches.

    Science.gov (United States)

    Walke, Russell C; Kirchner, Gerald; Xu, Shulan; Dverstorp, Björn

    2015-10-01

    Geological disposal facilities are the preferred option for high-level radioactive waste, due to their potential to provide isolation from the surface environment (biosphere) on very long timescales. Assessments need to strike a balance between stylised models and more complex approaches that draw more extensively on site-specific information. This paper explores the relative merits of complex versus more stylised biosphere models in the context of a site-specific assessment. The more complex biosphere modelling approach was developed by the Swedish Nuclear Fuel and Waste Management Co (SKB) for the Forsmark candidate site for a spent nuclear fuel repository in Sweden. SKB's approach is built on a landscape development model, whereby radionuclide releases to distinct hydrological basins/sub-catchments (termed 'objects') are represented as they evolve through land rise and climate change. Each of seventeen of these objects is represented with more than 80 site-specific parameters, about 22 of which are time-dependent, resulting in over 5000 input values per object. The more stylised biosphere models developed for this study represent releases to individual ecosystems without environmental change and include the most plausible transport processes. In the context of regulatory review of the landscape modelling approach adopted in the SR-Site assessment in Sweden, the more stylised representation has helped to build understanding of the more complex modelling approaches by providing bounding results, checking the reasonableness of the more complex modelling, highlighting uncertainties introduced through conceptual assumptions and helping to quantify the conservatisms involved. The more stylised biosphere models are also shown to be capable of reproducing the results of more complex approaches. A major recommendation is that biosphere assessments need to justify the degree of complexity in modelling approaches as well as simplifying and conservative assumptions. In light of

  12. DEVELOPING INDUSTRIAL ROBOT SIMULATION MODEL TUR10-K USING “UNIVERSAL MECHANISM” SOFTWARE COMPLEX

    Directory of Open Access Journals (Sweden)

    Vadim Vladimirovich Chirkov

    2018-02-01

    Full Text Available Manipulation robots are complex spatial mechanical systems with five or six degrees of freedom, and sometimes more. For this reason, modeling the movement of manipulation robots, even in the kinematic formulation, is a complex mathematical task. Moving from kinematic to dynamic modeling additionally requires accounting for the inertial properties of the object being modeled. In this case, analytically constructing a mathematical model of such a complex object as a manipulation robot becomes practically impossible. Therefore, special computer-aided engineering systems, called CAE systems, are used for modeling complex mechanical systems. The purpose of the paper is to construct a simulation model of a complex mechanical system, the industrial robot TUR10-K, in order to obtain its dynamic characteristics. Developing such models reduces the complexity of the design process for complex systems and yields the necessary characteristics. Purpose. Developing a simulation model of the industrial robot TUR10-K and obtaining the dynamic characteristics of the mechanism. Methodology: the article uses a computer simulation method. Results: a simulation model of the robot and its dynamic characteristics are obtained. Practical implications: the results can be used in the design of mechanical systems and in various simulation models.

  13. System Testability Analysis for Complex Electronic Devices Based on Multisignal Model

    International Nuclear Information System (INIS)

    Long, B; Tian, S L; Huang, J G

    2006-01-01

    It is necessary to consider system testability problems for electronic devices during their early design phase, because modern electronic devices are becoming smaller and more highly integrated while their function and structure grow more complex. The multisignal model, which combines the advantages of structure models and dependency models, is used to describe the fault dependency relationships of complex electronic devices, and the main testability indexes (including the optimal test program, fault detection rate, fault isolation rate, etc.) used to evaluate testability, together with the corresponding algorithms, are given. The system testability analysis process is illustrated for a USB-GPIB interface circuit with the TEAMS toolbox. The experimental results show that the modelling method is simple, the computation speed is rapid, and this method is significant for improving the diagnostic capability of complex electronic devices
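
    The flavor of the testability indexes can be conveyed with a plain dependency-matrix calculation (the multisignal model layers signal semantics on top of this; the 4x3 matrix below is made up for illustration):

    ```python
    import numpy as np
    from collections import Counter

    # D[i, j] = 1 if test j can observe fault i (hypothetical dependency matrix).
    D = np.array([[1, 0, 1],
                  [1, 1, 0],
                  [0, 1, 0],
                  [0, 0, 0]])   # fault 3 is covered by no test

    detected = D.any(axis=1)
    fdr = detected.mean()                          # fault detection rate
    signatures = [tuple(row) for row in D[detected]]
    counts = Counter(signatures)
    # A detected fault is isolable if no other fault shares its test signature.
    fir = sum(counts[s] == 1 for s in signatures) / len(signatures)
    print(f"FDR = {fdr:.2f}, FIR = {fir:.2f}")
    ```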

  14. New approaches in agent-based modeling of complex financial systems

    Science.gov (United States)

    Chen, Ting-Ting; Zheng, Bo; Li, Yan; Jiang, Xiong-Fei

    2017-12-01

    Agent-based modeling is a powerful simulation technique for understanding the collective behavior and microscopic interactions in complex financial systems. Recently, it was suggested that the key parameters of agent-based models be determined from empirical data instead of being set artificially. We first review several agent-based models and the new approaches to determining the key model parameters from historical market data. Based on agents' behaviors with heterogeneous personal preferences and interactions, these models are successful in explaining the microscopic origin of the temporal and spatial correlations of financial markets. We then present a novel paradigm combining big-data analysis with agent-based modeling. Specifically, from internet query and stock market data, we extract the information driving forces and develop an agent-based model to simulate the dynamic behaviors of complex financial systems.
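
    A deliberately generic herding-agent sketch (not any one of the reviewed models, and with made-up coupling values) shows the basic simulation loop such models share:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_agents, n_steps, herding = 1000, 2000, 0.8
    ret = np.zeros(n_steps)
    for t in range(1, n_steps):
        private = rng.normal(size=n_agents)                           # idiosyncratic signals
        decisions = np.sign(private + herding * np.sign(ret[t - 1]))  # imitate the last move
        ret[t] = decisions.mean() + 0.1 * rng.normal()                # return ~ net demand
    # Stylized facts such as heavy tails can then be checked, e.g. via kurtosis:
    excess_kurtosis = ((ret - ret.mean()) ** 4).mean() / ret.var() ** 2 - 3
    print("excess kurtosis:", round(float(excess_kurtosis), 2))
    ```

    In the data-driven variants reviewed above, couplings like the herding strength are calibrated against historical market or query data rather than set by hand.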

  15. Mathematical modelling of complex contagion on clustered networks

    Science.gov (United States)

    O'sullivan, David J.; O'Keeffe, Gary; Fennell, Peter; Gleeson, James

    2015-09-01

    The spreading of behavior, such as the adoption of a new innovation, is influenced by the structure of the social networks that interconnect the population. In the experiments of Centola (Science, 2010), adoption of new behavior was shown to spread further and faster across clustered-lattice networks than across corresponding random networks. This implies that the “complex contagion” effects of social reinforcement are important in such diffusion, in contrast to “simple” contagion models of disease spread, which predict that epidemics would grow more efficiently on random networks than on clustered networks. To accurately model complex contagion on clustered networks remains a challenge because the usual assumptions (e.g. of mean-field theory) regarding tree-like networks are invalidated by the presence of triangles in the network; the triangles are, however, crucial to the social reinforcement mechanism, which posits an increased probability of a person adopting behavior that has been adopted by two or more neighbors. In this paper we modify the analytical approach that was introduced by Hebert-Dufresne et al. (Phys. Rev. E, 2010) to study disease spread on clustered networks. We show how the approximation method can be adapted to a complex contagion model, and confirm the accuracy of the method with numerical simulations. The analytical results of the model enable us to quantify the level of social reinforcement that is required to observe—as in Centola’s experiments—faster diffusion on clustered topologies than on random networks.
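
    A simulation sketch of the qualitative effect (not the authors' analytical machinery) is easy to set up with networkx, seeding one full neighbourhood and requiring at least two adopted neighbours for adoption:

    ```python
    import networkx as nx

    def spread(g, seed_node=0, threshold=2):
        """Run a threshold-2 complex contagion; return the final adopted fraction."""
        adopted = {seed_node} | set(g[seed_node])   # seed a whole neighbourhood
        while True:
            new = {v for v in g if v not in adopted
                   and sum(u in adopted for u in g[v]) >= threshold}
            if not new:
                return len(adopted) / g.number_of_nodes()
            adopted |= new

    n, k = 1000, 6
    clustered = nx.watts_strogatz_graph(n, k, p=0.0)        # ring lattice: many triangles
    random_g = nx.gnm_random_graph(n, n * k // 2, seed=1)   # degree-matched random graph
    print("clustered:", spread(clustered), "random:", spread(random_g))
    # The lattice sustains the contagion (adjacent adopters reinforce each other),
    # while on the random graph it typically stalls near the seed neighbourhood.
    ```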

  16. Mathematical modelling of complex contagion on clustered networks

    Directory of Open Access Journals (Sweden)

    David J. P. O'Sullivan

    2015-09-01

    Full Text Available The spreading of behavior, such as the adoption of a new innovation, is influenced by the structure of the social networks that interconnect the population. In the experiments of Centola (Science, 2010), adoption of new behavior was shown to spread further and faster across clustered-lattice networks than across corresponding random networks. This implies that the complex contagion effects of social reinforcement are important in such diffusion, in contrast to simple contagion models of disease spread, which predict that epidemics would grow more efficiently on random networks than on clustered networks. To accurately model complex contagion on clustered networks remains a challenge because the usual assumptions (e.g. of mean-field theory) regarding tree-like networks are invalidated by the presence of triangles in the network; the triangles are, however, crucial to the social reinforcement mechanism, which posits an increased probability of a person adopting behavior that has been adopted by two or more neighbors. In this paper we modify the analytical approach that was introduced by Hebert-Dufresne et al. (Phys. Rev. E, 2010) to study disease spread on clustered networks. We show how the approximation method can be adapted to a complex contagion model, and confirm the accuracy of the method with numerical simulations. The analytical results of the model enable us to quantify the level of social reinforcement that is required to observe—as in Centola’s experiments—faster diffusion on clustered topologies than on random networks.

  17. Uncertainty and validation. Effect of model complexity on uncertainty estimates

    Energy Technology Data Exchange (ETDEWEB)

    Elert, M. [Kemakta Konsult AB, Stockholm (Sweden)] [ed.

    1996-09-01

    In the Model Complexity subgroup of BIOMOVS II, models of varying complexity have been applied to the problem of downward transport of radionuclides in soils. A scenario describing a case of surface contamination of a pasture soil was defined. Three different radionuclides with different environmental behavior and radioactive half-lives were considered: Cs-137, Sr-90 and I-129. The intention was to give a detailed specification of the parameters required by different kinds of model, together with reasonable values for the parameter uncertainty. A total of seven modelling teams participated in the study, using 13 different models. Four of the modelling groups performed uncertainty calculations using nine different modelling approaches. The models used range in complexity from analytical solutions of a 2-box model using annual average data to numerical models coupling hydrology and transport using data varying on a daily basis. The complex models needed to consider all aspects of radionuclide transport in a soil with a variable hydrology are often impractical to use in safety assessments. Instead simpler models, often box models, are preferred. The comparison of predictions made with the complex models and the simple models for this scenario shows that the predictions in many cases are very similar, e.g. in the predictions of the evolution of the root zone concentration. However, in other cases differences of many orders of magnitude can appear. One example is the prediction of the flux to the groundwater of radionuclides being transported through the soil column. Some issues that have come into focus in this study: There are large differences in the predicted soil hydrology and, as a consequence, also in the radionuclide transport, which suggests that there are large uncertainties in the calculation of effective precipitation and evapotranspiration. The approach used for modelling the water transport in the root zone has an impact on the predictions of the decline in root

  18. Advances in dynamic network modeling in complex transportation systems

    CERN Document Server

    Ukkusuri, Satish V

    2013-01-01

    This book focuses on the latest in dynamic network modeling, including route guidance and traffic control in transportation systems and other complex infrastructure networks. Covers dynamic traffic assignment, flow modeling, mobile sensor deployment and more.

  19. Developing an agent-based model on how different individuals solve complex problems

    Directory of Open Access Journals (Sweden)

    Ipek Bozkurt

    2015-01-01

    Full Text Available Purpose: Research that focuses on the emotional, mental, behavioral and cognitive capabilities of individuals has been abundant within disciplines such as psychology, sociology, and anthropology, among others. However, when facing complex problems, a new perspective for understanding individuals is necessary. The main purpose of this paper is to develop an agent-based model and simulation to gain understanding of the decision-making and problem-solving abilities of individuals. Design/Methodology/approach: The micro-level analysis modeling and simulation paradigm of Agent-Based Modeling is used. Through the use of Agent-Based Modeling, insight is gained on how different individuals with different profiles deal with complex problems. Using previous literature from different bodies of knowledge, established theories and certain assumptions as input parameters, a model is built and executed through a computer simulation. Findings: The results indicate that individuals with certain profiles have better capabilities to deal with complex problems. Moderate profiles could solve the entire complex problem, whereas profiles within extreme conditions could not. This indicates that having a strong predisposition is not the ideal way to approach complex problems, and there should always be a component from the other perspective. The probability that an individual may use the capabilities provided by the opposite predisposition proves to be a useful option. Originality/value: The originality of the present research stems from how individuals are profiled, and the model and simulation that are built to understand how they solve complex problems. The development of the agent-based model adds value to the existing body of knowledge within both the social sciences and modeling and simulation.

  20. Between Complexity and Parsimony: Can Agent-Based Modelling Resolve the Trade-off

    DEFF Research Database (Denmark)

    Nielsen, Helle Ørsted; Malawska, Anna Katarzyna

    2013-01-01

    ... to BR-based policy studies would be to couple research on bounded rationality with agent-based modeling. Agent-based models (ABMs) are computational models for simulating the behavior and interactions of any number of decision makers in a dynamic system. Agent-based models are better suited than are general equilibrium models for capturing behavior patterns of complex systems. ABMs may have the potential to represent complex systems without oversimplifying them. At the same time, research in bounded rationality and behavioral economics has already yielded many insights that could inform the modeling ... While Herbert Simon espoused development of general models of behavior, he also strongly advocated that these models be based on realistic assumptions about humans and therefore reflect the complexity of human cognition and social systems (Simon 1997). Hence, the model of bounded rationality

  1. Complex groundwater flow systems as traveling agent models

    Directory of Open Access Journals (Sweden)

    Oliver López Corona

    2014-10-01

    Full Text Available Analyzing field data from pumping tests, we show that, as with many other natural phenomena, groundwater flow exhibits complex dynamics described by a 1/f power spectrum. This result is theoretically studied within an agent perspective. Using a traveling agent model, we prove that this statistical behavior emerges when the medium is complex. Some heuristic reasoning is provided to justify both spatial and dynamic complexity, as the result of the superposition of an infinite number of stochastic processes. Moreover, we show that this implies that non-Kolmogorovian probability is needed for its study, and provide a set of new partial differential equations for groundwater flow.
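
    As a rough illustration of the spectral test described above, the following sketch (not from the paper) estimates the exponent beta in S(f) ~ 1/f^beta from a time series via a periodogram and a log-log slope fit; the synthetic Brownian-like input (for which beta is near 2) merely stands in for field drawdown data:

        import numpy as np

        rng = np.random.default_rng(0)
        # Synthetic stand-in for a drawdown time series (the paper uses field data).
        x = np.cumsum(rng.standard_normal(4096))  # Brownian-like, S(f) ~ 1/f^2

        # Periodogram via the FFT, dropping the zero-frequency bin.
        f = np.fft.rfftfreq(x.size, d=1.0)[1:]
        psd = (np.abs(np.fft.rfft(x - x.mean())) ** 2)[1:] / x.size

        # Slope of log S(f) versus log f estimates the spectral exponent.
        beta = -np.polyfit(np.log(f), np.log(psd), 1)[0]
        print(f"estimated spectral exponent beta = {beta:.2f}")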

  2. Complex data modeling and computationally intensive methods for estimation and prediction

    CERN Document Server

    Secchi, Piercesare; Advances in Complex Data Modeling and Computational Methods in Statistics

    2015-01-01

    The book is addressed to statisticians working at the forefront of the statistical analysis of complex and high dimensional data and offers a wide variety of statistical models, computer intensive methods and applications: network inference from the analysis of high dimensional data; new developments for bootstrapping complex data; regression analysis for measuring the downsize reputational risk; statistical methods for research on the human genome dynamics; inference in non-euclidean settings and for shape data; Bayesian methods for reliability and the analysis of complex data; methodological issues in using administrative data for clinical and epidemiological research; regression models with differential regularization; geostatistical methods for mobility analysis through mobile phone data exploration. This volume is the result of a careful selection among the contributions presented at the conference "S.Co.2013: Complex data modeling and computationally intensive methods for estimation and prediction" held...

  3. Modeling Air-Quality in Complex Terrain Using Mesoscale and ...

    African Journals Online (AJOL)

    Air-quality in a complex terrain (Colorado-River-Valley/Grand-Canyon Area, Southwest U.S.) is modeled using a higher-order closure mesoscale model and a higher-order closure dispersion model. Non-reactive tracers have been released in the Colorado-River valley, during winter and summer 1992, to study the ...

  4. a Range Based Method for Complex Facade Modeling

    Science.gov (United States)

    Adami, A.; Fregonese, L.; Taffurelli, L.

    2011-09-01

    3d modelling of Architectural Heritage does not follow a very well-defined way, but it goes through different algorithms and digital forms according to the shape complexity of the object, to the main goal of the representation and to the starting data. Even if the process starts from the same data, such as a pointcloud acquired by laser scanner, there are different possibilities to realize a digital model. In particular we can choose between two different attitudes: the mesh and the solid model. In the first case the complexity of architecture is represented by a dense net of triangular surfaces which approximates the real surface of the object. In the other (opposite) case the 3d digital model can be realized by the use of simple geometrical shapes, by the use of sweeping algorithms and Boolean operations. Obviously these two models are not the same and each one is characterized by some peculiarities concerning the way of modelling (the choice of a particular triangulation algorithm or the quasi-automatic modelling by known shapes) and the final results (a more detailed and complex mesh versus an approximate and more simple solid model). Usually the expected final representation and the possibility of publishing lead to one way or the other. In this paper we want to suggest a semiautomatic process to build 3d digital models of the facades of complex architecture to be used for example in city models or in other large scale representations. This way of modelling also guarantees small files to be published on the web or to be transmitted. The modelling procedure starts from laser scanner data which can be processed in the well-known way. Usually more than one scan is necessary to describe a complex architecture and to avoid shadows on the facades. These have to be registered in a single reference system by the use of targets which are surveyed by topography and then filtered in order to obtain a well controlled and homogeneous point cloud of

  5. A RANGE BASED METHOD FOR COMPLEX FACADE MODELING

    Directory of Open Access Journals (Sweden)

    A. Adami

    2012-09-01

    Full Text Available 3d modelling of Architectural Heritage does not follow a very well-defined way, but it goes through different algorithms and digital forms according to the shape complexity of the object, to the main goal of the representation and to the starting data. Even if the process starts from the same data, such as a pointcloud acquired by laser scanner, there are different possibilities to realize a digital model. In particular we can choose between two different attitudes: the mesh and the solid model. In the first case the complexity of architecture is represented by a dense net of triangular surfaces which approximates the real surface of the object. In the other (opposite) case the 3d digital model can be realized by the use of simple geometrical shapes, by the use of sweeping algorithms and Boolean operations. Obviously these two models are not the same and each one is characterized by some peculiarities concerning the way of modelling (the choice of a particular triangulation algorithm or the quasi-automatic modelling by known shapes) and the final results (a more detailed and complex mesh versus an approximate and more simple solid model). Usually the expected final representation and the possibility of publishing lead to one way or the other. In this paper we want to suggest a semiautomatic process to build 3d digital models of the facades of complex architecture to be used for example in city models or in other large scale representations. This way of modelling also guarantees small files to be published on the web or to be transmitted. The modelling procedure starts from laser scanner data which can be processed in the well-known way. Usually more than one scan is necessary to describe a complex architecture and to avoid shadows on the facades. These have to be registered in a single reference system by the use of targets which are surveyed by topography and then filtered in order to obtain a well controlled and

  6. A cognitive model for software architecture complexity

    NARCIS (Netherlands)

    Bouwers, E.; Lilienthal, C.; Visser, J.; Van Deursen, A.

    2010-01-01

    Evaluating the complexity of the architecture of a softwaresystem is a difficult task. Many aspects have to be considered to come to a balanced assessment. Several architecture evaluation methods have been proposed, but very few define a quality model to be used during the evaluation process. In

  7. Reduced Complexity Volterra Models for Nonlinear System Identification

    Directory of Open Access Journals (Sweden)

    Hacıoğlu Rıfat

    2001-01-01

    Full Text Available A broad class of nonlinear systems and filters can be modeled by the Volterra series representation. However, its practical use in nonlinear system identification is sometimes limited due to the large number of parameters associated with the Volterra filter's structure. The parametric complexity also complicates design procedures based upon such a model. This limitation for system identification is addressed in this paper using a Fixed Pole Expansion Technique (FPET) within the Volterra model structure. The FPET approach employs orthonormal basis functions derived from fixed (real or complex) pole locations to expand the Volterra kernels and reduce the number of estimated parameters. The ability of FPET to considerably reduce the number of estimated parameters is demonstrated by a digital satellite channel example, in which we use the proposed method to identify the channel dynamics. Furthermore, a gradient-descent procedure that adaptively selects the pole locations in the FPET structure is developed in the paper.
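
    The FPET itself is not reproduced here; the sketch below illustrates the general idea under the simplifying assumption of a single real fixed pole (a discrete Laguerre basis): filter the input through a short orthonormal basis bank, form linear and pairwise-product regressors for the first- and second-order kernels, and estimate the reduced parameter set by least squares. The toy channel and all values are assumptions.

        import numpy as np
        from scipy.signal import lfilter

        def laguerre_outputs(u, pole, n_basis):
            """Filter input u through discrete Laguerre filters sharing one real pole."""
            a, g = pole, np.sqrt(1.0 - pole ** 2)
            outs = [lfilter([g], [1.0, -a], u)]               # first-order section
            for _ in range(n_basis - 1):
                outs.append(lfilter([-a, 1.0], [1.0, -a], outs[-1]))  # all-pass section
            return np.array(outs)

        rng = np.random.default_rng(1)
        u = rng.standard_normal(2000)
        # Toy nonlinear channel: linear memory plus a mild quadratic term.
        y = lfilter([1.0, 0.5], [1.0], u) + 0.2 * lfilter([1.0, 0.3], [1.0], u) ** 2

        L = laguerre_outputs(u, pole=0.4, n_basis=4)          # shape (4, N)
        lin = L.T                                             # linear-kernel regressors
        quad = np.stack([L[i] * L[j] for i in range(4) for j in range(i, 4)], axis=1)
        Phi = np.hstack([lin, quad])                          # 14 parameters, not a full kernel
        theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        print("residual RMS:", np.sqrt(np.mean((y - Phi @ theta) ** 2)))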

  8. Infinite Multiple Membership Relational Modeling for Complex Networks

    DEFF Research Database (Denmark)

    Mørup, Morten; Schmidt, Mikkel Nørgaard; Hansen, Lars Kai

    Learning latent structure in complex networks has become an important problem fueled by many types of networked data originating from practically all fields of science. In this paper, we propose a new non-parametric Bayesian multiple-membership latent feature model for networks. Contrary to existing...... multiple-membership models that scale quadratically in the number of vertices, the proposed model scales linearly in the number of links, admitting multiple-membership analysis in large scale networks. We demonstrate a connection between the single membership relational model and multiple membership models and show...

  9. Some Comparisons of Complexity in Dictionary-Based and Linear Computational Models

    Czech Academy of Sciences Publication Activity Database

    Gnecco, G.; Kůrková, Věra; Sanguineti, M.

    2011-01-01

    Roč. 24, č. 2 (2011), s. 171-182 ISSN 0893-6080 R&D Projects: GA ČR GA201/08/1744 Grant - others: CNR - AV ČR project 2010-2012(XE) Complexity of Neural-Network and Kernel Computational Models Institutional research plan: CEZ:AV0Z10300504 Keywords: linear approximation schemes * variable-basis approximation schemes * model complexity * worst-case errors * neural networks * kernel models Subject RIV: IN - Informatics, Computer Science Impact factor: 2.182, year: 2011

  10. Applicability of surface complexation modelling in TVO's studies on sorption of radionuclides

    International Nuclear Information System (INIS)

    Carlsson, T.

    1994-03-01

    The report focuses on the possibility of applying surface complexation theories to the conditions at a potential repository site in Finland and of doing proper experimental work in order to determine the necessary constants for the models. The report provides background information on: (1) what type of experiments should be carried out in order to produce data for surface complexation modelling of sorption phenomena under potential Finnish repository conditions, and (2) how to design and properly perform such experiments in order to gather data, develop models, or both. The report does not describe in detail how proper surface complexation experiments or modelling should be carried out. The work contains several examples of information that may be valuable in both modelling and experimental work. (51 refs., 6 figs., 4 tabs.)

  11. Agent-Based and Macroscopic Modeling of the Complex Socio-Economic Systems

    Directory of Open Access Journals (Sweden)

    Aleksejus Kononovičius

    2013-08-01

    Full Text Available Purpose – The focus of this contribution is the correspondence between collective behavior and inter-individual interactions in complex socio-economic systems. Currently there is a wide selection of papers proposing various models for both collective behavior and inter-individual interactions in complex socio-economic systems, yet papers directly relating these two concepts are still quite rare. By studying this correspondence we discuss a cutting-edge approach to the modeling of complex socio-economic systems. Design/methodology/approach – The collective behavior is often modeled using stochastic and ordinary calculus, while the inter-individual interactions are modeled using agent-based models. In order to obtain the ideal model, one should start from one of these frameworks and build a bridge to reach the other. This is a formidable task if we consider the top-down approach, namely starting from the collective behavior and moving towards inter-individual interactions. The bottom-up approach also fails if complex inter-individual interaction models are considered, yet in this case we can start with simple models and increase the complexity as needed. Findings – The bottom-up approach, considering a simple agent-based herding model as a model for the inter-individual interactions, allows us to derive certain macroscopic models of complex socio-economic systems from the agent-based perspective. This provides interesting insights into the collective behavior patterns observed in complex socio-economic systems. Research limitations/implications – The simplicity of the agent-based herding model might be considered somewhat limiting, yet this simplicity implies that the model is highly universal. It reproduces universal features of social behavior and can be further extended to fit different socio-economic scenarios. Practical implications – Insights provided in this contribution might be used to modify existing
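
    A concrete instance of the kind of simple herding model meant here is Kirman's recruitment scheme; the sketch below (all rates assumed) shows the bottom-up starting point. When the idiosyncratic switching rate eps is small relative to the herding rate h, the population lingers near one-type states, the macroscopic signature of herding.

        import numpy as np

        rng = np.random.default_rng(2)
        N, k = 100, 50                  # agents; k currently of type A
        eps, h = 0.0002, 0.05           # idiosyncratic vs. herding switching rates

        frac = np.empty(50000)
        for t in range(frac.size):
            if rng.random() < k / N:    # picked agent is of type A
                if rng.random() < eps + h * (N - k) / (N - 1):
                    k -= 1              # recruited away by the other group
            else:
                if rng.random() < eps + h * k / (N - 1):
                    k += 1
            frac[t] = k / N

        print("time spent near one-type states:", np.mean((frac < 0.2) | (frac > 0.8)))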

  12. A low-complexity interacting multiple model filter for maneuvering target tracking

    KAUST Repository

    Khalid, Syed Safwan; Abrar, Shafayat

    2017-01-01

    In this work, we address the target tracking problem for a coordinate-decoupled Markovian jump-mean-acceleration based maneuvering mobility model. A novel low-complexity alternative to the conventional interacting multiple model (IMM) filter is proposed for this class of mobility models. The proposed tracking algorithm utilizes a bank of interacting filters where the interactions are limited to the mixing of the mean estimates, and it exploits a fixed off-line computed Kalman gain matrix for the entire filter bank. Consequently, the proposed filter does not require matrix inversions during on-line operation which significantly reduces its complexity. Simulation results show that the performance of the low-complexity proposed scheme remains comparable to that of the traditional (highly-complex) IMM filter. Furthermore, we derive analytical expressions that iteratively evaluate the transient and steady-state performance of the proposed scheme, and establish the conditions that ensure the stability of the proposed filter. The analytical findings are in close accordance with the simulated results.
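
    A minimal sketch of the central idea, assuming constant-velocity kinematics with mode-dependent mean accelerations: the gain K and innovation covariance S come from an off-line Riccati recursion, and on-line only the per-mode mean estimates are mixed, so no matrix inversion is needed. Matrices, rates and measurements below are assumptions, not the paper's values.

        import numpy as np

        dt = 1.0
        F = np.array([[1.0, dt], [0.0, 1.0]])   # position-velocity kinematics
        G = np.array([0.5 * dt ** 2, dt])       # how a mode's mean acceleration enters
        H = np.array([[1.0, 0.0]])              # position-only measurement
        Q, R = 0.01 * np.eye(2), np.array([[1.0]])

        # Off-line: iterate the Riccati recursion to a steady-state gain.
        P = np.eye(2)
        for _ in range(500):
            P = F @ P @ F.T + Q
            S = H @ P @ H.T + R
            K = P @ H.T / S[0, 0]
            P = (np.eye(2) - K @ H) @ P

        accels = [-0.5, 0.0, 0.5]               # mode-dependent mean accelerations
        M = len(accels)
        Pi = 0.85 * np.eye(M) + 0.05            # mode transition probabilities
        mu = np.full(M, 1.0 / M)                # mode probabilities
        means = np.zeros((M, 2))                # per-mode mean estimates

        for z in [0.9, 2.1, 3.6, 5.0]:          # toy position measurements
            w = Pi.T @ mu                       # predicted mode probabilities
            mixed = (Pi * mu[:, None]).T @ means / w[:, None]   # mix the means only
            for j, a in enumerate(accels):
                pred = F @ mixed[j] + G * a
                nu = z - (H @ pred)[0]                    # scalar innovation
                w[j] *= np.exp(-0.5 * nu ** 2 / S[0, 0])  # Gaussian likelihood weight
                means[j] = pred + K[:, 0] * nu            # fixed-gain update
            mu = w / w.sum()

        print("combined estimate:", mu @ means)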

  13. A low-complexity interacting multiple model filter for maneuvering target tracking

    KAUST Repository

    Khalid, Syed Safwan

    2017-01-22

    In this work, we address the target tracking problem for a coordinate-decoupled Markovian jump-mean-acceleration based maneuvering mobility model. A novel low-complexity alternative to the conventional interacting multiple model (IMM) filter is proposed for this class of mobility models. The proposed tracking algorithm utilizes a bank of interacting filters where the interactions are limited to the mixing of the mean estimates, and it exploits a fixed off-line computed Kalman gain matrix for the entire filter bank. Consequently, the proposed filter does not require matrix inversions during on-line operation which significantly reduces its complexity. Simulation results show that the performance of the low-complexity proposed scheme remains comparable to that of the traditional (highly-complex) IMM filter. Furthermore, we derive analytical expressions that iteratively evaluate the transient and steady-state performance of the proposed scheme, and establish the conditions that ensure the stability of the proposed filter. The analytical findings are in close accordance with the simulated results.

  14. Modeling of anaerobic digestion of complex substrates

    International Nuclear Information System (INIS)

    Keshtkar, A. R.; Abolhamd, G.; Meyssami, B.; Ghaforian, H.

    2003-01-01

    A structured mathematical model of the anaerobic conversion of complex organic materials in non-ideally mixed cyclic-batch reactors for biogas production has been developed. The model is based on multiple-reaction stoichiometry (enzymatic hydrolysis, acidogenesis, acetogenesis and methanogenesis), microbial growth kinetics, conventional material balances in the liquid and gas phases for a cyclic-batch reactor, liquid-gas interactions, liquid-phase equilibrium reactions and a simple mixing model which considers the reactor volume in two separate sections: the flow-through and the retention regions. The dynamic model describes the effects of reactant distribution resulting from the mixing conditions, time interval of feeding, hydraulic retention time and mixing parameters on the process performance. The model is applied in the simulation of anaerobic digestion of cattle manure under different operating conditions. The model is compared with experimental data and good correlations are obtained

  15. Parametric Linear Hybrid Automata for Complex Environmental Systems Modeling

    Directory of Open Access Journals (Sweden)

    Samar Hayat Khan Tareen

    2015-07-01

    Full Text Available Environmental systems, whether they be weather patterns or predator-prey relationships, are dependent on a number of different variables, each directly or indirectly affecting the system at large. Since not all of these factors are known, these systems take on non-linear dynamics, making it difficult to accurately predict meaningful behavioral trends far into the future. However, such dynamics do not warrant complete ignorance of different efforts to understand and model close approximations of these systems. Towards this end, we have applied a logical modeling approach to model and analyze the behavioral trends and systematic trajectories that these systems exhibit without delving into their quantification. This approach, formalized by René Thomas for discrete logical modeling of Biological Regulatory Networks (BRNs) and further extended in our previous studies as parametric biological linear hybrid automata (Bio-LHA), has been previously employed for the analyses of different molecular regulatory interactions occurring across various cells and microbial species. As relationships between different interacting components of a system can be simplified as positive or negative influences, we can employ the Bio-LHA framework to represent different components of the environmental system as positive or negative feedbacks. In the present study, we highlight the benefits of hybrid (discrete/continuous) modeling which lead to refinements among the forecasted behaviors in order to find out which ones are actually possible. We have taken two case studies: an interaction of three microbial species in a freshwater pond, and a more complex atmospheric system, to show the applications of the Bio-LHA methodology for the timed hybrid modeling of environmental systems. Results show that the approach using the Bio-LHA is a viable method for behavioral modeling of complex environmental systems by finding timing constraints while keeping the complexity of the model

  16. Modeling and complexity of stochastic interacting Lévy type financial price dynamics

    Science.gov (United States)

    Wang, Yiduan; Zheng, Shenzhou; Zhang, Wei; Wang, Jun; Wang, Guochao

    2018-06-01

    In an attempt to reproduce and investigate the nonlinear dynamics of security markets, a novel nonlinear random interacting price dynamics, considered as a Lévy type process, is developed and investigated by the combination of lattice-oriented percolation and Potts dynamics, which concern the instinctive random fluctuation and the fluctuation caused by the spread of the investors' trading attitudes, respectively. To better understand the fluctuation complexity properties of the proposed model, complexity analyses of the random logarithmic price return and corresponding volatility series are performed, including power-law distribution, Lempel-Ziv complexity and fractional sample entropy. In order to verify the rationality of the proposed model, corresponding studies of actual security market datasets are also implemented for comparison. The empirical results reveal that this financial price model can reproduce some important complexity features of actual security markets to some extent. The complexity of returns decreases with the increase of parameters γ1 and β, respectively; furthermore, the volatility series exhibit lower complexity than the return series
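
    Of the complexity measures listed, Lempel-Ziv complexity is the easiest to make concrete. A sketch of the LZ76 phrase-counting variant applied to a median-binarized series (synthetic i.i.d. returns stand in for model output, for which the normalized value is close to 1):

        import numpy as np

        def lempel_ziv_complexity(bits):
            """LZ76 phrase count of a binary sequence."""
            s = "".join(map(str, bits))
            i, c, n = 0, 0, len(s)
            while i < n:
                l = 1
                # Grow the phrase until it no longer occurs in the preceding text.
                while i + l <= n and s[i:i + l] in s[:i + l - 1]:
                    l += 1
                c += 1
                i += l
            return c

        rng = np.random.default_rng(3)
        returns = rng.standard_normal(5000)                # stand-in for log returns
        bits = (returns > np.median(returns)).astype(int)  # median binarization
        n, c = len(bits), lempel_ziv_complexity(bits)
        print("normalized LZ complexity:", c / (n / np.log2(n)))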

  17. Surface complexation modeling of zinc sorption onto ferrihydrite.

    Science.gov (United States)

    Dyer, James A; Trivedi, Paras; Scrivner, Noel C; Sparks, Donald L

    2004-02-01

    A previous study involving lead(II) [Pb(II)] sorption onto ferrihydrite over a wide range of conditions highlighted the advantages of combining molecular- and macroscopic-scale investigations with surface complexation modeling to predict Pb(II) speciation and partitioning in aqueous systems. In this work, an extensive collection of new macroscopic and spectroscopic data was used to assess the ability of the modified triple-layer model (TLM) to predict single-solute zinc(II) [Zn(II)] sorption onto 2-line ferrihydrite in NaNO3 solutions as a function of pH, ionic strength, and concentration. Regression of constant-pH isotherm data, together with potentiometric titration and pH edge data, was a much more rigorous test of the modified TLM than fitting pH edge data alone. When coupled with valuable input from spectroscopic analyses, good fits of the isotherm data were obtained with a one-species, one-Zn-sorption-site model using the bidentate-mononuclear surface complex (≡FeO)2Zn; however, surprisingly, both the density of Zn(II) sorption sites and the value of the best-fit equilibrium "constant" for the bidentate-mononuclear complex had to be adjusted with pH to adequately fit the isotherm data. Although spectroscopy provided some evidence for multinuclear surface complex formation at surface loadings approaching site saturation at pH ≥ 6.5, the assumption of a bidentate-mononuclear surface complex provided acceptable fits of the sorption data over the entire range of conditions studied. Regressing edge data in the absence of isotherm and spectroscopic data resulted in a fair number of surface-species/site-type combinations that provided acceptable fits of the edge data, but unacceptable fits of the isotherm data. A linear relationship between log K((≡FeO)2Zn) and pH was found, given by log K((≡FeO)2Zn, at 1 g/l) = 2.058(pH) − 6.131. In addition, a surface activity coefficient term was introduced to the model to reduce the ionic strength
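
    As a quick worked use of the reported regression (coefficients from the 1 g/l fit; purely illustrative):

        # logK for the bidentate-mononuclear complex (=FeO)2Zn, per the reported fit.
        for pH in (5.5, 6.5, 7.5):
            logK = 2.058 * pH - 6.131
            print(f"pH {pH}: logK = {logK:.2f}")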

  18. Applications of Nonlinear Dynamics Model and Design of Complex Systems

    CERN Document Server

    In, Visarath; Palacios, Antonio

    2009-01-01

    This edited book is aimed at interdisciplinary, device-oriented applications of nonlinear science theory and methods in complex systems, in particular applications directed to nonlinear phenomena with space and time characteristics. Examples include: complex networks of magnetic sensor systems, coupled nano-mechanical oscillators, nano-detectors, microscale devices, stochastic resonance in multi-dimensional chaotic systems, biosensors, and stochastic signal quantization. "Applications of Nonlinear Dynamics: Model and Design of Complex Systems" brings together the work of scientists and engineers who are applying ideas and methods from nonlinear dynamics to the design and fabrication of complex systems.

  19. Passengers, Crowding and Complexity : Models for passenger oriented public transport

    NARCIS (Netherlands)

    P.C. Bouman (Paul)

    2017-01-01

    Passengers, Crowding and Complexity was written as part of the Complexity in Public Transport (ComPuTr) project funded by the Netherlands Organisation for Scientific Research (NWO). This thesis studies in three parts how microscopic data can be used in models that have the potential

  20. FRAM Modelling Complex Socio-technical Systems

    CERN Document Server

    Hollnagel, Erik

    2012-01-01

    There has not yet been a comprehensive method that goes behind 'human error' and beyond the failure concept, and various complicated accidents have accentuated the need for it. The Functional Resonance Analysis Method (FRAM) fulfils that need. This book presents a detailed and tested method that can be used to model how complex and dynamic socio-technical systems work, and understand both why things sometimes go wrong but also why they normally succeed.

  1. Are more complex physiological models of forest ecosystems better choices for plot and regional predictions?

    Science.gov (United States)

    Wenchi Jin; Hong S. He; Frank R. Thompson

    2016-01-01

    Process-based forest ecosystem models vary from simple physiological, complex physiological, to hybrid empirical-physiological models. Previous studies indicate that complex models provide the best prediction at plot scale with a temporal extent of less than 10 years, however, it is largely untested as to whether complex models outperform the other two types of models...

  2. A SIMULATION MODEL OF THE GAS COMPLEX

    Directory of Open Access Journals (Sweden)

    Sokolova G. E.

    2016-06-01

    Full Text Available The article considers the dynamics of gas production in Russia, the structure of sales in the different market segments, as well as the comparative dynamics of selling prices in these segments. It examines an approach to modelling the gas complex with a simulation model that allows the efficiency of the project to be estimated and the stability region of the obtained solutions to be determined. The presented model takes into account repayment of the loan, making it possible to determine, from the first year of simulation, whether the loan can be repaid. The model object is a group of gas fields characterized by the minimum flow rate above which the project is cost-effective. In determining the minimum source flow rate, the discount rate is taken as a generalized weighted average percentage on debt and equity, taking into account risk premiums. It also serves as the lower barrier to internal rate of return, below which the project is rejected as ineffective. Analysis of the dynamics and expert evaluation methods allow the intervals of variation of the simulated parameters to be determined, such as the gas price and the time for the gas complex to reach its projected capacity. Calculations using the Monte Carlo method for each random realization of the simulated parameter values yield a set of optimal minimum well flow rates for each realization, and also allow the stability region of the solution to be determined.

  3. Understanding large multiprotein complexes: applying a multiple allosteric networks model to explain the function of the Mediator transcription complex.

    Science.gov (United States)

    Lewis, Brian A

    2010-01-15

    The regulation of transcription and of many other cellular processes involves large multi-subunit protein complexes. In the context of transcription, it is known that these complexes serve as regulatory platforms that connect activator DNA-binding proteins to a target promoter. However, there is still a lack of understanding regarding the function of these complexes. Why do multi-subunit complexes exist? What is the molecular basis of the function of their constituent subunits, and how are these subunits organized within a complex? What is the reason for physical connections between certain subunits and not others? In this article, I address these issues through a model of network allostery and its application to the eukaryotic RNA polymerase II Mediator transcription complex. The multiple allosteric networks model (MANM) suggests that protein complexes such as Mediator exist not only as physical but also as functional networks of interconnected proteins through which information is transferred from subunit to subunit by the propagation of an allosteric state known as conformational spread. Additionally, there are multiple distinct sub-networks within the Mediator complex that can be defined by their connections to different subunits; these sub-networks have discrete functions that are activated when specific subunits interact with other activator proteins.

  4. The effects of model complexity and calibration period on groundwater recharge simulations

    Science.gov (United States)

    Moeck, Christian; Van Freyberg, Jana; Schirmer, Mario

    2017-04-01

    A significant number of groundwater recharge models exist that vary in terms of complexity (i.e., structure and parametrization). Typically, model selection and conceptualization is very subjective and can be a key source of uncertainty in recharge simulations. Another source of uncertainty is the implicit assumption that model parameters, calibrated over historical periods, are also valid for the simulation period. To the best of our knowledge, there has been no systematic evaluation of the effect of model complexity and calibration strategy on the performance of recharge models. To address this gap, we utilized a long-term recharge data set (20 years) from a large weighing lysimeter. We performed a differential split-sample test with four groundwater recharge models that vary in terms of complexity. They were calibrated using six calibration periods with climatically contrasting conditions in a constrained Monte Carlo approach. Despite the climatically contrasting conditions, all models performed similarly well during calibration. However, during validation a clear effect of the model structure on model performance was evident. The more complex, physically based models predicted recharge best, even when calibration and prediction periods had very different climatic conditions. In contrast, the simpler soil-water balance and lumped models performed poorly under such conditions. For these models we found a strong dependency on the chosen calibration period. In particular, our analysis showed that this can have relevant implications when using recharge models as decision-making tools in a broad range of applications (e.g. water availability, climate change impact studies, water resource management, etc.).

  5. Entropy, complexity, and Markov diagrams for random walk cancer models.

    Science.gov (United States)

    Newton, Paul K; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter

    2014-12-19

    The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer is associated with a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.
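
    The backbone of this analysis, the steady-state distribution of a Markov transition matrix and its Shannon entropy, is straightforward to reproduce; the 4-site matrix below is a toy stand-in for the autopsy-derived anatomical-site matrices:

        import numpy as np

        # Toy metastasis transition matrix (rows sum to 1; values assumed).
        P = np.array([[0.70, 0.15, 0.10, 0.05],
                      [0.10, 0.70, 0.15, 0.05],
                      [0.05, 0.10, 0.80, 0.05],
                      [0.05, 0.05, 0.10, 0.80]])

        # Steady state: the left eigenvector of P for eigenvalue 1, normalized.
        w, v = np.linalg.eig(P.T)
        pi = np.real(v[:, np.argmax(np.real(w))])
        pi /= pi.sum()

        entropy = -np.sum(pi * np.log2(pi))   # Shannon entropy of the steady state
        print("steady state:", np.round(pi, 3), "entropy (bits):", round(entropy, 3))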

  6. Estimating the complexity of 3D structural models using machine learning methods

    Science.gov (United States)

    Mejía-Herrera, Pablo; Kakurina, Maria; Royer, Jean-Jacques

    2016-04-01

    Quantifying the complexity of 3D geological structural models can play a major role in natural resources exploration surveys, for predicting environmental hazards or for forecasting fossil resources. This paper proposes a structural complexity index which can be used to help in defining the degree of effort necessary to build a 3D model for a given degree of confidence, and also to identify locations where additional efforts are required to meet a given acceptable risk of uncertainty. In this work, it is considered that the structural complexity index can be estimated using machine learning methods on raw geo-data. More precisely, the metrics for measuring the complexity can be approximated as the degree of difficulty associated with predicting the distribution of geological objects from partial information on the actual structural distribution of materials. The proposed methodology is tested on a set of 3D synthetic structural models for which the degree of effort during their building is assessed using various parameters (such as number of faults, number of parts in a surface object, number of borders, ...), the rank of geological elements contained in each model, and, finally, their level of deformation (folding and faulting). The results show how the estimated complexity of a 3D model can be approximated by the quantity of partial data necessary to simulate the actual 3D model at a given precision without error using machine learning algorithms.

  7. Assessment of wear dependence parameters in complex model of cutting tool wear

    Science.gov (United States)

    Antsev, A. V.; Pasko, N. I.; Antseva, N. V.

    2018-03-01

    This paper addresses the wear dependence of the generic efficient life period of cutting tools, taken as an aggregate of the law of tool wear rate distribution and the dependence of this law's parameters on the cutting mode, factoring in randomness as exemplified by the complex model of wear. The complex model of wear takes into account the variance of cutting properties within one batch of tools, the variance in machinability within one batch of workpieces, and the stochastic nature of the wear process itself. A technique for assessing wear dependence parameters in a complex model of cutting tool wear is provided. The technique is supported by a numerical example.

  8. Using Models to Inform Policy: Insights from Modeling the Complexities of Global Polio Eradication

    Science.gov (United States)

    Thompson, Kimberly M.

    Drawing on over 20 years of experience modeling risks in complex systems, this talk will challenge SBP participants to develop models that provide timely and useful answers to critical policy questions when decision makers need them. The talk will include reflections on the opportunities and challenges associated with developing integrated models for complex problems and communicating their results effectively. Dr. Thompson will focus the talk largely on collaborative modeling related to global polio eradication and the application of system dynamics tools. After successful global eradication of wild polioviruses, live polioviruses will still present risks that could potentially lead to paralytic polio cases. This talk will present insights from efforts to use integrated dynamic, probabilistic risk, decision, and economic models to address critical policy questions related to managing global polio risks. Using a dynamic disease transmission model combined with probabilistic model inputs that characterize uncertainty for a stratified world to account for variability, we find that global health leaders will face some difficult choices, but that they can take actions that will manage the risks effectively. The talk will emphasize the need for true collaboration between modelers and subject matter experts, and the importance of working with decision makers as partners to ensure the development of useful models that actually get used.

  9. Uncertainty and validation. Effect of model complexity on uncertainty estimates

    International Nuclear Information System (INIS)

    Elert, M.

    1996-09-01

    In the Model Complexity subgroup of BIOMOVS II, models of varying complexity have been applied to the problem of downward transport of radionuclides in soils. A scenario describing a case of surface contamination of a pasture soil was defined. Three different radionuclides with different environmental behavior and radioactive half-lives were considered: Cs-137, Sr-90 and I-129. The intention was to give a detailed specification of the parameters required by different kinds of model, together with reasonable values for the parameter uncertainty. A total of seven modelling teams participated in the study using 13 different models. Four of the modelling groups performed uncertainty calculations using nine different modelling approaches. The models used range in complexity from analytical solutions of a 2-box model using annual average data to numerical models coupling hydrology and transport using data varying on a daily basis. The complex models needed to consider all aspects of radionuclide transport in a soil with a variable hydrology are often impractical to use in safety assessments. Instead, simpler models, often box models, are preferred. The comparison of predictions made with the complex models and the simple models for this scenario shows that the predictions in many cases are very similar, e.g. in the predictions of the evolution of the root zone concentration. However, in other cases differences of many orders of magnitude can appear. One example is the prediction of the flux to the groundwater of radionuclides being transported through the soil column. Some issues that have come into focus in this study: There are large differences in the predicted soil hydrology and, as a consequence, also in the radionuclide transport, which suggests that there are large uncertainties in the calculation of effective precipitation and evapotranspiration. The approach used for modelling the water transport in the root zone has an impact on the predictions of the decline in root
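
    The simplest end of the model spectrum compared here, an analytical 2-box model, can be sketched as a linear ODE system solved with a matrix exponential; the transfer rates below are assumptions, and only the Cs-137 decay constant is physical:

        import numpy as np
        from scipy.linalg import expm

        lam = np.log(2) / 30.2   # Cs-137 decay constant, 1/years
        k12 = 0.05               # root zone -> deeper soil transfer, 1/years (assumed)
        k2out = 0.01             # deeper soil -> groundwater, 1/years (assumed)

        # Two well-mixed boxes with first-order transfer and radioactive decay.
        A = np.array([[-(k12 + lam), 0.0],
                      [k12, -(k2out + lam)]])
        c0 = np.array([1.0, 0.0])   # unit surface contamination in the root zone

        for t in (1, 10, 50):
            c = expm(A * t) @ c0
            print(f"t={t:>2} y: root zone {c[0]:.3f}, deep soil {c[1]:.3f}, "
                  f"flux to groundwater {k2out * c[1]:.4f}")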

  10. Complex Road Intersection Modelling Based on Low-Frequency GPS Track Data

    Science.gov (United States)

    Huang, J.; Deng, M.; Zhang, Y.; Liu, H.

    2017-09-01

    It is widely accepted that digital maps have become an indispensable guide for human daily traveling. Traditional road network maps are produced in time-consuming and labour-intensive ways, such as digitizing printed maps and extraction from remote sensing images. At present, the large number of GPS trajectory data collected by floating vehicles makes it a reality to extract highly detailed and up-to-date road network information. Road intersections are often accident-prone areas and are critical to route planning, and the connectivity of road networks is mainly determined by the topological geometry of road intersections. A few studies have paid attention to detecting complex road intersections and mining the attached traffic information (e.g., connectivity, topology and turning restrictions) from massive GPS traces. To the authors' knowledge, recent studies mainly used high frequency (1 s sampling rate) trajectory data to detect crossroads regions or extract rough intersection models. It is still difficult to use low frequency (20-100 s), easily available trajectory data to model complex road intersections geometrically and semantically. This paper thus attempts to construct precise models of complex road intersections using low frequency GPS traces. We propose to first extract complex road intersections by an LCSS-based (Longest Common Subsequence) trajectory clustering method, then delineate the geometric shapes of complex road intersections by a K-segment principal curve algorithm, and finally infer the traffic constraint rules inside the complex intersections.
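
    A sketch of the LCSS building block behind the trajectory clustering, with a coordinate tolerance eps deciding when two GPS points match; the tolerance and toy traces are assumptions:

        def lcss(t1, t2, eps=0.0005):
            """Longest common subsequence of two point sequences; points match
            when both coordinate differences are within eps (degrees)."""
            m, n = len(t1), len(t2)
            dp = [[0] * (n + 1) for _ in range(m + 1)]
            for i in range(1, m + 1):
                for j in range(1, n + 1):
                    (x1, y1), (x2, y2) = t1[i - 1], t2[j - 1]
                    if abs(x1 - x2) <= eps and abs(y1 - y2) <= eps:
                        dp[i][j] = dp[i - 1][j - 1] + 1
                    else:
                        dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
            return dp[m][n]

        a = [(112.980, 28.1900), (112.981, 28.1910), (112.982, 28.1920)]
        b = [(112.980, 28.1905), (112.981, 28.1912), (112.983, 28.1940)]
        print("normalized LCSS similarity:", lcss(a, b) / min(len(a), len(b)))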

  11. Structured analysis and modeling of complex systems

    Science.gov (United States)

    Strome, David R.; Dalrymple, Mathieu A.

    1992-01-01

    The Aircrew Evaluation Sustained Operations Performance (AESOP) facility at Brooks AFB, Texas, combines the realism of an operational environment with the control of a research laboratory. In recent studies we collected extensive data from the Airborne Warning and Control Systems (AWACS) Weapons Directors subjected to high and low workload Defensive Counter Air Scenarios. A critical and complex task in this environment involves committing a friendly fighter against a hostile fighter. Structured Analysis and Design techniques and computer modeling systems were applied to this task as tools for analyzing subject performance and workload. This technology is being transferred to the Man-Systems Division of NASA Johnson Space Center for application to complex mission related tasks, such as manipulating the Shuttle grappler arm.

  12. Modeling complexity in engineered infrastructure system: Water distribution network as an example

    Science.gov (United States)

    Zeng, Fang; Li, Xiang; Li, Ke

    2017-02-01

    The complex topology and adaptive behavior of infrastructure systems are driven by both self-organization of demand and rigid engineering solutions. Therefore, engineering complex systems requires a method balancing holism and reductionism. To model the growth of water distribution networks, a complex network model was developed following a combination of local optimization rules and engineering considerations. The demand node generation is dynamic and follows the scaling law of urban growth. The proposed model can generate water distribution networks (WDNs) similar to reported real-world WDNs in some structural properties. Comparison with different modeling approaches indicates that a realistic demand node distribution and the co-evolution of demand nodes and the network are important for the simulation of real complex networks. The simulation results indicate that the efficiency of water distribution networks is exponentially affected by the urban growth pattern. In contrast, the improvement of efficiency by engineering optimization is limited and relatively insignificant. Redundancy and robustness, on the other hand, can be significantly improved through engineering methods.
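
    A sketch of the kind of local-optimization growth rule such models use (not necessarily the paper's exact rule): each new demand node attaches to the existing node that minimizes a cost trading pipe length against hydraulic distance to the source. The weight alpha and the uniform node placement are assumptions; in the paper, demand nodes follow the scaling law of urban growth.

        import math, random

        random.seed(4)
        source = (0.0, 0.0)
        nodes, edges = [source], []
        dist_to_source = {source: 0.0}
        alpha = 0.5   # weight of hydraulic distance vs. pipe length (assumed)

        for _ in range(30):
            p = (random.uniform(-1, 1), random.uniform(-1, 1))   # new demand node
            def cost(q):
                d = math.dist(p, q)
                return d + alpha * (dist_to_source[q] + d)       # local optimization
            q = min(nodes, key=cost)                             # best attachment point
            edges.append((q, p))
            dist_to_source[p] = dist_to_source[q] + math.dist(p, q)
            nodes.append(p)

        print(len(nodes), "nodes,", len(edges), "pipes (tree-like backbone)")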

  13. Evaluation of soil flushing of complex contaminated soil: An experimental and modeling simulation study

    Energy Technology Data Exchange (ETDEWEB)

    Yun, Sung Mi; Kang, Christina S. [Department of Environmental Engineering, Konkuk University, 120 Neungdong-ro, Gwangjin-gu, Seoul 143-701 (Korea, Republic of); Kim, Jonghwa [Department of Industrial Engineering, Konkuk University, 120 Neungdong-ro, Gwangjin-gu, Seoul 143-701 (Korea, Republic of); Kim, Han S., E-mail: hankim@konkuk.ac.kr [Department of Environmental Engineering, Konkuk University, 120 Neungdong-ro, Gwangjin-gu, Seoul 143-701 (Korea, Republic of)

    2015-04-28

    Highlights:
    • Remediation of complex contaminated soil achieved by sequential soil flushing.
    • Removal of Zn, Pb, and heavy petroleum oils using 0.05 M citric acid and 2% SDS.
    • Unified desorption distribution coefficients modeled and experimentally determined.
    • Nonequilibrium models for the transport behavior of complex contaminants in soils.
    Abstract: The removal of heavy metals (Zn and Pb) and heavy petroleum oils (HPOs) from a soil with complex contamination was examined by soil flushing. Desorption and transport behaviors of the complex contaminants were assessed by batch and continuous flow reactor experiments and through modeling simulations. Flushing a one-dimensional flow column packed with complex contaminated soil sequentially with citric acid then a surfactant resulted in the removal of 85.6% of Zn, 62% of Pb, and 31.6% of HPO. The desorption distribution coefficients, K_Ubatch and K_Lbatch, converged to constant values as C_e increased. An equilibrium model (ADR) and nonequilibrium models (TSNE and TRNE) were used to predict the desorption and transport of complex contaminants. The nonequilibrium models demonstrated better fits with the experimental values obtained from the column test than the equilibrium model. The ranges of K_Ubatch and K_Lbatch were very close to those of K_Ufit and K_Lfit determined from model simulations. The parameters (R, β, ω, α, and f) determined from model simulations were useful for characterizing the transport of contaminants within the soil matrix. The results of this study provide useful information for the operational parameters of the flushing process for soils with complex contamination.
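
    For the equilibrium (ADR) end of the modeling, the retardation factor follows directly from a linear distribution coefficient; a worked example with assumed soil properties:

        rho_b = 1.5    # bulk density, g/cm^3 (assumed)
        theta = 0.35   # volumetric water content (assumed)
        K_d = 2.0      # linear desorption distribution coefficient, cm^3/g (assumed)

        R = 1 + (rho_b / theta) * K_d   # retardation factor in the ADR model
        v_water = 10.0                  # pore-water velocity, cm/day (assumed)
        print(f"R = {R:.2f}; retarded front velocity = {v_water / R:.2f} cm/day")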

  14. A computational framework for modeling targets as complex adaptive systems

    Science.gov (United States)

    Santos, Eugene; Santos, Eunice E.; Korah, John; Murugappan, Vairavan; Subramanian, Suresh

    2017-05-01

    Modeling large military targets is a challenge as they can be complex systems encompassing myriad combinations of human, technological, and social elements that interact, leading to complex behaviors. Moreover, such targets have multiple components and structures, extending across multiple spatial and temporal scales, and are in a state of change, either in response to events in the environment or changes within the system. Complex adaptive system (CAS) theory can help in capturing the dynamism, interactions, and more importantly various emergent behaviors, displayed by the targets. However, a key stumbling block is incorporating information from various intelligence, surveillance and reconnaissance (ISR) sources, while dealing with the inherent uncertainty, incompleteness and time criticality of real world information. To overcome these challenges, we present a probabilistic reasoning network based framework called complex adaptive Bayesian Knowledge Base (caBKB). caBKB is a rigorous, overarching and axiomatic framework that models two key processes, namely information aggregation and information composition. While information aggregation deals with the union, merger and concatenation of information and takes into account issues such as source reliability and information inconsistencies, information composition focuses on combining information components where such components may have well defined operations. Since caBKBs can explicitly model the relationships between information pieces at various scales, it provides unique capabilities such as the ability to de-aggregate and de-compose information for detailed analysis. Using a scenario from the Network Centric Operations (NCO) domain, we will describe how our framework can be used for modeling targets with a focus on methodologies for quantifying NCO performance metrics.

  15. Model-based safety architecture framework for complex systems

    NARCIS (Netherlands)

    Schuitemaker, Katja; Rajabali Nejad, Mohammadreza; Braakhuis, J.G.; Podofillini, Luca; Sudret, Bruno; Stojadinovic, Bozidar; Zio, Enrico; Kröger, Wolfgang

    2015-01-01

    The shift to transparency and rising need of the general public for safety, together with the increasing complexity and interdisciplinarity of modern safety-critical Systems of Systems (SoS) have resulted in a Model-Based Safety Architecture Framework (MBSAF) for capturing and sharing architectural

  16. Contrasting model complexity under a changing climate in a headwaters catchment.

    Science.gov (United States)

    Foster, L.; Williams, K. H.; Maxwell, R. M.

    2017-12-01

    Alpine, snowmelt-dominated catchments are the source of water for more than 1/6th of the world's population. These catchments are topographically complex, leading to steep weather gradients and nonlinear relationships between water and energy fluxes. Recent evidence suggests that alpine systems are more sensitive to climate warming, but these regions are vastly simplified in climate models and operational water management tools due to computational limitations. Simultaneously, point-scale observations are often extrapolated to larger regions where feedbacks can both exacerbate or mitigate locally observed changes. It is critical to determine whether projected climate impacts are robust to different methodologies, including model complexity. Using high performance computing and an integrated model of a representative headwater catchment we determined the hydrologic response from 30 projected climate changes to precipitation, temperature and vegetation for the Rocky Mountains. Simulations were run with 100m and 1km resolution, and with and without lateral subsurface flow in order to vary model complexity. We found that model complexity alters nonlinear relationships between water and energy fluxes. Higher-resolution models predicted larger changes per degree of temperature increase than lower resolution models, suggesting that reductions to snowpack, surface water, and groundwater due to warming may be underestimated in simple models. Increases in temperature were found to have a larger impact on water fluxes and stores than changes in precipitation, corroborating previous research showing that mountain systems are significantly more sensitive to temperature changes than to precipitation changes and that increases in winter precipitation are unlikely to compensate for increased evapotranspiration in a higher energy environment. These numerical experiments help to (1) bracket the range of uncertainty in published literature of climate change impacts on headwater

  17. Modeling and simulation for fewer-axis grinding of complex surface

    Science.gov (United States)

    Li, Zhengjian; Peng, Xiaoqiang; Song, Ci

    2017-10-01

    As the basis of fewer-axis grinding of complex surfaces, the grinding mathematical model is of great importance. A mathematical model of the grinding wheel was established, from which the coordinates and normal vectors of the wheel profile can be calculated. Through normal vector matching at the cutter contact point and coordinate system transformation, the grinding mathematical model was established to compute the coordinates of the cutter location point. Based on the model, interference analysis was simulated to find the right position and posture of the workpiece for grinding. Then the positioning errors of the workpiece, including the translational positioning error and the rotational positioning error, were analyzed, and the main locating datum was obtained. According to the analysis results, the grinding tool path was planned and generated to grind the complex surface, and good form accuracy was obtained. The grinding mathematical model is simple, feasible and can be widely applied.
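
    In the simplest case (a spherical wheel end), the cutter-location point is the cutter-contact point offset along the matched unit normal; a sketch with assumed coordinates (the full model additionally involves the wheel profile and coordinate-system transforms):

        import numpy as np

        def cutter_location(cc_point, surface_normal, wheel_radius):
            """Offset the cutter-contact point along the unit surface normal
            to obtain the cutter-location (wheel center) point."""
            n = np.asarray(surface_normal, dtype=float)
            n /= np.linalg.norm(n)
            return np.asarray(cc_point, dtype=float) + wheel_radius * n

        print(cutter_location((10.0, 5.0, 2.0), (0.0, 0.0, 1.0), wheel_radius=25.0))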

  18. Nonlinear model of epidemic spreading in a complex social network.

    Science.gov (United States)

    Kosiński, Robert A; Grabowski, A

    2007-10-01

    The epidemic spreading in a human society is a complex process, which can be described on the basis of a nonlinear mathematical model. In such an approach the complex and hierarchical structure of the social network (which has implications for the spreading of pathogens and can be treated as a complex network) can be taken into account. In our model each individual is in one of five permitted states: susceptible, infected, infective, unsusceptible, or dead. This extends the SEIR model used in epidemiology. The state of an individual changes in time, depending on the previous state and the interactions with other individuals. The description of the interpersonal contacts is based on experimental observations of the social relations in the community. It includes spatial localization of the individuals and the hierarchical structure of interpersonal interactions. Numerical simulations were performed for different types of epidemics, giving the progress of a spreading process and typical relationships (e.g. range of the epidemic in time, the epidemic curve). The spreading process has a complex and spatially chaotic character. The time dependence of the number of infective individuals shows the nonlinear character of the spreading process. We investigate the influence of preventive vaccinations on the spreading process. In particular, for a critical value of preventively vaccinated individuals the percolation threshold is observed and the epidemic is suppressed.
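
    A stripped-down sketch of such a model on a ring-plus-shortcuts contact network, keeping four of the five states (death omitted) and assumed rates:

        import random

        random.seed(5)
        N = 500
        # Ring lattice plus random shortcuts as a crude social contact network.
        neigh = {i: {(i - 1) % N, (i + 1) % N} for i in range(N)}
        for _ in range(50):
            a, b = random.randrange(N), random.randrange(N)
            if a != b:
                neigh[a].add(b)
                neigh[b].add(a)

        state = ["S"] * N          # S, E (infected), I (infective), R (unsusceptible)
        state[0] = "I"
        timer = [0] * N
        beta, t_lat, t_inf = 0.1, 3, 5   # infection prob., latent and infective periods

        for day in range(60):
            new = list(state)
            for i in range(N):
                if state[i] == "I":
                    for j in neigh[i]:
                        if state[j] == "S" and random.random() < beta:
                            new[j] = "E"
                    timer[i] += 1
                    if timer[i] >= t_inf:
                        new[i] = "R"
                elif state[i] == "E":
                    timer[i] += 1
                    if timer[i] >= t_lat:
                        new[i], timer[i] = "I", 0
            state = new

        print({s: state.count(s) for s in "SEIR"})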

  19. Deciphering the complexity of acute inflammation using mathematical models.

    Science.gov (United States)

    Vodovotz, Yoram

    2006-01-01

    Various stresses elicit an acute, complex inflammatory response, leading to healing but sometimes also to organ dysfunction and death. We constructed both equation-based models (EBM) and agent-based models (ABM) of various degrees of granularity (encompassing the dynamics of relevant cells, cytokines, and the resulting global tissue dysfunction) in order to begin to unravel these inflammatory interactions. The EBMs describe and predict various features of septic shock and trauma/hemorrhage (including the response to anthrax, preconditioning phenomena, and irreversible hemorrhage) and were used to simulate anti-inflammatory strategies in clinical trials. The ABMs that describe the interrelationship between inflammation and wound healing yielded insights into intestinal healing in necrotizing enterocolitis, vocal fold healing during phonotrauma, and skin healing in the setting of diabetic foot ulcers. Modeling may help in understanding the complex interactions among the components of inflammation and response to stress, and therefore aid in the development of novel therapies and diagnostics.

  20. Bridging Mechanistic and Phenomenological Models of Complex Biological Systems.

    Science.gov (United States)

    Transtrum, Mark K; Qiu, Peng

    2016-05-01

    The inherent complexity of biological systems gives rise to complicated mechanistic models with a large number of parameters. On the other hand, the collective behavior of these systems can often be characterized by a relatively small number of phenomenological parameters. We use the Manifold Boundary Approximation Method (MBAM) as a tool for deriving simple phenomenological models from complicated mechanistic models. The resulting models are not black boxes, but remain expressed in terms of the microscopic parameters. In this way, we explicitly connect the macroscopic and microscopic descriptions, characterize the equivalence class of distinct systems exhibiting the same range of collective behavior, and identify the combinations of components that function as tunable control knobs for the behavior. We demonstrate the procedure for adaptation behavior exhibited by the EGFR pathway. From a 48 parameter mechanistic model, the system can be effectively described by a single adaptation parameter τ characterizing the ratio of time scales for the initial response and recovery time of the system which can in turn be expressed as a combination of microscopic reaction rates, Michaelis-Menten constants, and biochemical concentrations. The situation is not unlike modeling in physics in which microscopically complex processes can often be renormalized into simple phenomenological models with only a few effective parameters. The proposed method additionally provides a mechanistic explanation for non-universal features of the behavior.

  1. Low frequency complex dielectric (conductivity) response of dilute clay suspensions: Modeling and experiments.

    Science.gov (United States)

    Hou, Chang-Yu; Feng, Ling; Seleznev, Nikita; Freed, Denise E

    2018-04-11

    In this work, we establish an effective medium model to describe the low-frequency complex dielectric (conductivity) dispersion of dilute clay suspensions. We use previously obtained low-frequency polarization coefficients for a charged oblate spheroidal particle immersed in an electrolyte as the building block for the Maxwell Garnett mixing formula to model the dilute clay suspension. The complex conductivity phase dispersion exhibits a near-resonance peak when the clay grains have a narrow size distribution. The peak frequency is associated with the size distribution as well as the shape of clay grains and is often referred to as the characteristic frequency. In contrast, if the size of the clay grains has a broad distribution, the phase peak is broadened and can disappear into the background of the canonical phase response of the brine. To benchmark our model, the low-frequency dispersion of the complex conductivity of dilute clay suspensions is measured using a four-point impedance measurement, which can be reliably calibrated in the frequency range between 0.1 Hz and 10 kHz. By using a minimal number of fitting parameters when reliable information is available as input for the model and carefully examining the issue of potential over-fitting, we found that our model can be used to fit the measured dispersion of the complex conductivity with reasonable parameters. The good match between the modeled and experimental complex conductivity dispersion allows us to argue that our simplified model captures the essential physics for describing the low-frequency dispersion of the complex conductivity of dilute clay suspensions.
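
    A sketch of the mixing step under the simplifying assumption of spherical inclusions (the paper instead derives polarization coefficients for charged oblate spheroids); the grain response and volume fraction are toy values:

        import numpy as np

        def maxwell_garnett(sigma_m, sigma_i, f):
            """Effective complex conductivity of a dilute suspension of spheres."""
            beta = (sigma_i - sigma_m) / (sigma_i + 2 * sigma_m)  # dipole coefficient
            return sigma_m * (1 + 2 * f * beta) / (1 - f * beta)

        freqs = np.logspace(-1, 4, 6)    # 0.1 Hz .. 10 kHz
        sigma_brine = 1.0 + 0j           # S/m (assumed)
        # Toy frequency-dependent grain response standing in for the low-frequency
        # polarization of a charged clay particle (purely illustrative dispersion):
        sigma_grain = 0.01 + 0.5j * freqs / (1 + 1j * freqs / 10.0)

        for fq, sg in zip(freqs, sigma_grain):
            se = maxwell_garnett(sigma_brine, sg, f=0.05)
            print(f"{fq:8.1f} Hz: |sigma| = {abs(se):.4f} S/m, "
                  f"phase = {np.angle(se) * 1e3:.2f} mrad")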

  2. Green IT engineering concepts, models, complex systems architectures

    CERN Document Server

    Kondratenko, Yuriy; Kacprzyk, Janusz

    2017-01-01

    This volume provides a comprehensive, state-of-the-art overview of a series of advanced trends and concepts that have recently been proposed in the area of green information technologies engineering, as well as of design and development methodologies for models and complex systems architectures and their intelligent components. The contributions included in the volume have their roots in the authors' presentations, and the vivid discussions that followed the presentations, at a series of workshops and seminars held within the international TEMPUS GreenCo project in the United Kingdom, Italy, Portugal, Sweden and Ukraine during 2013-2015, and at the 1st-5th Workshops on Green and Safe Computing (GreenSCom) held in Russia, Slovakia and Ukraine. The book presents a systematic exposition of research on principles, models, components and complex systems, and a description of industry- and society-oriented aspects of green IT engineering. A chapter-oriented structure has been adopted for this book ...

  3. An Ontology for Modeling Complex Inter-relational Organizations

    Science.gov (United States)

    Wautelet, Yves; Neysen, Nicolas; Kolp, Manuel

    This paper presents an ontology for organizational modeling through multiple complementary aspects. The primary goal of the ontology is to provide an adequate set of related concepts for studying complex organizations involved in many relationships at the same time. In this paper, we define complex organizations as networked organizations involved in a market eco-system that are playing several roles simultaneously. In such a context, traditional approaches focus on the macro analytic level of transactions; this is supplemented here with a micro analytic study of the actors' rationale. At first, the paper overviews the enterprise ontologies literature to position our proposal and exposes its contributions and limitations. The ontology is then brought to an advanced level of formalization: a meta-model in the form of a UML class diagram gives an overview of the ontology concepts and their relationships, which are formally defined. Finally, the paper presents the case study on which the ontology has been validated.

  4. Efficient Simulation Modeling of an Integrated High-Level-Waste Processing Complex

    International Nuclear Information System (INIS)

    Gregory, Michael V.; Paul, Pran K.

    2000-01-01

    An integrated computational tool named the Production Planning Model (ProdMod) has been developed to simulate the operation of the entire high-level-waste complex (HLW) at the Savannah River Site (SRS) over its full life cycle. ProdMod is used to guide SRS management in operating the waste complex in an economically efficient and environmentally sound manner. SRS HLW operations are modeled using coupled algebraic equations. The dynamic nature of plant processes is modeled in the form of a linear construct in which the time dependence is implicit. Batch processes are modeled in discrete event-space, while continuous processes are modeled in time-space. The ProdMod methodology maps between event-space and time-space such that the inherent mathematical discontinuities in batch process simulation are avoided without sacrificing any of the necessary detail in the batch recipe steps. Modeling the processes separately in event- and time-space using linear constructs, and then coupling the two spaces, has accelerated the speed of simulation compared to a typical dynamic simulation. The ProdMod simulator models have been validated against operating data and other computer codes. Case studies have demonstrated the usefulness of the ProdMod simulator in developing strategies that demonstrate significant cost savings in operating the SRS HLW complex and in verifying the feasibility of newly proposed processes

  5. Modeling data irregularities and structural complexities in data envelopment analysis

    CERN Document Server

    Zhu, Joe

    2007-01-01

    In a relatively short period of time, Data Envelopment Analysis (DEA) has grown into a powerful quantitative, analytical tool for measuring and evaluating performance. It has been successfully applied to a whole variety of problems in many different contexts worldwide. This book deals with the micro aspects of handling and modeling data issues in DEA problems. DEA's use has grown with its capability of dealing with complex "service industry" and "public service domain" types of problems that require modeling of both qualitative and quantitative data. This handbook treatment deals with specific data problems including: imprecise or inaccurate data; missing data; qualitative data; outliers; undesirable outputs; quality data; statistical analysis; software and other data aspects of modeling complex DEA problems. In addition, the book demonstrates how to visualize DEA results when the data are more than 3-dimensional, and how to identify efficient units quickly and accurately.

  6. Model-based identification and use of task complexity factors of human integrated systems

    International Nuclear Information System (INIS)

    Ham, Dong-Han; Park, Jinkyun; Jung, Wondea

    2012-01-01

    Task complexity is one of the conceptual constructs that are critical to explain and predict human performance in human integrated systems. A basic approach to evaluating the complexity of tasks is to identify task complexity factors and measure them. Although a great many task complexity factors have been studied, there is still a lack of conceptual frameworks for identifying and organizing them analytically which can be generally used irrespective of the types of domains and tasks. This study proposes a model-based approach to identifying and using task complexity factors, which has two facets: the design aspects of a task and the complexity dimensions. Three levels of design abstraction, namely the functional, behavioral, and structural aspects of a task, characterize the design aspect of a task; the behavioral aspect is further classified into five cognitive processing activity types. The complexity dimensions explain task complexity from different perspectives, namely size, variety, and order/organization. Twenty-one task complexity factors are identified by combining the attributes of each facet: the seven design aspects (functional, structural, and the five behavioral activity types) crossed with the three complexity dimensions give 7 x 3 = 21 factors. Identification and evaluation of task complexity factors based on this model is believed to give insights for improving the design quality of tasks. This model of complexity factors can also be used as a referential framework for allocating tasks and designing information aids. The proposed approach is applied to procedure-based tasks of nuclear power plants (NPPs) as a case study to demonstrate its use. Finally, we compare the proposed approach with other studies and suggest some future research directions.

  7. Capturing complexity in work disability research: application of system dynamics modeling methodology.

    Science.gov (United States)

    Jetha, Arif; Pransky, Glenn; Hettinger, Lawrence J

    2016-01-01

    Work disability (WD) is characterized by variable and occasionally undesirable outcomes. The underlying determinants of WD outcomes include patterns of dynamic relationships among health, personal, organizational and regulatory factors that have been challenging to characterize, and inadequately represented by contemporary WD models. System dynamics modeling (SDM) methodology applies a sociotechnical systems thinking lens to view WD systems as comprising a range of influential factors linked by feedback relationships. SDM can potentially overcome limitations in contemporary WD models by uncovering causal feedback relationships, and conceptualizing dynamic system behaviors. It employs a collaborative and stakeholder-based model building methodology to create a visual depiction of the system as a whole. SDM can also enable researchers to run dynamic simulations to provide evidence of anticipated or unanticipated outcomes that could result from policy and programmatic intervention. SDM may advance rehabilitation research by providing greater insights into the structure and dynamics of WD systems while helping to understand inherent complexity. Challenges related to data availability, determining validity, and the extensive time and technical skill requirements for model building may limit SDM's use in the field and should be considered. Contemporary work disability (WD) models provide limited insight into complexity associated with WD processes. System dynamics modeling (SDM) has the potential to capture complexity through a stakeholder-based approach that generates a simulation model consisting of multiple feedback loops. SDM may enable WD researchers and practitioners to understand the structure and behavior of the WD system as a whole, and inform development of improved strategies to manage straightforward and complex WD cases.

  8. Low-complexity Behavioral Model for Predictive Maintenance of Railway Turnouts

    DEFF Research Database (Denmark)

    Barkhordari, Pegah; Galeazzi, Roberto; Tejada, Alejandro de Miguel

    2017-01-01

    Maintenance of railway infrastructures represents a major cost driver for any infrastructure manager, since reliability and dependability must be guaranteed at all times. Implementation of predictive maintenance policies relies on the availability of condition monitoring systems able to assess the infrastructure health state. The core of any condition monitoring system is the a-priori knowledge about the process to be monitored, in the form of either mathematical models of different complexity or signal features characterizing the healthy/faulty behavior. This study investigates the use of measured track responses together with the Eigensystem Realization Algorithm (a type of subspace identification) to identify a fourth-order model of the infrastructure. The robustness and predictive capability of the low-complexity behavioral model in reproducing track responses under different types of train excitations have been evaluated.
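
    The Eigensystem Realization Algorithm mentioned in the abstract is standard enough to sketch. The toy example below identifies a fourth-order state-space model from impulse-response (Markov) parameters of a synthetic system; the data are simulated, not railway measurements, and the block sizes are arbitrary.

        # Minimal Eigensystem Realization Algorithm (ERA) for a SISO system:
        # recover a low-order state-space model (A, B, C) from Markov parameters.
        import numpy as np

        # Toy "measured" impulse response of an unknown 4th-order system.
        rng = np.random.default_rng(0)
        A_true = np.diag([0.9, 0.8, 0.6, 0.5])
        B_true = rng.standard_normal((4, 1))
        C_true = rng.standard_normal((1, 4))
        h = np.array([(C_true @ np.linalg.matrix_power(A_true, k) @ B_true).item()
                      for k in range(1, 41)])        # Markov parameters h_1..h_40

        # ERA: Hankel matrices, SVD, and realization of (A, B, C).
        r = 20                                        # Hankel block size
        H0 = np.array([[h[i + j] for j in range(r)] for i in range(r)])
        H1 = np.array([[h[i + j + 1] for j in range(r)] for i in range(r)])

        U, s, Vt = np.linalg.svd(H0)
        n = 4                                         # model order (from SVD gap)
        S = np.diag(np.sqrt(s[:n]))
        Si = np.diag(1.0 / np.sqrt(s[:n]))

        A = Si @ U[:, :n].T @ H1 @ Vt[:n, :].T @ Si   # identified system matrix
        B = (S @ Vt[:n, :])[:, :1]                    # identified input matrix
        C = (U[:, :n] @ S)[:1, :]                     # identified output matrix
        print("identified poles:", np.linalg.eigvals(A))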

  9. Modelling the self-organization and collapse of complex networks

    Indian Academy of Sciences (India)

    Sanjay Jain, Department of Physics and Astrophysics, University of Delhi; Jawaharlal Nehru Centre for Advanced Scientific Research, Bangalore; Santa Fe Institute, Santa Fe, New Mexico.

  10. From complex to simple: interdisciplinary stochastic models

    International Nuclear Information System (INIS)

    Mazilu, D A; Zamora, G; Mazilu, I

    2012-01-01

    We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions for certain physical quantities, such as the time dependence of the length of the microtubules, and diffusion coefficients. The second one is a stochastic adsorption model with applications in surface deposition, epidemics and voter systems. We introduce the ‘empty interval method’ and show sample calculations for the time-dependent particle density. These models can serve as an introduction to the field of non-equilibrium statistical physics, and can also be used as a pedagogical tool to exemplify standard statistical physics concepts, such as random walks or the kinetic approach of the master equation. (paper)
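
    In the spirit of the first model, the following Monte Carlo sketch treats the microtubule tip as a biased one-dimensional random walk and checks the mean length change against the drift prediction (p - q)t; the growth and shrinkage probabilities are hypothetical.

        # Toy 1D stochastic model: a microtubule tip grows or shrinks by one
        # subunit per step with probabilities p and q. The ensemble mean of the
        # net length change is compared with the analytical drift (p - q)*t.
        import numpy as np

        rng = np.random.default_rng(1)
        p, q = 0.55, 0.45                  # growth / shrinkage probabilities
        steps, walkers = 1000, 5000

        jumps = rng.choice([1, -1], size=(walkers, steps), p=[p, q])
        length = jumps.cumsum(axis=1)      # net length change of each microtubule

        print("mean net growth after %d steps: %.1f" % (steps, length[:, -1].mean()))
        print("drift prediction (p - q) * t:   %.1f" % ((p - q) * steps))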

  11. Mathematical modeling of complexing in the scandium-salicylic acid-isoamyl alcohol system

    International Nuclear Information System (INIS)

    Evseev, A.M.; Smirnova, N.S.; Fadeeva, V.I.; Tikhomirova, T.I.; Kir'yanov, Yu.A.

    1984-01-01

    Mathematical modeling of an equilibrium multicomponent physicochemical system for the extraction of Sc salicylate complexes by isoamyl alcohol was conducted. To calculate the equilibrium concentrations of Sc complexes differing in content and composition, a system of nonlinear algebraic mass-balance equations was solved. Experimental data on the extraction of Sc salicylates by isoamyl alcohol versus the pH of the solution, at a constant Sc concentration and different concentrations of salicylate ions, were used to construct the mathematical model. The stability constants of the ScHSal2+, Sc(HSal)3 and ScOH(HSal)2 complexes were calculated.
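
    The core computational task the abstract describes, solving a system of nonlinear mass-balance equations for equilibrium speciation, can be illustrated on a stripped-down system with two hypothetical complexes ML and ML2; the constants below are invented and the extraction step into the organic phase is left out.

        # Illustrative equilibrium speciation: solve the nonlinear mass balances
        # for free metal [M] and free ligand [L] given assumed overall stability
        # constants beta1 (ML) and beta2 (ML2). Not the actual Sc/salicylate system.
        import numpy as np
        from scipy.optimize import fsolve

        beta1, beta2 = 1e4, 1e7            # assumed overall stability constants
        M_tot, L_tot = 1e-3, 5e-3          # total analytical concentrations (mol/L)

        def balances(x):
            m, l = np.exp(x)               # solve in log space to keep values positive
            f1 = m * (1 + beta1 * l + beta2 * l**2) - M_tot          # metal balance
            f2 = l + m * (beta1 * l + 2 * beta2 * l**2) - L_tot      # ligand balance
            return [f1, f2]

        m, l = np.exp(fsolve(balances, np.log([M_tot, L_tot])))
        print("free [M] = %.3e, free [L] = %.3e" % (m, l))
        print("[ML] = %.3e, [ML2] = %.3e" % (beta1 * m * l, beta2 * m * l**2))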

  12. A Peep into the Uncertainty-Complexity-Relevance Modeling Trilemma through Global Sensitivity and Uncertainty Analysis

    Science.gov (United States)

    Munoz-Carpena, R.; Muller, S. J.; Chu, M.; Kiker, G. A.; Perz, S. G.

    2014-12-01

    Model complexity resulting from the need to integrate environmental system components cannot be overstated. In particular, additional emphasis is urgently needed on rational approaches to guide decision making through the uncertainties surrounding the integrated system across decision-relevant scales. However, in spite of the difficulties that the consideration of modeling uncertainty represents for the decision process, it should not be avoided, or the value and science behind the models will be undermined. These two issues, namely the need for coupled models that can answer the pertinent questions and the need for models that do so with sufficient certainty, are the key indicators of a model's relevance. Model relevance is inextricably linked with model complexity. Although model complexity has advanced greatly in recent years, there has been little work to rigorously characterize the threshold of relevance in integrated and complex models. Formally assessing the relevance of the model in the face of increasing complexity would be valuable because there is growing unease among developers and users of complex models about the cumulative effects of various sources of uncertainty on model outputs. In particular, this issue has prompted doubt over whether the considerable effort going into further elaborating complex models will in fact yield the expected payback. New approaches have been proposed recently to evaluate the uncertainty-complexity-relevance modeling trilemma (Muller, Muñoz-Carpena and Kiker, 2011) by incorporating state-of-the-art global sensitivity and uncertainty analysis (GSA/UA) in every step of model development, so as to quantify not only the uncertainty introduced by the addition of new environmental components, but also the effect that these new components have on existing components (interactions, non-linear responses). Outputs from the analysis can also be used to quantify system resilience (stability, alternative states, thresholds or tipping points).
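
    As a concrete illustration of the GSA/UA building block, the sketch below computes Sobol sensitivity indices for a three-parameter toy model with the SALib package; the model and parameter bounds are hypothetical.

        # Minimal global sensitivity analysis with Sobol indices using SALib.
        # The three-parameter model (with one interaction term) is made up.
        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        problem = {
            "num_vars": 3,
            "names": ["a", "b", "c"],
            "bounds": [[0, 1], [0, 1], [0, 1]],
        }

        X = saltelli.sample(problem, 1024)          # Saltelli sampling design
        Y = X[:, 0] + 2.0 * X[:, 1] * X[:, 2]       # toy model with an interaction

        Si = sobol.analyze(problem, Y)
        print("first-order indices:", Si["S1"])     # direct effects
        print("total-order indices:", Si["ST"])     # direct + interaction effects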

  13. Kolmogorov complexity, pseudorandom generators and statistical models testing

    Czech Academy of Sciences Publication Activity Database

    Šindelář, Jan; Boček, Pavel

    2002-01-01

    Roč. 38, č. 6 (2002), s. 747-759 ISSN 0023-5954 R&D Projects: GA ČR GA102/99/1564 Institutional research plan: CEZ:AV0Z1075907 Keywords : Kolmogorov complexity * pseudorandom generators * statistical models testing Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.341, year: 2002

  14. Coupled economic-ecological models for ecosystem-based fishery management: Exploration of trade-offs between model complexity and management needs

    DEFF Research Database (Denmark)

    Thunberg, Eric; Holland, Dan; Nielsen, J. Rasmus

    2012-01-01

    Ecosystem based fishery management has moved beyond rhetorical statements calling for a more holistic approach to resource management, to implementing decisions on resource use that are compatible with goals of maintaining ecosystem health and resilience. Coupled economic-ecological models are a primary tool for informing these decisions. Recognizing the importance of these models, the International Council for the Exploration of the Sea (ICES) formed a Study Group on Integration of Economics, Stock Assessment and Fisheries Management (SGIMM) to explore alternative modelling approaches. Because economic and ecological systems are inherently complex, models are abstractions of these systems incorporating varying levels of complexity depending on available data and the management issues to be addressed. The objective of this special session was to assess the pros and cons of increasing model complexity...

  15. GEOQUIMICO : an interactive tool for comparing sorption conceptual models (surface complexation modeling versus K[D])

    International Nuclear Information System (INIS)

    Hammond, Glenn E.; Cygan, Randall Timothy

    2007-01-01

    Within reactive geochemical transport, several conceptual models exist for simulating sorption processes in the subsurface. Historically, the K_D approach has been the method of choice due to its ease of implementation within a reactive transport model and straightforward comparison with experimental data. However, for modeling complex sorption phenomena (e.g. sorption of radionuclides onto mineral surfaces), this approach does not systematically account for variations in location, time, or chemical conditions, and more sophisticated methods such as a surface complexation model (SCM) must be utilized. It is critical to determine which conceptual model to use; that is, when such variation becomes important to regulatory decisions. The geochemical transport tool GEOQUIMICO has been developed to assist in this decision-making process. GEOQUIMICO provides a user-friendly framework for comparing the accuracy and performance of sorption conceptual models. The tool currently supports the K_D and SCM conceptual models. The code is written in the object-oriented Java programming language to facilitate model development and improve code portability. The basic theory underlying geochemical transport and the sorption conceptual models noted above is presented in this report. Explanations are provided of how these physicochemical processes are implemented in GEOQUIMICO, and a brief verification study comparing GEOQUIMICO results to data found in the literature is given.
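
    For orientation, the two conceptual models can be contrasted with generic textbook forms (not taken from the report itself). The K_D approach assumes a linear, condition-independent isotherm, which leads directly to a constant retardation factor in transport:

        S = K_D \, C, \qquad R = 1 + \frac{\rho_b}{\theta} K_D

    where S is the sorbed concentration, C the aqueous concentration, \rho_b the bulk density and \theta the porosity. A surface complexation model instead writes sorption as a mass-action reaction on surface sites, schematically

        \equiv\!SOH + M^{2+} \;\rightleftharpoons\; \equiv\!SOM^{+} + H^{+},
        \qquad
        K_{int} = \frac{[\equiv\!SOM^{+}][H^{+}]}{[\equiv\!SOH][M^{2+}]} \, e^{\Delta Z F \Psi / RT}

    so that the effective partitioning varies with pH, ionic strength and surface potential \Psi rather than being a single constant.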

  16. A coupled mass transfer and surface complexation model for uranium (VI) removal from wastewaters

    International Nuclear Information System (INIS)

    Lenhart, J.; Figueroa, L.A.; Honeyman, B.D.

    1994-01-01

    A remediation technique has been developed for removing uranium (VI) from complex contaminated groundwater using flake chitin as a biosorbent in batch and continuous flow configurations. With this system, U(VI) removal efficiency can be predicted using a model that integrates surface complexation models, mass transport limitations and sorption kinetics. This integration allows the reactor model to predict removal efficiencies for complex groundwaters with variable U(VI) concentrations and other constituents. The system has been validated using laboratory-derived kinetic data in batch and CSTR systems to verify the model predictions of U(VI) uptake from simulated contaminated groundwater

  17. A Framework for Modeling and Analyzing Complex Distributed Systems

    National Research Council Canada - National Science Library

    Lynch, Nancy A; Shvartsman, Alex Allister

    2005-01-01

    Report developed under STTR contract for topic AF04-T023. This Phase I project developed a modeling language and laid a foundation for computational support tools for specifying, analyzing, and verifying complex distributed system designs...

  18. Numerical Modeling of Fluid-Structure Interaction with Rheologically Complex Fluids

    OpenAIRE

    Chen, Xingyuan

    2014-01-01

    In the present work, the interaction between rheologically complex fluids and elastic solids is studied by means of numerical modeling. The investigated complex fluids are non-Newtonian viscoelastic fluids. Fluid-structure interaction (FSI) of this kind is frequently encountered in injection molding, food processing, pharmaceutical engineering and biomedicine. Investigation via experiments is costly, difficult or, in some cases, even impossible. Therefore, research is increasingly aided by numerical simulation.

  19. Modelling the dynamics of the health-production complex in livestock herds

    DEFF Research Database (Denmark)

    Sørensen, J.T.; Enevoldsen, Carsten

    1992-01-01

    This paper reviews how the dynamics of the health-production complex in livestock herds is mimicked by livestock herd simulation models. Twelve models simulating the dynamics of dairy, beef, sheep and sow herds were examined. All models basically included options to alter input and output...

  20. Modeling Stochastic Complexity in Complex Adaptive Systems: Non-Kolmogorov Probability and the Process Algebra Approach.

    Science.gov (United States)

    Sulis, William H

    2017-10-01

    Walter Freeman III pioneered the application of nonlinear dynamical systems theories and methodologies in his work on mesoscopic brain dynamics. Sadly, mainstream psychology and psychiatry still cling to linear, correlation-based data analysis techniques, which threaten to subvert the process of experimentation and theory building. In order to progress, it is necessary to develop tools capable of managing the stochastic complexity of complex biopsychosocial systems, which includes multilevel feedback relationships, nonlinear interactions, chaotic dynamics and adaptability. In addition, however, these systems exhibit intrinsic randomness, non-Gaussian probability distributions, non-stationarity, contextuality, and non-Kolmogorov probabilities, as well as the absence of means and/or variances and of conditional probabilities. These properties and their implications for statistical analysis are discussed. An alternative approach, the Process Algebra approach, is described. It is a generative model, capable of generating non-Kolmogorov probabilities. It has proven useful in addressing fundamental problems in quantum mechanics and in the modeling of developing psychosocial systems.

  1. Modeling the surface tension of complex, reactive organic-inorganic mixtures

    Science.gov (United States)

    Schwier, A. N.; Viglione, G. A.; Li, Z.; McNeill, V. Faye

    2013-11-01

    Atmospheric aerosols can contain thousands of organic compounds which impact aerosol surface tension, affecting aerosol properties such as heterogeneous reactivity, ice nucleation, and cloud droplet formation. We present new experimental data for the surface tension of complex, reactive organic-inorganic aqueous mixtures mimicking tropospheric aerosols. Each solution contained 2-6 organic compounds, including methylglyoxal, glyoxal, formaldehyde, acetaldehyde, oxalic acid, succinic acid, leucine, alanine, glycine, and serine, with and without ammonium sulfate. We test two semi-empirical surface tension models and find that most reactive, complex, aqueous organic mixtures which do not contain salt are well described by a weighted Szyszkowski-Langmuir (S-L) model, first presented by Henning et al. (2005). Two approaches for modeling the effects of salt were tested: (1) the Tuckermann approach (an extension of the Henning model with an additional explicit salt term), and (2) a new implicit method proposed here which employs experimental surface tension data obtained for each organic species in the presence of salt, used with the Henning model. We recommend the use of method (2) for surface tension modeling of aerosol systems because the Henning model (using data obtained from organic-inorganic systems) and the Tuckermann approach provide similar modeling results and goodness-of-fit (χ2) values, yet the Henning model is a simpler and more physical approach to modeling the effects of salt, requiring fewer empirically determined parameters.
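
    A rough sketch of the per-solute Szyszkowski-Langmuir form with a simple mole-fraction weighting is shown below; the parameter values are invented, and the published weighted models differ in how the weighting and salt effects enter, so this is illustrative only.

        # Szyszkowski-Langmuir mixture sketch: each solute i depresses the
        # surface tension by a_i * ln(1 + b_i * c_i); a simple mole-fraction
        # weighting combines the contributions. All values are hypothetical.
        import numpy as np

        sigma_w = 72.0                      # mN/m, pure water near 25 C

        a = np.array([9.0, 5.0, 4.0])       # S-L parameters a_i (mN/m), assumed
        b = np.array([7.0, 2.0, 1.5])       # S-L parameters b_i (L/mol), assumed
        c = np.array([0.5, 0.2, 0.3])       # molar concentrations c_i (mol/L)

        w = c / c.sum()                     # simple mole-fraction weighting
        sigma_mix = sigma_w - np.sum(w * a * np.log1p(b * c))
        print("mixture surface tension: %.1f mN/m" % sigma_mix)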

  2. A comprehensive model of anaerobic bioconversion of complex substrates to biogas

    DEFF Research Database (Denmark)

    Angelidaki, Irini; Ellegaard, Lars; Ahring, Birgitte Kiær

    1999-01-01

    A dynamic model describing the anaerobic degradation of complex material, and the codigestion of different types of wastes, was developed based on a model previously described (Angelidaki et al., 1993). In the model, the substrate is described by its composition of basic organic components, i.e., carbohydrates, lipids, and proteins.

  3. Socio-Environmental Resilience and Complex Urban Systems Modeling

    Science.gov (United States)

    Deal, Brian; Petri, Aaron; Pan, Haozhi; Goldenberg, Romain; Kalantari, Zahra; Cvetkovic, Vladimir

    2017-04-01

    The increasing pressure of climate change has inspired two normative agendas, socio-technical transitions and socio-ecological resilience, both sharing a complex-systems epistemology (Gillard et al. 2016). Socio-technical solutions include a continuous, massive data gathering exercise now underway in urban places under the guise of developing a 'smart'(er) city. This has led to the creation of data-rich environments where large data sets have become central to monitoring and forming a response to anomalies. Some have argued that these kinds of data sets can help in planning for resilient cities (Norberg and Cumming 2008; Batty 2013). In this paper, we focus on a more nuanced, ecologically based, socio-environmental perspective of resilience planning that is often given less consideration. Here, we broadly discuss (and model) the tightly linked, mutually influenced, social and biophysical subsystems that are critical for understanding urban resilience. We argue for the need to incorporate these subsystem linkages into the resilience planning lexicon through the integration of systems models and planning support systems. We make our case by first providing a context for urban resilience from a socio-ecological and planning perspective. We highlight the data needs for this type of resilience planning and compare them to the data streams currently collected in various smart city efforts. This helps to define an approach for operationalizing socio-environmental resilience planning using robust systems models and planning support systems. For this, we draw from our experiences in coupling a spatio-temporal land use model (the Landuse Evolution and impact Assessment Model (LEAM)) with water quality and quantity models in Stockholm, Sweden. We describe the coupling of these systems models using a robust Planning Support System (PSS) structural framework. We use the coupled model simulations and PSS to analyze the connection between urban land use transformation (social) and water quality and quantity (environmental).

  4. A range of complex probabilistic models for RNA secondary structure prediction that includes the nearest-neighbor model and more.

    Science.gov (United States)

    Rivas, Elena; Lang, Raymond; Eddy, Sean R

    2012-02-01

    The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic models are generally easier to train and to use. To explore a range of probabilistic models of increasing complexity, and to directly compare probabilistic, thermodynamic, and discriminative approaches, we created TORNADO, a computational tool that can parse a wide spectrum of RNA grammar architectures (including the standard nearest-neighbor model and more) using a generalized super-grammar that can be parameterized with probabilities, energies, or arbitrary scores. By using TORNADO, we find that probabilistic nearest-neighbor models perform comparably to (but not significantly better than) discriminative methods. We find that complex statistical models are prone to overfitting RNA structure and that evaluations should use structurally nonhomologous training and test data sets. Overfitting has affected at least one published method (ContextFold). The most important barrier to improving statistical approaches for RNA secondary structure prediction is the lack of diversity of well-curated single-sequence RNA secondary structures in current RNA databases.
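
    For readers unfamiliar with the underlying machinery, the sketch below implements the much simpler Nussinov base-pair-maximization dynamic program, a minimal stand-in for the grammar/DP architectures that TORNADO generalizes; it scores single pairs only, unlike nearest-neighbor or SCFG models.

        # Nussinov dynamic program: maximize the number of nested base pairs in
        # a single RNA sequence. Deliberately simple; real folding models score
        # loops and stacks (thermodynamics) or assign probabilities (SCFGs).
        def nussinov(seq, min_loop=3):
            pairs = {("A", "U"), ("U", "A"), ("G", "C"),
                     ("C", "G"), ("G", "U"), ("U", "G")}
            n = len(seq)
            dp = [[0] * n for _ in range(n)]
            for span in range(min_loop + 1, n):
                for i in range(n - span):
                    j = i + span
                    best = dp[i][j - 1]                  # j unpaired
                    for k in range(i, j - min_loop):     # j pairs with k
                        if (seq[k], seq[j]) in pairs:
                            left = dp[i][k - 1] if k > i else 0
                            best = max(best, left + 1 + dp[k + 1][j - 1])
                    dp[i][j] = best
            return dp[0][n - 1]

        print(nussinov("GGGAAAUCC"))   # maximum number of nested base pairs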

  5. RATING MODELS AND INFORMATION TECHNOLOGIES APPLICATION FOR MANAGEMENT OF ADMINISTRATIVE-TERRITORIAL COMPLEXES

    Directory of Open Access Journals (Sweden)

    O. M. Pshinko

    2016-12-01

    Purpose. The paper aims to develop rating models and related information technologies for the strategic planning of administrative-territorial units' development, as well as for the multi-criteria control of the operation of inhomogeneous, multiparameter objects. Methodology. To solve these problems, a set of complementary methods is used: analysis of the multi-criteria properties of the objects of planning and management, diagnostics of state parameters, and forecasting and management of complex systems of different classes, whose states are estimated by sets of different quality indicators and represented by individual models of the operation process. A new information technology is proposed for implementing the strategic planning and management tasks; it uses procedures for solving typical tasks that are implemented in MS SQL Server. Findings. A new approach to the analysis and management of classes of complex systems based on ratings is proposed. Rating models for the analysis of multi-criteria, multiparameter systems are developed; these systems are managed on the basis of current and predicted state parameters under a non-uniform distribution of resources. A procedure for analysing the sensitivity of the rating model to changes in the parameters of the inhomogeneous resource distribution is developed. An information technology for the strategic planning and management of heterogeneous classes of objects based on the rating model is created. Originality. The article proposes a new approach that uses rating indicators as a general model for strategic planning of the development and management of heterogeneous objects characterized by sets of parameters measured on different scales.

  6. The complex formation-partition and partition-association models of solvent extraction of ions

    International Nuclear Information System (INIS)

    Siekierski, S.

    1976-01-01

    Two models of the extraction process have been proposed. In the first model, it is assumed that the partitioning neutral species is first formed in the aqueous phase and then transferred into the organic phase. The second model is based on the assumption that equivalent amounts of cations and anions are first transferred from the aqueous into the organic phase and then associated to form a neutral molecule. The role of the solubility parameter in extraction, and the relation between the solubility of liquid organic substances in water and the partition of complexes, have been discussed. The extraction of simple complexes and of complexes with organic ligands has been discussed using the first model. Partition coefficients have been calculated theoretically and compared with experimental values in some very simple cases. The extraction of ion pairs has been discussed using the partition-association model and the concept of single-ion partition coefficients. (author)

  7. Developing and Modeling Complex Social Interventions: Introducing the Connecting People Intervention

    Science.gov (United States)

    Webber, Martin; Reidy, Hannah; Ansari, David; Stevens, Martin; Morris, David

    2016-01-01

    Objectives: Modeling the processes involved in complex social interventions is important in social work practice, as it facilitates their implementation and translation into different contexts. This article reports the process of developing and modeling the connecting people intervention (CPI), a model of practice that supports people with mental health problems.

  8. Are our dynamic water quality models too complex? A comparison of a new parsimonious phosphorus model, SimplyP, and INCA-P

    Science.gov (United States)

    Jackson-Blake, L. A.; Sample, J. E.; Wade, A. J.; Helliwell, R. C.; Skeffington, R. A.

    2017-07-01

    Catchment-scale water quality models are increasingly popular tools for exploring the potential effects of land management, land use change and climate change on water quality. However, the dynamic, catchment-scale nutrient models in common usage are complex, with many uncertain parameters requiring calibration, limiting their usability and robustness. A key question is whether this complexity is justified. To explore this, we developed a parsimonious phosphorus model, SimplyP, incorporating a rainfall-runoff model and a biogeochemical model able to simulate daily streamflow, suspended sediment, and particulate and dissolved phosphorus dynamics. The model's complexity was compared to that of one popular nutrient model, INCA-P, and the performance of the two models was compared in a small rural catchment in northeast Scotland. For three land use classes, fewer than six SimplyP parameters must be determined through calibration and the rest may be based on measurements, while INCA-P has around 40 unmeasurable parameters. Despite substantially simpler process representation, SimplyP performed comparably to INCA-P in both calibration and validation and produced similar long-term projections in response to changes in land management. The results support the hypothesis that INCA-P is overly complex for the study catchment. We hope our findings will help prompt wider model comparison exercises, as well as debate among the water quality modeling community as to whether today's models are fit for purpose. Simpler models such as SimplyP have the potential to be useful management and research tools, building blocks for future model development (prototype code is freely available), or benchmarks against which more complex models could be evaluated.
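
    To give a flavor of the parsimonious design philosophy, a one-parameter rainfall-runoff component can look like the sketch below; this is an illustrative single linear reservoir, not SimplyP itself (whose prototype code the authors state is freely available), and the forcing data are synthetic.

        # Single linear reservoir: effective rainfall fills a storage that
        # drains in proportion to its content, producing daily streamflow.
        import numpy as np

        rng = np.random.default_rng(2)
        rain = rng.exponential(2.0, size=365)     # daily effective rainfall (mm)
        tau = 10.0                                 # reservoir time constant (days)

        storage, flow = 0.0, np.empty_like(rain)
        for t, r in enumerate(rain):
            storage += r                           # add today's rainfall
            q = storage / tau                      # linear-reservoir outflow
            storage -= q
            flow[t] = q

        print("mean simulated flow: %.2f mm/day" % flow.mean())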

  9. Cyclodextrin--piroxicam inclusion complexes: analyses by mass spectrometry and molecular modelling

    Science.gov (United States)

    Gallagher, Richard T.; Ball, Christopher P.; Gatehouse, Deborah R.; Gates, Paul J.; Lobell, Mario; Derrick, Peter J.

    1997-11-01

    Mass spectrometry has been used to investigate the natures of non-covalent complexes formed between the anti-inflammatory drug piroxicam and [alpha]-, [beta]- and [gamma]-cyclodextrins. Energies of these complexes have been calculated by means of molecular modelling. There is a correlation between peak intensities in the mass spectra and the calculated energies.

  10. Complex singlet extension of the standard model

    International Nuclear Information System (INIS)

    Barger, Vernon; McCaskey, Mathew; Langacker, Paul; Ramsey-Musolf, Michael; Shaughnessy, Gabe

    2009-01-01

    We analyze a simple extension of the standard model (SM) obtained by adding a complex singlet to the scalar sector (cxSM). We show that the cxSM can contain one or two viable cold dark matter candidates and analyze the conditions on the parameters of the scalar potential that yield the observed relic density. When the cxSM potential contains a global U(1) symmetry that is both softly and spontaneously broken, it contains both a viable dark matter candidate and the ingredients necessary for a strong first order electroweak phase transition as needed for electroweak baryogenesis. We also study the implications of the model for discovery of a Higgs boson at the Large Hadron Collider.
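
    For orientation, a schematic form of the cxSM scalar potential is shown below; the coefficient names follow one common convention in the literature and are not quoted from the paper, so treat the expression as indicative only.

        V(H,S) = \frac{m^2}{2} H^\dagger H + \frac{\lambda}{4} (H^\dagger H)^2
               + \frac{\delta_2}{2} H^\dagger H \,|S|^2
               + \frac{b_2}{2} |S|^2 + \frac{d_2}{4} |S|^4
               + \left( \frac{b_1}{4} S^2 + a_1 S + \mathrm{c.c.} \right)

    Here H is the Higgs doublet and S the complex singlet; the a_1 and b_1 terms are the soft breaking of the global U(1) symmetry that the abstract refers to.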

  11. Complexity and agent-based modelling in urban research

    DEFF Research Database (Denmark)

    Fertner, Christian

    Urbanisation processes are results of a broad variety of actors or actor groups and their behaviour and decisions, based on different experiences, knowledge, resources, values, etc. The decisions made are often at a micro/individual level but result in macro/collective behaviour with influence on the bigger system. Traditional scientific methods or theories often tried to simplify, not accounting for the complex relations of actors and decision-making. The introduction of computers in simulation made new approaches in modelling possible, for example agent-based modelling (ABM), dealing with such complex relations. In urban research...

  12. A structural model of the E. coli PhoB Dimer in the transcription initiation complex

    Directory of Open Access Journals (Sweden)

    Tung Chang-Shung

    2012-03-01

    Background: There exist more than 78,000 experimentally determined protein and/or nucleic acid structures. Only a small portion of these structures corresponds to protein complexes. While homology modeling is able to exploit knowledge-based potentials of side-chain rotamers and backbone motifs to infer structures for new proteins, no such general method exists to extend our understanding of protein interaction motifs to novel protein complexes. Results: We use a Motif Binding Geometries (MBG) approach to infer the structure of a protein complex from the database of complexes of homologous proteins taken from other contexts (such as the helix-turn-helix motif binding double-stranded DNA), and demonstrate its utility on one of the more important regulatory complexes in biology: the RNA polymerase initiating transcription under conditions of phosphate starvation. The modeled PhoB/RNAP/σ-factor/DNA complex is stereochemically reasonable, has sufficient interfacial Solvent Excluded Surface Areas (SESAs) to provide adequate binding strength, is physically meaningful for transcription regulation, and is consistent with a variety of known experimental constraints. Conclusions: Based on a straightforward and easy-to-comprehend concept, "proteins and protein domains that fold similarly could interact similarly", a structural model of the PhoB dimer in the transcription initiation complex has been developed. This approach could be extended to enable structural modeling and prediction of other biomolecular complexes. Just as models of individual proteins provide insight into molecular recognition, catalytic mechanism, and substrate specificity, models of protein complexes will provide understanding of the combinatorial rules of cellular regulation and signaling.

  13. A framework for modelling the complexities of food and water security under globalisation

    Science.gov (United States)

    Dermody, Brian J.; Sivapalan, Murugesu; Stehfest, Elke; van Vuuren, Detlef P.; Wassen, Martin J.; Bierkens, Marc F. P.; Dekker, Stefan C.

    2018-01-01

    We present a new framework for modelling the complexities of food and water security under globalisation. The framework sets out a method to capture regional and sectoral interdependencies and cross-scale feedbacks within the global food system that contribute to emergent water use patterns. The framework integrates aspects of existing models and approaches in the fields of hydrology and integrated assessment modelling. The core of the framework is a multi-agent network of city agents connected by infrastructural trade networks. Agents receive socio-economic and environmental constraint information from integrated assessment models and hydrological models respectively and simulate complex, socio-environmental dynamics that operate within those constraints. The emergent changes in food and water resources are aggregated and fed back to the original models with minimal modification of the structure of those models. It is our conviction that the framework presented can form the basis for a new wave of decision tools that capture complex socio-environmental change within our globalised world. In doing so they will contribute to illuminating pathways towards a sustainable future for humans, ecosystems and the water they share.

  14. Comparing flood loss models of different complexity

    Science.gov (United States)

    Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Riggelsen, Carsten; Scherbaum, Frank; Merz, Bruno

    2013-04-01

    Any deliberation on flood risk requires the consideration of potential flood losses. In particular, reliable flood loss models are needed to evaluate the cost-effectiveness of mitigation measures, to assess vulnerability, and for comparative risk analysis and financial appraisal during and after floods. In recent years, considerable improvements have been made both in the data basis and in the methodological approaches used for the development of flood loss models. Despite this, flood loss models remain an important source of uncertainty, and their temporal and spatial transferability is still limited. This contribution investigates the predictive capability of flood loss models of different complexity in a split-sample, cross-regional validation approach. For this purpose, flood loss models of different complexity, i.e. based on different numbers of explanatory variables, are learned from a set of damage records obtained from a survey after the Elbe flood in 2002. The validation of model predictions is carried out for different flood events in the Elbe and Danube river basins in 2002, 2005 and 2006, for which damage records are available from surveys after the flood events. The models investigated are a stage-damage model, the rule-based model FLEMOps+r, as well as novel model approaches derived using the data mining techniques of regression trees and Bayesian networks. The Bayesian network approach to flood loss modelling provides attractive additional information concerning the probability distribution of both model predictions and explanatory variables.
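
    The simplest model class mentioned, a stage-damage function, reduces to a one-variable curve; the square-root shape and coefficient below are hypothetical placeholders, not fitted to the Elbe or Danube damage records.

        # Illustrative stage-damage model: relative building loss as a function
        # of inundation depth only. Shape and coefficient are made up.
        import numpy as np

        def relative_loss(depth_m, alpha=0.35):
            """Fraction of building value lost for a given water depth."""
            return np.clip(alpha * np.sqrt(np.maximum(depth_m, 0.0)), 0.0, 1.0)

        for d in (0.1, 0.5, 1.0, 2.0):
            print("depth %.1f m -> loss %.0f%%" % (d, 100 * relative_loss(d)))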

  15. Modeling and simulation of complex systems a framework for efficient agent-based modeling and simulation

    CERN Document Server

    Siegfried, Robert

    2014-01-01

    Robert Siegfried presents a framework for efficient agent-based modeling and simulation of complex systems. He compares different approaches for describing the structure and dynamics of agent-based models in detail. Based on this evaluation, the author introduces the "General Reference Model for Agent-based Modeling and Simulation" (GRAMS). Furthermore, he presents parallel and distributed simulation approaches for the execution of agent-based models, from small scale to very large scale. The author shows how agent-based models may be executed by different simulation engines that utilize the underlying hardware.

  16. Post-closure biosphere assessment modelling: comparison of complex and more stylised approaches

    Energy Technology Data Exchange (ETDEWEB)

    Walke, Russell C. [Quintessa Limited, The Hub, 14 Station Road, Henley-on-Thames (United Kingdom); Kirchner, Gerald [University of Hamburg, ZNF, Beim Schlump 83, 20144 Hamburg (Germany); Xu, Shulan; Dverstorp, Bjoern [Swedish Radiation Safety Authority, SE-171 16 Stockholm (Sweden)

    2014-07-01

    Geological facilities are the preferred option for disposal of high-level radioactive waste, due to their potential to provide isolation from the surface environment (biosphere) on very long time scales. Safety cases developed in support of geological disposal include assessment of potential impacts on humans and wildlife in order to demonstrate compliance with regulatory criteria. As disposal programmes move from site-independent/generic assessments through site selection to applications for construction/operation and closure, the degree of understanding of the present-day site increases, together with the amount of site-specific information. Assessments need to strike a balance between simple models and more complex approaches that draw more extensively on this site-specific information. This paper explores the relative merits of complex versus more stylised biosphere models in the context of a site-specific assessment. The complex biosphere model was developed by the Swedish Nuclear Fuel and Waste Management Co (SKB) for the Forsmark candidate site for a spent nuclear fuel repository in Sweden. SKB's model is built on a landscape evolution model, whereby radionuclide releases to distinct hydrological basins/sub-catchments (termed 'objects') are represented as they evolve through land rise and climate change. The site is located on the Baltic coast, with a terrestrial landscape including lakes, mires, forest and agriculture. The land at the site is projected to continue to rise due to post-glacial uplift, leading to ecosystem transitions in excess of ten thousand years. The simple biosphere models developed for this study include the most plausible transport processes and represent various types of ecosystem. The complex biosphere models adopt a relatively coarse representation of the near-surface strata, which is shown to be conservative, but also to under-estimate the time scale required for potential doses to reach equilibrium with radionuclide fluxes.

  17. PeTTSy: a computational tool for perturbation analysis of complex systems biology models.

    Science.gov (United States)

    Domijan, Mirela; Brown, Paul E; Shulgin, Boris V; Rand, David A

    2016-03-10

    Over the last decade, sensitivity analysis techniques have been shown to be very useful for analysing complex and high-dimensional Systems Biology models. However, many of the currently available toolboxes have either used parameter sampling, been focused on a restricted set of model observables of interest, studied optimisation of an objective function, or have not dealt with multiple simultaneous model parameter changes where the changes can be permanent or temporary. Here we introduce our new, freely downloadable toolbox, PeTTSy (Perturbation Theory Toolbox for Systems). PeTTSy is a package for MATLAB which implements a wide array of techniques for the perturbation theory and sensitivity analysis of large and complex ordinary differential equation (ODE) based models. PeTTSy is a comprehensive modelling framework that introduces a number of new approaches and that fully addresses the analysis of oscillatory systems. It examines the sensitivity of models to perturbations of parameters, where the perturbation timing, strength, length and overall shape can be controlled by the user. This can be done in a system-global setting; namely, the user can determine how many parameters to perturb, by how much and for how long. PeTTSy also offers the user the ability to explore the effect of the parameter perturbations on many different types of outputs: period, phase (timing of peak) and model solutions. PeTTSy can be employed on a wide range of mathematical models, including free-running and forced oscillators and signalling systems. To enable experimental optimisation using the Fisher Information Matrix, it efficiently allows one to combine multiple variants of a model (i.e. a model with multiple experimental conditions) in order to determine the value of new experiments. It is especially useful in the analysis of large and complex models involving many variables and parameters. PeTTSy is a comprehensive tool for analysing large and complex models of regulatory and signalling systems.
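
    One of the analyses described, the sensitivity of an oscillator's period to a permanent parameter change, can be approximated crudely by finite differences; the sketch below does this for the van der Pol oscillator as a stand-in model (PeTTSy itself is a MATLAB toolbox with far more machinery).

        # Finite-difference sensitivity of an oscillator's period to a permanent
        # parameter perturbation, using the van der Pol oscillator as a toy model.
        import numpy as np
        from scipy.integrate import solve_ivp

        def period(mu):
            rhs = lambda t, y: [y[1], mu * (1 - y[0] ** 2) * y[1] - y[0]]
            sol = solve_ivp(rhs, (0, 200), [2.0, 0.0], max_step=0.01)
            # estimate the period from upward zero crossings in the settled tail
            tail = slice(sol.t.size // 2, None)
            t, x = sol.t[tail], sol.y[0][tail]
            crossings = t[1:][(x[:-1] < 0) & (x[1:] >= 0)]
            return np.diff(crossings).mean()

        mu, h = 1.0, 0.01
        dT_dmu = (period(mu + h) - period(mu - h)) / (2 * h)
        print("period T(mu=1) ~ %.2f, dT/dmu ~ %.2f" % (period(mu), dT_dmu))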

  18. The complex sine-Gordon model on a half line

    International Nuclear Information System (INIS)

    Tzamtzis, Georgios

    2003-01-01

    In this thesis, we study the complex sine-Gordon model on a half line. The model in the bulk is an integrable (1+1)-dimensional field theory which is U(1) gauge invariant and comprises a generalisation of the sine-Gordon theory. It admits soliton and breather solutions. By introducing suitably selected boundary conditions we may consider the model on a half line. Through such conditions the model can be shown to remain integrable, and various aspects of the boundary theory can be examined. The first chapter serves as a brief introduction to some basic concepts of integrability and soliton solutions. As an example of an integrable system with soliton solutions, the sine-Gordon model is presented both in the bulk and on a half line. These results will serve as a useful guide for the model at hand. The introduction finishes with a brief overview of the two methods that will be used in the fourth chapter to obtain the quantum spectrum of the boundary complex sine-Gordon model. In the second chapter the model is properly introduced, along with a brief literature review. Different realisations of the model and their connexions are discussed. The vacuum of the theory is investigated. Soliton solutions are given and a discussion on the existence of breathers follows. Finally, the collapse of breather solutions to single solitons is demonstrated and the chapter concludes with a different approach to the breather problem. In the third chapter, we construct the lowest conserved currents and through them we find suitable boundary conditions that allow for their conservation in the presence of a boundary. The boundary term is added to the Lagrangian and the vacuum is re-examined in the half-line case. The reflection of solitons from the boundary is studied and the time delay is calculated. Finally, we address the existence of boundary-bound states. In the fourth chapter we study the quantum complex sine-Gordon model. We begin with a brief overview of the theory in
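
    For reference, one common form of the bulk complex sine-Gordon Lagrangian is shown below; conventions (signs, couplings, field normalizations) vary between references, so this is indicative rather than the exact form used in the thesis.

        \mathcal{L} = \frac{\partial_\mu \psi^* \, \partial^\mu \psi}{1 - \psi^* \psi} - m^2 \, \psi^* \psi

    The U(1) invariance mentioned above is manifest as \psi \to e^{i\alpha}\psi, and restricting \psi to be real, via \psi = \sin(\phi/2), recovers the ordinary sine-Gordon model.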

  19. Association of a multi-synthetase complex with translating ribosomes in the archaeon Thermococcus kodakarensis

    DEFF Research Database (Denmark)

    Raina, Medha; Elgamal, Sara; Santangelo, Thomas J

    2012-01-01

    [Abstract not recoverable from this record: in its place is a fragment of a table listing proteins identified as associating with the multi-synthetase complex, each followed by a numeric identifier. Recognizable entries include a SAM-dependent methyltransferase, GTP cyclohydrolase, DNA topoisomerase VI subunits A and B, a type A flavoprotein, NAD(P)H:rubredoxin oxidoreductase, cofactor-independent phosphoglycerate mutase, glycerol kinase, ribose-5-phosphate isomerase A, isopentenyl pyrophosphate isomerase (mevalonate pathway), 5'-methylthioadenosine phosphorylase, cysteine desulfurase, hydrogenase maturation protein HypF, phosphopyruvate hydratase, fructose-1,6-bisphosphatase, aspartate carbamoyltransferase catalytic subunit, and a bipolar DNA helicase.]

  20. On the general procedure for modelling complex ecological systems

    International Nuclear Information System (INIS)

    He Shanyu.

    1987-12-01

    In this paper, the principle of a general procedure for modelling complex ecological systems, the Adaptive Superposition Procedure (ASP), is briefly stated. The result of applying ASP in a national project for ecological regionalization is also described. (author). 3 refs.

  1. QMU as an approach to strengthening the predictive capabilities of complex models.

    Energy Technology Data Exchange (ETDEWEB)

    Gray, Genetha Anne.; Boggs, Paul T.; Grace, Matthew D.

    2010-09-01

    Complex systems are made up of multiple interdependent parts, and the behavior of the entire system cannot always be directly inferred from the behavior of the individual parts. They are nonlinear, and system responses are not necessarily additive. Examples of complex systems include energy, cyber and telecommunication infrastructures, human and animal social structures, and biological structures such as cells. To meet the goals of infrastructure development, maintenance, and protection for cyber-related complex systems, novel modeling and simulation technology is needed. Sandia has shown success using M&S in the nuclear weapons (NW) program. However, complex systems represent a significant challenge and a relative departure from classical M&S exercises, and many of the scientific and mathematical M&S processes must be re-envisioned. Specifically, in the NW program, requirements and acceptable margins for performance, resilience, and security are well-defined and given quantitatively from the start. The Quantification of Margins and Uncertainties (QMU) process helps to assess whether or not these safety, reliability and performance requirements have been met after a system has been developed. In this sense, QMU is used as a sort of check that requirements have been met once the development process is completed. In contrast, performance requirements and margins may not have been defined a priori for many complex systems (i.e. the Internet, electrical distribution grids, etc.), particularly not in quantitative terms. This project addresses this fundamental difference by investigating the use of QMU at the start of the design process for complex systems. Three major tasks were completed. First, the characteristics of the cyber infrastructure problem were collected and considered in the context of QMU-based tools. Second, UQ methodologies for the quantification of model discrepancies were considered in the context of statistical models of cyber activity. Third, ...

  2. First results from the International Urban Energy Balance Model Comparison: Model Complexity

    Science.gov (United States)

    Blackett, M.; Grimmond, S.; Best, M.

    2009-04-01

    A great variety of urban energy balance models has been developed. These vary in complexity from simple schemes that represent the city as a slab, through those which model various facets (i.e. road, walls and roof), to more complex urban forms (including street canyons with intersections) and features (such as vegetation cover and anthropogenic heat fluxes). Some schemes also incorporate detailed representations of momentum and energy fluxes distributed throughout various layers of the urban canopy layer. The models differ in the parameters they require to describe the site and in the demands they make on computational processing power. Many of these models have been evaluated using observational datasets, but to date no controlled comparisons have been conducted. Urban surface energy balance models provide a means to predict the energy exchange processes which influence factors such as urban temperature, humidity, atmospheric stability and winds. These all need to be modelled accurately to capture features such as the urban heat island effect and to provide key information for dispersion and air quality modelling. A comparison of the various models available will assist in improving current and future models and in formulating research priorities for future observational campaigns within urban areas. In this presentation we will summarise the initial results of this international urban energy balance model comparison. In particular, the relative performance of the models involved will be compared based on their degree of complexity. These results will inform us on ways in which we can improve the modelling of air quality within, and the climate impacts of, global megacities. The methodology employed in conducting this comparison followed that used in PILPS (the Project for Intercomparison of Land-Surface Parameterization Schemes), which is also endorsed by the GEWEX Global Land Atmosphere System Study (GLASS) panel. In all cases, models were run...

  3. Dynamical Behaviors in Complex-Valued Love Model With or Without Time Delays

    Science.gov (United States)

    Deng, Wei; Liao, Xiaofeng; Dong, Tao

    2017-12-01

    In this paper, a novel version of a nonlinear model, a complex-valued love model with two time delays between two individuals in a love affair, is proposed. A notable feature of this model is that we separate the emotion of one individual into real and imaginary parts, to represent the variation and complexity of psychophysiological emotion in a romantic relationship rather than restricting it to the real domain, which makes the model much closer to reality. This is because love is a complicated cognitive and social phenomenon, full of complexity, diversity and unpredictability, which refers to the coexistence of different aspects of feelings, states and attitudes ranging from joy and trust to sadness and disgust. By analyzing the associated characteristic equation of the linearized equations for our model, it is found that a Hopf bifurcation occurs when the sum of the time delays passes through a sequence of critical values. The stability of the bifurcating cyclic love dynamics is also derived by applying the normal form theory and the center manifold theorem. In addition, it is shown that, for some appropriately chosen parameters, chaotic behaviors can appear even without time delay.

  4. MODELS AND METHODS OF SAFETY-ORIENTED PROJECT MANAGEMENT OF DEVELOPMENT OF COMPLEX SYSTEMS: METHODOLOGICAL APPROACH

    Directory of Open Access Journals (Sweden)

    Олег Богданович ЗАЧКО

    2016-03-01

    The methods and models of safety-oriented project management of the development of complex systems are proposed, resulting from the convergence of existing approaches in project management, in contrast to the mechanism of value-oriented management. A cognitive model of safety-oriented project management of the development of complex systems is developed, which provides a synergistic effect: moving the system from the original (pre-project) condition to one that is optimal from the viewpoint of life safety (the post-project state). An approach to assessing project complexity is proposed, which consists in taking into account the seasonal component of the time characteristic of the life cycles of complex organizational and technical systems with occupancy. This made it possible to take the seasonal component into account in simulation models of the life cycle of product operation in a complex organizational and technical system, modeling the critical points of operation of systems with occupancy, which forms a new methodology for safety-oriented management of projects, programs and portfolios of projects with the formalization of the elements of complexity.

  5. Where to from here? Future applications of mental models of complex performance

    International Nuclear Information System (INIS)

    Hahn, H.A.; Nelson, W.R.; Blackman, H.S.

    1988-01-01

    The purpose of this paper is to raise issues for discussion regarding the applications of mental models in the study of complex performance. Applications for training, expert systems and decision aids, job selection, workstation design, and other complex environments are considered. 1 ref

  6. Expectancy-Violation and Information-Theoretic Models of Melodic Complexity

    Directory of Open Access Journals (Sweden)

    Tuomas Eerola

    2016-07-01

    The present study assesses two types of models for melodic complexity: one based on expectancy violations and the other related to an information-theoretic account of redundancy in music. Seven different datasets spanning artificial sequences, folk and pop songs were used to refine and assess the models. The refinement eliminated unnecessary components from both types of models. The final analysis pitted three variants of the two model types against each other and could explain 46–74% of the variance in the ratings across the datasets. The most parsimonious models were identified with an information-theoretic criterion. This suggested that the simplified expectancy-violation models were the most efficient for these sets of data. However, the differences between all optimized models were subtle in terms of both performance and simplicity.
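
    An "information-theoretic criterion" for parsimony is typically an AIC-style penalty on the number of free parameters. A sketch under the assumption of Gaussian residuals, so AIC can be computed from the residual sum of squares; the RSS values and parameter counts below are invented for illustration.

        import numpy as np

        def aic_from_rss(rss, n, k):
            # AIC for a least-squares fit with n observations and k parameters,
            # assuming Gaussian residuals: AIC = n*ln(RSS/n) + 2k.
            return n * np.log(rss / n) + 2 * k

        n = 120  # hypothetical number of rated melodies
        candidates = {"simplified expectancy-violation": (14.2, 3),
                      "information-theoretic":           (13.8, 6)}
        for name, (rss, k) in candidates.items():
            print(name, "AIC =", round(aic_from_rss(rss, n, k), 2))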

  7. Per Aspera ad Astra: Through Complex Population Modeling to Predictive Theory.

    Science.gov (United States)

    Topping, Christopher J; Alrøe, Hugo Fjelsted; Farrell, Katharine N; Grimm, Volker

    2015-11-01

    Population models in ecology are often not good at predictions, even if they are complex and seem to be realistic enough. The reason for this might be that Occam's razor, which is key for minimal models exploring ideas and concepts, has been too uncritically adopted for more realistic models of systems. This can tie models too closely to certain situations, thereby preventing them from predicting the response to new conditions. We therefore advocate a new kind of parsimony to improve the application of Occam's razor. This new parsimony balances two contrasting strategies for avoiding errors in modeling: avoiding inclusion of nonessential factors (false inclusions) and avoiding exclusion of sometimes-important factors (false exclusions). It involves a synthesis of traditional modeling and analysis, used to describe the essentials of mechanistic relationships, with elements that are included in a model because they have been reported to be or can arguably be assumed to be important under certain conditions. The resulting models should be able to reflect how the internal organization of populations change and thereby generate representations of the novel behavior necessary for complex predictions, including regime shifts.

  8. Holocene glacier variability: three case studies using an intermediate-complexity climate model

    NARCIS (Netherlands)

    Weber, S.L.; Oerlemans, J.

    2003-01-01

    Synthetic glacier length records are generated for the Holocene epoch using a process-based glacier model coupled to the intermediate-complexity climate model ECBilt. The glacier model consists of a mass-balance component and an ice-flow component. The climate model is forced by the insolation change

  9. Inclusion Complexes of Sunscreen Agents with β-Cyclodextrin: Spectroscopic and Molecular Modeling Studies

    Directory of Open Access Journals (Sweden)

    Nathir A. F. Al-Rawashdeh

    2013-01-01

    The inclusion complexes of selected sunscreen agents, namely, oxybenzone (Oxy), octocrylene (Oct), and ethylhexyl-methoxycinnamate (Cin), with β-cyclodextrin (β-CD) were studied by UV-Vis spectroscopy, differential scanning calorimetry (DSC), 13C NMR techniques, and molecular mechanics (MM) calculations and modeling. Molecular modeling (MM) study of the entire process of the formation of 1:1 stoichiometry sunscreen agent/β-cyclodextrin structures has been used to contribute to the understanding and rationalization of the experimental results. Molecular mechanics calculations, together with 13C NMR measurements, for the complex with β-CD have been used to describe details of the structural, energetic, and dynamic features of the host-guest complex. Accurate structures of CD inclusion complexes have been derived from molecular mechanics (MM) calculations and modeling. The photodegradation reaction of the sunscreen agents' molecules in lotion was explored using UV-Vis spectroscopy. It has been demonstrated that the photostability of these selected sunscreen agents has been enhanced upon forming inclusion complexes with β-CD in lotion. The results of this study demonstrate that β-CD can be utilized as a photostabilizer additive for enhancing the photostability of the selected sunscreen agents' molecules.

  10. The Model of Complex Structure of Quark

    Science.gov (United States)

    Liu, Rongwu

    2017-09-01

    In Quantum Chromodynamics, the quark is known as a kind of point-like fundamental particle which carries mass, charge, color, and flavor; strong interaction takes place between quarks by means of exchanging intermediate particles (gluons). An important consequence of this theory is that strong interaction is a kind of short-range force, and it has the features of ``asymptotic freedom'' and ``quark confinement''. In order to reveal the nature of strong interaction, the ``bag'' model of vacuum and the ``string'' model of string theory were proposed in the context of quantum mechanics, but neither of them can provide a clear interaction mechanism. This article formulates a new mechanism by proposing a model of the complex structure of the quark, which can be outlined as follows: (1) The quark (as well as the electron, etc.) is a kind of complex structure: it is composed of a fundamental particle (fundamental matter of mass and electricity) and a fundamental volume field (fundamental matter of flavor and color) which exists in the form of a limited volume; the fundamental particle lies in the center of the fundamental volume field and forms the ``nucleus'' of the quark. (2) Like the static electric force, the color field force between quarks has a classical form: it is proportional to the square of the color quantity carried by each color field, and inversely proportional to the area of the cross section of the overlapping color fields along the force direction; it has the properties of overlap, saturation, non-centrality, and constancy. (3) Any volume field undergoes deformation when interacting with another volume field; the deformation force follows Hooke's law. (4) The phenomena of ``asymptotic freedom'' and ``quark confinement'' are the result of the color field force and the deformation force.

  11. On sampling and modeling complex systems

    International Nuclear Information System (INIS)

    Marsili, Matteo; Mastromatteo, Iacopo; Roudi, Yasser

    2013-01-01

    The study of complex systems is limited by the fact that only a few variables are accessible for modeling and sampling, which are not necessarily the most relevant ones to explain the system behavior. In addition, empirical data typically undersample the space of possible states. We study a generic framework where a complex system is seen as a system of many interacting degrees of freedom, which are known only in part, that optimize a given function. We show that the underlying distribution with respect to the known variables has the Boltzmann form, with a temperature that depends on the number of unknown variables. In particular, when the influence of the unknown degrees of freedom on the known variables is not too irregular, the temperature decreases as the number of variables increases. This suggests that models can be predictable only when the number of relevant variables is less than a critical threshold. Concerning sampling, we argue that the information that a sample contains on the behavior of the system is quantified by the entropy of the frequency with which different states occur. This allows us to characterize the properties of maximally informative samples: within a simple approximation, the most informative frequency size distributions have power-law behavior, and Zipf’s law emerges at the crossover between the undersampled regime and the regime where the sample contains enough statistics to make inferences on the behavior of the system. These ideas are illustrated in some applications, showing that they can be used to identify relevant variables or to select the most informative representations of data, e.g. in data clustering. (paper)
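
    The sample-information idea can be sketched directly: count how often each state occurs, then take the entropy of those frequencies. The estimator below is an illustrative reading of the abstract's description, not necessarily the paper's exact formula.

        import numpy as np
        from collections import Counter

        def frequency_entropy(samples):
            # k = number of times a state was observed; m_k = number of states
            # seen exactly k times. H = -sum_k (k*m_k/M) * log(k*m_k/M),
            # with M the total sample size.
            M = len(samples)
            state_counts = Counter(samples)          # state -> k
            m_k = Counter(state_counts.values())     # k -> m_k
            return -sum((k * mk / M) * np.log(k * mk / M)
                        for k, mk in m_k.items())

        # illustrative: a heavy-tailed sample spreads observations over many
        # frequency classes, a uniform sample over very few
        rng = np.random.default_rng(0)
        print("zipf-like sample:", round(frequency_entropy(rng.zipf(2.0, 5000)), 3))
        print("uniform sample  :", round(frequency_entropy(rng.integers(0, 50, 5000)), 3))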

  12. Complex Coronary Hemodynamics - Simple Analog Modelling as an Educational Tool.

    Science.gov (United States)

    Parikh, Gaurav R; Peter, Elvis; Kakouros, Nikolaos

    2017-01-01

    Invasive coronary angiography remains the cornerstone for evaluation of coronary stenoses despite there being a poor correlation between luminal loss assessment by coronary luminography and myocardial ischemia. This is especially true for coronary lesions deemed moderate by visual assessment. Coronary pressure-derived fractional flow reserve (FFR) has emerged as the gold standard for the evaluation of the hemodynamic significance of coronary artery stenosis; it is cost-effective and leads to improved patient outcomes. There are, however, several limitations to the use of FFR, including the evaluation of serial stenoses. In this article, we discuss the electronic-hydraulic analogy and the utility of simple electrical modelling to mimic the coronary circulation and coronary stenoses. We exemplify the effect of tandem coronary lesions on the FFR by modelling a patient with sequential disease segments and complex anatomy. We believe that such computational modelling can serve as a powerful educational tool to help clinicians better understand the complexity of coronary hemodynamics and improve patient care.
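
    The electronic-hydraulic analogy maps pressure to voltage and flow to current, so tandem stenoses behave like series resistors. A minimal sketch with hypothetical, linear resistance values (real stenoses are flow-dependent, which is part of why serial lesions are hard to assess):

        # Pressure ~ voltage, flow ~ current; tandem stenoses act as series
        # resistors. FFR = distal pressure / aortic pressure at hyperemia.
        Pa = 100.0               # aortic pressure (mmHg), hypothetical
        Pv = 0.0                 # venous pressure
        R_micro = 1.0            # microvascular resistance at hyperemia (arb. units)
        stenoses = [0.15, 0.25]  # resistances of two serial lesions (same units)

        R_total = R_micro + sum(stenoses)
        Q = (Pa - Pv) / R_total          # Ohm's-law analog for coronary flow
        Pd = Pa - Q * sum(stenoses)      # pressure distal to both lesions
        print("FFR =", round(Pd / Pa, 3))

        # Removing one lesion changes the flow through the other, so the lesions
        # cannot be assessed independently from a single pullback measurement.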

  13. EVALUATING PREDICTIVE ERRORS OF A COMPLEX ENVIRONMENTAL MODEL USING A GENERAL LINEAR MODEL AND LEAST SQUARE MEANS

    Science.gov (United States)

    A General Linear Model (GLM) was used to evaluate the deviation of predicted values from expected values for a complex environmental model. For this demonstration, we used the default level interface of the Regional Mercury Cycling Model (R-MCM) to simulate epilimnetic total mer...

  14. A viscoelastic-viscoplastic model for short-fibre reinforced polymers with complex fibre orientations

    Directory of Open Access Journals (Sweden)

    Nciri M.

    2015-01-01

    This paper presents an innovative approach for the modelling of viscous behaviour of short-fibre reinforced composites (SFRC) with complex distributions of fibre orientations and for a wide range of strain rates. As an alternative to more complex homogenisation methods, the model is based on an additive decomposition of the state potential for the computation of the composite's macroscopic behaviour. Thus, the composite material is seen as the assembly of a matrix medium and several linear elastic fibre media. The division of short fibres into several families means that complex distributions of orientation or random orientation can be easily modelled. The matrix behaviour is strain-rate sensitive, i.e. viscoelastic and/or viscoplastic. Viscoelastic constitutive laws are based on a generalised linear Maxwell model and the modelling of the viscoplasticity is based on an overstress approach. The model is tested for the case of a polypropylene reinforced with short-glass fibres with distributed orientations and subjected to uniaxial tensile tests, in different loading directions and under different strain rates. Results demonstrate the efficiency of the model over a wide range of strain rates.

  15. A framework for modelling the complexities of food and water security under globalisation

    Directory of Open Access Journals (Sweden)

    B. J. Dermody

    2018-01-01

    We present a new framework for modelling the complexities of food and water security under globalisation. The framework sets out a method to capture regional and sectoral interdependencies and cross-scale feedbacks within the global food system that contribute to emergent water use patterns. The framework integrates aspects of existing models and approaches in the fields of hydrology and integrated assessment modelling. The core of the framework is a multi-agent network of city agents connected by infrastructural trade networks. Agents receive socio-economic and environmental constraint information from integrated assessment models and hydrological models respectively and simulate complex, socio-environmental dynamics that operate within those constraints. The emergent changes in food and water resources are aggregated and fed back to the original models with minimal modification of the structure of those models. It is our conviction that the framework presented can form the basis for a new wave of decision tools that capture complex socio-environmental change within our globalised world. In doing so they will contribute to illuminating pathways towards a sustainable future for humans, ecosystems and the water they share.

  16. An investigation of nitrile transforming enzymes in the chemo-enzymatic synthesis of the taxol sidechain.

    Science.gov (United States)

    Wilding, Birgit; Veselá, Alicja B; Perry, Justin J B; Black, Gary W; Zhang, Meng; Martínková, Ludmila; Klempier, Norbert

    2015-07-28

    Paclitaxel (taxol) is an antimicrotubule agent widely used in the treatment of cancer. Taxol is prepared in a semisynthetic route by coupling the N-benzoyl-(2R,3S)-3-phenylisoserine sidechain to the baccatin III core structure. Precursors of the taxol sidechain have previously been prepared in chemoenzymatic approaches using acylases, lipases, and reductases, mostly featuring the enantioselective, enzymatic step early in the reaction pathway. Here, nitrile hydrolysing enzymes, namely nitrile hydratases and nitrilases, are investigated for the enzymatic hydrolysis of two different sidechain precursors. Both sidechain precursors, an open-chain α-hydroxy-β-amino nitrile and a cyanodihydrooxazole, are suitable for coupling to baccatin III directly after the enzymatic step. An extensive set of nitrilases and nitrile hydratases was screened for activity and selectivity in the hydrolysis of the two taxol sidechain precursors and their epimers. A number of nitrilases and nitrile hydratases converted both sidechain precursors and their epimers.

  17. A density-based clustering model for community detection in complex networks

    Science.gov (United States)

    Zhao, Xiang; Li, Yantao; Qu, Zehui

    2018-04-01

    Network clustering (or graph partitioning) is an important technique for uncovering the underlying community structures in complex networks, which has been widely applied in various fields including astronomy, bioinformatics, sociology, and bibliometrics. In this paper, we propose a density-based clustering model for community detection in complex networks (DCCN). The key idea is to find group centers with a higher density than their neighbors and a relatively large integrated-distance from nodes with higher density. The experimental results indicate that our approach is efficient and effective for community detection of complex networks.
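
    The center-finding rule described here (high local density plus a large distance to any denser node) is the density-peak idea. A minimal sketch on a pairwise-distance matrix; the density cutoff and the product scoring are assumptions, not necessarily the paper's choices.

        import numpy as np

        def density_peak_centers(D, d_c, n_centers=2):
            # rho = number of neighbours within cutoff d_c;
            # delta = distance to the nearest point of higher density.
            # Centers score high on both.
            n = len(D)
            rho = (D < d_c).sum(axis=1) - 1          # exclude self
            delta = np.empty(n)
            for i in range(n):
                higher = np.where(rho > rho[i])[0]
                delta[i] = D[i, higher].min() if len(higher) else D[i].max()
            return np.argsort(rho * delta)[::-1][:n_centers]

        # toy example: two well-separated blobs
        rng = np.random.default_rng(0)
        pts = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
        D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
        print("center indices:", density_peak_centers(D, d_c=0.5))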

  18. Applying complexity theory: A primer for identifying and modeling firm anomalies

    Directory of Open Access Journals (Sweden)

    Arch G. Woodside

    2018-01-01

    This essay elaborates on the usefulness of embracing complexity theory, modeling outcomes rather than directionality, and modeling complex rather than simple outcomes in strategic management. Complexity theory includes the tenet that most antecedent conditions are neither sufficient nor necessary for the occurrence of a specific outcome. Identifying a firm by individual antecedents (i.e., non-innovative versus highly innovative, small versus large size in sales or number of employees, or serving local versus international markets) provides shallow information in modeling specific outcomes (e.g., high sales growth or high profitability)—even if directional analyses (e.g., regression analysis, including structural equation modeling) indicate that the independent (main) effects of the individual antecedents relate to outcomes directionally—because firm (case) anomalies almost always occur to main effects. Examples: a number of highly innovative firms have low sales while others have high sales, and a number of non-innovative firms have low sales while others have high sales. Breaking away from the current dominant logic of directionality testing—null hypothesis significance testing (NHST)—to embrace somewhat precise outcome testing (SPOT) is necessary for extracting highly useful information about the causes of anomalies—associations opposite to expected and “statistically significant” main effects. The study of anomalies extends to identifying the occurrences of four-corner strategy outcomes: firms doing well in favorable circumstances, firms doing badly in favorable circumstances, firms doing well in unfavorable circumstances, and firms doing badly in unfavorable circumstances. Models of four-corner strategy outcomes advance strategic management beyond the current dominant logic of directional modeling of single outcomes.

  19. Chromate adsorption on selected soil minerals: Surface complexation modeling coupled with spectroscopic investigation

    Energy Technology Data Exchange (ETDEWEB)

    Veselská, Veronika, E-mail: veselskav@fzp.czu.cz [Department of Environmental Geosciences, Faculty of Environmental Sciences, Czech University of Life Sciences Prague, Kamýcka 129, CZ-16521, Prague (Czech Republic); Fajgar, Radek [Department of Analytical and Material Chemistry, Institute of Chemical Process Fundamentals of the CAS, v.v.i., Rozvojová 135/1, CZ-16502, Prague (Czech Republic); Číhalová, Sylva [Department of Environmental Geosciences, Faculty of Environmental Sciences, Czech University of Life Sciences Prague, Kamýcka 129, CZ-16521, Prague (Czech Republic); Bolanz, Ralph M. [Institute of Geosciences, Friedrich-Schiller-University Jena, Carl-Zeiss-Promenade 10, DE-07745, Jena (Germany); Göttlicher, Jörg; Steininger, Ralph [ANKA Synchrotron Radiation Facility, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, DE-76344, Eggenstein-Leopoldshafen (Germany); Siddique, Jamal A.; Komárek, Michael [Department of Environmental Geosciences, Faculty of Environmental Sciences, Czech University of Life Sciences Prague, Kamýcka 129, CZ-16521, Prague (Czech Republic)

    2016-11-15

    Highlights: • Study of Cr(VI) adsorption on soil minerals over a large range of conditions. • Combined surface complexation modeling and spectroscopic techniques. • Diffuse-layer and triple-layer models used to obtain fits to experimental data. • Speciation of Cr(VI) and Cr(III) was assessed. - Abstract: This study investigates the mechanisms of Cr(VI) adsorption on natural clay (illite and kaolinite) and synthetic (birnessite and ferrihydrite) minerals, including its speciation changes, and combining quantitative thermodynamically based mechanistic surface complexation models (SCMs) with spectroscopic measurements. Series of adsorption experiments have been performed at different pH values (3–10), ionic strengths (0.001–0.1 M KNO₃), sorbate concentrations (10⁻⁴, 10⁻⁵, and 10⁻⁶ M Cr(VI)), and sorbate/sorbent ratios (50–500). Fourier transform infrared spectroscopy, X-ray photoelectron spectroscopy, and X-ray absorption spectroscopy were used to determine the surface complexes, including surface reactions. Adsorption of Cr(VI) is strongly ionic strength dependent. For ferrihydrite at pH <7, a simple diffuse-layer model provides a reasonable prediction of adsorption. For birnessite, bidentate inner-sphere complexes of chromate and dichromate resulted in a better diffuse-layer model fit. For kaolinite, outer-sphere complexation prevails mainly at lower Cr(VI) loadings. Dissolution of solid phases needs to be considered for better SCM fits. The coupled SCM and spectroscopic approach is thus useful for investigating individual minerals responsible for Cr(VI) retention in soils, and improving the handling and remediation processes.

  20. arXiv Spin models in complex magnetic fields: a hard sign problem

    CERN Document Server

    de Forcrand, Philippe

    2018-01-01

    Coupling spin models to complex external fields can give rise to interesting phenomena like zeroes of the partition function (Lee-Yang zeroes, edge singularities) or oscillating propagators. Unfortunately, it usually also leads to a severe sign problem that can be overcome only in special cases; if the partition function has zeroes, the sign problem is even representation-independent at these points. In this study, we couple the N-state Potts model in different ways to a complex external magnetic field and discuss the above-mentioned phenomena and their relations based on analytic calculations (1D) and results obtained using a modified cluster algorithm (general D) that in many cases either cures or at least drastically reduces the sign problem induced by the complex external field.
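
    In 1D the partition function with a complex field can be evaluated exactly with a transfer matrix, which makes the Lee-Yang structure easy to see. A sketch with an assumed coupling convention (field attached to state 0, with β absorbed into the constants); it is an illustration of the 1D analytics, not the paper's code.

        import numpy as np

        def potts_Z(N=3, L=16, beta_J=0.5, h=0.3 + 0.4j):
            # 1D N-state Potts chain, periodic boundary conditions:
            # -beta*H = beta_J * sum delta(s_i, s_{i+1}) + h * sum delta(s_i, 0),
            # with complex field h. Z = Tr(T^L) for the transfer matrix T.
            T = np.empty((N, N), dtype=complex)
            for a in range(N):
                for b in range(N):
                    T[a, b] = np.exp(beta_J * (a == b)
                                     + 0.5 * h * ((a == 0) + (b == 0)))
            return np.trace(np.linalg.matrix_power(T, L))

        # scanning Im(h) shows |Z| dipping toward zero near partition-function
        # zeroes, exactly where the sign problem is representation-independent
        for him in np.linspace(0.0, 2.0, 5):
            print(f"Im(h)={him:.2f}  |Z|={abs(potts_Z(h=0.2 + 1j * him)):.3e}")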

  1. Complex Constructivism: A Theoretical Model of Complexity and Cognition

    Science.gov (United States)

    Doolittle, Peter E.

    2014-01-01

    Education has long been driven by its metaphors for teaching and learning. These metaphors have influenced both educational research and educational practice. Complexity and constructivism are two theories that provide functional and robust metaphors. Complexity provides a metaphor for the structure of myriad phenomena, while constructivism…

  2. FACET: A simulation software framework for modeling complex societal processes and interactions

    Energy Technology Data Exchange (ETDEWEB)

    Christiansen, J. H.

    2000-06-02

    FACET, the Framework for Addressing Cooperative Extended Transactions, was developed at Argonne National Laboratory to address the need for a simulation software architecture in the style of an agent-based approach, but with sufficient robustness, expressiveness, and flexibility to be able to deal with the levels of complexity seen in real-world social situations. FACET is an object-oriented software framework for building models of complex, cooperative behaviors of agents. It can be used to implement simulation models of societal processes such as the complex interplay of participating individuals and organizations engaged in multiple concurrent transactions in pursuit of their various goals. These transactions can be patterned on, for example, clinical guidelines and procedures, business practices, government and corporate policies, etc. FACET can also address other complex behaviors such as biological life cycles or manufacturing processes. To date, for example, FACET has been applied to such areas as land management, health care delivery, avian social behavior, and interactions between natural and social processes in ancient Mesopotamia.

  3. Suppressor screen and phenotype analyses revealed an emerging role of the Monofunctional peroxisomal enoyl-CoA hydratase 2 in compensated cell enlargement

    Directory of Open Access Journals (Sweden)

    Mana eKatano

    2016-02-01

    Efficient use of seed nutrient reserves is crucial for germination and establishment of plant seedlings. Mobilizing seed oil reserves in Arabidopsis involves β-oxidation, the glyoxylate cycle, and gluconeogenesis, which provide essential energy and the carbon skeletons needed to sustain seedling growth until photoautotrophy is acquired. We demonstrated that H+-PPase activity is required for gluconeogenesis. Lack of H+-PPase in fugu5 mutants increases cytosolic pyrophosphate (PPi) levels, which partially reduces de novo sucrose synthesis and inhibits cell division. In contrast, post-mitotic cell expansion in cotyledons was unusually enhanced, a phenotype called compensation. Therefore, it appears that PPi inhibits several cellular functions, including cell cycling, to trigger compensated cell enlargement (CCE). Here, we mutagenized fugu5-1 seeds with 12C6+ heavy-ion irradiation and screened for mutations that restrain CCE to gain insight into the genetic pathway(s) involved in CCE. We isolated A#3-1, in which cell size was severely reduced, but cell number remained similar to that of the original fugu5-1. Moreover, cell number decreased in the A#3-1 single mutant (A#3-1sm), similar to that of fugu5-1, but cell size was almost equal to that of the wild type. Surprisingly, the A#3-1 mutation did not affect CCE in other compensation-exhibiting mutant backgrounds, such as an3-4 and fugu2-1/fas1-6. Subsequent map-based cloning combined with genome sequencing and HRM curve analysis identified enoyl-CoA hydratase 2 (ECH2) as the causal gene of A#3-1. The above phenotypes were consistently observed in the ech2-1 allele, and supplying sucrose restored the morphological and cellular phenotypes in fugu5-1, ech2-1, A#3-1sm, fugu5-1 ech2-1 and A#3-1;fugu5-1. Taken together, these results suggest that defects in either H+-PPase or ECH2 compromise cell proliferation due to defects in mobilizing stored lipids. In contrast, ECH2 alone likely promotes CCE during the post-mitotic cell

  4. Development of structural model of adaptive training complex in ergatic systems for professional use

    Science.gov (United States)

    Obukhov, A. D.; Dedov, D. L.; Arkhipov, A. E.

    2018-03-01

    The article considers the structural model of the adaptive training complex (ATC), which reflects the interrelations between the hardware, software and mathematical models of the ATC and describes the processes in this subject area. A description of the main components of the software and hardware complex, their interaction and their functioning within the common system is given. The article also gives a brief description of the mathematical models of personnel activity, the technical system and the influences, whose interactions formalize the regularities of ATC functioning. Studying the main objects of training complexes and the connections between them will make practical implementation of ATC in ergatic systems for professional use possible.

  5. Towards a Unified Theory of Health-Disease: I. Health as a complex model-object

    Directory of Open Access Journals (Sweden)

    Naomar Almeida-Filho

    2013-06-01

    Theory building is one of the most crucial challenges faced by basic, clinical and population research, which form the scientific foundations of health practices in contemporary societies. The objective of this study is to propose a Unified Theory of Health-Disease as a conceptual tool for modeling health-disease-care in the light of complexity approaches. With this aim, the epistemological basis of theoretical work in the health field and concepts of complexity theory as they relate to health problems are discussed. Secondly, the concepts of model-object, multi-planes of occurrence, modes of health and the disease-illness-sickness complex are introduced and integrated into a unified theoretical framework. Finally, in the light of recent epistemological developments, the concept of Health-Disease-Care Integrals is updated as a complex reference object fit for modeling health-related processes and phenomena.

  6. Modelling and simulating in-stent restenosis with complex automata

    NARCIS (Netherlands)

    Hoekstra, A.G.; Lawford, P.; Hose, R.

    2010-01-01

    In-stent restenosis, the maladaptive response of a blood vessel to injury caused by the deployment of a stent, is a multiscale system involving a large number of biological and physical processes. We describe a Complex Automata Model for in-stent restenosis, coupling bulk flow, drug diffusion, and

  7. Surface complexation modeling of the effects of phosphate on uranium(VI) adsorption

    Energy Technology Data Exchange (ETDEWEB)

    Romero-Gonzalez, M.R.; Cheng, T.; Barnett, M.O. [Auburn Univ., AL (United States). Dept. of Civil Engeneering; Roden, E.E. [Wisconsin Univ., Madison, WI (United States). Dept. of Geology and Geophysics

    2007-07-01

    Previously published data for the adsorption of U(VI) and/or phosphate onto amorphous Fe(III) oxides (hydrous ferric oxide, HFO) and crystalline Fe(III) oxides (goethite) were examined. These data were then used to test the ability of a commonly-used surface complexation model (SCM) to describe the adsorption of U(VI) and phosphate onto pure amorphous and crystalline Fe(III) oxides and synthetic goethite-coated sand, a surrogate for a natural Fe(III)-coated material, using the component additivity (CA) approach. Our modeling results show that this model was able to describe U(VI) adsorption onto both amorphous and crystalline Fe(III) oxides and also goethite-coated sand quite well in the absence of phosphate. However, because phosphate adsorption exhibits a stronger dependence on Fe(III) oxide type than U(VI) adsorption, we could not use this model to consistently describe phosphate adsorption onto both amorphous and crystalline Fe(III) oxides and goethite-coated sand. However, the effects of phosphate on U(VI) adsorption could be incorporated into the model to describe U(VI) adsorption to both amorphous and crystalline Fe(III) oxides and goethite-coated sand, at least for an initial approximation. These results illustrate both the potential and limitations of using surface complexation models developed from pure systems to describe metal/radionuclide adsorption under more complex conditions. (orig.)

  8. Surface complexation modeling of the effects of phosphate on uranium(VI) adsorption

    International Nuclear Information System (INIS)

    Romero-Gonzalez, M.R.; Cheng, T.; Barnett, M.O.; Roden, E.E.

    2007-01-01

    Previously published data for the adsorption of U(VI) and/or phosphate onto amorphous Fe(III) oxides (hydrous ferric oxide, HFO) and crystalline Fe(III) oxides (goethite) were examined. These data were then used to test the ability of a commonly-used surface complexation model (SCM) to describe the adsorption of U(VI) and phosphate onto pure amorphous and crystalline Fe(III) oxides and synthetic goethite-coated sand, a surrogate for a natural Fe(III)-coated material, using the component additivity (CA) approach. Our modeling results show that this model was able to describe U(VI) adsorption onto both amorphous and crystalline Fe(III) oxides and also goethite-coated sand quite well in the absence of phosphate. However, because phosphate adsorption exhibits a stronger dependence on Fe(III) oxide type than U(VI) adsorption, we could not use this model to consistently describe phosphate adsorption onto both amorphous and crystalline Fe(III) oxides and goethite-coated sand. However, the effects of phosphate on U(VI) adsorption could be incorporated into the model to describe U(VI) adsorption to both amorphous and crystalline Fe(III) oxides and goethite-coated sand, at least for an initial approximation. These results illustrate both the potential and limitations of using surface complexation models developed from pure systems to describe metal/radionuclide adsorption under more complex conditions. (orig.)

  9. COMPLEX OF NUMERICAL MODELS FOR COMPUTATION OF AIR ION CONCENTRATION IN PREMISES

    Directory of Open Access Journals (Sweden)

    M. M. Biliaiev

    2016-04-01

    Purpose. The article addresses the creation of a complex of numerical models for calculating ion concentration fields in premises of various purposes and in work areas. The developed complex should take into account the main physical factors influencing the formation of the ion concentration field: the aerodynamics of air jets in the room, the presence of furniture and equipment, the placement of ventilation holes, the ventilation mode, the location of ionization sources, the transfer of ions under the effect of the electric field, and other factors determining the intensity and shape of the ion concentration field. In addition, the complex of numerical models has to support express calculation of the ion concentration in the premises, allowing quick sorting of possible variants and enabling an «enlarged» evaluation of the air ion concentration in the premises. Methodology. A complex of numerical models to calculate the air ion regime in premises was developed. The CFD numerical model is based on the equations of aerodynamics, electrostatics and mass transfer, and takes into account the effect of air flows caused by the ventilation operation, diffusion, electric field effects, as well as the interaction of ions of different polarities with each other and with dust particles. The proposed balance model for computation of the air ion regime indoors allows operative calculation of the ion concentration field considering pulsed operation of the ionizer. Findings. Calculated data are obtained, on the basis of which one can estimate the ion concentration anywhere in premises with artificial air ionization. An example of calculating the negative ion concentration on the basis of the CFD numerical model in premises undergoing reengineering transformations is given. On the basis of the developed balance model the air ion concentration in the room volume was calculated. Originality. Results of the air ion regime computation in premise, which
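
    A volume-averaged balance model of the kind described can be sketched as a single ODE with a pulsed source. The loss terms (ion-ion recombination, attachment to dust, air exchange) are standard for air-ion balance equations, but all coefficients and the pulse schedule below are order-of-magnitude assumptions, not the paper's values.

        import numpy as np

        # dn/dt = q(t) - alpha*n^2 - beta*n*Na - lam*n
        alpha = 1.6e-6       # ion-ion recombination coefficient, cm^3/s
        beta = 1.0e-6        # ion-aerosol attachment coefficient, cm^3/s
        Na = 1.0e4           # aerosol particles per cm^3
        lam = 3.0 / 3600.0   # air exchange rate, 1/s (3 air changes per hour)

        dt, T = 0.1, 1800.0
        t = np.arange(0.0, T, dt)
        q = np.where((t % 600.0) < 120.0, 1.0e4, 0.0)  # ionizer on 2 min per 10 min

        n = np.zeros_like(t)
        for i in range(1, t.size):
            dn = q[i-1] - alpha * n[i-1]**2 - beta * n[i-1] * Na - lam * n[i-1]
            n[i] = max(n[i-1] + dt * dn, 0.0)

        print(f"peak: {n.max():.0f} ions/cm^3, end of cycle: {n[-1]:.0f} ions/cm^3")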

  10. Interacting price model and fluctuation behavior analysis from Lempel–Ziv complexity and multi-scale weighted-permutation entropy

    Energy Technology Data Exchange (ETDEWEB)

    Li, Rui, E-mail: lirui1401@bjtu.edu.cn; Wang, Jun

    2016-01-08

    A financial price model is developed based on the voter interacting system in this work. The Lempel–Ziv complexity is introduced to analyze the complex behaviors of the stock market. Some stock market stylized facts, including fat tails, absence of autocorrelation and volatility clustering, are first investigated for the proposed price model. Then the complexity of the fluctuation behaviors of the real stock markets and the proposed price model is explored by Lempel–Ziv complexity (LZC) analysis and multi-scale weighted-permutation entropy (MWPE) analysis. A series of LZC analyses of the returns and the absolute returns of daily closing prices and moving average prices are performed. Moreover, the complexity of the returns, the absolute returns and their corresponding intrinsic mode functions (IMFs) derived from the empirical mode decomposition (EMD) is also investigated with MWPE. The numerical empirical study shows similar statistical and complex behaviors between the proposed price model and the real stock markets, which demonstrates that the proposed model is feasible to some extent. - Highlights: • A financial price dynamical model is developed based on the voter interacting system. • Lempel–Ziv complexity is applied for the first time to investigate stock market dynamics. • MWPE is employed to explore the complex fluctuation behaviors of the stock market. • Empirical results show the feasibility of the proposed financial model.
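
    Lempel–Ziv complexity of a return series is usually computed on a coarse-grained symbol sequence. A sketch using the classic Kaspar–Schuster phrase counting; binarizing returns at the median is an illustrative choice, not necessarily the paper's.

        import numpy as np

        def lz76(s):
            # Number of distinct phrases in the Lempel-Ziv 1976 parsing
            # of a symbol sequence (Kaspar-Schuster counting).
            n, i, k, l = len(s), 0, 1, 1
            c, k_max = 1, 1
            while True:
                if s[i + k - 1] == s[l + k - 1]:
                    k += 1
                    if l + k > n:
                        c += 1
                        break
                else:
                    k_max = max(k, k_max)
                    i += 1
                    if i == l:
                        c += 1
                        l += k_max
                        if l + 1 > n:
                            break
                        i, k, k_max = 0, 1, 1
                    else:
                        k = 1
            return c

        rng = np.random.default_rng(1)
        returns = rng.standard_normal(2000)              # stand-in return series
        symbols = (returns > np.median(returns)).astype(int).tolist()
        c = lz76(symbols)
        print("LZ76 phrase count:", c)
        print("normalized complexity:", c * np.log2(len(symbols)) / len(symbols))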

  11. Interacting price model and fluctuation behavior analysis from Lempel–Ziv complexity and multi-scale weighted-permutation entropy

    International Nuclear Information System (INIS)

    Li, Rui; Wang, Jun

    2016-01-01

    A financial price model is developed based on the voter interacting system in this work. The Lempel–Ziv complexity is introduced to analyze the complex behaviors of the stock market. Some stock market stylized facts, including fat tails, absence of autocorrelation and volatility clustering, are first investigated for the proposed price model. Then the complexity of the fluctuation behaviors of the real stock markets and the proposed price model is explored by Lempel–Ziv complexity (LZC) analysis and multi-scale weighted-permutation entropy (MWPE) analysis. A series of LZC analyses of the returns and the absolute returns of daily closing prices and moving average prices are performed. Moreover, the complexity of the returns, the absolute returns and their corresponding intrinsic mode functions (IMFs) derived from the empirical mode decomposition (EMD) is also investigated with MWPE. The numerical empirical study shows similar statistical and complex behaviors between the proposed price model and the real stock markets, which demonstrates that the proposed model is feasible to some extent. - Highlights: • A financial price dynamical model is developed based on the voter interacting system. • Lempel–Ziv complexity is applied for the first time to investigate stock market dynamics. • MWPE is employed to explore the complex fluctuation behaviors of the stock market. • Empirical results show the feasibility of the proposed financial model.

  12. A computational approach to modeling cellular-scale blood flow in complex geometry

    Science.gov (United States)

    Balogh, Peter; Bagchi, Prosenjit

    2017-04-01

    We present a computational methodology for modeling cellular-scale blood flow in arbitrary and highly complex geometry. Our approach is based on immersed-boundary methods, which allow modeling flows in arbitrary geometry while resolving the large deformation and dynamics of every blood cell with high fidelity. The present methodology seamlessly integrates different modeling components dealing with stationary rigid boundaries of complex shape, moving rigid bodies, and highly deformable interfaces governed by nonlinear elasticity. Thus it enables us to simulate 'whole' blood suspensions flowing through physiologically realistic microvascular networks that are characterized by multiple bifurcating and merging vessels, as well as geometrically complex lab-on-chip devices. The focus of the present work is on the development of a versatile numerical technique that is able to consider deformable cells and rigid bodies flowing in three-dimensional arbitrarily complex geometries over a diverse range of scenarios. After describing the methodology, a series of validation studies are presented against analytical theory, experimental data, and previous numerical results. Then, the capability of the methodology is demonstrated by simulating flows of deformable blood cells and heterogeneous cell suspensions in both physiologically realistic microvascular networks and geometrically intricate microfluidic devices. It is shown that the methodology can predict several complex microhemodynamic phenomena observed in vascular networks and microfluidic devices. The present methodology is robust and versatile, and has the potential to scale up to very large microvascular networks at organ levels.

  13. A mouse model of mitochondrial complex III dysfunction induced by myxothiazol

    Energy Technology Data Exchange (ETDEWEB)

    Davoudi, Mina [Pediatrics, Department of Clinical Sciences, Lund, Lund University, Lund 22185 (Sweden); Kallijärvi, Jukka; Marjavaara, Sanna [Folkhälsan Research Center, Biomedicum Helsinki, University of Helsinki, 00014 (Finland); Kotarsky, Heike; Hansson, Eva [Pediatrics, Department of Clinical Sciences, Lund, Lund University, Lund 22185 (Sweden); Levéen, Per [Pediatrics, Department of Clinical Sciences, Lund, Lund University, Lund 22185 (Sweden); Folkhälsan Research Center, Biomedicum Helsinki, University of Helsinki, 00014 (Finland); Fellman, Vineta, E-mail: Vineta.Fellman@med.lu.se [Pediatrics, Department of Clinical Sciences, Lund, Lund University, Lund 22185 (Sweden); Folkhälsan Research Center, Biomedicum Helsinki, University of Helsinki, 00014 (Finland); Children’s Hospital, Helsinki University Hospital, University of Helsinki, Helsinki 00029 (Finland)

    2014-04-18

    Highlights: • Reversible chemical inhibition of complex III in wild type mouse. • Myxothiazol causes decreased complex III activity in mouse liver. • The model is useful for therapeutic trials to improve mitochondrial function. - Abstract: Myxothiazol is a respiratory chain complex III (CIII) inhibitor that binds to the ubiquinol oxidation site Qo of CIII. It blocks electron transfer from ubiquinol to cytochrome b and thus inhibits CIII activity. It has been utilized as a tool in studies of respiratory chain function in in vitro and cell culture models. We developed a mouse model of biochemically induced and reversible CIII inhibition using myxothiazol. We administered myxothiazol intraperitoneally at a dose of 0.56 mg/kg to C57Bl/J6 mice every 24 h and assessed CIII activity, histology, lipid content, supercomplex formation, and gene expression in the livers of the mice. A reversible CIII activity decrease to 50% of control value occurred at 2 h post-injection. At 74 h only minor histological changes in the liver were found, supercomplex formation was preserved and no significant changes in the expression of genes indicating hepatotoxicity or inflammation were found. Thus, myxothiazol-induced CIII inhibition can be induced in mice for four days in a row without overt hepatotoxicity or lethality. This model could be utilized in further studies of respiratory chain function and pharmacological approaches to mitochondrial hepatopathies.

  14. Using multi-criteria analysis of simulation models to understand complex biological systems

    Science.gov (United States)

    Maureen C. Kennedy; E. David. Ford

    2011-01-01

    Scientists frequently use computer-simulation models to help solve complex biological problems. Typically, such models are highly integrated, they produce multiple outputs, and standard methods of model analysis are ill-suited for evaluating them. We show how multi-criteria optimization with Pareto optimality allows for model outputs to be compared to multiple system...
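
    Multi-criteria assessment with Pareto optimality keeps every model run that no other run beats on all criteria at once. A minimal non-dominated filter; the runs and criteria are invented for illustration.

        import numpy as np

        def pareto_front(errors):
            # Indices of non-dominated rows; each row holds one model run's
            # errors on several assessment criteria (lower is better).
            E = np.asarray(errors, dtype=float)
            keep = []
            for i, e in enumerate(E):
                dominated = np.any(np.all(E <= e, axis=1) & np.any(E < e, axis=1))
                if not dominated:
                    keep.append(i)
            return keep

        # hypothetical: 5 parameterizations scored on 3 output criteria
        runs = [[0.9, 0.2, 0.5],
                [0.4, 0.6, 0.3],
                [0.5, 0.7, 0.4],   # dominated by the second run
                [0.3, 0.9, 0.6],
                [0.8, 0.3, 0.7]]
        print("Pareto-optimal runs:", pareto_front(runs))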

  15. Experimental determination and modeling of arsenic complexation with humic and fulvic acids.

    Science.gov (United States)

    Fakour, Hoda; Lin, Tsair-Fuh

    2014-08-30

    The complexation of humic acid (HA) and fulvic acid (FA) with arsenic (As) in water was studied. Experimental results indicate that arsenic may form complexes with HA and FA, with a higher affinity for arsenate than for arsenite. In the presence of iron oxide-based adsorbents, binding of arsenic to HA/FA in water was significantly suppressed, probably due to adsorption of As and HA/FA. A two-site ligand binding model, considering only strong and weak site types of binding affinity, was successfully developed to describe the complexation of arsenic by the two natural organic fractions. The model showed that the number of weak sites was more than 10 times that of strong sites on both HA and FA for both arsenic species studied. The numbers of both types of binding sites were found to be proportional to the HA concentrations, while the apparent stability constants, defined to describe the binding affinity between arsenic and the sites, are independent of the HA concentrations. To the best of our knowledge, this is the first study to characterize the impact of HA concentrations on the applicability of the ligand binding model, and to extrapolate the model to FA. The obtained results may give insights into the complexation of arsenic in HA/FA-laden groundwater and into the selection of more effective adsorption-based treatment methods for natural waters.
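
    The two-site model has a simple closed form: total bound arsenic is the sum of two saturable terms, one for strong and one for weak sites. A sketch that fits this form to synthetic data; all numbers are invented, and scipy's curve_fit stands in for whatever fitting procedure the authors used.

        import numpy as np
        from scipy.optimize import curve_fit

        def two_site(C, Bs, Ks, Bw, Kw):
            # Strong sites (capacity Bs, constant Ks) plus weak sites (Bw, Kw);
            # C = free As concentration, returns bound As.
            return Bs * C / (Ks + C) + Bw * C / (Kw + C)

        # synthetic 'measurements' for illustration only
        C = np.logspace(-3, 1, 15)
        rng = np.random.default_rng(2)
        obs = two_site(C, 0.05, 0.01, 0.8, 2.0) \
              * (1 + 0.05 * rng.standard_normal(C.size))

        popt, _ = curve_fit(two_site, C, obs, p0=[0.1, 0.01, 1.0, 1.0],
                            bounds=(0, np.inf))
        print("Bs, Ks, Bw, Kw =", np.round(popt, 3))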

  16. Beam model for seismic analysis of complex shear wall structure based on the strain energy equivalence

    International Nuclear Information System (INIS)

    Reddy, G.R.; Mahajan, S.C.; Suzuki, Kohei

    1997-01-01

    A nuclear reactor building structure consists of shear walls with complex geometry, beams and columns. The complexity of the structure is explained in the section Introduction. Seismic analysis of the complex reactor building structure using the continuum mechanics approach may produce good results, but this method is very difficult to apply. Hence, the finite element approach is found to be a useful technique for solving the dynamic equations of the reactor building structure. In this approach, the model which uses finite elements such as brick, plate and shell elements may produce accurate results. However, this model also poses some difficulties, which are explained in the section Modeling Techniques. Therefore, seismic analysis of complex structures is generally carried out using a lumped mass beam model. This model is preferred because of its simplicity and economy. Nevertheless, mathematical modeling of a shear wall structure as a beam requires specialized skill and a thorough understanding of the structure. For accurate seismic analysis, it is necessary to model more realistically the stiffness, mass and damping. In linear seismic analysis, modeling of the mass and damping may pose few problems compared to modeling the stiffness. When used to represent a complex structure, the stiffness of the beam is directly related to the shear wall section properties such as area, shear area and moment of inertia. Various beam models which are classified based on the method of stiffness evaluation are also explained under the section Modeling Techniques. In the section Case Studies the accuracy and simplicity of the beam models are explained. Among various beam models, the one which evaluates the stiffness using strain energy equivalence proves to be the simplest and most accurate method for modeling the complex shear wall structure. (author)
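
    The strain-energy equivalence can be stated compactly: apply the same load case to the detailed shear-wall model and to the candidate beam, then choose the beam's section properties so that the stored energies match. A sketch with hypothetical numbers; the energy values would normally come from a detailed finite element run.

        # Equivalent beam properties from strain-energy matching (hypothetical data).
        F = 1.0e6        # applied shear load, N
        L = 4.0          # story height, m
        G = 1.1e10       # shear modulus, Pa
        E = 2.7e10       # Young's modulus, Pa

        U_shear = 35.0   # strain energy of the detailed model, shear mode (J)
        U_bend = 12.0    # strain energy of the detailed model, bending mode (J)

        # Beam formulas: U_shear = F^2 L / (2 G A_s); cantilever tip load
        # bending energy U_bend = F^2 L^3 / (6 E I). Invert for A_s and I.
        A_s = F**2 * L / (2 * G * U_shear)
        I_eq = F**2 * L**3 / (6 * E * U_bend)
        print(f"equivalent shear area A_s = {A_s:.3f} m^2, inertia I = {I_eq:.3f} m^4")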

  17. The relation between geometry, hydrology and stability of complex hillslopes examined using low-dimensional hydrological models

    NARCIS (Netherlands)

    Talebi, A.

    2008-01-01

    Key words: Hillslope geometry, Hillslope hydrology, Hillslope stability, Complex hillslopes, Modeling shallow landslides, HSB model, HSB-SM model.

    The hydrologic response of a hillslope to rainfall involves a complex, transient saturated-unsaturated interaction that usually leads to a

  18. The Complexity of Developmental Predictions from Dual Process Models

    Science.gov (United States)

    Stanovich, Keith E.; West, Richard F.; Toplak, Maggie E.

    2011-01-01

    Drawing developmental predictions from dual-process theories is more complex than is commonly realized. Overly simplified predictions drawn from such models may lead to premature rejection of the dual process approach as one of many tools for understanding cognitive development. Misleading predictions can be avoided by paying attention to several…

  19. Development and evaluation of a musculoskeletal model of the elbow joint complex

    Science.gov (United States)

    Gonzalez, Roger V.; Hutchins, E. L.; Barr, Ronald E.; Abraham, Lawrence D.

    1993-01-01

    This paper describes the development and evaluation of a musculoskeletal model that represents human elbow flexion-extension and forearm pronation-supination. The length, velocity, and moment arm for each of the eight musculotendon actuators were based on skeletal anatomy and position. Musculotendon parameters were determined for each actuator and verified by comparing analytical torque-angle curves with experimental joint torque data. The parameters and skeletal geometry were also utilized in the musculoskeletal model for the analysis of ballistic elbow joint complex movements. The key objective was to develop a computational model, guided by parameterized optimal control, to investigate the relationship among patterns of muscle excitation, individual muscle forces, and movement kinematics. The model was verified using experimental kinematic, torque, and electromyographic data from volunteer subjects performing ballistic elbow joint complex movements.

  20. Simulating Complex, Cold-region Process Interactions Using a Multi-scale, Variable-complexity Hydrological Model

    Science.gov (United States)

    Marsh, C.; Pomeroy, J. W.; Wheater, H. S.

    2017-12-01

    Accurate management of water resources is necessary for social, economic, and environmental sustainability worldwide. In locations with seasonal snowcovers, the accurate prediction of these water resources is further complicated due to frozen soils, solid-phase precipitation, blowing snow transport, and snowcover-vegetation-atmosphere interactions. Complex process interactions and feedbacks are a key feature of hydrological systems and may result in emergent phenomena, i.e., the arising of novel and unexpected properties within a complex system. One example is the feedback associated with blowing snow redistribution, which can lead to drifts that cause locally increased soil moisture, thus increasing plant growth, which in turn impacts snow redistribution, creating larger drifts. Attempting to simulate these emergent behaviours is a significant challenge, however, and there is concern that process conceptualizations within current models are too incomplete to represent the needed interactions. An improved understanding of the role of emergence in hydrological systems often requires high-resolution distributed numerical hydrological models that incorporate the relevant process dynamics. The Canadian Hydrological Model (CHM) provides a novel tool for examining cold region hydrological systems. Key features include efficient terrain representation, allowing simulations at various spatial scales, reduced computational overhead, and a modular process representation allowing for an alternative-hypothesis framework. Using both physics-based and conceptual process representations sourced from long-term process studies and the current cold regions literature allows for comparison of process representations and, importantly, their ability to produce emergent behaviours. Examining the system in a holistic, process-based manner can hopefully derive important insights and aid in development of improved process representations.

  1. Logic-based hierarchies for modeling behavior of complex dynamic systems with applications

    International Nuclear Information System (INIS)

    Hu, Y.S.; Modarres, M.

    2000-01-01

    Most complex systems are best represented in the form of a hierarchy. The Goal Tree Success Tree and Master Logic Diagram (GTST-MLD) are proven, powerful hierarchic methods for representing complex snapshots of plant knowledge. To represent the dynamic behaviors of complex systems, fuzzy logic is applied in place of binary logic to extend the power of GTST-MLD. Such a fuzzy-logic-based hierarchy is called a Dynamic Master Logic Diagram (DMLD). This chapter discusses the use of GTST-DMLD as a modeling tool for systems whose relationships are modeled by physical, binary-logical or fuzzy-logical relationships. This is shown by applying GTST-DMLD to the Direct Containment Heating (DCH) phenomenon at pressurized water reactors, which is an important safety issue being addressed by the nuclear industry. (orig.)
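
    The step from binary logic to fuzzy logic amounts to replacing AND/OR gates with min/max operators so that partial degrees of success propagate up the hierarchy. A toy sketch; the tree structure and membership values are hypothetical, not taken from the DCH application.

        # Fuzzy gates for a goal-tree hierarchy: degrees in [0, 1] instead of {0, 1}.
        def AND(*degrees):   # fuzzy AND: the weakest supporting function dominates
            return min(degrees)

        def OR(*degrees):    # fuzzy OR: the strongest redundant path dominates
            return max(degrees)

        pump_a, pump_b = 0.9, 0.4     # degree to which each pump performs its function
        power, control = 0.95, 0.8

        cooling = OR(pump_a, pump_b)             # redundant pumps
        top_goal = AND(cooling, power, control)  # all support functions required
        print("degree of top-goal success:", top_goal)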

  2. Comparison of new generation low-complexity flood inundation mapping tools with a hydrodynamic model

    Science.gov (United States)

    Afshari, Shahab; Tavakoly, Ahmad A.; Rajib, Mohammad Adnan; Zheng, Xing; Follum, Michael L.; Omranian, Ehsan; Fekete, Balázs M.

    2018-01-01

    The objective of this study is to compare two new generation low-complexity tools, AutoRoute and Height Above the Nearest Drainage (HAND), with a two-dimensional hydrodynamic model (Hydrologic Engineering Center-River Analysis System, HEC-RAS 2D). The assessment was conducted on two hydrologically different and geographically distant test cases in the United States, including the 16,900 km2 Cedar River (CR) watershed in Iowa and a 62 km2 domain along the Black Warrior River (BWR) in Alabama. For BWR, twelve different configurations were set up for each of the models, including four different terrain setups (e.g. with and without channel bathymetry and a levee), and three flooding conditions representing moderate to extreme hazards at 10-, 100-, and 500-year return periods. For the CR watershed, models were compared with a simplistic terrain setup (without bathymetry and any form of hydraulic controls) and one flooding condition (100-year return period). Input streamflow forcing data representing these hypothetical events were constructed by applying a new fusion approach on National Water Model outputs. Simulated inundation extent and depth from AutoRoute, HAND, and HEC-RAS 2D were compared with one another and with the corresponding FEMA reference estimates. Irrespective of the configurations, the low-complexity models were able to produce inundation extents similar to HEC-RAS 2D, with AutoRoute showing slightly higher accuracy than the HAND model. Among the four terrain setups, the one including both levee and channel bathymetry showed the lowest fitness score on the spatial agreement of inundation extent, due to the weak physical representation of low-complexity models compared to a hydrodynamic model. For inundation depth, the low-complexity models showed an overestimating tendency, especially in the deeper segments of the channel. Based on such reasonably good prediction skills, low-complexity flood models can be considered as a suitable alternative for fast
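
    Spatial agreement of inundation extents is commonly scored with an intersection-over-union fitness statistic. The sketch below assumes that form, which may differ in detail from the measure used in the paper.

        import numpy as np

        def fitness(model, reference):
            # Agreement of two boolean inundation rasters:
            # F = (cells flooded in both) / (cells flooded in either).
            A, B = np.asarray(model, bool), np.asarray(reference, bool)
            return (A & B).sum() / (A | B).sum()

        # toy 1D 'rasters' of flooded (1) / dry (0) cells
        low_complexity = np.array([0, 1, 1, 1, 0, 0, 1, 0])
        hydrodynamic = np.array([0, 1, 1, 1, 1, 0, 0, 0])
        print("fitness =", round(fitness(low_complexity, hydrodynamic), 3))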

  3. Chaos from simple models to complex systems

    CERN Document Server

    Cencini, Massimo; Vulpiani, Angelo

    2010-01-01

    Chaos: from simple models to complex systems aims to guide science and engineering students through chaos and nonlinear dynamics from classical examples to the most recent fields of research. The first part, intended for undergraduate and graduate students, is a gentle and self-contained introduction to the concepts and main tools for the characterization of deterministic chaotic systems, with emphasis on statistical approaches. The second part can be used as a reference by researchers as it focuses on more advanced topics including the characterization of chaos with tools of information theory

  4. COMPLEX SIMULATION MODEL OF TRAIN BREAKING-UP PROCESS AT THE HUMPS

    Directory of Open Access Journals (Sweden)

    E. B. Demchenko

    2015-11-01

    Purpose. One of the priorities in improving the functioning of a station sorting complex is reducing the energy consumed by the breaking-up process, namely the fuel consumed for train pushing and the electric energy consumed for cut braking. In this regard, an effective solution of the problem of reducing energy consumption in the breaking-up subsystem requires comprehensive handling of the train pushing and cut rolling-down processes. At the same time, the analysis showed that the tasks of improving the pushing process and increasing the effectiveness of cut rolling-down are currently solved separately. To solve this problem it is necessary to develop a complex simulation model of the train breaking-up process at humps. Methodology. The pushing process was simulated based on traction calculations adapted to shunting conditions. In addition, the features of shunting locomotive operation at the humps were taken into account. In order to realize the current pushing mode, a special algorithm for hump locomotive control was applied which, along with the requirements of safe shunting operation, takes into account behavioral factors associated with the engineer's control actions. This algorithm provides smooth train acceleration and further movement at a speed close to the set speed. Hump locomotive fuel consumption was determined from the amount of mechanical work performed by locomotive traction. Findings. A simulation model of the train pushing process was developed and combined with the existing cut rolling-down model. The cut initial velocity is determined during the simulation and is used for further modeling of the cut rolling process. In addition, the modeling resulted in sufficiently accurate determination of the fuel consumed for train breaking-up. Originality. The simulation model of the train breaking-up process at the humps, in contrast to the existing models, allows reproducing all the elements of this process in detail

  5. Modelling small-angle scattering data from complex protein-lipid systems

    DEFF Research Database (Denmark)

    Kynde, Søren Andreas Røssell

    This thesis consists of two parts. The first part is divided into five chapters. Chapter 1 gives a general introduction to the bio-molecular systems that have been studied. These are membrane proteins and their lipid environments in the form of phospholipid nanodiscs. Membrane proteins...... the techniques very well suited for the study of the nanodisc system. Chapter 3 explains two different modelling approaches that can be used in the analysis of small-angle scattering data from lipid-protein complexes. These are the continuous approach where the system of interest is modelled as a few regular...... combine the benefits of each of the methods and give unique structural information about relevant bio-molecular complexes in solution. Chapter 4 describes the work behind a proposal of a small-angle neutron scattering instrument for the European Spallation Source under construction in Lund. The instrument...

  6. Multiagent model and mean field theory of complex auction dynamics

    Science.gov (United States)

    Chen, Qinghua; Huang, Zi-Gang; Wang, Yougui; Lai, Ying-Cheng

    2015-09-01

    Recent years have witnessed a growing interest in analyzing a variety of socio-economic phenomena using methods from statistical and nonlinear physics. We study a class of complex systems arising from economics, the lowest unique bid auction (LUBA) systems, which is a recently emerged class of online auction game systems. Through analyzing large, empirical data sets of LUBA, we identify a general feature of the bid price distribution: an inverted J-shaped function with exponential decay in the large bid price region. To account for the distribution, we propose a multi-agent model in which each agent bids stochastically in the field of winner’s attractiveness, and develop a theoretical framework to obtain analytic solutions of the model based on mean field analysis. The theory produces bid-price distributions that are in excellent agreement with those from the real data. Our model and theory capture the essential features of human behaviors in the competitive environment as exemplified by LUBA, and may provide significant quantitative insights into complex socio-economic phenomena.
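
    As a complement to the mean-field treatment described above, the LUBA mechanics are easy to reproduce by direct simulation. The sketch below is a schematic Monte Carlo version, not the authors' model: the geometric low-price bidding preference (the decay parameter) is an assumption standing in for the winner's-attractiveness field.

        import random
        from collections import Counter

        def luba_round(n_agents=200, max_bid=100, decay=0.03, rng=random):
            """One lowest-unique-bid auction round: agents bid stochastically
            (here with a geometric preference for low prices) and the winner
            is the lowest bid submitted exactly once."""
            weights = [pow(1.0 - decay, b) for b in range(1, max_bid + 1)]
            bids = rng.choices(range(1, max_bid + 1), weights=weights, k=n_agents)
            counts = Counter(bids)
            unique = sorted(b for b, c in counts.items() if c == 1)
            return bids, (unique[0] if unique else None)

        freq = Counter()
        for _ in range(2000):
            bids, winner = luba_round()
            freq.update(bids)
        # empirical bid-price distribution, to compare with the inverted J shape
        for price in (1, 10, 50, 100):
            print(price, freq[price])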

  7. Multiagent model and mean field theory of complex auction dynamics

    International Nuclear Information System (INIS)

    Chen, Qinghua; Wang, Yougui; Huang, Zi-Gang; Lai, Ying-Cheng

    2015-01-01

    Recent years have witnessed a growing interest in analyzing a variety of socio-economic phenomena using methods from statistical and nonlinear physics. We study a class of complex systems arising from economics, the lowest unique bid auction (LUBA) systems, which is a recently emerged class of online auction game systems. Through analyzing large, empirical data sets of LUBA, we identify a general feature of the bid price distribution: an inverted J-shaped function with exponential decay in the large bid price region. To account for the distribution, we propose a multi-agent model in which each agent bids stochastically in the field of winner’s attractiveness, and develop a theoretical framework to obtain analytic solutions of the model based on mean field analysis. The theory produces bid-price distributions that are in excellent agreement with those from the real data. Our model and theory capture the essential features of human behaviors in the competitive environment as exemplified by LUBA, and may provide significant quantitative insights into complex socio-economic phenomena. (paper)

  8. Comparisons of complex network based models and real train flow model to analyze Chinese railway vulnerability

    International Nuclear Information System (INIS)

    Ouyang, Min; Zhao, Lijing; Hong, Liu; Pan, Zhezhe

    2014-01-01

    Recently numerous studies have applied complex network based models to study the performance and vulnerability of infrastructure systems under various types of attacks and hazards. But how effectively these models capture real performance response is still a question worthy of research. Taking the Chinese railway system as an example, this paper selects three typical complex network based models, including the purely topological model (PTM), the purely shortest path model (PSPM), and the weight (link length) based shortest path model (WBSPM), to analyze railway accessibility and flow-based vulnerability and compare their results with those from the real train flow model (RTFM). The results show that the WBSPM can produce train routes with 83% of stations and 77% of railway links identical to the real routes and approaches the RTFM best for railway vulnerability under both single and multiple component failures. The correlation coefficient for accessibility vulnerability from WBSPM and RTFM under single station failures is 0.96, while it is 0.92 for flow-based vulnerability; under multiple station failures, where each station has the same failure probability fp, the WBSPM can produce almost identical vulnerability results to those from the RTFM under almost all failure scenarios when fp is larger than 0.62 for accessibility vulnerability and 0.86 for flow-based vulnerability
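
    The accessibility measure behind a weight-based shortest path model can be sketched in a few lines. The snippet below is illustrative only: the three-station toy graph, the choice of network efficiency (mean inverse shortest-path length) as the accessibility index, and the "length" attribute are assumptions, not details from the paper.

        import networkx as nx

        def accessibility(g):
            """Network efficiency: mean of 1/d(u, v) over all node pairs,
            using link length as the shortest-path weight (WBSPM-style)."""
            n = g.number_of_nodes()
            total = 0.0
            for u, lengths in nx.all_pairs_dijkstra_path_length(g, weight="length"):
                total += sum(1.0 / d for v, d in lengths.items() if v != u)
            return total / (n * (n - 1))

        g = nx.Graph()
        g.add_edge("A", "B", length=120)
        g.add_edge("B", "C", length=80)
        g.add_edge("A", "C", length=300)

        base = accessibility(g)
        damaged = g.copy()
        damaged.remove_node("B")  # single-station failure
        print("vulnerability:", 1 - accessibility(damaged) / base)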

  9. THE MODEL OF LIFELONG EDUCATION IN A TECHNICAL UNIVERSITY AS A MULTILEVEL EDUCATIONAL COMPLEX

    Directory of Open Access Journals (Sweden)

    Svetlana V. Sergeyeva

    2016-06-01

    Full Text Available Introduction: the current leading trend of educational development is characterised by its continuity. Institutions of higher education, as multi-level educational complexes, nurture favourable conditions for realisation of the strategy of lifelong education. Today a technical university offering training of future engineers faces the topical issue of creating a multilevel educational complex. Materials and Methods: this paper is put together on the basis of modern Russian and foreign scientific literature about lifelong education. The authors used theoretical methods of scientific research: system-structural analysis, synthesis, modelling, and analysis and generalisation of concepts. Results: the paper presents a model of lifelong education developed by the authors for a technical university as a multilevel educational complex. It is realised through a set of principles: multi-level organisation and continuity, integration, conformity and quality, mobility, anticipation, openness, social partnership and feedback. In accordance with the purpose, objectives and principles, the content part of the model is formed. The syllabi following the described model are run in accordance with the training levels undertaken by a technical university as a multilevel educational complex. All syllabi are based on the gradual nature of their implementation. In this regard, the authors highlight three phases: diagnostic; constructive and transformative; and assessing. Discussion and Conclusions: the expected result of the created model of lifelong education development in a technical university as a multilevel educational complex is a graduate trained for effective professional activity, competitive, prepared for and sought-after on the regional labour market.

  10. [Transformation of 2- and 4-cyanopyridines by free and immobilized cells of nitrile-hydrolyzing bacteria].

    Science.gov (United States)

    Maksimova, Iu G; Vasil'ev, D M; Ovechkina, G V; Maksimov, A Iu; Demakov, V A

    2013-01-01

    The transformation dynamics of 2- and 4-cyanopyridines by cells suspended and adsorbed on inorganic carriers has been studied in the Rhodococcus ruber gt 1 strain, possessing nitrile hydratase activity, and the Pseudomonas fluorescens C2 strain, containing nitrilase. It was shown that both the nitrile hydratase and nitrilase activities of immobilized cells against 2-cyanopyridine were 1.5-4 times lower compared to 4-cyanopyridine and 1.6-2 times lower than the activities of free cells against 2-cyanopyridine. The possibility of obtaining isonicotinic acid during the combined conversion of 4-cyanopyridine by a mixed suspension of R. ruber gt 1 cells, with a high level of nitrile hydratase activity, and R. erythropolis 11-2 cells, with a pronounced amidase activity, has been shown. Immobilization of Rhodococcus cells on raw coal and Pseudomonas cells on china clay was shown to yield a heterogeneous biocatalyst for the efficient transformation of cyanopyridines into the respective amides and carboxylic acids.

  11. Decomposition studies of group 6 hexacarbonyl complexes. Pt. 2. Modelling of the decomposition process

    Energy Technology Data Exchange (ETDEWEB)

    Usoltsev, Ilya; Eichler, Robert; Tuerler, Andreas [Paul Scherrer Institut (PSI), Villigen (Switzerland); Bern Univ. (Switzerland)

    2016-11-01

    The decomposition behavior of group 6 metal hexacarbonyl complexes (M(CO)6) in a tubular flow reactor is simulated. A microscopic Monte-Carlo based model is presented for assessing the first bond dissociation enthalpy of M(CO)6 complexes. The suggested approach superimposes a microscopic model of gas adsorption chromatography with a first-order heterogeneous decomposition model. The experimental data on the decomposition of Mo(CO)6 and W(CO)6 are successfully simulated by introducing available thermodynamic data. Thermodynamic data predicted by relativistic density functional theory is used in our model to deduce the most probable experimental behavior of the corresponding Sg carbonyl complex. Thus, the design of a chemical experiment with Sg(CO)6 is suggested, which is sensitive to benchmark our theoretical understanding of the bond stability in carbonyl compounds of the heaviest elements.
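
    The superposition of adsorption chromatography and first-order decomposition lends itself to a compact Monte Carlo sketch. The version below is only schematic: the site count, pre-exponential factors, adsorption enthalpy and activation energy are illustrative placeholders, not the fitted Mo(CO)6/W(CO)6 values.

        import math, random

        R = 8.314  # J/(mol K)

        def survives(temp_k, n_sites=1000, dhads=-50e3, ea=80e3, nu=1e13,
                     tau0=1e-13, rng=random):
            """Single-molecule Monte Carlo: at each wall encounter the complex
            sits for a mean adsorption time tau = tau0*exp(-dHads/RT) and may
            decompose there with first-order rate k = nu*exp(-Ea/RT)."""
            tau = tau0 * math.exp(-dhads / (R * temp_k))
            k = nu * math.exp(-ea / (R * temp_k))
            for _ in range(n_sites):
                t_ads = rng.expovariate(1.0 / tau)
                if rng.random() < 1.0 - math.exp(-k * t_ads):
                    return False  # decomposed inside the column
            return True

        for t in (350, 400, 450):
            frac = sum(survives(t) for _ in range(500)) / 500
            print(t, "K -> surviving fraction", frac)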

  12. The development and application of composite complexity models and a relative complexity metric in a software maintenance environment

    Science.gov (United States)

    Hops, J. M.; Sherif, J. S.

    1994-01-01

    A great deal of effort is now being devoted to the study, analysis, prediction, and minimization of expected software maintenance cost, long before software is delivered to users or customers. It has been estimated that, on average, the effort spent on software maintenance is as costly as the effort spent on all other software activities. Software design methods should be the starting point for alleviating the problems of software maintenance complexity and high costs. Two aspects of maintenance deserve attention: (1) protocols for locating and rectifying defects, and for ensuring that no new defects are introduced in the development phase of the software process; and (2) protocols for modification, enhancement, and upgrading. This article focuses primarily on the second aspect, the development of protocols to help increase the quality and reduce the costs associated with modifications, enhancements, and upgrades of existing software. This study developed parsimonious models and a relative complexity metric for complexity measurement of software, which were used to rank the modules in the system relative to one another. Some success was achieved in using the models and the relative metric to identify maintenance-prone modules.
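
    A relative complexity metric of this kind can be illustrated by standardising several raw metrics and summing them per module. The sketch below is a generic reconstruction under that assumption; the module names, the chosen metrics and the equal z-score weighting are hypothetical, not the paper's calibrated model.

        import statistics

        modules = {  # hypothetical measurements per module
            "io.c":    {"loc": 820, "cyclomatic": 44, "fanout": 12},
            "parse.c": {"loc": 410, "cyclomatic": 61, "fanout": 5},
            "util.c":  {"loc": 150, "cyclomatic": 9,  "fanout": 2},
        }

        def relative_complexity(mods):
            """Sum of z-scores across metrics: a unitless score that ranks
            modules against one another, as in composite complexity models."""
            names = list(next(iter(mods.values())))
            stats = {m: (statistics.mean(v[m] for v in mods.values()),
                         statistics.stdev(v[m] for v in mods.values()))
                     for m in names}
            return {mod: sum((vals[m] - stats[m][0]) / stats[m][1] for m in names)
                    for mod, vals in mods.items()}

        for mod, score in sorted(relative_complexity(modules).items(),
                                 key=lambda kv: -kv[1]):
            print(f"{mod:8s} {score:+.2f}")  # maintenance-prone modules first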

  13. A binary logistic regression model with complex sampling design of ...

    African Journals Online (AJOL)

    2017-09-03

    Bi-variable and multi-variable binary logistic regression models with a complex sampling design were fitted. Data were entered into STATA-12 and analyzed using SPSS-21.

  14. Extension of association models to complex chemicals

    DEFF Research Database (Denmark)

    Avlund, Ane Søgaard

    Summary of “Extension of association models to complex chemicals”. Ph.D. thesis by Ane Søgaard Avlund The subject of this thesis is application of SAFT type equations of state (EoS). Accurate and predictive thermodynamic models are important in many industries including the petroleum industry......; CPA and sPC-SAFT. Phase equilibrium and monomer fraction calculations with sPC-SAFT for methanol are used in the thesis to illustrate the importance of parameter estimation when using SAFT. Different parameter sets give similar pure component vapor pressure and liquid density results, whereas very...... association is presented in the thesis, and compared to the corresponding lattice theory. The theory for intramolecular association is then applied in connection with sPC-SAFT for mixtures containing glycol ethers. Calculations with sPC-SAFT (without intramolecular association) are presented for comparison...

  15. A dynamic simulation model of the Savannah River Site high level waste complex

    International Nuclear Information System (INIS)

    Gregory, M.V.; Aull, J.E.; Dimenna, R.A.

    1994-01-01

    A detailed, dynamic simulation of the entire high level radioactive waste complex at the Savannah River Site has been developed using SPEEDUP(tm) software. The model represents mass transfer, evaporation, precipitation, sludge washing, effluent treatment, and vitrification unit operation processes through the solution of 7800 coupled differential and algebraic equations. Twenty-seven discrete chemical constituents are tracked through the unit operations. The simultaneous simulation of concurrent batch and continuous processes is achieved by several novel, customized SPEEDUP(tm) algorithms. Due to the model's computational burden, a high-end workstation is required: simulation of a year's operation of the complex requires approximately three CPU hours on an IBM RS/6000 Model 590 processor. The model will be used to develop optimal high level waste (HLW) processing strategies over a thirty-year time horizon. It will be employed to better understand the dynamic inter-relationships between different HLW unit operations, and to suggest strategies that maximize available working tank space during the early years of operation and minimize overall waste processing cost over the long-term history of the complex. Model validation runs are currently underway, with comparisons against actual plant operating data providing an excellent match

  16. Rethinking the Psychogenic Model of Complex Regional Pain Syndrome: Somatoform Disorders and Complex Regional Pain Syndrome

    Science.gov (United States)

    Hill, Renee J.; Chopra, Pradeep; Richardi, Toni

    2012-01-01

    Explaining the etiology of Complex Regional Pain Syndrome (CRPS) from the psychogenic model is exceedingly unsophisticated, because neurocognitive deficits, neuroanatomical abnormalities, and distortions in cognitive mapping are features of CRPS pathology. More importantly, many people who have developed CRPS have no history of mental illness. The psychogenic model offers comfort to physicians and mental health practitioners (MHPs) who have difficulty understanding pain maintained by newly uncovered neuroinflammatory processes. With increased education about CRPS through a biopsychosocial perspective, both physicians and MHPs can better diagnose, treat, and manage CRPS symptomatology. PMID:24223338

  17. The evolution model of Uppsala in light of the complex adaptive systems approach

    Directory of Open Access Journals (Sweden)

    Rennaly Alves da Silva

    2013-11-01

    Full Text Available The behavioral approach to the internationalization of companies explains that movements toward external markets occur in accordance with an increasing commitment of resources to mitigate the effects of uncertainty and reduce the perception of risk. Evidence indicates that theories and practices developed in the domestic market may not be able to explain the reality of companies that operate in international markets. Thus, the Paradigm of Complexity presents itself as a comprehensive alternative for understanding the relationships within organizations and markets. Accordingly, the aim of this theoretical paper is to analyze the evolution of the Uppsala Model between 1975 and 2010, understanding companies in the process of internationalization as Complex Adaptive Systems, in accordance with the model of Kelly and Allison (1998). Four propositions are presented that show the links between the approaches. The most striking is the perception that the conceptual evolution of the Uppsala Model seems to accompany the evolution of the complexity levels presented in the Kelly and Allison model.

  18. Computational model of dose response for low-LET-induced complex chromosomal aberrations

    International Nuclear Information System (INIS)

    Eidelman, Y.A.; Andreev, S.G.

    2015-01-01

    Experiments with full-colour mFISH chromosome painting have revealed a high yield of radiation-induced complex chromosomal aberrations (CAs). The ratio of complex to simple aberrations depends on cell type and linear energy transfer. Theoretical analysis has demonstrated that the mechanism of CA formation as a result of interaction between lesions at the surface of chromosome territories does not explain the high complexes-to-simples ratio in human lymphocytes. The possible origin of high yields of γ-induced complex CAs was investigated in the present work by computer simulation. CAs were studied on the basis of chromosome structure and dynamics modelling and the hypothesis of CA formation at nuclear centres. The spatial organisation of all chromosomes in a human interphase nucleus was predicted by simulation of the mitosis-to-interphase chromosome structure transition. Two scenarios of CA formation were analysed: 'static' centres (existing in a nucleus prior to irradiation) and 'dynamic' centres (formed in response to irradiation). The modelling results reveal that under certain conditions both scenarios explain quantitatively the dose-response relationships for both simple and complex γ-induced inter-chromosomal exchanges observed by mFISH chromosome painting in the first post-irradiation mitosis in human lymphocytes. (authors)

  19. The solution of a chiral random matrix model with complex eigenvalues

    International Nuclear Information System (INIS)

    Akemann, G

    2003-01-01

    We describe in detail the solution of the extension of the chiral Gaussian unitary ensemble (chGUE) into the complex plane. The correlation functions of the model are first calculated for a finite number of N complex eigenvalues, where we exploit the existence of orthogonal Laguerre polynomials in the complex plane. When taking the large-N limit we derive new correlation functions in the case of weak and strong non-Hermiticity, thus describing the transition from the chGUE to a generalized Ginibre ensemble. We briefly discuss applications to the Dirac operator eigenvalue spectrum in quantum chromodynamics with non-vanishing chemical potential. This is an extended version of hep-th/0204068

  20. A study of the logical model of capital market complexity theories

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    This paper analyzes the shortcomings of the classic capital market theories based on the efficient market hypothesis (EMH) and discloses the essentially complex nature of the capital market. Considering the capital market a complicated, interactive and adaptive dynamic system, and taking complexity science as the method for researching its laws of operation, the paper constructs a nonlinear logical model to analyze the applied realm, focal points and interrelationships of such theories as dissipative structure theory, chaos theory, fractal theory, synergetics theory, catastrophe theory and scale theory, and summarizes and discusses the achievements and problems of each theory. Based on this research, the paper anticipates the future direction of complexity science in the capital market.

  1. A Simple Model for Complex Fabrication of MEMS based Pressure Sensor: A Challenging Approach

    Directory of Open Access Journals (Sweden)

    Himani SHARMA

    2010-08-01

    Full Text Available In this paper we present a simple model for the complex fabrication of a MEMS-based absolute micro pressure sensor. This kind of modeling is extremely useful for determining the complexity of the fabrication steps and provides complete information about the process sequence to be followed during manufacturing. Therefore, the need for test iterations decreases, and cost and time can be reduced significantly. Using the DevEdit tool (part of the SILVACO tool suite), a behavioral model of the pressure sensor has been presented and implemented.

  2. Predicting the future completing models of observed complex systems

    CERN Document Server

    Abarbanel, Henry

    2013-01-01

    Predicting the Future: Completing Models of Observed Complex Systems provides a general framework for the discussion of model building and validation across a broad spectrum of disciplines. This is accomplished through the development of an exact path integral for use in transferring information from observations to a model of the observed system. Through many illustrative examples drawn from models in neuroscience, fluid dynamics, geosciences, and nonlinear electrical circuits, the concepts are exemplified in detail. Practical numerical methods for approximate evaluations of the path integral are explored, and their use in designing experiments and determining a model's consistency with observations is investigated. Using highly instructive examples, the problems of data assimilation and the means to treat them are clearly illustrated. This book will be useful for students and practitioners of physics, neuroscience, regulatory networks, meteorology and climate science, network dynamics, fluid dynamics, and o...

  3. Model complexity and choice of model approaches for practical simulations of CO2 injection, migration, leakage and long-term fate

    Energy Technology Data Exchange (ETDEWEB)

    Celia, Michael A. [Princeton Univ., NJ (United States)

    2016-12-30

    This report documents the accomplishments achieved during the project titled “Model complexity and choice of model approaches for practical simulations of CO2 injection, migration, leakage and long-term fate” funded by the US Department of Energy, Office of Fossil Energy. The objective of the project was to investigate modeling approaches of various levels of complexity relevant to geologic carbon storage (GCS) modeling, with the goal of establishing guidelines on the choice of modeling approach.

  4. Framework for Modelling Multiple Input Complex Aggregations for Interactive Installations

    DEFF Research Database (Denmark)

    Padfield, Nicolas; Andreasen, Troels

    2012-01-01

    on fuzzy logic and provides a method for variably balancing interaction and user input with the intention of the artist or director. An experimental design is presented, demonstrating an intuitive interface for parametric modelling of a complex aggregation function. The aggregation function unifies...

  5. Business model innovation in electricity supply markets: The role of complex value in the United Kingdom

    International Nuclear Information System (INIS)

    Hall, Stephen; Roelich, Katy

    2016-01-01

    This research investigates the new opportunities that business model innovations are creating in electricity supply markets at the sub-national scale. These local supply business models can offer significant benefits to the electricity system, but also generate economic, social, and environmental values that are not well accounted for in current policy or regulation. This paper uses the UK electricity supply market to investigate new business models which rely on more complex value propositions than the incumbent utility model. Nine archetypal local supply business models are identified and their value propositions, value capture methods, and barriers to market entry are analysed. This analysis defines 'complex value' as a key concept in understanding business model innovation in the energy sector. The process of complex value identification poses a challenge to energy researchers, commercial firms and policymakers in liberalised markets: to investigate the opportunities for system efficiency and diverse outcomes that new supplier business models can offer to the electricity system. - Highlights: •Business models of energy supply markets shape energy transitions. •The British system misses four opportunities of local electricity supply. •Nine new business model archetypes of local supply are analysed. •New electricity business models have complex value propositions. •A process for policy response to business model innovation is presented.

  6. Historical and idealized climate model experiments: an intercomparison of Earth system models of intermediate complexity

    DEFF Research Database (Denmark)

    Eby, M.; Weaver, A. J.; Alexander, K.

    2013-01-01

    Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE...... and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20...

  7. History matching of a complex epidemiological model of human immunodeficiency virus transmission by using variance emulation.

    Science.gov (United States)

    Andrianakis, I; Vernon, I; McCreesh, N; McKinley, T J; Oakley, J E; Nsubuga, R N; Goldstein, M; White, R G

    2017-08-01

    Complex stochastic models are commonplace in epidemiology, but their utility depends on their calibration to empirical data. History matching is a (pre)calibration method that has been applied successfully to complex deterministic models. In this work, we adapt history matching to stochastic models, by emulating the variance in the model outputs, and therefore accounting for its dependence on the model's input values. The method proposed is applied to a real complex epidemiological model of human immunodeficiency virus in Uganda with 22 inputs and 18 outputs, and is found to increase the efficiency of history matching, requiring 70% of the time and 43% fewer simulator evaluations compared with a previous variant of the method. The insight gained into the structure of the human immunodeficiency virus model, and the constraints placed on it, are then discussed.
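
    The core quantity in history matching is an implausibility measure that discards input regions whose emulated output cannot plausibly match the data. The sketch below illustrates that idea with a toy one-input emulator; the mean and variance surfaces, the observation, and the 3-sigma cutoff are placeholder assumptions, not the HIV model's 22-input setup.

        import numpy as np

        def implausibility(z, mu, var_em, var_obs, var_model):
            """I(x) = |z - E[f(x)]| / sqrt(total variance), evaluated over
            an array of candidate input points x; emulating var_em as a
            function of x is the 'variance emulation' step."""
            return np.abs(z - mu) / np.sqrt(var_em + var_obs + var_model)

        x = np.linspace(0, 1, 11)
        mu = 3.0 * x                 # emulated mean of the stochastic output
        var_em = 0.05 + 0.1 * x      # emulated, input-dependent variance
        z, var_obs, var_model = 1.5, 0.04, 0.02

        I = implausibility(z, mu, var_em, var_obs, var_model)
        print("non-implausible inputs:", x[I < 3.0])  # 3-sigma cutoff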

  8. Semiotic aspects of control and modeling relations in complex systems

    Energy Technology Data Exchange (ETDEWEB)

    Joslyn, C.

    1996-08-01

    A conceptual analysis of the semiotic nature of control is provided with the goal of elucidating its nature in complex systems. Control is identified as a canonical form of semiotic relation of a system to its environment. As a form of constraint between a system and its environment, its necessary and sufficient conditions are established, and the stabilities resulting from control are distinguished from other forms of stability. These result from the presence of semantic coding relations, and thus the class of control systems is hypothesized to be equivalent to that of semiotic systems. Control systems are contrasted with models, which, while they have the same measurement functions as control systems, do not necessarily require semantic relations because of the lack of the requirement of an interpreter. A hybrid construction of models in control systems is detailed. Toward the goal of characterizing the nature of control in complex systems, the possible relations among collections of control systems are considered. Powers' arguments on conflict among control systems and the possible nature of control in social systems are reviewed, and reconsidered in light of our observations about hierarchical control. Finally, we discuss the necessary semantic functions which must be present in complex systems for control in this sense to be present at all.

  9. Variable speed limit strategies analysis with mesoscopic traffic flow model based on complex networks

    Science.gov (United States)

    Li, Shu-Bin; Cao, Dan-Ni; Dang, Wen-Xiu; Zhang, Lin

    As a new cross-discipline, complexity science has penetrated every field of economy and society, and with the arrival of big data its research has reached a new peak. In recent years it has offered a new perspective on traffic control through complex networks theory. The interaction of various kinds of information in a traffic system forms a huge complex system. A new mesoscopic traffic flow model incorporating variable speed limits (VSL) is presented, and a simulation process based on complex networks theory combined with the proposed model is designed. This paper studies the effect of VSL on dynamic traffic flow and then analyzes the optimal VSL control strategy in different network topologies. The conclusions of this research are useful for putting forward reasonable transportation plans and developing effective traffic management and control measures.

  10. Virtual enterprise model for the electronic components business in the Nuclear Weapons Complex

    Energy Technology Data Exchange (ETDEWEB)

    Ferguson, T.J.; Long, K.S.; Sayre, J.A. [Sandia National Labs., Albuquerque, NM (United States); Hull, A.L. [Sandia National Labs., Livermore, CA (United States); Carey, D.A.; Sim, J.R.; Smith, M.G. [Allied-Signal Aerospace Co., Kansas City, MO (United States). Kansas City Div.

    1994-08-01

    The electronic components business within the Nuclear Weapons Complex spans organizational and Department of Energy contractor boundaries. An assessment of the current processes indicates a need for fundamentally changing the way electronic components are developed, procured, and manufactured. A model is provided based on a virtual enterprise that recognizes distinctive competencies within the Nuclear Weapons Complex and at the vendors. The model incorporates changes that reduce component delivery cycle time and improve cost effectiveness while delivering components of the appropriate quality.

  11. Complex Behavior in an Integrate-and-Fire Neuron Model Based on Small World Networks

    International Nuclear Information System (INIS)

    Lin Min; Chen Tianlun

    2005-01-01

    Based on our previously proposed pulse-coupled integrate-and-fire neuron model on small-world networks, we investigate the complex behavior of the electroencephalographic (EEG)-like activities produced by such a model. We find that the EEG-like activities have obvious chaotic characteristics. We also analyze the complex behavior of these EEG-like signals by means of spectral analysis, phase-space reconstruction, the correlation dimension, and related measures.
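
    A minimal version of such a pulse-coupled integrate-and-fire network on a small-world graph can be written down directly. The sketch below is schematic: the Watts-Strogatz parameters, external drive, coupling strength and threshold are illustrative choices, and the summed firing count stands in for the EEG-like signal.

        import numpy as np
        import networkx as nx

        n, drive, eps, theta = 200, 0.005, 0.08, 1.0
        g = nx.watts_strogatz_graph(n, k=4, p=0.1, seed=0)  # small-world wiring
        rng = np.random.default_rng(0)
        v = rng.random(n)            # membrane potentials
        activity = []

        for t in range(2000):
            v += drive               # uniform external drive each step
            to_fire = [i for i in range(n) if v[i] >= theta]
            fired = set()
            while to_fire:           # avalanche of pulse-coupled firings
                i = to_fire.pop()
                if i in fired:
                    continue
                fired.add(i)
                v[i] = 0.0           # reset after the spike
                for j in g.neighbors(i):
                    if j not in fired:
                        v[j] += eps
                        if v[j] >= theta:
                            to_fire.append(j)
            activity.append(len(fired))  # EEG-like global activity signal

        print("mean activity:", np.mean(activity), "max avalanche:", max(activity))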

  12. Collaborative Management of Complex Major Construction Projects: AnyLogic-Based Simulation Modelling

    Directory of Open Access Journals (Sweden)

    Na Zhao

    2016-01-01

    Full Text Available Collaborative management of the complex supply chain system of major construction projects effectively integrates the different project participants. This paper establishes an AnyLogic-based simulation model to reveal the collaborative elements in the complex supply chain management system, their modes of action, and the transmission problems of intent information, thus promoting the participants to develop into an organism with coordinated development and co-evolution. This study can help improve the efficiency and management of the complex systems of major construction projects.

  13. The dynamic complexity of a three species food chain model

    International Nuclear Information System (INIS)

    Lv Songjuan; Zhao Min

    2008-01-01

    In this paper, a three-species food chain model is investigated analytically, drawing on ecological theory, and through numerical simulation. Bifurcation diagrams are obtained for biologically feasible parameters. The results show that the system exhibits rich complexity features such as stable, periodic and chaotic dynamics
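
    Dynamics of this kind are conventionally explored by integrating a three-species chain with saturating functional responses. The sketch below uses the classic Hastings-Powell form with textbook parameter values as an assumption; the paper's specific model and parameters may differ.

        import numpy as np
        from scipy.integrate import solve_ivp

        def food_chain(t, u, a1=5.0, b1=3.0, a2=0.1, b2=2.0, d1=0.4, d2=0.01):
            """Resource x, consumer y, predator z with Holling type-II
            functional responses (Hastings-Powell form)."""
            x, y, z = u
            f1 = a1 * x / (1 + b1 * x)
            f2 = a2 * y / (1 + b2 * y)
            return [x * (1 - x) - f1 * y,
                    f1 * y - f2 * z - d1 * y,
                    f2 * z - d2 * z]

        sol = solve_ivp(food_chain, (0, 5000), [0.8, 0.2, 8.0], max_step=1.0)
        x_tail = sol.y[0, sol.t > 4000]   # discard the transient
        print("x range on attractor:", x_tail.min(), x_tail.max())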

  14. Fundamentals of complex networks models, structures and dynamics

    CERN Document Server

    Chen, Guanrong; Li, Xiang

    2014-01-01

    Complex networks such as the Internet, WWW, transportation networks, power grids, biological neural networks, and scientific cooperation networks of all kinds provide challenges for future technological development. In particular, advanced societies have become dependent on large infrastructural networks to an extent beyond our capability to plan (modeling) and to operate (control). The recent spate of collapses in power grids and ongoing virus attacks on the Internet illustrate the need for knowledge about modeling, analysis of behaviors, optimized planning and performance control in such networks.

  15. Deconvolution of Complex 1D NMR Spectra Using Objective Model Selection.

    Directory of Open Access Journals (Sweden)

    Travis S Hughes

    Full Text Available Fluorine (19F) NMR has emerged as a useful tool for characterization of slow dynamics in 19F-labeled proteins. One-dimensional (1D) 19F NMR spectra of proteins can be broad, irregular and complex, due to exchange of probe nuclei between distinct electrostatic environments, and therefore cannot be deconvoluted and analyzed in an objective way using currently available software. We have developed a Python-based deconvolution program, decon1d, which uses Bayesian information criteria (BIC) to objectively determine which model (number of peaks) would most likely produce the experimentally obtained data. The method also allows for fitting of intermediate exchange spectra, which is not supported by current software in the absence of a specific kinetic model. In current methods, determination of the deconvolution model best supported by the data is done manually through comparison of residual error values, which can be time consuming and requires model selection by the user. In contrast, the BIC method used by decon1d provides a quantitative method for model comparison that penalizes for model complexity, helping to prevent over-fitting of the data, and allows identification of the most parsimonious model. The decon1d program is freely available as a downloadable Python script at the project website (https://github.com/hughests/decon1d/).
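
    The BIC-based selection of the peak count can be illustrated generically: fit candidate mixtures with increasing numbers of peaks and keep the model with the lowest BIC. The snippet below is a sketch of that idea, not decon1d itself; the Lorentzian lineshape, synthetic data and initial guesses are assumptions.

        import numpy as np
        from scipy.optimize import curve_fit

        def peaks(x, *p):
            """Sum of Lorentzians; p packs (amplitude, centre, width) triples."""
            y = np.zeros_like(x)
            for a, c, w in zip(p[0::3], p[1::3], p[2::3]):
                y += a * w**2 / ((x - c)**2 + w**2)
            return y

        rng = np.random.default_rng(1)
        x = np.linspace(-10.0, 10.0, 400)
        y = peaks(x, 1.0, -2.0, 1.0, 0.6, 1.5, 0.8) + rng.normal(0, 0.02, x.size)

        best = None
        for n in (1, 2, 3):                       # candidate models
            p0 = []
            for i in range(n):
                p0 += [0.8, -4.0 + 4.0 * i, 1.0]  # rough initial guesses
            try:
                popt, _ = curve_fit(peaks, x, y, p0=p0, maxfev=20000)
            except RuntimeError:
                continue                          # fit failed to converge
            rss = float(np.sum((y - peaks(x, *popt)) ** 2))
            bic = x.size * np.log(rss / x.size) + (len(p0) + 1) * np.log(x.size)
            if best is None or bic < best[0]:
                best = (bic, n)
        print("peak count selected by BIC:", best[1])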

  16. Specific combination of compound heterozygous mutations in 17β-hydroxysteroid dehydrogenase type 4 (HSD17B4) defines a new subtype of D-bifunctional protein deficiency

    Directory of Open Access Journals (Sweden)

    McMillan Hugh J

    2012-11-01

    Full Text Available Background: D-bifunctional protein (DBP) deficiency is typically apparent within the first month of life, with most infants demonstrating hypotonia, psychomotor delay and seizures. Few children survive beyond two years of age. Among patients with prolonged survival, all demonstrate severe gross motor delay, absent language development, and severe hearing and visual impairment. DBP contains three catalytically active domains: an N-terminal dehydrogenase, a central hydratase and a C-terminal sterol carrier protein-2-like domain. Three subtypes of the disease are identified based upon the domain affected: DBP type I results from a combined deficiency of dehydrogenase and hydratase activity; DBP type II from isolated hydratase deficiency; and DBP type III from isolated dehydrogenase deficiency. Here we report two brothers (16½ and 14 years old) with DBP deficiency characterized by normal early childhood followed by sensorineural hearing loss, progressive cerebellar and sensory ataxia and subclinical retinitis pigmentosa. Methods and results: Biochemical analysis revealed normal levels of plasma VLCFA, phytanic acid and pristanic acid, and normal bile acids in urine; based on these results no diagnosis was made. Exome analysis was performed using the Agilent SureSelect 50Mb All Exon Kit and the Illumina HiSeq 2000 next-generation sequencing (NGS) platform. Compound heterozygous mutations were identified by exome sequencing and confirmed by Sanger sequencing within the dehydrogenase domain (c.101C>T; p.Ala34Val) and hydratase domain (c.1547T>C; p.Ile516Thr) of the 17β-hydroxysteroid dehydrogenase type 4 gene (HSD17B4). These mutations have been previously reported in patients with severe forms of DBP deficiency; however, each mutation was reported in combination with another mutation affecting the same domain. Subsequent studies in fibroblasts revealed normal VLCFA levels, normal C26:0 but reduced pristanic acid beta-oxidation activity. Both DBP...

  17. Using model complexes to augment and advance metalloproteinase inhibitor design.

    Science.gov (United States)

    Jacobsen, Faith E; Cohen, Seth M

    2004-05-17

    The tetrahedral zinc complex [(Tp(Ph,Me))ZnOH] (Tp(Ph,Me) = hydrotris(3,5-phenylmethylpyrazolyl)borate) was combined with 2-thenylmercaptan, ethyl 4,4,4-trifluoroacetoacetate, salicylic acid, salicylamide, thiosalicylic acid, thiosalicylamide, methyl salicylate, methyl thiosalicylate, and 2-hydroxyacetophenone to form the corresponding [(Tp(Ph,Me))Zn(ZBG)] complexes (ZBG = zinc-binding group). X-ray crystal structures of these complexes were obtained to determine the mode of binding for each ZBG, several of which had been previously studied with SAR by NMR (structure-activity relationship by nuclear magnetic resonance) as potential ligands for use in matrix metalloproteinase inhibitors. The [(Tp(Ph,Me))Zn(ZBG)] complexes show that hydrogen bonding and donor atom acidity have a pronounced effect on the mode of binding for this series of ligands. The results of these studies give valuable insight into how ligand protonation state and intramolecular hydrogen bonds can influence the coordination mode of metal-binding proteinase inhibitors. The findings here suggest that model-based approaches can be used to augment drug discovery methods applied to metalloproteins and can aid second-generation drug design.

  18. Surface complexation modelling applied to the sorption of nickel on silica

    International Nuclear Information System (INIS)

    Olin, M.

    1995-10-01

    A mechanistic modelling of a sorption experiment is presented in the report. The system chosen for the experiments (nickel + silica) is modelled by using literature values for some parameters, the remainder being fitted to existing experimental results. All calculations are performed with HYDRAQL, a code designed especially for surface complexation modelling. Almost all of the calculations use the Triple-Layer Model (TLM) approach, which appeared to be sufficiently flexible for the silica system. The report includes a short description of mechanistic sorption models, input data, experimental results and modelling results (mostly graphical presentations). (13 refs., 40 figs., 4 tabs.)

  19. Capturing the complex behavior of hydraulic fracture stimulation through multi-physics modeling, field-based constraints, and model reduction

    Science.gov (United States)

    Johnson, S.; Chiaramonte, L.; Cruz, L.; Izadi, G.

    2016-12-01

    Advances in the accuracy and fidelity of numerical methods have significantly improved our understanding of coupled processes in unconventional reservoirs. However, such multi-physics models are typically characterized by many parameters and require exceptional computational resources to evaluate systems of practical importance, making these models difficult to use for field analyses or uncertainty quantification. One approach to remove these limitations is through targeted complexity reduction and field data constrained parameterization. For the latter, a variety of field data streams may be available to engineers and asset teams, including micro-seismicity from proximate sites, well logs, and 3D surveys, which can constrain possible states of the reservoir as well as the distributions of parameters. We describe one such workflow, using the Argos multi-physics code and requisite geomechanical analysis to parameterize the underlying models. We illustrate with a field study involving a constraint analysis of various field data and details of the numerical optimizations and model reduction to demonstrate how complex models can be applied to operation design in hydraulic fracturing operations, including selection of controllable completion and fluid injection design properties. The implication of this work is that numerical methods are mature and computationally tractable enough to enable complex engineering analysis and deterministic field estimates and to advance research into stochastic analyses for uncertainty quantification and value of information applications.

  20. Complex networks generated by the Penna bit-string model: Emergence of small-world and assortative mixing

    Science.gov (United States)

    Li, Chunguang; Maini, Philip K.

    2005-10-01

    The Penna bit-string model successfully encompasses many phenomena of population evolution, including inheritance, mutation, evolution, and aging. If we consider social interactions among individuals in the Penna model, the population will form a complex network. In this paper, we first modify the Verhulst factor to control only the birth rate, and introduce activity-based preferential reproduction of offspring in the Penna model. The social interactions among individuals are generated by both inheritance and activity-based preferential increase. Then we study the properties of the complex network generated by the modified Penna model. We find that the resulting complex network has a small-world effect and the assortative mixing property.
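
    The bit-string mechanics described above (inherited mutation bits, genetic death past a threshold, a Verhulst factor restricted to births) fit in a short simulation. The sketch below is a generic Penna-style implementation under illustrative parameters; it omits the paper's activity-based preferential reproduction and network bookkeeping.

        import numpy as np

        rng = np.random.default_rng(2)
        L, T, MUT, R_AGE, B, NMAX = 32, 3, 1, 8, 1, 3000  # illustrative values
        genomes = np.zeros((500, L), dtype=np.uint8)       # 1-bits = bad mutations
        ages = np.zeros(500, dtype=int)

        for step in range(300):
            ages = ages + 1
            # genetic death: T or more bad mutations expressed up to current age
            expressed = np.array([g[:a].sum() for g, a in zip(genomes, ages)])
            alive = (expressed < T) & (ages < L)
            genomes, ages = genomes[alive], ages[alive]
            # Verhulst factor applied to the birth rate only, as in the record
            parents = genomes[ages >= R_AGE]
            n_born = int(len(parents) * B * max(0.0, 1.0 - len(genomes) / NMAX))
            if n_born:
                babies = parents[rng.integers(0, len(parents), n_born)].copy()
                flips = rng.integers(0, L, (n_born, MUT))  # fresh mutations
                babies[np.arange(n_born)[:, None], flips] = 1
                genomes = np.vstack([genomes, babies])
                ages = np.concatenate([ages, np.zeros(n_born, dtype=int)])

        print("population:", len(genomes), "mean age:", ages.mean())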

  1. Model Complexities of Shallow Networks Representing Highly Varying Functions

    Czech Academy of Sciences Publication Activity Database

    Kůrková, Věra; Sanguineti, M.

    2016-01-01

    Roč. 171, 1 January (2016), s. 598-604 ISSN 0925-2312 R&D Projects: GA MŠk(CZ) LD13002 Grant - others:grant for Visiting Professors(IT) GNAMPA-INdAM Institutional support: RVO:67985807 Keywords : shallow networks * model complexity * highly varying functions * Chernoff bound * perceptrons * Gaussian kernel units Subject RIV: IN - Informatics, Computer Science Impact factor: 3.317, year: 2016

  2. Hierarchical Model for the Similarity Measurement of a Complex Holed-Region Entity Scene

    Directory of Open Access Journals (Sweden)

    Zhanlong Chen

    2017-11-01

    Full Text Available Complex multi-holed-region entity scenes (i.e., sets of random regions with holes) are common in spatial database systems, spatial query languages, and Geographic Information Systems (GIS). A multi-holed-region (a region with an arbitrary number of holes) is an abstraction of the real world that primarily represents geographic objects that have more than one interior boundary, such as areas that contain several lakes or lakes that contain islands. When the similarity of two complex holed-region entity scenes is measured, the number of regions in the scenes and the number of holes in the regions usually differ between the two scenes, which complicates the matching relationships of holed-regions and holes. The aim of this research is to develop several holed-region similarity metrics and propose a hierarchical model to measure comprehensively the similarity between two complex holed-region entity scenes. The procedure first divides a complex entity scene into three layers: a complex scene, a micro-spatial-scene, and a simple entity (hole). The relationships between adjacent layers are considered to be sets of relationships, and each level of similarity measurement is nested with the adjacent one. Next, entity matching is performed from top to bottom, while the similarity results are calculated from local to global. In addition, we utilize position graphs to describe the distribution of the holed-regions and subsequently describe the directions between the holes using a feature matrix. A case study that uses the Great Lakes in North America in 1986 and 2015 as experimental data illustrates the entire similarity measurement process between two complex holed-region entity scenes. The experimental results show that the hierarchical model accounts for the relationships of the different layers in the entire complex holed-region entity scene. The model can effectively calculate the similarity of complex holed-region entity scenes, even if the...

  3. Modeling uranium(VI) adsorption onto montmorillonite under varying carbonate concentrations: A surface complexation model accounting for the spillover effect on surface potential

    Science.gov (United States)

    Tournassat, C.; Tinnacher, R. M.; Grangeon, S.; Davis, J. A.

    2018-01-01

    The prediction of U(VI) adsorption onto montmorillonite clay is confounded by the complexities of: (1) the montmorillonite structure in terms of adsorption sites on basal and edge surfaces, and the complex interactions between the electrical double layers at these surfaces, and (2) U(VI) solution speciation, which can include cationic, anionic and neutral species. Previous U(VI)-montmorillonite adsorption and modeling studies have typically expanded classical surface complexation modeling approaches, initially developed for simple oxides, to include both cation exchange and surface complexation reactions. However, previous models have not taken into account the unique characteristics of electrostatic surface potentials that occur at montmorillonite edge sites, where the electrostatic surface potential of basal plane cation exchange sites influences the surface potential of neighboring edge sites ('spillover' effect). A series of U(VI) - Na-montmorillonite batch adsorption experiments was conducted as a function of pH, with variable U(VI), Ca, and dissolved carbonate concentrations. Based on the experimental data, a new type of surface complexation model (SCM) was developed for montmorillonite, that specifically accounts for the spillover effect using the edge surface speciation model by Tournassat et al. (2016a). The SCM allows for a prediction of U(VI) adsorption under varying chemical conditions with a minimum number of fitting parameters, not only for our own experimental results, but also for a number of published data sets. The model agreed well with many of these datasets without introducing a second site type or including the formation of ternary U(VI)-carbonato surface complexes. The model predictions were greatly impacted by utilizing analytical measurements of dissolved inorganic carbon (DIC) concentrations in individual sample solutions rather than assuming solution equilibration with a specific partial pressure of CO2, even when the gas phase was

  4. Unsymmetrical dizinc complexes as models for the active sites of phosphohydrolases.

    Science.gov (United States)

    Jarenmark, Martin; Csapó, Edit; Singh, Jyoti; Wöckel, Simone; Farkas, Etelka; Meyer, Franc; Haukka, Matti; Nordlander, Ebbe

    2010-09-21

    The unsymmetrical dinucleating ligand 2-(N-isopropyl-N-((2-pyridyl)methyl)aminomethyl)-6-(N-(carboxylmethyl)-N-((2-pyridyl)methyl)aminomethyl)-4-methylphenol (IPCPMP or L) has been synthesized to model the active site environment of dinuclear metallohydrolases. It has been isolated as the hexafluorophosphate salt H(4)IPCPMP(PF(6))(2) x 2 H(2)O (H(4)L), which has been structurally characterized, and has been used to form two different Zn(II) complexes, [{Zn(2)(IPCPMP)(OAc)}(2)][PF(6)](2) (2) and [{Zn(2)(IPCPMP)(Piv)}(2)][PF(6)](2) (3) (OAc = acetate; Piv = pivalate). The crystal structures of and show that they consist of tetranuclear complexes with very similar structures. Infrared spectroscopy and mass spectrometry indicate that the tetranuclear complexes dissociate into dinuclear complexes in solution. Potentiometric studies of the Zn(II):IPCPMP system in aqueous solution reveal that a mononuclear complex is surprisingly stable at low pH, even at a 2:1 Zn(II):L ratio, but a dinuclear complex dominates at high pH and transforms into a dihydroxido complex by a cooperative deprotonation of two, probably terminally coordinated, water molecules. A kinetic investigation indicates that one of these hydroxides is the active nucleophile in the hydrolysis of bis(2,4-dinitrophenyl)phosphate (BDNPP) enhanced by complex 2, and mechanistic proposals are presented for this reaction as well as the previously reported transesterification of 2-hydroxypropyl p-nitrophenyl phosphate (HPNP) promoted by Zn(II) complexes of IPCPMP.

  5. Macroscopic model description of heavy-ion induced complex-fragment emission

    International Nuclear Information System (INIS)

    Carjan, N.

    1988-01-01

    The Yukawa-plus-exponential finite-range model and the standard liquid-drop model are shortly reviewed and compared. The dependence of the barrier heights and of the saddle-point shapes on mass-asymmetry and on angular momentum is studied for the compound nuclei 110 Sn, 149 Tb and 194 Hg. The predicted asymmetric-fission barriers, charge yields and total kinetic energies are compared with experimental data obtained from the deexcitation of compound nuclei by complex-fragment emission

  6. Daily Based Morgan–Morgan–Finney (DMMF) Model: A Spatially Distributed Conceptual Soil Erosion Model to Simulate Complex Soil Surface Configurations

    Directory of Open Access Journals (Sweden)

    Kwanghun Choi

    2017-04-01

    Full Text Available In this paper, we present the Daily based Morgan–Morgan–Finney (DMMF) model. The main processes in this model are based on the Morgan–Morgan–Finney soil erosion model, and it is suitable for estimating surface runoff and sediment redistribution patterns in seasonal climate regions with complex surface configurations. We achieved temporal flexibility by utilizing daily time steps, which is suitable for regions with concentrated seasonal rainfall. We introduce the proportion of impervious surface cover as a parameter to reflect its impacts on soil erosion through blocking water infiltration and protecting the soil from detachment. Also, several equations and sequences of sub-processes are modified from the previous model to better represent physical processes. In a sensitivity analysis using the Sobol' method, the DMMF model shows a rational response to the input parameters, consistent with results from previous versions. To evaluate the model performance, we applied the model to two potato fields in South Korea that had complex surface configurations using plastic covered ridges at various temporal periods during the monsoon season. Our new model shows acceptable performance for runoff and sediment loss estimation (NSE ≥ 0.63, |PBIAS| ≤ 17.00, and RSR ≤ 0.57). Our findings demonstrate that the DMMF model is able to predict the surface runoff and sediment redistribution patterns for cropland with complex surface configurations.
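
    The three goodness-of-fit statistics quoted above (NSE, PBIAS, RSR) are standard and easy to compute. A minimal sketch follows; the toy observed/simulated series are invented for illustration, and the PBIAS sign convention shown (positive = underestimation) is one common choice.

        import numpy as np

        def nse(sim, obs):
            """Nash-Sutcliffe efficiency: 1 is a perfect fit."""
            return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def pbias(sim, obs):
            """Percent bias; PBIAS = 100 * sum(obs - sim) / sum(obs)."""
            return 100.0 * np.sum(obs - sim) / np.sum(obs)

        def rsr(sim, obs):
            """RMSE standardised by the standard deviation of the observations."""
            rmse = np.sqrt(np.mean((sim - obs) ** 2))
            return rmse / obs.std()

        obs = np.array([12.0, 30.0, 55.0, 21.0, 8.0])   # toy observed runoff
        sim = np.array([10.0, 33.0, 50.0, 25.0, 9.0])   # toy simulated runoff
        print(nse(sim, obs), pbias(sim, obs), rsr(sim, obs))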

  7. Turbulence modeling needs of commercial CFD codes: Complex flows in the aerospace and automotive industries

    Science.gov (United States)

    Befrui, Bizhan A.

    1995-01-01

    This viewgraph presentation discusses the following: STAR-CD computational features; STAR-CD turbulence models; common features of industrial complex flows; industry-specific CFD development requirements; applications and experiences of industrial complex flows, including flow in rotating disc cavities, diffusion hole film cooling, internal blade cooling, and external car aerodynamics; and conclusions on turbulence modeling needs.

  8. Multi-level emulation of complex climate model responses to boundary forcing data

    Science.gov (United States)

    Tran, Giang T.; Oliver, Kevin I. C.; Holden, Philip B.; Edwards, Neil R.; Sóbester, András; Challenor, Peter

    2018-04-01

    Climate model components involve both high-dimensional input and output fields. It is desirable to efficiently generate spatio-temporal outputs of these models for applications in integrated assessment modelling or to assess the statistical relationship between such sets of inputs and outputs, for example, in uncertainty analysis. However, the need for efficiency often compromises the fidelity of output through the use of low-complexity models. Here, we develop a method which combines statistical emulation with a dimensionality reduction technique to emulate a wide range of outputs from an atmospheric general circulation model, PLASIM, as functions of the boundary forcing prescribed by the ocean component of a lower-complexity climate model, GENIE-1. Although accurate and detailed spatial information on atmospheric variables such as precipitation and wind speed is well beyond the capability of GENIE-1's energy-moisture balance model of the atmosphere, this study demonstrates that the output of this model is useful in predicting PLASIM's spatio-temporal fields through multi-level emulation. Meaningful information from the fast model, GENIE-1, was extracted by utilising the correlation between variables of the same type in the two models and between variables of different types in PLASIM. We present here the construction and validation of several PLASIM variable emulators and discuss their potential use in developing a hybrid model with statistical components.

  9. Adsorption of uranium(VI) to manganese oxides: X-ray absorption spectroscopy and surface complexation modeling.

    Science.gov (United States)

    Wang, Zimeng; Lee, Sung-Woo; Catalano, Jeffrey G; Lezama-Pacheco, Juan S; Bargar, John R; Tebo, Bradley M; Giammar, Daniel E

    2013-01-15

    The mobility of hexavalent uranium in soil and groundwater is strongly governed by adsorption to mineral surfaces. As strong naturally occurring adsorbents, manganese oxides may significantly influence the fate and transport of uranium. Models for U(VI) adsorption over a broad range of chemical conditions can improve predictive capabilities for uranium transport in the subsurface. This study integrated batch experiments of U(VI) adsorption to synthetic and biogenic MnO(2), surface complexation modeling, ζ-potential analysis, and molecular-scale characterization of adsorbed U(VI) with extended X-ray absorption fine structure (EXAFS) spectroscopy. The surface complexation model included inner-sphere monodentate and bidentate surface complexes and a ternary uranyl-carbonato surface complex, which was consistent with the EXAFS analysis. The model could successfully simulate adsorption results over a broad range of pH and dissolved inorganic carbon concentrations. U(VI) adsorption to synthetic δ-MnO(2) appears to be stronger than to biogenic MnO(2), and the differences in adsorption affinity and capacity are not associated with any substantial difference in U(VI) coordination.

  10. Complex networks-based energy-efficient evolution model for wireless sensor networks

    Energy Technology Data Exchange (ETDEWEB)

    Zhu Hailin [Beijing Key Laboratory of Intelligent Telecommunications Software and Multimedia, Beijing University of Posts and Telecommunications, P.O. Box 106, Beijing 100876 (China)], E-mail: zhuhailin19@gmail.com; Luo Hong [Beijing Key Laboratory of Intelligent Telecommunications Software and Multimedia, Beijing University of Posts and Telecommunications, P.O. Box 106, Beijing 100876 (China); Peng Haipeng; Li Lixiang; Luo Qun [Information Secure Center, State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, P.O. Box 145, Beijing 100876 (China)

    2009-08-30

    Based on complex networks theory, we present two self-organized energy-efficient models for wireless sensor networks in this paper. The first model constructs the wireless sensor networks according to the connectivity and remaining energy of each sensor node, thus it can produce scale-free networks which have a performance of random error tolerance. In the second model, we not only consider the remaining energy, but also introduce the constraint of links to each node. This model can make the energy consumption of the whole network more balanced. Finally, we present the numerical experiments of the two models.
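
    The first of the two models above grows the network by attaching each new sensor preferentially to nodes with high connectivity and remaining energy. The sketch below is a generic reconstruction under that reading; the attachment rule details, the per-relay energy cost and all parameter values are assumptions rather than the paper's specification.

        import random

        def evolve_wsn(n=300, m=2, seed=0):
            """Grow a sensor network: each new node links to m existing nodes
            chosen with probability ~ degree * remaining energy, and every
            chosen relay pays a small energy cost."""
            rng = random.Random(seed)
            degree = {0: 1, 1: 1}
            energy = {0: 1.0, 1: 1.0}
            edges = [(0, 1)]
            for new in range(2, n):
                nodes = list(degree)
                weights = [degree[i] * energy[i] for i in nodes]
                targets = set()
                while len(targets) < m:
                    targets.add(rng.choices(nodes, weights=weights)[0])
                for t in targets:
                    edges.append((new, t))
                    degree[t] += 1
                    energy[t] *= 0.995   # relaying drains the chosen hub
                degree[new] = m
                energy[new] = 1.0
            return degree, edges

        degree, edges = evolve_wsn()
        print("nodes:", len(degree), "max degree:", max(degree.values()))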

  11. Complex networks-based energy-efficient evolution model for wireless sensor networks

    International Nuclear Information System (INIS)

    Zhu Hailin; Luo Hong; Peng Haipeng; Li Lixiang; Luo Qun

    2009-01-01

    Based on complex networks theory, we present two self-organized energy-efficient models for wireless sensor networks in this paper. The first model constructs the wireless sensor networks according to the connectivity and remaining energy of each sensor node, thus it can produce scale-free networks which have a performance of random error tolerance. In the second model, we not only consider the remaining energy, but also introduce the constraint of links to each node. This model can make the energy consumption of the whole network more balanced. Finally, we present the numerical experiments of the two models.

  12. Complex energy eigenstates in a model with two equal mass particles

    Energy Technology Data Exchange (ETDEWEB)

    Gleiser, R J; Reula, D A; Moreschi, O M [Universidad Nacional de Cordoba (Argentina). Inst. de Matematica, Astronomia y Fisica

    1980-09-01

    The properties of a simple quantum mechanical model for the decay of two equal-mass particles are studied and related to some recent work on complex energy eigenvalues. It consists essentially of a generalization of the Lee-Friedrichs model for an unstable particle and gives a highly idealized version of the K⁰-anti-K⁰ system, including CP violations. The model is completely solvable, thus allowing a comparison with the well-known Weisskopf-Wigner formalism for the decay amplitudes. A different model describing the same system is also briefly outlined.
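
    For readers unfamiliar with the complex-energy language, the sketch below (Python) diagonalises a generic 2x2 Weisskopf-Wigner effective Hamiltonian H = M - (i/2)*Gamma of the kind such models idealise; a small asymmetry in Gamma mimics CP violation. All numerical values are illustrative, not taken from the paper.

        import numpy as np

        M = np.array([[0.50, 0.02],
                      [0.02, 0.50]])
        Gamma = np.array([[0.100, 0.060],
                          [0.065, 0.100]])  # slight asymmetry: CP violation (assumed)

        H_eff = M - 0.5j * Gamma              # non-Hermitian effective Hamiltonian
        for lam in np.linalg.eigvals(H_eff):
            # complex eigenvalue E - i*Gamma_tot/2: the real part is the energy,
            # the imaginary part gives the decay width
            print(f"E = {lam.real:+.4f}, width = {-2 * lam.imag:.4f}")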

  13. Diffusion in higher dimensional SYK model with complex fermions

    Science.gov (United States)

    Cai, Wenhe; Ge, Xian-Hui; Yang, Guo-Hong

    2018-01-01

    We construct a new higher-dimensional SYK model with complex fermions on bipartite lattices. As an extension of the original zero-dimensional SYK model, we focus on the one-dimensional case, and similar Hamiltonians can be obtained in higher dimensions. This model has a conserved U(1) fermion number Q and a conjugate chemical potential μ. We evaluate the thermal and charge diffusion constants via a large-q expansion in the low-temperature limit. The results show that the diffusivity depends on the ratio of free Majorana fermions to Majorana fermions with SYK interactions. The transport properties and the butterfly velocity are accordingly calculated at low temperature. The specific heat and the thermal conductivity are proportional to the temperature. The electrical resistivity also has a linear temperature-dependence term.

  14. Adaptive Surface Modeling of Soil Properties in Complex Landforms

    Directory of Open Access Journals (Sweden)

    Wei Liu

    2017-06-01

    Spatial discontinuity often causes poor accuracy when a single model is used for the surface modeling of soil properties in complex geomorphic areas. Here we present a method for adaptive surface modeling of combined secondary variables to improve prediction accuracy during the interpolation of soil properties (ASM-SP). Using various secondary variables and multiple base interpolation models, ASM-SP was used to interpolate soil K+ in a typical complex geomorphic area (Qinghai Lake Basin, China). Five methods, including inverse distance weighting (IDW), ordinary kriging (OK), and OK combined with different secondary variables (OK-Landuse, OK-Geology, and OK-Soil), were used to validate the proposed method. The mean error (ME), mean absolute error (MAE), root mean square error (RMSE), mean relative error (MRE), and accuracy (AC) were used as evaluation indicators. Results showed that: (1) the OK interpolation result is spatially smooth with a weak bull's-eye effect, while IDW shows a relatively stronger bull's-eye effect; both have obvious deficiencies in depicting the spatial variability of soil K+. (2) The methods incorporating combinations of different secondary variables (ASM-SP, OK-Landuse, OK-Geology, and OK-Soil) were associated with lower estimation bias. Compared with IDW, OK, OK-Landuse, OK-Geology, and OK-Soil, the accuracy of ASM-SP increased by 13.63%, 10.85%, 9.98%, 8.32%, and 7.66%, respectively. Furthermore, ASM-SP was more stable, with lower MEs, MAEs, RMSEs, and MREs. (3) ASM-SP reproduces more detail than the other methods at abrupt boundaries, keeping the result consistent with the true secondary variables. In conclusion, ASM-SP can not only consider the nonlinear relationship between secondary variables and soil properties, but can also adaptively combine the advantages of multiple models, making the spatial interpolation of soil K+ more reasonable.
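
    To make the baselines concrete, here is a minimal Python sketch of IDW with leave-one-out cross-validation and the ME/MAE/RMSE indicators named above; the data are synthetic stand-ins, not the Qinghai Lake Basin soil K+ samples.

        import numpy as np

        rng = np.random.default_rng(0)
        xy = rng.uniform(0, 100, size=(50, 2))                 # sample locations
        z = np.sin(xy[:, 0] / 20.0) + 0.1 * rng.standard_normal(50)

        def idw(xy_obs, z_obs, xy_new, power=2.0):
            d = np.linalg.norm(xy_obs[None, :, :] - xy_new[:, None, :], axis=2)
            w = np.maximum(d, 1e-12) ** (-power)               # inverse-distance weights
            return (w * z_obs).sum(axis=1) / w.sum(axis=1)

        pred = np.array([idw(np.delete(xy, i, 0), np.delete(z, i), xy[i:i+1])[0]
                         for i in range(len(z))])              # leave-one-out
        err = pred - z
        print(f"ME   = {err.mean():+.4f}")
        print(f"MAE  = {np.abs(err).mean():.4f}")
        print(f"RMSE = {np.sqrt((err ** 2).mean()):.4f}")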

  15. A 3D modeling approach to complex faults with multi-source data

    Science.gov (United States)

    Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan

    2015-04-01

    Fault modeling is a very important step in making an accurate and reliable 3D geological model. Typical existing methods demand enough fault data to construct complex fault models; however, it is well known that the available fault data are generally sparse and undersampled. In this paper, we propose a fault-modeling workflow that can integrate multi-source data to construct fault models. For the faults that cannot be modeled with these data, especially those that are small-scale or approximately parallel to the sections, we propose a fault deduction method to infer the hanging wall and footwall lines after displacement calculation. Moreover, a fault cutting algorithm can supplement the available fault points at locations where faults cut each other. Increasing fault points in poorly sampled areas not only makes it possible to construct fault models efficiently, but also reduces manual intervention. By using fault-based interpolation and remeshing the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures whether or not the available geological data are sufficient. A concrete example of using the method in Tangshan, China, shows that the method can be applied to broad and complex geological areas.

  16. Progress on Complex Langevin simulations of a finite density matrix model for QCD

    Energy Technology Data Exchange (ETDEWEB)

    Bloch, Jacques [Univ. of Regensburg (Germany). Inst. for Theoretical Physics; Glesaan, Jonas [Swansea Univ., Swansea U.K.; Verbaarschot, Jacobus [Stony Brook Univ., NY (United States). Dept. of Physics and Astronomy; Zafeiropoulos, Savvas [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); College of William and Mary, Williamsburg, VA (United States); Heidelberg Univ. (Germany). Inst. for Theoretische Physik

    2018-04-01

    We study the Stephanov model, which is an RMT model for QCD at finite density, using the Complex Langevin algorithm. Naive implementation of the algorithm shows convergence towards the phase quenched or quenched theory rather than to the intended theory with dynamical quarks. A detailed analysis of this issue and a potential resolution of the failure of this algorithm are discussed. We study the effect of gauge cooling on the Dirac eigenvalue distribution and time evolution of the norm for various cooling norms, which were specifically designed to remove the pathologies of the complex Langevin evolution. The cooling is further supplemented with a shifted representation for the random matrices. Unfortunately, none of these modifications generate a substantial improvement on the complex Langevin evolution and the final results still do not agree with the analytical predictions.
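
    For background, the complex Langevin update itself fits in a few lines. The Python sketch below runs it on a one-variable Gaussian toy action S(z) = sigma*z^2/2 with complex sigma, a case where the method is known to converge; it is not the Stephanov matrix model studied in the paper.

        import numpy as np

        rng = np.random.default_rng(42)
        sigma = 1.0 + 1.0j                 # complex coupling -> complex weight
        dt, nsteps, ntherm = 1e-3, 200_000, 10_000

        z, samples = 0.0 + 0.0j, []
        for step in range(nsteps):
            drift = -sigma * z             # -dS/dz
            z += drift * dt + np.sqrt(2 * dt) * rng.standard_normal()
            if step >= ntherm:
                samples.append(z * z)

        print("CL estimate of <z^2>:", np.mean(samples))
        print("exact value 1/sigma :", 1 / sigma)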

  17. Synchronization Experiments With A Global Coupled Model of Intermediate Complexity

    Science.gov (United States)

    Selten, Frank; Hiemstra, Paul; Shen, Mao-Lin

    2013-04-01

    In the super modeling approach, an ensemble of imperfect models is connected through nudging terms that pull the solution of each model toward the solutions of all other models in the ensemble. The goal is to obtain a synchronized state, through a proper choice of connection strengths, that closely tracks the trajectory of the true system. For the super modeling approach to be successful, the connections should be dense and strong enough for synchronization to occur. In this study we analyze the behavior of an ensemble of connected global atmosphere-ocean models of intermediate complexity. All atmosphere models are connected to the same ocean model through the surface fluxes of heat, water and momentum, and the ocean is integrated using weighted-average surface fluxes. In particular we analyze the degree of synchronization between the atmosphere models and the characteristics of the ensemble-mean solution. The results are interpreted using a low-order atmosphere-ocean toy model.
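
    The nudging construction is easy to demonstrate on a toy pair of imperfect Lorenz-63 models; the connection strength and parameter perturbations below are illustrative choices, not those of the intermediate-complexity climate models used in the study.

        import numpy as np

        def lorenz(s, sigma, rho, beta):
            x, y, z = s
            return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

        dt, C = 0.005, 5.0                    # step size, connection strength (assumed)
        s1 = np.array([1.0, 1.0, 1.0])
        s2 = np.array([-3.0, 2.0, 25.0])

        for _ in range(20_000):
            f1 = lorenz(s1, 10.0, 28.0, 8 / 3) + C * (s2 - s1)  # nudged toward s2
            f2 = lorenz(s2, 10.5, 27.0, 8 / 3) + C * (s1 - s2)  # imperfect twin, nudged toward s1
            s1, s2 = s1 + dt * f1, s2 + dt * f2                 # forward Euler step

        print("synchronization error |s1 - s2| =", np.linalg.norm(s1 - s2))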

  18. New Age of 3D Geological Modelling or Complexity is not an Issue Anymore

    Science.gov (United States)

    Mitrofanov, Aleksandr

    2017-04-01

    A geological model has significant value in almost all types of research related to regional mapping, geodynamics and especially the structural and resource geology of mineral deposits. A well-developed geological model must take into account all vital features of the modelled object without over-simplification, and should also adequately represent the geologist's interpretation. In recent years, with the gradual exhaustion of deposits with relatively simple morphology, geologists all over the world are faced with the necessity of building representative models for more and more structurally complex objects. Meanwhile, the set of tools used for this task has not changed significantly in the last two to three decades. The most widespread method of wireframe geological modelling was developed in the 1990s and is fully based on an engineering-design set of instruments (so-called CAD). Strings and polygons representing the section-based interpretation are used as an intermediate step in the generation of wireframes. Despite the significant time required for this type of modelling, it can still provide sufficient results for simple and medium-complexity geological objects. However, with increasing complexity, more and more vital features of the deposit are sacrificed because of the fundamental inability of CAD-based explicit techniques (or the much greater modelling time they require) to produce wireframes of the appropriate complexity. At the same time, an alternative technology, not based on the sectional approach and using fundamentally different mathematical algorithms, is being actively developed in a variety of other disciplines: medicine, advanced industrial design, and the game and cinema industries. In recent years this implicit technology has begun to be developed for geological modelling purposes, and nowadays it is represented by a very powerful set of tools that has been integrated into almost all major commercial software packages. Implicit

  19. Generative complexity of Gray-Scott model

    Science.gov (United States)

    Adamatzky, Andrew

    2018-03-01

    In the Gray-Scott reaction-diffusion system, one reactant is constantly fed into the system; another reactant is reproduced by consuming the supplied reactant and is also converted to an inert product. The rates at which one reactant is fed into the system and the other is removed from it determine the configurations of the concentration profiles: stripes, spots, waves. We calculate the generative complexity (the morphological complexity of concentration profiles grown from a point-wise perturbation of the medium) of the Gray-Scott system for a range of feed and removal rates. The morphological complexity is evaluated using Shannon entropy, Simpson diversity, an approximation of Lempel-Ziv complexity, and expressivity (Shannon entropy divided by space-filling). We analyse the behaviour of the systems with the highest values of generative morphological complexity and show that the Gray-Scott systems expressing the highest levels of complexity are composed of wave-fragments (similar to wave-fragments in sub-excitable media) and travelling localisations (similar to quasi-dissipative solitons and gliders in Conway's Game of Life).
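
    A minimal Python sketch of the ingredients named above: a Gray-Scott simulation grown from a point-wise perturbation, followed by a much cruder Shannon-entropy reading of the resulting pattern. The parameters are common textbook values, not the (F, k) ranges scanned in the paper.

        import numpy as np

        n, Du, Dv, F, k = 128, 0.16, 0.08, 0.035, 0.060   # illustrative feed/removal rates
        U, V = np.ones((n, n)), np.zeros((n, n))
        U[n//2-2:n//2+2, n//2-2:n//2+2] = 0.50            # point-wise perturbation
        V[n//2-2:n//2+2, n//2-2:n//2+2] = 0.25

        def lap(A):                                       # periodic 5-point Laplacian
            return (np.roll(A, 1, 0) + np.roll(A, -1, 0) +
                    np.roll(A, 1, 1) + np.roll(A, -1, 1) - 4 * A)

        for _ in range(5000):
            uvv = U * V * V
            U += Du * lap(U) - uvv + F * (1 - U)
            V += Dv * lap(V) + uvv - (F + k) * V

        # binary Shannon entropy of the grown V profile (crude complexity proxy)
        p = np.clip((V > 0.1).mean(), 1e-12, 1 - 1e-12)
        H = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
        print(f"occupied fraction = {p:.3f}, Shannon entropy = {H:.3f} bits")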

  20. Modelling of turbulence and combustion for simulation of gas explosions in complex geometries

    Energy Technology Data Exchange (ETDEWEB)

    Arntzen, Bjoern Johan

    1998-12-31

    This thesis analyses and presents new models for turbulent reactive flows for CFD (Computational Fluid Dynamics) simulation of gas explosions in complex geometries like offshore modules. The course of a gas explosion in a complex geometry is largely determined by the development of turbulence and the accompanying increased combustion rate. To be able to model the process it is necessary to use a CFD code as a starting point, provided with a suitable turbulence and combustion model. The modelling and calculations are done in a three-dimensional finite volume CFD code, where complex geometries are represented by a porosity concept, which gives porosity on the grid cell faces, depending on what is inside the cell. The turbulent flow field is modelled with a k-ε turbulence model. Subgrid models are used for production of turbulence from geometry not fully resolved on the grid. Results from laser doppler anemometry measurements around obstructions in steady and transient flows have been analysed and the turbulence models have been improved to handle transient, subgrid and reactive flows. The combustion is modelled with a burning velocity model and a flame model which incorporates the burning velocity into the code. Two different flame models have been developed: SIF (Simple Interface Flame model), which treats the flame as an interface between reactants and products, and the β-model, where the reaction zone is resolved with about three grid cells. The flame normally starts with a quasi-laminar burning velocity, due to flame instabilities, modelled as a function of flame radius and laminar burning velocity. As the flow field becomes turbulent, the flame uses a turbulent burning velocity model based on experimental data and dependent on turbulence parameters and laminar burning velocity. The laminar burning velocity is modelled as a function of gas mixture, equivalence ratio, pressure and temperature in the reactant. Simulations agree well with experiments.

  1. Recommended Research Directions for Improving the Validation of Complex Systems Models.

    Energy Technology Data Exchange (ETDEWEB)

    Vugrin, Eric D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Trucano, Timothy G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Finley, Patrick D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Flanagan, Tatiana Paz [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Naugle, Asmeret Bier [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Tsao, Jeffrey Y. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Verzi, Stephen Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    Improved validation for models of complex systems has been a primary focus over the past year for the Resilience in Complex Systems Research Challenge. This document describes a set of research directions that are the result of distilling those ideas into three categories of research -- epistemic uncertainty, strong tests, and value of information. The content of this document can be used to transmit valuable information to future research activities, update the Resilience in Complex Systems Research Challenge's roadmap, inform the upcoming FY18 Laboratory Directed Research and Development (LDRD) call and research proposals, and facilitate collaborations between Sandia and external organizations. The recommended research directions can provide topics for collaborative research, development of proposals, workshops, and other opportunities.

  2. Characterization of pioglitazone cyclodextrin complexes: Molecular modeling to in vivo evaluation

    Directory of Open Access Journals (Sweden)

    Dinesh M Bramhane

    2016-01-01

    Aims: The objective of the present study was to examine the influence of different β-cyclodextrin derivatives and different methods of complexation on the aqueous solubility, and the consequent translation into in vivo performance, of pioglitazone (PE). Materials and Methods: Three cyclodextrins, β-cyclodextrin (BCD), hydroxypropyl-β-cyclodextrin (HPBCD), and sulfobutylether-7-β-cyclodextrin (SBEBCD), were employed in the preparation of 1:1 pioglitazone complexes by three methods, viz. co-grinding, kneading, and co-evaporation. Complexation was confirmed by phase solubility, proton NMR, Fourier-transform infrared spectroscopy, differential scanning calorimetry (DSC), and X-ray diffraction (XRD). The mode of complexation was investigated by molecular dynamics studies. A pharmacodynamic study of the blood-glucose-lowering activity of the PE complexes was performed in an alloxan-induced diabetic rat model. Results: The aqueous solubility of PE was significantly improved in the presence of cyclodextrin. Apparent stability constants were 254.33 M⁻¹ for BCD-PE, 737.48 M⁻¹ for HPBCD-PE, and 5959.06 M⁻¹ for SBEBCD-PE. The in silico predictions of the mode of inclusion were in close agreement with the experimental proton NMR observations. DSC and XRD demonstrated complete amorphization of crystalline PE upon inclusion. All complexes exhibited >95% dissolution within 10 min, compared to <40% for the drug powder at the same time point. Marked lowering of blood glucose was recorded for all complexes. Conclusion: Complexation of PE with the different BCDs significantly influenced its aqueous solubility, improved in vitro dissolution, and consequently translated into enhanced pharmacodynamic activity in rats.
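
    The apparent stability constants quoted above (e.g., 254.33 M⁻¹ for BCD-PE) come from standard Higuchi-Connors phase-solubility analysis, sketched below in Python on synthetic placeholder data; the slope and intrinsic solubility S0 are taken from a linear (A_L-type) phase-solubility diagram.

        import numpy as np

        # synthetic placeholder data: drug solubility vs. cyclodextrin concentration
        cd = np.array([0.000, 0.002, 0.004, 0.006, 0.008, 0.010])          # [CD], M
        drug = np.array([1.00e-4, 1.45e-4, 1.92e-4, 2.38e-4, 2.85e-4, 3.31e-4])

        slope, intercept = np.polyfit(cd, drug, 1)   # linear A_L-type diagram
        S0 = intercept                               # intrinsic solubility
        K = slope / (S0 * (1 - slope))               # Higuchi-Connors 1:1 constant
        print(f"slope = {slope:.4f}, S0 = {S0:.2e} M, K(1:1) = {K:.1f} M^-1")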

  3. Multi-scale modeling with cellular automata: The complex automata approach

    NARCIS (Netherlands)

    Hoekstra, A.G.; Falcone, J.-L.; Caiazzo, A.; Chopard, B.

    2008-01-01

    Cellular Automata are commonly used to describe complex natural phenomena. In many cases it is required to capture the multi-scale nature of these phenomena. A single Cellular Automata model may not be able to efficiently simulate a wide range of spatial and temporal scales. It is our goal to

  4. Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems

    Science.gov (United States)

    Koch, Patrick Nathan

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration to facilitate concurrent system and subsystem design exploration, for the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) Hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts, and allowing integration of subproblems for system synthesis, (2) Statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration, and (3) Noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method developed and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.

  5. Using iMCFA to Perform the CFA, Multilevel CFA, and Maximum Model for Analyzing Complex Survey Data.

    Science.gov (United States)

    Wu, Jiun-Yu; Lee, Yuan-Hsuan; Lin, John J H

    2018-01-01

    To construct CFA, MCFA, and maximum MCFA with LISREL v.8 and below, we provide iMCFA (integrated Multilevel Confirmatory Analysis) to examine the potential multilevel factorial structure in the complex survey data. Modeling multilevel structure for complex survey data is complicated because building a multilevel model is not an infallible statistical strategy unless the hypothesized model is close to the real data structure. Methodologists have suggested using different modeling techniques to investigate potential multilevel structure of survey data. Using iMCFA, researchers can visually set the between- and within-level factorial structure to fit MCFA, CFA and/or MAX MCFA models for complex survey data. iMCFA can then yield between- and within-level variance-covariance matrices, calculate intraclass correlations, perform the analyses and generate the outputs for respective models. The summary of the analytical outputs from LISREL is gathered and tabulated for further model comparison and interpretation. iMCFA also provides LISREL syntax of different models for researchers' future use. An empirical and a simulated multilevel dataset with complex and simple structures in the within or between level was used to illustrate the usability and the effectiveness of the iMCFA procedure on analyzing complex survey data. The analytic results of iMCFA using Muthen's limited information estimator were compared with those of Mplus using Full Information Maximum Likelihood regarding the effectiveness of different estimation methods.

  6. A GIS-based atmospheric dispersion model for pollutants emitted by complex source areas.

    Science.gov (United States)

    Teggi, Sergio; Costanzini, Sofia; Ghermandi, Grazia; Malagoli, Carlotta; Vinceti, Marco

    2018-01-01

    Gaussian dispersion models are widely used to simulate the concentrations and deposition fluxes of pollutants emitted by source areas. Very often, the calculation time limits the number of sources and receptors, and the geometry of the sources must be simple and without holes. This paper presents CAREA, a new GIS-based Gaussian model for complex source areas. CAREA was coded in the Python language, and is largely based on a simplified formulation of the very popular and recognized AERMOD model. The model allows users to define in a GIS environment thousands of gridded or scattered receptors and thousands of complex sources with hundreds of vertices and holes. CAREA computes ground-level, or near-ground-level, concentrations and dry deposition fluxes of pollutants. The input/output and the runs of the model can be completely managed in a GIS environment (e.g. inside a GIS project). The paper presents the CAREA formulation and its applications to very complex test cases. The tests show that the processing times are satisfactory and that the definition of sources and receptors and the output retrieval are quite easy in a GIS environment. CAREA and AERMOD are compared using simple and reproducible test cases. The comparison shows that CAREA satisfactorily reproduces AERMOD simulations and is considerably faster than AERMOD.
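
    For orientation, the steady-state Gaussian plume equation that such models build on can be written in a few lines of Python; the sigma_y and sigma_z power laws below are crude placeholders, not AERMOD's or CAREA's actual dispersion parameterisation.

        import numpy as np

        def plume_conc(x, y, z, Q=1.0, u=3.0, H=10.0):
            """Concentration at downwind x, crosswind y, height z (point source)."""
            sy = 0.08 * x ** 0.90          # assumed sigma_y(x), m
            sz = 0.06 * x ** 0.85          # assumed sigma_z(x), m
            cross = np.exp(-y ** 2 / (2 * sy ** 2))
            vert = (np.exp(-(z - H) ** 2 / (2 * sz ** 2)) +
                    np.exp(-(z + H) ** 2 / (2 * sz ** 2)))   # ground reflection
            return Q / (2 * np.pi * u * sy * sz) * cross * vert

        for x in (100.0, 500.0, 1000.0):
            print(f"x = {x:6.0f} m : C = {plume_conc(x, 0.0, 1.5):.3e} g/m^3")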

  7. Large eddy simulation modeling of particle-laden flows in complex terrain

    Science.gov (United States)

    Salesky, S.; Giometto, M. G.; Chamecki, M.; Lehning, M.; Parlange, M. B.

    2017-12-01

    The transport, deposition, and erosion of heavy particles over complex terrain in the atmospheric boundary layer is an important process for hydrology, air quality forecasting, biology, and geomorphology. However, in situ observations can be challenging in complex terrain due to spatial heterogeneity. Furthermore, there is a need to develop numerical tools that can accurately represent the physics of these multiphase flows over complex surfaces. We present a new numerical approach to accurately model the transport and deposition of heavy particles in complex terrain using large eddy simulation (LES). Particle transport is represented through solution of the advection-diffusion equation including terms that represent gravitational settling and inertia. The particle conservation equation is discretized in a cut-cell finite volume framework in order to accurately enforce mass conservation. Simulation results will be validated with experimental data, and numerical considerations required to enforce boundary conditions at the surface will be discussed. Applications will be presented in the context of snow deposition and transport, as well as urban dispersion.

  8. Modelling of Octahedral Manganese II Complexes with Inorganic Ligands: A Problem with Spin-States

    Directory of Open Access Journals (Sweden)

    Ludwik Adamowicz

    2003-08-01

    Quantum mechanical ab initio UHF, MP2, MC-SCF and DFT calculations with moderate Gaussian basis sets were performed for octahedral manganese complexes MnX6 (X = H2O, F-, CN-). The correct spin state of the complexes was obtained only when the counter-ions neutralizing the entire complexes were included in the modelling at the B3LYP level of theory.

  9. Physiological Dynamics in Demyelinating Diseases: Unraveling Complex Relationships through Computer Modeling

    Directory of Open Access Journals (Sweden)

    Jay S. Coggan

    2015-09-01

    Despite intense research, few treatments are available for most neurological disorders. Demyelinating diseases are no exception. This is perhaps not surprising considering the multifactorial nature of these diseases, which involve complex interactions between immune system cells, glia and neurons. In the case of multiple sclerosis, for example, there is no unanimity among researchers about the cause or even which system or cell type could be ground zero. This situation precludes the development and strategic application of mechanism-based therapies. We will discuss how computational modeling applied to questions at different biological levels can help link together disparate observations and decipher complex mechanisms whose solutions are not amenable to simple reductionism. By making testable predictions and revealing critical gaps in existing knowledge, such models can help direct research and will provide a rigorous framework in which to integrate new data as they are collected. Nowadays, there is no shortage of data; the challenge is to make sense of it all. In that respect, computational modeling is an invaluable tool that could, ultimately, transform how we understand, diagnose, and treat demyelinating diseases.

  10. Agent-based financial dynamics model from stochastic interacting epidemic system and complexity analysis

    International Nuclear Information System (INIS)

    Lu, Yunfan; Wang, Jun; Niu, Hongli

    2015-01-01

    An agent-based financial stock price model is developed and investigated by a stochastic interacting epidemic system, which is one of the statistical physics systems and has been used to model the spread of an epidemic or a forest fire. Numerical and statistical analyses are performed on the simulated returns of the proposed financial model. Complexity properties of the financial time series are explored by calculating the correlation dimension and using the modified multiscale entropy method. In order to verify the rationality of the financial model, the real stock market indexes, Shanghai Composite Index and Shenzhen Component Index, are studied in comparison with the simulation data of the proposed model for the different infectiousness parameters. The empirical research reveals that this financial model can reproduce some important features of the real stock markets. - Highlights: • A new agent-based financial price model is developed from a stochastic interacting epidemic system. • The structure of the proposed model allows simulation of the financial dynamics. • Correlation dimension and MMSE are applied to the complexity analysis of financial time series. • Empirical results show the rationality of the proposed financial model.
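
    A minimal sketch of the general idea, not the paper's exact specification: an SIS-type "epidemic" of trading sentiment whose infected (active) agents generate order imbalance, which in turn drives log-price returns. All rates and the price-impact rule are assumptions.

        import numpy as np

        rng = np.random.default_rng(7)
        N, beta, gamma, T = 10_000, 0.30, 0.10, 1000    # agents, infection/recovery, steps
        S, I = N - 10, 10
        returns = []

        for _ in range(T):
            new_inf = rng.binomial(S, 1 - np.exp(-beta * I / N))
            new_rec = rng.binomial(I, gamma)
            S, I = S - new_inf + new_rec, I + new_inf - new_rec   # SIS dynamics
            buys = rng.binomial(I, 0.5)        # active agents buy or sell at random
            returns.append((2 * buys - I) / N) # net imbalance -> log return (assumed)

        r = np.array(returns)
        kurt = ((r - r.mean()) ** 4).mean() / r.var() ** 2 - 3
        print(f"return std = {r.std():.5f}, excess kurtosis = {kurt:.3f}")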

  11. Agent-based financial dynamics model from stochastic interacting epidemic system and complexity analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Yunfan, E-mail: yunfanlu@yeah.net; Wang, Jun; Niu, Hongli

    2015-06-12

    An agent-based financial stock price model is developed and investigated by a stochastic interacting epidemic system, which is one of the statistical physics systems and has been used to model the spread of an epidemic or a forest fire. Numerical and statistical analyses are performed on the simulated returns of the proposed financial model. Complexity properties of the financial time series are explored by calculating the correlation dimension and using the modified multiscale entropy method. In order to verify the rationality of the financial model, the real stock market indexes, Shanghai Composite Index and Shenzhen Component Index, are studied in comparison with the simulation data of the proposed model for the different infectiousness parameters. The empirical research reveals that this financial model can reproduce some important features of the real stock markets. - Highlights: • A new agent-based financial price model is developed from a stochastic interacting epidemic system. • The structure of the proposed model allows simulation of the financial dynamics. • Correlation dimension and MMSE are applied to the complexity analysis of financial time series. • Empirical results show the rationality of the proposed financial model.

  12. Multiscale modeling of complex materials phenomenological, theoretical and computational aspects

    CERN Document Server

    Trovalusci, Patrizia

    2014-01-01

    The papers in this volume deal with materials science, theoretical mechanics and experimental and computational techniques at multiple scales, providing a sound base and a framework for many applications which are hitherto treated in a phenomenological sense. The basic principles of multiscale modeling strategies are formulated for modern complex multiphase materials subjected to various types of mechanical and thermal loading and environmental effects. The focus is on problems where mechanics is highly coupled with other concurrent physical phenomena. Attention is also given to the historical origins of multiscale modeling and the foundations of continuum mechanics currently adopted to model non-classical continua with substructure, for which internal length scales play a crucial role.

  13. Building Better Ecological Machines: Complexity Theory and Alternative Economic Models

    Directory of Open Access Journals (Sweden)

    Jess Bier

    2016-12-01

    Computer models of the economy are regularly used to predict economic phenomena and set financial policy. However, the conventional macroeconomic models are currently being reimagined after they failed to foresee the current economic crisis, the outlines of which began to be understood only in 2007-2008. In this article we analyze the most prominent product of this reimagining: Agent-Based models (ABMs). ABMs are an influential alternative to standard economic models, and they are one focus of complexity theory, a discipline that is a more open successor to the conventional chaos and fractal modeling of the 1990s. The modelers who create ABMs claim that their models depict markets as ecologies, and that they are more responsive than conventional models that depict markets as machines. We challenge this presentation, arguing instead that recent modeling efforts amount to the creation of models as ecological machines. Our paper aims to contribute to an understanding of the organizing metaphors of macroeconomic models, which we argue is relevant conceptually and politically, e.g., when models are used for regulatory purposes.

  14. Simulation of groundwater flow in the glacial aquifer system of northeastern Wisconsin with variable model complexity

    Science.gov (United States)

    Juckem, Paul F.; Clark, Brian R.; Feinstein, Daniel T.

    2017-05-04

    The U.S. Geological Survey, National Water-Quality Assessment seeks to map estimated intrinsic susceptibility of the glacial aquifer system of the conterminous United States. Improved understanding of the hydrogeologic characteristics that explain spatial patterns of intrinsic susceptibility, commonly inferred from estimates of groundwater age distributions, is sought so that methods used for the estimation process are properly equipped. An important step beyond identifying relevant hydrogeologic datasets, such as glacial geology maps, is to evaluate how incorporation of these resources into process-based models using differing levels of detail could affect resulting simulations of groundwater age distributions and, thus, estimates of intrinsic susceptibility.This report describes the construction and calibration of three groundwater-flow models of northeastern Wisconsin that were developed with differing levels of complexity to provide a framework for subsequent evaluations of the effects of process-based model complexity on estimations of groundwater age distributions for withdrawal wells and streams. Preliminary assessments, which focused on the effects of model complexity on simulated water levels and base flows in the glacial aquifer system, illustrate that simulation of vertical gradients using multiple model layers improves simulated heads more in low-permeability units than in high-permeability units. Moreover, simulation of heterogeneous hydraulic conductivity fields in coarse-grained and some fine-grained glacial materials produced a larger improvement in simulated water levels in the glacial aquifer system compared with simulation of uniform hydraulic conductivity within zones. The relation between base flows and model complexity was less clear; however, the relation generally seemed to follow a similar pattern as water levels. Although increased model complexity resulted in improved calibrations, future application of the models using simulated particle

  15. A complex network based model for detecting isolated communities in water distribution networks

    Science.gov (United States)

    Sheng, Nan; Jia, Youwei; Xu, Zhao; Ho, Siu-Lau; Wai Kan, Chi

    2013-12-01

    A water distribution network (WDN) is a typical real-world complex network of major infrastructure that plays an important role in humans' daily life. In this paper, we explore the formation of isolated communities in WDNs based on complex network theory. A graph-algebraic model is proposed to effectively detect the potential communities due to pipeline failures. This model can properly illustrate the connectivity and evolution of a WDN during different stages of contingency events, and identify the emerging isolated communities through spectral analysis of the Laplacian matrix. A case study on a practical urban WDN in China is conducted, and the consistency between the simulation results and the historical data is reported to showcase the feasibility and effectiveness of the proposed model.
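
    The spectral step is easy to illustrate: the number of (near-)zero eigenvalues of the graph Laplacian equals the number of connected components, so communities isolated by pipe failures can be counted directly. The toy network below is illustrative, not the Chinese case-study WDN.

        import numpy as np

        edges = [(0, 1), (1, 2), (2, 3), (3, 0), (3, 4), (4, 5), (5, 6), (6, 4)]
        failed = {(3, 4)}                        # assumed pipeline failure scenario
        active = [e for e in edges if e not in failed]

        n = 7
        L = np.zeros((n, n))
        for i, j in active:                      # Laplacian L = D - A
            L[i, i] += 1; L[j, j] += 1
            L[i, j] -= 1; L[j, i] -= 1

        eigvals = np.linalg.eigvalsh(L)
        n_components = int((eigvals < 1e-9).sum())
        print("connected components after failure:", n_components)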

  16. Frequency dependence of complex moduli of brain tissue using a fractional Zener model

    International Nuclear Information System (INIS)

    Kohandel, M; Sivaloganathan, S; Tenti, G; Darvish, K

    2005-01-01

    Brain tissue exhibits viscoelastic behaviour. If loading times are sufficiently short, static tests are not sufficient to determine the complete viscoelastic behaviour of the material, and dynamic test methods are more appropriate. The concept of the complex modulus of elasticity is a powerful tool for characterizing the frequency-domain behaviour of viscoelastic materials. On the other hand, it is well known that classical viscoelastic models can be generalized by means of fractional calculus to describe more complex viscoelastic behaviour of materials. In this paper, the fractional Zener model is investigated in order to describe the dynamic behaviour of brain tissue. The model is fitted to experimental data from oscillatory shear tests of bovine brain tissue to verify its behaviour and to obtain the material parameters.
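
    For reference, the fractional Zener (fractional standard linear solid) complex modulus has the closed form G*(w) = (G0 + Ginf*(i*w*tau)**alpha) / (1 + (i*w*tau)**alpha), whose real and imaginary parts are the storage and loss moduli fitted to the oscillatory shear data. The Python sketch below uses illustrative parameter values, not the fitted bovine-brain ones.

        import numpy as np

        G0, Ginf, tau, alpha = 200.0, 4000.0, 0.01, 0.6   # Pa, Pa, s, fractional order

        def complex_modulus(omega):
            x = (1j * omega * tau) ** alpha
            return (G0 + Ginf * x) / (1 + x)

        for f in (0.1, 1.0, 10.0, 100.0):                 # frequency sweep, Hz
            G = complex_modulus(2 * np.pi * f)
            print(f"f = {f:6.1f} Hz: G' = {G.real:8.1f} Pa, G'' = {G.imag:7.1f} Pa")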

  17. Model Complexity and Out-of-Sample Performance: Evidence from S&P 500 Index Returns

    NARCIS (Netherlands)

    Kaeck, Andreas; Rodrigues, Paulo; Seeger, Norman J.

    We apply a range of out-of-sample specification tests to more than forty competing stochastic volatility models to address how model complexity affects out-of-sample performance. Using daily S&P 500 index returns, model confidence set estimations provide strong evidence that the most important model

  18. QRS complex detection based on continuous density hidden Markov models using univariate observations

    Science.gov (United States)

    Sotelo, S.; Arenas, W.; Altuve, M.

    2018-04-01

    In the electrocardiogram (ECG), the detection of QRS complexes is a fundamental step in the ECG signal processing chain since it allows the determination of other characteristic waves of the ECG and provides information about heart rate variability. In this work, an automatic QRS complex detector based on continuous density hidden Markov models (HMM) is proposed. HMMs were trained using univariate observation sequences taken either from QRS complexes or their derivatives. The detection approach is based on the log-likelihood comparison of the observation sequence with a fixed threshold. A sliding window was used to obtain the observation sequence to be evaluated by the model. The threshold was optimized by receiver operating characteristic curves. Sensitivity (Sen), specificity (Spc) and F1 score were used to evaluate the detection performance. The approach was validated using ECG recordings from the MIT-BIH Arrhythmia database. A 6-fold cross-validation shows that the best detection performance was achieved with a 2-state HMM trained with QRS complex sequences (Sen = 0.668, Spc = 0.360 and F1 = 0.309). We concluded that these univariate sequences provide enough information to characterize the QRS complex dynamics from HMM. Future work is directed toward the use of multivariate observations to increase the detection performance.
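
    A minimal Python sketch of the detection rule described above: each sliding window of univariate samples is scored under a continuous-density (Gaussian) HMM with the forward log-likelihood and compared to a fixed threshold. The 2-state parameters and the threshold are hand-set stand-ins for trained and ROC-optimized values.

        import numpy as np

        logA = np.log(np.array([[0.9, 0.1],        # state-transition matrix
                                [0.1, 0.9]]))
        logpi = np.log(np.array([0.5, 0.5]))
        mu, sig = np.array([1.5, 0.6]), np.array([0.4, 0.3])   # per-state Gaussians

        def forward_loglik(x):
            x = np.asarray(x)
            logb = (-0.5 * ((x[:, None] - mu) / sig) ** 2
                    - np.log(sig * np.sqrt(2 * np.pi)))        # emission log-densities
            alpha = logpi + logb[0]
            for t in range(1, len(x)):
                alpha = logb[t] + np.logaddexp.reduce(alpha[:, None] + logA, axis=0)
            return np.logaddexp.reduce(alpha)

        signal = np.concatenate([np.zeros(40), [0.4, 1.2, 1.8, 1.1, 0.3], np.zeros(40)])
        win, thr = 5, -7.5                         # window length, threshold (assumed)
        for start in range(len(signal) - win + 1):
            ll = forward_loglik(signal[start:start + win])
            if ll > thr:
                print(f"QRS-like window at sample {start}, log-likelihood = {ll:.2f}")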

  19. Parameters in dynamic models of complex traits are containers of missing heritability.

    Directory of Open Access Journals (Sweden)

    Yunpeng Wang

    Polymorphisms identified in genome-wide association studies of human traits rarely explain more than a small proportion of the heritable variation, and improving this situation within the current paradigm appears daunting. Given a well-validated dynamic model of a complex physiological trait, a substantial part of the underlying genetic variation must manifest as variation in model parameters. These parameters are themselves phenotypic traits. By linking whole-cell phenotypic variation to genetic variation in a computational model of a single heart cell, incorporating genotype-to-parameter maps, we show that genome-wide association studies on parameters reveal much more genetic variation than when using higher-level cellular phenotypes. The results suggest that letting such studies be guided by computational physiology may facilitate a causal understanding of the genotype-to-phenotype map of complex traits, with strong implications for the development of phenomics technology.

  20. Modeling the complex activity of sickle cell and thalassemia specialist nurses in England.

    Science.gov (United States)

    Leary, Alison; Anionwu, Elizabeth N

    2014-01-01

    Specialist advanced practice nursing in hemoglobinopathies has a rich historical and descriptive literature. Subsequent work has shown that the role is valued by patients and families and also by other professionals. However, there is little empirical research on the complexity of activity of these services in terms of interventions offered. In addition, the work of clinical nurse specialists in England has been devalued through a perception of oversimplification. The purpose of this study was to understand the complexity of expert nursing practice in sickle cell and thalassemia. The approach taken to modeling complexity was used from common methods in mathematical modeling and computational mathematics. Knowledge discovery through data was the underpinning framework used in this study using a priori mined data. This allowed categorization of activity and articulation of complexity. In total, 8966 nursing events were captured over 1639 hours from a total of 22.8 whole time equivalents, and several data sources were mined. The work of specialist nurses in this area is complex in terms of the physical and psychosocial care they provide. The nurses also undertook case management activity such as utilizing a very large network of professionals, and others participated in admission avoidance work and education of patients' families and other staff. The work of nurses specializing in hemoglobinopathy care is complex and multidimensional and is likely to contribute to the quality of care in a cost-effective way. An understanding of this complexity can be used as an underpinning to establishing key performance indicators, optimum caseload calculations, and economic evaluation.

  1. On a computational method for modelling complex ecosystems by superposition procedure

    International Nuclear Information System (INIS)

    He Shanyu.

    1986-12-01

    In this paper, the Superposition Procedure is concisely described, and a computational method for modelling a complex ecosystem is proposed. With this method, the information contained in acceptable submodels and observed data can be utilized to the maximal degree. (author). 1 ref.

  2. Simulation model 'methane' as a tool for effective biogas production during anaerobic conversion of complex organic matter

    Energy Technology Data Exchange (ETDEWEB)

    Vavilin, V A; Vasiliev, V B; Ponomarev, A V; Rytow, S V [Russian Academy of Sciences, Moscow (Russian Federation). Water Problems Inst.

    1994-01-01

    A universal basic model of anaerobic conversion of complex organic material is suggested. The model can be used for investigating the start-up experiments for food industry wastewater. General results obtained in the model agreed with the experimental data. An explanation of a complex dynamic behaviour of the anaerobic system is suggested. (author)

  3. Microscopic universality of complex matrix model correlation functions at weak non-Hermiticity

    International Nuclear Information System (INIS)

    Akemann, G.

    2002-01-01

    The microscopic correlation functions of non-chiral random matrix models with complex eigenvalues are analyzed for a wide class of non-Gaussian measures. In the large-N limit of weak non-Hermiticity, where N is the size of the complex matrices, we can prove that all k-point correlation functions including an arbitrary number of Dirac mass terms are universal close to the origin. To this aim we establish the universality of the asymptotics of orthogonal polynomials in the complex plane. The universality of the correlation functions then follows from that of the kernel of orthogonal polynomials and a mapping of massive to massless correlators

  4. A modeling process to understand complex system architectures

    Science.gov (United States)

    Robinson, Santiago Balestrini

    2009-12-01

    In recent decades, several tools have been developed by the armed forces, and their contractors, to test the capability of a force. These campaign-level analysis tools, oftentimes characterized as constructive simulations, are generally expensive to create and execute, and at best they are extremely difficult to verify and validate. This central observation, that the analysts are relying more and more on constructive simulations to predict the performance of future networks of systems, leads to the two central objectives of this thesis: (1) to enable the quantitative comparison of architectures in terms of their ability to satisfy a capability without resorting to constructive simulations, and (2) when constructive simulations must be created, to quantitatively determine how to spend the modeling effort amongst the different system classes. The first objective led to Hypothesis A, the first main hypothesis, which states that by studying the relationships between the entities that compose an architecture, one can infer how well it will perform a given capability. The method used to test the hypothesis is based on two assumptions: (1) the capability can be defined as a cycle of functions, and (2) it must be possible to estimate the probability that a function-based relationship occurs between any two types of entities. If these two requirements are met, then by creating random functional networks, different architectures can be compared in terms of their ability to satisfy a capability. In order to test this hypothesis, a novel process for creating representative functional networks of large-scale system architectures was developed. The process, named the Digraph Modeling for Architectures (DiMA), was tested by comparing its results to those of complex constructive simulations. Results indicate that if the inputs assigned to DiMA are correct (in the tests they were based on time-averaged data obtained from the ABM), DiMA is able to identify which of any two

  5. Complex Data Modeling and Computationally Intensive Statistical Methods

    CERN Document Server

    Mantovan, Pietro

    2010-01-01

    Recent years have seen the advent and development of many devices able to record and store an ever-increasing amount of complex and high-dimensional data: 3D images generated by medical scanners or satellite remote sensing, DNA microarrays, real-time financial data, system control datasets. The analysis of these data poses new challenging problems and requires the development of novel statistical models and computational methods, fueling many fascinating and fast-growing research areas of modern statistics. The book offers a wide variety of statistical methods and is addressed to statisticians.

  6. Quantitative evaluation and modeling of two-dimensional neovascular network complexity: the surface fractal dimension

    International Nuclear Information System (INIS)

    Grizzi, Fabio; Russo, Carlo; Colombo, Piergiuseppe; Franceschini, Barbara; Frezza, Eldo E; Cobos, Everardo; Chiriva-Internati, Maurizio

    2005-01-01

    Modeling the complex development and growth of tumor angiogenesis using mathematics and biological data is a burgeoning area of cancer research. Architectural complexity is the main feature of every anatomical system, including organs, tissues, cells and sub-cellular entities. The vascular system is a complex network whose geometrical characteristics cannot be properly defined using the principles of Euclidean geometry, which is only capable of interpreting regular and smooth objects that are almost impossible to find in Nature. However, fractal geometry is a more powerful means of quantifying the spatial complexity of real objects. This paper introduces the surface fractal dimension (D_s) as a numerical index of the two-dimensional (2-D) geometrical complexity of tumor vascular networks, and their behavior during computer-simulated changes in vessel density and distribution. We show that D_s significantly depends on the number of vessels and their pattern of distribution. This demonstrates that the quantitative evaluation of the 2-D geometrical complexity of tumor vascular systems can be useful not only to measure its complex architecture, but also to model its development and growth. Studying the fractal properties of neovascularity induces reflections upon the real significance of the complex form of branched anatomical structures, in an attempt to define more appropriate methods of describing them quantitatively. This knowledge can be used to predict the aggressiveness of malignant tumors and design compounds that can halt the process of angiogenesis and influence tumor growth.
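
    Although the paper defines D_s for vascular networks, the box-counting estimator it rests on is generic. The Python sketch below estimates the dimension of a synthetic binary pattern (a random-walk stand-in for a vessel image): D_s is minus the slope of log N(s) versus log s.

        import numpy as np

        rng = np.random.default_rng(3)
        img = np.zeros((256, 256), dtype=bool)
        x = y = 128
        for _ in range(20_000):                    # crude random-walk "vessel" pattern
            x = (x + rng.integers(-1, 2)) % 256
            y = (y + rng.integers(-1, 2)) % 256
            img[x, y] = True

        sizes, counts = [2, 4, 8, 16, 32], []
        for s in sizes:
            blocks = img.reshape(256 // s, s, 256 // s, s).any(axis=(1, 3))
            counts.append(blocks.sum())            # occupied boxes at scale s

        slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
        print("estimated box-counting dimension D_s =", -slope)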

  7. Modeling the Complexities of Water and Hygiene in Limpopo Province South Africa

    Science.gov (United States)

    Mellor, J. E.; Smith, J. A.; Learmonth, G.; Netshandama, V.; Dillingham, R.

    2012-12-01

    Access to sustainable water and sanitation services is one of the biggest challenges the developing world faces as an increasing number of people inhabit those areas. Inadequate access to water and sanitation infrastructure often leads children to drink poor quality water which can result in early childhood diarrhea (ECD). Repeated episodes of ECD can cause serious problems such as growth stunting, cognitive impairment, and even death. Although researchers have long studied the connection between poor access to water and hygiene facilities and ECD, most studies have relied on intervention-control methods to study the effects of singular interventions. Such studies are time-consuming, costly, and fail to acknowledge that the causes and prevention strategies for ECD are numerous and complex. An alternate approach is to think of a community as a complex system in which the engineered, natural and social environments interact in ways that are not easily predicted. Such complex systems have no central or coordinating mechanism and may exhibit emergent behavior which can be counterintuitive and lead to valuable insights. The goal of this research is to develop a robust, quantitative understanding of the complex pathogen transmission chain that leads to ECD. To realize this goal, we have developed an Agent-Based Model (ABM) which simulates individual community member behavior. We have validated this transdisciplinary model with four years of field data from a community in Limpopo Province, South Africa. Our model incorporates data such as household water source preferences, collection habits, household- and source-water quality, water-source reliability and biological regrowth. Our outcome measures are household water quality, ECD incidences, and child growth stunting. This technique allows us to test hypotheses on the computer. Future researchers can implement promising interventions with our partner institution, the University of Venda, and the model can be refined as

  8. A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models

    Science.gov (United States)

    Brugnach, M.; Neilson, R.; Bolte, J.

    2001-12-01

    The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized, in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One of the problems with this approach is that it considers the model as a "black box", and it focuses on explaining model behavior by analyzing the relationship input-output. Since, these models have a high degree of non-linearity, understanding how the input affects an output can be an extremely difficult task. Operationally, the application of this technique may constitute a challenging task because complex process-based models are generally characterized by a large parameter space. In order to overcome some of these difficulties, we propose a method of sensitivity analysis to be applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and it aims to determine how sensitive the model output is to variations in the processes. Once the processes that exert the major influence in

  9. Assessment for Complex Learning Resources: Development and Validation of an Integrated Model

    Directory of Open Access Journals (Sweden)

    Gudrun Wesiak

    2013-01-01

    Today’s e-learning systems meet the challenge to provide interactive, personalized environments that support self-regulated learning as well as social collaboration and simulation. At the same time, assessment procedures have to be adapted to the new learning environments by moving from isolated summative assessments to integrated assessment forms. Therefore, learning experiences enriched with complex didactic resources, such as virtualized collaborations and serious games, have emerged. In this extension of [1], an integrated model for e-assessment (IMA) is outlined, which incorporates complex learning resources and assessment forms as main components for the development of an enriched learning experience. For validation, the IMA was presented to a group of experts from the fields of cognitive science, pedagogy, and e-learning. The findings from the validation led to several refinements of the model, which mainly concern the forms-of-assessment component and the integration of social aspects. Both aspects are accounted for in the revised model, the former by providing a detailed sub-model for assessment forms.

  10. Conceptual and Developmental Analysis of Mental Models: An Example with Complex Change Problems.

    Science.gov (United States)

    Poirier, Louise

    Defining better implicit models of children's actions in a series of situations is of paramount importance to understanding how knowledge is constructed. The objective of this study was to analyze the implicit mental models used by children in complex change problems to understand the stability of the models and their evolution with the child's…

  11. Calcium-manganese oxides as structural and functional models for active site in oxygen evolving complex in photosystem II: lessons from simple models.

    Science.gov (United States)

    Najafpour, Mohammad Mahdi

    2011-01-01

    The oxygen evolving complex in photosystem II, which induces the oxidation of water to dioxygen in plants, algae and certain bacteria, contains a cluster of one calcium and four manganese ions. It serves as a model to split water by sunlight. Reports on the mechanism and structure of photosystem II provide a more detailed architecture of the oxygen evolving complex and the surrounding amino acids. One challenge in this field is the development of artificial model compounds to study the oxygen evolution reaction outside the complicated environment of the enzyme. Calcium-manganese oxides as structural and functional models for the active site of photosystem II are explained and reviewed in this paper. Because of the related structures of these calcium-manganese oxides and the catalytic center of the active site of the oxygen evolving complex of photosystem II, the study may help in understanding more about the mechanism of oxygen evolution by the oxygen evolving complex of photosystem II.

  12. Complex Systems Models and Their Applications: Towards a New Science of Verification, Validation & Uncertainty Quantification

    Energy Technology Data Exchange (ETDEWEB)

    Tsao, Jeffrey Y. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Trucano, Timothy G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kleban, Stephen D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Naugle, Asmeret Bier [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Verzi, Stephen Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Johnson, Curtis M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Smith, Mark A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Flanagan, Tatiana Paz [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vugrin, Eric D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gabert, Kasimir Georg [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lave, Matthew Samuel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Chen, Wei [Northwestern Univ., Evanston, IL (United States); DeLaurentis, Daniel [Purdue Univ., West Lafayette, IN (United States); Hubler, Alfred [Univ. of Illinois, Urbana, IL (United States); Oberkampf, Bill [WLO Consulting, Austin, TX (United States)

    2016-08-01

This report contains the written footprint of a Sandia-hosted workshop held in Albuquerque, New Mexico, June 22-23, 2016 on “Complex Systems Models and Their Applications: Towards a New Science of Verification, Validation and Uncertainty Quantification,” as well as of pre-work that fed into the workshop. The workshop’s intent was to explore and begin articulating research opportunities at the intersection between two important Sandia communities: the complex systems (CS) modeling community, and the verification, validation and uncertainty quantification (VVUQ) community. The overarching research opportunity (and challenge) that we ultimately hope to address is: how can we quantify the credibility of knowledge gained from complex systems models, knowledge that is often incomplete and interim, but will nonetheless be used, sometimes in real-time, by decision makers?

  13. Small System dynamics models for big issues : Triple jump towards real-world complexity

    NARCIS (Netherlands)

    Pruyt, E.

    2013-01-01

    System Dynamics (SD) is a method to describe, model, simulate and analyze dynamically complex issues and/or systems in terms of the processes, information, organizational boundaries and strategies. Quantitative SD modeling, simulation and analysis facilitates the (re)design of systems and design of

  14. A subsurface model of the beaver meadow complex

    Science.gov (United States)

    Nash, C.; Grant, G.; Flinchum, B. A.; Lancaster, J.; Holbrook, W. S.; Davis, L. G.; Lewis, S.

    2015-12-01

Wet meadows are a vital component of arid and semi-arid environments. These valley-spanning, seasonally inundated wetlands provide critical habitat and refugia for wildlife, and may mediate catchment-scale hydrology in otherwise "water challenged" landscapes. In the last 150 years, these meadows have begun incising rapidly, causing the wetlands to drain and much of the ecological benefit to be lost. The mechanisms driving this incision are poorly understood, with proposed causes ranging from cattle grazing to climate change to the removal of beaver. There is considerable interest in identifying cost-effective strategies to restore the hydrologic and ecological conditions of these meadows at a meaningful scale, but effective process-based restoration first requires a thorough understanding of the constructional history of these ubiquitous features. There is emerging evidence to suggest that the North American beaver may have had a considerable role in shaping this landscape through the building of dams. This "beaver meadow complex hypothesis" posits that as beaver dams filled with fine-grained sediments, they became large wet meadows on which new dams, and new complexes, were formed, thereby aggrading valley bottoms. A pioneering study in Yellowstone indicated that 32-50% of the alluvial sediment was deposited in ponded environments. The observed aggradation rates were highly heterogeneous, suggesting spatial variability in the depositional process - all consistent with the beaver meadow complex hypothesis (Polvi and Wohl, 2012). To expand on this initial work, we have probed deeper into these meadow complexes using a combination of geophysical techniques, coring methods and numerical modeling to create a 3-dimensional representation of the subsurface environments. This imaging has given us a unique view into the patterns and processes responsible for these landforms, and may shed further light on the role of beaver in shaping these landscapes.

  15. Aspects of data modeling and query processing for complex multidimensional data

    DEFF Research Database (Denmark)

    Pedersen, Torben Bach

    warehousing technologies, over those posed by conventional data warehouse applications. This thesis presents a number of exciting new research challenges posed by clinical applications, to be met by the database research community. These include the need for complex-data modeling features, advanced temporal...

  16. Production of vanillin by metabolically engineered Escherichia coli.

    Science.gov (United States)

    Yoon, Sang-Hwal; Li, Cui; Kim, Ju-Eun; Lee, Sook-Hee; Yoon, Ji-Young; Choi, Myung-Suk; Seo, Weon-Taek; Yang, Jae-Kyung; Kim, Jae-Yeon; Kim, Seon-Won

    2005-11-01

E. coli was metabolically engineered to produce vanillin by expression of the fcs and ech genes from Amycolatopsis sp. encoding feruloyl-CoA synthetase and enoyl-CoA hydratase/aldolase, respectively. Vanillin production was optimized by leaky expression of the genes, under the IPTG-inducible trc promoter, in complex 2YT medium. Supplementation with glucose, fructose, galactose, arabinose or glycerol severely decreased vanillin production. The highest vanillin production of 1.1 g l⁻¹ was obtained with cultivation for 48 h in 2YT medium with 0.2% (w/v) ferulate, without IPTG and no supplementation of carbon sources.

  17. Interacting with complex systems. Models and games for a sustainable economy

    Energy Technology Data Exchange (ETDEWEB)

    De Vries, H.J.M.

    2010-09-15

In recent decades the science-policy interface has become both more important and more complex. In this report we search for novel ways to extend or reframe the economic and environmental theories and models upon which policy recommendations are, or should be, based. The methods and applications of Complex System Science, in particular, have been explored and are found to be still fragmented, but they can and should form the basis for introducing behavioural and innovation dynamics that make these theories and models better reflect what happens in the real world. In combination with interactive simulation and games, of which some examples are discussed in this report, science can, in a post-modern context, contribute more effectively to strategic decision making in government and other institutions regarding sustainable development. This will be sorely needed in view of the new and global challenges facing us.

  18. Comparing and improving proper orthogonal decomposition (POD) to reduce the complexity of groundwater models

    Science.gov (United States)

    Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas

    2017-04-01

Physically-based modeling is a wide-spread tool in the understanding and management of natural systems. With the high complexity of many such models and the huge number of model runs necessary for parameter estimation and uncertainty analysis, overall run times can be prohibitively long even on modern computer systems. Model reduction methods are an encouraging strategy to tackle this problem. In this contribution, we compare different proper orthogonal decomposition (POD, Siade et al. (2010)) methods and their potential applications to groundwater models. The POD method performs a singular value decomposition on system states as simulated by the complex (e.g., PDE-based) groundwater model taken at several time steps, so-called snapshots. The singular vectors with the highest information content resulting from this decomposition are then used as a basis for projection of the system of model equations onto a subspace of much lower dimensionality than the original complex model, thereby greatly reducing complexity and accelerating run times. In its original form, this method is only applicable to linear problems. Many real-world groundwater models are non-linear, though. These non-linearities are introduced either through model structure (unconfined aquifers) or boundary conditions (certain Cauchy boundaries, like rivers with variable connection to the groundwater table). To date, applications of POD have focused on groundwater models simulating pumping tests in confined aquifers with constant head boundaries. In contrast, POD model reduction either greatly loses accuracy or does not significantly reduce model run time if the above-mentioned non-linearities are introduced. We have also found that variable Dirichlet boundaries are problematic for POD model reduction. An extension to the POD method, called POD-DEIM, has been developed for non-linear groundwater models by Stanko et al. (2016). This method uses spatial interpolation points to build the equation system in the
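The projection at the heart of POD is compact enough to sketch directly. The following Python/NumPy sketch builds a reduced basis from a snapshot matrix via singular value decomposition; the synthetic snapshots and the 99.9% energy threshold are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of snapshot-based POD model reduction.
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_snapshots = 2000, 50
# Snapshot matrix: each column is the model state (e.g. hydraulic heads at
# n_cells grid cells) saved at one time step of the full model run.
snapshots = rng.standard_normal((n_cells, n_snapshots)).cumsum(axis=1)

# SVD of the snapshots; the left singular vectors with the largest singular
# values carry the highest information content.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)

# Keep enough basis vectors to capture 99.9% of the snapshot "energy".
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1
basis = U[:, :r]                      # n_cells x r, with r << n_cells

# A full-order state is reduced to r coefficients and lifted back.
x = snapshots[:, -1]
x_reduced = basis.T @ x               # projection onto the POD subspace
x_approx = basis @ x_reduced          # reconstruction in full space
print(r, np.linalg.norm(x - x_approx) / np.linalg.norm(x))
```

In a full application, the model equations themselves are projected with the same basis, which is where the run-time savings come from.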

  19. Constructive Lower Bounds on Model Complexity of Shallow Perceptron Networks

    Czech Academy of Sciences Publication Activity Database

    Kůrková, Věra

    2018-01-01

    Roč. 29, č. 7 (2018), s. 305-315 ISSN 0941-0643 R&D Projects: GA ČR GA15-18108S Institutional support: RVO:67985807 Keywords : shallow and deep networks * model complexity and sparsity * signum perceptron networks * finite mappings * variational norms * Hadamard matrices Subject RIV: IN - Informatics, Computer Science Impact factor: 2.505, year: 2016

  20. Model-based flaw localization from perturbations in the dynamic response of complex mechanical structures

    Energy Technology Data Exchange (ETDEWEB)

    Chambers, D H

    2009-02-24

A new method of locating structural damage using measured differences in vibrational response and a numerical model of the undamaged structure has been presented. This method is particularly suited to complex structures with little or no symmetry. In a prior study, the method successfully located simulated damage from measurements of the vibrational response of two simple structures. Here we demonstrate that it can locate simulated damage in a complex structure. A numerical model of a complex structure was used to calculate the structural response before and after the introduction of a void. The method can now be considered for application to structures of programmatic interest. It could be used to monitor the structural integrity of complex mechanical structures and assemblies over their lifetimes. This would allow early detection of damage, when repair is relatively easy and inexpensive. It would also allow maintenance to be scheduled based on actual damage instead of a time schedule.

  1. Use of probabilistic relational model (PRM) for dependability analysis of complex systems

    OpenAIRE

    Medina-Oliva , Gabriela; Weber , Philippe; Levrat , Eric; Iung , Benoît

    2010-01-01

This paper proposes a methodology to develop an aided decision-making tool for assessing the dependability and performance (i.e. reliability) of an industrial system. This tool is built on a model based on a new formalism, called the probabilistic relational model (PRM), which is adapted to deal with large and complex systems. The model is formalized from functional, dysfunctional and informational studies of the technical industrial systems. An application of this meth...

  2. Surface Complexation Modeling in Variable Charge Soils: Charge Characterization by Potentiometric Titration

    Directory of Open Access Journals (Sweden)

    Giuliano Marchi

    2015-10-01

Intrinsic equilibrium constants of 17 representative Brazilian Oxisols were estimated from potentiometric titration measuring the adsorption of H+ and OH− on amphoteric surfaces in suspensions of varying ionic strength. Equilibrium constants were fitted to two surface complexation models: diffuse layer and constant capacitance. The former was fitted by calculating total site concentration from curve-fitting estimates and pH-extrapolation of the intrinsic equilibrium constants to the PZNPC (hand calculation), considering one and two reactive sites, and by the FITEQL software. The latter was fitted only by FITEQL, with one reactive site. Soil chemical and physical properties were correlated to the intrinsic equilibrium constants. Both surface complexation models satisfactorily fit our experimental data, but for results at low ionic strength, optimization did not converge in FITEQL. Data were incorporated into Visual MINTEQ to provide a modeling system that can predict protonation-dissociation reactions at the soil surface under changing environmental conditions.

  3. CONSTRUCTION OF AGGREGATE NATIONAL ECONOMIC MODEL WITH DETAILED REPRESENTATION OF THE FOREST COMPLEX

    Directory of Open Access Journals (Sweden)

    Blam Yu. Sh.

    2014-09-01

The autonomy of industrial forecasts is often exacerbated by the lack of a direct connection with economic forecasts at the macro level. On the other hand, it is desirable to simulate industrial strategy in a fairly high degree of isolation, so that it does not depend at every moment on the description of other activities or levels of the hierarchy. To study the effects of national economic relations on the development of an industrial complex, we propose to use a spatial model of the national economy that describes the modalities of the researched industries in more detail. Quantitative parameters, obtained using the basic Interregional Cross-sectoral Optimization Model (OMMM) for the external development of the industrial complex, are used to form an aggregated model with a detailed representation and insignificant loss of information. Thus, the described model is intended to harmonize national economic decisions with forecasts obtained from industry models in real terms. The conversion procedure is based on the properties of «mutual» problems and information from the basic OMMM. The final result is a production-transport cost model within a «traditional» industrial structure.

  4. The Naïve Overfitting Index Selection (NOIS): A new method to optimize model complexity for hyperspectral data

    Science.gov (United States)

    Rocha, Alby D.; Groen, Thomas A.; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Willemen, Louise

    2017-11-01

The growing number of narrow spectral bands in hyperspectral remote sensing improves the capacity to describe and predict biological processes in ecosystems. But it also poses a challenge for fitting empirical models based on such high-dimensional data, which often contain correlated and noisy predictors. As sample sizes for training and validating empirical models do not seem to be increasing at the same rate, overfitting has become a serious concern. Overly complex models lead to overfitting by capturing more than the underlying relationship and by fitting random noise in the data. Many regression techniques claim to overcome these problems by using different strategies to constrain complexity, such as limiting the number of terms in the model, creating latent variables or shrinking parameter coefficients. This paper proposes a new method, named Naïve Overfitting Index Selection (NOIS), which makes use of artificially generated spectra to quantify relative model overfitting and to select an optimal model complexity supported by the data. The robustness of this new method is assessed by comparison with traditional model selection based on cross-validation. The optimal model complexity is determined for seven different regression techniques, such as partial least squares regression, support vector machine, artificial neural network and tree-based regressions, using five hyperspectral datasets. The NOIS method selects less complex models, which present accuracies similar to the cross-validation method. The NOIS method reduces the chance of overfitting, thereby avoiding models that present accurate predictions valid only for the data used, and too complex to support inferences about the underlying process.
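The central trick of the NOIS idea, using artificial spectra that carry no true relation to the response to expose overfitting, can be sketched as follows. The synthetic data, the PLS regression choice, and the complexity grid are assumptions for illustration, not the authors' exact procedure.

```python
# Hedged sketch: fit increasingly complex models to real predictors and to
# artificial spectra with no true signal; fit quality on the artificial
# data is pure overfitting.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n, p = 60, 200                              # few samples, many bands
X_real = rng.standard_normal((n, p))
y = X_real[:, :5].sum(axis=1) + 0.5 * rng.standard_normal(n)

for n_comp in (1, 2, 5, 10, 20):
    X_art = rng.standard_normal((n, p))     # artificial spectra, no signal
    r2_real = PLSRegression(n_components=n_comp).fit(X_real, y).score(X_real, y)
    r2_art = PLSRegression(n_components=n_comp).fit(X_art, y).score(X_art, y)
    # r2_art grows with complexity even though no relationship exists.
    print(n_comp, round(r2_real, 3), round(r2_art, 3))
```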

  5. A novel multilayer model for missing link prediction and future link forecasting in dynamic complex networks

    Science.gov (United States)

    Yasami, Yasser; Safaei, Farshad

    2018-02-01

The traditional complex network theory is particularly focused on network models in which all network constituents are dealt with equivalently, while failing to consider the supplementary information related to the dynamic properties of the network interactions. This is a main constraint, leading to incorrect descriptions of some real-world phenomena or incompletely capturing the details of certain real-life problems. To cope with the problem, this paper addresses the multilayer aspects of dynamic complex networks by analyzing the properties of intrinsically multilayered co-authorship networks, DBLP and Astro Physics, and presenting a novel multilayer model of dynamic complex networks. The model examines the layers' evolution (the layers' birth/death process and lifetime) throughout the network evolution. In particular, this paper models the evolution of each node's membership in different layers by an Infinite Factorial Hidden Markov Model considering feature cascade, and thereby formulates the link generation process for intra-layer and inter-layer links. Although adjacency matrices are useful to describe traditional single-layer networks, such a representation is not sufficient to describe and analyze multilayer dynamic networks. This paper also extends a generalized mathematical infrastructure to address the problems posed by multilayer complex networks. The model inference is performed using Markov Chain Monte Carlo sampling strategies, given synthetic and real complex network data. Experimental results indicate a tremendous improvement in the performance of the proposed multilayer model in terms of sensitivity, specificity, positive and negative predictive values, positive and negative likelihood ratios, F1-score, Matthews correlation coefficient, and accuracy for the two important applications of missing link prediction and future link forecasting. The experimental results also indicate the strong predictive power of the proposed model for the application of
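For reference, the evaluation metrics listed above all derive from a link-prediction confusion matrix; a minimal sketch with hypothetical counts:

```python
# Metrics for link prediction from a confusion matrix (hypothetical counts).
def link_prediction_metrics(tp, fp, tn, fn):
    sens = tp / (tp + fn)                 # sensitivity (recall)
    spec = tn / (tn + fp)                 # specificity
    ppv = tp / (tp + fp)                  # positive predictive value
    npv = tn / (tn + fn)                  # negative predictive value
    f1 = 2 * ppv * sens / (ppv + sens)    # F1-score
    mcc = (tp * tn - fp * fn) / (
        ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    )                                     # Matthews correlation coefficient
    acc = (tp + tn) / (tp + fp + tn + fn)
    return dict(sensitivity=sens, specificity=spec, ppv=ppv, npv=npv,
                f1=f1, mcc=mcc, accuracy=acc)

print(link_prediction_metrics(tp=80, fp=20, tn=900, fn=40))
```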

  6. Developing predictive systems models to address complexity and relevance for ecological risk assessment.

    Science.gov (United States)

    Forbes, Valery E; Calow, Peter

    2013-07-01

    Ecological risk assessments (ERAs) are not used as well as they could be in risk management. Part of the problem is that they often lack ecological relevance; that is, they fail to grasp necessary ecological complexities. Adding realism and complexity can be difficult and costly. We argue that predictive systems models (PSMs) can provide a way of capturing complexity and ecological relevance cost-effectively. However, addressing complexity and ecological relevance is only part of the problem. Ecological risk assessments often fail to meet the needs of risk managers by not providing assessments that relate to protection goals and by expressing risk in ratios that cannot be weighed against the costs of interventions. Once more, PSMs can be designed to provide outputs in terms of value-relevant effects that are modulated against exposure and that can provide a better basis for decision making than arbitrary ratios or threshold values. Recent developments in the modeling and its potential for implementation by risk assessors and risk managers are beginning to demonstrate how PSMs can be practically applied in risk assessment and the advantages that doing so could have. Copyright © 2013 SETAC.

  7. Model of a ternary complex between activated factor VII, tissue factor and factor IX.

    Science.gov (United States)

    Chen, Shu-wen W; Pellequer, Jean-Luc; Schved, Jean-François; Giansily-Blaizot, Muriel

    2002-07-01

    Upon binding to tissue factor, FVIIa triggers coagulation by activating vitamin K-dependent zymogens, factor IX (FIX) and factor X (FX). To understand recognition mechanisms in the initiation step of the coagulation cascade, we present a three-dimensional model of the ternary complex between FVIIa:TF:FIX. This model was built using a full-space search algorithm in combination with computational graphics. With the known crystallographic complex FVIIa:TF kept fixed, the FIX docking was performed first with FIX Gla-EGF1 domains, followed by the FIX protease/EGF2 domains. Because the FIXa crystal structure lacks electron density for the Gla domain, we constructed a chimeric FIX molecule that contains the Gla-EGF1 domains of FVIIa and the EGF2-protease domains of FIXa. The FVIIa:TF:FIX complex has been extensively challenged against experimental data including site-directed mutagenesis, inhibitory peptide data, haemophilia B database mutations, inhibitor antibodies and a novel exosite binding inhibitor peptide. This FVIIa:TF:FIX complex provides a powerful tool to study the regulation of FVIIa production and presents new avenues for developing therapeutic inhibitory compounds of FVIIa:TF:substrate complex.

  8. A Corner-Point-Grid-Based Voxelization Method for Complex Geological Structure Model with Folds

    Science.gov (United States)

    Chen, Qiyu; Mariethoz, Gregoire; Liu, Gang

    2017-04-01

3D voxelization is the foundation of geological property modeling, and is also an effective approach to realizing 3D visualization of heterogeneous attributes in geological structures. The corner-point grid is a representative data model among voxel models, and is a structured grid type that is widely applied at present. When subdividing a complex geological structure model with folds, its structural morphology and bedding features should be fully considered so that the generated voxels keep the original morphology and, on that basis, can depict the detailed bedding features and the spatial heterogeneity of the internal attributes. In order to address the shortcomings of existing technologies, this work puts forward a corner-point-grid-based voxelization method for complex geological structure models with folds. We have realized the fast conversion from a 3D geological structure model to a fine voxel model according to the rule of isoclines in Ramsay's fold classification. In addition, the voxel model conforms to the spatial features of folds, pinch-outs and other complex geological structures, and the voxels of the laminas inside a fold accord with the result of geological sedimentation and tectonic movement. This provides a carrier and model foundation for subsequent attribute assignment as well as quantitative analysis and evaluation based on the spatial voxels. Finally, we use examples, and a contrastive analysis between the examples and Ramsay's description of isoclines, to discuss the effectiveness and advantages of the proposed method when voxelizing 3D geological structure models with folds based on corner-point grids.

  9. Modelling and simulation of gas explosions in complex geometries

    Energy Technology Data Exchange (ETDEWEB)

    Saeter, Olav

    1998-12-31

This thesis presents a three-dimensional Computational Fluid Dynamics (CFD) code (EXSIM94) for modelling and simulation of gas explosions in complex geometries. It gives the theory and validates the following sub-models: (1) the flow resistance and turbulence generation model for densely packed regions, (2) the flow resistance and turbulence generation model for single objects, and (3) the quasi-laminar combustion model. It is found that a simple model for flow resistance and turbulence generation in densely packed beds is able to reproduce the medium and large scale MERGE explosion experiments of the Commission of the European Communities (CEC) within a factor of 2. The single-object representation is found to predict explosion pressure in better agreement with the experiments when combined with a modified k-{epsilon} model. This modification also gives slightly improved grid independence for realistic gas explosion simulations. One laminar model is found unsuitable for gas explosion modelling because of strong grid dependence. Another laminar model is found to be relatively grid independent and to work well in harmony with the turbulent combustion model. The code is validated against 40 realistic gas explosion experiments. It is relatively grid independent in predicting explosion pressure in different offshore geometries. It can predict the influence of ignition point location, vent arrangements, different geometries, scaling effects and gas reactivity. The validation study concludes with statistical and uncertainty analyses of the code performance. 98 refs., 96 figs, 12 tabs.

  10. Effects of model complexity and priors on estimation using sequential importance sampling/resampling for species conservation

    Science.gov (United States)

    Dunham, Kylee; Grand, James B.

    2016-01-01

We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state-space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors, using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and an observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
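The SISR machinery itself is easy to demonstrate on a toy problem. Below is a minimal bootstrap particle filter for a one-dimensional population model with Poisson counts; the dynamics, noise levels, and 25-year horizon are illustrative assumptions, not the authors' model.

```python
# Minimal sequential importance sampling/resampling (bootstrap filter).
import numpy as np

rng = np.random.default_rng(2)
T, n_particles, growth = 25, 5000, 1.05    # assumed annual growth rate

# Simulate a "true" abundance trajectory and noisy counts.
true_n = np.empty(T)
true_n[0] = 100.0
for t in range(1, T):
    true_n[t] = growth * true_n[t - 1] + rng.normal(0, 5)
counts = rng.poisson(true_n)

particles = rng.uniform(50, 150, n_particles)
estimates = []
for t in range(T):
    if t > 0:  # propagate particles through the state process
        particles = np.maximum(
            growth * particles + rng.normal(0, 5, n_particles), 1e-3)
    logw = counts[t] * np.log(particles) - particles   # Poisson log-likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample
    estimates.append(particles.mean())

print(f"final estimate {estimates[-1]:.1f} vs. truth {true_n[-1]:.1f}")
```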

  11. Real-time modeling of complex atmospheric releases in urban areas

    International Nuclear Information System (INIS)

    Baskett, R.L.; Ellis, J.S.; Sullivan, T.J.

    1994-08-01

If a nuclear installation in or near an urban area has a venting, fire, or explosion, airborne radioactivity becomes the major concern. Dispersion models are the immediate tool for estimating the dose and contamination. Responses in urban areas depend on knowledge of the amount of the release, representative meteorological data, and the ability of the dispersion model to simulate the complex flows as modified by terrain or local wind conditions. A centralized dispersion modeling system can produce realistic assessments of radiological accidents anywhere in a country within several minutes if it is computer-automated. The system requires source-term, terrain, mapping and dose-factor databases, real-time meteorological data acquisition, three-dimensional atmospheric transport and dispersion models, and experienced staff. Experience with past responses in urban areas by the Atmospheric Release Advisory Capability (ARAC) program at Lawrence Livermore National Laboratory illustrates the challenges for three-dimensional dispersion models.

  12. Real-time modelling of complex atmospheric releases in urban areas

    International Nuclear Information System (INIS)

    Baskett, R.L.; Ellis, J.S.; Sullivan, T.J.

    2000-01-01

If a nuclear installation in or near an urban area has a venting, fire, or explosion, airborne radioactivity becomes the major concern. Dispersion models are the immediate tool for estimating the dose and contamination. Responses in urban areas depend on knowledge of the amount of the release, representative meteorological data, and the ability of the dispersion model to simulate the complex flows as modified by terrain or local wind conditions. A centralised dispersion modelling system can produce realistic assessments of radiological accidents anywhere in a country within several minutes if it is computer-automated. The system requires source-term, terrain, mapping and dose-factor databases, real-time meteorological data acquisition, three-dimensional atmospheric transport and dispersion models, and experienced staff. Experience with past responses in urban areas by the Atmospheric Release Advisory Capability (ARAC) program at Lawrence Livermore National Laboratory illustrates the challenges for three-dimensional dispersion models. (author)

  13. Prandtl-Ishlinskii hysteresis models for complex time dependent hysteresis nonlinearities

    Czech Academy of Sciences Publication Activity Database

    Al Janaideh, M.; Krejčí, Pavel

    2012-01-01

    Roč. 407, č. 9 (2012), s. 1365-1367 ISSN 0921-4526 R&D Projects: GA ČR GAP201/10/2315 Institutional research plan: CEZ:AV0Z10190503 Keywords : complex hysteresis * time dependent hysteresis * Prandtl-Ishlinskii model Subject RIV: BA - General Mathematics Impact factor: 1.327, year: 2012 http://www.sciencedirect.com/science/article/pii/S092145261100932X

  14. A COMPARATIVE STUDY OF FORECASTING MODELS FOR TREND AND SEASONAL TIME SERIES: DOES A COMPLEX MODEL ALWAYS YIELD BETTER FORECASTS THAN SIMPLE MODELS?

    Directory of Open Access Journals (Sweden)

    Suhartono Suhartono

    2005-01-01

Many business and economic time series are non-stationary and contain trend and seasonal variations. Seasonality is a periodic and recurrent pattern caused by factors such as weather, holidays, or repeating promotions. A stochastic trend often accompanies the seasonal variations and can have a significant impact on various forecasting methods. In this paper, we investigate and compare some forecasting methods for modeling time series with both trend and seasonal patterns: Winters' method, decomposition, time series regression, ARIMA and neural network models. In this empirical research, we study the effectiveness of forecasting performance, particularly to answer whether a complex method always gives a better forecast than a simpler one. We use real data, namely airline passenger data. The result shows that the more complex model does not always yield a better result than a simpler one. We also identify possibilities for further research, especially the use of hybrid models that combine several forecasting methods to obtain better forecasts, for example a combination of decomposition (as data preprocessing) and a neural network model.
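To make the simple-versus-complex comparison concrete, the sketch below pits a seasonal naive benchmark against Holt-Winters exponential smoothing on a synthetic trend-plus-seasonal series. The paper itself uses the airline passenger data; the series and model settings here are stand-ins.

```python
# Compare a simple and a more complex forecaster on a trend + seasonal series.
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(3)
t = np.arange(144)
y = 100 + 2 * t + 20 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 5, t.size)
train, test = y[:-12], y[-12:]

naive = train[-12:]                       # seasonal naive: repeat last year

hw = ExponentialSmoothing(train, trend="add", seasonal="add",
                          seasonal_periods=12).fit()
hw_fc = hw.forecast(12)                   # additive Holt-Winters

mae = lambda f: np.mean(np.abs(test - f))
print("seasonal naive MAE:", mae(naive), "Holt-Winters MAE:", mae(hw_fc))
```

On such a clean series the more complex model tends to win; the study's finding is that this advantage does not hold universally.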

  15. Multi-Sensor As-Built Models of Complex Industrial Architectures

    Directory of Open Access Journals (Sweden)

    Jean-François Hullo

    2015-12-01

In the context of increased maintenance operations and generational renewal work, a nuclear owner and operator like Electricité de France (EDF) is invested in the scaling-up of tools and methods of “as-built virtual reality” for whole buildings and large audiences. In this paper, we first present the state of the art of scanning tools and methods used to represent a very complex architecture. Then, we propose a methodology and assess it in a large experiment carried out on the most complex building of a 1300-megawatt power plant, an 11-floor reactor building. We also present several developments that made possible the acquisition, processing and georeferencing of multiple data sources (1000+ 3D laser scans and RGB panoramas, total-station surveying, 2D floor plans) and the 3D reconstruction of as-built CAD models. In addition, we introduce new concepts for user interaction with complex architecture, elaborated during the development of an application that allows a painless exploration of the whole dataset by professionals unfamiliar with such data types. Finally, we discuss the main feedback items from this large experiment, the remaining issues for the generalization of such large-scale surveys, and the future technical and scientific challenges in the field of industrial “virtual reality”.

  16. An overview of structurally complex network-based modeling of public opinion in the “We the Media” era

    Science.gov (United States)

    Wang, Guanghui; Wang, Yufei; Liu, Yijun; Chi, Yuxue

    2018-05-01

    As the transmission of public opinion on the Internet in the “We the Media” era tends to be supraterritorial, concealed and complex, the traditional “point-to-surface” transmission of information has been transformed into “point-to-point” reciprocal transmission. A foundation for studies of the evolution of public opinion and its transmission on the Internet in the “We the Media” era can be laid by converting the massive amounts of fragmented information on public opinion that exists on “We the Media” platforms into structurally complex networks of information. This paper describes studies of structurally complex network-based modeling of public opinion on the Internet in the “We the Media” era from the perspective of the development and evolution of complex networks. The progress that has been made in research projects relevant to the structural modeling of public opinion on the Internet is comprehensively summarized. The review considers aspects such as regular grid-based modeling of the rules that describe the propagation of public opinion on the Internet in the “We the Media” era, social network modeling, dynamic network modeling, and supernetwork modeling. Moreover, an outlook for future studies that address complex network-based modeling of public opinion on the Internet is put forward as a summary from the perspective of modeling conducted using the techniques mentioned above.

  17. Models, methods and software tools for building complex adaptive traffic systems

    International Nuclear Information System (INIS)

    Alyushin, S.A.

    2011-01-01

The paper studies modern methods and tools to simulate the behavior of complex adaptive systems (CAS) and the existing traffic modeling systems in simulators and their characteristics, and proposes requirements for assessing the suitability of a system to simulate CAS behavior in simulators. The author has developed a model of adaptive agent representation and its functioning environment to meet the requirements set above, and has presented methods of agent interaction and methods of conflict resolution in simulated traffic situations. A simulation system realizing computer modeling of CAS behavior in traffic situations has been created.

  18. Modelling and simulation of complex sociotechnical systems: envisioning and analysing work environments

    Science.gov (United States)

    Hettinger, Lawrence J.; Kirlik, Alex; Goh, Yang Miang; Buckle, Peter

    2015-01-01

    Accurate comprehension and analysis of complex sociotechnical systems is a daunting task. Empirically examining, or simply envisioning the structure and behaviour of such systems challenges traditional analytic and experimental approaches as well as our everyday cognitive capabilities. Computer-based models and simulations afford potentially useful means of accomplishing sociotechnical system design and analysis objectives. From a design perspective, they can provide a basis for a common mental model among stakeholders, thereby facilitating accurate comprehension of factors impacting system performance and potential effects of system modifications. From a research perspective, models and simulations afford the means to study aspects of sociotechnical system design and operation, including the potential impact of modifications to structural and dynamic system properties, in ways not feasible with traditional experimental approaches. This paper describes issues involved in the design and use of such models and simulations and describes a proposed path forward to their development and implementation. Practitioner Summary: The size and complexity of real-world sociotechnical systems can present significant barriers to their design, comprehension and empirical analysis. This article describes the potential advantages of computer-based models and simulations for understanding factors that impact sociotechnical system design and operation, particularly with respect to process and occupational safety. PMID:25761227

  19. Methodology and Results of Mathematical Modelling of Complex Technological Processes

    Science.gov (United States)

    Mokrova, Nataliya V.

    2018-03-01

The methodology of system analysis allows us to derive a mathematical model of a complex technological process. A mathematical description of the plasma-chemical process is proposed. The importance of the quenching rate and of the initial temperature decrease time for producing the maximum amount of the target product is confirmed. The results of numerical integration of the system of differential equations can be used to describe reagent concentrations, plasma jet rate and temperature in order to achieve the optimal quenching mode. Such models are applicable both for solving control problems and for predicting future states of sophisticated technological systems.

  20. Technical Note: Modeling a complex micro-multileaf collimator using the standard BEAMnrc distribution

    International Nuclear Information System (INIS)

    Kairn, T.; Kenny, J.; Crowe, S. B.; Fielding, A. L.; Franich, R. D.; Johnston, P. N.; Knight, R. T.; Langton, C. M.; Schlect, D.; Trapp, J. V.

    2010-01-01

    Purpose: The component modules in the standard BEAMnrc distribution may appear to be insufficient to model micro-multileaf collimators that have trifaceted leaf ends and complex leaf profiles. This note indicates, however, that accurate Monte Carlo simulations of radiotherapy beams defined by a complex collimation device can be completed using BEAMnrc's standard VARMLC component module. Methods: That this simple collimator model can produce spatially and dosimetrically accurate microcollimated fields is illustrated using comparisons with ion chamber and film measurements of the dose deposited by square and irregular fields incident on planar, homogeneous water phantoms. Results: Monte Carlo dose calculations for on-axis and off-axis fields are shown to produce good agreement with experimental values, even on close examination of the penumbrae. Conclusions: The use of a VARMLC model of the micro-multileaf collimator, along with a commissioned model of the associated linear accelerator, is therefore recommended as an alternative to the development or use of in-house or third-party component modules for simulating stereotactic radiotherapy and radiosurgery treatments. Simulation parameters for the VARMLC model are provided which should allow other researchers to adapt and use this model to study clinical stereotactic radiotherapy treatments.

  1. A neural population model incorporating dopaminergic neurotransmission during complex voluntary behaviors.

    Directory of Open Access Journals (Sweden)

    Stefan Fürtinger

    2014-11-01

Assessing brain activity during complex voluntary motor behaviors that require the recruitment of multiple neural sites is a field of active research. Our current knowledge is primarily based on human brain imaging studies that have clear limitations in terms of temporal and spatial resolution. We developed a physiologically informed non-linear multi-compartment stochastic neural model to simulate functional brain activity coupled with neurotransmitter release during complex voluntary behavior, such as speech production. Due to its state-dependent modulation of neural firing, dopaminergic neurotransmission plays a key role in the organization of functional brain circuits controlling speech and language and thus has been incorporated in our neural population model. A rigorous mathematical proof establishing existence and uniqueness of solutions to the proposed model as well as a computationally efficient strategy to numerically approximate these solutions are presented. Simulated brain activity during the resting state and sentence production was analyzed using functional network connectivity, and graph theoretical techniques were employed to highlight differences between the two conditions. We demonstrate that our model successfully reproduces characteristic changes seen in empirical data between the resting state and speech production, and dopaminergic neurotransmission evokes pronounced changes in modeled functional connectivity by acting on the underlying biological stochastic neural model. Specifically, model and data networks in both speech and rest conditions share task-specific network features: both the simulated and empirical functional connectivity networks show an increase in nodal influence and segregation in speech over the resting state. These commonalities confirm that dopamine is a key neuromodulator of the functional connectome of speech control. Based on reproducible characteristic aspects of empirical data, we suggest a number

  2. Multiscale Modeling of Complex Molecular Structure and Dynamics with MBN Explorer

    DEFF Research Database (Denmark)

    Solov'yov, Ilia A.; Korol, Andrei V.; Solov'yov, Andrey V.

    This book introduces readers to MesoBioNano (MBN) Explorer - a multi-purpose software package designed to model molecular systems at various levels of size and complexity. In addition, it presents a specially designed multi-task toolkit and interface - the MBN Studio - which enables the set-up of...

  3. Autonomous Modeling, Statistical Complexity and Semi-annealed Treatment of Boolean Networks

    Science.gov (United States)

    Gong, Xinwei

    This dissertation presents three studies on Boolean networks. Boolean networks are a class of mathematical systems consisting of interacting elements with binary state variables. Each element is a node with a Boolean logic gate, and the presence of interactions between any two nodes is represented by directed links. Boolean networks that implement the logic structures of real systems are studied as coarse-grained models of the real systems. Large random Boolean networks are studied with mean field approximations and used to provide a baseline of possible behaviors of large real systems. This dissertation presents one study of the former type, concerning the stable oscillation of a yeast cell-cycle oscillator, and two studies of the latter type, respectively concerning the statistical complexity of large random Boolean networks and an extension of traditional mean field techniques that accounts for the presence of short loops. In the cell-cycle oscillator study, a novel autonomous update scheme is introduced to study the stability of oscillations in small networks. A motif that corrects pulse-growing perturbations and a motif that grows pulses are identified. A combination of the two motifs is capable of sustaining stable oscillations. Examining a Boolean model of the yeast cell-cycle oscillator using an autonomous update scheme yields evidence that it is endowed with such a combination. Random Boolean networks are classified as ordered, critical or disordered based on their response to small perturbations. In the second study, random Boolean networks are taken as prototypical cases for the evaluation of two measures of complexity based on a criterion for optimal statistical prediction. One measure, defined for homogeneous systems, does not distinguish between the static spatial inhomogeneity in the ordered phase and the dynamical inhomogeneity in the disordered phase. A modification in which complexities of individual nodes are calculated yields vanishing
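For readers unfamiliar with the model class, a minimal synchronous random Boolean network can be simulated in a few lines; N, K, and the random wiring below are generic illustrative choices (K = 2 with unbiased random functions is the classical critical regime), not the dissertation's code.

```python
# Minimal synchronous random Boolean network (RBN) simulation.
import numpy as np

rng = np.random.default_rng(4)
N, K, steps = 100, 2, 50
inputs = rng.integers(0, N, size=(N, K))        # wiring: K inputs per node
tables = rng.integers(0, 2, size=(N, 2 ** K))   # random Boolean functions
state = rng.integers(0, 2, size=N)              # random initial state

for _ in range(steps):
    # Encode each node's input pattern as an index into its truth table.
    idx = np.zeros(N, dtype=int)
    for k in range(K):
        idx = 2 * idx + state[inputs[:, k]]
    state = tables[np.arange(N), idx]           # synchronous update

print(state.sum(), "of", N, "nodes active after", steps, "updates")
```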

  4. Complex curve of the two-matrix model and its tau-function

    International Nuclear Information System (INIS)

    Kazakov, Vladimir A; Marshakov, Andrei

    2003-01-01

    We study the Hermitian and normal two-matrix models in planar approximation for an arbitrary number of eigenvalue supports. Its planar graph interpretation is given. The study reveals a general structure of the underlying analytic complex curve, different from the hyperelliptic curve of the one-matrix model. The matrix model quantities are expressed through the periods of meromorphic generating differential on this curve and the partition function of the multiple support solution, as a function of filling numbers and coefficients of the matrix potential, is shown to be a quasiclassical tau-function. The relation to N = 1 supersymmetric Yang-Mills theories is discussed. A general class of solvable multi-matrix models with tree-like interactions is considered

  5. Complex motion of elevators in piecewise map model combined with circle map

    Science.gov (United States)

    Nagatani, Takashi

    2013-11-01

We study the dynamic behavior of elevator traffic controlled by capacity when the inflow rate of passengers into elevators varies periodically with time. The dynamics of the elevators are described by a piecewise map model combined with the circle map. The motion of the elevators depends on the inflow rate, its period, and the number of elevators. The motion in the piecewise map model combined with the circle map shows complex behavior different from the motion in the piecewise map model alone.
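The abstract does not reproduce the map equations, but the circle-map ingredient is standard; the sketch below iterates the textbook sine circle map as an assumed stand-in for the periodic inflow forcing.

```python
# Standard sine circle map:
#   theta_{n+1} = (theta_n + omega + (k / (2*pi)) * sin(2*pi*theta_n)) mod 1
import math

def circle_map(theta, omega, k, n_steps):
    out = [theta]
    for _ in range(n_steps):
        theta = (theta + omega
                 + k / (2 * math.pi) * math.sin(2 * math.pi * theta)) % 1.0
        out.append(theta)
    return out

# Quasi-periodic for small k; mode-locking and chaos appear as k grows.
print(circle_map(0.2, omega=0.3, k=0.9, n_steps=10))
```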

  6. A survey of atmospheric dispersion models applicable to risk studies for nuclear facilities in complex terrain

    International Nuclear Information System (INIS)

    Wittek, P.

    1985-09-01

Atmospheric dispersion models are reviewed with respect to their application to consequence assessment within risk studies for nuclear power plants located in complex terrain. This review comprises: seven straight-line Gaussian models, which have been modified in order to take into account, in a crude way, terrain elevations, enhanced turbulence and some other effects; three trajectory/puff models, which can handle wind direction changes and the resulting plume or puff trajectories; five three-dimensional wind field models, which calculate the wind field in complex terrain for application in a grid model; three grid models; and one Monte-Carlo model. The main features of the computer codes are described, along with some information on the necessary computer time and storage capacity. (orig.)

  7. A mononuclear zinc(II) complex with piroxicam: Crystal structure, DNA- and BSA-binding studies; in vitro cell cytotoxicity and molecular modeling of oxicam complexes

    Science.gov (United States)

    Jannesari, Zahra; Hadadzadeh, Hassan; Amirghofran, Zahra; Simpson, Jim; Khayamian, Taghi; Maleki, Batool

    2015-02-01

A new mononuclear Zn(II) complex, trans-[Zn(Pir)2(DMSO)2], where Pir- is 4-hydroxy-2-methyl-N-2-pyridyl-2H-1,2-benzothiazine-3-carboxamide-1,1-dioxide (piroxicam), has been synthesized and characterized. The crystal structure of the complex was obtained by the single-crystal X-ray diffraction technique. The interaction of the complex with DNA and BSA was investigated. The complex interacts with FS-DNA by two binding modes, viz., electrostatic and groove binding (major and minor). The microenvironment and the secondary structure of BSA are changed in the presence of the complex. The anticancer effects of seven complexes of the oxicam family were also determined on the human K562 cell line, and the results showed reasonable cytotoxicities. The interactions of the oxicam complexes with BSA and DNA were modeled by molecular docking and molecular dynamics simulation methods.

  8. Complex fluid network optimization and control integrative design based on nonlinear dynamic model

    International Nuclear Information System (INIS)

    Sui, Jinxue; Yang, Li; Hu, Yunan

    2016-01-01

To meet the distribution needs of complex fluid networks, this paper proposes an optimization computation method for a nonlinear programming mathematical model based on a genetic algorithm. The simulation results show that the overall energy consumption of the optimized fluid network decreases appreciably. A control model of the fluid network is established based on nonlinear dynamics, and the control law is designed via feedback linearization. Taking the optimal values obtained by the genetic algorithm as simulation data, the branch resistances under the optimal values can also be solved. These resistances can provide technical support and reference for fluid network design and construction, thereby realizing integrated optimization and control design of complex fluid networks.

  9. The semiotics of control and modeling relations in complex systems.

    Science.gov (United States)

    Joslyn, C

    2001-01-01

    We provide a conceptual analysis of ideas and principles from the systems theory discourse which underlie Pattee's semantic or semiotic closure, which is itself foundational for a school of theoretical biology derived from systems theory and cybernetics, and is now being related to biological semiotics and explicated in the relational biological school of Rashevsky and Rosen. Atomic control systems and models are described as the canonical forms of semiotic organization, sharing measurement relations, but differing topologically in that control systems are circularly and models linearly related to their environments. Computation in control systems is introduced, motivating hierarchical decomposition, hybrid modeling and control systems, and anticipatory or model-based control. The semiotic relations in complex control systems are described in terms of relational constraints, and rules and laws are distinguished as contingent and necessary functional entailments, respectively. Finally, selection as a meta-level of constraint is introduced as the necessary condition for semantic relations in control systems and models.

  10. Task complexity and task, goal, and reward interdependence in group performance : a prescriptive model

    NARCIS (Netherlands)

    Vijfeijken, van H.T.G.A.; Kleingeld, P.A.M.; Tuijl, van H.F.J.M.; Algera, J.A.; Thierry, H.

    2002-01-01

    A prescriptive model on how to design effective combinations of goal setting and contingent rewards for group performance management is presented. The model incorporates the constructs task complexity, task interdependence, goal interdependence, and reward interdependence and specifies optimal fit

  11. Surface complexation modeling calculation of Pb(II) adsorption onto the calcined diatomite

    Science.gov (United States)

    Ma, Shu-Cui; Zhang, Ji-Lin; Sun, De-Hui; Liu, Gui-Xia

    2015-12-01

Removal of noxious heavy metal ions (e.g. Pb(II)) by surface adsorption on minerals (e.g. diatomite) is an important means of controlling aqueous environmental pollution, so it is essential to understand the surface adsorption behavior and mechanism. In this work, the apparent surface complexation reaction equilibrium constants of Pb(II) on calcined diatomite and the distributions of Pb(II) surface species were investigated through modeling calculations based on a diffuse double layer model (DLM) with three amphoteric sites. Batch experiments were used to study the adsorption of Pb(II) onto the calcined diatomite as a function of pH (3.0-7.0) and at different ionic strengths (0.05 and 0.1 mol L-1 NaCl) under ambient atmosphere. Adsorption of Pb(II) is well described by Freundlich isotherm models. The apparent surface complexation equilibrium constants (log K) were obtained by fitting the batch experimental data using the PEST 13.0 and PHREEQC 3.1.2 codes, with good agreement between measured and predicted data. The distribution of Pb(II) surface species on the diatomite calculated by the PHREEQC 3.1.2 program indicates that the impurity cations (e.g. Al3+, Fe3+, etc.) in the diatomite play a leading role in Pb(II) adsorption, and that complex formation together with additional electrostatic interaction is the main adsorption mechanism of Pb(II) on diatomite under weakly acidic conditions.
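Since the abstract reports Freundlich behavior, a small fitting sketch may be useful; the Freundlich form q = K_F * C^(1/n) is standard, but the equilibrium data points below are hypothetical placeholders, not the paper's measurements.

```python
# Fit a Freundlich isotherm, q = K_F * C**(1/n), to hypothetical data.
import numpy as np
from scipy.optimize import curve_fit

def freundlich(c, kf, n):
    return kf * c ** (1.0 / n)

c_eq = np.array([0.5, 1.0, 2.0, 5.0, 10.0])    # equilibrium conc., mg/L
q_ads = np.array([2.1, 3.0, 4.2, 6.5, 8.9])    # adsorbed amount, mg/g

(kf, n), _ = curve_fit(freundlich, c_eq, q_ads, p0=(1.0, 1.0))
print(f"K_F = {kf:.2f}, n = {n:.2f}")
```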

  12. Modelling of spatially complex human-ecosystem, rural-urban and rich-poor interactions

    CSIR Research Space (South Africa)

    Naude, AH

    2008-06-01

The paper outlines the challenges of modelling and assessing spatially complex human-ecosystem interactions, and the need to simultaneously consider rural-urban and rich-poor interactions. The context for exploring these challenges is South Africa...

  13. 3D printing the pterygopalatine fossa: a negative space model of a complex structure.

    Science.gov (United States)

    Bannon, Ross; Parihar, Shivani; Skarparis, Yiannis; Varsou, Ourania; Cezayirli, Enis

    2018-02-01

    The pterygopalatine fossa is one of the most complex anatomical regions to understand. It is poorly visualized in cadaveric dissection and most textbooks rely on schematic depictions. We describe our approach to creating a low-cost, 3D model of the pterygopalatine fossa, including its associated canals and foramina, using an affordable "desktop" 3D printer. We used open source software to create a volume render of the pterygopalatine fossa from axial slices of a head computerised tomography scan. These data were then exported to a 3D printer to produce an anatomically accurate model. The resulting 'negative space' model of the pterygopalatine fossa provides a useful and innovative aid for understanding the complex anatomical relationships of the pterygopalatine fossa. This model was designed primarily for medical students; however, it will also be of interest to postgraduates in ENT, ophthalmology, neurosurgery, and radiology. The technical process described may be replicated by other departments wishing to develop their own anatomical models whilst incurring minimal costs.

  14. Critical appraisal of first-generation renal tumor complexity scoring systems: Creation of a second-generation model of tumor complexity.

    Science.gov (United States)

    Tobert, Conrad M; Shoemaker, Allen; Kahnoski, Richard J; Lane, Brian R

    2015-04-01

To investigate whether a combination of variables from each nephrometry system improves performance. There are 3 first-generation systems that quantify tumor complexity: the R.E.N.A.L. nephrometry score (RNS), the preoperative aspects and dimensions used for an anatomical (PADUA) classification (PC), and the centrality index (CI). Although each has been subjected to validation and comparative analysis, to our knowledge, no work has been done to combine variables from each method to optimize their performance. Scores were assigned to each of 276 patients undergoing partial nephrectomy (PN) or radical nephrectomy (RN). Individual components of all 3 systems were evaluated in multivariable logistic regression analysis of surgery type (PN vs. RN) and combined into a "second-generation model." In multivariable analysis, each scoring system was a significant predictor of PN vs. RN. Of the individual systems, CI was most highly correlated with surgery type (area under the curve [AUC] = 0.91), followed by RNS (AUC = 0.90) and PC (AUC = 0.88). Each individual component of these scoring systems was also a predictor of surgery type; the most predictive included components of the RNS, location along the lateral rim (PC), and centrality (CI). A novel model in which these 4 variables were rescaled outperformed each first-generation system (AUC = 0.91). Optimization of first-generation models of renal tumor complexity results in a novel scoring system that strongly predicts surgery type. This second-generation model should aid comprehension, but future work is still needed to establish the most clinically useful model. Copyright © 2015 Elsevier Inc. All rights reserved.
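The analysis pattern used here (multivariable logistic regression of surgery type, summarized by AUC) can be sketched as follows; the four predictors and all data are hypothetical placeholders, not the study cohort.

```python
# Logistic regression of surgery type on complexity variables, with AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 276
X = rng.standard_normal((n, 4))     # e.g. size, sinus nearness, rim, centrality
logit = 1.5 * X[:, 3] + 0.8 * X[:, 0] - 0.3
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)   # 1 = RN

model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"AUC = {auc:.2f}")
```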

  15. Paving the way towards complex blood-brain barrier models using pluripotent stem cells

    DEFF Research Database (Denmark)

    Lauschke, Karin; Frederiksen, Lise; Hall, Vanessa Jane

    2017-01-01

    , it is now possible to produce many cell types from the BBB and even partially recapitulate this complex tissue in vitro. In this review, we summarize the most recent developments in PSC differentiation and modelling of the BBB. We also suggest how patient-specific human induced PSCs could be used to model...

  16. Disentangling the Complexity of HGF Signaling by Combining Qualitative and Quantitative Modeling.

    Directory of Open Access Journals (Sweden)

    Lorenza A D'Alessandro

    2015-04-01

Signaling pathways are characterized by crosstalk, feedback and feedforward mechanisms giving rise to highly complex and cell-context specific signaling networks. Dissecting the underlying relations is crucial to predict the impact of targeted perturbations. However, a major challenge in identifying cell-context specific signaling networks is the enormous number of potentially possible interactions. Here, we report a novel hybrid mathematical modeling strategy to systematically unravel hepatocyte growth factor (HGF) stimulated phosphoinositide-3-kinase (PI3K) and mitogen activated protein kinase (MAPK) signaling, which critically contribute to liver regeneration. By combining time-resolved quantitative experimental data generated in primary mouse hepatocytes with interaction graph and ordinary differential equation modeling, we identify and experimentally validate a network structure that represents the experimental data best and indicates specific crosstalk mechanisms. Whereas the identified network is robust against single perturbations, combinatorial inhibition strategies are predicted that result in strong reduction of Akt and ERK activation. Thus, by capitalizing on the advantages of the two modeling approaches, we reduce the high combinatorial complexity and identify cell-context specific signaling networks.

  17. Technical Note: Approximate Bayesian parameterization of a complex tropical forest model

    Science.gov (United States)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2013-08-01

    successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.
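
    A minimal rejection-ABC loop shows the parameterization idea in miniature; the one-parameter Poisson simulator, summary statistic and tolerance below are stand-ins for the forest model and its summaries.

        # Minimal rejection-ABC loop; simulator and tolerance are toy
        # stand-ins for a complex stochastic process-based model.
        import numpy as np

        rng = np.random.default_rng(1)
        def simulate(theta, n=100):
            return rng.poisson(theta, size=n)

        s_obs = simulate(4.0).mean()          # "observed" summary statistic
        accepted = [theta for theta in rng.uniform(0, 10, size=10000)
                    if abs(simulate(theta).mean() - s_obs) < 0.2]
        print(f"{len(accepted)} accepted, posterior mean ≈ {np.mean(accepted):.2f}")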

  18. Parameterization and Sensitivity Analysis of a Complex Simulation Model for Mosquito Population Dynamics, Dengue Transmission, and Their Control

    Science.gov (United States)

    Ellis, Alicia M.; Garcia, Andres J.; Focks, Dana A.; Morrison, Amy C.; Scott, Thomas W.

    2011-01-01

    Models can be useful tools for understanding the dynamics and control of mosquito-borne disease. More detailed models may be more realistic and better suited for understanding local disease dynamics; however, evaluating model suitability, accuracy, and performance becomes increasingly difficult with greater model complexity. Sensitivity analysis is a technique that permits exploration of complex models by evaluating the sensitivity of the model to changes in parameters. Here, we present results of sensitivity analyses of two interrelated complex simulation models of mosquito population dynamics and dengue transmission. We found that dengue transmission may be influenced most by survival in each life stage of the mosquito, mosquito biting behavior, and duration of the infectious period in humans. The importance of these biological processes for vector-borne disease models and the overwhelming lack of knowledge about them make acquisition of relevant field data on these biological processes a top research priority. PMID:21813844
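
    The sampling-based sensitivity analysis can be sketched as follows; the three-parameter toy response and the partial rank correlation coefficient (PRCC) below only illustrate the mechanics, not the dengue simulation itself.

        # Latin hypercube sampling of three toy parameters and a PRCC
        # for the first one; the "model" is an invented stand-in.
        import numpy as np
        from scipy.stats import qmc, rankdata

        rng = np.random.default_rng(2)
        sampler = qmc.LatinHypercube(d=3, seed=2)
        params = qmc.scale(sampler.random(500), [0.1, 0.1, 1.0], [0.9, 0.9, 10.0])
        # toy response: survival * biting^2 * infectious_days + noise
        y = params[:, 0] * params[:, 1] ** 2 * params[:, 2] + rng.normal(0, 0.1, 500)

        def prcc(x, y, covars):
            # rank-transform, regress out covariates, correlate residuals
            Z = np.column_stack([np.ones(len(y))] +
                                [rankdata(c) for c in covars.T])
            rx = rankdata(x) - Z @ np.linalg.lstsq(Z, rankdata(x), rcond=None)[0]
            ry = rankdata(y) - Z @ np.linalg.lstsq(Z, rankdata(y), rcond=None)[0]
            return np.corrcoef(rx, ry)[0, 1]

        print("PRCC(survival):", round(prcc(params[:, 0], y, params[:, 1:]), 3))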

  19. Atmospheric dispersion modelling over complex terrain at small scale

    Science.gov (United States)

    Nosek, S.; Janour, Z.; Kukacka, L.; Jurcakova, K.; Kellnerova, R.; Gulikova, E.

    2014-03-01

    A previous study concerned qualitative modelling of neutrally stratified flow over an open-cut coal mine and the important surrounding topography at meso-scale (1:9000), and revealed an important area for quantitative modelling of atmospheric dispersion at small scale (1:3300). The selected area includes a necessary part of the coal mine topography with respect to its future expansion and the surrounding populated areas. At this small scale, simultaneous measurements of velocity components and concentrations at specified points of vertical and horizontal planes were performed by two-dimensional Laser Doppler Anemometry (LDA) and a Fast-Response Flame Ionization Detector (FFID), respectively. The impact of the complex terrain on passive pollutant dispersion with respect to the prevailing wind direction was observed, and the prediction of the air quality at populated areas is discussed. The measured data will be used for comparison with another model taking into account the future coal mine transformation. Thus, the impact of the coal mine transformation on pollutant dispersion can be observed.

  20. Reducing Spatial Data Complexity for Classification Models

    International Nuclear Information System (INIS)

    Ruta, Dymitr; Gabrys, Bogdan

    2007-01-01

    Intelligent data analytics is gradually becoming a day-to-day reality of today's businesses. However, despite rapidly increasing storage and computational power, current state-of-the-art predictive models still cannot handle massive and noisy corporate data warehouses. What is more, adaptive and real-time operational environments require multiple models to be frequently retrained, which further hinders their use. Various data reduction techniques, ranging from data sampling up to density retention models, attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. Our response is a proposition of a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. On the example of the Parzen Labelled Data Compressor (PLDC) we demonstrate a simulatory data condensation process directly inspired by electrostatic field interaction, where the data are moved and merged following the attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data, which acts as a class-sensitive potential field ensuring preservation of the original class density distributions, yet allowing data to rearrange and merge, joining together their soft class partitions. As a result we achieved a model that reduces the labelled datasets much further than any competitive approaches, yet with the maximum retention of the original class densities and hence the classification performance. PLDC leaves the reduced dataset with the soft accumulative class weights allowing for efficient online updates and, as shown in a series of experiments, if coupled with the Parzen Density Classifier (PDC) significantly outperforms competitive data condensation methods in terms of classification performance at the
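
    Loosely in the spirit of this framework (and emphatically not the PLDC algorithm itself), the sketch below condenses a labelled data set by repeatedly merging nearby same-class points into weighted prototypes.

        # Rough sketch (not PLDC): condense labelled data by merging each
        # randomly chosen point with its nearest same-class neighbour
        # into a weighted prototype that accumulates class weight.
        import numpy as np

        rng = np.random.default_rng(7)
        X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
        y = np.array([0] * 50 + [1] * 50)
        w = np.ones(len(X))                   # accumulated class weights

        for _ in range(60):
            alive = np.where(w > 0)[0]
            i = rng.choice(alive)
            same = alive[(y[alive] == y[i]) & (alive != i)]
            if len(same) == 0:
                continue
            j = same[np.argmin(np.linalg.norm(X[same] - X[i], axis=1))]
            X[i] = (w[i] * X[i] + w[j] * X[j]) / (w[i] + w[j])
            w[i] += w[j]                      # prototype absorbs weight
            w[j] = 0.0                        # donor point removed
        print("prototypes kept:", int((w > 0).sum()))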

  1. Reducing Spatial Data Complexity for Classification Models

    Science.gov (United States)

    Ruta, Dymitr; Gabrys, Bogdan

    2007-11-01

    Intelligent data analytics is gradually becoming a day-to-day reality of today's businesses. However, despite rapidly increasing storage and computational power, current state-of-the-art predictive models still cannot handle massive and noisy corporate data warehouses. What is more, adaptive and real-time operational environments require multiple models to be frequently retrained, which further hinders their use. Various data reduction techniques, ranging from data sampling up to density retention models, attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. Our response is a proposition of a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. On the example of the Parzen Labelled Data Compressor (PLDC) we demonstrate a simulatory data condensation process directly inspired by electrostatic field interaction, where the data are moved and merged following the attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data, which acts as a class-sensitive potential field ensuring preservation of the original class density distributions, yet allowing data to rearrange and merge, joining together their soft class partitions. As a result we achieved a model that reduces the labelled datasets much further than any competitive approaches, yet with the maximum retention of the original class densities and hence the classification performance. PLDC leaves the reduced dataset with the soft accumulative class weights allowing for efficient online updates and, as shown in a series of experiments, if coupled with the Parzen Density Classifier (PDC) significantly outperforms competitive data condensation methods in terms of classification performance at the

  2. Process modeling of the platform choice for development of the multimedia educational complex

    Directory of Open Access Journals (Sweden)

    Ірина Олександрівна Бондар

    2016-10-01

    The article presents a methodical approach to the platform choice as the technological basis for building an open and functional structure and the further implementation of the substantive content of the modules of the network multimedia complex for the discipline. The proposed approach is implemented through the use of mathematical tools. The result of the process modeling is the choice of the most appropriate platform for the development of the multimedia complex

  3. Sandpile model for relaxation in complex systems

    International Nuclear Information System (INIS)

    Vazquez, A.; Sotolongo-Costa, O.; Brouers, F.

    1997-10-01

    The relaxation in complex systems is, in general, nonexponential. After an initial rapid decay the system relaxes slowly, following a long time tail. In the present paper a sandpile model of the relaxation in complex systems is analysed. Complexity is introduced by a process of avalanches in the Bethe lattice and a feedback mechanism which leads to slower decay with increasing time. In this way, some features of relaxation in complex systems (long-time-tail relaxation, aging, and a fractal distribution of characteristic times) are obtained by simple computer simulations. (author)
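
    A toy square-lattice sandpile (the paper itself uses a Bethe lattice) shows how avalanche dynamics of this kind are simulated; the threshold and run lengths are illustrative only.

        # Toy square-lattice sandpile: add grains at random sites; sites
        # holding >= 4 grains topple to neighbours, producing avalanches.
        import numpy as np

        rng = np.random.default_rng(3)
        L, grid, sizes = 20, np.zeros((20, 20), dtype=int), []

        for _ in range(5000):
            i, j = rng.integers(L, size=2)
            grid[i, j] += 1
            size, unstable = 0, [(i, j)]
            while unstable:
                a, b = unstable.pop()
                if grid[a, b] < 4:
                    continue
                grid[a, b] -= 4               # topple
                size += 1
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    na, nb = a + da, b + db
                    if 0 <= na < L and 0 <= nb < L:  # edge grains dissipate
                        grid[na, nb] += 1
                        unstable.append((na, nb))
            sizes.append(size)
        print("largest avalanche:", max(sizes))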

  4. Looping and clustering model for the organization of protein-DNA complexes on the bacterial genome

    Science.gov (United States)

    Walter, Jean-Charles; Walliser, Nils-Ole; David, Gabriel; Dorignac, Jérôme; Geniet, Frédéric; Palmeri, John; Parmeggiani, Andrea; Wingreen, Ned S.; Broedersz, Chase P.

    2018-03-01

    The bacterial genome is organized by a variety of associated proteins inside a structure called the nucleoid. These proteins can form complexes on DNA that play a central role in various biological processes, including chromosome segregation. A prominent example is the large ParB-DNA complex, which forms an essential component of the segregation machinery in many bacteria. ChIP-Seq experiments show that ParB proteins localize around centromere-like parS sites on the DNA to which ParB binds specifically, and spreads from there over large sections of the chromosome. Recent theoretical and experimental studies suggest that DNA-bound ParB proteins can interact with each other to condense into a coherent 3D complex on the DNA. However, the structural organization of this protein-DNA complex remains unclear, and a predictive quantitative theory for the distribution of ParB proteins on DNA is lacking. Here, we propose the looping and clustering model, which employs a statistical physics approach to describe protein-DNA complexes. The looping and clustering model accounts for the extrusion of DNA loops from a cluster of interacting DNA-bound proteins that is organized around a single high-affinity binding site. Conceptually, the structure of the protein-DNA complex is determined by a competition between attractive protein interactions and loop closure entropy of this protein-DNA cluster on the one hand, and the positional entropy for placing loops within the cluster on the other. Indeed, we show that the protein interaction strength determines the ‘tightness’ of the loopy protein-DNA complex. Thus, our model provides a theoretical framework for quantitatively computing the binding profiles of ParB-like proteins around a cognate (parS) binding site.

  5. Modeling the Propagation of Mobile Phone Virus under Complex Network

    Science.gov (United States)

    Yang, Wei; Wei, Xi-liang; Guo, Hao; An, Gang; Guo, Lei

    2014-01-01

    Mobile phone virus is a rogue program written to propagate from one phone to another, which can take control of a mobile device by exploiting its vulnerabilities. In this paper the propagation model of mobile phone virus is tackled to understand how particular factors can affect its propagation and design effective containment strategies to suppress mobile phone virus. Two different propagation models of mobile phone viruses under the complex network are proposed in this paper. One is intended to describe the propagation of user-tricking virus, and the other is to describe the propagation of the vulnerability-exploiting virus. Based on the traditional epidemic models, the characteristics of mobile phone viruses and the network topology structure are incorporated into our models. A detailed analysis is conducted to analyze the propagation models. Through analysis, the stable infection-free equilibrium point and the stability condition are derived. Finally, considering the network topology, the numerical and simulation experiments are carried out. Results indicate that both models are correct and suitable for describing the spread of two different mobile phone viruses, respectively. PMID:25133209
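
    A minimal sketch of epidemic-style spread on a complex network topology, standing in for the user-tricking virus model; the graph, seed and rates below are invented.

        # SIS-style spread on a scale-free contact graph as a stand-in
        # for a mobile phone virus propagation model; rates are invented.
        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(4)
        G = nx.barabasi_albert_graph(500, 3, seed=4)
        beta, gamma = 0.05, 0.1               # infection / recovery rates
        infected = {0}                        # seed phone

        for _ in range(200):
            new_inf = {v for u in infected for v in G[u]
                       if rng.random() < beta}
            recovered = {u for u in infected if rng.random() < gamma}
            infected = (infected | new_inf) - recovered
        print("infected phones after 200 steps:", len(infected))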

  6. Animal Models of Lymphangioleiomyomatosis (LAM) and Tuberous Sclerosis Complex (TSC)

    Science.gov (United States)

    2010-01-01

    Abstract Animal models of lymphangioleiomyomatosis (LAM) and tuberous sclerosis complex (TSC) are highly desired to enable detailed investigation of the pathogenesis of these diseases. Multiple rats and mice have been generated in which a mutation similar to that occurring in TSC patients is present in an allele of Tsc1 or Tsc2. Unfortunately, these mice do not develop pathologic lesions that match those seen in LAM or TSC. However, these Tsc rodent models have been useful in confirming the two-hit model of tumor development in TSC, and in providing systems in which therapeutic trials (e.g., rapamycin) can be performed. In addition, conditional alleles of both Tsc1 and Tsc2 have provided the opportunity to target loss of these genes to specific tissues and organs, to probe the in vivo function of these genes, and attempt to generate better models. Efforts to generate an authentic LAM model are impeded by a lack of understanding of the cell of origin of this process. However, ongoing studies provide hope that such a model will be generated in the coming years. PMID:20235887

  7. Numerical modeling of Gaussian beam propagation and diffraction in inhomogeneous media based on the complex eikonal equation

    Science.gov (United States)

    Huang, Xingguo; Sun, Hui

    2018-05-01

    The Gaussian beam method is an important complex geometrical-optics technique for modeling seismic wave propagation and diffraction in a subsurface with complex geological structure. Current methods for Gaussian beam modeling rely on dynamic ray tracing and evanescent wave tracking. However, the dynamic ray tracing method is based on the paraxial ray approximation, and the evanescent wave tracking method cannot describe strongly evanescent fields. This leads to inaccuracy of the computed wave fields in regions with strong medium inhomogeneity. To address this problem, we compute Gaussian beam wave fields using the complex phase obtained by directly solving the complex eikonal equation. In this method, the fast marching method, which is widely used for phase calculation, is combined with a Gauss-Newton optimization algorithm to obtain the complex phase at the regular grid points. The main theoretical challenge in combining this method with Gaussian beam modeling is to address the irregular boundary near the curved central ray. To cope with this challenge, we present a non-uniform finite difference operator and a modified fast marching method. The numerical results confirm the proposed approach.

  8. Complex Dynamics of an Adnascent-Type Game Model

    Directory of Open Access Journals (Sweden)

    Baogui Xin

    2008-01-01

    The paper presents a nonlinear discrete game model for two oligopolistic firms whose products are adnascent. (In biology, the term adnascent has only one sense, "growing to or on something else," e.g., "moss is an adnascent plant." See Webster's Revised Unabridged Dictionary published in 1913 by C. & G. Merriam Co., edited by Noah Porter.) The bifurcation of its Nash equilibrium is analyzed with the Schwarzian derivative and normal form theory. Its complex dynamics is demonstrated by means of the largest Lyapunov exponents, fractal dimensions, bifurcation diagrams, and phase portraits. Finally, bifurcation and chaos anticontrol of this system are studied.
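
    The bifurcation/Lyapunov diagnostics can be illustrated on a one-dimensional caricature; the logistic map below only stands in for the two-firm adnascent map, which is not reproduced here.

        # One-dimensional caricature of the diagnostics: scan a control
        # parameter of a logistic map and estimate the largest Lyapunov
        # exponent along the orbit. Not the adnascent game map itself.
        import numpy as np

        def largest_lyapunov(a, q0=0.3, n=2000, burn=500):
            q, s = q0, 0.0
            for t in range(n):
                q = a * q * (1 - q)
                if t >= burn:
                    s += np.log(abs(a * (1 - 2 * q)) + 1e-12)
            return s / (n - burn)

        for a in (2.8, 3.2, 3.5, 3.9):
            print(f"a={a:.1f}  Lyapunov ≈ {largest_lyapunov(a):+.3f}")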

  9. A novel approach for modelling complex maintenance systems using discrete event simulation

    International Nuclear Information System (INIS)

    Alrabghi, Abdullah; Tiwari, Ashutosh

    2016-01-01

    Existing approaches for modelling maintenance rely on oversimplified assumptions which prevent them from reflecting the complexity found in industrial systems. In this paper, we propose a novel approach that enables the modelling of non-identical multi-unit systems without restrictive assumptions on the number of units or their maintenance characteristics. Modelling complex interactions between maintenance strategies and their effects on assets in the system is achieved by accessing event queues in Discrete Event Simulation (DES). The approach builds on the wide success DES has achieved in manufacturing by allowing integration with models that are closely related to maintenance, such as production and spare-parts systems. Additional advantages of using DES include rapid modelling and visual interactive simulation. The proposed approach is demonstrated in a simulation-based optimisation study of a published case. The current research is one of the first to optimise maintenance strategies simultaneously with their parameters while considering production dynamics and spare parts management. The findings of this research provide insights for non-conflicting objectives in maintenance systems. In addition, the proposed approach can be used to facilitate the simulation and optimisation of industrial maintenance systems. - Highlights: • This research is one of the first to optimise maintenance strategies simultaneously with their parameters. • New insights for non-conflicting objectives in maintenance systems. • The approach can be used to optimise industrial maintenance systems.
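
    A bare-bones discrete event simulation of a failure/repair loop, using a plain event queue; the coupling to production and spare-parts models described in the paper is omitted, and all time constants are invented.

        # Minimal DES: three machines fail and get repaired; event times
        # are exponential with invented means (50 h up, 5 h repair).
        import heapq, random

        random.seed(5)
        events = []                           # (time, kind, machine)
        for m in range(3):
            heapq.heappush(events, (random.expovariate(1 / 50), "fail", m))

        t_end, downtime, down_since = 1000.0, 0.0, {}
        while events and events[0][0] < t_end:
            t, kind, m = heapq.heappop(events)
            if kind == "fail":
                down_since[m] = t
                heapq.heappush(events, (t + random.expovariate(1 / 5), "repair", m))
            else:                             # repair done, schedule next failure
                downtime += t - down_since.pop(m)
                heapq.heappush(events, (t + random.expovariate(1 / 50), "fail", m))

        print(f"availability ≈ {1 - downtime / (3 * t_end):.3f}")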

  10. Learning and inference using complex generative models in a spatial localization task.

    Science.gov (United States)

    Bejjanki, Vikranth R; Knill, David C; Aslin, Richard N

    2016-01-01

    A large body of research has established that, under relatively simple task conditions, human observers integrate uncertain sensory information with learned prior knowledge in an approximately Bayes-optimal manner. However, in many natural tasks, observers must perform this sensory-plus-prior integration when the underlying generative model of the environment consists of multiple causes. Here we ask if the Bayes-optimal integration seen with simple tasks also applies to such natural tasks when the generative model is more complex, or whether observers rely instead on a less efficient set of heuristics that approximate ideal performance. Participants localized a "hidden" target whose position on a touch screen was sampled from a location-contingent bimodal generative model with different variances around each mode. Over repeated exposure to this task, participants learned the a priori locations of the target (i.e., the bimodal generative model), and integrated this learned knowledge with uncertain sensory information on a trial-by-trial basis in a manner consistent with the predictions of Bayes-optimal behavior. In particular, participants rapidly learned the locations of the two modes of the generative model, but the relative variances of the modes were learned much more slowly. Taken together, our results suggest that human performance in a more complex localization task, which requires the integration of sensory information with learned knowledge of a bimodal generative model, is consistent with the predictions of Bayes-optimal behavior, but involves a much longer time-course than in simpler tasks.
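
    A grid-based sketch of the Bayes-optimal computation assumed here: multiply a bimodal prior by a Gaussian sensory likelihood and read off the posterior peak. All numbers are illustrative.

        # Grid-based Bayes-optimal estimate with a bimodal prior and a
        # Gaussian sensory likelihood; all values are illustrative.
        import numpy as np

        x = np.linspace(-10, 10, 2001)
        def gauss(x, mu, sd):
            return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

        prior = 0.5 * gauss(x, -3, 0.8) + 0.5 * gauss(x, 3, 2.0)
        likelihood = gauss(x, 1.0, 2.5)       # noisy cue observed at +1
        posterior = prior * likelihood
        posterior /= posterior.sum()          # normalize on the grid

        print(f"MAP target estimate ≈ {x[np.argmax(posterior)]:.2f}")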

  11. Organizational-economic model of formation of socio-commercial multifunctional complex in the construction of high-rise buildings

    Science.gov (United States)

    Kirillova, Ariadna; Prytkova, Oksana O.

    2018-03-01

    The article is devoted to the features of the formation of the organizational and economic model of the construction of a socio-commercial multifunctional complex for high-rise construction. The authors have given examples of high-altitude multifunctional complexes in Moscow, analyzed the advantages and disadvantages in the implementation of multifunctional complexes, and stressed the need for a holistic strategic approach that takes into account the prospects for the development of the city and the creation of a comfortable living environment. Based on the analysis of multifunctional complexes' features, a matrix of SWOT analysis was compiled. For the development of cities and improving the quality of life of the population, it is proposed to implement a new type of multifunctional complexes of a joint social and commercial direction, including, along with office areas - schools, polyclinics, various sports facilities and cultural and leisure centers (theatrical, dance, studio, etc.). The approach proposed in the article for developing the model is based on a comparative evaluation of the multifunctional complex project of a social and commercial direction implemented at the expense of public-private partnership in the form of a concession agreement and a commercial multifunctional complex being built at the expense of the investor. It has been proved by calculations that the obtained indicators satisfy the conditions of expediency of the proposed organizational-economic model and that the project of the social and commercial multifunctional complex is effective.

  12. Organizational-economic model of formation of socio-commercial multifunctional complex in the construction of high-rise buildings

    Directory of Open Access Journals (Sweden)

    Kirillova Ariadna

    2018-01-01

    The article is devoted to the features of the formation of the organizational and economic model of the construction of a socio-commercial multifunctional complex for high-rise construction. The authors have given examples of high-altitude multifunctional complexes in Moscow, analyzed the advantages and disadvantages in the implementation of multifunctional complexes, and stressed the need for a holistic strategic approach that takes into account the prospects for the development of the city and the creation of a comfortable living environment. Based on the analysis of multifunctional complexes' features, a matrix of SWOT analysis was compiled. For the development of cities and improving the quality of life of the population, it is proposed to implement a new type of multifunctional complexes of a joint social and commercial direction, including, along with office areas - schools, polyclinics, various sports facilities and cultural and leisure centers (theatrical, dance, studio, etc.). The approach proposed in the article for developing the model is based on a comparative evaluation of the multifunctional complex project of a social and commercial direction implemented at the expense of public-private partnership in the form of a concession agreement and a commercial multifunctional complex being built at the expense of the investor. It has been proved by calculations that the obtained indicators satisfy the conditions of expediency of the proposed organizational-economic model and that the project of the social and commercial multifunctional complex is effective.

  13. Spectroscopic studies of molybdenum complexes as models for nitrogenase

    International Nuclear Information System (INIS)

    Walker, T.P.

    1981-05-01

    Because biological nitrogen fixation requires Mo, there is an interest in inorganic Mo complexes which mimic the reactions of nitrogen-fixing enzymes. Two such complexes are the dimer Mo2O4(cysteine)2(2-) and trans-Mo(N2)2(dppe)2 (dppe = 1,2-bis(diphenylphosphino)ethane). The 1H and 13C NMR of solutions of Mo2O4(cys)2(2-) are described. It is shown that in aqueous solution the cysteine ligands assume at least three distinct configurations. A step-wise dissociation of the cysteine ligand is proposed to explain the data. The Extended X-ray Absorption Fine Structure (EXAFS) of trans-Mo(N2)2(dppe)2 is described and compared to the EXAFS of MoH4(dppe)2. The spectra are fitted to amplitude and phase parameters developed at Bell Laboratories. On the basis of this analysis, one can determine (1) that the dinitrogen complex contains nitrogen and the hydride complex does not and (2) the correct Mo-N distance. This is significant because the Mo in both complexes is coordinated by four P atoms which dominate the EXAFS. A similar sort of interference is present in nitrogenase due to S coordination of the Mo in the enzyme. This model experiment indicates that, given adequate signal-to-noise ratios, the presence or absence of dinitrogen coordination to Mo in the enzyme may be determined by EXAFS using existing data analysis techniques. A new reaction between Mo2O4(cys)2(2-) and acetylene is described to the extent it is presently understood. A strong EPR signal is observed, suggesting the production of stable Mo(V) monomers. EXAFS studies support this suggestion. The Mo K-edge is described. The edge data suggest that Mo(VI) is also produced in the reaction. Ultraviolet spectra suggest that cysteine is released in the course of the reaction.

  14. NHL and RCGA Based Multi-Relational Fuzzy Cognitive Map Modeling for Complex Systems

    Directory of Open Access Journals (Sweden)

    Zhen Peng

    2015-11-01

    To model complex systems with multiple dimensions and granularities, this paper first proposes a kind of multi-relational Fuzzy Cognitive Map (FCM) to simulate the multi-relational system, together with an automatic construction algorithm integrating Nonlinear Hebbian Learning (NHL) and a Real-Code Genetic Algorithm (RCGA). The multi-relational FCM is suited to modeling a complex system with multiple dimensions and granularities. The automatic construction algorithm can learn the multi-relational FCM from multi-relational data resources, eliminating human intervention. The Multi-Relational Data Mining (MRDM) algorithm integrates multi-instance-oriented NHL and the RCGA of the FCM. NHL is extended to mine the causal relationships between a coarse-granularity concept and its fine-granularity concepts, driven by multi-instances in the multi-relational system. The RCGA is used to establish a high-quality high-level FCM driven by data. The multi-relational FCM and the integrating algorithm have been applied to the complex system of Mutagenesis. The experiment demonstrates not only that they achieve better classification accuracy, but also that they reveal the causal relationships among the concepts of the system.
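
    One possible reading of an FCM inference step plus a nonlinear Hebbian weight update (an Oja-like form is assumed below; sizes, weights and the learning rate are placeholders, not the paper's algorithm).

        # Sketch of an FCM inference step plus an Oja-like nonlinear
        # Hebbian weight update; all sizes and values are placeholders.
        import numpy as np

        rng = np.random.default_rng(6)
        W = rng.uniform(-1, 1, size=(5, 5))   # causal weights i -> j
        np.fill_diagonal(W, 0.0)
        A = rng.uniform(0, 1, size=5)         # concept activations
        sigmoid = lambda x: 1 / (1 + np.exp(-x))
        eta = 0.05                            # Hebbian learning rate

        for _ in range(20):
            A_new = sigmoid(W.T @ A)          # FCM inference step
            # strengthen weights whose source/target co-activate,
            # with an Oja-style decay to keep weights bounded
            W += eta * (np.outer(A, A_new) - (A[:, None] ** 2) * W)
            np.fill_diagonal(W, 0.0)
            A = A_new
        print("activations after learning:", np.round(A, 3))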

  15. Acetylene as fast food: Implications for development of life on anoxic primordial earth and in the outer solar system

    Science.gov (United States)

    Oremland, R.S.; Voytek, M.A.

    2008-01-01

    Acetylene occurs, by photolysis of methane, in the atmospheres of jovian planets and Titan. In contrast, acetylene is only a trace component of Earth's current atmosphere. Nonetheless, a methane-rich atmosphere has been hypothesized for early Earth; this atmosphere would also have been rich in acetylene. This poses a paradox, because acetylene is a potent inhibitor of many key anaerobic microbial processes, including methanogenesis, anaerobic methane oxidation, nitrogen fixation, and hydrogen oxidation. Fermentation of acetylene was discovered 25 years ago, and Pelobacter acetylenicus was shown to grow on acetylene by virtue of acetylene hydratase, which results in the formation of acetaldehyde. Acetaldehyde subsequently dismutates to ethanol and acetate (plus some hydrogen). However, acetylene hydratase is specific for acetylene and does not react with any analogous compounds. We hypothesize that microbes with acetylene hydratase played a key role in the evolution of Earth's early biosphere by exploiting an available source of carbon from the atmosphere and in so doing formed protective niches that allowed for other microbial processes to flourish. Furthermore, the presence of acetylene in the atmosphere of a planet or planetoid could possibly represent evidence for an extraterrestrial anaerobic ecosystem. © Mary Ann Liebert, Inc.

  16. Acetylene as fast food: implications for development of life on anoxic primordial Earth and in the outer solar system.

    Science.gov (United States)

    Oremland, Ronald S; Voytek, Mary A

    2008-02-01

    Acetylene occurs, by photolysis of methane, in the atmospheres of jovian planets and Titan. In contrast, acetylene is only a trace component of Earth's current atmosphere. Nonetheless, a methane-rich atmosphere has been hypothesized for early Earth; this atmosphere would also have been rich in acetylene. This poses a paradox, because acetylene is a potent inhibitor of many key anaerobic microbial processes, including methanogenesis, anaerobic methane oxidation, nitrogen fixation, and hydrogen oxidation. Fermentation of acetylene was discovered approximately 25 years ago, and Pelobacter acetylenicus was shown to grow on acetylene by virtue of acetylene hydratase, which results in the formation of acetaldehyde. Acetaldehyde subsequently dismutates to ethanol and acetate (plus some hydrogen). However, acetylene hydratase is specific for acetylene and does not react with any analogous compounds. We hypothesize that microbes with acetylene hydratase played a key role in the evolution of Earth's early biosphere by exploiting an available source of carbon from the atmosphere and in so doing formed protective niches that allowed for other microbial processes to flourish. Furthermore, the presence of acetylene in the atmosphere of a planet or planetoid could possibly represent evidence for an extraterrestrial anaerobic ecosystem.

  17. MATHEMATICAL MODELS OF PROCESSES AND SYSTEMS OF TECHNICAL OPERATION FOR ONBOARD COMPLEXES AND FUNCTIONAL SYSTEMS OF AVIONICS

    Directory of Open Access Journals (Sweden)

    Sergey Viktorovich Kuznetsov

    2017-01-01

    Modern aircraft are equipped with complicated avionics systems and complexes. The technical operation process of an aircraft and its avionics is observed as a process with changing operation states. Mathematical models of avionics processes and systems of technical operation are represented as Markov chains and Markov and semi-Markov processes. The purpose is to develop graph-models of avionics technical operation processes, describing their work in flight as well as during maintenance on the ground in the various systems of technical operation. Graph-models of processes and systems of on-board complexes and functional avionics systems in flight are proposed. They are based on state tables. The models are specified for the various technical operation systems: the system with control of the reliability level, the system with parameters control and the system with resource control. The events which cause the avionics complexes and functional systems to change their technical state are failures and faults of built-in test equipment. The avionics system of technical operation with reliability level control is applicable for objects with a constant or slowly time-varying failure rate. The avionics system of technical operation with resource control is mainly used for objects with a failure rate that increases over time. The avionics system of technical operation with parameters control is used for objects with a failure rate that increases over time and with generalized parameters which can provide forecasting and assign the borders of before-failure technical states. The proposed formal graphical approach to designing models of avionics complexes and systems is the basis for the construction of models of complex systems and facilities, both for a single aircraft and for an airline aircraft fleet, or even for the entire fleet of some specific aircraft type. The ultimate graph-models for avionics in various systems of technical operation permit the beginning of

  18. Electromagnetic modelling of Ground Penetrating Radar responses to complex targets

    Science.gov (United States)

    Pajewski, Lara; Giannopoulos, Antonis

    2014-05-01

    This work deals with the electromagnetic modelling of composite structures for Ground Penetrating Radar (GPR) applications. It was developed within the Short-Term Scientific Mission ECOST-STSM-TU1208-211013-035660, funded by COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar". The Authors define a set of test concrete structures, hereinafter called cells. The size of each cell is 60 x 100 x 18 cm and the content varies with growing complexity, from a simple cell with a few rebars of different diameters embedded in concrete at increasing depths, to a final cell with a quite complicated pattern, including a layer of tendons between two overlying meshes of rebars. Other cells, of intermediate complexity, contain pvc ducts (air filled or hosting rebars), steel objects commonly used in civil engineering (such as a pipe, an angle bar, a box section and an u-channel), as well as void and honeycombing defects. One of the cells has a steel mesh embedded in it, overlying two rebars placed diagonally across the corners of the structure. Two cells include a couple of rebars bent into a right angle and placed on top of each other, with a square/round circle lying at the base of the concrete slab. Inspiration for some of these cells is taken from the very interesting experimental work presented in Ref. [1]. For each cell, a subset of models with growing complexity is defined, starting from a simple representation of the cell and ending with a more realistic one. In particular, the model's complexity increases from the geometrical point of view, as well as in terms of how the constitutive parameters of involved media and GPR antennas are described. Some cells can be simulated in both two and three dimensions; the concrete slab can be approximated as a finite-thickness layer having infinite extension on the transverse plane, thus neglecting how edges affect radargrams, or else its finite size can be fully taken into account. The permittivity of concrete can be

  19. Modeling the Internet of Things, Self-Organizing and Other Complex Adaptive Communication Networks: A Cognitive Agent-Based Computing Approach.

    Directory of Open Access Journals (Sweden)

    Samreen Laghari

    Computer networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices ranging from mobile phones to other household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of Agent-based Modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a complex communication network problem. We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. The conducted experiments demonstrated two important results: primarily, a CABC-based modeling approach such as Agent-based Modeling can be an effective approach to modeling complex problems in the domain of IoT; secondly, the specific problem of managing the carbon footprint can be solved using a multi-agent system approach.

  20. Modeling the Internet of Things, Self-Organizing and Other Complex Adaptive Communication Networks: A Cognitive Agent-Based Computing Approach.

    Science.gov (United States)

    Laghari, Samreen; Niazi, Muaz A

    2016-01-01

    Computer networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices ranging from mobile phones to other household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of Agent-based Modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a complex communication network problem. We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. The conducted experiments demonstrated two important results: primarily, a CABC-based modeling approach such as Agent-based Modeling can be an effective approach to modeling complex problems in the domain of IoT; secondly, the specific problem of managing the carbon footprint can be solved using a multi-agent system approach.

  1. Data-Driven Modeling of Complex Systems by means of a Dynamical ANN

    Science.gov (United States)

    Seleznev, A.; Mukhin, D.; Gavrilov, A.; Loskutov, E.; Feigin, A.

    2017-12-01

    The data-driven methods for modeling and prognosis of complex dynamical systems are becoming more and more popular in various fields due to the growth of high-resolution data. We distinguish two basic steps in such an approach: (i) determining the phase subspace of the system, or embedding, from available time series and (ii) constructing an evolution operator acting in this reduced subspace. In this work we suggest a novel approach combining these two steps by means of the construction of an artificial neural network (ANN) with special topology. The proposed ANN-based model, on the one hand, projects the data onto a low-dimensional manifold, and, on the other hand, models a dynamical system on this manifold. In fact, this is a recurrent multilayer ANN which has internal dynamics and is capable of generating time series. A very important point of the proposed methodology is the optimization of the model, allowing us to avoid overfitting: we use a Bayesian criterion to optimize the ANN structure and estimate both the degree of evolution operator nonlinearity and the complexity of the nonlinear manifold onto which the data are projected. The proposed modeling technique will be applied to the analysis of high-dimensional dynamical systems: the Lorenz'96 model of atmospheric turbulence, producing high-dimensional space-time chaos, and a quasi-geostrophic three-layer model of the Earth's atmosphere with natural orography, describing the dynamics of synoptic vortexes as well as mesoscale blocking systems. The possibility of applying the proposed methodology to analyze real measured data is also discussed. The study was supported by the Russian Science Foundation (grant #16-12-10198).

  2. Adapting APSIM to model the physiology and genetics of complex adaptive traits in field crops.

    Science.gov (United States)

    Hammer, Graeme L; van Oosterom, Erik; McLean, Greg; Chapman, Scott C; Broad, Ian; Harland, Peter; Muchow, Russell C

    2010-05-01

    Progress in molecular plant breeding is limited by the ability to predict plant phenotype based on its genotype, especially for complex adaptive traits. Suitably constructed crop growth and development models have the potential to bridge this predictability gap. A generic cereal crop growth and development model is outlined here. It is designed to exhibit reliable predictive skill at the crop level while also introducing sufficient physiological rigour for complex phenotypic responses to become emergent properties of the model dynamics. The approach quantifies capture and use of radiation, water, and nitrogen within a framework that predicts the realized growth of major organs based on their potential and whether the supply of carbohydrate and nitrogen can satisfy that potential. The model builds on existing approaches within the APSIM software platform. Experiments on diverse genotypes of sorghum that underpin the development and testing of the adapted crop model are detailed. Genotypes differing in height were found to differ in biomass partitioning among organs and a tall hybrid had significantly increased radiation use efficiency: a novel finding in sorghum. Introducing these genetic effects associated with plant height into the model generated emergent simulated phenotypic differences in green leaf area retention during grain filling via effects associated with nitrogen dynamics. The relevance to plant breeding of this capability in complex trait dissection and simulation is discussed.

  3. Modelling hemoglobin and hemoglobin:haptoglobin complex clearance in a non-rodent species - pharmacokinetic and therapeutic implications

    Directory of Open Access Journals (Sweden)

    Felicitas S Boretti

    2014-10-01

    Preclinical studies suggest that haptoglobin (Hp) supplementation could be an effective therapeutic modality during acute or chronic hemolytic diseases. Hp prevents Hb extravasation and neutralizes Hb's oxidative and NO scavenging activity in the vasculature. Small animal models such as mouse, rat and guinea pig appear to be valuable to provide proof-of-concept for Hb neutralization by Hp in diverse pre-clinical conditions. However, these species differ significantly from human in the clearance of Hb:Hp complexes, which leads to long persistence of circulating Hb:Hp complexes after administration of human plasma derived Hp. Alternative animal models must therefore be explored to guide pre-clinical development of these potential therapeutics. In contrast to rodents, dogs have high Hp plasma concentrations comparable to human. In this study we show that, like human macrophages, dog peripheral blood monocyte derived macrophages express a glucocorticoid-inducible endocytic clearance pathway with a high specificity for the Hb:Hp complex. Evaluating the Beagle dog as a non-rodent model species, we provide the first pharmacokinetic parameter estimates of free Hb and Hb:Hp phenotype complexes. The data reflect a drastically reduced volume of distribution (Vc) of the complex compared to free Hb, increased exposures (Cmax and AUC) and significantly reduced total body clearance (CL) with a terminal half-life (t1/2) of approximately 12 hours. Distribution and clearance were identical for dog and human Hb (± glucocorticoid stimulation) and for dimeric and multimeric Hp preparations bound to Hb. Collectively, our study supports the dog as a non-rodent animal model to study pharmacological and pharmacokinetic aspects of Hb clearance systems and apply the model to studying Hp therapeutics.
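
    Assuming standard one-compartment relations (CL = k·Vc and t1/2 = ln 2/k), the reported 12-hour half-life translates into a clearance estimate as sketched below; the Vc value is invented for illustration only.

        # One-compartment relations (assumed): k = ln 2 / t_half and
        # CL = k * Vc. Half-life from the abstract; Vc is hypothetical.
        import numpy as np

        t_half = 12.0                         # hours (reported for Hb:Hp)
        k = np.log(2) / t_half                # elimination rate constant
        Vc = 0.05                             # L/kg, hypothetical value
        print(f"k = {k:.4f} 1/h, CL = {k * Vc * 1000:.1f} mL/h/kg")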

  4. Modelling complex systems of heterogeneous agents to better design sustainability transitions policy

    NARCIS (Netherlands)

    Mercure, J.F.A.; Pollitt, H.; Bassi, A.M.; Viñuales, J.E.; Edwards, N.R.

    2016-01-01

    This article proposes a fundamental methodological shift in the modelling of policy interventions for sustainability transitions in order to account for complexity (e.g. self-reinforcing mechanisms, such as technology lock-ins, arising from multi-agent interactions) and agent heterogeneity (e.g.

  5. Accurate modeling and evaluation of microstructures in complex materials

    Science.gov (United States)

    Tahmasebi, Pejman

    2018-02-01

    Accurate characterization of heterogeneous materials is of great importance for different fields of science and engineering. Such a goal can be achieved through imaging. Acquiring three- or two-dimensional images under different conditions is not, however, always feasible. On the other hand, accurate characterization of complex and multiphase materials requires various digital images (I) under different conditions. An ensemble method is presented that can take one single (or a set of) I(s) and stochastically produce several similar models of the given disordered material. The method is based on a successive calculating of a conditional probability by which the initial stochastic models are produced. Then, a graph formulation is utilized for removing unrealistic structures. A distance transform function for the Is with highly connected microstructure and long-range features is considered which results in a new I that is more informative. Reproduction of the I is also considered through a histogram matching approach in an iterative framework. Such an iterative algorithm avoids reproduction of unrealistic structures. Furthermore, a multiscale approach, based on pyramid representation of the large Is, is presented that can produce materials with millions of pixels in a matter of seconds. Finally, the nonstationary systems (those for which the distribution of data varies spatially) are studied using two different methods. The method is tested on several complex and large examples of microstructures. The produced results are all in excellent agreement with the utilized Is, and the similarities are quantified using various correlation functions.

  6. Clarity versus complexity: land-use modeling as a practical tool for decision-makers

    Science.gov (United States)

    Sohl, Terry L.; Claggett, Peter

    2013-01-01

    The last decade has seen a remarkable increase in the number of modeling tools available to examine future land-use and land-cover (LULC) change. Integrated modeling frameworks, agent-based models, cellular automata approaches, and other modeling techniques have substantially improved the representation of complex LULC systems, with each method using a different strategy to address complexity. However, despite the development of new and better modeling tools, the use of these tools is limited for actual planning, decision-making, or policy-making purposes. LULC modelers have become very adept at creating tools for modeling LULC change, but complicated models and lack of transparency limit their utility for decision-makers. The complicated nature of many LULC models also makes it impractical or even impossible to perform a rigorous analysis of modeling uncertainty. This paper provides a review of land-cover modeling approaches and the issues caused by the complicated nature of models, and provides suggestions to facilitate the increased use of LULC models by decision-makers and other stakeholders. The utility of LULC models themselves can be improved by 1) providing model code and documentation, 2) using scenario frameworks to frame overall uncertainties, 3) improving methods for generalizing the key LULC processes most important to stakeholders, and 4) adopting more rigorous standards for validating models and quantifying uncertainty. Communication with decision-makers and other stakeholders can be improved by increasing stakeholder participation in all stages of the modeling process, increasing the transparency of model structure and uncertainties, and developing user-friendly decision-support systems to bridge the link between LULC science and policy. By considering these options, LULC science will be better positioned to support decision-makers and increase real-world application of LULC modeling results.

  7. The value and cost of complexity in predictive modelling: role of tissue anisotropic conductivity and fibre tracts in neuromodulation.

    Science.gov (United States)

    Shahid, Syed Salman; Bikson, Marom; Salman, Humaira; Wen, Peng; Ahfock, Tony

    2014-06-01

    Computational methods are increasingly used to optimize transcranial direct current stimulation (tDCS) dose strategies and yet complexities of existing approaches limit their clinical access. Since predictive modelling indicates the relevance of subject/pathology based data and hence the need for subject specific modelling, the incremental clinical value of increasingly complex modelling methods must be balanced against the computational and clinical time and costs. For example, the incorporation of multiple tissue layers and measured diffusion tensor (DTI) based conductivity estimates increase model precision but at the cost of clinical and computational resources. Costs related to such complexities aggregate when considering individual optimization and the myriad of potential montages. Here, rather than considering if additional details change current-flow prediction, we consider when added complexities influence clinical decisions. Towards developing quantitative and qualitative metrics of value/cost associated with computational model complexity, we considered field distributions generated by two 4 × 1 high-definition montages (m1 = 4 × 1 HD montage with anode at C3 and m2 = 4 × 1 HD montage with anode at C1) and a single conventional (m3 = C3-Fp2) tDCS electrode montage. We evaluated statistical methods, including residual error (RE) and relative difference measure (RDM), to consider the clinical impact and utility of increased complexities, namely the influence of skull, muscle and brain anisotropic conductivities in a volume conductor model. Anisotropy modulated current-flow in a montage and region dependent manner. However, significant statistical changes, produced within montage by anisotropy, did not change qualitative peak and topographic comparisons across montages. Thus for the examples analysed, clinical decision on which dose to select would not be altered by the omission of anisotropic brain conductivity. Results illustrate the need to rationally
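
    The two comparison statistics named above have standard definitions in the field-modelling literature, which are assumed in the sketch below: relative error (RE) and relative difference measure (RDM) between two field vectors; the field values are hypothetical.

        # Assumed standard definitions: relative error (RE) and relative
        # difference measure (RDM) between two sampled field vectors.
        import numpy as np

        def re(a, b):
            return np.linalg.norm(a - b) / np.linalg.norm(a)

        def rdm(a, b):
            return np.linalg.norm(a / np.linalg.norm(a) - b / np.linalg.norm(b))

        field_iso = np.array([0.10, 0.25, 0.40])    # hypothetical |E| values
        field_aniso = np.array([0.12, 0.22, 0.45])
        print(f"RE = {re(field_iso, field_aniso):.3f}, "
              f"RDM = {rdm(field_iso, field_aniso):.3f}")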

  8. Watershed System Model: The Essentials to Model Complex Human-Nature System at the River Basin Scale

    Science.gov (United States)

    Li, Xin; Cheng, Guodong; Lin, Hui; Cai, Ximing; Fang, Miao; Ge, Yingchun; Hu, Xiaoli; Chen, Min; Li, Weiyue

    2018-03-01

    Watershed system models are urgently needed to understand complex watershed systems and to support integrated river basin management. Early watershed modeling efforts focused on the representation of hydrologic processes, while the next-generation watershed models should represent the coevolution of the water-land-air-plant-human nexus in a watershed and provide capability of decision-making support. We propose a new modeling framework and discuss the know-how approach to incorporate emerging knowledge into integrated models through data exchange interfaces. We argue that the modeling environment is a useful tool to enable effective model integration, as well as create domain-specific models of river basin systems. The grand challenges in developing next-generation watershed system models include but are not limited to providing an overarching framework for linking natural and social sciences, building a scientifically based decision support system, quantifying and controlling uncertainties, and taking advantage of new technologies and new findings in the various disciplines of watershed science. The eventual goal is to build transdisciplinary, scientifically sound, and scale-explicit watershed system models that are to be codesigned by multidisciplinary communities.

  9. Validation of the Canadian atmospheric dispersion model for the CANDU reactor complex at Wolsong, Korea

    International Nuclear Information System (INIS)

    Klukas, M.H.; Davis, P.A.

    2000-01-01

    AECL is undertaking the validation of ADDAM, an atmospheric dispersion and dose code based on the Canadian Standards Association model CSA N288.2. The key component of the validation program involves comparison of model-predicted and measured vertical and lateral dispersion parameters, effective release height and air concentrations. A wind tunnel study of the dispersion of exhaust gases from the CANDU complex at Wolsong, Korea provides test data for dispersion over uniform and complex terrain. The test data are for distances close enough to the release points to evaluate the model for exclusion area boundaries (EAB) as small as 500 m. Lateral and vertical dispersion is described well for releases over uniform terrain, but the model tends to over-predict these parameters for complex terrain. Both plume rise and entrainment are modelled conservatively, and the way they are combined in the model produces conservative estimates of the effective release height for low and high wind speeds. Estimates for the medium wind speed case (50-m wind speed of 3.8 m/s) are conservative when the correction for entrainment is made. For the highest ground-level concentrations, those of greatest interest in a safety analysis, 82% of the predictions were within a factor of 2 of the observed values. The model can be used with confidence to predict air concentrations of exhaust gases at the Wolsong site for neutral conditions, even for flows over the hills to the west, and is unlikely to substantially under-predict concentrations. (author)
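
    The factor-of-two agreement quoted above corresponds to the usual FAC2 statistic, sketched here on invented concentration values.

        # FAC2: fraction of predictions within a factor of two of the
        # observations (assumed definition behind the 82% figure).
        import numpy as np

        def fac2(predicted, observed):
            ratio = predicted / observed
            return np.mean((ratio >= 0.5) & (ratio <= 2.0))

        pred = np.array([1.2, 0.8, 3.0, 0.4, 1.0])  # invented values
        obs = np.ones(5)
        print(f"FAC2 = {fac2(pred, obs):.2f}")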

  10. Simple versus complex models of trait evolution and stasis as a response to environmental change

    Science.gov (United States)

    Hunt, Gene; Hopkins, Melanie J.; Lidgard, Scott

    2015-04-01

    Previous analyses of evolutionary patterns, or modes, in fossil lineages have focused overwhelmingly on three simple models: stasis, random walks, and directional evolution. Here we use likelihood methods to fit an expanded set of evolutionary models to a large compilation of ancestor-descendant series of populations from the fossil record. In addition to the standard three models, we assess more complex models with punctuations and shifts from one evolutionary mode to another. As in previous studies, we find that stasis is common in the fossil record, as is a strict version of stasis that entails no real evolutionary changes. Incidence of directional evolution is relatively low (13%), but higher than in previous studies because our analytical approach can more sensitively detect noisy trends. Complex evolutionary models are often favored, overwhelmingly so for sequences comprising many samples. This finding is consistent with evolutionary dynamics that are, in reality, more complex than any of the models we consider. Furthermore, the timing of shifts in evolutionary dynamics varies among traits measured from the same series. Finally, we use our empirical collection of evolutionary sequences and a long and highly resolved proxy for global climate to inform simulations in which traits adaptively track temperature changes over time. When realistically calibrated, we find that this simple model can reproduce important aspects of our paleontological results. We conclude that observed paleontological patterns, including the prevalence of stasis, need not be inconsistent with adaptive evolution, even in the face of unstable physical environments.

  11. Complex dynamics and bifurcation analysis of host–parasitoid models with impulsive control strategy

    International Nuclear Information System (INIS)

    Yang, Jin; Tang, Sanyi; Tan, Yuanshun

    2016-01-01

    Highlights: • We develop novel host-parasitoid models with an impulsive control strategy. • The effects of key parameters on successful control have been addressed. • The complex dynamics and related biological significance are investigated. • The results for the two types of host-parasitoid models have been discussed. - Abstract: In this paper, we propose and analyse two types of host–parasitoid models with integrated pest management (IPM) interventions as impulsive control strategies. For the fixed-pulse model, the threshold condition for the global stability of the host-eradication periodic solution is provided, and the effects of key parameters including the impulsive period, proportionate killing rate, instantaneous search rate, releasing constant, survival rate and the proportionate release rate on the threshold condition are discussed. Then Latin hypercube sampling/partial rank correlation coefficients are used to carry out sensitivity analyses to determine the significance of each parameter. Further, bifurcation analyses are presented, and the results show that coexistence of attractors exists for a wide range of parameters; the switch-like transitions among these attractors indicate that varying dosages and frequencies of insecticide applications and numbers of parasitoids released are crucial for an IPM strategy. For the unfixed-pulse model, the results show that this model exhibits very complex dynamics and that the host population can be controlled below the economic threshold (ET), implying that the modelling methods are helpful for improving optimal strategies to design appropriate IPM.
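
    A toy host–parasitoid iteration with a pulsed IPM intervention illustrates the impulsive-control idea; the Nicholson–Bailey map and all parameter values below are generic stand-ins, not the paper's models.

        # Toy Nicholson-Bailey map with a pulsed IPM intervention every
        # T generations; all parameter values are invented stand-ins.
        import numpy as np

        r, a = 2.0, 0.05          # host growth rate, search efficiency
        H, P = 40.0, 10.0         # initial host / parasitoid densities
        T, kill, release = 5, 0.4, 5.0

        for t in range(1, 101):
            H, P = r * H * np.exp(-a * P), H * (1 - np.exp(-a * P))
            if t % T == 0:        # impulsive control event
                H *= 1 - kill     # proportionate killing of hosts
                P += release      # constant parasitoid release
        print(f"host density after 100 generations: {H:.2f}")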

  12. Cement/clay interactions: feedback on the increasing complexity of modeling assumptions

    International Nuclear Information System (INIS)

    Marty, Nicolas C.M.; Gaucher, Eric C.; Tournassat, Christophe; Gaboreau, Stephane; Vong, Chan Quang; Claret, F.; Munier, Isabelle; Cochepin, Benoit

    2012-01-01

    Document available in extended abstract form only. Cementitious materials will be widely used in the French concept of radioactive waste repositories. During their degradation over time, in contact with geological pore water, they will release hyper-alkaline fluids rich in calcium and alkaline cations. The chemical gradient likely to develop at cement/clay interfaces will induce geochemical transformations. The first simplified calculations, based mainly on simple mass balance, led to a very pessimistic understanding of the real expansion mechanism of the alkaline plume. However, the geochemical and migration processes are much more complex because of the dissolution of the barrier's accessory phases and the precipitation of secondary minerals. To describe and understand this complexity, coupled geochemistry and transport calculations are a useful and indeed mandatory tool. Furthermore, such models, when properly calibrated on experimental results, can give insights on larger time scales unreachable by experiments. For approximately 20 years, numerous papers have described the results of reactive transport modeling of cement/clay interactions with various numerical assumptions. For example, some authors selected a purely thermodynamic approach while others preferred a coupled thermodynamic/kinetic approach. Unfortunately, most of these studies used different, non-comparable parameters such as space discretization, initial and boundary conditions, thermodynamic databases, and clayey and cementitious materials. This study revisits the types of simulations proposed in the past to represent the effect of an alkaline perturbation, with regard to the degree of complexity that was considered. The main goal of the study is to perform simulations with a consistent set of data and increasing complexity. In doing so, the analysis of numerical results will give a clear vision of the key parameters driving the expansion of alteration fronts and

  13. Modelling of the quenching process in complex superconducting magnet systems

    International Nuclear Information System (INIS)

    Hagedorn, D.; Rodriguez-Mateos, F.

    1992-01-01

    This paper reports that the superconducting twin-bore dipole magnet for the proposed Large Hadron Collider (LHC) at CERN has a complex winding structure consisting of eight compact layers, each of them electromagnetically and thermally coupled with the others. This magnet is only one part of an electrical circuit; test and operation conditions are characterized by different circuits. In order to study the quenching process in this complex system, design adequate protection schemes, and provide a basis for the dimensioning of protection devices such as heaters, current breakers and dump resistors, a general simulation tool called QUABER has been developed using the analog system analysis program SABER. A complete set of electro-thermal models has been created for the propagation of normal regions. Any network extension or modification is easy to implement without rewriting the whole set of differential equations.

  14. Penalising Model Component Complexity: A Principled, Practical Approach to Constructing Priors

    KAUST Repository

    Simpson, Daniel

    2017-04-06

    In this paper, we introduce a new concept for constructing prior distributions. We exploit the natural nested structure inherent to many model components, which defines the model component to be a flexible extension of a base model. Proper priors are defined to penalise the complexity induced by deviating from the simpler base model and are formulated after the input of a user-defined scaling parameter for that model component, both in the univariate and the multivariate case. These priors are invariant to reparameterisations, have a natural connection to Jeffreys' priors, are designed to support Occam's razor and seem to have excellent robustness properties, all of which are highly desirable and allow us to use this approach to define default prior distributions. Through examples and theoretical results, we demonstrate the appropriateness of this approach and how it can be applied in various situations.

  15. Penalising Model Component Complexity: A Principled, Practical Approach to Constructing Priors

    KAUST Repository

    Simpson, Daniel; Rue, Haavard; Riebler, Andrea; Martins, Thiago G.; Sørbye, Sigrunn H.

    2017-01-01

    In this paper, we introduce a new concept for constructing prior distributions. We exploit the natural nested structure inherent to many model components, which defines the model component to be a flexible extension of a base model. Proper priors are defined to penalise the complexity induced by deviating from the simpler base model and are formulated after the input of a user-defined scaling parameter for that model component, both in the univariate and the multivariate case. These priors are invariant to reparameterisations, have a natural connection to Jeffreys' priors, are designed to support Occam's razor and seem to have excellent robustness properties, all of which are highly desirable and allow us to use this approach to define default prior distributions. Through examples and theoretical results, we demonstrate the appropriateness of this approach and how it can be applied in various situations.
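
    For one concrete case worked out in this framework, the penalised-complexity prior on the standard deviation of a Gaussian random effect (base model: the effect is absent) turns out to be exponential, with its rate fixed by a user-defined probability statement. A minimal sketch, with arbitrary example values for the scaling statement:

        # PC prior for the standard deviation sigma of a Gaussian random effect.
        # The user states P(sigma > u) = alpha; this implies an exponential prior.
        import numpy as np

        u, alpha = 1.0, 0.01        # "it is unlikely that sigma exceeds 1" (example values)
        lam = -np.log(alpha) / u    # exponential rate implied by the statement

        def pc_prior_density(sigma):
            return lam * np.exp(-lam * sigma)

        sig = np.linspace(0, 2, 5)
        print(dict(zip(sig.round(2), pc_prior_density(sig).round(3))))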

  16. Surface complexation modeling of Cu(II) adsorption on mixtures of hydrous ferric oxide and kaolinite

    Directory of Open Access Journals (Sweden)

    Schaller Melinda S

    2008-09-01

    Background: The application of surface complexation models (SCMs) to natural sediments and soils is hindered by a lack of consistent models and data for large suites of metals and minerals of interest. Furthermore, the surface complexation approach has mostly been developed and tested for single-solid systems; few studies have extended the SCM approach to systems containing multiple solids. Results: Cu adsorption was measured on pure hydrous ferric oxide (HFO), on pure kaolinite (from two sources), and in systems containing mixtures of HFO and kaolinite over a wide range of pH, ionic strength, sorbate/sorbent ratios and, for the mixed-solid systems, kaolinite/HFO ratios. The Cu adsorption data measured for the HFO and kaolinite systems were used to derive diffuse layer surface complexation models (DLMs) describing Cu adsorption. Cu adsorption on HFO is reasonably well described using a 1-site or 2-site DLM. Adsorption of Cu on kaolinite could be described using a simple 1-site DLM with formation of a monodentate Cu complex on a variable-charge surface site. However, for consistency with models derived for more weakly sorbing cations, a 2-site DLM with a variable-charge and a permanent-charge site was also developed. Conclusion: Component additivity predictions of speciation in mixed mineral systems, based on DLM parameters derived for the pure mineral systems, were in good agreement with measured data. Discrepancies between the model predictions and measured data were similar to those observed for the calibrated pure mineral systems. The results suggest that quantifying specific interactions between HFO and kaolinite in speciation models may not be necessary. However, before the component additivity approach can be applied to natural sediments and soils, the effects of aging must be further studied and methods must be developed to estimate reactive surface areas of solid constituents in natural samples.
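
    The component-additivity step can be illustrated with a deliberately simplified speciation balance: each mineral keeps the 1-site mass-action parameters fitted on the pure phase, and the mixture is predicted as the sum of independent sorbents. The sketch below omits the electrostatic (diffuse-layer) term of a real DLM, and all constants are hypothetical.

        # Toy "component additivity" sketch for Cu uptake on an HFO + kaolinite mixture.
        def sorbed(cu_free, K, site_total):
            """Monodentate >SOH + Cu2+ -> >SOCu+ at fixed pH; returns mol sorbed per L."""
            return site_total * K * cu_free / (1 + K * cu_free)

        def mixture_residual(cu_free, cu_total, phases):
            return cu_free + sum(sorbed(cu_free, K, S) for K, S in phases) - cu_total

        # (K [L/mol], site concentration [mol/L]) fitted to each pure mineral (hypothetical)
        phases = [(1e6, 1e-4),   # HFO-like strong sites
                  (1e4, 5e-4)]   # kaolinite-like weak sites
        cu_total = 1e-5

        # Bisection for the free Cu concentration that closes the mass balance
        lo, hi = 0.0, cu_total
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if mixture_residual(mid, cu_total, phases) < 0 else (lo, mid)
        print(f"free Cu = {mid:.3e} M, fraction sorbed = {1 - mid / cu_total:.2%}")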

  17. Complex Behavior in a Selective Aging Neuron Model Based on Small World Networks

    International Nuclear Information System (INIS)

    Zhang Guiqing; Chen Tianlun

    2008-01-01

    Complex behavior in a selective-aging simple neuron model based on small-world networks is investigated. The basic elements of the model are endowed with the main features of neuron function. The structure of the selective-aging neuron model is discussed. We also give some properties of the new network and find that the neuron model displays power-law behavior. If the brain network is a small-world-like network, the mean avalanche size is almost unchanged unless the aging parameter becomes large enough.

  18. Kinetic models of partially ionized complex plasmas in the low frequency regime

    International Nuclear Information System (INIS)

    Tolias, P.; Ratynskaia, S.; Angelis, U. de

    2011-01-01

    The results from three kinetic models of complex plasmas taking into account collisions with neutrals are compared in the low-frequency regime: the "full" model, which considers the absorption of plasma fluxes on dust particles and dust charge fluctuations; the "multi-component" model, where both of these effects are neglected; and the "standard" model, which takes into account the dust charge perturbations but not the absorption of fluxes. We derive and numerically evaluate expressions for the low-frequency responses of these models, also taking into account the modification of the capture cross-sections due to the effect of neutrals. The role of plasma sources and collisions with neutrals is assessed by computing the plasma permittivities and static permittivities for all three models.

  19. Effects of Task Performance and Task Complexity on the Validity of Computational Models of Attention

    NARCIS (Netherlands)

    Koning, L. de; Maanen, P.P. van; Dongen, K. van

    2008-01-01

    Computational models of attention can be used as a component of decision support systems. For accurate support, a computational model of attention has to be valid and robust. The effects of task performance and task complexity on the validity of three different computational models of attention were

  20. From complex spatial dynamics to simple Markov chain models: do predators and prey leave footprints?

    DEFF Research Database (Denmark)

    Nachman, Gøsta Støger; Borregaard, Michael Krabbe

    2010-01-01

    In this paper we present a concept for using presence-absence data to recover information on the population dynamics of predator-prey systems. We use a highly complex and spatially explicit simulation model of a predator-prey mite system to generate simple presence-absence data: the number ... The transitions from one state to another are then depicted in a state transition diagram, constituting the "footprints" of the underlying population dynamics. We investigate to what extent changes in the population processes modeled in the complex simulation (i.e. the predator's functional response and the dispersal rates of both ... of transition probabilities on state variables, and combine this information in a Markov chain transition matrix model. Finally, we use this extended model to predict the long-term dynamics of the system and to reveal its asymptotic steady state properties.
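
    The Markov-chain step lends itself to a compact sketch: estimate a transition matrix from a sequence of observed patch states and read off the asymptotic steady state as the stationary distribution. The four-state coding below (0 empty, 1 prey only, 2 predator only, 3 both) is an assumption for illustration, and the state sequence is random stand-in data.

        # Estimate a patch-state transition matrix from presence-absence snapshots
        # and extract its stationary (asymptotic steady state) distribution.
        import numpy as np

        def transition_matrix(states, n_states=4):
            T = np.zeros((n_states, n_states))
            for a, b in zip(states[:-1], states[1:]):
                T[a, b] += 1
            return T / T.sum(axis=1, keepdims=True)   # row-stochastic

        def stationary(T):
            w, v = np.linalg.eig(T.T)                 # left eigenvector for eigenvalue 1
            pi = np.real(v[:, np.argmax(np.real(w))])
            return pi / pi.sum()

        rng = np.random.default_rng(1)
        states = rng.integers(0, 4, size=500)         # stand-in for observed patch states
        T = transition_matrix(states)
        print(T.round(2), stationary(T).round(3), sep="\n")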

  1. A consensus for the development of a vector model to assess clinical complexity.

    Science.gov (United States)

    Corazza, Gino Roberto; Klersy, Catherine; Formagnana, Pietro; Lenti, Marco Vincenzo; Padula, Donatella

    2017-12-01

    The progressive rise in multimorbidity has made the management of complex patients one of the most topical and challenging issues in medicine, both in clinical practice and for healthcare organizations. To make this easier, a score of clinical complexity (CC) would be useful. A vector model to evaluate the biological and extra-biological (socio-economic, cultural, behavioural, environmental) domains of CC was proposed a few years ago. However, given that the variables that grade each domain had never been defined, this model has never been used in clinical practice. To overcome these limitations, a consensus meeting was organised to grade each domain of CC and to establish the hierarchy of the domains. A one-day consensus meeting involving a multi-professional panel of 25 people was held at our hospital. In a preliminary phase, the proponents selected seven variables as qualifiers for each of the five above-mentioned domains. In the course of the meeting, the panel voted for the five variables considered to be the most representative of each domain. Consensus was established with two-thirds agreement, and all variables were dichotomised. Finally, the various domains were parametrized and ranked within a feasible vector model. A Clinical Complexity Index was set up using the chosen variables. All the domains were graphically represented through a vector model: the biological domain was chosen as the most significant (highest slope), followed by the behavioural and socio-economic domains (intermediate slope), and lastly by the cultural and environmental ones (lowest slope). A feasible and comprehensive tool to evaluate CC in clinical practice is proposed herein.
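
    A minimal sketch of how such an index could be computed from the vector model: five domains, each graded by five dichotomous qualifiers and weighted according to the slope hierarchy chosen by the panel. All weights and variable values below are hypothetical, not those adopted at the meeting.

        # Toy Clinical Complexity Index: weighted sum of dichotomous qualifiers per domain.
        domains = {                      # domain: (weight, five 0/1 qualifiers)
            "biological":     (3, [1, 1, 0, 1, 0]),
            "behavioural":    (2, [0, 1, 0, 0, 0]),
            "socio-economic": (2, [1, 0, 0, 0, 0]),
            "cultural":       (1, [0, 0, 0, 0, 0]),
            "environmental":  (1, [0, 1, 0, 0, 0]),
        }
        cci = sum(w * sum(vals) for w, vals in domains.values())
        print("Clinical Complexity Index =", cci)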

  2. General classical solutions of the complex Grassmannian and CP^(N-1) sigma models

    International Nuclear Information System (INIS)

    Sasaki, Ryu.

    1983-05-01

    General classical solutions are constructed for the complex Grassmannian non-linear sigma models in two euclidean dimensions in terms of holomorphic functions. The Grassmannian sigma models are a simple generalization of the well-known CP^(N-1) model in two dimensions and they share various interesting properties: the existence of (anti-)instantons, an infinite number of conserved quantities, and complete integrability. (author)

  3. Modeling Complex Workflow in Molecular Diagnostics

    Science.gov (United States)

    Gomah, Mohamed E.; Turley, James P.; Lu, Huimin; Jones, Dan

    2010-01-01

    One of the hurdles to achieving personalized medicine has been implementing the laboratory processes for performing and reporting complex molecular tests. The rapidly changing test rosters and complex analysis platforms in molecular diagnostics have meant that many clinical laboratories still use labor-intensive manual processing and testing without the level of automation seen in high-volume chemistry and hematology testing. We provide here a discussion of design requirements and the results of implementation of a suite of lab management tools that incorporate the many elements required for use of molecular diagnostics in personalized medicine, particularly in cancer. These applications provide the functionality required for sample accessioning and tracking, material generation, and testing that are particular to the evolving needs of individualized molecular diagnostics. On implementation, the applications described here resulted in improvements in the turn-around time for reporting of more complex molecular test sets, and significant changes in the workflow. Therefore, careful mapping of workflow can permit design of software applications that simplify even the complex demands of specialized molecular testing. By incorporating design features for order review, software tools can permit a more personalized approach to sample handling and test selection without compromising efficiency. PMID:20007844

  4. Research Area 3: Mathematics (3.1 Modeling of Complex Systems)

    Science.gov (United States)

    2017-10-31

    Title: RESEARCH AREA 3: MATHEMATICS (3.1 Modeling of Complex Systems). Proposals should be directed to Dr. John Lavery. ... values of the profile characteristics taken by the users), intersection (they represent the relationship between ... accuracy, especially when adding fully connected layers at the end of the network. This work has resulted in the writing of a manuscript for the Journal

  5. Modeling the Effect of APC Truncation on Destruction Complex Function in Colorectal Cancer Cells

    Science.gov (United States)

    Barua, Dipak; Hlavacek, William S.

    2013-01-01

    In colorectal cancer cells, APC, a tumor suppressor protein, is commonly expressed in truncated form. Truncation of APC is believed to disrupt degradation of β-catenin, which is regulated by a multiprotein complex called the destruction complex. The destruction complex comprises APC, Axin, β-catenin, serine/threonine kinases, and other proteins. The kinases CK1α and GSK3β, which are recruited by Axin, mediate phosphorylation of β-catenin, which initiates its ubiquitination and proteasomal degradation. The mechanism of regulation of β-catenin degradation by the destruction complex and the role of truncation of APC in colorectal cancer are not entirely understood. Through formulation and analysis of a rule-based computational model, we investigated the regulation of β-catenin phosphorylation and degradation by APC and the effect of APC truncation on the function of the destruction complex. The model integrates available mechanistic knowledge about site-specific interactions and phosphorylation of destruction complex components and is consistent with an array of published data. We find that the phosphorylated truncated form of APC can outcompete Axin for binding to β-catenin, provided that Axin is limiting, and thereby sequester β-catenin away from Axin and the Axin-recruited kinases. Full-length APC also competes with Axin for binding to β-catenin; however, full-length APC is able, through its SAMP repeats, which bind Axin and which are missing in truncated oncogenic forms of APC, to bring β-catenin into indirect association with Axin and the Axin-recruited kinases. Because our model indicates that the positive effects of truncated APC on β-catenin levels depend on phosphorylation of APC at the first 20-amino acid repeat, and because phosphorylation of this site is mediated by CK1ε, we suggest that CK1ε is a potential target for therapeutic intervention in colorectal cancer. Specific inhibition of CK1ε is predicted to limit binding of β-catenin to truncated

  6. Physical approach to air pollution climatological modelling in a complex site

    Energy Technology Data Exchange (ETDEWEB)

    Bonino, G. (Università di Torino and CNR, Istituto di Cosmo-Geofisica, Turin, Italy); Longhetto, A. (Ente Nazionale per l'Energia Elettrica, Centro di Ricerca Termica e Nucleare, Milan, and CNR, Istituto di Cosmo-Geofisica, Turin, Italy); Runca, E. (International Institute for Applied Systems Analysis, Laxenburg, Austria)

    1980-09-01

    A Gaussian climatological model which takes into account physical factors affecting air pollutant dispersion, such as nocturnal radiative inversion and mixing height evolution associated with land-breeze and sea-breeze regimes, has been applied to the topographically complex area of La Spezia. Measurements of the dynamic and thermodynamic structure of the lower atmosphere obtained in field experiments are used in the model to calculate the SO2 seasonal average concentrations. The model has been tested on eight three-month periods by comparing the simulated values with those measured at the SO2 stations of the local air pollution monitoring network. The comparison of simulated and measured values was satisfactory and proved the applicability of the model for urban planning and the establishment of air quality strategies.

  7. A dynamic globalization model for large eddy simulation of complex turbulent flow

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Hae Cheon; Park, No Ma; Kim, Jin Seok [Seoul National Univ., Seoul (Korea, Republic of)

    2005-07-01

    A dynamic subgrid-scale model is proposed for large eddy simulation of turbulent flows in complex geometry. The eddy-viscosity model of Vreman [Phys. Fluids, 16, 3670 (2004)] is considered as the base model. A priori tests with the original Vreman model show that it predicts the correct profile of subgrid-scale dissipation in turbulent channel flow, but the optimal model coefficient is far from universal. Dynamic procedures for determining the model coefficient are proposed based on the 'global equilibrium' between the subgrid-scale dissipation and the viscous dissipation. An important feature of the proposed procedures is that the model coefficient so determined is globally constant in space and varies only in time. Large eddy simulations with the present dynamic model are conducted for forced isotropic turbulence, turbulent channel flow and flow over a sphere, showing excellent agreement with previous results.
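
    The paper's procedure enforces a global balance between subgrid-scale and viscous dissipation; the sketch below illustrates the adjacent, simpler idea of globally averaging the numerator and denominator of a dynamic least-squares coefficient, so that the coefficient is constant in space and varies only in time. The L and M tensor fields here are random placeholders, not flow data.

        # Pointwise vs. globally averaged dynamic model coefficient.
        import numpy as np

        rng = np.random.default_rng(0)
        L = rng.normal(size=(32, 32, 32, 3, 3))   # placeholder Leonard-type tensor field
        M = rng.normal(size=(32, 32, 32, 3, 3))   # placeholder model tensor field

        # Local least-squares coefficient: varies strongly in space
        C_local = np.sum(L * M, axis=(-2, -1)) / np.sum(M * M, axis=(-2, -1))
        # Global coefficient: one value per time step, constant over the domain
        C_global = np.sum(L * M) / np.sum(M * M)
        print(C_local.std(), C_global)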

  8. Genetic complexity in a Drosophila model of diabetes-associated misfolded human proinsulin.

    Science.gov (United States)

    Park, Soo-Young; Ludwig, Michael Z; Tamarina, Natalia A; He, Bin Z; Carl, Sarah H; Dickerson, Desiree A; Barse, Levi; Arun, Bharath; Williams, Calvin L; Miles, Cecelia M; Philipson, Louis H; Steiner, Donald F; Bell, Graeme I; Kreitman, Martin

    2014-02-01

    Drosophila melanogaster has been widely used as a model of human Mendelian disease, but its value in modeling complex disease has received little attention. Fly models of complex disease would enable high-resolution mapping of disease-modifying loci and the identification of novel targets for therapeutic intervention. Here, we describe a fly model of permanent neonatal diabetes mellitus and explore the complexity of this model. The approach involves the transgenic expression of a misfolded mutant of human preproinsulin, hINS(C96Y), which is a cause of permanent neonatal diabetes. When expressed in fly imaginal discs, hINS(C96Y) causes a reduction of adult structures, including the eye, wing, and notum. Eye imaginal discs exhibit defects in both the structure and the arrangement of ommatidia. In the wing, expression of hINS(C96Y) leads to ectopic expression of veins and mechano-sensory organs, indicating disruption of wild-type signaling processes regulating cell fates. These readily measurable "disease" phenotypes are sensitive to temperature, gene dose, and sex. Mutant (but not wild-type) proinsulin expression in the eye imaginal disc induces IRE1-mediated XBP1 alternative splicing, a signal for endoplasmic reticulum stress response activation, and produces global change in gene expression. Mutant hINS transgene tester strains, when crossed to stocks from the Drosophila Genetic Reference Panel, produce F1 adults with a continuous range of disease phenotypes and large broad-sense heritability. Surprisingly, the severity of mutant hINS-induced disease in the eye is not correlated with that in the notum in these crosses, nor with eye reduction phenotypes caused by the expression of two dominant eye mutants acting in two different eye development pathways, Drop (Dr) or Lobe (L), when crossed into the same genetic backgrounds. The tissue specificity of genetic variability for mutant hINS-induced disease has, therefore, its own distinct signature. The genetic dominance

  9. Modeling Cu2+-Aβ complexes from computational approaches

    Energy Technology Data Exchange (ETDEWEB)

    Alí-Torres, Jorge [Departamento de Química, Universidad Nacional de Colombia- Sede Bogotá, 111321 (Colombia); Mirats, Andrea; Maréchal, Jean-Didier; Rodríguez-Santiago, Luis; Sodupe, Mariona, E-mail: Mariona.Sodupe@uab.cat [Departament de Química, Universitat Autònoma de Barcelona, 08193 Bellaterra, Barcelona (Spain)

    2015-09-15

    Amyloid plaque formation and oxidative stress are two key events in the pathology of Alzheimer's disease (AD), in which metal cations have been shown to play an important role. In particular, the interaction of the redox-active Cu2+ metal cation with Aβ has been found to interfere in amyloid aggregation and to lead to reactive oxygen species (ROS). A detailed knowledge of the electronic and molecular structure of Cu2+-Aβ complexes is thus important to get a better understanding of the role of these complexes in the development and progression of AD. The computational treatment of these systems requires a combination of several available computational methodologies, because two fundamental aspects have to be addressed: the metal coordination sphere and the conformation adopted by the peptide upon copper binding. In this paper we review the main computational strategies used to deal with Cu2+-Aβ coordination and to build plausible Cu2+-Aβ models that will afterwards allow determining physicochemical properties of interest, such as their redox potential.

  10. Sparkle/PM3 for the modeling of europium(III), gadolinium(III), and terbium(III) complexes

    International Nuclear Information System (INIS)

    Freire, Ricardo O.; Rocha, Gerd B.; Simas, Alfredo M.

    2009-01-01

    The Sparkle/PM3 model is extended to europium(III), gadolinium(III), and terbium(III) complexes. The validation procedure was carried out using only high-quality crystallographic structures, for a total of ninety-six Eu(III) complexes, seventy Gd(III) complexes, and forty-two Tb(III) complexes. The Sparkle/PM3 unsigned mean error, for all interatomic distances between the trivalent lanthanide ion and the ligand atoms of the first coordination sphere, is 0.080 Å for Eu(III), 0.063 Å for Gd(III), and 0.070 Å for Tb(III). These figures are similar to the Sparkle/AM1 ones of 0.082 Å, 0.061 Å, and 0.068 Å, respectively, indicating that they are all comparable parameterizations. Moreover, their accuracy is similar to what can be obtained by present-day ab initio effective core potential full geometry optimization calculations on such lanthanide complexes. Finally, we report a preliminary attempt to show that Sparkle/PM3 geometry predictions are reliable. For one of the Eu(III) complexes, BAFZEO, we created hundreds of different input geometries by randomly varying the distances and angles of the ligands to the central Eu(III) ion, which were all subsequently fully optimized. A significant trend was unveiled, indicating that more accurate local-minimum geometries cluster at lower total energies, thus reinforcing the validity of sparkle model calculations. (author)

  11. The Modeling and Complexity of Dynamical Systems by Means of Computation and Information Theories

    Directory of Open Access Journals (Sweden)

    Robert Logozar

    2011-12-01

    We present the modeling of dynamical systems and the finding of their complexity indicators by the use of concepts from computation and information theories, within the framework of J. P. Crutchfield's theory of ε-machines. A short formal outline of the ε-machines is given. In this approach, dynamical systems are analyzed directly from the time series that is received from a properly adjusted measuring instrument. The binary strings are parsed through a parse tree, within which morphologically and probabilistically unique subtrees, or morphs, are recognized as system states. An outline and the precise interrelation of the information-theoretic entropies and complexities emanating from the model are given. The paper also serves as a theoretical foundation for the future presentation of the DSA program that implements ε-machine modeling up to the stochastic finite automata level.
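
    The morph idea can be caricatured in a few lines: histories of a binary series are grouped into candidate states whenever their estimated next-symbol distributions agree. Real ε-machine reconstruction works on a parse tree with proper statistical tests; the sketch below is a bare-bones approximation, and the history length and tolerance are hypothetical parameters.

        # Rough approximation of causal-state grouping for a binary series.
        from collections import defaultdict

        def causal_states(s, h=3, tol=0.1):
            counts = defaultdict(lambda: [0, 0])       # history -> [#next=0, #next=1]
            for i in range(h, len(s)):
                counts[s[i-h:i]][int(s[i])] += 1
            states = []                                # list of [p(next=1), histories]
            for hist, (n0, n1) in counts.items():
                p1 = n1 / (n0 + n1)
                for state in states:
                    if abs(state[0] - p1) < tol:       # same predictive "morph"
                        state[1].append(hist)
                        break
                else:
                    states.append([p1, [hist]])
            return states

        series = "01" * 200 + "0011" * 100             # toy binary measurement string
        for p1, hists in causal_states(series):
            print(f"P(next=1) ~ {p1:.2f}  histories: {sorted(hists)}")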

  12. Atomic level insights into realistic molecular models of dendrimer-drug complexes through MD simulations

    Science.gov (United States)

    Jain, Vaibhav; Maiti, Prabal K.; Bharatam, Prasad V.

    2016-09-01

    Computational studies of dendrimer-drug complexes usually consider 1:1 stoichiometry, which is far from reality, since in experiments a larger number of drug molecules is encapsulated inside a dendrimer. In the present study, molecular dynamics (MD) simulations were employed to characterize more realistic molecular models of dendrimer-drug complexes (1:n stoichiometry), in order to understand the effect of high drug loading on the structural properties and to unveil atomistic-level details. For this purpose, possible inclusion complexes of the model drug Nateglinide (Ntg) (an antidiabetic belonging to Biopharmaceutics Classification System class II) with amine- and acetyl-terminated G4 poly(amidoamine) (G4 PAMAM(NH2) and G4 PAMAM(Ac)) dendrimers at neutral and low pH conditions are explored in this work. MD simulation analysis of the dendrimer-drug complexes revealed that the drug encapsulation efficiency of the G4 PAMAM(NH2) and G4 PAMAM(Ac) dendrimers at neutral pH was 6 and 5, respectively, while at low pH it was 12 and 13, respectively. Center-of-mass distance analysis showed that most of the drug molecules are located in the interior hydrophobic pockets of G4 PAMAM(NH2) at both pH conditions, while in the case of G4 PAMAM(Ac) most of them are distributed near the surface at neutral pH and in the interior hydrophobic pockets at low pH. Structural properties such as the radius of gyration, shape, radial density distribution, and solvent accessible surface area of the dendrimer-drug complexes were also assessed and compared with those of the unloaded dendrimers. Further, binding energy calculations using the molecular mechanics Poisson-Boltzmann surface area approach revealed that the location of drug molecules in the dendrimer is not the decisive factor for the higher or lower binding affinity of the complex; rather, the charged state of dendrimer and drug, intermolecular interactions, pH-induced conformational changes, and surface groups of dendrimer do play an

  13. Analysis of undergraduate students' conceptual models of a complex biological system across a diverse body of learners

    Science.gov (United States)

    Dirnbeck, Matthew R.

    Biological systems pose a challenge both for learners and teachers because they are complex systems mediated by feedback loops; networks of cause-effect relationships; and non-linear, hierarchical, and emergent properties. Teachers and scientists routinely use models to communicate ideas about complex systems. Model-based pedagogies engage students in model construction as a means of practicing higher-order reasoning skills. One such modeling paradigm describes systems in terms of their structures, behaviors, and functions (SBF). The SBF framework is a simple modeling language that has been used to teach about complex biological systems. Here, we used student-generated SBF models to assess students' causal reasoning in the context of a novel biological problem on an exam. We compared students' performance on the modeling problem, their performance on a set of knowledge/comprehension questions, and their performance on a set of scientific reasoning questions. We found that students who performed well on knowledge and understanding questions also constructed more networked, higher quality models. Previous studies have shown that learners' mental maps increase in complexity with increased expertise. We wanted to investigate if biology students with varying levels of training in biology showed a similar pattern when constructing system models. In a pilot study, we administered the same modeling problem to two additional groups of students: 1) an animal physiology course for students pursuing a major in biology (n=37) and 2) an exercise physiology course for non-majors (n=27). We found that there was no significant difference in model organization across the three student populations, but there was a significant difference in the ability to represent function between the three populations. Between the three groups the non-majors had the lowest function scores, the introductory majors had the middle function scores, and the upper division majors had the highest function

  14. Chromate Adsorption on Selected Soil Minerals: Surface Complexation Modeling Coupled with Spectroscopic Investigation.

    Czech Academy of Sciences Publication Activity Database

    Veselská, V.; Fajgar, Radek; Číhalová, S.; Bolanz, R.M.; Göttlicher, J.; Steininger, R.; Siddique, J.A.; Komárek, M.

    2016-01-01

    Vol. 318, Nov 15 (2016), pp. 433-442. ISSN 0304-3894. Institutional support: RVO:67985858. Keywords: surface complexation modeling * chromate * soil minerals. Subject RIV: CF - Physical; Theoretical Chemistry. Impact factor: 6.065, year: 2016

  15. Determination of timescales of nitrate contamination by groundwater age models in a complex aquifer system

    Science.gov (United States)

    Koh, E. H.; Lee, E.; Kaown, D.; Lee, K. K.; Green, C. T.

    2017-12-01

    The timing and magnitude of nitrate contamination are determined by various factors such as contaminant loading, recharge characteristics, and the geologic system. The elapsed time since recharge for water traveling to a given outlet location, defined as groundwater age, can provide indirect information on the hydrologic characteristics of the aquifer system. There are three major methods for estimating groundwater age (apparent ages, lumped parameter models, and numerical models), which characterize differently the groundwater mixing that results from the various flow pathways in a heterogeneous aquifer system. Therefore, in this study, we compared the three age modeling approaches in a complex aquifer system using observed age tracer data and a nitrate contamination history reconstructed from long-term source loading. The 3H-3He and CFC-12 apparent ages, which do not account for groundwater mixing, yielded the most delayed response times, implying that the peak of the nitrate loading had not yet arrived. The lumped parameter model, however, generated a more recent loading response than the apparent ages, with the peak loading period already influencing water quality. The numerical model could delineate the various groundwater mixing components and their different impacts on nitrate dynamics in the complex aquifer system. The different age estimation methods thus lead to variations in the estimated contaminant loading history, with the discrepancies among age estimates most pronounced in the complex aquifer system.
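
    A lumped parameter model of the kind compared above can be sketched as a convolution of the loading history with a transit-time distribution. The example below uses an exponential distribution with mean age tau; the loading curve and tau are synthetic stand-ins, not values from the study.

        # Lumped-parameter age model: output = input history (*) transit-time distribution.
        import numpy as np

        years = np.arange(1950, 2021)
        nitrate_in = np.clip((years - 1960) * 0.5, 0, 25)  # synthetic loading history
        tau = 15.0                                         # mean residence time, years

        age = np.arange(len(years))
        g = np.exp(-age / tau) / tau                       # exponential TTD, yearly steps
        g /= g.sum()                                       # normalize the discretized TTD

        # output(t) = sum over ages a of input(t - a) * g(a)  (causal convolution)
        nitrate_out = np.convolve(nitrate_in, g)[:len(years)]
        print(f"input (last year) = {nitrate_in[-1]:.1f}, lagged output = {nitrate_out[-1]:.1f}")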

  16. A model based bayesian solution for characterization of complex damage scenarios in aerospace composite structures.

    Science.gov (United States)

    Reed, H; Leckey, Cara A C; Dick, A; Harvey, G; Dobson, J

    2018-01-01

    Ultrasonic damage detection and characterization is commonly used in nondestructive evaluation (NDE) of aerospace composite components. In recent years there has been increased development of guided-wave-based methods. In real materials and structures, these dispersive waves exhibit complicated behavior in the presence of complex damage scenarios. Model-based characterization methods utilize accurate three-dimensional finite element models (FEMs) of guided-wave interaction with realistic damage scenarios to aid in defect identification and classification. This work describes an inverse solution for realistic composite damage characterization by comparing the wavenumber-frequency spectra of experimental and simulated ultrasonic inspections. The composite laminate material properties are first verified through a Bayesian solution (Markov chain Monte Carlo), enabling uncertainty quantification surrounding the characterization. A study is undertaken to assess the efficacy of the proposed damage model and the comparative metrics between the experimental and simulated output. The FEM is then parameterized with a damage model capable of describing the typical complex damage created by impact events in composites. The damage is characterized through a transdimensional Markov chain Monte Carlo solution, enabling a flexible damage model capable of adapting to the complex damage geometry investigated here. The posterior probability distributions of the individual delamination petals as well as the overall envelope of the damage site are determined. Copyright © 2017 Elsevier B.V. All rights reserved.
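
    The Bayesian calibration step can be sketched with a bare-bones Metropolis-Hastings sampler. The forward model below is a trivial stand-in (a straight line) for the paper's 3D finite element guided-wave simulations; the prior, noise level and proposal width are illustrative assumptions.

        # Minimal Metropolis-Hastings sampler for a single model parameter.
        import numpy as np

        rng = np.random.default_rng(0)
        theta_true = 2.5
        x = np.linspace(0, 1, 50)
        data = theta_true * x + rng.normal(0, 0.1, 50)     # synthetic "measurement"

        def log_post(theta, sigma=0.1):
            if not (0 < theta < 10):                       # flat prior on (0, 10)
                return -np.inf
            return -0.5 * np.sum((data - theta * x) ** 2) / sigma**2

        theta, samples = 1.0, []
        for _ in range(5000):
            prop = theta + rng.normal(0, 0.2)              # random-walk proposal
            if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
                theta = prop                               # accept
            samples.append(theta)
        post = np.array(samples[1000:])                    # discard burn-in
        print(f"posterior mean = {post.mean():.3f} +/- {post.std():.3f}")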

  17. The maintenance management framework models and methods for complex systems maintenance

    CERN Document Server

    Crespo Márquez, Adolfo

    2010-01-01

    “The Maintenance Management Framework” describes and reviews the concept, process and framework of modern maintenance management of complex systems, concentrating specifically on modern modelling tools (deterministic and empirical) for maintenance planning and scheduling. It will be of interest to engineers and professionals involved in maintenance management, maintenance engineering, operations management, quality, etc., as well as to graduate students and researchers in this field.

  18. Modeling Increased Complexity and the Reliance on Automation: FLightdeck Automation Problems (FLAP) Model

    Science.gov (United States)

    Ancel, Ersin; Shih, Ann T.

    2014-01-01

    This paper highlights the development of a model that is focused on the safety issue of increasing complexity and reliance on automation systems in transport category aircraft. Recent statistics show an increase in mishaps related to manual handling and automation errors due to pilot complacency and over-reliance on automation, loss of situational awareness, automation system failures and/or pilot deficiencies. Consequently, the aircraft can enter a state outside the flight envelope and/or air traffic safety margins which potentially can lead to loss-of-control (LOC), controlled-flight-into-terrain (CFIT), or runway excursion/confusion accidents, etc. The goal of this modeling effort is to provide NASA's Aviation Safety Program (AvSP) with a platform capable of assessing the impacts of AvSP technologies and products towards reducing the relative risk of automation related accidents and incidents. In order to do so, a generic framework, capable of mapping both latent and active causal factors leading to automation errors, is developed. Next, the framework is converted into a Bayesian Belief Network model and populated with data gathered from Subject Matter Experts (SMEs). With the insertion of technologies and products, the model provides individual and collective risk reduction acquired by technologies and methodologies developed within AvSP.
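
    The Bayesian Belief Network idea reduces, in its smallest form, to propagating conditional probabilities between causal factors. The two-node sketch below (a latent over-reliance factor and an automation-error outcome) uses made-up conditional probability table entries; the real FLAP model is far larger and is populated from SME elicitation.

        # Two-node Bayesian-network sketch: marginal and posterior by Bayes' rule.
        p_reliance = 0.30                          # P(over-reliance on automation)
        p_err_given = {True: 0.20, False: 0.02}    # P(automation error | reliance)

        p_error = sum(p_err_given[r] * (p_reliance if r else 1 - p_reliance)
                      for r in (True, False))
        p_rel_given_error = p_err_given[True] * p_reliance / p_error
        print(f"P(error) = {p_error:.3f}, P(reliance | error) = {p_rel_given_error:.3f}")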

  19. A software complex intended for constructing applied models and meta-models on the basis of mathematical programming principles

    Directory of Open Access Journals (Sweden)

    Михаил Юрьевич Чернышов

    2013-12-01

    A software complex (SC) elaborated by the authors on the basis of the language LMPL is described; it is a software tool intended for the synthesis of applied software models and meta-models constructed on the basis of mathematical programming (MP) principles. LMPL provides an explicit declarative representation of MP models, presumes automatic construction and transformation of models, and offers the capability of adding external software packages. The following versions of the SC have been implemented: (1) a SC intended for representing the process of choosing an optimal hydroelectric power plant model (on the principles of meta-modeling), and (2) a SC intended for representing the logic-sense relations between the models of a set of discourse formations in the discourse meta-model.

  20. Coupled variable selection for regression modeling of complex treatment patterns in a clinical cancer registry.

    Science.gov (United States)

    Schmidtmann, I; Elsäßer, A; Weinmann, A; Binder, H

    2014-12-30

    For determining a manageable set of covariates potentially influential with respect to a time-to-event endpoint, Cox proportional hazards models can be combined with variable selection techniques, such as stepwise forward selection or backward elimination based on p-values, or regularized regression techniques such as component-wise boosting. Cox regression models have also been adapted for dealing with more complex event patterns, for example, for competing risks settings with separate, cause-specific hazard models for each event type, or for determining the prognostic effect pattern of a variable over different landmark times, with one conditional survival model for each landmark. Motivated by a clinical cancer registry application, where complex event patterns have to be dealt with and variable selection is needed at the same time, we propose a general approach for linking variable selection between several Cox models. Specifically, we combine score statistics for each covariate across models by Fisher's method as a basis for variable selection. This principle is implemented for a stepwise forward selection approach as well as for a regularized regression technique. In an application to data from hepatocellular carcinoma patients, the coupled stepwise approach is seen to facilitate joint interpretation of the different cause-specific Cox models. In conditional survival models at landmark times, which address updates of prediction as time progresses and both treatment and other potential explanatory variables may change, the coupled regularized regression approach identifies potentially important, stably selected covariates together with their effect time pattern, despite having only a small number of events. These results highlight the promise of the proposed approach for coupling variable selection between Cox models, which is particularly relevant for modeling for clinical cancer registries with their complex event patterns. Copyright © 2014 John Wiley & Sons
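
    The coupling device itself, combining per-model evidence for one covariate with Fisher's method, is easy to sketch. The p-values below are placeholders for the score-test p-values of a single covariate across, say, cause-specific Cox models.

        # Fisher's method for combining independent p-values across models.
        import numpy as np
        from scipy.stats import chi2

        def fisher_combine(pvals):
            """-2 * sum(log p) ~ chi-square with 2k degrees of freedom under H0."""
            stat = -2 * np.sum(np.log(pvals))
            return chi2.sf(stat, df=2 * len(pvals))

        p_per_model = [0.04, 0.20, 0.01]   # covariate's p-value in each Cox model
        print(f"combined p = {fisher_combine(p_per_model):.4f}")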