Bickel, D R; West, B J
1998-11-01
A fractal renewal point process (FRPP) is used to model molecular evolution in agreement with the relationship between the variance and the mean numbers of nonsynonymous and synonymous substitutions in mammals. Like other episodic models such as the doubly stochastic Poisson process, this model accounts for the large variances observed in amino acid substitution rates, but unlike certain other episodic models, it also accounts for the increase in the index of dispersion with the mean number of substitutions in Ohta's (1995) data. We find that this correlation is significant for nonsynonymous substitutions at the 1% level and for synonymous substitutions at the 10% level, even after removing lineage effects and when using Bulmer's (1989) unbiased estimator of the index of dispersion. This model is simpler than most other overdispersed models of evolution in the sense that it is fully specified by a single interevent probability distribution. Interpretations in terms of chaotic dynamics and in terms of chance and selection are discussed.
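The variance-to-mean comparison at the heart of this abstract can be illustrated with a toy simulation: draw heavy-tailed (Pareto) interevent times, bin the events into windows, and compute the index of dispersion. This is only a hedged sketch of a fractal renewal process, not the authors' model; all parameter values are illustrative.

```python
import random

def frpp_counts(alpha=1.5, n_windows=200, window=50.0, seed=1):
    """Simulate a renewal point process with Pareto interevent times
    (heavy-tailed, infinite variance for alpha < 2) and count events
    falling in each of n_windows consecutive windows."""
    rng = random.Random(seed)
    t, t_end = 0.0, n_windows * window
    counts = [0] * n_windows
    while True:
        t += rng.paretovariate(alpha)  # heavy-tailed waiting time
        if t >= t_end:
            break
        counts[int(t // window)] += 1
    return counts

def index_of_dispersion(counts):
    """Variance-to-mean ratio of the counts (1 for a Poisson process)."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return var / mean

counts = frpp_counts()
R = index_of_dispersion(counts)
print(round(R, 2))  # heavy-tailed gaps cluster events, so R is typically well above 1
```

A Poisson process would give R near 1; the clustering produced by heavy-tailed gaps is what drives the overdispersion the abstract describes.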
Yan, Na; Mountney, Nigel P.; Colombera, Luca; Dorrell, Robert M.
2017-08-01
Although fundamental types of fluvial meander-bend transformations - expansion, translation, rotation, and combinations thereof - are widely recognised, the relationship between the migratory behaviour of a meander bend, and its resultant accumulated sedimentary architecture and lithofacies distribution remains relatively poorly understood. Three-dimensional data from both currently active fluvial systems and from ancient preserved successions known from outcrop and subsurface settings are limited. To tackle this problem, a 3D numerical forward stratigraphic model - the Point-Bar Sedimentary Architecture Numerical Deduction (PB-SAND) - has been devised as a tool for the reconstruction and prediction of the complex spatio-temporal migratory evolution of fluvial meanders, their generated bar forms and the associated lithofacies distributions that accumulate as heterogeneous fluvial successions. PB-SAND uses a dominantly geometric modelling approach supplemented by process-based and stochastic model components, and is constrained by quantified sedimentological data derived from modern point bars or ancient successions that represent suitable analogues. The model predicts the internal architecture and geometry of fluvial point-bar elements in three dimensions. The model is applied to predict the sedimentary lithofacies architecture of ancient preserved point-bar and counter-point-bar deposits of the middle Jurassic Scalby Formation (North Yorkshire, UK) to demonstrate the predictive capabilities of PB-SAND in modelling 3D architectures of different types of meander-bend transformations. PB-SAND serves as a practical tool with which to predict heterogeneity in subsurface hydrocarbon reservoirs and water aquifers.
Evolution of disturbances in stagnation point flow
Criminale, William O.; Jackson, Thomas L.; Lasseigne, D. Glenn
1993-01-01
The evolution of three-dimensional disturbances in an incompressible three-dimensional stagnation-point flow of an inviscid fluid is investigated. Since it is not possible to apply classical normal-mode analysis to the disturbance equations for the fully three-dimensional stagnation-point flow to obtain solutions, an initial-value problem is solved instead. The evolution of the disturbances provides the necessary information to determine stability, and indeed the complete transient as well. It is found that, when considering the disturbance energy, the planar stagnation-point flow, which is independent of one of the transverse coordinates, represents a neutrally stable flow, whereas the fully three-dimensional flow is either stable or unstable, depending on whether the flow is away from or towards the stagnation point in the transverse direction that is neglected in the planar case.
Model Breaking Points Conceptualized
Vig, Rozy; Murray, Eileen; Star, Jon R.
2014-01-01
Current curriculum initiatives (e.g., National Governors Association Center for Best Practices and Council of Chief State School Officers 2010) advocate that models be used in the mathematics classroom. However, despite their apparent promise, there comes a point when models break, a point in the mathematical problem space where the model cannot…
Energy Technology Data Exchange (ETDEWEB)
Meszaros, Sz.; Allende Prieto, C.; De Vicente, A. [Instituto de Astrofisica de Canarias (IAC), E-38200 La Laguna, Tenerife (Spain); Edvardsson, B.; Gustafsson, B. [Department of Physics and Astronomy, Division of Astronomy and Space Physics, Box 515, SE-751 20 Uppsala (Sweden); Castelli, F. [Istituto Nazionale di Astrofisica, Osservatorio Astronomico di Trieste, via Tiepolo 11, I-34143 Trieste (Italy); Garcia Perez, A. E.; Majewski, S. R. [Department of Astronomy, University of Virginia, P.O. Box 400325, Charlottesville, VA 22904-4325 (United States); Plez, B. [Laboratoire Univers et Particules de Montpellier, Universite Montpellier 2, CNRS, F-34095 Montpellier (France); Schiavon, R. [Gemini Observatory, 670 North A'ohoku Place, Hilo, HI 96720 (United States); Shetrone, M. [McDonald Observatory, University of Texas, Austin, TX 78712 (United States)
2012-10-01
We present a new grid of model photospheres for the SDSS-III/APOGEE survey of stellar populations of the Galaxy, calculated using the ATLAS9 and MARCS codes. New opacity distribution functions were generated to calculate ATLAS9 model photospheres. MARCS models were calculated based on opacity sampling techniques. The metallicity ([M/H]) spans from -5 to 1.5 for ATLAS9 and from -2.5 to 0.5 for MARCS models. There are three main differences with respect to previous ATLAS9 model grids: a new, corrected H₂O line list, a wide range of carbon ([C/M]) and α-element ([α/M]) variations, and solar reference abundances from Asplund et al. The added range of varying carbon and α-element abundances also extends the previously calculated MARCS model grids. Altogether, 1980 chemical compositions were used for the ATLAS9 grid and 175 for the MARCS grid. Over 808,000 ATLAS9 models were computed, spanning temperatures from 3500 K to 30,000 K and log g from 0 to 5, with the highest temperatures computed only for high gravities. The MARCS models span from 3500 K to 5500 K and log g from 0 to 5. All model atmospheres are publicly available online.
TMDs: Evolution, modeling, precision
Directory of Open Access Journals (Sweden)
D’Alesio Umberto
2015-01-01
The factorization theorem for qT spectra in Drell-Yan processes, boson production and semi-inclusive deep inelastic scattering allows for the determination of the non-perturbative parts of transverse momentum dependent parton distribution functions. Here we discuss the fit of Drell-Yan and Z-production data using the transverse momentum dependent formalism and the resummation of the evolution kernel. We find good theoretical stability of the results and a final χ²/points ≲ 1. We show how fixing the non-perturbative pieces of the evolution can be used to make predictions at present and future colliders.
DEFF Research Database (Denmark)
Antero, Michelle C.; Hedman, Jonas; Henningsson, Stefan
2013-01-01
The ERP industry has undergone dramatic changes over the past decades due to changing market demands, thereby creating new challenges and opportunities, which have to be managed by ERP vendors. This paper inquires into the necessary evolution of business models in a technology-intensive industry (e...
Stochastic Models of Evolution
Bezruchko, Boris P.; Smirnov, Dmitry A.
To continue the discussion of randomness given in Sect. 2.2.1, we briefly touch on stochastic models of temporal evolution (random processes). They can be specified either via explicit definition of their statistical properties (probability density functions, correlation functions, etc., Sects. 4.1, 4.2 and 4.3) or via stochastic difference or differential equations. Some of the most widely known equations, their properties and applications are discussed in Sects. 4.4 and 4.5.
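As a concrete instance of specifying a random process via a stochastic differential equation, the following is a hedged sketch of Euler-Maruyama integration of an Ornstein-Uhlenbeck process; the parameters are illustrative, not taken from the chapter.

```python
import random

def ornstein_uhlenbeck(n=10000, theta=1.0, mu=0.0, sigma=0.5, dt=0.01, seed=2):
    """Euler-Maruyama integration of dX = theta*(mu - X) dt + sigma dW,
    a standard stochastic differential equation for temporal evolution."""
    rng = random.Random(seed)
    x, path = 1.0, []
    for _ in range(n):
        # deterministic relaxation toward mu plus Gaussian noise increment
        x += theta * (mu - x) * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

path = ornstein_uhlenbeck()
tail = path[len(path) // 2:]
mean_tail = sum(tail) / len(tail)
print(round(mean_tail, 2))  # relaxes toward mu = 0, fluctuating around it
```

The same process could equivalently be specified through its statistical properties (a Gaussian stationary density and an exponentially decaying correlation function), which is the duality the chapter describes.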
Modeling Subglacial Permafrost Evolution
Koutnik, M. R.; Marshall, S.
2002-12-01
Permanently frozen ground was present both beneath and peripheral to the Quaternary ice sheets. In areas where the ice sheet grew or advanced over permafrost, the ice sheet insulated the ground, leading to subglacial permafrost degradation. This has created distinct signatures of ice sheet occupation in the Canadian north and in Alaska during the last glacial period, with greatly diminished permafrost thickness in regions that were ice covered for an extended period. In contrast, areas peripheral to the ice sheet, including the Midwest United States, were cooled by the glacial climate conditions and the regional cooling influence of the ice sheet, leading to permafrost growth. We have developed a sub- and proglacial diffusion based permafrost model that utilizes a logarithmic grid transformation to more efficiently track the changing depth of permafrost with time. This model is coupled with the ice sheet thermodynamic model of Marshall and Clarke [1997a] to explore the geologic signatures of the last glacial cycle in North America. This offers the potential for new constraints on modeled ice sheet history. Preliminary model runs show that the overlying ice sheet has a significant effect on the underlying and peripheral permafrost degradation and formation. Subglacial permafrost is also important because its evolution influences the basal temperature of the ice sheet, critical for evolution of subglacial hydrology and fast flow instabilities (e.g. ice streams). We present results of permafrost conditions under the last glacial maximum ice sheet and the effect of permafrost on basal temperature evolution through the last glacial cycle in North America. Marshall, S. J. and G. K. C. Clarke, 1997a. J. Geophys. Res., 102 (B9), 20,599-20,614.
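The idea of a logarithmically stretched grid for efficiently tracking near-surface thermal gradients can be sketched with a toy explicit diffusion solver. This is an illustration only, not the coupled model described in the abstract; the grid stretch, diffusivity, and boundary temperatures are assumed values.

```python
import math

def ground_temperature(n=40, depth=100.0, kappa=31.5,  # thermal diffusivity, m^2/yr (illustrative)
                       t_surf=-10.0, t_base=2.0, years=50.0):
    """Explicit finite-difference diffusion of ground temperature on a
    logarithmically stretched grid: fine spacing near the surface where
    gradients are sharp, coarse spacing at depth."""
    stretch = 3.0
    z = [depth * math.expm1(stretch * i / (n - 1)) / math.expm1(stretch)
         for i in range(n)]
    T = [0.0] * n                   # initially isothermal ground at 0 degrees C
    T[0], T[-1] = t_surf, t_base    # fixed surface and basal temperatures
    h = [z[i + 1] - z[i] for i in range(n - 1)]
    dt = 0.4 * min(h) ** 2 / kappa  # within the explicit stability limit
    for _ in range(int(years / dt)):
        new = T[:]
        for i in range(1, n - 1):
            # standard non-uniform-grid Laplacian
            lap = 2.0 * ((T[i + 1] - T[i]) / h[i]
                         - (T[i] - T[i - 1]) / h[i - 1]) / (h[i] + h[i - 1])
            new[i] = T[i] + kappa * dt * lap
        T = new
    return z, T

z, T = ground_temperature()
```

The stretched grid puts most nodes in the upper few metres, where permafrost aggradation and degradation are resolved, while still reaching the deep boundary with few nodes.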
Direct approach for solving nonlinear evolution and two-point ...
Indian Academy of Sciences (India)
…demonstrate the use and computational efficiency of the method. This method can easily be applied to many nonlinear problems and is capable of reducing the size of computational work.
Modelling tipping-point phenomena of scientific coauthorship networks
Xie, Zheng; Yi, Dongyun; Zhenzheng, Ouyang; Li, Jianping
2016-01-01
In a range of scientific coauthorship networks, tipping points are detected in degree distributions, correlations between degrees and local clustering coefficients, etc. The existence of these tipping points can be treated as a result of the diversity of collaboration behaviours in the scientific field. A growing geometric hypergraph built on a cluster of concentric circles is proposed to model two typical collaboration behaviours, namely the behaviour of leaders and that of other members in research teams. The model successfully predicts the tipping points, as well as many common features of coauthorship networks. For example, it realizes a process of deriving the complex scale-free property from simple yes/no experiments. Moreover, it gives a reasonable explanation for the emergence of tipping points through the difference in collaboration behaviour between leaders and other members, which emerges in the evolution of research teams. The evolution synthetically addresses typical factors of generating collabora...
Point: Proposing the Electrokinetic Model
Moeller, Marcus J.; Kuppe, Christoph
2015-01-01
It is still not fully resolved how the glomerular filter works and why it never clogs. Several models have been proposed. In this review, we will compare the most widely used “pore model” to the more recent and refined “electrokinetic model” of glomerular filtration. The pore model assumes the existence of highly ordered regular pores, but it cannot provide a mechanistic explanation for several of the inherent characteristics of the glomerular filter. The electrokinetic model assumes that streaming potentials generate an electrical field along the filter surface which repels the negatively charged plasma proteins, preventing them from passing across the filter. The electrokinetic model can provide elegant mechanistic solutions for most of the unresolved riddles about the glomerular filter. PMID:25700457
Population dynamics and evolution modelling
Directory of Open Access Journals (Sweden)
Aleksej Olenin
2013-03-01
Ecological system modelling is a powerful tool that provides better understanding of interspecies interactions. Although a complex model gives more information about the modelled object, it also drastically increases the computational time needed to obtain that information. In this paper a rather simple three-trophic-level population dynamics model with an evolution mechanism is described, which can be run on any personal computer. The performance capacity of the evolution mechanism was shown by running the model 1100 times for both carnivores and herbivores, so that only one type of animal could evolve. It was also shown that attempts to control population abundances with chemicals or by hunting, while somewhat effective, can still be overcome by animals if they have the ability to evolve.
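A three-trophic-level population dynamics model of the general kind described here can be sketched with Lotka-Volterra-style equations and Euler stepping. This is a hedged illustration: the paper's actual model and its evolution mechanism are not reproduced, and all rates below are invented for the example.

```python
def food_chain(p=5.0, h=1.0, c=1.0, dt=0.01, steps=20000):
    """Euler integration of a simple three-trophic-level food chain
    (plants p, herbivores h, carnivores c), Lotka-Volterra style with
    logistic growth at the base."""
    r, K = 1.0, 10.0   # plant growth rate and carrying capacity
    a, e = 0.2, 0.5    # grazing rate and herbivore conversion efficiency
    b, f = 0.2, 0.5    # predation rate and carnivore conversion efficiency
    m, d = 0.2, 0.2    # herbivore and carnivore mortality rates
    for _ in range(steps):
        dp = r * p * (1 - p / K) - a * p * h
        dh = e * a * p * h - m * h - b * h * c
        dc = f * b * h * c - d * c
        p, h, c = p + dt * dp, h + dt * dh, c + dt * dc
    return p, h, c

p, h, c = food_chain()
```

An evolution mechanism of the kind the paper tests could then be layered on top by letting a parameter such as the grazing rate a mutate slowly between runs.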
Grandi, Claudio; Colling, D; Fisk, I; Girone, M
2014-01-01
The CMS Computing Model was developed and documented in 2004. Since then the model has evolved to be more flexible and to take advantage of new techniques, but many of the original concepts remain and are in active use. In this presentation we will discuss the changes planned for the restart of the LHC program in 2015. We will discuss the changes planned in the use and definition of the computing tiers, which were defined with the MONARC project. We will present how we intend to use new services and infrastructure to provide more efficient and transparent access to the data. We will discuss the computing plans to make better use of the computing capacity by scheduling more of the processor nodes, making better use of the disk storage, and making more intelligent use of the networking.
Biodiversity and models of evolution
Directory of Open Access Journals (Sweden)
S. L. Podvalny
2016-01-01
The paper discusses the evolutionary impact of biodiversity, the backbone of the noosphere, whose status has been fixed by a UN convention. The examples and role of such diversity are considered at the various levels of life organization. At the level of standalone organisms, the diversity in question manifests itself in the differentiation and separation of key physiological functions, which significantly broadens the eco-niche for species with the consummate type of such separation. However, the organismic level of biodiversity does not suffice for building developmental models, since the genetic processes of inheritance and variability emerge only at the minimum structural unit of the living world, the population. It is noted that a gene pool sufficient for species development may accumulate only in fairly large populations, where the overall rate of mutation does not yield to the rate of ambient variations. The paper shows that the known formal models of species development based on Fisher's theorem about the impact of genetic variance on species fitness are not in keeping with the actual existence of the species, due to the conventionally finite and steady number of genotypes within a population. At the ecosystem level of life organization, the key role pertains to taxonomic diversity, which supports a continuous food chain in the system against any adverse developmental conditions of particular taxons. Also, the progressive evolution of an ecosystem is largely stabilized by its multilayer hierarchic structure and the closed circle of matter and energy. The developmental system models based on the Lotka-Volterra equations, which describe the interaction of open-loop ecosystem elements only, insufficiently represent the position of biodiversity in the evolutionary processes. The paper lays down the requirements to such models, which take into account the mass balance within a system; its trophic structure; the…
The Apache Point Observatory Galactic Evolution Experiment (APOGEE)
Majewski, Steven R.; Schiavon, Ricardo P.; Frinchaboy, Peter M.; Allende Prieto, Carlos; Barkhouser, Robert; Bizyaev, Dmitry; Blank, Basil; Brunner, Sophia; Burton, Adam; Carrera, Ricardo; Chojnowski, S. Drew; Cunha, Kátia; Epstein, Courtney; Fitzgerald, Greg; García Pérez, Ana E.; Hearty, Fred R.; Henderson, Chuck; Holtzman, Jon A.; Johnson, Jennifer A.; Lam, Charles R.; Lawler, James E.; Maseman, Paul; Mészáros, Szabolcs; Nelson, Matthew; Nguyen, Duy Coung; Nidever, David L.; Pinsonneault, Marc; Shetrone, Matthew; Smee, Stephen; Smith, Verne V.; Stolberg, Todd; Skrutskie, Michael F.; Walker, Eric; Wilson, John C.; Zasowski, Gail; Anders, Friedrich; Basu, Sarbani; Beland, Stephane; Blanton, Michael R.; Bovy, Jo; Brownstein, Joel R.; Carlberg, Joleen; Chaplin, William; Chiappini, Cristina; Eisenstein, Daniel J.; Elsworth, Yvonne; Feuillet, Diane; Fleming, Scott W.; Galbraith-Frew, Jessica; García, Rafael A.; García-Hernández, D. Aníbal; Gillespie, Bruce A.; Girardi, Léo; Gunn, James E.; Hasselquist, Sten; Hayden, Michael R.; Hekker, Saskia; Ivans, Inese; Kinemuchi, Karen; Klaene, Mark; Mahadevan, Suvrath; Mathur, Savita; Mosser, Benoît; Muna, Demitri; Munn, Jeffrey A.; Nichol, Robert C.; O'Connell, Robert W.; Parejko, John K.; Robin, A. C.; Rocha-Pinto, Helio; Schultheis, Matthias; Serenelli, Aldo M.; Shane, Neville; Silva Aguirre, Victor; Sobeck, Jennifer S.; Thompson, Benjamin; Troup, Nicholas W.; Weinberg, David H.; Zamora, Olga
2017-09-01
The Apache Point Observatory Galactic Evolution Experiment (APOGEE), one of the programs in the Sloan Digital Sky Survey III (SDSS-III), has now completed its systematic, homogeneous spectroscopic survey sampling all major populations of the Milky Way. After a three-year observing campaign on the Sloan 2.5 m Telescope, APOGEE has collected a half million high-resolution (R ˜ 22,500), high signal-to-noise ratio (>100), infrared (1.51-1.70 μm) spectra for 146,000 stars, with time series information via repeat visits to most of these stars. This paper describes the motivations for the survey and its overall design—hardware, field placement, target selection, operations—and gives an overview of these aspects as well as the data reduction, analysis, and products. An index is also given to the complement of technical papers that describe various critical survey components in detail. Finally, we discuss the achieved survey performance and illustrate the variety of potential uses of the data products by way of a number of science demonstrations, which span from time series analysis of stellar spectral variations and radial velocity variations from stellar companions, to spatial maps of kinematics, metallicity, and abundance patterns across the Galaxy and as a function of age, to new views of the interstellar medium, the chemistry of star clusters, and the discovery of rare stellar species. As part of SDSS-III Data Release 12 and later releases, all of the APOGEE data products are publicly available.
Model for Semantically Rich Point Cloud Data
Poux, F.; Neuville, R.; Hallot, P.; Billen, R.
2017-10-01
This paper proposes an interoperable model for managing high-dimensional point clouds while integrating semantics. Point clouds from sensors are a direct source of information physically describing a 3D state of the recorded environment. As such, they are an exhaustive representation of the real world at every scale: 3D reality-based spatial data. Their generation is increasingly fast, but processing routines and data models lack the knowledge to reason from information extraction rather than interpretation. The enhanced Smart Point Cloud model developed here brings intelligence to point clouds via three connected meta-models, while linking available knowledge and classification procedures that permit semantic injection. Interoperability drives the model's adaptation to potentially many applications through specialized domain ontologies. A first prototype, implemented in Python with a PostgreSQL database, allows semantic and spatial concepts to be combined for basic hybrid queries on different point clouds.
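The kind of hybrid semantic-plus-spatial query the model targets can be sketched in plain Python. This toy example does not attempt the paper's PostgreSQL storage or ontology-driven meta-models; the class labels and coordinates are invented.

```python
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float
    z: float
    label: str  # semantic class attached to the point

def hybrid_query(cloud, bbox, label):
    """Combine a spatial predicate (axis-aligned bounding box) with a
    semantic predicate (class label) in a single query."""
    (x0, y0, z0), (x1, y1, z1) = bbox
    return [p for p in cloud
            if x0 <= p.x <= x1 and y0 <= p.y <= y1 and z0 <= p.z <= z1
            and p.label == label]

cloud = [Point(0.2, 0.1, 0.0, "ground"),
         Point(0.5, 0.4, 1.2, "wall"),
         Point(2.5, 0.3, 1.0, "wall")]
hits = hybrid_query(cloud, ((0, 0, 0), (1, 1, 2)), "wall")
print(len(hits))  # → 1
```

In the full model the semantic predicate would be resolved through a domain ontology rather than a flat string label.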
New model systems for experimental evolution.
Collins, Sinéad
2013-07-01
Microbial experimental evolution uses a few well-characterized model systems to answer fundamental questions about how evolution works. This special section highlights novel model systems for experimental evolution, with a focus on marine model systems that can be used to understand evolutionary responses to global change in the oceans.
Evolution models with extremal dynamics
Directory of Open Access Journals (Sweden)
Petri P. Kärenlampi
2016-08-01
The random-neighbor version of the Bak-Sneppen biological evolution model is reproduced, along with an analogous model of random replicators, the latter eventually experiencing topology changes. In the absence of topology changes, both types of models self-organize to a critical state. Species extinctions in the replicator system degenerate the self-organization to a random walk, as does vanishing species interaction in the BS model. A replicator model with speciation is introduced, which experiences dramatic topology changes. It produces a variety of features, but self-organizes to a possibly critical state only in a few special cases. With speciation-extinction dynamics interfering with self-organization, biological macroevolution probably is not a self-organized critical system.
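The random-neighbor Bak-Sneppen dynamics referred to above can be sketched in a few lines: repeatedly replace the minimum fitness together with K−1 randomly chosen sites. A hedged illustration with invented sizes; in this variant the fitness distribution is expected to self-organize above a threshold near 1/K.

```python
import random

def bak_sneppen_random_neighbor(n=200, k=2, steps=20000, seed=3):
    """Random-neighbor Bak-Sneppen: at each step, replace the minimum
    fitness and k-1 randomly chosen sites with fresh random values."""
    rng = random.Random(seed)
    fitness = [rng.random() for _ in range(n)]
    for _ in range(steps):
        i = min(range(n), key=fitness.__getitem__)  # least-fit species
        fitness[i] = rng.random()
        for j in rng.sample(range(n), k - 1):       # its random "neighbors"
            fitness[j] = rng.random()
    return fitness

fitness = bak_sneppen_random_neighbor()
mean_fitness = sum(fitness) / len(fitness)  # should settle well above 0.5
```

In the spatial version the k−1 refreshed sites would be the lattice neighbors of the minimum, which is what produces the model's avalanche structure.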
Quantum evolution near unstable equilibrium point: an algebraic approach
Energy Technology Data Exchange (ETDEWEB)
Bai, Zai-Qiao [Department of Physics, Beijing Normal University, Beijing 100875, People's Republic of China]; Zheng, Wei-Mou [Institute of Theoretical Physics, Academia Sinica, Beijing 100080, People's Republic of China]
2003-03-21
We study the quantum evolution of an unstable system in su(1,1) algebra. The evolution of any initial state |k, ν⟩ is recursively obtained. When t → ∞, |⟨k′, ν| exp(−iHt/ħ) |k, ν⟩|² decays as e^(−4νt) or t^(−4ν) in the hyperbolic (H = 2K₁) or parabolic (H = 2K₁ + 2K₃) unstable cases, respectively. The quantum-classical correspondence, independent of the Bargmann index ν, is established based on the long-time and large-scale behaviour of the wavefunctions.
Modeling Temporal Evolution and Multiscale Structure in Networks
DEFF Research Database (Denmark)
Herlau, Tue; Mørup, Morten; Schmidt, Mikkel Nørgaard
2013-01-01
Many real-world networks exhibit both temporal evolution and multiscale structure. We propose a model for temporally correlated multifurcating hierarchies in complex networks which jointly capture both effects. We use the Gibbs fragmentation tree as prior over multifurcating trees and a change-point model to account for the temporal evolution of each vertex. We demonstrate that our model is able to infer time-varying multiscale structure in synthetic as well as three real world time-evolving complex networks. Our modeling of the temporal evolution of hierarchies brings new insights...
Modeling Shoreline Evolution on Mars
Kraal, E. R.; Asphaug, E. I.; Lorenz, R. D.
2003-05-01
Geomorphic evidence of surface water on Mars has important implications for planetary surface evolution, as well as for the continuing exploration of the planet as future landing sites are selected. Here we present the initial results from forward models of crater lake basin evolution motivated by the identification of intracrater landforms on Mars which exhibit possible evidence for a history of surface water. Proposed lacustrine Martian landforms include shorelines, terraces, and wave cut benches - features that have received considerable attention in terrestrial lacustrine geomorphology but which have never been quantitatively addressed with sufficient rigor on Mars. In particular, the existing body of terrestrial research has yet to be applied adequately to planets of different gravity, temperature (or working fluid) and atmospheric pressure, such as Mars and Titan. The 2-D model includes wave generation, shore erosion, and other factors. Wave generation depends primarily on wind speed and basin size. The erosive power of the generated waves along the shoreline depends on wave size and period, initial topography, rock hardness, and the effects of crater impact formation on the bedrock. Other factors include water loss to evaporation and infiltration, sediment transport within the basin, wind transported sediment, and ice cover. Waves are generated using terrestrial empirical equations that have been modified for the lower gravity on Mars. Erosion is based on equations for terrestrial rocky coastline evolution models that have been modified for Martian conditions. Results presented here will focus on the first two aspects, wave generation and shoreline erosion. Additional research will include exploring the effect of different air pressures on the system as well as modifying the model for application to possible crater lakes of liquid hydrocarbons on Titan.
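One widely used terrestrial empirical fit for fetch-limited wave growth is the SMB relation; treating gravity as a free parameter gives a feel for the Earth-to-Mars scaling discussed here. This is an assumed formula used for illustration, not necessarily the modification employed in the model, and the wind and fetch values are invented.

```python
import math

def significant_wave_height(wind=10.0, fetch=50e3, g=9.81):
    """Fetch-limited significant wave height from the SMB empirical fit,
    gH/U^2 = 0.283 tanh(0.0125 (gF/U^2)^0.42), with gravity g as a
    free parameter so different planets can be compared.
    wind in m/s, fetch in m, g in m/s^2; returns height in m."""
    gF = g * fetch / wind ** 2                       # dimensionless fetch
    return 0.283 * wind ** 2 / g * math.tanh(0.0125 * gF ** 0.42)

h_earth = significant_wave_height(g=9.81)
h_mars = significant_wave_height(g=3.71)  # same wind and fetch, Mars gravity
```

With these assumed inputs the lower Martian gravity yields larger waves for the same wind and basin size, one of the scaling effects the model must capture.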
Modeling of Landslides with the Material Point Method
DEFF Research Database (Denmark)
Andersen, Søren Mikkel; Andersen, Lars
2008-01-01
A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion is employed for the soil. The slide is triggered for the initially stable slope by removing the cohesion of the soil and the slide is followed from the triggering until a state of equilibrium is again reached. Parameter studies, in which the angle of internal friction of the soil and the degree...
[From malaria parasite point of view--Plasmodium falciparum evolution].
Zerka, Agata; Kaczmarek, Radosław; Jaśkiewicz, Ewa
2015-12-31
Malaria is caused by infection with protozoan parasites belonging to the genus Plasmodium, which have arguably exerted the greatest selection pressure on humans in the history of our species. Besides humans, different Plasmodium parasites infect a wide range of animal hosts, from marine invertebrates to primates. On the other hand, individual Plasmodium species show high host specificity. The extraordinary evolution of Plasmodium probably began when a free-living red alga turned parasitic, and culminated with its ability to thrive inside a human red blood cell. Studies on the African apes generated new data on the evolution of malaria parasites in general and the deadliest human-specific species, Plasmodium falciparum, in particular. Initially, it was hypothesized that P. falciparum descended from the chimpanzee malaria parasite P. reichenowi, after the human and the chimp lineage diverged about 6 million years ago. However, a recently identified new species infecting gorillas unexpectedly showed similarity to P. falciparum and was therefore named P. praefalciparum. That finding spurred an alternative hypothesis, which proposes that P. falciparum descended from its gorilla rather than chimp counterpart. In addition, the gorilla-to-human host shift may have occurred more recently (about 10 thousand years ago) than the theoretical P. falciparum-P. reichenowi split. One of the key aims of the studies on Plasmodium evolution is to elucidate the mechanisms that allow incessant host shifting while retaining host specificity, especially in the case of human-specific species. Thorough understanding of these phenomena will be necessary to design effective malaria treatment and prevention strategies.
Comparison of sparse point distribution models
DEFF Research Database (Denmark)
Erbou, Søren Gylling Hemmingsen; Vester-Christensen, Martin; Larsen, Rasmus
2010-01-01
This paper compares several methods for obtaining sparse and compact point distribution models suited for data sets containing many variables. These are evaluated on a database consisting of 3D surfaces of a section of the pelvic bone obtained from CT scans of 33 porcine carcasses. The superior m...... in slaughterhouses....
Evaluating choices in multi-process landscape evolution models
Temme, A.J.A.M.; Claessens, L.; Veldkamp, A.; Schoorl, J.M.
2011-01-01
The interest in landscape evolution models (LEMs) that simulate multiple landscape processes is growing. However, modelling multiple processes constitutes a new starting point for which some aspects of the set up of LEMs must be re-evaluated. The objective of this paper is to demonstrate the
Direct approach for solving nonlinear evolution and two-point ...
Indian Academy of Sciences (India)
2013-12-01
School of Mathematics and Applied Statistics, University of Wollongong, Wollongong, NSW 2522 ... of problems in engineering and science, including the modelling of chemical reactions, heat transfer ... [16] G E Pukhov, Differential transformations and mathematical modelling of physical processes.
Approximate Model for Turbulent Stagnation Point Flow.
Energy Technology Data Exchange (ETDEWEB)
Dechant, Lawrence [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2016-01-01
Here we derive an approximate turbulent self-similar model for a class of favorable-pressure-gradient, wedge-like flows, focusing on the stagnation point limit. While the self-similar model provides a useful gross flow field estimate, this approach must be combined with a near-wall model to determine skin friction and, by Reynolds analogy, the heat transfer coefficient. The combined approach is developed in detail for the stagnation point flow problem, where turbulent skin friction and Nusselt number results are obtained. Comparison to the classical Van Driest (1958) result suggests overall reasonable agreement. Though the model is only valid near the stagnation region of cylinders and spheres, it nonetheless provides a reasonable model for overall cylinder and sphere heat transfer. The enhancement effect of free-stream turbulence upon the laminar flow is used to derive a similar expression which is valid for turbulent flow. Examination of free-stream-enhanced laminar flow suggests that, rather than enhancing laminar flow behavior, free-stream disturbances result in early transition to turbulent stagnation point behavior. Excellent agreement is shown between enhanced laminar flow and turbulent flow behavior for high levels (e.g., 5%) of free-stream turbulence. Finally, the blunt-body turbulent stagnation results are shown to provide realistic heat transfer results for turbulent jet impingement problems.
Homeostasis: an underestimated focal point of ecology and evolution.
Giordano, Mario
2013-10-01
The concept of homeostasis is often ill-defined in the scientific literature. The word "homeostasis" literally indicates the absence of changes and an absolute maintenance of the status quo. The multiplicity of possible examples of homeostasis suggests that it is essentially impossible for all aspects of the composition of the organism and the rates of processes carried out by the organism to be held constant simultaneously, when environmental changes are in the non-lethal range. In attempting to clarify the usage of the term homeostasis, I emphasize the probable contributions to evolutionary fitness of the main attributes of homeostasis: rate processes and compositions. I also attempt to identify the aspects of homeostasis that are most likely to be subject to natural selection. The tendency to retain the status quo derives from the interplay of functions (among which growth), metabolic pools and elemental stoichiometry. The set points around which oscillations occur in biological systems, and their control mechanisms, are determined by evolutionary processes; consequently, the tendency of a cell to be homeostatic with respect to a given set point is also selectable. A homeostatic response to external perturbations may be selectively favored when the potential reproductive advantage offered by a reorganization of cell resources cannot be exploited. This is most likely to occur in the case of environmental perturbations of moderate intensity and short duration relative to the growth rate. Under these circumstances, homeostasis may be an energetically and competitively preferable option, because it requires no alteration of the expressed proteome and eliminates the requirement for reverse acclimation upon cessation of the perturbation. This review also intends to be a stimulus to "ad hoc" experiments to assess the ecological and evolutionary relevance of homeostasis. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Proactive Quality Guidance for Model Evolution in Model Libraries
Ganser, Andreas; Lichter, Horst; Roth, Alexander; Rumpe, Bernhard
2014-01-01
Model evolution in model libraries differs from general model evolution. It limits the scope to the manageable and allows to develop clear concepts, approaches, solutions, and methodologies. Looking at model quality in evolving model libraries, we focus on quality concerns related to reusability. In this paper, we put forward our proactive quality guidance approach for model evolution in model libraries. It uses an editing-time assessment linked to a lightweight quality model, corresponding m...
Evolution models of red supergiants
Georgy, Cyril
2017-11-01
The red supergiant (RSG) phase is a key stage in the evolution of massive stars. The current uncertainties about the mass-loss rates of these objects leave their evolution far from fully understood. In this paper, we discuss some of the physical processes that determine the duration of the RSG phase. We also show how mass loss affects their evolution and can allow some RSGs to evolve towards the blue side of the Hertzsprung-Russell diagram. We also propose observational tests that can help in better understanding the evolution of these stars.
Can fisheries-induced evolution shift reference points for fisheries management?
Heino, M.; Baulier, L.; Boukal, D.S.; Mollet, F.M.; Rijnsdorp, A.D.
2013-01-01
Biological reference points are important tools for fisheries management. Reference points are not static, but may change when a population's environment or the population itself changes. Fisheries-induced evolution is one mechanism that can alter population characteristics, leading to “shifting”
Can fisheries-induced evolution shift reference points for fisheries management?
DEFF Research Database (Denmark)
Heino, Mikko; Baulier, Loїc; Boukal, David S.
2013-01-01
Biological reference points are important tools for fisheries management. Reference points are not static, but may change when a population's environment or the population itself changes. Fisheries-induced evolution is one mechanism that can alter population characteristics, leading to “shifting...... that reference points gradually lose their intended meaning. This can lead to increased precaution, which is safe, but potentially costly. Shifts can also occur in more perilous directions, such that actual risks are greater than anticipated. Our qualitative analysis suggests that all commonly used reference...... points are susceptible to shifting through fisheries-induced evolution, including the limit and “precautionary” reference points for spawning-stock biomass, Blim and Bpa, and the target reference point for fishing mortality, F0.1. Our findings call for increased awareness of fisheries-induced changes...
The Lomagundi Event Marks Post-Pasteur Point Evolution of Aerobic Respiration: A Hypothesis
Raub, T. D.; Kirschvink, J. L.; Nash, C. Z.; Raub, T. M.; Kopp, R. E.; Hilburn, I. A.
2009-05-01
All published early Earth carbon cycle models assume that aerobic respiration is as ancient as oxygenic photosynthesis. However, aerobic respiration shuts down at oxygen concentrations below the Pasteur Point (~0.01 of the present atmospheric level, PAL). As geochemical processes are unable to produce even local oxygen concentrations above 0.001 PAL, it follows that aerobic respiration could only have evolved after oxygenic photosynthesis, implying a time gap. The evolution of oxygen reductase-utilizing metabolisms presumably would have occupied this interval. During this time the PS-II-generated free oxygen would have been largely unavailable for remineralization of dissolved organic carbon and so would have profoundly shifted the burial ratio of organic/inorganic carbon. We argue that the sequential geological record of the Makganyene (Snowball?) glaciation (2.3-2.22 Ga), the excessively aerobic Hekpoort and coeval paleosols, the Lomagundi-Jatuli carbon isotopic excursion (ending 2.056 Ga), and the deposition of concentrated, sedimentary organic carbon (shungite) mark this period of a profoundly unbalanced global carbon cycle. The Kopp et al. (2005) model for oxyatmoversion agrees with phylogenetic evidence for the radiation of cyanobacteria followed closely by the radiation of gram-negative lineages containing magnetotactic bacteria, which depend upon vertical oxygen gradients. These organisms include delta-Proteobacteria from which the mitochondrial ancestor originated. The Precambrian carbon cycle was rebalanced after a series of biological innovations allowed utilization of the high redox potential of free oxygen. Aerobic respiration in mitochondria required the evolution of a unique family of Fe-Cu oxidases, one of many factors contributing to the >210 Myr delay between the Makganyene deglaciation and the end of the Lomagundi-Jatuli event. We speculate that metalliferous fluids associated with the eruption of the Bushveld complex facilitated evolution of these
DEFF Research Database (Denmark)
Møller, Jesper; Rasmussen, Jakob Gulddahl
2012-01-01
We introduce a flexible spatial point process model for spatial point patterns exhibiting linear structures, without incorporating a latent line process. The model is given by an underlying sequential point process model. Under this model, the points can be of one of three types: a ‘background...... for producing point patterns with linear structures and propose to use the model as the likelihood in a Bayesian setting when analysing a spatial point pattern exhibiting linear structures. We illustrate this methodology by analysing two spatial point pattern datasets (locations of bronze age graves in Denmark...
Modelling point patterns with linear structures
DEFF Research Database (Denmark)
Møller, Jesper; Rasmussen, Jakob Gulddahl
2009-01-01
Many observed spatial point patterns contain points placed roughly on line segments. Point patterns exhibiting such structures can be found for example in archaeology (locations of bronze age graves in Denmark) and geography (locations of mountain tops). We consider a particular class of point...... processes whose realizations contain such linear structures. Such a point process is constructed sequentially by placing one point at a time. The points are placed in such a way that new points are often placed close to previously placed points, and the points form roughly line shaped structures. We...
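The sequential construction described in this abstract can be illustrated with a short sketch (a toy rendering, not the authors' fitted model; the probability `p_line`, the step length, and the direction-inheritance rule are all assumptions made here for illustration):

```python
import math
import random

def simulate_linear_pattern(n_points, p_line=0.8, step=0.05, noise=0.01, seed=1):
    """Sequentially place points in the unit square: each new point either
    extends a 'line' from a previously placed point (inheriting its
    direction) or is a uniform background point."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random())]
    dirs = [rng.uniform(0.0, 2.0 * math.pi)]
    for _ in range(n_points - 1):
        if rng.random() < p_line:
            j = rng.randrange(len(pts))               # pick a parent point
            ang = dirs[j]
            x = pts[j][0] + step * math.cos(ang) + rng.gauss(0.0, noise)
            y = pts[j][1] + step * math.sin(ang) + rng.gauss(0.0, noise)
            pts.append((min(max(x, 0.0), 1.0), min(max(y, 0.0), 1.0)))
            dirs.append(ang)                           # child continues the line
        else:
            pts.append((rng.random(), rng.random()))   # background point
            dirs.append(rng.uniform(0.0, 2.0 * math.pi))
    return pts

pattern = simulate_linear_pattern(200)
```

Because children inherit their parent's direction, realizations show roughly line-shaped clusters against a uniform background, as the abstract describes.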
Modelling point patterns with linear structures
DEFF Research Database (Denmark)
Møller, Jesper; Rasmussen, Jakob Gulddahl
Many observed spatial point patterns contain points placed roughly on line segments. Point patterns exhibiting such structures can be found for example in archaeology (locations of bronze age graves in Denmark) and geography (locations of mountain tops). We consider a particular class of point...... processes whose realizations contain such linear structures. Such a point process is constructed sequentially by placing one point at a time. The points are placed in such a way that new points are often placed close to previously placed points, and the points form roughly line shaped structures. We...
Spatial Stochastic Point Models for Reservoir Characterization
Energy Technology Data Exchange (ETDEWEB)
Syversveen, Anne Randi
1997-12-31
The main part of this thesis discusses stochastic modelling of geology in petroleum reservoirs. A marked point model is defined for objects against a background in a two-dimensional vertical cross section of the reservoir. The model handles conditioning on observations from more than one well for each object and contains interaction between objects, and the objects have the correct length distribution when penetrated by wells. The model is developed in a Bayesian setting. The model and the simulation algorithm are demonstrated by means of an example with simulated data. The thesis also deals with object recognition in image analysis, in a Bayesian framework, and with a special type of spatial Cox processes called log-Gaussian Cox processes. In these processes, the logarithm of the intensity function is a Gaussian process. The class of log-Gaussian Cox processes provides flexible models for clustering. The distribution of such a process is completely characterized by the intensity and the pair correlation function of the Cox process. 170 refs., 37 figs., 5 tabs.
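The log-Gaussian Cox process mentioned above can be sketched on a discrete grid (a minimal illustration; the exponential covariance, grid size, and parameter values are assumptions, not taken from the thesis):

```python
import numpy as np

def simulate_lgcp(n=20, mu=2.0, sigma2=0.5, scale=0.2, seed=0):
    """Log-Gaussian Cox process on an n x n grid over the unit square:
    the log-intensity is a Gaussian field with exponential covariance,
    and cell counts are Poisson given the realized intensity."""
    rng = np.random.default_rng(seed)
    xs = (np.arange(n) + 0.5) / n
    gx, gy = np.meshgrid(xs, xs)
    coords = np.column_stack([gx.ravel(), gy.ravel()])
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cov = sigma2 * np.exp(-d / scale)                  # exponential covariance
    L = np.linalg.cholesky(cov + 1e-8 * np.eye(n * n)) # jitter for stability
    log_lam = mu + L @ rng.standard_normal(n * n)      # Gaussian log-intensity
    counts = rng.poisson(np.exp(log_lam) / (n * n))    # Poisson counts per cell
    return counts.reshape(n, n)

counts = simulate_lgcp()
```

The clustering flexibility noted in the abstract comes from the covariance of the Gaussian field: a larger `scale` produces broader patches of elevated intensity and hence larger clusters.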
Synthetic AGB evolution. I. A new model
Groenewegen, M. A. T.; de Jong, T.
We have constructed a model to calculate in a synthetic way the evolution of stars on the asymptotic giant branch (AGB). The evolution is started at the first thermal pulse (TP) and is terminated when the envelope mass has been lost due to mass loss or when the core mass reaches the Chandrasekhar mass.
DEFF Research Database (Denmark)
Møller, Jesper; Rasmussen, Jakob Gulddahl
We introduce a flexible spatial point process model for spatial point patterns exhibiting linear structures, without incorporating a latent line process. The model is given by an underlying sequential point process model, i.e. each new point is generated given the previous points. Under this model...... pattern exhibiting linear structures but where the exact mechanism responsible for the formations of lines is unknown. We illustrate this methodology by analyzing two spatial point pattern data sets (locations of bronze age graves in Denmark and locations of mountain tops in Spain) without knowing which...
CNEM: Cluster Based Network Evolution Model
Directory of Open Access Journals (Sweden)
Sarwat Nizamani
2015-01-01
Full Text Available This paper presents a network evolution model based on a clustering approach. The proposed approach depicts network evolution, demonstrating the formation of a network from individual nodes into a fully evolved network. An agglomerative hierarchical clustering method is applied for the evolution of the network. In the paper, we present three case studies which show the evolution of networks from scratch. These case studies include: the terrorist network of the 9/11 incidents, the terrorist network of the WMD (Weapons of Mass Destruction) plot against France, and a network of tweets discussing a topic. The network of 9/11 is also used for evaluation, using other social network analysis methods, which shows that the clusters created using the proposed model of network evolution are of good quality; thus the proposed method can be used by law enforcement agencies in order to further investigate criminal networks.
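The agglomerative step behind such a model can be sketched in plain Python (a generic single-linkage agglomeration, not the paper's implementation; the toy coordinates are invented for illustration):

```python
import math

def agglomerate(points):
    """Single-linkage agglomerative clustering: repeatedly merge the two
    closest clusters, recording each merge as one 'evolution' step of the
    network, from isolated nodes to a single connected structure."""
    clusters = [[p] for p in points]
    merges = []
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(math.dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        merges.append((d, tuple(clusters[i]), tuple(clusters[j])))
        clusters[i] = clusters[i] + clusters[j]       # merge j into i
        del clusters[j]
    return merges

pts = [(0, 0), (0, 1), (5, 5), (5, 6), (10, 0)]
steps = agglomerate(pts)
```

Each entry of `steps` records one stage of network formation, mirroring the paper's idea of depicting evolution from individual nodes to a fully evolved network.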
LEMSI - The Landscape Evolution Model Sensitivity Investigation
Skinner, Christopher; Coulthard, Tom; Schwanghart, Wolfgang; Van De Wiel, Marco
2017-04-01
Landscape Evolution Models have been developing through a combination of improvements in model efficiency and computational power. This improvement has allowed simulations to take on more detail and complexity, pushing the modelling from the realm of pure exploration of major processes into one of numerical prediction and real-world applications. However, unlike the tools of other numerical modelling fields, Landscape Evolution Models have not yet undergone rigorous sensitivity analyses to highlight the main sources of model sensitivity and uncertainty. The Landscape Evolution Model Sensitivity Investigation (LEMSI) is the first large, global analysis of parameter sensitivity within a Landscape Evolution Model. We applied the Morris Method to the CAESAR-Lisflood model, investigating sensitivities to 15 user-defined parameter values and the sensitivities of 14 model output measures, featuring 4,800 individual tests and using over 500,000 CPU hours. This was repeated for two different catchments over 30-year and 1000-year periods. The model showed some sensitivity to most parameters, with variation between the catchments and the timeframe. However, the model showed consistent sensitivity to the choice of sediment transport law throughout, highlighting this as the major source of uncertainty in Landscape Evolution Models. Our results demonstrate the importance of considering parameter uncertainty in Landscape Evolution Modelling, especially if the model is to be used for prediction and/or real-world applications. The reliance on uncertain, deterministic sediment transport laws was shown to be the most important sensitivity in the model, and developing novel, probabilistic approaches could be a solution to this.
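The Morris Method used in LEMSI can be sketched as a one-at-a-time screening procedure (a generic implementation of Morris elementary effects, not the LEMSI/CAESAR-Lisflood code; the toy model function is invented for illustration):

```python
import random

def morris_effects(model, k, r=10, levels=4, seed=0):
    """Morris screening: r random one-at-a-time trajectories through a
    k-dimensional grid on [0, 1]; returns the mean absolute elementary
    effect per input, a cheap global sensitivity measure."""
    rng = random.Random(seed)
    delta = levels / (2.0 * (levels - 1))
    effects = [[] for _ in range(k)]
    for _ in range(r):
        x = [rng.randrange(levels - 1) / (levels - 1) for _ in range(k)]
        order = list(range(k))
        rng.shuffle(order)                 # random order of perturbations
        y = model(x)
        for i in order:
            x2 = list(x)
            x2[i] = x[i] + delta if x[i] + delta <= 1.0 else x[i] - delta
            y2 = model(x2)
            effects[i].append(abs((y2 - y) / (x2[i] - x[i])))
            x, y = x2, y2                  # continue the trajectory
    return [sum(e) / len(e) for e in effects]

# toy model: input 0 dominates, input 2 is inert
mu_star = morris_effects(lambda x: 10 * x[0] + x[1], k=3)
```

Ranking parameters by `mu_star` identifies the dominant sources of output sensitivity at a fraction of the cost of a full variance-based analysis, which is why it suits expensive landscape evolution runs.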
Discovery of an unconventional centromere in budding yeast redefines evolution of point centromeres.
Kobayashi, Norihiko; Suzuki, Yutaka; Schoenfeld, Lori W; Müller, Carolin A; Nieduszynski, Conrad; Wolfe, Kenneth H; Tanaka, Tomoyuki U
2015-08-03
Centromeres are the chromosomal regions promoting kinetochore assembly for chromosome segregation. In many eukaryotes, the centromere consists of up to megabases of DNA. On such "regional centromeres," kinetochore assembly is mainly defined by epigenetic regulation [1]. By contrast, a clade of budding yeasts (Saccharomycetaceae) has a "point centromere" of 120-200 base pairs of DNA, on which kinetochore assembly is defined by the consensus DNA sequence [2, 3]. During evolution, budding yeasts acquired point centromeres, which replaced ancestral, regional centromeres [4]. All known point centromeres among different yeast species share common consensus DNA elements (CDEs) [5, 6], implying that they evolved only once and stayed essentially unchanged throughout evolution. Here, we identify a yeast centromere that challenges this view: that of the budding yeast Naumovozyma castellii is the first unconventional point centromere with unique CDEs. The N. castellii centromere CDEs are essential for centromere function but have different DNA sequences from CDEs in other point centromeres. Gene order analyses around N. castellii centromeres indicate their unique, and separate, evolutionary origin. Nevertheless, they are still bound by the ortholog of the CBF3 complex, which recognizes CDEs in other point centromeres. The new type of point centromere originated prior to the divergence between N. castellii and its close relative Naumovozyma dairenensis and disseminated to all N. castellii chromosomes through extensive genome rearrangement. Thus, contrary to the conventional view, point centromeres can undergo rapid evolutionary changes. These findings give new insights into the evolution of point centromeres. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Modeling microstructural evolution of multiple texture components during recrystallization
DEFF Research Database (Denmark)
Vandermeer, R.A.; Juul Jensen, D.
1994-01-01
Models were formulated in an effort to characterize recrystallization in materials with multiple texture components. The models are based on a microstructural path methodology (MPM). Experimentally the microstructural evolution of commercial aluminum during recrystallization was characterized...... using stereological point and lineal measurements of microstructural properties in combination with EBSP analysis for orientation determinations. The potential of the models to describe the observed recrystallization behavior of heavily cold-rolled commercial aluminum was demonstrated. A successful MPM...... model was deduced which, for each texture component (random, rolling and cube orientations), was quantitatively consistent with the measured microstructural properties. Nucleation and growth rates were deduced for each texture component using the model....
Staged Evolution with Quality Gates for Model Libraries
Roth, Alexander; Ganser, Andreas; Lichter, Horst; Rumpe, Bernhard
2014-01-01
Model evolution is widely considered as a subject under research. Despite its role in research, common purpose concepts, approaches, solutions, and methodologies are missing. Limiting the scope to model libraries makes model evolution and related quality concerns manageable, as we show below. In this paper, we put forward our quality staged model evolution theory for model libraries. It is founded on evolution graphs, which offer a structure for model evolution in model libraries through evol...
Rethinking the evolution of specialization: A model for the evolution of phenotypic heterogeneity.
Rubin, Ilan N; Doebeli, Michael
2017-12-21
Phenotypic heterogeneity refers to genetically identical individuals that express different phenotypes, even when in the same environment. Traditionally, "bet-hedging" in fluctuating environments is offered as the explanation for the evolution of phenotypic heterogeneity. However, there are an increasing number of examples of microbial populations that display phenotypic heterogeneity in stable environments. Here we present an evolutionary model of phenotypic heterogeneity of microbial metabolism and a resultant theory for the evolution of phenotypic versus genetic specialization. We use two-dimensional adaptive dynamics to track the evolution of the population phenotype distribution of the expression of two metabolic processes with a concave trade-off. Rather than assume a Gaussian phenotype distribution, we use a Beta distribution that is capable of describing genotypes that manifest as individuals with two distinct phenotypes. Doing so, we find that environmental variation is not a necessary condition for the evolution of phenotypic heterogeneity, which can evolve as a form of specialization in a stable environment. There are two competing pressures driving the evolution of specialization: directional selection toward the evolution of phenotypic heterogeneity and disruptive selection toward genetically determined specialists. Because of the lack of a singular point in the two-dimensional adaptive dynamics and the fact that directional selection is a first order process, while disruptive selection is of second order, the evolution of phenotypic heterogeneity dominates and often precludes speciation. We find that branching, and therefore genetic specialization, occurs mainly under two conditions: the presence of a cost to maintaining a high phenotypic variance or when the effect of mutations is large. A cost to high phenotypic variance dampens the strength of selection toward phenotypic heterogeneity and, when sufficiently large, introduces a singular point into
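The role of the Beta phenotype distribution can be illustrated numerically (a toy sketch, not the paper's adaptive-dynamics model; the concave trade-off f(x) = sqrt(x) + sqrt(1-x) and the Beta parameters are assumptions chosen for illustration):

```python
import numpy as np

def mean_payoff(a, b, n=100_000, seed=0):
    """Population mean payoff when individual phenotypes x ~ Beta(a, b)
    split effort between two metabolic processes under a concave
    trade-off f(x) = sqrt(x) + sqrt(1 - x)."""
    rng = np.random.default_rng(seed)
    x = rng.beta(a, b, n)
    return float(np.mean(np.sqrt(x) + np.sqrt(1.0 - x)))

generalist = mean_payoff(20, 20)        # unimodal: everyone near x = 0.5
heterogeneous = mean_payoff(0.2, 0.2)   # bimodal: two distinct phenotypes
```

With both shape parameters below 1 the Beta density is bimodal, so a single genotype manifests as two distinct phenotypes; under this concave trade-off the bimodal population has a lower instantaneous mean payoff than the generalist one, which is why the evolution of heterogeneity in a stable environment is the nontrivial result the paper explains via the interplay of directional and disruptive selection.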
A resonance based model of biological evolution
Damasco, Achille; Giuliani, Alessandro
2017-04-01
We propose a coarse-grained physical model of evolution. The proposed model, at least in principle, is amenable to experimental verification, even if this looks like a conundrum: evolution is a unique historical process and the tape cannot be reversed and played again. Nevertheless, we can imagine a phenomenological scenario tailored upon state transitions in physical chemistry, in which different agents of evolution play the role of the elements of a state transition, like thermal noise or resonance effects. The abstract model we propose can be of help for sketching hypotheses and getting rid of some well-known features of natural history like the so-called Cambrian explosion. The possibility of an experimental proof of the model is discussed as well.
A Thermodynamic Point of View on Dark Energy Models
Directory of Open Access Journals (Sweden)
Vincenzo F. Cardone
2017-07-01
Full Text Available We present a conjugate analysis of two different dark energy models, namely the Barboza–Alcaniz parameterization and the phenomenologically-motivated Hobbit model, investigating both their agreement with observational data and their thermodynamical properties. We successfully fit a wide dataset including the Hubble diagram of Type Ia Supernovae, the Hubble rate expansion parameter as measured from cosmic chronometers, the baryon acoustic oscillation (BAO) standard ruler data and the Planck distance priors. This analysis allows us to constrain the model parameters, thus pointing at the region of the wide parameter space which is worth focusing on. As a novel step, we exploit the strong connection between gravity and thermodynamics to further check the models' viability by investigating their thermodynamical quantities. In particular, we study whether the cosmological scenario fulfills the generalized second law of thermodynamics, and moreover, we contrast the two models, asking whether the evolution of the total entropy is in agreement with the expectation for a closed system. As a general result, we discuss whether thermodynamic constraints can be a valid complementary way to both constrain dark energy models and differentiate among rival scenarios.
Biological evolution model with conditional mutation rates
Saakian, David B.; Ghazaryan, Makar; Bratus, Alexander; Hu, Chin-Kun
2017-05-01
We consider an evolution model in which the mutation rates depend on the structure of the population: the mutation rates from lower populated sequences to higher populated sequences are reduced. We have applied the Hamilton-Jacobi equation method to solve the model and calculate the mean fitness. We have found that the modulated mutation rates act to increase the mean fitness.
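A minimal two-type caricature of such population-dependent mutation rates can be written down directly (a toy deterministic sketch, not the Hamilton-Jacobi treatment of the paper; the fitness values, base rate, and halving rule are assumptions for illustration):

```python
def evolve(p0, w=(1.0, 1.2), mu=0.01, generations=300):
    """Two-type haploid model with frequencies p (type 0) and 1 - p (type 1).
    Mutation from the less populated type toward the more populated one is
    halved (the 'conditional' rate); selection then acts via fitness w."""
    p = p0
    for _ in range(generations):
        q = 1.0 - p
        mu01 = mu * (0.5 if p < q else 1.0)  # 0 -> 1 reduced when 0 is the minority
        mu10 = mu * (0.5 if q < p else 1.0)  # 1 -> 0 reduced when 1 is the minority
        p = p * (1.0 - mu01) + q * mu10      # mutation step
        q = 1.0 - p
        wbar = p * w[0] + q * w[1]
        p = p * w[0] / wbar                  # selection step (normalized)
    return p

p_final = evolve(0.9)
```

Here type 1 is fitter, so the population converges to a mutation-selection balance dominated by type 1; damping mutation out of the majority type shifts that balance toward higher mean fitness, in the spirit of the abstract's conclusion.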
Stochastic evolutions of dynamic traffic flow modeling and applications
Chen, Xiqun (Michael); Shi, Qixin
2015-01-01
This book reveals the underlying mechanisms of complexity and stochastic evolutions of traffic flows. Using Eulerian and Lagrangian measurements, the authors propose lognormal headway/spacing/velocity distributions and subsequently develop a Markov car-following model to describe drivers’ random choices concerning headways/spacings, putting forward a stochastic fundamental diagram model for wide scattering flow-density points. In the context of highway onramp bottlenecks, the authors present a traffic flow breakdown probability model and spatial-temporal queuing model to improve the stability and reliability of road traffic flows. This book is intended for researchers and graduate students in the fields of transportation engineering and civil engineering.
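The lognormal headway idea can be sketched in a few lines (a minimal illustration; the median headway and dispersion parameter are invented values, not taken from the book):

```python
import math
import random

def sample_headways(n, median=1.5, sigma=0.4, seed=0):
    """Draw vehicle time headways (seconds) from a lognormal distribution
    with the given median; the implied flow is 3600 / mean headway in
    vehicles per hour."""
    rng = random.Random(seed)
    h = [rng.lognormvariate(math.log(median), sigma) for _ in range(n)]
    flow = 3600.0 / (sum(h) / n)
    return h, flow

headways, flow = sample_headways(10_000)
```

The right-skew of the lognormal captures drivers' random headway choices (many short gaps, a long tail of large ones), which is one ingredient behind the wide scattering of flow-density points that the book's stochastic fundamental diagram addresses.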
Ciolek, Glenn E.; Königl, Arieh
1998-09-01
We present a numerical simulation of the dynamical collapse of a nonrotating, magnetic molecular cloud core and follow the core's evolution through the formation of a central point mass and its subsequent growth into a 1 M⊙ protostar. The epoch of point-mass formation (PMF) is investigated by a self-consistent extension of previously presented models of core formation and contraction in axisymmetric, self-gravitating, isothermal, magnetically supported interstellar molecular clouds. Prior to PMF, the core is dynamically contracting and is not well approximated by a quasi-static equilibrium model. Ambipolar diffusion, which plays a key role in the early evolution of the core, is unimportant during the dynamical pre-PMF collapse phase. However, the appearance of a central mass, through its effect on the gravitational field in the inner core regions, leads to a "revitalization" of ambipolar diffusion in the weakly ionized gas surrounding the central protostar. This process is so efficient that it leads to a decoupling of the field from the matter and results in an outward-propagating hydromagnetic C-type shock. The existence of an ambipolar diffusion-mediated shock of this type was predicted by Li & McKee, and we find that the basic shock structure given by their analytic model is well reproduced by our more accurate numerical results. Our calculation also demonstrates that ambipolar diffusion, rather than Ohmic diffusivity operating in the innermost core region, is the main field-decoupling mechanism responsible for driving the shock after PMF. The passage of the shock leads to a substantial redistribution, by ambipolar diffusion but possibly also by magnetic interchange, of the mass contained within the magnetic flux tubes in the inner core. In particular, ambipolar diffusion reduces the flux initially threading a collapsing ~1 M⊙ core by a factor ≳10^3 by the time this mass accumulates within the inner radius (≈7.3 AU) of our computational grid. This
Shaping asteroid models using genetic evolution (SAGE)
Bartczak, P.; Dudziński, G.
2018-02-01
In this work, we present SAGE (shaping asteroid models using genetic evolution), an asteroid modelling algorithm based solely on photometric lightcurve data. It produces non-convex shapes, orientations of the rotation axes and rotational periods of asteroids. The main concept behind a genetic evolution algorithm is to produce random populations of shapes and spin-axis orientations by mutating a seed shape and iterating the process until it converges to a stable global minimum. We tested SAGE on five artificial shapes. We also modelled asteroids 433 Eros and 9 Metis, since ground truth observations for them exist, allowing us to validate the models. We compared the derived shape of Eros with the NEAR Shoemaker model and that of Metis with adaptive optics and stellar occultation observations since other models from various inversion methods were available for Metis.
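The mutate-and-select loop at the heart of such a genetic evolution algorithm can be reduced to a toy sketch (fitting a parameter vector to a target signal by least squares; the population size, mutation scale, and target are assumptions, not SAGE's actual shape representation or lightcurve misfit):

```python
import random

def sage_like_fit(target, n_pop=30, n_gen=200, sigma=0.1, seed=0):
    """Genetic-evolution fit in the spirit of SAGE, on a toy problem:
    mutate a seed parameter vector, keep the best match to an observed
    signal, and iterate until the misfit converges."""
    rng = random.Random(seed)
    def misfit(s):
        return sum((a - b) ** 2 for a, b in zip(s, target))
    best = [0.0] * len(target)                       # the 'seed shape'
    for _ in range(n_gen):
        population = [[x + rng.gauss(0.0, sigma) for x in best]
                      for _ in range(n_pop)]          # mutated offspring
        population.append(best)                       # elitism: keep current best
        best = min(population, key=misfit)
    return best, misfit(best)

target = [0.2, 0.5, 0.9, 0.4]
fit, err = sage_like_fit(target)
```

Elitism makes the misfit non-increasing, so the population converges toward a stable minimum, which is the convergence behaviour the abstract relies on for the real, much higher-dimensional shape problem.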
Political model of social evolution.
Acemoglu, Daron; Egorov, Georgy; Sonin, Konstantin
2011-12-27
Almost all democratic societies evolved socially and politically out of authoritarian and nondemocratic regimes. These changes not only altered the allocation of economic resources in society but also the structure of political power. In this paper, we develop a framework for studying the dynamics of political and social change. The society consists of agents that care about current and future social arrangements and economic allocations; allocation of political power determines who has the capacity to implement changes in economic allocations and future allocations of power. The set of available social rules and allocations at any point in time is stochastic. We show that political and social change may happen without any stochastic shocks or as a result of a shock destabilizing an otherwise stable social arrangement. Crucially, the process of social change is contingent (and history-dependent): the timing and sequence of stochastic events determine the long-run equilibrium social arrangements. For example, the extent of democratization may depend on how early uncertainty about the set of feasible reforms in the future is resolved.
Political model of social evolution
Acemoglu, Daron; Egorov, Georgy; Sonin, Konstantin
2011-01-01
Almost all democratic societies evolved socially and politically out of authoritarian and nondemocratic regimes. These changes not only altered the allocation of economic resources in society but also the structure of political power. In this paper, we develop a framework for studying the dynamics of political and social change. The society consists of agents that care about current and future social arrangements and economic allocations; allocation of political power determines who has the capacity to implement changes in economic allocations and future allocations of power. The set of available social rules and allocations at any point in time is stochastic. We show that political and social change may happen without any stochastic shocks or as a result of a shock destabilizing an otherwise stable social arrangement. Crucially, the process of social change is contingent (and history-dependent): the timing and sequence of stochastic events determine the long-run equilibrium social arrangements. For example, the extent of democratization may depend on how early uncertainty about the set of feasible reforms in the future is resolved. PMID:22198760
Modeling and pedagogic techniques: points of tangency
Yadrovskaya, M.
2010-01-01
Education quality is connected with effective teaching techniques. One way of designing such techniques is the combined use of modeling elements to realize the informational, cybernetic, and activity-based branches of education. Education techniques with elements of modeling (ETEM), built on the proposed approach, help to overcome the disadvantages of education technologization and to deliver personally oriented, activity-based, creative learning.
Modelling persistence in annual Australia point rainfall
Directory of Open Access Journals (Sweden)
J. P. Whiting
2003-01-01
Full Text Available The annual rainfall time series for Sydney from 1859 to 1999 is analysed. Clear evidence of nonstationarity is presented, but substantial evidence for persistence or hidden states is more elusive. A test of the hypothesis that a hidden state Markov model reduces to a mixture distribution is presented. There is strong evidence of a correlation between annual rainfall and climate indices. Strong evidence of persistence of one of these indices, the Pacific Decadal Oscillation (PDO), is presented, together with a demonstration that this is better modelled by fractional differencing than by a hidden state Markov model. It is shown that conditioning the logarithm of rainfall on the PDO, the Southern Oscillation Index (SOI), and their interaction provides realistic simulation of rainfall that matches observed statistics. Similar simulation models are presented for Brisbane, Melbourne and Perth. Keywords: hydrological persistence, hidden state Markov models, fractional differencing, PDO, SOI, Australian rainfall
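The fractional-differencing model favoured in this abstract admits a compact sketch: the filter (1 − B)^d has binomial-expansion weights that decay slowly with lag, which is what produces long-memory persistence. A minimal implementation, assuming an illustrative memory parameter d = 0.3 (the abstract does not report the fitted value):

```python
def fracdiff_weights(d, n):
    """Weights of the binomial expansion of (1 - B)**d, truncated at lag n."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - d) / k)  # recursion for the binomial coefficients
    return w

def fracdiff(series, d):
    """Apply the truncated fractional-differencing filter to a series."""
    w = fracdiff_weights(d, len(series))
    return [sum(w[k] * series[t - k] for k in range(t + 1))
            for t in range(len(series))]

# weights decay hyperbolically rather than geometrically: long memory
print(fracdiff_weights(0.3, 4))  # [1.0, -0.3, -0.105..., -0.0595...]
```

The slow hyperbolic decay of the weights is what distinguishes this persistence mechanism from the geometric memory of a hidden state Markov model.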
Pseudo-dynamic source modelling with 1-point and 2-point statistics of earthquake source parameters
Song, S. G.
2013-12-24
Ground motion prediction is an essential element in seismic hazard and risk analysis. Empirical ground motion prediction approaches have been widely used in the community, but efficient simulation-based ground motion prediction methods are needed to complement empirical approaches, especially in the regions with limited data constraints. Recently, dynamic rupture modelling has been successfully adopted in physics-based source and ground motion modelling, but it is still computationally demanding and many input parameters are not well constrained by observational data. Pseudo-dynamic source modelling keeps the form of kinematic modelling with its computational efficiency, but also tries to emulate the physics of source process. In this paper, we develop a statistical framework that governs the finite-fault rupture process with 1-point and 2-point statistics of source parameters in order to quantify the variability of finite source models for future scenario events. We test this method by extracting 1-point and 2-point statistics from dynamically derived source models and simulating a number of rupture scenarios, given target 1-point and 2-point statistics. We propose a new rupture model generator for stochastic source modelling with the covariance matrix constructed from target 2-point statistics, that is, auto- and cross-correlations. Our sensitivity analysis of near-source ground motions to 1-point and 2-point statistics of source parameters provides insights into relations between statistical rupture properties and ground motions. We observe that larger standard deviation and stronger correlation produce stronger peak ground motions in general. The proposed new source modelling approach will contribute to understanding the effect of earthquake source on near-source ground motion characteristics in a more quantitative and systematic way.
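The rupture-generator idea described here — impose target 1-point statistics (mean, standard deviation) and 2-point statistics (auto-/cross-correlation) through a covariance matrix — can be sketched with a Cholesky factorization that colours white noise. The exponential correlation model and every parameter value below are illustrative assumptions, not the statistics the paper extracts from dynamically derived source models:

```python
import math, random

def exp_covariance(n, dx, sigma, corr_len):
    """Covariance matrix from an assumed exponential autocorrelation model
    (a 2-point statistic) on a 1-D fault of n cells spaced dx apart."""
    return [[sigma**2 * math.exp(-abs(i - j) * dx / corr_len)
             for j in range(n)] for i in range(n)]

def cholesky(C):
    """Lower-triangular L with L @ L.T == C."""
    n = len(C)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = (math.sqrt(C[i][i] - s) if i == j
                       else (C[i][j] - s) / L[j][j])
    return L

def correlated_field(mean, C, rng):
    """1-point (mean, sigma) + 2-point (C) statistics -> one realisation."""
    L = cholesky(C)
    z = [rng.gauss(0, 1) for _ in C]
    return [mean + sum(L[i][k] * z[k] for k in range(i + 1))
            for i in range(len(C))]

C = exp_covariance(50, dx=0.5, sigma=0.4, corr_len=3.0)
slip = correlated_field(1.0, C, random.Random(1))  # one stochastic slip realisation
```

Each call draws a different rupture scenario sharing the same target statistics, which is the essence of stochastic source modelling for future events.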
Toke Point, Washington Tsunami Forecast Grids for MOST Model
National Oceanic and Atmospheric Administration, Department of Commerce — The Toke Point, Washington Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model....
Sand Point, Alaska Tsunami Forecast Grids for MOST Model
National Oceanic and Atmospheric Administration, Department of Commerce — The Sand Point, Alaska Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model....
Computational modelling of evolution: ecosystems and language
Lipowski, Adam
2008-01-01
Recently, computational modelling has become a very important research tool that enables us to study problems that for decades evaded scientific analysis. Evolutionary systems are certainly examples of such problems: they are composed of many units that might reproduce, diffuse, mutate, die, or, in some cases, communicate. These processes might be of some adaptive value; they influence each other and occur on various time scales. That is why such systems are so difficult to study. In this paper we briefly review some computational approaches, as well as our contributions, to the evolution of ecosystems and language. We start from Lotka-Volterra equations and the modelling of simple two-species prey-predator systems. Such systems are a canonical example for studying oscillatory behaviour in competitive populations. Then we describe various approaches to studying the long-term evolution of multi-species ecosystems. We emphasize the need to use models that take into account both ecological and evolutionary processes...
Sand Point, Alaska Coastal Digital Elevation Model
National Oceanic and Atmospheric Administration, Department of Commerce — NOAA's National Geophysical Data Center (NGDC) is building high-resolution digital elevation models (DEMs) for select U.S. coastal regions. These integrated...
Air pollution model for point source
Directory of Open Access Journals (Sweden)
Jozef Mačala
2006-12-01
Full Text Available Mathematical models of air pollution have broad practical application. They are irreplaceable wherever the state of air pollution cannot be determined by measuring the concentration of a noxious agent. By creating a suitable model of air pollution we can not only assess the state of air quality but also predict the pollution that can occur under given atmospheric conditions. The created model is a suitable tool for controlling the activity of TEKO and for evaluating the quality of air in a monitored area of the city of Košice. Sufficient knowledge of the given field is a precondition; the input data and information necessary for creating such a model of polluted air are another important factor.
Model-Driven Software Evolution : A Research Agenda
Van Deursen, A.; Visser, E.; Warmer, J.
2007-01-01
Software systems need to evolve, and systems built using model-driven approaches are no exception. What complicates model-driven engineering is that it requires multiple dimensions of evolution. In regular evolution, the modeling language is used to make the changes. In meta-model evolution, changes
Modelling landscape evolution at the flume scale
Cheraghi, Mohsen; Rinaldo, Andrea; Sander, Graham C.; Barry, D. Andrew
2017-04-01
The ability of a large-scale Landscape Evolution Model (LEM) to simulate the soil surface morphological evolution as observed in a laboratory flume (1-m × 2-m surface area) was investigated. The soil surface was initially smooth, and was subjected to heterogeneous rainfall in an experiment designed to avoid rill formation. Low-cohesive fine sand was placed in the flume while the slope and relief height were 5 % and 20 cm, respectively. Non-uniform rainfall with an average intensity of 85 mm h⁻¹ and a standard deviation of 26 % was applied to the sediment surface for 16 h. We hypothesized that the complex overland water flow can be represented by a drainage discharge network, which was calculated via the micro-morphology and the rainfall distribution. Measurements included high resolution Digital Elevation Models that were captured at intervals during the experiment. The calibrated LEM captured the migration of the main flow path from the low precipitation area into the high precipitation area. Furthermore, both model and experiment showed a steep transition zone in soil elevation that moved upstream during the experiment. We conclude that the LEM is applicable under non-uniform rainfall and in the absence of surface incisions, thereby extending its applicability beyond that shown in previous applications. Keywords: Numerical simulation, Flume experiment, Particle Swarm Optimization, Sediment transport, River network evolution model.
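The hypothesis that overland flow can be represented by a drainage-discharge network computed from the micro-morphology and the rainfall distribution can be sketched as a toy flow-accumulation routine: each cell receives its local (possibly non-uniform) rainfall and passes its accumulated discharge to its steepest downhill neighbour, visiting cells from high to low. The 4-neighbour routing and the tiny grid below are illustrative only; the calibrated LEM itself is not reproduced here.

```python
def flow_accumulation(elev, rain):
    """Accumulate rainfall-driven discharge down the steepest descent."""
    rows, cols = len(elev), len(elev[0])
    q = [[rain[i][j] for j in range(cols)] for i in range(rows)]
    # visit cells from highest to lowest so upstream water arrives first
    order = sorted(((elev[i][j], i, j) for i in range(rows) for j in range(cols)),
                   reverse=True)
    for z, i, j in order:
        best, target = 0.0, None
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < rows and 0 <= nj < cols and z - elev[ni][nj] > best:
                best, target = z - elev[ni][nj], (ni, nj)  # steepest drop so far
        if target:
            q[target[0]][target[1]] += q[i][j]  # route discharge downhill
    return q

elev = [[3, 2, 1],
        [3, 2, 0],
        [3, 2, 1]]
rain = [[1] * 3 for _ in range(3)]
q = flow_accumulation(elev, rain)  # q[1][2] == 9: all nine units reach the sink
```

Weighting `rain` non-uniformly reproduces, in miniature, the migration of the main flow path toward the high-precipitation area.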
Self-exciting point process in modeling earthquake occurrences
Pratiwi, H.; Slamet, I.; Saputro, D. R. S.; Respatiwulan
2017-06-01
In this paper, we present a procedure for modeling earthquakes based on a spatial-temporal point process. The magnitude distribution is expressed as a truncated exponential, and the event frequency is modeled with a spatial-temporal point process that is characterized uniquely by its associated conditional intensity process. Earthquakes can be regarded as point patterns with a temporal clustering feature, so we use a self-exciting point process to model the conditional intensity function. The choice of main shocks is conducted via the window algorithm of Gardner and Knopoff, and the model can be fitted by the maximum likelihood method for three random variables.
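A self-exciting conditional intensity of the kind described can be sketched with an exponential kernel and simulated by Ogata's thinning algorithm; each event temporarily raises the intensity, producing the temporal clustering the abstract mentions. All parameter values here are illustrative, not fitted values from the paper:

```python
import math, random

def intensity(t, events, mu, alpha, beta):
    """Conditional intensity: background rate mu plus exponentially fading
    contributions from past events (self-excitation)."""
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)

def simulate_hawkes(t_max, mu=0.5, alpha=0.8, beta=1.2, seed=0):
    """Ogata thinning: propose candidates at an upper-bound rate, accept
    each with probability lambda(t) / lambda_bar."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while True:
        lam_bar = intensity(t, events, mu, alpha, beta) + alpha  # valid upper bound
        t += rng.expovariate(lam_bar)
        if t >= t_max:
            return events
        if rng.random() * lam_bar <= intensity(t, events, mu, alpha, beta):
            events.append(t)  # accepted: a background or triggered event

events = simulate_hawkes(100.0)  # event times cluster, unlike a Poisson process
```

Stability requires the branching ratio alpha/beta < 1 (here 0.67), so aftershock cascades die out.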
Research on stratified evolution of composite materials under four-point bending loading
Hao, M. J.; You, Q. J.; Zheng, J. C.; Yue, Z.; Xie, Z. P.
2017-12-01
In order to explore the effect of stratification evolution and delamination on the load capacity and service life of composite materials under four-point bending loading, artificial defects were introduced at different positions. A four-point bending test was carried out, the whole process was recorded by acoustic emission, and the damage degree of the composite layers was judged from the specimen's impact accumulation-time-amplitude history chart, load-time-relative energy history chart, and acoustic emission impact signal location map. The results show that stratification defects near the surface of the specimen accelerate material failure and crack expansion. The location of the delamination defects changes the bending performance of the composites to a great extent: the closer the stratification defects are to the surface of the specimen, the greater the damage and the worse the service capacity of the specimen.
Global models of planet formation and evolution
Mordasini, C.; Mollière, P.; Dittkrist, K.-M.; Jin, S.; Alibert, Y.
2015-04-01
Despite the strong increase in observational data on extrasolar planets, the processes that led to the formation of these planets are still not well understood. However, thanks to the high number of extrasolar planets that have been discovered, it is now possible to look at the planets as a population that puts statistical constraints on theoretical formation models. A method that uses these constraints is planetary population synthesis where synthetic planetary populations are generated and compared to the actual population. The key element of the population synthesis method is a global model of planet formation and evolution. These models directly predict observable planetary properties based on properties of the natal protoplanetary disc, linking two important classes of astrophysical objects. To do so, global models build on the simplified results of many specialized models that address one specific physical mechanism. We thoroughly review the physics of the sub-models included in global formation models. The sub-models can be classified as models describing the protoplanetary disc (of gas and solids), those that describe one (proto)planet (its solid core, gaseous envelope and atmosphere), and finally those that describe the interactions (orbital migration and N-body interaction). We compare the approaches taken in different global models, discuss the links between specialized and global models, and identify physical processes that require improved descriptions in future work. We then shortly address important results of planetary population synthesis like the planetary mass function or the mass-radius relationship. With these statistical results, the global effects of physical mechanisms occurring during planet formation and evolution become apparent, and specialized models describing them can be put to the observational test. Owing to their nature as meta models, global models depend on the results of specialized models, and therefore on the development of
Modeling olfactory bulb evolution through primate phylogeny.
Heritage, Steven
2014-01-01
Adaptive characterizations of primates have usually included a reduction in olfactory sensitivity. However, this inference of derivation and directionality assumes an ancestral state of olfaction, usually by comparison to a group of extant non-primate mammals. Thus, the accuracy of the inference depends on the assumed ancestral state. Here I present a phylogenetic model of continuous trait evolution that reconstructs olfactory bulb volumes for ancestral nodes of primates and mammal outgroups. Parent-daughter comparisons suggest that, relative to the ancestral euarchontan, the crown-primate node is plesiomorphic and that derived reduction in olfactory sensitivity is an attribute of the haplorhine lineage. The model also suggests a derived increase in olfactory sensitivity at the strepsirrhine node. This oppositional diversification of the strepsirrhine and haplorhine lineages from an intermediate and non-derived ancestor is inconsistent with a characterization of graded reduction through primate evolution.
Modelling sediment clasts transport during landscape evolution
Carretier, Sébastien; Martinod, Pierre; Reich, Martin; Godderis, Yves
2016-03-01
Over thousands to millions of years, the landscape evolution is predicted by models based on fluxes of eroded, transported and deposited material. The laws describing these fluxes, corresponding to averages over many years, are difficult to prove with the available data. On the other hand, sediment dynamics are often tackled by studying the distribution of certain grain properties in the field (e.g. heavy metals, detrital zircons, 10Be in gravel, magnetic tracers). There is a gap between landscape evolution models based on fluxes and these field data on individual clasts, which prevents the latter from being used to calibrate the former. Here we propose an algorithm coupling the landscape evolution with mobile clasts. Our landscape evolution model predicts local erosion, deposition and transfer fluxes resulting from hillslope and river processes. Clasts of any size are initially spread in the basement and are detached, moved and deposited according to probabilities using these fluxes. Several river and hillslope laws are studied. Although the resulting mean transport rate of the clasts does not depend on the time step or the model cell size, our approach is limited by the fact that their scattering rate is cell-size-dependent. Nevertheless, both their mean transport rate and the shape of the scattering-time curves fit the predictions. Different erosion-transport laws generate different clast movements. These differences show that studying the tracers in the field may provide a way to establish these laws on the hillslopes and in the rivers. Possible applications include the interpretation of cosmogenic nuclides in individual gravel deposits, provenance analyses, placers, sediment coarsening or fining, the relationship between magnetic tracers in rivers and the river planform, and the tracing of weathered sediment.
Conductance histogram evolution of an EC-MCBJ fabricated Au atomic point contact
Energy Technology Data Exchange (ETDEWEB)
Yang Yang; Liu Junyang; Chen Zhaobin; Tian Jinghua; Jin Xi; Liu Bo; Yang Fangzu; Tian Zhongqun [State Key Laboratory of Physical Chemistry of Solid Surfaces and Department of Chemistry, College of Chemistry and Chemical Engineering, Xiamen University, Xiamen 361005 (China); Li Xiulan; Tao Nongjian [Center for Bioelectronics and Biosensors, Biodesign Institute, Department of Electrical Engineering, Arizona State University, Tempe, AZ 85287-6206 (United States); Luo Zhongzi; Lu Miao, E-mail: zqtian@xmu.edu.cn [Micro-Electro-Mechanical Systems Research Center, Pen-Tung Sah Micro-Nano Technology Institute, Xiamen University, Xiamen 361005 (China)
2011-07-08
This work presents a study of Au conductance quantization based on a combined electrochemical deposition and mechanically controllable break junction (MCBJ) method. We describe the microfabrication process and discuss improved features of our microchip structure compared to the previous one. The improved structure prolongs the available life of the microchip and also increases the success rate of the MCBJ experiment. Stepwise changes in the current were observed at the last stage of atomic point contact breakdown, and conductance histograms were constructed. The evolution of the 1G₀ peak height in the conductance histograms was used to investigate the probability of formation of an atomic point contact. It has been shown that the success rate in forming an atomic point contact can be improved by decreasing the stretching speed and the degree to which the two electrodes are brought into contact. The repeated breakdown and formation over thousands of cycles led to a distinctive increase of the 1G₀ peak height in the conductance histograms, and this increased probability of forming a single atomic point contact is discussed.
Microstructural and continuum evolution modeling of sintering.
Energy Technology Data Exchange (ETDEWEB)
Braginsky, Michael V.; Olevsky, Eugene A. (San Diego State University, San Diego, CA); Johnson, D. Lynn (Northwestern University, Evanston, IL); Tikare, Veena; Garino, Terry J.; Arguello, Jose Guadalupe, Jr.
2003-12-01
All ceramics and powder metals, including the ceramic components that Sandia uses in critical weapons components such as PZT voltage bars and current stacks, multi-layer ceramic MET's, alumina/molybdenum and alumina cermets, and ZnO varistors, are manufactured by sintering. Sintering is a critical, possibly the most important, processing step during the manufacturing of ceramics. The microstructural evolution, the macroscopic shrinkage, and the shape distortions during sintering control the engineering performance of the resulting ceramic component. Yet modeling and prediction of sintering behavior is in its infancy, lagging far behind other manufacturing models, such as powder synthesis and powder compaction models, and behind models that predict engineering properties and reliability. In this project, we developed a model capable of simulating microstructural evolution during sintering, providing constitutive equations for macroscale simulation of shrinkage and distortion during sintering, and we developed a macroscale sintering simulation capability in JAS3D. The mesoscale model can simulate microstructural evolution in a complex powder compact of hundreds or even thousands of particles of arbitrary shape and size by 1. curvature-driven grain growth, 2. pore migration and coalescence by surface diffusion, and 3. vacancy formation, grain boundary diffusion and annihilation. This model was validated by comparing predictions of the simulation to analytical predictions for simple geometries. The model was then used to simulate sintering in complex powder compacts. Sintering stress and material viscous moduli were obtained from the simulations. These constitutive equations were then used in macroscopic FEM simulations of shrinkage and shape changes. The continuum theory of sintering embodied in the constitutive description of Skorohod and Olevsky was combined with results from microstructure evolution simulations to model shrinkage
Point Reyes, California Tsunami Forecast Grids for MOST Model
National Oceanic and Atmospheric Administration, Department of Commerce — The Point Reyes, California Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST)...
FIRST PRISMATIC BUILDING MODEL RECONSTRUCTION FROM TOMOSAR POINT CLOUDS
National Research Council Canada - National Science Library
Y. Sun; M. Shahzad; X. Zhu
2016-01-01
This paper demonstrates for the first time the potential of explicitly modelling the individual roof surfaces to reconstruct 3-D prismatic building models using spaceborne tomographic synthetic aperture radar (TomoSAR) point clouds...
Book review: Statistical Analysis and Modelling of Spatial Point Patterns
DEFF Research Database (Denmark)
Møller, Jesper
2009-01-01
Statistical Analysis and Modelling of Spatial Point Patterns by J. Illian, A. Penttinen, H. Stoyan and D. Stoyan. Wiley (2008), ISBN 9780470014912.
Brand Equity Evolution: a System Dynamics Model
Directory of Open Access Journals (Sweden)
Edson Crescitelli
2009-04-01
Full Text Available One of the greatest challenges in brand management lies in monitoring brand equity over time. This paper aims to present a simulation model able to represent this evolution. The model was drawn on brand equity concepts developed by Aaker and Joachimsthaler (2000), using the system dynamics methodology. The use of computational dynamic models aims to create new sources of information able to sensitize academics and managers alike to the dynamic implications of their brand management. As a result, an easily implementable model was generated, capable of executing continuous scenario simulations by surveying causal relations among the variables that explain brand equity. Moreover, the existence of a number of system modeling tools will allow extensive application of the concepts used in this study in practical situations, both in professional and educational settings.
Quantum-like model of partially directed evolution.
Melkikh, Alexey V; Khrennikov, Andrei
2017-05-01
The background of this study is that models of the evolution of living systems are based mainly on the evolution of replicators and cannot explain many of the properties of biological systems, such as the existence of the sexes, molecular exaptation and others. The purpose of this study is to build a complete model of the evolution of organisms based on a combination of quantum-like models and models based on partial directivity of evolution. We also used optimal control theory for evolution modeling. We found that partial directivity of evolution is necessary to explain such properties of an evolving system as the stability of evolutionary strategies, aging and death, and the presence of the sexes. The proposed model represents a systems approach to the evolution of species and will facilitate the understanding of evolution and biology as a whole. Copyright © 2016 Elsevier Ltd. All rights reserved.
Image to Point Cloud Method of 3D-MODELING
Chibunichev, A. G.; Galakhov, V. P.
2012-07-01
This article describes a method for constructing 3D models of objects (buildings, monuments) based on digital images and a point cloud obtained by a terrestrial laser scanner. The first step is the automated determination of the exterior orientation parameters of a digital image, which requires finding corresponding points between the image and the point cloud. Before the search for corresponding points, a quasi-image of the point cloud is generated; the SIFT algorithm is then applied to the quasi-image and the real image to find corresponding points, from which the exterior orientation parameters of the image are calculated. The second step is construction of the vector object model. Vectorization is performed by an operator in an interactive mode using a single image, and the spatial coordinates of the model are calculated automatically from the point cloud. In addition, automatic edge detection with interactive editing is available: edge detection is performed on the point cloud and on the image, with subsequent identification of the correct edges. Experimental studies of the method have demonstrated its efficiency in the case of building facade modeling.
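The SIFT correspondence step amounts to nearest-neighbour descriptor matching, commonly filtered with Lowe's ratio test to reject ambiguous matches. A toy sketch, where short 2-D vectors stand in for real 128-D SIFT descriptors:

```python
def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    """Keep a->b matches whose best distance is clearly below the second best."""
    def dist2(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v))
    matches = []
    for i, d in enumerate(desc_a):
        # rank candidates in b by squared Euclidean distance to descriptor d
        ranked = sorted(range(len(desc_b)), key=lambda j: dist2(d, desc_b[j]))
        best, second = ranked[0], ranked[1]
        if dist2(d, desc_b[best]) < (ratio ** 2) * dist2(d, desc_b[second]):
            matches.append((i, best))  # unambiguous correspondence
    return matches

# toy descriptors: a0 matches b1 unambiguously; a1 is ambiguous and rejected
A = [(0.0, 1.0), (5.0, 5.0)]
B = [(9.0, 9.0), (0.1, 1.0), (5.1, 5.0), (5.0, 5.1)]
print(ratio_test_matches(A, B))  # [(0, 1)]
```

The surviving correspondences are exactly the kind of input a resection step needs to solve for the image's exterior orientation parameters.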
Improved Systematic Pointing Error Model for the DSN Antennas
Rochblatt, David J.; Withington, Philip M.; Richter, Paul H.
2011-01-01
New pointing models have been developed for large reflector antennas whose construction is founded on an elevation-over-azimuth mount. At JPL, the new models were applied to the Deep Space Network (DSN) 34-meter antenna subnet for correction of systematic pointing errors; they achieved significant improvement in performance at Ka-band (32 GHz) and X-band (8.4 GHz). The new models provide pointing improvements relative to the traditional models by a factor of two to three, which translates to approximately 3-dB performance improvement at Ka-band. For radio science experiments where blind pointing performance is critical, the new models provide an enabling technology. The model extends the traditional physical models with higher-order mathematical terms, thereby increasing the resolution of the model for a better fit to the underlying systematic imperfections that are the cause of antenna pointing errors. The philosophy of the traditional model was that all mathematical terms in the model must be traced to a physical phenomenon causing antenna pointing errors. The traditional physical terms are: antenna axis tilts, gravitational flexure, azimuth collimation, azimuth encoder fixed offset, azimuth and elevation skew, elevation encoder fixed offset, residual refraction, azimuth encoder scale error, and antenna pointing de-rotation terms for beam waveguide (BWG) antennas. Besides the addition of spherical harmonics terms, the new models differ from the traditional ones in that the coefficients for the cross-elevation and elevation corrections are completely independent and may be different, while in the traditional model some of the terms are identical. In addition, the new software allows for all-sky or mission-specific model development, and can utilize the previously used model as an a priori estimate for the development of the updated models.
Constraints and entropy in a model of network evolution
Tee, Philip; Wakeman, Ian; Parisis, George; Dawes, Jonathan; Kiss, István Z.
2017-11-01
Barabási and Albert's "Scale Free" model is the starting point for much of the accepted theory of the evolution of real-world communication networks. Careful comparison of the theory with a wide range of real-world networks, however, indicates that the model is, in some cases, only a rough approximation to the dynamical evolution of real networks. In particular, the exponent γ of the power-law degree distribution is predicted by the model to be exactly 3, whereas in a number of real-world networks it has values between 1.2 and 2.9. In addition, the degree distributions of real networks exhibit cutoffs at high node degree, which indicates the existence of maximal node degrees for these networks. In this paper we propose a simple extension to the "Scale Free" model, which offers better agreement with the experimental data. This improvement is satisfying, but the model still does not explain why the attachment probabilities should favor high-degree nodes, or indeed how constraints arise in non-physical networks. Using recent advances in the analysis of the entropy of graphs at the node level, we propose a first-principles derivation of the "Scale Free" and "constraints" models from thermodynamic principles, and demonstrate that both preferential attachment and constraints could arise as a natural consequence of the second law of thermodynamics.
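The "constraints" idea can be illustrated as preferential attachment with a hard degree cutoff, which caps hub growth and produces the high-degree cutoff the abstract describes. The refusal rule below is an assumed stand-in for illustration, not the authors' exact formulation:

```python
import random

def grow_network(n_nodes, m=2, max_degree=20, rng=random.Random(42)):
    """Preferential attachment: P(attach to v) is proportional to degree(v),
    implemented by sampling from a pool where v appears once per edge end.
    Constraint: nodes at max_degree refuse further links."""
    degree = {0: m, 1: m}          # seed: two nodes joined by m parallel edges
    targets_pool = [0] * m + [1] * m
    for new in range(2, n_nodes):
        chosen = set()
        while len(chosen) < m:
            t = rng.choice(targets_pool)   # degree-proportional sampling
            if degree[t] < max_degree:     # saturated nodes refuse new links
                chosen.add(t)
        for t in chosen:
            degree[t] += 1
            targets_pool.append(t)
        degree[new] = m
        targets_pool.extend([new] * m)
    return degree

deg = grow_network(500)
assert max(deg.values()) <= 20  # the constraint caps the hubs
```

Without the cutoff this is the classic Barabási-Albert process (γ = 3); the refusal rule redistributes links away from saturated hubs, truncating the degree distribution.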
Culinary evolution models for Indian cuisines
Jain, Anupam
2015-01-01
Culinary systems, the practice of preparing a refined combination of ingredients that is palatable as well as socially acceptable, are examples of complex dynamical systems. They evolve over time and are affected by a large number of factors. Modeling the dynamic nature of evolution of regional cuisines may provide us a quantitative basis and exhibit underlying processes that have driven them into the present day status. This is especially important given that the potential culinary space is practically infinite because of possible number of ingredient combinations as recipes. Such studies also provide a means to compare and contrast cuisines and to unearth their therapeutic value. Herein we provide rigorous analysis of modeling eight diverse Indian regional cuisines, while also highlighting their uniqueness, and a comparison among those models at the level of flavor compounds which opens up molecular level studies associating them especially with non-communicable diseases such as diabetes.
Mathematical models of ecology and evolution
DEFF Research Database (Denmark)
Zhang, Lai
2012-01-01
...life-history processes: a net-assimilation mechanism and a net-reproduction mechanism of size dependence, using a simple model comprising a size-structured consumer (Daphnia) and an unstructured resource (algae). It is found that, in contrast to the former mechanism, the latter tends to destabilize population ... dynamics, but as a trade-off promotes species survival by shortening the juvenile delay between birth and the onset of reproduction. Paper II compares the size-spectrum and food-web representations of communities using a two-trait (body size and habitat location) based unstructured population model of Lotka ... based size-structured population model, that is, interference in foraging, maintenance, survival, and recruitment. Their impacts on the ecology and evolution of size-structured populations and communities are explored. Ecologically, interference affects population demographic properties either negatively...
A MARKED POINT PROCESS MODEL FOR VEHICLE DETECTION IN AERIAL LIDAR POINT CLOUDS
Directory of Open Access Journals (Sweden)
A. Börcs
2012-07-01
Full Text Available In this paper we present an automated method for vehicle detection in LiDAR point clouds of crowded urban areas collected from an aerial platform. We assume that the input cloud is unordered, but it contains additional intensity and return number information which are jointly exploited by the proposed solution. Firstly, the 3-D point set is segmented into ground, vehicle, building roof, vegetation and clutter classes. Then the points with the corresponding class labels and intensity values are projected to the ground plane, where the optimal vehicle configuration is described by a Marked Point Process (MPP model of 2-D rectangles. Finally, the Multiple Birth and Death algorithm is utilized to find the configuration with the highest confidence.
The ‘division of labour’ model of eye evolution
National Research Council Canada - National Science Library
Detlev Arendt; Harald Hausen; Günter Purschke
2009-01-01
The ‘division of labour’ model of eye evolution is elaborated here. We propose that the evolution of complex, multicellular animal eyes started from a single, multi-functional cell type that existed in metazoan ancestors...
Disk galaxy formation and evolution: models up to intermediate redshifts
Firmani, Claudio; Avila-Reese, Vladimir
1999-06-01
Making use of a seminumerical method, we develop a scenario of disk galaxy formation and evolution in the framework of inflationary cold dark matter (CDM) cosmologies. Within the virializing dark matter halos, disks in centrifugal equilibrium are built up, and their galactic evolution is followed through an approach that considers the gravitational interactions among the galaxy components, the turbulence and energy balance of the ISM, the star formation (SF) process due to disk gravitational instabilities, stellar evolution, and the secular formation of a bulge. We find that the main properties and correlations of disk galaxies are determined by the mass, the hierarchical mass aggregation history, and the primordial angular momentum. The models follow the same trends across the Hubble sequence as the observed galaxies. The predicted TF relation is in good agreement with the observations except for standard CDM. While the slope of this relation remains almost constant up to intermediate redshifts, its zero-point decreases in the H-band and slightly increases in the B-band. A maximum in the SF rate for most of the models is attained at z ~ 1.5-2.5.
Change-centric Model for Web Service Evolution
Zuo, Wei; Aïcha-Nabila, Benharkat; Amghar, Youssef
2014-01-01
International audience; Web service is subject to frequent changes during its lifecycle. Web service evolution is a widely discussed topic, and many related problems arise from it, such as Web service adaptation, Web service versioning and Web service change management. To treat these issues efficiently, a complete evolution model for Web service should be built. In this paper, we introduce our change-centric model for Web service evolution and how we us...
Simple neoclassical point model for transport and scaling in EBT
Energy Technology Data Exchange (ETDEWEB)
Hedrick, C.L.; Jaeger, E.F.; Spong, D.A.; Guest, G.E.; Krall, N.A.; McBride, J.B.; Stuart, G.W.
1977-04-01
A simple neoclassical point model is presented for the ELMO Bumpy Torus experiment. Solutions for steady state are derived. Comparison with experimental observations is made and reasonable agreement is obtained.
UNDERWATER 3D MODELING: IMAGE ENHANCEMENT AND POINT CLOUD FILTERING
I. Sarakinou; Papadimitriou, K; O. Georgoula; Patias, P.
2016-01-01
This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combined effects of manually editing the images’ radiometry (captured at shallow depths) and of selecting parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used f...
Modeling Co-evolution of Speech and Biology.
de Boer, Bart
2016-04-01
Two computer simulations are investigated that model interaction of cultural evolution of language and biological evolution of adaptations to language. Both are agent-based models in which a population of agents imitates each other using realistic vowels. The agents evolve under selective pressure for good imitation. In one model, the evolution of the vocal tract is modeled; in the other, a cognitive mechanism for perceiving speech accurately is modeled. In both cases, biological adaptations to using and learning speech evolve, even though the system of speech sounds itself changes at a more rapid time scale than biological evolution. However, the fact that the available acoustic space is used maximally (a self-organized result of cultural evolution) is constant, and therefore biological evolution does have a stable target. This work shows that when cultural and biological traits are continuous, their co-evolution may lead to cognitive adaptations that are strong enough to detect empirically. Copyright © 2016 Cognitive Science Society, Inc.
A Novel Fast Method for Point-sampled Model Simplification
Directory of Open Access Journals (Sweden)
Cao Zhi
2016-01-01
Full Text Available A novel fast simplification method for point-sampled statue models is proposed. Simplification for 3D model reconstruction is a hot topic in the field of 3D surface construction, but it is difficult because the point clouds of many 3D models are very large, so running times become very long. In this paper, a two-stage simplification method is proposed. First, a feature-preserving non-uniform simplification method for the point cloud is presented, which thins the data set to remove redundancy while preserving the features of the model. Second, an affinity-propagation clustering method is used to classify each point as a sharp point or a simple point. The advantage of affinity propagation clustering is that it passes messages among data points and processes them quickly. Together with re-sampling, it can dramatically reduce the duration of the process while keeping memory cost low. Both theoretical analysis and experimental results show that the proposed method is efficient and that the details of the surface are preserved well.
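The abstract leaves the feature-preserving and affinity-propagation stages unspecified; as a rough illustration of the redundancy-removal idea only (not the authors' method; all names and parameters are invented), a generic voxel-grid downsampling sketch in Python:

```python
import math

def voxel_downsample(points, voxel_size):
    """Reduce a point cloud by replacing all points that fall into the
    same cubic voxel with their centroid. This is a generic simplification
    baseline, not the feature-preserving method of the paper."""
    buckets = {}
    for p in points:
        key = tuple(math.floor(c / voxel_size) for c in p)
        buckets.setdefault(key, []).append(p)
    simplified = []
    for pts in buckets.values():
        n = len(pts)
        simplified.append(tuple(sum(c[i] for c in pts) / n for i in range(3)))
    return simplified

if __name__ == "__main__":
    # A dense cluster near the origin plus one distant point.
    cloud = [(0.01 * i, 0.0, 0.0) for i in range(10)] + [(5.0, 5.0, 5.0)]
    out = voxel_downsample(cloud, voxel_size=1.0)
    print(len(cloud), "->", len(out))  # 11 -> 2
```

A feature-preserving method would additionally keep points where local curvature is high instead of averaging them away.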
Kanzaki, Natsumi; Ragsdale, Erik J; Herrmann, Matthias; Susoy, Vladislav; Sommer, Ralf J
2013-09-01
Rhabditid nematodes are one of a few animal taxa in which androdioecious reproduction, involving hermaphrodites and males, is found. In the genus Pristionchus, several cases of androdioecy are known, including the model species P. pacificus. A comprehensive understanding of the evolution of reproductive mode depends on dense taxon sampling and careful morphological and phylogenetic reconstruction. In this article, two new androdioecious species, P. boliviae n. sp. and P. mayeri n. sp., and one gonochoristic outgroup, P. atlanticus n. sp., are described on morphological, molecular, and biological evidence. Their phylogenetic relationships are inferred from 26 ribosomal protein genes and a partial SSU rRNA gene. Based on current representation, the new androdioecious species are sister taxa, indicating either speciation from an androdioecious ancestor or rapid convergent evolution in closely related species. Male sexual characters distinguish the new species, and new characters for six closely related Pristionchus species are presented. Male papillae are unusually variable in P. boliviae n. sp. and P. mayeri n. sp., consistent with the predictions of "selfing syndrome." Description and phylogeny of new androdioecious species, supported by fuller outgroup representation, establish new reference points for mechanistic studies in the Pristionchus system by expanding its comparative context.
Topological bifurcations in the evolution of coherent structures in a convection model
DEFF Research Database (Denmark)
Dam, Magnus; Rasmussen, Jens Juul; Naulin, Volker
2017-01-01
Blob filaments are coherent structures in a turbulent plasma flow. Understanding the evolution of these structures is important to improve magnetic plasma confinement. Three state variables describe blob filaments in a plasma convection model. A dynamical systems approach analyzes the evolution of these three variables. A critical point of a variable defines a feature point for a region where that variable is significant. For a range of Rayleigh and Prandtl numbers, the bifurcations of the critical points of the three variables are investigated with time as the primary bifurcation parameter...
Design, Results, Evolution and Status of the ATLAS Simulation at Point1 Project
Ballestrero, S.; Batraneanu, S. M.; Brasolin, F.; Contescu, C.; Fazio, D.; Di Girolamo, A.; Lee, C. J.; Pozo Astigarraga, M. E.; Scannicchio, D. A.; Sedov, A.; Twomey, M. S.; Wang, F.; Zaytsev, A.
2015-12-01
During the LHC Long Shutdown 1 (LS1) period, which started in 2013, the Simulation at Point1 (Sim@P1) project takes advantage, in an opportunistic way, of the TDAQ (Trigger and Data Acquisition) HLT (High-Level Trigger) farm of the ATLAS experiment. This farm provides more than 1300 compute nodes, which are particularly suited for running event generation and Monte Carlo production jobs that are mostly CPU and not I/O bound. It is capable of running up to 2700 Virtual Machines (VMs) each with 8 CPU cores, for a total of up to 22000 parallel jobs. This contribution gives a review of the design, the results, and the evolution of the Sim@P1 project, operating a large-scale OpenStack-based virtualized platform deployed on top of the ATLAS TDAQ HLT farm computing resources. During LS1, Sim@P1 was one of the most productive ATLAS sites: it delivered more than 33 million CPU-hours and generated more than 1.1 billion Monte Carlo events. The design aspects are presented: the virtualization platform exploited by Sim@P1 avoids interference with TDAQ operations and guarantees the security and usability of the ATLAS private network. The cloud mechanism allows the separation of the needed support on both the infrastructural (hardware, virtualization layer) and logical (Grid site support) levels. This paper focuses on the operational aspects of such a large system during the upcoming LHC Run 2 period: simple, reliable, and efficient tools are needed to quickly switch from Sim@P1 to TDAQ mode and back, to exploit the resources when they are not used for data acquisition, even for short periods. The evolution of the central OpenStack infrastructure is described, as it was upgraded from the Folsom to the Icehouse release, including the scalability issues addressed.
Accuracy limit of rigid 3-point water models
Izadi, Saeed
2016-01-01
Classical 3-point rigid water models are the most widely used due to their computational efficiency. Recently, we introduced a new approach to constructing classical rigid water models [S. Izadi et al., J. Phys. Chem. Lett. 5, 3863 (2014)], which permits a virtually exhaustive search for globally optimal model parameters in the sub-space that is most relevant to the electrostatic properties of the water molecule in the liquid phase. Here we apply the approach to develop a 3-point Optimal Point Charge (OPC3) water model. OPC3 is significantly more accurate than the commonly used water models of the same class (TIP3P and SPC/E) in reproducing a comprehensive set of liquid bulk properties over a wide range of temperatures. Beyond bulk properties, we show that OPC3 predicts the intrinsic charge hydration asymmetry (CHA) of water — a characteristic dependence of hydration free energy on the sign of the solute charge — in very close agreement with experiment. Two other recent 3-point rigid water models, TIP3PFB and H2ODC, each developed by its own, completely different optimization method, approach the global accuracy optimum represented by OPC3 in both the parameter space and the accuracy of bulk properties. Thus, we argue that an accuracy limit of practical 3-point rigid non-polarizable models has effectively been reached; remaining accuracy issues are discussed. PMID:27544113
Energy Technology Data Exchange (ETDEWEB)
Ngayam-Happy, R., E-mail: raoul.ngayamhappy@gmail.com [EDF-R and D, Département Matériaux et Mécanique des Composants (MMC), Les Renardières, F-77818 Moret sur Loing Cedex (France); Unité Matériaux et Transformations (UMET), UMR CNRS 8207, Université de Lille 1, ENSCL, F-59655 Villeneuve d’Ascq Cedex (France); Laboratoire commun EDF-CNRS Etude et Modélisation des Microstructures pour le Vieillissement des Matériaux (EM2VM) (France); Becquart, C.S. [Unité Matériaux et Transformations (UMET), UMR CNRS 8207, Université de Lille 1, ENSCL, F-59655 Villeneuve d’Ascq Cedex (France); Laboratoire commun EDF-CNRS Etude et Modélisation des Microstructures pour le Vieillissement des Matériaux (EM2VM) (France); Domain, C. [EDF-R and D, Département Matériaux et Mécanique des Composants (MMC), Les Renardières, F-77818 Moret sur Loing Cedex (France); Unité Matériaux et Transformations (UMET), UMR CNRS 8207, Université de Lille 1, ENSCL, F-59655 Villeneuve d’Ascq Cedex (France); Laboratoire commun EDF-CNRS Etude et Modélisation des Microstructures pour le Vieillissement des Matériaux (EM2VM) (France)
2013-09-15
The formation and medium-term evolution of point defect and solute-rich clusters under neutron irradiation have been modelled in a complex Fe–CuMnNiSiP alloy representative of RPV steels, by means of first-principles-based atomistic kinetic Monte Carlo simulations. The results reproduce most features observed in available experimental studies, showing very good agreement between simulation and experiment. According to the simulations, solute-rich clusters form and develop via an induced segregation mechanism on either vacancy or interstitial clusters, and these point defect clusters are generated efficiently only in cascade debris, not by a flux of Frenkel pairs. The results reveal the existence of two distinct populations of clusters with different characteristic features. Solute-rich clusters in the first group are bound essentially to interstitial clusters and are enriched mostly in Mn, but also in Ni to a lesser extent. Over the low-dose regime, their density increases in the alloy as a result of the accumulation of highly stable interstitial clusters. In the second group, the solute-rich clusters are merged with vacancy clusters, and they contain mostly Cu and Si, but also substantial amounts of Mn and Ni. The formation of a sub-population of pure solute clusters has been observed, which results from the annihilation of weakly stable vacancy clusters on sinks. Finally, the results indicate that the Mn content in clusters is up to 50%, with Cu, Si, and Ni sharing the other half in more or less equivalent amounts. This composition shows no noticeable change with increasing dose.
Automata network models of galaxy evolution
Chappell, David; Scalo, John
1993-01-01
Two ideas appear frequently in theories of star formation and galaxy evolution: (1) star formation is nonlocally excitatory, stimulating star formation in neighboring regions by propagation of a dense fragmenting shell or the compression of preexisting clouds; and (2) star formation is nonlocally inhibitory, making H II regions and explosions which can create low-density and/or high-temperature regions and increase the macroscopic velocity dispersion of the cloudy gas. Since it is not possible, given the present state of hydrodynamic modeling, to estimate whether one of these effects greatly dominates the other, it is of interest to investigate the predicted spatial pattern of star formation and its temporal behavior in simple models which incorporate both effects in a controlled manner. The present work presents preliminary results of such a study, based on lattice galaxy models with various types of nonlocal inhibitory and excitatory couplings of the local SFR to the gas density, temperature, and velocity field, meant to model a number of theoretical suggestions.
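A hedged sketch of this model class (not the authors' code; the coupling rules and parameters are invented for illustration): a 1-D lattice automaton in which star formation propagates to neighbouring cells (excitation) while freshly active cells become refractory for a few steps (inhibition):

```python
import random

def step(active, inhibited, p_spont=0.01, inhibit_steps=3):
    """One update of a toy 1-D star-formation automaton on a ring.
    A cell forms stars if a neighbour did on the previous step
    (nonlocal excitation) and it is not refractory (inhibition);
    refractory cells model locally heated or evacuated gas."""
    n = len(active)
    new_active = [False] * n
    new_inhibited = [max(0, t - 1) for t in inhibited]
    for i in range(n):
        if inhibited[i] > 0:          # still refractory: no star formation
            continue
        neighbour_fired = active[(i - 1) % n] or active[(i + 1) % n]
        if neighbour_fired or random.random() < p_spont:
            new_active[i] = True
            new_inhibited[i] = inhibit_steps
    return new_active, new_inhibited

if __name__ == "__main__":
    a, inh = [False, False, True, False, False], [0] * 5
    a, inh = step(a, inh, p_spont=0.0)
    print(a)  # [False, True, False, True, False]: the burst propagates outward
```

Varying `inhibit_steps` and `p_spont` trades off the excitatory and inhibitory couplings, which is the balance the abstract says hydrodynamic modeling cannot yet settle.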
Shape modelling using Markov random field restoration of point correspondences.
Paulsen, Rasmus R; Hilger, Klaus B
2003-07-01
A method for building statistical point distribution models is proposed. The novelty in this paper is the adaptation of Markov random field regularization of the correspondence field over the set of shapes. The new approach leads to a generative model that produces highly homogeneous polygonized shapes and improves the capability of reconstruction of the training data. Furthermore, the method leads to an overall reduction in the total variance of the point distribution model. Thus, it finds correspondence between semi-landmarks that are highly correlated in the shape tangent space. The method is demonstrated on a set of human ear canals extracted from 3D laser scans.
Shape Modelling Using Markov Random Field Restoration of Point Correspondences
DEFF Research Database (Denmark)
Paulsen, Rasmus Reinhold; Hilger, Klaus Baggesen
2003-01-01
A method for building statistical point distribution models is proposed. The novelty in this paper is the adaptation of Markov random field regularization of the correspondence field over the set of shapes. The new approach leads to a generative model that produces highly homogeneous polygonized shapes and improves the capability of reconstruction of the training data. Furthermore, the method leads to an overall reduction in the total variance of the point distribution model. Thus, it finds correspondence between semilandmarks that are highly correlated in the shape tangent space. The method...
Detecting Tipping points in Ecological Models with Sensitivity Analysis
Broeke, ten G.A.; Voorn, van G.A.K.; Kooi, B.W.; Molenaar, Jaap
2016-01-01
Simulation models are commonly used to understand and predict the development of ecological systems, for instance to study the occurrence of tipping points and their possible ecological effects. Sensitivity analysis is a key tool in the study of model responses to changes in conditions. The
Detecting tipping points in ecological models with sensitivity analysis
ten Broeke, G.A.; van Voorn, G.A.K.; Kooi, B.W.; Molenaar, J.
2016-01-01
Simulation models are commonly used to understand and predict the development of ecological systems, for instance to study the occurrence of tipping points and their possible ecological effects. Sensitivity analysis is a key tool in the study of model responses to changes in conditions. The
Determining and modeling the dispersion of non point source ...
African Journals Online (AJOL)
In this study, pollutants in runoff are characterized and their dispersion after they enter the lake is measured and modeled at different points in the study areas. The objective is to develop a one dimensional mathematical model which can be used to predict the nutrient (ammonia, nitrite, nitrate, and phosphate) dispersion ...
Determining and modeling the dispersion of non point source ...
African Journals Online (AJOL)
EJIRO
Lake Victoria is an important source of livelihood that is threatened by rising pollution. In this study, pollutants in runoff are characterized and their dispersion after they enter the lake is measured and modeled at different points in the study areas. The objective is to develop a one dimensional mathematical model which can ...
A 'Turing' Test for Landscape Evolution Models
Parsons, A. J.; Wise, S. M.; Wainwright, J.; Swift, D. A.
2008-12-01
Resolving the interactions among tectonics, climate and surface processes at long timescales has benefited from the development of computer models of landscape evolution. However, testing these Landscape Evolution Models (LEMs) has been piecemeal and partial. We argue that a more systematic approach is required. What is needed is a test that will establish how 'realistic' an LEM is and thus the extent to which its predictions may be trusted. We propose a test based upon the Turing Test of artificial intelligence as a way forward. In 1950 Alan Turing posed the question of whether a machine could think. Rather than attempt to address the question directly, he proposed a test in which an interrogator asked questions of a person and a machine, with no means of telling which was which. If the machine's answers could not be distinguished from those of the human, the machine could be said to demonstrate artificial intelligence. By analogy, if an LEM cannot be distinguished from a real landscape it can be deemed to be realistic. The Turing test of intelligence is a test of the way in which a computer behaves. The analogy in the case of an LEM is that it should show realistic behaviour in terms of form and process, both at a given moment in time (punctual) and in the way both form and process evolve over time (dynamic). For some of these behaviours, tests already exist: for example, there are numerous morphometric tests of punctual form and measurements of punctual process. The test discussed in this paper provides new ways of assessing the dynamic behaviour of an LEM over realistically long timescales. However, challenges remain in developing an appropriate suite of sufficiently demanding tests, in applying these tests to current LEMs, and in developing LEMs that pass them.
Random unitary evolution model of quantum Darwinism with pure decoherence
Balanesković, Nenad
2015-10-01
We study the behavior of Quantum Darwinism [W.H. Zurek, Nat. Phys. 5, 181 (2009)] within the iterative, random unitary operations qubit-model of pure decoherence [J. Novotný, G. Alber, I. Jex, New J. Phys. 13, 053052 (2011)]. We conclude that Quantum Darwinism, which describes the quantum mechanical evolution of an open system S from the point of view of its environment E, is not a generic phenomenon, but depends on the specific form of the input states and on the type of S-E interactions. Furthermore, we show that within the random unitary model the concept of Quantum Darwinism enables one to explicitly construct and specify artificial input states of the environment E that allow information about an open system S of interest to be stored with maximal efficiency.
The Time Evolution of a Constant Mass of Air Pollutant Emitted by a Point Source
Directory of Open Access Journals (Sweden)
M.H.A. Hassan
2006-06-01
Full Text Available The transient behaviour of a constant mass (i.e. a blob) of pollutant released from a point source at a given height above ground level at an initial time is studied. The time-dependent atmospheric diffusion equation, with diffusion in both the horizontal and vertical directions, is used to model the problem. The model is found to be governed by an initial-boundary-value problem for the concentration of the pollutant. The solution is obtained in closed form using integral transform methods and is illustrated graphically using appropriate numerical integrations. As time passes, the pollutant blob moves with a central point of accumulation while the blob increases in volume, spreading the pollutant around it. The motion of the accumulation point in space and time is strongly influenced by wind and gravity, while the spread of the pollutant is governed by diffusion. The time taken by the blob to diffuse into space is estimated as a function of the parameters governing wind, gravity and diffusion.
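A hedged sketch of the kind of model the abstract describes (our reconstruction, not the paper's exact equations): for constant wind speed u, settling speed w_s and constant diffusivities K_x, K_z, the governing equation and its unbounded-domain Gaussian-puff solution for an instantaneous release of mass Q at height h are (ground boundary conditions add image terms not shown):

```latex
\frac{\partial C}{\partial t} + u\,\frac{\partial C}{\partial x}
  - w_s\,\frac{\partial C}{\partial z}
  = K_x \frac{\partial^2 C}{\partial x^2} + K_z \frac{\partial^2 C}{\partial z^2},
\qquad
C(x,z,t) = \frac{Q}{4\pi t \sqrt{K_x K_z}}
  \exp\!\left(-\frac{(x-ut)^2}{4 K_x t}
              -\frac{(z-h+w_s t)^2}{4 K_z t}\right).
```

The exponential's centre (x = ut, z = h - w_s t) is the accumulation point driven by wind and gravity, while the widths growing as the square root of K t give the diffusive spread, matching the qualitative behaviour reported in the abstract.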
FINDING CUBOID-BASED BUILDING MODELS IN POINT CLOUDS
Directory of Open Access Journals (Sweden)
W. Nguatem
2012-07-01
Full Text Available In this paper, we present an automatic approach for the derivation of 3D building models of level-of-detail 1 (LOD 1) from point clouds obtained from (dense) image matching or, for comparison only, from LIDAR. Our approach makes use of the predominance of vertical structures and orthogonal intersections in architectural scenes. After robustly determining the scene's vertical direction based on the 3D points, we use it as a constraint for a RANSAC-based search for vertical planes in the point cloud. The planes are further analyzed to segment reliable outlines of rectangular surfaces within these planes, which are connected to construct cuboid-based building models. We demonstrate that our approach is robust and effective over a range of real-world input data sets with varying point density, amount of noise, and outliers.
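The abstract names a RANSAC-based search for vertical planes; a minimal sketch of that idea (all thresholds and helper names are our own, not the paper's): with the vertical direction fixed, a vertical plane reduces to a line in the ground-plane projection, so two projected points suffice as a minimal sample:

```python
import random

def ransac_vertical_plane(points_xy, iters=200, tol=0.05, rng=random):
    """RANSAC search for one vertical plane. Repeatedly sample two
    ground-plane projections, build the line through them, and keep
    the model with the most inliers."""
    best_count, best_model = 0, None
    for _ in range(iters):
        p, q = rng.sample(points_xy, 2)
        dx, dy = q[0] - p[0], q[1] - p[1]
        norm = (dx * dx + dy * dy) ** 0.5
        if norm < 1e-12:
            continue
        nx, ny = -dy / norm, dx / norm   # unit normal of the candidate line
        d = nx * p[0] + ny * p[1]        # line equation: nx*x + ny*y = d
        count = sum(abs(nx * x + ny * y - d) < tol for x, y in points_xy)
        if count > best_count:
            best_count, best_model = count, (nx, ny, d)
    return best_count, best_model

if __name__ == "__main__":
    rng = random.Random(0)
    wall = [(2.0, 0.1 * i) for i in range(80)]               # facade at x = 2
    clutter = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(20)]
    count, (nx, ny, d) = ransac_vertical_plane(wall + clutter, rng=rng)
    print(count, round(abs(d), 2))
```

A full pipeline would run this repeatedly, removing each found plane's inliers before searching for the next one.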
Zoccola, Didier; Ganot, Philippe; Bertucci, Anthony; Caminiti-Segonds, Natacha; Techer, Nathalie; Voolstra, Christian R; Aranda, Manuel; Tambutté, Eric; Allemand, Denis; Casey, Joseph R; Tambutté, Sylvie
2015-06-04
The bicarbonate ion (HCO3(-)) is involved in two major physiological processes in corals, biomineralization and photosynthesis, yet no molecular data on bicarbonate transporters are available. Here, we characterized plasma membrane-type HCO3(-) transporters in the scleractinian coral Stylophora pistillata. Eight solute carrier (SLC) genes were found in the genome: five homologs of mammalian-type SLC4 family members, and three of mammalian-type SLC26 family members. Using relative expression analysis and immunostaining, we analyzed the cellular distribution of these transporters and conducted phylogenetic analyses to determine the extent of conservation among cnidarian model organisms. Our data suggest that the SLC4γ isoform is specific to scleractinian corals and responsible for supplying HCO3(-) to the site of calcification. Taken together, SLC4γ appears to be one of the key genes for skeleton building in corals, which bears profound implications for our understanding of coral biomineralization and the evolution of scleractinian corals within cnidarians.
Design, Results, Evolution and Status of the ATLAS simulation in Point1 project.
Ballestrero, Sergio; The ATLAS collaboration; Brasolin, Franco; Contescu, Alexandru Cristian; Fazio, Daniel; Di Girolamo, Alessandro; Lee, Christopher Jon; Pozo Astigarraga, Mikel Eukeni; Scannicchio, Diana; Sedov, Alexey; Twomey, Matthew Shaun; Wang, Fuquan; Zaytsev, Alexander
2015-01-01
During the LHC long shutdown period (LS1), that started in 2013, the simulation in Point1 (Sim@P1) project takes advantage in an opportunistic way of the trigger and data acquisition (TDAQ) farm of the ATLAS experiment. The farm provides more than 1500 computer nodes, and they are particularly suitable for running event generation and Monte Carlo production jobs that are mostly CPU and not I/O bound. It is capable of running up to 2500 virtual machines (VM) provided with 8 CPU cores each, for a total of up to 20000 parallel running jobs. This contribution gives a thorough review of the design, the results and the evolution of the Sim@P1 project operating a large scale Openstack based virtualized platform deployed on top of the ATLAS TDAQ farm computing resources. During LS1, Sim@P1 was one of the most productive GRID sites: it delivered more than 50 million CPU-hours and it generated more than 1.7 billion Monte Carlo events to various analysis communities within the ATLAS collaboration. The particular design ...
Design, Results, Evolution and Status of the ATLAS Simulation at Point1 Project
AUTHOR|(SzGeCERN)377840; Fressard-Batraneanu, Silvia Maria; Ballestrero, Sergio; Contescu, Alexandru Cristian; Fazio, Daniel; Di Girolamo, Alessandro; Lee, Christopher Jon; Pozo Astigarraga, Mikel Eukeni; Scannicchio, Diana; Sedov, Alexey; Twomey, Matthew Shaun; Wang, Fuquan; Zaytsev, Alexander
2015-01-01
During the LHC Long Shutdown 1 period (LS1), that started in 2013, the Simulation at Point1 (Sim@P1) Project takes advantage, in an opportunistic way, of the TDAQ (Trigger and Data Acquisition) HLT (High Level Trigger) farm of the ATLAS experiment. This farm provides more than 1300 compute nodes, which are particularly suited for running event generation and Monte Carlo production jobs that are mostly CPU and not I/O bound. It is capable of running up to 2700 virtual machines (VMs) provided with 8 CPU cores each, for a total of up to 22000 parallel running jobs. This contribution gives a review of the design, the results, and the evolution of the Sim@P1 Project; operating a large scale OpenStack based virtualized platform deployed on top of the ATLAS TDAQ HLT farm computing resources. During LS1, Sim@P1 was one of the most productive ATLAS sites: it delivered more than 50 million CPU-hours and it generated more than 1.7 billion Monte Carlo events to various analysis communities. The design aspects a...
Monitoring Rural Water Points in Tanzania with Mobile Phones: The Evolution of the SEMA App
Directory of Open Access Journals (Sweden)
Rob Lemmens
2017-10-01
Full Text Available Development professionals have deployed several mobile phone-based ICT (Information and Communications Technology platforms in the global South for improving water, health, and education services. In this paper, we focus on a mobile phone-based ICT platform for water services, called Sensors, Empowerment and Accountability in Tanzania (SEMA, developed by our team in the context of an action research project in Tanzania. Water users in villages and district water engineers in local governments may use it to monitor the functionality status of rural water points in the country. We describe the current architecture of the platform’s front-end (the SEMA app and back-end and elaborate on its deployment in four districts in Tanzania. To conceptualize the evolution of the SEMA app, we use three concepts: transaction-intensiveness, discretion and crowdsourcing. The SEMA app effectively digitized only transaction-intensive tasks in the information flow between water users in villages and district water engineers. Further, it resolved two tensions over time: the tension over what to report (by decreasing the discretion of reporters and over who should report (by constraining the reporting “crowd”.
The Rotation of M Dwarfs Observed by the Apache Point Galactic Evolution Experiment
Gilhool, Steven H.; Blake, Cullen H.; Terrien, Ryan C.; Bender, Chad; Mahadevan, Suvrath; Deshpande, Rohit
2018-01-01
We present the results of a spectroscopic analysis of rotational velocities in 714 M-dwarf stars observed by the SDSS-III Apache Point Galactic Evolution Experiment (APOGEE) survey. We use a template-fitting technique to estimate v sin i while simultaneously estimating log g, [M/H], and T_eff. We conservatively estimate that our detection limit is 8 km s^-1. We compare our results to M-dwarf rotation studies in the literature based on both spectroscopic and photometric measurements. Like other authors, we find an increase in the fraction of rapid rotators with decreasing stellar temperature, exemplified by a sharp increase in rotation near the M4 transition to fully convective stellar interiors, which is consistent with the hypothesis that fully convective stars are unable to shed angular momentum as efficiently as those with radiative cores. We compare a sample of targets observed both by APOGEE and the MEarth transiting planet survey and find no cases where the measured v sin i and rotation period are physically inconsistent, requiring sin i > 1. We compare our spectroscopic results to the fraction of rotators inferred from photometric surveys and find that while the results are broadly consistent, the photometric surveys exhibit a smaller fraction of rotators beyond the M4 transition by a factor of ~2. We discuss possible reasons for this discrepancy. Given our detection limit, our results are consistent with a bimodal distribution in rotation that is seen in photometric surveys.
Optimization of Regression Models of Experimental Data Using Confirmation Points
Ulbrich, N.
2010-01-01
A new search metric is discussed that may be used to better assess the predictive capability of different math term combinations during the optimization of a regression model of experimental data. The new search metric can be determined for each tested math term combination if the given experimental data set is split into two subsets. The first subset consists of data points that are used only to determine the coefficients of the regression model. The second subset consists of confirmation points that are used exclusively to test the regression model. The new search metric value is assigned after comparing two values that describe the quality of the fit of each subset. The first value is the standard deviation of the PRESS residuals of the data points. The second value is the standard deviation of the response residuals of the confirmation points. The greater of the two values is used as the new search metric value. This choice guarantees that both standard deviations are always less than or equal to the value that is used during the optimization. Experimental data from the calibration of a wind tunnel strain-gage balance is used to illustrate the application of the new search metric. The new search metric ultimately generates an optimized regression model that was already tested at regression-model-independent confirmation points before it is ever used to predict an unknown response from a set of regressors.
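The metric described above can be sketched for the simplest case of a straight-line model (a hypothetical reduction of the paper's general math-term search; the leverage formula is the standard one for simple linear regression, and measuring residual spread about zero rather than about the mean is our assumption):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    return ybar - b * xbar, b

def rms(vals):
    # Spread of the residuals about zero.
    return (sum(v * v for v in vals) / len(vals)) ** 0.5

def search_metric(fit_x, fit_y, conf_x, conf_y):
    """max(spread of PRESS residuals on the fit subset,
           spread of response residuals at the confirmation points)."""
    a, b = fit_line(fit_x, fit_y)
    n = len(fit_x)
    xbar = sum(fit_x) / n
    sxx = sum((x - xbar) ** 2 for x in fit_x)
    press = []
    for x, y in zip(fit_x, fit_y):
        h = 1.0 / n + (x - xbar) ** 2 / sxx        # leverage of the point
        press.append((y - (a + b * x)) / (1.0 - h))  # leave-one-out residual
    conf_res = [y - (a + b * x) for x, y in zip(conf_x, conf_y)]
    return max(rms(press), rms(conf_res))

if __name__ == "__main__":
    fit_x, fit_y = [0, 1, 2, 3, 4, 5], [1, 3, 5, 7, 9, 11]     # y = 2x + 1
    print(search_metric(fit_x, fit_y, [0.5, 1.5], [2.0, 4.0]))  # 0.0
```

Taking the larger of the two spreads is what makes the metric conservative: a candidate term combination only scores well if it fits its own data and the withheld confirmation points.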
Fixed Points in Discrete Models for Regulatory Genetic Networks
Directory of Open Access Journals (Sweden)
Orozco Edusmildo
2007-01-01
Full Text Available It is desirable to have efficient mathematical methods to extract information about regulatory interactions between genes from repeated measurements of gene transcript concentrations. One piece of information of interest is when the dynamics reaches a steady state. In this paper we develop tools that enable the detection of steady states that are modeled by fixed points in discrete finite dynamical systems. We discuss two algebraic models, a univariate model and a multivariate model. We show that these two models are equivalent and that one can be converted to the other by means of a discrete Fourier transform. We give a new, more general definition of a linear finite dynamical system and a necessary and sufficient condition for such a system to be a fixed point system, that is, one in which all cycles are of length one. We show how this result for generalized linear systems can be used to determine when certain nonlinear systems (monomial dynamical systems over finite fields) are fixed point systems. We also show how it is possible to determine in polynomial time when an ordinary linear system (defined over a finite field) is a fixed point system. We conclude with a necessary condition for a univariate finite dynamical system to be a fixed point system.
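As an illustration of the fixed-point-system property (a brute-force check over small state spaces, not the paper's polynomial-time algebraic criterion; helper names are ours):

```python
from itertools import product

def is_fixed_point_system(step, n_vars, field_size=2):
    """Brute-force check that a finite dynamical system is a fixed point
    system, i.e. every cycle in its state graph has length one."""
    for state in product(range(field_size), repeat=n_vars):
        seen = {}
        s = state
        while s not in seen:            # walk until the trajectory repeats
            seen[s] = len(seen)
            s = step(s)
        if len(seen) - seen[s] != 1:    # length of the terminal cycle
            return False
    return True

def linear_step(matrix, p=2):
    """The linear map x -> A x over the finite field GF(p)."""
    return lambda s: tuple(sum(a * x for a, x in zip(row, s)) % p
                           for row in matrix)

if __name__ == "__main__":
    identity = [[1, 0], [0, 1]]   # every state is fixed
    swap = [[0, 1], [1, 0]]       # (0,1) <-> (1,0) is a 2-cycle
    print(is_fixed_point_system(linear_step(identity), 2))  # True
    print(is_fixed_point_system(linear_step(swap), 2))      # False
```

The paper's contribution is precisely that, for linear and monomial systems, this exponential enumeration can be replaced by an algebraic test that runs in polynomial time.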
Genetic Models in Evolutionary Game Theory: The Evolution of Altruism
Rubin, Hannah
2015-01-01
While prior models of the evolution of altruism have assumed that organisms reproduce asexually, this paper presents a model of the evolution of altruism for sexually reproducing organisms using Hardy–Weinberg dynamics. In this model, the presence of reciprocal altruists allows the population to
A point reactivity model for in-core fuel cycles
Energy Technology Data Exchange (ETDEWEB)
Parks, G.T.; Lewins, J.D. (Univ. of Cambridge, Engineering Dept., Trumpington Street, Cambridge CB2 1PZ (GB))
1988-09-01
A lumped (point) representation of the reactivity of a mixed-assembly reactor is derived from the basis of perturbation theory. This gives good agreement with exact static reactivity calculations for some simple examples. It is also compared with the simple partial reactivity model used widely in fuel management theory. A similar comparison is made for alternative representations in terms of the excess multiplication factor of the system. Although it is shown that the error in using the partial reactivity concept may be regarded as second order, the transient behavior of three simple refueling systems predicted by the point reactivity model differs markedly from previously published partial reactivity results.
An Improved Nonlinear Five-Point Model for Photovoltaic Modules
Directory of Open Access Journals (Sweden)
Sakaros Bogning Dongue
2013-01-01
Full Text Available This paper presents an improved nonlinear five-point model capable of analytically describing the electrical behaviors of a photovoltaic module for each generic operating condition of temperature and solar irradiance. The models used to replicate the electrical behaviors of operating PV modules are usually based on simplified assumptions which provide a convenient mathematical model that can be used in conventional simulation tools. Unfortunately, these assumptions cause some inaccuracies, and hence unrealistic economic returns are predicted. As an alternative, we used the advantages of a nonlinear analytical five-point model to take into account the nonideal diode effects and other generally ignored nonlinear effects on which PV module operation depends. To verify the capability of our method to fit PV panel characteristics, the procedure was tested on three different panels. Results were compared with the data issued by manufacturers and with the results obtained using the five-parameter model proposed by other authors.
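The five-point model itself is not reproduced in the abstract. As a rough stand-in, the textbook single-diode equation that such models refine can be solved numerically; all parameter values below are illustrative, not taken from the paper:

```python
import math

def single_diode_current(V, Iph, I0, n, Rs, Rsh, T=298.15, iters=100):
    """Textbook single-diode PV model (a stand-in for the paper's
    five-point model, which is not reproduced here): solve
        I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
    for the terminal current I by damped fixed-point iteration."""
    k, q = 1.380649e-23, 1.602176634e-19   # Boltzmann constant, electron charge
    Vt = k * T / q                          # thermal voltage
    I = Iph                                 # start from the photocurrent
    for _ in range(iters):
        I_new = (Iph
                 - I0 * (math.exp((V + I * Rs) / (n * Vt)) - 1)
                 - (V + I * Rs) / Rsh)
        I = 0.5 * I + 0.5 * I_new           # damping for stable convergence
    return I
```

The damping factor of 0.5 keeps the iteration stable near the knee of the I-V curve, where the exponential term makes plain fixed-point iteration diverge.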
Last interglacial temperature evolution – a model inter-comparison
Directory of Open Access Journals (Sweden)
P. Bakker
2013-03-01
temperatures. Secondly, for the Atlantic region, the Southern Ocean and the North Pacific, possible changes in the characteristics of the Atlantic meridional overturning circulation are crucial. Thirdly, the presence of remnant continental ice from the preceding glacial has been shown to be important in determining the timing of maximum LIG warmth in the Northern Hemisphere. Finally, the results reveal that changes in the monsoon regime exert a strong control on the evolution of LIG temperatures over parts of Africa and India. By listing these inter-model differences, we provide a starting point for future proxy-data studies and the sensitivity experiments needed to constrain the climate simulations and to further enhance our understanding of the temperature evolution of the LIG period.
Modelling dune evolution and dynamic roughness in rivers
Paarlberg, Andries
2008-01-01
Accurate river flow models are essential tools for water managers, but these hydraulic simulation models often lack a proper description of dynamic roughness due to hysteresis effects in dune evolution. To incorporate the effects of dune evolution directly into the resistance coefficients of
Modeling evolution using the probability of fixation: history and implications.
McCandlish, David M; Stoltzfus, Arlin
2014-09-01
Many models of evolution calculate the rate of evolution by multiplying the rate at which new mutations originate within a population by a probability of fixation. Here we review the historical origins, contemporary applications, and evolutionary implications of these "origin-fixation" models, which are widely used in evolutionary genetics, molecular evolution, and phylogenetics. Origin-fixation models were first introduced in 1969, in association with an emerging view of "molecular" evolution. Early origin-fixation models were used to calculate an instantaneous rate of evolution across a large number of independently evolving loci; in the 1980s and 1990s, a second wave of origin-fixation models emerged to address a sequence of fixation events at a single locus. Although origin-fixation models have been applied to a broad array of problems in contemporary evolutionary research, their rise in popularity has not been accompanied by an increased appreciation of their restrictive assumptions or their distinctive implications. We argue that origin-fixation models constitute a coherent theory of mutation-limited evolution that contrasts sharply with theories of evolution that rely on the presence of standing genetic variation. A major unsolved question in evolutionary biology is the degree to which these models provide an accurate approximation of evolution in natural populations.
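The origin-fixation rate described above is the product of the mutation origination rate and a fixation probability; a minimal sketch using Kimura's diffusion approximation for the fixation probability (a standard choice, not necessarily the formulation of any particular model the review covers):

```python
import math

def fixation_probability(s, N):
    """Kimura's diffusion approximation for a new mutation with
    selection coefficient s in a diploid population of size N,
    starting from a single copy (initial frequency 1/(2N))."""
    if abs(s) < 1e-12:
        return 1.0 / (2 * N)        # neutral limit
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

def substitution_rate(mu, s, N):
    """Origin-fixation rate K = 2*N*mu * pi(s): new mutations arise
    at rate 2*N*mu per generation, each fixing with probability pi."""
    return 2 * N * mu * fixation_probability(s, N)
```

In the neutral case this recovers the classic result that the substitution rate equals the mutation rate, independent of population size.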
Multi-dimensional Point Process Models in R
Directory of Open Access Journals (Sweden)
Roger Peng
2003-09-01
Full Text Available A software package for fitting and assessing multidimensional point process models using the R statistical computing environment is described. Methods of residual analysis based on random thinning are discussed and implemented. Features of the software are demonstrated using data on wildfire occurrences in Los Angeles County, California and earthquake occurrences in Northern California.
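Residual analysis by random thinning, as mentioned in the abstract, retains each point with probability inversely proportional to the fitted intensity; if the model is correct, the retained points form a homogeneous Poisson process. A minimal sketch (the names are illustrative, not the package's API):

```python
import random

def random_thinning(points, intensity, lam_min, seed=0):
    """Thin a fitted point process: keep each point p with probability
    lam_min / intensity(p), where lam_min is a lower bound on the
    fitted intensity. Under a correct model the survivors should look
    like a homogeneous Poisson process of rate lam_min."""
    rng = random.Random(seed)
    return [p for p in points if rng.random() < lam_min / intensity(p)]
```

Departures of the thinned pattern from complete spatial randomness then indicate lack of fit.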
Tracking facial feature points with Gabor wavelets and shape models
McKenna, SJ; Gong, SG; Wurtz, RP; Tanner, J; Banin, D; Bigun, J; Chollet, G; Borgefors, G
1997-01-01
A feature-based approach to tracking rigid and non-rigid facial motion is described. Feature points are characterised using Gabor wavelets and can be individually tracked by phase-based displacement estimation. In order to achieve robust tracking a flexible shape model is used to impose global
Cubature/ Unscented/ Sigma Point Kalman Filtering with Angular Measurement Models
2015-07-06
Cubature/ Unscented/ Sigma Point Kalman Filtering with Angular Measurement Models David Frederic Crouse Naval Research Laboratory 4555 Overlook Ave...measurement and process non-linearities, such as the cubature Kalman filter, can perform extremely poorly in many applications involving angular... Kalman filtering is a realization of the best linear unbiased estimator (BLUE) that evaluates certain integrals for expected values using different forms
Teaching With Models: A Starting Point Resource Module
Mackay, R. M.; Manduca, C. A.
2003-12-01
The use of models in entry-level geoscience classes provides an ideal framework for the creation of interactive student-centered learning environments while providing opportunities to introduce students to an important and useful tool. To assist faculty in using models in entry-level courses, we have created a website "Teaching with Models" which is part of "Starting Point", a website aimed at supporting faculty teaching entry-level geoscience with information and materials. The "Teaching with Models" site provides: a definition/clarification of modeling in an introductory geoscience education context; a discussion of when and where different model types are useful and why one would want to use them to promote student learning; a description of how to effectively use models, including pedagogical and technical issues; and specific modeling examples. This basic structure of what, when and why, how, and examples is repeated at various levels throughout the website. We define "model" very broadly to include five model types: conceptual or mental models; physical models; mathematical models; statistical models; and visualization models. We identify three key motivating factors supporting the usefulness of models in introductory geoscience education: 1) The extensive use of models by professional geoscientists suggests that introductory geoscience students should be exposed to the basic philosophy and usefulness of models; 2) Models provide an excellent framework for the creation of interactive student-centered learning environments; and 3) Many concepts from systems thinking and Earth-system science are ideally suited to the use of models. Our presentation will include assessment results based on student surveys for a Fall 2003 introductory Earth's Climate course and a description of several "Teaching with Models" modeling examples available online at: http://serc.carleton.edu/introgeo/models/index.html.
Underwater 3d Modeling: Image Enhancement and Point Cloud Filtering
Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.
2016-06-01
This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically it evaluates the combination of effects from the manual editing of images' radiometry (captured at shallow depths) and the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5m, 10m and 14m respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results from the application of image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of images, followed by the creation of three improved models and c) the assessment of results by comparing the visual and the geometric quality of improved models versus the reference one. Finally, the selected technique is tested on two other data sets in order to examine its appropriateness for different depths (at 10m and 14m) and different objects (part of a wreck and a small boat's wreck) in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.
Modeling rocky coastline evolution and equilibrium
Limber, P. W.; Murray, A. B.
2010-12-01
Many of the world’s rocky coastlines exhibit planform roughness in the form of alternating headlands and embayments. Along cliffed coasts, it is often assumed that headlands consist of rock that is more resistant to wave attack than in neighboring bays, because of either structural or lithologic variations. Bays would then retreat landward faster than headlands, creating the undulating planform profiles characteristic of a rocky coastal landscape. While the interplay between alongshore rock strength and nearshore wave energy is, in some circumstances, a fundamental control on coastline shape, beach sediment is also important. Laboratory experiments and field observations have shown that beach sediment, in small volumes, can act as an abrasive tool to encourage sea cliff retreat. In large volumes, though, sediment discourages wave attack on the cliff face, acting as a protective barrier. This nonlinearity suggests a means for headland persistence, even without alongshore variations in rock strength: bare-rock headlands could retreat more slowly than, or at the same rate as, neighboring sediment-filled embayments because of alongshore variations in the availability of beach sediment. Accordingly, nearshore sediment dynamics (i.e. sediment production from sea cliff retreat and alongshore sediment transport) could promote the development of autogenic planform geometry. To explore these ideas, we present numerical and analytical modeling of large-scale (> one kilometer) and long-term (millennial-scale) planform rocky coastline evolution, in which sediment is supplied by both sea cliff erosion and coastal rivers and is distributed by alongshore sediment transport. We also compare model predictions with real landscapes. Previously, our modeling exercises focused on a basic rocky coastline configuration where lithologically-homogeneous sea cliffs supplied all beach sediment and maintained a constant alongshore height. Results showed that 1) an equilibrium alongshore
Zoccola, Didier
2015-06-04
The bicarbonate ion (HCO3−) is involved in two major physiological processes in corals, biomineralization and photosynthesis, yet no molecular data on bicarbonate transporters are available. Here, we characterized plasma membrane-type HCO3− transporters in the scleractinian coral Stylophora pistillata. Eight solute carrier (SLC) genes were found in the genome: five homologs of mammalian-type SLC4 family members, and three of mammalian-type SLC26 family members. Using relative expression analysis and immunostaining, we analyzed the cellular distribution of these transporters and conducted phylogenetic analyses to determine the extent of conservation among cnidarian model organisms. Our data suggest that the SLC4γ isoform is specific to scleractinian corals and responsible for supplying HCO3− to the site of calcification. Taken together, SLC4γ appears to be one of the key genes for skeleton building in corals, which bears profound implications for our understanding of coral biomineralization and the evolution of scleractinian corals within cnidarians.
Carretier, S.; Martinez, J.; Martinod, P.; Reich, M.; Godderis, Y.
2014-12-01
During mountain uplift, fresh silicate rocks are exhumed and broken into small pieces, potentially increasing their chemical weathering rate and thus the consumption of atmospheric CO2. This process remains debated because although erosion provides fresh rocks, it may also decrease their residence time near Earth's surface, where clasts weather. Several recent publications have also emphasized the key role of forelands in the weathering of clasts exported from the mountains by erosion. Predicting the chemical outflux of mountains requires accounting for the chemical evolution of these rocks from their source to the outlet. Powerful chemical models based on diffusion-advection of species between rocks and water have been developed at pedon scale, and recently at hillslope scale. In order to track the weathered material, we have developed a different approach based on the introduction into a 3D landscape evolution model (CIDRE) of dissolving discrete spherical clasts that move downslope. In CIDRE, local erosion and deposition depend on slope and water discharge, which adapt dynamically during the topographical evolution. On a cell, bedrock is converted to soil at a rate depending on soil thickness. Clasts are initially spread at specified depths. They have a specified initial size and mineralogical composition. Once they enter the soil, they begin to dissolve at a rate depending on their minerals, temperature and exposed area, which decreases the clast size. Clasts move downstream according to probabilities depending on the ratio between the calculated local deposition and erosion fluxes. The chemical outflux is calculated for each clast during its life. At pedon scale, the model predicts chemically depleted fractions close to those obtained with advection-diffusion models and in agreement with measurements. An integrated chemical flux is estimated for the whole landscape from the clast dissolution rates. This flux reaches a stable solution using a suitable number of initial clasts.
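The clast-tracking idea can be illustrated with a shrinking-sphere dissolution sketch in which mass loss scales with the exposed surface area; the rate constant and density below are illustrative and are not CIDRE's actual parameters:

```python
import math

def dissolve_clast(r0, k, dt, steps, rho=2650.0):
    """Shrinking-sphere sketch of a single dissolving spherical clast:
    the radius shrinks at a surface-controlled rate k (m per time
    step unit), so the mass-loss rate scales with the exposed area
    4*pi*r^2. Returns the final radius and the per-step mass flux."""
    r = r0
    flux = []
    for _ in range(steps):
        if r <= 0.0:
            flux.append(0.0)        # fully dissolved clast
            continue
        dm = rho * 4 * math.pi * r**2 * k * dt   # kg dissolved this step
        flux.append(dm)
        r = max(r - k * dt, 0.0)
    return r, flux
```

Summing such fluxes over many clasts, each advected through the landscape, gives an integrated chemical flux of the kind the abstract describes.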
Comprehensive overview of the Point-by-Point model of prompt emission in fission
Energy Technology Data Exchange (ETDEWEB)
Tudora, A. [University of Bucharest, Faculty of Physics, Bucharest Magurele (Romania); Hambsch, F.J. [European Commission, Joint Research Centre, Directorate G - Nuclear Safety and Security, Unit G2, Geel (Belgium)
2017-08-15
The investigation of prompt emission in fission is very important for understanding the fission process and for improving the quality of evaluated nuclear data required for new applications. In the last decade remarkable efforts were made both in the development of prompt emission models and in the experimental investigation of the properties of fission fragments and of prompt neutron and γ-ray emission. The accurate experimental data concerning the prompt neutron multiplicity as a function of fragment mass and total kinetic energy for ²⁵²Cf(SF) and ²³⁵U(n,f) recently measured at JRC-Geel (as well as other various prompt emission data) allow a consistent and very detailed validation of the Point-by-Point (PbP) deterministic model of prompt emission. The PbP model results describe very well a large variety of experimental data, starting from the multi-parametric matrices of prompt neutron multiplicity ν(A,TKE) and γ-ray energy E_γ(A,TKE), which validate the model itself, passing through different average prompt emission quantities as a function of A (e.g., ν(A), E_γ(A), ⟨ε⟩(A), etc.) and as a function of TKE (e.g., ν(TKE), E_γ(TKE)), up to the prompt neutron distribution P(ν) and the total average prompt neutron spectrum. The PbP model does not use free or adjustable parameters. To calculate the multi-parametric matrices it needs only data included in the reference input parameter library RIPL of the IAEA. To provide average prompt emission quantities as a function of A and of TKE, and total average quantities, the multi-parametric matrices are averaged over reliable experimental fragment distributions. The PbP results are also in agreement with the results of the Monte Carlo prompt emission codes FIFRELIN, CGMF and FREYA. The good description of a large variety of experimental data proves the capability of the PbP model to be used in nuclear data evaluations and its reliability to predict prompt emission data for fissioning
Dynamic landscapes: a model of context and contingency in evolution.
Foster, David V; Rorick, Mary M; Gesell, Tanja; Feeney, Laura M; Foster, Jacob G
2013-10-07
Although the basic mechanics of evolution have been understood since Darwin, debate continues over whether macroevolutionary phenomena are driven by the fitness structure of genotype space or by ecological interaction. In this paper we propose a simple model capturing key features of fitness-landscape and ecological models of evolution. Our model describes evolutionary dynamics in a high-dimensional, structured genotype space with interspecies interaction. We find promising qualitative similarity with the empirical facts about macroevolution, including broadly distributed extinction sizes and realistic exploration of the genotype space. The abstraction of our model permits numerous applications beyond macroevolution, including protein and RNA evolution.
Bisous model - Detecting filamentary patterns in point processes
Tempel, E.; Stoica, R. S.; Kipper, R.; Saar, E.
2016-07-01
The cosmic web is a highly complex geometrical pattern, with galaxy clusters at the intersection of filaments and filaments at the intersection of walls. Identifying and describing the filamentary network is not a trivial task due to the overwhelming complexity of the structure, its connectivity and its intrinsic hierarchical nature. To detect and quantify galactic filaments we use the Bisous model, which is a marked point process built to model multi-dimensional patterns. The Bisous filament finder works directly with the galaxy distribution data and the model intrinsically takes into account the connectivity of the filamentary network. The Bisous model generates the visit map (the probability to find a filament at a given point) together with the filament orientation field. Using these two fields, we can extract filament spines from the data. Together with this paper we publish the computer code for the Bisous model, which is made available on GitHub. The Bisous filament finder has been successfully used in several cosmological applications, and further development of the model will allow the filamentary network to be detected in photometric redshift surveys as well, using the full redshift posterior. We also want to encourage the astro-statistical community to use the model and to connect it with all other existing methods for filamentary pattern detection and characterisation.
Thuan, T. X.; Hart, M. H.; Ostriker, J. P.
1975-01-01
The two basic approaches of physical theory required to calculate the evolution of a galactic system are considered, taking into account stellar evolution theory and the dynamics of a gas-star system. Attention is given to intrinsic (stellar) physics, extrinsic (dynamical) physics, and computations concerning the fractionation of an initial mass of gas into stars. The characteristics of a 'standard' model and its variants are discussed along with the results obtained with the aid of these models.
Energy Technology Data Exchange (ETDEWEB)
Nidever, David L.; Zasowski, Gail; Majewski, Steven R.; Beaton, Rachael L.; Wilson, John C.; Skrutskie, Michael F.; O'Connell, Robert W. [Department of Astronomy, University of Virginia, Charlottesville, VA 22904-4325 (United States); Bird, Jonathan; Schoenrich, Ralph; Johnson, Jennifer A.; Sellgren, Kris [Department of Astronomy and the Center for Cosmology and Astro-Particle Physics, The Ohio State University, Columbus, OH 43210 (United States); Robin, Annie C.; Schultheis, Mathias [Institut Utinam, CNRS UMR 6213, OSU THETA, Universite de Franche-Comte, 41bis avenue de l'Observatoire, F-25000 Besancon (France); Martinez-Valpuesta, Inma; Gerhard, Ortwin [Max-Planck-Institut fuer Extraterrestrische Physik, Giessenbachstrasse, D-85748 Garching (Germany); Shetrone, Matthew [McDonald Observatory, University of Texas at Austin, Fort Davis, TX 79734 (United States); Schiavon, Ricardo P. [Gemini Observatory, 670 North A'Ohoku Place, Hilo, HI 96720 (United States); Weiner, Benjamin [Steward Observatory, 933 North Cherry Street, University of Arizona, Tucson, AZ 85721 (United States); Schneider, Donald P. [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States); Allende Prieto, Carlos, E-mail: dln5q@virginia.edu [Instituto de Astrofisica de Canarias, E-38205 La Laguna, Tenerife (Spain); and others
2012-08-20
Commissioning observations with the Apache Point Observatory Galactic Evolution Experiment (APOGEE), part of the Sloan Digital Sky Survey III, have produced radial velocities (RVs) for ~4700 K/M-giant stars in the Milky Way (MW) bulge. These high-resolution (R ~ 22,500), high-S/N (>100 per resolution element), near-infrared (NIR; 1.51-1.70 μm) spectra provide accurate RVs (ε_V ~ 0.2 km s⁻¹) for the sample of stars in 18 Galactic bulge fields spanning -1°
Energy Technology Data Exchange (ETDEWEB)
Hamidouche, T., E-mail: t.hamidouche@crna.d [Division de l'Environnement, de la Surete et des Dechets Radioactifs, Centre de Recherche Nucleaire d'Alger, 02 Boulevard Frantz Fanon, BP 399 Alger RP (Algeria); Bousbia-Salah, A. [DIMNP - University of Pisa, Via Diotisalvi 02, 56126 Pisa (Italy)
2010-03-15
The current study emphasizes an aspect related to the assessment of a model embedded in a computer code. The study concerns more particularly the point neutron kinetics model of the RELAP5/Mod3 code, which is used worldwide. The model is assessed against positive reactivity insertion transients, including calculations involving thermal-hydraulic feedback as well as transients with no feedback effects. It was concluded that the RELAP5 point kinetics model provides unphysical power evolution trends, most probably due to a bug introduced during programming.
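The standard one-delayed-group point kinetics equations that such a model implements can be integrated directly; a minimal explicit-Euler sketch, with typical thermal-reactor parameter values chosen for illustration (these are not RELAP5's defaults):

```python
def point_kinetics(rho, beta=0.0065, Lam=1e-4, lam=0.08, t_end=1.0, dt=1e-5):
    """One-delayed-group point kinetics, explicit Euler:
        dn/dt = ((rho - beta)/Lam) * n + lam * C
        dC/dt = (beta/Lam) * n - lam * C
    with reactivity rho, delayed fraction beta, generation time Lam
    and precursor decay constant lam. Starts from equilibrium at
    normalized power n = 1 and returns n(t_end)."""
    n, C = 1.0, beta / (Lam * lam)   # equilibrium precursor concentration
    t = 0.0
    while t < t_end:
        dn = ((rho - beta) / Lam) * n + lam * C
        dC = (beta / Lam) * n - lam * C
        n += dn * dt
        C += dC * dt
        t += dt
    return n
```

A physically sensible implementation must hold power steady at zero reactivity and raise it for a positive insertion below prompt critical, which is exactly the kind of sanity check the assessment above applies to the embedded model.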
Directory of Open Access Journals (Sweden)
Karine K. Abgaryan
2015-09-01
Full Text Available An important task in improving technologies for synthesizing highly efficient silicon-based light-emitting diodes is theoretical research into the formation of point-defect clusters. One method of obtaining silicon with photoluminescent properties is irradiation, which causes the formation of various defects in its structure, including point and linear defects and their clusters and complexes. In this paper a mathematical model was used to determine the coordinates and velocities of all particles in the system. The model was used for describing point-defect formation processes and studying their evolution with time and temperature. The multi-parametric Tersoff potential was used for the description of interactions between particles. The values of the Tersoff potential parameters were selected by solving the parametric identification problem for silicon. For developing the models we used the system cohesive energy values obtained by an ab initio calculation based on density functional theory (DFT). The resultant computer model allows molecular dynamics (MD) simulation of a silicon crystal structure with point defects and their clusters, with possible visualization and animation of the simulation results.
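The Tersoff potential is too involved to reproduce here, but the time-stepping core of any such MD model is the velocity-Verlet integrator; a one-particle sketch with a placeholder force, purely for illustration:

```python
def velocity_verlet(x, v, force, mass, dt, steps):
    """Generic velocity-Verlet time stepper, the integrator at the
    core of most MD codes (shown with an arbitrary 1D force; the
    Tersoff potential itself is not reproduced here)."""
    a = force(x) / mass
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt   # position update
        a_new = force(x) / mass              # force at the new position
        v = v + 0.5 * (a + a_new) * dt       # velocity update
        a = a_new
    return x, v
```

Because the scheme is symplectic, it conserves energy over long runs, which is what makes it suitable for tracking defect evolution with time and temperature.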
Storkel, Holly L
2015-02-01
Word learning consists of at least two neurocognitive processes: learning from input during training and memory evolution during gaps between training sessions. Fine-grained analysis of word learning by normal adults provides evidence that learning from input is swift and stable, whereas memory evolution is a point of potential vulnerability on the pathway to mastery. Moreover, success during learning from input is linked to positive outcomes from memory evolution. These two neurocognitive processes can be overlaid onto components of clinical treatment, with within-session variables (i.e. dose form and dose) potentially linked to learning from input and between-session variables (i.e. dose frequency) linked to memory evolution. Collecting data at the beginning and end of a treatment session can be used to identify the point of vulnerability in word learning for a given client, and the appropriate treatment component can then be adjusted to improve the client's word learning. Two clinical cases are provided to illustrate this approach.
The impact of pollution on stellar evolution models
Dotter, Aaron; Chaboyer, Brian
2003-01-01
An approach is introduced for incorporating the concept of stellar pollution into stellar evolution models. The approach involves enhancing the metal content of the surface layers of stellar models. In addition, the surface layers of stars in the mass range of 0.5-2.0 Solar masses are mixed to an artificial depth motivated by observations of lithium abundance. The behavior of polluted stellar evolution models is explored assuming the pollution occurs after the star has left the fully convecti...
Computer modelling as a tool for understanding language evolution
de Boer, Bart; Gontier, N; VanBendegem, JP; Aerts, D
2006-01-01
This paper describes the uses of computer models in studying the evolution of language. Language is a complex dynamic system that can be studied at the level of the individual and at the level of the population. Much of the dynamics of language evolution and language change occur because of the
Knowledge Growth: Applied Models of General and Individual Knowledge Evolution
Silkina, Galina Iu.; Bakanova, Svetlana A.
2016-01-01
The article considers the mathematical models of the growth and accumulation of scientific and applied knowledge since it is seen as the main potential and key competence of modern companies. The problem is examined on two levels--the growth and evolution of objective knowledge and knowledge evolution of a particular individual. Both processes are…
Advances in Modelling of Large Scale Coastal Evolution
Stive, M.J.F.; De Vriend, H.J.
1995-01-01
Attention to the impact of climate change on the world's coastlines has established large-scale coastal evolution as a topic of wide interest. Some more recent advances in this field, focusing on the potential of mathematical models for the prediction of large-scale coastal evolution, are discussed.
Numerical Simulation of Missouri River Bed Evolution Downstream of Gavins Point Dam
Sulaiman, Z. A.; Blum, M. D.; Lephart, G.; Viparelli, E.
2016-12-01
The Missouri River originates in the Rocky Mountains in western Montana and joins the Mississippi River near Saint Louis, Missouri. In the 1900s, dam construction and river engineering works, such as river alignment, narrowing and bank protection, were performed in the Missouri River basin to control flood flows, ensure navigation and supply water for agricultural, industrial and municipal needs, for hydroelectric power generation and for recreation. These projects altered the flow and sediment transport regimes in the river and the exchange of sediment between the river and the adjoining floodplain. Here we focus on the long-term effect of dam construction and channel narrowing on the 1200 km long reach of the Missouri River between Gavins Point Dam, on the Nebraska-South Dakota border, and the confluence with the Mississippi River. Field observations show that two downstream-migrating waves of channel bed degradation formed in this reach in response to the changes in flow regime, sediment load and channel geometry. We implemented a one-dimensional morphodynamic model for large, low-slope sand bed rivers, validated the model at field scale by comparing the numerical results with the available field data, and use the model to 1) predict the magnitude and the migration rate of the waves of degradation at engineering time scales (about 150 years into the future), 2) quantify the changes in the sand load delivered to the Mississippi River, where field observations at Thebes, i.e. downstream of Saint Louis, suggest a decline in the mean annual sand load in the past 50 years, and 3) identify the role of the main tributaries - Little Sioux River, Platte River and Kansas River - on the wave migration speed and the annual sand load in the Missouri River main channel.
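The bed-evolution core of a 1D morphodynamic model of this kind is the Exner equation; a single explicit upwind step might look like the following sketch (illustrative only, not the authors' code):

```python
def exner_step(eta, qs, dx, dt, porosity=0.4):
    """One explicit upwind step of the 1D Exner equation
        d(eta)/dt = -1/(1 - porosity) * d(qs)/dx
    where eta is bed elevation and qs the sand transport rate per
    unit width at each node. Returns the updated bed profile."""
    new = eta[:]                              # copy; leave upstream node fixed
    for i in range(1, len(eta)):
        dqs_dx = (qs[i] - qs[i - 1]) / dx     # upwind flux gradient
        new[i] = eta[i] - dt / (1.0 - porosity) * dqs_dx
    return new
```

Where the transport rate increases downstream the bed degrades, which is the mechanism behind the downstream-migrating degradation waves described above.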
Point process models for household distributions within small areal units
Directory of Open Access Journals (Sweden)
Zack W. Almquist
2012-06-01
Full Text Available Spatio-demographic data sets are increasingly available worldwide, permitting ever more realistic modeling and analysis of social processes ranging from mobility to disease transmission. The information provided by these data sets is typically aggregated by areal unit, for reasons of both privacy and administrative cost. Unfortunately, such aggregation does not permit fine-grained assessment of geography at the level of individual households. In this paper, we propose to partially address this problem via the development of point process models that can be used to effectively simulate the location of individual households within small areal units.
Development of a numerical 2-dimensional beach evolution model
DEFF Research Database (Denmark)
Baykal, Cüneyt
2014-01-01
This paper presents the description of a 2-dimensional numerical model constructed for the simulation of beach evolution under the action of wind waves only over the arbitrary land and sea topographies around existing coastal structures and formations. The developed beach evolution numerical model...... on the gradients of sediment transport rates in cross-shore and longshore directions. The developed models are applied successfully to the SANDYDUCK field experiments and to some conceptual benchmark cases including simulation of rip currents around beach cusps, beach evolution around a single shore perpendicular...
Chempy: A flexible chemical evolution model for abundance fitting
Rybizki, J.; Just, A.; Rix, H.-W.; Fouesneau, M.
2017-02-01
Chempy models Galactic chemical evolution (GCE); it is a parametrized open one-zone model within a Bayesian framework. A Chempy model is specified by a set of 5-10 parameters that describe the effective galaxy evolution along with the stellar and star-formation physics: e.g. the star-formation history (SFH), the feedback efficiency, the stellar initial mass function (IMF) and the incidence of supernova of type Ia (SN Ia). Chempy can sample the posterior probability distribution in the full model parameter space and test data-model matches for different nucleosynthetic yield sets, performing essentially as a chemical evolution fitting tool. Chempy can be used to confront predictions from stellar nucleosynthesis with complex abundance data sets and to refine the physical processes governing the chemical evolution of stellar systems.
Landscape Evolution Modelling of naturally dammed rivers
van Gorp, Wouter; Temme, Arnaud J. A. M.; Baartman, Jantiene E. M.; Schoorl, Jeroen M.
2014-01-01
Natural damming of upland river systems, such as landslide or lava damming, occurs worldwide. Many dams fail shortly after their creation, while other dams are long-lived and therefore have a long-term impact on fluvial and landscape evolution. This long-term impact is still poorly understood and
Modelling the Evolution of Rates of Ageing
Indian Academy of Sciences (India)
Resonance – Journal of Science Education, Volume 3, Issue 8, August 1998, pp. 67-72. Evolution and Behaviour Laboratory, Animal Behaviour Unit, Jawaharlal Nehru Centre for Advanced Scientific Research, Jakkur P. O., Bangalore 560 064, India.
Considering bioactivity in modelling continental growth and the Earth's evolution
Höning, D.; Spohn, T.
2013-09-01
The complexity of planetary evolution increases with the number of interacting reservoirs. On Earth, even the biosphere is speculated to interact with the interior. It has been argued (e.g., Rosing et al. 2006; Sleep et al. 2012) that the formation of continents could be a consequence of bioactivity harvesting solar energy through photosynthesis to help build the continents and that the mantle should carry a chemical biosignature. Through plate tectonics, the surface biosphere can impact deep subduction zone processes and the interior of the Earth. Subducted sediments are particularly important, because they influence the Earth's interior in several ways, and in turn are strongly influenced by the Earth's biosphere. In our model, we use the assumption that a thick sedimentary layer of low permeability on top of the subducting oceanic crust, caused by a biologically enhanced weathering rate, can suppress shallow dewatering. This in turn leads to greater availability of water in the source region of andesitic partial melt, resulting in an enhanced rate of continental production and regassing rate into the mantle. Our model includes (i) mantle convection, (ii) continental erosion and production, and (iii) mantle water degassing at mid-ocean ridges and regassing at subduction zones. The mantle viscosity of our model depends on (i) the mantle water concentration and (ii) the mantle temperature, whose time dependency is given by the radioactive decay of isotopes in the Earth's mantle. Boundary layer theory yields the speed of convection and the water outgassing rate of the Earth's mantle. Our results indicate that present-day values of continental surface area and water content of the Earth's mantle represent an attractor in a phase plane spanned by both parameters. We show that the biologic enhancement of the continental erosion rate is important for the system to reach this fixed point. An abiotic Earth tends to reach an alternative stable fixed point with a smaller
A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD
Directory of Open Access Journals (Sweden)
J. Tang
2017-09-01
Full Text Available Leaves falling gently or fluttering are a common phenomenon in nature scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and models of falling leaves have wide applications in animation and virtual reality. In this paper, we propose a novel modeling method for fluttering leaves based on point clouds. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined: rotation falling, roll falling and screw-roll falling. At the same time, a parallel algorithm based on OpenMP is implemented to satisfy real-time needs in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.
Self-Exciting Point Process Modeling of Conversation Event Sequences
Masuda, Naoki; Takaguchi, Taro; Sato, Nobuo; Yano, Kazuo
Self-exciting processes of Hawkes type have been used to model various phenomena including earthquakes, neural activities, and views of online videos. Studies of temporal networks have revealed that sequences of social interevent times for individuals are highly bursty. We examine some basic properties of event sequences generated by the Hawkes self-exciting process to show that it generates bursty interevent times for a wide parameter range. Then, we fit the model to data on conversation sequences recorded in company offices in Japan. In this way, we can estimate the relative magnitudes of the self-excitement, its temporal decay, and the base event rate independent of the self-excitation. These variables depend strongly on the individual. We also point out that the Hawkes model has an important limitation: the correlation in the interevent times and the burstiness cannot be independently modulated.
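The intensity dynamics described above can be sketched numerically. Below is a minimal simulation of a univariate Hawkes process with an exponentially decaying kernel, using Ogata's thinning algorithm; the parameter values (base rate, excitation strength, decay) are illustrative assumptions, not values fitted to the conversation data:

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, t_max, rng):
    """Simulate a univariate Hawkes process by Ogata's thinning method.

    Conditional intensity: lambda(t) = mu + alpha * sum(exp(-beta*(t - t_i)))
    over past event times t_i. The intensity only decays between events, so
    its value just after the last event bounds it until the next candidate.
    """
    events = []
    t = 0.0
    while t < t_max:
        # Upper bound on the intensity from now until the next event.
        lam_bar = mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)  # candidate event time
        if t >= t_max:
            break
        lam_t = mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() <= lam_t / lam_bar:
            events.append(t)  # accepted event: it excites future intensity
    return events

rng = random.Random(42)
# Assumed parameters: base rate 0.2, excitation 0.8, decay 1.0 (branching ratio 0.8).
events = simulate_hawkes(mu=0.2, alpha=0.8, beta=1.0, t_max=500.0, rng=rng)
gaps = [b - a for a, b in zip(events, events[1:])]
mean_gap = sum(gaps) / len(gaps)
var_gap = sum((g - mean_gap) ** 2 for g in gaps) / len(gaps)
cv = math.sqrt(var_gap) / mean_gap  # coefficient of variation of interevent times
```

With a branching ratio of 0.8 the sample coefficient of variation of the interevent times typically comes out well above 1, the signature of burstiness discussed above (a Poisson process would give a value near 1).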
Structure of the Scientific Community Modelling the Evolution of Resistance
2007-01-01
Faced with the recurrent evolution of resistance to pesticides and drugs, the scientific community has developed theoretical models aimed at identifying the main factors of this evolution and predicting the efficiency of resistance management strategies. The evolutionary forces considered by these models are generally similar for viruses, bacteria, fungi, plants or arthropods facing drugs or pesticides, so interaction between scientists working on different biological organisms would be expec...
Synthetic clusters of massive stars to test stellar evolution models
Georgy, Cyril; Ekström, Sylvia
2017-03-01
During the last few years, the Geneva stellar evolution group has released new grids of stellar models, including the effect of rotation and with updated physical inputs (Ekström et al. 2012; Georgy et al. 2013a, b). To ease the comparison between the outputs of the stellar evolution computations and the observations, a dedicated tool was developed: the Syclist toolbox (Georgy et al. 2014). It allows users to compute interpolated stellar models, isochrones and synthetic clusters, and to simulate the time evolution of stellar populations.
Modelling of nonlinear shoaling based on stochastic evolution equations
DEFF Research Database (Denmark)
Kofoed-Hansen, Henrik; Rasmussen, Jørgen Hvenekær
1998-01-01
A one-dimensional stochastic model is derived to simulate the transformation of wave spectra in shallow water including generation of bound sub- and super-harmonics, near-resonant triad wave interaction and wave breaking. Boussinesq type equations with improved linear dispersion characteristics...... are recast into evolution equations for the complex amplitudes, and serve as the underlying deterministic model. Next, a set of evolution equations for the cumulants is derived. By formally introducing the well-known Gaussian closure hypothesis, nonlinear evolution equations for the power spectrum...
[Modeling asthma evolution by a multi-state model].
Boudemaghe, T; Daurès, J P
2000-06-01
There are many scores for the evaluation of asthma. However, most do not take into account the evolutionary aspects of this illness. We propose a model of the clinical course of asthma as a homogeneous Markov process, based on data provided by the A.R.I.A. (Association de Recherche en Intelligence Artificielle dans le cadre de l'asthme et des maladies respiratoires). The criterion used is the activity of the illness during the month before consultation. The activity is divided into three levels: light (state 1), moderate (state 2) and severe (state 3). The model allows the evaluation of the strength of the transitions between states. We found that transition intensities towards state 2 were strong (lambda(12) and lambda(32)), weaker towards state 1 (lambda(21) and lambda(31)), and minimal towards state 3 (lambda(23)). This results in an equilibrium distribution essentially divided between states 1 and 2 (44.6% and 51.0%, respectively), with a small proportion in state 3 (4.4%). In the future, the increasing amount of available data should permit the introduction of covariates, the distinction of subgroups and the implementation of clinical studies. The interest of this model lies both in the quantification of the illness and in the representation it allows, while offering a formal framework for the clinical notions of time and evolution.
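The link between transition intensities and the equilibrium distribution in such a homogeneous Markov model can be illustrated with a small numerical sketch. The lambda values below are hypothetical, chosen only to mirror the qualitative pattern reported (strong intensities towards state 2, weaker towards state 1, minimal towards state 3); they are not the fitted A.R.I.A. estimates:

```python
def stationary_distribution(Q):
    """Solve pi @ Q = 0 with sum(pi) = 1 for a small generator matrix Q,
    by Gaussian elimination (pure Python, no dependencies)."""
    n = len(Q)
    # Equations: for each column i, sum_j pi_j * Q[j][i] = 0.
    A = [[Q[j][i] for j in range(n)] for i in range(n)]
    b = [0.0] * n
    # Balance equations are linearly dependent (rows of Q sum to zero),
    # so replace the last one with the normalisation sum(pi) = 1.
    A[n - 1] = [1.0] * n
    b[n - 1] = 1.0
    # Forward elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = b[r] - sum(A[r][c] * x[c] for c in range(r + 1, n))
        x[r] = s / A[r][r]
    return x

# Hypothetical monthly transition intensities between the three activity states.
l12, l13 = 0.30, 0.02   # out of state 1 (light)
l21, l23 = 0.25, 0.02   # out of state 2 (moderate)
l31, l32 = 0.20, 0.40   # out of state 3 (severe)
Q = [
    [-(l12 + l13), l12, l13],
    [l21, -(l21 + l23), l23],
    [l31, l32, -(l31 + l32)],
]
pi = stationary_distribution(Q)  # equilibrium distribution over the 3 states
```

With these assumed rates the equilibrium mass concentrates in states 1 and 2 with only a few percent in state 3, reproducing the qualitative shape of the distribution reported above.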
Evolutionary Sequential Monte Carlo Samplers for Change-Point Models
Directory of Open Access Journals (Sweden)
Arnaud Dufays
2016-03-01
Full Text Available Sequential Monte Carlo (SMC) methods are widely used for non-linear filtering purposes. However, the scope of SMC encompasses wider applications, such as estimating static model parameters, to the extent that it is becoming a serious alternative to Markov chain Monte Carlo (MCMC) methods. Not only do SMC algorithms draw posterior distributions of static or dynamic parameters, but they additionally provide an estimate of the marginal likelihood. The tempered and time (TNT) algorithm, developed in this paper, combines (off-line) tempered SMC inference with on-line SMC inference for drawing realizations from many sequential posterior distributions without experiencing a particle degeneracy problem. Furthermore, it introduces a new MCMC rejuvenation step that is generic, automated and well suited for multi-modal distributions. As this update relies on the wide heuristic optimization literature, numerous extensions are readily available. The algorithm is notably appropriate for estimating change-point models. As an example, we compare several change-point GARCH models through their marginal log-likelihoods over time.
First Prismatic Building Model Reconstruction from Tomosar Point Clouds
Sun, Y.; Shahzad, M.; Zhu, X.
2016-06-01
This paper demonstrates for the first time the potential of explicitly modelling the individual roof surfaces to reconstruct 3-D prismatic building models using spaceborne tomographic synthetic aperture radar (TomoSAR) point clouds. The proposed approach is modular and works as follows: it first extracts the buildings via DSM generation and cutting-off the ground terrain. The DSM is smoothed using BM3D denoising method proposed in (Dabov et al., 2007) and a gradient map of the smoothed DSM is generated based on height jumps. Watershed segmentation is then adopted to oversegment the DSM into different regions. Subsequently, height and polygon complexity constrained merging is employed to refine (i.e., to reduce) the retrieved number of roof segments. Coarse outline of each roof segment is then reconstructed and later refined using quadtree based regularization plus zig-zag line simplification scheme. Finally, height is associated to each refined roof segment to obtain the 3-D prismatic model of the building. The proposed approach is illustrated and validated over a large building (convention center) in the city of Las Vegas using TomoSAR point clouds generated from a stack of 25 images using Tomo-GENESIS software developed at DLR.
Separation-Mixing as a Model of Composition Evolution of any Nature
Directory of Open Access Journals (Sweden)
Tomas G. Petrov
2014-02-01
Full Text Available The separation-mixing model is applicable to the study of the compositional evolution of systems of different natures, from physicochemical to social ones. To display the processes, the RHAT information language-method is proposed; it accounts, at a single point on a chart, for an indefinitely wide variation of components and their quantities. Possible applications of the model are shown.
A mathematical model for evolution and SETI.
Maccone, Claudio
2011-12-01
Darwinian evolution theory may be regarded as a part of SETI theory in that the factor f(l) in the Drake equation represents the fraction of planets suitable for life on which life actually arose. In this paper we first provide a statistical generalization of the Drake equation where the factor f(l) is shown to follow the lognormal probability distribution. This lognormal distribution is a consequence of the Central Limit Theorem (CLT) of statistics, stating that the product of a number of independent random variables, whose probability densities are unknown and independent of each other, approaches the lognormal distribution as the number of factors increases to infinity. In addition we show that the exponential growth of the number of species typical of Darwinian evolution may be regarded as the geometric locus of the peaks of a one-parameter family of lognormal distributions (b-lognormals) constrained between the time axis and the exponential growth curve. Finally, since each b-lognormal distribution in the family may in turn be regarded as the product of a large number (actually "an infinity") of independent lognormal probability distributions, the mathematical way is paved to further cast Darwinian evolution into a mathematical theory in agreement with both its typical exponential growth in the number of living species and the Statistical Drake Equation.
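The CLT argument above, that the product of many independent positive random variables tends to a lognormal whatever the individual densities, is easy to check numerically. The factor distributions below are arbitrary illustrative choices (uniform, shifted exponential, triangular), not anything drawn from the Drake-equation literature:

```python
import math
import random
import statistics

rng = random.Random(7)

def product_of_factors(n_factors, rng):
    """Product of independent positive random variables drawn from
    deliberately different distributions."""
    p = 1.0
    for i in range(n_factors):
        kind = i % 3
        if kind == 0:
            p *= rng.uniform(0.5, 1.5)
        elif kind == 1:
            p *= 0.5 + rng.expovariate(2.0)   # shifted exponential, mean 1.0
        else:
            p *= rng.triangular(0.4, 1.6, 1.0)
    return p

# log(product) = sum of logs, so by the CLT it tends to a normal
# distribution and the product itself tends to a lognormal.
samples = [math.log(product_of_factors(60, rng)) for _ in range(4000)]
mu = statistics.mean(samples)
sigma = statistics.stdev(samples)
# Fraction of log-products within one standard deviation of the mean;
# close to 0.68 if the log-products are approximately normal.
within = sum(1 for x in samples if abs(x - mu) <= sigma) / len(samples)
```

Even with only 60 heterogeneous factors, the one-sigma coverage of the log-products lands near the Gaussian value of about 68%, which is the lognormal behaviour the statistical Drake equation relies on.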
The Critical Point Entanglement and Chaos in the Dicke Model
Directory of Open Access Journals (Sweden)
Lina Bao
2015-07-01
Full Text Available Ground state properties and level statistics of the Dicke model for a finite number of atoms are investigated based on a progressive diagonalization scheme (PDS). Particle number statistics, the entanglement measure and the Shannon information entropy at the resonance point in cases with a finite number of atoms as functions of the coupling parameter are calculated. It is shown that the entanglement measure defined in terms of the normalized von Neumann entropy of the reduced density matrix of the atoms reaches its maximum value at the critical point of the quantum phase transition where the system is most chaotic. Noticeable change in the Shannon information entropy near or at the critical point of the quantum phase transition is also observed. In addition, the quantum phase transition may be observed not only in the ground state mean photon number and the ground state atomic inversion as shown previously, but also in fluctuations of these two quantities in the ground state, especially in the atomic inversion fluctuation.
Computer models of vocal tract evolution: an overview and critique
de Boer, B.; Fitch, W. T.
2010-01-01
Human speech has been investigated with computer models since the invention of digital computers, and models of the evolution of speech first appeared in the late 1960s and early 1970s. Speech science and computer models have a long shared history because speech is a physical signal and can be
Accuracy analysis of point cloud modeling for evaluating concrete specimens
D'Amico, Nicolas; Yu, Tzuyang
2017-04-01
Photogrammetric methods such as structure from motion (SFM) have the capability to acquire accurate information about geometric features, surface cracks, and mechanical properties of specimens and structures in civil engineering. Conventional approaches to verifying the accuracy of photogrammetric models usually require the use of other optical techniques such as LiDAR. In this paper, the geometric accuracy of photogrammetric modeling is investigated by studying the effects of the number of photos, the radius of curvature, and the point cloud density (PCD) on estimated lengths, areas, volumes, and different stress states of concrete cylinders and panels. Four plain concrete cylinders and two plain mortar panels were used for the study. A commercially available mobile phone camera was used to collect all photographs, and Agisoft PhotoScan software was applied in the photogrammetric modeling of all concrete specimens. From our results, it was found that increasing the number of photos does not necessarily improve the geometric accuracy of point cloud models (PCM). It was also found that the effect of the radius of curvature is not significant compared with those of the number of photos and PCD. A PCD threshold of 15.7194 pts/cm3 is proposed to construct reliable and accurate PCM for condition assessment. At this PCD threshold, all errors in estimating lengths, areas, and volumes were less than 5%. Finally, from the study of the mechanical properties of a plain concrete cylinder, we found that an increase in the stress level inside the cylinder can be captured by an increase in the radial strain of its PCM.
Modelling the evolution and spread of HIV immune escape mutants.
Fryer, Helen R; Frater, John; Duda, Anna; Roberts, Mick G; Phillips, Rodney E; McLean, Angela R
2010-11-18
During infection with human immunodeficiency virus (HIV), immune pressure from cytotoxic T-lymphocytes (CTLs) selects for viral mutants that confer escape from CTL recognition. These escape variants can be transmitted between individuals where, depending upon their cost to viral fitness and the CTL responses made by the recipient, they may revert. The rates of within-host evolution and their concordant impact upon the rate of spread of escape mutants at the population level are uncertain. Here we present a mathematical model of within-host evolution of escape mutants, transmission of these variants between hosts and subsequent reversion in new hosts. The model is an extension of the well-known SI model of disease transmission and includes three further parameters that describe host immunogenetic heterogeneity and rates of within host viral evolution. We use the model to explain why some escape mutants appear to have stable prevalence whilst others are spreading through the population. Further, we use it to compare diverse datasets on CTL escape, highlighting where different sources agree or disagree on within-host evolutionary rates. The several dozen CTL epitopes we survey from HIV-1 gag, RT and nef reveal a relatively sedate rate of evolution with average rates of escape measured in years and reversion in decades. For many epitopes in HIV, occasional rapid within-host evolution is not reflected in fast evolution at the population level.
Topological evolution of virtual social networks by modeling social activities
Sun, Xin; Dong, Junyu; Tang, Ruichun; Xu, Mantao; Qi, Lin; Cai, Yang
2015-09-01
With the development of the Internet and wireless communication, virtual social networks are becoming increasingly important in the formation of today's social communities. Topological evolution models are foundational and critical for social network research. To date, most related experiments have been carried out on artificial networks, and the incorporation of actual social activities into network topology models has been neglected. This paper first formalizes two abstract mathematical concepts, hobby search and friend recommendation, to model the social actions people exhibit. A topology evolution simulation model based on these social activities is then developed to satisfy some well-known properties that have been discovered in real-world social networks. Empirical results show that the proposed topology evolution model reproduces several key network topological properties of concern, which can be envisioned as signatures of real social networks.
Modeling the microstructural evolution during constrained sintering
DEFF Research Database (Denmark)
Bjørk, Rasmus; Frandsen, Henrik Lund; Pryds, Nini
2014-01-01
A numerical model able to simulate solid state constrained sintering is presented. The model couples an existing kinetic Monte Carlo (kMC) model for free sintering with a finite element model (FEM) for calculating stresses on a microstructural level. The microstructural response to the local stress...
Modeling the Microstructural Evolution During Constrained Sintering
DEFF Research Database (Denmark)
Bjørk, Rasmus; Frandsen, Henrik Lund; Pryds, Nini
2015-01-01
A numerical model able to simulate solid-state constrained sintering is presented. The model couples an existing kinetic Monte Carlo model for free sintering with a finite element model (FEM) for calculating stresses on a microstructural level. The microstructural response to the local stress...
A neutral model of transcriptome evolution.
Directory of Open Access Journals (Sweden)
Philipp Khaitovich
2004-05-01
Full Text Available Microarray technologies allow the identification of large numbers of expression differences within and between species. Although environmental and physiological stimuli are clearly responsible for changes in the expression levels of many genes, it is not known whether the majority of changes of gene expression fixed during evolution between species and between various tissues within a species are caused by Darwinian selection or by stochastic processes. We find the following: (1) expression differences between species accumulate approximately linearly with time; (2) gene expression variation among individuals within a species correlates positively with expression divergence between species; (3) rates of expression divergence between species do not differ significantly between intact genes and expressed pseudogenes; (4) expression differences between brain regions within a species have accumulated approximately linearly with time since these regions emerged during evolution. These results suggest that the majority of expression differences observed between species are selectively neutral or nearly neutral and likely to be of little or no functional significance. Therefore, the identification of gene expression differences between species fixed by selection should be based on null hypotheses assuming functional neutrality. Furthermore, it may be possible to apply a molecular clock based on expression differences to infer the evolutionary history of tissues.
Augmenting Epidemiological Models with Point-Of-Care Diagnostics Data.
Directory of Open Access Journals (Sweden)
Özgür Özmen
Full Text Available Although adoption of newer Point-of-Care (POC) diagnostics is increasing, there is a significant challenge in using POC diagnostics data to improve epidemiological models. In this work, we propose a method to process zip-code level POC datasets and apply these processed data to calibrate an epidemiological model. We specifically develop a calibration algorithm using simulated annealing and calibrate a parsimonious equation-based model of modified Susceptible-Infected-Recovered (SIR) dynamics. The results show that parsimonious models are remarkably effective in predicting the dynamics observed in the number of infected patients, and our calibration algorithm is sufficiently capable of predicting peak loads observed in POC diagnostics data while staying within reasonable and empirical parameter ranges reported in the literature. Additionally, we explore the future use of the calibrated values by testing the correlation between peak load and population density from Census data. Our results show that linearity assumptions for the relationships among various factors can be misleading; therefore, further data sources and analysis are needed to identify relationships between additional parameters and existing calibrated ones. Calibration approaches such as ours can determine the values of newly added parameters along with existing ones and enable policy-makers to make better multi-scale decisions.
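The calibration idea can be sketched as follows: a parsimonious SIR model is fit by simulated annealing to a single observed summary (here a synthetic "peak load" generated with a known transmission rate). The population size, rates, step size and cooling schedule are all assumptions for illustration, not the parameters or zip-code level data used in the study:

```python
import math
import random

def sir_peak(beta, gamma, s0, i0, days, dt=0.1):
    """Euler integration of a simple SIR model; returns the peak number of
    infected, a stand-in for the 'peak load' summary discussed above."""
    s, i, r = s0, i0, 0.0
    n = s0 + i0
    peak = i
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / n * dt
        new_rec = gamma * i * dt
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        peak = max(peak, i)
    return peak

# Synthetic 'observed' peak generated with a known transmission rate, so the
# calibration target is unambiguous.
TRUE_BETA, GAMMA, S0, I0, DAYS = 0.35, 0.1, 9990.0, 10.0, 300
observed_peak = sir_peak(TRUE_BETA, GAMMA, S0, I0, DAYS)

# Simulated-annealing search over beta (hypothetical schedule and step size).
rng = random.Random(0)
beta = 0.60
cur_err = abs(sir_peak(beta, GAMMA, S0, I0, DAYS) - observed_peak)
best_beta, best_err = beta, cur_err
temp = 0.10 * observed_peak
for _ in range(300):
    cand = min(1.5, max(0.05, beta + rng.gauss(0.0, 0.05)))
    err = abs(sir_peak(cand, GAMMA, S0, I0, DAYS) - observed_peak)
    if err < best_err:
        best_beta, best_err = cand, err
    # Metropolis acceptance: always accept improvements, sometimes accept
    # worse candidates while the temperature is high.
    if err < cur_err or rng.random() < math.exp(-(err - cur_err) / temp):
        beta, cur_err = cand, err
    temp *= 0.97  # geometric cooling
```

Because the peak is monotone in the transmission rate (for beta above gamma), the annealer recovers a value close to the rate that generated the target; in a real application the error term would compare the full simulated trajectory against the POC time series.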
A CASE STUDY ON POINT PROCESS MODELLING IN DISEASE MAPPING
Directory of Open Access Journals (Sweden)
Viktor Beneš
2011-05-01
Full Text Available We consider a data set of locations where people in Central Bohemia have been infected by tick-borne encephalitis (TBE), and where population census data and covariates concerning vegetation and altitude are available. The aims are to estimate the risk map of the disease and to study the dependence of the risk on the covariates. Instead of using the common area-level approaches, we base the analysis on a Bayesian approach for a log Gaussian Cox point process with covariates. Posterior characteristics for a discretized version of the log Gaussian Cox process are computed using Markov chain Monte Carlo methods. A particular problem which is thoroughly discussed is to determine a model for the background population density. The risk map shows a clear dependence on the population intensity model, and the basic model adopted for the population intensity determines which covariates influence the risk of TBE. Model validation is based on the posterior predictive distribution of various summary statistics.
Kumkar, Yogesh V.; Sen, P. N.; Chaudhari, Hemankumar S.; Oh, Jai-Ho
2018-02-01
In this paper, an attempt has been made to conduct a numerical experiment with the high-resolution global model GME to predict the tropical storms in the North Indian Ocean during the year 2007. Numerical integrations using the icosahedral-hexagonal grid point global model GME were performed to study the evolution of the tropical cyclones Akash, Gonu, Yemyin and Sidr over the North Indian Ocean during 2007. The GME forecast underestimates cyclone intensity, but the model can capture the evolution of intensity, especially the weakening during landfall, which is primarily due to the cutoff of the water vapor supply in the boundary layer as cyclones approach the coastal region. A series of numerical simulations of tropical cyclones has been performed with GME to examine the model's capability in predicting the intensity and track of the cyclones. The model performance is evaluated by calculating root mean square cyclone track errors.
Evolution of Money Distribution in a Simple Economic Model
Liang, X. San; Carter, Thomas J.
An analytical approach is utilized to study the evolution of money in a simple agent-based economic model, in which every agent randomly selects someone else and gives the target one dollar unless he runs out of money. (No one is allowed to go into debt.) If originally no agent is in poverty, for most of the time the economy is found to be dominated by a Gaussian money distribution, with a fixed mean and a variance that increases proportionally to time. This structure begins to drift toward the left when the tail of the Gaussian hits the left boundary, and the drift becomes faster and faster until a steady state is reached. The steady state generally follows the Boltzmann-Gibbs distribution, except for the points around the origin. Our result shows that the pdf for the utterly destitute is only half of that predicted by the Boltzmann solution. An implication of this is that the economic structure may be improved through manipulating transaction rules.
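The transaction rule described above is simple enough to simulate directly. The agent count, initial endowment and number of steps below are arbitrary choices for illustration:

```python
import random

def simulate_exchange(n_agents, money0, n_steps, rng):
    """Each step, a random agent gives one dollar to another random agent,
    unless the giver has no money (no debt allowed)."""
    wealth = [money0] * n_agents
    for _ in range(n_steps):
        giver = rng.randrange(n_agents)
        if wealth[giver] == 0:
            continue  # the destitute cannot give
        receiver = rng.randrange(n_agents)
        if receiver == giver:
            continue
        wealth[giver] -= 1
        wealth[receiver] += 1
    return wealth

rng = random.Random(1)
wealth = simulate_exchange(n_agents=500, money0=10, n_steps=200_000, rng=rng)
total = sum(wealth)  # money is conserved by every transaction
# In the steady state the wealth histogram is roughly exponential
# (Boltzmann-Gibbs): many poor agents, few rich ones.
frac_below_mean = sum(1 for w in wealth if w < 10) / len(wealth)
```

After enough transactions the distribution becomes strongly right-skewed: well over half the agents fall below the mean endowment, as an exponential (Boltzmann-Gibbs) distribution predicts, in contrast to the symmetric Gaussian stage early in the run.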
Universality away from critical points in a thermostatistical model
Lapilli, C. M.; Wexler, C.; Pfeifer, P.
Nature uses phase transitions as powerful regulators of processes ranging from climate to the alteration of phase behavior of cell membranes to protect cells from cold, building on the fact that thermodynamic properties of a solid, liquid, or gas are sensitive fingerprints of intermolecular interactions. The only known exceptions from this sensitivity are critical points. At a critical point, two phases become indistinguishable and thermodynamic properties exhibit universal behavior: systems with widely different intermolecular interactions behave identically. Here we report a major counterexample. We show that different members of a family of two-dimensional systems, the discrete p-state clock model, with different Hamiltonians describing different microscopic interactions between molecules or spins, may exhibit identical thermodynamic behavior over a wide range of temperatures. The results generate a comprehensive map of the phase diagram of the model and, by virtue of the discrete rotors behaving like continuous rotors, reveal an emergent symmetry not present in the Hamiltonian. This symmetry, a many-to-one map of intermolecular interactions onto thermodynamic states, demonstrates previously unknown limits for the macroscopic distinguishability of different microscopic interactions.
Model experiments on platelet adhesion in stagnation point flow.
Wurzinger, L J; Blasberg, P; van de Loecht, M; Suwelack, W; Schmid-Schönbein, H
1984-01-01
Experiments with glass models of arterial branchings and bends, perfused with bovine platelet-rich plasma (PRP), revealed that platelet deposition is strongly dependent on fluid dynamic factors. Predilection sites of platelet deposits are characterized by flow vectors directed against the wall, so-called stagnation point flow. Thus collision of suspended particles with the wall, an absolute prerequisite for adhesion of platelets to surfaces even as thrombogenic as glass, appears to be mediated by convective forces. The extent of platelet deposition is correlated with the magnitude of flow components normal to the surface as well as with the state of biological activation of the platelets. The latter may act through an increase in hydrodynamically effective volume, invariably associated with the platelet shape-change reaction to biochemical stimulants such as ADP. The effect of altered rheological properties of platelets upon their deposition, and of the mechanical properties of surfaces, was examined in a stagnation point flow chamber. Roughnesses on the order of 5 microns, probably by creating local flow disturbances, significantly enhance platelet adhesion compared to a smooth surface of identical chemical composition.
Neutral null models for diversity in serial transfer evolution experiments.
Harpak, Arbel; Sella, Guy
2014-09-01
Evolution experiments with microorganisms coupled with genome-wide sequencing now allow for the systematic study of population genetic processes under a wide range of conditions. In learning about these processes in natural, sexual populations, neutral models that describe the behavior of diversity and divergence summaries have played a pivotal role. It is therefore natural to ask whether neutral models, suitably modified, could be useful in the context of evolution experiments. Here, we introduce coalescent models for polymorphism and divergence under the most common experimental evolution assay, a serial transfer experiment. This relatively simple setting allows us to address several issues that could affect diversity patterns in evolution experiments, whether selection is operating or not: the transient behavior of neutral polymorphism in an experiment beginning from a single clone, the effects of randomness in the timing of cell division, and noisiness in population size at the dilution stage. In our analyses and discussion, we emphasize the implications for experiments aimed at measuring diversity patterns and making inferences about population genetic processes based on these measurements. © 2014 The Author(s). Evolution © 2014 The Society for the Study of Evolution.
Modeling elephant-mediated cascading effects of water point closure.
Hilbers, Jelle P; Van Langevelde, Frank; Prins, Herbert H T; Grant, C C; Peel, Mike J S; Coughenour, Michael B; De Knegt, Henrik J; Slotow, Rob; Smit, Izak P J; Kiker, Greg A; De Boer, Willem F
2015-03-01
Wildlife management to reduce the impact of wildlife on their habitat can be done in several ways, among which removing animals (by either culling or translocation) is most often used. There are, however, alternative ways to control wildlife densities, such as opening or closing water points. The effects of these alternatives are poorly studied. In this paper, we focus on manipulating large herbivores through the closure of water points (WPs). Removal of artificial WPs has been suggested in order to change the distribution of African elephants, which occur in high densities in national parks in Southern Africa and are thought to have a destructive effect on the vegetation. Here, we modeled the long-term effects of different scenarios of WP closure on the spatial distribution of elephants, and the consequential effects on the vegetation and other herbivores in Kruger National Park, South Africa. Using a dynamic ecosystem model, SAVANNA, we evaluated scenarios that varied in the availability of artificial WPs, levels of natural water, and elephant densities. Our modeling results showed that elephants can indirectly negatively affect the distributions of meso-mixed feeders, meso-browsers, and some meso-grazers under wet conditions. The closure of artificial WPs hardly had any effect during these natural wet conditions. Under dry conditions, the spatial distribution of both elephant bulls and cows changed when the availability of artificial water was severely reduced in the model. These changes in spatial distribution triggered changes in the spatial availability of woody biomass over the simulation period of 80 years, and this led to changes in the rest of the herbivore community, resulting in increased densities of all herbivores, except for giraffe and steenbok, in areas close to rivers. The spatial distributions of elephant bulls and cows were less affected by the closure of WPs than those of most other herbivore species. Our study contributes to ecologically
Analogue model for anti-de Sitter as a description of point sources in fluids
Mosna, Ricardo A; Richartz, Maurício
2016-01-01
We introduce an analogue model for a nonglobally hyperbolic spacetime in terms of a two-dimensional fluid. This is done by considering the propagation of sound waves in a radial flow with constant velocity. We show that the equation of motion satisfied by sound waves is the wave equation on $AdS_2 \times S^1$. Since this spacetime is not globally hyperbolic, the dynamics of the Klein-Gordon field is not well defined until boundary conditions at the spatial boundary of $AdS_2$ are prescribed. On the analogue model end, those extra boundary conditions provide an effective description of the point source at $r=0$. For waves with circular symmetry, we relate the different physical evolutions to the phase difference between ingoing and outgoing scattered waves. We also show that the fluid configuration can be stable or unstable depending on the chosen boundary condition.
Modeling the microstructural evolution during constrained sintering
DEFF Research Database (Denmark)
Bjørk, Rasmus; Frandsen, Henrik Lund; Pryds, Nini
A mesoscale numerical model able to simulate solid state constrained sintering is presented. The model couples an existing kinetic Monte Carlo (kMC) model for free sintering with a finite element method for calculating stresses. The sintering behavior of a sample constrained by a rigid substrate...
Modeling the microstructural evolution during constrained sintering
DEFF Research Database (Denmark)
Bjørk, Rasmus; Frandsen, Henrik Lund; Tikare, V.
A numerical model able to simulate solid state constrained sintering of a powder compact is presented. The model couples an existing kinetic Monte Carlo (kMC) model for free sintering with a finite element (FE) method for calculating stresses on a microstructural level. The microstructural response...
A last updating evolution model for online social networks
Bu, Zhan; Xia, Zhengyou; Wang, Jiandong; Zhang, Chengcui
2013-05-01
As information technology has advanced, people are turning to electronic media more frequently for communication, and social relationships are increasingly found on online channels. However, there is very limited knowledge about the actual evolution of online social networks. In this paper, we propose and study a novel evolution network model with the new concept of “last updating time”, which exists in many real-life online social networks. The last updating evolution network model can maintain the robustness of scale-free networks and can improve network resilience against intentional attacks. Moreover, we find that it exhibits the “small-world effect”, an inherent property of most social networks. Simulation experiments based on this model show that its results are consistent with real-life data, which indicates that the model is valid.
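A growth model of this general kind can be sketched as follows. The attachment rule below is an illustrative assumption, not the paper's exact "last updating time" mechanism: new nodes attach preferentially to nodes that are both well connected and recently active, with a recency discount `decay**(t - last_update)` on each node's degree weight.

```python
import random

def grow_last_updating_network(n_nodes=300, m=2, decay=0.9, seed=1):
    """Growing network where attachment weight for node i at time t is
    degree[i] * decay**(t - last_update[i]); receiving a link refreshes
    a node's last updating time. Starts from an (m+1)-clique."""
    rng = random.Random(seed)
    degree = {i: m for i in range(m + 1)}
    last_update = {i: 0 for i in range(m + 1)}
    for t in range(m + 1, n_nodes):
        candidates = list(degree)
        targets = []
        for _ in range(m):  # m distinct targets, weighted without replacement
            weights = [degree[i] * decay ** (t - last_update[i])
                       for i in candidates]
            r = rng.random() * sum(weights)
            chosen, acc = candidates[-1], 0.0
            for i, w in zip(candidates, weights):
                acc += w
                if acc >= r:
                    chosen = i
                    break
            targets.append(chosen)
            candidates.remove(chosen)
        degree[t] = m
        last_update[t] = t
        for i in targets:
            degree[i] += 1
            last_update[i] = t  # link arrival refreshes recency

    return degree

degree = grow_last_updating_network()
```

Compared with pure preferential attachment, the recency discount shifts links away from old hubs, which is one plausible way a "last updating time" could temper hub dominance.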
Forecasting Macedonian Business Cycle Turning Points Using Qual Var Model
Directory of Open Access Journals (Sweden)
Petrovska Magdalena
2016-09-01
Full Text Available This paper aims at assessing the usefulness of leading indicators in business cycle research and forecasting. Initially we test the predictive power of the economic sentiment indicator (ESI) within a static probit model as a leading indicator, commonly perceived to be able to provide a reliable summary of current economic conditions. We further analyze how well an extended set of indicators performs in forecasting turning points of the Macedonian business cycle by employing the Qual VAR approach of Dueker (2005). We then evaluate the quality of the selected indicators in a pseudo-out-of-sample context. The results show that the use of survey-based indicators as a complement to macroeconomic data works satisfactorily in capturing business cycle developments in Macedonia.
Protein Evolution along Phylogenetic Histories under Structurally Constrained Substitution Models
Arenas, Miguel; Dos Santos, Helena G.; Posada, David; Bastolla, Ugo
2017-01-01
Motivation: Models of molecular evolution aim at describing the evolutionary processes at the molecular level. However, current models rarely incorporate information from protein structure. Conversely, structure-based models of protein evolution have not been commonly applied to simulate sequence evolution in a phylogenetic framework, and they often ignore relevant evolutionary processes such as recombination. A simulation framework that integrates substitution models accounting for protein structure stability should be able to generate more realistic in silico evolved proteins for a variety of purposes. Results: We developed a method to simulate protein evolution that combines models of protein folding stability, such that the fitness depends on the stability of the native state both with respect to unfolding and misfolding, with phylogenetic histories that can be either specified by the user or simulated with the coalescent under complex evolutionary scenarios including recombination, demographics and migration. We have implemented this framework in a computer program called ProteinEvolver. Remarkably, comparing these models with empirical amino acid replacement models, we found that the former produce amino acid distributions closer to those observed in real protein families, and proteins that are predicted to be more stable. Therefore, we conclude that evolutionary models that consider protein stability and realistic evolutionary histories constitute a better approximation of the real evolutionary process. Availability: ProteinEvolver is written in C, can run in parallel, and is freely available from http://code.google.com/p/proteinevolver/. PMID:24037213
Application of the evolution theory in modelling of innovation diffusion
Directory of Open Access Journals (Sweden)
Krstić Milan
2016-01-01
The theory of evolution has found numerous analogies and applications in scientific disciplines other than biology. In that sense, so-called 'memetic evolution' is now widely accepted. Memes represent a complex adaptive system, where one 'meme' is an evolutionary cultural element, i.e. the smallest unit of information that can be identified and used to explain the evolution process. Among others, the field of innovation has proved to be a suitable area where the theory of evolution can be successfully applied. In this work the authors start from the assumption that the theory of evolution can also be applied to modelling the process of innovation diffusion. Based on the theoretical research conducted, the authors conclude that the process of innovation diffusion, interpreted through 'memes', is actually a process of imitation of the innovation 'meme'. Since certain 'memes' replicate more successfully than others during the process of replication, this eventually leads to their natural selection. For the survival of innovation 'memes', their longevity, fecundity and copying fidelity are of key importance. The results of the research categorically confirm the assumption that the theory of evolution can be applied to innovation diffusion through innovation 'memes', which opens up perspectives for new research on the subject.
Mathematical Models for the Epidemiology and Evolution of Mycobacterium tuberculosis.
Pečerska, Jūlija; Wood, James; Tanaka, Mark M; Stadler, Tanja
2017-01-01
This chapter reviews the use of mathematical and computational models to facilitate understanding of the epidemiology and evolution of Mycobacterium tuberculosis. First, we introduce general epidemiological models, and describe their use with respect to epidemiological dynamics of a single strain and of multiple strains of M. tuberculosis. In particular, we discuss multi-strain models that include drug sensitivity and drug resistance. Second, we describe models for the evolution of M. tuberculosis within and between hosts, and how the resulting diversity of strains can be assessed by considering the evolutionary relationships among different strains. Third, we discuss developments in integrating evolutionary and epidemiological models to analyse M. tuberculosis genetic sequencing data. We conclude the chapter with a discussion of the practical implications of modelling - particularly modelling strain diversity - for controlling the spread of tuberculosis, and future directions for research in this area.
Modeling the connection between development and evolution: Preliminary report
Energy Technology Data Exchange (ETDEWEB)
Mjolsness, E.; Reinitz, J. [Yale Univ., New Haven, CT (United States); Garrett, C.D. [Washington Univ., Seattle, WA (United States). Dept. of Computer Science; Sharp, D.H. [Los Alamos National Lab., NM (United States)
1993-07-29
In this paper we outline a model which incorporates developmental processes into an evolutionary framework. The model consists of three sectors describing development, genetics, and the selective environment. The formulation of models governing each sector uses dynamical grammars to describe processes in which state variables evolve in a quantitative fashion and the number and type of participating biological entities can change. This program has previously been elaborated for development. Its extension to the other sectors of the model is discussed here and forms the basis for further approximations. A specific implementation of these ideas is described for an idealized model of the evolution of a multicellular organism. While this model does not describe an actual biological system, it illustrates the interplay of development and evolution. Preliminary results of numerical simulations of this idealized model are presented.
Evolution and experience with the ATLAS Simulation at Point1 Project
Ballestrero, Sergio; The ATLAS collaboration
2017-01-01
The Simulation at Point1 project is successfully running traditional ATLAS simulation jobs on the TDAQ HLT resources. The pool of available resources changes dynamically, therefore we need to be very effective in exploiting the available computing cycles. We present our experience with using the Event Service that provides the event-level granularity of computations. We show the design decisions and overhead time related to the usage of the Event Service. The improved utilisation of the resources is also presented with the recent development in monitoring, automatic alerting, deployment and GUI.
Evolution and experience with the ATLAS simulation at Point1 project
Ballestrero, Sergio; The ATLAS collaboration; Fazio, Daniel; Di Girolamo, Alessandro; Kouba, Tomas; Lee, Christopher; Scannicchio, Diana; Schovancova, Jaroslava; Twomey, Matthew Shaun; Wang, Fuquan; Zaytsev, Alexander
2016-01-01
The Simulation at Point1 project is successfully running traditional ATLAS simulation jobs on the TDAQ HLT resources. The pool of available resources changes dynamically, therefore we need to be very effective in exploiting the available computing cycles. We will present our experience with using the Event Service that provides the event-level granularity of computations. We will show the design decisions and overhead time related to the usage of the Event Service. The improved utilization of the resources will also be presented with the recent development in monitoring, automatic alerting, deployment and GUI.
Ledbetter, Michael P.; Hwang, Tony W.; Stovall, Gwendolyn M.; Ellington, Andrew D.
2013-01-01
Evolution is a defining criterion of life and is central to understanding biological systems. However, the timescale of evolutionary shifts in phenotype limits most classroom evolution experiments to simple probability simulations. "In vitro" directed evolution (IVDE) frequently serves as a model system for the study of Darwinian…
Functional Characterization of Cnidarian HCN Channels Points to an Early Evolution of Ih.
Directory of Open Access Journals (Sweden)
Emma C Baker
HCN channels play a unique role in bilaterian physiology as the only hyperpolarization-gated cation channels. Their voltage-gating is regulated by cyclic nucleotides and phosphatidylinositol 4,5-bisphosphate (PIP2). Activation of HCN channels provides the depolarizing current in response to hyperpolarization that is critical for intrinsic rhythmicity in neurons and the sinoatrial node. Additionally, HCN channels regulate dendritic excitability in a wide variety of neurons. Little is known about the early functional evolution of HCN channels, but the presence of HCN sequences in basal metazoan phyla and choanoflagellates, a protozoan sister group to the metazoans, indicates that the gene family predates metazoan emergence. We functionally characterized two HCN channel orthologs from Nematostella vectensis (Cnidaria, Anthozoa) to determine which properties of HCN channels were established prior to the emergence of bilaterians. We find Nematostella HCN channels share all the major functional features of bilaterian HCNs, including reversed voltage-dependence, activation by cAMP and PIP2, and block by extracellular Cs+. Thus, bilaterian-like HCN channels were already present in the common parahoxozoan ancestor of bilaterians and cnidarians, at a time when the functional diversity of voltage-gated K+ channels was rapidly expanding. NvHCN1 and NvHCN2 are expressed broadly in planulae and in both the endoderm and ectoderm of juvenile polyps.
Detecting Character Dependencies in Stochastic Models of Evolution.
Chakrabarty, Deeparnab; Kannan, Sampath; Tian, Kevin
2016-03-01
Stochastic models of biological evolution generally assume that different characters (runs of the stochastic process) are independent and identically distributed. In this article we determine the asymptotic complexity of detecting dependence for some fairly general models of evolution, but simple models of dependence. A key difference from much of the previous work is that our algorithms work without knowledge of the tree topology. Specifically, we consider various stochastic models of evolution ranging from the common ones used by biologists (such as Cavender-Farris-Neyman and Jukes-Cantor models) to very general ones where evolution of different characters can be governed by different transition matrices on each edge of the evolutionary tree (phylogeny). We also consider several models of dependence between two characters. In the most specific model, on each edge of the phylogeny the joint distribution of the dependent characters undergoes a perturbation of a fixed magnitude, in a fixed direction from what it would be if the characters were evolving independently. More general dependence models don't require such a strong "signal." Instead they only require that on each edge, the perturbation of the joint distribution has a significant component in a specific direction. Our main results are nearly tight bounds on the induced or operator norm of the transition matrices that would allow us to detect dependence efficiently for most models of evolution and dependence that we consider. We make essential use of a new concentration result for multistate random variables of a Markov random field on arbitrary trivalent trees: We show that the random variable counting the number of leaves in any particular state has variance that is subquadratic in the number of leaves.
Mitasova, H.; Hardin, E. J.; Kratochvilova, A.; Landa, M.
2012-12-01
Multitemporal data acquired by modern mapping technologies provide unique insights into processes driving land surface dynamics. These high resolution data also offer an opportunity to improve the theoretical foundations and accuracy of process-based simulations of evolving landforms. We discuss development of a new generation of visualization and analytics tools for GRASS GIS designed for 3D multitemporal data from repeated lidar surveys and from landscape process simulations. We focus on data and simulation methods that are based on point sampling of continuous fields and lead to representation of evolving surfaces as series of raster map layers or voxel models. For multitemporal lidar data we present workflows that combine open source point cloud processing tools with GRASS GIS and custom python scripts to model and analyze dynamics of coastal topography (Figure 1), and we outline development of a coastal analysis toolbox. The simulations focus on a particle sampling method for solving continuity equations and its application to geospatial modeling of landscape processes. In addition to water and sediment transport models, already implemented in GIS, the new capabilities under development combine OpenFOAM for wind shear stress simulation with a new module for aeolian sand transport and dune evolution simulations. Comparison of observed dynamics with the results of simulations is supported by a new, integrated 2D and 3D visualization interface that provides highly interactive and intuitive access to the redesigned and enhanced visualization tools. Several case studies will be used to illustrate the presented methods and tools, demonstrate the power of workflows built with FOSS, and highlight their interoperability.
Figure 1. Isosurfaces representing the evolution of the shoreline and a z=4.5 m contour between the years 1997-2011 at Cape Hatteras, NC, extracted from a voxel model derived from a series of lidar-based DEMs.
A new molecular evolution model for limited insertion independent of substitution.
Lèbre, Sophie; Michel, Christian J
2013-10-01
We recently introduced a new molecular evolution model called the IDIS model for Insertion Deletion Independent of Substitution [13,14]. In the IDIS model, the three independent processes of substitution, insertion and deletion of residues have constant rates. In order to control the genome expansion during evolution, we generalize here the IDIS model by introducing an insertion rate which decreases when the sequence grows and tends to 0 for a maximum sequence length nmax. This new model, called LIIS for Limited Insertion Independent of Substitution, defines a matrix differential equation satisfied by a vector P(t) describing the sequence content in each residue at evolution time t. An analytical solution is obtained for any diagonalizable substitution matrix M. Thus, the LIIS model gives an expression of the sequence content vector P(t) in each residue under evolution time t as a function of the eigenvalues and the eigenvectors of matrix M, the residue insertion rate vector R, the total insertion rate r, the initial and maximum sequence lengths n0 and nmax, respectively, and the sequence content vector P(t0) at initial time t0. The derivation of the analytical solution is much more technical, compared to the IDIS model, as it involves Gauss hypergeometric functions. Several propositions of the LIIS model are derived: proof that the IDIS model is a particular case of the LIIS model when the maximum sequence length nmax tends to infinity, fixed point, time scale, time step and time inversion. Using a relation between the sequence length l and the evolution time t, an expression of the LIIS model as a function of the sequence length l=n(t) is obtained. Formulas for 'insertion only', i.e. when the substitution rates are all equal to 0, are derived at evolution time t and sequence length l. Analytical solutions of the LIIS model are explicitly derived, as a function of either evolution time t or sequence length l, for two classical substitution matrices: the 3
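The limited-insertion idea can be illustrated stochastically without reproducing the paper's analytical solution. In the sketch below (all rates and lengths are arbitrary illustration values), substitutions occur at a constant per-residue rate while the per-step insertion probability is damped by the factor (1 - n/nmax), so the sequence length saturates at nmax rather than growing without bound:

```python
import random

def simulate_limited_insertion(n0=100, nmax=400, sub_rate=0.01,
                               ins_rate=0.05, n_steps=2000,
                               alphabet="ACGT", seed=2):
    """Stochastic sketch of Limited Insertion Independent of Substitution:
    substitution is independent of insertion, and the insertion rate
    tends to 0 as the length n approaches the maximum nmax."""
    rng = random.Random(seed)
    seq = [rng.choice(alphabet) for _ in range(n0)]
    for _ in range(n_steps):
        # substitution: each residue mutates independently at a fixed rate
        for i in range(len(seq)):
            if rng.random() < sub_rate:
                seq[i] = rng.choice(alphabet)
        # limited insertion: effective rate shrinks as n approaches nmax
        eff_ins = ins_rate * (1.0 - len(seq) / nmax)
        if eff_ins > 0 and rng.random() < eff_ins:
            seq.insert(rng.randrange(len(seq) + 1), rng.choice(alphabet))
    return seq

seq = simulate_limited_insertion()
```

Tracking the per-residue composition of `seq` over time gives a Monte Carlo counterpart of the content vector P(t) that the LIIS model derives analytically.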
A compact cyclic plasticity model with parameter evolution
DEFF Research Database (Denmark)
Krenk, Steen; Tidemann, L.
2017-01-01
The paper presents a compact model for cyclic plasticity based on energy in terms of external and internal variables, and plastic yielding described by kinematic hardening and a flow potential with an additive term controlling the nonlinear cyclic hardening. The model is basically described by five... by the Armstrong–Frederick model, contained as a special case of the present model for a particular choice of the shape parameter. In contrast to previous work, where shaping the stress-strain loops is derived from multiple internal stress states, this effect is here represented by a single parameter..., and it is demonstrated that this simple formulation enables very accurate representation of experimental results. An extension of the theory to account for model parameter evolution effects, e.g. in the form of changing yield level, is included in the form of extended evolution equations for the model parameters...
Structure of the scientific community modelling the evolution of resistance.
2007-12-05
Faced with the recurrent evolution of resistance to pesticides and drugs, the scientific community has developed theoretical models aimed at identifying the main factors of this evolution and predicting the efficiency of resistance management strategies. The evolutionary forces considered by these models are generally similar for viruses, bacteria, fungi, plants or arthropods facing drugs or pesticides, so interaction between scientists working on different biological organisms would be expected. We tested this by analysing co-authorship and co-citation networks using a database of 187 articles published from 1977 to 2006 concerning models of resistance evolution to all major classes of pesticides and drugs. These analyses identified two main groups. One group, led by ecologists or agronomists, is interested in agricultural crop or stock pests and diseases. It mainly uses a population genetics approach to model the evolution of resistance to insecticidal proteins, insecticides, herbicides, antihelminthic drugs and miticides. By contrast, the other group, led by medical scientists, is interested in human parasites and mostly uses epidemiological models to study the evolution of resistance to antibiotic and antiviral drugs. Our analyses suggested that there is also a small scientific group focusing on resistance to antimalaria drugs, which is only poorly connected with the two larger groups. The analysis of cited references indicates that each of the two large communities publishes its research in a different set of literature and has its own keystone references: citations with a large impact in one group are almost never cited by the other. We fear that the lack of exchange between the two communities might slow progress concerning resistance evolution, which is currently a major issue for society.
Evolution and History in a new "Mathematical SETI" model
Maccone, Claudio
2014-01-01
important exact equations yielding the b-lognormal when its birth time, senility time (descending inflexion point) and death time (where the tangent at senility intercepts the time axis) are known. These also are brand-new results. In particular, the σ=1 b-lognormals are shown to be related to the golden ratio, so famous in the arts and in architecture, and these special b-lognormals we call "golden b-lognormals". Applying this new mathematical apparatus to Human History leads to the discovery of the exponential trend of progress between Ancient Greece and the current USA Empire as the envelope of the b-lognormals of all Western Civilizations over a period of 2500 years. We then invoke Shannon's Information Theory. The entropy of the obtained b-lognormals turns out to be the index of "development level" reached by each historic civilization. As a consequence, we get a numerical estimate of the entropy difference (i.e. the difference in the evolution levels) between any two civilizations. In particular, this was the case when Spaniards first met with Aztecs in 1519, and we find the relevant entropy difference between Spaniards and Aztecs to be 3.84 bits/individual over a period of about 50 centuries of technological difference. In a similar calculation, the entropy difference between the first living organism on Earth (RNA?) and Humans turns out to equal 25.57 bits/individual over a period of 3.5 billion years of Darwinian Evolution. Finally, we extrapolate our exponentials into the future, which is of course arbitrary, but is the best Humans can do before they get in touch with any alien civilization. The results are appalling: the entropy difference between Humans and aliens 1 million years more advanced is of the order of 1000 bits/individual, while 10,000 bits/individual would be required for any Civilization wishing to colonize the whole Galaxy (Fermi Paradox).
In conclusion, we have derived a mathematical model capable of estimating how much more advanced than humans
Independence Model estimation using Artificial Evolution
Barrière, Olivier; Lutton, Evelyne; Wuillemin, Pierre-Henri
2010-01-01
This article is a condensed version of a previous publication presented at a conference on genetic algorithms; it is therefore not eligible for publication in a journal.; National audience; In this paper, we consider a Bayesian network structure estimation problem as a two-step problem based on an independence model representation. We first perform an evolutionary search for an approximation of an independence model. A deterministic algorithm is then used to deduce a Bayesi...
A finite population model of molecular evolution: theory and computation.
Dixit, Narendra M; Srivastava, Piyush; Vishnoi, Nisheeth K
2012-10-01
This article is concerned with the evolution of haploid organisms that reproduce asexually. In a seminal piece of work, Eigen and coauthors proposed the quasispecies model in an attempt to understand such an evolutionary process. Their work has impacted antiviral treatment and vaccine design strategies. Yet, predictions of the quasispecies model are at best viewed as a guideline, primarily because it assumes an infinite population size, whereas realistic population sizes can be quite small. In this paper we consider a population genetics-based model aimed at understanding the evolution of such organisms with finite population sizes and present a rigorous study of the convergence and computational issues that arise therein. Our first result is structural and shows that, at any time during the evolution, as the population size tends to infinity, the distribution of genomes predicted by our model converges to that predicted by the quasispecies model. This justifies the continued use of the quasispecies model to derive guidelines for intervention. While the stationary state in the quasispecies model is readily obtained, due to the explosion of the state space in our model, exact computations are prohibitive. Our second set of results are computational in nature and address this issue. We derive conditions on the parameters of evolution under which our stochastic model mixes rapidly. Further, for a class of widely used fitness landscapes we give a fast deterministic algorithm which computes the stationary distribution of our model. These computational tools are expected to serve as a framework for the modeling of strategies for the deployment of mutagenic drugs.
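A minimal finite-population counterpart of the quasispecies setting is a Wright-Fisher process with mutation and selection. The sketch below is an illustrative sketch in that spirit, not the authors' exact formulation: the genome length, population size, mutation rate and additive fitness landscape are all arbitrary assumptions.

```python
import random

def wright_fisher_step(pop, fitness, mu, rng):
    """One generation of a finite asexual population: resample genomes
    with probability proportional to fitness, then mutate each bit
    independently with probability mu."""
    weights = [fitness(g) for g in pop]
    resampled = rng.choices(pop, weights=weights, k=len(pop))
    return [tuple((1 - b) if rng.random() < mu else b for b in g)
            for g in resampled]

rng = random.Random(3)
L, N = 10, 200                    # genome length, finite population size
fitness = lambda g: 1.0 + sum(g)  # additive landscape (an assumption)
pop = [tuple([0] * L) for _ in range(N)]
for _ in range(100):
    pop = wright_fisher_step(pop, fitness, mu=0.005, rng=rng)
mean_ones = sum(sum(g) for g in pop) / N
```

As N grows, the genome-frequency dynamics of such a process approaches the deterministic (infinite-population) quasispecies equations, which is the convergence result the abstract describes.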
A stochastic model for tumor geometry evolution during radiation therapy in cervical cancer
Energy Technology Data Exchange (ETDEWEB)
Liu, Yifang; Lee, Chi-Guhn [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King's College Road, Toronto, Ontario M5S 3G8 (Canada); Chan, Timothy C. Y., E-mail: tcychan@mie.utoronto.ca [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King's College Road, Toronto, Ontario M5S 3G8, Canada and Techna Institute for the Advancement of Technology for Health, 124-100 College Street, Toronto, Ontario M5G 1P5 (Canada); Cho, Young-Bin [Department of Radiation Physics, Radiation Medicine Program, Princess Margaret Cancer Centre, University Health Network, 610 University Avenue, Toronto, Ontario M5T 2M9, Canada and Department of Radiation Oncology, University of Toronto, 148-150 College Street, Toronto, Ontario M5S 3S2 (Canada); Islam, Mohammad K. [Department of Radiation Physics, Radiation Medicine Program, Princess Margaret Cancer Centre, University Health Network, 610 University Avenue, Toronto, Ontario M5T 2M9 (Canada); Department of Radiation Oncology, University of Toronto, 148-150 College Street, Toronto, Ontario M5S 3S2 (Canada); Techna Institute for the Advancement of Technology for Health, 124-100 College Street, Toronto, Ontario M5G 1P5 (Canada)
2014-02-15
Purpose: To develop mathematical models to predict the evolution of tumor geometry in cervical cancer undergoing radiation therapy. Methods: The authors develop two mathematical models to estimate tumor geometry change: a Markov model and an isomorphic shrinkage model. The Markov model describes tumor evolution by investigating the change in state (either tumor or nontumor) of voxels on the tumor surface. It assumes that the evolution follows a Markov process. Transition probabilities are obtained using maximum likelihood estimation and depend on the states of neighboring voxels. The isomorphic shrinkage model describes tumor shrinkage or growth in terms of layers of voxels on the tumor surface, instead of modeling individual voxels. The two proposed models were applied to data from 29 cervical cancer patients treated at Princess Margaret Cancer Centre and then compared to a constant volume approach. Model performance was measured using sensitivity and specificity. Results: The Markov model outperformed both the isomorphic shrinkage and constant volume models in terms of the trade-off between sensitivity (target coverage) and specificity (normal tissue sparing). Generally, the Markov model achieved a few percentage points in improvement in either sensitivity or specificity compared to the other models. The isomorphic shrinkage model was comparable to the Markov approach under certain parameter settings. Convex tumor shapes were easier to predict. Conclusions: By modeling tumor geometry change at the voxel level using a probabilistic model, improvements in target coverage and normal tissue sparing are possible. Our Markov model is flexible and has tunable parameters to adjust model performance to meet a range of criteria. Such a model may support the development of an adaptive paradigm for radiation therapy of cervical cancer.
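The maximum likelihood step described above reduces, for a discrete state space, to counting transitions per context. A schematic version follows; the (state, tumor-neighbour-count, next-state) encoding is a hypothetical simplification of the authors' neighbourhood dependence, not their implementation:

```python
from collections import Counter

def estimate_transition_probs(observations):
    """MLE of P(next_state | state, n_tumor_neighbours) from observed
    (state, n_tumor_neighbours, next_state) triples; states are 0/1."""
    context_counts = Counter()     # how often each (state, neighbours) context occurs
    transition_counts = Counter()  # how often each full transition occurs
    for state, n_nbrs, nxt in observations:
        context_counts[(state, n_nbrs)] += 1
        transition_counts[(state, n_nbrs, nxt)] += 1
    # MLE is the empirical conditional frequency
    return {key: transition_counts[key] / context_counts[key[:2]]
            for key in transition_counts}

# toy data: surface voxels with many tumour neighbours tend to remain tumour
obs = [(1, 5, 1), (1, 5, 1), (1, 5, 0), (1, 1, 0), (1, 1, 0)]
probs = estimate_transition_probs(obs)
```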
Nunes, Ricardo; Araújo, Joice
2010-03-01
First-principles calculations employed to address the properties of polycrystalline graphene indicate that the electronic structure of tilt grain boundaries in this system [1-4] displays a rather complex evolution towards graphene bulk as the tilt angle decreases, with the generation of a new Dirac point at the Fermi level and an anisotropic Dirac cone of low-energy excitations. Moreover, the usual Dirac point at the K point falls below the Fermi level, and rises towards it as the tilt angle decreases. Further, our calculations indicate that the grain-boundary formation energy behaves non-monotonically with the tilt angle, due to a change in the spatial distribution and relative contributions of the bond-stretching and bond-bending deformations associated with the formation of the defect. [1] L. B. Biedermann et al., Phys. Rev. B 79, 125411 (2009). [2] S. S. Datta et al., Nanoletters 9, 7 (2009). [3] P. Simonis et al., Surf. Sci. 511, 319 (2002). [4] G. Gu et al., Appl. Phys. Lett. 90, 253507 (2007).
Energetics in a model of prebiotic evolution
Intoy, B. F.; Halley, J. W.
2017-12-01
Previously we reported [A. Wynveen et al., Phys. Rev. E 89, 022725 (2014), 10.1103/PhysRevE.89.022725] that requiring that the systems regarded as lifelike be out of chemical equilibrium in a model of abstracted polymers undergoing ligation and scission first introduced by Kauffman [S. A. Kauffman, The Origins of Order (Oxford University Press, New York, 1993), Chap. 7] implied that lifelike systems were most probable when the reaction network was sparse. The model was entirely statistical and took no account of the bond energies or other energetic constraints. Here we report results of an extension of the model to include the effects of a finite bonding energy. We studied two conditions: (1) A food set is continuously replenished and the total polymer population is constrained but the system is otherwise isolated and (2) in addition to the constraints in (1) the system is in contact with a finite-temperature heat bath. In each case, detailed balance in the dynamics is guaranteed during the computations by continuous recomputation of a temperature [in case (1)] and of the chemical potential (in both cases) toward which the system is driven by the dynamics. In the isolated case, the probability of reaching a metastable nonequilibrium state in this model depends significantly on the composition of the food set, and the nonequilibrium states satisfying the lifelike condition turn out to be at energies and particle numbers consistent with an equilibrium state at high negative temperature. As a function of the sparseness of the reaction network, the lifelike probability is nonmonotonic, as in our previous model, but the maximum probability occurs when the network is less sparse. In the case of contact with a thermal bath at a positive ambient temperature, we identify two types of metastable nonequilibrium states, termed locally and thermally alive, and locally dead and thermally alive, and evaluate their likelihood of appearance, finding maxima at an optimal
2004-06-01
are particularly interesting since several biologically significant molecules, including a family of sugar molecules, are aldehydes. "The GBT can be used to fully explore the possibility that a significant amount of prebiotic chemistry may occur in space long before it occurs on a newly formed planet," said Remijan. "Comets form from interstellar clouds and incessantly bombard a newly formed planet early in its history. Craters on our Moon attest to this. Thus, comets may be the delivery vehicles for organic molecules necessary for life to begin on a new planet." Laboratory experiments also demonstrate that atomic addition reactions -- similar to those assumed to occur in interstellar clouds -- play a role in synthesizing complex molecules by subjecting ices containing simpler molecules such as water, carbon dioxide, and methanol to ionizing radiation dosages. Thus, laboratory experiments can now be devised with various ice components to attempt production of the aldehydes observed with the GBT. "The detection of the two new aldehydes, which are related by a common chemical pathway called hydrogen addition, demonstrates that evolution to more complex species occurs routinely in interstellar clouds and that a relatively simple mechanism may build large molecules out of smaller ones. The GBT is now a key instrument in exploring chemical evolution in space," said Hollis. The GBT is the world's largest fully steerable radio telescope; it is operated by the NRAO. "The large diameter and high precision of the GBT allowed us to study small interstellar clouds that can absorb the radiation from a bright, background source. The sensitivity and flexibility of the telescope gave us an important new tool for the study of complex interstellar molecules," said Jewell. The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc.
Directory of Open Access Journals (Sweden)
Fernando Beltrán
2006-05-01
Full Text Available This paper presents the main aspects of the historical development and the current issues at stake in the South American Internet access market: the interconnection schemes for the exchange of local and regional traffic in the South American region, the incentives Internet access providers have for keeping or modifying the nature of the agreements, and the cost recovery methods at the traffic exchange points. Some threats to the stability of the scheme for domestic traffic exchange adopted throughout the region are also identified and subsequently illustrated with country-cases.
Modeling river dune evolution using a parameterization of flow separation
Paarlberg, Andries; Dohmen-Janssen, Catarine M.; Hulscher, Suzanne J.M.H.; Termes, Paul
2009-01-01
This paper presents an idealized morphodynamic model to predict river dune evolution. The flow field is solved in a vertical plane assuming hydrostatic pressure conditions. The sediment transport is computed using a Meyer-Peter–Müller type of equation, including gravitational bed slope effects and a
Modelling the Evolution of Social Structure.
Sutcliffe, A G; Dunbar, R I M; Wang, D
2016-01-01
Although simple social structures are more common in animal societies, some taxa (mainly mammals) have complex, multi-level social systems, in which the levels reflect differential association. We develop a simulation model to explore the conditions under which multi-level social systems of this kind evolve. Our model focuses on the evolutionary trade-offs between foraging and social interaction, and explores the impact of alternative strategies for distributing social interaction, with fitness criteria for wellbeing, alliance formation, risk, stress and access to food resources that reward social strategies differentially. The results suggest that multi-level social structures characterised by a few strong relationships, more medium ties and large numbers of weak ties emerge only in a small part of the overall fitness landscape, namely where there are significant fitness benefits from wellbeing and alliance formation and there are high levels of social interaction. In contrast, 'favour-the-few' strategies are more competitive under a wide range of fitness conditions, including those producing homogeneous, single-level societies of the kind found in many birds and mammals. The simulations suggest that the development of complex, multi-level social structures of the kind found in many primates (including humans) depends on a capacity for high investment in social time, preferential social interaction strategies, high mortality risk and/or differential reproduction. These conditions are characteristic of only a few mammalian taxa.
Mass Loss and Stellar Evolution Models of Polaris
Neilson, Hilding R.; Engle, S. G.; Guinan, E.; Langer, N.
2012-01-01
Polaris is a first-overtone Cepheid with a measured rate of period change that probes real-time evolution of that star. In this work, we compare the measured period change with rates computed from a grid of state-of-the-art stellar evolution models that are consistent with the effective temperature, luminosity and mass of Polaris. We find that the theoretical and measured rates of period change do not agree, and we show this difference implies that Polaris is losing mass at a rate of 10^-6 solar masses per year.
A Solvable Model of Species Body Mass Evolution
Clauset, Aaron
2008-01-01
We present a quantitative model for the biological evolution of species body masses within large groups of related species, e.g., terrestrial mammals, in which body mass M evolves according to branching (speciating) multiplicative diffusion and an extinction probability that increases logarithmically with mass. We describe this evolution in terms of a convection-diffusion-reaction equation for ln M. The steady-state behavior is in good agreement with empirical data on recent terrestrial mammals, and the time-dependent behavior also agrees with data on extinct mammal species between 95 - 50 million years ago.
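The convection-diffusion-reaction picture can be caricatured as a branching random walk in ln M with a mass-dependent extinction probability. All parameter values below, including the linear-in-ln M extinction form and the lower mass bound, are illustrative assumptions, not the paper's calibration:

```python
import math
import random

def simulate_clade(m0=2.0, steps=200, drift=0.02, sigma=0.1,
                   p_speciate=0.05, beta=0.02, seed=1):
    """Sketch: each species carries x = ln M; x random-walks (multiplicative
    diffusion in M), species branch with probability p_speciate, and go
    extinct with probability ~ beta * ln M (extinction rises with mass)."""
    rng = random.Random(seed)
    species = [math.log(m0)]
    for _ in range(steps):
        nxt = []
        for x in species:
            if rng.random() < min(1.0, max(0.0, beta * x)):
                continue                              # extinction
            x += drift + rng.gauss(0.0, sigma)        # biased diffusion in ln M
            x = max(x, math.log(m0))                  # hypothetical lower mass bound
            nxt.append(x)
            if rng.random() < p_speciate:
                nxt.append(x)                         # speciation: daughter lineage
        species = nxt
        if not species:
            break
    return [math.exp(x) for x in species]

masses = simulate_clade()
```

Over many steps the balance between upward drift and mass-dependent extinction produces a stationary right-skewed mass distribution, qualitatively the steady state the abstract compares with mammalian data.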
plant: A package for modelling forest trait ecology and evolution
Falster, D.S.; FitzJohn, R.G.; Brännström, Å; Dieckmann, U.; Westoby, M; McMahon, S
2016-01-01
1. Population dynamics in forests are strongly size-structured: larger plants shade smaller plants while also expending proportionately more energy on building and maintaining woody stems. Although the importance of size structure for demography is widely recognized, many models either omit it entirely or include only coarse approximations. 2. Here, we introduce the plant package, an extensible framework for modelling size- and trait-structured demography, ecology and evolution in simulat...
Natural Models for Evolution on Networks
Mertzios, George B; Raptopoulos, Christoforos; Spirakis, Paul G
2011-01-01
Evolutionary dynamics have been traditionally studied in the context of homogeneous populations, mainly described by the Moran process. Recently, this approach was generalized in \\cite{LHN} by arranging individuals on the nodes of a network. Undirected networks seem to have a smoother behavior than directed ones, and thus it is more challenging to find suppressors/amplifiers of selection. In this paper we present the first class of undirected graphs which act as suppressors of selection, by achieving a fixation probability that is at most one half of that of the complete graph, as the number of vertices increases. Moreover, we provide some generic upper and lower bounds for the fixation probability of general undirected graphs. As our main contribution, we introduce the natural alternative of the model proposed in \\cite{LHN}, where all individuals interact simultaneously and the result is a compromise between aggressive and non-aggressive individuals. That is, the behavior of the individuals in our new m...
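For context, the complete-graph baseline against which suppressors and amplifiers of selection are judged is the well-mixed Moran process, whose fixation probability for a single mutant of relative fitness r has the classical closed form rho = (1 - 1/r)/(1 - 1/r^n):

```python
def moran_fixation_probability(r, n):
    """Fixation probability of a single mutant with relative fitness r
    in a well-mixed (complete-graph) Moran process of n individuals."""
    if abs(r - 1.0) < 1e-12:
        return 1.0 / n  # neutral mutant: fixation by drift alone
    return (1.0 - 1.0 / r) / (1.0 - 1.0 / r ** n)

rho = moran_fixation_probability(r=2.0, n=10)
```

A graph is a suppressor of selection when its fixation probability for an advantageous mutant falls below this baseline; the paper's construction achieves at most half of it as the number of vertices grows.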
2010-11-01
attacks [3],[6], which due to botnet development have evolved from theoretical concepts into real informational weapons [7], click fraud, key cracking...changed every few minutes [4],[31]. It does not roam the Internet looking for vulnerabilities in machines that it can exploit [31]. After the
Reservoir pressure evolution model during exploration drilling
Directory of Open Access Journals (Sweden)
Korotaev B. A.
2017-03-01
Full Text Available Based on the analysis of laboratory studies and literature data, a method for estimating reservoir pressure during exploratory drilling has been proposed; it allows zones of abnormal reservoir pressure to be identified in the presence of seismic data on reservoir depths. This method of assessment is based on methods developed at the end of the twentieth century using d- and σ-exponents, taking into account the rate of penetration, rotor speed, bit load and bit diameter, a lithological constant, the degree of rock compaction, mud density and "regional density". It is known that in exploratory drilling, pulsation of pressure at the wellhead is observed. Such pulsation is a consequence of reservoir pressure being transmitted through clay. The paper describes the mechanism by which pressure is transferred to the bottomhole, as well as the behaviour of the clay layer during transmission of excess pressure. A laboratory installation has been built and used to model pressure propagation to the bottomhole of the well through a layer of clay. The bulging of the clay layer was established for a 215.9 mm bottomhole diameter. A functional correlation of pressure propagation through the layer of clay has been determined, and the top clay layer was shown to react by bulging to a height of 25 mm. A pressure distribution scheme (balance) has been developed which takes into account the distance from layers with abnormal pressure to the bottomhole. A balance equation for reservoir pressure evaluation has been derived, including well depth, the distance from the bottomhole to the top of the formation with abnormal pressure, and the density of clay.
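The d-exponent the method builds on has a standard oilfield form (the classical Jorden-Shirley formula, with a simple mud-weight correction for overpressure detection). The sketch below shows that standard form and is not necessarily identical to the authors' variant:

```python
import math

def d_exponent(rop_ft_hr, rotary_rpm, wob_lb, bit_diam_in):
    """Classical Jorden-Shirley d-exponent (dimensionless drillability index).
    rop: rate of penetration [ft/hr], wob: weight on bit [lb], diameter [in]."""
    return (math.log10(rop_ft_hr / (60.0 * rotary_rpm))
            / math.log10(12.0 * wob_lb / (1.0e6 * bit_diam_in)))

def dc_exponent(d, normal_mud_ppg, actual_mud_ppg):
    """Mud-weight-corrected d-exponent; it drops below the normal-compaction
    trend line in overpressured (abnormal-pressure) zones."""
    return d * normal_mud_ppg / actual_mud_ppg

# illustrative drilling parameters, not data from the paper
d = d_exponent(rop_ft_hr=50.0, rotary_rpm=100.0, wob_lb=40000.0, bit_diam_in=8.5)
dc = dc_exponent(d, normal_mud_ppg=9.0, actual_mud_ppg=10.5)
```

In practice dc is plotted against depth; a sustained departure below the extrapolated normal-compaction trend flags a zone of abnormal reservoir pressure.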
Modeling elephant-mediated cascading effects of water point closure
Hilbers, J.P.; Langevelde, van, F.; Prins, H.H.T.; C.C. Grant; Peel, M; M. B. Coughenour; Knegt, de, B.; Slotow, R.; I. Smit; Kiker, G. A.; Boer, de, I.J.M.
2014-01-01
Wildlife management to reduce the impact of wildlife on their habitat can be done in several ways, among which removing animals (by either culling or translocation) is most often used. There are however alternative ways to control wildlife densities, such as opening or closing water points. The effects of these alternatives are poorly studied. In this paper, we focus on manipulating large herbivores through the closure of water points (WPs). Removal of artificial WPs has been suggested to cha...
Cellular Automata Models of the Evolution of Transportation Networks
Directory of Open Access Journals (Sweden)
Mariusz Paszkowski
2002-01-01
Full Text Available We present a new approach to the modelling of transportation networks. The supply of resources and their influence on the evolution of the consuming environment is the principal problem considered. We present two concepts, both based on the cellular automata paradigm. In the first model, SCAMAN (Simple Cellular Automata Model of Anastomosing Network), the system is represented by a 2D mesh of elementary cells. The rules of interaction between them are introduced for modelling of the water flow and other phenomena connected with an anastomosing river. Due to limitations of the SCAMAN model, we introduce a supplementary model. The MANGraCA (Model of Anastomosing Network with Graph of Cellular Automata) model, beside the classical mesh of automata, introduces an additional structure: the graph of cellular automata, which represents the network pattern. Finally, we discuss the prospective applications of the models. Concepts for future implementation are also presented.
A microscopic model of rate and state friction evolution
Li, Tianyi; Rubin, Allan M.
2017-08-01
Whether rate- and state-dependent friction evolution is primarily slip dependent or time dependent is not well resolved. Although slide-hold-slide experiments are traditionally interpreted as supporting the aging law, implying time-dependent evolution, recent studies show that this evidence is equivocal. In contrast, the slip law yields extremely good fits to velocity step experiments, although a clear physical picture for slip-dependent friction evolution is lacking. We propose a new microscopic model for rate and state friction evolution in which each asperity has a heterogeneous strength, with individual portions recording the velocity at which they became part of the contact. Assuming an exponential distribution of asperity sizes on the surface, the model produces results essentially similar to the slip law, yielding very good fits to velocity step experiments but not improving much the fits to slide-hold-slide experiments. A numerical kernel for the model is developed, and an analytical expression is obtained for perfect velocity steps, which differs from the slip law expression by a slow-decaying factor. By changing the quantity that determines the intrinsic strength, we use the same model structure to investigate aging-law-like time-dependent evolution. Assuming strength to increase logarithmically with contact age, for two different definitions of age we obtain results for velocity step increases significantly different from the aging law. Interestingly, a solution very close to the aging law is obtained if we apply a third definition of age that we consider to be nonphysical. This suggests that under the current aging law, the state variable is not synonymous with contact age.
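The two state-evolution laws under comparison can be written down and integrated directly. This sketch uses illustrative parameters (not the paper's experimental fits) and shows that at constant slip speed both the aging law and the slip law relax toward the same steady state theta_ss = D_c / V:

```python
import math

def evolve_state(law, v, d_c, theta0, dt=1e-4, steps=20000):
    """Integrate the rate-and-state variable theta at constant slip speed v.
    law: 'aging'  d(theta)/dt = 1 - v*theta/d_c        (time-dependent healing)
         'slip'   d(theta)/dt = -(v*theta/d_c) * ln(v*theta/d_c)  (slip-dependent)"""
    theta = theta0
    for _ in range(steps):
        x = v * theta / d_c
        dtheta = (1.0 - x) if law == 'aging' else -x * math.log(x)
        theta += dt * dtheta  # simple explicit Euler step
    return theta

# after an imposed velocity step, both laws approach theta_ss = d_c / v = 0.1 s
theta_aging = evolve_state('aging', v=1e-2, d_c=1e-3, theta0=1.0)
theta_slip = evolve_state('slip', v=1e-2, d_c=1e-3, theta0=1.0)
```

The laws agree on the steady state but differ in how they get there, which is why velocity-step and slide-hold-slide experiments discriminate between them.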
Directory of Open Access Journals (Sweden)
Marina Figueiredo Magalhães
Full Text Available Some assessment and diagnosis methods require palpation or the application of certain forces on the skin, which affects the structures beneath; we therefore highlight the importance of defining possible influences on skin temperature as a result of this physical contact. Thus, the aim of the present study is to determine the ideal time for performing thermographic examination after palpation, based on the assessment of skin temperature evolution. A randomized crossover study was carried out with 15 computer-user volunteers of both genders, between 18 and 45 years of age, who were submitted to compressive forces of 0, 1, 2 and 3 kg/cm2 for 30 seconds, with a washout period of 48 hours, using a portable digital dynamometer. Compressive forces were applied on the following spots on the dominant upper limb: the myofascial trigger point in the levator scapulae, the biceps brachii muscle and the palmaris longus tendon. Volunteers were examined by means of infrared thermography before and after the application of compressive forces (15, 30, 45 and 60 minutes). In most comparisons made over time, a significant decrease was observed 30, 45 and 60 minutes after the application of compressive forces (p < 0.05). In conclusion, infrared thermography can be used after assessment or diagnosis methods focused on the application of forces on tendons and muscles, provided the procedure is performed 15 minutes after contact with the skin. Regarding the myofascial trigger point, the thermographic examination can be performed within 60 minutes after the contact with the skin.
Magalhães, Marina Figueiredo; Dibai-Filho, Almir Vieira; de Oliveira Guirro, Elaine Caldeira; Girasol, Carlos Eduardo; de Oliveira, Alessandra Kelly; Dias, Fabiana Rodrigues Cancio; Guirro, Rinaldo Roberto de Jesus
2015-01-01
Some assessment and diagnosis methods require palpation or the application of certain forces on the skin, which affects the structures beneath; we therefore highlight the importance of defining possible influences on skin temperature as a result of this physical contact. Thus, the aim of the present study is to determine the ideal time for performing thermographic examination after palpation, based on the assessment of skin temperature evolution. A randomized crossover study was carried out with 15 computer-user volunteers of both genders, between 18 and 45 years of age, who were submitted to compressive forces of 0, 1, 2 and 3 kg/cm2 for 30 seconds, with a washout period of 48 hours, using a portable digital dynamometer. Compressive forces were applied on the following spots on the dominant upper limb: the myofascial trigger point in the levator scapulae, the biceps brachii muscle and the palmaris longus tendon. Volunteers were examined by means of infrared thermography before and after the application of compressive forces (15, 30, 45 and 60 minutes). In most comparisons made over time, a significant decrease was observed 30, 45 and 60 minutes after the application of compressive forces (p < 0.05). In conclusion, infrared thermography can be used after assessment or diagnosis methods focused on the application of forces on tendons and muscles, provided the procedure is performed 15 minutes after contact with the skin. Regarding the myofascial trigger point, the thermographic examination can be performed within 60 minutes after the contact with the skin. PMID:26070073
The Supercritical Pile GRB Model: The Prompt to Afterglow Evolution
Mastichiadis, A.; Kazanas, D.
2009-01-01
The "Supercritical Pile" is a very economical GRB model that provides for the efficient conversion of the energy stored in the protons of a Relativistic Blast Wave (RBW) into radiation and at the same time produces - in the prompt GRB phase, even in the absence of any particle acceleration - a spectral peak at energy approx. 1 MeV. We extend this model to include the evolution of the RBW Lorentz factor Gamma and thus follow its spectral and temporal features into the early GRB afterglow stage. One of the novel features of the present treatment is the inclusion of the feedback of the GRB-produced radiation on the evolution of Gamma with radius. This feedback and the presence of kinematic and dynamic thresholds in the model can be the sources of rich time evolution, which we have begun to explore. In particular, one may obtain afterglow light curves with steep decays followed by the more conventional flatter afterglow slopes, while at the same time preserving the desirable features of the model, i.e. the well defined relativistic electron source and radiative processes that produce the proper peak in the νF_ν spectra. In this note we present the results for a specific set of parameters of this model, with emphasis on the multiwavelength prompt emission and the transition to the early afterglow.
Memory effects on epidemic evolution: The susceptible-infected-recovered epidemic model
Saeedian, M.; Khalighi, M.; Azimi-Tafreshi, N.; Jafari, G. R.; Ausloos, M.
2017-02-01
Memory has a great impact on the evolution of every process related to human societies. Among them, the evolution of an epidemic is directly related to the individuals' experiences. Indeed, any real epidemic process is clearly sustained by a non-Markovian dynamics: memory effects play an essential role in the spreading of diseases. Including memory effects in the susceptible-infected-recovered (SIR) epidemic model seems very appropriate for such an investigation. Thus, the memory prone SIR model dynamics is investigated using fractional derivatives. The decay of long-range memory, taken as a power-law function, is directly controlled by the order of the fractional derivatives in the corresponding nonlinear fractional differential evolution equations. Here we assume "fully mixed" approximation and show that the epidemic threshold is shifted to higher values than those for the memoryless system, depending on this memory "length" decay exponent. We also consider the SIR model on structured networks and study the effect of topology on threshold points in a non-Markovian dynamics. Furthermore, the lack of access to the precise information about the initial conditions or the past events plays a very relevant role in the correct estimation or prediction of the epidemic evolution. Such a "constraint" is analyzed and discussed.
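A common way to discretize such fractional-order systems is a Grünwald-Letnikov scheme. The sketch below is a generic explicit Caputo-style implementation with illustrative parameters, not necessarily the authors' numerical method; it reduces to forward-Euler SIR at alpha = 1, and the memory sums over the full history are the non-Markovian part:

```python
def fractional_sir(alpha, beta, gamma, s0, i0, h=0.05, steps=400):
    """Explicit Grunwald-Letnikov scheme for the fractional SIR system
    D^alpha S = -beta*S*I,  D^alpha I = beta*S*I - gamma*I,  D^alpha R = gamma*I."""
    # GL binomial weights c_k = (-1)^k * C(alpha, k), computed recursively
    c = [1.0]
    for k in range(1, steps + 1):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / k))
    y0 = (s0, i0, 1.0 - s0 - i0)
    S, I, R = [y0[0]], [y0[1]], [y0[2]]
    for n in range(1, steps + 1):
        s, i = S[-1], I[-1]
        rhs = (-beta * s * i, beta * s * i - gamma * i, gamma * i)
        for hist, f, init in zip((S, I, R), rhs, y0):
            # memory sum: weighted history of deviations from the initial state
            mem = sum(c[k] * (hist[n - k] - init) for k in range(1, n + 1))
            hist.append(init + h ** alpha * f - mem)
    return S, I, R

S, I, R = fractional_sir(alpha=0.9, beta=0.6, gamma=0.2, s0=0.99, i0=0.01)
```

Because the three right-hand sides sum to zero, the scheme conserves S + I + R for any alpha, while smaller alpha (longer memory) slows the epidemic and shifts the threshold, as the abstract describes.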
Adaptive Multiscale Modeling of Geochemical Impacts on Fracture Evolution
Molins, S.; Trebotich, D.; Steefel, C. I.; Deng, H.
2016-12-01
Understanding fracture evolution is essential for many subsurface energy applications, including subsurface storage, shale gas production, fracking, CO2 sequestration, and geothermal energy extraction. Geochemical processes in particular play a significant role in the evolution of fractures through dissolution-driven widening, fines migration, and/or fracture sealing due to precipitation. One obstacle to understanding and exploiting geochemical fracture evolution is that it is a multiscale process. However, current geochemical modeling of fractures cannot capture this multi-scale nature of geochemical and mechanical impacts on fracture evolution, and is limited to either a continuum or pore-scale representation. Conventional continuum-scale models treat fractures as preferential flow paths, with their permeability evolving as a function (often, a cubic law) of the fracture aperture. This approach has the limitation that it oversimplifies flow within the fracture in its omission of pore scale effects while also assuming well-mixed conditions. More recently, pore-scale models along with advanced characterization techniques have allowed for accurate simulations of flow and reactive transport within the pore space (Molins et al., 2014, 2015). However, these models, even with high performance computing, are currently limited in their ability to treat tractable domain sizes (Steefel et al., 2013). Thus, there is a critical need to develop an adaptive modeling capability that can account for separate properties and processes, emergent and otherwise, in the fracture and the rock matrix at different spatial scales. Here we present an adaptive modeling capability that treats geochemical impacts on fracture evolution within a single multiscale framework. Model development makes use of the high performance simulation capability, Chombo-Crunch, leveraged by high resolution characterization and experiments. The modeling framework is based on the adaptive capability in Chombo
Modeling the long-term evolution of space debris
Nikolaev, Sergei; De Vries, Willem H.; Henderson, John R.; Horsley, Matthew A.; Jiang, Ming; Levatin, Joanne L.; Olivier, Scot S.; Pertica, Alexander J.; Phillion, Donald W.; Springer, Harry K.
2017-03-07
A space object modeling system that models the evolution of space debris is provided. The modeling system simulates interaction of space objects at simulation times throughout a simulation period. The modeling system includes a propagator that calculates the position of each object at each simulation time based on orbital parameters. The modeling system also includes a collision detector that, for each pair of objects at each simulation time, performs a collision analysis. When the distance between objects satisfies a conjunction criterion, the modeling system calculates a local minimum distance between the pair of objects based on a curve fitting to identify a time of closest approach at the simulation times and calculating the position of the objects at the identified time. When the local minimum distance satisfies a collision criterion, the modeling system models the debris created by the collision of the pair of objects.
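The closest-approach refinement described above can be sketched as follows (the straight-line trajectories are hypothetical stand-ins for a real orbital propagator):

```python
import numpy as np

# Sketch of the closest-approach refinement described above: sample the
# squared distance between two objects near a conjunction, fit a parabola,
# and take its vertex as the time of closest approach.

def position_a(t):
    return np.array([1.0 + 2.0 * t, 0.5 * t, 0.0])

def position_b(t):
    return np.array([4.0 - 1.0 * t, 0.2, 0.0])

def time_of_closest_approach(t_samples):
    d2 = [float(np.sum((position_a(t) - position_b(t)) ** 2)) for t in t_samples]
    a, b, c = np.polyfit(t_samples, d2, 2)  # fit d^2(t) ~ a*t^2 + b*t + c
    return -b / (2.0 * a)                   # vertex of the fitted parabola

# For straight-line relative motion d^2(t) is exactly quadratic, so the
# fit recovers the true time of closest approach.
t_star = time_of_closest_approach(np.linspace(0.0, 2.0, 9))
```

The positions of both objects would then be re-evaluated at `t_star` and the separation compared against the collision criterion.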
Directory of Open Access Journals (Sweden)
Alireza khajegir
2016-01-01
Full Text Available As a universal human phenomenon, religion is rooted in human nature, and human beings instinctively require a superior and supreme power. Besides this internal need for religion, attention to the meaning, function, and interpretation of religion has always been prevalent in the history of human thought from West to East, and scholars have always tried to comment on and analyze this fundamental issue of human life. Among the approaches that arose to interpret and explain religion, the rationalist tendency—influenced by evolutionism—stands out because it takes the genesis and evolution of religion as manifestations of the evolution of human thought, and treats the development and evolution of religion as parallel to it. This approach considers religion as an answer to the cognitive needs of human beings. In this anthropological approach, religion is the product of primitive human beings’ effort to identify objects and events in the surrounding environment. As a result, as man’s knowledge of the world around him increases, the need for religion decreases. Anthropologists such as Edward Tylor and James Frazer have taken this view of the origin and evolution of religion. They emphasize principles such as the bodily and cognitive unity of the mind, the survival principle, and the evolutionary intellectual pattern of human beings in order to interpret the stages of religion from animism and magic to monism and monotheism, which will eventually decline with the development of science. Tylor regards anthropology as the best scientific method for achieving a universal theory of the origin of religion. Given the psychological unity of humankind, religion in all times and places—despite its diversity—is a unique phenomenon and has an exclusive identity, because the very existence of commonalities in all practices and customs of the people of the world is indicative of the basic
Spatial Multiplication Model as an alternative to the Point Model in Neutron Multiplicity Counting
Energy Technology Data Exchange (ETDEWEB)
Hauck, Danielle K. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Henzl, Vladimir [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2014-03-26
The point model is commonly used in neutron multiplicity counting to relate the correlated neutron detection rates (singles, doubles, triples) to item properties (mass, (α,n) reaction rate, and neutron multiplication). The point model assumes that the probability that a neutron will induce fission is constant across the physical extent of the item. However, in reality, neutrons near the center of an item have a greater probability of inducing fission than neutrons near the edges. As a result, the neutron multiplication has a spatial distribution.
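The spatial effect can be illustrated with a small Monte Carlo sketch (a hypothetical one-group cross-section in a bare sphere, purely for illustration; not the Los Alamos spatial multiplication model itself):

```python
import math
import random

# Illustrative sketch of the spatial effect described above: a neutron born
# nearer the centre of a spherical item traverses more material on average,
# so its chance of inducing a fission is higher than for a neutron born
# near the surface. Geometry and cross-section are hypothetical.

R = 1.0        # item radius (arbitrary units)
SIGMA = 1.0    # hypothetical macroscopic fission cross-section (1/length)

def induced_fission_probability(r, n_dirs=2000, seed=0):
    """Average of 1 - exp(-SIGMA * path) over random emission directions
    for a neutron born at radius r inside a sphere of radius R."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_dirs):
        mu = rng.uniform(-1.0, 1.0)  # cosine of angle to the outward radial axis
        # Chord length from radius r to the sphere surface along direction mu:
        path = -r * mu + math.sqrt(R * R - r * r * (1.0 - mu * mu))
        total += 1.0 - math.exp(-SIGMA * path)
    return total / n_dirs

p_centre = induced_fission_probability(0.0)  # every path has length R
p_edge = induced_fission_probability(0.9)    # mostly short escape paths
```

In this toy geometry `p_centre` exceeds `p_edge`, which is the non-uniformity the point model ignores.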
Insecticide resistance evolution with mixtures and sequences: a model-based explanation.
South, Andy; Hastings, Ian M
2018-02-15
Insecticide resistance threatens effective vector control, especially for mosquitoes and malaria. To manage resistance, recommended insecticide use strategies include mixtures, sequences and rotations. New insecticides are being developed and there is an opportunity to develop use strategies that limit the evolution of further resistance in the short term. A 2013 review of modelling and empirical studies of resistance points to the advantages of mixtures. However, there is limited recent, accessible modelling work addressing the evolution of resistance under different operational strategies. There is an opportunity to improve the level of mechanistic understanding within the operational community of how insecticide resistance can be expected to evolve in response to different strategies. This paper provides a concise, accessible description of a flexible model of the evolution of insecticide resistance. The model is used to develop a mechanistic picture of the evolution of insecticide resistance and how it is likely to respond to potential insecticide use strategies. The aim is to reach an audience unlikely to read a more detailed modelling paper. The model itself, as described here, represents two independent genes coding for resistance to two insecticides. This allows the representation of the use of insecticides in isolation, sequence and mixtures. The model is used to demonstrate the evolution of resistance under different scenarios and how this fits with intuitive reasoning about selection pressure. Using an insecticide in a mixture, relative to alone, always prompts slower evolution of resistance to that insecticide. However, when resistance to both insecticides is considered, resistance thresholds may be reached later for a sequence relative to a mixture. Increasing the ability of insecticides to kill susceptible mosquitoes (effectiveness), has the most influence on favouring a mixture over a sequence because one highly effective insecticide provides more
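The core selection logic can be illustrated with a deliberately minimal haploid one-locus model (an illustration only, not the authors' two-locus model; all parameter values are hypothetical):

```python
# Minimal sketch of resistance-allele spread under insecticide selection.
# Haploid, one locus: susceptible mosquitoes exposed to the insecticide
# survive with probability (1 - effectiveness); resistant ones survive.
# Parameters are hypothetical.

def next_frequency(p, effectiveness, exposure=0.8):
    """One generation of selection on a resistance allele at frequency p."""
    w_resistant = 1.0
    w_susceptible = 1.0 - exposure * effectiveness
    mean_w = p * w_resistant + (1.0 - p) * w_susceptible
    return p * w_resistant / mean_w

def generations_to_threshold(effectiveness, p0=0.01, threshold=0.5):
    p, gens = p0, 0
    while p < threshold:
        p = next_frequency(p, effectiveness)
        gens += 1
    return gens

# A more effective insecticide kills more susceptibles, so resistance to it
# spreads in fewer generations when it is deployed alone:
fast = generations_to_threshold(effectiveness=0.9)
slow = generations_to_threshold(effectiveness=0.5)
```

This captures the intuition in the abstract that effectiveness strongly shapes the selection pressure; comparing mixtures and sequences requires the full two-locus version.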
Modeling elephant-mediated cascading effects of water point closure
Hilbers, J.P.; Langevelde, van F.; Prins, H.H.T.; Grant, C.C.; Peel, M.; Coughenour, M.B.; Knegt, de H.J.; Slotow, R.; Smit, I.; Kiker, G.A.; Boer, de W.F.
2015-01-01
Wildlife management to reduce the impact of wildlife on their habitat can be done in several ways, among which removing animals (by either culling or translocation) is most often used. There are however alternative ways to control wildlife densities, such as opening or closing water points. The
Neural network model for cumulative grade point average (CGPA ...
African Journals Online (AJOL)
One of the major tasks performed in any institution of higher learning is the assessment of students' performance by way of conducting tests and examinations on a semester-by-semester basis. Following the examination written by the student(s), the results would be computed using the appropriate grades and grade points, ...
Overdeepening development in a glacial landscape evolution model with quarrying
DEFF Research Database (Denmark)
Ugelvig, Sofie Vej; Egholm, D.L.; Iverson, Neal R.
Iverson (2012) introduced a new model for subglacial erosion by quarrying that operates from the theory of adhesive wear. The model is based on the fact that cavities, with a high level of bedrock differential stress, form in the lee of bed obstacles when the sliding velocity is too high to allow the ice to creep [...] are the primary factors influencing the erosion rate of this new quarrying model [Iverson, 2012]. We have implemented the quarrying model in a depth-integrated higher-order ice-sheet model [Egholm et al., 2011], coupled to a model for glacial hydrology, in order to also include the effects of cavitation [...], F02012 (2011). Iverson, N. R. A theory of glacial quarrying for landscape evolution models. Geology, v. 40, no. 8, 679-682 (2012). Schoof, C. The effect of cavitation on glacier sliding. Proc. R. Soc. A, 461, 609-627 (2005).
Yetemen, O.; Saco, P. M.
2016-12-01
Orography-induced precipitation and its implications for vegetation dynamics and landscape morphology have long been documented in the literature. However, a numerical framework that integrates a range of ecohydrologic and geomorphic processes to explore the coupled ecohydro-geomorphic landscape response of catchments where pronounced orographic precipitation prevails has been missing. In this study, our aim is to realistically represent orographic-precipitation-driven ecohydrologic dynamics in a landscape evolution model (LEM). The model is used to investigate how ecohydro-geomorphic differences caused by differential precipitation patterns on the leeward and windward sides of low-relief landscapes lead to differences in the organization of modelled topography, soil moisture and plant biomass. We use the CHILD LEM equipped with a vegetation dynamics component that explicitly tracks above- and below-ground biomass, and a precipitation forcing component that simulates rainfall as a function of elevation and orientation. The preliminary results of the model show how the competition between an increased shear stress through runoff production and an enhanced resistance force due to denser canopy cover shapes the landscape. Moreover, orographic precipitation leads not only to migration of the divide between leeward and windward slopes but also to a change in the concavity of streams. These results clearly demonstrate the strong coupling between landform evolution and climate processes.
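A precipitation forcing of the kind described (rainfall as a function of elevation and orientation) might be sketched as follows; the functional form and every coefficient here are invented for illustration and are not the CHILD implementation:

```python
import math

# Hypothetical orographic precipitation forcing: rainfall grows with
# elevation and is damped on leeward slopes. Illustrative values only.

def orographic_precip(elevation_m, slope_aspect_deg, wind_dir_deg=270.0,
                      base_mm_yr=300.0, lapse_mm_per_m=0.5, lee_factor=0.4):
    """Annual rainfall as a function of elevation and slope orientation."""
    # Windward when the slope aspect faces into the prevailing wind.
    alignment = math.cos(math.radians(slope_aspect_deg - wind_dir_deg))
    orientation = 1.0 if alignment > 0.0 else lee_factor
    return (base_mm_yr + lapse_mm_per_m * elevation_m) * orientation

windward = orographic_precip(1000.0, 270.0)  # slope facing the wind
leeward = orographic_precip(1000.0, 90.0)    # sheltered slope
```

Feeding such a field into the vegetation component is what produces the contrasting canopy cover, and hence erosion resistance, on the two flanks.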
Pearson, A.; Budin, M.; Brocks, J. J.
2003-12-01
The evolution of sterol biosynthesis is of critical interest to geoscientists as well as to evolutionary biologists. The first enzyme in the pathway, squalene monooxygenase (Sqmo), requires molecular oxygen (O2), suggesting that this process post-dates the evolution of Cyanobacteria. Additionally, the presence of steranes in ancient rocks marks the suggested time-point of eukaryogenesis(1). Sterol biosynthesis is viewed primarily as a eukaryotic process, and the frequency of its occurrence in bacteria long has been a subject of controversy. In this work, 19 protein gene sequences for Sqmo from eukaryotes were compared to all available complete and partial prokaryotic genomes. Twelve protein gene sequences representing oxidosqualene cyclase (Osc), the second enzyme of the sterol biosynthetic pathway, also were examined. The only unequivocal matches among the bacteria were the alpha-proteobacterium, Methylococcus capsulatus, in which sterol biosynthesis already is known, and the planctomycete, Gemmata obscuriglobus. The latter species contains the most abbreviated sterol pathway yet identified in any organism. Experiments show that the major sterols in Gemmata are lanosterol and its uncommon isomer, parkeol. In bacteria, the sterol biosynthesis genes occupy a contiguous coding region and may represent a single operon. Phylogenetic trees show that the sterol pathway in bacteria and eukaryotes has a common ancestry. Gemmata may retain the most ancient remnants of the pathway's origin, and it is likely that sterol biosynthesis in eukaryotes was acquired through gene transfer from bacteria. However, this work indicates that no known prokaryotes could produce the 24-ethyl steranes found in Archaean rocks(1). Therefore these compounds remain indicative of the presence of both eukaryotes and O2 at 2.7 Ga. 1. J. J. Brocks, G. A. Logan, R. Buick, R. E. Summons, (1999) Science 285, 1033-1036.
Takahashi, N.; Kasaba, Y.; Nishimura, Y.; Shinbori, A.; Kikuchi, T.; Hori, T.; Nishitani, N.
2016-12-01
Sudden commencements (SCs) are triggered by an abrupt compression of the dayside magnetopause, which causes a fast mode wave propagating toward the Earth in the equatorial magnetosphere across the magnetic field line. The sudden compression also induces the Alfven wave propagation toward the polar ionosphere along magnetic field lines. The latter causes the global transmission of ionospheric electric field at speed of light, and can propagate the influence back to the inner and/or nightside magnetosphere. These general propagation processes have been demonstrated in previous papers using direct observations. We study the spatial and temporal evolution of electric fields and the direction of Poynting fluxes between the magnetosphere and ionosphere associated with SCs. We use multi-point magnetospheric and ionospheric satellites (THEMIS, RBSP, GOES, and C/NOFS) with radars (SuperDARN). An event study on 17 March 2013 shows that the magnetospheric electric field is propagated from dayside to nightside magnetosphere. At the onset time, the magnetospheric magnetic field starts to increase, which indicates that the detected electric field is associated with the compression of the magnetosphere. In the ionosphere, C/NOFS satellite and SuperDARN radar detect the dusk-to-dawn electric field about 1 min after the onset in the magnetosphere. Poynting fluxes evaluated from THEMIS and RBSP data are directed toward the ionosphere along magnetic field lines in both dayside and nightside, which indicates that the Alfven wave launches toward the polar ionosphere at the onset. The spatial evolution of magnetospheric electric fields can be interpreted as follows: First, the fast mode wave propagates from dayside to nightside magnetosphere, and 105-120 s after the onset, the magnetospheric convection becomes stronger. We also find that the spatial distribution of the response time is asymmetric between dawn and dusk, which can be due to the asymmetry of the plasmapause location.
Energy Technology Data Exchange (ETDEWEB)
Liang, J.H., E-mail: jhliang@ess.nthu.edu.tw [Institute of Nuclear Engineering and Science, National Tsing Hua University, 101, Section 2, Kuang-Fu Road, Hsinchu 300, Taiwan (China); Department of Engineering and System Science, National Tsing Hua University, Hsinchu 300, Taiwan (China); Hu, C.H.; Bai, C.Y. [Department of Engineering and System Science, National Tsing Hua University, Hsinchu 300, Taiwan (China); Chao, D.S. [Nuclear Science and Technology Development Center, National Tsing Hua University, Hsinchu 300, Taiwan (China); Lin, C.M. [Department of Applied Science, National Hsinchu University of Education, Hsinchu 300, Taiwan (China)
2012-08-01
This study investigated the dependence of surface blistering and exfoliation phenomena on post-annealing time in H⁺-implanted Si⟨1 1 1⟩. Czochralski-grown n-type Si⟨1 1 1⟩ wafers were room-temperature ion-implanted with 40 keV hydrogen monomers to a fluence of 5 × 10¹⁶ cm⁻², followed by furnace annealing treatments at 400 and 500 °C for various durations ranging from 0.25 to 3 h. The corresponding analysis results for Si⟨1 0 0⟩ (Liang et al., 2008; Bai, 2007) were adopted in order to make comparisons. The evolution of blister formation and growth for Si⟨1 1 1⟩ at 400 °C has a shorter characteristic time compared to Si⟨1 0 0⟩. However, there is a longer characteristic time when annealing takes place at 500 °C. In addition, no craters were observed for Si⟨1 1 1⟩ annealed at 400 °C, while the opposite is true for Si⟨1 0 0⟩. The evolution of crater development for Si⟨1 1 1⟩ annealed at 500 °C has a longer characteristic time compared to Si⟨1 0 0⟩. These results are attributed to the fact that, compared to Si⟨1 0 0⟩, Si⟨1 1 1⟩ has a smaller surface binding energy of silicon atoms and a larger areal number density of silicon atoms on the plane perpendicular to the incident-ion axis. Furthermore, Si⟨1 1 1⟩ has a
A weighted network model for interpersonal relationship evolution
Hu, Bo; Jiang, Xin-Yu; Ding, Jun-Feng; Xie, Yan-Bo; Wang, Bing-Hong
2005-08-01
A simple model is proposed to mimic and study the evolution of interpersonal relationships in a student class. The small social group is simply assumed as an undirected and weighted graph, in which students are represented by vertices, and the depth of favor or disfavor between them are denoted by the corresponding edge weight. In our model, we find that the first impression between people has a crucial influence on the final status of student relations (i.e., the final distribution of edge weights). The system displays a phase transition in the final hostility proportion depending on the initial amity possibility. We can further define the strength of vertices to describe the individual popularity, which exhibits nonlinear evolution. Meanwhile, various nonrandom perturbations to the initial system have been investigated, and simulation results are in accord with common real-life observations.
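A deliberately small sketch of this kind of model follows; the update rule and parameters are hypothetical stand-ins, not the authors' exact dynamics:

```python
import random

# Toy sketch in the spirit of the model described above: edge weights start
# as random first impressions, positive (favor) or negative (disfavor), and
# each interaction nudges the weight further in its current direction
# (Hebbian-like reinforcement). Parameters are hypothetical.

def simulate(n_students=30, amity_probability=0.6, steps=5000, seed=1):
    rng = random.Random(seed)
    pairs = [(i, j) for i in range(n_students) for j in range(i + 1, n_students)]
    # First impression: +1 with probability amity_probability, else -1.
    w = {p: (1.0 if rng.random() < amity_probability else -1.0) for p in pairs}
    for _ in range(steps):
        p = rng.choice(pairs)              # a random pairwise interaction
        w[p] += 0.1 if w[p] > 0 else -0.1  # reinforce the current feeling
    hostile = sum(1 for v in w.values() if v < 0)
    return hostile / len(pairs)            # final proportion of hostility

hostility = simulate()
```

In this reduced sketch the first impression fixes each edge's fate outright, which is the extreme version of the "crucial influence" of first impressions reported in the abstract.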
Modeling gene family evolution and reconciling phylogenetic discord.
Szöllosi, Gergely J; Daubin, Vincent
2012-01-01
Large-scale databases are available that contain homologous gene families constructed from hundreds of complete genome sequences from across the three domains of life. Here, we discuss the approaches of increasing complexity aimed at extracting information on the pattern and process of gene family evolution from such datasets. In particular, we consider the models that invoke processes of gene birth (duplication and transfer) and death (loss) to explain the evolution of gene families. First, we review birth-and-death models of family size evolution and their implications in light of the universal features of family size distribution observed across different species and the three domains of life. Subsequently, we proceed to recent developments on models capable of more completely considering information in the sequences of homologous gene families through the probabilistic reconciliation of the phylogenetic histories of individual genes with the phylogenetic history of the genomes in which they have resided. To illustrate the methods and results presented, we use data from the HOGENOM database, demonstrating that the distribution of homologous gene family sizes in the genomes of the eukaryota, archaea, and bacteria exhibits remarkably similar shapes. We show that these distributions are best described by models of gene family size evolution, where for individual genes the death (loss) rate is larger than the birth (duplication and transfer) rate but new families are continually supplied to the genome by a process of origination. Finally, we use probabilistic reconciliation methods to take into consideration additional information from gene phylogenies, and find that, for prokaryotes, the majority of birth events are the result of transfer.
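The birth-and-death-with-origination picture can be sketched with a toy discrete-time simulation (rates are hypothetical, chosen only so that loss exceeds birth, as the abstract describes):

```python
import random

# Toy sketch of gene family size evolution: per member, duplications or
# transfers occur with rate `birth` and losses with rate `death` (> birth);
# new single-member families originate at a constant rate, so a stationary
# ensemble of family sizes emerges. All rates are hypothetical.

def simulate_families(birth=0.3, death=0.5, origination=2.0, dt=0.01,
                      steps=50000, seed=2):
    rng = random.Random(seed)
    families = []
    for _ in range(steps):
        if rng.random() < origination * dt:
            families.append(1)  # a new family of size 1 originates
        for i in range(len(families)):
            n = families[i]
            gains = sum(1 for _ in range(n) if rng.random() < birth * dt)
            losses = sum(1 for _ in range(n) if rng.random() < death * dt)
            families[i] = n + gains - losses
        families = [n for n in families if n > 0]  # extinct families vanish
    return families

sizes = simulate_families()
```

Because each family is subcritical (death > birth), every family eventually goes extinct, and it is the steady supply of originations that maintains the observed size distribution, dominated by small families.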
Dark Sage: Semi-analytic model of galaxy evolution
Stevens, Adam R. H.; Croton, Darren J.; Mutch, Simon J.; Sinha, Manodeep
2017-06-01
DARK SAGE is a semi-analytic model of galaxy formation that focuses on detailing the structure and evolution of galaxies' discs. The code-base, written in C, is an extension of SAGE (ascl:1601.006) and maintains the modularity of SAGE. DARK SAGE runs on any N-body simulation with trees organized in a supported format and containing a minimum set of basic halo properties.
Higgs quartic coupling and neutrino sector evolution in 2UED models
Abdalgabar, A.
2014-05-20
Models with two compact universal extra dimensions (2UED) are an interesting class of models for several theoretical and phenomenological reasons, such as the justification of having three standard model fermion families, suppression of the proton decay rate, a dark matter parity from relics of the six-dimensional Lorentz symmetry, and the origin of masses and mixings in the standard model. However, these theories are merely effective ones, with a typically reduced range of validity in energy scale. We explore two limiting cases, with the three standard model generations either all propagating in the bulk or all localised to a brane, from the point of view of the renormalisation group evolution of the Higgs sector and the neutrino sector of these models. The recent LHC results on the Higgs boson allow, in some scenarios, stronger constraints to be placed on the cutoff scale from the requirement of stability of the Higgs potential.
Models for the directed evolution of bacterial allelopathy: bacteriophage lysins.
Bull, James J; Crandall, Cameron; Rodriguez, Anna; Krone, Stephen M
2015-01-01
Microbes produce a variety of compounds that are used to kill or suppress other species. Traditional antibiotics have their origins in these natural products, as do many types of compounds being pursued today in the quest for new antibacterial drugs. When a potential toxin can be encoded by and exported from a species that is not harmed, the opportunity exists to use directed evolution to improve the toxin's ability to kill other species-allelopathy. In contrast to the typical application of directed evolution, this case requires the co-culture of at least two species or strains, a host that is unharmed by the toxin plus the intended target of the toxin. We develop mathematical and computational models of this directed evolution process. Two contexts are considered, one with the toxin encoded on a plasmid and the other with the toxin encoded in a phage. The plasmid system appears to be more promising than the phage system. Crucial to both designs is the ability to co-culture two species/strains (host and target) such that the host is greatly outgrown by the target species except when the target species is killed. The results suggest that, if these initial conditions can be satisfied, directed evolution is feasible for the plasmid-based system. Screening with a plasmid-based system may also enable rapid improvement of a toxin.
A model for evolution of overlapping community networks
Karan, Rituraj; Biswal, Bibhu
2017-05-01
A model is proposed for the evolution of network topology in social networks with overlapping community structure. Starting from an initial community structure that is defined in terms of group affiliations, the model postulates that the subsequent growth and loss of connections is similar to Hebbian learning and unlearning in the brain and is governed by two dominant factors: the strength and frequency of interaction between the members, and the degree of overlap between different communities. The temporal evolution from an initial community structure to the current network topology can be described based on these two parameters. It is possible to quantify the growth that has occurred so far and predict the final stationary state to which the network is likely to evolve. Applications in epidemiology or the spread of email viruses in a computer network, as well as finding specific target nodes to control them, are envisaged. In the face of the challenge of collecting and analyzing large-scale time-resolved data on social groups and communities, one confronts the most basic question: how do communities evolve in time? This work aims to address this issue by developing a mathematical model for the evolution of community networks and studying it through computer simulation.
Subgrid Modeling Geomorphological and Ecological Processes in Salt Marsh Evolution
Shi, F.; Kirby, J. T., Jr.; Wu, G.; Abdolali, A.; Deb, M.
2016-12-01
Numerically modeling the long-term evolution of salt marshes is challenging because it requires an extensive use of computational resources. Due to the presence of narrow tidal creeks, variations of salt marsh topography can be significant over spatial length scales on the order of a meter. With growing availability of high-resolution bathymetry measurements, like LiDAR-derived DEM data, it is increasingly desirable to run a high-resolution model in a large domain and for a long period of time to get trends of sedimentation patterns, morphological change and marsh evolution. However, high spatial resolution poses a major challenge in both computational time and memory storage when simulating a salt marsh with dimensions of up to O(100 km^2) with a small time step. In this study, we have developed a so-called Pre-storage, Sub-grid Model (PSM, Wu et al., 2015) for simulating flooding and draining processes in salt marshes. The simulation of Brokenbridge salt marsh, Delaware, shows that, with the combination of the sub-grid model and the pre-storage method, over 2 orders of magnitude computational speed-up can be achieved with minimal loss of model accuracy. We recently extended PSM to include a sediment transport component and models for biomass growth and sedimentation in the sub-grid model framework. The sediment transport model is formulated based on a newly derived sub-grid sediment concentration equation following Defina's (2000) area-averaging procedure. Suspended sediment transport is modeled by the advection-diffusion equation at the coarse grid level, but the local erosion and sedimentation rates are integrated at the sub-grid level. The morphological model is based on the existing morphological model in NearCoM (Shi et al., 2013), extended to include organic production from the biomass model. The vegetation biomass is predicted by a simple logistic equation model proposed by Marani et al. (2010). The biomass component is loosely coupled with hydrodynamic and
An incremental-iterative method for modeling damage evolution in voxel-based microstructure models
Zhu, Qi-Zhi; Yvonnet, Julien
2015-02-01
Numerical methods motivated by rapid advances in image processing techniques have been intensively developed during recent years and increasingly applied to simulate heterogeneous materials with complex microstructure. The present work aims at elaborating an incremental-iterative numerical method for voxel-based modeling of damage evolution in quasi-brittle microstructures. The iterative scheme based on the Lippmann-Schwinger equation in the real space domain (Yvonnet, in Int J Numer Methods Eng 92:178-205, 2012) is first cast into an incremental form so as to implement nonlinear material models efficiently. In the proposed scheme, local strain increments at material grid points are computed iteratively by a mapping operation through a transformation array, while local stresses are determined using a constitutive model that accounts for material degradation by damage. For validation, benchmark studies and numerical simulations using microtomographic data of concrete are performed. For each test, numerical predictions by the incremental-iterative scheme and the finite element method, respectively, are presented and compared for both global responses and local damage distributions. It is emphasized that the proposed incremental-iterative formulation can be straightforwardly applied in the framework of other Lippmann-Schwinger equation-based schemes, like the fast Fourier transform method.
Time evolution in deparametrized models of loop quantum gravity
Assanioussi, Mehdi; Lewandowski, Jerzy; Mäkinen, Ilkka
2017-07-01
An important aspect in understanding the dynamics in the context of deparametrized models of loop quantum gravity (LQG) is to obtain a sufficient control on the quantum evolution generated by a given Hamiltonian operator. More specifically, we need to be able to compute the evolution of relevant physical states and observables with a relatively good precision. In this article, we introduce an approximation method to deal with the physical Hamiltonian operators in deparametrized LQG models, and we apply it to models in which a free Klein-Gordon scalar field or a nonrotational dust field is taken as the physical time variable. This method is based on using standard time-independent perturbation theory of quantum mechanics to define a perturbative expansion of the Hamiltonian operator, the small perturbation parameter being determined by the Barbero-Immirzi parameter β . This method allows us to define an approximate spectral decomposition of the Hamiltonian operators and hence to compute the evolution over a certain time interval. As a specific example, we analyze the evolution of expectation values of the volume and curvature operators starting with certain physical initial states, using both the perturbative method and a straightforward expansion of the expectation value in powers of the time variable. This work represents a first step toward achieving the goal of understanding and controlling the new dynamics developed in Alesci et al. [Phys. Rev. D 91, 124067 (2015), 10.1103/PhysRevD.91.124067] and Assanioussi et al. [Phys. Rev. D 92, 044042 (2015), 10.1103/PhysRevD.92.044042].
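Schematically, the standard time-independent perturbation expansion invoked above takes the familiar Rayleigh-Schrödinger form, with the Barbero-Immirzi parameter β playing the role of the small parameter (generic symbols here, not the specific LQG operators of the paper):

```latex
\hat{H} = \hat{H}_0 + \beta\,\hat{V}, \qquad
E_n = E_n^{(0)} + \beta\,\langle n^{(0)}|\hat{V}|n^{(0)}\rangle
    + \beta^2 \sum_{m\neq n}
      \frac{\bigl|\langle m^{(0)}|\hat{V}|n^{(0)}\rangle\bigr|^2}{E_n^{(0)} - E_m^{(0)}}
    + \mathcal{O}(\beta^3)
```

The approximate spectral decomposition assembled from such corrected eigenvalues and eigenstates is what allows expectation values, e.g. of the volume and curvature operators, to be evolved over a finite time interval.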
Modelling Influence and Opinion Evolution in Online Collective Behaviour.
Directory of Open Access Journals (Sweden)
Corentin Vande Kerckhove
Full Text Available Opinion evolution and judgment revision are mediated through social influence. Based on a large crowdsourced in vitro experiment (n = 861), it is shown how a consensus model can be used to predict opinion evolution in online collective behaviour. It is the first time the predictive power of a quantitative model of opinion dynamics has been tested against a real dataset. Unlike previous research on the topic, the model was validated on data which did not serve to calibrate it. This avoids favoring more complex models over simpler ones and prevents overfitting. The model is parametrized by the influenceability of each individual, a factor representing the extent to which individuals incorporate external judgments. The prediction accuracy depends on prior knowledge of the participants' past behaviour. Several situations reflecting data availability are compared. When the data are scarce, data from previous participants are used to predict how a new participant will behave. Judgment revision includes unpredictable variations which limit the potential for prediction. A first measure of unpredictability, based on a specific control experiment, is proposed. More than two thirds of the prediction errors are found to occur due to unpredictability of the human judgment revision process rather than to model imperfection.
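The abstract does not give the functional form of the consensus model; purely as an illustration, a minimal weighted-averaging update in the DeGroot spirit, where each individual's influenceability sets how far they move toward the others' judgments (all names and values here are hypothetical, not the study's calibrated model):

```python
import numpy as np

def revise_opinions(opinions, influenceability):
    """One revision round: each individual moves toward the mean of the
    others' judgments by their influenceability a_i in [0, 1].
    (Hypothetical minimal form; the study's exact model may differ.)"""
    opinions = np.asarray(opinions, dtype=float)
    a = np.asarray(influenceability, dtype=float)
    n = opinions.size
    # mean of the other participants' judgments, computed per individual
    others_mean = (opinions.sum() - opinions) / (n - 1)
    return (1.0 - a) * opinions + a * others_mean

# three participants; the middle one ignores the others (a = 0)
print(revise_opinions([0.0, 10.0, 20.0], [0.5, 0.0, 0.5]))
```

With influenceability zero the judgment is unchanged; with influenceability one it is replaced by the mean of the others' judgments.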
Considering Planetary Constraints and Dynamic Screening in Solar Evolution Modeling
Wood, Suzannah R.; Mussack, Katie; Guzik, Joyce A.
2018-01-01
The ‘faint early sun problem’ remains unsolved. This problem consists of the apparent contradiction between the standard solar model prediction of lower luminosity (70% of current luminosity) and the observations of liquid water on early Earth and Mars. The presence of liquid water on early Earth and Mars should not be neglected and should be used as a constraint for solar evolution modeling. In addition, modifications to standard solar models are needed to address the discrepancy with the solar structure inferred from helioseismology given the latest solar abundance determinations. Here, we utilize three different sets of solar abundances: GN93 (Grevesse & Noels, 1993), AGS05 (Asplund et al., 2005), and AGSS09 (Asplund et al., 2009). We propose an early mass-loss model with an initial solar mass between 1.07 and 1.15 solar masses and an exponentially decreasing mass-loss rate to meet conditions in the early solar system (Wood et al., submitted). Additionally, we investigate the effects of dynamic screening and the new OPLIB opacities from Los Alamos (Colgan et al., 2016). We show the effects of these modifications to the standard solar evolution models on the interior structure, neutrino fluxes, sound speed, p-mode frequencies, convection-zone depth, and envelope helium and element abundance of the model sun at the present day.
Energy Technology Data Exchange (ETDEWEB)
Wirth, Brian [Univ. of Tennessee, Knoxville, TN (United States)
2015-04-08
Materials used in extremely hostile environments such as nuclear reactors are subject to a high flux of neutron irradiation, and thus vast concentrations of vacancy and interstitial point defects are produced because of collisions of energetic neutrons with host lattice atoms. The fate of these defects depends on various reaction mechanisms which occur immediately following the displacement cascade evolution and during the longer-time kinetically dominated evolution, such as annihilation, recombination, clustering or trapping at sinks of vacancies, interstitials and their clusters. The long-range diffusional transport and evolution of point defects and self-defect clusters drive a microstructural and microchemical evolution that is known to produce degradation of mechanical properties including the creep rate, yield strength, ductility, or fracture toughness, and correspondingly affect material serviceability and lifetimes in nuclear applications. Therefore, a detailed understanding of microstructural evolution in materials at different time and length scales is of significant importance. The primary objective of this work is to utilize a hierarchical computational modeling approach i) to evaluate the potential for nanoscale precipitates to enhance point defect recombination rates and thereby the self-healing ability of advanced structural materials, and ii) to evaluate the stability and irradiation-induced evolution of such nanoscale precipitates resulting from enhanced point defect transport to and annihilation at precipitate interfaces. This project will utilize, and as necessary develop, computational materials modeling techniques within a hierarchical computational modeling approach, principally including molecular dynamics, kinetic Monte Carlo and spatially-dependent cluster dynamics modeling, to identify and understand the most important physical processes relevant to promoting the “self-healing” or radiation resistance in advanced materials containing
Using Pareto points for model identification in predictive toxicology
2013-01-01
Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration but management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology. PMID:23517649
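The Pareto-based identification idea can be sketched as mining a model collection for the non-dominated candidates. The scoring criteria below (`error`, a validation error, and `distance`, a distance of the new compound from a model's applicability domain, both to be minimized) are hypothetical stand-ins, not the paper's actual objectives:

```python
def pareto_front(models):
    """Return the non-dominated models: no other model is at least as
    good on both criteria and strictly better on one. Criteria are
    hypothetical; lower is better for both."""
    front = []
    for m in models:
        dominated = any(
            o["error"] <= m["error"] and o["distance"] <= m["distance"]
            and (o["error"] < m["error"] or o["distance"] < m["distance"])
            for o in models
        )
        if not dominated:
            front.append(m)
    return front

models = [
    {"name": "A", "error": 0.10, "distance": 0.9},
    {"name": "B", "error": 0.20, "distance": 0.2},
    {"name": "C", "error": 0.25, "distance": 0.8},  # dominated by B
]
print([m["name"] for m in pareto_front(models)])
```

Model C is eliminated because B is better on both criteria; the user (or a downstream rule) then picks among the survivors A and B.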
A systemic approach for modeling biological evolution using Parallel DEVS.
Heredia, Daniel; Sanz, Victorino; Urquia, Alfonso; Sandín, Máximo
2015-08-01
A new model for studying the evolution of living organisms is proposed in this manuscript. The proposed model is based on a non-neodarwinian systemic approach. The model is focused on considering several controversies and open discussions about modern evolutionary biology. Additionally, a simplification of the proposed model, named EvoDEVS, has been mathematically described using the Parallel DEVS formalism and implemented as a computer program using the DEVSLib Modelica library. EvoDEVS serves as an experimental platform to study different conditions and scenarios by means of computer simulations. Two preliminary case studies are presented to illustrate the behavior of the model and validate its results. EvoDEVS is freely available at http://www.euclides.dia.uned.es. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Structured Constructs Models Based on Change-Point Analysis
Shin, Hyo Jeong; Wilson, Mark; Choi, In-Hee
2017-01-01
This study proposes a structured constructs model (SCM) to examine measurement in the context of a multidimensional learning progression (LP). The LP is assumed to have features that go beyond a typical multidimensional IRT model, in that there are hypothesized to be certain cross-dimensional linkages that correspond to requirements between the…
A Chemical Evolution Model for the Fornax Dwarf Spheroidal Galaxy
Directory of Open Access Journals (Sweden)
Yuan Zhen
2016-01-01
Full Text Available Fornax is the brightest Milky Way (MW) dwarf spheroidal galaxy and its star formation history (SFH) has been derived from observations. We estimate the time evolution of its gas mass and net inflow and outflow rates from the SFH using a simple star formation law that relates the star formation rate to the gas mass. We present a chemical evolution model on a 2D mass grid with supernovae (SNe) as sources of metal enrichment. We find that a key parameter controlling the enrichment is the mass Mx of the gas that mixes with the ejecta from each SN. The choice of Mx depends on the evolution of SN remnants and on the global gas dynamics. It differs between the two types of SNe involved and between the periods before and after Fornax became an MW satellite at time t = tsat. Our results indicate that due to the global gas outflow at t > tsat, part of the ejecta from each SN may directly escape from Fornax. Sample results from our model are presented and compared with data.
Modulating and evaluating receptor promiscuity through directed evolution and modeling.
Stainbrook, Sarah C; Yu, Jessica S; Reddick, Michael P; Bagheri, Neda; Tyo, Keith E J
2017-06-01
The promiscuity of G-protein-coupled receptors (GPCRs) has broad implications in disease, pharmacology and biosensing. Promiscuity is a particularly crucial consideration for protein engineering, where the ability to modulate and model promiscuity is essential for developing desirable proteins. Here, we present methodologies for (i) modifying GPCR promiscuity using directed evolution and (ii) predicting receptor response and identifying important peptide features using quantitative structure-activity relationship models and grouping-exhaustive feature selection. We apply these methodologies to the yeast pheromone receptor Ste2 and its native ligand α-factor. Using directed evolution, we created Ste2 mutants with altered specificity toward a library of α-factor variants. We then used the Vectors of Hydrophobic, Steric, and Electronic properties and partial least squares regression to characterize receptor-ligand interactions, identify important ligand positions and properties, and predict receptor response to novel ligands. Together, directed evolution and computational analysis enable the control and evaluation of GPCR promiscuity. These approaches should be broadly useful for the study and engineering of GPCRs and other protein-small molecule interactions. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Morris, Chloe; Coulthard, Tom; Parsons, Daniel R.; Manson, Susan; Barkwith, Andrew
2017-04-01
Landscape Evolution Models (LEMs) are proven to be useful tools in understanding the morphodynamics of coast and estuarine systems. However, perhaps owing to the lack of research in this area, current models are not capable of simulating the dynamic interactions between these systems and their co-evolution at the meso-scale. Through a novel coupling of numerical models, this research is designed to explore coupled coastal-estuarine interactions, controls on system behaviour and the influence that environmental change could have. This will contribute to the understanding of the morphodynamics of these systems and how they may behave and evolve over the next century in response to climate changes, with the aim of informing management practices. This goal is being achieved through the modification and coupling of the one-line Coastline Evolution Model (CEM) with the hydrodynamic LEM CAESAR-Lisflood (C-L). The major issues faced with coupling these programs are their differing complexities and the limited graphical visualisations produced by the CEM that hinder the dissemination of results. The work towards overcoming these issues and reported here, include a new version of the CEM that incorporates a range of more complex geomorphological processes and boasts a graphical user interface that guides users through model set-up and projects a live output during model runs. The improved version is a stand-alone tool that can be used for further research projects and for teaching purposes. A sensitivity analysis using the Morris method has been completed to identify which key variables, including wave climate, erosion and weathering values, dominate the control of model behaviour. The model is being applied and tested using the evolution of the Holderness Coast, Humber Estuary and Spurn Point on the east coast of England (UK), which possess diverse geomorphologies and complex, co-evolving sediment pathways. Simulations using the modified CEM are currently being completed to
Modeling the evolution of the cerebellum: from macroevolution to function.
Smaers, Jeroen B
2014-01-01
The purpose of this contribution is to explore how macroevolutionary studies of the cerebellum can contribute to theories on cerebellar function and connectivity. New approaches in modeling the evolution of biological traits have provided new insights in the evolutionary pathways that underlie cerebellar evolution. These approaches reveal patterns of coordinated size changes among brain structures across evolutionary time, demonstrate how particular lineages/species stand out, and what the rate and timing of neuroanatomical changes were in evolutionary history. Using these approaches, recent studies demonstrated that changes in the relative size of the posterior cerebellar cortex and associated cortical areas indicate taxonomic differences in great apes and humans. Considering comparative differences in behavioral capacity, macroevolutionary results are discussed in the context of theories on cerebellar function and learning. © 2014 Elsevier B.V. All rights reserved.
Sand Point, Alaska MHW Coastal Digital Elevation Model
National Oceanic and Atmospheric Administration, Department of Commerce — NOAA's National Geophysical Data Center (NGDC) is building high-resolution digital elevation models (DEMs) for select U.S. coastal regions. These integrated...
Bayesian Spatial Point Process Modeling of Neuroimaging Data
Johnson, Timothy D.
2017-01-01
Talk given during the "Where’s Your Signal? Explicit Spatial Models to Improve Interpretability and Sensitivity of Neuroimaging Results" workshop at the 2012 Organization for Human Brain Mapping (OHBM) conference in Beijing, 10-14 June.
A Thermodynamic Point of View on Dark Energy Models
Cardone, Vincenzo F.; Ninfa Radicella; Antonio Troisi
2017-01-01
We present a conjugate analysis of two different dark energy models, namely the Barboza–Alcaniz parameterization and the phenomenologically-motivated Hobbit model, investigating both their agreement with observational data and their thermodynamical properties. We successfully fit a wide dataset including the Hubble diagram of Type Ia Supernovae, the Hubble rate expansion parameter as measured from cosmic chronometers, the baryon acoustic oscillations (BAO) standard ruler data and the Planck d...
Gaussian approximations of fluorescence microscope point-spread function models.
Zhang, Bo; Zerubia, Josiane; Olivo-Marin, Jean-Christophe
2007-04-01
We comprehensively study the least-squares Gaussian approximations of the diffraction-limited 2D-3D paraxial-nonparaxial point-spread functions (PSFs) of the wide field fluorescence microscope (WFFM), the laser scanning confocal microscope (LSCM), and the disk scanning confocal microscope (DSCM). The PSFs are expressed using the Debye integral. Under an L∞ constraint imposing peak matching, optimal and near-optimal Gaussian parameters are derived for the PSFs. With an L1 constraint imposing energy conservation, an optimal Gaussian parameter is derived for the 2D paraxial WFFM PSF. We found that (1) the 2D approximations are all very accurate; (2) no accurate Gaussian approximation exists for 3D WFFM PSFs; and (3) with typical pinhole sizes, the 3D approximations are accurate for the DSCM and nearly perfect for the LSCM. All the Gaussian parameters derived in this study are in explicit analytical form, allowing their direct use in practical applications.
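The peak-matched least-squares Gaussian approximation of a 2D paraxial PSF can be illustrated numerically. The sketch below fits a Gaussian to an Airy pattern by grid search over the width parameter (illustrative wavelength and NA values; the paper's optimal parameters are derived analytically, not by this procedure):

```python
import numpy as np
from scipy.special import j1

# 2D paraxial wide-field PSF (Airy pattern), peak-normalized to 1 at r = 0:
# psf(r) = (2 J1(v)/v)^2 with v = k * NA * r and k = 2*pi/wavelength.
wavelength, NA = 0.53, 1.4           # microns; illustrative values only
k = 2.0 * np.pi / wavelength
r = np.linspace(1e-6, 1.0, 2000)     # radial coordinate in microns
v = k * NA * r
airy = (2.0 * j1(v) / v) ** 2

# Peak matching: the Gaussian also equals 1 at r = 0, so only its width
# sigma is free. Minimize the squared residual over a grid of sigmas.
sigmas = np.linspace(0.01, 0.5, 2000)
errs = [np.sum((np.exp(-r**2 / (2.0 * s**2)) - airy) ** 2) for s in sigmas]
sigma_opt = float(sigmas[int(np.argmin(errs))])
print(f"best-fit sigma ~ {sigma_opt:.4f} um")
```

The fitted width lands near the Airy main lobe's scale (of order 0.1 µm for these values), consistent with the paper's finding that 2D Gaussian approximations are very accurate.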
Modelling population-based cancer survival trends using join point models for grouped survival data.
Yu, Binbing; Huang, Lan; Tiwari, Ram C; Feuer, Eric J; Johnson, Karen A
2009-04-01
In the United States cancer as a whole is the second leading cause of death and a major burden to health care, thus the medical progress against cancer is a major public health goal. There are many individual studies to suggest that cancer treatment breakthroughs and early diagnosis have significantly improved the prognosis of cancer patients. To better understand the relationship between medical improvements and the survival experience for the patient population at large, it is useful to evaluate cancer survival trends on the population level, e.g., to find out when and how much the cancer survival rates changed. In this paper, we analyze the population-based grouped cancer survival data by incorporating joinpoints into the survival models. A joinpoint survival model facilitates the identification of trends with significant change points in cancer survival, when related to cancer treatments or interventions. The Bayesian Information Criterion is used to select the number of joinpoints. The performance of the joinpoint survival models is evaluated with respect to cancer prognosis, joinpoint locations, annual percent changes in death rates by year of diagnosis, and sample sizes through intensive simulation studies. The model is then applied to the grouped relative survival data for several major cancer sites from the Surveillance, Epidemiology and End Results (SEER) Program of the National Cancer Institute. The change points in the survival trends for several major cancer sites are identified and the potential driving forces behind such change points are discussed.
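The joinpoint idea of selecting change-point locations by an information criterion can be sketched on a simple piecewise-linear trend. This is synthetic, least-squares illustration only; the paper's actual models are survival likelihoods for grouped relative-survival data, and the years, slopes and noise level below are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1975, 2005)
t = years - years[0]
# synthetic log-rate series with one slope change (joinpoint) at 1990
true_tau = 1990 - years[0]
y = -1.0 - 0.01 * t - 0.04 * np.maximum(t - true_tau, 0) \
    + rng.normal(0.0, 0.02, t.size)

def rss(X, y):
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(res[0]) if res.size else float(np.sum((y - X @ beta) ** 2))

def bic(rss_val, n, k):
    return n * np.log(rss_val / n) + k * np.log(n)

n = t.size
# candidate model with no joinpoint: intercept + slope
X0 = np.column_stack([np.ones(n), t])
best = ("none", bic(rss(X0, y), n, 2))
# candidate models with one joinpoint at each interior year
for tau in t[2:-2]:
    X1 = np.column_stack([np.ones(n), t, np.maximum(t - tau, 0)])
    b = bic(rss(X1, y), n, 3)
    if b < best[1]:
        best = (int(years[0] + tau), b)
print("selected joinpoint:", best[0])
```

BIC penalizes the extra parameter, so the one-joinpoint model is selected only when the slope change genuinely improves the fit, mirroring how the paper chooses the number of joinpoints.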
Charge state evolution in the solar wind. III. Model comparison with observations
Energy Technology Data Exchange (ETDEWEB)
Landi, E.; Oran, R.; Lepri, S. T.; Zurbuchen, T. H.; Fisk, L. A.; Van der Holst, B. [Department of Atmospheric, Oceanic and Space Sciences, University of Michigan, Ann Arbor, MI 48109 (United States)
2014-08-01
We test three theoretical models of the fast solar wind with a set of remote sensing observations and in-situ measurements taken during the minimum of solar cycle 23. First, the model electron density and temperature are compared to SOHO/SUMER spectroscopic measurements. Second, the model electron density, temperature, and wind speed are used to predict the charge state evolution of the wind plasma from the source regions to the freeze-in point. Frozen-in charge states are compared with Ulysses/SWICS measurements at 1 AU, while charge states close to the Sun are combined with the CHIANTI spectral code to calculate the intensities of selected spectral lines, to be compared with SOHO/SUMER observations in the north polar coronal hole. We find that none of the theoretical models are able to completely reproduce all observations; namely, all of them underestimate the charge state distribution of the solar wind everywhere, although the levels of disagreement vary from model to model. We discuss possible causes of the disagreement, namely, uncertainties in the calculation of the charge state evolution and of line intensities, in the atomic data, and in the assumptions on the wind plasma conditions. Last, we discuss the scenario where the wind is accelerated from a region located in the solar corona rather than in the chromosphere as assumed in the three theoretical models, and find that a wind originating from the corona is in much closer agreement with observations.
Evolution Game Model of Travel Mode Choice in Metropolitan
Directory of Open Access Journals (Sweden)
Chaoqun Wu
2015-01-01
Full Text Available The paper describes an evolution game model of travel mode choice to determine whether transportation policies would have the desired effect. The model is first expressed as a two-stage sequential game in the extensive form based on the similarity between evolution game theory and the travel mode choice process. Second, backward induction is used to solve for Nash equilibrium of the game based on the Folk Theorem. Third, the sensitivity analysis suggests that a payoff reduction of travel by any mode will result in a rising proportion of inhabitants travelling by that mode and falling proportions of inhabitants travelling by other modes. Finally, the model is applied to Beijing inhabitants’ travel mode choices during morning peak hours and draws the conclusion that the proportion of inhabitants travelling by rail would increase when traffic congestion is more severe. This confirms that fast construction of the urban rail transit would be an effective means of alleviating traffic congestion. The model may be a useful tool for policy makers for analyzing the complex influence of travel mode choice processes on transport policies and transport construction projects.
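Backward induction for a two-stage sequential game, as used above, can be sketched with illustrative payoffs (these are not the paper's calibrated Beijing values; the two "players" and the numbers are hypothetical):

```python
# Two-stage sequential game solved by backward induction. Player 1
# picks a travel mode first; player 2 observes it and responds.
# Payoff tuples are (player 1, player 2); congestion hurts both.
payoffs = {
    ("car",  "car"):  (1, 1),
    ("car",  "rail"): (4, 2),
    ("rail", "car"):  (2, 4),
    ("rail", "rail"): (3, 3),
}
modes = ["car", "rail"]

def backward_induction():
    # Stage 2: for each possible first move, player 2 best-responds.
    best_reply = {
        m1: max(modes, key=lambda m2: payoffs[(m1, m2)][1]) for m1 in modes
    }
    # Stage 1: player 1 chooses anticipating that best reply.
    m1 = max(modes, key=lambda m: payoffs[(m, best_reply[m])][0])
    return m1, best_reply[m1]

print(backward_induction())
```

With these payoffs the subgame-perfect outcome is ("car", "rail"): player 2's best reply to "car" is "rail", and anticipating this makes "car" player 1's best first move. Changing the payoffs, e.g. lowering the car payoffs to model congestion charging, shifts the equilibrium, which is the kind of policy sensitivity the paper analyzes.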
A Stochastic Model for the HIV/AIDS Dynamic Evolution
Directory of Open Access Journals (Sweden)
Raimondo Manca
2007-08-01
Full Text Available This paper analyses the HIV/AIDS dynamic evolution as defined by CD4 levels, from a macroscopic point of view, by means of homogeneous semi-Markov stochastic processes. A large number of results have been obtained, including the following conditional probabilities: that an infected patient will be in state j after a time t given that she/he entered state i at time 0 (the starting time); that she/he will survive up to a time t, given the starting state; that she/he will continue to remain in the starting state up to time t; and that she/he will reach stage j of the disease in the next transition, if the previous state was i and no state change occurred up to time t. The immunological states considered are based on CD4 counts and our data refer to patients selected from a series of 766 HIV-positive intravenous drug users.
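Conditional probabilities of this kind can be estimated by simulating a semi-Markov process. A minimal sketch follows, with a hypothetical embedded jump chain over four CD4-based states and exponential sojourn times as a simplifying assumption (the paper's states, data and sojourn distributions differ):

```python
import numpy as np

# Hypothetical embedded jump chain; state 3 is absorbing (AIDS/death).
P = np.array([
    [0.0, 0.7, 0.2, 0.1],
    [0.3, 0.0, 0.5, 0.2],
    [0.1, 0.3, 0.0, 0.6],
    [0.0, 0.0, 0.0, 1.0],
])
mean_sojourn = np.array([24.0, 18.0, 12.0, np.inf])  # months per state

def simulate(start, t_max, rng):
    """State occupied at time t_max, with exponential sojourn times
    (a simplifying assumption, not the paper's estimated kernel)."""
    state, t = start, 0.0
    while state != 3:
        t += rng.exponential(mean_sojourn[state])
        if t > t_max:
            return state        # still in `state` at the horizon
        state = int(rng.choice(4, p=P[state]))
    return state

rng = np.random.default_rng(1)
samples = np.array([simulate(0, 60.0, rng) for _ in range(5000)])
p_absorbed = float(np.mean(samples == 3))
print(f"P(in state 3 at 60 months | start in 0) ~ {p_absorbed:.2f}")
```

In a full semi-Markov treatment the sojourn distribution may depend on both the current and the next state; the simulation loop is the same, only the sampling step changes.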
Eco-genetic modeling of contemporary life-history evolution.
Dunlop, Erin S; Heino, Mikko; Dieckmann, Ulf
2009-10-01
We present eco-genetic modeling as a flexible tool for exploring the course and rates of multi-trait life-history evolution in natural populations. We build on existing modeling approaches by combining features that facilitate studying the ecological and evolutionary dynamics of realistically structured populations. In particular, the joint consideration of age and size structure enables the analysis of phenotypically plastic populations with more than a single growth trajectory, and ecological feedback is readily included in the form of density dependence and frequency dependence. Stochasticity and life-history trade-offs can also be implemented. Critically, eco-genetic models permit the incorporation of salient genetic detail such as a population's genetic variances and covariances and the corresponding heritabilities, as well as the probabilistic inheritance and phenotypic expression of quantitative traits. These inclusions are crucial for predicting rates of evolutionary change on both contemporary and longer timescales. An eco-genetic model can be tightly coupled with empirical data and therefore may have considerable practical relevance, in terms of generating testable predictions and evaluating alternative management measures. To illustrate the utility of these models, we present as an example an eco-genetic model used to study harvest-induced evolution of multiple traits in Atlantic cod. The predictions of our model (most notably that harvesting induces a genetic reduction in age and size at maturation, an increase or decrease in growth capacity depending on the minimum-length limit, and an increase in reproductive investment) are corroborated by patterns observed in wild populations. The predicted genetic changes occur together with plastic changes that could phenotypically mask the former. Importantly, our analysis predicts that evolutionary changes show little signs of reversal following a harvest moratorium. This illustrates how predictions offered by
Phylogenetic ANOVA: The Expression Variance and Evolution Model for Quantitative Trait Evolution.
Rohlfs, Rori V; Nielsen, Rasmus
2015-09-01
A number of methods have been developed for modeling the evolution of a quantitative trait on a phylogeny. These methods have received renewed interest in the context of genome-wide studies of gene expression, in which the expression levels of many genes can be modeled as quantitative traits. We here develop a new method for joint analyses of quantitative traits within- and between species, the Expression Variance and Evolution (EVE) model. The model parameterizes the ratio of population to evolutionary expression variance, facilitating a wide variety of analyses, including a test for lineage-specific shifts in expression level, and a phylogenetic ANOVA that can detect genes with increased or decreased ratios of expression divergence to diversity, analogous to the famous Hudson Kreitman Aguadé (HKA) test used to detect selection at the DNA level. We use simulations to explore the properties of these tests under a variety of circumstances and show that the phylogenetic ANOVA is more accurate than the standard ANOVA (no accounting for phylogeny) sometimes used in transcriptomics. We then apply the EVE model to a mammalian phylogeny of 15 species typed for expression levels in liver tissue. We identify genes with high expression divergence between species as candidates for expression level adaptation, and genes with high expression diversity within species as candidates for expression level conservation and/or plasticity. Using the test for lineage-specific expression shifts, we identify several candidate genes for expression level adaptation on the catarrhine and human lineages, including genes putatively related to dietary changes in humans. We compare these results to those reported previously using a model which ignores expression variance within species, uncovering important differences in performance. We demonstrate the necessity for a phylogenetic model in comparative expression studies and show the utility of the EVE model to detect expression divergence
RANS-VOF modelling of the Wavestar point absorber
DEFF Research Database (Denmark)
Ransley, E. J.; Greaves, D. M.; Raby, A.
2017-01-01
Highlights: • A fully nonlinear, coupled model of the Wavestar WEC has been created using open-source CFD software, OpenFOAM®. • The response of the Wavestar WEC is simulated in regular waves with different steepness. • Predictions of body motion, surface elevation, fluid velocity, pressure and load ...
From Point Cloud to Textured Model, the Zamani Laser Scanning ...
African Journals Online (AJOL)
The paper describes the stages of the laser scanning pipeline from data acquisition to the final 3D computer model based on experiences gained during the ongoing creation of data for the African Cultural Heritage Sites and Landscapes database. The various processes are briefly discussed and challenges are highlighted ...
Rybizki, Jan; Just, Andreas; Rix, Hans-Walter
2017-09-01
Elemental abundances of stars are the result of the complex enrichment history of their galaxy. Interpretation of observed abundances requires flexible modeling tools to explore and quantify the information about Galactic chemical evolution (GCE) stored in such data. Here we present Chempy, a newly developed code for GCE modeling, representing a parametrized open one-zone model within a Bayesian framework. A Chempy model is specified by a set of five to ten parameters that describe the effective galaxy evolution along with the stellar and star-formation physics: for example, the star-formation history (SFH), the feedback efficiency, the stellar initial mass function (IMF), and the incidence of supernova of type Ia (SN Ia). Unlike established approaches, Chempy can sample the posterior probability distribution in the full model parameter space and test data-model matches for different nucleosynthetic yield sets. It is essentially a chemical evolution fitting tool. We straightforwardly extend Chempy to a multi-zone scheme. As an illustrative application, we show that interesting parameter constraints result from only the ages and elemental abundances of the Sun, Arcturus, and the present-day interstellar medium (ISM). For the first time, we use such information to infer the IMF parameter via GCE modeling, where we properly marginalize over nuisance parameters and account for different yield sets. We find that 11.6 (+2.1/−1.6)% of the IMF explodes as core-collapse supernova (CC-SN), compatible with Salpeter (1955, ApJ, 121, 161). We also constrain the incidence of SN Ia per 10³ M⊙ to 0.5–1.4. At the same time, this Chempy application shows persistent discrepancies between predicted and observed abundances for some elements, irrespective of the chosen yield set. These cannot be remedied by any variations of Chempy's parameters and could be an indication of missing nucleosynthetic channels. Chempy could be a powerful tool to confront predictions from stellar
Bhoodoo, C.; Hupfer, A.; Vines, L.; Monakhov, E. V.; Svensson, B. G.
2016-11-01
Hydrothermally grown n-type ZnO samples, implanted with helium (He+) at a sample temperature of ~40 K and fluences of 5 × 10⁹ and 5 × 10¹⁰ cm⁻², have been studied in situ by capacitance voltage (CV) and junction spectroscopy measurements. The results are complemented by data from secondary ion mass spectrometry and Fourier transform infrared absorption measurements and first-principles calculations. Removal/passivation of an implantation-induced shallow donor center or alternatively growth of a deep acceptor defect are observed after annealing, monitored via charge carrier concentration (Nd) versus depth profiles extracted from CV data. Isothermal anneals in the temperature range of 290-325 K were performed to study the evolution in Nd, revealing first-order kinetics with an activation energy Ea ≈ 0.7 eV and frequency factor c0 ~ 10⁶ s⁻¹. Two models are discussed in order to explain these annealing results. One relies on transition of oxygen interstitials (Oi) from a split configuration (neutral state) to an octahedral configuration (deep double acceptor state) as a key feature. The other one is based on the migration of Zn interstitials (double donor) and trapping by neutral Zn-vacancy-hydrogen complexes as the core ingredient. In particular, the latter model exhibits good quantitative agreement with the experimental data and gives an activation energy of ~0.75 eV for the migration of Zn interstitials.
Room acoustics modeling using a point-cloud representation of the room geometry
DEFF Research Database (Denmark)
Markovic, Milos; Olesen, Søren Krarup; Hammershøi, Dorte
2013-01-01
geometry acquisition is presented. The method exploits the depth sensor of the Kinect device, which provides point-based information about a scanned room interior. After post-processing of the Kinect output data, a 3D point-cloud model of the room is obtained. Sound transmission between two selected points...
A model for the evolution of nucleotide polymerase directionality.
Directory of Open Access Journals (Sweden)
Joshua Ballanco
Full Text Available BACKGROUND: In all known living organisms, every enzyme that synthesizes nucleic acid polymers does so by adding nucleotide 5′-triphosphates to the 3′-hydroxyl group of the growing chain. This results in the well-known 5'→3' directionality of all DNA and RNA polymerases. The lack of any alternative mechanism, e.g. addition in a 3'→5' direction, may indicate a very early founder effect in the evolution of life, or it may be the result of a selective pressure against such an alternative. METHODOLOGY/PRINCIPAL FINDINGS: In an attempt to determine whether the lack of an alternative polymerase directionality is the result of a founder effect or evolutionary selection, we have constructed a basic model of early polymerase evolution. This model is informed by the essential chemical properties of the nucleotide polymerization reaction. With this model, we are able to simulate the growth of organisms with polymerases that synthesize either 5'→3' or 3'→5' in isolation or in competition with each other. CONCLUSIONS/SIGNIFICANCE: We have found that a competition between organisms with 5'→3' polymerases and 3'→5' polymerases only results in an evolutionarily stable strategy under certain conditions. Furthermore, we have found that mutations lead to a much clearer delineation between conditions that lead to a stable coexistence of these populations and conditions which ultimately lead to success for the 5'→3' form. In addition to presenting a plausible explanation for the uniqueness of enzymatic polymerization reactions, we hope these results also provide an example of how whole organism evolution can be understood based on molecular details.
Modeling the mesozoic-cenozoic structural evolution of east texas
Pearson, Ofori N.; Rowan, Elisabeth L.; Miller, John J.
2012-01-01
The U.S. Geological Survey (USGS) recently assessed the undiscovered technically recoverable oil and gas resources within Jurassic and Cretaceous strata of the onshore coastal plain and State waters of the U.S. Gulf Coast. Regional 2D seismic lines for key parts of the Gulf Coast basin were interpreted in order to examine the evolution of structural traps and the burial history of petroleum source rocks. Interpretation and structural modeling of seismic lines from eastern Texas provide insights into the structural evolution of this part of the Gulf of Mexico basin. Since completing the assessment, the USGS has acquired additional regional seismic lines in east Texas; interpretation of these new lines, which extend from the Texas-Oklahoma state line to the Gulf Coast shoreline, show how some of the region's prominent structural elements (e.g., the Talco and Mount Enterprise fault zones, the East Texas salt basin, and the Houston diapir province) vary along strike. The interpretations also indicate that unexplored structures may lie beneath the current drilling floor. Structural restorations based upon interpretation of these lines illustrate the evolution of key structures and show the genetic relation between structural growth and movement of the Jurassic Louann Salt. 1D thermal models that integrate kinetics and burial histories were also created for the region's two primary petroleum source rocks, the Oxfordian Smackover Formation and the Cenomanian-Turonian Eagle Ford Shale. Integrating results from the thermal models with the structural restorations provides insights into the distribution and timing of petroleum expulsion from the Smackover Formation and Eagle Ford Shale in eastern Texas.
Point process modelling of the Afghan War Diary.
Zammit-Mangion, Andrew; Dewar, Michael; Kadirkamanathan, Visakan; Sanguinetti, Guido
2012-07-31
Modern conflicts are characterized by an ever increasing use of information and sensing technology, resulting in vast amounts of high resolution data. Modelling and prediction of conflict, however, remain challenging tasks due to the heterogeneous and dynamic nature of the data typically available. Here we propose the use of dynamic spatiotemporal modelling tools for the identification of complex underlying processes in conflict, such as diffusion, relocation, heterogeneous escalation, and volatility. Using ideas from statistics, signal processing, and ecology, we provide a predictive framework able to assimilate data and give confidence estimates on the predictions. We demonstrate our methods on the WikiLeaks Afghan War Diary. Our results show that the approach allows deeper insights into conflict dynamics and allows a strikingly statistically accurate forward prediction of armed opposition group activity in 2010, based solely on data from previous years.
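A basic building block of such spatiotemporal point-process analyses is an estimated intensity surface over event locations. The sketch below is a generic Gaussian-kernel smoother on synthetic clustered events, purely to illustrate the idea; it is not the authors' state-space/variational framework, and the bandwidth and event cluster are invented.

```python
import math, random

def kernel_intensity(events, grid, bandwidth=0.5):
    """Gaussian-kernel estimate of a 2D point-process intensity on a grid."""
    norm = 1.0 / (2.0 * math.pi * bandwidth**2)
    out = []
    for gx, gy in grid:
        s = 0.0
        for ex, ey in events:
            d2 = (gx - ex)**2 + (gy - ey)**2
            s += norm * math.exp(-d2 / (2.0 * bandwidth**2))
        out.append(s)
    return out

random.seed(0)
# synthetic events clustered near (0, 0); a stand-in for conflict-event locations
events = [(random.gauss(0, 0.5), random.gauss(0, 0.5)) for _ in range(200)]
grid = [(x, y) for x in (-1.5, 0.0, 1.5) for y in (-1.5, 0.0, 1.5)]
lam = kernel_intensity(events, grid)
print(max(range(9), key=lambda i: lam[i]))  # index 4 is the cluster centre (0, 0)
```

Dynamic models such as the one in the paper go further by letting this intensity surface evolve in time, which is what captures diffusion, relocation and escalation.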
Numerical Modeling of a Wave Energy Point Absorber
DEFF Research Database (Denmark)
Hernandez, Lorenzo Banos; Frigaard, Peter; Kirkegaard, Poul Henning
2009-01-01
The present study deals with numerical modelling of the Wave Star Energy WSE device. Hereby, linear potential theory is applied via a BEM code on the wave hydrodynamics exciting the floaters. Time and frequency domain solutions of the floater response are determined for regular and irregular seas. Furthermore, these results are used to estimate the power and the energy absorbed by a single oscillating floater. Finally, a latching control strategy is analysed in open-loop configuration for energy maximization.
Generative Models in Deep Learning: Constraints for Galaxy Evolution
Turp, Maximilian Dennis; Schawinski, Kevin; Zhang, Ce; Weigel, Anna K.
2018-01-01
New techniques are essential to make advances in the field of galaxy evolution. Recent developments in the field of artificial intelligence and machine learning have proven that these tools can be applied to problems far more complex than simple image recognition. We use these purely data driven approaches to investigate the process of star formation quenching. We show that Variational Autoencoders provide a powerful method to forward model the process of galaxy quenching. Our results imply that simple changes in specific star formation rate and bulge to disk ratio cannot fully describe the properties of the quenched population.
Competition and adaptation in an Internet evolution model.
Serrano, M Angeles; Boguñá, Marián; Díaz-Guilera, Albert
2005-01-28
We model the evolution of the Internet at the autonomous system level as a process of competition for users and adaptation of bandwidth capability. From a weighted network formalism, where both nodes and links are weighted, we find the exponent of the degree distribution as a simple function of the growth rates of the number of autonomous systems and connections in the Internet, both empirically measurable quantities. Our approach also accounts for a high level of clustering as well as degree-degree correlations, both with the same hierarchical structure present in the real Internet. Further, it also highlights the interplay between bandwidth, connectivity, and traffic of the network.
Evolution of Models of Working Memory and Cognitive Resources.
Wingfield, Arthur
2016-01-01
The goal of this article is to trace the evolution of models of working memory and cognitive resources from the early 20th century to today. Linear flow models of information processing common in the 1960s and 1970s centered on the transfer of verbal information from a limited-capacity short-term memory store to long-term memory through rehearsal. Current conceptions see working memory as a dynamic system that includes both maintaining and manipulating information through a series of interactive components that include executive control and attentional resources. These models also reflect the evolution from an almost exclusive concentration on working memory for verbal materials to inclusion of a visual working memory component. Although differing in postulated mechanisms and emphasis, these evolving viewpoints all share the recognition that human information processing is a limited-capacity system with limits on the amount of information that can be attended to, remain activated in memory, and utilized at one time. These limitations take on special importance in spoken language comprehension, especially when the stimuli have complex linguistic structures or listening effort is increased by poor acoustic quality or reduced hearing acuity.
Microeconomic co-evolution model for financial technical analysis signals
Rotundo, G.; Ausloos, M.
2007-01-01
Technical analysis (TA) has been used for a long time before the availability of more sophisticated instruments for financial forecasting in order to suggest decisions on the basis of the occurrence of data patterns. Many mathematical and statistical tools for quantitative analysis of financial markets have experienced a fast and wide growth and have the power for overcoming classical TA methods. This paper aims to give a measure of the reliability of some information used in TA by exploring the probability of their occurrence within a particular microeconomic agent-based model of markets, i.e., the co-evolution Bak-Sneppen model originally invented for describing species population evolutions. After having proved the practical interest of such a model in describing financial index so-called avalanches, in the prebursting bubble time rise, the attention focuses on the occurrence of trend line detection crossing of meaningful barriers, those that give rise to some usual TA strategies. The case of the NASDAQ crash of April 2000 serves as an illustration.
Modeling the Evolution of Female Meiotic Drive in Maize.
Hall, David W; Dawe, R Kelly
2017-11-09
Autosomal drivers violate Mendel's law of segregation in that they are overrepresented in gametes of heterozygous parents. For drivers to be polymorphic within populations rather than fixing, their transmission advantage must be offset by deleterious effects on other fitness components. In this paper we develop an analytical model for the evolution of autosomal drivers that is motivated by the neocentromere drive system found in maize. In particular we model both the transmission advantage and deleterious fitness effects on seed viability, pollen viability, seed to adult survival mediated by maternal genotype, and seed to adult survival mediated by offspring genotype. We derive general, biologically intuitive, conditions for the four most likely evolutionary outcomes and discuss the expected evolution of autosomal drivers given these conditions. Finally, we determine the expected equilibrium allele frequencies predicted by the model given recent estimates of fitness components for all relevant genotypes and show that the predicted equilibrium is within the range observed in maize land races for levels of drive at the low end of what has been observed. Copyright © 2017, G3: Genes, Genomes, Genetics.
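The balance the abstract describes, a transmission advantage offset by fitness costs, can be made concrete with a textbook one-locus drive recursion. The parameter values below (drive strength d = 0.7, recessive viability cost s = 0.5) are hypothetical illustrations, not the maize estimates used in the paper.

```python
# One-locus autosomal drive with a recessive viability cost (standard
# population-genetics recursion; parameters are hypothetical, not the
# maize estimates). Heterozygotes transmit the driver D to a fraction
# d > 1/2 of gametes; DD homozygotes have viability 1 - s.
def next_freq(p, d=0.7, s=0.5):
    q = 1.0 - p
    w_DD, w_Dd, w_dd = 1.0 - s, 1.0, 1.0
    mean_w = p * p * w_DD + 2 * p * q * w_Dd + q * q * w_dd
    return (p * p * w_DD + 2 * p * q * w_Dd * d) / mean_w

p = 0.1
for _ in range(2000):
    p = next_freq(p)
print(round(p, 3))  # prints 0.8: a stable interior equilibrium, i.e. polymorphism
```

This reproduces the qualitative point of the abstract: when the viability cost is strong enough relative to the drive, the driver neither fixes nor is lost but settles at an intermediate equilibrium frequency.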
Dynamical evolution of volume fractions in multipressure multiphase flow models.
Chang, C H; Ramshaw, J D
2008-06-01
Compared to single-pressure models, multipressure multiphase flow models require additional closure relations to determine the individual pressures of the different phases. These relations are often taken to be evolution equations for the volume fractions. We present a rigorous theoretical framework for constructing such equations for compressible multiphase mixtures in terms of submodels for the relative volumetric expansion rates ΔE_i of the phases. These quantities are essentially the rates at which the phases dynamically expand or contract in response to pressure differences, and represent the general tendency of the volume fractions to relax toward values that produce local pressure equilibrium. We present a simple provisional model of this type in which ΔE_i is proportional to pressure differences divided by the time required for sound waves to traverse an appropriate characteristic length. It is shown that the resulting approach to pressure equilibrium is monotonic rather than oscillatory, and occurs instantaneously in the incompressible limit.
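A minimal numerical sketch of such a closure, with an invented equation of state and rate constant rather than the paper's submodels: two phases with fixed mass and p_i = K_i/α_i, where the volume fraction evolves at a rate proportional to the pressure difference divided by an acoustic time scale τ, relax monotonically to pressure equilibrium.

```python
# Toy two-phase pressure relaxation (illustrative only): volume fractions
# a1 + a2 = 1, each phase an isothermal gas of fixed mass so p_i = K_i / a_i.
# The higher-pressure phase expands at a rate ~ (p1 - p2) / tau.
def relax(a1=0.2, K1=1.0, K2=1.0, tau=0.1, dt=1e-3, steps=5000):
    hist = []
    for _ in range(steps):
        a2 = 1.0 - a1
        p1, p2 = K1 / a1, K2 / a2
        a1 += dt * a1 * a2 * (p1 - p2) / tau  # expand the higher-pressure phase
        hist.append(a1)
    return hist

h = relax()
# equal K's -> pressure equilibrium at a1 = 0.5, approached monotonically
print(round(h[-1], 3), all(h[i] <= h[i + 1] for i in range(len(h) - 1)))
```

The monotonic (non-oscillatory) approach to equilibrium seen here is exactly the qualitative behaviour the abstract attributes to the ΔE_i closure.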
Modeling the summertime evolution of sea-ice melt ponds
DEFF Research Database (Denmark)
Lüthje, Mikael; Feltham, D.L.; Taylor, P.D.
2006-01-01
We present a mathematical model describing the summer melting of sea ice. We simulate the evolution of melt ponds and determine area coverage and total surface ablation. The model predictions are tested for sensitivity to the melt rate of unponded ice, enhanced melt rate beneath the melt ponds, vertical seepage, and horizontal permeability. The model is initialized with surface topographies derived from laser altimetry corresponding to first-year sea ice and multiyear sea ice. We predict that there are large differences in the depth of melt ponds and the area of coverage between the two types of ice. We also find that the vertical seepage rate and the melt rate of unponded ice are important in determining the total surface ablation and area covered by melt ponds.
The Biological Big Bang model for the major transitions in evolution.
Koonin, Eugene V
2007-08-20
Major transitions in biological evolution show the same pattern of sudden emergence of diverse forms at a new level of complexity. The relationships between major groups within an emergent new class of biological entities are hard to decipher and do not seem to fit the tree pattern that, following Darwin's original proposal, remains the dominant description of biological evolution. The cases in point include the origin of complex RNA molecules and protein folds; major groups of viruses; archaea and bacteria, and the principal lineages within each of these prokaryotic domains; eukaryotic supergroups; and animal phyla. In each of these pivotal nexuses in life's history, the principal "types" seem to appear rapidly and fully equipped with the signature features of the respective new level of biological organization. No intermediate "grades" or intermediate forms between different types are detectable. Usually, this pattern is attributed to cladogenesis compressed in time, combined with the inevitable erosion of the phylogenetic signal. I propose that most or all major evolutionary transitions that show the "explosive" pattern of emergence of new types of biological entities correspond to a boundary between two qualitatively distinct evolutionary phases. The first, inflationary phase is characterized by extremely rapid evolution driven by various processes of genetic information exchange, such as horizontal gene transfer, recombination, fusion, fission, and spread of mobile elements. These processes give rise to a vast diversity of forms from which the main classes of entities at the new level of complexity emerge independently, through a sampling process. In the second phase, evolution dramatically slows down, the respective process of genetic information exchange tapers off, and multiple lineages of the new type of entities emerge, each of them evolving in a tree-like fashion from that point on. This biphasic model of evolution incorporates the previously developed
Ulmschneider, Peter
When we are looking for intelligent life outside the Earth, there is a fundamental question: Assuming that life has formed on an extraterrestrial planet, will it also develop toward intelligence? As this is hotly debated, we will now describe the development of life on Earth in more detail in order to show that there are good reasons why evolution should culminate in intelligent beings.
Modeling precipitate evolution in zirconium alloys during irradiation
Energy Technology Data Exchange (ETDEWEB)
Robson, J.D., E-mail: joseph.robson@manchester.ac.uk
2016-08-01
The second phase precipitates (SPPs) in zirconium alloys are critical in controlling their performance. During service, SPPs are subject to both thermal and irradiation effects that influence volume fraction, number, and size. In this paper, a model has been developed to capture the combined effect of thermal and irradiation exposure on the Zr(Fe,Cr)₂ precipitates in Zircaloy. The model includes irradiation induced precipitate destabilization integrated into a classical size class model for nucleation, growth and coarsening. The model has been applied to predict the effect of temperature and irradiation on SPP evolution. Increasing irradiation displacement rate is predicted to strongly enhance the loss of particles that arises from coarsening alone. The effect of temperature is complex due to competition between coarsening and irradiation damage. As temperature increases, coarsening is predicted to become increasingly important compared to irradiation induced dissolution and may increase resistance to irradiation induced dissolution by increasing particle size. - Highlights: • Model developed to predict effect of thermal and irradiation exposure on precipitates in zirconium alloys. • Model applied to predict effect of changing irradiation dose rate and temperature on precipitates in Zircaloy-4. • Model reveals competition between thermal coarsening and irradiation-induced dissolution. • Model identifies important areas for further study to understand re-precipitation of precipitates after dissolution.
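The competition between thermal coarsening and irradiation-induced dissolution can be sketched with a one-variable caricature of the mean precipitate radius: an LSW-style coarsening term dr/dt = Kc/r² against a ballistic dissolution term proportional to the displacement rate. The rate forms and constants below are illustrative assumptions, not the paper's size-class model.

```python
# Toy mean-radius evolution (illustrative only, not the paper's model):
# LSW-style coarsening dr/dt = Kc / r^2 competes with a constant
# irradiation-driven dissolution rate k_irr (proportional to dose rate).
def mean_radius(r0=10.0, Kc=50.0, k_irr=0.05, dt=1.0, steps=2000):
    r = r0
    for _ in range(steps):
        r += dt * (Kc / r**2 - k_irr)
        if r <= 0.0:
            return 0.0
    return r

grown = mean_radius(k_irr=0.0)        # thermal coarsening only
irradiated = mean_radius(k_irr=0.05)  # coarsening vs irradiation dissolution
print(grown > irradiated > 0.0)       # prints True
```

Even this caricature shows the qualitative balance the abstract describes: irradiation caps particle growth near the radius where coarsening and dissolution rates match, while purely thermal coarsening lets particles keep growing.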
A Massless-Point-Charge Model for the Electron
Directory of Open Access Journals (Sweden)
Daywitt W. C.
2010-04-01
"It is rather remarkable that the modern concept of electrodynamics is not quite 100 years old and yet still does not rest firmly upon uniformly accepted theoretical foundations. Maxwell's theory of the electromagnetic field is firmly ensconced in modern physics, to be sure, but the details of how charged particles are to be coupled to this field remain somewhat uncertain, despite the enormous advances in quantum electrodynamics over the past 45 years. Our theories remain mathematically ill-posed and mired in conceptual ambiguities which quantum mechanics has only moved to another arena rather than resolve. Fundamentally, we still do not understand just what is a charged particle" (Grandy W.T. Jr. Relativistic quantum mechanics of leptons and fields. Kluwer Academic Publishers, Dordrecht-London, 1991, p. 367). As a partial answer to the preceding quote, this paper presents a new model for the electron that combines the seminal work of Puthoff with the theory of the Planck vacuum (PV), the basic idea for the model following from Puthoff, with the PV theory adding some important details.
Modeling Flow Pattern and Evolution of Meandering Channels with a Nonlinear Model
Directory of Open Access Journals (Sweden)
Leilei Gu
2016-09-01
Meander dynamics has been the focus of river engineering for decades; however, it remains a challenge for researchers to precisely replicate natural evolution processes of meandering channels with numerical models due to the high nonlinearity of the governing equations. The present study puts forward a nonlinear model to simulate the flow pattern and evolution of meandering channels. The proposed meander model adopts the nonlinear hydrodynamic submodel developed by Blanckaert and de Vriend, which accounts for the nonlinear interactions between secondary flow and main flow and therefore has no curvature restriction. With the computational flow field, the evolution process of the channel centerline is simulated using the Bank Erosion and Retreat Model (BERM) developed by Chen and Duan. Verification against two laboratory flume experiments indicates the proposed meander model yields satisfactory agreement with the measured data. For comparison, the same experimental cases are also simulated with the linear version of the hydrodynamic submodel. Calculated results show that the flow pattern and meander evolution process predicted by the nonlinear and the linear models are similar for mildly curved channels, whereas they exhibit different characteristics when channel sinuosity becomes relatively high. It is indicated that the nonlinear interactions between main flow and secondary flow prevent the growth of the secondary flow and induce a more uniform transverse velocity profile in high-sinuosity channels, which slows down the evolution process of meandering channels.
A case study on point process modelling in disease mapping
DEFF Research Database (Denmark)
Møller, Jesper; Waagepetersen, Rasmus Plenge; Benes, Viktor
2005-01-01
We consider a data set of locations where people in Central Bohemia have been infected by tick-borne encephalitis (TBE), and where population census data and covariates concerning vegetation and altitude are available. The aims are to estimate the risk map of the disease and to study the dependence ... the risk of TBE. Model validation is based on the posterior predictive distribution of various summary statistics.
Unemployment estimation: Spatial point referenced methods and models
Pereira, Soraia
2017-06-26
The Portuguese Labor Force Survey, from the 4th quarter of 2014 onwards, started geo-referencing the sampling units, namely the dwellings in which the surveys are carried out. This opens new possibilities in analysing and estimating unemployment and its spatial distribution across any region. The labor force survey chooses, according to a pre-established sampling criterion, a certain number of dwellings across the nation and surveys the number of unemployed in these dwellings. Based on this survey, the National Statistical Institute of Portugal presently uses direct estimation methods to estimate the national unemployment figures. Recently, there has been increased interest in estimating these figures in smaller areas. Direct estimation methods, due to reduced sampling sizes in small areas, tend to produce fairly large sampling variations; therefore model-based methods, which tend to
Łącki, Mateusz; Damski, Bogdan; Zakrzewski, Jakub
2016-12-02
We show that the critical point of the two-dimensional Bose-Hubbard model can be easily found through studies of either on-site atom number fluctuations or the nearest-neighbor two-point correlation function (the expectation value of the tunnelling operator). Our strategy to locate the critical point is based on the observation that the derivatives of these observables with respect to the parameter that drives the superfluid-Mott insulator transition are singular at the critical point in the thermodynamic limit. Performing the quantum Monte Carlo simulations of the two-dimensional Bose-Hubbard model, we show that this technique leads to the accurate determination of the position of its critical point. Our results can be easily extended to the three-dimensional Bose-Hubbard model and different Hubbard-like models. They provide a simple experimentally-relevant way of locating critical points in various cold atomic lattice systems.
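The location strategy the abstract describes can be sketched in a few lines: take a smooth crossover observable O(J) (a stand-in for the measured atom-number fluctuations or two-point correlations), finite-difference it, and read the critical coupling off the derivative peak. The tanh form and the value Jc = 0.35 below are illustrative, not Bose-Hubbard data.

```python
import math

def locate_peak_derivative(obs, js):
    """Return the coupling where the centred finite-difference derivative peaks."""
    dO = [(obs[i + 1] - obs[i - 1]) / (js[i + 1] - js[i - 1])
          for i in range(1, len(js) - 1)]
    return js[1 + max(range(len(dO)), key=lambda i: dO[i])]

Jc, width = 0.35, 0.05                            # illustrative stand-in values
js = [i * 0.005 for i in range(141)]              # coupling grid 0..0.7
obs = [math.tanh((j - Jc) / width) for j in js]   # smooth crossover observable
print(round(locate_peak_derivative(obs, js), 3))  # prints 0.35, recovering Jc
```

In the paper the singularity develops only in the thermodynamic limit, so in practice the derivative peak sharpens with system size and its position is extrapolated; the toy above shows only the peak-reading step.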
Modelling the morphodynamics and co-evolution of coast and estuarine environments
Morris, Chloe; Coulthard, Tom; Parsons, Daniel R.; Manson, Susan; Barkwith, Andrew
2017-04-01
The morphodynamics of coast and estuarine environments are known to be sensitive to environmental change and sea-level rise. However, whilst these systems have received considerable individual research attention, how they interact and co-evolve is relatively understudied. These systems are intrinsically linked and it is therefore advantageous to study them holistically in order to build a more comprehensive understanding of their behaviour and to inform sustainable management over the long term. Complex environments such as these are often studied using numerical modelling techniques. Owing to the limited research in this area, existing models are not currently capable of simulating dynamic coast-estuarine interactions. A new model is being developed through coupling the one-line Coastline Evolution Model (CEM) with CAESAR-Lisflood (C-L), a hydrodynamic Landscape Evolution Model. It is intended that the eventual model be used to advance the understanding of these systems and how they may evolve over the mid to long term in response to climate change. In the UK, the Holderness Coast, Humber Estuary and Spurn Point system offers a diverse and complex case study for this research. Holderness is one of the fastest eroding coastlines in Europe and research suggests that the large volumes of material removed from its cliffs are responsible for the formation of the Spurn Point feature and for the Holocene infilling of the Humber Estuary. Marine, fluvial and coastal processes are continually reshaping this system and over the next century, it is predicted that climate change could lead to increased erosion along the coast and supply of material to the Humber Estuary and Spurn Point. How this manifests will be hugely influential to the future morphology of these systems and the existence of Spurn Point. Progress to date includes a new version of the CEM that has been prepared for integration into C-L and includes an improved graphical user interface and more complex
Modeling the Evolution of Beliefs Using an Attentional Focus Mechanism
Marković, Dimitrije; Gläscher, Jan; Bossaerts, Peter; O’Doherty, John; Kiebel, Stefan J.
2015-01-01
For making decisions in everyday life we often have first to infer the set of environmental features that are relevant for the current task. Here we investigated the computational mechanisms underlying the evolution of beliefs about the relevance of environmental features in a dynamical and noisy environment. For this purpose we designed a probabilistic Wisconsin card sorting task (WCST) with belief solicitation, in which subjects were presented with stimuli composed of multiple visual features. At each moment in time a particular feature was relevant for obtaining reward, and participants had to infer which feature was relevant and report their beliefs accordingly. To test the hypothesis that attentional focus modulates the belief update process, we derived and fitted several probabilistic and non-probabilistic behavioral models, which either incorporate a dynamical model of attentional focus, in the form of a hierarchical winner-take-all neuronal network, or a diffusive model, without attention-like features. We used Bayesian model selection to identify the most likely generative model of subjects’ behavior and found that attention-like features in the behavioral model are essential for explaining subjects’ responses. Furthermore, we demonstrate a method for integrating both connectionist and Bayesian models of decision making within a single framework that allowed us to infer hidden belief processes of human subjects. PMID:26495984
On religion and language evolutions seen through mathematical and agent based models
Ausloos, M
2011-01-01
(shortened version) Religions and languages are social variables, like age, sex, wealth or political opinions, to be studied like any other organizational parameter. In fact, religiosity is one of the most important sociological aspects of populations. Languages are also a characteristic of humankind. New religions and new languages appear, though others disappear. All religions and languages evolve as they adapt to societal developments. On the other hand, the number of adherents of a given religion, or the number of persons speaking a language, is not fixed. Several questions can be raised. E.g., from a macroscopic point of view: How many religions/languages exist at a given time? What is their distribution? What is their lifetime? How do they evolve? From a microscopic viewpoint: can one invent agent-based models to describe macroscopic aspects? Do simple evolution equations exist? It is intuitively accepted, but also found through statistical analysis of the frequency distribution, that an ...
Modeling and Analysis of Adjacent Grid Point Wind Speed Profiles within and Above a Forest Canopy
National Research Council Canada - National Science Library
Tunick, Arnold
1999-01-01
Adjacent grid point profile data from the canopy coupled to the surface layer (C-CSL) model are examined to illustrate the model's capability to represent effects of the surface boundary on wind flow...
Modelling of Damage Evolution in Braided Composites: Recent Developments
Wang, Chen; Roy, Anish; Silberschmidt, Vadim V.; Chen, Zhong
2017-12-01
Composites reinforced with woven or braided textiles exhibit high structural stability and excellent damage tolerance thanks to yarn interlacing. With their high stiffness-to-weight and strength-to-weight ratios, braided composites are attractive for aerospace and automotive components as well as sports protective equipment. In these potential applications, components are typically subjected to multi-directional static, impact and fatigue loadings. To enhance material analysis and design for such applications, understanding the mechanical behaviour of braided composites and developing predictive capabilities becomes crucial. Significant progress has been made in recent years in the development of new modelling techniques allowing elucidation of static and dynamic responses of braided composites. However, because of their unique interlacing geometric structure and complicated failure modes, prediction of damage initiation and its evolution in components is still a challenge. Therefore, a comprehensive literature analysis is presented in this work, focused on a review of the state-of-the-art progressive damage analysis of braided composites with finite-element simulations. Models recently employed in studies of the mechanical behaviour, impact response and fatigue of braided composites are presented systematically. This review highlights the importance, advantages and limitations of as-applied failure criteria and damage evolution laws for yarns and composite unit cells. In addition, this work provides a good reference for future research on FE simulations of braided composites.
Effect of Vapor Pressure Scheme on Multiday Evolution of SOA in an Explicit Model
Lee-Taylor, J.; Madronich, S.; Aumont, B.; Camredon, M.; Emmons, L. K.; Tyndall, G. S.; Valorso, R.
2011-12-01
Recent modeling of the evolution of Secondary Organic Aerosol (SOA) has led to the critically important prediction that SOA mass continues to increase for several days after emission of primary pollutants. This growth of organic aerosol in dispersing plumes originating from urban point sources has direct implications for regional aerosol radiative forcing. We investigate the robustness of predicted SOA mass growth downwind of Mexico City in the model GECKO-A (Generator of Explicit Chemistry and Kinetics of Organics in the Atmosphere), by assessing its sensitivity to the choice of vapor pressure prediction scheme. We also explore the implications for multi-day SOA mass growth of glassification / solidification of SOA constituents during aging. Finally we use output from the MOZART-4 chemical transport model to evaluate our results in the regional and global context.
Stochastic group selection model for the evolution of altruism
Silva, Ana T. C.; Fontanari, J. F.
1999-01-01
We study numerically and analytically a stochastic group selection model in which a population of asexually reproducing individuals, each of which can be either altruist or non-altruist, is subdivided into M reproductively isolated groups (demes) of size N. The cost associated with being altruistic is modelled by assigning the fitness 1−τ, with τ∈[0,1], to the altruists and the fitness 1 to the non-altruists. In the case that the altruistic disadvantage τ is not too large, we show that the finite-M fluctuations are small and practically do not alter the deterministic results obtained for M→∞. However, for large τ these fluctuations greatly increase the instability of the altruistic demes to mutations. These results may be relevant to the dynamics of parasite-host systems and, in particular, to explain the importance of mutation in the evolution of parasite virulence.
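The within-deme half of this model is easy to make concrete: with altruist fitness 1 − τ, standard haploid selection inside a single deme drives the altruist frequency to zero, which is precisely why the deme structure and its fluctuations are needed to maintain altruism. The starting frequency and cost below are illustrative choices, not values from the paper.

```python
# Pure within-deme selection with altruist fitness 1 - tau (deterministic,
# infinite-deme limit; parameter values are illustrative).
def within_deme(p0=0.5, tau=0.2, gens=100):
    p = p0
    traj = [p]
    for _ in range(gens):
        p = p * (1 - tau) / (p * (1 - tau) + (1 - p))
        traj.append(p)
    return traj

traj = within_deme()
print(round(traj[-1], 4))  # prints 0.0: altruists are eliminated absent group structure
```

Each generation multiplies the altruist odds by 1 − τ, so the decline is geometric; group-level selection between demes has to outrun this decay for altruism to persist.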
Heavy ion collision evolution modeling with ECHO-QGP
Rolando, V.; Inghirami, G.; Beraudo, A.; Del Zanna, L.; Becattini, F.; Chandra, V.; De Pace, A.; Nardi, M.
2014-11-01
We present a numerical code modeling the evolution of the medium formed in relativistic heavy ion collisions, ECHO-QGP. The code solves relativistic hydrodynamics in (3 + 1)D, with dissipative terms included within the framework of Israel-Stewart theory; it can work both in Minkowskian and in Bjorken coordinates. Initial conditions are provided through an implementation of the Glauber model (both Optical and Monte Carlo), while freezeout and particle generation are based on the Cooper-Frye prescription. The code is validated against several test problems and shows remarkable stability and accuracy with the combination of a conservative (shock-capturing) approach and the high-order methods employed. In particular it beautifully agrees with the semi-analytic solution known as Gubser flow, both in the ideal and in the viscous Israel-Stewart case, up to very large times and without any ad hoc tuning of the algorithm.
A Stochastic Model for Evolution of Altruistic Genes
Donato, Roberta
1996-04-01
We study numerically a stochastic model for the evolution and maintenance of a population which reproduces asexually under selective pressure and is divided into smaller groups of variable size. An altruistic trait is defined as one lowering the fitness of its carrier, while the survival probability of all the members of a group with a large enough number of altruists is enhanced. Numerical results show that there is a transition in the average proportion of altruists versus the relative advantage conferred by the presence of altruists to all the members of the group. At the transition the distribution of altruist frequency in the groups is non-trivial, and the average deme size reaches a minimum. We also found an error threshold in the mutation rate analogous to that of M. Eigen's quasi-species model.
Computational Modeling of Thermochemical Evolution of Aluminum Smelter Crust
Zhang, Qinsong; Taylor, Mark P.; Chen, John J. J.
2015-02-01
In an aluminum reduction cell, crushed anode cover at room temperature is added onto the exposed bulk electrolyte surface around newly positioned anodes and is heated by high heat flux from this liquid electrolyte. Liquid electrolyte penetrates inside the porous anode cover. Solid cryolite and alumina crystallize from the liquid electrolyte due to the temperature gradient in the anode cover. A solidified crust forms at the bottom part of the anode cover during the heating up period. A thermochemical model which takes into account both the liquid electrolyte penetration and phase transformations has been developed to simulate the temperature evolution, chemical composition development, and liquid front penetration and content in the anode cover. The model is tested against experimental data obtained from industrial cells and laboratory experiments in this paper.
Using many pilot points and singular value decomposition in groundwater model calibration
DEFF Research Database (Denmark)
Christensen, Steen; Doherty, John
2008-01-01
A significant practical problem with the pilot point method is to choose the location of the pilot points. We present a method that is intended to relieve the modeller from much of this responsibility. The basic idea is that a very large number of pilot points are distributed more or less uniformly over the model area. Singular value decomposition (SVD) of the normal matrix is used to reduce the large number of pilot point parameters to a smaller number of so-called super parameters that can be estimated by nonlinear regression from the available observations. A number of eigenvectors corresponding to significant eigenvalues (resulting from the decomposition) is used to transform the model from having many pilot point parameters to having a few super parameters. A synthetic case model is used to analyze and demonstrate the application of the presented method of model parameterization.
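The SVD reduction from many pilot-point parameters to a few super parameters can be sketched as follows. A random matrix stands in for the model's sensitivity (Jacobian) matrix, and the truncation threshold is an arbitrary choice for illustration; in practice the Jacobian comes from the groundwater model and the super parameters are estimated by nonlinear regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_pilot = 30, 200                    # many pilot points, few observations
J = rng.standard_normal((n_obs, n_pilot))   # hypothetical sensitivity (Jacobian) matrix

# Normal matrix of the (unweighted) least-squares problem
XtX = J.T @ J

# SVD of the normal matrix; eigenvectors with significant singular
# values span the parameter subspace the data can actually constrain.
U, s, Vt = np.linalg.svd(XtX)
k = int(np.sum(s > 1e-8 * s[0]))   # numerical rank ~ number of super parameters
print("number of super parameters:", k)

# Project the pilot-point parameters onto the k super parameters
p_pilot = rng.standard_normal(n_pilot)
p_super = Vt[:k] @ p_pilot         # low-dimensional parameters to estimate
p_back = Vt[:k].T @ p_super        # back-transform to pilot-point space
```

With far fewer observations than pilot points, the numerical rank equals the number of observations, so the regression only ever has to estimate that many super parameters.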
Evolution of quantum-like modeling in decision making processes
Khrennikova, Polina
2012-12-01
The application of the mathematical formalism of quantum mechanics to model behavioral patterns in social science and economics is a novel and constantly emerging field. The aim of the so-called 'quantum-like' models is to model decision-making processes in a macroscopic setting, capturing the particular 'context' in which decisions are taken. Several empirical findings have shown that when making a decision people tend to violate the axioms of expected utility theory and Savage's Sure Thing principle, thus violating the law of total probability. A quantum probability formula was devised to describe decision-making processes more accurately. The next step in the development of quantum-like modeling of decision making was the application of the Schrödinger equation to describe the evolution of people's mental states. A shortcoming of the Schrödinger equation is its inability to capture the dynamics of an open system, and the brain of the decision maker can be regarded as such, actively interacting with the external environment. Recently the master equation, by which quantum physics describes the process of decoherence resulting from the interaction of the mental state with the environmental 'bath', was introduced for modeling human decision making. The external environment and memory can be regarded as a complex 'context' influencing the final decision outcomes. The master equation can be considered a pioneering and promising apparatus for modeling the dynamics of decision making in different contexts.
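The master-equation step can be illustrated with a minimal two-state sketch of Lindblad-type decoherence: a superposed "mental state" interacting with an environment loses its off-diagonal coherences. The Hamiltonian, dephasing operator, rate gamma and time step below are all invented for illustration and are not taken from the quantum-like decision literature discussed above.

```python
import numpy as np

# Two-state "mental state" (decide A / decide B), started in an equal superposition
rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)
H = np.array([[1.0, 0.2], [0.2, -1.0]])    # hypothetical Hamiltonian
L = np.array([[1.0, 0.0], [0.0, -1.0]])    # dephasing (environment coupling) operator
gamma, dt = 0.5, 0.001

def lindblad_rhs(rho):
    """Right-hand side of the Lindblad master equation for one jump operator."""
    comm = -1j * (H @ rho - rho @ H)
    diss = gamma * (L @ rho @ L.conj().T
                    - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
    return comm + diss

# Forward-Euler time evolution; coherences |rho_01| decay (decoherence)
for _ in range(10000):
    rho = rho + dt * lindblad_rhs(rho)
print(np.real(np.diag(rho)), abs(rho[0, 1]))
```

The populations (diagonal entries) remain a probability distribution while the coherence decays, which is the qualitative behavior the master-equation approach adds over the closed-system Schrödinger picture.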
A probabilistic model for the evolution of RNA structure
Directory of Open Access Journals (Sweden)
Holmes Ian
2004-10-01
Abstract Background For the purposes of finding and aligning noncoding RNA gene and cis-regulatory elements in multiple-genome datasets, it is useful to be able to derive multi-sequence stochastic grammars (and hence multiple alignment algorithms) systematically, starting from hypotheses about the various kinds of random mutation events and their rates. Results Here, we consider a highly simplified evolutionary model for RNA, called "The TKF91 Structure Tree" (following Thorne, Kishino and Felsenstein's 1991 model of sequence evolution with indels), which we have implemented for pairwise alignment as proof of principle for such an approach. The model, its strengths and its weaknesses are discussed with reference to four examples of functional ncRNA sequences: a riboswitch (guanine), a zipcode (nanos), a splicing factor (U4) and a ribozyme (RNase P). As shown by our visualisations of posterior probability matrices, the selected examples illustrate three different signatures of natural selection that are highly characteristic of ncRNA: (i) co-ordinated basepair substitutions, (ii) co-ordinated basepair indels and (iii) whole-stem indels. Conclusions Although all three types of mutation "event" are built into our model, events of types (i) and (ii) are found to be better modeled than events of type (iii). Nevertheless, we hypothesise from the model's performance on pairwise alignments that it would form an adequate basis for a prototype multiple alignment and gene-finding tool.
Rinderer, M.; McGlynn, B. L.; van Meerveld, I. H. J.
2015-12-01
Detailed groundwater measurements across a catchment can provide information on subsurface stormflow generation and hydrologic connectivity of hillslopes to the stream network. However, groundwater dynamics can be highly variable in space and time, especially in steep headwater catchments. Prediction of groundwater response patterns at non-monitored sites requires transferring point scale information to the catchment scale through analysis of continuous groundwater level time series and their relationships to covariates such as topographic indices or landscape position. We applied time series analysis to a 4 year dataset of continuous groundwater level data for 51 wells distributed across a 20 ha pre-alpine headwater catchment in Switzerland to address the following questions: 1) Is the similarity or difference between the groundwater time series related to landscape position? 2) How does the relationship between groundwater dynamics and landscape position change across long (seasonal) and shorter (event) time scales and varying antecedent wetness conditions? 3) How can time series modeling be used to predict groundwater responses at non-monitored sites? We employed hierarchical clustering of the observed groundwater time series using both dynamic time warping and correlation based distance matrices. Based on the common site characteristics of the members of each cluster, the time series models were transferred to all non-monitored sites. This categorical approach provided maps of spatio-temporal groundwater dynamics across the entire catchment. We further developed a continuous approach based on process-based hydrological modeling and water table dynamic similarity. We suggest that continuous measurements at representative points and subsequent time series analysis can shed light into groundwater dynamics at the landscape scale and provide new insights into space-time patterns of hydrologic connectivity and streamflow generation.
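The correlation-based clustering of groundwater level time series described above can be sketched as follows. The two synthetic response types, well counts, signal shapes and noise level are invented for illustration; the study additionally used dynamic time warping distances, which this sketch omits.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
t = np.arange(200)
# Two hypothetical groundwater response types: fast-responding and slow-responding wells
fast = np.sin(0.3 * t)
slow = np.sin(0.05 * t)
series = np.vstack([fast + 0.1 * rng.standard_normal(200) for _ in range(5)] +
                   [slow + 0.1 * rng.standard_normal(200) for _ in range(5)])

# Correlation-based distance matrix: d_ij = 1 - corr(x_i, x_j)
corr = np.corrcoef(series)
dist = 1.0 - corr

# Condensed (upper-triangle) form for scipy's hierarchical clustering
iu = np.triu_indices_from(dist, k=1)
labels = fcluster(linkage(dist[iu], method="average"), t=2, criterion="maxclust")
print(labels)
```

Wells whose hydrographs co-vary end up in the same cluster; mapping cluster membership against landscape position is then what allows transferring the behaviour to non-monitored sites.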
Troive, L.
2017-09-01
Friction-free 3-point bending has become a common test method since the VDA 238-100 plate-bending test [1] was introduced. According to this test, the criterion for failure is a sudden drop in the force. The author found that the evolution of the cross-section moment is a preferable measure of the real material response compared to the force. Beneficially, the cross-section moment reaches a more or less constant, maximum steady-state level when the cross-section becomes fully plastified. An expression for the moment M is presented that satisfies conservation of energy in bending. An expression for the dimensionless moment M/Me, i.e. the ratio of the current moment to the elastic moment, is also demonstrated, proposed specifically for the detection of failure. The mathematical expressions are simple, making it easy to transform the measured force F and stroke position S into the corresponding cross-section moment M. It is therefore even possible to implement them in conventional measurement-system software and study the cross-section moment in real time during a test, and to calculate other parameters, such as the flow stress and the shape of the curvature, at every stage. The method has been tested on different thicknesses and grades in the range from 1.0 to 10 mm with very good results. In this paper the present model is applied to a 6.1 mm hot-rolled high-strength steel from the same batch in three different conditions: directly quenched; quenched and tempered; and quenched and tempered with levelling. It is shown that very small differences in material response can be detected by this method.
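The moment-based quantities can be sketched with textbook beam formulas. Note the assumptions: M = F·L/4 is the ideal simply-supported-beam result and Me = σy·b·t²/6 the elastic-limit moment of a rectangular section; the paper's actual expression additionally corrects for the stroke-dependent geometry of the bend, which this sketch omits, and all numbers are invented.

```python
def cross_section_moment(F, L_span):
    """Mid-span bending moment of an ideal, friction-free 3-point bend.

    M = F * L / 4 is the textbook simply-supported-beam result; the
    paper's expression additionally accounts for stroke-dependent
    geometry, which this sketch omits.
    """
    return F * L_span / 4.0

def elastic_moment(sigma_y, width, thickness):
    """Elastic-limit moment Me = sigma_y * b * t^2 / 6 of a rectangular section."""
    return sigma_y * width * thickness ** 2 / 6.0

# Hypothetical numbers: 6.1 mm sheet, 100 mm roller span, 50 mm width, 700 MPa yield
F, L, b, t, sy = 8000.0, 0.1, 0.05, 0.0061, 700e6
M = cross_section_moment(F, L)
ratio = M / elastic_moment(sy, b, t)
print(f"M = {M:.1f} N*m, M/Me = {ratio:.2f}")
```

A ratio M/Me approaching the fully plastic value is what signals the steady-state plateau; a drop in M (rather than in F) is then the failure indicator the paper proposes.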
Denardini, Clezio Marcos
2016-07-01
We have developed a tool for measuring the evolutionary stage of space weather regional warning centers, based on the innovation-evolution perspective presented by Figueiredo (2009, Innovation Management: Concepts, metrics and experiences of companies in Brazil. Publisher LTC, Rio de Janeiro - RJ). It measures the stock of technological skills needed to perform the tasks that are (or should be) part of the scope of a space weather center. It also addresses the technological capacity for innovation by considering the accumulation of technological and learning capabilities, instead of the usual international indices such as the number of registered patents. Based on this definition, we developed a model for measuring the capabilities of the Brazilian Study and Monitoring of Space Weather (Embrace) program of the National Institute for Space Research (INPE), which has gone through three national stages of development and an international validation step. This program was created in 2007, encompassing competences from five divisions of INPE, in order to carry out data collection and maintenance of the space weather observing system; to model processes of the Sun-Earth system; to provide real-time information and space weather forecasts; and to diagnose their effects on different technological systems. In the present work we considered issues related to the innovation of micro-processes inherent to the nature of the Embrace program, not macro-economic processes, despite recognizing the importance of the latter. During the development phase, the model was submitted to five scientists/managers from five different countries, members of the International Space Environment Service (ISES), who presented their evaluations, concerns and suggestions. It was applied to the Embrace program through an interview form developed to be answered by professional members of regional warning centers. Based on the returning
Network evolution model for supply chain with manufactures as the core
Jiang, Dali; Fang, Ling; Yang, Jian; Li, Wu; Zhao, Jing
2018-01-01
Building an evolution model of supply chain networks can help in understanding their laws of development. However, specific characteristics and attributes of real supply chains are often neglected in existing evolution models. This work proposes a new evolution model of a supply chain with manufactures as the core, based on external market demand and internal competition-cooperation. The evolution model assumes the external market environment is relatively stable and considers several factors, including the specific topology of the supply chain, external market demand, ecological growth and flow conservation. The simulation results suggest that the networks evolved by our model have structures similar to real supply chains. Meanwhile, the influences of external market demand and internal competition-cooperation on network evolution are analyzed. Additionally, 38 benchmark data sets are applied to validate the rationality of our evolution model, of which nine manufacturing supply chains match the features of the networks constructed by our model. PMID:29370201
Linking Experimental Characterization and Computational Modeling in Microstructural Evolution
Energy Technology Data Exchange (ETDEWEB)
Demirel, Melik Cumhar [Univ. of Pittsburgh, PA (United States)
2002-06-01
It is known that by controlling microstructural development, desirable properties of materials can be achieved. The main objective of our research is to understand and control interface dominated material properties, and finally, to verify experimental results with computer simulations. In order to accomplish this objective, we studied the grain growth in detail with experimental techniques and computational simulations. We obtained 5170-grain data from an aluminum film (120 μm thick) with a columnar grain structure from the Electron Backscattered Diffraction (EBSD) measurements. Experimentally obtained starting microstructure and grain boundary properties are input for the three-dimensional grain growth simulation. In the computational model, minimization of the interface energy is the driving force for the grain boundary motion. The computed evolved microstructure is compared with the final experimental microstructure, after annealing at 550 °C. Two different measures were introduced as methods of comparing experimental and computed microstructures. Modeling with anisotropic mobility explains a significant amount of mismatch between experiment and isotropic modeling. We have shown that isotropic modeling has very little predictive value. Microstructural evolution in columnar aluminum foils can be correctly modeled with anisotropic parameters. We observed a strong similarity between grain growth experiments and anisotropic three-dimensional simulations.
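One standard computational approach to such interface-energy-driven grain growth is a Potts Monte Carlo model, in which each lattice site carries a grain label and flips are accepted only if they do not increase the boundary energy. The sketch below is generic and isotropic, not the authors' anisotropic simulation; grid size, number of grain labels q and step count are arbitrary.

```python
import random

def boundary_count(spins, i, j, n):
    """Number of unlike nearest neighbours of site (i, j) on an n x n periodic grid."""
    s = spins[i][j]
    return sum(s != spins[(i + di) % n][(j + dj) % n]
               for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

def potts_step(spins, n, q, rng):
    """One zero-temperature Metropolis attempt: accept the new label only
    if the local boundary energy does not increase."""
    i, j = rng.randrange(n), rng.randrange(n)
    old = spins[i][j]
    e_old = boundary_count(spins, i, j, n)
    spins[i][j] = rng.randrange(q)
    if boundary_count(spins, i, j, n) > e_old:
        spins[i][j] = old   # reject the flip

def total_boundary_energy(spins, n):
    return sum(boundary_count(spins, i, j, n) for i in range(n) for j in range(n))

rng = random.Random(0)
n, q = 32, 20
spins = [[rng.randrange(q) for _ in range(n)] for _ in range(n)]
e0 = total_boundary_energy(spins, n)
for _ in range(20000):
    potts_step(spins, n, q, rng)
print(e0, "->", total_boundary_energy(spins, n))   # energy falls as grains coarsen
```

Because accepted flips never raise the energy, the total boundary energy decreases monotonically, which is the discrete analogue of curvature-driven grain coarsening.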
Rapid evolution of mimicry following local model extinction.
Akcali, Christopher K; Pfennig, David W
2014-06-01
Batesian mimicry evolves when individuals of a palatable species gain the selective advantage of reduced predation because they resemble a toxic species that predators avoid. Here, we evaluated whether, and in which direction, Batesian mimicry has evolved in a natural population of mimics following extirpation of their model. We specifically asked whether the precision of coral snake mimicry has evolved among kingsnakes from a region where coral snakes recently (1960) went locally extinct. We found that these kingsnakes have evolved more precise mimicry; by contrast, no such change occurred in a sympatric non-mimetic species or in conspecifics from a region where coral snakes remain abundant. Presumably, more precise mimicry has continued to evolve after model extirpation, because relatively few predator generations have passed, and the fitness costs incurred by predators that mistook a deadly coral snake for a kingsnake were historically much greater than those incurred by predators that mistook a kingsnake for a coral snake. Indeed, these results are consistent with prior theoretical and empirical studies, which revealed that only the most precise mimics are favoured as their model becomes increasingly rare. Thus, highly noxious models can generate an 'evolutionary momentum' that drives the further evolution of more precise mimicry, even after models go extinct. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Leader's opinion priority bounded confidence model for network opinion evolution
Zhu, Meixia; Xie, Guangqiang
2017-08-01
In Hegselmann-Krause (HK) type consensus models, every interaction partner is given the same weight of trust, whereas in virtual social networks individuals differ in level of education, personal influence, and so on. To account for these differences between agents, a novel bounded confidence model is proposed in which leaders' opinions are given priority. Interaction neighbours are divided into two kinds, an "opinion leaders" group and ordinary people, and the two groups are given different weights of trust. We also analyzed the characteristics of the new model under symmetrical bounded confidence parameters and compared it with the classical HK model. Simulation results show that, regardless of network size and whether the initial opinions follow a uniform or a discrete distribution, a suitable choice of "opinion leaders" can change the number and values of the final opinions and even improve the convergence speed. The experiments also found that choosing more "opinion leaders" is not always better; the model explains well how "opinion leaders" play a leading role in the evolution of public opinion.
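A leader-weighted HK update can be sketched as follows. The specific weighting rule (leaders counted with a fixed extra weight in the neighbourhood average) is our reading of the abstract, not its exact scheme, and all parameter values are illustrative.

```python
import random

def hk_step(opinions, eps, leader_idx, leader_weight):
    """One synchronous Hegselmann-Krause update with extra trust in leaders.

    Each agent averages over neighbours whose opinions lie within the
    confidence bound eps; opinion leaders are counted with weight
    leader_weight instead of 1 (an assumed weighting scheme).
    """
    new = []
    for x in opinions:
        num = den = 0.0
        for j, y in enumerate(opinions):
            if abs(x - y) <= eps:
                w = leader_weight if j in leader_idx else 1.0
                num += w * y
                den += w
        new.append(num / den)
    return new

rng = random.Random(3)
opinions = [rng.random() for _ in range(50)]
leaders = {0, 1}
for i in leaders:
    opinions[i] = 0.8        # leaders start with a shared target opinion
for _ in range(30):
    opinions = hk_step(opinions, eps=0.25, leader_idx=leaders, leader_weight=10.0)
spread = max(opinions) - min(opinions)
print(f"final opinion spread: {spread:.4f}")
```

Because each update is a convex combination of in-range opinions, all opinions stay within the initial interval; raising leader_weight pulls the cluster structure toward the leaders' opinion.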
A Monte Carlo model for 3D grain evolution during welding
Rodgers, Theron M.; Mitchell, John A.; Tikare, Veena
2017-09-01
Welding is one of the most wide-spread processes used in metal joining. However, there are currently no open-source software implementations for the simulation of microstructural evolution during a weld pass. Here we describe a Potts Monte Carlo based model implemented in the SPPARKS kinetic Monte Carlo computational framework. The model simulates melting, solidification and solid-state microstructural evolution of material in the fusion and heat-affected zones of a weld. The model does not simulate thermal behavior, but rather utilizes user input parameters to specify weld pool and heat-affected-zone properties. Weld pool shapes are specified by Bézier curves, which allow for the specification of a wide range of pool shapes. Pool shapes can range from narrow and deep to wide and shallow, representing different fluid flow conditions within the pool. Surrounding temperature gradients are calculated with the aid of a closest point projection algorithm. The model also allows simulation of pulsed power welding through time-dependent variation of the weld pool size. Example simulation results and comparisons with laboratory weld observations demonstrate microstructural variation with weld speed, pool shape, and pulsed power.
A method for automatic feature points extraction of human vertebrae three-dimensional model
Wu, Zhen; Wu, Junsheng
2017-05-01
A method for automatic extraction of feature points from a three-dimensional model of human vertebrae is presented. Firstly, a statistical model of vertebra feature points is established based on the results of manual feature-point extraction. Then an anatomical axial analysis of the vertebra model is performed according to the physiological and morphological characteristics of the vertebrae. Using the axial information obtained from this analysis, a projection relationship between the statistical model and the vertebra model to be processed is established. According to the projection relationship, the statistical model is matched with the vertebra model to obtain the estimated positions of the feature points. Finally, by analyzing the curvature in a spherical neighborhood around each estimated position, the final positions of the feature points are obtained. According to benchmark results on multiple test models, the mean relative errors of the feature-point positions are less than 5.98%. At more than half of the positions the error rate is less than 3%, and the minimum mean relative error is 0.19%, which verifies the effectiveness of the method.
DEFF Research Database (Denmark)
Møller, Jesper; Ghorbani, Mohammad; Rubak, Ege Holger
We show how a spatial point process, where to each point there is associated a random quantitative mark, can be identified with a spatio-temporal point process specified by a conditional intensity function. For instance, the points can be tree locations and the marks can express the size of trees. This identification allows likelihood-based inference and tests for independence between the points and the marks.
Point-coupling models from mesonic hyper massive limit and mean-field approaches
Energy Technology Data Exchange (ETDEWEB)
Lourenco, O.; Dutra, M., E-mail: odilon@ita.br [Departamento de Fisica, Instituto Tecnologico da Aeronautica - CTA, Sao Jose dos Campos, SP (Brazil); Delfino, Antonio, E-mail: delfino@if.uff.br [Instituto de Fisica, Universidade Federal Fluminense, Niteroi, RJ (Brazil); Amaral, R.L.P.G. [Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, MA (United States)
2012-08-15
In this work, we show how nonlinear point-coupling models, described by a Lagrangian density containing only terms up to fourth order in the fermion condensate (Ψ̄Ψ), are derived from a modified meson-exchange nonlinear Walecka model. We present two methods of derivation, namely the hyper-massive meson limit within a functional-integral approach and the mean-field approximation, in which the zero-temperature equations of state of the nonlinear point-coupling models are directly obtained. (author)
Nelson, Jonathan M.; Shimizu, Yasuyuki; Giri, Sanjay; McDonald, Richard R.
2010-01-01
Uncertainties in flood stage prediction and bed evolution in rivers are frequently associated with the evolution of bedforms over a hydrograph. For the case of flood prediction, the evolution of the bedforms may alter the effective bed roughness, so predictions of stage and velocity based on assuming bedforms retain the same size and shape over a hydrograph will be incorrect. These same effects will produce errors in the prediction of the sediment transport and bed evolution, but in this latter case the errors are typically larger, as even small errors in the prediction of bedform form drag can make very large errors in predicting the rates of sediment motion and the associated erosion and deposition. In situations where flows change slowly, it may be possible to use empirical results that relate bedform morphology to roughness and effective form drag to avoid these errors; but in many cases where the bedforms evolve rapidly and are in disequilibrium with the instantaneous flow, these empirical methods cannot be accurately applied. Over the past few years, computational models for bedform development, migration, and adjustment to varying flows have been developed and tested with a variety of laboratory and field data. These models, which are based on detailed multidimensional flow modeling incorporating large eddy simulation, appear to be capable of predicting bedform dimensions during steady flows as well as their time dependence during discharge variations. In the work presented here, models of this type are used to investigate the impacts of bedform on stage and bed evolution in rivers during flood hydrographs. The method is shown to reproduce hysteresis in rating curves as well as other more subtle effects in the shape of flood waves. Techniques for combining the bedform evolution models with larger-scale models for river reach flow, sediment transport, and bed evolution are described and used to show the importance of including dynamic bedform effects in river
Cabomba as a model for studies of early angiosperm evolution
Vialette-Guiraud, Aurelie C. M.; Alaux, Michael; Legeai, Fabrice; Finet, Cedric; Chambrier, Pierre; Brown, Spencer C.; Chauvet, Aurelie; Magdalena, Carlos; Rudall, Paula J.; Scutt, Charles P.
2011-01-01
Background The angiosperms, or flowering plants, diversified in the Cretaceous to dominate almost all terrestrial environments. Molecular phylogenetic studies indicate that the orders Amborellales, Nymphaeales and Austrobaileyales, collectively termed the ANA grade, diverged as separate lineages from a remaining angiosperm clade at a very early stage in flowering plant evolution. By comparing these early diverging lineages, it is possible to infer the possible morphology and ecology of the last common ancestor of the extant angiosperms, and this analysis can now be extended to try to deduce the developmental mechanisms that were present in early flowering plants. However, not all species in the ANA grade form convenient molecular-genetic models. Scope The present study reviews the genus Cabomba (Nymphaeales), which shows a range of features that make it potentially useful as a genetic model. We focus on characters that have probably been conserved since the last common ancestor of the extant flowering plants. To facilitate the use of Cabomba as a molecular model, we describe methods for its cultivation to flowering in the laboratory, a novel Cabomba flower expressed sequence tag database, a well-adapted in situ hybridization protocol and a measurement of the nuclear genome size of C. caroliniana. We discuss the features required for species to become tractable models, and discuss the relative merits of Cabomba and other ANA-grade angiosperms in molecular-genetic studies aimed at understanding the origin of the flowering plants. PMID:21486926
Box models for the evolution of atmospheric oxygen: an update
Kasting, J. F.
1991-01-01
A simple 3-box model of the atmosphere/ocean system is used to describe the various stages in the evolution of atmospheric oxygen. In Stage I, which probably lasted until redbeds began to form about 2.0 Ga ago, the Earth's surface environment was generally devoid of free O2, except possibly in localized regions of high productivity in the surface ocean. In Stage II, which may have lasted for less than 150 Ma, the atmosphere and surface ocean were oxidizing, while the deep ocean remained anoxic. In Stage III, which commenced with the disappearance of banded iron formations around 1.85 Ga ago and has lasted until the present, all three surface reservoirs contained appreciable amounts of free O2. Recent and not-so-recent controversies regarding the abundance of oxygen in the Archean atmosphere are identified and discussed. The rate of O2 increase during the Middle and Late Proterozoic is identified as another outstanding question.
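A 3-box exchange model of the kind described above can be sketched with a forward-Euler integration. The fluxes, rate constants and source term below are illustrative placeholders and are not taken from Kasting (1991); the sketch only shows the generic behaviour that the atmosphere and surface ocean oxygenate before the deep ocean.

```python
def step(atm, surf, deep, dt, source, k_as, k_sd, sink_deep):
    """Forward-Euler update of a toy 3-box O2 model (atmosphere, surface
    ocean, deep ocean). All constants are illustrative."""
    f_as = k_as * (atm - surf)      # air-sea exchange flux
    f_sd = k_sd * (surf - deep)     # surface-to-deep mixing flux
    atm += dt * (source - f_as)     # photosynthetic O2 source into atmosphere
    surf += dt * (f_as - f_sd)
    deep += dt * (f_sd - sink_deep * deep)   # deep sink (e.g. Fe oxidation)
    return atm, surf, deep

atm, surf, deep = 0.0, 0.0, 0.0
for _ in range(5000):
    atm, surf, deep = step(atm, surf, deep, dt=0.01,
                           source=1.0, k_as=0.5, k_sd=0.2, sink_deep=1.0)
print(atm, surf, deep)   # reservoirs ordered atmosphere > surface > deep
```

At steady state the source balances the deep sink, and the gradient atm > surf > deep mirrors the staged oxygenation (Stages I-III) of the abstract: the deep ocean is the last reservoir to accumulate free O2.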
Thermal evolution of the Schwinger model with matrix product operators
Energy Technology Data Exchange (ETDEWEB)
Banuls, M.C.; Cirac, J.I. [Max-Planck-Institut fuer Quantenoptik, Garching (Germany); Cichy, K. [Frankfurt am Main Univ. (Germany). Inst. fuer Theoretische Physik; Poznan Univ. (Poland). Faculty of Physics; DESY Zeuthen (Germany). John von Neumann-Institut fuer Computing (NIC); Jansen, K.; Saito, H. [DESY Zeuthen (Germany). John von Neumann-Institut fuer Computing (NIC)
2015-10-15
We demonstrate the suitability of tensor network techniques for describing the thermal evolution of lattice gauge theories. As a benchmark case, we have studied the temperature dependence of the chiral condensate in the Schwinger model, using matrix product operators to approximate the thermal equilibrium states for finite system sizes with non-zero lattice spacings. We show how these techniques allow for reliable extrapolations in bond dimension, step width, system size and lattice spacing, and for a systematic estimation and control of all error sources involved in the calculation. The reached values of the lattice spacing are small enough to capture the most challenging region of high temperatures and the final results are consistent with the analytical prediction by Sachs and Wipf over a broad temperature range.
Environmental Effects On Galaxy Evolution In Semi-analytic Models
Lee, Jaehyun; Jung, I.; Yi, S.
2012-01-01
We have investigated the evolution of galaxy morphology and its mixture in various halo environments by taking advantage of N-body simulations and a semi-analytic approach. Dark matter halos have different growth histories depending on the large-scale density environment (voids vs. clusters). Since the dynamical properties of dark matter halos determine their merger timescales and the properties of the galaxies residing in them, different halo assembly histories produce different galaxy merger histories. Thus, galaxies in voids and clusters may show different evolutionary histories and morphology mixtures, because galaxy mergers play a pivotal role in morphology transformation. To examine this, dark matter halo merger trees in regions of various density are extracted from N-body simulations, and the evolutionary histories of galaxies are computed with our semi-analytic model code based on the N-body backbones. We present the differences in the evolutionary histories and morphology mixtures of galaxies residing in voids and dense regions.
Tolson, Robert H.; Lugo, Rafael A.; Baird, Darren T.; Cianciolo, Alicia D.; Bougher, Stephen W.; Zurek, Richard M.
2017-01-01
The Mars Atmosphere and Volatile EvolutioN (MAVEN) spacecraft is a NASA orbiter designed to explore the Mars upper atmosphere, typically from 140 to 160 km altitude. In addition to the nominal science mission, MAVEN has performed several Deep Dip campaigns in which the orbit's closest point of approach, also called periapsis, was lowered to an altitude range of 115 to 135 km. MAVEN accelerometer data were used during mission operations to estimate atmospheric parameters such as density, scale height, along-track gradients, and wave structures. Density and scale height estimates were compared against those obtained from the Mars Global Reference Atmospheric Model and used to aid the MAVEN navigation team in planning maneuvers to raise and lower periapsis during Deep Dip operations. This paper describes the processes used to reconstruct atmosphere parameters from accelerometer data and presents the results of their comparison to model and navigation-derived values.
A steady-state target calculation method based on "point" model for integrating processes.
Pang, Qiang; Zou, Tao; Zhang, Yanyan; Cong, Qiumei
2015-05-01
Aiming to eliminate the influence of model uncertainty on steady-state target calculation for integrating processes, this paper presents an optimization method based on a "point" model, together with a method for determining whether a feasible steady-state target exists. The optimization method solves the steady-state optimization problem of integrating processes within a two-stage framework: it builds a simple "point" model for steady-state prediction and compensates for the error between the "point" model and the real process in each sampling interval. Simulation results illustrate that the outputs of the integrating variables can be kept within their constraints and that the errors between actual outputs and optimal set-points are small, indicating that the steady-state prediction model can accurately predict the future outputs of the integrating variables. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
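The two-stage idea, predicting with a deliberately simple "point" model and folding the observed model-process mismatch back into the next target calculation, can be sketched as follows. The gain values, input constraint, and one-step target rule are illustrative assumptions, not the paper's formulation:

```python
def simulate(steps=50):
    b_true, b_model = 1.2, 1.0   # real process gain vs. nominal "point" model gain
    x, setpoint = 0.0, 10.0      # integrating output and its optimal set-point
    bias = 0.0                   # compensated model-process error
    for _ in range(steps):
        # steady-state target: pick u so the bias-corrected prediction
        # reaches the set-point in one interval
        u = (setpoint - x - bias) / b_model
        u = max(-2.0, min(2.0, u))           # input constraint
        x_prev = x
        x = x + b_true * u                   # true integrating process step
        bias = x - (x_prev + b_model * u)    # realized mismatch, used next interval
    return x

print(round(simulate(), 2))
```

Despite the 20% gain mismatch, the per-interval error compensation drives the output to the set-point, which is the role the "point" model's compensation step plays in the two-stage structure.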
On the asymptotic ergodic capacity of FSO links with generalized pointing error model
Al-Quwaiee, Hessa
2015-09-11
Free-space optical (FSO) communication systems are negatively affected by two physical phenomena, namely, scintillation due to atmospheric turbulence and pointing errors. To quantify the effect of these two factors on FSO system performance, we need an effective mathematical model for them. Scintillation is typically modeled by the log-normal and Gamma-Gamma distributions for weak and strong turbulence conditions, respectively. In this paper, we propose and study a generalized pointing error model based on the Beckmann distribution. We then derive the asymptotic ergodic capacity of FSO systems under the joint impact of turbulence and generalized pointing error impairments. © 2015 IEEE.
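The Beckmann model treats the horizontal and vertical pointing displacements as independent, possibly nonzero-mean Gaussians, so the radial displacement generalizes the Rayleigh, Rician, and Hoyt special cases. A Monte-Carlo sketch of the model (parameter values are illustrative, not from the paper):

```python
import math, random

def beckmann_radius(mu_x, mu_y, sx, sy, n=200_000, seed=1):
    """Mean radial pointing displacement r = sqrt(x^2 + y^2) with
    x ~ N(mu_x, sx^2), y ~ N(mu_y, sy^2) (Beckmann model)."""
    rng = random.Random(seed)
    return sum(math.hypot(rng.gauss(mu_x, sx), rng.gauss(mu_y, sy))
               for _ in range(n)) / n

# Special case: zero boresight error and equal jitter reduces to a
# Rayleigh distribution, whose mean is sigma * sqrt(pi / 2).
sigma = 0.3
est = beckmann_radius(0.0, 0.0, sigma, sigma)
print(round(est, 3), round(sigma * math.sqrt(math.pi / 2), 3))
```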
National Research Council Canada - National Science Library
Castañeda, P.
2000-01-01
Constitutive models were developed and implemented numerically to account for the evolution of microstructure and anisotropy in finite-deformation processes involving porous and composite materials...
Modeling the two-point correlation of the vector stream function
Oberlack, M.; Rogers, M. M.; Reynolds, W. C.
1994-01-01
A new model for the two-point vector stream function correlation has been developed using tensor invariant arguments and evaluated by comparing model predictions with DNS data for incompressible homogeneous turbulent shear flow. The modelled two-point vector stream function correlation can then be used to calculate the two-point velocity correlation function and other quantities useful in turbulence modeling. The model assumes that the two-point vector stream function correlation can be written in terms of the separation vector and a new tensor function that depends only on the magnitude of the separation vector. The model has a single free coefficient, which has been chosen by comparison with the DNS data. The relative error of the model predictions of the two-point vector stream function correlation is only a few percent for a broad range of the model coefficient. Predictions of the derivatives of this correlation, which are of interest in turbulence modeling, may not be as accurate.
Land cover models to predict non-point nutrient inputs for selected ...
African Journals Online (AJOL)
WQSAM is a practical water quality model for use in guiding southern African water quality management. However, the estimation of non-point nutrient inputs within WQSAM is uncertain, as it is achieved through a combination of calibration and expert knowledge. Non-point source loads can be correlated to particular land ...
Facial plastic surgery area acquisition method based on point cloud mathematical model solution.
Li, Xuwu; Liu, Fei
2013-09-01
Finding a quick and accurate method of acquiring the facial plastic surgery area, so as to provide sufficient but not redundant autologous or in vitro skin for covering extensive wounds, trauma, and burnt areas, is an active research problem. At present, acquisition of the facial plastic surgery area mainly involves model laser scanning, point cloud data acquisition, pretreatment of the point cloud data, three-dimensional model reconstruction, and computation of the area. This approach computes the area accurately, but the random error is hard to control and the computation period is comparatively long. In this article, a facial plastic surgery area acquisition method based on the solution of a point cloud mathematical model is proposed. The method applies symmetric treatment to the point cloud after pretreatment of the point cloud data, producing a color difference map of the point cloud error before and after symmetrization. A slicing mathematical model of the facial plastic area is obtained from the color difference map, and the facial plastic area is acquired by solving for the point cloud data in this area directly. Because the point cloud data are operated on directly, the method completes the surgery area computation accurately and efficiently. The comparative analysis shows that the method is effective for facial plastic surgery area acquisition.
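The final area computation over a reconstructed surface reduces to summing triangle areas obtained from the cross product of edge vectors. A minimal sketch of that step (the tiny mesh below is illustrative, not a facial model):

```python
import math

def triangle_area(p, q, r):
    # half the magnitude of the cross product of two edge vectors
    ux, uy, uz = (q[i] - p[i] for i in range(3))
    vx, vy, vz = (r[i] - p[i] for i in range(3))
    cx, cy, cz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def mesh_area(vertices, faces):
    """Total surface area of a triangulated region of a point-cloud model."""
    return sum(triangle_area(*(vertices[i] for i in face)) for face in faces)

# unit square split into two triangles -> total area 1.0
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
faces = [(0, 1, 2), (0, 2, 3)]
print(mesh_area(verts, faces))
```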
Virtual and Printed 3D Models for Teaching Crystal Symmetry and Point Groups
Casas, Lluís; Estop, Eugènia
2015-01-01
Both, virtual and printed 3D crystal models can help students and teachers deal with chemical education topics such as symmetry and point groups. In the present paper, two freely downloadable tools (interactive PDF files and a mobile app) are presented as examples of the application of 3D design to study point-symmetry. The use of 3D printing to…
Evolution of complexity in a resource-based model
Fernández, Lenin; Campos, Paulo R. A.
2017-02-01
The evolution of organismal complexity is studied through resource-based modelling. In the model, cells are characterized by their metabolic rates which, together with the availability of resource, determine the rate at which they divide. The population is structured in groups. Groups are also autonomous entities regarding reproduction and propagation, and so they correspond to a higher level of biological organization. The model assumes reproductive altruism, as there is a fitness transfer from the cell level to the group level. Reproductive altruism comes about by inflicting a higher energetic cost on cells belonging to larger groups. On the other hand, larger groups are less prone to extinction. The strength of this group-augmentation benefit can be tuned by the synergy parameter γ. Through extensive computer simulations we make a thorough exploration of the parameter space to find the domain in which the formation of larger groups is allowed. We show that formation of small groups can be obtained at a low level of synergy, whereas larger group sizes can only be attained once synergistic interactions surpass a given strength. Although the total resource influx rate plays a key role in determining the number of groups coexisting at equilibrium, its role in driving group size is minor; on the other hand, how the resource is seized by the groups matters.
DEFF Research Database (Denmark)
Utrilla, José; O'Brien, Edward J.; Chen, Ke
2016-01-01
Pleiotropic regulatory mutations affect diverse cellular processes, posing a challenge to our understanding of genotype-phenotype relationships across multiple biological scales. Adaptive laboratory evolution (ALE) allows for such mutations to be found and characterized in the context of clear se...
Hidden Markov models for evolution and comparative genomics analysis.
Bykova, Nadezda A; Favorov, Alexander V; Mironov, Andrey A
2013-01-01
The problem of reconstructing ancestral states given a phylogeny and data from extant species arises in a wide range of biological studies. The continuous-time Markov model for the evolution of discrete states is generally used for the reconstruction of ancestral states. We modify this model to account for the case when the states of the extant species are uncertain. This situation appears, for example, if the states for extant species are predicted by some program and thus are known only with some level of reliability; this is common in the bioinformatics field. The main idea is to formulate the problem as a hidden Markov model on a tree (tree HMM, tHMM), where the basic continuous-time Markov model is expanded with emission probabilities of observed data (e.g. prediction scores) for each underlying discrete state. Our tHMM decoding algorithm allows us to predict states at the ancestral nodes as well as to refine states at the leaves on the basis of quantitative comparative genomics. Tests on simulated data show that the tHMM approach applied to a continuous variable reflecting the probabilities of the states (i.e. the prediction score) is more accurate than reconstruction from discrete state assignments defined by the best score threshold. We provide examples of applying our model to the evolutionary analysis of N-terminal signal peptides and transcription factor binding sites in bacteria. The program is freely available at http://bioinf.fbb.msu.ru/~nadya/tHMM and via web-service at http://bioinf.fbb.msu.ru/treehmmweb.
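The likelihood computation underlying such a tree HMM is Felsenstein-style pruning in which leaves contribute emission probabilities (derived from prediction scores) rather than hard state assignments. A toy two-state sketch under an assumed symmetric substitution rate (the tree, branch lengths, and emission values are illustrative, not the paper's parameters):

```python
import math

def trans(t, a=1.0):
    """Transition matrix of a symmetric two-state continuous-time Markov
    model over a branch of length t (rate a is an assumed toy value)."""
    p_same = 0.5 + 0.5 * math.exp(-2 * a * t)
    return [[p_same, 1 - p_same], [1 - p_same, p_same]]

def prune(node):
    """Felsenstein pruning with soft (emission) likelihoods at leaves.
    node = (emission,) for a leaf, or (left, t_left, right, t_right)."""
    if len(node) == 1:
        return node[0]                       # P(observation | state)
    left, tl, right, tr = node
    Ll, Lr = prune(left), prune(right)
    Pl, Pr = trans(tl), trans(tr)
    return [sum(Pl[s][j] * Ll[j] for j in (0, 1)) *
            sum(Pr[s][j] * Lr[j] for j in (0, 1)) for s in (0, 1)]

# Leaves carry prediction-score-derived emission probabilities instead of
# hard state assignments -- the tHMM idea.
tree = (([0.9, 0.1],), 0.2, (([0.8, 0.2],), 0.1, ([0.3, 0.7],), 0.1), 0.3)
root = prune(tree)
lik = 0.5 * root[0] + 0.5 * root[1]          # uniform root prior
print(round(lik, 4))
```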
Beans (Phaseolus ssp.) as a Model for Understanding Crop Evolution
Bitocchi, Elena; Rau, Domenico; Bellucci, Elisa; Rodriguez, Monica; Murgia, Maria L.; Gioia, Tania; Santo, Debora; Nanni, Laura; Attene, Giovanna; Papa, Roberto
2017-01-01
Here, we aim to provide a comprehensive and up-to-date overview of the most significant outcomes in the literature regarding the origin of the Phaseolus genus, the geographical distribution of the wild species, the domestication process, and the wide spread out of the centers of origin. Phaseolus can be considered a unique model for the study of crop evolution, and in particular for an understanding of the convergent phenotypic evolution that occurred under domestication. The almost unique situation that characterizes the Phaseolus genus is that five of its ∼70 species have been domesticated (i.e., Phaseolus vulgaris, P. coccineus, P. dumosus, P. acutifolius, and P. lunatus); in addition, for P. vulgaris and P. lunatus, the wild forms are distributed in both Mesoamerica and South America, where at least two independent and isolated episodes of domestication occurred. Thus, at least seven independent domestication events occurred, which provides the possibility to unravel the genetic basis of the domestication process not only among species of the same genus, but also between gene pools within the same species. Other interesting features also make Phaseolus crops very useful in the study of evolution, including: (i) their recent divergence, and the high level of collinearity and synteny among their genomes; (ii) their different breeding systems and life history traits, from annual and autogamous to perennial and allogamous; and (iii) their adaptation to different environments, not only in their centers of origin, but also outside the Americas, following their introduction and wide spread through different countries. For P. vulgaris in particular, this resulted in the breaking of the spatial isolation of the Mesoamerican and Andean gene pools, which allowed spontaneous hybridization, thus increasing the possibility of novel genotypes and phenotypes. This knowledge that is associated to the genetic resources that have been conserved ex situ and in
Gene finding with a hidden Markov model of genome structure and evolution
DEFF Research Database (Denmark)
Pedersen, Jakob Skou; Hein, Jotun
2003-01-01
...annotation. The modelling of evolution by the existing comparative gene finders leaves room for improvement. Results: A probabilistic model of both genome structure and evolution is designed. This type of model is called an Evolutionary Hidden Markov Model (EHMM), being composed of an HMM and a set of region... the model are linear in alignment length and genome number. The model is applied to the problem of gene finding. The benefit of modelling sequence evolution is demonstrated both in a range of simulations and on a set of orthologous human/mouse gene pairs. AVAILABILITY: Free availability over the Internet...
An analytic model for the evolution of a close binary system of neutron (degenerate) stars
Imshennik, V. S.; Popov, D. V.
1998-03-01
The evolution of a close binary system of neutron stars is studied in the point-mass approximation with allowance for gravitational radiation and mass exchange between the components of the system. The calculation of mass transfer from the low-mass component of the system, based on the known approximations for the radii of the Roche lobe and the low-mass component, provides a reliable determination of the characteristics of the system by the end of its evolution, which are virtually independent of the initial ratio of the component masses. The evolution of the system is accompanied by mass loss from the low-mass component and ends in the explosion of this component at the time when its mass reaches the lower limit for neutron stars (close to 0.1 M_solar). After the explosion, the second component of the system leaves the supernova remnant with a speed and rotation period that are determined almost entirely by the total mass of the system M_t. The assumptions about the explosion of the low-mass component and the subsequent escape of the high-mass component (pulsar or black hole) from the system were made in the recently proposed scenario of the explosion of collapsing supernovae with allowance for rotational effects (Imshennik 1992; Imshennik and Nadezhin 1992; Imshennik and Popov 1996). We formulate and substantiate an analytic model for the evolution of the system under consideration, in which virtually all mass exchange between the components occurs under the assumption of quasi-stationary circular orbits with significant energy and angular momentum losses through gravitational radiation. This character of the evolution persists until the mass of the low-mass component reaches a value of order ~0.15 M_solar. The remaining mass (~0.05 M_solar) is lost by this component in the dynamical regime, and the given analytic model then takes on, strictly speaking, the character of a crude estimate. On the basis of this model, the main features of
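The quasi-stationary circular-orbit stage driven by gravitational radiation is governed by the standard Peters (1964) result for the inspiral time of a circular binary; a quick numerical sketch (the initial separation below is an arbitrary illustrative value, not one from the paper):

```python
import math

G = 6.674e-11       # gravitational constant, SI
c = 2.998e8         # speed of light, m/s
Msun = 1.989e30     # solar mass, kg

def merger_time(a0, m1, m2):
    """Peters (1964) circular-orbit inspiral time from gravitational-wave
    emission: t = 5 c^5 a0^4 / (256 G^3 m1 m2 (m1 + m2))."""
    return 5 * c**5 * a0**4 / (256 * G**3 * m1 * m2 * (m1 + m2))

# two ~1.4 Msun neutron stars separated by 1e8 m
t = merger_time(1.0e8, 1.4 * Msun, 1.4 * Msun)
print(f"{t / 3.15e7:.2e} years")
```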
Nespolo, Roberto F; Roff, Derek A
2014-01-01
The evolution of endothermy is one of the most puzzling events in vertebrate evolution, for which several hypotheses have been proposed. The most widely accepted is the aerobic model, which assumes the existence of a genetic correlation between resting metabolic rate (RMR) and maximum aerobic capacity (whose standard measure is maximum metabolic rate, MMR). This model posits that directional selection acted on maximum aerobic capacity and that resting metabolic rate increased as a correlated response, in turn increasing body temperature. To test this hypothesis we implemented a simple two-trait quantitative genetic model in which RMR and MMR are initially independent of each other and subject to stabilizing selection toward two separate optima. We show that mutations affecting both traits can lead to the evolution of a genetic correlation between the traits without any significant shift in the two trait means. Thus, the presence of a genetic correlation between RMR and MMR in living animals provides no support in and of itself for the past elevation of metabolic rate via selection on aerobic capacity. This result calls into question the testability, using quantitative genetics, of the hypothesis that RMR increased as a correlated response to directional selection on MMR, in turn increasing body temperature. Given the difficulty of studying ancient physiological processes, we suggest that approaches such as this model are a valuable alternative for analyzing possible mechanisms of endothermy evolution.
Modelling the effects of water-point closure and fencing removal: a GIS approach.
Graz, F Patrick; Westbrooke, Martin E; Florentine, Singarayer K
2012-08-15
Artificial water-points in the form of troughs or ground tanks are used to augment natural water supplies within rangelands in many parts of the world. Access to such water-points leads to the development of a distinct ecological sub-system, the piosphere, where trampling and grazing impact modify the vegetation. This study aims to consolidate existing information in a GIS based model to investigate grazing patterns within the landscape. The model focuses on the closure of water-points and removal of fences on Nanya Station, New South Wales, Australia. We found that the manipulation of water-points and fences in one management intervention may change grazing activity in a way different to that which would be experienced if each had been modified separately. Such effects are further modified by the spatial distribution of the water-points and the underlying vegetation. Copyright © 2012 Elsevier Ltd. All rights reserved.
Evolution-informed modeling improves outcome prediction for cancers.
Liu, Li; Chang, Yung; Yang, Tao; Noren, David P; Long, Byron; Kornblau, Steven; Qutub, Amina; Ye, Jieping
2017-01-01
Despite wide applications of high-throughput biotechnologies in cancer research, many biomarkers discovered by exploring large-scale omics data do not provide satisfactory performance when used to predict cancer treatment outcomes. This problem is partly due to the overlooking of functional implications of molecular markers. Here, we present a novel computational method that uses evolutionary conservation as prior knowledge to discover bona fide biomarkers. Evolutionary selection at the molecular level is nature's test on functional consequences of genetic elements. By prioritizing genes that show significant statistical association and high functional impact, our new method reduces the chances of including spurious markers in the predictive model. When applied to predicting therapeutic responses for patients with acute myeloid leukemia and to predicting metastasis for patients with prostate cancers, the new method gave rise to evolution-informed models that enjoyed low complexity and high accuracy. The identified genetic markers also have significant implications in tumor progression and embrace potential drug targets. Because evolutionary conservation can be estimated as a gene-specific, position-specific, or allele-specific parameter on the nucleotide level and on the protein level, this new method can be extended to apply to miscellaneous "omics" data to accelerate biomarker discoveries.
A numerical model for meltwater channel evolution in glaciers
Directory of Open Access Journals (Sweden)
A. H. Jarosch
2012-04-01
Full Text Available Meltwater channels form an integral part of the hydrological system of a glacier. Better understanding of how meltwater channels develop and evolve is required to fully comprehend supraglacial and englacial meltwater drainage. Incision of supraglacial stream channels and subsequent roof closure by ice deformation has been proposed in recent literature as a possible englacial conduit formation process. Field evidence for supraglacial stream incision has been found in Svalbard and Nepal. In Iceland, where volcanic activity provides meltwater with temperatures above 0 °C, rapid enlargement of supraglacial channels has been observed. Supraglacial channels deliver meltwater through englacial passages to the subglacial hydrological systems of large ice sheets, which in turn affects ice sheet motion and the ice sheets' contribution to eustatic sea level change. By coupling, for the first time, a numerical ice dynamic model to a hydraulic model which includes heat transfer, we investigate the evolution of meltwater channels and their incision behaviour. We present results for different constant meltwater fluxes, different channel slopes, different meltwater temperatures, different melt rate distributions in the channel, as well as temporal variations in meltwater flux. The key parameters governing incision rate and depth are channel slope, meltwater temperature loss to the ice and meltwater flux. Channel width and geometry are controlled by the melt rate distribution along the channel wall. Calculated Nusselt numbers suggest that turbulent mixing is the main heat transfer mechanism in the meltwater channels studied.
The evolution of menstruation: A new model for genetic assimilation
Emera, D.; Romero, R.; Wagner, G.
2012-01-01
Why do humans menstruate while most mammals do not? Here, we present our answer to this long-debated question, arguing that (i) menstruation occurs as a mechanistic consequence of hormone-induced differentiation of the endometrium (referred to as spontaneous decidualization, or SD); (ii) SD evolved because of maternal-fetal conflict; and (iii) SD evolved by genetic assimilation of the decidualization reaction, which is induced by the fetus in non-menstruating species. The idea that menstruation occurs as a consequence of SD has been proposed in the past, but here we present a novel hypothesis on how SD evolved. We argue that decidualization became genetically stabilized in menstruating lineages, allowing females to prepare for pregnancy without any signal from the fetus. We present three models for the evolution of SD by genetic assimilation, based on recent advances in our understanding of the mechanisms of endometrial differentiation and implantation. Testing these models will ultimately shed light on the evolutionary significance of menstruation, as well as on the etiology of human reproductive disorders like endometriosis and recurrent pregnancy loss. PMID:22057551
A Matérn model of the spatial covariance structure of point rain rates
Sun, Ying
2014-07-15
It is challenging to model a precipitation field due to its intermittent and highly scale-dependent nature. Many models of point rain rates or areal rainfall observations have been proposed and studied for different time scales. Among them, the spectral model based on a stochastic dynamical equation for the instantaneous point rain rate field is attractive, since it naturally leads to a consistent space–time model. In this paper, we note that the spatial covariance structure of the spectral model is equivalent to the well-known Matérn covariance model. Using high-quality rain gauge data, we estimate the parameters of the Matérn model for different time scales and demonstrate that the Matérn model is superior to an exponential model, particularly at short time scales.
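For reference, the Matérn family at half-integer smoothness has simple closed forms, and ν = 1/2 recovers exactly the exponential model that the paper compares against (parameter names here are generic, not the paper's notation; the general ν requires a modified Bessel function):

```python
import math

def matern(h, sigma2=1.0, phi=1.0, nu=0.5):
    """Matérn covariance at lag h for half-integer smoothness nu
    (closed forms; arbitrary nu needs the Bessel function K_nu)."""
    t = h / phi
    if h == 0:
        return sigma2
    if nu == 0.5:                        # exponential special case
        return sigma2 * math.exp(-t)
    if nu == 1.5:
        s = math.sqrt(3) * t
        return sigma2 * (1 + s) * math.exp(-s)
    if nu == 2.5:
        s = math.sqrt(5) * t
        return sigma2 * (1 + s + s * s / 3) * math.exp(-s)
    raise ValueError("only nu in {0.5, 1.5, 2.5} implemented here")

# nu = 0.5 reproduces the exponential model exactly; larger nu gives a
# smoother field with higher short-range correlation.
print(matern(1.0, nu=0.5), math.exp(-1.0))
```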
Analysis of the stochastic channel model by Saleh & Valenzuela via the theory of point processes
DEFF Research Database (Denmark)
Jakobsen, Morten Lomholt; Pedersen, Troels; Fleury, Bernard Henri
2012-01-01
In this paper we revisit the classical channel model by Saleh & Valenzuela via the theory of spatial point processes. By reformulating this model as a particular point process and by repeated application of Campbell's Theorem we provide concise and elegant access to its overall structure... and underlying features, like the intensity function of the component delays and the delay-power intensity. The flexibility and clarity of the mathematical instruments utilized to obtain these results lead us to conjecture that the theory of spatial point processes provides a unifying mathematical framework...
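Campbell's Theorem, the tool applied repeatedly above, states that for a Poisson process with intensity λ, E[Σᵢ f(tᵢ)] = λ ∫ f(t) dt. A Monte-Carlo sanity check for a homogeneous process on [0, T] (the intensity and test function are arbitrary illustrative choices):

```python
import math, random

def campbell_check(lam=5.0, T=10.0, trials=4000, seed=3):
    """Estimate E[sum_i f(t_i)] for a homogeneous Poisson process of
    intensity lam on [0, T], with f(t) = exp(-t)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        t, s = rng.expovariate(lam), 0.0
        while t < T:                 # arrivals via exponential gaps
            s += math.exp(-t)
            t += rng.expovariate(lam)
        total += s
    return total / trials

est = campbell_check()
# Campbell's Theorem: lam * integral_0^T exp(-t) dt
print(round(est, 2), round(5.0 * (1 - math.exp(-10.0)), 2))
```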
On the Asymptotic Capacity of Dual-Aperture FSO Systems with a Generalized Pointing Error Model
Al-Quwaiee, Hessa
2016-06-28
Free-space optical (FSO) communication systems are negatively affected by two physical phenomena, namely, scintillation due to atmospheric turbulence and pointing errors. To quantify the effect of these two factors on FSO system performance, we need an effective mathematical model for them. In this paper, we propose and study a generalized pointing error model based on the Beckmann distribution. We then derive a generic expression for the asymptotic capacity of FSO systems under the joint impact of turbulence and generalized pointing error impairments. Finally, the asymptotic channel capacity formulas are extended to quantify FSO system performance with selection and switched-and-stay diversity.
Challenge to Increase Confidence in Geological Evolution Models
Mizuno, T.; Iwatsuki, T.; Saegusa, H.; Kato, T.; Matsuoka, T.; Yasue, K.; Ohyama, T.; Sasao, E.
2014-12-01
Geological evolution models (GEMs), like site descriptive models (SDMs), are used to integrate investigation results and to support safety assessment. Moreover, enhanced confidence in the long-term stability of the geological environment is required for geological disposal in Japan, which lies in a tectonically active region. The aim of this study is to provide future directions for increasing confidence in GEMs, based on a review of current GEMs. GEMs have been constructed in three steps: 1) Features, Events and Processes (FEP) analysis, 2) scenario development, and 3) numerical modeling. Based on the current status, we examined the issues involved in developing GEMs with a higher level of confidence. As a result, the development of techniques and methodologies for 1) validation of GEMs, 2) handling of uncertainty, and 3) digitalization/visualization were identified as open issues. To address these issues, we specified three approaches. The first is using multiple lines of evidence: consistency between various fields of study provides important information for validating GEMs. The second is revealing the arguments behind GEMs: because GEMs are built on many pieces of evidence, hypotheses, and assumptions, their confidence and uncertainty can be assessed by synthesizing the basic information behind them; in addition, optional cases are needed to demonstrate the level of understanding. The third is the development of elemental technology, such as an integrated numerical simulation and visualization system that can handle large models and composite phenomena. In the future, we will focus on increasing confidence in GEMs along these lines. This study was carried out under a contract with METI (Ministry of Economy, Trade and Industry) as part of its R&D supporting program for developing geological disposal technology.
Using many pilot points and singular value decomposition in groundwater model calibration
DEFF Research Database (Denmark)
Christensen, Steen; Doherty, John
2008-01-01
...over the model area. Singular value decomposition (SVD) of the normal matrix is used to reduce the large number of pilot-point parameters to a smaller number of so-called super parameters that can be estimated by nonlinear regression from the available observations. A number of eigenvectors... corresponding to significant eigenvalues (resulting from the decomposition) are used to transform the model from having many pilot-point parameters to having a few super parameters. A synthetic case model is used to analyze and demonstrate the application of the presented method of model parameterization...
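The essence of the super-parameter construction is that only singular directions with significant singular values are estimable from the data. A minimal sketch that extracts the leading right singular vector of a toy sensitivity (Jacobian) matrix by power iteration on JᵀJ (the Jacobian values are invented for illustration; a real application would use a full truncated SVD):

```python
import math, random

def dominant_singular(J, iters=200, seed=0):
    """Power iteration on J^T J: returns the leading right singular
    vector, i.e. the first 'super parameter' direction."""
    n = len(J[0])
    rng = random.Random(seed)
    v = [rng.random() for _ in range(n)]
    for _ in range(iters):
        Jv = [sum(row[j] * v[j] for j in range(n)) for row in J]
        w = [sum(J[i][j] * Jv[i] for i in range(len(J))) for j in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Toy Jacobian: observations sensitive mostly to p0 + p1, barely to
# p0 - p1, so the leading super parameter aligns with (1, 1)/sqrt(2).
J = [[1.0, 1.0], [1.0, 1.0], [0.1, -0.1]]
v = dominant_singular(J)
print([round(x, 3) for x in v])
```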
Modelling the temperature evolution of bone under high intensity focused ultrasound
ten Eikelder, H. M. M.; Bošnački, D.; Elevelt, A.; Donato, K.; Di Tullio, A.; Breuer, B. J. T.; van Wijk, J. H.; van Dijk, E. V. M.; Modena, D.; Yeo, S. Y.; Grüll, H.
2016-02-01
Magnetic resonance-guided high intensity focused ultrasound (MR-HIFU) has been clinically shown to be effective for palliative pain management in patients suffering from skeletal metastasis. The underlying mechanism is supposed to be periosteal denervation caused by ablative temperatures reached through ultrasound heating of the cortex. The challenge is exact temperature control during sonication as MR-based thermometry approaches for bone tissue are currently not available. Thus, in contrast to the MR-HIFU ablation of soft tissue, a thermometry feedback to the HIFU is lacking, and the treatment of bone metastasis is entirely based on temperature information acquired in the soft tissue adjacent to the bone surface. However, heating of the adjacent tissue depends on the exact sonication protocol and requires extensive modelling to estimate the actual temperature of the cortex. Here we develop a computational model to calculate the spatial temperature evolution in bone and the adjacent tissue during sonication. First, a ray-tracing technique is used to compute the heat production in each spatial point serving as a source term for the second part, where the actual temperature is calculated as a function of space and time by solving the Pennes bio-heat equation. Importantly, our model includes shear waves that arise at the bone interface as well as all geometrical considerations of transducer and bone geometry. The model was compared with a theoretical approach based on the far field approximation and an MR-HIFU experiment using a bone phantom. Furthermore, we investigated the contribution of shear waves to the heat production and resulting temperatures in bone. The temperature evolution predicted by our model was in accordance with the far field approximation and agreed well with the experimental data obtained in phantoms. Our model allows the simulation of the HIFU treatments of bone metastasis in patients and can be extended to a planning tool prior to MR
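The second stage of such a model, solving the Pennes bio-heat equation ρc ∂T/∂t = k ∇²T − w_b c_b (T − T_a) + Q with a precomputed heat source, can be sketched in 1D with an explicit finite-difference scheme. All material constants and the Gaussian source below are rough illustrative values, not the paper's ray-tracing output:

```python
import math

def simulate(nx=51, dx=1e-3, dt=0.05, steps=400):
    rho_c = 3.6e6   # volumetric heat capacity rho*c, J/(m^3 K)
    k = 0.5         # thermal conductivity, W/(m K)
    w_cb = 2.0e3    # perfusion term w_b * c_b, W/(m^3 K)
    T_a = 37.0      # arterial / baseline temperature, deg C
    T = [T_a] * nx
    x0 = (nx // 2) * dx
    # assumed Gaussian heat deposition standing in for the ray-traced source
    Q = [2.0e6 * math.exp(-((i * dx - x0) / 2e-3) ** 2) for i in range(nx)]
    for _ in range(steps):
        Tn = T[:]
        for i in range(1, nx - 1):
            lap = (T[i - 1] - 2 * T[i] + T[i + 1]) / dx ** 2
            Tn[i] = T[i] + dt / rho_c * (k * lap - w_cb * (T[i] - T_a) + Q[i])
        T = Tn
    return T

T = simulate()
print(round(max(T), 1))  # peak temperature after 20 s of heating
```

The explicit step size satisfies the stability bound dt < dx²·ρc/(2k) for these constants; a production model would add shear-wave heating at the bone interface and full 3D geometry, as the paper describes.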
Benchmark models, planes, lines and points for future SUSY searches at the LHC
Energy Technology Data Exchange (ETDEWEB)
AbdusSalam, S.S. [The Abdus Salam International Centre for Theoretical Physics, Trieste (Italy); Allanach, B.C. [Cambridge Univ. (United Kingdom). Dept. of Applied Mathematics and Theoretical Physics; Dreiner, H.K. [Bonn Univ. (DE). Bethe Center for Theoretical Physics and Physikalisches Inst.] (and others)
2012-03-15
We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.
Robust non-rigid point set registration using student's-t mixture model.
Directory of Open Access Journals (Sweden)
Zhiyong Zhou
Full Text Available The Student's-t mixture model, which is heavy-tailed and more robust than the Gaussian mixture model, has recently received great attention in image processing. In this paper, we propose a robust non-rigid point set registration algorithm using the Student's-t mixture model. Specifically, we first consider the alignment of two point sets as a probability density estimation problem and treat one point set as the Student's-t mixture model centroids. Then, we fit the Student's-t mixture model centroids to the other point set, which is treated as data. Finally, we obtain closed-form solutions for the registration parameters, leading to a computationally efficient registration algorithm. The proposed algorithm is especially effective for addressing the non-rigid point set registration problem when significant amounts of noise and outliers are present. Moreover, fewer registration parameters have to be set manually for our algorithm compared to the popular coherent point drift (CPD) algorithm. We have compared our algorithm with other state-of-the-art registration algorithms on both 2D and 3D data with noise and outliers, where our non-rigid registration algorithm showed accurate results and outperformed the other algorithms.
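The soft-assignment step at the heart of such mixture-based registration can be sketched as follows. This is a hypothetical illustration of a single E-step with a Student's-t kernel (fixed bandwidth `sigma2` and degrees of freedom `nu` are assumed values), not the authors' full algorithm, which also re-estimates the non-rigid transformation in closed form.

```python
import numpy as np

def t_responsibilities(Y, X, sigma2, nu=3.0):
    """Posterior probability that data point x_n belongs to centroid y_m under
    a Student's-t kernel. Heavy tails (small nu) damp the influence of
    outliers relative to a Gaussian kernel."""
    D = X.shape[1]
    d2 = ((X[None, :, :] - Y[:, None, :]) ** 2).sum(axis=-1)  # (M, N) sq. dists
    # unnormalised t-density; constant factors cancel when normalising
    w = (1.0 + d2 / (nu * sigma2)) ** (-(nu + D) / 2.0)
    return w / w.sum(axis=0, keepdims=True)  # each column sums to 1

# two centroids, three data points -- the third is a gross outlier
Y = np.array([[0.0, 0.0], [1.0, 0.0]])
X = np.array([[0.1, 0.0], [0.9, 0.1], [50.0, 50.0]])
P = t_responsibilities(Y, X, sigma2=0.05)
print(P.round(3))  # the outlier column is near 0.5/0.5: it barely votes
```

Because the outlier's responsibilities are spread almost evenly across centroids, it contributes little to the subsequent parameter update, which is the robustness property the abstract refers to.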
On Religion and Language Evolutions Seen Through Mathematical and Agent Based Models
Ausloos, M.
Religions and languages are social variables, like age, sex, wealth or political opinions, to be studied like any other organizational parameter. In fact, religiosity is one of the most important sociological aspects of populations. Languages are also obvious characteristics of the human species. Religions and languages appear, though they also disappear. All religions and languages evolve and survive when they adapt to the developments of society. On the other hand, the number of adherents of a given religion, or the number of persons speaking a language, is not fixed in time, nor in space. Several questions can be raised. E.g., from a macroscopic point of view: How many religions/languages exist at a given time? What is their distribution? What is their lifetime? How do they evolve? From a "microscopic" point of view: can one invent agent-based models to describe macroscopic aspects? Do simple evolution equations exist? How complicated must a model be? These aspects are considered in the present note. Basic evolution equations are outlined and critically, though briefly, discussed. Similarities and differences between religions and languages are summarized. Cases can be illustrated with historical facts and data. It is stressed that the characteristic time scales are different. It is emphasized that "external fields" are historically very relevant in the case of religions, rendering the study more "interesting" within a mechanistic approach based on parity and symmetry of cluster concepts. Yet the modern description of human societies through networks in reported simulations is still lacking some mandatory ingredients, i.e. the non-scalar nature of the nodes, and the non-binary aspects of nodes and links, though for the latter this is already often taken into account, including directions. From an analytical point of view one can consider a population independently of the others. It is intuitively accepted, but also found from the statistical analysis of the frequency distribution that an
IMAGE-BASED AIRBORNE LiDAR POINT CLOUD ENCODING FOR 3D BUILDING MODEL RETRIEVAL
Directory of Open Access Journals (Sweden)
Y.-C. Chen
2016-06-01
Full Text Available With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms, with many applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse existing 3D building models instead of reconstructing them from point clouds. To retrieve models efficiently, the models in a database are generally encoded compactly using a shape descriptor. However, most of the geometric descriptors in related works are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by Light Detection and Ranging (LiDAR) systems, owing to their efficient scene scanning and spatial information collection. Using point clouds with sparse, noisy, and incomplete sampling as input queries is more difficult than using complete 3D models. Because the building roof is more informative than other parts of an airborne LiDAR point cloud, an image-based approach is proposed to encode both the point clouds from input queries and the 3D models in databases. The main goal of the data encoding is that the models in the database and the input point clouds can be encoded consistently. Firstly, top-view depth images of buildings are generated to represent the geometric surface of a building roof. Secondly, geometric features are extracted from the depth images based on the height, edges, and planes of the building. Finally, descriptors can be extracted via spatial histograms and used in the 3D model retrieval system. For data retrieval, models are retrieved by matching the encoding coefficients of point clouds and building models. In experiments, a database including about 900,000 3D models collected from the Internet is used for evaluation of data retrieval. The results of the proposed method show
Incorporating Terrestrial Processes in Models of PETM Carbon Cycle Evolution
Bowen, G. J.
2016-12-01
Evidence for the massive, rapid release of carbon to the ocean/atmosphere/biosphere system at the onset of the PETM is unequivocal, but the sequence of feedbacks that governed the evolution and recovery of the carbon cycle over the subsequent 150,000 years of the event remain unclear. Sedimentological evidence suggests that much of the excess carbon was eventually sequestered as carbonate in marine sediments, but there is also significant and growing evidence for changes in continental carbon cycle processes, most of which have not been incorporated in models of the event. I describe several aspects of the observed or implied continental response to the PETM, including changes in ecosystem organic carbon storage, soil carbonate growth, and export of organic carbon to the marine margins. These processes, along with continental silicate weathering, have been incorporated in a terrestrial module for a simple box model of the PETM carbon cycle, which is being used to evaluate their potential impact on global carbon cycle response and recovery. Although changes in terrestrial organic carbon storage can help explain patterns of global carbon isotope change throughout the event, constraints from ocean pH records suggest that other mechanisms must have contributed to pacing the duration and recovery of the PETM. Model results suggest that enhanced soil carbonate formation and the provenance of organic carbon buried in continental margin sediments are two poorly constrained variables that could alter the interpretation and implications of the continental records. Given the strong potential for, and high uncertainty in, future changes in terrestrial carbon cycle processes, resolving the nature and long-term impacts of such changes during the PETM represents a major opportunity to leverage the geologic record of this hyperthermal to increase understanding of human-induced global change.
Improving the thin-disk models of circumstellar disk evolution. The 2+1-dimensional model
Vorobyov, Eduard I.; Pavlyuchenkov, Yaroslav N.
2017-09-01
Context. Circumstellar disks of gas and dust are naturally formed from contracting pre-stellar molecular cores during the star formation process. To study various dynamical and chemical processes that take place in circumstellar disks prior to their dissipation and transition to debris disks, the appropriate numerical models capable of studying the long-term disk chemodynamical evolution are required. Aims: We improve the frequently used 2D hydrodynamical model for disk evolution in the thin-disk limit by employing a better calculation of the disk thermal balance and adding a reconstruction of the disk vertical structure. Together with the hydrodynamical processes, the thermal evolution is of great importance since it influences the strength of gravitational instability and the chemical evolution of the disk. Methods: We present a new 2+1-dimensional numerical hydrodynamics model of circumstellar disk evolution, where the thin-disk model is complemented with the procedure for calculating the vertical distributions of gas volume density and temperature in the disk. The reconstruction of the disk vertical structure is performed at every time step via the solution of the time-dependent radiative transfer equations coupled to the equation of the vertical hydrostatic equilibrium. Results: We perform a detailed comparison between circumstellar disks produced with our previous 2D model and with the improved 2+1D approach. The structure and evolution of resulting disks, including the differences in temperatures, densities, disk masses, and protostellar accretion rates, are discussed in detail. Conclusions: The new 2+1D model yields systematically colder disks, while the in-falling parental clouds are warmer. Both effects act to increase the strength of disk gravitational instability and, as a result, the number of gravitationally bound fragments that form in the disk via gravitational fragmentation as compared to the purely 2D thin-disk simulations with a simplified
Model of climate evolution based on continental drift and polar wandering
Donn, W. L.; Shaw, D. M.
1977-01-01
The thermodynamic meteorologic model of Adem is used to trace the evolution of climate from Triassic to present time by applying it to changing geography as described by continental drift and polar wandering. Results show that the gross changes of climate in the Northern Hemisphere can be fully explained by the strong cooling in high latitudes as continents moved poleward. High-latitude mean temperatures in the Northern Hemisphere dropped below the freezing point 10 to 15 m.y. ago, thereby accounting for the late Cenozoic glacial age. Computed meridional temperature gradients for the Northern Hemisphere steepened from 20 to 40 C over the 200-m.y. period, an effect caused primarily by the high-latitude temperature decrease. The primary result of the work is that the cooling that has occurred since the warm Mesozoic period and has culminated in glaciation is explainable wholly by terrestrial processes.
Modelling the effect of time-dependent river dune evolution on bed roughness and stage
Paarlberg, Andries; Dohmen-Janssen, Catarine M.; Hulscher, Suzanne J.M.H.; Termes, A.P.P.; Schielen, Ralph Mathias Johannes
2010-01-01
This paper presents an approach to incorporate time-dependent dune evolution in the determination of bed roughness coefficients applied in hydraulic models. Dune roughness is calculated by using the process-based dune evolution model of Paarlberg et al. (2009) and the empirical dune roughness
Braun, J.
2003-04-01
In recent years much work has been devoted to improving our understanding of the coupling between surface processes, climate and tectonics. Thanks to improved computer power and state-of-the-art computational methods, numerical models of crustal deformation have been developed that allow for a fully dynamical study of the coupling between tectonic processes and surface erosion in active mountain belts. These models have demonstrated that the large-scale morphology of orogenic belts may be strongly influenced by the nature and intensity of erosional processes which, in turn, are related to local climatic conditions. To properly understand this important feedback, which arises from the large gravitational stresses generated by vertical movement of the Earth's surface, we must obtain constraints on (a) the rate at which surface processes operate and (b) how rapidly tectonic processes adjust to temporal variations in erosion rates. I propose that numerical models are necessary tools to derive useful, quantitative information on the rate of Earth processes from a wide range of geological and geophysical observations. For example, thermochronological data can be used to determine the rate at which rocks are exhumed towards the surface. I will show how, by combining a landscape evolution model with a numerical model of heat transfer in the crust, one can use thermochronological datasets to derive direct information on the rate of landform evolution through geological time, as well as the rate of mean rock exhumation in a variety of tectonic settings. I will also demonstrate how numerical models can be used as spatial and temporal integrators to extract from spatially sparse datasets important information on Earth system behaviour. This point will be illustrated by showing how one can derive estimates of the relative importance of a variety of soil transport mechanisms from field measurements of soil thickness, surface curvature and rate of soil production at a small number
Three-dimensional point-cloud room model for room acoustics simulations
DEFF Research Database (Denmark)
Markovic, Milos; Olesen, Søren Krarup; Hammershøi, Dorte
2013-01-01
acquisition and its representation with a 3D point-cloud model, as well as utilization of such a model for the room acoustics simulations. A room is scanned with a commercially available input device (Kinect for Xbox360) in two different ways; the first one involves the device placed in the middle of the room...
Point vortex modelling of the wake dynamics behind asymmetric vortex generator arrays
Baldacchino, D.; Simao Ferreira, C.; Ragni, D.; van Bussel, G.J.W.
2016-01-01
In this work, we present a simple inviscid point vortex model to study the dynamics of asymmetric vortex rows, as might appear behind misaligned vortex generator vanes. Starting from the existing solution of the infinite vortex cascade, a numerical model of four base-vortices is chosen to represent
DEFF Research Database (Denmark)
Barfod, Adrian; Vilhelmsen, Troels Norvin; Jørgensen, Flemming
2017-01-01
Accurately predicting the flow of groundwater requires a hydrostratigraphic model, which describes the structural architecture. State-of-the-art Multiple-Point Statistical (MPS) tools are readily available for creating models depicting subsurface geology. We present a study of the impact of key p...
Caulkins, J.P.; Feichtinger, G.; Grass, D.; Hartl, R.F.; Kort, P.M.; Novak, A.J.; Seidl, A.
We present a novel model of corruption dynamics in the form of a nonlinear optimal dynamic control problem. It has a tipping point, but one whose origins and character are distinct from that in the classic Schelling (1978) model. The decision maker choosing a level of corruption is the chief or some
Improving the Pattern Reproducibility of Multiple-Point-Based Prior Models Using Frequency Matching
DEFF Research Database (Denmark)
Cordua, Knud Skou; Hansen, Thomas Mejer; Mosegaard, Klaus
2014-01-01
Some multiple-point-based sampling algorithms, such as the snesim algorithm, rely on sequential simulation. The conditional probability distributions that are used for the simulation are based on statistics of multiple-point data events obtained from a training image. During the simulation, data events with zero probability in the training image statistics may occur. This is handled by pruning the set of conditioning data until an event with non-zero probability is found. The resulting probability distribution sampled by such algorithms is a pruned mixture model. The pruning strategy leads... model. The multiple-point statistics of outcome realizations from this combined model has an improved degree of match with the statistics from the training image. An efficient algorithm that samples this combined model is suggested. Finally, a tomographic cross-borehole inverse problem with prior...
Nutrient-dependent/pheromone-controlled adaptive evolution: a model
Directory of Open Access Journals (Sweden)
James Vaughn Kohl
2013-06-01
Full Text Available Background: The prenatal migration of gonadotropin-releasing hormone (GnRH) neurosecretory neurons allows nutrients and human pheromones to alter GnRH pulsatility, which modulates the concurrent maturation of the neuroendocrine, reproductive, and central nervous systems, thus influencing the development of ingestive behavior, reproductive sexual behavior, and other behaviors. Methods: This model details how chemical ecology drives adaptive evolution via: (1) ecological niche construction, (2) social niche construction, (3) neurogenic niche construction, and (4) socio-cognitive niche construction. This model exemplifies the epigenetic effects of olfactory/pheromonal conditioning, which alters genetically predisposed, nutrient-dependent, hormone-driven mammalian behavior and choices for pheromones that control reproduction via their effects on luteinizing hormone (LH) and systems biology. Results: Nutrients are metabolized to pheromones that condition behavior in the same way that food odors condition behavior associated with food preferences. The epigenetic effects of olfactory/pheromonal input calibrate and standardize molecular mechanisms for genetically predisposed receptor-mediated changes in intracellular signaling and stochastic gene expression in GnRH neurosecretory neurons of brain tissue. For example, glucose and pheromones alter the hypothalamic secretion of GnRH and LH. A form of GnRH associated with sexual orientation in yeasts links control of the feedback loops and developmental processes required for nutrient acquisition, movement, reproduction, and the diversification of species from microbes to man. Conclusion: An environmental drive evolved from that of nutrient ingestion in unicellular organisms to that of pheromone-controlled socialization in insects. In mammals, food odors and pheromones cause changes in hormones such as LH, which has developmental effects on pheromone-controlled sexual behavior in nutrient-dependent reproductively
Agent-Based Model to Study and Quantify the Evolution Dynamics of Android Malware Infection
National Research Council Canada - National Science Library
Alegre-Sanahuja, Juan; Camacho, Javier; Cortés López, Juan Carlos; Santonja, Francisco-José; Villanueva Micó, Rafael Jacinto
2014-01-01
.... In this paper, we propose an agent-based model to quantify the Android malware infection evolution, modeling the behavior of the users and the different markets where the users may download Apps...
DEFF Research Database (Denmark)
Møller, Jesper; Diaz-Avalos, Carlos
2010-01-01
Spatio-temporal Cox point process models with a multiplicative structure for the driving random intensity, incorporating covariate information into temporal and spatial components, and with a residual term modelled by a shot-noise process, are considered. Such models are flexible and tractable fo...
Evolution of Neural Dynamics in an Ecological Model
Directory of Open Access Journals (Sweden)
Steven Williams
2017-07-01
Full Text Available What is the optimal level of chaos in a computational system? If a system is too chaotic, it cannot reliably store information. If it is too ordered, it cannot transmit information. A variety of computational systems exhibit dynamics at the “edge of chaos”, the transition between the ordered and chaotic regimes. In this work, we examine the evolved neural networks of Polyworld, an artificial life model consisting of a simulated ecology populated with biologically inspired agents. As these agents adapt to their environment, their initially simple neural networks become increasingly capable of exhibiting rich dynamics. Dynamical systems analysis reveals that natural selection drives these networks toward the edge of chaos until the agent population is able to sustain itself. After this point, the evolutionary trend stabilizes, with neural dynamics remaining on average significantly far from the transition to chaos.
Bayesian Modeling for Identification and Estimation of the Learning Effects of Pointing Tasks
Kyo, Koki
Recently, in the field of human-computer interaction, a model containing the systematic factor and human factor has been proposed to evaluate the performance of the input devices of a computer. This is called the SH-model. In this paper, in order to extend the range of application of the SH-model, we propose some new models based on the Box-Cox transformation and apply a Bayesian modeling method for identification and estimation of the learning effects of pointing tasks. We consider the parameters describing the learning effect as random variables and introduce smoothness priors for them. Illustrative results show that the newly-proposed models work well.
Advances in Modeling the initiation and evolution of CMEs through the Solar WInd
Riley, P.; Mikic, Z.; Linker, J. A.; Torok, T.; Lionello, R.; Titov, V. S.
2011-12-01
Over the last decade, several factors have led to remarkable gains in our ability to realistically model a coronal mass ejection (CME) all the way from the solar surface to 1 AU, or beyond. First, global models of the ambient solar corona and inner heliosphere have improved dramatically. The algorithms have transitioned from simple polytropic prescriptions to rich thermodynamic models that can reproduce the essential features of remote solar observations and in-situ measurements. Second, theories of CME initiation, and their implementation into numerical models, have developed to the point that a range of complex mechanisms can now be simulated with great fidelity. Third, the original serial codes are now fully parallelized allowing them to recruit thousands of processors, and with this, the ability to simulate events on unprecedented temporal and spatial scales. And fourth, successive NASA-led missions are returning ever-more resolved and accurate photospheric magnetic field observations from which boundary conditions can be derived. In this talk, we show how these factors have allowed us to produce event-specific simulations that provide genuine insight into the initiation and evolution of CMEs, and contrast these results with what was "state-of-the-art" only 10 years ago. We close by speculating on what the next advances in global CME models might be.
Exact Logarithmic Four-Point Functions in the Critical Two-Dimensional Ising Model
Gori, Giacomo; Viti, Jacopo
2017-11-01
Based on conformal symmetry we propose an exact formula for the four-point connectivities of Fortuin-Kasteleyn clusters in the critical Ising model when the four points are anchored to the boundary. The explicit solution we found displays logarithmic singularities. We check our prediction using Monte Carlo simulations on a triangular lattice, showing excellent agreement. Our findings could shed further light on the formidable task of the characterization of logarithmic conformal field theories and on their relevance in physics.
2016-08-04
Creation of the Driver Fixed Heel Point (FHP) CAD Accommodation Model for Military Ground Vehicle Design. Frank J. Huston II, Gale L. Zielinski (US Army TARDEC, Warren, MI); Matthew P. Reed, PhD (University of Michigan Transportation Research Institute, Ann Arbor, MI). UNCLASSIFIED: Distribution Statement A. Approved for Public Release.
An inversion-relaxation approach for sampling stationary points of spin model Hamiltonians
Hughes, Ciaran; Mehta, Dhagash; Wales, David J.
2014-05-01
Sampling the stationary points of a complicated potential energy landscape is a challenging problem. Here, we introduce a sampling method based on relaxation from stationary points of the highest index of the Hessian matrix. We illustrate how this approach can find all the stationary points for potentials or Hamiltonians bounded from above, which includes a large class of important spin models, and we show that it is far more efficient than previous methods. For potentials unbounded from above, the relaxation part of the method is still efficient in finding minima and transition states, which are usually the primary focus of attention for atomistic systems.
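The idea of combining Newton-type root-finding on the gradient (which converges to stationary points of any Hessian index) with downhill relaxation can be sketched on a toy one-dimensional double well. This is an illustrative analogue under assumed toy dynamics, not the paper's spin-model implementation.

```python
def grad(x):   # V(x) = x**4 - 2*x**2, a double-well toy landscape
    return 4.0 * x**3 - 4.0 * x

def hess(x):
    return 12.0 * x**2 - 4.0

def newton_stationary(x0, tol=1e-10):
    """Newton iteration on V'(x) = 0: converges to a nearby stationary point
    of any Hessian index (here, the maximum at x = 0)."""
    x = x0
    for _ in range(100):
        step = grad(x) / hess(x)
        x -= step
        if abs(step) < tol:
            break
    return x

x_max = newton_stationary(0.3)        # the index-1 stationary point at 0
minima = set()
for eps in (-1e-3, 1e-3):             # nudge off the maximum and relax
    x = x_max + eps
    for _ in range(2000):
        x -= 0.01 * grad(x)           # plain gradient relaxation
    minima.add(round(x, 6))
print(sorted(minima))  # the two wells at x = -1 and x = +1
```

Relaxing downhill from the highest-index stationary point reaches both minima, mirroring the strategy of the abstract: use the top of the landscape as a vantage point from which the rest of the stationary points can be reached.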
Flash Point Measurements and Modeling for Ternary Partially Miscible Aqueous-Organic Mixtures
Liaw, Horng-Jang; Gerbaud, Vincent; Wu, Hsuan-Ta
2010-01-01
Flash point is the most important variable used to characterize the fire and explosion hazard of liquids. This paper presents the first flash-point measurements and modeling for partially miscible aqueous-organic mixtures: the ternary type-I mixtures water + ethanol + 1-butanol and water + ethanol + 2-butanol, and the type-II mixture water + 1-butanol + 2-butanol. Results reveal that the flash points are constant along each tie line. Handling the non-ideality of the liquid phase through the use of...
Non-Linear Aeroelastic Analysis Using the Point Transformation Method, Part 1: Freeplay Model
LIU, L.; WONG, Y. S.; LEE, B. H. K.
2002-05-01
A point transformation technique is developed to investigate the non-linear behavior of a two-dimensional aeroelastic system with freeplay models. Two formulations of the point transformation method are presented, which can be applied to accurately predict the frequency and amplitude of limit cycle oscillations. Moreover, it is demonstrated that the developed formulations are capable of detecting complex aeroelastic responses such as periodic motions with harmonics, period doubling, chaotic motions and the coexistence of stable limit cycles. Applications of the point transformation method to several test examples are presented. It is concluded that the formulations developed in this paper are efficient and effective.
A Labeling Model Based on the Region of Movability for Point-Feature Label Placement
Directory of Open Access Journals (Sweden)
Lin Li
2016-09-01
Full Text Available Automatic point-feature label placement (PFLP) is a fundamental task for map visualization. As the dominant solutions to the PFLP problem, fixed-position and slider models have been widely studied in previous research. However, the candidate labels generated with these models are restricted to certain fixed positions or to a specified track line for sliding. Thus, the whole space surrounding a point feature is not fully used for labeling. Hence, this paper proposes a novel label model based on the region of movability, a concept drawn from plane collision detection theory. The model defines a complete conflict-free search space for label placement. Provided there is no conflict with point, line, and area features, the proposed model utilizes the surrounding zone of the point feature to generate candidate label positions. By combining the model with a heuristic search method, high-quality label placement is achieved. In addition, the flexibility of the proposed model enables placing arbitrarily shaped labels.
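The candidate-generation idea, i.e. sweeping the whole zone around a point feature rather than a few fixed slots, can be sketched as follows. This toy stand-in uses a fixed-radius ring of positions and axis-aligned rectangle tests; the paper's region-of-movability construction is more general, so all names and parameters here are illustrative assumptions.

```python
import math

def rects_overlap(a, b):
    """Axis-aligned rectangles given as (xmin, ymin, xmax, ymax)."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def candidate_labels(px, py, w, h, obstacles, r=1.0, n=36):
    """Sweep a full ring of n positions around the point feature (px, py) and
    keep every w-by-h label rectangle that clears all obstacle rectangles."""
    out = []
    for i in range(n):
        ang = 2.0 * math.pi * i / n
        cx, cy = px + r * math.cos(ang), py + r * math.sin(ang)
        rect = (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)
        if all(not rects_overlap(rect, ob) for ob in obstacles):
            out.append(rect)
    return out

# an obstacle sitting east of the point removes the eastern candidates,
# leaving the rest of the ring free for labeling
cands = candidate_labels(0.0, 0.0, 0.6, 0.3, obstacles=[(0.5, -2.0, 3.0, 2.0)])
print(len(cands))
```

A heuristic search (as in the paper) would then score the surviving candidates, e.g. preferring positions to the upper right, rather than taking the first conflict-free one.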
Energy Technology Data Exchange (ETDEWEB)
Li, Dongsheng; Ahzi, Said; M' Guil, S. M.; Wen, Wei; Lavender, Curt A.; Khaleel, Mohammad A.
2014-01-06
The viscoplastic intermediate φ-model was applied in this work to predict the deformation behavior and texture evolution in a magnesium alloy, an HCP material. We simulated the deformation behavior with different intergranular interaction strengths and compared the predicted results with available experimental results. In this approach, elasticity is neglected and the plastic deformation mechanisms are assumed to be a combination of crystallographic slip and twinning systems. Tests are performed for rolling (plane strain compression) of a randomly textured Mg polycrystal as well as for tensile and compressive tests on rolled Mg sheets. Simulated texture evolutions agree well with experimental data. The activities of twinning and slip predicted by the intermediate φ-model reveal strongly anisotropic behavior during tension and compression of rolled sheets.
Fast calculation method of a CGH for a patch model using a point-based method.
Ogihara, Y; Sakamoto, Y
2015-01-01
Holography is a three-dimensional display technology. Computer-generated holograms (CGHs) are created by simulating light propagation on a computer, and they are able to display a virtual object. There are two main types of CGH calculation methods: the point-based method and the fast Fourier-transform (FFT)-based method. The FFT-based method is based on a patch model, and it is suited to accelerating the calculations because it computes the light propagation across a patch as a whole. The calculations of the point-based method are characterized by a high degree of parallelism, so it is well suited to acceleration on graphics processing units (GPUs), but it is not directly suitable for calculation with the patch model. This paper proposes a fast calculation algorithm for a patch model with the point-based method. The proposed method calculates each line on a patch as a whole, regardless of the number of points on the line. When the proposed method is implemented on a GPU, its calculation time is shorter than that of the conventional point-based method.
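The conventional point-based CGH computation the paper builds on can be sketched as a superposition of spherical waves, one per object point, sampled on the hologram plane. The wavelength, geometry, and sampling below are illustrative assumptions, not values from the paper.

```python
import numpy as np

WAVELENGTH = 633e-9                  # assumed HeNe red wavelength
K = 2.0 * np.pi / WAVELENGTH         # wavenumber

def cgh_field(points, amps, xs, ys):
    """Point-based CGH: superpose one spherical wave per object point on the
    hologram plane z = 0. Cost is O(points * pixels) and every pixel is
    independent, which is why this method maps so well onto GPUs."""
    X, Y = np.meshgrid(xs, ys)
    H = np.zeros(X.shape, dtype=complex)
    for (px, py, pz), a in zip(points, amps):
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        H += a * np.exp(1j * K * r) / r
    return H

xs = ys = np.linspace(-1e-3, 1e-3, 256)       # 2 mm square hologram plane
points = [(0.0, 0.0, 0.1), (2e-4, 0.0, 0.1)]  # two object points 10 cm away
H = cgh_field(points, [1.0, 1.0], xs, ys)
phase_hologram = np.angle(H)                  # phase pattern to be displayed
print(H.shape)
```

The paper's contribution is to avoid this per-point loop for patch models by treating whole lines on a patch at once; the sketch above is only the baseline the speed-up is measured against.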
Weak gravity conjecture, multiple point principle and the standard model landscape
Hamada, Yuta; Shiu, Gary
2017-11-01
The requirement for an ultraviolet completable theory to be well-behaved upon compactification has been suggested as a guiding principle for distinguishing the landscape from the swampland. Motivated by the weak gravity conjecture and the multiple point principle, we investigate the vacuum structure of the standard model compactified on S 1 and T 2. The measured value of the Higgs mass implies, in addition to the electroweak vacuum, the existence of a new vacuum where the Higgs field value is around the Planck scale. We explore two- and three-dimensional critical points of the moduli potential arising from compactifications of the electroweak vacuum as well as this high-scale vacuum, in the presence of Majorana/Dirac neutrinos and/or axions. We point out potential sources of instability for these lower-dimensional critical points in the standard model landscape. We also point out that a high-scale AdS4 vacuum of the Standard Model, if it exists, would be at odds with the conjecture that all non-supersymmetric AdS vacua are unstable. We argue that, if we require a degeneracy between three- and four-dimensional vacua as suggested by the multiple point principle, the neutrinos are predicted to be Dirac, with the mass of the lightest neutrino ≈ O(1-10) meV, which may be tested by future CMB, large scale structure and 21cm line observations.
Search for a liquid-liquid critical point in models of silica
Lascaris, Erik; Hemmati, Mahin; Buldyrev, Sergey V.; Stanley, H. Eugene; Angell, C. Austen
2014-01-01
Previous research has indicated the possible existence of a liquid-liquid critical point (LLCP) in models of silica at high pressure. To clarify this interesting question we run extended molecular dynamics simulations of two different silica models (WAC and BKS) and perform a detailed analysis of the liquid at temperatures much lower than those previously simulated. We find no LLCP in either model within the accessible temperature range, although it is closely approached in the case of the WA...
Synthesis of Numerical Methods for Modeling Wave Energy Converter-Point Absorbers: Preprint
Energy Technology Data Exchange (ETDEWEB)
Li, Y.; Yu, Y. H.
2012-05-01
During the past few decades, wave energy has received significant attention among all forms of ocean energy. Industry has proposed hundreds of prototypes, such as the oscillating water column, the point absorber, the overtopping system, and the bottom-hinged system. In particular, many researchers have focused on modeling the floating-point absorber as the technology to extract wave energy. Several modeling methods have been used, such as the analytical method, the boundary-integral equation method, the Navier-Stokes equations method, and the empirical method. However, no standardized method has been established. To assist the development of wave energy conversion technologies, this report reviews the methods for modeling the floating-point absorber.
Energy Technology Data Exchange (ETDEWEB)
Dan, Ho Jin; Lee, Joon Sik [Seoul National University, Seoul (Korea, Republic of)
2016-03-15
Understanding water vaporization is the first step in anticipating the conversion of urea into ammonia in the exhaust stream. As aqueous urea is a mixture in which urea acts as a non-volatile solute, its colligative properties should be considered during water vaporization. The boiling-point elevation of urea water solution is measured with respect to urea mole fraction. With the boiling-point elevation relation, a model for water vaporization is proposed that corrects the heat of vaporization of water in the urea water mixture for the enthalpy of urea dissolution in water. The model is also verified by water vaporization experiments. Finally, the water vaporization model is applied to the vaporization of water from aqueous urea droplets. It is shown that urea decomposition can begin before water evaporation finishes due to the boiling-point elevation.
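The colligative effect described above can be illustrated with a minimal, hedged sketch: the dilute-limit ebullioscopic formula dTb = Kb * m (with Kb ≈ 0.512 K·kg/mol for water and molality m computed from the urea mass fraction) is a textbook approximation, not the measured relation used in the study, and it deviates at high urea concentrations.

```python
# Ideal-solution estimate of boiling-point elevation for aqueous urea.
# Hedged sketch: the paper measures the elevation experimentally; the
# colligative formula dTb = Kb * m below is only the dilute-limit
# approximation and becomes inaccurate for concentrated solutions.

M_UREA = 60.06    # g/mol, molar mass of urea
KB_WATER = 0.512  # K.kg/mol, ebullioscopic constant of water

def boiling_point_elevation(mass_frac_urea):
    """Estimate dTb (K) from the urea mass fraction of the solution."""
    grams_urea = 1000.0 * mass_frac_urea          # per kg of solution
    kg_water = 1.0 - mass_frac_urea               # solvent mass per kg solution
    molality = (grams_urea / M_UREA) / kg_water   # mol solute / kg solvent
    return KB_WATER * molality

if __name__ == "__main__":
    # 32.5 wt% urea (AdBlue-like) boils roughly 4 K above pure water
    print(round(boiling_point_elevation(0.325), 2))
```

For a 32.5 wt% solution this ideal estimate gives an elevation of roughly 4 K; the measured elevation in concentrated solutions can differ, which is one reason the study determines the relation experimentally.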
Empiric model for mean generation time adjustment factor for classic point kinetics equations
Energy Technology Data Exchange (ETDEWEB)
Goes, David A.B.V. de; Martinez, Aquilino S.; Goncalves, Alessandro da C., E-mail: david.goes@poli.ufrj.br, E-mail: aquilino@lmp.ufrj.br, E-mail: alessandro@con.ufrj.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Departamento de Engenharia Nuclear
2017-11-01
Point reactor kinetics equations are the easiest way to observe the neutron production time behavior in a nuclear reactor. These equations are derived from the neutron transport equation using an approximation called Fick's law, leading to a set of first-order differential equations. The main objective of this study is to review the classic point kinetics equations in order to bring their results closer to the case in which the time variation of the neutron currents is considered. The computational modeling used for the calculations is based on the finite difference method. The results obtained with this model are compared with the reference model, and an empirical adjustment factor is determined that modifies the point reactor kinetics equations to fit the real scenario. (author)
Obliquity variation in a Mars climate evolution model
Tyler, D.; Haberle, Robert M.
1993-01-01
The existence of layered terrain in both polar regions of Mars is strong evidence supporting a cyclic variation in climate. It has been suggested that periods of net deposition have alternated with periods of net erosion in creating the layered structure that is seen today. The cause of this cyclic climatic behavior is variation in the annually averaged latitudinal distribution of solar insolation in response to obliquity cycles. For Mars, obliquity variation leads to major climatological excursions due to the condensation and sublimation of the major atmospheric constituent, CO2. The atmosphere will collapse into the polar caps, or existing caps will rapidly sublimate into the atmosphere, depending upon the polar surface heat balance and the direction of the change in obliquity. It has been argued that variations in the obliquity of Mars cause substantial departures from the current climatological values of the surface pressure and the amount of CO2 stored in both the planetary regolith and polar caps. In this new work we have modified the Haberle et al. model to incorporate variable obliquity by allowing the polar and equatorial insolation to become functions of obliquity, which we assume to vary sinusoidally in time. As obliquity varies in the model, there can be discontinuities in the time evolution of the model equilibrium values for surface pressure, regolith, and polar cap storage. The time constant, tau_r, for the regolith to find equilibrium with the climate is estimated--depending on the depth, thermal conductivity, and porosity of the regolith--between 10^4 and 10^6 yr. Thus, using 2000-yr timesteps to move smoothly through the 0.125-m.y. obliquity cycles, we have an atmosphere/regolith system that cannot be assumed in equilibrium. We have dealt with this problem by limiting the rate at which CO2 can move between the atmosphere and regolith, mimicking the diffusive nature and effects of the temperature and pressure waves, by setting the time
Generalized coarse-grained model based on point multipole and Gay-Berne potentials.
Golubkov, Pavel A; Ren, Pengyu
2006-08-14
This paper presents a general coarse-grained molecular mechanics model based on electric point multipole expansion and Gay-Berne [J. Chem. Phys. 74, 3316 (1981)] potential. Coarse graining of van der Waals potential is achieved by treating molecules as soft uniaxial ellipsoids interacting via a generalized anisotropic Gay-Berne function. The charge distribution is represented by point multipole expansion, including point charge, dipole, and quadrupole moments placed at the center of mass. The Gay-Berne and point multipole potentials are combined in the local reference frame defined by the inertial frame of the all-atom counterpart. The coarse-grained model has been applied to rigid-body molecular dynamics simulations of molecular liquids including benzene and methanol. The computational efficiency is improved by several orders of magnitude, while the results are in reasonable agreement with all-atom models and experimental data. We also discuss the implications of using point multipole for polar molecules capable of hydrogen bonding and the applicability of this model to a broad range of molecular systems including highly charged biopolymers.
Directory of Open Access Journals (Sweden)
Lei Jia
The thermostability of protein point mutations is a common concern in protein engineering. An application that predicts the thermostability of mutants can help guide the decision-making process in protein design via mutagenesis. An in silico point mutation scanning method is frequently used to find "hot spots" in proteins for focused mutagenesis. ProTherm (http://gibk26.bio.kyutech.ac.jp/jouhou/Protherm/protherm.html) is a public database that consists of thousands of protein mutants' experimentally measured thermostability. Two data sets based on two differently measured thermostability properties of protein single point mutations, namely the unfolding free energy change (ddG) and the melting temperature change (dTm), were obtained from this database. Folding free energy change calculations from Rosetta, structural information of the point mutations, as well as amino acid physical properties were obtained for building thermostability prediction models with informatics modeling tools. Five supervised machine learning methods (support vector machine, random forests, artificial neural network, naïve Bayes classifier, K nearest neighbor) and partial least squares regression were used for building the prediction models. Binary and ternary classifications as well as regression models were built and evaluated. Data set redundancy and balancing, the reverse mutations technique, feature selection, and comparison to other published methods are discussed. The Rosetta-calculated folding free energy change ranked as the most influential feature in all prediction models. Other descriptors also made significant contributions to increasing the accuracy of the prediction models.
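As one illustration of the kind of supervised model the study compares, here is a minimal k-nearest-neighbour classifier written from scratch in NumPy. The "ddG-like" feature and the labels below are synthetic stand-ins, not the ProTherm-derived data, and the feature set is deliberately trivial.

```python
import numpy as np

# Minimal k-nearest-neighbour classifier of the kind the study evaluates.
# The features (a Rosetta-like ddG score plus an uninformative descriptor)
# are synthetic stand-ins, not the actual ProTherm data.

def knn_predict(X_train, y_train, X_test, k=5):
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)      # Euclidean distances
        nearest = y_train[np.argsort(d)[:k]]         # labels of k nearest points
        preds.append(np.bincount(nearest).argmax())  # majority vote
    return np.array(preds)

rng = np.random.default_rng(0)
ddg = rng.normal(0.0, 2.0, 300)           # synthetic ddG-like values
noise = rng.normal(0.0, 1.0, 300)         # uninformative descriptor
X = np.column_stack([ddg, noise])
y = (ddg > 0).astype(int)                 # 1 = destabilising mutant (by construction)
acc = (knn_predict(X[:200], y[:200], X[200:]) == y[200:]).mean()
print(f"held-out accuracy: {acc:.2f}")
```

Because the label is defined directly from the first feature, a distance-based vote recovers it well; with real thermostability data the signal is far weaker, which is why the study benchmarks several learners and feature sets.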
Energy Technology Data Exchange (ETDEWEB)
Liu, Youshan, E-mail: ysliu@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); Teng, Jiwen, E-mail: jwteng@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); Xu, Tao, E-mail: xutao@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); CAS Center for Excellence in Tibetan Plateau Earth Sciences, Beijing, 100101 (China); Badal, José, E-mail: badal@unizar.es [Physics of the Earth, Sciences B, University of Zaragoza, Pedro Cerbuna 12, 50009 Zaragoza (Spain)
2017-05-01
The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant–Friedrichs–Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational
A high-resolution model for soft tissue deformation based on point primitives.
Zou, Yanni; Liu, Peter X
2017-09-01
In order to achieve a high degree of visual realism in surgery simulation, we propose a new model, based on point primitives and continuum elastic mechanics theory, for soft tissue deformation, tearing and/or cutting. The model can be described as a two-step local high-resolution strategy. First, appropriate volumetric data are sampled and assigned proper physical properties. Second, sparsely sampled points in non-deformed regions and densely sampled points in the deformed zone are selected and evaluated. By using a meshless deformation model based on point primitives for all volumetric data, the affine transform matrix of collision points can be computed. The new positions of neighboring points in the collided surface can then be calculated, and more details in the local deformed zone can be obtained for rendering. Technical details about the derivation of the proposed model as well as its implementation are given. The visual effects and computational cost of the proposed model are evaluated and compared with conventional primitives-based methods. Experimental results show that the proposed model provides users (trainees) with improved visual feedback while the computational cost is of the same magnitude as other similar methods. The proposed method is especially suitable for the simulation of soft tissue deformation and tearing because no grid information needs to be maintained. It can simulate soft tissue deformation with a high degree of authenticity in real time, and could be incorporated into the development of mixed-reality neurosurgery simulators in the future.
Wu, Wei; Jia, Fan; Kinai, Richard; Little, Todd D.
2017-01-01
Spline growth modelling is a popular tool to model change processes with distinct phases and change points in longitudinal studies. Focusing on linear spline growth models with two phases and a fixed change point (the transition point from one phase to the other), we detail how to find optimal data collection designs that maximize the efficiency…
Monti, Jonathan
2017-03-01
Advances in technology and increased affordability of machines have allowed ultrasound to become ubiquitous across the spectrum of medical care. Increasing portability has brought ultrasound to the point of care in multiple medical specialties. Formal ultrasound training is rapidly being incorporated into multispecialty residency programs and undergraduate medical education curricula, yet little formal training exists for physician assistants (PAs) on this emerging clinical adjunct. This article outlines recommendations for and barriers to the incorporation of bedside ultrasound into PA clinical practice.
Markov Random Field Restoration of Point Correspondences for Active Shape Modelling
DEFF Research Database (Denmark)
Hilger, Klaus Baggesen; Paulsen, Rasmus Reinhold; Larsen, Rasmus
2004-01-01
In this paper it is described how to build a statistical shape model using a training set with a sparse set of landmarks. A well-defined model mesh is selected and fitted to all shapes in the training set using thin plate spline warping. This is followed by a projection of the points of the warped...... model mesh to the target shapes. When this is done by a nearest neighbour projection it can result in folds and inhomogeneities in the correspondence vector field. The novelty in this paper is the use and extension of a Markov random field regularisation of the correspondence field. The correspondence...... model that produces highly homogeneous polygonised shapes with improved reconstruction capabilities of the training data. Furthermore, the method leads to an overall reduction in the total variance of the resulting point distribution model. The method is demonstrated on a set of human ear canals...
Modeling Continental Growth and Mantle Hydration in Earth's Evolution and the Impact of Life
Höning, Dennis; Spohn, Tilman
2016-04-01
The evolution of planets with plate tectonics is significantly affected by several intertwined feedback cycles. On Earth, interactions between atmosphere, hydrosphere, biosphere, crust, and interior determine its present day state. We here focus on the feedback cycles including the evolutions of mantle water budget and continental crust, and investigate possible effects of the Earth's biosphere. The first feedback loop includes cycling of water into the mantle at subduction zones and outgassing at volcanic chains and mid-ocean ridges. Water is known to reduce the viscosity of mantle rock, and therefore the speed of mantle convection and plate subduction will increase with the water concentration, eventually enhancing the rates of mantle water regassing and outgassing. A second feedback loop includes the production and erosion of continental crust. Continents are formed above subduction zones, whose total length is determined by the total size of the continents. Furthermore, the total surface area of continental crust determines the amount of eroded sediments per unit time. Subducted sediments affect processes in subduction zones, eventually enhancing the production rate of new continental crust. Both feedback loops affect each other: As a wet mantle increases the speed of subduction, continental production also speeds up. On the other hand, the total length of subduction zones and the rate at which sediments are subducted (both being functions of continental coverage) affect the rate of mantle water regassing. We here present a model that includes both cycles and show how the system develops stable and unstable fixed points in a plane defined by mantle water concentration and surface of continents. We couple these feedback cycles to a parameterized thermal evolution model that reproduces present day observations. We show how Earth has been affected by these feedback cycles during its evolution, and argue that Earth's present day state regarding its mantle water
An interpretation model of GPR point data in tunnel geological prediction
He, Yu-yao; Li, Bao-qi; Guo, Yuan-shu; Wang, Teng-na; Zhu, Ya
2017-02-01
GPR (Ground Penetrating Radar) point data plays an absolutely necessary role in tunnel geological prediction. However, little research has been done on GPR point data, and the existing results do not meet the actual requirements of projects. In this paper, a GPR point data interpretation model based on the WD (Wigner distribution) and a deep CNN (convolutional neural network) is proposed. First, the GPR point data are transformed by the WD to obtain time-frequency joint distribution maps; second, the joint distribution maps are classified by the deep CNN, while the approximate location of the geological target is determined in parallel by inspecting the time-frequency map; finally, the GPR point data are interpreted according to the classification results and the position information from the map. The simulation results show that the classification accuracy on the test dataset (comprising 1200 GPR point data) is 91.83% at the 200th iteration. Our model has the advantages of high accuracy and fast training speed, and can provide a scientific basis for the development of tunnel construction and excavation plans.
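The WD step of the pipeline can be sketched directly; below is a minimal discrete Wigner-Ville distribution in NumPy (the CNN classification stage is omitted). In this simple symmetric-lag discretization, a pure complex tone at frequency bin f0 concentrates its energy at bin 2*f0, because the instantaneous autocorrelation oscillates at twice the tone's frequency.

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution: FFT over the lag variable of
    the instantaneous autocorrelation x[t + tau] * conj(x[t - tau])."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    wvd = np.zeros((n, n))
    for t in range(n):
        tau_max = min(t, n - 1 - t)            # largest lag staying in-bounds
        kernel = np.zeros(n, dtype=complex)
        for tau in range(-tau_max, tau_max + 1):
            kernel[tau % n] = x[t + tau] * np.conj(x[t - tau])
        wvd[:, t] = np.fft.fft(kernel).real    # frequency axis down the columns
    return wvd

n = 64
tone = np.exp(2j * np.pi * 8 * np.arange(n) / n)  # single frequency, f0 = 8
W = wigner_ville(tone)
print(np.argmax(W[:, n // 2]))  # energy concentrates at bin 2*f0 = 16
```

Practical implementations rescale the frequency axis (or sample at half-integer lags) to undo this factor of two; the paper's pipeline then feeds such time-frequency maps to a CNN for classification.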
Directory of Open Access Journals (Sweden)
Nengcheng Chen
2017-02-01
Due to the incomplete and inconsistent description of spatial and temporal information for city data observed by sensors in various fields, sharing the massive, multi-source and heterogeneous interdisciplinary instant point observation data resources is a great challenge. In this paper, a spatio-temporal enhanced metadata model for point observation data sharing is proposed. The proposed Data Meta-Model (DMM) focuses on the spatio-temporal characteristics and formulates a ten-tuple information description structure to provide a unified and spatio-temporal enhanced description of the point observation data. To verify the feasibility of point observation data sharing based on the DMM, a prototype system was established, and the performance improvement of the Sensor Observation Service (SOS) for the instant access and insertion of point observation data was realized through the proposed MongoSOS, a Not Only SQL (NoSQL) SOS based on the MongoDB database with distributed storage capability. For example, the response time for the access and insertion of navigation and positioning data reaches the millisecond level. Case studies were conducted, including the monitoring of gas concentrations for gas leak emergency response and smart city public vehicle monitoring based on the BeiDou Navigation Satellite System (BDS), used for recording dynamic observation information. The results demonstrate the versatility and extensibility of the DMM, and the spatio-temporal enhanced sharing of interdisciplinary instant point observations in smart cities.
Point model equations for neutron correlation counting: Extension of Böhnel's equations to any order
Favalli, Andrea; Croft, Stephen; Santi, Peter
2015-09-01
Various methods of autocorrelation neutron analysis may be used to extract information about a measurement item containing spontaneously fissioning material. The two predominant approaches are the time-correlation analysis methods (which make use of a coincidence gate) of multiplicity shift register logic and of Feynman sampling. The common feature is that the correlated nature of the pulse train can be described by a vector of reduced factorial multiplet rates. We call these singlets, doublets, triplets, etc. Within the point reactor model the multiplet rates may be related to the properties of the item, the parameters of the detector, and basic nuclear data constants by a series of coupled algebraic equations - the so-called point model equations. Solving, or inverting, the point model equations using experimental calibration model parameters is how assays of unknown items are performed. Currently only the first three multiplets are routinely used. In this work we develop the point model equations to higher-order multiplets using the probability generating functions approach combined with the general derivative chain rule, the so-called Faà di Bruno formula. Explicit expressions up to 5th order are provided, as well as the general iterative formula to calculate any order. This work represents the first necessary step towards determining whether higher-order multiplets can add value to nondestructive measurement practice for nuclear materials control and accountancy.
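For a finite multiplicity distribution, the reduced factorial moments underlying the multiplet rates have the closed form nu_r = sum_k C(k, r) * p_k, equivalently the r-th derivative of the probability generating function at z = 1 divided by r!. The sketch below computes them for an illustrative, made-up distribution; the full point-model equations relating multiplets to item and detector properties are beyond this snippet.

```python
from math import comb

def reduced_factorial_moments(p, max_order):
    """nu_r = sum_k C(k, r) * p_k for a finite multiplicity distribution p,
    i.e. the reduced factorial moments (singlet, doublet, triplet, ...).
    Equivalent to (1/r!) d^r G/dz^r at z = 1 for the pgf G(z) = sum_k p_k z^k."""
    return [sum(comb(k, r) * pk for k, pk in enumerate(p))
            for r in range(1, max_order + 1)]

# Hypothetical multiplicity distribution p[k] = P(k neutrons emitted);
# these numbers are illustrative only, not nuclear data.
p = [0.2, 0.5, 0.3]
print(reduced_factorial_moments(p, 3))  # singlet, doublet, triplet
```

For this distribution the singlet is the mean (1.1), the doublet counts pairs (0.3 from the k = 2 term), and the triplet vanishes because no event emits three or more.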
3D Modeling of Components of a Garden by Using Point Cloud Data
Directory of Open Access Journals (Sweden)
R. Kumazakia
2016-06-01
Laser measurement is currently applied to several tasks such as plumbing management, road investigation through mobile mapping systems, and elevation model utilization through airborne LiDAR. Effective laser measurement methods have been well documented in civil engineering, but few attempts have been made to establish equally effective methods in landscape engineering. By using point cloud data acquired through laser measurement, the aesthetic landscaping of Japanese gardens can be enhanced. This study focuses on simple landscape simulations for pruning and rearranging trees as well as rearranging rocks, lanterns, and other garden features by using point cloud data. However, such simulations lack concreteness. Therefore, this study considers the construction of a library of garden features extracted from point cloud data. The library would serve as a resource for creating new gardens and simulating gardens prior to conducting repairs. Extracted garden features are imported as 3ds Max objects, and realistic 3D models are generated by using a material editor system. As further work toward the publication of a 3D model library, file formats for tree crowns and trunks should be adjusted, and the size of the created models needs to be reduced. Models created using point cloud data are informative because simply shaped garden features such as trees are often seen in the 3D industry.
Yang, Yang; Lichtenwalter, Ryan N; Dong, Yuxiao
2016-01-01
What drives social network dynamics? Social influence is believed to drive both off-line and on-line human behavior; however, it has not been considered as a driver of social network evolution. Our analysis suggests that, while the network structure affects the spread of influence in social networks, the network is in turn shaped by social influence activity (i.e., the process of social influence wherein one person's attitudes and behaviors affect another's). To that end, we develop a novel model of network evolution in which the dynamics of the network follow the mechanism of influence propagation, which is not captured by existing network evolution models. Our experiments confirm the predictions of our model and demonstrate the important role that social influence can play in the process of network evolution. Beyond exploring the causes of social network evolution, we find that different genres of social influence have different effects on the network dynamics. These findings and ...
Potential energy landscape of the two-dimensional XY model: Higher-index stationary points
Mehta, D.; Hughes, C.; Kastner, M.; Wales, D. J.
2014-06-01
The application of numerical techniques to the study of energy landscapes of large systems relies on sufficient sampling of the stationary points. Since the number of stationary points is believed to grow exponentially with system size, we can only sample a small fraction. We investigate the interplay between this restricted sample size and the physical features of the potential energy landscape for the two-dimensional XY model in the absence of disorder with up to N = 100 spins. Using an eigenvector-following technique, we numerically compute stationary points with a given Hessian index I for all possible values of I. We investigate the number of stationary points, their energy and index distributions, and other related quantities, with particular focus on the scaling with N. The results are used to test a number of conjectures and approximate analytic results for the general properties of energy landscapes.
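A minimal version of the stationary-point classification can be written for a small lattice: the sketch below evaluates the gradient and Hessian of the XY Hamiltonian E = -sum cos(theta_i - theta_j) over nearest neighbours on a periodic L x L lattice and returns the Hessian index (number of negative eigenvalues; the trivial zero mode from global spin rotation is excluded by the tolerance). The eigenvector-following search used in the paper is not reproduced here; only configurations already known to be stationary are classified.

```python
import numpy as np

def neighbors(i, j, L):
    """Nearest neighbours of site (i, j) on a periodic L x L lattice."""
    return [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]

def grad_and_hessian(theta):
    """Gradient and Hessian of E = -sum_<ij> cos(theta_i - theta_j)."""
    L = theta.shape[0]
    n = L * L
    g = np.zeros(n)
    H = np.zeros((n, n))
    for i in range(L):
        for j in range(L):
            a = i * L + j
            for (k, l) in neighbors(i, j, L):
                b = k * L + l
                g[a] += np.sin(theta[i, j] - theta[k, l])
                H[a, a] += np.cos(theta[i, j] - theta[k, l])
                H[a, b] -= np.cos(theta[i, j] - theta[k, l])
    return g, H

def hessian_index(theta, tol=1e-8):
    """Number of negative Hessian eigenvalues at a stationary point."""
    g, H = grad_and_hessian(theta)
    assert np.max(np.abs(g)) < tol, "not a stationary point"
    return int(np.sum(np.linalg.eigvalsh(H) < -tol))

ground = np.zeros((4, 4))    # aligned spins: the global minimum, index 0
flipped = np.zeros((4, 4))
flipped[0, 0] = np.pi        # one overturned spin: a higher-index stationary point
print(hessian_index(ground), hessian_index(flipped))
```

Flipping a single spin by pi keeps every sin term zero (so the configuration is stationary) while making the flipped spin's diagonal Hessian entry negative, guaranteeing at least one negative eigenvalue.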
De Ridder, Simon; Vandermarliere, Benjamin; Ryckebusch, Jan
2016-11-01
A framework based on generalized hierarchical random graphs (GHRGs) for the detection of change points in the structure of temporal networks has recently been developed by Peel and Clauset (2015 Proc. 29th AAAI Conf. on Artificial Intelligence). We build on this methodology and extend it to also include the versatile stochastic block models (SBMs) as a parametric family for reconstructing the empirical networks. We use five different techniques for change point detection on prototypical temporal networks, including empirical and synthetic ones. We find that none of the considered methods can consistently outperform the others when it comes to detecting and locating the expected change points in empirical temporal networks. With respect to the precision and the recall of the results of the change points, we find that the method based on a degree-corrected SBM has better recall properties than other dedicated methods, especially for sparse networks and smaller sliding time window widths.
An Efficient Operator for the Change Point Estimation in Partial Spline Model.
Han, Sung Won; Zhong, Hua; Putt, Mary
2015-05-01
In bioinformatics applications, estimating the starting and ending points of a drop-down in longitudinal data is important. One possible approach to estimating such change times is to use the partial spline model with change points. To estimate the change time, the minimum operator in terms of a smoothing parameter has been widely used, but we show that the minimum operator causes a large MSE of the change point estimates. In this paper, we propose the summation operator in terms of a smoothing parameter, and our simulation study shows that the summation operator gives a smaller MSE for estimated change points than the minimum one. We also apply the proposed approach to experimental data on blood flow during photodynamic cancer therapy.
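The underlying two-phase spline idea can be illustrated with a basic grid search over candidate change points, fitting a continuous broken-stick model by least squares at each candidate. This sketch uses synthetic data and plain least squares; it is not the smoothing-parameter operator the paper analyses.

```python
import numpy as np

def fit_two_phase(x, y, candidates):
    """Grid-search the change point of a continuous two-phase linear fit."""
    best_sse, best_c = np.inf, None
    for c in candidates:
        # basis: intercept, slope, and hinge (x - c)_+ carrying the slope change
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - c, 0.0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = ((y - X @ beta) ** 2).sum()
        if sse < best_sse:
            best_sse, best_c = sse, c
    return best_c

# Synthetic two-phase data: slope changes from -0.5 to -1.5 at x = 4
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 200)
y = np.where(x < 4, 2 - 0.5 * x, -1.5 * (x - 4)) + rng.normal(0, 0.05, 200)
c_hat = fit_two_phase(x, y, np.linspace(1.0, 9.0, 81))
print(c_hat)
```

With low noise the recovered change point lands on or next to the true value of 4; the paper's contribution concerns how the penalty operator behaves when the fit is regularised, which this unpenalised sketch deliberately sidesteps.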
3D-map modelling for the melting points prediction of intumescent flame-retardant coatings.
Korotkov, A S; Gravit, M
2017-08-01
The applicability of 3D map modelling for melting point prediction was studied. The melting points in the ammonium polyphosphate-pentaerythritol-melamine chemical system of intumescent flame-retardant coatings over a wide range of concentrations were collected. The ternary diagram (triangle) of the melting points was plotted and an approximated 3D map was built for the range 205-345°C. The present work contains the thermal data for the observed ternary system and provides a new graphic system for making predictions for intumescent flame-retardant coatings. The applicability of the calculated 3D map for obtaining experimental samples of fire-retardant paints with a low melting point for thin steel constructions was shown.
A Bayesian finite mixture change-point model for assessing the risk of novice teenage drivers.
Li, Qing; Guo, Feng; Kim, Inyoung; Klauer, Sheila G; Simons-Morton, Bruce G
2018-01-01
The driving risk during the initial period after licensure for novice teenage drivers is typically the highest but decreases rapidly right after. The change-point of driving risk is a critical parameter for evaluating teenage driving risk, which also varies substantially among drivers. This paper presents latent class recurrent-event change-point models for detecting the change-points. The proposed model is applied to the Naturalistic Teenage Driving Study, which continuously recorded the driving data of 42 novice teenage drivers for 18 months using advanced in-vehicle instrumentation. We propose a hierarchical Bayesian finite mixture model (BFMM) to estimate the change-points by clusters of drivers with similar risk profiles. The model is based on a non-homogeneous Poisson process with piecewise-constant intensity functions. Latent variables which identify the membership of the subjects are used to detect potential clusters among subjects. Application to the Naturalistic Teenage Driving Study identifies three distinct clusters with change-points at 52.30, 108.99 and 150.20 hours of driving after first licensure, respectively. The overall intensity rate and the pattern of change also differ substantially among clusters. The results of this research provide more insight into teenagers' driving behaviour and will be critical to improving young drivers' safety education and parent management programs, as well as providing crucial reference for graduated driver licensing (GDL) regulations to encourage safer driving.
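The backbone of the model, a piecewise-constant-intensity Poisson process with a change point, can be sketched for a single subject as a profile-likelihood grid search. The hierarchical Bayesian mixture over drivers is well beyond this snippet, and the rates and times below are synthetic.

```python
import numpy as np

def poisson_change_point(times, T, grid):
    """MLE of a single change point for a Poisson process on [0, T] whose
    rate is constant before and after the change (profile likelihood)."""
    times = np.sort(times)
    best_tau, best_ll = None, -np.inf
    for tau in grid:
        n1 = np.searchsorted(times, tau)   # events before the candidate change
        n2 = len(times) - n1               # events after it
        if n1 == 0 or n2 == 0:
            continue
        l1, l2 = n1 / tau, n2 / (T - tau)  # profiled rate estimates
        ll = n1 * np.log(l1) - l1 * tau + n2 * np.log(l2) - l2 * (T - tau)
        if ll > best_ll:
            best_ll, best_tau = ll, tau
    return best_tau

# Synthetic "driving risk" data: crash/near-crash rate drops at 50 hours
rng = np.random.default_rng(2)
early = rng.uniform(0, 50, rng.poisson(2.0 * 50))    # high-risk initial phase
late = rng.uniform(50, 150, rng.poisson(0.5 * 100))  # lower-risk later phase
times = np.concatenate([early, late])
print(poisson_change_point(times, 150.0, np.linspace(5.0, 145.0, 281)))
```

The estimate lands close to the true change at 50 hours; the paper extends this idea with latent cluster memberships so that drivers sharing a risk profile share a change point.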
Diploid biological evolution models with general smooth fitness landscapes and recombination.
Saakian, David B; Kirakosyan, Zara; Hu, Chin-Kun
2008-06-01
Using a Hamilton-Jacobi equation approach, we obtain analytic equations for steady-state population distributions and mean fitness functions for Crow-Kimura and Eigen-type diploid biological evolution models with general smooth hypergeometric fitness landscapes. Our numerical solutions of diploid biological evolution models confirm the analytic equations obtained. We also study the parallel diploid model for the simple case of recombination and calculate the variance of distribution, which is consistent with numerical results.
Predictive modelling of post-harvest quality evolution in perishables, applied to mushrooms
Lukasse, L.J.S.; Polderdijk, J.J.
2003-01-01
A large number of models on post-harvest quality evolution of perishables is available. Yet the number of applications is limited. The most common application is the comparison of shelf-life predictions for different constant temperature scenarios. Application of post-harvest quality evolution
A Model for Gas Dynamics and Chemical Evolution of the Fornax Dwarf Spheroidal Galaxy
Yuan, Zhen
We present an empirical model for the halo evolution, global gas dynamics and chemical evolution of Fornax, the brightest Milky Way (MW) dwarf spheroidal galaxy (dSph). Assuming a global star formation rate psi(t) = lambda*(t)[Mg(t)/M_sun] ...
Nosikov, I. A.; Klimenko, M. V.; Bessarab, P. F.; Zhbankov, G. A.
2017-07-01
Point-to-point ray tracing is an important problem in many fields of science. While direct variational methods, in which some trajectory is transformed into an optimal one, are routinely used in calculations of pathways of seismic waves, chemical reactions, diffusion processes, etc., this approach is not widely known in ionospheric point-to-point ray tracing. We apply the Nudged Elastic Band (NEB) method to a radio wave propagation problem. In the NEB method, a chain of points which gives a discrete representation of the radio wave ray is adjusted iteratively to an optimal configuration satisfying Fermat's principle, while the endpoints of the trajectory are kept fixed according to the boundary conditions. Transverse displacements define the radio ray trajectory, while springs between the points control their distribution along the ray. The method is applied to a study of point-to-point ionospheric ray tracing, where the propagation medium is obtained with the International Reference Ionosphere model taking into account traveling ionospheric disturbances. A 2-dimensional representation of the optical path functional is developed and used to gain insight into the fundamental difference between high and low rays. We conclude that high and low rays are minima and saddle points of the optical path functional, respectively.
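A toy version of the NEB procedure described above can be written in a few lines: interior points of a discretized ray feel the true force (negative gradient of the discretized optical path) only perpendicular to the band, plus a spring force along it, with the endpoints pinned. The medium below is uniform (n = 1), for which the stationary ray is a straight line; this is a sanity check on a made-up setup, not the ionospheric application.

```python
import numpy as np

def optical_path(path, n_func):
    """Discretized Fermat functional: sum of n(midpoint) * segment length."""
    seg = np.diff(path, axis=0)
    mids = 0.5 * (path[1:] + path[:-1])
    return np.sum(n_func(mids) * np.linalg.norm(seg, axis=1))

def neb_relax(path, n_func, steps=1500, lr=0.01, k_spring=1.0, eps=1e-5):
    """Nudged-elastic-band relaxation of a ray with fixed endpoints."""
    path = path.copy()
    for _ in range(steps):
        grad = np.zeros_like(path)
        for i in range(1, len(path) - 1):      # interior points only
            for d in range(path.shape[1]):     # central-difference gradient
                p, m = path.copy(), path.copy()
                p[i, d] += eps
                m[i, d] -= eps
                grad[i, d] = (optical_path(p, n_func) - optical_path(m, n_func)) / (2 * eps)
        for i in range(1, len(path) - 1):
            tan = path[i + 1] - path[i - 1]
            tan /= np.linalg.norm(tan)
            f_perp = -grad[i] + np.dot(grad[i], tan) * tan   # true force, perpendicular part
            spring = k_spring * np.dot(path[i + 1] + path[i - 1] - 2 * path[i], tan) * tan
            path[i] += lr * (f_perp + spring)
    return path

n_uniform = lambda mids: np.ones(len(mids))              # homogeneous medium
x = np.linspace(0.0, 1.0, 11)
init = np.column_stack([x, 0.2 * np.sin(np.pi * x)])     # bent initial guess
relaxed = neb_relax(init, n_uniform)
print(round(float(np.abs(relaxed[:, 1]).max()), 4))      # ray straightens out
```

The spring term only redistributes points along the band, so it does not bias the converged shape; in the paper's setting the refractive index comes from the International Reference Ionosphere, and high and low rays appear as distinct stationary configurations of the same functional.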
Non-Linear Numerical Modeling and Experimental Testing of a Point Absorber Wave Energy Converter
DEFF Research Database (Denmark)
Zurkinden, Andrew Stephen; Ferri, Francesco; Beatty, S.
2014-01-01
A time domain model is applied to a three-dimensional point absorber wave energy converter. The dynamical properties of a semi-submerged hemisphere oscillating around a pivot point where the vertical height of this point is above the mean water level are investigated. The numerical model includes.......e. H/λ≤0.02. For steep waves, H/λ≥0.04 however, the relative velocities between the body and the waves increase thus requiring inclusion of the non-linear hydrostatic restoring moment to effectively predict the dynamics of the wave energy converter. For operation of the device with a passively damping......-linear effect is investigated by a simplified formulation proportional to the quadratic velocity. Results from experiments are shown in order to validate the numerical calculations. All the experimental results are in good agreement with the linear potential theory as long as the waves are sufficiently mild i...
A Model for Designing Adaptive Laboratory Evolution Experiments
DEFF Research Database (Denmark)
LaCroix, Ryan A.; Palsson, Bernhard O.; Feist, Adam M.
2017-01-01
The occurrence of mutations is a cornerstone of the evolutionary theory of adaptation, capitalizing on the rare chance that a mutation confers a fitness benefit. Natural selection is increasingly being leveraged in laboratory settings for industrial and basic science applications. Despite...... increasing deployment, there are no standardized procedures available for designing and performing adaptive laboratory evolution (ALE) experiments. Thus, there is a need to optimize the experimental design, specifically for determining when to consider an experiment complete and for balancing outcomes...... adaptive laboratory evolution can achieve....
Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks
DEFF Research Database (Denmark)
Hagen, Espen; Dahmen, David; Stavrinou, Maria L
2016-01-01
on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allows for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network...... and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely...... model for a ∼1 mm(2) patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its...
MODELLING AND SIMULATION OF A NEUROPHYSIOLOGICAL EXPERIMENT BY SPATIO-TEMPORAL POINT PROCESSES
Directory of Open Access Journals (Sweden)
Viktor Beneš
2011-05-01
Full Text Available We present a stochastic model of an experiment monitoring the spiking activity of a place cell of the hippocampus of an experimental animal moving in an arena. A doubly stochastic spatio-temporal point process is used to model and quantify overdispersion. The stochastic intensity is modelled by a Lévy-based random field, while the animal path is simplified to a discrete random walk. In a simulation study, a method suggested previously is used first. Then it is shown that a solution of the filtering problem yields the desired inference for the random intensity. Two approaches are suggested, and the new one, based on a finite point process density, is applied. Using Markov chain Monte Carlo, we obtain numerical results from the simulated model. The methodology is discussed.
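The overdispersion that a doubly stochastic (Cox) point process produces is easy to demonstrate: randomizing the intensity inflates the variance of the counts above their mean. A minimal sketch under assumed parameters (a gamma-distributed rate; the shape and scale values are arbitrary for the example):

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_cox_counts(n_trials=4000, t=1.0):
    """Counts on [0, t] from a Cox (doubly stochastic Poisson) process:
    the rate itself is random -- here gamma-distributed -- so the counts
    are overdispersed relative to an ordinary Poisson process."""
    lam = rng.gamma(shape=2.0, scale=5.0, size=n_trials)  # random intensity
    return rng.poisson(lam * t)

counts = simulate_cox_counts()
# index of dispersion: var/mean = 1 for a plain Poisson process, > 1 here
dispersion = counts.var() / counts.mean()
```

For a gamma-mixed Poisson the counts are negative binomial, so the expected index of dispersion here is 1 + scale = 6, well above the Poisson value of 1.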
Modeling of PV Systems Based on Inflection Points Technique Considering Reverse Mode
Directory of Open Access Journals (Sweden)
Bonie J. Restrepo-Cuestas
2013-11-01
Full Text Available This paper proposes a methodology for modeling photovoltaic (PV) systems, considering their behavior in both direct and reverse operating modes and under mismatching conditions. The proposed methodology is based on the inflection points technique, with a linear approximation to model the bypass diode and a simplified PV model. The proposed mathematical model makes it possible to evaluate the energetic performance of a PV system, exhibiting short simulation times even for large PV systems. In addition, the methodology makes it possible to estimate the condition of modules affected by partial shading, since the power dissipated by operation in the second quadrant can be determined.
Numerical Solution of Fractional Neutron Point Kinetics Model in Nuclear Reactor
Directory of Open Access Journals (Sweden)
Nowak Tomasz Karol
2014-06-01
Full Text Available This paper presents results concerning solutions of the fractional neutron point kinetics model for a nuclear reactor. The proposed model consists of a bilinear system of fractional and ordinary differential equations. Three methods to solve the model are presented and compared. The first entails application of the discrete Grünwald-Letnikov definition of the fractional derivative in the model. The second involves building an analog scheme in the FOMCON Toolbox in the MATLAB environment. The third is the method proposed by Edwards. The impact of selected parameters on the model’s response was examined. The results for typical input were discussed and compared.
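The discrete Grünwald-Letnikov definition used in the first method can be illustrated directly. The sketch below computes the GL fractional derivative on a uniform grid via the standard coefficient recurrence and checks it against the known half-derivative of f(t) = t; the grid and order are chosen arbitrarily for the example:

```python
import math
import numpy as np

def gl_weights(alpha, n):
    """Grünwald-Letnikov coefficients w_j = (-1)^j * C(alpha, j),
    generated by the standard recurrence w_0 = 1, w_j = w_{j-1}(1-(alpha+1)/j)."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def gl_derivative(f_vals, alpha, h):
    """Discrete GL fractional derivative of order alpha on a uniform grid:
    D^alpha f(t_k) ~ h^(-alpha) * sum_j w_j * f(t_{k-j})."""
    n = len(f_vals) - 1
    w = gl_weights(alpha, n)
    out = np.empty(n + 1)
    for k in range(n + 1):
        out[k] = np.dot(w[: k + 1], f_vals[k::-1]) / h ** alpha
    return out

# check against the exact half-derivative of f(t) = t: t^(1/2) / Gamma(3/2)
t = np.linspace(0.0, 1.0, 1001)
h, alpha = t[1] - t[0], 0.5
approx = gl_derivative(t, alpha, h)
exact = t ** (1.0 - alpha) / math.gamma(2.0 - alpha)
```

The truncation error of this scheme is first order in the step size, which the comparison at the right endpoint reflects.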
Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José
2017-05-01
The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant-Friedrichs-Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational
Raykov, Tenko; Marcoulides, George A.
2015-01-01
A direct approach to point and interval estimation of Cronbach's coefficient alpha for multiple component measuring instruments is outlined. The procedure is based on a latent variable modeling application with widely circulated software. As a by-product, using sample data the method permits ascertaining whether the population discrepancy…
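As background, coefficient alpha itself has a simple closed form, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score); the latent variable modeling procedure of the paper is not reproduced here. A plain sample-based computation, with invented example scores:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_persons x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    X = np.asarray(scores, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)          # per-item sample variances
    total_var = X.sum(axis=1).var(ddof=1)      # variance of the total score
    return k / (k - 1.0) * (1.0 - item_vars.sum() / total_var)

# invented example: 5 respondents, 3 items measuring one trait
scores = np.array([[3, 4, 3],
                   [5, 5, 4],
                   [2, 2, 2],
                   [4, 5, 5],
                   [1, 2, 1]])
alpha = cronbach_alpha(scores)
```

For these strongly correlated items alpha is close to 1, as expected for a near-unidimensional instrument.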
Implementation of the critical points model in a SFM-FDTD code working in oblique incidence
Energy Technology Data Exchange (ETDEWEB)
Hamidi, M; Belkhir, A; Lamrous, O [Laboratoire de Physique et Chimie Quantique, Universite Mouloud Mammeri, Tizi-Ouzou (Algeria); Baida, F I, E-mail: omarlamrous@mail.ummto.dz [Departement d' Optique P.M. Duffieux, Institut FEMTO-ST UMR 6174 CNRS Universite de Franche-Comte, 25030 Besancon Cedex (France)
2011-06-22
We describe the implementation of the critical points model in a finite-difference-time-domain code working in oblique incidence and dealing with dispersive media through the split field method. Some tests are presented to validate our code in addition to an application devoted to plasmon resonance of a gold nanoparticles grating.
DEFF Research Database (Denmark)
Lashab, Abderezak; Sera, Dezso; Guerrero, Josep M.
2018-01-01
The main objective of this work is to provide an overview and evaluation of discrete model predictive control-based maximum power point tracking (MPPT) for PV systems. A large number of MPC-based MPPT methods have been recently introduced in the literature with very promising performance, however...
Accuracy of the Gaussian Point Spread Function model in 2D localization microscopy
Stallinga, S.; Rieger, B.
2010-01-01
The Gaussian function is simple and easy to implement as a Point Spread Function (PSF) model for fitting the position of fluorescent emitters in localization microscopy. Despite its attractiveness, the appropriateness of the Gaussian is questionable, as it is not based on the laws of optics. Here we
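Fitting a Gaussian PSF to a pixelated spot is a small nonlinear least-squares problem. The sketch below is illustrative only (a synthetic noiseless image with invented parameter values), not the evaluation protocol of the paper:

```python
import numpy as np
from scipy.optimize import least_squares

def gaussian_psf(p, xx, yy):
    """2D Gaussian PSF: amplitude A, centre (x0, y0), width sigma, offset b."""
    A, x0, y0, s, b = p
    return A * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2.0 * s ** 2)) + b

# synthetic noiseless spot on an 11x11 pixel grid, emitter at (4.3, 5.7)
xx, yy = np.meshgrid(np.arange(11.0), np.arange(11.0))
true_p = np.array([100.0, 4.3, 5.7, 1.3, 10.0])
img = gaussian_psf(true_p, xx, yy)

# least-squares fit of the model to the image recovers the emitter position
fit = least_squares(lambda p: (gaussian_psf(p, xx, yy) - img).ravel(),
                    x0=[80.0, 5.0, 5.0, 1.0, 0.0])
x_hat, y_hat = fit.x[1], fit.x[2]
```

With shot noise added, the spread of such position estimates over repeated fits is exactly the localization accuracy the paper analyses.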
Thermodynamic stability of ice models in the vicinity of a critical point
Galdina, Alexandra; Soldatova, Eugenia
2010-01-01
The properties of the two-dimensional exactly solvable Lieb and Baxter models in the critical region are considered on the basis of the thermodynamic method for investigating the critical state of a one-component system. From the point of view of thermodynamic stability, the behaviour of adiabatic and isodynamic parameters for these models is analyzed and the types of their critical behaviour are determined. The reasons for the violation of the scaling law hypothesis and the universality hypothesis fo...
PV System with Maximum Power Point Tracking: Modeling, Simulation and Experimental Results
PEREIRA, R. J.; Melício, R.; Mendes, V.M.F.; Joyce, A.
2014-01-01
This paper focuses on five-parameter modeling, consisting of a current-controlled generator, a single diode, and shunt and series resistances. Also presented are a simulation, the identification of the parameters of a photovoltaic system, and maximum power point tracking based on VP„g„g. The identification of parameters and the performance of the equivalent circuit model for a solar module simulation are validated against data measured on the photovoltaic modules.
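The five-parameter single-diode equation is implicit in the current, I = Iph - I0*(exp((V + I*Rs)/(n*Ns*Vt)) - 1) - (V + I*Rs)/Rsh, so each point on the I-V curve requires a root solve. A sketch with invented parameter values (not the identified parameters of the paper):

```python
import numpy as np
from scipy.optimize import brentq

def pv_current(V, Iph=8.0, I0=1e-7, n=1.3, Rs=0.2, Rsh=300.0,
               Vt=0.02585, Ns=60):
    """Single-diode five-parameter model: the module equation
    I = Iph - I0*(exp((V + I*Rs)/(n*Ns*Vt)) - 1) - (V + I*Rs)/Rsh
    is implicit in I, so solve it by bracketing the root."""
    nVt = n * Ns * Vt
    f = lambda I: (Iph - I0 * np.expm1((V + I * Rs) / nVt)
                   - (V + I * Rs) / Rsh - I)
    return brentq(f, -2.0, Iph + 1.0)

# sweep the I-V curve and locate the maximum power point
V = np.linspace(0.0, 36.5, 400)
I = np.array([pv_current(v) for v in V])
P = V * I
v_mpp, p_mpp = V[np.argmax(P)], P.max()
```

The short-circuit current approaches Iph and the power curve has a single interior maximum, which is the operating point an MPPT scheme seeks.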
Speakman, John R; Levitsky, David A; Allison, David B; Bray, Molly S; de Castro, John M; Clegg, Deborah J; Clapham, John C; Dulloo, Abdul G; Gruer, Laurence; Haw, Sally; Hebebrand, Johannes; Hetherington, Marion M; Higgs, Susanne; Jebb, Susan A; Loos, Ruth J F; Luckman, Simon; Luke, Amy; Mohammed-Ali, Vidya; O'Rahilly, Stephen; Pereira, Mark; Perusse, Louis; Robinson, Tom N; Rolls, Barbara; Symonds, Michael E; Westerterp-Plantenga, Margriet S
2011-11-01
The close correspondence between energy intake and expenditure over prolonged time periods, coupled with an apparent protection of the level of body adiposity in the face of perturbations of energy balance, has led to the idea that body fatness is regulated via mechanisms that control intake and energy expenditure. Two models have dominated the discussion of how this regulation might take place. The set point model is rooted in physiology, genetics and molecular biology, and suggests that there is an active feedback mechanism linking adipose tissue (stored energy) to intake and expenditure via a set point, presumably encoded in the brain. This model is consistent with many of the biological aspects of energy balance, but struggles to explain the many significant environmental and social influences on obesity, food intake and physical activity. More importantly, the set point model does not effectively explain the 'obesity epidemic'--the large increase in body weight and adiposity of a large proportion of individuals in many countries since the 1980s. An alternative model, called the settling point model, is based on the idea that there is passive feedback between the size of the body stores and aspects of expenditure. This model accommodates many of the social and environmental characteristics of energy balance, but struggles to explain some of the biological and genetic aspects. The shortcomings of these two models reflect their failure to address the gene-by-environment interactions that dominate the regulation of body weight. We discuss two additional models--the general intake model and the dual intervention point model--that address this issue and might offer better ways to understand how body fatness is controlled.
Directory of Open Access Journals (Sweden)
John R. Speakman
2011-11-01
Full Text Available The close correspondence between energy intake and expenditure over prolonged time periods, coupled with an apparent protection of the level of body adiposity in the face of perturbations of energy balance, has led to the idea that body fatness is regulated via mechanisms that control intake and energy expenditure. Two models have dominated the discussion of how this regulation might take place. The set point model is rooted in physiology, genetics and molecular biology, and suggests that there is an active feedback mechanism linking adipose tissue (stored energy) to intake and expenditure via a set point, presumably encoded in the brain. This model is consistent with many of the biological aspects of energy balance, but struggles to explain the many significant environmental and social influences on obesity, food intake and physical activity. More importantly, the set point model does not effectively explain the ‘obesity epidemic’ – the large increase in body weight and adiposity of a large proportion of individuals in many countries since the 1980s. An alternative model, called the settling point model, is based on the idea that there is passive feedback between the size of the body stores and aspects of expenditure. This model accommodates many of the social and environmental characteristics of energy balance, but struggles to explain some of the biological and genetic aspects. The shortcomings of these two models reflect their failure to address the gene-by-environment interactions that dominate the regulation of body weight. We discuss two additional models – the general intake model and the dual intervention point model – that address this issue and might offer better ways to understand how body fatness is controlled.
Modeling the time evolution of the nanoparticle-protein corona in a body fluid.
Directory of Open Access Journals (Sweden)
Daniele Dell'Orco
Full Text Available BACKGROUND: Nanoparticles in contact with biological fluids interact with proteins and other biomolecules, thus forming a dynamic corona whose composition varies over time due to continuous protein association and dissociation events. Eventually equilibrium is reached, at which point the continued exchange will not affect the composition of the corona. RESULTS: We developed a simple and effective dynamic model of the nanoparticle protein corona in a body fluid, namely human plasma. The model predicts the time evolution and equilibrium composition of the corona based on affinities, stoichiometries and rate constants. An application to the interaction of human serum albumin, high density lipoprotein (HDL and fibrinogen with 70 nm N-iso-propylacrylamide/N-tert-butylacrylamide copolymer nanoparticles is presented, including novel experimental data for HDL. CONCLUSIONS: The simple model presented here can easily be modified to mimic the interaction of the nanoparticle protein corona with a novel biological fluid or compartment once new data become available, thus opening novel applications in nanotoxicity and nanomedicine.
Modeling the Time Evolution of the Nanoparticle-Protein Corona in a Body Fluid
Dell'Orco, Daniele; Lundqvist, Martin; Oslakovic, Cecilia; Cedervall, Tommy; Linse, Sara
2010-01-01
Background Nanoparticles in contact with biological fluids interact with proteins and other biomolecules, thus forming a dynamic corona whose composition varies over time due to continuous protein association and dissociation events. Eventually equilibrium is reached, at which point the continued exchange will not affect the composition of the corona. Results We developed a simple and effective dynamic model of the nanoparticle protein corona in a body fluid, namely human plasma. The model predicts the time evolution and equilibrium composition of the corona based on affinities, stoichiometries and rate constants. An application to the interaction of human serum albumin, high density lipoprotein (HDL) and fibrinogen with 70 nm N-iso-propylacrylamide/N-tert-butylacrylamide copolymer nanoparticles is presented, including novel experimental data for HDL. Conclusions The simple model presented here can easily be modified to mimic the interaction of the nanoparticle protein corona with a novel biological fluid or compartment once new data become available, thus opening novel applications in nanotoxicity and nanomedicine. PMID:20532175
A Semantic Modelling Framework-Based Method for Building Reconstruction from Point Clouds
Directory of Open Access Journals (Sweden)
Qingdong Wang
2016-09-01
Full Text Available Over the past few years, there has been an increasing need for semantic information in automatic city modelling. However, due to the complexity of building structure, the semantic reconstruction of buildings is still a challenging task because it is difficult to extract architectural rules and semantic information from the data. To address these insufficiencies, we present a semantic modelling framework-based approach for automated building reconstruction using the semantic information extracted from point clouds or images. In this approach, a semantic modelling framework is designed to describe and generate the building model, and a workflow is established for extracting the semantic information of buildings from an unorganized point cloud and converting the semantic information into the semantic modelling framework. The technical feasibility of our method is validated using three airborne laser scanning datasets, and the results are compared with other related works comprehensively, which indicate that our approach can simplify the reconstruction process from a point cloud and generate 3D building models with high accuracy and rich semantic information.
Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks.
Hagen, Espen; Dahmen, David; Stavrinou, Maria L; Lindén, Henrik; Tetzlaff, Tom; van Albada, Sacha J; Grün, Sonja; Diesmann, Markus; Einevoll, Gaute T
2016-12-01
With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allows for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network model for a ∼1 mm(2) patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its public implementation in hybridLFPy form the basis for LFP predictions from other and larger point-neuron network models, as well as extensions of the current application with additional biological detail. © The Author 2016. Published by Oxford University Press.
A stochastic model for the evolution of metabolic networks with neighbor dependence.
Mithani, Aziz; Preston, Gail M; Hein, Jotun
2009-06-15
Most current research in network evolution focuses on networks that follow a Duplication Attachment model where the network is only allowed to grow. The evolution of metabolic networks, however, is characterized by gain as well as loss of reactions. It would be desirable to have a biologically relevant model of network evolution that could be used to calculate the likelihood of homologous metabolic networks. We describe metabolic network evolution as a discrete space continuous time Markov process and introduce a neighbor-dependent model for the evolution of metabolic networks where the rates with which reactions are added or removed depend on the fraction of neighboring reactions present in the network. We also present a Gibbs sampler for estimating the parameters of evolution without exploring the whole search space by iteratively sampling from the conditional distributions of the paths and parameters. A Metropolis-Hastings algorithm for sampling paths between two networks and calculating the likelihood of evolution is also presented. The sampler is used to estimate the parameters of evolution of metabolic networks in the genus Pseudomonas. An implementation of the Gibbs sampler in Java is available at http://www.stats.ox.ac.uk/~mithani/networkGibbs/. Supplementary data are available at the Bioinformatics online.
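A neighbor-dependent gain/loss process of this kind can be simulated directly with the Gillespie algorithm. The toy below puts reactions on a path graph and uses invented rate expressions (the 0.1/1.1 offsets are arbitrary floors keeping every rate positive); it is a sketch of the general idea, not the authors' model or sampler:

```python
import numpy as np

rng = np.random.default_rng(7)

def evolve_network(n=20, t_end=50.0, gain=0.5, loss=0.5):
    """Gillespie simulation of a toy neighbor-dependent model: reactions
    sit on a path graph, and the rate of adding (removing) reaction i
    rises (falls) with the fraction of its neighbors already present."""
    state = rng.integers(0, 2, n)                # 1 = reaction present
    t = 0.0
    while t < t_end:
        frac = np.empty(n)
        for i in range(n):
            nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
            frac[i] = sum(state[j] for j in nbrs) / len(nbrs)
        # gain rate for absent reactions, loss rate for present ones
        rates = np.where(state == 0, gain * (0.1 + frac), loss * (1.1 - frac))
        total = rates.sum()
        t += rng.exponential(1.0 / total)        # waiting time to next event
        i = rng.choice(n, p=rates / total)       # which reaction flips
        state[i] ^= 1
    return state

final_state = evolve_network()
```

Because gains cluster next to present reactions and isolated reactions are lost fastest, long runs tend toward contiguous blocks of present reactions, which is the qualitative signature of neighbor dependence.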
Tectonic Models for the Evolution of Sedimentary Basins
Cloetingh, S.|info:eu-repo/dai/nl/069161836; Ziegler, P.A.; Beekman, F.|info:eu-repo/dai/nl/123556856; Burov, E.B.; Garcia-Castellanos, D.; Matenco, L.|info:eu-repo/dai/nl/163604592
2015-01-01
The tectonic evolution of sedimentary basins is the intrinsic result of the interplay between lithospheric stresses, lithospheric rheology, and thermal perturbations of the lithosphere–upper mantle system. The thermomechanical structure of the lithosphere exerts a prime control on its response to
The Path of the Blind Watchmaker: A Model of Evolution
2011-04-06
Evolution from an early, primitive organism (the Last Universal Common Ancestor of all life, LUCA) to Homo sapiens is the most dramatic biological process that has taken...
The Jukes-Cantor Model of Molecular Evolution
Erickson, Keith
2010-01-01
The material in this module introduces students to some of the mathematical tools used to examine molecular evolution. This topic is standard fare in many mathematical biology or bioinformatics classes, but could also be suitable for classes in linear algebra or probability. While coursework in matrix algebra, Markov processes, Monte Carlo…
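The Jukes-Cantor model referred to above has closed-form transition probabilities, p_same(t) = 1/4 + (3/4)e^(-4*alpha*t), and yields the familiar distance correction d = -(3/4)ln(1 - 4p/3) for an observed mismatch proportion p. A compact sketch:

```python
import numpy as np

def jc_transition(alpha, t):
    """Jukes-Cantor transition matrix P(t) = exp(Qt) for the rate matrix
    with all off-diagonal entries equal to alpha (closed form)."""
    same = 0.25 + 0.75 * np.exp(-4.0 * alpha * t)
    diff = 0.25 - 0.25 * np.exp(-4.0 * alpha * t)
    P = np.full((4, 4), diff)        # off-diagonal: probability of change
    np.fill_diagonal(P, same)        # diagonal: probability of no change
    return P

def jc_distance(p_mismatch):
    """JC69 distance correction for an observed mismatch proportion."""
    return -0.75 * np.log(1.0 - 4.0 * p_mismatch / 3.0)

P = jc_transition(alpha=0.01, t=10.0)
# expected mismatch proportion after time t, and the recovered branch length
p = 3.0 * P[0, 1]
d = jc_distance(p)    # equals the true branch length 3*alpha*t = 0.3
```

This round trip, from rate matrix to mismatch proportion and back to distance, is the Markov-process calculation such a module would walk students through.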
Nilsaz-Dezfouli, Hamid; Abu-Bakar, Mohd Rizam; Arasan, Jayanthi; Adam, Mohd Bakri; Pourhoseingholi, Mohamad Amin
2017-01-01
In cancer studies, the prediction of cancer outcome based on a set of prognostic variables has been a long-standing topic of interest. Current statistical methods for survival analysis offer the possibility of modelling cancer survivability but require unrealistic assumptions about the survival time distribution or proportionality of hazard. Therefore, attention must be paid in developing nonlinear models with less restrictive assumptions. Artificial neural network (ANN) models are primarily useful in prediction when nonlinear approaches are required to sift through the plethora of available information. The applications of ANN models for prognostic and diagnostic classification in medicine have attracted a lot of interest. The applications of ANN models in modelling the survival of patients with gastric cancer have been discussed in some studies without completely considering the censored data. This study proposes an ANN model for predicting gastric cancer survivability, considering the censored data. Five separate single time-point ANN models were developed to predict the outcome of patients after 1, 2, 3, 4, and 5 years. The performance of ANN model in predicting the probabilities of death is consistently high for all time points according to the accuracy and the area under the receiver operating characteristic curve.
A Modelling Investigation of The Evolution of Hygroscopicity In Tropospheric Aerosol Populations.
McFiggans, G.; Bower, K.
The solubility of atmospheric aerosols and the associated liquid water content has important implications on both the degree of direct and indirect scattering and on atmospheric aqueous chemistry. Hygroscopicity measurements of ambient aerosol provide useful information about the water activity of the aqueous material in the particles (and hence their associated liquid water) and the mixing state of the population. Recent experimental evidence showing the hygroscopicity of atmospheric aerosol distributions evolving by addition of soluble material will be summarised and the difficulties in understanding the processing mechanisms briefly discussed. A suite of modelling tools has been used to investigate the processing of recently emitted continental aerosol as it moves away from source, mixing into clean background air. A multicomponent aerosol coagulation and condensation model has been designed to directly investigate aerosol evolution as indicated by the hygroscopicity changes. Multiple size distributions of different diameter growth factor are assembled from the hygroscopicity-segregated size spectra derived directly from measurements downwind of an urban source region. Each size bin in each distribution has a characteristic soluble material fraction corresponding to the measured growth factor. Particles from each size bin in each distribution are allowed to coagulate and grow by condensation of soluble material from the gas phase. The distributions are followed using a volume ratio bin separation in a moving-centre sectional approach after Jacobson, 1999, and growth factors recalculated using water activity derived from the new soluble material content. At various points in the model simulation, the distributions may be allowed to pass into cloud using the UMIST cloud-processing model (Bower et al., 1997). The redistribution and addition of soluble material by cloud results in a modified distribution which is used to reinitialise the aerosol model
Evolution of perturbations in distinct classes of canonical scalar field models of dark energy
Jassal, H. K.
2009-01-01
Dark energy must cluster in order to be consistent with the equivalence principle. The background evolution can be effectively modelled by either a scalar field or by a barotropic fluid. The fluid model can be used to emulate perturbations in a scalar field model of dark energy, though this model breaks down at large scales. In this paper we study evolution of dark energy perturbations in canonical scalar field models: the classes of thawing and freezing models. The dark energy equation of stat...
Sayers, Ken
2013-01-01
Considerations of primate behavioral evolution often proceed by assuming the ecological and competitive milieus of particular taxa via their relative exploitation of gross food types, such as fruits versus leaves. Although this “fruit/leaf dichotomy” has been repeatedly criticized, it continues to be implicitly invoked in discussions of primate socioecology and female social relationships, and explicitly invoked in models of brain evolution. An expanding literature suggests that such views have severely limited our knowledge of the social and ecological complexities of primate folivory. This paper examines the behavior of primate folivore-frugivores, with particular emphasis on gray langurs (traditionally, Semnopithecus entellus) within the broader context of evolutionary ecology. Although possessing morphological characters that have been associated with folivory and constrained activity patterns, gray langurs are known for remarkable plasticity in ecology and behavior. Their diets are generally quite broad and can be discussed in relation to “Liem’s paradox,” the odd coupling of anatomical feeding specializations with a generalist foraging strategy. Gray langurs, not coincidentally, inhabit arguably the widest range of habitats for a nonhuman primate, including high elevations in the Himalayas. They provide an excellent focal point for examining the assumptions and predictions of behavioral, socioecological, and cognitive evolutionary models. Contrary to the classical descriptions of the primate folivore, Himalayan and other gray langurs—and, in actuality, many leaf eating primates—range widely and engage in resource competition (both of which have previously been noted for primate folivores) as well as solve ecological problems rivaling those of more frugivorous primates (which has rarely been argued for primate folivores). It is maintained that questions of primate folivore adaptation, temperate primate adaptation, and primate evolution more
Sayers, Ken
2013-04-01
Considerations of primate behavioral evolution often proceed by assuming the ecological and competitive milieus of particular taxa via their relative exploitation of gross food types, such as fruits versus leaves. Although this "fruit/leaf dichotomy" has been repeatedly criticized, it continues to be implicitly invoked in discussions of primate socioecology and female social relationships and is explicitly invoked in models of brain evolution. An expanding literature suggests that such views have severely limited our knowledge of the social and ecological complexities of primate folivory. This paper examines the behavior of primate folivore-frugivores, with particular emphasis on gray langurs (traditionally, Semnopithecus entellus) within the broader context of evolutionary ecology. Although possessing morphological characteristics that have been associated with folivory and constrained activity patterns, gray langurs are known for remarkable plasticity in ecology and behavior. Their diets are generally quite broad and can be discussed in relation to Liem's Paradox, the odd coupling of anatomical feeding specializations with a generalist foraging strategy. Gray langurs, not coincidentally, inhabit arguably the widest range of habitats for a nonhuman primate, including high elevations in the Himalayas. They provide an excellent focal point for examining the assumptions and predictions of behavioral, socioecological, and cognitive evolutionary models. Contrary to the classical descriptions of the primate folivore, Himalayan and other gray langurs-and, in actuality, many leaf-eating primates-range widely, engage in resource competition (both of which have previously been noted for primate folivores), and solve ecological problems rivaling those of more frugivorous primates (which has rarely been argued for primate folivores). It is maintained that questions of primate folivore adaptation, temperate primate adaptation, and primate evolution more generally cannot be
An extensive catalog of operators for the coupled evolution of metamodels and models
Herrmannnsdoerfer, M.; Vermolen, S.D.; Wachsmuth, G.
2010-01-01
Modeling languages and thus their metamodels are subject to change. When a metamodel is evolved, existing models may no longer conform to it. Manual migration of these models in response to metamodel evolution is tedious and error-prone. To significantly automate model migration, operator-based
PIV study of the wake of a model wind turbine transitioning between operating set points
Houck, Dan; Cowen, Edwin (Todd)
2016-11-01
Wind turbines are ideally operated at their most efficient tip speed ratio for a given wind speed. There is increasing interest, however, in operating turbines at other set points to increase the overall power production of a wind farm. Specifically, Goit and Meyers (2015) used LES to examine a wind farm optimized by unsteady operation of its turbines. In this study, the wake of a model wind turbine is measured in a water channel using PIV. We measure the wake response to a change in operational set point of the model turbine, e.g., from low to high tip speed ratio or vice versa, to examine how it might influence a downwind turbine. A modified torque transducer after Kang et al. (2010) is used to calibrate in situ voltage measurements of the model turbine's generator, operating across a resistance, to the torque on the generator. Changes in operational set point are made by changing the resistance or the flow speed, which changes the rotation rate measured by an encoder. Single-camera PIV on vertical planes reveals statistics of the wake at various distances downstream as the turbine transitions from one set point to another. From these measurements, we infer how the unsteady operation of a turbine may affect the performance of a downwind turbine through its incoming flow. National Science Foundation and the Atkinson Center for a Sustainable Future.
Sparsified-dynamics modeling of discrete point vortices with graph theory
Taira, Kunihiko; Nair, Aditya
2014-11-01
We utilize graph theory to derive a sparsified interaction-based model that captures unsteady point vortex dynamics. The present model builds upon the Biot-Savart law and keeps the number of vortices (graph nodes) intact and reduces the number of inter-vortex interactions (graph edges). We achieve this reduction in vortex interactions by spectral sparsification of graphs. This approach drastically reduces the computational cost to predict the dynamical behavior, sharing characteristics of reduced-order models. Sparse vortex dynamics are illustrated through an example of point vortex clusters interacting amongst themselves. We track the centroids of the individual vortex clusters to evaluate the error in bulk motion of the point vortices in the sparsified setup. To further improve the accuracy in predicting the nonlinear behavior of the vortices, resparsification strategies are employed for the sparsified interaction-based models. The model retains the nonlinearity of the interaction and also conserves the invariants of discrete vortex dynamics; namely the Hamiltonian, linear impulse, and angular impulse as well as circulation. Work supported by US Army Research Office (W911NF-14-1-0386) and US Air Force Office of Scientific Research (YIP: FA9550-13-1-0183).
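The interaction-reduction idea can be illustrated with a small sketch: Biot-Savart induced velocities summed only over a retained edge list. This is not the authors' spectral sparsification; the edge weight and truncation rule below are simplified stand-in assumptions.

```python
import numpy as np

def induced_velocity(z, gamma, pairs=None):
    """Biot-Savart velocity of each 2D point vortex (complex positions z).
    If `pairs` is given, only those (i, j) interactions are summed."""
    n = len(z)
    if pairs is None:
        pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    v = np.zeros(n, dtype=complex)
    for i, j in pairs:
        # conjugate velocity induced at vortex i by vortex j
        v[i] += gamma[j] / (2j * np.pi * (z[i] - z[j]))
    return np.conj(v)

def sparsify(z, gamma, keep_ratio=0.5):
    """Toy edge sparsification: keep the strongest interactions, ranked by
    |Gamma_j| / |z_i - z_j| (a crude stand-in for spectral sparsification).
    All vortices (graph nodes) are kept; only edges are dropped."""
    n = len(z)
    edges = [(i, j) for i in range(n) for j in range(n) if i != j]
    edges.sort(key=lambda e: abs(gamma[e[1]]) / abs(z[e[0]] - z[e[1]]),
               reverse=True)
    return edges[: int(len(edges) * keep_ratio)]
```

For well-separated clusters, dropping half the edges mostly discards weak inter-cluster terms, so the sparse velocities stay close to the full sum, which is the property the paper exploits.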
Occupancy Modelling for Moving Object Detection from LIDAR Point Clouds: a Comparative Study
Xiao, W.; Vallet, B.; Xiao, Y.; Mills, J.; Paparoditis, N.
2017-09-01
Lidar technology has been widely used in both robotics and geomatics for environment perception and mapping. Moving object detection is important in both fields, as it is a fundamental step for collision avoidance, static background extraction, moving-pattern analysis, etc. A simple method is to directly check the distance between nearest points from the compared datasets; however, large distances may be obtained when the two datasets have different coverages. The use of occupancy grids is a popular approach to overcoming this problem. Two theories are commonly employed to model occupancy and to interpret the measurements: Dempster-Shafer theory and probability theory. This paper presents a comparative study of these two theories for occupancy modelling, with the aim of moving object detection from lidar point clouds. Occupancy is modelled using both approaches, and their implementations are explained and compared in detail. Two lidar datasets are tested to illustrate the moving object detection results.
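A minimal sketch of the two fusion rules being compared, assuming a binary frame {occupied, free} and simple inverse-sensor outputs (the numbers and interfaces are illustrative, not those of the paper):

```python
import math

def logodds_update(prior_p, measurements):
    """Probabilistic occupancy: fuse inverse-sensor probabilities for one
    grid cell in log-odds form."""
    l = math.log(prior_p / (1 - prior_p))
    for p in measurements:
        l += math.log(p / (1 - p))
    return 1 / (1 + math.exp(-l))

def dempster_combine(m1, m2):
    """Dempster's rule on the frame {O(ccupied), F(ree)}; 'U' is the
    uncommitted mass assigned to the whole frame, which probability
    theory cannot represent separately."""
    conflict = m1['O'] * m2['F'] + m1['F'] * m2['O']
    norm = 1 - conflict
    return {
        'O': (m1['O'] * m2['O'] + m1['O'] * m2['U'] + m1['U'] * m2['O']) / norm,
        'F': (m1['F'] * m2['F'] + m1['F'] * m2['U'] + m1['U'] * m2['F']) / norm,
        'U': m1['U'] * m2['U'] / norm,
    }
```

The key modelling difference visible even in this toy: Dempster-Shafer keeps an explicit "unknown" mass after fusion, while the probabilistic cell collapses everything into a single occupancy probability.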
Yang, Bo; Tong, Yuting
2017-04-01
With the rapid development of the economy, logistics enterprises in China face a huge challenge: they generally lack core competitiveness, and their awareness of service innovation is weak. Scholars studying the core competence of logistics enterprises have mainly taken a static perspective rather than exploring its dynamic evolution. The authors therefore analyze the influencing factors and the evolution process of the core competence of logistics enterprises, use system dynamics to study the causes and effects of this evolution, and construct a system dynamics model of the evolution of logistics enterprises' core competence, which can be simulated with Vensim PLE. Analysis of the effectiveness and sensitivity of the simulation model indicates that it can fit the evolution process of the core competence of logistics enterprises, reveal the process and mechanism of that evolution, and provide management strategies for improving core competence. The construction and operation of a computer simulation model offers an effective method for studying the evolution of logistics enterprise core competence.
Kenney, Jason A; Hwang, Gyeong S
2005-07-01
A two-dimensional computational model is developed to describe electrochemical nanostructuring of conducting materials with ultrashort voltage pulses. The model consists of (1) a transient charging simulation to describe the evolution of the overpotentials at the tool and workpiece surfaces and the resulting dissolution currents and (2) a feature profile evolution tool which uses the level set method to describe either vertical or lateral etching of the workpiece. Results presented include transient currents at different separations between tool and workpiece, evolution of overpotentials and dissolution currents as a function of position along the workpiece, and etch profiles as a function of pulse duration.
A Riccati-Based Interior Point Method for Efficient Model Predictive Control of SISO Systems
DEFF Research Database (Denmark)
Hagdrup, Morten; Johansson, Rolf; Bagterp Jørgensen, John
2017-01-01
This paper presents an algorithm for Model Predictive Control of SISO systems. Based on a quadratic objective in addition to (hard) input constraints, it features soft upper as well as lower constraints on the output and an input rate-of-change penalty term. It keeps the deterministic and stochastic model parts separate: the controller is designed based on the deterministic model, while the Kalman filter results from the stochastic part. The controller is implemented as a primal-dual interior point (IP) method using Riccati recursion, and the computational savings possible for SISO systems...
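The Riccati recursion that gives such methods their efficiency can be sketched on the unconstrained finite-horizon core (a standard LQR backward pass; the interior-point method re-solves a structured system of this form at every iteration, which is where the savings for SISO systems arise):

```python
import numpy as np

def riccati_gains(A, B, Q, R, N):
    """Backward Riccati recursion for finite-horizon LQ control.
    Returns the time-varying feedback gains and the final cost-to-go matrix."""
    P = Q.copy()
    gains = []
    for _ in range(N):
        # K_k = (R + B' P B)^{-1} B' P A
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        # P_{k-1} = Q + A' P (A - B K)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1], P
```

For the scalar system A = B = Q = R = 1, the recursion converges to the golden ratio, a handy sanity check on the implementation.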
[Determination of Virtual Surgery Mass Point Spring Model Parameters Based on Genetic Algorithms].
Chen, Ying; Hu, Xuyi; Zhu, Qiguang
2015-12-01
The mass point-spring model is one of the models commonly used in virtual surgery. However, its parameters have no clear physical meaning and are hard to set conveniently. We therefore proposed a method based on a genetic algorithm to determine the mass-spring model parameters. Computer-aided tomography (CAT) data were used to determine the mass value of each particle, and the stiffness and damping coefficients were obtained by a genetic algorithm. We used the difference between the reference deformation and the virtual deformation as the fitness function to obtain an approximately optimal solution for the model parameters. Experimental results showed that this method could obtain an approximately optimal solution for the spring parameters at lower cost, and could accurately reproduce the behavior of the actual deformation model.
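A toy, single-degree-of-freedom version of this fitting loop may make the idea concrete. The deformation model, parameter ranges, and GA settings below are illustrative assumptions, not those of the paper; the fitness is the (negated) squared difference between reference and simulated traces, as the abstract describes.

```python
import random

def simulate(k, c, m=1.0, x0=1.0, steps=200, dt=0.01):
    """Explicit-Euler displacement trace of one mass on a spring-damper."""
    x, v, xs = x0, 0.0, []
    for _ in range(steps):
        a = (-k * x - c * v) / m
        v += a * dt
        x += v * dt
        xs.append(x)
    return xs

def fitness(params, reference):
    """Negative squared error between simulated and reference deformation."""
    sim = simulate(*params)
    return -sum((s - r) ** 2 for s, r in zip(sim, reference))

def genetic_search(reference, pop_size=40, gens=40, seed=1):
    """Elitist GA over (stiffness k, damping c)."""
    rng = random.Random(seed)
    pop = [(rng.uniform(1, 100), rng.uniform(0, 5)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: fitness(p, reference), reverse=True)
        elite = pop[: pop_size // 4]                        # selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            k = rng.choice([a[0], b[0]]) + rng.gauss(0, 1)   # crossover + mutation
            c = rng.choice([a[1], b[1]]) + rng.gauss(0, 0.1)
            children.append((max(k, 0.1), max(c, 0.0)))
        pop = elite + children
    return max(pop, key=lambda p: fitness(p, reference))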
DEFF Research Database (Denmark)
Barfod, Adrian; Straubhaar, Julien; Høyer, Anne-Sophie
2017-01-01
Creating increasingly realistic hydrological models involves the inclusion of additional geological and geophysical data in the hydrostratigraphic modelling procedure. Using Multiple Point Statistics (MPS) for stochastic hydrostratigraphic modelling provides a degree of flexibility that allows... The comparison of the stochastic hydrostratigraphic MPS models is carried out in an elaborate scheme of visual inspection, mathematical similarity and consistency with boreholes. Using the Kasted survey data, a practical example for modelling new survey areas is presented. A cognitive... soft data variable. The computation time of 2-3 h for snesim was in between that of DS and iqsim. The snesim implementation used here is part of the Stanford Geostatistical Modeling Software (SGeMS). The snesim setup was not trivial, with numerous parameter settings, the usage of multiple grids, and a search tree...
Interior Point Methods on GPU with application to Model Predictive Control
DEFF Research Database (Denmark)
Gade-Nielsen, Nicolai Fog
The goal of this thesis is to investigate the application of interior point methods to solve dynamical optimization problems using a graphical processing unit (GPU), with a focus on problems arising in Model Predictive Control (MPC). Multi-core processors have been available for over ten years now, and manycore processors, such as GPUs, have also become a standard component in any consumer computer. The GPU offers faster floating point operations and higher memory bandwidth than the CPU, but requires algorithms to be redesigned and reimplemented to match the underlying architecture. A large number... software package called GPUOPT, available under the non-restrictive MIT license. GPUOPT includes a primal-dual interior-point method, which supports both the CPU and the GPU. It is implemented as multiple components, where the matrix operations and the solver for the Newton directions are separated...
Bayesian Estimation Of Shift Point In Poisson Model Under Asymmetric Loss Functions
Directory of Open Access Journals (Sweden)
uma srivastava
2012-01-01
Full Text Available The paper deals with estimating the shift point which occurs in a sequence of independent observations of a Poisson model in statistical process control. This shift point occurs in the sequence when m life data are observed. The Bayes estimators of the shift point 'm' and of the before- and after-shift process means are derived for symmetric and asymmetric loss functions under informative and non-informative priors. The sensitivity analysis of the Bayes estimators is carried out by simulation, and numerical comparisons are performed with R programming. The results show the effectiveness of shift estimation in a sequence of Poisson distributions.
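A minimal numerical sketch of the Bayesian machinery, assuming Gamma(a, b) priors on the two Poisson means and a uniform prior on the shift point. Squared-error (symmetric) loss would take the posterior mean of m; the paper's asymmetric-loss estimators weight this same posterior differently.

```python
import math

def shift_point_posterior(x, a=1.0, b=1.0):
    """Posterior P(m | x) for a single change point in a Poisson sequence,
    with independent Gamma(a, b) priors on the before/after means and a
    uniform prior on m (change after the m-th observation)."""
    n = len(x)

    def log_marginal(s, k):
        # Poisson mean integrated out against its Gamma prior; terms that
        # do not depend on m (x_i! factors, prior constants) are dropped.
        return math.lgamma(a + s) - (a + s) * math.log(b + k)

    logp = []
    for m in range(1, n):            # at least one point on each side
        s1, s2 = sum(x[:m]), sum(x[m:])
        logp.append(log_marginal(s1, m) + log_marginal(s2, n - m))
    mx = max(logp)
    w = [math.exp(v - mx) for v in logp]
    z = sum(w)
    return {m: w[m - 1] / z for m in range(1, n)}
```

On a sequence whose mean jumps after the fifth observation, the posterior mass concentrates on m = 5, as expected.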
The biodiversity and ecology of Antarctic lakes: models for evolution.
Laybourn-Parry, Johanna; Pearce, David A
2007-12-29
Antarctic lakes are characterised by simplified, truncated food webs. The lakes range from freshwater to hypersaline with a continuum of physical and chemical conditions that offer a natural laboratory in which to study evolution. Molecular studies on Antarctic lake communities are still in their infancy, but there is clear evidence from some taxonomic groups, for example the Cyanobacteria, that there is endemicity. Moreover, many of the bacteria have considerable potential as sources of novel biochemicals such as low temperature enzymes and anti-freeze proteins. Among the eukaryotic organisms survival strategies have evolved, among which dependence on mixotrophy in phytoflagellates and some ciliates is common. There is also some evidence of evolution of new species of flagellate in the marine derived saline lakes of the Vestfold Hills. Recent work on viruses in polar lakes demonstrates high abundance and high rates of infection, implying that they may play an important role in genetic exchange in these extreme environments.
CADASTER QSPR Models for Predictions of Melting and Boiling Points of Perfluorinated Chemicals.
Bhhatarai, Barun; Teetz, Wolfram; Liu, Tao; Öberg, Tomas; Jeliazkova, Nina; Kochev, Nikolay; Pukalov, Ognyan; Tetko, Igor V; Kovarich, Simona; Papa, Ester; Gramatica, Paola
2011-03-14
Quantitative structure-property relationship (QSPR) studies of the melting point (MP) and boiling point (BP) of per- and polyfluorinated chemicals (PFCs) are presented. The training and prediction chemicals used for developing and validating the models were selected from the Syracuse PhysProp database and the literature. The available experimental data sets were split in two different ways: (a) random selection on response value, and (b) structural similarity verified by self-organizing map (SOM), in order to propose reliable predictive models, developed only on the training sets and externally verified on the prediction sets. Individual models based on linear and non-linear approaches, developed by different CADASTER partners on 0D-2D Dragon descriptors, E-state descriptors and fragment-based descriptors, as well as a consensus model and their predictions, are presented. In addition, the predictive performance of the developed models was verified on a blind external validation set (EV-set) prepared using the PERFORCE database, comprising 15 MP and 25 BP data points respectively. This database contains only long-chain perfluoroalkylated chemicals, which are particularly monitored by regulatory agencies such as US-EPA and EU-REACH. QSPR models with internal and external validation on two different external prediction/validation sets, and a study of the applicability domain highlighting the robustness and high accuracy of the models, are discussed. Finally, MPs for an additional 303 PFCs and BPs for 271 PFCs, for which experimental measurements are unknown, were predicted. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A mixture model for robust point matching under multi-layer motion.
Directory of Open Access Journals (Sweden)
Jiayi Ma
Full Text Available This paper proposes an efficient mixture model for establishing robust point correspondences between two sets of points under multi-layer motion. Our algorithm starts by creating a set of putative correspondences which can contain a number of false correspondences, or outliers, in addition to the true correspondences (inliers). Next we solve for correspondence by interpolating a set of spatial transformations on the putative correspondence set based on a mixture model, which involves estimating a consensus of inlier points whose matching follows a non-parametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose non-parametric geometrical constraints on the correspondence, as a prior distribution, in a reproducing kernel Hilbert space (RKHS). MAP estimation is performed by the EM algorithm, which, by also estimating the variance of the prior model (initialized to a large value), is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We further provide a fast implementation based on sparse approximation which can achieve a significant speed-up without much performance degradation. We illustrate the proposed method on 2D and 3D real images for sparse feature correspondence, as well as a publicly available dataset for shape matching. The quantitative results demonstrate that our method is robust to non-rigid deformation and multi-layer/large discontinuous motion.
Genetic optimization of neural network and fuzzy logic for oil bubble point pressure modeling
Energy Technology Data Exchange (ETDEWEB)
Afshar, Mohammad [Islamic Azad University, Kharg (Iran, Islamic Republic of); Gholami, Amin [Petroleum University of Technology, Abadan (Iran, Islamic Republic of); Asoodeh, Mojtaba [Islamic Azad University, Birjand (Iran, Islamic Republic of)
2014-03-15
Bubble point pressure is a critical pressure-volume-temperature (PVT) property of reservoir fluid, which plays an important role in almost all tasks involved in reservoir and production engineering. We developed two sophisticated models to estimate bubble point pressure from gas specific gravity, oil gravity, solution gas-oil ratio, and reservoir temperature. Neural networks and adaptive neuro-fuzzy inference systems are powerful tools for extracting the underlying dependency of a set of input/output data; however, both are in danger of becoming stuck in local minima. The present study went further by optimizing the fuzzy logic and neural network models with a genetic algorithm charged with eliminating the risk of exposure to local minima. This strategy is capable of significantly improving the accuracy of both the neural network and fuzzy logic models. The proposed methodology was successfully applied to a dataset of 153 PVT data points. Results showed that the genetic algorithm can save the neural network and neuro-fuzzy models from the local-minima trapping that might occur with the back-propagation algorithm.
A Point-Set-Based Footprint Model and Spatial Ranking Method for Geographic Information Retrieval
Directory of Open Access Journals (Sweden)
Yong Gao
2016-07-01
Full Text Available In the recent big data era, massive amounts of spatially related data are continuously generated and scrambled from various sources, and acquiring accurate geographic information is urgently demanded. How to accurately retrieve desired geographic information has become a prominent issue that needs to be resolved with high priority. The key technologies in geographic information retrieval are modelling document footprints and ranking documents based on their similarity evaluation. Traditional spatial similarity evaluation methods are mainly performed using an MBR (Minimum Bounding Rectangle) footprint model. However, due to its simplification and roughness, the results of traditional methods tend to be isotropic and space-redundant. In this paper, a new model that constructs footprints in the form of point sets is presented. The point-set-based footprint matches the nature of place names in web pages, so it is redundancy-free, consistent, accurate, and anisotropic in describing the spatial extents of documents, and can handle multi-scale geographic information. The corresponding spatial ranking method is also presented, based on the point-set-based model. The new similarity evaluation algorithm of this method first measures multiple distances for spatial proximity across different scales, and then combines the frequency of place names to improve accuracy and precision. The experimental results show that the proposed method outperforms the traditional methods with higher accuracies under different search scenarios.
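The combination of proximity and place-name frequency can be sketched as a simple scoring rule. This is an illustrative scheme under assumed weights, not the paper's exact multi-scale algorithm: each document footprint maps place names to the coordinates of their mentions, frequency boosts the score, and distance to the query point decays it.

```python
import math

def footprint_score(query_pt, footprint, alpha=1.0):
    """Rank a document by its point-set footprint.
    `footprint` maps place name -> list of (x, y) mention coordinates;
    each place contributes its mention count, decayed by the distance
    from the query point to its nearest mention."""
    score = 0.0
    for coords in footprint.values():
        d = min(math.dist(query_pt, c) for c in coords)
        score += len(coords) / (1.0 + alpha * d)
    return score
```

A document mentioning a place at the query location twice then outranks one whose only place is far away, which is the intended anisotropic, frequency-aware behaviour.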
Modeling watershed non-point source pollution: complexity, uncertainty and future directions
Zheng, Y.; Han, F.; Luo, X.; Wu, B.
2012-12-01
Non-point source pollution (NPSP) is a major cause of surface water quality degradation. Watershed models (e.g. the Soil and Water Assessment Tool, SWAT) have been increasingly used to simulate NPSP and support pollution prevention. These models originated from hydrologic models but add significant complexity. Their simulations usually involve substantial uncertainty, especially when observational data are scarce, which largely limits the models' application. Based on our past and ongoing studies, this presentation discusses the following issues: 1) effective and efficient methods to quantify the uncertainty associated with the model simulations; 2) cost-effective strategies to reduce the uncertainty through data acquisition and assimilation; and 3) directions to improve the current NPSP models. In discussing the first issue, Probabilistic Collocation Method (PCM) based approaches to uncertainty analysis (UA) and data assimilation will be presented, and the important role of management concerns in the UA will be discussed. Regarding the second issue, approaches to optimize data acquisition and assimilation, based on the concept of value of information (VOI), will be introduced, and the tradeoff between uncertainty and cost will be discussed. In addressing the last issue, two key points will be made. First, the complexity of NPSP models does not necessarily lead to good simulation results, but is very likely to introduce significant uncertainty and parameter identifiability issues; thus, model complexity has to be tailored to the data conditions. Second, some core modeling assumptions should be re-examined through further studies of the physical processes of NPSP. For example, our recent experimental studies showed that the enrichment theory widely adopted in NPSP models has significant limitations. This presentation calls for more effort on developing a new generation of watershed NPSP models.
Performance Analysis of Several GPS/Galileo Precise Point Positioning Models.
Afifi, Akram; El-Rabbany, Ahmed
2015-06-19
This paper examines the performance of several precise point positioning (PPP) models, which combine dual-frequency GPS/Galileo observations in the un-differenced and between-satellite single-difference (BSSD) modes. These include the traditional un-differenced model, the decoupled clock model, the semi-decoupled clock model, and the between-satellite single-difference model. We take advantage of the IGS-MGEX network products to correct for the satellite differential code biases and the orbital and satellite clock errors. Natural Resources Canada's GPSPace PPP software is modified to handle the various GPS/Galileo PPP models. A total of six data sets of GPS and Galileo observations at six IGS stations are processed to examine the performance of the various PPP models. It is shown that the traditional un-differenced GPS/Galileo PPP model, the GPS decoupled clock model, and the semi-decoupled clock GPS/Galileo PPP model improve the convergence time by about 25% in comparison with the un-differenced GPS-only model. In addition, the semi-decoupled GPS/Galileo PPP model improves the solution precision by about 25% compared to the traditional un-differenced GPS/Galileo PPP model. Moreover, the BSSD GPS/Galileo PPP model improves the solution convergence time by about 50%, in comparison with the un-differenced GPS PPP model, regardless of the type of BSSD combination used. As well, the BSSD model improves the precision of the estimated parameters by about 50% and 25% when the loose and the tight combinations are used, respectively, in comparison with the un-differenced GPS-only model. Comparable results are obtained through the tight combination when either a GPS or a Galileo satellite is selected as a reference.
Asymptotic behaviour of two-point functions in multi-species models
Directory of Open Access Journals (Sweden)
Karol K. Kozlowski
2016-05-01
Full Text Available We extract the long-distance asymptotic behaviour of two-point correlation functions in massless quantum integrable models containing multi-species excitations. For this purpose, we extend to these models the method of large-distance regime re-summation of the form factor expansion of correlation functions. The key feature of our analysis is a technical hypothesis on the large-volume behaviour of the form factors of local operators in such models. We check the validity of this hypothesis on the example of the SU(3)-invariant XXX magnet by means of the determinant representations for the form factors of local operators in this model. Our approach confirms the structure of the critical exponents obtained previously for numerous models solvable by the nested Bethe Ansatz.
Modeling of Aerobrake Ballute Stagnation Point Temperature and Heat Transfer to Inflation Gas
Bahrami, Parviz A.
2012-01-01
A trailing Ballute drag device concept for spacecraft aerocapture is considered. A thermal model for calculation of the Ballute membrane temperature and the inflation gas temperature is developed, and an algorithm capturing the most salient features of the concept is implemented. In conjunction with the thermal model, trajectory calculations for two candidate missions, the Titan Explorer and Neptune Orbiter missions, are used to estimate the stagnation point temperature and the inflation gas temperature. Radiation from both sides of the membrane at the stagnation point and conduction to the inflating gas are included. The results showed that radiation from the membrane, and to a much lesser extent conduction to the inflating gas, are likely to be the controlling heat transfer mechanisms, and that the increase in gas temperature due to aerodynamic heating is of secondary importance.
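The controlling balance described above can be sketched as a radiative-equilibrium estimate. This is illustrative only: the emissivity and heat-flux numbers are placeholders, and conduction to the inflation gas (which the paper finds small) is dropped entirely.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def membrane_equilibrium_temp(q_aero, emissivity=0.8):
    """Radiative-equilibrium membrane temperature (K) when the incoming
    aerodynamic heat flux q_aero (W/m^2) is re-radiated from BOTH sides
    of the thin membrane: q_aero = 2 * eps * sigma * T^4."""
    return (q_aero / (2 * emissivity * SIGMA)) ** 0.25
```

The factor of 2 for two-sided radiation lowers the equilibrium temperature by 2^(1/4) relative to a one-sided surface, which is why radiating from both membrane faces matters in the stagnation-point estimate.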
Modeling of the long-time asymptotic dynamics of a point-like object
Ribaric, Marijan
2012-01-01
We introduce the first mathematical framework for modeling the long-time asymptotic behavior of the acceleration of a point-like object whose velocity eventually stops changing after the cessation of the external force. For a small and slowly changing external force, we approximate its long-time asymptotic acceleration by a relativistic polynomial in time-derivatives of the external force. Without knowing the equation of motion for such a point-like object, an approximation of this kind enables us to model the long-time asymptotic behavior of its dynamics and access its long-time asymptotic kinetic constants, which supplement mass and charge. We give various examples.
Kulikov, D V; Trushin, Y V; Veber, K V; Khumer, K; Bitner, R; Shternberg, A R
2001-01-01
A physical model of the evolution of oxygen-subsystem defects in ferroelectric PLZT ceramics under neutron irradiation and isochronal annealing is proposed. The model accounts for the effect of the lanthanum content on the material properties. The changes in the oxygen vacancy concentration calculated by the proposed model agree well with the experimental polarization behavior of the irradiated material upon annealing.
Testing gradual and speciational models of evolution in extant taxa: the example of ratites
Laurin, M.; Gussekloo, S.W.S.; Marjanovic, D.; Legendre, L.; Cubo, J.
2012-01-01
Ever since Eldredge and Gould proposed their model of punctuated equilibria, evolutionary biologists have debated how often this model is the best description of nature and how important it is compared to the more gradual models of evolution expected from natural selection and the neo-Darwinian
Hwang, Jae Joon; Kim, Kee-Deog; Park, Hyok; Park, Chang Seo; Jeong, Ho-Gul
2014-01-01
Superimposition has been used as a method to evaluate the changes of orthodontic or orthopedic treatment in the dental field. With the introduction of cone beam CT (CBCT), evaluating three-dimensional changes after treatment became possible by superimposition. Four-point plane orientation is one of the simplest ways to achieve superimposition of three-dimensional images. To find factors influencing the superimposition error of cephalometric landmarks under the four-point plane orientation method, and to evaluate the reproducibility of cephalometric landmarks for analyzing superimposition error, 20 patients were analyzed who had normal skeletal and occlusal relationships and underwent CBCT for diagnosis of temporomandibular disorder. The nasion, sella turcica, basion and the midpoint between the left and right most posterior points of the lesser wing of the sphenoid bone were used to define a three-dimensional (3D) anatomical reference co-ordinate system. Another 15 reference cephalometric points were also determined three times in the same image. The reorientation error of each landmark could be explained substantially (23%) by a linear regression model consisting of 3 factors describing the position of each landmark relative to the reference axes and the locating error. A four-point plane orientation system may produce an amount of reorientation error that varies according to the perpendicular distance between the landmark and the x-axis; the reorientation error also increases as the locating error and the shift of reference axes viewed from each landmark increase. Therefore, in order to reduce the reorientation error, the accuracy of all landmarks, including the reference points, is important. Construction of the regression model using reference points of greater precision is required for the clinical application of this model.
PREMOR: a point reactor exposure model computer code for survey analysis of power plant performance
Energy Technology Data Exchange (ETDEWEB)
Vondy, D.R.
1979-10-01
The PREMOR computer code was written to exploit a simple, two-group point nuclear reactor power plant model for survey analysis. Up to thirteen actinides, fourteen fission products, and one lumped absorber nuclide density are followed over a reactor history. Successive feed batches are accounted for with provision for from one to twenty batches resident. The effect of exposure of each of the batches to the same neutron flux is determined.
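The batch-exposure bookkeeping rests on one-group depletion of each nuclide under the shared flux; a single-nuclide sketch (the cross-section and flux values in the test are hypothetical, not PREMOR's data):

```python
def deplete(n0, sigma_a, flux, t, steps=10000):
    """Point-model burnup of one nuclide under a constant one-group flux:
    dN/dt = -sigma_a * flux * N, integrated with explicit Euler.
    n0: initial nuclide density; sigma_a: absorption cross-section;
    flux: neutron flux; t: exposure time (consistent units assumed)."""
    n, dt = n0, t / steps
    for _ in range(steps):
        n -= sigma_a * flux * n * dt
    return n
```

A full point-exposure model applies this to every actinide and fission-product density in each resident feed batch, with production terms coupling the chains; here only the analytic single-nuclide solution N(t) = N0 exp(-sigma*phi*t) is reproduced.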
Light diffraction by a slit and grooves with a point source model based on wave dynamics
Hong, Jian-Shiung; Chen, Kuan-Ren
2017-10-01
A point source model based on wave dynamics is proposed to study the fundamental physics of light diffraction by a subwavelength slit and grooves in a metallic film. In this model, two oppositely traveling waves are considered in each indentation; the resultant outgoing wave can propagate along the film surface to couple with the other openings or radiate into free space as a point source. With small-system simulations, the tangential electric field at each opening determines the temporal phase of its source; the energy conservation of each point source radiation and of the total radiant wave then determines the source amplitudes. Beyond this, the model reveals further physics of the wave interactions. In the strong-wave-coupling case studied, the surface waves created by the grooves flow into the slit and delay the Fabry-Pérot-like resonance. When adding the grooves concentrates the light field into a directional beam, the total transmitted energy through the slit decreases significantly; however, the energy in the original nearby grooves increases, so that the groove radiation takes an increasing share of the transmitted energy. As the total transmitted energy decreases, the slit radiation energy decreases further because of energy conservation. In the weak-wave-coupling cases, the groove radiation still interferes with that from the slit; as a result, the diffracted light is split into two beams. Interestingly, due to the groove radiation, the slit radiation energy is enhanced to exceed the energy transmitted through the slit. Detailed physical interpretations are given.
PCTO-SIM: Multiple-point geostatistical modeling using parallel conditional texture optimization
Pourfard, Mohammadreza; Abdollahifard, Mohammad J.; Faez, Karim; Motamedi, Sayed Ahmad; Hosseinian, Tahmineh
2017-05-01
Multiple-point geostatistics (MPS) is a well-known statistical framework with which complex geological phenomena can be modeled efficiently; pixel-based and patch-based methods are its two major categories. In this paper, the optimization-based category is used, which has a dual concept in texture synthesis known as texture optimization. Our extended version of texture optimization uses an energy formulation to model geological phenomena. While honoring hard data points, minimization of the proposed cost function forces the simulation-grid pixels to be as similar as possible to the training images. The algorithm has a self-enrichment capability and creates a richer training database from a sparser one by mixing the information of all patches surrounding each simulation node. It therefore preserves pattern continuity very well for both continuous and categorical variables. Each of its realizations also shows a fuzzy result similar to the expected outcome of multiple realizations of other statistical models. Whereas the main core of most previous MPS methods is sequential, the parallel core of our algorithm enables it to use the GPU efficiently and reduce run time. A new validation method for MPS is also proposed in this paper.
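The energy-minimization idea can be sketched in miniature. The snippet below is a toy 1D texture-optimization loop, not the PCTO-SIM implementation: the training image, patch size, and hard datum are hypothetical, and the real method works on 2D/3D grids with a parallel GPU core.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1D "training image": a binary channel/background pattern.
train = (np.sin(np.linspace(0.0, 20.0, 200)) > 0).astype(float)
P = 5                                    # patch size
lib = np.stack([train[i:i + P] for i in range(train.size - P + 1)])

sim = rng.random(50)                     # simulation grid, random start
hard_idx, hard_val = 20, 1.0             # one hard conditioning datum
sim[hard_idx] = hard_val

def energy(sim):
    """Texture-optimization cost: squared distance of every simulation
    patch to its nearest training patch."""
    return sum(float(((lib - sim[i:i + P]) ** 2).sum(axis=1).min())
               for i in range(sim.size - P + 1))

e0 = energy(sim)
for _ in range(10):                      # alternate assign / blend sweeps
    acc = np.zeros_like(sim)
    cnt = np.zeros_like(sim)
    for i in range(sim.size - P + 1):
        d = ((lib - sim[i:i + P]) ** 2).sum(axis=1)
        acc[i:i + P] += lib[d.argmin()]  # paste best-matching patch
        cnt[i:i + P] += 1
    sim = acc / cnt                      # overlap averaging -> "fuzzy" look
    sim[hard_idx] = hard_val             # re-impose the hard datum
e1 = energy(sim)
```

The overlap averaging is what produces the fuzzy, multi-realization-like appearance the abstract mentions, while the cost keeps patches close to the training patterns.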
From point to spatial information - integrating highly resolved sensor observations into crop models
Wallor, Evelyn; Kersebaum, Kurt-Christian; Lorenz, Karsten; Gebbers, Robin
2017-04-01
High spatial variability of soil properties restricts the benefit of process-oriented modelling for management recommendations at the field scale, because information on the soil inventory and its distribution is sparse. Sensor measurements such as geo-electrical mapping, on the other hand, provide data with a dense spatial pattern, but their interpretation is ambiguous and influenced by local conditions. In the present study, the model HERMES was applied to 60 soil sampling points of a well-documented field in North Rhine-Westphalia characterised by a wide range of soil textures. Validation of HERMES yielded satisfactory root mean square errors (RMSE) for yield and for water and nitrogen in soil. Measurements of apparent electrical conductivity (ECa; n = 5,000) on the same field ranged from 20 to 90 mS/m and were assigned to the soil sampling points. Subsequent regression analyses showed a high correlation of clay and sand contents with measured ECa values and justified calculating soil texture at the ECa mapping points. Hence, an improved spatial resolution of the key input soil texture was produced to initialise model simulations and finally to generate spatial patterns of simulated state variables (e.g. water and nitrogen content in soil).
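The regression step (ECa to texture at the sampling points, then texture at all mapping points) can be sketched as follows; the data are synthetic stand-ins, and the linear clay-ECa relation is an assumption for illustration, not the study's fitted coefficients.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the field data: apparent electrical conductivity
# ECa (mS/m) at the 60 sampling points and clay content (%); the 0.4*ECa
# relation is a hypothetical choice, not the study's result.
eca = rng.uniform(20.0, 90.0, 60)
clay = 0.4 * eca + 2.0 + rng.normal(0.0, 2.0, 60)

# Least-squares fit clay = a*ECa + b at the co-located sampling points.
a, b = np.polyfit(eca, clay, 1)
r = np.corrcoef(eca, clay)[0, 1]

# Apply the fitted relation at the ~5,000 ECa-only mapping points to get a
# higher-resolution texture input for the crop model.
eca_map = rng.uniform(20.0, 90.0, 5000)
clay_map = a * eca_map + b
```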
Modeling evolution of crosstalk in noisy signal transduction networks
Tareen, Ammar; Wingreen, Ned S.; Mukhopadhyay, Ranjan
2018-02-01
Signal transduction networks can form highly interconnected systems within cells due to crosstalk between constituent pathways. To better understand the evolutionary design principles underlying such networks, we study the evolution of crosstalk for two parallel signaling pathways that arise via gene duplication. We use a sequence-based evolutionary algorithm and evolve the network based on two physically motivated fitness functions related to information transmission. We find that one fitness function leads to a high degree of crosstalk while the other leads to pathway specificity. Our results offer insights on the relationship between network architecture and information transmission for noisy biomolecular networks.
Energy Technology Data Exchange (ETDEWEB)
Morrow, B.M., E-mail: morrow@lanl.gov [The Ohio State University, 2041 College Rd., 477 Watts Hall, Columbus, OH 43210 (United States); Los Alamos National Laboratory, P.O. Box 1663, MS G755, Los Alamos, NM 87545 (United States); Kozar, R.W.; Anderson, K.R. [Bettis Laboratory, Bechtel Marine Propulsion Corp., West Mifflin, PA 15122 (United States); Mills, M.J., E-mail: millsmj@mse.osu.edu [The Ohio State University, 2041 College Rd., 477 Watts Hall, Columbus, OH 43210 (United States)
2016-05-17
Several specimens of Zircaloy-4 were creep tested at a single stress-temperature condition, and interrupted at different accumulated strain levels. Substructural observations were performed using bright field scanning transmission electron microscopy (BF STEM). The dislocation substructure was characterized to ascertain how creep strain evolution impacts the Modified Jogged-Screw (MJS) model, which has previously been utilized to predict steady-state strain rates in Zircaloy-4. Special attention was paid to the evolution of individual model parameters with increasing strain. Results of model parameter measurements are reported and discussed, along with possible extensions to the MJS model.
Interactive Cosmetic Makeup of a 3D Point-Based Face Model
Kim, Jeong-Sik; Choi, Soo-Mi
We present an interactive system for cosmetic makeup of a point-based face model acquired by 3D scanners. We first enhance the texture of a face model in 3D space using low-pass Gaussian filtering, median filtering, and histogram equalization. The user is provided with a stereoscopic display and haptic feedback, and can perform simulated makeup tasks including the application of foundation, color makeup, and lip gloss. Fast rendering is achieved by processing surfels using the GPU, and we use a BSP tree data structure and a dynamic local refinement of the facial surface to provide interactive haptics. We have implemented a prototype system and evaluated its performance.
Pairwise-interaction extended point-particle model for particle-laden flows
Akiki, G.; Moore, W. C.; Balachandar, S.
2017-12-01
In this work we consider the pairwise-interaction extended point-particle (PIEP) model for Euler-Lagrange simulations of particle-laden flows. By accounting for the precise locations of neighbors, the PIEP model goes beyond the local particle volume fraction and distinguishes the influence of upstream, downstream, and laterally located neighbors. The two main ingredients of the PIEP model are: (i) the undisturbed flow at any particle is evaluated as a superposition of the macroscale flow and a microscale flow, the latter approximated as a pairwise superposition of the perturbation fields induced by each neighboring particle; and (ii) the force and torque on the particle are then calculated from this undisturbed flow using the Faxén form of the force relation. The computational efficiency of the standard Euler-Lagrange approach is retained, since the microscale perturbation fields induced by a neighbor are pre-computed and stored as PIEP maps. Here we extend the PIEP force model of Akiki et al. [3] with a corresponding torque model that systematically includes the effect of the neighbor-induced perturbation fields in evaluating the net torque. We also use DNS results for uniform flow over two stationary spheres to further improve the PIEP force and torque models. We then test the PIEP model in three different sedimentation problems and compare the results against corresponding DNS to assess the accuracy of the PIEP model and its improvement over the standard point-particle approach. In the case of two spheres sedimenting in a quiescent ambient, the PIEP model is shown to capture the drafting-kissing-tumbling process. In the cases of 5 and 80 sedimenting spheres, good agreement is obtained between the PIEP simulations and the DNS. For all three simulations, the DEM-PIEP recreated, to a good extent, the results of the DNS while requiring only a negligible fraction of the numerical resources needed by the fully resolved DNS.
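A minimal sketch of the two PIEP ingredients, under strong simplifying assumptions: the perturbation "map" below is a made-up exponential decay rather than the pre-computed fields of Akiki et al., and plain Stokes-like drag stands in for the full Faxén force relation.

```python
import numpy as np

def perturbation(dx):
    """Hypothetical stand-in for a pre-computed PIEP map: streamwise
    velocity deficit induced by a neighbour at separation vector dx."""
    r = float(np.linalg.norm(dx))
    return -0.1 * np.exp(-r) * np.array([1.0, 0.0, 0.0])

u_macro = np.array([1.0, 0.0, 0.0])         # macroscale (averaged) flow
positions = [np.array([0.0, 0.0, 0.0]),     # particle of interest
             np.array([1.0, 0.0, 0.0]),     # two neighbours
             np.array([0.0, 1.5, 0.0])]

# Ingredient (i): undisturbed flow at particle 0 = macroscale flow plus the
# pairwise superposition of each neighbour's perturbation field.
u_undist = u_macro.copy()
for p in positions[1:]:
    u_undist += perturbation(positions[0] - p)

# Ingredient (ii): quasi-steady drag evaluated from the undisturbed flow
# (a simple drag coefficient k stands in for the Faxen force relation).
k = 1.0
v_particle = np.zeros(3)
force = k * (u_undist - v_particle)
```

The point is only that neighbors shift the undisturbed velocity seen by the particle, which then changes the computed force; the real model does this with direction-dependent maps and a matching torque closure.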
Testing gradual and speciational models of evolution in extant taxa: the example of ratites.
Laurin, M; Gussekloo, S W S; Marjanović, D; Legendre, L; Cubo, J
2012-02-01
Ever since Eldredge and Gould proposed their model of punctuated equilibria, evolutionary biologists have debated how often this model is the best description of nature and how important it is compared to the more gradual models of evolution expected from natural selection and the neo-Darwinian paradigm. Recently, Cubo proposed a method to test whether morphological data in extant ratites are more compatible with a gradual or with a speciational model (close to the punctuated equilibrium model). As shown by our simulations, a new method to test the mode of evolution of characters (involving regression of standardized contrasts on their expected standard deviation) is easier to implement and more powerful than the previously proposed method, but the Mesquite module comet (aimed at investigating evolutionary models using comparative data) performs better still. Uncertainties in branch length estimates are probably the largest source of potential error. Cubo hypothesized that heterochronic mechanisms may underlie morphological changes in bone shape during the evolution of ratites. He predicted that the outcome of these changes may be consistent with a speciational model of character evolution because heterochronic changes can be instantaneous in terms of geological time. Analysis of a more extensive data set confirms his prediction despite branch length uncertainties: evolution in ratites has been mostly speciational for shape-related characters. However, it has been mostly gradual for size-related ones.
Assimilating point snow water equivalent data into a distributed snow cover model
Magnusson, Jan; Gustafsson, David; Hüsler, Fabia; Jonas, Tobias
2014-05-01
In Switzerland, snowmelt dominates runoff in many watersheds, and the total snow storage contributing to discharge can vary greatly from year to year. Accurately quantifying snow storage and the subsequent runoff is important for regulating lake levels throughout the country. Additionally, melting snow can contribute to floods that inflict large damage on infrastructure. To better quantify snow storage, we examine whether the performance of a distributed snow model improves when different methods are applied for assimilating point snow water equivalent (SWE) data. We update the model results using either the ensemble Kalman filter or a combination of the ensemble Kalman filter and statistical interpolation. Filter performance was assessed by comparing the simulation results against observed SWE and snow-covered fraction. We show that a method that assimilates daily changes in SWE performs better than an approach that updates the model with the SWE data directly. Both assimilation methods showed higher model performance than a control simulation without data assimilation, and both filter simulations showed better agreement with the SWE observations than an interpolation method optimized for snow data. The results show that the three-dimensional data assimilation methods were useful for transferring the information in the point snow observations across the domain simulated by the distributed snow model.
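The ensemble Kalman filter update for a single point SWE observation can be sketched as below; the grid size, ensemble size, and error magnitudes are illustrative choices, not the values used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy distributed SWE field and prior ensemble (units: mm of SWE).
n_cells, n_ens = 10, 50
truth = np.linspace(100.0, 300.0, n_cells)
ens = truth + rng.normal(0.0, 40.0, (n_ens, n_cells))   # prior members

obs_cell, obs_err = 4, 5.0
y = truth[obs_cell] + rng.normal(0.0, obs_err)          # point observation

# Kalman gain from ensemble statistics: K = cov(x, Hx) / (var(Hx) + R).
x_mean = ens.mean(axis=0)
hx = ens[:, obs_cell]
cov_x_hx = ((ens - x_mean) * (hx - hx.mean())[:, None]).sum(axis=0) / (n_ens - 1)
K = cov_x_hx / (hx.var(ddof=1) + obs_err**2)

# Perturbed-observation analysis: each member assimilates a jittered y,
# so the observation spreads to all cells via the ensemble covariances.
y_pert = y + rng.normal(0.0, obs_err, n_ens)
analysis = ens + K[None, :] * (y_pert - hx)[:, None]
```

The sampled cross-covariances are what carry the point observation across the grid, which is the mechanism behind the "transferring information across the domain" result; assimilating daily SWE changes would use the same update with a differenced observation operator.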
ReMoFP: A Tool for Counting Function Points from UML Requirement Models
Directory of Open Access Journals (Sweden)
Vitor A. Batista
2011-01-01
Function Point Analysis (FPA) is a widely used technique for measuring software size. It measures software functionality from the user's perspective, usually based on a requirements description. In many software processes, these requirements are represented by UML models. Although there have been attempts to automate the measurement process, FPA counting requires a considerable amount of interpretation which, to be reliable, should be done by experts. On the other hand, fully manual counting methods usually fail to stay synchronized with the requirements model, since requirements frequently change during the development cycle. This paper describes an approach to FPA counting based on UML requirement models, together with a compliant tool. The tool, called ReMoFP (Requirement Model Function Point counter), leaves all counting decisions to the analyst, but supports him by ensuring consistency with the requirements represented in the models. ReMoFP was developed by a software development laboratory in Brazil and has helped it improve counting productivity, consistency, and maintainability.
A GRAPH BASED MODEL FOR THE DETECTION OF TIDAL CHANNELS USING MARKED POINT PROCESSES
Directory of Open Access Journals (Sweden)
A. Schmidt
2015-08-01
In this paper we propose a new method for the automatic extraction of tidal channels in digital terrain models (DTMs) using a sampling approach based on marked point processes. In our model, the tidal channel system is represented by an undirected, acyclic graph. The graph is iteratively generated and fitted to the data using stochastic optimization based on a Reversible Jump Markov Chain Monte Carlo (RJMCMC) sampler and simulated annealing. The nodes of the graph represent junction points of the channel system, and the edges represent straight line segments of a certain width between them. In each sampling step, the current configuration of nodes and edges is modified. The changes are accepted or rejected depending on the probability density function for the configuration, which evaluates the conformity of the current state with a pre-defined model for tidal channels. In this model we favour high DTM gradient magnitudes at the edge borders and penalize graph configurations consisting of non-connected components, overlapping segments, or edges with atypical intersection angles. We present our graph-based model and show results for lidar data, which serve as a proof of concept of our approach.
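The accept/reject core of such a sampler with simulated annealing can be illustrated with a scalar toy energy; the real probability density scores DTM gradients and graph topology, and the proposals add, remove, or move nodes and edges rather than perturbing a single number.

```python
import math
import random

random.seed(3)

def energy(x):
    """Toy stand-in for the negative log of the configuration density."""
    return (x - 2.0) ** 2

x, e = 10.0, energy(10.0)   # initial configuration, far from the optimum
T = 5.0                     # initial annealing temperature
for step in range(2000):
    x_new = x + random.gauss(0.0, 0.5)      # propose a modified configuration
    e_new = energy(x_new)
    # Metropolis rule: always accept improvements; accept worse states
    # with probability exp(-dE/T), which shrinks as T is cooled.
    if e_new < e or random.random() < math.exp(-(e_new - e) / T):
        x, e = x_new, e_new
    T *= 0.995                               # geometric cooling schedule
```

Early on, the high temperature lets the sampler escape poor configurations; as T drops, only improvements survive and the state settles near the mode, mirroring how the channel graph is refined.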
Akyol, Gulsum; Tekkaya, Ceren; Sungur, Semra; Traynor, Anne
2012-01-01
This study proposed a path model of relationships among understanding and acceptance of evolution, views on nature of science, and self-efficacy beliefs regarding teaching evolution. A total of 415 pre-service science teachers completed a series of self-report instruments for the specified purpose. After the estimation of scale scores using…
Pacheco, Maria Pires; John, Elisabeth; Kaoma, Tony; Heinäniemi, Merja; Nicot, Nathalie; Vallar, Laurent; Bueb, Jean-Luc; Sinkkonen, Lasse; Sauter, Thomas
2015-10-19
The reconstruction of context-specific metabolic models from easily and reliably measurable features such as transcriptomics data will be increasingly important in research and medicine. Current reconstruction methods suffer from high computational effort and arbitrary threshold setting. Moreover, understanding the underlying epigenetic regulation might allow the identification of putative intervention points within metabolic networks. Genes under high regulatory load from multiple enhancers or super-enhancers are known key genes for disease and cell identity. However, their role in regulation of metabolism and their placement within the metabolic networks has not been studied. Here we present FASTCORMICS, a fast and robust workflow for the creation of high-quality metabolic models from transcriptomics data. FASTCORMICS is devoid of arbitrary parameter settings and due to its low computational demand allows cross-validation assays. Applying FASTCORMICS, we have generated models for 63 primary human cell types from microarray data, revealing significant differences in their metabolic networks. To understand the cell type-specific regulation of the alternative metabolic pathways we built multiple models during differentiation of primary human monocytes to macrophages and performed ChIP-Seq experiments for histone H3 K27 acetylation (H3K27ac) to map the active enhancers in macrophages. Focusing on the metabolic genes under high regulatory load from multiple enhancers or super-enhancers, we found these genes to show the most cell type-restricted and abundant expression profiles within their respective pathways. Importantly, the high regulatory load genes are associated to reactions enriched for transport reactions and other pathway entry points, suggesting that they are critical regulatory control points for cell type-specific metabolism. By integrating metabolic modelling and epigenomic analysis we have identified high regulatory load as a common feature of metabolic
Ideal point error for model assessment in data-driven river flow forecasting
Directory of Open Access Journals (Sweden)
C. W. Dawson
2012-08-01
When analysing the performance of hydrological models in river forecasting, researchers use a number of diverse statistics. Although some statistics appear to be used more regularly in such analyses than others, there is a distinct lack of consistency in evaluation, making studies undertaken by different authors or performed at different locations difficult to compare in a meaningful manner. Moreover, even within individual reported case studies, substantial contradictions are found to occur between one measure of performance and another. In this paper we examine the ideal point error (IPE) metric – a recently introduced measure of model performance that integrates a number of recognised metrics in a logical way. Having a single, integrated measure of performance is appealing, as it should permit more straightforward model inter-comparisons. However, this relies on a transferable standardisation of the individual metrics that are combined to form the IPE. This paper examines one potential option for such standardisation: the use of naive model benchmarking.
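The naive-benchmark standardisation examined in the paper can be sketched as follows; the metric set, equal weights, and data are illustrative assumptions, and the exact IPE formulation should be taken from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic daily flow series, a hypothetical model output, and a naive
# persistence benchmark (yesterday's flow forecasts today's flow).
t = np.arange(200)
flow = 50.0 + 10.0 * np.sin(t / 3.0) + rng.normal(0.0, 1.0, t.size)
model_pred = flow + rng.normal(0.0, 2.0, t.size)
naive_pred = np.roll(flow, 1)
flow, model_pred, naive_pred = flow[1:], model_pred[1:], naive_pred[1:]

def rmse(obs, sim):
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def mae(obs, sim):
    return float(np.mean(np.abs(obs - sim)))

# Standardise each metric by the naive model's score (0 = perfect,
# 1 = no better than persistence), then take the Euclidean distance
# from the ideal point as a single combined score.
scaled = [m(flow, model_pred) / m(flow, naive_pred) for m in (rmse, mae)]
ipe = float(np.sqrt(np.mean(np.square(scaled))))
```

A score below 1 means the model beats the persistence benchmark on the combined metrics, which is the transferable interpretation the benchmarking approach aims for.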
Fong, Stephen S; Marciniak, Jennifer Y; Palsson, Bernhard Ø
2003-11-01
Genome-scale in silico metabolic networks of Escherichia coli have been reconstructed. By using a constraint-based in silico model of a reconstructed network, the range of phenotypes exhibited by E. coli under different growth conditions can be computed, and optimal growth phenotypes can be predicted. We hypothesized that the end point of adaptive evolution of E. coli could be accurately described a priori by our in silico model since adaptive evolution should lead to an optimal phenotype. Adaptive evolution of E. coli during prolonged exponential growth was performed with M9 minimal medium supplemented with 2 g of alpha-ketoglutarate per liter, 2 g of lactate per liter, or 2 g of pyruvate per liter at both 30 and 37 degrees C, which produced seven distinct strains. The growth rates, substrate uptake rates, oxygen uptake rates, by-product secretion patterns, and growth rates on alternative substrates were measured for each strain as a function of evolutionary time. Three major conclusions were drawn from the experimental results. First, adaptive evolution leads to a phenotype characterized by maximized growth rates that may not correspond to the highest biomass yield. Second, metabolic phenotypes resulting from adaptive evolution can be described and predicted computationally. Third, adaptive evolution on a single substrate leads to changes in growth characteristics on other substrates that could signify parallel or opposing growth objectives. Together, the results show that genome-scale in silico metabolic models can describe the end point of adaptive evolution a priori and can be used to gain insight into the adaptive evolutionary process for E. coli.
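The constraint-based (flux balance) calculation behind such optimal-growth predictions can be illustrated on a toy network; this is a didactic 4-reaction model, not the genome-scale E. coli reconstruction, and it uses SciPy's linear programming routine.

```python
import numpy as np
from scipy.optimize import linprog

# Toy metabolic network (3 metabolites, 4 reactions):
#   R1: -> A           (substrate uptake, bounded by 10)
#   R2: A -> B
#   R3: A -> C
#   R4: B + C ->       (biomass / growth reaction)
# Rows are metabolites A, B, C; columns are reactions R1..R4.
S = np.array([
    [1, -1, -1,  0],   # A
    [0,  1,  0, -1],   # B
    [0,  0,  1, -1],   # C
])

# FBA: maximize growth flux v4 subject to steady state S v = 0 and flux
# bounds (linprog minimizes, so the objective is -v4).
c = [0, 0, 0, -1]
bounds = [(0, 10), (0, None), (0, None), (0, None)]
res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
v = res.x
growth = v[3]
```

At the optimum the uptake runs at its bound and both branch fluxes equal the growth flux, which is the kind of optimal phenotype the study compares against the evolved strains.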
Origin and Evolution of the Moon: Apollo 2000 Model
Schmitt, H. H.
1999-01-01
A descriptive formulation of the stages of lunar evolution as an augmentation of the traditional time-stratigraphic approach [2] enables broadened multidisciplinary discussions of issues related to the Moon and planets. An update of this descriptive formulation [3], integrating Apollo and subsequently acquired data, provides additional perspectives on many of the outstanding issues in lunar science. (Stage 1): Beginning (Pre-Nectarian) - 4.57 Ga; (Stage 2): Magma Ocean (Pre-Nectarian) - 4.57-4.2(?) Ga; (Stage 3): Cratered Highlands (Pre-Nectarian) - 4.4(?)-4.2(?) Ga; (Stage 4): Large Basins (Pre-Nectarian - Upper Imbrium) - 4.3(?)-3.8 Ga; (Stage 4A): Old Large Basins and Crustal Strengthening (Pre-Nectarian) - 4.3(?)-3.92 Ga; (Stage 4B): Young Large Basins (Nectarian - Lower Imbrium) - 3.92-3.80 Ga; (Stage 5): Basaltic Maria (Upper Imbrium) - 4.3(?)-1.0(?) Ga; (Stage 6): Mature Surface (Copernican and Eratosthenian) - 3.80 Ga to Present. Increasingly strong indications of a largely undifferentiated lower lunar mantle and increasingly constrained initial conditions for models of an Earth-impact origin for the Moon suggest that lunar origin by capture of an independently evolved planet should be investigated more vigorously. Capture appears to better explain the geochemical and geophysical details related to the lower mantle of the Moon and to the distribution of elements and their isotopes. For example, the source of the volatile components of the Apollo 17 orange glass apparently would have lain below the degassed and differentiated magma ocean [3] in a relatively undifferentiated primordial lower mantle. Also, a density reversal from 3.7 gm/cubic cm to approximately 3.3 gm/cubic cm is required at the base of the upper mantle to be consistent with the overall density of the Moon. Finally, Hf/W systematics allow only a very narrow window, if any at all, for a giant impact to form the Moon. Continued accretionary impact activity during the crystallization of the magma
DEFF Research Database (Denmark)
Lavancier, Frédéric; Møller, Jesper
We consider a dependent thinning of a regular point process with the aim of obtaining aggregation on the large scale and regularity on the small scale in the resulting target point process of retained points. Various parametric models for the underlying processes are suggested and the properties ...
The impact of mobile point defect clusters in a kinetic model of pressure vessel embrittlement
Energy Technology Data Exchange (ETDEWEB)
Stoller, R.E.
1998-05-01
The results of recent molecular dynamics simulations of displacement cascades in iron indicate that small interstitial clusters may have a very low activation energy for migration, and that their migration is 1-dimensional rather than 3-dimensional. The mobility of these clusters can have a significant impact on the predictions of radiation damage models, particularly at the relatively low temperatures typical of commercial light water reactor pressure vessels (RPVs) and other out-of-core components. A previously developed kinetic model used to investigate RPV embrittlement has been modified to permit an evaluation of mobile interstitial clusters. Sink strengths appropriate to both 1- and 3-dimensional cluster motion were evaluated. High cluster mobility leads to a reduction in the predicted embrittlement due to interstitial clusters, since the clusters are lost to sinks rather than building up in the microstructure. The sensitivity of the predictions to displacement rate also increases. The magnitude of this effect is somewhat reduced if the migration is 1-dimensional, since the corresponding sink strengths are lower than those for 3-dimensional diffusion. The cluster mobility can also affect the evolution of copper-rich precipitates in the model, since the radiation-enhanced diffusion coefficient increases due to the lower interstitial cluster sink strength. The overall impact of the modifications to the model is discussed in terms of the major irradiation variables and material parameter uncertainties.
Modeling a Single SEP Event from Multiple Vantage Points Using the iPATH Model
Hu, Junxiang; Li, Gang; Fu, Shuai; Zank, Gary; Ao, Xianzhi
2018-02-01
Using the recently extended 2D improved Particle Acceleration and Transport in the Heliosphere (iPATH) model, we model an example of a gradual solar energetic particle event as observed at multiple locations. Protons and ions that are energized via the diffusive shock acceleration mechanism are followed at a 2D coronal mass ejection-driven shock where the shock geometry varies across the shock front. The subsequent transport of energetic particles, including cross-field diffusion, is modeled by a Monte Carlo code that is based on a stochastic differential equation method. Time-intensity profiles and particle spectra at multiple locations and different radial distances, separated in longitude, are presented. The results shown here are relevant to the upcoming Parker Solar Probe mission.
Attenuation modelling of bulk waves generated by a point source in an isotropic medium
Energy Technology Data Exchange (ETDEWEB)
Ramadas, C. [Composites Research Center, R and D, Pune (India)
2016-10-15
Attenuation of a bulk wave generated by a point source and propagating in an isotropic medium arises both from geometry and from the nature of the material. In numerical simulations, modeling the complete propagation domain captures the attenuation caused by geometry. To model the attenuation caused by the nature of the material, the material attenuation coefficient must be known. Since experimental measurements of wave attenuation include both geometric and material effects, a method based on curve fitting is proposed to estimate the material attenuation coefficient from the effective attenuation coefficient. Using the material attenuation coefficient in the framework of a Rayleigh damping model, numerical modeling of the attenuation of both bulk waves - longitudinal and shear - excited by a point source was carried out. It was shown that the proposed method captures the attenuation of bulk waves caused by geometry as well as by the nature of the material.
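The curve-fitting idea can be sketched for a spherically spreading bulk wave, where the geometric decay is 1/r; the distances, noise level, and attenuation coefficient below are synthetic assumptions, not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "measured" amplitudes: geometric spreading (1/r for a point
# source in 3D) multiplied by material attenuation exp(-alpha*r).
alpha_true = 0.08                         # assumed material coefficient, 1/mm
r = np.linspace(10.0, 100.0, 30)          # propagation distances, mm
amp = (1.0 / r) * np.exp(-alpha_true * r) * (1.0 + rng.normal(0.0, 0.01, r.size))

# Divide out the known geometric factor; a linear fit of log(amplitude*r)
# against distance then yields the material coefficient as minus the slope.
slope, intercept = np.polyfit(r, np.log(amp * r), 1)
alpha_est = -slope
```

Separating the known geometric factor before fitting is what lets the effective (measured) attenuation be split into its geometric and material parts.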
The monodromy property for K3 surfaces allowing a triple-point-free model
DEFF Research Database (Denmark)
Jaspers, Annelies Kristien J
2017-01-01
The aim of this thesis is to study under which conditions K3 surfaces allowing a triple-point-free model satisfy the monodromy property. This property is a quantitative relation between the geometry of the degeneration of a Calabi-Yau variety X and the monodromy action on the cohomology of X: a Calabi-Yau variety X satisfies the monodromy property if poles of the motivic zeta function ZX,ω(T) induce monodromy eigenvalues on the cohomology of X. Let k be an algebraically closed field of characteristic 0, and set K = k((t)). In this thesis, we focus on K3 surfaces over K allowing a triple-point-free model, i.e., K3 surfaces allowing a strict normal crossings model such that three irreducible components of the special fiber never meet simultaneously. Crauder and Morrison classified these models into two main classes: so-called flowerpot degenerations and chain degenerations. This classification ...
AUTOMATED VOXEL MODEL FROM POINT CLOUDS FOR STRUCTURAL ANALYSIS OF CULTURAL HERITAGE
Directory of Open Access Journals (Sweden)
G. Bitelli
2016-06-01
In the context of cultural heritage, an accurate and comprehensive digital survey of a historical building is today essential in order to measure its geometry in detail for documentation or restoration purposes, to support special studies regarding materials and constructive characteristics, and finally for structural analysis. Proven geomatic techniques, such as photogrammetry and terrestrial laser scanning, are increasingly used to survey buildings of different complexity and dimensions; one typical product is a point cloud. We developed a semi-automatic procedure to convert point clouds, acquired by laser scanning or digital photogrammetry, into a filled volume model of the whole structure. The filled volume model, in a voxel format, can be useful for further analysis and also for the generation of a Finite Element Model (FEM) of the surveyed building. In this paper a new approach is presented with the aim of decreasing operator intervention in the workflow and obtaining a better description of the structure. To achieve this result, a voxel model with variable resolution is produced. Different parameters are compared, and the steps of the procedure are tested and validated in the case study of the North tower of the San Felice sul Panaro Fortress, a monumental historical building in San Felice sul Panaro (Modena, Italy) that was hit by an earthquake in 2012.
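The core point-cloud-to-voxel step can be sketched with a uniform-resolution occupancy grid; the paper's procedure additionally fills the interior volume and varies the resolution, and the synthetic "wall" below stands in for real survey data.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic point cloud: a thin vertical wall (stand-in for scan data).
pts = np.column_stack([
    rng.uniform(0.0, 2.0, 5000),     # x along the wall, metres
    rng.normal(1.0, 0.02, 5000),     # y: wall thickness, centred at 1 m
    rng.uniform(0.0, 3.0, 5000),     # z: height, metres
])

voxel = 0.1                          # voxel edge length, metres
origin = pts.min(axis=0)
idx = np.floor((pts - origin) / voxel).astype(int)
shape = tuple(idx.max(axis=0) + 1)

grid = np.zeros(shape, dtype=bool)
grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True   # mark occupied voxels
n_occupied = int(grid.sum())
```

The boolean grid (after interior filling) is the kind of filled volume model that can be exported voxel-by-voxel as brick elements for a FEM mesh.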
Perturbation modeling of the long term dynamics of a point-like object
Ribarič, Marijan
2013-01-01
We consider classical real objects whose response to an external force is specified solely by the trajectory of a single point, and whose velocity eventually stops changing after the cessation of the external force. We name them point-like objects (POs). To study the interaction between the movement of a PO and the surrounding medium, we consider the long-term dynamics of a PO (LT dynamics) in the case of a small and slowly changing external force. To this end, we introduce perturbation modeling of LT dynamics at a given time instant by novel models (LT models), which are polynomials in the time derivatives of the external force at the same instant. Given a possibly nonlinear differential equation of motion for a PO, we can calculate the corresponding LT models iteratively. We thus obtain approximations to the acceleration of the long-term PO trajectory as polynomials in the time derivatives of the external force, and so determine the relative significance of the constants of the PO equation of motion for LT dynamics. To...
The periglacial engine of mountain erosion – Part 2: Modelling large-scale landscape evolution
Directory of Open Access Journals (Sweden)
D. L. Egholm
2015-10-01
There is growing recognition of strong periglacial control on bedrock erosion in mountain landscapes, including the shaping of low-relief surfaces at high elevations (summit flats). But the hypothesis that frost action was crucial to the assumed Late Cenozoic rise in erosion rates remains compelling yet untested. Here we present a landscape evolution model incorporating two key periglacial processes – regolith production via frost cracking and sediment transport via frost creep – which together are harnessed to variations in temperature and the evolving thickness of the sediment cover. Our computational experiments time-integrate the contribution of frost action to shaping mountain topography over million-year timescales, with the primary and highly reproducible outcome being the development of flattish or gently convex summit flats. A simple scaling of temperature to marine δ18O records spanning the past 14 Myr indicates that the highest summit flats in mid- to high-latitude mountains may have formed via frost action prior to the Quaternary. We suggest that deep cooling in the Quaternary accelerated mechanical weathering globally by significantly expanding the area subject to frost. Further, the inclusion of subglacial erosion alongside periglacial processes in our computational experiments points to alpine glaciers increasing the long-term efficiency of frost-driven erosion by steepening hillslopes.
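The sediment-transport component can be caricatured as hillslope diffusion in 1D; the constant diffusivity below replaces the paper's temperature- and sediment-thickness-dependent frost-creep law, and all numbers are purely illustrative.

```python
import numpy as np

# 1D hillslope evolution dz/dt = kappa * d2z/dx2, time-integrated over a
# long interval; units are illustrative (m, kyr).
n, dx, dt, kappa = 101, 10.0, 0.5, 5.0
x = np.arange(n) * dx
z = 100.0 - np.abs(x - x[n // 2]) * 0.15    # initial sharp triangular ridge

for _ in range(20000):                      # 20000 * 0.5 kyr = 10 Myr
    lap = np.zeros_like(z)
    lap[1:-1] = (z[2:] - 2.0 * z[1:-1] + z[:-2]) / dx**2
    z += dt * kappa * lap                   # explicit diffusion step (stable:
                                            # dt*kappa/dx^2 = 0.025 < 0.5)
    z[0], z[-1] = z[1] - 1.5, z[-2] - 1.5   # fixed outward slope at the edges
```

Diffusive creep erases the sharp crest and leaves a gently convex summit profile while the ridge slowly lowers, which is the qualitative summit-flat outcome the full model reproduces with physically based frost-cracking and frost-creep terms.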
Peromyscus burrowing: A model system for behavioral evolution.
Hu, Caroline K; Hoekstra, Hopi E
2017-01-01
A major challenge to understanding the genetic basis of complex behavioral evolution is the quantification of complex behaviors themselves. Deer mice of the genus Peromyscus vary in their burrowing behavior, which leaves behind a physical trace that is easily preserved and measured. Moreover, natural burrowing behaviors are recapitulated in the lab, and there is a strong heritable component. Here we discuss potential mechanisms driving variation in burrows with an emphasis on two sister species: P. maniculatus, which digs a simple, short burrow, and P. polionotus, which digs a long burrow with a complex architecture. A forward-genetic cross between these two species identified several genomic regions associated with burrow traits, suggesting this complex behavior has evolved in a modular fashion. Because burrow differences are most likely due to differences in behavioral circuits, Peromyscus burrowing offers an exciting opportunity to link genetic variation between natural populations to evolutionary changes in neural circuits.
Analysis of vertebrate genomes suggests a new model for clade B serpin evolution
Directory of Open Access Journals (Sweden)
Bird Phillip I
2005-11-01
Background: The human genome contains 13 clade B serpin genes at two loci, 6p25 and 18q21. The three genes at 6p25 all conform to a 7-exon gene structure with conserved intron positioning and phasing; however, at 18q21 there are two 7-exon genes and eight genes with an additional exon yielding an 8-exon structure. Currently, it is not known how these two loci evolved, nor which gene structure arose first – did the 8-exon genes gain an exon, or did the 7-exon genes lose one? Here we use the genomes of diverse vertebrate species to plot the emergence of clade B serpin genes and to identify the point at which the two genomic structures arose. Results: Analysis of the chicken genome indicated the presence of a single clade B serpin gene locus, containing orthologues of both human loci and both genomic structures. The frog genome and the genomes of three fish species presented progressively simpler loci, although only the 7-exon structure could be identified. The Serpinb12 gene contains seven exons in the frog genome, but eight exons in chickens and humans, indicating that the additional exon evolved in this gene. Conclusion: We propose a new model for clade B serpin evolution from a single 7-exon gene (either Serpinb1 or Serpinb6). An additional exon was gained in the Serpinb12 gene between the tetrapoda and amniota radiations to produce the 8-exon structure. Both structures were then duplicated at a single locus until a chromosomal breakage occurred at some point along the mammalian lineage, resulting in the two modern loci.
Analysis of vertebrate genomes suggests a new model for clade B serpin evolution.
Kaiserman, Dion; Bird, Phillip I
2005-11-23
The human genome contains 13 clade B serpin genes at two loci, 6p25 and 18q21. The three genes at 6p25 all conform to a 7-exon gene structure with conserved intron positioning and phasing, however, at 18q21 there are two 7-exon genes and eight genes with an additional exon yielding an 8-exon structure. Currently, it is not known how these two loci evolved, nor which gene structure arose first--did the 8-exon genes gain an exon, or did the 7-exon genes lose one? Here we use the genomes of diverse vertebrate species to plot the emergence of clade B serpin genes and to identify the point at which the two genomic structures arose. Analysis of the chicken genome indicated the presence of a single clade B serpin gene locus, containing orthologues of both human loci and both genomic structures. The frog genome and the genomes of three fish species presented progressively simpler loci, although only the 7-exon structure could be identified. The Serpinb12 gene contains seven exons in the frog genome, but eight exons in chickens and humans, indicating that the additional exon evolved in this gene. We propose a new model for clade B serpin evolution from a single 7-exon gene (either Serpinb1 or Serpinb6). An additional exon was gained in the Serpinb12 gene between the tetrapoda and amniota radiations to produce the 8-exon structure. Both structures were then duplicated at a single locus until a chromosomal breakage occurred at some point along the mammalian lineage resulting in the two modern loci.
A Two-State Model of Tree Evolution and its Applications to Alu Retrotransposition.
Moshiri, Niema; Mirarab, Siavash
2017-11-20
Models of tree evolution have mostly focused on capturing the cladogenesis processes behind speciation. Processes that drive the evolution of genomic elements, such as repeats, are not necessarily captured by these existing models. In this paper, we design a model of tree evolution that we call the dual-birth model, and we show how it can be useful in studying the evolution of the short Alu repeats found in abundance in the human genome. The dual-birth model extends the traditional birth-only model to have two rates of propagation: one for active nodes, which propagate often, and another for inactive nodes, which activate and start propagating at a lower rate. Adjusting the ratio of the rates controls the expected tree balance. We present several theoretical results under the dual-birth model, introduce parameter estimation techniques, and study the properties of the model in simulations. We then use the dual-birth model to estimate the number of active Alu elements and their rates of propagation and activation in the human genome, based on a large phylogenetic tree that we build from close to one million Alu sequences.
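The two-rate branching idea can be illustrated with a tiny Gillespie-style simulation. This is a hedged sketch of our reading of the dual-birth process: the convention that an active event adds an inactive child, and that an activation event converts an inactive lineage to active while leaving an inactive child, is our assumption, not necessarily the authors' exact formulation.

```python
import random

# Illustrative Gillespie-style simulation of a dual-birth-like process:
# active nodes propagate at rate la (each event adds an inactive child);
# inactive nodes activate at the lower rate lb (one inactive lineage
# becomes active, leaving a new inactive child behind). Event
# conventions are assumptions for illustration.

def simulate_dual_birth(la, lb, n_events, seed=0):
    rng = random.Random(seed)
    active, inactive = 1, 0  # start from a single active lineage
    for _ in range(n_events):
        r_act = la * active      # total active-propagation rate
        r_inact = lb * inactive  # total activation rate
        if rng.random() < r_act / (r_act + r_inact):
            inactive += 1        # active node spawns an inactive child
        else:
            active += 1          # an inactive node activates
    return active, inactive
```

Each event adds exactly one lineage, so after n events the tree has n + 1 tips; raising lb relative to la shifts the balance toward more active lineages, which is the knob the paper uses to control expected tree balance.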
Ground states and critical points for generalized Frenkel-Kontorova models
De La Llave, R
2006-01-01
We consider a multidimensional model of Frenkel-Kontorova type, but we allow non-nearest-neighbor interactions. For every possible frequency vector, we show that there are quasi-periodic ground states which enjoy further geometric properties. The ground states we produce are either bigger or smaller than the plane wave with the given frequency, and they are at a bounded distance from it. The comparison property above implies that the ground states and their translations are organized into laminations. If these leave a gap, we show that there are critical points inside the gap which also satisfy the comparison properties. In particular, given any frequency, we show that either there is a continuous parameter of ground states or there is a ground state and another critical point which is not a ground state.
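For intuition, the classical (nearest-neighbor, one-dimensional) Frenkel-Kontorova energy and a simple relaxation toward a critical point can be sketched as follows. The generalized, longer-range couplings treated in the paper are omitted; parameters and the finite free-end chain are illustrative choices of ours.

```python
import math

# Minimal sketch: classical Frenkel-Kontorova energy for a finite chain
# (elastic nearest-neighbor coupling plus a periodic on-site potential)
# and plain gradient descent toward a critical point. Parameters are
# illustrative; the paper's non-nearest-neighbor terms are not included.

def fk_energy(u, k=1.0, v=0.2):
    """Energy of a finite chain with free ends."""
    elastic = sum(0.5 * k * (u[i + 1] - u[i]) ** 2 for i in range(len(u) - 1))
    onsite = sum(v * (1.0 - math.cos(2.0 * math.pi * ui)) for ui in u)
    return elastic + onsite

def relax(u, steps=500, lr=0.05, k=1.0, v=0.2):
    """Gradient descent on the chain configuration."""
    u = list(u)
    for _ in range(steps):
        grad = [0.0] * len(u)
        for i in range(len(u)):
            if i > 0:
                grad[i] += k * (u[i] - u[i - 1])
            if i < len(u) - 1:
                grad[i] += k * (u[i] - u[i + 1])
            grad[i] += 2.0 * math.pi * v * math.sin(2.0 * math.pi * u[i])
        u = [ui - lr * gi for ui, gi in zip(u, grad)]
    return u
```

The limit points of such a relaxation are critical points of the energy; the paper's results concern which of these (ground states versus other critical points) must exist for a given frequency.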
IMPLICIT SHAPE MODELS FOR OBJECT DETECTION IN 3D POINT CLOUDS
Directory of Open Access Journals (Sweden)
A. Velizhev
2012-07-01
We present a method for automatic object localization and recognition in 3D point clouds representing outdoor urban scenes. The method is based on the implicit shape model (ISM) framework, which recognizes objects by voting for their center locations. It requires only a few training examples per class, which is an important property for practical use. We also introduce and evaluate an improved version of the spin image descriptor that is more robust to point density variation and to uncertainty in normal direction estimation. Our experiments reveal a significant impact of these modifications on the recognition performance. We compare our results against the state-of-the-art method and obtain significant improvements in both precision and recall on the Ohio dataset, consisting of combined aerial and terrestrial LiDAR scans of 150,000 m² of urban area in total.
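The center-voting idea at the heart of ISM can be shown in a toy form: each matched local feature casts a vote for the object center via a stored offset, and vote peaks become detections. Real ISM pipelines use learned codebooks, continuous vote spaces, and 3D data; the discretized 2D grid below is a simplification of ours for illustration only.

```python
from collections import Counter

# Toy sketch of ISM-style center voting: each feature at position
# (x, y) carries a learned offset (dx, dy) toward the object center;
# votes are accumulated on a grid and the strongest cell is returned.

def vote_for_centers(features, cell=1.0):
    """features: list of ((x, y) position, (dx, dy) learned offset)."""
    votes = Counter()
    for (x, y), (dx, dy) in features:
        cx, cy = x + dx, y + dy
        votes[(round(cx / cell), round(cy / cell))] += 1
    return votes.most_common(1)[0]  # strongest cell and its vote count
```

Because each feature votes independently, the scheme degrades gracefully under occlusion: a partial object still concentrates votes on the correct center, just with a lower peak.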
Is zero-point energy physical? A toy model for Casimir-like effect
Nikolić, Hrvoje
2017-08-01
Zero-point energy is generally known to be unphysical. The Casimir effect, however, is often presented as a counterexample, giving rise to conceptual confusion. To resolve the confusion we study foundational aspects of the Casimir effect at a qualitative level, but also at a quantitative level, within a simple toy model with only three degrees of freedom. In particular, we point out that the Casimir vacuum is not a state without photons, and not a ground state for a Hamiltonian that can describe the Casimir force. Instead, the Casimir vacuum can be related to the photon vacuum by a non-trivial Bogoliubov transformation, and it is a ground state only for an effective Hamiltonian describing Casimir plates at a fixed distance. At the fundamental microscopic level, the Casimir force is best viewed as a manifestation of van der Waals forces.
Classical dynamics of the Abelian Higgs model from the critical point and beyond
Directory of Open Access Journals (Sweden)
G.C. Katsimiga
2015-09-01
We present two different families of solutions of the U(1)-Higgs model in a (1+1)-dimensional setting leading to a localization of the gauge field. First we consider a uniform background (the usual vacuum), which corresponds to the fully higgsed, superconducting phase. Then we study the case of a non-uniform background in the form of a domain wall, which is relevant close to the critical point of the associated spontaneous symmetry breaking. For both cases we obtain approximate analytical nodeless and nodal solutions for the gauge field, arising as bound states of an effective Pöschl–Teller potential created by the scalar field. The two scenarios differ only in the scale of the characteristic localization length. Numerical simulations confirm the validity of the obtained analytical solutions. Additionally, we demonstrate how a kink may be used as a mediator driving the dynamics from the critical point and beyond.
Arrighi, Chiara; Campo, Lorenzo
2017-04-01
In recent years, concern about the economic damage and loss of life caused by urban floods has grown hand in hand with the numerical capability to simulate such events. The large amount of computational power needed to address the problem (simulating a flood in complex terrain such as a medium-to-large city) is only one of the issues. Others include the general lack of exhaustive observations during the event (exact extent, dynamics, water levels reached in different parts of the involved area), which are needed for calibration and validation of the model; the need to consider sewer effects; and the availability of a correct and precise description of the geometry of the problem. In large cities, topographic surveys are generally available as a set of measured points, but a complete hydraulic simulation needs a detailed description of the terrain over the whole computational domain. LIDAR surveys can achieve this goal, providing a comprehensive description of the terrain, although they often lack precision. In this work an optimal merging of these two sources of geometric information, measured elevation points and a LIDAR survey, is proposed, taking into account the error variance of both. The procedure is applied to a flood-prone city over an area of approximately 35 km², starting from a LIDAR DTM with a spatial resolution of 1 m and 13,000 measured points. The spatial pattern of the error (LIDAR vs. points) is analysed, and the merging method is tested with a series of jackknife procedures that consider different densities of the available points. A discussion of the results is provided.
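Merging two elevation estimates while accounting for their error variances is, at its core, the standard inverse-variance (minimum-variance unbiased) combination. The sketch below shows that combination for a single cell; the paper's spatially varying error model and interpolation of the survey points are not reproduced here, and the variances are illustrative.

```python
# Minimal sketch of variance-weighted merging of two elevation
# estimates for one cell (LIDAR value vs. interpolated survey point):
# the standard inverse-variance combination. Variances are illustrative.

def merge_elevations(z_lidar, var_lidar, z_points, var_points):
    w_l = 1.0 / var_lidar
    w_p = 1.0 / var_points
    z = (w_l * z_lidar + w_p * z_points) / (w_l + w_p)
    var = 1.0 / (w_l + w_p)  # variance of the merged estimate
    return z, var
```

The merged variance is always smaller than either input variance, so wherever survey points are dense the merged DTM is strictly more precise than the raw LIDAR.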
Energy Technology Data Exchange (ETDEWEB)
Tikare, Veena [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hernandez-Rivera, Efrain [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Madison, Jonathan D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Holm, Elizabeth Ann [Carnegie Mellon Univ., Pittsburgh, PA (United States); Patterson, Burton R. [Univ. of Florida, Gainesville, FL (United States). Dept. of Materials Science and Engineering; Homer, Eric R. [Brigham Young Univ., Provo, UT (United States). Dept. of Mechanical Engineering
2013-09-01
In most materials, microstructural evolution progresses with multiple processes occurring simultaneously. In this work, we have concentrated on the processes that are active in nuclear materials, in particular nuclear fuels. These processes are coarsening, nucleation, differential diffusion, phase transformation, and radiation-induced defect formation and swelling, often with temperature gradients present. All of these couple and contribute to evolution that is unique to nuclear fuels and materials. Hybrid models that combine elements of Potts Monte Carlo, phase-field and other methods have been developed to address these multiple physical processes. These models are described and applied to several processes in this report. An important feature of the models developed is that they are coded as applications within SPPARKS, a Sandia-developed framework for mesoscale simulation of microstructural evolution processes by kinetic Monte Carlo methods. This makes these codes readily accessible and adaptable for future applications.
Turn-based evolution in a simplified model of artistic creative process
DEFF Research Database (Denmark)
Dahlstedt, Palle
2015-01-01
Evolutionary computation has often been presented as a possible model for creativity in computers. In this paper, evolution is discussed in the light of a theoretical model of human artistic process, recently presented by the author. Some crucial differences between human artistic creativity...... and natural evolution are observed and discussed, also in the light of other creative processes occurring in nature. As a tractable way to overcome these limitations, a new kind of evolutionary implementation of creativity is proposed, based on a simplified version of the previously presented model...
GPS/GLONASS Combined Precise Point Positioning with Receiver Clock Modeling.
Wang, Fuhong; Chen, Xinghan; Guo, Fei
2015-06-30
Research has demonstrated that receiver clock modeling can reduce the correlation coefficients among the parameters of receiver clock bias, station height and zenith tropospheric delay. This paper introduces receiver clock modeling to GPS/GLONASS combined precise point positioning (PPP), aiming to better separate the receiver clock bias from the station coordinates and therefore improve positioning accuracy. Firstly, the basic mathematical models, including the GPS/GLONASS observation equations, stochastic model, and receiver clock model, are briefly introduced. Then datasets from several IGS stations equipped with high-stability atomic clocks are used for kinematic PPP tests. To investigate the performance of PPP, including the positioning accuracy and convergence time, a week (1-7 January 2014) of GPS/GLONASS data retrieved from these IGS stations is processed with different schemes. The results indicate that both the positioning accuracy and the convergence time benefit from the receiver clock modeling. This is particularly pronounced for the vertical component. RMS statistics show that the average improvement in three-dimensional positioning accuracy reaches up to 30%-40%, and sometimes even exceeds 60% for specific stations. Compared to GPS-only PPP, solutions of the GPS/GLONASS combined PPP are much better whether or not the receiver clock offsets are modeled, indicating that the positioning accuracy and reliability are significantly improved by the additional GLONASS satellites in the case of an insufficient number of GPS satellites or poor geometry conditions. In addition to the receiver clock modeling, the impacts of different inter-system timing bias (ISB) models are investigated. For the case of a sufficient number of satellites with fairly good geometry, the PPP performance is not seriously affected by the ISB model due to the low correlation between the ISB and the other parameters. However, the refinement of ISB model weakens the
Directory of Open Access Journals (Sweden)
ALi Hassan Abuzaid
2013-12-01
If the interest is to calibrate two instruments, then a functional relationship model is more appropriate than a regression model. Fitting a straight line when both variables are circular and subject to errors has not received much attention. In this paper, we consider the problem of detecting influential points in two functional relationship models for circular variables. The first is based on simple circular regression (SC), while the second is derived from complex linear regression (CL). The covariance matrices are derived and the COVRATIO statistics are then formulated for both models. The cut-off points are obtained and the power of performance is assessed via simulation studies. The performance of the COVRATIO statistics depends on the concentration of the error, the sample size and the level of contamination. In the case of a linear relationship between two circular variables, the COVRATIO statistics of the SC model perform better than those of the CL model. In addition, a novel diagram, the so-called spoke plot, is utilized to detect possible influential points. For illustration purposes, the proposed procedures are applied to real data on wind directions measured by two different instruments. The COVRATIO statistics and the spoke plot were able to identify two observations as influential points.
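The COVRATIO diagnostic compares the determinant of the parameter covariance matrix with observation i deleted against the determinant from the full sample; values far from 1 flag influential points. The sketch below computes the generic two-variable version, COVRATIO_i = det(Cov(X_{-i})) / det(Cov(X)); the circular functional-relationship variants developed in the paper modify the covariance matrix itself, which we do not reproduce.

```python
# Hedged sketch of case-deletion COVRATIO diagnostics for two
# variables. Generic (non-circular) version for illustration only.

def cov2_det(xs, ys):
    """Determinant of the 2x2 sample covariance matrix."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    return sxx * syy - sxy ** 2

def covratios(xs, ys):
    """COVRATIO_i for each observation: det without point i over
    det with all points. Values far from 1 suggest influence."""
    full = cov2_det(xs, ys)
    out = []
    for i in range(len(xs)):
        xi = xs[:i] + xs[i + 1:]
        yi = ys[:i] + ys[i + 1:]
        out.append(cov2_det(xi, yi) / full)
    return out
```

In practice one compares each COVRATIO_i against a cut-off (obtained by simulation in the paper) and inspects the flagged observations, e.g. with the spoke plot the authors propose.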
Modelling the large-scale redshift-space 3-point correlation function of galaxies
Slepian, Zachary; Eisenstein, Daniel J.
2017-08-01
We present a configuration-space model of the large-scale galaxy 3-point correlation function (3PCF) based on leading-order perturbation theory and including redshift-space distortions (RSD). This model should be useful in extracting distance-scale information from the 3PCF via the baryon acoustic oscillation method. We include the first redshift-space treatment of biasing by the baryon-dark matter relative velocity. Overall, on large scales the effect of RSD is primarily a renormalization of the 3PCF that is roughly independent of both physical scale and triangle opening angle; for our adopted Ωm and bias values, the rescaling is a factor of ∼1.8. We also present an efficient scheme for computing 3PCF predictions from our model, important for allowing fast exploration of the space of cosmological parameters in future analyses.
Liu, Kai; Balachandar, S.
2017-11-01
We perform a series of Euler-Lagrange direct numerical simulations (DNS) of multiphase jets and sedimenting particles. The forces the flow exerts on the particles in these two-way coupled simulations are computed using the Basset-Boussinesq-Oseen (BBO) equations. These forces do not explicitly account for particle-particle interactions, even though such pairwise interactions, induced by the perturbations from neighboring particles, may be important, especially when the particle volume fraction is high. Such effects have been largely unaddressed in the literature. Here, we implement the Pairwise Interaction Extended Point-Particle (PIEP) model to simulate the effect of neighboring particle pairs. A simple collision model is also applied to avoid unphysical overlapping of solid spherical particles. The simulation results indicate that the PIEP model yields richer, more complex motion of the dispersed phase (droplets and particles). Office of Naval Research (ONR) Multidisciplinary University Research Initiative (MURI) project N00014-16-1-2617.
Finite size scaling of the Higgs-Yukawa model near the Gaussian fixed point
Energy Technology Data Exchange (ETDEWEB)
Chu, David Y.J.; Lin, C.J. David [National Chiao-Tung Univ., Hsinchu, Taiwan (China); Jansen, Karl [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Knippschild, Bastian [HISKP, Bonn (Germany); Nagy, Attila [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Humboldt-Univ. Berlin (Germany)
2016-12-15
We study the scaling properties of Higgs-Yukawa models. Using the technique of Finite-Size Scaling, we are able to derive scaling functions that describe the observables of the model in the vicinity of a Gaussian fixed point. A feasibility study of our strategy is performed for the pure scalar theory in the weak-coupling regime. Choosing the on-shell renormalisation scheme gives us an advantage to fit the scaling functions against lattice data with only a small number of fit parameters. These formulae can be used to determine the universality of the observed phase transitions, and thus play an essential role in future investigations of Higgs-Yukawa models, in particular in the strong Yukawa coupling region.