WorldWideScience

Sample records for evolution modeller point

  1. Understanding and Modeling the Evolution of Critical Points under Gaussian Blurring

    NARCIS (Netherlands)

    Kuijper, A.; Florack, L.M.J.; Heyden, A.; Sparr, G.; Nielsen, M.; Johansen, P.

    2002-01-01

    In order to investigate the deep structure of Gaussian scale space images, one needs to understand the behaviour of critical points under the influence of parameter-driven blurring. During this evolution two different types of special points are encountered, the so-called scale space saddles and the

  2. Modelling of pavement materials on steel decks using the five-point bending test: Thermo mechanical evolution and fatigue damage

    International Nuclear Information System (INIS)

    Arnaud, L; Houel, A

    2010-01-01

    This paper deals with the modelling of wearing courses on steel orthotropic decks such as the Millau viaduct in France. This is of great importance for durability: owing to the softness of such a support, the pavement is subjected to considerable strains that may generate top-down cracks in the layer at right angles to the orthotropic plate stiffeners, and shear cracks at the interface between pavement and steel. Therefore, a five-point bending fatigue test has been developed and improved since 2003 at the ENTPE laboratory to test different asphalt concrete mixes. This study aims to model the mechanical behavior of the wearing course throughout the fatigue test by a finite element method (Comsol Multiphysics software). Each material - steel, sealing sheet, asphalt concrete layer - is considered and modelled. The modelling of asphalt concrete is complex since it is a heterogeneous, viscoelastic and thermosensitive material. The actual characteristics of the asphalt concrete (thermophysical parameters and viscoelastic complex modulus) are determined experimentally on cylindrical cores. Moreover, a damage law based on Miner's rule is included in the model. The modelling of the fatigue test leads to encouraging results. Finally, results from the model are compared to the experimental data obtained from the five-point bending fatigue test device. The experimental data are very consistent with the numerical simulation.

  3. Point kinetics modeling

    International Nuclear Information System (INIS)

    Kimpland, R.H.

    1996-01-01

    A normalized form of the point kinetics equations, a prompt jump approximation, and the Nordheim-Fuchs model are used to model nuclear systems. Reactivity feedback mechanisms considered include volumetric expansion, thermal neutron temperature effect, Doppler effect and void formation. A sample problem of an excursion occurring in a plutonium solution accidentally formed in a glovebox is presented
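    The Nordheim-Fuchs model named above reduces a prompt excursion to two coupled equations: power grows at rate (ρ - β)/Λ while the deposited energy removes reactivity through feedback. A minimal sketch, assuming a simple linear energy-feedback coefficient and purely illustrative parameter values (this is not the paper's glovebox solution model):

```python
# Minimal Nordheim-Fuchs excursion sketch (illustrative only; all
# parameter values below are made up, not taken from the paper).
# Prompt kinetics with linear energy feedback:
#   dn/dt = ((rho0 - b*E) - beta) / Lam * n,   dE/dt = n

def nordheim_fuchs(rho0=0.012, beta=0.0065, Lam=1e-4, b=1e-5,
                   n0=1.0, dt=1e-4, t_end=5.0):
    n, E, t, peak = n0, 0.0, 0.0, n0
    while t < t_end:
        rho = rho0 - b * E              # deposited energy removes reactivity
        n += dt * ((rho - beta) / Lam) * n
        E += dt * n
        t += dt
        peak = max(peak, n)
    return peak, E

peak_power, energy = nordheim_fuchs()
# Theory predicts a total energy release of about 2*(rho0 - beta)/b.
print(peak_power, energy)
```

    For a step insertion ρ0 > β the model predicts a total energy release of roughly 2(ρ0 - β)/b, which the explicit integration reproduces to within a few percent.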

  4. NEW ATLAS9 AND MARCS MODEL ATMOSPHERE GRIDS FOR THE APACHE POINT OBSERVATORY GALACTIC EVOLUTION EXPERIMENT (APOGEE)

    International Nuclear Information System (INIS)

    Mészáros, Sz.; Allende Prieto, C.; De Vicente, A.; Edvardsson, B.; Gustafsson, B.; Castelli, F.; García Pérez, A. E.; Majewski, S. R.; Plez, B.; Schiavon, R.; Shetrone, M.

    2012-01-01

    We present a new grid of model photospheres for the SDSS-III/APOGEE survey of stellar populations of the Galaxy, calculated using the ATLAS9 and MARCS codes. New opacity distribution functions were generated to calculate the ATLAS9 model photospheres, while the MARCS models were calculated with opacity sampling techniques. The metallicity ([M/H]) spans from –5 to 1.5 for the ATLAS9 models and from –2.5 to 0.5 for the MARCS models. There are three main differences with respect to previous ATLAS9 model grids: a new corrected H2O line list, a wide range of carbon ([C/M]) and α-element ([α/M]) variations, and solar reference abundances from Asplund et al. The added range of varying carbon and α-element abundances also extends the previously calculated MARCS model grids. Altogether, 1980 chemical compositions were used for the ATLAS9 grid and 175 for the MARCS grid. Over 808,000 ATLAS9 models were computed, spanning temperatures from 3500 K to 30,000 K and log g from 0 to 5, with the highest temperatures available only at high gravities. The MARCS models span temperatures from 3500 K to 5500 K and log g from 0 to 5. All model atmospheres are publicly available online.

  5. TMDs: Evolution, modeling, precision

    Directory of Open Access Journals (Sweden)

    D’Alesio Umberto

    2015-01-01

    The factorization theorem for qT spectra in Drell-Yan processes, boson production and semi-inclusive deep inelastic scattering allows for the determination of the non-perturbative parts of transverse momentum dependent parton distribution functions. Here we discuss the fit of Drell-Yan and Z-production data using the transverse momentum dependent formalism and the resummation of the evolution kernel. We find good theoretical stability of the results and a final χ2/points ≲ 1. We show how fixing the non-perturbative pieces of the evolution can be used to make predictions at present and future colliders.

  6. Modeling shoreface profile evolution

    NARCIS (Netherlands)

    Stive, M.J.F.; De Vriend, H.J.

    1995-01-01

    Current knowledge of hydro-, sediment and morpho-dynamics in the shoreface environment is insufficient to undertake shoreface-profile evolution modelling on the basis of first physical principles. We propose a simple, panel-type model to map observed behaviour. The internal dynamics are determined

  7. Modelling shoreface profile evolution

    NARCIS (Netherlands)

    Stive, Marcel J.F.; de Vriend, Huib J.

    1995-01-01

    Current knowledge of hydro-, sediment and morpho-dynamics in the shoreface environment is insufficient to undertake shoreface-profile evolution modelling on the basis of first physical principles. We propose a simple, panel-type model to map observed behaviour. The internal dynamics are determined

  8. Evolution of Business Models

    DEFF Research Database (Denmark)

    Antero, Michelle C.; Hedman, Jonas; Henningsson, Stefan

    2013-01-01

    The ERP industry has undergone dramatic changes over the past decades due to changing market demands, thereby creating new challenges and opportunities, which have to be managed by ERP vendors. This paper inquires into the necessary evolution of business models in a technology-intensive industry (e...

  9. Model plant Key Measurement Points

    International Nuclear Information System (INIS)

    Schneider, R.A.

    1984-01-01

    For IAEA safeguards a Key Measurement Point is defined as the location where nuclear material appears in such a form that it may be measured to determine material flow or inventory. This presentation describes in an introductory manner the key measurement points and associated measurements for the model plant used in this training course

  10. The Apache Point Observatory Galactic Evolution Experiment (APOGEE)

    DEFF Research Database (Denmark)

    Majewski, Steven R.; Schiavon, Ricardo P.; Frinchaboy, Peter M.

    2017-01-01

    The Apache Point Observatory Galactic Evolution Experiment (APOGEE), one of the programs in the Sloan Digital Sky Survey III (SDSS-III), has now completed its systematic, homogeneous spectroscopic survey sampling all major populations of the Milky Way. After a three-year observing campaign on the...

  11. Smooth random change point models.

    Science.gov (United States)

    van den Hout, Ardo; Muniz-Terrera, Graciela; Matthews, Fiona E

    2011-03-15

    Change point models are used to describe processes over time that show a change in direction. An example of such a process is cognitive ability, where a decline a few years before death is sometimes observed. A broken-stick model consists of two linear parts and a breakpoint where the two lines intersect. Alternatively, models can be formulated that imply a smooth change between the two linear parts. Change point models can be extended by adding random effects to account for variability between subjects. A new smooth change point model is introduced and examples are presented that show how change point models can be estimated using functions in R for mixed-effects models. Bayesian inference using WinBUGS is also discussed. The methods are illustrated using data from a population-based longitudinal study of ageing, the Cambridge City over 75 Cohort Study. The aim is to identify how many years before death individuals experience a change in the rate of decline of their cognitive ability. Copyright © 2010 John Wiley & Sons, Ltd.
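    The broken-stick model described here, two linear segments meeting at a breakpoint, can be illustrated by a grid search over candidate breakpoints with ordinary least squares at each one. A fixed-effects sketch on synthetic data (not the paper's R or WinBUGS code; all values are made up):

```python
import numpy as np

# Broken-stick fit by grid search: for each candidate breakpoint tau,
# regress y on [1, t, max(t - tau, 0)] and keep the tau with lowest SSE.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
true_tau = 6.0
y = 5.0 - 0.1 * t - 0.8 * np.clip(t - true_tau, 0, None)  # steeper decline after tau
y += rng.normal(scale=0.1, size=t.size)

def fit_broken_stick(t, y, taus):
    best = None
    for tau in taus:
        X = np.column_stack([np.ones_like(t), t, np.clip(t - tau, 0, None)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((y - X @ beta) ** 2)
        if best is None or sse < best[0]:
            best = (sse, tau, beta)
    return best

sse, tau_hat, beta_hat = fit_broken_stick(t, y, np.linspace(1, 9, 161))
print(tau_hat)   # estimated change point
```

    The grid-search profile is a common frequentist shortcut; the paper's random-effects and Bayesian treatments add between-subject variability on top of this fixed-effects skeleton.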

  12. An integral constraint for the evolution of the galaxy two-point correlation function

    International Nuclear Information System (INIS)

    Peebles, P.J.E.; Groth, E.J.

    1976-01-01

    Under some conditions an integral over the galaxy two-point correlation function, xi(x,t), evolves with the expansion of the universe in a simple manner easily computed from linear perturbation theory. This provides a useful constraint on the possible evolution of xi(x,t) itself. We test the integral constraint with both an analytic model and numerical N-body simulations for the evolution of irregularities in an expanding universe. Some applications are discussed. (orig.)
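    For intuition on the quantity being constrained, ξ(r) can be estimated from pair counts. A toy "natural" estimator DD/RR - 1 for points in a unit periodic box (far cruder than the paper's N-body analysis; box size, counts, and bins are all illustrative):

```python
import numpy as np

# Toy estimate of the two-point correlation function xi(r) via the
# natural estimator DD/RR - 1 on a periodic unit box. Illustrative
# only; all parameters here are made up.
rng = np.random.default_rng(1)

def pair_counts(pts, edges, box=1.0):
    d = np.abs(pts[:, None, :] - pts[None, :, :])
    d = np.minimum(d, box - d)                 # minimum-image wrap
    r = np.sqrt((d ** 2).sum(-1))
    iu = np.triu_indices(len(pts), k=1)        # distinct pairs only
    return np.histogram(r[iu], bins=edges)[0]

n = 400
edges = np.linspace(0.02, 0.3, 8)              # 7 radial bins
data = rng.random((n, 3))                      # unclustered "data"
rand = rng.random((2 * n, 3))                  # random comparison set
dd = pair_counts(data, edges) / (n * (n - 1) / 2)
rr = pair_counts(rand, edges) / (2 * n * (2 * n - 1) / 2)
xi = dd / rr - 1                               # ~0 for a Poisson field
print(xi)
```

    A clustered catalogue would show ξ > 0 at small separations; here the field is Poisson, so the estimates scatter about zero.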

  13. Modeling of Landslides with the Material Point Method

    DEFF Research Database (Denmark)

    Andersen, Søren Mikkel; Andersen, Lars

    2008-01-01

    A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...

  14. Modelling of Landslides with the Material-point Method

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars

    2009-01-01

    A numerical model for studying the dynamic evolution of landslides is presented. The numerical model is based on the Generalized Interpolation Material Point Method. A simplified slope with a house placed on top is analysed. An elasto-plastic material model based on the Mohr-Coulomb yield criterion...

  15. Modelling point patterns with linear structures

    DEFF Research Database (Denmark)

    Møller, Jesper; Rasmussen, Jakob Gulddahl

    2009-01-01

    processes whose realizations contain such linear structures. Such a point process is constructed sequentially by placing one point at a time. The points are placed in such a way that new points are often placed close to previously placed points, and the points form roughly line shaped structures. We...... consider simulations of this model and compare with real data....

  16. Modelling point patterns with linear structures

    DEFF Research Database (Denmark)

    Møller, Jesper; Rasmussen, Jakob Gulddahl

    processes whose realizations contain such linear structures. Such a point process is constructed sequentially by placing one point at a time. The points are placed in such a way that new points are often placed close to previously placed points, and the points form roughly line shaped structures. We...... consider simulations of this model and compare with real data....

  17. Landscape Evolution Modelling-LAPSUS

    Energy Technology Data Exchange (ETDEWEB)

    Baartman, J. E. M.; Temme, A. J. A. M.; Schoorl, J. M.; Claessens, L.; Viveen, W.; Gorp, W. van; Veldkamp, A.

    2009-07-01

    Landscape evolution modelling can make the consequences of landscape evolution hypotheses explicit and theoretically allows for their falsification and improvement. Ideally, landscape evolution models (LEMs) combine the results of all relevant landscape-forming processes into an ever-adapting digital landscape (e.g. a DEM). These processes may act on different spatial and temporal scales. LAPSUS is such a LEM. Processes that have been included in LAPSUS in different studies are water erosion and deposition, landslide activity, creep, solidification, weathering, tectonics and tillage. Process descriptions are kept as simple and generic as possible, ensuring wide applicability. (Author) 25 refs.
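    One generic process law of the kind a LEM couples on a DEM can be sketched in a few lines: linear hillslope diffusion, dz/dt = κ ∇²z, stepped explicitly on a gridded elevation field. This shows the ingredient only, not LAPSUS's actual multi-process code; κ, dt, and the grid are arbitrary illustrative values.

```python
import numpy as np

# Linear hillslope diffusion on a periodic gridded DEM:
#   dz/dt = kappa * laplacian(z), explicit finite-difference step.
def diffuse(z, kappa=0.01, dt=1.0, steps=100):
    z = z.astype(float).copy()
    for _ in range(steps):
        lap = (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
               np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4 * z)
        z += dt * kappa * lap        # stable for kappa*dt <= 0.25
    return z

dem = np.zeros((32, 32))
dem[16, 16] = 100.0                  # a single sharp peak
smoothed = diffuse(dem)
# Diffusion lowers the peak while conserving total mass on the grid.
print(smoothed.max(), smoothed.sum())
```

    Real LEMs add flow-routed water erosion, landsliding, and the other processes listed above on top of such a base, each with its own transport law.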

  18. Landscape Evolution Modelling-LAPSUS

    International Nuclear Information System (INIS)

    Baartman, J. E. M.; Temme, A. J. A. M.; Schoorl, J. M.; Claessens, L.; Viveen, W.; Gorp, W. van; Veldkamp, A.

    2009-01-01

    Landscape evolution modelling can make the consequences of landscape evolution hypotheses explicit and theoretically allows for their falsification and improvement. Ideally, landscape evolution models (LEMs) combine the results of all relevant landscape-forming processes into an ever-adapting digital landscape (e.g. a DEM). These processes may act on different spatial and temporal scales. LAPSUS is such a LEM. Processes that have been included in LAPSUS in different studies are water erosion and deposition, landslide activity, creep, solidification, weathering, tectonics and tillage. Process descriptions are kept as simple and generic as possible, ensuring wide applicability. (Author) 25 refs.

  19. Modeling Protein Evolution

    Science.gov (United States)

    Goldstein, Richard; Pollock, David

    The study of biology is fundamentally different from many other scientific pursuits, such as geology or astrophysics. This difference stems from the ubiquitous questions that arise about function and purpose. These are questions concerning why biological objects operate the way they do: what is the function of a polymerase? What is the role of the immune system? No one, aside from the most dedicated anthropist or interventionist theist, would attempt to determine the purpose of the earth's mantle or the function of a binary star. Among the sciences, it is only biology in which the details of what an object does can be said to be part of the reason for its existence. This is because the process of evolution is capable of improving an object to better carry out a function; that is, it adapts an object within the constraints of mechanics and history (i.e., what has come before). Thus, the ultimate basis of these biological questions is the process of evolution; generally, the function of an enzyme, cell type, organ, system, or trait is the thing that it does that contributes to the fitness (i.e., reproductive success) of the organism of which it is a part or characteristic. Our investigations cannot escape the simple fact that all things in biology (including ourselves) are, ultimately, the result of an evolutionary process.

  20. Modelling Geomorphic Systems: Landscape Evolution

    OpenAIRE

    Valters, Declan

    2016-01-01

    Landscape evolution models (LEMs) present the geomorphologist with a means of investigating how landscapes evolve in response to external forcings, such as climate and tectonics, as well as internal process laws. LEMs typically incorporate a range of different geomorphic transport laws integrated in a way that simulates the evolution of a 3D terrain surface forward through time. The strengths of LEMs as research tools lie in their ability to rapidly test many different hypotheses of landscape...

  1. Modelling microstructural evolution under irradiation

    International Nuclear Information System (INIS)

    Tikare, V.

    2015-01-01

    Microstructural evolution of materials under irradiation is characterised by some unique features that are not typically present in other application environments. While much understanding has been achieved by experimental studies, the ability to model this microstructural evolution for complex materials states and environmental conditions not only enhances understanding, it also enables prediction of materials behaviour under conditions that are difficult to duplicate experimentally. Furthermore, reliable models enable designing materials for improved engineering performance in their respective applications. Thus, the development and application of mesoscale microstructural models are important for advancing nuclear materials technologies. In this chapter, the application of the Potts model to nuclear materials will be reviewed and demonstrated, as an example of modelling microstructural evolution processes. (author)

  2. Towards an alternative evolution model.

    Science.gov (United States)

    van Waesberghe, H

    1982-01-01

    Lamarck and Darwin agreed on the inconstancy of species and on the exclusive gradualism of evolution (nature does not jump). Darwinism, revived as neo-Darwinism, was almost universally accepted from about 1930 until 1960. In the sixties the evolutionary importance of selection was called into question by the neutralists. The traditional conception of the gene has been disarranged by recent molecular-biological findings. Owing to the increasing confusion about the concept of the genotype, this concept is reconsidered. The idea of the genotype as a cluster of genes is replaced by a cybernetic interpretation of the genotype. As nature does jump, exclusive gradualism is dismissed. Saltatory evolution is a natural phenomenon, brought about by a sudden collapse of the thresholds that resist evolution. The fossil record and the taxonomic system call for a macromutational interpretation. As Lamarck and Darwin overlooked the resistance of evolutionary thresholds, an alternative evolution model is needed, the first to be constructed on a palaeontological and taxonomic basis.

  3. Modeling of microstructural evolution under irradiation

    International Nuclear Information System (INIS)

    Odette, G.R.

    1979-01-01

    Microstructural evolution under irradiation is an extremely complex phenomenon involving numerous interacting mechanisms which alter both the microstructure and microchemistry of structural alloys. Predictive procedures which correlate primary irradiation and material variables to microstructural response are needed to extrapolate from the imperfect data base, which will be available, to fusion reactor conditions. Clearly, a marriage between models and experiments is needed. Specific steps to achieving such a marriage in the form of composite correlation model analysis are outlined and some preliminary results presented. The strongly correlated nature of microstructural evolution is emphasized and it is suggested that rate theory models, resting on the principle of material balances and focusing on coupled point defect-microchemical segregation processes, may be a practical approach to correlation model development. (orig.)

  4. CMS computing model evolution

    International Nuclear Information System (INIS)

    Grandi, C; Bonacorsi, D; Colling, D; Fisk, I; Girone, M

    2014-01-01

    The CMS Computing Model was developed and documented in 2004. Since then the model has evolved to be more flexible and to take advantage of new techniques, but many of the original concepts remain and are in active use. In this presentation we will discuss the changes planned for the restart of the LHC program in 2015, including changes in the use and definition of the computing tiers originally defined within the MONARC project. We will present how we intend to use new services and infrastructure to provide more efficient and transparent access to the data, and will discuss plans to make better use of the computing capacity by scheduling more work on the processor nodes, making better use of disk storage, and using the network more intelligently.

  5. Biodiversity and models of evolution

    Directory of Open Access Journals (Sweden)

    S. L. Podvalny

    2016-01-01

    Summary. The paper discusses the evolutionary impact of biodiversity, the backbone of the noosphere, whose status has been fixed by a UN convention. The examples and role of such diversity are considered at the various levels of life's organization. At the level of standalone organisms, the diversity in question manifests itself in the differentiation and separation of key physiological functions, which significantly broadens the eco-niche for species with the consummate type of such separation. However, the organismic level of biodiversity does not support developmental models, since genetic inheritance and variability first emerge at the minimum structural unit of the living world, the population. It is noted that a gene pool sufficient for species development can accumulate only in fairly large populations, where the overall mutation rate does not yield to the rate of ambient variation. The paper shows that the known formal models of species development based on Fisher's theorem, relating genetic variance to species fitness, are not in keeping with the actual existence of species, owing to the conventionally finite and steady number of genotypes within a population. At the ecosystem level of life's organization, the key role belongs to taxonomic diversity, which maintains a continuous food chain in the system against adverse developmental conditions of particular taxa. The progressive evolution of an ecosystem is also largely stabilized by its multilayer hierarchical structure and its closed circulation of matter and energy. Developmental models based on the Lotka-Volterra equations, which describe the interaction of open-loop ecosystem elements only, insufficiently represent the position of biodiversity in evolutionary processes. The paper lays down requirements for models that take into account the mass balance within a system; its trophic structure; the
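    The Lotka-Volterra equations referred to in the summary can be written down in a few lines. A minimal two-species predator-prey integration (forward Euler, illustrative coefficients), showing the open-loop oscillatory behaviour that the authors argue is insufficient for modelling biodiversity:

```python
# Minimal Lotka-Volterra predator-prey sketch; coefficients are
# illustrative, not taken from the paper:
#   dx/dt = a*x - b*x*y   (prey),   dy/dt = -c*y + d*x*y   (predator)
def lotka_volterra(x0=10.0, y0=5.0, a=1.0, b=0.1, c=1.5, d=0.075,
                   dt=1e-3, steps=20000):
    x, y = x0, y0
    prey = []
    for _ in range(steps):
        dx = (a * x - b * x * y) * dt
        dy = (-c * y + d * x * y) * dt
        x, y = x + dx, y + dy
        prey.append(x)
    return prey

prey = lotka_volterra()
# Populations orbit the equilibrium (c/d, a/b) rather than settling.
print(min(prey), max(prey))
```

    With these coefficients the equilibrium sits at (c/d, a/b) = (20, 10); trajectories cycle around it, illustrating the closed two-species interaction that lacks the mass balance and trophic structure the paper calls for.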

  6. The Apache Point Observatory Galactic Evolution Experiment (APOGEE)

    Science.gov (United States)

    Majewski, Steven R.; Schiavon, Ricardo P.; Frinchaboy, Peter M.; Allende Prieto, Carlos; Barkhouser, Robert; Bizyaev, Dmitry; Blank, Basil; Brunner, Sophia; Burton, Adam; Carrera, Ricardo; Chojnowski, S. Drew; Cunha, Kátia; Epstein, Courtney; Fitzgerald, Greg; García Pérez, Ana E.; Hearty, Fred R.; Henderson, Chuck; Holtzman, Jon A.; Johnson, Jennifer A.; Lam, Charles R.; Lawler, James E.; Maseman, Paul; Mészáros, Szabolcs; Nelson, Matthew; Nguyen, Duy Coung; Nidever, David L.; Pinsonneault, Marc; Shetrone, Matthew; Smee, Stephen; Smith, Verne V.; Stolberg, Todd; Skrutskie, Michael F.; Walker, Eric; Wilson, John C.; Zasowski, Gail; Anders, Friedrich; Basu, Sarbani; Beland, Stephane; Blanton, Michael R.; Bovy, Jo; Brownstein, Joel R.; Carlberg, Joleen; Chaplin, William; Chiappini, Cristina; Eisenstein, Daniel J.; Elsworth, Yvonne; Feuillet, Diane; Fleming, Scott W.; Galbraith-Frew, Jessica; García, Rafael A.; García-Hernández, D. Aníbal; Gillespie, Bruce A.; Girardi, Léo; Gunn, James E.; Hasselquist, Sten; Hayden, Michael R.; Hekker, Saskia; Ivans, Inese; Kinemuchi, Karen; Klaene, Mark; Mahadevan, Suvrath; Mathur, Savita; Mosser, Benoît; Muna, Demitri; Munn, Jeffrey A.; Nichol, Robert C.; O'Connell, Robert W.; Parejko, John K.; Robin, A. C.; Rocha-Pinto, Helio; Schultheis, Matthias; Serenelli, Aldo M.; Shane, Neville; Silva Aguirre, Victor; Sobeck, Jennifer S.; Thompson, Benjamin; Troup, Nicholas W.; Weinberg, David H.; Zamora, Olga

    2017-09-01

    The Apache Point Observatory Galactic Evolution Experiment (APOGEE), one of the programs in the Sloan Digital Sky Survey III (SDSS-III), has now completed its systematic, homogeneous spectroscopic survey sampling all major populations of the Milky Way. After a three-year observing campaign on the Sloan 2.5 m Telescope, APOGEE has collected a half million high-resolution (R ˜ 22,500), high signal-to-noise ratio (>100), infrared (1.51-1.70 μm) spectra for 146,000 stars, with time series information via repeat visits to most of these stars. This paper describes the motivations for the survey and its overall design—hardware, field placement, target selection, operations—and gives an overview of these aspects as well as the data reduction, analysis, and products. An index is also given to the complement of technical papers that describe various critical survey components in detail. Finally, we discuss the achieved survey performance and illustrate the variety of potential uses of the data products by way of a number of science demonstrations, which span from time series analysis of stellar spectral variations and radial velocity variations from stellar companions, to spatial maps of kinematics, metallicity, and abundance patterns across the Galaxy and as a function of age, to new views of the interstellar medium, the chemistry of star clusters, and the discovery of rare stellar species. As part of SDSS-III Data Release 12 and later releases, all of the APOGEE data products are publicly available.

  7. The Apache Point Observatory Galactic Evolution Experiment (APOGEE)

    International Nuclear Information System (INIS)

    Majewski, Steven R.; Brunner, Sophia; Burton, Adam; Chojnowski, S. Drew; Pérez, Ana E. García; Hearty, Fred R.; Lam, Charles R.; Schiavon, Ricardo P.; Frinchaboy, Peter M.; Prieto, Carlos Allende; Carrera, Ricardo; Barkhouser, Robert; Bizyaev, Dmitry; Blank, Basil; Henderson, Chuck; Cunha, Kátia; Epstein, Courtney; Johnson, Jennifer A.; Fitzgerald, Greg; Holtzman, Jon A.

    2017-01-01

    The Apache Point Observatory Galactic Evolution Experiment (APOGEE), one of the programs in the Sloan Digital Sky Survey III (SDSS-III), has now completed its systematic, homogeneous spectroscopic survey sampling all major populations of the Milky Way. After a three-year observing campaign on the Sloan 2.5 m Telescope, APOGEE has collected a half million high-resolution (R ∼ 22,500), high signal-to-noise ratio (>100), infrared (1.51–1.70 μm) spectra for 146,000 stars, with time series information via repeat visits to most of these stars. This paper describes the motivations for the survey and its overall design—hardware, field placement, target selection, operations—and gives an overview of these aspects as well as the data reduction, analysis, and products. An index is also given to the complement of technical papers that describe various critical survey components in detail. Finally, we discuss the achieved survey performance and illustrate the variety of potential uses of the data products by way of a number of science demonstrations, which span from time series analysis of stellar spectral variations and radial velocity variations from stellar companions, to spatial maps of kinematics, metallicity, and abundance patterns across the Galaxy and as a function of age, to new views of the interstellar medium, the chemistry of star clusters, and the discovery of rare stellar species. As part of SDSS-III Data Release 12 and later releases, all of the APOGEE data products are publicly available.

  8. The Apache Point Observatory Galactic Evolution Experiment (APOGEE)

    Energy Technology Data Exchange (ETDEWEB)

    Majewski, Steven R.; Brunner, Sophia; Burton, Adam; Chojnowski, S. Drew; Pérez, Ana E. García; Hearty, Fred R.; Lam, Charles R. [Department of Astronomy, University of Virginia, Charlottesville, VA 22904-4325 (United States); Schiavon, Ricardo P. [Gemini Observatory, 670 N. A’Ohoku Place, Hilo, HI 96720 (United States); Frinchaboy, Peter M. [Department of Physics and Astronomy, Texas Christian University, Fort Worth, TX 76129 (United States); Prieto, Carlos Allende; Carrera, Ricardo [Instituto de Astrofísica de Canarias, E-38200 La Laguna, Tenerife (Spain); Barkhouser, Robert [Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218 (United States); Bizyaev, Dmitry [Apache Point Observatory and New Mexico State University, P.O. Box 59, Sunspot, NM, 88349-0059 (United States); Blank, Basil; Henderson, Chuck [Pulse Ray Machining and Design, 4583 State Route 414, Beaver Dams, NY 14812 (United States); Cunha, Kátia [Observatório Nacional, Rio de Janeiro, RJ 20921-400 (Brazil); Epstein, Courtney; Johnson, Jennifer A. [The Ohio State University, Columbus, OH 43210 (United States); Fitzgerald, Greg [New England Optical Systems, 237 Cedar Hill Street, Marlborough, MA 01752 (United States); Holtzman, Jon A. [New Mexico State University, Las Cruces, NM 88003 (United States); and others

    2017-09-01

    The Apache Point Observatory Galactic Evolution Experiment (APOGEE), one of the programs in the Sloan Digital Sky Survey III (SDSS-III), has now completed its systematic, homogeneous spectroscopic survey sampling all major populations of the Milky Way. After a three-year observing campaign on the Sloan 2.5 m Telescope, APOGEE has collected a half million high-resolution (R ∼ 22,500), high signal-to-noise ratio (>100), infrared (1.51–1.70 μm) spectra for 146,000 stars, with time series information via repeat visits to most of these stars. This paper describes the motivations for the survey and its overall design—hardware, field placement, target selection, operations—and gives an overview of these aspects as well as the data reduction, analysis, and products. An index is also given to the complement of technical papers that describe various critical survey components in detail. Finally, we discuss the achieved survey performance and illustrate the variety of potential uses of the data products by way of a number of science demonstrations, which span from time series analysis of stellar spectral variations and radial velocity variations from stellar companions, to spatial maps of kinematics, metallicity, and abundance patterns across the Galaxy and as a function of age, to new views of the interstellar medium, the chemistry of star clusters, and the discovery of rare stellar species. As part of SDSS-III Data Release 12 and later releases, all of the APOGEE data products are publicly available.

  9. MODEL FOR SEMANTICALLY RICH POINT CLOUD DATA

    Directory of Open Access Journals (Sweden)

    F. Poux

    2017-10-01

    This paper proposes an interoperable model for managing high-dimensional point clouds while integrating semantics. Point clouds from sensors are a direct source of information physically describing the 3D state of the recorded environment. As such, they are an exhaustive representation of the real world at every scale: 3D reality-based spatial data. Their generation is increasingly fast, but processing routines and data models lack the knowledge needed to reason through information extraction rather than interpretation. The enhanced smart point cloud model developed here brings intelligence to point clouds via three connected meta-models, linking available knowledge and classification procedures to permit semantic injection. Interoperability drives the model's adaptation to potentially many applications through specialized domain ontologies. A first prototype, implemented in Python on a PostgreSQL database, combines semantic and spatial concepts for basic hybrid queries over different point clouds.

  10. Model for Semantically Rich Point Cloud Data

    Science.gov (United States)

    Poux, F.; Neuville, R.; Hallot, P.; Billen, R.

    2017-10-01

    This paper proposes an interoperable model for managing high-dimensional point clouds while integrating semantics. Point clouds from sensors are a direct source of information physically describing the 3D state of the recorded environment. As such, they are an exhaustive representation of the real world at every scale: 3D reality-based spatial data. Their generation is increasingly fast, but processing routines and data models lack the knowledge needed to reason through information extraction rather than interpretation. The enhanced smart point cloud model developed here brings intelligence to point clouds via three connected meta-models, linking available knowledge and classification procedures to permit semantic injection. Interoperability drives the model's adaptation to potentially many applications through specialized domain ontologies. A first prototype, implemented in Python on a PostgreSQL database, combines semantic and spatial concepts for basic hybrid queries over different point clouds.

  11. FireProt: Energy- and Evolution-Based Computational Design of Thermostable Multiple-Point Mutants.

    Science.gov (United States)

    Bednar, David; Beerens, Koen; Sebestova, Eva; Bendl, Jaroslav; Khare, Sagar; Chaloupkova, Radka; Prokop, Zbynek; Brezovsky, Jan; Baker, David; Damborsky, Jiri

    2015-11-01

    There is great interest in increasing proteins' stability to enhance their utility as biocatalysts, therapeutics, diagnostics and nanomaterials. Directed evolution is a powerful, but experimentally strenuous approach. Computational methods offer attractive alternatives. However, due to the limited reliability of predictions and potentially antagonistic effects of substitutions, only single-point mutations are usually predicted in silico, experimentally verified and then recombined in multiple-point mutants. Thus, substantial screening is still required. Here we present FireProt, a robust computational strategy for predicting highly stable multiple-point mutants that combines energy- and evolution-based approaches with smart filtering to identify additive stabilizing mutations. FireProt's reliability and applicability was demonstrated by validating its predictions against 656 mutations from the ProTherm database. We demonstrate that thermostability of the model enzymes haloalkane dehalogenase DhaA and γ-hexachlorocyclohexane dehydrochlorinase LinA can be substantially increased (ΔTm = 24°C and 21°C) by constructing and characterizing only a handful of multiple-point mutants. FireProt can be applied to any protein for which a tertiary structure and homologous sequences are available, and will facilitate the rapid development of robust proteins for biomedical and biotechnological applications.
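
The filtering idea — keep predicted-stabilizing substitutions at non-conserved positions, and combine only those far enough apart to assume additivity — can be pictured with a toy Python sketch. The thresholds, the greedy separation heuristic and the candidate tuples below are illustrative assumptions, not FireProt's actual rules:

```python
def select_additive_mutations(candidates, ddg_cutoff=-1.0,
                              conservation_cutoff=0.8, min_separation=2):
    """Greedily pick stabilizing single-point mutations (predicted ddG below
    `ddg_cutoff`, conservation below `conservation_cutoff`) whose positions
    lie at least `min_separation` residues apart."""
    stabilizing = [c for c in candidates
                   if c[2] <= ddg_cutoff and c[3] < conservation_cutoff]
    stabilizing.sort(key=lambda c: c[2])  # most stabilizing first
    chosen = []
    for pos, mut, ddg, cons in stabilizing:
        if all(abs(pos - p) >= min_separation for p, _, _, _ in chosen):
            chosen.append((pos, mut, ddg, cons))
    return chosen

# (position, mutation, predicted ddG in kcal/mol, conservation score)
candidates = [(10, "A10V", -2.1, 0.3), (11, "K11R", -1.5, 0.2),
              (40, "G40P", -1.8, 0.1), (55, "D55E", 0.5, 0.4)]
multi = select_additive_mutations(candidates)  # keeps positions 10 and 40
```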

  12. FireProt: Energy- and Evolution-Based Computational Design of Thermostable Multiple-Point Mutants.

    Directory of Open Access Journals (Sweden)

    David Bednar

    2015-11-01

    Full Text Available There is great interest in increasing proteins' stability to enhance their utility as biocatalysts, therapeutics, diagnostics and nanomaterials. Directed evolution is a powerful, but experimentally strenuous approach. Computational methods offer attractive alternatives. However, due to the limited reliability of predictions and potentially antagonistic effects of substitutions, only single-point mutations are usually predicted in silico, experimentally verified and then recombined in multiple-point mutants. Thus, substantial screening is still required. Here we present FireProt, a robust computational strategy for predicting highly stable multiple-point mutants that combines energy- and evolution-based approaches with smart filtering to identify additive stabilizing mutations. FireProt's reliability and applicability was demonstrated by validating its predictions against 656 mutations from the ProTherm database. We demonstrate that thermostability of the model enzymes haloalkane dehalogenase DhaA and γ-hexachlorocyclohexane dehydrochlorinase LinA can be substantially increased (ΔTm = 24°C and 21°C) by constructing and characterizing only a handful of multiple-point mutants. FireProt can be applied to any protein for which a tertiary structure and homologous sequences are available, and will facilitate the rapid development of robust proteins for biomedical and biotechnological applications.

  13. Direct approach for solving nonlinear evolution and two-point

    Indian Academy of Sciences (India)

    Time-delayed nonlinear evolution equations and boundary value problems have a wide range of applications in science and engineering. In this paper, we implement the differential transform method to solve the nonlinear delay differential equation and boundary value problems. Also, we present some numerical examples ...

  14. A simple model for binary star evolution

    International Nuclear Information System (INIS)

    Whyte, C.A.; Eggleton, P.P.

    1985-01-01

    A simple model for calculating the evolution of binary stars is presented. Detailed stellar evolution calculations of stars undergoing mass and energy transfer at various rates are reported and used to identify the dominant physical processes which determine the type of evolution. These detailed calculations are used to calibrate the simple model and a comparison of calculations using the detailed stellar evolution equations and the simple model is made. Results of the evolution of a few binary systems are reported and compared with previously published calculations using normal stellar evolution programs. (author)

  15. Modeling SOL evolution during disruptions

    International Nuclear Information System (INIS)

    Rognlien, T.D.; Cohen, R.H.; Crotinger, J.A.

    1996-01-01

    We present the status of our models and transport simulations of the 2-D evolution of the scrape-off layer (SOL) during tokamak disruptions. This evolution is important for several reasons: it determines how the power from the core plasma is distributed on material surfaces, how impurities from those surfaces or from gas injection migrate back to the core region, and what the properties of the SOL are for carrying halo currents. We simulate this plasma in a time-dependent fashion using the SOL transport code UEDGE. This code models the SOL plasma using fluid equations for plasma density, parallel momentum (along the magnetic field), electron energy, ion energy, and neutral gas density. A multispecies model is used to follow the density of different charge states of impurities. The parallel transport is classical but with kinetic modifications; these are presently treated by flux limits, but we have initiated more sophisticated models giving the correct long-mean-free-path limit. The cross-field transport is anomalous, and one of the results of this work is to determine reasonable values to characterize disruptions. Our primary focus is on the initial thermal quench phase, when most of the core energy is lost but the total current is maintained. The impact of edge currents on the MHD equilibrium will be discussed.

  16. Modeling Temporal Evolution and Multiscale Structure in Networks

    DEFF Research Database (Denmark)

    Herlau, Tue; Mørup, Morten; Schmidt, Mikkel Nørgaard

    2013-01-01

    Many real-world networks exhibit both temporal evolution and multiscale structure. We propose a model for temporally correlated multifurcating hierarchies in complex networks which jointly captures both effects. We use the Gibbs fragmentation tree as prior over multifurcating trees and a change-point model to account for the temporal evolution of each vertex. We demonstrate that our model is able to infer time-varying multiscale structure in synthetic as well as three real-world time-evolving complex networks. Our modelling of the temporal evolution of hierarchies brings new insights...

  17. The fast debris evolution model

    Science.gov (United States)

    Lewis, H. G.; Swinerd, G. G.; Newland, R. J.; Saunders, A.

    2009-09-01

    The 'particles-in-a-box' (PIB) model introduced by Talent [Talent, D.L. Analytic model for orbital debris environmental management. J. Spacecraft Rocket, 29 (4), 508-513, 1992.] removed the need for computer-intensive Monte Carlo simulation to predict the gross characteristics of an evolving debris environment. The PIB model was described using a differential equation that allows the stability of the low Earth orbit (LEO) environment to be tested by a straightforward analysis of the equation's coefficients. As part of an ongoing research effort to investigate more efficient approaches to evolutionary modelling and to develop a suite of educational tools, a new PIB model has been developed. The model, entitled Fast Debris Evolution (FADE), employs a first-order differential equation to describe the rate at which new objects ⩾10 cm are added and removed from the environment. Whilst Talent [Talent, D.L. Analytic model for orbital debris environmental management. J. Spacecraft Rocket, 29 (4), 508-513, 1992.] based the collision theory for the PIB approach on collisions between gas particles and adopted specific values for the parameters of the model from a number of references, the form and coefficients of the FADE model equations can be inferred from the outputs of future projections produced by high-fidelity models, such as the DAMAGE model. The FADE model has been implemented as a client-side, web-based service using JavaScript embedded within an HTML document. Due to the simple nature of the algorithm, FADE can deliver the results of future projections immediately in a graphical format, with complete user control over key simulation parameters. Historical and future projections for the ⩾10 cm LEO debris environment under a variety of different scenarios are possible, including business as usual, no future launches, post-mission disposal and remediation. A selection of results is presented, with comparisons with predictions made using the DAMAGE environment model.
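
A FADE-style projection can be sketched in a few lines of Python. The coefficients below are illustrative placeholders chosen only to make the dynamics visible, not values inferred from DAMAGE or from Talent's paper:

```python
def fade_step(n, a, b, c, dt):
    """One Euler step of a FADE-style first-order model
    dN/dt = A + B*N + C*N**2 (launch source, drag sink, collision source)."""
    return n + (a + b * n + c * n * n) * dt

# Hypothetical coefficients: 300 new objects/yr from launches,
# 2%/yr removal by drag, and a small quadratic collision term.
n = 10000.0
for _ in range(50):  # fifty one-year steps
    n = fade_step(n, a=300.0, b=-0.02, c=1e-7, dt=1.0)
```

The sign of `a + b*n + c*n*n` at the current population is exactly the stability test the PIB formulation permits: a positive value means the environment is still growing.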

  18. From malaria parasite point of view – Plasmodium falciparum evolution

    Directory of Open Access Journals (Sweden)

    Agata Zerka

    2015-12-01

    Full Text Available Malaria is caused by infection with protozoan parasites belonging to the genus Plasmodium, which have arguably exerted the greatest selection pressure on humans in the history of our species. Besides humans, different Plasmodium parasites infect a wide range of animal hosts, from marine invertebrates to primates. On the other hand, individual Plasmodium species show high host specificity. The extraordinary evolution of Plasmodium probably began when a free-living red alga turned parasitic, and culminated in its ability to thrive inside a human red blood cell. Studies on the African apes have generated new data on the evolution of malaria parasites in general and the deadliest human-specific species, Plasmodium falciparum, in particular. Initially, it was hypothesized that P. falciparum descended from the chimpanzee malaria parasite P. reichenowi after the human and chimp lineages diverged about 6 million years ago. However, a recently identified new species infecting gorillas unexpectedly showed similarity to P. falciparum and was therefore named P. praefalciparum. That finding spurred an alternative hypothesis, which proposes that P. falciparum descended from its gorilla rather than its chimp counterpart. In addition, the gorilla-to-human host shift may have occurred more recently (about 10 thousand years ago) than the theoretical P. falciparum-P. reichenowi split. One of the key aims of studies on Plasmodium evolution is to elucidate the mechanisms that allow incessant host shifting while retaining host specificity, especially in the case of human-specific species. Thorough understanding of these phenomena will be necessary to design effective malaria treatment and prevention strategies.

  19. Pointing and the Evolution of Language: An Applied Evolutionary Epistemological Approach

    Directory of Open Access Journals (Sweden)

    Nathalie Gontier

    2013-07-01

    Full Text Available Numerous evolutionary linguists have indicated that human pointing behaviour might be associated with the evolution of language. At an ontogenetic level, and in normal individuals, pointing develops spontaneously, and the onset of human pointing precedes as well as facilitates phases in speech and language development. Phylogenetically, pointing behaviour might have preceded and facilitated the evolutionary origin of both gestural and vocal language. Contrary to wild non-human primates, captive and human-reared non-human primates also demonstrate pointing behaviour. In this article, we analyse the debates on pointing and the role it might have played in language evolution from a meta-level. From within an Applied Evolutionary Epistemological approach, we examine how exactly we can determine whether pointing has been a unit, a level or a mechanism in language evolution.

  20. Two-point model for divertor transport

    International Nuclear Information System (INIS)

    Galambos, J.D.; Peng, Y.K.M.

    1984-04-01

    Plasma transport along divertor field lines was investigated using a two-point model. This treatment requires considerably less effort to find solutions to the transport equations than previously used one-dimensional (1-D) models and is useful for studying general trends. It also can be a valuable tool for benchmarking more sophisticated models. The model was used to investigate the possibility of operating in the so-called high density, low temperature regime
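
For orientation, the core of a two-point treatment is a small set of algebraic relations between upstream and target conditions rather than a 1-D profile. Below is a Python sketch of the standard conduction-limited relation in its textbook form — not necessarily the specific equations of this work — with an assumed Spitzer conductivity prefactor:

```python
def upstream_temperature(t_target_eV, q_par, L, kappa0=2000.0):
    """Conduction-limited two-point relation
    T_u^(7/2) = T_t^(7/2) + 7*q_par*L/(2*kappa0),
    with q_par in W/m^2, L in m, and kappa0 an assumed Spitzer-like
    prefactor in W/(m eV^(7/2))."""
    return (t_target_eV ** 3.5 + 7.0 * q_par * L / (2.0 * kappa0)) ** (2.0 / 7.0)

# 10 eV at the target, 100 MW/m^2 parallel heat flux, 50 m connection length
t_u = upstream_temperature(t_target_eV=10.0, q_par=1e8, L=50.0)
```

A second relation (pressure balance along the field line) closes the system; solving the pair is algebra rather than a boundary-value problem, which is why such models are so much cheaper than 1-D transport codes.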

  1. Modelling occupants’ heating set-point preferences

    DEFF Research Database (Denmark)

    Andersen, Rune Vinther; Olesen, Bjarne W.; Toftum, Jørn

    2011-01-01

    Simultaneous measurements of the set-points of thermostatic radiator valves (TRVs) and of indoor and outdoor environment characteristics were carried out in 15 dwellings in Denmark in 2008. Linear regression was used to infer a model of occupants’ interactions with TRVs. This model could easily be implemented in most simulation software packages to increase the validity of the simulation outcomes.
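
An inferred linear set-point model of this kind reduces to ordinary least squares. The sketch below fits set-point against outdoor temperature; the readings are hypothetical illustrations, not the Danish survey data:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

outdoor = [-5.0, 0.0, 5.0, 10.0, 15.0]     # outdoor temperature, deg C
setpoint = [23.0, 22.5, 22.0, 21.5, 21.0]  # hypothetical TRV set-points, deg C
a, b = fit_line(outdoor, setpoint)         # set-point = a + b * T_out
```

A fitted `(a, b)` pair is trivially portable: any building-simulation package that exposes a thermostat schedule can evaluate `a + b * T_out` at each time step.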

  2. Comparison of sparse point distribution models

    DEFF Research Database (Denmark)

    Erbou, Søren Gylling Hemmingsen; Vester-Christensen, Martin; Larsen, Rasmus

    2010-01-01

    This paper compares several methods for obtaining sparse and compact point distribution models suited for data sets containing many variables. These are evaluated on a database consisting of 3D surfaces of a section of the pelvic bone obtained from CT scans of 33 porcine carcasses. The superior m...

  3. Quantitative interface models for simulating microstructure evolution

    International Nuclear Information System (INIS)

    Zhu, J.Z.; Wang, T.; Zhou, S.H.; Liu, Z.K.; Chen, L.Q.

    2004-01-01

    To quantitatively simulate microstructural evolution in real systems, we investigated three different interface models: a sharp-interface model implemented by the software DICTRA and two diffuse-interface models which use either physical order parameters or artificial order parameters. A particular example is considered, the diffusion-controlled growth of a γ ' precipitate in a supersaturated γ matrix in Ni-Al binary alloys. All three models use the thermodynamic and kinetic parameters from the same databases. The temporal evolution profiles of composition from different models are shown to agree with each other. The focus is on examining the advantages and disadvantages of each model as applied to microstructure evolution in alloys
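
As a minimal stand-in for a diffuse-interface model, the sketch below takes explicit time steps of a 1D Allen-Cahn equation with a double-well potential f(p) = p²(1−p)². The parameters are illustrative only; the simulations in the paper additionally couple to thermodynamic and kinetic databases:

```python
def allen_cahn_step(phi, eps2=1.0, m=1.0, dt=0.01, dx=1.0):
    """One explicit Euler step of d(phi)/dt = M*(eps2*lap(phi) - f'(phi)),
    with double-well f(p) = p^2*(1-p)^2, so f'(p) = 2p(1-p)(1-2p)."""
    new = phi[:]
    for i in range(1, len(phi) - 1):  # fixed boundary values
        lap = (phi[i - 1] - 2 * phi[i] + phi[i + 1]) / dx ** 2
        fprime = 2 * phi[i] * (1 - phi[i]) * (1 - 2 * phi[i])
        new[i] = phi[i] + dt * m * (eps2 * lap - fprime)
    return new

# A sharp step between matrix (0) and precipitate (1) relaxes to a diffuse profile
phi = [0.0] * 10 + [1.0] * 10
for _ in range(100):
    phi = allen_cahn_step(phi)
```

The order parameter stays pinned at 0 and 1 far from the interface (the wells of f), while the initially sharp jump spreads over a width set by the balance of `eps2` against the well depth — the defining feature of diffuse-interface models.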

  4. Can fisheries-induced evolution shift reference points for fisheries management?

    DEFF Research Database (Denmark)

    Heino, Mikko; Baulier, Loїc; Boukal, David S.

    2013-01-01

    Biological reference points are important tools for fisheries management. Reference points are not static, but may change when a population's environment or the population itself changes. Fisheries-induced evolution is one mechanism that can alter population characteristics, leading to “shifting...

  5. Determinantal point process models on the sphere

    DEFF Research Database (Denmark)

    Møller, Jesper; Nielsen, Morten; Porcu, Emilio

    We consider determinantal point processes (DPPs) on the d-dimensional unit sphere Sd. These are finite point processes exhibiting repulsiveness and with moment properties determined by a certain determinant whose entries are specified by a so-called kernel, which we assume is a complex covariance function defined on Sd × Sd. We review the appealing properties of such processes, including their specific moment properties, density expressions and simulation procedures. Particularly, we characterize and construct isotropic DPP models on Sd, where it becomes essential to specify the eigenvalues and eigenfunctions in a spectral representation for the kernel, and we figure out how repulsive isotropic DPPs can be. Moreover, we discuss the shortcomings of adapting existing models for isotropic covariance functions and consider strategies for developing new models, including a useful spectral approach.

  6. Evolution families of conformal mappings with fixed points and the Löwner-Kufarev equation

    International Nuclear Information System (INIS)

    Goryainov, V V

    2015-01-01

    The paper is concerned with evolution families of conformal mappings of the unit disc to itself that fix an interior point and a boundary point. Conditions are obtained for the evolution families to be differentiable, and an existence and uniqueness theorem for an evolution equation is proved. A convergence theorem is established which describes the topology of locally uniform convergence of evolution families in terms of infinitesimal generating functions. The main result in this paper is the embedding theorem which shows that any conformal mapping of the unit disc to itself with two fixed points can be embedded into a differentiable evolution family of such mappings. This result extends the range of the parametric method in the theory of univalent functions. In this way the problem of the mutual change of the derivative at an interior point and the angular derivative at a fixed point on the boundary is solved for a class of mappings of the unit disc to itself. In particular, the rotation theorem is established for this class of mappings. Bibliography: 27 titles

  7. Zero-point energy in bag models

    International Nuclear Information System (INIS)

    Milton, K.A.

    1979-01-01

    The zero-point (Casimir) energy of free vector (gluon) fields confined to a spherical cavity (bag) is computed. With a suitable renormalization the result for eight gluons is E = + 0.51/a. This result is substantially larger than that for a spherical shell (where both interior and exterior modes are present), and so affects Johnson's model of the QCD vacuum. It is also smaller than, and of opposite sign to, the value used in bag model phenomenology, so it will have important implications there. 1 figure

  8. Synthetic AGB evolution. I. A new model

    NARCIS (Netherlands)

    GROENEWEGEN, MAT; DEJONG, T

    We have constructed a model to calculate in a synthetic way the evolution of stars on the asymptotic giant branch (AGB). The evolution is started at the first thermal pulse (TP) and is terminated when the envelope mass has been lost due to mass loss or when the core mass reaches the Chandrasekhar mass.

  9. Jump diffusion models and the evolution of financial prices

    International Nuclear Information System (INIS)

    Figueiredo, Annibal; Castro, Marcio T. de; Silva, Sergio da; Gleria, Iram

    2011-01-01

    We analyze a stochastic model to describe the evolution of financial prices. We consider the stochastic term as a sum of the Wiener noise and a jump process. We point to the effects of the jumps on the return time evolution, a central concern of the econophysics literature. The presence of jumps suggests that the process can be described by an infinitely divisible characteristic function belonging to the De Finetti class. We then extend the De Finetti functions to a generalized nonlinear model and show the model to be capable of explaining return behavior. -- Highlights: → We analyze a stochastic model to describe the evolution of financial prices. → The stochastic term is considered as a sum of the Wiener noise and a jump process. → The process can be described by an infinitely divisible characteristic function belonging to the De Finetti class. → We extend the De Finetti functions to a generalized nonlinear model.

  10. Spatial Stochastic Point Models for Reservoir Characterization

    Energy Technology Data Exchange (ETDEWEB)

    Syversveen, Anne Randi

    1997-12-31

    The main part of this thesis discusses stochastic modelling of geology in petroleum reservoirs. A marked point model is defined for objects against a background in a two-dimensional vertical cross section of the reservoir. The model handles conditioning on observations from more than one well for each object and contains interaction between objects, and the objects have the correct length distribution when penetrated by wells. The model is developed in a Bayesian setting. The model and the simulation algorithm are demonstrated by means of an example with simulated data. The thesis also deals with object recognition in image analysis, in a Bayesian framework, and with a special type of spatial Cox processes called log-Gaussian Cox processes. In these processes, the logarithm of the intensity function is a Gaussian process. The class of log-Gaussian Cox processes provides flexible models for clustering. The distribution of such a process is completely characterized by the intensity and the pair correlation function of the Cox process. 170 refs., 37 figs., 5 tabs.
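
A log-Gaussian Cox process can be sketched in pure Python. For brevity the Gaussian field below is uncorrelated between grid cells — a strong simplification, since a real LGCP has a spatially correlated log-intensity — and the Poisson sampler uses Knuth's method for small means:

```python
import math
import random

def poisson(lam, rng):
    """Knuth's multiplicative algorithm for a Poisson draw with small mean."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_lgcp(n_cells, mu, sigma, seed=0):
    """Toy log-Gaussian Cox process on a grid: per cell, the log-intensity is
    drawn N(mu, sigma^2) (no spatial correlation here), then a Poisson count
    is drawn from the resulting intensity."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n_cells):
        lam = math.exp(rng.gauss(mu, sigma))
        counts.append(poisson(lam, rng))
    return counts

counts = simulate_lgcp(1000, mu=0.0, sigma=0.5)
```

Even without spatial correlation, the random intensity already produces over-dispersed counts relative to a plain Poisson process; adding a correlated Gaussian field is what turns that over-dispersion into spatial clustering.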

  11. MATHEMATICAL MODELING OF AC ELECTRIC POINT MOTOR

    Directory of Open Access Journals (Sweden)

    S. YU. Buryak

    2014-03-01

    Full Text Available Purpose. To ensure the reliability, security and, most importantly, the continuity of the transportation process, it is necessary to develop, implement and then improve automated methods for diagnosing the mechanisms, devices and systems of rail transport. Only systems that operate in real time and transmit data on the instantaneous state of the controlled objects can detect faults in time and thus provide additional time for their correction by railway employees. Turnouts are among the most important and critical components and therefore require the development and implementation of such a diagnostic system. Methodology. Monitoring and control of railway automation objects in real time is possible only with an automated process for diagnosing the objects' state. For this we need to know the diagnostic features of a controlled object, which determine its state at any given time. The most rational approach to remote diagnostics is analysis of the shape and spectrum of the current that flows in the power circuits of railway automation. Turnouts include electric motors powered by electric circuits, and the shape of the current curve depends both on the condition of the electric motor and on the conditions of turnout maintenance. Findings. A mathematical model of the AC electric point motor was developed for its research and analysis. Parameters and interdependencies between the main factors affecting the operation of the asynchronous machine were calculated. The results of the model operation were obtained as time dependences of the current waveform on the load on the engine shaft. Originality. A simulation model of the AC electric point motor satisfying the conditions of adequacy was built. Practical value. On the basis of the constructed model it is possible to study the AC motor in various modes of operation, and to record and analyze the current curve as a response to various changes
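
The diagnostic idea — analyse the shape and spectrum of the motor current — can be sketched with a naive DFT. The 50 Hz fundamental and the 150 Hz "fault" harmonic below are illustrative synthetic signals, not measured point-motor data:

```python
import math

def dft_magnitudes(samples):
    """Naive one-sided DFT amplitude spectrum of a real sampled waveform
    (a unit-amplitude sine at an exact bin yields a magnitude of 0.5)."""
    n = len(samples)
    mags = []
    for k in range(n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mags.append(math.hypot(re, im) / n)
    return mags

fs = n = 400  # one second of data at 400 Hz -> 1 Hz frequency bins
current = [math.sin(2 * math.pi * 50 * i / fs)           # fundamental
           + 0.3 * math.sin(2 * math.pi * 150 * i / fs)  # fault harmonic
           for i in range(n)]
spec = dft_magnitudes(current)
```

A monitoring system would compare the harmonic content of `spec` against a healthy-motor baseline; a growing 150 Hz line relative to the 50 Hz fundamental would flag a developing fault.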

  12. The dimensionality of stellar chemical space using spectra from the Apache Point Observatory Galactic Evolution Experiment

    Science.gov (United States)

    Price-Jones, Natalie; Bovy, Jo

    2018-03-01

    Chemical tagging of stars based on their similar compositions can offer new insights about the star formation and dynamical history of the Milky Way. We investigate the feasibility of identifying groups of stars in chemical space by forgoing the use of model-derived abundances in favour of direct analysis of spectra. This facilitates the propagation of measurement uncertainties and does not presuppose knowledge of which elements are important for distinguishing stars in chemical space. We use ~16 000 red giant and red clump H-band spectra from the Apache Point Observatory Galactic Evolution Experiment (APOGEE) and perform polynomial fits to remove trends not due to abundance-ratio variations. Using expectation maximized principal component analysis, we find principal components with high signal in the wavelength regions most important for distinguishing between stars. Different subsamples of red giant and red clump stars are all consistent with needing about 10 principal components to accurately model the spectra above the level of the measurement uncertainties. The dimensionality of stellar chemical space that can be investigated in the H band is therefore ≲10. For APOGEE observations with typical signal-to-noise ratios of 100, the number of chemical space cells within which stars cannot be distinguished is approximately 10^(10±2) × (5 ± 2)^(n-10), with n the number of principal components. This high dimensionality and the fine-grained sampling of chemical space are a promising first step towards chemical tagging based on spectra alone.
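
Evaluating that cell-count estimate, 10^(10±2) × (5 ± 2)^(n−10), at its central values gives a quick feel for how fast chemical space grows with the number of components n (the ± ranges are dropped here for simplicity):

```python
def chemical_cells(n_components, base_exp=10, growth=5):
    """Central estimate of distinguishable cells: 10**base_exp * growth**(n-10)."""
    return 10 ** base_exp * growth ** (n_components - 10)

cells_10 = chemical_cells(10)  # ~1e10 cells at n = 10
cells_12 = chemical_cells(12)  # each extra component multiplies the count by ~5
```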

  13. Optimal time points sampling in pathway modelling.

    Science.gov (United States)

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as the related parameter estimation. However, few studies give consideration to the issue of optimal sampling-time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time-consuming and expensive. Therefore, approximating parameters for models with only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way so as to minimize the variance of parameter estimates. We first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the morass of selecting good initial values or getting stuck in local optima, which usually accompanies conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
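
The underlying objective can be illustrated without the evolutionary machinery. For a one-parameter model y(t) = exp(−k t) with additive noise, minimizing the variance of the estimate of k means maximizing the Fisher information, i.e. the squared sensitivity dy/dk = −t exp(−k t). A simple grid search (a simplified stand-in for the quantum-inspired algorithm) recovers the known optimum t = 1/k:

```python
import math

def best_sample_time(k, times):
    """Pick the time maximizing the Fisher information of k in y = exp(-k*t),
    i.e. the squared sensitivity (t * exp(-k*t))**2."""
    return max(times, key=lambda t: (t * math.exp(-k * t)) ** 2)

times = [0.1 * i for i in range(1, 101)]       # candidate grid, 0.1 .. 10
t_star = best_sample_time(k=2.0, times=times)  # theory: optimum at t = 1/k = 0.5
```

With several parameters the scalar information becomes a Fisher information matrix, and one maximizes a scalar summary of it (e.g. its determinant), which is where global optimizers such as the paper's algorithm earn their keep.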

  14. Rethinking the evolution of specialization: A model for the evolution of phenotypic heterogeneity.

    Science.gov (United States)

    Rubin, Ilan N; Doebeli, Michael

    2017-12-21

    Phenotypic heterogeneity refers to genetically identical individuals that express different phenotypes, even when in the same environment. Traditionally, "bet-hedging" in fluctuating environments is offered as the explanation for the evolution of phenotypic heterogeneity. However, there are an increasing number of examples of microbial populations that display phenotypic heterogeneity in stable environments. Here we present an evolutionary model of phenotypic heterogeneity of microbial metabolism and a resultant theory for the evolution of phenotypic versus genetic specialization. We use two-dimensional adaptive dynamics to track the evolution of the population phenotype distribution of the expression of two metabolic processes with a concave trade-off. Rather than assume a Gaussian phenotype distribution, we use a Beta distribution that is capable of describing genotypes that manifest as individuals with two distinct phenotypes. Doing so, we find that environmental variation is not a necessary condition for the evolution of phenotypic heterogeneity, which can evolve as a form of specialization in a stable environment. There are two competing pressures driving the evolution of specialization: directional selection toward the evolution of phenotypic heterogeneity and disruptive selection toward genetically determined specialists. Because of the lack of a singular point in the two-dimensional adaptive dynamics and the fact that directional selection is a first order process, while disruptive selection is of second order, the evolution of phenotypic heterogeneity dominates and often precludes speciation. We find that branching, and therefore genetic specialization, occurs mainly under two conditions: the presence of a cost to maintaining a high phenotypic variance or when the effect of mutations is large. 
A cost to high phenotypic variance dampens the strength of selection toward phenotypic heterogeneity and, when sufficiently large, introduces a singular point into
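
The key property of the Beta distribution used here can be checked numerically: for shape parameters below one the density is U-shaped, so a single genotype yields two phenotype clusters near 0 and 1. The shape values below are illustrative, not fitted to any data:

```python
import random

def phenotype_sample(alpha, beta, n, seed=1):
    """Sample n individual phenotypes from Beta(alpha, beta); with
    alpha, beta < 1 the density is bimodal (U-shaped), i.e. phenotypic
    heterogeneity from one genotype."""
    rng = random.Random(seed)
    return [rng.betavariate(alpha, beta) for _ in range(n)]

draws = phenotype_sample(0.2, 0.2, 5000)
near_0 = sum(d < 0.1 for d in draws)       # one phenotype cluster
near_1 = sum(d > 0.9 for d in draws)       # the other cluster
middle = sum(0.4 < d < 0.6 for d in draws) # generalists are rare
```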

  15. A distributed snow-evolution modeling system (SnowModel)

    Science.gov (United States)

    Glen E. Liston; Kelly. Elder

    2006-01-01

    SnowModel is a spatially distributed snow-evolution modeling system designed for application in landscapes, climates, and conditions where snow occurs. It is an aggregation of four submodels: MicroMet defines meteorological forcing conditions, EnBal calculates surface energy exchanges, SnowPack simulates snow depth and water-equivalent evolution, and SnowTran-3D...

  16. Temperature evolution of subharmonic gap structures in MgB{sub 2}/Nb point-contacts

    Energy Technology Data Exchange (ETDEWEB)

    Giubileo, F. [CNR-INFM Laboratorio Regionale SUPERMAT e Dipartimento di Fisica ' E.R. Caianiello' , Universita degli Studi di Salerno, via Salvador Allende, 84081 Baronissi (Italy)], E-mail: giubileo@sa.infn.it; Bobba, F.; Scarfato, A.; Piano, S. [CNR-INFM Laboratorio Regionale SUPERMAT e Dipartimento di Fisica ' E.R. Caianiello' , Universita degli Studi di Salerno, via Salvador Allende, 84081 Baronissi (Italy); Aprili, M. [Laboratoire de Spectroscopie en Lumiere Polarisee, ESPCI, 10 rue Vauquelin, 75005 Paris (France); CSNSM-CNRS, Bat. 108 Universite Paris-Sud, 91405 Orsay (France); Cucolo, A.M. [CNR-INFM Laboratorio Regionale SUPERMAT e Dipartimento di Fisica ' E.R. Caianiello' , Universita degli Studi di Salerno, via Salvador Allende, 84081 Baronissi (Italy)

    2007-09-01

    We have performed point-contact spectroscopy experiments on superconducting micro-constrictions between Nb tips and high quality MgB{sub 2} pellets. We measured the temperature evolution (between 4.2 K and 300 K) of the current-voltage (I-V) and of the dynamical conductance (dI/dV-V) characteristics. Above the Nb critical temperature T{sub C}{sup Nb}, the conductance of the constrictions behaves as predicted by the BTK model for S/N contacts, Nb being in its normal state. Below T{sub C}{sup Nb}, the contacts show a Josephson current and subharmonic gap structures due to multiple Andreev reflections. These observations clearly indicate the coupling of the MgB{sub 2} 3D {pi}-band with the Nb superconducting order parameter. We found {delta}{sub {pi}} = 2.4 {+-} 0.2 meV for the three-dimensional gap of MgB{sub 2}.

  17. Modelling offshore sand wave evolution

    NARCIS (Netherlands)

    Nemeth, Attila; Hulscher, Suzanne J.M.H.; van Damme, Rudolf M.J.

    2007-01-01

    We present a two-dimensional vertical (2DV) flow and morphological numerical model describing the behaviour of offshore sand waves. The model contains the 2DV shallow water equations, with a free water surface and a general bed load formula. The water movement is coupled to the sediment transport

  18. QSO evolution in the interaction model

    International Nuclear Information System (INIS)

    De Robertis, M.

    1985-01-01

    QSO evolution is investigated according to the interaction hypothesis described most recently by Stockton (1982), in which activity results from an interaction between two galaxies resulting in the transfer of gas onto a supermassive black hole (SBH) at the center of at least one participant. Explicit models presented here for interactions in cluster environments show that a peak QSO population can be formed in this way at z ≈ 2-3, with little activity prior to this epoch. Calculated space densities match those inferred from observations for this epoch. Substantial density evolution is expected in such models, since, after virialization, conditions in the cores of rich clusters lead to the depletion of gas-rich systems through ram-pressure stripping. Density evolution parameters of 6-12 are easily accounted for. At smaller redshifts, however, QSOs should be found primarily in poor clusters or groups. Probability estimates provided by this model are consistent with local estimates for the observed number of QSOs per interaction. Significant luminosity-dependent evolution might also be expected in these models. It is suggested that the mean SBH mass increases with lookback time, leading to a statistical brightening with redshift. Undoubtedly, both forms of evolution contribute to the overall QSO luminosity function

  19. Evolution of branch points for a laser beam propagating through an uplink turbulent atmosphere.

    Science.gov (United States)

    Ge, Xiao-Lu; Liu, Xuan; Guo, Cheng-Shan

    2014-03-24

    Evolution of branch points in the distorted optical field is studied when a laser beam propagates through turbulent atmosphere along an uplink path. Two categories of propagation events are mainly explored for the same propagation height: fixed wavelength with varying turbulence strength, and fixed turbulence strength with varying wavelength. It is shown that, when the beam propagates to a certain height, the density of the branch points reaches its maximum; this height changes with the turbulence strength but remains nearly constant across wavelengths. The relationship between the density of branch points and the Rytov number is also given. A fitted formula describing the relationship between the density of branch points and propagation height for different turbulence strengths and wavelengths is found. Interestingly, this formula is very similar to the blackbody radiation formula in physics. The results obtained may be helpful for atmospheric optics, astronomy and optical communication.
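    The abstract relates branch-point density to the Rytov number. As background, the standard plane-wave Rytov variance, σ_R² = 1.23 C_n² k^(7/6) L^(11/6), can be sketched in a few lines of Python. This is the textbook weak-turbulence expression, not the paper's fitted formula, and the example values below are purely illustrative:

```python
import math

def rytov_variance(cn2, wavelength, path_length):
    """Plane-wave Rytov variance: sigma_R^2 = 1.23 * Cn2 * k^(7/6) * L^(11/6),
    where k is the optical wavenumber and L the propagation path length."""
    k = 2 * math.pi / wavelength  # optical wavenumber [rad/m]
    return 1.23 * cn2 * k ** (7 / 6) * path_length ** (11 / 6)

# Illustrative weak-turbulence case: Cn2 = 1e-15 m^(-2/3), 1.55 um beam, 1 km path
sigma2 = rytov_variance(1e-15, 1.55e-6, 1000.0)
```

A value of σ_R² well below 1 indicates the weak-fluctuation regime; the variance grows with both turbulence strength and path length.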

  20. Exact 2-point function in Hermitian matrix model

    International Nuclear Information System (INIS)

    Morozov, A.; Shakirov, Sh.

    2009-01-01

    J. Harer and D. Zagier have found a strikingly simple generating function [1,2] for exact (all-genera) 1-point correlators in the Gaussian Hermitian matrix model. In this paper we generalize their result to 2-point correlators, using Toda integrability of the model. Remarkably, this exact 2-point correlation function turns out to be an elementary function - arctangent. Relation to the standard 2-point resolvents is pointed out. Some attempts of generalization to 3-point and higher functions are described.

  1. Evolution of Motor Control: From Reflexes and Motor Programs to the Equilibrium-Point Hypothesis

    OpenAIRE

    Latash, Mark L.

    2008-01-01

    This brief review analyzes the evolution of motor control theories along two lines that emphasize active (motor programs) and reactive (reflexes) features of voluntary movements. It suggests that the only contemporary hypothesis that integrates both approaches in a fruitful way is the equilibrium-point hypothesis. Physical, physiological, and behavioral foundations of the EP-hypothesis are considered as well as relations between the EP-hypothesis and the recent developments of the notion of motor synergies.

  2. Evolution of the ICESIM model

    International Nuclear Information System (INIS)

    Carson, R.W.; Groeneveld, J.L.

    1997-01-01

    A computer model named ICESIM was developed by Acres International in 1973 to study river ice problems associated with the construction of the Limestone Hydroelectric Generating Station on the Nelson River. The program could numerically simulate the processes of river ice formation under steady state conditions of flow. The program has evolved over two decades and has been used as a design and analytical tool for several river ice problems. One of the shortcomings of the model was its inability to consider varying river flows during a simulation. The model has recently been restructured into a new version called ICEDYN which uses a hydrodynamic module to compute river hydraulics. The ICEDYN program uses the same approach as ICESIM, but river hydraulics, which are affected by changes in inflow, and the accumulation of ice, are computed through a hydrodynamic solution of the St. Venant Equations. The ICEDYN model requires an extensive data set to describe the particular river reach being simulated. It has been tested on the Nelson River in northern Manitoba to see whether the numerical methods in the model can successfully represent field conditions. Results were encouraging but additional refinement is still needed. 7 refs., 1 tab., 7 figs

  3. Shaping asteroid models using genetic evolution (SAGE)

    Science.gov (United States)

    Bartczak, P.; Dudziński, G.

    2018-02-01

    In this work, we present SAGE (shaping asteroid models using genetic evolution), an asteroid modelling algorithm based solely on photometric lightcurve data. It produces non-convex shapes, orientations of the rotation axes and rotational periods of asteroids. The main concept behind a genetic evolution algorithm is to produce random populations of shapes and spin-axis orientations by mutating a seed shape and iterating the process until it converges to a stable global minimum. We tested SAGE on five artificial shapes. We also modelled asteroids 433 Eros and 9 Metis, since ground truth observations for them exist, allowing us to validate the models. We compared the derived shape of Eros with the NEAR Shoemaker model and that of Metis with adaptive optics and stellar occultation observations since other models from various inversion methods were available for Metis.

  4. Optimization of dynamic economic dispatch with valve-point effect using chaotic sequence based differential evolution algorithms

    International Nuclear Information System (INIS)

    He Dakuo; Dong Gang; Wang Fuli; Mao Zhizhong

    2011-01-01

    A chaotic sequence based differential evolution (DE) approach for solving the dynamic economic dispatch problem (DEDP) with valve-point effect is presented in this paper. The proposed method combines the DE algorithm with a local search technique to improve its performance. DE is the main optimizer, while an approximated model for local search is applied to fine-tune the solutions of the DE run. To accelerate convergence of DE, a series of constraint-handling rules is adopted. An initial population obtained from a chaotic sequence brings out the best performance of the proposed algorithm. The combined algorithm is validated on two test systems consisting of 10 and 13 thermal units whose incremental fuel cost functions take into account the valve-point loading effects. The proposed combined method outperforms other algorithms reported in the literature for the DEDP considering valve-point effects.
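    The valve-point effect mentioned in the abstract is conventionally modelled by adding a rectified sinusoid to the quadratic fuel cost, and the classic DE/rand/1 mutation is the usual core of a DE optimizer. Both can be sketched as follows; these are the generic textbook forms (with the customary coefficient names a, b, c, e, f), not the authors' chaotic-sequence variant:

```python
import math
import random

def fuel_cost(p, a, b, c, e, f, p_min):
    """Fuel cost of one unit at output p [MW]: a quadratic term plus a
    rectified sinusoid modelling the ripples from steam admission valves."""
    return a + b * p + c * p * p + abs(e * math.sin(f * (p_min - p)))

def de_mutation(pop, i, F=0.5):
    """DE/rand/1 mutant vector for target index i: v = x_r1 + F*(x_r2 - x_r3)
    with three distinct random individuals different from i (Storn-Price)."""
    r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
    return [x1 + F * (x2 - x3) for x1, x2, x3 in zip(pop[r1], pop[r2], pop[r3])]
```

In a full dispatch solver the mutant would be crossed over with the target vector and accepted only if the total cost over all units and time periods improves, subject to load-balance and ramp-rate constraints.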

  5. A model for evolution and extinction

    OpenAIRE

    Roberts, Bruce W.; Newman, M. E. J.

    1995-01-01

    We present a model for evolution and extinction in large ecosystems. The model incorporates the effects of interactions between species and the influences of abiotic environmental factors. We study the properties of the model by approximate analytic solution and also by numerical simulation, and use it to make predictions about the distribution of extinctions and species lifetimes that we would expect to see in real ecosystems. It should be possible to test these predictions against the fossil record.

  6. Political model of social evolution.

    Science.gov (United States)

    Acemoglu, Daron; Egorov, Georgy; Sonin, Konstantin

    2011-12-27

    Almost all democratic societies evolved socially and politically out of authoritarian and nondemocratic regimes. These changes not only altered the allocation of economic resources in society but also the structure of political power. In this paper, we develop a framework for studying the dynamics of political and social change. The society consists of agents that care about current and future social arrangements and economic allocations; allocation of political power determines who has the capacity to implement changes in economic allocations and future allocations of power. The set of available social rules and allocations at any point in time is stochastic. We show that political and social change may happen without any stochastic shocks or as a result of a shock destabilizing an otherwise stable social arrangement. Crucially, the process of social change is contingent (and history-dependent): the timing and sequence of stochastic events determine the long-run equilibrium social arrangements. For example, the extent of democratization may depend on how early uncertainty about the set of feasible reforms in the future is resolved.

  7. A Thermodynamic Point of View on Dark Energy Models

    Directory of Open Access Journals (Sweden)

    Vincenzo F. Cardone

    2017-07-01

    We present a conjugate analysis of two different dark energy models, namely the Barboza–Alcaniz parameterization and the phenomenologically-motivated Hobbit model, investigating both their agreement with observational data and their thermodynamical properties. We successfully fit a wide dataset including the Hubble diagram of Type Ia Supernovae, the Hubble rate expansion parameter as measured from cosmic chronometers, the baryon acoustic oscillations (BAO) standard ruler data and the Planck distance priors. This analysis allows us to constrain the model parameters, thus pointing at the region of the wide parameter space which is worth focusing on. As a novel step, we exploit the strong connection between gravity and thermodynamics to further check the models' viability by investigating their thermodynamical quantities. In particular, we study whether each cosmological scenario fulfills the generalized second law of thermodynamics, and moreover we contrast the two models, asking whether the evolution of the total entropy is in agreement with the expectation for a closed system. As a general result, we discuss whether thermodynamic constraints can be a valid complementary way both to constrain dark energy models and to differentiate among rival scenarios.

  8. Modeling fixation locations using spatial point processes.

    Science.gov (United States)

    Barthelmé, Simon; Trukenbrod, Hans; Engbert, Ralf; Wichmann, Felix

    2013-10-01

    Whenever eye movements are measured, a central part of the analysis has to do with where subjects fixate and why. To a first approximation, a set of fixations can be viewed as a set of points in space; this implies that fixations are spatial data and that the analysis of fixation locations can be beneficially thought of as a spatial statistics problem. We argue that thinking of fixation locations as arising from point processes is a very fruitful framework for eye-movement data, helping turn qualitative questions into quantitative ones. We provide a tutorial introduction to some of the main ideas of the field of spatial statistics, focusing especially on spatial Poisson processes. We show how point processes help relate image properties to fixation locations. In particular, we show how point processes naturally express the idea that image features' predictability for fixations may vary from one image to another. We review other methods of analysis used in the literature, show how they relate to point process theory, and argue that thinking in terms of point processes substantially extends the range of analyses that can be performed and clarifies their interpretation.
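    An inhomogeneous spatial Poisson process of the kind the abstract advocates can be simulated by thinning: draw candidates from a homogeneous process at an upper-bound rate, then keep each candidate with probability proportional to the local intensity. A minimal stdlib-only sketch of the Lewis-Shedler scheme (illustrative, not code from the paper):

```python
import math
import random

def sample_inhomogeneous_poisson(intensity, lam_max, width, height, rng=random):
    """Sample points of an inhomogeneous spatial Poisson process on a
    width x height window by thinning: candidates come from a homogeneous
    process of rate lam_max >= intensity(x, y) everywhere, and each is kept
    with probability intensity(x, y) / lam_max."""
    mean = lam_max * width * height
    # Homogeneous candidate count ~ Poisson(mean), via Knuth's method
    # (adequate for the small means used here).
    threshold, k, p = math.exp(-mean), 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    points = []
    for _ in range(k - 1):
        x, y = rng.uniform(0, width), rng.uniform(0, height)
        if rng.random() < intensity(x, y) / lam_max:
            points.append((x, y))
    return points
```

For fixation modelling, `intensity(x, y)` would be built from image features (e.g. a saliency map), which is exactly where the point-process framework links image properties to fixation locations.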

  9. Pseudo-dynamic source modelling with 1-point and 2-point statistics of earthquake source parameters

    KAUST Repository

    Song, S. G.; Dalguer, L. A.; Mai, Paul Martin

    2013-01-01

    statistical framework that governs the finite-fault rupture process with 1-point and 2-point statistics of source parameters in order to quantify the variability of finite source models for future scenario events. We test this method by extracting 1-point

  10. Evolution of management activities and performance of the Point Lepreau Steam Generators

    International Nuclear Information System (INIS)

    Slade, J.; Keating, J.; Gendron, T.

    2007-01-01

    The Point Lepreau steam generators have been in service since 1983, when the plant was commissioned. During the first thirteen years of operation, Point Lepreau steam generator maintenance issues led to 3-4% unplanned plant incapability. Steam generator fouling, corrosion, and the introduction of foreign materials during maintenance led to six tube leaks, two unplanned outages, two lengthy extended outages, and degraded thermal performance during this period. In recognition of the link between steam generator maintenance activities and plant performance, improvements to steam generator management activities have been continuously implemented since 1987. This paper reviews the evolution of steam generator management activities at Point Lepreau and the resulting improved trends in performance. Plant incapability from unplanned steam generator maintenance has been close to zero since 1996. The positive trends have provided a strong basis for the management strategies developed for post-refurbishment operation. (author)

  11. The thermal evolution of universe: standard model

    International Nuclear Information System (INIS)

    Nascimento, L.C.S. do.

    1975-08-01

    A description of the dynamical evolution of the Universe is made, following a model based on the theory of General Relativity. The model admits the Cosmological Principle, the Principle of Equivalence and the Robertson-Walker metric (of which an original derivation is presented). In this model, the universe is considered as a perfect fluid, ideal and symmetric with respect to the number of particles and antiparticles. The thermodynamic relations deriving from these hypotheses are derived, and from them the several eras of the thermal evolution of the universe are established. Finally, the problems arising from certain specific predictions of the model are studied, and the predicted elemental abundances from nucleosynthesis and the present-day behavior of the universe are analysed in detail. (author) [pt

  12. Evolution of Motor Control: From Reflexes and Motor Programs to the Equilibrium-Point Hypothesis.

    Science.gov (United States)

    Latash, Mark L

    2008-01-01

    This brief review analyzes the evolution of motor control theories along two lines that emphasize active (motor programs) and reactive (reflexes) features of voluntary movements. It suggests that the only contemporary hypothesis that integrates both approaches in a fruitful way is the equilibrium-point hypothesis. Physical, physiological, and behavioral foundations of the EP-hypothesis are considered as well as relations between the EP-hypothesis and the recent developments of the notion of motor synergies. The paper ends with a brief review of the criticisms of the EP-hypothesis and challenges that the hypothesis faces at this time.

  13. Modeling Evolution on Nearly Neutral Network Fitness Landscapes

    Science.gov (United States)

    Yakushkina, Tatiana; Saakian, David B.

    2017-08-01

    To describe virus evolution, it is necessary to define a fitness landscape. In this article, we consider microscopic models with an advanced version of neutral network fitness landscapes. In this problem setting, we suppose the fitness difference between one-point mutation neighbors to be small. We construct a modification of the Wright-Fisher model, which is related to ordinary infinite population models with a nearly neutral network fitness landscape in the large population limit. From the microscopic models in the realistic sequence space, we derive two versions of nearly neutral network models: with sinks and without sinks. We claim that the suggested model describes the evolutionary dynamics of RNA viruses better than the traditional Wright-Fisher model with few sequences.
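    The Wright-Fisher model that the abstract modifies amounts to resampling a fixed-size population each generation, with selection weighting the draw. A minimal sketch of one generation in the standard textbook form (not the authors' nearly-neutral-network variant):

```python
import random

def wright_fisher_step(counts, fitness, rng=random):
    """One Wright-Fisher generation: resample N individuals with replacement,
    each draw picking type i with probability proportional to counts[i] *
    fitness[i]. Population size N is conserved."""
    n = sum(counts)
    weights = [c * w for c, w in zip(counts, fitness)]
    total = sum(weights)
    new_counts = [0] * len(counts)
    for _ in range(n):
        r = rng.random() * total
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:
                new_counts[i] += 1
                break
    return new_counts
```

In a nearly neutral network setting the `fitness` entries of one-point mutation neighbours would differ only slightly, so drift dominates selection until the population reaches the network boundary.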

  14. Modeling the microstructural evolution during constrained sintering

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Frandsen, Henrik Lund; Tikare, V.

    A numerical model able to simulate solid state constrained sintering of a powder compact is presented. The model couples an existing kinetic Monte Carlo (kMC) model for free sintering with a finite element (FE) method for calculating stresses on a microstructural level. The microstructural response to the stress field, as well as the FE calculation of the stress field from the microstructural evolution, is discussed. The sintering behavior of two powder compacts constrained by a rigid substrate is simulated and compared to free sintering of the same samples. Constrained sintering results in a larger number...

  15. Modeling aeolian dune and dune field evolution

    Science.gov (United States)

    Diniega, Serina

    Aeolian sand dune morphologies and sizes are strongly connected to the environmental context and physical processes active since dune formation. As such, the patterns and measurable features found within dunes and dune fields can be interpreted as records of environmental conditions. Using mathematical models of dune and dune field evolution, it should be possible to quantitatively predict dune field dynamics from current conditions or to determine past field conditions based on present-day observations. In this dissertation, we focus on the construction and quantitative analysis of a continuum dune evolution model. We then apply this model towards interpretation of the formative history of terrestrial and martian dunes and dune fields. Our first aim is to identify the controls for the characteristic lengthscales seen in patterned dune fields. Variations in sand flux, binary dune interactions, and topography are evaluated with respect to evolution of individual dunes. Through the use of both quantitative and qualitative multiscale models, these results are then extended to determine the role such processes may play in (de)stabilization of the dune field. We find that sand flux variations and topography generally destabilize dune fields, while dune collisions can yield more similarly-sized dunes. We construct and apply a phenomenological macroscale dune evolution model to then quantitatively demonstrate how dune collisions cause a dune field to evolve into a set of uniformly-sized dunes. Our second goal is to investigate the influence of reversing winds and polar processes in relation to dune slope and morphology. Using numerical experiments, we investigate possible causes of distinctive morphologies seen in Antarctic and martian polar dunes. Finally, we discuss possible model extensions and needed observations that will enable the inclusion of more realistic physical environments in the dune and dune field evolution models. By elucidating the qualitative and

  16. Pseudo-dynamic source modelling with 1-point and 2-point statistics of earthquake source parameters

    KAUST Repository

    Song, S. G.

    2013-12-24

    Ground motion prediction is an essential element in seismic hazard and risk analysis. Empirical ground motion prediction approaches have been widely used in the community, but efficient simulation-based ground motion prediction methods are needed to complement empirical approaches, especially in the regions with limited data constraints. Recently, dynamic rupture modelling has been successfully adopted in physics-based source and ground motion modelling, but it is still computationally demanding and many input parameters are not well constrained by observational data. Pseudo-dynamic source modelling keeps the form of kinematic modelling with its computational efficiency, but also tries to emulate the physics of source process. In this paper, we develop a statistical framework that governs the finite-fault rupture process with 1-point and 2-point statistics of source parameters in order to quantify the variability of finite source models for future scenario events. We test this method by extracting 1-point and 2-point statistics from dynamically derived source models and simulating a number of rupture scenarios, given target 1-point and 2-point statistics. We propose a new rupture model generator for stochastic source modelling with the covariance matrix constructed from target 2-point statistics, that is, auto- and cross-correlations. Our sensitivity analysis of near-source ground motions to 1-point and 2-point statistics of source parameters provides insights into relations between statistical rupture properties and ground motions. We observe that larger standard deviation and stronger correlation produce stronger peak ground motions in general. The proposed new source modelling approach will contribute to understanding the effect of earthquake source on near-source ground motion characteristics in a more quantitative and systematic way.
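    The 2-point statistics (auto- and cross-correlations) that the authors impose on source parameters can be illustrated in the simplest possible case: drawing two Gaussian sequences with a target cross-correlation rho. This stdlib-only sketch is a toy two-variable analogue of the covariance-matrix construction described in the abstract, not the authors' rupture model generator:

```python
import math
import random

def correlated_pair(n, rho, rng=random):
    """Generate two zero-mean, unit-variance Gaussian sequences whose target
    cross-correlation (the 2-point statistic) is rho, using the identity
    y = rho * x + sqrt(1 - rho^2) * z with x, z independent standard normals.
    This is the 2x2 special case of sampling x = L z with C = L L^T."""
    xs, ys = [], []
    for _ in range(n):
        x, z = rng.gauss(0, 1), rng.gauss(0, 1)
        xs.append(x)
        ys.append(rho * x + math.sqrt(1 - rho * rho) * z)
    return xs, ys
```

For full rupture models the same idea generalizes: build the covariance matrix from the target auto- and cross-correlations of slip, rupture velocity and rise time, Cholesky-factorize it, and multiply by a vector of independent normals.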

  17. Sand Point, Alaska Coastal Digital Elevation Model

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NOAA's National Geophysical Data Center (NGDC) is building high-resolution digital elevation models (DEMs) for select U.S. coastal regions. These integrated...

  18. Utilization of the Discrete Differential Evolution for Optimization in Multidimensional Point Clouds.

    Science.gov (United States)

    Uher, Vojtěch; Gajdoš, Petr; Radecký, Michal; Snášel, Václav

    2016-01-01

    The Differential Evolution (DE) is a widely used bioinspired optimization algorithm developed by Storn and Price. It is popular for its simplicity and robustness. The algorithm was primarily designed for real-valued problems and continuous functions, but several modified versions optimizing both integer and discrete-valued problems have been developed. The discrete-coded DE has mostly been used for combinatorial problems in a set of enumerative variants. However, the DE has great potential in spatial data analysis and pattern recognition. This paper formulates the problem as a search for a combination of distinct vertices that meet the specified conditions. It proposes a novel approach called the Multidimensional Discrete Differential Evolution (MDDE), applying the principle of the discrete-coded DE in discrete point clouds (PCs). The paper examines the local searching abilities of the MDDE and its convergence to the global optimum in the PCs. The multidimensional discrete vertices cannot be simply ordered to obtain a convenient traversal of the discrete data, which is crucial for good convergence of a population. A novel mutation operator utilizing linear ordering of spatial data based on space-filling curves is introduced. The algorithm is tested on several spatial datasets and optimization problems. The experiments show that the MDDE is an efficient and fast method for discrete optimizations in multidimensional point clouds.
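    A common space-filling-curve ordering of the kind the mutation operator relies on is the Morton (Z-order) key, which interleaves the bits of integer coordinates so that points close in space tend to stay close in the 1-D ordering. A minimal sketch; the abstract does not specify which curve the MDDE uses, so Morton order is shown here only as a representative choice:

```python
def morton_key(coords, bits=10):
    """Morton / Z-order key: interleave the low `bits` bits of each integer
    coordinate. Sorting points by this key gives a locality-preserving
    linear ordering of a multidimensional point cloud."""
    key = 0
    dims = len(coords)
    for b in range(bits):
        for d, c in enumerate(coords):
            key |= ((c >> b) & 1) << (b * dims + d)
    return key

# Sort a small 3-D point cloud along the Z-order curve
cloud = [(5, 1, 7), (5, 1, 6), (0, 0, 0), (31, 31, 31)]
ordered = sorted(cloud, key=morton_key)
```

With such an ordering in hand, a discrete DE mutation can move along curve indices instead of raw coordinates, which gives the population a consistent notion of "direction" in the point cloud.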

  19. Sand Point, Alaska Tsunami Forecast Grids for MOST Model

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Sand Point, Alaska Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model....

  20. Toke Point, Washington Tsunami Forecast Grids for MOST Model

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Toke Point, Washington Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model....

  1. Conductance histogram evolution of an EC-MCBJ fabricated Au atomic point contact

    Energy Technology Data Exchange (ETDEWEB)

    Yang Yang; Liu Junyang; Chen Zhaobin; Tian Jinghua; Jin Xi; Liu Bo; Yang Fangzu; Tian Zhongqun [State Key Laboratory of Physical Chemistry of Solid Surfaces and Department of Chemistry, College of Chemistry and Chemical Engineering, Xiamen University, Xiamen 361005 (China); Li Xiulan; Tao Nongjian [Center for Bioelectronics and Biosensors, Biodesign Institute, Department of Electrical Engineering, Arizona State University, Tempe, AZ 85287-6206 (United States); Luo Zhongzi; Lu Miao, E-mail: zqtian@xmu.edu.cn [Micro-Electro-Mechanical Systems Research Center, Pen-Tung Sah Micro-Nano Technology Institute, Xiamen University, Xiamen 361005 (China)

    2011-07-08

    This work presents a study of Au conductance quantization based on a combined electrochemical deposition and mechanically controllable break junction (MCBJ) method. We describe the microfabrication process and discuss improved features of our microchip structure compared to the previous one. The improved structure prolongs the available life of the microchip and also increases the success rate of the MCBJ experiment. Stepwise changes in the current were observed at the last stage of atomic point contact breakdown and conductance histograms were constructed. The evolution of the 1G₀ peak height in conductance histograms was used to investigate the probability of formation of an atomic point contact. It has been shown that the success rate in forming an atomic point contact can be improved by decreasing the stretching speed and the degree that the two electrodes are brought into contact. The repeated breakdown and formation over thousands of cycles led to a distinctive increase of the 1G₀ peak height in the conductance histograms, and this increased probability of forming a single atomic point contact is discussed.

  2. Conductance histogram evolution of an EC-MCBJ fabricated Au atomic point contact

    International Nuclear Information System (INIS)

    Yang Yang; Liu Junyang; Chen Zhaobin; Tian Jinghua; Jin Xi; Liu Bo; Yang Fangzu; Tian Zhongqun; Li Xiulan; Tao Nongjian; Luo Zhongzi; Lu Miao

    2011-01-01

    This work presents a study of Au conductance quantization based on a combined electrochemical deposition and mechanically controllable break junction (MCBJ) method. We describe the microfabrication process and discuss improved features of our microchip structure compared to the previous one. The improved structure prolongs the available life of the microchip and also increases the success rate of the MCBJ experiment. Stepwise changes in the current were observed at the last stage of atomic point contact breakdown and conductance histograms were constructed. The evolution of the 1G₀ peak height in conductance histograms was used to investigate the probability of formation of an atomic point contact. It has been shown that the success rate in forming an atomic point contact can be improved by decreasing the stretching speed and the degree that the two electrodes are brought into contact. The repeated breakdown and formation over thousands of cycles led to a distinctive increase of the 1G₀ peak height in the conductance histograms, and this increased probability of forming a single atomic point contact is discussed.

  3. LAPSUS: soil erosion - landscape evolution model

    Science.gov (United States)

    van Gorp, Wouter; Temme, Arnaud; Schoorl, Jeroen

    2015-04-01

    LAPSUS is a soil erosion and landscape evolution model capable of simulating the landscape evolution of a gridded DEM using multiple water-, mass-movement- and human-driven processes on multiple temporal and spatial scales. It is able to deal with a variety of human landscape interventions, such as land-use management and tillage, and it can model their interactions with natural processes. The complex, spatially explicit feedbacks the model simulates demonstrate the importance of the spatial interaction of human activity and erosion-deposition patterns. In addition, LAPSUS can model shallow landsliding, slope collapse, creep, solifluction, biological and frost weathering, and fluvial behaviour. Furthermore, an algorithm to deal with natural depressions has been added, and event-based modelling with an improved infiltration description and dust deposition has been pursued. LAPSUS has been used for case studies in many parts of the world and is continuously developing and expanding. It is now available for third-party and educational use. It has a comprehensive user interface and is accompanied by a manual and exercises. The LAPSUS model is highly suitable for quantifying and understanding catchment-scale erosion processes. More information and a download link are available at www.lapsusmodel.nl.

  4. Evolution of the vortex phase diagram in YBa2Cu3O7-δ with random point disorder

    International Nuclear Information System (INIS)

    Paulius, L. M.; Kwok, W.-K.; Olsson, R. J.; Petrean, A. M.; Tobos, V.; Fendrich, J. A.; Crabtree, G. W.; Burns, C. A.; Ferguson, S.

    2000-01-01

    We demonstrate the gradual evolution of the first-order vortex melting transition into a continuous transition with the systematic addition of point disorder induced by proton irradiation. The evolution occurs via the decrease of the upper critical point and the increase of the lower critical point. The collapse of the first-order melting transition occurs when the two critical points merge. We compare these results with the effects of electron irradiation on the first-order transition. (c) 2000 The American Physical Society

  5. Optimal evolution models for quantum tomography

    International Nuclear Information System (INIS)

    Czerwiński, Artur

    2016-01-01

    The research presented in this article concerns the stroboscopic approach to quantum tomography, which is an area of science where quantum physics and linear algebra overlap. In this article we introduce the algebraic structure of the parametric-dependent quantum channels for 2-level and 3-level systems such that the generator of evolution corresponding with the Kraus operators has no degenerate eigenvalues. In such cases the index of cyclicity of the generator is equal to 1, which physically means that there exists one observable whose measurement, performed a sufficient number of times at distinct instants, provides enough data to reconstruct the initial density matrix and, consequently, the trajectory of the state. The necessary conditions for the parameters and relations between them are introduced. The results presented in this paper seem to have considerable potential applications in experiments due to the fact that one can perform quantum tomography by conducting only one kind of measurement. Therefore, the analyzed evolution models can be considered optimal in the context of quantum tomography. Finally, we introduce some remarks concerning optimal evolution models in the case of n-dimensional Hilbert space. (paper)

  6. Brand Equity Evolution: a System Dynamics Model

    Directory of Open Access Journals (Sweden)

    Edson Crescitelli

    2009-04-01

    One of the greatest challenges in brand management lies in monitoring brand equity over time. This paper aims to present a simulation model able to represent this evolution. The model was drawn on brand equity concepts developed by Aaker and Joachimsthaler (2000), using the system dynamics methodology. The use of computational dynamic models aims to create new sources of information able to sensitize academics and managers alike to the dynamic implications of their brand management. As a result, an easily implementable model was generated, capable of executing continuous scenario simulations by surveying causal relations among the variables that explain brand equity. Moreover, the existence of a number of system modeling tools will allow extensive application of the concepts used in this study in practical situations, both in professional and educational settings.

  7. Galactic chemical evolution in hierarchical formation models

    Science.gov (United States)

    Arrigoni, Matias

    2010-10-01

    The chemical properties and abundance ratios of galaxies provide important information about their formation histories. Galactic chemical evolution has been modelled in detail within the monolithic collapse scenario. These models have successfully described the abundance distributions in our Galaxy and other spiral discs, as well as the trends of metallicity and abundance ratios observed in early-type galaxies. In the last three decades, however, the paradigm of hierarchical assembly in a Cold Dark Matter (CDM) cosmology has revised the picture of how structure in the Universe forms and evolves. In this scenario, galaxies form when gas radiatively cools and condenses inside dark matter haloes, which themselves follow dissipationless gravitational collapse. The CDM picture has been successful at predicting many observed properties of galaxies (for example, the luminosity and stellar mass function of galaxies, color-magnitude or star formation rate vs. stellar mass distributions, relative numbers of early and late-type galaxies, gas fractions and size distributions of spiral galaxies, and the global star formation history), though many potential problems and open questions remain. It is therefore interesting to see whether chemical evolution models, when implemented within this modern cosmological context, are able to correctly predict the observed chemical properties of galaxies. With the advent of more powerful telescopes and detectors, precise observations of chemical abundances and abundance ratios in various phases (stellar, ISM, ICM) offer the opportunity to obtain strong constraints on galaxy formation histories and the physics that shapes them. However, in order to take advantage of these observations, it is necessary to implement detailed modeling of chemical evolution into a modern cosmological model of hierarchical assembly.

  8. Self-exciting point process in modeling earthquake occurrences

    International Nuclear Information System (INIS)

    Pratiwi, H.; Slamet, I.; Respatiwulan; Saputro, D. R. S.

    2017-01-01

    In this paper, we present a procedure for modeling earthquake occurrences based on a spatial-temporal point process. The magnitude distribution is expressed as a truncated exponential, and the event frequency is modeled with a spatial-temporal point process that is characterized uniquely by its associated conditional intensity. Earthquakes can be regarded as point patterns with a temporal clustering feature, so we use a self-exciting point process to model the conditional intensity function. Main shocks are selected via the window algorithm of Gardner and Knopoff, and the model can be fitted by the maximum likelihood method for three random variables. (paper)
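    A self-exciting conditional intensity of the kind described, λ(t) = μ + Σᵢ g(t − tᵢ), can be simulated with Ogata's thinning algorithm. The sketch below uses an exponential excitation kernel; the parameters μ, α, β are illustrative assumptions, not values from the paper:

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, t_max, seed=0):
    """Simulate a temporal self-exciting (Hawkes) process by Ogata's thinning.

    Conditional intensity: lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i)).
    Stationarity requires alpha / beta < 1.
    """
    rng = random.Random(seed)
    events, t = [], 0.0
    while t < t_max:
        # The intensity only decays between events, so its value at the
        # current time is a valid upper bound for the thinning step.
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)
        if t >= t_max:
            break
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() <= lam_t / lam_bar:  # accept with prob lambda(t)/lam_bar
            events.append(t)
    return events

events = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, t_max=200.0)
```

    Fitting, as in the abstract, would then maximize the point-process log-likelihood of the observed event times under this intensity.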

  9. Dew Point modelling using GEP based multi objective optimization

    OpenAIRE

    Shroff, Siddharth; Dabhi, Vipul

    2013-01-01

    Different techniques are used to model the relationship between temperature, dew point and relative humidity. Gene expression programming is capable of modelling complex realities with great accuracy, allowing, at the same time, the extraction of knowledge from the evolved models, in contrast to other learning algorithms. We aim to use Gene Expression Programming for modelling of dew point. Generally, accuracy of the model is the only objective used by the selection mechanism of GEP. This will evolve...
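    For reference, the closed-form relationship among the three quantities that such evolved models approximate is commonly written with the Magnus formula; the constants a and b below are conventional textbook values, not parameters from the paper:

```python
import math

def dew_point_celsius(T, RH, a=17.27, b=237.7):
    """Magnus approximation of the dew point (deg C) from air temperature
    T (deg C) and relative humidity RH (percent). a, b are conventional
    constants, not fitted values from the paper."""
    gamma = math.log(RH / 100.0) + a * T / (b + T)
    return b * gamma / (a - gamma)

td = dew_point_celsius(25.0, 60.0)   # roughly 16.7 deg C
```

    At RH = 100% the formula reduces exactly to Td = T, a useful sanity check for any learned model.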

  10. The turning points of revenue management: a brief history of future evolution

    Directory of Open Access Journals (Sweden)

    Ian Seymour Yeoman

    2017-04-01

    Full Text Available Purpose – The primary aim of revenue management (RM) is to sell the right product to the right customer at the right time for the right price. Ever since the deregulation of the US airline industry and the emergence of the internet as a distribution channel, RM has come of age. The purpose of this paper is to map out ten turning points in the evolution of revenue management from an historical perspective. Design/methodology/approach – The paper is a chronological account based upon published research and literature fundamentally drawn from the Journal of Revenue and Pricing Management. Findings – The significance and success of RM are attributed to the following turning points: Littlewood’s rule, Expected Marginal Seat Revenue, deregulation of the US air industry, single leg to origin and destination RM, the use of family fares, technological advancement, low-cost carriers, dynamic pricing, consumer and price transparency and pricing capabilities in organizations. Originality/value – The originality of the paper lies in identifying the core trends or turning points that have shaped the development of RM, thus assisting futurists or forecasters in shaping the future.

  11. Evolution of native point defects in ZnO bulk probed by positron annihilation spectroscopy

    Science.gov (United States)

    Peng, Cheng-Xiao; Wang, Ke-Fan; Zhang, Yang; Guo, Feng-Li; Weng, Hui-Min; Ye, Bang-Jiao

    2009-05-01

    This paper studies the evolution of native point defects with temperature in ZnO single crystals by positron lifetime and coincidence Doppler broadening (CDB) spectroscopy, combined with calculated positron lifetimes and electron momentum distributions. The calculated and experimental positron lifetimes in ZnO bulk confirm the presence of zinc monovacancies, whose concentration begins to decrease above 600 °C annealing treatment. CDB is an effective method to distinguish elemental species; here we combine this technique with the calculated electron momentum distribution to identify oxygen vacancies, which do not trap positrons due to their positive charge. The CDB spectra show that oxygen vacancies do not appear until 600 °C annealing treatment, and increase with increasing annealing temperature. This study supports the idea that green luminescence is closely related to oxygen vacancies.

  12. Evolution of native point defects in ZnO bulk probed by positron annihilation spectroscopy

    International Nuclear Information System (INIS)

    Cheng-Xiao, Peng; Ke-Fan, Wang; Yang, Zhang; Feng-Li, Guo; Hui-Min, Weng; Bang-Jiao, Ye

    2009-01-01

    This paper studies the evolution of native point defects with temperature in ZnO single crystals by positron lifetime and coincidence Doppler broadening (CDB) spectroscopy, combined with calculated positron lifetimes and electron momentum distributions. The calculated and experimental positron lifetimes in ZnO bulk confirm the presence of zinc monovacancies, whose concentration begins to decrease above 600 °C annealing treatment. CDB is an effective method to distinguish elemental species; here we combine this technique with the calculated electron momentum distribution to identify oxygen vacancies, which do not trap positrons due to their positive charge. The CDB spectra show that oxygen vacancies do not appear until 600 °C annealing treatment, and increase with increasing annealing temperature. This study supports the idea that green luminescence is closely related to oxygen vacancies.

  13. Radical prostatectomy: evolution of surgical technique from the laparoscopic point of view

    Directory of Open Access Journals (Sweden)

    Xavier Cathelineau

    2010-04-01

    Full Text Available PURPOSE: To review the literature and present a current picture of the evolution in radical prostatectomy from the laparoscopic point of view. MATERIALS AND METHODS: We conducted an extensive Medline literature search. Articles obtained regarding laparoscopic radical prostatectomy (LRP) and our experience at Institut Montsouris were used for reassessing anatomical and technical issues in radical prostatectomy. RESULTS: LRP nuances were reassessed by surgical teams in order to verify possible weaknesses in their performance. Our basic approach was to carefully study the anatomy and the pioneering open surgery descriptions in order to standardize and master a technique. The learning curve is presented in terms of an objective evaluation of outcomes for cancer control and functional results. In terms of technique-outcomes, there are several key elements in radical prostatectomy, such as dorsal vein control-apex exposure and nerve sparing, with particular implications for oncological and functional results. Major variations among the surgical teams' performance and follow-up prevented objective comparisons in radical prostatectomy. The remarkable evolution of LRP needs to be supported by comprehensive results. CONCLUSIONS: Radical prostatectomy is a complex surgical operation with difficult objectives. Surgical technique should be standardized in order to allow an adequate and reliable performance in all settings, keeping in mind that cancer control remains the primary objective. Reassessing anatomy and a return to basics in surgical technique is the means to improve outcomes and overcome the difficult task of the learning curve, especially in minimal access urological surgery.

  14. Point Reyes, California Tsunami Forecast Grids for MOST Model

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Point Reyes, California Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST)...

  15. A hierarchical model exhibiting the Kosterlitz-Thouless fixed point

    International Nuclear Information System (INIS)

    Marchetti, D.H.U.; Perez, J.F.

    1985-01-01

    A hierarchical model for 2-d Coulomb gases displaying a stable line of fixed points describing the Kosterlitz-Thouless phase transition is constructed. For Coulomb gases corresponding to Z_N models, these fixed points are stable for an intermediate temperature interval. (Author) [pt]

  16. Mathematical models of ecology and evolution

    DEFF Research Database (Denmark)

    Zhang, Lai

    2012-01-01

    -history processes: the net-assimilation mechanism of the κ-rule and the net-reproduction mechanism of size dependence, using a simple model comprising a size-structured consumer Daphnia and an unstructured resource algae. It is found that, in contrast to the former mechanism, the latter tends to destabilize population...... dynamics but as a trade-off promotes species survival by shortening the juvenile delay between birth and the onset of reproduction. Paper II compares the size-spectrum and food-web representations of communities using two traits (body size and habitat location) based unstructured population model of Lotka......) based size-structured population model, that is, interference in foraging, maintenance, survival, and recruitment. Their impacts on the ecology and evolution of size-structured populations and communities are explored. Ecologically, interference affects population demographic properties either negatively

  17. A Distributed Snow Evolution Modeling System (SnowModel)

    Science.gov (United States)

    Liston, G. E.; Elder, K.

    2004-12-01

    A spatially distributed snow-evolution modeling system (SnowModel) has been specifically designed to be applicable over a wide range of snow landscapes, climates, and conditions. To reach this goal, SnowModel is composed of four sub-models: MicroMet defines the meteorological forcing conditions, EnBal calculates surface energy exchanges, SnowMass simulates snow depth and water-equivalent evolution, and SnowTran-3D accounts for snow redistribution by wind. While other distributed snow models exist, SnowModel is unique in that it includes a well-tested blowing-snow sub-model (SnowTran-3D) for application in windy arctic, alpine, and prairie environments where snowdrifts are common. These environments comprise 68% of the seasonally snow-covered Northern Hemisphere land surface. SnowModel also accounts for snow processes occurring in forested environments (e.g., canopy interception related processes). SnowModel is designed to simulate snow-related physical processes occurring at spatial scales of 5-m and greater, and temporal scales of 1-hour and greater. These include: accumulation from precipitation; wind redistribution and sublimation; loading, unloading, and sublimation within forest canopies; snow-density evolution; and snowpack ripening and melt. To enhance its wide applicability, SnowModel includes the physical calculations required to simulate snow evolution within each of the global snow classes defined by Sturm et al. (1995), e.g., tundra, taiga, alpine, prairie, maritime, and ephemeral snow covers. The three, 25-km by 25-km, Cold Land Processes Experiment (CLPX) mesoscale study areas (MSAs: Fraser, North Park, and Rabbit Ears) are used as SnowModel simulation examples to highlight model strengths, weaknesses, and features in forested, semi-forested, alpine, and shrubland environments.

  18. Constraints and entropy in a model of network evolution

    Science.gov (United States)

    Tee, Philip; Wakeman, Ian; Parisis, George; Dawes, Jonathan; Kiss, István Z.

    2017-11-01

    Barabási-Albert's "Scale Free" model is the starting point for much of the accepted theory of the evolution of real-world communication networks. Careful comparison of the theory with a wide range of real-world networks, however, indicates that the model is, in some cases, only a rough approximation to the dynamical evolution of real networks. In particular, the exponent γ of the power law distribution of degree is predicted by the model to be exactly 3, whereas in a number of real-world networks it has values between 1.2 and 2.9. In addition, the degree distributions of real networks exhibit cutoffs at high node degree, which indicates the existence of maximal node degrees for these networks. In this paper we propose a simple extension to the "Scale Free" model, which offers better agreement with the experimental data. This improvement is satisfying, but the model still does not explain why the attachment probabilities should favor high-degree nodes, or indeed how constraints arise in non-physical networks. Using recent advances in the analysis of the entropy of graphs at the node level, we propose a first-principles derivation for the "Scale Free" and "constraints" models from thermodynamic principles, and demonstrate that both preferential attachment and constraints could arise as a natural consequence of the second law of thermodynamics.
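    The two ingredients discussed, preferential attachment and a maximal node degree, can be combined in a few lines of code; the hard cap k_max below is a crude stand-in for the paper's constraints model and the parameter values are illustrative:

```python
import random

def scale_free_with_cap(n, m=2, k_max=None, seed=5):
    """Barabasi-Albert preferential attachment with an optional hard cap on
    node degree -- a crude stand-in for the 'constraints' extension."""
    rng = random.Random(seed)
    deg = {0: 1, 1: 1}          # start from a single edge between nodes 0 and 1
    targets = [0, 1]            # each node appears once per unit of degree
    for new in range(2, n):
        chosen = set()
        while len(chosen) < m:
            cand = rng.choice(targets)      # degree-proportional sampling
            if k_max is None or deg[cand] < k_max:
                chosen.add(cand)            # reject candidates at the cap
        deg[new] = 0
        for tgt in chosen:
            deg[tgt] += 1
            deg[new] += 1
            targets.extend([tgt, new])
    return deg

deg = scale_free_with_cap(3000, m=2, k_max=50)
```

    With k_max = None the construction reduces to the standard model (γ ≈ 3); with a finite cap the degree distribution is truncated at k_max, mimicking the observed cutoffs.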

  19. UNCERTAINTIES IN GALACTIC CHEMICAL EVOLUTION MODELS

    International Nuclear Information System (INIS)

    Côté, Benoit; Ritter, Christian; Herwig, Falk; O’Shea, Brian W.; Pignatari, Marco; Jones, Samuel; Fryer, Chris L.

    2016-01-01

    We use a simple one-zone galactic chemical evolution model to quantify the uncertainties generated by the input parameters in numerical predictions for a galaxy with properties similar to those of the Milky Way. We compiled several studies from the literature to gather the current constraints for our simulations regarding the typical value and uncertainty of the following seven basic parameters: the lower and upper mass limits of the stellar initial mass function (IMF), the slope of the high-mass end of the stellar IMF, the slope of the delay-time distribution function of Type Ia supernovae (SNe Ia), the number of SNe Ia per M⊙ formed, the total stellar mass formed, and the final mass of gas. We derived a probability distribution function to express the range of likely values for every parameter, which were then included in a Monte Carlo code to run several hundred simulations with randomly selected input parameters. This approach enables us to analyze the predicted chemical evolution of 16 elements in a statistical manner by identifying the most probable solutions, along with their 68% and 95% confidence levels. Our results show that the overall uncertainties are shaped by several input parameters that individually contribute at different metallicities, and thus at different galactic ages. The level of uncertainty then depends on the metallicity and is different from one element to another. Among the seven input parameters considered in this work, the slope of the IMF and the number of SNe Ia are currently the two main sources of uncertainty. The thicknesses of the uncertainty bands bounded by the 68% and 95% confidence levels are generally within 0.3 and 0.6 dex, respectively. When looking at the evolution of individual elements as a function of galactic age instead of metallicity, those same thicknesses range from 0.1 to 0.6 dex for the 68% confidence levels and from 0.3 to 1.0 dex for the 95% confidence levels. The uncertainty in our chemical evolution model...
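    The Monte Carlo strategy described — draw each input parameter from its distribution, rerun the model, and read off percentile bands — can be sketched generically. The `yield_model` below is a toy stand-in, not the paper's one-zone model, and the parameter distributions are illustrative:

```python
import random

def yield_model(imf_slope, n_sn_ia, metallicity):
    # Toy stand-in for a one-zone chemical evolution prediction
    # (an abundance-ratio-like quantity vs. metallicity); not the paper's model.
    return (0.4 - 0.3 * metallicity
            - 0.1 * (imf_slope - 2.35)
            - 0.2 * (n_sn_ia - 1.0e-3) / 1.0e-3)

def monte_carlo_bands(n_runs=2000, seed=1):
    rng = random.Random(seed)
    zs = [i / 10.0 for i in range(11)]            # metallicity grid
    runs = []
    for _ in range(n_runs):
        slope = rng.gauss(2.35, 0.2)              # IMF high-mass slope
        n_ia = rng.gauss(1.0e-3, 2.0e-4)          # SNe Ia per solar mass formed
        runs.append([yield_model(slope, n_ia, z) for z in zs])
    bands = []
    for j, z in enumerate(zs):
        col = sorted(r[j] for r in runs)
        med = col[len(col) // 2]
        lo, hi = col[int(0.16 * len(col))], col[int(0.84 * len(col))]  # ~68% band
        bands.append((z, lo, med, hi))
    return bands

bands = monte_carlo_bands()
```

    The width of the (lo, hi) band at each metallicity plays the role of the 0.3–0.6 dex uncertainty envelopes quoted in the abstract.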

  20. A Mudball Model for the Evolution of Carbonaceous Asteroids

    Science.gov (United States)

    Travis, B. J.; Bland, P. A.

    2018-05-01

    We simulate the evolution of carbonaceous chondrite parent bodies from initially unconsolidated aggregations of rock grains and ice crystals. Application of the numerical model MAGHNUM to the evolution of CM-type planetesimals and Ceres is described.

  1. Monte-Carlo simulation of the evolution of point defects in solids under non-equilibrium conditions

    International Nuclear Information System (INIS)

    Maurice, Francoise; Doan, N.V.

    1981-11-01

    This report was written to serve as a guide for courageous users who want to tackle, with the Monte-Carlo technique, the problem of the evolution of point defect populations in a solid under non-equilibrium conditions. The original program, developed by Lanore in her investigations of swelling in solids irradiated by different particles, was generalized to take into account the effects and phenomena related to the presence of solute atoms. Detailed descriptions of the simulation model, computational procedures and formulae used in the calculations are given. Two examples illustrate applications to the swelling phenomenon: first, the effect of temperature or dose-rate changes on void swelling in electron-irradiated copper; second, the influence of solute atoms on void nucleation in electron-irradiated nickel. [fr]
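    A minimal sketch in the same spirit is a Gillespie-type kinetic Monte Carlo for two point-defect populations, with vacancy-interstitial recombination and absorption at sinks; the rate constants and initial populations below are illustrative, not taken from the report:

```python
import random

def defect_kmc(n_v=200, n_i=200, k_rec=1e-4, k_sink=5e-3, t_end=50.0, seed=2):
    """Toy kinetic Monte Carlo for vacancies (V) and interstitials (I):
    pairwise recombination (V + I -> 0) and absorption at sinks.
    Rates are illustrative, not physical values from the report."""
    rng = random.Random(seed)
    t = 0.0
    while t < t_end and (n_v or n_i):
        r_rec = k_rec * n_v * n_i        # recombination propensity
        r_sink = k_sink * (n_v + n_i)    # absorption-at-sinks propensity
        r_tot = r_rec + r_sink
        t += rng.expovariate(r_tot)      # Gillespie time increment
        if rng.random() < r_rec / r_tot:
            n_v -= 1; n_i -= 1           # a V-I pair annihilates
        elif rng.random() < n_v / (n_v + n_i):
            n_v -= 1                     # a vacancy reaches a sink
        else:
            n_i -= 1                     # an interstitial reaches a sink
    return n_v, n_i

n_v, n_i = defect_kmc()
```

    Irradiation-driven defect production and solute effects, as in the report, would enter as additional event channels with their own propensities.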

  2. Galaxy evolution and large-scale structure in the far-infrared. I. IRAS pointed observations

    International Nuclear Information System (INIS)

    Lonsdale, C.J.; Hacking, P.B.

    1989-01-01

    Redshifts for 66 galaxies were obtained from a sample of 93 60-micron sources detected serendipitously in 22 IRAS deep pointed observations, covering a total area of 18.4 sq deg. The flux density limit of this survey is 150 mJy, 4 times fainter than the IRAS Point Source Catalog (PSC). The luminosity function is similar in shape to those previously published for samples selected from the PSC, with a median redshift of 0.048 for the fainter sample, but shifted to higher space densities. There is evidence that some of the excess number counts in the deeper sample can be explained in terms of a large-scale density enhancement beyond the Pavo-Indus supercluster. In addition, the faintest counts in the new sample confirm the result of Hacking et al. (1989) that faint IRAS 60-micron source counts lie significantly in excess of an extrapolation of the PSC counts assuming no luminosity or density evolution. 81 refs

  3. A modified differential evolution approach for dynamic economic dispatch with valve-point effects

    International Nuclear Information System (INIS)

    Yuan Xiaohui; Wang Liang; Yuan Yanbin; Zhang Yongchuan; Cao Bo; Yang Bo

    2008-01-01

    Dynamic economic dispatch (DED) plays an important role in power system operation and is a complicated non-linear constrained optimization problem. It is nonsmooth and nonconvex when generation unit valve-point effects are taken into account. This paper proposes a modified differential evolution (MDE) approach to solve the DED problem with valve-point effects. In the proposed MDE method, feasibility-based selection comparison techniques and heuristic search rules are devised to handle constraints effectively. In contrast to the penalty function method, this constraint-handling method requires no penalty factors or extra parameters and can guide the population to the feasible region quickly; in particular, the equality constraints of the DED problem can be satisfied precisely. Moreover, the effects of two crucial parameters on the performance of the MDE for the DED problem are studied as well. The feasibility and effectiveness of the proposed method are demonstrated on an application example, and the test results are compared with those of other methods reported in the literature. It is shown that the proposed method is capable of yielding higher quality solutions.
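    For illustration, a plain DE/rand/1/bin (not the paper's modified MDE) can be applied to a small static valve-point dispatch problem. The three-unit cost data are illustrative, and the power-balance equality is handled by letting the last unit absorb the residual demand, with a penalty on its limits — a simple repair that differs from the paper's penalty-free feasibility rules:

```python
import math
import random

# Illustrative 3-unit cost data (a, b, c, e, f, Pmin, Pmax) -- not from the paper.
UNITS = [
    (0.00533, 11.669, 213.1, 300.0, 0.0337, 100.0, 500.0),
    (0.00889, 10.333, 200.0, 200.0, 0.0420, 50.0, 300.0),
    (0.00741, 10.833, 240.0, 150.0, 0.0630, 80.0, 450.0),
]
DEMAND = 850.0

def unit_cost(p, a, b, c, e, f, pmin, pmax):
    # quadratic fuel cost plus the nonsmooth valve-point term |e*sin(f*(Pmin - p))|
    return a * p * p + b * p + c + abs(e * math.sin(f * (pmin - p)))

def dispatch_cost(x):
    """Objective over the first n-1 units; the last unit absorbs the
    power-balance residual (a simple repair, unlike the paper's MDE rules)."""
    p_last = DEMAND - sum(x)
    pmin, pmax = UNITS[-1][5], UNITS[-1][6]
    penalty = 1e6 * (max(0.0, pmin - p_last) + max(0.0, p_last - pmax))
    total = unit_cost(p_last, *UNITS[-1]) + penalty
    for p, u in zip(x, UNITS[:-1]):
        total += unit_cost(p, *u)
    return total

def differential_evolution(obj, bounds, pop_size=30, f_w=0.7, cr=0.9,
                           generations=300, seed=3):
    """Plain DE/rand/1/bin with bound clipping."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [obj(ind) for ind in pop]
    for _ in range(generations):
        for i in range(pop_size):
            r1, r2, r3 = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)
            trial = list(pop[i])
            for j, (lo, hi) in enumerate(bounds):
                if j == j_rand or rng.random() < cr:
                    v = pop[r1][j] + f_w * (pop[r2][j] - pop[r3][j])
                    trial[j] = min(max(v, lo), hi)
            f_trial = obj(trial)
            if f_trial <= fit[i]:          # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

bounds = [(u[5], u[6]) for u in UNITS[:-1]]
best_x, best_f = differential_evolution(dispatch_cost, bounds)
p_last = DEMAND - sum(best_x)
```

    The dynamic problem adds ramp-rate coupling between consecutive time intervals, which the same DE machinery handles by optimizing the whole schedule at once.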

  4. Genealogies in simple models of evolution

    International Nuclear Information System (INIS)

    Brunet, Éric; Derrida, Bernard

    2013-01-01

    We review the statistical properties of the genealogies of a few models of evolution. In the asexual case, selection leads to coalescence times which grow logarithmically with the size of the population, in contrast with the linear growth of the neutral case. Moreover for a whole class of models, the statistics of the genealogies are those of the Bolthausen–Sznitman coalescent rather than the Kingman coalescent in the neutral case. For sexual reproduction in the neutral case, the time to reach the first common ancestors for the whole population and the time for all individuals to have all their ancestors in common are also logarithmic in the population size, as predicted by Chang in 1999. We discuss how these times are modified by introducing selection in a simple way. (paper)

  5. Evolution model with a cumulative feedback coupling

    Science.gov (United States)

    Trimper, Steffen; Zabrocki, Knud; Schulz, Michael

    2002-05-01

    The paper is concerned with a toy model that generalizes the standard Lotka-Volterra equation for a certain population by introducing a competition between an instantaneous and an accumulative, history-dependent nonlinear feedback, the origin of which could be a contribution from any kind of mismanagement in the past. The results depend on the sign of the additional cumulative loss or gain term of strength λ. In the case of a positive coupling the system offers a maximum gain achieved after a finite time, but the population dies out in the long time limit. In this case the instantaneous loss term of strength u is irrelevant and the model exhibits an exact solution. In the opposite case λ<0 the time evolution of the system is terminated in a crash after a finite time t_s provided u=0. This singularity after a finite time can be avoided if u≠0. The approach may well be of relevance for the qualitative understanding of more realistic descriptions.
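    One concrete reading of such a model writes the accumulated population Z(t) = ∫₀ᵗ N(s) ds and evolves dN/dt = N(σ − uN − λZ); for λ > 0 this reproduces the described behaviour of a finite-time maximum followed by extinction. The specific form and all parameter values below are illustrative assumptions, not the paper's exact equations:

```python
def evolve(sigma=1.0, u=0.1, lam=0.05, n0=1.0, dt=1e-3, t_max=30.0):
    """Euler integration of dN/dt = N*(sigma - u*N - lam*Z), dZ/dt = N,
    where Z(t) is the accumulated population (the history-dependent
    feedback). Parameter values are illustrative."""
    n, z, t = n0, 0.0, 0.0
    traj = [(t, n)]
    while t < t_max:
        dn = n * (sigma - u * n - lam * z)
        z += n * dt
        n += dn * dt
        t += dt
        traj.append((t, n))
    return traj

traj = evolve()
peak_t, peak_n = max(traj, key=lambda p: p[1])   # the finite-time maximum gain
```

    Since Z is non-decreasing while N > 0, the cumulative term λZ eventually dominates and drives N to zero, which is the extinction the abstract describes for positive coupling.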

  6. IMAGE TO POINT CLOUD METHOD OF 3D-MODELING

    Directory of Open Access Journals (Sweden)

    A. G. Chibunichev

    2012-07-01

    Full Text Available This article describes a method of constructing 3D models of objects (buildings, monuments) based on digital images and a point cloud obtained by a terrestrial laser scanner. The first step is the automated determination of the exterior orientation parameters of a digital image, which requires finding corresponding points between the image and the point cloud. Before the search for corresponding points, a quasi-image of the point cloud is generated. The SIFT algorithm is then applied to the quasi-image and the real image to find corresponding points, from which the exterior orientation parameters of the image are calculated. The second step is construction of the vector object model. Vectorization is performed by a PC operator in an interactive mode using a single image; spatial coordinates of the model are calculated automatically from the cloud points. In addition, automatic edge detection with interactive editing is available: edge detection is performed on the point cloud and on the image, with subsequent identification of the correct edges. Experimental studies of the method have demonstrated its efficiency for building facade modeling.

  7. Identification of Influential Points in a Linear Regression Model

    Directory of Open Access Journals (Sweden)

    Jan Grosz

    2011-03-01

    Full Text Available The article deals with the detection and identification of influential points in the linear regression model. Three methods for the detection of outliers and leverage points are described. These procedures can also be used for one-sample (independent) datasets. The paper also briefly describes theoretical aspects of several robust methods. Robust statistics is a powerful tool for increasing the reliability and accuracy of statistical modelling and data analysis. A simulation model of simple linear regression is presented.
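    Two standard diagnostics for the influential points discussed — leverage and Cook's distance — can be computed directly for simple linear regression. The dataset below is synthetic, with one deliberately influential point at the end:

```python
import statistics

def influence_measures(x, y):
    """Leverage h_ii and Cook's distance for simple linear regression (p = 2)."""
    n = len(x)
    xbar, ybar = statistics.fmean(x), statistics.fmean(y)
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    b0 = ybar - b1 * xbar
    resid = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
    s2 = sum(e * e for e in resid) / (n - 2)             # residual variance
    lev = [1.0 / n + (xi - xbar) ** 2 / sxx for xi in x]  # hat-matrix diagonal
    cooks = [e * e / (2 * s2) * h / (1 - h) ** 2 for e, h in zip(resid, lev)]
    return lev, cooks

# synthetic data: nine well-behaved points plus one outlying leverage point
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 20]
y = [1.1, 1.9, 3.2, 3.9, 5.1, 6.0, 7.1, 7.9, 9.0, 30.0]
lev, cooks = influence_measures(x, y)
```

    The leverages always sum to the number of regression parameters (here 2), a handy check on the computation.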

  8. Six-vertex model and Schramm-Loewner evolution

    Science.gov (United States)

    Kenyon, Richard; Miller, Jason; Sheffield, Scott; Wilson, David B.

    2017-05-01

    Square ice is a statistical mechanics model for two-dimensional ice, widely believed to have a conformally invariant scaling limit. We associate a Peano (space-filling) curve to a square ice configuration, and more generally to a so-called six-vertex model configuration, and argue that its scaling limit is a space-filling version of the random fractal curve SLE_κ, Schramm-Loewner evolution with parameter κ, where 4 < κ ≤ 12 + 8√2. For square ice, κ = 12. At the "free-fermion point" of the six-vertex model, κ = 8 + 4√3. These unusual values lie outside the classical interval 2 ≤ κ ≤ 8.

  9. Four point functions in the SL(2,R) WZW model

    Energy Technology Data Exchange (ETDEWEB)

    Minces, Pablo [Instituto de Astronomia y Fisica del Espacio (IAFE), C.C. 67 Suc. 28, 1428 Buenos Aires (Argentina)]. E-mail: minces@iafe.uba.ar; Nunez, Carmen [Instituto de Astronomia y Fisica del Espacio (IAFE), C.C. 67 Suc. 28, 1428 Buenos Aires (Argentina) and Physics Department, University of Buenos Aires, Ciudad Universitaria, Pab. I, 1428 Buenos Aires (Argentina)]. E-mail: carmen@iafe.uba.ar

    2007-04-19

    We consider winding conserving four point functions in the SL(2,R) WZW model for states in arbitrary spectral flow sectors. We compute the leading order contribution to the expansion of the amplitudes in powers of the cross ratio of the four points on the worldsheet, both in the m- and x-basis, with at least one state in the spectral flow image of the highest weight discrete representation. We also perform certain consistency checks on the winding conserving three point functions.

  10. Four point functions in the SL(2,R) WZW model

    International Nuclear Information System (INIS)

    Minces, Pablo; Nunez, Carmen

    2007-01-01

    We consider winding conserving four point functions in the SL(2,R) WZW model for states in arbitrary spectral flow sectors. We compute the leading order contribution to the expansion of the amplitudes in powers of the cross ratio of the four points on the worldsheet, both in the m- and x-basis, with at least one state in the spectral flow image of the highest weight discrete representation. We also perform certain consistency checks on the winding conserving three point functions.

  11. A two-point kinetic model for the PROTEUS reactor

    International Nuclear Information System (INIS)

    Dam, H. van.

    1995-03-01

    A two-point reactor kinetic model for the PROTEUS reactor is developed and the results are described in terms of frequency-dependent reactivity transfer functions for the core and the reflector. It is shown that at higher frequencies space-dependent effects occur which imply failure of the one-point kinetic model. In the modulus of the transfer functions these effects become apparent above a radian frequency of about 100 s⁻¹, whereas in the phase behaviour the deviation from a point model already starts at a radian frequency of 10 s⁻¹. (orig.)

  12. A MARKED POINT PROCESS MODEL FOR VEHICLE DETECTION IN AERIAL LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    A. Börcs

    2012-07-01

    Full Text Available In this paper we present an automated method for vehicle detection in LiDAR point clouds of crowded urban areas collected from an aerial platform. We assume that the input cloud is unordered, but it contains additional intensity and return number information which are jointly exploited by the proposed solution. Firstly, the 3-D point set is segmented into ground, vehicle, building roof, vegetation and clutter classes. Then the points with the corresponding class labels and intensity values are projected to the ground plane, where the optimal vehicle configuration is described by a Marked Point Process (MPP model of 2-D rectangles. Finally, the Multiple Birth and Death algorithm is utilized to find the configuration with the highest confidence.

  13. Phenotypic heterogeneity in modeling cancer evolution.

    Directory of Open Access Journals (Sweden)

    Ali Mahdipour-Shirayeh

    Full Text Available The unwelcome evolution of malignancy during cancer progression emerges through a selection process in a complex heterogeneous population structure. In the present work, we investigate evolutionary dynamics in a phenotypically heterogeneous population of stem cells (SCs) and their associated progenitors. The fate of a malignant mutation is determined not only by the overall stem cell and non-stem cell growth rates but also by the differentiation and dedifferentiation rates. We investigate the effect of such a complex population structure on the evolution of malignant mutations and derive exact results for the fixation probability of a mutant arising in each of the subpopulations; these results are in almost perfect agreement with numerical simulations. Moreover, a condition for the evolutionary advantage of a mutant cell versus the wild-type population is given. We also show that microenvironment-induced plasticity in invading mutants leads to more aggressive mutants with higher fixation probability. Our model predicts that decreasing polarity between stem and non-stem cells' turnover would raise the survivability of non-plastic mutants, while it would suppress the development of malignancy for plastic mutants. The derived results are novel and general, with potential applications in nature; we discuss our model in the context of colorectal/intestinal cancer (at the epithelium), although the model clearly needs to be validated through appropriate experimental data. This mathematical framework can be applied more generally to a variety of problems concerning selection in heterogeneous populations, in other contexts such as population genetics and ecology.
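    The classical baseline for such fixation probabilities, before differentiation and plasticity are added, is the Moran result for a single mutant of relative fitness r in a well-mixed population of size N; the structured model above generalizes quantities of this kind (this formula is standard theory, not the paper's derived result):

```python
def moran_fixation(r, n):
    """Fixation probability of a single mutant with relative fitness r
    in a well-mixed Moran population of size n (classical result)."""
    if r == 1.0:
        return 1.0 / n                      # neutral mutant
    return (1.0 - 1.0 / r) / (1.0 - 1.0 / r ** n)

p_adv = moran_fixation(1.1, 100)   # advantageous mutant: well above 1/N
p_del = moran_fixation(0.9, 100)   # deleterious mutant: far below 1/N
```

    In the structured setting of the abstract, the effective r and N are replaced by compartment-specific rates, which is why the subpopulation of origin matters.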

  14. Design, Results, Evolution and Status of the ATLAS Simulation at Point1 Project

    Science.gov (United States)

    Ballestrero, S.; Batraneanu, S. M.; Brasolin, F.; Contescu, C.; Fazio, D.; Di Girolamo, A.; Lee, C. J.; Pozo Astigarraga, M. E.; Scannicchio, D. A.; Sedov, A.; Twomey, M. S.; Wang, F.; Zaytsev, A.

    2015-12-01

    During the LHC Long Shutdown 1 (LS1) period, which started in 2013, the Simulation at Point1 (Sim@P1) project takes advantage, in an opportunistic way, of the TDAQ (Trigger and Data Acquisition) HLT (High-Level Trigger) farm of the ATLAS experiment. This farm provides more than 1300 compute nodes, which are particularly suited for running event generation and Monte Carlo production jobs that are mostly CPU and not I/O bound. It is capable of running up to 2700 Virtual Machines (VMs), each with 8 CPU cores, for a total of up to 22000 parallel jobs. This contribution reviews the design, the results, and the evolution of the Sim@P1 project, which operates a large-scale OpenStack-based virtualized platform deployed on top of the ATLAS TDAQ HLT farm computing resources. During LS1, Sim@P1 was one of the most productive ATLAS sites: it delivered more than 33 million CPU-hours and generated more than 1.1 billion Monte Carlo events. The design aspects are presented: the virtualization platform exploited by Sim@P1 avoids interference with TDAQ operations and guarantees the security and usability of the ATLAS private network. The cloud mechanism allows the separation of the needed support on both the infrastructural (hardware, virtualization layer) and logical (Grid site support) levels. This paper focuses on the operational aspects of such a large system during the upcoming LHC Run 2 period: simple, reliable, and efficient tools are needed to quickly switch from Sim@P1 to TDAQ mode and back, so as to exploit the resources when they are not used for data acquisition, even for short periods. The evolution of the central OpenStack infrastructure, upgraded from the Folsom to the Icehouse release, is described, including the scalability issues addressed.

  15. TUNNEL POINT CLOUD FILTERING METHOD BASED ON ELLIPTIC CYLINDRICAL MODEL

    Directory of Open Access Journals (Sweden)

    N. Zhu

    2016-06-01

    Full Text Available The large number of bolts and screws attached to the subway shield ring plates, along with the many metal stents and electrical equipment accessories mounted on the tunnel walls, mean that the laser point cloud includes many non-tunnel-section points (hereinafter referred to as non-points), affecting the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud is first projected onto a horizontal plane, and a searching algorithm extracts the edge points of both sides, which are then used to fit the tunnel central axis. Along the axis the point cloud is segmented into regions and then fitted iteratively as a smooth elliptic cylindrical surface. This processing enables the automatic filtering of the inner-wall non-points. Two groups of experiments showed consistent results: the elliptic cylindrical model based method effectively filters out the non-points and meets the accuracy requirements for subway deformation monitoring. The method provides a new mode for the periodic monitoring of all-around tunnel section deformation in routine subway operation and maintenance.
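    A much-simplified version of the section-wise filtering can be sketched by fitting an axis-aligned ellipse a·x² + b·y² = 1 to each cross-section by least squares and rejecting points with a large algebraic residual. The real method fits a full elliptic cylinder along the axis iteratively; the centred, axis-aligned form and the synthetic data below are simplifying assumptions:

```python
import math

def fit_axis_aligned_ellipse(pts):
    """Least-squares fit of a*x^2 + b*y^2 = 1 to a tunnel cross-section.
    Assumes a known centre and axis-aligned ellipse -- a simplification
    of the paper's elliptic cylindrical model."""
    s4x = sum(x ** 4 for x, _ in pts)
    s4y = sum(y ** 4 for _, y in pts)
    sxy = sum((x * y) ** 2 for x, y in pts)
    s2x = sum(x * x for x, _ in pts)
    s2y = sum(y * y for _, y in pts)
    det = s4x * s4y - sxy * sxy          # 2x2 normal-equation determinant
    a = (s2x * s4y - s2y * sxy) / det
    b = (s2y * s4x - s2x * sxy) / det
    return a, b

def filter_section(pts, tol=0.05):
    # keep only points whose algebraic residual is small (wall points);
    # bolts, stents and cables lie off the ellipse and are rejected
    a, b = fit_axis_aligned_ellipse(pts)
    return [(x, y) for x, y in pts if abs(a * x * x + b * y * y - 1.0) <= tol]

# synthetic cross-section: 100 wall points plus one "bolt" inside the tunnel
wall = [(3.0 * math.cos(2 * math.pi * k / 100), 2.0 * math.sin(2 * math.pi * k / 100))
        for k in range(100)]
section = wall + [(0.5, 0.5)]
kept = filter_section(section)
```

    Iterating the fit on the kept points, as the paper does, makes the estimate progressively less sensitive to the non-points.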

  16. Statistical properties of several models of fractional random point processes

    Science.gov (United States)

    Bendjaballah, C.

    2011-08-01

    Statistical properties of several models of fractional random point processes are analyzed from the points of view of counting and time-interval statistics. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions under which these processes can be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.
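    The reduced-variance criterion can be made concrete with a short sketch: the Fano factor F = Var(N)/E(N) equals 1 for Poisson counting statistics, so F < 1 signals the nonclassical, sub-Poissonian regime. The Poisson simulation below is only a reference baseline, not one of the paper's fractional processes.

```python
import numpy as np

def fano_factor(counts):
    """Reduced variance F = Var(N) / E(N) of a counting record;
    F = 1 for a Poisson process, F < 1 is sub-Poissonian (nonclassical)."""
    counts = np.asarray(counts, dtype=float)
    return counts.var() / counts.mean()

# Poisson counts serve as the classical reference point
rng = np.random.default_rng(42)
poisson_counts = rng.poisson(lam=10.0, size=100_000)
```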

  17. Automata network models of galaxy evolution

    Science.gov (United States)

    Chappell, David; Scalo, John

    1993-01-01

    Two ideas appear frequently in theories of star formation and galaxy evolution: (1) star formation is nonlocally excitatory, stimulating star formation in neighboring regions through the propagation of a dense fragmenting shell or the compression of preexisting clouds; and (2) star formation is nonlocally inhibitory, creating H II regions and explosions that produce low-density and/or high-temperature regions and increase the macroscopic velocity dispersion of the cloudy gas. Since it is not possible, given the present state of hydrodynamic modeling, to estimate whether one of these effects greatly dominates the other, it is of interest to investigate the predicted spatial pattern of star formation and its temporal behavior in simple models that incorporate both effects in a controlled manner. The present work presents preliminary results of such a study, based on lattice galaxy models with various types of nonlocal inhibitory and excitatory couplings of the local star formation rate (SFR) to the gas density, temperature, and velocity field, meant to model a number of theoretical suggestions.
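    A toy version of such a lattice model fits in a few lines; the update rule below couples a cell's star-formation probability to active neighbours (excitation) and to a local gas reservoir that each event depletes (inhibition). All parameter values are illustrative, not the paper's.

```python
import numpy as np

def step(active, gas, rng, p_spont=0.001, p_stim=0.15,
         depletion=0.8, replenish=0.05):
    """One update of a toy lattice model: star formation is stimulated by
    active neighbouring cells (propagation) and locally inhibited through
    gas depletion, a crude stand-in for heating and cloud disruption."""
    # count active neighbours in the 4-cell von Neumann neighbourhood
    # (periodic boundary conditions via np.roll)
    nbrs = sum(np.roll(active, s, axis=a) for s in (-1, 1) for a in (0, 1))
    p = np.clip(p_spont + p_stim * nbrs, 0.0, 1.0)
    new_active = rng.random(active.shape) < p * gas
    gas = np.where(new_active, gas * (1.0 - depletion),
                   np.minimum(gas + replenish, 1.0))
    return new_active.astype(int), gas
```

    Iterating `step` produces propagating star-formation fronts that stall in gas-depleted regions, the kind of qualitative competition between the two couplings described above.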

  18. The Impact of Modeling Assumptions in Galactic Chemical Evolution Models

    Science.gov (United States)

    Côté, Benoit; O'Shea, Brian W.; Ritter, Christian; Herwig, Falk; Venn, Kim A.

    2017-02-01

    We use the OMEGA galactic chemical evolution code to investigate how the assumptions used for the treatment of galactic inflows and outflows impact numerical predictions. The goal is to determine how our capacity to reproduce the chemical evolution trends of a galaxy is affected by the choice of implementation used to include those physical processes. In pursuit of this goal, we experiment with three different prescriptions for galactic inflows and outflows and use OMEGA within a Markov Chain Monte Carlo code to recover the set of input parameters that best reproduces the chemical evolution of nine elements in the dwarf spheroidal galaxy Sculptor. This provides a consistent framework for comparing the best-fit solutions generated by our different models. Despite their different degrees of intended physical realism, we find that all three prescriptions can reproduce in an almost identical way the stellar abundance trends observed in Sculptor. This result supports the similar conclusions originally claimed by Romano & Starkenburg for Sculptor. While the three models have the same capacity to fit the data, the best values recovered for the parameters controlling the number of SNe Ia and the strength of galactic outflows are substantially different and in fact mutually exclusive from one model to another. For the purpose of understanding how a galaxy evolves, we conclude that reproducing the evolution of only a limited number of elements is insufficient and can lead to misleading conclusions. More elements or additional constraints, such as the galaxy's star-formation efficiency and gas fraction, are needed in order to break the degeneracy between the different modeling assumptions. Our results show that the successes and failures of chemical evolution models are predominantly driven by the input stellar yields, rather than by the complexity of the galaxy model itself. Simple models such as OMEGA are therefore sufficient to test and validate stellar yields.
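    The parameter-recovery loop at the core of this approach can be sketched with a toy example: a random-walk Metropolis sampler fitting the single slope parameter of a deliberately simple linear "abundance trend". All names and numbers are illustrative; this is not OMEGA's interface.

```python
import numpy as np

def metropolis(loglike, x0, n_steps, step_size, rng):
    """Random-walk Metropolis sampler, the workhorse inside MCMC
    parameter recovery. Returns the chain of sampled parameter values."""
    chain = [x0]
    ll = loglike(x0)
    for _ in range(n_steps):
        prop = chain[-1] + rng.normal(0.0, step_size)
        ll_prop = loglike(prop)
        if np.log(rng.random()) < ll_prop - ll:  # accept/reject
            chain.append(prop)
            ll = ll_prop
        else:
            chain.append(chain[-1])
    return np.array(chain)

# toy "abundance trend": y = slope * x, observed with Gaussian noise
rng = np.random.default_rng(7)
x = np.linspace(0.0, 1.0, 50)
y = 2.5 * x + rng.normal(0.0, 0.2, x.size)
loglike = lambda a: -0.5 * np.sum((y - a * x) ** 2) / 0.2 ** 2
chain = metropolis(loglike, 0.0, 3000, 0.05, rng)
```

    After burn-in, the chain samples the posterior of the slope; the paper's point is that several physically different models can each produce a chain that fits the data equally well, while centering on mutually exclusive parameter values.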

  19. A 'Turing' Test for Landscape Evolution Models

    Science.gov (United States)

    Parsons, A. J.; Wise, S. M.; Wainwright, J.; Swift, D. A.

    2008-12-01

    Resolving the interactions among tectonics, climate and surface processes at long timescales has benefited from the development of computer models of landscape evolution. However, testing these Landscape Evolution Models (LEMs) has been piecemeal and partial. We argue that a more systematic approach is required. What is needed is a test that will establish how 'realistic' an LEM is and thus the extent to which its predictions may be trusted. We propose a test based upon the Turing Test of artificial intelligence as a way forward. In 1950 Alan Turing posed the question of whether a machine could think. Rather than attempt to address the question directly, he proposed a test in which an interrogator asked questions of a person and a machine, with no means of telling which was which. If the machine's answers could not be distinguished from those of the human, the machine could be said to demonstrate artificial intelligence. By analogy, if an LEM cannot be distinguished from a real landscape it can be deemed to be realistic. The Turing test of intelligence is a test of the way in which a computer behaves. The analogy in the case of an LEM is that it should show realistic behaviour in terms of form and process, both at a given moment in time (punctual) and in the way both form and process evolve over time (dynamic). For some of these behaviours, tests already exist. For example, there are numerous morphometric tests of punctual form and measurements of punctual process. The test discussed in this paper provides new ways of assessing the dynamic behaviour of an LEM over realistically long timescales. However, challenges remain in developing an appropriate suite of challenging tests, in applying these tests to current LEMs and in developing LEMs that pass them.

  20. Accurate modeling and maximum power point detection of ...

    African Journals Online (AJOL)

    Accurate modeling and maximum power point detection of photovoltaic ... Determination of MPP enables the PV system to deliver maximum available power. ..... adaptive artificial neural network: Proposition for a new sizing procedure.

  1. Two point function for a simple general relativistic quantum model

    OpenAIRE

    Colosi, Daniele

    2007-01-01

    We study the quantum theory of a simple general relativistic quantum model of two coupled harmonic oscillators and compute the two-point function following a proposal first introduced in the context of loop quantum gravity.

  2. THE DATA REDUCTION PIPELINE FOR THE APACHE POINT OBSERVATORY GALACTIC EVOLUTION EXPERIMENT

    International Nuclear Information System (INIS)

    Nidever, David L.; Holtzman, Jon A.; Prieto, Carlos Allende; Mészáros, Szabolcs; Beland, Stephane; Bender, Chad; Desphande, Rohit; Bizyaev, Dmitry; Burton, Adam; García Pérez, Ana E.; Hearty, Fred R.; Majewski, Steven R.; Skrutskie, Michael F.; Sobeck, Jennifer S.; Wilson, John C.; Fleming, Scott W.; Muna, Demitri; Nguyen, Duy; Schiavon, Ricardo P.; Shetrone, Matthew

    2015-01-01

    The Apache Point Observatory Galactic Evolution Experiment (APOGEE), part of the Sloan Digital Sky Survey III, explores the stellar populations of the Milky Way using the Sloan 2.5-m telescope linked to a high resolution (R ∼ 22,500), near-infrared (1.51–1.70 μm) spectrograph with 300 optical fibers. For over 150,000 predominantly red giant branch stars that APOGEE targeted across the Galactic bulge, disks and halo, the collected high signal-to-noise ratio (>100 per half-resolution element) spectra provide accurate (∼0.1 km s⁻¹) RVs, stellar atmospheric parameters, and precise (≲0.1 dex) chemical abundances for about 15 chemical species. Here we describe the basic APOGEE data reduction software that reduces multiple 3D raw data cubes into calibrated, well-sampled, combined 1D spectra, as implemented for the SDSS-III/APOGEE data releases (DR10, DR11 and DR12). The processing of the near-IR spectral data of APOGEE presents some challenges for reduction, including automated sky subtraction and telluric correction over a 3°-diameter field and the combination of spectrally dithered spectra. We also discuss areas for future improvement.

  3. Design, Results, Evolution and Status of the ATLAS Simulation at Point1 Project

    CERN Document Server

    AUTHOR|(SzGeCERN)377840; Fressard-Batraneanu, Silvia Maria; Ballestrero, Sergio; Contescu, Alexandru Cristian; Fazio, Daniel; Di Girolamo, Alessandro; Lee, Christopher Jon; Pozo Astigarraga, Mikel Eukeni; Scannicchio, Diana; Sedov, Alexey; Twomey, Matthew Shaun; Wang, Fuquan; Zaytsev, Alexander

    2015-01-01

    During the LHC Long Shutdown 1 period (LS1), which started in 2013, the Simulation at Point1 (Sim@P1) Project takes advantage, in an opportunistic way, of the TDAQ (Trigger and Data Acquisition) HLT (High Level Trigger) farm of the ATLAS experiment. This farm provides more than 1300 compute nodes, which are particularly suited for running event generation and Monte Carlo production jobs that are mostly CPU and not I/O bound. It is capable of running up to 2700 virtual machines (VMs) provided with 8 CPU cores each, for a total of up to 22000 parallel running jobs. This contribution gives a review of the design, the results, and the evolution of the Sim@P1 Project, operating a large scale OpenStack based virtualized platform deployed on top of the ATLAS TDAQ HLT farm computing resources. During LS1, Sim@P1 was one of the most productive ATLAS sites: it delivered more than 50 million CPU-hours and it generated more than 1.7 billion Monte Carlo events to various analysis communities. The design aspects a...

  4. Design, Results, Evolution and Status of the ATLAS simulation in Point1 project.

    CERN Document Server

    Ballestrero, Sergio; The ATLAS collaboration; Brasolin, Franco; Contescu, Alexandru Cristian; Fazio, Daniel; Di Girolamo, Alessandro; Lee, Christopher Jon; Pozo Astigarraga, Mikel Eukeni; Scannicchio, Diana; Sedov, Alexey; Twomey, Matthew Shaun; Wang, Fuquan; Zaytsev, Alexander

    2015-01-01

    During the LHC Long Shutdown 1 period (LS1), which started in 2013, the Simulation in Point1 (Sim@P1) project takes advantage, in an opportunistic way, of the trigger and data acquisition (TDAQ) farm of the ATLAS experiment. The farm provides more than 1500 compute nodes, which are particularly suitable for running event generation and Monte Carlo production jobs that are mostly CPU and not I/O bound. It is capable of running up to 2500 virtual machines (VMs) provided with 8 CPU cores each, for a total of up to 20000 parallel running jobs. This contribution gives a thorough review of the design, the results and the evolution of the Sim@P1 project, operating a large scale OpenStack based virtualized platform deployed on top of the ATLAS TDAQ farm computing resources. During LS1, Sim@P1 was one of the most productive GRID sites: it delivered more than 50 million CPU-hours and it generated more than 1.7 billion Monte Carlo events for various analysis communities within the ATLAS collaboration. The particular design ...

  5. THE DATA REDUCTION PIPELINE FOR THE APACHE POINT OBSERVATORY GALACTIC EVOLUTION EXPERIMENT

    Energy Technology Data Exchange (ETDEWEB)

    Nidever, David L. [Department of Astronomy, University of Michigan, Ann Arbor, MI 48109 (United States); Holtzman, Jon A. [New Mexico State University, Las Cruces, NM 88003 (United States); Prieto, Carlos Allende; Mészáros, Szabolcs [Instituto de Astrofísica de Canarias, Via Láctea s/n, E-38205 La Laguna, Tenerife (Spain); Beland, Stephane [Laboratory for Atmospheric and Space Sciences, University of Colorado at Boulder, Boulder, CO (United States); Bender, Chad; Desphande, Rohit [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States); Bizyaev, Dmitry [Apache Point Observatory and New Mexico State University, P.O. Box 59, sunspot, NM 88349-0059 (United States); Burton, Adam; García Pérez, Ana E.; Hearty, Fred R.; Majewski, Steven R.; Skrutskie, Michael F.; Sobeck, Jennifer S.; Wilson, John C. [Department of Astronomy, University of Virginia, Charlottesville, VA 22904-4325 (United States); Fleming, Scott W. [Computer Sciences Corporation, 3700 San Martin Dr, Baltimore, MD 21218 (United States); Muna, Demitri [Department of Astronomy and the Center for Cosmology and Astro-Particle Physics, The Ohio State University, Columbus, OH 43210 (United States); Nguyen, Duy [Department of Astronomy and Astrophysics, University of Toronto, Toronto, Ontario, M5S 3H4 (Canada); Schiavon, Ricardo P. [Gemini Observatory, 670 N. A’Ohoku Place, Hilo, HI 96720 (United States); Shetrone, Matthew, E-mail: dnidever@umich.edu [University of Texas at Austin, McDonald Observatory, Fort Davis, TX 79734 (United States)

    2015-12-15

    The Apache Point Observatory Galactic Evolution Experiment (APOGEE), part of the Sloan Digital Sky Survey III, explores the stellar populations of the Milky Way using the Sloan 2.5-m telescope linked to a high resolution (R ∼ 22,500), near-infrared (1.51–1.70 μm) spectrograph with 300 optical fibers. For over 150,000 predominantly red giant branch stars that APOGEE targeted across the Galactic bulge, disks and halo, the collected high signal-to-noise ratio (>100 per half-resolution element) spectra provide accurate (∼0.1 km s⁻¹) RVs, stellar atmospheric parameters, and precise (≲0.1 dex) chemical abundances for about 15 chemical species. Here we describe the basic APOGEE data reduction software that reduces multiple 3D raw data cubes into calibrated, well-sampled, combined 1D spectra, as implemented for the SDSS-III/APOGEE data releases (DR10, DR11 and DR12). The processing of the near-IR spectral data of APOGEE presents some challenges for reduction, including automated sky subtraction and telluric correction over a 3°-diameter field and the combination of spectrally dithered spectra. We also discuss areas for future improvement.

  6. Random unitary evolution model of quantum Darwinism with pure decoherence

    Science.gov (United States)

    Balanesković, Nenad

    2015-10-01

    We study the behavior of Quantum Darwinism [W.H. Zurek, Nat. Phys. 5, 181 (2009)] within the iterative, random unitary operations qubit model of pure decoherence [J. Novotný, G. Alber, I. Jex, New J. Phys. 13, 053052 (2011)]. We conclude that Quantum Darwinism, which describes the quantum mechanical evolution of an open system S from the point of view of its environment E, is not a generic phenomenon, but depends on the specific form of the input states and on the type of S-E interactions. Furthermore, we show that within the random unitary model the concept of Quantum Darwinism enables one to explicitly construct and specify artificial input states of the environment E that allow one to store information about an open system S of interest with maximal efficiency.

  7. Genetic Models in Evolutionary Game Theory: The Evolution of Altruism

    NARCIS (Netherlands)

    Rubin, Hannah

    2015-01-01

    While prior models of the evolution of altruism have assumed that organisms reproduce asexually, this paper presents a model of the evolution of altruism for sexually reproducing organisms using Hardy–Weinberg dynamics. In this model, the presence of reciprocal altruists allows the population to
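    The Hardy–Weinberg bookkeeping behind such a model can be sketched in one function: genotype frequencies p², 2pq, q² are weighted by genotype fitnesses to give the next generation's allele frequency. The fitness values in the example are placeholders, not the paper's altruism payoffs.

```python
def next_allele_freq(p, w_AA, w_Aa, w_aa):
    """One generation of viability selection under Hardy-Weinberg
    random mating: genotype frequencies p^2, 2pq, q^2 are weighted
    by their fitnesses; returns the new frequency of allele A."""
    q = 1.0 - p
    w_bar = p * p * w_AA + 2 * p * q * w_Aa + q * q * w_aa  # mean fitness
    return (p * p * w_AA + p * q * w_Aa) / w_bar
```

    Iterating this map shows whether an allele (here, a stand-in for an altruism-related trait) spreads or is eliminated under the chosen fitness scheme.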

  8. Evolution of offshore wind waves tracked by surface drifters with a point-positioning GPS sensor

    Science.gov (United States)

    Komatsu, K.

    2009-12-01

    Wind-generated waves have been recognized as one of the most important factors in sea surface roughness, which plays a crucial role in various air-sea interactions such as energy, momentum, heat and gas exchanges. At the same time, wind waves with extreme wave heights, commonly called freak or rogue waves, have been a matter of great concern for many people involved in shipping, fishing, construction, surfing and other marine activities, because such extreme waves frequently affect marine activities and sometimes cause serious disasters. Nevertheless, investigations of the actual conditions for the evolution of wind waves in the offshore region are few and sparse, in contrast to the dense monitoring networks in coastal regions, because of the difficulty of accurate offshore observation. Recently, accurate in situ observation of offshore wind waves has become possible at low cost owing to a wave height and direction sensor developed by Harigae et al. (2004) by installing a point-positioning GPS receiver on a surface drifting buoy. The point-positioning GPS sensor can extract the three-dimensional movements of the buoy excited by ocean waves while minimizing the effects of GPS point-positioning errors through the use of a high-pass filter. Two drifting buoys equipped with the GPS-based wave sensor, charged by solar cells, were deployed in the western North Pacific, and one of them continued to observe wind waves for 16 months from Sep. 2007. The RMSE of the GPS-based wave sensor was less than 10 cm in significant wave height and about 1 s in significant wave period in comparison with other sensors, i.e. accelerometers installed on drifting buoys of the Japan Meteorological Agency, ultrasonic sensors placed at the Hiratsuka observation station of the University of Tokyo and the altimeter of JASON-1. The GPS-based wave buoys enabled us to detect freak waves, defined as waves whose height is more than twice the significant wave height.
The observation conducted by the
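    The drift-removal idea behind the sensor can be sketched as a crude frequency-domain high-pass: positioning errors drift slowly, while wind waves occupy a higher frequency band. The 0.05 Hz cutoff and the Hs = 4·σ estimate are common conventions, but the implementation below is illustrative, not Harigae et al.'s filter.

```python
import numpy as np

def significant_wave_height(z, fs, cutoff=0.05):
    """Estimate significant wave height Hs from a buoy's vertical
    position record sampled at fs Hz: suppress low-frequency GPS
    positioning drift with an FFT high-pass (cutoff in Hz), then
    use the standard estimate Hs = 4 * std of the residual."""
    Z = np.fft.rfft(z - z.mean())
    freqs = np.fft.rfftfreq(len(z), d=1.0 / fs)
    Z[freqs < cutoff] = 0.0          # remove drift below the wave band
    eta = np.fft.irfft(Z, n=len(z))  # surface elevation estimate
    return 4.0 * eta.std()
```

    A real-time implementation would use a causal IIR high-pass instead of an FFT over the whole record.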

  9. Accuracy limit of rigid 3-point water models

    Science.gov (United States)

    Izadi, Saeed; Onufriev, Alexey V.

    2016-08-01

    Classical 3-point rigid water models are the most widely used due to their computational efficiency. Recently, we introduced a new approach to constructing classical rigid water models [S. Izadi et al., J. Phys. Chem. Lett. 5, 3863 (2014)], which permits a virtually exhaustive search for globally optimal model parameters in the sub-space that is most relevant to the electrostatic properties of the water molecule in the liquid phase. Here we apply the approach to develop a 3-point Optimal Point Charge (OPC3) water model. OPC3 is significantly more accurate than the commonly used water models of the same class (TIP3P and SPC/E) in reproducing a comprehensive set of liquid bulk properties over a wide range of temperatures. Beyond bulk properties, we show that OPC3 predicts the intrinsic charge hydration asymmetry (CHA) of water, a characteristic dependence of hydration free energy on the sign of the solute charge, in very close agreement with experiment. Two other recent 3-point rigid water models, TIP3P-FB and H2O-DC, each developed with its own, completely different optimization method, approach the global accuracy optimum represented by OPC3 in both the parameter space and the accuracy of bulk properties. Thus, we argue that an accuracy limit of practical 3-point rigid non-polarizable models has effectively been reached; the remaining accuracy issues are discussed.

  10. Extending the enterprise evolution contextualisation model

    Science.gov (United States)

    de Vries, Marné; van der Merwe, Alta; Gerber, Aurona

    2017-07-01

    Enterprise engineering (EE) emerged as a new discipline to encourage comprehensive and consistent enterprise design. Since EE is multidisciplinary, various researchers study enterprises from different perspectives, which resulted in a plethora of applicable literature and terminology, but without shared meaning. Previous research specifically focused on the fragmentation of knowledge for designing and aligning the information and communication technology (ICT) subsystem of the enterprise in order to support the business organisation subsystem of the enterprise. As a solution for this fragmented landscape, a business-IT alignment model (BIAM) was developed inductively from existing business-IT alignment approaches. Since most of the existing alignment frameworks addressed the alignment between the ICT subsystem and the business organisation subsystem, BIAM also focused on the alignment between these two subsystems. Yet, the emerging EE discipline intends to address a broader scope of design, evident in the existing approaches that incorporate a broader scope of design/alignment/governance. A need was identified to address the knowledge fragmentation of the EE knowledge base by adapting BIAM to an enterprise evolution contextualisation model (EECM), to contextualise a broader set of approaches, as identified by Lapalme. The main contribution of this article is the incremental development and evaluation of EECM. We also present guiding indicators/prerequisites for applying EECM as a contextualisation tool.

  11. Modeling hard clinical end-point data in economic analyses.

    Science.gov (United States)

    Kansal, Anuraag R; Zheng, Ying; Palencia, Roberto; Ruffolo, Antonio; Hass, Bastian; Sorensen, Sonja V

    2013-11-01

    The availability of hard clinical end-point data, such as that on cardiovascular (CV) events among patients with type 2 diabetes mellitus, is increasing, and as a result there is growing interest in using hard end-point data of this type in economic analyses. This study investigated published approaches for modeling hard end-points from clinical trials and evaluated their applicability in health economic models with different disease features. A review of cost-effectiveness models of interventions in clinically significant therapeutic areas (CV diseases, cancer, and chronic lower respiratory diseases) was conducted in PubMed and Embase using a defined search strategy. Only studies integrating hard end-point data from randomized clinical trials were considered. For each study included, clinical input characteristics and modeling approach were summarized and evaluated. A total of 33 articles (23 CV, eight cancer, two respiratory) were accepted for detailed analysis. Decision trees, Markov models, discrete event simulations, and hybrids were used. Event rates were incorporated either as constant rates, time-dependent risks, or risk equations based on patient characteristics. Risks dependent on time and/or patient characteristics were used where major event rates were >1%/year in models with fewer health states. Models of infrequent events or with numerous health states generally preferred constant event rates. The detailed modeling information and terminology varied, sometimes requiring interpretation. Key considerations for cost-effectiveness models incorporating hard end-point data include the frequency and characteristics of the relevant clinical events and how the trial data are reported. When event risk is low, simplification of both the model structure and event rate modeling is recommended. When event risk is common, such as in high-risk populations, more detailed modeling approaches, including individual simulations or explicitly time-dependent event rates, are recommended.
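    The simplest of these choices, a constant event rate in a cohort Markov model, can be sketched as follows. The rate-to-probability identity p = 1 − exp(−r·t) is the standard conversion; the numbers are illustrative.

```python
import math

def rate_to_prob(rate, cycle=1.0):
    """Convert a constant event rate into a per-cycle transition
    probability via p = 1 - exp(-rate * cycle)."""
    return 1.0 - math.exp(-rate * cycle)

def markov_survival(annual_rate, n_cycles):
    """Cohort trace for a two-state (event-free / event) Markov model
    with a constant annual event rate; returns the event-free fraction."""
    p = rate_to_prob(annual_rate)
    alive = 1.0
    for _ in range(n_cycles):
        alive *= 1.0 - p  # constant per-cycle transition probability
    return alive
```

    Time-dependent risks or patient-level risk equations would replace the constant `p` inside the loop, which is exactly the added complexity the review found worthwhile only for frequent events.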

  12. Last interglacial temperature evolution – a model inter-comparison

    Directory of Open Access Journals (Sweden)

    P. Bakker

    2013-03-01

    temperatures. Secondly, for the Atlantic region, the Southern Ocean and the North Pacific, possible changes in the characteristics of the Atlantic meridional overturning circulation are crucial. Thirdly, the presence of remnant continental ice from the preceding glacial has shown to be important when determining the timing of maximum LIG warmth in the Northern Hemisphere. Finally, the results reveal that changes in the monsoon regime exert a strong control on the evolution of LIG temperatures over parts of Africa and India. By listing these inter-model differences, we provide a starting point for future proxy-data studies and the sensitivity experiments needed to constrain the climate simulations and to further enhance our understanding of the temperature evolution of the LIG period.

  13. Shape Modelling Using Markov Random Field Restoration of Point Correspondences

    DEFF Research Database (Denmark)

    Paulsen, Rasmus Reinhold; Hilger, Klaus Baggesen

    2003-01-01

    A method for building statistical point distribution models is proposed. The novelty in this paper is the adaption of Markov random field regularization of the correspondence field over the set of shapes. The new approach leads to a generative model that produces highly homogeneous polygonized sh...

  14. From Point Cloud to Textured Model, the Zamani Laser Scanning ...

    African Journals Online (AJOL)

    roshan

    meshed models based on dense points has received mixed reaction from the wide range of potential end users of the final ... data, can be subdivided into the stages of data acquisition, registration, data cleaning, modelling, hole filling ..... provide management tools for site management at local and regional level. The project ...

  15. Analytic models for the evolution of semilocal string networks

    International Nuclear Information System (INIS)

    Nunes, A. S.; Martins, C. J. A. P.; Avgoustidis, A.; Urrestilla, J.

    2011-01-01

    We revisit previously developed analytic models for defect evolution and adapt them appropriately for the study of semilocal string networks. We thus confirm the expectation (based on numerical simulations) that linear scaling evolution is the attractor solution for a broad range of model parameters. We discuss in detail the evolution of individual semilocal segments, focusing on the phenomenology of segment growth, and also provide a preliminary comparison with existing numerical simulations.

  16. FINDING CUBOID-BASED BUILDING MODELS IN POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    W. Nguatem

    2012-07-01

    Full Text Available In this paper, we present an automatic approach for the derivation of 3D building models of level-of-detail 1 (LOD1) from point clouds obtained from (dense) image matching or, for comparison only, from LIDAR. Our approach makes use of the predominance of vertical structures and orthogonal intersections in architectural scenes. After robustly determining the scene's vertical direction based on the 3D points, we use it as a constraint for a RANSAC-based search for vertical planes in the point cloud. The planes are further analyzed to segment reliable outlines of rectangular surfaces within these planes, which are connected to construct cuboid-based building models. We demonstrate that our approach is robust and effective over a range of real-world input data sets with varying point density, amount of noise, and outliers.
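    The plane-search step can be illustrated in 2D: once the vertical direction is known, a vertical plane projects to a line in the horizontal footprint, so a RANSAC line search stands in for the plane search. This is a sketch under that simplification, with an assumed inlier tolerance `tol`.

```python
import numpy as np

def ransac_line(pts2d, n_iter=200, tol=0.05, rng=None):
    """RANSAC search for the dominant line in the 2D footprint of a
    point cloud (a vertical plane projects to a line once the vertical
    direction is fixed). Returns the inlier mask of the best model."""
    rng = rng or np.random.default_rng(0)
    best = np.zeros(len(pts2d), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(pts2d), size=2, replace=False)
        p, q = pts2d[i], pts2d[j]
        d = q - p
        norm = np.hypot(*d)
        if norm < 1e-12:
            continue
        n = np.array([-d[1], d[0]]) / norm   # unit normal of the line
        dist = np.abs((pts2d - p) @ n)       # point-to-line distances
        inliers = dist < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best
```

    In the full pipeline this search would be repeated on the remaining points to extract further planes, each constrained to be vertical.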

  17. Evolution of Salmonella enterica virulence via point mutations in the fimbrial adhesin.

    Directory of Open Access Journals (Sweden)

    Dagmara I Kisiela

    Full Text Available Whereas the majority of pathogenic Salmonella serovars are capable of infecting many different animal species, typically producing a self-limited gastroenteritis, serovars with narrow host-specificity exhibit increased virulence and their infections frequently result in fatal systemic diseases. In our study, a genetic and functional analysis of the mannose-specific type 1 fimbrial adhesin FimH from a variety of serovars of Salmonella enterica revealed that specific mutant variants of FimH are common in host-adapted (systemically invasive serovars. We have found that while the low-binding shear-dependent phenotype of the adhesin is preserved in broad host-range (usually systemically non-invasive Salmonella, the majority of host-adapted serovars express FimH variants with one of two alternative phenotypes: a significantly increased binding to mannose (as in S. Typhi, S. Paratyphi C, S. Dublin and some isolates of S. Choleraesuis, or complete loss of the mannose-binding activity (as in S. Paratyphi B, S. Choleraesuis and S. Gallinarum. The functional diversification of FimH in host-adapted Salmonella results from recently acquired structural mutations. Many of the mutations are of a convergent nature indicative of strong positive selection. The high-binding phenotype of FimH that leads to increased bacterial adhesiveness to and invasiveness of epithelial cells and macrophages usually precedes acquisition of the non-binding phenotype. Collectively these observations suggest that activation or inactivation of mannose-specific adhesive properties in different systemically invasive serovars of Salmonella reflects their dynamic trajectories of adaptation to a life style in specific hosts. In conclusion, our study demonstrates that point mutations are the target of positive selection and, in addition to horizontal gene transfer and genome degradation events, can contribute to the differential pathoadaptive evolution of Salmonella.

  18. Fixed Points in Discrete Models for Regulatory Genetic Networks

    Directory of Open Access Journals (Sweden)

    Orozco Edusmildo

    2007-01-01

    Full Text Available It is desirable to have efficient mathematical methods to extract information about regulatory interactions between genes from repeated measurements of gene transcript concentrations. One piece of information of interest is whether the dynamics reaches a steady state. In this paper we develop tools that enable the detection of steady states that are modeled by fixed points in discrete finite dynamical systems. We discuss two algebraic models, a univariate model and a multivariate model. We show that these two models are equivalent and that one can be converted to the other by means of a discrete Fourier transform. We give a new, more general definition of a linear finite dynamical system and a necessary and sufficient condition for such a system to be a fixed point system, that is, for all cycles to be of length one. We show how this result for generalized linear systems can be used to determine when certain nonlinear systems (monomial dynamical systems over finite fields) are fixed point systems. We also show how it is possible to determine in polynomial time when an ordinary linear system (defined over a finite field) is a fixed point system. We conclude with a necessary condition for a univariate finite dynamical system to be a fixed point system.
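    For small systems the fixed-point property can be checked by brute force, which makes the definition concrete. The maps in the example are toy choices, not the paper's algebraic models.

```python
from itertools import product

def cycle_lengths(f, p, n):
    """Exhaustively iterate a finite dynamical system f on the state
    space F_p^n and return the set of cycle lengths reached; the system
    is a fixed point system iff the result is {1}."""
    lengths = set()
    for state in product(range(p), repeat=n):
        seen = {}
        step = 0
        while state not in seen:   # follow the trajectory until it loops
            seen[state] = step
            state = f(state)
            step += 1
        lengths.add(step - seen[state])  # length of the terminal cycle
    return lengths
```

    The paper's contribution is precisely that, for linear and monomial systems, this exponential enumeration can be replaced by polynomial-time algebraic tests.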

  19. New analytically solvable models of relativistic point interactions

    International Nuclear Information System (INIS)

    Gesztesy, F.; Seba, P.

    1987-01-01

    Two new analytically solvable models of relativistic point interactions in one dimension (natural extensions of the nonrelativistic δ- and δ'-interactions) are considered. Their spectral properties in the case of finitely many point interactions, as well as in the periodic case, are fully analyzed. Moreover, the spectrum is explicitly determined in the case of independent, identically distributed random coupling constants, and the analog of the Saxon and Hutner conjecture concerning gaps in the energy spectrum of such systems is derived.

  20. Modeling the contribution of point sources and non-point sources to Thachin River water pollution.

    Science.gov (United States)

    Schaffner, Monika; Bader, Hans-Peter; Scheidegger, Ruth

    2009-08-15

    Major rivers in developing and emerging countries increasingly suffer from severe degradation of water quality. The current study uses a mathematical Material Flow Analysis (MMFA) as a complementary approach to address the degradation of river water quality due to nutrient pollution in the Thachin River Basin in Central Thailand. This paper gives an overview of the origins and flow paths of the various point and non-point pollution sources in the Thachin River Basin (in terms of nitrogen and phosphorus) and quantifies their relative importance within the system. The key parameters influencing the main nutrient flows are determined and possible mitigation measures discussed. The results show that aquaculture (as a point source) and rice farming (as a non-point source) are the key nutrient sources in the Thachin River Basin. Other point sources such as pig farms, households and industries, which were previously cited as the most relevant pollution sources in terms of organic pollution, play less significant roles in comparison. This order of importance shifts when considering the model results at the provincial level. Cross-checks with secondary data and field studies confirm the plausibility of our simulations. Specific nutrient loads for the pollution sources are derived; these can be used for a first broad quantification of nutrient pollution in comparable river basins. Based on an identification of the sensitive model parameters, possible mitigation scenarios are determined and their potential to reduce the nutrient load evaluated. A comparison of simulated nutrient loads with measured nutrient concentrations shows that nutrient retention in the river system may be significant. Sedimentation in the slow-flowing surface water network as well as nitrogen emission to the air from the warm, oxygen-deficient waters are certainly partly responsible, but wetlands along the river banks could also play an important role as nutrient sinks.
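    The accounting behind such a material flow analysis reduces to a source-by-source balance. The sketch below shows the structure only, with placeholder transfer and retention coefficients rather than the study's calibrated values.

```python
def river_nutrient_load(emissions, transfer, retention=0.3):
    """Toy material-flow balance: each source's nutrient emission (t/yr)
    is routed to the river with a source-specific transfer coefficient
    (default 1.0); a fraction is then retained in-stream, standing in
    for sedimentation and emission to the air."""
    delivered = sum(load * transfer.get(src, 1.0)
                    for src, load in emissions.items())
    return delivered * (1.0 - retention)
```

    Sensitivity analysis then amounts to perturbing the transfer and retention coefficients and observing the change in the delivered load.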

  1. In silico modelling of directed evolution: Implications for experimental design and stepwise evolution

    OpenAIRE

    Wedge, David C.; Rowe, William; Kell, Douglas B.; Knowles, Joshua

    2009-01-01

    In silico modelling of directed evolution: Implications for experimental design and stepwise evolution.

  2. Modelling dune evolution and dynamic roughness in rivers

    NARCIS (Netherlands)

    Paarlberg, Andries

    2008-01-01

    Accurate river flow models are essential tools for water managers, but these hydraulic simulation models often lack a proper description of dynamic roughness due to hysteresis effects in dune evolution. To incorporate the effects of dune evolution directly into the resistance coefficients of

  3. New Parallel Algorithms for Landscape Evolution Model

    Science.gov (United States)

    Jin, Y.; Zhang, H.; Shi, Y.

    2017-12-01

    Most landscape evolution models (LEMs) developed in the last two decades solve the diffusion equation to simulate the transport of surface sediments. This numerical approach is difficult to parallelize because of the computation of the drainage area for each node, which requires a large amount of inter-process communication when run in parallel. To overcome this difficulty, we developed two parallel algorithms for LEMs with a stream net. One algorithm partitions the grid with traditional methods and applies an efficient global reduction algorithm to compute drainage areas and transport rates for the stream net; the other is based on a new partition algorithm, which first partitions the nodes in catchments between processes and then partitions the cells according to the partition of nodes. Both methods focus on decreasing communication between processes and take advantage of massive computing techniques, and numerical experiments show that both are adequate for large-scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can be easily coupled with ASPECT.
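The drainage-area computation that makes parallelization hard can be sketched serially: each cell drains to one downstream neighbour, and areas are accumulated in upstream-to-downstream (topological) order. This is an illustrative stand-in, not code from the deal.II-based program described above:

```python
from collections import defaultdict, deque

def drainage_area(downstream, cell_area=1.0):
    """Accumulate drainage area over a stream net.

    downstream[i] is the cell that cell i drains into (None at outlets).
    Cells are processed in topological order, headwaters first; it is this
    whole-network accumulation that forces global communication when the
    grid is partitioned across processes.
    """
    indeg = defaultdict(int)
    for i, d in downstream.items():
        if d is not None:
            indeg[d] += 1
    area = {i: cell_area for i in downstream}
    queue = deque(i for i in downstream if indeg[i] == 0)
    while queue:
        i = queue.popleft()
        d = downstream[i]
        if d is not None:
            area[d] += area[i]
            indeg[d] -= 1
            if indeg[d] == 0:
                queue.append(d)
    return area

# Tiny net: cells 0 and 1 drain into 2, which drains into outlet 3.
net = {0: 2, 1: 2, 2: 3, 3: None}
print(drainage_area(net))  # outlet 3 accumulates all 4 cells
```

The paper's two algorithms differ in how this accumulation is split: a global reduction over a conventional grid partition versus a catchment-aware partition that keeps most accumulation local.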

  4. A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD

    OpenAIRE

    J. Tang; Y. Wang; Y. Zhao; Y. Zhao; W. Hao; X. Ning; K. Lv; Z. Shi; M. Zhao

    2017-01-01

    Leaves falling gently or fluttering are a common phenomenon in nature scenes. The authenticity of falling leaves plays an important part in the dynamic modeling of natural scenes, and falling-leaf models have wide applications in the fields of animation and virtual reality. We propose a novel modeling method for fluttering leaves based on point clouds in this paper. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined, which ar...

  5. A point particle model of lightly bound skyrmions

    Directory of Open Access Journals (Sweden)

    Mike Gillard

    2017-04-01

    Full Text Available A simple model of the dynamics of lightly bound skyrmions is developed in which skyrmions are replaced by point particles, each carrying an internal orientation. The model accounts well for the static energy minimizers of baryon number 1≤B≤8 obtained by numerical simulation of the full field theory. For 9≤B≤23, a large number of static solutions of the point particle model are found, all closely resembling size B subsets of a face centred cubic lattice, with the particle orientations dictated by a simple colouring rule. Rigid body quantization of these solutions is performed, and the spin and isospin of the corresponding ground states extracted. As part of the quantization scheme, an algorithm to compute the symmetry group of an oriented point cloud, and to determine its corresponding Finkelstein–Rubinstein constraints, is devised.

  6. Predicting acid dew point with a semi-empirical model

    International Nuclear Information System (INIS)

    Xiang, Baixiang; Tang, Bin; Wu, Yuxin; Yang, Hairui; Zhang, Man; Lu, Junfu

    2016-01-01

    Highlights: • The previous semi-empirical models are systematically studied. • An improved thermodynamic correlation is derived. • A semi-empirical prediction model is proposed. • The proposed semi-empirical model is validated. - Abstract: Decreasing the temperature of exhaust flue gas in boilers is one of the most effective ways to further improve thermal efficiency and electrostatic precipitator efficiency and to decrease the water consumption of the desulfurization tower. However, when this temperature falls below the acid dew point, fouling and corrosion occur on the heating surfaces in the second pass of boilers, so the ability to accurately predict the acid dew point is essential. By investigating previous models of acid dew point prediction, an improved thermodynamic correlation between the acid dew point and its influencing factors is first derived. A semi-empirical prediction model is then proposed, validated against both field-test and experimental data, and compared with the previous models.
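The paper's improved correlation is not reproduced in the abstract. As an illustration of the class of semi-empirical formulas being improved upon, here is a sketch of the widely cited Verhoff-Banchero correlation for the sulfuric acid dew point (partial pressures in mmHg, temperature in kelvin); it is not the authors' model:

```python
import math

def acid_dew_point_K(p_h2o_mmHg, p_so3_mmHg):
    """Sulfuric acid dew point via the classic Verhoff-Banchero correlation.

    1000/Tdp = 2.276 - 0.02943 ln(pH2O) - 0.0858 ln(pSO3)
               + 0.0062 ln(pH2O) ln(pSO3)
    with partial pressures in mmHg and Tdp in K. Illustrative of the
    semi-empirical model family; not the improved model of this paper.
    """
    lw = math.log(p_h2o_mmHg)
    ls = math.log(p_so3_mmHg)
    inv_T = 1e-3 * (2.276 - 0.02943 * lw - 0.0858 * ls + 0.0062 * lw * ls)
    return 1.0 / inv_T

# ~10 vol% moisture (76 mmHg) and 10 ppm SO3 (0.0076 mmHg) at 1 atm:
print(f"{acid_dew_point_K(76.0, 0.0076) - 273.15:.0f} degC")
```

For typical coal-flue-gas conditions this yields a dew point in the 120-150 degC range, which is why exhaust temperatures cannot be lowered arbitrarily.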

  7. Bicarbonate transporters in corals point towards a key step in the evolution of cnidarian calcification

    KAUST Repository

    Zoccola, Didier

    2015-06-04

    The bicarbonate ion (HCO3−) is involved in two major physiological processes in corals, biomineralization and photosynthesis, yet no molecular data on bicarbonate transporters are available. Here, we characterized plasma membrane-type HCO3− transporters in the scleractinian coral Stylophora pistillata. Eight solute carrier (SLC) genes were found in the genome: five homologs of mammalian-type SLC4 family members, and three of mammalian-type SLC26 family members. Using relative expression analysis and immunostaining, we analyzed the cellular distribution of these transporters and conducted phylogenetic analyses to determine the extent of conservation among cnidarian model organisms. Our data suggest that the SLC4γ isoform is specific to scleractinian corals and responsible for supplying HCO3− to the site of calcification. Taken together, SLC4γ appears to be one of the key genes for skeleton building in corals, which bears profound implications for our understanding of coral biomineralization and the evolution of scleractinian corals within cnidarians.

  8. Bicarbonate transporters in corals point towards a key step in the evolution of cnidarian calcification

    KAUST Repository

    Zoccola, Didier; Ganot, Philippe; Bertucci, Anthony; Caminiti-Segonds, Natacha; Techer, Nathalie; Voolstra, Christian R.; Aranda, Manuel; Tambutté, Eric; Allemand, Denis; Casey, Joseph R.; Tambutté, Sylvie

    2015-01-01


  9. An Improved Nonlinear Five-Point Model for Photovoltaic Modules

    Directory of Open Access Journals (Sweden)

    Sakaros Bogning Dongue

    2013-01-01

    Full Text Available This paper presents an improved nonlinear five-point model capable of analytically describing the electrical behavior of a photovoltaic module under each generic operating condition of temperature and solar irradiance. The models used to replicate the electrical behavior of operating PV modules are usually based on simplified assumptions that yield a convenient mathematical model for conventional simulation tools. Unfortunately, these assumptions cause inaccuracies, and hence unrealistic economic returns are predicted. As an alternative, we exploit the advantages of a nonlinear analytical five-point model to take into account the non-ideal diode effects and other generally ignored nonlinear effects on which PV module operation depends. To verify the capability of our method to fit PV panel characteristics, the procedure was tested on three different panels. Results were compared with the data issued by manufacturers and with the results obtained using the five-parameter model proposed by other authors.
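Both the five-parameter model cited for comparison and refinements of it rest on the implicit single-diode equation. A minimal sketch of solving it by Newton iteration follows; all parameter values are illustrative, not from any datasheet, and this is the generic equation, not the authors' improved five-point model:

```python
import math

def diode_current(V, Iph=8.0, I0=1e-9, Rs=0.2, Rsh=300.0, n=1.3, Ns=60, T=298.15):
    """Solve the implicit single-diode equation for module current I:

        I = Iph - I0*(exp((V + I*Rs)/Vt) - 1) - (V + I*Rs)/Rsh

    via Newton iteration. Parameter values (photocurrent Iph, saturation
    current I0, series/shunt resistances, ideality n, Ns cells) are
    illustrative assumptions.
    """
    Vt = Ns * n * 1.380649e-23 * T / 1.602176634e-19  # string thermal voltage
    I = Iph  # start near the short-circuit current
    for _ in range(100):
        x = (V + I * Rs) / Vt
        f = Iph - I0 * math.expm1(x) - (V + I * Rs) / Rsh - I
        df = -I0 * math.exp(x) * Rs / Vt - Rs / Rsh - 1.0
        step = f / df
        I -= step
        if abs(step) < 1e-12:
            break
    return I

print(f"Isc = {diode_current(0.0):.3f} A")  # close to Iph
```

Five-parameter fitting then amounts to choosing (Iph, I0, Rs, Rsh, n) so this curve matches datasheet points; the paper's five-point variant adds the non-ideal effects such fits usually ignore.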

  10. Evolution of Black-Box Models Based on Volterra Series

    Directory of Open Access Journals (Sweden)

    Daniel D. Silveira

    2015-01-01

    Full Text Available This paper presents a historical review of the many behavioral models currently used to model radio frequency power amplifiers, together with a new classification of these behavioral models. It also discusses the evolution of these models, from a single polynomial to multirate Volterra models, presenting equations and estimation methods. New trends in RF power amplifier behavioral modeling are suggested.
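A common waypoint in that evolution from single polynomials toward full Volterra series is the memory polynomial, a pruned Volterra structure widely used for PA behavioral modeling. A minimal sketch with illustrative coefficients (not from the paper):

```python
def memory_polynomial(x, coeffs):
    """Apply a memory-polynomial PA model (a pruned Volterra series).

    y[n] = sum over (order k, memory tap q) of
           coeffs[(k, q)] * x[n-q] * |x[n-q]|**(k-1)
    for complex-baseband input x. Coefficients here are illustrative.
    """
    y = []
    for n in range(len(x)):
        acc = 0j
        for (k, q), a in coeffs.items():
            if n - q >= 0:
                xs = x[n - q]
                acc += a * xs * abs(xs) ** (k - 1)
        y.append(acc)
    return y

# Mildly nonlinear PA with one linear memory tap (hypothetical values):
coeffs = {(1, 0): 1.0 + 0j, (3, 0): -0.05 + 0.01j, (1, 1): 0.1 + 0j}
x = [0.5 + 0.5j, 1.0 + 0j, 0.0 + 1.0j]
y = memory_polynomial(x, coeffs)
```

Because the output is linear in the coefficients, estimation reduces to least squares on measured input/output pairs, which is why this structure is so popular in practice.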

  11. Quantitative Modeling of Landscape Evolution, Treatise on Geomorphology

    NARCIS (Netherlands)

    Temme, A.J.A.M.; Schoorl, J.M.; Claessens, L.F.G.; Veldkamp, A.; Shroder, F.S.

    2013-01-01

    This chapter reviews quantitative modeling of landscape evolution – which means that not just model studies but also modeling concepts are discussed. Quantitative modeling is contrasted with conceptual or physical modeling, and four categories of model studies are presented. Procedural studies focus

  12. A model of the microphysical evolution of a cloud

    International Nuclear Information System (INIS)

    Zinn, J.

    1994-01-01

    The earth's weather and climate are influenced strongly by phenomena associated with clouds. Therefore, a general circulation model (GCM) that models the evolution of weather and climate must include an accurate physical model of the clouds. This paper describes efforts to develop a suitable cloud model. It concentrates on the microphysical processes that determine the evolution of droplet and ice crystal size distributions, precipitation rates, total and condensed water content, and radiative extinction coefficients

  13. Contemporary Ecological Interactions Improve Models of Past Trait Evolution.

    Science.gov (United States)

    Hutchinson, Matthew C; Gaiarsa, Marília P; Stouffer, Daniel B

    2018-02-20

    Despite the fact that natural selection underlies both traits and interactions, evolutionary models often neglect that ecological interactions may, and in many cases do, influence the evolution of traits. Here, we explore the interdependence of ecological interactions and functional traits in the pollination associations of hawkmoths and flowering plants. Specifically, we develop an adaptation of the Ornstein-Uhlenbeck model of trait evolution that allows us to study the influence of plant corolla depth and observed hawkmoth-plant interactions on the evolution of hawkmoth proboscis length. Across diverse modelling scenarios, we find that the inclusion of contemporary interactions can provide a better description of trait evolution than the null expectation. Moreover, we show that the pollination interactions provide more likely models of hawkmoth trait evolution when interactions are considered at increasingly fine-scale groups of hawkmoths. Finally, we demonstrate how the results of best-fit modelling approaches can implicitly support the association between interactions and trait evolution that our method explicitly examines. In showing that contemporary interactions can provide insight into the historical evolution of hawkmoth proboscis length, we demonstrate the clear utility of incorporating additional ecological information into models designed to study past trait evolution.
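The Ornstein-Uhlenbeck model underlying this approach describes a trait pulled toward an optimum. A minimal Euler-Maruyama simulation follows; it is a sketch of the base OU process only, not the authors' interaction-informed adaptation, and all parameter values are illustrative:

```python
import random

def simulate_ou(x0, theta, alpha, sigma, dt, steps, rng):
    """Euler-Maruyama simulation of an Ornstein-Uhlenbeck process,
    dX = alpha * (theta - X) dt + sigma dW: the trait X (think proboscis
    length in mm) is pulled with strength alpha toward an optimum theta
    (set, e.g., by corolla depth). All values are illustrative.
    """
    x, path = x0, [x0]
    sqdt = dt ** 0.5
    for _ in range(steps):
        x += alpha * (theta - x) * dt + sigma * sqdt * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

rng = random.Random(42)
path = simulate_ou(x0=0.0, theta=40.0, alpha=2.0, sigma=1.0, dt=0.01, steps=2000, rng=rng)
# After ~20 time units the trait fluctuates around theta = 40 with
# stationary s.d. sigma / sqrt(2 * alpha) = 0.5.
```

Fitting such a model to tip data and comparing optima assigned per interaction group is, in outline, how hypotheses like the paper's are scored against the null.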

  14. The metastable dynamo model of stellar rotational evolution

    International Nuclear Information System (INIS)

    Brown, Timothy M.

    2014-01-01

    This paper introduces a new empirical model for the rotational evolution of Sun-like stars—those with surface convection zones and non-convective interior regions. Previous models do not match the morphology of observed (rotation period)-color diagrams, notably the existence of a relatively long-lived 'C-sequence' of fast rotators first identified by Barnes. This failure motivates the Metastable Dynamo Model (MDM) described here. The MDM posits that stars are born with their magnetic dynamos operating in a mode that couples very weakly to the stellar wind, so their (initially very short) rotation periods at first change little with time. At some point, this mode spontaneously and randomly changes to a strongly coupled mode, the transition occurring with a mass-dependent lifetime that is of the order of 100 Myr. I show that with this assumption, one can obtain good fits to observations of young clusters, particularly for ages of 150-200 Myr. Previous models and the MDM both give qualitative agreement with the morphology of the slower-rotating 'I-sequence' stars, but none of them have been shown to accurately reproduce the stellar-mass-dependent evolution of the I-sequence stars, especially for clusters older than a few hundred million years. I discuss observational experiments that can test aspects of the MDM, and speculate that the physics underlying the MDM may be related to other situations described in the literature, in which stellar dynamos may have a multi-modal character.
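The MDM's core assumption, an initially weakly coupled dynamo that switches to strong coupling after an exponentially distributed, mass-dependent lifetime, can be sketched as a toy simulation. The constant pre-switch period, the square-root spin-down law and all constants below are my illustrative assumptions, not the paper's calibration:

```python
import math
import random

def mdm_period(age_myr, p0, t_switch, k=0.05):
    """Toy rotation period under the metastable-dynamo picture.

    Before the star's (random) switch time it keeps its short birth period
    p0 (weakly coupled mode); afterwards it spins down. The sqrt spin-down
    and constant k are illustrative assumptions, not the paper's model.
    """
    if age_myr <= t_switch:
        return p0
    return p0 + k * math.sqrt(age_myr - t_switch)

rng = random.Random(1)
tau = 100.0   # mode lifetime of order 100 Myr, as in the abstract
n = 10000
switched = sum(1 for _ in range(n) if rng.expovariate(1.0 / tau) < 150.0)
frac = switched / n
# Expected fraction switched by a 150 Myr cluster age: 1 - exp(-1.5) ~ 0.78,
# i.e. most stars have left the fast-rotator C-sequence by then.
```

The bimodal period distribution this produces at a fixed cluster age is the qualitative signature the MDM uses to explain the C- and I-sequences.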

  15. Multi-Valued Modal Fixed Point Logics for Model Checking

    Science.gov (United States)

    Nishizawa, Koki

    In this paper, I will show how multi-valued logics are used for model checking. Model checking is an automatic technique to analyze correctness of hardware and software systems. A model checker is based on a temporal logic or a modal fixed point logic. That is to say, a system to be checked is formalized as a Kripke model, a property to be satisfied by the system is formalized as a temporal formula or a modal formula, and the model checker checks that the Kripke model satisfies the formula. Although most existing model checkers are based on 2-valued logics, recently new attempts have been made to extend the underlying logics of model checkers to multi-valued logics. I will summarize these new results.
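The 2-valued base case of fixed-point model checking can be sketched directly: the CTL property EF(goal), "some path reaches a goal state", is the least fixed point of Z = goal ∪ pre(Z) over a Kripke structure. The states and transitions below are illustrative; multi-valued variants replace the state sets with lattice-valued maps:

```python
def check_EF(states, trans, goal):
    """Least-fixed-point computation of EF(goal) on a Kripke structure.

    Returns the set of states from which some path reaches a goal state,
    by iterating Z := goal | pre(Z) until Z stops growing. Termination is
    guaranteed because Z only grows within a finite state set.
    """
    Z = set(goal)
    while True:
        pre = {s for s in states if any(t in Z for t in trans.get(s, ()))}
        nxt = Z | pre
        if nxt == Z:
            return Z
        Z = nxt

# s0 -> s1 -> s2 (goal); s3 is a sink that never reaches s2.
states = {"s0", "s1", "s2", "s3"}
trans = {"s0": ["s1"], "s1": ["s2"], "s2": [], "s3": ["s3"]}
print(sorted(check_EF(states, trans, {"s2"})))  # ['s0', 's1', 's2']
```

In a multi-valued logic the same iteration runs over a lattice of truth values per state (e.g. "unknown" between true and false), converging by monotonicity just as here.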

  16. A case study on point process modelling in disease mapping

    DEFF Research Database (Denmark)

    Møller, Jesper; Waagepetersen, Rasmus Plenge; Benes, Viktor

    2005-01-01

    of the risk on the covariates. Instead of using the common areal level approaches we base the analysis on a Bayesian approach for a log Gaussian Cox point process with covariates. Posterior characteristics for a discretized version of the log Gaussian Cox process are computed using Markov chain Monte Carlo methods. A particular problem which is thoroughly discussed is to determine a model for the background population density. The risk map shows a clear dependency with the population intensity models, and the basic model which is adopted for the population intensity determines what covariates influence the risk of TBE. Model validation is based on the posterior predictive distribution of various summary statistics.

  17. Multivariate Product-Shot-noise Cox Point Process Models

    DEFF Research Database (Denmark)

    Jalilian, Abdollah; Guan, Yongtao; Mateu, Jorge

    We introduce a new multivariate product-shot-noise Cox process, which is useful for modeling multi-species spatial point patterns with clustering intra-specific interactions and neutral, negative or positive inter-specific interactions. The auto and cross pair correlation functions of the process can be obtained in closed analytical forms and approximate simulation of the process is straightforward. We use the proposed process to model interactions within and among five tree species in the Barro Colorado Island plot.

  18. The quantum nonlinear Schroedinger model with point-like defect

    International Nuclear Information System (INIS)

    Caudrelier, V; Mintchev, M; Ragoucy, E

    2004-01-01

    We establish a family of point-like impurities which preserve the quantum integrability of the nonlinear Schroedinger model in 1+1 spacetime dimensions. We briefly describe the construction of the exact second quantized solution of this model in terms of an appropriate reflection-transmission algebra. The basic physical properties of the solution, including the spacetime symmetry of the bulk scattering matrix, are also discussed. (letter to the editor)

  19. THE APACHE POINT OBSERVATORY GALACTIC EVOLUTION EXPERIMENT: FIRST DETECTION OF HIGH-VELOCITY MILKY WAY BAR STARS

    Energy Technology Data Exchange (ETDEWEB)

    Nidever, David L.; Zasowski, Gail; Majewski, Steven R.; Beaton, Rachael L.; Wilson, John C.; Skrutskie, Michael F.; O'Connell, Robert W. [Department of Astronomy, University of Virginia, Charlottesville, VA 22904-4325 (United States); Bird, Jonathan; Schoenrich, Ralph; Johnson, Jennifer A.; Sellgren, Kris [Department of Astronomy and the Center for Cosmology and Astro-Particle Physics, The Ohio State University, Columbus, OH 43210 (United States); Robin, Annie C.; Schultheis, Mathias [Institut Utinam, CNRS UMR 6213, OSU THETA, Universite de Franche-Comte, 41bis avenue de l'Observatoire, F-25000 Besancon (France); Martinez-Valpuesta, Inma; Gerhard, Ortwin [Max-Planck-Institut fuer Extraterrestrische Physik, Giessenbachstrasse, D-85748 Garching (Germany); Shetrone, Matthew [McDonald Observatory, University of Texas at Austin, Fort Davis, TX 79734 (United States); Schiavon, Ricardo P. [Gemini Observatory, 670 North A'Ohoku Place, Hilo, HI 96720 (United States); Weiner, Benjamin [Steward Observatory, 933 North Cherry Street, University of Arizona, Tucson, AZ 85721 (United States); Schneider, Donald P. [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States); Allende Prieto, Carlos, E-mail: dln5q@virginia.edu [Instituto de Astrofisica de Canarias, E-38205 La Laguna, Tenerife (Spain); and others

    2012-08-20

    Commissioning observations with the Apache Point Observatory Galactic Evolution Experiment (APOGEE), part of the Sloan Digital Sky Survey III, have produced radial velocities (RVs) for ≈4700 K/M-giant stars in the Milky Way (MW) bulge. These high-resolution (R ≈ 22,500), high-S/N (>100 per resolution element), near-infrared (NIR; 1.51-1.70 μm) spectra provide accurate RVs (ε_V ≈ 0.2 km s⁻¹) for the sample of stars in 18 Galactic bulge fields spanning −1° to −32°. This represents the largest NIR high-resolution spectroscopic sample of giant stars ever assembled in this region of the Galaxy. A cold (σ_V ≈ 30 km s⁻¹), high-velocity peak (V_GSR ≈ +200 km s⁻¹) is found to comprise a significant fraction (≈10%) of stars in many of these fields. These high RVs have not been detected in previous MW surveys and are not expected for a simple, circularly rotating disk. Preliminary distance estimates rule out an origin from the background Sagittarius tidal stream or a new stream in the MW disk. Comparison to various Galactic models suggests that these high RVs are best explained by stars in orbits of the Galactic bar potential, although some observational features remain unexplained.

  20. Numerical study of active particles creation and evolution in a nitrogen point-to-plane afterglow discharge at low pressure

    International Nuclear Information System (INIS)

    Potamianou, S; Spyrou, N; Held, B; Loiseau, J-F

    2006-01-01

    The last part of a numerical study of low-pressure nitrogen cold plasma created by a pulsed discharge in a point-to-plane geometry at 4 Torr is presented. The present work deals with the discharge and plasma behaviour during the falling part of a rectangular-shaped applied voltage pulse and completes our investigation of the discharge under the stress of this voltage shape. The model is based on a fluid description of the cold plasma, on Poisson's equation for the electric field and on balance equations for the excited populations, taking into account only the most important generation and decay mechanisms of the radiative B³Πg and C³Πu states and the metastable A³Σu⁺ state of nitrogen, according to the conclusions of our recent work (Potamianou et al 2003 Eur. Phys. J. Appl. Phys. 22 179-88). Results for the space and time evolution of the charged particle densities, electric field, potential and electron current density are reported. According to these results, a non-neutral channel is formed that evolves slowly and ends in the formation of a double layer. Excited particle distributions are presented and the influence of the electron current density discussed. It seems that, in this kind of discharge, the creation of active particles is due not only to the electron current density but also to physicochemical mechanisms. The obtained results will help to determine optimal conditions for polymer surface treatment.

  1. Modelling the evolution and consequences of mate choice

    OpenAIRE

    Tazzyman, S. J.

    2010-01-01

    This thesis considers the evolution and the consequences of mate choice across a variety of taxa, using game theoretic, population genetic, and quantitative genetic modelling techniques. Part I is about the evolution of mate choice. In chapter 2, a population genetic model shows that mate choice is even beneficial in self-fertilising species such as Saccharomyces yeast. In chapter 3, a game theoretic model shows that female choice will be strongly dependent upon whether the benefi...

  2. Computer modelling as a tool for understanding language evolution

    NARCIS (Netherlands)

    de Boer, Bart; Gontier, N; VanBendegem, JP; Aerts, D

    2006-01-01

    This paper describes the uses of computer models in studying the evolution of language. Language is a complex dynamic system that can be studied at the level of the individual and at the level of the population. Much of the dynamics of language evolution and language change occur because of the

  3. Learning from input and memory evolution: points of vulnerability on a pathway to mastery in word learning.

    Science.gov (United States)

    Storkel, Holly L

    2015-02-01

    Word learning consists of at least two neurocognitive processes: learning from input during training and memory evolution during gaps between training sessions. Fine-grained analysis of word learning by normal adults provides evidence that learning from input is swift and stable, whereas memory evolution is a point of potential vulnerability on the pathway to mastery. Moreover, success during learning from input is linked to positive outcomes from memory evolution. These two neurocognitive processes can be overlaid on to components of clinical treatment with within-session variables (i.e. dose form and dose) potentially linked to learning from input and between-session variables (i.e. dose frequency) linked to memory evolution. Collecting data at the beginning and end of a treatment session can be used to identify the point of vulnerability in word learning for a given client and the appropriate treatment component can then be adjusted to improve the client's word learning. Two clinical cases are provided to illustrate this approach.

  4. Integrated modeling and analysis methodology for precision pointing applications

    Science.gov (United States)

    Gutierrez, Homero L.

    2002-07-01

    Space-based optical systems that perform tasks such as laser communications, Earth imaging, and astronomical observations require precise line-of-sight (LOS) pointing. A general approach is described for integrated modeling and analysis of these types of systems within the MATLAB/Simulink environment. The approach can be applied during all stages of program development, from early conceptual design studies to hardware implementation phases. The main objective is to predict the dynamic pointing performance subject to anticipated disturbances and noise sources. Secondary objectives include assessing the control stability, levying subsystem requirements, supporting pointing error budgets, and performing trade studies. The integrated model resides in Simulink, and several MATLAB graphical user interfaces (GUIs) allow the user to configure the model, select analysis options, run analyses, and process the results. A convenient parameter naming and storage scheme, as well as model conditioning and reduction tools and run-time enhancements, are incorporated into the framework. This enables the proposed architecture to accommodate models of realistic complexity.

  5. Numerical Simulation of Missouri River Bed Evolution Downstream of Gavins Point Dam

    Science.gov (United States)

    Sulaiman, Z. A.; Blum, M. D.; Lephart, G.; Viparelli, E.

    2016-12-01

    The Missouri River originates in the Rocky Mountains in western Montana and joins the Mississippi River near Saint Louis, Missouri. In the 1900s, dam construction and river engineering works, such as river alignment, narrowing and bank protection, were performed in the Missouri River basin to control flood flows, ensure navigation, and supply water for agricultural, industrial and municipal needs, for hydroelectric power generation and for recreation. These projects altered the flow and sediment transport regimes in the river and the exchange of sediment between the river and the adjoining floodplain. Here we focus on the long-term effect of dam construction and channel narrowing on the 1200 km long reach of the Missouri River between Gavins Point Dam, Nebraska and South Dakota, and the confluence with the Mississippi River. Field observations show that two downstream-migrating waves of channel bed degradation formed in this reach in response to the changes in flow regime, sediment load and channel geometry. We implemented a one-dimensional morphodynamic model for large, low-slope sand bed rivers, validated the model at field scale by comparing the numerical results with the available field data, and use the model to 1) predict the magnitude and the migration rate of the waves of degradation at engineering time scales (~150 years into the future), 2) quantify the changes in the sand load delivered to the Mississippi River, where field observations at Thebes, i.e. downstream of Saint Louis, suggest a decline in the mean annual sand load in the past 50 years, and 3) identify the role of the main tributaries - Little Sioux River, Platte River and Kansas River - on the wave migration speed and the annual sand load in the Missouri River main channel.
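The mechanism behind such degradation waves can be sketched with a 1D Exner mass balance in which a dam cuts the upstream sediment feed. Transport capacity is taken simply proportional to local bed slope, a crude stand-in for a real sand transport relation; this is not the validated model of the study, and all numbers are illustrative:

```python
def simulate_degradation(n=50, dx=1000.0, dt=5e4, steps=2000,
                         S0=2e-4, k=5.0, lam=0.4, feed=0.0):
    """1D Exner-equation sketch of bed degradation downstream of a dam.

    Exner: (1 - porosity) * d(eta)/dt = -d(qs)/dx, with sediment flux
    qs = k * slope (an illustrative stand-in for a real transport law).
    Setting the upstream feed to zero launches a degradation wave that
    migrates downstream, qualitatively like the post-dam Missouri.
    """
    # initial bed: uniform slope S0, elevation decreasing downstream
    eta = [S0 * dx * (n - 1 - i) for i in range(n)]
    eta0 = eta[:]
    for _ in range(steps):
        qs = [feed]  # flux at the upstream face (dam: zero sediment feed)
        for i in range(1, n):
            slope = max((eta[i - 1] - eta[i]) / dx, 0.0)
            qs.append(k * slope)
        qs.append(qs[-1])  # free outflow at the downstream end
        for i in range(n):
            eta[i] -= dt / (1.0 - lam) * (qs[i + 1] - qs[i]) / dx
    return eta0, eta

eta0, eta = simulate_degradation()
print(f"degradation at the dam: {eta0[0] - eta[0]:.1f} m")
```

The slope-proportional flux makes the bed equation diffusive, so the explicit time step is kept below the stability limit 0.5*(1-lam)*dx**2/k; degradation is deepest at the dam and decays downstream, the signature of a migrating degradation wave.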

  6. Comprehensive overview of the Point-by-Point model of prompt emission in fission

    Energy Technology Data Exchange (ETDEWEB)

    Tudora, A. [University of Bucharest, Faculty of Physics, Bucharest Magurele (Romania); Hambsch, F.J. [European Commission, Joint Research Centre, Directorate G - Nuclear Safety and Security, Unit G2, Geel (Belgium)

    2017-08-15

    The investigation of prompt emission in fission is very important for understanding the fission process and for improving the quality of evaluated nuclear data required for new applications. In the last decade remarkable efforts were made both in the development of prompt emission models and in the experimental investigation of the properties of fission fragments and of prompt neutron and γ-ray emission. The accurate experimental data concerning the prompt neutron multiplicity as a function of fragment mass and total kinetic energy for ²⁵²Cf(SF) and ²³⁵U(n,f) recently measured at JRC-Geel (as well as various other prompt emission data) allow a consistent and very detailed validation of the Point-by-Point (PbP) deterministic model of prompt emission. The PbP model results describe very well a large variety of experimental data, starting from the multi-parametric matrices of prompt neutron multiplicity ν(A,TKE) and γ-ray energy E_γ(A,TKE), which validate the model itself, passing through different average prompt emission quantities as a function of A (e.g., ν(A), E_γ(A), ⟨ε⟩(A), etc.) and as a function of TKE (e.g., ν(TKE), E_γ(TKE)), up to the prompt neutron distribution P(ν) and the total average prompt neutron spectrum. The PbP model does not use free or adjustable parameters. To calculate the multi-parametric matrices it needs only data included in the reference input parameter library RIPL of the IAEA. To provide average prompt emission quantities as a function of A and of TKE and total average quantities, the multi-parametric matrices are averaged over reliable experimental fragment distributions. The PbP results are also in agreement with the results of the Monte Carlo prompt emission codes FIFRELIN, CGMF and FREYA. The good description of a large variety of experimental data proves the capability of the PbP model to be used in nuclear data evaluations and its reliability to predict prompt emission data for fissioning

  7. Recent tests of the equilibrium-point hypothesis (lambda model).

    Science.gov (United States)

    Feldman, A G; Ostry, D J; Levin, M F; Gribble, P L; Mitnitski, A B

    1998-07-01

    The lambda model of the equilibrium-point hypothesis (Feldman & Levin, 1995) is an approach to motor control which, like physics, is based on a logical system coordinating empirical data. The model has gone through an interesting period. On one hand, several nontrivial predictions of the model have been successfully verified in recent studies. In addition, the explanatory and predictive capacity of the model has been enhanced by its extension to multimuscle and multijoint systems. On the other hand, claims have recently appeared suggesting that the model should be abandoned. The present paper focuses on these claims and concludes that they are unfounded. Much of the experimental data that have been used to reject the model are actually consistent with it.

  8. Modeling Software Evolution using Algebraic Graph Rewriting

    NARCIS (Netherlands)

    Ciraci, Selim; van den Broek, Pim

    We show how evolution requests can be formalized using algebraic graph rewriting. In particular, we present a way to convert UML class diagrams to colored graphs. Since changes in software may affect the relations between the methods of classes, our colored graph representation also employs the

  9. Integrable Seven-Point Discrete Equations and Second-Order Evolution Chains

    Science.gov (United States)

    Adler, V. E.

    2018-04-01

    We consider differential-difference equations defining continuous symmetries for discrete equations on a triangular lattice. We show that a certain combination of continuous flows can be represented as a secondorder scalar evolution chain. We illustrate the general construction with a set of examples including an analogue of the elliptic Yamilov chain.

  10. Learning and evolution in games and oligopoly models

    NARCIS (Netherlands)

    Possajennikov, A.

    2000-01-01

    Dynamic models of adjustment, as well as static models of equilibrium, are important to understand economic reality. This thesis considers such dynamic models applied to economic games. The models can broadly be divided into two categories: learning and evolution. This thesis analyzes reinforcement

  11. The Comparison of Point Data Models for the Output of WRF Hydro Model in the IDV

    Science.gov (United States)

    Ho, Y.; Weber, J.

    2017-12-01

    WRF Hydro netCDF output files contain streamflow, flow depth, longitude, latitude, altitude and stream order values for each forecast point. However, the data are not CF compliant. The total number of forecast points for the US CONUS is approximately 2.7 million, which is a big challenge for any visualization and analysis tool. The IDV point cloud display shows point data as a set of points colored by parameter. This display is very efficient compared to a standard point type display for rendering a large number of points. One remaining problem is that data I/O can become a bottleneck when dealing with a large collection of point input files. In this presentation, we experiment with different point data models and their APIs to access the same WRF Hydro model output. The results will help us construct a CF compliant netCDF point data format for the community.

  12. An analytically solvable model for rapid evolution of modular structure.

    Directory of Open Access Journals (Sweden)

    Nadav Kashtan

    2009-04-01

    Biological systems often display modularity, in the sense that they can be decomposed into nearly independent subsystems. Recent studies have suggested that modular structure can spontaneously emerge if goals (environments) change over time, such that each new goal shares the same set of sub-problems with previous goals. Such modularly varying goals can also dramatically speed up evolution, relative to evolution under a constant goal. These studies were based on simulations of model systems, such as logic circuits and RNA structure, which are generally not easy to treat analytically. We present here a simple model for evolution under modularly varying goals that can be solved analytically. This model helps to understand some of the fundamental mechanisms that lead to the rapid emergence of modular structure under modularly varying goals. In particular, the model suggests a mechanism for the dramatic speedup in evolution observed under such temporally varying goals.

  13. Considering bioactivity in modelling continental growth and the Earth's evolution

    Science.gov (United States)

    Höning, D.; Spohn, T.

    2013-09-01

    The complexity of planetary evolution increases with the number of interacting reservoirs. On Earth, even the biosphere is speculated to interact with the interior. It has been argued (e.g., Rosing et al., 2006; Sleep et al., 2012) that the formation of continents could be a consequence of bioactivity harvesting solar energy through photosynthesis to help build the continents, and that the mantle should carry a chemical biosignature. Through plate tectonics, the surface biosphere can impact deep subduction zone processes and the interior of the Earth. Subducted sediments are particularly important, because they influence the Earth's interior in several ways, and in turn are strongly influenced by the Earth's biosphere. In our model, we use the assumption that a thick sedimentary layer of low permeability on top of the subducting oceanic crust, caused by a biologically enhanced weathering rate, can suppress shallow dewatering. This in turn leads to greater availability of water in the source region of andesitic partial melt, resulting in an enhanced rate of continental production and regassing rate into the mantle. Our model includes (i) mantle convection, (ii) continental erosion and production, and (iii) mantle water degassing at mid-ocean ridges and regassing at subduction zones. The mantle viscosity of our model depends on (i) the mantle water concentration and (ii) the mantle temperature, whose time dependency is given by radioactive decay of isotopes in the Earth's mantle. Boundary layer theory yields the speed of convection and the water outgassing rate of the Earth's mantle. Our results indicate that present day values of continental surface area and water content of the Earth's mantle represent an attractor in a phase plane spanned by both parameters. We show that the biologic enhancement of the continental erosion rate is important for the system to reach this fixed point. An abiotic Earth tends to reach an alternative stable fixed point with a smaller

  14. Modeling molecular boiling points using computed interaction energies.

    Science.gov (United States)

    Peterangelo, Stephen C; Seybold, Paul G

    2017-12-20

    The noncovalent van der Waals interactions between molecules in liquids are typically described in textbooks as occurring between the total molecular dipoles (permanent, induced, or transient) of the molecules. This notion was tested by examining the boiling points of 67 halogenated hydrocarbon liquids using quantum chemically calculated molecular dipole moments, ionization potentials, and polarizabilities obtained from semi-empirical (AM1 and PM3) and ab initio Hartree-Fock [HF 6-31G(d), HF 6-311G(d,p)], and density functional theory [B3LYP/6-311G(d,p)] methods. The calculated interaction energies and an empirical measure of hydrogen bonding were employed to model the boiling points of the halocarbons. It was found that only terms related to London dispersion energies and hydrogen bonding proved significant in the regression analyses, and the performances of the models generally improved at higher levels of quantum chemical computation. An empirical estimate for the molecular polarizabilities was also tested, and the best models for the boiling points were obtained using either this empirical polarizability itself or the polarizabilities calculated at the B3LYP/6-311G(d,p) level, along with the hydrogen-bonding parameter. The results suggest that the cohesive forces are more appropriately described as resulting from highly localized interactions rather than interactions between the global molecular dipoles.

  15. Spin Glass Models of Syntax and Language Evolution

    OpenAIRE

    Siva, Karthik; Tao, Jim; Marcolli, Matilde

    2015-01-01

    Using the SSWL database of syntactic parameters of world languages and the MIT Media Lab data on language interactions, we construct a spin glass model of language evolution. We treat binary syntactic parameters as spin states, with languages as vertices of a graph and interaction energies assigned along the edges. We study a rough model of syntax evolution under the assumption that a strong interaction energy tends to cause parameters to align, as in the case of ferromagnetic materials. W...

  16. Third generation masses from a two Higgs model fixed point

    International Nuclear Information System (INIS)

    Froggatt, C.D.; Knowles, I.G.; Moorhouse, R.G.

    1990-01-01

    The large mass ratio between the top and bottom quarks may be attributed to a hierarchy in the vacuum expectation values of scalar doublets. We consider an effective renormalisation group fixed point determination of the quartic scalar and third generation Yukawa couplings in such a two doublet model. This predicts a mass m_t = 220 GeV and a mass ratio m_b/m_τ = 2.6. In its simplest form the model also predicts the scalar masses, including a light scalar with a mass of order the b quark mass. Experimental implications are discussed. (orig.)

  17. [Modeling asthma evolution by a multi-state model].

    Science.gov (United States)

    Boudemaghe, T; Daurès, J P

    2000-06-01

    There are many scores for the evaluation of asthma, but most do not take into account the evolutionary aspects of this illness. We propose a model of the clinical course of asthma as a homogeneous Markov process, based on data provided by the A.R.I.A. (Association de Recherche en Intelligence Artificielle dans le cadre de l'asthme et des maladies respiratoires). The criterion used is the activity of the illness during the month before consultation, divided into three levels: light (state 1), mild (state 2) and severe (state 3). The model allows the estimation of the transition intensities between states. We found that the strongest intensities were those towards state 2 (lambda(12) and lambda(32)), weaker ones towards state 1 (lambda(21) and lambda(31)), and the weakest towards state 3 (lambda(23)). This results in an equilibrium distribution essentially divided between states 1 and 2 (44.6% and 51.0%, respectively), with a small proportion in state 3 (4.4%). In the future, the increasing amount of available data should permit the introduction of covariates, the distinction of subgroups and the implementation of clinical studies. The interest of this model lies in the quantification of the illness as well as in the representation it allows, while offering a formal framework for the clinical notions of time and evolution.
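    The equilibrium distribution of such a homogeneous Markov model follows directly from the transition intensities. A minimal sketch in Python, using hypothetical intensity values (the paper's fitted λ matrix is not reproduced in this record):

```python
import numpy as np

# Hypothetical transition intensities between the three activity states
# (1 = light, 2 = mild, 3 = severe); illustrative values only, not the
# A.R.I.A. estimates. Each row of an intensity matrix Q sums to zero.
Q = np.array([
    [-0.30,  0.28,  0.02],   # out of state 1
    [ 0.20, -0.25,  0.05],   # out of state 2
    [ 0.15,  0.40, -0.55],   # out of state 3
])

# The equilibrium distribution pi solves pi @ Q = 0 with sum(pi) = 1;
# stack the normalization row and solve by least squares.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # equilibrium probabilities of states 1, 2, 3
```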

  18. Dynamic Evolution Model Based on Social Network Services

    Science.gov (United States)

    Xiong, Xi; Gou, Zhi-Jian; Zhang, Shi-Bin; Zhao, Wen

    2013-11-01

    Based on an analysis of the evolutionary characteristics of public opinion in social networking services (SNS), we propose a dynamic evolution model in which opinions are coupled with topology. This model shows the clustering phenomenon of opinions in dynamic network evolution. Simulation results show that the model can fit data from a social network site. The dynamic evolution of networks accelerates opinion separation and aggregation. The scale and the number of clusters are influenced by the confidence limit and the rewiring probability. Dynamic changes of the topology reduce the number of isolated nodes, while an increased confidence limit allows nodes to communicate more fully. Together, these two effects make the distribution of opinion more neutral. The dynamic evolution of networks generates central clusters with high connectivity and high betweenness, which makes it difficult to control public opinion in SNS.
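    The role of the confidence limit can be illustrated with a much simpler bounded-confidence scheme (a Deffuant-style sketch, not the authors' coupled opinion-topology model): agents interact only when their opinions differ by less than the confidence limit, which drives the population into separated clusters.

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps, updates = 200, 0.2, 20000   # agents, confidence limit, pairwise updates

x = rng.random(n)                   # initial opinions, uniform on [0, 1]
for _ in range(updates):
    i, j = rng.integers(0, n, size=2)
    if i != j and abs(x[i] - x[j]) < eps:
        m = 0.5 * (x[i] + x[j])     # full compromise when within the limit
        x[i] = x[j] = m

# Opinions end up concentrated in a few internally close clusters.
print(np.sort(x)[[0, n // 2, -1]])
```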

  19. Analysis of relationship between registration performance of point cloud statistical model and generation method of corresponding points

    International Nuclear Information System (INIS)

    Yamaoka, Naoto; Watanabe, Wataru; Hontani, Hidekata

    2010-01-01

    When constructing a statistical point cloud model, one usually needs to compute corresponding points, and the resulting statistical model differs depending on the method used to compute them. This article examines the effect that different methods of computing corresponding points have on statistical models of human organs. We validated the performance of the statistical models by registering them to an organ surface in a 3D medical image. We compare two methods for computing corresponding points. The first, 'Generalized Multi-Dimensional Scaling (GMDS)', determines the corresponding points from the shapes of two curved surfaces. The second, the 'Entropy-based Particle system', chooses corresponding points by statistically evaluating a number of curved surfaces. Using these methods we construct the statistical models, and with these models we perform registration with the medical image. For the estimation we use non-parametric belief propagation, which estimates not only the position of the organ but also the probability density of the organ position. We evaluate how the two methods of computing corresponding points affect the statistical model through the change in the probability density of each point. (author)

  20. Zirconium - ab initio modelling of point defects diffusion

    International Nuclear Information System (INIS)

    Gasca, Petrica

    2010-01-01

    Zirconium, in alloy form, is the main element of the cladding found in pressurized water reactors. Under irradiation the cladding elongates significantly, a phenomenon attributed to the growth of vacancy dislocation loops in the basal planes of the hexagonal close-packed structure. The desire to understand the atomic-scale mechanisms behind this process motivated this work. Using ab initio atomic modeling we studied the structure and mobility of point defects in zirconium. This led us to identify four interstitial point defects with formation energies within an interval of 0.11 eV. The study of migration paths yielded activation energies, used as input parameters for a kinetic Monte Carlo code developed to calculate the diffusion coefficient of the interstitial point defects. Our results suggest a migration parallel to the basal plane twice as fast as that parallel to the c direction, with an activation energy of 0.08 eV, independent of the direction. The vacancy diffusion coefficient, estimated with a two-jump model, is also anisotropic, with a faster process in the basal planes than perpendicular to them. The influence of hydrogen on the nucleation of vacancy dislocation loops was also studied, owing to recent experimental observations of cladding growth acceleration in the presence of this element. (in French)

  1. Dissipative N-point-vortex Models in the Plane

    Science.gov (United States)

    Shashikanth, Banavara N.

    2010-02-01

    A method is presented for constructing point vortex models in the plane that dissipate the Hamiltonian function at any prescribed rate and yet conserve the level sets of the invariants of the Hamiltonian model arising from the SE(2) symmetries. The method is purely geometric in that it uses the level sets of the Hamiltonian and the invariants to construct the dissipative field, and is based on elementary classical geometry in ℝ³. Extension to higher-dimensional spaces, such as the point vortex phase space, is done using exterior algebra. The method is in fact general enough to apply to any smooth finite-dimensional system with conserved quantities, and, for certain special cases, the dissipative vector field constructed can be associated with an appropriately defined double Nambu-Poisson bracket. The most interesting feature of this method is that it allows for an infinite sequence of such dissipative vector fields to be constructed by repeated application of a symmetric linear operator (matrix) at each point of the intersection of the level sets.

  2. Development of a numerical 2-dimensional beach evolution model

    DEFF Research Database (Denmark)

    Baykal, Cüneyt

    2014-01-01

    This paper presents the description of a 2-dimensional numerical model constructed for the simulation of beach evolution under the action of wind waves only, over arbitrary land and sea topographies around existing coastal structures and formations. The developed beach evolution numerical model is composed of 4 submodels: a nearshore spectral wave transformation model based on an energy balance equation including random wave breaking and diffraction terms to compute the nearshore wave characteristics, a nearshore wave-induced circulation model based on the nonlinear shallow water equations to compute the nearshore depth-averaged wave-induced current velocities and mean water level changes, a sediment transport model to compute the local total sediment transport rates occurring under the action of wind waves, and a bottom evolution model to compute the bed level changes in time based

  3. Computer models of vocal tract evolution: an overview and critique

    NARCIS (Netherlands)

    de Boer, B.; Fitch, W. T.

    2010-01-01

    Human speech has been investigated with computer models since the invention of digital computers, and models of the evolution of speech first appeared in the late 1960s and early 1970s. Speech science and computer models have a long shared history because speech is a physical signal and can be

  4. Evolution of the Moon: the 1974 model

    International Nuclear Information System (INIS)

    Schmitt, H.H.

    1975-01-01

    The interpretive evolution of the Moon can now be divided into seven major stages beginning sometime near the end of the formation of the solar system. These stages and their approximate durations are as follows: 1. The Beginning, 4.6 billion years ago. 2. The Melted Shell, 4.6-4.4 billion years ago. 3. The Cratered Highlands, 4.4-4.1 billion years ago. 4. The Large Basins, 4.1-3.9 billion years ago. 5. The Light-Coloured Plains, 3.9-3.8 billion years ago. 6. The Basaltic Maria, 3.8-3.0 billion years ago. 7. The Quiet Crust, 3.0 billion years ago to the present. The Apollo and Luna explorations that permit the study of these stages of evolution have each contributed in progressive and significant ways. Through them, the early differentiation of the Earth, the nature of the Earth's protocrust, the influence of the formation of large impact basins in that crust, the effects of early partial melting of the protomantle, and possibly the earliest stages of the breakup of the protocrust into continents and ocean basins can now be looked at with new insight. (Auth.)

  5. a Modeling Method of Fluttering Leaves Based on Point Cloud

    Science.gov (United States)

    Tang, J.; Wang, Y.; Zhao, Y.; Hao, W.; Ning, X.; Lv, K.; Shi, Z.; Zhao, M.

    2017-09-01

    Leaves falling gently or fluttering are a common phenomenon in natural scenes, and the authenticity of falling leaves plays an important part in the dynamic modeling of such scenes. Falling-leaf models have wide applications in the fields of animation and virtual reality. In this paper we propose a novel modeling method for fluttering leaves based on point clouds. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined: rotation falling, roll falling and screw roll falling. In addition, a parallel algorithm based on OpenMP is implemented to satisfy real-time needs in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.

  6. A MODELING METHOD OF FLUTTERING LEAVES BASED ON POINT CLOUD

    Directory of Open Access Journals (Sweden)

    J. Tang

    2017-09-01

    Leaves falling gently or fluttering are a common phenomenon in natural scenes, and the authenticity of falling leaves plays an important part in the dynamic modeling of such scenes. Falling-leaf models have wide applications in the fields of animation and virtual reality. In this paper we propose a novel modeling method for fluttering leaves based on point clouds. According to the shape and weight of the leaves and the wind speed, three basic trajectories of falling leaves are defined: rotation falling, roll falling and screw roll falling. In addition, a parallel algorithm based on OpenMP is implemented to satisfy real-time needs in practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.

  7. A relativistic point coupling model for nuclear structure calculations

    International Nuclear Information System (INIS)

    Buervenich, T.; Maruhn, J.A.; Madland, D.G.; Reinhard, P.G.

    2002-01-01

    A relativistic point coupling model is discussed focusing on a variety of aspects. In addition to the coupling using various bilinear Dirac invariants, derivative terms are also included to simulate finite-range effects. The formalism is presented for nuclear structure calculations of ground state properties of nuclei in the Hartree and Hartree-Fock approximations. Different fitting strategies for the determination of the parameters have been applied and the quality of the fit obtainable in this model is discussed. The model is then compared more generally to other mean-field approaches both formally and in the context of applications to ground-state properties of known and superheavy nuclei. Perspectives for further extensions such as an exact treatment of the exchange terms using a higher-order Fierz transformation are discussed briefly. (author)

  8. Self-Exciting Point Process Modeling of Conversation Event Sequences

    Science.gov (United States)

    Masuda, Naoki; Takaguchi, Taro; Sato, Nobuo; Yano, Kazuo

    Self-exciting processes of Hawkes type have been used to model various phenomena including earthquakes, neural activities, and views of online videos. Studies of temporal networks have revealed that sequences of social interevent times for individuals are highly bursty. We examine some basic properties of event sequences generated by the Hawkes self-exciting process to show that it generates bursty interevent times for a wide parameter range. Then, we fit the model to data on conversation sequences recorded in company offices in Japan. In this way, we can estimate the relative magnitudes of the self-excitation, its temporal decay, and the base event rate independent of the self-excitation. These variables depend strongly on individuals. We also point out an important limitation of the Hawkes model: the correlation in the interevent times and the burstiness cannot be independently modulated.
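    A Hawkes process with an exponential kernel can be simulated by Ogata's thinning algorithm. The sketch below uses illustrative parameters, not values fitted to the conversation data:

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, t_end, seed=42):
    """Simulate a Hawkes process with exponential kernel by Ogata thinning.

    Intensity: lam(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i)).
    The process is stationary when alpha / beta < 1.
    """
    rng = random.Random(seed)
    events, t = [], 0.0
    while t < t_end:
        # Current intensity dominates the intensity until the next event,
        # since the exponential kernel only decays between events.
        lam_bar = mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)          # candidate point
        if t >= t_end:
            break
        lam_t = mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() <= lam_t / lam_bar:    # accept with prob lam(t)/lam_bar
            events.append(t)
    return events

ev = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, t_end=200.0)
print(len(ev))  # stationary rate is mu / (1 - alpha/beta) = 1.5 events per unit time
```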

  9. Evolutive masing model, cyclic plasticity, ageing and memory effects

    International Nuclear Information System (INIS)

    Sidoroff, F.

    1987-01-01

    Many models have been proposed for the mechanical description of the cyclic behaviour of metals and used for structural analysis under cyclic loading. Such a model must include two basic features: dissipative behaviour on each cycle (the hysteresis loop), and evolution of this behaviour during the material's life (cyclic hardening or softening, ageing, ...). However, while both aspects are present in most existing models, the balance between them may be quite different. Many metallurgical investigations have examined the microstructure and its evolution during cyclic loading, and it is desirable to introduce this information into phenomenological models. The evolutive Masing model has been proposed to combine: the accuracy of hereditary models for the description of hysteresis on each cycle; the versatility of internal variables for the state description and evolution; and a sufficient microstructural basis to make the interaction easier with microstructural investigations. The purpose of the present work is to discuss this model and to compare different evolution assumptions with respect to some memory effects (cyclic hardening and softening, multilevel tests, ageing). Attention is limited to uniaxial, rate-independent elasto-plastic behaviour.

  10. Application of catastrophe theory to a point model for bumpy torus with neoclassical non-resonant electrons

    Energy Technology Data Exchange (ETDEWEB)

    Punjabi, A; Vahala, G [College of William and Mary, Williamsburg, VA (USA). Dept. of Physics

    1983-12-01

    The point model for the toroidal core plasma in the ELMO Bumpy Torus (with neoclassical non-resonant electrons) is examined in the light of catastrophe theory. Even though the point model equations do not constitute a gradient dynamic system, the equilibrium surfaces are similar to those of the canonical cusp catastrophe. The point model is then extended to incorporate ion cyclotron resonance heating. A detailed parametric study of the equilibria is presented. Further, the nonlinear time evolution of these equilibria is studied, and it is observed that the point model obeys the delay convention (and hence hysteresis) and shows catastrophes at the fold edges of the equilibrium surfaces. Tentative applications are made to experimental results.

  11. FIRST PRISMATIC BUILDING MODEL RECONSTRUCTION FROM TOMOSAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    Y. Sun

    2016-06-01

    This paper demonstrates for the first time the potential of explicitly modelling individual roof surfaces to reconstruct 3-D prismatic building models using spaceborne tomographic synthetic aperture radar (TomoSAR) point clouds. The proposed approach is modular and works as follows: it first extracts the buildings via DSM generation and cutting off the ground terrain. The DSM is smoothed using the BM3D denoising method proposed in (Dabov et al., 2007), and a gradient map of the smoothed DSM is generated based on height jumps. Watershed segmentation is then adopted to oversegment the DSM into different regions. Subsequently, height- and polygon-complexity-constrained merging is employed to refine (i.e., to reduce) the retrieved number of roof segments. A coarse outline of each roof segment is then reconstructed and later refined using quadtree-based regularization plus a zig-zag line simplification scheme. Finally, a height is associated with each refined roof segment to obtain the 3-D prismatic model of the building. The proposed approach is illustrated and validated over a large building (a convention center) in the city of Las Vegas using TomoSAR point clouds generated from a stack of 25 images with the Tomo-GENESIS software developed at DLR.

  12. Modelling the evolution and diversity of cumulative culture

    Science.gov (United States)

    Enquist, Magnus; Ghirlanda, Stefano; Eriksson, Kimmo

    2011-01-01

    Previous work on mathematical models of cultural evolution has mainly focused on the diffusion of simple cultural elements. However, a characteristic feature of human cultural evolution is the seemingly limitless appearance of new and increasingly complex cultural elements. Here, we develop a general modelling framework to study such cumulative processes, in which we assume that the appearance and disappearance of cultural elements are stochastic events that depend on the current state of culture. Five scenarios are explored: evolution of independent cultural elements, stepwise modification of elements, differentiation or combination of elements and systems of cultural elements. As one application of our framework, we study the evolution of cultural diversity (in time as well as between groups). PMID:21199845
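    The appearance and disappearance of cultural elements as stochastic events can be sketched as a state-dependent birth-death process simulated with a Gillespie loop. The rates below are illustrative only, not the paper's five scenarios:

```python
import numpy as np

rng = np.random.default_rng(3)

# Each existing element spawns a new (e.g., modified) element at rate b
# and is lost at rate d; total event rate scales with the current count n.
b, d = 0.12, 0.10          # per-element appearance / disappearance rates
n, t, t_end = 5, 0.0, 200.0
trajectory = [(t, n)]
while t < t_end and n > 0:
    total = n * (b + d)                      # total event rate in state n
    t += rng.exponential(1.0 / total)        # waiting time to the next event
    n += 1 if rng.random() < b / (b + d) else -1
    trajectory.append((t, n))

print(n)  # number of cultural elements at the end (0 if culture went extinct)
```

With b > d the process is supercritical, so culture tends to accumulate elements, although extinction from a small initial repertoire remains possible.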

  13. The Critical Point Entanglement and Chaos in the Dicke Model

    Directory of Open Access Journals (Sweden)

    Lina Bao

    2015-07-01

    Ground state properties and level statistics of the Dicke model for a finite number of atoms are investigated based on a progressive diagonalization scheme (PDS). Particle number statistics, the entanglement measure and the Shannon information entropy at the resonance point are calculated as functions of the coupling parameter for cases with a finite number of atoms. It is shown that the entanglement measure, defined in terms of the normalized von Neumann entropy of the reduced density matrix of the atoms, reaches its maximum value at the critical point of the quantum phase transition, where the system is most chaotic. A noticeable change in the Shannon information entropy near or at the critical point of the quantum phase transition is also observed. In addition, the quantum phase transition may be observed not only in the ground state mean photon number and the ground state atomic inversion, as shown previously, but also in the fluctuations of these two quantities in the ground state, especially in the atomic inversion fluctuation.
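    The von Neumann entropy underlying such an entanglement measure is straightforward to compute for any pure bipartite state via the Schmidt (singular value) decomposition. A small self-contained sketch on a two-qubit example, not the Dicke-model diagonalization itself:

```python
import numpy as np

def entanglement_entropy(psi, dim_a, dim_b):
    """Von Neumann entropy of the reduced density matrix of subsystem A.

    For a pure state on H_A (x) H_B, the eigenvalues of the reduced density
    matrix are the squared singular values of psi reshaped to dim_a x dim_b.
    """
    s = np.linalg.svd(psi.reshape(dim_a, dim_b), compute_uv=False)
    p = s**2
    p = p[p > 1e-12]                      # drop vanishing Schmidt coefficients
    return float(-(p * np.log(p)).sum())

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # maximally entangled
product = np.array([1.0, 0.0, 0.0, 0.0])             # separable state
print(entanglement_entropy(bell, 2, 2))     # ln 2, about 0.693
print(entanglement_entropy(product, 2, 2))  # 0.0
```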

  14. Multiplicative point process as a model of trading activity

    Science.gov (United States)

    Gontis, V.; Kaulakys, B.

    2004-11-01

    Signals consisting of a sequence of pulses show that an inherent origin of the 1/f noise is a Brownian fluctuation of the average interevent time between subsequent pulses of the pulse sequence. In this paper, we generalize the model of interevent time to reproduce a variety of self-affine time series exhibiting power spectral density S(f) scaling as a power of the frequency f. Furthermore, we analyze the relation between the power-law correlations and the origin of the power-law probability distribution of the signal intensity. We introduce a stochastic multiplicative model for the time intervals between point events and analyze the statistical properties of the signal analytically and numerically. Such a model system exhibits power-law spectral density S(f) ∼ 1/f^β for various values of β, including β = 1/2, 1 and 3/2. Explicit expressions for the power spectra in the low-frequency limit and for the distribution density of the interevent time are obtained. The counting statistics of the events are analyzed analytically and numerically as well. The specific interest of our analysis relates to the financial markets, where long-range correlations of price fluctuations largely depend on the number of transactions. We analyze the spectral density and counting statistics of the number of transactions. The model reproduces the spectral properties of real markets and explains the mechanism behind the power-law distribution of trading activity. The study provides evidence that the statistical properties of financial markets are embodied in the statistics of the time intervals between trades. A multiplicative point process serves as a consistent model generating these statistics.
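    A multiplicative model for the interevent times can be sketched as an iteration of the form tau_{k+1} = tau_k + gamma * tau_k^(2*mu - 1) + sigma * tau_k^mu * eps_k restricted to an interval. The parameter values and the boundary treatment below are illustrative assumptions, not the paper's exact specification:

```python
import numpy as np

rng = np.random.default_rng(1)

# Multiplicative iteration for the interevent time tau_k (illustrative
# parameters; the original model's reflecting-boundary details may differ):
#   tau_{k+1} = tau_k + gamma * tau_k**(2*mu - 1) + sigma * tau_k**mu * eps_k
gamma, sigma, mu = 0.0004, 0.02, 0.5
tau_min, tau_max = 1e-3, 1.0

n = 50_000
tau = np.empty(n)
tau[0] = 0.1
for k in range(n - 1):
    nxt = tau[k] + gamma * tau[k]**(2*mu - 1) + sigma * tau[k]**mu * rng.normal()
    tau[k + 1] = min(max(nxt, tau_min), tau_max)   # clip to [tau_min, tau_max]

events = np.cumsum(tau)   # event times; trading activity = event counts per window
print(len(events), float(events[-1]))
```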

  15. General Approach to the Evolution of Singlet Nanoparticles from a Rapidly Quenched Point Source

    NARCIS (Netherlands)

    Feng, J.; Huang, Luyi; Ludvigsson, Linus; Messing, Maria; Maiser, A.; Biskos, G.; Schmidt-Ott, A.

    2016-01-01

    Among the numerous point vapor sources, microsecond-pulsed spark ablation at atmospheric pressure is a versatile and environmentally friendly method for producing ultrapure inorganic nanoparticles ranging from singlets having sizes smaller than 1 nm to larger agglomerated structures. Due to its fast

  16. Complex magnetic monopoles, geometric phases and quantum evolution in the vicinity of diabolic and exceptional points

    International Nuclear Information System (INIS)

    Nesterov, Alexander I; Aceves de la Cruz, F

    2008-01-01

    We consider the geometric phase and quantum tunneling in the vicinity of diabolic and exceptional points. We show that the geometric phase associated with the degeneracy points is defined by the flux of complex magnetic monopoles. In the limit of weak coupling, the leading contribution to the real part of the geometric phase is given by the flux of the Dirac monopole plus a quadrupole term, and the expansion of the imaginary part starts with a dipole-like field. For a two-level system governed by a generic non-Hermitian Hamiltonian, we derive a formula to compute the non-adiabatic, complex, geometric phase by integrating over the complex Bloch sphere. We apply our results to study a dissipative two-level system driven by a periodic electromagnetic field and show that, in the vicinity of the exceptional point, the complex geometric phase behaves like a step-function. Studying the tunneling process near and at the exceptional point, we find two different regimes: coherent and incoherent. The coherent regime is characterized by Rabi oscillations, with a one-sheeted hyperbolic monopole emerging in this region of the parameters. The two-sheeted hyperbolic monopole is associated with the incoherent regime. We show that the dissipation results in a series of pulses in the complex geometric phase which disappear when the dissipation dies out. Such a strong coupling effect of the environment is beyond the conventional adiabatic treatment of the Berry phase

  17. Multiobjective optimization of an extremal evolution model

    International Nuclear Information System (INIS)

    Elettreby, M.F.

    2004-09-01

    We propose a two-dimensional model for a co-evolving ecosystem that generalizes the extremal coupled map lattice model. The model takes into account the concept of multiobjective optimization. We find that the system self-organizes into a critical state: the distributions of the distances between subsequent mutations, as well as the distribution of avalanche sizes, follow power laws. (author)
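    The single-objective extremal dynamics that such models generalize (Bak-Sneppen type) is compact enough to simulate directly. A sketch on a one-dimensional ring, not the paper's two-dimensional multiobjective variant:

```python
import numpy as np

rng = np.random.default_rng(7)

# One-dimensional Bak-Sneppen extremal dynamics: at every step the
# least-fit site and its two neighbours receive new random fitnesses.
n, steps = 256, 100_000
fitness = rng.random(n)
for _ in range(steps):
    i = int(np.argmin(fitness))
    for j in (i - 1, i, (i + 1) % n):   # j = -1 wraps around in NumPy
        fitness[j] = rng.random()

# In the self-organized critical state most fitnesses sit above a
# threshold (about 0.667 in 1-D), and avalanche sizes follow a power law.
print(float(fitness.min()), float(np.median(fitness)))
```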

  18. Evolutive Masing model, cycling plasticity, ageing and memory effects

    International Nuclear Information System (INIS)

    Sidoroff, F.

    1987-01-01

    Many models have been proposed for the mechanical description of the cyclic behaviour of metals and are used for structural analysis under cyclic loading. The evolutive Masing model has been proposed (Fougeres, Sidoroff, Vincent and Waille 1985) to combine: the accuracy of hereditary models in describing the hysteresis of each cycle; the versatility of internal variables for state description and evolution; and a sufficient microstructural basis to ease interaction with microstructural investigations. The purpose of the present work is to discuss this model and to compare different evolution assumptions with respect to some memory effects (cyclic hardening and softening, multilevel tests, ageing). Attention is limited to uniaxial, rate-independent elasto-plastic behaviour. (orig./GL)

  19. A last updating evolution model for online social networks

    Science.gov (United States)

    Bu, Zhan; Xia, Zhengyou; Wang, Jiandong; Zhang, Chengcui

    2013-05-01

    As information technology has advanced, people are turning to electronic media more frequently for communication, and social relationships are increasingly found on online channels. However, knowledge about the actual evolution of online social networks remains very limited. In this paper, we propose and study a novel evolving network model built around the new concept of “last updating time”, which exists in many real-life online social networks. The last updating evolution network model maintains the robustness of scale-free networks and improves the network's resilience against intentional attacks. Moreover, we found that it exhibits the “small-world effect”, an inherent property of most social networks. Simulation experiments based on this model show that the results are consistent with real-life data, indicating that our model is valid.
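The abstract does not specify the exact "last updating time" mechanism, so the sketch below is only a hedged illustration of the general idea: a growing network in which the attachment probability mixes node degree (classic preferential attachment) with a hypothetical recency weight derived from each node's last update time. The weighting form, parameter names, and default values are all assumptions for illustration, not the authors' model.

```python
import random

def grow_network(n_nodes=300, m_links=2, recency_weight=0.5, seed=7):
    """Grow a network mixing degree and recency in the attachment rule.

    score(v) = (1 - w) * degree(v) + w / (t - last_update(v)),
    where the recency term is a hypothetical stand-in for the paper's
    'last updating time' concept (t - last_update >= 1 always holds here).
    """
    random.seed(seed)
    edges = {(0, 1)}                      # start from a single link
    degree = {0: 1, 1: 1}
    last_update = {0: 0, 1: 0}
    for t in range(2, n_nodes):
        scores = {v: (1 - recency_weight) * degree[v]
                     + recency_weight / (t - last_update[v])
                  for v in degree}
        total = sum(scores.values())
        targets = set()
        while len(targets) < min(m_links, len(degree)):
            r = random.random() * total   # roulette-wheel selection
            acc = 0.0
            for v, s in scores.items():
                acc += s
                if acc >= r:
                    targets.add(v)
                    break
        degree[t] = 0
        last_update[t] = t
        for v in targets:
            edges.add((v, t))
            degree[v] += 1
            degree[t] += 1
            last_update[v] = t            # linking counts as an update
    return degree, edges

degree, edges = grow_network()
```

With `recency_weight=0` this reduces to plain preferential attachment; raising it shifts links toward recently active nodes.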

  20. Two-point model for electron transport in EBT

    International Nuclear Information System (INIS)

    Chiu, S.C.; Guest, G.E.

    1980-01-01

    The electron transport in EBT is simulated by a two-point model corresponding to the central plasma and the edge. The central plasma is assumed to obey neoclassical collisionless transport. The edge plasma is assumed turbulent and modeled by Bohm diffusion. The steady-state temperatures and densities in both regions are obtained as functions of neutral influx and microwave power. It is found that as the neutral influx decreases and power increases, the edge density decreases while the core density increases. We conclude that if ring instability is responsible for the T-M mode transition, and if stability is correlated with cold electron density at the edge, it will depend sensitively on ambient gas pressure and microwave power

  1. Two-point functions in a holographic Kondo model

    Science.gov (United States)

    Erdmenger, Johanna; Hoyos, Carlos; O'Bannon, Andy; Papadimitriou, Ioannis; Probst, Jonas; Wu, Jackson M. S.

    2017-03-01

    We develop the formalism of holographic renormalization to compute two-point functions in a holographic Kondo model. The model describes a (0 + 1)-dimensional impurity spin of a gauged SU(N) interacting with a (1 + 1)-dimensional, large-N, strongly-coupled Conformal Field Theory (CFT). We describe the impurity using Abrikosov pseudo-fermions, and define an SU(N)-invariant scalar operator O built from a pseudo-fermion and a CFT fermion. At large N the Kondo interaction is of the form O†O, which is marginally relevant, and generates a Renormalization Group (RG) flow at the impurity. A second-order mean-field phase transition occurs in which O condenses below a critical temperature, leading to the Kondo effect, including screening of the impurity. Via holography, the phase transition is dual to holographic superconductivity in (1 + 1)-dimensional Anti-de Sitter space. At all temperatures, spectral functions of O exhibit a Fano resonance, characteristic of a continuum of states interacting with an isolated resonance. In contrast to Fano resonances observed for example in quantum dots, our continuum and resonance arise from a (0 + 1)-dimensional UV fixed point and RG flow, respectively. In the low-temperature phase, the resonance comes from a pole in the Green's function of the form −i⟨O⟩², which is characteristic of a Kondo resonance.

  2. SALLY, Dynamic Behaviour of Reactor Cooling Channel by Point Model

    International Nuclear Information System (INIS)

    Reiche, Chr.; Ziegenbein, D.

    1981-01-01

    1 - Nature of the physical problem solved: The dynamical behaviour of a cooling channel is calculated. Starting from an equilibrium state, a perturbation is introduced into the system. This may be an external reactivity perturbation or a change in the coolant velocity or coolant temperature. The neutron kinetics is treated in the framework of the one-point model. The cooling channel consists of a cladded and cooled fuel rod. The temperature distribution is represented as an array over a mesh of radial zones and axial layers. Heat transfer is considered in the radial direction only; the thermodynamic coupling of the different layers is obtained through the coolant flow. The thermal material parameters are considered temperature independent. Reactivity feedback is introduced by means of reactivity coefficients for fuel, canning, and coolant. Doppler broadening is included. The first cooling cycle can be taken into account by a simple model. 2 - Method of solution: The integration of the point kinetics equations is done numerically by the P11 scheme. The system of temperature equations with constant heat resistance coefficients is solved by the method of factorization. 3 - Restrictions on the complexity of the problem: Given limits are: 10 radial fuel zones, 25 axial layers, 6 groups of delayed neutrons
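The one-point kinetics equations referred to here have the standard form dn/dt = ((ρ − β)/Λ)n + Σᵢ λᵢCᵢ and dCᵢ/dt = (βᵢ/Λ)n − λᵢCᵢ for six delayed-neutron groups. A minimal explicit-Euler sketch of their integration is below; it is not the P11 scheme used by SALLY, and the group constants are typical thermal U-235 values quoted for illustration only.

```python
# Point-reactor kinetics with six delayed-neutron groups, explicit Euler.
# Group constants are illustrative textbook values for thermal U-235;
# SALLY itself integrates these equations with the P11 scheme, not Euler.
BETA_I   = [0.000215, 0.001424, 0.001274, 0.002568, 0.000748, 0.000273]
LAMBDA_I = [0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01]   # decay constants, 1/s
BETA = sum(BETA_I)                                       # total delayed fraction
GEN_TIME = 1.0e-4                                        # generation time Lambda, s

def point_kinetics(rho, t_end=0.5, dt=1.0e-5, n0=1.0):
    """Integrate relative power n(t) for a constant reactivity step rho."""
    n = n0
    # start from the equilibrium precursor concentrations for n0
    c = [b * n0 / (GEN_TIME * lam) for b, lam in zip(BETA_I, LAMBDA_I)]
    for _ in range(int(t_end / dt)):
        dn = ((rho - BETA) / GEN_TIME) * n \
             + sum(lam * ci for lam, ci in zip(LAMBDA_I, c))
        dc = [(b / GEN_TIME) * n - lam * ci
              for b, lam, ci in zip(BETA_I, LAMBDA_I, c)]
        n += dt * dn
        c = [ci + dt * dci for ci, dci in zip(c, dc)]
    return n

# At zero reactivity and equilibrium precursors, the power stays constant;
# a positive (sub-prompt-critical) step raises it, a negative step lowers it.
```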

  3. Two-point functions in a holographic Kondo model

    Energy Technology Data Exchange (ETDEWEB)

    Erdmenger, Johanna [Institut für Theoretische Physik und Astrophysik, Julius-Maximilians-Universität Würzburg,Am Hubland, D-97074 Würzburg (Germany); Max-Planck-Institut für Physik (Werner-Heisenberg-Institut),Föhringer Ring 6, D-80805 Munich (Germany); Hoyos, Carlos [Department of Physics, Universidad de Oviedo, Avda. Calvo Sotelo 18, 33007, Oviedo (Spain); O’Bannon, Andy [STAG Research Centre, Physics and Astronomy, University of Southampton,Highfield, Southampton SO17 1BJ (United Kingdom); Papadimitriou, Ioannis [SISSA and INFN - Sezione di Trieste, Via Bonomea 265, I 34136 Trieste (Italy); Probst, Jonas [Rudolf Peierls Centre for Theoretical Physics, University of Oxford,1 Keble Road, Oxford OX1 3NP (United Kingdom); Wu, Jackson M.S. [Department of Physics and Astronomy, University of Alabama, Tuscaloosa, AL 35487 (United States)

    2017-03-07

    We develop the formalism of holographic renormalization to compute two-point functions in a holographic Kondo model. The model describes a (0+1)-dimensional impurity spin of a gauged SU(N) interacting with a (1+1)-dimensional, large-N, strongly-coupled Conformal Field Theory (CFT). We describe the impurity using Abrikosov pseudo-fermions, and define an SU(N)-invariant scalar operator O built from a pseudo-fermion and a CFT fermion. At large N the Kondo interaction is of the form O†O, which is marginally relevant, and generates a Renormalization Group (RG) flow at the impurity. A second-order mean-field phase transition occurs in which O condenses below a critical temperature, leading to the Kondo effect, including screening of the impurity. Via holography, the phase transition is dual to holographic superconductivity in (1+1)-dimensional Anti-de Sitter space. At all temperatures, spectral functions of O exhibit a Fano resonance, characteristic of a continuum of states interacting with an isolated resonance. In contrast to Fano resonances observed for example in quantum dots, our continuum and resonance arise from a (0+1)-dimensional UV fixed point and RG flow, respectively. In the low-temperature phase, the resonance comes from a pole in the Green's function of the form −i⟨O⟩², which is characteristic of a Kondo resonance.

  4. Application of the evolution theory in modelling of innovation diffusion

    Directory of Open Access Journals (Sweden)

    Krstić Milan

    2016-01-01

    The theory of evolution has found numerous analogies and applications in scientific disciplines other than biology. In that sense, the so-called 'memetic evolution' is now widely accepted. Memes form a complex adaptive system, where one 'meme' represents an evolutionary cultural element, i.e. the smallest unit of information that can be identified and used to explain the evolution process. Among others, the field of innovation has proved to be a suitable area where the theory of evolution can also be applied successfully. In this work the authors start from the assumption that the theory of evolution can likewise be applied to modelling the process of innovation diffusion. Based on the theoretical research conducted, the authors conclude that the process of innovation diffusion, interpreted through 'memes', is in fact the process of imitation of the innovation 'meme'. Since certain 'memes' replicate more successfully than others, this eventually leads to their natural selection. For the survival of innovation 'memes', their longevity, fecundity, and copying fidelity are of key importance. The results of the research confirm the assumption that the theory of evolution can be applied to innovation diffusion through innovation 'memes', which opens up perspectives for new research on the subject.

  5. Modeling the summertime evolution of sea-ice melt ponds

    DEFF Research Database (Denmark)

    Lüthje, Mikael; Feltham, D.L.; Taylor, P.D.

    2006-01-01

    We present a mathematical model describing the summer melting of sea ice. We simulate the evolution of melt ponds and determine area coverage and total surface ablation. The model predictions are tested for sensitivity to the melt rate of unponded ice, enhanced melt rate beneath the melt ponds...

  6. Optimality models in the age of experimental evolution and genomics.

    Science.gov (United States)

    Bull, J J; Wang, I-N

    2010-09-01

    Optimality models have been used to predict evolution of many properties of organisms. They typically neglect genetic details, whether by necessity or design. This omission is a common source of criticism, and although this limitation of optimality is widely acknowledged, it has mostly been defended rather than evaluated for its impact. Experimental adaptation of model organisms provides a new arena for testing optimality models and for simultaneously integrating genetics. First, an experimental context with a well-researched organism allows dissection of the evolutionary process to identify causes of model failure--whether the model is wrong about genetics or selection. Second, optimality models provide a meaningful context for the process and mechanics of evolution, and thus may be used to elicit realistic genetic bases of adaptation--an especially useful augmentation to well-researched genetic systems. A few studies of microbes have begun to pioneer this new direction. Incompatibility between the assumed and actual genetics has been demonstrated to be the cause of model failure in some cases. More interestingly, evolution at the phenotypic level has sometimes matched prediction even though the adaptive mutations defy mechanisms established by decades of classic genetic studies. Integration of experimental evolutionary tests with genetics heralds a new wave for optimality models and their extensions that does not merely emphasize the forces driving evolution.

  7. Fluorine in the solar neighborhood: Chemical evolution models

    Science.gov (United States)

    Spitoni, E.; Matteucci, F.; Jönsson, H.; Ryde, N.; Romano, D.

    2018-04-01

    Context: In light of new observational data on fluorine abundances in solar neighborhood stars, we present chemical evolution models testing various fluorine nucleosynthesis prescriptions with the aim of best fitting these new data. Aims: We consider chemical evolution models in the solar neighborhood testing various nucleosynthesis prescriptions for fluorine production, with the aim of reproducing the observed abundance ratios [F/O] versus [O/H] and [F/Fe] versus [Fe/H]. We study in detail the effects of various stellar yields on fluorine production. Methods: We adopted two chemical evolution models: the classical two-infall model, which follows the chemical evolution of the halo-thick disk and thin disk phases; and the one-infall model, which is designed only for thin disk evolution. We tested the effects on the predicted fluorine abundance ratios of various nucleosynthesis yield sources, that is, asymptotic giant branch (AGB) stars, Wolf-Rayet (W-R) stars, Type II and Type Ia supernovae, and novae. Results: The fluorine production is dominated by AGB stars, but W-R stars are required to reproduce the trend of the observed data in the solar neighborhood with our chemical evolution models. In particular, the best model in both the two-infall and one-infall cases requires an increase of the W-R yields by a factor of 2. We also show that novae, even though their yields are still uncertain, could help to better reproduce the secondary behavior of F in the [F/O] versus [O/H] relation. Conclusions: The inclusion of fluorine production by W-R stars appears essential to reproduce the newly observed [F/O] versus [O/H] ratio in the solar neighborhood. Moreover, the inclusion of novae helps to reproduce the observed fluorine secondary behavior substantially.

  8. A CASE STUDY ON POINT PROCESS MODELLING IN DISEASE MAPPING

    Directory of Open Access Journals (Sweden)

    Viktor Beneš

    2011-05-01

    We consider a data set of locations where people in Central Bohemia have been infected by tick-borne encephalitis (TBE), and where population census data and covariates concerning vegetation and altitude are available. The aims are to estimate the risk map of the disease and to study the dependence of the risk on the covariates. Instead of using the common area-level approaches, we base the analysis on a Bayesian approach for a log Gaussian Cox point process with covariates. Posterior characteristics of a discretized version of the log Gaussian Cox process are computed using Markov chain Monte Carlo methods. A particular problem, which is thoroughly discussed, is determining a model for the background population density. The risk map depends clearly on the population intensity model, and the basic model adopted for the population intensity determines which covariates influence the risk of TBE. Model validation is based on the posterior predictive distribution of various summary statistics.
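A discretized log Gaussian Cox process can be sketched in a few lines: draw a latent Gaussian field on a grid, exponentiate it together with a covariate effect to obtain the intensity, and draw Poisson counts per cell. In this sketch the Gaussian field is crudely approximated by neighbourhood-smoothed white noise rather than an exact covariance model, and all parameters (grid size, coefficient values, the altitude-like covariate) are illustrative assumptions; this is a forward simulation, not the paper's Bayesian MCMC analysis.

```python
import math
import random

def sample_lgcp(nx=20, ny=20, mean=0.0, beta_cov=0.5, seed=3):
    """Forward-simulate a discretized log Gaussian Cox process (sketch).

    log-intensity per cell = mean + beta_cov * covariate + latent field,
    where the latent field is approximated by 3x3-averaged white noise.
    """
    random.seed(seed)
    noise = [[random.gauss(0.0, 1.0) for _ in range(ny)] for _ in range(nx)]
    # a made-up spatial covariate standing in for e.g. altitude
    covariate = [[x / nx for _ in range(ny)] for x in range(nx)]

    def smoothed(x, y):
        vals = [noise[i][j]
                for i in range(max(0, x - 1), min(nx, x + 2))
                for j in range(max(0, y - 1), min(ny, y + 2))]
        return sum(vals) / len(vals)

    def poisson(lam):
        # Knuth's algorithm; adequate for the small intensities used here
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= random.random()
            if p <= limit:
                return k
            k += 1

    counts = [[poisson(math.exp(mean + beta_cov * covariate[x][y] + smoothed(x, y)))
               for y in range(ny)] for x in range(nx)]
    return counts

counts = sample_lgcp()
```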

  9. Flat Knitting Loop Deformation Simulation Based on Interlacing Point Model

    Directory of Open Access Journals (Sweden)

    Jiang Gaoming

    2017-12-01

    In order to create realistic loop primitives suitable for faster CAD of flat-knitted fabric, we have studied a model of the loop as well as the variation of the loop surface. This paper proposes an interlacing point-based model for the loop center curve, and uses cubic Bezier curves to fit the central curves of the regular loop, elongated loop, transfer loop, and irregular deformed loop. In this way, a general model for the central curve of the deformed loop is obtained. The model is then used to perform texture mapping, texture interpolation, and brightness processing, simulating a clearly structured and lifelike deformed loop. The computer program LOOP was developed using this algorithm. The deformed loop is simulated with different yarns and applied to the design of a cable stitch, demonstrating the feasibility of the proposed algorithm. This paper provides a loop primitive simulation method characterized by lifelikeness, yarn material variability, and deformation flexibility, and facilitates loop-based fast computer-aided design (CAD) of knitted fabric.
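The cubic Bezier fit at the heart of the loop model evaluates as B(t) = (1−t)³P₀ + 3(1−t)²tP₁ + 3(1−t)t²P₂ + t³P₃ for t in [0, 1]. A minimal evaluator is below; the control points are made up for illustration and are not taken from the paper.

```python
def bezier3(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    return tuple(u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

# Hypothetical control points for one loop's central curve (illustrative only).
p0, p1, p2, p3 = (0.0, 0.0), (0.3, 1.0), (0.7, 1.0), (1.0, 0.0)
curve = [bezier3(p0, p1, p2, p3, i / 20) for i in range(21)]
```

Deforming a loop then amounts to moving the control points and re-sampling the curve, which is what makes the Bezier representation convenient for the elongated, transfer, and irregular loop shapes discussed above.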

  10. Continuous "in vitro" Evolution of a Ribozyme Ligase: A Model Experiment for the Evolution of a Biomolecule

    Science.gov (United States)

    Ledbetter, Michael P.; Hwang, Tony W.; Stovall, Gwendolyn M.; Ellington, Andrew D.

    2013-01-01

    Evolution is a defining criterion of life and is central to understanding biological systems. However, the timescale of evolutionary shifts in phenotype limits most classroom evolution experiments to simple probability simulations. "In vitro" directed evolution (IVDE) frequently serves as a model system for the study of Darwinian…

  11. Evolution and experience with the ATLAS Simulation at Point1 Project

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00389536; The ATLAS collaboration; Brasolin, Franco; Kouba, Tomas; Schovancova, Jaroslava; Fazio, Daniel; Di Girolamo, Alessandro; Scannicchio, Diana; Twomey, Matthew Shaun; Wang, Fuquan; Zaytsev, Alexander; Lee, Christopher

    2017-01-01

    The Simulation at Point1 project is successfully running standard ATLAS simulation jobs on the TDAQ HLT resources. The pool of available resources changes dynamically, therefore we need to be very effective in exploiting the available computing cycles. We present our experience with using the Event Service that provides the event-level granularity of computations. We show the design decisions and overhead time related to the usage of the Event Service. The improved utilization of the resources is also presented with the recent development in monitoring, automatic alerting, deployment and GUI.

  12. Evolution and experience with the ATLAS simulation at Point1 project

    CERN Document Server

    Ballestrero, Sergio; The ATLAS collaboration; Fazio, Daniel; Di Girolamo, Alessandro; Kouba, Tomas; Lee, Christopher; Scannicchio, Diana; Schovancova, Jaroslava; Twomey, Matthew Shaun; Wang, Fuquan; Zaytsev, Alexander

    2016-01-01

    The Simulation at Point1 project is successfully running traditional ATLAS simulation jobs on the TDAQ HLT resources. The pool of available resources changes dynamically, therefore we need to be very effective in exploiting the available computing cycles. We will present our experience with using the Event Service that provides the event-level granularity of computations. We will show the design decisions and overhead time related to the usage of the Event Service. The improved utilization of the resources will also be presented with the recent development in monitoring, automatic alerting, deployment and GUI.

  13. Numerical schemes for one-point closure turbulence models

    International Nuclear Information System (INIS)

    Larcher, Aurelien

    2010-01-01

    First-order Reynolds Averaged Navier-Stokes (RANS) turbulence models are studied in this thesis. The latter consist of the Navier-Stokes equations, supplemented with a system of balance equations describing the evolution of characteristic scalar quantities called 'turbulent scales'. In so doing, the contribution of the turbulent agitation to the momentum can be determined by adding a diffusive coefficient (called 'turbulent viscosity') in the Navier-Stokes equations, defined as a function of the turbulent scales. The numerical analysis problems studied in this dissertation are treated in the frame of a fractional step algorithm, consisting of an approximation on regular meshes of the Navier-Stokes equations by the nonconforming Crouzeix-Raviart finite elements, and a set of scalar convection-diffusion balance equations discretized by the standard finite volume method. A monotone numerical scheme based on the standard finite volume method is proposed to ensure that the turbulent scales, like the turbulent kinetic energy (k) and its dissipation rate (ε), remain positive in the case of the standard k-ε model, as well as the k-ε RNG and the extended k-ε-v² models. The convergence of the proposed numerical scheme is then studied on a system composed of the incompressible Stokes equations and a steady convection-diffusion equation, coupled through the viscosities and the turbulent production term. This reduced model allows one to deal with the main difficulty encountered in the analysis of such problems: the definition of the turbulent production term leads to a class of convection-diffusion problems with an irregular right-hand side belonging to L¹. Finally, to step towards the unsteady problem, the convergence of the finite volume scheme for a model convection-diffusion equation with L¹ data is proved. The a priori estimates on the solution and on its time derivative are obtained in discrete norms, for
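The positivity requirement on the turbulent scales can be illustrated with the simplest monotone finite volume discretization: first-order upwinding for convection plus explicit diffusion, which keeps a scalar such as k nonnegative as long as the CFL-like time-step restriction holds. This is a generic 1D sketch under assumed parameters, not the scheme analysed in the thesis.

```python
def upwind_step(u, vel, nu, dx, dt):
    """One explicit step of 1D convection-diffusion with first-order upwinding.

    Monotone (hence positivity-preserving) when dt * (|vel|/dx + 2*nu/dx**2) <= 1,
    because every cell update is then a convex combination of old values.
    Periodic boundary conditions for simplicity.
    """
    n = len(u)
    out = []
    for i in range(n):
        um, up = u[i - 1], u[(i + 1) % n]
        # upwind convection: take the difference from the inflow side
        conv = vel * (u[i] - um) / dx if vel >= 0 else vel * (up - u[i]) / dx
        diff = nu * (up - 2 * u[i] + um) / dx**2
        out.append(u[i] + dt * (diff - conv))
    return out

# A nonnegative initial profile stays nonnegative under the CFL restriction
# (here dt * (|vel|/dx + 2*nu/dx**2) = 0.05 * (10 + 2) = 0.6 <= 1).
u = [0.0] * 50
u[10] = 1.0
for _ in range(200):
    u = upwind_step(u, vel=1.0, nu=0.01, dx=0.1, dt=0.05)
```

Both the upwind and diffusion fluxes telescope over the periodic grid, so the scheme is also conservative: the total amount of the scalar is preserved to rounding error.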

  14. Defining the end-point of mastication: A conceptual model.

    Science.gov (United States)

    Gray-Stuart, Eli M; Jones, Jim R; Bronlund, John E

    2017-10-01

    The great risks of swallowing are choking and aspiration of food into the lungs. Both are rare in normal functioning humans, which is remarkable given the diversity of foods and the estimated 10 million swallows performed in a lifetime. Nevertheless, it remains a major challenge to define the food properties that are necessary to ensure a safe swallow. Here, the mouth is viewed as a well-controlled processor where mechanical sensory assessment occurs throughout the occlusion-circulation cycle of mastication. Swallowing is a subsequent action. It is proposed here that, during mastication, temporal maps of interfacial property data are generated, which the central nervous system compares against a series of criteria in order to be sure that the bolus is safe to swallow. To determine these criteria, an engineering hazard analysis tool, alongside an understanding of fluid and particle mechanics, is used to deduce the mechanisms by which food may deposit or become stranded during swallowing. These mechanisms define the food properties that must be avoided. By inverting the thinking, from hazards to ensuring safety, six criteria arise which are necessary for a safe-to-swallow bolus. A new conceptual model is proposed to define when food is safe to swallow during mastication. This significantly advances earlier mouth models. The conceptual model proposed in this work provides a framework of decision-making to define when food is safe to swallow. This will be of interest to designers of dietary foods, foods for dysphagia sufferers and will aid the further development of mastication robots for preparation of artificial boluses for digestion research. It enables food designers to influence the swallow-point properties of their products. For example, a product may be designed to satisfy five of the criteria for a safe-to-swallow bolus, which means the sixth criterion and its attendant food properties define the swallow-point. Alongside other organoleptic factors, these

  15. MIDAS/PK code development using point kinetics model

    International Nuclear Information System (INIS)

    Song, Y. M.; Park, S. H.

    1999-01-01

    In this study, the MIDAS/PK code has been developed for analyzing ATWS (Anticipated Transients Without Scram) events, which can be severe accident initiating events. MIDAS is an integrated computer code based on the MELCOR code, developed by the Korea Atomic Energy Research Institute to support severe accident risk reduction strategies. Meanwhile, the Chexal-Layman correlation in the current MELCOR, which was developed under BWR conditions, appears to be inappropriate for a PWR. To provide ATWS analysis capability to the MIDAS code, a point kinetics module, PKINETIC, was first developed as a stand-alone code whose reference model was selected from current accident analysis codes. In the next step, the MIDAS/PK code was developed by coupling PKINETIC with the MIDAS code, inter-connecting several thermal-hydraulic parameters between the two codes. Since the major concern in an ATWS analysis is the primary peak pressure during the first few minutes of the accident, the peak pressures from the PKINETIC module and MIDAS/PK are compared with RETRAN calculations, showing good agreement between them. The MIDAS/PK code is considered valuable for deterministically analyzing the plant response during ATWS, especially for the early domestic Westinghouse plants which rely on operator procedures instead of an AMSAC (ATWS Mitigating System Actuation Circuitry) against ATWS. This ATWS analysis capability is also important from the viewpoint of accident management and mitigation

  16. Applications of a composite model of microstructural evolution

    International Nuclear Information System (INIS)

    Stoller, R.E.

    1986-01-01

    Near-term fusion reactors will have to be designed using radiation effects data from experiments conducted in fast fission reactors. These fast reactors generate atomic displacements at a rate similar to that expected in a DT fusion reactor first wall. However, the transmutant helium production in an austenitic stainless steel first wall will exceed that in fast reactor fuel cladding by about a factor of 30. Hence, the use of the fast reactor data will involve some extrapolation. A major goal of this work is to develop theoretical models of microstructural evolution to aid in this extrapolation. In the present work a detailed rate-theory-based model of microstructural evolution under fast neutron irradiation has been developed. The prominent new aspect of this model is a treatment of dislocation evolution in which Frank faulted loops nucleate, grow and unfault to provide a source for network dislocations while the dislocation network can be simultaneously annihilated by a climb/glide process. The predictions of this model compare very favorably with the observed dose and temperature dependence of these key microstructural features over a broad range. In addition, this new description of dislocation evolution has been coupled with a previously developed model of cavity evolution and good agreement has been obtained between the predictions of the composite model and fast reactor swelling data. The results from the composite model also reveal that the various components of the irradiation-induced microstructure evolve in a highly coupled manner. The predictions of the composite model are more sensitive to parametric variations than more simple models. Hence, its value as a tool in data analysis and extrapolation is enhanced

  17. Modeling elephant-mediated cascading effects of water point closure.

    Science.gov (United States)

    Hilbers, Jelle P; Van Langevelde, Frank; Prins, Herbert H T; Grant, C C; Peel, Mike J S; Coughenour, Michael B; De Knegt, Henrik J; Slotow, Rob; Smit, Izak P J; Kiker, Greg A; De Boer, Willem F

    2015-03-01

    Wildlife management to reduce the impact of wildlife on their habitat can be done in several ways, among which removing animals (by either culling or translocation) is most often used. There are, however, alternative ways to control wildlife densities, such as opening or closing water points. The effects of these alternatives are poorly studied. In this paper, we focus on manipulating large herbivores through the closure of water points (WPs). Removal of artificial WPs has been suggested in order to change the distribution of African elephants, which occur in high densities in national parks in Southern Africa and are thought to have a destructive effect on the vegetation. Here, we modeled the long-term effects of different scenarios of WP closure on the spatial distribution of elephants, and the consequential effects on the vegetation and other herbivores in Kruger National Park, South Africa. Using a dynamic ecosystem model, SAVANNA, we evaluated scenarios that varied in the availability of artificial WPs, levels of natural water, and elephant densities. Our modeling results showed that elephants can indirectly negatively affect the distributions of meso-mixed feeders, meso-browsers, and some meso-grazers under wet conditions. The closure of artificial WPs hardly had any effect during these natural wet conditions. Under dry conditions, the spatial distribution of both elephant bulls and cows changed when the availability of artificial water was severely reduced in the model. These changes in spatial distribution triggered changes in the spatial availability of woody biomass over the simulation period of 80 years, and this led to changes in the rest of the herbivore community, resulting in increased densities of all herbivores, except for giraffe and steenbok, in areas close to rivers. The spatial distributions of elephant bulls and cows were shown to be less affected by the closure of WPs than those of most other herbivore species. Our study contributes to ecologically

  18. Mechanistic spatio-temporal point process models for marked point processes, with a view to forest stand data

    DEFF Research Database (Denmark)

    Møller, Jesper; Ghorbani, Mohammad; Rubak, Ege Holger

    We show how a spatial point process, where each point has an associated random quantitative mark, can be identified with a spatio-temporal point process specified by a conditional intensity function. For instance, the points can be tree locations, the marks can express the size of trees, and the conditional intensity function can describe the distribution of a tree (i.e., its location and size) conditionally on the larger trees. This enables us to construct parametric statistical models which are easily interpretable and where likelihood-based inference is tractable. In particular, we consider maximum

  19. Forecasting Macedonian Business Cycle Turning Points Using Qual Var Model

    Directory of Open Access Journals (Sweden)

    Petrovska Magdalena

    2016-09-01

    This paper aims at assessing the usefulness of leading indicators in business cycle research and forecasting. We first test the predictive power of the economic sentiment indicator (ESI) within a static probit model, as a leading indicator commonly perceived to provide a reliable summary of current economic conditions. We then analyze how well an extended set of indicators performs in forecasting turning points of the Macedonian business cycle by employing the Qual VAR approach of Dueker (2005). Finally, we evaluate the quality of the selected indicators in a pseudo-out-of-sample context. The results show that the use of survey-based indicators as a complement to macroeconomic data works satisfactorily in capturing business cycle developments in Macedonia.
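A static probit model of the kind used in the first stage maps an index of leading indicators to a turning-point probability through the standard normal CDF: P(y = 1 | x) = Φ(β₀ + β₁·ESI). The tiny sketch below uses made-up coefficients (the paper's estimates are not given in the abstract); the sign of β₁ is chosen so that a falling sentiment indicator raises the predicted recession probability.

```python
import math

def std_normal_cdf(z):
    """Standard normal CDF, Phi(z), via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def probit_prob(esi, beta0=3.0, beta1=-0.04):
    """P(turning point) for a given ESI reading; coefficients are illustrative."""
    return std_normal_cdf(beta0 + beta1 * esi)

# ESI is typically centered near 100; readings well below that raise the
# predicted probability of a downturn under these illustrative coefficients.
p_normal = probit_prob(100.0)
p_weak = probit_prob(70.0)
```

In practice β₀ and β₁ would be estimated by maximum likelihood on a binary recession indicator, which is exactly where the Qual VAR extension then adds dynamics.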

  20. Microstructure evolution during cyclic tests on EUROFER 97 at room temperature. TEM observation and modelling

    Energy Technology Data Exchange (ETDEWEB)

    Giordana, M.F., E-mail: giordana@ifir-conicet.gov.ar [Instituto de Fisica Rosario, CONICET-UNR, Bv. 27 de Febrero 210 Bis, 2000 Rosario (Argentina); Giroux, P.-F. [Commissariat a l' Energie Atomique, DEN/DANS/DMN/SRMA, 91191 Gif-sur-Yvette Cedex (France); Alvarez-Armas, I. [Instituto de Fisica Rosario, CONICET-UNR, Bv. 27 de Febrero 210 Bis, 2000 Rosario (Argentina); Sauzay, M. [Commissariat a l' Energie Atomique, DEN/DANS/DMN/SRMA, 91191 Gif-sur-Yvette Cedex (France); Armas, A. [Instituto de Fisica Rosario, CONICET-UNR, Bv. 27 de Febrero 210 Bis, 2000 Rosario (Argentina); Kruml, T. [CEITEC IPM, Institute of Physics of Materials, Academy of Sciences of the Czech Republic, Zizkova 22, Brno, 616 62 (Czech Republic)

    2012-07-30

    Highlights: • Low cycle fatigue tests are carried out on EUROFER 97 at room temperature. • EUROFER 97 shows pronounced cyclic softening accompanied by microstructural changes. • Cycling induces a decrease in dislocation density and subgrain growth. • A simple mean-field model based on crystalline plasticity is proposed. • The mean subgrain size evolution is predicted by the model. - Abstract: The 9% Cr quenched and tempered reduced-activation ferritic/martensitic steel EUROFER 97 is one of the candidates for structural components of fusion reactors. Isothermal, plastic strain-controlled, low-cycle fatigue tests are performed. Tested at room temperature, this steel suffers a cyclic softening effect linked to microstructural changes observed by transmission electron microscopy, such as the decrease of dislocation density inside subgrains and the growth of subgrain size. From the assumed softening mechanisms, a simple mean-field model based on crystalline plasticity is proposed to predict these microstructural evolutions during cyclic and monotonic deformation.

  1. Modeling on Fe-Cr microstructure: evolution with Cr content

    International Nuclear Information System (INIS)

    Diaz Arroyo, D.; Perlado, J.M.; Hernandez-Mayoral, M.; Caturla, M.J.; Victoria, M.

    2007-01-01

    Full text of publication follows: The minimum energy configuration of interstitials in the Fe-Cr system, which is the base for the low activation steels being developed in the European fusion reactor materials community, is determined by magnetism. Magnetism also plays a role in the atomic configurations found with increasing Cr content. Results will be presented from a program in which the microstructure evolution produced after heavy ion irradiation in the range from room temperature to 80 K is studied as a function of the Cr content, in alloys produced under well controlled conditions, i.e. from high purity elements and with adequate heat treatment. It is expected that these measurements will serve as a matrix for model validation. The first step in this modeling sequence is being performed by modeling the evolution of displacement cascades in Fe using the Dudarev-Derlet and Mendeleev potentials for Fe and the Caro potential for Fe-Cr. It is of particular interest to study the evolution of high-energy cascades, where an attempt will be made to clarify the role of the evolution of sub-cascades. Kinetic Monte Carlo (kMC) techniques will then be used to simulate the defect evolution. A new parallel kMC code is being implemented for this purpose. (authors)

  2. Evolution of a minimal parallel programming model

    International Nuclear Information System (INIS)

    Lusk, Ewing; Butler, Ralph; Pieper, Steven C.

    2017-01-01

    Here, we take a historical approach to our presentation of self-scheduled task parallelism, a programming model with its origins in early irregular and nondeterministic computations encountered in automated theorem proving and logic programming. We show how an extremely simple task model has evolved into a system, asynchronous dynamic load balancing (ADLB), and a scalable implementation capable of supporting sophisticated applications on today’s (and tomorrow’s) largest supercomputers; and we illustrate the use of ADLB with a Green’s function Monte Carlo application, a modern, mature nuclear physics code in production use. Our lesson is that by surrendering a certain amount of generality and thus applicability, a minimal programming model (in terms of its basic concepts and the size of its application programmer interface) can achieve extreme scalability without introducing complexity.

  3. Optimality models in the age of experimental evolution and genomics

    OpenAIRE

    Bull, J. J.; Wang, I.-N.

    2010-01-01

    Optimality models have been used to predict evolution of many properties of organisms. They typically neglect genetic details, whether by necessity or design. This omission is a common source of criticism, and although this limitation of optimality is widely acknowledged, it has mostly been defended rather than evaluated for its impact. Experimental adaptation of model organisms provides a new arena for testing optimality models and for simultaneously integrating genetics. First, an experimen...

  4. Evolution analysis of the states of the EZ model

    International Nuclear Information System (INIS)

    Qing-Hua, Chen; Yi-Ming, Ding; Hong-Guang, Dong

    2009-01-01

    Based on a suitable choice of states, this paper studies the stability of the equilibrium state of the EZ model by regarding the evolution of the EZ model as a Markov chain and by showing that the Markov chain is ergodic. The Markov analysis is applied to the EZ model with a small number of agents: the exact equilibrium state is obtained for N = 5, and numerical results for N = 18. (cross-disciplinary physics and related areas of science and technology)
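    The equilibrium-state analysis described above rests on a standard fact: an ergodic Markov chain has a unique stationary distribution. A minimal sketch of that computation follows, using a toy 3-state transition matrix (NOT the actual EZ-model chain, whose states and transition probabilities depend on the wealth-exchange dynamics):

```python
import numpy as np

# Toy ergodic transition matrix (rows sum to 1); purely illustrative.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

def stationary(P, tol=1e-12):
    """Stationary distribution of an ergodic chain by power iteration.

    Ergodicity guarantees convergence to the unique pi with pi @ P == pi.
    """
    pi = np.ones(P.shape[0]) / P.shape[0]
    while True:
        nxt = pi @ P
        if np.abs(nxt - pi).max() < tol:
            return nxt
        pi = nxt

pi = stationary(P)
# pi is a probability vector left-invariant under P.
```

    For a chain with as few states as the N = 5 case mentioned in the abstract, the stationary distribution can be obtained exactly this way (or by solving the linear system directly).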

  5. Energetics in a model of prebiotic evolution

    Science.gov (United States)

    Intoy, B. F.; Halley, J. W.

    2017-12-01

    Previously we reported [A. Wynveen et al., Phys. Rev. E 89, 022725 (2014), 10.1103/PhysRevE.89.022725] that requiring that the systems regarded as lifelike be out of chemical equilibrium in a model of abstracted polymers undergoing ligation and scission first introduced by Kauffman [S. A. Kauffman, The Origins of Order (Oxford University Press, New York, 1993), Chap. 7] implied that lifelike systems were most probable when the reaction network was sparse. The model was entirely statistical and took no account of the bond energies or other energetic constraints. Here we report results of an extension of the model to include the effects of a finite bonding energy. We studied two conditions: (1) a food set is continuously replenished and the total polymer population is constrained, but the system is otherwise isolated, and (2) in addition to the constraints in (1), the system is in contact with a finite-temperature heat bath. In each case, detailed balance in the dynamics is guaranteed during the computations by continuous recomputation of a temperature [in case (1)] and of the chemical potential (in both cases) toward which the system is driven by the dynamics. In the isolated case, the probability of reaching a metastable nonequilibrium state in this model depends significantly on the composition of the food set, and the nonequilibrium states satisfying the lifelike condition turn out to be at energies and particle numbers consistent with an equilibrium state at high negative temperature. As a function of the sparseness of the reaction network, the lifelike probability is nonmonotonic, as in our previous model, but the maximum probability occurs when the network is less sparse. In the case of contact with a thermal bath at a positive ambient temperature, we identify two types of metastable nonequilibrium states, termed locally and thermally alive, and locally dead and thermally alive, and evaluate their likelihood of appearance, finding maxima at an optimal

  6. Universality in a Neutral Evolution Model

    Science.gov (United States)

    King, Dawn; Scott, Adam; Maric, Nevena; Bahar, Sonya

    2013-03-01

    Agent-based models are ideal for investigating the complex problems of biodiversity and speciation because they allow for complex interactions between individuals and between individuals and the environment. Presented here is a "null" model that investigates three mating types - assortative, bacterial, and random - in phenotype space, as a function of the percentage of random death δ. Previous work has shown phase transition behavior in an assortative mating model with variable fitness landscapes as the maximum mutation size (μ) was varied (Dees and Bahar, 2010). Similarly, this behavior was recently presented in the work of Scott et al. (submitted), on a completely neutral landscape, for bacterial-like fission as well as for assortative mating. Here, in order to achieve an appropriate "null" hypothesis, the random death process was changed so that each individual, in each generation, has the same probability of death. Results show a continuous nonequilibrium phase transition for the order parameters of the population size and the number of clusters (an analogue of species) as δ is varied, for three different mutation sizes of the system. The system shows increasing robustness as μ increases. Universality classes and percolation properties of this system are also explored. This research was supported by funding from the University of Missouri Research Board and the James S. McDonnell Foundation.

  7. Overdeepening development in a glacial landscape evolution model with quarrying

    DEFF Research Database (Denmark)

    Ugelvig, Sofie Vej; Egholm, D.L.; Iverson, Neal R.

    In glacial landscape evolution models, subglacial erosion rates are often related to basal sliding or ice discharge by a power-law. This relation can be justified when considering bed abrasion, where rock debris transported in the basal ice drives erosion. However, the relation is not well supported when considering models for quarrying of rock blocks from the bed. Field observations indicate that the principal mechanism of glacial erosion is quarrying, which emphasizes the importance of a better way of implementing erosion by quarrying in glacial landscape evolution models. Iverson (2012… around the obstacles. The erosion rate is quantified by considering the likelihood of rock fracturing on topographic bumps. The model includes a statistical treatment of the bedrock weakness, which is neglected in previous quarrying models. Sliding rate, effective pressure, and average bedslope…

  8. 1996, a turning point in the evolution of the EDF with good financial results

    International Nuclear Information System (INIS)

    1997-03-01

    This conference, held in March 1997, presents the financial results obtained in 1996 by the French electricity company EDF (Electricite de France). This year represented a turning point because essential landmarks were established in 1996. A European directive was adopted in 1996, defining the framework in which the European electricity market will function as well as the rules for its opening to competition. Discussions with the trade unions were conducted at the beginning of 1997, and a social agreement was reached aiming at the improvement of services and the hiring of 11,000 to 15,000 young people over the next three years. Finally, a new contract of enterprise with the state was discussed and approved on March 5, 1997, which redefined the place and the role of EDF in the French economy, in the new stage of the electricity market started by the opening to competition. The document contains the following 8 chapters: 1. Financial results and enterprise management; 2. Institutional frame and the relation with the state; 3. Development in France; 4. International development; 5. Alliances, partnerships and cooperation; 6. Management, social and human resources; 7. Environment; 8. Technical results

  9. Evolution of Network Access Points (NAPs) and agreements among Internet Service Providers (ISPs) in South America

    Directory of Open Access Journals (Sweden)

    Fernando Beltrán

    2006-05-01

    Full Text Available This paper presents the main aspects of the historical development and the current issues at stake in the South American Internet access market: the interconnection schemes for the exchange of local and regional traffic in the South American region, the incentives Internet access providers have for keeping or modifying the nature of the agreements, and the cost recovery methods at the traffic exchange points. Some threats to the stability of the scheme for domestic traffic exchange adopted throughout the region are also identified and subsequently illustrated with country cases.

  10. HIV-specific probabilistic models of protein evolution.

    Directory of Open Access Journals (Sweden)

    David C Nickle

    2007-06-01

    Full Text Available Comparative sequence analyses, including such fundamental bioinformatics techniques as similarity searching, sequence alignment and phylogenetic inference, have become a mainstay for researchers studying type 1 Human Immunodeficiency Virus (HIV-1) genome structure and evolution. Implicit in comparative analyses is an underlying model of evolution, and the chosen model can significantly affect the results. In general, evolutionary models describe the probabilities of replacing one amino acid character with another over a period of time. Most widely used evolutionary models for protein sequences have been derived from curated alignments of hundreds of proteins, usually based on mammalian genomes. It is unclear to what extent these empirical models are generalizable to a very different organism, such as HIV-1, the most extensively sequenced organism in existence. We developed a maximum likelihood model fitting procedure for a collection of HIV-1 alignments sampled from different viral genes, and inferred two empirical substitution models, suitable for describing between- and within-host evolution. Our procedure pools the information from multiple sequence alignments, and the provided software implementation can be run efficiently in parallel on a computer cluster. We describe how the inferred substitution models can be used to generate scoring matrices suitable for alignment and similarity searches. Our models had a consistently superior fit relative to the best existing models and to parameter-rich data-driven models when benchmarked on independent HIV-1 alignments, demonstrating evolutionary biases in amino-acid substitution that are unique to HIV, and that are not captured by the existing models. The scoring matrices derived from the models showed a marked difference from common amino-acid scoring matrices. The use of an appropriate evolutionary model recovered a known viral transmission history, whereas a poorly chosen model introduced phylogenetic

  11. The infinite sites model of genome evolution.

    Science.gov (United States)

    Ma, Jian; Ratan, Aakrosh; Raney, Brian J; Suh, Bernard B; Miller, Webb; Haussler, David

    2008-09-23

    We formalize the problem of recovering the evolutionary history of a set of genomes that are related to an unseen common ancestor genome by operations of speciation, deletion, insertion, duplication, and rearrangement of segments of bases. The problem is examined in the limit as the number of bases in each genome goes to infinity. In this limit, the chromosomes are represented by continuous circles or line segments. For such an infinite-sites model, we present a polynomial-time algorithm to find the most parsimonious evolutionary history of any set of related present-day genomes.

  12. Impact of selected troposphere models on Precise Point Positioning convergence

    Science.gov (United States)

    Kalita, Jakub; Rzepecka, Zofia

    2016-04-01

    The Precise Point Positioning (PPP) absolute method is currently being intensively investigated in order to reach fast convergence times. Among the various sources that influence the convergence of PPP, the tropospheric delay is one of the most important. Numerous models of tropospheric delay have been developed and applied to PPP processing. However, with rare exceptions, the quality of those models does not allow fixing the zenith path delay tropospheric parameter, leaving the difference between nominal and final values to the estimation process. Here we present a comparison of several PPP result sets, each of which is based on a different troposphere model. The respective nominal values are adopted from the models VMF1, GPT2w, MOPS and ZERO-WET. The PPP solution admitted as reference is based on the final troposphere product from the International GNSS Service (IGS). The VMF1 mapping function was used for all processing variants in order to make the impact of the applied nominal values comparable. The worst case initiates the zenith wet delay with a zero value (ZERO-WET). The impact of any candidate model of tropospheric nominal values should fall between the IGS and ZERO-WET border variants. The analysis is based on data from seven IGS stations located in the mid-latitude European region, from the year 2014. For each station, several days with the most active troposphere were selected. All the PPP solutions were determined using the gLAB open-source software, with the Kalman filter implemented independently by the authors of this work. The processing was performed on 1-hour slices of observation data. In addition to the analysis of the output processing files, the presented study contains a detailed analysis of the tropospheric conditions for the selected data. The overall results show that for the height component the VMF1 model outperforms GPT2w and MOPS by 35-40% and the ZERO-WET variant by 150%. In most of the cases all solutions converge to the same values during the first
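    For context on what the nominal troposphere values represent, the hydrostatic part of the zenith delay is commonly computed from surface pressure with the standard Saastamoinen model; a minimal sketch follows (the 1/sin(elevation) mapping to slant delay is a crude illustration only, not the VMF1 mapping function used in the study):

```python
import math

def zenith_hydrostatic_delay(pressure_hpa, lat_rad, height_m):
    """Saastamoinen zenith hydrostatic delay in meters (standard formula)."""
    f = 1.0 - 0.00266 * math.cos(2.0 * lat_rad) - 0.00000028 * height_m
    return 0.0022768 * pressure_hpa / f

def slant_delay(zhd_m, elevation_rad):
    # Crude mapping function for illustration; real PPP processing would
    # apply VMF1/GPT2w-style mapping functions instead.
    return zhd_m / math.sin(elevation_rad)

# Sea-level station at 45 degrees latitude, standard pressure:
zhd = zenith_hydrostatic_delay(1013.25, math.radians(45.0), 0.0)
# zhd is roughly 2.3 m, the typical magnitude of the hydrostatic delay.
```

    The residual wet delay, which this formula does not capture, is exactly the part left to the Kalman-filter estimation described in the abstract.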

  13. Modeling river dune evolution using a parameterization of flow separation

    NARCIS (Netherlands)

    Paarlberg, Andries J.; Dohmen-Janssen, C. Marjolein; Hulscher, Suzanne J.M.H.; Termes, Paul

    2009-01-01

    This paper presents an idealized morphodynamic model to predict river dune evolution. The flow field is solved in a vertical plane assuming hydrostatic pressure conditions. The sediment transport is computed using a Meyer-Peter–Müller type of equation, including gravitational bed slope effects and a

  14. Top-cited publications on point-of-care ultrasound: The evolution of research trends.

    Science.gov (United States)

    Liao, Shao-Feng; Chen, Pai-Jung; Chaou, Chung-Hsien; Lee, Ching-Hsing

    2018-01-06

    Point-of-care ultrasound (POCUS) has been a rapidly growing and broadly used modality in recent decades. The purpose of this study was to determine how POCUS is incorporated into clinical medicine by analyzing trends of use in the published literature. POCUS-related publications were retrieved from the Web of Science (WoS) database. The search results were ranked according to the number of times an article was cited during three time frames and the average annual number of citations. For the top 100 most cited publications in the four rankings, information regarding the publication journal, publication year, first author's nationality, field of POCUS application, and number of times the article was cited was recorded for trend analysis. A total of 7860 POCUS-related publications were retrieved, and publications related to POCUS increased from 8 in 1990 to 754 in 2016. The top 148 cited publications from the four ranking groups were included in this study. Trauma was the leading application field in which POCUS was studied prior to 2001. After 2004, thorax, cardiovascular, and procedure-guidance were the leading fields in POCUS research. More than 79% (118/148) of the top-cited publications were conducted by authors in the United States, Italy, and France. The majority of publications were published in critical care medicine and emergency medicine journals. In recent years, publications relating to POCUS have increased. POCUS-related research has mainly been performed in the thorax, cardiovascular, and procedure-guidance ultrasonography fields, replacing trauma as the major field in which POCUS was previously studied.

  15. Atmospheric mercury dispersion modelling from two nearest hypothetical point sources

    Energy Technology Data Exchange (ETDEWEB)

    Al Razi, Khandakar Md Habib; Hiroshi, Moritomi; Shinji, Kambara [Environmental and Renewable Energy System (ERES), Graduate School of Engineering, Gifu University, Yanagido, Gifu City, 501-1193 (Japan)

    2012-07-01

    The Japan coastal areas are still environmentally friendly, though there are multiple air emission sources originating from several developmental activities such as automobile industries, operation of thermal power plants, and mobile-source pollution. Mercury is known to be a potential air pollutant in the region, apart from SOx, NOx, CO and ozone. Mercury contamination in water bodies and other ecosystems due to deposition of atmospheric mercury is considered a serious environmental concern. Identification of sources contributing to the high atmospheric mercury levels will be useful for formulating pollution control and mitigation strategies in the region. In Japan, mercury and its compounds were categorized as hazardous air pollutants in 1996 and are on the list of 'Substances Requiring Priority Action' published by the Central Environmental Council of Japan. The Air Quality Management Division of the Environmental Bureau, Ministry of the Environment, Japan, selected the current annual mean environmental air quality standard for mercury and its compounds of 0.04 µg/m³. Long-term exposure to mercury and its compounds can have a carcinogenic effect, inducing, e.g., Minamata disease. This study evaluates the impact of mercury emissions on air quality in the coastal area of Japan. The average yearly emission of mercury from an elevated point source in this area, together with the background concentration and one year of meteorological data, was used to predict the ground-level concentration of mercury. To estimate the concentration of mercury and its compounds in the air of the local area, two different simulation models have been used. The first is the National Institute of Advanced Industrial Science and Technology Atmospheric Dispersion Model for Exposure and Risk Assessment (AIST-ADMER), which estimates regional atmospheric concentration and distribution. The second is the Hybrid Single Particle Lagrangian Integrated Trajectory Model (HYSPLIT), which estimates the atmospheric
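    Neither AIST-ADMER nor HYSPLIT is reproduced here, but the general idea of predicting ground-level concentration downwind of an elevated point source can be sketched with a textbook Gaussian plume (all parameter values below are invented for illustration):

```python
import numpy as np

def plume_concentration(Q, u, y, z, H, sigma_y, sigma_z):
    """Steady-state Gaussian plume with ground reflection.

    Q: emission rate (g/s), u: wind speed (m/s), y: crosswind distance (m),
    z: receptor height (m), H: effective stack height (m),
    sigma_y/sigma_z: dispersion parameters (m) at the downwind distance.
    Returns concentration in g/m^3.
    """
    norm = Q / (2.0 * np.pi * u * sigma_y * sigma_z)
    crosswind = np.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2.0 * sigma_z**2)))  # ground reflection
    return norm * crosswind * vertical

# Ground-level, plume-centerline concentration for an illustrative source
c_center = plume_concentration(Q=1.0, u=3.0, y=0.0, z=0.0, H=50.0,
                               sigma_y=80.0, sigma_z=40.0)
```

    Regulatory models such as ADMER layer meteorological statistics and chemistry on top of dispersion kernels of this kind; the sketch only shows the geometric core.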

  16. Coastal Foredune Evolution, Part 2: Modeling Approaches for Meso-Scale Morphologic Evolution

    Science.gov (United States)

    2017-03-01

    for Meso-Scale Morphologic Evolution, by Margaret L. Palmsten (Stennis Space Center, MS), Katherine L. Brodie, and Nicholas J. Spore (U.S. Army Engineer Research and Development Center, Coastal and Hydraulics Laboratory, Duck, NC). ERDC/CHL CHETN-II-57, March 2017. PURPOSE: This Coastal and Hydraulics … managers, because foredunes provide ecosystem services and can reduce storm damages to coastal infrastructure, both of which increase the resiliency … models of

  17. Digital Forensic Investigation Models, an Evolution study

    Directory of Open Access Journals (Sweden)

    Khuram Mushtaque

    2015-10-01

    Full Text Available In business today, one of the most important factors that enables any business to gain a competitive advantage over others is the appropriate and effective adoption of information technology, and then managing and governing it at will. To govern IT, organizations need to recognize the value of acquiring the services of forensic firms to counter cyber criminals. Digital forensic firms follow different mechanisms to perform investigations. Over time, forensic firms have been provided with different investigation models, containing phases serving the different purposes of the entire process. Along with forensic firms, enterprises also need to build a secure and supportive platform to make a successful investigation process possible. We underline different elements of organizations in Pakistan that need to be addressed to provide support to forensic firms.

  18. From Particles and Point Clouds to Voxel Models: High Resolution Modeling of Dynamic Landscapes in Open Source GIS

    Science.gov (United States)

    Mitasova, H.; Hardin, E. J.; Kratochvilova, A.; Landa, M.

    2012-12-01

    Multitemporal data acquired by modern mapping technologies provide unique insights into the processes driving land surface dynamics. These high resolution data also offer an opportunity to improve the theoretical foundations and accuracy of process-based simulations of evolving landforms. We discuss the development of a new generation of visualization and analytics tools for GRASS GIS designed for 3D multitemporal data from repeated lidar surveys and from landscape process simulations. We focus on data and simulation methods that are based on point sampling of continuous fields and lead to the representation of evolving surfaces as series of raster map layers or voxel models. For multitemporal lidar data we present workflows that combine open source point cloud processing tools with GRASS GIS and custom Python scripts to model and analyze the dynamics of coastal topography (Figure 1), and we outline the development of a coastal analysis toolbox. The simulations focus on the particle sampling method for solving continuity equations and its application to geospatial modeling of landscape processes. In addition to water and sediment transport models, already implemented in GIS, the new capabilities under development combine OpenFOAM for wind shear stress simulation with a new module for aeolian sand transport and dune evolution simulations. Comparison of observed dynamics with the results of simulations is supported by a new, integrated 2D and 3D visualization interface that provides highly interactive and intuitive access to the redesigned and enhanced visualization tools. Several case studies will be used to illustrate the presented methods and tools, demonstrate the power of workflows built with FOSS, and highlight their interoperability. Figure 1. Isosurfaces representing the evolution of the shoreline and a z=4.5m contour between the years 1997-2011 at Cape Hatteras, NC, extracted from a voxel model derived from a series of lidar-based DEMs.
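    The point-sampling-to-raster step behind such workflows can be sketched in plain NumPy (a simplified illustration of binning scattered lidar returns into a mean-elevation grid, not the actual GRASS GIS modules or the authors' scripts):

```python
import numpy as np

def points_to_mean_raster(x, y, z, nx, ny, extent):
    """Bin scattered (x, y, z) points into an nx-by-ny grid of mean z.

    extent = (xmin, xmax, ymin, ymax); cells with no points become NaN.
    Repeated surveys of the same extent yield a series of raster layers
    whose differences map elevation change.
    """
    rng = [[extent[0], extent[1]], [extent[2], extent[3]]]
    sums, _, _ = np.histogram2d(x, y, bins=[nx, ny], range=rng, weights=z)
    counts, _, _ = np.histogram2d(x, y, bins=[nx, ny], range=rng)
    with np.errstate(divide="ignore", invalid="ignore"):
        return sums / counts  # NaN where counts == 0

# Tiny synthetic "point cloud": two points in the left cell, one in the right
x = np.array([0.5, 0.6, 1.5])
y = np.array([0.5, 0.5, 0.5])
z = np.array([2.0, 4.0, 7.0])
dem = points_to_mean_raster(x, y, z, nx=2, ny=1, extent=(0.0, 2.0, 0.0, 1.0))
# dem[0, 0] == 3.0 (mean of 2.0 and 4.0), dem[1, 0] == 7.0
```

    Stacking such layers from successive surveys gives exactly the raster time series, and with a vertical axis added, the voxel models, that the abstract describes.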

  19. Two-point boundary correlation functions of dense loop models

    Directory of Open Access Journals (Sweden)

    Alexi Morin-Duchesne, Jesper Lykke Jacobsen

    2018-06-01

    Full Text Available We investigate six types of two-point boundary correlation functions in the dense loop model. These are defined as ratios $Z/Z^0$ of partition functions on the $m\times n$ square lattice, with the boundary condition for $Z$ depending on two points $x$ and $y$. We consider: the insertion of an isolated defect (a) and a pair of defects (b) in a Dirichlet boundary condition, the transition (c) between Dirichlet and Neumann boundary conditions, and the connectivity of clusters (d), loops (e) and boundary segments (f) in a Neumann boundary condition. For the model of critical dense polymers, corresponding to a vanishing loop weight ($\beta = 0$), we find determinant and Pfaffian expressions for these correlators. We extract the conformal weights of the underlying conformal fields and find $\Delta = -\frac18$, $0$, $-\frac3{32}$, $\frac38$, $1$, $\tfrac{\theta}{\pi}(1+\tfrac{2\theta}{\pi})$, where $\theta$ encodes the weight of one class of loops for the correlator of type f. These results are obtained by analysing the asymptotics of the exact expressions, and by using the Cardy-Peschel formula in the case where $x$ and $y$ are set to the corners. For type b, we find a $\log|x-y|$ dependence from the asymptotics, and a $\ln(\ln n)$ term in the corner free energy. This is consistent with the interpretation of the boundary condition of type b as the insertion of a logarithmic field belonging to a rank-two Jordan cell. For the other values of $\beta = 2 \cos \lambda$, we use the hypothesis of conformal invariance to predict the conformal weights and find $\Delta = \Delta_{1,2}$, $\Delta_{1,3}$, $\Delta_{0,\frac12}$, $\Delta_{1,0}$, $\Delta_{1,-1}$ and $\Delta_{\frac{2\theta}\lambda+1,\frac{2\theta}\lambda+1}$, extending the results of critical dense polymers. With the results for type f, we reproduce a Coulomb gas prediction for the valence bond entanglement entropy of Jacobsen and Saleur.
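    For reference, the weights $\Delta_{r,s}$ quoted above follow the standard Kac parametrization. This is textbook CFT background rather than a result of the paper, and the identification of $\lambda$ with the coprime pair $(p, p')$ below is an assumption about conventions:

```latex
% Kac conformal weights for the dense loop model with loop weight
% \beta = 2\cos\lambda, writing \lambda = \pi(p'-p)/p':
\Delta_{r,s} = \frac{(r p' - s p)^2 - (p' - p)^2}{4 p p'}
% Critical dense polymers (\beta = 0) correspond to (p, p') = (1, 2),
% which reproduces the listed values, e.g.
% \Delta_{1,2} = -\tfrac18 and \Delta_{0,\frac12} = -\tfrac{3}{32}.
```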

  20. Observational constraints from models of close binary evolution

    International Nuclear Information System (INIS)

    Greve, J.P. de; Packet, W.

    1984-01-01

    The evolution of a system of 9 + 5.4 solar masses is computed from the Zero Age Main Sequence through an early case B of mass exchange, up to the second phase of mass transfer after core helium burning. Both components are calculated simultaneously. The evolution is divided into several physically different phases. The characteristics of the models in each of these phases are transformed into corresponding 'observable' quantities. The photometric appearance of the system is discussed for an idealized case. The influence of the mass of the loser and the initial mass ratio is considered. (Auth.)

  1. Generation of a statistical shape model with probabilistic point correspondences and the expectation maximization-iterative closest point algorithm

    International Nuclear Information System (INIS)

    Hufnagel, Heike; Pennec, Xavier; Ayache, Nicholas; Ehrhardt, Jan; Handels, Heinz

    2008-01-01

    Identification of point correspondences between shapes is required for statistical analysis of organ shape differences. Since manual identification of landmarks is not a feasible option in 3D, several methods were developed to automatically find one-to-one correspondences on shape surfaces. For unstructured point sets, however, one-to-one correspondences do not exist, but correspondence probabilities can be determined. A method was developed to compute a statistical shape model based on shapes which are represented by unstructured point sets with arbitrary point numbers. A fundamental problem when computing statistical shape models is the determination of correspondences between the points of the shape observations of the training data set. In the absence of landmarks, exact correspondences can only be determined between continuous surfaces, not between unstructured point sets. To overcome this problem, we introduce correspondence probabilities instead of exact correspondences. The correspondence probabilities are found by aligning the observation shapes with the affine expectation maximization-iterative closest point (EM-ICP) registration algorithm. In a second step, the correspondence probabilities are used as input to compute a mean shape (represented once again by an unstructured point set). Both steps are unified in a single optimization criterion which depends on the two parameters 'registration transformation' and 'mean shape'. In a last step, a variability model which best represents the variability in the training data set is computed. Experiments on synthetic data sets and in vivo brain structure data sets (MRI) were designed to evaluate the performance of the algorithm. The new method was applied to brain MRI data sets, and the estimated point correspondences were compared to a statistical shape model built on exact correspondences. Based on established measures of "generalization ability" and "specificity", the estimates were very satisfactory.
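    The soft-assignment idea at the heart of EM-ICP can be sketched as Gaussian affinities between two point sets, normalized per observation point. This is a simplified illustration of the E-step only (the affine transform and mean-shape updates of the authors' method are omitted, and the variable names are invented):

```python
import numpy as np

def correspondence_probabilities(src, dst, sigma=1.0):
    """E-step of an EM-ICP-like scheme: soft correspondences.

    src: (m, d) observation points, dst: (n, d) model/mean-shape points.
    Returns an (m, n) row-stochastic matrix: row i holds the probability
    that src[i] corresponds to each point of dst.
    """
    # Squared distances between all pairs, via broadcasting: (m, n)
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian affinity
    return w / w.sum(axis=1, keepdims=True)

src = np.array([[0.0, 0.0], [5.0, 5.0]])
dst = np.array([[0.1, 0.0], [5.0, 4.9], [10.0, 10.0]])
P = correspondence_probabilities(src, dst, sigma=1.0)
# Each row sums to 1; src[0] matches dst[0] most strongly, src[1] -> dst[1].
```

    In a full EM-ICP iteration these probabilities would then weight the least-squares update of the registration transformation, and annealing sigma downward sharpens the assignments toward classical ICP.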

  2. Anomalous diffusion in neutral evolution of model proteins

    Science.gov (United States)

    Nelson, Erik D.; Grishin, Nick V.

    2015-06-01

    Protein evolution is frequently explored using minimalist polymer models; however, little attention has been given to the problem of structural drift, or diffusion. Here, we study neutral evolution of small protein motifs using an off-lattice heteropolymer model in which individual monomers interact as low-resolution amino acids. In contrast to most earlier models, both the length and the folded structure of the polymers are permitted to change. To describe structural change, we compute the mean-square distance (MSD) between monomers in homologous folds separated by n neutral mutations. We find that structural change is episodic and, averaged over lineages (for example, those extending from a single sequence), exhibits a power-law dependence on n. We show that this exponent depends on the alignment method used, and we analyze the distribution of waiting times between neutral mutations. The latter are more disperse than for models required to maintain a specific fold, but exhibit a similar power-law tail.
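    The power-law dependence MSD(n) ∝ n^α can be estimated from lineage-averaged data by a log-log fit; a sketch with synthetic values follows (the exponent 0.7 and prefactor are illustrative inventions, not the paper's measured values):

```python
import numpy as np

# Synthetic lineage-averaged MSD obeying MSD(n) = c * n**alpha exactly
alpha_true = 0.7
n = np.arange(1, 200)
msd = 0.35 * n ** alpha_true

# The exponent is the slope in log-log coordinates
slope, intercept = np.polyfit(np.log(n), np.log(msd), 1)
# slope recovers alpha_true (here essentially exactly, as the data are noise-free)
```

    With real simulation data the same fit would be applied to the noisy averaged MSD curve, and comparing slopes across alignment methods quantifies the exponent's dependence on alignment noted in the abstract.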

  3. APOGEE-2: The Second Phase of the Apache Point Observatory Galactic Evolution Experiment in SDSS-IV

    Science.gov (United States)

    Sobeck, Jennifer; Majewski, S.; Hearty, F.; Schiavon, R. P.; Holtzman, J. A.; Johnson, J.; Frinchaboy, P. M.; Skrutskie, M. F.; Munoz, R.; Pinsonneault, M. H.; Nidever, D. L.; Zasowski, G.; Garcia Perez, A.; Fabbian, D.; Meza Cofre, A.; Cunha, K. M.; Smith, V. V.; Chiappini, C.; Beers, T. C.; Steinmetz, M.; Anders, F.; Bizyaev, D.; Roman, A.; Fleming, S. W.; Crane, J. D.; SDSS-IV/APOGEE-2 Collaboration

    2014-01-01

    The second phase of the Apache Point Observatory Galactic Evolution Experiment (APOGEE-2), a part of the Sloan Digital Sky Survey IV (SDSS-IV), will commence operations in 2014. APOGEE-2 represents a significant expansion over APOGEE-1, not only in the size of the stellar sample, but also in the coverage of the sky through observations in both the Northern and Southern Hemispheres. Observations on the 2.5m Sloan Foundation Telescope of the Apache Point Observatory (APOGEE-2N) will continue immediately after the conclusion of APOGEE-1, to be followed by observations with the 2.5m du Pont Telescope of the Las Campanas Observatory (APOGEE-2S) within three years. Over the six-year lifetime of the project, high resolution (R˜22,500), high signal-to-noise (≥100) spectroscopic data in the H-band wavelength regime (1.51-1.69 μm) will be obtained for several hundred thousand stars, more than tripling the total APOGEE-1 sample. Accurate radial velocities and detailed chemical compositions will be generated for target stars in the main Galactic components (bulge, disk, and halo), open/globular clusters, and satellite dwarf galaxies. The spectroscopic follow-up program of Kepler targets with the APOGEE-2N instrument will be continued and expanded. APOGEE-2 will significantly extend and enhance the APOGEE-1 legacy of scientific contributions to understanding the origin and evolution of the elements, the assembly and formation history of galaxies like the Milky Way, and fundamental stellar astrophysics.

  4. MODELING THE RED SEQUENCE: HIERARCHICAL GROWTH YET SLOW LUMINOSITY EVOLUTION

    International Nuclear Information System (INIS)

    Skelton, Rosalind E.; Bell, Eric F.; Somerville, Rachel S.

    2012-01-01

    We explore the effects of mergers on the evolution of massive early-type galaxies by modeling the evolution of their stellar populations in a hierarchical context. We investigate how a realistic red sequence population set up by z ∼ 1 evolves under different assumptions for the merger and star formation histories, comparing changes in color, luminosity, and mass. The purely passive fading of existing red sequence galaxies, with no further mergers or star formation, results in dramatic changes at the bright end of the luminosity function and color-magnitude relation. Without mergers there is too much evolution in luminosity at a fixed space density compared to observations. The change in color and magnitude at a fixed mass resembles that of a passively evolving population that formed relatively recently, at z ∼ 2. Mergers among the red sequence population ('dry mergers') occurring after z = 1 build up mass, counteracting the fading of the existing stellar populations to give smaller changes in both color and luminosity for massive galaxies. By allowing some galaxies to migrate from the blue cloud onto the red sequence after z = 1 through gas-rich mergers, younger stellar populations are added to the red sequence. This manifestation of the progenitor bias increases the scatter in age and results in even smaller changes in color and luminosity between z = 1 and z = 0 at a fixed mass. The resultant evolution appears much slower, resembling the passive evolution of a population that formed at high redshift (z ∼ 3-5), and is in closer agreement with observations. We conclude that measurements of the luminosity and color evolution alone are not sufficient to distinguish between the purely passive evolution of an old population and cosmologically motivated hierarchical growth, although these scenarios have very different implications for the mass growth of early-type galaxies over the last half of cosmic history.

  5. The evolution of interaction between grain boundary and irradiation-induced point defects: Symmetric tilt GB in tungsten

    Science.gov (United States)

    Li, Hong; Qin, Yuan; Yang, Yingying; Yao, Man; Wang, Xudong; Xu, Haixuan; Phillpot, Simon R.

    2018-03-01

    Molecular dynamics simulations, organized as a designed scheme of computational tests, are used to give an atomic-scale view of the interaction between grain boundaries (GBs) and irradiation-induced point defects in six symmetric tilt GB structures of bcc tungsten, with primary knock-on atom (PKA) energies EPKA of 3 and 5 keV at a simulated temperature of 300 K. During a collision cascade near a GB, two synergistic mechanisms reduce the number of point defects: vacancies recombine with interstitials, and interstitials diffuse towards the GB while the vacancies remain almost immobile. The more the peak defect zone of the cascade overlaps with the GB region, the smaller, statistically, the number of surviving point defects in the grain interior (GI); when the two barely overlap, a vacancy-rich area generally forms near the GB and tends to move toward it as EPKA increases. In contrast, the distribution of interstitials near GBs is relatively uniform and is affected by EPKA far less than that of the vacancies. The GB thus has a biased absorption effect on interstitials relative to vacancies. The number of surviving vacancies statistically increases with the distance between the PKA and the GB, while the number of surviving interstitials changes little and remains below the number of interstitials in a single crystal under the same conditions. The number of surviving vacancies in the GI is always larger than that of interstitials. A local extension of the GB after irradiation is observed, for which the interstitials absorbed by the GB may be responsible. The designed scheme of computational tests is fully applicable to the investigation of the interaction between other types of GBs and irradiation-induced point defects.

  6. Reservoir pressure evolution model during exploration drilling

    Directory of Open Access Journals (Sweden)

    Korotaev B. A.

    2017-03-01

    Based on the analysis of laboratory studies and literature data, a method for estimating reservoir pressure during exploratory drilling has been proposed; it allows identification of zones of abnormal reservoir pressure when seismic data on reservoir depths are available. This method of assessment is based on methods developed at the end of the twentieth century using d- and σ-exponents, which take into account the mechanical drilling speed, rotor speed, bit load and diameter, a lithological constant, the degree of rock compaction, mud density and "regional density". It is known that in exploratory drilling a pulsation of pressure at the wellhead is observed. Such pulsation is a consequence of reservoir pressure being transferred through clay. In the paper, the mechanism of pressure transfer to the bottomhole, as well as the behaviour of the clay layer during transmission of excess pressure, is described. A laboratory installation has been built and used to model pressure propagation to the bottomhole of the well through a layer of clay. The bulging of the clay layer is established for a 215.9 mm bottomhole diameter. A functional correlation of pressure propagation through the layer of clay has been determined, and the top of the clay layer has been shown to bulge to a height of 25 mm. A pressure distribution scheme (balance) has been developed, which takes into account the distance from layers with abnormal pressure to the bottomhole. A balance equation for reservoir pressure evaluation has been derived, including well depth, the distance from the bottomhole to the top of the formation with abnormal pressure, and the density of clay.
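    The d-exponent mentioned above is a standard drilling-engineering quantity (Jorden and Shirley). A sketch of the classic formula and its mud-weight correction, with illustrative field units, follows; the exact exponent definitions used in the paper may differ:

```python
import math

def d_exponent(rop_ft_hr, rpm, wob_lb, bit_diam_in):
    """Classic d-exponent: d = log10(R / 60N) / log10(12W / 1e6 D),
    with rate of penetration R (ft/hr), rotary speed N (rpm),
    weight on bit W (lb) and bit diameter D (in)."""
    return (math.log10(rop_ft_hr / (60.0 * rpm))
            / math.log10(12.0 * wob_lb / (1.0e6 * bit_diam_in)))

def corrected_d_exponent(d, normal_mud_ppg, actual_mud_ppg):
    """Mud-weight-corrected dc-exponent; a drop of dc below its normal
    compaction trend flags a zone of abnormal reservoir pressure."""
    return d * normal_mud_ppg / actual_mud_ppg
```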

  7. Signals for the QCD phase transition and critical point in a Langevin dynamical model

    International Nuclear Information System (INIS)

    Herold, Christoph; Bleicher, Marcus; Yan, Yu-Peng

    2013-01-01

    The search for the critical point is one of the central issues that will be investigated in the upcoming FAIR project. For a profound theoretical understanding of the expected signals we go beyond thermodynamic studies and present a fully dynamical model for the chiral and deconfinement phase transition in heavy ion collisions. The corresponding order parameters are propagated by Langevin equations of motion on a thermal background provided by a fluid dynamically expanding plasma of quarks. In this way we are able to describe nonequilibrium effects occurring during the rapid expansion of a hot fireball. For an evolution through the phase transition, the formation of a supercooled phase and its subsequent decay crucially influence the trajectories in the phase diagram and lead to a significant reheating of the quark medium at the highest baryon densities. Furthermore, we find inhomogeneous structures with high-density domains along the first-order transition line within single events.
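    Langevin propagation of an order parameter on a thermal background can be illustrated with a one-variable Euler-Maruyama integrator. This is a generic sketch; the paper's equations of motion include the full fluid-dynamical coupling, which is omitted here:

```python
import numpy as np

def langevin_trajectory(dV, x0, gamma, T, dt, steps, rng):
    """Overdamped Langevin dynamics for a single order parameter x:
    dx = -gamma * dV(x) * dt + sqrt(2 * gamma * T * dt) * noise,
    integrated with the Euler-Maruyama scheme."""
    x = x0
    traj = [x]
    for _ in range(steps):
        x = x - gamma * dV(x) * dt \
            + np.sqrt(2.0 * gamma * T * dt) * rng.standard_normal()
        traj.append(x)
    return np.array(traj)
```

    With a confining potential and low temperature, the trajectory relaxes toward the potential minimum and then fluctuates thermally around it.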

  8. Effects of positive potential in the catastrophe theory study of the point model for bumpy tori

    Energy Technology Data Exchange (ETDEWEB)

    Punjabi, A; Vahala, G [College of William and Mary, Williamsburg, VA (USA). Dept. of Physics

    1985-02-01

    With positive ambipolar potential, ion non-resonant neoclassical transport leads to increased particle confinement times. In certain regimes of filling pressure, microwave powers (ECRH and ICRH) and positive potential, new folds can now emerge from previously degenerate equilibrium surfaces allowing for distinct C, T, and M modes of operation. A comparison in the equilibrium fold structure is also made between (i) equal particle and energy confinement times, and (ii) particle confinement times enhanced over the energy confinement time. The nonlinear time evolution of these point model equations is considered and confirms the delay convention occurrences at the fold edges. It is clearly seen that the time-asymptotic equilibrium state is very sensitive, not only to the values of the control parameters (neutral density, ambipolar electrostatic potential, electron and ion cyclotron power densities) but also to the initial conditions on the plasma density, and electron and ion temperatures.

  9. Multiscale Modeling of Point and Line Defects in Cubic Lattices

    National Research Council Canada - National Science Library

    Chung, P. W; Clayton, J. D

    2007-01-01

    .... This multiscale theory explicitly captures heterogeneity in microscopic atomic motion in crystalline materials, attributed, for example, to the presence of various point and line lattice defects...

  10. Modeling evolution and immune system by cellular automata

    International Nuclear Information System (INIS)

    Bezzi, M.

    2001-01-01

    In this review, the behavior of two different biological systems is investigated using cellular automata. Starting from this spatially extended approach, we also try, in some cases, to reduce the complexity of the system by introducing mean-field approximations and solving (or trying to solve) these simplified systems. We discuss the biological meaning of the results, the comparison with experimental data (where available), and the differences between the spatially extended and mean-field versions. The biological systems considered in this review are Darwinian evolution in simple ecosystems and the immune system response. In the first section, the main features of molecular evolution are introduced, giving a short survey of genetics for physicists and discussing some models of prebiotic systems and simple ecosystems. A cellular automaton model is also introduced for studying a set of evolving individuals in a general fitness landscape, considering also the effects of co-evolution; in particular, the process of species formation (speciation) is described in Sect. 5. The second part deals with immune system modeling. The biological features of the immune response are discussed, and the concepts of shape space and the idiotypic network are introduced. More detailed reviews dealing with immune system models (mainly focused on idiotypic network models) can be found elsewhere. Other themes discussed here include applications of CA to immune system modeling and two complex cellular automata for humoral and cellular immune response. Finally, the biological data are discussed and general conclusions are drawn in the last section.

  11. PROTOPLANETARY DISK STRUCTURE WITH GRAIN EVOLUTION: THE ANDES MODEL

    International Nuclear Information System (INIS)

    Akimkin, V.; Wiebe, D.; Pavlyuchenkov, Ya.; Zhukovska, S.; Semenov, D.; Henning, Th.; Vasyunin, A.; Birnstiel, T.

    2013-01-01

    We present a self-consistent model of a protoplanetary disk: 'ANDES' ('AccretioN disk with Dust Evolution and Sedimentation'). ANDES is based on a flexible and extendable modular structure that includes (1) a 1+1D frequency-dependent continuum radiative transfer module, (2) a module to calculate the chemical evolution using an extended gas-grain network with UV/X-ray-driven processes and surface reactions, (3) a module to calculate the gas thermal energy balance, and (4) a 1+1D module that simulates dust grain evolution. For the first time, grain evolution and time-dependent molecular chemistry are included in a protoplanetary disk model. We find that grain growth and sedimentation of large grains onto the disk midplane lead to a dust-depleted atmosphere. Consequently, dust and gas temperatures become higher in the inner disk (R ∼ 50 AU), in comparison with the disk model with pristine dust. The response of the disk chemical structure to the dust growth and sedimentation is twofold. First, due to higher transparency a partly UV-shielded molecular layer is shifted closer to the dense midplane. Second, the presence of big grains in the disk midplane delays the freeze-out of volatile gas-phase species such as CO there, while in adjacent upper layers the depletion is still effective. Molecular concentrations and thus column densities of many species are enhanced in the disk model with dust evolution, e.g., CO2, NH2CN, HNO, H2O, HCOOH, HCN, and CO. We also show that time-dependent chemistry is important for a proper description of gas thermal balance.

  12. A microscopic model of rate and state friction evolution

    Science.gov (United States)

    Li, Tianyi; Rubin, Allan M.

    2017-08-01

    Whether rate- and state-dependent friction evolution is primarily slip dependent or time dependent is not well resolved. Although slide-hold-slide experiments are traditionally interpreted as supporting the aging law, implying time-dependent evolution, recent studies show that this evidence is equivocal. In contrast, the slip law yields extremely good fits to velocity step experiments, although a clear physical picture for slip-dependent friction evolution is lacking. We propose a new microscopic model for rate and state friction evolution in which each asperity has a heterogeneous strength, with individual portions recording the velocity at which they became part of the contact. Assuming an exponential distribution of asperity sizes on the surface, the model produces results essentially similar to the slip law, yielding very good fits to velocity step experiments but not improving much the fits to slide-hold-slide experiments. A numerical kernel for the model is developed, and an analytical expression is obtained for perfect velocity steps, which differs from the slip law expression by a slow-decaying factor. By changing the quantity that determines the intrinsic strength, we use the same model structure to investigate aging-law-like time-dependent evolution. Assuming strength to increase logarithmically with contact age, for two different definitions of age we obtain results for velocity step increases significantly different from the aging law. Interestingly, a solution very close to the aging law is obtained if we apply a third definition of age that we consider to be nonphysical. This suggests that under the current aging law, the state variable is not synonymous with contact age.
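    The competing evolution laws discussed above can be stated compactly. The sketch below implements the standard aging and slip laws and the rate-and-state friction coefficient, in textbook form with illustrative parameter values, not the authors' new microscopic model:

```python
import numpy as np

def evolve_state_aging(theta, v, dc, dt):
    """Aging law (time-dependent healing): d(theta)/dt = 1 - v*theta/dc."""
    return theta + dt * (1.0 - v * theta / dc)

def evolve_state_slip(theta, v, dc, dt):
    """Slip law (slip-dependent evolution):
    d(theta)/dt = -(v*theta/dc) * ln(v*theta/dc)."""
    x = v * theta / dc
    return theta + dt * (-x * np.log(x))

def friction(v, theta, mu0=0.6, a=0.01, b=0.015, v0=1e-6, dc=1e-5):
    """Rate-and-state friction: mu = mu0 + a*ln(v/v0) + b*ln(v0*theta/dc)."""
    return mu0 + a * np.log(v / v0) + b * np.log(v0 * theta / dc)
```

    Both laws share the steady state theta = dc/v; they differ in how the state variable evolves away from it, which is exactly what velocity step and slide-hold-slide experiments probe.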

  13. Topographic evolution of sandbars: Flume experiment and computational modeling

    Science.gov (United States)

    Kinzel, Paul J.; Nelson, Jonathan M.; McDonald, Richard R.; Logan, Brandy L.

    2010-01-01

    Measurements of sandbar formation and evolution were carried out in a laboratory flume, and the topographic characteristics of these barforms were compared to predictions from a computational flow and sediment transport model with bed evolution. The flume experiment produced sandbars with an approximate mode of 2, whereas the numerical simulations produced a bed morphology better approximated as mode-1 alternate bars. In addition, bar formation occurred more rapidly in the laboratory channel than in the model channel. This paper focuses on a steady-flow laboratory experiment without upstream sediment supply. Future experiments will examine the effects of unsteady flow and sediment supply and the use of numerical models to simulate the response of barform topography to these influences.

  14. Abundance gradients in disc galaxies and chemical evolution models

    International Nuclear Information System (INIS)

    Diaz, A.I.

    1989-01-01

    The present state of abundance gradients and chemical evolution models of spiral galaxies is reviewed. An up-to-date compilation of abundance data in the literature concerning HII regions over galactic discs is presented. From these data, oxygen and nitrogen radial gradients are computed. The slope of the oxygen gradient is shown to have a break at a radius between 1.5 and 1.75 times the effective radius of the disc, i.e. the radius containing half of the light of the disc. The gradient is steeper in the central parts of the disc and becomes flatter in the outer parts. N/O gradients are shown to be rather different from galaxy to galaxy, and only a weak trend of N/O with O/H is found. The existing chemical evolution models for spiral galaxies are reviewed with special emphasis on the interpretation of numerical models having a large number of parameters. (author)
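    The broken-slope gradient described above can be quantified with two independent linear fits on either side of the break radius. A hypothetical helper (radii in units of the disc effective radius, abundances as 12 + log(O/H); not the review's own fitting procedure):

```python
import numpy as np

def broken_gradient(r, abundance, r_break):
    """Fit separate linear abundance gradients inside and outside a
    given break radius; returns ((slope, intercept), (slope, intercept))."""
    inner = r <= r_break
    m_in, b_in = np.polyfit(r[inner], abundance[inner], 1)
    m_out, b_out = np.polyfit(r[~inner], abundance[~inner], 1)
    return (m_in, b_in), (m_out, b_out)
```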

  15. Differential Evolution algorithm applied to FSW model calibration

    Science.gov (United States)

    Idagawa, H. S.; Santos, T. F. A.; Ramirez, A. J.

    2014-03-01

    Friction Stir Welding (FSW) is a solid state welding process that can be modelled using a Computational Fluid Dynamics (CFD) approach. These models use adjustable parameters to control the heat transfer and the heat input to the weld. These parameters are used to calibrate the model and they are generally determined using the conventional trial and error approach. Since this method is not very efficient, we used the Differential Evolution (DE) algorithm to successfully determine these parameters. In order to improve the success rate and to reduce the computational cost of the method, this work studied different characteristics of the DE algorithm, such as the evolution strategy, the objective function, the mutation scaling factor and the crossover rate. The DE algorithm was tested using a friction stir weld performed on a UNS S32205 Duplex Stainless Steel.
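    A minimal DE/rand/1/bin minimizer conveys the moving parts named above (evolution strategy, mutation scaling factor F, crossover rate CR). This is a generic sketch, not the calibration setup used in the paper:

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=0):
    """Minimal DE/rand/1/bin minimizer of f over box constraints."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    cost = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct partners, none equal to i.
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 size=3, replace=False)
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True  # guarantee one mutated gene
            trial = np.where(cross, mutant, pop[i])
            fc = f(trial)
            if fc <= cost[i]:  # greedy selection
                pop[i], cost[i] = trial, fc
    best = int(np.argmin(cost))
    return pop[best], cost[best]
```

    Swapping the mutation strategy (e.g. best/1 instead of rand/1) or tuning F and CR changes the convergence behavior, which is exactly the design space the paper explores.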

  16. The Supercritical Pile GRB Model: The Prompt to Afterglow Evolution

    Science.gov (United States)

    Mastichiadis, A.; Kazanas, D.

    2009-01-01

    The "Supercritical Pile" is a very economical GRB model that provides for the efficient conversion of the energy stored in the protons of a Relativistic Blast Wave (RBW) into radiation and at the same time produces, in the prompt GRB phase, even in the absence of any particle acceleration, a spectral peak at energy approx. 1 MeV. We extend this model to include the evolution of the RBW Lorentz factor Gamma and thus follow its spectral and temporal features into the early GRB afterglow stage. One of the novel features of the present treatment is the inclusion of the feedback of the GRB-produced radiation on the evolution of Gamma with radius. This feedback and the presence of kinematic and dynamic thresholds in the model can be the source of rich time evolution, which we have begun to explore. In particular, one may obtain afterglow light curves with steep decays followed by the more conventional flatter afterglow slopes, while at the same time preserving the desirable features of the model, i.e. the well-defined relativistic electron source and radiative processes that produce the proper peak in the νF_ν spectra. In this note we present the results for a specific set of parameters of this model, with emphasis on the multiwavelength prompt emission and the transition to the early afterglow.

  17. Adaptive Multiscale Modeling of Geochemical Impacts on Fracture Evolution

    Science.gov (United States)

    Molins, S.; Trebotich, D.; Steefel, C. I.; Deng, H.

    2016-12-01

    Understanding fracture evolution is essential for many subsurface energy applications, including subsurface storage, shale gas production, fracking, CO2 sequestration, and geothermal energy extraction. Geochemical processes in particular play a significant role in the evolution of fractures through dissolution-driven widening, fines migration, and/or fracture sealing due to precipitation. One obstacle to understanding and exploiting geochemical fracture evolution is that it is a multiscale process. However, current geochemical modeling of fractures cannot capture this multi-scale nature of geochemical and mechanical impacts on fracture evolution, and is limited to either a continuum or pore-scale representation. Conventional continuum-scale models treat fractures as preferential flow paths, with their permeability evolving as a function (often, a cubic law) of the fracture aperture. This approach has the limitation that it oversimplifies flow within the fracture in its omission of pore scale effects while also assuming well-mixed conditions. More recently, pore-scale models along with advanced characterization techniques have allowed for accurate simulations of flow and reactive transport within the pore space (Molins et al., 2014, 2015). However, these models, even with high performance computing, are currently limited in their ability to treat tractable domain sizes (Steefel et al., 2013). Thus, there is a critical need to develop an adaptive modeling capability that can account for separate properties and processes, emergent and otherwise, in the fracture and the rock matrix at different spatial scales. Here we present an adaptive modeling capability that treats geochemical impacts on fracture evolution within a single multiscale framework. Model development makes use of the high performance simulation capability, Chombo-Crunch, leveraged by high resolution characterization and experiments. The modeling framework is based on the adaptive capability in Chombo
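    The continuum-scale cubic-law closure mentioned above is easy to state explicitly. A generic parallel-plate form with an assumed water-like viscosity, not the Chombo-Crunch implementation:

```python
def cubic_law_transmissivity(aperture_m, viscosity_pa_s=1e-3):
    """Parallel-plate cubic law: flow per unit width and unit pressure
    gradient scales as b**3 / (12 * mu) for fracture aperture b."""
    return aperture_m ** 3 / (12.0 * viscosity_pa_s)
```

    The b-cubed scaling is why dissolution-driven widening of a fracture has such an outsized effect on continuum-scale permeability, and why omitting pore-scale mixing limits this description.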

  18. A Survey of the Origin and Evolution of Religion from the Points of View Edward Tylor and James Frazer

    Directory of Open Access Journals (Sweden)

    Alireza khajegir

    2016-01-01

    As a universal human phenomenon, religion is rooted in human nature, and human beings instinctively require a superior and supreme power. Besides this internal need for religion, attention to the meaning, function, and interpretation of religion has always been prevalent in the history of human thought from West to East, and scholars have always tried to comment on and analyze this fundamental issue of human life. Among the approaches that arose to interpret and explain religion, the rationalist tendency, influenced by evolutionism, stands out: it takes the genesis and evolution of religion as manifestations of the evolution of human thought, and treats the development and evolution of religion as equivalent. This approach considers religion an answer to the cognitive needs of human beings. In this anthropological approach, religion is the product of primitive human beings' effort to identify objects and events in the surrounding environment. As a result, as man's knowledge of the world around him increases, the need for religion decreases. Anthropologists such as Edward Tylor and James Frazer have taken this view of the origin and evolution of religion. They emphasize principles such as the bodily and cognitive unity of the mind, the survival principle, and the evolutionary intellectual pattern of human beings in order to interpret the stages of religion from animism and magic to monism and monotheism, which will eventually decline with the development of science. Tylor regards anthropology as the best scientific method to achieve a universal theory of the origin of religion. Based on its psychological unity, religion in all times and places, despite its diversity, is a unique phenomenon and has an exclusive identity, because the very existence of commonalities in all practices and customs of the people of the world is indicative of the basic

  20. Time evolution of one-dimensional gapless models from a domain wall initial state: stochastic Loewner evolution continued?

    International Nuclear Information System (INIS)

    Calabrese, Pasquale; Hagendorf, Christian; Doussal, Pierre Le

    2008-01-01

    We study the time evolution of quantum one-dimensional gapless systems evolving from initial states with a domain wall. We generalize the path integral imaginary time approach that together with boundary conformal field theory allows us to derive the time and space dependence of general correlation functions. The latter are explicitly obtained for the Ising universality class, and the typical behavior of one- and two-point functions is derived for the general case. Possible connections with the stochastic Loewner evolution are discussed and explicit results for one-point time dependent averages are obtained for generic κ for boundary conditions corresponding to stochastic Loewner evolution. We use this set of results to predict the time evolution of the entanglement entropy and obtain the universal constant shift due to the presence of a domain wall in the initial state

  1. A model for simulation of coupled microstructural and compositional evolution

    International Nuclear Information System (INIS)

    Tikare, Veena; Homer, Eric R.; Holm, Elizabeth A.

    2011-01-01

    The formation, transport and segregation of components in nuclear fuels fundamentally control their behavior, performance, longevity and safety. Most nuclear fuels enter service with a uniform composition consisting of a single phase with two or three components. Fission products form, introducing more components. The segregation and transport of the components is complicated by the underlying microstructure consisting of grains, pores, bubbles and more, which is evolving under temperature gradients during service. As they evolve, components and microstructural features interact such that composition affects microstructure and vice versa. The ability to predict the interdependent compositional and microstructural evolution in 3D as a function of burn-up would greatly improve the ability to design safe, high burn-up nuclear fuels. We present a model that combines elements of Potts Monte Carlo, MC, and the phase-field model to treat coupled microstructural-compositional evolution. This hybrid model uses an equation of state, EOS, based on the microstructural state and the composition. The microstructural portion uses the traditional MC EOS and the compositional portion uses the phase-field EOS: E_hyb = Σ_{i=1}^{N} [ E_v(q_i, C) + (1/2) Σ_{j=1}^{n} J(q_i, q_j) ] + ∫ κ_c (∇C)² dV. Here E_v is the bulk free energy of each site i and J is the bond energy between neighboring sites i and j; thus, this term defines the microstructural interfacial energy. The last term is the compositional interfacial energy as defined in the traditional phase-field model. Evolution of the coupled microstructure-composition is simulated by minimizing the free energy in a path-dependent manner. This model will be presented and demonstrated by applying it to the evolution of nuclear fuels during service. (author)
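    A discrete 2D version of the hybrid EOS makes the three energy terms concrete: a site bulk energy, a bond energy for unlike neighbor spins (the grain-boundary term), and a gradient penalty on the composition field. This is an illustrative sketch; the periodic neighbor counting and the form of E_v are assumptions:

```python
import numpy as np

def hybrid_energy(q, C, Ev, J=1.0, kappa=1.0):
    """Discrete hybrid EOS on a 2D periodic lattice: site bulk energy
    Ev(q, C), bond energy J per pair of unlike neighbor spins q, and a
    phase-field gradient penalty kappa * |grad C|^2."""
    e_bulk = Ev(q, C).sum()
    e_bond = 0.0
    e_grad = 0.0
    for axis in (0, 1):
        # Roll by one so each nearest-neighbor bond is counted once.
        e_bond += J * (q != np.roll(q, 1, axis=axis)).sum()
        dC = C - np.roll(C, 1, axis=axis)
        e_grad += kappa * (dC ** 2).sum()
    return e_bulk + e_bond + e_grad
```

    Minimizing this energy with Metropolis-style spin flips and composition updates is the kind of path-dependent evolution the abstract describes.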

  2. HYDROLOGY AND SEDIMENT MODELING USING THE BASINS NON-POINT SOURCE MODEL

    Science.gov (United States)

    The Non-Point Source Model (Hydrologic Simulation Program-Fortran, or HSPF) within the EPA Office of Water's BASINS watershed modeling system was used to simulate streamflow and total suspended solids within Contentnea Creek, North Carolina, which is a tributary of the Neuse Rive...

  3. Temperature distribution model for the semiconductor dew point detector

    Science.gov (United States)

    Weremczuk, Jerzy; Gniazdowski, Z.; Jachowicz, Ryszard; Lysko, Jan M.

    2001-08-01

    The simulation results for the temperature distribution in a new type of silicon dew point detector are presented in this paper. The calculations were done with the SMACEF simulation program. The fabricated structures contained, apart from the impedance detector used for dew point detection, a resistive four-terminal thermometer and two heaters. Two detector structures, the first located on a silicon membrane and the second placed on the bulk material, are compared in this paper.

  4. Dynamics of symmetry breaking during quantum real-time evolution in a minimal model system.

    Science.gov (United States)

    Heyl, Markus; Vojta, Matthias

    2014-10-31

    One necessary criterion for the thermalization of a nonequilibrium quantum many-particle system is ergodicity. It is, however, not sufficient in cases where the asymptotic long-time state lies in a symmetry-broken phase but the initial state of nonequilibrium time evolution is fully symmetric with respect to this symmetry. In equilibrium, one particular symmetry-broken state is chosen as a result of an infinitesimal symmetry-breaking perturbation. From a dynamical point of view the question is: Can such an infinitesimal perturbation be sufficient for the system to establish a nonvanishing order during quantum real-time evolution? We study this question analytically for a minimal model system that can be associated with symmetry breaking, the ferromagnetic Kondo model. We show that after a quantum quench from a completely symmetric state the system is able to break its symmetry dynamically and discuss how these features can be observed experimentally.

  5. Entropy in the Tangled Nature Model of evolution

    DEFF Research Database (Denmark)

    Roach, Ty N.F.; Nulton, James; Sibani, Paolo

    2017-01-01

    Applications of entropy principles to evolution and ecology are of paramount importance given the central role spatiotemporal structuring plays in both evolution and ecological succession. We obtain here a qualitative interpretation of the role of entropy in evolving ecological systems. Our interpretation is supported by mathematical arguments using simulation data generated by the Tangled Nature Model (TNM), a stochastic model of evolving ecologies. We define two types of configurational entropy and study their empirical time dependence obtained from the data. Both entropy measures increase logarithmically with time, while the entropy per individual decreases in time, in parallel with the growth of emergent structures visible from other aspects of the simulation. We discuss the biological relevance of these entropies to describe the niche space and functional space of ecosystems, as well as their use...

  6. Long range anti-ferromagnetic spin model for prebiotic evolution

    International Nuclear Information System (INIS)

    Nokura, Kazuo

    2003-01-01

    I propose and discuss a fitness function for one-dimensional binary monomer sequences of macromolecules for prebiotic evolution. The fitness function is defined by the free energy of polymers in the high temperature random coil phase. With repulsive interactions among the same kind of monomers, the free energy in the high temperature limit becomes the energy function of the one-dimensional long range anti-ferromagnetic spin model, which is shown to have a dynamical phase transition and glassy states
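A minimal numerical reading of such a model: with repulsive (antiferromagnetic) couplings among like monomers, an alternating binary sequence has lower energy than a homogeneous one. The power-law decay of the coupling with separation is an assumed form for illustration, not necessarily the one used in the paper:

```python
def sequence_energy(seq, alpha=1.0):
    """Long-range antiferromagnetic energy for a binary monomer sequence.
    seq: list of +/-1 values encoding the two monomer types.
    Like pairs are penalised (positive contribution) with a coupling
    assumed here to decay as 1/r**alpha with monomer separation r."""
    n = len(seq)
    E = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            # antiferromagnetic convention: +J(r) * s_i * s_j with J(r) > 0
            E += seq[i] * seq[j] / (j - i) ** alpha
    return E
```

Minimizing this energy over sequences favours anti-aligned (alternating-like) arrangements, the analogue of the low-energy states discussed above.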

  7. Nonlinear evolution inclusions arising from phase change models

    Czech Academy of Sciences Publication Activity Database

    Colli, P.; Krejčí, Pavel; Rocca, E.; Sprekels, J.

    2007-01-01

    Roč. 57, č. 4 (2007), s. 1067-1098 ISSN 0011-4642 R&D Projects: GA ČR GA201/02/1058 Institutional research plan: CEZ:AV0Z10190503 Keywords : nonlinear and nonlocal evolution equations * Cahn-Hilliard type dynamics * phase transitions models Subject RIV: BA - General Mathematics Impact factor: 0.155, year: 2007 http://www.dml.cz/bitstream/handle/10338.dmlcz/128228/CzechMathJ_57-2007-4_2.pdf

  8. Geochemical modelling of groundwater evolution using chemical equilibrium codes

    International Nuclear Information System (INIS)

    Pitkaenen, P.; Pirhonen, V.

    1991-01-01

    Geochemical equilibrium codes are a modern tool for studying the interaction between groundwater and solid phases. The most commonly used programs and their application subjects are briefly presented in this article. The main emphasis is on the approach of using calculated results to evaluate groundwater evolution in a hydrogeological system. At present, geochemical equilibrium modelling also takes kinetic as well as hydrologic constraints along a flow path into consideration

  9. Insights into mortality patterns and causes of death through a process point of view model.

    Science.gov (United States)

    Anderson, James J; Li, Ting; Sharrow, David J

    2017-02-01

    Process point of view (POV) models of mortality, such as the Strehler-Mildvan and stochastic vitality models, represent death in terms of the loss of survival capacity through challenges and dissipation. Drawing on hallmarks of aging, we link these concepts to candidate biological mechanisms through a framework that defines death as challenges to vitality, where distal factors define the age-evolution of vitality and proximal factors define the probability distribution of challenges. To illustrate the process POV, we hypothesize that the immune system is a mortality nexus, characterized by two vitality streams: increasing vitality representing immune system development and immunosenescence representing vitality dissipation. Proximal challenges define three mortality partitions: juvenile and adult extrinsic mortalities and intrinsic adult mortality. Model parameters, generated from Swedish mortality data (1751-2010), exhibit biologically meaningful correspondences to economic, health and cause-of-death patterns. The model characterizes the twentieth-century epidemiological transition mainly as a reduction in extrinsic mortality resulting from a shift from high-magnitude disease challenges on individuals at all vitality levels to low-magnitude stress challenges on low-vitality individuals. Of secondary importance, intrinsic mortality was described by a gradual reduction in the rate of loss of vitality, presumably resulting from a reduction in the rate of immunosenescence. Extensions and limitations of a distal/proximal framework for characterizing more explicit causes of death, e.g. the young-adult mortality hump or cancer in old age, are discussed.
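The distal/proximal decomposition can be caricatured in a few lines: vitality declines deterministically (distal factors) while Poisson-arriving random challenges test it (proximal factors), and death occurs when a challenge exceeds the remaining vitality or vitality is exhausted. All functional forms and parameter values below are illustrative assumptions, not the fitted model:

```python
import numpy as np

def age_at_death(v0, loss_rate, challenge_rate, challenge_mag, rng):
    """Minimal sketch of the process POV. Vitality declines linearly at
    loss_rate; challenges arrive at challenge_rate per unit time with
    exponentially distributed magnitude (mean challenge_mag). These
    functional forms are assumptions for illustration only."""
    v, t, dt = v0, 0.0, 0.1
    while v > 0:
        v -= loss_rate * dt                          # distal: dissipation of vitality
        if rng.random() < challenge_rate * dt:       # proximal: a challenge arrives
            if rng.exponential(challenge_mag) > v:   # challenge exceeds remaining vitality
                break
        t += dt
    return t
```

With the challenge rate set to zero, death occurs only by vitality exhaustion (the "intrinsic" partition); raising the challenge rate or magnitude shifts deaths toward the "extrinsic" partitions.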

  10. A model for evolution of overlapping community networks

    Science.gov (United States)

    Karan, Rituraj; Biswal, Bibhu

    2017-05-01

    A model is proposed for the evolution of network topology in social networks with overlapping community structure. Starting from an initial community structure that is defined in terms of group affiliations, the model postulates that the subsequent growth and loss of connections is similar to Hebbian learning and unlearning in the brain and is governed by two dominant factors: the strength and frequency of interaction between the members, and the degree of overlap between different communities. The temporal evolution from an initial community structure to the current network topology can be described based on these two parameters. It is possible to quantify the growth that has occurred so far and predict the final stationary state to which the network is likely to evolve. Applications in epidemiology, or to the spread of an email virus in a computer network and the identification of specific target nodes to control it, are envisaged. When facing the challenge of collecting and analyzing large-scale time-resolved data on social groups and communities, one faces the most basic question: how do communities evolve in time? This work aims to address this issue by developing a mathematical model for the evolution of community networks and studying it through computer simulation.
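The Hebbian-like growth/loss rule governed by the two factors above can be sketched as a simple weight update. The linear functional form, the rates, and the way overlap scales the reinforcement are all assumptions for illustration, not the paper's exact equations:

```python
import numpy as np

def update_weights(W, interaction, overlap, eta=0.1, decay=0.05):
    """One assumed Hebbian-like step: tie strengths W grow with the
    strength/frequency of interaction, scaled by community overlap,
    and otherwise decay ("unlearning"). All entries are kept in [0, 1]
    and self-ties are zeroed. Illustrative sketch only."""
    W = W + eta * interaction * overlap - decay * W
    np.fill_diagonal(W, 0.0)
    return np.clip(W, 0.0, 1.0)
```

Iterating this map from an initial affiliation-based W and tracking its fixed point mimics the "final stationary state" the abstract refers to.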

  11. Bayesian nonparametric clustering in phylogenetics: modeling antigenic evolution in influenza.

    Science.gov (United States)

    Cybis, Gabriela B; Sinsheimer, Janet S; Bedford, Trevor; Rambaut, Andrew; Lemey, Philippe; Suchard, Marc A

    2018-01-30

    Influenza is responsible for up to 500,000 deaths every year, and antigenic variability represents much of its epidemiological burden. To visualize antigenic differences across many viral strains, antigenic cartography methods use multidimensional scaling on binding assay data to map influenza antigenicity onto a low-dimensional space. Analysis of such assay data ideally leads to natural clustering of influenza strains of similar antigenicity that correlate with sequence evolution. To understand the dynamics of these antigenic groups, we present a framework that jointly models genetic and antigenic evolution by combining multidimensional scaling of binding assay data, Bayesian phylogenetic machinery and nonparametric clustering methods. We propose a phylogenetic Chinese restaurant process that extends the current process to incorporate the phylogenetic dependency structure between strains in the modeling of antigenic clusters. With this method, we are able to use the genetic information to better understand the evolution of antigenicity throughout epidemics, as shown in applications of this model to H1N1 influenza. Copyright © 2017 John Wiley & Sons, Ltd.
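For readers unfamiliar with the base construction being extended, a standard (non-phylogenetic) Chinese restaurant process can be sketched as follows; the phylogenetic dependency structure introduced by the paper is not reproduced here:

```python
import random

def crp_assignments(n, alpha, seed=0):
    """Standard Chinese restaurant process: customer i joins an existing
    cluster k with probability counts[k] / (i + alpha), or opens a new
    cluster with probability alpha / (i + alpha). Returns a cluster
    label per customer (here, per strain)."""
    rng = random.Random(seed)
    labels = []
    counts = []  # counts[k] = current size of cluster k
    for i in range(n):
        r = rng.uniform(0.0, i + alpha)
        acc = 0.0
        for k, c in enumerate(counts):
            acc += c
            if r < acc:          # sit at existing table k
                labels.append(k)
                counts[k] += 1
                break
        else:                    # open a new table
            labels.append(len(counts))
            counts.append(1)
    return labels
```

Larger concentration parameters alpha tend to produce more clusters; the phylogenetic extension replaces these exchangeable seating probabilities with ones informed by the tree.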

  12. The Institutional Approach for Modeling the Evolution of Human Societies.

    Science.gov (United States)

    Powers, Simon T

    2018-01-01

    Artificial life is concerned with understanding the dynamics of human societies. A defining feature of any society is its institutions. However, defining exactly what an institution is has proven difficult, with authors often talking past each other. This article presents a dynamic model of institutions, which views them as political game forms that generate the rules of a group's economic interactions. Unlike most prior work, the framework presented here allows for the construction of explicit models of the evolution of institutional rules. It takes account of the fact that group members are likely to try to create rules that benefit themselves. Following from this, it allows us to determine the conditions under which self-interested individuals will create institutional rules that support cooperation, for example rules that prevent a tragedy of the commons. The article finishes with an example of how a model of the evolution of institutional rewards and punishments for promoting cooperation can be created. It is intended that this framework will allow artificial life researchers to examine how human groups can themselves create conditions for cooperation. This will help provide a better understanding of historical human social evolution, and facilitate the resolution of pressing social dilemmas.

  13. A New Blind Pointing Model Improves Large Reflector Antennas Precision Pointing at Ka-Band (32 GHz)

    Science.gov (United States)

    Rochblatt, David J.

    2009-01-01

    The National Aeronautics and Space Administration (NASA), Jet Propulsion Laboratory (JPL)-Deep Space Network (DSN) subnet of 34-m Beam Waveguide (BWG) Antennas was recently upgraded with Ka-Band (32-GHz) frequency feeds for space research and communication. For normal telemetry tracking a Ka-Band monopulse system is used, which typically yields 1.6-mdeg mean radial error (MRE) pointing accuracy on the 34-m diameter antennas. However, for the monopulse to be able to acquire and lock, for special radio science applications where monopulse cannot be used, or as a back-up for the monopulse, high-precision open-loop blind pointing is required. This paper describes a new 4th order pointing model and calibration technique, which was developed and applied to the DSN 34-m BWG antennas yielding 1.8 to 3.0-mdeg MRE pointing accuracy and amplitude stability of 0.2 dB, at Ka-Band, and successfully used for the CASSINI spacecraft occultation experiment at Saturn and Titan. In addition, the new 4th order pointing model was used during a telemetry experiment at Ka-Band (32 GHz) utilizing the Mars Reconnaissance Orbiter (MRO) spacecraft while at a distance of 0.225 astronomical units (AU) from Earth and communicating with a DSN 34-m BWG antenna at a record high rate of 6-megabits per second (Mb/s).

  14. Algebraic aspects of evolution partial differential equation arising in the study of constant elasticity of variance model from financial mathematics

    Science.gov (United States)

    Motsepa, Tanki; Aziz, Taha; Fatima, Aeeman; Khalique, Chaudry Masood

    2018-03-01

    The optimal investment-consumption problem under the constant elasticity of variance (CEV) model is investigated from the perspective of Lie group analysis. The Lie symmetry group of the evolution partial differential equation describing the CEV model is derived. The Lie point symmetries are then used to obtain an exact solution of the governing model satisfying a standard terminal condition. Finally, we construct conservation laws of the underlying equation using the general theorem on conservation laws.
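For context, the CEV asset dynamics and the associated evolution PDE are commonly written in the following general textbook form; this is a standard formulation with volatility parameter σ, elasticity parameter β and discount rate r, not necessarily the exact governing equation analysed in the paper:

```latex
% CEV asset dynamics
dS_t = \mu S_t \, dt + \sigma S_t^{\beta} \, dW_t
% A typical backward evolution PDE for a value function u(t, S)
% under these dynamics:
\frac{\partial u}{\partial t}
  + \mu S \frac{\partial u}{\partial S}
  + \tfrac{1}{2} \sigma^2 S^{2\beta} \frac{\partial^2 u}{\partial S^2}
  - r u = 0
```

Lie point symmetries of such a PDE are found by requiring invariance of the equation under infinitesimal transformations of (t, S, u), which is the program the abstract describes.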

  15. Higgs quartic coupling and neutrino sector evolution in 2UED models

    KAUST Repository

    Abdalgabar, A.

    2014-05-20

    Two compact universal extra-dimensional models are an interesting class of models for different theoretical and phenomenological issues, such as the justification of having three standard model fermion families, suppression of the proton decay rate, a dark matter parity from relics of the six-dimensional Lorentz symmetry, and the origin of masses and mixings in the standard model. However, these theories are merely effective ones, with a typically reduced range of validity in energy scale. We explore two limiting cases, with the three standard model generations all propagating in the bulk or all localised to a brane, from the point of view of renormalisation group equation evolutions for the Higgs sector and the neutrino sector of these models. The recent experimental results on the Higgs boson from the LHC allow, in some scenarios, stronger constraints to be placed on the cutoff scale from the requirement of the stability of the Higgs potential.

  16. SIGNUM: A Matlab, TIN-based landscape evolution model

    Science.gov (United States)

    Refice, A.; Giachetta, E.; Capolongo, D.

    2012-08-01

    Several numerical landscape evolution models (LEMs) have been developed to date, and many are available as open-source codes. Most are written in efficient programming languages such as Fortran or C, but often require additional coding effort to plug into more user-friendly data analysis and/or visualization tools that ease interpretation and scientific insight. In this paper, we present an effort to port a common core of accepted physical principles governing landscape evolution directly into a high-level language and data analysis environment such as Matlab. SIGNUM (an acronym for Simple Integrated Geomorphological Numerical Model) is an independent and self-contained Matlab, TIN-based landscape evolution model, built to simulate topography development at various space and time scales. SIGNUM is presently capable of simulating hillslope processes such as linear and nonlinear diffusion, fluvial incision into bedrock, spatially varying surface uplift (which can be used to simulate changes in base level), thrusting and faulting, as well as the effects of climate changes. Although based on accepted and well-known processes and algorithms in its present version, it is built with a modular structure that allows the simulated physical processes to be easily modified and upgraded to suit virtually any user's needs. The code is conceived as an open-source project, and is thus an ideal tool for both research and didactic purposes, thanks to the high-level nature of the Matlab environment and its popularity among the scientific community. In this paper the simulation code is presented together with some simple examples of surface evolution, and guidelines for the development of new modules and algorithms are proposed.
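Among the processes listed, linear hillslope diffusion is the simplest to sketch. The explicit finite-difference step below is written in Python for a regular periodic grid rather than SIGNUM's Matlab TIN, purely as an illustration of the governing equation dz/dt = κ ∇²z:

```python
import numpy as np

def diffuse(z, kappa=0.01, dx=1.0, dt=1.0, steps=1):
    """Explicit finite-difference integration of linear hillslope
    diffusion, dz/dt = kappa * laplacian(z), on a periodic regular grid
    (a simplification of a TIN-based scheme). Stable for
    kappa * dt / dx**2 <= 0.25 in 2D."""
    for _ in range(steps):
        lap = (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
               np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4.0 * z) / dx**2
        z = z + kappa * dt * lap
    return z
```

The scheme conserves total elevation on a periodic domain and smooths local peaks, the defining behaviour of diffusive hillslope transport.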

  17. A Nonstationary Markov Model Detects Directional Evolution in Hymenopteran Morphology.

    Science.gov (United States)

    Klopfstein, Seraina; Vilhelmsen, Lars; Ronquist, Fredrik

    2015-11-01

    Directional evolution has played an important role in shaping the morphological, ecological, and molecular diversity of life. However, standard substitution models assume stationarity of the evolutionary process over the time scale examined, thus impeding the study of directionality. Here we explore a simple, nonstationary model of evolution for discrete data, which assumes that the state frequencies at the root differ from the equilibrium frequencies of the homogeneous evolutionary process along the rest of the tree (i.e., the process is nonstationary, nonreversible, but homogeneous). Within this framework, we develop a Bayesian approach for testing directional versus stationary evolution using a reversible-jump algorithm. Simulations show that when only data from extant taxa are available, the success in inferring directionality is strongly dependent on the evolutionary rate, the shape of the tree, the relative branch lengths, and the number of taxa. Given suitable evolutionary rates (0.1-0.5 expected substitutions between root and tips), accounting for directionality improves tree inference and often allows correct rooting of the tree without the use of an outgroup. As an empirical test, we apply our method to study directional evolution in hymenopteran morphology. We focus on three character systems: wing veins, muscles, and sclerites. We find strong support for a trend toward loss of wing veins and muscles, while stationarity cannot be ruled out for sclerites. Adding fossil and time information in a total-evidence dating approach, we show that accounting for directionality results in more precise estimates not only of the ancestral state at the root of the tree, but also of the divergence times. Our model relaxes the assumption of stationarity and reversibility by adding a minimum of additional parameters, and is thus well suited to studying the nature of the evolutionary process in data sets of limited size, such as morphology and ecology. © The Author
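The key idea, root-state frequencies that differ from the equilibrium frequencies of an otherwise homogeneous process, can be illustrated with a discrete-time two-state chain (a deliberate simplification of the paper's continuous-time substitution model):

```python
import numpy as np

def propagate(root_freqs, P, n_steps):
    """Evolve state frequencies from a root distribution under a
    homogeneous transition matrix P (a discrete-time stand-in for the
    substitution process along a branch). Directionality shows up as
    root_freqs differing from the equilibrium frequencies of P; the
    mismatch decays toward the tips."""
    p = np.asarray(root_freqs, dtype=float)
    for _ in range(n_steps):
        p = p @ P
    return p
```

For P = [[0.9, 0.1], [0.2, 0.8]], the equilibrium frequencies are (2/3, 1/3); starting the root at (1, 0) produces a transient bias toward state 0 near the root, the signature the reversible-jump test looks for.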

  18. Modelling Influence and Opinion Evolution in Online Collective Behaviour.

    Directory of Open Access Journals (Sweden)

    Corentin Vande Kerckhove

    Full Text Available Opinion evolution and judgment revision are mediated through social influence. Based on a large crowdsourced in vitro experiment (n = 861), it is shown how a consensus model can be used to predict opinion evolution in online collective behaviour. It is the first time the predictive power of a quantitative model of opinion dynamics has been tested against a real dataset. Unlike previous research on the topic, the model was validated on data which did not serve to calibrate it. This avoids favoring more complex models over simpler ones and prevents overfitting. The model is parametrized by the influenceability of each individual, a factor representing to what extent individuals incorporate external judgments. The prediction accuracy depends on prior knowledge of the participants' past behaviour. Several situations reflecting data availability are compared. When data are scarce, data from previous participants are used to predict how a new participant will behave. Judgment revision includes unpredictable variations which limit the potential for prediction. A first measure of unpredictability is proposed, based on a specific control experiment. More than two thirds of the prediction errors are found to occur due to unpredictability of the human judgment revision process rather than to model imperfection.
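A minimal sketch of an influenceability-parametrized revision step, assuming a linear consensus update (the paper's exact model specification may differ):

```python
def revise(own, others_mean, influenceability):
    """One judgment-revision step of a linear consensus model: the
    individual moves toward the mean of the external judgments by a
    fraction given by his or her influenceability, a value in [0, 1].
    influenceability = 0 ignores others; 1 adopts their mean outright."""
    return (1.0 - influenceability) * own + influenceability * others_mean
```

Fitting the model then amounts to estimating each participant's influenceability from observed pre- and post-exposure judgments.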

  19. Calibration of a stochastic health evolution model using NHIS data

    Science.gov (United States)

    Gupta, Aparna; Li, Zhisheng

    2011-10-01

    This paper presents and calibrates an individual's stochastic health evolution model. In this health evolution model, the uncertainty of health incidents is described by a stochastic process with a finite number of possible outcomes. We construct a comprehensive health status index (HSI) to describe an individual's health status, as well as a health risk factor system (RFS) to classify individuals into different risk groups. Based on the maximum likelihood estimation (MLE) method and the method of nonlinear least squares fitting, model calibration is formulated in terms of two mixed-integer nonlinear optimization problems. Using the National Health Interview Survey (NHIS) data, the model is calibrated for specific risk groups. Longitudinal data from the Health and Retirement Study (HRS) is used to validate the calibrated model, which displays good validation properties. The end goal of this paper is to provide a model and methodology, whose output can serve as a crucial component of decision support for strategic planning of health related financing and risk management.

  20. A case study on point process modelling in disease mapping

    Czech Academy of Sciences Publication Activity Database

    Beneš, Viktor; Bodlák, M.; Moller, J.; Waagepetersen, R.

    2005-01-01

    Roč. 24, č. 3 (2005), s. 159-168 ISSN 1580-3139 R&D Projects: GA MŠk 0021620839; GA ČR GA201/03/0946 Institutional research plan: CEZ:AV0Z10750506 Keywords : log Gaussian Cox point process * Bayesian estimation Subject RIV: BB - Applied Statistics, Operational Research

  1. Business models & business cases for point-of-care testing

    NARCIS (Netherlands)

    Staring, A.J.; Meertens, L. O.; Sikkel, N.

    2016-01-01

    Point-Of-Care Testing (POCT) enables clinical tests at or near the patient, with test results that are available instantly or in a very short time frame, to assist caregivers with immediate diagnosis and/or clinical intervention. The goal of POCT is to provide accurate, reliable, fast, and

  2. Modeling elephant-mediated cascading effects of water point closure

    NARCIS (Netherlands)

    Hilbers, J.P.; Langevelde, van F.; Prins, H.H.T.; Grant, C.C.; Peel, M.; Coughenour, M.B.; Knegt, de H.J.; Slotow, R.; Smit, I.; Kiker, G.A.; Boer, de W.F.

    2015-01-01

    Wildlife management to reduce the impact of wildlife on their habitat can be done in several ways, among which removing animals (by either culling or translocation) is most often used. There are however alternative ways to control wildlife densities, such as opening or closing water points. The

  3. Reconstruction of Consistent 3d CAD Models from Point Cloud Data Using a Priori CAD Models

    Science.gov (United States)

    Bey, A.; Chaine, R.; Marc, R.; Thibault, G.; Akkouche, S.

    2011-09-01

    We address the reconstruction of 3D CAD models from point cloud data acquired in industrial environments, using a pre-existing 3D model as an initial estimate of the scene to be processed. Indeed, this prior knowledge can be used to drive the reconstruction so as to generate an accurate 3D model matching the point cloud. In particular, we focus our work on the cylindrical parts of the 3D models. We propose to state the problem in a probabilistic framework: we search for the 3D model which maximizes a probability taking several constraints into account, such as the relevancy with respect to the point cloud and the a priori 3D model, and the consistency of the reconstructed model. The resulting optimization problem can then be handled using a stochastic exploration of the solution space, based on the random insertion of elements into the configuration under construction, coupled with a greedy management of conflicts which efficiently improves the configuration at each step. We show that this approach provides reliable reconstructed 3D models by presenting results on industrial data sets.

  4. Geometrothermodynamic model for the evolution of the Universe

    Energy Technology Data Exchange (ETDEWEB)

    Gruber, Christine; Quevedo, Hernando, E-mail: christine.gruber@correo.nucleares.unam.mx, E-mail: quevedo@nucleares.unam.mx [Instituto de Ciencias Nucleares, Universidad Nacional Autónoma de México, AP 70543, México, DF 04510 (Mexico)

    2017-07-01

    Using the formalism of geometrothermodynamics to derive a fundamental thermodynamic equation, we construct a cosmological model in the framework of relativistic cosmology. In a first step, we describe a system without thermodynamic interaction, and show it to be equivalent to the standard ΛCDM paradigm. The second step includes thermodynamic interaction and produces a model consistent with the main features of inflation. With the proposed fundamental equation we are thus able to describe all the known epochs in the evolution of our Universe, starting from the inflationary phase.

  5. A Chemical Evolution Model for the Fornax Dwarf Spheroidal Galaxy

    Directory of Open Access Journals (Sweden)

    Yuan Zhen

    2016-01-01

    Full Text Available Fornax is the brightest Milky Way (MW) dwarf spheroidal galaxy and its star formation history (SFH) has been derived from observations. We estimate the time evolution of its gas mass and net inflow and outflow rates from the SFH using a simple star formation law that relates the star formation rate to the gas mass. We present a chemical evolution model on a 2D mass grid with supernovae (SNe) as sources of metal enrichment. We find that a key parameter controlling the enrichment is the mass Mx of the gas that mixes with the ejecta from each SN. The choice of Mx depends on the evolution of SN remnants and on the global gas dynamics. It differs between the two types of SNe involved and between the periods before and after Fornax became an MW satellite at time t = tsat. Our results indicate that due to the global gas outflow at t > tsat, part of the ejecta from each SN may directly escape from Fornax. Sample results from our model are presented and compared with data.

  6. Firework Model: Time Dependent Spectral Evolution of GRB

    Science.gov (United States)

    Barbiellini, Guido; Longo, Francesco; Ghirlanda, G.; Celotti, A.; Bosnjak, Z.

    2004-09-01

    The energetics of the long-duration GRB phenomenon is compared with models of a rotating black hole (BH) in a strong magnetic field generated by an accreting torus. The GRB energy emission is attributed to magnetic-field vacuum breakdown that gives rise to an e± fireball. Its subsequent evolution is hypothesized in analogy with the in-flight decay of an elementary particle, so that an anisotropy in the fireball propagation is naturally produced. The recent discovery in some GRBs of an initial phase characterized by a thermal spectrum could be interpreted as the photon emission of the fireball photosphere when it becomes transparent. In particular, the temporal evolution of the emission can be explained as the effect of radiative deceleration of the outward-moving ejecta.

  7. Modeling investigation of the stability and irradiation-induced evolution of nanoscale precipitates in advanced structural materials

    International Nuclear Information System (INIS)

    Wirth, Brian

    2015-01-01

    Materials used in extremely hostile environments such as nuclear reactors are subject to a high flux of neutron irradiation, and thus vast concentrations of vacancy and interstitial point defects are produced because of collisions of energetic neutrons with host lattice atoms. The fate of these defects depends on various reaction mechanisms which occur immediately following the displacement cascade evolution and during the longer-time kinetically dominated evolution such as annihilation, recombination, clustering or trapping at sinks of vacancies, interstitials and their clusters. The long-range diffusional transport and evolution of point defects and self-defect clusters drive a microstructural and microchemical evolution that is known to produce degradation of mechanical properties including the creep rate, yield strength, ductility, or fracture toughness, and correspondingly affect material serviceability and lifetimes in nuclear applications. Therefore, a detailed understanding of microstructural evolution in materials at different time and length scales is of significant importance. The primary objective of this work is to utilize a hierarchical computational modeling approach i) to evaluate the potential for nanoscale precipitates to enhance point defect recombination rates and thereby the self-healing ability of advanced structural materials, and ii) to evaluate the stability and irradiation-induced evolution of such nanoscale precipitates resulting from enhanced point defect transport to and annihilation at precipitate interfaces. This project will utilize, and as necessary develop, computational materials modeling techniques within a hierarchical computational modeling approach, principally including molecular dynamics, kinetic Monte Carlo and spatially-dependent cluster dynamics modeling, to identify and understand the most important physical processes relevant to promoting the "self-healing" or radiation resistance in advanced

  8. Modeling investigation of the stability and irradiation-induced evolution of nanoscale precipitates in advanced structural materials

    Energy Technology Data Exchange (ETDEWEB)

    Wirth, Brian [Univ. of Tennessee, Knoxville, TN (United States)

    2015-04-08

    Materials used in extremely hostile environments such as nuclear reactors are subject to a high flux of neutron irradiation, and thus vast concentrations of vacancy and interstitial point defects are produced because of collisions of energetic neutrons with host lattice atoms. The fate of these defects depends on various reaction mechanisms which occur immediately following the displacement cascade evolution and during the longer-time kinetically dominated evolution such as annihilation, recombination, clustering or trapping at sinks of vacancies, interstitials and their clusters. The long-range diffusional transport and evolution of point defects and self-defect clusters drive a microstructural and microchemical evolution that is known to produce degradation of mechanical properties including the creep rate, yield strength, ductility, or fracture toughness, and correspondingly affect material serviceability and lifetimes in nuclear applications. Therefore, a detailed understanding of microstructural evolution in materials at different time and length scales is of significant importance. The primary objective of this work is to utilize a hierarchical computational modeling approach i) to evaluate the potential for nanoscale precipitates to enhance point defect recombination rates and thereby the self-healing ability of advanced structural materials, and ii) to evaluate the stability and irradiation-induced evolution of such nanoscale precipitates resulting from enhanced point defect transport to and annihilation at precipitate interfaces. This project will utilize, and as necessary develop, computational materials modeling techniques within a hierarchical computational modeling approach, principally including molecular dynamics, kinetic Monte Carlo and spatially-dependent cluster dynamics modeling, to identify and understand the most important physical processes relevant to promoting the "self-healing" or radiation resistance in advanced materials containing

  9. Bayesian semiparametric regression models to characterize molecular evolution

    Directory of Open Access Journals (Sweden)

    Datta Saheli

    2012-10-01

    Full Text Available Abstract Background Statistical models and methods that associate changes in the physicochemical properties of amino acids with natural selection at the molecular level typically do not take into account the correlations between such properties. We propose a Bayesian hierarchical regression model with a generalization of the Dirichlet process prior on the distribution of the regression coefficients that describes the relationship between the changes in amino acid distances and natural selection in protein-coding DNA sequence alignments. Results The Bayesian semiparametric approach is illustrated with simulated data and the abalone sperm lysin data. Our method identifies groups of properties which, for this particular dataset, have a similar effect on evolution. The model also provides nonparametric site-specific estimates for the strength of conservation of these properties. Conclusions The model described here is distinguished by its ability to handle a large number of amino acid properties simultaneously, while taking into account that such data can be correlated. The multi-level clustering ability of the model allows for appealing interpretations of the results in terms of properties that are roughly equivalent from the standpoint of molecular evolution.

  10. The development and application of landscape evolution models to coupled coast-estuarine environments

    Science.gov (United States)

    Morris, Chloe; Coulthard, Tom; Parsons, Daniel R.; Manson, Susan; Barkwith, Andrew

    2017-04-01

    Landscape Evolution Models (LEMs) have proven to be useful tools in understanding the morphodynamics of coast and estuarine systems. However, perhaps owing to the lack of research in this area, current models are not capable of simulating the dynamic interactions between these systems and their co-evolution at the meso-scale. Through a novel coupling of numerical models, this research is designed to explore coupled coastal-estuarine interactions, controls on system behaviour and the influence that environmental change could have. This will contribute to the understanding of the morphodynamics of these systems and how they may behave and evolve over the next century in response to climate changes, with the aim of informing management practices. This goal is being achieved through the modification and coupling of the one-line Coastline Evolution Model (CEM) with the hydrodynamic LEM CAESAR-Lisflood (C-L). The major issues faced in coupling these programs are their differing complexities and the limited graphical visualisations produced by the CEM, which hinder the dissemination of results. The work reported here towards overcoming these issues includes a new version of the CEM that incorporates a range of more complex geomorphological processes and boasts a graphical user interface that guides users through model set-up and projects a live output during model runs. The improved version is a stand-alone tool that can be used for further research projects and for teaching purposes. A sensitivity analysis using the Morris method has been completed to identify which key variables, including wave climate, erosion and weathering values, dominate the control of model behaviour. The model is being applied and tested using the evolution of the Holderness Coast, Humber Estuary and Spurn Point on the east coast of England (UK), which possess diverse geomorphologies and complex, co-evolving sediment pathways.
Simulations using the modified CEM are currently being completed to

  11. Model uncertainty from a regulatory point of view

    International Nuclear Information System (INIS)

    Abramson, L.R.

    1994-01-01

    This paper discusses model uncertainty in the larger context of knowledge and random uncertainty. It explores some regulatory implications of model uncertainty and argues that, from a regulator's perspective, a conservative approach must be taken. As a consequence of this perspective, averaging over model results is ruled out

  12. Neutral-point voltage dynamic model of three-level NPC inverter for reactive load

    DEFF Research Database (Denmark)

    Maheshwari, Ram Krishan; Munk-Nielsen, Stig; Busquets-Monge, Sergio

    2012-01-01

    A three-level neutral-point-clamped inverter needs a controller for the neutral-point voltage. Typically, the controller design is based on a dynamic model. The dynamic model of the neutral-point voltage depends on the pulse width modulation technique used for the inverter. A pulse width modulati...

  13. A Bayesian MCMC method for point process models with intractable normalising constants

    DEFF Research Database (Denmark)

    Berthelsen, Kasper Klitgaard; Møller, Jesper

    2004-01-01

    to simulate from the "unknown distribution", perfect simulation algorithms become useful. We illustrate the method in cases where the likelihood is given by a Markov point process model. Particularly, we consider semi-parametric Bayesian inference in connection to both inhomogeneous Markov point process models...... and pairwise interaction point processes....

  14. Eco-genetic modeling of contemporary life-history evolution.

    Science.gov (United States)

    Dunlop, Erin S; Heino, Mikko; Dieckmann, Ulf

    2009-10-01

    We present eco-genetic modeling as a flexible tool for exploring the course and rates of multi-trait life-history evolution in natural populations. We build on existing modeling approaches by combining features that facilitate studying the ecological and evolutionary dynamics of realistically structured populations. In particular, the joint consideration of age and size structure enables the analysis of phenotypically plastic populations with more than a single growth trajectory, and ecological feedback is readily included in the form of density dependence and frequency dependence. Stochasticity and life-history trade-offs can also be implemented. Critically, eco-genetic models permit the incorporation of salient genetic detail such as a population's genetic variances and covariances and the corresponding heritabilities, as well as the probabilistic inheritance and phenotypic expression of quantitative traits. These inclusions are crucial for predicting rates of evolutionary change on both contemporary and longer timescales. An eco-genetic model can be tightly coupled with empirical data and therefore may have considerable practical relevance, in terms of generating testable predictions and evaluating alternative management measures. To illustrate the utility of these models, we present as an example an eco-genetic model used to study harvest-induced evolution of multiple traits in Atlantic cod. The predictions of our model (most notably that harvesting induces a genetic reduction in age and size at maturation, an increase or decrease in growth capacity depending on the minimum-length limit, and an increase in reproductive investment) are corroborated by patterns observed in wild populations. The predicted genetic changes occur together with plastic changes that could phenotypically mask the former. Importantly, our analysis predicts that evolutionary changes show little signs of reversal following a harvest moratorium. 
This illustrates how predictions offered by

  15. Generalized correlation of latent heats of vaporization of coal liquid model compounds between their freezing points and critical points

    Energy Technology Data Exchange (ETDEWEB)

    Sivaraman, A.; Kobuyashi, R.; Mayee, J.W.

    1984-02-01

    Based on Pitzer's three-parameter corresponding states principle, the authors have developed a correlation of the latent heat of vaporization of aromatic coal liquid model compounds for a temperature range from the freezing point to the critical point. An expansion of the form L = L_0 + ω·L_1 is used for the dimensionless latent heat of vaporization. This model utilizes a nonanalytic functional form based on results derived from renormalization group theory of fluids in the vicinity of the critical point. A simple expression for the latent heat of vaporization, L = D_1·ε^0.3333 + D_2·ε^0.8333 + D_4·ε^1.2083 + E_1·ε + E_2·ε^2 + E_3·ε^3, is cast in a corresponding states principle correlation for coal liquid compounds. Benzene, the basic constituent of the functional groups of the multi-ring coal liquid compounds, is used as the reference compound in the present correlation. This model works very well at both low and high reduced temperatures approaching the critical point (0.02 < ε < 0.69, where ε = (T_c - T)/T_c). About 16 compounds, including single-, two-, and three-ring compounds, have been tested, and the percent root-mean-square deviations between reported and estimated latent heats of vaporization are 0.42 to 5.27%. Tables of the coefficients of L_0 and L_1 are presented. The contributing terms of the latent heat of vaporization function are also tabulated for small increments of ε.
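    Once the coefficients are known, the expansion is trivial to evaluate; a small sketch, in which the coefficients D_i, E_i and the inputs are illustrative placeholders rather than the fitted values tabulated in the paper:

```python
def latent_heat(eps, D, E):
    """Dimensionless latent heat of vaporization:
    L = D1*eps^0.3333 + D2*eps^0.8333 + D4*eps^1.2083
        + E1*eps + E2*eps^2 + E3*eps^3."""
    D1, D2, D4 = D
    E1, E2, E3 = E
    return (D1 * eps ** 0.3333 + D2 * eps ** 0.8333 + D4 * eps ** 1.2083
            + E1 * eps + E2 * eps ** 2 + E3 * eps ** 3)

def reduced_epsilon(T, Tc):
    """eps = (Tc - T)/Tc; the correlation holds for roughly 0.02 < eps < 0.69."""
    return (Tc - T) / Tc

# Placeholder coefficients (NOT the paper's fitted values); Tc is
# benzene-like (~562 K), since benzene is the reference compound.
D = (6.0, -2.0, 1.0)
E = (0.5, -0.1, 0.05)
eps = reduced_epsilon(T=400.0, Tc=562.0)
L = latent_heat(eps, D, E)
```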

  16. Using Pareto points for model identification in predictive toxicology

    Science.gov (United States)

    2013-01-01

    Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design or food protection to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature associated with the best practice of model generation and data integration, but the management and automated identification of relevant models from available collections of models remain open problems. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate the discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show a great potential for automated model identification methods in predictive toxicology. PMID:23517649
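    The Pareto-optimality idea can be sketched with a simple dominance filter over two hypothetical objectives (prediction error and model complexity, standing in for whatever criteria a model collection records):

```python
def pareto_front(models):
    """Return the names of the Pareto-optimal models.

    Each model is (name, error, complexity); lower is better on both
    objectives. A model is dominated if some other model is no worse on
    both objectives and strictly better on at least one.
    """
    front = []
    for name, err, cpx in models:
        dominated = any(
            (e2 <= err and c2 <= cpx) and (e2 < err or c2 < cpx)
            for _, e2, c2 in models
        )
        if not dominated:
            front.append(name)
    return front

models = [
    ("m1", 0.10, 5),   # low error, moderate complexity
    ("m2", 0.30, 2),   # higher error, but simplest
    ("m3", 0.12, 8),   # dominated by m1 on both counts
    ("m4", 0.05, 9),   # lowest error, most complex
]
print(pareto_front(models))  # ['m1', 'm2', 'm4']
```

Only `m3` is excluded: `m1` predicts better with a simpler model, so `m3` can never be the preferred trade-off.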

  17. AUTOMATED CALIBRATION OF FEM MODELS USING LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    B. Riveiro

    2018-05-01

    Full Text Available The present work aims to estimate the elastic parameters of beams through the combined use of precision geomatic techniques (laser scanning) and structural behaviour simulation tools. The study has two aims: on the one hand, to develop an algorithm able to automatically interpret point clouds of beams subjected to different load situations in experimental tests, acquired by laser scanning systems; and on the other hand, to minimize the differences between the deformation values given by the simulation tools and those measured by laser scanning. In this way we proceed to identify the elastic parameters and boundary conditions of the structural element, so that surface stresses can be estimated more easily.
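    A toy sketch of the second aim, choosing an elastic parameter so that simulated deflections match scanned ones: for a simply supported beam under a midspan point load, the deflection is linear in 1/E, so the least-squares estimate has a closed form. The beam formula is the standard textbook one; the load, geometry, and "scan" data below are invented for illustration:

```python
def midspan_deflection_shape(x, L, P, I):
    """Deflection of a simply supported beam under a midspan point load,
    for x <= L/2, up to the 1/E factor: delta(x) = g(x) / E,
    with g(x) = P*x*(3L^2 - 4x^2)/(48*I)."""
    return P * x * (3 * L ** 2 - 4 * x ** 2) / (48.0 * I)

def estimate_E(points, L, P, I):
    """Least-squares estimate of Young's modulus E from (x, deflection)
    pairs, e.g. sampled from a laser-scanned point cloud.

    Since delta = g(x)/E, the best-fit 1/E is sum(d*g) / sum(g*g)."""
    g = [midspan_deflection_shape(x, L, P, I) for x, _ in points]
    d = [di for _, di in points]
    inv_E = sum(di * gi for di, gi in zip(d, g)) / sum(gi * gi for gi in g)
    return 1.0 / inv_E

# Synthetic "scan": deflections generated with a steel-like E = 210 GPa.
L, P, I, E_true = 4.0, 10e3, 8.0e-6, 210e9
pts = [(x, midspan_deflection_shape(x, L, P, I) / E_true)
       for x in (0.5, 1.0, 1.5, 2.0)]
print(round(estimate_E(pts, L, P, I) / 1e9))  # 210
```

With noisy scan data the same normal equation simply returns the least-squares compromise instead of the exact value.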

  18. Chempy: A flexible chemical evolution model for abundance fitting. Do the Sun's abundances alone constrain chemical evolution models?

    Science.gov (United States)

    Rybizki, Jan; Just, Andreas; Rix, Hans-Walter

    2017-09-01

    Elemental abundances of stars are the result of the complex enrichment history of their galaxy. Interpretation of observed abundances requires flexible modeling tools to explore and quantify the information about Galactic chemical evolution (GCE) stored in such data. Here we present Chempy, a newly developed code for GCE modeling, representing a parametrized open one-zone model within a Bayesian framework. A Chempy model is specified by a set of five to ten parameters that describe the effective galaxy evolution along with the stellar and star-formation physics: for example, the star-formation history (SFH), the feedback efficiency, the stellar initial mass function (IMF), and the incidence of supernovae of type Ia (SN Ia). Unlike established approaches, Chempy can sample the posterior probability distribution in the full model parameter space and test data-model matches for different nucleosynthetic yield sets. It is essentially a chemical evolution fitting tool. We straightforwardly extend Chempy to a multi-zone scheme. As an illustrative application, we show that interesting parameter constraints result from only the ages and elemental abundances of the Sun, Arcturus, and the present-day interstellar medium (ISM). For the first time, we use such information to infer the IMF parameter via GCE modeling, where we properly marginalize over nuisance parameters and account for different yield sets. We find that 11.6 +2.1/-1.6% of the IMF explodes as core-collapse supernovae (CC-SN), compatible with Salpeter (1955, ApJ, 121, 161). We also constrain the incidence of SN Ia per 10³ M⊙ to 0.5 to 1.4. At the same time, this Chempy application shows persistent discrepancies between predicted and observed abundances for some elements, irrespective of the chosen yield set. These cannot be remedied by any variations of Chempy's parameters and could be an indication of missing nucleosynthetic channels. Chempy could be a powerful tool to confront predictions from stellar

  19. Charge state evolution in the solar wind. III. Model comparison with observations

    Energy Technology Data Exchange (ETDEWEB)

    Landi, E.; Oran, R.; Lepri, S. T.; Zurbuchen, T. H.; Fisk, L. A.; Van der Holst, B. [Department of Atmospheric, Oceanic and Space Sciences, University of Michigan, Ann Arbor, MI 48109 (United States)

    2014-08-01

    We test three theoretical models of the fast solar wind with a set of remote sensing observations and in-situ measurements taken during the minimum of solar cycle 23. First, the model electron density and temperature are compared to SOHO/SUMER spectroscopic measurements. Second, the model electron density, temperature, and wind speed are used to predict the charge state evolution of the wind plasma from the source regions to the freeze-in point. Frozen-in charge states are compared with Ulysses/SWICS measurements at 1 AU, while charge states close to the Sun are combined with the CHIANTI spectral code to calculate the intensities of selected spectral lines, to be compared with SOHO/SUMER observations in the north polar coronal hole. We find that none of the theoretical models are able to completely reproduce all observations; namely, all of them underestimate the charge state distribution of the solar wind everywhere, although the levels of disagreement vary from model to model. We discuss possible causes of the disagreement, namely, uncertainties in the calculation of the charge state evolution and of line intensities, in the atomic data, and in the assumptions on the wind plasma conditions. Last, we discuss the scenario where the wind is accelerated from a region located in the solar corona rather than in the chromosphere as assumed in the three theoretical models, and find that a wind originating from the corona is in much closer agreement with observations.

  20. Charge state evolution in the solar wind. III. Model comparison with observations

    International Nuclear Information System (INIS)

    Landi, E.; Oran, R.; Lepri, S. T.; Zurbuchen, T. H.; Fisk, L. A.; Van der Holst, B.

    2014-01-01

    We test three theoretical models of the fast solar wind with a set of remote sensing observations and in-situ measurements taken during the minimum of solar cycle 23. First, the model electron density and temperature are compared to SOHO/SUMER spectroscopic measurements. Second, the model electron density, temperature, and wind speed are used to predict the charge state evolution of the wind plasma from the source regions to the freeze-in point. Frozen-in charge states are compared with Ulysses/SWICS measurements at 1 AU, while charge states close to the Sun are combined with the CHIANTI spectral code to calculate the intensities of selected spectral lines, to be compared with SOHO/SUMER observations in the north polar coronal hole. We find that none of the theoretical models are able to completely reproduce all observations; namely, all of them underestimate the charge state distribution of the solar wind everywhere, although the levels of disagreement vary from model to model. We discuss possible causes of the disagreement, namely, uncertainties in the calculation of the charge state evolution and of line intensities, in the atomic data, and in the assumptions on the wind plasma conditions. Last, we discuss the scenario where the wind is accelerated from a region located in the solar corona rather than in the chromosphere as assumed in the three theoretical models, and find that a wind originating from the corona is in much closer agreement with observations.

  1. Time Series ARIMA Models of Undergraduate Grade Point Average.

    Science.gov (United States)

    Rogers, Bruce G.

    Auto-Regressive Integrated Moving Average (ARIMA) models, often referred to as Box-Jenkins models, are regression methods for analyzing sequences of dependent observations when large amounts of data are available. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…
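    The three Box-Jenkins stages can be illustrated on the simplest member of the ARIMA family, an AR(1) process fit to synthetic data; a self-contained sketch in pure Python (the series, coefficients, and tolerances are illustrative):

```python
import random

def acf(x, lag):
    """Sample autocorrelation at a given lag (identification stage)."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[t] - mean) * (x[t - lag] - mean) for t in range(lag, n))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

def fit_ar1(x):
    """Estimate the AR(1) coefficient by least squares on lagged pairs
    (estimation stage); assumes a zero-mean series."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(v * v for v in x[:-1])
    return num / den

# Simulate x_t = 0.7 x_{t-1} + e_t, then recover phi from the data.
random.seed(1)
x = [0.0]
for _ in range(5000):
    x.append(0.7 * x[-1] + random.gauss(0.0, 1.0))

phi = fit_ar1(x)                              # close to the true 0.7
resid = [x[t] - phi * x[t - 1] for t in range(1, len(x))]
diag = acf(resid, 1)  # near zero if the model fits (diagnosis stage)
```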

  2. A finite element model for the quench front evolution problem

    International Nuclear Information System (INIS)

    Folescu, J.; Galeao, A.C.N.R.; Carmo, E.G.D. do.

    1985-01-01

    A model for the rewetting problem associated with the loss of coolant accident in a PWR reactor is proposed. A variational formulation for the time-dependent heat conduction problem on fuel rod cladding is used, and appropriate boundary conditions are assumed in order to simulate the thermal interaction between the fuel rod cladding and the fluid. A numerical procedure which uses the finite element method for the spatial discretization and a Crank-Nicolson-like method for the step-by-step integration is developed. Some numerical results are presented showing the quench front evolution and its stationary profile. (Author) [pt
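    The Crank-Nicolson-like step-by-step integration named in the abstract can be sketched for the simplest case, 1D heat conduction with fixed end temperatures, solved with a tridiagonal (Thomas) pass; the geometry, boundary conditions, and numbers below are illustrative, not the rod-cladding model itself:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def crank_nicolson_step(u, r):
    """One Crank-Nicolson step of u_t = alpha*u_xx with fixed ends,
    where r = alpha*dt/dx^2."""
    n = len(u)
    a = [-r / 2] * n
    b = [1 + r] * n
    c = [-r / 2] * n
    a[0] = c[0] = a[-1] = c[-1] = 0.0   # Dirichlet rows: ends held fixed
    b[0] = b[-1] = 1.0
    d = [u[0]] + [
        u[i] + (r / 2) * (u[i + 1] - 2 * u[i] + u[i - 1])
        for i in range(1, n - 1)
    ] + [u[-1]]
    return thomas(a, b, c, d)

# Hot interior cooling toward fixed cold ends:
u = [0.0] + [1.0] * 19 + [0.0]
for _ in range(200):
    u = crank_nicolson_step(u, r=0.5)
```

Crank-Nicolson is unconditionally stable, which is why it suits the stiff thermal transients near a quench front.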

  3. Generative Models in Deep Learning: Constraints for Galaxy Evolution

    Science.gov (United States)

    Turp, Maximilian Dennis; Schawinski, Kevin; Zhang, Ce; Weigel, Anna K.

    2018-01-01

    New techniques are essential to make advances in the field of galaxy evolution. Recent developments in the field of artificial intelligence and machine learning have proven that these tools can be applied to problems far more complex than simple image recognition. We use these purely data-driven approaches to investigate the process of star formation quenching. We show that Variational Autoencoders provide a powerful method to forward model the process of galaxy quenching. Our results imply that simple changes in specific star formation rate and bulge-to-disk ratio cannot fully describe the properties of the quenched population.

  4. Modeling the Evolution of Female Meiotic Drive in Maize

    Directory of Open Access Journals (Sweden)

    David W. Hall

    2018-01-01

    Full Text Available Autosomal drivers violate Mendel’s law of segregation in that they are overrepresented in gametes of heterozygous parents. For drivers to be polymorphic within populations rather than fixing, their transmission advantage must be offset by deleterious effects on other fitness components. In this paper, we develop an analytical model for the evolution of autosomal drivers that is motivated by the neocentromere drive system found in maize. In particular, we model both the transmission advantage and deleterious fitness effects on seed viability, pollen viability, seed to adult survival mediated by maternal genotype, and seed to adult survival mediated by offspring genotype. We derive general, biologically intuitive conditions for the four most likely evolutionary outcomes and discuss the expected evolution of autosomal drivers given these conditions. Finally, we determine the expected equilibrium allele frequencies predicted by the model given recent estimates of fitness components for all relevant genotypes and show that the predicted equilibrium is within the range observed in maize land races for levels of drive at the low end of what has been observed.
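    The balance the abstract describes, a transmission advantage offset by deleterious fitness effects, can be seen in a much-simplified one-locus sketch with a single drive parameter k and a single homozygote cost s (the paper's full model tracks separate seed, pollen, and maternal-effect fitness components):

```python
def next_freq(p, k, s):
    """One generation of a simplified autosomal driver model.

    p: frequency of the driver allele D
    k: transmission rate of D from Dd heterozygotes (k > 1/2 means drive)
    s: fitness cost paid by the DD homozygote
    """
    # Random mating: p^2 DD (fitness 1-s), 2p(1-p) Dd, (1-p)^2 dd.
    w_bar = p * p * (1 - s) + 2 * p * (1 - p) + (1 - p) ** 2
    return (p * p * (1 - s) + 2 * p * (1 - p) * k) / w_bar

# With k = 0.7 and s = 0.5 the transmission advantage balances the
# homozygote cost at a stable internal equilibrium, p = 0.8, so the
# driver stays polymorphic instead of fixing.
p = 0.1
for _ in range(500):
    p = next_freq(p, k=0.7, s=0.5)
```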

  5. Modeling multiscale evolution of numerous voids in shocked brittle material.

    Science.gov (United States)

    Yu, Yin; Wang, Wenqiang; He, Hongliang; Lu, Tiecheng

    2014-04-01

    The influence of the evolution of numerous voids on the macroscopic properties of materials is a multiscale problem that challenges computational research. A shock-wave compression model for brittle material, which can capture both the microscopic evolution and the macroscopic shock properties, was developed using discrete element methods (a lattice model). Using a model interaction-parameter-mapping procedure, qualitative features, as well as trends in the calculated shock-wave profiles, are shown to agree with experimental results. The shock wave splits into an elastic wave and a deformation wave in porous brittle materials, indicating significant shock plasticity. Void collapse in the deformation wave was the natural cause of volume shrinkage and deformation. However, media slippage and rotation deformations, indicated by complex vortex patterns in the relative velocity vectors, were also confirmed as an important source of shock plasticity. With increasing pressure, the contribution of slippage deformation to the final plastic strain increased. Porosity was found to determine the amplitude of the elastic wave; porosity and shock stress together determine the propagation speed of the deformation wave, as well as the stress and strain of the final equilibrium state. Thus, the shock behavior of porous brittle material can be systematically designed for specific applications.

  6. Sand Point, Alaska MHW Coastal Digital Elevation Model

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — NOAA's National Geophysical Data Center (NGDC) is building high-resolution digital elevation models (DEMs) for select U.S. coastal regions. These integrated...

  7. Lyapunov functions for the fixed points of the Lorenz model

    International Nuclear Information System (INIS)

    Bakasov, A.A.; Govorkov, B.B. Jr.

    1992-11-01

    We have shown how explicit Lyapunov functions can be constructed in the framework of a regular procedure suggested and completed by Lyapunov a century ago (the ''method of critical cases''). The method completely covers all of the subtle cases of stability analysis for ordinary differential equations that are encountered in practice when linear stability analysis fails. These subtle cases, ''the critical cases'' according to Lyapunov, include both bifurcations of solutions and solutions of systems with symmetry. Properly specialized, and genuinely powerful in the case of ODEs, this method of Lyapunov can be formulated in simple language and should be of wide interest to the physics audience. The method leads inevitably to the construction of an explicit Lyapunov function, automatically takes the Fredholm alternative into account, and avoids infinite-step calculations. A simple and transparent physical interpretation of the Lyapunov function, as a potential or as a time-dependent entropy, provides more detail about the local dynamics of the system at non-equilibrium phase transition points. Another advantage is that the method consists of a set of very detailed explicit prescriptions which make it easy to implement on a symbolic processor. In this work the Lyapunov theory for critical cases has been applied to the real Lorenz equations, and it is shown, in particular, that increasing σ at the Hopf bifurcation point suppresses the contribution of one of the variables to the destabilization of the system. The relation of the method to contemporary methods, and its place among them, is discussed clearly and extensively. Thanks to the appendices, the paper is self-contained and does not require the reader to consult results published only in Russian. (author). 38 refs
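    The fixed points around which such Lyapunov functions are constructed can be checked directly; a small sketch with the classical Lorenz parameter values:

```python
import math

def lorenz_rhs(state, sigma, rho, beta):
    """Right-hand side of the Lorenz equations."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def fixed_points(sigma, rho, beta):
    """The origin, plus the symmetric pair C+/C- that exists for rho > 1:
    C± = (±sqrt(beta*(rho-1)), ±sqrt(beta*(rho-1)), rho-1)."""
    pts = [(0.0, 0.0, 0.0)]
    if rho > 1:
        q = math.sqrt(beta * (rho - 1))
        pts += [(q, q, rho - 1.0), (-q, -q, rho - 1.0)]
    return pts

# Classical parameter values; the vector field vanishes at each point.
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
residuals = [lorenz_rhs(p, sigma, rho, beta)
             for p in fixed_points(sigma, rho, beta)]
```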

  8. Model development to evaluate evolution of redox conditions in the near field

    International Nuclear Information System (INIS)

    Chiba, Tamotsu; Miki, Takahito; Inagaki, Manabu; Sasamoto, Hiroshi; Yui, Mikazu

    1999-02-01

    Deep underground is thought to be a potential location for a high-level radioactive waste repository. The chemical conditions of deep groundwater are generally believed to be anoxic and reducing. However, during the construction and operation phases of a repository, oxygen will diffuse some distance into the surrounding rock mass, and this diffused oxygen may remain in the rock even after repository closure. In such a case, the transitional redox condition around the drift is undesirable from the viewpoint of the safety assessment for HLW disposal. Hence, it is very important to evaluate the evolution of redox conditions in the near field. This report describes the status of model development to evaluate the evolution of redox conditions in the near field. We use a commercial solver to integrate the mathematical equations that describe the evolution of the redox condition in the near field. The target areas modeled in this report are the near-field rock mass and the engineered barrier (buffer). For the near-field rock mass, we consider two geological media: (1) porous media for sedimentary rock, and (2) fractured media for crystalline rock. For the engineered barrier, we regard the buffer as a porous medium. We simulate the behavior of dissolved oxygen and Fe2+ in groundwater during the evolution of the redox condition in the near-field rock mass and the buffer. For porous media, we consider diffusion of chemical species as the dominant transport mechanism. For fractured media, we consider diffusion of chemical species in the rock matrix and advection (of dissolved oxygen only, in this model) in the fracture as the transport mechanisms. The model also incorporates the rate law of the iron oxidation reaction and the dissolution of Fe-bearing minerals. (author)
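    The porous-media case, diffusion of dissolved oxygen consumed by reaction with Fe2+, can be sketched as a 1D explicit finite-difference model; the first-order sink and all parameter values below are illustrative assumptions, not the report's calibrated chemistry:

```python
def step_oxygen(c, D, dt, dx, k):
    """One explicit finite-difference step of dc/dt = D*c_xx - k*c:
    oxygen diffusing into the rock from the drift wall while being
    consumed by reaction with Fe2+ (modelled as a first-order sink)."""
    r = D * dt / dx ** 2          # stable for r <= 1/2
    new = c[:]
    for i in range(1, len(c) - 1):
        new[i] = c[i] + r * (c[i + 1] - 2 * c[i] + c[i - 1]) - k * dt * c[i]
    new[-1] = new[-2]             # zero-flux condition at the far boundary
    return new

# Oxygen held at the drift wall (c = 1), initially absent in the rock:
c = [1.0] + [0.0] * 49
for _ in range(2000):
    c = step_oxygen(c, D=1.0e-2, dt=1.0, dx=1.0, k=5.0e-3)
# The profile decays with distance from the drift over a penetration
# length of order sqrt(D/k).
```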

  9. A process model for the heat-affected zone microstructure evolution in duplex stainless steel weldments: Part I. the model

    Science.gov (United States)

    Hemmer, H.; Grong, Ø.

    1999-11-01

    The present investigation is concerned with modeling of the microstructure evolution in duplex stainless steels under thermal conditions applicable to welding. The important reactions that have been modeled are the dissolution of austenite during heating, subsequent grain growth in the delta ferrite regime, and finally, the decomposition of the delta ferrite to austenite during cooling. As a starting point, a differential formulation of the underlying diffusion problem is presented, based on the internal-state variable approach. These solutions are later manipulated and expressed in terms of the Scheil integral in the cases where the evolution equation is separable or can be made separable by a simple change of variables. The models have then been applied to describe the heat-affected zone microstructure evolution during both thick-plate and thin-plate welding of three commercial duplex stainless steel grades: 2205, 2304, and 2507. The results may conveniently be presented in the form of novel process diagrams, which display contours of constant delta ferrite grain size along with information about dissolution and reprecipitation of austenite for different combinations of weld input energy and peak temperature. These diagrams are well suited for quantitative readings and illustrate, in a condensed manner, the competition between the different variables that lead to structural changes during welding of duplex stainless steels.
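    In discrete form, a Scheil-type calculation amounts to accumulating fractions of the isothermal reaction time along the weld thermal cycle; a toy sketch, with an Arrhenius-type placeholder for the isothermal completion time tau(T) (tau0 and Q are hypothetical values, not the paper's kinetic data):

```python
import math

def scheil_sum(thermal_path, tau):
    """Scheil additivity rule: accumulate dt / tau(T) along the thermal
    cycle; the reaction is taken to be complete when the sum reaches 1."""
    total = 0.0
    for T, dt in thermal_path:
        total += dt / tau(T)
    return total

def tau(T, tau0=1.0e-6, Q=250e3, R=8.314):
    """Illustrative isothermal time-to-completion: Arrhenius-type,
    shorter (faster reaction) at higher temperature."""
    return tau0 * math.exp(Q / (R * T))

# A crude heating excursion: (temperature in K, time increment in s).
path = [(1400.0, 0.5), (1500.0, 0.5), (1600.0, 0.5)]
s = scheil_sum(path, tau)
# s >= 1 would mean the dissolution reaction has gone to completion;
# here the excursion is too short, so the reaction remains partial.
```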

  10. The Biological Big Bang model for the major transitions in evolution

    Directory of Open Access Journals (Sweden)

    Koonin Eugene V

    2007-08-01

    Full Text Available Abstract Background Major transitions in biological evolution show the same pattern of sudden emergence of diverse forms at a new level of complexity. The relationships between major groups within an emergent new class of biological entities are hard to decipher and do not seem to fit the tree pattern that, following Darwin's original proposal, remains the dominant description of biological evolution. The cases in point include the origin of complex RNA molecules and protein folds; major groups of viruses; archaea and bacteria, and the principal lineages within each of these prokaryotic domains; eukaryotic supergroups; and animal phyla. In each of these pivotal nexuses in life's history, the principal "types" seem to appear rapidly and fully equipped with the signature features of the respective new level of biological organization. No intermediate "grades" or intermediate forms between different types are detectable. Usually, this pattern is attributed to cladogenesis compressed in time, combined with the inevitable erosion of the phylogenetic signal. Hypothesis I propose that most or all major evolutionary transitions that show the "explosive" pattern of emergence of new types of biological entities correspond to a boundary between two qualitatively distinct evolutionary phases. The first, inflationary phase is characterized by extremely rapid evolution driven by various processes of genetic information exchange, such as horizontal gene transfer, recombination, fusion, fission, and spread of mobile elements. These processes give rise to a vast diversity of forms from which the main classes of entities at the new level of complexity emerge independently, through a sampling process. In the second phase, evolution dramatically slows down, the respective process of genetic information exchange tapers off, and multiple lineages of the new type of entities emerge, each of them evolving in a tree-like fashion from that point on. 
This biphasic model

  11. The Biological Big Bang model for the major transitions in evolution.

    Science.gov (United States)

    Koonin, Eugene V

    2007-08-20

    Major transitions in biological evolution show the same pattern of sudden emergence of diverse forms at a new level of complexity. The relationships between major groups within an emergent new class of biological entities are hard to decipher and do not seem to fit the tree pattern that, following Darwin's original proposal, remains the dominant description of biological evolution. The cases in point include the origin of complex RNA molecules and protein folds; major groups of viruses; archaea and bacteria, and the principal lineages within each of these prokaryotic domains; eukaryotic supergroups; and animal phyla. In each of these pivotal nexuses in life's history, the principal "types" seem to appear rapidly and fully equipped with the signature features of the respective new level of biological organization. No intermediate "grades" or intermediate forms between different types are detectable. Usually, this pattern is attributed to cladogenesis compressed in time, combined with the inevitable erosion of the phylogenetic signal. I propose that most or all major evolutionary transitions that show the "explosive" pattern of emergence of new types of biological entities correspond to a boundary between two qualitatively distinct evolutionary phases. The first, inflationary phase is characterized by extremely rapid evolution driven by various processes of genetic information exchange, such as horizontal gene transfer, recombination, fusion, fission, and spread of mobile elements. These processes give rise to a vast diversity of forms from which the main classes of entities at the new level of complexity emerge independently, through a sampling process. In the second phase, evolution dramatically slows down, the respective process of genetic information exchange tapers off, and multiple lineages of the new type of entities emerge, each of them evolving in a tree-like fashion from that point on. 
This biphasic model of evolution incorporates the previously developed

  12. Biological signatures of dynamic river networks from a coupled landscape evolution and neutral community model

    Science.gov (United States)

    Stokes, M.; Perron, J. T.

    2017-12-01

    Freshwater systems host exceptionally species-rich communities whose spatial structure is dictated by the topology of the river networks they inhabit. Over geologic time, river networks are dynamic; drainage basins shrink and grow, and river capture establishes new connections between previously separated regions. It has been hypothesized that these changes in river network structure influence the evolution of life by exchanging and isolating species, perhaps boosting biodiversity in the process. However, no general model exists to predict the evolutionary consequences of landscape change. We couple a neutral community model of freshwater organisms to a landscape evolution model in which the river network undergoes drainage divide migration and repeated river capture. Neutral community models are macro-ecological models that include stochastic speciation and dispersal to produce realistic patterns of biodiversity. We explore the consequences of three modes of speciation - point mutation, time-protracted, and vicariant (geographic) speciation - by tracking patterns of diversity in time and comparing the final result to an equilibrium solution of the neutral model on the final landscape. Under point mutation, a simple model of stochastic and instantaneous speciation, the results are identical to the equilibrium solution and indicate the dominance of the species-area relationship in forming patterns of diversity. The number of species in a basin is proportional to its area, and regional species richness reaches its maximum when drainage area is evenly distributed among sub-basins. Time-protracted speciation is also modeled as a stochastic process, but in order to produce more realistic rates of diversification, speciation is not assumed to be instantaneous. Rather, each new species must persist for a certain amount of time before it is considered to be established. When vicariance (geographic speciation) is included, there is a transient signature of increased
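    Under point mutation, the species-area pattern the authors recover has a classical well-mixed counterpart: the expected richness of a neutral community follows the Ewens sampling formula and grows sub-linearly with community size. A sketch (theta is the fundamental biodiversity number; this is the standard non-spatial result, not the coupled river-network model itself):

```python
def expected_richness(J, theta):
    """Expected species richness of a neutral community of J individuals
    under point-mutation speciation, with fundamental biodiversity
    number theta (Ewens sampling formula):
    E[S] = sum_{i=0}^{J-1} theta / (theta + i)."""
    return sum(theta / (theta + i) for i in range(J))

# Holding density fixed, a basin ten times larger holds more species,
# but far fewer than ten times as many.
small = expected_richness(100, 5.0)
large = expected_richness(1000, 5.0)
```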

  13. Phase-field modelling of microstructural evolution and properties

    Science.gov (United States)

    Zhu, Jingzhi

    As one of the most powerful techniques in computational materials science, the diffuse-interface phase-field model has been widely employed for simulating various meso-scale microstructural evolution processes. The main purpose of this thesis is to develop a quantitative phase-field model for predicting microstructures and properties in real alloy systems, one that can be linked to existing thermodynamic/kinetic databases and to parameters obtained from experimental measurements or first-principles calculations. To achieve this goal, many factors involved in complicated real systems are investigated, many of which are often simplified or ignored in existing models, e.g. the dependence of diffusional atomic mobility and elastic constants on composition. Efficient numerical techniques must be developed to solve the partial differential equations involved in modelling microstructural evolution and properties. In this thesis, different spectral methods were proposed for the time-dependent phase-field kinetic equations and diffusion equations. For solving the elastic equilibrium equation with consideration of elastic inhomogeneity, a conjugate gradient method was utilized. The numerical approaches developed were generally found to be more accurate and efficient than conventional approaches such as the finite difference method. A composition-dependent Cahn-Hilliard equation was solved using a semi-implicit Fourier-spectral method. It was shown that the morphological evolutions in bulk-diffusion-controlled and interface-diffusion-controlled coarsening developed similar patterns and scaling behaviors. For bulk-diffusion-controlled coarsening, a cubic growth law was obeyed in the scaling regime, whereas a fourth-power growth law was observed for interface-diffusion-controlled coarsening. The characteristics of a microstructure under the influence of elastic energy depend on elastic properties such as elastic anisotropy, lattice mismatch, elastic inhomogeneity and
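The semi-implicit Fourier-spectral treatment mentioned above can be sketched for a constant-mobility 2-D Cahn-Hilliard equation, dc/dt = M∇²(c³ − c − κ∇²c): the stiff fourth-order linear term goes in the implicit denominator, which permits far larger time steps than an explicit finite-difference update. M, κ and dt here are illustrative, not the thesis's composition-dependent values:

```python
import numpy as np

def cahn_hilliard_step(c, dt=0.1, M=1.0, kappa=1.0):
    """One semi-implicit Fourier-spectral step of
    dc/dt = M * lap(c**3 - c - kappa * lap(c)).
    The linear fourth-order term is treated implicitly (denominator),
    the nonlinear chemical-potential term explicitly (numerator)."""
    n = c.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n)
    k2 = k[:, None] ** 2 + k[None, :] ** 2          # |k|^2 on the grid
    c_hat = np.fft.fft2(c)
    mu_hat = np.fft.fft2(c ** 3 - c)                # explicit nonlinear part
    c_hat = (c_hat - dt * M * k2 * mu_hat) / (1.0 + dt * M * kappa * k2 ** 2)
    return np.fft.ifft2(c_hat).real

rng = np.random.default_rng(0)
c = 0.1 * rng.standard_normal((64, 64))             # fluctuation around c = 0
mean0 = c.mean()
for _ in range(200):                                # spinodal decomposition
    c = cahn_hilliard_step(c)
```

Because the k = 0 Fourier mode is untouched by the update, the scheme conserves the mean composition exactly, a property the Cahn-Hilliard dynamics requires.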

  14. Development and evaluation of spatial point process models for epidermal nerve fibers.

    Science.gov (United States)

    Olsbo, Viktor; Myllymäki, Mari; Waller, Lance A; Särkkä, Aila

    2013-06-01

    We propose two spatial point process models for the spatial structure of epidermal nerve fibers (ENFs) across human skin. The models derive from two point processes, Φb and Φe, describing the locations of the base and end points of the fibers. Each point of Φe (the end point process) is connected to a unique point in Φb (the base point process). In the first model, both Φe and Φb are Poisson processes, yielding a null model of uniform coverage of the skin by end points and general baseline results and reference values for moments of key physiologic indicators. The second model provides a mechanistic model to generate end points for each base, and we model the branching structure more directly by defining Φe as a cluster process conditioned on the realization of Φb as its parent points. In both cases, we derive distributional properties for observable quantities of direct interest to neurologists such as the number of fibers per base, and the direction and range of fibers on the skin. We contrast both models by fitting them to data from skin blister biopsy images of ENFs and provide inference regarding physiological properties of ENFs. Copyright © 2013 Elsevier Inc. All rights reserved.
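A minimal simulation of the second (cluster-type) model might look like the following: base points form a homogeneous Poisson process, and each base emits a Poisson number of end points at random directions and ranges. The distributional choices (exponential ranges, uniform directions) and all parameter values are assumptions for illustration, not the fitted ENF model:

```python
import numpy as np

def simulate_enf(lam_base=50, mean_ends=3.0, mean_range=0.02, size=1.0, seed=0):
    """Base points Phi_b: homogeneous Poisson process on [0, size]^2.
    Each base emits a Poisson number of end points (Phi_e as a cluster
    process conditioned on Phi_b), at exponentially distributed ranges
    and uniform directions; every end point keeps a link to its base."""
    rng = np.random.default_rng(seed)
    n_base = rng.poisson(lam_base * size ** 2)
    bases = rng.uniform(0.0, size, (n_base, 2))
    ends, parents = [], []
    for i, b in enumerate(bases):
        for _ in range(rng.poisson(mean_ends)):
            r = rng.exponential(mean_range)          # fiber range on the skin
            theta = rng.uniform(0.0, 2.0 * np.pi)    # fiber direction
            ends.append(b + r * np.array([np.cos(theta), np.sin(theta)]))
            parents.append(i)                        # end point -> base point
    return bases, np.array(ends), parents

bases, ends, parents = simulate_enf()
```

Summaries of direct interest to neurologists, such as fibers per base or the empirical range distribution, can then be read straight off the `parents` links and the base-to-end displacements.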

  15. Modeling the Evolution of Beliefs Using an Attentional Focus Mechanism.

    Directory of Open Access Journals (Sweden)

    Dimitrije Marković

    2015-10-01

    Full Text Available For making decisions in everyday life we often have first to infer the set of environmental features that are relevant for the current task. Here we investigated the computational mechanisms underlying the evolution of beliefs about the relevance of environmental features in a dynamical and noisy environment. For this purpose we designed a probabilistic Wisconsin card sorting task (WCST) with belief solicitation, in which subjects were presented with stimuli composed of multiple visual features. At each moment in time a particular feature was relevant for obtaining reward, and participants had to infer which feature was relevant and report their beliefs accordingly. To test the hypothesis that attentional focus modulates the belief update process, we derived and fitted several probabilistic and non-probabilistic behavioral models, which either incorporate a dynamical model of attentional focus, in the form of a hierarchical winner-take-all neuronal network, or a diffusive model, without attention-like features. We used Bayesian model selection to identify the most likely generative model of subjects' behavior and found that attention-like features in the behavioral model are essential for explaining subjects' responses. Furthermore, we demonstrate a method for integrating both connectionist and Bayesian models of decision making within a single framework that allowed us to infer hidden belief processes of human subjects.
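The authors compare connectionist and Bayesian accounts; a bare-bones Bayesian ingredient of such models is a posterior update over which feature is currently relevant, given the reward outcome. The sketch below is a deliberately simplified stand-in (the reward likelihood p_reward_correct and the feature names are invented for illustration), not the fitted hierarchical winner-take-all model:

```python
def update_belief(belief, chosen_matches, rewarded, p_reward_correct=0.8):
    """Bayes update of the belief over which feature is currently relevant.
    belief: dict feature -> prior probability. chosen_matches: dict
    feature -> True if the subject's choice matched the stimulus on that
    feature. Likelihood (illustrative): reward occurs with probability
    p_reward_correct if the choice matched the relevant feature,
    with probability 1 - p_reward_correct otherwise."""
    posterior = {}
    for f, prior in belief.items():
        p = p_reward_correct if chosen_matches[f] else 1.0 - p_reward_correct
        lik = p if rewarded else 1.0 - p
        posterior[f] = prior * lik
    z = sum(posterior.values())                 # normalize over features
    return {f: v / z for f, v in posterior.items()}

belief = {"color": 1 / 3, "shape": 1 / 3, "number": 1 / 3}
belief = update_belief(
    belief, {"color": True, "shape": False, "number": False}, rewarded=True)
```

Starting from a uniform prior over three features, a rewarded choice that matched the stimulus only on colour shifts the belief to 2/3 on colour and 1/6 on each alternative.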

  16. RANS-VOF modelling of the Wavestar point absorber

    DEFF Research Database (Denmark)

    Ransley, E. J.; Greaves, D. M.; Raby, A.

    2017-01-01

    Highlights •A fully nonlinear, coupled model of the Wavestar WEC has been created using open-source CFD software, OpenFOAM®. •The response of the Wavestar WEC is simulated in regular waves with different steepness. •Predictions of body motion, surface elevation, fluid velocity, pressure and load ...

  17. Demystifying the cytokine network: Mathematical models point the way.

    Science.gov (United States)

    Morel, Penelope A; Lee, Robin E C; Faeder, James R

    2017-10-01

    Cytokines provide the means by which immune cells communicate with each other and with parenchymal cells. There are over one hundred cytokines and many exist in families that share receptor components and signal transduction pathways, creating complex networks. Reductionist approaches to understanding the role of specific cytokines, through the use of gene-targeted mice, have revealed further complexity in the form of redundancy and pleiotropy in cytokine function. Creating an understanding of the complex interactions between cytokines and their target cells is challenging experimentally. Mathematical and computational modeling provides a robust set of tools by which complex interactions between cytokines can be studied and analyzed, in the process creating novel insights that can be further tested experimentally. This review will discuss and provide examples of the different modeling approaches that have been used to increase our understanding of cytokine networks. This includes discussion of knowledge-based and data-driven modeling approaches and the recent advance in single-cell analysis. The use of modeling to optimize cytokine-based therapies will also be discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Entropy in the Tangled Nature Model of Evolution

    Directory of Open Access Journals (Sweden)

    Ty N. F. Roach

    2017-04-01

    Full Text Available Applications of entropy principles to evolution and ecology are of paramount importance given the central role spatiotemporal structuring plays in both evolution and ecological succession. We obtain here a qualitative interpretation of the role of entropy in evolving ecological systems. Our interpretation is supported by mathematical arguments using simulation data generated by the Tangled Nature Model (TNM), a stochastic model of evolving ecologies. We define two types of configurational entropy and study their empirical time dependence obtained from the data. Both entropy measures increase logarithmically with time, while the entropy per individual decreases in time, in parallel with the growth of emergent structures visible from other aspects of the simulation. We discuss the biological relevance of these entropies to describe niche space and functional space of ecosystems, as well as their use in characterizing the number of taxonomic configurations compatible with different niche partitioning and functionality. The TNM serves as an illustrative example of how to calculate and interpret these entropies, which are, however, also relevant to real ecosystems, where they can be used to calculate the number of functional and taxonomic configurations that an ecosystem can realize.
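One concrete way to compute a configurational entropy of the kind discussed above is the log-multinomial count of microstates compatible with a species-abundance configuration. The sketch below is a generic illustration (the abundance vectors are invented), not the TNM's specific definitions:

```python
import math

def config_entropy(abundances):
    """S = ln( N! / prod(n_i!) ): the log of the number of distinct ways N
    labelled individuals can realize the observed abundance configuration.
    lgamma(n + 1) = ln(n!) avoids overflow for large communities."""
    N = sum(abundances)
    return math.lgamma(N + 1) - sum(math.lgamma(n + 1) for n in abundances)

S_even = config_entropy([25, 25, 25, 25])   # evenly partitioned community
S_skew = config_entropy([97, 1, 1, 1])      # strongly dominated community
S_per_individual = S_even / 100             # entropy per individual
```

The even partition admits far more microstates than the dominated one, so S_even > S_skew; dividing by N gives the entropy-per-individual quantity tracked in the abstract.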

  19. Modelling of Damage Evolution in Braided Composites: Recent Developments

    Science.gov (United States)

    Wang, Chen; Roy, Anish; Silberschmidt, Vadim V.; Chen, Zhong

    2017-12-01

    Composites reinforced with woven or braided textiles exhibit high structural stability and excellent damage tolerance thanks to yarn interlacing. With their high stiffness-to-weight and strength-to-weight ratios, braided composites are attractive for aerospace and automotive components as well as sports protective equipment. In these potential applications, components are typically subjected to multi-directional static, impact and fatigue loadings. To enhance material analysis and design for such applications, understanding the mechanical behaviour of braided composites and developing predictive capabilities become crucial. Significant progress has been made in recent years in the development of new modelling techniques allowing elucidation of static and dynamic responses of braided composites. However, because of their unique interlacing geometric structure and complicated failure modes, prediction of damage initiation and its evolution in components is still a challenge. Therefore, a comprehensive literature analysis is presented in this work, focused on a review of the state-of-the-art progressive damage analysis of braided composites with finite-element simulations. Models recently employed in studies of the mechanical behaviour, impact response and fatigue of braided composites are presented systematically. This review highlights the importance, advantages and limitations of the applied failure criteria and damage evolution laws for yarns and composite unit cells. In addition, this work provides a good reference for future research on FE simulations of braided composites.

  20. An Evaluation of Models of Bentonite Pore Water Evolution

    Energy Technology Data Exchange (ETDEWEB)

    Savage, David; Watson, Claire; Wilson, James (Quintessa Ltd, Henley-on-Thames (United Kingdom)); Arthur, Randy (Monitor Scientific LLC, Denver, CO (United States))

    2010-01-15

    The determination of a bentonite pore water composition and understanding of its evolution with time underpin many radioactive waste disposal issues, such as buffer erosion, canister corrosion, and radionuclide solubility, sorption, and diffusion, inter alia. The usual approach to modelling clay pore fluids is based primarily around assumed chemical equilibrium between Na+, K+, Ca2+, and Mg2+ aqueous species and ion exchange sites on montmorillonite, but also includes protonation-deprotonation of clay edge surface sites, and dissolution-precipitation of the trace mineral constituents, calcite and gypsum. An essential feature of this modelling approach is that clay hydrolysis reactions (i.e. dissolution of the aluminosilicate octahedral and tetrahedral sheets of montmorillonite) are ignored. A consequence of the omission of clay hydrolysis reactions from bentonite pore fluid models is that montmorillonite is preserved indefinitely in the near-field system, even over million-year timescales. Here, we investigate the applicability of an alternative clay pore fluid model, one that incorporates clay hydrolysis reactions as an integral component, and test it against well-characterised laboratory experimental data, where key geochemical parameters, Eh and pH, have been measured directly in compacted bentonite. Simulations have been conducted using a range of computer codes to test the applicability of this alternative model. Thermodynamic data for MX-80 smectite used in the calculations were estimated using two different methods. Simulations of 'end-point' pH measurements in batch bentonite-water slurry experiments showed different pH values according to the complexity of the system studied. The most complete system investigated revealed that pH was a strong function of the partial pressure of carbon dioxide, with pH increasing with decreasing PCO2 (log PCO2 values ranging from -3.5 to -7.5 bars produced pH values ranging from 7.9 to 9.6). A second

  1. Modelling the morphodynamics and co-evolution of coast and estuarine environments

    Science.gov (United States)

    Morris, Chloe; Coulthard, Tom; Parsons, Daniel R.; Manson, Susan; Barkwith, Andrew

    2017-04-01

    The morphodynamics of coast and estuarine environments are known to be sensitive to environmental change and sea-level rise. However, whilst these systems have received considerable individual research attention, how they interact and co-evolve is relatively understudied. These systems are intrinsically linked and it is therefore advantageous to study them holistically in order to build a more comprehensive understanding of their behaviour and to inform sustainable management over the long term. Complex environments such as these are often studied using numerical modelling techniques. Given the limited research in this area, existing models are not currently capable of simulating dynamic coast-estuarine interactions. A new model is being developed through coupling the one-line Coastline Evolution Model (CEM) with CAESAR-Lisflood (C-L), a hydrodynamic Landscape Evolution Model. It is intended that the eventual model be used to advance the understanding of these systems and how they may evolve over the mid to long term in response to climate change. In the UK, the Holderness Coast, Humber Estuary and Spurn Point system offers a diverse and complex case study for this research. Holderness is one of the fastest eroding coastlines in Europe and research suggests that the large volumes of material removed from its cliffs are responsible for the formation of the Spurn Point feature and for the Holocene infilling of the Humber Estuary. Marine, fluvial and coastal processes are continually reshaping this system and over the next century, it is predicted that climate change could lead to increased erosion along the coast and supply of material to the Humber Estuary and Spurn Point. How this manifests will be hugely influential on the future morphology of these systems and the existence of Spurn Point. Progress to date includes a new version of the CEM that has been prepared for integration into C-L and includes an improved graphical user interface and more complex

  2. Numerical Modeling of a Wave Energy Point Absorber

    DEFF Research Database (Denmark)

    Hernandez, Lorenzo Banos; Frigaard, Peter; Kirkegaard, Poul Henning

    2009-01-01

    The present study deals with numerical modelling of the Wave Star Energy WSE device. Hereby, linear potential theory is applied via a BEM code on the wave hydrodynamics exciting the floaters. Time and frequency domain solutions of the floater response are determined for regular and irregular seas. Furthermore, these results are used to estimate the power and the energy absorbed by a single oscillating floater. Finally, a latching control strategy is analysed in open-loop configuration for energy maximization.
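Under linear potential theory, the frequency-domain response of a single heaving floater reduces to a one-degree-of-freedom oscillator, which is the backbone of time- and frequency-domain models like the one above. The coefficients below (mass, added mass, damping, restoring stiffness, excitation force) are invented placeholders, not WaveStar data:

```python
import numpy as np

def heave_response(omega, m, A, B, B_pto, C, F):
    """Complex heave amplitude X of a 1-DOF point absorber in regular waves:
    (-(m + A) w^2 + i w (B + B_pto) + C) X = F, with added mass A,
    radiation damping B, PTO damping B_pto and restoring stiffness C."""
    Z = -(m + A) * omega ** 2 + 1j * omega * (B + B_pto) + C
    return F / Z

def mean_power(omega, X, B_pto):
    """Time-averaged power absorbed by a linear PTO damper."""
    return 0.5 * B_pto * omega ** 2 * abs(X) ** 2

# Illustrative numbers only, not device data.
omega, m, A, B, C, F = 1.2, 2.0e4, 1.0e4, 5.0e3, 3.0e5, 4.0e4
powers = {b: mean_power(omega, heave_response(omega, m, A, B, b, C, F), b)
          for b in (1e3, 5e3, 2e4)}
```

Sweeping the PTO damping B_pto in this way is how the power-maximizing setting is typically located before more elaborate strategies such as latching control are considered.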

  3. Heavy ion collision evolution modeling with ECHO-QGP

    Science.gov (United States)

    Rolando, V.; Inghirami, G.; Beraudo, A.; Del Zanna, L.; Becattini, F.; Chandra, V.; De Pace, A.; Nardi, M.

    2014-11-01

    We present a numerical code modeling the evolution of the medium formed in relativistic heavy ion collisions, ECHO-QGP. The code solves relativistic hydrodynamics in (3 + 1)D, with dissipative terms included within the framework of Israel-Stewart theory; it can work both in Minkowskian and in Bjorken coordinates. Initial conditions are provided through an implementation of the Glauber model (both Optical and Monte Carlo), while freezeout and particle generation are based on the Cooper-Frye prescription. The code is validated against several test problems and shows remarkable stability and accuracy with the combination of a conservative (shock-capturing) approach and the high-order methods employed. In particular it beautifully agrees with the semi-analytic solution known as Gubser flow, both in the ideal and in the viscous Israel-Stewart case, up to very large times and without any ad hoc tuning of the algorithm.

  4. Evolution of quantum-like modeling in decision making processes

    Science.gov (United States)

    Khrennikova, Polina

    2012-12-01

    The application of the mathematical formalism of quantum mechanics to model behavioral patterns in social science and economics is a novel and constantly emerging field. The aim of the so called 'quantum like' models is to model the decision making processes in a macroscopic setting, capturing the particular 'context' in which the decisions are taken. Several subsequent empirical findings proved that when making a decision people tend to violate the axioms of expected utility theory and Savage's Sure Thing principle, thus violating the law of total probability. A quantum probability formula was devised to describe more accurately the decision making processes. A next step in the development of QL-modeling in decision making was the application of Schrödinger equation to describe the evolution of people's mental states. A shortcoming of Schrödinger equation is its inability to capture dynamics of an open system; the brain of the decision maker can be regarded as such, actively interacting with the external environment. Recently the master equation, by which quantum physics describes the process of decoherence as the result of interaction of the mental state with the environmental 'bath', was introduced for modeling the human decision making. The external environment and memory can be referred to as a complex 'context' influencing the final decision outcomes. The master equation can be considered as a pioneering and promising apparatus for modeling the dynamics of decision making in different contexts.
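The master-equation step described above can be illustrated with a minimal dephasing (Lindblad-type) model of a two-choice mental state: interaction with the environmental 'bath' damps the off-diagonal coherences of the density matrix while leaving the decision probabilities on the diagonal intact. The Hamiltonian, rate gamma and Euler integration scheme are illustrative choices, not a model from the paper:

```python
import numpy as np

def evolve_rho(rho0, H, gamma, dt=0.001, steps=5000):
    """Euler integration of the dephasing master equation
    drho/dt = -i [H, rho] + gamma (Z rho Z - rho), with Z = diag(1, -1).
    The dissipator damps the off-diagonal coherences (decoherence),
    freezing the decision probabilities stored on the diagonal."""
    Z = np.diag([1.0, -1.0])
    rho = rho0.astype(complex)
    for _ in range(steps):
        comm = H @ rho - rho @ H
        rho = rho + dt * (-1j * comm + gamma * (Z @ rho @ Z - rho))
    return rho

rho0 = np.array([[0.5, 0.5], [0.5, 0.5]])   # equal superposition of 2 choices
H = np.zeros((2, 2))                        # trivial Hamiltonian for clarity
rho_t = evolve_rho(rho0, H, gamma=1.0)
```

With a trivial Hamiltonian the coherence rho01 decays roughly as exp(-2*gamma*t) while the populations stay fixed, which is the decoherence mechanism the abstract refers to.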

  5. Evolution of quantum-like modeling in decision making processes

    Energy Technology Data Exchange (ETDEWEB)

    Khrennikova, Polina [School of Management, University of Leicester, University Road Leicester LE1 7RH (United Kingdom)

    2012-12-18

    The application of the mathematical formalism of quantum mechanics to model behavioral patterns in social science and economics is a novel and constantly emerging field. The aim of the so called 'quantum like' models is to model the decision making processes in a macroscopic setting, capturing the particular 'context' in which the decisions are taken. Several subsequent empirical findings proved that when making a decision people tend to violate the axioms of expected utility theory and Savage's Sure Thing principle, thus violating the law of total probability. A quantum probability formula was devised to describe more accurately the decision making processes. A next step in the development of QL-modeling in decision making was the application of Schroedinger equation to describe the evolution of people's mental states. A shortcoming of Schroedinger equation is its inability to capture dynamics of an open system; the brain of the decision maker can be regarded as such, actively interacting with the external environment. Recently the master equation, by which quantum physics describes the process of decoherence as the result of interaction of the mental state with the environmental 'bath', was introduced for modeling the human decision making. The external environment and memory can be referred to as a complex 'context' influencing the final decision outcomes. The master equation can be considered as a pioneering and promising apparatus for modeling the dynamics of decision making in different contexts.

  6. Evolution of quantum-like modeling in decision making processes

    International Nuclear Information System (INIS)

    Khrennikova, Polina

    2012-01-01

    The application of the mathematical formalism of quantum mechanics to model behavioral patterns in social science and economics is a novel and constantly emerging field. The aim of the so called 'quantum like' models is to model the decision making processes in a macroscopic setting, capturing the particular 'context' in which the decisions are taken. Several subsequent empirical findings proved that when making a decision people tend to violate the axioms of expected utility theory and Savage's Sure Thing principle, thus violating the law of total probability. A quantum probability formula was devised to describe more accurately the decision making processes. A next step in the development of QL-modeling in decision making was the application of Schrödinger equation to describe the evolution of people's mental states. A shortcoming of Schrödinger equation is its inability to capture dynamics of an open system; the brain of the decision maker can be regarded as such, actively interacting with the external environment. Recently the master equation, by which quantum physics describes the process of decoherence as the result of interaction of the mental state with the environmental 'bath', was introduced for modeling the human decision making. The external environment and memory can be referred to as a complex 'context' influencing the final decision outcomes. The master equation can be considered as a pioneering and promising apparatus for modeling the dynamics of decision making in different contexts.

  7. A probabilistic model for the evolution of RNA structure

    Directory of Open Access Journals (Sweden)

    Holmes Ian

    2004-10-01

    Full Text Available Abstract Background For the purposes of finding and aligning noncoding RNA gene- and cis-regulatory elements in multiple-genome datasets, it is useful to be able to derive multi-sequence stochastic grammars (and hence multiple alignment algorithms) systematically, starting from hypotheses about the various kinds of random mutation event and their rates. Results Here, we consider a highly simplified evolutionary model for RNA, called "The TKF91 Structure Tree" (following Thorne, Kishino and Felsenstein's 1991 model of sequence evolution with indels), which we have implemented for pairwise alignment as proof of principle for such an approach. The model, its strengths and its weaknesses are discussed with reference to four examples of functional ncRNA sequences: a riboswitch (guanine), a zipcode (nanos), a splicing factor (U4) and a ribozyme (RNase P). As shown by our visualisations of posterior probability matrices, the selected examples illustrate three different signatures of natural selection that are highly characteristic of ncRNA: (i) co-ordinated basepair substitutions, (ii) co-ordinated basepair indels and (iii) whole-stem indels. Conclusions Although all three types of mutation "event" are built into our model, events of type (i) and (ii) are found to be better modeled than events of type (iii). Nevertheless, we hypothesise from the model's performance on pairwise alignments that it would form an adequate basis for a prototype multiple alignment and genefinding tool.

  8. Geographical point cloud modelling with the 3D medial axis transform

    NARCIS (Netherlands)

    Peters, R.Y.

    2018-01-01

    A geographical point cloud is a detailed three-dimensional representation of the geometry of our geographic environment.
    Using geographical point cloud modelling, we are able to extract valuable information from geographical point clouds that can be used for applications in asset management,

  9. A Massless-Point-Charge Model for the Electron

    Directory of Open Access Journals (Sweden)

    Daywitt W. C.

    2010-04-01

    Full Text Available “It is rather remarkable that the modern concept of electrodynamics is not quite 100 years old and yet still does not rest firmly upon uniformly accepted theoretical foundations. Maxwell’s theory of the electromagnetic field is firmly ensconced in modern physics, to be sure, but the details of how charged particles are to be coupled to this field remain somewhat uncertain, despite the enormous advances in quantum electrodynamics over the past 45 years. Our theories remain mathematically ill-posed and mired in conceptual ambiguities which quantum mechanics has only moved to another arena rather than resolve. Fundamentally, we still do not understand just what is a charged particle” [1, p.367]. As a partial answer to the preceding quote, this paper presents a new model for the electron that combines the seminal work of Puthoff [2] with the theory of the Planck vacuum (PV) [3], the basic idea for the model following from [2] with the PV theory adding some important details.

  10. A Massless-Point-Charge Model for the Electron

    Directory of Open Access Journals (Sweden)

    Daywitt W. C.

    2010-04-01

    Full Text Available "It is rather remarkable that the modern concept of electrodynamics is not quite 100 years old and yet still does not rest firmly upon uniformly accepted theoretical foundations. Maxwell's theory of the electromagnetic field is firmly ensconced in modern physics, to be sure, but the details of how charged particles are to be coupled to this field remain somewhat uncertain, despite the enormous advances in quantum electrodynamics over the past 45 years. Our theories remain mathematically ill-posed and mired in conceptual ambiguities which quantum mechanics has only moved to another arena rather than resolve. Fundamentally, we still do not understand just what is a charged particle" (Grandy W.T. Jr. Relativistic quantum mechanics of leptons and fields. Kluwer Academic Publishers, Dordrecht-London, 1991, p.367). As a partial answer to the preceding quote, this paper presents a new model for the electron that combines the seminal work of Puthoff with the theory of the Planck vacuum (PV), the basic idea for the model following from Puthoff with the PV theory adding some important details.

  11. Unemployment estimation: Spatial point referenced methods and models

    KAUST Repository

    Pereira, Soraia

    2017-06-26

    The Portuguese Labor Force Survey, from the 4th quarter of 2014 onwards, started geo-referencing the sampling units, namely the dwellings in which the surveys are carried out. This opens new possibilities in analysing and estimating unemployment and its spatial distribution across any region. The survey chooses, according to a pre-established sampling criterion, a certain number of dwellings across the nation and surveys the number of unemployed in these dwellings. Based on this survey, the National Statistical Institute of Portugal presently uses direct estimation methods to estimate the national unemployment figures. Recently, there has been increased interest in estimating these figures in smaller areas. Direct estimation methods, due to reduced sampling sizes in small areas, tend to produce fairly large sampling variations, therefore model based methods, which tend to

  12. Leader's opinion priority bounded confidence model for network opinion evolution

    Science.gov (United States)

    Zhu, Meixia; Xie, Guangqiang

    2017-08-01

    In consensus models of the Hegselmann-Krause (HK) type, every agent assigns the same trust weight to its interaction partners, whereas in virtual social networks individuals differ in level of education, personal influence, and so on. To capture these differences between agents, a novel bounded confidence model is proposed in which leaders' opinions are given priority. Interaction neighbours are divided into two kinds: an "opinion leaders" group and ordinary people, with different trust weights assigned to the two groups. We analyse the characteristics of the new model under symmetric bounded confidence parameters and compare it with the classical HK model. Simulation results show that, regardless of the network size and whether the initial opinions follow a uniform or a discrete distribution, the "opinion leaders" can steer both the number and the values of the final opinions and even improve the convergence speed. The experiments also show that more "opinion leaders" is not necessarily better; the model explains well how "opinion leaders" play a leading role in the evolution of public opinion.
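A weighted Hegselmann-Krause update of the kind described can be sketched as follows. The specific weighting (a single multiplicative trust weight for leaders) is a simplification assumed here for illustration, not necessarily the paper's exact rule:

```python
import numpy as np

def hk_with_leaders(x0, leaders, eps=0.25, w_leader=3.0, steps=50):
    """Bounded-confidence update with leader priority: each agent averages
    the opinions of neighbours within confidence bound eps, but opinions of
    designated 'opinion leaders' count with weight w_leader instead of 1."""
    x = np.array(x0, dtype=float)
    w = np.ones_like(x)
    w[list(leaders)] = w_leader            # leaders get a larger trust weight
    for _ in range(steps):
        new = np.empty_like(x)
        for i in range(len(x)):
            mask = np.abs(x - x[i]) <= eps  # bounded-confidence neighbourhood
            new[i] = np.average(x[mask], weights=w[mask])
        x = new
    return x

rng = np.random.default_rng(42)
x = hk_with_leaders(rng.uniform(0, 1, 100), leaders=[0, 1, 2])
```

Raising w_leader pulls the surviving opinion clusters towards the leaders' positions and typically speeds up convergence, in line with the qualitative findings above.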

  13. Modelling the secular evolution of migrating planet pairs

    Science.gov (United States)

    Michtchenko, T. A.; Rodríguez, A.

    2011-08-01

    The subject of this paper is the secular behaviour of a pair of planets evolving under dissipative forces. In particular, we investigate the case when dissipative forces affect the planetary semimajor axes and the planets move towards or away from the central star, in a process known as planet migration. To perform this investigation, we introduce fundamental concepts of conservative and dissipative dynamics of the three-body problem. Based on these concepts, we develop a qualitative model of the secular evolution of the migrating planetary pair. Our approach is based on the analysis of the energy and the orbital angular momentum exchange between the two-planet system and an external medium; thus no specific kind of dissipative force is invoked. We show that, under the assumption that dissipation is weak and slow, the evolutionary routes of the migrating planets are traced by the Mode I and Mode II stationary solutions of the conservative secular problem. The ultimate convergence and the evolution of the system along one of these secular modes of motion are determined uniquely by the condition that the dissipation rate is sufficiently smaller than the proper secular frequency of the system. We show that it is possible to reassemble the starting configurations and the migration history of the systems on the basis of their final states and consequently to constrain the parameters of the physical processes involved.

  14. A unifying model of genome evolution under parsimony.

    Science.gov (United States)

    Paten, Benedict; Zerbino, Daniel R; Hickey, Glenn; Haussler, David

    2014-06-19

    Parsimony and maximum likelihood methods of phylogenetic tree estimation and parsimony methods for genome rearrangements are central to the study of genome evolution yet to date they have largely been pursued in isolation. We present a data structure called a history graph that offers a practical basis for the analysis of genome evolution. It conceptually simplifies the study of parsimonious evolutionary histories by representing both substitutions and double cut and join (DCJ) rearrangements in the presence of duplications. The problem of constructing parsimonious history graphs thus subsumes related maximum parsimony problems in the fields of phylogenetic reconstruction and genome rearrangement. We show that tractable functions can be used to define upper and lower bounds on the minimum number of substitutions and DCJ rearrangements needed to explain any history graph. These bounds become tight for a special type of unambiguous history graph called an ancestral variation graph (AVG), which constrains in its combinatorial structure the number of operations required. We finally demonstrate that for a given history graph G, a finite set of AVGs describe all parsimonious interpretations of G, and this set can be explored with a few sampling moves. This theoretical study describes a model in which the inference of genome rearrangements and phylogeny can be unified under parsimony.

  15. Cost- and reliability-oriented aggregation point association in long-term evolution and passive optical network hybrid access infrastructure for smart grid neighborhood area network

    Science.gov (United States)

    Cheng, Xiao; Feng, Lei; Zhou, Fanqin; Wei, Lei; Yu, Peng; Li, Wenjing

    2018-02-01

    With the rapid development of the smart grid, the data aggregation point (AP) in the neighborhood area network (NAN) is becoming increasingly important for forwarding information between the home area network and the wide area network. Due to budget limitations, a single access technology cannot meet the ongoing requirements on AP coverage. This paper first introduces a wired and wireless hybrid access network integrating long-term evolution (LTE) and passive optical network (PON) systems for the NAN, which allows a good trade-off among cost, flexibility, and reliability. Then, based on the already existing wireless LTE network, an AP association optimization model is proposed to make the PON serve as many APs as possible, considering both economic efficiency and network reliability. Moreover, given the features of the constraints and variables of this NP-hard problem, a hybrid intelligent optimization algorithm is proposed, combining genetic, ant colony, and dynamic greedy algorithms. By comparison with other published methods, simulation results verify the performance of the proposed method in improving AP coverage and the convergence behavior of the proposed algorithm.
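
    The dynamic greedy stage of such a hybrid algorithm can be sketched in a few lines. The following is an illustration only, with hypothetical AP demands, splitter capacities, and per-AP connection costs (none of these names or values come from the paper): the cheapest feasible PON connections are taken first, and any AP left unassigned falls back to the existing LTE network.

```python
# Hypothetical greedy stage: associate APs to PON splitters within a budget.
# APs missing from the returned assignment stay on the existing LTE network.
def greedy_ap_association(aps, splitters, budget):
    """aps: {ap_id: demand}; splitters: {sp_id: (capacity, cost_per_ap)}."""
    assignment = {}
    spent = 0.0
    capacity = {s: cap for s, (cap, _) in splitters.items()}
    # Cheapest connections first, so the budget covers as many APs as possible.
    options = sorted(
        ((cost, ap, s) for ap in aps for s, (_, cost) in splitters.items()),
        key=lambda t: t[0],
    )
    for cost, ap, s in options:
        if ap in assignment:
            continue
        if capacity[s] >= aps[ap] and spent + cost <= budget:
            assignment[ap] = s
            capacity[s] -= aps[ap]
            spent += cost
    return assignment

demo = greedy_ap_association(
    {"ap1": 2, "ap2": 1, "ap3": 3},
    {"sp1": (3, 1.0), "sp2": (4, 2.0)},
    budget=3.0,
)
```

    In the paper's scheme this greedy seed would then be refined by the genetic and ant colony stages; the sketch only shows the cost- and capacity-constrained association itself.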

  16. Network evolution model for supply chain with manufactures as the core

    Science.gov (United States)

    Jiang, Dali; Fang, Ling; Yang, Jian; Li, Wu; Zhao, Jing

    2018-01-01

    Building an evolution model of supply chain networks can help in understanding their development laws. However, specific characteristics and attributes of real supply chains are often neglected in existing evolution models. This work proposes a new evolution model of supply chains with manufactures as the core, based on external market demand and internal competition-cooperation. The evolution model assumes that the external market environment is relatively stable and considers several factors, including the specific topology of the supply chain, external market demand, ecological growth and flow conservation. The simulation results suggest that the networks evolved by our model have similar structures to real supply chains. Meanwhile, the influences of external market demand and internal competition-cooperation on network evolution are analyzed. Additionally, 38 benchmark data sets are applied to validate the rationality of our evolution model, of which nine manufacturing supply chains match the features of the networks constructed by our model. PMID:29370201

  19. Tool for evaluating the evolution Space Weather Regional Warning Centers under the innovation point of view: the Case Study of the Embrace Space Weather Program Early Stages

    Science.gov (United States)

    Denardini, Clezio Marcos

    2016-07-01

    We have developed a tool for measuring the evolutionary stage of space weather regional warning centers from the innovation point of view, starting from the perspective presented by Figueiredo (2009, Innovation Management: Concepts, metrics and experiences of companies in Brazil. Publisher LTC, Rio de Janeiro - RJ). It is based on measuring the stock of technological skills needed to perform a certain task that is (or should be) part of the scope of a space weather center. It also addresses the technological capacity for innovation by considering the accumulation of technological and learning capabilities, instead of the usual international indices such as the number of registered patents. Based on this definition, we have developed a model for measuring the capabilities of the Brazilian Study and Monitoring of Space Weather (Embrace) program of the National Institute for Space Research (INPE), which has gone through three national stages of development and an international validation step. This program was created in 2007, encompassing competences from five divisions of INPE, in order to carry out data collection and maintenance of the space weather observing system; to model processes of the Sun-Earth system; to provide real-time information and space weather forecasts; and to diagnose their effects on different technological systems. In the present work, we considered the issues related to the innovation of micro-processes inherent to the nature of the Embrace program, not the macro-economic processes, despite recognizing the importance of the latter. During the development phase, the model was submitted to five scientists/managers from five different member countries of the International Space Environment Service (ISES), who presented their evaluations, concerns and suggestions. It was applied to the Embrace program through an interview form developed to be answered by professional members of regional warning centers. Based on the returning

  20. New method for evaluation of bendability based on three-point-bending and the evolution of the cross-section moment

    Science.gov (United States)

    Troive, L.

    2017-09-01

    Friction-free three-point bending has become a common test method since the VDA 238-100 plate-bending test [1] was introduced. According to this test, the criterion for failure is a sudden drop in force. The author has found that the evolution of the cross-section moment is a preferable measure of the real material response, rather than the force. Beneficially, the cross-section moment reaches a more or less constant maximum steady-state level when the cross-section becomes fully plastified. An expression for the moment M is presented that fulfils the criterion of conservation of energy during bending. An expression for the unit-free moment, M/Me, i.e. the ratio of the current moment to the elastic moment, is also demonstrated, specifically proposed for the detection of failures. The mathematical expressions are simple, making it easy to transpose the measured force F and stroke position S to the corresponding cross-section moment M. From that point of view it is even possible to implement them, e.g. in conventional measurement-system software, to study the cross-section moment in real time during a test. Other parameters, such as the flow stress and the shape of the curvature, can also be calculated at every stage. The method has been tested on different thicknesses and grades within the range from 1.0 to 10 mm with very good results. In this paper the present model is applied to a 6.1 mm hot-rolled high-strength steel from the same batch in three different conditions: directly quenched; quenched and tempered; and quenched and tempered with levelling. It will be shown that very small differences in material response can be predicted by this method.
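
    The abstract does not give the expressions themselves; as a rough illustration, the mid-span moment in frictionless three-point bending and the elastic moment of a rectangular cross-section can be computed from the textbook relations M = F·L/4 and Me = b·t²·σy/6. These are assumed formulas, not necessarily the author's exact formulation, which also accounts for the stroke position S; all numbers are illustrative.

```python
def cross_section_moment(force, span):
    """Mid-span bending moment in frictionless three-point bending: M = F*L/4."""
    return force * span / 4.0

def elastic_moment(width, thickness, yield_stress):
    """Moment at first yield of a rectangular section: Me = b*t^2*sigma_y/6."""
    return width * thickness ** 2 * yield_stress / 6.0

# Illustrative numbers only: 20 kN on a 100 mm span, 6.1 mm plate, 700 MPa yield.
M = cross_section_moment(force=20.0e3, span=0.1)
Me = elastic_moment(width=0.05, thickness=0.0061, yield_stress=700.0e6)
ratio = M / Me   # the unit-free moment M/Me; exceeds 1 once the section yields
```
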

  1. Marked point process for modelling seismic activity (case study in Sumatra and Java)

    Science.gov (United States)

    Pratiwi, Hasih; Sulistya Rini, Lia; Wayan Mangku, I.

    2018-05-01

    Earthquakes are natural phenomena that are random and irregular in space and time. Forecasting earthquake occurrence at a given location remains difficult, so earthquake-forecasting methodology is still being developed from both the seismological and the stochastic points of view. To describe such random phenomena, in both space and time, a point process approach can be used. There are two types of point processes: temporal point processes and spatial point processes. A temporal point process relates to events observed over time as a time sequence, whereas a spatial point process describes the locations of objects in two- or three-dimensional space. The points of a point process can be labelled with additional information called marks. A marked point process can be considered as a pair (x, m), where x is the location of a point and m is the mark attached to it. This study aims to model a marked point process indexed by time for earthquake data from Sumatra Island and Java Island. This model can be used to analyse seismic activity through its intensity function by considering the history of the process up to time t. Based on data obtained from the U.S. Geological Survey from 1973 to 2017 with a magnitude threshold of 5, we obtained maximum likelihood estimates for the parameters of the intensity function. The estimated model parameters show that the seismic activity in Sumatra Island is greater than in Java Island.
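
    The specific intensity function used in the study is not given in the abstract; a minimal self-exciting (Hawkes-type) conditional intensity, in which each past earthquake temporarily raises the rate of future events, can be sketched as follows. The parameter values are purely illustrative.

```python
import math

def conditional_intensity(t, history, mu, alpha, beta):
    """lambda(t) = mu + sum over past events t_i < t of alpha*exp(-beta*(t - t_i)),
    i.e. a background rate plus exponentially decaying excitation from history."""
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in history if ti < t)

events = [1.0, 2.5, 4.0]   # past event times (illustrative)
lam = conditional_intensity(5.0, events, mu=0.2, alpha=0.8, beta=1.5)
```

    Maximum likelihood estimation would then fit mu, alpha and beta to the observed catalogue; a marked version would additionally make the excitation depend on each event's magnitude mark.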

  2. Numerical Modeling of Large-Scale Rocky Coastline Evolution

    Science.gov (United States)

    Limber, P.; Murray, A. B.; Littlewood, R.; Valvo, L.

    2008-12-01

    Seventy-five percent of the world's ocean coastline is rocky. On large scales (i.e. greater than a kilometer), many intertwined processes drive rocky coastline evolution, including coastal erosion and sediment transport, tectonics, antecedent topography, and variations in sea cliff lithology. In areas such as California, an additional aspect of rocky coastline evolution involves submarine canyons that cut across the continental shelf and extend into the nearshore zone. These types of canyons intercept alongshore sediment transport and flush sand to abyssal depths during periodic turbidity currents, thereby delineating coastal sediment transport pathways and affecting shoreline evolution over large spatial and time scales. How tectonic, sediment transport, and canyon processes interact with inherited topographic and lithologic settings to shape rocky coastlines remains an unanswered, and largely unexplored, question. We will present numerical model results of rocky coastline evolution that starts with an immature fractal coastline. The initial shape is modified by headland erosion, wave-driven alongshore sediment transport, and submarine canyon placement. Our previous model results have shown that, as expected, an initial sediment-free irregularly shaped rocky coastline with homogeneous lithology will undergo smoothing in response to wave attack; headlands erode and mobile sediment is swept into bays, forming isolated pocket beaches. As this diffusive process continues, pocket beaches coalesce, and a continuous sediment transport pathway results. However, when a randomly placed submarine canyon is introduced to the system as a sediment sink, the end results are wholly different: sediment cover is reduced, which in turn increases weathering and erosion rates and causes the entire shoreline to move landward more rapidly. The canyon's alongshore position also affects coastline morphology. 
When placed offshore of a headland, the submarine canyon captures local sediment

  3. Predictive error dependencies when using pilot points and singular value decomposition in groundwater model calibration

    DEFF Research Database (Denmark)

    Christensen, Steen; Doherty, John

    2008-01-01

    A significant practical problem with the pilot point method is to choose the location of the pilot points. We present a method that is intended to relieve the modeler from much of this responsibility. The basic idea is that a very large number of pilot points are distributed more or less uniformly over the model area. Singular value decomposition (SVD) of the (possibly weighted) sensitivity matrix of the pilot point based model produces eigenvectors, of which we pick a small number corresponding to significant eigenvalues. Super parameters are defined as factors through which parameter combinations corresponding to the chosen eigenvectors are multiplied to obtain the pilot point values. The model can thus be transformed from having many pilot-point parameters to having a few super parameters that can be estimated by nonlinear regression on the basis of the available observations. (This...
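
    The reduction from many pilot-point parameters to a few super parameters can be sketched with a truncated SVD. This is a generic illustration with a random sensitivity matrix, not the authors' implementation; the threshold on singular values is an arbitrary choice.

```python
import numpy as np

# Sensitivity matrix J: one row per observation, one column per pilot point.
rng = np.random.default_rng(0)
J = rng.standard_normal((20, 100))       # few observations, many pilot points

U, s, Vt = np.linalg.svd(J, full_matrices=False)
k = int(np.sum(s > 0.05 * s[0]))         # keep the significant singular values
V_k = Vt[:k].T                           # (n_pilot_points, k) projection basis

# k super parameters stand in for the 100 pilot-point values; nonlinear
# regression would estimate the super parameters, and this map recovers
# the corresponding pilot point values.
super_params = np.ones(k)
pilot_values = V_k @ super_params
```
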

  4. A Monte Carlo model for 3D grain evolution during welding

    Science.gov (United States)

    Rodgers, Theron M.; Mitchell, John A.; Tikare, Veena

    2017-09-01

    Welding is one of the most widespread processes used in metal joining. However, there are currently no open-source software implementations for the simulation of microstructural evolution during a weld pass. Here we describe a Potts Monte Carlo based model implemented in the SPPARKS kinetic Monte Carlo computational framework. The model simulates melting, solidification and solid-state microstructural evolution of material in the fusion and heat-affected zones of a weld. The model does not simulate thermal behavior, but rather utilizes user input parameters to specify weld pool and heat-affected zone properties. Weld pool shapes are specified by Bézier curves, which allow for the specification of a wide range of pool shapes, from narrow and deep to wide and shallow, representing different fluid flow conditions within the pool. Surrounding temperature gradients are calculated with the aid of a closest point projection algorithm. The model also allows simulation of pulsed-power welding through time-dependent variation of the weld pool size. Example simulation results and comparisons with laboratory weld observations demonstrate microstructural variation with weld speed, pool shape, and pulsed power.
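
    The core move of such a Potts Monte Carlo model, stripped of the weld-pool and temperature-gradient machinery described above, is a spin flip toward a neighboring grain label: grain boundaries carry energy, so accepted flips coarsen the structure. A minimal zero-temperature sketch (not the SPPARKS implementation):

```python
import random

def boundary_energy(grid):
    """Count unlike nearest-neighbour bonds (periodic boundaries)."""
    n = len(grid)
    return sum(grid[i][j] != grid[(i + 1) % n][j] for i in range(n) for j in range(n)) \
         + sum(grid[i][j] != grid[i][(j + 1) % n] for i in range(n) for j in range(n))

def potts_sweep(grid, rng):
    """One zero-temperature Metropolis sweep: each proposed flip copies a
    neighbouring site's grain label and is accepted if energy does not rise."""
    n = len(grid)
    for _ in range(n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        nbrs = [grid[(i - 1) % n][j], grid[(i + 1) % n][j],
                grid[i][(j - 1) % n], grid[i][(j + 1) % n]]
        new, old = rng.choice(nbrs), grid[i][j]
        d_e = sum(new != s for s in nbrs) - sum(old != s for s in nbrs)
        if d_e <= 0:
            grid[i][j] = new

rng = random.Random(7)
grid = [[rng.randrange(5) for _ in range(8)] for _ in range(8)]
e0 = boundary_energy(grid)
for _ in range(10):
    potts_sweep(grid, rng)
e1 = boundary_energy(grid)   # never higher than e0: the structure coarsens
```

    The weld model in the paper adds site-dependent temperature (from the pool geometry) to the acceptance rule, so that coarsening is fast near the fusion zone and frozen far from it.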

  5. Improved point-kinetics model for the BWR control rod drop accident

    International Nuclear Information System (INIS)

    Neogy, P.; Wakabayashi, T.; Carew, J.F.

    1985-01-01

    A simple prescription to account for spatial feedback weighting effects in RDA (rod drop accident) point-kinetics analyses has been derived and tested. The point-kinetics feedback model is linear in the core peaking factor, F_Q, and in the core average void fraction and fuel temperature. Comparison with detailed spatial kinetics analyses indicates that the improved point-kinetics model provides an accurate description of the BWR RDA.
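
    The coefficients of such a feedback model are not given in the abstract; the structure can be illustrated with a one-delayed-group point-kinetics integrator in which net reactivity is reduced linearly by fuel temperature. All numerical values below are illustrative, not BWR data.

```python
# One-delayed-group point kinetics with a linear temperature feedback:
#   dn/dt = (rho - beta)/Lambda * n + lam * c
#   dc/dt = beta/Lambda * n - lam * c
beta, Lambda, lam = 0.0065, 1.0e-4, 0.08   # kinetics parameters (illustrative)
alpha_T = -1.0e-5                          # temperature feedback (illustrative)

def step(n, c, T, rho_ext, dt=1.0e-4):
    rho = rho_ext + alpha_T * T            # net reactivity including feedback
    n_new = n + ((rho - beta) / Lambda * n + lam * c) * dt
    c_new = c + (beta / Lambda * n - lam * c) * dt
    T_new = T + 0.05 * n * dt              # crude heating proportional to power
    return n_new, c_new, T_new

n, c, T = 1.0, beta / (Lambda * lam), 0.0  # start at equilibrium precursor level
for _ in range(2000):                      # 0.2 s of a +0.001 reactivity step
    n, c, T = step(n, c, T, rho_ext=0.001)
# n has undergone the prompt jump (roughly beta/(beta - rho)) and rises slowly
```

    The improved model in the paper additionally weights the feedback terms by the spatial peaking factor F_Q, which a pure point model cannot capture on its own.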

  6. A FAST METHOD FOR MEASURING THE SIMILARITY BETWEEN 3D MODEL AND 3D POINT CLOUD

    Directory of Open Access Journals (Sweden)

    Z. Zhang

    2016-06-01

    Full Text Available This paper proposes a fast method for measuring the partial Similarity between a 3D Model and a 3D point Cloud (SimMC). It is crucial to measure SimMC for many point cloud-related applications, such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances between points sampled from the model and the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. In order to reduce the huge computational burden brought by the calculation of DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to those traditional SimMC measuring methods that are only able to measure global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method on both synthetic data and laser scanning data.
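
    The DistCM-style term, the average nearest-neighbour distance from cloud points to points sampled on the model surface, can be written directly. This is a brute-force sketch; the paper's weighted DistMC variant and any acceleration structures are omitted, and the sample points are illustrative.

```python
import numpy as np

def dist_cloud_to_model(cloud, model_samples):
    """Mean nearest-neighbour distance from each cloud point to a set of
    points sampled on the model surface (brute-force pairwise distances)."""
    d = np.linalg.norm(cloud[:, None, :] - model_samples[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())

model_pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
cloud_pts = np.array([[0.1, 0.0, 0.0], [1.0, 0.1, 0.0]])
d = dist_cloud_to_model(cloud_pts, model_pts)   # each point is 0.1 away
```
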

  7. Unified model for vortex-string network evolution

    International Nuclear Information System (INIS)

    Martins, C.J.A.P.; Moore, J.N.; Shellard, E.P.S.

    2004-01-01

    We describe and numerically test the velocity-dependent one-scale string evolution model, a simple analytic approach describing a string network in terms of its averaged correlation length and velocity. We show that it accurately reproduces the large-scale behavior (in particular the scaling laws) of numerical simulations of both Goto-Nambu and field theory string networks. We explicitly demonstrate the relation between the high-energy physics approach and the damped and nonrelativistic limits which are relevant for condensed matter physics. We also reproduce experimental results in this context and show that the vortex-string density is significantly reduced by loop production, an effect not included in the usual 'coarse-grained' approach.

  8. Thermal evolution of the Schwinger model with matrix product operators

    International Nuclear Information System (INIS)

    Banuls, M.C.; Cirac, J.I.; Cichy, K.; Jansen, K.; Saito, H.

    2015-10-01

    We demonstrate the suitability of tensor network techniques for describing the thermal evolution of lattice gauge theories. As a benchmark case, we have studied the temperature dependence of the chiral condensate in the Schwinger model, using matrix product operators to approximate the thermal equilibrium states for finite system sizes with non-zero lattice spacings. We show how these techniques allow for reliable extrapolations in bond dimension, step width, system size and lattice spacing, and for a systematic estimation and control of all error sources involved in the calculation. The reached values of the lattice spacing are small enough to capture the most challenging region of high temperatures and the final results are consistent with the analytical prediction by Sachs and Wipf over a broad temperature range.

  9. Modeling a point-source release of 1,1,1-trichloroethane using EPA's SCREEN model

    International Nuclear Information System (INIS)

    Henriques, W.D.; Dixon, K.R.

    1994-01-01

    Using data from the Environmental Protection Agency's Toxic Release Inventory 1988 (EPA TRI88), pollutant concentration estimates were modeled for a point-source air release of 1,1,1-trichloroethane at the Savannah River Plant located in Aiken, South Carolina. Estimates were calculated using the EPA's SCREEN model under typical meteorological conditions to determine the maximum impact of the plume under different mixing conditions for locations within 100 meters of the stack. Input data for the SCREEN model were then manipulated to simulate the impact of the release under urban conditions (for the purpose of assessing future land-use considerations) and under flare release options, to determine whether these parameters lessen or increase the probability of human or wildlife exposure to significant concentrations. The results were then compared to the EPA reference concentration (RfC) in order to assess the size of the buffer zone around the stack within which concentrations may potentially exceed this safety level.
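
    SCREEN is built on the Gaussian plume equation; the ground-level centerline form with full ground reflection gives the flavor of the calculation. The dispersion coefficients below are fixed illustrative numbers, whereas SCREEN derives σy and σz from stability class and downwind distance.

```python
import math

def plume_centerline(Q, u, sigma_y, sigma_z, H):
    """Ground-level centerline concentration of a Gaussian plume with full
    ground reflection: C = Q / (pi * u * sy * sz) * exp(-H^2 / (2 * sz^2))."""
    return Q / (math.pi * u * sigma_y * sigma_z) * math.exp(-H ** 2 / (2 * sigma_z ** 2))

# Illustrative release: 10 g/s from a 30 m stack in a 3 m/s wind.
C = plume_centerline(Q=10.0, u=3.0, sigma_y=60.0, sigma_z=35.0, H=30.0)   # g/m^3
```

    A screening assessment would evaluate C over a range of downwind distances and stability classes, keep the worst case, and compare it against the RfC.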

  10. On modeling micro-structural evolution using a higher order strain gradient continuum theory

    DEFF Research Database (Denmark)

    El-Naaman, S. A.; Nielsen, K. L.; Niordson, C. F.

    2016-01-01

    Modeling the experimentally observed micro-structural behavior within a framework based on continuous field quantities poses obvious challenges, since the evolution of dislocation structures is inherently a discrete and discontinuous process. This challenge, in particular, motivates the present study, and the aim is to improve the micro-structural response predicted using strain gradient crystal plasticity within a continuum mechanics framework. One approach to modeling the dislocation structures observed is through a back stress formulation, which can be related directly to the strain gradient energy. The present work offers an investigation of constitutive equations for the back stress based on considerations of the gradient energy, but also includes results obtained from a purely phenomenological starting point. The influence of model parameters is brought out in a parametric study, and it is demonstrated how...

  11. Spatial Models of Prebiotic Evolution: Soup Before Pizza?

    Science.gov (United States)

    Scheuring, István; Czárán, Tamás; Szabó, Péter; Károlyi, György; Toroczkai, Zoltán

    2003-10-01

    The problem of information integration and resistance to the invasion of parasitic mutants in prebiotic replicator systems is a notorious issue of research on the origin of life. Almost all theoretical studies published so far have demonstrated that some kind of spatial structure is indispensable for the persistence and/or the parasite resistance of any feasible replicator system. Based on a detailed critical survey of spatial models of prebiotic information integration, we suggest a possible scenario of replicator system evolution leading to the emergence of the first protocells capable of independent life. We show that even the spatial versions of the hypercycle model are vulnerable to selfish parasites in heterogeneous habitats. In contrast, the metabolic system remains persistent and coexists with its parasites both on heterogeneous surfaces and in chaotically mixing flowing media. Persistent metabolic parasites can be converted to metabolic cooperators, or they can gradually obtain replicase activity. Our simulations show that, once replicase activity has emerged, a gradual and simultaneous evolutionary improvement of replicase functionality (speed and fidelity) and template efficiency is possible only on a surface that constrains the mobility of macromolecular replicators. Based on the results of the models reviewed, we suggest that open chaotic flows ('soup') and surface dynamics ('pizza') both played key roles in the sequence of evolutionary events ultimately culminating in the appearance of the first living cell on Earth.

  12. Modeling of Maximum Power Point Tracking Controller for Solar Power System

    Directory of Open Access Journals (Sweden)

    Aryuanto Soetedjo

    2012-09-01

    Full Text Available In this paper, a Maximum Power Point Tracking (MPPT) controller for a solar power system is modeled using MATLAB Simulink. The model consists of a PV module, a buck converter, and an MPPT controller. The contribution of this work lies in the modeling of the buck converter, which allows the input voltage of the converter, i.e. the output voltage of the PV, to be changed by varying the duty cycle, so that the maximum power point can be tracked as the environment changes. The simulation results show that the developed model performs well in tracking the maximum power point (MPP) of the PV module using the Perturb and Observe (P&O) algorithm.
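
    The P&O algorithm referenced above follows a simple rule: keep perturbing the operating voltage in the same direction while measured power increases, and reverse direction when it decreases. A minimal sketch with a toy power curve (the quadratic below is illustrative, not a PV module model):

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.1):
    """One P&O update: keep moving in the same direction while power rises,
    reverse when it falls. Returns the next voltage reference."""
    if p >= p_prev:
        direction = 1.0 if v >= v_prev else -1.0
    else:
        direction = -1.0 if v >= v_prev else 1.0
    return v + direction * step

def pv_power(v):
    """Toy power curve with its maximum at v = 17 (illustrative only)."""
    return max(0.0, 60.0 - (v - 17.0) ** 2)

v_prev, v = 10.0, 10.1
p_prev = pv_power(v_prev)
for _ in range(200):
    p = pv_power(v)
    v_next = perturb_and_observe(v, p, v_prev, p_prev)
    v_prev, p_prev, v = v, p, v_next
# v now oscillates within a few perturbation steps of the maximum power point
```

    In the Simulink model the same logic drives the buck converter's duty cycle rather than a voltage reference directly; the characteristic steady-state oscillation around the MPP is the same.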

  13. On an application of Tikhonov's fixed point theorem to a nonlocal Cahn-Hilliard type system modeling phase separation

    Science.gov (United States)

    Colli, Pierluigi; Gilardi, Gianni; Sprekels, Jürgen

    2016-06-01

    This paper investigates a nonlocal version of a model for phase separation on an atomic lattice that was introduced by P. Podio-Guidugli (2006) [36]. The model consists of an initial-boundary value problem for a nonlinearly coupled system of two partial differential equations governing the evolution of an order parameter ρ and the chemical potential μ. Singular contributions to the local free energy in the form of logarithmic or double-obstacle potentials are admitted. In contrast to the local model, which was studied by P. Podio-Guidugli and the present authors in a series of recent publications, in the nonlocal case the equation governing the evolution of the order parameter contains in place of the Laplacian a nonlocal expression that originates from nonlocal contributions to the free energy and accounts for possible long-range interactions between the atoms. It is shown that just as in the local case the model equations are well posed, where the technique of proving existence is entirely different: it is based on an application of Tikhonov's fixed point theorem in a rather unusual separable and reflexive Banach space.

  14. THE EVOLUTION OF CANALIZATION AND THE BREAKING OF VON BAER'S LAWS: MODELING THE EVOLUTION OF DEVELOPMENT WITH EPISTASIS.

    Science.gov (United States)

    Rice, Sean H

    1998-06-01

    Evolution can change the developmental processes underlying a character without changing the average expression of the character itself. This sort of change must occur in both the evolution of canalization, in which a character becomes increasingly buffered against genetic or developmental variation, and in the phenomenon of closely related species that show similar adult phenotypes but different underlying developmental patterns. To study such phenomena, I develop a model that follows evolution on a surface representing adult phenotype as a function of underlying developmental characters. A contour on such a "phenotype landscape" is a set of states of developmental characters that produce the same adult phenotype. Epistasis induces curvature of this surface, and degree of canalization is represented by the slope along a contour. I first discuss the geometric properties of phenotype landscapes, relating epistasis to canalization. I then impose a fitness function on the phenotype and model evolution of developmental characters as a function of the fitness function and the local geometry of the surface. This model shows how canalization evolves as a population approaches an optimum phenotype. It further shows that under some circumstances, "decanalization" can occur, in which the expression of adult phenotype becomes increasingly sensitive to developmental variation. This process can cause very similar populations to diverge from one another developmentally even when their adult phenotypes experience identical selection regimes. © 1998 The Society for the Study of Evolution.

  15. Metal-Matrix Composites and Porous Materials: Constitute Models, Microstructure Evolution and Applications

    National Research Council Canada - National Science Library

    Castañeda, P

    2000-01-01

    Constitutive models were developed and implemented numerically to account for the evolution of microstructure and anisotropy in finite-deformation processes involving porous and composite materials...

  16. Simple point vortex model for the relaxation of 2D superfluid turbulence in a Bose-Einstein condensate

    Science.gov (United States)

    Kim, Joon Hyun; Kwon, Woo Jin; Shin, Yong-Il

    2016-05-01

    In a recent experiment, it was found that the dissipative evolution of a corotating vortex pair in a trapped Bose-Einstein condensate is well described by a point vortex model with longitudinal friction on the vortex motion, and the thermal friction coefficient was determined as a function of sample temperature. In this poster, we present a numerical study of the relaxation of 2D superfluid turbulence based on the dissipative point vortex model. We consider a homogeneous system in a cylindrical trap having randomly distributed vortices and implement vortex-antivortex pair annihilation by removing a pair when its separation becomes smaller than a certain threshold value. We characterize the relaxation of the turbulent vortex states by the decay time required for the vortex number to be reduced to a quarter of its initial value. We find the vortex decay time is inversely proportional to the thermal friction coefficient. In particular, the decay times obtained from this work show good quantitative agreement with the experimental results, indicating that, in spite of its simplicity, the point vortex model reasonably captures the physics of the relaxation dynamics of the real system.
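
    The equations of such a dissipative point vortex model are not spelled out in the abstract; a common form adds a transverse friction term proportional to the induced velocity, which drives a vortex-antivortex pair together. A minimal Euler-integration sketch for a single pair (the friction coefficient, time step, and initial positions are illustrative):

```python
import math

def pair_step(r1, r2, s1, s2, alpha, dt):
    """Euler step of a dissipative point-vortex model: each vortex is advected
    by the other's induced velocity v plus a friction term -alpha*s*(z x v),
    which shrinks the separation of a vortex-antivortex pair."""
    def induced(ra, rb, sb):
        dx, dy = ra[0] - rb[0], ra[1] - rb[1]
        rr = dx * dx + dy * dy
        return (-sb * dy / (2 * math.pi * rr), sb * dx / (2 * math.pi * rr))
    def move(r, v, s):
        fx, fy = alpha * s * v[1], -alpha * s * v[0]   # -alpha*s*(z x v)
        return (r[0] + (v[0] + fx) * dt, r[1] + (v[1] + fy) * dt)
    v1, v2 = induced(r1, r2, s2), induced(r2, r1, s1)
    return move(r1, v1, s1), move(r2, v2, s2)

r1, r2 = (0.5, 0.0), (-0.5, 0.0)       # vortex (s=+1) and antivortex (s=-1)
for _ in range(100):
    r1, r2 = pair_step(r1, r2, +1, -1, alpha=0.1, dt=0.01)
sep = math.hypot(r1[0] - r2[0], r1[1] - r2[1])   # below the initial 1.0
```

    A turbulence simulation of the kind described would integrate many such vortices and delete any pair whose separation falls below the annihilation threshold.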

  17. Structured spatio-temporal shot-noise Cox point process models, with a view to modelling forest fires

    DEFF Research Database (Denmark)

    Møller, Jesper; Diaz-Avalos, Carlos

    Spatio-temporal Cox point process models with a multiplicative structure for the driving random intensity, incorporating covariate information into temporal and spatial components, and with a residual term modelled by a shot-noise process, are considered. Such models are flexible and tractable for... a dataset consisting of 2796 days and 5834 spatial locations of fires. The model is compared with a spatio-temporal log-Gaussian Cox point process model, and likelihood-based methods are discussed to some extent.

  18. Hydrogeological modelling as a tool for understanding rockslides evolution

    Science.gov (United States)

    Crosta, Giovanni B.; De Caro, Mattia; Frattini, Paolo; Volpi, Giorgio

    2015-04-01

    construction of the models, in particular the partition of the slope in different sectors with different hydraulic conductivities, are coherent with the geological, structural, hydrological and hydrogeological field and laboratory data. The sensitivity analysis shows that the hydraulic conductivity of some slope sectors (e.g. morphostructures, compressed or relaxed slope-toe, basal shear band) strongly influence the water table position and evolution. In transient models, the values of specific storage coefficient play a major control on the amplitude of groundwater level fluctuations, deriving from snowmelt or induced reservoir level rise. The calibrated groundwater flow-models are consistent with groundwater levels measured in the proximity of the piezometers aligned along the sections. The two examples can be considered important for a more advanced understanding of the evolution of rockslides and suggest the required set of data and modelling approaches both for seasonal and long term slope stability analyses. The use of the results of such analyses is reported, for both the case studies, in a companion abstract in session 3.7 where elasto-visco-plastic rheologies have been adopted for the shear band materials to replicate the available displacement time-series.

  19. Beans (Phaseolus ssp.) as a Model for Understanding Crop Evolution

    Science.gov (United States)

    Bitocchi, Elena; Rau, Domenico; Bellucci, Elisa; Rodriguez, Monica; Murgia, Maria L.; Gioia, Tania; Santo, Debora; Nanni, Laura; Attene, Giovanna; Papa, Roberto

    2017-01-01

    Here, we aim to provide a comprehensive and up-to-date overview of the most significant outcomes in the literature regarding the origin of the Phaseolus genus, the geographical distribution of the wild species, the domestication process, and the wide dissemination out of the centers of origin. Phaseolus can be considered a unique model for the study of crop evolution, and in particular, for an understanding of the convergent phenotypic evolution that occurred under domestication. The almost unique situation that characterizes the Phaseolus genus is that five of its ∼70 species have been domesticated (i.e., Phaseolus vulgaris, P. coccineus, P. dumosus, P. acutifolius, and P. lunatus), and in addition, for P. vulgaris and P. lunatus, the wild forms are distributed in both Mesoamerica and South America, where at least two independent and isolated episodes of domestication occurred. Thus, at least seven independent domestication events occurred, which provides the possibility to unravel the genetic basis of the domestication process not only among species of the same genus, but also between gene pools within the same species. Along with this, other interesting features make Phaseolus crops very useful in the study of evolution, including: (i) their recent divergence, and the high level of collinearity and synteny among their genomes; (ii) their different breeding systems and life history traits, from annual and autogamous to perennial and allogamous; and (iii) their adaptation to different environments, not only in their centers of origin, but also outside the Americas, following their introduction and wide dissemination through different countries. In particular, for P. vulgaris this resulted in the breaking of the spatial isolation of the Mesoamerican and Andean gene pools, which allowed spontaneous hybridization, thus increasing the possibility of novel genotypes and phenotypes.
This knowledge, which is associated with the genetic resources that have been conserved ex situ and in

  20. Global Rebalancing of Cellular Resources by Pleiotropic Point Mutations Illustrates a Multi-scale Mechanism of Adaptive Evolution

    DEFF Research Database (Denmark)

    Utrilla, José; O'Brien, Edward J.; Chen, Ke

    2016-01-01

    Pleiotropic regulatory mutations affect diverse cellular processes, posing a challenge to our understanding of genotype-phenotype relationships across multiple biological scales. Adaptive laboratory evolution (ALE) allows for such mutations to be found and characterized in the context of clear se...

  1. Modeling the secular evolution of migrating planet pairs

    Science.gov (United States)

    Michtchenko, T. A.; Rodríguez, A.

    2011-10-01

    The secular regime of motion of multi-planetary systems is universal; in contrast with 'accidental' resonant motion, characteristic only of specific configurations of the planets, secular motion is present everywhere in phase space, even inside the resonant region. The secular behavior of a pair of planets evolving under dissipative forces is the principal subject of this study, particularly the case when the dissipative forces affect the planetary semi-major axes and the planets move inward/outward with respect to the central star, a process known as planet migration. Based on the fundamental concepts of conservative and dissipative dynamics of the three-body problem, we develop a qualitative model of the secular evolution of the migrating planetary pair. Our approach is based on an analysis of the energy and orbital angular momentum exchange between the two-planet system and an external medium; thus no specific kind of dissipative force is invoked. We show that, under the assumption that dissipation is weak and slow, the evolutionary routes of the migrating planets are traced by the Mode I and Mode II stationary solutions of the conservative secular problem. The ultimate convergence to, and evolution of the system along, one of these secular modes of motion is determined uniquely by the condition that the dissipation rate be sufficiently small compared to the proper secular frequency of the system. We show that it is possible to reconstruct the starting configurations and migration history of such systems on the basis of their final states, and consequently to constrain the parameters of the physical processes involved.

  2. Joint Clustering and Component Analysis of Correspondenceless Point Sets: Application to Cardiac Statistical Modeling.

    Science.gov (United States)

    Gooya, Ali; Lekadir, Karim; Alba, Xenia; Swift, Andrew J; Wild, Jim M; Frangi, Alejandro F

    2015-01-01

    Construction of Statistical Shape Models (SSMs) from arbitrary point sets is a challenging problem due to significant shape variation and lack of explicit point correspondence across the training data set. In medical imaging, point sets can generally represent different shape classes that span healthy and pathological exemplars. In such cases, the constructed SSM may not generalize well, largely because the probability density function (pdf) of the point sets deviates from the underlying assumption of Gaussian statistics. To this end, we propose a generative model for unsupervised learning of the pdf of point sets as a mixture of distinctive classes. A Variational Bayesian (VB) method is proposed for making joint inferences on the labels of point sets, and the principal modes of variation in each cluster. The method provides a flexible framework to handle point sets with no explicit point-to-point correspondences. We also show that by maximizing the marginalized likelihood of the model, the optimal number of clusters of point sets can be determined. We illustrate this work in the context of understanding the anatomical phenotype of the left and right ventricles of the heart. To this end, we use a database containing hearts of healthy subjects, patients with Pulmonary Hypertension (PH), and patients with Hypertrophic Cardiomyopathy (HCM). We demonstrate that our method can outperform traditional PCA in both generalization and specificity measures.

  3. Finding Non-Zero Stable Fixed Points of the Weighted Kuramoto model is NP-hard

    OpenAIRE

    Taylor, Richard

    2015-01-01

    The Kuramoto model, when considered over the full space of phase angles [0, 2π), can have multiple stable fixed points which form basins of attraction in the solution space. In this paper we illustrate the fundamentally complex relationship between the network topology and the solution space by showing that determining the possibility of multiple stable fixed points from the network topology is NP-hard for the weighted Kuramoto model. In the case of the unweighted model this problem is shown...
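
    Although the fixed-point decision problem is NP-hard, the dynamics themselves are easy to state and simulate. The sketch below integrates the weighted Kuramoto model dθᵢ/dt = ωᵢ + Σⱼ wᵢⱼ sin(θⱼ − θᵢ) on a small unweighted ring of identical oscillators until the phases lock; the network, weights, and initial phases are illustrative choices, not the paper's construction.

```python
import math

def kuramoto_step(theta, omega, w, dt=0.01):
    """One explicit-Euler step of dtheta_i/dt = omega_i + sum_j w_ij sin(theta_j - theta_i)."""
    n = len(theta)
    dtheta = [omega[i] + sum(w[i][j] * math.sin(theta[j] - theta[i]) for j in range(n))
              for i in range(n)]
    return [(theta[i] + dt * dtheta[i]) % (2 * math.pi) for i in range(n)]

# Identical oscillators on an unweighted ring of four nodes (hypothetical network).
n = 4
omega = [0.0] * n
w = [[0.0] * n for _ in range(n)]
for i in range(n):
    w[i][(i + 1) % n] = w[i][(i - 1) % n] = 1.0

theta = [0.1, 0.8, 0.3, 1.2]          # arbitrary initial phases
for _ in range(20000):
    theta = kuramoto_step(theta, omega, w)

# At a stable fixed point every phase velocity vanishes.
residual = max(abs(omega[i] + sum(w[i][j] * math.sin(theta[j] - theta[i]) for j in range(n)))
               for i in range(n))
print(residual < 1e-6)
```

    For this small, symmetric network the gradient-like flow settles quickly; the hardness result concerns deciding, from topology alone, whether multiple such stable fixed points can coexist.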

  4. Models of microbiome evolution incorporating host and microbial selection.

    Science.gov (United States)

    Zeng, Qinglong; Wu, Steven; Sukumaran, Jeet; Rodrigo, Allen

    2017-09-25

    Numerous empirical studies suggest that hosts and microbes exert reciprocal selective effects on their ecological partners. Nonetheless, we still lack an explicit framework to model the dynamics of both hosts and microbes under selection. In a previous study, we developed an agent-based forward-time computational framework to simulate the neutral evolution of host-associated microbial communities in a constant-sized, unstructured population of hosts. These neutral models allowed offspring to sample microbes randomly from parents and/or from the environment. Additionally, the environmental pool of available microbes was constituted by fixed and persistent microbial OTUs and by contributions from host individuals in the preceding generation. In this paper, we extend our neutral models to allow selection to operate on both hosts and microbes. We do this by constructing a phenome for each microbial OTU consisting of a sample of traits that influence host and microbial fitnesses independently. Microbial traits can influence the fitness of hosts ("host selection") and the fitness of microbes ("trait-mediated microbial selection"). Additionally, the fitness effects of traits on microbes can be modified by their hosts ("host-mediated microbial selection"). We simulate the effects of these three types of selection, individually or in combination, on microbiome diversities and the fitnesses of hosts and microbes over several thousand generations of hosts. We show that microbiome diversity is strongly influenced by selection acting on microbes. Selection acting on hosts only influences microbiome diversity when there is near-complete direct or indirect parental contribution to the microbiomes of offspring. Unsurprisingly, microbial fitness increases under microbial selection. 
Interestingly, when host selection operates, host fitness only increases under two conditions: (1) when there is a strong parental contribution to microbial communities or (2) in the absence of a strong

  5. TESTING MODELS OF MAGNETIC FIELD EVOLUTION OF NEUTRON STARS WITH THE STATISTICAL PROPERTIES OF THEIR SPIN EVOLUTIONS

    International Nuclear Information System (INIS)

    Zhang Shuangnan; Xie Yi

    2012-01-01

    We test models for the evolution of neutron star (NS) magnetic fields (B). Our model for the evolution of the NS spin is taken from an analysis of pulsar timing noise presented by Hobbs et al. We first test the standard model of a pulsar's magnetosphere, in which B does not change with time and magnetic dipole radiation is assumed to dominate the pulsar's spin-down. We find that this model fails to predict both the magnitudes and the signs of the second derivatives of the spin frequencies (ν-double dot). We then construct a phenomenological model of the evolution of B, which contains a long-term decay (LTD) modulated by short-term oscillations; a pulsar's spin is thus modified by its B-evolution. We find that an exponential LTD is not favored by the observed statistical properties of ν-double dot for young pulsars, and fails to explain the fact that ν-double dot is negative for roughly half of the old pulsars. A simple power-law LTD can explain all the observed statistical properties of ν-double dot. Finally, we discuss some physical implications of our results for models of the B-decay of NSs and suggest that reliable determination of the true ages of many young NSs is needed in order to further constrain the physical mechanisms of their B-decay. Our model can be further tested with the measured evolutions of ν-dot and ν-double dot for an individual pulsar; the decay index, oscillation amplitude, and period can also be determined this way for the pulsar.
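
    The sign argument is easy to reproduce numerically. The toy integration below assumes pure dipole spin-down, ν-dot = −K B(t)² ν³, with illustrative units and constants (K, the oscillation amplitude, and the timescales are not taken from the paper): a constant field forces ν-double dot > 0 everywhere, while an oscillating field lets it change sign, the qualitative behaviour invoked to explain the observed statistics.

```python
import math

K = 1e-3   # lumped dipole constant, illustrative units

def evolve(B_of_t, nu0=1.0, t_end=50.0, dt=1e-3):
    """Euler-integrate nu_dot = -K * B(t)^2 * nu^3 and return (t, nu) samples."""
    t, nu, out = 0.0, nu0, []
    while t < t_end:
        out.append((t, nu))
        nu += dt * (-K * B_of_t(t) ** 2 * nu ** 3)
        t += dt
    return out

def nu_ddot(samples, i):
    """Centred finite-difference estimate of the second derivative at sample i."""
    (t0, n0), (t1, n1), (_, n2) = samples[i - 1], samples[i], samples[i + 1]
    h = t1 - t0
    return (n2 - 2 * n1 + n0) / h ** 2

const = evolve(lambda t: 1.0)                              # B frozen
osc = evolve(lambda t: 1.0 + 0.5 * math.sin(0.5 * t))      # B oscillates

# Constant field: nu-double-dot is positive at every sampled epoch ...
all_positive = all(nu_ddot(const, i) > 0 for i in range(1, len(const) - 1, 500))
# ... oscillating field: it takes both signs over the run.
signs = {nu_ddot(osc, i) > 0 for i in range(1, len(osc) - 1, 500)}
print(all_positive, signs == {True, False})
```

    With B frozen, ν-double dot = 3K²B⁴ν⁵ is strictly positive, so negative measured values immediately rule the standard model out, which is the abstract's first result.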

  6. Preliminary conceptual model for mineral evolution in Yucca Mountain

    International Nuclear Information System (INIS)

    Duffy, C.J.

    1993-12-01

    A model is presented for mineral alteration in Yucca Mountain, Nevada, which suggests that the mineral transformations observed there are primarily controlled by the activity of aqueous silica. The rate of these reactions is related to the rate of evolution of the metastable silica polymorphs opal-CT and cristobalite, assuming that the activity of SiO₂(aq) is fixed at the equilibrium solubility of the most soluble silica polymorph present. The rate equations accurately predict the present depths of disappearance of opal-CT and cristobalite. The rate equations have also been used to predict the extent of future mineral alteration that may result from emplacement of a high-level nuclear waste repository in Yucca Mountain. Relatively small changes in mineralogy are predicted, but these predictions are based on the assumption that emplacement of a repository would neither increase the pH of water in Yucca Mountain nor increase its carbonate content. Such changes may significantly increase mineral alteration. Some of the reactions currently occurring in Yucca Mountain consume H⁺ and CO₃²⁻. Combining reaction-rate models for these reactions with water chemistry data may make it possible to estimate water flux through the basal vitrophyre of the Topopah Spring Member and to help confirm the direction and rate of flow of groundwater in Yucca Mountain.

  7. Orchestrated structure evolution: modeling growth-regulated nanomanufacturing

    Energy Technology Data Exchange (ETDEWEB)

    Abbasi, Shaghayegh; Boehringer, Karl F [Department of Electrical Engineering, University of Washington, Seattle, WA 98195-2500 (United States); Kitayaporn, Sathana; Schwartz, Daniel T, E-mail: karlb@washington.edu [Department of Chemical Engineering, University of Washington, Seattle, WA 98195-2500 (United States)

    2011-04-22

    Orchestrated structure evolution (OSE) is a scalable manufacturing method that combines the advantages of top-down (tool-directed) and bottom-up (self-propagating) approaches. The method consists of a seed patterning step that defines where material nucleates, followed by a growth step that merges seeded islands into the final patterned thin film. We develop a model to predict the completed pattern based on a computationally efficient approximate Green's function solution of the diffusion equation plus a Voronoi diagram based approach that defines the final grain boundary structure. Experimental results rely on electron beam lithography to pattern the seeds, followed by the mass transfer limited growth of copper via electrodeposition. The seed growth model is compared with experimental results to quantify nearest neighbor seed-to-seed interactions as well as how seeds interact with the pattern boundary to impact the local growth rate. Seed-to-seed and seed-to-pattern interactions are shown to result in overgrowth of seeds on edges and corners of the shape, where seeds have fewer neighbors. We explore how local changes to the seed location can be used to improve the patterning quality without increasing the manufacturing cost. OSE is shown to enable a unique set of trade-offs between the cost, time, and quality of thin film patterning.
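
    In the simplest isotropic-growth limit, the final grain-boundary structure described above reduces to a Voronoi partition of the pattern around the seed locations. The sketch below, a discrete nearest-seed assignment on a grid with hypothetical seed positions, illustrates that limiting construction only; it is not the paper's Green's-function growth model, which adds seed-to-seed and seed-to-boundary interactions.

```python
import math

def discrete_voronoi(seeds, nx=40, ny=40):
    """Label each grid cell with its nearest seed -- the grain that would
    claim it under uniform, isotropic growth from all seeds at once."""
    return [[min(range(len(seeds)), key=lambda k: math.dist((x, y), seeds[k]))
             for x in range(nx)]
            for y in range(ny)]

seeds = [(10, 10), (30, 10), (20, 30)]        # hypothetical seed pattern
label = discrete_voronoi(seeds)

# Each seed lies in its own grain, and the grains tile the whole pattern.
own = {label[y][x] for (x, y) in seeds}
grain_sizes = [sum(row.count(k) for row in label) for k in range(len(seeds))]
print(own, grain_sizes, sum(grain_sizes))
```

    Moving a seed in this picture moves the shared grain boundaries, which is the lever OSE uses to improve patterning quality without extra seeds.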

  8. Molecular modeling of the microstructure evolution during carbon fiber processing

    Science.gov (United States)

    Desai, Saaketh; Li, Chunyu; Shen, Tongtong; Strachan, Alejandro

    2017-12-01

    The rational design of carbon fibers with desired properties requires quantitative relationships between the processing conditions, microstructure, and resulting properties. We developed a molecular model that combines kinetic Monte Carlo and molecular dynamics techniques to predict the microstructure evolution during the processes of carbonization and graphitization of polyacrylonitrile (PAN)-based carbon fibers. The model accurately predicts the cross-sectional microstructure of the fibers with the molecular structure of the stabilized PAN fibers and physics-based chemical reaction rates as the only inputs. The resulting structures exhibit key features observed in electron microscopy studies such as curved graphitic sheets and hairpin structures. In addition, computed X-ray diffraction patterns are in good agreement with experiments. We predict the transverse moduli of the resulting fibers between 1 GPa and 5 GPa, in good agreement with experimental results for high modulus fibers and slightly lower than those of high-strength fibers. The transverse modulus is governed by sliding between graphitic sheets, and the relatively low value for the predicted microstructures can be attributed to their perfect longitudinal texture. Finally, the simulations provide insight into the relationships between chemical kinetics and the final microstructure; we observe that high reaction rates result in porous structures with lower moduli.

  9. A simple model for research interest evolution patterns

    Science.gov (United States)

    Jia, Tao; Wang, Dashun; Szymanski, Boleslaw

    Sir Isaac Newton supposedly remarked that in his scientific career he was like "...a boy playing on the sea-shore ...finding a smoother pebble or a prettier shell than ordinary". His remarkable modesty and famous understatement motivate us to seek regularities in how scientists shift their research focus as their careers develop. Indeed, despite intensive investigation of how microscopic factors, such as incentives and risks, influence a scientist's choice of research agenda, little is known about the macroscopic patterns of research-interest change undertaken by individual scientists throughout their careers. Here we make use of over 14,000 authors' publication records in physics. By quantifying statistical characteristics of the interest evolution, we model scientific research as a random walk, which reproduces patterns in individuals' careers observed empirically. Despite the myriad factors that shape and influence individual choices of research subjects, we identified regularities in this dynamical process that are well captured by a simple statistical model. The results advance our understanding of scientists' behavior during their careers and open up avenues for future studies in the science of science.
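
    A minimal reading of the random-walk picture, with an abstract two-dimensional "topic space" and unit steps (both assumptions of this sketch, not the authors' calibrated model), already exhibits the diffusive signature: the mean-squared interest shift grows linearly with the number of papers.

```python
import math
import random
import statistics

random.seed(3)

def interest_shift(n_papers, step=1.0):
    """Net displacement in an abstract 2-D topic space after n_papers
    unit steps in uniformly random directions (toy reading of the model)."""
    x = y = 0.0
    for _ in range(n_papers):
        ang = random.uniform(0.0, 2.0 * math.pi)
        x += step * math.cos(ang)
        y += step * math.sin(ang)
    return math.hypot(x, y)

def mean_sq_shift(n_papers, trials=2000):
    return statistics.fmean(interest_shift(n_papers) ** 2 for _ in range(trials))

# Diffusive signature: mean-squared shift grows linearly with career length,
# so the ratio to n_papers is roughly constant (here ~1, the squared step).
print(round(mean_sq_shift(25) / 25, 1), round(mean_sq_shift(100) / 100, 1))
```

    Comparing such a baseline against empirical interest distances is one way to test whether real careers wander more or less than pure diffusion.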

  10. Package models and the information crisis of prebiotic evolution.

    Science.gov (United States)

    Silvestre, Daniel A M M; Fontanari, José F

    2008-05-21

    The coexistence between different types of templates has been the choice solution to the information crisis of prebiotic evolution, triggered by the finding that a single RNA-like template cannot carry enough information to code for any useful replicase. In principle, confining d distinct templates of length L in a package or protocell, whose survival depends on the coexistence of the templates it holds, could resolve this crisis provided that d is made sufficiently large. Here we review the prototypical package model of Niesert et al. [1981. Origin of life between Scylla and Charybdis. J. Mol. Evol. 17, 348-353], which guarantees the greatest possible region of viability of the protocell population, and show that this model, and hence the entire package approach, does not resolve the information crisis. In particular, we show that the total information stored in a viable protocell (Ld) tends to a constant value that depends only on the spontaneous error rate per nucleotide of the template replication mechanism. As a result, an increase of d must be followed by a decrease of L, so that the net information gain is null.
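
    The scaling can be illustrated with a back-of-the-envelope fidelity criterion (a deliberate simplification, not the stochastic model of Niesert et al.): if each of the Ld nucleotides is copied with error rate ε, and viability requires the whole template set to be copied without error with probability at least q_min, then (1 − ε)^(Ld) ≥ q_min caps the total information at Ld ≤ ln q_min / ln(1 − ε) ≈ −ln q_min / ε, a constant for fixed ε.

```python
import math

def max_total_information(eps, q_min=0.5):
    """Largest total template length L*d such that all Ld nucleotides are
    copied without error with probability at least q_min (toy criterion)."""
    # (1 - eps)**(L*d) >= q_min   =>   L*d <= ln(q_min) / ln(1 - eps)
    return math.log(q_min) / math.log(1.0 - eps)

for eps in (0.01, 0.005, 0.001):
    ld = max_total_information(eps)
    # eps * (L*d) stays near ln 2: halving the error rate doubles L*d.
    print(eps, round(ld), round(eps * ld, 2))
```

    The product ε·Ld is pinned near −ln q_min, so raising d only buys shorter templates, which is the "null net information gain" of the abstract in caricature.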

  11. The evolution of menstruation: A new model for genetic assimilation

    Science.gov (United States)

    Emera, D.; Romero, R.; Wagner, G.

    2012-01-01

    Why do humans menstruate while most mammals do not? Here, we present our answer to this long-debated question, arguing that (i) menstruation occurs as a mechanistic consequence of hormone-induced differentiation of the endometrium (referred to as spontaneous decidualization, or SD); (ii) SD evolved because of maternal-fetal conflict; and (iii) SD evolved by genetic assimilation of the decidualization reaction, which is induced by the fetus in non-menstruating species. The idea that menstruation occurs as a consequence of SD has been proposed in the past, but here we present a novel hypothesis on how SD evolved. We argue that decidualization became genetically stabilized in menstruating lineages, allowing females to prepare for pregnancy without any signal from the fetus. We present three models for the evolution of SD by genetic assimilation, based on recent advances in our understanding of the mechanisms of endometrial differentiation and implantation. Testing these models will ultimately shed light on the evolutionary significance of menstruation, as well as on the etiology of human reproductive disorders like endometriosis and recurrent pregnancy loss. PMID:22057551

  12. A numerical model for meltwater channel evolution in glaciers

    Directory of Open Access Journals (Sweden)

    A. H. Jarosch

    2012-04-01

    Meltwater channels form an integral part of the hydrological system of a glacier. Better understanding of how meltwater channels develop and evolve is required to fully comprehend supraglacial and englacial meltwater drainage. Incision of supraglacial stream channels and subsequent roof closure by ice deformation has been proposed in recent literature as a possible englacial conduit formation process. Field evidence for supraglacial stream incision has been found in Svalbard and Nepal. In Iceland, where volcanic activity provides meltwater with temperatures above 0 °C, rapid enlargement of supraglacial channels has been observed. Supraglacial channels provide meltwater through englacial passages to the subglacial hydrological systems of large ice sheets, which in turn affects ice sheet motion and their contribution to eustatic sea level change. By coupling, for the first time, a numerical ice dynamic model to a hydraulic model which includes heat transfer, we investigate the evolution of meltwater channels and their incision behaviour. We present results for different, constant meltwater fluxes, different channel slopes, different meltwater temperatures, different melt rate distributions in the channel as well as temporal variations in meltwater flux. The key parameters governing incision rate and depth are channel slope, meltwater temperature loss to the ice and meltwater flux. Channel width and geometry are controlled by melt rate distribution along the channel wall. Calculated Nusselt numbers suggest that turbulent mixing is the main heat transfer mechanism in the meltwater channels studied.

  13. A coupled geomorphic and ecological model of tidal marsh evolution.

    Science.gov (United States)

    Kirwan, Matthew L; Murray, A Brad

    2007-04-10

    The evolution of tidal marsh platforms and interwoven channel networks cannot be addressed without treating the two-way interactions that link biological and physical processes. We have developed a 3D model of tidal marsh accretion and channel network development that couples physical sediment transport processes with vegetation biomass productivity. Tidal flow tends to cause erosion, whereas vegetation biomass, a function of bed surface depth below high tide, influences the rate of sediment deposition and slope-driven transport processes such as creek bank slumping. With a steady, moderate rise in sea level, the model builds a marsh platform and channel network with accretion rates everywhere equal to the rate of sea-level rise, meaning water depths and biological productivity remain temporally constant. An increase in the rate of sea-level rise, or a reduction in sediment supply, causes marsh-surface depths, biomass productivity, and deposition rates to increase while simultaneously causing the channel network to expand. Vegetation on the marsh platform can promote a metastable equilibrium where the platform maintains elevation relative to a rapidly rising sea level, although disturbance to vegetation could cause irreversible loss of marsh habitat.
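
    The feedback at the heart of the model, deposition that depends on depth below high tide through vegetation biomass, can be caricatured in zero dimensions. In the sketch below (all rate constants and the parabolic biomass curve are illustrative stand-ins, not the calibrated 3D model), the platform relaxes to the depth at which accretion balances sea-level rise, and a faster rise produces a deeper equilibrium platform, mirroring the depth adjustment the abstract describes.

```python
def accretion(depth, s_min=2.0, k_bio=8.0, d_opt=0.5):
    """Toy accretion rate (mm/yr): a minerogenic term plus a biomass term
    that peaks at an intermediate depth below high tide (depth in m).
    All parameter values are illustrative, not calibrated."""
    biomass = max(0.0, 1.0 - ((depth - d_opt) / d_opt) ** 2)   # parabolic, 0..1
    return s_min * depth + k_bio * biomass

def equilibrium_depth(slr, d0=0.3, dt=0.01, steps=50000):
    """Relax dD/dt = SLR - accretion(D) (rates in mm/yr, depth in m)."""
    d = d0
    for _ in range(steps):
        d = max(0.0, d + dt * (slr - accretion(d)) * 1e-3)   # mm/yr -> m/yr
    return d

slow, fast = equilibrium_depth(slr=5.0), equilibrium_depth(slr=8.0)
# Faster sea-level rise => deeper equilibrium platform (greater biomass demand).
print(round(slow, 3), round(fast, 3), slow < fast)
```

    On the rising limb of the biomass curve the equilibrium is stable, which is the metastable vegetated state of the abstract; past the biomass optimum the balance is unstable and the platform can be lost, the model's irreversible-drowning scenario.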

  14. A Biomechanical Model of Single-joint Arm Movement Control Based on the Equilibrium Point Hypothesis

    OpenAIRE

    Masataka, SUZUKI; Yoshihiko, YAMAZAKI; Yumiko, TANIGUCHI; Department of Psychology, Kinjo Gakuin University; Department of Health and Physical Education, Nagoya Institute of Technology; College of Human Life and Environment, Kinjo Gakuin University

    2003-01-01

    SUZUKI,M., YAMAZAKI,Y. and TANIGUCHI,Y., A Biomechanical Model of Single-joint Arm Movement Control Based on the Equilibrium Point Hypothesis. Adv. Exerc. Sports Physiol., Vol.9, No.1 pp.7-25, 2003. According to the equilibrium point hypothesis of motor control, control action of muscles is not explicitly computed, but rather arises as a consequence of interaction among moving equilibrium point, reflex feedback and muscle mechanical properties. This approach is attractive as it obviates the n...

  15. On the asymptotic ergodic capacity of FSO links with generalized pointing error model

    KAUST Repository

    Al-Quwaiee, Hessa

    2015-09-11

    Free-space optical (FSO) communication systems are negatively affected by two physical phenomena, namely, scintillation due to atmospheric turbulence and pointing errors. To quantify the effect of these two factors on FSO system performance, we need an effective mathematical model for them. Scintillation is typically modeled by the log-normal and Gamma-Gamma distributions for weak and strong turbulence conditions, respectively. In this paper, we propose and study a generalized pointing error model based on the Beckmann distribution. We then derive the asymptotic ergodic capacity of FSO systems under the joint impact of turbulence and generalized pointing error impairments. © 2015 IEEE.
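
    The Beckmann pointing error model amounts to drawing the radial displacement from two independent, non-zero-mean Gaussian jitter components with possibly unequal variances. The Monte-Carlo sketch below samples it and estimates a simple Gaussian-beam collection factor E[exp(−2r²/w²)]; this figure of merit is an illustrative stand-in, not the ergodic-capacity expression derived in the paper.

```python
import math
import random

random.seed(1)

def beckmann_radius(mu_x, mu_y, sigma_x, sigma_y):
    """One draw of the radial displacement under the Beckmann model:
    independent non-zero-mean Gaussians per axis, unequal variances allowed."""
    return math.hypot(random.gauss(mu_x, sigma_x), random.gauss(mu_y, sigma_y))

def mean_pointing_gain(mu_x, mu_y, sigma_x, sigma_y, w=1.0, n=200000):
    """Monte-Carlo estimate of E[exp(-2 r^2 / w^2)] for a Gaussian beam of
    radius w -- an illustrative collection factor, not the paper's capacity."""
    total = 0.0
    for _ in range(n):
        r = beckmann_radius(mu_x, mu_y, sigma_x, sigma_y)
        total += math.exp(-2.0 * r * r / (w * w))
    return total / n

aligned = mean_pointing_gain(0.0, 0.0, 0.1, 0.1)      # jitter only
offset = mean_pointing_gain(0.3, 0.2, 0.1, 0.1)       # boresight displacement
print(round(aligned, 3), round(offset, 3), aligned > offset)
```

    Setting the means to zero and the variances equal recovers the familiar Rayleigh pointing error model as a special case, which is what makes the Beckmann form "generalized".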

  16. Point kinetics model with one-dimensional (radial) heat conduction formalism

    International Nuclear Information System (INIS)

    Jain, V.K.

    1989-01-01

    A point-kinetics model with a one-dimensional (radial) heat conduction formalism has been developed. The heat conduction formalism is based on a corner-mesh finite difference method. To obtain average temperatures in the various conducting regions, a novel weighting scheme has been devised. The heat conduction model has been incorporated in the point-kinetics code MRTF-FUEL. The point-kinetics equations are solved using the method of real integrating factors. By simulating a hypothetical loss-of-regulation accident in the NAPP reactor, it has been shown that the model is superior to the conventional one in both accuracy and speed of computation. (author). 3 refs., 3 tabs
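
    For orientation, the neutronics part of such a model is the standard point-kinetics system; with one delayed-neutron group it reads dn/dt = ((ρ − β)/Λ) n + λC and dC/dt = (β/Λ) n − λC. The sketch below integrates it with explicit Euler rather than the integrating-factor method of MRTF-FUEL, omits the heat-conduction feedback entirely, and uses generic parameter values, not those of the NAPP analysis.

```python
def point_kinetics(rho, beta=0.0065, Lam=1e-4, lam=0.08,
                   n0=1.0, dt=1e-5, t_end=1.0):
    """One-delayed-group point kinetics, explicit Euler (generic parameters;
    MRTF-FUEL instead uses real integrating factors plus thermal feedback)."""
    n = n0
    c = beta * n0 / (Lam * lam)      # equilibrium precursor concentration
    t = 0.0
    while t < t_end:
        dn = ((rho - beta) / Lam) * n + lam * c
        dc = (beta / Lam) * n - lam * c
        n, c, t = n + dt * dn, c + dt * dc, t + dt
    return n

critical = point_kinetics(rho=0.0)       # stays at the initial power
ramp = point_kinetics(rho=0.001)         # +100 pcm: prompt jump, then slow rise
print(round(critical, 6), round(ramp, 3))
```

    The stiffness visible here (the prompt time constant Λ/(β − ρ) is tens of milliseconds while delayed growth takes seconds) is exactly why integrating-factor or other stiff-aware schemes pay off in speed over naive explicit stepping.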

  17. Prediction model for initial point of net vapor generation for low-flow boiling

    International Nuclear Information System (INIS)

    Sun Qi; Zhao Hua; Yang Ruichang

    2003-01-01

    The prediction of the initial point of net vapor generation is significant for the calculation of phase distribution in sub-cooled boiling. However, most investigations have addressed high-flow boiling, and there is no common model that can be successfully applied to low-flow boiling. A predictive model for the initial point of net vapor generation for low-flow forced convection and natural circulation is established here through an analysis of evaporation and condensation heat transfer. The comparison between experimental data and calculated results shows that this model can successfully predict the net vapor generation point in low-flow sub-cooled boiling.

  18. Point-Structured Human Body Modeling Based on 3D Scan Data

    Directory of Open Access Journals (Sweden)

    Ming-June Tsai

    2018-01-01

    A novel point-structured geometrical model of the realistic human body is introduced in this paper. The technique is based on feature extraction from 3D body scan data. Anatomic features such as the neck, the armpits, the crotch points, and other major feature points are recognized. The body data are then segmented into 6 major parts. A body model is then constructed by re-sampling the scanned data to create a point-structured mesh. The body model contains body geodetic landmarks in latitudinal and longitudinal curves passing through those feature points. The body model preserves the perfect body shape and all the body dimensions but requires little storage space. Therefore, the body model can be used as a mannequin in the garment industry, or as a manikin in various human-factor designs, but the most important application is as a virtual character for animating body motion in mocap (motion capture) systems. By adding suitable joint freedoms between the segmented body links, the kinematic and dynamic properties of motion theories can be applied to the body model. As a result, a 3D virtual character that fully resembles the originally scanned individual vividly animates the body motions. The gaps between body segments due to motion can be filled by a skin-blending technique using the characteristics of the point-structured model. The model has the potential to serve as a standardized datatype to archive body information for all custom-made products.

  19. The importance of topographically corrected null models for analyzing ecological point processes.

    Science.gov (United States)

    McDowall, Philip; Lynch, Heather J

    2017-07-01

    Analyses of point process patterns and related techniques (e.g., MaxEnt) make use of the expected number of occurrences per unit area and second-order statistics based on the distance between occurrences. Ecologists working with point process data often assume that points exist on a two-dimensional x-y plane or within a three-dimensional volume, when in fact many observed point patterns are generated on a two-dimensional surface existing within three-dimensional space. For many surfaces, however, such as the topography of landscapes, the projection from the surface to the x-y plane preserves neither area nor distance. As such, when these point patterns are implicitly projected to and analyzed in the x-y plane, our expectations of the point pattern's statistical properties may not be met. When used in hypothesis testing, we find that the failure to account for the topography of the generating surface may bias statistical tests that incorrectly identify clustering and, furthermore, may bias coefficients in inhomogeneous point process models that incorporate slope as a covariate. We demonstrate the circumstances under which this bias is significant, and present simple methods that allow point processes to be simulated with corrections for topography. These point patterns can then be used to generate "topographically corrected" null models against which observed point processes can be compared. © 2017 by the Ecological Society of America.
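
    The projection bias is simple to demonstrate in one dimension. In the sketch below, points are placed with complete spatial randomness per unit length of a "terrain" whose left half has slope 2 and whose right half is flat (a hypothetical surface, not one of the paper's landscapes); projected onto the analysis axis, the steep half shows an apparent excess of points by the area-element factor √(1 + f′²), precisely the artefact a topographically corrected null model must absorb.

```python
import math
import random

random.seed(7)

SLOPE_FACTOR_STEEP = math.sqrt(1.0 + 2.0 ** 2)   # sqrt(1 + f'(x)^2) for slope 2

def area_factor(x):
    """Surface-length element per unit of planar x for a terrain with
    slope 2 on [0, 1) and slope 0 on [1, 2) (hypothetical profile)."""
    return SLOPE_FACTOR_STEEP if x < 1.0 else 1.0

def csr_on_surface(n=50000):
    """Uniform points per unit SURFACE length, drawn by rejection sampling,
    then projected onto the planar x-axis used for analysis."""
    pts = []
    while len(pts) < n:
        x = random.uniform(0.0, 2.0)
        if random.uniform(0.0, SLOPE_FACTOR_STEEP) < area_factor(x):
            pts.append(x)
    return pts

pts = csr_on_surface()
steep_share = sum(1 for x in pts if x < 1.0) / len(pts)
expected = SLOPE_FACTOR_STEEP / (SLOPE_FACTOR_STEEP + 1.0)   # analytic share
# CSR on the surface looks like "clustering" over the steep half of the plane.
print(round(steep_share, 2), round(expected, 2))
```

    A naive homogeneity test on the projected pattern would flag this excess as clustering; simulating the null on the surface itself, as above, removes it by construction.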

  20. Experimental study and modelling of the well-mixing length. Application to the representativeness of sampling points in duct

    International Nuclear Information System (INIS)

    Alengry, Jonathan

    2014-01-01

    Monitoring of gaseous releases from nuclear installations into the environment, and the measurement of air-cleaning efficiency, are based on regular measurements of contaminant concentrations in outlet chimneys and ventilation systems. The concentration distribution may be heterogeneous at the measuring point if the mixing distance is insufficient. This raises the question of where to place the sampling point in a duct, and of the measurement error relative to the homogeneous concentration when this distance is not respected. This study defines the so-called 'well-mixing length' from laboratory experiments. The bench designed for these tests reproduced flows in long circular and rectangular ducts, each including a bend. An optical measurement technique was developed, calibrated and used to measure the concentration distribution of a tracer injected into the flow. The experimental results in the cylindrical duct validated an analytical model based on the convection-diffusion equation for a tracer, and led to proposed models for the well-mixing length and for the representativeness of sampling points. In the rectangular duct, the acquired measurements constitute a first database on the evolution of tracer homogenization, with a view to numerical simulations exploring more realistic conditions for in situ measurements. (author) [fr]

  1. Virtual and Printed 3D Models for Teaching Crystal Symmetry and Point Groups

    Science.gov (United States)

    Casas, Lluís; Estop, Eugènia

    2015-01-01

    Both, virtual and printed 3D crystal models can help students and teachers deal with chemical education topics such as symmetry and point groups. In the present paper, two freely downloadable tools (interactive PDF files and a mobile app) are presented as examples of the application of 3D design to study point-symmetry. The use of 3D printing to…

  2. Modelling the temperature evolution of bone under high intensity focused ultrasound

    Science.gov (United States)

    ten Eikelder, H. M. M.; Bošnački, D.; Elevelt, A.; Donato, K.; Di Tullio, A.; Breuer, B. J. T.; van Wijk, J. H.; van Dijk, E. V. M.; Modena, D.; Yeo, S. Y.; Grüll, H.

    2016-02-01

    Magnetic resonance-guided high intensity focused ultrasound (MR-HIFU) has been clinically shown to be effective for palliative pain management in patients suffering from skeletal metastasis. The underlying mechanism is supposed to be periosteal denervation caused by ablative temperatures reached through ultrasound heating of the cortex. The challenge is exact temperature control during sonication as MR-based thermometry approaches for bone tissue are currently not available. Thus, in contrast to the MR-HIFU ablation of soft tissue, a thermometry feedback to the HIFU is lacking, and the treatment of bone metastasis is entirely based on temperature information acquired in the soft tissue adjacent to the bone surface. However, heating of the adjacent tissue depends on the exact sonication protocol and requires extensive modelling to estimate the actual temperature of the cortex. Here we develop a computational model to calculate the spatial temperature evolution in bone and the adjacent tissue during sonication. First, a ray-tracing technique is used to compute the heat production in each spatial point serving as a source term for the second part, where the actual temperature is calculated as a function of space and time by solving the Pennes bio-heat equation. Importantly, our model includes shear waves that arise at the bone interface as well as all geometrical considerations of transducer and bone geometry. The model was compared with a theoretical approach based on the far field approximation and an MR-HIFU experiment using a bone phantom. Furthermore, we investigated the contribution of shear waves to the heat production and resulting temperatures in bone. The temperature evolution predicted by our model was in accordance with the far field approximation and agreed well with the experimental data obtained in phantoms. Our model allows the simulation of the HIFU treatments of bone metastasis in patients and can be extended to a planning tool prior to MR
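
The temperature calculation described above rests on the Pennes bio-heat equation. The sketch below is a hypothetical 1-D explicit finite-difference illustration of that equation with a Gaussian stand-in for the focal HIFU heat deposition; it is not the paper's ray-tracing model, and all parameter values are invented for illustration.

```python
import numpy as np

# 1-D explicit finite-difference sketch of the Pennes bio-heat equation:
#   rho*c*dT/dt = k*d2T/dx2 - wb*cb*(T - Ta) + Q
# All parameter values are illustrative, not taken from the paper.
k, rho, c = 0.5, 1050.0, 3600.0   # conductivity W/(m K), density kg/m^3, heat capacity J/(kg K)
wb, cb, Ta = 0.5, 3800.0, 37.0    # perfusion kg/(m^3 s), blood heat capacity, arterial temp (C)

n, L = 101, 0.05                  # grid points, domain length (m)
dx = L / (n - 1)
dt = 0.05                         # time step (s)
alpha = k / (rho * c)
assert alpha * dt / dx**2 < 0.5   # stability condition for the explicit scheme

x = np.linspace(0.0, L, n)
T = np.full(n, 37.0)              # initial body temperature (C)
# Gaussian heat source standing in for the focal HIFU deposition (W/m^3)
Q = 5e5 * np.exp(-((x - L / 2) ** 2) / (2 * 0.002**2))

for _ in range(2000):             # integrate 100 s; boundaries held at 37 C
    lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    T[1:-1] += dt * (alpha * lap
                     - wb * cb / (rho * c) * (T[1:-1] - Ta)
                     + Q[1:-1] / (rho * c))

print(f"peak temperature after 100 s: {T.max():.1f} C")
```

The perfusion term acts as a heat sink that caps the temperature rise, which is why the spatial temperature evolution, not just the deposited power, must be modelled.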

  3. Modelling the temperature evolution of bone under high intensity focused ultrasound

    International Nuclear Information System (INIS)

    Ten Eikelder, H M M; Bošnački, D; Breuer, B J T; Van Wijk, J H; Van Dijk, E V M; Modena, D; Yeo, S Y; Grüll, H; Elevelt, A; Donato, K; Di Tullio, A

    2016-01-01

    Magnetic resonance-guided high intensity focused ultrasound (MR-HIFU) has been clinically shown to be effective for palliative pain management in patients suffering from skeletal metastasis. The underlying mechanism is supposed to be periosteal denervation caused by ablative temperatures reached through ultrasound heating of the cortex. The challenge is exact temperature control during sonication as MR-based thermometry approaches for bone tissue are currently not available. Thus, in contrast to the MR-HIFU ablation of soft tissue, a thermometry feedback to the HIFU is lacking, and the treatment of bone metastasis is entirely based on temperature information acquired in the soft tissue adjacent to the bone surface. However, heating of the adjacent tissue depends on the exact sonication protocol and requires extensive modelling to estimate the actual temperature of the cortex. Here we develop a computational model to calculate the spatial temperature evolution in bone and the adjacent tissue during sonication. First, a ray-tracing technique is used to compute the heat production in each spatial point serving as a source term for the second part, where the actual temperature is calculated as a function of space and time by solving the Pennes bio-heat equation. Importantly, our model includes shear waves that arise at the bone interface as well as all geometrical considerations of transducer and bone geometry. The model was compared with a theoretical approach based on the far field approximation and an MR-HIFU experiment using a bone phantom. Furthermore, we investigated the contribution of shear waves to the heat production and resulting temperatures in bone. The temperature evolution predicted by our model was in accordance with the far field approximation and agreed well with the experimental data obtained in phantoms. Our model allows the simulation of the HIFU treatments of bone metastasis in patients and can be extended to a planning tool prior to MR

  4. On Religion and Language Evolutions Seen Through Mathematical and Agent Based Models

    Science.gov (United States)

    Ausloos, M.

    Religions and languages are social variables, like age, sex, wealth or political opinions, to be studied like any other organizational parameter. In fact, religiosity is one of the most important sociological aspects of populations. Languages are also obvious characteristics of the human species. Religions and languages appear, though they also disappear. All religions and languages evolve, and survive when they adapt to developments in society. On the other hand, the number of adherents of a given religion, or the number of persons speaking a language, is not fixed in time, nor in space. Several questions can be raised. E.g., from a macroscopic point of view: How many religions/languages exist at a given time? What is their distribution? What is their lifetime? How do they evolve? From a "microscopic" viewpoint: can one invent agent-based models to describe macroscopic aspects? Do simple evolution equations exist? How complicated must a model be? These aspects are considered in the present note. Basic evolution equations are outlined and critically, though briefly, discussed. Similarities and differences between religions and languages are summarized. Cases can be illustrated with historical facts and data. It is stressed that the characteristic time scales are different. It is emphasized that "external fields" are historically very relevant in the case of religions, rendering the study more "interesting" within a mechanistic approach based on parity and symmetry of cluster concepts. Yet the modern description of human societies through networks in reported simulations is still lacking some mandatory ingredients, i.e. the non-scalar nature of the nodes, and the non-binary aspects of nodes and links, though for the latter this is already often taken into account, including directions. From an analytical point of view one can consider a population independently of the others. It is intuitively accepted, but also found from the statistical analysis of the frequency distribution, that an

  5. Phase-field modeling of the microstructure evolution and heterogeneous nucleation in solidifying ternary Al–Cu–Ni alloys

    International Nuclear Information System (INIS)

    Kundin, Julia; Pogorelov, Evgeny; Emmerich, Heike

    2015-01-01

    We have investigated the microstructure evolution during the isothermal and non-isothermal solidification of ternary Al–Cu–Ni alloys by means of a general multi-phase-field model for an arbitrary number of phases. The stability requirements for the model functions on every dual interface guarantee the absence of “ghost” phases. The aim was to generate a realistic microstructure by coupling the thermodynamic parameters of the phases and the thermodynamically consistent phase-field evolution equations. It is shown that the specially constructed thermal noise terms disturb the stability on the dual interfaces and can produce heterogeneous nucleation of product phases at energetically favorable points. Similar behavior can be observed in triple junctions where the heterogeneous nucleation of a fourth phase is more favorable. Finally, the model predicts the growth of a combined eutectic-like and peritectic-like structure that is comparable to the observed experimental microstructure in various alloys

  6. Statistical imitation system using relational interest points and Gaussian mixture models

    CSIR Research Space (South Africa)

    Claassens, J

    2009-11-01

    Full Text Available The author proposes an imitation system that uses relational interest points (RIPs) and Gaussian mixture models (GMMs) to characterize a behaviour. The system's structure is inspired by the Robot Programming by Demonstration (RPD) paradigm...

  7. A Process Model of Partnership Evolution Around New IT Initiatives

    Science.gov (United States)

    Kestilä, Timo; Salmivalli, Lauri; Salmela, Hannu; Vahtera, Annukka

    Prior research on inter-organizational information systems has focused primarily on dyadic network relationships, where agreements about information exchange are made between two organizations. The focus of this research is on the processes through which IT decisions are made within larger inter-organizational networks with several network parties. The research draws on network theories in organization science to identify three alternative mechanisms for making network-level commitments: contracts, rules and values. In addition, theoretical concepts are drawn from dynamic network models, which identify different cycles and stages in network evolution. The empirical research was conducted in two networks. The first comprises four municipalities that began collaboration in the deployment of IT in early childhood education (ECE). The second involves a case where several organizations, both private and public, initiated a joint effort to implement a national-level electronic prescription system (EPS). The frameworks and concepts drawn from organizational theories are used to explain the success of the first case and the failure of the second. The paper contributes to prior IOS research by providing a new theory-based framework for the analysis of the early stages of building organizational networks around innovative IT initiatives.

  8. Enrichment of Zinc in Galactic Chemodynamical Evolution Models

    Science.gov (United States)

    Hirai, Yutaka; Saitoh, Takayuki R.; Ishimaru, Yuhri; Wanajo, Shinya

    2018-03-01

    The heaviest iron-peak element, zinc (Zn), has been used as an important tracer of cosmic chemical evolution. Spectroscopic observations of metal-poor stars in Local Group galaxies show an increasing trend of [Zn/Fe] ratios toward lower metallicity. However, the enrichment of Zn in galaxies is not well understood owing to poor knowledge of the astrophysical sites of Zn production, as well as of metal mixing in galaxies. Here we show possible explanations for the observed trend by taking into account electron-capture supernovae (ECSNe) as one of the sources of Zn in our chemodynamical simulations of dwarf galaxies. We find that the ejecta from ECSNe contribute to stars with [Zn/Fe] ≳ 0.5. We also find that the scatter of [Zn/Fe] at higher metallicities originates from the ejecta of type Ia supernovae. On the other hand, it appears difficult to explain the observed trends if we do not consider ECSNe as a source of Zn. These results stem from an inhomogeneous spatial metallicity distribution due to the inefficiency of metal mixing. We find that the optimal value of the scaling factor for the metal diffusion coefficient is ∼0.01 in the shear-based metal mixing model in smoothed particle hydrodynamics simulations. These results suggest that ECSNe could be one of the contributors to the enrichment of Zn in galaxies.

  9. Model of climate evolution based on continental drift and polar wandering

    Science.gov (United States)

    Donn, W. L.; Shaw, D. M.

    1977-01-01

    The thermodynamic meteorologic model of Adem is used to trace the evolution of climate from Triassic to present time by applying it to changing geography as described by continental drift and polar wandering. Results show that the gross changes of climate in the Northern Hemisphere can be fully explained by the strong cooling in high latitudes as continents moved poleward. High-latitude mean temperatures in the Northern Hemisphere dropped below the freezing point 10 to 15 m.y. ago, thereby accounting for the late Cenozoic glacial age. Computed meridional temperature gradients for the Northern Hemisphere steepened from 20 to 40 C over the 200-m.y. period, an effect caused primarily by the high-latitude temperature decrease. The primary result of the work is that the cooling that has occurred since the warm Mesozoic period and has culminated in glaciation is explainable wholly by terrestrial processes.

  10. Nutrient-dependent/pheromone-controlled adaptive evolution: a model

    Directory of Open Access Journals (Sweden)

    James Vaughn Kohl

    2013-06-01

    Full Text Available Background: The prenatal migration of gonadotropin-releasing hormone (GnRH neurosecretory neurons allows nutrients and human pheromones to alter GnRH pulsatility, which modulates the concurrent maturation of the neuroendocrine, reproductive, and central nervous systems, thus influencing the development of ingestive behavior, reproductive sexual behavior, and other behaviors. Methods: This model details how chemical ecology drives adaptive evolution via: (1 ecological niche construction, (2 social niche construction, (3 neurogenic niche construction, and (4 socio-cognitive niche construction. This model exemplifies the epigenetic effects of olfactory/pheromonal conditioning, which alters genetically predisposed, nutrient-dependent, hormone-driven mammalian behavior and choices for pheromones that control reproduction via their effects on luteinizing hormone (LH and systems biology. Results: Nutrients are metabolized to pheromones that condition behavior in the same way that food odors condition behavior associated with food preferences. The epigenetic effects of olfactory/pheromonal input calibrate and standardize molecular mechanisms for genetically predisposed receptor-mediated changes in intracellular signaling and stochastic gene expression in GnRH neurosecretory neurons of brain tissue. For example, glucose and pheromones alter the hypothalamic secretion of GnRH and LH. A form of GnRH associated with sexual orientation in yeasts links control of the feedback loops and developmental processes required for nutrient acquisition, movement, reproduction, and the diversification of species from microbes to man. Conclusion: An environmental drive evolved from that of nutrient ingestion in unicellular organisms to that of pheromone-controlled socialization in insects. 
In mammals, food odors and pheromones cause changes in hormones such as LH, which has developmental effects on pheromone-controlled sexual behavior in nutrient-dependent reproductively

  11. Numerical modelling of the atmospheric mixing-layer diurnal evolution

    International Nuclear Information System (INIS)

    Molnary, L. de.

    1990-03-01

    This paper introduces a numerical procedure to determine the temporal evolution of the height, potential temperature and mixing ratio of the atmospheric mixing layer. The time and spatial derivatives were evaluated via a forward-in-time scheme to predict the local evolution of the mixing-layer parameters, and via a forward-in-time, upstream-in-space scheme to predict the evolution of the mixing layer over a flat region with a one-dimensional advection component. The surface turbulent fluxes of sensible and latent heat were expressed using a simple sine wave that is a function of the hour of the day and the kind of surface (water or land). (author) [pt
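
The forward-in-time, upstream-in-space discretization named in the abstract can be sketched in a few lines. This is a hypothetical first-order upwind illustration with invented values, not the paper's model:

```python
import numpy as np

# Forward-in-time, upstream-in-space (first-order upwind) advection of a
# mixing-layer property theta by a constant positive wind u.
# All values are illustrative, not taken from the paper.
nx, dx = 100, 1000.0          # grid cells, spacing (m)
dt, u = 50.0, 5.0             # time step (s), wind speed (m/s)
cfl = u * dt / dx
assert cfl <= 1.0             # CFL stability condition for the explicit scheme

theta = np.full(nx, 300.0)    # potential temperature (K)
theta[40:60] = 305.0          # warm anomaly to be advected downstream

for _ in range(100):          # 5000 s: the anomaly moves u*dt*100/dx = 25 cells
    # each new value is a convex combination of itself and its upstream neighbour
    theta[1:] -= cfl * (theta[1:] - theta[:-1])   # theta[0] is the fixed inflow
```

Because each update is a convex combination for 0 ≤ CFL ≤ 1, the scheme is monotone: the advected anomaly never overshoots or undershoots its initial bounds, at the cost of some numerical diffusion.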

  12. On the Asymptotic Capacity of Dual-Aperture FSO Systems with a Generalized Pointing Error Model

    KAUST Repository

    Al-Quwaiee, Hessa

    2016-06-28

    Free-space optical (FSO) communication systems are negatively affected by two physical phenomena, namely scintillation due to atmospheric turbulence and pointing errors. To quantify the effect of these two factors on FSO system performance, an effective mathematical model for them is needed. In this paper, we propose and study a generalized pointing error model based on the Beckmann distribution. We then derive a generic expression for the asymptotic capacity of FSO systems under the joint impact of turbulence and generalized pointing error impairments. Finally, the asymptotic channel capacity formula is extended to quantify FSO system performance with selection and switched-and-stay diversity.
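
In the Beckmann pointing-error model, the radial displacement is the norm of two independent, possibly biased, Gaussian jitters. A minimal Monte-Carlo sketch (all parameter values hypothetical, not from the paper):

```python
import math
import random

def beckmann_sample(mx, my, sx, sy, rng=random):
    """Radial pointing displacement r = sqrt(x^2 + y^2), where x and y are
    independent Gaussian horizontal/vertical jitters (the Beckmann model)."""
    x = rng.gauss(mx, sx)
    y = rng.gauss(my, sy)
    return math.hypot(x, y)

# With zero bias and equal jitter the Beckmann model reduces to a Rayleigh
# distribution, whose mean is sigma*sqrt(pi/2) -- a quick simulation check.
random.seed(42)
samples = [beckmann_sample(0.0, 0.0, 1.0, 1.0) for _ in range(20000)]
mean_r = sum(samples) / len(samples)
```

Unequal jitter variances and nonzero boresight biases give the four-parameter generality that makes the Beckmann model attractive for FSO analysis.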

  13. A Matérn model of the spatial covariance structure of point rain rates

    KAUST Repository

    Sun, Ying

    2014-07-15

    It is challenging to model a precipitation field due to its intermittent and highly scale-dependent nature. Many models of point rain rates or areal rainfall observations have been proposed and studied for different time scales. Among them, the spectral model based on a stochastic dynamical equation for the instantaneous point rain rate field is attractive, since it naturally leads to a consistent space–time model. In this paper, we note that the spatial covariance structure of the spectral model is equivalent to the well-known Matérn covariance model. Using high-quality rain gauge data, we estimate the parameters of the Matérn model for different time scales and demonstrate that the Matérn model is superior to an exponential model, particularly at short time scales.
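
For half-integer smoothness the Matérn covariance has simple closed forms. The sketch below (with illustrative parameter values, not the paper's fitted ones) contrasts the exponential member ν = 1/2 with the smoother ν = 3/2 member:

```python
import math

# Closed-form Matern covariances for half-integer smoothness nu.
# sigma2 is the variance, rho the range parameter; defaults are illustrative.

def matern_exponential(h, sigma2=1.0, rho=50.0):
    # nu = 1/2: the exponential covariance model
    return sigma2 * math.exp(-h / rho)

def matern_three_halves(h, sigma2=1.0, rho=50.0):
    # nu = 3/2: a once mean-square differentiable random field
    u = math.sqrt(3.0) * h / rho
    return sigma2 * (1.0 + u) * math.exp(-u)
```

Near the origin the ν = 3/2 covariance falls off like h² rather than h, i.e. the field is smoother at short lags, which is the regime where the abstract reports the Matérn model outperforming the exponential one.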

  14. A Matérn model of the spatial covariance structure of point rain rates

    KAUST Repository

    Sun, Ying; Bowman, Kenneth P.; Genton, Marc G.; Tokay, Ali

    2014-01-01

    It is challenging to model a precipitation field due to its intermittent and highly scale-dependent nature. Many models of point rain rates or areal rainfall observations have been proposed and studied for different time scales. Among them, the spectral model based on a stochastic dynamical equation for the instantaneous point rain rate field is attractive, since it naturally leads to a consistent space–time model. In this paper, we note that the spatial covariance structure of the spectral model is equivalent to the well-known Matérn covariance model. Using high-quality rain gauge data, we estimate the parameters of the Matérn model for different time scales and demonstrate that the Matérn model is superior to an exponential model, particularly at short time scales.

  15. Communicative Modelling of Cultural Transmission and Evolution Through a Holographic Cognition Model

    Directory of Open Access Journals (Sweden)

    Ambjörn Naeve

    2012-12-01

    Full Text Available This article presents communicative ways to model the transmission and evolution of the processes and artefacts of a culture as the result of ongoing interactions between its members, both at the tacit and the explicit level. The purpose is not to model the entire cultural process, but to provide semantically rich “conceptual placeholders” for modelling any cultural activity that is considered important enough within a certain context. The general purpose of communicative modelling is to create models that improve the quality of communication between people. In order to capture the subjective aspects of Gregory Bateson's definition of information as “a difference that makes a difference,” the article introduces a Holographic Cognition Model (HCM) that uses optical holography as an analogy for human cognition, with the object beam of holography corresponding to the first difference (the situation that the cognitive agent encounters), and the reference beam corresponding to the subjective experiences and biases that the agent brings to the situation, which make the second difference (the interference/interpretation pattern) unique for each agent. By combining the HCM with a semantically rich and recursive form of process modelling, based on the SECI theory of knowledge creation, we arrive at a way to model the cultural transmission and evolution process that is consistent with the Unified Theory of Information (the Triple-C model), with its emphasis on intra-, inter- and supra-actions.

  16. Spatial Mixture Modelling for Unobserved Point Processes: Examples in Immunofluorescence Histology.

    Science.gov (United States)

    Ji, Chunlin; Merl, Daniel; Kepler, Thomas B; West, Mike

    2009-12-04

    We discuss Bayesian modelling and computational methods in the analysis of indirectly observed spatial point processes. The context involves noisy measurements on an underlying point process that provide indirect and noisy data on the locations of point outcomes. We are interested in problems in which the spatial intensity function may be highly heterogeneous, and so it is modelled via flexible nonparametric Bayesian mixture models. The analysis aims to estimate the underlying intensity function and the abundance of realized but unobserved points. Our motivating applications involve immunological studies of multiple fluorescent intensity images in sections of lymphatic tissue, where the point processes represent geographical configurations of cells. We are interested in estimating intensity functions and cell abundance for each of a series of such data sets to facilitate comparisons of outcomes at different times and with respect to differing experimental conditions. The analysis is heavily computational, utilizing recently introduced MCMC approaches for spatial point process mixtures and extending them to the broader new context of unobserved outcomes. Further, our example applications are problems in which the individual objects of interest are not simply points, but rather small groups of pixels; this implies a need to work at an aggregate pixel-region level, and we develop the resulting novel methodology for this. Two examples with immunofluorescence histology data demonstrate the models and computational methodology.
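
A common building block in such analyses is simulating a point pattern from a given intensity surface. The sketch below uses Lewis-Shedler thinning to draw from a hypothetical inhomogeneous Poisson intensity on the unit square; the paper's Bayesian mixture machinery is not reproduced, and the intensity function is invented.

```python
import math
import random

def poisson_draw(mean, rng):
    # Knuth's method; adequate for the modest means used here (mean > 0)
    l = math.exp(-mean)
    k, p = 0, 1.0
    while p > l:
        k += 1
        p *= rng.random()
    return k - 1

def thinning_sample(intensity, lam_max, rng=random):
    """Lewis-Shedler thinning on the unit square: propose homogeneous Poisson
    points at rate lam_max, keep each with probability intensity(x, y)/lam_max."""
    pts = []
    for _ in range(poisson_draw(lam_max, rng)):   # unit-square area = 1
        x, y = rng.random(), rng.random()
        if rng.random() < intensity(x, y) / lam_max:
            pts.append((x, y))
    return pts

# Hypothetical bump-shaped intensity, peaking at the centre of the square
intensity = lambda x, y: 100.0 * math.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.02)
random.seed(7)
cells = thinning_sample(intensity, 100.0)
```

Thinning requires only a bound lam_max on the intensity, which makes it a convenient forward simulator when the intensity itself is a flexible mixture estimate.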

  17. Evolution of Randomized Trials in Advanced/Metastatic Soft Tissue Sarcoma: End Point Selection, Surrogacy, and Quality of Reporting.

    Science.gov (United States)

    Zer, Alona; Prince, Rebecca M; Amir, Eitan; Abdul Razak, Albiruni

    2016-05-01

    Randomized controlled trials (RCTs) in soft tissue sarcoma (STS) have used varying end points. The surrogacy of intermediate end points, such as progression-free survival (PFS), response rate (RR), and 3-month and 6-month PFS (3moPFS and 6moPFS), with overall survival (OS) remains unknown. The quality of efficacy and toxicity reporting in these studies is also uncertain. A systematic review of systemic therapy RCTs in STS was performed. Surrogacy between intermediate end points and OS was explored using weighted linear regression of the hazard ratio for OS on the hazard ratio for PFS or the odds ratio for RR, 3moPFS, and 6moPFS. The quality of reporting for efficacy and toxicity was also evaluated. Fifty-two RCTs published between 1974 and 2014, comprising 9,762 patients, met the inclusion criteria. There were significant correlations between PFS and OS (R = 0.61) and between RR and OS (R = 0.51). Conversely, the correlations of 3moPFS and 6moPFS with OS were nonsignificant. A reduction in the use of RR as the primary end point was observed over time, favoring time-based events (P for trend = .02). In 14% of RCTs, the primary end point was not met, but the study was still reported as being positive. Toxicity was comprehensively reported in 47% of RCTs, whereas 14% reported toxicity inadequately. In advanced STS, PFS and RR seem to be appropriate surrogates for OS. There is poor correlation between OS and both 3moPFS and 6moPFS. As such, caution is urged in the use of these as primary end points in randomized STS trials. The quality of toxicity reporting and interpretation of results is suboptimal. © 2016 by American Society of Clinical Oncology.
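
The trial-level surrogacy analysis described above, regressing the treatment effect on OS against the effect on an intermediate end point with trial-size weights, can be sketched as follows. All numbers are invented for illustration, not data from the review:

```python
import numpy as np

# Hypothetical trial-level log hazard ratios (negative favours the experimental
# arm) and trial sizes; none of these numbers are taken from the review.
log_hr_pfs = np.array([-0.40, -0.10, -0.25, 0.05, -0.55, -0.20])
log_hr_os = np.array([-0.30, -0.05, -0.15, 0.10, -0.40, -0.10])
n_patients = np.array([250, 120, 400, 90, 310, 180])

# Weighted least squares: weights proportional to sqrt(trial size), so larger
# trials count more. A slope near 1 with a high R^2 would support surrogacy.
slope, intercept = np.polyfit(log_hr_pfs, log_hr_os, 1, w=np.sqrt(n_patients))

pred = slope * log_hr_pfs + intercept
ss_res = np.sum((log_hr_os - pred) ** 2)
ss_tot = np.sum((log_hr_os - log_hr_os.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
```

Working on the log hazard-ratio scale keeps the effects approximately normally distributed, which is why meta-regressions of this kind are conventionally run on log-transformed ratios.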

  18. One loop beta functions and fixed points in higher derivative sigma models

    International Nuclear Information System (INIS)

    Percacci, Roberto; Zanusso, Omar

    2010-01-01

    We calculate the one loop beta functions of nonlinear sigma models in four dimensions containing general two- and four-derivative terms. In the O(N) model there are four such terms and nontrivial fixed points exist for all N≥4. In the chiral SU(N) models there are in general six couplings, but only five for N=3 and four for N=2; we find fixed points only for N=2, 3. In the approximation considered, the four-derivative couplings are asymptotically free but the coupling in the two-derivative term has a nonzero limit. These results support the hypothesis that certain sigma models may be asymptotically safe.

  19. Some application of the model of partition points on a one-dimensional lattice

    International Nuclear Information System (INIS)

    Mejdani, R.

    1991-07-01

    We have shown that, by using a model of a gas of partition points on a one-dimensional lattice, we can obtain results about enzyme kinetics or the average domain size that we had previously obtained using correlated-walks theory or a probabilistic (combinatoric) approach. We have also discussed the problem of the spread of an infectious disease in relation to the stochastic model of partition points. We think that this model, being very simple and mathematically transparent, can be advantageous for other theoretical investigations in chemistry or modern biology. (author). 14 refs, 6 figs, 1 tab
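
The basic quantity of such a model, the average domain size between partition points on a one-dimensional lattice, can be estimated with a few lines of simulation. This is a hypothetical sketch, not the author's combinatoric derivation:

```python
import random

def mean_domain_size(n_sites=100000, p=0.1, rng=random):
    """Place a partition point on each lattice bond independently with
    probability p; return the average length of the resulting domains.
    For a long lattice the expected value approaches 1/p."""
    cuts = [i for i in range(1, n_sites) if rng.random() < p]
    edges = [0] + cuts + [n_sites]
    return sum(b - a for a, b in zip(edges, edges[1:])) / (len(edges) - 1)
```

With independent partition points the domain lengths are geometrically distributed, which is the transparency the abstract alludes to: the same machinery transfers directly to domain-size and spreading problems.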

  20. Simulating the evolution of industries using a dynamic behavioural model

    OpenAIRE

    Kunc, Martin

    2004-01-01

    Investment decisions determine that not only the evolution of industries is hard to forecast with certainty but also industries may have different dynamic behaviour and evolutionary paths. In this paper we present a behavioural framework to simulate the evolution of industries. Two factors determine the dynamic behaviour of an industry: managerial decision-making and the interconnected set of resources. Managerial decision-making significantly affects the dynamic behaviour of firms. Bounded r...

  1. Benchmark models, planes, lines and points for future SUSY searches at the LHC

    International Nuclear Information System (INIS)

    AbdusSalam, S.S.; Allanach, B.C.; Dreiner, H.K.

    2012-03-01

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  2. Benchmark models, planes, lines and points for future SUSY searches at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    AbdusSalam, S.S. [The Abdus Salam International Centre for Theoretical Physics, Trieste (Italy); Allanach, B.C. [Cambridge Univ. (United Kingdom). Dept. of Applied Mathematics and Theoretical Physics; Dreiner, H.K. [Bonn Univ. (DE). Bethe Center for Theoretical Physics and Physikalisches Inst.] (and others)

    2012-03-15

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  3. Benchmark Models, Planes, Lines and Points for Future SUSY Searches at the LHC

    CERN Document Server

    AbdusSalam, S S; Dreiner, H K; Ellis, J; Ellwanger, U; Gunion, J; Heinemeyer, S; Krämer, M; Mangano, M L; Olive, K A; Rogerson, S; Roszkowski, L; Schlaffer, M; Weiglein, G

    2011-01-01

    We define benchmark models for SUSY searches at the LHC, including the CMSSM, NUHM, mGMSB, mAMSB, MM-AMSB and p19MSSM, as well as models with R-parity violation and the NMSSM. Within the parameter spaces of these models, we propose benchmark subspaces, including planes, lines and points along them. The planes may be useful for presenting results of the experimental searches in different SUSY scenarios, while the specific benchmark points may serve for more detailed detector performance tests and comparisons. We also describe algorithms for defining suitable benchmark points along the proposed lines in the parameter spaces, and we define a few benchmark points motivated by recent fits to existing experimental data.

  4. Model to the evolution of the organic matter in the pampa's soil. Relation with cultivation systems

    International Nuclear Information System (INIS)

    Andriulo, Adrian; Mary, Bruno; Guerif, Jerome; Balesdent, Jerome

    1996-08-01

    The objective of this work is to present a model describing the evolution of organic matter in soils of the Argentine pampa. The model can be used to evaluate the evolution of soil fertility under present agricultural production. Three kinds of assay were carried out. The determination of organic carbon made it possible to test the Henin-Dupuis model and a model derived from it

  5. Robust non-rigid point set registration using student's-t mixture model.

    Directory of Open Access Journals (Sweden)

    Zhiyong Zhou

    Full Text Available The Student's-t mixture model, which is heavy-tailed and more robust than the Gaussian mixture model, has recently received great attention in image processing. In this paper, we propose a robust non-rigid point set registration algorithm using the Student's-t mixture model. Specifically, we first consider the alignment of two point sets as a probability density estimation problem and treat one point set as the Student's-t mixture model centroids. Then, we fit the Student's-t mixture model centroids to the other point set, which is treated as data. Finally, we obtain closed-form solutions for the registration parameters, leading to a computationally efficient registration algorithm. The proposed algorithm is especially effective for addressing the non-rigid point set registration problem when significant amounts of noise and outliers are present. Moreover, fewer registration parameters have to be set manually for our algorithm compared to the popular coherent point drift (CPD) algorithm. We have compared our algorithm with other state-of-the-art registration algorithms on both 2D and 3D data with noise and outliers, where our non-rigid registration algorithm showed accurate results and outperformed the other algorithms.
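
The robustness claim rests on the heavy tails of the Student's-t density: outliers retain non-negligible likelihood instead of dominating the fit. A univariate sketch of that tail behaviour (the registration algorithm itself is not reproduced here):

```python
import math

def student_t_pdf(x, mu=0.0, sigma=1.0, nu=3.0):
    """Univariate Student's-t density with location mu, scale sigma and
    nu degrees of freedom; nu -> infinity recovers the Gaussian."""
    z = (x - mu) / sigma
    log_norm = (math.lgamma((nu + 1.0) / 2.0) - math.lgamma(nu / 2.0)
                - 0.5 * math.log(nu * math.pi) - math.log(sigma))
    return math.exp(log_norm - (nu + 1.0) / 2.0 * math.log1p(z * z / nu))

def gauss_pdf(x, mu=0.0, sigma=1.0):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

# A point 5 scales away is essentially impossible under the Gaussian but still
# plausible under the t -- which is why t-mixture centroids resist outliers.
```

In an EM fit, each datum's contribution is effectively down-weighted by this likelihood ratio, so gross outliers pull the centroids far less than under a Gaussian mixture.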

  6. IMPLICATIONS OF NON-LOCALITY OF TRANSPORT IN GEOMORPHIC TRANSPORT LAWS: HILLSLOPES AND LANDSCAPE EVOLUTION MODELING

    Science.gov (United States)

    Foufoula-Georgiou, E.; Ganti, V. K.; Dietrich, W. E.

    2009-12-01

    Sediment transport on hillslopes can be thought of as a hopping process, in which sediment moves in a series of jumps. A wide range of processes shape hillslopes and can move sediment a large distance in the downslope direction, resulting in a broad tail in the probability density function (PDF) of hopping lengths. Here, we argue that such a broad-tailed distribution calls for a non-local computation of sediment flux, where the flux is not only a function of local topographic quantities but is an integral flux that takes into account the upslope topographic “memory” of the point of interest. We encapsulate this non-local behavior in a simple fractional diffusive model that involves fractional (non-integer) derivatives. We present theoretical predictions from this non-local model and demonstrate a nonlinear dependence of sediment flux on local gradient, consistent with observations. Further, we demonstrate that the non-local model naturally eliminates the scale dependence exhibited by any local (linear or nonlinear) sediment transport model. An extension to a 2-D framework, where the fractional derivative can be cast as a mixture of directional derivatives, is discussed together with the implications of introducing non-locality into existing landscape evolution models.
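
The non-local flux of such a fractional model can be sketched with a Grünwald-Letnikov discretization, in which the derivative at each point is a weighted sum over the whole upslope profile. This is a hypothetical 1-D illustration, not the authors' code:

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k), via the
    standard recurrence w_k = w_{k-1} * (k - 1 - alpha) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def fractional_derivative(z, dx, alpha):
    """Left-sided fractional derivative of order alpha of the profile z:
    every upslope value contributes, which is the non-locality in the text."""
    n = len(z)
    w = gl_weights(alpha, n)
    d = np.empty(n)
    for i in range(n):
        d[i] = np.dot(w[: i + 1], z[i::-1]) / dx**alpha
    return d

# For alpha = 1 the weights collapse to (1, -1, 0, ...) and the operator
# reduces to the usual local backward difference.
```

The recovery of the local difference at α = 1 is what lets a fractional model interpolate continuously between local diffusion and strongly non-local transport.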

  7. Evolution of Scientific Management Towards Performance Measurement and Managing Systems for Sustainable Performance in Industrial Assets: Philosophical Point of View

    Directory of Open Access Journals (Sweden)

    R.M. Chandima Ratnayake

    2009-05-01

    Full Text Available Even though remarkable progress has been made over recent years in the design of performance measurement frameworks and systems, many companies still rely primarily on traditional financial performance measures. This paper presents an overview of the modern descendants and historical antecedents of performance measurement and attempts a philosophical definition, addressing the evolution of traditional ways of measuring performance. The paper suggests that modern frameworks have indeed addressed the conditions external to organizations while satisfying the conditions internal to them, and draws an analogy with the notion of Kuhn's scientific paradigm. This analogy is consistent with the fundamental proposition of the Kuhnian philosophy of science, that progress only happens through successive and abrupt shifts of paradigm.

  8. A simple model for the microstructural evolution of solids under irradiation

    International Nuclear Information System (INIS)

    Valentin, P.P.; Martin, G.

    1982-01-01

    The coupled evolutions of the void and dislocation populations in crystals under high-temperature irradiation are studied by a simple heuristic theoretical approach: rate equations are used for describing both defect concentrations and microstructural variables, and the trajectories of the point representative of the microstructure in the appropriate state space are studied. Qualitatively different microstructural evolutions are found depending on the irradiation flux and temperature. Non-trivial behaviours are revealed such as: transient swelling, effect of cold work on the incubation dose for swelling, large dose divergence of evolutions which looked similar at low dose (which should result in large swelling heterogeneities) and radiation-enhanced sintering. (author)

  9. Extracting valley-ridge lines from point-cloud-based 3D fingerprint models.

    Science.gov (United States)

    Pang, Xufang; Song, Zhan; Xie, Wuyuan

    2013-01-01

    3D fingerprinting is an emerging technology with the distinct advantage of touchless operation. More important, 3D fingerprint models contain more biometric information than traditional 2D fingerprint images. However, current approaches to fingerprint feature detection usually must transform the 3D models to a 2D space through unwrapping or other methods, which might introduce distortions. A new approach directly extracts valley-ridge features from point-cloud-based 3D fingerprint models. It first applies the moving least-squares method to fit a local paraboloid surface and represent the local point cloud area. It then computes the local surface's curvatures and curvature tensors to facilitate detection of the potential valley and ridge points. The approach projects those points to the most likely valley-ridge lines, using statistical means such as covariance analysis and cross correlation. To finally extract the valley-ridge lines, it grows the polylines that approximate the projected feature points and removes the perturbations between the sampled points. Experiments with different 3D fingerprint models demonstrate this approach's feasibility and performance.
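
    The local paraboloid step can be sketched as an ordinary least-squares fit followed by evaluation of the surface curvatures at the query point; the moving least-squares weighting and the later line-growing stages of the actual method are omitted:

```python
import numpy as np

def fit_paraboloid(pts):
    """Least-squares fit of z = a x^2 + b x y + c y^2 + d x + e y + f
    to a local neighbourhood of 3D points centred on the point of interest."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coef

def curvatures(coef):
    """Gaussian (K) and mean (H) curvature at the origin of the fitted patch."""
    a, b, c, d, e, _ = coef
    zx, zy = d, e                       # first derivatives at (0, 0)
    zxx, zxy, zyy = 2 * a, b, 2 * c     # second derivatives at (0, 0)
    g = 1 + zx ** 2 + zy ** 2
    K = (zxx * zyy - zxy ** 2) / g ** 2
    H = ((1 + zy ** 2) * zxx - 2 * zx * zy * zxy
         + (1 + zx ** 2) * zyy) / (2 * g ** 1.5)
    return K, H
```

The signs of K and H then separate valley-like from ridge-like neighbourhoods, which is the classification the extraction step relies on.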

  10. Identifying influential data points in hydrological model calibration and their impact on streamflow predictions

    Science.gov (United States)

    Wright, David; Thyer, Mark; Westra, Seth

    2015-04-01

    Highly influential data points are those that have a disproportionately large impact on model performance, parameters and predictions. However, in current hydrological modelling practice the relative influence of individual data points on hydrological model calibration is not commonly evaluated. This presentation illustrates and evaluates several influence diagnostics tools that hydrological modellers can use to assess the relative influence of data. The feasibility and importance of including influence detection diagnostics as a standard tool in hydrological model calibration is discussed. Two classes of influence diagnostics are evaluated: (1) computationally demanding numerical "case deletion" diagnostics; and (2) computationally efficient analytical diagnostics, based on Cook's distance. These diagnostics are compared against hydrologically orientated diagnostics that describe changes in the model parameters (measured through the Mahalanobis distance), performance (objective function displacement) and predictions (mean and maximum streamflow). These influence diagnostics are applied to two case studies: a stage/discharge rating curve model, and a conceptual rainfall-runoff model (GR4J). Removing a single data point from the calibration resulted in differences to mean flow predictions of up to 6% for the rating curve model, and differences to mean and maximum flow predictions of up to 10% and 17%, respectively, for the hydrological model. When using the Nash-Sutcliffe efficiency in calibration, the computationally cheaper Cook's distance metrics produce similar results to the case-deletion metrics at a fraction of the computational cost. However, Cook's distance is adapted from linear regression, with inherent assumptions about the data, and is therefore less flexible than case deletion. Influential point detection diagnostics show great potential to improve current hydrological modelling practices by identifying highly influential data points. The findings of this
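
    For an ordinary linear model, Cook's distance has a closed form in terms of leverages and residuals, which is why it is so much cheaper than case deletion. A minimal sketch, illustrative rather than the paper's hydrological implementation:

```python
import numpy as np

def cooks_distance(X, y):
    """Cook's distance for each observation in the linear model y = X b + e."""
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T        # hat (projection) matrix
    h = np.diag(H)                              # leverages
    resid = y - H @ y
    s2 = resid @ resid / (n - p)                # residual variance estimate
    return (resid ** 2 / (p * s2)) * h / (1 - h) ** 2
```

A point is influential when it combines a large residual with high leverage; unlike case deletion, nothing is refitted n times.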

  11. Structured Spatio-temporal shot-noise Cox point process models, with a view to modelling forest fires

    DEFF Research Database (Denmark)

    Møller, Jesper; Diaz-Avalos, Carlos

    2010-01-01

    Spatio-temporal Cox point process models with a multiplicative structure for the driving random intensity, incorporating covariate information into temporal and spatial components, and with a residual term modelled by a shot-noise process, are considered. Such models are flexible and tractable fo...... data set consisting of 2796 days and 5834 spatial locations of fires. The model is compared with a spatio-temporal log-Gaussian Cox point process model, and likelihood-based methods are discussed to some extent....

  12. Simulation of ultrasonic surface waves with multi-Gaussian and point source beam models

    International Nuclear Information System (INIS)

    Zhao, Xinyu; Schmerr, Lester W. Jr.; Li, Xiongbing; Sedov, Alexander

    2014-01-01

    In the past decade, multi-Gaussian beam models have been developed to solve many complicated bulk wave propagation problems. However, to date those models have not been extended to simulate the generation of Rayleigh waves. Here we will combine Gaussian beams with an explicit high frequency expression for the Rayleigh wave Green function to produce a three-dimensional multi-Gaussian beam model for the fields radiated from an angle beam transducer mounted on a solid wedge. Simulation results obtained with this model are compared to those of a point source model. It is shown that the multi-Gaussian surface wave beam model agrees well with the point source model while being computationally much more efficient

  13. Modelling of thermal field and point defect dynamics during silicon single crystal growth using CZ technique

    Science.gov (United States)

    Sabanskis, A.; Virbulis, J.

    2018-05-01

    Mathematical modelling is employed to numerically analyse the dynamics of Czochralski (CZ) silicon single crystal growth. The model is axisymmetric; its thermal part describes heat transfer by conduction and thermal radiation and allows prediction of the time-dependent shape of the crystal-melt interface. Besides the thermal field, the point defect dynamics is modelled using the finite element method. The considered process consists of the cone-growth and cylindrical phases, including a short period of reduced crystal pull rate and a power jump to avoid large diameter changes. The influence of thermal stresses on the point defects is also investigated.

  14. A Riccati-Based Interior Point Method for Efficient Model Predictive Control of SISO Systems

    DEFF Research Database (Denmark)

    Hagdrup, Morten; Johansson, Rolf; Bagterp Jørgensen, John

    2017-01-01

    model parts separate. The controller is designed based on the deterministic model, while the Kalman filter results from the stochastic part. The controller is implemented as a primal-dual interior point (IP) method using Riccati recursion and the computational savings possible for SISO systems...
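
    The Riccati recursion at the heart of such a controller can be sketched for the finite-horizon LQR subproblem that an interior point method solves repeatedly; the IP machinery itself and the Kalman filter are omitted, and the system matrices below are illustrative:

```python
import numpy as np

def riccati_recursion(A, B, Q, R, N):
    """Backward Riccati recursion for finite-horizon discrete-time LQR.
    Returns the time-varying feedback gains and the terminal cost-to-go matrix.
    This sweep is the O(N) linear-algebra kernel exploited by Riccati-based
    interior point MPC solvers."""
    P = Q.copy()
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # stage feedback gain
        gains.append(K)
        P = Q + A.T @ P @ (A - B @ K)                      # cost-to-go update
    return gains[::-1], P
```

Because the sweep runs backwards once per horizon, the cost grows linearly with the prediction horizon rather than cubically, which is the efficiency argument for SISO MPC.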

  15. Markov Random Field Restoration of Point Correspondences for Active Shape Modelling

    DEFF Research Database (Denmark)

    Hilger, Klaus Baggesen; Paulsen, Rasmus Reinhold; Larsen, Rasmus

    2004-01-01

    In this paper it is described how to build a statistical shape model using a training set with a sparse set of landmarks. A well defined model mesh is selected and fitted to all shapes in the training set using thin plate spline warping. This is followed by a projection of the points of the warped...

  16. Point vortex modelling of the wake dynamics behind asymmetric vortex generator arrays

    NARCIS (Netherlands)

    Baldacchino, D.; Simao Ferreira, C.; Ragni, D.; van Bussel, G.J.W.

    2016-01-01

    In this work, we present a simple inviscid point vortex model to study the dynamics of asymmetric vortex rows, as might appear behind misaligned vortex generator vanes. Starting from the existing solution of the infinite vortex cascade, a numerical model of four base-vortices is chosen to represent

  17. An application of a discrete fixed point theorem to the Cournot model

    OpenAIRE

    Sato, Junichi

    2008-01-01

    In this paper, we apply a discrete fixed point theorem of [7] to the Cournot model [1]. Then we can deal with the Cournot model where the production of the enterprises is discrete. To handle it, we define a discrete Cournot-Nash equilibrium, and prove its existence.
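
    With discrete production levels, a discrete Cournot-Nash equilibrium can be found by brute-force best-response checking. A toy sketch with two symmetric firms, linear inverse demand and constant marginal cost; the demand and cost values are illustrative assumptions, not from the paper:

```python
from itertools import product

def discrete_cournot_nash(quantities, a=13.0, c=1.0):
    """Brute-force discrete Cournot-Nash equilibria for two symmetric firms.
    Inverse demand P = a - (q1 + q2), constant marginal cost c.
    Ties in the best response are broken towards the smallest quantity."""
    def profit(qi, qj):
        return qi * (a - qi - qj - c)
    equilibria = []
    for q1, q2 in product(quantities, repeat=2):
        br1 = max(quantities, key=lambda q: profit(q, q2))  # firm 1 best response
        br2 = max(quantities, key=lambda q: profit(q, q1))  # firm 2 best response
        if q1 == br1 and q2 == br2:
            equilibria.append((q1, q2))
    return equilibria
```

With these values the continuous Cournot quantity (a - c)/3 = 4 happens to be an integer, so the discrete equilibrium coincides with it.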

  18. Three-dimensional point-cloud room model in room acoustics simulations

    DEFF Research Database (Denmark)

    Markovic, Milos; Olesen, Søren Krarup; Hammershøi, Dorte

    2013-01-01

    acquisition and its representation with a 3D point-cloud model, as well as utilization of such a model for the room acoustics simulations. A room is scanned with a commercially available input device (Kinect for Xbox360) in two different ways; the first one involves the device placed in the middle of the room...... and rotated around the vertical axis while for the second one the device is moved within the room. Benefits of both approaches were analyzed. The device's depth sensor provides a set of points in a three-dimensional coordinate system which represents scanned surfaces of the room interior. These data are used...... to build a 3D point-cloud model of the room. Several models are created to meet requirements of different room acoustics simulation algorithms: plane fitting and uniform voxel grid for geometric methods and triangulation mesh for the numerical methods. Advantages of the proposed method over the traditional...

  19. Three-dimensional point-cloud room model for room acoustics simulations

    DEFF Research Database (Denmark)

    Markovic, Milos; Olesen, Søren Krarup; Hammershøi, Dorte

    2013-01-01

    acquisition and its representation with a 3D point-cloud model, as well as utilization of such a model for the room acoustics simulations. A room is scanned with a commercially available input device (Kinect for Xbox360) in two different ways; the first one involves the device placed in the middle of the room...... and rotated around the vertical axis while for the second one the device is moved within the room. Benefits of both approaches were analyzed. The device's depth sensor provides a set of points in a three-dimensional coordinate system which represents scanned surfaces of the room interior. These data are used...... to build a 3D point-cloud model of the room. Several models are created to meet requirements of different room acoustics simulation algorithms: plane fitting and uniform voxel grid for geometric methods and triangulation mesh for the numerical methods. Advantages of the proposed method over the traditional...

  20. A travel time forecasting model based on change-point detection method

    Science.gov (United States)

    LI, Shupeng; GUANG, Xiaoping; QIAN, Yongsheng; ZENG, Junwei

    2017-06-01

    Travel time parameters obtained from road traffic sensor data play an important role in traffic management practice. In this paper, a travel time forecasting model is proposed for urban road traffic sensor data, based on a change-point detection method. A first-order differencing operation is used to preprocess the actual loop data; a change-point detection algorithm is designed to classify the large number of travel time data items into several patterns; a travel time forecasting model is then established based on the autoregressive integrated moving average (ARIMA) model. In computer simulations, different control parameters are chosen for the adaptive change-point search, which divides the travel time series into several sections of similar state. A linear weight function is then used to fit the travel time sequence and forecast travel time. The results show that the model has high accuracy in travel time forecasting.
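
    The change-point step can be illustrated with the first-order differencing the abstract mentions: flag indices where the differenced series jumps well beyond its typical variation. This sketch uses a robust median/MAD threshold, which is an assumption of this example rather than the paper's exact algorithm; the per-segment ARIMA fitting is omitted:

```python
import numpy as np

def change_points(series, k=3.0):
    """Indices where the first-order difference deviates more than k robust
    standard deviations (1.4826 * MAD) from the median difference."""
    d = np.diff(series)
    med = np.median(d)
    mad = np.median(np.abs(d - med)) or 1e-9   # robust scale, guarded against 0
    return [i + 1 for i in range(len(d)) if abs(d[i] - med) > k * 1.4826 * mad]
```

The flagged indices split the series into sections of similar state, each of which can then be modelled separately.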

  1. Petrologic Modeling of Magmatic Evolution in The Elysium Volcanic Province

    Science.gov (United States)

    Susko, D.; Karunatillake, S.; Hood, D.

    2017-12-01

    The Elysium Volcanic Province (EVP) on Mars is a massive expanse of land made up of many hundreds of lava flows of various ages [1]. The variable surface ages within this volcanic province have distinct elemental compositions based on the derived values from the Gamma Ray Spectrometer (GRS) suite [2]. Without seismic data or ophiolite sequences on Mars, the compositions of lavas on the surface provide some of the only information to study the properties of the interior of the planet. The Amazonian surface age and isolated nature of the EVP in the northern lowlands of Mars make it ideal for analyzing the mantle beneath Elysium during the most recent geologic era on Mars. The MELTS algorithm is one of the most commonly used programs for simulating compositions and mineral phases of basaltic melt crystallization [3]. It has been used extensively for both terrestrial applications [4] and for other planetary bodies [3,5]. The pMELTS calibration of the algorithm allows for higher pressure (10-30 kbar) regimes, and is more appropriate for modeling melt compositions and equilibrium conditions for a source within the martian mantle. We use the pMELTS program to model how partial melting of the martian mantle could evolve magmas into the surface compositions derived from the GRS instrument, and how the mantle beneath Elysium has changed over time. We attribute changes to lithospheric loading by long-term, episodic volcanism within the EVP throughout its history. References: 1. Vaucher, J. et al. The volcanic history of central Elysium Planitia: Implications for martian magmatism. Icarus 204, 418-442 (2009). 2. Susko, D. et al. A record of igneous evolution in Elysium, a major martian volcanic province. Scientific Reports 7, 43177 (2017). 3. El Maarry, M. R. et al. Gamma-ray constraints on the chemical composition of the martian surface in the Tharsis region: A signature of partial melting of the mantle? Journal of Volcanology and Geothermal Research 185, 116-122 (2009). 4. Ding, S. & Dasgupta, R. The

  2. Bayesian Modeling for Identification and Estimation of the Learning Effects of Pointing Tasks

    Science.gov (United States)

    Kyo, Koki

    Recently, in the field of human-computer interaction, a model containing a systematic factor and a human factor has been proposed to evaluate the performance of computer input devices; it is called the SH-model. In this paper, in order to extend the range of application of the SH-model, we propose some new models based on the Box-Cox transformation and apply a Bayesian modeling method for identification and estimation of the learning effects of pointing tasks. We consider the parameters describing the learning effect as random variables and introduce smoothness priors for them. Illustrative results show that the newly proposed models work well.
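
    The Box-Cox transformation underlying the extended models has a simple closed form, with the logarithm as its limit when the power parameter tends to zero:

```python
import numpy as np

def box_cox(x, lam):
    """Box-Cox power transform of positive data x.
    Continuous in lam; reduces to the natural logarithm at lam = 0."""
    x = np.asarray(x, dtype=float)
    if abs(lam) < 1e-12:
        return np.log(x)
    return (x ** lam - 1.0) / lam
```

Choosing lam by likelihood lets the same model family interpolate between linear and logarithmic response scales, which is how the transformation widens the SH-model's range of application.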

  3. Model for a Ferromagnetic Quantum Critical Point in a 1D Kondo Lattice

    Science.gov (United States)

    Komijani, Yashar; Coleman, Piers

    2018-04-01

    Motivated by recent experiments, we study a quasi-one-dimensional model of a Kondo lattice with ferromagnetic coupling between the spins. Using bosonization and dynamical large-N techniques, we establish the presence of a Fermi liquid and a magnetic phase separated by a local quantum critical point, governed by the Kondo breakdown picture. Thermodynamic properties are studied and a gapless charged mode at the quantum critical point is highlighted.

  4. Sigma models in the presence of dynamical point-like defects

    International Nuclear Information System (INIS)

    Doikou, Anastasia; Karaiskos, Nikos

    2013-01-01

    Point-like Liouville integrable dynamical defects are introduced in the context of the Landau–Lifshitz and Principal Chiral (Faddeev–Reshetikhin) models. Based primarily on the underlying quadratic algebra we identify the first local integrals of motion, the associated Lax pairs as well as the relevant sewing conditions around the defect point. The involution of the integrals of motion is shown taking into account the sewing conditions.

  5. Quantifying and Validating Rapid Floodplain Geomorphic Evolution, a Monitoring and Modelling Case Study

    Science.gov (United States)

    Scott, R.; Entwistle, N. S.

    2017-12-01

    Gravel bed rivers and their associated wider systems present an ideal subject for the development and improvement of rapid monitoring tools, with features dynamic enough to evolve within relatively short timescales. For detecting and quantifying topographic evolution, UAV-based remote sensing has emerged as a reliable, low-cost and accurate means of topographic data collection. Here we present validated methodologies for detection of geomorphic change at resolutions down to 0.05 m, building on the work of Wheaton et al. (2009) and Milan et al. (2007), to generate mesh-based and point-cloud comparison data and produce a reliable picture of topographic evolution. Results are presented for the River Glen, Northumberland, UK. Recent channel avulsion and floodplain interaction, resulting in damage to flood defence structures, make this site a particularly suitable case for the application of geomorphic change detection methods, with the UAV platform at its centre. We compare multi-temporal, high-resolution point clouds derived from SfM processing, cross-referenced with aerial LiDAR data, over a 1.5 km reach of the watercourse. Changes detected included bank erosion, bar and splay deposition, vegetation stripping and incipient channel avulsion. Utilisation of the topographic data for numerical modelling, carried out using CAESAR-Lisflood, predicted the avulsion of the main channel, resulting in erosion of and potentially complete circumvention of the original channel and flood levees. A subsequent UAV survey highlighted topographic change and reconfiguration of the local sedimentary conveyor, as predicted by our preliminary modelling. The combined monitoring and modelling approach has allowed probable future geomorphic configurations to be predicted, permitting more informed implementation of channel and floodplain management strategies.
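
    Geomorphic change detection in the spirit of Wheaton et al. (2009) can be sketched as a DEM of Difference with a propagated level of detection: cells whose elevation change lies within the combined survey uncertainty are treated as no detectable change. The survey uncertainties below are illustrative assumptions, not values from this study:

```python
import numpy as np

def dod(dem_new, dem_old, sigma_new=0.03, sigma_old=0.03, t=1.96):
    """DEM of Difference with a propagated level of detection (LoD).
    sigma_* are per-survey elevation uncertainties (m); t = 1.96 gives a
    95% confidence LoD. Sub-LoD cells are zeroed out."""
    diff = dem_new - dem_old
    lod = t * np.hypot(sigma_new, sigma_old)   # propagated elevation uncertainty
    return np.where(np.abs(diff) >= lod, diff, 0.0), lod
```

Negative surviving cells represent erosion and positive cells deposition; multiplying their sums by the cell area yields the volumetric sediment budget.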

  6. An inversion-relaxation approach for sampling stationary points of spin model Hamiltonians

    International Nuclear Information System (INIS)

    Hughes, Ciaran; Mehta, Dhagash; Wales, David J.

    2014-01-01

    Sampling the stationary points of a complicated potential energy landscape is a challenging problem. Here, we introduce a sampling method based on relaxation from stationary points of the highest index of the Hessian matrix. We illustrate how this approach can find all the stationary points for potentials or Hamiltonians bounded from above, which includes a large class of important spin models, and we show that it is far more efficient than previous methods. For potentials unbounded from above, the relaxation part of the method is still efficient in finding minima and transition states, which are usually the primary focus of attention for atomistic systems
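
    Setting the relaxation strategy aside, the basic machinery of locating and classifying stationary points, Newton iteration on the gradient plus the Hessian index (the number of negative eigenvalues), can be sketched for a toy double-well potential; this is a generic illustration, not the authors' method:

```python
import numpy as np

def stationary_points(grad, hess, starts, tol=1e-10, iters=100):
    """Newton iteration on the gradient from several starting points.
    Returns {rounded point: Hessian index}; index 0 is a minimum,
    index 1 a transition state, and so on."""
    found = {}
    for x0 in starts:
        x = np.array(x0, dtype=float)
        for _ in range(iters):
            g = grad(x)
            if np.linalg.norm(g) < tol:
                break
            x = x - np.linalg.solve(hess(x), g)   # Newton step on grad = 0
        if np.linalg.norm(grad(x)) < tol:
            index = int(np.sum(np.linalg.eigvalsh(hess(x)) < 0))
            found[tuple(np.round(x, 6))] = index
    return found
```

For the double well V(x, y) = (x^2 - 1)^2 + y^2 this finds the two minima and the index-1 saddle between them.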

  7. Computational Modeling of Microstructural-Evolution in AISI 1005 Steel During Gas Metal Arc Butt Welding

    Science.gov (United States)

    2013-05-01

    A fully coupled (two-way

  8. Modeling Marek's disease virus transmission: A framework for evaluating the impact of farming practices and evolution

    OpenAIRE

    David A. Kennedy; Patricia A. Dunn; Andrew F. Read

    2018-01-01

    Marek's disease virus (MDV) is a pathogen of chickens whose control has twice been undermined by pathogen evolution. Disease ecology is believed to be the main driver of this evolution, yet mathematical models of MDV disease ecology have never been confronted with data to test their reliability. Here, we develop a suite of MDV models that differ in the ecological mechanisms they include. We fit these models with maximum likelihood using iterated filtering in ‘pomp’ to data on MDV concentratio...

  9. Topological bifurcations in the evolution of coherent structures in a convection model

    DEFF Research Database (Denmark)

    Dam, Magnus; Rasmussen, Jens Juul; Naulin, Volker

    2017-01-01

    Blob filaments are coherent structures in a turbulent plasma flow. Understanding the evolution of these structures is important to improve magnetic plasma confinement. Three state variables describe blob filaments in a plasma convection model. A dynamical systems approach analyzes the evolution...

  10. Elastic-plastic adhesive contact of rough surfaces using n-point asperity model

    International Nuclear Information System (INIS)

    Sahoo, Prasanta; Mitra, Anirban; Saha, Kashinath

    2009-01-01

    This study considers an analysis of the elastic-plastic contact of rough surfaces in the presence of adhesion using an n-point asperity model. The multiple-point asperity model, developed by Hariri et al (2006 Trans ASME: J. Tribol. 128 505-14) is integrated into the elastic-plastic adhesive contact model developed by Roy Chowdhury and Ghosh (1994 Wear 174 9-19). This n-point asperity model differs from the conventional Greenwood and Williamson model (1966 Proc. R. Soc. Lond. A 295 300-19) in considering the asperities not as fixed entities but as those that change through the contact process, and hence it represents the asperities in a more realistic manner. The newly defined adhesion index and plasticity index defined for the n-point asperity model are used to consider the different conditions that arise because of varying load, surface and material parameters. A comparison between the load-separation behaviour of the new model and the conventional one shows a significant difference between the two depending on combinations of mean separation, adhesion index and plasticity index.
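
    The statistical-asperity idea can be illustrated with the conventional Greenwood-Williamson load integral: Hertzian asperity contacts summed over a Gaussian summit-height distribution. Adhesion, plasticity and the n-point refinement are omitted, and all parameter values are illustrative assumptions:

```python
import numpy as np

def gw_load(separation, eta=1e10, R=1e-6, E=1e11, sigma=1e-7, n=20000):
    """Greenwood-Williamson load per unit nominal area at a given mean
    separation: eta = summit density (1/m^2), R = summit radius (m),
    E = effective modulus (Pa), sigma = summit height std dev (m)."""
    z = np.linspace(separation, separation + 8 * sigma, n)
    phi = np.exp(-0.5 * (z / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    w = (z - separation) ** 1.5                 # Hertzian deflection term
    dz = z[1] - z[0]
    return (4.0 / 3.0) * eta * E * np.sqrt(R) * np.sum(w * phi) * dz
```

Only summits higher than the separation contribute, so the load falls off steeply as the surfaces are pulled apart, reproducing the characteristic load-separation behaviour the abstract compares across models.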

  11. A Labeling Model Based on the Region of Movability for Point-Feature Label Placement

    Directory of Open Access Journals (Sweden)

    Lin Li

    2016-09-01

    Full Text Available Automatic point-feature label placement (PFLP) is a fundamental task for map visualization. As the dominant solutions to the PFLP problem, fixed-position and slider models have been widely studied in previous research. However, the candidate labels generated with these models are set to certain fixed positions or a specified track line for sliding. Thus, the whole surrounding space of a point feature is not sufficiently used for labeling. Hence, this paper proposes a novel label model based on the region of movability, which comes from plane collision detection theory. The model defines a complete conflict-free search space for label placement. On the premise of avoiding conflicts with point, line, and area features, the proposed model utilizes the surrounding zone of the point feature to generate candidate label positions. Combined with a heuristic search method, the model achieves high-quality label placement. In addition, the flexibility of the proposed model enables placing arbitrarily shaped labels.

  12. Quantum time evolution of a closed Friedmann model

    CERN Document Server

    Hinterleitner, F

    2002-01-01

    We consider a quantized dust-filled closed Friedmann universe in Ashtekar-type variables. Due to the presence of matter, the 'timelessness problem' of quantum gravity can be solved in this case by using the following approach to the Hamiltonian operator. 1. The arising Wheeler-DeWitt equation appears as an eigenvalue equation for discrete values of the total mass. 2. Its gravitational part is considered as the generator of the time evolution of geometry. 3. Superpositions of different eigenfunctions with time behaviour governed by the corresponding eigenvalues of mass are admitted. Following these lines, a time evolution with a correct classical limit is obtained.

  13. Geophysics and geochemistry intertwined: Modeling the internal evolution of Ceres, Pluto, and Charon

    Science.gov (United States)

    Neveu, Marc; Desch, Steven J.; Castillo-Rogez, Julie C.

    2015-11-01

    Liquid water likely shaped dwarf planet evolution: observations [1,2] and models [3-5] suggest aqueous alteration of silicates or volatiles accreted by these worlds. Driven by thermo-physical settings, aqueous alteration also feeds back on dwarf planet evolution in unconstrained ways. Can rocky dwarf planet cores crack, increasing the water-rock interface? Might radionuclides be leached into fluids, changing the distribution of this chief heat source? What is the fate of antifreezes, on which may hinge long-term liquid persistence? Is volcanism favored or impeded? What are predicted cryomagma compositions? We have modeled silicate core fracturing [6], geochemical equilibria between chondritic rock and aqueous fluids [7], and prerequisites for cryovolcanism [8]. These models, coupled to an evolution code [3], allow us to study geophysics/chemistry feedbacks inside dwarf planets. Ice-rock differentiation, even partial [9,10], yields a rocky, brittle core cracked by thermal stresses; liquid circulation through core cracks transports heat into the ice mantle, yielding runaway melting that quickly ceases once convection cools the mantle to its freezing point [6]. Hot fluids can leach radionuclides at high water:rock ratios (W:R); NH3 antifreeze can turn into NH4-minerals at low W:R [7]. Volatile (chiefly CO) exsolution enables explosive cryovolcanism [8]; this may explain Pluto’s young, CO-rich Tombaugh Regio. Applied to Ceres, such models are consistent with pre-Dawn and Dawn data [11] provided Ceres partially differentiated into a rocky core and muddy mantle [10]. They suggest Ceres’ hydrated surface [2] was emplaced during a 26Al-fueled active phase, and predict its bright spots result from cryovolcanic fluids squeezed by mantle refreezing and effusing through pre-existing subsurface cracks [11]. [1] Cook et al. 2007 ApJ 663:1406; [2] Milliken & Rivkin 2009 Nat Geosc 2:258; [3] Desch et al. 2009 Icarus 202:694; [4] Castillo-Rogez et al. 2010 Icarus 205:443; [5] Robuchon

  14. Modeling and measurement of boiling point elevation during water vaporization from aqueous urea for SCR applications

    International Nuclear Information System (INIS)

    Dan, Ho Jin; Lee, Joon Sik

    2016-01-01

    Understanding water vaporization is the first step in anticipating the conversion of urea into ammonia in the exhaust stream. As aqueous urea is a mixture and the urea in it acts as a non-volatile solute, its colligative properties should be considered during water vaporization. The elevation of the boiling point of urea water solution is measured with respect to urea mole fraction. With the boiling-point elevation relation, a model for water vaporization is proposed, incorporating a correction to the heat of vaporization of water in the urea-water mixture due to the enthalpy of urea dissolution in water. The model is verified by water vaporization experiments as well. Finally, the water vaporization model is applied to the vaporization of water from aqueous urea droplets. It is shown that urea decomposition can begin before water evaporation finishes, owing to the boiling-point elevation.
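
    The colligative baseline for such a model is the ideal boiling-point elevation relation dTb = Kb * m, with urea as a non-dissociating solute (van 't Hoff factor i = 1). This dilute-solution estimate is only a sketch; the measured elevations the paper fits will deviate from it at high urea fractions:

```python
def boiling_point_elevation(x_urea, Kb=0.512, M_water=0.018015):
    """Ideal colligative boiling point elevation (K) of a urea-water solution
    at urea mole fraction x_urea. Kb is the ebullioscopic constant of water
    (K kg/mol); M_water is the molar mass of water (kg/mol)."""
    molality = x_urea / ((1.0 - x_urea) * M_water)  # mol urea per kg water
    return Kb * molality
```

At a urea mole fraction of 0.1 this predicts roughly a 3 K elevation, illustrating why water in aqueous urea keeps boiling at rising temperatures as the droplet concentrates.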

  15. Synthesis of Numerical Methods for Modeling Wave Energy Converter-Point Absorbers: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Li, Y.; Yu, Y. H.

    2012-05-01

    During the past few decades, wave energy has received significant attention among all forms of ocean energy. Industry has proposed hundreds of prototypes such as oscillating water columns, point absorbers, overtopping systems, and bottom-hinged systems. In particular, many researchers have focused on modeling the floating point absorber as the technology to extract wave energy. Several modeling methods have been used, such as the analytical method, the boundary-integral equation method, the Navier-Stokes equations method, and the empirical method. However, no standard method has been agreed upon. To assist the development of wave energy conversion technologies, this report reviews the methods for modeling the floating point absorber.
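
    A common entry point among these methods is a one-degree-of-freedom frequency-domain heave model, in which the mean power absorbed by a linear power-take-off (PTO) damper follows directly from the response amplitude. All coefficients below are illustrative assumptions, not values from the report:

```python
import numpy as np

def absorbed_power(omega, F=1e5, m=1e5, A=5e4, B=2e4, k=4e5, B_pto=2e4):
    """Mean power absorbed by a linear PTO damper in a 1-DOF heave model of a
    floating point absorber. F = excitation force amplitude (N), m = mass,
    A = added mass, B = radiation damping, k = hydrostatic stiffness,
    B_pto = PTO damping; omega = wave frequency (rad/s)."""
    Z = -(m + A) * omega ** 2 + 1j * (B + B_pto) * omega + k  # dynamic stiffness
    X = F / Z                                                 # heave amplitude
    return 0.5 * B_pto * omega ** 2 * np.abs(X) ** 2
```

Power peaks at the heave resonance omega0 = sqrt(k / (m + A)), where choosing B_pto equal to the radiation damping B gives the classical optimum F^2 / (8 B).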

  16. Empiric model for mean generation time adjustment factor for classic point kinetics equations

    Energy Technology Data Exchange (ETDEWEB)

    Goes, David A.B.V. de; Martinez, Aquilino S.; Goncalves, Alessandro da C., E-mail: david.goes@poli.ufrj.br, E-mail: aquilino@lmp.ufrj.br, E-mail: alessandro@con.ufrj.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Departamento de Engenharia Nuclear

    2017-11-01

    Point reactor kinetics equations are the easiest way to observe the neutron production time behavior in a nuclear reactor. These equations are derived from the neutron transport equation using an approximation called Fick's law, leading to a set of first-order differential equations. The main objective of this study is to revisit the classic point kinetics equations in order to bring their results closer to the case in which the time variation of the neutron currents is considered. The computational modeling used for the calculations is based on the finite difference method. The results obtained with this model are compared with the reference model, and an empirical adjustment factor is then determined that modifies the point reactor kinetics equations to match the real scenario. (author)
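
    The classic equations themselves are straightforward to integrate directly; a one-delayed-group sketch with explicit Euler time stepping, using illustrative kinetics constants rather than the paper's adjusted model:

```python
def point_kinetics(rho, t_end=1.0, dt=1e-5, beta=0.0065, lam=0.08, Lambda=1e-4):
    """Classic one-delayed-group point kinetics, explicit Euler integration:
        dn/dt = (rho - beta)/Lambda * n + lam * C
        dC/dt = beta/Lambda * n - lam * C
    Starts from the equilibrium steady state at n = 1. beta = delayed neutron
    fraction, lam = precursor decay constant (1/s), Lambda = generation time (s)."""
    n = 1.0
    C = beta / (lam * Lambda)        # precursor level with dC/dt = 0 at n = 1
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / Lambda * n + lam * C) * dt
        dC = (beta / Lambda * n - lam * C) * dt
        n += dn
        C += dC
    return n
```

A positive reactivity step shows the prompt jump followed by slow growth on the stable period; a negative step shows the corresponding decay, which is the time behavior the adjustment factor is calibrated against.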

  17. Empiric model for mean generation time adjustment factor for classic point kinetics equations

    International Nuclear Information System (INIS)

    Goes, David A.B.V. de; Martinez, Aquilino S.; Goncalves, Alessandro da C.

    2017-01-01

    Point reactor kinetics equations are the easiest way to observe the neutron production time behavior in a nuclear reactor. These equations are derived from the neutron transport equation using an approximation called Fick's law, leading to a set of first-order differential equations. The main objective of this study is to revisit the classic point kinetics equations in order to bring their results closer to the case in which the time variation of the neutron currents is considered. The computational modeling used for the calculations is based on the finite difference method. The results obtained with this model are compared with the reference model, and an empirical adjustment factor is then determined that modifies the point reactor kinetics equations to match the real scenario. (author)

  18. Improving the Pattern Reproducibility of Multiple-Point-Based Prior Models Using Frequency Matching

    DEFF Research Database (Denmark)

    Cordua, Knud Skou; Hansen, Thomas Mejer; Mosegaard, Klaus

    2014-01-01

    Some multiple-point-based sampling algorithms, such as the snesim algorithm, rely on sequential simulation. The conditional probability distributions that are used for the simulation are based on statistics of multiple-point data events obtained from a training image. During the simulation, data events with zero probability in the training image statistics may occur. This is handled by pruning the set of conditioning data until an event with non-zero probability is found. The resulting probability distribution sampled by such algorithms is a pruned mixture model. The pruning strategy leads to a probability distribution that lacks some of the information provided by the multiple-point statistics from the training image, which reduces the reproducibility of the training image patterns in the outcome realizations. When pruned mixture models are used as prior models for inverse problems, local re…

  19. Tricritical point in quantum phase transitions of the Coleman–Weinberg model at Higgs mass

    International Nuclear Information System (INIS)

    Fiolhais, Miguel C.N.; Kleinert, Hagen

    2013-01-01

    The tricritical point, which separates first- and second-order phase transitions in three-dimensional superconductors, is studied in the four-dimensional Coleman–Weinberg model, and the similarities as well as the differences with respect to the three-dimensional result are exhibited. The position of the tricritical point in the Coleman–Weinberg model is derived and found to be in agreement with the Thomas–Fermi approximation in the three-dimensional Ginzburg–Landau theory. From this we deduce a special role of the tricritical point for the Standard Model Higgs sector in light of the latest experimental results, which suggests the unexpected relevance of tricritical behavior in the electroweak interactions.

  20. Modeling and measurement of boiling point elevation during water vaporization from aqueous urea for SCR applications

    Energy Technology Data Exchange (ETDEWEB)

    Dan, Ho Jin; Lee, Joon Sik [Seoul National University, Seoul (Korea, Republic of)

    2016-03-15

    Understanding water vaporization is the first step toward anticipating the conversion of urea into ammonia in the exhaust stream. As aqueous urea is a mixture and the urea in the mixture acts as a non-volatile solute, its colligative properties should be considered during water vaporization. The elevation of the boiling point for urea-water solution is measured with respect to urea mole fraction. With the boiling-point elevation relation, a model for water vaporization is proposed, underlining the correction of the heat of vaporization of water in the urea-water mixture due to the enthalpy of urea dissolution in water. The model is verified against water vaporization experiments as well. Finally, the water vaporization model is applied to the vaporization of water from aqueous urea droplets. It is shown that urea decomposition can begin before water evaporation finishes, due to the boiling-point elevation.
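The colligative effect the abstract refers to can be sketched with the ideal-solution ebullioscopic relation. The ebullioscopic constant of water and the mole-fraction-to-molality conversion below are standard values; the elevation measured in the paper will deviate from this ideal estimate, which is exactly why an empirical relation is fitted there.

```python
KB_WATER = 0.512      # K·kg/mol, ebullioscopic constant of water
M_WATER = 0.018015    # kg/mol, molar mass of water

def boiling_point_elevation(x_urea):
    """Ideal-solution boiling-point elevation for a given urea mole fraction:
    convert mole fraction to molality, then apply dTb = Kb * m."""
    molality = x_urea / ((1.0 - x_urea) * M_WATER)  # mol urea / kg water
    return KB_WATER * molality                      # dTb in kelvin

for x in (0.05, 0.2, 0.4):
    print(f"x = {x:.2f}: dTb = {boiling_point_elevation(x):.1f} K")
```

Even this ideal estimate shows the elevation growing steeply with urea fraction, consistent with the paper's finding that urea decomposition can start before the water is gone.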

  1. Supervised Outlier Detection in Large-Scale Mvs Point Clouds for 3d City Modeling Applications

    Science.gov (United States)

    Stucker, C.; Richard, A.; Wegner, J. D.; Schindler, K.

    2018-05-01

    We propose to use a discriminative classifier for outlier detection in large-scale point clouds of cities generated via multi-view stereo (MVS) from densely acquired images. What makes outlier removal hard are varying distributions of inliers and outliers across a scene. Heuristic outlier removal using a specific feature that encodes point distribution often delivers unsatisfying results. Although most outliers can be identified correctly (high recall), many inliers are erroneously removed (low precision), too. This hampers 3D object reconstruction due to missing data. We thus propose to discriminatively learn class-specific distributions directly from the data to achieve high precision. We apply a standard Random Forest classifier that infers a binary label (inlier or outlier) for each 3D point in the raw, unfiltered point cloud and test two approaches for training. In the first, non-semantic approach, features are extracted without considering the semantic interpretation of the 3D points. The trained model approximates the average distribution of inliers and outliers across all semantic classes. Second, semantic interpretation is incorporated into the learning process, i.e. we train separate inlier-outlier classifiers per semantic class (building facades, roof, ground, vegetation, fields, and water). Performance of learned filtering is evaluated on several large SfM point clouds of cities. We find that results confirm our underlying assumption that discriminatively learning inlier-outlier distributions does improve precision over global heuristics by up to ≈ 12 percentage points. Moreover, semantically informed filtering that models class-specific distributions further improves precision by up to ≈ 10 percentage points, being able to remove very isolated building, roof, and water points while preserving inliers on building facades and vegetation.
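A minimal sketch of the non-semantic variant, assuming per-point geometric features have already been extracted. Synthetic two-feature data stand in for the MVS point cloud, and scikit-learn's RandomForestClassifier stands in for the paper's classifier; the semantic variant would simply train one such model per class.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000
# Toy per-point features (e.g. local density, mean neighbor distance):
# inliers cluster in one region of feature space, outliers in another.
X_in = rng.normal([5.0, 0.5], 0.5, size=(n, 2))
X_out = rng.normal([1.0, 3.0], 0.5, size=(n, 2))
X = np.vstack([X_in, X_out])
y = np.array([1] * n + [0] * n)   # 1 = inlier, 0 = outlier

# Train a binary inlier/outlier classifier on the labeled points.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.score(X, y))
```

In the semantic setting, the same pipeline would be repeated with a separate `clf` per class (facade, roof, ground, ...), applied only to points of that class.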

  2. The Jukes-Cantor Model of Molecular Evolution

    Science.gov (United States)

    Erickson, Keith

    2010-01-01

    The material in this module introduces students to some of the mathematical tools used to examine molecular evolution. This topic is standard fare in many mathematical biology or bioinformatics classes, but could also be suitable for classes in linear algebra or probability. While coursework in matrix algebra, Markov processes, Monte Carlo…
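The substitution matrix at the heart of the Jukes-Cantor model has a standard closed form (every substitution equally likely at rate alpha), which makes for a compact classroom example of the Markov-process machinery the module covers:

```python
import numpy as np

def jc_matrix(alpha, t):
    """Jukes-Cantor transition probability matrix after time t:
    P(same) = 1/4 + 3/4 * exp(-4*alpha*t),
    P(diff) = 1/4 - 1/4 * exp(-4*alpha*t) for each of the 3 other bases."""
    same = 0.25 + 0.75 * np.exp(-4.0 * alpha * t)
    diff = 0.25 - 0.25 * np.exp(-4.0 * alpha * t)
    P = np.full((4, 4), diff)
    np.fill_diagonal(P, same)
    return P

P = jc_matrix(alpha=0.01, t=10.0)
print(P[0, 0])          # probability a base is unchanged after time t
print(P.sum(axis=1))    # each row sums to 1 (a valid stochastic matrix)
```

As t grows, every entry tends to 1/4: the uniform stationary distribution over the four bases.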

  3. Modeling Wind Wave Evolution from Deep to Shallow Water

    Science.gov (United States)

    2014-09-30

    W.H. Hui, 1979; Nonlinear energy transfer in narrow gravity wave spectrum. Proc. Roy. Soc. London A368, 239–265. Gagnaire-Renou, E., M. Benoit, and P… at the 2013 WISE meeting, Camp Springs, MA, USA. Smit, P.B. and T.T. Janssen, 2013; The evolution of inhomogeneous wave statistics through a…

  4. A 3D Printing Model Watermarking Algorithm Based on 3D Slicing and Feature Points

    Directory of Open Access Journals (Sweden)

    Giao N. Pham

    2018-02-01

    Full Text Available With the increase of three-dimensional (3D) printing applications in many areas of life, a large amount of 3D printing data is copied, shared, and used several times without any permission from the original providers. Therefore, copyright protection and ownership identification for 3D printing data in communications or commercial transactions are practical issues. This paper presents a novel watermarking algorithm for 3D printing models based on embedding watermark data into the feature points of a 3D printing model. Feature points are determined and computed by the 3D slicing process along the Z axis of a 3D printing model. The watermark data is embedded into a feature point of a 3D printing model by changing the vector length of the feature point in OXY space based on the reference length. The x and y coordinates of the feature point will then be changed according to the changed vector length that has been embedded with a watermark. Experimental results verified that the proposed algorithm is invisible and robust to geometric attacks, such as rotation, scaling, and translation. The proposed algorithm provides a better method than conventional works, and its accuracy is much higher than that of previous methods.
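A hedged sketch of the embedding step: a bit is written into the planar (x, y) vector length of a feature point relative to a reference length, keeping the point's direction. The quantization step `delta` and the even/odd convention are illustrative assumptions, not the paper's exact scheme.

```python
import math

def embed_bit(x, y, bit, ref_len, delta=0.01):
    """Quantize the XY vector length to an even (bit 0) or odd (bit 1)
    multiple of delta away from the reference length, then rescale the
    coordinates so the direction is preserved."""
    length = math.hypot(x, y)
    k = round((length - ref_len) / delta)
    if k % 2 != bit:
        k += 1                       # nudge parity to encode the bit
    new_len = ref_len + k * delta
    s = new_len / length
    return x * s, y * s

x2, y2 = embed_bit(3.0, 4.0, bit=1, ref_len=4.9)
print(math.hypot(x2, y2))   # length moved to an odd multiple of delta
```

Extraction then reverses the step: recompute the vector length, quantize against the same reference length, and read the parity. Because only lengths relative to a reference are used, the scheme is unchanged under rotation and translation, matching the robustness claims above.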

  5. Pseudo-critical point in anomalous phase diagrams of simple plasma models

    International Nuclear Information System (INIS)

    Chigvintsev, A Yu; Iosilevskiy, I L; Noginova, L Yu

    2016-01-01

    Anomalous phase diagrams in a subclass of simplified (“non-associative”) Coulomb models are under discussion. The common feature of this subclass is the absence, by definition, of individual correlations for charges of opposite sign. Examples are the modified OCP of ions on a uniformly compressible background of an ideal Fermi gas of electrons, OCP(∼), or a superposition of two non-ideal OCP(∼) models of ions and electrons, etc. In contrast to the ordinary OCP model on a non-compressible (“rigid”) background, OCP(#), two new phase transitions with an upper critical point, boiling and sublimation, appear in the OCP(∼) phase diagram in addition to the well-known Wigner crystallization. The point is that the topology of the phase diagram in OCP(∼) becomes anomalous at high enough values of the ionic charge number Z. Namely, only one unified crystal-fluid phase transition without a critical point exists, as a continuous superposition of melting and sublimation, in OCP(∼) in the interval (Z1 < Z < Z2). Most remarkable is the appearance of pseudo-critical points at both boundary values Z = Z1 ≈ 35.5 and Z = Z2 ≈ 40.0. It should be stressed that the critical isotherm is exactly cubic at both of these pseudo-critical points. In this study we have improved our previous calculations and utilized a more complete equation of state for the model components, provided by Chabrier and Potekhin (1998 Phys. Rev. E 58 4941). (paper)

  6. Pseudo-critical point in anomalous phase diagrams of simple plasma models

    Science.gov (United States)

    Chigvintsev, A. Yu; Iosilevskiy, I. L.; Noginova, L. Yu

    2016-11-01

    Anomalous phase diagrams in a subclass of simplified (“non-associative”) Coulomb models are under discussion. The common feature of this subclass is the absence, by definition, of individual correlations for charges of opposite sign. Examples are the modified OCP of ions on a uniformly compressible background of an ideal Fermi gas of electrons, OCP(∼), or a superposition of two non-ideal OCP(∼) models of ions and electrons, etc. In contrast to the ordinary OCP model on a non-compressible (“rigid”) background, OCP(#), two new phase transitions with an upper critical point, boiling and sublimation, appear in the OCP(∼) phase diagram in addition to the well-known Wigner crystallization. The point is that the topology of the phase diagram in OCP(∼) becomes anomalous at high enough values of the ionic charge number Z. Namely, only one unified crystal-fluid phase transition without a critical point exists, as a continuous superposition of melting and sublimation, in OCP(∼) in the interval (Z1 < Z < Z2). Most remarkable is the appearance of pseudo-critical points at both boundary values Z = Z1 ≈ 35.5 and Z = Z2 ≈ 40.0. It should be stressed that the critical isotherm is exactly cubic at both of these pseudo-critical points. In this study we have improved our previous calculations and utilized a more complete equation of state for the model components, provided by Chabrier and Potekhin (1998 Phys. Rev. E 58 4941).

  7. Structure Based Thermostability Prediction Models for Protein Single Point Mutations with Machine Learning Tools.

    Directory of Open Access Journals (Sweden)

    Lei Jia

    Full Text Available The thermostability of protein point mutations is a common concern in protein engineering. An application that predicts the thermostability of mutants can help guide the decision-making process in protein design via mutagenesis. An in silico point mutation scanning method is frequently used to find "hot spots" in proteins for focused mutagenesis. ProTherm (http://gibk26.bio.kyutech.ac.jp/jouhou/Protherm/protherm.html) is a public database that contains experimentally measured thermostability data for thousands of protein mutants. Two data sets based on two differently measured thermostability properties of protein single point mutations, namely the unfolding free energy change (ddG) and the melting temperature change (dTm), were obtained from this database. Folding free energy changes calculated with Rosetta, structural information about the point mutations, and amino acid physical properties were used to build thermostability prediction models with informatics modeling tools. Five supervised machine learning methods (support vector machine, random forests, artificial neural network, naïve Bayes classifier, K nearest neighbor) and partial least squares regression were used to build the prediction models. Binary and ternary classification as well as regression models were built and evaluated. Data set redundancy and balancing, the reverse mutations technique, feature selection, and comparison to other published methods are discussed. The Rosetta-calculated folding free energy change ranked as the most influential feature in all prediction models. Other descriptors also made significant contributions to the accuracy of the prediction models.

  8. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Youshan, E-mail: ysliu@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); Teng, Jiwen, E-mail: jwteng@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); Xu, Tao, E-mail: xutao@mail.iggcas.ac.cn [State Key Laboratory of Lithospheric Evolution, Institute of Geology and Geophysics, Chinese Academy of Sciences, Beijing, 100029 (China); CAS Center for Excellence in Tibetan Plateau Earth Sciences, Beijing, 100101 (China); Badal, José, E-mail: badal@unizar.es [Physics of the Earth, Sciences B, University of Zaragoza, Pedro Cerbuna 12, 50009 Zaragoza (Spain)

    2017-05-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant–Friedrichs–Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational

  9. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    International Nuclear Information System (INIS)

    Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José

    2017-01-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant–Friedrichs–Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational

  10. Reduction of bias in neutron multiplicity assay using a weighted point model

    Energy Technology Data Exchange (ETDEWEB)

    Geist, W. H. (William H.); Krick, M. S. (Merlyn S.); Mayo, D. R. (Douglas R.)

    2004-01-01

    Accurate assay of most common plutonium samples was the development goal for the nondestructive assay technique of neutron multiplicity counting. Over the past 20 years the technique has been proven for relatively pure oxides and small metal items. Unfortunately, the technique results in large biases when assaying large metal items. Limiting assumptions in the point model used to derive the multiplicity equations, such as uniform multiplication, cause these biases for large dense items. A weighted point model has been developed to overcome some of the limitations of the standard point model. Weighting factors are determined from Monte Carlo calculations using the MCNPX code. Monte Carlo calculations give the dependence of the weighting factors on sample mass and geometry, and simulated assays using Monte Carlo give the theoretical accuracy of the weighted-point-model assay. Measured multiplicity data evaluated with both the standard and weighted point models are compared to reference values to give the experimental accuracy of the assay. Initial results show significant promise for the weighted point model in reducing or eliminating biases in the neutron multiplicity assay of metal items. The negative biases observed in the assay of plutonium metal samples are caused by variations in the neutron multiplication for neutrons originating in various locations in the sample. The bias depends on the mass and shape of the sample and on the amount and energy distribution of the (α,n) neutrons in the sample. When the standard point model is used, this variable-multiplication bias overestimates the multiplication and alpha values of the sample and underestimates the plutonium mass. The weighted point model can potentially provide assay accuracy of ∼2% (1 σ) for cylindrical plutonium metal samples < 4 kg with α < 1 without knowing the exact shape of the samples, provided that the (α,n) source is uniformly distributed throughout the

  11. Maximum Power Point Tracking Control of Photovoltaic Systems: A Polynomial Fuzzy Model-Based Approach

    DEFF Research Database (Denmark)

    Rakhshan, Mohsen; Vafamand, Navid; Khooban, Mohammad Hassan

    2018-01-01

    This paper introduces a polynomial fuzzy model (PFM)-based maximum power point tracking (MPPT) control approach to increase the performance and efficiency of solar photovoltaic (PV) electricity generation. The proposed method relies on a polynomial fuzzy modeling, a polynomial parallel…, a direct maximum power (DMP)-based control structure is considered for MPPT. Using the PFM representation, the DMP-based control structure is formulated in terms of SOS conditions. Unlike the conventional approaches, the proposed approach does not require exploring the maximum power operational point…
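For contrast with the model-based scheme above, conventional MPPT is commonly done by perturb and observe (P&O), which does search for the maximum power operating point online. A minimal sketch on a toy power-voltage curve; both the curve and the step size are illustrative assumptions.

```python
def pv_power(v):
    """Toy concave P-V curve with its maximum power point at 30 V."""
    return -(v - 30.0) ** 2 + 200.0

def perturb_and_observe(v0=20.0, step=0.5, iters=100):
    """Classic P&O loop: perturb the operating voltage, keep the direction
    while power rises, reverse it when power falls."""
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:             # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v

print(perturb_and_observe())   # settles into oscillation around 30 V
```

The steady-state oscillation around the peak is exactly the drawback that model-based controllers such as the PFM approach aim to avoid.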

  12. Modeling the time evolution of the nanoparticle-protein corona in a body fluid.

    Directory of Open Access Journals (Sweden)

    Daniele Dell'Orco

    Full Text Available BACKGROUND: Nanoparticles in contact with biological fluids interact with proteins and other biomolecules, thus forming a dynamic corona whose composition varies over time due to continuous protein association and dissociation events. Eventually equilibrium is reached, at which point the continued exchange will not affect the composition of the corona. RESULTS: We developed a simple and effective dynamic model of the nanoparticle protein corona in a body fluid, namely human plasma. The model predicts the time evolution and equilibrium composition of the corona based on affinities, stoichiometries and rate constants. An application to the interaction of human serum albumin, high density lipoprotein (HDL and fibrinogen with 70 nm N-iso-propylacrylamide/N-tert-butylacrylamide copolymer nanoparticles is presented, including novel experimental data for HDL. CONCLUSIONS: The simple model presented here can easily be modified to mimic the interaction of the nanoparticle protein corona with a novel biological fluid or compartment once new data will be available, thus opening novel applications in nanotoxicity and nanomedicine.
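The association/dissociation kinetics underlying such a corona model can be sketched for a single protein species binding nanoparticle surface sites; the rate constants and concentration below are illustrative, not the paper's fitted values.

```python
def corona_coverage(k_on, k_off, conc, t_end=100.0, dt=0.001):
    """Fractional surface coverage theta(t) by explicit Euler integration of
        d(theta)/dt = k_on * conc * (1 - theta) - k_off * theta,
    i.e. association onto free sites balanced against dissociation."""
    theta = 0.0
    for _ in range(int(t_end / dt)):
        theta += (k_on * conc * (1.0 - theta) - k_off * theta) * dt
    return theta

# Units: k_on in 1/(M*s), k_off in 1/s, conc in M.
theta_eq = corona_coverage(k_on=1e5, k_off=0.1, conc=1e-6)
print(theta_eq)   # approaches k_on*c / (k_on*c + k_off) = 0.5
```

The full model couples one such equation per protein species competing for the same sites, which is what produces the time-varying corona composition described above.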

  13. A mathematical model for hydrogen evolution in an electrochemical cell and experimental validation

    International Nuclear Information System (INIS)

    Mahmut D Mat; Yuksel Kaplan; Beycan Ibrahimoglu; Nejat Veziroglu; Rafig Alibeyli; Sadiq Kuliyev

    2006-01-01

    Electrochemical reactions are widely employed in various industrial areas such as hydrogen production, the chlorate process, electroplating, metal purification, etc. Most of these processes take place with gas evolution on the electrodes. The presence of a gas phase in the liquid makes the problem a two-phase flow problem, about which much knowledge is available from heat transfer and fluid mechanics studies. The motivation of this study is to investigate hydrogen release in an electrolysis process from a two-phase flow point of view and to examine the effect of gas release on the electrolysis process. Hydrogen evolution, the flow field and the current density distribution in an electrochemical cell are investigated with a two-phase flow model. The mathematical model involves solutions of transport equations for the variables of each phase with allowance for interphase transfer of mass and momentum. An experimental set-up was established to collect data to validate and improve the mathematical model. Void fraction is determined from measurements of resistivity changes in the system due to the presence of bubbles. A good agreement is obtained between numerical results and experimental data. (authors)

  14. Accurate corresponding point search using sphere-attribute-image for statistical bone model generation

    International Nuclear Information System (INIS)

    Saito, Toki; Nakajima, Yoshikazu; Sugita, Naohiko; Mitsuishi, Mamoru; Hashizume, Hiroyuki; Kuramoto, Kouichi; Nakashima, Yosio

    2011-01-01

    Statistical-deformable-model-based two-dimensional/three-dimensional (2-D/3-D) registration is a promising method for estimating the position and shape of patient bone in the surgical space. Since its accuracy depends on the capacity of the statistical model, we propose a method for accurately generating a statistical bone model from a CT volume. Our method employs the Sphere-Attribute-Image (SAI) and improves the accuracy of the corresponding point search in statistical model generation. First, target bone surfaces are extracted as SAIs from the CT volume. Then the textures of the SAIs are classified into regions using the Maximally-stable-extremal-regions method. Next, corresponding regions are determined using normalized cross-correlation (NCC). Finally, corresponding points in each corresponding region are determined using NCC. Our method was applied to femur bone models and worked well in the experiments. (author)
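Normalized cross-correlation, used above for both the region and the point matching steps, is straightforward to implement for two equally sized patches:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two same-shaped arrays:
    subtract each patch's mean, then divide the dot product by the
    product of the patch norms. Ranges from -1 to 1."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom)

patch = np.array([[1.0, 2.0], [3.0, 4.0]])
print(ncc(patch, patch))             # identical patches correlate at 1
print(ncc(patch, 2.0 * patch + 5))   # invariant to gain and offset
```

The gain/offset invariance is what makes NCC a robust similarity score for matching texture regions across different SAIs; note that constant patches (zero variance) would need a guard against the zero denominator.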

  15. Hierarchical model generation for architecture reconstruction using laser-scanned point clouds

    Science.gov (United States)

    Ning, Xiaojuan; Wang, Yinghui; Zhang, Xiaopeng

    2014-06-01

    Architecture reconstruction using a terrestrial laser scanner is a prevalent and challenging research topic. We introduce an automatic, hierarchical architecture generation framework to produce the full geometry of architecture based on a novel combination of facade structure detection, detailed window propagation, and hierarchical model consolidation. Our method highlights the automatic generation of geometric models fitting the design information of the architecture from sparse, incomplete, and noisy point clouds. First, the planar regions detected in raw point clouds are interpreted as three-dimensional clusters. Then the boundary of each region, extracted by projecting its points into the corresponding two-dimensional plane, is classified to obtain detailed shape structure elements (e.g., windows and doors). Finally, a polyhedron model is generated by calculating the proposed local structure model, consolidated structure model, and detailed window model. Experiments on modeling scanned real-life buildings demonstrate the advantages of our method, in which the reconstructed models not only correspond accurately to the information of the architectural design, but also satisfy the requirements for visualization and analysis.

  16. A Spatio-Temporal Enhanced Metadata Model for Interdisciplinary Instant Point Observations in Smart Cities

    Directory of Open Access Journals (Sweden)

    Nengcheng Chen

    2017-02-01

    Full Text Available Due to the incomprehensive and inconsistent description of spatial and temporal information for city data observed by sensors in various fields, it is a great challenge to share the massive, multi-source and heterogeneous interdisciplinary instant point observation data resources. In this paper, a spatio-temporal enhanced metadata model for point observation data sharing is proposed. The proposed Data Meta-Model (DMM) focuses on the spatio-temporal characteristics and formulates a ten-tuple information description structure to provide a unified and spatio-temporally enhanced description of the point observation data. To verify the feasibility of point observation data sharing based on DMM, a prototype system was established, and the performance improvement of the Sensor Observation Service (SOS) for the instant access and insertion of point observation data was realized through the proposed MongoSOS, a Not Only SQL (NoSQL) SOS based on the MongoDB database with distributed storage capability. For example, the response time for the access and insertion of navigation and positioning data reaches the millisecond level. Case studies were conducted, including gas concentration monitoring for gas leak emergency response and smart city public vehicle monitoring based on the BeiDou Navigation Satellite System (BDS) for recording dynamic observation information. The results demonstrate the versatility and extensibility of the DMM, and the spatio-temporally enhanced sharing of interdisciplinary instant point observations in smart cities.

  17. Boiling points of halogenated ethanes: an explanatory model implicating weak intermolecular hydrogen-halogen bonding.

    Science.gov (United States)

    Beauchamp, Guy

    2008-10-23

    This study explores via structural clues the influence of weak intermolecular hydrogen-halogen bonds on the boiling point of halogenated ethanes. The plot of boiling points of 86 halogenated ethanes versus the molar refraction (linked to polarizability) reveals a series of straight lines, each corresponding to one of nine possible arrangements of hydrogen and halogen atoms on the two-carbon skeleton. A multiple linear regression model of the boiling points could be designed based on molar refraction and subgroup structure as independent variables (R² = 0.995, standard error of boiling point 4.2 °C). The model is discussed in view of the fact that molar refraction can account for approximately 83.0% of the observed variation in boiling point, while 16.5% could be ascribed to weak C-X...H-C intermolecular interactions. The difference in the observed boiling point of molecules having similar molar refraction values but differing in hydrogen-halogen intermolecular bonds can reach as much as 90 °C.
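The regression design described, a common slope for molar refraction plus one intercept per hydrogen/halogen subgroup, can be sketched with ordinary least squares; the synthetic data below stand in for the 86 measured ethanes, with invented offsets and slope.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_groups = 90, 3
mr = rng.uniform(10.0, 40.0, n)              # molar refraction values
group = rng.integers(0, n_groups, n)         # subgroup label per molecule
true_offsets = np.array([-20.0, 0.0, 25.0])  # per-subgroup intercepts (toy)
bp = 3.0 * mr + true_offsets[group] + rng.normal(0.0, 2.0, n)

# Design matrix: one indicator (dummy) column per subgroup, plus a shared
# molar-refraction column, mirroring the paper's two predictors.
X = np.zeros((n, n_groups + 1))
X[np.arange(n), group] = 1.0
X[:, -1] = mr
coef, *_ = np.linalg.lstsq(X, bp, rcond=None)
print(coef[-1])   # recovered common slope, close to the true 3.0
```

The per-subgroup intercepts in `coef[:-1]` play the role of the vertical offsets between the nine straight lines in the paper's plot, the part attributed to hydrogen-halogen interactions.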

  18. On folivory, competition, and intelligence: generalisms, overgeneralizations, and models of primate evolution.

    Science.gov (United States)

    Sayers, Ken

    2013-04-01

    Considerations of primate behavioral evolution often proceed by assuming the ecological and competitive milieus of particular taxa via their relative exploitation of gross food types, such as fruits versus leaves. Although this "fruit/leaf dichotomy" has been repeatedly criticized, it continues to be implicitly invoked in discussions of primate socioecology and female social relationships and is explicitly invoked in models of brain evolution. An expanding literature suggests that such views have severely limited our knowledge of the social and ecological complexities of primate folivory. This paper examines the behavior of primate folivore-frugivores, with particular emphasis on gray langurs (traditionally, Semnopithecus entellus) within the broader context of evolutionary ecology. Although possessing morphological characteristics that have been associated with folivory and constrained activity patterns, gray langurs are known for remarkable plasticity in ecology and behavior. Their diets are generally quite broad and can be discussed in relation to Liem's Paradox, the odd coupling of anatomical feeding specializations with a generalist foraging strategy. Gray langurs, not coincidentally, inhabit arguably the widest range of habitats for a nonhuman primate, including high elevations in the Himalayas. They provide an excellent focal point for examining the assumptions and predictions of behavioral, socioecological, and cognitive evolutionary models. Contrary to the classical descriptions of the primate folivore, Himalayan and other gray langurs-and, in actuality, many leaf-eating primates-range widely, engage in resource competition (both of which have previously been noted for primate folivores), and solve ecological problems rivaling those of more frugivorous primates (which has rarely been argued for primate folivores). It is maintained that questions of primate folivore adaptation, temperate primate adaptation, and primate evolution more generally cannot be

  19. A new statistical scission-point model fed with microscopic ingredients to predict fission fragments distributions

    International Nuclear Information System (INIS)

    Heinrich, S.

    2006-01-01

    The nuclear fission process is a very complex phenomenon and, even nowadays, no realistic models describing the overall process are available. The work presented here deals with a theoretical description of fission fragment distributions in mass, charge, energy and deformation. We have reconsidered and updated the B.D. Wilkins scission-point model. Our purpose was to test whether this statistical model, applied at the scission point and fed with the results of modern microscopic calculations, allows a quantitative description of the fission fragment distributions. We calculate the energy available at the scission point as a function of the fragment deformations. This surface is obtained from a Hartree-Fock-Bogoliubov microscopic calculation, which guarantees a realistic description of the dependence of the potential on the deformation of each fragment. The statistical balance is described by the level densities of the fragments. We have tried to avoid as much as possible the input of empirical parameters in the model. Our only parameter, the distance between the fragments at the scission point, is discussed by comparison with scission configurations obtained from fully dynamical microscopic calculations. The comparison between our results and experimental data is very satisfying and allows us to discuss the successes and limitations of our approach. We finally propose ideas to improve the model, in particular by applying dynamical corrections. (author)

  20. Detection and localization of change points in temporal networks with the aid of stochastic block models

    Science.gov (United States)

    De Ridder, Simon; Vandermarliere, Benjamin; Ryckebusch, Jan

    2016-11-01

    A framework based on generalized hierarchical random graphs (GHRGs) for the detection of change points in the structure of temporal networks has recently been developed by Peel and Clauset (2015 Proc. 29th AAAI Conf. on Artificial Intelligence). We build on this methodology and extend it to also include the versatile stochastic block models (SBMs) as a parametric family for reconstructing the empirical networks. We use five different techniques for change point detection on prototypical temporal networks, both empirical and synthetic. We find that none of the considered methods can consistently outperform the others when it comes to detecting and locating the expected change points in empirical temporal networks. With respect to the precision and recall of the detected change points, we find that the method based on a degree-corrected SBM has better recall properties than the other dedicated methods, especially for sparse networks and smaller sliding time window widths.
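
    As a much-simplified stand-in for the GHRG/SBM-based detectors compared in the paper, a change point in a temporal network can be located from a shift in mean edge density between two adjacent sliding windows. The window width and the density statistic below are illustrative choices, not the paper's method.

```python
import numpy as np

def density_change_point(adj_seq, window=10):
    """Locate the most likely change point in a sequence of adjacency
    matrices from a shift in mean edge density between the 'window'
    snapshots before and after each candidate time step."""
    dens = np.array([a.mean() for a in adj_seq])
    best, best_gap = None, 0.0
    for t in range(window, len(dens) - window):
        gap = abs(dens[t:t + window].mean() - dens[t - window:t].mean())
        if gap > best_gap:
            best, best_gap = t, gap
    return best
```

    On a toy sequence of thirty dense snapshots followed by thirty sparse ones, the detector recovers the boundary; real detectors must of course cope with structure changes that leave the density unchanged, which is what the SBM-based likelihood methods are for.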

  1. Two- and three-point functions in the D=1 matrix model

    International Nuclear Information System (INIS)

    Ben-Menahem, S.

    1991-01-01

    The critical behavior of the genus-zero two-point function in the D=1 matrix model is carefully analyzed for arbitrary embedding-space momentum. Kostov's result is recovered for momenta below a certain value P_0 (which is 1/√α' in the continuum language), with a non-universal form factor which is expressed simply in terms of the critical fermion trajectory. For momenta above P_0, the Kostov scaling term is found to be subdominant. We then extend the large-N WKB treatment to calculate the genus-zero three-point function, and elucidate its critical behavior when all momenta are below P_0. The resulting universal scaling behavior, as well as the non-universal form factor for the three-point function, are related to the two-point functions of the individual external momenta, through the factorization familiar from continuum conformal field theories. (orig.)

  2. Ferrimagnetism and compensation points in a decorated 3D Ising model

    International Nuclear Information System (INIS)

    Oitmaa, J.; Zheng, W.

    2003-01-01

    Ferrimagnets are materials where ions on different sublattices have opposing magnetic moments which do not exactly cancel even at zero temperature. An intriguing possibility then is the existence of a compensation point, below the Curie temperature, where the net moment changes sign. This has obvious technological significance. Most theoretical studies of such systems have used mean-field approaches, making it difficult to distinguish real properties of the model from artefacts of the approximation. For this reason a number of simpler models have been proposed, where treatments beyond mean-field theory are possible. Of particular interest are decorated systems, which can be mapped exactly onto simpler models and, in this way, either solved exactly or to a high degree of numerical precision. We use this approach to study a ferrimagnetic Ising system with spins 1/2 at the sites of a simple cubic lattice and spins S=1 or 3/2 located on the bonds. Our results, which are exact to high numerical precision, show a number of surprising and interesting features: for S=1 the possibility of zero, one or two compensation points, re-entrant behaviour, and up to three critical points; for S=3/2 always a simple critical point and zero or one compensation point.

  3. Microstructure evolution during cyclic tests on EUROFER 97 at room temperature. TEM observation and modelling

    Czech Academy of Sciences Publication Activity Database

    Giordana, M. F.; Giroux, P. F.; Alvarez-Armas, I.; Sauzay, M.; Armas, A.; Kruml, Tomáš

    2012-01-01

    Roč. 550, JUL (2012), s. 103-111 ISSN 0921-5093 Institutional support: RVO:68081723 Keywords : martensitic steels * softening behaviour * microstructural evolution * modelling Subject RIV: JL - Materials Fatigue, Friction Mechanics Impact factor: 2.108, year: 2012

  4. Numerical study of the evolution of a magnetized plasma by means of a hybrid model

    Energy Technology Data Exchange (ETDEWEB)

    Dinu, L [Institutul de Matematica, Bucharest (Romania); Vlad, M [Institutul de Fizica si Tehnologia Aparatelor cu Radiatii, Bucharest (Romania)

    1979-01-01

    A numerical solution of the Vlasov-fluid model describing the time and space evolution of a plasma is presented. The results should be compared with J.P. Freidberg's analysis (1), (2), which provides growth rates for instabilities and some stability criteria.

  5. Gas-evolution oscillators. 10. A model based on a delay equation

    Energy Technology Data Exchange (ETDEWEB)

    Bar-Eli, K.; Noyes, R.M. [Univ. of Oregon, Eugene, OR (United States)

    1992-09-17

    This paper develops a simplified method to model the behavior of a gas-evolution oscillator with two differential delay equations in two unknowns consisting of the population of dissolved molecules in solution and the pressure of the gas.

  6. Gas-evolution oscillators. 10. A model based on a delay equation

    International Nuclear Information System (INIS)

    Bar-Eli, K.; Noyes, R.M.

    1992-01-01

    This paper develops a simplified method to model the behavior of a gas-evolution oscillator with two differential delay equations in two unknowns consisting of the population of dissolved molecules in solution and the pressure of the gas.

  7. Evolution dynamics modeling and simulation of logistics enterprise's core competence based on service innovation

    Science.gov (United States)

    Yang, Bo; Tong, Yuting

    2017-04-01

    With the rapid development of the economy, logistics enterprises in China face a huge challenge: they generally lack core competitiveness, and their awareness of service innovation is weak. Scholars have studied the core competence of logistics enterprises mainly from a static perspective rather than from the perspective of dynamic evolution. The author therefore analyzes the influencing factors and the evolution process of the core competence of logistics enterprises, uses system dynamics to study the causes and effects of this evolution, and constructs a system dynamics model of the evolution of logistics enterprises' core competence, which can be simulated with Vensim PLE. Analysis of the effectiveness and sensitivity of the simulation model indicates that the model fits the evolution process of the core competence of logistics enterprises, reveals the process and mechanism of that evolution, and provides management strategies for improving core competence. The construction and operation of a computer simulation model offers an effective method for studying the evolution of logistics enterprises' core competence.

  8. Application of the nudged elastic band method to the point-to-point radio wave ray tracing in IRI modeled ionosphere

    Science.gov (United States)

    Nosikov, I. A.; Klimenko, M. V.; Bessarab, P. F.; Zhbankov, G. A.

    2017-07-01

    Point-to-point ray tracing is an important problem in many fields of science. While direct variational methods, where some trajectory is transformed to an optimal one, are routinely used in calculations of pathways of seismic waves, chemical reactions, diffusion processes, etc., this approach is not widely known in ionospheric point-to-point ray tracing. We apply the Nudged Elastic Band (NEB) method to a radio wave propagation problem. In the NEB method, a chain of points which gives a discrete representation of the radio wave ray is adjusted iteratively to an optimal configuration satisfying Fermat's principle, while the endpoints of the trajectory are kept fixed according to the boundary conditions. Transverse displacements define the radio ray trajectory, while springs between the points control their distribution along the ray. The method is applied to a study of point-to-point ionospheric ray tracing, where the propagation medium is obtained with the International Reference Ionosphere model taking into account traveling ionospheric disturbances. A 2-dimensional representation of the optical path functional is developed and used to gain insight into the fundamental difference between high and low rays. We conclude that high and low rays are minima and saddle points of the optical path functional, respectively.
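
    The NEB procedure described in the abstract can be illustrated with a much-simplified 2-D sketch: a chain of images between fixed endpoints is relaxed under the numerical gradient of a discrete optical-path (Fermat) functional plus spring forces that keep the images distributed along the ray. The refractive-index function, step sizes and spring constant are illustrative assumptions, not the IRI-based medium of the paper, and the spring term is not tangent-projected as in the full NEB method.

```python
import numpy as np

def optical_path(path, n_func):
    # Discrete Fermat functional: sum over segments of n(midpoint) * length.
    mids = 0.5 * (path[1:] + path[:-1])
    segs = np.linalg.norm(np.diff(path, axis=0), axis=1)
    return float(sum(n_func(m) * s for m, s in zip(mids, segs)))

def neb_relax(start, end, n_func, n_images=15, n_iter=300, step=0.02,
              k_spring=2.0, eps=1e-6):
    """Relax a chain of images toward a stationary point of the optical path.
    Endpoints stay fixed (boundary conditions); interior images move under a
    central-difference gradient of the path functional plus a spring term."""
    path = np.linspace(start, end, n_images).astype(float)
    for _ in range(n_iter):
        new = path.copy()
        for i in range(1, n_images - 1):
            g = np.zeros(2)
            for d in range(2):
                plus, minus = path.copy(), path.copy()
                plus[i, d] += eps
                minus[i, d] -= eps
                g[d] = (optical_path(plus, n_func) -
                        optical_path(minus, n_func)) / (2 * eps)
            spring = k_spring * (path[i + 1] - 2 * path[i] + path[i - 1])
            new[i] = path[i] - step * g + step * spring
        path = new
    return path
```

    In a homogeneous medium the relaxed chain stays on the straight line joining the endpoints, as Fermat's principle requires; a graded index bends the chain away from the straight line.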

  9. Ups and Downs: Modeling the Visual Evolution of Fashion Trends with One-Class Collaborative Filtering

    OpenAIRE

    He, Ruining; McAuley, Julian

    2016-01-01

    Building a successful recommender system depends on understanding both the dimensions of people's preferences as well as their dynamics. In certain domains, such as fashion, modeling such preferences can be incredibly difficult, due to the need to simultaneously model the visual appearance of products as well as their evolution over time. The subtle semantics and non-linear dynamics of fashion evolution raise unique challenges especially considering the sparsity and large scale of the underly...

  10. Evaluation of the Agricultural Non-point Source Pollution in Chongqing Based on PSR Model

    Institute of Scientific and Technical Information of China (English)

    Hanwen; ZHANG; Xinli; MOU; Hui; XIE; Hong; LU; Xingyun; YAN

    2014-01-01

    Through exploration based on the PSR (pressure-state-response) framework model, and taking into account the specific agro-environmental issues present in Chongqing, we build an agricultural non-point source pollution assessment index system suitable for Chongqing. It covers three major categories, agricultural system pressure, agro-environmental state and human response, and comprises 3 criteria-level indicators and 19 individual indicators. The analysis shows that pressures and responses tend to increase and decrease roughly linearly, whereas the state and composite indices fluctuate strongly; their fluctuations are similar, mainly due to the elimination of pressures and impacts affecting agricultural non-point source pollution.

  11. Room acoustics modeling using a point-cloud representation of the room geometry

    DEFF Research Database (Denmark)

    Markovic, Milos; Olesen, Søren Krarup; Hammershøi, Dorte

    2013-01-01

    Room acoustics modeling is usually based on the room geometry that is parametrically described prior to a sound transmission calculation. This is a highly room-specific task and rather time consuming if a complex geometry is to be described. Here, a run-time generic method for an arbitrary room...... geometry acquisition is presented. The method exploits the depth sensor of the Kinect device, which provides point-based information about a scanned room interior. After post-processing of the Kinect output data, a 3D point-cloud model of the room is obtained. Sound transmission between two selected points...... level of user immersion by a real-time acoustical simulation of dynamic scenes....

  12. An Analytical Model for the Evolution of the Protoplanetary Disks

    Energy Technology Data Exchange (ETDEWEB)

    Khajenabi, Fazeleh; Kazrani, Kimia; Shadmehri, Mohsen, E-mail: f.khajenabi@gu.ac.ir [Department of Physics, Faculty of Sciences, Golestan University, Gorgan 49138-15739 (Iran, Islamic Republic of)

    2017-06-01

    We obtain a new set of analytical solutions for the evolution of a self-gravitating accretion disk by holding the Toomre parameter close to its threshold and obtaining the stress parameter from the cooling rate. Furthermore, in agreement with previous numerical solutions, the accretion rate is assumed to be independent of the disk radius. Extreme situations where the entire disk is either optically thick or optically thin are studied independently, and the obtained solutions can be used for exploring the early or the final phases of a protoplanetary disk's evolution. Our solutions exhibit decay of the accretion rate as a power-law function of the age of the system, with exponents −0.75 and −1.04 for the optically thick and thin cases, respectively. Our calculations permit us to explore the evolution of the snow line analytically. The location of the snow line in the optically thick regime evolves as a power-law function of time with the exponent −0.16; when the disk is optically thin, however, the location of the snow line evolves with the exponent −0.7 and thus depends more strongly on time. This means that inward migration of the snow line is faster in an optically thin disk than in an optically thick one.
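
    The quoted exponents translate directly into a scaling relation for the snow-line location. The sketch below encodes only those power laws; the normalization constants r0 and t0 are hypothetical placeholders, since the abstract does not give absolute scales.

```python
def snow_line_radius(t, t0=1.0, r0=1.0, optically_thick=True):
    """Power-law evolution of the snow-line location quoted in the abstract:
    r ∝ t^-0.16 (optically thick) or r ∝ t^-0.7 (optically thin).
    t0 and r0 are hypothetical normalization constants."""
    p = -0.16 if optically_thick else -0.7
    return r0 * (t / t0) ** p
```

    Evaluating both branches at the same (dimensionless) age shows the optically thin snow line sitting further in, i.e. faster inward migration, as stated in the abstract.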

  13. Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks

    DEFF Research Database (Denmark)

    Hagen, Espen; Dahmen, David; Stavrinou, Maria L

    2016-01-01

    With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical...... and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely...... on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allows for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network......

  14. SIMPLE MODELS OF THREE COUPLED PT -SYMMETRIC WAVE GUIDES ALLOWING FOR THIRD-ORDER EXCEPTIONAL POINTS

    Directory of Open Access Journals (Sweden)

    Jan Schnabel

    2017-12-01

    We study theoretical models of three coupled wave guides with a PT-symmetric distribution of gain and loss. A realistic matrix model is developed in terms of a three-mode expansion. By comparing with a previously postulated matrix model it is shown how parameter ranges with good prospects of finding a third-order exceptional point (EP3) in an experimentally feasible arrangement of semiconductors can be determined. In addition it is demonstrated that continuous distributions of exceptional points, which render the discovery of the EP3 difficult, are not only a feature of extended wave guides but appear also in an idealised model of infinitely thin guides shaped by delta functions.

  15. MODELLING AND SIMULATION OF A NEUROPHYSIOLOGICAL EXPERIMENT BY SPATIO-TEMPORAL POINT PROCESSES

    Directory of Open Access Journals (Sweden)

    Viktor Beneš

    2011-05-01

    We present a stochastic model of an experiment monitoring the spiking activity of a place cell of the hippocampus of an experimental animal moving in an arena. A doubly stochastic spatio-temporal point process is used to model and quantify overdispersion. The stochastic intensity is modelled by a Lévy-based random field, while the animal's path is simplified to a discrete random walk. In a simulation study, a previously suggested method is used first. It is then shown that a solution of the filtering problem yields the desired inference for the random intensity. Two approaches are suggested, and the new one, based on a finite point process density, is applied. Using Markov chain Monte Carlo we obtain numerical results from the simulated model. The methodology is discussed.

  16. Higher-order triangular spectral element method with optimized cubature points for seismic wavefield modeling

    Science.gov (United States)

    Liu, Youshan; Teng, Jiwen; Xu, Tao; Badal, José

    2017-05-01

    The mass-lumped method avoids the cost of inverting the mass matrix and simultaneously maintains spatial accuracy by adopting additional interior integration points, known as cubature points. To date, such points are only known analytically in tensor domains, such as quadrilateral or hexahedral elements. Thus, the diagonal-mass-matrix spectral element method (SEM) in non-tensor domains always relies on numerically computed interpolation points or quadrature points. However, only the cubature points for degrees 1 to 6 are known, which is the reason that we have developed a p-norm-based optimization algorithm to obtain higher-order cubature points. In this way, we obtain and tabulate new cubature points with all positive integration weights for degrees 7 to 9. The dispersion analysis illustrates that the dispersion relation determined from the new optimized cubature points is comparable to that of the mass and stiffness matrices obtained by exact integration. Simultaneously, the Lebesgue constant for the new optimized cubature points indicates its surprisingly good interpolation properties. As a result, such points provide both good interpolation properties and integration accuracy. The Courant-Friedrichs-Lewy (CFL) numbers are tabulated for the conventional Fekete-based triangular spectral element (TSEM), the TSEM with exact integration, and the optimized cubature-based TSEM (OTSEM). A complementary study demonstrates the spectral convergence of the OTSEM. A numerical example conducted on a half-space model demonstrates that the OTSEM improves the accuracy by approximately one order of magnitude compared to the conventional Fekete-based TSEM. In particular, the accuracy of the 7th-order OTSEM is even higher than that of the 14th-order Fekete-based TSEM. Furthermore, the OTSEM produces a result that can compete in accuracy with the quadrilateral SEM (QSEM). The high accuracy of the OTSEM is also tested with a non-flat topography model. In terms of computational

  17. Spacing distribution functions for the one-dimensional point-island model with irreversible attachment

    Science.gov (United States)

    González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.

    2011-07-01

    We study the configurational structure of the point-island model for epitaxial growth in one dimension. In particular, we calculate the island gap and capture zone distributions. Our model is based on an approximate description of nucleation inside the gaps. Nucleation is described by the joint probability density pnXY(x,y), which represents the probability density to have nucleation at position x within a gap of size y. Our proposed functional form for pnXY(x,y) gives an excellent description of the statistical behavior of the system. We compare our analytical model with extensive numerical simulations. Our model retains the most relevant physical properties of the system.

  18. Numerical Solution of Fractional Neutron Point Kinetics Model in Nuclear Reactor

    Directory of Open Access Journals (Sweden)

    Nowak Tomasz Karol

    2014-06-01

    This paper presents results concerning solutions of the fractional neutron point kinetics model for a nuclear reactor. The proposed model consists of a bilinear system of fractional and ordinary differential equations. Three methods to solve the model are presented and compared. The first entails application of the discrete Grünwald-Letnikov definition of the fractional derivative in the model. The second involves building an analog scheme in the FOMCON Toolbox in the MATLAB environment. The third is the method proposed by Edwards. The impact of selected parameters on the model's response was examined. The results for typical inputs are discussed and compared.

  19. Soft modes at the critical end point in the chiral effective models

    International Nuclear Information System (INIS)

    Fujii, Hirotsugu; Ohtani, Munehisa

    2004-01-01

    At the critical end point in QCD phase diagram, the scalar, vector and entropy susceptibilities are known to diverge. The dynamic origin of this divergence is identified within the chiral effective models as softening of a hydrodynamic mode of the particle-hole-type motion, which is a consequence of the conservation law of the baryon number and the energy. (author)

  20. Kernel integration scatter model for parallel beam gamma camera and SPECT point source response

    International Nuclear Information System (INIS)

    Marinkovic, P.M.

    2001-01-01

    Scatter correction is a prerequisite for quantitative single photon emission computed tomography (SPECT). In this paper a kernel integration scatter model for the parallel-beam gamma camera and SPECT point source response, based on the Klein-Nishina formula, is proposed. This method models the primary photon distribution as well as first-order Compton scattering. It also includes a correction for multiple scattering by applying a point isotropic single-medium buildup factor for the path segment between the point of scatter and the point of detection. Gamma-ray attenuation in the imaged object, based on a known μ-map distribution, is considered too. The intrinsic spatial resolution of the camera is approximated by a simple Gaussian function. The collimator is modeled simply, using acceptance angles derived from its physical dimensions: any gamma rays satisfying this angle were passed through the collimator to the crystal. Septal penetration and scatter in the collimator were not included in the model. The method was validated by comparison with a Monte Carlo MCNP-4a numerical phantom simulation and excellent results were obtained. Physical phantom experiments to confirm this method are planned. (author)

  1. Predictive error dependencies when using pilot points and singular value decomposition in groundwater model calibration

    DEFF Research Database (Denmark)

    Christensen, Steen; Doherty, John

    2008-01-01

    super parameters), and that the structural errors caused by using pilot points and super parameters to parameterize the highly heterogeneous log-transmissivity field can be significant. For the test case much effort is put into studying how the calibrated model's ability to make accurate predictions...

  2. Using many pilot points and singular value decomposition in groundwater model calibration

    DEFF Research Database (Denmark)

    Christensen, Steen; Doherty, John

    2008-01-01

    over the model area. Singular value decomposition (SVD) of the normal matrix is used to reduce the large number of pilot point parameters to a smaller number of so-called super parameters that can be estimated by nonlinear regression from the available observations. A number of eigenvectors...
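
    The SVD reduction of many pilot-point parameters to a few "super parameters" can be sketched generically: take the SVD of the sensitivity (Jacobian) matrix of model outputs with respect to the pilot-point parameters and keep only the leading right-singular vectors as estimable directions. The function names and the toy Jacobian in the test are illustrative, not the papers' implementation (which operates on the normal matrix).

```python
import numpy as np

def super_parameters(jacobian, n_super):
    """Project a high-dimensional pilot-point parameter space onto n_super
    'super parameters' spanned by the leading right-singular vectors of the
    sensitivity matrix -- the directions best informed by the observations."""
    _, s, vt = np.linalg.svd(jacobian, full_matrices=False)
    basis = vt[:n_super].T   # columns: super-parameter directions
    return basis, s

def to_pilot_points(basis, super_values):
    # Map estimated super-parameter values back to pilot-point increments.
    return basis @ super_values
```

    Nonlinear regression then estimates only the few super-parameter values; `to_pilot_points` maps them back to the full pilot-point field, leaving the poorly informed directions (small singular values) at their prior values.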

  3. Assessing accuracy of point fire intervals across landscapes with simulation modelling

    Science.gov (United States)

    Russell A. Parsons; Emily K. Heyerdahl; Robert E. Keane; Brigitte Dorner; Joseph Fall

    2007-01-01

    We assessed accuracy in point fire intervals using a simulation model that sampled four spatially explicit simulated fire histories. These histories varied in fire frequency and size and were simulated on a flat landscape with two forest types (dry versus mesic). We used three sampling designs (random, systematic grids, and stratified). We assessed the sensitivity of...

  4. Evaluating Change in Behavioral Preferences: Multidimensional Scaling Single-Ideal Point Model

    Science.gov (United States)

    Ding, Cody

    2016-01-01

    The purpose of the article is to propose a multidimensional scaling single-ideal point model as a method to evaluate changes in individuals' preferences under the explicit methodological framework of behavioral preference assessment. One example is used to illustrate the approach for a clear idea of what this approach can accomplish.

  5. Implementation of the critical points model in a SFM-FDTD code working in oblique incidence

    Energy Technology Data Exchange (ETDEWEB)

    Hamidi, M; Belkhir, A; Lamrous, O [Laboratoire de Physique et Chimie Quantique, Universite Mouloud Mammeri, Tizi-Ouzou (Algeria); Baida, F I, E-mail: omarlamrous@mail.ummto.dz [Departement d' Optique P.M. Duffieux, Institut FEMTO-ST UMR 6174 CNRS Universite de Franche-Comte, 25030 Besancon Cedex (France)

    2011-06-22

    We describe the implementation of the critical points model in a finite-difference time-domain code working in oblique incidence and dealing with dispersive media through the split field method. Some tests are presented to validate our code, in addition to an application devoted to the plasmon resonance of a gold nanoparticle grating.
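
    The critical points dispersion model referred to in the abstract is commonly written as a Drude term plus critical-point resonance terms; a sketch of that permittivity function is below. Gold's published fit parameters are not reproduced here; any numbers in the test are placeholders, and the FDTD time-domain implementation itself (auxiliary differential equations or recursive convolution) is not shown.

```python
import numpy as np

def drude_critical_points(omega, eps_inf, omega_p, gamma, A, phi, Omega, Gamma):
    """Drude + critical-points permittivity: each critical-points term has
    amplitude A_p, phase phi_p, gap frequency Omega_p and broadening Gamma_p."""
    eps = eps_inf - omega_p**2 / (omega**2 + 1j * gamma * omega)
    for A_p, phi_p, Om_p, G_p in zip(A, phi, Omega, Gamma):
        eps += A_p * Om_p * (np.exp(1j * phi_p) / (Om_p - omega - 1j * G_p)
                             + np.exp(-1j * phi_p) / (Om_p + omega + 1j * G_p))
    return eps
```

    With no critical-points terms the function reduces to the bare Drude model, and for real parameters it satisfies the Hermitian symmetry ε(−ω) = ε(ω)*, which a physically valid time-domain susceptibility requires.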

  6. TARDEC FIXED HEEL POINT (FHP): DRIVER CAD ACCOMMODATION MODEL VERIFICATION REPORT

    Science.gov (United States)

    2017-11-09

    TARDEC Fixed Heel Point (FHP): Driver CAD Accommodation Model Verification. HSI was not actively engaged until MSB or the Engineering Manufacturing and Development (EMD) Phase, resulting in significant design and cost changes.

  7. On Lie point symmetry of classical Wess-Zumino-Witten model

    International Nuclear Information System (INIS)

    Maharana, Karmadeva

    2001-06-01

    We perform the group analysis of Witten's equations of motion for a particle moving in the presence of a magnetic monopole, and also when constrained to move on the surface of a sphere, which is the classical example of Wess-Zumino-Witten model. We also consider variations of this model. Our analysis gives the generators of the corresponding Lie point symmetries. The Lie symmetry corresponding to Kepler's third law is obtained in two related examples. (author)

  8. A random point process model for the score in sport matches

    Czech Academy of Sciences Publication Activity Database

    Volf, Petr

    2009-01-01

    Roč. 20, č. 2 (2009), s. 121-131 ISSN 1471-678X R&D Projects: GA AV ČR(CZ) IAA101120604 Institutional research plan: CEZ:AV0Z10750506 Keywords : sport statistics * scoring intensity * Cox’s regression model Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2009/SI/volf-a random point process model for the score in sport matches.pdf

  9. A model for the two-point velocity correlation function in turbulent channel flow

    International Nuclear Information System (INIS)

    Sahay, A.; Sreenivasan, K.R.

    1996-01-01

    A relatively simple analytical expression is presented to approximate the equal-time, two-point, double-velocity correlation function in turbulent channel flow. To assess the accuracy of the model, we perform the spectral decomposition of the integral operator having the model correlation function as its kernel. Comparisons of the empirical eigenvalues and eigenfunctions with those constructed from direct numerical simulations data show good agreement. copyright 1996 American Institute of Physics
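
    The assessment step described in the abstract, spectral decomposition of the integral operator whose kernel is the model correlation function, can be sketched generically: discretize the operator with a quadrature rule and diagonalize the resulting symmetric matrix. The trapezoidal rule and the stand-in kernel in the test are assumptions; the authors' actual channel-flow correlation function is not reproduced here.

```python
import numpy as np

def kernel_eigenmodes(kernel, a, b, n=200):
    """Discretize K[f](x) = integral over [a,b] of kernel(x, x') f(x') dx'
    with the trapezoidal rule and return grid, eigenvalues (descending) and
    eigenfunctions. Symmetrizing with sqrt-weights lets us use eigh."""
    x = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    K = kernel(x[:, None], x[None, :])
    sw = np.sqrt(w)
    A = sw[:, None] * K * sw[None, :]   # W^1/2 K W^1/2, symmetric
    lam, phi = np.linalg.eigh(A)
    order = np.argsort(lam)[::-1]
    return x, lam[order], (phi / sw[:, None])[:, order]
```

    For a rank-one separable kernel the decomposition returns a single nonzero eigenvalue equal to the kernel's trace integral, a convenient check before feeding in an empirical or model two-point correlation.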

  10. Developing, choosing and using landscape evolution models to inform field-based landscape reconstruction studies

    NARCIS (Netherlands)

    Temme, A.j.a.m.; Armitage, J.; Attal, M.; Van Gorp, W.; Coulthard, T.j.; Schoorl, J.m.

    2017-01-01

    Landscape evolution models (LEMs) are an increasingly popular resource for geomorphologists as they can operate as virtual laboratories where the implications of hypotheses about processes over human to geological timescales can be visualized at spatial scales from catchments to mountain ranges.

  11. Set points, settling points and some alternative models: theoretical options to understand how genes and environments combine to regulate body adiposity

    Directory of Open Access Journals (Sweden)

    John R. Speakman

    2011-11-01

    The close correspondence between energy intake and expenditure over prolonged time periods, coupled with an apparent protection of the level of body adiposity in the face of perturbations of energy balance, has led to the idea that body fatness is regulated via mechanisms that control intake and energy expenditure. Two models have dominated the discussion of how this regulation might take place. The set point model is rooted in physiology, genetics and molecular biology, and suggests that there is an active feedback mechanism linking adipose tissue (stored energy to intake and expenditure via a set point, presumably encoded in the brain. This model is consistent with many of the biological aspects of energy balance, but struggles to explain the many significant environmental and social influences on obesity, food intake and physical activity. More importantly, the set point model does not effectively explain the ‘obesity epidemic’ – the large increase in body weight and adiposity of a large proportion of individuals in many countries since the 1980s. An alternative model, called the settling point model, is based on the idea that there is passive feedback between the size of the body stores and aspects of expenditure. This model accommodates many of the social and environmental characteristics of energy balance, but struggles to explain some of the biological and genetic aspects. The shortcomings of these two models reflect their failure to address the gene-by-environment interactions that dominate the regulation of body weight. We discuss two additional models – the general intake model and the dual intervention point model – that address this issue and might offer better ways to understand how body fatness is controlled.
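
    The contrast between the two dominant models can be made concrete with a toy simulation: active feedback driving body stores back to a brain-encoded target (set point) versus passive balance between intake and mass-dependent expenditure (settling point). All parameter values and units below are illustrative assumptions, not from the article.

```python
def simulate(model, m0, days=5000, dt=0.1, intake=12.0):
    """Euler integration of body-store dynamics M (arbitrary units).
    'set_point': intake/expenditure are actively adjusted toward a target.
    'settling_point': expenditure rises passively with M, so M settles
    where intake and expenditure balance."""
    M = m0
    M_SET, K = 10.0, 0.05   # hypothetical target store and feedback gain
    C = 1.2                 # hypothetical expenditure per unit store
    for _ in range(int(days / dt)):
        if model == "set_point":
            dM = -K * (M - M_SET)
        else:
            dM = intake - C * M
        M += dt * dM
    return M
```

    Raising environmental intake shifts the settling point (from 10 to 15 units in the test below) while the set-point system returns to its target after any perturbation; this is the sense in which the settling point model accommodates environmental drivers of adiposity that the set point model struggles with.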

  12. Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks.

    Science.gov (United States)

    Hagen, Espen; Dahmen, David; Stavrinou, Maria L; Lindén, Henrik; Tetzlaff, Tom; van Albada, Sacha J; Grün, Sonja; Diesmann, Markus; Einevoll, Gaute T

    2016-12-01

    With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allows for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network model for a ∼1 mm² patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its public implementation in hybridLFPy form the basis for LFP predictions from other and larger point-neuron network models, as well as extensions of the current application with additional biological detail. © The Author 2016. Published by Oxford University Press.

  13. The three-point function as a probe of models for large-scale structure

    International Nuclear Information System (INIS)

    Frieman, J.A.; Gaztanaga, E.

    1993-01-01

    The authors analyze the consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime. Several variations of the standard Ω = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, R_p ∼ 20 h⁻¹ Mpc, e.g., low-matter-density (non-zero cosmological constant) models, ‘tilted’ primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. The authors show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q₃ and S₃ can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
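    The hierarchical three-point amplitude used above is conventionally defined as Q₃ = ζ₁₂₃ / (ξ₁₂ξ₂₃ + ξ₂₃ξ₃₁ + ξ₃₁ξ₁₂), where ζ is the three-point and ξ the two-point correlation function. A minimal helper makes the normalization explicit (the numerical values below are made up):

```python
def q3(zeta, xi12, xi23, xi31):
    """Hierarchical three-point amplitude Q_3 from the connected
    three-point function zeta and the pairwise two-point functions xi."""
    return zeta / (xi12 * xi23 + xi23 * xi31 + xi31 * xi12)

# Under the exact hierarchical ansatz zeta = Q * (sum of xi products),
# q3 recovers Q regardless of the individual xi values:
xi12, xi23, xi31, Q = 0.5, 0.2, 0.1, 1.3
zeta = Q * (xi12 * xi23 + xi23 * xi31 + xi31 * xi12)
assert abs(q3(zeta, xi12, xi23, xi31) - Q) < 1e-12
```

    A Q₃ that falls sharply with scale, rather than staying roughly constant, is the signature the authors use to flag scale-dependent bias rather than true large-scale power.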

  14. Fitting models of continuous trait evolution to incompletely sampled comparative data using approximate Bayesian computation.

    Science.gov (United States)

    Slater, Graham J; Harmon, Luke J; Wegmann, Daniel; Joyce, Paul; Revell, Liam J; Alfaro, Michael E

    2012-03-01

    In recent years, a suite of methods has been developed to fit multiple rate models to phylogenetic comparative data. However, most methods have limited utility at broad phylogenetic scales because they typically require complete sampling of both the tree and the associated phenotypic data. Here, we develop and implement a new, tree-based method called MECCA (Modeling Evolution of Continuous Characters using ABC) that uses a hybrid likelihood / approximate Bayesian computation (ABC) Markov chain Monte Carlo approach to simultaneously infer rates of diversification and trait evolution from incompletely sampled phylogenies and trait data. We demonstrate via simulation that MECCA has considerable power to choose among single versus multiple evolutionary rate models, and thus can be used to test hypotheses about changes in the rate of trait evolution across an incomplete tree of life. We finally apply MECCA to an empirical example of body size evolution in carnivores, and show that there is no evidence for an elevated rate of body size evolution in the pinnipeds relative to terrestrial carnivores. ABC approaches can provide a useful alternative set of tools for future macroevolutionary studies where likelihood-dependent approaches are lacking. © 2011 The Author(s). Evolution © 2011 The Society for the Study of Evolution.
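    The ABC ingredient of such a method can be sketched in its simplest rejection-sampling form. Below, trait evolution is Brownian motion on a star phylogeny (MECCA's actual tree traversal, diversification model and MCMC machinery are omitted): candidate rates σ² are drawn from a flat prior, data are simulated under each, and rates whose summary statistic (here the tip variance) lands close to the observed one are retained. Everything, including the synthetic "observed" data, is illustrative.

```python
import random

# Rejection-ABC caricature of rate inference for Brownian trait evolution.
# Star phylogeny only; purely illustrative, not the MECCA algorithm.

random.seed(1)

def simulate_variance(sigma2, n_tips=200, t=1.0):
    """Sample variance of tip values after Brownian evolution for time t."""
    tips = [random.gauss(0.0, (sigma2 * t) ** 0.5) for _ in range(n_tips)]
    mean = sum(tips) / n_tips
    return sum((x - mean) ** 2 for x in tips) / (n_tips - 1)

observed = simulate_variance(sigma2=2.0)      # pretend this is the data

accepted = []
for _ in range(5000):
    sigma2 = random.uniform(0.0, 10.0)        # flat prior on the rate
    if abs(simulate_variance(sigma2) - observed) < 0.2:
        accepted.append(sigma2)               # keep near-matching rates

posterior_mean = sum(accepted) / len(accepted)
# posterior_mean should sit near the generating rate of 2.0
```

    Comparing posterior fits under one shared rate versus separate clade rates is, in spirit, how model choice between single- and multiple-rate hypotheses proceeds.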

  15. Improving Gastric Cancer Outcome Prediction Using Single Time-Point Artificial Neural Network Models

    Science.gov (United States)

    Nilsaz-Dezfouli, Hamid; Abu-Bakar, Mohd Rizam; Arasan, Jayanthi; Adam, Mohd Bakri; Pourhoseingholi, Mohamad Amin

    2017-01-01

    In cancer studies, the prediction of cancer outcome based on a set of prognostic variables has been a long-standing topic of interest. Current statistical methods for survival analysis offer the possibility of modelling cancer survivability but require unrealistic assumptions about the survival time distribution or proportionality of hazard. Therefore, attention must be paid to developing nonlinear models with less restrictive assumptions. Artificial neural network (ANN) models are primarily useful in prediction when nonlinear approaches are required to sift through the plethora of available information. The applications of ANN models for prognostic and diagnostic classification in medicine have attracted a lot of interest. The applications of ANN models in modelling the survival of patients with gastric cancer have been discussed in some studies without completely considering the censored data. This study proposes an ANN model for predicting gastric cancer survivability, considering the censored data. Five separate single time-point ANN models were developed to predict the outcome of patients after 1, 2, 3, 4, and 5 years. The performance of the ANN model in predicting the probabilities of death is consistently high for all time points according to the accuracy and the area under the receiver operating characteristic curve. PMID:28469384
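    The single time-point setup can be illustrated with a minimal example. For each horizon t (1 to 5 years) a separate binary classifier is fit using only patients whose status at t is known, so patients censored before t are excluded from that horizon's training set; this is one simple way such models accommodate censoring without a full survival model. The "network" below is reduced to logistic regression (a zero-hidden-layer ANN) on synthetic data; the features, coefficients and censoring rate are all invented.

```python
import numpy as np

# Sketch of one single time-point model on synthetic data.
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=(n, 3))                         # prognostic variables
true_risk = 1 / (1 + np.exp(-(x @ np.array([1.0, -0.5, 0.3]))))
death_by_t = rng.random(n) < true_risk              # status at horizon t
censored = rng.random(n) < 0.2                      # censored before t

keep = ~censored                                    # single time-point filter
xk, yk = x[keep], death_by_t[keep].astype(float)

w = np.zeros(3)
for _ in range(500):                                # plain gradient descent
    p = 1 / (1 + np.exp(-(xk @ w)))
    w -= 0.1 * xk.T @ (p - yk) / len(yk)

p = 1 / (1 + np.exp(-(xk @ w)))
accuracy = np.mean((p > 0.5) == (yk > 0.5))
```

    Repeating this fit for each of the five horizons yields the five separate models described in the abstract.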

  16. Improved cost models for optimizing CO2 pipeline configuration for point-to-point pipelines and simple networks

    NARCIS (Netherlands)

    Knoope, M. M. J.|info:eu-repo/dai/nl/364248149; Guijt, W.; Ramirez, A.|info:eu-repo/dai/nl/284852414; Faaij, A. P. C.

    In this study, a new cost model is developed for CO2 pipeline transport, which starts with the physical properties of CO2 transport and includes different kinds of steel grades and up-to-date material and construction costs. This pipeline cost model is used for a new developed tool to determine the

  17. Models for mean bonding length, melting point and lattice thermal expansion of nanoparticle materials

    Energy Technology Data Exchange (ETDEWEB)

    Omar, M.S., E-mail: dr_m_s_omar@yahoo.com [Department of Physics, College of Science, University of Salahaddin-Erbil, Arbil, Kurdistan (Iraq)

    2012-11-15

    Graphical abstract: Three models are derived to explain the nanoparticle size dependence of mean bonding length, melting temperature and lattice thermal expansion, applied to Sn, Si and Au. The following figures, shown as an example for Sn nanoparticles, indicate that the models are highly applicable for nanoparticle radii larger than 3 nm. Highlights: ► A model for a size-dependent mean bonding length is derived. ► The size-dependent melting point of nanoparticles is modified. ► The bulk model for lattice thermal expansion is successfully used on nanoparticles. -- Abstract: A model, based on the ratio of the number of surface atoms to that of the interior, is derived to calculate the size dependence of the lattice volume of nanoscaled materials. The model is applied to Si, Sn and Au nanoparticles. For Si, the lattice volume increases from 20 Å³ for the bulk to 57 Å³ for 2 nm nanocrystals. A model for calculating the melting point of nanoscaled materials is modified by considering the effect of lattice volume. The size-dependent melting point is well reproduced from the bulk state down to nanoparticles of about 2 nm in diameter. Both values of lattice volume and melting point obtained for nanosized materials are used to calculate the lattice thermal expansion by using a formula applicable to tetrahedral semiconductors. Results for Si change from 3.7 × 10⁻⁶ K⁻¹ for a bulk crystal down to a minimum value of 0.1 × 10⁻⁶ K⁻¹ for a 6 nm diameter nanoparticle.
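    The surface-to-interior atom ratio argument behind such models can be sketched numerically. For a spherical particle of radius r with atomic layer thickness d, the fraction of atoms in the outermost shell grows roughly as 1 − ((r − d)/r)³, and a melting point depression of the generic form T_m(r) = T_m,bulk(1 − c·d/r) follows. The constant c, the layer thickness and the bulk melting point used below (roughly that of tin, for scale) are assumptions for illustration only, not the article's fitted model.

```python
# Illustrative surface-atom-ratio sketch; parameters are assumptions,
# not the fitted model from the article.

def surface_fraction(r_nm, d_nm=0.3):
    """Approximate fraction of atoms within one atomic layer of the surface
    of a spherical particle of radius r_nm."""
    return 1.0 - ((r_nm - d_nm) / r_nm) ** 3

def melting_point(r_nm, tm_bulk=505.0, c=1.0, d_nm=0.3):
    """Generic size-dependent melting point (K): depression grows as 1/r."""
    return tm_bulk * (1.0 - c * d_nm / r_nm)

# Depression strengthens as particles shrink, consistent with the model
# holding from the bulk down to ~2 nm diameter particles.
assert melting_point(10.0) > melting_point(2.0)
```

    Feeding the size-dependent lattice volume and melting point into a thermal expansion formula, as the abstract describes, then yields the strongly size-dependent expansion coefficients reported for Si.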

  18. Models for mean bonding length, melting point and lattice thermal expansion of nanoparticle materials

    International Nuclear Information System (INIS)

    Omar, M.S.

    2012-01-01

    Graphical abstract: Three models are derived to explain the nanoparticle size dependence of mean bonding length, melting temperature and lattice thermal expansion, applied to Sn, Si and Au. The following figures, shown as an example for Sn nanoparticles, indicate that the models are highly applicable for nanoparticle radii larger than 3 nm. Highlights: ► A model for a size-dependent mean bonding length is derived. ► The size-dependent melting point of nanoparticles is modified. ► The bulk model for lattice thermal expansion is successfully used on nanoparticles. -- Abstract: A model, based on the ratio of the number of surface atoms to that of the interior, is derived to calculate the size dependence of the lattice volume of nanoscaled materials. The model is applied to Si, Sn and Au nanoparticles. For Si, the lattice volume increases from 20 Å³ for the bulk to 57 Å³ for 2 nm nanocrystals. A model for calculating the melting point of nanoscaled materials is modified by considering the effect of lattice volume. The size-dependent melting point is well reproduced from the bulk state down to nanoparticles of about 2 nm in diameter. Both values of lattice volume and melting point obtained for nanosized materials are used to calculate the lattice thermal expansion by using a formula applicable to tetrahedral semiconductors. Results for Si change from 3.7 × 10⁻⁶ K⁻¹ for a bulk crystal down to a minimum value of 0.1 × 10⁻⁶ K⁻¹ for a 6 nm diameter nanoparticle.

  19. A Corner-Point-Grid-Based Voxelization Method for Complex Geological Structure Model with Folds

    Science.gov (United States)

    Chen, Qiyu; Mariethoz, Gregoire; Liu, Gang

    2017-04-01

    3D voxelization is the foundation of geological property modeling, and is also an effective approach to realize the 3D visualization of the heterogeneous attributes in geological structures. The corner-point grid is a representative voxel data model, and is a structured grid type that is widely applied at present. When subdividing a complex geological structure model with folds, its structural morphology and bedding features must be fully considered so that the generated voxels preserve the original morphology and can depict the detailed bedding features and the spatial heterogeneity of the internal attributes. To overcome the shortcomings of existing technologies, this work puts forward a corner-point-grid-based voxelization method for complex geological structure models with folds. We have realized the fast conversion from a 3D geological structure model to a fine voxel model according to the rule of the isocline in Ramsay's fold classification. In addition, the voxel model conforms to the spatial features of folds, pinch-outs and other complex geological structures, and the voxels of the laminas inside a fold accord with the result of geological sedimentation and tectonic movement. This provides a carrier and model foundation for subsequent attribute assignment as well as quantitative analysis and evaluation based on the spatial voxels. Ultimately, we use examples, contrasted with Ramsay's description of isoclines, to discuss the effectiveness and advantages of the proposed method for the voxelization of 3D geological structure models with folds based on corner-point grids.
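    The corner-point grid idea itself is compact: each cell is bounded by four near-vertical pillars and stores eight corner depths, so layer surfaces can follow folded horizons instead of being forced flat. The sketch below builds one layer of such cells draped over a synthetic sinusoidal fold; the surface function and all dimensions are invented for illustration and have no connection to the article's datasets.

```python
import math

# Minimal corner-point-grid illustration: cells whose eight corners track
# a folded horizon. Surface shape and dimensions are invented.

def fold_surface(x, y, base_depth):
    """A synthetic folded horizon: depth varies sinusoidally with x."""
    return base_depth + 20.0 * math.sin(x / 50.0)

def build_corner_point_layer(nx, ny, dx, dy, top_depth, thickness):
    """Return cells as lists of eight (x, y, z) corners draped on the fold.
    The first four corners are the cell top, the last four the cell base."""
    cells = []
    for i in range(nx):
        for j in range(ny):
            corners = []
            for depth in (top_depth, top_depth + thickness):
                for ci, cj in ((i, j), (i + 1, j), (i, j + 1), (i + 1, j + 1)):
                    x, y = ci * dx, cj * dy
                    corners.append((x, y, fold_surface(x, y, depth)))
            cells.append(corners)
    return cells

cells = build_corner_point_layer(nx=10, ny=5, dx=25.0, dy=25.0,
                                 top_depth=1000.0, thickness=5.0)
# 10 x 5 cells, each with 8 corners following the folded top surface
```

    Because each corner carries its own depth, stacking such layers with varying thickness lets the grid pinch out and follow isoclinal fold limbs, which is the property the method exploits.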

  20. Modeling evolution of crosstalk in noisy signal transduction networks

    Science.gov (United States)

    Tareen, Ammar; Wingreen, Ned S.; Mukhopadhyay, Ranjan

    2018-02-01

    Signal transduction networks can form highly interconnected systems within cells due to crosstalk between constituent pathways. To better understand the evolutionary design principles underlying such networks, we study the evolution of crosstalk for two parallel signaling pathways that arise via gene duplication. We use a sequence-based evolutionary algorithm and evolve the network based on two physically motivated fitness functions related to information transmission. We find that one fitness function leads to a high degree of crosstalk while the other leads to pathway specificity. Our results offer insights on the relationship between network architecture and information transmission for noisy biomolecular networks.
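    The evolutionary loop described above can be cartooned with two pathways coupled through a 2×2 interaction matrix: mutations perturb the couplings and selection acts on a fitness function. With a fitness that rewards on-pathway signalling and penalises crosstalk, the off-diagonal couplings decay, mimicking the evolution of pathway specificity; the fitness form, greedy selection rule and all parameters are illustrative assumptions, not the article's sequence-based algorithm or mutual-information fitness.

```python
import random

# Toy evolutionary loop: two duplicated pathways coupled by matrix k,
# mutated and selected greedily for specificity. Purely illustrative.

random.seed(0)

def fitness_specific(k):
    """Reward on-pathway signalling, penalise crosstalk (off-diagonals)."""
    return k[0][0] + k[1][1] - (k[0][1] + k[1][0])

k = [[0.5, 0.5], [0.5, 0.5]]          # just-duplicated, fully cross-talking
for _ in range(2000):
    i, j = random.randrange(2), random.randrange(2)
    trial = [row[:] for row in k]     # mutate one coupling, clamped to [0, 1]
    trial[i][j] = min(1.0, max(0.0, trial[i][j] + random.gauss(0, 0.05)))
    if fitness_specific(trial) >= fitness_specific(k):
        k = trial                     # keep neutral or beneficial mutations

crosstalk = k[0][1] + k[1][0]
# Selection for specificity drives crosstalk toward zero.
```

    Swapping in a fitness that rewards total information transmission instead would preserve or even favour the off-diagonal couplings, echoing the article's finding that the choice of fitness function determines whether crosstalk survives.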