WorldWideScience

Sample records for maximum limit model

  1. Maximum Entropy Production Modeling of Evapotranspiration Partitioning on Heterogeneous Terrain and Canopy Cover: advantages and limitations.

    Science.gov (United States)

    Gutierrez-Jurado, H. A.; Guan, H.; Wang, J.; Wang, H.; Bras, R. L.; Simmons, C. T.

    2015-12-01

    Quantification of evapotranspiration (ET) and its partitioning over regions of heterogeneous topography and canopy poses a challenge using traditional approaches. In this study, we report the results of a novel field experiment design guided by the Maximum Entropy Production model of ET (MEP-ET), formulated for estimating evaporation and transpiration from homogeneous soil and canopy. A catchment with complex terrain and patchy vegetation in South Australia was instrumented to measure temperature, humidity and net radiation at soil and canopy surfaces. The performance of the MEP-ET model in quantifying transpiration and soil evaporation was evaluated during wet and dry conditions against independent direct measurements: transpiration from sap flow and soil evaporation from the Bowen Ratio Energy Balance (BREB) method. MEP-ET transpiration shows remarkable agreement with sap-flow measurements during wet conditions, but consistently overestimates the flux during dry periods. However, an additional term introduced into the original MEP-ET model to account for stronger stomatal regulation during dry spells, based on differences between leaf and air vapor pressure deficits and temperatures, significantly improves the model performance. On the other hand, MEP-ET soil evaporation is in good agreement with BREB estimates regardless of moisture conditions. The experimental design allows plot- and tree-scale quantification of evaporation and transpiration, respectively. This study confirms for the first time that the MEP-ET model, originally developed for homogeneous open bare soil and closed canopy, can be used for modeling ET over heterogeneous land surfaces. Furthermore, we show that with the addition of an empirical function simulating the plants' ability to regulate transpiration, based on the same measurements of temperature and humidity, the method produces reliable estimates of ET during both wet and dry conditions without compromising its parsimony.
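
    For context, the BREB method used here as the reference for soil evaporation rests on a standard energy-balance identity. The sketch below is a minimal illustration of that textbook relation, not the authors' processing code; all numbers are invented example values.

    ```python
    # Minimal Bowen Ratio Energy Balance (BREB) sketch.
    # LE = (Rn - G) / (1 + beta), with beta = gamma * dT/de,
    # where dT and de are the vertical air-temperature and vapor-pressure
    # differences between two measurement heights. Example values only.

    gamma = 0.066   # psychrometric constant [kPa/K], near sea level
    Rn = 450.0      # net radiation [W/m^2]
    G = 50.0        # soil heat flux [W/m^2]
    dT = 0.8        # temperature difference between heights [K]
    de = 0.12       # vapor-pressure difference between heights [kPa]

    beta = gamma * dT / de          # Bowen ratio (sensible/latent)
    LE = (Rn - G) / (1.0 + beta)    # latent heat flux [W/m^2]
    H = Rn - G - LE                 # sensible heat flux [W/m^2]

    print(f"Bowen ratio = {beta:.2f}, LE = {LE:.0f} W/m^2, H = {H:.0f} W/m^2")
    ```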

  2. Experimental studies to validate model calculations and maximum solubility limits for Plutonium and Americium

    International Nuclear Information System (INIS)

    2017-01-01

    This report focuses on studies by KIT-INE to derive a significantly improved description of the chemical behaviour of Americium and Plutonium in saline NaCl, MgCl₂ and CaCl₂ brine systems. The studies are based on new experimental data and aim at deriving reliable Am and Pu solubility limits for the investigated systems as well as comprehensive thermodynamic model descriptions. Both aspects are of high relevance in the context of potential source term estimations for Americium and Plutonium in aqueous brine systems and related scenarios. Americium and Plutonium are long-lived alpha-emitting radionuclides which, due to their high radiotoxicity, need to be accounted for in a reliable and traceable way. The hydrolysis of trivalent actinides and the effect of highly alkaline pH conditions on the solubility of trivalent actinides in calcium chloride rich brine solutions were investigated and a thermodynamic model derived. The solubility of Plutonium in saline brine systems was studied under reducing and non-reducing conditions and is described within a new thermodynamic model. The influence of dissolved carbonate on Americium and Plutonium solubility in MgCl₂ solutions was investigated and quantitative information on Am and Pu solubility limits in these systems derived. Thermodynamic constants and model parameters derived in this work are implemented in the Thermodynamic Reference Database THEREDA owned by BfS. According to the quality assurance approach in THEREDA, it was necessary to publish parts of this work in peer-reviewed scientific journals. The publications focus on solubility experiments, spectroscopy of aqueous and solid species, and thermodynamic data. (Neck et al., Pure Appl. Chem., Vol. 81, (2009), pp. 1555-1568; Altmaier et al., Radiochimica Acta, 97, (2009), pp. 187-192; Altmaier et al., Actinide Research Quarterly, No. 2, (2011), pp. 29-32.)

  3. The two-box model of climate: limitations and applications to planetary habitability and maximum entropy production studies.

    Science.gov (United States)

    Lorenz, Ralph D

    2010-05-12

    The 'two-box model' of planetary climate is discussed. This model has been used to demonstrate consistency of the equator-pole temperature gradient on Earth, Mars and Titan with what would be predicted from a principle of maximum entropy production (MEP). While useful for exposition and for generating first-order estimates of planetary heat transports, it has too low a resolution to investigate climate systems with strong feedbacks. A two-box MEP model agrees well with the observed day:night temperature contrast on the extrasolar planet HD 189733b.
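
    To make the construction concrete, the following toy script reproduces the logic of a two-box MEP calculation: solve the per-box energy balance for a given meridional heat flux F and pick the F that maximizes entropy production. The linearized OLR coefficients and absorbed solar fluxes are illustrative assumptions, not values from the paper.

    ```python
    # Toy two-box MEP climate model (in the spirit of Lorenz's two-box work).
    # Box 1 (low latitude) and box 2 (high latitude) absorb solar fluxes
    # S1 > S2, emit linearized OLR A + B*(T - 273.15), and exchange a heat
    # flux F. MEP selects the F maximizing sigma = F*(1/T2 - 1/T1).
    import numpy as np

    A, B = 200.0, 2.0        # linearized OLR parameters [W/m^2], [W/m^2/K]
    S1, S2 = 300.0, 160.0    # absorbed solar flux per box [W/m^2]

    def temps(F):
        # Energy balance: S1 - F = A + B*(T1-273.15); S2 + F = A + B*(T2-273.15)
        T1 = 273.15 + (S1 - F - A) / B
        T2 = 273.15 + (S2 + F - A) / B
        return T1, T2

    def entropy_production(F):
        T1, T2 = temps(F)
        return F * (1.0 / T2 - 1.0 / T1)   # [W/m^2/K]

    Fs = np.linspace(0.0, (S1 - S2) / 2.0, 2001)  # beyond this F the gradient reverses
    sigma = [entropy_production(F) for F in Fs]
    F_mep = Fs[int(np.argmax(sigma))]
    T1, T2 = temps(F_mep)
    print(f"MEP transport F = {F_mep:.1f} W/m^2, T1 = {T1:.1f} K, T2 = {T2:.1f} K")
    ```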

  4. Hydraulic Limits on Maximum Plant Transpiration

    Science.gov (United States)

    Manzoni, S.; Vico, G.; Katul, G. G.; Palmroth, S.; Jackson, R. B.; Porporato, A. M.

    2011-12-01

    Photosynthesis occurs at the expense of water losses through transpiration. As a consequence of this basic carbon-water interaction at the leaf level, plant growth and ecosystem carbon exchanges are tightly coupled to transpiration. In this contribution, the hydraulic constraints that limit transpiration rates under well-watered conditions are examined across plant functional types and climates. The potential water flow through plants is proportional to both xylem hydraulic conductivity (which depends on plant carbon economy) and the difference in water potential between the soil and the atmosphere (the driving force that pulls water from the soil). In contrast to previous works, we study how this potential flux changes with the amplitude of the driving force (i.e., we focus on xylem properties rather than stomatal regulation). Xylem hydraulic conductivity decreases as the driving force increases, owing to cavitation of the tissues. As a result of this negative feedback, more negative leaf (and xylem) water potentials provide a stronger driving force for water transport while at the same time limiting xylem hydraulic conductivity through cavitation. Here, the leaf water potential that optimally balances driving force against xylem conductivity is quantified, thus defining the maximum transpiration rate that can be sustained by the soil-to-leaf hydraulic system. To apply the proposed framework at the global scale, a novel database of xylem conductivity and cavitation vulnerability across plant types and biomes is developed. Conductivity and water potential at 50% cavitation are shown to be complementary (in particular between angiosperms and conifers), suggesting a tradeoff between transport efficiency and hydraulic safety. Plants from warmer and drier biomes tend to achieve larger maximum transpiration than plants growing in environments with lower atmospheric water demand. The predicted maximum transpiration and the corresponding leaf water ...
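
    A minimal sketch of the optimization described above, under the simplifying assumption of a single soil-to-leaf conductance evaluated at the leaf water potential and a Weibull vulnerability curve; all parameter values are invented for illustration.

    ```python
    # Maximum steady-state transpiration in a simplified soil-to-leaf
    # pathway: E(psi_L) = k(psi_L) * (psi_s - psi_L), where conductance k
    # declines with water potential via a Weibull vulnerability curve.
    # The interior maximum of E defines the sustainable E_max.
    import numpy as np

    k_max = 5.0      # saturated conductance [mmol m^-2 s^-1 MPa^-1]
    b, c = 2.0, 3.0  # Weibull parameters (63% conductance loss near -b MPa)
    psi_s = -0.3     # soil water potential [MPa]

    def k(psi):
        return k_max * np.exp(-(np.abs(psi) / b) ** c)

    def E(psi_leaf):
        return k(psi_leaf) * (psi_s - psi_leaf)

    psi = np.linspace(psi_s, -8.0, 4000)
    flux = E(psi)
    i = int(np.argmax(flux))
    print(f"E_max = {flux[i]:.2f} at psi_leaf = {psi[i]:.2f} MPa")
    ```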

  5. Maximum organic carbon limits at different melter feed rates (U)

    International Nuclear Information System (INIS)

    Choi, A.S.

    1995-01-01

    This report documents the results of a study to assess the impact of varying melter feed rates on the maximum total organic carbon (TOC) limits allowable in the DWPF melter feed. Topics discussed include: carbon content; feed rate; feed composition; melter vapor space temperature; combustion and dilution air; off-gas surges; earlier work on maximum TOC; overview of models; and the results of the work completed

  6. Prey size and availability limits maximum size of rainbow trout in a large tailwater: insights from a drift-foraging bioenergetics model

    Science.gov (United States)

    Dodrill, Michael J.; Yackulic, Charles B.; Kennedy, Theodore A.; Hayes, John W.

    2016-01-01

    The cold and clear water conditions present below many large dams create ideal conditions for the development of economically important salmonid fisheries. Many of these tailwater fisheries have experienced declines in the abundance and condition of large trout species, yet the causes of these declines remain uncertain. Here, we develop, assess, and apply a drift-foraging bioenergetics model to identify the factors limiting rainbow trout (Oncorhynchus mykiss) growth in a large tailwater. We explored the relative importance of temperature, prey quantity, and prey size by constructing scenarios where these variables, both singly and in combination, were altered. Predicted growth matched empirical mass-at-age estimates, particularly for younger ages, demonstrating that the model accurately describes how current temperature and prey conditions interact to determine rainbow trout growth. Modeling scenarios that artificially inflated prey size and abundance demonstrate that rainbow trout growth is limited by the scarcity of large prey items and overall prey availability. For example, shifting 10% of the prey biomass to the 13 mm (large) length class, without increasing overall prey biomass, increased lifetime maximum mass of rainbow trout by 88%. Additionally, warmer temperatures resulted in lower predicted growth at current and lower levels of prey availability; however, growth was similar across all temperatures at higher levels of prey availability. Climate change will likely alter flow and temperature regimes in large rivers with corresponding changes to invertebrate prey resources used by fish. Broader application of drift-foraging bioenergetics models to build a mechanistic understanding of how changes to habitat conditions and prey resources affect growth of salmonids will benefit management of tailwater fisheries.

  7. Maximum total organic carbon limit for DWPF melter feed

    International Nuclear Information System (INIS)

    Choi, A.S.

    1995-01-01

    DWPF recently decided to control the potential flammability of melter off-gas by limiting the total carbon content in the melter feed and maintaining adequate conditions for combustion in the melter plenum. With this new strategy, all the LFL analyzers and associated interlocks and alarms were removed from both the primary and backup melter off-gas systems. Subsequently, D. Iverson of DWPF T&E requested that SRTC determine the maximum allowable total organic carbon (TOC) content in the melter feed, to be implemented as part of the Process Requirements for melter feed preparation (PR-S04). The maximum TOC limit determined in this study was about 24,000 ppm on an aqueous slurry basis. At TOC levels below this limit, the peak concentration of combustible components in the quenched off-gas will not exceed 60 percent of the LFL during off-gas surges of magnitudes up to three times nominal, provided that the melter plenum temperature and the air purge rate to the BUFC are monitored and controlled above 650 °C and 220 lb/hr, respectively. Appropriate interlocks should discontinue feeding when one or both of these conditions are not met. Both the magnitude and duration of an off-gas surge have a major impact on the maximum TOC limit, since they directly affect the melter plenum temperature and combustion. Although the data obtained during recent DWPF melter startup tests showed that the peak magnitude of a surge can be greater than three times nominal, the observed duration was considerably shorter, on the order of several seconds. The long surge duration assumed in this study has a greater impact on the plenum temperature than the peak magnitude, thus making the maximum TOC estimate conservative. Two models were used to make the calculations needed to determine the TOC limit.
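
    The report's own calculations use DWPF-specific models, which are not reproduced here. As a generic illustration of the 60%-of-LFL criterion, the sketch below applies Le Chatelier's mixing rule, a standard way to score a multi-component off-gas against its lower flammable limit; the gas composition is invented.

    ```python
    # Flammability check for a fuel-gas mixture via Le Chatelier's rule:
    # %LFL of the mixture = 100 * sum(C_i / LFL_i), where C_i is the
    # volume fraction [vol%] of combustible i in the off-gas and LFL_i
    # its lower flammable limit [vol%]. Composition values are invented.

    offgas = {
        #            C_i [vol%]  LFL_i [vol%] (in air, ~25 C)
        "H2":       (1.2,        4.0),
        "CO":       (2.0,       12.5),
        "benzene":  (0.1,        1.2),
    }

    pct_of_lfl = 100.0 * sum(c / lfl for c, lfl in offgas.values())
    print(f"Quenched off-gas is at {pct_of_lfl:.0f}% of the mixture LFL")
    print("OK (< 60% criterion)" if pct_of_lfl < 60.0 else "Exceeds 60% criterion")
    ```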

  8. Thermoelectric cooler concepts and the limit for maximum cooling

    International Nuclear Information System (INIS)

    Seifert, W; Hinsche, N F; Pluschke, V

    2014-01-01

    The conventional analysis of a Peltier cooler approximates the material properties as independent of temperature using a constant properties model (CPM). Alternative concepts have been published by Bian and Shakouri (2006 Appl. Phys. Lett. 89 212101), Bian (et al 2007 Phys. Rev. B 75 245208) and Snyder et al (2012 Phys. Rev. B 86 045202). While Snyder's Thomson cooler concept results from a consideration of compatibility, the method of Bian et al focuses on the redistribution of heat. Thus, both approaches are based on different principles. In this paper we compare the new concepts to CPM and we reconsider the limit for maximum cooling. The results provide a new perspective on maximum cooling. (paper)

  9. Maximum penetration level of distributed generation without violating voltage limits

    NARCIS (Netherlands)

    Morren, J.; Haan, de S.W.H.

    2009-01-01

    Connection of Distributed Generation (DG) units to a distribution network will result in a local voltage increase. As there will be a maximum on the allowable voltage increase, this will limit the maximum allowable penetration level of DG. By reactive power compensation (by the DG unit itself) a ...

  10. A comparative study on the forming limit diagram prediction between Marciniak-Kuczynski model and modified maximum force criterion by using the evolving non-associated Hill48 plasticity model

    Science.gov (United States)

    Shen, Fuhui; Lian, Junhe; Münstermann, Sebastian

    2018-05-01

    Experimental and numerical investigations on the forming limit diagram (FLD) of a ferritic stainless steel were performed in this study. The FLD of this material was obtained by Nakajima tests. Both the Marciniak-Kuczynski (MK) model and the modified maximum force criterion (MMFC) were used for the theoretical prediction of the FLD. From the results of uniaxial tensile tests along different loading directions with respect to the rolling direction, strong anisotropic plastic behaviour was observed in the investigated steel. A recently proposed anisotropic evolving non-associated Hill48 (enHill48) plasticity model, which was developed from the conventional Hill48 model based on the non-associated flow rule with evolving anisotropic parameters, was adopted to describe the anisotropic hardening behaviour of the investigated material. In the previous study, the model was coupled with the MMFC for FLD prediction. In the current study, the enHill48 was further coupled with the MK model. By comparing the predicted forming limit curves with the experimental results, the influences of anisotropy in terms of flow rule and evolving features on the forming limit prediction were revealed and analysed. In addition, the forming limit predictive performances of the MK and the MMFC models in conjunction with the enHill48 plasticity model were compared and evaluated.
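
    The MMFC builds on the classical maximum force (Considère) condition. As a minimal, self-contained illustration of that underlying criterion (not of the enHill48 or full MMFC implementation), the sketch below locates the strain at which the hardening rate drops to the stress level for a power-law material, where the analytic answer is ε* = n.

    ```python
    # Classical maximum-force (Considere) condition, the starting point of
    # the MMFC: necking begins when d(sigma)/d(eps) = sigma. For power-law
    # hardening sigma = K * eps**n the analytic limit strain is eps* = n.
    import numpy as np

    K, n = 600.0, 0.2          # hardening coefficient [MPa] and exponent

    eps = np.linspace(1e-4, 1.0, 100000)
    sigma = K * eps ** n
    dsigma = np.gradient(sigma, eps)

    i = int(np.argmin(np.abs(dsigma - sigma)))   # where hardening rate = stress
    print(f"numerical limit strain = {eps[i]:.3f} (analytic eps* = n = {n})")
    ```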

  11. Maximum time-dependent space-charge limited diode currents

    Energy Technology Data Exchange (ETDEWEB)

    Griswold, M. E. [Tri Alpha Energy, Inc., Rancho Santa Margarita, California 92688 (United States); Fisch, N. J. [Princeton Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543 (United States)

    2016-01-15

    Recent papers claim that a one dimensional (1D) diode with a time-varying voltage drop can transmit current densities that exceed the Child-Langmuir (CL) limit on average, apparently contradicting a previous conjecture that there is a hard limit on the average current density across any 1D diode, as t → ∞, that is equal to the CL limit. However, these claims rest on a different definition of the CL limit, namely, a comparison between the time-averaged diode current and the adiabatic average of the expression for the stationary CL limit. If the current were considered as a function of the maximum applied voltage, rather than the average applied voltage, then the original conjecture would not have been refuted.
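
    For reference, the stationary CL limit discussed above is a closed-form expression; a quick evaluation for illustrative gap parameters:

    ```python
    # Stationary Child-Langmuir current density for a planar vacuum diode:
    # J_CL = (4*eps0/9) * sqrt(2*e/m_e) * V**1.5 / d**2. Example values only.
    import math

    eps0 = 8.854e-12   # vacuum permittivity [F/m]
    e = 1.602e-19      # elementary charge [C]
    m_e = 9.109e-31    # electron mass [kg]

    V = 1.0e3          # applied gap voltage [V]
    d = 1.0e-2         # gap spacing [m]

    J_cl = (4.0 * eps0 / 9.0) * math.sqrt(2.0 * e / m_e) * V ** 1.5 / d ** 2
    print(f"J_CL = {J_cl:.0f} A/m^2")   # ~740 A/m^2 for these values
    ```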

  12. Experimental studies to validate model calculations and maximum solubility limits for Plutonium and Americium; Experimentelle Arbeiten zur Absicherung von Modellrechnungen und Maximalkonzentrationen fuer Plutonium und Americium

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2017-02-16

    This report focuses on studies by KIT-INE to derive a significantly improved description of the chemical behaviour of Americium and Plutonium in saline NaCl, MgCl₂ and CaCl₂ brine systems. The studies are based on new experimental data and aim at deriving reliable Am and Pu solubility limits for the investigated systems as well as comprehensive thermodynamic model descriptions. Both aspects are of high relevance in the context of potential source term estimations for Americium and Plutonium in aqueous brine systems and related scenarios. Americium and Plutonium are long-lived alpha-emitting radionuclides which, due to their high radiotoxicity, need to be accounted for in a reliable and traceable way. The hydrolysis of trivalent actinides and the effect of highly alkaline pH conditions on the solubility of trivalent actinides in calcium chloride rich brine solutions were investigated and a thermodynamic model derived. The solubility of Plutonium in saline brine systems was studied under reducing and non-reducing conditions and is described within a new thermodynamic model. The influence of dissolved carbonate on Americium and Plutonium solubility in MgCl₂ solutions was investigated and quantitative information on Am and Pu solubility limits in these systems derived. Thermodynamic constants and model parameters derived in this work are implemented in the Thermodynamic Reference Database THEREDA owned by BfS. According to the quality assurance approach in THEREDA, it was necessary to publish parts of this work in peer-reviewed scientific journals. The publications focus on solubility experiments, spectroscopy of aqueous and solid species, and thermodynamic data. (Neck et al., Pure Appl. Chem., Vol. 81, (2009), pp. 1555-1568; Altmaier et al., Radiochimica Acta, 97, (2009), pp. 187-192; Altmaier et al., Actinide Research Quarterly, No. 2, (2011), pp. 29-32.)

  13. Radiation pressure acceleration: The factors limiting maximum attainable ion energy

    Energy Technology Data Exchange (ETDEWEB)

    Bulanov, S. S.; Esarey, E.; Schroeder, C. B. [Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Bulanov, S. V. [KPSI, National Institutes for Quantum and Radiological Science and Technology, Kizugawa, Kyoto 619-0215 (Japan); A. M. Prokhorov Institute of General Physics RAS, Moscow 119991 (Russian Federation); Esirkepov, T. Zh.; Kando, M. [KPSI, National Institutes for Quantum and Radiological Science and Technology, Kizugawa, Kyoto 619-0215 (Japan); Pegoraro, F. [Physics Department, University of Pisa and Istituto Nazionale di Ottica, CNR, Pisa 56127 (Italy); Leemans, W. P. [Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Physics Department, University of California, Berkeley, California 94720 (United States)

    2016-05-15

    Radiation pressure acceleration (RPA) is a highly efficient mechanism of laser-driven ion acceleration, with near-complete transfer of the laser energy to the ions in the relativistic regime. However, there is a fundamental limit on the maximum attainable ion energy, which is determined by the group velocity of the laser. Tightly focused laser pulses have group velocities smaller than the vacuum light speed, and, since they offer the high intensity needed for the RPA regime, it is plausible that group-velocity effects would manifest themselves in experiments involving tightly focused pulses and thin foils. However, in this case finite spot size effects are important, and another limiting factor, the transverse expansion of the target, may dominate over the group velocity effect. As the laser pulse diffracts after passing the focus, the target expands accordingly due to the transverse intensity profile of the laser. Due to this expansion, the areal density of the target decreases, making it transparent to radiation and effectively terminating the acceleration. Off-normal incidence of the laser on the target, due either to the experimental setup or to deformation of the target, will also set a limit on the maximum ion energy.

  14. Global Harmonization of Maximum Residue Limits for Pesticides.

    Science.gov (United States)

    Ambrus, Árpád; Yang, Yong Zhen

    2016-01-13

    International trade plays an important role in national economics. The Codex Alimentarius Commission develops harmonized international food standards, guidelines, and codes of practice to protect the health of consumers and to ensure fair practices in the food trade. The Codex maximum residue limits (MRLs) elaborated by the Codex Committee on Pesticide Residues are based on the recommendations of the FAO/WHO Joint Meeting on Pesticides (JMPR). The basic principles applied currently by the JMPR for the evaluation of experimental data and related information are described together with some of the areas in which further developments are needed.

  15. Noise and physical limits to maximum resolution of PET images

    Energy Technology Data Exchange (ETDEWEB)

    Herraiz, J.L.; Espana, S. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain); Vicente, E.; Vaquero, J.J.; Desco, M. [Unidad de Medicina y Cirugia Experimental, Hospital GU 'Gregorio Maranon', E-28007 Madrid (Spain); Udias, J.M. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain)], E-mail: jose@nuc2.fis.ucm.es

    2007-10-01

    In this work we show that there is a limit to the maximum resolution achievable with a high-resolution PET scanner, as well as to the best signal-to-noise ratio; both are ultimately related to the physical effects involved in the emission and detection of the radiation, and thus they cannot be overcome by any particular reconstruction method. These effects prevent the spatial high-frequency components of the imaged structures from being recorded by the scanner. Therefore, the information encoded in these high frequencies cannot be recovered by any reconstruction technique. Within this framework, we have determined the maximum resolution achievable for a given acquisition as a function of data statistics and scanner parameters, such as the size of the crystals or the inter-crystal scatter. In particular, the noise level in the data is outlined as a factor limiting the ability to produce high-resolution images in tomographs with small crystal sizes. These results have implications for deciding the optimal number of voxels of the reconstructed image and for designing better PET scanners.

  16. Noise and physical limits to maximum resolution of PET images

    International Nuclear Information System (INIS)

    Herraiz, J.L.; Espana, S.; Vicente, E.; Vaquero, J.J.; Desco, M.; Udias, J.M.

    2007-01-01

    In this work we show that there is a limit to the maximum resolution achievable with a high-resolution PET scanner, as well as to the best signal-to-noise ratio; both are ultimately related to the physical effects involved in the emission and detection of the radiation, and thus they cannot be overcome by any particular reconstruction method. These effects prevent the spatial high-frequency components of the imaged structures from being recorded by the scanner. Therefore, the information encoded in these high frequencies cannot be recovered by any reconstruction technique. Within this framework, we have determined the maximum resolution achievable for a given acquisition as a function of data statistics and scanner parameters, such as the size of the crystals or the inter-crystal scatter. In particular, the noise level in the data is outlined as a factor limiting the ability to produce high-resolution images in tomographs with small crystal sizes. These results have implications for deciding the optimal number of voxels of the reconstructed image and for designing better PET scanners.
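
    As a rough quantitative companion to these two records, the sketch below evaluates a widely quoted empirical combination of the physical blurring terms (crystal size, positron range, photon non-collinearity; see e.g. Moses, NIM A 2011). It is an independent illustration, not the authors' analysis, and the parameter values are assumptions.

    ```python
    # Rough PET spatial-resolution estimate from detector physics:
    # FWHM ~ 1.25 * sqrt((d/2)**2 + b**2 + r**2 + (0.0022*D)**2)
    # d: crystal width, b: crystal decoding error, r: effective positron
    # range, D: ring diameter (non-collinearity term), all in mm.
    import math

    d = 4.0     # crystal width [mm]
    b = 0.0     # decoding error [mm] (one-to-one crystal coupling)
    r = 0.54    # effective positron range for F-18 [mm]
    D = 800.0   # detector ring diameter [mm]

    fwhm = 1.25 * math.sqrt((d / 2) ** 2 + b ** 2 + r ** 2 + (0.0022 * D) ** 2)
    print(f"reconstructed resolution ~ {fwhm:.2f} mm FWHM")
    ```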

  17. Feedback Limits to Maximum Seed Masses of Black Holes

    International Nuclear Information System (INIS)

    Pacucci, Fabio; Natarajan, Priyamvada; Ferrara, Andrea

    2017-01-01

    The most massive black holes observed in the universe weigh up to ∼10¹⁰ M⊙, nearly independent of redshift. Reaching these final masses likely required copious accretion and several major mergers. Employing a dynamical approach that rests on the role played by a new, relevant physical scale—the transition radius—we provide a theoretical calculation of the maximum mass achievable by a black hole seed that forms in an isolated halo, one that scarcely merged. Incorporating effects at the transition radius and their impact on the evolution of accretion in isolated halos, we are able to obtain new limits for permitted growth. We find that large black hole seeds (M• ≳ 10⁴ M⊙) hosted in small isolated halos (Mh ≲ 10⁹ M⊙) accreting with relatively small radiative efficiencies (ε ≲ 0.1) grow optimally in these circumstances. Moreover, we show that the standard M•–σ relation observed at z ∼ 0 cannot be established in isolated halos at high z, but requires the occurrence of mergers. Since the average limiting mass of black holes formed at z ≳ 10 is in the range 10⁴⁻⁶ M⊙, we expect to observe them in local galaxies as intermediate-mass black holes, when hosted in the rare halos that experienced only minor or no merging events. Such ancient black holes, formed in isolation with subsequent scant growth, could survive, almost unchanged, until the present.

  18. Feedback Limits to Maximum Seed Masses of Black Holes

    Energy Technology Data Exchange (ETDEWEB)

    Pacucci, Fabio; Natarajan, Priyamvada [Department of Physics, Yale University, P.O. Box 208121, New Haven, CT 06520 (United States); Ferrara, Andrea [Scuola Normale Superiore, Piazza dei Cavalieri 7, I-56126 Pisa (Italy)

    2017-02-01

    The most massive black holes observed in the universe weigh up to ∼10¹⁰ M⊙, nearly independent of redshift. Reaching these final masses likely required copious accretion and several major mergers. Employing a dynamical approach that rests on the role played by a new, relevant physical scale—the transition radius—we provide a theoretical calculation of the maximum mass achievable by a black hole seed that forms in an isolated halo, one that scarcely merged. Incorporating effects at the transition radius and their impact on the evolution of accretion in isolated halos, we are able to obtain new limits for permitted growth. We find that large black hole seeds (M• ≳ 10⁴ M⊙) hosted in small isolated halos (Mh ≲ 10⁹ M⊙) accreting with relatively small radiative efficiencies (ε ≲ 0.1) grow optimally in these circumstances. Moreover, we show that the standard M•–σ relation observed at z ∼ 0 cannot be established in isolated halos at high z, but requires the occurrence of mergers. Since the average limiting mass of black holes formed at z ≳ 10 is in the range 10⁴⁻⁶ M⊙, we expect to observe them in local galaxies as intermediate-mass black holes, when hosted in the rare halos that experienced only minor or no merging events. Such ancient black holes, formed in isolation with subsequent scant growth, could survive, almost unchanged, until the present.
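
    As a back-of-the-envelope companion to these records, the sketch below evaluates Eddington-limited growth of a seed, the standard upper bound against which such feedback limits are compared. The timescale, efficiency, and seed mass are illustrative assumptions, not the paper's model.

    ```python
    # Eddington-limited growth of a black hole seed: with radiative
    # efficiency eps, M(t) = M0 * exp((1 - eps)/eps * t / t_Edd), where
    # t_Edd = sigma_T * c / (4*pi*G*m_p) ~ 450 Myr. Values are illustrative.
    import math

    t_edd_myr = 450.0   # Eddington timescale [Myr]
    eps = 0.1           # radiative efficiency
    M0 = 1.0e4          # seed mass [solar masses]
    t_myr = 500.0       # growth time available at high redshift [Myr]

    M = M0 * math.exp((1.0 - eps) / eps * t_myr / t_edd_myr)
    print(f"mass after {t_myr:.0f} Myr of Eddington accretion: {M:.2e} Msun")
    ```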

  19. Determining Maximum Photovoltaic Penetration in a Distribution Grid considering Grid Operation Limits

    DEFF Research Database (Denmark)

    Kordheili, Reza Ahmadi; Bak-Jensen, Birgitte; Pillai, Jayakrishnan Radhakrishna

    2014-01-01

    High penetration of photovoltaic panels in a distribution grid can bring the grid to its operation limits. The main focus of the paper is to determine the maximum photovoltaic penetration level in the grid. Three main criteria were investigated for determining the maximum penetration level of PV panels: maximum voltage deviation of customers, cable current limits, and transformer nominal value. Three different PV location scenarios were investigated for this grid: even distribution of PV panels, aggregation of panels at the beginning of each feeder, and aggregation of panels at the end of each feeder. Load modeling is done using the Velander formula. Since PV generation is highest in the summer due to irradiation, a summer day was chosen to determine the maximum penetration level. Voltage deviation of different buses was investigated for different penetration levels. The proposed model was simulated on a Danish distribution grid.
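
    A first-order illustration of the voltage-deviation criterion used above: the classic ΔV ≈ (RP + XQ)/V feeder approximation, solved for the largest PV injection that keeps the rise within a chosen band. Impedance and limit values are invented.

    ```python
    # First-order feeder voltage-rise check for PV hosting capacity:
    # dV ~ (R*P + X*Q)/V_nom for a PV injection P, Q at the end of a
    # radial feeder with series impedance R + jX. Values are illustrative.

    R, X = 0.3, 0.1        # cumulative feeder impedance [ohm]
    V_nom = 400.0          # nominal line voltage [V]
    dV_max = 0.05 * V_nom  # allowed voltage deviation (here 5%)

    Q = 0.0                # PV at unity power factor
    # Solve (R*P + X*Q)/V_nom <= dV_max for the maximum active power:
    P_max = (dV_max * V_nom - X * Q) / R
    print(f"max PV injection before violating the voltage limit: {P_max/1e3:.1f} kW")
    ```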

  20. 5 CFR 581.402 - Maximum garnishment limitations.

    Science.gov (United States)

    2010-01-01

    From 5 CFR part 581, Processing Garnishment Orders for Child Support and/or Alimony (Consumer Credit Protection Act restrictions): pursuant to section 1673(b)(2)(A) and (B) of title 15 of the United States Code (the Consumer Credit Protection Act) ... local law, the maximum part of the aggregate disposable earnings subject to garnishment to enforce any ...

  1. Improved Maximum Parsimony Models for Phylogenetic Networks.

    Science.gov (United States)

    Van Iersel, Leo; Jones, Mark; Scornavacca, Celine

    2018-05-01

    Phylogenetic networks are well suited to represent evolutionary histories comprising reticulate evolution. Several methods aiming at reconstructing explicit phylogenetic networks have been developed in the last two decades. In this article, we propose a new definition of maximum parsimony for phylogenetic networks that permits modeling of biological scenarios that cannot be modeled by the definitions currently present in the literature (namely, the "hardwired" and "softwired" parsimony). Building on this new definition, we provide several algorithmic results that lay the foundations for new parsimony-based methods for phylogenetic network reconstruction.

  2. Modelling maximum likelihood estimation of availability

    International Nuclear Information System (INIS)

    Waller, R.A.; Tietjen, G.L.; Rock, G.W.

    1975-01-01

    Suppose the performance of a nuclear-powered electrical generating plant is continuously monitored to record the sequence of failures and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability: that is, the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables X and Y, which denote the time-to-failure and time-to-repair variables, respectively. Once those statistical models are specified, the availability A(t) can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice, and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter λ and the time-to-repair model for Y is an exponential density with parameter θ. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t) = λ/(λ+θ) + [θ/(λ+θ)]·exp{−[(1/λ) + (1/θ)]t} for t > 0, and the steady-state availability is A(∞) = λ/(λ+θ). We use the observations from n failure-repair cycles of the power plant, say X1, X2, ..., Xn, Y1, Y2, ..., Yn, to present the maximum likelihood estimators of A(t) and A(∞). The exact sampling distributions of those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
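
    Following the abstract's parametrization (λ and θ are the exponential means), the MLEs are the sample means, and the plug-in estimate of A(t) follows directly. A minimal sketch with invented failure and repair data:

    ```python
    # Maximum likelihood estimate of availability A(t) for exponential
    # time-to-failure X (mean lambda) and time-to-repair Y (mean theta),
    # using the parametrization in the abstract. With exponential models
    # the MLEs are the sample means; A(t) is estimated by plug-in.
    import math

    X = [120.0, 95.0, 210.0, 160.0, 130.0]   # observed times-to-failure [h]
    Y = [4.0, 6.5, 3.0, 5.0, 4.5]            # observed times-to-repair [h]

    lam = sum(X) / len(X)    # MLE of mean time-to-failure
    theta = sum(Y) / len(Y)  # MLE of mean time-to-repair

    def A(t):
        return (lam / (lam + theta)
                + theta / (lam + theta) * math.exp(-((1/lam) + (1/theta)) * t))

    print(f"A(10 h) = {A(10.0):.4f}, steady state A(inf) = {lam/(lam+theta):.4f}")
    ```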

  3. Reduced oxygen at high altitude limits maximum size.

    Science.gov (United States)

    Peck, L S; Chapelle, G

    2003-11-07

    The trend towards large size in marine animals with latitude, and the existence of giant marine species in polar regions, have long been recognized but remained enigmatic until a recent study showed them to be an effect of increased oxygen availability in sea water of low temperature. The effect was apparent in data from 12 sites worldwide because of variations in water oxygen content controlled by differences in temperature and salinity. Another major physical factor affecting oxygen content in aquatic environments is reduced pressure at high altitude. Suitable data from high-altitude sites are very scarce. However, an exceptionally rich crustacean collection, which remains largely undescribed, was obtained by the British 1937 expedition from Lake Titicaca on the border between Peru and Bolivia in the Andes at an altitude of 3809 m. We show that in Lake Titicaca the maximum length of amphipods is 2-4 times smaller than at other low-salinity sites (the Caspian Sea and Lake Baikal).

  4. Vehicle Maximum Weight Limitation Based on Intelligent Weight Sensor

    Science.gov (United States)

    Raihan, W.; Tessar, R. M.; Ernest, C. O. S.; E Byan, W. R.; Winda, A.

    2017-03-01

    Vehicle weight is an important factor to be maintained for transportation safety. A weight limitation system is proposed to make sure the vehicle weight is always below its design value before the vehicle is used by the driver. The proposed system is divided into two subsystems, namely a vehicle weight confirmation system and a weight warning system. In the vehicle weight confirmation system, the weight sensor operates once the ignition switch is turned on. When the weight is under the weight limit, the starter can be switched on to start the engine; otherwise it is locked. The second subsystem operates after checking that all doors are closed: once the doors of the car are closed, the weight warning system checks the weight again while the engine is running. Both subsystems, the vehicle weight confirmation system and the weight warning system, achieved 100% accuracy in tests. These results show that the proposed vehicle weight limitation system operates well.
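
    A minimal sketch of the two-stage logic described above, with hypothetical function names and threshold values; the actual system operates on an intelligent weight sensor rather than hard-coded inputs.

    ```python
    # Sketch of the two-stage weight limitation logic:
    # (1) before start: block the starter if weight exceeds the limit;
    # (2) while running, doors closed: re-check weight and raise a warning.
    # Hypothetical names and threshold values, for illustration only.

    WEIGHT_LIMIT_KG = 2200.0

    def can_start_engine(measured_weight_kg: float) -> bool:
        """Vehicle weight confirmation system: runs once at ignition-on."""
        return measured_weight_kg <= WEIGHT_LIMIT_KG

    def weight_warning(measured_weight_kg: float, all_doors_closed: bool) -> bool:
        """Weight warning system: re-checks weight with the engine running."""
        return all_doors_closed and measured_weight_kg > WEIGHT_LIMIT_KG

    print(can_start_engine(2100.0))        # True: starter enabled
    print(weight_warning(2350.0, True))    # True: warn the driver
    ```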

  5. Maximum total organic carbon limits at different DWPF melter feed rates (U)

    International Nuclear Information System (INIS)

    Choi, A.S.

    1996-01-01

    The maximum total organic carbon (TOC) limits that are allowable in the DWPF melter feed without forming a potentially flammable vapor in the off-gas system were determined at feed rates varying from 0.7 to 1.5 GPM. At the maximum TOC levels predicted, the peak concentration of combustible gases in the quenched off-gas will not exceed 60 percent of the lower flammable limit during a 3X off-gas surge, provided that the indicated melter vapor space temperature and the total air supply to the melter are maintained. All the necessary calculations for this study were made using the 4-stage cold cap model and the melter off-gas dynamics model. A high degree of conservatism was included in the calculational bases and assumptions. As a result, the proposed correlations are believed to be conservative enough to be used for melter off-gas flammability control purposes.

  6. Modelling maximum canopy conductance and transpiration in ...

    African Journals Online (AJOL)

    There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...

  7. Modeling Complex Time Limits

    Directory of Open Access Journals (Sweden)

    Oleg Svatos

    2013-01-01

    In this paper we analyze the complexity of the time limits found especially in regulated processes of public administration. First we review the most popular process modeling languages. An example scenario based on current Czech legislation is defined and then captured in the discussed process modeling languages. The analysis shows that contemporary process modeling languages support the capture of time limits only partially, which causes trouble for analysts and unnecessary complexity in the models. Given these unsatisfactory results, we analyze the complexity of time limits in greater detail and outline the lifecycles of a time limit using the multiple dynamic generalizations pattern. As an alternative to the popular process modeling languages, we present the PSD process modeling language, which supports the defined lifecycles of a time limit natively and therefore keeps the models simple and easy to understand.

  8. Modeling maximum daily temperature using a varying coefficient regression model

    Science.gov (United States)

    Han Li; Xinwei Deng; Dong-Yum Kim; Eric P. Smith

    2014-01-01

    Relationships between stream water and air temperatures are often modeled using linear or nonlinear regression methods. Despite a strong relationship between water and air temperatures and a variety of models that are effective for data summarized on a weekly basis, such models did not yield consistently good predictions for summaries such as daily maximum temperature...

  9. Accurate modeling and maximum power point detection of ...

    African Journals Online (AJOL)

    Accurate modeling and maximum power point detection of photovoltaic modules ... Determination of the MPP enables the PV system to deliver its maximum available power. ... adaptive artificial neural network: proposition for a new sizing procedure.

  10. Maximum entropy models of ecosystem functioning

    International Nuclear Information System (INIS)

    Bertram, Jason

    2014-01-01

    Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes’ broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense using a savanna plant ecology model as an example

  11. Maximum entropy models of ecosystem functioning

    Energy Technology Data Exchange (ETDEWEB)

    Bertram, Jason, E-mail: jason.bertram@anu.edu.au [Research School of Biology, The Australian National University, Canberra ACT 0200 (Australia)

    2014-12-05

    Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes’ broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense using a savanna plant ecology model as an example.
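
    As a concrete MaxEnt illustration in the sample-frequency spirit discussed in these two records, the sketch below solves a small discrete MaxEnt problem with a single mean constraint, which yields the familiar exponential-family form. The states and constraint value are arbitrary examples.

    ```python
    # Discrete MaxEnt sketch: among distributions p on states x with a
    # fixed mean sum(p*x) = mu, entropy is maximized by an exponential
    # family p_i ~ exp(-lam*x_i); solve for the multiplier numerically.
    import numpy as np
    from scipy.optimize import brentq

    x = np.arange(1, 7)     # states (e.g. abundance classes)
    mu = 2.5                # prescribed mean constraint

    def mean_of(lam):
        w = np.exp(-lam * x)
        p = w / w.sum()
        return p @ x

    lam = brentq(lambda l: mean_of(l) - mu, -10.0, 10.0)
    p = np.exp(-lam * x); p /= p.sum()
    entropy = -(p * np.log(p)).sum()
    print(f"lambda = {lam:.3f}, MaxEnt p = {np.round(p, 3)}, H = {entropy:.3f}")
    ```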

  12. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes; they are also known as latent class models or unsupervised learning models. Recently, fitting finite mixture models by maximum likelihood estimation has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that provides consistent estimates as the sample size increases to infinity. Thus, maximum likelihood estimation is used to fit a finite mixture model in the present paper in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood estimation in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results show that there is a negative effect between the rubber price and the stock market price for Malaysia, Thailand, the Philippines and Indonesia.
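
    A minimal EM sketch for exactly this model class, a two-component univariate normal mixture fitted by maximum likelihood; the data are synthetic, not the stock market or rubber price series used in the paper.

    ```python
    # EM algorithm for a two-component univariate normal mixture.
    import numpy as np

    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1.5, 200)])

    # initial weights, means, standard deviations
    w, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

    def normal_pdf(x, m, s):
        return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

    for _ in range(200):
        # E-step: posterior responsibility of each component for each point
        dens = np.stack([w[k] * normal_pdf(data, mu[k], sd[k]) for k in range(2)])
        resp = dens / dens.sum(axis=0)
        # M-step: reweighted sample statistics
        Nk = resp.sum(axis=1)
        w = Nk / len(data)
        mu = (resp @ data) / Nk
        sd = np.sqrt((resp * (data - mu[:, None]) ** 2).sum(axis=1) / Nk)

    print("weights:", np.round(w, 3), "means:", np.round(mu, 3), "sds:", np.round(sd, 3))
    ```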

  13. Maximum principle and convergence of central schemes based on slope limiters

    KAUST Repository

    Mehmetoglu, Orhan; Popov, Bojan

    2012-01-01

    A maximum principle and convergence of second order central schemes is proven for scalar conservation laws in dimension one. It is well known that to establish a maximum principle a nonlinear piecewise linear reconstruction is needed and a typical choice is the minmod limiter. Unfortunately, this implies that the scheme uses a first order reconstruction at local extrema. The novelty here is that we allow local nonlinear reconstructions which do not reduce to first order at local extrema and still prove maximum principle and convergence. © 2011 American Mathematical Society.
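
    For reference, the minmod limiter at the center of this analysis is a one-line function; the sketch below shows it acting on a set of cell averages, returning a zero (first-order) slope where neighbor slopes disagree in sign.

    ```python
    # The minmod slope limiter used in second-order central schemes: it
    # returns the smaller-magnitude slope when the neighbor slopes agree
    # in sign and zero otherwise, which enforces the maximum principle
    # but reduces the reconstruction to first order at local extrema.
    import numpy as np

    def minmod(a, b):
        return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

    u = np.array([0.0, 1.0, 3.0, 2.0, 2.5])   # cell averages
    left = u[1:-1] - u[:-2]                   # backward differences
    right = u[2:] - u[1:-1]                   # forward differences
    slopes = minmod(left, right)              # limited slopes, interior cells
    print(slopes)   # [1. 0. 0.] -- zero slopes around the local extremum
    ```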

  14. Maximum β limited by ideal MHD ballooning instabilities in JT-60

    International Nuclear Information System (INIS)

    Seki, Shogo; Azumi, Masashi

    1986-03-01

    The maximum β limited by ideal MHD ballooning instabilities is investigated for divertor configurations in JT-60. The maximum β against ballooning modes in JT-60 depends strongly on the distribution of the safety factor over the magnetic surfaces. Maximum β is ∼2% for q₀ = 1.0, but exceeds 3% for q₀ = 1.5. These results suggest that profile control of the safety factor, especially on the magnetic axis, is attractive for higher-β operation in JT-60. (author)

  15. Physical Limits on Hmax, the Maximum Height of Glaciers and Ice Sheets

    Science.gov (United States)

    Lipovsky, B. P.

    2017-12-01

    The longest glaciers and ice sheets on Earth never achieve a topographic relief, or height, greater than about Hmax = 4 km. What laws govern this apparent maximum height to which a glacier or ice sheet may rise? Two types of answer appear possible: one relating to geological process and the other to ice dynamics. In the first type of answer, one might suppose that if Earth had 100 km tall mountains then there would be many 20 km tall glaciers. The counterpoint to this argument is that recent evidence suggests that glaciers themselves limit the maximum height of mountain ranges. We turn, then, to ice dynamical explanations for Hmax. The classical ice dynamical theory of Nye (1951), however, does not predict any break in scaling to give rise to a maximum height, Hmax. I present a simple model for the height of glaciers and ice sheets. The expression is derived from a simplified representation of a thermomechanically coupled ice sheet that experiences a basal shear stress governed by Coulomb friction (i.e., a stress proportional to the overburden pressure minus the water pressure). I compare this model to satellite-derived digital elevation map measurements of glacier surface height profiles for the 200,000 glaciers in the Randolph Glacier Inventory (Pfeffer et al., 2014) as well as flowlines from the Greenland and Antarctic Ice Sheets. The simplified model provides a surprisingly good fit to these global observations. Small glaciers less than 1 km in length are characterized by a negligible influence of basal meltwater, cold (about −15 °C) beds, and high surface slopes (about 30°). Glaciers longer than a critical length of about 30 km are characterized by an ice-bed interface that is weakened by the presence of meltwater and is therefore not capable of supporting steep surface slopes. The simplified model makes predictions of ice volume change as a function of surface temperature, accumulation rate, and geothermal heat flux. For this reason, it provides insights into ...
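
    A minimal numerical companion to the scaling argument above, under the strong assumption of a perfectly plastic ice sheet with uniform basal strength (τ_b ≈ 1 bar is a conventional illustrative value, not a fitted parameter):

    ```python
    # Height of a perfectly plastic ice sheet with basal strength tau_b:
    # h(x) = sqrt(2*tau_b*(L - x)/(rho*g)), so the maximum height at the
    # divide scales as sqrt(L). Parameter values are illustrative.
    import math

    rho, g = 917.0, 9.81   # ice density [kg/m^3], gravity [m/s^2]
    tau_b = 1.0e5          # basal yield strength [Pa] (~1 bar)

    for L_km in (1.0, 30.0, 1000.0, 3000.0):   # glacier/ice-sheet length
        L = L_km * 1e3
        H = math.sqrt(2.0 * tau_b * L / (rho * g))
        print(f"L = {L_km:7.0f} km -> H_max ~ {H/1e3:.2f} km")
    ```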

  16. Theoretical and experimental investigations of the limits to the maximum output power of laser diodes

    International Nuclear Information System (INIS)

    Wenzel, H; Crump, P; Pietrzak, A; Wang, X; Erbert, G; Traenkle, G

    2010-01-01

    The factors that limit both the continuous wave (CW) and the pulsed output power of broad-area laser diodes driven at very high currents are investigated theoretically and experimentally. The decrease in the gain due to self-heating under CW operation and spectral holeburning under pulsed operation, as well as heterobarrier carrier leakage and longitudinal spatial holeburning, are the dominant mechanisms limiting the maximum achievable output power.

  17. Bayesian Reliability Estimation for Deteriorating Systems with Limited Samples Using the Maximum Entropy Approach

    OpenAIRE

    Xiao, Ning-Cong; Li, Yan-Feng; Wang, Zhonglai; Peng, Weiwen; Huang, Hong-Zhong

    2013-01-01

    In this paper, a combination of the maximum entropy method and Bayesian inference is proposed for reliability assessment of deteriorating systems. Due to various uncertainties, scarce data and incomplete information, system parameters usually cannot be determined precisely. These uncertain parameters can be modeled by fuzzy set theory and Bayesian inference, which have been proved useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to cal...

  18. Maximum entropy principle and hydrodynamic models in statistical mechanics

    International Nuclear Information System (INIS)

    Trovato, M.; Reggiani, L.

    2012-01-01

    This review presents the state of the art of the maximum entropy principle (MEP) in its classical and quantum (QMEP) formulation. Within the classical MEP we overview a general theory able to provide, in a dynamical context, the macroscopic relevant variables for carrier transport in the presence of electric fields of arbitrary strength. For the macroscopic variables the linearized maximum entropy approach is developed, including full-band effects within a total energy scheme. Under spatially homogeneous conditions, we construct a closed set of hydrodynamic equations for the small-signal (dynamic) response of the macroscopic variables. The coupling between the driving field and the energy dissipation is analyzed quantitatively by using an arbitrary number of moments of the distribution function. Analogously, the theoretical approach is applied to many one-dimensional n⁺nn⁺ submicron Si structures by using different band structure models, different doping profiles, and different applied biases, and is validated by comparing numerical calculations with ensemble Monte Carlo simulations and with available experimental data. Within the quantum MEP we introduce a quantum entropy functional of the reduced density matrix; the principle of quantum maximum entropy is then asserted as a fundamental principle of quantum statistical mechanics. Accordingly, we have developed a comprehensive theoretical formalism to construct rigorously a closed quantum hydrodynamic transport within a Wigner function approach. The theory is formulated both in thermodynamic equilibrium and nonequilibrium conditions, and the quantum contributions are obtained by only assuming that the Lagrange multipliers can be expanded in powers of ħ², ħ being the reduced Planck constant. In particular, by using an arbitrary number of moments, we prove that: (i) on a macroscopic scale, all nonlocal effects compatible with the uncertainty principle are imputable to high-order spatial derivatives both of the ...

  19. Longitudinal and transverse space charge limitations on transport of maximum power beams

    International Nuclear Information System (INIS)

    Khoe, T.K.; Martin, R.L.

    1977-01-01

    The maximum transportable beam power is a critical issue in selecting the most favorable approach to generating ignition pulses for inertial fusion with high energy accelerators. Maschke and Courant have put forward expressions for the limits on transport power for quadrupole and solenoidal channels. Included in a more general way is the self-consistent effect of space charge defocusing on the power limit. The results show that no limit on transmitted power exists in principle. In general, quadrupole transport magnets appear superior to solenoids except for transport of very low energy and highly charged particles. Longitudinal space charge effects are very significant for transport of intense beams.

  20. A Hybrid Physical and Maximum-Entropy Landslide Susceptibility Model

    Directory of Open Access Journals (Sweden)

    Jerry Davis

    2015-06-01

    The clear need for accurate landslide susceptibility mapping has led to multiple approaches. Physical models are easily interpreted and have high predictive capabilities but rely on spatially explicit and accurate parameterization, which is commonly not possible. Statistical methods can include other factors influencing slope stability such as distance to roads, but rely on good landslide inventories. The maximum entropy (MaxEnt) model has been widely and successfully used in species distribution mapping, because data on absence are often uncertain. Similarly, knowledge about the absence of landslides is often limited due to mapping scale or methodology. In this paper a hybrid approach is described that combines the physically based landslide susceptibility model "Stability INdex MAPping" (SINMAP) with MaxEnt. This method is tested in a coastal watershed in Pacifica, CA, USA, with a well-documented landslide history including three inventories: 154 scars on 1941 imagery, 142 in 1975, and 253 in 1983. Results indicate that SINMAP alone overestimated susceptibility due to insufficient data on root cohesion. Models were compared using the SINMAP stability index (SI) or slope alone, and SI or slope in combination with other environmental factors: curvature, a 50-m trail buffer, vegetation, and geology. For 1941 and 1975, using slope alone was similar to using SI alone; however, for 1983 SI alone yields an Area Under the receiver operating Curve (AUC) of 0.785, compared with 0.749 for slope alone. In maximum-entropy models created using all environmental factors, the stability index (SI) from SINMAP made the greatest contributions in all three years (1941: 48.1%; 1975: 35.3%; 1983: 48%), with AUC of 0.795, 0.822, and 0.859, respectively; however, using slope instead of SI created similar overall AUC values, likely due to the combined effect with plan curvature indicating focused hydrologic inputs and vegetation identifying the effect of root cohesion.

  1. The Maximum Entropy Limit of Small-scale Magnetic Field Fluctuations in the Quiet Sun

    Science.gov (United States)

    Gorobets, A. Y.; Berdyugina, S. V.; Riethmüller, T. L.; Blanco Rodríguez, J.; Solanki, S. K.; Barthol, P.; Gandorfer, A.; Gizon, L.; Hirzberger, J.; van Noort, M.; Del Toro Iniesta, J. C.; Orozco Suárez, D.; Schmidt, W.; Martínez Pillet, V.; Knölker, M.

    2017-11-01

    The observed magnetic field on the solar surface is characterized by very complex spatial and temporal behavior. Although feature-tracking algorithms have allowed us to deepen our understanding of this behavior, subjectivity plays an important role in the identification and tracking of such features. In this paper, we continue studies of the temporal stochasticity of the magnetic field on the solar surface without relying either on the concept of magnetic features or on subjective assumptions about their identification and interaction. We propose a data analysis method to quantify fluctuations of the line-of-sight magnetic field by reducing the field's temporal evolution to a regular Markov process. We build a representative model of fluctuations converging to the unique stationary (equilibrium) distribution in the long-time limit with maximum entropy. We obtained different rates of convergence to the equilibrium at a fixed noise cutoff for the two data sets, which indicates a strong influence of the spatial resolution of the data and of mixed-polarity fluctuations on the relaxation process. The analysis is applied to observations of magnetic fields in the relatively quiet areas around an active region carried out during the second flight of Sunrise/IMaX and in quiet-Sun areas at disk center from the Helioseismic and Magnetic Imager on board the Solar Dynamics Observatory satellite.
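
    As a schematic of the reduction to a regular Markov process, the sketch below estimates a transition matrix from a discretized series by transition counting and extracts the stationary distribution; the input is random stand-in data, not IMaX or HMI magnetograms.

    ```python
    # Reducing a discretized time series to a first-order Markov process:
    # estimate the transition matrix by counting, then find the stationary
    # distribution (left eigenvector for eigenvalue 1). Synthetic data only.
    import numpy as np

    rng = np.random.default_rng(1)
    series = rng.integers(0, 3, 10000)   # stand-in for binned field values

    n = 3
    counts = np.zeros((n, n))
    for a, b in zip(series[:-1], series[1:]):
        counts[a, b] += 1
    P = counts / counts.sum(axis=1, keepdims=True)   # row-stochastic matrix

    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    pi /= pi.sum()                                   # stationary distribution
    entropy = -np.sum(pi * np.log(pi))
    print("stationary distribution:", np.round(pi, 3), f"entropy = {entropy:.3f}")
    ```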

  2. Improved Reliability of Single-Phase PV Inverters by Limiting the Maximum Feed-in Power

    DEFF Research Database (Denmark)

    Yang, Yongheng; Wang, Huai; Blaabjerg, Frede

    2014-01-01

    Grid operation experiences have revealed the necessity to limit the maximum feed-in power from PV inverter systems under a high penetration scenario in order to avoid voltage and frequency instability issues. A Constant Power Generation (CPG) control method has been proposed at the inverter level. The CPG control strategy is activated only when the DC input power from the PV panels exceeds a specific power limit. It makes it possible to limit the maximum feed-in power to the electric grid and also to improve the utilization of PV inverters. As a further study, this paper investigates the reliability performance of the power devices, allowing a quantitative prediction of the power device lifetime. A study case on a 3 kW single-phase PV inverter has demonstrated the advantages of the CPG control in terms of improved reliability.
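
    The core of the CPG strategy reduces to clamping the inverter's power reference, as in the hedged sketch below; the limit value and function names are illustrative, and the real controller also manages the PV operating point on its power-voltage curve.

    ```python
    # Constant Power Generation (CPG) at the inverter level: track the MPP
    # while the available PV power is below the feed-in limit, and clamp
    # the power reference to the limit otherwise. Values are illustrative.

    P_LIMIT_W = 2400.0   # feed-in power limit (e.g. 80% of a 3 kW rating)

    def power_reference(p_mpp_w: float) -> float:
        """Return the inverter output power reference under CPG control."""
        return min(p_mpp_w, P_LIMIT_W)

    for p_avail in (1200.0, 2400.0, 2950.0):   # available DC power over a day
        print(f"available {p_avail:6.0f} W -> feed-in {power_reference(p_avail):6.0f} W")
    ```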

  3. Bayesian Reliability Estimation for Deteriorating Systems with Limited Samples Using the Maximum Entropy Approach

    Directory of Open Access Journals (Sweden)

    Ning-Cong Xiao

    2013-12-01

    In this paper, a combination of the maximum entropy method and Bayesian inference is proposed for reliability assessment of deteriorating systems. Due to various uncertainties, scarce data and incomplete information, system parameters usually cannot be determined precisely. These uncertain parameters can be modeled by fuzzy set theory and Bayesian inference, which have been proved useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to calculate the maximum entropy density function of the uncertain parameters more accurately, since it does not need any additional information or assumptions. Finally, two optimization models are presented which can be used to determine the lower and upper bounds of a system's probability of failure under vague environment conditions. Two numerical examples are investigated to demonstrate the proposed method.

  4. Maximum Throughput in a C-RAN Cluster with Limited Fronthaul Capacity

    OpenAIRE

    Duan, Jialong; Lagrange, Xavier; Guilloud, Frédéric

    2016-01-01

    International audience; Centralized/Cloud Radio Access Network (C-RAN) is a promising future mobile network architecture which can ease cooperation between different cells to manage interference. However, the feasibility of C-RAN is limited by the large bit rate requirement in the fronthaul. This paper studies the maximum throughput of different transmission strategies in a C-RAN cluster with transmission power constraints and fronthaul capacity constraints. Both transmission strategies wit...

  5. Avinash-Shukla mass limit for the maximum dust mass supported against gravity by electric fields

    Science.gov (United States)

    Avinash, K.

    2010-08-01

    The existence of a new class of astrophysical objects, where gravity is balanced by the shielded electric fields associated with the electric charge on the dust, is shown. Further, a mass limit MA for the maximum dust mass that can be supported against gravitational collapse by these fields is obtained. If the total mass of the dust in the interstellar cloud MD > MA, the dust collapses, while if MD < MA, stable equilibrium may be achieved. Heuristic arguments are given to show that the physics of the mass limit is similar to Chandrasekhar's mass limit for compact objects, and the similarity of these dust configurations with neutron stars and white dwarfs is pointed out. The effect of grain size distribution on the mass limit and strong correlation effects in the core of such objects are discussed. A possible location of these dust configurations inside interstellar clouds is pointed out.

  6. Mechanical limits to maximum weapon size in a giant rhinoceros beetle.

    Science.gov (United States)

    McCullough, Erin L

    2014-07-07

    The horns of giant rhinoceros beetles are a classic example of the elaborate morphologies that can result from sexual selection. Theory predicts that sexual traits will evolve to be increasingly exaggerated until survival costs balance the reproductive benefits of further trait elaboration. In Trypoxylus dichotomus, long horns confer a competitive advantage to males, yet previous studies have found that they do not incur survival costs. It is therefore unlikely that horn size is limited by the theoretical cost-benefit equilibrium. However, males sometimes fight vigorously enough to break their horns, so mechanical limits may set an upper bound on horn size. Here, I tested this mechanical limit hypothesis by measuring safety factors across the full range of horn sizes. Safety factors were calculated as the ratio between the force required to break a horn and the maximum force exerted on a horn during a typical fight. I found that safety factors decrease with increasing horn length, indicating that the risk of breakage is indeed highest for the longest horns. Structural failure of oversized horns may therefore oppose the continued exaggeration of horn length driven by male-male competition and set a mechanical limit on the maximum size of rhinoceros beetle horns. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
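
    The safety-factor computation is a simple ratio of breaking force to peak fighting force. A sketch with invented horn lengths and forces, purely to illustrate the declining trend reported above:

        import numpy as np

        # Invented example data: horn lengths, fracture forces, and peak fight loads.
        horn_mm = np.array([15, 20, 25, 30, 35])            # horn length (mm)
        f_break = np.array([9.0, 9.5, 9.8, 10.0, 10.1])     # N, force to fracture
        f_fight = np.array([1.5, 2.0, 2.6, 3.3, 4.1])       # N, peak force in a fight

        safety = f_break / f_fight                          # safety factor per horn
        slope, intercept = np.polyfit(horn_mm, safety, 1)
        # Negative slope: longer horns carry smaller safety margins.
        print(safety.round(2), f"slope = {slope:.3f} per mm")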

  7. Optimal item discrimination and maximum information for logistic IRT models

    NARCIS (Netherlands)

    Veerkamp, W.J.J.; Veerkamp, Wim J.J.; Berger, Martijn P.F.; Berger, Martijn

    1999-01-01

    Items with the highest discrimination parameter values in a logistic item response theory model do not necessarily give maximum information. This paper derives discrimination parameter values, as functions of the guessing parameter and distances between person parameters and item difficulty, that

  8. Modeling multisite streamflow dependence with maximum entropy copula

    Science.gov (United States)

    Hao, Z.; Singh, V. P.

    2013-10-01

    Synthetic streamflows at different sites in a river basin are needed for planning, operation, and management of water resources projects. Modeling the temporal and spatial dependence structure of monthly streamflow at different sites is generally required. In this study, the maximum entropy copula method is proposed for multisite monthly streamflow simulation, in which the temporal and spatial dependence structure is imposed as constraints to derive the maximum entropy copula. The monthly streamflows at different sites are then generated by sampling from the conditional distribution. A case study for the generation of monthly streamflow at three sites in the Colorado River basin illustrates the application of the proposed method. Simulated streamflow from the maximum entropy copula is in satisfactory agreement with observed streamflow.

  9. Maximum Mass of Hybrid Stars in the Quark Bag Model

    Science.gov (United States)

    Alaverdyan, G. B.; Vartanyan, Yu. L.

    2017-12-01

    The effect of model parameters in the equation of state for quark matter on the magnitude of the maximum mass of hybrid stars is examined. Quark matter is described in terms of the extended MIT bag model including corrections for one-gluon exchange. For nucleon matter in the range of densities corresponding to the phase transition, a relativistic equation of state is used that is calculated with two-particle correlations taken into account based on using the Bonn meson-exchange potential. The Maxwell construction is used to calculate the characteristics of the first order phase transition and it is shown that for a fixed value of the strong interaction constant αs, the baryon concentrations of the coexisting phases grow monotonically as the bag constant B increases. It is shown that for a fixed value of the strong interaction constant αs, the maximum mass of a hybrid star increases as the bag constant B decreases. For a given value of the bag parameter B, the maximum mass rises as the strong interaction constant αs increases. It is shown that the configurations of hybrid stars with maximum masses equal to or exceeding the mass of the currently known most massive pulsar are possible for values of the strong interaction constant αs > 0.6 and sufficiently low values of the bag constant.

  10. Marginal Maximum Likelihood Estimation of Item Response Models in R

    Directory of Open Access Journals (Sweden)

    Matthew S. Johnson

    2007-02-01

    Full Text Available Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.
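
    As an illustrative sketch of marginal maximum likelihood (in Python rather than R, and for the Rasch model, a simpler relative of the generalized partial credit model), with simulated data and Gauss-Hermite quadrature over a standard normal ability distribution:

        import numpy as np
        from scipy.optimize import minimize

        # Simulate responses from a Rasch model: 500 persons, 5 items.
        rng = np.random.default_rng(1)
        n_persons, n_items = 500, 5
        true_b = np.linspace(-1.0, 1.0, n_items)            # item difficulties
        theta = rng.normal(size=(n_persons, 1))             # latent abilities
        X = (rng.random((n_persons, n_items))
             < 1 / (1 + np.exp(-(theta - true_b)))).astype(float)

        # Gauss-Hermite nodes/weights rescaled for a standard normal prior.
        nodes, weights = np.polynomial.hermite.hermgauss(21)
        q_theta = np.sqrt(2.0) * nodes
        q_w = weights / np.sqrt(np.pi)

        def neg_marginal_loglik(b):
            # P(response | theta_k) for every person, quadrature node, and item.
            p = 1 / (1 + np.exp(-(q_theta[None, :, None] - b[None, None, :])))
            like = np.prod(np.where(X[:, None, :] == 1, p, 1 - p), axis=2)
            return -np.sum(np.log(like @ q_w))              # integrate out theta

        fit = minimize(neg_marginal_loglik, x0=np.zeros(n_items), method="BFGS")
        print(np.round(fit.x, 2), "vs true", true_b)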

  11. Modelling information flow along the human connectome using maximum flow.

    Science.gov (United States)

    Lyoo, Youngwook; Kim, Jieun E; Yoon, Sujung

    2018-01-01

    The human connectome is a complex network that transmits information between interlinked brain regions. Using graph theory, previously well-known network measures of integration between brain regions have been constructed under the key assumption that information flows strictly along the shortest paths possible between two nodes. However, it is now apparent that information does flow through non-shortest paths in many real-world networks such as cellular networks, social networks, and the internet. In the current hypothesis, we present a novel framework using the maximum flow to quantify information flow along all possible paths within the brain, so as to implement an analogy to network traffic. We hypothesize that the connection strengths of brain networks represent a limit on the amount of information that can flow through the connections per unit of time. This allows us to compute the maximum amount of information flow between two brain regions along all possible paths. Using this novel framework of maximum flow, previous network topological measures are expanded to account for information flow through non-shortest paths. The most important advantage of the current approach using maximum flow is that it can integrate the weighted connectivity data in a way that better reflects the real information flow of the brain network. The current framework and its concept regarding maximum flow provides insight on how network structure shapes information flow in contrast to graph theory, and suggests future applications such as investigating structural and functional connectomes at a neuronal level. Copyright © 2017 Elsevier Ltd. All rights reserved.
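
    A toy sketch of the proposed analogy using networkx, where the edge weights of an invented four-region graph act as capacities and the maximum flow counts information routed over all paths, not just the shortest one:

        import networkx as nx

        # Invented toy network: weights act as capacities on information flow.
        G = nx.DiGraph()
        for u, v, c in [("A", "B", 3.0), ("A", "C", 1.5), ("B", "C", 1.0),
                        ("B", "D", 2.0), ("C", "D", 2.5)]:
            G.add_edge(u, v, capacity=c)   # symmetric arcs stand in for an
            G.add_edge(v, u, capacity=c)   # undirected structural connection

        flow_value, flow_dict = nx.maximum_flow(G, "A", "D")
        print(flow_value)  # 4.5 -- more than any single shortest path could carry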

  12. Stimulus-dependent maximum entropy models of neural population codes.

    Directory of Open Access Journals (Sweden)

    Einat Granot-Atedgi

    Full Text Available Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model, a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population.

  13. Maximum parsimony, substitution model, and probability phylogenetic trees.

    Science.gov (United States)

    Weng, J F; Thomas, D A; Mareels, I

    2011-01-01

    The problem of inferring phylogenies (phylogenetic trees) is one of the main problems in computational biology. There are three main methods for inferring phylogenies: Maximum Parsimony (MP), Distance Matrix (DM) and Maximum Likelihood (ML), of which the MP method is the most well-studied and popular. In the MP method the optimization criterion is the number of substitutions of the nucleotides computed from the differences in the investigated nucleotide sequences. However, the MP method is often criticized because it only counts the substitutions observable at the current time, and all the unobservable substitutions that really occurred in the evolutionary history are omitted. In order to take into account the unobservable substitutions, some substitution models have been established; they are now widely used in the DM and ML methods, but these substitution models cannot be used within the classical MP method. Recently the authors proposed a probability representation model for phylogenetic trees, and the reconstructed trees in this model are called probability phylogenetic trees. One of the advantages of the probability representation model is that it can include a substitution model to infer phylogenetic trees based on the MP principle. In this paper we explain how to use a substitution model in the reconstruction of probability phylogenetic trees and show the advantage of this approach with examples.

  14. Evaluation of regulatory variation and theoretical health risk for pesticide maximum residue limits in food.

    Science.gov (United States)

    Li, Zijian

    2018-08-01

    To evaluate whether pesticide maximum residue limits (MRLs) can protect public health, a deterministic dietary risk assessment of maximum pesticide legal exposure was conducted to convert global MRLs to theoretical maximum dose intake (TMDI) values by estimating the average food intake rate and human body weight for each country. A total of 114 nations (58% of the nations in the world) and two international organizations, the European Union (EU) and Codex (WHO), have regulated at least one of the most currently used pesticides in at least one of the most consumed agricultural commodities. In this study, 14 of the most commonly used pesticides and 12 of the most commonly consumed agricultural commodities were identified and selected for analysis. A health risk analysis indicated that nearly 30% of the computed pesticide TMDI values were greater than the acceptable daily intake (ADI) values; however, many nations lack common pesticide MRLs for many commonly consumed foods, and other human exposure pathways, such as soil, water, and air, were not considered. Normality tests of the TMDI value sets indicated that all distributions had a right skewness due to large TMDI clusters at the low end of the distribution, caused by some strict pesticide MRLs regulated by the EU (normally a default MRL of 0.01 mg/kg when essential data are missing). The Box-Cox transformation and optimal lambda (λ) were applied to these TMDI distributions, and normality tests of the transformed data sets indicated that the power-transformed TMDI values of at least eight pesticides presented a normal distribution. It was concluded that unifying strict pesticide MRLs worldwide could significantly skew the distribution of TMDI values to the right, lower the legal exposure to pesticides, and effectively control human health risks. Copyright © 2018 Elsevier Ltd. All rights reserved.
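
    The TMDI screening step reduces to a weighted sum of MRLs over commodity intakes, normalized by body weight. The values below are invented placeholders, not regulatory numbers:

        # Sketch of the TMDI computation; MRLs, intake rates, body weight, and
        # ADI are all invented placeholders for illustration only.
        mrl_mg_per_kg  = {"apple": 0.5, "rice": 0.1, "potato": 0.2}    # MRLs
        intake_kg_day  = {"apple": 0.10, "rice": 0.25, "potato": 0.15} # intake
        body_weight_kg = 60.0
        adi_mg_kg_day  = 0.001  # assumed acceptable daily intake

        tmdi = sum(mrl_mg_per_kg[c] * intake_kg_day[c]
                   for c in mrl_mg_per_kg) / body_weight_kg
        print(f"TMDI = {tmdi:.4f} mg/kg bw/day; exceeds ADI: {tmdi > adi_mg_kg_day}")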

  15. Reliability of buildings in service limit state for maximum horizontal displacements

    Directory of Open Access Journals (Sweden)

    A. G. B. Corelhano

    Full Text Available Brazilian design code ABNT NBR6118:2003 - Design of Concrete Structures - Procedures - [1] proposes the use of simplified models for the consideration of non-linear material behavior in the evaluation of horizontal displacements in buildings. These models penalize stiffness of columns and beams, representing the effects of concrete cracking and avoiding costly physical non-linear analyses. The objectives of the present paper are to investigate the accuracy and uncertainty of these simplified models, as well as to evaluate the reliabilities of structures designed following ABNT NBR6118:2003[1&] in the service limit state for horizontal displacements. Model error statistics are obtained from 42 representative plane frames. The reliabilities of three typical (4, 8 and 12 floor buildings are evaluated, using the simplified models and a rigorous, physical and geometrical non-linear analysis. Results show that the 70/70 (column/beam stiffness reduction model is more accurate and less conservative than the 80/40 model. Results also show that ABNT NBR6118:2003 [1] design criteria for horizontal displacement limit states (masonry damage according to ACI 435.3R-68(1984 [10] are conservative, and result in reliability indexes which are larger than those recommended in EUROCODE [2] for irreversible service limit states.

  16. Hydraulic limits on maximum plant transpiration and the emergence of the safety-efficiency trade-off.

    Science.gov (United States)

    Manzoni, Stefano; Vico, Giulia; Katul, Gabriel; Palmroth, Sari; Jackson, Robert B; Porporato, Amilcare

    2013-04-01

    Soil and plant hydraulics constrain ecosystem productivity by setting physical limits to water transport and hence carbon uptake by leaves. While more negative xylem water potentials provide a larger driving force for water transport, they also cause cavitation that limits hydraulic conductivity. An optimum balance between driving force and cavitation occurs at intermediate water potentials, thus defining the maximum transpiration rate the xylem can sustain (denoted as E(max)). The presence of this maximum raises the question as to whether plants regulate transpiration through stomata to function near E(max). To address this question, we calculated E(max) across plant functional types and climates using a hydraulic model and a global database of plant hydraulic traits. The predicted E(max) compared well with measured peak transpiration across plant sizes and growth conditions (R = 0.86, P < 0.001), pointing to a safety-efficiency trade-off in plant xylem. Stomatal conductance allows maximum transpiration rates despite partial cavitation in the xylem, thereby suggesting coordination between stomatal regulation and xylem hydraulic characteristics. © 2013 The Authors. New Phytologist © 2013 New Phytologist Trust.
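
    A numerical sketch of the E(max) construction, assuming a Weibull-type vulnerability curve with invented parameters (the paper's hydraulic model and trait database are not reproduced here). Transpiration supply is the integral of conductivity from leaf to soil water potential and saturates at E(max) as cavitation shuts the xylem down:

        import numpy as np

        k_max, d, c = 10.0, 2.0, 3.0   # assumed Weibull vulnerability parameters
        psi_s = -0.5                   # assumed soil water potential (MPa)

        def k(psi):
            # Hydraulic conductivity declining with more negative water potential.
            return k_max * np.exp(-((-psi) / d) ** c)

        psi_l = np.linspace(psi_s, -8.0, 4000)          # leaf water potentials
        kv = k(psi_l)
        # E(psi_l) = integral of k from psi_l up to psi_s (cumulative trapezoid).
        E = np.concatenate([[0.0],
                            -np.cumsum((kv[1:] + kv[:-1]) / 2 * np.diff(psi_l))])
        E_max = E[-1]                                   # supply-curve plateau
        psi_99 = psi_l[np.searchsorted(E, 0.99 * E_max)]
        print(f"E_max ~ {E_max:.2f}; 99% reached by psi_l = {psi_99:.2f} MPa")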

  17. Maximum Entropy Discrimination Poisson Regression for Software Reliability Modeling.

    Science.gov (United States)

    Chatzis, Sotirios P; Andreou, Andreas S

    2015-11-01

    Reliably predicting software defects is one of the most significant tasks in software engineering. Two of the major components of modern software reliability modeling approaches are: 1) extraction of salient features for software system representation, based on appropriately designed software metrics and 2) development of intricate regression models for count data, to allow effective software reliability data modeling and prediction. Surprisingly, research in the latter frontier of count data regression modeling has been rather limited. More specifically, a lack of simple and efficient algorithms for posterior computation has made the Bayesian approaches appear unattractive, and thus underdeveloped in the context of software reliability modeling. In this paper, we try to address these issues by introducing a novel Bayesian regression model for count data, based on the concept of max-margin data modeling, effected in the context of a fully Bayesian model treatment with simple and efficient posterior distribution updates. Our novel approach yields a more discriminative learning technique, making more effective use of our training data during model inference. In addition, it allows better handling of uncertainty in the modeled data, which can be a significant problem when the training data are limited. We derive elegant inference algorithms for our model under the mean-field paradigm and exhibit its effectiveness using the publicly available benchmark data sets.

  18. Modeling Mediterranean Ocean climate of the Last Glacial Maximum

    Directory of Open Access Journals (Sweden)

    U. Mikolajewicz

    2011-03-01

    Full Text Available A regional ocean general circulation model of the Mediterranean is used to study the climate of the Last Glacial Maximum. The atmospheric forcing for these simulations has been derived from simulations with an atmospheric general circulation model, which in turn was forced with surface conditions from a coarse resolution earth system model. The model is successful in reproducing the general patterns of reconstructed sea surface temperature anomalies with the strongest cooling in summer in the northwestern Mediterranean and weak cooling in the Levantine, although the model underestimates the extent of the summer cooling in the western Mediterranean. However, there is a strong vertical gradient associated with this pattern of summer cooling, which makes the comparison with reconstructions complicated. The exchange with the Atlantic is decreased to roughly one half of its present value, which can be explained by the shallower Strait of Gibraltar as a consequence of lower global sea level. This reduced exchange causes a strong increase of salinity in the Mediterranean in spite of reduced net evaporation.

  19. Higher renewable energy integration into the existing energy system of Finland – Is there any maximum limit?

    International Nuclear Information System (INIS)

    Zakeri, Behnam; Syri, Sanna; Rinne, Samuli

    2015-01-01

    Finland is to increase the share of RES (renewable energy sources) up to 38% of final energy consumption by 2020. While benefiting from local biomass resources, the Finnish energy system is deemed able to achieve this goal, and increasing the share of other intermittent renewables, namely wind power and solar energy, is under development. Yet the maximum flexibility of the existing energy system in the integration of renewable energy has not been investigated, which is an important step before undertaking new renewable energy obligations. This study aims at filling this gap by hourly analysis and comprehensive modeling of the energy system, including electricity, heat, and transportation, employing the EnergyPLAN tool. Focusing on technical and economic implications, we assess the maximum potential of different RESs separately (including bioenergy, hydropower, wind power, solar heating and PV, and heat pumps), as well as an optimal mix of different technologies. Furthermore, we propose a new index for assessing the maximum flexibility of energy systems in absorbing variable renewable energy. The results demonstrate that wind energy can be harvested at maximum levels of 18–19% of annual power demand (approx. 16 TWh/a) without major enhancements in the flexibility of the energy infrastructure. With today's energy demand, the maximum feasible share of renewable energy for Finland is around 44–50%, achieved by an optimal mix of different technologies, which promises a 35% reduction in carbon emissions from the 2012 level. Moreover, the Finnish energy system is flexible enough to raise the share of renewables in gross electricity consumption up to 69–72% at maximum. Higher shares of RES call for lower energy consumption (energy efficiency) and more flexibility in balancing energy supply and consumption (e.g. by energy storage). - Highlights: • By hourly analysis, we model the whole energy system of Finland. • With the existing energy infrastructure, RES (renewable energy sources) in primary energy cannot go beyond 50%.

  20. On the maximum-entropy/autoregressive modeling of time series

    Science.gov (United States)

    Chao, B. F.

    1984-01-01

    The autoregressive (AR) model of a random process is interpreted in the light of Prony's relation, which relates a complex conjugate pair of poles of the AR process in the z-plane (or the z domain), on the one hand, to the complex frequency of one complex harmonic function in the time domain, on the other. Thus the AR model of a time series is one that models the time series as a linear combination of complex harmonic functions, which include pure sinusoids and real exponentials as special cases. An AR model is completely determined by its z-domain pole configuration. The maximum-entropy/autoregressive (ME/AR) spectrum, defined on the unit circle of the z-plane (or the frequency domain), is nothing but a convenient, but ambiguous, visual representation. It is asserted that the position and shape of a spectral peak are determined by the corresponding complex frequency, and the height of the spectral peak contains little information about the complex amplitude of the complex harmonic functions.
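
    A compact sketch of the ME/AR procedure: estimate AR coefficients by Yule-Walker and evaluate the maximum-entropy spectrum on the unit circle. The test signal and model order are illustrative choices, not from the paper:

        import numpy as np
        from scipy.linalg import solve_toeplitz

        # Test series: a noisy sinusoid at normalized frequency 0.1.
        rng = np.random.default_rng(0)
        n, order = 1024, 8
        t = np.arange(n)
        x = np.sin(2 * np.pi * 0.1 * t) + 0.5 * rng.standard_normal(n)

        # Biased autocorrelation estimates r[0..order].
        r = np.array([np.dot(x[: n - k], x[k:]) / n for k in range(order + 1)])
        a = solve_toeplitz(r[:order], r[1 : order + 1])  # Yule-Walker AR coeffs
        sigma2 = r[0] - np.dot(a, r[1 : order + 1])      # prediction-error variance

        # ME/AR spectrum evaluated on the unit circle (frequencies 0..0.5).
        f = np.linspace(0, 0.5, 512)
        z = np.exp(-2j * np.pi * np.outer(f, np.arange(1, order + 1)))
        spectrum = sigma2 / np.abs(1 - z @ a) ** 2
        print(f"spectral peak near f = {f[np.argmax(spectrum)]:.3f}")  # ~0.1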

  1. The Betz-Joukowsky limit for the maximum power coefficient of wind turbines

    DEFF Research Database (Denmark)

    Okulov, Valery; van Kuik, G.A.M.

    2009-01-01

    The article addresses the history of an important scientific result in wind energy. The maximum efficiency of an ideal wind turbine rotor is well known as the ‘Betz limit’, named after the German scientist who formulated this maximum in 1920. Also Lanchester, a British scientist, is associated...
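
    For reference, the limit follows from one-dimensional actuator-disc theory. With a the axial induction factor, a standard derivation (not reproduced from the article) runs:

        \[
        C_P(a) = 4a(1-a)^2, \qquad
        \frac{dC_P}{da} = 4(1-a)(1-3a) = 0
        \;\Longrightarrow\; a = \tfrac{1}{3}, \quad
        C_{P,\max} = \frac{16}{27} \approx 0.593 .
        \]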

  2. Detection of maximum loadability limits and weak buses using Chaotic PSO considering security constraints

    International Nuclear Information System (INIS)

    Acharjee, P.; Mallick, S.; Thakur, S.S.; Ghoshal, S.P.

    2011-01-01

    Highlights: → The unique cost function is derived considering practical security constraints. → New innovative formulae for the PSO parameters are developed for better performance. → The inclusion and implementation of chaos in the PSO technique is original and unique. → Weak buses are identified where FACTS devices can be implemented. → The CPSO technique gives the best performance for all the IEEE standard test systems. - Abstract: In the current research, chaotic search is used with the optimization technique for solving non-linear, complicated power system problems, because chaos can overcome the local optima problem of the optimization technique. Power system problems, more specifically voltage stability, are practical examples of non-linear, complex, convex problems. Smart grids, restructured energy systems and socio-economic development bring various uncertain events into power systems, and the level of uncertainty increases to a great extent day by day. In this context, analysis of voltage stability is essential. An efficient method to assess voltage stability is the maximum loadability limit (MLL). The MLL problem is formulated as a maximization problem considering practical security constraints (SCs). Detection of weak buses is also important for the analysis of power system stability. Both the MLL and weak buses are identified by PSO methods, and FACTS devices can be applied to the detected weak buses for the improvement of stability. Three particle swarm optimization (PSO) techniques, namely General PSO (GPSO), Adaptive PSO (APSO) and Chaotic PSO (CPSO), are presented for a comparative study in obtaining the MLL and weak buses under different SCs. In the APSO method, the PSO parameters are made adaptive with the problem, and chaos is incorporated in the CPSO method to obtain reliable convergence and better performance. All three methods are applied to the standard IEEE 14-bus, 30-bus, 57-bus and 118-bus test systems to show their comparative computing effectiveness and
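
    As a hedged illustration of the chaotic ingredient only (not the paper's security-constrained MLL formulation), a minimal PSO whose inertia weight is driven by a logistic map, applied to a toy multimodal objective:

        import numpy as np

        rng = np.random.default_rng(3)

        def objective(p):               # toy multimodal surrogate (Rastrigin)
            return np.sum(p**2 - 10 * np.cos(2 * np.pi * p), axis=-1)

        n, dim, iters = 30, 2, 200
        pos = rng.uniform(-5, 5, (n, dim))
        vel = np.zeros((n, dim))
        pbest, pcost = pos.copy(), objective(pos)
        gbest = pbest[np.argmin(pcost)]
        z = 0.7                          # chaotic state of the logistic map

        for _ in range(iters):
            z = 4.0 * z * (1.0 - z)      # logistic map drives the inertia weight
            w = 0.4 + 0.5 * z
            r1, r2 = rng.random((2, n, dim))
            vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, -5, 5)
            cost = objective(pos)
            better = cost < pcost
            pbest[better], pcost[better] = pos[better], cost[better]
            gbest = pbest[np.argmin(pcost)]

        print(gbest.round(3), objective(gbest).round(3))  # near the optimum (0, 0)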

  3. Maximum likelihood approach for several stochastic volatility models

    International Nuclear Information System (INIS)

    Camprodon, Jordi; Perelló, Josep

    2012-01-01

    Volatility measures the amplitude of price fluctuations. Despite it being one of the most important quantities in finance, volatility is not directly observable. Here we apply a maximum likelihood method which assumes that price and volatility follow a two-dimensional diffusion process where volatility is the stochastic diffusion coefficient of the log-price dynamics. We apply this method to the simplest versions of the expOU, the OU and the Heston stochastic volatility models and we study their performance in terms of the log-price probability, the volatility probability, and its Mean First-Passage Time. The approach has some predictive power on the future returns amplitude by only knowing the current volatility. The assumed models do not consider long-range volatility autocorrelation and the asymmetric return-volatility cross-correlation but the method still yields very naturally these two important stylized facts. We apply the method to different market indices and with a good performance in all cases. (paper)

  4. The impact of regulations, safety considerations and physical limitations on research progress at maximum biocontainment.

    Science.gov (United States)

    Shurtleff, Amy C; Garza, Nicole; Lackemeyer, Matthew; Carrion, Ricardo; Griffiths, Anthony; Patterson, Jean; Edwin, Samuel S; Bavari, Sina

    2012-12-01

    We describe herein, limitations on research at biosafety level 4 (BSL-4) containment laboratories, with regard to biosecurity regulations, safety considerations, research space limitations, and physical constraints in executing experimental procedures. These limitations can severely impact the number of collaborations and size of research projects investigating microbial pathogens of biodefense concern. Acquisition, use, storage, and transfer of biological select agents and toxins (BSAT) are highly regulated due to their potential to pose a severe threat to public health and safety. All federal, state, city, and local regulations must be followed to obtain and maintain registration for the institution to conduct research involving BSAT. These include initial screening and continuous monitoring of personnel, controlled access to containment laboratories, accurate and current BSAT inventory records. Safety considerations are paramount in BSL-4 containment laboratories while considering the types of research tools, workflow and time required for conducting both in vivo and in vitro experiments in limited space. Required use of a positive-pressure encapsulating suit imposes tremendous physical limitations on the researcher. Successful mitigation of these constraints requires additional time, effort, good communication, and creative solutions. Test and evaluation of novel vaccines and therapeutics conducted under good laboratory practice (GLP) conditions for FDA approval are prioritized and frequently share the same physical space with important ongoing basic research studies. The possibilities and limitations of biomedical research involving microbial pathogens of biodefense concern in BSL-4 containment laboratories are explored in this review.

  5. The Impact of Regulations, Safety Considerations and Physical Limitations on Research Progress at Maximum Biocontainment

    Directory of Open Access Journals (Sweden)

    Jean Patterson

    2012-12-01

    Full Text Available We describe herein, limitations on research at biosafety level 4 (BSL-4) containment laboratories, with regard to biosecurity regulations, safety considerations, research space limitations, and physical constraints in executing experimental procedures. These limitations can severely impact the number of collaborations and size of research projects investigating microbial pathogens of biodefense concern. Acquisition, use, storage, and transfer of biological select agents and toxins (BSAT) are highly regulated due to their potential to pose a severe threat to public health and safety. All federal, state, city, and local regulations must be followed to obtain and maintain registration for the institution to conduct research involving BSAT. These include initial screening and continuous monitoring of personnel, controlled access to containment laboratories, accurate and current BSAT inventory records. Safety considerations are paramount in BSL-4 containment laboratories while considering the types of research tools, workflow and time required for conducting both in vivo and in vitro experiments in limited space. Required use of a positive-pressure encapsulating suit imposes tremendous physical limitations on the researcher. Successful mitigation of these constraints requires additional time, effort, good communication, and creative solutions. Test and evaluation of novel vaccines and therapeutics conducted under good laboratory practice (GLP) conditions for FDA approval are prioritized and frequently share the same physical space with important ongoing basic research studies. The possibilities and limitations of biomedical research involving microbial pathogens of biodefense concern in BSL-4 containment laboratories are explored in this review.

  6. The Impact of Regulations, Safety Considerations and Physical Limitations on Research Progress at Maximum Biocontainment

    Science.gov (United States)

    Shurtleff, Amy C.; Garza, Nicole; Lackemeyer, Matthew; Carrion, Ricardo; Griffiths, Anthony; Patterson, Jean; Edwin, Samuel S.; Bavari, Sina

    2012-01-01

    We describe herein, limitations on research at biosafety level 4 (BSL-4) containment laboratories, with regard to biosecurity regulations, safety considerations, research space limitations, and physical constraints in executing experimental procedures. These limitations can severely impact the number of collaborations and size of research projects investigating microbial pathogens of biodefense concern. Acquisition, use, storage, and transfer of biological select agents and toxins (BSAT) are highly regulated due to their potential to pose a severe threat to public health and safety. All federal, state, city, and local regulations must be followed to obtain and maintain registration for the institution to conduct research involving BSAT. These include initial screening and continuous monitoring of personnel, controlled access to containment laboratories, accurate and current BSAT inventory records. Safety considerations are paramount in BSL-4 containment laboratories while considering the types of research tools, workflow and time required for conducting both in vivo and in vitro experiments in limited space. Required use of a positive-pressure encapsulating suit imposes tremendous physical limitations on the researcher. Successful mitigation of these constraints requires additional time, effort, good communication, and creative solutions. Test and evaluation of novel vaccines and therapeutics conducted under good laboratory practice (GLP) conditions for FDA approval are prioritized and frequently share the same physical space with important ongoing basic research studies. The possibilities and limitations of biomedical research involving microbial pathogens of biodefense concern in BSL-4 containment laboratories are explored in this review. PMID:23342380

  7. 40 CFR 130.7 - Total maximum daily loads (TMDL) and individual water quality-based effluent limitations.

    Science.gov (United States)

    2010-07-01

    40 CFR 130.7 (2010): Total maximum daily loads (TMDL) and individual water quality-based effluent limitations. Title 40, Protection of Environment; Environmental Protection Agency (continued); Water Programs; Water Quality Planning and Management; § 130.7 Total...

  8. The limit distribution of the maximum increment of a random walk with regularly varying jump size distribution

    DEFF Research Database (Denmark)

    Mikosch, Thomas Valentin; Rackauskas, Alfredas

    2010-01-01

    In this paper, we deal with the asymptotic distribution of the maximum increment of a random walk with a regularly varying jump size distribution. This problem is motivated by a long-standing problem on change point detection for epidemic alternatives. It turns out that the limit distribution of the maximum increment of the random walk is one of the classical extreme value distributions, the Fréchet distribution. We prove the results in the general framework of point processes and for jump sizes taking values in a separable Banach space...
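
    For reference, the standard Fréchet law that appears as the limit, with α the index of regular variation of the jump sizes:

        \[
        \Phi_\alpha(x) = \exp\left(-x^{-\alpha}\right), \qquad x > 0, \; \alpha > 0 .
        \]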

  9. The logistic model-generated carrying capacities, maximum ...

    African Journals Online (AJOL)

    This paper deals with the derivation of logistic models for cattle, sheep and goats in a commercial ranching system in Machakos District, Kenya, a savannah ecosystem with an average annual rainfall of 589.3 ± 159.3 mm and an area of 10 117 ha. It involves modelling livestock population dynamics as discrete-time logistic ...
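
    The discrete-time logistic scheme can be sketched in a few lines; the carrying capacity, growth rate, and initial herd below are invented illustration values, not the paper's estimates:

        # Discrete-time logistic growth with invented parameters.
        K, r, n = 1200.0, 0.35, 150.0   # carrying capacity, growth rate, herd size

        herd = [n]
        for _ in range(30):
            n += r * n * (1.0 - n / K)  # logistic increment toward K
            herd.append(n)

        msy = r * K / 4                 # maximum sustainable yield implied by the model
        print(f"year 30: {herd[-1]:.0f} head; MSY = {msy:.0f} head/year")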

  10. Extracting maximum petrophysical and geological information from a limited reservoir database

    Energy Technology Data Exchange (ETDEWEB)

    Ali, M.; Chawathe, A.; Ouenes, A. [New Mexico Institute of Mining and Technology, Socorro, NM (United States)] [and others

    1997-08-01

    The characterization of old fields lacking sufficient core and log data is a challenging task. This paper describes a methodology that uses new and conventional tools to build a reliable reservoir model for the Sulimar Queen field. At the fine scale, permeability measured on a fine grid with a minipermeameter was used in conjunction with the petrographic data collected on multiple thin sections. The use of regression analysis and a newly developed fuzzy logic algorithm led to the identification of key petrographic elements which control permeability. At the log scale, old gamma ray logs were first rescaled/calibrated throughout the entire field for consistency and reliability using only four modern logs. Using data from one cored well and the rescaled gamma ray logs, correlations between core porosity, permeability, total water content and gamma ray were developed to complete the small-scale characterization. At the reservoir scale, outcrop data and the rescaled gamma logs were used to define the reservoir structure over an area of ten square miles where only 36 wells were available. Given the structure, the rescaled gamma ray logs were used to build the reservoir volume by identifying the flow units and their continuity. Finally, history-matching results constrained to the primary production were used to estimate the dynamic reservoir properties, such as relative permeabilities, to complete the characterization. The obtained reservoir model was tested by forecasting the waterflood performance, which was in good agreement with the actual performance.

  11. Pushing desalination recovery to the maximum limit: Membrane and thermal processes integration

    KAUST Repository

    Shahzad, Muhammad Wakil

    2017-05-05

    The economics of seawater desalination processes has been continuously improving as a result of desalination market expansion. Presently, reverse osmosis (RO) processes lead global desalination with a 53% share, followed by thermally driven technologies at 33%; in the Gulf Cooperation Council (GCC) countries, however, their shares are 42% and 56%, respectively, due to severe feed water quality. In RO processes, intake, pretreatment and brine disposal account for 25% of the total desalination cost at 30–35% recovery. We propose a tri-hybrid system to enhance the overall recovery up to 81%. The conditioned brine leaving the RO process is supplied to the proposed multi-evaporator adsorption cycle, driven by low-temperature industrial waste heat or solar energy. RO membrane simulation has been performed using the commercial software packages WinFlow and IMSDesign, developed by GE and Nitto. A detailed mathematical model of the overall system is developed, and simulations were conducted in FORTRAN. The final brine reject concentration from the tri-hybrid cycle can vary from 166,000 ppm to 222,000 ppm as the RO retentate concentration varies from 45,000 ppm to 60,000 ppm. We also conducted an economic analysis and showed that the proposed tri-hybrid cycle can achieve the highest recovery, 81%, and the lowest energy consumption, 1.76 kWh(elec)/m3, reported for desalination in the literature to date.

  12. Quantum Gravity and Maximum Attainable Velocities in the Standard Model

    International Nuclear Information System (INIS)

    Alfaro, Jorge

    2007-01-01

    A main difficulty in the quantization of the gravitational field is the lack of experiments that discriminate among the theories proposed to quantize gravity. Recently we showed that the Standard Model (SM) itself contains tiny Lorentz invariance violation (LIV) terms coming from QG. All terms depend on one arbitrary parameter α that sets the scale of QG effects. In this talk we review the LIV for mesons, nucleons and leptons and apply it to study several effects, including the GZK anomaly

  13. How fast can we learn maximum entropy models of neural populations?

    Energy Technology Data Exchange (ETDEWEB)

    Ganmor, Elad; Schneidman, Elad [Department of Neuroscience, Weizmann Institute of Science, Rehovot 76100 (Israel); Segev, Ronen, E-mail: elad.ganmor@weizmann.ac.il, E-mail: elad.schneidman@weizmann.ac.il [Department of Life Sciences and Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva 84105 (Israel)

    2009-12-01

    Most of our knowledge about how the brain encodes information comes from recordings of single neurons. However, computations in the brain are carried out by large groups of neurons. Modelling the joint activity of many interacting elements is computationally hard because of the large number of possible activity patterns and limited experimental data. Recently it was shown in several different neural systems that maximum entropy pairwise models, which rely only on firing rates and pairwise correlations of neurons, are excellent models for the distribution of activity patterns of neural populations, and in particular, their responses to natural stimuli. Using simultaneous recordings of large groups of neurons in the vertebrate retina responding to naturalistic stimuli, we show here that the relevant statistics required for finding the pairwise model can be accurately estimated within seconds. Furthermore, while higher order statistics may, in theory, improve model accuracy, they are, in practice, harmful for times of up to 20 minutes due to sampling noise. Finally, we demonstrate that trading accuracy for entropy may actually improve model performance when data is limited, and suggest an optimization method that automatically adjusts model constraints in order to achieve good performance.

  14. How fast can we learn maximum entropy models of neural populations?

    International Nuclear Information System (INIS)

    Ganmor, Elad; Schneidman, Elad; Segev, Ronen

    2009-01-01

    Most of our knowledge about how the brain encodes information comes from recordings of single neurons. However, computations in the brain are carried out by large groups of neurons. Modelling the joint activity of many interacting elements is computationally hard because of the large number of possible activity patterns and limited experimental data. Recently it was shown in several different neural systems that maximum entropy pairwise models, which rely only on firing rates and pairwise correlations of neurons, are excellent models for the distribution of activity patterns of neural populations, and in particular, their responses to natural stimuli. Using simultaneous recordings of large groups of neurons in the vertebrate retina responding to naturalistic stimuli, we show here that the relevant statistics required for finding the pairwise model can be accurately estimated within seconds. Furthermore, while higher order statistics may, in theory, improve model accuracy, they are, in practice, harmful for times of up to 20 minutes due to sampling noise. Finally, we demonstrate that trading accuracy for entropy may actually improve model performance when data is limited, and suggest an optimization method that automatically adjusts model constraints in order to achieve good performance.

  15. The simplest maximum entropy model for collective behavior in a neural network

    International Nuclear Information System (INIS)

    Tkačik, Gašper; Marre, Olivier; Mora, Thierry; Amodei, Dario; Bialek, William; Berry II, Michael J

    2013-01-01

    Recent work emphasizes that the maximum entropy principle provides a bridge between statistical mechanics models for collective behavior in neural networks and experiments on networks of real neurons. Most of this work has focused on capturing the measured correlations among pairs of neurons. Here we suggest an alternative, constructing models that are consistent with the distribution of global network activity, i.e. the probability that K out of N cells in the network generate action potentials in the same small time bin. The inverse problem that we need to solve in constructing the model is analytically tractable, and provides a natural ‘thermodynamics’ for the network in the limit of large N. We analyze the responses of neurons in a small patch of the retina to naturalistic stimuli, and find that the implied thermodynamics is very close to an unusual critical point, in which the entropy (in proper units) is exactly equal to the energy. (paper)

  16. Compact stars with a small electric charge: the limiting radius to mass relation and the maximum mass for incompressible matter

    Energy Technology Data Exchange (ETDEWEB)

    Lemos, Jose P.S.; Lopes, Francisco J.; Quinta, Goncalo [Universidade de Lisboa, UL, Departamento de Fisica, Centro Multidisciplinar de Astrofisica, CENTRA, Instituto Superior Tecnico, IST, Lisbon (Portugal); Zanchin, Vilson T. [Universidade Federal do ABC, Centro de Ciencias Naturais e Humanas, Santo Andre, SP (Brazil)

    2015-02-01

    One of the stiffest equations of state for matter in a compact star is constant energy density and this generates the interior Schwarzschild radius to mass relation and the Misner maximum mass for relativistic compact stars. If dark matter populates the interior of stars, and this matter is supersymmetric or of some other type, some of it possessing a tiny electric charge, there is the possibility that highly compact stars can trap a small but non-negligible electric charge. In this case the radius to mass relation for such compact stars should get modifications. We use an analytical scheme to investigate the limiting radius to mass relation and the maximum mass of relativistic stars made of an incompressible fluid with a small electric charge. The investigation is carried out by using the hydrostatic equilibrium equation, i.e., the Tolman-Oppenheimer-Volkoff (TOV) equation, together with the other equations of structure, with the further hypothesis that the charge distribution is proportional to the energy density. The approach relies on Volkoff and Misner's method to solve the TOV equation. For zero charge one gets the interior Schwarzschild limit, and supposing incompressible boson or fermion matter with constituents with masses of the order of the neutron mass one finds that the maximum mass is the Misner mass. For a small electric charge, our analytical approximating scheme, valid in first order in the star's electric charge, shows that the maximum mass increases relatively to the uncharged case, whereas the minimum possible radius decreases, an expected effect since the new field is repulsive, aiding the pressure to sustain the star against gravitational collapse. (orig.)

  17. Limits with modeling data and modeling data with limits

    Directory of Open Access Journals (Sweden)

    Lionello Pogliani

    2006-01-01

    Full Text Available Modeling of the solubility of amino acids and purine and pyrimidine bases with a set of sixteen molecular descriptors has been thoroughly analyzed to detect and understand the reasons for anomalies in the description of this property for these two classes of compounds. Unsatisfactory modeling can be ascribed to incomplete collateral data, i.e., to the fact that there is insufficient data known about the behavior of these compounds in solution. This is usually because intermolecular forces cannot be modeled. The anomalous modeling can be detected from the rather large values of the standard deviation of the estimates for the whole set of compounds, and from the unsatisfactory modeling of some of the subsets of these compounds. Thus the detected abnormalities can be used (i) to get an idea about weak intermolecular interactions such as hydration, self-association, and the hydrogen-bond phenomena in solution, and (ii) to reshape the molecular descriptors with the introduction of parameters that allow better modeling. This last procedure should be used with care, bearing in mind that the solubility phenomenon is rather complex.

  18. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    Energy Technology Data Exchange (ETDEWEB)

    Gopich, Irina V. [Laboratory of Chemical Physics, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, Maryland 20892 (United States)

    2015-01-21

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.

  19. Maximum stress estimation model for multi-span waler beams with deflections at the supports using average strains.

    Science.gov (United States)

    Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon

    2015-03-30

    The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.

  20. Maximum Stress Estimation Model for Multi-Span Waler Beams with Deflections at the Supports Using Average Strains

    Directory of Open Access Journals (Sweden)

    Sung Woo Park

    2015-03-01

    Full Text Available The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.

  1. Existence and uniqueness of the maximum likelihood estimator for models with a Kronecker product covariance structure

    NARCIS (Netherlands)

    Ros, B.P.; Bijma, F.; de Munck, J.C.; de Gunst, M.C.M.

    2016-01-01

    This paper deals with multivariate Gaussian models for which the covariance matrix is a Kronecker product of two matrices. We consider maximum likelihood estimation of the model parameters, in particular of the covariance matrix. There is no explicit expression for the maximum likelihood estimator

  2. A maximum pseudo-likelihood approach for estimating species trees under the coalescent model

    Directory of Open Access Journals (Sweden)

    Edwards Scott V

    2010-10-01

    Full Text Available Abstract Background Several phylogenetic approaches have been developed to estimate species trees from collections of gene trees. However, maximum likelihood approaches for estimating species trees under the coalescent model are limited. Although the likelihood of a species tree under the multispecies coalescent model has already been derived by Rannala and Yang, it can be shown that the maximum likelihood estimate (MLE) of the species tree (topology, branch lengths, and population sizes) from gene trees under this formula does not exist. In this paper, we develop a pseudo-likelihood function of the species tree to obtain maximum pseudo-likelihood estimates (MPE) of species trees, with branch lengths of the species tree in coalescent units. Results We show that the MPE of the species tree is statistically consistent as the number M of genes goes to infinity. In addition, the probability that the MPE of the species tree matches the true species tree converges to 1 at rate O(M^-1). The simulation results confirm that the maximum pseudo-likelihood approach is statistically consistent even when the species tree is in the anomaly zone. We applied our method, Maximum Pseudo-likelihood for Estimating Species Trees (MP-EST), to a mammal dataset. The four major clades found in the MP-EST tree are consistent with those in the Bayesian concatenation tree. The bootstrap supports for the species tree estimated by the MP-EST method are more reasonable than the posterior probability supports given by the Bayesian concatenation method in reflecting the level of uncertainty in gene trees and controversies over the relationship of four major groups of placental mammals. Conclusions MP-EST can consistently estimate the topology and branch lengths (in coalescent units) of the species tree. Although the pseudo-likelihood is derived from coalescent theory, and assumes no gene flow or horizontal gene transfer (HGT), the MP-EST method is robust to a small amount of HGT in the

  3. Modeling of Maximum Power Point Tracking Controller for Solar Power System

    Directory of Open Access Journals (Sweden)

    Aryuanto Soetedjo

    2012-09-01

    Full Text Available In this paper, a Maximum Power Point Tracking (MPPT) controller for a solar power system is modeled using MATLAB Simulink. The model consists of a PV module, a buck converter, and an MPPT controller. The contribution of the work is in the modeling of the buck converter, which allows the input voltage of the converter, i.e. the output voltage of the PV, to be changed by varying the duty cycle, so that the maximum power point can be tracked when the environment changes. The simulation results show that the developed model performs well in tracking the maximum power point (MPP) of the PV module using the Perturb and Observe (P&O) algorithm.
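
    The P&O rule in such a controller can be sketched as follows; the I-V curve is a toy stand-in for the PV module model and the perturbation step is an arbitrary choice:

        import numpy as np

        def pv_power(v):
            # Toy I-V curve standing in for the PV module model.
            i = np.clip(8.0 * (1 - np.exp((v - 42.0) / 3.0)), 0.0, None)
            return v * i

        v, step, p_prev = 20.0, 0.5, 0.0
        for _ in range(100):
            p = pv_power(v)
            if p < p_prev:
                step = -step     # power dropped: perturb the other way
            p_prev = p
            v += step            # keep climbing toward the maximum power point

        print(f"settled near V = {v:.1f} V, P = {pv_power(v):.1f} W")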

  4. 24 CFR 203.18c - One-time or up-front mortgage insurance premium excluded from limitations on maximum mortgage...

    Science.gov (United States)

    2010-04-01

    24 CFR 203.18c (2010): One-time or up-front mortgage insurance premium excluded from limitations on maximum mortgage amounts. Section 203.18c; Housing; Loan Insurance Programs under National Housing Act and Other Authorities; Single Family Mortgage...

  5. Molecular Sticker Model Stimulation on Silicon for a Maximum Clique Problem

    Directory of Open Access Journals (Sweden)

    Jianguo Ning

    2015-06-01

    Full Text Available Molecular computers (also called DNA computers), as an alternative to traditional electronic computers, are smaller in size but more energy efficient, and have massive parallel processing capacity. However, DNA computers may not outperform electronic computers owing to their higher error rates and some limitations of the biological laboratory. The stickers model, as a typical DNA-based computer, is computationally complete and universal, and can be viewed as a bit-vertically operating machine. This makes it attractive for silicon implementation. Inspired by the information processing method of the stickers computer, we propose a novel parallel computing model called DEM (DNA Electronic Computing Model) on System-on-a-Programmable-Chip (SOPC) architecture. Except for the significant difference in the computing medium (transistor chips rather than bio-molecules), the DEM works similarly to DNA computers in immense parallel information processing. Additionally, a plasma display panel (PDP) is used to show the change of solutions, and helps us directly see the distribution of assignments. The feasibility of the DEM is tested by applying it to compute a maximum clique problem (MCP) with eight vertices. Owing to the limited computing resources on the SOPC architecture, the DEM can solve moderate-size problems in polynomial time.
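
    For scale, an eight-vertex instance is small enough to solve by exhaustive search. The brute-force reference below (with an invented edge set) enumerates sequentially the same candidate subsets that the sticker/DEM approach evaluates in parallel:

        from itertools import combinations

        # Invented eight-vertex instance of the maximum clique problem.
        n = 8
        edges = {(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4),
                 (4, 5), (4, 6), (5, 6), (5, 7), (6, 7), (4, 7)}

        def is_clique(vs):
            # Every pair in the subset must be connected.
            return all((a, b) in edges or (b, a) in edges
                       for a, b in combinations(vs, 2))

        best = max((vs for k in range(n, 0, -1)
                    for vs in combinations(range(n), k) if is_clique(vs)),
                   key=len)
        print(best)  # (4, 5, 6, 7)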

  6. Maximum capacity model of grid-connected multi-wind farms considering static security constraints in electrical grids

    International Nuclear Information System (INIS)

    Zhou, W; Oodo, S O; He, H; Qiu, G Y

    2013-01-01

    An increasing interest in wind energy and the advance of related technologies have increased the connection of wind power generation to electrical grids. This paper proposes an optimization model for determining the maximum capacity of wind farms in a power system. In this model, generator power output limits, voltage limits and thermal limits of branches in the grid system were considered in order to limit the steady-state security influence of wind generators on the power system. The optimization model was solved by a nonlinear primal-dual interior-point method. An IEEE 30-bus system with two wind farms was tested through simulation studies, and an analysis was conducted to verify the effectiveness of the proposed model. The results indicated that the model is efficient and reasonable.

  7. Maximum capacity model of grid-connected multi-wind farms considering static security constraints in electrical grids

    Science.gov (United States)

    Zhou, W.; Qiu, G. Y.; Oodo, S. O.; He, H.

    2013-03-01

    An increasing interest in wind energy and the advance of related technologies have increased the connection of wind power generation to electrical grids. This paper proposes an optimization model for determining the maximum capacity of wind farms in a power system. In this model, generator power output limits, voltage limits and thermal limits of branches in the grid system were considered in order to limit the steady-state security influence of wind generators on the power system. The optimization model was solved by a nonlinear primal-dual interior-point method. An IEEE 30-bus system with two wind farms was tested through simulation studies, and an analysis was conducted to verify the effectiveness of the proposed model. The results indicated that the model is efficient and reasonable.

  8. Setting limits on supersymmetry using simplified models

    CERN Document Server

    Gutschow, C.

    2012-01-01

    Experimental limits on supersymmetry and similar theories are difficult to set because of the enormous available parameter space and difficult to generalize because of the complexity of single points. Therefore, more phenomenological, simplified models are becoming popular for setting experimental limits, as they have clearer physical implications. The use of these simplified model limits to set a real limit on a concrete theory has not, however, been demonstrated. This paper recasts simplified model limits into limits on a specific and complete supersymmetry model, minimal supergravity. Limits obtained under various physical assumptions are comparable to those produced by directed searches. A prescription is provided for calculating conservative and aggressive limits on additional theories. Using acceptance and efficiency tables along with the expected and observed numbers of events in various signal regions, LHC experimental results can be re-cast in this manner into almost any theoretical framework, includ...

  9. Paired maximum inspiratory and expiratory plain chest radiographs for assessment of airflow limitation in chronic obstructive pulmonary disease

    Energy Technology Data Exchange (ETDEWEB)

    Kinoshita, Takashi, E-mail: tkino@med.kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Kawayama, Tomotaka, E-mail: kawayama_tomotaka@med.kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Imamura, Youhei, E-mail: mamura_youhei@med.kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Sakazaki, Yuki, E-mail: sakazaki@med.kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Hirai, Ryo, E-mail: hirai_ryou@kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Ishii, Hidenobu, E-mail: shii_hidenobu@med.kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Suetomo, Masashi, E-mail: jin_t_f_c@yahoo.co.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Matsunaga, Kazuko, E-mail: kmatsunaga@kouhoukai.or.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Azuma, Koichi, E-mail: azuma@med.kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Fujimoto, Kiminori, E-mail: kimichan@med.kurume-u.ac.jp [Department of Radiology, Kurume University School of Medicine, Kurume (Japan); Hoshino, Tomoaki, E-mail: hoshino@med.kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan)

    2015-04-15

    Highlights: •Computed tomography (CT) is often used for the diagnosis of chronic obstructive pulmonary disease. •CT is more expensive and involves a higher radiation dose. •Plain chest radiography is simpler and cheaper; it is useful for detecting pulmonary emphysema, but not airflow limitation. •Our study demonstrated that the paired maximum inspiratory and expiratory plain chest radiography technique could detect severe airflow limitation. •We believe the technique is helpful for diagnosing patients with chronic obstructive pulmonary disease. -- Abstract: Background: The usefulness of paired maximum inspiratory and expiratory (I/E) plain chest radiography (pCR) for the diagnosis of chronic obstructive pulmonary disease (COPD) is still unclear. Objectives: We examined whether measurement of the I/E ratio using paired I/E pCR could be used for detection of airflow limitation in patients with COPD. Methods: Eighty patients with COPD (GOLD stage I = 23, stage II = 32, stage III = 15, stage IV = 10) and 34 control subjects were enrolled. The I/E ratios of frontal and lateral lung areas, and the lung distance between the apex and base on pCR views, were analyzed quantitatively. Pulmonary function parameters were measured at the same time. Results: The I/E ratios for the frontal lung area (1.25 ± 0.01), the lateral lung area (1.29 ± 0.01), and the lung distance (1.18 ± 0.01) were significantly (p < 0.05) reduced in COPD patients compared with controls (1.31 ± 0.02, 1.38 ± 0.02, and 1.22 ± 0.01, respectively). The I/E ratios in frontal and lateral areas, and the lung distance, were significantly (p < 0.05) reduced in severe (GOLD stage III) and very severe (GOLD stage IV) COPD as compared to control subjects, although the I/E ratios did not differ significantly between severe and very severe COPD. Moreover, the I/E ratios were significantly correlated with pulmonary function parameters. Conclusions: Measurement of I/E ratios on paired I/E pCR is simple and useful for the assessment of airflow limitation in patients with COPD.

  10. Paired maximum inspiratory and expiratory plain chest radiographs for assessment of airflow limitation in chronic obstructive pulmonary disease

    International Nuclear Information System (INIS)

    Kinoshita, Takashi; Kawayama, Tomotaka; Imamura, Youhei; Sakazaki, Yuki; Hirai, Ryo; Ishii, Hidenobu; Suetomo, Masashi; Matsunaga, Kazuko; Azuma, Koichi; Fujimoto, Kiminori; Hoshino, Tomoaki

    2015-01-01

    Highlights: •Computed tomography (CT) is often used for the diagnosis of chronic obstructive pulmonary disease. •CT is more expensive and involves a higher radiation dose. •Plain chest radiography is simpler and cheaper; it is useful for detecting pulmonary emphysema, but not airflow limitation. •Our study demonstrated that the paired maximum inspiratory and expiratory plain chest radiography technique could detect severe airflow limitation. •We believe the technique is helpful for diagnosing patients with chronic obstructive pulmonary disease. -- Abstract: Background: The usefulness of paired maximum inspiratory and expiratory (I/E) plain chest radiography (pCR) for the diagnosis of chronic obstructive pulmonary disease (COPD) is still unclear. Objectives: We examined whether measurement of the I/E ratio using paired I/E pCR could be used for detection of airflow limitation in patients with COPD. Methods: Eighty patients with COPD (GOLD stage I = 23, stage II = 32, stage III = 15, stage IV = 10) and 34 control subjects were enrolled. The I/E ratios of frontal and lateral lung areas, and the lung distance between the apex and base on pCR views, were analyzed quantitatively. Pulmonary function parameters were measured at the same time. Results: The I/E ratios for the frontal lung area (1.25 ± 0.01), the lateral lung area (1.29 ± 0.01), and the lung distance (1.18 ± 0.01) were significantly (p < 0.05) reduced in COPD patients compared with controls (1.31 ± 0.02, 1.38 ± 0.02, and 1.22 ± 0.01, respectively). The I/E ratios in frontal and lateral areas, and the lung distance, were significantly (p < 0.05) reduced in severe (GOLD stage III) and very severe (GOLD stage IV) COPD as compared to control subjects, although the I/E ratios did not differ significantly between severe and very severe COPD. Moreover, the I/E ratios were significantly correlated with pulmonary function parameters. Conclusions: Measurement of I/E ratios on paired I/E pCR is simple and useful for the assessment of airflow limitation in patients with COPD.

  11. Application of Bayesian Maximum Entropy Filter in parameter calibration of groundwater flow model in PingTung Plain

    Science.gov (United States)

    Cheung, Shao-Yong; Lee, Chieh-Han; Yu, Hwa-Lung

    2017-04-01

    Due to limited hydrogeological observation data and the high levels of uncertainty within them, parameter estimation for groundwater models has been an important issue. There are many methods of parameter estimation; for example, the Kalman filter provides real-time calibration of parameters through measurements at groundwater monitoring wells, and related methods such as the Extended Kalman Filter and the Ensemble Kalman Filter are widely applied in groundwater research. However, Kalman filter methods are limited to linearity. This study proposes a novel method, Bayesian Maximum Entropy Filtering, which can account for the uncertainty of data in parameter estimation. With these two methods, we can estimate parameters from both hard data (certain) and soft data (uncertain) at the same time. In this study, we use Python and QGIS with the groundwater model (MODFLOW) and implement both the Extended Kalman Filter and Bayesian Maximum Entropy Filtering in Python for parameter estimation. This approach retains a conventional filtering method while also accounting for the uncertainty of the data. The study was conducted through numerical model experiments that combine the Bayesian maximum entropy filter with a hypothesized architecture of the MODFLOW groundwater model, using virtual observation wells to observe the simulated groundwater system periodically. The results showed that, by considering the uncertainty of the data, the Bayesian maximum entropy filter provides good real-time parameter estimates.

  12. Stochastic modelling of the monthly average maximum and minimum temperature patterns in India 1981-2015

    Science.gov (United States)

    Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.

    2018-04-01

    The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns through a suitable seasonal autoregressive integrated moving average (SARIMA) model for the period 1981-2015 in India. The variations and distributions of monthly maximum and minimum temperatures are analyzed through box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contains sharp peaks in almost all years, while this is not true for the minimum temperature series, so the two series are modelled separately. The possible SARIMA model has been chosen by inspecting the autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the logarithmically transformed temperature series. The SARIMA (1, 0, 0) × (0, 1, 1)₁₂ model is selected for the monthly average maximum and minimum temperature series based on the minimum Bayesian information criterion. The model parameters are obtained using the maximum-likelihood method with the help of the standard errors of the residuals. The adequacy of the selected model is determined using correlation diagnostic checking through the ACF, PACF, IACF, and p values of the Ljung-Box test statistic of the residuals, and using normality diagnostic checking through the kernel and normal density curves of the histogram and Q-Q plot. Finally, monthly maximum and minimum temperature patterns of India for the next 3 years are forecast with the help of the selected model.
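
    For readers who want to reproduce this kind of fit, here is a minimal sketch with statsmodels; the synthetic series below is a stand-in for the Indian temperature data, which are not reproduced here.

    ```python
    # Sketch: fit the SARIMA(1,0,0)x(0,1,1)_12 model named above to a
    # synthetic monthly series (420 months ~ 1981-2015), forecast 3 years.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    idx = pd.date_range("1981-01", periods=420, freq="MS")
    temps = pd.Series(30 + 5 * np.sin(2 * np.pi * np.arange(420) / 12)
                      + np.random.normal(0, 0.5, 420), index=idx)

    model = SARIMAX(np.log(temps), order=(1, 0, 0),
                    seasonal_order=(0, 1, 1, 12))
    result = model.fit(disp=False)          # maximum-likelihood estimation
    print(result.bic)                       # criterion used for selection
    forecast = np.exp(result.forecast(36))  # back-transform the log series
    ```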

  13. Effects of lag and maximum growth in contaminant transport and biodegradation modeling

    International Nuclear Information System (INIS)

    Wood, B.D.; Dawson, C.N.

    1992-06-01

    The effects of time lag and maximum microbial growth on biodegradation in contaminant transport are discussed. A mathematical model is formulated that accounts for these effects, and a numerical case study is presented that demonstrates how lag influences biodegradation.

  14. Maximum Power Point Tracking Control of Photovoltaic Systems: A Polynomial Fuzzy Model-Based Approach

    DEFF Research Database (Denmark)

    Rakhshan, Mohsen; Vafamand, Navid; Khooban, Mohammad Hassan

    2018-01-01

    This paper introduces a polynomial fuzzy model (PFM)-based maximum power point tracking (MPPT) control approach to increase the performance and efficiency of solar photovoltaic (PV) electricity generation. The proposed method relies on a polynomial fuzzy modeling, a polynomial parallel ... , a direct maximum power (DMP)-based control structure is considered for MPPT. Using the PFM representation, the DMP-based control structure is formulated in terms of SOS conditions. Unlike the conventional approaches, the proposed approach does not require exploring the maximum power operational point...

  15. Theoretical assessment of the maximum obtainable power in wireless power transfer constrained by human body exposure limits in a typical room scenario

    International Nuclear Information System (INIS)

    Chen, Xi Lin; De Santis, Valerio; Umenei, Aghuinyue Esai

    2014-01-01

    In this study, the maximum received power obtainable through wireless power transfer (WPT) by a small receiver (Rx) coil from a relatively large transmitter (Tx) coil is numerically estimated in the frequency range from 100 kHz to 10 MHz based on human body exposure limits. Analytical calculations were first conducted to determine the worst-case coupling between a homogeneous cylindrical phantom with a radius of 0.65 m and a Tx coil positioned 0.1 m away with the radius ranging from 0.25 to 2.5 m. Subsequently, three high-resolution anatomical models were employed to compute the peak induced field intensities with respect to various Tx coil locations and dimensions. Based on the computational results, scaling factors which correlate the cylindrical phantom and anatomical model results were derived. Next, the optimal operating frequency, at which the highest transmitter source power can be utilized without exceeding the exposure limits, is found to be around 2 MHz. Finally, a formulation is proposed to estimate the maximum obtainable power of WPT in a typical room scenario while adhering to the human body exposure compliance mandates. (paper)

  16. Theoretical assessment of the maximum obtainable power in wireless power transfer constrained by human body exposure limits in a typical room scenario.

    Science.gov (United States)

    Chen, Xi Lin; De Santis, Valerio; Umenei, Aghuinyue Esai

    2014-07-07

    In this study, the maximum received power obtainable through wireless power transfer (WPT) by a small receiver (Rx) coil from a relatively large transmitter (Tx) coil is numerically estimated in the frequency range from 100 kHz to 10 MHz based on human body exposure limits. Analytical calculations were first conducted to determine the worst-case coupling between a homogeneous cylindrical phantom with a radius of 0.65 m and a Tx coil positioned 0.1 m away with the radius ranging from 0.25 to 2.5 m. Subsequently, three high-resolution anatomical models were employed to compute the peak induced field intensities with respect to various Tx coil locations and dimensions. Based on the computational results, scaling factors which correlate the cylindrical phantom and anatomical model results were derived. Next, the optimal operating frequency, at which the highest transmitter source power can be utilized without exceeding the exposure limits, is found to be around 2 MHz. Finally, a formulation is proposed to estimate the maximum obtainable power of WPT in a typical room scenario while adhering to the human body exposure compliance mandates.

  17. Predicting the distribution of the Asian tapir in Peninsular Malaysia using maximum entropy modeling.

    Science.gov (United States)

    Clements, Gopalasamy Reuben; Rayan, D Mark; Aziz, Sheema Abdul; Kawanishi, Kae; Traeholt, Carl; Magintan, David; Yazi, Muhammad Fadlli Abdul; Tingley, Reid

    2012-12-01

    In 2008, the IUCN threat status of the Asian tapir (Tapirus indicus) was reclassified from 'vulnerable' to 'endangered'. The latest distribution map from the IUCN Red List suggests that the tapirs' native range is becoming increasingly fragmented in Peninsular Malaysia, but distribution data collected by local researchers suggest a more extensive geographical range. Here, we compile a database of 1261 tapir occurrence records within Peninsular Malaysia, and demonstrate that this species, indeed, has a much broader geographical range than the IUCN range map suggests. However, extreme spatial and temporal bias in these records limits their utility for conservation planning. Therefore, we used maximum entropy (MaxEnt) modeling to elucidate the potential extent of the Asian tapir's occurrence in Peninsular Malaysia while accounting for bias in existing distribution data. Our MaxEnt model predicted that the Asian tapir has a wider geographic range than our fine-scale data and the IUCN range map both suggest. Approximately 37% of Peninsular Malaysia contains potentially suitable tapir habitats. Our results justify a revision to the Asian tapir's extent of occurrence in the IUCN Red List. Furthermore, our modeling demonstrated that selectively logged forests encompass 45% of potentially suitable tapir habitats, underscoring the importance of these habitats for the conservation of this species in Peninsular Malaysia. © 2012 Wiley Publishing Asia Pty Ltd, ISZS and IOZ/CAS.

  18. An extended heterogeneous car-following model accounting for anticipation driving behavior and mixed maximum speeds

    Science.gov (United States)

    Sun, Fengxin; Wang, Jufeng; Cheng, Rongjun; Ge, Hongxia

    2018-02-01

    The optimal driving speeds of different vehicles may differ for the same headway. In the optimal velocity function of the optimal velocity (OV) model, the maximum speed v_max is an important parameter determining the optimal driving speed. A vehicle with a higher maximum speed is more willing to drive faster than one with a lower maximum speed in a similar situation. By incorporating the anticipation driving behavior of relative velocity and mixed maximum speeds of different percentages into the optimal velocity function, an extended heterogeneous car-following model is presented in this paper. The analytical linear stability condition for this extended heterogeneous traffic model is obtained by using linear stability theory. Numerical simulations are carried out to explore the complex phenomena resulting from the cooperation between anticipation driving behavior and heterogeneous maximum speeds in the optimal velocity function. The analytical and numerical results all demonstrate that strengthening the driver's anticipation effect can improve the stability of heterogeneous traffic flow, that increasing the lowest value in the mixed maximum speeds will result in more instability, and that increasing the value or proportion of the part already having a higher maximum speed will cause different stabilities at high or low traffic densities.
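
    To make the role of v_max concrete, a small sketch of the standard tanh-shaped optimal velocity function follows; the functional form and constants are the usual Bando-type choices, assumed here rather than taken from the paper.

    ```python
    # Sketch of an OV function with heterogeneous maximum speeds: at the
    # same headway, the vehicle with the larger v_max prefers a higher speed.
    import numpy as np

    def optimal_velocity(headway, v_max, h_c=4.0):
        """Bando-type OV function; h_c is an assumed safety-distance constant."""
        return 0.5 * v_max * (np.tanh(headway - h_c) + np.tanh(h_c))

    for v_max in (2.0, 4.0):            # two vehicle classes, mixed fleet
        print(v_max, optimal_velocity(6.0, v_max))
    ```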

  19. Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM

    Science.gov (United States)

    Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman

    2012-01-01

    This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…

  20. Modelling non-stationary annual maximum flood heights in the lower Limpopo River basin of Mozambique

    Directory of Open Access Journals (Sweden)

    Daniel Maposa

    2016-05-01

    Full Text Available In this article we fit a time-dependent generalised extreme value (GEV) distribution to annual maximum flood heights at three sites: Chokwe, Sicacate and Combomune in the lower Limpopo River basin of Mozambique. A GEV distribution is fitted to six annual maximum time series models at each site, namely: annual daily maximum (AM1), annual 2-day maximum (AM2), annual 5-day maximum (AM5), annual 7-day maximum (AM7), annual 10-day maximum (AM10) and annual 30-day maximum (AM30). Non-stationary time-dependent GEV models with a linear trend in the location and scale parameters are considered in this study. The results show a lack of sufficient evidence to indicate a linear trend in the location parameter at all three sites. On the other hand, the findings reveal strong evidence of a linear trend in the scale parameter at Combomune and Sicacate, whilst the scale parameter had no significant linear trend at Chokwe. Further investigation also reveals that the location parameter at Sicacate can be modelled by a nonlinear quadratic trend; however, the added complexity of the overall model is not worthwhile relative to the gain in fit over a time-homogeneous model. This study shows the importance of extending the time-homogeneous GEV model to incorporate climate change factors such as trend in the lower Limpopo River basin, particularly in this era of global warming and a changing climate. Keywords: nonstationary extremes; annual maxima; lower Limpopo River; generalised extreme value
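
    A minimal sketch of fitting such a non-stationary GEV, with a linear trend in the location parameter, by direct likelihood maximization follows; the synthetic data and starting values are illustrative, not the Limpopo series. Note that scipy's shape parameter c has the opposite sign of the usual GEV shape ξ.

    ```python
    # Sketch: GEV with time-dependent location mu(t) = mu0 + mu1 * t,
    # fitted by maximizing the log-likelihood; data are synthetic stand-ins.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import genextreme

    t = np.arange(50.0)                             # year index
    x = genextreme.rvs(c=-0.1, loc=10 + 0.05 * t, scale=2.0)

    def neg_log_lik(params):
        mu0, mu1, log_sigma, c = params
        return -np.sum(genextreme.logpdf(x, c, loc=mu0 + mu1 * t,
                                         scale=np.exp(log_sigma)))

    fit = minimize(neg_log_lik, x0=[x.mean(), 0.0, np.log(x.std()), -0.1],
                   method="Nelder-Mead")
    mu0, mu1, log_sigma, c = fit.x                  # mu1: trend in location
    ```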

  1. Maximum heart rate in brown trout (Salmo trutta fario) is not limited by firing rate of pacemaker cells.

    Science.gov (United States)

    Haverinen, Jaakko; Abramochkin, Denis V; Kamkin, Andre; Vornanen, Matti

    2017-02-01

    Temperature-induced changes in cardiac output (Q̇) in fish are largely dependent on thermal modulation of heart rate (f_H), and at high temperatures Q̇ collapses due to heat-dependent depression of f_H. This study tests the hypothesis that the firing rate of sinoatrial pacemaker cells sets the upper thermal limit of f_H in vivo. To this end, the temperature dependence of the action potential (AP) frequency of enzymatically isolated pacemaker cells (pacemaker rate, f_PM), the spontaneous beating rate of isolated sinoatrial preparations (f_SA), and the in vivo f_H of the cold-acclimated (4°C) brown trout (Salmo trutta fario) were compared under acute thermal challenges. With rising temperature, f_PM steadily increased because of the acceleration of diastolic depolarization and shortening of AP duration up to the break point temperature (T_BP) of 24.0 ± 0.37°C, at which point the electrical activity abruptly ceased. The maximum f_PM at T_BP was much higher [193 ± 21.0 beats per minute (bpm)] than the peak f_SA (94.3 ± 6.0 bpm at 24.1°C) or peak f_H (76.7 ± 2.4 bpm at 15.7 ± 0.82°C) (p < 0.05), indicating that the firing rate of sinoatrial pacemaker cells does not limit maximum f_H of the brown trout in vivo. Copyright © 2017 the American Physiological Society.

  2. Maximum likelihood estimation of the parameters of nonminimum phase and noncausal ARMA models

    DEFF Research Database (Denmark)

    Rasmussen, Klaus Bolding

    1994-01-01

    The well-known prediction-error-based maximum likelihood (PEML) method can only handle minimum phase ARMA models. This paper presents a new method known as the back-filtering-based maximum likelihood (BFML) method, which can handle nonminimum phase and noncausal ARMA models. The BFML method is identical to the PEML method in the case of a minimum phase ARMA model, and it turns out that the BFML method incorporates a noncausal ARMA filter with poles outside the unit circle for estimation of the parameters of a causal, nonminimum phase ARMA model.

  3. Finite mixture model: A maximum likelihood estimation approach on time series data

    Science.gov (United States)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because it provides asymptotic properties. In addition, it shows consistency as the sample size increases to infinity, illustrating that maximum likelihood estimation is an asymptotically unbiased estimator. Moreover, the parameter estimates obtained from maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative relationship between rubber price and exchange rate for all selected countries.
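
    A minimal sketch of maximum likelihood fitting of a two-component (Gaussian) mixture via the EM algorithm in scikit-learn follows; synthetic data stand in for the rubber price and exchange rate series, which are not reproduced here.

    ```python
    # Sketch: ML fit of a two-component Gaussian mixture (EM maximizes the
    # likelihood); the data here are synthetic, not the paper's series.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(-1.0, 0.5, 300),
                           rng.normal(2.0, 1.0, 700)]).reshape(-1, 1)

    gmm = GaussianMixture(n_components=2, random_state=0).fit(data)
    print(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel())
    ```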

  4. Hanford defined waste model limitations and improvements

    International Nuclear Information System (INIS)

    HARMSEN, R.W.

    1999-01-01

    Recommendation 93-5 Implementation Plan, Milestone 5,6.3.1.i, requires issuance of this report, which addresses "updates to the tank contents model". This report summarizes the review of the limitations of the Hanford Defined Waste model, Revision 4, and provides conclusions and recommendations for potential updates to the model.

  5. Transport methods: general. 6. A Flux-Limited Diffusion Theory Derived from the Maximum Entropy Eddington Factor

    International Nuclear Information System (INIS)

    Yin, Chukai; Su, Bingjing

    2001-01-01

    Minerbo's maximum entropy Eddington factor (MEEF) method was proposed as a low-order approximation to transport theory, in which the first two moment equations are closed for the scalar flux Φ and the current F through a statistically derived nonlinear Eddington factor f. This closure has the ability to handle various degrees of anisotropy of the angular flux and is well justified both numerically and theoretically. Thus, many efforts have been made to use this approximation in transport computations, especially in the radiative transfer and astrophysics communities. However, the method suffers from numerical instability and may lead to anomalous solutions if the equations are solved by certain commonly used (implicit) mesh schemes. Studies on numerical stability in one-dimensional cases show that the MEEF equations can be solved satisfactorily by an implicit scheme (of treating δΦ/δx) if the angular flux is not too anisotropic. The figures compare the S32 transport solution, the classic diffusion solution P1, the MEEF solution f_M obtained by Riemann solvers, and the NFLD solution D_M for the two problems, respectively. In Fig. 1, NFLD and MEEF quantitatively predict very close results. However, the NFLD solution is qualitatively better because it is continuous, while MEEF predicts unphysical jumps near the middle of the slab. In Fig. 2, the NFLD and MEEF solutions are almost identical, except near the material interface. In summary, the flux-limited diffusion theory derived from the MEEF description is quantitatively as accurate as the MEEF method. However, it is more qualitatively correct and user-friendly than the MEEF method and can be applied efficiently to various steady-state problems. Numerical tests show that this method is widely valid and overall predicts better results than other low-order approximations for various kinds of problems, including eigenvalue problems. Thus, it is an appealing approximate solution technique that is fast computationally and yet accurate enough for a wide variety of problems.

  6. Modelling of particles collection by vented limiters

    International Nuclear Information System (INIS)

    Tsitrone, E.; Pegourie, B.; Granata, G.

    1995-01-01

    This document deals with the use of vented limiters for the collection of neutral particles in Tore Supra. The model developed for experiments is presented together with its experimental validation. Some possible improvements to the present limiter are also proposed. (TEC). 5 refs., 3 figs

  7. A mathematical model of the maximum power density attainable in an alkaline hydrogen/oxygen fuel cell

    Science.gov (United States)

    Kimble, Michael C.; White, Ralph E.

    1991-01-01

    A mathematical model of a hydrogen/oxygen alkaline fuel cell is presented that can be used to predict the polarization behavior under various power loads. The major limitations to achieving high power densities are indicated and methods to increase the maximum attainable power density are suggested. The alkaline fuel cell model describes the phenomena occurring in the solid, liquid, and gaseous phases of the anode, separator, and cathode regions based on porous electrode theory applied to three phases. Fundamental equations of chemical engineering that describe conservation of mass and charge, species transport, and kinetic phenomena are used to develop the model by treating all phases as a homogeneous continuum.

  8. %lrasch_mml: A SAS Macro for Marginal Maximum Likelihood Estimation in Longitudinal Polytomous Rasch Models

    Directory of Open Access Journals (Sweden)

    Maja Olsbjerg

    2015-10-01

    Full Text Available Item response theory models are often applied when a number of items are used to measure a unidimensional latent variable. Originally proposed and used within educational research, they are also used when the focus is on physical functioning or psychological wellbeing. Modern applications often need more general models, typically models for multidimensional latent variables or longitudinal models for repeated measurements. This paper describes a SAS macro that fits two-dimensional polytomous Rasch models using a specification of the model that is sufficiently flexible to accommodate longitudinal Rasch models. The macro estimates item parameters using marginal maximum likelihood estimation. A graphical presentation of item characteristic curves is included.

  9. Matrix model calculations beyond the spherical limit

    International Nuclear Information System (INIS)

    Ambjoern, J.; Chekhov, L.; Kristjansen, C.F.; Makeenko, Yu.

    1993-01-01

    We propose an improved iterative scheme for calculating higher genus contributions to the multi-loop (or multi-point) correlators and the partition function of the hermitian one matrix model. We present explicit results up to genus two. We develop a version which gives directly the result in the double scaling limit and present explicit results up to genus four. Using the latter version we prove that the hermitian and the complex matrix model are equivalent in the double scaling limit and that in this limit they are both equivalent to the Kontsevich model. We discuss how our results away from the double scaling limit are related to the structure of moduli space. (orig.)

  10. The limit distribution of the maximum increment of a random walk with dependent regularly varying jump sizes

    DEFF Research Database (Denmark)

    Mikosch, Thomas Valentin; Moser, Martin

    2013-01-01

    We investigate the maximum increment of a random walk with heavy-tailed jump size distribution. Here heavy-tailedness is understood as regular variation of the finite-dimensional distributions. The jump sizes constitute a strictly stationary sequence. Using a continuous mapping argument acting on the point processes of the normalized jump sizes, we prove that the maximum increment of the random walk converges in distribution to a Fréchet distributed random variable.
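
    For reference, the Fréchet limit law referred to above has the standard distribution function below (with α > 0 the index of regular variation; the normalizing constants used in the paper are omitted here):

    ```latex
    % Frechet distribution function; alpha is the tail index of the
    % regularly varying jump sizes.
    \Phi_\alpha(x) = \exp\!\left(-x^{-\alpha}\right), \qquad x > 0 .
    ```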

  11. Prediction of transient maximum heat flux based on a simple liquid layer evaporation model

    International Nuclear Information System (INIS)

    Serizawa, A.; Kataoka, I.

    1981-01-01

    A model of liquid layer evaporation with considerable supply of liquid has been formulated to predict burnout characteristics (maximum heat flux, life, etc.) during an increase of the power. The analytical description of the model is built upon the visual and photographic observations of the boiling configuration at near peak heat flux reported by other investigators. The prediction compares very favourably with water data presently available. It is suggested from the work reported here that the maximum heat flux occurs because of a balance between the consumption of the liquid film on the heated surface and the supply of liquid. Thickness of the liquid film is also very important. (author)

  12. Understanding the Benefits and Limitations of Increasing Maximum Rotor Tip Speed for Utility-Scale Wind Turbines

    International Nuclear Information System (INIS)

    Ning, A; Dykes, K

    2014-01-01

    For utility-scale wind turbines, the maximum rotor rotation speed is generally constrained by noise considerations. Innovations in acoustics and/or siting in remote locations may enable future wind turbine designs to operate with higher tip speeds. Wind turbines designed to take advantage of higher tip speeds are expected to be able to capture more energy and utilize lighter drivetrains because of their decreased maximum torque loads. However, the magnitude of the potential cost savings is unclear, and the potential trade-offs with rotor and tower sizing are not well understood. A multidisciplinary, system-level framework was developed to facilitate wind turbine and wind plant analysis and optimization. The rotors, nacelles, and towers of wind turbines are optimized for minimum cost of energy subject to a large number of structural, manufacturing, and transportation constraints. These optimization studies suggest that allowing for higher maximum tip speeds could result in a decrease in the cost of energy of up to 5% for land-based sites and 2% for offshore sites when using current technology. Almost all of the cost savings are attributed to the decrease in gearbox mass as a consequence of the reduced maximum rotor torque. Although there is some increased energy capture, it is very minimal (less than 0.5%). Extreme increases in tip speed are unnecessary; benefits for maximum tip speeds greater than 100-110 m/s are small to nonexistent

  13. Understanding the Benefits and Limitations of Increasing Maximum Rotor Tip Speed for Utility-Scale Wind Turbines

    Science.gov (United States)

    Ning, A.; Dykes, K.

    2014-06-01

    For utility-scale wind turbines, the maximum rotor rotation speed is generally constrained by noise considerations. Innovations in acoustics and/or siting in remote locations may enable future wind turbine designs to operate with higher tip speeds. Wind turbines designed to take advantage of higher tip speeds are expected to be able to capture more energy and utilize lighter drivetrains because of their decreased maximum torque loads. However, the magnitude of the potential cost savings is unclear, and the potential trade-offs with rotor and tower sizing are not well understood. A multidisciplinary, system-level framework was developed to facilitate wind turbine and wind plant analysis and optimization. The rotors, nacelles, and towers of wind turbines are optimized for minimum cost of energy subject to a large number of structural, manufacturing, and transportation constraints. These optimization studies suggest that allowing for higher maximum tip speeds could result in a decrease in the cost of energy of up to 5% for land-based sites and 2% for offshore sites when using current technology. Almost all of the cost savings are attributed to the decrease in gearbox mass as a consequence of the reduced maximum rotor torque. Although there is some increased energy capture, it is very minimal (less than 0.5%). Extreme increases in tip speed are unnecessary; benefits for maximum tip speeds greater than 100-110 m/s are small to nonexistent.

  14. Maximum solid concentrations of coal water slurries predicted by neural network models

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Jun; Li, Yanchang; Zhou, Junhu; Liu, Jianzhong; Cen, Kefa

    2010-12-15

    The nonlinear back-propagation (BP) neural network models were developed to predict the maximum solid concentration of coal water slurry (CWS) which is a substitute for oil fuel, based on physicochemical properties of 37 typical Chinese coals. The Levenberg-Marquardt algorithm was used to train five BP neural network models with different input factors. The data pretreatment method, learning rate and hidden neuron number were optimized by training models. It is found that the Hardgrove grindability index (HGI), moisture and coalification degree of parent coal are 3 indispensable factors for the prediction of CWS maximum solid concentration. Each BP neural network model gives a more accurate prediction result than the traditional polynomial regression equation. The BP neural network model with 3 input factors of HGI, moisture and oxygen/carbon ratio gives the smallest mean absolute error of 0.40%, which is much lower than that of 1.15% given by the traditional polynomial regression equation. (author)
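
    As a rough illustration of the modelling setup, a scikit-learn sketch follows; scikit-learn does not provide Levenberg-Marquardt training, so its default solver is used here, and the coal data are replaced by synthetic placeholders.

    ```python
    # Sketch: small BP network mapping (HGI, moisture, O/C ratio) to
    # maximum solid concentration; data are synthetic, not the 37 coals.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    X = rng.uniform([40, 2, 0.05], [100, 12, 0.25], size=(37, 3))
    y = (60 + 0.05 * X[:, 0] - 0.8 * X[:, 1] - 20 * X[:, 2]
         + rng.normal(0, 0.4, 37))              # invented response (%)

    Xs = StandardScaler().fit_transform(X)
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                       random_state=0)
    net.fit(Xs, y)
    print(net.predict(Xs[:3]))                  # predicted concentrations
    ```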

  15. Global-scale high-resolution (~1 km) modelling of mean, maximum and minimum annual streamflow

    Science.gov (United States)

    Barbarossa, Valerio; Huijbregts, Mark; Hendriks, Jan; Beusen, Arthur; Clavreul, Julie; King, Henry; Schipper, Aafke

    2017-04-01

    Quantifying mean, maximum and minimum annual flow (AF) of rivers at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. AF metrics can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict AF metrics based on climate and catchment characteristics. Yet, so far, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. We developed global-scale regression models that quantify mean, maximum and minimum AF as a function of catchment area and catchment-averaged slope, elevation, and mean, maximum and minimum annual precipitation and air temperature. We then used these models to obtain global 30 arc-second (~1 km) maps of mean, maximum and minimum AF for each year from 1960 through 2015, based on a newly developed hydrologically conditioned digital elevation model. We calibrated our regression models based on observations of discharge and catchment characteristics from about 4,000 catchments worldwide, ranging from 100 to 10⁶ km² in size, and validated them against independent measurements as well as the output of a number of process-based global hydrological models (GHMs). The variance explained by our regression models ranged up to 90% and the performance of the models compared well with the performance of existing GHMs. Yet, our AF maps provide a level of spatial detail that cannot yet be achieved by current GHMs.

  16. Computer modelling of superconductive fault current limiters

    Energy Technology Data Exchange (ETDEWEB)

    Weller, R.A.; Campbell, A.M.; Coombs, T.A.; Cardwell, D.A.; Storey, R.J. [Cambridge Univ. (United Kingdom). Interdisciplinary Research Centre in Superconductivity (IRC); Hancox, J. [Rolls Royce, Applied Science Division, Derby (United Kingdom)

    1998-05-01

    Investigations are being carried out on the use of superconductors for fault current limiting applications. A number of computer programs are being developed to predict the behavior of different 'resistive' fault current limiter designs under a variety of fault conditions. The programs achieve solution by iterative methods based around real measured data rather than theoretical models in order to achieve accuracy at high current densities. (orig.) 5 refs.

  17. Using maximum entropy modeling to identify and prioritize red spruce forest habitat in West Virginia

    Science.gov (United States)

    Nathan R. Beane; James S. Rentch; Thomas M. Schuler

    2013-01-01

    Red spruce forests in West Virginia are found in island-like distributions at high elevations and provide essential habitat for the endangered Cheat Mountain salamander and the recently delisted Virginia northern flying squirrel. Therefore, it is important to identify restoration priorities of red spruce forests. Maximum entropy modeling was used to identify areas of...

  18. Maximum entropy production: Can it be used to constrain conceptual hydrological models?

    Science.gov (United States)

    M.C. Westhoff; E. Zehe

    2013-01-01

    In recent years, optimality principles have been proposed to constrain hydrological models. The principle of maximum entropy production (MEP) is one of the proposed principles and is subject of this study. It states that a steady state system is organized in such a way that entropy production is maximized. Although successful applications have been reported in...

  19. Applying a Weighted Maximum Likelihood Latent Trait Estimator to the Generalized Partial Credit Model

    Science.gov (United States)

    Penfield, Randall D.; Bergeron, Jennifer M.

    2005-01-01

    This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…

  20. Sufficient Stochastic Maximum Principle in a Regime-Switching Diffusion Model

    Energy Technology Data Exchange (ETDEWEB)

    Donnelly, Catherine, E-mail: C.Donnelly@hw.ac.uk [Heriot-Watt University, Department of Actuarial Mathematics and Statistics (United Kingdom)

    2011-10-15

    We prove a sufficient stochastic maximum principle for the optimal control of a regime-switching diffusion model. We show the connection to dynamic programming and we apply the result to a quadratic loss minimization problem, which can be used to solve a mean-variance portfolio selection problem.

  1. Sufficient Stochastic Maximum Principle in a Regime-Switching Diffusion Model

    International Nuclear Information System (INIS)

    Donnelly, Catherine

    2011-01-01

    We prove a sufficient stochastic maximum principle for the optimal control of a regime-switching diffusion model. We show the connection to dynamic programming and we apply the result to a quadratic loss minimization problem, which can be used to solve a mean-variance portfolio selection problem.

  2. Maximum Recommended Dosage of Lithium for Pregnant Women Based on a PBPK Model for Lithium Absorption

    Directory of Open Access Journals (Sweden)

    Scott Horton

    2012-01-01

    Full Text Available Treatment of bipolar disorder with lithium therapy during pregnancy is a medical challenge. Bipolar disorder is more prevalent in women and its onset is often concurrent with peak reproductive age. Treatment typically involves administration of the element lithium, which has been classified as a class D drug (legal to use during pregnancy, but may cause birth defects) and is one of only thirty known teratogenic drugs. There is no clear recommendation in the literature on the maximum acceptable dosage regimen for pregnant, bipolar women. We recommend a maximum dosage regimen based on a physiologically based pharmacokinetic (PBPK) model. The model simulates the concentration of lithium in the organs and tissues of a pregnant woman and her fetus. First, we modeled time-dependent lithium concentration profiles resulting from lithium therapy known to have caused birth defects. Next, we identified maximum and average fetal lithium concentrations during treatment. Then, we developed a lithium therapy regimen to maximize the concentration of lithium in the mother's brain, while maintaining the fetal concentration low enough to reduce the risk of birth defects. This maximum dosage regimen suggested by the model was 400 mg lithium three times per day.
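
    The compartmental core of such a PBPK model can be sketched with an ODE solver; the two-compartment structure and every rate constant below are hypothetical placeholders, far simpler than the paper's physiological model.

    ```python
    # Toy two-compartment (maternal plasma <-> fetal) lithium kinetics; all
    # rates and the 400 mg dose are placeholders, not the paper's values.
    import numpy as np
    from scipy.integrate import solve_ivp

    k_mf, k_fm, k_el = 0.10, 0.08, 0.15   # assumed transfer/elimination (1/h)

    def lithium(t, y):
        m, f = y                          # maternal, fetal amounts (mg)
        return [-(k_el + k_mf) * m + k_fm * f,
                k_mf * m - k_fm * f]

    sol = solve_ivp(lithium, (0.0, 24.0), [400.0, 0.0], max_step=0.1)
    print(sol.y[1].max())                 # peak fetal amount over one day
    ```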

  3. Animal models of preeclampsia; uses and limitations.

    LENUS (Irish Health Repository)

    McCarthy, F P

    2012-01-31

    Preeclampsia remains a leading cause of maternal and fetal morbidity and mortality and has an unknown etiology. The limited progress made regarding new treatments to reduce the incidence and severity of preeclampsia has been attributed to the difficulties faced in the development of suitable animal models for the mechanistic research of this disease. In addition, animal models need hypotheses on which to be based and the slow development of testable hypotheses has also contributed to this poor progress. The past decade has seen significant advances in our understanding of preeclampsia and the development of viable reproducible animal models has contributed significantly to these advances. Although many of these models have features of preeclampsia, they are still poor overall models of the human disease and limited due to lack of reproducibility and because they do not include the complete spectrum of pathophysiological changes associated with preeclampsia. This review aims to provide a succinct and comprehensive assessment of current animal models of preeclampsia, their uses and limitations with particular attention paid to the best validated and most comprehensive models, in addition to those models which have been utilized to investigate potential therapeutic interventions for the treatment or prevention of preeclampsia.

  4. Genetic Analysis of Daily Maximum Milking Speed by a Random Walk Model in Dairy Cows

    DEFF Research Database (Denmark)

    Karacaören, Burak; Janss, Luc; Kadarmideen, Haja

    Data were obtained from dairy cows stationed at the research farm of ETH Zurich for maximum milking speed. The main aims of this paper are (a) to evaluate if the Wood curve is suitable to model the mean lactation curve and (b) to predict longitudinal breeding values by random regression and random walk models of maximum milking speed. The Wood curve did not provide a good fit to the data set. Quadratic random regressions gave better predictions compared with the random walk model. However, the random walk model does not need to be evaluated for different orders of regression coefficients. In addition, with Kalman filter applications the random walk model could give online prediction of breeding values. Hence, without waiting for whole lactation records, genetic evaluation could be made when daily or monthly data are available.

  5. Limit Cycles in Predator-Prey Models

    OpenAIRE

    Puchuri Medina, Liliana

    2017-01-01

    The classic Lotka-Volterra model belongs to a family of differential equations known as "Generalized Lotka-Volterra", which is part of a classification of four models of quadratic fields with center. These models have been studied to address the Hilbert infinitesimal problem, which consists in determining the number of limit cycles of a perturbed Hamiltonian system with center. In this work, we first present an alternative proof of the existence of centers in Lotka-Volterra predator-prey models...

  6. The effect of coupling hydrologic and hydrodynamic models on probable maximum flood estimation

    Science.gov (United States)

    Felder, Guido; Zischg, Andreas; Weingartner, Rolf

    2017-07-01

    Deterministic rainfall-runoff modelling usually assumes a stationary hydrological system, as model parameters are calibrated with, and therefore dependent on, observed data. However, runoff processes are probably not stationary in the case of a probable maximum flood (PMF), where discharge greatly exceeds observed flood peaks. Developing hydrodynamic models and using them to build coupled hydrologic-hydrodynamic models can potentially improve the plausibility of PMF estimations. This study aims to assess the potential benefits and constraints of coupled modelling compared to standard deterministic hydrologic modelling when it comes to PMF estimation. The two modelling approaches are applied using a set of 100 spatio-temporal probable maximum precipitation (PMP) distribution scenarios. The resulting hydrographs, the resulting peak discharges, as well as the reliability and the plausibility of the estimates are evaluated. The discussion of the results shows that coupling hydrologic and hydrodynamic models substantially improves the physical plausibility of PMF modelling, although both modelling approaches lead to PMF estimations for the catchment outlet that fall within a similar range. Using a coupled model is particularly suggested in cases where considerable flood-prone areas are situated within a catchment.

  7. Comparison of annual maximum series and partial duration series methods for modeling extreme hydrologic events

    DEFF Research Database (Denmark)

    Madsen, Henrik; Rasmussen, Peter F.; Rosbjerg, Dan

    1997-01-01

    Two different models for analyzing extreme hydrologic events, based on, respectively, partial duration series (PDS) and annual maximum series (AMS), are compared. The PDS model assumes a generalized Pareto distribution for modeling threshold exceedances, corresponding to a generalized extreme value (GEV) distribution for annual maxima. In the case of ML estimation, the PDS model provides the most efficient T-year event estimator. In the cases of MOM and PWM estimation, the PDS model is generally preferable for negative shape parameters, whereas the AMS model yields the most efficient estimator for positive shape parameters. A comparison of the considered methods reveals that in general, one should use the PDS model with MOM estimation for negative shape parameters, the PDS model with exponentially distributed exceedances if the shape parameter is close to zero, the AMS model with MOM estimation for moderately positive shape parameters, and the PDS
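
    The PDS side of this comparison can be sketched in a few lines; the synthetic discharge record, threshold choice, and return period below are illustrative assumptions.

    ```python
    # Sketch: fit a generalized Pareto distribution to threshold
    # exceedances (the PDS model) and compute a T-year event level.
    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(2)
    years = 40
    flows = rng.gumbel(100, 25, size=years * 365)  # synthetic daily flows
    u = np.quantile(flows, 0.995)                  # threshold
    exc = flows[flows > u] - u

    c, _, scale = genpareto.fit(exc, floc=0)       # ML fit of exceedances
    lam = len(exc) / years                         # exceedance rate/year
    T = 100
    x_T = u + genpareto.ppf(1 - 1 / (lam * T), c, scale=scale)
    print(x_T)                                     # estimated T-year event
    ```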

  8. Using iMCFA to Perform the CFA, Multilevel CFA, and Maximum Model for Analyzing Complex Survey Data.

    Science.gov (United States)

    Wu, Jiun-Yu; Lee, Yuan-Hsuan; Lin, John J H

    2018-01-01

    To construct CFA, MCFA, and maximum MCFA with LISREL v.8 and below, we provide iMCFA (integrated Multilevel Confirmatory Analysis) to examine the potential multilevel factorial structure in the complex survey data. Modeling multilevel structure for complex survey data is complicated because building a multilevel model is not an infallible statistical strategy unless the hypothesized model is close to the real data structure. Methodologists have suggested using different modeling techniques to investigate potential multilevel structure of survey data. Using iMCFA, researchers can visually set the between- and within-level factorial structure to fit MCFA, CFA and/or MAX MCFA models for complex survey data. iMCFA can then yield between- and within-level variance-covariance matrices, calculate intraclass correlations, perform the analyses and generate the outputs for respective models. The summary of the analytical outputs from LISREL is gathered and tabulated for further model comparison and interpretation. iMCFA also provides LISREL syntax of different models for researchers' future use. An empirical and a simulated multilevel dataset with complex and simple structures in the within or between level was used to illustrate the usability and the effectiveness of the iMCFA procedure on analyzing complex survey data. The analytic results of iMCFA using Muthen's limited information estimator were compared with those of Mplus using Full Information Maximum Likelihood regarding the effectiveness of different estimation methods.

  9. Limited Commitment Models of the Labour Market

    OpenAIRE

    Jonathan Thomas; Tim Worrall

    2007-01-01

    We present an overview of models of long-term self-enforcing labour contracts in which risk-sharing is the dominant motive for contractual solutions. A base model is developed which is sufficiently general to encompass the two-agent problem central to most of the literature, including variable hours. We consider two-sided limited commitment and look at its implications for aggregate labour market variables. We consider the implications for empirical testing and the available empirical evide...

  10. Maximum likelihood pixel labeling using a spatially variant finite mixture model

    International Nuclear Information System (INIS)

    Gopal, S.S.; Hebert, T.J.

    1996-01-01

    We propose a spatially-variant mixture model for pixel labeling. Based on this spatially-variant mixture model we derive an expectation maximization algorithm for maximum likelihood estimation of the pixel labels. While most algorithms using mixture models entail the subsequent use of a Bayes classifier for pixel labeling, the proposed algorithm yields maximum likelihood estimates of the labels themselves and results in unambiguous pixel labels. The proposed algorithm is fast, robust, easy to implement, flexible in that it can be applied to any arbitrary image data where the number of classes is known and, most importantly, obviates the need for an explicit labeling rule. The algorithm is evaluated both quantitatively and qualitatively on simulated data and on clinical magnetic resonance images of the human brain

  11. Research on configuration of railway self-equipped tanker based on minimum cost maximum flow model

    Science.gov (United States)

    Yang, Yuefang; Gan, Chunhui; Shen, Tingting

    2017-05-01

    In the study of the configuration of tankers for a chemical logistics park, the minimum cost maximum flow model is adopted. Firstly, the transport capacity of the park's loading and unloading area and the transportation demand of the dangerous goods are taken as the constraint conditions of the model; then the transport arc capacity, the transport arc flow and the transport arc edge weight are determined in the transportation network diagram; finally, the model is solved by software. The calculation results show that the configuration issue of the tankers can be effectively solved by the minimum cost maximum flow model, which has theoretical and practical application value for tanker management in railway transportation of dangerous goods in chemical logistics parks.
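
    The formulation can be illustrated with a toy network in networkx; the nodes, capacities, and edge weights below are invented for the sketch and do not come from the study's park data.

    ```python
    # Toy minimum cost maximum flow network: capacities reflect loading
    # area throughput, weights reflect transport cost; numbers invented.
    import networkx as nx

    G = nx.DiGraph()
    G.add_edge("source", "loading_A", capacity=30, weight=0)
    G.add_edge("source", "loading_B", capacity=20, weight=0)
    G.add_edge("loading_A", "tanker_pool", capacity=25, weight=4)
    G.add_edge("loading_B", "tanker_pool", capacity=20, weight=3)
    G.add_edge("tanker_pool", "demand", capacity=40, weight=2)

    flow = nx.max_flow_min_cost(G, "source", "demand")
    print(flow, nx.cost_of_flow(G, flow))
    ```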

  12. Bistability, non-ergodicity, and inhibition in pairwise maximum-entropy models.

    Science.gov (United States)

    Rostami, Vahid; Porta Mana, PierGianLuca; Grün, Sonja; Helias, Moritz

    2017-10-01

    Pairwise maximum-entropy models have been used in neuroscience to predict the activity of neuronal populations, given only the time-averaged correlations of the neuron activities. This paper provides evidence that the pairwise model, applied to experimental recordings, would produce a bimodal distribution for the population-averaged activity, and that for some population sizes the second mode would peak at high activities, which experimentally would be equivalent to 90% of the neuron population being active within time-windows of a few milliseconds. Several problems are connected with this bimodality: 1. The presence of the high-activity mode is unrealistic in view of observed neuronal activity and on neurobiological grounds. 2. Boltzmann learning becomes non-ergodic, hence the pairwise maximum-entropy distribution cannot be found: in fact, Boltzmann learning would produce an incorrect distribution; similarly, common variants of mean-field approximations also produce an incorrect distribution. 3. The Glauber dynamics associated with the model is unrealistically bistable and cannot be used to generate realistic surrogate data. This bimodality problem is first demonstrated for an experimental dataset from 159 neurons in the motor cortex of macaque monkey. Evidence is then provided that this problem affects typical neural recordings of population sizes of a couple of hundred or more neurons. The cause of the bimodality problem is identified as the inability of standard maximum-entropy distributions with a uniform reference measure to model neuronal inhibition. To eliminate this problem a modified maximum-entropy model is presented, which reflects a basic effect of inhibition in the form of a simple but non-uniform reference measure. This model does not lead to unrealistic bimodalities, can be found with Boltzmann learning, and has an associated Glauber dynamics which incorporates a minimal asymmetric inhibition.
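
    To make the Glauber dynamics mentioned above concrete, here is a minimal sampler for a pairwise maximum-entropy model over binary units; the random couplings are placeholders for parameters fitted to actual recordings.

    ```python
    # Sketch: asynchronous Glauber dynamics for a pairwise maximum-entropy
    # (Ising-like) model with 0/1 units; couplings are random placeholders.
    import numpy as np

    rng = np.random.default_rng(3)
    N = 50
    h = rng.normal(-1.0, 0.2, N)                 # biases
    J = rng.normal(0.0, 0.1 / np.sqrt(N), (N, N))
    J = 0.5 * (J + J.T)                          # symmetric couplings
    np.fill_diagonal(J, 0.0)

    s = rng.integers(0, 2, N)                    # binary neuron states
    for _ in range(10_000):                      # single-unit updates
        i = rng.integers(N)
        field = h[i] + J[i] @ s
        s[i] = rng.random() < 1.0 / (1.0 + np.exp(-field))  # P(s_i=1|rest)
    print(s.mean())                              # population-averaged activity
    ```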

  13. The Peierls model: Progress and limitations

    International Nuclear Information System (INIS)

    Schoeck, Gunther

    2005-01-01

    The basic features of the Peierls model are reviewed. The original model is based on the concept of balance of stresses in 1D and has serious limitations. These limitations can be overcome by a treatment as a variational problem on the energy level in 2D. The fundamental equations are given and applications to determine displacement profiles for dislocations and their dissociations are discussed. When the core misfit has a planar extension and the misfit energy in the glide plane - the γ-surface - is determined from ab initio methods, very reliable core configurations can be determined. For dislocations along close-packed lattice directions the misfit energy can be obtained by a summing procedure using Euler coordinates. When these dislocations are dissociated, multiple equilibrium configurations with different splitting widths can exist, but the energy differences between them - the Peierls energy - are too small to be determined reliably, considering the simplifying assumptions of the model.

  14. Using maximum topology matching to explore differences in species distribution models

    Science.gov (United States)

    Poco, Jorge; Doraiswamy, Harish; Talbert, Marian; Morisette, Jeffrey; Silva, Claudio

    2015-01-01

    Species distribution models (SDM) are used to help understand what drives the distribution of various plant and animal species. These models are typically high-dimensional scalar functions, where the dimensions of the domain correspond to predictor variables of the model algorithm. Understanding and exploring the differences between models helps ecologists understand areas where their data or understanding of the system is incomplete and will help guide further investigation in these regions. These differences can also indicate an important source of model-to-model uncertainty. However, it is cumbersome and often impractical to perform this analysis using existing tools, which allow for manual exploration of the models, usually as 1-dimensional curves. In this paper, we propose a topology-based framework to help ecologists explore the differences in various SDMs directly in the high-dimensional domain. In order to accomplish this, we introduce the concept of maximum topology matching that computes a locality-aware correspondence between similar extrema of two scalar functions. The matching is then used to compute the similarity between two functions. We also design a visualization interface that allows ecologists to explore SDMs using their topological features and to study the differences between pairs of models found using maximum topology matching. We demonstrate the utility of the proposed framework through several use cases using different data sets and report the feedback obtained from ecologists.

  15. Inferring Pairwise Interactions from Biological Data Using Maximum-Entropy Probability Models.

    Directory of Open Access Journals (Sweden)

    Richard R Stein

    2015-07-01

    Full Text Available Maximum entropy-based inference methods have been successfully used to infer direct interactions from biological datasets such as gene expression data or sequence ensembles. Here, we review undirected pairwise maximum-entropy probability models in two categories of data types, those with continuous and categorical random variables. As a concrete example, we present recently developed inference methods from the field of protein contact prediction and show that a basic set of assumptions leads to similar solution strategies for inferring the model parameters in both variable types. These parameters reflect interactive couplings between observables, which can be used to predict global properties of the biological system. Such methods are applicable to the important problems of protein 3-D structure prediction and association of gene-gene networks, and they enable potential applications to the analysis of gene alteration patterns and to protein design.

  16. Development of a methodology for probable maximum precipitation estimation over the American River watershed using the WRF model

    Science.gov (United States)

    Tan, Elcin

    physically possible upper limits of precipitation due to climate change. The simulation results indicate that the meridional shift in atmospheric conditions is the optimum method to determine maximum precipitation in consideration of cost and efficiency. Finally, exceedance probability analyses of the model results of 42 historical extreme precipitation events demonstrate that the 72-hr basin averaged probable maximum precipitation is 21.72 inches for the exceedance probability of 0.5 percent. On the other hand, the current operational PMP estimation for the American River Watershed is 28.57 inches as published in the hydrometeorological report no. 59 and a previous PMP value was 31.48 inches as published in the hydrometeorological report no. 36. According to the exceedance probability analyses of this proposed method, the exceedance probabilities of these two estimations correspond to 0.036 percent and 0.011 percent, respectively.

  17. Estimation of Financial Agent-Based Models with Simulated Maximum Likelihood

    Czech Academy of Sciences Publication Activity Database

    Kukačka, Jiří; Baruník, Jozef

    2017-01-01

    Roč. 85, č. 1 (2017), s. 21-45 ISSN 0165-1889 R&D Projects: GA ČR(CZ) GBP402/12/G097 Institutional support: RVO:67985556 Keywords: heterogeneous agent model * simulated maximum likelihood * switching Subject RIV: AH - Economics OBOR OECD: Finance Impact factor: 1.000, year: 2016 http://library.utia.cas.cz/separaty/2017/E/kukacka-0478481.pdf

  18. Bayesian, Maximum Parsimony and UPGMA Models for Inferring the Phylogenies of Antelopes Using Mitochondrial Markers

    OpenAIRE

    Khan, Haseeb A.; Arif, Ibrahim A.; Bahkali, Ali H.; Al Farhan, Ahmad H.; Al Homaidan, Ali A.

    2008-01-01

    This investigation was aimed to compare the inference of antelope phylogenies resulting from the 16S rRNA, cytochrome-b (cyt-b) and d-loop segments of mitochondrial DNA using three different computational models including Bayesian (BA), maximum parsimony (MP) and unweighted pair group method with arithmetic mean (UPGMA). The respective nucleotide sequences of three Oryx species (Oryx leucoryx, Oryx dammah and Oryx gazella) and an out-group (Addax nasomaculatus) were aligned and subjected to B...

  19. Conceptual model to determine maximum activity of radioactive waste in near-surface disposal facilities

    International Nuclear Information System (INIS)

    Iarmosh, I.; Olkhovyk, Yu.

    2016-01-01

    For development of the management strategy for radioactive waste to be placed in near-surface disposal facilities (NSDF), it is necessary to justify the long-term safety of such facilities. Use of mathematical modelling methods for long-term forecasts of radwaste radiation impact and assessment of radiation risks from radionuclide migration can help to resolve this issue. The purpose of the research was to develop a conceptual model for determining the maximum activity of radwaste to be safely disposed in the NSDF and to test it in the case of the Lot 3 Vector NSDF (Chornobyl exclusion zone). This paper describes an approach to the development of such a model. The conceptual model of 90Sr migration from Lot 3 through the aeration zone and aquifer soils was developed. The results of modelling are shown. Proposals on further steps for model improvement were developed

  20. Modelling streambank erosion potential using maximum entropy in a central Appalachian watershed

    Directory of Open Access Journals (Sweden)

    J. Pitchford

    2015-03-01

    Full Text Available We used maximum entropy to model streambank erosion potential (SEP in a central Appalachian watershed to help prioritize sites for management. Model development included measuring erosion rates, application of a quantitative approach to locate Target Eroding Areas (TEAs, and creation of maps of boundary conditions. We successfully constructed a probability distribution of TEAs using the program Maxent. All model evaluation procedures indicated that the model was an excellent predictor, and that the major environmental variables controlling these processes were streambank slope, soil characteristics, bank position, and underlying geology. A classification scheme with low, moderate, and high levels of SEP derived from logistic model output was able to differentiate sites with low erosion potential from sites with moderate and high erosion potential. A major application of this type of modelling framework is to address uncertainty in stream restoration planning, ultimately helping to bridge the gap between restoration science and practice.

  1. Metabolic expenditures of lunge feeding rorquals across scale: implications for the evolution of filter feeding and the limits to maximum body size.

    Directory of Open Access Journals (Sweden)

    Jean Potvin

    Full Text Available Bulk-filter feeding is an energetically efficient strategy for resource acquisition and assimilation, and facilitates the maintenance of extreme body size as exemplified by baleen whales (Mysticeti) and multiple lineages of bony and cartilaginous fishes. Among mysticetes, rorqual whales (Balaenopteridae) exhibit an intermittent ram filter feeding mode, lunge feeding, which requires the abandonment of body-streamlining in favor of a high-drag, mouth-open configuration aimed at engulfing a very large amount of prey-laden water. Particularly while lunge feeding on krill (the most widespread prey preference among rorquals), the effort required during engulfment involves short bouts of high-intensity muscle activity that demand high metabolic output. We used computational modeling together with morphological and kinematic data on humpback (Megaptera novaeangliae), fin (Balaenoptera physalus), blue (Balaenoptera musculus) and minke (Balaenoptera acutorostrata) whales to estimate engulfment power output in comparison with standard metrics of metabolic rate. The simulations reveal that engulfment metabolism increases across the full body size of the larger rorqual species to nearly 50 times the basal metabolic rate of terrestrial mammals of the same body mass. Moreover, they suggest that the metabolism of the largest body sizes runs with significant oxygen deficits during mouth opening, namely, 20% over maximum VO2 at the size of the largest blue whales, thus requiring significant contributions from anaerobic catabolism during a lunge and significant recovery after a lunge. Our analyses show that engulfment metabolism is also significantly lower for smaller adults, typically one-tenth to one-half VO2max. These results not only point to a physiological limit on maximum body size in this lineage, but also have major implications for the ontogeny of extant rorquals as well as the evolutionary pathways used by ancestral toothed whales to transition from hunting

  2. The application of a Grey Markov Model to forecasting annual maximum water levels at hydrological stations

    Science.gov (United States)

    Dong, Sheng; Chi, Kun; Zhang, Qiyi; Zhang, Xiangdong

    2012-03-01

    Compared with traditional real-time forecasting, this paper proposes a Grey Markov Model (GMM) to forecast the maximum water levels at hydrological stations in the estuary area. The GMM combines the Grey System and Markov theory into a higher precision model. The GMM takes advantage of the Grey System to predict the trend values and uses the Markov theory to forecast fluctuation values, and thus gives forecast results involving two aspects of information. The procedure for forecasting annual maximum water levels with the GMM contains five main steps: 1) establish the GM(1,1) model based on the data series; 2) estimate the trend values; 3) establish a Markov Model based on the relative error series; 4) modify the relative errors caused in step 2, and then obtain the relative errors of the second order estimation; 5) compare the results with measured data and estimate the accuracy. The historical water level records (from 1960 to 1992) at Yuqiao Hydrological Station in the estuary area of the Haihe River near Tianjin, China are utilized to calibrate and verify the proposed model according to the above steps. Every 25 years of data are regarded as one hydro-sequence. Eight groups of simulated results show reasonable agreement between the predicted values and the measured data. The GMM is also applied to the 10 other hydrological stations in the same estuary. The forecast results for all of the hydrological stations are good or acceptable. The feasibility and effectiveness of this new forecasting model have been proved in this paper.
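
    As a hedged sketch of steps 1-2 of this procedure, the snippet below fits a plain GM(1,1) grey model and extends the trend; the Markov correction of the relative errors (steps 3-4) is omitted, and the water-level series is a toy assumption.

        import numpy as np

        def gm11_forecast(x0, n_ahead=1):
            """Minimal GM(1,1): accumulate the series (AGO), fit the grey
            differential equation dx1/dt + a*x1 = b by least squares, then
            restore forecasts by inverse accumulation (IAGO)."""
            x1 = np.cumsum(x0)                                # accumulated series
            z = 0.5 * (x1[1:] + x1[:-1])                      # background values
            B = np.column_stack([-z, np.ones_like(z)])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
            k = np.arange(len(x0) + n_ahead)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
            return np.diff(x1_hat, prepend=0.0)               # back to level scale

        levels = np.array([5.18, 5.32, 5.27, 5.45, 5.51])     # toy annual maxima, m
        print(gm11_forecast(levels, n_ahead=2)[-2:])          # trend values, to be
                                                              # corrected by the Markov step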

  3. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas Harder; Juul, Anders

    2004-01-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used...

  4. Possible ecosystem impacts of applying maximum sustainable yield policy in food chain models.

    Science.gov (United States)

    Ghosh, Bapan; Kar, T K

    2013-07-21

    This paper describes the possible impacts of maximum sustainable yield (MSY) and maximum sustainable total yield (MSTY) policy in ecosystems. In general it is observed that exploitation at the MSY (of a single species) or MSTY (of multiple species) level may cause the extinction of several species. In particular, for a traditional prey-predator system, fishing under combined harvesting effort at the MSTY level (if it exists) may be a sustainable policy, but if MSTY does not exist it is because of the extinction of the predator species. In a generalist prey-predator system, harvesting of any one of the species at the MSY level is always a sustainable policy, but harvesting of both species at the MSTY level may or may not be sustainable. In addition, we have investigated the MSY and MSTY policy in traditional tri-trophic and four-trophic food chain models. Copyright © 2013 Elsevier Ltd. All rights reserved.
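
    For intuition about why pushing harvest to the MSY level can be risky, the single-species logistic baseline is useful; the snippet below (with invented parameters, not the paper's multispecies systems) shows the sustainable yield curve peaking at effort E = r/2 and collapsing as E approaches r.

        import numpy as np

        # Logistic stock with proportional harvesting: dx/dt = r*x*(1 - x/K) - E*x.
        # Equilibrium stock is x* = K*(1 - E/r), so sustainable yield Y(E) = E*x*.
        r, K = 0.8, 1000.0
        E = np.linspace(0.0, r, 201)
        yield_at_eq = E * K * (1.0 - E / r)

        print("MSY ~", yield_at_eq.max(), "(theory r*K/4 =", r * K / 4, ")")
        print("optimal effort ~", E[yield_at_eq.argmax()], "(theory r/2 =", r / 2, ")")
        # Effort beyond r drives x* to zero: the extinction scenario the paper
        # examines when MSY/MSTY is applied blindly in food chain models.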

  5. Using information and communication technology (ICT) to the maximum: learning and teaching biology with limited digital technologies

    Science.gov (United States)

    Van Rooy, Wilhelmina S.

    2012-04-01

    Background: The ubiquity, availability and exponential growth of digital information and communication technology (ICT) creates unique opportunities for learning and teaching in the senior secondary school biology curriculum. Digital technologies make it possible for emerging disciplinary knowledge and understanding of biological processes previously too small, large, slow or fast to be taught. Indeed, much of bioscience can now be effectively taught via digital technology, since its representational and symbolic forms are in digital formats. Purpose: This paper is part of a larger Australian study dealing with the technologies and modalities of learning biology in secondary schools. Sample: The classroom practices of three experienced biology teachers, working in a range of NSW secondary schools, are compared and contrasted to illustrate how the challenges of limited technologies are confronted to seamlessly integrate what is available into a number of molecular genetics lessons to enhance student learning. Design and method: The data are qualitative and the analysis is based on video classroom observations and semi-structured teacher interviews. Results: Findings indicate that if professional development opportunities are provided where the pedagogy of learning and teaching of both the relevant biology and its digital representations are available, then teachers see the immediate pedagogic benefit to student learning. In particular, teachers use ICT for challenging genetic concepts despite limited computer hardware and software availability. Conclusion: Experienced teachers incorporate ICT, however limited, in order to improve the quality of student learning.

  6. Toward a mechanistic modeling of nitrogen limitation for photosynthesis

    Science.gov (United States)

    Xu, C.; Fisher, R. A.; Travis, B. J.; Wilson, C. J.; McDowell, N. G.

    2011-12-01

    Nitrogen limitation is an important regulator of vegetation growth and the global carbon cycle. Most current ecosystem process models simulate nitrogen effects on photosynthesis based on a prescribed relationship between leaf nitrogen and photosynthesis; however, there is a large amount of variability in this relationship under different light, temperature, nitrogen availability and CO2 conditions, which can affect the reliability of photosynthesis predictions under future climate conditions. To account for the variability in the nitrogen-photosynthesis relationship under different environmental conditions, in this study, we developed a mechanistic model of nitrogen limitation for photosynthesis based on nitrogen trade-offs among light absorption, electron transport, carboxylation and carbon sink. Our model shows that the strategy of nitrogen storage allocation, as determined by the trade-off between growth and persistence, is a key factor contributing to the variability in the relationship between leaf nitrogen and photosynthesis. Nitrogen fertilization substantially increases the proportion of nitrogen in storage for coniferous trees but much less so for deciduous trees, suggesting that coniferous trees allocate more nitrogen toward persistence compared to deciduous trees. CO2 fertilization will cause lower nitrogen allocation for carboxylation but higher nitrogen allocation for storage, which leads to a weaker relationship between leaf nitrogen and maximum photosynthesis rate. Lower radiation will cause higher nitrogen allocation for light absorption and electron transport but less nitrogen allocation for carboxylation and storage, which also leads to a weaker relationship between leaf nitrogen and maximum photosynthesis rate. At the same time, lower growing temperature will cause higher nitrogen allocation for carboxylation but lower allocation for light absorption, electron transport and storage, which leads to a stronger relationship between leaf nitrogen and maximum

  7. The Research of Car-Following Model Based on Real-Time Maximum Deceleration

    Directory of Open Access Journals (Sweden)

    Longhai Yang

    2015-01-01

    Full Text Available This paper is concerned with the effect of real-time maximum deceleration in car-following. The real-time maximum deceleration is estimated with vehicle dynamics. It is known that the intelligent driver model (IDM) can control adaptive cruise control (ACC) well. The disadvantages of the IDM at high and constant speed are analyzed. A new car-following model applicable to ACC is established accordingly by modifying the desired minimum gap and the structure of the IDM. We simulated the new car-following model and the IDM under two different kinds of road conditions. In the first, vehicles drive on a single road surface, with dry asphalt taken as the example in this paper. In the second, vehicles drive onto a different surface; we analyze the situation in which vehicles drive from a dry asphalt road onto an icy road. From the simulation, we found that the new car-following model not only ensures driving safety and comfort but also maintains steady driving of the vehicle with a smaller time headway than the IDM.
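
    For reference, here is the standard IDM acceleration rule that the paper modifies; the parameter values are typical textbook assumptions, and the paper's real-time maximum-deceleration adjustment of the desired gap is not reproduced.

        import math

        def idm_acceleration(v, dv, gap, v0=33.3, T=1.5, a_max=1.0, b=2.0, s0=2.0):
            """Standard intelligent driver model (IDM).
            v: own speed (m/s), dv: approach rate v - v_leader (m/s),
            gap: bumper-to-bumper distance (m); v0 desired speed, T time
            headway, a_max maximum acceleration, b comfortable deceleration,
            s0 standstill gap."""
            s_star = s0 + v * T + v * dv / (2.0 * math.sqrt(a_max * b))  # desired gap
            return a_max * (1.0 - (v / v0) ** 4 - (s_star / gap) ** 2)

        # Closing in on a slower leader: the model returns a deceleration.
        print(idm_acceleration(v=25.0, dv=2.0, gap=40.0))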

  8. Maximum Correntropy Criterion Kalman Filter for α-Jerk Tracking Model with Non-Gaussian Noise

    Directory of Open Access Journals (Sweden)

    Bowen Hou

    2017-11-01

    Full Text Available As one of the most critical issues in target tracking, the α-jerk model is an effective maneuvering-target tracking model. Non-Gaussian noise always exists in the tracking process and usually leads to inconsistency and divergence of the tracking filter. A novel Kalman filter is derived and applied to the α-jerk tracking model to handle non-Gaussian noise. The weighted least squares solution is presented and the standard Kalman filter is deduced first. A novel Kalman filter with the weighted least squares based on the maximum correntropy criterion is then deduced. The robustness of the maximum correntropy criterion is also analyzed with the influence function and compared with the Huber-based filter; moreover, the kernel size of the Gaussian kernel plays an important role in the filter algorithm. A new adaptive kernel method is proposed in this paper to adjust the parameter in real time. Finally, simulation results indicate the validity and the efficiency of the proposed filter. The comparison study shows that the proposed filter can significantly reduce the noise influence for the α-jerk model.

  9. Renal versus splenic maximum slope based perfusion CT modelling in patients with portal-hypertension

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, Michael A. [University Hospital Zurich, Department of Diagnostic and Interventional Radiology, Zurich (Switzerland); Karolinska Institutet, Division of Medical Imaging and Technology, Department of Clinical Science, Intervention and Technology (CLINTEC), Stockholm (Sweden); Brehmer, Katharina [Karolinska University Hospital Huddinge, Department of Radiology, Stockholm (Sweden); Svensson, Anders; Aspelin, Peter; Brismar, Torkel B. [Karolinska Institutet, Division of Medical Imaging and Technology, Department of Clinical Science, Intervention and Technology (CLINTEC), Stockholm (Sweden); Karolinska University Hospital Huddinge, Department of Radiology, Stockholm (Sweden)

    2016-11-15

    To assess liver perfusion-CT (P-CT) parameters derived from peak-splenic (PSE) versus peak-renal enhancement (PRE) maximum slope-based modelling in different levels of portal-venous hypertension (PVH). Twenty-four patients (16 men; mean age 68 ± 10 years) who underwent dynamic P-CT for detection of hepatocellular carcinoma (HCC) were retrospectively divided into three groups: (1) without PVH (n = 8), (2) with PVH (n = 8), (3) with PVH and thrombosis (n = 8). Time to PSE and PRE and arterial liver perfusion (ALP), portal-venous liver perfusion (PLP) and hepatic perfusion-index (HPI) of the liver and HCC derived from PSE- versus PRE-based modelling were compared between the groups. Time to PSE was significantly longer in PVH groups 2 and 3 (P = 0.02), whereas PRE was similar in groups 1, 2 and 3 (P > 0.05). In group 1, liver and HCC perfusion parameters were similar for PSE- and PRE-based modelling (all P > 0.05), whereas significant differences were seen for PLP and HPI (liver only) in group 2 and ALP in group 3 (all P < 0.05). PSE is delayed in patients with PVH, resulting in a miscalculation of PSE-based P-CT parameters. Maximum slope-based P-CT might be improved by replacing PSE with PRE-modelling, whereas the difference between PSE and PRE might serve as a non-invasive biomarker of PVH. (orig.)

  10. London limit for lattice model of superconductor

    International Nuclear Information System (INIS)

    Ktitorov, S.A.

    2004-01-01

    The phenomenological approach to the strong-bond superconductor, which is based on the Ginzburg-Landau equation in the London limit, is considered. The effect of the crystalline lattice discreteness on the superconductor's electromagnetic properties is studied. The classic problems of the critical current and magnetic field penetration are studied within the framework of the lattice model for thin superconducting films. The dependence of the superconducting current on the thin film order parameter is obtained. The critical current dependence on the degree of deviation from the continual approximation is calculated.

  11. Linear and regressive stochastic models for prediction of daily maximum ozone values at Mexico City atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Bravo, J. L [Instituto de Geofisica, UNAM, Mexico, D.F. (Mexico); Nava, M. M [Instituto Mexicano del Petroleo, Mexico, D.F. (Mexico); Gay, C [Centro de Ciencias de la Atmosfera, UNAM, Mexico, D.F. (Mexico)

    2001-07-01

    We developed a procedure to forecast, 2 or 3 hours in advance, the daily maximum of surface ozone concentrations. It involves the adjustment of Autoregressive Integrated Moving Average (ARIMA) models to daily ozone maximum concentrations at 10 atmospheric monitoring stations in Mexico City during a one-year period. A one-day forecast is made and it is adjusted with the meteorological and solar radiation information acquired during the first 3 hours before the occurrence of the maximum value. The relative importance for forecasting of the history of the process and of meteorological conditions is evaluated. Finally, an estimate of the daily probability of exceeding a given ozone level is made.
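
    A minimal sketch of the ARIMA stage with statsmodels follows; the series is synthetic, the model order (1, 0, 1) is an illustrative assumption, and the subsequent adjustment with same-day meteorological and radiation data is not reproduced.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(0)
        # Toy series of daily ozone maxima: a seasonal drift plus noise.
        t = np.arange(365)
        daily_max = 120 + 15 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 8, 365)

        fit = ARIMA(daily_max, order=(1, 0, 1)).fit()
        print(fit.forecast(steps=1))   # next-day maximum, before the 3-hour correction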

  12. Simulation model of ANN based maximum power point tracking controller for solar PV system

    Energy Technology Data Exchange (ETDEWEB)

    Rai, Anil K.; Singh, Bhupal [Department of Electrical and Electronics Engineering, Ajay Kumar Garg Engineering College, Ghaziabad 201009 (India); Kaushika, N.D.; Agarwal, Niti [School of Research and Development, Bharati Vidyapeeth College of Engineering, A-4 Paschim Vihar, New Delhi 110063 (India)

    2011-02-15

    In this paper the simulation model of an artificial neural network (ANN) based maximum power point tracking controller has been developed. The controller consists of an ANN tracker and an optimal control unit. The ANN tracker estimates the voltages and currents corresponding to the maximum power delivered by the solar PV (photovoltaic) array for variable cell temperature and solar radiation. The cell temperature is considered as a function of ambient air temperature, wind speed and solar radiation. The tracker is trained employing a set of 124 patterns using the back propagation algorithm. The mean square error of tracker output and target values is set to be of the order of 10⁻⁵ and the learning process converges successfully after 1281 epochs. The accuracy of the ANN tracker has been validated by employing different test data sets. The control unit uses the estimates of the ANN tracker to adjust the duty cycle of the chopper to the optimum value needed for maximum power transfer to the specified load. (author)
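
    A hedged sketch of the tracker stage is shown below: a small feed-forward network maps irradiance and cell temperature to the voltage at maximum power. The synthetic training targets and network shape are assumptions standing in for the authors' 124 measured patterns.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        G = rng.uniform(200, 1000, 500)        # irradiance, W/m^2
        Tc = rng.uniform(15, 65, 500)          # cell temperature, deg C
        # Crude synthetic stand-in for the measured voltage at maximum power.
        Vmp = 17.0 + 1.5 * np.log(G / 1000.0) - 0.08 * (Tc - 25.0)

        X = np.column_stack([G, Tc])
        tracker = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                               random_state=0).fit(X, Vmp)
        print(tracker.predict([[800.0, 40.0]]))  # estimated Vmp for the control unit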

  13. Scrape-off layer based modelling of the density limit in beryllated JET limiter discharges

    International Nuclear Information System (INIS)

    Borrass, K.; Campbell, D.J.; Clement, S.; Vlases, G.C.

    1993-01-01

    The paper gives a scrape-off layer based interpretation of the density limit in beryllated JET limiter discharges. In these discharges, JET edge parameters show a complicated time evolution as the density limit is approached and the limit is manifested as a non-disruptive density maximum which cannot be exceeded by enhanced gas puffing. The occurrence of Marfes, the manner of density control and details of recycling are essential elements of the interpretation. Scalings for the maximum density are given and compared with JET data. The relation to disruptive density limits, previously observed in JET carbon limiter discharges, and to density limits in divertor discharges is discussed. (author). 18 refs, 10 figs, 1 tab

  14. Soil and Water Assessment Tool model predictions of annual maximum pesticide concentrations in high vulnerability watersheds.

    Science.gov (United States)

    Winchell, Michael F; Peranginangin, Natalia; Srinivasan, Raghavan; Chen, Wenlin

    2018-05-01

    Recent national regulatory assessments of potential pesticide exposure of threatened and endangered species in aquatic habitats have led to increased need for watershed-scale predictions of pesticide concentrations in flowing water bodies. This study was conducted to assess the ability of the uncalibrated Soil and Water Assessment Tool (SWAT) to predict annual maximum pesticide concentrations in the flowing water bodies of highly vulnerable small- to medium-sized watersheds. The SWAT was applied to 27 watersheds, largely within the midwest corn belt of the United States, ranging from 20 to 386 km², and evaluated using consistent input data sets and an uncalibrated parameterization approach. The watersheds were selected from the Atrazine Ecological Exposure Monitoring Program and the Heidelberg Tributary Loading Program, both of which contain high temporal resolution atrazine sampling data from watersheds with exceptionally high vulnerability to atrazine exposure. The model performance was assessed based upon predictions of annual maximum atrazine concentrations over 1-d and 60-d durations, predictions that are critical in threatened and endangered species risk assessments when evaluating potential acute and chronic exposure to aquatic organisms. The simulation results showed that for nearly half of the watersheds simulated, the uncalibrated SWAT model was able to predict annual maximum pesticide concentrations within a narrow range of uncertainty resulting from atrazine application timing patterns. An uncalibrated model's predictive performance is essential for the assessment of pesticide exposure in flowing water bodies, the majority of which have insufficient monitoring data for direct calibration, even in data-rich countries. In situations in which SWAT over- or underpredicted the annual maximum concentrations, the magnitude of the over- or underprediction was commonly less than a factor of 2, indicating that the model and uncalibrated parameterization

  15. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    Science.gov (United States)

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    How to localize the neural electric activities within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue for current studies in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, a new maximum neighbor weight based iterative sparse source imaging method is proposed, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Different from the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is independently updated in iterations, the newly designed weight for each point in each iteration is determined by the source solution of the last iteration at both the point and its neighbors. Using such a new weight, the next iteration has a better chance of rectifying the local source location bias that existed in the previous iteration's solution. Simulation studies with comparison to FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimuli experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.

  16. Neural Modeling of Fuzzy Controllers for Maximum Power Point Tracking in Photovoltaic Energy Systems

    Science.gov (United States)

    Lopez-Guede, Jose Manuel; Ramos-Hernanz, Josean; Altın, Necmi; Ozdemir, Saban; Kurt, Erol; Azkune, Gorka

    2018-06-01

    One field in which electronic materials have an important role is energy generation, especially within the scope of photovoltaic energy. This paper deals with one of the most relevant enabling technologies within that scope, i.e., the algorithms for maximum power point tracking implemented in direct current to direct current converters, and its modeling through artificial neural networks (ANNs). More specifically, as a proof of concept, we have addressed the problem of modeling a fuzzy logic controller that has shown its performance in previous works, and more specifically the dimensionless duty cycle signal that controls a quadratic boost converter. We achieved a very accurate model, since the obtained mean squared error is 3.47 × 10⁻⁶, the maximum error is 16.32 × 10⁻³ and the regression coefficient R is 0.99992, all for the test dataset. This neural implementation has obvious advantages such as a higher fault tolerance and a simpler implementation, dispensing with all the complex elements needed to run a fuzzy controller (fuzzifier, defuzzifier, inference engine and knowledge base) because, ultimately, ANNs are sums and products.

  17. A label field fusion bayesian model and its penalized maximum rand estimator for image segmentation.

    Science.gov (United States)

    Mignotte, Max

    2010-06-01

    This paper presents a novel segmentation approach based on a Markov random field (MRF) fusion model which aims at combining several segmentation results associated with simpler clustering models in order to achieve a more reliable and accurate segmentation result. The proposed fusion model is derived from the recently introduced probabilistic Rand measure for comparing one segmentation result to one or more manual segmentations of the same image. This non-parametric measure allows us to easily derive an appealing fusion model of label fields, expressed as a Gibbs distribution, or as a nonstationary MRF model defined on a complete graph. Concretely, this Gibbs energy model encodes the set of binary constraints, in terms of pairs of pixel labels, provided by each segmentation result to be fused. Combined with a prior distribution, this energy-based Gibbs model also allows for the definition of an interesting penalized maximum probabilistic Rand estimator with which the fusion of simple, quickly estimated segmentation results appears as an interesting alternative to the complex segmentation models existing in the literature. This fusion framework has been successfully applied on the Berkeley image database. The experiments reported in this paper demonstrate that the proposed method is efficient in terms of visual evaluation and quantitative performance measures and performs well compared to the best existing state-of-the-art segmentation methods recently proposed in the literature.

  18. Maximum likelihood estimation of semiparametric mixture component models for competing risks data.

    Science.gov (United States)

    Choi, Sangbum; Huang, Xuelin

    2014-09-01

    In the analysis of competing risks data, the cumulative incidence function is a useful quantity to characterize the crude risk of failure from a specific event type. In this article, we consider an efficient semiparametric analysis of mixture component models on cumulative incidence functions. Under the proposed mixture model, latency survival regressions given the event type are performed through a class of semiparametric models that encompasses the proportional hazards model and the proportional odds model, allowing for time-dependent covariates. The marginal proportions of the occurrences of cause-specific events are assessed by a multinomial logistic model. Our mixture modeling approach is advantageous in that it makes a joint estimation of model parameters associated with all competing risks under consideration, satisfying the constraint that the cumulative probability of failing from any cause adds up to one given any covariates. We develop a novel maximum likelihood scheme based on semiparametric regression analysis that facilitates efficient and reliable estimation. Statistical inferences can be conveniently made from the inverse of the observed information matrix. We establish the consistency and asymptotic normality of the proposed estimators. We validate small sample properties with simulations and demonstrate the methodology with a data set from a study of follicular lymphoma. © 2014, The International Biometric Society.

  19. Estimation and prediction of maximum daily rainfall at Sagar Island using best fit probability models

    Science.gov (United States)

    Mandal, S.; Choudhury, B. U.

    2015-07-01

    Sagar Island, sitting on the continental shelf of the Bay of Bengal, is one of the deltas most vulnerable to extreme rainfall-driven climatic hazards. Information on the probability of occurrence of maximum daily rainfall will be useful in devising risk management for sustaining the rainfed agrarian economy vis-a-vis food and livelihood security. Using six probability distribution models and long-term (1982-2010) daily rainfall data, we studied the probability of occurrence of annual, seasonal and monthly maximum daily rainfall (MDR) in the island. To select the best fit distribution models for the annual, seasonal and monthly time series based on maximum rank with minimum value of test statistics, three statistical goodness-of-fit tests, viz. the Kolmogorov-Smirnov test (K-S), Anderson-Darling test (A²) and Chi-square test (χ²), were employed. The best fit probability distribution was identified from the highest overall score obtained from the three goodness-of-fit tests. Results revealed that the normal probability distribution was best fitted for the annual, post-monsoon and summer season MDR, while Lognormal, Weibull and Pearson 5 were best fitted for the pre-monsoon, monsoon and winter seasons, respectively. The estimated annual MDR were 50, 69, 86, 106 and 114 mm for return periods of 2, 5, 10, 20 and 25 years, respectively. The probabilities of getting an annual MDR of >50, >100, >150, >200 and >250 mm were estimated as 99, 85, 40, 12 and 3 % levels of exceedance, respectively. The monsoon, summer and winter seasons exhibited comparatively higher probabilities (78 to 85 %) for MDR of >100 mm and moderate probabilities (37 to 46 %) for >150 mm. For different recurrence intervals, the percent probability of MDR varied widely across intra- and inter-annual periods. In the island, rainfall anomaly can pose a climatic threat to the sustainability of agricultural production and thus needs adequate adaptation and mitigation measures.
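
    The selection step can be sketched with scipy: fit candidate distributions by maximum likelihood and rank them with the Kolmogorov-Smirnov statistic. The rainfall sample below is synthetic and only two of the six candidate families are shown.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        # Toy sample of 29 annual maximum daily rainfall values (mm).
        mdr = stats.lognorm.rvs(0.4, scale=60, size=29, random_state=rng)

        for dist in (stats.norm, stats.lognorm):
            params = dist.fit(mdr)                     # maximum likelihood fit
            ks = stats.kstest(mdr, dist.cdf, args=params)
            print(dist.name, "D =", round(ks.statistic, 3),
                  "p =", round(ks.pvalue, 3))          # smaller D = better fit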

  20. A subjective supply–demand model: the maximum Boltzmann/Shannon entropy solution

    International Nuclear Information System (INIS)

    Piotrowski, Edward W; Sładkowski, Jan

    2009-01-01

    The present authors have put forward a projective geometry model of rational trading. The expected (mean) value of the time that is necessary to strike a deal and the profit strongly depend on the strategies adopted. A frequent trader often prefers maximal profit intensity to the maximization of profit resulting from a separate transaction because the gross profit/income is the adopted/recommended benchmark. To investigate activities that have different periods of duration we define, following the queuing theory, the profit intensity as a measure of this economic category. The profit intensity in repeated trading has a unique property of attaining its maximum at a fixed point regardless of the shape of demand curves for a wide class of probability distributions of random reverse transactions (i.e. closing of the position). These conclusions remain valid for an analogous model based on supply analysis. This type of market game is often considered in research aiming at finding an algorithm that maximizes profit of a trader who negotiates prices with the Rest of the World (a collective opponent), possessing a definite and objective supply profile. Such idealization neglects the sometimes important influence of an individual trader on the demand/supply profile of the Rest of the World and in extreme cases questions the very idea of demand/supply profile. Therefore we put forward a trading model in which the demand/supply profile of the Rest of the World induces the (rational) trader to (subjectively) presume that he/she lacks (almost) all knowledge concerning the market but his/her average frequency of trade. This point of view introduces maximum entropy principles into the model and broadens the range of economic phenomena that can be perceived as a sort of thermodynamical system. As a consequence, the profit intensity has a fixed point with an astonishing connection with Fibonacci classical works and looking for the quickest algorithm for obtaining the extremum of a

  2. Maximum-likelihood model averaging to profile clustering of site types across discrete linear sequences.

    Directory of Open Access Journals (Sweden)

    Zhang Zhang

    2009-06-01

    Full Text Available A major analytical challenge in computational biology is the detection and description of clusters of specified site types, such as polymorphic or substituted sites within DNA or protein sequences. Progress has been stymied by a lack of suitable methods to detect clusters and to estimate the extent of clustering in discrete linear sequences, particularly when there is no a priori specification of cluster size or cluster count. Here we derive and demonstrate a maximum likelihood method of hierarchical clustering. Our method incorporates a tripartite divide-and-conquer strategy that models sequence heterogeneity, delineates clusters, and yields a profile of the level of clustering associated with each site. The clustering model may be evaluated via model selection using the Akaike Information Criterion, the corrected Akaike Information Criterion, and the Bayesian Information Criterion. Furthermore, model averaging using weighted model likelihoods may be applied to incorporate model uncertainty into the profile of heterogeneity across sites. We evaluated our method by examining its performance on a number of simulated datasets as well as on empirical polymorphism data from diverse natural alleles of the Drosophila alcohol dehydrogenase gene. Our method yielded greater power for the detection of clustered sites across a breadth of parameter ranges, and achieved better accuracy and precision of estimation of clusters, than did the existing empirical cumulative distribution function statistics.
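
    One standard way to form the weighted model likelihoods mentioned above is via Akaike weights; the snippet below computes them from AIC values that are invented for the example.

        import numpy as np

        def akaike_weights(aic):
            """Akaike weights for model averaging: rescale each model's AIC
            by the best (smallest) one and normalize the implied likelihoods."""
            delta = np.asarray(aic, dtype=float) - np.min(aic)
            w = np.exp(-0.5 * delta)
            return w / w.sum()

        # e.g. three candidate clustering models with AIC 210.3, 212.1, 220.8
        print(akaike_weights([210.3, 212.1, 220.8]))   # ~[0.71, 0.29, 0.00]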

  3. Photovoltaic System Modeling with Fuzzy Logic Based Maximum Power Point Tracking Algorithm

    Directory of Open Access Journals (Sweden)

    Hasan Mahamudul

    2013-01-01

    Full Text Available This paper presents a novel modeling technique for a PV module with a fuzzy logic based MPPT algorithm and boost converter in the Simulink environment. The prime contributions of this work are the simplification of the PV modeling technique and the implementation of a fuzzy based MPPT system to track maximum power efficiently. The main highlights of this paper are the demonstration of precise control of the duty cycle with respect to various atmospheric conditions, the illustration of PV characteristic curves, and an operation analysis of the converter. The proposed system has been applied to three different PV modules: SOLKAR 36 W, BP MSX 60 W, and KC85T 87 W. Finally, the resulting data have been compared with theoretical predictions and company specified values to ensure the validity of the system.

  4. Frequency-Domain Maximum-Likelihood Estimation of High-Voltage Pulse Transformer Model Parameters

    CERN Document Server

    Aguglia, D; Martins, C.D.A.

    2014-01-01

    This paper presents an offline frequency-domain nonlinear and stochastic identification method for equivalent model parameter estimation of high-voltage pulse transformers. Such transformers are widely used in the pulsed-power domain, and the difficulty in deriving pulsed-power converter optimal control strategies is directly linked to the accuracy of the equivalent circuit parameters. These components require models which take into account the electric field energies represented by stray capacitances in the equivalent circuit. These capacitive elements must be accurately identified, since they greatly influence the overall converter performance. A nonlinear frequency-based identification method, based on maximum-likelihood estimation, is presented, and a sensitivity analysis of the best experimental test to be considered is carried out. The procedure takes into account magnetic saturation and skin effects occurring in the windings during the frequency tests. The presented method is validated by experim...

  5. Discrete Model Predictive Control-Based Maximum Power Point Tracking for PV Systems: Overview and Evaluation

    DEFF Research Database (Denmark)

    Lashab, Abderezak; Sera, Dezso; Guerrero, Josep M.

    2018-01-01

    The main objective of this work is to provide an overview and evaluation of discrete model predictive control-based maximum power point tracking (MPPT) for PV systems. A large number of MPC based MPPT methods have been recently introduced in the literature with very promising performance; however, an in-depth investigation and comparison of these methods have not been carried out yet. Therefore, this paper has set out to provide an in-depth analysis and evaluation of MPC based MPPT methods applied to various common power converter topologies. The performance of MPC based MPPT is directly linked with the converter topology, and it is also affected by the accurate determination of the converter parameters; sensitivity to converter parameter variations is also investigated. The static and dynamic performance of the trackers are assessed according to the EN 50530 standard, using detailed simulation models

  6. Skill and reliability of climate model ensembles at the Last Glacial Maximum and mid-Holocene

    Directory of Open Access Journals (Sweden)

    J. C. Hargreaves

    2013-03-01

    Full Text Available Paleoclimate simulations provide us with an opportunity to critically confront and evaluate the performance of climate models in simulating the response of the climate system to changes in radiative forcing and other boundary conditions. Hargreaves et al. (2011) analysed the reliability of the Paleoclimate Modelling Intercomparison Project (PMIP2) model ensemble with respect to the MARGO sea surface temperature data synthesis (MARGO Project Members, 2009) for the Last Glacial Maximum (LGM, 21 ka BP). Here we extend that work to include a new comprehensive collection of land surface data (Bartlein et al., 2011), and introduce a novel analysis of the predictive skill of the models. We include output from the PMIP3 experiments, from the two models for which suitable data are currently available. We also perform the same analyses for the PMIP2 mid-Holocene (6 ka BP) ensembles and available proxy data sets. Our results are predominantly positive for the LGM, suggesting that as well as the global mean change, the models can reproduce the observed pattern of change on the broadest scales, such as the overall land-sea contrast and polar amplification, although the more detailed sub-continental scale patterns of change remain elusive. In contrast, our results for the mid-Holocene are substantially negative, with the models failing to reproduce the observed changes with any degree of skill. One cause of this problem could be that the globally and annually averaged forcing anomaly is very weak at the mid-Holocene, and so the results are dominated by the more localised regional patterns in the parts of the globe for which data are available. The root cause of the model-data mismatch at these scales is unclear. If the proxy calibration is itself reliable, then representativity error in the data-model comparison and missing climate feedbacks in the models are other possible sources of error.

  7. Prediction Model of Mechanical Extending Limits in Horizontal Drilling and Design Methods of Tubular Strings to Improve Limits

    Directory of Open Access Journals (Sweden)

    Wenjun Huang

    2017-01-01

    Full Text Available Mechanical extending limit in horizontal drilling means the maximum horizontal extending length of a horizontal well under certain ground and down-hole mechanical constraint conditions. Around this concept, a constrained optimization model of mechanical extending limits is built and simplified analytical results for pick-up and slack-off operations are deduced. The horizontal extending limits for various tubular strings under different drilling parameters are calculated and plotted. To improve extending limits, an optimal design model of drill strings is built and applied to a case study. The results indicate that horizontal extending limits are considerably underestimated when the effects of friction force on critical helical buckling loads are neglected. Horizontal extending limits first increase and then tend to stable values with vertical depth. Horizontal extending limits increase faster but finally become smaller with increasing horizontal pushing forces for tubular strings of smaller modulus-weight ratio. Sliding slack-off is the main limiting operation and high axial friction is the main factor constraining horizontal extending limits. A sophisticated installation of multiple tubular strings can greatly inhibit helical buckling and increase horizontal extending limits. The optimal design model is called only once to obtain design results, which greatly increases the calculation efficiency.

  8. Model unspecific search in CMS. Model unspecific limits

    Energy Technology Data Exchange (ETDEWEB)

    Knutzen, Simon; Albert, Andreas; Duchardt, Deborah; Hebbeker, Thomas; Lieb, Jonas; Meyer, Arnd; Pook, Tobias; Roemer, Jonas [III. Physikalisches Institut A, RWTH Aachen University (Germany)

    2016-07-01

    The standard model of particle physics is increasingly challenged by recent discoveries and also by long known phenomena, representing a strong motivation to develop extensions of the standard model. The number of theories describing possible extensions is large and steadily growing. In this presentation a new approach is introduced that verifies whether a given theory beyond the standard model is consistent with data collected by the CMS detector, without the need to perform a dedicated search. To achieve this, model unspecific limits on the number of additional events above the standard model expectation are calculated in every event class produced by the MUSiC algorithm. Furthermore, a tool is provided to translate these results into limits on the signal cross section of any theory. In addition to the general procedure, first results and examples are shown using the proton-proton collision data taken at a centre-of-mass energy of 8 TeV.

  9. Maximum likelihood fitting of FROC curves under an initial-detection-and-candidate-analysis model

    International Nuclear Information System (INIS)

    Edwards, Darrin C.; Kupinski, Matthew A.; Metz, Charles E.; Nishikawa, Robert M.

    2002-01-01

    We have developed a model for FROC curve fitting that relates the observer's FROC performance not to the ROC performance that would be obtained if the observer's responses were scored on a per image basis, but rather to a hypothesized ROC performance that the observer would obtain in the task of classifying a set of 'candidate detections' as positive or negative. We adopt the assumptions of the Bunch FROC model, namely that the observer's detections are all mutually independent, as well as assumptions qualitatively similar to, but different in nature from, those made by Chakraborty in his AFROC scoring methodology. Under the assumptions of our model, we show that the observer's FROC performance is a linearly scaled version of the candidate analysis ROC curve, where the scaling factors are just given by the FROC operating point coordinates for detecting initial candidates. Further, we show that the likelihood function of the model parameters given observational data takes on a simple form, and we develop a maximum likelihood method for fitting a FROC curve to this data. FROC and AFROC curves are produced for computer vision observer datasets and compared with the results of the AFROC scoring method. Although developed primarily with computer vision schemes in mind, we hope that the methodology presented here will prove worthy of further study in other applications as well

  10. [Prediction of potential geographic distribution of Lyme disease in Qinghai province with Maximum Entropy model].

    Science.gov (United States)

    Zhang, Lin; Hou, Xuexia; Liu, Huixin; Liu, Wei; Wan, Kanglin; Hao, Qin

    2016-01-01

    To predict the potential geographic distribution of Lyme disease in Qinghai by using the Maximum Entropy model (MaxEnt). The sero-diagnosis data of Lyme disease in 6 counties (Huzhu, Zeku, Tongde, Datong, Qilian and Xunhua) and the environmental and anthropogenic data including altitude, human footprint, normalized difference vegetation index (NDVI) and temperature in Qinghai province since 1990 were collected. Using the data of Huzhu, Zeku and Tongde, the prediction of the potential distribution of Lyme disease in Qinghai was conducted with MaxEnt. The prediction results were compared with the human sero-prevalence of Lyme disease in Datong, Qilian and Xunhua counties in Qinghai. Three hot spots of Lyme disease were predicted in Qinghai, all in the eastern forest areas. Furthermore, NDVI played the most important role in the model prediction, followed by human footprint. Datong, Qilian and Xunhua counties are all in eastern Qinghai. Xunhua was in hot spot area II, Datong was close to the north of hot spot area III, while Qilian, with the lowest sero-prevalence of Lyme disease, was not in the hot spot areas. The data were well modeled in MaxEnt (Area Under Curve = 0.980). The actual distribution of Lyme disease in Qinghai was consistent with the results of the model prediction. MaxEnt could be used in predicting the potential distribution patterns of Lyme disease. The distribution of vegetation and the range and intensity of human activity might be related to the Lyme disease distribution.

  11. Application of Markov chain model to daily maximum temperature for thermal comfort in Malaysia

    International Nuclear Information System (INIS)

    Nordin, Muhamad Asyraf bin Che; Hassan, Husna

    2015-01-01

    The Markov chain's first-order principle has been widely used to model various meteorological fields for prediction purposes. In this study, 14 years (2000-2013) of daily maximum temperature data in Bayan Lepas were used. Earlier studies showed that the outdoor thermal comfort range based on the physiologically equivalent temperature (PET) index in Malaysia is less than 34°C, thus the data obtained were classified into two states: a normal state (within the thermal comfort range) and a hot state (above the thermal comfort range). The long-run results show that the probability of the daily maximum temperature exceeding the thermal comfort range (TCR) is only 2.2%, while the probability of it remaining within the TCR is 97.8%.
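
    The long-run probabilities of such a two-state chain are simply the stationary distribution of its transition matrix. A short numpy sketch, with an assumed transition matrix chosen only to reproduce the reported 97.8%/2.2% split (not the study's estimates):

      import numpy as np

      # Rows: from-state, columns: to-state; state 0 = normal (within TCR),
      # state 1 = hot (above TCR). Values are assumed for illustration.
      P = np.array([[0.985, 0.015],
                    [0.660, 0.340]])

      # Stationary distribution: left eigenvector of P for eigenvalue 1
      w, v = np.linalg.eig(P.T)
      pi = np.real(v[:, np.argmax(np.real(w))])
      pi /= pi.sum()
      print(pi)   # ~[0.978, 0.022], i.e. 97.8% normal, 2.2% hot in the long run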

  12. Online Robot Dead Reckoning Localization Using Maximum Relative Entropy Optimization With Model Constraints

    International Nuclear Information System (INIS)

    Urniezius, Renaldas

    2011-01-01

    The principle of Maximum relative Entropy optimization was analyzed for dead reckoning localization of a rigid body when observation data from two attached accelerometers were collected. Model constraints were derived from the relationships between the sensors. The experiment's results confirmed that the noise on each accelerometer axis can be successfully filtered by exploiting the dependency between channels and the dependency within the time series data. The dependency between channels was used for the a priori calculation, and the a posteriori distribution was derived using the dependency within the time series data. Data from an autocalibration experiment were revisited, removing the initial assumption that the instantaneous rotation axis of the rigid body was known. Performance results confirmed that such an approach could be used for online dead reckoning localization.

  13. Toward Modeling Limited Plasticity in Ceramic Materials

    National Research Council Canada - National Science Library

    Grinfeld, Michael; Schoenfeld, Scott E; Wright, Tim W

    2008-01-01

    The characteristic features of many armor-related ceramic materials are anisotropy at the micro-scale level and very limited, though non-vanishing, plasticity due to the limited number of planes available for plastic slip...

  14. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed.

  15. A Low-Cost Maximum Power Point Tracking System Based on Neural Network Inverse Model Controller

    Directory of Open Access Journals (Sweden)

    Carlos Robles Algarín

    2018-01-01

    Full Text Available This work presents the design, modeling, and implementation of a neural network inverse model controller for tracking the maximum power point of a photovoltaic (PV) module. A nonlinear autoregressive network with exogenous inputs (NARX) was implemented in a serial-parallel architecture. The PV module mathematical modeling was developed, a buck converter was designed to operate in the continuous conduction mode with a switching frequency of 20 kHz, and the dynamic neural controller was designed using the Neural Network Toolbox from Matlab/Simulink (MathWorks, Natick, MA, USA), and it was implemented on an open-hardware Arduino Mega board. To obtain the reference signals for the NARX and to determine the behavior of the 65 W PV module, a system made of a 0.8 W PV cell, a temperature sensor, a voltage sensor and a static neural network was used. To evaluate performance, a comparison with the traditional perturb-and-observe (P&O) algorithm was made in terms of response time and oscillations around the operating point. Simulation results demonstrated the superiority of the neural controller over the P&O. Implementation results showed that approximately the same power is obtained with both controllers, but the P&O controller presents oscillations between 7 W and 10 W, in contrast to the inverse controller, which had oscillations between 1 W and 2 W.

  16. Predictive modeling and mapping of Malayan Sun Bear (Helarctos malayanus) distribution using maximum entropy.

    Science.gov (United States)

    Nazeri, Mona; Jusoff, Kamaruzaman; Madani, Nima; Mahmud, Ahmad Rodzi; Bahman, Abdul Rani; Kumar, Lalit

    2012-01-01

    One of the available tools for mapping the geographical distribution and potential suitable habitats of a species is species distribution modeling. These techniques are very helpful for finding poorly known distributions of species in poorly sampled areas, such as the tropics. Maximum Entropy (MaxEnt) is a recently developed modeling method that can be successfully calibrated using a relatively small number of records. In this research, the MaxEnt model was applied to describe the distribution and identify the key factors shaping the potential distribution of the vulnerable Malayan Sun Bear (Helarctos malayanus) in one of the main remaining habitats in Peninsular Malaysia. MaxEnt results showed that even though Malayan sun bear habitat is tied to tropical evergreen forests, the species lives within a marginal threshold of bio-climatic variables. On the other hand, current protected area networks within Peninsular Malaysia do not cover most of the sun bear's potential suitable habitats. Assuming that the predicted suitability map covers the sun bear's actual distribution, future climate change, forest degradation and illegal hunting could potentially severely affect the sun bear population.

  18. Maximum-Entropy Models of Sequenced Immune Repertoires Predict Antigen-Antibody Affinity.

    Directory of Open Access Journals (Sweden)

    Lorenzo Asti

    2016-04-01

    Full Text Available The immune system has developed a number of distinct complex mechanisms to shape and control the antibody repertoire. One of these mechanisms, the affinity maturation process, works in an evolutionary-like fashion: after binding to a foreign molecule, the antibody-producing B-cells exhibit a high-frequency mutation rate in the genome region that codes for the antibody active site. Eventually, cells that produce antibodies with higher affinity for their cognate antigen are selected and clonally expanded. Here, we propose a new statistical approach based on maximum entropy modeling in which a scoring function related to the binding affinity of antibodies against a specific antigen is inferred from a sample of sequences of the immune repertoire of an individual. We use our inference strategy to infer a statistical model on a data set obtained by sequencing a fairly large portion of the immune repertoire of an HIV-1 infected patient. The Pearson correlation coefficient between our scoring function and the IC50 neutralization titer measured on 30 different antibodies of known sequence is as high as 0.77 (p-value 10^-6), outperforming other sequence- and structure-based models.

  19. A Novel Maximum Entropy Markov Model for Human Facial Expression Recognition.

    Directory of Open Access Journals (Sweden)

    Muhammad Hameed Siddiqi

    Full Text Available Research in video-based FER systems has exploded in the past decade. However, most of the previous methods work well only when they are trained and tested on the same dataset. Illumination settings, image resolution, camera angle, and physical characteristics of the people differ from one dataset to another. Considering a single dataset keeps the variance, which results from these differences, to a minimum. Having a robust FER system that can work across several datasets is thus highly desirable. The aim of this work is to design, implement, and validate such a system using different datasets. In this regard, the major contribution is made at the recognition module, which uses the maximum entropy Markov model (MEMM) for expression recognition. In this model, the states of the human expressions are modeled as the states of an MEMM, with the video-sensor observations as the observations of the MEMM. A modified Viterbi algorithm is utilized to generate the most probable expression state sequence based on these observations. Lastly, an algorithm is designed that predicts the expression state from the generated state sequence. Performance is compared against several existing state-of-the-art FER systems on six publicly available datasets. A weighted average accuracy of 97% is achieved across all datasets.
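
    For context, Viterbi decoding finds the highest-scoring state sequence given per-step observation and transition scores. A minimal numpy sketch of a standard (HMM-style) Viterbi decode is shown below; the paper's MEMM variant would replace the separate transition and observation terms with a single conditional model P(s' | s, o), and the "modified" aspects described above are not reproduced here:

      import numpy as np

      def viterbi(log_pi, log_A, log_B, obs):
          # log_pi[s]: initial scores; log_A[s, s']: transition scores;
          # log_B[s, o]: per-state observation scores; obs: observation indices.
          n, T = log_A.shape[0], len(obs)
          back = np.zeros((T, n), dtype=int)
          delta = log_pi + log_B[:, obs[0]]
          for t in range(1, T):
              scores = delta[:, None] + log_A + log_B[:, obs[t]][None, :]
              back[t] = np.argmax(scores, axis=0)
              delta = np.max(scores, axis=0)
          path = [int(np.argmax(delta))]        # best final state, then backtrack
          for t in range(T - 1, 0, -1):
              path.append(int(back[t, path[-1]]))
          return path[::-1]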

  20. Changes in the Global Hydrological Cycle: Lessons from Modeling Lake Levels at the Last Glacial Maximum

    Science.gov (United States)

    Lowry, D. P.; Morrill, C.

    2011-12-01

    Geologic evidence shows that lake levels in currently arid regions were higher and lakes in currently wet regions were lower during the Last Glacial Maximum (LGM). Current hypotheses used to explain these lake level changes include the thermodynamic hypothesis, in which decreased tropospheric water vapor coupled with patterns of convergence and divergence caused dry areas to become more wet and vice versa, the dynamic hypothesis, in which shifts in the jet stream and Inter-Tropical Convergence Zone (ITCZ) altered precipitation patterns, and the evaporation hypothesis, in which lake expansions are attributed to reduced evaporation in a colder climate. This modeling study uses the output of four climate models participating in phase 2 of the Paleoclimate Modeling Intercomparison Project (PMIP2) as input into a lake energy-balance model, in order to test the accuracy of the models and understand the causes of lake level changes. We model five lakes which include the Great Basin lakes, USA; Lake Petén Itzá, Guatemala; Lake Caçó, northern Brazil; Lake Tauca (Titicaca), Bolivia and Peru; and Lake Cari-Laufquen, Argentina. These lakes create a transect through the drylands of North America through the tropics and to the drylands of South America. The models accurately recreate LGM conditions in 14 out of 20 simulations, with the Great Basin lakes being the most robust and Lake Caçó being the least robust, due to model biases in portraying the ITCZ over South America. An analysis of the atmospheric moisture budget from one of the climate models shows that thermodynamic processes contribute most significantly to precipitation changes over the Great Basin, while dynamic processes are most significant for the other lakes. Lake Cari-Laufquen shows a lake expansion that is most likely attributed to reduced evaporation rather than changes in regional precipitation, suggesting that lake levels alone may not be the best indicator of how much precipitation this region receives.

  1. Maximum Potential Score (MPS): An operating model for a successful customer-focused strategy.

    Directory of Open Access Journals (Sweden)

    Cabello González, José Manuel

    2015-12-01

    Full Text Available One of marketers’ chief objectives is to achieve customer loyalty, which is a key factor for profitable growth. Therefore, they need to develop a strategy that attracts and maintains customers, giving them adequate motives, both tangible (prices and promotions) and intangible (personalized service and treatment), to satisfy a customer and make him loyal to the company. Finding a way to accurately measure satisfaction and customer loyalty is very important. With regard to typical Relationship Marketing measures, we can consider listening to customers, which can help to achieve a sustainable competitive advantage. Customer satisfaction surveys are essential tools for listening to customers. Short questionnaires have gained considerable acceptance among marketers as a means to obtain a customer satisfaction measure. Our research provides an indication of the benefits of a short questionnaire (one to three questions). We find that the number of questions in a survey is significantly related to participation in the survey (Net Promoter Score, or NPS). We also find that the three-question survey (Maximum Potential Score, or MPS) is more likely to attract participants than a traditional survey. Our main goal is to analyse one method as a potential predictor of customer loyalty. Using surveys, we attempt to empirically establish the causal factors in determining the satisfaction of customers. This paper describes a maximum potential operating model that captures, with a three-question survey, important elements for a successful customer-focused strategy. MPS may give us lower participation rates than NPS but provides important information that helps to convert unhappy or merely satisfied customers into loyal customers.

  2. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    Science.gov (United States)

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
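
    The iterative selection step — repeatedly adding the site most dissimilar from those already chosen — can be illustrated with a simple farthest-point heuristic over standardized environmental factors. This is a simplified stand-in for the Maxent-based procedure, with synthetic data (all values assumed):

      import numpy as np

      # Rows: candidate 20 km x 20 km sites; columns: standardized environmental
      # factors (temperature, precipitation, elevation, vegetation score).
      rng = np.random.default_rng(0)
      env = rng.normal(size=(500, 4))         # synthetic stand-in for the real grid

      selected = [0]                          # seed with an arbitrary first site
      for _ in range(7):                      # pick 8 sites total, as in the study
          dists = np.linalg.norm(env[:, None, :] - env[selected][None, :, :], axis=2)
          selected.append(int(np.argmax(dists.min(axis=1))))  # most dissimilar site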

  3. Bayesian, maximum parsimony and UPGMA models for inferring the phylogenies of antelopes using mitochondrial markers.

    Science.gov (United States)

    Khan, Haseeb A; Arif, Ibrahim A; Bahkali, Ali H; Al Farhan, Ahmad H; Al Homaidan, Ali A

    2008-10-06

    This investigation aimed to compare the inference of antelope phylogenies resulting from the 16S rRNA, cytochrome-b (cyt-b) and d-loop segments of mitochondrial DNA using three different computational models including Bayesian (BA), maximum parsimony (MP) and unweighted pair group method with arithmetic mean (UPGMA). The respective nucleotide sequences of three Oryx species (Oryx leucoryx, Oryx dammah and Oryx gazella) and an out-group (Addax nasomaculatus) were aligned and subjected to BA, MP and UPGMA models for comparing the topologies of the respective phylogenetic trees. The 16S rRNA region possessed the highest frequency of conserved sequences (97.65%) followed by cyt-b (94.22%) and d-loop (87.29%). There were few transitions (2.35%) and no transversions in 16S rRNA, as compared to cyt-b (5.61% transitions and 0.17% transversions) and d-loop (11.57% transitions and 1.14% transversions), when comparing the four taxa. All three mitochondrial segments clearly differentiated the genus Addax from Oryx using the BA or UPGMA models. The topologies of all the gamma-corrected Bayesian trees were identical irrespective of the marker type. The UPGMA trees resulting from 16S rRNA and d-loop sequences were also identical (Oryx dammah grouped with Oryx leucoryx) to the Bayesian trees, except that the UPGMA tree based on cyt-b showed a slightly different phylogeny (Oryx dammah grouped with Oryx gazella) with low bootstrap support. However, the MP model failed to differentiate the genus Addax from Oryx. These findings demonstrate the efficiency and robustness of the BA and UPGMA methods for phylogenetic analysis of antelopes using mitochondrial markers.

  4. MODELLING OF DYNAMIC SPEED LIMITS USING THE MODEL PREDICTIVE CONTROL

    Directory of Open Access Journals (Sweden)

    Andrey Borisovich Nikolaev

    2017-09-01

    Full Text Available The article considers issues of traffic management using the intelligent “Car-Road” system (IVHS), which consists of interacting intelligent vehicles (IVs) and intelligent roadside controllers. Vehicles are organized in convoys with small distances between them. All vehicles are assumed to be fully automated (throttle control, braking, steering). Approaches are proposed for determining speed limits for cars on the motorway using model predictive control (MPC). The article proposes an approach to dynamic speed limits that minimizes the downtime of vehicles in traffic.

  5. Bayesian Monte Carlo and Maximum Likelihood Approach for Uncertainty Estimation and Risk Management: Application to Lake Oxygen Recovery Model

    Science.gov (United States)

    Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...

  6. Bayesian hierarchical models for regional climate reconstructions of the last glacial maximum

    Science.gov (United States)

    Weitzel, Nils; Hense, Andreas; Ohlwein, Christian

    2017-04-01

    Spatio-temporal reconstructions of past climate are important for the understanding of the long-term behavior of the climate system and the sensitivity to forcing changes. Unfortunately, they are subject to large uncertainties, have to deal with a complex proxy-climate structure, and a physically reasonable interpolation between the sparse proxy observations is difficult. Bayesian Hierarchical Models (BHMs) are a class of statistical models that is well suited for spatio-temporal reconstructions of past climate because they permit the inclusion of multiple sources of information (e.g. records from different proxy types, uncertain age information, output from climate simulations) and quantify uncertainties in a statistically rigorous way. BHMs in paleoclimatology typically consist of three stages which are modeled individually and are combined using Bayesian inference techniques. The data stage models the proxy-climate relation (often named transfer function), the process stage models the spatio-temporal distribution of the climate variables of interest, and the prior stage consists of prior distributions of the model parameters. For our BHMs, we translate well-known proxy-climate transfer functions for pollen to a Bayesian framework. In addition, we can include Gaussian distributed local climate information from preprocessed proxy records. The process stage combines physically reasonable spatial structures from prior distributions with proxy records which leads to a multivariate posterior probability distribution for the reconstructed climate variables. The prior distributions that constrain the possible spatial structure of the climate variables are calculated from climate simulation output. We present results from pseudoproxy tests as well as new regional reconstructions of temperatures for the last glacial maximum (LGM, ~21,000 years BP). These reconstructions combine proxy data syntheses with information from climate simulations for the LGM that were

  7. Deconvolving the wedge: maximum-likelihood power spectra via spherical-wave visibility modelling

    Science.gov (United States)

    Ghosh, A.; Mertens, F. G.; Koopmans, L. V. E.

    2018-03-01

    Direct detection of the Epoch of Reionization (EoR) via the red-shifted 21-cm line will have unprecedented implications on the study of structure formation in the infant Universe. To fulfil this promise, current and future 21-cm experiments need to detect this weak EoR signal in the presence of foregrounds that are several orders of magnitude larger. This requires extreme noise control and improved wide-field high dynamic-range imaging techniques. We propose a new imaging method based on a maximum likelihood framework which solves for the interferometric equation directly on the sphere, or equivalently in the uvw-domain. The method uses the one-to-one relation between spherical waves and spherical harmonics (SpH). It consistently handles signals from the entire sky, and does not require a w-term correction. The SpH coefficients represent the sky-brightness distribution and the visibilities in the uvw-domain, and provide a direct estimate of the spatial power spectrum. Using these spectrally smooth SpH coefficients, bright foregrounds can be removed from the signal, including their side-lobe noise, which is one of the limiting factors in high dynamic-range wide-field imaging. Chromatic effects causing the so-called `wedge' are effectively eliminated (i.e. deconvolved) in the cylindrical (k⊥, k∥) power spectrum, compared to a power spectrum computed directly from the images of the foreground visibilities where the wedge is clearly present. We illustrate our method using simulated Low-Frequency Array observations, finding an excellent reconstruction of the input EoR signal with minimal bias.

  8. Quantile-based Bayesian maximum entropy approach for spatiotemporal modeling of ambient air quality levels.

    Science.gov (United States)

    Yu, Hwa-Lung; Wang, Chih-Hsin

    2013-02-05

    Understanding the daily changes in ambient air quality concentrations is important to assessing human exposure and environmental health. However, the fine temporal scales (e.g., hourly) involved in this assessment often lead to high variability in air quality concentrations. This is because of the complex short-term physical and chemical mechanisms among the pollutants. Consequently, high heterogeneity is usually present in not only the averaged pollution levels, but also the intraday variance levels of the daily observations of ambient concentration across space and time. This characteristic decreases the estimation performance of common techniques. This study proposes a novel quantile-based Bayesian maximum entropy (QBME) method to account for the nonstationary and nonhomogeneous characteristics of ambient air pollution dynamics. The QBME method characterizes the spatiotemporal dependence among the ambient air quality levels based on their location-specific quantiles and accounts for spatiotemporal variations using a local weighted smoothing technique. The epistemic framework of the QBME method allows researchers to further consider the uncertainty of space-time observations. This study presents the spatiotemporal modeling of daily CO and PM10 concentrations across Taiwan from 1998 to 2009 using the QBME method. Results show that the QBME method can effectively improve estimation accuracy in terms of lower mean absolute errors and standard deviations over space and time, especially for pollutants with strong nonhomogeneous variances across space. In addition, the epistemic framework allows researchers to assimilate site-specific secondary information where observations are absent because of the common preferential sampling issues of environmental data. The proposed QBME method provides a practical and powerful framework for the spatiotemporal modeling of ambient pollutants.

  9. Model for movement of molten limiter material during the ISX-B beryllium limiter experiment

    International Nuclear Information System (INIS)

    Langley, R.A.; England, A.C.; Edmonds, P.H.; Hogan, J.T.; Neilson, G.H.

    1986-01-01

    A model is proposed for the movement and erosion of limiter material during the Beryllium Limiter Experiment performed on the ISX-B Tokamak. This model is consistent with observed experimental results and plasma operational characteristics. Conclusions drawn from the model can provide an understanding of erosion mechanisms, thereby contributing to the development of future design criteria. (author)

  10. Limitations of the Revised DREAM Model

    OpenAIRE

    Breivik, Stian

    2012-01-01

    Master's thesis in Environmental Technology. DREAM (Dose-related Risk and Exposure Assessment Model) is a risk assessment tool for modeling offshore waste discharge to the marine environment. The drilling waste model was developed through the joint industrial project ERMS (Environmental Risk Management System). The method follows a PEC/PNEC (Predicted Environmental Concentration / Predicted No Effect Concentration) approach to determine an EIF (Environmental Impact Factor) fo...

  11. Estimating total maximum daily loads with the Stochastic Empirical Loading and Dilution Model

    Science.gov (United States)

    Granato, Gregory; Jones, Susan Cheung

    2017-01-01

    The Massachusetts Department of Transportation (DOT) and the Rhode Island DOT are assessing and addressing roadway contributions to total maximum daily loads (TMDLs). Example analyses for total nitrogen, total phosphorus, suspended sediment, and total zinc in highway runoff were done by the U.S. Geological Survey in cooperation with FHWA to simulate long-term annual loads for TMDL analyses with the stochastic empirical loading and dilution model known as SELDM. Concentration statistics from 19 highway runoff monitoring sites in Massachusetts were used with precipitation statistics from 11 long-term monitoring sites to simulate long-term pavement yields (loads per unit area). Highway sites were stratified by traffic volume or surrounding land use to calculate concentration statistics for rural roads, low-volume highways, high-volume highways, and ultraurban highways. The median of the event mean concentration statistics in each traffic volume category was used to simulate annual yields from pavement for a 29- or 30-year period. Long-term average yields for total nitrogen, phosphorus, and zinc from rural roads are lower than yields from the other categories, but yields of sediment are higher than for the low-volume highways. The average yields of the selected water quality constituents from high-volume highways are 1.35 to 2.52 times the associated yields from low-volume highways. The average yields of the selected constituents from ultraurban highways are 1.52 to 3.46 times the associated yields from high-volume highways. Example simulations indicate that both concentration reduction and flow reduction by structural best management practices are crucial for reducing runoff yields.

  12. Analytic Models of Brown Dwarfs and the Substellar Mass Limit

    Directory of Open Access Journals (Sweden)

    Sayantan Auddy

    2016-01-01

    Full Text Available We present the analytic theory of brown dwarf evolution and the lower mass limit of the hydrogen-burning main-sequence stars, and introduce some modifications to the existing models. We give an exact expression for the pressure of an ideal nonrelativistic Fermi gas at a finite temperature, therefore allowing for nonzero values of the degeneracy parameter. We review the derivation of surface luminosity using an entropy matching condition and the first-order phase transition between the molecular hydrogen in the outer envelope and the partially ionized hydrogen in the inner region. We also discuss the results of modern simulations of the plasma phase transition, which illustrate the uncertainties in determining its critical temperature. Based on the existing models and with some simple modifications, we find the maximum mass for a brown dwarf to be in the range 0.064 M⊙–0.087 M⊙. An analytic formula for the luminosity evolution allows us to estimate the time period of the nonsteady-state (i.e., non-main-sequence) nuclear burning for substellar objects. We also calculate the evolution of very low mass stars. We estimate that ≃11% of stars take longer than 10^7 yr to reach the main sequence, and ≃5% of stars take longer than 10^8 yr.
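
    For reference, the standard textbook form such an exact finite-temperature expression takes, written here with normalized Fermi-Dirac integrals (the paper's own notation may differ):

      P = \frac{g k_B T}{\lambda^3} F_{3/2}(\eta), \qquad
      n = \frac{g}{\lambda^3} F_{1/2}(\eta), \qquad
      \lambda = \frac{h}{\sqrt{2 \pi m k_B T}},

      F_\nu(\eta) = \frac{1}{\Gamma(\nu + 1)} \int_0^\infty \frac{x^\nu}{e^{x - \eta} + 1} \, dx,

    where η = μ/(k_B T) is the degeneracy parameter (g = 2 for electrons); η → −∞ recovers the classical ideal-gas law and η → +∞ the fully degenerate limit.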

  13. Animal models of osteoporosis - necessity and limitations

    Directory of Open Access Journals (Sweden)

    Turner A. Simon

    2001-06-01

    Full Text Available There is a great need to further characterise the available animal models for postmenopausal osteoporosis, for understanding the pathogenesis of the disease, investigating new therapies (e.g., selective estrogen receptor modulators (SERMs)) and evaluating prosthetic devices in osteoporotic bone. Animal models that have been used in the past include non-human primates, dogs, cats, rodents, rabbits, guinea pigs and minipigs, all of which have advantages and disadvantages. Sheep are a promising model for various reasons: they are docile, easy to handle and house, relatively inexpensive, available in large numbers, spontaneously ovulate, and the sheep's bones are large enough to evaluate orthopaedic implants. Most animal models have used females, and osteoporosis in the male has been largely ignored. Recently, interest in the development of appropriate prosthetic devices that would stimulate osseointegration into osteoporotic, appendicular, axial and mandibular bone has intensified. Augmentation of osteopenic lumbar vertebrae with bioactive ceramics (vertebroplasty) is another area that will require testing in the appropriate animal model. Using experimental animal models for the study of these different facets of osteoporosis minimizes some of the difficulties associated with studying the disease in humans, namely time and behavioral variability among test subjects. New experimental drug therapies and orthopaedic implants can potentially be tested on large numbers of animals subjected to a level of experimental control impossible in human clinical research.

  14. MODEL PREDICTIVE CONTROL FOR PHOTOVOLTAIC STATION MAXIMUM POWER POINT TRACKING SYSTEM

    Directory of Open Access Journals (Sweden)

    I. Elzein

    2015-01-01

    Full Text Available The purpose of this paper is to present an alternative maximum power point tracking (MPPT) algorithm for a photovoltaic module (PVM) to produce the maximum power (Pmax) using the optimal duty ratio (D) for different types of converters and load matching. We present a state-based approach to the design of the maximum power point tracker for a stand-alone photovoltaic power generation system. The system under consideration consists of a solar array with nonlinear time-varying characteristics and a step-up converter with an appropriate filter. The proposed algorithm has the advantages of maximizing the efficiency of power utilization, can be integrated with other MPPT algorithms without affecting the PVM performance, is excellent for real-time applications, and is a robust analytical method, different from the traditional MPPT algorithms which are based more on trial and error or on comparisons between present and past states. The procedure to calculate the optimal duty ratio for buck, boost and buck-boost converters, to transfer the maximum power from a PVM to a load, is presented in the paper. Additionally, the existence and uniqueness of the optimal internal impedance, to transfer the maximum power from a photovoltaic module using load matching, is proved.
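
    The load-matching idea reduces to choosing D so that the converter's reflected input resistance equals the module's optimal internal resistance at the maximum power point. A minimal sketch for an ideal (lossless) buck-boost converter, where R_in = ((1-D)/D)^2 · R_load; all numbers are hypothetical:

      import math

      def buck_boost_duty(r_mpp, r_load):
          # Solve ((1-D)/D)^2 * r_load = r_mpp for D (ideal converter assumed)
          return math.sqrt(r_load) / (math.sqrt(r_load) + math.sqrt(r_mpp))

      # Hypothetical module: MPP at 17.5 V / 3.7 A, feeding a 10-ohm load
      r_mpp = 17.5 / 3.7                   # optimal internal resistance, ohms
      print(buck_boost_duty(r_mpp, 10.0))  # ~0.59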

  15. Oxygen stable isotopes during the Last Glacial Maximum climate: perspectives from data-model (iLOVECLIM) comparison

    NARCIS (Netherlands)

    Caley, T.; Roche, D.M.V.A.P.; Waelbroeck, C.; Michel, E.

    2014-01-01

    We use the fully coupled atmosphere-ocean three-dimensional model of intermediate complexity iLOVECLIM to simulate the climate and oxygen stable isotopic signal during the Last Glacial Maximum (LGM, 21 000 years). By using a model that is able to explicitly simulate the sensor (δ18O), results can be

  16. Recovery of Graded Response Model Parameters: A Comparison of Marginal Maximum Likelihood and Markov Chain Monte Carlo Estimation

    Science.gov (United States)

    Kieftenbeld, Vincent; Natesan, Prathiba

    2012-01-01

    Markov chain Monte Carlo (MCMC) methods enable a fully Bayesian approach to parameter estimation of item response models. In this simulation study, the authors compared the recovery of graded response model parameters using marginal maximum likelihood (MML) and Gibbs sampling (MCMC) under various latent trait distributions, test lengths, and…

  17. Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo

    Science.gov (United States)

    Cheong, R. Y.; Gabda, D.

    2017-09-01

    Analysis of flood trends is vital since flooding threatens human living in financial, environmental and security terms. The data of annual maximum river flows in Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research showed that the MLE provides unstable results, especially for small sample sizes. In this study, we used different Bayesian Markov chain Monte Carlo (MCMC) methods based on the Metropolis-Hastings algorithm to estimate the GEV parameters. The Bayesian MCMC method is a statistical inference approach that estimates parameters using the posterior distribution based on Bayes’ theorem. The Metropolis-Hastings algorithm is used to overcome the high-dimensional state space faced by the Monte Carlo method. This approach also accounts for more uncertainty in parameter estimation, which then yields a better prediction of maximum river flow in Sabah.
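
    A compact sketch of Metropolis-Hastings sampling of GEV parameters, using synthetic annual maxima and vague priors (data, priors and proposal scales are all illustrative; note that scipy parameterizes the GEV shape as c = −ξ):

      import numpy as np
      from scipy.stats import genextreme, norm

      rng = np.random.default_rng(1)
      data = genextreme.rvs(c=-0.1, loc=100, scale=20, size=30, random_state=rng)

      def log_post(theta):
          mu, log_sigma, xi = theta
          lp = genextreme.logpdf(data, c=-xi, loc=mu, scale=np.exp(log_sigma)).sum()
          # vague priors on location, log-scale and shape
          return lp + norm.logpdf(mu, 100, 100) + norm.logpdf(log_sigma, 0, 10) \
                    + norm.logpdf(xi, 0, 1)

      theta, samples = np.array([data.mean(), np.log(data.std()), 0.0]), []
      for _ in range(20000):
          prop = theta + rng.normal(scale=[2.0, 0.05, 0.05])
          if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
              theta = prop                     # accept the proposal
          samples.append(theta.copy())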

  18. Animal models of contraception: utility and limitations

    Directory of Open Access Journals (Sweden)

    Liechty ER

    2015-04-01

    Full Text Available Emma R Liechty,1 Ingrid L Bergin,1 Jason D Bell2 1Unit for Laboratory Animal Medicine, 2Program on Women's Health Care Effectiveness Research, Department of Obstetrics and Gynecology, University of Michigan, Ann Arbor, MI, USA Abstract: Appropriate animal modeling is vital for the successful development of novel contraceptive devices. Advances in reproductive biology have identified novel pathways for contraceptive intervention. Here we review species-specific anatomic and physiologic considerations impacting preclinical contraceptive testing, including efficacy testing, mechanistic studies, device design, and modeling off-target effects. Emphasis is placed on the use of nonhuman primate models in contraceptive device development. Keywords: nonhuman primate, preclinical, in vivo, contraceptive devices

  19. Animal models of asthma: utility and limitations

    Directory of Open Access Journals (Sweden)

    Aun MV

    2017-11-01

    Full Text Available Marcelo Vivolo Aun,1,2 Rafael Bonamichi-Santos,1,2 Fernanda Magalhães Arantes-Costa,2 Jorge Kalil,1 Pedro Giavina-Bianchi1 1Clinical Immunology and Allergy Division, Department of Internal Medicine, University of São Paulo School of Medicine, São Paulo, Brazil; 2Laboratory of Experimental Therapeutics (LIM20), Department of Internal Medicine, University of Sao Paulo, Sao Paulo, Brazil Abstract: Clinical studies in asthma are not able to clear up all aspects of disease pathophysiology. Animal models have been developed to better understand these mechanisms and to evaluate both the safety and efficacy of therapies before starting clinical trials. Several species of animals have been used in experimental models of asthma, such as Drosophila, rats, guinea pigs, cats, dogs, pigs, primates and equines. However, the species most commonly studied in the last two decades is the mouse, particularly the BALB/c strain. Animal models of asthma try to mimic the pathophysiology of the human disease. They classically include two phases: sensitization and challenge. Sensitization is traditionally performed by the intraperitoneal and subcutaneous routes, but intranasal instillation of allergens has been increasingly used because human asthma is induced by inhalation of allergens. Challenges with allergens are performed through aerosol, intranasal or intratracheal instillation. However, few studies have compared different routes of sensitization and challenge. The causative allergen is another important issue in developing a good animal model. Despite being more traditional and leading to intense inflammation, ovalbumin has been replaced by aeroallergens, such as house dust mites, to use the allergens that cause human disease. Finally, researchers should define the outcomes to be evaluated, such as serum-specific antibodies, airway hyperresponsiveness, inflammation and remodeling. The present review analyzes animal models of asthma, assessing differences between species, allergens and routes of sensitization and challenge.

  20. Method of estimating maximum VOC concentration in void volume of vented waste drums using limited sampling data: Application in transuranic waste drums

    International Nuclear Information System (INIS)

    Liekhus, K.J.; Connolly, M.J.

    1995-01-01

    A test program has been conducted at the Idaho National Engineering Laboratory to demonstrate that the concentration of volatile organic compounds (VOCs) within the innermost layer of confinement in a vented waste drum can be estimated using a model incorporating diffusion and permeation transport principles as well as limited waste drum sampling data. The model consists of a series of material balance equations describing steady-state VOC transport from each distinct void volume in the drum. The primary model input is the measured drum headspace VOC concentration. Model parameters are determined or estimated based on available process knowledge. The model's effectiveness in estimating the VOC concentration in the headspace of the innermost layer of confinement was examined for vented waste drums containing different waste types and configurations. This paper summarizes the experimental measurements and model predictions in vented transuranic waste drums containing solidified sludges and solid waste.
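
    The steady-state material balance has a simple structure: the same VOC flow crosses every confinement layer, so each layer adds a concentration step equal to the flow divided by that layer's transport conductance. A toy sketch of this idea, with hypothetical conductances and concentrations (none of these values come from the paper):

      # Measured headspace concentration plus per-layer concentration steps.
      k_vent = 1.2e-6               # drum-vent conductance, m^3/s (assumed)
      k_layers = [4.0e-7, 8.0e-7]   # liner-lid and inner-bag conductances (assumed)

      c_headspace = 120.0           # measured drum-headspace VOC, ppmv
      c_ambient = 0.5               # ambient VOC, ppmv (assumed)

      # Steady state: flow out the vent equals flow through every inner layer
      q = k_vent * (c_headspace - c_ambient)
      c_inner = c_headspace + q * sum(1.0 / k for k in k_layers)
      print(c_inner)                # estimated innermost-layer concentration, ppmv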

  1. Estimating daily minimum, maximum, and mean near surface air temperature using hybrid satellite models across Israel.

    Science.gov (United States)

    Rosenfeld, Adar; Dorman, Michael; Schwartz, Joel; Novack, Victor; Just, Allan C; Kloog, Itai

    2017-11-01

    Meteorological stations measure air temperature (Ta) accurately with high temporal resolution, but usually suffer from limited spatial resolution due to their sparse distribution across rural, undeveloped or less populated areas. Remote sensing satellite-based measurements provide daily surface temperature (Ts) data in high spatial and temporal resolution and can improve the estimation of daily Ta. In this study we developed spatiotemporally resolved models which allow us to predict three daily parameters: Ta Max (day time), 24h mean, and Ta Min (night time) on a fine 1km grid across the state of Israel. We used and compared both the Aqua and Terra MODIS satellites. We used linear mixed effect models, IDW (inverse distance weighted) interpolations and thin plate splines (using a smooth nonparametric function of longitude and latitude) to first calibrate between Ts and Ta in those locations where we have available data for both and used that calibration to fill in neighboring cells without surface monitors or missing Ts. Out-of-sample ten-fold cross validation (CV) was used to quantify the accuracy of our predictions. Our model performance was excellent for both days with and without available Ts observations for both Aqua and Terra (CV Aqua R^2 results for min 0.966, mean 0.986, and max 0.967; CV Terra R^2 results for min 0.965, mean 0.987, and max 0.968). Our research shows that daily min, mean and max Ta can be reliably predicted using daily MODIS Ts data even across Israel, with high accuracy even for days without Ta or Ts data. These predictions can be used as three separate Ta exposures in epidemiology studies for better diurnal exposure assessment.

  2. Maximum Smoke Temperature in Non-Smoke Model Evacuation Region for Semi-Transverse Tunnel Fire

    OpenAIRE

    B. Lou; Y. Qiu; X. Long

    2017-01-01

    The smoke temperature distribution in the non-smoke-evacuation region under different mechanical smoke exhaust rates in a semi-transverse tunnel fire was studied by FDS numerical simulation in this paper. The effects of fire heat release rate (10 MW, 20 MW and 30 MW) and exhaust rate (from 0 to 160 m3/s) on the maximum smoke temperature in the non-smoke-evacuation region were discussed. Results show that the maximum smoke temperature in the non-smoke-evacuation region decreased with smoke exhaust rate. Plug-holing was obse...

  3. Overcoming some limitations of imprecise reliability models

    DEFF Research Database (Denmark)

    Kozine, Igor; Krymsky, Victor

    2011-01-01

    The application of imprecise reliability models is often hindered by the rapid growth in imprecision that occurs when many components constitute a system and by the fact that time to failure is bounded from above. The latter results in the necessity to explicitly introduce an upper bound on time ...

  4. Are inundation limit and maximum extent of sand useful for differentiating tsunamis and storms? An example from sediment transport simulations on the Sendai Plain, Japan

    Science.gov (United States)

    Watanabe, Masashi; Goto, Kazuhisa; Bricker, Jeremy D.; Imamura, Fumihiko

    2018-02-01

    We examined the quantitative difference in the distribution of tsunami and storm deposits based on numerical simulations of inundation and sediment transport due to tsunami and storm events on the Sendai Plain, Japan. The calculated distance from the shoreline inundated by the 2011 Tohoku-oki tsunami was smaller than that inundated by storm surges from hypothetical typhoon events. Previous studies have assumed that deposits observed farther inland than the possible inundation limit of storm waves and storm surge were tsunami deposits. However, confirming only the extent of inundation is insufficient to distinguish tsunami and storm deposits, because the inundation limit of storm surges may be farther inland than that of tsunamis in the case of gently sloping coastal topography such as on the Sendai Plain. In other locations, where coastal topography is steep, the maximum inland inundation extent of storm surges may be only several hundred meters, so marine-sourced deposits that are distributed several km inland can be identified as tsunami deposits by default. Over both gentle and steep slopes, another difference between tsunami and storm deposits is the total volume deposited, as flow speed over land during a tsunami is faster than during a storm surge. Therefore, the total deposit volume could also be a useful proxy to differentiate tsunami and storm deposits.

  5. Impacts of trace carbon on the microstructure of as-sintered biomedical Ti-15Mo alloy and reassessment of the maximum carbon limit.

    Science.gov (United States)

    Yan, M; Qian, M; Kong, C; Dargusch, M S

    2014-02-01

    The formation of grain boundary (GB) brittle carbides with a complex three-dimensional (3-D) morphology can be detrimental to both the fatigue properties and corrosion resistance of a biomedical titanium alloy. A detailed microscopic study has been performed on an as-sintered biomedical Ti-15Mo (in wt.%) alloy containing 0.032 wt.% C. A noticeable presence of a carbon-enriched phase has been observed along the GB, although the carbon content is well below the maximum carbon limit of 0.1 wt.% specified by ASTM Standard F2066. Transmission electron microscopy (TEM) identified that the carbon-enriched phase is face-centred cubic Ti2C. 3-D tomography reconstruction revealed that the Ti2C structure has morphology similar to primary α-Ti. Nanoindentation confirmed the high hardness and high Young's modulus of the GB Ti2C phase. To avoid GB carbide formation in Ti-15Mo, the carbon content should be limited to 0.006 wt.% by Thermo-Calc predictions. Similar analyses and characterization of the carbide formation in biomedical unalloyed Ti, Ti-6Al-4V and Ti-16Nb have also been performed.

  6. Studying DDT Susceptibility at Discriminating Time Intervals Focusing on Maximum Limit of Exposure Time Survived by DDT Resistant Phlebotomus argentipes (Diptera: Psychodidae): an Investigative Report.

    Science.gov (United States)

    Rama, Aarti; Kesari, Shreekant; Das, Pradeep; Kumar, Vijay

    2017-07-24

    Extensive application of the routine insecticide dichlorodiphenyltrichloroethane (DDT) to control Phlebotomus argentipes (Diptera: Psychodidae), the proven vector of visceral leishmaniasis in India, has evoked the problem of resistance/tolerance against DDT, eventually nullifying DDT-dependent strategies to control this vector. Because tolerating an hour-long exposure to DDT is not challenging enough for the resistant P. argentipes, estimating susceptibility by exposing sand flies to the insecticide for just an hour becomes a trivial and futile task. Therefore, this bioassay study was carried out to investigate the maximum limit of exposure time for which DDT-resistant P. argentipes can endure the effect of DDT and survive. The mortality rate of a laboratory-reared DDT-resistant strain of P. argentipes exposed to DDT was studied at discriminating time intervals of 60 min, and it was concluded that highly resistant sand flies could withstand up to 420 min of exposure to this insecticide. Additionally, the lethal time for female P. argentipes was observed to be higher than for males, suggesting that females are highly resistant to DDT's toxicity. Our results support the monitoring of tolerance limits with respect to time and hence point towards an urgent need to change the World Health Organization's protocol for susceptibility identification in resistant P. argentipes.

  7. IRT Item Parameter Recovery with Marginal Maximum Likelihood Estimation Using Loglinear Smoothing Models

    Science.gov (United States)

    Casabianca, Jodi M.; Lewis, Charles

    2015-01-01

    Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…

  8. Monte Carlo Maximum Likelihood Estimation for Generalized Long-Memory Time Series Models

    NARCIS (Netherlands)

    Mesters, G.; Koopman, S.J.; Ooms, M.

    2016-01-01

    An exact maximum likelihood method is developed for the estimation of parameters in a non-Gaussian nonlinear density function that depends on a latent Gaussian dynamic process with long-memory properties. Our method relies on the method of importance sampling and on a linear Gaussian approximating

  9. Modelling of extreme rainfall events in Peninsular Malaysia based on annual maximum and partial duration series

    Science.gov (United States)

    Zin, Wan Zawiah Wan; Shinyie, Wendy Ling; Jemain, Abdul Aziz

    2015-02-01

    In this study, two series of data for extreme rainfall events are generated based on the Annual Maximum and Partial Duration methods, derived from 102 rain-gauge stations in Peninsular Malaysia for 1982-2012. To determine the optimal threshold for each station, several requirements must be satisfied, and the adapted Hill estimator is employed for this purpose. A semi-parametric bootstrap is then used to estimate the mean square error (MSE) of the estimator at each threshold, and the optimal threshold is selected based on the smallest MSE. The mean annual frequency is also checked to ensure that it lies in the range of one to five, and the resulting data are de-clustered to ensure independence. The two data series are then fitted to the Generalized Extreme Value and Generalized Pareto distributions for the annual maximum and partial duration series, respectively. The parameter estimation methods used are the maximum likelihood and L-moment methods. Two goodness-of-fit tests are then used to evaluate the best-fitted distribution. The results showed that the partial duration series with the Generalized Pareto distribution and maximum likelihood parameter estimation provides the best representation of extreme rainfall events in Peninsular Malaysia for the majority of the stations studied. Based on these findings, several return values are also derived and spatial maps are constructed to identify the distribution characteristics of extreme rainfall in Peninsular Malaysia.
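
    A minimal peaks-over-threshold sketch of the partial-duration route — threshold excesses fitted to a Generalized Pareto distribution by maximum likelihood, then a return level computed — using synthetic data and an assumed 98th-percentile threshold (the study selects thresholds per station via the bootstrap-MSE criterion, omitted here):

      import numpy as np
      from scipy.stats import genpareto

      rng = np.random.default_rng(2)
      rain = rng.gamma(shape=0.4, scale=18.0, size=11000)  # synthetic daily rainfall, mm

      u = np.quantile(rain, 0.98)        # assumed threshold
      exc = rain[rain > u] - u           # threshold excesses (declustering omitted)

      xi, _, sigma = genpareto.fit(exc, floc=0.0)   # ML fit of the GPD

      # m-observation return level: u + (sigma/xi) * ((m * zeta_u)^xi - 1)
      zeta = exc.size / rain.size        # exceedance probability P(X > u)
      m = 100 * 365                      # ~100-year return period for daily data
      print(u + sigma / xi * ((m * zeta) ** xi - 1.0))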

  10. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas; Juul, Anders

    2004-01-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model.

  11. Optimisation of sea surface current retrieval using a maximum cross correlation technique on modelled sea surface temperature

    Science.gov (United States)

    Heuzé, Céline; Eriksson, Leif; Carvajal, Gisela

    2017-04-01

    Using sea surface temperature from satellite images to retrieve sea surface currents is not a new idea, but so far its operational near-real-time implementation has not been possible. Validation studies are too region-specific or too uncertain, owing to the errors induced by the images themselves. Moreover, the sensitivity of the most common retrieval method, the maximum cross correlation, to the three parameters that have to be set is unknown. Using model outputs instead of satellite images, biases induced by this method are assessed here for four different seas of Western Europe, and the best of nine settings and eight temporal resolutions are determined. For all regions, tracking a small 5 km pattern from the first image over a large 30 km region around its original location in a second image, separated from the first image by 6 to 9 hours, returned the most accurate results. Moreover, for all regions, the problem is not inaccurate results but missing results, where the velocity is too low to be picked up by the retrieval. The results are consistent both with limitations caused by ocean surface current dynamics and with the available satellite technology, indicating that automated sea surface current retrieval from sea surface temperature images is feasible now, for search and rescue operations, pollution confinement or even for more energy-efficient and comfortable ship navigation.
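
    A bare-bones sketch of the maximum cross correlation step with the best settings reported above (5 km template, 30 km search window, 1 km pixels, and an assumed 7.5 h image separation); sst1 and sst2 are 2-D temperature arrays, and (i, j) must lie far enough from the image edges:

      import numpy as np

      def mcc_current(sst1, sst2, i, j, tmpl=5, search=30, px_km=1.0, dt_h=7.5):
          t, s = tmpl // 2, search // 2
          patch = sst1[i - t:i + t + 1, j - t:j + t + 1]
          best, best_di, best_dj = -2.0, 0, 0
          for di in range(-s + t, s - t + 1):        # scan the search window
              for dj in range(-s + t, s - t + 1):
                  cand = sst2[i + di - t:i + di + t + 1, j + dj - t:j + dj + t + 1]
                  r = np.corrcoef(patch.ravel(), cand.ravel())[0, 1]
                  if r > best:
                      best, best_di, best_dj = r, di, dj
          to_ms = px_km * 1000.0 / (dt_h * 3600.0)   # pixel offset -> metres/second
          return best_dj * to_ms, best_di * to_ms, best   # (u, v, peak correlation)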

  12. Limiting assumptions in molecular modeling: electrostatics.

    Science.gov (United States)

    Marshall, Garland R

    2013-02-01

    Molecular mechanics attempts to represent intermolecular interactions in terms of classical physics. Initial efforts assumed a point charge located at the atom center and coulombic interactions. It has been recognized over multiple decades that simply representing electrostatics with a charge on each atom fails to reproduce the electrostatic potential surrounding a molecule as estimated by quantum mechanics. Molecular orbitals are not spherically symmetrical, an implicit assumption of monopole electrostatics. This perspective reviews recent evidence that requires the use of multipole electrostatics and polarizability in molecular modeling.

  13. Total maximum allocated load calculation of nitrogen pollutants by linking a 3D biogeochemical-hydrodynamic model with a programming model in Bohai Sea

    Science.gov (United States)

    Dai, Aiquan; Li, Keqiang; Ding, Dongsheng; Li, Yan; Liang, Shengkang; Li, Yanbin; Su, Ying; Wang, Xiulin

    2015-12-01

    The equal percent removal (EPR) method, in which the pollutant reduction ratio is set the same for all administrative regions, failed to satisfy the requirement for water quality improvement in the Bohai Sea imposed by the Coastal Pollution Total Load Control Management programme. The total maximum allocated load (TMAL) of nitrogen pollutants in the sea-sink source regions (SSRs) around the Bohai Rim, which is the maximum pollutant load of every outlet under the limitation of water quality criteria, was estimated by an optimization-simulation method (OSM) combined with a loop approximation calculation. In OSM, water quality is simulated using a water quality model and the pollutant load is calculated with a programming model. The effect of changes in pollutant loads on the TMAL was discussed. Results showed that the TMAL of nitrogen pollutants in 34 SSRs was 1.49×10^5 tons/year. The highest TMAL was observed in summer and the lowest in winter. The TMAL was also higher in the Bohai Strait and central Bohai Sea and lower in the inner areas of Liaodong Bay, Bohai Bay and Laizhou Bay. In the loop approximation calculation, the TMAL obtained was considered to satisfy the water quality criteria once the fluctuation of the concentration response matrix with pollutant loads had been eliminated. Results of a numerical experiment further showed that water quality improved faster, and the improvement was more evident, under TMAL input than when using the EPR method.
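
    The programming-model half of OSM is essentially a linear program: maximize the total allocated load subject to simulated concentration responses staying within the water quality criteria. A toy sketch with an invented 4-point-by-3-outlet response matrix (in the study this matrix comes from the 3D biogeochemical-hydrodynamic model):

      import numpy as np
      from scipy.optimize import linprog

      # R[i, j]: simulated concentration response at control point i per unit
      # load from outlet j (all values assumed for illustration)
      R = np.array([[0.8, 0.1, 0.05],
                    [0.2, 0.7, 0.10],
                    [0.1, 0.3, 0.60],
                    [0.3, 0.3, 0.30]])
      limit = np.array([5.0, 5.0, 5.0, 5.0])   # criteria minus background (assumed)

      # Maximize total load: linprog minimizes, so negate the objective
      res = linprog(c=-np.ones(3), A_ub=R, b_ub=limit, bounds=[(0, None)] * 3)
      print(-res.fun, res.x)                    # TMAL and its outlet allocation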

  14. Limitations of the biopsychosocial model in psychiatry

    Directory of Open Access Journals (Sweden)

    Benning TB

    2015-05-01

    Full Text Available Tony B Benning Maple Ridge Mental Health Centre, Maple Ridge, BC, Canada Abstract: A commitment to an integrative, non-reductionist clinical and theoretical perspective in medicine that honors the importance of all relevant domains of knowledge, not just “the biological,” is clearly evident in Engel’s original writings on the biopsychosocial model. And though this model’s influence on modern psychiatry (in clinical as well as educational settings) has been significant, a growing body of recent literature is critical of it, charging it with lacking philosophical coherence, being insensitive to patients’ subjective experience, being unfaithful to the general systems theory that Engel claimed it to be rooted in, and engendering an undisciplined eclecticism that provides no safeguards against either the dominance or the under-representation of any one of the three domains of bio, psycho, or social. Keywords: critique of biopsychosocial psychiatry, integrative psychiatry, George Engel

  15. A Three-Dimensional Model of the Marine Nitrogen Cycle during the Last Glacial Maximum Constrained by Sedimentary Isotopes

    Directory of Open Access Journals (Sweden)

    Christopher J. Somes

    2017-05-01

    Full Text Available Nitrogen is a key limiting nutrient that influences marine productivity and carbon sequestration in the ocean via the biological pump. In this study, we present the first estimates of nitrogen cycling in a coupled 3D ocean-biogeochemistry-isotope model forced with realistic boundary conditions from the Last Glacial Maximum (LGM, ~21,000 years before present), constrained by nitrogen isotopes. The model predicts a large decrease in nitrogen loss rates due to higher oxygen concentrations in the thermocline and the sea level drop, and, in response, reduced nitrogen fixation. Model experiments are performed to evaluate the effects of hypothesized increases in atmospheric iron fluxes and in the oceanic phosphorus inventory relative to present-day conditions. Enhanced atmospheric iron deposition, which is required to reproduce observations, fuels export production in the Southern Ocean, causing increased deep-ocean nutrient storage. This reduces the transport of preformed nutrients to the tropics via mode waters, thereby decreasing productivity, oxygen-deficient zones, and water-column N-loss there. A larger global phosphorus inventory of up to 15% cannot be excluded from the currently available nitrogen isotope data. It stimulates additional nitrogen fixation that increases the global oceanic nitrogen inventory, productivity, and water-column N-loss. Among our sensitivity simulations, the best agreement with nitrogen isotope data from LGM sediments indicates that water-column and sedimentary N-loss were reduced by 17–62% and 35–69%, respectively, relative to preindustrial values. Our model demonstrates that multiple processes alter the nitrogen isotopic signal in most locations, which creates large uncertainties when quantitatively constraining individual nitrogen cycling processes. One key uncertainty is nitrogen fixation, which decreases by 25–65% in the model during the LGM, mainly in response to reduced N-loss, due to the lack of observations in the open ocean most

  16. Computing maximum likelihood estimates of loglinear models from marginal sums with special attention to loglinear item response theory

    NARCIS (Netherlands)

    Kelderman, Henk

    1991-01-01

    In this paper, algorithms are described for obtaining the maximum likelihood estimates of the parameters in log-linear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual
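
    A generic sketch of the iterative proportional fitting idea referenced in this record: scaling a contingency table to match observed marginal sums, which are the minimal sufficient statistics of the independence log-linear model. The table and margins below are illustrative:

        # Iterative proportional fitting (IPF) on a 2x3 table: alternately
        # rescale rows and columns until the fitted margins match the targets.
        import numpy as np

        table = np.ones((2, 3))                   # starting values
        row_targets = np.array([30.0, 70.0])      # observed row marginal sums
        col_targets = np.array([20.0, 30.0, 50.0])

        for _ in range(100):
            table *= (row_targets / table.sum(axis=1))[:, None]  # fit rows
            table *= (col_targets / table.sum(axis=0))[None, :]  # fit columns

        print(table.round(3))  # ML expected counts under independence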

  17. Computing maximum likelihood estimates of loglinear models from marginal sums with special attention to loglinear item response theory

    NARCIS (Netherlands)

    Kelderman, Henk

    1992-01-01

    In this paper algorithms are described for obtaining the maximum likelihood estimates of the parameters in loglinear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual

  18. An efficient implementation of maximum likelihood identification of LTI state-space models by local gradient search

    NARCIS (Netherlands)

    Bergboer, N.H.; Verdult, V.; Verhaegen, M.H.G.

    2002-01-01

    We present a numerically efficient implementation of the nonlinear least squares and maximum likelihood identification of multivariable linear time-invariant (LTI) state-space models. This implementation is based on a local parameterization of the system and a gradient search in the resulting

  19. Comparison of full width at half maximum and penumbra of different Gamma Knife models.

    Science.gov (United States)

    Asgari, Sepideh; Banaee, Nooshin; Nedaie, Hassan Ali

    2018-01-01

    As a radiosurgical tool, Gamma Knife has the best and most widespread name recognition. Gamma Knife is a noninvasive intracranial technique invented and developed by the Swedish neurosurgeon Lars Leksell. The first commercial Leksell Gamma Knife entered the therapeutic armamentarium at the University of Pittsburgh in the United States in August 1987. Since then, different generations of Gamma Knife have been developed. In this study, the technical points and dosimetric parameters, including full width at half maximum and penumbra, of different generations of Gamma Knife are reviewed and compared. The results of this review study show that the rotating gamma system provides better dose conformity.
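
    For orientation, this is how the two dosimetric parameters are typically read off a measured beam profile: FWHM from the 50% crossings and penumbra from the 80%-20% edge distance. The profile below is a synthetic Gaussian, not Gamma Knife data:

        # Read FWHM and penumbra off a normalized 1D dose profile.
        import numpy as np

        x = np.linspace(-10.0, 10.0, 2001)     # position (mm)
        dose = np.exp(-0.5 * (x / 3.0) ** 2)   # synthetic normalized profile

        def right_crossing(level):
            # x where the falling (right) edge crosses the given dose level;
            # dose decreases with x there, so flip arrays for np.interp.
            right = x >= 0
            return np.interp(level, dose[right][::-1], x[right][::-1])

        fwhm = 2.0 * right_crossing(0.5)                      # symmetric profile
        penumbra = right_crossing(0.2) - right_crossing(0.8)  # 80%-20% distance
        print(f"FWHM = {fwhm:.2f} mm, penumbra = {penumbra:.2f} mm")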

  20. Fixed transaction costs and modelling limited dependent variables

    NARCIS (Netherlands)

    Hempenius, A.L.

    1994-01-01

    As an alternative to the Tobit model, for vectors of limited dependent variables, I suggest a model, which follows from explicitly using fixed costs, if appropriate of course, in the utility function of the decision-maker.

  1. Deep-sea benthic megafaunal habitat suitability modelling: A global-scale maximum entropy model for xenophyophores

    Science.gov (United States)

    Ashford, Oliver S.; Davies, Andrew J.; Jones, Daniel O. B.

    2014-12-01

    Xenophyophores are a group of exclusively deep-sea agglutinating rhizarian protozoans, at least some of which are foraminifera. They are an important constituent of the deep-sea megafauna that are sometimes found in sufficient abundance to act as a significant source of habitat structure for meiofaunal and macrofaunal organisms. This study utilised maximum entropy modelling (Maxent) and a high-resolution environmental database to explore the environmental factors controlling the presence of Xenophyophorea and two frequently sampled xenophyophore species that are taxonomically stable: Syringammina fragilissima and Stannophyllum zonarium. These factors were also used to predict the global distribution of each taxon. Areas of high habitat suitability for xenophyophores were highlighted throughout the world's oceans, including in a large number of areas yet to be suitably sampled, but the Northeast and Southeast Atlantic Ocean, Gulf of Mexico and Caribbean Sea, the Red Sea and deep-water regions of the Malay Archipelago represented particular hotspots. The two species investigated showed more specific habitat requirements when compared to the model encompassing all xenophyophore records, perhaps in part due to the smaller number and relatively more clustered nature of the presence records available for modelling at present. The environmental variables depth, oxygen parameters, nitrate concentration, carbon-chemistry parameters and temperature were of greatest importance in determining xenophyophore distributions, but, somewhat surprisingly, hydrodynamic parameters were consistently shown to have low importance, possibly due to the paucity of well-resolved global hydrodynamic datasets. The results of this study (and others of a similar type) have the potential to guide further sample collection, environmental policy, and spatial planning of marine protected areas and industrial activities that impact the seafloor, particularly those that overlap with aggregations of

  2. Modeling of the Maximum Entropy Problem as an Optimal Control Problem and its Application to Pdf Estimation of Electricity Price

    Directory of Open Access Journals (Sweden)

    M. E. Haji Abadi

    2013-09-01

    Full Text Available In this paper, continuous optimal control theory is used to model and solve the maximum entropy problem for a continuous random variable. The maximum entropy principle provides a method to obtain the least-biased probability density function (Pdf) estimate. In this paper, to find a closed form solution for the maximum entropy problem with any number of moment constraints, the entropy is considered as a functional measure and the moment constraints are considered as the state equations. Therefore, the Pdf estimation problem can be reformulated as an optimal control problem. Finally, the proposed method is applied to estimate the Pdf of the hourly electricity prices of the New England and Ontario electricity markets. The results obtained show the efficiency of the proposed method.
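
    Independent of the optimal-control route taken in the paper, the maximum entropy problem with moment constraints can be sketched numerically by minimizing its convex dual for the Lagrange multipliers; the support and target moments below are illustrative, not electricity price data:

        # Maxent density p(x) = exp(lam . f(x)) / Z on a bounded support,
        # with lam found by minimizing the dual  log Z(lam) - lam . mu.
        import numpy as np
        from scipy.optimize import minimize

        x = np.linspace(0.0, 1.0, 2001)   # bounded support (rescaled prices)
        mu = np.array([0.4, 0.2])         # target moments E[x], E[x^2]
        F = np.vstack([x, x**2])          # moment functions f(x)

        def dual(lam):
            logZ = np.log(np.trapz(np.exp(lam @ F), x))
            return logZ - lam @ mu        # convex dual of the maxent problem

        res = minimize(dual, np.zeros(2), method="BFGS")
        p = np.exp(res.x @ F)
        p /= np.trapz(p, x)               # normalized maxent density
        print("fitted moments:", np.trapz(p * x, x), np.trapz(p * x**2, x))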

  3. Limit on mass differences in the Weinberg model

    NARCIS (Netherlands)

    Veltman, M.J.G.

    1977-01-01

    Within the Weinberg model mass differences between members of a multiplet generate further mass differences between the neutral and charged vector bosons. The experimental situation on the Weinberg model leads to an upper limit of about 800 GeV on mass differences within a multiplet. No limit on the

  4. It is time to abandon "expected bladder capacity." Systematic review and new models for children's normal maximum voided volumes.

    Science.gov (United States)

    Martínez-García, Roberto; Ubeda-Sansano, Maria Isabel; Díez-Domingo, Javier; Pérez-Hoyos, Santiago; Gil-Salom, Manuel

    2014-09-01

    There is an agreement to use simple formulae (expected bladder capacity and other age-based linear formulae) as bladder capacity benchmarks, but the real normal bladder capacity in children is unknown. The aims were to offer a systematic review of children's normal bladder capacity, to measure children's normal maximum voided volumes (MVVs), to construct models of MVVs and to compare them with the usual formulae. Computerized, manual and grey literature were reviewed until February 2013. Epidemiological, observational, transversal, multicenter study. A consecutive sample of healthy children aged 5-14 years, attending Primary Care centres with no urologic abnormality, was selected. Participants filled in a 3-day frequency-volume chart. Variables were the MVVs (maximum of 24 hr, nocturnal, and daytime maximum voided volumes); diuresis and its daytime and nighttime fractions; body-measure data; and gender. The consecutive steps method was used in a multivariate regression model. Twelve articles met the systematic review criteria. Five hundred and fourteen cases were analysed. Three models, one for each of the MVVs, were built. All of them were better adjusted to exponential equations. Diuresis (not age) was the most significant factor. There was poor agreement between MVVs and the usual formulae. Nocturnal and daytime maximum voided volumes depend on several factors, are different, and should be used with different meanings in the clinical setting. Diuresis is the main factor for bladder capacity. This is the first model benchmarking normal MVVs with diuresis as the main factor. Current formulae are not suitable for clinical use. © 2013 Wiley Periodicals, Inc.

  5. Maximum entropy perception-action space: a Bayesian model of eye movement selection

    OpenAIRE

    Colas , Francis; Bessière , Pierre; Girard , Benoît

    2010-01-01

    In this article, we investigate the issue of the selection of eye movements in a free-eye Multiple Object Tracking task. We propose a Bayesian model of retinotopic maps with a complex logarithmic mapping. This model is structured in two parts: a representation of the visual scene, and a decision model based on the representation. We compare different decision models based on different features of the representation and we show that taking into account uncertainty helps...

  6. Theoretical evaluation of maximum electric field approximation of direct band-to-band tunneling Kane model for low bandgap semiconductors

    Science.gov (United States)

    Dang Chien, Nguyen; Shih, Chun-Hsing; Hoa, Phu Chi; Minh, Nguyen Hong; Thi Thanh Hien, Duong; Nhung, Le Hong

    2016-06-01

    The two-band Kane model has been popularly used to calculate the band-to-band tunneling (BTBT) current in the tunnel field-effect transistor (TFET), which is currently considered a promising candidate for low power applications. This study theoretically clarifies the maximum electric field approximation (MEFA) of the direct BTBT Kane model and evaluates its appropriateness for low bandgap semiconductors. By analysing the physical origin of each electric field term in the Kane model, it is elucidated that in the MEFA the local electric field term must be retained while the nonlocal electric field terms are assigned the maximum value of the electric field at the tunnel junction. Mathematical investigations have shown that the MEFA is more appropriate for low bandgap semiconductors than for high bandgap materials because of the enhanced tunneling probability in low field regions. The appropriateness of the MEFA is very useful in practice for quickly estimating the direct BTBT current in low bandgap TFET devices.
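
    For reference, a commonly quoted form of the two-band Kane generation rate for direct BTBT is sketched below (A and B are material-dependent prefactors, F the electric field, E_g the bandgap); this is an illustrative rendering of the standard textbook form, not necessarily the paper's exact notation. In the MEFA as described above, the nonlocal field terms are evaluated at the maximum junction field while the local term keeps its position dependence:

        % Commonly quoted two-band Kane generation rate for direct BTBT
        % (A, B: material parameters; F: electric field; E_g: bandgap)
        G_{\mathrm{BTBT}} = A \, \frac{F^{2}}{\sqrt{E_g}} \,
            \exp\!\left( - \frac{B \, E_g^{3/2}}{F} \right)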

  7. Atterberg Limits Prediction Comparing SVM with ANFIS Model

    Directory of Open Access Journals (Sweden)

    Mohammad Murtaza Sherzoy

    2017-03-01

    Full Text Available Support Vector Machine (SVM) and Adaptive Neuro-Fuzzy Inference System (ANFIS) analytical methods are both used to predict the values of the Atterberg limits: the liquid limit, plastic limit and plasticity index. The main objective of this study is to compare the forecasts of the two methods (SVM & ANFIS). Data from 54 soil samples, taken from the area of Peninsular Malaysia and tested for liquid limit, plastic limit, plasticity index and grain size distribution, are used. The input parameters are the grain size distribution fractions: the percentages of silt, clay and sand. The actual values and the values of the Atterberg limits predicted by the SVM and ANFIS models are compared using the correlation coefficient R2 and the root mean squared error (RMSE). The outcome of the study shows that the ANFIS model achieves higher accuracy than the SVM model for the liquid limit (R2 = 0.987), plastic limit (R2 = 0.949) and plasticity index (R2 = 0.966). The RMSE values obtained for both methods also show that the ANFIS model performs better than the SVM model in predicting the Atterberg limits as a whole.
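
    A sketch of the SVM side of such a comparison using scikit-learn's SVR, scored with R2 and RMSE; the soil data below are synthetic placeholders, not the 54 Malaysian samples:

        # Support vector regression: liquid limit from grain-size fractions.
        import numpy as np
        from sklearn.svm import SVR
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import r2_score, mean_squared_error

        rng = np.random.default_rng(0)
        X = rng.uniform(0, 100, size=(54, 3))          # % sand, silt, clay
        y = 20 + 0.4 * X[:, 2] + rng.normal(0, 2, 54)  # rises with clay content

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)
        model = SVR(kernel="rbf", C=10.0).fit(X_tr, y_tr)
        pred = model.predict(X_te)
        print("R2   =", round(r2_score(y_te, pred), 3))
        print("RMSE =", round(mean_squared_error(y_te, pred) ** 0.5, 3))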

  8. Habitat modelling limitations - Puck Bay, Baltic Sea - a case study

    Directory of Open Access Journals (Sweden)

    Jan Marcin Węsławski

    2013-02-01

    Full Text Available The Natura 2000 sites and the Coastal Landscape Park in a shallow marine bay in the southern Baltic have been studied in detail for the distribution of benthic macroorganisms, species assemblages and seabed habitats. The relatively small Inner Puck Bay (104.8 km2) is one of the most thoroughly investigated marine areas in the Baltic: research has been carried out there continuously for over 50 years. Six physical parameters regarded as critically important for the marine benthos (depth, minimal temperature, maximum salinity, light, wave intensity and sediment type) were summarized on a GIS map showing unified patches of seabed and the near-bottom water conditions. The occurrence of uniform seabed forms is weakly correlated with the distributions of individual species or multi-species assemblages. This is partly explained by the characteristics of the local macrofauna, which is dominated by highly tolerant, eurytopic species with opportunistic strategies. The history and timing of the assemblage formation also explains this weak correlation. The distribution of assemblages formed by long-living, structural species (Zostera marina and other higher plants) shows the history of recovery following earlier disturbances. In the study area, these communities are still in the stage of recovery and recolonization, and their present distribution does not as yet match the distribution of the physical environmental conditions favourable to them. Our results show up the limitations of distribution modelling in coastal waters, where the history of anthropogenic disturbances can distort the picture of the present-day environmental control of biota distributions.

  9. Scaling limit for the Dereziński-Gérard model

    OpenAIRE

    OHKUBO, Atsushi

    2010-01-01

    We consider a scaling limit for the Dereziński-Gérard model. We derive an effective potential by taking a scaling limit for the total Hamiltonian of the Dereziński-Gérard model. Our method to derive an effective potential is independent of whether or not the quantum field has a nonnegative mass. As an application of the theory developed in the present paper, we derive an effective potential of the Nelson model.

  10. Exact sampling from conditional Boolean models with applications to maximum likelihood inference

    NARCIS (Netherlands)

    Lieshout, van M.N.M.; Zwet, van E.W.

    2001-01-01

    We are interested in estimating the intensity parameter of a Boolean model of discs (the bombing model) from a single realization. To do so, we derive the conditional distribution of the points (germs) of the underlying Poisson process. We demonstrate how to apply coupling from the past to generate

  11. The early maximum likelihood estimation model of audiovisual integration in speech perception

    DEFF Research Database (Denmark)

    Andersen, Tobias

    2015-01-01

    Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk−MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely […] integration to speech perception along with three model variations. In early MLE, integration is based on a continuous internal representation before categorization, which can make the model more parsimonious by imposing constraints that reflect experimental designs. The study also shows that cross-validation can evaluate models of audiovisual integration based on typical data sets taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE while more conventional error measures...

  12. Bayesian modeling of the assimilative capacity component of nutrient total maximum daily loads

    Science.gov (United States)

    Faulkner, B. R.

    2008-08-01

    Implementing stream restoration techniques and best management practices to reduce nonpoint source nutrients implies enhancement of the assimilative capacity for the stream system. In this paper, a Bayesian method for evaluating this component of a total maximum daily load (TMDL) load capacity is developed and applied. The joint distribution of nutrient retention metrics from a literature review of 495 measurements was used for Monte Carlo sampling with a process transfer function for nutrient attenuation. Using the resulting histograms of nutrient retention, reference prior distributions were developed for sites in which some of the metrics contributing to the transfer function were measured. Contributing metrics for the prior include stream discharge, cross-sectional area, fraction of storage volume to free stream volume, denitrification rate constant, storage zone mass transfer rate, dispersion coefficient, and others. Confidence of compliance (CC) that any given level of nutrient retention has been achieved is also determined using this approach. The shape of the CC curve is dependent on the metrics measured and serves in part as a measure of the information provided by the metrics to predict nutrient retention. It is also a direct measurement, with a margin of safety, of the fraction of export load that can be reduced through changing retention metrics. For an impaired stream in western Oklahoma, a combination of prior information and measurement of nutrient attenuation was used to illustrate the proposed approach. This method may be considered for TMDL implementation.
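
    A sketch of the Monte Carlo step behind such a confidence-of-compliance curve; the lognormal metric priors and the first-order attenuation transfer function are assumptions for illustration, not the joint distribution fitted from the 495 literature measurements:

        # Propagate uncertain retention metrics through a transfer function
        # and read off the probability that a retention target is met.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000
        k = rng.lognormal(mean=-2.0, sigma=0.8, size=n)    # attenuation rate (1/h)
        tau = rng.lognormal(mean=1.0, sigma=0.5, size=n)   # residence time (h)

        retention = 1.0 - np.exp(-k * tau)  # fraction of nutrient load retained

        target = 0.20                       # e.g. 20% retention of export load
        cc = (retention >= target).mean()   # confidence of compliance
        print(f"P(retention >= {target:.0%}) = {cc:.2f}")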

  13. Evaluation of the Charm maximum residue limit β-lactam and tetracycline test for the detection of antibiotics in ewe and goat milk.

    Science.gov (United States)

    Beltrán, M C; Romero, T; Althaus, R L; Molina, M P

    2013-05-01

    The Charm maximum residue limit β-lactam and tetracycline test (Charm MRL BLTET; Charm Sciences Inc., Lawrence, MA) is an immunoreceptor assay utilizing Rapid One-Step Assay lateral flow technology that detects β-lactam or tetracycline drugs in raw commingled cow milk at or below European Union maximum residue levels (EU-MRL). The Charm MRL BLTET test procedure was recently modified (dilution in buffer and longer incubation) by the manufacturers for use with raw ewe and goat milk. To assess the Charm MRL BLTET test for the detection of β-lactams and tetracyclines in milk of small ruminants, an evaluation study was performed at the Instituto de Ciencia y Tecnologia Animal of the Universitat Politècnica de València (Spain). The test specificity and detection capability (CCβ) were studied following Commission Decision 2002/657/EC. Specificity results obtained in this study were optimal for individual milk free of antimicrobials from ewes (99.2% for β-lactams and 100% for tetracyclines) and goats (97.9% for β-lactams and 100% for tetracyclines) along the entire lactation period, regardless of whether the results were visually or instrumentally interpreted. Moreover, no positive results were obtained when relatively high concentrations of different substances belonging to antimicrobial families other than β-lactams and tetracyclines were present in ewe and goat milk. For both types of milk, the CCβ calculated was lower than or equal to the EU-MRL for amoxicillin (4 µg/kg), ampicillin (4 µg/kg), benzylpenicillin (≤ 2 µg/kg), dicloxacillin (30 µg/kg), oxacillin (30 µg/kg), cefacetrile (≤ 63 µg/kg), cefalonium (≤ 10 µg/kg), cefapirin (≤ 30 µg/kg), desacetylcefapirin (≤ 30 µg/kg), cefazolin (≤ 25 µg/kg), cefoperazone (≤ 25 µg/kg), cefquinome (20 µg/kg), ceftiofur (≤ 50 µg/kg), desfuroylceftiofur (≤ 50 µg/kg), and cephalexin (≤ 50 µg/kg). However, this test could detect neither cloxacillin nor nafcillin at or below the EU-MRL (CCβ > 30 µg/kg). The

  14. Revisiting maximum-a-posteriori estimation in log-concave models: from differential geometry to decision theory

    OpenAIRE

    Pereyra, Marcelo

    2016-01-01

    Maximum-a-posteriori (MAP) estimation is the main Bayesian estimation methodology in many areas of data science such as mathematical imaging and machine learning, where high dimensionality is addressed by using models that are log-concave and whose posterior mode can be computed efficiently by using convex optimisation algorithms. However, despite its success and rapid adoption, MAP estimation is not theoretically well understood yet, and the prevalent view is that it is generally not proper ...

  15. Maximum likelihood estimation of signal detection model parameters for the assessment of two-stage diagnostic strategies.

    Science.gov (United States)

    Lirio, R B; Dondériz, I C; Pérez Abalo, M C

    1992-08-01

    The methodology of Receiver Operating Characteristic curves based on the signal detection model is extended to evaluate the accuracy of two-stage diagnostic strategies. A computer program is developed for the maximum likelihood estimation of parameters that characterize the sensitivity and specificity of two-stage classifiers according to this extended methodology. Its use is briefly illustrated with data collected in a two-stage screening for auditory defects.

  16. Countercurrent flow limitation model for RELAP5/MOD3

    International Nuclear Information System (INIS)

    Riemke, R.A.

    1991-01-01

    This paper reports on a countercurrent flow limitation model incorporated into the RELAP5/MOD3 system transient analysis code. The model is implemented in a manner similar to the RELAP5 choking model. Simulations of an air/water flooding test problem demonstrate that the code's agreement with data improves significantly when a flooding correlation is used

  17. Modelling of density limit phenomena in toroidal helical plasmas

    International Nuclear Information System (INIS)

    Itoh, Kimitaka; Itoh, Sanae-I.

    2001-01-01

    The physics of density limit phenomena in toroidal helical plasmas based on an analytic point model of toroidal plasmas is discussed. The combined mechanism of the transport and radiation loss of energy is analyzed, and the achievable density is derived. A scaling law of the density limit is discussed. The dependence of the critical density on the heating power, magnetic field, plasma size and safety factor in the case of L-mode energy confinement is explained. The dynamic evolution of the plasma energy and radiation loss is discussed. Assuming a simple model of density evolution, with a sudden loss of density if the temperature falls below a critical value, a limit cycle oscillation is shown to occur. A condition that divides the limit cycle oscillation and the complete radiation collapse is discussed. This model seems to explain the density limit oscillation that has been observed on the Wendelstein 7-AS (W7-AS) stellarator. (author)

  18. Modelling of density limit phenomena in toroidal helical plasmas

    International Nuclear Information System (INIS)

    Itoh, K.; Itoh, S.-I.

    2000-03-01

    The physics of density limit phenomena in toroidal helical plasmas based on an analytic point model of toroidal plasmas is discussed. The combined mechanism of the transport and radiation loss of energy is analyzed, and the achievable density is derived. A scaling law of the density limit is discussed. The dependence of the critical density on the heating power, magnetic field, plasma size and safety factor in the case of L-mode energy confinement is explained. The dynamic evolution of the plasma energy and radiation loss is discussed. Assuming a simple model of density evolution, with a sudden loss of density if the temperature falls below a critical value, a limit cycle oscillation is shown to occur. A condition that divides the limit cycle oscillation and the complete radiation collapse is discussed. This model seems to explain the density limit oscillation that has been observed on the W7-AS stellarator. (author)

  19. A predictive model for the tokamak density limit

    International Nuclear Information System (INIS)

    Teng, Q.; Brennan, D. P.; Delgado-Aparicio, L.; Gates, D. A.; Swerdlow, J.; White, R. B.

    2016-01-01

    We reproduce the Greenwald density limit in all tokamak experiments by using a phenomenologically correct model with parameters in the range of experiments. A simple model of equilibrium evolution and local power balance inside the island has been implemented to calculate the radiation-driven thermo-resistive tearing mode growth and explain the density limit. Strong destabilization of the tearing mode, due to an imbalance of local Ohmic heating and radiative cooling in the island, predicts the density limit within a few percent. Furthermore, we find that the density limit is a local edge limit, weakly dependent on impurity densities. Our results are robust to a substantial variation in model parameters within the range of experiments.

  20. Maximum likelihood estimation and EM algorithm of Copas-like selection model for publication bias correction.

    Science.gov (United States)

    Ning, Jing; Chen, Yong; Piao, Jin

    2017-07-01

    Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem encountered when maximizing the observed likelihood. © The Author 2017. Published by Oxford University Press. All rights reserved.

  1. Maximum profile likelihood estimation of differential equation parameters through model based smoothing state estimates.

    Science.gov (United States)

    Campbell, D A; Chkrebtii, O

    2013-12-01

    Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.

  2. A maximum entropy model for predicting wild boar distribution in Spain

    Directory of Open Access Journals (Sweden)

    Jaime Bosch

    2014-09-01

    Full Text Available Wild boar (Sus scrofa) populations in many areas of the Palearctic, including the Iberian Peninsula, have grown continuously over the last century. This increase has led to numerous conflicts due to the damage these mammals can cause to agriculture, the problems they create for the conservation of natural areas, and the threat they pose to animal health. In the context of both wildlife management and the design of health programs for disease control, it is essential to know how wild boar are distributed on a large spatial scale. Given that quantifying the distribution of wild species using census techniques is virtually impossible in large-scale studies, modeling techniques have to be used instead to estimate animals’ distributions, densities, and abundances. In this study, the potential distribution of wild boar in Spain was predicted by integrating presence data and environmental variables into a MaxEnt approach. We built and tested models using 100 bootstrapped replicates. For each replicate or simulation, presence data were divided into two subsets that were used for model fitting (60% of the data) and cross-validation (40% of the data). The final model was found to be accurate, with an area under the receiver operating characteristic curve (AUC) value of 0.79. Six explanatory variables for predicting wild boar distribution were identified on the basis of the percentage of their contribution to the model. The model exhibited a high degree of predictive accuracy, which has been confirmed by its agreement with satellite images and field surveys.
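
    A sketch of the evaluation loop described above (100 bootstrap replicates, roughly 60/40 fit/validation splits, AUC scoring), with logistic regression on synthetic presence/background data standing in for MaxEnt:

        # Presence/background habitat model with bootstrap AUC evaluation.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(42)
        pres = rng.normal(1.0, 1.0, size=(300, 6))   # covariates at presences
        back = rng.normal(0.0, 1.0, size=(3000, 6))  # covariates at background
        X = np.vstack([pres, back])
        y = np.r_[np.ones(len(pres)), np.zeros(len(back))]

        aucs = []
        for _ in range(100):
            fit_idx = rng.random(len(X)) < 0.6       # ~60% for model fitting
            clf = LogisticRegression(max_iter=1000).fit(X[fit_idx], y[fit_idx])
            score = clf.predict_proba(X[~fit_idx])[:, 1]
            aucs.append(roc_auc_score(y[~fit_idx], score))
        print(f"mean AUC = {np.mean(aucs):.2f} +/- {np.std(aucs):.2f}")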

  3. Maximum entropy modeling of invasive plants in the forests of Cumberland Plateau and Mountain Region

    Science.gov (United States)

    Dawn Lemke; Philip Hulme; Jennifer Brown; Wubishet. Tadesse

    2011-01-01

    As anthropogenic influences on the landscape change the composition of 'natural' areas, it is important that we apply spatial technology in active management to mitigate human impact. This research explores the integration of geographic information systems (GIS) and remote sensing with statistical analysis to assist in modeling the distribution of invasive...

  4. Maximum likelihood Bayesian averaging of airflow models in unsaturated fractured tuff using Occam and variance windows

    NARCIS (Netherlands)

    Morales-Casique, E.; Neuman, S.P.; Vesselinov, V.V.

    2010-01-01

    We use log permeability and porosity data obtained from single-hole pneumatic packer tests in six boreholes drilled into unsaturated fractured tuff near Superior, Arizona, to postulate, calibrate and compare five alternative variogram models (exponential, exponential with linear drift, power,
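
    The record above is truncated, but the posterior model weighting at the heart of maximum likelihood Bayesian model averaging can be sketched generically: weights proportional to the prior times exp(-ΔIC/2), optionally restricted to an Occam window. The criterion values below are invented for illustration:

        # Posterior model weights from an information criterion (e.g. KIC/BIC),
        # with and without an Occam-window truncation.
        import numpy as np

        kic = np.array([210.3, 212.1, 215.8, 224.0, 230.5])  # one per model
        prior = np.full(kic.size, 1.0 / kic.size)            # equal priors

        delta = kic - kic.min()
        w = prior * np.exp(-0.5 * delta)
        w /= w.sum()                                         # posterior weights

        occam = delta <= 6.0            # keep models within a window of ~6
        w_occam = np.where(occam, w, 0.0)
        w_occam /= w_occam.sum()
        print(np.round(w, 3), np.round(w_occam, 3))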

  5. Estimation of stochastic frontier models with fixed-effects through Monte Carlo Maximum Likelihood

    NARCIS (Netherlands)

    Emvalomatis, G.; Stefanou, S.E.; Oude Lansink, A.G.J.M.

    2011-01-01

    Estimation of nonlinear fixed-effects models is plagued by the incidental parameters problem. This paper proposes a procedure for choosing appropriate densities for integrating the incidental parameters from the likelihood function in a general context. The densities are based on priors that are

  6. Model studies of limitation of carbon dioxide emissions reduction

    International Nuclear Information System (INIS)

    1992-01-01

    The report consists of two papers concerning mitigation of CO2 emissions in Sweden, ''Limitation of carbon dioxide emissions. Socio-economic effects and the importance of international coordination'', and ''Model calculations for Sweden's energy system with carbon dioxide limitations''. Separate abstracts were prepared for both of the papers

  7. Power system generation expansion planning using the maximum principle and analytical production cost model

    International Nuclear Information System (INIS)

    Lee, K.Y.; Park, Y.M.

    1991-01-01

    Historically, electric utility demand in most countries has increased rapidly, doubling approximately every 10 years in the case of developing countries. In order to meet this growth in demand, the planners of expansion policies were concerned with obtaining expansion plans which dictate what new generation facilities to add and when to add them. However, the practical planning problem is extremely difficult and complex, and required many hours of the planner's time even though the alternatives examined were extremely limited. In this connection, increased motivation for more sophisticated techniques of evaluating utility expansion policies has developed during the past decade. Among them, long-range generation expansion planning aims to select the most economical and reliable generation expansion plans in order to meet future power demand over a long period of time subject to a multitude of technical, economical, and social constraints

  8. Climatic impacts of fresh water hosing under Last Glacial Maximum conditions: a multi-model study

    Directory of Open Access Journals (Sweden)

    M. Kageyama

    2013-04-01

    Full Text Available Fresh water hosing simulations, in which a fresh water flux is imposed in the North Atlantic to force fluctuations of the Atlantic Meridional Overturning Circulation, have been routinely performed, first to study the climatic signature of different states of this circulation, then, under present or future conditions, to investigate the potential impact of a partial melting of the Greenland ice sheet. The most compelling examples of climatic changes potentially related to AMOC abrupt variations, however, are found in high resolution palaeo-records from around the globe for the last glacial period. To study those more specifically, more and more fresh water hosing experiments have been performed under glacial conditions in the recent years. Here we compare an ensemble constituted by 11 such simulations run with 6 different climate models. All simulations follow a slightly different design, but are sufficiently close in their design to be compared. They all study the impact of a fresh water hosing imposed in the extra-tropical North Atlantic. Common features in the model responses to hosing are the cooling over the North Atlantic, extending along the sub-tropical gyre in the tropical North Atlantic, the southward shift of the Atlantic ITCZ and the weakening of the African and Indian monsoons. On the other hand, the expression of the bipolar see-saw, i.e., warming in the Southern Hemisphere, differs from model to model, with some restricting it to the South Atlantic and specific regions of the southern ocean while others simulate a widespread southern ocean warming. The relationships between the features common to most models, i.e., climate changes over the north and tropical Atlantic, African and Asian monsoon regions, are further quantified. These suggest a tight correlation between the temperature and precipitation changes over the extra-tropical North Atlantic, but different pathways for the teleconnections between the AMOC/North Atlantic region

  9. Use of queue modelling in the analysis of elective patient treatment governed by a maximum waiting time policy

    DEFF Research Database (Denmark)

    Kozlowski, Dawid; Worthington, Dave

    2015-01-01

    This paper illustrates the use of a queue modelling approach in the analysis of elective patient treatment governed by the maximum waiting time policy. Drawing upon the combined strengths of analytic and simulation approaches, we develop both continuous-time Markov chain and discrete event simulation models to provide an insightful analysis of public hospital performance under the policy rules. The aim of this paper is to support the enhancement of the quality of elective patient care, to be brought about by a better understanding of the policy implications on the utilization of public hospital resources.
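
    On the analytic side, a well-known closed form for one building block of such a model: in an M/M/s queue the probability that a patient waits beyond a guarantee t is the Erlang-C probability times an exponential tail. This is a generic queueing sketch with invented parameters, not the paper's calibrated model:

        # P(wait > t) = ErlangC * exp(-(s*mu - lambda) * t) for an M/M/s queue.
        import math

        lam, mu, s = 8.0, 1.0, 10          # arrival rate, service rate, servers
        a, rho = lam / mu, lam / (s * mu)  # offered load, utilization (rho < 1)

        p0_terms = sum(a**k / math.factorial(k) for k in range(s))
        tail = a**s / (math.factorial(s) * (1 - rho))
        erlang_c = tail / (p0_terms + tail)      # P(arrival must wait at all)

        t_max = 0.5                              # waiting-time guarantee (weeks)
        p_breach = erlang_c * math.exp(-(s * mu - lam) * t_max)
        print(f"P(wait > {t_max} weeks) = {p_breach:.3f}")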

  10. Modeling and operation optimization of a proton exchange membrane fuel cell system for maximum efficiency

    International Nuclear Information System (INIS)

    Han, In-Su; Park, Sang-Kyun; Chung, Chang-Bock

    2016-01-01

    Highlights: • A proton exchange membrane fuel cell system is operationally optimized. • A constrained optimization problem is formulated to maximize fuel cell efficiency. • Empirical and semi-empirical models for most system components are developed. • Sensitivity analysis is performed to elucidate the effects of major operating variables. • The optimization results are verified by comparison with actual operation data. - Abstract: This paper presents an operation optimization method and demonstrates its application to a proton exchange membrane fuel cell system. A constrained optimization problem was formulated to maximize the efficiency of a fuel cell system by incorporating practical models derived from actual operations of the system. Empirical and semi-empirical models for most of the system components were developed based on artificial neural networks and semi-empirical equations. Prior to system optimizations, the developed models were validated by comparing simulation results with the measured ones. Moreover, sensitivity analyses were performed to elucidate the effects of major operating variables on the system efficiency under practical operating constraints. Then, the optimal operating conditions were sought at various system power loads. The optimization results revealed that the efficiency gaps between the worst and best operation conditions of the system could reach 1.2–5.5% depending on the power output range. To verify the optimization results, the optimal operating conditions were applied to the fuel cell system, and the measured results were compared with the expected optimal values. The discrepancies between the measured and expected values were found to be trivial, indicating that the proposed operation optimization method was quite successful for a substantial increase in the efficiency of the fuel cell system.
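
    A minimal sketch of the optimization formulation described above, with a toy quadratic efficiency surrogate standing in for the paper's neural-network and semi-empirical component models; the variables, bounds and constraint are illustrative assumptions:

        # Maximize system efficiency over operating variables at a fixed load,
        # subject to bounds and an operating constraint.
        from scipy.optimize import minimize

        def efficiency(v):
            T, p = v                      # stack temperature (C), air pressure (bar)
            return 0.50 - 0.0004 * (T - 65.0) ** 2 - 0.02 * (p - 1.5) ** 2

        res = minimize(lambda v: -efficiency(v),   # maximize = minimize negative
                       x0=[60.0, 1.2],
                       bounds=[(50.0, 80.0), (1.0, 2.5)],
                       constraints=[{"type": "ineq",            # keep pressure
                                     "fun": lambda v: 2.0 - v[1]}])  # <= 2 bar
        print("optimum:", res.x.round(3), "efficiency:", round(-res.fun, 4))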

  11. Atlantic Ocean Circulation at the Last Glacial Maximum: Inferences from Data and Models

    Science.gov (United States)

    2012-09-01

    Mean dynamic topography (MDT) is the difference between the time-averaged sea surface and an equipotential surface of Earth's gravity field. […] 35°S to 75°N at 1° resolution. Six LGM proxy types are used to constrain the model: four compilations of near sea surface temperatures from the MARGO project […]

  12. Flux-limited diffusion models in radiation hydrodynamics

    International Nuclear Information System (INIS)

    Pomraning, G.C.; Szilard, R.H.

    1993-01-01

    The authors discuss certain flux-limited diffusion theories which approximately describe radiative transfer in the presence of steep spatial gradients. A new formulation is presented which generalizes a flux-limited description currently in widespread use for large radiation hydrodynamic calculations. This new formulation allows more than one Case discrete mode to be described by a flux-limited diffusion equation; such behavior is not captured by existing formulations. Numerical results predicted by these flux-limited diffusion models are presented for radiation penetration into an initially cold halfspace. 37 refs., 5 figs

  13. Human Brain Networks: Spiking Neuron Models, Multistability, Synchronization, Thermodynamics, Maximum Entropy Production, and Anesthetic Cascade Mechanisms

    Directory of Open Access Journals (Sweden)

    Wassim M. Haddad

    2014-07-01

    Full Text Available Advances in neuroscience have been closely linked to mathematical modeling beginning with the integrate-and-fire model of Lapicque and proceeding through the modeling of the action potential by Hodgkin and Huxley to the current era. The fundamental building block of the central nervous system, the neuron, may be thought of as a dynamic element that is “excitable”, and can generate a pulse or spike whenever the electrochemical potential across the cell membrane of the neuron exceeds a threshold. A key application of nonlinear dynamical systems theory to the neurosciences is to study phenomena of the central nervous system that exhibit nearly discontinuous transitions between macroscopic states. A very challenging and clinically important problem exhibiting this phenomenon is the induction of general anesthesia. In any specific patient, the transition from consciousness to unconsciousness as the concentration of anesthetic drugs increases is very sharp, resembling a thermodynamic phase transition. This paper focuses on multistability theory for continuous and discontinuous dynamical systems having a set of multiple isolated equilibria and/or a continuum of equilibria. Multistability is the property whereby the solutions of a dynamical system can alternate between two or more mutually exclusive Lyapunov stable and convergent equilibrium states under asymptotically slowly changing inputs or system parameters. In this paper, we extend the theory of multistability to continuous, discontinuous, and stochastic nonlinear dynamical systems. In particular, Lyapunov-based tests for multistability and synchronization of dynamical systems with continuously differentiable and absolutely continuous flows are established. The results are then applied to excitatory and inhibitory biological neuronal networks to explain the underlying mechanism of action for anesthesia and consciousness from a multistable dynamical system perspective, thereby providing a

  14. Maximum-Entropy Models of Sequenced Immune Repertoires Predict Antigen-Antibody Affinity

    DEFF Research Database (Denmark)

    Asti, Lorenzo; Uguzzoni, Guido; Marcatili, Paolo

    2016-01-01

    The immune system has developed a number of distinct complex mechanisms to shape and control the antibody repertoire. One of these mechanisms, the affinity maturation process, works in an evolutionary-like fashion: after binding to a foreign molecule, the antibody-producing B-cells exhibit a high […] of an HIV-1 infected patient. The Pearson correlation coefficient between our scoring function and the IC50 neutralization titer measured on 30 different antibodies of known sequence is as high as 0.77 (p-value 10⁻⁶), outperforming other sequence- and structure-based models.

  15. Weak diffusion limits of dynamic conditional correlation models

    DEFF Research Database (Denmark)

    Hafner, Christian M.; Laurent, Sebastien; Violante, Francesco

    The properties of dynamic conditional correlation (DCC) models are still not entirely understood. This paper fills one of the gaps by deriving weak diffusion limits of a modified version of the classical DCC model. The limiting system of stochastic differential equations is characterized by a diffusion matrix of reduced rank. The degeneracy is due to perfect collinearity between the innovations of the volatility and correlation dynamics. For the special case of constant conditional correlations, a non-degenerate diffusion limit can be obtained. Alternative sets of conditions are considered
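
    For reference, a sketch of the classical DCC(1,1) recursion (Engle, 2002), the model whose modified version the diffusion limits are derived for; the notation is the standard one, not necessarily the paper's:

        % Pseudo-correlation recursion and its rescaling to a correlation matrix
        Q_t = (1 - \alpha - \beta)\,\bar{Q}
              + \alpha\,\varepsilon_{t-1}\varepsilon_{t-1}^{\top}
              + \beta\,Q_{t-1},
        \qquad
        R_t = \operatorname{diag}(Q_t)^{-1/2}\, Q_t \,\operatorname{diag}(Q_t)^{-1/2}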

  16. Model of analysis of maximum loads in wind generators produced by extreme winds

    International Nuclear Information System (INIS)

    Herrera-Sánchez, Omar; Schellong, Wolfgang; González-Fernández, Vladimir

    2010-01-01

    For some years, the use of wind energy by means of wind turbines in areas with a high risk of hurricanes has been an important challenge for wind farm designers worldwide. Wind generators are not usually designed to withstand this type of phenomenon, so areas with a high incidence of tropical hurricanes are excluded from planning, which in some cases entirely prevents the use of this renewable energy source, either because the country is very small or because the area of greatest potential fully coincides with the high-risk area. To counteract this situation, a model for the analysis of maximum loads produced by extreme winds on large wind turbines has been developed. This model has the advantage of determining, at a site chosen for the installation of a wind farm, the micro-areas with a higher risk of wind loads above those acceptable for the standard wind turbine classes. (author)

  17. Capacity Prediction Model Based on Limited Priority Gap-Acceptance Theory at Multilane Roundabouts

    Directory of Open Access Journals (Sweden)

    Zhaowei Qu

    2014-01-01

    Full Text Available Capacity is an important design parameter for roundabouts, and it is the premise for computing their delay and queue lengths. Roundabout capacity has been studied for decades, and empirical regression models and gap-acceptance models are the two main methods used to predict it. Based on gap-acceptance theory, and considering the effect of limited priority, especially the relationship between the limited priority factor and the critical gap, a modified model was built to predict roundabout capacity. We then compare the results of Raff’s method and the maximum likelihood estimation (MLE) method, and the MLE method was used to estimate the critical gaps. Finally, the capacities predicted by the different models were compared with the capacity observed in field surveys, which verifies the performance of the proposed model.
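
    A sketch of a Harders-type gap-acceptance capacity curve with a multiplicative limited-priority correction applied; the critical gap, follow-up time and the 0.9 factor are illustrative, not the calibrated values from the field surveys:

        # Entry capacity (veh/h) vs circulating flow under gap-acceptance theory.
        import math

        def entry_capacity(q_c, t_c=4.1, t_f=2.9, limited_priority=1.0):
            """q_c: circulating flow (veh/h); t_c: critical gap (s);
            t_f: follow-up time (s)."""
            lam = q_c / 3600.0                  # circulating flow in veh/s
            cap = 3600.0 * lam * math.exp(-lam * t_c) \
                  / (1.0 - math.exp(-lam * t_f))
            return limited_priority * cap

        for q in (300, 600, 900, 1200):
            print(q, round(entry_capacity(q)),
                  round(entry_capacity(q, limited_priority=0.9)))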

  18. Performance analysis of the lineal model for estimating the maximum power of a HCPV module in different climate conditions

    Science.gov (United States)

    Fernández, Eduardo F.; Almonacid, Florencia; Sarmah, Nabin; Mallick, Tapas; Sanchez, Iñigo; Cuadra, Juan M.; Soria-Moya, Alberto; Pérez-Higueras, Pedro

    2014-09-01

    A model based on easily obtained atmospheric parameters and on a simple linear mathematical expression has been developed at the Centre of Advanced Studies in Energy and Environment in southern Spain. The model predicts the maximum power of an HCPV module as a function of direct normal irradiance, air temperature and air mass. So far, the proposed model has only been validated in southern Spain, and its performance in locations with different atmospheric conditions remains unknown. In order to address this issue, several HCPV modules have been measured in two locations with climate conditions different from those of southern Spain: the Environment and Sustainability Institute in southern UK and the National Renewable Energy Center in northern Spain. Results show that the model gives an adequate match between actual and estimated data, with an RMSE lower than 3.9%, at locations with different climate conditions.
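
    A sketch of fitting such a linear maximum-power model by ordinary least squares; the functional form below (DNI, DNI·Tair and DNI·AM regressors) and all data are assumptions for illustration, not the published model or measurements:

        # Least-squares fit of P_max = a*DNI + b*DNI*T_air + c*DNI*AM + d.
        import numpy as np

        rng = np.random.default_rng(7)
        dni = rng.uniform(400, 1000, 200)     # W/m^2
        t_air = rng.uniform(5, 35, 200)       # deg C
        am = rng.uniform(1.0, 3.0, 200)       # air mass
        p_max = (0.22 * dni - 0.0008 * dni * t_air - 0.01 * dni * am
                 + 5.0 + rng.normal(0, 2, 200))   # synthetic module power (W)

        X = np.column_stack([dni, dni * t_air, dni * am, np.ones_like(dni)])
        coef, *_ = np.linalg.lstsq(X, p_max, rcond=None)
        resid = p_max - X @ coef
        rmse_pct = 100 * np.sqrt(np.mean(resid**2)) / p_max.mean()
        print("coefficients:", coef.round(5), f"RMSE = {rmse_pct:.1f}%")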

  19. Assessing suitable area for Acacia dealbata Mill. in the Ceira River Basin (Central Portugal based on maximum entropy modelling approach

    Directory of Open Access Journals (Sweden)

    Jorge Pereira

    2015-12-01

    Full Text Available Biological invasion by exotic organisms has become a key issue, a concern associated with the deep impacts on several domains described as resulting from such processes. A better understanding of the processes, the identification of the more susceptible areas, and the definition of preventive or mitigation measures are identified as critical for the purpose of reducing the associated impacts. The use of species distribution modeling might help to identify the areas that are more susceptible to invasion. This paper presents preliminary results on assessing the susceptibility to invasion by the exotic species Acacia dealbata Mill. in the Ceira river basin. The results are based on the maximum entropy modeling approach, considered one of the correlative modeling techniques with the best predictive performance. Models whose validation is based on independent data sets show better performance, an evaluation based on the AUC of the ROC accuracy measure.

  20. Accurate nonlinear modeling for flexible manipulators using mixed finite element formulation in order to obtain maximum allowable load

    International Nuclear Information System (INIS)

    Esfandiar, Habib; Korayem, Moharam Habibnejad

    2015-01-01

    In this study, the researchers examine nonlinear dynamic analysis and determine the dynamic load carrying capacity (DLCC) of flexible manipulators. Manipulator modeling is based on Timoshenko beam theory (TBT), considering the effects of shear and rotational inertia. To remove the risk of shear locking, a new procedure is presented based on a mixed finite element formulation. In the proposed method, the shear deformation is free from the risk of shear locking and independent of the number of integration points along the element axis. Dynamic modeling of the manipulators is carried out for both small and large deformation models using the extended Hamilton method. The system motion equations are obtained by using the nonlinear displacement-strain relationship and the second Piola-Kirchhoff stress tensor. In addition, a comprehensive formulation is developed to calculate the DLCC of flexible manipulators along a prescribed path, considering the constraints of end-effector accuracy, maximum motor torque and maximum stress in the manipulators. Simulation studies are conducted to evaluate the efficiency of the proposed method, considering two-link flexible, fixed-base manipulators on linear and circular paths. Experimental results are also provided to validate the theoretical model. The findings demonstrate the efficiency and good performance of the proposed method.

  1. Accurate nonlinear modeling for flexible manipulators using mixed finite element formulation in order to obtain maximum allowable load

    Energy Technology Data Exchange (ETDEWEB)

    Esfandiar, Habib; Korayem, Moharam Habibnejad [Islamic Azad University, Tehran (Iran, Islamic Republic of)

    2015-09-15

    In this study, the researchers examine nonlinear dynamic analysis and determine the dynamic load carrying capacity (DLCC) of flexible manipulators. Manipulator modeling is based on Timoshenko beam theory (TBT), considering the effects of shear and rotational inertia. To remove the risk of shear locking, a new procedure is presented based on a mixed finite element formulation. In the proposed method, the shear deformation is free from the risk of shear locking and independent of the number of integration points along the element axis. Dynamic modeling of the manipulators is carried out for both small and large deformation models using the extended Hamilton method. The system motion equations are obtained by using the nonlinear displacement-strain relationship and the second Piola-Kirchhoff stress tensor. In addition, a comprehensive formulation is developed to calculate the DLCC of flexible manipulators along a prescribed path, considering the constraints of end-effector accuracy, maximum motor torque and maximum stress in the manipulators. Simulation studies are conducted to evaluate the efficiency of the proposed method, considering two-link flexible, fixed-base manipulators on linear and circular paths. Experimental results are also provided to validate the theoretical model. The findings demonstrate the efficiency and good performance of the proposed method.

  2. MODELLING OF CONCENTRATION LIMITS BASED ON NEURAL NETWORKS.

    Directory of Open Access Journals (Sweden)

    A. L. Osipov

    2017-02-01

    Full Text Available We study models for forecasting concentration limits based on neural network technology, and describe software implementing these models. The efficiency of the system is demonstrated on experimental material.

  3. Modelling the influence of silicon and phosphorus limitation on the ...

    African Journals Online (AJOL)

    In the model, toxin production was related to C cell–1 and triggered by nutrient stress, defined by low values of the carbon-based cell quota of the limiting nutrient. The study therefore suggests that simple models, based on easily measured quantities, are capable of simulating Pseudo-nitzschia growth and toxin production.

  4. Geometrical Modeling of Woven Fabrics Weavability-Limit New Relationships

    Directory of Open Access Journals (Sweden)

    Dalal Mohamed

    2017-03-01

    Full Text Available The weavability limit and tightness of 2D and 3D woven fabrics are important factors and depend on many geometric parameters. Based on a comprehensive review of the literature on textile fabric construction and properties, and on related research on fabric geometry, a study of the weavability limit and tightness relationships of 2D and 3D woven fabrics was undertaken. Experiments were conducted on a representative number of polyester and cotton fabrics woven in our workshop on three machines endowed with different insertion systems (rapier, projectile and air jet). These woven fabrics were then analyzed in the laboratory to determine their physical and mechanical characteristics using an air permeability meter and the KES-F KAWABATA Evaluation System for Fabrics. In this study, the current weavability limit and tightness relationships of Booten, based on the theories of Ashenhurst, Peirce, Love, Russell and Galuszynski and on maximum weavability, are reviewed and modified into new relationships that extend their use to general cases (2D and 3D woven fabrics, all fibre materials, all yarns, etc.). The theoretical relationships were examined and found to agree with experimental results. It was concluded that the weavability limit and tightness relationships are useful tools for weavers in predicting whether a proposed fabric construction is weavable, and also in predicting and explaining its physical and mechanical properties.

  5. A new mathematical model for single machine batch scheduling problem for minimizing maximum lateness with deteriorating jobs

    Directory of Open Access Journals (Sweden)

    Ahmad Zeraatkar Moghaddam

    2012-01-01

    Full Text Available This paper presents a mathematical model for the problem of minimizing the maximum lateness on a single machine when deteriorating jobs are delivered to each customer in batches of various sizes. In practice, this issue may arise within a supply chain in which delivering goods to customers entails a cost, so that holding completed jobs and delivering them in batches may reduce delivery costs. In the batch-scheduling literature, minimizing the maximum lateness is known to be NP-hard; the present problem, which adds delivery costs to that objective function, therefore remains NP-hard. In order to solve the proposed model, a simulated annealing meta-heuristic is used, whose parameters are calibrated by the Taguchi approach, and the results are compared with the global optimal values generated by the Lingo 10 software. Furthermore, in order to check the efficiency of the proposed method on larger problem instances, a lower bound is generated. The results are also analyzed with respect to the effective factors of the problem. A computational study validates the efficiency and accuracy of the presented model.
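
    A stripped-down sketch of the simulated annealing step for the core sequencing problem (minimizing maximum lateness on one machine); batching, delivery costs and job deterioration from the full model are omitted for brevity, and all job data are invented:

        # Simulated annealing over job sequences with a swap neighbourhood.
        import math, random

        random.seed(3)
        jobs = [(random.randint(1, 10), random.randint(5, 60))
                for _ in range(12)]            # each job = (processing, due date)

        def l_max(seq):
            t, worst = 0, -float("inf")
            for p, d in seq:
                t += p
                worst = max(worst, t - d)      # lateness of this job
            return worst

        cur = jobs[:]
        best, temp = cur[:], 10.0
        while temp > 0.01:
            i, j = random.sample(range(len(cur)), 2)
            cand = cur[:]
            cand[i], cand[j] = cand[j], cand[i]          # swap two jobs
            delta = l_max(cand) - l_max(cur)
            if delta < 0 or random.random() < math.exp(-delta / temp):
                cur = cand                               # accept the move
                if l_max(cur) < l_max(best):
                    best = cur[:]
            temp *= 0.995                                # geometric cooling
        print("best L_max:", l_max(best))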

  6. Optimization of a Nucleic Acids united-RESidue 2-Point model (NARES-2P) with a maximum-likelihood approach

    International Nuclear Information System (INIS)

    He, Yi; Scheraga, Harold A.; Liwo, Adam

    2015-01-01

    Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much information as possible about the original biomolecular system in all-atom representation, but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method, based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field

  7. Exploiting maximum energy from variable speed wind power generation systems by using an adaptive Takagi-Sugeno-Kang fuzzy model

    International Nuclear Information System (INIS)

    Galdi, V.; Piccolo, A.; Siano, P.

    2009-01-01

    Nowadays, incentives and financing options for developing renewable energy facilities, together with new developments in variable speed wind technology, make wind energy competitive with conventional generation sources. In order to improve the effectiveness of variable speed wind systems, adaptive control systems able to cope with time variations of the system under control are necessary. On this basis, a data-driven design methodology for TSK fuzzy models is presented in this paper. The methodology, on the basis of given input-output numerical data, generates the 'best' TSK fuzzy model able to estimate with high accuracy the maximum extractable power from a variable speed wind turbine. The design methodology is based on fuzzy clustering methods for partitioning the input-output space, combined with genetic algorithms (GA) and recursive least-squares (LS) optimization methods for model parameter adaptation

  8. APPLICATION OF SOIL LOSS SCENARIOS USING THE ROMSEM MODEL DEPENDING ON MAXIMUM LAND USE PRETABILITY CLASSES. A CASE STUDY

    Directory of Open Access Journals (Sweden)

    SANDA ROȘCA

    2014-06-01

    Full Text Available Application of Soil Loss Scenarios Using the ROMSEM Model Depending on Maximum Land Use Pretability Classes. A Case Study. Practicing a modern agriculture that takes into consideration the favourability conditions and the natural resources of a territory represents one of the main national objectives. Due to the importance of the agricultural land, which prevails among the land use types of the Niraj river basin, as well as its pedological and geomorphological characteristics, different areas where soil erosion is above the accepted thresholds were identified by applying the ROMSEM model. To do so, a GIS database was used, regrouping quantitative information regarding soil type, land use, climate and hydrogeology, which served as indicators in the model. Estimations of the potential soil erosion have been made for the entire basin as well as for its subbasins. The essential role played by the morphometrical characteristics (concavity, convexity, slope length, etc.) has also been highlighted. Taking into account the strongly agricultural character of the analysed territory, the scoring method was employed to identify crop favourability for wheat, barley, corn, sunflower, sugar beet, potato, soy and pea-bean. The results were used as input data for the C coefficient (crop/vegetation and management factor) in the ROMSEM model, which was applied for the present land use conditions, as well as for four other scenarios depicting the land use types with maximum favourability. The theoretical, modelled values of soil erosion were obtained as a function of land use, while the other variables of the model were kept constant.

  9. Toward a mechanistic modeling of nitrogen limitation on vegetation dynamics.

    Directory of Open Access Journals (Sweden)

    Chonggang Xu

    Full Text Available Nitrogen is a dominant regulator of vegetation dynamics, net primary production, and terrestrial carbon cycles; however, most ecosystem models use a rather simplistic relationship between leaf nitrogen content and photosynthetic capacity. Such an approach does not consider how patterns of nitrogen allocation may change with differences in light intensity, growing-season temperature and CO2 concentration. To account for this known variability in nitrogen-photosynthesis relationships, we develop a mechanistic nitrogen allocation model based on a trade-off of nitrogen allocated between growth and storage, and an optimization of nitrogen allocated among light capture, electron transport, carboxylation, and respiration. The developed model is able to predict the acclimation of photosynthetic capacity to changes in CO2 concentration, temperature, and radiation when evaluated against published data of Vc,max (maximum carboxylation rate) and Jmax (maximum electron transport rate). A sensitivity analysis of the model for herbaceous plants, deciduous and evergreen trees implies that elevated CO2 concentrations lead to lower allocation of nitrogen to carboxylation but higher allocation to storage. Higher growing-season temperatures cause lower allocation of nitrogen to carboxylation, due to higher nitrogen requirements for light capture pigments and for storage. Lower levels of radiation have a much stronger effect on allocation of nitrogen to carboxylation for herbaceous plants than for trees, resulting from higher nitrogen requirements for light capture for herbaceous plants. As far as we know, this is the first model of complete nitrogen allocation that simultaneously considers nitrogen allocation to light capture, electron transport, carboxylation, respiration and storage, and the responses of each to altered environmental conditions. We expect this model could potentially improve our confidence in simulations of carbon-nitrogen interactions and the

  10. Toward a mechanistic modeling of nitrogen limitation on vegetation dynamics.

    Science.gov (United States)

    Xu, Chonggang; Fisher, Rosie; Wullschleger, Stan D; Wilson, Cathy J; Cai, Michael; McDowell, Nate G

    2012-01-01

    Nitrogen is a dominant regulator of vegetation dynamics, net primary production, and terrestrial carbon cycles; however, most ecosystem models use a rather simplistic relationship between leaf nitrogen content and photosynthetic capacity. Such an approach does not consider how patterns of nitrogen allocation may change with differences in light intensity, growing-season temperature and CO(2) concentration. To account for this known variability in nitrogen-photosynthesis relationships, we develop a mechanistic nitrogen allocation model based on a trade-off of nitrogen allocated between growth and storage, and an optimization of nitrogen allocated among light capture, electron transport, carboxylation, and respiration. The developed model is able to predict the acclimation of photosynthetic capacity to changes in CO(2) concentration, temperature, and radiation when evaluated against published data of V(c,max) (maximum carboxylation rate) and J(max) (maximum electron transport rate). A sensitivity analysis of the model for herbaceous plants, deciduous and evergreen trees implies that elevated CO(2) concentrations lead to lower allocation of nitrogen to carboxylation but higher allocation to storage. Higher growing-season temperatures cause lower allocation of nitrogen to carboxylation, due to higher nitrogen requirements for light capture pigments and for storage. Lower levels of radiation have a much stronger effect on allocation of nitrogen to carboxylation for herbaceous plants than for trees, resulting from higher nitrogen requirements for light capture for herbaceous plants. As far as we know, this is the first model of complete nitrogen allocation that simultaneously considers nitrogen allocation to light capture, electron transport, carboxylation, respiration and storage, and the responses of each to altered environmental conditions. We expect this model could potentially improve our confidence in simulations of carbon-nitrogen interactions and the vegetation

  11. Toward a Mechanistic Modeling of Nitrogen Limitation on Vegetation Dynamics

    Science.gov (United States)

    Xu, Chonggang; Fisher, Rosie; Wullschleger, Stan D.; Wilson, Cathy J.; Cai, Michael; McDowell, Nate G.

    2012-01-01

    Nitrogen is a dominant regulator of vegetation dynamics, net primary production, and terrestrial carbon cycles; however, most ecosystem models use a rather simplistic relationship between leaf nitrogen content and photosynthetic capacity. Such an approach does not consider how patterns of nitrogen allocation may change with differences in light intensity, growing-season temperature and CO2 concentration. To account for this known variability in nitrogen-photosynthesis relationships, we develop a mechanistic nitrogen allocation model based on a trade-off of nitrogen allocated between growth and storage, and an optimization of nitrogen allocated among light capture, electron transport, carboxylation, and respiration. The developed model is able to predict the acclimation of photosynthetic capacity to changes in CO2 concentration, temperature, and radiation when evaluated against published data of Vc,max (maximum carboxylation rate) and Jmax (maximum electron transport rate). A sensitivity analysis of the model for herbaceous plants, deciduous and evergreen trees implies that elevated CO2 concentrations lead to lower allocation of nitrogen to carboxylation but higher allocation to storage. Higher growing-season temperatures cause lower allocation of nitrogen to carboxylation, due to higher nitrogen requirements for light capture pigments and for storage. Lower levels of radiation have a much stronger effect on allocation of nitrogen to carboxylation for herbaceous plants than for trees, resulting from higher nitrogen requirements for light capture for herbaceous plants. As far as we know, this is the first model of complete nitrogen allocation that simultaneously considers nitrogen allocation to light capture, electron transport, carboxylation, respiration and storage, and the responses of each to altered environmental conditions. We expect this model could potentially improve our confidence in simulations of carbon-nitrogen interactions and the vegetation feedbacks

  12. Limitations of JEDI Models | Jobs and Economic Development Impact Models |

    Science.gov (United States)

    JEDI multipliers are derived from the IMPLAN Group's IMPLAN accounting software; for JEDI, these are updated every two years using the best available data. Input-output modeling remains a widely used methodology for measuring economic development activity, but results depend on the definition of the geographic area under consideration. Datasets of multipliers from IMPLAN are available at

  13. Modeling the evolution of the Laurentide Ice Sheet from MIS 3 to the Last Glacial Maximum: an approach using sea level modeling and ice flow dynamics

    Science.gov (United States)

    Weisenberg, J.; Pico, T.; Birch, L.; Mitrovica, J. X.

    2017-12-01

    The history of the Laurentide Ice Sheet since the Last Glacial Maximum (~26 ka; LGM) is constrained by geological evidence of ice margin retreat in addition to relative sea-level (RSL) records in both the near and far field. Nonetheless, few observations exist constraining the ice sheet's extent across the glacial build-up phase preceding the LGM. Recent work correcting RSL records along the U.S. mid-Atlantic dated to mid-MIS 3 (50-35 ka) for glacial-isostatic adjustment (GIA) infers that the Laurentide Ice Sheet grew by more than three-fold in the 15 ky leading into the LGM. Here we test the plausibility of a late and extremely rapid glaciation by driving a high-resolution ice sheet model, based on a nonlinear diffusion equation for the ice thickness. We initialize this model at 44 ka with the mid-MIS 3 ice sheet configuration proposed by Pico et al. (2017), GIA-corrected basal topography, and a mass balance representative of mid-MIS 3 conditions. These simulations predict rapid growth of the eastern Laurentide Ice Sheet, with rates consistent with achieving LGM ice volumes within 15 ky. We use these simulations to refine the initial ice configuration and present an improved and higher resolution model for North American ice cover during mid-MIS 3. In addition we show that assumptions about ice loads during the glacial phase, and the associated reconstructions of GIA-corrected basal topography, produce a bias that can underpredict ice growth rates in the late stages of the glaciation, which has important consequences for our understanding of the speed limit for ice growth on glacial timescales.
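
    The ice sheet model is described only as "based on a nonlinear diffusion equation for the ice thickness". A minimal 1-D sketch of that equation class (a shallow-ice-type explicit scheme) is given below for orientation; the grid, coefficients, and mass balance field are invented and are not the study's configuration.

    ```python
    import numpy as np

    # 1-D shallow-ice-type evolution of ice thickness h(x, t):
    #   dh/dt = d/dx( D dh/dx ) + b,  with D = Gamma * h**5 * |dh/dx|**2
    # (a common n=3 Glen's-law form; all coefficients are purely illustrative)
    nx, dx, dt, Gamma = 101, 5e3, 0.2, 1e-5
    x = np.linspace(0, (nx - 1) * dx, nx)
    h = np.zeros(nx)
    b = np.where(np.abs(x - x.mean()) < 150e3, 0.5, -1.0)  # mass balance (m/yr)

    for _ in range(5000):
        dhdx = np.diff(h) / dx                     # slope on cell interfaces
        hmid = 0.5 * (h[1:] + h[:-1])
        D = Gamma * hmid**5 * dhdx**2              # nonlinear diffusivity
        flux = -D * dhdx
        h[1:-1] -= dt * np.diff(flux) / dx
        h += dt * b
        np.maximum(h, 0.0, out=h)                  # no negative thickness

    print(f"max thickness after {5000 * dt:.0f} yr: {h.max():.1f} m")
    ```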

  14. [Staff Satisfaction within Duty Hour Models: Longitudinal Survey on Suitability and Legal Conformity at a Surgical Maximum Care Department].

    Science.gov (United States)

    Langelotz, C; Koplin, G; Pascher, A; Lohmann, R; Köhler, A; Pratschke, J; Haase, O

    2017-12-01

    Background Between the conflicting requirements of clinic organisation, the European Working Time Directive, patient safety, an increasing lack of junior staff, and competitiveness, the development of ideal duty hour models is vital to ensure maximum quality of care within the legal requirements. To achieve this, it is useful to evaluate the actual effects of duty hour models on staff satisfaction. Materials and Methods After the traditional 24-hour duty shift was given up in a surgical maximum care centre in 2007, an 18-hour duty shift was implemented, followed by a 12-hour shift in 2008, to improve handovers and reduce loss of information. The effects on work organisation, quality of life and salary were analysed in an anonymous survey in 2008. The staff survey was repeated in 2014. Results With response rates of 95% in 2008 and 93% in 2014, the 12-hour duty model received negative ratings due to its high duty frequency and the resulting social strain. Physical strain and chronic tiredness were likewise rated as most severe in the 12-hour rota. The 18-hour duty shift was the model of choice amongst staff. The 24-hour duty model was rated as the best compromise between the requirements of work organisation and staff satisfaction, and the duty model was therefore adapted accordingly in 2015. Conclusion The essential basis of a surgical department is a duty hour model suited to the requirements of work organisation, the Working Time Directive and the needs of the surgical staff. A 12-hour duty model can be ideal for work organisation, but only if an adequate number of staff members is available; otherwise the frequency of 12-hour shifts becomes too high, with the associated strain on surgical staff and a perceived deterioration of quality of life. A staff survey should be performed on a regular basis to assess the actual effects of duty hour models and enable further optimisation. The much

  15. Cellular aging (the Hayflick limit) and species longevity: a unification model based on clonal succession.

    Science.gov (United States)

    Juckett, D A

    1987-03-01

    A model is presented which proposes a specific cause-and-effect relationship between a limited cell division potential and the maximum lifespan of humans and other mammals. It is based on the clonal succession hypothesis of Kay, which states that continually replicating cell beds (e.g. bone marrow, intestinal crypts, epidermis) could be composed of cells with short, well-defined division potentials. In this model, the cells of these beds are proposed to exist in an ordered hierarchy which establishes a specific sequence for cell divisions throughout the organism's lifespan. The depletion of division potential at all hierarchical levels leads to a loss of bed function and sets an intrinsic limit to species longevity. A specific hierarchy for cell proliferation is defined which allows the calculation of time to bed depletion and, ultimately, to organism mortality. The model allows the existence of a small number (n) of critical cell beds within the organism and defines organism death as the inability of any one of these beds to produce cells. The model is consistent with all major observations related to cellular and organismic aging. In particular, it links the PDLs (population doubling limits) observed for various species to their mean lifespans; it explains the slow decline in PDL as a function of donor age; it establishes a thermodynamically stable maximum lifespan for a disease-free population; and it can explain why tissue transplants outlive donors or hosts.

  16. Development of total maximum daily loads for bacteria impaired watershed using the comprehensive hydrology and water quality simulation model.

    Science.gov (United States)

    Kim, Sang M; Brannan, Kevin M; Zeckoski, Rebecca W; Benham, Brian L

    2014-01-01

    The objective of this study was to develop bacteria total maximum daily loads (TMDLs) for the Hardware River watershed in the Commonwealth of Virginia, USA. The TMDL program is an integrated watershed management approach required by the Clean Water Act. The TMDLs were developed to meet Virginia's water quality standard for bacteria at the time, which stated that the calendar-month geometric mean concentration of Escherichia coli should not exceed 126 cfu/100 mL, and that no single sample should exceed a concentration of 235 cfu/100 mL. The bacteria impairment TMDLs were developed using the Hydrological Simulation Program-FORTRAN (HSPF). The hydrology and water quality components of HSPF were calibrated and validated using data from the Hardware River watershed to ensure that the model adequately simulated runoff and bacteria concentrations. The calibrated and validated HSPF model was used to estimate the contributions from the various bacteria sources in the Hardware River watershed to the in-stream concentration. Bacteria loads were estimated through an extensive source characterization process. Simulation results for existing conditions indicated that the majority of the bacteria came from livestock and wildlife direct deposits and pervious lands. Different source reduction scenarios were evaluated to identify scenarios that meet both the geometric mean and single sample maximum E. coli criteria with zero violations. The resulting scenarios required extreme and impractical reductions from livestock and wildlife sources. Results from studies similar to this across Virginia partially contributed to a reconsideration of the standard's applicability to TMDL development.
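
    The two-part Virginia criterion quoted above is straightforward to operationalize; the snippet below checks a month of samples against both the calendar-month geometric mean (126 cfu/100 mL) and the single-sample maximum (235 cfu/100 mL). The sample values are invented for illustration.

    ```python
    import math

    GM_LIMIT = 126.0   # calendar-month geometric mean, cfu/100 mL
    SSM_LIMIT = 235.0  # single sample maximum, cfu/100 mL

    def check_month(samples):
        gm = math.exp(sum(math.log(s) for s in samples) / len(samples))
        return {
            "geometric_mean": round(gm, 1),
            "gm_ok": gm <= GM_LIMIT,
            "ssm_ok": max(samples) <= SSM_LIMIT,
        }

    # invented samples for one calendar month, cfu/100 mL
    print(check_month([80, 120, 210, 95, 150]))
    ```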

  17. Modelling the existing Irish energy-system to identify future energy costs and the maximum wind penetration feasible

    DEFF Research Database (Denmark)

    Connolly, D.; Lund, Henrik; Mathiesen, Brian Vad

    2010-01-01

    The second investigation illustrates the exposure of the existing Irish energy-system to future energy costs by considering future fuel prices, CO2 prices, and different interest rates. The final investigation identifies the maximum wind penetration feasible on the 2007 Irish energy-system from a technical and economic perspective, as wind is the most promising fluctuating renewable resource available in Ireland. It is concluded that the reference model simulates the Irish energy-system accurately, the annual fuel costs for Ireland's energy could increase by approximately 58% from 2007 to 2020 if a business-as-usual scenario is followed, and the optimum wind penetration for the existing Irish energy-system is approximately 30% from both a technical and economic perspective based on 2020 energy prices. Future studies will use the model developed in this study to show that higher wind penetrations can be achieved if the existing energy-system is modified correctly. Finally, these results are not only applicable to Ireland, but also represent the issues facing many other countries.

  18. Stable isotopes of fossil teeth corroborate key general circulation model predictions for the Last Glacial Maximum in North America

    Science.gov (United States)

    Kohn, Matthew J.; McKay, Moriah

    2010-11-01

    Oxygen isotope data provide a key test of general circulation models (GCMs) for the Last Glacial Maximum (LGM) in North America, which have otherwise proved difficult to validate. High δ18O pedogenic carbonates in central Wyoming have been interpreted to indicate increased summer precipitation sourced from the Gulf of Mexico. Here we show that tooth enamel δ18O of large mammals, which is strongly correlated with local water and precipitation δ18O, is lower during the LGM in Wyoming, not higher. Similar data from Texas, California, Florida and Arizona indicate higher δ18O values than in the Holocene, which is also predicted by GCMs. Tooth enamel data closely validate some recent models of atmospheric circulation and precipitation δ18O, including an increase in the proportion of winter precipitation for central North America, and summer precipitation in the southern US, but suggest aridity can bias pedogenic carbonate δ18O values significantly.

  19. An off-line dual maximum resource bin packing model for solving the maintenance problem in the aviation industry

    Directory of Open Access Journals (Sweden)

    George Cristian Gruia

    2013-05-01

    Full Text Available In the aviation industry, propeller motor engines have a lifecycle of several thousand flight hours, and maintenance is an important part of that lifecycle. The present article considers a multi-resource, priority-based case scheduling problem, applied in a Romanian manufacturing company that repairs and maintains helicopter and airplane engines at a quality level imposed by the aviation standards. Given a reduced budget constraint, the management's goal is to maximize the utilization of their resources (financial, material, space, workers) while maintaining an a priori known priority rule. An Off-Line Dual Maximum Resource Bin Packing model, based on a Mixed Integer Programming model, is thus presented. The obtained results show an increase of approx. 25% in the Just-in-Time shipping of the engines to the customers and an increase of approx. 12.5% in the utilization of the working area.

  20. AN OFF-LINE DUAL MAXIMUM RESOURCE BIN PACKING MODEL FOR SOLVING THE MAINTENANCE PROBLEM IN THE AVIATION INDUSTRY

    Directory of Open Access Journals (Sweden)

    GEORGE CRISTIAN GRUIA

    2013-05-01

    Full Text Available In the aviation industry, propeller motor engines have a lifecycle of several thousand flight hours, and maintenance is an important part of that lifecycle. The present article considers a multi-resource, priority-based case scheduling problem, applied in a Romanian manufacturing company that repairs and maintains helicopter and airplane engines at a quality level imposed by the aviation standards. Given a reduced budget constraint, the management's goal is to maximize the utilization of their resources (financial, material, space, workers) while maintaining an a priori known priority rule. An Off-Line Dual Maximum Resource Bin Packing model, based on a Mixed Integer Programming model, is thus presented. The obtained results show an increase of approx. 25% in the Just-in-Time shipping of the engines to the customers and an increase of approx. 12.5% in the utilization of the working area.
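
    The two records above describe a Mixed Integer Programming formulation that is not reproduced here. As a purely illustrative stand-in, the following sketch packs priority-ordered engine-repair jobs against two resource budgets greedily; it conveys the flavour of priority-constrained resource utilization but offers none of the MIP's optimality guarantees, and every field and number is invented.

    ```python
    def pack_jobs(jobs, budget, space):
        """Greedy, priority-first selection of engine-repair jobs.

        jobs: list of dicts with 'id', 'priority' (lower = more urgent),
        and 'cost' and 'area' demands; all fields are illustrative.
        """
        chosen, used_cost, used_area = [], 0.0, 0.0
        for job in sorted(jobs, key=lambda j: j["priority"]):
            if used_cost + job["cost"] <= budget and used_area + job["area"] <= space:
                chosen.append(job["id"])
                used_cost += job["cost"]
                used_area += job["area"]
        return chosen, used_cost / budget, used_area / space

    jobs = [
        {"id": "eng-1", "priority": 1, "cost": 40.0, "area": 12.0},
        {"id": "eng-2", "priority": 3, "cost": 25.0, "area": 20.0},
        {"id": "eng-3", "priority": 2, "cost": 55.0, "area": 9.0},
    ]
    print(pack_jobs(jobs, budget=100.0, space=30.0))
    ```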

  1. Stochastic Modeling and Deterministic Limit of Catalytic Surface Processes

    DEFF Research Database (Denmark)

    Starke, Jens; Reichert, Christian; Eiswirth, Markus

    2007-01-01

    Three levels of modeling, microscopic, mesoscopic and macroscopic are discussed for the CO oxidation on low-index platinum single crystal surfaces. The introduced models on the microscopic and mesoscopic level are stochastic while the model on the macroscopic level is deterministic. It can......, such that in contrast to the microscopic model the spatial resolution is reduced. The derivation of deterministic limit equations is in correspondence with the successful description of experiments under low-pressure conditions by deterministic reaction-diffusion equations while for intermediate pressures phenomena...

  2. A revised oceanographic model to calculate the limiting capacity of the ocean to accept radioactive waste

    International Nuclear Information System (INIS)

    Webb, G.A.M.; Grimwood, P.D.

    1976-12-01

    This report describes an oceanographic model which has been developed for use in calculating the capacity of the oceans to accept radioactive wastes. One component is a relatively short-term diffusion model based on that described in an earlier report (Webb et al., NRPB-R14 (1973)), but generalised to some extent. Another component is a compartment model used to calculate long-term widespread water concentrations; this addition overcomes some of the shortcomings of the earlier diffusion model. Incorporation of radioactivity into deep ocean sediments is included in this long-term model as a removal mechanism. The combined model is used to provide a conservative (safe) estimate of the maximum concentrations of radioactivity in water as a function of time after the start of a continuous disposal operation. These results can then be used to assess the limiting capacity of an ocean to accept radioactive waste. (author)
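
    A minimal two-compartment analogue of the long-term component described above (continuous release, water exchange, decay, and removal to deep-ocean sediments) is sketched below. All rate constants, volumes, and the release rate are invented for illustration and have no NRPB provenance.

    ```python
    import numpy as np

    # Two-box analogue of a long-term ocean compartment model:
    # a surface box (1) receives a continuous release Q and exchanges water
    # with a deep box (2), which loses activity to sediments and by decay.
    Q = 1.0                 # release rate into box 1 (TBq/yr), illustrative
    k12, k21 = 0.05, 0.01   # exchange rates (1/yr), illustrative
    k_sed = 0.002           # deep-box removal to sediments (1/yr), illustrative
    lam = 0.023             # decay constant, roughly a 30-yr half-life (1/yr)
    V1, V2 = 3e16, 1.3e18   # box volumes (m^3), rough ocean magnitudes

    A = np.zeros(2)         # inventories (TBq)
    dt, years = 0.5, 500
    for _ in range(int(years / dt)):
        f12, f21 = k12 * A[0], k21 * A[1]
        A[0] += dt * (Q - f12 + f21 - lam * A[0])
        A[1] += dt * (f12 - f21 - (lam + k_sed) * A[1])

    print("surface conc (TBq/m^3):", A[0] / V1)
    print("deep conc    (TBq/m^3):", A[1] / V2)
    ```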

  3. Functional State Modelling of Cultivation Processes: Dissolved Oxygen Limitation State

    Directory of Open Access Journals (Sweden)

    Olympia Roeva

    2015-04-01

    Full Text Available A new functional state, namely the dissolved oxygen limitation state, is presented in this study for the fed-batch cultivation processes of both the bacterium Escherichia coli and the yeast Saccharomyces cerevisiae. The functional state modelling approach is applied to cultivation processes in order to overcome the main disadvantages of using a global process model, namely a complex model structure and a large number of model parameters. Along with the newly introduced dissolved oxygen limitation state, a second acetate production state and a first acetate production state are recognized during the fed-batch cultivation of E. coli, while a mixed oxidative state and a first ethanol production state are recognized during the fed-batch cultivation of S. cerevisiae. For all of the functional states mentioned above, both structural and parameter identification is performed based on experimental data from E. coli and S. cerevisiae fed-batch cultivations.

  4. Limitations of sorption isotherms on modeling groundwater contaminant transport

    International Nuclear Information System (INIS)

    Silva, Eduardo Figueira da

    2007-01-01

    Design and safety assessment of radioactive waste repositories, as well as remediation of radionuclide-contaminated groundwater, require the development of models capable of accurately predicting trace element fate and transport. Adsorption of trace radionuclides onto soils and groundwater is an important mechanism controlling near- and far-field transport. Although surface complexation models (SCMs) can better describe the adsorption mechanisms of most radionuclides onto mineral surfaces by directly accounting for variability of system properties and mineral surface properties, isotherms are still used to model contaminant transport in groundwater, despite their much higher system dependence. The present work investigates differences between transport model results based on these two approaches for adsorption modeling. A finite element transport model is used for the isotherm model, whereas the computer program PHREEQC is used for the SCM approach. Both models are calibrated for a batch experiment, and one-dimensional transport is simulated using the calibrated parameters. At the lower injected concentrations there are large discrepancies between SCM and isotherm transport predictions, with the SCM presenting much longer tails on the breakthrough curves. Isotherms may also provide non-conservative results for time to breakthrough and for maximum concentration in a contamination plume. Isotherm models are shown not to be robust enough to predict transport behavior of some trace elements, thus discouraging their use. The results also illustrate the promise of the SCM modeling approach in safety assessment and environmental remediation applications, also suggesting that independent batch sorption measurements can be used, within the framework of the SCM, to produce a more versatile and realistic groundwater transport model for radionuclides which is capable of accounting more accurately for temporal and spatial variations in geochemical conditions. (author)

  5. On the limits of application of the Stephens model

    International Nuclear Information System (INIS)

    Issa, A.; Piepenbring, R.

    1977-01-01

    The limits of the rotation alignment model of Stephens are studied. The conditions of applicability of the assumption of constant j for a unique-parity isolated sub-shell (extended to N=4 and N=3) are discussed and explanations are given. A correct treatment of the eigenstates of the intrinsic motion allows, however, a simple extension of the model to non-isolated sub-shells without enlarging the basis of diagonalisation.

  6. Flexible Modeling of Survival Data with Covariates Subject to Detection Limits via Multiple Imputation.

    Science.gov (United States)

    Bernhardt, Paul W; Wang, Huixia Judy; Zhang, Daowen

    2014-01-01

    Models for survival data generally assume that covariates are fully observed. However, in medical studies it is not uncommon for biomarkers to be censored at known detection limits. A computationally-efficient multiple imputation procedure for modeling survival data with covariates subject to detection limits is proposed. This procedure is developed in the context of an accelerated failure time model with a flexible seminonparametric error distribution. The consistency and asymptotic normality of the multiple imputation estimator are established and a consistent variance estimator is provided. An iterative version of the proposed multiple imputation algorithm that approximates the EM algorithm for maximum likelihood is also suggested. Simulation studies demonstrate that the proposed multiple imputation methods work well while alternative methods lead to estimates that are either biased or more variable. The proposed methods are applied to analyze the dataset from a recently-conducted GenIMS study.
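
    The central step of such a procedure, drawing censored covariate values from a distribution fitted above the detection limit and then pooling analyses across the completed datasets, can be sketched as follows. This toy version imputes from a truncated normal rather than the paper's flexible seminonparametric error distribution, so it illustrates the mechanics only; the data and parameters are invented.

    ```python
    import numpy as np
    from scipy.stats import truncnorm

    rng = np.random.default_rng(42)

    def impute_below_dl(x, dl, mu, sigma, m=5):
        """Return m completed copies of x, filling values censored at dl
        with draws from N(mu, sigma^2) truncated to (-inf, dl).
        A full MI procedure would also redraw (mu, sigma) for each copy."""
        out = []
        upper = (dl - mu) / sigma            # standardized upper bound
        for _ in range(m):
            xi = x.copy()
            n_cens = int(np.isnan(xi).sum())
            xi[np.isnan(xi)] = truncnorm.rvs(-np.inf, upper, loc=mu,
                                             scale=sigma, size=n_cens,
                                             random_state=rng)
            out.append(xi)
        return out

    # biomarker with detection limit 0.5; NaN marks censored observations
    x = np.array([1.2, np.nan, 0.8, np.nan, 2.1, 0.9])
    completed = impute_below_dl(x, dl=0.5, mu=1.0, sigma=0.6)
    # pool the per-imputation point estimates (Rubin's rules; the variance
    # combination step is omitted in this sketch)
    print(np.mean([c.mean() for c in completed]))
    ```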

  7. Experimental limits from ATLAS on Standard Model Higgs production.

    CERN Multimedia

    ATLAS, collaboration

    2012-01-01

    Experimental limits from ATLAS on Standard Model Higgs production in the mass range 110-600 GeV. The solid curve shows the observed experimental limit on the production of a Higgs boson of each possible mass value (horizontal axis). The region where the solid curve dips below the horizontal line at the value of 1 is excluded at the 95% confidence level (CL). The dashed curve shows the expected limit in the absence of the Higgs boson, based on simulations. The green and yellow bands correspond, respectively, to the 68% and 95% confidence level regions around the expected limits. Higgs masses in the narrow range 123-130 GeV are the only masses not excluded at 95% CL

  8. Limited Area Forecasting and Statistical Modelling for Wind Energy Scheduling

    DEFF Research Database (Denmark)

    Rosgaard, Martin Haubjerg

    forecast accuracy for operational wind power scheduling. Numerical weather prediction history and scales of atmospheric motion are summarised, followed by a literature review of limited area wind speed forecasting. Hereafter, the original contribution to research on the topic is outlined. The quality control of wind farm data used as forecast reference is described in detail, and a preliminary limited area forecasting study illustrates the aggravation of issues related to numerical orography representation and accurate reference coordinates at fine weather model resolutions. For the offshore and coastal sites studied, limited area forecasting is found to deteriorate wind speed prediction accuracy, while inland results exhibit a steady forecast performance increase with weather model resolution. Temporal smoothing of wind speed forecasts is shown to improve wind power forecast performance by up to almost

  9. Usefulness and limitations of global flood risk models

    Science.gov (United States)

    Ward, Philip; Jongman, Brenden; Salamon, Peter; Simpson, Alanna; Bates, Paul; De Groeve, Tom; Muis, Sanne; Coughlan de Perez, Erin; Rudari, Roberto; Trigg, Mark; Winsemius, Hessel

    2016-04-01

    Global flood risk models are now a reality. Initially, their development was driven by a demand from users for first-order global assessments to identify risk hotspots. Relentless upward trends in flood damage over the last decade have enhanced interest in such assessments. The adoption of the Sendai Framework for Disaster Risk Reduction and the Warsaw International Mechanism for Loss and Damage Associated with Climate Change Impacts have made these efforts even more essential. As a result, global flood risk models are being used more and more in practice, by an increasingly large number of practitioners and decision-makers. However, they clearly have their limits compared to local models. To address these issues, a team of scientists and practitioners recently came together at the Global Flood Partnership meeting to critically assess the question 'What can('t) we do with global flood risk models?'. The results of this dialogue (Ward et al., 2015) will be presented, opening a discussion on similar broader initiatives at the science-policy interface for other natural hazards. In this contribution, examples are provided of successful applications of global flood risk models in practice (for example together with the World Bank, Red Cross, and UNISDR), and limitations and gaps between user 'wish-lists' and model capabilities are discussed. Finally, a research agenda is presented for addressing these limitations and reducing the gaps. Ward et al., 2015. Nature Climate Change, doi:10.1038/nclimate2742

  10. A binary genetic programing model for teleconnection identification between global sea surface temperature and local maximum monthly rainfall events

    Science.gov (United States)

    Danandeh Mehr, Ali; Nourani, Vahid; Hrnjica, Bahrudin; Molajou, Amir

    2017-12-01

    The effectiveness of genetic programming (GP) for solving regression problems in hydrology has been recognized in recent studies. However, its capability to solve classification problems has not been sufficiently explored so far. This study develops and applies a novel classification-forecasting model, namely Binary GP (BGP), for teleconnection studies between sea surface temperature (SST) variations and maximum monthly rainfall (MMR) events. The BGP integrates certain types of data pre-processing and post-processing methods with a conventional GP engine to enhance its ability to solve both regression and classification problems simultaneously. The model was trained and tested using SST series of the Black Sea, Mediterranean Sea, and Red Sea as potential predictors, and classified MMR events at two locations in Iran as the predictand. The skill of the model was measured with regard to different rainfall thresholds and SST lags and compared to that of the hybrid decision tree-association rule (DTAR) model available in the literature. The results indicated that the proposed model can identify potential teleconnection signals of the surrounding seas beneficial to long-term forecasting of the occurrence of the classified MMR events.

  11. Ocean (de)oxygenation from the Last Glacial Maximum to the twenty-first century: insights from Earth System models

    Science.gov (United States)

    Bopp, L.; Resplandy, L.; Untersee, A.; Le Mezo, P.; Kageyama, M.

    2017-08-01

    All Earth System models project a consistent decrease in the oxygen content of oceans for the coming decades because of ocean warming, reduced ventilation and increased stratification. But large uncertainties for these future projections of ocean deoxygenation remain for the subsurface tropical oceans where the major oxygen minimum zones are located. Here, we combine global warming projections, model-based estimates of natural short-term variability, as well as data and model estimates of the Last Glacial Maximum (LGM) ocean oxygenation to gain some insights into the major mechanisms of oxygenation changes across these different time scales. We show that the primary uncertainty on future ocean deoxygenation in the subsurface tropical oceans is in fact controlled by a robust compensation between decreasing oxygen saturation (O2sat) due to warming and decreasing apparent oxygen utilization (AOU) due to increased ventilation of the corresponding water masses. Modelled short-term natural variability in subsurface oxygen levels also reveals a compensation between O2sat and AOU, controlled by the latter. Finally, using a model simulation of the LGM, reproducing data-based reconstructions of past ocean (de)oxygenation, we show that the deoxygenation trend of the subsurface ocean during deglaciation was controlled by a combination of warming-induced decreasing O2sat and increasing AOU driven by a reduced ventilation of tropical subsurface waters. This article is part of the themed issue 'Ocean ventilation and deoxygenation in a warming world'.
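
    The compensation described in this and the following record rests on the identity O2 = O2sat - AOU, so the change in oxygen decomposes as dO2 = dO2sat - dAOU. A two-line numerical illustration with invented magnitudes:

    ```python
    # O2 = O2sat - AOU, so the oxygen change decomposes as
    # dO2 = dO2sat - dAOU; invented values in mmol/m^3 for illustration
    d_o2sat = -8.0   # warming lowers saturation
    d_aou = -6.5     # increased ventilation lowers apparent O2 utilization
    print("net dO2:", d_o2sat - d_aou)   # -1.5: near-compensation
    ```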

  12. Bayesian maximum entropy integration of ozone observations and model predictions: an application for attainment demonstration in North Carolina.

    Science.gov (United States)

    de Nazelle, Audrey; Arunachalam, Saravanan; Serre, Marc L

    2010-08-01

    States in the USA are required to demonstrate future compliance with criteria air pollutant standards by using both air quality monitors and model outputs. In the case of ozone, the demonstration tests aim at relying heavily on measured values, due to their perceived objectivity and enforceable quality. The weight given to numerical models is diminished by integrating them into the calculations only in a relative sense. For unmonitored locations, the EPA has suggested the use of a spatial interpolation technique to assign current values. We demonstrate that this approach may lead to erroneous assignments of nonattainment and may make it difficult for States to establish future compliance. We propose a method that combines different sources of information to map air pollution, using the Bayesian Maximum Entropy (BME) framework. The approach gives precedence to measured values and integrates modeled data as a function of model performance. We demonstrate this approach in North Carolina, using the State's ozone monitoring network in combination with outputs from the Multiscale Air Quality Simulation Platform (MAQSIP) modeling system. We show that the BME data integration approach, compared to a spatial interpolation of measured data, improves the accuracy and the precision of ozone estimations across the state.

  13. Ocean (de)oxygenation from the Last Glacial Maximum to the twenty-first century: insights from Earth System models.

    Science.gov (United States)

    Bopp, L; Resplandy, L; Untersee, A; Le Mezo, P; Kageyama, M

    2017-09-13

    All Earth System models project a consistent decrease in the oxygen content of oceans for the coming decades because of ocean warming, reduced ventilation and increased stratification. But large uncertainties for these future projections of ocean deoxygenation remain for the subsurface tropical oceans where the major oxygen minimum zones are located. Here, we combine global warming projections, model-based estimates of natural short-term variability, as well as data and model estimates of the Last Glacial Maximum (LGM) ocean oxygenation to gain some insights into the major mechanisms of oxygenation changes across these different time scales. We show that the primary uncertainty on future ocean deoxygenation in the subsurface tropical oceans is in fact controlled by a robust compensation between decreasing oxygen saturation (O2sat) due to warming and decreasing apparent oxygen utilization (AOU) due to increased ventilation of the corresponding water masses. Modelled short-term natural variability in subsurface oxygen levels also reveals a compensation between O2sat and AOU, controlled by the latter. Finally, using a model simulation of the LGM, reproducing data-based reconstructions of past ocean (de)oxygenation, we show that the deoxygenation trend of the subsurface ocean during deglaciation was controlled by a combination of warming-induced decreasing O2sat and increasing AOU driven by a reduced ventilation of tropical subsurface waters. This article is part of the themed issue 'Ocean ventilation and deoxygenation in a warming world'. © 2017 The Author(s).

  14. Singular limit analysis of a model for earthquake faulting

    DEFF Research Database (Denmark)

    Bossolini, Elena; Brøns, Morten; Kristiansen, Kristian Uldall

    2017-01-01

    In this paper we consider the one dimensional spring-block model describing earthquake faulting. By using geometric singular perturbation theory and the blow-up method we provide a detailed description of the periodicity of the earthquake episodes. In particular, the limit cycles arise from...

  15. Homogeneous axisymmetric model with a limiting stiff equation of state

    International Nuclear Information System (INIS)

    Korkina, M.P.; Martynenko, V.G.

    1976-01-01

    A solution is obtained for Einstein's equations in which all metric coefficients are functions of time only, for a limiting stiff equation of state of the matter. The solution describes a homogeneous cosmological model with cylindrical symmetry. It is shown that the same metric can be induced by a massless scalar field that depends only on time. An analysis of this solution is presented

  16. The limitations of applying rational decision-making models to ...

    African Journals Online (AJOL)

    The aim of this paper is to show the limitations of rational decision-making models as applied to child spacing and more specifically to the use of modern methods of contraception. In the light of factors known to influence low uptake of child spacing services in other African countries, suggestions are made to explain the ...

  17. Use of Maximum Likelihood-Mixed Models to select stable reference genes: a case of heat stress response in sheep

    Directory of Open Access Journals (Sweden)

    Salces Judit

    2011-08-01

    Full Text Available Abstract Background Reference genes with stable expression are required to normalize expression differences of target genes in qPCR experiments. Several procedures and companion software packages have been proposed to find the most stable genes. Model-based procedures are attractive because they provide a solid statistical framework. NormFinder, a widely used software package, uses a model-based method. The pairwise comparison procedure implemented in geNorm is a simpler procedure, but one of the most extensively used. In the present work a statistical approach based on Maximum Likelihood estimation under mixed models was tested and compared with the NormFinder and geNorm software. Sixteen candidate genes were tested in whole blood samples from control and heat-stressed sheep. Results A model including gene and treatment as fixed effects, and sample (animal), gene by treatment, gene by sample and treatment by sample interactions as random effects, with heteroskedastic residual variance across gene by treatment levels, was selected using goodness of fit and predictive ability criteria among a variety of models. The Mean Square Error obtained under the selected model was used as an indicator of gene expression stability. The genes top and bottom ranked by the three approaches were similar; however, notable differences were found for the best pair of genes selected by each method and for the remaining genes in the rankings. Differences among the expression values of normalized targets for each statistical approach were also found. Conclusions The optimal statistical properties of Maximum Likelihood estimation, joined to the flexibility of mixed models, allow for more accurate estimation of the expression stability of genes under many different situations. Accurate selection of reference genes has a direct impact on the normalized expression values of a given target gene. This may be critical when the aim of the study is to compare expression rate differences among samples under different environmental
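
    As a simplified, fixed-effects analogue of the stability criterion described above (a residual Mean Square Error per gene after removing systematic effects), consider the sketch below. It is not the paper's heteroskedastic mixed model; the data are simulated, and the centering steps are a crude stand-in for the random-effect structure.

    ```python
    import numpy as np

    def stability_mse(expr):
        """expr: genes x treatments x animals array of normalized Cq values.
        Remove per-gene treatment means and crude per-animal offsets, then
        score each gene by its residual mean square (lower = more stable)."""
        resid = expr - expr.mean(axis=2, keepdims=True)         # gene-by-treatment means
        resid = resid - resid.mean(axis=(0, 1), keepdims=True)  # sample offsets
        return (resid ** 2).mean(axis=(1, 2))

    rng = np.random.default_rng(7)
    expr = rng.normal(20, 1, size=(16, 2, 10))   # 16 genes, control vs heat, 10 sheep
    mse = stability_mse(expr)
    print("most stable genes:", np.argsort(mse)[:3])
    ```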

  18. Totally Asymmetric Limit for Models of Heat Conduction

    Science.gov (United States)

    De Carlo, Leonardo; Gabrielli, Davide

    2017-08-01

    We consider one dimensional weakly asymmetric boundary driven models of heat conduction. In the cases of a constant diffusion coefficient and of a quadratic mobility we compute the quasi-potential, which is a non local functional obtained by the solution of a variational problem. This is done using the dynamic variational approach of the macroscopic fluctuation theory (Bertini et al. in Rev Mod Phys 87:593, 2015). The case of a concave mobility corresponds essentially to the exclusion model that has been discussed in Bertini et al. (J Stat Mech L11001, 2010; Pure Appl Math 64(5):649-696, 2011; Commun Math Phys 289(1):311-334, 2009) and Enaud and Derrida (J Stat Phys 114:537-562, 2004). We consider here the convex case that includes for example the Kipnis-Marchioro-Presutti (KMP) model and its dual (KMPd) (Kipnis et al. in J Stat Phys 27:65-74, 1982). This extends to the weakly asymmetric regime the computations in Bertini et al. (J Stat Phys 121(5/6):843-885, 2005). We then consider, both microscopically and macroscopically, the limit of large external fields. Microscopically we discuss some possible totally asymmetric limits of the KMP model. In one case the totally asymmetric dynamics has a product invariant measure. Another possible limit dynamics has instead a non trivial invariant measure for which we give a duality representation. Macroscopically we show that the quasi-potentials of KMP and KMPd, which are non local for any value of the external field, become local in the limit. Moreover the dependence on one of the external reservoirs disappears. For models having strictly positive quadratic mobilities we obtain instead in the limit a non local functional having a structure similar to that of the boundary driven asymmetric exclusion process.

  19. A simple shear limited, single size, time dependent flocculation model

    Science.gov (United States)

    Kuprenas, R.; Tran, D. A.; Strom, K.

    2017-12-01

    This research focuses on the modeling of flocculation of cohesive sediment due to turbulent shear, specifically investigating the dependency of flocculation on the concentration of cohesive sediment. Flocculation is important in larger sediment transport models because cohesive particles can create aggregates that are orders of magnitude larger than the unflocculated particles. As the settling velocity of each particle is determined by the sediment size, density, and shape, accounting for this aggregation is important in determining where the sediment is deposited. This study provides a new formulation for flocculation of cohesive sediment by modifying the Winterwerp (1998) flocculation model (W98) so that it limits floc size to the Kolmogorov micro length scale. The W98 model is a simple approach that calculates the average floc size as a function of time. Because of its simplicity, the W98 model is ideal for implementation in larger sediment transport models; however, it tends to over-predict the dependency of the floc size on concentration. It was found that modifying the coefficients within the original model did not allow the model to capture the dependency on concentration. Therefore, a new term was added within the breakup kernel of the W98 formulation. The result is a single-size, shear-limited, time-dependent flocculation model that effectively captures both the dependency of the equilibrium floc size on suspended sediment concentration and the time to equilibrium. The overall behavior of the new model is explored and shown to align well with other studies of flocculation. Winterwerp, J. C. (1998). A simple model for turbulence induced flocculation of cohesive sediment. Journal of Hydraulic Research, 36(3):309-326.
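
    The structure described above, a single characteristic floc size evolving under competing aggregation and breakup with growth capped at the Kolmogorov microscale, can be sketched as a one-variable ODE integrated explicitly. All coefficients below are invented; the kernels mimic the W98 form only schematically.

    ```python
    import numpy as np

    def floc_size(time, c, G, nu=1e-6, eps=1e-5, d0=4e-6,
                  ka=0.5, kb=2.0, q=0.5):
        """Schematic single-size flocculation model, W98-like in structure:
        growth ~ ka*c*G*D, breakup ~ kb*G**(q+1)*D*(D-d0)**q, with the floc
        size D capped at the Kolmogorov microscale eta. All coefficients
        are invented for illustration."""
        eta = (nu**3 / eps) ** 0.25      # Kolmogorov length scale (m)
        D = d0                           # start from the primary particle size
        dt = 1.0                         # matches the 1-s spacing of `time`
        sizes = []
        for _ in time:
            growth = ka * c * G * D
            breakup = kb * G ** (q + 1) * D * max(D - d0, 0.0) ** q
            D = min(max(D + dt * (growth - breakup), d0), eta)
            sizes.append(D)
        return np.array(sizes)

    t = np.arange(0, 3600)                    # one hour, 1-s steps
    print(floc_size(t, c=0.05, G=5.0)[-1])    # near-equilibrium size in metres
    ```

    With these invented kernels the equilibrium size grows with concentration c and shrinks with shear rate G, which is the qualitative behaviour the record discusses.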

  20. New limit on logotropic unified dark energy models

    Directory of Open Access Journals (Sweden)

    V.M.C. Ferreira

    2017-07-01

    Full Text Available A unification of dark matter and dark energy in terms of a logotropic perfect dark fluid has recently been proposed, where deviations with respect to the standard ΛCDM model depend on a single parameter B. In this paper we show that the requirement that the linear growth of cosmic structures on comoving scales larger than 8h−1 Mpc is not significantly affected with respect to the standard ΛCDM result provides the strongest limit to date on the model (B<6×10−7), an improvement of more than three orders of magnitude over previous upper limits on the value of B. We further show that this limit rules out the logotropic Unified Dark Energy model as a possible solution to the small-scale problems of the ΛCDM model, including the cusp problem of dark matter halos and the missing satellite problem, as well as the original version of the model, in which the Planck energy density was taken as one of the two parameters characterizing the logotropic dark fluid.

  1. The Particle-Matrix model: limitations and further improvements needed

    DEFF Research Database (Denmark)

    Cepuritis, Rolands; Jacobsen, Stefan; Spangenberg, Jon

    According to the Particle-Matrix Model (PMM) philosophy, the workability of concrete depends on the properties of two phases and the volumetric ratio between them: the fluid matrix phase (≤ 0.125 mm) and the solid particle phase (> 0.125 mm). The model has been successfully applied to predict concrete workability for different types of concrete, but has also indicated that some potential cases exist where its application is limited. The paper presents recent studies on improving the method by analysing how the PMM one-point flow parameter λQ can be expressed by rheological models (Bingham and Herschel-Bulkley).
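
    For reference, the two rheological models named above relate shear stress tau to shear rate gamma_dot as tau = tau0 + mu_p*gamma_dot (Bingham) and tau = tau0 + K*gamma_dot**n (Herschel-Bulkley). A direct transcription with invented matrix-phase parameters:

    ```python
    def bingham(gamma_dot, tau0, mu_p):
        """Bingham plastic: tau = tau0 + mu_p * gamma_dot (flowing material)."""
        return tau0 + mu_p * gamma_dot

    def herschel_bulkley(gamma_dot, tau0, K, n):
        """Herschel-Bulkley: tau = tau0 + K * gamma_dot**n."""
        return tau0 + K * gamma_dot ** n

    # invented matrix-phase parameters: yield stress in Pa, shear rates in 1/s
    for rate in (1.0, 10.0, 50.0):
        print(rate, bingham(rate, tau0=15.0, mu_p=0.9),
              herschel_bulkley(rate, tau0=15.0, K=2.0, n=0.6))
    ```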

  2. Modelling the existing Irish energy-system to identify future energy costs and the maximum wind penetration feasible

    International Nuclear Information System (INIS)

    Connolly, D.; Leahy, M.; Lund, H.; Mathiesen, B.V.

    2010-01-01

    In this study a model of the Irish energy-system was developed using EnergyPLAN based on the year 2007, which was then used for three investigations. The first compares the model results with actual values from 2007 to validate its accuracy. The second illustrates the exposure of the existing Irish energy-system to future energy costs by considering future fuel prices, CO2 prices, and different interest rates. The final investigation identifies the maximum wind penetration feasible on the 2007 Irish energy-system from a technical and economic perspective, as wind is the most promising fluctuating renewable resource available in Ireland. It is concluded that the reference model simulates the Irish energy-system accurately, the annual fuel costs for Ireland's energy could increase by approximately 58% from 2007 to 2020 if a business-as-usual scenario is followed, and the optimum wind penetration for the existing Irish energy-system is approximately 30% from both a technical and economic perspective based on 2020 energy prices. Future studies will use the model developed in this study to show that higher wind penetrations can be achieved if the existing energy-system is modified correctly. Finally, these results are not only applicable to Ireland, but also represent the issues facing many other countries. (author)

  3. Numerical Modeling of Rocky Mountain Paleoglaciers - Insights into the Climate of the Last Glacial Maximum and the Subsequent Deglaciation

    Science.gov (United States)

    Leonard, E. M.; Laabs, B. J. C.; Plummer, M. A.

    2014-12-01

    Numerical modeling of paleoglaciers can yield information on the climatic conditions necessary to sustain those glaciers. In this study we apply a coupled 2-d mass/energy balance and flow model (Plummer and Phillips, 2003) to reconstruct local last glacial maximum (LLGM) glaciers and paleoclimate in ten study areas along the crest of the U.S. Rocky Mountains between 33°N and 49°N. In some of the areas, where timing of post-LLGM ice recession is constrained by surface exposure ages on either polished bedrock upvalley from the LLGM moraines or post-LLGM recessional moraines, we use the model to assess magnitudes and rates of climate change during deglaciation. The modeling reveals a complex pattern of LLGM climate. The magnitude of LLGM-to-modern climate change (temperature and/or precipitation change) was greater in both the northern (Montana) Rocky Mountains and southern (New Mexico) Rocky Mountains than in the middle (Wyoming and Colorado) Rocky Mountains. We use temperature depression estimates from global and regional climate models to infer LLGM precipitation from our glacier model results. Our results suggest a reduction of precipitation coupled with strongly depressed temperatures in the north, contrasted with strongly enhanced precipitation and much more modest temperature depression in the south. The middle Rocky Mountains of Colorado and Wyoming appear to have experienced a reduction in precipitation at the LLGM without the strong temperature depression of the northern Rocky Mountains. Preliminary work on modeling of deglaciation in the Sangre de Cristo Range in southern Colorado suggests that approximately half of the LLGM-to-modern climate change took place during the initial ~2400 years of deglaciation. If increasing temperature and changing solar insolation were the sole drivers of this initial deglaciation, then temperature would need to have risen by slightly more than 1°C/ky through this interval to account for the observed rate of ice recession.

  4. Psychosocial Pain Management Moderation: The Limit, Activate, and Enhance Model.

    Science.gov (United States)

    Day, Melissa A; Ehde, Dawn M; Jensen, Mark P

    2015-10-01

    There is a growing emphasis in the pain literature on understanding the following second-order research questions: Why do psychosocial pain treatments work? For whom do various treatments work? This critical review summarizes research that addresses the latter question and proposes a moderation model to help guide future research. A theoretical moderation framework for matching individuals to specific psychosocial pain interventions has been lacking. However, several such frameworks have been proposed in the broad psychotherapy and implementation science literature. Drawing on these theories and adapting them specifically for psychosocial pain treatment, here we propose a Limit, Activate, and Enhance model of pain treatment moderation. This model is unique in that it includes algorithms not only for matching treatments on the basis of patient weaknesses but also for directing patients to interventions that build on their strengths. Critically, this model provides a basis for specific a priori hypothesis generation, and a selection of the possible hypotheses drawn from the model are proposed and discussed. Future research considerations are presented that could refine and expand the model based on theoretically driven empirical evidence. The Limit, Activate, and Enhance model presented here is a theoretically derived framework that provides an a priori basis for hypothesis generation regarding psychosocial pain treatment moderators. The model will advance moderation research via its unique focus on matching patients to specific treatments that (1) limit maladaptive responses, (2) activate adaptive responses, and (3) enhance treatment outcomes based on patient strengths and resources. Copyright © 2015 American Pain Society. Published by Elsevier Inc. All rights reserved.

  5. Abilities and limitations in the use of regional climate models

    Energy Technology Data Exchange (ETDEWEB)

    Koeltzov, Morten Andreas Oedegaard

    2012-11-01

    In order to assess the effects of climate change at the regional level, regional climate models are used. These models add regional features that are not resolved by the global climate models on which climate research is largely based. Regional models can provide good and useful climate projections that add value beyond the global climate models, but they also introduce additional uncertainty into the calculations. How should this uncertainty affect the use of regional climate models? The most common methodology for calculating potential future climate developments is based on different scenarios of possible emissions of greenhouse gases. Global climate models use physical laws to calculate possible future developments under these scenarios. These calculations are mathematically complex, and with limited supercomputing capacity the global models can only resolve the larger scales of the climate system. Impact studies require regional detail, so regional models are applied over a limited area of the climate system. These regional models are driven by data from the global models and refine and improve those data. Impact studies can then use the data from the regional models directly, or data that have been further processed to provide more local detail using geo-statistical methods. In the preparation of climate projections there are at least four sources of uncertainty: uncertainty related to the emission scenarios for greenhouse gases, uncertainty related to the use of global climate models, uncertainty related to the use of regional climate models, and uncertainty from internal variability in the climate system. This thesis discusses the use of regional climate models and illustrates how a regional climate model adds value to climate projections while at the same time introducing uncertainty into the calculations. It discusses in particular the importance of the choice of

  6. Calculation of the effects of pumping, divertor configuration and fueling on density limit in a tokamak model problem

    International Nuclear Information System (INIS)

    Stacey, W. M.

    2001-01-01

    Several series of model problem calculations have been performed to investigate the predicted effect of pumping, divertor configuration and fueling on the maximum achievable density in diverted tokamaks. Density limitations due to thermal instabilities (confinement degradation and multifaceted axisymmetric radiation from the edge) and to divertor choking are considered. For gas fueling the maximum achievable density is relatively insensitive to pumping (on or off), to the divertor configuration (open or closed), or to the location of the gas injection, although the gas fueling rate required to achieve this maximum achievable density is quite sensitive to these choices. Thermal instabilities are predicted to limit the density at lower values than divertor choking. Higher-density limits are predicted for pellet injection than for gas fueling

  7. Uncertainty Quantification given Discontinuous Climate Model Response and a Limited Number of Model Runs

    Science.gov (United States)

    Sargsyan, K.; Safta, C.; Debusschere, B.; Najm, H.

    2010-12-01

    Uncertainty quantification in complex climate models is challenged by the sparsity of available climate model predictions due to the high computational cost of model runs. Another feature that prevents classical uncertainty analysis from being readily applicable is bifurcative behavior in climate model response with respect to certain input parameters. A typical example is the Atlantic Meridional Overturning Circulation. The predicted maximum overturning stream function exhibits discontinuity across a curve in the space of two uncertain parameters, namely climate sensitivity and CO2 forcing. We outline a methodology for uncertainty quantification given discontinuous model response and a limited number of model runs. Our approach is two-fold. First we detect the discontinuity with Bayesian inference, thus obtaining a probabilistic representation of the discontinuity curve shape and location for arbitrarily distributed input parameter values. Then, we construct spectral representations of uncertainty, using Polynomial Chaos (PC) expansions on either side of the discontinuity curve, leading to an averaged-PC representation of the forward model that allows efficient uncertainty quantification. The approach is enabled by a Rosenblatt transformation that maps each side of the discontinuity to regular domains where desirable orthogonality properties for the spectral bases hold. We obtain PC modes by either orthogonal projection or Bayesian inference, and argue for a hybrid approach that targets a balance between the accuracy provided by the orthogonal projection and the flexibility provided by the Bayesian inference - where the latter allows obtaining reasonable expansions without extra forward model runs. The model output, and its associated uncertainty at specific design points, are then computed by taking an ensemble average over PC expansions corresponding to possible realizations of the discontinuity curve. The methodology is tested on synthetic examples of
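
    To make the averaged-PC construction above concrete, the following sketch fits separate low-order Legendre (polynomial chaos) expansions on either side of a known discontinuity and evaluates the surrogate piecewise. The toy forward model and the assumed jump location are hypothetical; the record's Bayesian discontinuity inference and Rosenblatt transformation are not reproduced.

        # Minimal sketch: piecewise polynomial-chaos surrogate for a discontinuous
        # response on [-1, 1], assuming the discontinuity location is already known.
        import numpy as np
        from numpy.polynomial import legendre

        def model(x):
            # Toy forward model with a jump at x = 0.2 (stand-in for, e.g., the
            # overturning stream function across the discontinuity curve).
            return np.where(x < 0.2, np.sin(2 * x), 2.0 + 0.5 * x)

        def fit_side(xs, ys, lo, hi, order=4):
            # Map [lo, hi] onto the Legendre reference interval [-1, 1], then fit.
            t = 2 * (xs - lo) / (hi - lo) - 1
            return legendre.legfit(t, ys, order)

        x_train = np.linspace(-1, 1, 40)
        y_train = model(x_train)
        jump = 0.2
        left = x_train < jump
        c_left = fit_side(x_train[left], y_train[left], -1.0, jump)
        c_right = fit_side(x_train[~left], y_train[~left], jump, 1.0)

        def surrogate(x):
            if x < jump:
                return legendre.legval(2 * (x + 1.0) / (jump + 1.0) - 1, c_left)
            return legendre.legval(2 * (x - jump) / (1.0 - jump) - 1, c_right)

        print(surrogate(0.0), model(0.0))   # left of the jump
        print(surrogate(0.5), model(0.5))   # right of the jump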

  8. Probabilistic models of population evolution scaling limits, genealogies and interactions

    CERN Document Server

    Pardoux, Étienne

    2016-01-01

    This expository book presents the mathematical description of evolutionary models of populations subject to interactions (e.g. competition) within the population. The author includes both models of finite populations and limiting models as the size of the population tends to infinity. The size of the population is described as a random function of time and of the initial population (the ancestors at time 0). The genealogical tree of such a population is given. Most models imply that the population is bound to go extinct in finite time. It is explained when the interaction is strong enough that the extinction time remains finite even as the ancestral population at time 0 goes to infinity. The material could be used for teaching stochastic processes, together with their applications. Étienne Pardoux is Professor at Aix-Marseille University, working in the field of Stochastic Analysis, stochastic partial differential equations, and probabilistic models in evolutionary biology and population genetics. He obtai...

  9. Animal models of enterovirus 71 infection: applications and limitations

    Science.gov (United States)

    2014-01-01

    Human enterovirus 71 (EV71) has emerged as a neuroinvasive virus that is responsible for several outbreaks in the Asia-Pacific region over the past 15 years. Appropriate animal models are needed to understand EV71 neuropathogenesis better and to facilitate the development of effective vaccines and drugs. Non-human primate models have been used to characterize and evaluate the neurovirulence of EV71 after the early outbreaks in the late 1990s. However, these models were not suitable for assessing the neurovirulence level of the virus and were associated with ethical and economic difficulties in terms of broad application. Several strategies have been applied to develop mouse models of EV71 infection, including strategies that employ virus adaptation and immunodeficient hosts. Although these mouse models do not closely mimic human disease, they have been applied to study the pathogenesis, treatment and prevention of the disease. EV71 receptor-transgenic mouse models have recently been developed and have significantly advanced our understanding of the biological features of the virus and the host-parasite interactions. Overall, each of these models has advantages and disadvantages, and these models are differentially suited for studies of EV71 pathogenesis and/or the pre-clinical testing of antiviral drugs and vaccines. In this paper, we review the characteristics, applications and limitations of these EV71 animal models, including non-human primate and mouse models. PMID:24742252

  10. Improving on hidden Markov models: An articulatorily constrained, maximum likelihood approach to speech recognition and speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, J.

    1996-11-05

    The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.

  11. The spatial limitations of current neutral models of biodiversity.

    Directory of Open Access Journals (Sweden)

    Rampal S Etienne

    Full Text Available The unified neutral theory of biodiversity and biogeography is increasingly accepted as an informative null model of community composition and dynamics. It has successfully produced macro-ecological patterns such as species-area relationships and species abundance distributions. However, the models employed make many unrealistic auxiliary assumptions. For example, the popular spatially implicit version assumes a local plot exchanging migrants with a large panmictic regional source pool. This simple structure allows rigorous testing of its fit to data. In contrast, spatially explicit models assume that offspring disperse only limited distances from their parents, but one cannot as yet test the significance of their fit to data. Here we compare the spatially explicit and the spatially implicit model, fitting the most-used implicit model (with two levels, local and regional) to data simulated by the most-used spatially explicit model (where offspring are distributed about their parent on a grid according to either a radially symmetric Gaussian or a 'fat-tailed' distribution). Based on these fits, we express spatially implicit parameters in terms of spatially explicit parameters. This suggests how we may obtain estimates of spatially explicit parameters from spatially implicit ones. The relationship between these parameters, however, makes no intuitive sense. Furthermore, the spatially implicit model usually fits observed species-abundance distributions better than those calculated from the spatially explicit model's simulated data. Current spatially explicit neutral models therefore have limited descriptive power. However, our results suggest that a fatter tail of the dispersal kernel seems to improve the fit, suggesting that dispersal kernels with even fatter tails should be studied in future. We conclude that more advanced spatially explicit models and tools to analyze them need to be developed.
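
    A minimal sketch of the kind of spatially explicit neutral dynamics described above, assuming a grid of sites, point speciation and a radially symmetric Gaussian dispersal kernel; parameter values are illustrative and none of this reproduces the authors' fitting procedure.

        # Spatially explicit neutral community on a periodic grid: each step a
        # random individual dies and is replaced by the offspring of a parent
        # drawn from a Gaussian dispersal kernel, or by a new species.
        import numpy as np

        rng = np.random.default_rng(0)
        L, deaths, sigma, nu = 64, 200_000, 2.0, 1e-4
        community = rng.integers(0, 50, size=(L, L))   # initial species labels
        next_label = community.max() + 1

        for _ in range(deaths):
            i, j = rng.integers(0, L, size=2)          # site where a death occurs
            if rng.random() < nu:                      # point speciation
                community[i, j] = next_label
                next_label += 1
            else:                                      # recolonization from nearby
                di, dj = rng.normal(0.0, sigma, size=2).round().astype(int)
                community[i, j] = community[(i + di) % L, (j + dj) % L]

        labels, counts = np.unique(community, return_counts=True)
        print(len(labels), "species; commonest has abundance", counts.max())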

  12. Numerical models for fluid-grains interactions: opportunities and limitations

    Science.gov (United States)

    Esteghamatian, Amir; Rahmani, Mona; Wachs, Anthony

    2017-06-01

    In the framework of a multi-scale approach, we develop numerical models for suspension flows. At the micro scale level, we perform particle-resolved numerical simulations using a Distributed Lagrange Multiplier/Fictitious Domain approach. At the meso scale level, we use a two-way Euler/Lagrange approach with a Gaussian filtering kernel to model fluid-solid momentum transfer. At both the micro and meso scale levels, particles are individually tracked in a Lagrangian way and all inter-particle collisions are computed by a Discrete Element/Soft-sphere method. The previous numerical models have been extended to handle particles of arbitrary shape (non-spherical, angular and even non-convex) as well as to treat heat and mass transfer. All simulation tools are fully MPI-parallel with standard domain decomposition and run on supercomputers with satisfactory scalability on up to a few thousand cores. The main asset of multi-scale analysis is the ability to extend our comprehension of the dynamics of suspension flows based on the knowledge acquired from the high-fidelity micro scale simulations, and to use that knowledge to improve the meso scale model. We illustrate how we can benefit from this strategy for a fluidized bed, where we introduce a stochastic drag force model derived from micro-scale simulations to recover the proper level of particle fluctuations. Conversely, we discuss the limitations of such modelling tools, such as their limited ability to capture lubrication forces and boundary layers in highly inertial flows. We suggest ways to overcome these limitations in order to further enhance the capabilities of the numerical models.
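
    As a small illustration of one building block named above, the Discrete Element/Soft-sphere method, the following sketch computes a linear spring-dashpot normal contact force between two particles; the stiffness and damping constants are assumed for illustration only.

        # Linear spring-dashpot soft-sphere contact: repulsion proportional to
        # overlap, damped by the normal component of the relative velocity.
        import numpy as np

        def contact_force(x1, x2, v1, v2, r1, r2, kn=1.0e4, eta=5.0):
            """Normal contact force on particle 1 from particle 2 (zero if apart)."""
            d = x1 - x2
            dist = np.linalg.norm(d)
            overlap = (r1 + r2) - dist
            if overlap <= 0.0:
                return np.zeros_like(d)
            n = d / dist                       # unit normal pointing from 2 to 1
            vn = np.dot(v1 - v2, n)            # normal relative velocity
            return (kn * overlap - eta * vn) * n

        f = contact_force(np.array([0.0, 0.0]), np.array([0.9, 0.0]),
                          np.array([-1.0, 0.0]), np.array([1.0, 0.0]), 0.5, 0.5)
        print(f)                               # repulsive force on particle 1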

  13. A relativistic self-consistent model for studying enhancement of space charge limited emission due to counter-streaming ions

    Science.gov (United States)

    Lin, M. C.; Verboncoeur, J.

    2016-10-01

    The maximum electron current transmitted through a planar diode gap is limited by the space charge of electrons dwelling in the gap region, the so-called space-charge-limited (SCL) emission. By introducing a counter-streaming ion flow to neutralize the electron charge density, the SCL emission can be raised dramatically, enhancing electron current transmission. In this work, we have developed a relativistic self-consistent model for studying the enhancement of maximum transmission by a counter-streaming ion current. The maximum enhancement is found when the ion effect is saturated, as shown analytically. The solutions in the non-relativistic, intermediate, and ultra-relativistic regimes are obtained and verified with 1-D particle-in-cell simulations. This self-consistent model is general; it can also serve as a benchmark for verification of simulation codes, and can be extended to higher dimensions.
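
    For orientation, the classical non-relativistic Child-Langmuir limit that such models generalize can be evaluated directly. This sketch is only the textbook planar-diode baseline, not the record's relativistic, ion-neutralized solution.

        # Child-Langmuir space-charge-limited current density for a planar gap:
        # J = (4/9) * eps0 * sqrt(2 e / m) * V^(3/2) / d^2
        import math

        EPS0 = 8.8541878128e-12       # F/m
        E_CHARGE = 1.602176634e-19    # C
        M_E = 9.1093837015e-31        # kg

        def child_langmuir_j(voltage_v, gap_m):
            """Space-charge-limited current density (A/m^2)."""
            return (4.0 / 9.0) * EPS0 * math.sqrt(2.0 * E_CHARGE / M_E) \
                   * voltage_v ** 1.5 / gap_m ** 2

        print(child_langmuir_j(100e3, 0.01))   # 100 kV across a 1 cm gap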

  14. Legal weight truck cask model impact limiter response

    International Nuclear Information System (INIS)

    Meinert, N.M.; Shappert, L.B.

    1989-01-01

    Dynamic and quasi-static quarter-scale model testing was performed to supplement the analytical case presented in the Nuclear Assurance Corporation Legal Weight Truck (NAC LWT) cask transport licensing application. Four successive drop tests from 9.0 meters (30 feet) onto an unyielding surface and one 1.0-meter (40-inch) drop onto a scale mild steel pin 3.8 centimeters (1.5 inches) in diameter corroborated the impact limiter design and structural analyses presented in the licensing application. Quantitative measurements made during drop testing support the impact limiter analyses. High-speed photography of the tests confirms that only a small amount of energy is elastically stored in the aluminum honeycomb and that oblique drop slapdown is not significant. The qualitative conclusion is that the limiter-protected LWT cask will not sustain permanent structural damage and containment will be maintained, subsequent to a hypothetical accident, as shown by structural analyses.

  15. Automatically quantifying the scientific quality and sensationalism of news records mentioning pandemics: validating a maximum entropy machine-learning model.

    Science.gov (United States)

    Hoffman, Steven J; Justicz, Victoria

    2016-07-01

    To develop and validate a method for automatically quantifying the scientific quality and sensationalism of individual news records. After retrieving 163,433 news records mentioning the Severe Acute Respiratory Syndrome (SARS) and H1N1 pandemics, a maximum entropy model for inductive machine learning was used to identify relationships among 500 randomly sampled news records that correlated with systematic human assessments of their scientific quality and sensationalism. These relationships were then computationally applied to automatically classify 10,000 additional randomly sampled news records. The model was validated by randomly sampling 200 records and comparing human assessments of them to the computer assessments. The computer model correctly assessed the relevance of 86% of news records, the quality of 65% of records, and the sensationalism of 73% of records, as compared to human assessments. Overall, the scientific quality of SARS and H1N1 news media coverage had potentially important shortcomings, but coverage was not overly sensationalized. Coverage slightly improved between the two pandemics. Automated methods can evaluate news records faster, cheaper, and possibly better than humans. The specific procedure implemented in this study can at the very least identify subsets of news records that are far more likely to have particular scientific and discursive qualities. Copyright © 2016 Elsevier Inc. All rights reserved.
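
    Maximum entropy classification of text is closely related to multinomial logistic regression, so the kind of classifier described above can be sketched on toy data; the example records, labels and features below are invented stand-ins for the study's human-coded training set.

        # Toy maximum-entropy (logistic regression) classifier over n-gram features.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        records = [
            "study finds vaccine reduces transmission in randomized trial",
            "experts cite peer-reviewed evidence on pandemic preparedness",
            "killer virus will wipe out millions, doctors terrified",
            "shocking plague panic spreads as mystery illness strikes",
        ]
        labels = ["high_quality", "high_quality", "sensational", "sensational"]

        clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                            LogisticRegression(max_iter=1000))
        clf.fit(records, labels)
        print(clf.predict(["terrifying outbreak sparks panic in city"]))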

  16. Treponema pallidum 3-Phosphoglycerate Mutase Is a Heat-Labile Enzyme That May Limit the Maximum Growth Temperature for the Spirochete

    Science.gov (United States)

    Benoit, Stéphane; Posey, James E.; Chenoweth, Matthew R.; Gherardini, Frank C.

    2001-01-01

    In the causative agent of syphilis, Treponema pallidum, the gene encoding 3-phosphoglycerate mutase, gpm, is part of a six-gene operon (tro operon) that is regulated by the Mn-dependent repressor TroR. Since substrate-level phosphorylation via the Embden-Meyerhof pathway is the principal way to generate ATP in T. pallidum and Gpm is a key enzyme in this pathway, Mn could exert a regulatory effect on central metabolism in this bacterium. To study this, T. pallidum gpm was cloned, Gpm was purified from Escherichia coli, and antiserum against the recombinant protein was raised. Immunoblots indicated that Gpm was expressed in freshly extracted infective T. pallidum. Enzyme assays indicated that Gpm did not require Mn2+ while 2,3-diphosphoglycerate (DPG) was required for maximum activity. Consistent with these observations, Mn did not copurify with Gpm. The purified Gpm was stable for more than 4 h at 25°C, retained only 50% activity after incubation for 20 min at 34°C or 10 min at 37°C, and was completely inactive after 10 min at 42°C. The temperature effect was attenuated when 1 mM DPG was added to the assay mixture. The recombinant Gpm from pSLB2 complemented E. coli strain PL225 (gpm) and restored growth on minimal glucose medium in a temperature-dependent manner. Increasing the temperature of cultures of E. coli PL225 harboring pSLB2 from 34 to 42°C resulted in a 7- to 11-h period in which no growth occurred (compared to wild-type E. coli). These data suggest that biochemical properties of Gpm could be one contributing factor to the heat sensitivity of T. pallidum. PMID:11466272

  17. Predicting the current and future potential distributions of lymphatic filariasis in Africa using maximum entropy ecological niche modelling.

    Directory of Open Access Journals (Sweden)

    Hannah Slater

    Full Text Available Modelling the spatial distributions of human parasite species is crucial to understanding the environmental determinants of infection as well as for guiding the planning of control programmes. Here, we use ecological niche modelling to map the current potential distribution of the macroparasitic disease, lymphatic filariasis (LF), in Africa, and to estimate how future changes in climate and population could affect its spread and burden across the continent. We used 508 community-specific infection presence data collated from the published literature in conjunction with five predictive environmental/climatic and demographic variables, and a maximum entropy niche modelling method to construct the first ecological niche maps describing the potential distribution and burden of LF in Africa. We also ran the best-fit model against climate projections made by the HADCM3 and CCCMA models for 2050 under the A2a and B2a scenarios to simulate the likely distribution of LF under future climate and population changes. We predict a broad geographic distribution of LF in Africa extending from the west to the east across the middle region of the continent, with high probabilities of occurrence in Western Africa compared to large areas of medium probability interspersed with smaller areas of high probability in Central and Eastern Africa and in Madagascar. We uncovered complex relationships between predictor ecological niche variables and the probability of LF occurrence. We show for the first time that predicted climate change and population growth will expand both the range and risk of LF infection (and ultimately disease) in an endemic region. We estimate that populations at risk of LF may range from 543 to 804 million currently, and that this could rise to between 1.65 and 1.86 billion in the future, depending on the climate scenario used and the thresholds applied to signify infection presence.
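
    MaxEnt niche modelling is closely related to penalized regression on presence versus background samples. The sketch below uses entirely synthetic covariates and a logistic-regression stand-in for the MaxEnt software used in the record, to illustrate the basic presence/background workflow.

        # Presence/background niche model on two synthetic covariates
        # (temperature in deg C, annual rainfall in mm).
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        presence = rng.normal([27.0, 1200.0], [1.5, 150.0], size=(508, 2))
        background = np.column_stack([rng.uniform(10, 35, 5000),
                                      rng.uniform(0, 3000, 5000)])
        X = np.vstack([presence, background])
        y = np.r_[np.ones(len(presence)), np.zeros(len(background))]

        niche = LogisticRegression(max_iter=1000).fit(X, y)
        # Relative habitat suitability of a candidate location:
        print(niche.predict_proba([[26.0, 1100.0]])[0, 1])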

  18. Two-phase-flow models and their limitations

    International Nuclear Information System (INIS)

    Ishii, M.; Kocamustafaogullari, G.

    1982-01-01

    An accurate prediction of transient two-phase flow is essential to safety analyses of nuclear reactors under accident conditions. The fluid flow and heat transfer encountered are often extremely complex due to the reactor geometry and the occurrence of transient two-phase flow. Recently, considerable progress in understanding and predicting these phenomena has been made by a combination of rigorous model development, advanced computational techniques, and a number of small and large scale supporting experiments. In view of their essential importance, the foundations of various two-phase-flow models and their limitations are discussed in this paper.

  19. Non compact continuum limit of two coupled Potts models

    International Nuclear Information System (INIS)

    Vernier, Éric; Jacobsen, Jesper Lykke; Saleur, Hubert

    2014-01-01

    We study two Q-state Potts models coupled by the product of their energy operators, in the regime 2 < Q ≤ 4 where the coupling is relevant. A particular choice of weights on the square lattice is shown to be equivalent to the integrable a_3^(2) vertex model. It corresponds to a selfdual system of two antiferromagnetic Potts models, coupled ferromagnetically. We derive the Bethe ansatz equations and study them numerically for two arbitrary twist angles. The continuum limit is shown to involve two compact bosons and one non compact boson, with discrete states emerging from the continuum at appropriate twists. The non compact boson entails strong logarithmic corrections to the finite-size behaviour of the scaling levels, an understanding of which allows us to correct an earlier proposal for some of the critical exponents. In particular, we infer the full set of magnetic scaling dimensions (watermelon operators) of the Potts model. (paper)

  20. Space-Charge-Limited Emission Models for Particle Simulation

    Science.gov (United States)

    Verboncoeur, J. P.; Cartwright, K. L.; Murphy, T.

    2004-11-01

    Space-charge-limited (SCL) emission of electrons from various materials is a common method of generating the high current beams required to drive high power microwave (HPM) sources. In the SCL emission process, sufficient space charge is extracted from a surface, often of complicated geometry, to drive the electric field normal to the surface close to zero. The emitted current is highly dominated by space charge effects as well as ambient fields near the surface. In this work, we consider computational models for the macroscopic SCL emission process including application of Gauss's law and the Child-Langmuir law for space-charge-limited emission. Models are described for ideal conductors, lossy conductors, and dielectrics. Also considered is the discretization of these models, and the implications for the emission physics. Previous work on primary and dual-cell emission models [Watrous et al., Phys. Plasmas 8, 289-296 (2001)] is reexamined, and aspects of the performance, including fidelity and noise properties, are improved. Models for one-dimensional diodes are considered, as well as multidimensional emitting surfaces, which include corners and transverse fields.
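
    The macroscopic SCL rule discussed above (emit just enough charge each step to drive the cathode-normal field to zero) can be sketched in one dimension with charge sheets; the units, time step and purely ballistic push below are deliberately crude simplifications, not the cell-based models of the record.

        # Minimal 1-D sheet-charge sketch of space-charge-limited emission: at each
        # step, inject exactly the sheet charge that nulls the electric field at the
        # cathode, then advance the sheets. Values are assumptions for illustration.
        EPS0 = 8.8541878128e-12          # F/m
        QM = -1.76e11                    # electron charge-to-mass ratio (C/kg)
        V, D, DT = 1.0e4, 0.01, 1.0e-12  # anode voltage (V), gap (m), time step (s)

        pos, vel, sigma = [], [], []     # sheet positions, velocities, charges

        def cathode_field():
            # E at x = 0 for a planar gap at potentials 0 and V plus charge sheets.
            s = sum(sg * (D - x) for sg, x in zip(sigma, pos))
            return -(V + s / EPS0) / D

        for step in range(200):
            e0 = cathode_field()
            if e0 < 0.0:                 # field still pulls electrons off the cathode
                sigma.append(EPS0 * e0)  # exactly nulls E(0); negative (electrons)
                pos.append(0.0)
                vel.append(0.0)
            for k in range(len(pos)):    # crude ballistic push in the vacuum field
                vel[k] += QM * (-V / D) * DT
                pos[k] += vel[k] * DT
            keep = [k for k in range(len(pos)) if pos[k] < D]   # absorb at anode
            pos = [pos[k] for k in keep]
            vel = [vel[k] for k in keep]
            sigma = [sigma[k] for k in keep]

        print("sheets in gap:", len(pos), " cathode field:", cathode_field())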

  1. Review of revised Klamath River Total Maximum Daily Load models from Link River Dam to Keno Dam, Oregon

    Science.gov (United States)

    Rounds, Stewart A.; Sullivan, Annett B.

    2013-01-01

    Flow and water-quality models are being used to support the development of Total Maximum Daily Load (TMDL) plans for the Klamath River downstream of Upper Klamath Lake (UKL) in south-central Oregon. For riverine reaches, the RMA-2 and RMA-11 models were used, whereas the CE-QUAL-W2 model was used to simulate pooled reaches. The U.S. Geological Survey (USGS) was asked to review the most upstream of these models, from Link River Dam at the outlet of UKL downstream through the first pooled reach of the Klamath River from Lake Ewauna to Keno Dam. Previous versions of these models were reviewed in 2009 by USGS. Since that time, important revisions were made to correct several problems and address other issues. This review documents an assessment of the revised models, with emphasis on the model revisions and any remaining issues. The primary focus of this review is the 19.7-mile Lake Ewauna to Keno Dam reach of the Klamath River that was simulated with the CE-QUAL-W2 model. Water spends far more time in the Lake Ewauna to Keno Dam reach than in the 1-mile Link River reach that connects UKL to the Klamath River, and most of the critical reactions affecting water quality upstream of Keno Dam occur in that pooled reach. This model review includes assessments of the 2000 and 2002 current-conditions scenarios, which were used to calibrate the model, as well as a natural conditions scenario that was used as the reference condition for the TMDL and was based on the 2000 flow conditions. The natural conditions scenario included the removal of Keno Dam, restoration of the Keno reef (a shallow spot that was removed when the dam was built), removal of all point-source inputs, and derivation of upstream boundary water-quality inputs from a previously developed UKL TMDL model. This review examined the details of the models, including model algorithms, parameter values, and boundary conditions; the review did not assess the draft Klamath River TMDL or the TMDL allocations.

  2. A spatiotemporal dengue fever early warning model accounting for nonlinear associations with meteorological factors: a Bayesian maximum entropy approach

    Science.gov (United States)

    Lee, Chieh-Han; Yu, Hwa-Lung; Chien, Lung-Chang

    2014-05-01

    Dengue fever has been identified as one of the most widespread vector-borne diseases in tropical and sub-tropical regions. In the last decade, dengue has been an emerging infectious disease epidemic in Taiwan, especially in the southern area, where incidences are high every year. For the purpose of disease prevention and control, an early warning system is urgently needed. Previous studies have shown significant relationships between climate variables, in particular rainfall and temperature, and the temporal epidemic patterns of dengue cases. However, the transmission of dengue fever is a complex interactive process whose composite space-time effects are mostly understated. This study proposes a one-week-ahead warning system for dengue fever epidemics in southern Taiwan that considers nonlinear associations between weekly dengue cases and meteorological factors across space and time. The early warning system is based on an integration of a distributed lag nonlinear model (DLNM) and stochastic Bayesian Maximum Entropy (BME) analysis. The study identified the most significant meteorological measures, namely weekly minimum temperature and maximum 24-hour rainfall, with lagged effects of up to 15 weeks on the variation in dengue cases under conditions of uncertainty. Subsequently, the combination of nonlinear lagged effects of climate variables and a space-time dependence function is implemented via a Bayesian framework to predict dengue fever occurrences in southern Taiwan during 2012. The results show that the early warning system is useful for providing spatio-temporal predictions of potential dengue fever outbreaks. In conclusion, the proposed approach can provide a practical disease control tool for environmental regulators seeking more effective strategies for dengue fever prevention.
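
    A minimal sketch of the distributed-lag idea, using a Poisson regression on lagged covariates as a stand-in for the record's DLNM plus Bayesian Maximum Entropy machinery; all data below are synthetic and the 8-week lag planted in the simulated signal is an arbitrary choice.

        # Distributed-lag Poisson regression of weekly counts on 0..15-week lags.
        import numpy as np
        from sklearn.linear_model import PoissonRegressor

        rng = np.random.default_rng(2)
        weeks, max_lag = 260, 15
        t = np.arange(weeks)
        tmin = 18 + 8 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 1, weeks)
        rain = rng.gamma(2.0, 20.0, weeks)
        cases = rng.poisson(np.exp(0.5 + 0.08 * np.roll(tmin, 8)))  # lag-8 driver

        def lag_matrix(series, max_lag):
            """Columns hold the series at lags 0..max_lag, aligned to t >= max_lag."""
            return np.column_stack([series[max_lag - l: len(series) - l]
                                    for l in range(max_lag + 1)])

        X = np.hstack([lag_matrix(tmin, max_lag), lag_matrix(rain, max_lag)])
        y = cases[max_lag:]
        fit = PoissonRegressor(alpha=1.0, max_iter=1000).fit(X, y)
        print("strongest temperature coefficient at lag",
              int(np.argmax(fit.coef_[:max_lag + 1])))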

  3. Terrain Classification on Venus from Maximum-Likelihood Inversion of Parameterized Models of Topography, Gravity, and their Relation

    Science.gov (United States)

    Eggers, G. L.; Lewis, K. W.; Simons, F. J.; Olhede, S.

    2013-12-01

    Venus does not possess a plate-tectonic system like that observed on Earth, and many surface features, such as tesserae and coronae, lack terrestrial equivalents. To understand Venus' tectonics is to understand its lithosphere, requiring a study of topography and gravity, and how they relate. Past studies of topography dealt with mapping and classification of visually observed features, and studies of gravity dealt with inverting the relation between topography and gravity anomalies to recover surface density and elastic thickness in either the space (correlation) or the spectral (admittance, coherence) domain. In the former case, geological features could be delineated but not classified quantitatively. In the latter case, rectangular or circular data windows were used, lacking geological definition. While the estimates of lithospheric strength on this basis were quantitative, they lacked robust error estimates. Here, we remapped the surface into 77 regions visually and qualitatively defined from a combination of Magellan topography, gravity, and radar images. We parameterize the spectral covariance of the observed topography, treating it as a Gaussian process assumed to be stationary over the mapped regions, using a three-parameter isotropic Matérn model, and perform maximum-likelihood-based inversions for the parameters. We discuss the parameter distribution across the Venusian surface and across terrain types such as coronae, dorsae and tesserae, and their relation with mean elevation and latitudinal position. We find that the three-parameter model, while mathematically established and applicable to Venus topography, is overparameterized, and we thus reduce the results to a two-parameter description of the peak spectral variance and the range-to-half-peak variance (as a function of the wavenumber). With this reduction, the clustering of geological region types in two-parameter space becomes promising. Finally, we perform inversions for the joint spectral variance of
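
    A one-dimensional analogue of the maximum-likelihood spectral fitting described above is the Whittle likelihood: fit a parameterized Matérn-like spectrum to a periodogram. The sketch below uses a synthetic series and a simplified spectral form, not the authors' two-dimensional topography likelihood.

        # Whittle-type maximum-likelihood fit of S(k) = s2 / (k^2 + rho^2)^nu.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(3)
        n = 2048
        k = np.fft.rfftfreq(n, d=1.0)[1:]            # positive wavenumbers

        def spectrum(kk, s2, rho, nu):
            return s2 / (kk ** 2 + rho ** 2) ** nu

        # Synthetic periodogram: exponential scatter about a known spectrum.
        I_k = spectrum(k, 1.0, 0.1, 1.5) * rng.exponential(1.0, size=k.size)

        def whittle_nll(theta):
            s2, rho, nu = np.exp(theta)              # log-parameters keep them positive
            S = spectrum(k, s2, rho, nu)
            return np.sum(np.log(S) + I_k / S)

        res = minimize(whittle_nll, x0=np.log([0.5, 0.05, 1.0]), method="Nelder-Mead")
        print(np.exp(res.x))                         # compare with (1.0, 0.1, 1.5)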

  4. A proposed adaptive step size perturbation and observation maximum power point tracking algorithm based on photovoltaic system modeling

    Science.gov (United States)

    Huang, Yu

    Solar energy has become one of the major renewable energy options owing to its abundance and accessibility. Because of its intermittent nature, there is high demand for Maximum Power Point Tracking (MPPT) techniques when a photovoltaic (PV) system is used to extract energy from sunlight. This thesis proposes an advanced Perturbation and Observation (P&O) algorithm aimed at practical operating circumstances. First, a practical PV system model is studied, including determination of the series and shunt resistances that are neglected in some research. Moreover, in the proposed algorithm, the duty ratio of a boost DC-DC converter is the perturbed variable, with input impedance conversion deployed to adjust the operating voltage. Based on this control strategy, an adaptive duty-ratio step size P&O algorithm is proposed, with major modifications made for sharp insolation changes as well as low-insolation scenarios. Matlab/Simulink simulations of the PV model, the boost converter control strategy and the various MPPT processes are conducted step by step. The proposed adaptive P&O algorithm is validated by the simulation results and a detailed analysis of sharp insolation changes, low-insolation conditions and continuous insolation variation.
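
    The core perturb-and-observe loop with an adaptive step can be sketched as follows; the toy power-versus-duty curve and the particular step-size rule are illustrative assumptions, not the thesis's modified algorithm.

        # Adaptive-step perturb-and-observe MPPT acting on a boost-converter duty ratio.
        def pv_power(duty):
            # Toy unimodal power-vs-duty curve with its maximum near duty = 0.55.
            return max(0.0, 100.0 - 400.0 * (duty - 0.55) ** 2)

        def adaptive_p_and_o(duty=0.30, step=0.02, gain=0.05, n_iter=60):
            p_prev = pv_power(duty)
            direction = 1.0
            for _ in range(n_iter):
                duty = min(max(duty + direction * step, 0.0), 0.95)
                p = pv_power(duty)
                if p < p_prev:              # power dropped: reverse the perturbation
                    direction = -direction
                # Step size proportional to the observed power change, bounded to
                # avoid both steady-state oscillation and sluggish tracking.
                step = min(max(gain * abs(p - p_prev) / 100.0, 1e-4), 0.05)
                p_prev = p
            return duty, p_prev

        print(adaptive_p_and_o())           # expect a duty ratio near 0.55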

  5. Predicting Changes in Macrophyte Community Structure from Functional Traits in a Freshwater Lake: A Test of Maximum Entropy Model

    Science.gov (United States)

    Fu, Hui; Zhong, Jiayou; Yuan, Guixiang; Guo, Chunjing; Lou, Qian; Zhang, Wei; Xu, Jun; Ni, Leyi; Xie, Ping; Cao, Te

    2015-01-01

    Trait-based approaches have been widely applied to investigate how community dynamics respond to environmental gradients. In this study, we applied a series of maximum entropy (maxent) models incorporating functional traits to unravel the processes governing macrophyte community structure along a water depth gradient in a freshwater lake. We sampled 42 plots and 1513 individual plants, and measured 16 functional traits and the abundance of 17 macrophyte species. Study results showed that the maxent model can be highly robust (99.8%) in predicting the species relative abundance of macrophytes with observed community-weighted mean (CWM) traits as the constraints, but relatively low (about 30%) with CWM traits fitted from the water depth gradient as the constraints. The measured traits showed notably distinct importance in predicting species abundances, with the lowest for perennial growth form and the highest for leaf dry mass content. For tuber and leaf nitrogen content, there were significant shifts in their effects on species relative abundance from positive in shallow water to negative in deep water. This result suggests that macrophyte species with tuber organs and greater leaf nitrogen content would become more abundant in shallow water, but less abundant in deep water. Our study highlights how functional traits distributed across gradients provide a robust path towards predictive community ecology. PMID:26167856
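
    The trait-constrained maxent prediction described above amounts to choosing relative abundances p_i proportional to exp(sum_j lambda_j * t_ij), with the multipliers solved so that community-weighted means match the observed CWM traits. The sketch below uses synthetic trait data, not the study's 16 measured traits.

        # Maxent relative abundances under community-weighted mean constraints.
        import numpy as np
        from scipy.optimize import root

        rng = np.random.default_rng(4)
        n_species, n_traits = 17, 3
        traits = rng.normal(size=(n_species, n_traits))         # t_ij
        cwm_obs = traits.T @ rng.dirichlet(np.ones(n_species))  # target CWMs

        def abundances(lam):
            w = np.exp(traits @ lam)
            return w / w.sum()

        def constraint_gap(lam):
            return traits.T @ abundances(lam) - cwm_obs

        sol = root(constraint_gap, x0=np.zeros(n_traits))
        print(sol.success, np.round(abundances(sol.x), 3))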

  6. Accuracy limit of rigid 3-point water models

    Science.gov (United States)

    Izadi, Saeed; Onufriev, Alexey V.

    2016-08-01

    Classical 3-point rigid water models are most widely used due to their computational efficiency. Recently, we introduced a new approach to constructing classical rigid water models [S. Izadi et al., J. Phys. Chem. Lett. 5, 3863 (2014)], which permits a virtually exhaustive search for globally optimal model parameters in the sub-space that is most relevant to the electrostatic properties of the water molecule in liquid phase. Here we apply the approach to develop a 3-point Optimal Point Charge (OPC3) water model. OPC3 is significantly more accurate than the commonly used water models of same class (TIP3P and SPCE) in reproducing a comprehensive set of liquid bulk properties, over a wide range of temperatures. Beyond bulk properties, we show that OPC3 predicts the intrinsic charge hydration asymmetry (CHA) of water — a characteristic dependence of hydration free energy on the sign of the solute charge — in very close agreement with experiment. Two other recent 3-point rigid water models, TIP3PFB and H2ODC, each developed by its own, completely different optimization method, approach the global accuracy optimum represented by OPC3 in both the parameter space and accuracy of bulk properties. Thus, we argue that an accuracy limit of practical 3-point rigid non-polarizable models has effectively been reached; remaining accuracy issues are discussed.

  7. WCSPH with Limiting Viscosity for Modeling Landslide Hazard at the Slopes of Artificial Reservoir

    Directory of Open Access Journals (Sweden)

    Sauro Manenti

    2018-04-01

    Full Text Available This work illustrates an application of the FOSS code SPHERA v.8.0 (RSE SpA, Milano, Italy) to the simulation of landslide hazard at the slope of a water basin. SPHERA is based on the weakly compressible SPH method (WCSPH) and holds a mixture model, consistent with the packing limit of the Kinetic Theory of Granular Flow (KTGF), which was previously tested for simulating two-phase free-surface rapid flows involving water-sediment interaction. In this study a limiting viscosity parameter was implemented in the previous formulation of the mixture model to limit the growth of the apparent viscosity, thus saving computational time while preserving the solution accuracy. This approach is consistent with the experimental behavior of high polymer solutions, for which an almost constant value of viscosity may be approached at very low deformation rates near the transition zone of the elastic–plastic regime. In this application, the limiting viscosity was used as a numerical parameter for optimization of the computation. Some preliminary tests were performed by simulating a 2D erosional dam break, proving that a proper selection of the limiting viscosity leads to a considerable drop in computational time without significantly altering the numerical solution. SPHERA was then validated by simulating a 2D scale experiment reproducing the early phase of the Vajont landslide, when a tsunami wave was generated that climbed the opposite mountain side with a maximum run-up of about 270 m. The obtained maximum run-up was very close to the experimental result. The influence of saturation of the landslide material below the still water level was also accounted for, showing that the landslide dynamics can be better represented and the wave run-up properly estimated.
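
    The limiting-viscosity device itself reduces to capping an apparent-viscosity law. A minimal sketch, assuming an illustrative power-law mixture rheology rather than the KTGF closure used in SPHERA:

        # Cap the apparent viscosity so it cannot grow without bound as the
        # shear rate vanishes; parameter values are illustrative only.
        def apparent_viscosity(shear_rate, k=50.0, n=0.4, mu_limit=1.0e3):
            """Power-law viscosity mu = k * gamma^(n - 1), capped at mu_limit."""
            if shear_rate <= 0.0:
                return mu_limit
            return min(k * shear_rate ** (n - 1.0), mu_limit)

        for gamma in (1e-6, 1e-3, 1.0, 10.0):
            print(gamma, apparent_viscosity(gamma))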

  8. Implications of emission zone limits for the Ruderman-Sutherland pulsar model

    International Nuclear Information System (INIS)

    Matese, J.J.; Whitmire, D.P.

    1980-01-01

    In the Ruderman-Sutherland (RS) pulsar model the frequency at which coherent radiation is emitted depends upon the source location, ν = ν(r). In the oblique rotator version of this model the time-averaged tangential velocities of the magnetosphere sources must increase linearly with radius, and this leads to a frequency-dependent aberration and retardation time delay in which higher frequencies lag behind lower frequencies. As previously noted by Cordes, within the context of a given model which specifies ν(r), the absence of any anomalous time delay in dispersion measurements allows limits to be placed on the radial position of the source of a given frequency. In this paper we give a time-delay analysis (similar to that of Cordes) appropriate for the RS model and show that existing dispersion measurements are incompatible with the RS emission mechanism. If the basic RS emission mechanism is applicable to pulsars, we find that the most plausible modification consistent with the dispersion data is a reduction in the low-energy plasma density by a factor of approximately 10^-4 to 10^-5. This has the effect of bringing the radio emission zone closer to the stellar surface, thereby making the model consistent with the dispersion data. In addition, this modification results in a significant decrease in the predicted maximum cone angle and an increase in the predicted maximum frequency by factors which bring these predictions more in line with observation. We also consider the implications of a reduced plasma density for radio luminosity.

  9. Testing of materials and scale models for impact limiters

    International Nuclear Information System (INIS)

    Maji, A.K.; Satpathi, D.; Schryer, H.L.

    1991-01-01

    Aluminum honeycomb and polyurethane foam specimens were tested to obtain experimental data on the materials' behavior under different loading conditions. This paper reports the dynamic tests conducted on the materials and on the design and testing of scale models made out of these "Impact Limiters," as they are used in the design of transportation casks. Dynamic tests were conducted on a modified Charpy impact machine with associated instrumentation, and compared with static test results. A scale model testing setup was designed and used for preliminary tests on models being used by current designers of transportation casks. The paper presents preliminary results of the program. Additional information will be available and reported at the time of presentation of the paper.

  10. Low-energy limit of the extended Linear Sigma Model

    Energy Technology Data Exchange (ETDEWEB)

    Divotgey, Florian [Johann Wolfgang Goethe-Universitaet, Institut fuer Theoretische Physik, Frankfurt am Main (Germany); Kovacs, Peter [Wigner Research Center for Physics, Hungarian Academy of Sciences, Institute for Particle and Nuclear Physics, Budapest (Hungary); GSI Helmholtzzentrum fuer Schwerionenforschung, ExtreMe Matter Institute, Darmstadt (Germany); Giacosa, Francesco [Johann Wolfgang Goethe-Universitaet, Institut fuer Theoretische Physik, Frankfurt am Main (Germany); Jan-Kochanowski University, Institute of Physics, Kielce (Poland); Rischke, Dirk H. [Johann Wolfgang Goethe-Universitaet, Institut fuer Theoretische Physik, Frankfurt am Main (Germany); University of Science and Technology of China, Interdisciplinary Center for Theoretical Study and Department of Modern Physics, Hefei, Anhui (China)

    2018-01-15

    The extended Linear Sigma Model is an effective hadronic model based on the linear realization of chiral symmetry SU(N{sub f}){sub L} x SU(N{sub f}){sub R}, with (pseudo)scalar and (axial-)vector mesons as degrees of freedom. In this paper, we study the low-energy limit of the extended Linear Sigma Model (eLSM) for N{sub f} = 2 flavors by integrating out all fields except for the pions, the (pseudo-)Nambu-Goldstone bosons of chiral symmetry breaking. The resulting low-energy effective action is identical to Chiral Perturbation Theory (ChPT) after choosing a representative for the coset space generated by chiral symmetry breaking and expanding it in powers of (derivatives of) the pion fields. The tree-level values of the coupling constants of the effective low-energy action agree remarkably well with those of ChPT. (orig.)

  11. Animal models of myasthenia gravis: utility and limitations

    Science.gov (United States)

    Mantegazza, Renato; Cordiglieri, Chiara; Consonni, Alessandra; Baggi, Fulvio

    2016-01-01

    Myasthenia gravis (MG) is a chronic autoimmune disease caused by the immune attack of the neuromuscular junction. Antibodies directed against the acetylcholine receptor (AChR) induce receptor degradation, complement cascade activation, and postsynaptic membrane destruction, resulting in functional reduction in AChR availability. Besides anti-AChR antibodies, other autoantibodies are known to play pathogenic roles in MG. The experimental autoimmune MG (EAMG) models have been of great help over the years in understanding the pathophysiological role of specific autoantibodies and T helper lymphocytes and in suggesting new therapies for prevention and modulation of the ongoing disease. EAMG can be induced in mice and rats of susceptible strains that show clinical symptoms mimicking the human disease. EAMG models are helpful for studying both the muscle and the immune compartments to evaluate new treatment perspectives. In this review, we concentrate on recent findings on EAMG models, focusing on their utility and limitations. PMID:27019601

  12. Sum rule limitations of kinetic particle-production models

    International Nuclear Information System (INIS)

    Knoll, J.; CEA Centre d'Etudes Nucleaires de Grenoble, 38; Guet, C.

    1988-04-01

    Photoproduction and absorption sum rules generalized to systems at finite temperature provide a stringent check on the validity of kinetic models for the production of hard photons in intermediate energy nuclear collisions. We inspect such models for the case of nuclear matter at finite temperature employed in a kinetic regime comparable to those encountered in energetic nuclear collisions, and find photon production rates which significantly exceed the limits imposed by the sum rule even under favourable concessions. This suggests that coherence effects are quite important and that the production of photons cannot be considered as an incoherent addition of individual NNγ production processes. The deficiencies of present kinetic models may also apply to the production of probes such as the pion which do not couple perturbatively to the nuclear currents. (orig.)

  13. PROCOV: maximum likelihood estimation of protein phylogeny under covarion models and site-specific covarion pattern analysis

    Directory of Open Access Journals (Sweden)

    Wang Huai-Chun

    2009-09-01

    Full Text Available Abstract Background The covarion hypothesis of molecular evolution holds that selective pressures on a given amino acid or nucleotide site are dependent on the identity of other sites in the molecule that change throughout time, resulting in changes of evolutionary rates of sites along the branches of a phylogenetic tree. At the sequence level, covarion-like evolution at a site manifests as conservation of nucleotide or amino acid states among some homologs where the states are not conserved in other homologs (or groups of homologs). Covarion-like evolution has been shown to relate to changes in functions at sites in different clades, and, if ignored, can adversely affect the accuracy of phylogenetic inference. Results PROCOV (protein covarion analysis) is a software tool that implements a number of previously proposed covarion models of protein evolution for phylogenetic inference in a maximum likelihood framework. Several algorithmic and implementation improvements in this tool over previous versions make computationally expensive tree searches with covarion models more efficient and analyses of large phylogenomic data sets tractable. PROCOV can be used to identify covarion sites by comparing the site likelihoods under the covarion process to the corresponding site likelihoods under a rates-across-sites (RAS) process. Those sites with the greatest log-likelihood difference between a 'covarion' and an RAS process were found to be of functional or structural significance in a dataset of bacterial and eukaryotic elongation factors. Conclusion Covarion models implemented in PROCOV may be especially useful for phylogenetic estimation when ancient divergences between sequences have occurred and rates of evolution at sites are likely to have changed over the tree. It can also be used to study lineage-specific functional shifts in protein families that result in changes in the patterns of site variability among subtrees.

  14. Influence of Last Glacial Maximum boundary conditions on the global water isotope distribution in an atmospheric general circulation model

    Directory of Open Access Journals (Sweden)

    T. Tharammal

    2013-03-01

    Full Text Available To understand the validity of δ18O proxy records as indicators of past temperature change, a series of experiments was conducted using an atmospheric general circulation model fitted with water isotope tracers (Community Atmosphere Model version 3.0, IsoCAM). A pre-industrial simulation was performed as the control experiment, as well as a simulation with all the boundary conditions set to Last Glacial Maximum (LGM) values. Results from the pre-industrial and LGM simulations were compared to experiments in which the individual boundary conditions (greenhouse gases, ice sheet albedo and topography, sea surface temperature (SST), and orbital parameters) were changed one at a time to assess their individual impact. The experiments were designed to analyze the spatial variations of the oxygen isotopic composition of precipitation (δ18Oprecip) in response to individual climate factors. The change in topography (due to the change in land ice cover) played a significant role in reducing the surface temperature and δ18Oprecip over North America. Exposed shelf areas and the ice sheet albedo further reduced the Northern Hemisphere surface temperature and δ18Oprecip. A global mean cooling of 4.1 °C was simulated with combined LGM boundary conditions compared to the control simulation, in agreement with previous experiments using the fully coupled Community Climate System Model (CCSM3). Large reductions in δ18Oprecip over the LGM ice sheets were strongly linked to the temperature decrease over them. The SST and ice sheet topography changes were responsible for most of the changes in the climate and hence the δ18Oprecip distribution among the simulations.

  15. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution.

    Science.gov (United States)

    Ferrari, Ulisse

    2016-08-01

    Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
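
    The basic moment-matching loop that the record's rectified algorithm accelerates can be sketched for a small pairwise Ising model: estimate model moments by Gibbs sampling and ascend the log-likelihood gradient. The toy data, learning rate and sweep counts are assumptions; the curvature rectification and posterior sampling are not reproduced.

        # Pairwise Ising maximum-entropy fit by gradient ascent on the log-likelihood,
        # with model moments estimated by Gibbs sampling.
        import numpy as np

        rng = np.random.default_rng(5)
        n, n_data = 5, 2000
        data = rng.choice([-1, 1], size=(n_data, n))            # toy "recordings"
        m_data, c_data = data.mean(0), (data.T @ data) / n_data

        h, J = np.zeros(n), np.zeros((n, n))

        def gibbs_moments(h, J, n_sweeps=2000):
            s = rng.choice([-1, 1], size=n)
            m, c = np.zeros(n), np.zeros((n, n))
            for _ in range(n_sweeps):
                for i in range(n):                              # one Gibbs sweep
                    field = h[i] + J[i] @ s - J[i, i] * s[i]
                    s[i] = 1 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * field)) else -1
                m += s
                c += np.outer(s, s)
            return m / n_sweeps, c / n_sweeps

        lr = 0.1
        for _ in range(50):
            m_model, c_model = gibbs_moments(h, J)
            h += lr * (m_data - m_model)                        # log-likelihood gradient
            J += lr * (c_data - c_model)
            np.fill_diagonal(J, 0.0)

        print("residual magnetization mismatch:", np.abs(m_data - m_model).max())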

  16. Quasi-Maximum Likelihood Estimation and Bootstrap Inference in Fractional Time Series Models with Heteroskedasticity of Unknown Form

    DEFF Research Database (Denmark)

    Cavaliere, Giuseppe; Nielsen, Morten Ørregaard; Taylor, Robert

    We consider the problem of conducting estimation and inference on the parameters of univariate heteroskedastic fractionally integrated time series models. We first extend existing results in the literature, developed for conditional sum-of-squares estimators in the context of parametric fractional time series models driven by conditionally homoskedastic shocks, to allow for conditional and unconditional heteroskedasticity of a quite general and unknown form. Global consistency and asymptotic normality are shown to still obtain; however, the covariance matrix of the limiting distribution of the estimator now depends on nuisance parameters derived both from the weak dependence and the heteroskedasticity present in the shocks. We then investigate classical methods of inference based on the Wald, likelihood ratio and Lagrange multiplier tests for linear hypotheses on either or both of the long and short
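
    For context, a conditional sum-of-squares fit of the memory parameter d in the simplest fractional model, ARFIMA(0, d, 0), can be sketched as follows; the simulation and truncated filter are illustrative, and the record's heteroskedasticity-robust bootstrap inference is not reproduced.

        # CSS estimation of d: fractionally difference the series and minimize
        # the residual sum of squares.
        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(6)

        def frac_diff(x, d):
            """Truncated fractional-difference filter (1 - L)^d applied to x."""
            n = len(x)
            pi = np.empty(n)
            pi[0] = 1.0
            for k in range(1, n):
                pi[k] = pi[k - 1] * (k - 1 - d) / k
            return np.array([pi[:t + 1][::-1] @ x[:t + 1] for t in range(n)])

        # Simulate ARFIMA(0, 0.3, 0) by applying (1 - L)^(-d) to white noise.
        d_true = 0.3
        x = frac_diff(rng.normal(size=500), -d_true)

        css = lambda d: np.sum(frac_diff(x, d) ** 2)
        res = minimize_scalar(css, bounds=(0.0, 0.49), method="bounded")
        print(res.x)                     # should be close to 0.3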

  17. An inventory model of purchase quantity for fully-loaded vehicles with maximum trips in consecutive transport time

    Directory of Open Access Journals (Sweden)

    Chen Poyu

    2013-01-01

    Full Text Available Products made overseas but sold in Taiwan are very common. For the cross-border or interregional production and marketing of goods, inventory decision-makers often have to determine the amount purchased per cycle, the number of transport vehicles, the working hours of each transport vehicle, and whether delivery to sales offices is by ground or air transport, in order to minimize the total inventory cost per unit time. This model assumes that the amount purchased in each order cycle should allow all rented vehicles to be fully loaded and the number of transport trips to reach the upper limit within the time period. The main research findings of this study include the optimal solution of the model's integer program and the results of a sensitivity analysis.

  18. Using spatially detailed water-quality data and solute-transport modeling to improve support for total maximum daily load development

    Science.gov (United States)

    Walton-Day, Katherine; Runkel, Robert L.; Kimball, Briant A.

    2012-01-01

    Spatially detailed mass-loading studies and solute-transport modeling using OTIS (One-dimensional Transport with Inflow and Storage) demonstrate how natural attenuation and loading from distinct and diffuse sources control stream water quality and affect load reductions predicted in total maximum daily loads (TMDLs). Mass-loading data collected during low-flow from Cement Creek (a low-pH, metal-rich stream because of natural and mining sources, and subject to TMDL requirements) were used to calibrate OTIS and showed spatially variable effects of natural attenuation (instream reactions) and loading from diffuse (groundwater) and distinct sources. OTIS simulations of the possible effects of TMDL-recommended remediation of mine sites showed less improvement to dissolved zinc load and concentration (14% decrease) than did the TMDL (53-63% decrease). The TMDL (1) assumed conservative transport, (2) accounted for loads removed by remediation by subtracting them from total load at the stream mouth, and (3) did not include diffuse-source loads. In OTIS, loads were reduced near their source; the resulting concentration was decreased by natural attenuation and increased by diffuse-source loads during downstream transport. Thus, by not including natural attenuation and loading from diffuse sources, the TMDL overestimated remediation effects at low flow. Use of the techniques presented herein could improve TMDLs by incorporating these processes during TMDL development.

  19. Matrix models, Argyres-Douglas singularities and double scaling limits

    International Nuclear Information System (INIS)

    Bertoldi, Gaetano

    2003-01-01

    We construct an N = 1 theory with gauge group U(nN) and degree n+1 tree level superpotential whose matrix model spectral curve develops an Argyres-Douglas singularity. The calculation of the tension of domain walls in the U(nN) theory shows that the standard large-N expansion breaks down at the Argyres-Douglas points, with tension that scales as a fractional power of N. Nevertheless, it is possible to define appropriate double scaling limits which are conjectured to yield the tension of 2-branes in the resulting N = 1 four dimensional non-critical string theories as proposed by Ferrari. (author)

  20. Mapping the Global Potential Geographical Distribution of Black Locust (Robinia Pseudoacacia L.) Using Herbarium Data and a Maximum Entropy Model

    Directory of Open Access Journals (Sweden)

    Guoqing Li

    2014-11-01

    Full Text Available Black locust (Robinia pseudoacacia L.) is a tree species of high economic and ecological value, but is also considered to be highly invasive. Understanding the global potential distribution and ecological characteristics of this species is a prerequisite for its practical exploitation as a resource. Here, maximum entropy modeling (MaxEnt) was used to simulate the potential distribution of this species around the world, and the dominant climatic factors affecting its distribution were selected by using a jackknife test and the regularized gain change during each iteration of the training algorithm. The results show that the MaxEnt model performs better than random, with an average test AUC value of 0.9165 (±0.0088). The coldness index, annual mean temperature and warmth index were the most important climatic factors affecting the species distribution, explaining 65.79% of the variability in the geographical distribution. Species response curves showed unimodal relationships with the annual mean temperature and warmth index, whereas there was a linear relationship with the coldness index. The dominant climatic conditions in the core of the black locust distribution are a coldness index of −9.8 °C–0 °C, an annual mean temperature of 5.8 °C–14.5 °C, a warmth index of 66 °C–168 °C and an annual precipitation of 508–1867 mm. The potential distribution of black locust is located mainly in the United States, the United Kingdom, Germany, France, the Netherlands, Belgium, Italy, Switzerland, Australia, New Zealand, China, Japan, South Korea, South Africa, Chile and Argentina. The predictive map of black locust, the climatic thresholds and the species response curves can provide globally applicable guidelines and valuable information for policymakers and planners involved in the introduction, planting and invasion control of this species around the world.

  1. Binary versus non-binary information in real time series: empirical results and maximum-entropy matrix models

    Science.gov (United States)

    Almog, Assaf; Garlaschelli, Diego

    2014-09-01

    The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of multiple time series of activity of the constituent units, such as stocks or neurons, respectively. While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relations between binary and non-binary properties of financial time series. These relations are a novel quantification of the fact that extreme price increments occur more often when most stocks move in the same direction. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices and their mapping to spin models in statistical physics, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to accurately replicate, and mathematically characterize, the observed binary/non-binary relations. We also obtain a phase diagram allowing us to identify, based only on the instantaneous aggregate return of a set of multiple time series, a regime where the so-called ‘market mode’ has an optimal interpretation in terms of collective (endogenous) effects, a regime where it is parsimoniously explained by pure noise, and a regime where it can be regarded as a combination of endogenous and exogenous factors. Our approach allows us to connect spin models, simple stochastic processes, and ensembles of time series inferred from partial information.

  3. Mapping distribution of Rastrelliger kanagurta in the exclusive economic zone (EEZ) of Malaysia using maximum entropy modeling approach

    Science.gov (United States)

    Yusop, Syazwani Mohd; Mustapha, Muzzneena Ahmad

    2018-04-01

    The coupling of fishing locations for R. kanagurta obtained from SEAFDEC and multi-sensor satellite imageries of oceanographic variables, namely sea surface temperature (SST), sea surface height (SSH) and chl-a concentration (chl-a), were utilized to evaluate the performance of maximum entropy (MaxEnt) models in predicting R. kanagurta fishing grounds. In addition, this study identified the relative percentage contribution of each environmental variable considered, in order to describe the effects of the oceanographic factors on the species distribution in the study area. The potential fishing grounds during the intermonsoon periods (April and October 2008-2009) were simulated separately and covered the near-coast of Kelantan, Terengganu, Pahang and Johor. The oceanographic conditions differed between regions owing to the inherent seasonal variability. The seasonal and spatial extents of potential fishing grounds were largely explained by chl-a concentration (0.21-0.99 mg/m3 in April and 0.28-1.00 mg/m3 in October), SSH (77.37-85.90 cm in April and 107.60-108.97 cm in October) and SST (30.43-33.70 °C in April and 30.48-30.97 °C in October). The constructed models were therefore suitable for predicting the potential fishing zones of R. kanagurta in the EEZ. The results from this study revealed MaxEnt's potential for predicting the spatial distribution of R. kanagurta and highlighted the use of multispectral satellite images for describing the seasonal potential fishing grounds.

  4. Animal models of GM2 gangliosidosis: utility and limitations

    Directory of Open Access Journals (Sweden)

    Lawson CA

    2016-07-01

    Full Text Available Cheryl A Lawson,1,2 Douglas R Martin2,3 1Department of Pathobiology, 2Scott-Ritchey Research Center, 3Department of Anatomy, Physiology and Pharmacology, Auburn University College of Veterinary Medicine, Auburn, AL, USA Abstract: GM2 gangliosidosis, a subset of lysosomal storage disorders, is caused by a deficiency of the glycohydrolase, β-N-acetylhexosaminidase, and includes the closely related Tay–Sachs and Sandhoff diseases. The enzyme deficiency prevents the normal, stepwise degradation of ganglioside, which accumulates unchecked within the cellular lysosome, particularly in neurons. As a result, individuals with GM2 gangliosidosis experience progressive neurological diseases including motor deficits, progressive weakness and hypotonia, decreased responsiveness, vision deterioration, and seizures. Mice and cats are well-established animal models for Sandhoff disease, whereas Jacob sheep are the only known laboratory animal model of Tay–Sachs disease to exhibit clinical symptoms. Since the human diseases are relatively rare, animal models are indispensable tools for further study of pathogenesis and for development of potential treatments. Though no effective treatments for gangliosidoses currently exist, animal models have been used to test promising experimental therapies. Herein, the utility and limitations of gangliosidosis animal models and how they have contributed to the development of potential new treatments are described. Keywords: GM2 gangliosidosis, Tay–Sachs disease, Sandhoff disease, lysosomal storage disorder, sphingolipidosis, brain disease

  5. Animal models of GM2 gangliosidosis: utility and limitations

    Science.gov (United States)

    Lawson, Cheryl A; Martin, Douglas R

    2016-01-01

    GM2 gangliosidosis, a subset of lysosomal storage disorders, is caused by a deficiency of the glycohydrolase, β-N-acetylhexosaminidase, and includes the closely related Tay–Sachs and Sandhoff diseases. The enzyme deficiency prevents the normal, stepwise degradation of ganglioside, which accumulates unchecked within the cellular lysosome, particularly in neurons. As a result, individuals with GM2 gangliosidosis experience progressive neurological diseases including motor deficits, progressive weakness and hypotonia, decreased responsiveness, vision deterioration, and seizures. Mice and cats are well-established animal models for Sandhoff disease, whereas Jacob sheep are the only known laboratory animal model of Tay–Sachs disease to exhibit clinical symptoms. Since the human diseases are relatively rare, animal models are indispensable tools for further study of pathogenesis and for development of potential treatments. Though no effective treatments for gangliosidoses currently exist, animal models have been used to test promising experimental therapies. Herein, the utility and limitations of gangliosidosis animal models and how they have contributed to the development of potential new treatments are described. PMID:27499644

  6. The limitations of mathematical modeling in high school physics education

    Science.gov (United States)

    Forjan, Matej

    The theme of this doctoral dissertation falls within the scope of didactics of physics. A theoretical analysis is presented of the key constraints that occur in transferring the mathematical modeling of dynamical systems into the field of physics education in secondary schools. In an effort to explore the extent to which current physics education promotes understanding of models and modeling, we analyze the curriculum and the three most commonly used textbooks for high school physics. We focus primarily on the representation of the various stages of modeling in the solved tasks in the textbooks and on the presentation of certain simplifications and idealizations which are frequently used in high school physics. We show that one of the textbooks in most cases presents the simplifications fairly and reasonably, while the other two fail to explain half of the analyzed simplifications. It also turns out that the vast majority of solved tasks in all the textbooks do not explicitly state model assumptions, from which we conclude that in high school physics students do not sufficiently develop a sense for simplifications and idealizations, which is a key part of the conceptual phase of modeling. For the introduction of modeling of dynamical systems the prior knowledge of students is also important, so we performed an empirical study of the extent to which high school students are able to understand the time evolution of some dynamical systems in the field of physics. The results show that students have a very weak understanding of the dynamics of systems in which feedbacks are present, independently of their year or final grade in physics and mathematics. When modeling dynamical systems in high school physics we also encounter limitations which result from students' lack of mathematical knowledge, because they do not know how to solve differential equations analytically. We show that when dealing with one-dimensional dynamical systems

  7. Profile modifications in laser-driven temperature fronts using flux-limiters and delocalization models

    Science.gov (United States)

    Colombant, Denis; Manheimer, Wallace; Busquet, Michel

    2004-11-01

    A simple steady-state model using flux-limiters by Day et al. [1] showed that temperature profiles could formally be double-valued. Stability of temperature profiles in laser-driven temperature fronts using delocalization models was also discussed by Prasad and Kershaw [2]. We have observed steepening of the front and flattening of the maximum temperature in laser-driven implosions [3]. Following the simple model first proposed in [1], we solve a two-boundary-value steady-state heat flow problem for various non-local heat transport models. For the more complicated models [4,5], we obtain the steady-state solution as the asymptotic limit of the time-dependent solution. Solutions will be shown and compared for these various models. [1] M. Day, B. Merriman, F. Najmabadi and R. W. Conn, Contrib. Plasma Phys. 36, 419 (1996). [2] M. K. Prasad and D. S. Kershaw, Phys. Fluids B3, 3087 (1991). [3] D. Colombant, W. Manheimer and M. Busquet, Bull. Amer. Phys. Soc. 48, 326 (2003). [4] E. M. Epperlein and R. W. Short, Phys. Fluids B3, 3092 (1991). [5] W. Manheimer and D. Colombant, Phys. Plasmas 11, 260 (2004).

  8. Force Limited Random Vibration Test of TESS Camera Mass Model

    Science.gov (United States)

    Karlicek, Alexandra; Hwang, James Ho-Jin; Rey, Justin J.

    2015-01-01

    The Transiting Exoplanet Survey Satellite (TESS) is a spaceborne instrument consisting of four wide-field-of-view CCD cameras dedicated to the discovery of exoplanets around the brightest stars. As part of the environmental testing campaign, force limiting was used to simulate a realistic random vibration launch environment. While the force-limited vibration test method is a standard approach used at multiple institutions including the Jet Propulsion Laboratory (JPL), NASA Goddard Space Flight Center (GSFC), the European Space Research and Technology Centre (ESTEC), and the Japan Aerospace Exploration Agency (JAXA), it is still difficult to find an actual implementation process in the literature. This paper describes the step-by-step process of how the force limit method was developed and applied to the TESS camera mass model. The process description includes the design of special fixtures to mount the test article for properly installing force transducers, development of the force spectral density using the semi-empirical method, estimation of the fuzzy factor (C2) based on the mass ratio between the supporting structure and the test article, subsequent validation of the C2 factor during the vibration test, and calculation of the C.G. accelerations using the root mean square (RMS) reaction force in the spectral domain and the peak reaction force in the time domain.
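
    As a rough illustration of the semi-empirical force limit mentioned above, the sketch below computes a force spectral density from an acceleration spec with a C2 factor and a roll-off above a breakpoint frequency, in the spirit of the method commonly summarized in NASA-HDBK-7004. All masses, frequencies and spectrum levels are illustrative assumptions, not TESS values.

```python
# Hedged sketch of the semi-empirical force limit: below a breakpoint frequency f0
# the force spectral density is C^2 * M0^2 * S_AA(f); above f0 it rolls off at
# roughly -6 dB/octave. Units of S_FF follow from the units chosen for S_AA.
import numpy as np

C2 = 4.0                      # "fuzzy factor" C^2, set from the mass ratio and validated in test
M0 = 25.0                     # total mass of the test article [kg] (illustrative)
f0 = 80.0                     # breakpoint frequency [Hz], e.g. first major resonance
f = np.logspace(1, 3, 200)    # 10 Hz .. 1 kHz

S_AA = np.full_like(f, 0.04)  # flat input acceleration spec (illustrative level)
S_FF = C2 * M0**2 * S_AA      # force limit below f0
S_FF = np.where(f > f0, S_FF * (f0 / f)**2, S_FF)   # roll-off above f0
print("force limit at 10 Hz and at 1 kHz:", S_FF[0], S_FF[-1])
```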

  9. Limiting fragmentation in a thermal model with flow

    Energy Technology Data Exchange (ETDEWEB)

    Kumar Tiwari, Swatantra; Sahoo, Raghunath [Indian Institute of Technology Indore, Discipline of Physics, School of Basic Sciences, Simrol, Indore (India)

    2016-12-15

    The property of limiting fragmentation of various observables, such as rapidity distributions (dN/dy), elliptic flow (v_2), average transverse momentum ⟨p_T⟩ etc. of charged particles, is observed when they are plotted as a function of rapidity (y) shifted by the beam rapidity (y_beam) for a wide range of energies from AGS to RHIC. Limiting fragmentation (LF) is a phenomenon well studied experimentally for various collision energies and colliding systems, and it is very interesting to verify it theoretically. We study this phenomenon for pion rapidity spectra using our hydrodynamic-like model, in which collective flow is incorporated into a thermal model in the longitudinal direction. Our findings advocate the observation of extended longitudinal scaling in the rapidity spectra of pions from AGS to lower RHIC energies, while it is observed to be violated at top RHIC and LHC energies. A prediction of the LF hypothesis for Pb+Pb collisions at √(s_NN) = 5.02 TeV is given. (orig.)

  10. Limit Theory for Panel Data Models with Cross Sectional Dependence and Sequential Exogeneity.

    Science.gov (United States)

    Kuersteiner, Guido M; Prucha, Ingmar R

    2013-06-01

    The paper derives a general central limit theorem (CLT) and asymptotic distributions for sample moments related to panel data models with large n. The results allow the data to be cross-sectionally dependent, while at the same time allowing the regressors to be only sequentially rather than strictly exogenous. The setup is sufficiently general to accommodate situations where cross-sectional dependence stems from spatial interactions and/or from the presence of common factors. The latter leads to the need for random norming. The limit theorem for sample moments is derived by showing that the moment conditions can be recast such that a martingale difference array central limit theorem can be applied. We prove such a central limit theorem by first extending results for stable convergence in Hall and Heyde (1980) to non-nested martingale arrays relevant for our applications. We illustrate our result by establishing a generalized estimation theory for GMM estimators of a fixed effect panel model without imposing i.i.d. or strict exogeneity conditions. We also discuss a class of maximum likelihood (ML) estimators that can be analyzed using our CLT.

  11. MODELING THE TRANSITION CURVE ON A LIMITED TERRAIN

    Directory of Open Access Journals (Sweden)

    V. D. Borisenko

    2017-04-01

    rectilinear and circular rail track in a region of a limited size has been proved. Originality. A method for geometric modelling of transition curves between a rectilinear and circular section of a railway track is developed in conditions of limited terrain size, on which rails are laid. The transition curve is represented in the natural parameterization, using the cubic dependence of the curvature distribution on the length of its arc. Practical value. The proposed method of modelling the transition curves in conditions of limited terrain size allows obtaining these curves with a high accuracy in a wide range of geometric parameters of rectilinear and circular sections of the railway track and a parameter that acts as a constraint in the modelling of the transition curve. The method can be recommended in the practice of building railways.
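
    To make the "cubic curvature in natural parameterization" idea concrete, the sketch below integrates an assumed cubic curvature law numerically to obtain the curve coordinates. The boundary conditions and all numbers are illustrative choices, not the paper's fitted geometry.

```python
# Sketch of a transition curve in natural parameterization with a cubic curvature
# law k(s), integrated numerically (trapezoid rule) to heading and coordinates.
import numpy as np

L = 100.0                       # length of the transition curve [m] (illustrative)
k_end = 1.0 / 300.0             # target curvature = 1/R of the circular track [1/m]

# Cubic curvature with k(0)=0, k(L)=k_end and zero curvature slope at both ends:
s = np.linspace(0.0, L, 2001)
k = k_end * (3.0 * (s / L)**2 - 2.0 * (s / L)**3)

ds = np.diff(s)
theta = np.concatenate([[0.0], np.cumsum(0.5 * (k[1:] + k[:-1]) * ds)])   # heading angle
x = np.concatenate([[0.0], np.cumsum(0.5 * (np.cos(theta[1:]) + np.cos(theta[:-1])) * ds)])
y = np.concatenate([[0.0], np.cumsum(0.5 * (np.sin(theta[1:]) + np.sin(theta[:-1])) * ds)])
print("end point:", x[-1], y[-1], " end heading [rad]:", theta[-1])
```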

  12. Specialists without spirit: limitations of the mechanistic biomedical model.

    Science.gov (United States)

    Hewa, S; Hetherington, R W

    1995-06-01

    This paper examines the origin and development of the mechanistic model of the human body and health in terms of Max Weber's theory of rationalization. It is argued that the development of Western scientific medicine is a part of the broad process of rationalization that began in sixteenth-century Europe as a result of the Reformation. The development of the mechanistic view of the human body in Western medicine is consistent with the ideas of calculability, predictability, and control, the major tenets of the process of rationalization as described by Weber. In recent years, however, the limitations of the mechanistic model have been the topic of many discussions. George Engel, a prominent advocate of general systems theory, is one of the leading proponents of a new medical model which includes the general quality of life, a clean environment, and the psychological, or spiritual, stability of life. The paper concludes with a consideration of the potential of Engel's proposed new model in the context of the current state of rationalization in modern industrialized society.

  13. European Continental Scale Hydrological Model, Limitations and Challenges

    Science.gov (United States)

    Rouholahnejad, E.; Abbaspour, K.

    2014-12-01

    The pressures on water resources due to increasing levels of societal demand, increasing conflicts of interest and uncertainties with regard to freshwater availability create challenges for water managers and policymakers in many parts of Europe. At the same time, climate change adds a new level of pressure and uncertainty with regard to freshwater supplies. On the other hand, the small-scale sectoral structure of water management is now reaching its limits. The integrated management of water in basins requires a new level of consideration, where water bodies are to be viewed in the context of the whole river system and managed as units within their basins. In this research we present the limitations and challenges of modelling the hydrology of the European continent. The challenges include: data availability at continental scale and the use of globally available data, streamgauge data quality and its misleading impact on model calibration, calibration of a large-scale distributed model, uncertainty quantification, and computation time. We describe how to avoid over-parameterization in the calibration process and introduce a parallel processing scheme to overcome long computation times. We used the Soil and Water Assessment Tool (SWAT) as an integrated hydrology and crop growth simulator to model the water resources of the European continent. Different components of water resources are simulated, and crop yield and water quality are considered at the Hydrological Response Unit (HRU) level. The water resources are quantified at subbasin level with monthly time intervals for the period 1970-2006. The use of a large-scale, high-resolution water resources model enables consistent and comprehensive examination of integrated system behavior through physically-based, data-driven simulation and provides an overall picture of the temporal and spatial distribution of water resources across the continent. The calibrated model and results provide information support to the European Water

  14. Response of methane emissions from wetlands to the Last Glacial Maximum and an idealized Dansgaard–Oeschger climate event: insights from two models of different complexity

    Directory of Open Access Journals (Sweden)

    B. Ringeval

    2013-01-01

    Full Text Available The role of different sources and sinks of CH4 in the changes in atmospheric methane concentration ([CH4]) during the last 100 000 yr is still not fully understood. In particular, the magnitude of the change in wetland CH4 emissions at the Last Glacial Maximum (LGM) relative to the pre-industrial period (PI), as well as during abrupt climatic warming or Dansgaard–Oeschger (D–O) events of the last glacial period, is largely unconstrained. In the present study, we aim to understand the uncertainties related to the parameterization of the wetland CH4 emission models relevant to these time periods by using two wetland models of different complexity (SDGVM and ORCHIDEE). These models have been forced by identical climate fields from low-resolution coupled atmosphere–ocean general circulation model (FAMOUS) simulations of these time periods. Both emission models simulate a large decrease in emissions during the LGM in comparison to the PI, consistent with ice core observations and previous modelling studies. The global reduction is much larger in ORCHIDEE than in SDGVM (−67% and −46%, respectively), and whilst the differences can be partially explained by different model sensitivities to temperature, the major reason for spatial differences between the models is the inclusion of freezing of soil water in ORCHIDEE and the resultant impact on methanogenesis substrate availability in boreal regions. In addition, a sensitivity test performed with ORCHIDEE, in which the sensitivity of the methanogenesis substrate to precipitation is modified to be more realistic, gives an LGM reduction of −36%. The range of the global LGM decrease is still prone to uncertainty, and here we underline its sensitivity to different process parameterizations. Over the course of an idealized D–O warming, the magnitude of the change in wetland CH4 emissions simulated by the two models at global scale is very similar at around 15 Tg yr−1, but this is only around 25% of the ice-core measured

  15. Bayesian inference on multiscale models for poisson intensity estimation: applications to photon-limited image denoising.

    Science.gov (United States)

    Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George

    2009-08-01

    We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.
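
    The multiscale (binary-tree) view of Poisson counts that underlies this paper can be illustrated with a much simpler conjugate shrinkage of the scale-by-scale split ratios. The sketch below uses a single Beta prior in place of the paper's mixture-of-conjugates EM and hidden Markov tree machinery; all data are simulated.

```python
# Sketch of the multiscale (binary-tree) view of Poisson counts: a parent count
# splits into its children as a Binomial, so denoising can shrink the empirical
# split ratios toward 1/2. A Beta(a, a) conjugate prior stands in for the paper's
# mixture priors; this is a toy illustration, not the published algorithm.
import numpy as np

rng = np.random.default_rng(2)
intensity = np.r_[np.full(32, 2.0), np.full(32, 20.0)]   # piecewise-constant rates
counts = rng.poisson(intensity)                          # observed photon counts

def haar_poisson_denoise(c, a=2.0):
    """Recursively re-distribute the total count using shrunken split ratios."""
    if c.size == 1:
        return c.astype(float)
    left, right = c[: c.size // 2], c[c.size // 2 :]
    n = c.sum()
    p = (left.sum() + a) / (n + 2 * a)                   # posterior-mean split ratio
    dl, dr = haar_poisson_denoise(left), haar_poisson_denoise(right)
    dl = dl / max(dl.sum(), 1e-12) * (n * p)             # re-impose shrunken totals
    dr = dr / max(dr.sum(), 1e-12) * (n * (1 - p))
    return np.concatenate([dl, dr])

estimate = haar_poisson_denoise(counts)
print("RMSE raw:     ", np.sqrt(np.mean((counts - intensity) ** 2)))
print("RMSE denoised:", np.sqrt(np.mean((estimate - intensity) ** 2)))
```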

  16. Modelling the bioconversion of cellulose into microbial products: rate limitations

    Energy Technology Data Exchange (ETDEWEB)

    Asenjo, J A

    1984-12-01

    The direct bioconversion of cellulose into microbial products, carried out as a simultaneous saccharification and fermentation, has a strong effect on the rates of cellulose degradation because cellobiose and glucose inhibition of the reaction are circumvented. A general mathematical model of the kinetics of this bioconversion has been developed. Its use in representing aerobic systems and in the analysis of the kinetic limitations has been investigated. Simulations have been carried out to find the rate-limiting steps in slow fermentations and in rapid ones, as determined by the specific rate of product formation. The requirements for solubilising and depolymerising enzyme activities (cellulase and cellobiase) in these systems have been determined. The activities that have been obtained for fungal cellulases are adequate for the kinetic requirements of the fastest fermentative strains. The results also show that for simultaneous bioconversions where strong cellobiose and glucose inhibition is overcome, no additional cellobiase is necessary to increase the rate of product formation. These results are useful for the selection of cellulolytic micro-organisms and in the determination of enzymes to be cloned in recombinant strains. 17 references.
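
    A generic simultaneous-saccharification-and-fermentation (SSF) kinetic scheme of the kind analyzed here can be simulated directly as an ODE system. The sketch below uses textbook Michaelis-Menten forms with sugar inhibition and invented constants; it is not the paper's fitted model.

```python
# Illustrative SSF kinetics: cellulose -> cellobiose -> glucose -> product, with
# sugar inhibition of the enzymes. Rate forms and constants are generic choices.
import numpy as np
from scipy.integrate import solve_ivp

def ssf(t, y):
    C, B, G, P = y                      # cellulose, cellobiose, glucose, product [g/L]
    r1 = 1.2 * C / ((1 + B / 0.5 + G / 2.0) * (5.0 + C))   # cellulase, sugar-inhibited
    r2 = 2.0 * B / (0.8 * (1 + G / 2.0) + B)               # cellobiase, glucose-inhibited
    r3 = 0.8 * G / (0.3 + G)                                # fermentation consumes glucose
    # 1.056 and 1.053 are the standard hydration mass factors for the hydrolysis steps.
    return [-r1, 1.056 * r1 - r2, 1.053 * r2 - r3, 0.45 * r3]

sol = solve_ivp(ssf, (0.0, 48.0), [50.0, 0.0, 0.0, 0.0])
C, B, G, P = sol.y[:, -1]
print(f"after 48 h: cellulose {C:.1f}, cellobiose {B:.2f}, glucose {G:.2f}, product {P:.1f} g/L")
```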

  17. Spurious Latent Class Problem in the Mixed Rasch Model: A Comparison of Three Maximum Likelihood Estimation Methods under Different Ability Distributions

    Science.gov (United States)

    Sen, Sedat

    2018-01-01

    Recent research has shown that over-extraction of latent classes can be observed in the Bayesian estimation of the mixed Rasch model when the distribution of ability is non-normal. This study examined the effect of non-normal ability distributions on the number of latent classes in the mixed Rasch model when estimated with maximum likelihood…

  18. Maximum neutron flux in thermal reactors

    International Nuclear Information System (INIS)

    Strugar, P.V.

    1968-12-01

    The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core using the condition of maximum neutron flux while complying with thermal limitations. This paper shows that the problem can be solved by applying the variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory, with some simplifications which make it amenable to the maximum principle. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples

  19. Entry limitations and heterogeneous tolerances in a Schelling-like segregation model

    International Nuclear Information System (INIS)

    Radi, Davide; Gardini, Laura

    2015-01-01

    In this paper we consider a Schelling-type segregation model with two groups of agents that differ in some aspect, such as religion, political affiliation or color of skin. The first group is identified as the local population, while the second group is identified as the newcomers, whose members want to settle down in the city or country, or more generally the system, already populated by members of the local population. The members of the local population have a limited tolerance towards newcomers. On the contrary, some newcomers, but not all of them, may tolerate the presence of any number of members of the local population. The heterogeneous, and partially limited, levels of tolerance trigger entry and exit dynamics into and from the system of the members of the two groups, based on their satisfaction with the number of members of the other group in the system. This entry/exit dynamics is described by a continuous piecewise-differentiable map in two dimensions. The dynamics of the model is characterized by smooth bifurcations as well as by border collision bifurcations. A combination of analytical results and numerical analysis is the main tool used to describe the quite complicated local and global dynamics of the model. The investigation reveals that two factors are the main elements that preclude integration. The first is a low level of tolerance of the members of the two populations. The second is an excessive and unbalanced level of tolerance between the two populations. In this last case, to facilitate integration between members of the two groups, we impose an entry-limitation policy, represented by the imposition of a maximum number of newcomers allowed to enter the system. The investigation of the dynamics reveals that the entry-limitation policy is useful to promote integration, as it limits the negative effects due to excessive and unbalanced levels of tolerance.
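
    The flavor of such an entry/exit map with heterogeneous tolerances and an entry cap can be conveyed with a toy two-dimensional iteration. The functional forms and parameters below are invented for illustration and differ from the paper's piecewise-differentiable map.

```python
# Toy entry/exit dynamics for two groups with heterogeneous tolerances and an
# entry cap on newcomers: agents enter while satisfied and leave otherwise.
import numpy as np

def step(x, y, tol_x=1.5, frac_tolerant_y=0.6, cap_y=0.8, speed=0.3):
    """x: local population in the system, y: newcomers (both scaled to [0, 1])."""
    # Locals enter while the newcomer share stays within their tolerance.
    dx = speed * (1.0 - x) if y <= tol_x * max(x, 1e-9) else -speed * x
    # Only a fraction of newcomers tolerates any number of locals; entry capped at cap_y.
    satisfied_y = frac_tolerant_y + (1.0 - frac_tolerant_y) * (x < 0.5)
    dy = speed * satisfied_y * (cap_y - y) if y < cap_y else -speed * (y - cap_y)
    return min(max(x + dx, 0.0), 1.0), min(max(y + dy, 0.0), 1.0)

x, y = 0.9, 0.05
for t in range(60):
    x, y = step(x, y)
print("long-run mix (locals, newcomers):", round(x, 3), round(y, 3))
```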

  20. A quantum relativistic integrable model as the continuous limit of the six-vertex model

    International Nuclear Information System (INIS)

    Zhou, Y.K.

    1992-01-01

    The six-vertex model in two-dimensional statistical mechanics is used to construct the L-matrix of a one-dimensional quantum relativistic integrable model through a continuous limit. This is the first step to extend the method used earlier by the author to construct quantum completely integrable systems from other well-known two-dimensional vertex models. (orig.)

  1. Credal Networks under Maximum Entropy

    OpenAIRE

    Lukasiewicz, Thomas

    2013-01-01

    We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...

  2. A unified model of density limit in fusion plasmas

    Science.gov (United States)

    Zanca, P.; Sattin, F.; Escande, D. F.; Pucella, G.; Tudisco, O.

    2017-05-01

    In this work we identify by analytical and numerical means the conditions for the existence of a magnetic and thermal equilibrium of a cylindrical plasma, in the presence of Ohmic and/or additional power sources, heat conduction and radiation losses by light impurities. The boundary defining the solutions’ space having realistic temperature profile with small edge value takes mathematically the form of a density limit (DL). Compared to previous similar analyses the present work benefits from dealing with a more accurate set of equations. This refinement is elementary, but decisive, since it discloses a tenuous dependence of the DL on the thermal transport for configurations with an applied electric field. Thanks to this property, the DL scaling law is recovered almost identical for two largely different devices such as the ohmic tokamak and the reversed field pinch. In particular, they have in common a Greenwald scaling, linearly depending on the plasma current, quantitatively consistent with experimental results. In the tokamak case the DL dependence on any additional heating approximately follows a 0.5 power law, which is compatible with L-mode experiments. For a purely externally heated configuration, taken as a cylindrical approximation of the stellarator, the DL dependence on transport is found stronger. By adopting suitable transport models, DL takes on a Sudo-like form, in fair agreement with LHD experiments. Overall, the model provides a good zeroth-order quantitative description of the DL, applicable to widely different configurations.
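
    For reference, the Greenwald scaling mentioned above has the standard form (with the line-averaged density limit in units of 10^20 m^-3, plasma current I_p in MA and minor radius a in m); this is the textbook definition, not a result derived in this record:

```latex
\[
  n_{\mathrm{G}} \;=\; \frac{I_p}{\pi a^{2}},
  \qquad
  \bar{n}_{\max} \;\propto\; n_{\mathrm{G}}
\]
```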

  3. Microscopic model for the non-linear fluctuating hydrodynamics of ⁴He superfluid helium deduced by the maximum entropy method

    International Nuclear Information System (INIS)

    Alvarez R, J.T.

    1998-01-01

    This thesis presents a microscopic model for the non-linear fluctuating hydrodynamics of superfluid helium (⁴He), developed by means of the Maximum Entropy Method (MaxEnt). In chapter 1, the necessity of developing a microscopic model for the fluctuating hydrodynamics of superfluid helium is demonstrated, starting from a brief overview of the theories and experiments developed to explain the behaviour of superfluid helium. In addition, the Morozov heuristic method for the construction of the non-linear fluctuating hydrodynamics of a simple fluid is presented; this method will be generalized to construct the non-linear fluctuating hydrodynamics of superfluid helium. A brief summary of the content of the thesis is also given. In chapter 2, the construction of a generalized Fokker-Planck (GFP) equation is reproduced for a distribution function associated with the coarse-grained variables, defined with the aid of a nonequilibrium statistical operator ρ̂_FP that is evaluated as a Wigner function through ρ̂_CG obtained by MaxEnt. This GFP equation is then reduced to a non-linear local FP equation by considering a slow, Markovian process in the coarse-grained variables. In this equation appears a matrix D_mn, defined with a nonequilibrium coarse-grained statistical operator ρ̂_CG, whose elements are used in the construction of the non-linear fluctuating hydrodynamics equations of superfluid helium. In chapter 3, the Lagrange multipliers are evaluated to determine ρ̂_CG by means of the local equilibrium statistical operator ρ̂_l, under the hypothesis that the system presents small fluctuations. The currents associated with the coarse-grained variables are also determined, and furthermore the matrix elements D_mn are evaluated, but with the aid of a quasi-equilibrium statistical operator ρ̂_qe instead of the local equilibrium operator ρ̂_l. Matrix

  4. A Kinetic Model to Explain the Maximum in alpha-Amylase Activity Measurements in the Presence of Small Carbohydrates

    NARCIS (Netherlands)

    Baks, T.; Janssen, A.E.M.; Boom, R.M.

    2006-01-01

    The effect of the presence of several small carbohydrates on the measurement of the α-amylase activity was determined over a broad concentration range. At low carbohydrate concentrations, a distinct maximum in the α-amylase activity versus concentration curves was observed in several cases. At higher

  5. Peak-counts blood flow model-errors and limitations

    International Nuclear Information System (INIS)

    Mullani, N.A.; Marani, S.K.; Ekas, R.D.; Gould, K.L.

    1984-01-01

    The peak-counts model has several advantages, but its use may be limited due to the condition that the venous egress may not be negligible at the time of peak counts. Consequently, blood flow measurements by the peak-counts model will depend on the bolus size, bolus duration, and the minimum transit time of the bolus through the region of interest. The effect of bolus size on the measurement of extraction fraction and blood flow was evaluated by injecting 1 to 30 ml of rubidium chloride into the femoral vein of a dog and measuring the myocardial activity with a beta probe over the heart. Regional blood flow measurements were not found to vary with bolus sizes up to 30 ml. The effect of bolus duration was studied by injecting a 10 cc bolus of tracer at different speeds into the femoral vein of a dog. All intravenous injections undergo a broadening of the bolus duration due to the transit time of the tracer through the lungs and the heart. This transit time was found to range from 4-6 s FWHM and dominates the duration of the bolus to the myocardium for injections of up to 3 s. A computer simulation has been carried out in which the different parameters of delay time, extraction fraction, and bolus duration can be changed to assess the errors in the peak-counts model. The results of the simulations show that the error will be greatest for short transit time delays and for low extraction fractions

  6. Flux limitation in ultrafiltration: Osmotic pressure model and gel layer model

    NARCIS (Netherlands)

    Wijmans, J.G.; Nakao, S.; Smolders, C.A.

    1984-01-01

    The characteristic permeate flux behaviour in ultrafiltration, i.e., the existence of a limiting flux which is independent of applied pressure and membrane resistance and a linear plot of the limiting flux versus the logarithm of the feed concentration, is explained by the osmotic pressure model. In
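
    For context, the two flux models being contrasted are usually written as follows (textbook forms, with k the mass-transfer coefficient, c_g the gel or wall concentration, c_b the bulk feed concentration, eta the viscosity and R_m the membrane resistance); the second line is the pressure-independent limiting flux, linear in the logarithm of the feed concentration, as described above:

```latex
\[
  J \;=\; \frac{\Delta P - \Delta\pi(c_w)}{\eta\, R_m}
  \qquad\text{(osmotic pressure model)}
\]
\[
  J_{\lim} \;=\; k \,\ln\!\frac{c_g}{c_b}
  \qquad\text{(film/gel layer model)}
\]
```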

  7. Development of a mathematical model of the heating phase of rubber mixture and development of the synthesis of the heating control algorithm using the Pontryagin maximum principle

    Directory of Open Access Journals (Sweden)

    V. S. Kudryashov

    2017-01-01

    Full Text Available The article is devoted to the development of the algorithm for controlling the heating phase of a rubber compound for CJSC “Voronezh tyre plant”. The algorithm is designed for implementation on a Siemens S-300 controller to control the RS-270 mixer. To compile the algorithm, a systematic analysis of the heating process as a control object has been performed, and a mathematical model of the heating phase has been developed on the basis of the heat balance equation, which describes the process of heating a heat-transfer agent in the heat exchanger and the further heating of the mixture in the mixer. The dynamic characteristics of the temperature of the heat exchanger and the rubber mixer have been obtained. Taking into account the complexity and nonlinearity of the control object (a rubber mixer), as well as the availability of methods and extensive experience in managing this machine in an industrial environment, the algorithm has been implemented using the Pontryagin maximum principle. The optimization problem is reduced to determining the optimal control (the heating steam supply) and the optimal trajectory of the object's output coordinate (the temperature of the mixture) which ensure the least consumption of steam while heating the rubber compound in a limited time. To do this, the mathematical model of the heating phase has been written in matrix form. Coefficient matrices for the state, control and disturbance vectors have been created, the Hamiltonian has been obtained, and the switching times have been found for constructing the optimal control and trajectory of the object. Analysis of the model experiments and of practical results obtained while programming the controller has shown a 24.4% decrease in heating steam consumption during the heating phase of the rubber compound.
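
    The structure of such a minimum-steam problem can be sketched on a deliberately simplified first-order plant. Under the stated assumptions (linear dynamics, bounded steam input, fixed horizon), the Pontryagin conditions give a monotone costate and hence a single off-to-full switch, which the sketch locates by bisection. The plant model and all numbers are illustrative, not the plant's.

```python
# Hedged Pontryagin-style sketch for a first-order heating model
# dT/dt = -a*(T - T_env) + b*u with bounded steam input u in [0, u_max]:
# the costate grows monotonically, so minimizing total steam over a fixed
# horizon yields one off->full switch; heat as late as the target allows.
import math

a, b, T_env, u_max = 0.02, 2.0, 20.0, 1.0     # 1/min, degC/min per steam unit, degC
T0, T_target, t_f = 25.0, 90.0, 60.0          # initial temp, target temp, horizon [min]

def final_temp(t_switch):
    # u = 0 on [0, t_switch]: relaxation toward T_env.
    T = T_env + (T0 - T_env) * math.exp(-a * t_switch)
    # u = u_max on [t_switch, t_f]: relaxation toward T_env + b*u_max/a.
    T_inf = T_env + b * u_max / a
    return T_inf + (T - T_inf) * math.exp(-a * (t_f - t_switch))

lo, hi = 0.0, t_f                              # a later switch gives a cooler final temp
for _ in range(60):                            # bisection on the switching time
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if final_temp(mid) >= T_target else (lo, mid)[::-1] and (lo, mid)
    lo, hi = (mid, hi) if final_temp(mid) >= T_target else (lo, mid)
print(f"switch steam on at t = {lo:.2f} min; steam use = {u_max * (t_f - lo):.2f} units")
```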

  8. About dynamic model of limiting fragmentation of heavy nuclei

    International Nuclear Information System (INIS)

    Kuchin, I.A.

    2001-01-01

    Full text: As is known, definite progress in understanding the static aspect of the dynamic structure organization of massive nuclei has been achieved in recent years. The proposed model of a 'crystalline' structure of the nucleus generalizes the drop, shell and cluster models in a natural way. The phenomenon of limiting fragmentation of heavy nuclei is now attracting increased interest. There is hope that clarifying the general regularities of a soft disintegration of massive nuclei into their component nucleons, over a broad range of high energies, can give valuable information about the dynamics of the origin of nuclear structures and the nature of their qualitative difference from quark-system structures, i.e. from nucleons. The key to understanding this phenomenon can be its study in connection with other aspects of the disintegration of nuclei: Coulomb and diffraction dissociation, fission etc. A consistent analysis of all these processes from a single point of view is possible only within the framework of the results and methods of dynamical systems theory. The purpose of the present research is to clarify the possibility of understanding the nature of limiting fragmentation as a consequence of the development of dynamic instability in a system of nuclei as a result of ion interactions at high energy. The analysis is based on data from the phenomenological analysis of heavy ion interactions at ultra-relativistic energies obtained by many authors over a number of years. As a result we came to a conclusion about the generally stochastic nature of the investigated phenomenon. In its development the fragmentation passes through three different stages. In the first, chaos is prepared at the quantum level as an outcome of the Coulomb dissociation of the approaching nuclei and the isotopic recharge of their nucleons, which has a random character. The dominant process here is the viscous dissociation of nuclei under the action of Coulomb forces (a two-body initial state). Then the multiparticle

  9. Choosing between Higher Moment Maximum Entropy Models and Its Application to Homogeneous Point Processes with Random Effects

    Directory of Open Access Journals (Sweden)

    Lotfi Khribi

    2017-12-01

    Full Text Available In the Bayesian framework, the usual choice of prior in the prediction of homogeneous Poisson processes with random effects is the gamma one. Here, we propose the use of higher order maximum entropy priors. Their advantage is illustrated in a simulation study and the choice of the best order is established by two goodness-of-fit criteria: Kullback–Leibler divergence and a discrepancy measure. This procedure is illustrated on a warranty data set from the automobile industry.
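
    The conjugate gamma-prior baseline that the paper generalizes can be written down in a few lines. The sketch below shows the standard gamma-Poisson update and its negative binomial predictive, with invented counts and hyperparameters; the paper's higher-order maximum-entropy priors would replace the gamma prior in this role.

```python
# Conjugate gamma-Poisson prediction: with counts over exposure t and a
# Gamma(alpha, beta) prior on the rate, the posterior is
# Gamma(alpha + sum(counts), beta + t) and the predictive count over a new
# window of length s is negative binomial.
import numpy as np
from scipy import stats

alpha, beta = 1.0, 1.0                 # prior hyperparameters (illustrative)
counts, t, s = np.array([3, 5, 4]), 3.0, 1.0

a_post, b_post = alpha + counts.sum(), beta + t
r, p = a_post, b_post / (b_post + s)   # N_new ~ NegBin(r, p)
pred = stats.nbinom(r, p)
print("posterior mean rate:", a_post / b_post)
print("P(N_new = 0..5):", np.round(pred.pmf(np.arange(6)), 4))
```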

  10. Bayesian inference for partially identified models exploring the limits of limited data

    CERN Document Server

    Gustafson, Paul

    2015-01-01

    Introduction; Identification; What Is against Us?; What Is for Us?; Some Simple Examples of Partially Identified Models; The Road Ahead; The Structure of Inference in Partially Identified Models; Bayesian Inference; The Structure of Posterior Distributions in PIMs; Computational Strategies; Strength of Bayesian Updating, Revisited; Posterior Moments; Credible Intervals; Evaluating the Worth of Inference; Partial Identification versus Model Misspecification; The Siren Call of Identification; Comp

  11. Experimental modeling of eddy currents and deflections for tokamak limiters

    International Nuclear Information System (INIS)

    Hua, T.Q.; Knott, M.J.; Turner, L.R.; Wehrle, R.B.

    1986-01-01

    In this study, experiments were performed to investigate deflection, current, and material stress in cantilever beams with the Fusion ELectromagnetic Induction eXperiment (FELIX) at Argonne National Laboratory. Since structures near the plasma are typically cantilevered, the beams provide a good model for the limiter blades of a tokamak fusion reactor. The test pieces were copper, aluminum, phosphor bronze, and brass cantilever beams, clamped rigidly at one end with a nonconducting support frame inside the FELIX test volume. The primary data recorded as functions of time were the beam deflection, measured with a noncontact electro-optical device; the total eddy current, measured with a Rogowski coil linking through a central hole in the beam; and the material stress, extracted from strain gauges. Measurements of stress and deflection were taken at selected positions along the beam. The extent of the coupling effect depends on several factors. These include the size and the electrical and mechanical properties of the beam, segmenting of the beam, the decay rate of the dipole field, and the strength of the solenoid field

  12. Cerebral radiation necrosis: limits and prospects of experimental models

    International Nuclear Information System (INIS)

    Lefaix, J.L.

    1992-01-01

    Cerebral radiation necrosis is the major CNS hazard of clinical radiation therapy involving delivery of high doses to the brain. It is generally irreversible and frequently leads to death from brain necrosis. Necrosis has been reported with total doses of 60 Gy delivered in conventional fractions. Symptoms depend upon the volume of brain irradiated and are frequently those of an intracranial mass; lesions may present as an area of gliosis or frank necrosis. Possible causes include a direct effect of radiation on glial cells, vascular changes, and the action of an immunological mechanism. The weight of evidence suggests that demyelination is important in the early delayed reaction, and that vascular changes gradually become more important in the late delayed reactions, from several months to years after treatment. The advent of sophisticated radiographic technologies such as computed tomography, magnetic resonance imaging and spectroscopy, and positron emission tomography has facilitated serial non-invasive examination of morphologic and physiologic parameters within the brain after irradiation. The limits and prospects of these technologies are reviewed for experimental animal models of late radiation injuries of the brain, which have been studied in many species ranging from mouse to monkey

  13. An improved model for nucleation-limited ice formation in living cells during freezing.

    Directory of Open Access Journals (Sweden)

    Jingru Yi

    Full Text Available Ice formation in living cells is a lethal event during freezing and its characterization is important to the development of optimal protocols for not only cryopreservation but also cryotherapy applications. Although the model for the probability of ice formation (PIF) in cells developed by Toner et al. has been widely used to predict nucleation-limited intracellular ice formation (IIF), our data on freezing HeLa cells suggest that this model can give misleading predictions of PIF when the maximum PIF in cells during freezing is less than 1 (PIF ranges from 0 to 1). We introduce a new model to overcome this problem by incorporating a critical cell volume to modify Toner's original model. We further reveal that this critical cell volume is dependent on the mechanisms of ice nucleation in cells during freezing, i.e., surface-catalyzed nucleation (SCN) and volume-catalyzed nucleation (VCN). Taken together, the improved PIF model may be valuable for a better understanding of the mechanisms of ice nucleation in cells during freezing and a more accurate prediction of PIF for cryopreservation and cryotherapy applications.
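
    The qualitative effect of the critical-volume modification (a PIF plateau below 1) can be reproduced with a generic cumulative-nucleation sketch. The kinetic form, the cell-volume distribution and all parameters below are assumptions for illustration, not the paper's fitted SCN/VCN model.

```python
# Generic nucleation-limited PIF sketch with the critical-volume idea: only cells
# larger than V_c can nucleate, so PIF saturates below 1 at long cooling times.
import numpy as np

def pif(cool_time, volumes, v_crit, rate=0.08):
    """Fraction of cells with ice at time cool_time [s] for given cell volumes."""
    can_nucleate = volumes > v_crit                       # critical-volume modification
    # Probability of at least one nucleation event per susceptible cell:
    p_cell = 1.0 - np.exp(-rate * cool_time * volumes / volumes.mean())
    return np.mean(np.where(can_nucleate, p_cell, 0.0))

rng = np.random.default_rng(3)
volumes = rng.lognormal(mean=0.0, sigma=0.4, size=5000)   # relative cell volumes
for t in (5.0, 20.0, 80.0, 320.0):
    print(f"t = {t:5.0f} s  PIF = {pif(t, volumes, v_crit=1.1):.3f}")
```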

  14. 76 FR 62605 - Airworthiness Directives; Viking Air Limited Model DHC-3 (Otter) Airplanes With Supplemental Type...

    Science.gov (United States)

    2011-10-11

    ... Airworthiness Directives; Viking Air Limited Model DHC-3 (Otter) Airplanes With Supplemental Type Certificate.... That AD applies to Viking Air Limited Model DHC-3 (Otter) airplanes equipped with a Honeywell TPE331... limitations and marking the airspeed indicator accordingly for Viking Air Limited Model DHC-3 (Otter...

  15. A smoothed maximum score estimator for the binary choice panel data model with individual fixed effects and applications to labour force participation

    NARCIS (Netherlands)

    Charlier, G.W.P.

    1994-01-01

    In a binary choice panel data model with individual effects and two time periods, Manski proposed the maximum score estimator, based on a discontinuous objective function, and proved its consistency under weak distributional assumptions. However, the rate of convergence of this estimator is low (N)

  16. A Two-Stage Information-Theoretic Approach to Modeling Landscape-Level Attributes and Maximum Recruitment of Chinook Salmon in the Columbia River Basin.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, William L.; Lee, Danny C.

    2000-11-01

    Many anadromous salmonid stocks in the Pacific Northwest are at their lowest recorded levels, which has raised questions regarding their long-term persistence under current conditions. There are a number of factors, such as freshwater spawning and rearing habitat, that could potentially influence their numbers. Therefore, we used the latest advances in information-theoretic methods in a two-stage modeling process to investigate relationships between landscape-level habitat attributes and maximum recruitment of 25 index stocks of chinook salmon (Oncorhynchus tshawytscha) in the Columbia River basin. Our first-stage model selection results indicated that the Ricker-type stock-recruitment model with a constant Ricker a (i.e., recruits per spawner at low numbers of fish) across stocks was the only plausible one given these data, which contrasted with previous unpublished findings. Our second-stage results revealed that maximum recruitment of chinook salmon had a strongly negative relationship with the percentage of surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and private moderate-high impact managed forest. That is, our model predicted that average maximum recruitment of chinook salmon would decrease by at least 247 fish for every increase of 33% in surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and privately managed forest. Conversely, mean annual air temperature had a positive relationship with salmon maximum recruitment, with an average increase of at least 179 fish for every increase of 2 °C in mean annual air temperature.
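
    The first-stage Ricker form has a convenient closed-form maximum recruitment, which the sketch below verifies numerically: R(S) = a S exp(-b S) peaks at S = 1/b with R_max = a/(b e). The parameter values are illustrative, not the fitted ones.

```python
# Ricker stock-recruitment curve and its closed-form maximum recruitment.
import numpy as np

a, b = 4.0, 1e-3            # a: recruits per spawner at low abundance; b: density dependence

def ricker(S):
    return a * S * np.exp(-b * S)

S_peak = 1.0 / b            # spawner abundance at the recruitment peak
R_max = a / (b * np.e)      # maximum recruitment
print("spawners at peak:", S_peak, " maximum recruitment:", round(R_max, 1))
print("numerical check:", round(ricker(S_peak), 1))
```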

  17. The Greenland ice sheet - a model for its culmination and decay during and after the last glacial maximum

    DEFF Research Database (Denmark)

    Funder, Svend Visby; Hansen, Louise

    1996-01-01

    there was little change at all. The driving factor during this step was calving caused by rising sea level. This lasted until c. 10 ka, but may have been consummated before the Younger Dryas. The second step began with a glacier readvance between 10 and 9.5 ka, and after this the fjord glaciers began to retreat. ... Maximum Holocene uplift was attained in areas of the 10 ka ice margin, indicating that the uplift is essentially a response to the melting and unloading of ice that began at this time. In support of this, recent results in West, North and East Greenland indicate that the

  18. Maximum entropy analysis of liquid diffraction data

    International Nuclear Information System (INIS)

    Root, J.H.; Egelstaff, P.A.; Nickel, B.G.

    1986-01-01

    A maximum entropy method for reducing truncation effects in the inverse Fourier transform of the structure factor, S(q), to the pair correlation function, g(r), is described. The advantages and limitations of the method are explored with the Percus-Yevick (PY) hard-sphere structure factor as model input data. An example using real data on liquid chlorine is then presented. It is seen that spurious structure is greatly reduced in comparison to traditional Fourier transform methods. (author)
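
    For context, the inverse transform whose truncation causes the spurious ripples is the standard liquid-state relation below (rho is the number density); cutting the integral off at a finite q_max is what the maximum entropy reconstruction compensates for:

```latex
\[
  g(r) \;=\; 1 \;+\; \frac{1}{2\pi^{2}\rho r}\int_{0}^{\infty}
      q\,\bigl[S(q)-1\bigr]\,\sin(qr)\,\mathrm{d}q
\]
```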

  19. Consistency and asymptotic normality of maximum likelihood estimators of a multiplicative time-varying smooth transition correlation GARCH model

    DEFF Research Database (Denmark)

    Silvennoinen, Annestiina; Terasvirta, Timo

    A new multivariate volatility model that belongs to the family of conditional correlation GARCH models is introduced. The GARCH equations of this model contain a multiplicative deterministic component to describe long-run movements in volatility and, in addition, the correlations...

  20. The bottomside parameters B0, B1 obtained from incoherent scatter measurements during a solar maximum and their comparisons with the IRI-2001 model

    Directory of Open Access Journals (Sweden)

    N. K. Sethi

    2002-06-01

    Full Text Available High-resolution electron density profiles (Ne) measured with the Arecibo (18.4° N, 66.7° W) incoherent scatter (IS) radar are used to obtain the bottomside shape parameters B0, B1 for a solar maximum period (1989–90). Median values of these parameters are compared with those obtained from the IRI-2001 model. It is observed that during summer, the IRI values agree fairly well with the Arecibo values, though the numbers are somewhat larger during the daytime. Discrepancies occur during winter and equinox, when the IRI underestimates B0 for local times from about 12:00 LT to about 20:00 LT. Furthermore, the IRI model tends to generally overestimate B1 at all local times. At Arecibo, B0 increases by about 50%, and B1 decreases by about 30% from solar minimum to solar maximum. Key words. Ionosphere (equatorial ionosphere; modeling and forecasting)

  1. DETERMINATION OF RESOLUTION LIMITS OF ELECTRICAL TOMOGRAPHY ON THE BLOCK MODEL IN A HOMOGENEOUS ENVIRONMENT BY MEANS OF ELECTRICAL MODELLING

    Directory of Open Access Journals (Sweden)

    Franjo Šumanovac

    2007-12-01

    Full Text Available The block model in a homogeneous environment can generally represent several geological situations: changes of facies, changes in rock compactness and fragmentation, underground cavities, bauxite deposits, etc. Therefore, the potential of the electrical tomography method to detect a block of increased resistivity in a homogeneous low-resistivity environment was tested. The resolution of the method, i.e. its ability to detect the block, depends on the depth at which the block is located, the ratio between the block resistivity and that of the environment in which it is located, and the applied survey geometry, i.e. the electrode array. Analyses were therefore carried out for the electrode arrays most frequently used in such investigations: the Wenner, Wenner-Schlumberger, dipole-dipole and pole-pole arrays. For each array, the maximum depth at which a block can be detected was analyzed as a function of the ratio between the block resistivity and that of the parent rock environment. The results are shown in two-dimensional graphs, where the ratio between the block resistivity and the environment is shown on the X-axis and the resolution depth on the Y-axis, after which the curves defining the resolution limits were drawn. These graphs have practical use, since they enable a fast, simple determination of the applicability of the method to a specific geological model.

  2. Optimizing Regional Food and Energy Production under Limited Water Availability through Integrated Modeling

    Directory of Open Access Journals (Sweden)

    Junlian Gao

    2018-05-01

    Full Text Available Across the world, human activity is approaching planetary boundaries. In northwest China, in particular, the coal industry and agriculture are competing for key limited inputs of land and water. In this situation, the traditional approach of planning the development of each sector independently fails to deliver sustainable solutions, as solutions made in sectorial ‘silos’ are often suboptimal for the entire economy. We propose a spatially detailed cost-minimizing model for coal and agricultural production in a region under constraints on land and water availability. We apply the model to the case study of Shanxi province, China. We show how such an integrated optimization, which takes maximum advantage of the spatial heterogeneity in resource abundance, could help resolve the conflicts around the water–food–energy (WFE) nexus and assist in its management. We quantify the production-possibility frontiers under different water-availability scenarios and demonstrate that in water-scarce regions, like Shanxi, the production capacity and corresponding production solutions are highly sensitive to water constraints. The shadow prices estimated in the model could be the basis for intelligent differentiated water pricing, not only to enable water-resource transfer between agriculture and the coal industry, and across regions, but also to achieve cost-effective WFE management.
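
    A stylized version of such a cost-minimizing allocation, small enough to read at a glance, can be posed as a linear program. All coefficients below are invented, and the duals on the water rows illustrate the shadow prices the abstract refers to.

```python
# Toy cost-minimizing coal/crop allocation across two sub-regions under regional
# water caps; the dual values on the water constraints are the shadow prices.
from scipy.optimize import linprog

# Decision variables: [coal_r1, coal_r2, crop_r1, crop_r2] (production levels)
cost = [30.0, 26.0, 55.0, 60.0]                       # unit production costs

A_ub = [
    [2.0, 0.0, 5.0, 0.0],                             # water use in region 1
    [0.0, 2.5, 0.0, 4.0],                             # water use in region 2
]
b_ub = [800.0, 600.0]                                 # regional water caps

A_eq = [
    [1.0, 1.0, 0.0, 0.0],                             # total coal demand
    [0.0, 0.0, 1.0, 1.0],                             # total crop demand
]
b_eq = [300.0, 180.0]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method="highs")
print("allocation:", res.x.round(1), " total cost:", round(res.fun, 1))
print("water shadow prices (duals):", [round(m, 3) for m in res.ineqlin.marginals])
```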

  3. Air Pollution Modelling to Predict Maximum Ground Level Concentration for Dust from a Palm Oil Mill Stack

    Directory of Open Access Journals (Sweden)

    Regina A. A.

    2010-12-01

    Full Text Available The study models emissions from a stack to estimate ground-level concentrations from a palm oil mill. The case study is a mill located in Kuala Langat, Selangor. The emission source is the boiler stacks. The exercise estimates the ground-level dust concentrations in the surrounding areas through the use of modelling software. The surrounding area is relatively flat, an industrial area surrounded by factories and with palm oil plantations on the outskirts. The model was applied to gauge the worst-case scenario. Ambient air concentrations were gathered to calculate the increase over local background conditions. Keywords: emission, modelling, palm oil mill, particulate, POME
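
    The usual screening-level calculation behind such a study is the Gaussian plume formula for the ground-level centreline concentration. The sketch below uses that textbook formula with Briggs-type rural dispersion coefficients for a neutral stability class; the emission rate, stack height and wind speed are invented, not the mill's.

```python
# Gaussian plume, ground-level centreline concentration downwind of a stack:
#   C(x) = Q / (pi * u * sigma_y * sigma_z) * exp(-H^2 / (2 * sigma_z^2))
# (the factor pi rather than 2*pi accounts for reflection at the ground).
import numpy as np

Q = 5.0            # dust emission rate [g/s] (illustrative)
u = 3.0            # wind speed at stack height [m/s]
H = 30.0           # effective stack height [m]

x = np.linspace(100.0, 5000.0, 500)          # downwind distance [m]
# Briggs rural coefficients for neutral (class D) conditions:
sigma_y = 0.08 * x / np.sqrt(1.0 + 0.0001 * x)
sigma_z = 0.06 * x / np.sqrt(1.0 + 0.0015 * x)

C = Q / (np.pi * u * sigma_y * sigma_z) * np.exp(-H**2 / (2.0 * sigma_z**2))
i = np.argmax(C)
print(f"max ground-level concentration ~ {C[i]*1e6:.1f} ug/m^3 at x = {x[i]:.0f} m")
```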

  4. Functional Maximum Autocorrelation Factors

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg

    2005-01-01

    Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA in [ramsay97] to functional maximum autocorrelation factors (MAF) [switzer85, larsen2001d]. We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between … Conclusions. Functional MAF outperforms the functional PCA in concentrating the ‘interesting’ spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects. Functional MAF analysis is a useful method for extracting low-dimensional models of temporally or spatially …

  5. Bioenergy Sorghum Crop Model Predicts VPD-Limited Transpiration Traits Enhance Biomass Yield in Water-Limited Environments.

    Science.gov (United States)

    Truong, Sandra K; McCormick, Ryan F; Mullet, John E

    2017-01-01

    Bioenergy sorghum is targeted for production in water-limited annual cropland; therefore, traits that improve plant water capture, water use efficiency, and resilience to water deficit are necessary to maximize productivity. A crop modeling framework, APSIM, was adapted to predict the growth and biomass yield of energy sorghum and to identify potentially useful traits for crop improvement. APSIM simulations of energy sorghum development and biomass accumulation replicated results from field experiments across multiple years, patterns of rainfall, and irrigation schemes. Modeling showed that energy sorghum's long duration of vegetative growth increased water capture and biomass yield by ~30% compared to short season crops in a water-limited production region. Additionally, APSIM was extended to enable modeling of VPD-limited transpiration traits that reduce crop water use under high vapor pressure deficits (VPDs). The response of transpiration rate to increasing VPD was modeled as a linear response until a VPD threshold was reached, at which point the slope of the response decreases, representing a range of responses to VPD observed in sorghum germplasm. Simulation results indicated that the VPD-limited transpiration trait is most beneficial in hot and dry regions of production where crops are exposed to extended periods without rainfall during the season or to a terminal drought. In these environments, slower but more efficient transpiration increases biomass yield and prevents or delays the exhaustion of soil water and onset of leaf senescence. The VPD-limited transpiration responses observed in sorghum germplasm increased biomass accumulation by 20% in years with lower summer rainfall, and the ability to drastically reduce transpiration under high VPD conditions could increase biomass by 6% on average across all years. This work indicates that the productivity and resilience of bioenergy sorghum grown in water-limited environments could be further enhanced by development
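
    The VPD trait described above is just a piecewise-linear response. A minimal sketch with hypothetical parameter values (the study's calibrated values are not reproduced here):

    ```python
    # Transpiration vs vapor pressure deficit (VPD): linear up to a
    # breakpoint, reduced slope above it; slope_factor = 1 recovers the
    # unlimited reference trait. Parameter values are hypothetical.
    import numpy as np

    def transpiration_rate(vpd, slope=1.0, vpd_break=2.0, slope_factor=0.3):
        """Transpiration (arbitrary units) as a function of VPD (kPa)."""
        vpd = np.asarray(vpd, dtype=float)
        below = slope * np.minimum(vpd, vpd_break)
        above = slope_factor * slope * np.maximum(vpd - vpd_break, 0.0)
        return below + above

    vpd = np.linspace(0, 5, 6)
    print(transpiration_rate(vpd))                     # VPD-limited trait
    print(transpiration_rate(vpd, slope_factor=1.0))   # reference trait
    ```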

  6. Transgenic mouse models of hormonal mammary carcinogenesis: advantages and limitations.

    Science.gov (United States)

    Kirma, Nameer B; Tekmal, Rajeshwar R

    2012-09-01

    Mouse models of breast cancer, especially transgenic and knockout mice, have been established as valuable tools in shedding light on factors involved in preneoplastic changes, tumor development and malignant progression. The majority of mouse transgenic models develop estrogen receptor (ER) negative tumors. This is seen as a drawback because the majority of human breast cancers present an ER positive phenotype. On the other hand, several transgenic mouse models have been developed that produce ER positive mammary tumors. These include mice over-expressing aromatase, ERα, PELP-1 and AIB-1. In this review, we will discuss the value of these models as physiologically relevant in vivo systems to understand breast cancer as well as some of the pitfalls involving these models. In all, we argue that the use of transgenic models has improved our understanding of the molecular aspects and biology of breast cancer. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. Stochastic modeling and control system designs of the NASA/MSFC Ground Facility for large space structures: The maximum entropy/optimal projection approach

    Science.gov (United States)

    Hsia, Wei-Shen

    1986-01-01

    In the Control Systems Division of the Systems Dynamics Laboratory of the NASA/MSFC, a Ground Facility (GF), in which the dynamics and control system concepts being considered for Large Space Structures (LSS) applications can be verified, was designed and built. One of the important aspects of the GF is to design an analytical model which will be as close to experimental data as possible so that a feasible control law can be generated. Using Hyland's Maximum Entropy/Optimal Projection Approach, a procedure was developed in which the maximum entropy principle is used for stochastic modeling and the optimal projection technique is used for a reduced-order dynamic compensator design for a high-order plant.

  8. Last Glacial Maximum simulations over southern Africa using a variable-resolution global model: synoptic-scale verification

    CSIR Research Space (South Africa)

    Nkoana, R

    2015-09-01

    Full Text Available developed by the Commonwealth Scientific and Industrial Research Organization (CSIRO) in Australia. An ensemble of LGM simulations was constructed, through the downscaling of PMIP3 coupled model simulations over southern Africa. A multiple nudging...

  9. An adaptive meshfree method for phase-field models of biomembranes. Part I: Approximation with maximum-entropy basis functions

    OpenAIRE

    Rosolen, A.; Peco, C.; Arroyo, M.

    2013-01-01

    We present an adaptive meshfree method to approximate phase-field models of biomembranes. In such models, the Helfrich curvature elastic energy, the surface area, and the enclosed volume of a vesicle are written as functionals of a continuous phase-field, which describes the interface in a smeared manner. Such functionals involve up to second-order spatial derivatives of the phase-field, leading to fourth-order Euler–Lagrange partial differential equations (PDE). The solutions develop sharp i...

  10. Intermediate modeling between kinetic equations and hydrodynamic limits: derivation, analysis and simulations

    International Nuclear Information System (INIS)

    Parisot, M.

    2011-01-01

    This work is dedicated to the study of a problem arising from plasma physics: the thermal transfer of electrons in a plasma close to Maxwellian equilibrium. Firstly, a dimensional study of the Vlasov-Fokker-Planck-Maxwell system is performed, allowing us on the one hand to identify a physically relevant scale parameter and on the other to define mathematically the contours of the validity domain. The asymptotic regime called Spitzer-Harm is studied for a relatively general class of collision operators. The following part of this work is devoted to the derivation and study of the hydrodynamic limit of the Vlasov-Maxwell-Landau system outside the strictly asymptotic regime. A model proposed by Schurtz and Nicolai is placed in this context and analyzed. The particularity of this model lies in the application of a delocalization operation to the heat flux. The link with the non-local models of Luciani and Mora is established, as well as mathematical properties such as the maximum principle and entropy dissipation. Then a formal derivation from the Vlasov equations, with a simplified collision operator, is proposed. The derivation, inspired by the recent work of D. Levermore, involves decomposition methods according to spherical harmonics and closure methods known as diffusion methods. A hierarchy of intermediate models between the kinetic equations and the hydrodynamic limit is described. In particular a new hydrodynamic system, integro-differential in nature, is proposed. The Schurtz and Nicolai model appears as a simplification of the system resulting from the derivation, assuming a steady flow of heat. The above results are then generalized to account for the internal energy dependence which appears naturally in the equation establishment. The existence and uniqueness of the solution of the nonstationary system are established in a simplified framework. The last part is devoted to the implementation of a specific numerical scheme to solve these models. We propose a finite volume approach that can be

  11. Stochastic Modeling and Deterministic Limit of Catalytic Surface Processes

    DEFF Research Database (Denmark)

    Starke, Jens; Reichert, Christian; Eiswirth, Markus

    2007-01-01

    of stochastic origin can be observed in experiments. The models include a new approach to the platinum phase transition, which allows for a unification of existing models for Pt(100) and Pt(110). The rich nonlinear dynamical behavior of the macroscopic reaction kinetics is investigated and shows good agreement...

  12. Spatiotemporal modeling of PM2.5 concentrations at the national scale combining land use regression and Bayesian maximum entropy in China.

    Science.gov (United States)

    Chen, Li; Gao, Shuang; Zhang, Hui; Sun, Yanling; Ma, Zhenxing; Vedal, Sverre; Mao, Jian; Bai, Zhipeng

    2018-05-03

    Concentrations of particulate matter with aerodynamic diameter < 2.5 μm (PM2.5) were estimated on a national scale in China with hybrid models combining land use regression (LUR) with Bayesian Maximum Entropy (BME) interpolation of the LUR space-time residuals. This hybrid model could potentially provide more valid predictions than a commonly-used LUR model. The LUR/BME model had good performance characteristics, with R2 = 0.82 and root mean square error (RMSE) of 4.6 μg/m3. Prediction errors of the LUR/BME model were reduced by incorporating soft data accounting for data uncertainty, with the R2 increasing by 6%. The performance of LUR/BME is better than OK/BME. The LUR/BME model is the most accurate fine spatial scale PM2.5 model developed to date for China. Copyright © 2018. Published by Elsevier Ltd.
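
    The hybrid structure is simple to sketch: a LUR regression on land-use covariates plus spatial interpolation of its residuals. Below, inverse-distance weighting stands in for the BME step to keep the example short, and all data are synthetic.

    ```python
    # Skeleton of a LUR + residual-interpolation hybrid. The record uses
    # Bayesian Maximum Entropy for the residual field; IDW is a simple
    # stand-in here, and predictors/observations are synthetic.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    n = 300
    coords = rng.uniform(0, 100, size=(n, 2))          # monitor locations
    X = rng.normal(size=(n, 3))                        # land-use predictors
    pm25 = 35 + X @ np.array([4.0, -2.0, 1.5]) + rng.normal(0, 3, n)

    lur = LinearRegression().fit(X, pm25)
    resid = pm25 - lur.predict(X)

    def predict(x_new, coord_new, power=2.0):
        """LUR prediction plus IDW-interpolated residual at a new site."""
        d = np.linalg.norm(coords - coord_new, axis=1) + 1e-9
        w = d ** -power
        return lur.predict(x_new[None])[0] + np.sum(w * resid) / np.sum(w)

    print(predict(np.array([0.5, -1.0, 0.2]), np.array([50.0, 50.0])))
    ```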

  13. Modelling of Microbiological Influenced Corrosion – Limitations and Perspectives

    DEFF Research Database (Denmark)

    Skovhus, Torben Lund; Taylor, Christopher; Eckert, Rickard

    …of corrosion relative to asset integrity, operators commonly use models to support decision-making. The models use qualitative, semi-quantitative or quantitative measures to help predict the rate of degradation caused by MIC and other threats. A new model that links MIC in topsides oil processing systems… modeling tools to industry in the shortest development time. ICME development would couple our current understanding of MIC, as represented in models, with experimental data, to build a digital "twin" for optimizing performance of engineering systems, whether in the design phase or operations. Since… functional groups of microorganisms on reaction kinetics or the significance of microbial growth kinetics on corrosion. The ability to accurately predict MIC initiation and growth is hampered by knowledge gaps regarding how environmental conditions affect corrosion under biofilms. In order to manage the threat…

  14. Quasi-neutral limit for a model of viscous plasma

    Czech Academy of Sciences Publication Activity Database

    Feireisl, Eduard; Zhang, P.

    2010-01-01

    Roč. 197, č. 1 (2010), s. 271-295 ISSN 0003-9527 R&D Projects: GA ČR GA201/08/0315 Institutional research plan: CEZ:AV0Z10190503 Keywords: Navier-Stokes-Poisson system * quasi-neutral limit * viscous plasma Subject RIV: BA - General Mathematics Impact factor: 2.277, year: 2010 http://link.springer.com/article/10.1007%2Fs00205-010-0317-7

  15. Introduction to thermodynamics of spin models in the Hamiltonian limit

    Energy Technology Data Exchange (ETDEWEB)

    Berche, Bertrand [Groupe M, Laboratoire de Physique des Materiaux, UMR CNRS No 7556, Universite Henri Poincare, Nancy 1, BP 239, F-54506 Vandoeuvre les Nancy, (France); Lopez, Alexander [Instituto Venezolano de Investigaciones CientIficas, Centro de Fisica, Carr. Panamericana, km 11, Altos de Pipe, Aptdo 21827, 1020-A Caracas, (Venezuela)

    2006-01-01

    A didactic description of the thermodynamic properties of classical spin systems is given in terms of their quantum counterpart in the Hamiltonian limit. Emphasis is on the construction of the relevant Hamiltonian and the calculation of thermal averages is explicitly done in the case of small systems described, in Hamiltonian field theory, by small matrices. The targeted students are those of a graduate statistical physics course.
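
    In the same spirit as the record's explicit small-matrix calculations, the sketch below computes thermal averages for a three-site transverse-field Ising chain from Z = Tr e^(-βH); the model and couplings are generic textbook choices, not taken from the paper.

    ```python
    # Thermal averages of a small spin Hamiltonian by direct matrix
    # exponentiation: Z = Tr exp(-beta H), <A> = Tr(A exp(-beta H)) / Z.
    import numpy as np
    from scipy.linalg import expm

    sx = np.array([[0., 1.], [1., 0.]])
    sz = np.array([[1., 0.], [0., -1.]])

    def site_op(single, site, n=3):
        """Embed a single-site operator at position `site` of an n-site chain."""
        out = np.array([[1.0]])
        for k in range(n):
            out = np.kron(out, single if k == site else np.eye(2))
        return out

    n, h = 3, 0.5
    H = -sum(site_op(sz, k, n) @ site_op(sz, (k + 1) % n, n) for k in range(n))
    H = H - h * sum(site_op(sx, k, n) for k in range(n))   # transverse field

    beta = 2.0
    w = expm(-beta * H)
    Z = np.trace(w)
    corr = np.trace(site_op(sz, 0, n) @ site_op(sz, 1, n) @ w) / Z
    energy = np.trace(H @ w) / Z
    print(f"Z = {Z:.4f}, <s^z_0 s^z_1> = {corr:.4f}, <H> = {energy:.4f}")
    ```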

  16. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
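
    For a fixed tree the MP score itself is easy to compute (Fitch's algorithm); the hardness the record addresses lies in optimizing over trees and internal labelings, which is what the Steiner-tree formulation captures. A small per-site Fitch illustration on a made-up four-taxon tree:

    ```python
    # Fitch's small-parsimony count for one character on a fixed tree,
    # summed over sites. Tree and sequences are invented for illustration.
    def fitch_score(tree, leaf_states):
        """Return (state set, minimum number of changes) for `tree`,
        a nested tuple of leaf names, under one character."""
        if isinstance(tree, str):
            return {leaf_states[tree]}, 0
        (left, lcost), (right, rcost) = (fitch_score(c, leaf_states) for c in tree)
        if left & right:
            return left & right, lcost + rcost
        return left | right, lcost + rcost + 1

    tree = (("A", "B"), ("C", "D"))
    sites = [dict(zip("ABCD", col)) for col in ["GGAA", "GAGA", "CCCC"]]
    total = sum(fitch_score(tree, s)[1] for s in sites)
    print("parsimony score:", total)   # 1 + 2 + 0 = 3
    ```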

  17. Testing the coherence between occupational exposure limits for inhalation and their biological limit values with a generalized PBPK-model: the case of 2-propanol and acetone.

    Science.gov (United States)

    Huizer, Daan; Huijbregts, Mark A J; van Rooij, Joost G M; Ragas, Ad M J

    2014-08-01

    The coherence between occupational exposure limits (OELs) and their corresponding biological limit values (BLVs) was evaluated for 2-propanol and acetone. A generic human PBPK model was used to predict internal concentrations after inhalation exposure at the level of the OEL. The fraction of workers with predicted internal concentrations lower than the BLV, i.e. the 'false negatives', was taken as a measure for incoherence. The impact of variability and uncertainty in input parameters was separated by means of nested Monte Carlo simulation. Depending on the exposure scenario considered, the median fraction of the population for which the limit values were incoherent ranged from 2% to 45%. Parameter importance analysis showed that body weight was the main factor contributing to interindividual variability in blood and urine concentrations and that the metabolic parameters Vmax and Km were the most important sources of uncertainty. This study demonstrates that the OELs and BLVs for 2-propanol and acetone are not fully coherent, i.e. enforcement of BLVs may result in OELs being violated. In order to assess the acceptability of this "incoherence", a maximum population fraction at risk of exceeding the OEL should be specified as well as a minimum level of certainty in predicting this fraction. Copyright © 2014 Elsevier Inc. All rights reserved.
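
    The nested Monte Carlo design is the transferable idea here: an outer loop over uncertain model parameters, an inner loop over interindividual variability, yielding a distribution over the 'false negative' fraction. The sketch below uses a toy dose metric and invented distributions and limit values, not the authors' PBPK model.

    ```python
    # Nested Monte Carlo separating uncertainty (outer: Vmax, Km) from
    # variability (inner: body weight). All functions and numbers are
    # hypothetical stand-ins for the PBPK model in the record.
    import numpy as np

    rng = np.random.default_rng(42)

    def internal_conc(bw, vmax, km, exposure=100.0):
        """Toy internal dose metric decreasing with metabolic capacity."""
        clearance = vmax / (km + exposure)
        return exposure / (0.1 * bw + clearance)

    blv = 13.5                                   # illustrative limit value
    fractions = []
    for _ in range(200):                         # outer: uncertainty
        vmax = rng.lognormal(np.log(50.0), 0.3)
        km = rng.lognormal(np.log(20.0), 0.3)
        bw = rng.normal(70.0, 12.0, size=1000)   # inner: variability
        conc = internal_conc(bw, vmax, km)
        fractions.append(np.mean(conc < blv))    # 'false negative' fraction

    lo, med, hi = np.percentile(fractions, [2.5, 50, 97.5])
    print(f"fraction below BLV: median {med:.2f} (95% CI {lo:.2f}-{hi:.2f})")
    ```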

  18. A mechanistic nitrogen limitation model for CLM(ED)

    Science.gov (United States)

    Ali, A. A.; Xu, C.; McDowell, N. G.; Rogers, A.; Wullschleger, S. D.; Fisher, R.; Vrugt, J. A.

    2014-12-01

    Photosynthetic capacity is a key plant trait that determines the rate of photosynthesis; however, in Earth System Models it is either a fixed value or derived from a linear function of leaf nitrogen content. A mechanistic leaf nitrogen allocation model has been developed for a DOE-sponsored Community Land Model coupled to the Ecosystem Demography model (CLM-ED) to predict the photosynthetic capacity [Vc,max25 (μmol CO2 m-2 s-1)] under different environmental conditions at the global scale. We collected more than 800 data points of photosynthetic capacity (Vc,max25) for 124 species from 57 studies with the corresponding leaf nitrogen content and environmental conditions (temperature, radiation, humidity and day length) from the literature and the NGEE arctic site (Barrow). Based on the data, we found that environmental control of Vc,max25 is about 4 times stronger than that of leaf nitrogen content. Using the Markov-Chain Monte Carlo simulation approach, we fitted the collected data to our newly developed nitrogen allocation model, which predicts the leaf nitrogen investment in different components including structure, storage, respiration, light capture, carboxylation and electron transport under different environmental conditions. Our results showed that our nitrogen allocation model explained 52% of the variance in observed Vc,max25 and 65% of the variance in observed Jmax25 using a single set of fitted model parameters for all species. Across the growing season, we found that the modeled Vc,max25 explained 49% of the variability in measured Vc,max25. In the context of future global warming, our model predicts that a temperature increase of 5°C and a doubling of atmospheric carbon dioxide would reduce Vc,max25 by 5% and 11%, respectively.

  19. The Leaky Dielectric Model as a Weak Electrolyte Limit of an Electrodiffusion Model

    Science.gov (United States)

    Mori, Yoichiro; Young, Yuan-Nan

    2017-11-01

    The Taylor-Melcher (TM) model is the standard model for the electrohydrodynamics of poorly conducting leaky dielectric fluids under an electric field. The TM model treats the fluid as an ohmic conductor, without modeling ion dynamics. On the other hand, electrodiffusion models, which have been successful in describing electrokinetic phenomena, incorporate ionic concentration dynamics. Mathematical reconciliation between electrodiffusion models and the TM model has been a major issue for electrohydrodynamic theory. Here, we derive the TM model from an electrodiffusion model in which we explicitly model the electrochemistry of ion dissociation. We introduce a salt dissociation reaction in the bulk and take the limit of weak salt dissociation (corresponding to poor conductors in the TM model). Assuming small Debye length, we derive the TM model with or without the surface charge advection term depending upon the scaling of relevant dimensionless parameters. Our analysis also gives a description of the ionic concentration distribution within the Debye layer, which hints at possible scenarios for electrohydrodynamic singularity formation. In our analysis we also allow for a jump in voltage across the liquid interface, which causes a drifting velocity for a liquid drop under an electric field. YM is partially supported by NSF-DMS-1516978 and NSF-DMS-1620316. YNY is partially supported by NSF-DMS-1412789 and NSF-DMS-1614863.

  20. Imperfect Preventive Maintenance Model Study Based On Reliability Limitation

    Directory of Open Access Journals (Sweden)

    Zhou Qian

    2016-01-01

    Full Text Available Effective maintenance is crucial for equipment performance in industry. Imperfect maintenance conforms to the actual failure process. Taking the dynamic preventive maintenance cost into account, the preventive maintenance model was constructed using an age reduction factor. The model takes the minimization of the repair cost rate as its final target and uses the smallest allowed reliability as the replacement condition. Equipment life was assumed to follow a two-parameter Weibull distribution, since it is one of the most commonly adopted distributions for fitting cumulative failure problems. Finally, an example verifies the rationality and benefits of the model.
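
    A compact way to see the model's ingredients: a Weibull cumulative hazard, an age reduction factor applied at each preventive maintenance (PM), and replacement once reliability drops below the allowed minimum, with the PM interval chosen to minimize the cost rate. All numbers below are hypothetical.

    ```python
    # Imperfect PM with an age reduction factor: each PM resets effective
    # age to eps * (age + T); failures between PMs are minimally repaired
    # at the Weibull cumulative hazard rate; the unit is replaced once its
    # reliability falls below the allowed minimum.
    import numpy as np

    beta, eta = 2.5, 1000.0            # two-parameter Weibull (shape, scale)
    eps = 0.4                          # age reduction factor (0 = good as new)
    c_pm, c_fail, c_rep = 5.0, 10.0, 25.0
    R_min = 0.6                        # smallest allowed reliability

    H = lambda t: (t / eta) ** beta    # cumulative hazard

    def cost_rate(T):
        """Expected cost per unit time when PMs are performed every T hours."""
        age, cum_h, n_pm, elapsed = 0.0, 0.0, 0, 0.0
        while True:
            cum_h += H(age + T) - H(age)     # expected failures this interval
            elapsed += T
            if np.exp(-cum_h) < R_min:       # reliability floor hit: replace
                return (n_pm * c_pm + cum_h * c_fail + c_rep) / elapsed
            age = eps * (age + T)            # imperfect PM: partial age reset
            n_pm += 1

    Ts = np.linspace(100, 1500, 141)
    rates = [cost_rate(T) for T in Ts]
    print(f"optimal PM interval ~ {Ts[int(np.argmin(rates))]:.0f} h, "
          f"min cost rate {min(rates):.4f}")
    ```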

  1. Modelling across bioreactor scales: methods, challenges and limitations

    DEFF Research Database (Denmark)

    Gernaey, Krist

    Scale-up and scale-down of bioreactors are very important in industrial biotechnology, especially with the currently available knowledge on the occurrence of gradients in industrial-scale bioreactors. Moreover, it becomes increasingly appealing to model such industrial scale systems, considering that it is challenging and expensive to acquire experimental data of good quality that can be used for characterizing gradients occurring inside a large industrial scale bioreactor. But which model building methods are available? And how can one ensure that the parameters in such a model are properly estimated? And what...

  2. Real gas effects in mixing-limited diesel spray vaporization models

    NARCIS (Netherlands)

    Luijten, C.C.M.; Kurvers, C.

    2010-01-01

    The maximum penetration length of the liquid phase in diesel sprays is of paramount importance in reducing diesel engine emissions. Quasi-steady liquid length values have been successfully correlated in the literature, assuming that mixing of fuel and air is the limiting step in the evaporation

  3. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

    Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
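
    The Mean Energy Model the paper uses as its running illustration has a one-line solution: the maximum entropy distribution under a mean-energy constraint is a Gibbs distribution, with the multiplier fixed by the constraint. A numerical sketch with made-up energy levels:

    ```python
    # Maximum entropy on a finite alphabet under a mean-energy constraint:
    # p_i proportional to exp(-lam * E_i), with lam solved numerically so
    # that the mean energy matches the prescribed target.
    import numpy as np
    from scipy.optimize import brentq

    E = np.array([0.0, 1.0, 2.0, 4.0])     # "energy" of each symbol
    target = 1.2                           # prescribed mean energy

    def mean_energy(lam):
        w = np.exp(-lam * E)
        return (E * w).sum() / w.sum()

    lam = brentq(lambda l: mean_energy(l) - target, -50, 50)
    p = np.exp(-lam * E); p /= p.sum()
    entropy = -(p * np.log(p)).sum()
    print("p =", np.round(p, 4), " H(p) =", round(entropy, 4))
    ```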

  4. The Limitations of Applying Rational Decision-Making Models

    African Journals Online (AJOL)

    decision-making models as applied to child spacing and more specifically to the use .... also assumes that the individual operates as a rational decision-making organism in ..... work involves: Motivation; Counselling; Distribution of IEC materials...

  5. CheckMATE 2: From the model to the limit

    Science.gov (United States)

    Dercks, Daniel; Desai, Nishita; Kim, Jong Soo; Rolbiecki, Krzysztof; Tattersall, Jamie; Weber, Torsten

    2017-12-01

    We present the latest developments to the CheckMATE program that allows models of new physics to be easily tested against the recent LHC data. To achieve this goal, the core of CheckMATE now contains over 60 LHC analyses of which 12 are from the 13 TeV run. The main new feature is that CheckMATE 2 now integrates the Monte Carlo event generation via MadGraph5_aMC@NLO and Pythia 8. This allows users to go directly from a SLHA file or UFO model to the result of whether a model is allowed or not. In addition, the integration of the event generation leads to a significant increase in the speed of the program. Many other improvements have also been made, including the possibility to now combine signal regions to give a total likelihood for a model.

  6. The Model Identification Test: A Limited Verbal Science Test

    Science.gov (United States)

    McIntyre, P. J.

    1972-01-01

    Describes the production of a test with a low verbal load for use with elementary school science students. Animated films were used to present appropriate and inappropriate models of the behavior of particles of matter. (AL)

  7. Limit order book and its modeling in terms of Gibbs Grand-Canonical Ensemble

    Science.gov (United States)

    Bicci, Alberto

    2016-12-01

    In the domain of so-called Econophysics, some attempts have already been made to apply the theory of thermodynamics and statistical mechanics to economics and financial markets. In this paper a similar approach is made from a different perspective, trying to model the limit order book and price formation process of a given stock by the Grand-Canonical Gibbs Ensemble for the bid and ask orders. The application of the Bose-Einstein statistics to this ensemble then allows one to derive the distribution of the sell and buy orders as a function of price. As a consequence we can define in a meaningful way expressions for the temperatures of the ensembles of bid orders and of ask orders, which are a function of the minimum bid, maximum ask and closure prices of the stock as well as of the exchanged volume of shares. It is demonstrated that the difference between the ask and bid order temperatures can be related to the VAO (Volume Accumulation Oscillator), an indicator empirically defined in Technical Analysis of stock markets. Furthermore the derived distributions for aggregate bid and ask orders can be subjected to well defined validations against real data, giving a falsifiable character to the model.
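
    A sketch of the kind of occupation law the record fits: order volume at a price behind the best quote follows a Bose-Einstein form in the price distance, with a side-specific "temperature". The energy mapping and all parameter values below are assumptions for illustration, not the paper's calibration.

    ```python
    # Bose-Einstein-like order volume vs price for one side of the book:
    # n(p) = 1 / (exp(eps(p) / T) - 1), with "energy" eps(p) equal to the
    # distance from the best quote. Entirely illustrative.
    import numpy as np

    def order_density(price, best, T, side="bid"):
        eps = (best - price) if side == "bid" else (price - best)
        eps = np.where(eps <= 0, np.nan, eps)   # only prices behind the quote
        return 1.0 / np.expm1(eps / T)

    prices = np.arange(95.0, 100.0, 0.5)
    best_bid, bid_T = 100.0, 1.5                # "temperature" of the bid side
    print(np.round(order_density(prices, best_bid, bid_T), 3))
    # A hotter side (larger T) spreads orders deeper into the book; the
    # record relates the bid/ask temperature difference to the VAO indicator.
    ```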

  8. Mode-coupling theory predictions for a limited valency attractive square well model

    International Nuclear Information System (INIS)

    Zaccarelli, E; Saika-Voivod, I; Moreno, A J; Nave, E La; Buldyrev, S V; Sciortino, F; Tartaglia, P

    2006-01-01

    Recently we have studied, using numerical simulations, a limited valency model, i.e. an attractive square well model with a constraint on the maximum number of bonded neighbours. Studying a large region of temperatures T and packing fractions φ, we have estimated the location of the liquid-gas phase separation spinodal and the loci of dynamic arrest, where the system is trapped in a disordered non-ergodic state. Two distinct arrest lines are present in the system: a (repulsive) glass line at high packing fraction, and a gel line at low φ and T. The former is essentially vertical (φ controlled), while the latter is rather horizontal (T controlled) in the (φ, T) plane. We here complement the molecular dynamics results with mode coupling theory calculations, using the numerical structure factors as input. We find that the theory predicts a repulsive glass line, in satisfactory agreement with the simulation results, and an attractive glass line, which appears to be unrelated to the gel line

  9. Physiology-based modelling approaches to characterize fish habitat suitability: Their usefulness and limitations

    Science.gov (United States)

    Teal, Lorna R.; Marras, Stefano; Peck, Myron A.; Domenici, Paolo

    2018-02-01

    Models are useful tools for predicting the impact of global change on species distribution and abundance. As ectotherms, fish are being challenged to adapt or track changes in their environment, either in time through a phenological shift or in space by a biogeographic shift. Past modelling efforts have largely been based on correlative Species Distribution Models, which use known occurrences of species across landscapes of interest to define sets of conditions under which species are likely to maintain populations. The practical advantages of this correlative approach are its simplicity and the flexibility in terms of data requirements. However, effective conservation management requires models that make projections beyond the range of available data. One way to deal with such an extrapolation is to use a mechanistic approach based on physiological processes underlying climate change effects on organisms. Here we illustrate two approaches for developing physiology-based models to characterize fish habitat suitability. (i) Aerobic Scope Models (ASM) are based on the relationship between environmental factors and aerobic scope (defined as the difference between maximum and standard (basal) metabolism). This approach is based on experimental data collected by using a number of treatments that allow a function to be derived to predict aerobic metabolic scope from the stressor/environmental factor(s). This function is then integrated with environmental (oceanographic) data of current and future scenarios. For any given species, this approach allows habitat suitability maps to be generated at various spatiotemporal scales. The strength of the ASM approach relies on the estimate of relative performance when comparing, for example, different locations or different species. (ii) Dynamic Energy Budget (DEB) models are based on first principles including the idea that metabolism is organised in the same way within all animals. The (standard) DEB model aims to describe
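
    The ASM recipe is mechanically simple: fit MMR(T) and SMR(T) from respirometry, take their difference as aerobic scope, and evaluate it over an environmental temperature field. The curves and the temperature field below are hypothetical stand-ins, not fitted data.

    ```python
    # Aerobic Scope Model sketch: AS(T) = MMR(T) - SMR(T) evaluated on a
    # fake sea-surface temperature grid to yield relative habitat suitability.
    import numpy as np

    def smr(T):                       # standard metabolic rate, rises with T
        return 0.5 * np.exp(0.08 * T)

    def mmr(T):                       # maximum metabolic rate, peaks then drops
        return 6.0 * np.exp(-((T - 22.0) / 8.0) ** 2)

    def aerobic_scope(T):
        return np.clip(mmr(T) - smr(T), 0.0, None)

    rng = np.random.default_rng(7)
    sst = 15 + 10 * rng.random((20, 30))          # fake temperature field

    suitability = aerobic_scope(sst)
    suitability /= aerobic_scope(np.linspace(0, 35, 500)).max()  # relative
    print("share of cells with suitability > 0.5:",
          np.mean(suitability > 0.5).round(2))
    ```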

  10. Advances and Limitations of Disease Biogeography Using Ecological Niche Modeling.

    Science.gov (United States)

    Escobar, Luis E; Craft, Meggan E

    2016-01-01

    Mapping disease transmission risk is crucial in public and animal health for evidence based decision-making. Ecology and epidemiology are highly related disciplines that may contribute to improvements in mapping disease, which can be used to answer health related questions. Ecological niche modeling is increasingly used for understanding the biogeography of diseases in plants, animals, and humans. However, epidemiological applications of niche modeling approaches for disease mapping can fail to generate robust study designs, producing incomplete or incorrect inferences. This manuscript is an overview of the history and conceptual bases behind ecological niche modeling, specifically as applied to epidemiology and public health; it does not pretend to be an exhaustive and detailed description of ecological niche modeling literature and methods. Instead, this review includes selected state-of-the-science approaches and tools, providing a short guide to designing studies incorporating information on the type and quality of the input data (i.e., occurrences and environmental variables), identification and justification of the extent of the study area, and encourages users to explore and test diverse algorithms for more informed conclusions. We provide a friendly introduction to the field of disease biogeography presenting an updated guide for researchers looking to use ecological niche modeling for disease mapping. We anticipate that ecological niche modeling will soon be a critical tool for epidemiologists aiming to map disease transmission risk, forecast disease distribution under climate change scenarios, and identify landscape factors triggering outbreaks.

  11. Interaction of tide and salinity barrier: Limitation of numerical model

    Directory of Open Access Journals (Sweden)

    Suphat Vongvisessomjai

    2008-07-01

    Full Text Available Nowadays, the study of the interaction of the tide and the salinity barrier in an estuarine area is usually accomplished via numerical modeling, due to the speed and convenience of modern computers. However, numerical models provide little insight with respect to the fundamental physical mechanisms involved. In this study, it is found that all existing numerical models work satisfactorily when the barrier is located at some distance from the upstream and downstream boundary conditions. Results considerably underestimate reality when the barrier is located near the downstream boundary, usually the river mouth. Meanwhile, this analytical model provides satisfactory output for all scenarios. The main problem of the numerical model is that the effects of barrier construction in creating a reflected tide are neglected when specifying the downstream boundary conditions; using boundary conditions from before construction of the barrier, which are significantly different from those after the barrier construction, results in erroneous outputs. Future numerical models should attempt to account for this deficiency; otherwise, using this analytical model is another choice.

  12. A diffusion-limited reaction model for self-propagating Al/Pt multilayers with quench limits

    Science.gov (United States)

    Kittell, D. E.; Yarrington, C. D.; Hobbs, M. L.; Abere, M. J.; Adams, D. P.

    2018-04-01

    A diffusion-limited reaction model was calibrated for Al/Pt multilayers ignited on oxidized silicon, sapphire, and tungsten substrates, as well as for some Al/Pt multilayers ignited as free-standing foils. The model was implemented in a finite element analysis code and used to match experimental burn front velocity data collected from several years of testing at Sandia National Laboratories. Moreover, both the simulations and experiments reveal well-defined quench limits in the total Al + Pt layer (i.e., bilayer) thickness. At these limits, the heat generated from atomic diffusion is insufficient to support a self-propagating wave front on top of the substrates. Quench limits for reactive multilayers are seldom reported and are found to depend on the thermal properties of the individual layers. Here, the diffusion-limited reaction model is generalized to allow for temperature- and composition-dependent material properties, phase change, and anisotropic thermal conductivity. Utilizing this increase in model fidelity, excellent overall agreement is shown between the simulations and experimental results with a single calibrated parameter set. However, the burn front velocities of Al/Pt multilayers ignited on tungsten substrates are over-predicted. Possible sources of error are discussed and a higher activation energy (from 41.9 kJ/mol.at. to 47.5 kJ/mol.at.) is shown to bring the simulations into agreement with the velocity data observed on tungsten substrates. This higher activation energy suggests an inhibited diffusion mechanism present at lower heating rates.
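
    The effect of the recalibrated activation energy is easy to quantify: with an Arrhenius rate k proportional to exp(-Ea/RT), the ratio of rates at the two reported Ea values shows the suppression is strongest at low temperature, consistent with the inhibited-diffusion interpretation at lower heating rates. The temperatures below are arbitrary illustration points.

    ```python
    # Ratio of Arrhenius factors at Ea = 47.5 vs 41.9 kJ/mol.at. (the two
    # values discussed in the record); the prefactor cancels in the ratio.
    import numpy as np

    R = 8.314  # J/(mol K)
    for T in (600.0, 1000.0, 1500.0):
        ratio = np.exp(-(47.5e3 - 41.9e3) / (R * T))
        print(f"T = {T:>6.0f} K: rate(47.5) / rate(41.9) = {ratio:.3f}")
    ```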

  13. Optimizing selective cutting strategies for maximum carbon stocks and yield of Moso bamboo forest using BIOME-BGC model.

    Science.gov (United States)

    Mao, Fangjie; Zhou, Guomo; Li, Pingheng; Du, Huaqiang; Xu, Xiaojun; Shi, Yongjun; Mo, Lufeng; Zhou, Yufeng; Tu, Guoqing

    2017-04-15

    The selective cutting method currently used in Moso bamboo forests has resulted in a reduction of stand productivity and carbon sequestration capacity. Given the time and labor expense involved in addressing this problem manually, simulation using an ecosystem model is the most suitable approach. The BIOME-BGC model was improved to suit managed Moso bamboo forests, adapting it to include the age structure, specific ecological processes and management measures of Moso bamboo forests. A field selective cutting experiment was conducted in nine plots with three cutting intensities (high-intensity, moderate-intensity and low-intensity) during 2010-2013, and the biomass of these plots was measured for model validation. Then four selective cutting scenarios were simulated by the improved BIOME-BGC model to optimize the selective cutting timings, intervals, retained ages and intensities. The improved model matched the observed aboveground carbon density and yield of the different plots, with relative errors ranging from 9.83% to 15.74%. The results of the different selective cutting scenarios suggested that the optimal selective cutting measure is to cut 30% of culms of age 6, 80% of culms of age 7, and all culms of age 8 and above, in winter every other year. The vegetation carbon density and harvested carbon density of this selective cutting method can increase by 74.63% and 21.5%, respectively, compared with the current selective cutting measure. The optimized selective cutting measure developed in this study can significantly promote carbon density, yield, and carbon sink capacity in Moso bamboo forests. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Geometrical Optimization Approach to Isomerization: Models and Limitations.

    Science.gov (United States)

    Chang, Bo Y; Shin, Seokmin; Engel, Volker; Sola, Ignacio R

    2017-11-02

    We study laser-driven isomerization reactions through an excited electronic state using the recently developed Geometrical Optimization procedure. Our goal is to analyze whether an initial wave packet in the ground state, with optimized amplitudes and phases, can be used to enhance the yield of the reaction at faster rates, driven by a single picosecond pulse or a pair of femtosecond pulses resonant with the electronic transition. We show that the symmetry of the system imposes limitations in the optimization procedure, such that the method rediscovers the pump-dump mechanism.

  15. Measures and limits of models of fixation selection.

    Directory of Open Access Journals (Sweden)

    Niklas Wilming

    Full Text Available Models of fixation selection are a central tool in the quest to understand how the human mind selects relevant information. Using this tool in the evaluation of competing claims often requires comparing different models' relative performance in predicting eye movements. However, studies use a wide variety of performance measures with markedly different properties, which makes a comparison difficult. We make three main contributions to this line of research: First we argue for a set of desirable properties, review commonly used measures, and conclude that no single measure unites all desirable properties. However the area under the ROC curve (a classification measure) and the KL-divergence (a distance measure of probability distributions) combine many desirable properties and allow a meaningful comparison of critical model performance. We give an analytical proof of the linearity of the ROC measure with respect to averaging over subjects and demonstrate an appropriate correction of entropy-based measures like KL-divergence for small sample sizes in the context of eye-tracking data. Second, we provide a lower bound and an upper bound of these measures, based on image-independent properties of fixation data and between subject consistency respectively. Based on these bounds it is possible to give a reference frame to judge the predictive power of a model of fixation selection. We provide open-source python code to compute the reference frame. Third, we show that the upper, between subject consistency bound holds only for models that predict averages of subject populations. Departing from this we show that incorporating subject-specific viewing behavior can generate predictions which surpass that upper bound. Taken together, these findings lay out the required information that allows a well-founded judgment of the quality of any model of fixation selection and should therefore be reported when a new model is introduced.
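
    The two recommended measures are straightforward to compute on gridded fixation data. The sketch below uses synthetic data and a simple pseudocount regularization as a stand-in for the small-sample entropy correction the paper derives.

    ```python
    # AUC (fixated vs control locations ranked by model salience) and a
    # KL-divergence between observed and predicted fixation distributions.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(3)
    salience = rng.random((32, 32))                    # model prediction
    prob = salience / salience.sum()
    fix_idx = rng.choice(salience.size, size=150, p=prob.ravel())

    fix_sal = salience.ravel()[fix_idx]
    ctrl_sal = salience.ravel()[rng.integers(0, salience.size, 150)]
    auc = roc_auc_score(np.r_[np.ones(150), np.zeros(150)],
                        np.r_[fix_sal, ctrl_sal])

    # KL(observed || model) with pseudocounts regularizing empty cells
    counts = np.bincount(fix_idx, minlength=salience.size).astype(float)
    obs = (counts + 0.5) / (counts + 0.5).sum()
    kl = np.sum(obs * np.log(obs / prob.ravel()))
    print(f"AUC = {auc:.3f}, KL = {kl:.3f} nats")
    ```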

  16. Coherent states and classical limit of algebraic quantum models

    International Nuclear Information System (INIS)

    Scutaru, H.

    1983-01-01

    The algebraic models for collective motion in nuclear physics belong to a class of theories whose basic observables generate selfadjoint representations of finite dimensional, real Lie algebras, or of the enveloping algebras of these Lie algebras. The simplest model of this kind, and the one most used for illustration, is the Lipkin model, which is associated with the Lie algebra of the three-dimensional rotation group, and which presents all the characteristic features of an algebraic model. The Lipkin Hamiltonian is the image, under a representation, of an element of the enveloping algebra of the algebra SO(3). In order to understand the structure of the algebraic models, the author remarks that in both classical and quantum mechanics the dynamics is associated with a typical algebraic structure which we shall call a dynamical algebra. In this paper he shows how the constructions can be made in the case of algebraic quantum systems. The construction of the symplectic manifold M can be made in this case using a quantum analog of the momentum map which he defines

  17. Assessing the limitations of the Banister model in monitoring training

    Science.gov (United States)

    Hellard, Philippe; Avalos, Marta; Lacoste, Lucien; Barale, Frédéric; Chatard, Jean-Claude; Millet, Grégoire P.

    2006-01-01

    The aim of this study was to carry out a statistical analysis of the Banister model to verify how useful it is in monitoring the training programmes of elite swimmers. The accuracy, ill-conditioning and stability of this model were thus investigated. Training loads of nine elite swimmers, measured over one season, were related to performances with the Banister model. Firstly, to assess accuracy, the 95% bootstrap confidence interval (95% CI) of parameter estimates and modelled performances were calculated. Secondly, to study ill-conditioning, the correlation matrix of parameter estimates was computed. Finally, to analyse stability, iterative computation was performed with the same data but minus one performance, chosen randomly. Performances were significantly related to training loads in all subjects (R2 = 0.79 ± 0.13, P < 0.05) and the estimation procedure seemed to be stable. Nevertheless, the 95% CIs of the most useful parameters for monitoring training were wide: τa = 38 (17, 59), τf = 19 (6, 32), tn = 19 (7, 35), tg = 43 (25, 61). Furthermore, some parameters were highly correlated, making their interpretation worthless. The study suggested possible ways to deal with these problems and reviewed alternative methods to model the training-performance relationships. PMID:16608765
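
    For reference, the Banister model being analyzed is an impulse-response model: performance is a baseline plus a slow "fitness" convolution of training loads minus a faster "fatigue" convolution. The sketch uses the record's τa = 38 and τf = 19 day constants but arbitrary gains and loads, not the swimmers' fitted values.

    ```python
    # Banister fitness-fatigue model:
    # p(t) = p0 + k_a * sum_{i<t} w_i e^{-(t-i)/tau_a}
    #            - k_f * sum_{i<t} w_i e^{-(t-i)/tau_f}
    import numpy as np

    def banister(loads, p0, k_a, tau_a, k_f, tau_f):
        """Modeled performance on each day given daily training loads."""
        n = len(loads)
        p = np.full(n, float(p0))
        for t in range(1, n):
            i = np.arange(t)
            fitness = np.dot(loads[:t], np.exp(-(t - i) / tau_a))
            fatigue = np.dot(loads[:t], np.exp(-(t - i) / tau_f))
            p[t] += k_a * fitness - k_f * fatigue
        return p

    rng = np.random.default_rng(5)
    loads = rng.gamma(2.0, 1.0, size=120)          # 120 days of training
    perf = banister(loads, p0=100, k_a=1.0, tau_a=38, k_f=2.0, tau_f=19)
    print("performance, last 5 days:", np.round(perf[-5:], 1))
    ```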

  18. Some Limits Using Random Slope Models to Measure Academic Growth

    Directory of Open Access Journals (Sweden)

    Daniel B. Wright

    2017-11-01

    Full Text Available Academic growth is often estimated using a random slope multilevel model with several years of data. However, if there are few time points, the estimates can be unreliable. While using random slope multilevel models can lower the variance of the estimates, these procedures can produce more highly erroneous estimates—zero and negative correlations with the true underlying growth—than using ordinary least squares estimates calculated for each student or school individually. An example is provided where schools with increasing graduation rates are estimated to have negative growth and vice versa. The estimation is worse when the underlying data are skewed. It is recommended that there are at least six time points for estimating growth if using a random slope model. A combination of methods can be used to avoid some of the aberrant results if it is not possible to have six or more time points.
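
    A minimal reproduction of the comparison the record describes, a random slope multilevel fit versus per-group OLS slopes, on synthetic data with only four time points (statsmodels' MixedLM; the data and effect sizes are invented):

    ```python
    # Random slope growth model vs per-school OLS slopes, in the
    # few-time-points regime the record warns about.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(11)
    rows = []
    for school in range(40):
        true_slope = rng.normal(1.0, 0.5)
        for t in range(4):                        # only four time points
            rows.append((school, t, 50 + true_slope * t + rng.normal(0, 3)))
    df = pd.DataFrame(rows, columns=["school", "time", "score"])

    md = smf.mixedlm("score ~ time", df, groups=df["school"], re_formula="~time")
    fit = md.fit()
    print("average growth (mixed model):", round(fit.params["time"], 3))

    # Per-school OLS slopes, the record's comparison estimator
    ols = df.groupby("school").apply(
        lambda g: np.polyfit(g["time"], g["score"], 1)[0])
    print("OLS slope spread across schools:", ols.std().round(2))
    ```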

  19. Confidence limits for data mining models of options prices

    Science.gov (United States)

    Healy, J. V.; Dixon, M.; Read, B. J.; Cai, F. F.

    2004-12-01

    Non-parametric methods such as artificial neural nets can successfully model prices of financial options, out-performing the Black-Scholes analytic model (Eur. Phys. J. B 27 (2002) 219). However, the accuracy of such approaches is usually expressed only by a global fitting/error measure. This paper describes a robust method for determining prediction intervals for models derived by non-linear regression. We have demonstrated it by application to a standard synthetic example (29th Annual Conference of the IEEE Industrial Electronics Society, Special Session on Intelligent Systems, pp. 1926-1931). The method is used here to obtain prediction intervals for option prices using market data for LIFFE "ESX" FTSE 100 index options (http://www.liffe.com/liffedata/contracts/month_onmonth.xls). We avoid special neural net architectures and use standard regression procedures to determine local error bars. The method is appropriate for target data with non-constant variance (or volatility).

  20. Comparing modeled and observed changes in mineral dust transport and deposition to Antarctica between the Last Glacial Maximum and current climates

    Energy Technology Data Exchange (ETDEWEB)

    Albani, Samuel [University of Siena, Graduate School in Polar Sciences, Siena (Italy); University of Milano-Bicocca, Department of Environmental Sciences, Milano (Italy); Cornell University, Department of Earth and Atmospheric Sciences, Ithaca, NY (United States); Mahowald, Natalie M. [Cornell University, Department of Earth and Atmospheric Sciences, Ithaca, NY (United States); Delmonte, Barbara; Maggi, Valter [University of Milano-Bicocca, Department of Environmental Sciences, Milano (Italy); Winckler, Gisela [Columbia University, Lamont-Doherty Earth Observatory, Palisades, NY (United States); Columbia University, Department of Earth and Environmental Sciences, New York, NY (United States)

    2012-05-15

    Mineral dust aerosols represent an active component of the Earth's climate system, by interacting with radiation directly, and by modifying clouds and biogeochemistry. Mineral dust from polar ice cores over the last million years can be used as a paleoclimate proxy, and provides unique information about climate variability, as changes in dust deposition at the core sites can be due to changes in sources, transport and/or deposition locally. Here we present results from a study based on climate model simulations using the Community Climate System Model. The focus of this work is to analyze simulated differences in the dust concentration, size distribution and sources in current climate conditions and during the Last Glacial Maximum at specific ice core locations in Antarctica, and compare with available paleodata. Model results suggest that South America is the most important source for dust deposited in Antarctica in the current climate, but Australia is also a major contributor and there is spatial variability in the relative importance of the major dust sources. During the Last Glacial Maximum the dominant source in the model was South America, because of the increased activity of glaciogenic dust sources in Southern Patagonia-Tierra del Fuego and the Southernmost Pampas regions, as well as an increase in transport efficiency southward. Dust emitted from the Southern Hemisphere dust source areas usually follows zonal patterns, but southward flow towards Antarctica is located in specific areas characterized by southward displacement of air masses. Observations and model results consistently suggest a spatially variable shift in dust particle sizes. This is due to a combination of relatively reduced en route wet removal favouring a generalized shift towards smaller particles, and on the other hand to an enhanced relative contribution of dry coarse particle deposition in the Last Glacial Maximum. (orig.)

  1. Limit Stress Spline Models for GRP Composites | Ihueze | Nigerian ...

    African Journals Online (AJOL)

    Spline functions were established on the assumption of three intervals and the fitting of quadratic and cubic splines to critical stress-strain response data. Quadratic ... of data points. The spline model is therefore recommended, as it evaluates the function at subintervals, eliminating the error associated with wide-range interpolation.
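
    A short sketch of the recommended piecewise approach with scipy's CubicSpline, which fits cubics on each subinterval between knots; the stress-strain points below are made up, not the paper's GRP data.

    ```python
    # Piecewise cubic spline fit to a stress-strain curve, evaluated at a
    # point inside one subinterval, including the tangent modulus (first
    # derivative). Data are illustrative.
    import numpy as np
    from scipy.interpolate import CubicSpline

    strain = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])            # percent
    stress = np.array([0.0, 60.0, 110.0, 150.0, 175.0, 185.0, 188.0])  # MPa

    spline = CubicSpline(strain, stress)
    print("stress at 1.25% strain:", spline(1.25).round(1), "MPa")
    print("tangent modulus there:", spline(1.25, 1).round(1), "MPa/%")
    ```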

  2. Random fluid limit of an overloaded polling model

    NARCIS (Netherlands)

    M. Frolkova (Masha); S.G. Foss (Sergey); A.P. Zwart (Bert)

    2014-01-01

    In the present paper, we study the evolution of an overloaded cyclic polling model that starts empty. Exploiting a connection with multitype branching processes, we derive fluid asymptotics for the joint queue length process. Under passage to the fluid dynamics, the server switches

  3. Random fluid limit of an overloaded polling model

    NARCIS (Netherlands)

    M. Frolkova (Masha); S.G. Foss (Sergey); A.P. Zwart (Bert)

    2013-01-01

    In the present paper, we study the evolution of an overloaded cyclic polling model that starts empty. Exploiting a connection with multitype branching processes, we derive fluid asymptotics for the joint queue length process. Under passage to the fluid dynamics, the server switches

  4. Evidence, models, conservation programs and limits to management

    Science.gov (United States)

    Nichols, J.D.

    2012-01-01

    Walsh et al. (2012) emphasized the importance of obtaining evidence to assess the effects of management actions on state variables relevant to objectives of conservation programs. They focused on malleefowl Leipoa ocellata, ground-dwelling Australian megapodes listed as vulnerable. They noted that although fox Vulpes vulpes baiting is the main management action used in malleefowl conservation throughout southern Australia, evidence of the effectiveness of this action is limited and currently debated. Walsh et al. (2012) then used data from 64 sites monitored for malleefowl and foxes over 23 years to assess key functional relationships relevant to fox control as a conservation action for malleefowl. In one set of analyses, Walsh et al. (2012) focused on two relationships: fox baiting investment versus fox presence, and fox presence versus malleefowl population size and rate of population change. Results led to the counterintuitive conclusion that increases in investments in fox control produced slight decreases in malleefowl population size and growth. In a second set of analyses, Walsh et al. (2012) directly assessed the relationship between investment in fox baiting and malleefowl population size and rate of population change. This set of analyses showed no significant relationship between investment in fox population control and malleefowl population growth. Both sets of analyses benefited from the incorporation of key environmental covariates hypothesized to influence these management relationships. Walsh et al. (2012) concluded that "in most situations, malleefowl conservation did not effectively benefit from fox baiting at current levels of investment." In this commentary, I discuss the work of Walsh et al. (2012) using the conceptual framework of structured decision making (SDM). In doing so, I accept their analytic results and associated conclusions as accurate and discuss basic ideas about evidence, conservation and limits to management.

  5. Simulation model study of limitation on the locating distance of a ground penetrating radar; Chichu tansa radar no tansa kyori genkai ni kansuru simulation model no kochiku

    Energy Technology Data Exchange (ETDEWEB)

    Nakauchi, T; Tsunasaki, M; Kishi, M; Hayakawa, H [Osaka Gas Co. Ltd., Osaka (Japan)

    1996-10-01

    Various simulations were carried out under various laying conditions to obtain the limit on locating distance for ground penetrating radar. Recently, ground penetrating radar has attracted attention as a technology for locating obstacles such as existing buried objects. To enhance the theoretical model (radar equation) of the maximum locating distance, the following factors were examined experimentally using pulse ground penetrating radar: ground surface conditions such as asphalt pavement, diameter of buried pipes, material of buried pipes, effect of soil, and antenna gain. The experimental results agreed well with those of actual field experiments. By adopting the antenna gain and the effect of the ground surface, a more practical simulation using underground models became possible. The maximum locating distance was improved more by a large antenna than by a small one in the actual field. It is assumed that the large antenna contributed to improved gain and reduced attenuation while passing through soil. 5 refs., 12 figs.
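
    The locating-distance limit comes from a radar-equation balance: received power falls with R^-4 geometric spreading plus exponential two-way soil attenuation, and the maximum range is where it crosses the detection threshold. A sketch with hypothetical system gain and attenuation values:

    ```python
    # Maximum locating distance from a simplified radar-equation balance in
    # dB: spreading loss (40 log10 R) plus two-way soil attenuation. All
    # parameter values are hypothetical.
    import numpy as np
    from scipy.optimize import brentq

    def received_db(R, alpha_db_per_m, sys_gain_db=120.0):
        """Relative received power (dB) at range R meters."""
        spreading = -40.0 * np.log10(R)          # R^-4 geometric spreading
        soil = -2.0 * alpha_db_per_m * R         # two-way soil attenuation
        return sys_gain_db + spreading + soil

    threshold_db = 0.0                           # detection limit (relative)
    for alpha in (5.0, 15.0, 30.0):              # dB/m, dry to wet soil
        R_max = brentq(lambda R: received_db(R, alpha) - threshold_db,
                       0.05, 100.0)
        print(f"attenuation {alpha:>4.1f} dB/m -> max distance {R_max:.2f} m")
    ```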

  6. 76 FR 31800 - Airworthiness Directives; Viking Air Limited Model DHC-3 (Otter) Airplanes

    Science.gov (United States)

    2011-06-02

    ... Airworthiness Directives; Viking Air Limited Model DHC-3 (Otter) Airplanes AGENCY: Federal Aviation... INFORMATION: Discussion Recent analysis by the FAA on the Viking Air Limited Model DHC-3 (Otter) airplanes... new airworthiness directive (AD): 2011-12-02 Viking Aircraft Limited: Amendment 39-16709; Docket No...

  7. Little Higgs model limits from LHC - Input for Snowmass 2013

    International Nuclear Information System (INIS)

    Reuter, Juergen; Tonini, Marco; Vries, Maikel de

    2013-07-01

    The status of the most prominent model implementations of the Little Higgs paradigm, the Littlest Higgs with and without discrete T parity as well as the Simplest Little Higgs are reviewed. For this, we are taking into account a fit to 21 electroweak precision observables from LEP, SLC, Tevatron together with the full 25 fb -1 of Higgs data reported from ATLAS and CMS at Moriond 2013. We also - focusing on the Littlest Higgs with T parity - include an outlook on corresponding direct searches at the 8 TeV LHC and their competitiveness with the EW and Higgs data regarding their exclusion potential. This contribution to the Snowmass procedure serves as a guideline which regions in parameter space of Little Higgs models are still compatible for the upcoming LHC runs and future experiments at the energy frontier. For this we propose two different benchmark scenarios for the Littlest Higgs with T parity, one with heavy mirror quarks, one with light ones.

  8. The Limit Deposit Velocity model, a new approach

    Directory of Open Access Journals (Sweden)

    Miedema Sape A.

    2015-12-01

    Full Text Available In slurry transport of settling slurries in Newtonian fluids, it is often stated that one should apply a line speed above a critical velocity, because below this critical velocity there is the danger of plugging the line. There are many definitions and names for this critical velocity. It is referred to as the velocity where a bed starts sliding, or the velocity above which there is no stationary or sliding bed. Others use the velocity where the hydraulic gradient is at a minimum, because of the minimum energy consumption. Most models from the literature are one-term, one-equation models, based on the idea that the critical velocity can be explained that way.

  9. Staying Connected: Sustaining Collaborative Care Models with Limited Funding.

    Science.gov (United States)

    Johnston, Brenda J; Peppard, Lora; Newton, Marian

    2015-08-01

    Providing psychiatric services in the primary care setting is challenging. The multidisciplinary, coordinated approach of collaborative care models (CCMs) addresses these challenges. The purpose of the current article is to discuss the implementation of a CCM at a free medical clinic (FMC) where volunteer staff provide the majority of services. Essential components of CCMs include (a) comprehensive screening and assessment, (b) shared development and communication of care plans among providers and the patient, and (c) care coordination and management. Challenges to implementing and sustaining a CCM at a FMC in Virginia attempting to meet the medical and psychiatric needs of the underserved are addressed. Although the CCM produced favorable outcomes, sustaining the model long-term presented many challenges. Strategies for addressing these challenges are discussed. Copyright 2015, SLACK Incorporated.

  10. Toward a Mechanistic Modeling of Nitrogen Limitation on Vegetation Dynamics

    OpenAIRE

    Xu, Chonggang; Fisher, Rosie; Wullschleger, Stan D.; Wilson, Cathy J.; Cai, Michael; McDowell, Nate G.

    2012-01-01

    Nitrogen is a dominant regulator of vegetation dynamics, net primary production, and terrestrial carbon cycles; however, most ecosystem models use a rather simplistic relationship between leaf nitrogen content and photosynthetic capacity. Such an approach does not consider how patterns of nitrogen allocation may change with differences in light intensity, growing-season temperature and CO(2) concentration. To account for this known variability in nitrogen-photosynthesis relationships, we deve...

  11. A mixed integer linear programming model to reconstruct phylogenies from single nucleotide polymorphism haplotypes under the maximum parsimony criterion

    Science.gov (United States)

    2013-01-01

    that these constraints can often lead to significant reductions in the gap between the optimal solution and its non-integral linear programming bound relative to the prior art as well as often substantially faster processing of moderately hard problem instances. Conclusion We provide an indication of the conditions under which such an optimal enumeration approach is likely to be feasible, suggesting that these strategies are usable for relatively large numbers of taxa, although with stricter limits on numbers of variable sites. The work thus provides methodology suitable for provably optimal solution of some harder instances that resist all prior approaches. PMID:23343437

  12. The MMS Dayside Magnetic Reconnection Locations During Phase 1 and Their Relation to the Predictions of the Maximum Magnetic Shear Model

    Science.gov (United States)

    Trattner, K. J.; Burch, J. L.; Ergun, R.; Eriksson, S.; Fuselier, S. A.; Giles, B. L.; Gomez, R. G.; Grimes, E. W.; Lewis, W. S.; Mauk, B.; Petrinec, S. M.; Russell, C. T.; Strangeway, R. J.; Trenchi, L.; Wilder, F. D.

    2017-12-01

    Several studies have validated the accuracy of the maximum magnetic shear model to predict the location of the reconnection site at the dayside magnetopause. These studies found agreement between model and observations for 74% to 88% of events examined. It should be noted that, of the anomalous events that failed the prediction of the model, 72% shared a very specific parameter range. These events occurred around equinox for an interplanetary magnetic field (IMF) clock angle of about 240°. This study investigates if this remarkable grouping of events is also present in data from the recently launched MMS. The MMS magnetopause encounter database from the first dayside phase of the mission includes about 4,500 full and partial magnetopause crossings and flux transfer events. We use the known reconnection line signature of switching accelerated ion beams in the magnetopause boundary layer to identify encounters with the reconnection region and identify 302 events during phase 1a when the spacecraft are at reconnection sites. These confirmed reconnection locations are compared with the predicted location from the maximum magnetic shear model and revealed an 80% agreement. The study also revealed the existence of anomalous cases as mentioned in an earlier study. The anomalies are concentrated for times around the equinoxes together with IMF clock angles around 140° and 240°. Another group of anomalies for the same clock angle ranges was found during December events.

  13. Little Higgs model limits from LHC - Input for Snowmass 2013

    Energy Technology Data Exchange (ETDEWEB)

    Reuter, Juergen; Tonini, Marco; de Vries, Maikel

    2013-07-15

    The status of the most prominent model implementations of the Little Higgs paradigm, the Littlest Higgs with and without discrete T parity as well as the Simplest Little Higgs, is reviewed. For this, we take into account a fit to 21 electroweak precision observables from LEP, SLC, Tevatron together with the full 25 fb⁻¹ of Higgs data reported from ATLAS and CMS at Moriond 2013. We also, focusing on the Littlest Higgs with T parity, include an outlook on corresponding direct searches at the 8 TeV LHC and their competitiveness with the EW and Higgs data regarding their exclusion potential. This contribution to the Snowmass procedure serves as a guideline for which regions in parameter space of Little Higgs models are still compatible for the upcoming LHC runs and future experiments at the energy frontier. For this we propose two different benchmark scenarios for the Littlest Higgs with T parity, one with heavy mirror quarks and one with light ones.

  14. Dental Care Coverage and Use: Modeling Limitations and Opportunities

    Science.gov (United States)

    Moeller, John F.; Chen, Haiyan

    2014-01-01

    Objectives. We examined why older US adults without dental care coverage and use would have lower use rates if offered coverage than do those who currently have coverage. Methods. We used data from the 2008 Health and Retirement Study to estimate a multinomial logistic model to analyze the influence of personal characteristics in the grouping of older US adults into those with and those without dental care coverage and dental care use. Results. Compared with persons with no coverage and no dental care use, users of dental care with coverage were more likely to be younger, female, wealthier, college graduates, married, in excellent or very good health, and not missing all their permanent teeth. Conclusions. Providing dental care coverage to uninsured older US adults without use will not necessarily result in use rates similar to those with prior coverage and use. We have offered a model using modifiable factors that may help policy planners facilitate programs to increase dental care coverage uptake and use. PMID:24328635

  15. Projective limits of state spaces III. Toy-models

    Science.gov (United States)

    Lanéry, Suzanne; Thiemann, Thomas

    2018-01-01

    In this series of papers, we investigate the projective framework initiated by Kijowski (1977) and Okołów (2009, 2014, 2013) [1,2], which describes the states of a quantum theory as projective families of density matrices. A short reading guide to the series can be found in Lanéry (2016). A strategy to implement the dynamics in this formalism was presented in our first paper Lanéry and Thiemann (2017) (see also Lanéry, 2016, section 4), which we now test in two simple toy-models. The first one is a very basic linear model, meant as an illustration of the general procedure, and we will only discuss it at the classical level. In the second one, we reformulate the Schrödinger equation, treated as a classical field theory, within this projective framework, and proceed to its (non-relativistic) second quantization. We are then able to reproduce the physical content of the usual Fock quantization.

  16. On the limits of application of the nonlocal quark model

    International Nuclear Information System (INIS)

    Efimov, G.V.; Ivanov, M.A.; Novitsyn, E.A.; Ryabtsev, A.D.

    1983-01-01

    The possibility of applying the nonlocal quark model (NQM) to the physics of mesons containing charmed quarks is considered. A method for the description of states with identical quantum numbers is suggested. In order to distinguish between such states, different quark currents are introduced with an additional condition of ''orthogonality'' implied. The latter allows one to neglect nondiagonal off-shell matrix elements in the compositeness condition for coupling constants. In the framework of NQM with the additional assumptions mentioned, several decay widths of vector charmonium states have been computed, namely the leptonic widths of J/psi(3100), psi'(3685) and psi(3770) and the decay width into charmed D-mesons, psi(3770) → D anti-D. It is shown that the two-parametric freedom of the model is not sufficient to fit the experimental data. It is concluded that a revision of the basic concepts of NQM is necessary in the physics of mesons containing c-quarks.

  17. Achieving the physical limits of the bounded-storage model

    International Nuclear Information System (INIS)

    Mandayam, Prabha; Wehner, Stephanie

    2011-01-01

    Secure two-party cryptography is possible if the adversary's quantum storage device suffers imperfections. For example, security can be achieved if the adversary can store strictly less than half of the qubits transmitted during the protocol. This special case is known as the bounded-storage model, and it has long been an open question whether security can still be achieved if the adversary's storage were any larger. Here, we answer this question positively and demonstrate a two-party protocol which is secure as long as the adversary cannot store even a small fraction of the transmitted pulses. We also show that security can be extended to a larger class of noisy quantum memories.

  18. COUNTERCURRENT FLOW LIMITATION EXPERIMENTS AND MODELING FOR IMPROVED REACTOR SAFETY

    International Nuclear Information System (INIS)

    Vierow, Karen

    2008-01-01

    This project is investigating countercurrent flow and 'flooding' phenomena in light water reactor systems to improve reactor safety of current and future reactors. To better understand the occurrence of flooding in the surge line geometry of a PWR, two experimental programs were performed. In the first, a test facility with an acrylic test section provided visual data on flooding for air-water systems in large diameter tubes. This test section also allowed for development of techniques to form an annular liquid film along the inner surface of the 'surge line' and other techniques which would be difficult to verify in an opaque test section. Based on experiences in the air-water testing and the improved understanding of flooding phenomena, two series of tests were conducted in a large-diameter, stainless steel test section. Air-water test results and steam-water test results were directly compared to note the effect of condensation. Results indicate that, as for smaller diameter tubes, the flooding phenomenon is predominantly driven by the hydrodynamics. Tests with the test sections inclined were attempted but the annular film was easily disrupted. A theoretical model for steam venting from inclined tubes is proposed herein and validated against air-water data. Empirical correlations were proposed for air-water and steam-water data. Methods for developing analytical models of the air-water and steam-water systems are discussed, as is the applicability of the current data to the surge line conditions. This report documents the project results from July 1, 2005 through June 30, 2008.
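
    The empirical flooding correlations mentioned are typically of the Wallis form, written in terms of dimensionless superficial velocities; a generic version is shown below, where the constants m and C are fitted to data and D is the tube diameter (the report's fitted values are not reproduced here).

```latex
% Generic Wallis-type flooding correlation (fitted constants m, C assumed):
\sqrt{j_g^{*}} + m\,\sqrt{j_f^{*}} = C,
\qquad
j_k^{*} = j_k \sqrt{\frac{\rho_k}{g\,D\,(\rho_f - \rho_g)}}, \quad k \in \{f, g\}
```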

  19. Determining passive cooling limits in CPV using an analytical thermal model

    Science.gov (United States)

    Gualdi, Federico; Arenas, Osvaldo; Vossier, Alexis; Dollet, Alain; Aimez, Vincent; Arès, Richard

    2013-09-01

    We propose an original thermal analytical model aiming to predict the practical limits of passive cooling systems for high concentration photovoltaic modules. The analytical model is described and validated by comparison with a commercial 3D finite element model. The limiting performances of flat plate cooling systems in natural convection are then derived and discussed.
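
    For intuition, the practical limit of such passive coolers can be bounded with a one-resistance lumped estimate. The sketch below is a back-of-envelope stand-in for the authors' analytical model, with an assumed natural-convection coefficient; the function name and all numbers are illustrative.

```python
# Illustrative lumped estimate (not the authors' analytical model): the
# receiver temperature rise for a flat plate rejecting heat by natural
# convection, T = T_ambient + Q / (h * A). The h value is an assumption.
def plate_temperature(q_waste_w, area_m2, t_amb_c=25.0, h_w_m2k=7.0):
    """Natural convection from a flat plate; h ~ 5-10 W/m^2K is typical."""
    return t_amb_c + q_waste_w / (h_w_m2k * area_m2)

# A concentrator cell dissipating 20 W through a 0.05 m^2 plate:
print(f"{plate_temperature(20.0, 0.05):.1f} C")  # ~ 82.1 C
```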

  20. Turning limited experimental information into 3D models of RNA.

    Science.gov (United States)

    Flores, Samuel Coulbourn; Altman, Russ B

    2010-09-01

    Our understanding of RNA functions in the cell is evolving rapidly. As for proteins, the detailed three-dimensional (3D) structure of RNA is often key to understanding its function. Although crystallography and nuclear magnetic resonance (NMR) can determine the atomic coordinates of some RNA structures, many 3D structures present technical challenges that make these methods difficult to apply. The great flexibility of RNA, its charged backbone, dearth of specific surface features, and propensity for kinetic traps all conspire with its long folding time to challenge in silico methods for physics-based folding. On the other hand, base-pairing interactions (either in runs to form helices or isolated tertiary contacts) and motifs are often available from relatively low-cost experiments or informatics analyses. We present RNABuilder, a novel code that uses internal coordinate mechanics to satisfy user-specified base pairing and steric forces under chemical constraints. The code recapitulates the topology and characteristic L-shape of tRNA and obtains an accurate noncrystallographic structure of the Tetrahymena ribozyme P4/P6 domain. The algorithm scales nearly linearly with molecule size, opening the door to the modeling of significantly larger structures.

  1. New limits on coupled dark energy model after Planck 2015

    Science.gov (United States)

    Li, Hang; Yang, Weiqiang; Wu, Yabo; Jiang, Ying

    2018-06-01

    We used the Planck 2015 cosmic microwave background anisotropy, baryon acoustic oscillation, type-Ia supernovae, redshift-space distortions, and weak gravitational lensing to test the model parameter space of coupled dark energy. We assumed a constant and a time-varying equation of state parameter for dark energy, and treated dark matter and dark energy as fluids whose energy transfer is proportional to the combined term of the energy densities and equation of state, such as $Q = 3H\xi(1+w_x)\rho_x$ and $Q = 3H\xi[1+w_0+w_1(1-a)]\rho_x$; the full space of the equation of state can be measured when the term $(1+w_x)$ is considered in the energy exchange. According to the joint observational constraint, the results showed that $w_x = -1.006^{+0.047}_{-0.027}$ and $\xi = 0.098^{+0.026}_{-0.098}$ for coupled dark energy with a constant equation of state, and $w_0 = -1.076^{+0.085}_{-0.076}$, $w_1 = -0.069^{+0.361}_{-0.319}$, and $\xi = 0.210^{+0.048}_{-0.210}$ for a variable equation of state. We did not find any clear evidence for coupling in the dark fluids at the 1σ region.

  2. Dose Modeling Evaluations and Technical Support Document For the Authorized Limits Request for the DOE-Owned Property Outside the Limited Area, Paducah Gaseous Diffusion Plant Paducah, Kentucky

    Energy Technology Data Exchange (ETDEWEB)

    Boerner, A. J. [Oak Ridge Institute for Science and Education (ORISE), Oak Ridge, TN (United States). Independent Environmental Assessment and Verification Program; Maldonado, D. G. [Oak Ridge Institute for Science and Education (ORISE), Oak Ridge, TN (United States). Independent Environmental Assessment and Verification Program; Hansen, Tom [Ameriphysics, LLC (United States)

    2012-09-01

    Environmental assessments and remediation activities are being conducted by the U.S. Department of Energy (DOE) at the Paducah Gaseous Diffusion Plant (PGDP), Paducah, Kentucky. The Oak Ridge Institute for Science and Education (ORISE), a DOE prime contractor, was contracted by the DOE Portsmouth/Paducah Project Office (DOE-PPPO) to conduct radiation dose modeling analyses and derive single radionuclide soil guidelines (soil guidelines) in support of the derivation of Authorized Limits (ALs) for 'DOE-Owned Property Outside the Limited Area' ('Property') at the PGDP. The ORISE evaluation specifically included the area identified by DOE restricted area postings (public use access restrictions) and areas licensed by DOE to the West Kentucky Wildlife Management Area (WKWMA). The licensed areas are available without restriction to the general public for a variety of (primarily) recreational uses. Relevant receptors impacting current and reasonably anticipated future use activities were evaluated. In support of soil guideline derivation, a Conceptual Site Model (CSM) was developed. The CSM listed radiation and contamination sources, release mechanisms, transport media, representative exposure pathways from residual radioactivity, and a total of three receptors (under present and future use scenarios). Plausible receptors included a Resident Farmer, Recreational User, and Wildlife Worker. Single radionuclide soil guidelines (outputs specified by the software modeling code) were generated for three receptors and thirteen targeted radionuclides. These soil guidelines were based on satisfying the project dose constraints. For comparison, soil guidelines applicable to the basic radiation public dose limit of 100 mrem/yr were generated. Single radionuclide soil guidelines from the most limiting (restrictive) receptor based on a target dose constraint of 25 mrem/yr were then rounded and identified as the derived soil guidelines. An additional evaluation using the derived soil

  3. Dose Modeling Evaluations and Technical Support Document For the Authorized Limits Request for the DOE-Owned Property Outside the Limited Area, Paducah Gaseous Diffusion Plant Paducah, Kentucky

    International Nuclear Information System (INIS)

    Boerner, A. J.

    2012-01-01

    Environmental assessments and remediation activities are being conducted by the U.S. Department of Energy (DOE) at the Paducah Gaseous Diffusion Plant (PGDP), Paducah, Kentucky. The Oak Ridge Institute for Science and Education (ORISE), a DOE prime contractor, was contracted by the DOE Portsmouth/Paducah Project Office (DOE-PPPO) to conduct radiation dose modeling analyses and derive single radionuclide soil guidelines (soil guidelines) in support of the derivation of Authorized Limits (ALs) for 'DOE-Owned Property Outside the Limited Area' ('Property') at the PGDP. The ORISE evaluation specifically included the area identified by DOE restricted area postings (public use access restrictions) and areas licensed by DOE to the West Kentucky Wildlife Management Area (WKWMA). The licensed areas are available without restriction to the general public for a variety of (primarily) recreational uses. Relevant receptors impacting current and reasonably anticipated future use activities were evaluated. In support of soil guideline derivation, a Conceptual Site Model (CSM) was developed. The CSM listed radiation and contamination sources, release mechanisms, transport media, representative exposure pathways from residual radioactivity, and a total of three receptors (under present and future use scenarios). Plausible receptors included a Resident Farmer, Recreational User, and Wildlife Worker. Single radionuclide soil guidelines (outputs specified by the software modeling code) were generated for three receptors and thirteen targeted radionuclides. These soil guidelines were based on satisfying the project dose constraints. For comparison, soil guidelines applicable to the basic radiation public dose limit of 100 mrem/yr were generated. Single radionuclide soil guidelines from the most limiting (restrictive) receptor based on a target dose constraint of 25 mrem/yr were then rounded and identified as the derived soil guidelines. An additional evaluation using the derived soil

  4. How cold was Europe at the Last Glacial Maximum? A synthesis of the progress achieved since the first PMIP model-data comparison

    Directory of Open Access Journals (Sweden)

    G. Ramstein

    2007-06-01

    The Last Glacial Maximum has been one of the first foci of the Paleoclimate Modelling Intercomparison Project (PMIP). During its first phase, the results of 17 atmosphere general circulation models were compared to paleoclimate reconstructions. One of the largest discrepancies in the simulations was the systematic underestimation, by at least 10°C, of the winter cooling over Europe and the Mediterranean region observed in the pollen-based reconstructions. In this paper, we investigate the progress achieved to reduce this inconsistency through a large modelling effort and improved temperature reconstructions. We show that increased model spatial resolution does not significantly increase the simulated LGM winter cooling. Further, neither the inclusion of a vegetation cover compatible with the LGM climate, nor the interactions with the oceans simulated by the atmosphere-ocean general circulation models run in the second phase of PMIP result in a better agreement between models and data. Accounting for changes in interannual variability in the interpretation of the pollen data does not result in a reduction of the reconstructed cooling. The largest recent improvement in the model-data comparison has instead arisen from a new climate reconstruction based on inverse vegetation modelling, which explicitly accounts for the CO2 decrease at LGM and which substantially reduces the LGM winter cooling reconstructed from pollen assemblages. As a result, the simulated and observed LGM winter cooling over Western Europe and the Mediterranean area are now in much better agreement.

  5. Modelling the occurrence of heat waves in maximum and minimum temperatures over Spain and projections for the period 2031-60

    Science.gov (United States)

    Abaurrea, J.; Asín, J.; Cebrián, A. C.

    2018-02-01

    The occurrence of extreme heat events in maximum and minimum daily temperatures is modelled using a non-homogeneous common Poisson shock process. It is applied to five Spanish locations, representative of the most common climates over the Iberian Peninsula. The model is based on an excess over threshold approach and distinguishes three types of extreme events: only in maximum temperature, only in minimum temperature and in both of them (simultaneous events). It takes into account the dependence between the occurrence of extreme events in both temperatures and its parameters are expressed as functions of time and temperature related covariates. The fitted models allow us to characterize the occurrence of extreme heat events and to compare their evolution in the different climates during the observed period. This model is also a useful tool for obtaining local projections of the occurrence rate of extreme heat events under climate change conditions, using the future downscaled temperature trajectories generated by Earth System Models. The projections for 2031-60 under scenarios RCP4.5, RCP6.0 and RCP8.5 are obtained and analysed using the trajectories from four earth system models which have successfully passed a preliminary control analysis. Different graphical tools and summary measures of the projected daily intensities are used to quantify the climate change on a local scale. A high increase in the occurrence of extreme heat events, mainly in July and August, is projected in all the locations, all types of event and in the three scenarios, although in 2051-60 the increase is higher under RCP8.5. However, relevant differences are found between the evolution in the different climates and the types of event, with a specially high increase in the simultaneous ones.
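
    Occurrence models of this family are usually simulated by thinning (Lewis-Shedler): generate candidate times from a homogeneous Poisson process at the peak rate and accept each with probability λ(t)/λ_max. The intensity function below is a made-up stand-in for the paper's covariate-driven intensity, used only to make the mechanics concrete.

```python
import numpy as np

# Sketch of simulating event times from a non-homogeneous Poisson process
# by thinning (Lewis-Shedler). The sinusoidal lambda(t) is an invented
# placeholder for a fitted, temperature-driven intensity.
rng = np.random.default_rng(0)

def lam(t):                      # events/day, peaking mid-season
    return 0.05 + 0.25 * np.sin(np.pi * t / 92.0) ** 2

def simulate_nhpp(lam, t_max, lam_max):
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)   # candidate from homogeneous PP
        if t > t_max:
            return np.array(times)
        if rng.uniform() < lam(t) / lam_max:  # accept with prob lambda(t)/max
            times.append(t)

events = simulate_nhpp(lam, t_max=92.0, lam_max=0.30)
print(len(events), "extreme-heat events in one 92-day summer")
```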

  6. A maximum-entropy model

    Indian Academy of Sciences (India)

    problem is important from an experimental point of view, because absorption is always present. ... equal-a-priori probabilities is expressed mathematically by the invariant measure on the matrix space ... the interval between zero and one.

  7. Impacts of projected maximum temperature extremes for C21 by an ensemble of regional climate models on cereal cropping systems in the Iberian Peninsula

    Directory of Open Access Journals (Sweden)

    M. Ruiz-Ramos

    2011-12-01

    Crops growing in the Iberian Peninsula may be subjected to damagingly high temperatures during the sensitive development periods of flowering and grain filling. Such episodes are considered important hazards and farmers may take insurance to offset their impact. Increases in the value and frequency of maximum temperature have been observed in the Iberian Peninsula during the 20th century, and studies on climate change indicate the possibility of further increases by the end of the 21st century. Here, impacts of current and future high temperatures on cereal cropping systems of the Iberian Peninsula are evaluated, focusing on vulnerable development periods of winter and summer crops. Climate change scenarios obtained from an ensemble of ten Regional Climate Models (multimodel ensemble) combined with crop simulation models were used for this purpose, and the related uncertainty was estimated. Results reveal that higher extremes of maximum temperature represent a threat to summer-grown but not to winter-grown crops in the Iberian Peninsula. The study highlights the different vulnerability of crops in the two growing seasons and the need to account for changes in extreme temperatures in developing adaptations in cereal cropping systems. Finally, this work contributes to clarifying the causes of high-uncertainty impact projections from previous studies.

  8. Modeling fracture in the context of a strain-limiting theory of elasticity: a single anti-plane shear crack

    KAUST Repository

    Rajagopal, K. R.

    2011-01-06

    This paper is the first part of an extended program to develop a theory of fracture in the context of strain-limiting theories of elasticity. This program exploits a novel approach to modeling the mechanical response of elastic, that is non-dissipative, materials through implicit constitutive relations. The particular class of models studied here can also be viewed as arising from an explicit theory in which the displacement gradient is specified to be a nonlinear function of stress. This modeling construct generalizes the classical Cauchy and Green theories of elasticity which are included as special cases. It was conjectured that special forms of these implicit theories that limit strains to physically realistic maximum levels even for arbitrarily large stresses would be ideal for modeling fracture by offering a modeling paradigm that avoids the crack-tip strain singularities characteristic of classical fracture theories. The simplest fracture setting in which to explore this conjecture is anti-plane shear. It is demonstrated herein that for a specific choice of strain-limiting elasticity theory, crack-tip strains do indeed remain bounded. Moreover, the theory predicts a bounded stress field in the neighborhood of a crack-tip and a cusp-shaped opening displacement. The results confirm the conjecture that use of a strain limiting explicit theory in which the displacement gradient is given as a function of stress for modeling the bulk constitutive behavior obviates the necessity of introducing ad hoc modeling constructs such as crack-tip cohesive or process zones in order to correct the unphysical stress and strain singularities predicted by classical linear elastic fracture mechanics. © 2011 Springer Science+Business Media B.V.
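
    As a concrete instance of the class described (the authors' exact constitutive function may differ), a strain-limiting relation can make the linearized strain an explicit, bounded function of stress; E and β here are illustrative material parameters.

```latex
% An illustrative one-dimensional strain-limiting relation (assumed form):
\epsilon \;=\; \frac{T}{E\,(1 + \beta\,|T|)},
\qquad
|\epsilon| \;<\; \frac{1}{E\,\beta}\ \ \text{for all stresses } T,
% so crack-tip strains remain bounded even as the stress T grows without bound.
```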

  9. Improved statistical models for limited datasets in uncertainty quantification using stochastic collocation

    Energy Technology Data Exchange (ETDEWEB)

    Alwan, Aravind; Aluru, N.R.

    2013-12-15

    This paper presents a data-driven framework for performing uncertainty quantification (UQ) by choosing a stochastic model that accurately describes the sources of uncertainty in a system. This model is propagated through an appropriate response surface function that approximates the behavior of this system using stochastic collocation. Given a sample of data describing the uncertainty in the inputs, our goal is to estimate a probability density function (PDF) using the kernel moment matching (KMM) method so that this PDF can be used to accurately reproduce statistics like mean and variance of the response surface function. Instead of constraining the PDF to be optimal for a particular response function, we show that we can use the properties of stochastic collocation to make the estimated PDF optimal for a wide variety of response functions. We contrast this method with other traditional procedures that rely on the Maximum Likelihood approach, like kernel density estimation (KDE) and its adaptive modification (AKDE). We argue that this modified KMM method tries to preserve what is known from the given data and is the better approach when the available data is limited in quantity. We test the performance of these methods for both univariate and multivariate density estimation by sampling random datasets from known PDFs and then measuring the accuracy of the estimated PDFs, using the known PDF as a reference. Comparing the output mean and variance estimated with the empirical moments using the raw data sample as well as the actual moments using the known PDF, we show that the KMM method performs better than KDE and AKDE in predicting these moments with greater accuracy. This improvement in accuracy is also demonstrated for the case of UQ in electrostatic and electrothermomechanical microactuators. We show how our framework results in the accurate computation of statistics in micromechanical systems.

  10. Improved statistical models for limited datasets in uncertainty quantification using stochastic collocation

    International Nuclear Information System (INIS)

    Alwan, Aravind; Aluru, N.R.

    2013-01-01

    This paper presents a data-driven framework for performing uncertainty quantification (UQ) by choosing a stochastic model that accurately describes the sources of uncertainty in a system. This model is propagated through an appropriate response surface function that approximates the behavior of this system using stochastic collocation. Given a sample of data describing the uncertainty in the inputs, our goal is to estimate a probability density function (PDF) using the kernel moment matching (KMM) method so that this PDF can be used to accurately reproduce statistics like mean and variance of the response surface function. Instead of constraining the PDF to be optimal for a particular response function, we show that we can use the properties of stochastic collocation to make the estimated PDF optimal for a wide variety of response functions. We contrast this method with other traditional procedures that rely on the Maximum Likelihood approach, like kernel density estimation (KDE) and its adaptive modification (AKDE). We argue that this modified KMM method tries to preserve what is known from the given data and is the better approach when the available data is limited in quantity. We test the performance of these methods for both univariate and multivariate density estimation by sampling random datasets from known PDFs and then measuring the accuracy of the estimated PDFs, using the known PDF as a reference. Comparing the output mean and variance estimated with the empirical moments using the raw data sample as well as the actual moments using the known PDF, we show that the KMM method performs better than KDE and AKDE in predicting these moments with greater accuracy. This improvement in accuracy is also demonstrated for the case of UQ in electrostatic and electrothermomechanical microactuators. We show how our framework results in the accurate computation of statistics in micromechanical systems.
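
    The workflow these density-estimation papers describe can be sketched with the KDE baseline they compare against (scipy's gaussian_kde, not the authors' kernel moment matching): estimate an input PDF from a small sample, then propagate samples of it through a response surface and read off output moments. The response function here is an invented placeholder.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Sketch of the UQ workflow using the KDE baseline (not the KMM estimator):
# estimate the input PDF from limited data, then propagate through a
# stand-in response surface to get output statistics.
rng = np.random.default_rng(1)
sample = rng.normal(loc=2.0, scale=0.5, size=30)   # limited input data

pdf = gaussian_kde(sample)                          # estimated input PDF
draws = pdf.resample(100_000).ravel()               # sample the estimate

response = lambda x: x**2 + 0.1 * x                 # placeholder response surface
out = response(draws)
print(f"output mean ~ {out.mean():.3f}, variance ~ {out.var():.3f}")
```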

  11. On Maximum Entropy and Inference

    Directory of Open Access Journals (Sweden)

    Luigi Gresele

    2017-11-01

    Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method showing its ability to recover the correct model in a few prototype cases and discuss its application on a real dataset.
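
    For reference, the maximum-entropy distribution consistent with the observed averages of chosen statistics f_k takes the standard exponential-family form:

```latex
% Exponential-family form of the maximum-entropy model; the multipliers
% \lambda_k are fixed by matching \langle f_k \rangle_p to the data averages.
p(\mathbf{s}) \;=\; \frac{1}{Z(\boldsymbol{\lambda})}
\exp\Big(\sum_k \lambda_k f_k(\mathbf{s})\Big),
\qquad
Z(\boldsymbol{\lambda}) \;=\; \sum_{\mathbf{s}} \exp\Big(\sum_k \lambda_k f_k(\mathbf{s})\Big).
% For pairwise spin models the f_k run over the fields s_i and couplings s_i s_j.
```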

  12. The spherical limit of the n-vector model and correlation inequalities

    International Nuclear Information System (INIS)

    Angelescu, N.; Bundaru, M.; Costache, G.

    1978-08-01

    The asymptotics of the state of the n-vector model with a finite number of spins in the spherical limit is studied. Besides rederiving the limit free energy, corresponding to a generalized spherical model (with ''spherical constraint'' at every site), we obtain also the limit of the correlation functions, which allow a precise definition of the state of the latter model. Correlation inequalities are proved for ferromagnetic interactions in the asymptotic regime. In particular, it is shown that the generalized spherical model fulfills the expected Griffiths' type inequalities, differing in this respect from the spherical model with overall constraint. (author)

  13. On the differences between Last Glacial Maximum and Mid-Holocene climates in southern South America simulated by PMIP3 models

    Science.gov (United States)

    Berman, Ana Laura; Silvestri, Gabriel E.; Tonello, Marcela S.

    2018-04-01

    Differences between climate conditions during the Last Glacial Maximum (LGM) and the Mid-Holocene (MH) in southern South America inferred from the state-of-the-art PMIP3 paleoclimatic simulations are described for the first time in this paper. The aim is to expose characteristics of past climate changes occurred without human influence. In this context, numerical simulations are an indispensable tool for inferring changes in near-surface air temperature and precipitation in regions where proxy information is scarce or absent. The analyzed PMIP3 models describe MH temperatures significantly warmer than those of LGM with magnitudes of change depending on the season and the specific geographic region. In addition, models indicate that seasonal mean precipitation during MH increased with respect to LGM values in wide southern continental areas to the east of the Andes Cordillera whereas seasonal precipitation developed in areas to the west of Patagonian Andes reduced from LGM to MH.

  14. Spatiotemporal modeling of ozone levels in Quebec (Canada): a comparison of kriging, land-use regression (LUR), and combined Bayesian maximum entropy-LUR approaches.

    Science.gov (United States)

    Adam-Poupart, Ariane; Brand, Allan; Fournier, Michel; Jerrett, Michael; Smargiassi, Audrey

    2014-09-01

    Ambient air ozone (O3) is a pulmonary irritant that has been associated with respiratory health effects including increased lung inflammation and permeability, airway hyperreactivity, respiratory symptoms, and decreased lung function. Estimation of O3 exposure is a complex task because the pollutant exhibits complex spatiotemporal patterns. To refine the quality of exposure estimation, various spatiotemporal methods have been developed worldwide. We sought to compare the accuracy of three spatiotemporal models to predict summer ground-level O3 in Quebec, Canada. We developed a land-use mixed-effects regression (LUR) model based on readily available data (air quality and meteorological monitoring data, road networks information, latitude), a Bayesian maximum entropy (BME) model incorporating both O3 monitoring station data and the land-use mixed model outputs (BME-LUR), and a kriging method model based only on available O3 monitoring station data (BME kriging). We performed leave-one-station-out cross-validation and visually assessed the predictive capability of each model by examining the mean temporal and spatial distributions of the average estimated errors. The BME-LUR was the best predictive model (R2 = 0.653) with the lowest root mean-square error (RMSE = 7.06 ppb), followed by the LUR model (R2 = 0.466, RMSE = 8.747 ppb) and the BME kriging model (R2 = 0.414, RMSE = 9.164 ppb). Our findings suggest that errors of estimation in the interpolation of O3 concentrations with BME can be greatly reduced by incorporating outputs from a LUR model developed with readily available data.
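
    The leave-one-station-out validation used to produce such R2 and RMSE figures can be sketched as follows; the linear least-squares fit stands in for the LUR/BME estimators, and the arrays X, y, and station are hypothetical.

```python
import numpy as np

# Schematic leave-one-station-out cross-validation: hold out all records
# from one station, fit on the rest, pool held-out errors into R^2 / RMSE.
def loso_cv(X, y, station):
    preds = np.empty_like(y)
    for s in np.unique(station):
        test = station == s
        beta, *_ = np.linalg.lstsq(X[~test], y[~test], rcond=None)
        preds[test] = X[test] @ beta
    resid = y - preds
    rmse = np.sqrt(np.mean(resid**2))
    r2 = 1.0 - resid.var() / y.var()
    return r2, rmse

# Synthetic demonstration: 8 stations, 25 observations each.
rng = np.random.default_rng(3)
station = np.repeat(np.arange(8), 25)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
y = X @ np.array([30.0, 4.0, -2.0]) + rng.normal(scale=3.0, size=200)
print(loso_cv(X, y, station))
```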

  15. Maximum Acceleration Recording Circuit

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1995-01-01

    Coarsely digitized maximum levels recorded in blown fuses. Circuit feeds power to accelerometer and makes nonvolatile record of maximum level to which output of accelerometer rises during measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for same purpose, circuit is simpler, less bulky, consumes less power, and costs less, and it avoids retrieval and analysis of data recorded in magnetic or electronic memory devices. Circuit used, for example, to record accelerations to which commodities subjected during transportation on trucks.

  16. Maximum Quantum Entropy Method

    OpenAIRE

    Sim, Jae-Hoon; Han, Myung Joon

    2018-01-01

    Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...

  17. Maximum power demand cost

    International Nuclear Information System (INIS)

    Biondi, L.

    1998-01-01

    The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises about the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.

  18. A comparison of PMIP2 model simulations and the MARGO proxy reconstruction for tropical sea surface temperatures at last glacial maximum

    Energy Technology Data Exchange (ETDEWEB)

    Otto-Bliesner, Bette L.; Brady, E.C. [National Center for Atmospheric Research, Climate and Global Dynamics Division, Boulder, CO (United States); Schneider, Ralph; Weinelt, M. [Christian-Albrechts Universitaet, Institut fuer Geowissenschaften, Kiel (Germany); Kucera, M. [Eberhard-Karls Universitaet Tuebingen, Institut fuer Geowissenschaften, Tuebingen (Germany); Abe-Ouchi, A. [The University of Tokyo, Center for Climate System Research, Kashiwa (Japan); Bard, E. [CEREGE, College de France, CNRS, Universite Aix-Marseille, Aix-en-Provence (France); Braconnot, P.; Kageyama, M.; Marti, O.; Waelbroeck, C. [Unite mixte CEA-CNRS-UVSQ, Laboratoire des Sciences du Climat et de l' Environnement, Gif-sur-Yvette Cedex (France); Crucifix, M. [Universite Catholique de Louvain, Institut d' Astronomie et de Geophysique Georges Lemaitre, Louvain-la-Neuve (Belgium); Hewitt, C.D. [Met Office Hadley Centre, Exeter (United Kingdom); Paul, A. [Bremen University, Department of Geosciences, Bremen (Germany); Rosell-Mele, A. [Universitat Autonoma de Barcelona, ICREA and Institut de Ciencia i Tecnologia Ambientals, Barcelona (Spain); Weber, S.L. [Royal Netherlands Meteorological Institute (KNMI), De Bilt (Netherlands); Yu, Y. [Chinese Academy of Sciences, LASG, Institute of Atmospheric Physics, Beijing (China)

    2009-05-15

    Results from multiple model simulations are used to understand the tropical sea surface temperature (SST) response to the reduced greenhouse gas concentrations and large continental ice sheets of the last glacial maximum (LGM). We present LGM simulations from the Paleoclimate Modelling Intercomparison Project, Phase 2 (PMIP2) and compare these simulations to proxy data collated and harmonized within the Multiproxy Approach for the Reconstruction of the Glacial Ocean Surface Project (MARGO). Five atmosphere-ocean coupled climate models (AOGCMs) and one coupled model of intermediate complexity have PMIP2 ocean results available for the LGM. The models give a range of tropical (defined for this paper as 15°S-15°N) SST cooling of 1.0-2.4°C, comparable to the MARGO estimate of annual cooling of 1.7 ± 1°C. The models simulate greater SST cooling in the tropical Atlantic than the tropical Pacific, but interbasin and intrabasin variations of cooling are much smaller than those found in the MARGO reconstruction. The simulated tropical coolings are relatively insensitive to season, a feature also present in the MARGO transfer-function-based estimates calculated from planktonic foraminiferal assemblages for the Indian and Pacific Oceans. These assemblages indicate seasonality in cooling in the Atlantic basin, with greater cooling in northern summer than northern winter, not captured by the model simulations. Biases in the simulations of the tropical upwelling and thermocline found in the preindustrial control simulations remain for the LGM simulations and are partly responsible for the more homogeneous spatial and temporal LGM tropical cooling simulated by the models. The PMIP2 LGM simulations give estimates for the climate sensitivity parameter of 0.67-0.83°C per W m⁻², which translates to an equilibrium climate sensitivity for doubling of atmospheric CO2 of 2.6-3.1°C. (orig.)
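
    The quoted translation from sensitivity parameter to equilibrium climate sensitivity follows from multiplying by a CO2-doubling radiative forcing; the 3.7 W m⁻² used below is the standard reference value, not necessarily each model's own forcing, which is why the product only approximately reproduces the quoted range.

```latex
% Assumed CO2-doubling forcing F_{2x} ~ 3.7 W m^{-2} (standard value):
\Delta T_{2\times} \;=\; S\,F_{2\times}, \qquad
0.67 \times 3.7 \approx 2.5\,^{\circ}\mathrm{C}, \quad
0.83 \times 3.7 \approx 3.1\,^{\circ}\mathrm{C},
% close to the 2.6-3.1 C quoted (each model's own F_{2x} differs slightly).
```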

  19. Design of a Front-End Amplifier for the Maximum Power Delivery and Required Noise by HBMO with Support Vector Microstrip Model

    Directory of Open Access Journals (Sweden)

    F. Guneş

    2014-04-01

    Honey Bee Mating Optimization (HBMO) is a recent swarm-based optimization algorithm for highly nonlinear problems, whose approach combines the powers of simulated annealing, genetic algorithms, and an effective local search heuristic to search for the best possible solution to the problem under investigation within a reasonable computing time. In this work, an HBMO-based design is carried out for a front-end amplifier, intended as a subunit of a radar system, in conjunction with a cost-effective 3-D SONNET-based Support Vector Regression Machine (SVRM) microstrip model. All the matching microstrip widths and lengths are obtained on a chosen substrate to satisfy the maximum power delivery and the required noise over the required bandwidth of a selected transistor. The proposed HBMO-based design is applied to a typical ultra-wide-band low noise amplifier with NE3512S02 on a substrate of Rogers 4350 for the maximum output power and the noise figure F(f) = 1 dB within 5-12 GHz, using T-type microstrip matching circuits. Furthermore, the effectiveness and efficiency of the proposed HBMO-based design are demonstrated by comparing it with Genetic Algorithm (GA), Particle Swarm Optimization (PSO) and simple HBMO-based designs.

  20. The Ising model in the scaling limit as model for the description of elementary particles

    International Nuclear Information System (INIS)

    Weinzierl, W.

    1981-01-01

    In this thesis a possible route is explored which starts from the derivation of a quantum field theory from the simplest statistical degrees of freedom, as for instance in a two-level system. The idea is explained on a model theory, the Ising model in (1+1) dimensions. In this model theory two particle-interpretable quantum fields arise which can be constructed from a basic field that parametrizes the local dynamics in the simplest way. This so-called proliferation is examined further. For the proliferation of the basic field a conserved quantity, a kind of parity, is necessary. The stability of both particle fields is a consequence of this conservation law. For the identification of the ''particle-interpretable'' fields the propagators of the order and disorder parameter fields are calculated and discussed. An effective Hamiltonian in these particle fields is calculated. As a further aspect of this transition from the statistical system to quantum field theory, the dimensional transmutation and the closely connected mass renormalization are examined. The relation between spin systems in the critical region and fermionic field theories is explained. It results that certain fermionic degrees of freedom of the spin system vanish in the scaling limit. The ''macroscopically'' relevant degrees of freedom constitute a relativistic Majorana field. (orig./HSI)

  1. The Hintermann-Merlini-Baxter-Wu and the infinite-coupling-limit Ashkin-Teller models

    Energy Technology Data Exchange (ETDEWEB)

    Huang Yuan, E-mail: huangy22@mail.ustc.edu.cn [Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Deng Youjin, E-mail: yjdeng@ustc.edu.cn [Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Jacobsen, Jesper Lykke, E-mail: jacobsen@lpt.ens.fr [Laboratoire de Physique Theorique, Ecole Normale Superieure, 24 rue Lhomond, 75231 Paris (France); Universite Pierre et Marie Curie, 4 place Jussieu, 75252 Paris (France); Salas, Jesus, E-mail: jsalas@math.uc3m.es [Grupo de Modelizacion, Simulacion Numerica y Matematica Industrial, Universidad Carlos III de Madrid, Avda. de la Universidad 30, 28911 Leganes (Spain); Grupo de Teorias de Campos y Fisica Estadistica, Instituto Gregorio Millan, Universidad Carlos III de Madrid, Unidad asociada al IEM-CSIC, Madrid (Spain)

    2013-03-11

    We show how the Hintermann-Merlini-Baxter-Wu model (which is a generalization of the well-known Baxter-Wu model to a general Eulerian triangulation) can be mapped onto a particular infinite-coupling-limit of the Ashkin-Teller model. We work out some mappings among these models, also including the standard and mixed Ashkin-Teller models. Finally, we compute the phase diagram of the infinite-coupling-limit Ashkin-Teller model on the square, triangular, hexagonal, and kagome lattices.

  2. Application of MCAM in generating Monte Carlo model for ITER port limiter

    International Nuclear Information System (INIS)

    Lu Lei; Li Ying; Ding Aiping; Zeng Qin; Huang Chenyu; Wu Yican

    2007-01-01

    On the basis of the pre-processing and conversion functions supplied by MCAM (Monte-Carlo Particle Transport Calculated Automatic Modeling System), this paper describes the generation of the ITER port limiter MC (Monte-Carlo) calculation model from the CAD engineering model. The result was validated by using the reverse function of MCAM and the MCNP PLOT 2D cross-section drawing program. The successful application of MCAM to the ITER port limiter demonstrates that MCAM is capable of dramatically increasing the efficiency and accuracy of generating MC calculation models from CAD engineering models with complex geometry, compared with the traditional manual modeling method. (authors)

  3. Theoretical models of highly magnetic white dwarf stars that violate the Chandrasekhar Limit

    Science.gov (United States)

    Shah, Hridaya

    2017-08-01

    Until recently, white dwarf (WD) stars were believed to be no more massive than 1.44 solar masses (M⊙). This belief has been changed now with the observations of over-luminous or 'peculiar' Type Ia supernovae that have led researchers to hypothesize the existence of WDs in the mass range 2.4-2.8 M⊙. This discovery also raises some doubt over the reliability of the Type Ia supernova as a standard candle. It is thought that these super-massive WDs are their most likely progenitors and that they probably have a very strong magnetic field inside them. A degenerate electron gas in a magnetic field, such as that present inside this star, will be Landau quantized. The magnetic field changes the momentum space of electrons, which in turn changes their density of states (DOS), and that in turn changes the equation of state (EoS) of matter inside the star, as opposed to that without a field. When this change in the DOS is taken into account and a link between the DOS and the EoS is established, as is done in this work, I find a physical reason behind the theoretical mass-radius (M-R) relations of a super-massive WD. I start with different equations of state with at most three Landau levels occupied and then construct stellar models of magnetic WDs (MWDs) using the same. I also show the M-R relations of these stars for a particular chosen value of maximum electron Fermi energy. Once a multiple Landau level system of electrons is considered, I find that it leads to such an EoS that gives multiple branches in the M-R relations. Super-massive MWDs are obtained only when the Landau level occupancy is limited to just one level and some of the mass values fall within the mass range given above.
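
    For context, the non-magnetic bound these models exceed is the Chandrasekhar mass, in its standard form:

```latex
% Standard non-magnetic Chandrasekhar limit, with \mu_e the mean molecular
% weight per electron (\mu_e = 2 for C/O white dwarfs):
M_{\mathrm{Ch}} \;\simeq\; \frac{5.83}{\mu_e^{2}}\, M_{\odot}
\;\approx\; 1.46\, M_{\odot} \quad (\mu_e = 2),
% commonly rounded to 1.44 M_sun; Landau quantization stiffens the EoS and
% is what allows the super-Chandrasekhar masses discussed above.
```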

  4. The Benefits and Limitations of Hydraulic Modeling for Ordinary High Water Mark Delineation

    Science.gov (United States)

    2016-02-01

    …between two cross sections, the HEC-RAS model will not show it. If there is a sudden drop in the channel, such as a waterfall or steep rapids, the…
    [Report front matter: ERDC/CRREL TR-16-1, Wetland Regulatory Assistance Program (WRAP), February 2016, The Benefits and Limitations of Hydraulic Modeling for Ordinary High Water Mark Delineation]

  5. Modeling the evolution of natural cliffs subject to weathering. 1, Limit analysis approach

    OpenAIRE

    Utili, Stefano; Crosta, Giovanni B.

    2011-01-01

    Retrogressive landsliding evolution of natural slopes subjected to weathering has been modeled by assuming Mohr-Coulomb material behavior and by using an analytical method. The case of weathering-limited slope conditions, with complete erosion of the accumulated debris, has been modeled. The limit analysis upper-bound method is used to study slope instability induced by a homogeneous decrease of material strength in space and time. The only assumption required in the model concerns the degree...

  6. What controls the maximum magnitude of injection-induced earthquakes?

    Science.gov (United States)

    Eaton, D. W. S.

    2017-12-01

    Three different approaches for estimation of maximum magnitude are considered here, along with their implications for managing risk. The first approach is based on a deterministic limit for seismic moment proposed by McGarr (1976), which was originally designed for application to mining-induced seismicity. This approach has since been reformulated for earthquakes induced by fluid injection (McGarr, 2014). In essence, this method assumes that the upper limit for seismic moment release is constrained by the pressure-induced stress change. A deterministic limit is given by the product of shear modulus and the net injected fluid volume. This method is based on the assumptions that the medium is fully saturated and in a state of incipient failure. An alternative geometrical approach was proposed by Shapiro et al. (2011), who postulated that the rupture area for an induced earthquake falls entirely within the stimulated volume. This assumption reduces the maximum-magnitude problem to one of estimating the largest potential slip surface area within a given stimulated volume. Finally, van der Elst et al. (2016) proposed that the maximum observed magnitude, statistically speaking, is the expected maximum value for a finite sample drawn from an unbounded Gutenberg-Richter distribution. These three models imply different approaches for risk management. The deterministic method proposed by McGarr (2014) implies that a ceiling on the maximum magnitude can be imposed by limiting the net injected volume, whereas the approach developed by Shapiro et al. (2011) implies that the time-dependent maximum magnitude is governed by the spatial size of the microseismic event cloud. Finally, the sample-size hypothesis of Van der Elst et al. (2016) implies that the best available estimate of the maximum magnitude is based upon observed seismicity rate. The latter two approaches suggest that real-time monitoring is essential for effective management of risk. A reliable estimate of maximum
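
    McGarr's deterministic bound is simple enough to evaluate directly: the cumulative seismic moment is capped by the shear modulus times the net injected volume, converted to moment magnitude with the Hanks-Kanamori relation. The shear modulus and injection volume below are illustrative values, not site-specific figures.

```python
import math

# McGarr's (2014) deterministic bound discussed above: cumulative seismic
# moment released by injection-induced earthquakes is limited by
# M0_max = G * dV (shear modulus times net injected volume).
def mcgarr_max_magnitude(injected_volume_m3, shear_modulus_pa=3.0e10):
    m0_max = shear_modulus_pa * injected_volume_m3        # N*m
    # Hanks-Kanamori moment magnitude, M0 in N*m:
    return (2.0 / 3.0) * (math.log10(m0_max) - 9.1)

# e.g. 10,000 m^3 of net injection (a modest hydraulic-fracturing volume):
print(f"Mw_max ~ {mcgarr_max_magnitude(1.0e4):.1f}")      # ~ 3.6
```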

  7. Growth dependence of conjugation explains limited plasmid invasion in biofilms: an individual‐based modelling study

    DEFF Research Database (Denmark)

    Merkey, Brian; Lardon, Laurent; Seoane, Jose Miguel

    2011-01-01

    Plasmid invasion in biofilms is often surprisingly limited in spite of the close contact of cells in a biofilm. We hypothesized that this poor plasmid spread into deeper biofilm layers is caused by a dependence of conjugation on the growth rate (relative to the maximum growth rate) of the donor… In an individual-based model, we find that invasion of a resident biofilm is indeed limited when plasmid transfer depends on growth, but not so in the absence of growth dependence. Using sensitivity analysis we also find that parameters related to timing (i.e. a lag before the transconjugant can transfer, transfer proficiency… and scan speed) and spatial reach (EPS yield, conjugal pilus length) are more important for successful plasmid invasion than the recipients' growth rate or the probability of segregational loss. While this study identifies one factor that can limit plasmid invasion in biofilms, the new individual…

  8. A field studies and modeling approach to develop organochlorine pesticide and PCB total maximum daily load calculations: Case study for Echo Park Lake, Los Angeles, CA

    Energy Technology Data Exchange (ETDEWEB)

    Vasquez, V.R., E-mail: vrvasquez@ucla.edu [Environmental Science and Engineering Program, University of California, Los Angeles, Los Angeles, CA 90095-1496 (United States); Curren, J., E-mail: janecurren@yahoo.com [Environmental Science and Engineering Program, University of California, Los Angeles, Los Angeles, CA 90095-1496 (United States); Lau, S.-L., E-mail: simlin@ucla.edu [Department of Civil and Environmental Engineering, University of California, Los Angeles, Los Angeles, CA 90095-1496 (United States); Stenstrom, M.K., E-mail: stenstro@seas.ucla.edu [Department of Civil and Environmental Engineering, University of California, Los Angeles, Los Angeles, CA 90095-1496 (United States); Suffet, I.H., E-mail: msuffet@ucla.edu [Environmental Science and Engineering Program, University of California, Los Angeles, Los Angeles, CA 90095-1496 (United States)

    2011-09-01

    Echo Park Lake is a small lake in Los Angeles, CA listed on the USA Clean Water Act Section 303(d) list of impaired water bodies for elevated levels of organochlorine pesticides (OCPs) and polychlorinated biphenyls (PCBs) in fish tissue. A lake water and sediment sampling program was completed to support the development of total maximum daily loads (TMDL) to address the lake impairment. The field data indicated quantifiable levels of OCPs and PCBs in the sediments, but lake water data were all below detection levels. The field sediment data obtained may explain the contaminant levels in fish tissue using appropriate sediment-water partitioning coefficients and bioaccumulation factors. A partition-equilibrium fugacity model of the whole lake system was used to interpret the field data and indicated that half of the total mass of the pollutants in the system is in the sediments and the other half is in soil; therefore, soil erosion could be a significant pollutant transport mode into the lake. Modeling also indicated that developing and quantifying the TMDL depends significantly on the analytical detection level for the pollutants in field samples and on the choice of octanol-water partitioning coefficient and bioaccumulation factors for the model. Research highlights: • Fugacity model using new OCP and PCB field data supports lake TMDL calculations. • OCP and PCB levels in lake sediment were found above levels for impairment. • Relationship between sediment data and available fish tissue data evaluated. • Model provides approximation of contaminant sources and sinks for a lake system. • Model results were sensitive to analytical detection and quantification levels.
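
    The partition-equilibrium fugacity calculation reduces to one line of algebra: at equilibrium a single fugacity f applies to all compartments, so the total inventory fixes f = M / Σ V_i Z_i and hence each concentration. The volumes and fugacity capacities below are invented placeholders, not the study's calibrated values.

```python
# Minimal Level-I-style partitioning sketch; V and Z values are illustrative.
compartments = {
    #            volume V (m^3), fugacity capacity Z (mol m^-3 Pa^-1)
    "water":     (6.0e5, 1.0e-3),
    "sediment":  (3.0e4, 2.0e+1),
    "soil":      (5.0e4, 1.5e+1),
}
total_moles = 10.0  # total pollutant inventory in the system

# One fugacity for the whole system at equilibrium:
f = total_moles / sum(V * Z for V, Z in compartments.values())   # Pa
for name, (V, Z) in compartments.items():
    moles = V * Z * f
    print(f"{name:9s} C = {Z * f:.3e} mol/m^3, share = {moles / total_moles:.1%}")
```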

  9. New England SPARROW Water-Quality Modeling to Assist with the Development of Total Maximum Daily Loads in the Connecticut River Basin

    Science.gov (United States)

    Moore, R. B.; Robinson, K. W.; Simcox, A. C.; Johnston, C. M.

    2002-05-01

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Environmental Protection Agency (USEPA) and the New England Interstate Water Pollution Control Commission (NEWIPCC), is currently preparing a water-quality model, called SPARROW, to assist in the regional total maximum daily load (TMDL) studies in New England. A model is required to provide estimates of nutrient loads and confidence intervals at unmonitored stream reaches. SPARROW (Spatially Referenced Regressions on Watershed Attributes) is a spatially detailed, statistical model that uses regression equations to relate total phosphorus and nitrogen (nutrient) stream loads to pollution sources and watershed characteristics. These statistical relations are then used to predict nutrient loads in unmonitored streams. The New England SPARROW model is based on a hydrologic network of 42,000 stream reaches and associated watersheds. Point source data are derived from USEPA's Permit Compliance System (PCS). Information about nonpoint sources is derived from data such as fertilizer use, livestock wastes, and atmospheric deposition. Watershed characteristics include land use, streamflow, time-of-travel, stream density, percent wetlands, slope of the land surface, and soil permeability. Preliminary SPARROW results are expected in Spring 2002. The New England SPARROW model is proposed for use in the TMDL determination for nutrients in the Connecticut River Basin, upstream of Connecticut. The model will be used to estimate nitrogen loads from each of the upstream states to Long Island Sound. It will provide estimates and confidence intervals of phosphorus and nitrogen loads, area-weighted yields of nutrients by watershed, sources of nutrients, and the downstream movement of nutrients. This information will be used to (1) understand ranges in nutrient levels in surface waters, (2) identify the environmental factors that affect nutrient levels in streams, (3) evaluate monitoring efforts for better determination of
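
    The core SPARROW idea (regress monitored loads on source and watershed terms, then predict unmonitored reaches) can be caricatured with an ordinary log-load regression; real SPARROW uses a nonlinear mass-balance form with in-stream decay, and all predictors and coefficients below are synthetic.

```python
import numpy as np

# Schematic stand-in for SPARROW-style load regression and prediction.
rng = np.random.default_rng(4)
n = 120
fertilizer = rng.lognormal(2.0, 0.5, n)     # kg/ha, hypothetical predictor
point_src = rng.lognormal(1.0, 0.8, n)      # hypothetical point-source term
wetland_pct = rng.uniform(0, 30, n)         # watershed characteristic

log_load = (0.8 * np.log(fertilizer) + 0.3 * np.log(point_src)
            - 0.02 * wetland_pct + rng.normal(0, 0.3, n))

X = np.column_stack([np.ones(n), np.log(fertilizer), np.log(point_src), wetland_pct])
beta, *_ = np.linalg.lstsq(X, log_load, rcond=None)
print("fitted coefficients:", np.round(beta, 3))

# Predict the load for an unmonitored reach:
x_new = np.array([1.0, np.log(8.0), np.log(2.0), 12.0])
print("predicted load:", np.exp(x_new @ beta))
```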

  10. A note on the relationships between multiple imputation, maximum likelihood and fully Bayesian methods for missing responses in linear regression models.

    Science.gov (United States)

    Chen, Qingxia; Ibrahim, Joseph G

    2014-07-01

    Multiple Imputation, Maximum Likelihood and Fully Bayesian methods are the three most commonly used model-based approaches in missing data problems. Although it is easy to show that when the responses are missing at random (MAR), the complete case analysis is unbiased and efficient, the aforementioned methods are still commonly used in practice for this setting. To examine the performance of and relationships between these three methods in this setting, we derive and investigate small sample and asymptotic expressions of the estimates and standard errors, and fully examine how these estimates are related for the three approaches in the linear regression model when the responses are MAR. We show that when the responses are MAR in the linear model, the estimates of the regression coefficients using these three methods are asymptotically equivalent to the complete case estimates under general conditions. One simulation and a real data set from a liver cancer clinical trial are given to compare the properties of these methods when the responses are MAR.
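
    The MAR result for the linear model is easy to check by simulation: when missingness of the response depends only on an observed covariate, complete-case least squares still recovers the coefficients. A minimal sketch, with invented parameter values:

```python
import numpy as np

# Responses missing at random given a fully observed covariate; complete-case
# OLS remains (approximately) unbiased for the regression coefficients.
rng = np.random.default_rng(5)
n = 20_000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)

p_miss = 1.0 / (1.0 + np.exp(-(x - 0.5)))       # missingness depends on x only
observed = rng.uniform(size=n) > p_miss

X = np.column_stack([np.ones(observed.sum()), x[observed]])
beta, *_ = np.linalg.lstsq(X, y[observed], rcond=None)
print(np.round(beta, 3))   # close to the true (1.0, 2.0)
```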

  11. Calibration of the maximum carboxylation velocity (Vcmax) using data mining techniques and ecophysiological data from the Brazilian semiarid region, for use in Dynamic Global Vegetation Models

    Directory of Open Access Journals (Sweden)

    L. F. C. Rezende

    The semiarid region of northeastern Brazil, the Caatinga, is extremely important due to its biodiversity and endemism. Measurements of plant physiology are crucial to the calibration of Dynamic Global Vegetation Models (DGVMs) that are currently used to simulate the responses of vegetation in face of global changes. In field work carried out in an area of preserved Caatinga forest located in Petrolina, Pernambuco, measurements of carbon assimilation (in response to light and CO2) were performed on 11 individuals of Poincianella microphylla, a native species that is abundant in this region. These data were used to calibrate the maximum carboxylation velocity (Vcmax) used in the INLAND model. The calibration techniques used were Multiple Linear Regression (MLR), and data mining techniques such as the Classification And Regression Tree (CART) and K-MEANS. The results were compared to the UNCALIBRATED model. It was found that simulated Gross Primary Productivity (GPP) reached 72% of observed GPP when using the calibrated Vcmax values, whereas the UNCALIBRATED approach accounted for 42% of observed GPP. Thus, this work shows the benefits of calibrating DGVMs using field ecophysiological measurements, especially in areas where field data are scarce or non-existent, such as in the Caatinga.
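
    The CART calibration step can be sketched with scikit-learn's DecisionTreeRegressor; the environmental predictors and the Vcmax response below are synthetic stand-ins for the measurements described, not the campaign's data.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Sketch of the CART step: learn Vcmax from environmental drivers so a DGVM
# can use the predicted value. All arrays are synthetic placeholders.
rng = np.random.default_rng(6)
n = 200
leaf_n = rng.uniform(1.0, 3.0, n)              # leaf nitrogen, g m^-2
temp = rng.uniform(20.0, 38.0, n)              # growing temperature, C
vcmax = 20.0 + 25.0 * leaf_n - 0.5 * (temp - 30.0) ** 2 + rng.normal(0, 3.0, n)

X = np.column_stack([leaf_n, temp])
tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, vcmax)
# Roughly 75 umol m^-2 s^-1 given the synthetic rule above:
print("predicted Vcmax:", tree.predict([[2.2, 30.0]]).round(1))
```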

  12. Central Limit Theorem for Exponentially Quasi-local Statistics of Spin Models on Cayley Graphs

    Science.gov (United States)

    Reddy, Tulasi Ram; Vadlamani, Sreekar; Yogeshwaran, D.

    2018-04-01

    Central limit theorems for linear statistics of lattice random fields (including spin models) are usually proven under suitable mixing conditions or quasi-associativity. Many interesting examples of spin models do not satisfy mixing conditions, and on the other hand, it does not seem easy to show a central limit theorem for local statistics via quasi-associativity. In this work, we prove general central limit theorems for local statistics and exponentially quasi-local statistics of spin models on discrete Cayley graphs with polynomial growth. Further, we supplement these results by proving similar central limit theorems for random fields on discrete Cayley graphs taking values in a countable space, but under the stronger assumptions of α-mixing (for local statistics) and exponential α-mixing (for exponentially quasi-local statistics). All our central limit theorems assume a suitable variance lower bound like many others in the literature. We illustrate our general central limit theorem with specific examples of lattice spin models and statistics arising in computational topology, statistical physics and random networks. Examples of clustering spin models include quasi-associated spin models with fast decaying covariances like the off-critical Ising model, level sets of Gaussian random fields with fast decaying covariances like the massive Gaussian free field and determinantal point processes with fast decaying kernels. Examples of local statistics include intrinsic volumes, face counts, component counts of random cubical complexes while exponentially quasi-local statistics include nearest neighbour distances in spin models and Betti numbers of sub-critical random cubical complexes.
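
    In generic form (notation ours, not the authors'), results of this type say that a statistic S_n computed over a growing ball B_n of the Cayley graph is asymptotically Gaussian once its variance grows at least like the volume:

```latex
% Generic statement (notation ours): S_n is a local or exponentially
% quasi-local statistic over the ball B_n; the assumed variance lower
% bound rules out degenerate limits.
\[
  \liminf_{n \to \infty} \frac{\operatorname{Var}(S_n)}{|B_n|} > 0
  \quad \Longrightarrow \quad
  \frac{S_n - \mathbb{E}[S_n]}{\sqrt{\operatorname{Var}(S_n)}}
  \xrightarrow{\;d\;} \mathcal{N}(0,1).
\]
```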

  13. Generalized linear mixed model for binary outcomes when covariates are subject to measurement errors and detection limits.

    Science.gov (United States)

    Xie, Xianhong; Xue, Xiaonan; Strickler, Howard D

    2018-01-15

    Longitudinal measurement of biomarkers is important in determining risk factors for binary endpoints such as infection or disease. However, biomarkers are subject to measurement error, and some are also subject to left-censoring due to a lower limit of detection. Statistical methods to address these issues are few. We herein propose a generalized linear mixed model and estimate the model parameters using the Monte Carlo Newton-Raphson (MCNR) method. Inferences regarding the parameters are made by applying Louis's method and the delta method. Simulation studies were conducted to compare the proposed MCNR method with existing methods including the maximum likelihood (ML) method and the ad hoc approach of replacing the left-censored values with half of the detection limit (HDL). The results showed that the performance of the MCNR method is superior to ML and HDL with respect to the empirical standard error, as well as the coverage probability for the 95% confidence interval. The HDL method uses an incorrect imputation method, and the computation is constrained by the number of quadrature points; while the ML method also suffers from this constraint on the number of quadrature points, the MCNR method does not have this limitation and approximates the likelihood function better than the other methods. The improvement of the MCNR method is further illustrated with real-world data from a longitudinal study of local cervicovaginal HIV viral load and its effects on oncogenic HPV detection in HIV-positive women. Copyright © 2017 John Wiley & Sons, Ltd.
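
    The HDL baseline the authors compare against is simple to state; the sketch below (synthetic values and a hypothetical detection limit, not data from the study) shows the substitution of left-censored observations by half the detection limit before any model is fitted.

```python
# Synthetic biomarker values with a hypothetical detection limit.
import numpy as np

rng = np.random.default_rng(3)
true_vals = rng.lognormal(mean=0.0, sigma=1.0, size=10)
lod = 0.8                                  # hypothetical lower limit of detection

censored = true_vals < lod
hdl_values = np.where(censored, lod / 2.0, true_vals)  # HDL substitution
print(np.column_stack([true_vals.round(2), censored, hdl_values.round(2)]))
```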

  14. Friction induced hunting limit cycles: a comparison between the LuGre and switch friction model

    NARCIS (Netherlands)

    Hensen, R.H.A.; Molengraft, van de M.J.G.; Steinbuch, M.

    2003-01-01

    In this paper, friction induced limit cycles are predicted for a simple motion system consisting of a motor-driven inertia subjected to friction and a PID-controlled regulator task. The two friction models used, i.e., (i) the dynamic LuGre friction model and (ii) the static switch friction model,

  15. Operational Limitations of Arctic Waste Stabilization Ponds: Insights from Modeling Oxygen Dynamics and Carbon Removal

    DEFF Research Database (Denmark)

    Ragush, Colin M.; Gentleman, Wendy C.; Hansen, Lisbeth Truelstrup

    2018-01-01

    Presented here is a mechanistic model of the biological dynamics of the photic zone of a single-cell arctic waste stabilization pond (WSP) for the prediction of oxygen concentration and the removal of oxygen-demanding substances. The model is an exploratory model to assess the limiting environmen...

  16. THE MATHEMATICAL MODEL OF POTENTIAL RELAXATION IN COULOSTATIC CONDITIONS FOR THE LIMITING DIFFUSION CURRENT CASE

    Directory of Open Access Journals (Sweden)

    O. H. Kapitonov

    2010-05-01

    A mathematical model of coulostatic relaxation of the potential for a solid metallic electrode is presented. The solution for the case of limiting diffusion current is obtained. On the basis of this model, a technique for concentration measurements of heavy metal ions in dilute solutions is suggested. The adequacy of the model is proved by experimental data.

  17. Corrigendum to "Upper ocean climate of the Eastern Mediterranean Sea during the Holocene Insolation Maximum – a model study" published in Clim. Past, 7, 1103–1122, 2011

    Directory of Open Access Journals (Sweden)

    G. Schmiedl

    2011-11-01

    Nine thousand years ago (9 ka BP), the Northern Hemisphere experienced enhanced seasonality caused by an orbital configuration close to the minimum of the precession index. To assess the impact of this "Holocene Insolation Maximum" (HIM) on the Mediterranean Sea, we use a regional ocean general circulation model forced by atmospheric input derived from global simulations. A stronger seasonal cycle is simulated by the model, which shows a relatively homogeneous winter cooling and a summer warming with well-defined spatial patterns, in particular, a subsurface warming in the Cretan and western Levantine areas. The comparison between the SST simulated for the HIM and a reconstruction from planktonic foraminifera transfer functions shows a poor agreement, especially for summer, when the vertical temperature gradient is strong. As a novel approach, we propose a reinterpretation of the reconstruction, to consider the conditions throughout the upper water column rather than at a single depth. We claim that such a depth-integrated approach is more adequate for surface temperature comparison purposes in a situation where the upper ocean structure in the past was different from the present day. In this case, the depth-integrated interpretation of the proxy data strongly improves the agreement between the modelled and reconstructed temperature signal, with the subsurface summer warming being recorded by both model and proxies, with a small shift to the south in the model results. The mechanisms responsible for the peculiar subsurface pattern are found to be a combination of enhanced downwelling and wind mixing due to strengthened Etesian winds, and enhanced thermal forcing due to the stronger summer insolation in the Northern Hemisphere. Together, these processes induce a stronger heat transfer from the surface to the subsurface during late summer in the western Levantine; this leads to an enhanced heat piracy in this region, a process never identified before.
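
    The depth-integrated reinterpretation can be made concrete with a toy profile: compare the temperature at one nominal habitat depth with the mean over the upper water column. The profile, depths, and habitat depth below are invented for illustration.

```python
# Toy comparison (profile invented): temperature at one nominal habitat
# depth versus the mean over the upper water column.
import numpy as np

depth = np.array([0.0, 10, 20, 30, 50, 75, 100])             # m (hypothetical)
temp = np.array([26.0, 25.5, 24.0, 21.0, 18.0, 16.5, 15.5])  # degC (hypothetical)

t_single = np.interp(30.0, depth, temp)   # single-depth reading at 30 m

# Trapezoidal depth average over 0-100 m
segments = np.diff(depth) * (temp[:-1] + temp[1:]) / 2.0
t_depth_mean = segments.sum() / (depth[-1] - depth[0])

print(t_single, round(float(t_depth_mean), 2))  # the two can differ noticeably
```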

  18. Development of an Anisotropic Geological-Based Land Use Regression and Bayesian Maximum Entropy Model for Estimating Groundwater Radon across North Carolina

    Science.gov (United States)

    Messier, K. P.; Serre, M. L.

    2015-12-01

    Radon (222Rn) is a naturally occurring chemically inert, colorless, and odorless radioactive gas produced from the decay of uranium (238U), which is ubiquitous in rocks and soils worldwide. Exposure to 222Rn via inhalation is likely the second leading cause of lung cancer after cigarette smoking; however, exposure through untreated groundwater is also a contributing factor to both inhalation and ingestion routes. A land use regression (LUR) model for groundwater 222Rn with anisotropic geological and 238U-based explanatory variables is developed, which helps elucidate the factors contributing to elevated 222Rn across North Carolina. Geological and uranium-based variables are constructed in elliptical buffers surrounding each observation such that they capture the lateral geometric anisotropy present in groundwater 222Rn. Moreover, geological features are defined at three different geological spatial scales to allow the model to distinguish between large-area and small-area effects of geology on groundwater 222Rn. The LUR is also integrated into the Bayesian Maximum Entropy (BME) geostatistical framework to increase accuracy and produce a point-level LUR-BME model of groundwater 222Rn across North Carolina including prediction uncertainty. The LUR-BME model of groundwater 222Rn results in a leave-one-out cross-validation r-squared of 0.46 (Pearson correlation coefficient = 0.68), effectively predicting within the spatial covariance range. Modeled results of 222Rn concentrations show variability among Intrusive Felsic geological formations, likely due to average bedrock 238U defined on the basis of overlying stream-sediment 238U concentrations, which are widely distributed and consistently analyzed point data.
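
    A minimal sketch of the anisotropic buffer idea, assuming hypothetical coordinates, axis lengths, and strike angle: rotate candidate points into the ellipse frame and average the covariate over the points that fall inside.

```python
# All coordinates, axes, and the strike angle are hypothetical.
import numpy as np

rng = np.random.default_rng(4)
pts = rng.uniform(-5, 5, size=(1000, 2))   # covariate sample locations (km)
values = rng.uniform(0, 10, size=1000)     # e.g., stream-sediment uranium

cx, cy = 0.0, 0.0        # well location
a, b = 3.0, 1.0          # semi-major and semi-minor axes (km)
theta = np.deg2rad(40)   # orientation along the assumed geologic strike

# Rotate into the ellipse frame, then apply the ellipse inequality
dx, dy = pts[:, 0] - cx, pts[:, 1] - cy
u = dx * np.cos(theta) + dy * np.sin(theta)
v = -dx * np.sin(theta) + dy * np.cos(theta)
inside = (u / a) ** 2 + (v / b) ** 2 <= 1.0
print(values[inside].mean())   # buffer-averaged explanatory variable
```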

  19. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  20. Robust Maximum Association Estimators

    NARCIS (Netherlands)

    A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)

    2017-01-01

    The maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections αX and αY can attain. Taking the Pearson correlation as projection index results in the first canonical correlation

  1. 76 FR 44245 - Special Conditions: Gulfstream Model GVI Airplane; Limit Engine Torque Loads for Sudden Engine...

    Science.gov (United States)

    2011-07-25

    ... Conditions No. 25-441-SC] Special Conditions: Gulfstream Model GVI Airplane; Limit Engine Torque Loads for... for transport category airplanes. These design features include engine size and the potential torque... engine mounts and the supporting structures must be designed to withstand a "limit engine torque load...

  2. A model of the generalized stoichiometry of electron transport limited C3 photosynthesis: Development and Applications

    NARCIS (Netherlands)

    Yin, X.; Harbinson, J.; Struik, P.

    2009-01-01

    We describe an extended Farquhar, Von Caemmerer and Berry (FvCB) model for the RuBP regeneration-limited or electron transport-limited steady-state C3 photosynthesis. Analytical algorithms are presented to account for (i) the effects of Photosystem (PS) I and II photochemical efficiencies and of

  3. Evaluation of daily maximum and minimum 2-m temperatures as simulated with the Regional Climate Model COSMO-CLM over Africa

    Directory of Open Access Journals (Sweden)

    Stefan Krähenmann

    2013-07-01

    The representation of the diurnal 2-m temperature cycle is challenging because of the many processes involved, particularly land-atmosphere interactions. This study examines the ability of the regional climate model COSMO-CLM (version 4.8) to capture the statistics of daily maximum and minimum 2-m temperatures (Tmin/Tmax) over Africa. The simulations are carried out at two different horizontal grid-spacings (0.22° and 0.44°), and are driven by ECMWF ERA-Interim reanalyses as near-perfect lateral boundary conditions. As evaluation reference, a high-resolution gridded dataset of daily maximum and minimum temperatures (Tmin/Tmax) for Africa (covering the period 2008–2010) is created using the regression-kriging-regression-kriging (RKRK) algorithm. RKRK applies, among other predictors, the remotely sensed predictors land surface temperature and cloud cover to compensate for the missing information about the temperature pattern due to the low station density over Africa. This dataset allows the evaluation of temperature characteristics like the frequencies of Tmin/Tmax, the diurnal temperature range, and the 90th percentile of Tmax. Although the large-scale patterns of temperature are reproduced well, COSMO-CLM shows significant under- and overestimation of temperature at regional scales. The hemispheric summers are generally too warm and the day-to-day temperature variability is overestimated over northern and southern extra-tropical Africa. The average diurnal temperature range is underestimated by about 2°C across arid areas, yet overestimated by around 2°C over the African tropics. An evaluation based on frequency distributions shows good model performance for simulated Tmin (the simulated frequency distributions capture more than 80% of the observed ones), but poorer performance for Tmax (capture below 70%). Further, over wide parts of Africa a too large fraction of daily Tmax values exceeds the observed 90th percentile of Tmax, particularly
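
    The RKRK algorithm chains two regression-kriging passes; the sketch below shows a single pass on synthetic data, using a Gaussian process on the residuals as a stand-in for kriging (the two are formally close). Station locations, predictor values, and kernel settings are all invented.

```python
# Synthetic stations; a Gaussian process on residuals stands in for kriging.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(5)
coords = rng.uniform(0, 100, size=(80, 2))        # station locations (km)
lst = rng.uniform(20, 45, size=80)                # remotely sensed predictor
tmax = 0.8 * lst + 3.0 + rng.normal(0, 1.0, 80)   # synthetic observations

trend = LinearRegression().fit(lst[:, None], tmax)           # regression step
resid = tmax - trend.predict(lst[:, None])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=30.0),
                              alpha=1.0).fit(coords, resid)  # "kriging" step

new_xy, new_lst = np.array([[50.0, 50.0]]), np.array([[30.0]])
print(trend.predict(new_lst) + gp.predict(new_xy))  # trend + kriged residual
```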

  4. Evaluation of daily maximum and minimum 2-m temperatures as simulated with the regional climate model COSMO-CLM over Africa

    Energy Technology Data Exchange (ETDEWEB)

    Krähenmann, Stefan; Kothe, Steffen; Ahrens, Bodo [Frankfurt Univ. (Germany). Inst. for Atmospheric and Environmental Sciences; Panitz, Hans-Jürgen [Karlsruhe Institute of Technology (KIT), Eggenstein-Leopoldshafen (Germany)

    2013-10-15

    The representation of the diurnal 2-m temperature cycle is challenging because of the many processes involved, particularly land-atmosphere interactions. This study examines the ability of the regional climate model COSMO-CLM (version 4.8) to capture the statistics of daily maximum and minimum 2-m temperatures (Tmin/Tmax) over Africa. The simulations are carried out at two different horizontal grid-spacings (0.22° and 0.44°), and are driven by ECMWF ERA-Interim reanalyses as near-perfect lateral boundary conditions. As evaluation reference, a high-resolution gridded dataset of daily maximum and minimum temperatures (Tmin/Tmax) for Africa (covering the period 2008-2010) is created using the regression-kriging-regression-kriging (RKRK) algorithm. RKRK applies, among other predictors, the remotely sensed predictors land surface temperature and cloud cover to compensate for the missing information about the temperature pattern due to the low station density over Africa. This dataset allows the evaluation of temperature characteristics like the frequencies of Tmin/Tmax, the diurnal temperature range, and the 90th percentile of Tmax. Although the large-scale patterns of temperature are reproduced well, COSMO-CLM shows significant under- and overestimation of temperature at regional scales. The hemispheric summers are generally too warm and the day-to-day temperature variability is overestimated over northern and southern extra-tropical Africa. The average diurnal temperature range is underestimated by about 2°C across arid areas, yet overestimated by around 2°C over the African tropics. An evaluation based on frequency distributions shows good model performance for simulated Tmin (the simulated frequency distributions capture more than 80% of the observed ones), but poorer performance for Tmax (capture below 70%). Further, over wide parts of Africa a too large fraction of daily Tmax values exceeds the observed 90th percentile of Tmax, particularly across

  5. Modeling space-charge-limited currents in organic semiconductors: Extracting trap density and mobility

    KAUST Repository

    Dacuña, Javier; Salleo, Alberto

    2011-01-01

    We have developed and applied a mobility edge model that takes drift and diffusion currents into account to characterize the space-charge-limited current in organic semiconductors. The numerical solution of the drift-diffusion equation allows

  6. Artificial neural network-based model for the prediction of optimal growth and culture conditions for maximum biomass accumulation in multiple shoot cultures of Centella asiatica.

    Science.gov (United States)

    Prasad, Archana; Prakash, Om; Mehrotra, Shakti; Khan, Feroz; Mathur, Ajay Kumar; Mathur, Archana

    2017-01-01

    An artificial neural network (ANN)-based modelling approach is used to determine the synergistic effect of five major components of growth medium (Mg, Cu, Zn, nitrate and sucrose) on improved in vitro biomass yield in multiple shoot cultures of Centella asiatica. The back propagation neural network (BPNN) was employed to predict optimal biomass accumulation in terms of growth index over a defined culture duration of 35 days. The four variable concentrations of five media components, i.e. MgSO4 (0, 0.75, 1.5, 3.0 mM), ZnSO4 (0, 15, 30, 60 μM), CuSO4 (0, 0.05, 0.1, 0.2 μM), NO3 (20, 30, 40, 60 mM) and sucrose (1, 3, 5, 7% w/v), were taken as inputs for the ANN model. The designed model was evaluated by performing three different sets of validation experiments that indicated a greater similarity between the target and predicted dataset. The results of the modelling experiment suggested that 1.5 mM Mg, 30 μM Zn, 0.1 μM Cu, 40 mM NO3 and 6% (w/v) sucrose were the respective optimal concentrations of the tested medium components for achieving maximum growth index of 1654.46 with high centelloside yield (62.37 mg DW/culture) in the cultured multiple shoots. This study can facilitate the generation of higher biomass of uniform, clean, good quality C. asiatica herb that can efficiently be utilized by pharmaceutical industries.
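
    As a hedged stand-in for the paper's BPNN (its architecture and training details are not reproduced here), the sketch below fits a small feed-forward network mapping the five medium components to a synthetic growth index and queries it at the reported optimum.

```python
# Invented training data; MLPRegressor stands in for the paper's BPNN.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
# Columns: Mg (mM), Zn (uM), Cu (uM), NO3 (mM), sucrose (% w/v)
X = rng.uniform([0.0, 0.0, 0.0, 20.0, 1.0],
                [3.0, 60.0, 0.2, 60.0, 7.0], size=(200, 5))
y = (1000.0 + 400.0 * np.exp(-(X[:, 0] - 1.5) ** 2) + 5.0 * X[:, 3]
     + rng.normal(0.0, 20.0, 200))            # synthetic growth index

net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                   random_state=0).fit(X, y)
print(net.predict([[1.5, 30.0, 0.1, 40.0, 6.0]]))  # query the reported optimum
```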

  7. Combining Experiments and Simulations Using the Maximum Entropy Principle

    DEFF Research Database (Denmark)

    Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten

    2014-01-01

    The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in quantitative agreement with experimental data. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. The maximum entropy procedure is introduced in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where it has recently provided new insight. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
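
    A common concrete instance of this procedure, sketched below with invented numbers, reweights simulation frames so that the ensemble average of an observable matches an experimental target; the minimally perturbed weights take an exponential form with a single Lagrange multiplier.

```python
# Observable values and the "experimental" target are invented.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(7)
obs = rng.normal(3.0, 1.0, size=5000)   # per-frame observable from a simulation
target = 3.4                            # experimental ensemble average (assumed)

def constraint(lmbda):
    # Reweighted mean minus target; weights w_i proportional to exp(lambda*obs_i)
    w = np.exp(lmbda * obs)
    return np.dot(w / w.sum(), obs) - target

lam = brentq(constraint, -10.0, 10.0)   # solve for the Lagrange multiplier
w = np.exp(lam * obs)
w /= w.sum()
print(np.dot(w, obs))                   # now matches the target average
```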

  8. Generic maximum likely scale selection

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo

    2007-01-01

    The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based...

  9. Large-n limit of the Heisenberg model: The decorated lattice and the disordered chain

    International Nuclear Information System (INIS)

    Khoruzhenko, B.A.; Pastur, L.A.; Shcherbina, M.V.

    1989-01-01

    The critical temperature of the generalized spherical model (the large-component limit of the classical Heisenberg model) on a cubic lattice, every bond of which is decorated by L spins, is found. When L → ∞, the critical temperature behaves asymptotically as T_c ∼ aL^{-1}. The reduction of the number of spherical constraints for the model is found to be fairly large. The free energy of the one-dimensional generalized spherical model with random nearest-neighbor interaction is calculated

  10. GA-4/GA-9 honeycomb impact limiter tests and analytical model

    International Nuclear Information System (INIS)

    Koploy, M.A.; Taylor, C.S.

    1991-01-01

    General Atomics (GA) has a test program underway to obtain data on the behavior of a honeycomb impact limiter. The program includes testing of small samples to obtain basic information, as well as testing of complete 1/4-scale impact limiters to obtain load-versus-deflection curves for different crush orientations. GA has used the test results to aid in the development of an analytical model to predict the impact limiter loads. The results also helped optimize the design of the impact limiters for the GA-4 and GA-9 Casks

  11. Decay constants in the heavy quark limit in models à la Bakamjian and Thomas

    International Nuclear Information System (INIS)

    Morenas, V.; Le Yaouanc, A.; Oliver, L.; Pene, O.; Raynal, J.C.

    1997-07-01

    In quark models à la Bakamjian and Thomas, which yield covariance and Isgur-Wise scaling of form factors in the heavy quark limit, the decay constants f^{(n)} and f^{(n)}_{1/2} of S-wave and P-wave mesons composed of heavy and light quarks are computed. Different Ansätze for the dynamics of the mass operator at rest are discussed. Using phenomenological models of the spectrum with relativistic kinetic energy and a regularized short-distance part, the decay constants in the heavy quark limit are calculated. The convergence of the heavy quark limit sum rules is also studied. (author)

  12. A model for the derivation of new transport limits for non-fixed contamination

    International Nuclear Information System (INIS)

    Thierfeldt, S.; Lorenz, B.; Hesse, J.

    2004-01-01

    The IAEA Regulations for the Safe Transport of Radioactive Material contain requirements for contamination limits on packages and conveyances used for the transport of radioactive material. Current contamination limits for packages and conveyances under routine transport conditions have been derived from a model proposed by Fairbairn more than 40 years ago. This model has proven effective if used with pragmatism, but is based on very conservative as well as extremely simple assumptions which are no longer appropriate and which are not compatible with ICRP recommendations regarding radiation protection standards. Therefore, a new model has now been developed which reflects all steps of the transport process. The derivation of this model has been fostered by the IAEA by initiating a Co-ordinated Research Project. The results of the calculations using this model could be directly applied as new nuclide-specific transport limits for non-fixed contamination

  13. A model for the derivation of new transport limits for non-fixed contamination

    Energy Technology Data Exchange (ETDEWEB)

    Thierfeldt, S. [Brenk Systemplanung GmbH, Aachen (Germany); Lorenz, B. [GNS Gesellschaft fuer Nuklearservice, Essen (Germany); Hesse, J. [RWE Power AG, Essen (Germany)

    2004-07-01

    The IAEA Regulations for the Safe Transport of Radioactive Material contain requirements for contamination limits on packages and conveyances used for the transport of radioactive material. Current contamination limits for packages and conveyances under routine transport conditions have been derived from a model proposed by Fairbairn more than 40 years ago. This model has proven effective if used with pragmatism, but is based on very conservative as well as extremely simple assumptions which are no longer appropriate and which are not compatible with ICRP recommendations regarding radiation protection standards. Therefore, a new model has now been developed which reflects all steps of the transport process. The derivation of this model has been fostered by the IAEA by initiating a Co-ordinated Research Project. The results of the calculations using this model could be directly applied as new nuclide-specific transport limits for non-fixed contamination.

  14. Reliability of the Load-Velocity Relationship Obtained Through Linear and Polynomial Regression Models to Predict the One-Repetition Maximum Load.

    Science.gov (United States)

    Pestaña-Melero, Francisco Luis; Haff, G Gregory; Rojas, Francisco Javier; Pérez-Castilla, Alejandro; García-Ramos, Amador

    2017-12-18

    This study aimed to compare the between-session reliability of the load-velocity relationship between (1) linear vs. polynomial regression models, (2) concentric-only vs. eccentric-concentric bench press variants, as well as (3) the within-participants vs. the between-participants variability of the velocity attained at each percentage of the one-repetition maximum (%1RM). The load-velocity relationship of 30 men (age: 21.2±3.8 y; height: 1.78±0.07 m; body mass: 72.3±7.3 kg; bench press 1RM: 78.8±13.2 kg) was evaluated by means of linear and polynomial regression models in the concentric-only and eccentric-concentric bench press variants in a Smith machine. Two sessions were performed with each bench press variant. The main findings were: (1) first-order polynomials (CV: 4.39%-4.70%) provided the load-velocity relationship with higher reliability than second-order polynomials (CV: 4.68%-5.04%); (2) the reliability of the load-velocity relationship did not differ between the concentric-only and eccentric-concentric bench press variants; (3) the within-participants variability of the velocity attained at each %1RM was markedly lower than the between-participants variability. Taken together, these results highlight that, regardless of the bench press variant considered, the individual determination of the load-velocity relationship by a linear regression model could be recommended to monitor and prescribe the relative load in the Smith machine bench press exercise.
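
    The individual load-velocity approach the authors recommend reduces to a first-order polynomial fit; the sketch below uses invented warm-up data and an assumed minimal velocity threshold, since the study's own values are not reproduced here.

```python
# Invented warm-up data; the minimal velocity threshold v_1rm is an
# assumption, not a value from the study.
import numpy as np

load = np.array([20.0, 40.0, 60.0, 80.0])       # kg lifted in warm-up sets
velocity = np.array([1.30, 0.95, 0.62, 0.32])   # mean concentric velocity (m/s)

slope, intercept = np.polyfit(velocity, load, 1)  # first-order polynomial
v_1rm = 0.17                                      # assumed velocity at 1RM (m/s)
print(round(slope * v_1rm + intercept, 1))        # predicted 1RM load (kg)
```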

  15. Maximum power point tracking

    International Nuclear Information System (INIS)

    Enslin, J.H.R.

    1990-01-01

    A well-engineered renewable remote energy system, utilizing the principle of Maximum Power Point Tracking, can be more cost-effective, has a higher reliability and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Through practical field measurements it is shown that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total saving of at least 10-15% on the capital cost of these systems is achievable for relatively small Remote Area Power Supply systems. The advantages are much greater for larger temperature variations and higher-power systems. Other advantages include optimal sizing and system monitoring and control
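
    The hill-climbing idea mentioned above is commonly realized as perturb-and-observe; the sketch below uses a toy power-voltage curve (not a real panel model) to show the perturbation logic.

```python
# Toy perturb-and-observe loop (hill climbing); panel_power is a stand-in
# curve with one maximum, not a real photovoltaic model.
def panel_power(v):
    return -(v - 17.0) ** 2 + 85.0   # peak power 85 W at 17 V (invented)

v, step = 12.0, 0.5                  # initial operating voltage and perturbation
p_prev = panel_power(v)
for _ in range(100):
    v += step
    p = panel_power(v)
    if p < p_prev:                   # power fell: reverse perturbation direction
        step = -step
    p_prev = p

print(round(v, 1), round(panel_power(v), 1))  # settles oscillating near 17 V
```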

  16. Gauge Group Contraction of Electroweak Model and its Natural Energy Limits

    Directory of Open Access Journals (Sweden)

    Nikolai A. Gromov

    2015-09-01

    The low- and higher-energy limits of the Electroweak Model are obtained from first principles of gauge theory. Both limits are given by the same contraction of the gauge group, but for different consistent rescalings of the field space. The mathematical contraction parameter is in both cases interpreted as energy. The very weak neutrino-matter interaction is explained by a contraction parameter tending to zero, which depends on the neutrino energy. The second consistent rescaling corresponds to the higher-energy limit of the Electroweak Model. At infinite energy all particles lose their masses, and electroweak interactions become long-range and are mediated by the neutral currents. The limit model represents the development of the early Universe from the Big Bang up to the end of the first second.

  17. Natural limits of electroweak model as contraction of its gauge group

    International Nuclear Information System (INIS)

    Gromov, N A

    2015-01-01

    The low- and higher-energy limits of the electroweak model are obtained from the first principles of gauge theory. Both limits are given by the same contraction of the gauge group, but for different consistent rescalings of the field space. The mathematical contraction parameter is in both cases interpreted as energy. Very weak neutrino–matter interactions are explained by a contraction parameter tending to zero, which depends on the neutrino energy. The second consistent rescaling corresponds to the higher-energy limit of the electroweak model. At infinite energy all particles lose mass, and electroweak interactions become long-range and are mediated by neutral currents. The limit model represents the development of the early Universe from the big bang up to the end of the first second. (paper)

  18. Importance of fish behaviour in modelling conservation problems: food limitation as an example

    Science.gov (United States)

    Steven Railsback; Bret Harvey

    2011-01-01

    Simulation experiments using the inSTREAM individual-based brown trout Salmo trutta population model explored the role of individual adaptive behaviour in food limitation, as an example of how behaviour can affect managers’ understanding of conservation problems. The model includes many natural complexities in habitat (spatial and temporal variation in characteristics...

  19. Central limit theorems for a class of irreducible multicolor urn models

    Indian Academy of Sciences (India)

    Central limit theorem; Markov chains; martingale; urn models. In this article we are going to ... multicolor urn model is vastly different from the Markov chain evolving according to the transition matrix equal to the ...

  20. Construction of quantized gauge fields: continuum limit of the Abelian Higgs model in two dimensions

    International Nuclear Information System (INIS)

    Seiler, E.

    1981-01-01

    The author proves the existence of the continuum limit of the two-dimensional Higgs model for two cases: external gauge fields that are Hölder continuous and may be non-Abelian, and the fully quantized Abelian model. In the latter case all Wightman axioms are verified except clustering. Important ingredients are a universal diamagnetic bound and correlation inequalities. (Auth.)