WorldWideScience

Sample records for maximum limit approach

  1. Thermoelectric cooler concepts and the limit for maximum cooling

    International Nuclear Information System (INIS)

    Seifert, W; Hinsche, N F; Pluschke, V

    2014-01-01

    The conventional analysis of a Peltier cooler approximates the material properties as independent of temperature using a constant properties model (CPM). Alternative concepts have been published by Bian and Shakouri (2006 Appl. Phys. Lett. 89 212101), Bian et al (2007 Phys. Rev. B 75 245208) and Snyder et al (2012 Phys. Rev. B 86 045202). While Snyder's Thomson cooler concept results from a consideration of compatibility, the method of Bian et al focuses on the redistribution of heat; the two approaches thus rest on different principles. In this paper we compare the new concepts with CPM and reconsider the limit for maximum cooling. The results provide a new perspective on maximum cooling. (paper)
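
    For orientation, the CPM baseline that the paper reconsiders yields the classic textbook limit for maximum cooling of a single-stage Peltier element (a standard result, not the new concepts discussed above):

    \[
    \Delta T_{\max} = \tfrac{1}{2} Z T_c^{2}, \qquad Z = \frac{S^{2}\sigma}{\kappa},
    \]

    where S is the Seebeck coefficient, σ the electrical conductivity, κ the thermal conductivity, and T_c the cold-side temperature, all taken as temperature-independent under CPM.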

  2. Bayesian Reliability Estimation for Deteriorating Systems with Limited Samples Using the Maximum Entropy Approach

    OpenAIRE

    Xiao, Ning-Cong; Li, Yan-Feng; Wang, Zhonglai; Peng, Weiwen; Huang, Hong-Zhong

    2013-01-01

    In this paper, a combination of the maximum entropy method and Bayesian inference is proposed for reliability assessment of deteriorating systems. Due to various uncertainties, scarce data and incomplete information, system parameters usually cannot be determined precisely. These uncertain parameters can be modeled by fuzzy set theory and Bayesian inference, which have proved useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to cal...

  3. Bayesian Reliability Estimation for Deteriorating Systems with Limited Samples Using the Maximum Entropy Approach

    Directory of Open Access Journals (Sweden)

    Ning-Cong Xiao

    2013-12-01

    In this paper, a combination of the maximum entropy method and Bayesian inference is proposed for reliability assessment of deteriorating systems. Due to various uncertainties, scarce data and incomplete information, system parameters usually cannot be determined precisely. These uncertain parameters can be modeled by fuzzy set theory and Bayesian inference, which have proved useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to calculate the maximum entropy density function of the uncertain parameters more accurately, because it does not require any additional information or assumptions. Finally, two optimization models are presented which can be used to determine the lower and upper bounds of the system probability of failure under vague environmental conditions. Two numerical examples are investigated to demonstrate the proposed method.
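
    As background to the method sketched in this abstract, the maximum entropy density under moment constraints has a standard closed form (classical result, not the authors' specific algorithm):

    \[
    \max_{p}\; -\int p(x)\ln p(x)\,\mathrm{d}x
    \quad \text{s.t.} \quad \int x^{k} p(x)\,\mathrm{d}x = \mu_k,\; k = 0,\dots,m,
    \]

    whose solution is the exponential-family density

    \[
    p^{*}(x) = \exp\Big(-\lambda_0 - \sum_{k=1}^{m} \lambda_k x^{k}\Big),
    \]

    with the Lagrange multipliers λ_k fixed numerically by the moment constraints.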

  4. Longitudinal and transverse space charge limitations on transport of maximum power beams

    International Nuclear Information System (INIS)

    Khoe, T.K.; Martin, R.L.

    1977-01-01

    The maximum transportable beam power is a critical issue in selecting the most favorable approach to generating ignition pulses for inertial fusion with high energy accelerators. Maschke and Courant have put forward expressions for the limits on transport power for quadrupole and solenoidal channels. Included in a more general way is the self-consistent effect of space charge defocusing on the power limit. The results show that no limits on transmitted power exist in principle. In general, quadrupole transport magnets appear superior to solenoids except for transport of very low energy and highly charged particles. Longitudinal space charge effects are very significant for transport of intense beams.

  5. Maximum total organic carbon limit for DWPF melter feed

    International Nuclear Information System (INIS)

    Choi, A.S.

    1995-01-01

    DWPF recently decided to control the potential flammability of melter off-gas by limiting the total carbon content in the melter feed and maintaining adequate conditions for combustion in the melter plenum. With this new strategy, all the LFL analyzers and associated interlocks and alarms were removed from both the primary and backup melter off-gas systems. Subsequently, D. Iverson of DWPF T&E requested that SRTC determine the maximum allowable total organic carbon (TOC) content in the melter feed, which can be implemented as part of the Process Requirements for melter feed preparation (PR-S04). The maximum TOC limit determined in this study was about 24,000 ppm on an aqueous slurry basis. At TOC levels below this, the peak concentration of combustible components in the quenched off-gas will not exceed 60 percent of the LFL during off-gas surges of magnitudes up to three times nominal, provided that the melter plenum temperature and the air purge rate to the BUFC are monitored and controlled above 650 °C and 220 lb/hr, respectively. Appropriate interlocks should discontinue the feeding when one or both of these conditions are not met. Both the magnitude and duration of an off-gas surge have a major impact on the maximum TOC limit, since they directly affect the melter plenum temperature and combustion. Although the data obtained during recent DWPF melter startup tests showed that the peak magnitude of a surge can be greater than three times nominal, the observed duration was considerably shorter, on the order of several seconds. The long surge duration assumed in this study has a greater impact on the plenum temperature than the peak magnitude, thus making the maximum TOC estimate conservative. Two models were used to make the necessary calculations to determine the TOC limit.
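
    A minimal sketch of the interlock logic implied by the abstract, using the reported values (24,000 ppm TOC, 650 °C plenum temperature, 220 lb/hr BUFC air purge); the function and parameter names are illustrative, not from the report:

    ```python
    def feeding_permitted(toc_ppm: float, plenum_temp_c: float, bufc_purge_lb_hr: float) -> bool:
        """Allow melter feeding only while the TOC limit and the two monitored
        process conditions cited in the study are simultaneously satisfied."""
        return (toc_ppm <= 24_000             # max TOC, aqueous slurry basis
                and plenum_temp_c >= 650      # melter plenum temperature
                and bufc_purge_lb_hr >= 220)  # air purge rate to the BUFC

    # Example: feeding should be interlocked off if the plenum cools below 650 °C
    print(feeding_permitted(20_000, 640, 230))  # False
    ```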

  6. Hydraulic Limits on Maximum Plant Transpiration

    Science.gov (United States)

    Manzoni, S.; Vico, G.; Katul, G. G.; Palmroth, S.; Jackson, R. B.; Porporato, A. M.

    2011-12-01

    Photosynthesis occurs at the expense of water losses through transpiration. As a consequence of this basic carbon-water interaction at the leaf level, plant growth and ecosystem carbon exchanges are tightly coupled to transpiration. In this contribution, the hydraulic constraints that limit transpiration rates under well-watered conditions are examined across plant functional types and climates. The potential water flow through plants is proportional to both xylem hydraulic conductivity (which depends on plant carbon economy) and the difference in water potential between the soil and the atmosphere (the driving force that pulls water from the soil). In contrast to previous works, we study how this potential flux changes with the amplitude of the driving force (i.e., we focus on xylem properties and not on stomatal regulation). Xylem hydraulic conductivity decreases as the driving force increases due to cavitation of the tissues. As a result of this negative feedback, more negative leaf (and xylem) water potentials would provide a stronger driving force for water transport, while at the same time limiting xylem hydraulic conductivity due to cavitation. Here, the leaf water potential value that allows an optimum balance between driving force and xylem conductivity is quantified, thus defining the maximum transpiration rate that can be sustained by the soil-to-leaf hydraulic system. To apply the proposed framework at the global scale, a novel database of xylem conductivity and cavitation vulnerability across plant types and biomes is developed. Conductivity and water potential at 50% cavitation are shown to be complementary (in particular between angiosperms and conifers), suggesting a tradeoff between transport efficiency and hydraulic safety. Plants from warmer and drier biomes tend to achieve larger maximum transpiration than plants growing in environments with lower atmospheric water demand. The predicted maximum transpiration and the corresponding leaf water
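
    One common way to formalize the optimum described here (a sketch, assuming a Weibull-type vulnerability curve; the authors' exact parameterization may differ): the steady water supply through the xylem is

    \[
    E(\psi_L) = \int_{\psi_L}^{\psi_s} k(\psi)\,\mathrm{d}\psi,
    \qquad
    k(\psi) = k_{\max}\, e^{-\left(-\psi/d\right)^{c}},
    \]

    so lowering the leaf water potential ψ_L strengthens the driving force while cavitation erodes k(ψ); E(ψ_L) therefore saturates at a finite maximum E_max, the largest transpiration rate the soil-to-leaf pathway can sustain.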

  7. Maximum penetration level of distributed generation without violating voltage limits

    NARCIS (Netherlands)

    Morren, J.; Haan, de S.W.H.

    2009-01-01

    Connection of Distributed Generation (DG) units to a distribution network will result in a local voltage increase. As there will be a maximum on the allowable voltage increase, this will limit the maximum allowable penetration level of DG. By reactive power compensation (by the DG unit itself) a

  8. Maximum time-dependent space-charge limited diode currents

    Energy Technology Data Exchange (ETDEWEB)

    Griswold, M. E. [Tri Alpha Energy, Inc., Rancho Santa Margarita, California 92688 (United States); Fisch, N. J. [Princeton Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543 (United States)

    2016-01-15

    Recent papers claim that a one dimensional (1D) diode with a time-varying voltage drop can transmit current densities that exceed the Child-Langmuir (CL) limit on average, apparently contradicting a previous conjecture that there is a hard limit on the average current density across any 1D diode, as t → ∞, that is equal to the CL limit. However, these claims rest on a different definition of the CL limit, namely, a comparison between the time-averaged diode current and the adiabatic average of the expression for the stationary CL limit. If the current were considered as a function of the maximum applied voltage, rather than the average applied voltage, then the original conjecture would not have been refuted.
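
    For context, the stationary CL limit referred to here is the classic space-charge-limited current density of a planar gap:

    \[
    J_{\mathrm{CL}} = \frac{4\varepsilon_0}{9}\sqrt{\frac{2e}{m}}\;\frac{V^{3/2}}{d^{2}},
    \]

    for a gap of width d with voltage drop V and carriers of charge e and mass m. The dispute summarized in the abstract is whether the time-averaged current should be compared against this expression evaluated at the average or at the maximum applied voltage.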

  9. Maximum organic carbon limits at different melter feed rates (U)

    International Nuclear Information System (INIS)

    Choi, A.S.

    1995-01-01

    This report documents the results of a study to assess the impact of varying melter feed rates on the maximum total organic carbon (TOC) limits allowable in the DWPF melter feed. Topics discussed include: carbon content; feed rate; feed composition; melter vapor space temperature; combustion and dilution air; off-gas surges; earlier work on maximum TOC; overview of models; and the results of the work completed

  10. Determining Maximum Photovoltaic Penetration in a Distribution Grid considering Grid Operation Limits

    DEFF Research Database (Denmark)

    Kordheili, Reza Ahmadi; Bak-Jensen, Birgitte; Pillai, Jayakrishnan Radhakrishna

    2014-01-01

    High penetration of photovoltaic panels in the distribution grid can bring the grid to its operation limits. The main focus of the paper is to determine the maximum photovoltaic penetration level in the grid. Three main criteria were investigated for determining the maximum penetration level of PV panels: maximum voltage deviation of customers, cable current limits, and transformer nominal value. Load modeling is done using the Velander formula. Since PV generation is highest in the summer due to irradiation, a summer day was chosen to determine the maximum penetration level. Voltage deviation of different buses was investigated for different penetration levels. The proposed model was simulated on a Danish distribution grid. Three different PV location scenarios were investigated for this grid: even distribution of PV panels, aggregation of panels at the beginning of each feeder, and aggregation of panels at the end of each feeder...
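
    A minimal sketch of the screening procedure the abstract describes: sweep penetration levels and keep the highest one that satisfies all three grid criteria. The callback names are illustrative assumptions, not the authors' code:

    ```python
    def max_pv_penetration(levels, voltage_ok, cables_ok, transformer_ok):
        """Return the highest PV penetration level (e.g. fraction of peak load)
        for which voltage deviation, cable current, and transformer loading
        all stay within their limits."""
        feasible = 0.0
        for level in sorted(levels):
            if voltage_ok(level) and cables_ok(level) and transformer_ok(level):
                feasible = level
            else:
                break  # constraints only tighten as penetration grows
        return feasible

    # Toy usage with placeholder limit checks
    levels = [0.2, 0.4, 0.6, 0.8, 1.0]
    print(max_pv_penetration(levels, lambda p: p < 0.9,
                             lambda p: p < 0.7, lambda p: True))  # 0.6
    ```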

  11. Maximum β limited by ideal MHD ballooning instabilities in JT-60

    International Nuclear Information System (INIS)

    Seki, Shogo; Azumi, Masashi

    1986-03-01

    Maximum β limited by ideal MHD ballooning instabilities is investigated for divertor configurations in JT-60. The maximum β against ballooning modes in JT-60 depends strongly on the distribution of the safety factor over the magnetic surfaces. Maximum β is ∼2% for q₀ = 1.0, while it exceeds 3% for q₀ = 1.5. These results suggest that profile control of the safety factor, especially on the magnetic axis, is attractive for higher β operation in JT-60. (author)

  12. Maximum principle and convergence of central schemes based on slope limiters

    KAUST Repository

    Mehmetoglu, Orhan; Popov, Bojan

    2012-01-01

    A maximum principle and convergence of second order central schemes is proven for scalar conservation laws in dimension one. It is well known that to establish a maximum principle a nonlinear piecewise linear reconstruction is needed and a typical choice is the minmod limiter. Unfortunately, this implies that the scheme uses a first order reconstruction at local extrema. The novelty here is that we allow local nonlinear reconstructions which do not reduce to first order at local extrema and still prove maximum principle and convergence. © 2011 American Mathematical Society.
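
    For reference, a minimal sketch of the minmod limiter named in the abstract, the typical slope choice that reduces to first order at local extrema (the behavior this paper relaxes):

    ```python
    def minmod(a: float, b: float) -> float:
        """Minmod slope limiter: zero at sign changes (local extrema),
        otherwise the one-sided difference of smaller magnitude."""
        if a * b <= 0.0:
            return 0.0
        return min(a, b) if a > 0.0 else max(a, b)

    def limited_slopes(u):
        """Limited slopes for cell averages u[i] from one-sided differences;
        boundary cells get zero slope for simplicity."""
        return [0.0] + [minmod(u[i] - u[i - 1], u[i + 1] - u[i])
                        for i in range(1, len(u) - 1)] + [0.0]

    print(limited_slopes([0.0, 0.4, 1.0, 1.2]))  # [0.0, 0.4, 0.2, 0.0]
    ```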

  13. Feedback Limits to Maximum Seed Masses of Black Holes

    International Nuclear Information System (INIS)

    Pacucci, Fabio; Natarajan, Priyamvada; Ferrara, Andrea

    2017-01-01

    The most massive black holes observed in the universe weigh up to ∼10¹⁰ M⊙, nearly independent of redshift. Reaching these final masses likely required copious accretion and several major mergers. Employing a dynamical approach that rests on the role played by a new, relevant physical scale—the transition radius—we provide a theoretical calculation of the maximum mass achievable by a black hole seed that forms in an isolated halo, one that scarcely merged. Incorporating effects at the transition radius and their impact on the evolution of accretion in isolated halos, we are able to obtain new limits for permitted growth. We find that large black hole seeds (M• ≳ 10⁴ M⊙) hosted in small isolated halos (Mₕ ≲ 10⁹ M⊙) accreting with relatively small radiative efficiencies (ϵ ≲ 0.1) grow optimally in these circumstances. Moreover, we show that the standard M•–σ relation observed at z ∼ 0 cannot be established in isolated halos at high-z, but requires the occurrence of mergers. Since the average limiting mass of black holes formed at z ≳ 10 is in the range 10⁴–10⁶ M⊙, we expect to observe them in local galaxies as intermediate-mass black holes, when hosted in the rare halos that experienced only minor or no merging events. Such ancient black holes, formed in isolation with subsequent scant growth, could survive, almost unchanged, until present.

  14. Feedback Limits to Maximum Seed Masses of Black Holes

    Energy Technology Data Exchange (ETDEWEB)

    Pacucci, Fabio; Natarajan, Priyamvada [Department of Physics, Yale University, P.O. Box 208121, New Haven, CT 06520 (United States); Ferrara, Andrea [Scuola Normale Superiore, Piazza dei Cavalieri 7, I-56126 Pisa (Italy)

    2017-02-01

    The most massive black holes observed in the universe weigh up to ∼10¹⁰ M⊙, nearly independent of redshift. Reaching these final masses likely required copious accretion and several major mergers. Employing a dynamical approach that rests on the role played by a new, relevant physical scale—the transition radius—we provide a theoretical calculation of the maximum mass achievable by a black hole seed that forms in an isolated halo, one that scarcely merged. Incorporating effects at the transition radius and their impact on the evolution of accretion in isolated halos, we are able to obtain new limits for permitted growth. We find that large black hole seeds (M• ≳ 10⁴ M⊙) hosted in small isolated halos (Mₕ ≲ 10⁹ M⊙) accreting with relatively small radiative efficiencies (ϵ ≲ 0.1) grow optimally in these circumstances. Moreover, we show that the standard M•–σ relation observed at z ∼ 0 cannot be established in isolated halos at high-z, but requires the occurrence of mergers. Since the average limiting mass of black holes formed at z ≳ 10 is in the range 10⁴–10⁶ M⊙, we expect to observe them in local galaxies as intermediate-mass black holes, when hosted in the rare halos that experienced only minor or no merging events. Such ancient black holes, formed in isolation with subsequent scant growth, could survive, almost unchanged, until present.

  15. Radiation pressure acceleration: The factors limiting maximum attainable ion energy

    Energy Technology Data Exchange (ETDEWEB)

    Bulanov, S. S.; Esarey, E.; Schroeder, C. B. [Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Bulanov, S. V. [KPSI, National Institutes for Quantum and Radiological Science and Technology, Kizugawa, Kyoto 619-0215 (Japan); A. M. Prokhorov Institute of General Physics RAS, Moscow 119991 (Russian Federation); Esirkepov, T. Zh.; Kando, M. [KPSI, National Institutes for Quantum and Radiological Science and Technology, Kizugawa, Kyoto 619-0215 (Japan); Pegoraro, F. [Physics Department, University of Pisa and Istituto Nazionale di Ottica, CNR, Pisa 56127 (Italy); Leemans, W. P. [Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Physics Department, University of California, Berkeley, California 94720 (United States)

    2016-05-15

    Radiation pressure acceleration (RPA) is a highly efficient mechanism of laser-driven ion acceleration, with near complete transfer of the laser energy to the ions in the relativistic regime. However, there is a fundamental limit on the maximum attainable ion energy, which is determined by the group velocity of the laser. Tightly focused laser pulses have group velocities smaller than the vacuum light speed, and since they offer the high intensity needed for the RPA regime, it is plausible that group velocity effects would manifest themselves in experiments involving tightly focused pulses and thin foils. However, in this case, finite spot size effects are important, and another limiting factor, the transverse expansion of the target, may dominate over the group velocity effect. As the laser pulse diffracts after passing the focus, the target expands accordingly due to the transverse intensity profile of the laser. Due to this expansion, the areal density of the target decreases, making it transparent to radiation and effectively terminating the acceleration. Off-normal incidence of the laser on the target, due either to the experimental setup or to deformation of the target, will also impose a limit on the maximum ion energy.

  16. What controls the maximum magnitude of injection-induced earthquakes?

    Science.gov (United States)

    Eaton, D. W. S.

    2017-12-01

    Three different approaches for estimation of maximum magnitude are considered here, along with their implications for managing risk. The first approach is based on a deterministic limit for seismic moment proposed by McGarr (1976), which was originally designed for application to mining-induced seismicity. This approach has since been reformulated for earthquakes induced by fluid injection (McGarr, 2014). In essence, this method assumes that the upper limit for seismic moment release is constrained by the pressure-induced stress change. A deterministic limit is given by the product of the shear modulus and the net injected fluid volume. This method is based on the assumptions that the medium is fully saturated and in a state of incipient failure. An alternative geometrical approach was proposed by Shapiro et al. (2011), who postulated that the rupture area for an induced earthquake falls entirely within the stimulated volume. This assumption reduces the maximum-magnitude problem to one of estimating the largest potential slip surface area within a given stimulated volume. Finally, van der Elst et al. (2016) proposed that the maximum observed magnitude, statistically speaking, is the expected maximum value for a finite sample drawn from an unbounded Gutenberg-Richter distribution. These three models imply different approaches for risk management. The deterministic method proposed by McGarr (2014) implies that a ceiling on the maximum magnitude can be imposed by limiting the net injected volume, whereas the approach developed by Shapiro et al. (2011) implies that the time-dependent maximum magnitude is governed by the spatial size of the microseismic event cloud. Finally, the sample-size hypothesis of van der Elst et al. (2016) implies that the best available estimate of the maximum magnitude is based upon the observed seismicity rate. The latter two approaches suggest that real-time monitoring is essential for effective management of risk. A reliable estimate of maximum
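
    A minimal sketch of the deterministic bound attributed here to McGarr (2014), together with the standard moment-magnitude conversion; the numerical values are illustrative only:

    ```python
    import math

    def mcgarr_max_moment(shear_modulus_pa: float, net_injected_volume_m3: float) -> float:
        """Deterministic upper bound on cumulative seismic moment: M0 <= G * dV."""
        return shear_modulus_pa * net_injected_volume_m3

    def moment_magnitude(m0_nm: float) -> float:
        """Moment magnitude Mw from seismic moment in N*m (Hanks-Kanamori)."""
        return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

    # Illustrative numbers: G = 30 GPa, net injected volume = 1e4 m^3
    m0 = mcgarr_max_moment(3.0e10, 1.0e4)   # 3e14 N*m
    print(round(moment_magnitude(m0), 1))   # ~3.6
    ```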

  17. Noise and physical limits to maximum resolution of PET images

    Energy Technology Data Exchange (ETDEWEB)

    Herraiz, J.L.; Espana, S. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain); Vicente, E.; Vaquero, J.J.; Desco, M. [Unidad de Medicina y Cirugia Experimental, Hospital GU 'Gregorio Maranon', E-28007 Madrid (Spain); Udias, J.M. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain)], E-mail: jose@nuc2.fis.ucm.es

    2007-10-01

    In this work we show that there is a limit to the maximum resolution achievable with a high resolution PET scanner, as well as to the best signal-to-noise ratio, which are ultimately related to the physical effects involved in the emission and detection of the radiation and thus cannot be overcome by any particular reconstruction method. These effects prevent the high spatial-frequency components of the imaged structures from being recorded by the scanner. Therefore, the information encoded in these high frequencies cannot be recovered by any reconstruction technique. Within this framework, we have determined the maximum resolution achievable for a given acquisition as a function of data statistics and scanner parameters, such as the size of the crystals or the inter-crystal scatter. In particular, the noise level in the data is outlined as a factor limiting high-resolution imaging in tomographs with small crystal sizes. These results have implications for how to decide the optimal number of voxels of the reconstructed image and how to design better PET scanners.

  18. Noise and physical limits to maximum resolution of PET images

    International Nuclear Information System (INIS)

    Herraiz, J.L.; Espana, S.; Vicente, E.; Vaquero, J.J.; Desco, M.; Udias, J.M.

    2007-01-01

    In this work we show that there is a limit to the maximum resolution achievable with a high resolution PET scanner, as well as to the best signal-to-noise ratio, which are ultimately related to the physical effects involved in the emission and detection of the radiation and thus cannot be overcome by any particular reconstruction method. These effects prevent the high spatial-frequency components of the imaged structures from being recorded by the scanner. Therefore, the information encoded in these high frequencies cannot be recovered by any reconstruction technique. Within this framework, we have determined the maximum resolution achievable for a given acquisition as a function of data statistics and scanner parameters, such as the size of the crystals or the inter-crystal scatter. In particular, the noise level in the data is outlined as a factor limiting high-resolution imaging in tomographs with small crystal sizes. These results have implications for how to decide the optimal number of voxels of the reconstructed image and how to design better PET scanners.

  19. Global Harmonization of Maximum Residue Limits for Pesticides.

    Science.gov (United States)

    Ambrus, Árpád; Yang, Yong Zhen

    2016-01-13

    International trade plays an important role in national economies. The Codex Alimentarius Commission develops harmonized international food standards, guidelines, and codes of practice to protect the health of consumers and to ensure fair practices in the food trade. The Codex maximum residue limits (MRLs) elaborated by the Codex Committee on Pesticide Residues are based on the recommendations of the Joint FAO/WHO Meeting on Pesticide Residues (JMPR). The basic principles currently applied by the JMPR for the evaluation of experimental data and related information are described, together with some of the areas in which further developments are needed.

  20. Mechanical limits to maximum weapon size in a giant rhinoceros beetle.

    Science.gov (United States)

    McCullough, Erin L

    2014-07-07

    The horns of giant rhinoceros beetles are a classic example of the elaborate morphologies that can result from sexual selection. Theory predicts that sexual traits will evolve to be increasingly exaggerated until survival costs balance the reproductive benefits of further trait elaboration. In Trypoxylus dichotomus, long horns confer a competitive advantage to males, yet previous studies have found that they do not incur survival costs. It is therefore unlikely that horn size is limited by the theoretical cost-benefit equilibrium. However, males sometimes fight vigorously enough to break their horns, so mechanical limits may set an upper bound on horn size. Here, I tested this mechanical limit hypothesis by measuring safety factors across the full range of horn sizes. Safety factors were calculated as the ratio between the force required to break a horn and the maximum force exerted on a horn during a typical fight. I found that safety factors decrease with increasing horn length, indicating that the risk of breakage is indeed highest for the longest horns. Structural failure of oversized horns may therefore oppose the continued exaggeration of horn length driven by male-male competition and set a mechanical limit on the maximum size of rhinoceros beetle horns. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  1. Improved Reliability of Single-Phase PV Inverters by Limiting the Maximum Feed-in Power

    DEFF Research Database (Denmark)

    Yang, Yongheng; Wang, Huai; Blaabjerg, Frede

    2014-01-01

    Grid operation experiences have revealed the necessity to limit the maximum feed-in power from PV inverter systems under a high penetration scenario in order to avoid voltage and frequency instability issues. A Constant Power Generation (CPG) control method has been proposed at the inverter level. The CPG control strategy is activated only when the DC input power from PV panels exceeds a specific power limit. It makes it possible to limit the maximum feed-in power to the electric grid and also to improve the utilization of PV inverters. As a further study, this paper investigates the reliability performance of the power devices, allowing a quantitative prediction of the power device lifetime. A study case on a 3 kW single-phase PV inverter has demonstrated the advantages of the CPG control in terms of improved reliability.
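
    A minimal sketch of the activation logic the abstract describes (names and values are illustrative): the inverter delivers the tracked maximum power point power until the available PV power exceeds the feed-in limit, then clamps to constant power:

    ```python
    def cpg_output_power(p_mpp_w: float, p_limit_w: float) -> float:
        """Constant Power Generation: pass through the MPPT power below the
        limit, clamp the feed-in power at the limit otherwise."""
        return min(p_mpp_w, p_limit_w)

    # Example on a 3 kW inverter with an 80% feed-in limit (2.4 kW)
    for p_avail in (1500.0, 2400.0, 3000.0):
        print(cpg_output_power(p_avail, 2400.0))  # 1500.0, 2400.0, 2400.0
    ```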

  2. A maximum pseudo-likelihood approach for estimating species trees under the coalescent model

    Directory of Open Access Journals (Sweden)

    Edwards Scott V

    2010-10-01

    Background: Several phylogenetic approaches have been developed to estimate species trees from collections of gene trees. However, maximum likelihood approaches for estimating species trees under the coalescent model are limited. Although the likelihood of a species tree under the multispecies coalescent model has already been derived by Rannala and Yang, it can be shown that the maximum likelihood estimate (MLE) of the species tree (topology, branch lengths, and population sizes) from gene trees under this formula does not exist. In this paper, we develop a pseudo-likelihood function of the species tree to obtain maximum pseudo-likelihood estimates (MPE) of species trees, with branch lengths of the species tree in coalescent units. Results: We show that the MPE of the species tree is statistically consistent as the number M of genes goes to infinity. In addition, the probability that the MPE of the species tree matches the true species tree converges to 1 at rate O(M⁻¹). The simulation results confirm that the maximum pseudo-likelihood approach is statistically consistent even when the species tree is in the anomaly zone. We applied our method, Maximum Pseudo-likelihood for Estimating Species Trees (MP-EST), to a mammal dataset. The four major clades found in the MP-EST tree are consistent with those in the Bayesian concatenation tree. The bootstrap supports for the species tree estimated by the MP-EST method are more reasonable than the posterior probability supports given by the Bayesian concatenation method in reflecting the level of uncertainty in gene trees and controversies over the relationship of four major groups of placental mammals. Conclusions: MP-EST can consistently estimate the topology and branch lengths (in coalescent units) of the species tree. Although the pseudo-likelihood is derived from coalescent theory, and assumes no gene flow or horizontal gene transfer (HGT), the MP-EST method is robust to a small amount of HGT in the

  3. Reliability analysis - systematic approach based on limited data

    International Nuclear Information System (INIS)

    Bourne, A.J.

    1975-11-01

    The initial approaches required for reliability analysis are outlined. These approaches highlight the system boundaries, examine the conditions under which the system is required to operate, and define the overall performance requirements. The discussion is illustrated by a simple example of an automatic protective system for a nuclear reactor. It is then shown how the initial approach leads to a method of defining the system, establishing performance parameters of interest and determining the general form of reliability models to be used. The overall system model and the availability of reliability data at the system level are next examined. An iterative process is then described whereby the reliability model and data requirements are systematically refined at progressively lower hierarchic levels of the system. At each stage, the approach is illustrated with examples from the protective system previously described. The main advantages of the approach put forward are the systematic process of analysis, the concentration of assessment effort in the critical areas and the maximum use of limited reliability data. (author)

  4. Dynamical maximum entropy approach to flocking.

    Science.gov (United States)

    Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M

    2014-04-01

    We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.

  5. Maximum Power Point Tracking Control of Photovoltaic Systems: A Polynomial Fuzzy Model-Based Approach

    DEFF Research Database (Denmark)

    Rakhshan, Mohsen; Vafamand, Navid; Khooban, Mohammad Hassan

    2018-01-01

    This paper introduces a polynomial fuzzy model (PFM)-based maximum power point tracking (MPPT) control approach to increase the performance and efficiency of solar photovoltaic (PV) electricity generation. The proposed method relies on polynomial fuzzy modeling, a polynomial parallel... A direct maximum power (DMP)-based control structure is considered for MPPT. Using the PFM representation, the DMP-based control structure is formulated in terms of SOS conditions. Unlike conventional approaches, the proposed approach does not require exploring the maximum power operational point...

  6. Maximum total organic carbon limits at different DWPF melter feed rates (U)

    International Nuclear Information System (INIS)

    Choi, A.S.

    1996-01-01

    This document presents the maximum total organic carbon (TOC) limits allowable in the DWPF melter feed without forming a potentially flammable vapor in the off-gas system, determined at feed rates varying from 0.7 to 1.5 GPM. At the maximum TOC levels predicted, the peak concentration of combustible gases in the quenched off-gas will not exceed 60 percent of the lower flammable limit during a 3X off-gas surge, provided that the indicated melter vapor space temperature and the total air supply to the melter are maintained. All the necessary calculations for this study were made using the 4-stage cold cap model and the melter off-gas dynamics model. A high degree of conservatism was included in the calculational bases and assumptions. As a result, the proposed correlations are believed to be conservative enough to be used for melter off-gas flammability control purposes.

  7. Maximum Power Tracking by VSAS approach for Wind Turbine, Renewable Energy Sources

    Directory of Open Access Journals (Sweden)

    Nacer Kouider Msirdi

    2015-08-01

    This paper gives a review of the most efficient algorithms designed to track the maximum power point (MPP) for catching the maximum wind power by a variable speed wind turbine (VSWT). We then design a new maximum power point tracking (MPPT) algorithm using the Variable Structure Automatic Systems (VSAS) approach. The proposed approach leads to efficient algorithms, as shown in this paper by the analysis and simulations.

  8. Maximum Throughput in a C-RAN Cluster with Limited Fronthaul Capacity

    OpenAIRE

    Duan, Jialong; Lagrange, Xavier; Guilloud, Frédéric

    2016-01-01

    Centralized/Cloud Radio Access Network (C-RAN) is a promising future mobile network architecture which can ease the cooperation between different cells to manage interference. However, the feasibility of C-RAN is limited by the large bit rate requirement in the fronthaul. This paper studies the maximum throughput of different transmission strategies in a C-RAN cluster with transmission power constraints and fronthaul capacity constraints. Both transmission strategies wit...

  9. Narrow band interference cancelation in OFDM: Astructured maximum likelihood approach

    KAUST Repository

    Sohail, Muhammad Sadiq; Al-Naffouri, Tareq Y.; Al-Ghadhban, Samir N.

    2012-01-01

    This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous

  10. A stochastic-deterministic approach for evaluation of uncertainty in the predicted maximum fuel bundle enthalpy in a CANDU postulated LBLOCA event

    Energy Technology Data Exchange (ETDEWEB)

    Serghiuta, D.; Tholammakkil, J.; Shen, W., E-mail: Dumitru.Serghiuta@cnsc-ccsn.gc.ca [Canadian Nuclear Safety Commission, Ottawa, Ontario (Canada)

    2014-07-01

    A stochastic-deterministic approach based on representation of uncertainties by subjective probabilities is proposed for evaluation of bounding values of functional failure probability and assessment of probabilistic safety margins. The approach is designed for screening and limited independent review verification. Its application is illustrated for a postulated generic CANDU LBLOCA and evaluation of the possibility distribution function of maximum bundle enthalpy considering the reactor physics part of LBLOCA power pulse simulation only. The computer codes HELIOS and NESTLE-CANDU were used in a stochastic procedure driven by the computer code DAKOTA to simulate the LBLOCA power pulse using combinations of core neutronic characteristics randomly generated from postulated subjective probability distributions with deterministic constraints and fixed transient bundle-wise thermal hydraulic conditions. With this information, a bounding estimate of functional failure probability using the limit for the maximum fuel bundle enthalpy can be derived for use in evaluation of core damage frequency. (author)

  11. 24 CFR 203.18c - One-time or up-front mortgage insurance premium excluded from limitations on maximum mortgage...

    Science.gov (United States)

    2010-04-01

    One-time or up-front mortgage insurance premium excluded from limitations on maximum mortgage amounts (24 CFR 203.18c). Housing and Urban Development; single family mortgage loan insurance programs under the National Housing Act and other authorities.

  12. Comparison of candidate solar array maximum power utilization approaches. [for spacecraft propulsion

    Science.gov (United States)

    Costogue, E. N.; Lindena, S.

    1976-01-01

    A study was made of five potential approaches that can be utilized to detect the maximum power point of a solar array while sustaining operations at or near maximum power and without endangering stability or causing array voltage collapse. The approaches studied included: (1) dynamic impedance comparator, (2) reference array measurement, (3) onset of solar array voltage collapse detection, (4) parallel tracker, and (5) direct measurement. The study analyzed the feasibility and adaptability of these approaches to a future solar electric propulsion (SEP) mission, and, specifically, to a comet rendezvous mission. Such missions presented the most challenging requirements to a spacecraft power subsystem in terms of power management over large solar intensity ranges of 1.0 to 3.5 AU. The dynamic impedance approach was found to have the highest figure of merit, and the reference array approach followed closely behind. The results are applicable to terrestrial solar power systems as well as to other than SEP space missions.

  13. PNNL: A Supervised Maximum Entropy Approach to Word Sense Disambiguation

    Energy Technology Data Exchange (ETDEWEB)

    Tratz, Stephen C.; Sanfilippo, Antonio P.; Gregory, Michelle L.; Chappell, Alan R.; Posse, Christian; Whitney, Paul D.

    2007-06-23

    In this paper, we describe the PNNL Word Sense Disambiguation system as applied to the English All-Words task in SemEval 2007. We use a supervised learning approach, employing a large number of features and using Information Gain for dimension reduction. Our Maximum Entropy approach combined with a rich set of features produced results that are significantly better than baseline and are the highest F-score for the fine-grained English All-Words subtask.

  14. Unification of field theory and maximum entropy methods for learning probability densities

    Science.gov (United States)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  15. Unification of field theory and maximum entropy methods for learning probability densities.

    Science.gov (United States)

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  16. Automatic maximum entropy spectral reconstruction in NMR

    International Nuclear Information System (INIS)

    Mobli, Mehdi; Maciejewski, Mark W.; Gryk, Michael R.; Hoch, Jeffrey C.

    2007-01-01

    Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system

  17. Hydraulic limits on maximum plant transpiration and the emergence of the safety-efficiency trade-off.

    Science.gov (United States)

    Manzoni, Stefano; Vico, Giulia; Katul, Gabriel; Palmroth, Sari; Jackson, Robert B; Porporato, Amilcare

    2013-04-01

    Soil and plant hydraulics constrain ecosystem productivity by setting physical limits to water transport and hence carbon uptake by leaves. While more negative xylem water potentials provide a larger driving force for water transport, they also cause cavitation that limits hydraulic conductivity. An optimum balance between driving force and cavitation occurs at intermediate water potentials, thus defining the maximum transpiration rate the xylem can sustain (denoted as E(max)). The presence of this maximum raises the question as to whether plants regulate transpiration through stomata to function near E(max). To address this question, we calculated E(max) across plant functional types and climates using a hydraulic model and a global database of plant hydraulic traits. The predicted E(max) compared well with measured peak transpiration across plant sizes and growth conditions (R = 0.86, P < 0.001), consistent with the emergence of a safety-efficiency trade-off in plant xylem. Stomatal conductance allows maximum transpiration rates despite partial cavitation in the xylem, thereby suggesting coordination between stomatal regulation and xylem hydraulic characteristics. © 2013 The Authors. New Phytologist © 2013 New Phytologist Trust.

  18. Compact stars with a small electric charge: the limiting radius to mass relation and the maximum mass for incompressible matter

    Energy Technology Data Exchange (ETDEWEB)

    Lemos, Jose P.S.; Lopes, Francisco J.; Quinta, Goncalo [Universidade de Lisboa, UL, Departamento de Fisica, Centro Multidisciplinar de Astrofisica, CENTRA, Instituto Superior Tecnico, IST, Lisbon (Portugal); Zanchin, Vilson T. [Universidade Federal do ABC, Centro de Ciencias Naturais e Humanas, Santo Andre, SP (Brazil)

    2015-02-01

    One of the stiffest equations of state for matter in a compact star is constant energy density and this generates the interior Schwarzschild radius to mass relation and the Misner maximum mass for relativistic compact stars. If dark matter populates the interior of stars, and this matter is supersymmetric or of some other type, some of it possessing a tiny electric charge, there is the possibility that highly compact stars can trap a small but non-negligible electric charge. In this case the radius to mass relation for such compact stars should get modifications. We use an analytical scheme to investigate the limiting radius to mass relation and the maximum mass of relativistic stars made of an incompressible fluid with a small electric charge. The investigation is carried out by using the hydrostatic equilibrium equation, i.e., the Tolman-Oppenheimer-Volkoff (TOV) equation, together with the other equations of structure, with the further hypothesis that the charge distribution is proportional to the energy density. The approach relies on Volkoff and Misner's method to solve the TOV equation. For zero charge one gets the interior Schwarzschild limit, and supposing incompressible boson or fermion matter with constituents with masses of the order of the neutron mass one finds that the maximum mass is the Misner mass. For a small electric charge, our analytical approximating scheme, valid in first order in the star's electric charge, shows that the maximum mass increases relatively to the uncharged case, whereas the minimum possible radius decreases, an expected effect since the new field is repulsive, aiding the pressure to sustain the star against gravitational collapse. (orig.)
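
    For the uncharged case referenced here, the interior Schwarzschild solution gives the classic compactness bound (a standard result, quoted for orientation):

    \[
    \frac{2GM}{c^{2}R} \le \frac{8}{9}
    \quad\Longleftrightarrow\quad
    R \ge \frac{9}{4}\frac{GM}{c^{2}},
    \]

    so for a given radius the maximum mass of an incompressible star is M_max = 4c²R/(9G); the paper's first-order analysis shows how a small electric charge raises this maximum mass and lowers the minimum radius.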

  19. Maximum neutron flux in thermal reactors

    International Nuclear Information System (INIS)

    Strugar, P.V.

    1968-12-01

    A direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core directly, using the condition of maximum neutron flux while complying with thermal limitations. This paper proves that the problem can be solved by applying the variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications which make it appropriate from the maximum principle point of view. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples.

  20. Theoretical and experimental investigations of the limits to the maximum output power of laser diodes

    International Nuclear Information System (INIS)

    Wenzel, H; Crump, P; Pietrzak, A; Wang, X; Erbert, G; Traenkle, G

    2010-01-01

    The factors that limit both the continuous wave (CW) and the pulsed output power of broad-area laser diodes driven at very high currents are investigated theoretically and experimentally. The decrease in the gain due to self-heating under CW operation and spectral holeburning under pulsed operation, as well as heterobarrier carrier leakage and longitudinal spatial holeburning, are the dominant mechanisms limiting the maximum achievable output power.

  1. A Maximum Entropy Approach to Loss Distribution Analysis

    Directory of Open Access Journals (Sweden)

    Marco Bee

    2013-03-01

    In this paper we propose an approach to the estimation and simulation of loss distributions based on Maximum Entropy (ME), a non-parametric technique that maximizes the Shannon entropy of the data under moment constraints. Special cases of the ME density correspond to standard distributions; therefore, this methodology is very general, as it nests most classical parametric approaches. Sampling the ME distribution is essential in many contexts, such as loss models constructed via compound distributions. Given the difficulties in carrying out exact simulation, we propose an innovative algorithm, obtained by means of an extension of Adaptive Importance Sampling (AIS), for the approximate simulation of the ME distribution. Several numerical experiments confirm that the AIS-based simulation technique works well, and an application to insurance data gives further insights into the usefulness of the method for modelling, estimating and simulating loss distributions.

  2. Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors

    DEFF Research Database (Denmark)

    Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi

    2013-01-01

    Estimation schemes for Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio... is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID, tag cardinality estimation, maximum likelihood, detection error

  3. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  4. Narrow band interference cancelation in OFDM: Astructured maximum likelihood approach

    KAUST Repository

    Sohail, Muhammad Sadiq

    2012-06-01

    This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous with the frequency grid of the ZP-OFDM system. The proposed structure based technique uses the fact that the NBI signal is sparse as compared to the ZP-OFDM signal in the frequency domain. The structure is also useful in reducing the computational complexity of the proposed method. The paper also presents a data aided approach for improved NBI estimation. The suitability of the proposed method is demonstrated through simulations. © 2012 IEEE.

  5. 5 CFR 581.402 - Maximum garnishment limitations.

    Science.gov (United States)

    2010-01-01

    Processing garnishment orders for child support and/or alimony; Consumer Credit Protection Act restrictions. Pursuant to section 1673(b)(2)(A) and (B) of title 15 of the United States Code (the Consumer Credit Protection Act) ... local law, the maximum part of the aggregate disposable earnings subject to garnishment to enforce any support order ...

  6. Scrape-off layer based modelling of the density limit in beryllated JET limiter discharges

    International Nuclear Information System (INIS)

    Borrass, K.; Campbell, D.J.; Clement, S.; Vlases, G.C.

    1993-01-01

    The paper gives a scrape-off layer based interpretation of the density limit in beryllated JET limiter discharges. In these discharges, JET edge parameters show a complicated time evolution as the density limit is approached and the limit is manifested as a non-disruptive density maximum which cannot be exceeded by enhanced gas puffing. The occurrence of Marfes, the manner of density control and details of recycling are essential elements of the interpretation. Scalings for the maximum density are given and compared with JET data. The relation to disruptive density limits, previously observed in JET carbon limiter discharges, and to density limits in divertor discharges is discussed. (author). 18 refs, 10 figs, 1 tab

  7. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed

  8. 40 CFR 130.7 - Total maximum daily loads (TMDL) and individual water quality-based effluent limitations.

    Science.gov (United States)

    2010-07-01

    Total maximum daily loads (TMDL) and individual water quality-based effluent limitations. 40 CFR § 130.7; Protection of Environment; Environmental Protection Agency; Water Programs: Water Quality Planning and Management.

  9. Limits of the endoscopic transnasal transtubercular approach.

    Science.gov (United States)

    Gellner, Verena; Tomazic, Peter V

    2018-06-01

    The endoscopic transnasal trans-sphenoidal transtubercular approach has become a standard alternative to transcranial neurosurgical routes for lesions of the anterior skull base, in particular pathologies of the anterior tubercle, sphenoid plane, and midline lesions up to the interpeduncular cistern. For both the endoscopic and the transcranial approach, indications must be strictly evaluated and tailored to the patient's morphology and condition. The purpose of this review was to evaluate the evidence in the literature on the limitations of the endoscopic transtubercular approach. A PubMed/Medline search was conducted in January 2018 entering the following keywords. Upon initial screening, 7 papers were included in this review. There are several other papers describing the endoscopic transtubercular approach (ETTA). We tried to list the limiting factors according to the existing literature as cited. The main limiting factors are laterally extending lesions in relation to the optic canal and vascular encasement and/or unfavorable tumor tissue consistency. The ETTA is considered a high-level transnasal endoscopic extended skull base approach and requires excellent training, skills and experience.

  10. The limit distribution of the maximum increment of a random walk with regularly varying jump size distribution

    DEFF Research Database (Denmark)

    Mikosch, Thomas Valentin; Rackauskas, Alfredas

    2010-01-01

    In this paper, we deal with the asymptotic distribution of the maximum increment of a random walk with a regularly varying jump size distribution. This problem is motivated by a long-standing problem on change point detection for epidemic alternatives. It turns out that the limit distribution of the maximum increment of the random walk is one of the classical extreme value distributions, the Fréchet distribution. We prove the results in the general framework of point processes and for jump sizes taking values in a separable Banach space...
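
    For reference, the Fréchet limit law named here has the distribution function (with α > 0 the tail index of the regularly varying jump sizes):

    \[
    \Phi_\alpha(x) = \exp\!\left(-x^{-\alpha}\right), \quad x > 0,
    \qquad
    \Phi_\alpha(x) = 0, \quad x \le 0.
    \]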

  11. Avinash-Shukla mass limit for the maximum dust mass supported against gravity by electric fields

    Science.gov (United States)

    Avinash, K.

    2010-08-01

    The existence of a new class of astrophysical objects, where gravity is balanced by the shielded electric fields associated with the electric charge on the dust, is shown. Further, a mass limit MA for the maximum dust mass that can be supported against gravitational collapse by these fields is obtained. If the total mass of the dust in the interstellar cloud MD > MA, the dust collapses, while if MD < MA, stable equilibrium may be achieved. Heuristic arguments are given to show that the physics of the mass limit is similar to Chandrasekhar's mass limit for compact objects, and the similarity of these dust configurations with neutron stars and white dwarfs is pointed out. The effect of grain size distribution on the mass limit and strong correlation effects in the core of such objects are discussed. Possible locations of these dust configurations inside interstellar clouds are pointed out.

  12. Efficiency of autonomous soft nanomachines at maximum power.

    Science.gov (United States)

    Seifert, Udo

    2011-01-14

    We consider nanosized artificial or biological machines working in steady state enforced by imposing nonequilibrium concentrations of solutes or by applying external forces, torques, or electric fields. For unicyclic and strongly coupled multicyclic machines, efficiency at maximum power is not bounded by the linear response value 1/2. For strong driving, it can even approach the thermodynamic limit 1. Quite generally, such machines fall into three different classes characterized, respectively, as "strong and efficient," "strong and inefficient," and "balanced." For weakly coupled multicyclic machines, efficiency at maximum power has lost any universality even in the linear response regime.

  13. The Maximum Cross-Correlation approach to detecting translational motions from sequential remote-sensing images

    Science.gov (United States)

    Gao, J.; Lythe, M. B.

    1996-06-01

    This paper presents the principle of the Maximum Cross-Correlation (MCC) approach in detecting translational motions within dynamic fields from time-sequential remotely sensed images. A C program implementing the approach is presented and illustrated in a flowchart. The program is tested with a pair of sea-surface temperature images derived from Advanced Very High Resolution Radiometer (AVHRR) images near East Cape, New Zealand. Results show that the mean currents in the region have been detected satisfactorily with the approach.
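
    The core of the MCC approach is easy to sketch. The following minimal NumPy fragment (an illustrative reimplementation, not the paper's C program; the window and search sizes are assumptions) finds, for one template window in the first image, the displacement that maximizes the normalized cross-correlation in the second image; dividing by the time between images converts the displacement to a velocity:

        import numpy as np

        def mcc_displacement(img1, img2, y, x, tpl=16, search=8):
            # displacement (dy, dx) of the tpl x tpl window at (y, x) in img1
            # that maximizes normalized cross-correlation within +/-search pixels
            t = img1[y:y + tpl, x:x + tpl].astype(float)
            t = (t - t.mean()) / (t.std() + 1e-12)
            best, best_dyx = -np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    c = img2[y + dy:y + dy + tpl, x + dx:x + dx + tpl].astype(float)
                    c = (c - c.mean()) / (c.std() + 1e-12)
                    r = float(np.mean(t * c))
                    if r > best:
                        best, best_dyx = r, (dy, dx)
            return best_dyx, best   # indices are assumed to stay inside both images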

  14. Maximum Entropy Approach in Dynamic Contrast-Enhanced Magnetic Resonance Imaging.

    Science.gov (United States)

    Farsani, Zahra Amini; Schmid, Volker J

    2017-01-01

    In the estimation of physiological kinetic parameters from Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) data, the determination of the arterial input function (AIF) plays a key role. This paper proposes a Bayesian method to estimate the physiological parameters of DCE-MRI along with the AIF in situations where no measurement of the AIF is available. In the proposed algorithm, the maximum entropy method (MEM) is combined with the maximum a posteriori (MAP) approach. To this end, MEM is used to specify a prior probability distribution of the unknown AIF. The ability of this method to estimate the AIF is validated using the Kullback-Leibler divergence. Subsequently, the kinetic parameters can be estimated with MAP. The proposed algorithm is evaluated with a data set from a breast cancer MRI study. The application shows that the AIF can reliably be determined from the DCE-MRI data using MEM, and the kinetic parameters can be estimated subsequently. The maximum entropy method is a powerful tool for reconstructing images from many types of data, and is useful for generating a probability distribution based on given information. The proposed method gives an alternative way to assess the input function from the existing data; it allows a good fit of the data and therefore a better estimation of the kinetic parameters, which in the end allows for a more reliable use of DCE-MRI.

  15. Combining Experiments and Simulations Using the Maximum Entropy Principle

    DEFF Research Database (Denmark)

    Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten

    2014-01-01

    Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights. We introduce the maximum entropy procedure in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the procedure has recently provided new insight. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
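
    One common form of the maximum entropy procedure in this setting can be sketched in a few lines. Under the assumptions below (uniform prior weights, a single scalar experimental observable; a generic formulation, not necessarily the exact one used in the highlighted papers), the minimally perturbed ensemble reweights each simulation frame by w_i proportional to exp(-λ s_i), with λ chosen so the reweighted average matches the experiment:

        import numpy as np
        from scipy.optimize import brentq

        def maxent_reweight(s, s_exp):
            # weights w_i ~ exp(-lam * s_i) matching <s>_w = s_exp while minimizing
            # the KL divergence from the original (uniform) frame weights
            def weights(lam):
                a = -lam * s
                a -= a.max()            # stabilize the exponentials
                w = np.exp(a)
                return w / w.sum()
            lam = brentq(lambda l: weights(l) @ s - s_exp, -50.0, 50.0)  # assumed bracket
            return weights(lam)

        s = np.random.default_rng(0).normal(3.0, 1.0, 10_000)  # per-frame observable
        w = maxent_reweight(s, s_exp=3.4)
        print(w @ s)   # ~3.4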

  16. Maximum likelihood approach for several stochastic volatility models

    International Nuclear Information System (INIS)

    Camprodon, Jordi; Perelló, Josep

    2012-01-01

    Volatility measures the amplitude of price fluctuations. Despite being one of the most important quantities in finance, volatility is not directly observable. Here we apply a maximum likelihood method which assumes that price and volatility follow a two-dimensional diffusion process where volatility is the stochastic diffusion coefficient of the log-price dynamics. We apply this method to the simplest versions of the expOU, OU and Heston stochastic volatility models and study their performance in terms of the log-price probability, the volatility probability, and its Mean First-Passage Time. The approach has some predictive power on the future return amplitude from knowledge of the current volatility alone. The assumed models do not consider long-range volatility autocorrelation or the asymmetric return-volatility cross-correlation, but the method still yields these two important stylized facts very naturally. We apply the method to different market indices, with good performance in all cases. (paper)
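
    The flavor of the likelihood machinery is easiest to show on the simplest related case: an Ornstein-Uhlenbeck (OU) process that is directly observed (in the paper the volatility is latent, which makes the actual estimation more involved). The sketch below, with illustrative parameter values, maximizes the exact Gaussian transition likelihood of dx = k(μ - x)dt + σ dW:

        import numpy as np
        from scipy.optimize import minimize

        def ou_neg_loglik(params, x, dt):
            k, mu, sigma = params
            if k <= 0 or sigma <= 0:
                return np.inf
            e = np.exp(-k * dt)
            mean = mu + (x[:-1] - mu) * e          # exact conditional mean
            var = sigma**2 * (1 - e**2) / (2 * k)  # exact conditional variance
            r = x[1:] - mean
            return 0.5 * np.sum(np.log(2 * np.pi * var) + r**2 / var)

        rng = np.random.default_rng(0)
        k, mu, sigma, dt, n = 2.0, 0.5, 0.3, 1 / 252, 50_000   # assumed values
        e = np.exp(-k * dt)
        sd = sigma * np.sqrt((1 - e**2) / (2 * k))
        x = np.empty(n); x[0] = mu
        for i in range(1, n):
            x[i] = mu + (x[i - 1] - mu) * e + sd * rng.normal()
        res = minimize(ou_neg_loglik, x0=[1.0, 0.0, 0.2], args=(x, dt),
                       method="Nelder-Mead")
        print(res.x)   # should approach (2.0, 0.5, 0.3)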

  17. Catastrophic Disruption Threshold and Maximum Deflection from Kinetic Impact

    Science.gov (United States)

    Cheng, A. F.

    2017-12-01

    The use of a kinetic impactor to deflect an asteroid on a collision course with Earth was described in the NASA Near-Earth Object Survey and Deflection Analysis of Alternatives (2007) as the most mature approach for asteroid deflection and mitigation. The NASA DART mission will demonstrate asteroid deflection by kinetic impact at the Potentially Hazardous Asteroid 65803 Didymos in October 2022. The kinetic impactor approach is considered to be applicable with warning times of 10 years or more and with hazardous asteroid diameters of 400 m or less. In principle, a larger kinetic impactor bringing greater kinetic energy could cause a larger deflection, but input of excessive kinetic energy will cause catastrophic disruption of the target, leaving possibly large fragments still on a collision course with Earth. Thus the catastrophic disruption threshold limits the maximum deflection from a kinetic impactor. An often-cited rule of thumb states that the maximum deflection is 0.1 times the escape velocity before the target will be disrupted. It turns out this rule of thumb does not work well. A comparison to numerical simulation results shows that a similar rule applies in the gravity limit, for large targets of more than 300 m, where the maximum deflection is roughly the escape velocity at a momentum enhancement factor β = 2. In the gravity limit, the rule of thumb corresponds to pure momentum coupling (μ = 1/3), but simulations find a slightly different scaling, μ = 0.43. In the smaller target size range to which kinetic impactors would apply, the catastrophic disruption limit is strength-controlled. A DART-like impactor will not disrupt any target asteroid down to sizes significantly smaller than 50 m, below which a hazardous object would in any case not penetrate the atmosphere unless it is unusually strong.

  18. Determination of the maximum-depth to potential field sources by a maximum structural index method

    Science.gov (United States)

    Fedi, M.; Florio, G.

    2013-01-01

    A simple and fast determination of the limiting depth to the sources can significantly aid data interpretation. To this end we explore the possibility of determining those source parameters shared by all the classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, by using for example the well-known Bott-Smith rules. These rules involve only the knowledge of the field and its horizontal gradient maxima, and are independent of the density contrast. Thanks to the direct relationship between the structural index and the depth to sources, we work out a simple and fast strategy to obtain the maximum depth using semi-automated methods, such as Euler deconvolution or the depth-from-extreme-points (DEXP) method. The proposed method consists of estimating the maximum depth as the one obtained for the highest allowable value of the structural index (Nmax). Nmax may be easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity field or magnetic field). We tested our approach on synthetic models against the results obtained by the classical Bott-Smith formulas and the results are in fact very similar, confirming the validity of this method. However, while the Bott-Smith formulas are restricted to the gravity field only, our method is applicable also to the magnetic field and to any derivative of the gravity and magnetic fields. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)max/fmax ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimation of the maximum depth agrees with the seismic information.

  19. Physical Limits on Hmax, the Maximum Height of Glaciers and Ice Sheets

    Science.gov (United States)

    Lipovsky, B. P.

    2017-12-01

    The longest glaciers and ice sheets on Earth never achieve a topographic relief, or height, greater than about Hmax = 4 km. What laws govern this apparent maximum height to which a glacier or ice sheet may rise? Two types of answer appear possible: one relating to geological process and the other to ice dynamics. In the first type of answer, one might suppose that if Earth had 100 km tall mountains then there would be many 20 km tall glaciers. The counterpoint to this argument is that recent evidence suggests that glaciers themselves limit the maximum height of mountain ranges. We turn, then, to ice dynamical explanations for Hmax. The classical ice dynamical theory of Nye (1951), however, does not predict any break in scaling to give rise to a maximum height Hmax. I present a simple model for the height of glaciers and ice sheets. The expression is derived from a simplified representation of a thermomechanically coupled ice sheet that experiences a basal shear stress governed by Coulomb friction (i.e., a stress proportional to the overburden pressure minus the water pressure). I compare this model to satellite-derived digital elevation map measurements of glacier surface height profiles for the 200,000 glaciers in the Randolph Glacier Inventory (Pfeffer et al., 2014) as well as flowlines from the Greenland and Antarctic Ice Sheets. The simplified model provides a surprisingly good fit to these global observations. Small glaciers less than 1 km in length are characterized by a negligible influence of basal meltwater, cold (about -15 °C) beds, and high surface slopes (about 30 degrees). Glaciers longer than a critical distance of about 30 km are characterized by an ice-bed interface that is weakened by the presence of meltwater and is therefore not capable of supporting steep surface slopes. The simplified model makes predictions of ice volume change as a function of surface temperature, accumulation rate, and geothermal heat flux. For this reason, it provides insights into...

  20. 33 CFR 401.52 - Limit of approach to a bridge.

    Science.gov (United States)

    2010-07-01

    ... 33 CFR 401.52 (2010-07-01): Limit of approach to a bridge. Navigation and Navigable Waters. § 401.52 Limit of approach to a bridge. (a) No vessel shall pass the limit of approach sign at any movable bridge until the bridge is in a fully open position and the signal light shows green. (b) No vessel shall pass the limit...

  1. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

    Drug discovery applies multidisciplinary approaches, either experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has been used not only as a physics law, but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
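
    Jaynes' least-biased construction is concrete even in a toy setting. A minimal sketch (the classic dice example, not drawn from the article): among all distributions on {1, ..., 6} with a prescribed mean, the maximum entropy choice is the exponential family p_i proportional to exp(-λ i), with λ fixed by the constraint:

        import numpy as np
        from scipy.optimize import brentq

        faces = np.arange(1, 7)

        def pmf(lam):
            w = np.exp(-lam * faces)
            return w / w.sum()

        def maxent_pmf(mean_target):
            # solve <i> = mean_target for the Lagrange multiplier lam
            lam = brentq(lambda l: pmf(l) @ faces - mean_target, -10.0, 10.0)
            return pmf(lam)

        p = maxent_pmf(4.5)          # a loaded die whose average roll is 4.5
        print(np.round(p, 4), p @ faces)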

  2. Performance and separation occurrence of binary probit regression estimator using maximum likelihood method and Firths approach under different sample size

    Science.gov (United States)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of a binary probit regression model are commonly estimated using the maximum likelihood estimation (MLE) method. However, MLE has a limitation when the binary data contain separation. Separation is the condition in which one or several independent variables exactly separate the categories of the binary response. It causes the MLE estimators to fail to converge, so that they cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims: first, to identify the chance of separation occurring in a binary probit regression model under the MLE method versus Firth's approach; second, to compare the performance of the binary probit regression estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both are examined using simulation under different sample sizes. The results showed that the chance of separation occurring under the MLE method for small sample sizes is higher than under Firth's approach. For larger sample sizes, the probability decreases and is nearly identical between the two methods. Meanwhile, Firth's estimators have smaller RMSE than the MLE estimators, especially for smaller sample sizes; for larger sample sizes the RMSEs are not much different. This means that Firth's estimators outperform the MLE estimators.

  3. Chandrasekhar Limit: An Elementary Approach Based on Classical Physics and Quantum Theory

    Science.gov (United States)

    Pinochet, Jorge; Van Sint Jan, Michael

    2016-01-01

    In a brief article published in 1931, Subrahmanyan Chandrasekhar made public an important astronomical discovery. In his article, the then young Indian astrophysicist introduced what is now known as the "Chandrasekhar limit." This limit establishes the maximum mass of a stellar remnant beyond which the repulsion force between electrons…

  4. Maximum neutron flux in thermal reactors; Maksimum neutronskog fluksa kod termalnih reaktora

    Energy Technology Data Exchange (ETDEWEB)

    Strugar, P V [Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Yugoslavia)

    1968-07-01

    The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core directly, using the condition of maximum neutron flux while complying with thermal limitations. This paper shows that the problem can be solved by applying the variational calculus, i.e., by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications that make it suitable for treatment by the maximum principle. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples.

  5. Roothaan approach in the thermodynamic limit

    Science.gov (United States)

    Gutierrez, G.; Plastino, A.

    1982-02-01

    A systematic method for the solution of the Hartree-Fock equations in the thermodynamic limit is presented. The approach is seen to be a natural extension of the one usually employed in the finite-fermion case, i.e., that developed by Roothaan. The new techniques developed here are applied, as an example, to neutron matter, employing the so-called V1 Bethe "homework" potential. The results obtained are, by far, superior to those that the ordinary plane-wave Hartree-Fock theory yields. NUCLEAR STRUCTURE Hartree-Fock approach; nuclear and neutron matter.

  6. [95/95] Approach for design limits analysis in WWER

    International Nuclear Information System (INIS)

    Shishkov, L.; Tsyganov, S.

    2008-01-01

    The paper discusses the well-known [95%/95%] condition, which is important for monitoring some limits of core parameters in the course of designing reactors (such as PWR or WWER). The condition ensures the postulate that 'there is at least a 95% probability at a 95% confidence level that' some parameter does not exceed the limit. Such conditions are stated, for instance, in US standards and IAEA norms as recommendations for DNBR and fuel temperature. A question may arise: why is such an approach applied only to these parameters, and not normally to any others? And how are the limits ensured in design practice? Using the general statements of mathematical statistics, the authors interpret the [95/95] approach as applied to WWER design limits. (Authors)
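
    In design practice the [95/95] condition is often met with Wilks' nonparametric tolerance limits: if n independent code runs are performed, the largest observed value bounds the 95th percentile with 95% confidence once 1 - 0.95^n ≥ 0.95. A minimal sketch of that sample-size rule (a standard result, stated here independently of the paper):

        import math

        def wilks_sample_size(coverage=0.95, confidence=0.95):
            # smallest n with 1 - coverage**n >= confidence (first-order, one-sided)
            return math.ceil(math.log(1 - confidence) / math.log(coverage))

        print(wilks_sample_size())   # 59, the classic one-sided 95/95 run count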

  7. The Betz-Joukowsky limit for the maximum power coefficient of wind turbines

    DEFF Research Database (Denmark)

    Okulov, Valery; van Kuik, G.A.M.

    2009-01-01

    The article addresses the history of an important scientific result in wind energy. The maximum efficiency of an ideal wind turbine rotor is well known as the 'Betz limit', named after the German scientist who formulated this maximum in 1920. Lanchester, a British scientist, is also associated...
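
    The limit itself follows from one line of actuator-disc algebra: the power coefficient is C_P(a) = 4a(1 - a)^2 in terms of the axial induction factor a, maximized at a = 1/3 where C_P = 16/27 ≈ 0.593. A quick numerical check (illustrative, not from the article):

        import numpy as np

        a = np.linspace(0.0, 0.5, 500_001)      # axial induction factor
        cp = 4 * a * (1 - a) ** 2               # actuator-disc power coefficient
        i = np.argmax(cp)
        print(a[i], cp[i], 16 / 27)             # a = 1/3, C_P = 16/27 ~ 0.593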

  8. BMRC: A Bitmap-Based Maximum Range Counting Approach for Temporal Data in Sensor Monitoring Networks

    Directory of Open Access Journals (Sweden)

    Bin Cao

    2017-09-01

    Due to the rapid development of the Internet of Things (IoT), many feasible deployments of sensor monitoring networks have been made to capture events in the physical world, such as human diseases, weather disasters and traffic accidents, which generate large-scale temporal data. The time interval with the highest incidence of a severe event is often of significance for society. For example, there exists an interval that covers the maximum number of people who have the same unusual symptoms, and knowing this interval can help doctors to locate the reason behind this phenomenon. As far as we know, there is no approach available for solving this problem efficiently. In this paper, we propose the Bitmap-based Maximum Range Counting (BMRC) approach for temporal data generated in sensor monitoring networks. Since sensor nodes can update their temporal data at high frequency, we present a scalable strategy to support real-time insert and delete operations. The experimental results show that BMRC outperforms the baseline algorithm in terms of efficiency.
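
    BMRC's bitmap structure is not reproduced here, but the query it accelerates has a simple sweep-line baseline of the kind such proposals are compared against: sort the interval endpoints and track the running overlap count to find the instant covered by the maximum number of intervals. A minimal sketch (illustrative; BMRC itself is the paper's contribution):

        def max_coverage_instant(intervals):
            # instant covered by the maximum number of half-open [start, end) intervals
            events = []
            for start, end in intervals:
                events.append((start, 1))    # interval opens
                events.append((end, -1))     # interval closes
            events.sort()                    # closes sort before opens at equal times
            best_t, best_c, c = None, 0, 0
            for t, delta in events:
                c += delta
                if c > best_c:
                    best_t, best_c = t, c
            return best_t, best_c

        print(max_coverage_instant([(1, 5), (2, 8), (4, 6), (7, 9)]))   # (4, 3)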

  9. An analysis of annual maximum streamflows in Terengganu, Malaysia using TL-moments approach

    Science.gov (United States)

    Ahmad, Ummi Nadiah; Shabri, Ani; Zakaria, Zahrahtul Amani

    2013-02-01

    The TL-moments approach has been used in an analysis to determine the best-fitting distributions to represent the annual series of maximum streamflow data over 12 stations in Terengganu, Malaysia. TL-moments with different trimming values are used to estimate the parameters of the selected distributions, namely the generalized Pareto (GPA), generalized logistic, and generalized extreme value distributions. The influence of TL-moments on the estimated probability distribution functions is examined by evaluating the relative root mean square error and relative bias of quantile estimates through Monte Carlo simulations. The boxplot is used to show the location of the median and the dispersion of the data, which helps in reaching decisive conclusions. For most of the cases, the results show that TL-moments with the one smallest value trimmed from the conceptual sample (TL-moments (1, 0)) of the GPA distribution was the most appropriate in the majority of stations for describing the annual maximum streamflow series in Terengganu, Malaysia.
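
    The untrimmed building blocks of this approach are easy to state: sample L-moments follow from unbiased probability-weighted moments b_r, and the TL-moment variants used in the paper apply the same idea to trimmed order statistics. A minimal sketch of the plain sample L-moments (standard Hosking formulas; the trimmed estimators are omitted for brevity):

        import numpy as np

        def sample_l_moments(x):
            # first four sample L-moments via probability-weighted moments b_r
            x = np.sort(np.asarray(x, float))
            n = len(x)
            i = np.arange(1, n + 1)
            b0 = x.mean()
            b1 = np.sum((i - 1) * x) / (n * (n - 1))
            b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
            b3 = np.sum((i - 1) * (i - 2) * (i - 3) * x) / (n * (n - 1) * (n - 2) * (n - 3))
            l1, l2 = b0, 2 * b1 - b0
            l3 = 6 * b2 - 6 * b1 + b0
            l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
            return l1, l2, l3 / l2, l4 / l2   # mean, L-scale, L-skewness, L-kurtosis

        print(sample_l_moments(np.random.default_rng(0).gumbel(size=1000)))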

  10. The separate universe approach to soft limits

    Energy Technology Data Exchange (ETDEWEB)

    Kenton, Zachary; Mulryne, David J., E-mail: z.a.kenton@qmul.ac.uk, E-mail: d.mulryne@qmul.ac.uk [School of Physics and Astronomy, Queen Mary University of London, Mile End Road, London, E1 4NS (United Kingdom)

    2016-10-01

    We develop a formalism for calculating soft limits of n-point inflationary correlation functions using separate universe techniques. Our method naturally allows for multiple fields and leads to an elegant diagrammatic approach. As an application we focus on the trispectrum produced by inflation with multiple light fields, giving explicit formulae for all possible single- and double-soft limits. We also investigate consistency relations and present an infinite tower of inequalities between soft correlation functions which generalise the Suyama-Yamaguchi inequality.

  11. Double-tailored nonimaging reflector optics for maximum-performance solar concentration.

    Science.gov (United States)

    Goldstein, Alex; Gordon, Jeffrey M

    2010-09-01

    A nonimaging strategy that tailors two mirror contours for concentration near the étendue limit is explored, prompted by solar applications where a sizable gap between the optic and absorber is required. Subtle limitations of this simultaneous multiple surface method approach are derived, rooted in the manner in which phase space boundaries can be tailored according to the edge-ray principle. The fundamental categories of double-tailored reflective optics are identified, only a minority of which can pragmatically offer maximum concentration at high collection efficiency. Illustrative examples confirm that acceptance half-angles as large as 30 mrad can be realized at a flux concentration of approximately 1000.
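
    The quoted numbers are consistent with the étendue (sine) limit itself: for an acceptance half-angle θ in air, the maximum 3D flux concentration is 1/sin²θ, which for θ = 30 mrad is about 1100, in line with the reported value of approximately 1000:

        import math

        theta = 0.030                       # acceptance half-angle from the abstract, rad
        print(1 / math.sin(theta) ** 2)     # ~1111, the etendue limit in air (n = 1)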

  12. Higher renewable energy integration into the existing energy system of Finland – Is there any maximum limit?

    International Nuclear Information System (INIS)

    Zakeri, Behnam; Syri, Sanna; Rinne, Samuli

    2015-01-01

    Finland is to increase the share of RES (renewable energy sources) to 38% of final energy consumption by 2020. Benefiting from local biomass resources, the Finnish energy system is deemed capable of achieving this goal, and increasing the share of other, intermittent renewables is under development, namely wind power and solar energy. Yet the maximum flexibility of the existing energy system in integrating renewable energy has not been investigated, which is an important step before undertaking new renewable energy obligations. This study aims at filling this gap by hourly analysis and comprehensive modeling of the energy system, including electricity, heat, and transportation, employing the EnergyPLAN tool. Focusing on technical and economic implications, we assess the maximum potential of different RESs separately (including bioenergy, hydropower, wind power, solar heating and PV, and heat pumps), as well as an optimal mix of different technologies. Furthermore, we propose a new index for assessing the maximum flexibility of energy systems in absorbing variable renewable energy. The results demonstrate that wind energy can be harvested at maximum levels of 18-19% of annual power demand (approx. 16 TWh/a) without major enhancements in the flexibility of the energy infrastructure. With today's energy demand, the maximum feasible renewable energy share for Finland is around 44-50%, via an optimal mix of different technologies, which promises a 35% reduction in carbon emissions from the 2012 level. Moreover, the Finnish energy system is flexible enough to raise the share of renewables in gross electricity consumption to at most 69-72%. Higher shares of RES call for lower energy consumption (energy efficiency) and more flexibility in balancing energy supply and consumption (e.g. by energy storage). - Highlights: • By hourly analysis, we model the whole energy system of Finland. • With the existing energy infrastructure, RES (renewable energy sources) cannot exceed 50% of primary energy.

  13. Evaluation of regulatory variation and theoretical health risk for pesticide maximum residue limits in food.

    Science.gov (United States)

    Li, Zijian

    2018-08-01

    To evaluate whether pesticide maximum residue limits (MRLs) can protect public health, a deterministic dietary risk assessment of maximum pesticide legal exposure was conducted to convert global MRLs to theoretical maximum daily intake (TMDI) values, by estimating the average food intake rate and human body weight for each country. A total of 114 nations (58% of the nations in the world) and two international bodies, the European Union (EU) and Codex (WHO), have regulated at least one of the most currently used pesticides in at least one of the most consumed agricultural commodities. In this study, 14 of the most commonly used pesticides and 12 of the most commonly consumed agricultural commodities were identified and selected for analysis. A health risk analysis indicated that nearly 30% of the computed pesticide TMDI values were greater than the acceptable daily intake (ADI) values; however, many nations lack common pesticide MRLs for many commonly consumed foods, and other human exposure pathways, such as soil, water, and air, were not considered. Normality tests of the TMDI value sets indicated that all distributions were right-skewed, due to large TMDI clusters at the low end of the distribution caused by some strict pesticide MRLs regulated by the EU (normally a default MRL of 0.01 mg/kg when essential data are missing). The Box-Cox transformation and optimal lambda (λ) were applied to these TMDI distributions, and normality tests of the transformed data indicated that the power-transformed TMDI values of at least eight pesticides follow a normal distribution. It was concluded that the worldwide adoption of unified, strict pesticide MRLs could significantly skew the distribution of TMDI values to the right, lower the legal exposure to pesticides, and effectively control human health risks.
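
    The TMDI construction implied above is a weighted sum of residue limits over the diet. A minimal sketch with hypothetical inputs (the MRLs, intake rates, body weight and ADI below are invented for illustration):

        def tmdi(mrl_and_intake, body_weight_kg):
            # theoretical maximum daily intake, mg/kg bw/day: sum over commodities
            # of MRL (mg/kg food) * daily intake (kg food/day), per kg body weight
            return sum(mrl * intake for mrl, intake in mrl_and_intake) / body_weight_kg

        exposure = tmdi([(0.5, 0.25), (0.1, 0.30), (2.0, 0.05)], body_weight_kg=60)
        adi = 0.01                                   # hypothetical ADI, mg/kg bw/day
        print(exposure, "exceeds ADI" if exposure > adi else "within ADI")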

  14. A Maximum Likelihood Approach to Determine Sensor Radiometric Response Coefficients for NPP VIIRS Reflective Solar Bands

    Science.gov (United States)

    Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong

    2011-01-01

    Optical sensors aboard Earth-orbiting satellites, such as the next-generation Visible/Infrared Imager/Radiometer Suite (VIIRS), assume that the sensor's radiometric response in the Reflective Solar Bands (RSB) is described by a quadratic polynomial relating the aperture spectral radiance to the sensor's Digital Number (DN) readout. For VIIRS Flight Unit 1, the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on-orbit through observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a Maximum Likelihood approach is applied in carrying out the least-squares procedure. Crucial to the Maximum Likelihood least-squares procedure is the computation of the weight. The weight not only has a contribution from the noise of the sensor's digital count, with an important contribution from digitization error, but is also affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and the dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The Maximum Likelihood approach demonstrates the inadequacy of the attenuation method model with a quadratic polynomial for the retrieved spectral radiance. We show that using the inadequate model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.
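
    The central point, that the weight must fold the noise of the independent variable through the model, has a compact generic form: w_i = 1 / (σ_y,i² + f′(x_i)² σ_x,i²), iterated because f′ depends on the current fit (the effective-variance formulation). A hedged sketch for a quadratic model (a generic illustration, not the authors' VIIRS code; the noise levels are assumed):

        import numpy as np

        def effective_variance_quadfit(x, y, sx, sy, iters=10):
            # weighted fit y ~ c0 + c1*x + c2*x^2 with weights that fold the
            # x-noise through the model slope; iterate since the slope moves
            c = np.polyfit(x, y, 2)[::-1]             # start from an unweighted fit
            for _ in range(iters):
                slope = c[1] + 2 * c[2] * x
                w = 1.0 / (sy**2 + (slope * sx) ** 2)
                A = np.vander(x, 3, increasing=True)  # columns 1, x, x^2
                Aw = A * w[:, None]
                c = np.linalg.solve(A.T @ Aw, Aw.T @ y)
            return c

        rng = np.random.default_rng(2)
        xt = np.linspace(0.0, 10.0, 200)
        x = xt + rng.normal(0, 0.05, xt.size)
        y = 0.5 + 1.2 * xt + 0.08 * xt**2 + rng.normal(0, 0.10, xt.size)
        print(effective_variance_quadfit(x, y, sx=0.05, sy=0.10))  # ~(0.5, 1.2, 0.08)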

  15. Maximum likelihood approach to “informed” Sound Source Localization for Hearing Aid applications

    DEFF Research Database (Denmark)

    Farmani, Mojtaba; Pedersen, Michael Syskind; Tan, Zheng-Hua

    2015-01-01

    Most state-of-the-art Sound Source Localization (SSL) algorithms have been proposed for applications which are "uninformed" about the target sound content; however, utilizing a wireless microphone worn by a target talker enables recent Hearing Aid Systems (HASs) to access an almost noise-free sound signal of the target talker at the HAS via the wireless connection. Therefore, in this paper, we propose a maximum likelihood (ML) approach, which we call MLSSL, to estimate the Direction of Arrival (DoA) of the target signal given access to the target signal content. Compared with other "informed...

  16. Maximum-principle-satisfying space-time conservation element and solution element scheme applied to compressible multifluids

    KAUST Repository

    Shen, Hua; Wen, Chih-Yung; Parsani, Matteo; Shu, Chi-Wang

    2016-01-01

    A maximum-principle-satisfying space-time conservation element and solution element (CE/SE) scheme is constructed to solve a reduced five-equation model coupled with the stiffened equation of state for compressible multifluids. We first derive a sufficient condition for CE/SE schemes to satisfy the maximum principle when solving a general conservation law, and then introduce a slope limiter to enforce the sufficient condition, applicable to both central and upwind CE/SE schemes. Finally, we implement the upwind maximum-principle-satisfying CE/SE scheme to solve the volume-fraction-based five-equation model for compressible multifluids. Several numerical examples are carried out to carefully examine the accuracy, efficiency, conservativeness and maximum-principle-satisfying property of the proposed approach.
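
    The scheme's own limiter and sufficient condition are in the paper; the general mechanism, keeping each cell's linear reconstruction between neighboring averages, is the familiar minmod construction sketched below (a generic illustration, not the authors' CE/SE limiter):

        import numpy as np

        def minmod(a, b):
            # smaller-magnitude slope when the signs agree, zero otherwise
            return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

        def limited_slopes(u):
            # cell-wise slopes so reconstructions stay within neighboring means,
            # the property that keeps a scheme maximum-principle-satisfying
            du_left = u[1:-1] - u[:-2]
            du_right = u[2:] - u[1:-1]
            return minmod(du_left, du_right)

        u = np.array([0.0, 1.0, 2.0, 4.0, 5.0, 5.0, 5.0])
        print(limited_slopes(u))   # limited in monotone regions, zero at plateaus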

  18. Maximum permissible voltage of YBCO coated conductors

    Energy Technology Data Exchange (ETDEWEB)

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)

    2014-06-15

    Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous Ic degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage and resistance and temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for these samples, the whole length of the CCs used in the design of an SFCL can be determined.

  19. Maximum gravitational redshift of white dwarfs

    International Nuclear Information System (INIS)

    Shapiro, S.L.; Teukolsky, S.A.

    1976-01-01

    The stability of uniformly rotating, cold white dwarfs is examined in the framework of the Parametrized Post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, ζ2, ζ3, and ζ4), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km s⁻¹ for nonrotating carbon and helium dwarfs, but is lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km s⁻¹ for carbon stars (the neutronization limit) and to 893 km s⁻¹ for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores.

  20. Approaching the Shockley-Queisser limit: General assessment of the main limiting mechanisms in photovoltaic cells

    International Nuclear Information System (INIS)

    Vossier, Alexis; Gualdi, Federico; Dollet, Alain; Ares, Richard; Aimez, Vincent

    2015-01-01

    In principle, the upper efficiency limit of any solar cell technology can be determined using the detailed-balance limit formalism. However, “real” solar cells show efficiencies which are always below this theoretical value due to several limiting mechanisms. We study the ability of a solar cell architecture to approach its own theoretical limit, using a novel index introduced in this work, and the amplitude with which the different limiting mechanisms affect the cell efficiency is scrutinized as a function of the electronic gap and the illumination level to which the cell is submitted. The implications for future generations of solar cells aiming at an improved conversion of the solar spectrum are also addressed
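
    The detailed-balance ceiling referred to here can be computed in a few lines. The sketch below (a standard textbook construction, not the paper's model; the sun temperature, one-sun dilution factor, and Boltzmann approximation for cell emission are assumptions) balances absorbed solar photons against radiative recombination to get the efficiency at each band gap:

        import numpy as np
        from scipy.integrate import quad
        from scipy.optimize import minimize_scalar

        kB = 8.617333e-5              # Boltzmann constant, eV/K
        TS, TC = 5778.0, 300.0        # assumed sun and cell temperatures, K
        F = 2.165e-5                  # one-sun dilution factor, (R_sun / 1 AU)^2

        def solar_flux_above(eg):
            # blackbody photon flux above eg (common prefactor dropped; it cancels)
            return quad(lambda E: E**2 / np.expm1(E / (kB * TS)), eg, 20.0)[0]

        def cell_emission(eg, v):
            # radiative recombination flux in the Boltzmann approximation
            kT = kB * TC
            return kT * (eg**2 + 2 * eg * kT + 2 * kT**2) * np.exp((v - eg) / kT)

        def sq_efficiency(eg):
            p_in = F * quad(lambda E: E**3 / np.expm1(E / (kB * TS)), 1e-3, 20.0)[0]
            n_sun = F * solar_flux_above(eg)
            res = minimize_scalar(lambda v: -v * (n_sun - cell_emission(eg, v)),
                                  bounds=(0.0, eg), method="bounded")
            return -res.fun / p_in

        for eg in (1.1, 1.34, 1.7):
            print(eg, round(sq_efficiency(eg), 3))   # ~0.30-0.32 near 1.1-1.4 eV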

  1. Reduced oxygen at high altitude limits maximum size.

    Science.gov (United States)

    Peck, L S; Chapelle, G

    2003-11-07

    The trend towards large size in marine animals with latitude, and the existence of giant marine species in polar regions, have long been recognized but remained enigmatic until a recent study showed them to be an effect of increased oxygen availability in sea water at low temperature. The effect was apparent in data from 12 sites worldwide because of variations in water oxygen content controlled by differences in temperature and salinity. Another major physical factor affecting oxygen content in aquatic environments is reduced pressure at high altitude. Suitable data from high-altitude sites are very scarce. However, an exceptionally rich crustacean collection, which remains largely undescribed, was obtained by the British 1937 expedition from Lake Titicaca, on the border between Peru and Bolivia in the Andes at an altitude of 3809 m. We show that in Lake Titicaca the maximum length of amphipods is 2-4 times smaller than at other low-salinity sites (the Caspian Sea and Lake Baikal).

  2. Roothaan approach in the thermodynamic limit

    International Nuclear Information System (INIS)

    Gutierrez, G.; Plastino, A.

    1982-01-01

    A systematic method for the solution of the Hartree-Fock equations in the thermodynamic limit is presented. The approach is seen to be a natural extension of the one usually employed in the finite-fermion case, i.e., that developed by Roothaan. The new techniques developed here are applied, as an example, to neutron matter, employing the so-called V1 Bethe homework potential. The results obtained are, by far, superior to those that the ordinary plane-wave Hartree-Fock theory yields.

  3. Maximum allowable load on wheeled mobile manipulators

    International Nuclear Information System (INIS)

    Habibnejad Korayem, M.; Ghariblu, H.

    2003-01-01

    This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator during a given trajectory. The maximum allowable loads which can be achieved by a mobile manipulator during a given trajectory are limited by a number of factors; the dynamic properties of the mobile base and the mounted manipulator, their actuator limitations, and the additional constraints applied to resolve the redundancy are probably the most important factors. To resolve the extra DOFs introduced by the base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value and depends directly on the additional constraint functions applied to resolve the motion redundancy.

  4. Paired maximum inspiratory and expiratory plain chest radiographs for assessment of airflow limitation in chronic obstructive pulmonary disease

    Energy Technology Data Exchange (ETDEWEB)

    Kinoshita, Takashi; Kawayama, Tomotaka; Imamura, Youhei; Sakazaki, Yuki; Hirai, Ryo; Ishii, Hidenobu; Suetomo, Masashi; Matsunaga, Kazuko; Azuma, Koichi; Hoshino, Tomoaki (Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume, Japan); Fujimoto, Kiminori (Department of Radiology, Kurume University School of Medicine, Kurume, Japan)

    2015-04-15

    Highlights: • Computed tomography (CT) is often used in the diagnosis of chronic obstructive pulmonary disease. • CT is more expensive and involves higher radiation exposure. • Plain chest radiography is simpler and cheaper; it is useful for detecting pulmonary emphysema, but not airflow limitation. • Our study demonstrates that paired maximum inspiratory and expiratory plain chest radiography can detect severe airflow limitation. • We believe the technique is helpful in diagnosing patients with chronic obstructive pulmonary disease. -- Abstract: Background: The usefulness of paired maximum inspiratory and expiratory (I/E) plain chest radiography (pCR) for the diagnosis of chronic obstructive pulmonary disease (COPD) is still unclear. Objectives: We examined whether measurement of the I/E ratio using paired I/E pCR could be used to detect airflow limitation in patients with COPD. Methods: Eighty patients with COPD (GOLD stage I = 23, stage II = 32, stage III = 15, stage IV = 10) and 34 control subjects were enrolled. The I/E ratios of frontal and lateral lung areas, and the lung distance between the apex and base on pCR views, were analyzed quantitatively. Pulmonary function parameters were measured at the same time. Results: The I/E ratios for the frontal lung area (1.25 ± 0.01), the lateral lung area (1.29 ± 0.01), and the lung distance (1.18 ± 0.01) were significantly (p < 0.05) reduced in COPD patients compared with controls (1.31 ± 0.02, 1.38 ± 0.02, and 1.22 ± 0.01, respectively). The I/E ratios in frontal and lateral areas, and the lung distance, were significantly (p < 0.05) reduced in severe (GOLD stage III) and very severe (GOLD stage IV) COPD compared to control subjects, although the I/E ratios did not differ significantly between severe and very severe COPD. Moreover, the I/E ratios were significantly correlated with pulmonary function parameters. Conclusions: Measurement of I/E ratios on paired I/E pCR is simple and...

  6. Censoring approach to the detection limits in X-ray fluorescence analysis

    International Nuclear Information System (INIS)

    Pajek, M.; Kubala-Kukus, A.

    2004-01-01

    We demonstrate that the effect of detection limits in X-ray fluorescence analysis (XRF), which limits the determination of very low concentrations of trace elements and results in the appearance of so-called 'nondetects', can be accounted for using the statistical concept of censoring. More precisely, the results of such measurements can be viewed as left randomly censored data, which can be analyzed using the Kaplan-Meier method, correcting the data for the presence of nondetects. With this approach, the measured, detection-limit-censored concentrations can be interpreted in a nonparametric manner, including the correction for the nondetects, i.e. the measurements in which the concentrations were found to be below the actual detection limits. Moreover, using the Monte Carlo simulation technique we show that with the Kaplan-Meier approach the corrected mean concentrations for a population of samples can be estimated to within a few percent uncertainty with respect to the simulated, uncensored data. In practice this means that the final uncertainties of the estimated mean values are limited by the number of studied samples and not by the correction procedure itself. The discussed random left-censoring approach was applied to analyze XRF detection-limit-censored concentration measurements of trace elements in biomedical samples.
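
    The flipping trick that makes this work is short enough to sketch. Left-censored concentrations (nondetects known only to lie below their detection limits) become right-censored after x -> M - x for a constant M above all values, so the standard Kaplan-Meier product-limit estimator applies. The sketch below (illustrative; tie conventions and variance estimates are omitted) returns the estimated P(X < c) at each detected concentration c:

        import numpy as np

        def km_left_censored(values, detected):
            # values: measured concentrations, or detection limits for nondetects
            # detected: True for a measured value, False for a nondetect
            values = np.asarray(values, float)
            detected = np.asarray(detected, bool)
            M = values.max() + 1.0
            flipped = M - values                 # left-censoring -> right-censoring
            order = np.argsort(flipped)
            t, d = flipped[order], detected[order]
            n = len(t)
            at_risk = n - np.arange(n)
            surv = np.cumprod(np.where(d, (at_risk - 1) / at_risk, 1.0))
            conc = M - t
            return conc[d][::-1], surv[d][::-1]  # concentrations, estimated P(X < c)

        vals = np.array([0.5, 0.5, 0.8, 1.2, 0.3, 2.0, 0.5])   # value or detection limit
        det = np.array([False, True, True, True, False, True, False])
        print(km_left_censored(vals, det))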

  7. Chandrasekhar limit: an elementary approach based on classical physics and quantum theory

    Science.gov (United States)

    Pinochet, Jorge; Van Sint Jan, Michael

    2016-05-01

    In a brief article published in 1931, Subrahmanyan Chandrasekhar made public an important astronomical discovery. In his article, the then young Indian astrophysicist introduced what is now known as the Chandrasekhar limit. This limit establishes the maximum mass of a stellar remnant beyond which the repulsion force between electrons due to the exclusion principle can no longer stop the gravitational collapse. In the present article, we construct an elementary approximation to the Chandrasekhar limit, accessible to undergraduate science and engineering students. The article focuses especially on clarifying the origins of Chandrasekhar's discovery and the underlying physical concepts. Throughout the article, only basic algebra is used, as well as some general notions of classical physics and quantum theory.
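
    The scale of the limit follows from fundamental constants alone. A minimal numerical check (the standard formula with the Lane-Emden n = 3 constant ω ≈ 2.018; not taken from the article):

        import math

        hbar, c, G = 1.0546e-34, 2.9979e8, 6.6743e-11   # SI units
        m_H, M_sun = 1.6735e-27, 1.989e30
        mu_e = 2.0        # electrons per nucleon (He, C, O white dwarfs)
        omega = 2.018     # Lane-Emden n = 3 constant

        # M_Ch = omega * sqrt(3*pi)/2 * (hbar*c/G)^(3/2) / (mu_e * m_H)^2
        M_ch = omega * math.sqrt(3 * math.pi) / 2 \
               * (hbar * c / G) ** 1.5 / (mu_e * m_H) ** 2
        print(M_ch / M_sun)   # ~1.4 solar masses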

  8. Quality assurance of nuclear analytical techniques based on Bayesian characteristic limits

    International Nuclear Information System (INIS)

    Michel, R.

    2000-01-01

    Based on Bayesian statistics, characteristic limits such as decision threshold, detection limit and confidence limits can be calculated taking into account all sources of experimental uncertainties. This approach separates the complete evaluation of a measurement according to the ISO Guide to the Expression of Uncertainty in Measurement from the determination of the characteristic limits. Using the principle of maximum entropy the characteristic limits are determined from the complete standard uncertainty of the measurand. (author)

  9. Maximum-likelihood estimation of recent shared ancestry (ERSA).

    Science.gov (United States)

    Huff, Chad D; Witherspoon, David J; Simonson, Tatum S; Xing, Jinchuan; Watkins, W Scott; Zhang, Yuhua; Tuohy, Therese M; Neklason, Deborah W; Burt, Randall W; Guthery, Stephen L; Woodward, Scott R; Jorde, Lynn B

    2011-05-01

    Accurate estimation of recent shared ancestry is important for genetics, evolution, medicine, conservation biology, and forensics. Established methods estimate kinship accurately for first-degree through third-degree relatives. We demonstrate that chromosomal segments shared by two individuals due to identity by descent (IBD) provide much additional information about shared ancestry. We developed a maximum-likelihood method for the estimation of recent shared ancestry (ERSA) from the number and lengths of IBD segments derived from high-density SNP or whole-genome sequence data. We used ERSA to estimate relationships from SNP genotypes in 169 individuals from three large, well-defined human pedigrees. ERSA is accurate to within one degree of relationship for 97% of first-degree through fifth-degree relatives and 80% of sixth-degree and seventh-degree relatives. We demonstrate that ERSA's statistical power approaches the maximum theoretical limit imposed by the fact that distant relatives frequently share no DNA through a common ancestor. ERSA greatly expands the range of relationships that can be estimated from genetic data and is implemented in a freely available software package.

  10. Estimating distribution parameters of annual maximum streamflows in Johor, Malaysia using TL-moments approach

    Science.gov (United States)

    Mat Jan, Nur Amalina; Shabri, Ani

    2017-01-01

    The TL-moments approach has been used in an analysis to identify the best-fitting distributions to represent the annual series of maximum streamflow data over seven stations in Johor, Malaysia. TL-moments with different trimming values are used to estimate the parameters of the selected distributions, namely the three-parameter lognormal (LN3) and Pearson Type III (P3) distributions. The main objective of this study is to derive the TL-moments (t1, 0), t1 = 1, 2, 3, 4, methods for the LN3 and P3 distributions. The performance of TL-moments (t1, 0), t1 = 1, 2, 3, 4, was compared with L-moments through Monte Carlo simulation and streamflow data over a station in Johor, Malaysia. The absolute error is used to test the influence of the TL-moments methods on the estimated probability distribution functions. For the cases in this study, the results show that TL-moments with the four smallest values trimmed from the conceptual sample (TL-moments (4, 0)) of the LN3 distribution was the most appropriate in most of the stations of the annual maximum streamflow series in Johor, Malaysia.

  11. Vehicle Maximum Weight Limitation Based on Intelligent Weight Sensor

    Science.gov (United States)

    Raihan, W.; Tessar, R. M.; Ernest, C. O. S.; E Byan, W. R.; Winda, A.

    2017-03-01

    Vehicle weight is an important factor to be controlled for transportation safety. A weight limitation system is proposed to ensure that the vehicle weight is always below its design limit before the vehicle is used by the driver. The proposed system is divided into two subsystems, namely a vehicle weight confirmation system and a weight warning system. In the weight confirmation system, the weight sensor operates once, right after the ignition switch is turned on. When the weight is under the limit, the starter can be switched on to start the engine; otherwise it is locked. The second system operates after all doors are confirmed closed; once the doors are closed, the weight warning system checks the weight again while the engine is running. Both systems, the vehicle weight confirmation system and the weight warning system, achieved 100% accuracy, showing that the proposed vehicle weight limitation system operates well.

  12. A Maximum Entropy Approach to Assess Debonding in Honeycomb aluminum Plates

    Directory of Open Access Journals (Sweden)

    Viviana Meruane

    2014-05-01

    Honeycomb sandwich structures are used in a wide variety of applications. Nevertheless, due to manufacturing defects or impact loads, these structures can be subject to imperfect bonding or debonding between the skin and the honeycomb core. The presence of debonding reduces the bending stiffness of the composite panel, which causes detectable changes in its vibration characteristics. This article presents a new supervised learning algorithm to identify debonded regions in aluminum honeycomb panels. The algorithm uses a linear approximation method handled by a statistical inference model based on the maximum-entropy principle. The merits of this new approach are twofold: training is avoided, and data are processed in a period of time comparable to that of neural networks. The honeycomb panels are modeled with finite elements using a simplified three-layer shell model. The adhesive layer between the skin and core is modeled using linear springs, the rigidities of which are reduced in debonded sectors. The algorithm is validated using experimental data of an aluminum honeycomb panel under different damage scenarios.

  13. Influence of Dynamic Neuromuscular Stabilization Approach on Maximum Kayak Paddling Force

    Directory of Open Access Journals (Sweden)

    Davidek Pavel

    2018-03-01

    The purpose of this study was to examine the effect of Dynamic Neuromuscular Stabilization (DNS) exercise on maximum paddling force (PF) and self-reported pain perception in the shoulder girdle area in flatwater kayakers. Twenty male flatwater kayakers from a local club (age = 21.9 ± 2.4 years, body height = 185.1 ± 7.9 cm, body mass = 83.9 ± 9.1 kg) were randomly assigned to the intervention or control group. During the 6-week study, subjects from both groups performed standard off-season training. Additionally, the intervention group engaged in a DNS-based core stabilization exercise program (quadruped exercise, side sitting exercise, sitting exercise and squat exercise) after each standard training session. Using a kayak ergometer, the maximum PF stroke was measured four times during the six weeks. All subjects completed the Disabilities of the Arm, Shoulder and Hand (DASH) questionnaire before and after the 6-week interval to evaluate subjective pain perception in the shoulder girdle area. Initially, no significant differences in maximum PF or the DASH questionnaire were identified between the two groups. Repeated measures analysis of variance indicated that the experimental group improved significantly compared to the control group in maximum PF (p = .004; Cohen's d = .85), but not in DASH questionnaire score (p = .731), during the study. Integration of DNS with traditional flatwater kayak training may significantly increase maximum PF, but may not affect pain perception to the same extent.

  14. Spectroscopy of 211Rn approaching the valence limit

    International Nuclear Information System (INIS)

    Davidson, P.M.; Dracoulis, G.D.; Byrne, A.P.; Kibedi, T.; Fabricus, B.; Baxter, A.M.; Stuchbery, A.E.; Poletti, A.R.; Schiffer, K.J.

    1993-01-01

    High-spin states in 211Rn were populated using the reaction 198Pt(18O, 5n) at 96 MeV. Their decay was studied using gamma-ray and electron spectroscopy. The known level scheme is extended up to a spin greater than 69/2, and many non-yrast states are added. Semi-empirical shell-model calculations and the properties of related states in 210Rn and 212Rn are used to assign configurations to some of the non-yrast states. The properties of the observed high-spin states are compared to the predictions of the multi-particle octupole-coupling model and the semi-empirical shell model. The maximum reasonable spin available from the valence particles and holes is 77/2, and states are observed to near this limit. (orig.)

  15. Finite mixture model: A maximum likelihood estimation approach on time series data

    Science.gov (United States)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation because of its asymptotic properties. In addition, the estimator is consistent as the sample size increases to infinity and is asymptotically unbiased. Moreover, as the sample size increases, the parameter estimates obtained by maximum likelihood have the smallest variance compared with other statistical methods. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. Results show a negative relationship between rubber price and exchange rate for all selected countries.
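
    For concreteness, here is a self-contained sketch of the EM algorithm, the standard way the maximum-likelihood fit of a two-component mixture is computed in practice; the components are Gaussian and the data synthetic, not the rubber-price/exchange-rate series of the paper.

```python
import numpy as np

def em_two_gaussians(x, iters=200):
    """EM iterations for the maximum-likelihood fit of a 2-component Gaussian mixture."""
    mu = np.array([x.min(), x.max()], dtype=float)   # crude initialisation
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        dens = np.array([
            pi[k] / (sigma[k] * np.sqrt(2 * np.pi))
            * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
            for k in range(2)
        ])
        resp = dens / dens.sum(axis=0)
        # M-step: re-estimate mixing weights, means and standard deviations
        n_k = resp.sum(axis=1)
        pi = n_k / len(x)
        mu = (resp * x).sum(axis=1) / n_k
        sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / n_k)
    return pi, mu, sigma

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])
print(em_two_gaussians(x))
```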

  16. correlation between maximum dry density and cohesion

    African Journals Online (AJOL)

    HOD

    The correlation expresses maximum dry density in terms of the plastic limit and the liquid limit. Researchers [6, 7] estimate compaction parameters from such relations. Aside from the correlations existing between compaction parameters and other physical quantities, there are some other correlations that have been investigated by other researchers. The well-known.

  17. Maximum entropy approach to H-theory: Statistical mechanics of hierarchical systems.

    Science.gov (United States)

    Vasconcelos, Giovani L; Salazar, Domingos S P; Macêdo, A M S

    2018-02-01

    A formalism, called H-theory, is applied to the problem of statistical equilibrium of a hierarchical complex system with multiple time and length scales. In this approach, the system is formally treated as being composed of a small subsystem-representing the region where the measurements are made-in contact with a set of "nested heat reservoirs" corresponding to the hierarchical structure of the system, where the temperatures of the reservoirs are allowed to fluctuate owing to the complex interactions between degrees of freedom at different scales. The probability distribution function (pdf) of the temperature of the reservoir at a given scale, conditioned on the temperature of the reservoir at the next largest scale in the hierarchy, is determined from a maximum entropy principle subject to appropriate constraints that describe the thermal equilibrium properties of the system. The marginal temperature distribution of the innermost reservoir is obtained by integrating over the conditional distributions of all larger scales, and the resulting pdf is written in analytical form in terms of certain special transcendental functions, known as the Fox H functions. The distribution of states of the small subsystem is then computed by averaging the quasiequilibrium Boltzmann distribution over the temperature of the innermost reservoir. This distribution can also be written in terms of H functions. The general family of distributions reported here recovers, as particular cases, the stationary distributions recently obtained by Macêdo et al. [Phys. Rev. E 95, 032315 (2017)10.1103/PhysRevE.95.032315] from a stochastic dynamical approach to the problem.
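
    The compounding step at the end of that construction is easy to illustrate numerically for the simplest special case, a single reservoir with Gamma-distributed inverse temperature (the paper's general, nested-reservoir result involves Fox H functions; in this special case the marginal reduces to a power-law, q-exponential form). The parameters below are illustrative.

```python
import numpy as np
from scipy import integrate, stats

# Marginal distribution of a variable in contact with one reservoir whose
# inverse temperature beta fluctuates with a Gamma density of mean 1.
a, scale = 3.0, 1.0 / 3.0

def p_marginal(x):
    # average the quasi-equilibrium Boltzmann weight beta*exp(-beta*x) over beta
    f = lambda beta: beta * np.exp(-beta * x) * stats.gamma.pdf(beta, a, scale=scale)
    return integrate.quad(f, 0, np.inf)[0]

xs = np.linspace(0.0, 5.0, 6)
print([round(p_marginal(x), 4) for x in xs])
# closed form for this special case: a q-exponential (power-law) tail
print([round(a * scale / (1 + scale * x) ** (a + 1), 4) for x in xs])
```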

  18. Censoring: a new approach for detection limits in total-reflection X-ray fluorescence

    International Nuclear Information System (INIS)

    Pajek, M.; Kubala-Kukus, A.; Braziewicz, J.

    2004-01-01

    It is shown that the detection limits in total-reflection X-ray fluorescence (TXRF), which restrict quantification of very low concentrations of trace elements in the samples, can be accounted for using the statistical concept of censoring. We demonstrate that the incomplete TXRF measurements containing the so-called 'nondetects', i.e. the non-measured concentrations falling below the detection limits and represented by the estimated detection limit values, can be viewed as left random-censored data, which can be further analyzed using the Kaplan-Meier (KM) method correcting for nondetects. Within this approach, which uses the Kaplan-Meier product-limit estimator to obtain the cumulative distribution function corrected for the nondetects, the mean value and median of the detection-limit-censored concentrations can be estimated in a non-parametric way. The Monte Carlo simulations performed show that the Kaplan-Meier approach yields highly accurate estimates for the mean and median concentrations, within a few percent of the simulated, uncensored data. This means that the uncertainties of the KM-estimated mean value and median are limited in fact only by the number of studied samples and not by the applied correction procedure for nondetects itself. On the other hand, it is observed that, in cases where the concentration of a given element is not measured in all the samples, simple approaches to estimating a mean concentration value from the data yield erroneous, systematically biased results. The discussed random-left-censoring approach was applied to analyze the TXRF detection-limit-censored concentration measurements of trace elements in biomedical samples. We emphasize that the Kaplan-Meier approach allows one to estimate mean concentrations substantially below the mean level of detection limits. Consequently, this approach provides new access to lowering the effective detection limits of the TXRF method, which is of prime interest for
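
    The flipping trick behind this left-censoring analysis can be sketched in a few lines: left-censored concentrations become right-censored "survival times" under x -> M - x, the ordinary Kaplan-Meier product-limit estimator is applied, and the mean is back-transformed. This is a simplified sketch, assuming a single detection limit and lognormal data and ignoring tie corrections; it is not the paper's code.

```python
import numpy as np

def km_left_censored(values, detected):
    """Kaplan-Meier estimate of the mean for left-censored data.

    values   : measured concentration, or the detection limit where not detected
    detected : True for a real measurement, False for a nondetect
    """
    M = values.max() + 1.0
    t = M - np.asarray(values, float)          # flipped "survival times"
    order = np.argsort(t)
    t, d = t[order], np.asarray(detected)[order]

    n = len(t)
    at_risk = n - np.arange(n)                 # subjects still "at risk"
    surv = np.cumprod(np.where(d, (at_risk - 1) / at_risk, 1.0))

    # mean of the flipped variable = area under the survival curve
    times = np.concatenate([[0.0], t])
    mean_flipped = np.sum(np.diff(times) * np.concatenate([[1.0], surv[:-1]]))
    return M - mean_flipped                    # back-transform to a mean concentration

rng = np.random.default_rng(2)
true_c = rng.lognormal(0.0, 1.0, 200)
dl = 0.8                                       # single detection limit, for simplicity
detected = true_c >= dl
obs = np.where(detected, true_c, dl)           # nondetects carry the detection limit value
print(km_left_censored(obs, detected), true_c.mean())
```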

  19. Theoretical assessment of the maximum obtainable power in wireless power transfer constrained by human body exposure limits in a typical room scenario.

    Science.gov (United States)

    Chen, Xi Lin; De Santis, Valerio; Umenei, Aghuinyue Esai

    2014-07-07

    In this study, the maximum received power obtainable through wireless power transfer (WPT) by a small receiver (Rx) coil from a relatively large transmitter (Tx) coil is numerically estimated in the frequency range from 100 kHz to 10 MHz based on human body exposure limits. Analytical calculations were first conducted to determine the worst-case coupling between a homogeneous cylindrical phantom with a radius of 0.65 m and a Tx coil positioned 0.1 m away with the radius ranging from 0.25 to 2.5 m. Subsequently, three high-resolution anatomical models were employed to compute the peak induced field intensities with respect to various Tx coil locations and dimensions. Based on the computational results, scaling factors which correlate the cylindrical phantom and anatomical model results were derived. Next, the optimal operating frequency, at which the highest transmitter source power can be utilized without exceeding the exposure limits, is found to be around 2 MHz. Finally, a formulation is proposed to estimate the maximum obtainable power of WPT in a typical room scenario while adhering to the human body exposure compliance mandates.

  20. Theoretical assessment of the maximum obtainable power in wireless power transfer constrained by human body exposure limits in a typical room scenario

    International Nuclear Information System (INIS)

    Chen, Xi Lin; De Santis, Valerio; Umenei, Aghuinyue Esai

    2014-01-01

    In this study, the maximum received power obtainable through wireless power transfer (WPT) by a small receiver (Rx) coil from a relatively large transmitter (Tx) coil is numerically estimated in the frequency range from 100 kHz to 10 MHz based on human body exposure limits. Analytical calculations were first conducted to determine the worst-case coupling between a homogeneous cylindrical phantom with a radius of 0.65 m and a Tx coil positioned 0.1 m away with the radius ranging from 0.25 to 2.5 m. Subsequently, three high-resolution anatomical models were employed to compute the peak induced field intensities with respect to various Tx coil locations and dimensions. Based on the computational results, scaling factors which correlate the cylindrical phantom and anatomical model results were derived. Next, the optimal operating frequency, at which the highest transmitter source power can be utilized without exceeding the exposure limits, is found to be around 2 MHz. Finally, a formulation is proposed to estimate the maximum obtainable power of WPT in a typical room scenario while adhering to the human body exposure compliance mandates. (paper)

  1. Spectroscopy of 211Rn approaching the valence limit

    International Nuclear Information System (INIS)

    Davidson, P.M.; Dracoulis, G.D.; Kibedi, T.; Fabricius, B.; Baxter, A.M.; Stuchbery, A.E.; Poletti, A.R.; Schiffer, K.J.

    1993-02-01

    High-spin states in ²¹¹Rn were populated using the reaction ¹⁹⁸Pt(¹⁸O,5n) at 96 MeV. The decay was studied using γ-ray and electron spectroscopy. The known level scheme is extended up to a spin of greater than 69/2 and many non-yrast states are added. Semi-empirical shell-model calculations and the properties of related states in ²¹⁰Rn and ²¹²Rn are used to assign configurations to some of the non-yrast states. The properties of the high-spin states observed are compared to the predictions of the Multi-Particle Octupole Coupling model and the semi-empirical shell model. The maximum reasonable spin available from the valence particles and holes is 77/2 and states are observed to near this limit. 12 refs., 4 tabs., 8 figs

  2. Upper limit for Poisson variable incorporating systematic uncertainties by Bayesian approach

    International Nuclear Information System (INIS)

    Zhu, Yongsheng

    2007-01-01

    To calculate the upper limit for a Poisson observable at a given confidence level, with inclusion of systematic uncertainties in the background expectation and signal efficiency, formulations have been established along the lines of the Bayesian approach. A FORTRAN program, BPULE, has been developed to implement the upper limit calculation.
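
    BPULE itself is not reproduced here, but the flat-prior case without the systematic uncertainties has a compact form that shows the structure of the calculation: with observed count n and known background b, the posterior for the signal s is proportional to exp(-(s+b)) (s+b)^n, and the limit follows from incomplete gamma functions. A minimal sketch:

```python
import numpy as np
from scipy.special import gammainc      # regularized lower incomplete gamma P(a, x)
from scipy.optimize import brentq

def bayes_upper_limit(n_obs, b, cl=0.90):
    """Bayesian upper limit on a Poisson signal s >= 0 with known background b,
    flat prior, no systematics (the program in the record also folds those in)."""
    a = n_obs + 1
    post_cdf = lambda s: (gammainc(a, s + b) - gammainc(a, b)) / (1.0 - gammainc(a, b))
    return brentq(lambda s: post_cdf(s) - cl, 0.0, 100.0 + 10.0 * n_obs)

print(bayes_upper_limit(n_obs=3, b=1.2, cl=0.90))
print(bayes_upper_limit(n_obs=0, b=0.0, cl=0.90))   # classic 2.303 for zero events
```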

  3. Modelling information flow along the human connectome using maximum flow.

    Science.gov (United States)

    Lyoo, Youngwook; Kim, Jieun E; Yoon, Sujung

    2018-01-01

    The human connectome is a complex network that transmits information between interlinked brain regions. Using graph theory, previously well-known network measures of integration between brain regions have been constructed under the key assumption that information flows strictly along the shortest paths possible between two nodes. However, it is now apparent that information does flow through non-shortest paths in many real-world networks such as cellular networks, social networks, and the internet. In the current hypothesis, we present a novel framework using the maximum flow to quantify information flow along all possible paths within the brain, so as to implement an analogy to network traffic. We hypothesize that the connection strengths of brain networks represent a limit on the amount of information that can flow through the connections per unit of time. This allows us to compute the maximum amount of information flow between two brain regions along all possible paths. Using this novel framework of maximum flow, previous network topological measures are expanded to account for information flow through non-shortest paths. The most important advantage of the current approach using maximum flow is that it can integrate the weighted connectivity data in a way that better reflects the real information flow of the brain network. The current framework and its concept regarding maximum flow provides insight on how network structure shapes information flow in contrast to graph theory, and suggests future applications such as investigating structural and functional connectomes at a neuronal level. Copyright © 2017 Elsevier Ltd. All rights reserved.
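
    A toy version of the proposed measure can be run with NetworkX's max-flow routine on a small weighted graph; the node names, capacities and bidirectional-edge assumption below are illustrative, not from the paper.

```python
import networkx as nx

# Edge weights act as capacities: the assumed limit on information flow per unit time.
G = nx.DiGraph()
edges = [("V1", "V2", 3.0), ("V1", "V3", 1.0), ("V2", "V4", 2.0),
         ("V3", "V4", 2.0), ("V2", "V3", 1.0)]
for u, v, c in edges:
    G.add_edge(u, v, capacity=c)
    G.add_edge(v, u, capacity=c)            # treat connections as bidirectional

flow_value, flow_dict = nx.maximum_flow(G, "V1", "V4")
print(flow_value)                            # 4.0: flow also uses the V1-V2-V3-V4 route
print(nx.shortest_path(G, "V1", "V4"))       # the single path classical measures rely on
```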

  4. A New Approach to Identify Optimal Properties of Shunting Elements for Maximum Damping of Structural Vibration Using Piezoelectric Patches

    Science.gov (United States)

    Park, Junhong; Palumbo, Daniel L.

    2004-01-01

    The use of shunted piezoelectric patches in reducing vibration and sound radiation of structures has several advantages over passive viscoelastic elements, e.g., lower weight with increased controllability. The performance of the piezoelectric patches depends on the shunting electronics that are designed to dissipate vibration energy through a resistive element. In past efforts, most of the proposed tuning methods were based on modal properties of the structure. In these cases, the tuning applies only to one mode of interest, and maximum tuning is limited to invariant points when based on den Hartog's invariant points concept. In this study, a design method based on the wave propagation approach is proposed. Optimal tuning is investigated depending on the dynamic and geometric properties that include effects from boundary conditions and position of the shunted piezoelectric patch relative to the structure. Active filters are proposed as shunting electronics to implement the tuning criteria. The developed tuning methods resulted in superior capabilities in minimizing structural vibration and noise radiation compared to other tuning methods. The tuned circuits are relatively insensitive to changes in modal properties and boundary conditions, and can be applied to frequency ranges in which multiple modes have effects.

  5. Quantum cryptography approaching the classical limit.

    Science.gov (United States)

    Weedbrook, Christian; Pirandola, Stefano; Lloyd, Seth; Ralph, Timothy C

    2010-09-10

    We consider the security of continuous-variable quantum cryptography as we approach the classical limit, i.e., when the unknown preparation noise at the sender's station becomes significantly noisy or thermal (even by as much as 10⁴ times greater than the variance of the vacuum mode). We show that, provided the channel transmission losses do not exceed 50%, the security of quantum cryptography is not dependent on the channel transmission, and is therefore incredibly robust against significant amounts of excess preparation noise. We extend these results to consider for the first time quantum cryptography at wavelengths considerably longer than optical and find that regions of security still exist all the way down to the microwave.

  6. Exercise-induced maximum metabolic rate scaled to body mass by ...

    African Journals Online (AJOL)

    Exercise-induced maximum metabolic rate scaled to body mass by the fractal ... rate scaling is that exercise-induced maximum aerobic metabolic rate (MMR) is ... muscle stress limitation, and maximized oxygen delivery and metabolic rates.

  7. Time Reversal Migration for Passive Sources Using a Maximum Variance Imaging Condition

    KAUST Repository

    Wang, H.; Alkhalifah, Tariq Ali

    2017-01-01

    The conventional time-reversal imaging approach for micro-seismic or passive source location is based on focusing the back-propagated wavefields from each recorded trace in a source image. It suffers from strong background noise and limited acquisition aperture, which may create unexpected artifacts and cause errors in the source location. To overcome such a problem, we propose a new imaging condition for microseismic imaging, which is based on comparing the amplitude variance in certain windows, and use it to suppress the artifacts as well as find the right location for passive sources. Instead of simply searching for the maximum energy point in the back-propagated wavefield, we calculate the amplitude variances over a window moving along both the space and time axes to create a highly resolved passive event image. The variance operation has negligible cost compared with the forward/backward modeling operations, which reveals that the maximum variance imaging condition is efficient and effective. We test our approach numerically on a simple three-layer model and on a piece of the Marmousi model as well, both of which have shown reasonably good results.

  8. Time Reversal Migration for Passive Sources Using a Maximum Variance Imaging Condition

    KAUST Repository

    Wang, H.

    2017-05-26

    The conventional time-reversal imaging approach for micro-seismic or passive source location is based on focusing the back-propagated wavefields from each recorded trace in a source image. It suffers from strong background noise and limited acquisition aperture, which may create unexpected artifacts and cause errors in the source location. To overcome such a problem, we propose a new imaging condition for microseismic imaging, which is based on comparing the amplitude variance in certain windows, and use it to suppress the artifacts as well as find the right location for passive sources. Instead of simply searching for the maximum energy point in the back-propagated wavefield, we calculate the amplitude variances over a window moving along both the space and time axes to create a highly resolved passive event image. The variance operation has negligible cost compared with the forward/backward modeling operations, which reveals that the maximum variance imaging condition is efficient and effective. We test our approach numerically on a simple three-layer model and on a piece of the Marmousi model as well, both of which have shown reasonably good results.
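
    The imaging condition itself is inexpensive to prototype. A sketch using a moving-window variance (via uniform filters, with var = E[u²] - E[u]²) on a synthetic back-propagated wavefield; the geometry and amplitudes are our own toy choices, not the paper's models.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(u, window):
    """Windowed amplitude variance over space and time: E[u^2] - E[u]^2."""
    mean = uniform_filter(u, size=window)
    return uniform_filter(u ** 2, size=window) - mean ** 2

# toy back-propagated wavefield u(t, x, z): background noise plus a focused event
rng = np.random.default_rng(3)
u = 0.1 * rng.normal(size=(64, 64, 64))
u[30:34, 40:44, 20:24] += 2.0 * rng.normal(size=(4, 4, 4))   # strong focused amplitudes

img = local_variance(u, window=(5, 5, 5)).max(axis=0)        # image: max over time
print(np.unravel_index(img.argmax(), img.shape))             # near (40-43, 20-23)
```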

  9. Maximum entropy principle and hydrodynamic models in statistical mechanics

    International Nuclear Information System (INIS)

    Trovato, M.; Reggiani, L.

    2012-01-01

    This review presents the state of the art of the maximum entropy principle (MEP) in its classical and quantum (QMEP) formulation. Within the classical MEP we overview a general theory able to provide, in a dynamical context, the macroscopic relevant variables for carrier transport in the presence of electric fields of arbitrary strength. For the macroscopic variables the linearized maximum entropy approach is developed including full-band effects within a total energy scheme. Under spatially homogeneous conditions, we construct a closed set of hydrodynamic equations for the small-signal (dynamic) response of the macroscopic variables. The coupling between the driving field and the energy dissipation is analyzed quantitatively by using an arbitrary number of moments of the distribution function. Analogously, the theoretical approach is applied to many one-dimensional n⁺nn⁺ submicron Si structures by using different band structure models, different doping profiles, different applied biases and is validated by comparing numerical calculations with ensemble Monte Carlo simulations and with available experimental data. Within the quantum MEP we introduce a quantum entropy functional of the reduced density matrix; the principle of quantum maximum entropy is then asserted as a fundamental principle of quantum statistical mechanics. Accordingly, we have developed a comprehensive theoretical formalism to construct rigorously a closed quantum hydrodynamic transport within a Wigner function approach. The theory is formulated both in thermodynamic equilibrium and nonequilibrium conditions, and the quantum contributions are obtained by only assuming that the Lagrange multipliers can be expanded in powers of ħ², ħ being the reduced Planck constant. In particular, by using an arbitrary number of moments, we prove that: i) on a macroscopic scale all nonlocal effects, compatible with the uncertainty principle, are imputable to high-order spatial derivatives both of the

  10. 5 CFR 838.711 - Maximum former spouse survivor annuity.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Maximum former spouse survivor annuity... Orders Awarding Former Spouse Survivor Annuities Limitations on Survivor Annuities § 838.711 Maximum former spouse survivor annuity. (a) Under CSRS, payments under a court order may not exceed the amount...

  11. Treatment for spasmodic dysphonia: limitations of current approaches

    Science.gov (United States)

    Ludlow, Christy L.

    2009-01-01

    Purpose of review Although botulinum toxin injection is the gold standard for treatment of spasmodic dysphonia, surgical approaches aimed at providing long-term symptom control have been advancing over recent years. Recent findings While surgical approaches provide greater long-term benefits to symptom control, they also increase the initial period of side effects of breathiness and swallowing difficulties. However, recent analyses of quality-of-life questionnaires in patients undergoing regular injections of botulinum toxin demonstrate that a large proportion of patients have limited relief for relatively short periods due to early breathiness and loss of benefit before reinjection. Summary Most medical and surgical approaches to the treatment of spasmodic dysphonia have been aimed at denervation of the laryngeal muscles to block symptom expression in the voice, and have both adverse effects as well as treatment benefits. Research is needed to identify the central neuropathophysiology responsible for the laryngeal muscle spasms in order to target treatment towards the central neurological abnormality responsible for producing symptoms. PMID:19337127

  12. Maximum entropy deconvolution of low count nuclear medicine images

    International Nuclear Information System (INIS)

    McGrath, D.M.

    1998-12-01

    Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. Higher contrast results were
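
    Of the methods compared in this thesis, the Poisson maximum-likelihood expectation-maximisation step has a particularly compact form, the Richardson-Lucy iteration. The sketch below implements that comparison technique on synthetic low-count data; it is not the thesis's maximum-entropy code, and the PSF and counts are invented.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iters=50):
    """Poisson ML-EM (Richardson-Lucy) deconvolution."""
    psf_m = psf[::-1, ::-1]                     # mirrored PSF acts as the adjoint
    est = np.full_like(observed, observed.mean())
    for _ in range(iters):
        blurred = fftconvolve(est, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        est *= fftconvolve(ratio, psf_m, mode="same")
    return est

# toy scintigraphic frame: two point sources, Gaussian PSF, Poisson counts
x = np.zeros((64, 64)); x[20, 20] = 400; x[25, 40] = 250
g = np.arange(-7, 8)
psf = np.exp(-(g[:, None] ** 2 + g[None, :] ** 2) / 8.0)
psf /= psf.sum()
y = np.random.default_rng(4).poisson(fftconvolve(x, psf, mode="same").clip(0))

est = richardson_lucy(y.astype(float), psf)
print(np.unravel_index(est.argmax(), est.shape))  # close to (20, 20)
```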

  13. Maximum entropy approach to statistical inference for an ocean acoustic waveguide.

    Science.gov (United States)

    Knobles, D P; Sagers, J D; Koch, R A

    2012-02-01

    A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America
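
    The key computational step, fixing the sensitivity factor β so that the canonical distribution reproduces the specified expected error, can be sketched directly; the error values and target expectation below are invented, not the New Jersey shelf data.

```python
import numpy as np
from scipy.optimize import brentq

def canonical_beta(errors, e_expected):
    """Find beta so that p_j ~ exp(-beta * E_j) gives <E> = e_expected."""
    def mean_error(beta):
        w = np.exp(-beta * (errors - errors.min()))   # shift exponent for stability
        return np.sum(errors * w) / np.sum(w)
    return brentq(lambda b: mean_error(b) - e_expected, 0.0, 1e4)

# toy: error function evaluated on a grid of candidate seabed parameter values
rng = np.random.default_rng(5)
E = rng.uniform(0.5, 2.0, size=1000)            # hypothetical error-function values
beta = canonical_beta(E, e_expected=0.7)        # constraint from sparse data samples
p = np.exp(-beta * (E - E.min())); p /= p.sum()
print(beta, np.sum(E * p))                      # the expectation matches the constraint
```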

  14. maximum neutron flux at thermal nuclear reactors

    International Nuclear Information System (INIS)

    Strugar, P.

    1968-10-01

    Since actual research reactors are technically complicated and expensive facilities, it is important to achieve savings by appropriate reactor lattice configurations. There are a number of papers, and practical examples of reactors with central reflector, dealing with spatial distributions of fuel elements which would result in higher neutron flux. A common disadvantage of all the solutions is that the best solution is chosen starting from anticipated spatial distributions of fuel elements. The weakness of these approaches is the lack of defined optimization criteria. The direct approach is defined as follows: determine the spatial distribution of fuel concentration starting from the condition of maximum neutron flux while fulfilling the thermal constraints. Thus determining the maximum neutron flux amounts to solving a variational problem which is beyond the possibilities of classical variational calculus. This variational problem has been successfully solved by applying the maximum principle of Pontrjagin. The optimum distribution of fuel concentration was obtained in explicit analytical form. Thus, the spatial distribution of the neutron flux and the critical dimensions of a quite complex reactor system are calculated in a relatively simple way. In addition to the fact that the results are innovative, this approach is interesting because of the optimization procedure itself [sr

  15. Physiology-based modelling approaches to characterize fish habitat suitability: Their usefulness and limitations

    Science.gov (United States)

    Teal, Lorna R.; Marras, Stefano; Peck, Myron A.; Domenici, Paolo

    2018-02-01

    Models are useful tools for predicting the impact of global change on species distribution and abundance. As ectotherms, fish are being challenged to adapt or track changes in their environment, either in time through a phenological shift or in space by a biogeographic shift. Past modelling efforts have largely been based on correlative Species Distribution Models, which use known occurrences of species across landscapes of interest to define sets of conditions under which species are likely to maintain populations. The practical advantages of this correlative approach are its simplicity and the flexibility in terms of data requirements. However, effective conservation management requires models that make projections beyond the range of available data. One way to deal with such an extrapolation is to use a mechanistic approach based on physiological processes underlying climate change effects on organisms. Here we illustrate two approaches for developing physiology-based models to characterize fish habitat suitability. (i) Aerobic Scope Models (ASM) are based on the relationship between environmental factors and aerobic scope (defined as the difference between maximum and standard (basal) metabolism). This approach is based on experimental data collected by using a number of treatments that allow a function to be derived to predict aerobic metabolic scope from the stressor/environmental factor(s). This function is then integrated with environmental (oceanographic) data of current and future scenarios. For any given species, this approach allows habitat suitability maps to be generated at various spatiotemporal scales. The strength of the ASM approach relies on the estimate of relative performance when comparing, for example, different locations or different species. (ii) Dynamic Energy Budget (DEB) models are based on first principles including the idea that metabolism is organised in the same way within all animals. The (standard) DEB model aims to describe

  16. Fundamental limitations on V/STOL terminal guidance due to aircraft characteristics

    Science.gov (United States)

    Wolkovitch, J.; Lamont, C. W.; Lochtie, D. W.

    1971-01-01

    A review is given of limitations on approach flight paths of V/STOL aircraft, including limits on descent angle due to maximum drag/lift ratio. A method of calculating maximum drag/lift ratio of tilt-wing and deflected slipstream aircraft is presented. Derivatives and transfer functions for the CL-84 tilt-wing and X-22A tilt-duct aircraft are presented. For the unaugmented CL-84 in steep descents the transfer function relating descent angle to thrust contains a right-half plane zero. Using optimal control theory, it is shown that this zero causes a serious degradation in the accuracy with which steep flight paths can be followed in the presence of gusts.

  17. The MCE (Maximum Credible Earthquake) - an approach to reduction of seismic risk

    International Nuclear Information System (INIS)

    Asmis, G.J.K.; Atchison, R.J.

    1979-01-01

    It is the responsibility of the Regulatory Body (in Canada, the AECB) to ensure that radiological risks resulting from the effects of earthquakes on nuclear facilities do not exceed acceptable levels. In simplified numerical terms this means that the frequency of an unacceptable radiation dose must be kept below 10⁻⁶ per annum. Unfortunately, seismic events fall into the class of external events which are not well defined at these low frequency levels. Thus, design earthquakes have been chosen at the 10⁻³-10⁻⁴ frequency level, a level commensurate with the limits of statistical data. There exists, therefore, a need to define an additional level of earthquake. A seismic design explicitly and implicitly recognizes three levels of earthquake loading: one comfortably below yield, one at or about yield, and one at ultimate. The ultimate level earthquake, contrary to the first two, has been implicitly addressed by conscientious designers by choosing systems, materials and details compatible with postulated dynamic forces. It is the purpose of this paper to discuss the regulatory specifications required to quantify this third level, or Maximum Credible Earthquake (MCE). (orig.)

  18. The Maximum Entropy Limit of Small-scale Magnetic Field Fluctuations in the Quiet Sun

    Science.gov (United States)

    Gorobets, A. Y.; Berdyugina, S. V.; Riethmüller, T. L.; Blanco Rodríguez, J.; Solanki, S. K.; Barthol, P.; Gandorfer, A.; Gizon, L.; Hirzberger, J.; van Noort, M.; Del Toro Iniesta, J. C.; Orozco Suárez, D.; Schmidt, W.; Martínez Pillet, V.; Knölker, M.

    2017-11-01

    The observed magnetic field on the solar surface is characterized by a very complex spatial and temporal behavior. Although feature-tracking algorithms have allowed us to deepen our understanding of this behavior, subjectivity plays an important role in the identification and tracking of such features. In this paper, we continue studies of the temporal stochasticity of the magnetic field on the solar surface without relying either on the concept of magnetic features or on subjective assumptions about their identification and interaction. We propose a data analysis method to quantify fluctuations of the line-of-sight magnetic field by means of reducing the temporal field’s evolution to the regular Markov process. We build a representative model of fluctuations converging to the unique stationary (equilibrium) distribution in the long time limit with maximum entropy. We obtained different rates of convergence to the equilibrium at fixed noise cutoff for two sets of data. This indicates a strong influence of the data spatial resolution and mixing-polarity fluctuations on the relaxation process. The analysis is applied to observations of magnetic fields of the relatively quiet areas around an active region carried out during the second flight of the Sunrise/IMaX and quiet Sun areas at the disk center from the Helioseismic and Magnetic Imager on board the Solar Dynamics Observatory satellite.

  19. Predicting the Maximum Dynamic Strength in Bench Press: The High Precision of the Bar Velocity Approach.

    Science.gov (United States)

    Loturco, Irineu; Kobal, Ronaldo; Moraes, José E; Kitamura, Katia; Cal Abad, César C; Pereira, Lucas A; Nakamura, Fábio Y

    2017-04-01

    Loturco, I, Kobal, R, Moraes, JE, Kitamura, K, Cal Abad, CC, Pereira, LA, and Nakamura, FY. Predicting the maximum dynamic strength in bench press: the high precision of the bar velocity approach. J Strength Cond Res 31(4): 1127-1131, 2017-The aim of this study was to determine the force-velocity relationship and test the possibility of determining the 1 repetition maximum (1RM) in "free weight" and Smith machine bench presses. Thirty-six male top-level athletes from 3 different sports were submitted to a standardized 1RM bench press assessment (free weight or Smith machine, in randomized order), following standard procedures encompassing lifts performed at 40-100% of 1RM. The mean propulsive velocity (MPV) was measured in all attempts. A linear regression was performed to establish the relationships between bar velocities and 1RM percentages. The actual and predicted 1RM for each exercise were compared using a paired t-test. Although the Smith machine 1RM was higher (10% difference) than the free weight 1RM, in both cases the actual and predicted values did not differ. In addition, the linear relationship between MPV and percentage of 1RM (coefficient of determination ≥95%) allow determination of training intensity based on the bar velocity. The linear relationships between the MPVs and the relative percentages of 1RM throughout the entire range of loads enable coaches to use the MPV to accurately monitor their athletes on a daily basis and accurately determine their actual 1RM without the need to perform standard maximum dynamic strength assessments.
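
    A sketch of the velocity-based prediction described above, with invented session data; the paper's actual MPV values and regression coefficients are not reproduced here.

```python
import numpy as np

# Hypothetical session: mean propulsive velocity (m/s) measured at known %1RM loads
pct_1rm = np.array([40, 50, 60, 70, 80, 90])           # % of 1RM
mpv     = np.array([1.05, 0.90, 0.74, 0.57, 0.42, 0.26])

# linear fit %1RM = a * MPV + b (the paper reports R^2 >= 0.95 for this relation)
a, b = np.polyfit(mpv, pct_1rm, deg=1)

# predict the relative intensity of today's load from its velocity, then scale to 1RM
load_kg, v_today = 80.0, 0.62                           # today's bar load and measured MPV
pct_today = a * v_today + b
print(round(100.0 * load_kg / pct_today, 1))            # estimated 1RM in kg
```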

  20. Identifying critical constraints for the maximum loadability of electric power systems - analysis via interior point method

    Energy Technology Data Exchange (ETDEWEB)

    Barboza, Luciano Vitoria [Sul-riograndense Federal Institute for Education, Science and Technology (IFSul), Pelotas, RS (Brazil)], E-mail: luciano@pelotas.ifsul.edu.br

    2009-07-01

    This paper presents an overview of the maximum loadability problem and aims to study the main factors that limit this loadability. Specifically, this study focuses its attention on determining which electric system buses directly influence the power demand supply. The proposed approach uses the conventional maximum loadability method modelled by an optimization problem. The solution of this model is performed using the Interior Point methodology. As a consequence of this solution method, the Lagrange multipliers are used as parameters that identify the probable 'bottlenecks' in the electric power system. The study also shows the relationship between the Lagrange multipliers and the cost function in the Interior Point optimization, interpreted as sensitivity parameters. In order to illustrate the proposed methodology, the approach was applied to an IEEE test system and, to assess its performance, a real equivalent electric system from the South-Southeast region of Brazil was simulated. (author)
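
    A toy stand-in for the idea, replacing the interior-point optimal power flow with a tiny linear program: maximize served demand subject to line limits and read off the Lagrange multipliers, which recent SciPy releases expose as constraint marginals. All numbers are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# maximize d  ->  minimize -d, with balance f1 + f2 = d and line limits on f1, f2
c = np.array([0.0, 0.0, -1.0])                 # variables: f1, f2, d
A_eq = np.array([[1.0, 1.0, -1.0]])            # power balance: f1 + f2 - d = 0
b_eq = np.array([0.0])
A_ub = np.array([[1.0, 0.0, 0.0],              # f1 <= 2.0  (line 1 limit)
                 [0.0, 1.0, 0.0]])             # f2 <= 1.5  (line 2 limit)
b_ub = np.array([2.0, 1.5])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 3, method="highs")
print(res.x)                     # maximum loadability: d = 3.5
print(res.ineqlin.marginals)     # nonzero multipliers flag the binding "bottlenecks"
```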

  1. Estimation of Maximum Allowable PV Connection to LV Residential Power Networks

    DEFF Research Database (Denmark)

    Demirok, Erhan; Sera, Dezso; Teodorescu, Remus

    2011-01-01

    Maximum photovoltaic (PV) hosting capacity of low voltage (LV) power networks is mainly restricted by either thermal limits of network components or grid voltage quality resulted from high penetration of distributed PV systems. This maximum hosting capacity may be lower than the available solar...... potential of geographic area due to power network limitations even though all rooftops are fully occupied with PV modules. Therefore, it becomes more of an issue to know what exactly limits higher PV penetration level and which solutions should be engaged efficiently such as over sizing distribution...

  2. A risk modelling approach for setting microbiological limits using enterococci as indicator for growth potential of Salmonella in pork.

    Science.gov (United States)

    Bollerslev, Anne Mette; Nauta, Maarten; Hansen, Tina Beck; Aabo, Søren

    2017-01-02

    Microbiological limits are widely used in food processing as an aid to reduce the exposure to hazardous microorganisms for the consumers. However, in pork, the prevalence and concentrations of Salmonella are generally low and microbiological limits are not considered an efficient tool to support hygiene interventions. The objective of the present study was to develop an approach which could make it possible to define potential risk-based microbiological limits for an indicator, enterococci, in order to evaluate the risk from potential growth of Salmonella. A positive correlation between the concentration of enterococci and the prevalence and concentration of Salmonella was shown for 6640 pork samples taken at Danish cutting plants and retail butchers. The samples were collected in five different studies in 2001, 2002, 2010, 2011 and 2013. The observations that both Salmonella and enterococci are carried in the intestinal tract, contaminate pork by the same mechanisms and share similar growth characteristics (lag phase and maximum specific growth rate) at temperatures around 5-10°C, suggest a potential of enterococci to be used as an indicator of potential growth of Salmonella in pork. Elevated temperatures during processing will lead to growth of both enterococci and, if present, also Salmonella. By combining the correlation between enterococci and Salmonella with risk modelling, it is possible to predict the risk of salmonellosis based on the level of enterococci. The risk model used for this purpose includes the dose-response relationship for Salmonella and a reduction factor to account for preparation of the fresh pork. By use of the risk model, it was estimated that the majority of salmonellosis cases, caused by the consumption of pork in Denmark, is caused by the small fraction of pork products that has enterococci concentrations above 5 log CFU/g. This illustrates that our approach can be used to evaluate the potential effect of different microbiological

  3. The maximum significant wave height in the Southern North Sea

    NARCIS (Netherlands)

    Bouws, E.; Tolman, H.L.; Holthuijsen, L.H.; Eldeberky, Y.; Booij, N.; Ferier, P.

    1995-01-01

    The maximum possible wave conditions along the Dutch coast, which seem to be dominated by the limited water depth, have been estimated in the present study with numerical simulations. Discussions with meteorologists suggest that the maximum possible sustained wind speed in North Sea conditions is

  4. HOME Rent Limits

    Data.gov (United States)

    Department of Housing and Urban Development — In accordance with 24 CFR Part 92.252, HUD provides maximum HOME rent limits. The maximum HOME rents are the lesser of: The fair market rent for existing housing for...

  5. Attitude sensor alignment calibration for the solar maximum mission

    Science.gov (United States)

    Pitone, Daniel S.; Shuster, Malcolm D.

    1990-01-01

    An earlier heuristic study of the fine attitude sensors for the Solar Maximum Mission (SMM) revealed a temperature dependence of the alignment about the yaw axis of the pair of fixed-head star trackers relative to the fine pointing Sun sensor. Here, new sensor alignment algorithms which better quantify the dependence of the alignments on the temperature are developed and applied to the SMM data. Comparison with the results from the previous study reveals the limitations of the heuristic approach. In addition, some of the basic assumptions made in the prelaunch analysis of the alignments of the SMM are examined. The results of this work have important consequences for future missions with stringent attitude requirements and where misalignment variations due to variations in the temperature will be significant.

  6. Scientific substantination of maximum allowable concentration of fluopicolide in water

    Directory of Open Access Journals (Sweden)

    Pelo I.М.

    2014-03-01

    Full Text Available In order to substantiate the maximum allowable concentration of fluopicolide in the water of water reservoirs, research was carried out. Methods of study: laboratory hygienic experiment using organoleptic, sanitary-chemical, sanitary-toxicological, sanitary-microbiological and mathematical methods. The results of the influence of fluopicolide on the organoleptic properties of water and on the sanitary regimen of reservoirs for household purposes are given, and its subthreshold concentration in water by the sanitary-toxicological hazard index was calculated. The threshold concentration of the substance by the main hazard criteria was established, and the maximum allowable concentration in water was substantiated. The studies led to the following conclusions: the fluopicolide threshold concentration in water by the organoleptic hazard index (limiting criterion: the smell) is 0.15 mg/dm3; by the general sanitary hazard index (limiting criteria: impact on the number of saprophytic microflora, biochemical oxygen demand and nitrification) it is 0.015 mg/dm3; the maximum non-effective concentration is 0.14 mg/dm3; and the maximum allowable concentration is 0.015 mg/dm3.

  7. Understanding the Benefits and Limitations of Increasing Maximum Rotor Tip Speed for Utility-Scale Wind Turbines

    International Nuclear Information System (INIS)

    Ning, A; Dykes, K

    2014-01-01

    For utility-scale wind turbines, the maximum rotor rotation speed is generally constrained by noise considerations. Innovations in acoustics and/or siting in remote locations may enable future wind turbine designs to operate with higher tip speeds. Wind turbines designed to take advantage of higher tip speeds are expected to be able to capture more energy and utilize lighter drivetrains because of their decreased maximum torque loads. However, the magnitude of the potential cost savings is unclear, and the potential trade-offs with rotor and tower sizing are not well understood. A multidisciplinary, system-level framework was developed to facilitate wind turbine and wind plant analysis and optimization. The rotors, nacelles, and towers of wind turbines are optimized for minimum cost of energy subject to a large number of structural, manufacturing, and transportation constraints. These optimization studies suggest that allowing for higher maximum tip speeds could result in a decrease in the cost of energy of up to 5% for land-based sites and 2% for offshore sites when using current technology. Almost all of the cost savings are attributed to the decrease in gearbox mass as a consequence of the reduced maximum rotor torque. Although there is some increased energy capture, it is very minimal (less than 0.5%). Extreme increases in tip speed are unnecessary; benefits for maximum tip speeds greater than 100-110 m/s are small to nonexistent

  8. Understanding the Benefits and Limitations of Increasing Maximum Rotor Tip Speed for Utility-Scale Wind Turbines

    Science.gov (United States)

    Ning, A.; Dykes, K.

    2014-06-01

    For utility-scale wind turbines, the maximum rotor rotation speed is generally constrained by noise considerations. Innovations in acoustics and/or siting in remote locations may enable future wind turbine designs to operate with higher tip speeds. Wind turbines designed to take advantage of higher tip speeds are expected to be able to capture more energy and utilize lighter drivetrains because of their decreased maximum torque loads. However, the magnitude of the potential cost savings is unclear, and the potential trade-offs with rotor and tower sizing are not well understood. A multidisciplinary, system-level framework was developed to facilitate wind turbine and wind plant analysis and optimization. The rotors, nacelles, and towers of wind turbines are optimized for minimum cost of energy subject to a large number of structural, manufacturing, and transportation constraints. These optimization studies suggest that allowing for higher maximum tip speeds could result in a decrease in the cost of energy of up to 5% for land-based sites and 2% for offshore sites when using current technology. Almost all of the cost savings are attributed to the decrease in gearbox mass as a consequence of the reduced maximum rotor torque. Although there is some increased energy capture, it is very minimal (less than 0.5%). Extreme increases in tip speed are unnecessary; benefits for maximum tip speeds greater than 100-110 m/s are small to nonexistent.

  9. Stochastic approach to the derivation of emission limits for wastewater treatment plants.

    Science.gov (United States)

    Stransky, D; Kabelkova, I; Bares, V

    2009-01-01

    A stochastic approach to the derivation of WWTP emission limits meeting probabilistically defined environmental quality standards (EQS) is presented. The stochastic model is based on the mixing equation with input data defined by probability density distributions and is solved by Monte Carlo simulations. The approach was tested on a study catchment for total phosphorus (P(tot)). The model assumes independence of the input variables, which was proved for the dry-weather situation. Discharges and P(tot) concentrations both in the study creek and in the WWTP effluent follow log-normal probability distributions. Variation coefficients of P(tot) concentrations differ considerably along the stream (c(v)=0.415-0.884). The selected value of the variation coefficient (c(v)=0.420) affects the derived mean value (C(mean)=0.13 mg/l) of the P(tot) EQS (C(90)=0.2 mg/l). Even after the supposed improvement of water quality upstream of the WWTP to the level of the P(tot) EQS, the WWTP emission limits calculated would be lower than the values of the best available technology (BAT). Thus, minimum dilution ratios for the meaningful application of the combined approach to the derivation of P(tot) emission limits for Czech streams are discussed.
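
    A compact Monte Carlo version of the mixing-equation calculation can illustrate the idea. The lognormal inputs below are invented stand-ins for the study's fitted distributions; only the P(tot) standard C(90)=0.2 mg/l and the upstream variation coefficient are taken from the abstract.

```python
import numpy as np

# Mixing equation C_mix = (Qr*Cr + Qe*Ce) / (Qr + Qe): find the largest effluent
# concentration Ce such that the EQS of 0.2 mg/l is exceeded at most 10% of the time.
rng = np.random.default_rng(6)
n = 100_000
Qr = rng.lognormal(np.log(0.30), 0.6, n)    # creek discharge, m3/s (illustrative)
Cr = rng.lognormal(np.log(0.10), 0.42, n)   # upstream P_tot, mg/l (cv ~ 0.42)
Qe = rng.lognormal(np.log(0.05), 0.2, n)    # WWTP effluent discharge, m3/s

def exceedance(ce):
    c_mix = (Qr * Cr + Qe * ce) / (Qr + Qe)
    return np.mean(c_mix > 0.2)

# bisection on the emission limit
lo, hi = 0.0, 5.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if exceedance(mid) <= 0.10 else (lo, mid)
print(round(lo, 3), "mg/l effluent limit")
```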

  10. Topics in Bayesian statistics and maximum entropy

    International Nuclear Information System (INIS)

    Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.

    1998-12-01

    Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)

  11. Problem of data quality and the limitations of the infrastructure approach

    Science.gov (United States)

    Behlen, Fred M.; Sayre, Richard E.; Rackus, Edward; Ye, Dingzhong

    1998-07-01

    The 'Infrastructure Approach' is a PACS implementation methodology wherein the archive, network and information systems interfaces are acquired first, and workstations are installed later. The approach allows building a history of archived image data, so that most prior examinations are available in digital form when workstations are deployed. A limitation of the Infrastructure Approach is that the deferred use of digital image data defeats many data quality management functions that are provided automatically by human mechanisms when data is immediately used for the completion of clinical tasks. If the digital data is used solely for archiving while reports are interpreted from film, the radiologist serves only as a check against lost films, and another person must be designated as responsible for the quality of the digital data. Data from the Radiology Information System and the PACS were analyzed to assess the nature and frequency of system and data quality errors. The error level was found to be acceptable if supported by auditing and error resolution procedures requiring additional staff time, and in any case was better than the loss rate of a hardcopy film archive. It is concluded that the problem of data quality compromises but does not negate the value of the Infrastructure Approach. The Infrastructure Approach should best be employed only to a limited extent, and that any phased PACS implementation should have a substantial complement of workstations dedicated to softcopy interpretation for at least some applications, and with full deployment following not long thereafter.

  12. Maximum Principle for General Controlled Systems Driven by Fractional Brownian Motions

    International Nuclear Information System (INIS)

    Han Yuecai; Hu Yaozhong; Song Jian

    2013-01-01

    We obtain a maximum principle for stochastic control problem of general controlled stochastic differential systems driven by fractional Brownian motions (of Hurst parameter H>1/2). This maximum principle specifies a system of equations that the optimal control must satisfy (necessary condition for the optimal control). This system of equations consists of a backward stochastic differential equation driven by both fractional Brownian motions and the corresponding underlying standard Brownian motions. In addition to this backward equation, the maximum principle also involves the Malliavin derivatives. Our approach is to use conditioning and Malliavin calculus. To arrive at our maximum principle we need to develop some new results of stochastic analysis of the controlled systems driven by fractional Brownian motions via fractional calculus. Our approach of conditioning and Malliavin calculus is also applied to classical system driven by standard Brownian motions while the controller has only partial information. As a straightforward consequence, the classical maximum principle is also deduced in this more natural and simpler way.

  13. Maximum-likelihood estimation of the hyperbolic parameters from grouped observations

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    1988-01-01

    a least-squares problem. The second procedure Hypesti first approaches the maximum-likelihood estimate by iterating in the profile-log likelihood function for the scale parameter. Close to the maximum of the likelihood function, the estimation is brought to an end by iteration, using all four parameters...

  14. Stationary neutrino radiation transport by maximum entropy closure

    International Nuclear Information System (INIS)

    Bludman, S.A.

    1994-11-01

    The authors obtain the angular distributions that maximize the entropy functional for Maxwell-Boltzmann (classical), Bose-Einstein, and Fermi-Dirac radiation. In the low and high occupancy limits, the maximum entropy closure is bounded by previously known variable Eddington factors that depend only on the flux. For intermediate occupancy, the maximum entropy closure depends on both the occupation density and the flux. The Fermi-Dirac maximum entropy variable Eddington factor shows a scale invariance, which leads to a simple, exact analytic closure for fermions. This two-dimensional variable Eddington factor gives results that agree well with exact (Monte Carlo) neutrino transport calculations out of a collapse residue during early phases of hydrostatic neutron star formation

  15. Effects of fasting on maximum thermogenesis in temperature-acclimated rats

    Science.gov (United States)

    Wang, L. C. H.

    1981-09-01

    To further investigate the limiting effect of substrates on maximum thermogenesis in acute cold exposure, the present study examined the prevalence of this effect at different thermogenic capabilities consequent to cold- or warm-acclimation. Male Sprague-Dawley rats (n=11) were acclimated to 6, 16 and 26 °C in succession; their thermogenic capabilities after each acclimation temperature were measured under helium-oxygen (21% oxygen, balance helium) at -10 °C after overnight fasting or feeding. Regardless of feeding conditions, both maximum and total heat production were significantly greater in the 6 > 16 > 26 °C-acclimated conditions. In the fed state, the total heat production was significantly greater than that in the fasted state at all acclimating temperatures, but the maximum thermogenesis was significantly greater only in the 6 and 16 °C-acclimated states. The results indicate that the limiting effect of substrates on maximum and total thermogenesis is independent of the magnitude of thermogenic capability, suggesting a substrate-dependent component in restricting the effective expression of existing aerobic metabolic capability even under severe stress.

  16. Remarks on the maximum luminosity

    Science.gov (United States)

    Cardoso, Vitor; Ikeda, Taishi; Moore, Christopher J.; Yoo, Chul-Moon

    2018-04-01

    The quest for fundamental limitations on physical processes is old and venerable. Here, we investigate the maximum possible power, or luminosity, that any event can produce. We show, via full nonlinear simulations of Einstein's equations, that there exist initial conditions which give rise to arbitrarily large luminosities. However, the requirement that there is no past horizon in the spacetime seems to limit the luminosity to below the Planck value, L_P = c⁵/G. Numerical relativity simulations of critical collapse yield the largest luminosities observed to date, ≈ 0.2 L_P. We also present an analytic solution to the Einstein equations which seems to give an unboundedly large luminosity; this will guide future numerical efforts to investigate super-Planckian luminosities.

  17. Algorithms of maximum likelihood data clustering with applications

    Science.gov (United States)

    Giada, Lorenzo; Marsili, Matteo

    2002-12-01

    We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share a common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson's coefficient of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures whereas the outcome of standard algorithms has a much wider variability.
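
    As a hedged stand-in (this is not the paper's likelihood maximizer), the same ingredient, the Pearson correlation matrix, also drives an ordinary hierarchical clustering on the distance 1 - ρ; the synthetic series below mimic two correlated groups.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(7)
common = rng.normal(size=250)                   # shared factor for the first group
grp1 = [common + 0.8 * rng.normal(size=250) for _ in range(3)]
grp2 = [rng.normal(size=250) for _ in range(3)] # independent series
data = np.vstack(grp1 + grp2)                   # 6 series x 250 observations

rho = np.corrcoef(data)
dist = 1.0 - rho[np.triu_indices(6, k=1)]       # condensed correlation-distance vector
labels = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
print(labels)                                   # the first three series cluster together
```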

  18. A Weakest-Link Approach for Fatigue Limit of 30CrNiMo8 Steels (Preprint)

    Science.gov (United States)

    2011-03-01

    S. Ekwaro-Osire and H.V. Kulkarni, Texas; preprint report AFRL-RX-WP-TP-2011-4206.

  19. Multi-approach analysis of maximum riverbed scour depth above subway tunnel

    Directory of Open Access Journals (Sweden)

    Jun Chen

    2010-12-01

    Full Text Available When subway tunnels are routed underneath rivers, riverbed scour may expose the structure, with potentially severe consequences. Thus, it is important to identify the maximum scour depth to ensure that the designed buried depth is adequate. There are a range of methods that may be applied to this problem, including the fluvial process analysis method, geological structure analysis method, scour formula method, scour model experiment method, and numerical simulation method. However, the application ranges and forecasting precision of these methods vary considerably. In order to quantitatively analyze the characteristics of the different methods, a subway tunnel passing underneath a river was selected, and the aforementioned five methods were used to forecast the maximum scour depth. The fluvial process analysis method was used to characterize the river regime and evolution trend, which were the baseline for examination of the scour depth of the riverbed. The results obtained from the scour model experiment and the numerical simulation methods are reliable; these two methods are suitable for application to tunnel projects passing underneath rivers. The scour formula method was less accurate than the scour model experiment method; it is suitable for application to lower risk projects such as pipelines. The results of the geological structure analysis had low precision; the method is suitable for use as a secondary method to assist other research methods. To forecast the maximum scour depth of the riverbed above the subway tunnel, a combination of methods is suggested, and the appropriate analysis method should be chosen with respect to the local conditions.

  20. Stochastic modeling and control system designs of the NASA/MSFC Ground Facility for large space structures: The maximum entropy/optimal projection approach

    Science.gov (United States)

    Hsia, Wei-Shen

    1986-01-01

    In the Control Systems Division of the Systems Dynamics Laboratory of the NASA/MSFC, a Ground Facility (GF), in which the dynamics and control system concepts being considered for Large Space Structures (LSS) applications can be verified, was designed and built. One of the important aspects of the GF is to design an analytical model which will be as close to experimental data as possible so that a feasible control law can be generated. Using Hyland's Maximum Entropy/Optimal Projection Approach, a procedure was developed in which the maximum entropy principle is used for stochastic modeling and the optimal projection technique is used for a reduced-order dynamic compensator design for a high-order plant.

  1. Limited endoscopic transsphenoidal approach for cavernous sinus biopsy: illustration of 3 cases and discussion.

    Science.gov (United States)

    Graillon, T; Fuentes, S; Metellus, P; Adetchessi, T; Gras, R; Dufour, H

    2014-01-01

    Advances in transsphenoidal surgery and endoscopic techniques have opened new perspectives for cavernous sinus (CS) approaches. The aim of this study was to assess the advantages and disadvantages of the limited endoscopic transsphenoidal approach, as performed in pituitary adenoma surgery, for CS tumor biopsy, illustrated with three clinical cases. The first case was a 46-year-old woman with a prior medical history of parotid adenocarcinoma successfully treated 10 years previously. The cavernous sinus tumor was revealed by right third and sixth nerve palsy and had increased in size over the previous three years. A tumor biopsy using a limited endoscopic transsphenoidal approach revealed an adenocarcinoma metastasis. Complementary radiosurgery was performed. The second case was a 36-year-old woman who consulted for diplopia with right sixth nerve palsy and amenorrhea with hyperprolactinemia. Dopamine agonist treatment was used to restore the patient's menstrual cycle. Cerebral magnetic resonance imaging (MRI) revealed a right-sided CS tumor. CS biopsy, via a limited endoscopic transsphenoidal approach, confirmed a meningothelial grade 1 meningioma. Complementary radiosurgery was performed. The third case was a 63-year-old woman with progressive onset of left third nerve palsy and visual acuity loss, revealing a left cavernous sinus tumor invading the optic canal. Surgical biopsy was performed using an enlarged endoscopic transsphenoidal approach to decompress the optic nerve. Biopsy results revealed a meningothelial grade 1 meningioma. Complementary radiotherapy was performed. In these three cases, no complications were observed. Mean hospitalization duration was 4 days. Reported anatomical studies and clinical series have shown the feasibility of reaching the cavernous sinus using an endoscopic endonasal approach. Trans-foramen ovale CS percutaneous biopsy is an interesting procedure but only provides cell analysis results, and not tissue analysis. However, radiotherapy and

  2. Experimental application of the "total maximum daily load" approach as a tool for WFD implementation in temporary rivers

    Science.gov (United States)

    Lo Porto, A.; De Girolamo, A. M.; Santese, G.

    2012-04-01

    In this presentation, the experience gained in the first experimental use in the EU (as far as we know) of the concept and methodology of the "Total Maximum Daily Load" (TMDL) is reported. The TMDL is an instrument required by the Clean Water Act in the U.S.A. for the management of water bodies classified as impaired. The TMDL calculates the maximum amount of a pollutant that a water body can receive and still safely meet water quality standards. It makes it possible to establish a scientifically based strategy for regulating emission load controls according to the characteristics of the watershed/basin. The implementation of the TMDL is a process analogous to the Programmes of Measures required by the WFD, the main difference being the analysis of the linkage between the loads of different sources and the quality of the water bodies. In this study, the TMDL calculation was applied to the Candelaro River, a temporary Italian river classified as impaired in the first steps of the implementation of the WFD. A specific approach based on "Load Duration Curves" was adopted for the calculation of nutrient TMDLs, because it is more robust for rivers featuring large changes in flow than the classic approach based on average long-term flow conditions. This methodology makes it possible to establish the maximum allowable loads across the different flow conditions of a river. It enabled us: to evaluate the allowable loading of a water body; to identify the sources and estimate their loads; to estimate the total loading that the water body can receive while meeting the established water quality standards; to link the effects of point and diffuse sources on the water quality status; and finally to identify the reduction necessary for each type of source. Load reductions were calculated for nitrate, total phosphorus and ammonia. The simulated measures showed a remarkable ability to reduce the pollutants for the Candelaro River. The use of the Soil and
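
    A minimal sketch of the load-duration-curve arithmetic in Python. The 10% margin of safety, the unit conversion and the single constant standard concentration are illustrative assumptions; the study's actual nutrient TMDL accounting for the Candelaro is not reproduced here.

```python
import numpy as np

def load_duration_curve(flows_m3s, conc_standard_mgL, margin_of_safety=0.1):
    """Allowable pollutant load as a function of flow exceedance.

    For each observed daily flow, the allowable load is flow times the
    water-quality-standard concentration, reduced by an explicit margin
    of safety. Returns (exceedance probability, allowable load in kg/day),
    ordered from high flows to low flows."""
    flows = np.sort(np.asarray(flows_m3s))[::-1]
    exceedance = np.arange(1, flows.size + 1) / (flows.size + 1)
    # m3/s * mg/L = g/s; times 86,400 s/day, divided by 1,000 g/kg -> kg/day
    allowable = flows * conc_standard_mgL * 86.4 * (1.0 - margin_of_safety)
    return exceedance, allowable

# Monitored loads plotted against this curve show, flow regime by flow
# regime, how much reduction each source category must deliver.
```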

  3. Causal nexus between energy consumption and carbon dioxide emission for Malaysia using maximum entropy bootstrap approach.

    Science.gov (United States)

    Gul, Sehrish; Zou, Xiang; Hassan, Che Hashim; Azam, Muhammad; Zaman, Khalid

    2015-12-01

    This study investigates the relationship between energy consumption and carbon dioxide emission in a causal framework, as the direction of causality has significant policy implications for developed and developing countries. The study employed the maximum entropy bootstrap (Meboot) approach to examine the causal nexus between energy consumption and carbon dioxide emission in both bivariate and multivariate frameworks for Malaysia over the period 1975-2013. This is a unified approach that does not require the conventional techniques based on asymptotic theory, such as testing for possible unit roots and cointegration. In addition, it can be applied in the presence of non-stationarity of any type, including structural breaks, without any data transformation to achieve stationarity. Thus, it provides more reliable and robust inferences, insensitive to the time span as well as the lag length used. The empirical results show that there is a unidirectional causality running from energy consumption to carbon emission both in the bivariate model and in the multivariate framework, while controlling for broad money supply and population density. The results indicate that Malaysia is an energy-dependent country and hence energy is a stimulus to carbon emissions.
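
    A condensed sketch of one maximum-entropy bootstrap replicate, loosely following Vinod's meboot construction: take midpoints between successive order statistics, sample from the implied piecewise-uniform maximum-entropy density, then re-impose the original rank ordering so the replicate retains the time-shape of the series. Endpoint tails and the mean-preserving refinements of the full algorithm are deliberately simplified here.

```python
import numpy as np

def meboot_replicate(x, rng):
    """One simplified maximum-entropy bootstrap replicate of series x."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(x)
    xs = x[order]                       # order statistics
    z = (xs[:-1] + xs[1:]) / 2.0        # interval midpoints
    grid = np.concatenate(([xs[0]], z, [xs[-1]]))   # crude endpoint handling
    probs = np.linspace(0.0, 1.0, grid.size)
    u = np.sort(rng.uniform(size=x.size))
    y = np.interp(u, probs, grid)       # sample the ME quantile function
    out = np.empty_like(y)
    out[order] = y                      # restore the original ranks
    return out

rng = np.random.default_rng(1)
print(meboot_replicate([4.0, 12.0, 36.0, 20.0, 8.0], rng))
```

    Causality testing then proceeds by re-estimating the regression of interest on an ensemble of such replicates and reading confidence bands off the resulting coefficient distributions.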

  4. The worst case complexity of maximum parsimony.

    Science.gov (United States)

    Carmel, Amir; Musa-Lempel, Noa; Tsur, Dekel; Ziv-Ukelson, Michal

    2014-11-01

    One of the core classical problems in computational biology is that of constructing the most parsimonious phylogenetic tree interpreting an input set of sequences from the genomes of evolutionarily related organisms. We reexamine the classical maximum parsimony (MP) optimization problem for the general (asymmetric) scoring matrix case, where rooted phylogenies are implied, and analyze the worst case bounds of three approaches to MP: The approach of Cavalli-Sforza and Edwards, the approach of Hendy and Penny, and a new agglomerative, "bottom-up" approach we present in this article. We show that the second and third approaches are faster than the first one by a factor of Θ(√n) and Θ(n), respectively, where n is the number of species.
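
    The scoring subproblem shared by all three approaches can be illustrated with a Sankoff-style dynamic program on a fixed rooted tree. The sketch below scores a single character under a general (possibly asymmetric) cost matrix; it is not the tree-search machinery whose worst-case bounds the paper analyzes, and the toy tree and names are hypothetical.

```python
import numpy as np

def parsimony_score(tree, leaf_states, cost, n_states, root='root'):
    """Minimum total substitution cost for one character on a fixed
    rooted tree (Sankoff DP). tree: internal node -> children;
    cost[i][j]: cost of a parent in state i having a child in state j."""
    def score(node):
        if node in leaf_states:              # leaf: only the observed state
            s = np.full(n_states, np.inf)
            s[leaf_states[node]] = 0.0
            return s
        child_scores = [score(child) for child in tree[node]]
        s = np.zeros(n_states)
        for i in range(n_states):
            for cs in child_scores:
                s[i] += min(cost[i][j] + cs[j] for j in range(n_states))
        return s
    return min(score(root))

# Toy example: ((A,B),C) with an asymmetric two-state cost matrix.
tree = {'root': ('x', 'C'), 'x': ('A', 'B')}
leaves = {'A': 0, 'B': 1, 'C': 1}
cost = [[0, 1],    # 0 -> 1 costs 1
        [3, 0]]    # 1 -> 0 costs 3 (asymmetric)
print(parsimony_score(tree, leaves, cost, n_states=2))   # prints 2.0
```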

  5. Pushing concentration of stationary solar concentrators to the limit.

    Science.gov (United States)

    Winston, Roland; Zhang, Weiya

    2010-04-26

    We give the theoretical limit of concentration allowed by nonimaging optics for stationary solar concentrators, after reviewing sun-earth geometry in direction cosine space. We then discuss the design principles that we follow to approach the maximum concentration, along with examples including a hollow CPC trough, a dielectric CPC trough, and a 3D dielectric stationary solar concentrator which concentrates sunlight four times (4x), eight hours per day, year round.
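
    A back-of-the-envelope rendering of the étendue argument in direction cosine space, under the simplifying assumption that an ideal stationary concentrator must accept the whole annual solar band |k| <= sin(23.45°) of the unit direction-cosine disk, with a dielectric of refractive index n contributing a factor n². This is a reading of the geometry, not the paper's exact derivation.

```python
import numpy as np

def stationary_concentration_limit(delta_max_deg=23.45, n_refr=1.0):
    """Etendue bound for an ideal stationary (non-tracking) concentrator:
    ratio of the full direction-cosine disk (area pi) to the annual solar
    band |k| <= sin(delta_max); a dielectric of index n gains n**2."""
    s = np.sin(np.radians(delta_max_deg))
    band_area = 2.0 * (s * np.sqrt(1.0 - s * s) + np.arcsin(s))
    return n_refr**2 * np.pi / band_area

print(stationary_concentration_limit())            # ~2.0x, hollow device
print(stationary_concentration_limit(n_refr=1.5))  # ~4.6x, dielectric-filled
```

    The dielectric value is consistent in order of magnitude with the 4x device mentioned in the abstract.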

  6. Novel approach to epicardial pacemaker implantation in patients with limited venous access.

    Science.gov (United States)

    Costa, Roberto; Scanavacca, Mauricio; da Silva, Kátia Regina; Martinelli Filho, Martino; Carrillo, Roger

    2013-11-01

    Limited venous access in certain patients increases the procedural risk and complexity of conventional transvenous pacemaker implantation. The purpose of this study was to determine a minimally invasive epicardial approach using pericardial reflections for dual-chamber pacemaker implantation in patients with limited venous access. Between June 2006 and November 2011, 15 patients underwent epicardial pacemaker implantation. Procedures were performed through a minimally invasive subxiphoid approach and pericardial window with subsequent fluoroscopy-assisted lead placement. Mean patient age was 46.4 ± 15.3 years (9 male [60.0%], 6 female [40.0%]). The new surgical approach was used in patients determined to have limited venous access due to multiple abandoned leads in 5 (33.3%), venous occlusion in 3 (20.0%), intravascular retention of lead fragments from prior extraction in 3 (20.0%), tricuspid valve vegetation currently under treatment in 2 (13.3%), and unrepaired intracardiac defects in 2 (13.3%). All procedures were successful, with no perioperative complications or early deaths. Mean operating time for isolated pacemaker implantation was 231.7 ± 33.5 minutes. Lead placement on the superior aspect of the right atrium, through the transverse sinus, was possible in 12 patients. In the remaining 3 patients, the atrial lead was implanted on the left atrium through the oblique sinus, the postcaval recess, or the left pulmonary vein recess. None of the patients displayed pacing or sensing dysfunction, and all parameters remained stable throughout the follow-up period of 36.8 ± 25.1 months. Epicardial pacemaker implantation through pericardial reflections is an effective alternative therapy for those patients requiring physiologic pacing in whom venous access is limited. © 2013 Heart Rhythm Society. All rights reserved.

  7. A novel approach to derive halo-independent limits on dark matter properties

    OpenAIRE

    Ferrer, Francesc; Ibarra, Alejandro; Wild, Sebastian

    2015-01-01

    We propose a method that allows one to place an upper limit on the dark matter elastic scattering cross section with nucleons which is independent of the velocity distribution. Our approach combines null results from direct detection experiments with indirect searches at neutrino telescopes, and goes beyond previous attempts to remove astrophysical uncertainties in that it directly constrains the particle physics properties of the dark matter. The resulting halo-independent upper limits on the sc...

  8. Setting the renormalization scale in pQCD: Comparisons of the principle of maximum conformality with the sequential extended Brodsky-Lepage-Mackenzie approach

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Hong-Hao [Chongqing Univ., Chongqing (People's Republic of China); Wu, Xing-Gang [Chongqing Univ., Chongqing (People's Republic of China); Ma, Yang [Chongqing Univ., Chongqing (People's Republic of China); Brodsky, Stanley J. [Stanford Univ., Stanford, CA (United States); Mojaza, Matin [KTH Royal Inst. of Technology and Stockholm Univ., Stockholm (Sweden)

    2015-05-26

    A key problem in making precise perturbative QCD (pQCD) predictions is how to set the renormalization scale of the running coupling unambiguously at each finite order. The elimination of the uncertainty in setting the renormalization scale in pQCD will greatly increase the precision of collider tests of the Standard Model and the sensitivity to new phenomena. Renormalization group invariance requires that predictions for observables must also be independent of the choice of the renormalization scheme. The well-known Brodsky-Lepage-Mackenzie (BLM) approach cannot be easily extended beyond next-to-next-to-leading order of pQCD. Several suggestions have been proposed to extend the BLM approach to all orders. In this paper we discuss two distinct methods. One is based on the “Principle of Maximum Conformality” (PMC), which provides a systematic all-orders method to eliminate the scale and scheme ambiguities of pQCD. The PMC extends the BLM procedure to all orders using renormalization group methods; as an outcome, it significantly improves pQCD convergence by eliminating renormalon divergences. An alternative method is the “sequential extended BLM” (seBLM) approach, which was primarily designed to improve the convergence of pQCD series. The seBLM, as originally proposed, introduces auxiliary fields and follows the pattern of the β0-expansion to fix the renormalization scale. However, the seBLM requires a recomputation of pQCD amplitudes including the auxiliary fields; due to the limited availability of calculations using these auxiliary fields, the seBLM has only been applied to a few processes at low orders. In order to avoid the complications of adding extra fields, we propose a modified version of the seBLM which allows us to apply this method to higher orders. As a result, we then perform detailed numerical comparisons of the two alternative scale-setting approaches by investigating their predictions for the annihilation cross section ratio R

  9. Novel methods for estimating lithium-ion battery state of energy and maximum available energy

    International Nuclear Information System (INIS)

    Zheng, Linfeng; Zhu, Jianguo; Wang, Guoxiu; He, Tingting; Wei, Yiying

    2016-01-01

    Highlights: • Study on temperature, current, aging dependencies of maximum available energy. • Study on the various factors dependencies of relationships between SOE and SOC. • A quantitative relationship between SOE and SOC is proposed for SOE estimation. • Estimate maximum available energy by means of moving-window energy-integral. • The robustness and feasibility of the proposed approaches are systematic evaluated. - Abstract: The battery state of energy (SOE) allows a direct determination of the ratio between the remaining and maximum available energy of a battery, which is critical for energy optimization and management in energy storage systems. In this paper, the ambient temperature, battery discharge/charge current rate and cell aging level dependencies of battery maximum available energy and SOE are comprehensively analyzed. An explicit quantitative relationship between SOE and state of charge (SOC) for LiMn_2O_4 battery cells is proposed for SOE estimation, and a moving-window energy-integral technique is incorporated to estimate battery maximum available energy. Experimental results show that the proposed approaches can estimate battery maximum available energy and SOE with high precision. The robustness of the proposed approaches against various operation conditions and cell aging levels is systematically evaluated.
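
    A schematic of the two bookkeeping steps in Python, with hypothetical signal names. The fitted SOE-SOC relationship and its temperature, rate and aging dependencies (the paper's actual contribution) are not reproduced; the functions below only illustrate energy-integral SOE tracking and a moving-window estimate of maximum available energy.

```python
import numpy as np

def update_soe(soe, v, i, dt_s, e_max_wh):
    """Energy-integral SOE update over one sample: subtract the energy
    drawn (v volts, i amps, i > 0 on discharge) from the normalized state."""
    return soe - (v * i * dt_s / 3600.0) / e_max_wh

def max_available_energy(v, i, t_s, soe_start, soe_end):
    """Moving-window estimate of maximum available energy: energy actually
    delivered across the window, scaled by the SOE swing it produced."""
    delivered_wh = np.trapz(v * i, t_s) / 3600.0
    return delivered_wh / (soe_start - soe_end)
```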

  10. Reliability of buildings in service limit state for maximum horizontal displacements

    Directory of Open Access Journals (Sweden)

    A. G. B. Corelhano

    Full Text Available The Brazilian design code ABNT NBR6118:2003 - Design of Concrete Structures - Procedures [1] proposes the use of simplified models for the consideration of non-linear material behavior in the evaluation of horizontal displacements in buildings. These models penalize the stiffness of columns and beams, representing the effects of concrete cracking and avoiding costly physically non-linear analyses. The objectives of the present paper are to investigate the accuracy and uncertainty of these simplified models, as well as to evaluate the reliabilities of structures designed following ABNT NBR6118:2003 [1] in the service limit state for horizontal displacements. Model error statistics are obtained from 42 representative plane frames. The reliabilities of three typical (4-, 8- and 12-floor) buildings are evaluated, using the simplified models and a rigorous, physically and geometrically non-linear analysis. Results show that the 70/70 (column/beam) stiffness reduction model is more accurate and less conservative than the 80/40 model. Results also show that the ABNT NBR6118:2003 [1] design criteria for horizontal displacement limit states (masonry damage according to ACI 435.3R-68 (1984) [10]) are conservative, and result in reliability indexes which are larger than those recommended in EUROCODE [2] for irreversible service limit states.

  11. Maximum Likelihood Blind Channel Estimation for Space-Time Coding Systems

    Directory of Open Access Journals (Sweden)

    Hakan A. Çırpan

    2002-05-01

    Full Text Available Sophisticated signal processing techniques have to be developed for capacity enhancement of future wireless communication systems. In recent years, space-time coding is proposed to provide significant capacity gains over the traditional communication systems in fading wireless channels. Space-time codes are obtained by combining channel coding, modulation, transmit diversity, and optional receive diversity in order to provide diversity at the receiver and coding gain without sacrificing the bandwidth. In this paper, we consider the problem of blind estimation of space-time coded signals along with the channel parameters. Both conditional and unconditional maximum likelihood approaches are developed and iterative solutions are proposed. The conditional maximum likelihood algorithm is based on iterative least squares with projection whereas the unconditional maximum likelihood approach is developed by means of finite state Markov process modelling. The performance analysis issues of the proposed methods are studied. Finally, some simulation results are presented.

  12. Derivation of some new distributions in statistical mechanics using maximum entropy approach

    Directory of Open Access Journals (Sweden)

    Ray Amritansu

    2014-01-01

    Full Text Available The maximum entropy principle has earlier been used to derive the Bose-Einstein (B.E.), Fermi-Dirac (F.D.) and Intermediate Statistics (I.S.) distributions of statistical mechanics. The central idea of these distributions is to predict the distribution of the microstates, which are the particles of the system, on the basis of the knowledge of some macroscopic data. The latter information is specified in the form of some simple moment constraints. One distribution differs from the other in the way in which the constraints are specified. In the present paper, we have derived some new distributions similar to the B.E. and F.D. distributions of statistical mechanics by using the maximum entropy principle. Some proofs of the B.E. and F.D. distributions are shown, and at the end some new results are discussed.
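
    The standard route alluded to above can be stated as a worked equation (textbook form, not the paper's new distributions): maximize the Bose or Fermi counting entropy S[{n_i}] subject to particle-number and energy constraints, enforced by Lagrange multipliers α and β.

```latex
\mathcal{L} = S[\{n_i\}] - \alpha\Big(\sum_i n_i - N\Big) - \beta\Big(\sum_i n_i \varepsilon_i - E\Big),
\qquad
\frac{\partial \mathcal{L}}{\partial n_i} = 0
\;\Longrightarrow\;
n_i = \frac{g_i}{e^{\alpha + \beta \varepsilon_i} \mp 1}
```

    The upper sign gives Bose-Einstein, the lower sign Fermi-Dirac; the classical Maxwell-Boltzmann form is recovered in the limit e^{α+βε_i} ≫ 1.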

  13. Unification of field theory and maximum entropy methods for learning probability densities

    OpenAIRE

    Kinney, Justin B.

    2014-01-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy de...

  14. Maximum entropy method approach to the θ term

    International Nuclear Information System (INIS)

    Imachi, Masahiro; Shinno, Yasuhiko; Yoneyama, Hiroshi

    2004-01-01

    In Monte Carlo simulations of lattice field theory with a θ term, one confronts the complex weight problem, or the sign problem. This is circumvented by performing the Fourier transform of the topological charge distribution P(Q). This procedure, however, causes a flattening phenomenon of the free energy f(θ), which makes study of the phase structure unfeasible. In order to treat this problem, we apply the maximum entropy method (MEM) to a Gaussian form of P(Q), which serves as a good example to test whether the MEM can be applied effectively to the θ term. We study the case with flattening as well as that without flattening. In the latter case, the results of the MEM agree with those obtained from the direct application of the Fourier transform. For the former, the MEM gives a smoother f(θ) than that of the Fourier transform. Among the various default models investigated, the images which yield the least error do not show flattening, although some others cannot be excluded given the uncertainty related to statistical error. (author)

  15. Application of Maximum Entropy Distribution to the Statistical Properties of Wave Groups

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    New distributions for the statistics of wave groups, based on the maximum entropy principle, are presented. The maximum entropy distributions appear to be superior to conventional distributions when applied to a limited amount of information. Their application to wave group properties shows the effectiveness of the maximum entropy distribution. An FFT filtering method is employed to obtain the wave envelope quickly and efficiently. Comparisons of both the maximum entropy distribution and the distribution of Longuet-Higgins (1984) with laboratory wind-wave data show that the former gives a better fit.

  16. The limit distribution of the maximum increment of a random walk with dependent regularly varying jump sizes

    DEFF Research Database (Denmark)

    Mikosch, Thomas Valentin; Moser, Martin

    2013-01-01

    We investigate the maximum increment of a random walk with heavy-tailed jump size distribution. Here heavy-tailedness is understood as regular variation of the finite-dimensional distributions. The jump sizes constitute a strictly stationary sequence. Using a continuous mapping argument acting on the point processes of the normalized jump sizes, we prove that the maximum increment of the random walk converges in distribution to a Fréchet distributed random variable.

  17. An ICMP-Based Mobility Management Approach Suitable for Protocol Deployment Limitation

    Directory of Open Access Journals (Sweden)

    Jeng-Yueng Chen

    2009-01-01

    Full Text Available Mobility management is one of the important tasks on wireless networks. Many approaches have been proposed in the past, but none of them has been widely deployed so far. Mobile IP (MIP) and Route Optimization (ROMIP) suffer, respectively, from the triangular routing problem and from the need for binding-cache support on every node on the entire Internet. One step toward a solution is the Mobile Routing Table (MRT), which enables edge routers to take over address binding. However, this approach demands that all the edge routers on the Internet support the MRT, resulting in protocol deployment difficulties. To address this problem and to offset the limitation of the original MRT approach, we propose two different schemes, an ICMP echo scheme and an ICMP destination-unreachable scheme. These two schemes work with the MRT to efficiently find MRT-enabled routers, which greatly reduces the number of triangular routes. In this paper, we analyze and compare standard MIP and the proposed approaches. Simulation results have shown that the proposed approaches reduce transmission delay, with only a few routers supporting the MRT.

  18. Detection of maximum loadability limits and weak buses using Chaotic PSO considering security constraints

    International Nuclear Information System (INIS)

    Acharjee, P.; Mallick, S.; Thakur, S.S.; Ghoshal, S.P.

    2011-01-01

    Highlights: → The unique cost function is derived considering practical security constraints. → New innovative formulae for the PSO parameters are developed for better performance. → The inclusion and implementation of chaos in the PSO technique is original and unique. → Weak buses are identified where FACTS devices can be implemented. → The CPSO technique gives the best performance for all the IEEE standard test systems. - Abstract: In the current research, chaotic search is used with the optimization technique for solving non-linear, complicated power system problems, because chaos can overcome the local optima problem of optimization techniques. The power system problem, more specifically voltage stability, is one practical example of a non-linear, complex, convex problem. Smart grids, restructured energy systems and socio-economic development introduce various uncertain events into power systems, and the level of uncertainty increases to a great extent day by day. In this context, analysis of voltage stability is essential. An efficient measure for assessing voltage stability is the maximum loadability limit (MLL). The MLL problem is formulated as a maximization problem considering practical security constraints (SCs). Detection of weak buses is also important for the analysis of power system stability. Both the MLL and the weak buses are identified by PSO methods, and FACTS devices can be applied to the detected weak buses for the improvement of stability. Three particle swarm optimization (PSO) techniques, namely general PSO (GPSO), adaptive PSO (APSO) and chaotic PSO (CPSO), are presented for a comparative study on obtaining the MLL and weak buses under different SCs. In the APSO method, the PSO parameters are made adaptive with the problem, and chaos is incorporated in the CPSO method to obtain reliable convergence and better performance. All three methods are applied on the standard IEEE 14-bus, 30-bus, 57-bus and 118-bus test systems to show their comparative computing effectiveness and
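
    A minimal sketch of the chaotic ingredient in Python: logistic-map iterates replace the uniform random draws in the velocity update, which is one common way chaos is injected into PSO. The paper's adaptive parameter formulae and its MLL fitness with security-constraint penalties are not reproduced here.

```python
import numpy as np

def logistic_map(z, mu=4.0):
    """Fully chaotic logistic map; its iterates densely fill (0, 1)."""
    return mu * z * (1.0 - z)

def cpso_update(x, v, pbest, gbest, z, w=0.7, c1=2.0, c2=2.0):
    """One CPSO position/velocity update: chaotic values z1, z2 stand in
    for the usual uniform random numbers, helping escape local optima."""
    z1 = logistic_map(z)
    z2 = logistic_map(z1)
    v_new = w * v + c1 * z1 * (pbest - x) + c2 * z2 * (gbest - x)
    return x + v_new, v_new, z2   # carry z2 forward as the chaos state
```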

  19. Minimal length, Friedmann equations and maximum density

    Energy Technology Data Exchange (ETDEWEB)

    Awad, Adel [Center for Theoretical Physics, British University of Egypt,Sherouk City 11837, P.O. Box 43 (Egypt); Department of Physics, Faculty of Science, Ain Shams University,Cairo, 11566 (Egypt); Ali, Ahmed Farag [Centre for Fundamental Physics, Zewail City of Science and Technology,Sheikh Zayed, 12588, Giza (Egypt); Department of Physics, Faculty of Science, Benha University,Benha, 13518 (Egypt)

    2014-06-16

    Inspired by Jacobson’s thermodynamic approach, Cai et al. have shown the emergence of the Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation http://dx.doi.org/10.1103/PhysRevD.75.084003 of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example we study the evolution of the equation of state p=ωρ through its phase-space diagram to show the existence of a maximum energy which is reachable in a finite time.

  20. Maximum-Entropy Inference with a Programmable Annealer

    Science.gov (United States)

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-03-01

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then the ground state maximises the likelihood that the solution is correct. The maximum entropy solution, on the other hand, takes the form of a Boltzmann distribution over the ground and excited states of the cost function, to correct for noise. Here we use a programmable annealer for the information decoding problem, which we simulate as a random Ising model in a field. We show experimentally that finite-temperature maximum entropy decoding can give slightly better bit-error rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.

  1. Experimental studies to validate model calculations and maximum solubility limits for Plutonium and Americium

    International Nuclear Information System (INIS)

    2017-01-01

    This report focuses on studies by KIT-INE to derive a significantly improved description of the chemical behaviour of Americium and Plutonium in saline NaCl, MgCl2 and CaCl2 brine systems. The studies are based on new experimental data and aim at deriving reliable Am and Pu solubility limits for the investigated systems as well as deriving comprehensive thermodynamic model descriptions. Both aspects are of high relevance in the context of potential source term estimations for Americium and Plutonium in aqueous brine systems and related scenarios. Americium and Plutonium are long-lived alpha-emitting radionuclides which, due to their high radiotoxicity, need to be accounted for in a reliable and traceable way. The hydrolysis of trivalent actinides and the effect of highly alkaline pH conditions on the solubility of trivalent actinides in calcium chloride rich brine solutions were investigated and a thermodynamic model derived. The solubility of Plutonium in saline brine systems was studied under reducing and non-reducing conditions and is described within a new thermodynamic model. The influence of dissolved carbonate on Americium and Plutonium solubility in MgCl2 solutions was investigated and quantitative information on Am and Pu solubility limits in these systems derived. Thermodynamic constants and model parameters derived in this work are implemented in the Thermodynamic Reference Database THEREDA owned by BfS. According to the quality assurance approach in THEREDA, it was necessary to publish parts of this work in peer-reviewed scientific journals. The publications are focused on solubility experiments, spectroscopy of aquatic and solid species, and thermodynamic data. (Neck et al., Pure Appl. Chem., Vol. 81, (2009), pp. 1555-1568; Altmaier et al., Radiochimica Acta, 97, (2009), pp. 187-192; Altmaier et al., Actinide Research Quarterly, No. 2, (2011), pp. 29-32.)

  2. 47 CFR 22.535 - Effective radiated power limits.

    Science.gov (United States)

    2010-10-01

    ... limits. The effective radiated power (ERP) of transmitters operating on the channels listed in § 22.531 must not exceed the limits in this section. (a) Maximum ERP. The ERP must not exceed the applicable limits in this paragraph under any circumstances. Frequency range (MHz) Maximum ERP (Watts) 35-36 600 43...

  3. On Maximum Entropy and Inference

    Directory of Open Access Journals (Sweden)

    Luigi Gresele

    2017-11-01

    Full Text Available Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is set not by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method by showing its ability to recover the correct model in a few prototype cases and discuss its application to a real dataset.

  4. An improved maximum power point tracking method for a photovoltaic system

    Science.gov (United States)

    Ouoba, David; Fakkar, Abderrahim; El Kouari, Youssef; Dkhichi, Fayrouz; Oukarfi, Benyounes

    2016-06-01

    In this paper, an improved auto-scaling variable step-size Maximum Power Point Tracking (MPPT) method for a photovoltaic (PV) system is proposed. To achieve simultaneously a fast dynamic response and stable steady-state power, a first improvement was made to the step-size scaling function of the duty cycle that controls the converter. An algorithm was then proposed to address the wrong decisions that may be made at an abrupt change of irradiation. The proposed auto-scaling variable step-size approach was compared to various other approaches from the literature, such as the classical fixed step-size, variable step-size and a recent auto-scaling variable step-size maximum power point tracking approach. Simulation results obtained with MATLAB/SIMULINK are given and discussed for validation.
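
    A sketch of the underlying perturb-and-observe loop with an auto-scaled step, in Python. The scaling constant, the duty-cycle limits and the sign convention (which depends on converter topology) are illustrative assumptions; the paper's specific scaling function and its irradiation-change detector are not reproduced.

```python
def mppt_update(v, p, v_prev, p_prev, duty, n=0.05, d_min=0.1, d_max=0.9):
    """Variable step-size P&O: the duty-cycle perturbation scales with
    |dP/dV|, so steps are large far from the maximum power point and
    shrink near it (where dP/dV -> 0)."""
    dv, dp = v - v_prev, p - p_prev
    step = n * abs(dp / dv) if dv != 0.0 else 0.0
    # keep perturbing the same way while power rises, reverse otherwise
    direction = 1.0 if dp * dv > 0.0 else -1.0
    return min(max(duty + direction * step, d_min), d_max)
```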

  5. Energetic constraints, size gradients, and size limits in benthic marine invertebrates.

    Science.gov (United States)

    Sebens, Kenneth P

    2002-08-01

    Populations of marine benthic organisms occupy habitats with a range of physical and biological characteristics. In the intertidal zone, energetic costs increase with temperature and aerial exposure, and prey intake increases with immersion time, generating size gradients with small individuals often found at upper limits of distribution. Wave action can have similar effects, limiting feeding time or success, although certain species benefit from wave dislodgment of their prey; this also results in gradients of size and morphology. The difference between energy intake and metabolic (and/or behavioral) costs can be used to determine an energetic optimal size for individuals in such populations. Comparisons of the energetic optimal size to the maximum predicted size based on mechanical constraints, and the ensuing mortality schedule, provide a mechanism to study and explain organism size gradients in intertidal and subtidal habitats. For species where the energetic optimal size is well below the maximum size that could persist under a certain set of wave/flow conditions, it is probable that energetic constraints dominate. When the opposite is true, populations of small individuals can dominate habitats with strong dislodgment or damage probability. When the maximum size of individuals is far below either energetic optima or mechanical limits, other sources of mortality (e.g., predation) may favor energy allocation to early reproduction rather than to continued growth. Predictions based on optimal size models have been tested for a variety of intertidal and subtidal invertebrates including sea anemones, corals, and octocorals. This paper provides a review of the optimal size concept, and employs a combination of the optimal energetic size model and a life history modeling approach to explore energy allocation to growth or reproduction as the optimal size is approached.

  6. Comparison of maximum runup through analytical and numerical approaches for different fault parameters estimates

    Science.gov (United States)

    Kanoglu, U.; Wronna, M.; Baptista, M. A.; Miranda, J. M. A.

    2017-12-01

    The one-dimensional analytical runup theory, in combination with near-shore synthetic waveforms, is a promising tool for tsunami rapid early warning systems. Its application in realistic cases with complex bathymetry and initial wave conditions from inverse modelling has shown that maximum runup values can be estimated reasonably well. In this study we generate simplistic bathymetry domains which resemble realistic near-shore features. We investigate the sensitivity of the analytical runup formulae to variation of the fault source parameters and near-shore bathymetric features. To do this, we systematically vary the fault plane parameters to compute the initial tsunami wave condition. Subsequently, we use the initial conditions to run the numerical tsunami model on a coupled system of four nested grids and compare the results to the analytical estimates. Variation of the dip angle of the fault plane showed that the analytical estimates differ by less than 10% for angles of 5-45 degrees in a simple bathymetric domain. These results show that the use of analytical formulae for fast runup estimates constitutes a very promising approach in a simple bathymetric domain and might be implemented in hazard mapping and early warning.

  7. Thermodynamic limit for coherence-limited solar power conversion

    Science.gov (United States)

    Mashaal, Heylal; Gordon, Jeffrey M.

    2014-09-01

    The spatial coherence of solar beam radiation is a key constraint in solar rectenna conversion. Here, we present a derivation of the thermodynamic limit for coherence-limited solar power conversion - an expansion of Landsberg's elegant basic bound, originally limited to incoherent converters at maximum flux concentration. First, we generalize Landsberg's work to arbitrary concentration and angular confinement. Then we derive how the values are further lowered for coherence-limited converters. The results do not depend on a particular conversion strategy. As such, they pertain to systems that span geometric to physical optics, as well as classical to quantum physics. Our findings indicate promising potential for solar rectenna conversion.
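
    For orientation, the incoherent benchmark that the paper generalizes is Landsberg's efficiency bound at maximum concentration (standard textbook values shown; the coherence-limited values derived in the paper lie below it):

```latex
\eta_{\mathrm{L}} \;=\; 1 \;-\; \frac{4}{3}\,\frac{T_a}{T_s} \;+\; \frac{1}{3}\left(\frac{T_a}{T_s}\right)^{4}
\;\approx\; 0.933
\qquad (T_s = 6000\,\mathrm{K},\; T_a = 300\,\mathrm{K}).
```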

  8. Quantity and quality limit detritivore growth: mechanisms revealed by ecological stoichiometry and co-limitation theory.

    Science.gov (United States)

    Halvorson, Halvor M; Sperfeld, Erik; Evans-White, Michelle A

    2017-12-01

    Resource quantity and quality are fundamental bottom-up constraints on consumers. Best understood in autotroph-based systems, the co-occurrence of these constraints may be common but remains poorly studied in detrital-based systems. Here, we used a laboratory growth experiment to test limitation of the detritivorous caddisfly larvae Pycnopsyche lepida across a concurrent gradient of oak litter quantity (food supply) and quality (phosphorus:carbon [P:C] ratios). Growth increased simultaneously with quantity and quality, indicating co-limitation across the resource gradients. We merged approaches of ecological stoichiometry and co-limitation theory, showing how co-limitation reflected shifts in C and P acquisition through homeostatic regulation. Increased growth was best explained by elevated consumption rates and improved P assimilation, which both increased with elevated quantity and quality. Notably, C assimilation efficiencies remained unchanged and reached a maximum of only 18% at low quantity despite pronounced C limitation. Detrital C recalcitrance and substantive post-assimilatory C losses probably set a minimum quantity threshold to achieve positive C balance. Above this threshold, greater quality enhanced larval growth, probably by improving P assimilation toward P-intensive growth. We suggest this interplay of C and P acquisition contributes to detritivore co-limitation, highlighting quantity and quality as potential simultaneous bottom-up controls in detrital-based ecosystems, including under anthropogenic change like nutrient enrichment. © 2017 by the Ecological Society of America.

  9. Novel maximum-margin training algorithms for supervised neural networks.

    Science.gov (United States)

    Ludwig, Oswaldo; Nunes, Urbano

    2010-06-01

    This paper proposes three novel training methods, two of them based on the backpropagation approach and a third one based on information theory, for multilayer perceptron (MLP) binary classifiers. Both backpropagation methods are based on the maximal-margin (MM) principle. The first one, based on the gradient descent with adaptive learning rate algorithm (GDX) and named maximum-margin GDX (MMGDX), directly increases the margin of the MLP output-layer hyperplane. The proposed method jointly optimizes both MLP layers in a single process, backpropagating the gradient of an MM-based objective function through the output and hidden layers, in order to create a hidden-layer space that enables a higher margin for the output-layer hyperplane, avoiding the testing of many arbitrary kernels, as occurs in the case of support vector machine (SVM) training. The proposed MM-based objective function aims to stretch out the margin to its limit. An objective function based on the Lp-norm is also proposed in order to take into account the idea of support vectors, however, overcoming the complexity involved in solving a constrained optimization problem, as is usual in SVM training. In fact, all the training methods proposed in this paper have time and space complexities O(N), while usual SVM training methods have time complexity O(N^3) and space complexity O(N^2), where N is the training-data-set size. The second approach, named minimization of interclass interference (MICI), has an objective function inspired by Fisher discriminant analysis. Such an algorithm aims to create an MLP hidden output where the patterns have a desirable statistical distribution. In both training methods, the maximum area under the ROC curve (AUC) is applied as the stop criterion. The third approach offers a robust training framework able to take the best of each proposed training method. The main idea is to compose a neural model by using neurons extracted from three other neural networks, each one previously trained by

  10. Progress in basic principles of limitation in radiation protection

    International Nuclear Information System (INIS)

    Ramzaev, P.V.; Tarasov, S.I.; Troitskaya, M.N.; Ermolaeva, A.P.

    1977-01-01

    For purposes of limitation of harmful factors, e.g. radiation, it is proposed to divide the countless biological effects into three groups: 1) socially important effects (ultimate end effects); 2) intermediate effects (different diseases, etc.), which are connected with and controlled by the first group; 3) purely biological effects, whose importance is not known. To determine the first-group effects, four indices are identified that describe all significant sides of human life: time of life, the life-time integral of mental and physical capacity for work, aesthetic satisfaction from the organism itself, and reproduction of descendants. They reflect the main social and individual interests related to the functioning of the organism. On the basis of weighing these indices, a united general index of health, in the form of the time of a full life, is suggested. The united index can be used for different principles of limitation (based on threshold, acceptable risk, or maximum benefit). To realize the principle of maximum public benefit as the ideal principle of future limitation, all benefit and detriment from the utilization of harmful sources must be expressed in the united index of health (instead of money), which is the greatest value of the individual and society. The authors suggest standardizing ionizing radiation on the same general methodological approaches that are acceptable for non-ionizing factors too.

  11. Multi-approach analysis of maximum riverbed scour depth above subway tunnel

    OpenAIRE

    Jun Chen; Hong-wu Tang; Zui-sen Li; Wen-hong Dai

    2010-01-01

    When subway tunnels are routed underneath rivers, riverbed scour may expose the structure, with potentially severe consequences. Thus, it is important to identify the maximum scour depth to ensure that the designed buried depth is adequate. There are a range of methods that may be applied to this problem, including the fluvial process analysis method, geological structure analysis method, scour formula method, scour model experiment method, and numerical simulation method. However, the applic...

  12. An entropy approach for evaluating the maximum information content achievable by an urban rainfall network

    Directory of Open Access Journals (Sweden)

    E. Ridolfi

    2011-07-01

    Full Text Available Hydrological models are the basis of operational flood-forecasting systems. The accuracy of these models is strongly dependent on the quality and quantity of the input information represented by rainfall height. Finer space-time rainfall resolution results in more accurate hazard forecasting. In this framework, an optimum raingauge network is essential in predicting flood events.

    This paper develops an entropy-based approach to evaluate the maximum information content achievable by a rainfall network for different sampling time intervals. The procedure is based on the determination of the coefficients of transferred and nontransferred information and on the relative isoinformation contours.

    The nontransferred information value achieved by the whole network is strictly dependent on the sampling time intervals considered. An empirical curve is defined, to assess the objective of the research: the nontransferred information value is plotted versus the associated sampling time on a semi-log scale. The curve has a linear trend.

    In this paper, the methodology is applied to the high-density raingauge network of the urban area of Rome.

  13. Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains

    Science.gov (United States)

    Cofré, Rodrigo; Maldonado, Cesar

    2018-01-01

    We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. We review large deviations techniques useful in this context to describe properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuation of correlations, distinguishability and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.

  14. Approach to DOE threshold guidance limits

    International Nuclear Information System (INIS)

    Shuman, R.D.; Wickham, L.E.

    1984-01-01

    The need for less restrictive criteria governing disposal of extremely low-level radioactive waste has long been recognized. The Low-Level Waste Management Program has been directed by the Department of Energy (DOE) to aid in the development of a threshold guidance limit for DOE low-level waste facilities. Project objectives are concerned with the definition of a threshold limit dose and pathway analysis of radionuclide transport within selected exposure scenarios at DOE sites. Results of the pathway analysis will be used to determine waste radionuclide concentration guidelines that meet the defined threshold limit dose. Methods of measurement and verification of concentration limits round out the project's goals. Work on defining a threshold limit dose is nearing completion. Pathway analysis of sanitary landfill operations at the Savannah River Plant and the Idaho National Engineering Laboratory is in progress using the DOSTOMAN computer code. Concentration limit calculations and determination of implementation procedures shall follow completion of the pathways work. 4 references

  15. Maximum entropy decomposition of quadrupole mass spectra

    International Nuclear Information System (INIS)

    Toussaint, U. von; Dose, V.; Golan, A.

    2004-01-01

    We present an information-theoretic method called generalized maximum entropy (GME) for decomposing mass spectra of gas mixtures from noisy measurements. In this GME approach to the noisy, underdetermined inverse problem, the joint entropies of concentration, cracking, and noise probabilities are maximized subject to the measured data. This provides a robust estimation for the unknown cracking patterns and the concentrations of the contributing molecules. The method is applied to mass spectroscopic data of hydrocarbons, and the estimates are compared with those received from a Bayesian approach. We show that the GME method is efficient and is computationally fast

  16. Combining Experiments and Simulations Using the Maximum Entropy Principle

    DEFF Research Database (Denmark)

    Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten

    2014-01-01

    in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results...

  17. Maximum Likelihood Compton Polarimetry with the Compton Spectrometer and Imager

    Energy Technology Data Exchange (ETDEWEB)

    Lowell, A. W.; Boggs, S. E; Chiu, C. L.; Kierans, C. A.; Sleator, C.; Tomsick, J. A.; Zoglauer, A. C. [Space Sciences Laboratory, University of California, Berkeley (United States); Chang, H.-K.; Tseng, C.-H.; Yang, C.-Y. [Institute of Astronomy, National Tsing Hua University, Taiwan (China); Jean, P.; Ballmoos, P. von [IRAP Toulouse (France); Lin, C.-H. [Institute of Physics, Academia Sinica, Taiwan (China); Amman, M. [Lawrence Berkeley National Laboratory (United States)

    2017-10-20

    Astrophysical polarization measurements in the soft gamma-ray band are becoming more feasible as detectors with high position and energy resolution are deployed. Previous work has shown that the minimum detectable polarization (MDP) of an ideal Compton polarimeter can be improved by ∼21% when an unbinned, maximum likelihood method (MLM) is used instead of the standard approach of fitting a sinusoid to a histogram of azimuthal scattering angles. Here we outline a procedure for implementing this maximum likelihood approach for real, nonideal polarimeters. As an example, we use the recent observation of GRB 160530A with the Compton Spectrometer and Imager. We find that the MDP for this observation is reduced by 20% when the MLM is used instead of the standard method.
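
    A toy version of the unbinned fit in Python. The single global modulation factor mu100, the cosine convention and the perfectly flat instrument response are simplifying assumptions; the analysis described in the record folds per-event instrument response into the likelihood.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, phi, mu100):
    """Unbinned negative log-likelihood of azimuthal scattering angles for
    p(phi) = (1 + mu100 * Pi * cos(2*(phi - phi0))) / (2*pi)."""
    pol, phi0 = params
    pdf = (1.0 + mu100 * pol * np.cos(2.0 * (phi - phi0))) / (2.0 * np.pi)
    return -np.sum(np.log(pdf))

def fit_polarization(phi, mu100=0.3):
    """Fit the polarization fraction Pi and angle phi0 from event angles."""
    result = minimize(neg_log_likelihood, x0=(0.1, 0.0), args=(phi, mu100),
                      bounds=[(0.0, 1.0), (-np.pi / 2.0, np.pi / 2.0)])
    return result.x
```

    Because every event enters the likelihood individually instead of being binned into an azimuthal histogram, no modulation information is lost to binning, which is the source of the quoted MDP improvement.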

  18. Perspective: Maximum caliber is a general variational principle for dynamical systems.

    Science.gov (United States)

    Dixit, Purushottam D; Wagoner, Jason; Weistuch, Corey; Pressé, Steve; Ghosh, Kingshuk; Dill, Ken A

    2018-01-07

    We review here Maximum Caliber (Max Cal), a general variational principle for inferring distributions of paths in dynamical processes and networks. Max Cal is to dynamical trajectories what the principle of maximum entropy is to equilibrium states or stationary populations. In Max Cal, you maximize a path entropy over all possible pathways, subject to dynamical constraints, in order to predict relative path weights. Many well-known relationships of non-equilibrium statistical physics-such as the Green-Kubo fluctuation-dissipation relations, Onsager's reciprocal relations, and Prigogine's minimum entropy production-are limited to near-equilibrium processes. Max Cal is more general. While it can readily derive these results under those limits, Max Cal is also applicable far from equilibrium. We give examples of Max Cal as a method of inference about trajectory distributions from limited data, finding reaction coordinates in bio-molecular simulations, and modeling the complex dynamics of non-thermal systems such as gene regulatory networks or the collective firing of neurons. We also survey its basis in principle and some limitations.

  19. Perspective: Maximum caliber is a general variational principle for dynamical systems

    Science.gov (United States)

    Dixit, Purushottam D.; Wagoner, Jason; Weistuch, Corey; Pressé, Steve; Ghosh, Kingshuk; Dill, Ken A.

    2018-01-01

    We review here Maximum Caliber (Max Cal), a general variational principle for inferring distributions of paths in dynamical processes and networks. Max Cal is to dynamical trajectories what the principle of maximum entropy is to equilibrium states or stationary populations. In Max Cal, you maximize a path entropy over all possible pathways, subject to dynamical constraints, in order to predict relative path weights. Many well-known relationships of non-equilibrium statistical physics—such as the Green-Kubo fluctuation-dissipation relations, Onsager's reciprocal relations, and Prigogine's minimum entropy production—are limited to near-equilibrium processes. Max Cal is more general. While it can readily derive these results under those limits, Max Cal is also applicable far from equilibrium. We give examples of Max Cal as a method of inference about trajectory distributions from limited data, finding reaction coordinates in bio-molecular simulations, and modeling the complex dynamics of non-thermal systems such as gene regulatory networks or the collective firing of neurons. We also survey its basis in principle and some limitations.

  20. Barge Train Maximum Impact Forces Using Limit States for the Lashings Between Barges

    National Research Council Canada - National Science Library

    Arroyo, Jose R; Ebeling, Robert M

    2005-01-01

    ... on: the mass including hydrodynamic added mass of the barge train, the approach velocity, the approach angle, the barge train moment of inertia, damage sustained by the barge structure, and friction...

  1. Antirandom Testing: A Distance-Based Approach

    Directory of Open Access Journals (Sweden)

    Shen Hui Wu

    2008-01-01

    Full Text Available Random testing requires each test to be selected randomly, regardless of the tests previously applied. This paper introduces the concept of antirandom testing, where each test applied is chosen such that its total distance from all previous tests is maximum. This spans the test vector space to the maximum extent possible for a given number of vectors. An algorithm for generating antirandom tests is presented. Compared with traditional pseudorandom testing, antirandom testing is found to be very effective when high fault coverage needs to be achieved with a limited number of test vectors. The superiority of the new approach is even more significant for testing bridging faults.
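
    A brute-force rendering of the selection rule in Python, using Hamming distance as the metric (Cartesian distance is another option discussed in this literature). Exhaustive enumeration over all 2^n candidates is for illustration only and does not scale to realistic vector widths.

```python
from itertools import product

def total_distance(candidate, chosen):
    """Sum of Hamming distances from a candidate to all previous tests."""
    return sum(sum(a != b for a, b in zip(candidate, t)) for t in chosen)

def antirandom_tests(n_bits, n_tests):
    """Greedy antirandom generation: start from the all-zero vector and
    repeatedly pick the candidate whose total distance to all previously
    chosen vectors is maximum (ties broken by enumeration order)."""
    space = list(product((0, 1), repeat=n_bits))
    chosen = [space[0]]
    while len(chosen) < n_tests:
        best = max((c for c in space if c not in chosen),
                   key=lambda c: total_distance(c, chosen))
        chosen.append(best)
    return chosen

print(antirandom_tests(3, 4))   # (0,0,0) is followed by its complement (1,1,1)
```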

  2. Discontinuity of maximum entropy inference and quantum phase transitions

    International Nuclear Information System (INIS)

    Chen, Jianxin; Ji, Zhengfeng; Yu, Nengkun; Zeng, Bei; Li, Chi-Kwong; Poon, Yiu-Tung; Shen, Yi; Zhou, Duanlu

    2015-01-01

    In this paper, we discuss the connection between two genuinely quantum phenomena—the discontinuity of quantum maximum entropy inference and quantum phase transitions at zero temperature. It is shown that the discontinuity of the maximum entropy inference of local observable measurements signals the non-local type of transitions, where local density matrices of the ground state change smoothly at the transition point. We then propose to use the quantum conditional mutual information of the ground state as an indicator to detect the discontinuity and the non-local type of quantum phase transitions in the thermodynamic limit. (paper)

  3. Rapid calculation of maximum particle lifetime for diffusion in complex geometries

    Science.gov (United States)

    Carr, Elliot J.; Simpson, Matthew J.

    2018-03-01

    Diffusion of molecules within biological cells and tissues is strongly influenced by crowding. A key quantity to characterize diffusion is the particle lifetime, which is the time taken for a diffusing particle to exit by hitting an absorbing boundary. Calculating the particle lifetime provides valuable information, for example, by allowing us to compare the timescale of diffusion and the timescale of the reaction, thereby helping us to develop appropriate mathematical models. Previous methods to quantify particle lifetimes focus on the mean particle lifetime. Here, we take a different approach and present a simple method for calculating the maximum particle lifetime. This is the time after which only a small specified proportion of particles in an ensemble remain in the system. Our approach produces accurate estimates of the maximum particle lifetime, whereas the mean particle lifetime always underestimates this value compared with data from stochastic simulations. Furthermore, we find that differences between the mean and maximum particle lifetimes become increasingly important when considering diffusion hindered by obstacles.
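
    A Monte Carlo check of the quantity in question, in Python: release random walkers at the centre of a 1D interval with absorbing ends and record the time by which all but a fraction eps have exited. The domain, diffusivity and threshold are illustrative; the paper's point is a fast calculation of this quantile in complex geometries, not this brute-force estimate.

```python
import numpy as np

def max_particle_lifetime(n=100_000, eps=0.01, dt=1e-3, D=1.0, L=1.0, seed=0):
    """Time after which only a proportion eps of diffusing particles
    remain in [0, L] with absorbing ends (release point: the centre)."""
    rng = np.random.default_rng(seed)
    x = np.full(n, 0.5 * L)
    alive = np.ones(n, dtype=bool)
    sigma = np.sqrt(2.0 * D * dt)   # Euler-Maruyama step for pure diffusion
    t = 0.0
    while alive.mean() > eps:
        t += dt
        x[alive] += sigma * rng.standard_normal(alive.sum())
        alive &= (x > 0.0) & (x < L)
    return t

print(max_particle_lifetime())   # the mean exit time underestimates this
```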

  4. 26 CFR 1.401(l)-5 - Overall permitted disparity limits.

    Science.gov (United States)

    2010-04-01

    ... permitted disparity limits. (a) Introduction—(1) In general. The maximum excess allowance and maximum offset... Commissioner considers appropriate. Additional rules may include (without being limited to) rules for computing...

  5. Fundamental gravitational limitations to quantum computing

    International Nuclear Information System (INIS)

    Gambini, R.; Porto, A.; Pullin, J.

    2006-01-01

    Lloyd has considered the ultimate limitations the fundamental laws of physics place on quantum computers. He concludes in particular that for an 'ultimate laptop' (a computer of one liter of volume and one kilogram of mass) the maximum number of operations per second is bounded by 10^51. The limit is derived considering ordinary quantum mechanics. Here we consider additional limits that are placed by quantum gravity ideas, namely the use of a relational notion of time and fundamental gravitational limits that exist on time measurements. We then particularize for the case of an ultimate laptop and show that the maximum number of operations is further constrained to 10^47 per second. (authors)
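
    The order of magnitude of the ordinary-quantum-mechanics bound can be reproduced in a few lines from the Margolus-Levitin theorem, which limits a system of average energy E to at most 2E/(πħ) orthogonal state changes per second; for the 1 kg "ultimate laptop", E = mc². This is a sanity check of the cited figure, not a derivation of the quantum-gravity correction.

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
m = 1.0                  # mass of the "ultimate laptop", kg

E = m * c ** 2                           # total rest energy available
ops_per_sec = 2 * E / (math.pi * hbar)   # Margolus-Levitin bound
print(f"{ops_per_sec:.2e} operations per second")  # ~5.4e50, i.e. of order 10^51
```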

  6. A Hybrid Physical and Maximum-Entropy Landslide Susceptibility Model

    Directory of Open Access Journals (Sweden)

    Jerry Davis

    2015-06-01

    Full Text Available The clear need for accurate landslide susceptibility mapping has led to multiple approaches. Physical models are easily interpreted and have high predictive capabilities but rely on spatially explicit and accurate parameterization, which is commonly not possible. Statistical methods can include other factors influencing slope stability such as distance to roads, but rely on good landslide inventories. The maximum entropy (MaxEnt) model has been widely and successfully used in species distribution mapping, because data on absence are often uncertain. Similarly, knowledge about the absence of landslides is often limited due to mapping scale or methodology. In this paper a hybrid approach is described that combines the physically-based landslide susceptibility model "Stability INdex MAPping" (SINMAP) with MaxEnt. This method is tested in a coastal watershed in Pacifica, CA, USA, with a well-documented landslide history including 3 inventories: 154 scars on 1941 imagery, 142 in 1975, and 253 in 1983. Results indicate that SINMAP alone overestimated susceptibility due to insufficient data on root cohesion. Models were compared using the SINMAP stability index (SI) or slope alone, and SI or slope in combination with other environmental factors: curvature, a 50-m trail buffer, vegetation, and geology. For 1941 and 1975, using slope alone was similar to using SI alone; however, in 1983 SI alone creates an Area Under the receiver operating Curve (AUC) of 0.785, compared with 0.749 for slope alone. In maximum-entropy models created using all environmental factors, the stability index (SI) from SINMAP represented the greatest contributions in all three years (1941: 48.1%; 1975: 35.3%; and 1983: 48%), with AUC of 0.795, 0.822, and 0.859, respectively; however, using slope instead of SI created similar overall AUC values, likely due to the combined effect with plan curvature indicating focused hydrologic inputs and vegetation identifying the effect of root cohesion

  7. A simulation study of Linsley's approach to infer elongation rate and fluctuations of the EAS maximum depth from muon arrival time distributions

    International Nuclear Information System (INIS)

    Badea, A.F.; Brancus, I.M.; Rebel, H.; Haungs, A.; Oehlschlaeger, J.; Zazyan, M.

    1999-01-01

    The average depth of the maximum, X_m, of the EAS (Extensive Air Shower) development depends on the energy E_0 and the mass of the primary particle; the energy dependence is traditionally expressed by the so-called elongation rate D_e, defined as the change in the average depth of maximum per decade of E_0, i.e. D_e = dX_m/dlog_10(E_0). Invoking the superposition-model approximation, i.e. assuming that a heavy primary (A) has the same shower elongation rate as a proton, but scaled to energies E_0/A, one can write X_m = X_init + D_e·log_10(E_0/A). In 1977 an indirect approach to studying D_e was suggested by Linsley. This approach can be applied to shower parameters which do not depend explicitly on the energy of the primary particle, but do depend on the depth of observation X and on the depth X_m of the shower maximum. The distributions of EAS muon arrival times, measured at a certain observation level relative to the arrival time of the shower core, reflect the path-length distribution of the muons travelling from their locus of production (near the axis) to the observation locus. The basic a priori assumption is that the mean value or median T of the time distribution can be associated with the height of the EAS maximum X_m, and that we can express T = f(X, X_m). In order to derive information about the elongation rate from the energy variation of the arrival-time quantities, some knowledge is required about F = -(∂T/∂X_m)_X / (∂T/∂X)_X_m, in addition to the variations with the depth of observation and the zenith-angle (θ) dependence, respectively. Thus ∂T/∂log_10(E_0)|_X = -F·D_e·(1/X_v)·∂T/∂secθ|_E_0. In a similar way the fluctuations σ(X_m) of X_m may be related to the fluctuations σ(T) of T, i.e. σ(T) = -σ(X_m)·F_σ·(1/X_v)·∂T/∂secθ|_E_0, with F_σ being the corresponding scaling factor for the fluctuations. By simulations of the EAS development using the Monte Carlo code CORSIKA the energy and angle

  8. Maximum likelihood as a common computational framework in tomotherapy

    International Nuclear Information System (INIS)

    Olivera, G.H.; Shepard, D.M.; Reckwerdt, P.J.; Ruchala, K.; Zachman, J.; Fitchard, E.E.; Mackie, T.R.

    1998-01-01

    Tomotherapy is a dose delivery technique using helical or axial intensity modulated beams. One of the strengths of the tomotherapy concept is that it can incorporate a number of processes into a single piece of equipment. These processes include treatment optimization planning, dose reconstruction and kilovoltage/megavoltage image reconstruction. A common computational technique that could be used for all of these processes would be very appealing. The maximum likelihood estimator, originally developed for emission tomography, can serve as a useful tool in imaging and radiotherapy. We believe that this approach can play an important role in the processes of optimization planning, dose reconstruction and kilovoltage and/or megavoltage image reconstruction. These processes involve computations that require comparable physical methods. They are also based on equivalent assumptions, and they have similar mathematical solutions. As a result, the maximum likelihood approach is able to provide a common framework for all three of these computational problems. We will demonstrate how maximum likelihood methods can be applied to optimization planning, dose reconstruction and megavoltage image reconstruction in tomotherapy. Results for planning optimization, dose reconstruction and megavoltage image reconstruction will be presented. Strengths and weaknesses of the methodology are analysed. Future directions for this work are also suggested. (author)
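
    For concreteness, a minimal sketch of the maximum-likelihood expectation-maximization (MLEM) update that underlies emission-tomography reconstruction, the kind of estimator the abstract proposes to reuse across planning, dose and image reconstruction. The system matrix and data here are placeholders; a real tomotherapy system would build A from the beam and detector geometry.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """MLEM iterations for the Poisson model y ~ Poisson(A @ x), with x >= 0.

    A: (n_measurements, n_voxels) system matrix; y: measured counts.
    Each iteration multiplies x by the back-projected ratio of measured
    to predicted data, which monotonically increases the likelihood.
    """
    x = np.ones(A.shape[1])
    sensitivity = A.sum(axis=0)
    for _ in range(n_iter):
        predicted = A @ x
        ratio = np.where(predicted > 0, y / predicted, 0.0)
        x *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)
    return x

# Tiny synthetic example: 3 measurements, 2 voxels.
A = np.array([[1.0, 0.5], [0.3, 1.0], [0.8, 0.8]])
true_x = np.array([2.0, 4.0])
y = A @ true_x          # noiseless "measurements"
print(mlem(A, y))       # converges towards [2, 4]
```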

  9. Sustainable and efficient allocation of limited blue and green water resources

    OpenAIRE

    Schyns, Joseph Franciscus

    2018-01-01

    Freshwater stems from precipitation over land, which differentiates into a blue water flow (groundwater and surface water) and a green water flow (evaporation). Both flows are partially allocated to serve the economy, resulting in blue and green water footprints (WF). There are maximum sustainable levels to the blue and green WF, since rainfall is limited and part of the flows need to be reserved for aquatic and terrestrial biodiversity. Water scarcity, the degree to which the actual approach...

  10. Approach to the thermodynamic limit in lattice QCD at μ≠0

    International Nuclear Information System (INIS)

    Splittorff, K.; Verbaarschot, J. J. M.

    2008-01-01

    The expectation value of the complex phase factor of the fermion determinant is computed to leading order in the p-expansion of the chiral Lagrangian. The computation is valid for μ < m_π/2 and determines the dependence of the sign problem on the volume and on the geometric shape of the volume. In the thermodynamic limit with L_i → ∞ at fixed temperature 1/L_0, the average phase factor vanishes. In the low-temperature limit where L_i/L_0 is fixed as L_i becomes large, the average phase factor approaches 1 for μ < m_π/2. The results for a finite volume compare well with lattice results obtained by Allton et al. After taking appropriate limits, we reproduce previously derived results for the ε regime and for one-dimensional QCD. The distribution of the phase itself is also computed

  11. Considerations on the establishment of maximum permissible exposure of man

    International Nuclear Information System (INIS)

    Jacobi, W.

    1974-01-01

    An attempt is made in this information lecture to give a quantitative analysis of the somatic radiation risk and to illustrate a concept for fixing dose limiting values. Of primary importance is the limiting value of the radiation exposure of the whole population. By consequential application of the risk concept, the following points are considered: 1) definition of the risk of late radiation damage (cancer, leukemia); 2) relationship between radiation dose and the radiation risk thus caused; 3) radiation risk and the present dose limiting values; 4) criteria for the maximum acceptable radiation risk; 5) limiting values which can be expected at present. (HP/LH) [de]

  12. Maximum entropy analysis of liquid diffraction data

    International Nuclear Information System (INIS)

    Root, J.H.; Egelstaff, P.A.; Nickel, B.G.

    1986-01-01

    A maximum entropy method for reducing truncation effects in the inverse Fourier transform of the structure factor, S(q), to the pair correlation function, g(r), is described. The advantages and limitations of the method are explored with the PY (Percus-Yevick) hard sphere structure factor as model input data. An example using real data on liquid chlorine is then presented. It is seen that spurious structure is greatly reduced in comparison to traditional Fourier transform methods. (author)

  13. ICRF power limitation relation to density limit in ASDEX

    International Nuclear Information System (INIS)

    Ryter, F.

    1992-01-01

    Launching high ICRF power into ASDEX plasmas required good antenna-plasma coupling. This could be achieved by sufficient electron density in front of the antennas, i.e. a small antenna-plasma distance (1-2 cm) and moderate to high line-averaged electron density compared to the density window in ASDEX. These are conditions potentially close to the density limit. ICRF-heated discharges terminated by plasma disruptions caused by the RF pulse limited the maximum RF power which could be injected into the plasma. The disruptions occurring in these cases have clear phenomenological similarities with those observed in density limit discharges. We show in this paper that the ICRF power limitation by plasma disruptions in ASDEX was due to reaching the density limit. (orig.)

  15. Maximum volume cuboids for arbitrarily shaped in-situ rock blocks as determined by discontinuity analysis—A genetic algorithm approach

    Science.gov (United States)

    Ülker, Erkan; Turanboy, Alparslan

    2009-07-01

    The block stone industry is one of the main commercial uses of rock. The economic potential of any block quarry depends on the recovery rate, which is defined as the total volume of useful rough blocks extractable from a fixed rock volume in relation to the total volume of moved material. The natural fracture system, the rock type(s) and the extraction method used directly influence the recovery rate. The major aims of this study are to establish a theoretical framework for optimising the extraction process in marble quarries for a given fracture system, and for predicting the recovery rate of the excavated blocks. We have developed a new approach that takes into consideration only the fracture structure for maximum block recovery in block quarries. The complete model uses a linear approach based on basic geometric features of discontinuities for 3D models, a tree structure (TS) for individual investigation and finally a genetic algorithm (GA) for the obtained cuboid volume(s). We tested our new model in a selected marble quarry in the town of İscehisar (AFYONKARAHİSAR, TURKEY).

  16. Afrika Statistika ISSN 2316-090X Comparison of the maximum ...

    African Journals Online (AJOL)

    Using the maximum likelihood method and the Bayesian approach, we estimate the parameters and ...

  17. Reliability of one-repetition maximum performance in people with chronic heart failure.

    Science.gov (United States)

    Ellis, Rachel; Holland, Anne E; Dodd, Karen; Shields, Nora

    2018-02-24

    Evaluate intra-rater and inter-rater reliability of the one-repetition maximum strength test in people with chronic heart failure. Intra-rater and inter-rater reliability study. A public tertiary hospital in northern metropolitan Melbourne. Twenty-four participants (nine female, mean age 71.8 ± 13.1 years) with mild to moderate heart failure of any aetiology. Lower limb strength was assessed by determining the maximum weight that could be lifted using a leg press. Intra-rater reliability was tested by one assessor on two separate occasions. Inter-rater reliability was tested by two assessors in random order. Intra-class correlation coefficients and 95% confidence intervals were calculated. Bland and Altman analyses were also conducted, including calculation of mean differences between measures and limits of agreement. Ten intra-rater and 21 inter-rater assessments were completed. Excellent intra-rater (ICC(2,1) = 0.96) and inter-rater (ICC(2,1) = 0.93) reliability was found. Intra-rater assessment showed less variability (mean difference 4.5 kg, limits of agreement -8.11 to 17.11 kg) than inter-rater agreement (mean difference -3.81 kg, limits of agreement -23.39 to 15.77 kg). One-repetition maximum determined using a leg press is a reliable measure in people with heart failure. Given its smaller limits of agreement, intra-rater testing is recommended. Implications for Rehabilitation: Using a leg press to determine a one-repetition maximum we were able to demonstrate excellent inter-rater and intra-rater reliability using an intra-class correlation coefficient. The Bland and Altman limits of agreement were wide for inter-rater reliability and so we recommend using one assessor if measuring change in strength within an individual over time.
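
    The Bland and Altman quantities reported above (mean difference and limits of agreement) are straightforward to compute; a small sketch with made-up leg-press loads, purely to show the arithmetic behind numbers like "-3.81 kg, limits of agreement -23.39 to 15.77 kg".

```python
import numpy as np

def bland_altman(a, b):
    """Mean difference and 95% limits of agreement between paired measurements."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    mean_diff = d.mean()
    half_width = 1.96 * d.std(ddof=1)
    return mean_diff, (mean_diff - half_width, mean_diff + half_width)

# Hypothetical 1-RM leg-press loads (kg) from two assessors.
rater_1 = [80, 95, 60, 110, 70, 85]
rater_2 = [76, 92, 58, 104, 66, 82]
mean_diff, (lo, hi) = bland_altman(rater_1, rater_2)
print(f"mean difference {mean_diff:.1f} kg, limits of agreement {lo:.1f} to {hi:.1f} kg")
```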

  18. Maximum heart rate in brown trout (Salmo trutta fario) is not limited by firing rate of pacemaker cells.

    Science.gov (United States)

    Haverinen, Jaakko; Abramochkin, Denis V; Kamkin, Andre; Vornanen, Matti

    2017-02-01

    Temperature-induced changes in cardiac output (Q̇) in fish are largely dependent on thermal modulation of heart rate (f_H), and at high temperatures Q̇ collapses due to heat-dependent depression of f_H. This study tests the hypothesis that the firing rate of sinoatrial pacemaker cells sets the upper thermal limit of f_H in vivo. To this end, the temperature dependence of the action potential (AP) frequency of enzymatically isolated pacemaker cells (pacemaker rate, f_PM), the spontaneous beating rate of isolated sinoatrial preparations (f_SA), and the in vivo f_H of cold-acclimated (4°C) brown trout (Salmo trutta fario) were compared under acute thermal challenges. With rising temperature, f_PM steadily increased because of the acceleration of diastolic depolarization and shortening of AP duration up to the break point temperature (T_BP) of 24.0 ± 0.37°C, at which point the electrical activity abruptly ceased. The maximum f_PM at T_BP was much higher [193 ± 21.0 beats per minute (bpm)] than the peak f_SA (94.3 ± 6.0 bpm at 24.1°C) or peak f_H (76.7 ± 2.4 bpm at 15.7 ± 0.82°C), indicating that the maximum f_H is not limited by the intrinsic firing rate of the pacemaker cells in brown trout in vivo. Copyright © 2017 the American Physiological Society.

  19. Serious limitations of the QTL/Microarray approach for QTL gene discovery

    Directory of Open Access Journals (Sweden)

    Warden Craig H

    2010-07-01

    Full Text Available Abstract Background It has been proposed that the use of gene expression microarrays in nonrecombinant parental or congenic strains can accelerate the process of isolating individual genes underlying quantitative trait loci (QTL). However, the effectiveness of this approach has not been assessed. Results Thirty-seven studies that have implemented the QTL/microarray approach in rodents were reviewed. About 30% of the studies showed enrichment for QTL candidates, mostly in comparisons between congenic and background strains. Three studies led to the identification of an underlying QTL gene. To complement the literature results, a microarray experiment was performed using three mouse congenic strains isolating the effects of at least 25 biometric QTL. Results show that genes in the congenic donor regions were preferentially selected. However, within donor regions, the distribution of differentially expressed genes was homogeneous once gene density was accounted for. Genes within identical-by-descent (IBD) regions were less likely to be differentially expressed in chromosome 2, but not in chromosomes 11 and 17. Furthermore, expression of QTL regulated in cis (cis-eQTL) showed higher expression in the background genotype, which was partially explained by the presence of single nucleotide polymorphisms (SNP). Conclusions The literature shows limited successes from the QTL/microarray approach to identify QTL genes. Our own results from microarray profiling of three congenic strains revealed a strong tendency to select cis-eQTL over trans-eQTL. IBD regions had little effect on the rate of differential expression, and we provide several reasons why IBD should not be used to discard eQTL candidates. In addition, mismatch probes produced false cis-eQTL that could not be completely removed with the current strains' genotypes and low-probe-density microarrays. The reviewed studies did not account for lack of coverage from the platforms used and therefore removed genes

  20. Maximum power point tracker based on fuzzy logic

    International Nuclear Information System (INIS)

    Daoud, A.; Midoun, A.

    2006-01-01

    The solar energy is used as the power source in photovoltaic power systems, and an intelligent power management system is important to obtain the maximum power from the limited solar panels. With the changing of the sun illumination, due to variation of the angle of incidence of solar radiation and of the temperature of the panels, a Maximum Power Point Tracker (MPPT) enables optimization of solar power generation. The MPPT is a sub-system designed to extract the maximum power from a power source. In the case of a solar panel power source, the maximum power point varies as a result of changes in its electrical characteristics, which in turn are functions of radiation dose, temperature, ageing and other effects. The MPPT maximizes the power output from the panels for a given set of conditions by detecting the best working point of the power characteristic and then controlling the current through the panels or the voltage across them. Many MPPT methods have been reported in the literature. These MPPT techniques can be classified into three main categories: lookup table methods, hill climbing methods and computational methods. The techniques vary according to the degree of sophistication, processing time and memory requirements. The perturbation and observation algorithm (a hill climbing technique) is commonly used due to its ease of implementation and relative tracking efficiency. However, it has been shown that when the insolation changes rapidly, the perturbation and observation method is slow to track the maximum power point. In recent years, fuzzy controllers have been used for maximum power point tracking. This method only requires linguistic control rules for the maximum power point; the mathematical model is not required, and therefore this control method is easy to implement in a real control system. In this paper, we present a simple robust MPPT using fuzzy set theory where the hardware consists of Microchip's microcontroller unit control card and
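
    The perturb and observe baseline mentioned above is only a few lines of code, which also makes its weakness visible: a fixed step size dv trades tracking speed against steady-state oscillation, which is exactly what a fuzzy rule base is meant to adapt. The panel model and all parameters below are toy assumptions.

```python
def perturb_and_observe(read_power, set_voltage, v_start, dv=0.5, n_steps=100):
    """Hill-climbing MPPT: keep stepping the operating voltage in the
    direction that last increased the measured panel power."""
    v, direction = v_start, +1
    set_voltage(v)
    p_prev = read_power()
    for _ in range(n_steps):
        v += direction * dv
        set_voltage(v)
        p = read_power()
        if p < p_prev:            # power fell, so reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# Toy single-peak P-V curve with the maximum power point near 17 V (~100 W).
state = {"v": 10.0}
set_voltage = lambda v: state.update(v=v)
read_power = lambda: max(0.0, state["v"] * (34.0 - state["v"]) / 2.89)
print(f"settled near {perturb_and_observe(read_power, set_voltage, 10.0):.1f} V")
```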

  1. Approaching the Hole Mobility Limit of GaSb Nanowires.

    Science.gov (United States)

    Yang, Zai-xing; Yip, SenPo; Li, Dapan; Han, Ning; Dong, Guofa; Liang, Xiaoguang; Shu, Lei; Hung, Tak Fu; Mo, Xiaoliang; Ho, Johnny C

    2015-09-22

    In recent years, high-mobility GaSb nanowires have received tremendous attention for high-performance p-type transistors; however, due to the difficulty in achieving thin and uniform nanowires (NWs), there have been limited reports until now addressing their diameter-dependent properties and the hole mobility limit in this important one-dimensional material system, all of which is essential information for the deployment of GaSb NWs in various applications. Here, by employing the newly developed surfactant-assisted chemical vapor deposition, high-quality and uniform GaSb NWs with controllable diameters, spanning from 16 to 70 nm, are successfully prepared, enabling the direct assessment of their growth orientation and hole mobility as a function of diameter while elucidating the role of the sulfur surfactant and the interplay between surface and interface energies of the NWs on their electrical properties. The sulfur passivation is found to efficiently stabilize the high-energy NW sidewalls of (111) and (311) in order to yield the thin NWs (i.e., <40 nm in diameter), whereas the thick NWs (i.e., >40 nm in diameter) would grow along the most energy-favorable close-packed planes with the orientation of ⟨111⟩, supported by the approximate atomic models. Importantly, through the reliable control of sulfur passivation, growth orientation and surface roughness, GaSb NWs with a peak hole mobility of ∼400 cm² V⁻¹ s⁻¹ for a diameter of 48 nm, approaching the theoretical limit under a hole concentration of ∼2.2 × 10¹⁸ cm⁻³, can be achieved for the first time. All these results indicate their promising potency for utilization in different technological domains.

  2. Revision of regional maximum flood (RMF) estimation in Namibia ...

    African Journals Online (AJOL)

    Extreme flood hydrology in Namibia for the past 30 years has largely been based on the South African Department of Water Affairs Technical Report 137 (TR 137) of 1988. This report proposes an empirically established upper limit of flood peaks for regions called the regional maximum flood (RMF), which could be ...

  3. Accurate Maximum Power Tracking in Photovoltaic Systems Affected by Partial Shading

    Directory of Open Access Journals (Sweden)

    Pierluigi Guerriero

    2015-01-01

    Full Text Available A maximum power tracking algorithm exploiting operating point information gained on individual solar panels is presented. The proposed algorithm recognizes the presence of multiple local maxima in the power-voltage curve of a shaded solar field and evaluates the coordinates of the absolute maximum. The effectiveness of the proposed approach is demonstrated by means of circuit-level simulation and experimental results. Experiments showed that, in comparison with a standard perturb and observe algorithm, we achieve faster convergence in normal operating conditions (when the solar field is uniformly illuminated) and we accurately locate the absolute maximum power point in partial shading conditions, thus avoiding convergence on local maxima.
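
    A sketch of the key idea, global scanning before local tracking, on a synthetic two-peak curve of the kind partial shading produces: a plain hill-climber started near the lower peak would lock onto it, while a full sweep finds the absolute maximum that can then seed a fine perturb-and-observe refinement. The curve and all numbers are invented for illustration.

```python
import numpy as np

def pv_power(v):
    """Synthetic P-V curve of a partially shaded string: two local maxima."""
    return 200 * np.exp(-((v - 12.0) / 4.0) ** 2) + 260 * np.exp(-((v - 30.0) / 5.0) ** 2)

def global_mpp(power, v_min=0.5, v_max=40.0, n=400):
    """Coarse sweep of the whole voltage range to locate the absolute maximum."""
    v = np.linspace(v_min, v_max, n)
    p = power(v)
    i = int(np.argmax(p))
    return v[i], p[i]

v_star, p_star = global_mpp(pv_power)
print(f"absolute MPP: {p_star:.0f} W at {v_star:.1f} V (the ~200 W peak near 12 V is rejected)")
```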

  4. Thermodynamic approach to biomass gasification

    International Nuclear Information System (INIS)

    Boissonnet, G.; Seiler, J.M.

    2003-01-01

    The document presents an approach to biomass transformation in the presence of steam, hydrogen or oxygen. Calculation results based on thermodynamic equilibrium are discussed. The objective of gasification techniques is to increase the gas content in CO and H2. The maximum content of these gases is obtained when thermodynamic equilibrium is approached. Any optimisation of a process will thus tend to approach thermodynamic equilibrium conditions. On the other hand, such calculations can be used to determine the conditions which lead to an increase in the production of CO and H2. A further objective is to determine transformation enthalpies, which are an important input for process calculations. Various existing processes are assessed, and the associated thermodynamic limitations are evidenced. (author)

  5. ITER Experts' meeting on density limits

    International Nuclear Information System (INIS)

    Borrass, K.; Igitkhanov, Y.L.; Uckan, N.A.

    1989-12-01

    The necessity of achieving a prescribed wall load or fusion power essentially determines the plasma pressure in a device like ITER. The range of operation densities and temperatures compatible with this condition is constrained by the problems of power exhaust and the disruptive density limit. The maximum allowable heat loads on the divertor plates and the maximum allowable sheath edge temperature practically impose a lower limit on the operating densities, whereas the disruptive density limit imposes an upper limit. For most of the density limit scalings proposed in the past an overlap of the two constraints or at best a very narrow accessible density range is predicted for ITER. Improved understanding of the underlying mechanisms is therefore a crucial issue in order to provide a more reliable basis for extrapolation to ITER and to identify possible ways of alleviating the problem

  6. Pareto versus lognormal: a maximum entropy test.

    Science.gov (United States)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.

  7. The Maximum Entropy Principle and the Modern Portfolio Theory

    Directory of Open Access Journals (Sweden)

    Ailton Cassetari

    2003-12-01

    Full Text Available In this work, a capital allocation methodology based on the Principle of Maximum Entropy was developed, with the Shannon entropy used as the measure; its connections to the Modern Portfolio Theory are also discussed. In particular, the methodology is tested by making a systematic comparison to: 1) the mean-variance (Markowitz) approach and 2) the mean-VaR approach (capital allocations based on the Value at Risk concept). In principle, such comparisons show the plausibility and effectiveness of the developed method.

  8. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

    Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
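
    The Mean Energy Model mentioned above is easy to make concrete: over a finite set of states with energies E_i, maximizing entropy subject to a prescribed mean energy yields p_i ∝ exp(-βE_i), with β fixed by the moment constraint. A minimal sketch with invented energies:

```python
import numpy as np
from scipy.optimize import brentq

E = np.array([0.0, 1.0, 2.0, 5.0])   # energies of a toy four-state system
target = 1.2                          # prescribed mean "energy"

def mean_energy(beta):
    w = np.exp(-beta * E)
    return (w @ E) / w.sum()

# Solve the moment constraint for the Lagrange multiplier beta.
beta = brentq(lambda b: mean_energy(b) - target, -50.0, 50.0)
p = np.exp(-beta * E); p /= p.sum()
print(f"beta = {beta:.3f}, p = {np.round(p, 3)}, entropy = {-(p @ np.log(p)):.3f} nats")
```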

  9. Impacts of trace carbon on the microstructure of as-sintered biomedical Ti-15Mo alloy and reassessment of the maximum carbon limit.

    Science.gov (United States)

    Yan, M; Qian, M; Kong, C; Dargusch, M S

    2014-02-01

    The formation of grain boundary (GB) brittle carbides with a complex three-dimensional (3-D) morphology can be detrimental to both the fatigue properties and corrosion resistance of a biomedical titanium alloy. A detailed microscopic study has been performed on an as-sintered biomedical Ti-15Mo (in wt.%) alloy containing 0.032 wt.% C. A noticeable presence of a carbon-enriched phase has been observed along the GB, although the carbon content is well below the maximum carbon limit of 0.1 wt.% specified by ASTM Standard F2066. Transmission electron microscopy (TEM) identified that the carbon-enriched phase is face-centred cubic Ti2C. 3-D tomography reconstruction revealed that the Ti2C structure has morphology similar to primary α-Ti. Nanoindentation confirmed the high hardness and high Young's modulus of the GB Ti2C phase. To avoid GB carbide formation in Ti-15Mo, the carbon content should be limited to 0.006 wt.% by Thermo-Calc predictions. Similar analyses and characterization of the carbide formation in biomedical unalloyed Ti, Ti-6Al-4V and Ti-16Nb have also been performed. Copyright © 2013 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  10. Maximum power tracking in WECS (Wind energy conversion systems) via numerical and stochastic approaches

    International Nuclear Information System (INIS)

    Elnaggar, M.; Abdel Fattah, H.A.; Elshafei, A.L.

    2014-01-01

    This paper presents a complete design of a two-level control system to capture maximum power in wind energy conversion systems. The upper level of the proposed control system adopts a modified line search optimization algorithm to determine a setpoint for the wind turbine speed. The calculated speed setpoint corresponds to the maximum power point at given operating conditions. The speed setpoint is fed to a generalized predictive controller at the lower level of the control system. A different formulation, which treats the aerodynamic torque as a disturbance, is postulated to derive the control law. The objective is to accurately track the setpoint while keeping the control action free from unacceptably fast or frequent variations. Simulation results based on a realistic model of a 1.5 MW wind turbine confirm the superiority of the proposed control scheme over conventional ones. - Highlights: • The structure of an MPPT (maximum power point tracking) scheme is presented. • The scheme is divided into an optimization algorithm and a tracking controller. • The optimization algorithm is based on an online line search numerical algorithm. • The tracking controller treats the aerodynamic torque as a loop disturbance. • The control technique is simulated with stochastic wind speed in Simulink and FAST

  11. Aedes albopictus and Its Environmental Limits in Europe.

    Science.gov (United States)

    Cunze, Sarah; Kochmann, Judith; Koch, Lisa K; Klimpel, Sven

    2016-01-01

    The Asian tiger mosquito Aedes albopictus, native to South East Asia, is listed as one of the worst invasive vector species worldwide. In Europe the species is currently restricted to Southern Europe, but due to the ongoing climate change, Ae. albopictus is expected to expand its potential range further northwards. In addition to modelling the habitat suitability for Ae. albopictus under current and future climatic conditions in Europe by means of the maximum entropy approach, we here focused on the drivers of the habitat suitability prediction. We explored the most limiting factors for Aedes albopictus in Europe under current and future climatic conditions, a method which has been neglected in species distribution modelling so far. Ae. albopictus is one of the best-studied mosquito species, which allowed us to evaluate the applied Maxent approach for most-limiting-factor mapping. We identified three key limiting factors for Ae. albopictus in Europe under current climatic conditions: winter temperature in Eastern Europe, annual mean temperature in Central Europe, and summer temperature in Southern Europe. Model findings were in good accordance with commonly known establishment thresholds in Europe based on climate chamber experiments and derived from the geographical distribution of the species. Under future climatic conditions low winter temperatures were modelled to remain the most limiting factor in Eastern Europe, whereas in Central and Southern Europe annual mean temperature and summer temperature were modelled to be replaced by summer precipitation, respectively, as the most limiting factors. Changes in the climatic conditions in terms of the identified key limiting factors will be of great relevance regarding the invasive potential of Ae. albopictus. Thus, our results may help to understand the key drivers of the suggested range expansion under climate change and may help to improve monitoring programmes. The applied approach of investigating limiting factors has proven to yield valuable results and may also provide

  12. The Maximum Flux of Star-Forming Galaxies

    Science.gov (United States)

    Crocker, Roland M.; Krumholz, Mark R.; Thompson, Todd A.; Clutterbuck, Julie

    2018-04-01

    The importance of radiation pressure feedback in galaxy formation has been extensively debated over the last decade. The regime of greatest uncertainty is in the most actively star-forming galaxies, where large dust columns can potentially produce a dust-reprocessed infrared radiation field with enough pressure to drive turbulence or eject material. Here we derive the conditions under which a self-gravitating, mixed gas-star disc can remain hydrostatic despite trapped radiation pressure. Consistently taking into account the self-gravity of the medium, the star- and dust-to-gas ratios, and the effects of turbulent motions not driven by radiation, we show that galaxies can achieve a maximum Eddington-limited star formation rate per unit area Σ̇_{*,crit} ~ 10^3 M_⊙ pc^{-2} Myr^{-1}, corresponding to a critical flux of F_{*,crit} ~ 10^13 L_⊙ kpc^{-2}, similar to previous estimates; higher fluxes eject mass in bulk, halting further star formation. Conversely, we show that in galaxies below this limit, our one-dimensional models imply simple vertical hydrostatic equilibrium and that radiation pressure is ineffective at driving turbulence or ejecting matter. Because the vast majority of star-forming galaxies lie below the maximum limit for typical dust-to-gas ratios, we conclude that infrared radiation pressure is likely unimportant for all but the most extreme systems on galaxy-wide scales. Thus, while radiation pressure does not explain the Kennicutt-Schmidt relation, it does impose an upper truncation on it. Our predicted truncation is in good agreement with the highest observed gas and star formation rate surface densities found both locally and at high redshift.

  13. Developing Guided Worksheet for Cognitive Apprenticeship Approach in teaching Formal Definition of The Limit of A Function

    Science.gov (United States)

    Oktaviyanthi, R.; Dahlan, J. A.

    2018-04-01

    This study aims to develop student worksheets that correspond to the Cognitive Apprenticeship learning approach. The main subject in this student worksheet is Functions and Limits, with the branch topics Continuity and Limits of Functions. Two indicators of learning achievement are intended to be developed in the student worksheet: (1) the student can explain the concept of limit by using the formal definition of the limit, and (2) the student can evaluate the value of the limit of a function using epsilon and delta. The type of research used is development research that follows Plomp's product development model. The research flow starts from literature review, observation, interviews, worksheet design, an expert validity test, and a limited trial on first-year students of the 2016-2017 academic year at Universitas Serang Raya, STKIP Pelita Pratama Al-Azhar Serang, and Universitas Mathla'ul Anwar Pandeglang. Based on the product development results, the student worksheets that correspond to the Cognitive Apprenticeship learning approach are valid and reliable.

  14. Whole-genome sequencing approaches for conservation biology: Advantages, limitations and practical recommendations.

    Science.gov (United States)

    Fuentes-Pardo, Angela P; Ruzzante, Daniel E

    2017-10-01

    Whole-genome resequencing (WGR) is a powerful method for addressing fundamental evolutionary biology questions that have not been fully resolved using traditional methods. WGR includes four approaches: the sequencing of individuals to a high depth of coverage with either unresolved or resolved haplotypes, the sequencing of population genomes to a high depth by mixing equimolar amounts of unlabelled-individual DNA (Pool-seq) and the sequencing of multiple individuals from a population to a low depth (lcWGR). These techniques require the availability of a reference genome. This, along with the still high cost of shotgun sequencing and the large demand for computing resources and storage, has limited their implementation in nonmodel species with scarce genomic resources and in fields such as conservation biology. Our goal here is to describe the various WGR methods, their pros and cons and potential applications in conservation biology. WGR offers an unprecedented marker density and surveys a wide diversity of genetic variations not limited to single nucleotide polymorphisms (e.g., structural variants and mutations in regulatory elements), increasing their power for the detection of signatures of selection and local adaptation as well as for the identification of the genetic basis of phenotypic traits and diseases. Currently, though, no single WGR approach fulfils all requirements of conservation genetics, and each method has its own limitations and sources of potential bias. We discuss proposed ways to minimize such biases. We envision a not distant future where the analysis of whole genomes becomes a routine task in many nonmodel species and fields including conservation biology. © 2017 John Wiley & Sons Ltd.

  15. MPBoot: fast phylogenetic maximum parsimony tree inference and bootstrap approximation.

    Science.gov (United States)

    Hoang, Diep Thi; Vinh, Le Sy; Flouri, Tomáš; Stamatakis, Alexandros; von Haeseler, Arndt; Minh, Bui Quang

    2018-02-02

    The nonparametric bootstrap is widely used to measure the branch support of phylogenetic trees. However, bootstrapping is computationally expensive and remains a bottleneck in phylogenetic analyses. Recently, an ultrafast bootstrap approximation (UFBoot) approach was proposed for maximum likelihood analyses. However, such an approach is still missing for maximum parsimony. To close this gap we present MPBoot, an adaptation and extension of UFBoot to compute branch supports under the maximum parsimony principle. MPBoot works for both uniform and non-uniform cost matrices. Our analyses on biological DNA and protein data showed that under uniform cost matrices, MPBoot runs on average 4.7 (DNA) to 7 (protein) times faster (range: 1.2-20.7) than the standard parsimony bootstrap implemented in PAUP*, but 1.6 (DNA) to 4.1 (protein) times slower than the standard bootstrap with a fast search routine in TNT (fast-TNT). However, for non-uniform cost matrices MPBoot is 5 (DNA) to 13 (protein) times faster (range: 0.3-63.9) than fast-TNT. We note that MPBoot achieves better scores more frequently than PAUP* and fast-TNT. However, this effect is less pronounced if an intensive but slower search in TNT is invoked. Moreover, experiments on large-scale simulated data show that while both PAUP* and TNT bootstrap estimates are too conservative, MPBoot bootstrap estimates appear more unbiased. MPBoot provides an efficient alternative to the standard maximum parsimony bootstrap procedure. It shows favorable performance in terms of run time, the capability of finding a maximum parsimony tree, and high bootstrap accuracy on simulated as well as empirical data sets. MPBoot is easy-to-use, open-source and available at http://www.cibiv.at/software/mpboot .

  16. Determination of point of maximum likelihood in failure domain using genetic algorithms

    International Nuclear Information System (INIS)

    Obadage, A.S.; Harnpornchai, N.

    2006-01-01

    The point of maximum likelihood in a failure domain yields the highest value of the probability density function in the failure domain. The maximum-likelihood point thus represents the worst combination of random variables that contribute to the failure event. In this work Genetic Algorithms (GAs) with an adaptive penalty scheme have been proposed as a tool for the determination of the maximum-likelihood point. The use of only numerical values in the GA operations makes the algorithms applicable to cases of non-linear and implicit single and multiple limit state function(s). The algorithmic simplicity readily extends its application to higher-dimensional problems. When combined with Monte Carlo simulation, the proposed methodology will reduce the computational complexity and at the same time enhance the feasibility of rare-event analysis under limited computational resources. Since no approximation is made in the procedure, the solution obtained is considered accurate. Consequently, GAs can be used as a tool for increasing the computational efficiency in element and system reliability analyses
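
    A compact sketch of the scheme described above, with an invented linear limit state over two standard normal variables so the answer is known in closed form (the maximum-likelihood point lies at the foot of the perpendicular from the origin to the failure boundary, here (1.2, 2.4)); the adaptive part of the penalty is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_pdf(x):
    """Log-density (up to a constant) of independent standard normal variables."""
    return -0.5 * np.sum(x ** 2, axis=1)

def g(x):
    """Hypothetical limit state: the failure domain is g(x) <= 0."""
    return 6.0 - x[:, 0] - 2.0 * x[:, 1]

def fitness(x, penalty=50.0):
    return log_pdf(x) - penalty * np.maximum(g(x), 0.0)  # penalize the safe region

def ga_max_likelihood(pop=200, gens=150):
    X = rng.normal(0.0, 3.0, size=(pop, 2))
    for _ in range(gens):
        parents = X[np.argsort(fitness(X))[-pop // 2:]]           # selection
        children = parents + rng.normal(0.0, 0.2, parents.shape)  # mutation
        X = np.vstack([parents, children])
    return X[np.argmax(fitness(X))]

print(ga_max_likelihood())   # approaches the maximum-likelihood point (1.2, 2.4)
```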

  17. A weight limit emerges for neutron stars

    Science.gov (United States)

    Cho, Adrian

    2018-02-01

    Astrophysicists have long wondered how massive a neutron star—the corpse of certain exploding stars—could be without collapsing under its own gravity to form a black hole. Now, four teams have independently deduced a mass limit for neutron stars of about 2.2 times the mass of the sun. To do so, all four groups analyzed last year's blockbuster observations of the merger of two neutron stars, spied on 17 September 2017 by dozens of observatories. That approach may seem unpromising, as it might appear that the merging neutron stars would have immediately produced a black hole. However, the researchers argue that the merger first produced a spinning, overweight neutron star momentarily propped up by centrifugal force. They deduce that just before it collapsed, the short-lived neutron star had to be near the maximum mass for one spinning as a solid body. That inference allowed them to use a scaling relationship to estimate the maximum mass of a nonrotating, stable neutron star, starting from the total mass of the original pair and the amount of matter spewed into space.

  18. A Bayes-Maximum Entropy method for multi-sensor data fusion

    Energy Technology Data Exchange (ETDEWEB)

    Beckerman, M.

    1991-01-01

    In this paper we introduce a Bayes-Maximum Entropy formalism for multi-sensor data fusion, and present an application of this methodology to the fusion of ultrasound and visual sensor data as acquired by a mobile robot. In our approach the principle of maximum entropy is applied to the construction of priors and likelihoods from the data. Distances between ultrasound and visual points of interest in a dual representation are used to define Gibbs likelihood distributions. Both one- and two-dimensional likelihoods are presented, and cast into a form which makes explicit their dependence upon the mean. The Bayesian posterior distributions are used to test a null hypothesis, and Maximum Entropy Maps used for navigation are updated using the resulting information from the dual representation. 14 refs., 9 figs.

  19. STUDY ON MAXIMUM SPECIFIC SLUDGE ACTIVITY OF DIFFERENT ANAEROBIC GRANULAR SLUDGE BY BATCH TESTS

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The maximum specific sludge activity of granular sludge from large-scale UASB, IC and Biobed anaerobic reactors was investigated by batch tests. The limiting factors related to maximum specific sludge activity (diffusion, substrate type, substrate concentration and granule size) were studied. The general principle and procedure for the precise measurement of maximum specific sludge activity are suggested. The potential loading-rate capacities of the IC and Biobed anaerobic reactors were analyzed and compared by use of the batch test results.

  20. The role of pressure anisotropy on the maximum mass of cold ...

    Indian Academy of Sciences (India)

    ... red-shift and mass increase in the presence of anisotropic pressures; numerical values are generated which are in ... that anisotropy may also change the limiting values of the maximum mass of compact stars.

  1. On the maximum entropy distributions of inherently positive nuclear data

    Energy Technology Data Exchange (ETDEWEB)

    Taavitsainen, A., E-mail: aapo.taavitsainen@gmail.com; Vanhanen, R.

    2017-05-11

    The multivariate log-normal distribution is used by many authors and statistical uncertainty propagation programs for inherently positive quantities. Sometimes it is claimed that the log-normal distribution results from the maximum entropy principle, if only the means, covariances and inherent positiveness of the quantities are known or assumed to be known. In this article we show that this is not true. Assuming a constant prior distribution, the maximum entropy distribution is in fact a truncated multivariate normal distribution – whenever it exists. However, its practical application to multidimensional cases is hindered by the lack of a method to compute its location and scale parameters from means and covariances. Therefore, regardless of its theoretical disadvantage, the use of other distributions seems to be a practical necessity. - Highlights: • Statistical uncertainty propagation requires a sampling distribution. • The objective distribution of inherently positive quantities is determined. • The objectivity is based on the maximum entropy principle. • The maximum entropy distribution is the truncated normal distribution. • Applicability of log-normal or normal distribution approximation is limited.
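
    In one dimension, the location and scale of the truncated normal can still be recovered numerically from the prescribed (truncated) mean and standard deviation, which illustrates both the distribution the authors identify and the computational difficulty they point out for higher dimensions. A sketch using SciPy; the starting values and moments are arbitrary.

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import truncnorm

def truncated_normal_params(mean, std, lower=0.0, upper=np.inf):
    """Find loc/scale of a normal truncated to [lower, upper] whose
    truncated mean and standard deviation match the given moments."""
    def moment_gap(p):
        loc, scale = p[0], abs(p[1])        # keep the scale positive during the search
        a, b = (lower - loc) / scale, (upper - loc) / scale
        d = truncnorm(a, b, loc=loc, scale=scale)
        return [d.mean() - mean, d.std() - std]
    loc, scale = fsolve(moment_gap, x0=[mean, std])
    return loc, abs(scale)

# An inherently positive quantity with mean 1.0 and standard deviation 0.5:
print(truncated_normal_params(1.0, 0.5))   # loc slightly below 1, scale slightly above 0.5
```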

  2. Deterministic network interdiction optimization via an evolutionary approach

    International Nuclear Information System (INIS)

    Rocco S, Claudio M.; Ramirez-Marquez, Jose Emmanuel

    2009-01-01

    This paper introduces an evolutionary optimization approach that can be readily applied to solve deterministic network interdiction problems. The network interdiction problem solved considers the minimization of the maximum flow that can be transmitted between a source node and a sink node for a fixed network design when there is a limited amount of resources available to interdict network links. Furthermore, the model assumes that the nominal capacity of each network link and the cost associated with their interdiction can change from link to link. For this problem, the solution approach developed is based on three steps that use: (1) Monte Carlo simulation, to generate potential network interdiction strategies, (2) Ford-Fulkerson algorithm for maximum s-t flow, to analyze strategies' maximum source-sink flow and, (3) an evolutionary optimization technique to define, in probabilistic terms, how likely a link is to appear in the final interdiction strategy. Examples for different sizes of networks and network behavior are used throughout the paper to illustrate the approach. In terms of computational effort, the results illustrate that solutions are obtained from a significantly restricted solution search space. Finally, the authors discuss the need for a reliability perspective to network interdiction, so that solutions developed address more realistic scenarios of such problem
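
    For tiny networks the three ingredients reduce to a few lines: generate candidate interdiction sets within the resource budget and score each by the residual maximum s-t flow. The sketch below substitutes exhaustive enumeration with unit interdiction costs for the paper's Monte Carlo/evolutionary loop, and assumes the networkx library; the example graph is invented.

```python
import itertools
import networkx as nx

def residual_max_flow(G, removed, s="s", t="t"):
    """Maximum s-t flow after interdicting (removing) the given links."""
    H = G.copy()
    H.remove_edges_from(removed)
    return nx.maximum_flow_value(H, s, t)

def best_interdiction(G, budget, s="s", t="t"):
    """Interdiction set (unit cost per link, size <= budget) minimizing the max flow."""
    best_flow, best_set = float("inf"), ()
    for k in range(budget + 1):
        for combo in itertools.combinations(G.edges(), k):
            flow = residual_max_flow(G, combo, s, t)
            if flow < best_flow:
                best_flow, best_set = flow, combo
    return best_flow, best_set

G = nx.DiGraph()
G.add_edge("s", "a", capacity=4); G.add_edge("s", "b", capacity=3)
G.add_edge("a", "b", capacity=2); G.add_edge("a", "t", capacity=3)
G.add_edge("b", "t", capacity=4)
print(best_interdiction(G, budget=1))   # which single link hurts the flow most?
```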

  3. Limits to anaerobic energy and cytosolic concentration in the living cell

    Science.gov (United States)

    Paglietti, A.

    2015-11-01

    For many physical systems at any given temperature, the set of all states where the system's free energy reaches its largest value can be determined from the system's constitutive equations of internal energy and entropy, once a state of that set is known. Such an approach is fraught with complications when applied to a living cell, because the cell's cytosol contains thousands of solutes, and thus thousands of state variables, which makes determination of its state impractical. We show here that, when looking for the maximum energy that the cytosol can store and release, detailed information on cytosol composition is redundant. Compatibility with cell's life requires that a single variable that represents the overall concentration of cytosol solutes must fall between defined limits, which can be determined by dehydrating and overhydrating the cell to its maximum capacity. The same limits are shown to determine, in particular, the maximum amount of free energy that a cell can supply in fast anaerobic processes, starting from any given initial state. For a typical skeletal muscle in normal physiological conditions this energy, i.e., the maximum anaerobic capacity to do work, is calculated to be about 960 J per kg of muscular mass. Such energy decreases as the overall concentration of solutes in the cytosol is increased. Similar results apply to any kind of cell. They provide an essential tool to understand and control the macroscopic response of single cells and multicellular cellular tissues alike. The applications include sport physiology, cell aging, disease produced cell damage, drug absorption capacity, to mention the most obvious ones.

  4. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.

    Science.gov (United States)

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L

    2016-08-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.

  5. The maximum possible stress intensity factor for a crack in an unknown residual stress field

    International Nuclear Information System (INIS)

    Coules, H.E.; Smith, D.J.

    2015-01-01

    Residual and thermal stress fields in engineering components can act on cracks and structural flaws, promoting or inhibiting fracture. However, these stresses are limited in magnitude by the ability of materials to sustain them elastically. As a consequence, the stress intensity factor which can be applied to a given defect by a self-equilibrating stress field is also limited. We propose a simple weight function method for determining the maximum stress intensity factor which can occur for a given crack or defect in a one-dimensional self-equilibrating stress field, i.e. an upper bound for the residual stress contribution to K_I. This can be used for analysing structures containing defects and subject to residual stress without any information about the actual stress field which exists in the structure being analysed. A number of examples are given, including long radial cracks and fully-circumferential cracks in thick-walled hollow cylinders containing self-equilibrating stresses. - Highlights: • An upper limit to the contribution of residual stress to the stress intensity factor. • The maximum K_I for self-equilibrating stresses in several geometries is calculated. • A weight function method can determine this maximum for 1-dimensional stress fields. • Simple MATLAB scripts for calculating the maximum K_I are provided as supplementary material.
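
    The upper-bound idea discretizes neatly into a linear program: maximize K_I = ∫σ(x)h(x)dx over stress fields bounded by the yield stress and constrained to self-equilibrate. The sketch below (Python rather than the authors' MATLAB, with an illustrative approximate weight function and force equilibrium only) shows the bang-bang character of the worst-case field.

```python
import numpy as np
from scipy.optimize import linprog

a, W, sigma_y = 0.02, 0.10, 300e6      # crack depth, section width (m), yield stress (Pa)
n = 400
dx = W / n
x = (np.arange(n) + 0.5) * dx          # midpoints of the discretized section
# Illustrative edge-crack weight function, singular at the crack tip x = a.
h = np.where(x < a, np.sqrt(2.0 / (np.pi * (a - x))), 0.0)

res = linprog(
    c=-h * dx,                          # maximize sum_i h_i * sigma_i * dx
    A_eq=np.full((1, n), dx),           # self-equilibrium: integral of sigma = 0
    b_eq=[0.0],
    bounds=[(-sigma_y, sigma_y)] * n,   # stresses limited by yield
)
sigma = res.x   # worst case: +sigma_y over the crack, balancing compression elsewhere
print(f"upper bound on K_I ~ {-res.fun / 1e6:.1f} MPa*sqrt(m)")
```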

  6. Forming Limits in Sheet Metal Forming for Non-Proportional Loading Conditions - Experimental and Theoretical Approach

    International Nuclear Information System (INIS)

    Ofenheimer, Aldo; Buchmayr, Bruno; Kolleck, Ralf; Merklein, Marion

    2005-01-01

    The influence of strain paths (loading history) on material formability is well known in sheet forming processes. Sophisticated experimental methods are used to determine the entire shape of strain paths of forming limits for aluminum AA6016-T4 alloy. Forming limits for sheet metal in as-received condition as well as for different pre-deformation are presented. A theoretical approach based on Arrieux's intrinsic Forming Limit Stress Curve (FLSC) concept is employed to numerically predict the influence of loading history on forming severity. The detailed experimental strain paths are used in the theoretical study instead of any linear or bilinear simplified loading histories to demonstrate the predictive quality of forming limits in the state of stress

  7. An extension theory-based maximum power tracker using a particle swarm optimization algorithm

    International Nuclear Information System (INIS)

    Chao, Kuei-Hsiang

    2014-01-01

    Highlights: • We propose an adaptive maximum power point tracking (MPPT) approach for PV systems. • Transient and steady state performances in the tracking process are improved. • The proposed MPPT can automatically tune the tracking step size along a P–V curve. • A PSO algorithm is used to determine the weighting values of extension theory. - Abstract: The aim of this work is to present an adaptive maximum power point tracking (MPPT) approach for photovoltaic (PV) power generation systems. Integrating extension theory with the conventional perturb and observe method, a maximum power point (MPP) tracker is made able to automatically tune its tracking step size by way of category recognition along a P–V characteristic curve. Accordingly, the transient and steady state performances in the tracking process are improved. Furthermore, an optimization approach based on a particle swarm optimization (PSO) algorithm is proposed to reduce the complexity of determining the weighting values. Finally, the simulated improvement in tracking performance is experimentally validated by an MPP tracker with a programmable system-on-chip (PSoC) based controller.
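
    The PSO step of the abstract is generic enough to sketch on its own: a swarm searches the space of weighting values, scoring each candidate with some objective (here a toy stand-in; in the paper it would reflect tracking performance). Everything below — bounds, coefficients, the objective — is our assumption.

    ```python
    import numpy as np

    def pso(objective, dim, n_particles=30, iters=100, bounds=(0.0, 1.0),
            w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal particle swarm optimizer, e.g. for tuning the weighting
        values of an extension-theory MPPT controller (objective is assumed
        to score a candidate weight vector, say by simulated tracking error)."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        x = rng.uniform(lo, hi, (n_particles, dim))      # positions
        v = np.zeros_like(x)                             # velocities
        pbest = x.copy()                                 # personal bests
        pbest_f = np.apply_along_axis(objective, 1, x)
        g = pbest[np.argmin(pbest_f)]                    # global best
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            f = np.apply_along_axis(objective, 1, x)
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            g = pbest[np.argmin(pbest_f)]
        return g, pbest_f.min()

    # Toy objective standing in for the tracking-error score of a weight vector.
    w_opt, err = pso(lambda w: np.sum((w - 0.3) ** 2), dim=4)
    ```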

  8. Corrosion pit depth extreme value prediction from limited inspection data

    International Nuclear Information System (INIS)

    Najjar, D.; Bigerelle, M.; Iost, A.; Bourdeau, L.; Guillou, D.

    2004-01-01

    Passive alloys like stainless steels are prone to localized corrosion in chloride-containing environments. The greater the depth of the localized corrosion phenomenon, the more dramatic the related damage, which can weaken a structure by fast perforation. In practical situations, because measurements are time consuming and expensive, the challenge is usually to predict the maximum pit depth that could be found in a large-scale installation from the processing of limited inspection data. As long as the parent distribution of pit depths is assumed to be of exponential type, the most successful method has been the statistical extreme-value analysis developed by Gumbel. This study presents a new methodology, alternative to the Gumbel approach, aimed at accurately estimating the maximum pit depth observed on a ferritic stainless steel AISI 409 subjected to an accelerated corrosion test (ECC1) used in the automotive industry. The methodology consists of characterising and modelling both the morphology of pits and the statistical distribution of their depths from a limited inspection dataset. The heart of the data processing is based on the combination of two recent statistical methods that avoid making any choice about the type of the theoretical underlying parent distribution of pit depths: the Generalized Lambda Distribution (GLD) is used to model the distribution of pit depths and the Bootstrap technique to determine a confidence interval on the maximum pit depth. (authors)
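
    The bootstrap half of the methodology can be sketched compactly. As a simplified stand-in for the paper's GLD fit, each bootstrap resample below is summarized by an exponential fit (consistent with the abstract's exponential-type assumption) and extrapolated to the deepest pit expected among N pits; the data and constants are invented for illustration.

    ```python
    import numpy as np

    def bootstrap_max_depth(depths, n_total_pits, n_boot=2000, alpha=0.05, seed=0):
        """Percentile-bootstrap confidence interval for the deepest pit expected
        among n_total_pits, extrapolated from a small inspected sample. Each
        resample is summarized by an exponential fit, whose (1 - 1/N) quantile
        is scale * ln(N)."""
        rng = np.random.default_rng(seed)
        est = np.empty(n_boot)
        for b in range(n_boot):
            sample = rng.choice(depths, size=len(depths), replace=True)
            scale = sample.mean()                    # MLE of the exponential scale
            est[b] = scale * np.log(n_total_pits)    # expected deepest of N pits
        return tuple(np.quantile(est, [alpha / 2, 1 - alpha / 2]))

    depths = np.array([12., 18., 9., 25., 14., 31., 11., 22., 16., 27.])  # um (made up)
    print(bootstrap_max_depth(depths, n_total_pits=5000))
    ```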

  9. Design of simplified maximum-likelihood receivers for multiuser CPM systems.

    Science.gov (United States)

    Bing, Li; Bai, Baoming

    2014-01-01

    A class of simplified maximum-likelihood receivers designed for continuous phase modulation based multiuser systems is proposed. The presented receiver is built upon a front end employing mismatched filters and a maximum-likelihood detector defined in a low-dimensional signal space. The performance of the proposed receivers is analyzed and compared to some existing receivers. Some schemes are designed to implement the proposed receivers and to reveal the roles of different system parameters. Analysis and numerical results show that the proposed receivers can approach the optimum multiuser receivers with significantly (even exponentially in some cases) reduced complexity and marginal performance degradation.

  10. On the maximum of wave surface of sea waves

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, B

    1980-01-01

    This article treats the wave surface as a normal stationary random process in order to estimate the maximum of the wave surface in a given time interval by means of theoretical results from probability theory. The results are represented by formulas (13) to (19) in this article. It is proved that, as the time interval approaches infinity, the formulas (3) and (6) for E(η_max) that were derived in the references (Cartwright, Longuet-Higgins) can also be obtained from the asymptotic distribution of the maximum of the wave surface provided in this article. The advantages of the results obtained from this point of view, as compared with those of the references, are discussed.

  11. Maximum power point tracking for PV systems under partial shading conditions using current sweeping

    International Nuclear Information System (INIS)

    Tsang, K.M.; Chan, W.L.

    2015-01-01

    Highlights: • A novel approach for tracking the maximum power point of photovoltaic systems. • Able to handle both uniform insolation and partial shading conditions. • Maximum power point tracking based on current sweeping. - Abstract: Partial shading on photovoltaic (PV) arrays causes multiple peaks on the output power–voltage characteristic curve, and local searching techniques such as the perturb and observe (P&O) method can easily fail in searching for the global maximum. Moreover, existing global searching techniques are still not very satisfactory in terms of speed and implementation complexity. In this paper, a fast global maximum power point tracking (MPPT) method based on current sweeping is proposed for photovoltaic arrays under partial shading conditions. Unlike conventional approaches, the proposed method is current based rather than voltage based. An initial maximum power point is derived from a current sweeping test, and the maximum power point is then refined by a finer local search. The speed of the global search is mainly governed by the apparent time constant of the PV array and the generation of a fast current sweeping test. The fast current sweeping test can easily be realized by a DC/DC boost converter with a very fast current control loop. Experimental results are included to demonstrate the effectiveness of the proposed global searching scheme.
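
    The idea of the method — sweep the array current from zero toward short-circuit, record P = V·I, take the global peak, then refine locally — can be sketched against a toy two-substring PV model. The model and all parameters below are illustrative assumptions, not the authors' hardware loop.

    ```python
    import numpy as np

    def pv_voltage(i, insolation_segments):
        """Toy PV string under partial shading: terminal voltage as a function of
        drawn current. Each substring follows a diode law and collapses to 0 V
        (bypass-diode-like) once i exceeds its photocurrent."""
        v = 0.0
        for i_ph in insolation_segments:               # per-substring photocurrent [A]
            x = np.clip(1.0 - i / i_ph, 1e-12, None)
            v += max(0.026 * 60 * np.log(x / 1e-9), 0.0)
        return v

    def current_sweep_mppt(i_sc=8.0, n_coarse=80, segments=(8.0, 4.0)):
        """Global MPP search: coarse current sweep, then a finer local sweep
        around the coarse optimum (mirrors the coarse-plus-refinement idea)."""
        i_grid = np.linspace(0.0, i_sc * 0.999, n_coarse)
        p = np.array([i * pv_voltage(i, segments) for i in i_grid])
        i0 = i_grid[np.argmax(p)]
        fine = np.linspace(max(i0 - 0.2, 0.0), min(i0 + 0.2, i_sc), 50)
        pf = np.array([i * pv_voltage(i, segments) for i in fine])
        return fine[np.argmax(pf)], pf.max()

    print(current_sweep_mppt())   # current at the global MPP and the power there
    ```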

  12. Tolerance limits of X-ray image intensity

    International Nuclear Information System (INIS)

    Stargardt, A.; Juran, R.; Brandt, G.A.

    1985-01-01

    Evaluation of the tolerance limits of X-ray image density accepted by the radiologist shows that, for different kinds of examinations, deviations of more than 50% from optimal density lead to images which cannot be used diagnostically. Within this range, diagnostic accuracy shows a distinct maximum and diminishes by 20% towards the limits. These figures are related to differences in the intensifying factor of screens, the sensitivity of films, and the sensitometric parameters of film processing, as well as the doses employed with automatic exposure control devices, measured under clinical conditions. Maximum permissible tolerance limits of the whole imaging system and of its constituents are discussed using the Gaussian law of error addition. (author)

  13. Heat Convection at the Density Maximum Point of Water

    Science.gov (United States)

    Balta, Nuri; Korganci, Nuri

    2018-01-01

    Water exhibits a maximum in density at normal pressure at around 4 °C. This paper demonstrates that during cooling, at around 4 °C, the temperature remains constant for a while because of the heat exchange associated with convective currents inside the water. A superficial approach suggests this is a new anomaly of water, but actually it…

  14. Optimal Control of Polymer Flooding Based on Maximum Principle

    Directory of Open Access Journals (Sweden)

    Yang Lei

    2012-01-01

    Full Text Available Polymer flooding is one of the most important technologies for enhanced oil recovery (EOR). In this paper, an optimal control model of distributed parameter systems (DPSs) for polymer injection strategies is established, in which the performance index is the maximization of profit, the governing equations are the fluid flow equations of polymer flooding, and the inequality constraint is the polymer concentration limitation. To cope with the optimal control problem (OCP) of this DPS, the necessary conditions for optimality are obtained through application of the calculus of variations and Pontryagin's weak maximum principle. A gradient method is proposed for the computation of optimal injection strategies. The numerical results of an example illustrate the effectiveness of the proposed method.

  15. The maximum economic depth of groundwater abstraction for irrigation

    Science.gov (United States)

    Bierkens, M. F.; Van Beek, L. P.; de Graaf, I. E. M.; Gleeson, T. P.

    2017-12-01

    Over recent decades, groundwater has become increasingly important for agriculture. Irrigation accounts for 40% of the global food production and its importance is expected to grow further in the near future. Already, about 70% of the globally abstracted water is used for irrigation, and nearly half of that is pumped groundwater. In many irrigated areas where groundwater is the primary source of irrigation water, groundwater abstraction is larger than recharge and we see massive groundwater head decline in these areas. An important question then is: to what maximum depth can groundwater be pumped for it to be still economically recoverable? The objective of this study is therefore to create a global map of the maximum depth of economically recoverable groundwater when used for irrigation. The maximum economic depth is the maximum depth at which revenues are still larger than pumping costs or the maximum depth at which initial investments become too large compared to yearly revenues. To this end we set up a simple economic model where costs of well drilling and the energy costs of pumping, which are a function of well depth and static head depth respectively, are compared with the revenues obtained for the irrigated crops. Parameters for the cost sub-model are obtained from several US-based studies and applied to other countries based on GDP/capita as an index of labour costs. The revenue sub-model is based on gross irrigation water demand calculated with a global hydrological and water resources model, areal coverage of crop types from MIRCA2000 and FAO-based statistics on crop yield and market price. We applied our method to irrigated areas in the world overlying productive aquifers. Estimated maximum economic depths range between 50 and 500 m. Most important factors explaining the maximum economic depth are the dominant crop type in the area and whether or not initial investments in well infrastructure are limiting. In subsequent research, our estimates of
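
    The economic model reduces to finding the deepest level at which annual revenue still covers the lift-energy cost plus the amortized drilling cost. A minimal sketch follows, with every parameter invented for illustration (the study derives them from US cost data, GDP/capita scaling, and global crop statistics).

    ```python
    import numpy as np

    # Illustrative parameters (all assumed, not from the study)
    crop_revenue = 900.0      # net revenue [$ / ha / yr]
    water_need   = 6000.0     # gross irrigation demand [m3 / ha / yr]
    energy_price = 0.10       # [$ / kWh]
    pump_eff     = 0.6        # wire-to-water pumping efficiency
    drill_cost   = 150.0      # [$ per m of well depth]
    life_years   = 20.0       # well lifetime over which drilling is amortized

    def annual_cost(depth_m):
        # Energy to lift water_need m3 over depth_m: rho * g * h / efficiency
        kwh = water_need * 1000.0 * 9.81 * depth_m / pump_eff / 3.6e6
        return kwh * energy_price + drill_cost * depth_m / life_years

    # Maximum economic depth: deepest level at which revenue still covers cost
    depths = np.linspace(1.0, 1000.0, 10000)
    ok = crop_revenue - annual_cost(depths) >= 0.0
    print("max economic depth ~", depths[ok][-1] if ok.any() else None, "m")
    ```

    With these made-up numbers the break-even depth lands near 90 m, inside the 50-500 m range the abstract reports; the dominant sensitivity is to the revenue term, consistent with crop type being the most important factor.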

  16. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    International Nuclear Information System (INIS)

    Bokanowski, Olivier; Picarelli, Athena; Zidani, Hasnaa

    2015-01-01

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach.

  17. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    Energy Technology Data Exchange (ETDEWEB)

    Bokanowski, Olivier, E-mail: boka@math.jussieu.fr [Laboratoire Jacques-Louis Lions, Université Paris-Diderot (Paris 7) UFR de Mathématiques - Bât. Sophie Germain (France); Picarelli, Athena, E-mail: athena.picarelli@inria.fr [Projet Commands, INRIA Saclay & ENSTA ParisTech (France); Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr [Unité de Mathématiques appliquées (UMA), ENSTA ParisTech (France)

    2015-02-15

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachable analysis are included to illustrate our approach.

  18. Maximum entropy PDF projection: A review

    Science.gov (United States)

    Baggenstoss, Paul M.

    2017-06-01

    We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
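
    For a concrete special case, when the feature is the Euclidean norm, z = T(x) = ||x||, the MaxEnt projection places x uniformly on each level set (a sphere of radius z), so sampling is direct. A minimal sketch under that assumption; the feature choice and the gamma feature density are ours, not from the review.

    ```python
    import numpy as np

    def sample_maxent_projection(n_samples, dim, sample_z, rng=None):
        """Sample the MaxEnt PDF projection p(x) when the feature is the norm
        z = T(x) = ||x||: draw z from the known feature density p(z), then place
        x uniformly on the radius-z sphere (the uniform-on-level-set conditional
        is the maximum-entropy choice for this T)."""
        rng = rng or np.random.default_rng()
        z = sample_z(n_samples, rng)                          # (n_samples,)
        g = rng.standard_normal((n_samples, dim))
        directions = g / np.linalg.norm(g, axis=1, keepdims=True)
        return directions * z[:, None]

    # Example: a gamma-distributed norm feature in 16 dimensions.
    x = sample_maxent_projection(1000, dim=16,
                                 sample_z=lambda n, r: r.gamma(4.0, 1.0, size=n))
    ```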

  19. The asymptotic behaviour of the maximum likelihood function of Kriging approximations using the Gaussian correlation function

    CSIR Research Space (South Africa)

    Kok, S

    2012-07-01

    Full Text Available continuously as the correlation function hyper-parameters approach zero. Since the global minimizer of the maximum likelihood function is an asymptote in this case, it is unclear if maximum likelihood estimation (MLE) remains valid. Numerical ill...

  20. LANSCE beam current limiter

    International Nuclear Information System (INIS)

    Gallegos, F.R.

    1996-01-01

    The Radiation Security System (RSS) at the Los Alamos Neutron Science Center (LANSCE) provides personnel protection from prompt radiation due to accelerated beam. Active instrumentation, such as the Beam Current Limiter, is a component of the RSS. The current limiter is designed to limit the average current in a beam line below a specific level, thus minimizing the maximum current available for a beam spill accident. The beam current limiter is a self-contained, electrically isolated toroidal beam transformer which continuously monitors beam current. It is designed as fail-safe instrumentation. The design philosophy, hardware design, operation, and limitations of the device are described.

  1. Maximum Entropy Production Modeling of Evapotranspiration Partitioning on Heterogeneous Terrain and Canopy Cover: advantages and limitations.

    Science.gov (United States)

    Gutierrez-Jurado, H. A.; Guan, H.; Wang, J.; Wang, H.; Bras, R. L.; Simmons, C. T.

    2015-12-01

    Quantification of evapotranspiration (ET) and its partition over regions of heterogeneous topography and canopy poses a challenge using traditional approaches. In this study, we report the results of a novel field experiment design guided by the Maximum Entropy Production model of ET (MEP-ET), formulated for estimating evaporation and transpiration from homogeneous soil and canopy. A catchment with complex terrain and patchy vegetation in South Australia was instrumented to measure temperature, humidity and net radiation at soil and canopy surfaces. The performance of the MEP-ET model in quantifying transpiration and soil evaporation was evaluated during wet and dry conditions against independently and directly measured transpiration from sapflow and soil evaporation using the Bowen Ratio Energy Balance (BREB). MEP-ET transpiration shows remarkable agreement with that obtained through sapflow measurements during wet conditions, but consistently overestimates the flux during dry periods. However, an additional term introduced into the original MEP-ET model to account for stronger stomatal regulation during dry spells, based on differences between leaf and air vapor pressure deficits and temperatures, significantly improves the model performance. On the other hand, MEP-ET soil evaporation is in good agreement with that from BREB regardless of moisture conditions. The experimental design allows quantification of evaporation at the plot scale and of transpiration at the tree scale. This study confirms for the first time that the MEP-ET, originally developed for homogeneous open bare soil and closed canopy, can be used for modeling ET over heterogeneous land surfaces. Furthermore, we show that with the addition of an empirical function simulating the plants' ability to regulate transpiration, based on the same measurements of temperature and humidity, the method can produce reliable estimates of ET during both wet and dry conditions without compromising its parsimony.

  2. WCSPH with Limiting Viscosity for Modeling Landslide Hazard at the Slopes of Artificial Reservoir

    Directory of Open Access Journals (Sweden)

    Sauro Manenti

    2018-04-01

    Full Text Available This work illustrates an application of the FOSS code SPHERA v.8.0 (RSE SpA, Milano, Italy) to the simulation of landslide hazard at the slope of a water basin. SPHERA is based on the weakly compressible SPH method (WCSPH) and includes a mixture model, consistent with the packing limit of the Kinetic Theory of Granular Flow (KTGF), which was previously tested for simulating two-phase free-surface rapid flows involving water-sediment interaction. In this study a limiting viscosity parameter was implemented in the previous formulation of the mixture model to limit the growth of the apparent viscosity, thus saving computational time while preserving the solution accuracy. This approach is consistent with the experimental behavior of high polymer solutions, for which an almost constant value of viscosity may be approached at very low deformation rates near the transition zone of the elastic–plastic regime. In this application, the limiting viscosity was used as a numerical parameter for optimization of the computation. Some preliminary tests were performed by simulating a 2D erosional dam break, proving that a proper selection of the limiting viscosity leads to a considerable drop in computational time without significantly altering the numerical solution. SPHERA was then validated by simulating a 2D scale experiment reproducing the early phase of the Vajont landslide, when a tsunami wave was generated that climbed the opposite mountain side with a maximum run-up of about 270 m. The obtained maximum run-up was very close to the experimental result. The influence of saturation of the landslide material below the still water level was also accounted for, showing that the landslide dynamics can be better represented and the wave run-up can be properly estimated.

  3. Investigation on maximum transition temperature of phonon mediated superconductivity

    Energy Technology Data Exchange (ETDEWEB)

    Fusui, L; Yi, S; Yinlong, S [Physics Department, Beijing University (CN)]

    1989-05-01

    Three model effective phonon spectra are proposed to obtain plots of T_c versus ω and λ versus ω. It can be concluded that there is no maximum limit on T_c in phonon-mediated superconductivity for reasonable values of λ. The importance of the high-frequency LO phonon is also emphasized. Some discussions of high T_c are given.

  4. Quantum maximum-entropy principle for closed quantum hydrodynamic transport within a Wigner function formalism

    International Nuclear Information System (INIS)

    Trovato, M.; Reggiani, L.

    2011-01-01

    By introducing a quantum entropy functional of the reduced density matrix, the principle of quantum maximum entropy is asserted as fundamental principle of quantum statistical mechanics. Accordingly, we develop a comprehensive theoretical formalism to construct rigorously a closed quantum hydrodynamic transport within a Wigner function approach. The theoretical formalism is formulated in both thermodynamic equilibrium and nonequilibrium conditions, and the quantum contributions are obtained by only assuming that the Lagrange multipliers can be expanded in powers of ħ². In particular, by using an arbitrary number of moments, we prove that (1) on a macroscopic scale all nonlocal effects, compatible with the uncertainty principle, are imputable to high-order spatial derivatives, both of the numerical density n and of the effective temperature T; (2) the results available from the literature in the framework of both a quantum Boltzmann gas and a degenerate quantum Fermi gas are recovered as a particular case; (3) the statistics for the quantum Fermi and Bose gases at different levels of degeneracy are explicitly incorporated; (4) a set of relevant applications admitting exact analytical equations are explicitly given and discussed; (5) the quantum maximum entropy principle keeps full validity in the classical limit ħ→0.

  5. Review of probable maximum flood definition at B.C. Hydro

    International Nuclear Information System (INIS)

    Keenhan, P.T.; Kroeker, M.G.; Neudorf, P.A.

    1991-01-01

    Probable maximum floods (PMF) have been derived for British Columbia Hydro structures since design of the W.A.C. Bennett Dam in 1965. A dam safety program for estimating PMF for structures designed before that time has been ongoing since 1979. The program, which has resulted in rehabilitative measures at dams not meeting current established standards, is now being directed at the more recently constructed larger structures on the Peace and Columbia rivers. Since 1965 detailed studies have produced 23 probable maximum precipitation (PMP) and 24 PMF estimates. What defines a PMF in British Columbia in terms of an appropriate combination of meteorological conditions varies due to basin size and the climatic effect of mountain barriers. PMP is estimated using three methods: storm maximization and transposition, the orographic separation method, and modification of non-orographic PMP for orography. Details of, and problems encountered with, these methods are discussed. Tools or methods to assess meteorological limits for antecedent conditions and for limits to runoff during extreme events have not been developed and require research effort. 11 refs., 2 figs., 3 tabs

  6. [Evolutionary process unveiled by the maximum genetic diversity hypothesis].

    Science.gov (United States)

    Huang, Yi-Min; Xia, Meng-Ying; Huang, Shi

    2013-05-01

    As two major popular theories explaining evolutionary facts, the neutral theory and Neo-Darwinism, despite their proven virtues in certain areas, still fail to offer comprehensive explanations for such fundamental evolutionary phenomena as the genetic equidistance result, abundant overlap sites, the increase in complexity over time, the incomplete understanding of genetic diversity, and inconsistencies with fossil and archaeological records. The maximum genetic diversity hypothesis (MGD), however, constructs a more complete evolutionary genetics theory that incorporates all of the proven virtues of existing theories and adds to them the novel concept of a maximum or optimum limit on genetic distance or diversity. It has yet to meet a contradiction, and it explains for the first time the half-century-old genetic equidistance phenomenon as well as most other major evolutionary facts. It provides practical and quantitative ways of studying complexity. Molecular interpretation using MGD-based methods reveals novel insights into the origins of humans and other primates that are consistent with fossil evidence and common sense, and reestablishes the important role of China in the evolution of humans. The MGD theory has also uncovered an important genetic mechanism in the construction of complex traits and the pathogenesis of complex diseases. Here we present a series of sequence comparisons among yeasts, fishes and primates to illustrate the concept of a limit on genetic distance. The idea of a limit or optimum is in line with the yin-yang paradigm in the traditional Chinese view of the universal creative law in nature.

  7. Shifting distributions of adult Atlantic sturgeon amidst post-industrialization and future impacts in the Delaware River: a maximum entropy approach.

    Directory of Open Access Journals (Sweden)

    Matthew W Breece

    Full Text Available Atlantic sturgeon (Acipenser oxyrinchus oxyrinchus) experienced severe declines due to habitat destruction and overfishing beginning in the late 19th century. Subsequent to the boom and bust period of exploitation, there has been minimal fishing pressure and improving habitats. However, lack of recovery led to the 2012 listing of Atlantic sturgeon under the Endangered Species Act. Although habitats may be improving, the availability of high quality spawning habitat, essential for the survival and development of eggs and larvae, may still be a limiting factor in the recovery of Atlantic sturgeon. To estimate adult Atlantic sturgeon spatial distributions during riverine occupancy in the Delaware River, we utilized a maximum entropy (MaxEnt) approach along with passive biotelemetry during the likely spawning season. We found that substrate composition and distance from the salt front significantly influenced the locations of adult Atlantic sturgeon in the Delaware River. To broaden the scope of this study we projected our model onto four scenarios depicting varying locations of the salt front in the Delaware River: the contemporary location of the salt front during the likely spawning season, the location of the salt front during the historic fishery in the late 19th century, an estimated shift in the salt front by the year 2100 due to climate change, and an extreme drought scenario, similar to that which occurred in the 1960's. The movement of the salt front upstream as a result of dredging and climate change likely eliminated historic spawning habitats and currently threatens areas where Atlantic sturgeon spawning may be taking place. Identifying where suitable spawning substrate and water chemistry intersect with the likely occurrence of adult Atlantic sturgeon in the Delaware River highlights essential spawning habitats, enhancing recovery prospects for this imperiled species.

  8. Analysis of performance limitations for superconducting cavities

    International Nuclear Information System (INIS)

    J. R. Delayen; L. R. Doolittle; C. E. Reece

    1998-01-01

    The performance of superconducting cavities in accelerators can be limited by several factors, such as field emission, quenches, arcing, and available rf power; the maximum gradient at which a cavity can operate will be determined by the lowest of these limitations for that particular cavity. The CEBAF accelerator operates with over 300 cavities and, for each of them, the authors have determined the maximum operating gradient and its limiting factor. They have developed a model that allows them to determine the distribution of gradients that could be achieved for each of these limitations independently of the others. The results of this analysis can guide an R and D program to achieve the best overall performance improvement. The same model can be used to relate the performance of single-cell and multi-cell cavities.
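
    One way to realize such a model is to read the data as a competing-risks problem: each cavity reports the gradient at which its first limitation bites, so every other limitation is right-censored there, and a Kaplan-Meier estimate recovers each limitation's standalone gradient distribution. The sketch below assumes this reading and a made-up data layout; the paper does not publish its code.

    ```python
    import numpy as np

    def limitation_distribution(grad, cause, which):
        """Kaplan-Meier estimate of the gradient distribution a single limitation
        (e.g. 'quench') would allow if it acted alone. Cavities limited by any
        other cause are treated as right-censored at their measured gradient.
        grad  : (n,) maximum operating gradient of each cavity
        cause : (n,) label of the limiting factor of each cavity
        which : the limitation to isolate
        Returns sorted gradients and the estimated survival probability S(g)."""
        order = np.argsort(grad)
        g, c = grad[order], cause[order]
        n = len(g)
        at_risk = n - np.arange(n)            # cavities still "surviving" at g_i
        event = (c == which).astype(float)    # 1 where this limitation struck
        surv = np.cumprod(1.0 - event / at_risk)
        return g, surv

    grad = np.array([5.1, 6.3, 7.0, 7.4, 8.2, 9.0, 9.5, 10.1])  # MV/m (made up)
    cause = np.array(["quench", "fe", "quench", "rf", "fe", "quench", "rf", "fe"])
    g, s = limitation_distribution(grad, cause, "quench")
    ```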

  9. The Location-Scale Mixture Exponential Power Distribution: A Bayesian and Maximum Likelihood Approach

    OpenAIRE

    Rahnamaei, Z.; Nematollahi, N.; Farnoosh, R.

    2012-01-01

    We introduce an alternative skew-slash distribution by using the scale mixture of the exponential power distribution. We derive the properties of this distribution and estimate its parameter by Maximum Likelihood and Bayesian methods. By a simulation study we compute the mentioned estimators and their mean square errors, and we provide an example on real data to demonstrate the modeling strength of the new distribution.

  10. Determination of Measurement Points in Urban Environments for Assessment of Maximum Exposure to EMF Associated with a Base Station

    Directory of Open Access Journals (Sweden)

    Agostinho Linhares

    2014-01-01

    Full Text Available A base station (BS) antenna operates in accordance with the established exposure limits if the values of electromagnetic fields (EMF) measured at the points of maximum exposure are below these limits. In the case of a BS in an open area, the maximum exposure to EMF probably occurs in the antenna's boresight direction, from a few tens to a few hundred meters away. This is not a typical scenario in urban environments. However, in the line of sight (LOS) situation, the region of maximum exposure can still be analytically estimated with good results. This paper presents a methodology for the choice of measurement points in urban areas in order to assess compliance with the limits for exposure to EMF.
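
    In the LOS case the estimate can be made with a free-space model: ground-level power density S(d) = P·G(θ)/(4πr²), maximized over distance d from the mast. The sketch below uses a crude Gaussian vertical pattern and invented link parameters to locate the maximum-exposure point; none of the numbers come from the paper.

    ```python
    import numpy as np

    # Assumed link parameters (illustrative only)
    P_tx, G_dBi, h, tilt_deg = 40.0, 17.0, 30.0, 5.0   # W, dBi, mast height m, downtilt
    G0 = 10 ** (G_dBi / 10)

    def power_density(d):
        """Far-field power density at ground level, distance d from the mast,
        with a crude Gaussian vertical pattern around the boresight direction."""
        r = np.hypot(d, h)
        depression = np.degrees(np.arctan2(h, d))      # angle below horizontal
        off_axis = depression - tilt_deg
        gain = G0 * np.exp(-(off_axis / 6.5) ** 2)     # ~6.5 deg vertical beamwidth
        return P_tx * gain / (4 * np.pi * r ** 2)

    d = np.linspace(1.0, 500.0, 5000)
    s = power_density(d)
    print("max exposure at ~%.0f m, S = %.4f W/m2" % (d[np.argmax(s)], s.max()))
    ```

    The trade-off between the 1/r² decay and the antenna pattern places the maximum tens to a few hundred meters out, consistent with the abstract's description of the open-area case.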

  11. Design of Simplified Maximum-Likelihood Receivers for Multiuser CPM Systems

    Directory of Open Access Journals (Sweden)

    Li Bing

    2014-01-01

    Full Text Available A class of simplified maximum-likelihood receivers designed for continuous phase modulation based multiuser systems is proposed. The presented receiver is built upon a front end employing mismatched filters and a maximum-likelihood detector defined in a low-dimensional signal space. The performance of the proposed receivers is analyzed and compared to some existing receivers. Some schemes are designed to implement the proposed receivers and to reveal the roles of different system parameters. Analysis and numerical results show that the proposed receivers can approach the optimum multiuser receivers with significantly (even exponentially in some cases) reduced complexity and marginal performance degradation.

  12. An ecological function and services approach to total maximum daily load (TMDL) prioritization.

    Science.gov (United States)

    Hall, Robert K; Guiliano, David; Swanson, Sherman; Philbin, Michael J; Lin, John; Aron, Joan L; Schafer, Robin J; Heggem, Daniel T

    2014-04-01

    Prioritizing total maximum daily load (TMDL) development starts by considering the scope and severity of water pollution and risks to public health and aquatic life. Methodology using quantitative assessments of in-stream water quality is appropriate and effective for point source (PS) dominated discharge, but less so in watersheds with mostly nonpoint source (NPS) related impairments. For NPSs, prioritization in TMDL development and implementation of associated best management practices should focus on restoration of ecosystem physical functions, including how restoration effectiveness depends on design, maintenance and placement within the watershed. To refine the approach to TMDL development, regulators and stakeholders must first ask if the watershed, or ecosystem, is at risk of losing riparian or other ecologically based physical attributes and processes. If so, the next step is an assessment of the spatial arrangement of functionality with a focus on the at-risk areas that could be lost, or could, with some help, regain functions. Evaluating stream and wetland riparian function has advantages over the traditional means of water quality and biological assessments for NPS TMDL development. Understanding how an ecosystem functions enables stakeholders and regulators to determine the severity of problem(s), identify source(s) of impairment, and predict and avoid a decline in water quality. The Upper Reese River, Nevada, provides an example of water quality impairment caused by NPS pollution. In this river basin, stream and wetland riparian proper functioning condition (PFC) protocol, water quality data, and remote sensing imagery were used to identify sediment sources, transport, distribution, and its impact on water quality and aquatic resources. This study found that assessments of ecological function could be used to generate leading (early) indicators of water quality degradation for targeting pollution control measures, while traditional in-stream water

  13. An Optimization-Based Impedance Approach for Robot Force Regulation with Prescribed Force Limits

    Directory of Open Access Journals (Sweden)

    R. de J. Portillo-Vélez

    2015-01-01

    Full Text Available An optimization-based approach for the regulation of excessive or insufficient forces at the end-effector level is introduced. The objective is to minimize the interaction force error at the robot end effector, while constraining undesired interaction forces. To that end, a dynamic optimization problem (DOP) is formulated considering a dynamic robot impedance model. Penalty functions are considered in the DOP to handle the constraints on the interaction force. The optimization problem is solved online through the gradient flow approach. Convergence properties are presented, and stability is established when the force limits are considered in the analysis. The effectiveness of our proposal is validated via experimental results for a robotic grasping task.

  14. The maximum entropy production and maximum Shannon information entropy in enzyme kinetics

    Science.gov (United States)

    Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš

    2018-04-01

    We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed which enables maximization of the density of entropy production with respect to the enzyme rate constants for the enzyme reaction in a steady state. Mass and Gibbs free energy conservation are considered as optimization constraints. The optimal enzyme rate constants computed in this way also yield the most uniform probability distribution of the enzyme states in a steady state, which accounts for the maximal Shannon information entropy. By means of stability analysis it is also demonstrated that maximal density of entropy production in that enzyme reaction requires a flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example, in which the density of entropy production and the Shannon information entropy are numerically maximized for the enzyme Glucose Isomerase.

  15. Optimum detection for extracting maximum information from symmetric qubit sets

    International Nuclear Information System (INIS)

    Mizuno, Jun; Fujiwara, Mikio; Sasaki, Masahide; Akiba, Makoto; Kawanishi, Tetsuya; Barnett, Stephen M.

    2002-01-01

    We demonstrate a class of optimum detection strategies for extracting the maximum information from sets of equiprobable real symmetric qubit states of a single photon. These optimum strategies were predicted by Sasaki et al. [Phys. Rev. A 59, 3325 (1999)]. The peculiar aspect is that detection schemes with at least three outputs suffice for optimum extraction of information, regardless of the number of signal elements. The cases of ternary (or trine), quinary, and septenary polarization signals are studied, where a standard von Neumann detection (a projection onto a binary orthogonal basis) fails to access the maximum information. Our experiments demonstrate that it is possible with present technologies to attain about 96% of the theoretical limit.

  16. Relevance of plastic limit loads to reference stress approach for surface cracked cylinder problems

    International Nuclear Information System (INIS)

    Kim, Yun-Jae; Shim, Do-Jun

    2005-01-01

    To investigate the relevance of the definition of the reference stress to estimate J and C* for surface crack problems, this paper compares finite element (FE) J and C* results for surface cracked pipes with those estimated according to the reference stress approach using various definitions of the reference stress. Pipes with part circumferential inner surface cracks and finite internal axial cracks are considered, subject to internal pressure and global bending. The crack depth and aspect ratio are systematically varied. The reference stress is defined in four different ways using (i) a local limit load, (ii) a global limit load, (iii) a global limit load determined from the FE limit analysis, and (iv) the optimised reference load. It is found that the reference stress based on a local limit load gives overall excessively conservative estimates of J and C*. Use of a global limit load clearly reduces the conservatism, compared to that of a local limit load, although it can sometimes provide non-conservative estimates of J and C*. The use of the FE global limit load gives overall non-conservative estimates of J and C*. The reference stress based on the optimised reference load gives overall accurate estimates of J and C*, compared to other definitions of the reference stress. Based on the present findings, general guidance on the choice of the reference stress for surface crack problems is given.

  17. High beta tokamak operation in DIII-D limited at low density/collisionality by resistive tearing modes

    International Nuclear Information System (INIS)

    La Haye, R.J.; Lao, L.L.; Strait, E.J.; Taylor, T.S.

    1997-01-01

    The maximum operational high beta in single-null divertor (SND) long pulse tokamak discharges in the DIII-D tokamak with a cross-sectional shape similar to the proposed International Thermonuclear Experimental Reactor (ITER) device is found to be limited by the onset of resistive instabilities that have the characteristics of neoclassically destabilized tearing modes. There is a soft limit due to the onset of an m/n=3/2 rotating tearing mode that saturates at low amplitude and a hard limit at slightly higher beta due to the onset of an m/n=2/1 rotating tearing mode that grows, slows down and locks. By operating at higher density and thus collisionality, the practical beta limit due to resistive tearing modes approaches the ideal magnetohydrodynamic (MHD) limit. (author). 15 refs, 4 figs

  18. Maximum likelihood bolometric tomography for the determination of the uncertainties in the radiation emission on JET TOKAMAK

    Science.gov (United States)

    Craciunescu, Teddy; Peluso, Emmanuele; Murari, Andrea; Gelfusa, Michela; JET Contributors

    2018-05-01

    The total emission of radiation is a crucial quantity to calculate the power balances and to understand the physics of any Tokamak. Bolometric systems are the main tool to measure this important physical quantity through quite sophisticated tomographic inversion methods. On the Joint European Torus, the coverage of the bolometric diagnostic, due to the availability of basically only two projection angles, is quite limited, rendering the inversion a very ill-posed mathematical problem. A new approach, based on the maximum likelihood, has therefore been developed and implemented to alleviate one of the major weaknesses of traditional tomographic techniques: the difficulty to determine routinely the confidence intervals in the results. The method has been validated by numerical simulations with phantoms to assess the quality of the results and to optimise the configuration of the parameters for the main types of emissivity encountered experimentally. The typical levels of statistical errors, which may significantly influence the quality of the reconstructions, have been identified. The systematic tests with phantoms indicate that the errors in the reconstructions are quite limited and their effect on the total radiated power remains well below 10%. A comparison with other approaches to the inversion and to the regularization has also been performed.
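
    The standard maximum-likelihood iteration for this kind of inversion under Poisson statistics is the multiplicative ML-EM update; the sketch below is a generic stand-in for the JET implementation (the geometry matrix, sizes, and synthetic test are our assumptions).

    ```python
    import numpy as np

    def mlem(A, y, n_iter=50):
        """Maximum-likelihood EM inversion for emission tomography with Poisson
        statistics.
        A : (n_detectors, n_pixels) geometry matrix of line-of-sight weights
        y : (n_detectors,) measured, non-negative signals
        """
        x = np.ones(A.shape[1])              # flat non-negative starting image
        sens = A.sum(axis=0) + 1e-12         # total sensitivity of each pixel
        for _ in range(n_iter):
            proj = A @ x + 1e-12             # forward projection
            x *= (A.T @ (y / proj)) / sens   # multiplicative ML-EM update
        return x

    # Tiny synthetic test: noisy views of a 1D "plasma" emissivity profile
    rng = np.random.default_rng(1)
    A = rng.random((40, 25))
    x_true = np.exp(-((np.arange(25) - 12) / 4.0) ** 2)
    y = rng.poisson(A @ x_true * 50) / 50.0
    x_hat = mlem(A, y)
    ```

    Confidence intervals of the kind the paper targets can then be attached by, for example, resampling the measurements (a Poisson parametric bootstrap) and repeating the inversion; this is one simple route, not necessarily the authors' estimator.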

  19. The impact of regulations, safety considerations and physical limitations on research progress at maximum biocontainment.

    Science.gov (United States)

    Shurtleff, Amy C; Garza, Nicole; Lackemeyer, Matthew; Carrion, Ricardo; Griffiths, Anthony; Patterson, Jean; Edwin, Samuel S; Bavari, Sina

    2012-12-01

    We describe herein, limitations on research at biosafety level 4 (BSL-4) containment laboratories, with regard to biosecurity regulations, safety considerations, research space limitations, and physical constraints in executing experimental procedures. These limitations can severely impact the number of collaborations and size of research projects investigating microbial pathogens of biodefense concern. Acquisition, use, storage, and transfer of biological select agents and toxins (BSAT) are highly regulated due to their potential to pose a severe threat to public health and safety. All federal, state, city, and local regulations must be followed to obtain and maintain registration for the institution to conduct research involving BSAT. These include initial screening and continuous monitoring of personnel, controlled access to containment laboratories, accurate and current BSAT inventory records. Safety considerations are paramount in BSL-4 containment laboratories while considering the types of research tools, workflow and time required for conducting both in vivo and in vitro experiments in limited space. Required use of a positive-pressure encapsulating suit imposes tremendous physical limitations on the researcher. Successful mitigation of these constraints requires additional time, effort, good communication, and creative solutions. Test and evaluation of novel vaccines and therapeutics conducted under good laboratory practice (GLP) conditions for FDA approval are prioritized and frequently share the same physical space with important ongoing basic research studies. The possibilities and limitations of biomedical research involving microbial pathogens of biodefense concern in BSL-4 containment laboratories are explored in this review.

  20. The Impact of Regulations, Safety Considerations and Physical Limitations on Research Progress at Maximum Biocontainment

    Directory of Open Access Journals (Sweden)

    Jean Patterson

    2012-12-01

    Full Text Available We describe herein, limitations on research at biosafety level 4 (BSL-4) containment laboratories, with regard to biosecurity regulations, safety considerations, research space limitations, and physical constraints in executing experimental procedures. These limitations can severely impact the number of collaborations and size of research projects investigating microbial pathogens of biodefense concern. Acquisition, use, storage, and transfer of biological select agents and toxins (BSAT) are highly regulated due to their potential to pose a severe threat to public health and safety. All federal, state, city, and local regulations must be followed to obtain and maintain registration for the institution to conduct research involving BSAT. These include initial screening and continuous monitoring of personnel, controlled access to containment laboratories, accurate and current BSAT inventory records. Safety considerations are paramount in BSL-4 containment laboratories while considering the types of research tools, workflow and time required for conducting both in vivo and in vitro experiments in limited space. Required use of a positive-pressure encapsulating suit imposes tremendous physical limitations on the researcher. Successful mitigation of these constraints requires additional time, effort, good communication, and creative solutions. Test and evaluation of novel vaccines and therapeutics conducted under good laboratory practice (GLP) conditions for FDA approval are prioritized and frequently share the same physical space with important ongoing basic research studies. The possibilities and limitations of biomedical research involving microbial pathogens of biodefense concern in BSL-4 containment laboratories are explored in this review.

  1. The Impact of Regulations, Safety Considerations and Physical Limitations on Research Progress at Maximum Biocontainment

    Science.gov (United States)

    Shurtleff, Amy C.; Garza, Nicole; Lackemeyer, Matthew; Carrion, Ricardo; Griffiths, Anthony; Patterson, Jean; Edwin, Samuel S.; Bavari, Sina

    2012-01-01

    We describe herein, limitations on research at biosafety level 4 (BSL-4) containment laboratories, with regard to biosecurity regulations, safety considerations, research space limitations, and physical constraints in executing experimental procedures. These limitations can severely impact the number of collaborations and size of research projects investigating microbial pathogens of biodefense concern. Acquisition, use, storage, and transfer of biological select agents and toxins (BSAT) are highly regulated due to their potential to pose a severe threat to public health and safety. All federal, state, city, and local regulations must be followed to obtain and maintain registration for the institution to conduct research involving BSAT. These include initial screening and continuous monitoring of personnel, controlled access to containment laboratories, accurate and current BSAT inventory records. Safety considerations are paramount in BSL-4 containment laboratories while considering the types of research tools, workflow and time required for conducting both in vivo and in vitro experiments in limited space. Required use of a positive-pressure encapsulating suit imposes tremendous physical limitations on the researcher. Successful mitigation of these constraints requires additional time, effort, good communication, and creative solutions. Test and evaluation of novel vaccines and therapeutics conducted under good laboratory practice (GLP) conditions for FDA approval are prioritized and frequently share the same physical space with important ongoing basic research studies. The possibilities and limitations of biomedical research involving microbial pathogens of biodefense concern in BSL-4 containment laboratories are explored in this review. PMID:23342380

  2. Maximum likelihood estimation of dose-response parameters for therapeutic operating characteristic (TOC) analysis of carcinoma of the nasopharynx

    International Nuclear Information System (INIS)

    Metz, C.E.; Tokars, R.P.; Kronman, H.B.; Griem, M.L.

    1982-01-01

    A Therapeutic Operating Characteristic (TOC) curve for radiation therapy plots, for all possible treatment doses, the probability of tumor ablation as a function of the probability of radiation-induced complication. Application of this analysis to actual therapeutic situations requires that dose-response curves for ablation and for complication be estimated from clinical data. We describe an approach in which ''maximum likelihood estimates'' of these dose-response curves are made, and we apply this approach to data collected on responses to radiotherapy for carcinoma of the nasopharynx. TOC curves constructed from the estimated dose-response curves are subject to moderately large uncertainties because of the limitations of the available data. These TOC curves suggest, however, that treatment doses greater than 1800 rem may substantially increase the probability of tumor ablation with little increase in the risk of radiation-induced cervical myelopathy, especially for T1 and T2 tumors.

  3. The Location-Scale Mixture Exponential Power Distribution: A Bayesian and Maximum Likelihood Approach

    Directory of Open Access Journals (Sweden)

    Z. Rahnamaei

    2012-01-01

    Full Text Available We introduce an alternative skew-slash distribution by using the scale mixture of the exponential power distribution. We derive the properties of this distribution and estimate its parameter by Maximum Likelihood and Bayesian methods. By a simulation study we compute the mentioned estimators and their mean square errors, and we provide an example on real data to demonstrate the modeling strength of the new distribution.

  4. Belene nuclear power plant contracting approach

    International Nuclear Information System (INIS)

    Tankosic, D.; Mignone, O.

    2004-01-01

    Historically, three main types of project execution and contractual approaches have been applied to energy and industrial projects, including nuclear projects. These approaches are grouped into three broad categories: 1) Turnkey Approach; 2) Split Package (Island) Approach; and 3) Multiple Package Approach. Based on a preliminary screening done by the ongoing feasibility study for NPP Belene (NEK contract to Parsons E and C), the recommended approach follows that general trend, i.e., some variation between the Split Package and the Turnkey approach. Before deciding on an execution approach, or at least before issuing bid specifications for the nuclear power plant, it is prudent, even for a country with an existing nuclear power program (like Bulgaria), to re-check and verify the capabilities of the interested bidders to handle contracts of this size and nature. During the last decades, nuclear energy went through a substantial restructuring, and most of the capabilities (human and financial) that existed before are no longer available. This re-checking should mainly cover the experience of the bidders as regards the design, construction and operation of the stations where they were involved, but should also include items such as local experience, capability to bring favorable financing, liability coverage, general background, potential and organizational structures. The advantages and disadvantages for the Owner of the three contracting approaches can be briefly summarized as follows. Turnkey Approach - main advantage: all responsibilities rest with a single Contractor or Consortium; main disadvantages: limited project control by the Owner and restricted local participation. Split Package Contract Approach - main advantages: more favorable financing conditions and increased local participation; main disadvantage: increased interface problems. Multiple Package Contract Approach - main advantages: the opportunity to tailor the plant and maximum increase of local

  5. Maximum entropy method in momentum density reconstruction

    International Nuclear Information System (INIS)

    Dobrzynski, L.; Holas, A.

    1997-01-01

    The Maximum Entropy Method (MEM) is applied to the reconstruction of 3-dimensional electron momentum density distributions observed through a set of Compton profiles measured along various crystallographic directions. It is shown that the reconstruction of the electron momentum density may be reliably carried out with the aid of a simple iterative algorithm suggested originally by Collins. A number of distributions have been simulated in order to check the performance of MEM. It is shown that MEM can be recommended as a model-free approach. (author). 13 refs, 1 fig

  6. Time-varying block codes for synchronisation errors: maximum a posteriori decoder and practical issues

    Directory of Open Access Journals (Sweden)

    Johann A. Briffa

    2014-06-01

    Full Text Available In this study, the authors consider time-varying block (TVB) codes, which generalise a number of previous synchronisation error-correcting codes. They also consider various practical issues related to maximum a posteriori (MAP) decoding of these codes. Specifically, they give an expression for the expected distribution of drift between transmitter and receiver because of synchronisation errors. They determine an appropriate choice for state space limits based on the drift probability distribution. In turn, they obtain an expression for the decoder complexity under given channel conditions in terms of the state space limits used. For a given state space, they also give a number of optimisations that reduce the algorithm complexity with no further loss of decoder performance. They also show how the MAP decoder can be used in the absence of known frame boundaries, and demonstrate that an appropriate choice of decoder parameters allows the decoder to approach the performance when frame boundaries are known, at the expense of some increase in complexity. Finally, they express some existing constructions as TVB codes, comparing performance with published results and showing that improved performance is possible by taking advantage of the flexibility of TVB codes.
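
    The drift distribution and the resulting state-space limits can be sketched directly. Assuming, as a simplification, at most one insertion or deletion per transmitted symbol, the drift after n symbols is a sum of i.i.d. -1/0/+1 steps, so its pmf follows by repeated convolution; the state-space limits are then chosen to cover all but a negligible tail, mirroring the procedure the abstract describes. The probabilities below are illustrative.

    ```python
    import numpy as np

    def drift_pmf(n_symbols, p_ins, p_del):
        """Pmf of the transmitter-receiver drift after n_symbols, assuming at
        most one insertion or deletion per symbol (a simplification of the
        channel model). Index 0 of the result corresponds to drift -n_symbols."""
        step = np.array([p_del, 1.0 - p_ins - p_del, p_ins])   # drift -1, 0, +1
        pmf = np.array([1.0])
        for _ in range(n_symbols):
            pmf = np.convolve(pmf, step)
        return pmf

    pmf = drift_pmf(100, p_ins=0.01, p_del=0.01)
    drifts = np.arange(-100, 101)
    # Choose state-space limits covering all but a 1e-6 tail of the drift mass.
    cum = np.cumsum(pmf)
    lo = drifts[np.searchsorted(cum, 1e-6 / 2)]
    hi = drifts[np.searchsorted(cum, 1 - 1e-6 / 2)]
    print("state space limits:", lo, hi)
    ```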

  7. A spatiotemporal dengue fever early warning model accounting for nonlinear associations with meteorological factors: a Bayesian maximum entropy approach

    Science.gov (United States)

    Lee, Chieh-Han; Yu, Hwa-Lung; Chien, Lung-Chang

    2014-05-01

    Dengue fever has been identified as one of the most widespread vector-borne diseases in tropical and sub-tropical regions. In the last decade, dengue has become an emerging infectious disease epidemic in Taiwan, especially in the southern area, where incidence is high every year. For the purpose of disease prevention and control, an early warning system is urgently needed. Previous studies have shown significant relationships between climate variables, in particular rainfall and temperature, and the temporal epidemic patterns of dengue cases. However, the transmission of dengue fever is a complex interactive process whose composite space-time effects have mostly been understated. This study proposes a one-week-ahead warning system for dengue fever epidemics in southern Taiwan that considers nonlinear associations between weekly dengue cases and meteorological factors across space and time. The early warning system is based on an integration of the distributed lag nonlinear model (DLNM) and stochastic Bayesian Maximum Entropy (BME) analysis. The study identified the most significant meteorological measures, including weekly minimum temperature and maximum 24-hour rainfall with lags of up to 15 weeks, for dengue case variation under conditions of uncertainty. Subsequently, the combination of nonlinear lagged effects of climate variables and a space-time dependence function is implemented via a Bayesian framework to predict dengue fever occurrences in southern Taiwan during 2012. The results show that the early warning system provides useful spatio-temporal predictions of potential dengue fever outbreaks. In conclusion, the proposed approach can provide a practical disease control tool for environmental regulators seeking more effective strategies for dengue fever prevention.

  8. A comparative study on the forming limit diagram prediction between Marciniak-Kuczynski model and modified maximum force criterion by using the evolving non-associated Hill48 plasticity model

    Science.gov (United States)

    Shen, Fuhui; Lian, Junhe; Münstermann, Sebastian

    2018-05-01

    Experimental and numerical investigations on the forming limit diagram (FLD) of a ferritic stainless steel were performed in this study. The FLD of this material was obtained by Nakajima tests. Both the Marciniak-Kuczynski (MK) model and the modified maximum force criterion (MMFC) were used for the theoretical prediction of the FLD. From the results of uniaxial tensile tests along different loading directions with respect to the rolling direction, strong anisotropic plastic behaviour was observed in the investigated steel. A recently proposed anisotropic evolving non-associated Hill48 (enHill48) plasticity model, which was developed from the conventional Hill48 model based on the non-associated flow rule with evolving anisotropic parameters, was adopted to describe the anisotropic hardening behaviour of the investigated material. In the previous study, the model was coupled with the MMFC for FLD prediction. In the current study, the enHill48 was further coupled with the MK model. By comparing the predicted forming limit curves with the experimental results, the influences of anisotropy in terms of flow rule and evolving features on the forming limit prediction were revealed and analysed. In addition, the forming limit predictive performances of the MK and the MMFC models in conjunction with the enHill48 plasticity model were compared and evaluated.

  9. Dynamics of non-stationary processes that follow the maximum of the Rényi entropy principle.

    Science.gov (United States)

    Shalymov, Dmitry S; Fradkov, Alexander L

    2016-01-01

    We propose dynamics equations which describe the behaviour of non-stationary processes that follow the maximum Rényi entropy principle. The equations are derived on the basis of the speed-gradient principle originating in control theory. The maximum Rényi entropy principle is analysed for discrete and continuous cases, using both a discrete random variable and a probability density function (PDF). We consider mass conservation and energy conservation constraints and demonstrate the uniqueness of the limit distribution and the asymptotic convergence of the PDF for both cases. The coincidence of the limit distribution of the proposed equations with the Rényi distribution is examined.
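
    For reference, the quantity being maximized is the Rényi entropy; its standard definition (reproduced for convenience, not quoted from the paper) is:

    ```latex
    % Rényi entropy of order \alpha (\alpha > 0, \alpha \neq 1) of a discrete distribution p:
    H_\alpha(p) = \frac{1}{1-\alpha} \ln \sum_{i=1}^{n} p_i^{\alpha},
    \qquad
    \lim_{\alpha \to 1} H_\alpha(p) = -\sum_{i=1}^{n} p_i \ln p_i
    % (the Shannon entropy is recovered in the limit \alpha \to 1).
    ```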

  10. Maximum field capability of energy saver superconducting magnets

    International Nuclear Information System (INIS)

    Turkot, F.; Cooper, W.E.; Hanft, R.; McInturff, A.

    1983-01-01

    At an energy of 1 TeV the superconducting cable in the Energy Saver dipole magnets will be operating at ca. 96% of its nominal short sample limit; the corresponding number in the quadrupole magnets will be 81%. All magnets for the Saver are individually tested for maximum current capability under two modes of operation; some 900 dipoles and 275 quadrupoles have now been measured. The dipole winding is composed of four individually wound coils which in general come from four different reels of cable. As part of the magnet fabrication quality control, a short piece of cable from both ends of each reel has its critical current measured at 5 T and 4.3 K. In this paper the authors describe and present the statistical results of the maximum field tests (including quench and cycle) on Saver dipole and quadrupole magnets and explore the correlation of these tests with cable critical current.

  11. Transport methods: general. 6. A Flux-Limited Diffusion Theory Derived from the Maximum Entropy Eddington Factor

    International Nuclear Information System (INIS)

    Yin, Chukai; Su, Bingjing

    2001-01-01

    Minerbo's maximum entropy Eddington factor (MEEF) method was proposed as a low-order approximation to transport theory, in which the first two moment equations are closed for the scalar flux Φ and the current F through a statistically derived nonlinear Eddington factor f. This closure has the ability to handle various degrees of anisotropy of the angular flux and is well justified both numerically and theoretically. Thus, many efforts have been made to use this approximation in transport computations, especially in the radiative transfer and astrophysics communities. However, the method suffers from numerical instability and may lead to anomalous solutions if the equations are solved by certain commonly used (implicit) mesh schemes. Studies on numerical stability in one-dimensional cases show that the MEEF equations can be solved satisfactorily by an implicit scheme (in the treatment of ∂Φ/∂x) if the angular flux is not too anisotropic. Results are compared among the classic diffusion solution P_1, the MEEF solution f_M obtained by Riemann solvers, and the NFLD solution D_M for the two problems, respectively. In Fig. 1, NFLD and MEEF quantitatively predict very close results. However, the NFLD solution is qualitatively better because it is continuous while MEEF predicts unphysical jumps near the middle of the slab. In Fig. 2, the NFLD and MEEF solutions are almost identical, except near the material interface. In summary, the flux-limited diffusion theory derived from the MEEF description is quantitatively as accurate as the MEEF method. However, it is more qualitatively correct and user-friendly than the MEEF method and can be applied efficiently to various steady-state problems. Numerical tests show that this method is widely valid and overall predicts better results than other low-order approximations for various kinds of problems, including eigenvalue problems. Thus, it is an appealing approximate solution technique that is fast computationally and yet is accurate enough for a wide variety of problems.

  12. Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo

    Science.gov (United States)

    Cheong, R. Y.; Gabda, D.

    2017-09-01

    Analysis of flood trends is vital since flooding threatens human living in financial, environmental and security terms. The data of annual maximum river flows in Sabah were fitted to the generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research has shown that the MLE provides unstable results, especially for small sample sizes. In this study, we used different Bayesian Markov chain Monte Carlo (MCMC) methods based on the Metropolis-Hastings algorithm to estimate the GEV parameters. The Bayesian MCMC method is a statistical inference approach which studies parameter estimation by using the posterior distribution based on Bayes’ theorem. The Metropolis-Hastings algorithm is used to overcome the high-dimensional state space faced in the Monte Carlo method. This approach also accounts for more uncertainty in parameter estimation, which then yields a better prediction of maximum river flow in Sabah.
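
    A random-walk Metropolis-Hastings sampler for the three GEV parameters takes only a few lines. The sketch below uses synthetic annual maxima, flat priors and arbitrary proposal scales, so it illustrates the algorithm rather than the study's actual configuration.

    ```python
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(1)
    # Synthetic annual maximum flows (scipy's shape parameter c corresponds to -xi).
    data = genextreme.rvs(c=-0.1, loc=100.0, scale=20.0, size=40, random_state=rng)

    def log_post(theta):
        """Log-posterior with flat priors; theta = (mu, log sigma, xi)."""
        mu, log_sigma, xi = theta
        return genextreme.logpdf(data, c=-xi, loc=mu, scale=np.exp(log_sigma)).sum()

    theta = np.array([data.mean(), np.log(data.std()), 0.0])
    lp, samples = log_post(theta), []
    for _ in range(20000):
        prop = theta + rng.normal(0, [2.0, 0.05, 0.05])   # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:          # Metropolis acceptance rule
            theta, lp = prop, lp_prop
        samples.append(theta)

    mu, log_sigma, xi = np.mean(samples[5000:], axis=0)   # posterior means after burn-in
    print(mu, np.exp(log_sigma), xi)
    ```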

  13. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    Science.gov (United States)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  14. Metabolic expenditures of lunge feeding rorquals across scale: implications for the evolution of filter feeding and the limits to maximum body size.

    Directory of Open Access Journals (Sweden)

    Jean Potvin

    Full Text Available Bulk-filter feeding is an energetically efficient strategy for resource acquisition and assimilation, and facilitates the maintenance of extreme body size as exemplified by baleen whales (Mysticeti) and multiple lineages of bony and cartilaginous fishes. Among mysticetes, rorqual whales (Balaenopteridae) exhibit an intermittent ram filter feeding mode, lunge feeding, which requires the abandonment of body-streamlining in favor of a high-drag, mouth-open configuration aimed at engulfing a very large amount of prey-laden water. Particularly while lunge feeding on krill (the most widespread prey preference among rorquals), the effort required during engulfment involves short bouts of high-intensity muscle activity that demand high metabolic output. We used computational modeling together with morphological and kinematic data on humpback (Megaptera novaeangliae), fin (Balaenoptera physalus), blue (Balaenoptera musculus) and minke (Balaenoptera acutorostrata) whales to estimate engulfment power output in comparison with standard metrics of metabolic rate. The simulations reveal that engulfment metabolism increases across the full body size range of the larger rorqual species to nearly 50 times the basal metabolic rate of terrestrial mammals of the same body mass. Moreover, they suggest that the metabolism of the largest body sizes runs with significant oxygen deficits during mouth opening, namely, 20% over maximum VO2 at the size of the largest blue whales, thus requiring significant contributions from anaerobic catabolism during a lunge and significant recovery after a lunge. Our analyses show that engulfment metabolism is also significantly lower for smaller adults, typically one-tenth to one-half of VO2max. These results not only point to a physiological limit on maximum body size in this lineage, but also have major implications for the ontogeny of extant rorquals as well as the evolutionary pathways used by ancestral toothed whales to transition from hunting

  15. A novel approach to estimating potential maximum heavy metal exposure to ship recycling yard workers in Alang, India

    International Nuclear Information System (INIS)

    Deshpande, Paritosh C.; Tilwankar, Atit K.; Asolekar, Shyam R.

    2012-01-01

    Highlights: ► Conceptual framework to apportion pollution loads from plate-cutting in ship recycling. ► Estimates upper bound (pollutants in air) and lower bound (intertidal sediments). ► Mathematical model using vector addition approach and based on Gaussian dispersion. ► Model predicted maximum emissions of heavy metals at different wind speeds. ► Exposure impacts on a worker's health and the intertidal sediments can be assessed.

  16. Prey size and availability limits maximum size of rainbow trout in a large tailwater: insights from a drift-foraging bioenergetics model

    Science.gov (United States)

    Dodrill, Michael J.; Yackulic, Charles B.; Kennedy, Theodore A.; Haye, John W

    2016-01-01

    The cold and clear water conditions present below many large dams create ideal conditions for the development of economically important salmonid fisheries. Many of these tailwater fisheries have experienced declines in the abundance and condition of large trout species, yet the causes of these declines remain uncertain. Here, we develop, assess, and apply a drift-foraging bioenergetics model to identify the factors limiting rainbow trout (Oncorhynchus mykiss) growth in a large tailwater. We explored the relative importance of temperature, prey quantity, and prey size by constructing scenarios where these variables, both singly and in combination, were altered. Predicted growth matched empirical mass-at-age estimates, particularly for younger ages, demonstrating that the model accurately describes how current temperature and prey conditions interact to determine rainbow trout growth. Modeling scenarios that artificially inflated prey size and abundance demonstrate that rainbow trout growth is limited by the scarcity of large prey items and overall prey availability. For example, shifting 10% of the prey biomass to the 13 mm (large) length class, without increasing overall prey biomass, increased lifetime maximum mass of rainbow trout by 88%. Additionally, warmer temperatures resulted in lower predicted growth at current and lower levels of prey availability; however, growth was similar across all temperatures at higher levels of prey availability. Climate change will likely alter flow and temperature regimes in large rivers with corresponding changes to invertebrate prey resources used by fish. Broader application of drift-foraging bioenergetics models to build a mechanistic understanding of how changes to habitat conditions and prey resources affect growth of salmonids will benefit management of tailwater fisheries.

  17. Evaluation of the maximum-likelihood adaptive neural system (MLANS) applications to noncooperative IFF

    Science.gov (United States)

    Chernick, Julian A.; Perlovsky, Leonid I.; Tye, David M.

    1994-06-01

    This paper describes applications of the maximum likelihood adaptive neural system (MLANS) to the characterization of clutter in IR images and to the identification of targets. The characterization of image clutter is needed to improve target detection and to enhance the ability to compare the performance of different algorithms using diverse imagery data. Enhanced unambiguous IFF is important for fratricide reduction, while automatic cueing and targeting is becoming an ever-increasing part of operations. We utilized MLANS, which is a parametric neural network that combines optimal statistical techniques with a model-based approach. This paper shows that MLANS outperforms classical classifiers, the quadratic classifier and the nearest neighbor classifier, because on the one hand it is not limited to the usual Gaussian distribution assumption and can adapt in real time to the image clutter distribution; on the other hand, MLANS learns from fewer samples and is more robust than nearest neighbor classifiers. Future research will address noncooperative IFF using fused IR and MMW data.

  18. Direct maximum parsimony phylogeny reconstruction from genotype data.

    Science.gov (United States)

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2007-12-05

    Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.

  19. Direct maximum parsimony phylogeny reconstruction from genotype data

    Directory of Open Access Journals (Sweden)

    Ravi R

    2007-12-01

    Full Text Available Abstract Background Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. Results In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Conclusion Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.

  20. New approaches to deriving limits of the release of radioactive material into the environment

    International Nuclear Information System (INIS)

    Lindell, B.

    1977-01-01

    During the last few years, new principles have been developed for the limitation of the release of radioactive material into the environment. It is no longer considered appropriate to base the limitation on limits for the concentrations of the various radionuclides in air and water effluents. Such limits would not prevent large amounts of radioactive material from reaching the environment should effluent rates be high. A common practice has been to identify critical radionuclides and critical pathways and to base the limitation on authorized dose limits for local ''critical groups''. If this were the only limitation, however, larger releases could be permitted after installing either higher stacks or equipment to retain the more short-lived radionuclides for decay before release. Continued release at such limits would then lead to considerably higher exposure at a distance than if no such installation had been made. Accordingly there would be no immediate control of overlapping exposures from several sources, nor would the system guarantee control of the future situation. The new principles described in this paper take the future into account by limiting the annual dose commitments rather than the annual doses. They also offer means of controlling the global situation by limiting not only doses in critical groups but also global collective doses. Their objective is not only to ensure that individual dose limits will always be respected but also to meet the requirement that ''all doses be kept as low as reasonably achievable''. The new approach is based on the most recent recommendations by the ICRP and has been described in a report by an IAEA panel (Procedures for establishing limits for the release of radioactive material into the environment). It has been applied in the development of new Swedish release regulations, which illustrate some of the problems which arise in the practical application.

  1. New approaches to deriving limits of the release of radioactive material into the environment

    International Nuclear Information System (INIS)

    Lindell, B.

    1977-01-01

    During the last few years, new principles have been developed for the limitation of the release of radioactive material into the environment. It is no longer considered appropriate to base the limitation on limits for the concentrations of the various radionuclides in air and water effluents. Such limits would not prevent large amounts of radioactive material from reaching the environment should effluent rates be high. A common practice has been to identify critical radionuclides and critical pathways and to base the limitation on authorized dose limits for local ''critical groups''. If this were the only limitation, however, larger releases could be permitted after installing either higher stacks or equipment to retain the more shortlived radionuclides for decay before release. Continued release at such limits would then lead to considerably higher exposure at a distance than if no such installation had been made. Accordingly there would be no immediate control of overlapping exposures from several sources, nor would the system guarantee control of the future situation. The new principles described in this paper take the future into account by limiting the annual dose commitments rather than the annual doses. They also offer means of controlling the global situation by limiting not only doses in critical groups but also global collective doses. Their objective is not only to ensure that individual dose limits will always be respected but also to meet the requirement that ''all doses be kept as low as reasonably achievable''. The new approach is based on the most recent recommendations by the ICRP and has been described in a report by an IAEA panel (Procedures for Establishing Limits for the Release of Radioactive Material into the Environment). It has been applied in the development of new Swedish release regulations, which illustrate some of the problems which arise in the practical application. (author)

  2. Maximum flow approach to prioritize potential drug targets of Mycobacterium tuberculosis H37Rv from protein-protein interaction network.

    Science.gov (United States)

    Melak, Tilahun; Gakkhar, Sunita

    2015-12-01

    In spite of the implementation of several strategies, tuberculosis (TB) remains an overwhelmingly serious global public health problem causing millions of infections and deaths every year. This is mainly due to the emergence of drug-resistant varieties of TB. The current treatment strategies for drug-resistant TB are of longer duration, more expensive and have side effects. This highlights the importance of identification and prioritization of targets for new drugs. This study has been carried out to prioritize potential drug targets of Mycobacterium tuberculosis H37Rv based on their flow to resistance genes. The weighted proteome interaction network of the pathogen was constructed using a dataset from the STRING database. Only a subset of the dataset with interactions that have a combined score value ≥770 was considered. A maximum flow approach has been used to prioritize potential drug targets. The potential drug targets were obtained through comparative genome and network centrality analysis. The curated set of resistance genes was retrieved from the literature. A detailed literature review and additional assessment of the method were also carried out for validation. A list of 537 proteins which are essential to the pathogen and non-homologous with human proteins was obtained from the comparative genome analysis. Through network centrality measures, 131 of them were found within the close neighborhood of the centre of gravity of the proteome network. These proteins were further prioritized based on their maximum flow value to resistance genes, and they are proposed as reliable drug targets of the pathogen. Proteins which interact with the host were also identified in order to understand the infection mechanism. Potential drug targets of Mycobacterium tuberculosis H37Rv were successfully prioritized based on their flow to the resistance genes of existing drugs, which is believed to increase the druggability of the targets since inhibition of a protein that has a maximum flow to
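
    The ranking criterion itself is easy to reproduce on a toy graph. The sketch below uses networkx's maximum-flow routine with invented protein names and capacities standing in for (scaled) STRING combined scores; it only illustrates the flow computation, not the paper's full pipeline.

    ```python
    import networkx as nx

    # Toy directed interaction network; capacities mimic scaled STRING combined scores.
    G = nx.DiGraph()
    for u, v, w in [("targetA", "p1", 0.9), ("p1", "resistance_gene", 0.8),
                    ("targetB", "p2", 0.95), ("p2", "p3", 0.4),
                    ("p3", "resistance_gene", 0.85)]:
        G.add_edge(u, v, capacity=w)

    # Prioritize candidate targets by their maximum flow to the resistance gene.
    for target in ["targetA", "targetB"]:
        flow_value, _ = nx.maximum_flow(G, target, "resistance_gene")
        print(target, flow_value)   # targetA 0.8, targetB 0.4
    ```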

  3. Density limit experiments on FTU

    International Nuclear Information System (INIS)

    Pucella, G.; Tudisco, O.; Apicella, M.L.; Apruzzese, G.; Artaserse, G.; Belli, F.; Boncagni, L.; Botrugno, A.; Buratti, P.; Calabrò, G.; Castaldo, C.; Cianfarani, C.; Cocilovo, V.; Dimatteo, L.; Esposito, B.; Frigione, D.; Gabellieri, L.; Giovannozzi, E.; Bin, W.; Granucci, G.

    2013-01-01

    One of the main problems in tokamak fusion devices concerns the capability to operate at high plasma density, which is observed to be limited by the appearance of catastrophic events causing loss of plasma confinement. The commonly used empirical scaling law for the density limit is the Greenwald limit, predicting that the maximum achievable line-averaged density along a central chord depends only on the average plasma current density. However, the Greenwald density limit has been exceeded in tokamak experiments in the case of peaked density profiles, indicating that the edge density is the real parameter responsible for the density limit. Recently, it has been shown on the Frascati Tokamak Upgrade (FTU) that the Greenwald density limit is exceeded in gas-fuelled discharges with a high value of the edge safety factor. In order to understand this behaviour, dedicated density limit experiments were performed on FTU, in which the high density domain was explored in a wide range of values of plasma current (I_p = 500–900 kA) and toroidal magnetic field (B_T = 4–8 T). These experiments confirm the edge nature of the density limit, as a Greenwald-like scaling holds for the maximum achievable line-averaged density along a peripheral chord passing at r/a ≃ 4/5. On the other hand, the maximum achievable line-averaged density along a central chord does not depend on the average plasma current density and essentially depends on the toroidal magnetic field only. This behaviour is explained in terms of density profile peaking in the high density domain, with a peaking factor at the disruption depending on the edge safety factor. The possibility that the MARFE (multifaceted asymmetric radiation from the edge) phenomenon is the cause of the peaking has been considered, with the MARFE believed to form a channel for the penetration of neutral particles into deeper layers of the plasma. Finally, the magnetohydrodynamic (MHD) analysis has shown that also the central line
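
    For orientation, the Greenwald limit quoted above is the standard scaling n_G = I_p / (π a²), with n_G in units of 10^20 m^-3, I_p in MA and minor radius a in m. A one-line helper (generic formula with illustrative FTU-like numbers, not code from the experiment):

    ```python
    import math

    def greenwald_density(I_p_MA: float, a_m: float) -> float:
        """Greenwald density limit in units of 10^20 m^-3."""
        return I_p_MA / (math.pi * a_m ** 2)

    # Example: I_p = 0.9 MA, a = 0.3 m (roughly FTU-like values).
    print(greenwald_density(0.9, 0.3))   # ~3.18, i.e. ~3.2e20 m^-3
    ```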

  4. A review of the regional maximum flood and rational formula using ...

    African Journals Online (AJOL)

    Flood estimation methods in South Africa are based on three general approaches: empirical, deterministic and probabilistic. The "quick" methods often used as checks are the regional maximum flood (RMF) and the rational formula (RF), which form part of the empirical and deterministic methods respectively. A database of ...
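
    The rational formula referred to here is the classical peak-flow estimate Q = C i A; in the mixed units common in practice (i in mm/h, A in km², Q in m³/s) it carries a 0.278 conversion factor. A minimal sketch of the generic formula (not the paper's regional calibration):

    ```python
    def rational_peak_flow(C: float, i_mm_per_h: float, A_km2: float) -> float:
        """Rational formula: peak discharge in m^3/s from runoff coefficient C,
        rainfall intensity i (mm/h) and catchment area A (km^2).
        0.278 converts mm/h * km^2 to m^3/s (1e6 m^2 * 1e-3 m / 3600 s)."""
        return 0.278 * C * i_mm_per_h * A_km2

    print(rational_peak_flow(C=0.6, i_mm_per_h=50.0, A_km2=10.0))   # ~83.4 m^3/s
    ```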

  5. MODEL PREDICTIVE CONTROL FOR PHOTOVOLTAIC STATION MAXIMUM POWER POINT TRACKING SYSTEM

    Directory of Open Access Journals (Sweden)

    I. Elzein

    2015-01-01

    Full Text Available The purpose of this paper is to present an alternative maximum power point tracking (MPPT) algorithm for a photovoltaic module (PVM) to produce the maximum power, Pmax, using the optimal duty ratio, D, for different types of converters and load matching. We present a state-based approach to the design of the maximum power point tracker for a stand-alone photovoltaic power generation system. The system under consideration consists of a solar array with nonlinear time-varying characteristics and a step-up converter with an appropriate filter. The proposed algorithm has the advantages of maximizing the efficiency of power utilization, can be integrated with other MPPT algorithms without affecting the PVM performance, is excellent for real-time applications, and is a robust analytical method, different from the traditional MPPT algorithms which are based more on trial and error, or on comparisons between present and past states. The procedure to calculate the optimal duty ratio for buck, boost and buck-boost converters, to transfer the maximum power from a PVM to a load, is presented in the paper. Additionally, the existence and uniqueness of the optimal internal impedance, to transfer the maximum power from a photovoltaic module using load matching, is proved.
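
    The load-matching calculation can be made concrete with the ideal-converter input-resistance relations R_in = R_L/D² (buck), R_L(1 − D)² (boost) and R_L((1 − D)/D)² (buck-boost): setting R_in equal to the module's optimal resistance V_mpp/I_mpp and solving for D gives the optimal duty ratio. The sketch below uses these textbook relations as an illustration of the idea, not the paper's own derivation.

    ```python
    import math

    def optimal_duty(R_mpp: float, R_load: float, topology: str) -> float:
        """Ideal duty ratio making the converter input resistance equal to R_mpp."""
        if topology == "buck":        # R_in = R_load / D^2, requires R_load <= R_mpp
            return math.sqrt(R_load / R_mpp)
        if topology == "boost":       # R_in = R_load * (1 - D)^2, requires R_load >= R_mpp
            return 1.0 - math.sqrt(R_mpp / R_load)
        if topology == "buck-boost":  # R_in = R_load * ((1 - D) / D)^2, any R_load
            return 1.0 / (1.0 + math.sqrt(R_mpp / R_load))
        raise ValueError(topology)

    # Example: module with V_mpp = 17.2 V, I_mpp = 4.95 A feeding a 20-ohm load.
    R_mpp = 17.2 / 4.95
    print(optimal_duty(R_mpp, 20.0, "boost"))       # ~0.58
    print(optimal_duty(R_mpp, 20.0, "buck-boost"))  # ~0.71
    ```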

  6. Maximum Evaporation Rates of Water Droplets Approaching Obstacles in the Atmosphere Under Icing Conditions

    Science.gov (United States)

    Lowell, H. H.

    1953-01-01

    When a closed body or a duct envelope moves through the atmosphere, air pressure and temperature rises occur ahead of the body or, under ram conditions, within the duct. If cloud water droplets are encountered, droplet evaporation will result because of the air-temperature rise and the relative velocity between the droplet and the stagnating air. It is shown that the solution of the steady-state psychrometric equation provides evaporation rates which are the maximum possible when droplets are entrained in air moving along stagnation lines under such conditions. Calculations are made for a wide variety of water droplet diameters, ambient conditions, and flight Mach numbers. Droplet diameter, body size, and Mach number effects are found to predominate, whereas wide variation in ambient conditions is of relatively small significance in the determination of evaporation rates. The results are essentially exact for the case of movement of droplets having diameters smaller than about 30 microns along relatively long ducts (length at least several feet) or toward large obstacles (wings), since disequilibrium effects are then of little significance. Mass losses in the case of movement within ducts will often be significant fractions (one-fifth to one-half) of original droplet masses, while very small droplets within ducts will often disappear even though the entraining air is not fully stagnated. Wing-approach evaporation losses will usually be of the order of several percent of original droplet masses. Two numerical examples are given of the determination of local evaporation rates and total mass losses in cases involving cloud droplets approaching circular cylinders along stagnation lines. The cylinders chosen were of 3.95-inch (10.0+ cm) diameter and 39.5-inch (100+ cm) diameter. The smaller is representative of icing-rate measurement cylinders, while with the larger will be associated an air-flow field similar to that ahead of an airfoil having a leading-edge radius

  7. Maximum stress estimation model for multi-span waler beams with deflections at the supports using average strains.

    Science.gov (United States)

    Park, Sung Woo; Oh, Byung Kwan; Park, Hyo Seon

    2015-03-30

    The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating the maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.

  8. Risk based limits for Operational Safety Requirements

    International Nuclear Information System (INIS)

    Cappucci, A.J. Jr.

    1993-01-01

    OSR limits are designed to protect the assumptions made in the facility safety analysis in order to preserve the safety envelope during facility operation. Normally, limits are set based on ''worst case conditions'' without regard to the likelihood (frequency) of a credible event occurring. In special cases where the accident analyses are based on ''time at risk'' arguments, it may be desirable to control the time during which the facility is at risk. A methodology has been developed to use OSR limits to control the source terms and the times these source terms would be available, thus controlling the acceptable risk to a nuclear process facility. The methodology defines a new term, ''gram-days''. This term represents the area under a source term (inventory) versus time curve, which represents the risk to the facility. Using the concept of gram-days (normalized to one year) allows the use of an accounting scheme to control the risk under the inventory versus time curve. The methodology results in at least three OSR limits: (1) control of the maximum inventory or source term, (2) control of the maximum gram-days for the period based on a source-term-weighted average, and (3) control of the maximum gram-days at the individual source term levels. Basing OSR limits on risk-based safety analysis is feasible, and a basis for development of risk-based limits is defensible. However, monitoring the inventories and frequencies required to maintain facility operation within the safety envelope may be complex and time consuming.
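
    The gram-day accounting is simply the time integral of the source-term inventory. A minimal sketch with invented numbers (the limits shown are hypothetical, not the facility's actual values):

    ```python
    import numpy as np

    # Source-term inventory (grams) sampled at day boundaries over an operating period.
    days = np.array([0.0, 30.0, 60.0, 90.0, 120.0])
    inventory_g = np.array([100.0, 150.0, 120.0, 180.0, 90.0])

    # Gram-days = area under the inventory-versus-time curve (trapezoidal rule).
    gram_days = ((inventory_g[1:] + inventory_g[:-1]) / 2.0 * np.diff(days)).sum()

    max_inventory_limit_g = 200.0   # hypothetical limit (1): maximum source term
    gram_day_limit = 20000.0        # hypothetical limit (2): gram-day budget for the period
    print(gram_days)                                                   # 16350.0
    print(inventory_g.max() <= max_inventory_limit_g, gram_days <= gram_day_limit)
    ```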

  9. The calculation of maximum permissible exposure levels for laser radiation

    International Nuclear Information System (INIS)

    Tozer, B.A.

    1979-01-01

    The maximum permissible exposure data of the revised standard BS 4803 are presented as a set of decision charts which ensure that the user automatically takes into account such details as pulse length and pulse pattern, limiting angular subtense, combinations of multiple wavelength and/or multiple pulse lengths, etc. The two decision charts given are for the calculation of radiation hazards to skin and eye respectively. (author)

  10. Limitations of Boltzmann's principle

    International Nuclear Information System (INIS)

    Lavenda, B.H.

    1995-01-01

    The usual form of Boltzmann's principle assures that maximum entropy, or entropy reduction, occurs with maximum probability, implying a unimodal distribution. Boltzmann's principle cannot be applied to nonunimodal distributions, like the arcsine law, because the entropy may be concave only over a limited portion of the interval. The method of subordination shows that the arcsine distribution corresponds to a process with a single degree of freedom, thereby confirming the invalidation of Boltzmann's principle. The fractalization of time leads to a new distribution in which arcsine and Cauchy distributions can coexist simultaneously for nonintegral degrees of freedom between √2 and 2

  11. Estimation of Lithological Classification in Taipei Basin: A Bayesian Maximum Entropy Method

    Science.gov (United States)

    Wu, Meng-Ting; Lin, Yuan-Chien; Yu, Hwa-Lung

    2015-04-01

    In environmental and other scientific applications, we must have a certain understanding of the lithological composition of the subsurface. Because of the restrictions of real conditions, only a limited amount of data can be acquired. To find out the lithological distribution in a study area, many spatial statistical methods are used to estimate the lithological composition at unsampled points or grids. This study applied the Bayesian Maximum Entropy (BME) method, an emerging method in the field of geological spatiotemporal statistics. The BME method can identify the spatiotemporal correlation of the data and combine not only hard data but also soft data to improve estimation. Lithological classification data are discrete categorical data. Therefore, this research applied categorical BME to establish a complete three-dimensional lithological estimation model, applying the limited hard data from cores and the soft data generated from geological dating data and virtual wells to estimate the three-dimensional lithological classification of the Taipei Basin. Keywords: Categorical Bayesian Maximum Entropy method, Lithological Classification, Hydrogeological Setting

  12. Application of various FLD modelling approaches

    Science.gov (United States)

    Banabic, D.; Aretz, H.; Paraianu, L.; Jurco, P.

    2005-07-01

    This paper focuses on a comparison between different modelling approaches to predict the forming limit diagram (FLD) for sheet metal forming under a linear strain path using the recently introduced orthotropic yield criterion BBC2003 (Banabic D et al 2005 Int. J. Plasticity 21 493-512). The FLD models considered here are a finite element based approach, the well-known Marciniak-Kuczynski model, the modified maximum force criterion according to Hora et al (1996 Proc. Numisheet'96 Conf. (Dearborn/Michigan) pp 252-6), Swift's diffuse necking approach (Swift H W 1952 J. Mech. Phys. Solids 1 1-18) and Hill's classical localized necking approach (Hill R 1952 J. Mech. Phys. Solids 1 19-30). The FLD of an AA5182-O aluminium sheet alloy has been determined experimentally in order to quantify the predictive capabilities of the models mentioned above.

  13. A genetic approach to shape reconstruction in limited data tomography

    International Nuclear Information System (INIS)

    Turcanu, C.; Craciunescu, T.

    2001-01-01

    The paper proposes a new method for shape reconstruction in computerized tomography. Unlike nuclear medicine applications, in physical science problems we are often confronted with limited data sets: constraints on the number of projections or limited view angles. The problem of image reconstruction from projections may be considered as a problem of finding an image (solution) having projections that match the experimental ones. In our approach, we choose a statistical correlation coefficient to evaluate the fitness of any potential solution. The optimization process is carried out by a genetic algorithm. The algorithm has some features common to all genetic algorithms but also some problem-oriented characteristics. One of them is that a chromosome, representing a potential solution, is not linear but coded as a matrix of pixels corresponding to a two-dimensional image. This internal representation reflects the nature of the problem: slight differences between two points in the original problem space give rise to similarly slight differences once they are coded. Another particular feature is a newly built crossover operator: the grid-based crossover, suitable for high-dimension two-dimensional chromosomes. Except for the population size and the dimension of the cutting grid for the grid-based crossover, all the other parameters of the algorithm are independent of the geometry of the tomographic reconstruction. The performance of the method is evaluated on a phantom typical of an application with limited data sets: the determination of neutron energy spectra with time resolution in the case of short-pulsed neutron emission. A genetic reconstruction is presented. Both qualitative judgement and quantitative assessment, based on several figures of merit, point out that the proposed method ensures an improved reconstruction of shapes, sizes and resolution in the image, even in the presence of noise. (authors)
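
    A heavily simplified sketch of the approach is given below: chromosomes are 2-D images, fitness is the correlation between computed and target projections, and a plain block swap stands in for the paper's grid-based crossover. The projection operator here is just row and column sums, so this illustrates the idea, not the authors' algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N = 16
    target = np.zeros((N, N)); target[4:12, 6:10] = 1.0      # phantom to reconstruct

    def projections(img):
        return np.concatenate([img.sum(axis=0), img.sum(axis=1)])  # two orthogonal views

    t_proj = projections(target)

    def fitness(img):
        return np.corrcoef(projections(img), t_proj)[0, 1]   # statistical correlation

    pop = [rng.random((N, N)) for _ in range(40)]
    for _ in range(200):
        pop.sort(key=fitness, reverse=True)
        next_pop = pop[:10]                                  # elitism
        while len(next_pop) < 40:
            a, b = rng.choice(10, size=2, replace=False)
            child = pop[a].copy()
            r, c = rng.integers(0, N // 2, size=2)           # block swap as a crude crossover
            child[r:r + N // 2, c:c + N // 2] = pop[b][r:r + N // 2, c:c + N // 2]
            child += rng.normal(0, 0.05, (N, N))             # mutation
            next_pop.append(np.clip(child, 0.0, 1.0))
        pop = next_pop

    print(fitness(pop[0]))   # correlation of the best individual's projections
    ```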

  14. A macrothermodynamic approach to the limit of reversible capillary condensation.

    Science.gov (United States)

    Trens, Philippe; Tanchoux, Nathalie; Galarneau, Anne; Brunel, Daniel; Fubini, Bice; Garrone, Edoardo; Fajula, François; Di Renzo, Francesco

    2005-08-30

    The threshold of reversible capillary condensation is a well-defined thermodynamic property, as evidenced by a corresponding-states treatment of literature and experimental data on the lowest closure point of the hysteresis loop in capillary condensation-evaporation cycles for several adsorbates. The nonhysteretical filling of small mesopores presents the properties of a first-order phase transition, confirming that the limit of condensation reversibility does not coincide with the pore critical point. The enthalpy of reversible capillary condensation can be calculated by a Clausius-Clapeyron approach and is consistently larger than the condensation heat under unconfined conditions. Calorimetric data on the capillary condensation of tert-butyl alcohol in MCM-41 silica confirm a 20% increase of the condensation heat in small mesopores. This enthalpic advantage makes it easier for capillary forces to overcome adhesion forces and explains the disappearance of the hysteresis loop.
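
    The Clausius-Clapeyron step extracts the condensation enthalpy from the temperature dependence of the threshold pressure; its standard integrated form (reproduced for convenience, not quoted from the paper) is:

    ```latex
    % Integrated Clausius-Clapeyron relation at the capillary condensation threshold:
    \ln \frac{p}{p_0} = -\frac{\Delta H_{\mathrm{cond}}}{R}\,\frac{1}{T} + \mathrm{const}
    % The slope of ln(p/p0) versus 1/T yields \Delta H_{cond}; the calorimetric data
    % above indicate a value about 20% larger in small mesopores than in the bulk.
    ```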

  15. Comparison of Extremum-Seeking Control Techniques for Maximum Power Point Tracking in Photovoltaic Systems

    Directory of Open Access Journals (Sweden)

    Chen-Han Wu

    2011-12-01

    Full Text Available Due to Japan’s recent nuclear crisis and petroleum price hikes, the search for renewable energy sources has become an issue of immediate concern. A promising candidate attracting much global attention is solar energy, as it is green and also inexhaustible. A maximum power point tracking (MPPT) controller is employed in such a way that the output power provided by a photovoltaic (PV) system is boosted to its maximum level. However, in the context of abrupt changes in irradiance, conventional MPPT controller approaches suffer from insufficient robustness against ambient variation, inferior transient response, and a loss of output power as a consequence of the long duration required by tracking procedures. Accordingly, in this work maximum power point tracking is carried out successfully using a sliding mode extremum-seeking control (SMESC) method, and the tracking performances of three controllers are compared by simulations: an extremum-seeking controller, a sinusoidal extremum-seeking controller and a sliding mode extremum-seeking controller. Being able to track the maximum power point promptly in the case of an abrupt change in irradiance, the SMESC approach is proven by simulations to be superior in terms of system dynamic and steady-state responses, and excellent robustness along with system stability is demonstrated as well.

  16. Maximum Acceleration Recording Circuit

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1995-01-01

    Coarsely digitized maximum levels recorded in blown fuses. Circuit feeds power to accelerometer and makes nonvolatile record of maximum level to which output of accelerometer rises during measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for same purpose, circuit is simpler, less bulky, consumes less power, and costs less, avoiding playback and analysis of data recorded in magnetic or electronic memory devices. Circuit used, for example, to record accelerations to which commodities subjected during transportation on trucks.

  17. Maximum Bandwidth Enhancement of Current Mirror using Series-Resistor and Dynamic Body Bias Technique

    Directory of Open Access Journals (Sweden)

    V. Niranjan

    2014-09-01

    Full Text Available This paper introduces a new approach for enhancing the bandwidth of a low voltage CMOS current mirror. The proposed approach is based on utilizing the body effect in a MOS transistor by connecting its gate and bulk terminals together for the signal input. This results in boosting the effective transconductance of the MOS transistor along with a reduction of the threshold voltage. The proposed approach does not affect the DC gain of the current mirror. We demonstrate that the proposed approach is compatible with the widely used series-resistor technique for enhancing the current mirror bandwidth, and both techniques have been employed simultaneously for maximum bandwidth enhancement. An important consequence of using both techniques simultaneously is the reduction of the series-resistor value required for achieving the same bandwidth. This reduction in value is very attractive because a smaller resistor results in a smaller chip area and less noise. PSpice simulation results using 180 nm CMOS technology from TSMC are included to verify the results. The proposed current mirror operates at 1 V consuming only 102 µW, and a maximum bandwidth extension ratio of 1.85 has been obtained using the proposed approach. Simulation results are in good agreement with analytical predictions.

  18. Site Specific Probable Maximum Precipitation Estimates and Professional Judgement

    Science.gov (United States)

    Hayes, B. D.; Kao, S. C.; Kanney, J. F.; Quinlan, K. R.; DeNeale, S. T.

    2015-12-01

    State and federal regulatory authorities currently rely upon the US National Weather Service Hydrometeorological Reports (HMRs) to determine probable maximum precipitation (PMP) estimates (i.e., rainfall depths and durations) for estimating flooding hazards for relatively broad regions in the US. PMP estimates for the contributing watersheds upstream of vulnerable facilities are used to estimate riverine flooding hazards, while site-specific estimates for small watersheds are appropriate for individual facilities such as nuclear power plants. The HMRs are often criticized due to their limitations on basin size, questionable applicability in regions affected by orographic effects, their lack of consistent methods, and generally by their age. HMR-51, which provides generalized PMP estimates for the United States east of the 105th meridian, was published in 1978 and is sometimes perceived as overly conservative. The US Nuclear Regulatory Commission (NRC) is currently reviewing several flood hazard evaluation reports that rely on site-specific PMP estimates that have been commercially developed. As such, NRC has recently investigated key areas of expert judgement via a generic audit and one in-depth site-specific review as they relate to identifying and quantifying actual and potential storm moisture sources, determining storm transposition limits, and adjusting available moisture during storm transposition. Though much of the approach reviewed was considered a logical extension of the HMRs, two key points of expert judgement stood out for further in-depth review. The first relates primarily to small storms and the use of a heuristic for storm representative dew point adjustment developed for the Electric Power Research Institute by North American Weather Consultants in 1993 in order to harmonize historic storms, for which only 12-hour dew point data were available, with more recent storms in a single database. The second issue relates to the use of climatological averages for spatially

  19. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

    A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always yields a positive solution over the whole energy range. Moreover, the theory unifies both the overdetermined and the underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is removed by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system which appears in the present study has been established. Results of computer simulation showed the effectiveness of the present theory. (author)

  20. Maximum Power from a Solar Panel

    Directory of Open Access Journals (Sweden)

    Michael Miller

    2010-01-01

    Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and current at maximum power. These quantities are determined by finding the maximum value of the power equation using differentiation. After the maximum values are found for each time of day, each individual quantity (the voltage at maximum power, the current at maximum power, and the maximum power itself) is plotted as a function of the time of day.
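
    The differentiation step can be reproduced numerically with a single-diode panel model: maximize P(V) = V I(V) by finding the root of dP/dV = I + V dI/dV. The parameter values below are illustrative, not taken from the article.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    I_L, I_0, V_t = 5.0, 1e-9, 1.5   # photocurrent (A), saturation current (A), effective thermal voltage (V)

    def current(V):
        return I_L - I_0 * (np.exp(V / V_t) - 1.0)             # single-diode I-V curve

    def dP_dV(V):
        return current(V) - V * I_0 / V_t * np.exp(V / V_t)    # d(V*I)/dV

    V_oc = V_t * np.log(I_L / I_0 + 1.0)     # open-circuit voltage (where I = 0)
    V_mp = brentq(dP_dV, 1e-6, V_oc)         # voltage of maximum power: dP/dV = 0
    print(V_mp, current(V_mp), V_mp * current(V_mp))   # V_mp, I_mp, P_max
    ```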

  1. Study on wavelength of maximum absorbance for phenyl- thiourea derivatives: A topological and non-conventional physicochemical approach

    International Nuclear Information System (INIS)

    Thakur, Suprajnya; Mishra, Ashutosh; Thakur, Mamta; Thakur, Abhilash

    2014-01-01

    In the present study, efforts have been made to analyze the role of different structural/topological and non-conventional physicochemical features in the X-ray absorption property, the wavelength of maximum absorbance λ_m. Efforts are also made to compare the magnitudes of the various parameters in order to optimize the features mainly responsible for characterizing the wavelength of maximum absorbance λ_m in X-ray absorption. For this purpose the multiple linear regression method is used, and on the basis of the regression and correlation values a suitable model has been developed.

  2. An electromagnetism-like method for the maximum set splitting problem

    Directory of Open Access Journals (Sweden)

    Kratica Jozef

    2013-01-01

    Full Text Available In this paper, an electromagnetism-like approach (EM) for solving the maximum set splitting problem (MSSP) is applied. The hybrid approach, consisting of movement based on attraction-repulsion mechanisms combined with the proposed scaling technique, directs EM to promising search regions. A fast implementation of the local search procedure additionally improves the efficiency of the overall EM system. The performance of the proposed EM approach is evaluated on two classes of instances from the literature: minimum hitting set and Steiner triple systems. The results show that, except in one case, EM reaches optimal solutions on minimum hitting set instances with up to 500 elements and 50000 subsets. It also reaches all optimal/best-known solutions for Steiner triple systems.

  3. Validation of Analytical Damping Ratio by Fatigue Stress Limit

    Science.gov (United States)

    Foong, Faruq Muhammad; Chung Ket, Thein; Beng Lee, Ooi; Aziz, Abdul Rashid Abdul

    2018-03-01

    The optimisation process of a vibration energy harvester is usually restricted to experimental approaches due to the lack of an analytical equation to describe the damping of a system. This study derives an analytical equation which describes the first-mode damping ratio of a clamped-free cantilever beam under harmonic base excitation by combining the transverse equation of motion of the beam with the damping-stress equation. This equation, as opposed to other common damping determination methods, is independent of experimental inputs or finite element simulations and can be solved using a simple iterative convergence method. The derived equation was found to be valid for cases in which the maximum bending stress in the beam is below the fatigue limit stress of the beam. However, an increasing trend in the error between the experimental and analytical results was observed at high stress levels. Hence, the fatigue limit stress was used as a parameter to define the validity of the analytical equation.

  4. Estimating Probable Maximum Precipitation by Considering Combined Effect of Typhoon and Southwesterly Air Flow

    Directory of Open Access Journals (Sweden)

    Cheng-Chin Liu

    2016-01-01

    Full Text Available Typhoon Morakot hit southern Taiwan in 2009, bringing 48 hr of heavy rainfall [close to the probable maximum precipitation (PMP)] to the Tsengwen Reservoir catchment. This extreme rainfall event resulted from the combined (co-movement) effect of two climate systems (i.e., typhoon and southwesterly air flow). Based on the traditional PMP estimation method (i.e., the storm transposition method, STM), two PMP estimation approaches, the Amplification Index (AI) and Independent System (IS) approaches, which consider the combined effect, are proposed in this work. The AI approach assumes that the southwesterly air flow precipitation in a typhoon event could reach its maximum value. The IS approach assumes that the typhoon and southwesterly air flow are independent weather systems. Based on these assumptions, calculation procedures for the two approaches were constructed for a case study on the Tsengwen Reservoir catchment. The results show that the PMP estimates for 6- to 60-hr durations using the two approaches are approximately 30% larger than the PMP estimates using the traditional STM without considering the combined effect. This work pioneers a PMP estimation method that considers the combined effect of a typhoon and southwesterly air flow. Further studies on this issue are essential and encouraged.

  5. Approaching conversion limit with all-dielectric solar cell reflectors.

    Science.gov (United States)

    Fu, Sze Ming; Lai, Yi-Chun; Tseng, Chi Wei; Yan, Sheng Lun; Zhong, Yan Kai; Shen, Chang-Hong; Shieh, Jia-Min; Li, Yu-Ren; Cheng, Huang-Chung; Chi, Gou-chung; Yu, Peichen; Lin, Albert

    2015-02-09

    Metallic back reflectors have been used for thin-film and wafer-based solar cells for a very long time. Nonetheless, metallic mirrors might not be the best choice for photovoltaics. In this work, we show that solar cells with all-dielectric reflectors can surpass the best-configured metal-backed devices. Theoretical and experimental results both show that superior large-angle light scattering capability can be achieved by diffuse medium reflectors, and the solar cell J-V enhancement is higher for solar cells using all-dielectric reflectors. Specifically, the measured diffused scattering efficiency (D.S.E.) of a diffuse medium reflector is >0.8 for the light trapping spectral range (600 nm–1000 nm), and the measured reflectance of a diffuse medium can be as high as that of silver if the geometry of the embedded titanium oxide (TiO2) nanoparticles is optimized. Moreover, the diffuse medium reflectors have the additional advantages of room-temperature processing, low cost, and very high throughput. We believe that using all-dielectric solar cell reflectors is a way to approach the thermodynamic conversion limit by completely excluding metallic dissipation.

  6. Dynamic performance of maximum power point tracking circuits using sinusoidal extremum seeking control for photovoltaic generation

    Science.gov (United States)

    Leyva, R.; Artillan, P.; Cabal, C.; Estibals, B.; Alonso, C.

    2011-04-01

    The article studies the dynamic performance of a family of maximum power point tracking circuits used for photovoltaic generation. It revisits the sinusoidal extremum seeking control (ESC) technique, which can be considered a particular subgroup of the Perturb and Observe algorithms. The sinusoidal ESC technique consists of adding a small sinusoidal disturbance to the input and processing the perturbed output to drive the operating point to its maximum. The output processing involves a synchronous multiplication and a filtering stage. The filter instance determines the dynamic performance of the MPPT based on the sinusoidal ESC principle. The approach uses the well-known root-locus method to give insight into the damping degree and settling time of the maximum-seeking waveforms. This article shows the transient waveforms for three different filter instances to illustrate the approach. Finally, an experimental prototype corroborates the dynamic analysis.
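
    The loop described above (sinusoidal dither, synchronous multiplication, filtering, integration) can be simulated in a few lines. The sketch below seeks the maximum of a static map and is only a schematic of the sinusoidal ESC principle; all gains and frequencies are arbitrary choices.

    ```python
    import numpy as np

    def plant(u):
        return 1.0 - (u - 0.6) ** 2            # static map with its maximum at u* = 0.6

    dt, a, omega, k = 1e-3, 0.02, 2 * np.pi * 50, 2.0
    u_hat, lp = 0.2, 0.0                       # parameter estimate and low-pass filter state
    for n in range(200000):
        t = n * dt
        y = plant(u_hat + a * np.sin(omega * t))     # measure with sinusoidal perturbation
        lp += dt * 2 * np.pi * 5 * (y - lp)          # low-pass state; (y - lp) is the high-passed output
        grad = (y - lp) * np.sin(omega * t)          # synchronous demodulation ~ gradient estimate
        u_hat += dt * k * grad                       # integrate toward the maximum

    print(u_hat)   # converges near 0.6
    ```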

  7. Limit cycles bifurcating from a perturbed quartic center

    Energy Technology Data Exchange (ETDEWEB)

    Coll, Bartomeu, E-mail: dmitcv0@ps.uib.ca [Dept. de Matematiques i Informatica, Universitat de les Illes Balears, Facultat de ciencies, 07071 Palma de Mallorca (Spain); Llibre, Jaume, E-mail: jllibre@mat.uab.ca [Dept. de Matematiques, Universitat Autonoma de Barcelona, Edifici Cc 08193 Bellaterra, Barcelona, Catalonia (Spain); Prohens, Rafel, E-mail: dmirps3@ps.uib.ca [Dept. de Matematiques i Informatica, Universitat de les Illes Balears, Facultat de ciencies, 07071 Palma de Mallorca (Spain)

    2011-04-15

    Highlights: We study polynomial perturbations of a quartic center. We get simultaneous upper and lower bounds for the bifurcating limit cycles. A higher lower bound for the maximum number of limit cycles is obtained. We obtain more limit cycles than the number obtained in the cubic case. - Abstract: We consider the quartic center ẋ = −y f(x, y), ẏ = x f(x, y), with f(x, y) = (x + a)(y + b)(x + c) and abc ≠ 0. Here we study the maximum number σ of limit cycles which can bifurcate from the periodic orbits of this quartic center when we perturb it inside the class of polynomial vector fields of degree n, using the averaging theory of first order. We prove that 4[(n − 1)/2] + 4 ≤ σ ≤ 5[(n − 1)/2] + 14, where [η] denotes the integer part function of η.

  8. An Adaptive Approach to Mitigate Background Covariance Limitations in the Ensemble Kalman Filter

    KAUST Repository

    Song, Hajoon

    2010-07-01

    A new approach is proposed to address the background covariance limitations arising from undersampled ensembles and unaccounted model errors in the ensemble Kalman filter (EnKF). The method enhances the representativeness of the EnKF ensemble by augmenting it with new members chosen adaptively to add missing information that prevents the EnKF from fully fitting the data to the ensemble. The vectors to be added are obtained by back projecting the residuals of the observation misfits from the EnKF analysis step onto the state space. The back projection is done using an optimal interpolation (OI) scheme based on an estimated covariance of the subspace missing from the ensemble. In the experiments reported here, the OI uses a preselected stationary background covariance matrix, as in the hybrid EnKF–three-dimensional variational data assimilation (3DVAR) approach, but the resulting correction is included as a new ensemble member instead of being added to all existing ensemble members. The adaptive approach is tested with the Lorenz-96 model. The hybrid EnKF–3DVAR is used as a benchmark to evaluate the performance of the adaptive approach. Assimilation experiments suggest that the new adaptive scheme significantly improves the EnKF behavior when it suffers from small size ensembles and neglected model errors. It was further found to be competitive with the hybrid EnKF–3DVAR approach, depending on ensemble size and data coverage.
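
    A schematic of the augmentation step in Python (shapes, the static covariance, and the observation operator are illustrative assumptions; the EnKF analysis itself is abstracted away as the array x_a):

        import numpy as np

        rng = np.random.default_rng(0)
        n_state, n_obs, n_ens = 40, 20, 10
        H = np.zeros((n_obs, n_state))
        H[np.arange(n_obs), 2 * np.arange(n_obs)] = 1.0  # observe every other grid point
        B = np.eye(n_state)               # preselected stationary background covariance
        R = 0.1 * np.eye(n_obs)           # observation-error covariance
        x_a = rng.normal(size=(n_ens, n_state))  # ensemble after the EnKF analysis step
        y = rng.normal(size=n_obs)               # observations

        residual = y - H @ x_a.mean(axis=0)            # misfit the ensemble could not fit
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # optimal interpolation (OI) gain
        new_member = x_a.mean(axis=0) + K @ residual   # residual back-projected onto state space
        x_aug = np.vstack([x_a, new_member])           # added as a new member, not to all members
        print(x_aug.shape)                             # (11, 40)

    This is the distinction from the hybrid EnKF-3DVAR noted above: the OI correction enters as one extra ensemble member rather than as a shift applied to every member.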

  9. Are inundation limit and maximum extent of sand useful for differentiating tsunamis and storms? An example from sediment transport simulations on the Sendai Plain, Japan

    Science.gov (United States)

    Watanabe, Masashi; Goto, Kazuhisa; Bricker, Jeremy D.; Imamura, Fumihiko

    2018-02-01

    We examined the quantitative difference in the distribution of tsunami and storm deposits based on numerical simulations of inundation and sediment transport due to tsunami and storm events on the Sendai Plain, Japan. The calculated distance from the shoreline inundated by the 2011 Tohoku-oki tsunami was smaller than that inundated by storm surges from hypothetical typhoon events. Previous studies have assumed that deposits observed farther inland than the possible inundation limit of storm waves and storm surge were tsunami deposits. However, confirming only the extent of inundation is insufficient to distinguish tsunami and storm deposits, because the inundation limit of storm surges may be farther inland than that of tsunamis in the case of gently sloping coastal topography such as on the Sendai Plain. In other locations, where coastal topography is steep, the maximum inland inundation extent of storm surges may be only several hundred meters, so marine-sourced deposits that are distributed several km inland can be identified as tsunami deposits by default. Over both gentle and steep slopes, another difference between tsunami and storm deposits is the total volume deposited, as flow speed over land during a tsunami is faster than during a storm surge. Therefore, the total deposit volume could also be a useful proxy to differentiate tsunami and storm deposits.

  10. Studying DDT Susceptibility at Discriminating Time Intervals Focusing on Maximum Limit of Exposure Time Survived by DDT Resistant Phlebotomus argentipes (Diptera: Psychodidae): an Investigative Report.

    Science.gov (United States)

    Rama, Aarti; Kesari, Shreekant; Das, Pradeep; Kumar, Vijay

    2017-07-24

    Extensive application of the routine insecticide dichlorodiphenyltrichloroethane (DDT) to control Phlebotomus argentipes (Diptera: Psychodidae), the proven vector of visceral leishmaniasis in India, has evoked the problem of resistance/tolerance against DDT, eventually nullifying DDT-dependent strategies to control this vector. Because tolerating an hour-long exposure to DDT is not challenging enough for resistant P. argentipes, estimating susceptibility by exposing sand flies to the insecticide for just an hour becomes a trivial and futile task. Therefore, this bioassay study was carried out to investigate the maximum limit of exposure time to which DDT-resistant P. argentipes can endure the effect of DDT for their survival. The mortality rate of laboratory-reared DDT-resistant strain P. argentipes exposed to DDT was studied at discriminating time intervals of 60 min, and it was concluded that highly resistant sand flies could withstand up to 420 min of exposure to this insecticide. Additionally, the lethal time for female P. argentipes was observed to be higher than for males, suggesting that they are highly resistant to DDT's toxicity. Our results support the monitoring of the tolerance limit with respect to time and hence point towards an urgent need to change the World Health Organization's protocol for susceptibility identification in resistant P. argentipes.

  11. Solar maximum mission panel jettison analysis remote manipulator system

    Science.gov (United States)

    Bauer, R. B.

    1980-01-01

    A study is presented of the development of the Remote Manipulator System (RMS) configurations for jettison of the solar panels on the Solar Maximum Mission/Multimission Satellite. A valid RMS maneuver between jettison configurations was developed. Arm and longeron loads and end-effector excursions due to the solar panel jettison were determined to see if they were within acceptable limits. These loads and end-effector excursions were analyzed under two RMS modes: servos active in the position-hold submode, and brakes on.

  12. Maximum likelihood phylogenetic reconstruction from high-resolution whole-genome data and a tree of 68 eukaryotes.

    Science.gov (United States)

    Lin, Yu; Hu, Fei; Tang, Jijun; Moret, Bernard M E

    2013-01-01

    The rapid accumulation of whole-genome data has renewed interest in the study of the evolution of genomic architecture, under events such as rearrangements, duplications, and losses. Comparative genomics, evolutionary biology, and cancer research all require tools to elucidate the mechanisms, history, and consequences of those evolutionary events, while phylogenetics could use whole-genome data to enhance its picture of the Tree of Life. Current approaches in the area of phylogenetic analysis are limited to very small collections of closely related genomes using low-resolution data (typically a few hundred syntenic blocks); moreover, these approaches typically do not include duplication and loss events. We describe a maximum likelihood (ML) approach for phylogenetic analysis that takes into account genome rearrangements as well as duplications, insertions, and losses. Our approach can handle high-resolution genomes (with 40,000 or more markers) and can use in the same analysis genomes with very different numbers of markers. Because our approach uses a standard ML reconstruction program (RAxML), it scales up to large trees. We present the results of extensive testing on both simulated and real data showing that our approach returns very accurate results very quickly. In particular, we analyze a dataset of 68 high-resolution eukaryotic genomes, with 3,000 to 42,000 genes each, from the eGOB database; the analysis, including bootstrapping, takes just 3 hours on a desktop system and returns a tree in agreement with all well-supported branches, while also suggesting resolutions for some disputed placements.
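
    The key preprocessing idea, turning gene orders into binary characters that a standard ML program such as RAxML can analyze, can be sketched as follows in Python (toy signed gene orders; the paper's actual encoding may differ in detail):

        # Represent each genome by presence/absence of gene adjacencies.
        genomes = {
            "taxon_A": [1, 2, 3, 4],
            "taxon_B": [1, -3, -2, 4],   # taxon_A with the segment [2, 3] reversed
            "taxon_C": [1, 2, -4, -3],   # taxon_A with the segment [3, 4] reversed
        }

        def adjacencies(order):
            # Encode each adjacency as the unordered pair of adjoining gene ends
            # (gene id, True for head / False for tail), so reversals are handled.
            ends = set()
            for g1, g2 in zip(order, order[1:]):
                ends.add(frozenset([(abs(g1), g1 > 0), (abs(g2), g2 < 0)]))
            return ends

        universe = sorted({a for o in genomes.values() for a in adjacencies(o)}, key=str)
        for name, order in genomes.items():
            present = adjacencies(order)
            print(name, "".join("1" if a in present else "0" for a in universe))

    The resulting 0/1 matrix is the kind of character matrix that can be handed to RAxML for standard ML tree reconstruction.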

  13. Assessing suitable area for Acacia dealbata Mill. in the Ceira River Basin (Central Portugal based on maximum entropy modelling approach

    Directory of Open Access Journals (Sweden)

    Jorge Pereira

    2015-12-01

    Full Text Available Biological invasion by exotic organisms has become a key issue, a concern associated with the deep impacts on several domains described as resulting from such processes. A better understanding of the processes, the identification of more susceptible areas, and the definition of preventive or mitigation measures are identified as critical for the purpose of reducing the associated impacts. The use of species distribution modeling might help in identifying areas that are more susceptible to invasion. This paper presents preliminary results on assessing the susceptibility to invasion by the exotic species Acacia dealbata Mill. in the Ceira river basin. The results are based on the maximum entropy modelling approach, considered one of the correlative modelling techniques with the best predictive performance. Models whose validation is based on independent data sets show better performance, here evaluated with the AUC of the ROC accuracy measure.
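
    A minimal sketch of the validation step on an independent data set, using scikit-learn's logistic regression as a stand-in for the MaxEnt software (to which it is closely related for presence/background data); all data here are synthetic:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(1)
        X_train = rng.normal(size=(500, 4))  # environmental predictors at sampled sites
        y_train = (X_train[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)
        X_indep = rng.normal(size=(200, 4))  # independent validation sites
        y_indep = (X_indep[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

        model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        auc = roc_auc_score(y_indep, model.predict_proba(X_indep)[:, 1])
        print(f"AUC (ROC) on the independent data set: {auc:.2f}")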

  14. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can result when sample size and allocation rate to the treatment arms can be modified in an interim analysis. It is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing sample size to decrease, allowing only an increase in the sample size of the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.

  15. A novel approach to estimating potential maximum heavy metal exposure to ship recycling yard workers in Alang, India

    Energy Technology Data Exchange (ETDEWEB)

    Deshpande, Paritosh C.; Tilwankar, Atit K.; Asolekar, Shyam R., E-mail: asolekar@iitb.ac.in

    2012-11-01

    yards in India. -- Highlights: → Conceptual framework to apportion pollution loads from plate-cutting in ship recycling. → Estimates upper bound (pollutants in air) and lower bound (intertidal sediments). → Mathematical model using vector addition approach and based on Gaussian dispersion. → Model predicted maximum emissions of heavy metals at different wind speeds. → Exposure impacts on a worker's health and the intertidal sediments can be assessed.

  16. Maximum Stress Estimation Model for Multi-Span Waler Beams with Deflections at the Supports Using Average Strains

    Directory of Open Access Journals (Sweden)

    Sung Woo Park

    2015-03-01

    Full Text Available The safety of a multi-span waler beam subjected simultaneously to a distributed load and deflections at its supports can be secured by limiting the maximum stress of the beam to a specific value to prevent the beam from reaching a limit state for failure or collapse. Despite the fact that the vast majority of accidents on construction sites occur at waler beams in retaining wall systems, no safety monitoring model that can consider deflections at the supports of the beam is available. In this paper, a maximum stress estimation model for a waler beam based on average strains measured from vibrating wire strain gauges (VWSGs), the most frequently used sensors in the construction field, is presented. The model is derived by defining the relationship between the maximum stress and the average strains measured from VWSGs. In addition to the maximum stress, support reactions, deflections at supports, and the magnitudes of distributed loads for the beam structure can be identified by the estimation model using the average strains. Using simulation tests on two multi-span beams, the performance of the model is evaluated by estimating the maximum stress, deflections at supports, support reactions, and the magnitudes of distributed loads.

  17. A generic statistical methodology to predict the maximum pit depth of a localized corrosion process

    International Nuclear Information System (INIS)

    Jarrah, A.; Bigerelle, M.; Guillemot, G.; Najjar, D.; Iost, A.; Nianga, J.-M.

    2011-01-01

    Highlights: → We propose a methodology to predict the maximum pit depth in a corrosion process. → Generalized Lambda Distribution and the Computer Based Bootstrap Method are combined. → GLD fits a large variety of distributions both in their central and tail regions. → Minimum thickness preventing perforation can be estimated with a safety margin. → Considering its applications, this new approach can help to size industrial pieces. - Abstract: This paper outlines a new methodology to predict accurately the maximum pit depth related to a localized corrosion process. It combines two statistical methods: the Generalized Lambda Distribution (GLD), to determine a model of distribution fitting the experimental frequency distribution of depths, and the Computer Based Bootstrap Method (CBBM), to generate simulated distributions equivalent to the experimental one. In comparison with conventionally established statistical methods that are restricted to the use of inferred distributions constrained by specific mathematical assumptions, the major advantage of the methodology presented in this paper is that both the GLD and the CBBM enable a statistical treatment of the experimental data without making any preconceived choice either on the unknown theoretical parent distribution underlying the pit depths, which characterizes the global corrosion phenomenon, or on the unknown associated theoretical extreme value distribution which characterizes the deepest pits. Considering an experimental distribution of depths of pits produced on an aluminium sample, estimations of maximum pit depth using a GLD model are compared to similar estimations based on the usual Gumbel and Generalized Extreme Value (GEV) methods proposed in the corrosion engineering literature. The GLD approach is shown to have smaller bias and dispersion in the estimation of the maximum pit depth than the Gumbel approach, both for its realization and mean. This leads to comparing the GLD approach to the GEV one
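
    The extreme-value side of such a comparison can be sketched with scipy (synthetic pit depths; the GLD fit itself has no off-the-shelf scipy routine, so only a Gumbel fit to block maxima and a bootstrap of the sample maximum are shown):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        depths = stats.weibull_min.rvs(1.5, scale=50.0, size=400, random_state=rng)  # pit depths [um]

        # Parametric route: fit a Gumbel law to block maxima (20 blocks of 20 pits).
        loc, scale = stats.gumbel_r.fit(depths.reshape(20, 20).max(axis=1))
        print("Gumbel 99% quantile of max depth:", stats.gumbel_r.ppf(0.99, loc, scale))

        # Resampling route: bootstrap the distribution of the sample maximum.
        boot_max = [rng.choice(depths, size=depths.size, replace=True).max() for _ in range(2000)]
        print("bootstrap 99% quantile of max depth:", np.quantile(boot_max, 0.99))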

  18. Bayesian interpretation of Generalized empirical likelihood by maximum entropy

    OpenAIRE

    Rochet , Paul

    2011-01-01

    We study a parametric estimation problem related to moment condition models. As an alternative to the generalized empirical likelihood (GEL) and the generalized method of moments (GMM), a Bayesian approach to the problem can be adopted, extending the MEM procedure to parametric moment conditions. We show in particular that a large number of GEL estimators can be interpreted as a maximum entropy solution. Moreover, we provide a more general field of applications by proving the method to be rob...

  19. Limits for the burial of the Department of Energy transuranic wastes

    Energy Technology Data Exchange (ETDEWEB)

    Healy, J.W.; Rodgers, J.C.

    1979-01-15

    Potential limits for the shallow earth burial of transuranic elements were examined by simplified models of the individual pathways to man. Pathways examined included transport to surface streams, transport to ground water, intrusion, and people living on the burial ground area after the wastes have surfaced. Limits are derived for each pathway and operational limits are suggested, based upon a dose to the organ receiving the maximum dose rate of 0.5 rem/y after 70 years of exposure for the maximally exposed individual.

  20. Limits for the burial of the Department of Energy transuranic wastes

    International Nuclear Information System (INIS)

    Healy, J.W.; Rodgers, J.C.

    1979-01-01

    Potential limits for the shallow earth burial of transuranic elements were examined by simplified models of the individual pathways to man. Pathways examined included transport to surface streams, transport to ground water, intrusion, and people living on the burial ground area after the wastes have surfaced. Limits are derived for each pathway and operational limits are suggested, based upon a dose to the organ receiving the maximum dose rate of 0.5 rem/y after 70 years of exposure for the maximally exposed individual.

  1. Optimal Portfolio Strategy under Rolling Economic Maximum Drawdown Constraints

    Directory of Open Access Journals (Sweden)

    Xiaojian Yu

    2014-01-01

    Full Text Available This paper deals with the problem of optimal portfolio strategy under the constraints of rolling economic maximum drawdown. A more practical strategy is developed by using rolling Sharpe ratio in computing the allocation proportion in contrast to existing models. Besides, another novel strategy named “REDP strategy” is further proposed, which replaces the rolling economic drawdown of the portfolio with the rolling economic drawdown of the risky asset. The simulation tests prove that REDP strategy can ensure the portfolio to satisfy the drawdown constraint and outperforms other strategies significantly. An empirical comparison research on the performances of different strategies is carried out by using the 23-year monthly data of SPTR, DJUBS, and 3-month T-bill. The investment cases of single risky asset and two risky assets are both studied in this paper. Empirical results indicate that the REDP strategy successfully controls the maximum drawdown within the given limit and performs best in both return and risk.
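
    A minimal Python sketch of the rolling economic drawdown quantity and a drawdown-capped allocation rule (the wealth path is synthetic and the proportional de-risking rule is a simplification, not the paper's closed-form strategy):

        import numpy as np

        rng = np.random.default_rng(3)
        wealth = np.cumprod(1.0 + rng.normal(5e-4, 0.01, size=2000))  # synthetic daily wealth path

        H, dmax = 252, 0.20   # one-year rolling window, 20% drawdown limit
        redd = np.array([1.0 - wealth[t] / wealth[max(0, t - H):t + 1].max()
                         for t in range(len(wealth))])        # rolling economic drawdown
        alloc = np.clip(1.0 - redd / dmax, 0.0, 1.0)          # de-risk as the limit is approached
        print(f"worst rolling drawdown: {redd.max():.1%}, mean risky allocation: {alloc.mean():.1%}")

    Because the peak is taken over a rolling window rather than the whole history, the constraint releases capital again after long recoveries, which is the practical appeal of the rolling variant.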

  2. Proposed derivation of skin contamination and skin decontamination limits

    International Nuclear Information System (INIS)

    Schieferdecker, H.; Koelzer, W.; Henrichs, K.

    1986-01-01

    From the primary dose limits for the skin, secondary dose limits were derived for skin contamination which can be used in practical radiation protection work. Analogous to the secondary dose limit for the maximum permissible body burden in the case of incorporation, dose limits for the 'maximum permissible skin burden' were calculated, with the help of dose factors, for application in the case of skin contamination. They can be derived from the skin dose limit values. For conditions in which the skin is exposed to temporary contamination, a limit of skin contamination was derived for immediately removable contamination and for one day of exposure. For non-removable contamination, a dose limit of annual skin contamination was defined, taking into account the renewal of the skin. An investigation level for skin contamination was assumed as a threshold above which certain measures must be taken; these include appropriate washing not more than three times, with the subsequent procedure determined by the level of residual contamination. The dose limits are indicated for selected radionuclides. (author)

  3. Calculating the Prior Probability Distribution for a Causal Network Using Maximum Entropy: Alternative Approaches

    Directory of Open Access Journals (Sweden)

    Michael J. Markham

    2011-07-01

    Full Text Available Some problems occurring in Expert Systems can be resolved by employing a causal (Bayesian) network, and methodologies exist for this purpose. These require data in a specific form and make assumptions about the independence relationships involved. Methodologies using Maximum Entropy (ME) are free from these conditions and have the potential to be used in a wider context, including systems consisting of given sets of linear and independence constraints, subject to consistency and convergence. ME can also be used to validate results from the causal network methodologies. Three ME methods for determining the prior probability distribution of causal network systems are considered. The first method is Sequential Maximum Entropy, in which the computation of a progression of local distributions leads to the overall distribution. This is followed by the development of the Method of Tribus. The development takes the form of an algorithm that includes the handling of explicit independence constraints. These fall into two groups: those relating the parents of vertices, and those deduced from triangulation of the remaining graph. The third method involves a variation in the part of that algorithm which handles independence constraints. Evidence is presented that this adaptation only requires the linear constraints and the parental independence constraints to emulate the second method in a substantial class of examples.
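
    The kind of computation all three methods perform, maximizing entropy subject to linear constraints, can be sketched generically in Python (the constraint values below are illustrative, not taken from the article):

        import numpy as np
        from scipy.optimize import minimize

        # Joint distribution over the 8 states of three binary variables,
        # with two illustrative linear (marginal) constraints.
        A = np.array([[1, 1, 1, 1, 0, 0, 0, 0],   # P(X1 = 0) = 0.3
                      [1, 1, 0, 0, 1, 1, 0, 0]])  # P(X2 = 0) = 0.6
        b = np.array([0.3, 0.6])

        def neg_entropy(p):
            p = np.clip(p, 1e-12, None)
            return float(np.sum(p * np.log(p)))

        res = minimize(neg_entropy, np.full(8, 1 / 8), method="SLSQP",
                       bounds=[(0.0, 1.0)] * 8,
                       constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0},
                                    {"type": "eq", "fun": lambda p: A @ p - b}])
        print(res.x)   # the maximum entropy distribution satisfying the constraints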

  4. Quantifying environmental limiting factors on tree cover using geospatial data.

    Science.gov (United States)

    Greenberg, Jonathan A; Santos, Maria J; Dobrowski, Solomon Z; Vanderbilt, Vern C; Ustin, Susan L

    2015-01-01

    Environmental limiting factors (ELFs) are the thresholds that determine the maximum or minimum biological response for a given suite of environmental conditions. We asked the following questions: 1) Can we detect ELFs on percent tree cover across the eastern slopes of the Lake Tahoe Basin, NV? 2) How are the ELFs distributed spatially? 3) To what extent are unmeasured environmental factors limiting tree cover? ELFs are difficult to quantify as they require significant sample sizes. We addressed this by using geospatial data over a relatively large spatial extent, where the wall-to-wall sampling ensures the inclusion of rare data points which define the minimum or maximum response to environmental factors. We tested mean temperature, minimum temperature, potential evapotranspiration (PET) and PET minus precipitation (PET-P) as potential limiting factors on percent tree cover. We found that the study area showed system-wide limitations on tree cover, and each of the factors showed evidence of being limiting on tree cover. However, only 1.2% of the total area appeared to be limited by the four (4) environmental factors, suggesting other unmeasured factors are limiting much of the tree cover in the study area. Where sites were near their theoretical maximum, non-forest sites (tree cover < 25%) were primarily limited by cold mean temperatures, open-canopy forest sites (tree cover between 25% and 60%) were primarily limited by evaporative demand, and closed-canopy forests were not limited by any particular environmental factor. The detection of ELFs is necessary in order to fully understand the width of limitations that species experience within their geographic range.

  5. Application of maximum values for radiation exposure and principles for the calculation of radiation doses

    International Nuclear Information System (INIS)

    2007-08-01

    The guide presents the definitions of equivalent dose and effective dose, the principles for calculating these doses, and instructions for applying their maximum values. The limits (Annual Limit on Intake and Derived Air Concentration) derived from dose limits are also presented for the purpose of monitoring exposure to internal radiation. The calculation of radiation doses caused to a patient from medical research and treatment involving exposure to ionizing radiation is beyond the scope of this ST Guide

  6. Density limit in ASDEX discharges with peaked density profiles

    International Nuclear Information System (INIS)

    Staebler, A.; Niedermeyer, H.; Loch, R.; Mertens, V.; Mueller, E.R.; Soeldner, F.X.; Wagner, F.

    1989-01-01

    Results concerning the density limit in OH and NI-heated ASDEX discharges with the usually observed broad density profiles have been reported earlier: In ohmic discharges with high q_a (q-cylindrical is used throughout this paper) the Murakami parameter (n_e R/B_t) is a good scaling parameter. At high densities, edge cooling is observed, causing the plasma to shrink until an m=2 instability terminates the discharge. When approaching q_a = 2 the density limit is no longer proportional to I_p; a minimum exists in n_e,max(q_a) at q_a ∼ 2.15. With NI-heating the density limit increases less than proportionally to the heating power; the behaviour during the pre-disruptive phase is rather similar to that of OH discharges. There are specific operating regimes on ASDEX leading to discharges with strongly peaked density profiles: the improved ohmic confinement regime, counter neutral injection, and multipellet injection. These regimes are characterized by enhanced energy and particle confinement. The operational limit in density for these discharges is, therefore, of great interest, bearing furthermore in mind that high central densities are favourable for achieving high fusion yields. In addition, further insight into the mechanisms of the density limit observed in tokamaks may be obtained by comparing plasmas with rather different density profiles at their maximum attainable densities. 7 refs., 2 figs

  7. Frontiere urbane e volti velati. Istanbul di Orhan Pamuk e Maximum City di Suketu Mehta

    Directory of Open Access Journals (Sweden)

    Elvira Godono

    2011-05-01

    Full Text Available Objectives and scope: Analysing the postmodern city, in the third millennium, means studying the most important space of contemporary imagination and creation, focusing on literary works characterized by a continuous oscillation across the border between novel and essay, with the result of refusing the canons of both genres. Adding to those elements the autobiographic matrix, which is fundamental in Suketu Mehta and Orhan Pamuk, the description of the city becomes the unique centre of the narrative space, aiming to explore not only the various urban borders, but also the limits which divide different artistic languages and forms. Methods and approaches: If Mehta inscribes many intertextual citations in Maximum City (2004) and Pamuk narrates Istanbul not only with words but also through many old family photos (Istanbul, 2003), they both create a plurivocal narration, functional to explaining secular historical contradictions and dialectical elements, such as linguistic, cultural, religious and ethnic borders. These topics need an approach that, going beyond cultural and postcolonial studies, tries to unite the theory of genres and thematic criticism with anthropology, mythology and ethnic studies. Results: If Pamuk and Mehta incessantly move from one space to another, they become lost as their readers do, seeking to follow many different lives: women hidden by veils or guerrillas protected by helmets, thousands of bodies signed with the concrete scars of invisible borders. Those limits have been deeply explored by focusing, in particular, on the topic of memory, the only parameter that, united with writing, appears to destroy the multiple borders of urban space.

  8. An implementation of the maximum-caliber principle by replica-averaged time-resolved restrained simulations.

    Science.gov (United States)

    Capelli, Riccardo; Tiana, Guido; Camilloni, Carlo

    2018-05-14

    Inferential methods can be used to integrate experimental information and molecular simulations. The maximum entropy principle provides a framework for using equilibrium experimental data, and it has been shown that replica-averaged simulations, restrained using a static potential, are a practical and powerful implementation of such a principle. Here we show that replica-averaged simulations restrained using a time-dependent potential are equivalent to the principle of maximum caliber, the dynamic version of the principle of maximum entropy, and thus may allow us to integrate time-resolved data in molecular dynamics simulations. We provide an analytical proof of the equivalence as well as a computational validation making use of simple models and synthetic data. Some limitations and possible solutions are also discussed.
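
    The time-dependent replica-averaged restraint can be caricatured in a few lines of Python (toy one-dimensional "replicas" and observable; in a real application the restraint force would be added to each replica inside an MD engine):

        import numpy as np

        rng = np.random.default_rng(4)
        n_rep, n_steps, k, dt = 8, 1000, 50.0, 1e-2
        x = rng.normal(size=n_rep)                            # replica coordinates
        target = np.sin(np.linspace(0, 2 * np.pi, n_steps))   # time-resolved data to match

        for step in range(n_steps):
            avg = x.mean()                     # replica-averaged observable
            # Harmonic restraint on the average, E = (k/2)(avg - target)^2,
            # gives each replica the force -k (avg - target) / n_rep.
            force = -k * (avg - target[step]) / n_rep
            x += dt * force + 0.1 * np.sqrt(2.0 * dt) * rng.normal(size=n_rep)  # Brownian step
        print(f"final replica average {x.mean():+.3f} vs final target {target[-1]:+.3f}")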

  9. Assessment of new limit values for wind turbines. Influence of different limit values on exposure, annoyance and potential development locations

    International Nuclear Information System (INIS)

    Verheijen, E.; Jabben, J.; Schreurs, E.; Koeman, T.; Van Poll, R.; Du Pon, B.

    2009-01-01

    Approximately 1500 people living in the close vicinity of wind turbines in the Netherlands run the risk of suffering severe annoyance effects due to noise exposure. New guidelines that limit new wind turbines to a maximum noise level of 45 dB can minimize any further increase in noise health effects. Noise levels above 45 dB will likely result in a further increase in annoyance effects and sleep disturbances. Compared to other noise sources, noise emitted from a wind turbine causes annoyance at relatively low noise levels. The amount of available land for the placement of new wind turbines depends on the limit value that is chosen. A limit value of 40 dB would enable an additional 7000 megawatts of renewable energy to be obtained from new turbines. A limit value of 45 dB, however, would enable wind turbines to be constructed on additional land, resulting in the production of approximately 25,000 megawatts. A new noise regulation for wind turbines is currently being prepared that sets limits on the noise levels experienced by residents of nearby dwellings. The regulation aims at limiting the effects of noise annoyance caused by new wind turbines within the framework of policy targets for renewable energy. This new regulation will be in line with existing ones for road and railway traffic in setting both a preferable and a maximum allowable value for the noise level (Lden) in nearby residences. Below the preferred value, there will be no noise restriction; above the maximum value, local authorities will not be allowed to issue building permits. For noise levels that fall in between, the decision for/against construction will be based on the results of a public inquiry process involving the major stakeholders. The National Institute for Public Health and the Environment (RIVM) has investigated the consequences of different choices for the preferred and maximum noise limits. Aspects of annoyance and health effects, amount of land available for new turbines within the

  10. Maximum Entropy Estimation of Transition Probabilities of Reversible Markov Chains

    Directory of Open Access Journals (Sweden)

    Erik Van der Straeten

    2009-11-01

    Full Text Available In this paper, we develop a general theory for the estimation of the transition probabilities of reversible Markov chains using the maximum entropy principle. A broad range of physical models can be studied within this approach. We use one-dimensional classical spin systems to illustrate the theoretical ideas. The examples studied in this paper are: the Ising model, the Potts model and the Blume-Emery-Griffiths model.

  11. Maximum concentrations at work and maximum biologically tolerable concentration for working materials 1991

    International Nuclear Information System (INIS)

    1991-01-01

    The meaning of the term 'maximum concentration at work' in regard to various pollutants is discussed. Specifically, a number of dusts and smokes are dealt with. The evaluation criteria for maximum biologically tolerable concentrations for working materials are indicated. The working materials in question are carcinogenic substances or substances liable to cause allergies or to mutate the genome. (VT) [de]

  12. Is the maximum permissible radiation burden for the population indeed permissible

    International Nuclear Information System (INIS)

    Renesse, R.L. van.

    1975-01-01

    It is argued that legislation based on the ICRP doses will, under economic influences, lead to a situation where the population is exposed to radiation doses near the maximum permissible dose. Due to cumulative radiation effects, this will introduce unacceptable health risks. Therefore, it will be necessary to lower the legal dose limit of 170 millirem per year per person by a factor of 10 to 20

  13. 40 CFR 1042.140 - Maximum engine power, displacement, power density, and maximum in-use engine speed.

    Science.gov (United States)

    2010-07-01

    ... cylinders having an internal diameter of 13.0 cm and a 15.5 cm stroke length, the rounded displacement would... Maximum engine power, displacement, power density, and maximum in-use engine speed. This section describes...

  14. Potential role of motion for enhancing maximum output energy of triboelectric nanogenerator

    Science.gov (United States)

    Byun, Kyung-Eun; Lee, Min-Hyun; Cho, Yeonchoo; Nam, Seung-Geol; Shin, Hyeon-Jin; Park, Seongjun

    2017-07-01

    Although the triboelectric nanogenerator (TENG) has been explored as one of the possible candidates for the auxiliary power source of portable and wearable devices, the output energy of a TENG is still insufficient to charge the devices with daily motion. Moreover, the fundamental aspects of the maximum possible energy of a TENG related with human motion are not understood systematically. Here, we confirmed the possibility of charging commercialized portable and wearable devices such as smart phones and smart watches by utilizing the mechanical energy generated by human motion. We confirmed by theoretical extraction that the maximum possible energy is related with specific form factors of a TENG. Furthermore, we experimentally demonstrated the effect of human motion in terms of kinetic energy and impulse, using varying velocity and elasticity, and clarified how to improve the maximum possible energy of a TENG. This study gives insight into the design of a TENG to obtain a large amount of energy in a limited space.

  15. Potential role of motion for enhancing maximum output energy of triboelectric nanogenerator

    Directory of Open Access Journals (Sweden)

    Kyung-Eun Byun

    2017-07-01

    Full Text Available Although the triboelectric nanogenerator (TENG) has been explored as one of the possible candidates for the auxiliary power source of portable and wearable devices, the output energy of a TENG is still insufficient to charge the devices with daily motion. Moreover, the fundamental aspects of the maximum possible energy of a TENG related with human motion are not understood systematically. Here, we confirmed the possibility of charging commercialized portable and wearable devices such as smart phones and smart watches by utilizing the mechanical energy generated by human motion. We confirmed by theoretical extraction that the maximum possible energy is related with specific form factors of a TENG. Furthermore, we experimentally demonstrated the effect of human motion in terms of kinetic energy and impulse, using varying velocity and elasticity, and clarified how to improve the maximum possible energy of a TENG. This study gives insight into the design of a TENG to obtain a large amount of energy in a limited space.

  16. Credal Networks under Maximum Entropy

    OpenAIRE

    Lukasiewicz, Thomas

    2013-01-01

    We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...

  17. Revisiting Pocos de Caldas. Application of the co-precipitation approach to establish realistic solubility limits for performance assessment

    International Nuclear Information System (INIS)

    Bruno, J.; Duro, L.; Jordana, S.; Cera, E.

    1996-02-01

    Solubility limits constitute a critical parameter for the determination of the mobility of radionuclides in the near field and the geosphere, and consequently for the performance assessment of nuclear waste repositories. Mounting evidence from natural system studies indicates that trace elements, and consequently radionuclides, are associated with the dynamic cycling of major geochemical components. We have recently developed a thermodynamic approach to take into consideration the co-precipitation and co-dissolution processes that mainly control this linkage. The approach has been tested in various natural system studies with encouraging results. The Pocos de Caldas natural analogue was one of the sites where a full test of our predictive geochemical modelling capabilities was done during the analogue project. We have revisited the Pocos de Caldas data and expanded the trace element solubility calculations by considering the documented trace metal/major ion interactions. This has been done by using the co-precipitation/co-dissolution approach. The outcome is as follows: A satisfactory modelling of the behaviour of U, Zn and REEs is achieved by assuming co-precipitation with ferrihydrite. Strontium concentrations are apparently controlled by its co-dissolution from Sr-rich fluorites. From the performance assessment point of view, the present work indicates that calculated solubility limits using the co-precipitation approach are in close agreement with the actual trace element concentrations. Furthermore, the calculated radionuclide concentrations are 2-4 orders of magnitude lower than conservative solubility limits calculated by assuming equilibrium with individual trace element phases. 34 refs, 18 figs, 13 tabs

  18. Revisiting Pocos de Caldas. Application of the co-precipitation approach to establish realistic solubility limits for performance assessment

    Energy Technology Data Exchange (ETDEWEB)

    Bruno, J.; Duro, L.; Jordana, S.; Cera, E. [QuantiSci, Barcelona (Spain)

    1996-02-01

    Solubility limits constitute a critical parameter for the determination of the mobility of radionuclides in the near field and the geosphere, and consequently for the performance assessment of nuclear waste repositories. Mounting evidence from natural system studies indicates that trace elements, and consequently radionuclides, are associated with the dynamic cycling of major geochemical components. We have recently developed a thermodynamic approach to take into consideration the co-precipitation and co-dissolution processes that mainly control this linkage. The approach has been tested in various natural system studies with encouraging results. The Pocos de Caldas natural analogue was one of the sites where a full test of our predictive geochemical modelling capabilities was done during the analogue project. We have revisited the Pocos de Caldas data and expanded the trace element solubility calculations by considering the documented trace metal/major ion interactions. This has been done by using the co-precipitation/co-dissolution approach. The outcome is as follows: A satisfactory modelling of the behaviour of U, Zn and REEs is achieved by assuming co-precipitation with ferrihydrite. Strontium concentrations are apparently controlled by its co-dissolution from Sr-rich fluorites. From the performance assessment point of view, the present work indicates that calculated solubility limits using the co-precipitation approach are in close agreement with the actual trace element concentrations. Furthermore, the calculated radionuclide concentrations are 2-4 orders of magnitude lower than conservative solubility limits calculated by assuming equilibrium with individual trace element phases. 34 refs, 18 figs, 13 tabs.

  19. Power limit and quality limit of natural circulation reactor

    International Nuclear Information System (INIS)

    Zhao Guochang; Ma Changwen

    1997-01-01

    The circulation characteristics of natural circulation reactors in the boiling regime are investigated. It is found that the circulation mass flow rate and the power each reach a peak value at a particular mass quality. Therefore, the natural circulation reactor has a power limit under a given technological condition; it cannot be increased steadily by continually increasing the mass quality. Correspondingly, the mass quality of a natural circulation reactor has a reasonable limit. The relations between the maximum power and the reactor parameters, such as the resistance coefficient, the working pressure and so on, are analyzed. It is pointed out that the power limit of a natural circulation reactor is about 1000 MW under present technological conditions. Taking the above result and low-quality stability experimental results into account, the authors recommend that the reasonable mass quality of a natural circulation reactor working in the boiling regime is from 2% to 3% under the investigated working pressure

  20. Towards a frequency-dependent discrete maximum principle for the implicit Monte Carlo equations

    Energy Technology Data Exchange (ETDEWEB)

    Wollaber, Allan B [Los Alamos National Laboratory; Larsen, Edward W [Los Alamos National Laboratory; Densmore, Jeffery D [Los Alamos National Laboratory

    2010-12-15

    It has long been known that temperature solutions of the Implicit Monte Carlo (IMC) equations can exceed the external boundary temperatures, a so-called violation of the 'maximum principle.' Previous attempts at prescribing a maximum value of the time-step size Δt that is sufficient to eliminate these violations have recommended a Δt that is typically too small to be used in practice and that appeared to be much too conservative when compared to numerical solutions of the IMC equations for practical problems. In this paper, we derive a new estimator for the maximum time-step size that includes the spatial-grid size Δx. This explicitly demonstrates that the effect of coarsening Δx is to reduce the limitation on Δt, which helps explain the overly conservative nature of the earlier, grid-independent results. We demonstrate that our new time-step restriction is a much more accurate means of predicting violations of the maximum principle. We discuss how the implications of the new, grid-dependent time-step restriction can impact IMC solution algorithms.

  1. Towards a frequency-dependent discrete maximum principle for the implicit Monte Carlo equations

    International Nuclear Information System (INIS)

    Wollaber, Allan B.; Larsen, Edward W.; Densmore, Jeffery D.

    2011-01-01

    It has long been known that temperature solutions of the Implicit Monte Carlo (IMC) equations can exceed the external boundary temperatures, a so-called violation of the 'maximum principle'. Previous attempts at prescribing a maximum value of the time-step size Δt that is sufficient to eliminate these violations have recommended a Δt that is typically too small to be used in practice and that appeared to be much too conservative when compared to numerical solutions of the IMC equations for practical problems. In this paper, we derive a new estimator for the maximum time-step size that includes the spatial-grid size Δx. This explicitly demonstrates that the effect of coarsening Δx is to reduce the limitation on Δt, which helps explain the overly conservative nature of the earlier, grid-independent results. We demonstrate that our new time-step restriction is a much more accurate means of predicting violations of the maximum principle. We discuss how the implications of the new, grid-dependent time-step restriction can impact IMC solution algorithms. (author)

  2. Response approach to the squeezed-limit bispectrum: application to the correlation of quasar and Lyman-α forest power spectrum

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Chi-Ting [C.N. Yang Institute for Theoretical Physics, Stony Brook University, Stony Brook, NY 11794 (United States); Cieplak, Agnieszka M.; Slosar, Anže [Brookhaven National Laboratory, Blgd 510, Upton, NY 11375 (United States); Schmidt, Fabian, E-mail: chi-ting.chiang@stonybrook.edu, E-mail: acieplak@bnl.gov, E-mail: fabians@mpa-garching.mpg.de, E-mail: anze@bnl.gov [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany)

    2017-06-01

    The squeezed-limit bispectrum, which is generated by nonlinear gravitational evolution as well as inflationary physics, measures the correlation of three wavenumbers in the configuration where one wavenumber is much smaller than the other two. Since the squeezed-limit bispectrum encodes the impact of a large-scale fluctuation on the small-scale power spectrum, it can be understood as how the small-scale power spectrum 'responds' to the large-scale fluctuation. Viewed in this way, the squeezed-limit bispectrum can be calculated using the response approach even in cases which do not submit to perturbative treatment. To illustrate this point, we apply this approach to the cross-correlation between the large-scale quasar density field and the small-scale Lyman-α forest flux power spectrum. In particular, using separate universe simulations which implement changes in the large-scale density, velocity gradient, and primordial power spectrum amplitude, we measure how the Lyman-α forest flux power spectrum responds to the local, long-wavelength quasar overdensity, and equivalently their squeezed-limit bispectrum. We perform a Fisher forecast for the ability of future experiments to constrain local non-Gaussianity using the bispectrum of quasars and the Lyman-α forest. Combining with quasar and Lyman-α forest power spectra to constrain the biases, we find that for DESI the expected 1σ constraint is err[f_NL] ∼ 60. The ability of DESI to measure f_NL through this channel is limited primarily by the aliasing and instrumental noise of the Lyman-α forest flux power spectrum. The combination of the response approach and separate universe simulations provides a novel technique to explore the constraints from the squeezed-limit bispectrum between different observables.
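
    Schematically, the response relation described above can be written as follows (a standard squeezed-limit expression, not a formula quoted from the paper):

        % response form of the squeezed-limit bispectrum (schematic)
        \lim_{k_L \to 0} B(k_L, k_S, |\mathbf{k}_L + \mathbf{k}_S|)
          = \frac{\partial \ln P(k_S)}{\partial \delta_L}\, P(k_S)\, P(k_L)

    Here P(k_L) is the power spectrum of the long-wavelength mode δ_L, and the logarithmic response of the small-scale power spectrum is precisely what the separate universe simulations measure.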

  3. Stochastic modelling of a single ion channel: an alternating renewal approach with application to limited time resolution.

    Science.gov (United States)

    Milne, R K; Yeo, G F; Edeson, R O; Madsen, B W

    1988-04-22

    Stochastic models of ion channels have been based largely on Markov theory, where individual states and transition rates must be specified, and sojourn-time densities for each state are constrained to be exponential. This study presents an approach based on random-sum methods and alternating-renewal theory, allowing individual states to be grouped into classes provided the successive sojourn times in a given class are independent and identically distributed. Under these conditions Markov models form a special case. The utility of the approach is illustrated by considering the effects of limited time resolution (modelled by using a discrete detection limit, ξ) on the properties of observable events, with emphasis on the observed open-time (ξ-open-time). The cumulants and Laplace transform for a ξ-open-time are derived for a range of Markov and non-Markov models; several useful approximations to the ξ-open-time density function are presented. Numerical studies show that the effects of limited time resolution can be extreme, and also highlight the relative importance of the various model parameters. The theory could form a basis for future inferential studies in which parameter estimation takes account of limited time resolution in single channel records. Appendixes include relevant results concerning random sums and a discussion of the role of exponential distributions in Markov models.

  4. 7 CFR 3560.612 - Loan limits.

    Science.gov (United States)

    2010-01-01

    ... DIRECT MULTI-FAMILY HOUSING LOANS AND GRANTS On-Farm Labor Housing § 3560.612 Loan limits. The maximum loan amount will be 100 percent of the allowable total development costs of on-farm labor housing and...

  5. Comparison of fuzzy logic and neural network in maximum power point tracker for PV systems

    Energy Technology Data Exchange (ETDEWEB)

    Ben Salah, Chokri; Ouali, Mohamed [Research Unit on Intelligent Control, Optimization, Design and Optimization of Complex Systems (ICOS), Department of Electrical Engineering, National School of Engineers of Sfax, BP. W, 3038, Sfax (Tunisia)

    2011-01-15

    This paper proposes two methods of maximum power point tracking, using fuzzy logic and neural network controllers, for photovoltaic systems. The two maximum power point tracking controllers receive solar radiation and photovoltaic cell temperature as inputs, and estimate the optimum duty cycle corresponding to maximum power as output. The approach is validated on a 100 Wp PVP (two parallel SM50-H panels) connected to a 24 V dc load. The new method gives good maximum power operation of any photovoltaic array under different conditions such as changing solar radiation and PV cell temperature. From the simulation and experimental results, the fuzzy logic controller can deliver more power than the neural network controller and than other methods in the literature. (author)

  6. A new maximum power point method based on a sliding mode approach for solar energy harvesting

    International Nuclear Information System (INIS)

    Farhat, Maissa; Barambones, Oscar; Sbita, Lassaad

    2017-01-01

    Highlights: • Create a simple, easy-to-implement and accurate V_MPP estimator. • Stability analysis of the proposed system based on Lyapunov theory. • A comparative study versus P&O highlights the good performance of the SMC. • Construct a new PS-SMC algorithm to include the partial shadow case. • Experimental validation of the SMC MPP tracker. - Abstract: This paper presents a photovoltaic (PV) system with a maximum power point tracking (MPPT) facility. The goal of this work is to maximize power extraction from the photovoltaic generator (PVG). This goal is achieved using a sliding mode controller (SMC) that drives a boost converter connected between the PVG and the load. The system is modeled and tested under the MATLAB/SIMULINK environment. In simulation, the sliding mode controller offers fast and accurate convergence to the maximum power operating point that outperforms the well-known perturbation and observation method (P&O). The sliding mode controller performance is evaluated during steady state, against load variations and panel partial shadow (PS) disturbances. To confirm the above conclusion, a practical implementation of the maximum power point tracker based on a sliding mode controller is performed on a dSPACE real-time digital control platform. The data acquisition and the control system are built around the dSPACE 1104 controller board and its RTI environment. The experimental results demonstrate the validity of the proposed control scheme on a stand-alone real photovoltaic system.
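
    The core sliding-mode idea, driving the system onto the surface S = dP/dV = 0 with a sign-based switching law, in a minimal Python sketch (toy P-V curve, assumed converter relation and gains, none taken from the article):

        import numpy as np

        def pv_power(v):
            return -0.4 * (v - 17.0) ** 2 + 90.0      # toy P-V curve, MPP at 17 V

        duty = 0.65                                   # boost-converter duty cycle
        v_prev, p_prev = 12.0, pv_power(12.0)
        for _ in range(500):
            v = 34.0 * (1.0 - duty)                   # assumed converter voltage relation
            p = pv_power(v)
            S = (p - p_prev) / (v - v_prev + 1e-9)    # sliding surface S = dP/dV
            duty -= 0.002 * np.sign(S)                # switching law: chatter around S = 0
            v_prev, p_prev = v, p
        print(f"operating voltage {v:.2f} V (MPP at 17.00 V)")

    The residual oscillation around the maximum power point is the characteristic chattering of sign-based sliding mode control; practical designs smooth it with a boundary layer or saturation function.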

  7. Performance analysis of air-standard Diesel cycle using an alternative irreversible heat transfer approach

    International Nuclear Information System (INIS)

    Al-Hinti, I.; Akash, B.; Abu-Nada, E.; Al-Sarkhi, A.

    2008-01-01

    This study presents an investigation of the air-standard Diesel cycle under irreversible heat transfer conditions. The effects of various engine parameters are presented. An alternative approach is used to evaluate the net power output and cycle thermal efficiency from more realistic parameters such as air-fuel ratio, fuel mass flow rate, intake temperature, engine design parameters, etc. It is shown that for a given fuel flow rate, thermal efficiency and maximum power output increase with decreasing air-fuel ratio. Also, for a given air-fuel ratio, the maximum power output increases with increasing fuel rate; however, the effect on the thermal efficiency is limited

  8. Maximum Quantum Entropy Method

    OpenAIRE

    Sim, Jae-Hoon; Han, Myung Joon

    2018-01-01

    Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...

  9. Intelligent approach to maximum power point tracking control strategy for variable-speed wind turbine generation system

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Whei-Min; Hong, Chih-Ming [Department of Electrical Engineering, National Sun Yat-Sen University, Kaohsiung 80424 (China)

    2010-06-15

    To achieve maximum power point tracking (MPPT) for wind power generation systems, the rotational speed of wind turbines should be adjusted in real time according to wind speed. In this paper, a Wilcoxon radial basis function network (WRBFN) with hill-climb searching (HCS) MPPT strategy is proposed for a permanent magnet synchronous generator (PMSG) with a variable-speed wind turbine. A high-performance online-training WRBFN using a back-propagation learning algorithm with a modified particle swarm optimization (MPSO) regulating controller is designed for the PMSG. The MPSO is adopted in this study to adapt the learning rates in the back-propagation process of the WRBFN to improve the learning capability. The MPPT strategy locates the system operation points along the maximum power curves based on the dc-link voltage of the inverter, thus avoiding generator speed detection. (author)

  10. Installation of the MAXIMUM microscope at the ALS

    International Nuclear Information System (INIS)

    Ng, W.; Perera, R.C.C.; Underwood, J.H.; Singh, S.; Solak, H.; Cerrina, F.

    1995-10-01

    The MAXIMUM scanning x-ray microscope, developed at the Synchrotron Radiation Center (SRC) at the University of Wisconsin, Madison, was implemented on the Advanced Light Source in August of 1995. The microscope's initial operation at SRC successfully demonstrated the use of a multilayer-coated Schwarzschild objective for focusing 130 eV x-rays to a spot size of better than 0.1 micron with an electron energy resolution of 250 meV. The performance of the microscope was severely limited because of the relatively low brightness of SRC, which limits the available flux at the focus of the microscope. The high brightness of the ALS is expected to increase the usable flux at the sample by a factor of 1,000. The authors will report on the installation of the microscope on bending magnet beamline 6.3.2 at the ALS and the initial measurement of optical performance on the new source, and preliminary experiments with the surface chemistry of HF-etched Si will be described

  11. Fungal microflora of Panicum maximum and Stylosanthes spp. commercial seed / Microflora fúngica de sementes comerciais de Panicum maximum e Stylosanthes spp.

    Directory of Open Access Journals (Sweden)

    Larissa Rodrigues Fabris

    2010-09-01

    Full Text Available The sanitary quality of seeds from 26 commercial lots of tropical forages, produced in different regions (2004-05 and 2005-06), was analyzed. The lots were composed of seeds of Panicum maximum cultivars ('Massai', 'Mombaça' and 'Tanzânia') and of stylo ('Estilosantes Campo Grande' - ECG). Additionally, seeds of two lots of P. maximum destined for exportation were analyzed. The blotter test was used, at 20°C under alternating light and darkness in a 12 h photoperiod, for seven days. The Aspergillus, Cladosporium and Rhizopus genera were the secondary or saprophytic fungi (FSS) found with greatest frequency in the P. maximum lots. In general, there was a low incidence of these fungi in the seeds. In relation to pathogenic fungi (FP), a high frequency of lots contaminated by the Bipolaris, Curvularia, Fusarium and Phoma genera was detected. Generally, there was a high incidence of FP in P. maximum seeds. The occurrence of Phoma sp. was high: 81% of the lots showed an incidence above 50%. In 'ECG' seeds, both FSS (Aspergillus, Cladosporium and Penicillium genera) and FP (Bipolaris, Curvularia, Fusarium and Phoma genera) were detected, usually at low incidence. FSS and FP were associated with the P. maximum seeds destined for exportation, with significant incidence in some cases. The results indicated that there was a limiting factor in all producer regions regarding the sanitary quality of the seeds.

  12. Safety and design limits

    International Nuclear Information System (INIS)

    Shishkov, L. K.; Gorbaev, V. A.; Tsyganov, S. V.

    2007-01-01

    The paper touches upon the issues of ensuring NPP safety at the stage of fuel load design and operation by applying special limitations to a series of parameters, that is, design limits. Two approaches are compared: the one used by Western specialists for PWR reactors and the Russian approach employed for WWER reactors. The closeness of the approaches is established, and differences, which are mainly matters of terminology, are noted (Authors)

  13. Quantifying environmental limiting factors on tree cover using geospatial data.

    Directory of Open Access Journals (Sweden)

    Jonathan A Greenberg

    Full Text Available Environmental limiting factors (ELFs) are the thresholds that determine the maximum or minimum biological response for a given suite of environmental conditions. We asked the following questions: (1) Can we detect ELFs on percent tree cover across the eastern slopes of the Lake Tahoe Basin, NV? (2) How are the ELFs distributed spatially? (3) To what extent are unmeasured environmental factors limiting tree cover? ELFs are difficult to quantify as they require significant sample sizes. We addressed this by using geospatial data over a relatively large spatial extent, where the wall-to-wall sampling ensures the inclusion of rare data points which define the minimum or maximum response to environmental factors. We tested mean temperature, minimum temperature, potential evapotranspiration (PET) and PET minus precipitation (PET-P) as potential limiting factors on percent tree cover. We found that the study area showed system-wide limitations on tree cover, and each of the factors showed evidence of being limiting on tree cover. However, only 1.2% of the total area appeared to be limited by the four (4) environmental factors, suggesting other unmeasured factors are limiting much of the tree cover in the study area. Where sites were near their theoretical maximum, non-forest sites (tree cover < 25%) were primarily limited by cold mean temperatures, open-canopy forest sites (tree cover between 25% and 60%) were primarily limited by evaporative demand, and closed-canopy forests were not limited by any particular environmental factor. The detection of ELFs is necessary in order to fully understand the width of limitations that species experience within their geographic range.

  14. Weighted Maximum-Clique Transversal Sets of Graphs

    OpenAIRE

    Chuan-Min Lee

    2011-01-01

    A maximum-clique transversal set of a graph G is a subset of vertices intersecting all maximum cliques of G. The maximum-clique transversal set problem is to find a maximum-clique transversal set of G of minimum cardinality. Motivated by the placement of transmitters for cellular telephones, Chang, Kloks, and Lee introduced the concept of maximum-clique transversal sets on graphs in 2001. In this paper, we study the weighted version of the maximum-clique transversal set problem for split graphs...
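
    For orientation, the problem is easy to state in code. The sketch below is a brute-force greedy heuristic on arbitrary graphs (not the authors' algorithm for split graphs, and it ignores vertex weights); the example graph is made up.

    ```python
    # Enumerate maximal cliques, keep the maximum-size ones, then greedily
    # pick vertices hitting the most not-yet-covered maximum cliques.
    import networkx as nx

    def greedy_max_clique_transversal(G):
        cliques = list(nx.find_cliques(G))                    # maximal cliques
        k = max(len(c) for c in cliques)
        uncovered = [set(c) for c in cliques if len(c) == k]  # maximum cliques
        transversal = set()
        while uncovered:
            v = max(G.nodes, key=lambda u: sum(u in c for c in uncovered))
            transversal.add(v)
            uncovered = [c for c in uncovered if v not in c]
        return transversal

    G = nx.Graph([(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (5, 3)])
    print(greedy_max_clique_transversal(G))  # {3}: vertex 3 hits both triangles
    ```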

  15. A risk modelling approach for setting microbiological limits using enterococci as indicator for growth potential of Salmonella in pork

    DEFF Research Database (Denmark)

    Bollerslev, Anne Mette; Nauta, Maarten; Hansen, Tina Beck

    2017-01-01

    Microbiological limits are widely used in food processing as an aid to reduce the exposure to hazardous microorganisms for the consumers. However, in pork, the prevalence and concentrations of Salmonella are generally low and microbiological limits are not considered an efficient tool to support [...] for this purpose includes the dose-response relationship for Salmonella and a reduction factor to account for preparation of the fresh pork. By use of the risk model, it was estimated that the majority of salmonellosis cases, caused by the consumption of pork in Denmark, is caused by the small fraction of pork [...] products that has enterococci concentrations above 5 log CFU/g. This illustrates that our approach can be used to evaluate the potential effect of different microbiological limits and therefore, the perspective of this novel approach is that it can be used for definition of a risk-based microbiological...

  16. Method for the determination of technical specifications limiting temperature in EBR-II operation

    International Nuclear Information System (INIS)

    Chang, L.K.; Hill, D.J.; Ku, J.Y.

    1994-01-01

    The methodology and analysis procedure used to qualify the Mark-V and Mark-VA fuels for the Experimental Breeder Reactor II are summarized in this paper. Fuel performance data and design safety criteria are essential for the thermal-hydraulic analysis and safety evaluations. Normal and off-normal operation duty cycles and transient classifications are required for the safety assessment of the fuels. The temperature limits of subassemblies were first determined by a steady-state thermal-structural and fuel damage analysis, in which a trial-and-error approach was used to predict the maximum allowable fuel pin temperature that satisfies the design criteria for steady-state normal operation. These steady-state temperature limits were used as the basis of the off-normal transient analysis to assess the safety performance of the fuel for anticipated, unlikely and extremely unlikely events. If the design criteria for the off-normal events are not satisfied, the subassembly temperature limit is reduced, and this iterative procedure is repeated until all design criteria are met.

  17. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descend method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
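
    The regularizer itself is easy to exhibit. The fragment below is an illustration only, not the authors' training algorithm; the synthetic data, fixed classifier weights and quantile binning are arbitrary assumptions. It estimates the mutual information between discretized classification responses and true labels, the quantity the method maximizes during learning.

    ```python
    # Estimate MI(response; label) for a fixed linear classifier by
    # discretizing the responses and using a plug-in entropy estimate.
    import numpy as np
    from sklearn.metrics import mutual_info_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)

    w = np.array([1.0, 0.0])                 # a fixed linear classifier
    response = X @ w
    bins = np.digitize(response, np.quantile(response, [0.25, 0.5, 0.75]))
    print(mutual_info_score(y, bins))        # higher = responses more informative
    ```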

  19. Maximum power demand cost

    International Nuclear Information System (INIS)

    Biondi, L.

    1998-01-01

    The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises with the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.

  20. Superfast maximum-likelihood reconstruction for quantum tomography

    Science.gov (United States)

    Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon

    2017-06-01

    Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n -qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
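
    The workhorse of such projected-gradient schemes is the projection onto the set of density matrices. A plausible sketch of that step, assuming the standard eigenvalue-simplex projection rather than code from the paper, is shown below.

    ```python
    # Project a Hermitian matrix onto {rho : rho >= 0, Tr(rho) = 1} by
    # projecting its eigenvalue vector onto the probability simplex.
    import numpy as np

    def project_simplex(v):
        """Euclidean projection of v onto {x >= 0, sum(x) = 1}."""
        u = np.sort(v)[::-1]
        css = np.cumsum(u) - 1.0
        rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
        theta = css[rho] / (rho + 1.0)
        return np.maximum(v - theta, 0.0)

    def project_density(H):
        """Project a Hermitian matrix onto the density-matrix set."""
        w, V = np.linalg.eigh(H)
        w = project_simplex(w)
        return (V * w) @ V.conj().T

    A = np.array([[1.2, 0.3], [0.3, -0.4]])
    rho = project_density(A)
    print(np.trace(rho).real, np.linalg.eigvalsh(rho))  # trace 1, eigenvalues >= 0
    ```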

  1. The two-box model of climate: limitations and applications to planetary habitability and maximum entropy production studies.

    Science.gov (United States)

    Lorenz, Ralph D

    2010-05-12

    The 'two-box model' of planetary climate is discussed. This model has been used to demonstrate consistency of the equator-pole temperature gradient on Earth, Mars and Titan with what would be predicted from a principle of maximum entropy production (MEP). While useful for exposition and for generating first-order estimates of planetary heat transports, it has too low a resolution to investigate climate systems with strong feedbacks. A two-box MEP model agrees well with the observed day:night temperature contrast observed on the extrasolar planet HD 189733b.
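
    The two-box MEP calculation is simple enough to reproduce numerically. The sketch below rests on textbook assumptions (fixed absorbed shortwave fluxes, blackbody emission); the flux values are arbitrary, not those of the paper. It scans the inter-box heat transport F and selects the value maximizing entropy production.

    ```python
    # Two-box MEP toy: box 1 (equator) exports heat flux F to box 2 (pole);
    # MEP picks the F that maximizes sigma(F) = F * (1/T2 - 1/T1).
    import numpy as np

    SB = 5.67e-8                      # Stefan-Boltzmann constant, W m^-2 K^-4
    S1, S2 = 300.0, 150.0             # absorbed flux, equator and pole (W m^-2)

    def temperatures(F):
        T1 = ((S1 - F) / SB) ** 0.25  # equator box cools by exporting F
        T2 = ((S2 + F) / SB) ** 0.25  # pole box warms by importing F
        return T1, T2

    F_grid = np.linspace(0.0, 74.0, 2001)
    sigma = [F * (1 / temperatures(F)[1] - 1 / temperatures(F)[0]) for F in F_grid]
    F_mep = F_grid[int(np.argmax(sigma))]
    T1, T2 = temperatures(F_mep)
    print(F_mep, T1 - T2)             # MEP heat transport and implied gradient
    ```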

  2. Analysis of enamel development using murine model systems: approaches and limitations.

    Directory of Open Access Journals (Sweden)

    Megan K Pugach

    2014-09-01

    Full Text Available A primary goal of enamel research is to understand and potentially treat or prevent enamel defects related to amelogenesis imperfecta (AI). Rodents are ideal models to assist our understanding of how enamel is formed because they are easily genetically modified, and their continuously erupting incisors display all stages of enamel development and mineralization. While numerous methods have been developed to generate and analyze genetically modified rodent enamel, it is crucial to understand the limitations and challenges associated with these methods in order to draw appropriate conclusions that can be applied translationally, to AI patient care. We have highlighted methods involved in generating and analyzing rodent enamel and potential approaches to overcoming limitations of these methods: (1) generating transgenic, knockout and knockin mouse models, and (2) analyzing rodent enamel mineral density and functional properties (structure, mechanics) of mature enamel. There is a need for a standardized workflow to analyze enamel phenotypes in rodent models so that investigators can compare data from different studies. These methods include analyses of gene and protein expression, developing enamel histology, enamel pigment, degree of mineralization, enamel structure and mechanical properties. Standardization of these methods with regard to stage of enamel development and sample preparation is crucial, and ideally investigators can use correlative and complementary techniques with the understanding that developing mouse enamel is dynamic and complex.

  3. Maximum likelihood estimation of the position of a radiating source in a waveguide

    International Nuclear Information System (INIS)

    Hinich, M.J.

    1979-01-01

    An array of sensors is receiving radiation from a source of interest. The source and the array are in a one- or two-dimensional waveguide. The maximum-likelihood estimators of the coordinates of the source are analyzed under the assumption that the noise field is Gaussian. The Cramer-Rao lower bound is of the order of the number of modes which define the source excitation function. The results show that the accuracy of the maximum likelihood estimator of source depth using a vertical array in an infinite horizontal waveguide (such as the ocean) is limited by the number of modes detected by the array, regardless of the array size.

  4. Maximum Likelihood Dynamic Factor Modeling for Arbitrary "N" and "T" Using SEM

    Science.gov (United States)

    Voelkle, Manuel C.; Oud, Johan H. L.; von Oertzen, Timo; Lindenberger, Ulman

    2012-01-01

    This article has 3 objectives that build on each other. First, we demonstrate how to obtain maximum likelihood estimates for dynamic factor models (the direct autoregressive factor score model) with arbitrary "T" and "N" by means of structural equation modeling (SEM) and compare the approach to existing methods. Second, we go beyond standard time…

  5. A Life-cycle Approach to Improve the Sustainability of Rural Water Systems in Resource-Limited Countries

    Directory of Open Access Journals (Sweden)

    Nicholas Stacey

    2012-11-01

    Full Text Available A WHO and UNICEF joint report states that in 2008, 884 million people lacked access to potable drinking water. A life-cycle approach to developing potable water systems may improve the sustainability of such systems; however, a review of the literature shows that such an approach has primarily been used for urban systems located in resourced countries. Although urbanization is increasing globally, over 40 percent of the world's population is currently rural, with many considered poor. In this paper, we present a first step towards using life-cycle assessment to develop sustainable rural water systems in resource-limited countries while pointing out the needs. For example, while there are few differences in costs and environmental impacts for many improved rural water system options, a system that uses groundwater with community standpipes is substantially lower in cost than other alternatives, with a somewhat lower environmental inventory. However, an LCA approach shows that from institutional as well as community and managerial perspectives, sustainability includes many other factors besides cost and environment that are a function of the interdependent decision process used across the life cycle of a water system by aid organizations, water user committees, and household users. These factors often present the biggest challenge to designing sustainable rural water systems for resource-limited countries.

  6. Estimation of Fine Particulate Matter in Taipei Using Landuse Regression and Bayesian Maximum Entropy Methods

    Directory of Open Access Journals (Sweden)

    Yi-Ming Kuo

    2011-06-01

    Full Text Available Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August 2005. Due to the popularity of geographic information systems (GIS), the landuse regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on the other hand, account for the spatiotemporal dependence among the observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical-based spatial trends of PM concentration based on landuse regression, (b) the spatio-temporal dependence among PM observation information, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005-2007.

  7. Estimation of fine particulate matter in Taipei using landuse regression and bayesian maximum entropy methods.

    Science.gov (United States)

    Yu, Hwa-Lung; Wang, Chih-Hsih; Liu, Ming-Che; Kuo, Yi-Ming

    2011-06-01

    Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August 2005. Due to the popularity of geographic information systems (GIS), the landuse regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on the other hand, account for the spatiotemporal dependence among the observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical-based spatial trends of PM concentration based on landuse regression, (b) the spatio-temporal dependence among PM observation information, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005-2007.

  8. Decision Aggregation in Distributed Classification by a Transductive Extension of Maximum Entropy/Improved Iterative Scaling

    Directory of Open Access Journals (Sweden)

    George Kesidis

    2008-06-01

    Full Text Available In many ensemble classification paradigms, the function which combines local/base classifier decisions is learned in a supervised fashion. Such methods require common labeled training examples across the classifier ensemble. However, in some scenarios, where an ensemble solution is necessitated, common labeled data may not exist: (i) legacy/proprietary classifiers, and (ii) spatially distributed and/or multiple modality sensors. In such cases, it is standard to apply fixed (untrained) decision aggregation such as voting, averaging, or naive Bayes rules. In recent work, an alternative transductive learning strategy was proposed. There, decisions on test samples were chosen aiming to satisfy constraints measured by each local classifier. This approach was shown to reliably correct for class prior mismatch and to robustly account for classifier dependencies. Significant gains in accuracy over fixed aggregation rules were demonstrated. There are two main limitations of that work. First, feasibility of the constraints was not guaranteed. Second, heuristic learning was applied. Here, we overcome these problems via a transductive extension of maximum entropy/improved iterative scaling for aggregation in distributed classification. This method is shown to achieve improved decision accuracy over the earlier transductive approach and fixed rules on a number of UC Irvine datasets.
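
    The scaling machinery underlying the method can be illustrated with classic iterative proportional fitting, which the improved-iterative-scaling family generalizes. The toy joint table and target marginals below are invented for illustration; they are not data from the paper.

    ```python
    # Iterative proportional fitting: rescale a joint distribution until
    # its row and column sums match the target marginals.
    import numpy as np

    P = np.array([[0.3, 0.2],
                  [0.1, 0.4]])          # initial joint distribution
    row_target = np.array([0.6, 0.4])   # desired marginals
    col_target = np.array([0.5, 0.5])

    for _ in range(100):
        P *= (row_target / P.sum(axis=1))[:, None]   # match row sums
        P *= (col_target / P.sum(axis=0))[None, :]   # match column sums

    print(P.sum(axis=1), P.sum(axis=0))  # both converge to the targets
    ```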

  9. Project Opalinus Clay: Radionuclide Concentration Limits in the Cementitious Near-Field of an ILW Repository

    International Nuclear Information System (INIS)

    Berner, U.

    2003-05-01

    solubility increases were calculated for U, Np, Pu, Se and Ag. Special attention is given to the uncertainties of the evaluated maximum concentrations, expressed as upper and lower limits. The conceptual steps to determine these uncertainties are explained in Section 3. Due to lack of data, it was not always possible to assess uncertainties in a manner consistent with that used to assess the solubility limits. For some elements, uncertainties had to be derived from less sharply defined data or even with the help of estimates. Such less rigorous approaches are justified by the fact that in performance assessments the upper limits in particular are as important as the maximum concentrations themselves. However, appropriate information was available to define an upper limit for nearly all of the relevant nuclides. (author)

  10. Project Opalinus Clay: Radionuclide Concentration Limits in the Cementitious Near-Field of an ILW Repository

    Energy Technology Data Exchange (ETDEWEB)

    Berner, U

    2003-05-01

    this case, significant solubility increases were calculated for U, Np, Pu, Se and Ag. Special attention is given to the uncertainties of the evaluated maximum concentrations, expressed as upper and lower limits. The conceptual steps to determine these uncertainties are explained in Section 3. Due to lack of data, it was not always possible to assess uncertainties in a manner consistent with that used to assess the solubility limits. For some elements, uncertainties had to be derived from less sharply defined data or even with the help of estimates. Such less rigorous approaches are justified by the fact that in performance assessments the upper limits in particular are as important as the maximum concentrations themselves. However, appropriate information was available to define an upper limit for nearly all of the relevant nuclides. (author)

  11. A Novel Linear Programming Formulation of Maximum Lifetime Routing Problem in Wireless Sensor Networks

    DEFF Research Database (Denmark)

    Cetin, Bilge Kartal; Prasad, Neeli R.; Prasad, Ramjee

    2011-01-01

    In wireless sensor networks, one of the key challenges is to achieve minimum energy consumption in order to maximize network lifetime. In fact, lifetime depends on many parameters: the topology of the sensor network, the data aggregation regime in the network, the channel access schemes, the routing protocols, and the energy model for transmission. In this paper, we tackle the routing challenge for maximum lifetime of the sensor network. We introduce a novel linear programming approach to the maximum lifetime routing problem. To the best of our knowledge, this is the first mathematical programming...
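
    A generic instance of such a max-lifetime linear program can be solved with scipy. This is not necessarily the paper's novel formulation; the two-node topology, rates and energy budgets below are toy values. Decision variables are the total bits carried per link over the whole lifetime plus the lifetime T itself.

    ```python
    # Two sensor nodes (1, 2) send sensed data to a sink 0, possibly
    # relaying for each other; maximize lifetime T subject to flow
    # conservation and per-node energy budgets.
    from scipy.optimize import linprog

    e_tx, e_rx = 2.0, 1.0            # energy per bit to send / receive
    E = [100.0, 60.0]                # node batteries
    r = [1.0, 1.0]                   # sensing rate (bits per unit time)

    # variables: x10, x12, x20, x21, T  (x_ij = bits on link i -> j)
    c = [0, 0, 0, 0, -1]             # maximize T
    A_eq = [[1, 1, 0, -1, -r[0]],    # node 1: out - in = r1 * T
            [0, -1, 1, 1, -r[1]]]    # node 2: out - in = r2 * T
    b_eq = [0, 0]
    A_ub = [[e_tx, e_tx, 0, e_rx, 0],   # node 1 energy budget
            [0, e_rx, e_tx, e_tx, 0]]   # node 2 energy budget
    b_ub = E

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
    print(-res.fun, res.x)           # network lifetime and link loads
    ```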

  12. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, Addendum

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
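
    The iterative procedure in question is the familiar EM recursion for normal mixtures. A minimal one-dimensional version (plain EM on synthetic data, without the convergence safeguards the addendum analyzes) reads:

    ```python
    # EM for a two-component 1-D normal mixture: alternate posterior
    # responsibilities (E-step) and parameter re-estimation (M-step).
    import numpy as np

    rng = np.random.default_rng(1)
    x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1.5, 200)])

    pi, mu, sd = 0.5, np.array([-1.0, 5.0]), np.array([1.0, 1.0])
    pdf = lambda m, s: np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    for _ in range(200):
        w1 = pi * pdf(mu[0], sd[0])          # E-step
        w2 = (1 - pi) * pdf(mu[1], sd[1])
        g = w1 / (w1 + w2)
        pi = g.mean()                        # M-step
        mu = np.array([(g * x).sum() / g.sum(),
                       ((1 - g) * x).sum() / (1 - g).sum()])
        sd = np.array([np.sqrt((g * (x - mu[0]) ** 2).sum() / g.sum()),
                       np.sqrt(((1 - g) * (x - mu[1]) ** 2).sum() / (1 - g).sum())])

    print(pi, mu, sd)  # close to the generating values 0.6, (0, 4), (1, 1.5)
    ```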

  13. Estimating the maximum potential revenue for grid connected electricity storage :

    Energy Technology Data Exchange (ETDEWEB)

    Byrne, Raymond Harry; Silva Monroy, Cesar Augusto.

    2012-12-01

    The valuation of an electricity storage device is based on the expected future cash flow generated by the device. Two potential sources of income for an electricity storage system are energy arbitrage and participation in the frequency regulation market. Energy arbitrage refers to purchasing (storing) energy when electricity prices are low, and selling (discharging) energy when electricity prices are high. Frequency regulation is an ancillary service geared towards maintaining system frequency, and is typically procured by the independent system operator in some type of market. This paper outlines the calculations required to estimate the maximum potential revenue from participating in these two activities. First, a mathematical model is presented for the state of charge as a function of the storage device parameters and the quantities of electricity purchased/sold as well as the quantities offered into the regulation market. Using this mathematical model, we present a linear programming optimization approach to calculating the maximum potential revenue from an electricity storage device. The calculation of the maximum potential revenue is critical in developing an upper bound on the value of storage, as a benchmark for evaluating potential trading strategies, and a tool for capital finance risk assessment. Then, we use historical California Independent System Operator (CAISO) data from 2010-2011 to evaluate the maximum potential revenue from the Tehachapi wind energy storage project, an American Recovery and Reinvestment Act of 2009 (ARRA) energy storage demonstration project. We investigate the maximum potential revenue from two different scenarios: arbitrage only and arbitrage combined with the regulation market. Our analysis shows that participation in the regulation market produces four times the revenue compared to arbitrage in the CAISO market using 2010 and 2011 data. Then we evaluate several trading strategies to illustrate how they compare to the...
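
    The arbitrage-only part of the calculation reduces to a small linear program. The sketch below uses illustrative prices and device parameters, not the CAISO data or the authors' exact model: choose hourly charge and discharge power to maximize price-weighted net sales subject to power ratings and state-of-charge limits.

    ```python
    # Arbitrage revenue upper bound as an LP: variables are hourly charge
    # c_t and discharge d_t; SOC after hour t is eta*sum(c) - sum(d).
    import numpy as np
    from scipy.optimize import linprog

    price = np.array([20, 15, 12, 30, 45, 40.0])  # $/MWh over 6 hours
    n, p_max, e_max, eta = len(price), 1.0, 4.0, 0.9

    c_obj = np.concatenate([price, -price])       # minimize cost = -revenue
    L = np.tril(np.ones((n, n)))                  # cumulative-sum operator
    A_ub = np.vstack([np.hstack([eta * L, -L]),   # soc <= e_max
                      np.hstack([-eta * L, L])])  # soc >= 0
    b_ub = np.concatenate([np.full(n, e_max), np.zeros(n)])
    bounds = [(0, p_max)] * (2 * n)

    res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    print(-res.fun)                               # maximum arbitrage revenue
    ```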

  14. Optimizing the maximum reported cluster size in the spatial scan statistic for ordinal data.

    Science.gov (United States)

    Kim, Sehwi; Jung, Inkyung

    2017-01-01

    The spatial scan statistic is an important tool for spatial cluster detection. There have been numerous studies on scanning window shapes. However, little research has been done on the maximum scanning window size or maximum reported cluster size. Recently, Han et al. proposed to use the Gini coefficient to optimize the maximum reported cluster size. However, the method has been developed and evaluated only for the Poisson model. We adopt the Gini coefficient to be applicable to the spatial scan statistic for ordinal data to determine the optimal maximum reported cluster size. Through a simulation study and application to a real data example, we evaluate the performance of the proposed approach. With some sophisticated modification, the Gini coefficient can be effectively employed for the ordinal model. The Gini coefficient most often picked the optimal maximum reported cluster sizes that were the same as or smaller than the true cluster sizes with very high accuracy. It seems that we can obtain a more refined collection of clusters by using the Gini coefficient. The Gini coefficient developed specifically for the ordinal model can be useful for optimizing the maximum reported cluster size for ordinal data and helpful for properly and informatively discovering cluster patterns.

  15. ON THE MAXIMUM MASS OF STELLAR BLACK HOLES

    International Nuclear Information System (INIS)

    Belczynski, Krzysztof; Fryer, Chris L.; Bulik, Tomasz; Ruiter, Ashley; Valsecchi, Francesca; Vink, Jorick S.; Hurley, Jarrod R.

    2010-01-01

    We present the spectrum of compact object masses: neutron stars and black holes (BHs) that originate from single stars in different environments. In particular, we calculate the dependence of maximum BH mass on metallicity and on some specific wind mass loss rates (e.g., Hurley et al. and Vink et al.). Our calculations show that the highest-mass BHs observed in the Galaxy, M_bh ∼ 15 M_sun, in the high-metallicity environment (Z = Z_sun = 0.02) can be explained with the stellar models and the wind mass loss rates adopted here. To reach this result we had to set luminous blue variable mass loss rates at the level of ∼10^-4 M_sun yr^-1 and to employ metallicity-dependent Wolf-Rayet winds. With such winds, calibrated on Galactic BH mass measurements, the maximum BH mass obtained for moderate metallicity (Z = 0.3 Z_sun = 0.006) is M_bh,max = 30 M_sun. This is a rather striking finding, as the mass of the most massive known stellar BH is M_bh = 23-34 M_sun and, in fact, it is located in a small star-forming galaxy with moderate metallicity. We find that in the very low (globular cluster-like) metallicity environment (Z = 0.01 Z_sun = 0.0002) the maximum BH mass can be as high as M_bh,max = 80 M_sun. It is interesting to note that the X-ray luminosity from Eddington-limited accretion onto an 80 M_sun BH is of the order of ∼10^40 erg s^-1 and is comparable to the luminosities of some known ultra-luminous X-ray sources. We emphasize that our results were obtained for single stars only and that binary interactions may alter these maximum BH masses (e.g., accretion from a close companion). This is strictly a proof-of-principle study which demonstrates that stellar models can naturally explain even the most massive known stellar BHs.

  16. A biased review of tau neutrino mass limits

    Energy Technology Data Exchange (ETDEWEB)

    Duboscq, J.E

    2001-04-01

    After a quick review of astrophysically relevant limits, I present a summary of MeV-scale tau neutrino mass limits derived from accelerator-based experiments. I argue that the current published limits appear to be too consistent, and that we therefore cannot conclude that the tau neutrino mass limit is as low as usually claimed. I provide motivational arguments calling into question the assumed statistical properties of the usual maximum likelihood estimators, and provide a prescription for deriving a more robust and understandable mass limit.

  17. Supervised maximum-likelihood weighting of composite protein networks for complex prediction

    Directory of Open Access Journals (Sweden)

    Yong Chern Han

    2012-12-01

    Full Text Available Abstract Background Protein complexes participate in many important cellular functions, so finding the set of existent complexes is essential for understanding the organization and regulation of processes in the cell. With the availability of large amounts of high-throughput protein-protein interaction (PPI) data, many algorithms have been proposed to discover protein complexes from PPI networks. However, such approaches are hindered by the high rate of noise in high-throughput PPI data, including spurious and missing interactions. Furthermore, many transient interactions are detected between proteins that are not from the same complex, while not all proteins from the same complex may actually interact. As a result, predicted complexes often do not match true complexes well, and many true complexes go undetected. Results We address these challenges by integrating PPI data with other heterogeneous data sources to construct a composite protein network, and using a supervised maximum-likelihood approach to weight each edge based on its posterior probability of belonging to a complex. We then use six different clustering algorithms, and an aggregative clustering strategy, to discover complexes in the weighted network. We test our method on Saccharomyces cerevisiae and Homo sapiens, and show that complex discovery is improved: compared to previously proposed supervised and unsupervised weighting approaches, our method recalls more known complexes, achieves higher precision at all recall levels, and generates novel complexes of greater functional similarity. Furthermore, our maximum-likelihood approach allows learned parameters to be used to visualize and evaluate the evidence of novel predictions, aiding human judgment of their credibility. Conclusions Our approach integrates multiple data sources with supervised learning to create a weighted composite protein network, and uses six clustering algorithms with an aggregative clustering strategy to...

  18. Algorithms for optimized maximum entropy and diagnostic tools for analytic continuation

    Science.gov (United States)

    Bergeron, Dominic; Tremblay, A.-M. S.

    2016-08-01

    Analytic continuation of numerical data obtained in imaginary time or frequency has become an essential part of many branches of quantum computational physics. It is, however, an ill-conditioned procedure and thus a hard numerical problem. The maximum-entropy approach, based on Bayesian inference, is the most widely used method to tackle that problem. Although the approach is well established and among the most reliable and efficient ones, useful developments of the method and of its implementation are still possible. In addition, while a few free software implementations are available, a well-documented, optimized, general purpose, and user-friendly software dedicated to that specific task is still lacking. Here we analyze all aspects of the implementation that are critical for accuracy and speed and present a highly optimized approach to maximum entropy. Original algorithmic and conceptual contributions include (1) numerical approximations that yield a computational complexity that is almost independent of temperature and spectrum shape (including sharp Drude peaks in broad background, for example) while ensuring quantitative accuracy of the result whenever precision of the data is sufficient, (2) a robust method of choosing the entropy weight α that follows from a simple consistency condition of the approach and the observation that information- and noise-fitting regimes can be identified clearly from the behavior of χ2 with respect to α , and (3) several diagnostics to assess the reliability of the result. Benchmarks with test spectral functions of different complexity and an example with an actual physical simulation are presented. Our implementation, which covers most typical cases for fermions, bosons, and response functions, is available as an open source, user-friendly software.

  19. New approach to the theory of coupled πNN-NN system. III. A three-body limit

    International Nuclear Information System (INIS)

    Avishai, Y.; Mizutani, T.

    1980-01-01

    In the limit where the pion is restricted to be emitted only by the nucleon that first absorbed it, it is shown that the equations previously developed to describe the coupled πNN (πd)-NN system reduce to conventional three-body equations. Specifically, it is found in this limit that the input πN P_11 amplitude, when put on-shell, is directly related to the experimental phase shift, contrary to the original equations where the direct (dressed) nucleon pole term and the non-pole part of this partial wave enter separately. The present study clarifies the limitations of a pure three-body approach to the πNN-NN problem, and suggests a rare opportunity of observing a possible resonance behavior in the non-pole part of the πN P_11 amplitude through πd experiments.

  20. Probabilistic properties of the date of maximum river flow, an approach based on circular statistics in lowland, highland and mountainous catchment

    Science.gov (United States)

    Rutkowska, Agnieszka; Kohnová, Silvia; Banasik, Kazimierz

    2018-04-01

    Probabilistic properties of the dates of winter, summer and annual maximum flows were studied using circular statistics in three catchments differing in topographic conditions: a lowland, a highland and a mountainous catchment. The circular measures of location and dispersion were applied to the long-term samples of dates of maxima. A mixture of von Mises distributions was assumed as the theoretical distribution function of the date of the winter, summer and annual maximum flow. The number of components was selected on the basis of the corrected Akaike Information Criterion and the parameters were estimated by means of the Maximum Likelihood method. The goodness of fit was assessed using both the correlation between quantiles and a version of the Kuiper's and Watson's test. Results show that the number of components varied between catchments and differed between seasonal and annual maxima. Differences between catchments in circular characteristics were explained using climatic factors such as precipitation and temperature. Further studies may include grouping catchments based on the similarity between circular distribution functions and the linkage between dates of maximum precipitation and maximum flow.
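
    The circular-statistics starting point is the directional mean of the dates. A minimal sketch follows, using made-up dates of annual maxima; the paper goes further and fits von Mises mixtures on top of exactly this angular representation.

    ```python
    # Map day-of-year to an angle, average the unit vectors, map back.
    import numpy as np

    days = np.array([170, 185, 178, 200, 160, 355, 10])  # days of year of maxima
    theta = 2 * np.pi * days / 365.25

    C, S = np.cos(theta).mean(), np.sin(theta).mean()
    mean_day = (np.arctan2(S, C) % (2 * np.pi)) * 365.25 / (2 * np.pi)
    R = np.hypot(C, S)      # mean resultant length: 1 = concentrated, 0 = uniform
    print(round(mean_day), round(R, 2))
    ```

    Note how the wrap-around dates (day 355 and day 10) are handled correctly, which a plain arithmetic mean of day numbers would not do.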

  1. Penalized Maximum Likelihood Estimation for univariate normal mixture distributions

    International Nuclear Information System (INIS)

    Ridolfi, A.; Idier, J.

    2001-01-01

    Due to singularities of the likelihood function, the maximum likelihood approach for the estimation of the parameters of normal mixture models is an acknowledged ill-posed optimization problem. Ill-posedness is resolved by penalizing the likelihood function. In the Bayesian framework, this amounts to incorporating an inverted gamma prior in the likelihood function. A penalized version of the EM algorithm is derived, which is still explicit and which intrinsically ensures that the estimates are not singular. Numerical evidence of the latter property is put forward with a test.

  2. Force Limit System

    Science.gov (United States)

    Pawlik, Ralph; Krause, David; Bremenour, Frank

    2011-01-01

    The Force Limit System (FLS) was developed to protect test specimens from inadvertent overload. The load limit value is fully adjustable by the operator and works independently of the test system control as a mechanical (non-electrical) device. When a test specimen is loaded via an electromechanical or hydraulic test system, a chance of an overload condition exists. An overload applied to a specimen could result in irreparable damage to the specimen and/or fixturing. The FLS restricts the maximum load that an actuator can apply to a test specimen. When testing limited-run test articles or using very expensive fixtures, the use of such a device is highly recommended. Test setups typically use electronic peak protection, which can be the source of overload due to malfunctioning components or the inability to react quickly enough to load spikes. The FLS works independently of the electronic overload protection.

  3. Multi-level restricted maximum likelihood covariance estimation and kriging for large non-gridded spatial datasets

    KAUST Repository

    Castrillon, Julio; Genton, Marc G.; Yokota, Rio

    2015-01-01

    We develop a multi-level restricted Gaussian maximum likelihood method for estimating the covariance function parameters and computing the best unbiased predictor. Our approach produces a new set of multi-level contrasts where the deterministic

  4. 49 CFR 230.24 - Maximum allowable stress.

    Science.gov (United States)

    2010-10-01

    49 Transportation 4 2010-10-01 ... Allowable Stress § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...

  5. Promoting physical activity among children and adolescents: the strengths and limitations of school-based approaches.

    Science.gov (United States)

    Booth, Michael; Okely, Anthony

    2005-04-01

    Paediatric overweight and obesity is recognised as one of Australia's most significant health problems and effective approaches to increasing physical activity and reducing energy consumption are being sought urgently. Every potential approach and setting should be subjected to critical review in an attempt to maximise the impact of policy and program initiatives. This paper identifies the strengths and limitations of schools as a setting for promoting physical activity. The strengths are: most children and adolescents attend school; most young people are likely to see teachers as credible sources of information; schools provide access to the facilities, infrastructure and support required for physical activity; and schools are the workplace of skilled educators. Potential limitations are: those students who like school the least are the most likely to engage in health-compromising behaviours and the least likely to be influenced by school-based programs; there are about 20 more hours per week available for physical activity outside schools hours than during school hours; enormous demands are already being made on schools; many primary school teachers have low levels of perceived competence in teaching physical education and fundamental movement skills; and opportunities for being active at school may not be consistent with how and when students prefer to be active.

  6. The influence of the key limiting factors on the limitations of heat transfer in heat pipes with various working fluids

    Directory of Open Access Journals (Sweden)

    Melnyk R. S.

    2017-04-01

    Full Text Available Aluminium and copper heat pipes with grooved and metal-fibrous capillary structures are highly effective heat transfer devices. They are used in various cooling systems for electronic equipment such as LED modules, microprocessors, and receive-transmit modules. However, such heat pipes are subject to heat transfer limitations of several types: the hydraulic (capillary) limit, the boiling limit, liquid entrainment by the vapor flow, and the sonic limit. In the design process it is necessary to know which of these limitations is determinant for a given heat pipe. This article presents calculations of the maximum heat transfer ability for LED cooling by heat pipes with grooved and metal-fibrous capillary structures, using pentane, acetone, isobutane and water as coolants. It is shown that for axial grooved heat pipes inclined so that the cooling zone is located above the evaporation zone, the main operational limit determining the maximum heat transfer ability is the entrainment limit for the pentane and acetone coolants, whereas for the isobutane coolant the main limitation is the boiling limit. For heat pipes with a metal-fibrous capillary structure, the main limitation is the capillary limit, and it is determinant for all coolants calculated: water, pentane and acetone. In the high-porosity range of the capillary structure, the capillary limit for water heat pipes gives way to the sonic limit: the vapor velocity approaches the speed of sound and cannot grow further, so the coolant cannot refill the condensation zone in the needed quantity and that zone dries out. For heat pipes with acetone and pentane, the capillary limit gives way to the boiling limit. All calculations were made for a vapor temperature of 50°C and for a porosity range from 30% to 90%.
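
    For orientation, the capillary limit can be estimated from the standard balance between the maximum capillary head and the viscous liquid pressure drop in the wick. The property values and geometry below are rough assumptions for a small horizontal water heat pipe near 50°C, not data from the article.

    ```python
    # Order-of-magnitude capillary limit for a horizontal heat pipe:
    # 2*sigma/r_eff = (mu_l * L_eff / (rho_l * K * A_w * h_fg)) * Q_max
    sigma = 0.068      # surface tension, N/m
    rho_l = 988.0      # liquid density, kg/m^3
    mu_l  = 0.55e-3    # liquid viscosity, Pa s
    h_fg  = 2.38e6     # latent heat of vaporization, J/kg
    K     = 1e-10      # wick permeability, m^2
    A_w   = 2e-5       # wick cross-section, m^2
    L_eff = 0.15       # effective transport length, m
    r_eff = 50e-6      # effective capillary pore radius, m

    q_max = (2 * sigma / r_eff) * rho_l * K * A_w * h_fg / (mu_l * L_eff)
    print(round(q_max, 1), "W")   # on the order of 1e2 W for these inputs
    ```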

  7. Risk newsboy: approach for addressing uncertainty in developing action levels and cleanup limits

    International Nuclear Information System (INIS)

    Cooke, Roger; MacDonell, Margaret

    2007-01-01

    Site cleanup decisions involve developing action levels and residual limits for key contaminants, to assure health protection during the cleanup period and into the long term. Uncertainty is inherent in the toxicity information used to define these levels, based on incomplete scientific knowledge regarding dose-response relationships across various hazards and exposures at environmentally relevant levels. This problem can be addressed by applying principles used to manage uncertainty in operations research, as illustrated by the newsboy dilemma. Each day a newsboy must balance the risk of buying more papers than he can sell against the risk of not buying enough. Setting action levels and cleanup limits involves a similar concept of balancing and distributing risks and benefits in the face of uncertainty. The newsboy approach can be applied to develop health-based target concentrations for both radiological and chemical contaminants, with stakeholder input being crucial to assessing 'regret' levels. Associated tools include structured expert judgment elicitation to quantify uncertainty in the dose-response relationship, and mathematical techniques such as probabilistic inversion and iterative proportional fitting. (authors)
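
    The newsboy balance has a closed-form solution worth recalling: the optimal order is the critical-fractile quantile of the demand distribution. A minimal sketch follows; the costs and the normal demand model are illustrative, and for cleanup limits the "costs" would become stakeholder regret levels on either side of the limit.

    ```python
    # Newsvendor critical fractile: order the c_u/(c_u + c_o) quantile,
    # where c_u is the unit cost of ordering too little and c_o too much.
    from scipy.stats import norm

    c_u, c_o = 3.0, 1.0                      # regret per unit under / over
    critical_fractile = c_u / (c_u + c_o)    # 0.75
    demand = norm(loc=100, scale=20)         # uncertain demand
    q_star = demand.ppf(critical_fractile)
    print(critical_fractile, round(q_star, 1))  # order-up-to level ~113.5
    ```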

  8. A bottom-up approach to identifying the maximum operational adaptive capacity of water resource systems to a changing climate

    Science.gov (United States)

    Culley, S.; Noble, S.; Yates, A.; Timbs, M.; Westra, S.; Maier, H. R.; Giuliani, M.; Castelletti, A.

    2016-09-01

    Many water resource systems have been designed assuming that the statistical characteristics of future inflows are similar to those of the historical record. This assumption is no longer valid due to large-scale changes in the global climate, potentially causing declines in water resource system performance, or even complete system failure. Upgrading system infrastructure to cope with climate change can require substantial financial outlay, so it might be preferable to optimize existing system performance when possible. This paper builds on decision scaling theory by proposing a bottom-up approach to designing optimal feedback control policies for a water system exposed to a changing climate. This approach not only describes optimal operational policies for a range of potential climatic changes but also enables an assessment of a system's upper limit of its operational adaptive capacity, beyond which upgrades to infrastructure become unavoidable. The approach is illustrated using the Lake Como system in Northern Italy—a regulated system with a complex relationship between climate and system performance. By optimizing system operation under different hydrometeorological states, it is shown that the system can continue to meet its minimum performance requirements for more than three times as many states as it can under current operations. Importantly, a single management policy, no matter how robust, cannot fully utilize existing infrastructure as effectively as an ensemble of flexible management policies that are updated as the climate changes.

  9. Hydrodynamic equations for electrons in graphene obtained from the maximum entropy principle

    Energy Technology Data Exchange (ETDEWEB)

    Barletti, Luigi, E-mail: luigi.barletti@unifi.it [Dipartimento di Matematica e Informatica “Ulisse Dini”, Università degli Studi di Firenze, Viale Morgagni 67/A, 50134 Firenze (Italy)

    2014-08-15

    The maximum entropy principle is applied to the formal derivation of isothermal, Euler-like equations for semiclassical fermions (electrons and holes) in graphene. After proving general mathematical properties of the equations so obtained, their asymptotic form corresponding to significant physical regimes is investigated. In particular, the diffusive regime, the Maxwell-Boltzmann regime (high temperature), the collimation regime and the degenerate gas limit (vanishing temperature) are considered.

  10. A maximum power point tracking for photovoltaic-SPE system using a maximum current controller

    Energy Technology Data Exchange (ETDEWEB)

    Muhida, Riza [Osaka Univ., Dept. of Physical Science, Toyonaka, Osaka (Japan); Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Park, Minwon; Dakkak, Mohammed; Matsuura, Kenji [Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Tsuyoshi, Akira; Michira, Masakazu [Kobe City College of Technology, Nishi-ku, Kobe (Japan)

    2003-02-01

    Processes to produce hydrogen from solar photovoltaic (PV)-powered water electrolysis using solid polymer electrolysis (SPE) are reported. An alternative control of maximum power point tracking (MPPT) in the PV-SPE system, based on maximum current searching methods, has been designed and implemented. Based on the voltage-current characteristics and a theoretical analysis of the SPE, it can be shown that tracking the maximum current output of the DC-DC converter on the SPE side will simultaneously track the MPPT of the photovoltaic panel. This method uses a proportional-integral controller to control the duty factor of the DC-DC converter with a pulse-width modulator (PWM). The MPPT performance and hydrogen production performance of this method have been evaluated and discussed based on the results of the experiment. (Author)

  11. Mixed memory, (non) Hurst effect, and maximum entropy of rainfall in the tropical Andes

    Science.gov (United States)

    Poveda, Germán

    2011-02-01

    Diverse linear and nonlinear statistical parameters of rainfall under aggregation in time and the kind of temporal memory are investigated. Data sets from the Andes of Colombia at different resolutions (15 min and 1 h) and record lengths (21 months and 8-40 years) are used. A mixture of two timescales is found in the autocorrelation and autoinformation functions, with short-term memory holding for time lags less than 15-30 min, and long-term memory onwards. Consistently, rainfall variance exhibits different temporal scaling regimes separated at 15-30 min and 24 h. Tests for the Hurst effect evidence the frailty of the R/S approach in discerning the kind of memory in high-resolution rainfall, whereas rigorous statistical tests for short-memory processes do reject the existence of the Hurst effect. Rainfall information entropy grows as a power law of aggregation time, S(T) ~ T^β with β = 0.51, up to a timescale T_MaxEnt (70-202 h) at which entropy saturates, with β = 0 onwards. Maximum entropy is reached through a dynamic Generalized Pareto distribution, consistently with the maximum information-entropy principle for heavy-tailed random variables, and with its asymptotically infinitely divisible property. The dynamics towards the limit distribution is quantified. Tsallis q-entropies also exhibit power laws with T, such that S_q(T) ~ T^β(q), with β(q) ≤ 0 for q ≤ 0, and β(q) ≈ 0.5 for q ≥ 1. No clear patterns are found in the geographic distribution within and among the statistical parameters studied, confirming the strong variability of tropical Andean rainfall.

  12. A maximum information utilization approach in X-ray fluorescence analysis

    International Nuclear Information System (INIS)

    Papp, T.; Maxwell, J.A.; Papp, A.T.

    2009-01-01

    X-ray fluorescence data bases have significant contradictions and inconsistencies. We have identified that the main source of the contradictions, after the human factors, is rooted in the signal processing approaches. We have developed signal processors that overcome many of the problems by maximizing the information available to the analyst. These non-paralyzable, fully digital signal processors have yielded improved resolution, line shape, tailing and pile-up recognition. The signal processors account for and register all events, sorting them into two spectra: one spectrum for the desirable or accepted events, and one spectrum for the rejected events. The information contained in the rejected spectrum is mandatory to have control over the measurement and to make a proper accounting and allocation of the events. This has established the basis for the application of the fundamental parameter method approach. A fundamental parameter program was also developed. The primary X-ray line shape (Lorentzian) is convoluted with a system line shape (Gaussian) and corrected for the sample material absorption, X-ray absorbers and detector efficiency. The peaks can also have lower- and upper-energy-side tailing, including the physical-interaction-based long-range functions. The program also models peak and continuum pile-up and can handle layered samples of up to five layers. The application of a fundamental parameter method demands proper equipment characterization. We have also developed an inverse fundamental parameter method software package for equipment characterization. The program calculates the excitation function at the sample position and the detector efficiency, supplying an internally consistent system.

  13. Lake Basin Fetch and Maximum Length/Width

    Data.gov (United States)

    Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...

  14. Chapman--Enskog approach to flux-limited diffusion theory

    International Nuclear Information System (INIS)

    Levermore, C.D.

    1979-01-01

    Using the technique developed by Chapman and Enskog for deriving the Navier-Stokes equations from the Boltzmann equation, a framework is set up for deriving diffusion theories from the transport equation. The procedure is first applied to give a derivation of isotropic diffusion theory and then of a completely new theory which is naturally flux-limited. This new flux-limited diffusion theory is then compared with asymptotic diffusion theory.

  15. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to derive the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps the maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring the receiver function in the time domain.
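
    The Toeplitz/Levinson step can be sketched directly. In the fragment below, the trace is synthetic and scipy's Toeplitz solver stands in for an explicit Levinson recursion; it fits a prediction-error filter and applies it as a whitening deconvolution.

    ```python
    # Build the autocorrelation of a trace, solve the Toeplitz normal
    # equations (what Levinson recursion does internally), and apply the
    # resulting prediction-error (whitening) filter.
    import numpy as np
    from scipy.linalg import solve_toeplitz

    rng = np.random.default_rng(2)
    trace = np.convolve(rng.normal(size=300), [1.0, 0.8, 0.5], mode="full")[:300]

    p = 10                                   # filter order
    r = np.correlate(trace, trace, "full")[len(trace) - 1:] / len(trace)
    a = solve_toeplitz((r[:p], r[:p]), r[1:p + 1])   # forward-prediction coeffs

    pef = np.concatenate([[1.0], -a])        # prediction-error filter
    residual = np.convolve(trace, pef, mode="same")
    print(residual.var() < trace.var())      # True: the wavelet is compressed
    ```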

  16. Proposal for derivation of limit values for skin contamination

    International Nuclear Information System (INIS)

    Schieferdecker, H.; Koelzer, W.; Henrichs, K.

    1985-04-01

    From the primary limit value for the skin dose, secondary limit values are derived for skin contamination which can be used in practical radiation protection work. In analogy to the secondary limit value for the maximum permissible body burden in the case of incorporation, limit values for the 'maximum permissible skin burden' are calculated with the help of dose factors applicable in cases of skin contamination; they can be derived from the skin dose limit values. Considering that the skin is only temporarily exposed to contamination, a 'limit value of skin contamination' is derived, in analogy to the annual limit on intake for incorporation, for immediately removable contamination and for one day of exposure, whereas for non-removable contamination, taking into account the renewal of the skin, a limit value of annual skin contamination is defined. An investigation level for skin contamination is assumed as a threshold above which defined measures must be taken. Regarding these measures, not more than three rounds of appropriate washing are recommended, with the subsequent procedure determined by the level of residual contamination. The respective limit values are indicated for some radionuclides selected as examples (C-14, Co-60, Sr-90, Y-90, I-131, Cs-137, Ce-141, Pu-239). (orig./HP)

  17. Correlation of the tokamak H-mode density limit with ballooning stability at the separatrix

    Science.gov (United States)

    Eich, T.; Goldston, R. J.; Kallenbach, A.; Sieglin, B.; Sun, H. J.; ASDEX Upgrade Team; Contributors, JET

    2018-03-01

    We show for JET and ASDEX Upgrade, based on Thomson-scattering measurements, a clear correlation of the density limit of the tokamak H-mode high-confinement regime with the approach to the ideal ballooning instability threshold at the periphery of the plasma. It is shown that the MHD ballooning parameter at the separatrix position, α_sep, increases about linearly with the separatrix density normalized to the Greenwald density, n_e,sep/n_GW, for a wide range of discharge parameters in both devices. The observed operational space is found to reach at maximum n_e,sep/n_GW ≈ 0.4-0.5 at values of α_sep ≈ 2-2.5, in the range of theoretical predictions for ballooning instability. This work supports the hypothesis that the H-mode density limit may be set by ballooning stability at the separatrix.

  18. Density limits in Tokamaks

    International Nuclear Information System (INIS)

    Tendler, M.

    1984-06-01

    The energy loss from a tokamak plasma due to neutral hydrogen radiation and recycling is of great importance for the energy balance at the periphery. It is shown that the requirement for thermal equilibrium implies a constraint on the maximum attainable edge density. The relation to other density limits is discussed. The average plasma density is shown to be a strong function of the refuelling deposition profile. (author)

  19. A maximum entropy reconstruction technique for tomographic particle image velocimetry

    International Nuclear Information System (INIS)

    Bilsky, A V; Lozhkin, V A; Markovich, D M; Tokarev, M P

    2013-01-01

    This paper studies a novel approach for reducing tomographic PIV computational complexity. The proposed approach is an algebraic reconstruction technique, termed MENT (maximum entropy). This technique computes the three-dimensional light intensity distribution several times faster than SMART, using at least ten times less memory. Additionally, the reconstruction quality remains nearly the same as with SMART. This paper presents the theoretical computation performance comparison for MENT, SMART and MART, followed by validation using synthetic particle images. Both the theoretical assessment and validation of synthetic images demonstrate significant computational time reduction. The data processing accuracy of MENT was compared to that of SMART in a slot jet experiment. A comparison of the average velocity profiles shows a high level of agreement between the results obtained with MENT and those obtained with SMART. (paper)
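
    For context, the multiplicative update at the heart of the MART/SMART family that the record benchmarks against can be sketched in a few lines of Python. This dense-matrix toy (our notation, with relaxation parameter mu) ignores the sparse ray geometry, masking and entropy-based derivation that distinguish a production MENT implementation.

      import numpy as np

      def mart(A, g, n_iter=20, mu=0.5, eps=1e-12):
          # A: (n_rays, n_voxels) non-negative projection weights;
          # g: measured projections. One multiplicative correction per ray.
          f = np.ones(A.shape[1])           # uniform non-negative start
          for _ in range(n_iter):
              for i in range(A.shape[0]):
                  proj = A[i] @ f
                  if proj > eps and g[i] > eps:
                      f *= (g[i] / proj) ** (mu * A[i])
          return f

      # Toy usage: recover a 2-voxel "volume" from 3 consistent ray sums.
      A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
      print(mart(A, g=np.array([1.0, 2.0, 3.0])))   # approaches [1, 2]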

  20. Maximum Gene-Support Tree

    Directory of Open Access Journals (Sweden)

    Yunfeng Shan

    2008-01-01

    Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes have been sequenced, as well as plants for which fossil-supported true phylogenetic trees are available. In this study, we generated single-gene trees of seven yeast species and of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms were used: maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ). Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a single gene generate the "true tree" under all four algorithms. However, the most frequent gene tree, termed the "maximum gene-support tree" (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the "true tree" among the species. The results provide insight into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among the species compared.

  1. The microscopic origin of the doping limits in semiconductors and wide-gap materials and recent developments in overcoming these limits: a review

    International Nuclear Information System (INIS)

    Zhang, S.B.

    2002-01-01

    This paper reviews the recent developments in first-principles total energy studies of the phenomenological equilibrium 'doping limit rule' that governs the maximum electrical conductivity of semiconductors via extrinsic or intrinsic doping. The rule relates the maximum equilibrium carrier concentrations (electrons or holes) of a wide range of materials to their respective band alignments. The microscopic origin of the mysterious 'doping limit rule' is the spontaneous formation of intrinsic defects: e.g., in n-type semiconductors, the formation of cation vacancies. Recent developments in overcoming the equilibrium doping limits are also discussed: it appears that a common route to significantly increase carrier concentrations is to expand the physically accessible range of the dopant atomic chemical potential by non-equilibrium doping processes, which not only suppresses the formation of the intrinsic defects but also lowers the formation energy of the impurities, thereby significantly increasing their solubility. (author)

  2. Attitude determination and calibration using a recursive maximum likelihood-based adaptive Kalman filter

    Science.gov (United States)

    Kelly, D. A.; Fermelia, A.; Lee, G. K. F.

    1990-01-01

    An adaptive Kalman filter design that utilizes recursive maximum likelihood parameter identification is discussed. At the center of this design is the Kalman filter itself, which has the responsibility for attitude determination. At the same time, the identification algorithm is continually identifying the system parameters. The approach is applicable to nonlinear, as well as linear systems. This adaptive Kalman filter design has much potential for real time implementation, especially considering the fast clock speeds, cache memory and internal RAM available today. The recursive maximum likelihood algorithm is discussed in detail, with special attention directed towards its unique matrix formulation. The procedure for using the algorithm is described along with comments on how this algorithm interacts with the Kalman filter.
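
    The flavour of such an adaptive filter can be sketched as follows. This simplified Python example adapts the measurement-noise variance from the innovation sequence, an exponentially weighted stand-in for the recursive maximum likelihood identification described above (not the authors' algorithm), for a linear system with a scalar measurement.

      import numpy as np

      def adaptive_kf(zs, F, H, Q, x0, P0, r0=1.0, alpha=0.05):
          # F: (n,n) state transition; H: (n,) measurement row; Q: (n,n)
          # process noise; R is re-estimated online from the innovations.
          x, P, R = x0.copy(), P0.copy(), r0
          out = []
          for z in zs:
              x = F @ x                         # predict
              P = F @ P @ F.T + Q
              S = H @ P @ H + R                 # innovation variance
              nu = z - H @ x                    # innovation
              # recursive, ML-flavoured update of R from the innovation
              R = (1 - alpha) * R + alpha * max(nu**2 - H @ P @ H, 1e-9)
              K = P @ H / S                     # gain and measurement update
              x = x + K * nu
              P = (np.eye(len(x)) - np.outer(K, H)) @ P
              out.append(x.copy())
          return np.array(out)

      # Constant-velocity toy: track noisy position measurements.
      F = np.array([[1.0, 1.0], [0.0, 1.0]])
      H = np.array([1.0, 0.0])
      Q = 1e-4 * np.eye(2)
      zs = 0.5 * np.arange(100) + np.random.default_rng(0).normal(0, 2.0, 100)
      print(adaptive_kf(zs, F, H, Q, x0=np.zeros(2), P0=np.eye(2))[-1])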

  3. A Fourier analysis on the maximum acceptable grid size for discrete proton beam dose calculation

    International Nuclear Information System (INIS)

    Li, Haisen S.; Romeijn, H. Edwin; Dempsey, James F.

    2006-01-01

    We developed an analytical method for determining the maximum acceptable grid size for discrete dose calculation in proton therapy treatment plan optimization, so that the accuracy of the optimized dose distribution is guaranteed in the phase of dose sampling and superfluous computational work is avoided. The accuracy of dose sampling was judged by the criterion that the continuous dose distribution can be reconstructed from the discrete dose within a 2% error limit. To keep the error caused by discrete dose sampling under the 2% limit, the dose grid size cannot exceed a maximum acceptable value. The method was based on Fourier analysis and the Shannon-Nyquist sampling theorem, as an extension of our previous analysis for photon-beam intensity modulated radiation therapy [J. F. Dempsey, H. E. Romeijn, J. G. Li, D. A. Low, and J. R. Palta, Med. Phys. 32, 380-388 (2005)]. The proton beam model used for the analysis was a nearly mono-energetic (of width about 1% of the incident energy) and monodirectional infinitesimal (nonintegrated) pencil beam in a water medium. By monodirectional, we mean that the protons travel in the same direction before entering the water medium; the various scattering prior to entrance to the water is not taken into account. In intensity modulated proton therapy, the elementary intensity modulation entity is either an infinitesimal or a finite sized beamlet. Since a finite sized beamlet is a superposition of infinitesimal pencil beams, the maximum acceptable grid size obtained for the infinitesimal pencil beam also applies to finite sized beamlets. The analytic Bragg curve function proposed by Bortfeld [T. Bortfeld, Med. Phys. 24, 2024-2033 (1997)] was employed. The lateral profile was approximated by a depth-dependent Gaussian distribution. The model included the spreads of the Bragg peak and of the lateral profiles due to multiple Coulomb scattering. The dependence of the maximum acceptable dose grid size on the
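
    The underlying criterion can be stated compactly (our paraphrase of the record's Shannon-Nyquist argument, not Bortfeld's full derivation): if the dose distribution is effectively band-limited to a maximum spatial frequency ν_max set by the 2% reconstruction-error criterion, the grid spacing must satisfy the sampling condition below; for a Gaussian lateral profile of standard deviation σ, the spectrum decays rapidly, so σ fixes an effective ν_max.

      \Delta x \;\le\; \frac{1}{2\,\nu_{\max}}, \qquad
      \tilde{D}(\nu) \;\propto\; \exp\!\left(-2\pi^{2}\sigma^{2}\nu^{2}\right)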

  4. Analysis of the maximum discharge of karst springs

    Science.gov (United States)

    Bonacci, Ognjen

    2001-07-01

    Analyses are presented of the conditions that limit the discharge of some karst springs. The large number of springs studied show that, under conditions of extremely intense precipitation, a maximum value exists for the discharge of the main springs in a catchment, independent of catchment size and the amount of precipitation. Outflow modelling of karst-spring discharge is not easily generalized and schematized due to numerous specific characteristics of karst-flow systems. A detailed examination of the published data on four karst springs identified the possible reasons for the limitation on the maximum flow rate: (1) limited size of the karst conduit; (2) pressure flow; (3) intercatchment overflow; (4) overflow from the main spring-flow system to intermittent springs within the same catchment; (5) water storage in the zone above the karst aquifer or epikarstic zone of the catchment; and (6) factors such as climate, soil and vegetation cover, and altitude and geology of the catchment area. The phenomenon of limited maximum-discharge capacity of karst springs is not included in rainfall-runoff process modelling, which is probably one of the main reasons for the present poor quality of karst hydrological modelling.

  5. Application of Bayesian Maximum Entropy Filter in parameter calibration of groundwater flow model in PingTung Plain

    Science.gov (United States)

    Cheung, Shao-Yong; Lee, Chieh-Han; Yu, Hwa-Lung

    2017-04-01

    Due to limited hydrogeological observation data and the high levels of uncertainty within them, parameter estimation for groundwater models has been an important issue. Many methods of parameter estimation exist; for example, the Kalman filter provides real-time calibration of parameters through measurements at groundwater monitoring wells, and related methods such as the Extended Kalman Filter and the Ensemble Kalman Filter are widely applied in groundwater research. However, the Kalman filter is limited to linear systems. This study proposes a novel method, Bayesian Maximum Entropy Filtering, which accounts for the uncertainty of the data in parameter estimation. With it, parameters can be estimated from hard data (certain) and soft data (uncertain) at the same time. In this study, Python and QGIS were used with the MODFLOW groundwater model to develop both Extended Kalman Filtering and Bayesian Maximum Entropy Filtering for parameter estimation, providing a conventional filtering method alongside one that also considers the uncertainty of the data. The study was conducted as a numerical model experiment combining the Bayesian maximum entropy filter with a hypothetical MODFLOW groundwater model architecture, with virtual observation wells sampling the simulated groundwater model periodically. The results showed that, by considering the uncertainty of the data, the Bayesian maximum entropy filter provides good real-time parameter estimation.

  6. Comparison of standard maximum likelihood classification and polytomous logistic regression used in remote sensing

    Science.gov (United States)

    John Hogland; Nedret Billor; Nathaniel Anderson

    2013-01-01

    Discriminant analysis, referred to as maximum likelihood classification within popular remote sensing software packages, is a common supervised technique used by analysts. Polytomous logistic regression (PLR), also referred to as multinomial logistic regression, is an alternative classification approach that is less restrictive, more flexible, and easy to interpret. To...
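
    The contrast between the two classifiers is easy to demonstrate with scikit-learn, where Gaussian maximum likelihood classification corresponds to quadratic discriminant analysis (per-class mean and covariance) and PLR to multinomial logistic regression. The three-class, four-band data below are synthetic stand-ins for multispectral pixels, not data from the record.

      import numpy as np
      from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
      from sklearn.linear_model import LogisticRegression

      # Toy stand-in for multispectral pixels: 3 classes in 4 "bands".
      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(m, 1.0, size=(200, 4)) for m in (0.0, 1.5, 3.0)])
      y = np.repeat([0, 1, 2], 200)

      # Gaussian maximum likelihood classification: per-class mean/covariance,
      # the model behind "maximum likelihood classification" in remote sensing.
      mlc = QuadraticDiscriminantAnalysis(store_covariance=True).fit(X, y)

      # Polytomous (multinomial) logistic regression: no Gaussian assumption.
      plr = LogisticRegression(max_iter=1000).fit(X, y)

      print("MLC accuracy:", mlc.score(X, y), "PLR accuracy:", plr.score(X, y))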

  7. Quantum Limits of Space-to-Ground Optical Communications

    Science.gov (United States)

    Hemmati, H.; Dolinar, S.

    2012-01-01

    For a pure loss channel, the ultimate capacity can be achieved with classical coherent states (i.e., ideal laser light): (1) the capacity-achieving receiver (measurement) is yet to be determined; (2) heterodyne detection approaches the ultimate capacity at high mean photon numbers; (3) photon-counting approaches the ultimate capacity at low mean photon numbers. A number of current technology limits drive the achievable performance of free-space communication links. Approaching fundamental limits in the bandwidth-limited regime: (1) heterodyne detection with high-order coherent-state modulation approaches the ultimate limits; SOA improvements to laser phase noise and adaptive optics systems for atmospheric transmission would help. (2) High-order intensity modulation and photon-counting can approach heterodyne detection within approximately a factor of 2; this may have advantages over coherent detection in the presence of turbulence. Approaching fundamental limits in the photon-limited regime: (1) low-duty-cycle binary coherent-state modulation (OOK, PPM) approaches the ultimate limits; SOA improvements to laser extinction ratio, receiver dark noise, jitter, and blocking would help. (2) In some link geometries (near-field links) number-state transmission could improve over coherent-state transmission.

  8. Maximum Entropy and Probability Kinematics Constrained by Conditionals

    Directory of Open Access Journals (Sweden)

    Stefan Lukits

    2015-03-01

    Two open questions of inductive reasoning are solved: (1) does the principle of maximum entropy (PME) give a solution to the obverse Majerník problem; and (2) is Wagner correct when he claims that Jeffrey's updating principle (JUP) contradicts PME? Majerník shows that PME provides unique and plausible marginal probabilities, given conditional probabilities. The obverse problem posed here is whether PME also provides such conditional probabilities, given certain marginal probabilities. The theorem developed to solve the obverse Majerník problem demonstrates that, in the special case introduced by Wagner, PME does not contradict JUP but elegantly generalizes it and offers a more integrated approach to probability updating.

  9. Maximum discharge rate of liquid-vapor mixtures from vessels

    International Nuclear Information System (INIS)

    Moody, F.J.

    1975-09-01

    A discrepancy exists in theoretical predictions of the two-phase equilibrium discharge rate from pipes attached to vessels. Theory which predicts critical flow data in terms of pipe exit pressure and quality severely overpredicts flow rates in terms of vessel fluid properties. This study shows that the discrepancy is explained by the flow pattern. Due to decompression and flashing as fluid accelerates into the pipe entrance, the maximum discharge rate from a vessel is limited by choking of a homogeneous bubbly mixture. The mixture tends toward a slip flow pattern as it travels through the pipe, finally reaching a different choked condition at the pipe exit

  10. Quench limits

    International Nuclear Information System (INIS)

    Sapinski, M.

    2012-01-01

    With thirteen beam-induced quenches and numerous Machine Development tests, the current knowledge of LHC magnet quench limits still contains many unknowns. Various approaches to determining the quench limits are reviewed and the results of the tests are presented. An attempt is made to reconstruct a coherent picture emerging from these results. The available methods for computing the quench levels are presented, together with the dedicated particle-shower simulations that are necessary to understand the tests. The future experiments needed to reach a better understanding of the quench limits, as well as of the limits for machine operation, are investigated. Possible strategies for setting BLM (Beam Loss Monitor) thresholds are discussed. (author)

  11. An approach to criteria, design limits and monitoring in nuclear fuel waste disposal

    Energy Technology Data Exchange (ETDEWEB)

    Simmons, G R; Baumgartner, P; Bird, G A; Davison, C C; Johnson, L H; Tamm, J A

    1994-12-01

    The Nuclear Fuel Waste Management Program has been established to develop and demonstrate the technology for safe geological disposal of nuclear fuel waste. One objective of the program is to show that a disposal system (i.e., a disposal centre and associated transportation system) can be designed and that it would be safe. Therefore the disposal system must be shown to comply with safety requirements specified in guidelines, standards, codes and regulations. The components of the disposal system must also be shown to operate within the limits specified in their design. Compliance and performance of the disposal system would be assessed on a site-specific basis by comparing estimates of the anticipated performance of the system and its components with compliance or performance criteria. A monitoring program would be developed to consider the effects of the disposal system on the environment and would include three types of monitoring: baseline monitoring, compliance monitoring, and performance monitoring. This report presents an approach to establishing compliance and performance criteria, limits for use in disposal system component design, and the main elements of a monitoring program for a nuclear fuel waste disposal system. (author). 70 refs., 9 tabs., 13 figs.

  12. An approach to criteria, design limits and monitoring in nuclear fuel waste disposal

    International Nuclear Information System (INIS)

    Simmons, G.R.; Baumgartner, P.; Bird, G.A.; Davison, C.C.; Johnson, L.H.; Tamm, J.A.

    1994-12-01

    The Nuclear Fuel Waste Management Program has been established to develop and demonstrate the technology for safe geological disposal of nuclear fuel waste. One objective of the program is to show that a disposal system (i.e., a disposal centre and associated transportation system) can be designed and that it would be safe. Therefore the disposal system must be shown to comply with safety requirements specified in guidelines, standards, codes and regulations. The components of the disposal system must also be shown to operate within the limits specified in their design. Compliance and performance of the disposal system would be assessed on a site-specific basis by comparing estimates of the anticipated performance of the system and its components with compliance or performance criteria. A monitoring program would be developed to consider the effects of the disposal system on the environment and would include three types of monitoring: baseline monitoring, compliance monitoring, and performance monitoring. This report presents an approach to establishing compliance and performance criteria, limits for use in disposal system component design, and the main elements of a monitoring program for a nuclear fuel waste disposal system. (author). 70 refs., 9 tabs., 13 figs

  13. Thermodynamic approach to biomass gasification; Approche thermodynamique des transformations de la biomasse

    Energy Technology Data Exchange (ETDEWEB)

    Boissonnet, G.; Seiler, J.M.

    2003-07-01

    The document presents an approach to biomass transformation in the presence of steam, hydrogen or oxygen. Calculation results based on thermodynamic equilibrium are discussed. The objective of gasification techniques is to increase the gas content in CO and H{sub 2}. The maximum content of these gases is obtained when thermodynamic equilibrium is approached; any optimisation of a process will thus tend to approach thermodynamic equilibrium conditions. Such calculations can also be used to determine the conditions which lead to an increase in the production of CO and H{sub 2}. A further objective is to determine transformation enthalpies, which are an important input for process calculations. Various existing processes are assessed, and the associated thermodynamic limitations are evidenced. (author)

  14. A Sum-of-Squares and Semidefinite Programming Approach for Maximum Likelihood DOA Estimation

    Directory of Open Access Journals (Sweden)

    Shu Cai

    2016-12-01

    Direction of arrival (DOA) estimation using a uniform linear array (ULA) is a classical problem in array signal processing. In this paper, we focus on DOA estimation based on the maximum likelihood (ML) criterion, transform the estimation problem into a novel formulation, named sum-of-squares (SOS), and then solve it using semidefinite programming (SDP). We first derive the SOS and SDP method for DOA estimation in the scenario of a single source and then extend it under the framework of alternating projection for multiple-DOA estimation. The simulations demonstrate that the SOS- and SDP-based algorithms can provide stable and accurate DOA estimation when the number of snapshots is small and the signal-to-noise ratio (SNR) is low. Moreover, they have a higher spatial resolution compared to existing methods based on the ML criterion.

  15. Linearized semiclassical initial value time correlation functions with maximum entropy analytic continuation.

    Science.gov (United States)

    Liu, Jian; Miller, William H

    2008-09-28

    The maximum entropy analytic continuation (MEAC) method is used to extend the range of accuracy of the linearized semiclassical initial value representation (LSC-IVR)/classical Wigner approximation for real-time correlation functions. LSC-IVR provides a very effective "prior" for the MEAC procedure since it is very good for short times, exact for all time and temperature for harmonic potentials (even for correlation functions of nonlinear operators), and becomes exact in the classical high-temperature limit. This combined MEAC+LSC/IVR approach is applied here to two highly nonlinear dynamical systems: a pure quartic potential in one dimension and liquid para-hydrogen at two thermal state points (25 and 14 K under nearly zero external pressure). The former example shows the MEAC procedure to be a very significant enhancement of the LSC-IVR for correlation functions of both linear and nonlinear operators, especially at low temperature where semiclassical approximations are least accurate. For liquid para-hydrogen, the LSC-IVR is seen to be excellent already at T=25 K, but the MEAC procedure produces a significant correction at the lower temperature (T=14 K). Comparisons are also made as to how the MEAC procedure is able to provide corrections for other trajectory-based dynamical approximations when used as priors.

  16. EFFICACY OF SUBMUCOSAL DELIVERY THROUGH A PARAPHARYNGEAL APPROACH IN THE TREATMENT OF LIMITED CRICOID CHONDROMA

    Directory of Open Access Journals (Sweden)

    M.T. Khorsi Y. Amidi

    2008-05-01

    Cartilaginous tumors comprise 1% of all laryngeal masses. Since they grow slowly and metastasis is rare, long-term survival is expected in cases of chondroma and chondrosarcoma. Based on these facts, and on the fact that total salvage surgery after recurrence of a previous tumor does not influence treatment outcomes, quality of life must be taken into great consideration. Based on 3 cases of limited chondrosarcoma that we have successfully operated on using submucosal delivery through a parapharyngeal approach, with several years of recurrence-free follow-up, the authors consider this technique an efficient approach to these tumors. Since the technique takes less time, requires no glottic incision, and allows the patient to be discharged in 2 days without insertion of an endolaryngeal stent, we believe this method is superior to laryngofissure or total laryngectomy.

  17. On the Use of Maximum Force Criteria to Predict Localised Necking in Metal Sheets under Stretch-Bending

    Directory of Open Access Journals (Sweden)

    Domingo Morales-Palma

    2017-11-01

    The maximum force criteria and their derivatives, the Swift and Hill criteria, have been extensively used in the past to study sheet formability. Many extensions or modifications of these criteria have been proposed to improve necking predictions under stretching-only conditions. This work analyses the maximum force principle under stretch-bending conditions and develops two different approaches to predict necking. The first is a generalisation of the classical maximum force criteria to stretch-bending processes. The second is an extension of a previous work of the authors based on critical-distance concepts, suggesting that necking of the sheet is controlled by the damage of a critical material volume located at the inner side of the sheet. An analytical deformation model is proposed to characterise the stretch-bending process under plane-strain conditions. Different parameters are considered, such as the thickness reduction, the gradient of variables through the sheet thickness, the thickness stress and the anisotropy of the material. The proposed necking models have been successfully applied to predict failure in different materials, such as steel, brass and aluminium.

  18. Equations of viscous flow of silicate liquids with different approaches for universality of high temperature viscosity limit

    Directory of Open Access Journals (Sweden)

    Ana F. Kozmidis-Petrović

    2014-06-01

    The Vogel-Fulcher-Tammann (VFT), Avramov and Milchev (AM), and Mauro, Yue, Ellison, Gupta and Allan (MYEGA) functions of viscous flow are analysed when a compositionally independent high-temperature viscosity limit is introduced instead of the compositionally dependent parameter η∞. Two different approaches are adopted. In the first, it is assumed that each model should have its own (average) high-temperature viscosity parameter η∞; in that case, η∞ is different for each of the three models. In the second, it is assumed that the high-temperature viscosity is a truly universal value, independent of the model; in this case the parameter η∞ would have the same value, log η∞ = −1.93 dPa·s, for all three models. 3D diagrams can successfully predict the difference in behaviour of the viscosity functions when the average or the universal high-temperature limit is applied in the calculations. The values of the AM function depend, to a greater extent, on whether the average or the universal value of η∞ is used, which is not the case with the VFT model. Our tests and the values of the standard error of estimate (SEE) show that there is no general rule as to whether the average or the universal high-temperature viscosity limit should be applied to obtain the best agreement with the experimental functions.
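
    With Tg and the fragility m as the remaining parameters, the three functions can be written in their commonly used Tg-scaled forms and evaluated against the universal limit quoted above. The Python sketch below assumes the usual glass-transition convention log η(Tg) = 13 in dPa·s, which is our assumption, not a value stated in the record, and m = 60 is an illustrative fragility.

      import numpy as np

      LOG_ETA_INF = -1.93   # universal high-temperature limit, log(dPa.s)
      LOG_ETA_TG = 13.0     # assumes eta(Tg) = 10^13 dPa.s (common convention)
      C = LOG_ETA_TG - LOG_ETA_INF

      def vft(x, m):        # x = Tg/T, m = fragility index
          return LOG_ETA_INF + C * C / (m * (1.0 / x - 1.0) + C)

      def am(x, m):
          return LOG_ETA_INF + C * x ** (m / C)

      def myega(x, m):
          return LOG_ETA_INF + C * x * np.exp((m / C - 1.0) * (x - 1.0))

      # All three coincide at Tg (log eta = 13) and share the high-T limit
      # as x -> 0, but diverge in between.
      for x in (1.0, 0.9, 0.7, 0.5):
          print(x, vft(x, 60.0), am(x, 60.0), myega(x, 60.0))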

  19. A qualitative risk assessment approach for Swiss dairy products: opportunities and limitations.

    Science.gov (United States)

    Menéndez González, S; Hartnack, S; Berger, T; Doherr, M; Breidenbach, E

    2011-05-01

    Switzerland implemented a risk-based monitoring of Swiss dairy products in 2002 based on a risk assessment (RA) that considered the probability of exceeding a microbiological limit value set by law. A new RA was launched in 2007 to review and further develop the previous assessment, and to make recommendations for future risk-based monitoring according to current risks. The resulting qualitative RA was designed to ascertain the risk to human health from the consumption of Swiss dairy products. The products and microbial hazards to be considered in the RA were determined based on a risk profile. The hazards included Campylobacter spp., Listeria monocytogenes, Salmonella spp., Shiga toxin-producing Escherichia coli, coagulase-positive staphylococci and Staphylococcus aureus enterotoxin. The release assessment considered the prevalence of the hazards in bulk milk samples, the influence of the process parameters on the microorganisms, and the influence of the type of dairy. The exposure assessment was linked to the production volume. An overall probability was estimated combining the probabilities of release and exposure for each combination of hazard, dairy product and type of dairy. This overall probability represents the likelihood of a product from a certain type of dairy exceeding the microbiological limit value and being passed on to the consumer. The consequences could not be fully assessed due to lack of detailed information on the number of disease cases caused by the consumption of dairy products. The results were expressed as a ranking of overall probabilities. Finally, recommendations for the design of the risk-based monitoring programme and for filling the identified data gaps were given. The aims of this work were (i) to present the qualitative RA approach for Swiss dairy products, which could be adapted to other settings and (ii) to discuss the opportunities and limitations of the qualitative method. © 2010 Blackwell Verlag GmbH.

  20. EDF's approach to determine specifications for nuclear power plant bulk chemicals

    International Nuclear Information System (INIS)

    Basile, Alix; Dijoux, Michel; Le-Calvar, Marc; Gressier, Frederic; Mole, Didier

    2012-09-01

    Chemical impurities in the primary, secondary and auxiliary circuits of nuclear power plants generate risks of corrosion of the fuel cladding and of steel and nickel-based alloys. The PMUC (Products and Materials Used in plants) organization established by EDF intends to limit this risk by specifying maximum levels of impurities in products and materials used for the operation and maintenance of Nuclear Power Plants (NPPs). Bulk chemical specifications, applied to primary- and secondary-circuit chemicals and to hydrogen and nitrogen gases, are particularly important to prevent chemical species from being involved in the corrosion of NPP materials. The application of EDF specifications should reasonably exclude any risk of degradation of the first and second containment barriers and of auxiliary circuits Important to Safety (IPS) by limiting the concentrations of chlorides, fluorides, sulfates... The risk of metal embrittlement by elements with low melting points (mercury, lead...) is also included. For the primary circuit, the specifications are intended to exclude the risk of activation of impurities introduced by the bulk chemicals. For the first containment barrier, to reduce the risk of deposits such as zeolites, the PMUC product specifications set limit values for calcium, magnesium, aluminum and silica. EDF's approach to establishing specifications for bulk chemicals also takes into account industrial production capacity, as well as costs, the limitations of analytical control methods (detection limits) and environmental release issues. This paper explains EDF's approach to specifying impurities in bulk chemicals. Also presented are the various parameters taken into account to determine the maximum pollution levels in the chemicals, the theoretical hypotheses used to set the specifications, and the calculation method used to verify that the specifications are suitable. (authors)

  1. Transport simulations of a density limit in radiation-dominated tokamak discharges: II

    International Nuclear Information System (INIS)

    Stotler, D.P.

    1991-05-01

    The procedures developed previously to simulate the radiatively induced tokamak density limit are used to examine in more detail the scaling of the density limit. It is found that the maximum allowable density increases with auxiliary power and decreases with impurity concentration. However, it is demonstrated that there is little dependence of the density limit on plasma elongation. These trends are consistent with experimental results. Our previous work used coronal equilibrium impurities; the primary result of that paper was that the maximum density increases with current when peaked profiles are assumed. Here, this behavior is shown to occur with a coronal nonequilibrium impurity as well. 26 refs., 4 figs

  2. 75 FR 43840 - Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for...

    Science.gov (United States)

    2010-07-27

    ...-17530; Notice No. 2] RIN 2130-ZA03 Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum... remains at $250. These adjustments are required by the Federal Civil Penalties Inflation Adjustment Act [email protected] . SUPPLEMENTARY INFORMATION: The Federal Civil Penalties Inflation Adjustment Act of 1990...

  3. Half-width at half-maximum, full-width at half-maximum analysis

    Indian Academy of Sciences (India)

    addition to the well-defined parameter full-width at half-maximum (FWHM). The distribution of ... optical side-lobes in the diffraction pattern resulting in steep central maxima [6], reduction of effects of ... and broad central peak. The idea of.

  4. LANSCE Beam Current Limiter (XL)

    International Nuclear Information System (INIS)

    Gallegos, F.R.; Hall, M.J.

    1997-01-01

    The Radiation Security System (RSS) at the Los Alamos Neutron Science Center (LANSCE) is an engineered safety system that provides personnel protection from prompt radiation due to accelerated proton beams. The Beam Current Limiter (XL), as an active component of the RSS, limits the maximum average current in a beamline, thus the current available for a beam spill accident. Exceeding the pre-set limit initiates action by the RSS to mitigate the hazard (insertion of beam stoppers in the low energy beam transport). The beam limiter is an electrically isolated, toroidal transformer and associated electronics. The device was designed to continuously monitor beamline currents independent of any external timing. Fail-safe operation was a prime consideration in its development. Fail-safe operation is defined as functioning as intended (due to redundant circuitry), functioning with a more sensitive fault threshold, or generating a fault condition. This report describes the design philosophy, hardware, implementation, operation, and limitations of the device

  5. Bayesian-statistical decision threshold, detection limit, and confidence interval in nuclear radiation measurement

    International Nuclear Information System (INIS)

    Weise, K.

    1998-01-01

    When a contribution of a particular nuclear radiation is to be detected, for instance a spectral line of interest for some purpose of radiation protection, and quantities and their uncertainties must be taken into account which, like influence quantities, cannot be determined by repeated measurements or by counting nuclear radiation events, then the conventional statistics of event frequencies is not sufficient for defining the decision threshold, the detection limit, and the limits of a confidence interval. These characteristic limits are therefore redefined on the basis of Bayesian statistics, to widen their applicability and in such a way that the usual practice remains as far as possible unaffected. The principle of maximum entropy is applied to establish probability distributions from the available information. Quantiles of these distributions are used to define the characteristic limits. Such a distribution must not, however, be interpreted as a distribution of event frequencies like the Poisson distribution; it rather expresses the actual state of incomplete knowledge of a physical quantity. The different definitions and interpretations and their quantitative consequences are presented and discussed with two examples. The new approach provides a theoretical basis for the DIN 25482-10 standard presently in preparation for general applications of the characteristic limits. (orig.) [de]
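
    For the simplest counting setup, the decision threshold reduces to a one-liner. The Python sketch below uses the standard expression for the standard uncertainty of the net count rate at a true value of zero; it is our minimal example, and the record's Bayesian construction generalises it to quantiles of maximum entropy distributions.

      import math

      def decision_threshold(n_b, t_b, t_g, k=1.645):
          # n_b background counts in time t_b; gross counting time t_g;
          # k = k_{1-alpha} (1.645 for alpha = 5%). Returns the net count
          # rate above which an effect is taken as "detected".
          rate_b = n_b / t_b
          u0 = math.sqrt(rate_b * (1.0 / t_g + 1.0 / t_b))  # std. of net rate at 0
          return k * u0

      print(decision_threshold(n_b=400, t_b=2000.0, t_g=1000.0))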

  6. Two-dimensional maximum entropy image restoration

    International Nuclear Information System (INIS)

    Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.

    1977-07-01

    An optical check problem was constructed to test P LOG P maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures

  7. Thermal and structural limitations for impurity-control components in FED/INTOR

    International Nuclear Information System (INIS)

    Majumdar, S.; Cha, Y.; Mattas, R.; Abdou, M.; Cramer, B.; Haines, J.

    1983-02-01

    The successful operation of the impurity-control system of the FED/INTOR will depend to a large extent on the ability of its various components to withstand the imposed thermal and mechanical loads. The present paper explores the thermal and stress analysis aspects of the limiter and divertor operation of the FED/INTOR in its reference configuration. Three basic limitations governing the design of the limiter and the divertor are the maximum allowable metal temperature, the maximum allowable stress intensity and the allowable fatigue life of the structural material. Other important design limitations stemming from sputtering, evaporation, melting during disruptions, etc. are not considered in the present paper. The materials considered in the present analysis are a copper and a vanadium alloy for the structural material and graphite, beryllium, beryllium oxide, tungsten and silicon carbide for the coating or tile material.

  8. Maximum power point tracking

    International Nuclear Information System (INIS)

    Enslin, J.H.R.

    1990-01-01

    A well-engineered renewable remote energy system utilizing the principle of maximum power point tracking can be more cost effective, has a higher reliability and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximizing the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Practical field measurements show that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved, and a total cost saving of at least 10-15% on the capital cost of these systems is achievable for small Remote Area Power Supply systems. The advantages are much greater for larger temperature variations and higher power ratings. Other advantages include optimal sizing and system monitoring and control.
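
    The hill-climbing idea reduces to a few lines: perturb the operating point, keep the direction while power rises, reverse when it falls. The Python sketch below is a generic perturb-and-observe tracker with a hypothetical panel curve; the record's regulator perturbs the charging current rather than the voltage, and the hardware interface is stubbed out here.

      def perturb_and_observe(power_at, v0, dv=0.1, steps=200):
          # power_at(v): measured output power at operating point v (stub).
          v, p = v0, power_at(v0)
          step = dv
          for _ in range(steps):
              v += step
              p_new = power_at(v)
              if p_new < p:      # power fell: reverse the perturbation
                  step = -step
              p = p_new
          return v, p

      # Hypothetical panel curve with a single maximum near 15 V.
      panel = lambda v: max(v * (3.0 - 0.09 * max(v - 14.0, 0.0) ** 2), 0.0)
      print(perturb_and_observe(panel, v0=12.0))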

  9. 7 CFR 3565.210 - Maximum interest rate.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Maximum interest rate. 3565.210 Section 3565.210... AGRICULTURE GUARANTEED RURAL RENTAL HOUSING PROGRAM Loan Requirements § 3565.210 Maximum interest rate. The interest rate for a guaranteed loan must not exceed the maximum allowable rate specified by the Agency in...

  10. Net-section limit moments and approximate J estimates for circumferential cracks at the interface between elbows and pipes

    International Nuclear Information System (INIS)

    Song, Tae-Kwang; Kim, Yun-Jae; Oh, Chang-Kyun; Jin, Tae-Eun; Kim, Jong-Sung

    2009-01-01

    This paper firstly presents net-section limit moments for circumferential through-wall and part-through surface cracks at the interface between elbows and attached straight pipes under in-plane bending. Closed-form solutions are proposed based on fitting results from small strain FE limit analyses using elastic-perfectly plastic materials. Net-section limit moments for circumferential cracks at the interface between elbows and attached straight pipes are found to be close to those for cracks in the centre of elbows, implying that the location of the circumferential crack within an elbow has a minimal effect on the net-section limit moment. Accordingly it is also found that the assumption that the crack locates in a straight pipe could significantly overestimate the net-section limit load (and thus maximum load-carrying capacity) of the cracked component. Based on the proposed net-section limit moment, a method to estimate elastic-plastic J based on the reference stress approach is proposed for circumferential cracks at the interface between elbows and attached straight pipes under in-plane bending.

  11. 47 CFR 22.727 - Power limits for conventional rural radiotelephone transmitters.

    Science.gov (United States)

    2010-10-01

    ... this section. (a) Maximum ERP. The effective radiated power (ERP) of central office and rural... circumstances.

        Frequency range (MHz)    Maximum ERP (watts)
        152-153                  1400
        157-159                   150
        454-455                  3500
        459-460                   150

    (b) Basic power limit. Except as provided in paragraph (d) of this section, the ERP of central office...

  12. Maximum one-shot dissipated work from Rényi divergences

    Science.gov (United States)

    Yunger Halpern, Nicole; Garner, Andrew J. P.; Dahlsten, Oscar C. O.; Vedral, Vlatko

    2018-05-01

    Thermodynamics describes large-scale, slowly evolving systems. Two modern approaches generalize thermodynamics: fluctuation theorems, which concern finite-time nonequilibrium processes, and one-shot statistical mechanics, which concerns small scales and finite numbers of trials. Combining these approaches, we calculate a one-shot analog of the average dissipated work defined in fluctuation contexts: the cost of performing a protocol in finite time instead of quasistatically. The average dissipated work has been shown to be proportional to a relative entropy between phase-space densities, to a relative entropy between quantum states, and to a relative entropy between probability distributions over possible values of work. We derive one-shot analogs of all three equations, demonstrating that the order-infinity Rényi divergence is proportional to the maximum possible dissipated work in each case. These one-shot analogs of fluctuation-theorem results contribute to the unification of these two toolkits for small-scale, nonequilibrium statistical physics.
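
    For orientation (standard definition, quoted here rather than taken from the paper), in the classical case of probability distributions p and q the order-infinity Rényi divergence that appears in these results has the closed form

      D_{\infty}(p \,\|\, q) \;=\; \log \max_{i} \frac{p_{i}}{q_{i}}

    so the maximum one-shot dissipated work is governed by the single worst-case likelihood ratio, whereas the ordinary relative entropy, an average of log-likelihood ratios, sets the average dissipated work.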

  13. Estimation of Road Vehicle Speed Using Two Omnidirectional Microphones: A Maximum Likelihood Approach

    Directory of Open Access Journals (Sweden)

    López-Valcarce Roberto

    2004-01-01

    We address the problem of estimating the speed of a road vehicle from its acoustic signature, recorded by a pair of omnidirectional microphones located next to the road. This choice of sensors is motivated by their nonintrusive nature as well as low installation and maintenance costs. A novel estimation technique is proposed, based on the maximum likelihood principle. It directly estimates car speed without any assumptions on the acoustic signal emitted by the vehicle. This has the advantages of bypassing troublesome intermediate delay-estimation steps as well as eliminating the need for an accurate yet sufficiently general acoustic traffic model. An analysis of the estimate for narrowband and broadband sources is provided and verified with computer simulations. The estimation algorithm uses a bank of modified cross-correlators and is therefore well suited to DSP implementation, performing well with preliminary field data.

  14. A design approach for systems based on magnetic pulse compression

    International Nuclear Information System (INIS)

    Praveen Kumar, D. Durga; Mitra, S.; Senthil, K.; Sharma, D. K.; Rajan, Rehim N.; Sharma, Archana; Nagesh, K. V.; Chakravarthy, D. P.

    2008-01-01

    A design approach is given that yields the optimum number of stages in a magnetic pulse compression circuit and the gain per stage. The limitation on the maximum gain per stage is discussed. The total system volume is minimized by considering the energy-storage capacitor volume and the magnetic core volume at each stage. Finally, the design of a magnetic-pulse-compression-based linear induction accelerator of 200 kV, 5 kA, and 100 ns with a repetition rate of 100 Hz is discussed together with its experimental results.

  15. Effects of structure and defect on fatigue limit in high strength ductile irons

    International Nuclear Information System (INIS)

    Kim, Jin Hak; Kim, Min Gun

    2000-01-01

    In this paper, the influence of several factors, such as hardness, internal defects and non-propagating cracks, on fatigue limits was investigated with three kinds of ductile iron specimens. From the experimental results, the fatigue limits of the high-strength austempered specimens were examined in relation to hardness and tensile strength; a marked improvement of the fatigue limits was not observed. The maximum defect size was an important factor for predicting and evaluating the fatigue limits of ductile irons, and the quantitative relationship between the fatigue limit (σ_ω) and the maximum defect size (√area_max) was expressed as σ_ω^n · √area_max = C_2. Also, it was possible to explain the difference in the fatigue limits of the three ductile irons by introducing the non-propagating crack rates.

  16. The moving-window Bayesian maximum entropy framework: estimation of PM(2.5) yearly average concentration across the contiguous United States.

    Science.gov (United States)

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L

    2012-09-01

    Geostatistical methods are widely used in estimating long-term exposures for epidemiological studies on air pollution, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and the uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian maximum entropy (BME) method and applied this framework to estimate fine particulate matter (PM(2.5)) yearly average concentrations over the contiguous US. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air-monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM(2.5) data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least 17.8% reduction in mean square error (MSE) in estimating the yearly PM(2.5). Moreover, the MWBME method further reduces the MSE by 8.4-43.7%, with the proportion of incomplete data increased from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM(2.5) across large geographical domains with expected spatial non-stationarity.

  17. The moving-window Bayesian Maximum Entropy framework: Estimation of PM2.5 yearly average concentration across the contiguous United States

    Science.gov (United States)

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L.

    2013-01-01

    Geostatistical methods are widely used in estimating long-term exposures for air pollution epidemiological studies, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian Maximum Entropy (BME) method and applied this framework to estimate fine particulate matter (PM2.5) yearly average concentrations over the contiguous U.S. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM2.5 data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least a 17.8% reduction in mean square error (MSE) in estimating the yearly PM2.5. Moreover, the MWBME method further reduces the MSE by 8.4% to 43.7% as the proportion of incomplete data increases from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM2.5 across large geographical domains with expected spatial non-stationarity. PMID:22739679

  18. 5 CFR 582.402 - Maximum garnishment limitations.

    Science.gov (United States)

    2010-01-01

    ... earnings that may be garnished for a Federal, State or local tax obligation or in compliance with an order... alimony, including any amounts withheld to offset administrative costs as provided for in § 582.305(k... of an employee-obligor's aggregate disposable earnings for any workweek in compliance with legal...

  19. On the maximum-entropy method for kinetic equation of radiation, particle and gas

    International Nuclear Information System (INIS)

    El-Wakil, S.A.; Madkour, M.A.; Degheidy, A.R.; Machali, H.M.

    1995-01-01

    The maximum-entropy approach is used to calculate some problems in radiative transfer and reactor physics, such as the escape probability, the emergent and transmitted intensities for a finite slab, and the emergent intensity for a semi-infinite medium. It is also employed to solve problems involving spherical geometry, such as luminosity (the total energy emitted by a sphere), the neutron capture probability and the albedo problem. The technique is further employed in the kinetic theory of gases to calculate the Poiseuille flow and thermal creep of a rarefied gas between two plates. Numerical calculations are performed and compared with the published data. The comparisons demonstrate that the maximum-entropy results are in good agreement with the exact ones. (orig.)

  20. Shotgun approaches to gait analysis : insights & limitations

    NARCIS (Netherlands)

    Kaptein, Ronald G.; Wezenberg, Daphne; IJmker, Trienke; Houdijk, Han; Beek, Peter J.; Lamoth, Claudine J. C.; Daffertshofer, Andreas

    2014-01-01

    Background: Identifying features for gait classification is a formidable problem. The number of candidate measures is legion. This calls for proper, objective criteria when ranking their relevance. Methods: Following a shotgun approach we determined a plenitude of kinematic and physiological gait

  1. Determination Of Maximum Power Of The RSG-Gas At Power Operation Mode Using One Line Cooling System

    International Nuclear Information System (INIS)

    Hastuti, Endiah Puji; Kuntoro, Iman; Darwis Isnaini, M.

    2000-01-01

    To minimize operating costs, an operation mode using one line of the cooling system is being evaluated. The maximum reactor power must be determined to assure that the existing safety criteria are not violated. The analysis was done by means of a core thermal-hydraulic code, COOLOD-N, which solves the core thermal-hydraulic equations at steady-state conditions. By varying the reactor power as input, thermal-hydraulic parameters such as the fuel cladding and fuel meat temperatures, as well as the safety margin against flow instability, were calculated. Imposing the safety criteria on the results, the maximum permissible power for this operation mode was found to be 17.1 MW. Nevertheless, for operation the maximum power is limited to 15 MW.

  2. Mammographic image restoration using maximum entropy deconvolution

    International Nuclear Information System (INIS)

    Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R

    2004-01-01

    An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization

  3. Benefits and limitations of a multidisciplinary approach to individualized management of Cornelia de Lange syndrome and related diagnoses.

    Science.gov (United States)

    January, Kathleen; Conway, Laura J; Deardorff, Matthew; Harrington, Ann; Krantz, Ian D; Loomes, Kathleen; Pipan, Mary; Noon, Sarah E

    2016-06-01

    Given the clinical complexities of Cornelia de Lange Syndrome (CdLS), the Center for CdLS and Related Diagnoses at The Children's Hospital of Philadelphia (CHOP) and The Multidisciplinary Clinic for Adolescents and Adults at Greater Baltimore Medical Center (GBMC) were established to develop a comprehensive approach to clinical management and research issues relevant to CdLS. Little work has been done to evaluate the general utility of a multispecialty approach to patient care. Previous research demonstrates several advantages and disadvantages of multispecialty care. This research aims to better understand the benefits and limitations of a multidisciplinary clinic setting for individuals with CdLS and related diagnoses. Parents of children with CdLS and related diagnoses who have visited a multidisciplinary clinic (N = 52) and who have not visited a multidisciplinary clinic (N = 69) were surveyed to investigate their attitudes. About 90.0% of multispecialty clinic attendees indicated a preference for multidisciplinary care. However, some respondents cited a need for additional clinic services, including more opportunity to meet with other specialists (N = 20), such as behavioral health, and increased information about research studies (N = 15). Travel distance and expenses often prevented families' multidisciplinary clinic attendance (N = 41 and N = 35, respectively). Despite the identified limitations, these findings contribute to the evidence demonstrating the utility of a multispecialty approach to patient care. This approach ultimately has the potential to improve healthcare not just for individuals with CdLS but for those with medically complex diagnoses in general. © 2016 Wiley Periodicals, Inc.

  4. MXLKID: a maximum likelihood parameter identifier

    International Nuclear Information System (INIS)

    Gavel, D.T.

    1980-07-01

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
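
    The core idea, maximizing a likelihood function of noisy measurements over unknown system parameters, can be miniaturized as follows. This Python/SciPy toy identifies a decay rate in a one-parameter system and is only a stand-in for MXLKID's LRLTRAN implementation, not the program's actual algorithm.

      import numpy as np
      from scipy.optimize import minimize

      # Identify an unknown decay rate theta in x' = -theta*x from noisy
      # samples by maximizing the Gaussian likelihood (here: minimizing
      # the negative log-likelihood over theta and the noise level).
      rng = np.random.default_rng(1)
      theta_true, sigma = 0.7, 0.05
      t = np.linspace(0.0, 5.0, 50)
      y = np.exp(-theta_true * t) + rng.normal(0.0, sigma, t.size)

      def neg_log_likelihood(params):
          theta, log_s = params
          resid = y - np.exp(-theta * t)
          s2 = np.exp(2.0 * log_s)
          return 0.5 * np.sum(resid**2 / s2 + np.log(2.0 * np.pi * s2))

      fit = minimize(neg_log_likelihood, x0=[0.3, np.log(0.1)],
                     method="Nelder-Mead")
      print("theta_hat, sigma_hat:", fit.x[0], np.exp(fit.x[1]))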

  5. 14 CFR 125.93 - Airplane limitations.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Airplane limitations. 125.93 Section 125.93...: AIRPLANES HAVING A SEATING CAPACITY OF 20 OR MORE PASSENGERS OR A MAXIMUM PAYLOAD CAPACITY OF 6,000 POUNDS OR MORE; AND RULES GOVERNING PERSONS ON BOARD SUCH AIRCRAFT Airplane Requirements § 125.93 Airplane...

  6. Flame Spread and Group-Combustion Excitation in Randomly Distributed Droplet Clouds with Low-Volatility Fuel near the Excitation Limit: a Percolation Approach Based on Flame-Spread Characteristics in Microgravity

    Science.gov (United States)

    Mikami, Masato; Saputro, Herman; Seo, Takehiko; Oyagi, Hiroshi

    2018-03-01

    Stable operation of liquid-fueled combustors requires the group combustion of fuel spray. Our study employs a percolation approach to describe unsteady group-combustion excitation based on findings obtained from microgravity experiments on the flame spread of fuel droplets. We focus on droplet clouds distributed randomly in three-dimensional square lattices with a low-volatility fuel, such as n-decane in room-temperature air, where the pre-vaporization effect is negligible. We also focus on the flame spread in dilute droplet clouds near the group-combustion-excitation limit, where the droplet interactive effect is assumed negligible. The results show that the occurrence probability of group combustion decreases sharply as the mean droplet spacing increases past a specific value, which is termed the critical mean droplet spacing. If the lattice size is at least about ten times the flame-spread limit distance, the flame-spread characteristics are similar to those over an infinitely large cluster. The number density of unburned droplets remaining after completion of burning attains a maximum around the critical mean droplet spacing. Therefore, the critical mean droplet spacing is a good index for stable combustion and unburned hydrocarbon. In the critical condition, the flame spreads through complicated paths, and thus the characteristic time scale of flame spread over droplet clouds becomes very large. The overall flame-spread rate of randomly distributed droplet clouds is almost the same as the flame-spread rate of a linear droplet array, except near the flame-spread limit.
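
    The percolation picture described above can be reproduced in miniature with a few lines of code. The sketch below is an assumption-laden illustration, not the authors' model: droplets occupy sites of an L^3 lattice with an occupation probability chosen so that the mean droplet spacing is roughly the requested value, flame spreads between droplets closer than a flame-spread limit distance, and "group combustion" is counted when burning started at one face reaches the opposite face.

      # Percolation-style sketch of group-combustion excitation
      # (illustrative assumptions throughout; lattice units).
      import numpy as np
      from scipy.spatial import cKDTree

      def group_combustion(L=20, mean_spacing=1.5, limit=2.0, rng=None):
          if rng is None:
              rng = np.random.default_rng()
          p = min(1.0, mean_spacing**-3)        # number density ~ spacing^-3 (rough)
          sites = np.argwhere(rng.random((L, L, L)) < p).astype(float)
          if sites.size == 0:
              return False
          tree = cKDTree(sites)
          burning = set(np.where(sites[:, 0] == 0)[0])   # ignite the x = 0 face
          frontier = list(burning)
          while frontier:                       # flame spread as breadth-first search
              i = frontier.pop()
              for j in tree.query_ball_point(sites[i], r=limit):
                  if j not in burning:
                      burning.add(j)
                      frontier.append(j)
          return any(sites[i, 0] == L - 1 for i in burning)

      for s in (1.0, 1.5, 2.0, 2.5):            # occurrence probability vs spacing
          runs = [group_combustion(mean_spacing=s) for _ in range(20)]
          print(s, sum(runs) / 20.0)

    Sweeping the mean droplet spacing shows the sharp drop in occurrence probability around a critical spacing, the behaviour the record describes.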

  7. Maximum Torque and Momentum Envelopes for Reaction Wheel Arrays

    Science.gov (United States)

    Markley, F. Landis; Reynolds, Reid G.; Liu, Frank X.; Lebsock, Kenneth L.

    2009-01-01

    Spacecraft reaction wheel maneuvers are limited by the maximum torque and/or angular momentum that the wheels can provide. For an n-wheel configuration, the torque or momentum envelope can be obtained by projecting the n-dimensional hypercube, representing the domain boundary of individual wheel torques or momenta, into three dimensional space via the 3xn matrix of wheel axes. In this paper, the properties of the projected hypercube are discussed, and algorithms are proposed for determining this maximal torque or momentum envelope for general wheel configurations. Practical strategies for distributing a prescribed torque or momentum among the n wheels are presented, with special emphasis on configurations of four, five, and six wheels.
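
    For small n, the projected-hypercube construction can be checked by brute force: project the 2^n vertices of the wheel-torque hypercube through the 3xn axis matrix and take the convex hull. The sketch below assumes an example four-wheel pyramid configuration with unit torque per wheel; the paper's algorithms are more general than this enumeration.

      # Brute-force torque/momentum envelope for n = 4 reaction wheels.
      # The pyramid geometry and unit torque limits are assumed examples.
      import itertools
      import numpy as np
      from scipy.spatial import ConvexHull

      beta = np.deg2rad(54.73)                  # assumed wheel elevation angle
      W = np.array([[np.cos(beta), 0.0, -np.cos(beta), 0.0],
                    [0.0, np.cos(beta), 0.0, -np.cos(beta)],
                    [np.sin(beta), np.sin(beta), np.sin(beta), np.sin(beta)]])

      corners = np.array(list(itertools.product([-1.0, 1.0], repeat=4)))  # 2^4 vertices
      envelope = ConvexHull(corners @ W.T)      # project to 3-D; hull = envelope
      print("envelope volume:", envelope.volume)
      print("number of facets:", len(envelope.simplices))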

  8. The effect of coupling hydrologic and hydrodynamic models on probable maximum flood estimation

    Science.gov (United States)

    Felder, Guido; Zischg, Andreas; Weingartner, Rolf

    2017-07-01

    Deterministic rainfall-runoff modelling usually assumes a stationary hydrological system, as model parameters are calibrated with, and are therefore dependent on, observed data. However, runoff processes are probably not stationary in the case of a probable maximum flood (PMF), where discharge greatly exceeds observed flood peaks. Developing hydrodynamic models and using them to build coupled hydrologic-hydrodynamic models can potentially improve the plausibility of PMF estimations. This study aims to assess the potential benefits and constraints of coupled modelling compared to standard deterministic hydrologic modelling when it comes to PMF estimation. The two modelling approaches are applied using a set of 100 spatio-temporal probable maximum precipitation (PMP) distribution scenarios. The resulting hydrographs, the resulting peak discharges, and the reliability and plausibility of the estimates are evaluated. The discussion of the results shows that coupling hydrologic and hydrodynamic models substantially improves the physical plausibility of PMF modelling, although both modelling approaches lead to PMF estimations for the catchment outlet that fall within a similar range. Using a coupled model is particularly suggested in cases where considerable flood-prone areas are situated within a catchment.

  9. Structural analysis of steam generator internals following feed water main steam line break: DLF approach

    International Nuclear Information System (INIS)

    Bhasin, Vivek; Kushwaha, H.S.; Mahajan, S.C.; Kakodkar, A.

    1993-01-01

    In order to evaluate the possible release of radioactivity in extreme events, some postulated accidents are analysed and studied during the design stage of a Steam Generator (SG). Among the various accidents postulated, the most important are the Feed Water Line Break (FWLB) and the Main Steam Line Break (MSLB). This report concerns the dynamic structural analysis of SG internals following FWLB/MSLB. The pressure/drag-force time histories considered correspond to the conditions leading to the accident of maximum potential. The SG internals were analysed using two approaches of structural dynamics. In the first approach, the simplified DLF method was adopted; it yields upper-bound values of stresses and deflections. In the second approach, a time-history analysis by the mode superposition technique was adopted, which gives more realistic results. The structure was qualified as per ASME B and PV Code Sec III NB. It was concluded that in all the components except the perforated flow distribution plate, the stress values based on elastic analysis are within the limits specified by the ASME Code. In the case of the perforated flow distribution plate, the stress values based on elastic analysis during the MSLB transient are higher than the ASME Code limits, so its limit load analysis had to be done. Finally, the collapse pressure evaluated using limit load analysis was shown to be within the limits of ASME B and PV Code Sec III NB. (author). 31 refs., 94 figs., 16 tabs
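
    For context, the DLF idea can be stated in two lines of code. For an undamped single-degree-of-freedom system under a rectangular force pulse of duration td and natural period T, the classical result is DLF = 2 sin(pi*td/T) for td < T/2 and DLF = 2 otherwise, which is why the simplified method bounds the response from above. The snippet below is a textbook illustration, not the report's analysis.

      # Classical dynamic load factor for a rectangular pulse on an
      # undamped SDOF system (textbook result, illustrative only).
      import numpy as np

      def dlf_rectangular_pulse(td, T):
          ratio = td / T
          return 2.0 if ratio >= 0.5 else 2.0 * np.sin(np.pi * ratio)

      for ratio in (0.1, 0.25, 0.5, 1.0):
          print(f"td/T = {ratio:4.2f} -> DLF = {dlf_rectangular_pulse(ratio, 1.0):.3f}")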

  10. Contaminant ingress into multizone buildings: An analytical state-space approach

    KAUST Repository

    Parker, Simon

    2013-08-13

    The ingress of exterior contaminants into buildings is often assessed by treating the building interior as a single well-mixed space. Multizone modelling provides an alternative way of representing buildings that can estimate concentration time series in different internal locations. A state-space approach is adopted to represent the concentration dynamics within multizone buildings. Analysis based on this approach is used to demonstrate that the exposure in every interior location is limited to the exterior exposure in the absence of removal mechanisms. Estimates are also developed for the short term maximum concentration and exposure in a multizone building in response to a step-change in concentration. These have considerable potential for practical use. The analytical development is demonstrated using a simple two-zone building with an inner zone and a range of existing multizone models of residential buildings. Quantitative measures are provided of the standard deviation of concentration and exposure within a range of residential multizone buildings. Ratios of the maximum short term concentrations and exposures to single zone building estimates are also provided for the same buildings. © 2013 Tsinghua University Press and Springer-Verlag Berlin Heidelberg.
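
    A two-zone version of the state-space model is easy to write down and already shows the bounding behaviour noted above. The sketch below uses assumed air flows and volumes, not the paper's buildings: dC/dt = A C + b c_ext, with the exterior concentration stepping from 0 to 1.

      # Two-zone state-space contaminant model (assumed parameters).
      import numpy as np
      from scipy.integrate import solve_ivp

      Q, q = 100.0, 30.0        # exterior<->outer and outer<->inner flows [m^3/h]
      V1, V2 = 50.0, 20.0       # zone volumes [m^3]

      # outer zone: V1 dC1/dt = Q (c_ext - C1) + q (C2 - C1)
      # inner zone: V2 dC2/dt = q (C1 - C2)
      A = np.array([[-(Q + q) / V1, q / V1],
                    [q / V2, -q / V2]])
      b = np.array([Q / V1, 0.0])

      sol = solve_ivp(lambda t, C: A @ C + b * 1.0,   # c_ext steps from 0 to 1 at t = 0
                      (0.0, 24.0), [0.0, 0.0], dense_output=True)
      C = sol.sol(np.linspace(0.0, 24.0, 500))
      print("peak zone concentrations:", C.max(axis=1))   # never exceed c_ext = 1

    With no removal mechanisms in A, neither zone can overshoot the exterior level, which is the exposure bound the analysis derives in general.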

  11. FPGA Hardware Acceleration of a Phylogenetic Tree Reconstruction with Maximum Parsimony Algorithm

    OpenAIRE

    BLOCK, Henry; MARUYAMA, Tsutomu

    2017-01-01

    In this paper, we present an FPGA hardware implementation for a phylogenetic tree reconstruction with a maximum parsimony algorithm. We base our approach on a particular stochastic local search algorithm that uses the Progressive Neighborhood and the Indirect Calculation of Tree Lengths method. This method is widely used for the acceleration of the phylogenetic tree reconstruction algorithm in software. In our implementation, we define a tree structure and accelerate the search by parallel an...

  12. 40 CFR 421.122 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Science.gov (United States)

    2010-07-01

    Table fragment recovered from the source record (units: mg/troy ounce of silver from film stripping; columns: maximum for any 1 day / maximum for monthly average): Copper 95.670 / 50.350; Zinc 73.510 / 30.720. The remaining rows and the repeated table headers are truncated in the record.

  13. 40 CFR 421.123 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Science.gov (United States)

    2010-07-01

    Table fragment recovered from the source record (units: mg/troy ounce of silver from film stripping; columns: maximum for any 1 day / maximum for monthly average): Copper 64.450 / 30.720; Zinc 51.360 / 21.150; Ammonia (as N) 6,712... (value truncated). The remaining rows and the repeated table headers are truncated in the record.

  14. Maximum parsimony, substitution model, and probability phylogenetic trees.

    Science.gov (United States)

    Weng, J F; Thomas, D A; Mareels, I

    2011-01-01

    The problem of inferring phylogenies (phylogenetic trees) is one of the main problems in computational biology. There are three main methods for inferring phylogenies: Maximum Parsimony (MP), Distance Matrix (DM) and Maximum Likelihood (ML), of which the MP method is the most well-studied and popular. In the MP method, the optimization criterion is the number of substitutions of the nucleotides, computed from the differences in the investigated nucleotide sequences. However, the MP method is often criticized because it counts only the substitutions observable at the current time, omitting the unobservable substitutions that really occurred in the evolutionary history. In order to take the unobservable substitutions into account, substitution models have been established and are now widely used in the DM and ML methods, but these substitution models cannot be used within the classical MP method. Recently the authors proposed a probability representation model for phylogenetic trees; the trees reconstructed in this model are called probability phylogenetic trees. One of the advantages of the probability representation model is that it can include a substitution model to infer phylogenetic trees based on the MP principle. In this paper we explain how to use a substitution model in the reconstruction of probability phylogenetic trees and show the advantage of this approach with examples.
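
    The classical MP substitution count the abstract refers to is the one returned by Fitch's small-parsimony algorithm; a minimal version for one site on a fixed binary tree is sketched below (a standard textbook routine, not the authors' probability-tree method).

      # Fitch's parsimony count for one site on a fixed binary tree.
      # Trees are nested 2-tuples; leaves are nucleotide characters.
      def fitch(tree):
          if isinstance(tree, str):            # leaf: observed nucleotide
              return {tree}, 0
          left, right = tree
          s1, c1 = fitch(left)
          s2, c2 = fitch(right)
          inter = s1 & s2
          if inter:                            # overlap: no substitution charged
              return inter, c1 + c2
          return s1 | s2, c1 + c2 + 1          # disjoint: one substitution charged

      states, score = fitch((("A", "A"), ("G", ("G", "T"))))
      print(score)                             # 2 observable substitutions

    By construction this counts only the minimum number of observable substitutions, which is exactly the criticism of classical MP restated above.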

  15. Criticality predicts maximum irregularity in recurrent networks of excitatory nodes.

    Directory of Open Access Journals (Sweden)

    Yahya Karimipanah

    A rigorous understanding of brain dynamics and function requires a conceptual bridge between multiple levels of organization, including neural spiking and network-level population activity. Mounting evidence suggests that neural networks of cerebral cortex operate at a critical regime, which is defined as a transition point between two phases, one of short-lasting and one of chaotic activity. However, despite the fact that criticality brings about certain functional advantages for information processing, its supporting evidence is still far from conclusive, as it has been mostly based on power-law scaling of the sizes and durations of cascades of activity. Moreover, to what degree this hypothesis can explain some fundamental features of neural activity is still largely unknown. One of the most prevalent features of cortical activity in vivo is known to be the spike irregularity of spike trains, measured in terms of a coefficient of variation (CV) larger than one. Here, using a minimal computational model of excitatory nodes, we show that irregular spiking (CV > 1) naturally emerges in a recurrent network operating at criticality. More importantly, we show that even in the presence of other sources of spike irregularity, being at criticality maximizes the mean coefficient of variation of neurons, thereby maximizing their spike irregularity. Furthermore, we also show that such maximized irregularity results in maximum correlation between neuronal firing rates and their corresponding spike irregularity (measured in terms of CV). On the one hand, using a model in the universality class of directed percolation, we propose new hallmarks of criticality at the single-unit level, which could be applicable to any network of excitable nodes. On the other hand, given the controversy of the neural criticality hypothesis, we discuss the limitations of this approach to neural systems and to what degree they support the criticality hypothesis in real neural networks. Finally
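
    The irregularity measure used above is simply the coefficient of variation of the inter-spike intervals, CV = std/mean; a Poisson spike train gives CV close to 1, and CV > 1 indicates super-Poisson irregularity. A two-line check:

      # CV of inter-spike intervals; a Poisson train gives CV ~ 1.
      import numpy as np

      def cv_of_spike_train(spike_times):
          isi = np.diff(np.sort(spike_times))
          return isi.std() / isi.mean()

      rng = np.random.default_rng(0)
      poisson_spikes = np.cumsum(rng.exponential(1.0, size=10000))
      print(cv_of_spike_train(poisson_spikes))   # approximately 1.0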

  16. Physiological responses to short-term thermal stress in mayfly (Neocloeon triangulifer) larvae in relation to upper thermal limits.

    Science.gov (United States)

    Kim, Kyoung Sun; Chou, Hsuan; Funk, David H; Jackson, John K; Sweeney, Bernard W; Buchwalter, David B

    2017-07-15

    Understanding species' thermal limits and their physiological determinants is critical in light of climate change and other human activities that warm freshwater ecosystems. Here, we ask whether oxygen limitation determines the chronic upper thermal limits in larvae of the mayfly Neocloeon triangulifer , an emerging model for ecological and physiological studies. Our experiments are based on a robust understanding of the upper acute (∼40°C) and chronic thermal limits of this species (>28°C, ≤30°C) derived from full life cycle rearing experiments across temperatures. We tested two related predictions derived from the hypothesis that oxygen limitation sets the chronic upper thermal limits: (1) aerobic scope declines in mayfly larvae as they approach and exceed temperatures that are chronically lethal to larvae; and (2) genes indicative of hypoxia challenge are also responsive in larvae exposed to ecologically relevant thermal limits. Neither prediction held true. We estimated aerobic scope by subtracting measurements of standard oxygen consumption rates from measurements of maximum oxygen consumption rates, the latter of which was obtained by treating with the metabolic uncoupling agent carbonyl cyanide-4-(trifluoromethoxy) pheylhydrazone (FCCP). Aerobic scope was similar in larvae held below and above chronic thermal limits. Genes indicative of oxygen limitation (LDH, EGL-9) were only upregulated under hypoxia or during exposure to temperatures beyond the chronic (and more ecologically relevant) thermal limits of this species (LDH). Our results suggest that the chronic thermal limits of this species are likely not driven by oxygen limitation, but rather are determined by other factors, e.g. bioenergetics costs. We caution against the use of short-term thermal ramping approaches to estimate critical thermal limits (CT max ) in aquatic insects because those temperatures are typically higher than those that occur in nature. © 2017. Published by The Company of

  17. Effects of the location of a biased limiter on turbulent transport in the IR-T1 tokamak plasma

    International Nuclear Information System (INIS)

    Alipour, R.; Ghoranneviss, M.; Salar Elahi, A.; Meshkani, S.

    2017-01-01

    Plasma confinement plays an important role in fusion studies. Applying an external voltage using a limiter biasing system has proved to be an efficient approach to plasma confinement. In this study, the position of the limiter biasing system was changed to investigate the effect of applying external voltages to the plasma at different positions. External voltages of ±200 V were applied at the plasma edge and at 5 mm and 10 mm inside the plasma. Then, the main plasma parameters were measured. The results show that the poloidal turbulent transport and the radial electric field increased by about 25-35% and 35-45%, respectively (especially when the limiter biasing system was placed 5 mm inside the plasma). Also, the Reynolds stress experienced its maximum reduction, about 5-10%, when the limiter biasing system was 5 mm inside the plasma and a voltage of +200 V was applied. When the limiter biasing system moved 10 mm inside the plasma, the main plasma parameters experienced more instabilities and fluctuations than at the other positions. (authors)

  18. Maximum Feedrate Interpolator for Multi-axis CNC Machining with Jerk Constraints

    OpenAIRE

    Beudaert , Xavier; Lavernhe , Sylvain; Tournier , Christophe

    2012-01-01

    A key role of the CNC is to perform the feedrate interpolation, that is, to generate the setpoints for each machine tool axis. The aim of the VPOp algorithm is to make maximum use of the machine tool while respecting both tangential and axis jerk limits on rotary and linear axes. The developed algorithm uses an iterative constraints-intersection approach. At each sampling period, all the constraints given by each axis are expressed, and by intersecting all of them the allowable interval for the next poin...

  19. Application of maximum values for radiation exposure and principles for the calculation of radiation dose

    International Nuclear Information System (INIS)

    2000-01-01

    The guide sets out the mathematical definitions and principles involved in the calculation of the equivalent dose and the effective dose, and the instructions concerning the application of the maximum values of these quantities. Further, for monitoring the dose caused by internal radiation, the guide defines the limits derived from annual dose limits (the Annual Limit on Intake and the Derived Air Concentration). Finally, the guide defines the operational quantities to be used in estimating the equivalent dose and the effective dose, and also sets out the definitions of some other quantities and concepts to be used in monitoring radiation exposure. The guide does not include the calculation of patient doses carried out for the purposes of quality assurance.
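
    For reference, the two protection quantities the guide refers to are conventionally computed as follows (standard ICRP formulation; the guide's own notation may differ):

      % equivalent dose to tissue T, summed over radiation types R,
      % and effective dose, summed over tissues T
      H_T = \sum_R w_R \, D_{T,R}     % w_R: radiation weighting factor, D_{T,R}: mean absorbed dose
      E   = \sum_T w_T \, H_T         % w_T: tissue weighting factor, with \sum_T w_T = 1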

  20. Application of maximum values for radiation exposure and principles for the calculation of radiation dose

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-07-01

    The guide sets out the mathematical definitions and principles involved in the calculation of the equivalent dose and the effective dose, and the instructions concerning the application of the maximum values of these quantities. Further, for monitoring the dose caused by internal radiation, the guide defines the limits derived from annual dose limits (the Annual Limit on Intake and the Derived Air Concentration). Finally, the guide defines the operational quantities to be used in estimating the equivalent dose and the effective dose, and also sets out the definitions of some other quantities and concepts to be used in monitoring radiation exposure. The guide does not include the calculation of patient doses carried out for the purposes of quality assurance.

  1. Entropy-limited hydrodynamics: a novel approach to relativistic hydrodynamics

    Science.gov (United States)

    Guercilena, Federico; Radice, David; Rezzolla, Luciano

    2017-07-01

    We present entropy-limited hydrodynamics (ELH): a new approach for the computation of numerical fluxes arising in the discretization of hyperbolic equations in conservation form. ELH is based on the hybridisation of an unfiltered high-order scheme with the first-order Lax-Friedrichs method. The activation of the low-order part of the scheme is driven by a measure of the locally generated entropy inspired by the artificial-viscosity method proposed by Guermond et al. (J. Comput. Phys. 230(11):4248-4267, 2011, doi: 10.1016/j.jcp.2010.11.043). Here, we present ELH in the context of high-order finite-differencing methods and of the equations of general-relativistic hydrodynamics. We study the performance of ELH in a series of classical astrophysical tests in general relativity involving isolated, rotating and nonrotating neutron stars, and including a case of gravitational collapse to a black hole. We present a detailed comparison of ELH with the fifth-order monotonicity preserving method MP5 (Suresh and Huynh in J. Comput. Phys. 136(1):83-99, 1997, doi: 10.1006/jcph.1997.5745), one of the most common high-order schemes currently employed in numerical-relativity simulations. We find that ELH achieves comparable and, in many of the cases studied here, better accuracy than more traditional methods at a fraction of the computational cost (up to ~50% speedup). Given its accuracy and its simplicity of implementation, ELH is a promising framework for the development of new special- and general-relativistic hydrodynamics codes well adapted for massively parallel supercomputers.

  2. ICRP-recommendations on dose limits for workers

    International Nuclear Information System (INIS)

    Beninson, D.J.

    1976-01-01

    Dose limits proposed by the ICRP have been incorporated in most national and international standards, and respecting them has produced a distribution of doses whose average does not exceed 1/10 of the maximum permissible dose. This distribution corresponds to a risk which is well within the risks in 'safe industries'. There are at present some inconsistencies in the current system of recommended limits, for example having the same limit of 5 rem for the whole body and also for some organs. Hopefully, this inconsistency will be removed in the next recommendation of the ICRP. But the whole-body limit of 5 rem in a year has been safe, and there is little ground to reduce this limit on the basis of comparisons with 'safe industries'. (orig./HP) [de

  3. 78 FR 9845 - Minimum and Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for a Violation of...

    Science.gov (United States)

    2013-02-12

    ... maximum penalty amount of $75,000 for each violation, except that if the violation results in death... the maximum civil penalty for a violation is $175,000 if the violation results in death, serious... Penalties for a Violation of the Hazardous Materials Transportation Laws or Regulations, Orders, Special...

  4. Approximate maximum likelihood estimation for population genetic inference.

    Science.gov (United States)

    Bertl, Johanna; Ewing, Gregory; Kosiol, Carolin; Futschik, Andreas

    2017-11-27

    In many population genetic problems, parameter estimation is obstructed by an intractable likelihood function. Therefore, approximate estimation methods have been developed, and with growing computational power, sampling-based methods became popular. However, these methods, such as Approximate Bayesian Computation (ABC), can be inefficient in high-dimensional problems. This led to the development of more sophisticated iterative estimation methods like particle filters. Here, we propose an alternative approach that is based on stochastic approximation. By moving along a simulated gradient or ascent direction, the algorithm produces a sequence of estimates that eventually converges to the maximum likelihood estimate, given a set of observed summary statistics. This strategy does not sample much from low-likelihood regions of the parameter space, and is fast, even when many summary statistics are involved. We put considerable effort into providing tuning guidelines that improve the robustness and lead to good performance on problems with high-dimensional summary statistics and a low signal-to-noise ratio. We then investigate the performance of our resulting approach and study its properties in simulations. Finally, we re-estimate parameters describing the demographic history of Bornean and Sumatran orang-utans.
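
    In the spirit of the approach described (though with an invented toy simulator, untuned gains, and no claim to match the paper's software), the sketch below estimates a synthetic log-likelihood of an observed summary statistic by simulation and climbs a finite-difference (Kiefer-Wolfowitz) ascent direction.

      # Stochastic-approximation MLE sketch with a toy simulator.
      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(1)

      def simulate_summary(theta, m=200):      # toy simulator: summary = sample mean
          return rng.normal(theta, 1.0, size=(m, 50)).mean(axis=1)

      def synthetic_loglik(theta, s_obs):      # Gaussian fit to simulated summaries
          sims = simulate_summary(theta)
          return norm.logpdf(s_obs, loc=sims.mean(), scale=sims.std())

      s_obs = 0.7                              # "observed" summary statistic
      theta = 0.0
      for k in range(1, 201):
          a, c = 0.01 / k, 0.1 / k**0.25       # standard SA gain sequences
          grad = (synthetic_loglik(theta + c, s_obs)
                  - synthetic_loglik(theta - c, s_obs)) / (2 * c)
          theta += a * grad                    # move along the simulated ascent direction
      print("estimate:", theta)                # drifts toward the truth, 0.7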

  5. Limit lines for risk

    International Nuclear Information System (INIS)

    Cox, D.C.; Baybutt, P.

    1982-01-01

    Approaches to the regulation of risk from technological systems, such as nuclear power plants or chemical process plants, in which potential accidents may result in a broad range of adverse consequences must take into account several different aspects of risk. These include overall or average risk, accidents posing high relative risks, the rate at which accident probability decreases with increasing accident consequences, and the impact of high frequency, low consequence accidents. A hypothetical complementary cumulative distribution function (CCDF), with appropriately chosen parametric form, meets all these requirements. The Farmer limit line, by contrast, places limits on the risks due to individual accident sequences, and cannot adequately account for overall risk. This reduces its usefulness as a regulatory tool. In practice, the CCDF is used in the Canadian nuclear licensing process, while the Farmer limit line approach, supplemented by separate qualitative limits on overall risk, is employed in the United Kingdom
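
    The contrast drawn above is easy to make concrete: a Farmer-type line constrains each accident sequence separately, while the CCDF aggregates them into an exceedance-frequency curve. All numbers below are invented for illustration.

      # Farmer limit line vs CCDF for a handful of accident sequences.
      import numpy as np

      freq = np.array([1e-2, 1e-3, 1e-4, 1e-5])     # per year, per sequence
      cons = np.array([1.0, 10.0, 100.0, 1000.0])   # consequence measure

      k, gamma = 1e-1, 1.5                          # assumed Farmer line f <= k * C^-gamma
      farmer_ok = freq <= k * cons**(-gamma)        # sequence-by-sequence check

      order = np.argsort(cons)                      # CCDF: frequency of exceeding C
      ccdf = freq[order][::-1].cumsum()[::-1]
      for C, f_exc, ok in zip(cons[order], ccdf, farmer_ok[order]):
          print(f"C >= {C:7.1f}: exceedance {f_exc:.1e}/yr  (Farmer line ok: {ok})")

    The CCDF reflects the overall risk from all sequences together, which is the aspect the record says a sequence-by-sequence limit line cannot capture.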

  6. A Limited Survey of Aflatoxins in Poultry Feed and Feed Ingredients in Guyana

    Directory of Open Access Journals (Sweden)

    Donna M. Morrison

    2017-11-01

    A study was conducted to determine the presence of aflatoxins in finished poultry feed from manufacturing companies, in feed ingredients, and in poultry feed at the point of sale. Two collections were made. In the first collection, samples of the finished feed and feed ingredients were analyzed by high-performance liquid chromatography (HPLC). For the second collection, all samples were analyzed by ELISA while a subset was analyzed by HPLC. Of the 27 samples of finished feed, five had aflatoxin concentrations greater than the United States Food and Drug Administration (USFDA) and European Union Commission (EUC) maximum tolerable limit of 20 µg/kg, while three of the 30 samples of feed ingredients exceeded the limit. Of the 93 samples of finished feed purchased from retailers, five had aflatoxin concentrations greater than the maximum tolerable limit. This survey indicates that most of the samples were below the maximum regulatory limit and maintained quality up to the point of sale for 2015 and 2016. However, given that some samples were above the limit, there is a need to monitor the production and marketing chain to ensure that the quality of the finished feed is not compromised.

  7. High-frequency maximum observable shaking map of Italy from fault sources

    KAUST Repository

    Zonno, Gaetano

    2012-03-17

    We present a strategy for obtaining fault-based maximum observable shaking (MOS) maps, which represent an innovative concept for assessing deterministic seismic ground motion at a regional scale. Our approach uses the fault sources supplied for Italy by the Database of Individual Seismogenic Sources, and particularly by its composite seismogenic sources (CSS), a spatially continuous simplified 3-D representation of a fault system. For each CSS, we consider the associated Typical Fault, i. e., the portion of the corresponding CSS that can generate the maximum credible earthquake. We then compute the high-frequency (1-50 Hz) ground shaking for a rupture model derived from its associated maximum credible earthquake. As the Typical Fault floats within its CSS to occupy all possible positions of the rupture, the high-frequency shaking is updated in the area surrounding the fault, and the maximum from that scenario is extracted and displayed on a map. The final high-frequency MOS map of Italy is then obtained by merging 8,859 individual scenario-simulations, from which the ground shaking parameters have been extracted. To explore the internal consistency of our calculations and validate the results of the procedure we compare our results (1) with predictions based on the Next Generation Attenuation ground-motion equations for an earthquake of Mw 7.1, (2) with the predictions of the official Italian seismic hazard map, and (3) with macroseismic intensities included in the DBMI04 Italian database. We then examine the uncertainties and analyse the variability of ground motion for different fault geometries and slip distributions. © 2012 Springer Science+Business Media B.V.

  8. High-frequency maximum observable shaking map of Italy from fault sources

    KAUST Repository

    Zonno, Gaetano; Basili, Roberto; Meroni, Fabrizio; Musacchio, Gemma; Mai, Paul Martin; Valensise, Gianluca

    2012-01-01

    We present a strategy for obtaining fault-based maximum observable shaking (MOS) maps, which represent an innovative concept for assessing deterministic seismic ground motion at a regional scale. Our approach uses the fault sources supplied for Italy by the Database of Individual Seismogenic Sources, and particularly by its composite seismogenic sources (CSS), a spatially continuous simplified 3-D representation of a fault system. For each CSS, we consider the associated Typical Fault, i. e., the portion of the corresponding CSS that can generate the maximum credible earthquake. We then compute the high-frequency (1-50 Hz) ground shaking for a rupture model derived from its associated maximum credible earthquake. As the Typical Fault floats within its CSS to occupy all possible positions of the rupture, the high-frequency shaking is updated in the area surrounding the fault, and the maximum from that scenario is extracted and displayed on a map. The final high-frequency MOS map of Italy is then obtained by merging 8,859 individual scenario-simulations, from which the ground shaking parameters have been extracted. To explore the internal consistency of our calculations and validate the results of the procedure we compare our results (1) with predictions based on the Next Generation Attenuation ground-motion equations for an earthquake of Mw 7.1, (2) with the predictions of the official Italian seismic hazard map, and (3) with macroseismic intensities included in the DBMI04 Italian database. We then examine the uncertainties and analyse the variability of ground motion for different fault geometries and slip distributions. © 2012 Springer Science+Business Media B.V.

  9. Room at the Mountain: Estimated Maximum Amounts of Commercial Spent Nuclear Fuel Capable of Disposal in a Yucca Mountain Repository

    International Nuclear Information System (INIS)

    Kessler, John H.; Kemeny, John; King, Fraser; Ross, Alan M.; Ross, Benjamen

    2006-01-01

    The purpose of this paper is to present an initial analysis of the maximum amount of commercial spent nuclear fuel (CSNF) that could be emplaced into a geological repository at Yucca Mountain. This analysis identifies and uses programmatic, material, and geological constraints and factors that affect this estimation of maximum amount of CSNF for disposal. The conclusion of this initial analysis is that the current legislative limit on Yucca Mountain disposal capacity, 63,000 MTHM of CSNF, is a small fraction of the available physical capacity of the Yucca Mountain system assuming the current high-temperature operating mode (HTOM) design. EPRI is confident that at least four times the legislative limit for CSNF (∼260,000 MTHM) can be emplaced in the Yucca Mountain system. It is possible that with additional site characterization, upwards of nine times the legislative limit (∼570,000 MTHM) could be emplaced. (authors)

  10. Method for the determination of technical specifications limiting temperature in EBR-II operation

    International Nuclear Information System (INIS)

    Chang, L.K.; Hill, D.J.; Ku, J.Y.

    2004-01-01

    The methodology and analysis procedure used to qualify the Mark-V and Mark-VA fuels for the Experimental Breeder Reactor II are summarized in this paper. Fuel performance data and design safety criteria are essential for thermal-hydraulic analyses and safety evaluations. Normal and off-normal operation duty cycles and transient classifications are required for the safety assessment of the fuels. Design safety criteria for steady-state normal and transient off-normal operations were developed to ensure the structural integrity of the fuel pin. The maximum allowable coolant outlet temperatures and powers of subassemblies for steady-state normal operation conditions were first determined on a row-by-row basis by a thermal-hydraulic and fuel damage analysis, in which a trial-and-error approach was used to predict the maximum subassembly coolant outlet temperatures and powers that satisfy the design safety criteria for steady-state normal operation. The limiting steady-state temperature and power were then used as the initial subassembly thermal conditions for the off-normal transient analysis to assess the safety performance of the fuel pin for anticipated, unlikely and extremely unlikely events. If the design safety criteria for the off-normal events are not satisfied, the initial steady-state subassembly temperatures and/or powers are reduced, and this iterative procedure is repeated until the design safety criteria for off-normal conditions are satisfied; the resulting subassembly outlet coolant temperature and power are then the technical specification limits for reactor operation. (author)

  11. Dynamic Optimization of a Polymer Flooding Process Based on Implicit Discrete Maximum Principle

    Directory of Open Access Journals (Sweden)

    Yang Lei

    2012-01-01

    Polymer flooding is one of the most important technologies for enhanced oil recovery (EOR). In this paper, an optimal control model of distributed parameter systems (DPSs) for polymer injection strategies is established, which takes the maximum profit as the performance index, the fluid flow equations of polymer flooding as the governing equations, and polymer concentration and injection amount limitations as inequality constraints. The optimal control model is discretized by a fully implicit finite-difference method. To cope with the discrete optimal control problem (OCP), the necessary conditions for optimality are obtained through application of the calculus of variations and Pontryagin's discrete maximum principle. A modified gradient method with a new adjoint construction is proposed for the computation of optimal injection strategies. The numerical results of an example illustrate the effectiveness of the proposed method.

  12. Performance analysis and comparison of an Atkinson cycle coupled to variable temperature heat reservoirs under maximum power and maximum power density conditions

    International Nuclear Information System (INIS)

    Wang, P.-Y.; Hou, S.-S.

    2005-01-01

    In this paper, a performance analysis and comparison based on the maximum power and maximum power density conditions has been conducted for an Atkinson cycle coupled to variable-temperature heat reservoirs. The Atkinson cycle is internally reversible but externally irreversible, since there is external irreversibility of heat transfer during the processes of constant-volume heat addition and constant-pressure heat rejection. The study is based purely on classical thermodynamic analysis, and all results and conclusions follow from it. The power density, defined as the ratio of power output to maximum specific volume in the cycle, is taken as the optimization objective because it considers the effects of engine size as related to investment cost. The results show that an engine design based on maximum power density, with constant effectiveness of the hot- and cold-side heat exchangers or a constant inlet temperature ratio of the heat reservoirs, will have a smaller size but higher efficiency, compression ratio, expansion ratio and maximum temperature than one based on maximum power. From the viewpoints of engine size and thermal efficiency, an engine design based on maximum power density is better than one based on maximum power. However, due to the higher compression ratio and maximum temperature in the cycle, an engine design based on maximum power density requires tougher materials for engine construction than one based on maximum power.

  13. Plastic limit loads for cylindrical shell intersections under combined loading

    International Nuclear Information System (INIS)

    Skopinsky, V.N.; Berkov, N.A.; Vogov, R.A.

    2015-01-01

    In this research, applied methods of nonlinear analysis and results of determining the plastic limit loads for shell intersection configurations under combined internal pressure, in-plane moment and out-of-plane moment loadings are presented. The numerical analysis of shell intersections is performed using the finite element method, geometrically nonlinear shell theory in quadratic approximation and plasticity theory. For determining the load parameter of proportional combined loading, the developed maximum criterion of the rate of change of relative plastic work is employed. The graphical results for a model of cylindrical shell intersection under different two-parameter combined loadings (as generalized plastic limit load curves) and three-parameter combined loading (as a generalized plastic limit load surface) are presented on the assumption that the internal pressure, in-plane moment and out-of-plane moment loads were applied in a proportional manner. - Highlights: • This paper presents a nonlinear two-dimensional FE analysis for shell intersections. • Determining the plastic limit loads under combined loading is considered. • The developed maximum criterion of the rate of change of relative plastic work is employed. • The plastic deformation mechanism in shell intersections is discussed. • Results for generalized plastic limit load curves of a branch intersection are presented

  14. The generalized Shockley-Queisser limit for nanostructured solar cells

    Science.gov (United States)

    Xu, Yunlu; Gong, Tao; Munday, Jeremy N.

    2015-09-01

    The Shockley-Queisser limit describes the maximum solar energy conversion efficiency achievable for a particular material and is the standard by which new photovoltaic technologies are compared. This limit is based on the principle of detailed balance, which equates the photon flux into a device to the particle flux (photons or electrons) out of that device. Nanostructured solar cells represent a novel class of photovoltaic devices, and questions have been raised about whether or not they can exceed the Shockley-Queisser limit. Here we show that single-junction nanostructured solar cells have a theoretical maximum efficiency of ~42% under AM 1.5 solar illumination. While this exceeds the efficiency of a non-concentrating planar device, it does not exceed the Shockley-Queisser limit for a planar device with optical concentration. We consider the effect of diffuse illumination and find that with optical concentration from the nanostructures of only ×1,000, an efficiency of 35.5% is achievable even with 25% diffuse illumination. We conclude that nanostructured solar cells offer an important route towards higher-efficiency photovoltaic devices through a built-in optical concentration.

  15. Thermoelectric Power Factor Limit of a 1D Nanowire

    Science.gov (United States)

    Chen, I.-Ju; Burke, Adam; Svilans, Artis; Linke, Heiner; Thelander, Claes

    2018-04-01

    In the past decade, there has been significant interest in the potentially advantageous thermoelectric properties of one-dimensional (1D) nanowires, but it has been challenging to find high thermoelectric power factors based on 1D effects in practice. Here we point out that there is an upper limit to the thermoelectric power factor of nonballistic 1D nanowires, as a consequence of the recently established quantum bound on thermoelectric power output. We experimentally test this limit in quasiballistic InAs nanowires by extracting the maximum power factor of the first 1D subband through I-V characterization, finding that the measured maximum power factors conform to the theoretical limit. The established limit allows the prediction of the achievable power factor of a specific nanowire material system with 1D electronic transport, based on the nanowire dimension and mean free path. The power factor of state-of-the-art semiconductor nanowires with small cross section and high crystal quality can be expected to be highly competitive (on the order of mW/(m K²)) at low temperatures. However, they have no clear advantage over bulk materials at, or above, room temperature.

  16. Approaching the basis set limit for DFT calculations using an environment-adapted minimal basis with perturbation theory: Formulation, proof of concept, and a pilot implementation

    International Nuclear Information System (INIS)

    Mao, Yuezhi; Horn, Paul R.; Mardirossian, Narbe; Head-Gordon, Teresa; Skylaris, Chris-Kriton; Head-Gordon, Martin

    2016-01-01

    Recently developed density functionals have good accuracy for both thermochemistry (TC) and non-covalent interactions (NC) if very large atomic orbital basis sets are used. To approach the basis set limit with potentially lower computational cost, a new self-consistent field (SCF) scheme is presented that employs minimal adaptive basis (MAB) functions. The MAB functions are optimized on each atomic site by minimizing a surrogate function. High accuracy is obtained by applying a perturbative correction (PC) to the MAB calculation, similar to dual basis approaches. Compared to exact SCF results, using this MAB-SCF (PC) approach with the same large target basis set produces <0.15 kcal/mol root-mean-square deviations for most of the tested TC datasets, and <0.1 kcal/mol for most of the NC datasets. The performance of density functionals near the basis set limit can be even better reproduced. With further improvement to its implementation, MAB-SCF (PC) is a promising lower-cost substitute for conventional large-basis calculations as a method to approach the basis set limit of modern density functionals.

  17. Thermal instability and current-voltage scaling in superconducting fault current limiters

    Energy Technology Data Exchange (ETDEWEB)

    Zeimetz, B [Department of Materials Science and Metallurgy, Cambridge University, Pembroke Street, Cambridge CB1 3QZ (United Kingdom); Tadinada, K [Department of Engineering, Cambridge University, Trumpington Road, Cambridge CB2 1PZ (United Kingdom); Eves, D E [Department of Engineering, Cambridge University, Trumpington Road, Cambridge CB2 1PZ (United Kingdom); Coombs, T A [Department of Engineering, Cambridge University, Trumpington Road, Cambridge CB2 1PZ (United Kingdom); Evetts, J E [Department of Materials Science and Metallurgy, Cambridge University, Pembroke Street, Cambridge CB1 3QZ (United Kingdom); Campbell, A M [Department of Engineering, Cambridge University, Trumpington Road, Cambridge CB2 1PZ (United Kingdom)

    2004-04-01

    We have developed a computer model for the simulation of resistive superconducting fault current limiters in three dimensions. The program calculates the electromagnetic and thermal response of a superconductor to a time-dependent overload voltage, with different possible cooling conditions for the surfaces, and locally variable superconducting and thermal properties. We find that the cryogen boil-off parameters critically influence the stability of a limiter. The recovery time after a fault increases strongly with thickness. Above a critical thickness, the temperature is unstable even for a small applied AC voltage. The maximum voltage and maximum current during a short fault are correlated by a simple exponential law.
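
    A zero-dimensional caricature of the thermal balance such a model solves is given below; the parameter values are invented and the physics deliberately crude (a step change in resistivity at Tc, current-driven heating, and surface cooling whose effectiveness falls with conductor thickness d). It reproduces the qualitative conclusion that recovery is lost above a critical thickness.

      # 0-D thermal balance of a resistive fault current limiter element
      # (all parameters assumed; illustrates the critical-thickness idea).
      import numpy as np

      def max_temperature(d, J=1e7, Tc=90.0, T0=77.0, h=2000.0,
                          rho_n=1e-6, C=1e6, seconds=1.0, dt=1e-4):
          T = 95.0                               # hot spot left by a fault
          peak = T
          for _ in range(int(seconds / dt)):
              rho = rho_n if T > Tc else 0.0     # crude superconducting switch
              q_joule = rho * J**2               # W/m^3, current-driven heating
              q_cool = h * max(T - T0, 0.0) / d  # W/m^3, surface cooling over thickness
              T += dt * (q_joule - q_cool) / C
              peak = max(peak, T)
          return peak, T

      for d in (1e-4, 1e-3):                     # below / above the critical thickness
          peak, final = max_temperature(d)
          print(f"d = {d:g} m: peak {peak:.0f} K, final {final:.0f} K")

    The thin element cools back below Tc and recovers; the thick one settles in the dissipative state, which is the instability the abstract describes.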

  18. Maximum run-up behavior of tsunamis under non-zero initial velocity condition

    Directory of Open Access Journals (Sweden)

    Baran AYDIN

    2018-03-01

    The tsunami run-up problem is solved non-linearly under the most general initial conditions, that is, for realistic initial waveforms such as N-waves, as well as standard initial waveforms such as solitary waves, in the presence of an initial velocity. An initial-boundary value problem governed by the non-linear shallow-water wave equations is solved analytically using the classical separation-of-variables technique, which proved to be a fast and accurate analytical approach for this type of problem. The results provide important qualitative information on the maximum tsunami run-up. We observed that, although the calculated maximum run-ups increase significantly, going as high as double those of the zero-velocity case, initial waves having non-zero fluid velocity exhibit the same run-up behavior as waves without initial velocity, for all wave types considered in this study.

  19. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas Harder; Juul, Anders

    2004-01-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used...

  20. Maximum principles and sharp constants for solutions of elliptic and parabolic systems

    CERN Document Server

    Kresin, Gershon

    2012-01-01

    The main goal of this book is to present results pertaining to various versions of the maximum principle for elliptic and parabolic systems of arbitrary order. In particular, the authors present necessary and sufficient conditions for validity of the classical maximum modulus principles for systems of second order and obtain sharp constants in inequalities of Miranda-Agmon type and in many other inequalities of a similar nature. Somewhat related to this topic are explicit formulas for the norms and the essential norms of boundary integral operators. The proofs are based on a unified approach using, on one hand, representations of the norms of matrix-valued integral operators whose target spaces are linear and finite dimensional, and, on the other hand, on solving certain finite dimensional optimization problems. This book reflects results obtained by the authors, and can be useful to research mathematicians and graduate students interested in partial differential equations.

  1. Maximum capacity model of grid-connected multi-wind farms considering static security constraints in electrical grids

    International Nuclear Information System (INIS)

    Zhou, W; Oodo, S O; He, H; Qiu, G Y

    2013-01-01

    An increasing interest in wind energy and the advance of related technologies have increased the connection of wind power generation into electrical grids. This paper proposes an optimization model for determining the maximum capacity of wind farms in a power system. In this model, generator power output limits, voltage limits and thermal limits of branches in the grid system were considered in order to limit the steady-state security influence of wind generators on the power system. The optimization model was solved by a nonlinear primal-dual interior-point method. An IEEE 30-bus system with two wind farms was tested through simulation studies, and an analysis was conducted to verify the effectiveness of the proposed model. The results indicated that the model is efficient and reasonable.

  2. Maximum capacity model of grid-connected multi-wind farms considering static security constraints in electrical grids

    Science.gov (United States)

    Zhou, W.; Qiu, G. Y.; Oodo, S. O.; He, H.

    2013-03-01

    An increasing interest in wind energy and the advance of related technologies have increased the connection of wind power generation into electrical grids. This paper proposes an optimization model for determining the maximum capacity of wind farms in a power system. In this model, generator power output limits, voltage limits and thermal limits of branches in the grid system were considered in order to limit the steady-state security influence of wind generators on the power system. The optimization model was solved by a nonlinear primal-dual interior-point method. An IEEE 30-bus system with two wind farms was tested through simulation studies, and an analysis was conducted to verify the effectiveness of the proposed model. The results indicated that the model is efficient and reasonable.

  3. Maximum spreading of liquid drop on various substrates with different wettabilities

    Science.gov (United States)

    Choudhury, Raihan; Choi, Junho; Yang, Sangsun; Kim, Yong-Jin; Lee, Donggeun

    2017-09-01

    This paper describes a novel model developed for a priori prediction of the maximal spread of a liquid drop on a surface. As a first step, a series of experiments were conducted under precise control of the initial drop diameter, its falling height, roughness, and wettability of dry surfaces. The transient liquid spreading was recorded by a high-speed camera to obtain its maximum spreading under various conditions. Eight preexisting models were tested for accurate prediction of the maximum spread; however, most of the model predictions were not satisfactory except one, in comparison with our experimental data. A comparative scaling analysis of the literature models was conducted to elucidate the condition-dependent prediction characteristics of the models. The conditioned bias in the predictions was mainly attributed to the inappropriate formulations of viscous dissipation or interfacial energy of liquid on the surface. Hence, a novel model based on energy balance during liquid impact was developed to overcome the limitations of the previous models. As a result, the present model was quite successful in predicting the liquid spread in all the conditions.

  4. Evaluation of the Charm maximum residue limit β-lactam and tetracycline test for the detection of antibiotics in ewe and goat milk.

    Science.gov (United States)

    Beltrán, M C; Romero, T; Althaus, R L; Molina, M P

    2013-05-01

    The Charm maximum residue limit β-lactam and tetracycline test (Charm MRL BLTET; Charm Sciences Inc., Lawrence, MA) is an immunoreceptor assay utilizing Rapid One-Step Assay lateral flow technology that detects β-lactam or tetracycline drugs in raw commingled cow milk at or below European Union maximum residue levels (EU-MRL). The Charm MRL BLTET test procedure was recently modified (dilution in buffer and longer incubation) by the manufacturers to be used with raw ewe and goat milk. To assess the Charm MRL BLTET test for the detection of β-lactams and tetracyclines in milk of small ruminants, an evaluation study was performed at the Instituto de Ciencia y Tecnologia Animal of the Universitat Politècnica de València (Spain). The test specificity and detection capability (CCβ) were studied following Commission Decision 2002/657/EC. The specificity results obtained in this study were optimal for individual milk free of antimicrobials from ewes (99.2% for β-lactams and 100% for tetracyclines) and goats (97.9% for β-lactams and 100% for tetracyclines) along the entire lactation period, regardless of whether the results were interpreted visually or instrumentally. Moreover, no positive results were obtained when relatively high concentrations of different substances belonging to antimicrobial families other than β-lactams and tetracyclines were present in ewe and goat milk. For both types of milk, the calculated CCβ was lower than or equal to the EU-MRL for amoxicillin (4 µg/kg), ampicillin (4 µg/kg), benzylpenicillin (≤ 2 µg/kg), dicloxacillin (30 µg/kg), oxacillin (30 µg/kg), cefacetrile (≤ 63 µg/kg), cefalonium (≤ 10 µg/kg), cefapirin (≤ 30 µg/kg), desacetylcefapirin (≤ 30 µg/kg), cefazolin (≤ 25 µg/kg), cefoperazone (≤ 25 µg/kg), cefquinome (20 µg/kg), ceftiofur (≤ 50 µg/kg), desfuroylceftiofur (≤ 50 µg/kg), and cephalexin (≤ 50 µg/kg). However, this test could detect neither cloxacillin nor nafcillin at or below the EU-MRL (CCβ >30 µg/kg). The

  5. Limits on inelastic dark matter from ZEPLIN-III

    Energy Technology Data Exchange (ETDEWEB)

    Akimov, D.Yu. [Institute for Theoretical and Experimental Physics, Moscow (Russian Federation); Araujo, H.M. [Blackett Laboratory, Imperial College, London (United Kingdom); Barnes, E.J. [School of Physics and Astronomy, University of Edinburgh (United Kingdom); Belov, V.A. [Institute for Theoretical and Experimental Physics, Moscow (Russian Federation); Bewick, A. [Blackett Laboratory, Imperial College, London (United Kingdom); Burenkov, A.A. [Institute for Theoretical and Experimental Physics, Moscow (Russian Federation); Cashmore, R. [Brasenose College, University of Oxford (United Kingdom); Chepel, V. [LIP-Coimbra and Department of Physics of the University of Coimbra (Portugal); Currie, A., E-mail: alastair.currie08@imperial.ac.u [Blackett Laboratory, Imperial College, London (United Kingdom); Davidge, D.; Dawson, J. [Blackett Laboratory, Imperial College, London (United Kingdom); Durkin, T.; Edwards, B. [Particle Physics Department, Rutherford Appleton Laboratory, Chilton (United Kingdom); Ghag, C.; Hollingsworth, A. [School of Physics and Astronomy, University of Edinburgh (United Kingdom); Horn, M.; Howard, A.S. [Blackett Laboratory, Imperial College, London (United Kingdom); Hughes, A.J. [Particle Physics Department, Rutherford Appleton Laboratory, Chilton (United Kingdom); Jones, W.G. [Blackett Laboratory, Imperial College, London (United Kingdom); Kalmus, G.E. [Particle Physics Department, Rutherford Appleton Laboratory, Chilton (United Kingdom)

    2010-08-30

    We present limits on the WIMP-nucleon cross section for inelastic dark matter from a reanalysis of the 2008 run of ZEPLIN-III. Cuts, notably on scintillation pulse shape and scintillation-to-ionisation ratio, give a net exposure of 63 kg·days in the range 20-80 keV nuclear recoil energy, in which 6 events are observed. Upper limits on the signal rate are derived from the maximum empty patch in the data. Under standard halo assumptions, a small region of parameter space consistent, at 99% CL, with causing the 1.17 ton·yr DAMA modulation signal is allowed at 90% CL: it is in the mass range 45-60 GeV/c², with a minimum CL of 87%, again derived from the maximum patch. This is the tightest constraint yet presented using xenon, a target nucleus whose similarity to iodine mitigates systematic error from the assumed halo.

  6. The voluntary offset - approaches and limitations

    International Nuclear Information System (INIS)

    2012-06-01

    After having briefly presented the voluntary offset mechanism, which aims at funding projects that reduce or capture greenhouse gas emissions, this document describes the approach to be followed to adopt voluntary offsetting, for individuals as well as for companies, communities or event organisers. It describes other important context issues (projects developed under the voluntary offset, actors in the voluntary offsetting market, market status, offset labels), and how to proceed in practice (defining objectives and expectations, identifying the applicable requirements, and ensuring that the requirements match the expectations). It addresses the case of voluntary offsetting in France (difficult implementation, possible solutions).

  7. Density limit study on the W7-AS stellarator

    International Nuclear Information System (INIS)

    Grigull, P.; Giannone, L.; Stroth, U.

    1998-01-01

    Data from currentless NBI discharges in W7-AS strongly indicate that the maximum density for quasi-stationary operation is limited by detachment from the limiters. The threshold density at the edge scales as P_s^0.5 B^0.8 (with P_s being the net power flow across the LCMS), which is consistent with an edge-based analytic estimation presuming constant threshold downstream temperatures. (author)

  8. Credit card spending limit and personal finance: system dynamics approach

    Directory of Open Access Journals (Sweden)

    Mirjana Pejić Bach

    2014-03-01

    Full Text Available Credit cards have become one of the major ways of conducting cashless transactions. However, they have a long-term impact on the well-being of their owner through the debt generated by credit card usage. Credit card issuers approve high credit limits to credit card owners, thereby influencing their credit burden. A system dynamics model has been used to model the behavior of a credit card owner in different scenarios according to the size of the credit limit. Experiments with the model demonstrated that a higher credit limit approved on the credit card decreases the budget available for spending in the long run. This is a contribution toward evaluating credit-limit control actions by their long-run consequences.
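    The stock-flow logic behind such a model can be sketched in a few lines. In the toy simulation below, debt is the stock; card spending, interest and repayments are the flows. The behavioural spending rule, interest rate and payment rule are illustrative assumptions, not the parameters of the published model.

```python
def simulate(credit_limit, months=120, income=2000.0, apr=0.24,
             min_payment_rate=0.05, spend_propensity=0.10):
    """Minimal stock-flow sketch: debt accumulates card spending (driven by
    unused credit) and interest, and is drained by minimum repayments."""
    debt = 0.0
    payment = 0.0
    for _ in range(months):
        card_spending = spend_propensity * max(credit_limit - debt, 0.0)
        interest = debt * apr / 12.0
        payment = min(debt + interest, min_payment_rate * debt + interest)
        debt += card_spending + interest - payment
    return debt, income - payment   # long-run debt and cash left to spend

for limit in (1000.0, 5000.0):
    debt, cash = simulate(limit)
    print(f"limit {limit:6.0f}: steady debt {debt:7.0f}, monthly cash {cash:7.0f}")
```

    With these assumptions the debt stock settles near two-thirds of the limit, so the higher limit leaves visibly less monthly cash, which is the qualitative behaviour reported in the abstract.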

  9. Beyond maximum speed—a novel two-stimulus paradigm for brain-computer interfaces based on event-related potentials (P300-BCI)

    Science.gov (United States)

    Kaufmann, Tobias; Kübler, Andrea

    2014-10-01

    Objective. The speed of brain-computer interfaces (BCIs) based on event-related potentials (ERPs) is inherently limited by the commonly used one-stimulus paradigm. In this paper, we introduce a novel paradigm that can increase the spelling speed by a factor of 2, extending the one-stimulus paradigm to a two-stimulus paradigm. Two different stimuli (a face and a symbol) are presented at the same time, superimposed on different characters, and ERPs are classified using a multi-class classifier. Here, we present a proof of principle achieved with healthy participants. Approach. Eight participants were confronted with the novel two-stimulus paradigm and, for comparison, with two one-stimulus paradigms that used either one of the stimuli. Classification accuracies (percentage of correctly predicted letters) and elicited ERPs from the three paradigms were compared in a comprehensive offline analysis. Main results. The accuracies slightly decreased with the novel system compared to the established one-stimulus face paradigm. However, the use of two stimuli allowed for spelling at twice the maximum speed of the one-stimulus paradigms, and participants still achieved an average accuracy of 81.25%. This study introduced an alternative way of increasing the spelling speed in ERP-BCIs and illustrated that ERP-BCIs may not yet have reached their speed limit. Future research is needed to improve the reliability of the novel approach, as some participants displayed reduced accuracies. Furthermore, a comparison to the most recent BCI systems with individually adjusted, rapid stimulus timing is needed to draw conclusions about the practical relevance of the proposed paradigm. Significance. We introduced a novel two-stimulus paradigm that might be of high value for users who have reached the speed limit of the current one-stimulus ERP-BCI systems.
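    The claimed factor-of-two speed-up is simple arithmetic over stimulation events. The sketch below assumes a simplified single-item highlighting scheme with illustrative timing parameters, not the study's exact settings.

```python
import math

def seconds_per_letter(n_items=36, repetitions=10, soa_s=0.25,
                       stimuli_at_once=1):
    """Time to spell one letter when every item must be highlighted
    `repetitions` times and `stimuli_at_once` items are stimulated per
    event (simplified single-item paradigm, illustrative timings)."""
    events = math.ceil(n_items / stimuli_at_once) * repetitions
    return events * soa_s

print(seconds_per_letter(stimuli_at_once=1))  # one-stimulus baseline: 90.0 s
print(seconds_per_letter(stimuli_at_once=2))  # two-stimulus paradigm: 45.0 s
```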

  10. Maximum overpressure in gastight containers of the storage and transport of dangerous liquids

    International Nuclear Information System (INIS)

    Steen, H.

    1977-11-01

    For the design of containers that are safe for the transport and storage of dangerous liquids, the maximum overpressure to be expected is an important parameter. The fundamentals for determining the internal pressure are set out for the simplified model of a rigid (i.e. not elastically or plastically deforming) and gastight container. Assuming extreme storage and transport conditions (e.g. maximum liquid temperatures due to solar radiation), the maximum overpressure is calculated for about one hundred liquids of practical interest. The results show a significant influence of the compression of air in the ullage space caused by liquid expansion due to temperature rise (the compression effect), particularly for liquids with a higher boiling point. The influence of the solubility of air in the liquid on the internal pressure can be neglected under the assumed transport conditions. Estimating the increase in container volume under internal pressure leads to the limitation that the rigid-container assumption is justified only for cylindrical and spherical steel tanks. The enlargement of the container volume due to heating of the container shell plays no significant role for metal containers under the assumed storage and transport conditions. The results essentially bear out the stipulations for the test pressure and the filling limits laid down in the older German regulations for the transport of dangerous liquids in rail tank wagons and road tank vehicles without pressure relief valves. For the recently adopted, internationally harmonised regulations for tank containers, the considerations and results presented here suggest that a review is warranted. (orig.)
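    The compression effect can be reproduced in a few lines for the rigid, gastight container model. All parameter values in this sketch (fill level, expansion coefficient, vapour pressure) are illustrative assumptions rather than figures from the report.

```python
def gauge_overpressure_Pa(fill_fraction, t0_K, t_K, p0_Pa=101_325.0,
                          beta_per_K=1.2e-3, p_vapour_Pa=50_000.0):
    """Rigid, gastight container: the thermally expanding liquid compresses
    the air in the ullage (compression effect) and the vapour pressure adds
    on top. Illustrative parameter values; air solubility is neglected."""
    v_liquid = fill_fraction * (1.0 + beta_per_K * (t_K - t0_K))  # per unit volume
    v_ullage = 1.0 - v_liquid
    if v_ullage <= 0.0:
        raise ValueError("liquid-full container: pressure rise is hydraulic")
    # ideal-gas air, compressed from the initial ullage and heated to t_K
    p_air = p0_Pa * ((1.0 - fill_fraction) / v_ullage) * (t_K / t0_K)
    return p_air + p_vapour_Pa - p0_Pa

# 95% filled at 15 degC, heated to 50 degC: the compression effect dominates.
print(f"{gauge_overpressure_Pa(0.95, 288.15, 323.15) / 1e5:.1f} bar gauge")
```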

  11. Geometry Optimization Approaches of Inductively Coupled Printed Spiral Coils for Remote Powering of Implantable Biomedical Sensors

    Directory of Open Access Journals (Sweden)

    Sondos Mehri

    2016-01-01

    Full Text Available Electronic biomedical implantable sensors need power to perform. Among the main reported approaches, the inductive link is the most commonly used method for the remote powering of such devices. Power efficiency is the most important characteristic to be considered when designing inductive links to transfer energy to implantable biomedical sensors. The maximum power efficiency is obtained for maximum coupling and quality factors of the coils, and is generally limited because the coupling between the inductors is usually very small. This paper deals with the geometry optimization of inductively coupled printed spiral coils for powering a given implantable sensor system. To this end, Iterative Procedure (IP) and Genetic Algorithm (GA) analytic optimization approaches are proposed. Both approaches implement simple mathematical models that approximate the coil parameters and the link efficiency values. Using numerical simulations based on the Finite Element Method (FEM), together with experimental validation, the proposed analytic approaches are shown to improve on the performance of a reference design case. The analytical GA and IP optimization methods are also compared to a purely numerical FEM-based optimization approach (GA-FEM). Numerical and experimental validations confirm the accuracy and effectiveness of the analytical optimization approaches in designing optimal coil geometries for the best efficiency values.
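    A useful building block for such optimizations is the classical closed-form bound on two-coil link efficiency, which makes explicit why both the coupling and the quality factors must be maximised. The sketch below uses that textbook expression; the coil values are illustrative, not from the paper.

```python
import math

def max_link_efficiency(k, q_tx, q_rx):
    """Classical upper bound on two-coil inductive link efficiency for
    coupling coefficient k and coil quality factors, assuming an optimally
    matched load."""
    u = (k ** 2) * q_tx * q_rx          # link figure of merit k^2 * Q1 * Q2
    return u / (1.0 + math.sqrt(1.0 + u)) ** 2

# Weakly coupled implant link: k is small, so high-Q coils are essential.
for k in (0.01, 0.05, 0.10):
    print(f"k={k:.2f}: eta_max = {max_link_efficiency(k, 100, 100):.3f}")
```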

  12. LCLS Maximum Credible Beam Power

    International Nuclear Information System (INIS)

    Clendenin, J.

    2005-01-01

    The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note first reviews the studies done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally, in Section 5, the beam through the matching section and its injection into Linac-1 is discussed.
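    The headline quantity reduces to a one-line calculation once the bunch charge, per-particle energy and repetition rate are fixed. The numbers in the sketch below are illustrative only, not the official safety-envelope values.

```python
def average_beam_power_W(bunch_charge_C, energy_eV, rep_rate_Hz):
    """Average power delivered by the beam. Numerically, charge [C] times
    per-particle energy [eV] already gives joules per pulse, since one eV
    per elementary charge equals one volt."""
    return bunch_charge_C * energy_eV * rep_rate_Hz

# Illustrative numbers: 1 nC per pulse at 14 GeV and 120 Hz.
print(average_beam_power_W(1e-9, 14e9, 120), "W")
```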

  13. Maximum Entropy Closure of Balance Equations for Miniband Semiconductor Superlattices

    Directory of Open Access Journals (Sweden)

    Luis L. Bonilla

    2016-07-01

    Full Text Available Charge transport in nanosized electronic systems is described by semiclassical or quantum kinetic equations that are often costly to solve numerically and difficult to reduce systematically to macroscopic balance equations for densities, currents, temperatures and other moments of macroscopic variables. The maximum entropy principle can be used to close the system of equations for the moments, but its accuracy and range of validity are not always clear. In this paper, we compare numerical solutions of balance equations for nonlinear electron transport in semiconductor superlattices. The equations have been obtained from Boltzmann–Poisson kinetic equations very far from equilibrium for strong fields, either by the maximum entropy principle or by a systematic Chapman–Enskog perturbation procedure. Both approaches produce the same current-voltage characteristic curve for uniform fields. When the superlattices are DC voltage biased in a region where there are stable time-periodic solutions corresponding to recycling and motion of electric field pulses, the differences between the numerical solutions of the two types of balance equations are smaller than the expansion parameter used in the perturbation procedure. These results and possible new research venues are discussed.
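    The closure step itself can be illustrated compactly: choose the exponential-family distribution that matches the known moments, then evaluate any unmatched higher moment from it. The sketch below does this on a bounded, dimensionless velocity grid with made-up target moments; it is a toy version, not the superlattice model of the paper.

```python
import numpy as np
from scipy.optimize import fsolve

# Dimensionless, bounded velocity grid; the target moments are made up.
v = np.linspace(-1.0, 1.0, 401)
dv = v[1] - v[0]

def moments(lam):
    """Zeroth to second moment of the maximum entropy ansatz
    f(v) = exp(-(l0 + l1*v + l2*v**2))."""
    f = np.exp(-(lam[0] + lam[1] * v + lam[2] * v ** 2))
    return np.array([(f * v ** n).sum() * dv for n in range(3)])

target = np.array([1.0, 0.2, 0.4])   # density, momentum, energy-like moment
lam = fsolve(lambda l: moments(l) - target, x0=np.zeros(3))

# The closure: evaluate any unmatched higher moment from the maxent density.
f_me = np.exp(-(lam[0] + lam[1] * v + lam[2] * v ** 2))
heat_flux_like = (f_me * v ** 3).sum() * dv
print("multipliers:", lam, " closed third moment:", heat_flux_like)
```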

  14. Solving University Scheduling Problem Using Hybrid Approach

    Directory of Open Access Journals (Sweden)

    Aftab Ahmed Shaikh

    2011-10-01

    Full Text Available In universities, scheduling curriculum activities is an essential job. Primarily, scheduling is the distribution of limited resources under interrelated constraints. The set of hard constraints demands the highest priority and must not be violated at any cost, while maximizing the satisfaction of soft constraints raises the quality of the solution. In this research paper, a novel bisected approach is introduced, comprising a GA (Genetic Algorithm) as well as a Backtracking Recursive Search; a sketch of the two-phase idea is given below. The technique deals with hard and soft constraints successively. The first phase focuses on eliminating all hard-constraint violations and produces a partial solution for the subsequent step. The second phase then searches for the best possible solution in the remaining search space. Promising results are obtained by implementation on the real dataset. The key points of the approach are to guarantee the removal of hard-constraint violations and to minimize the computational time of the GA by initializing it with a pre-processed set of chromosomes.
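    A minimal sketch of the two-phase idea follows, on a hypothetical toy timetabling instance: randomised backtracking first produces hard-feasible schedules (the pre-processed chromosomes), and a GA then improves soft-constraint satisfaction while rejecting any offspring that reintroduces a hard violation. All course, room and preference data are invented.

```python
import random

# Toy instance (courses, teachers, rooms, slots are all hypothetical).
COURSES = [("Algorithms", "T1"), ("Databases", "T2"),
           ("Networks", "T1"), ("Compilers", "T3")]
ROOMS = ["R1", "R2"]
SLOTS = [0, 1, 2, 3]
PREFERRED = {"T1": 0, "T2": 1, "T3": 2}   # soft constraint: preferred slot

def violates_hard(assign):
    """Hard constraints: each room-slot and teacher-slot is used at most once."""
    used = set()
    for (_, teacher), (room, slot) in zip(COURSES, assign):
        if ("room", room, slot) in used or ("teacher", teacher, slot) in used:
            return True
        used.add(("room", room, slot))
        used.add(("teacher", teacher, slot))
    return False

def backtrack(assign=()):
    """Phase 1: randomised recursive search for a hard-feasible timetable."""
    if violates_hard(assign):
        return None
    if len(assign) == len(COURSES):
        return list(assign)
    domain = [(r, s) for r in ROOMS for s in SLOTS]
    random.shuffle(domain)
    for rs in domain:
        solution = backtrack(assign + (rs,))
        if solution is not None:
            return solution
    return None

def soft_score(assign):
    """Phase 2 fitness: courses scheduled in their teacher's preferred slot."""
    return sum(slot == PREFERRED[t]
               for (_, t), (_, slot) in zip(COURSES, assign))

def evolve(pop_size=20, generations=40, mutation_rate=0.2):
    population = [backtrack() for _ in range(pop_size)]  # pre-processed seeds
    for _ in range(generations):
        population.sort(key=soft_score, reverse=True)
        parents = population[: pop_size // 2]
        children, tries = [], 0
        while len(children) < pop_size - len(parents) and tries < 1000:
            tries += 1
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(COURSES))
            child = a[:cut] + b[cut:]                  # one-point crossover
            if random.random() < mutation_rate:
                child[random.randrange(len(COURSES))] = (
                    random.choice(ROOMS), random.choice(SLOTS))
            if not violates_hard(child):               # keep hard feasibility
                children.append(child)
        population = parents + children
    return max(population, key=soft_score)

best = evolve()
print(soft_score(best), best)
```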

  15. DNA isolation protocols affect the detection limit of PCR approaches of bacteria in samples from the human gastrointestinal tract

    NARCIS (Netherlands)

    Zoetendal, E.G.; Ben-Amor, K.; Akkermans, A.D.L.; Abee, T.; Vos, de W.M.

    2001-01-01

    A major concern in molecular ecological studies is the lysis efficiency of different bacteria in a complex ecosystem. We used a PCR-based 16S rDNA approach to determine the effect of two DNA isolation protocols (i.e. the bead beating and Triton-X100 method) on the detection limit of seven

  16. Maximum margin semi-supervised learning with irrelevant data.

    Science.gov (United States)

    Yang, Haiqin; Huang, Kaizhu; King, Irwin; Lyu, Michael R

    2015-10-01

    Semi-supervised learning (SSL) is a typical learning paradigm that trains a model from both labeled and unlabeled data. Traditional SSL models usually assume that the unlabeled data are relevant to the labeled data, i.e., that they follow the same distribution as the targeted labeled data. In this paper, we address a different, yet formidable, scenario in semi-supervised classification, where the unlabeled data may contain data irrelevant to the labeled data. To tackle this problem, we develop a maximum margin model, named the tri-class support vector machine (3C-SVM), to utilize the available training data while seeking a hyperplane that separates the targeted data well. Our 3C-SVM exhibits several characteristics and advantages. First, it does not need any prior knowledge or explicit assumption about the relatedness of the data. On the contrary, it can relieve the effect of irrelevant unlabeled data based on the logistic principle and the maximum entropy principle. That is, 3C-SVM approaches an ideal classifier: it relies heavily on labeled data and is confident on the relevant data lying far away from the decision hyperplane, while maximally ignoring the irrelevant data, which are hardly distinguishable. Second, theoretical analysis is provided to prove under what conditions the irrelevant data can help to seek the hyperplane. Third, 3C-SVM is a generalized model that unifies several popular maximum margin models, including standard SVMs, semi-supervised SVMs (S(3)VMs), and SVMs learned from the universum (U-SVMs), as its special cases. More importantly, we deploy a concave-convex procedure to solve the proposed 3C-SVM, transforming the original mixed integer program to a semi-definite programming relaxation, and finally to a sequence of quadratic programming subproblems, which yields the same worst-case time complexity as that of S(3)VMs. Finally, we demonstrate the effectiveness and efficiency of our proposed 3C-SVM through systematic experimental comparisons.

  17. Automatic processing of gamma ray spectra employing classical and modified Fourier transform approach

    International Nuclear Information System (INIS)

    Rattan, S.S.; Madan, V.K.

    1994-01-01

    This report describes methods for the automatic processing of gamma-ray spectra acquired with HPGe detectors. The processing incorporates both classical and signal-processing approaches. The classical method was used for smoothing, detecting significant peaks, finding peak envelope limits (including a proposed method of finding peak limits), computing a peak significance index and the full width at half maximum, and detecting doublets for further analysis. To facilitate the application of signal processing to nuclear spectra, Madan et al. gave a new classification of signals, identified nuclear spectra as Type II signals, mathematically formalized the modified Fourier transform and pioneered its application to processing doublet envelopes acquired with modern spectrometers. This was extended to facilitate routine analysis of the spectra. A facility for energy and efficiency calibration was also included. The results obtained by analyzing observed gamma-ray spectra using the above approach compared favourably with those obtained with SAMPO and with those derived from the table of radioisotopes. (author). 15 refs., 3 figs., 3 tabs
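    The classical pipeline (smoothing, significant-peak detection, peak limits, FWHM) maps directly onto standard signal-processing primitives. The sketch below runs it on a synthetic spectrum; the peak positions, widths and thresholds are illustrative, and this is not the report's implementation.

```python
import numpy as np
from scipy.signal import find_peaks, peak_widths, savgol_filter

# Synthetic HPGe-like spectrum: two Gaussian photopeaks on a falling
# continuum, with Poisson counting noise (all numbers illustrative).
channels = np.arange(2048)
truth = (200.0 * np.exp(-0.002 * channels)
         + 900.0 * np.exp(-0.5 * ((channels - 662) / 3.0) ** 2)
         + 400.0 * np.exp(-0.5 * ((channels - 1173) / 3.5) ** 2))
spectrum = np.random.default_rng(0).poisson(truth).astype(float)

smoothed = savgol_filter(spectrum, window_length=9, polyorder=3)  # smoothing
peaks, _ = find_peaks(smoothed, prominence=50)       # significant peaks only
fwhm, _, left, right = peak_widths(smoothed, peaks, rel_height=0.5)

for p, w, lo, hi in zip(peaks, fwhm, left, right):
    print(f"peak at channel {p}: FWHM ~ {w:.1f} ch, limits ~ [{lo:.0f}, {hi:.0f}]")
```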

  18. 13 CFR 107.840 - Maximum term of Financing.

    Science.gov (United States)

    2010-01-01

    Title 13 — Business Credit and Assistance; Small Business Investment Companies; Financing of Small Businesses by Licensees; Structuring Licensee's Financing of an Eligible Small Business: Terms and Conditions of Financing. § 107.840 Maximum term of Financing. The maximum term of any...

  19. Optimal Ge/SiGe nanofin geometries for hole mobility enhancement: Technology limit from atomic simulations

    Science.gov (United States)

    Vedula, Ravi Pramod; Mehrotra, Saumitra; Kubis, Tillmann; Povolotskyi, Michael; Klimeck, Gerhard; Strachan, Alejandro

    2015-05-01

    We use first-principles simulations to engineer Ge nanofins for maximum hole mobility by controlling strain triaxially through nano-patterning. Large-scale molecular dynamics predicts fully relaxed atomic structures for experimentally achievable nanofins, and orthogonal tight binding is used to obtain the corresponding electronic structure. Hole transport properties are then obtained via a linearized Boltzmann formalism. This approach explicitly accounts for free surfaces and the associated strain relaxation, as well as strain gradients, which are critical for quantitative predictions in nanoscale structures. We show that the transverse strain relaxation resulting from the reduction in the aspect ratio of the fins leads to a significant enhancement in phonon-limited hole mobility (7× over unstrained bulk Ge, and 3.5× over biaxially strained Ge). Maximum enhancement is achieved by reducing the width to approximately 1.5 times the height; further reduction in width does not yield additional gains. These results indicate significant room for improvement over current-generation Ge nanofins, provide geometrical guidelines for designing optimized geometries, and give insight into the physics behind the significant mobility enhancement.
