Xiao, Ning-Cong; Li, Yan-Feng; Wang, Zhonglai; Peng, Weiwen; Huang, Hong-Zhong
2013-01-01
In this paper a combination of the maximum entropy method and Bayesian inference for reliability assessment of deteriorating systems is proposed. Due to various uncertainties, scarce data and incomplete information, system parameters usually cannot be determined precisely. These uncertain parameters can be modeled by fuzzy set theory and Bayesian inference, which have proven useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to cal...
Directory of Open Access Journals (Sweden)
Ning-Cong Xiao
2013-12-01
In this paper a combination of the maximum entropy method and Bayesian inference for reliability assessment of deteriorating systems is proposed. Due to various uncertainties, scarce data and incomplete information, system parameters usually cannot be determined precisely. These uncertain parameters can be modeled by fuzzy set theory and Bayesian inference, which have proven useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to calculate the maximum entropy density function of the uncertain parameters more accurately, as it does not require any additional information or assumptions. Finally, two optimization models are presented that can be used to determine the lower and upper bounds of the system's probability of failure under vague environmental conditions. Two numerical examples are investigated to demonstrate the proposed method.
Hydraulic Limits on Maximum Plant Transpiration
Manzoni, S.; Vico, G.; Katul, G. G.; Palmroth, S.; Jackson, R. B.; Porporato, A. M.
2011-12-01
Photosynthesis occurs at the expense of water losses through transpiration. As a consequence of this basic carbon-water interaction at the leaf level, plant growth and ecosystem carbon exchanges are tightly coupled to transpiration. In this contribution, the hydraulic constraints that limit transpiration rates under well-watered conditions are examined across plant functional types and climates. The potential water flow through plants is proportional to both xylem hydraulic conductivity (which depends on plant carbon economy) and the difference in water potential between the soil and the atmosphere (the driving force that pulls water from the soil). In contrast to previous work, we study how this potential flux changes with the amplitude of the driving force (i.e., we focus on xylem properties and not on stomatal regulation). Xylem hydraulic conductivity decreases as the driving force increases due to cavitation of the tissues. As a result of this negative feedback, more negative leaf (and xylem) water potentials would provide a stronger driving force for water transport, while at the same time limiting xylem hydraulic conductivity due to cavitation. Here, the leaf water potential value that allows an optimum balance between driving force and xylem conductivity is quantified, thus defining the maximum transpiration rate that can be sustained by the soil-to-leaf hydraulic system. To apply the proposed framework at the global scale, a novel database of xylem conductivity and cavitation vulnerability across plant types and biomes is developed. Conductivity and water potential at 50% cavitation are shown to be complementary (in particular between angiosperms and conifers), suggesting a tradeoff between transport efficiency and hydraulic safety. Plants from warmer and drier biomes tend to achieve larger maximum transpiration than plants growing in environments with lower atmospheric water demand. The predicted maximum transpiration and the corresponding leaf water
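The optimum described above can be sketched numerically: combine a vulnerability curve K(ψ) with the supply function E(ψ_leaf) = ∫ K dψ and observe that supply saturates as the leaf water potential becomes more negative. This is a minimal illustration with made-up parameters, not values from the paper's database.

```python
import math

def conductivity(psi, k_max=5.0, psi_50=-2.0, a=3.0):
    """Sigmoidal xylem vulnerability curve: conductivity falls as the water
    potential psi (MPa, negative) becomes more negative. Parameters are
    illustrative, not taken from the paper's database."""
    return k_max / (1.0 + (psi / psi_50) ** a)

def supply(psi_leaf, psi_soil=-0.1, n=2000):
    """Soil-to-leaf water supply rate: the integral of the conductivity
    between the leaf and soil water potentials (trapezoidal rule)."""
    h = (psi_soil - psi_leaf) / n
    total = 0.5 * (conductivity(psi_leaf) + conductivity(psi_soil))
    for i in range(1, n):
        total += conductivity(psi_leaf + i * h)
    return total * h

# Lowering (more negative) leaf water potential raises the driving force but
# destroys conductivity through cavitation, so the supply curve saturates.
# The plateau value is the maximum transpiration the hydraulics can sustain.
e_shallow, e_mid, e_deep = supply(-2.0), supply(-6.0), supply(-10.0)
```

The diminishing gain from ever more negative leaf potentials (e_deep barely exceeds e_mid) is the negative feedback the abstract describes.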
Thermoelectric cooler concepts and the limit for maximum cooling
International Nuclear Information System (INIS)
Seifert, W; Hinsche, N F; Pluschke, V
2014-01-01
The conventional analysis of a Peltier cooler approximates the material properties as independent of temperature using a constant properties model (CPM). Alternative concepts have been published by Bian and Shakouri (2006 Appl. Phys. Lett. 89 212101), Bian et al (2007 Phys. Rev. B 75 245208) and Snyder et al (2012 Phys. Rev. B 86 045202). While Snyder's Thomson cooler concept results from a consideration of compatibility, the method of Bian et al focuses on the redistribution of heat. Thus, the two approaches are based on different principles. In this paper we compare the new concepts to CPM and we reconsider the limit for maximum cooling. The results provide a new perspective on maximum cooling. (paper)
Maximum penetration level of distributed generation without violating voltage limits
Morren, J.; Haan, de S.W.H.
2009-01-01
Connection of Distributed Generation (DG) units to a distribution network will result in a local voltage increase. As there will be a maximum on the allowable voltage increase, this will limit the maximum allowable penetration level of DG. By reactive power compensation (by the DG unit itself) a
Dynamical maximum entropy approach to flocking.
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Feedback Limits to Maximum Seed Masses of Black Holes
International Nuclear Information System (INIS)
Pacucci, Fabio; Natarajan, Priyamvada; Ferrara, Andrea
2017-01-01
The most massive black holes observed in the universe weigh up to ∼10^10 M_⊙, nearly independent of redshift. Reaching these final masses likely required copious accretion and several major mergers. Employing a dynamical approach that rests on the role played by a new, relevant physical scale, the transition radius, we provide a theoretical calculation of the maximum mass achievable by a black hole seed that forms in an isolated halo, one that scarcely merged. Incorporating effects at the transition radius and their impact on the evolution of accretion in isolated halos, we are able to obtain new limits for permitted growth. We find that large black hole seeds (M_• ≳ 10^4 M_⊙) hosted in small isolated halos (M_h ≲ 10^9 M_⊙) accreting with relatively small radiative efficiencies (ϵ ≲ 0.1) grow optimally in these circumstances. Moreover, we show that the standard M_•–σ relation observed at z ∼ 0 cannot be established in isolated halos at high z, but requires the occurrence of mergers. Since the average limiting mass of black holes formed at z ≳ 10 is in the range 10^4–6 M_⊙, we expect to observe them in local galaxies as intermediate-mass black holes, when hosted in the rare halos that experienced only minor or no merging events. Such ancient black holes, formed in isolation with subsequent scant growth, could survive, almost unchanged, until present.
Maximum organic carbon limits at different melter feed rates (U)
International Nuclear Information System (INIS)
Choi, A.S.
1995-01-01
This report documents the results of a study to assess the impact of varying melter feed rates on the maximum total organic carbon (TOC) limits allowable in the DWPF melter feed. Topics discussed include: carbon content; feed rate; feed composition; melter vapor space temperature; combustion and dilution air; off-gas surges; earlier work on maximum TOC; overview of models; and the results of the work completed
Feedback Limits to Maximum Seed Masses of Black Holes
Energy Technology Data Exchange (ETDEWEB)
Pacucci, Fabio; Natarajan, Priyamvada [Department of Physics, Yale University, P.O. Box 208121, New Haven, CT 06520 (United States); Ferrara, Andrea [Scuola Normale Superiore, Piazza dei Cavalieri 7, I-56126 Pisa (Italy)
2017-02-01
The most massive black holes observed in the universe weigh up to ∼10^10 M_⊙, nearly independent of redshift. Reaching these final masses likely required copious accretion and several major mergers. Employing a dynamical approach that rests on the role played by a new, relevant physical scale, the transition radius, we provide a theoretical calculation of the maximum mass achievable by a black hole seed that forms in an isolated halo, one that scarcely merged. Incorporating effects at the transition radius and their impact on the evolution of accretion in isolated halos, we are able to obtain new limits for permitted growth. We find that large black hole seeds (M_• ≳ 10^4 M_⊙) hosted in small isolated halos (M_h ≲ 10^9 M_⊙) accreting with relatively small radiative efficiencies (ϵ ≲ 0.1) grow optimally in these circumstances. Moreover, we show that the standard M_•–σ relation observed at z ∼ 0 cannot be established in isolated halos at high z, but requires the occurrence of mergers. Since the average limiting mass of black holes formed at z ≳ 10 is in the range 10^4–6 M_⊙, we expect to observe them in local galaxies as intermediate-mass black holes, when hosted in the rare halos that experienced only minor or no merging events. Such ancient black holes, formed in isolation with subsequent scant growth, could survive, almost unchanged, until present.
Maximum time-dependent space-charge limited diode currents
Energy Technology Data Exchange (ETDEWEB)
Griswold, M. E. [Tri Alpha Energy, Inc., Rancho Santa Margarita, California 92688 (United States); Fisch, N. J. [Princeton Plasma Physics Laboratory, Princeton University, Princeton, New Jersey 08543 (United States)
2016-01-15
Recent papers claim that a one dimensional (1D) diode with a time-varying voltage drop can transmit current densities that exceed the Child-Langmuir (CL) limit on average, apparently contradicting a previous conjecture that there is a hard limit on the average current density across any 1D diode, as t → ∞, that is equal to the CL limit. However, these claims rest on a different definition of the CL limit, namely, a comparison between the time-averaged diode current and the adiabatic average of the expression for the stationary CL limit. If the current were considered as a function of the maximum applied voltage, rather than the average applied voltage, then the original conjecture would not have been refuted.
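The stationary Child-Langmuir limit has the standard closed form J_CL = (4/9) ε₀ (2e/m)^(1/2) V^(3/2) / d². A minimal sketch (with an illustrative voltage waveform and diode geometry, not taken from the paper) contrasting the two reference quantities the abstract distinguishes, the adiabatic average of the stationary limit versus the limit evaluated at the maximum applied voltage:

```python
import math

EPS0 = 8.8541878128e-12        # vacuum permittivity (F/m)
E_CHARGE = 1.602176634e-19     # elementary charge (C)
M_ELECTRON = 9.1093837015e-31  # electron mass (kg)

def child_langmuir(voltage, gap):
    """Stationary Child-Langmuir space-charge-limited current density (A/m^2)
    for a planar 1D electron diode: J = (4/9) eps0 sqrt(2e/m) V^(3/2) / d^2."""
    return (4.0 / 9.0) * EPS0 * math.sqrt(2.0 * E_CHARGE / M_ELECTRON) \
        * voltage ** 1.5 / gap ** 2

# Hypothetical sinusoidally modulated voltage drop V(t) = V0 * (1 + 0.5 sin(wt)).
V0, gap, n = 1.0e4, 1.0e-2, 1000
samples = [V0 * (1.0 + 0.5 * math.sin(2.0 * math.pi * k / n)) for k in range(n)]

# Adiabatic average of the stationary limit vs. the limit at maximum voltage.
adiabatic_avg = sum(child_langmuir(v, gap) for v in samples) / n
at_max_voltage = child_langmuir(max(samples), gap)
# Since J grows as V^(3/2), the max-voltage reference always exceeds the
# adiabatic average, so whether a diode "exceeds the CL limit" depends on
# which definition is used, which is the point the abstract makes.
```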
Maximum total organic carbon limit for DWPF melter feed
International Nuclear Information System (INIS)
Choi, A.S.
1995-01-01
DWPF recently decided to control the potential flammability of melter off-gas by limiting the total carbon content in the melter feed and maintaining adequate conditions for combustion in the melter plenum. With this new strategy, all the LFL analyzers and associated interlocks and alarms were removed from both the primary and backup melter off-gas systems. Subsequently, D. Iverson of DWPF T&E requested that SRTC determine the maximum allowable total organic carbon (TOC) content in the melter feed that can be implemented as part of the Process Requirements for melter feed preparation (PR-S04). The maximum TOC limit determined in this study was about 24,000 ppm on an aqueous slurry basis. At TOC levels below this, the peak concentration of combustible components in the quenched off-gas will not exceed 60 percent of the LFL during off-gas surges of magnitudes up to three times nominal, provided that the melter plenum temperature and the air purge rate to the BUFC are monitored and controlled above 650 degrees C and 220 lb/hr, respectively. Appropriate interlocks should discontinue the feeding when one or both of these conditions are not met. Both the magnitude and duration of an off-gas surge have a major impact on the maximum TOC limit, since they directly affect the melter plenum temperature and combustion. Although the data obtained during recent DWPF melter startup tests showed that the peak magnitude of a surge can be greater than three times nominal, the observed duration was considerably shorter, on the order of several seconds. The long surge duration assumed in this study has a greater impact on the plenum temperature than the peak magnitude, thus making the maximum TOC estimate conservative. Two models were used to make the necessary calculations to determine the TOC limit.
Radiation pressure acceleration: The factors limiting maximum attainable ion energy
Energy Technology Data Exchange (ETDEWEB)
Bulanov, S. S.; Esarey, E.; Schroeder, C. B. [Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Bulanov, S. V. [KPSI, National Institutes for Quantum and Radiological Science and Technology, Kizugawa, Kyoto 619-0215 (Japan); A. M. Prokhorov Institute of General Physics RAS, Moscow 119991 (Russian Federation); Esirkepov, T. Zh.; Kando, M. [KPSI, National Institutes for Quantum and Radiological Science and Technology, Kizugawa, Kyoto 619-0215 (Japan); Pegoraro, F. [Physics Department, University of Pisa and Istituto Nazionale di Ottica, CNR, Pisa 56127 (Italy); Leemans, W. P. [Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Physics Department, University of California, Berkeley, California 94720 (United States)
2016-05-15
Radiation pressure acceleration (RPA) is a highly efficient mechanism of laser-driven ion acceleration, with near complete transfer of the laser energy to the ions in the relativistic regime. However, there is a fundamental limit on the maximum attainable ion energy, which is determined by the group velocity of the laser. Tightly focused laser pulses have group velocities smaller than the vacuum speed of light, and, since they offer the high intensity needed for the RPA regime, it is plausible that group velocity effects would manifest themselves in experiments involving tightly focused pulses and thin foils. However, in this case, finite spot size effects are important, and another limiting factor, the transverse expansion of the target, may dominate over the group velocity effect. As the laser pulse diffracts after passing the focus, the target expands accordingly due to the transverse intensity profile of the laser. Due to this expansion, the areal density of the target decreases, making it transparent to radiation and effectively terminating the acceleration. Off-normal incidence of the laser on the target, due either to the experimental setup or to deformation of the target, also sets a limit on the maximum ion energy.
Global Harmonization of Maximum Residue Limits for Pesticides.
Ambrus, Árpád; Yang, Yong Zhen
2016-01-13
International trade plays an important role in national economics. The Codex Alimentarius Commission develops harmonized international food standards, guidelines, and codes of practice to protect the health of consumers and to ensure fair practices in the food trade. The Codex maximum residue limits (MRLs) elaborated by the Codex Committee on Pesticide Residues are based on the recommendations of the FAO/WHO Joint Meeting on Pesticides (JMPR). The basic principles applied currently by the JMPR for the evaluation of experimental data and related information are described together with some of the areas in which further developments are needed.
Noise and physical limits to maximum resolution of PET images
Energy Technology Data Exchange (ETDEWEB)
Herraiz, J.L.; Espana, S. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain); Vicente, E.; Vaquero, J.J.; Desco, M. [Unidad de Medicina y Cirugia Experimental, Hospital GU 'Gregorio Maranon', E-28007 Madrid (Spain); Udias, J.M. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain)], E-mail: jose@nuc2.fis.ucm.es
2007-10-01
In this work we show that there is a limit to the maximum resolution achievable with a high resolution PET scanner, as well as to the best signal-to-noise ratio, which are ultimately related to the physical effects involved in the emission and detection of the radiation and thus cannot be overcome by any particular reconstruction method. These effects prevent the high spatial frequency components of the imaged structures from being recorded by the scanner. Therefore, the information encoded in these high frequencies cannot be recovered by any reconstruction technique. Within this framework, we have determined the maximum resolution achievable for a given acquisition as a function of data statistics and scanner parameters, such as the size of the crystals or the inter-crystal scatter. In particular, the noise level in the data is identified as a limiting factor for high-resolution imaging in tomographs with small crystal sizes. These results have implications for how to decide the optimal number of voxels of the reconstructed image or how to design better PET scanners.
Noise and physical limits to maximum resolution of PET images
International Nuclear Information System (INIS)
Herraiz, J.L.; Espana, S.; Vicente, E.; Vaquero, J.J.; Desco, M.; Udias, J.M.
2007-01-01
In this work we show that there is a limit to the maximum resolution achievable with a high resolution PET scanner, as well as to the best signal-to-noise ratio, which are ultimately related to the physical effects involved in the emission and detection of the radiation and thus cannot be overcome by any particular reconstruction method. These effects prevent the high spatial frequency components of the imaged structures from being recorded by the scanner. Therefore, the information encoded in these high frequencies cannot be recovered by any reconstruction technique. Within this framework, we have determined the maximum resolution achievable for a given acquisition as a function of data statistics and scanner parameters, such as the size of the crystals or the inter-crystal scatter. In particular, the noise level in the data is identified as a limiting factor for high-resolution imaging in tomographs with small crystal sizes. These results have implications for how to decide the optimal number of voxels of the reconstructed image or how to design better PET scanners.
5 CFR 581.402 - Maximum garnishment limitations.
2010-01-01
... PROCESSING GARNISHMENT ORDERS FOR CHILD SUPPORT AND/OR ALIMONY Consumer Credit Protection Act Restrictions..., pursuant to section 1673(b)(2) (A) and (B) of title 15 of the United States Code (the Consumer Credit... local law, the maximum part of the aggregate disposable earnings subject to garnishment to enforce any...
Narrow band interference cancelation in OFDM: A structured maximum likelihood approach
Sohail, Muhammad Sadiq; Al-Naffouri, Tareq Y.; Al-Ghadhban, Samir N.
2012-01-01
This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous
Longitudinal and transverse space charge limitations on transport of maximum power beams
International Nuclear Information System (INIS)
Khoe, T.K.; Martin, R.L.
1977-01-01
The maximum transportable beam power is a critical issue in selecting the most favorable approach to generating ignition pulses for inertial fusion with high energy accelerators. Maschke and Courant have put forward expressions for the limits on transport power for quadrupole and solenoidal channels. Included in a more general way is the self-consistent effect of space charge defocusing on the power limit. The results show that no limits on transmitted power exist in principle. In general, quadrupole transport magnets appear superior to solenoids except for transport of very low energy and highly charged particles. Longitudinal space charge effects are very significant for transport of intense beams.
Reduced oxygen at high altitude limits maximum size.
Peck, L S; Chapelle, G
2003-11-07
The trend towards large size in marine animals with latitude, and the existence of giant marine species in polar regions, have long been recognized, but remained enigmatic until a recent study showed it to be an effect of increased oxygen availability in sea water of low temperature. The effect was apparent in data from 12 sites worldwide because of variations in water oxygen content controlled by differences in temperature and salinity. Another major physical factor affecting oxygen content in aquatic environments is reduced pressure at high altitude. Suitable data from high-altitude sites are very scarce. However, an exceptionally rich crustacean collection, which remains largely undescribed, was obtained by the British 1937 expedition from Lake Titicaca on the border between Peru and Bolivia in the Andes at an altitude of 3809 m. We show that in Lake Titicaca the maximum length of amphipods is 2-4 times smaller than at other low-salinity sites (Caspian Sea and Lake Baikal).
Vehicle Maximum Weight Limitation Based on Intelligent Weight Sensor
Raihan, W.; Tessar, R. M.; Ernest, C. O. S.; E Byan, W. R.; Winda, A.
2017-03-01
Vehicle weight is an important factor to be maintained for transportation safety. A weight limitation system is proposed to ensure the vehicle weight is always below its design limit before the vehicle is used by the driver. The proposed system is divided into two subsystems, namely a vehicle weight confirmation system and a weight warning system. In the vehicle weight confirmation system, the weight sensor operates for the first time after the ignition switch is turned on. When the weight is under the weight limit, the starter can be switched on to start the engine; otherwise it is locked. The second subsystem operates after checking that all doors are in the closed position; once the doors are closed, the weight warning system checks the weight again while the engine is running. Both subsystems, the vehicle weight confirmation system and the weight warning system, achieved 100% accuracy. These results show that the proposed vehicle weight limitation system operates well.
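The two-stage interlock described above can be sketched as follows. The paper gives no implementation details, so the function names, signals, and return values here are hypothetical:

```python
def can_start_engine(measured_weight, weight_limit):
    """Weight confirmation stage: after ignition-on, allow the starter only
    if the measured vehicle weight is at or below the limit."""
    return measured_weight <= weight_limit

def weight_warning(measured_weight, weight_limit, all_doors_closed, engine_running):
    """Warning stage: re-check the weight while the engine is running,
    once every door is confirmed closed. Returns None if inactive."""
    if not (all_doors_closed and engine_running):
        return None  # stage not active yet
    return "overweight" if measured_weight > weight_limit else "ok"

# Example: a 1000 kg limit. Understand these numbers as placeholders.
assert can_start_engine(900, 1000)          # starter allowed
assert not can_start_engine(1100, 1000)     # starter locked
assert weight_warning(1100, 1000, True, True) == "overweight"
```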
PNNL: A Supervised Maximum Entropy Approach to Word Sense Disambiguation
Energy Technology Data Exchange (ETDEWEB)
Tratz, Stephen C.; Sanfilippo, Antonio P.; Gregory, Michelle L.; Chappell, Alan R.; Posse, Christian; Whitney, Paul D.
2007-06-23
In this paper, we describe the PNNL Word Sense Disambiguation system as applied to the English All-Words task in SemEval 2007. We use a supervised learning approach, employing a large number of features and using Information Gain for dimension reduction. Our Maximum Entropy approach combined with a rich set of features produced results that are significantly better than baseline and achieved the highest F-score for the fine-grained English All-Words subtask.
Narrow band interference cancelation in OFDM: A structured maximum likelihood approach
Sohail, Muhammad Sadiq
2012-06-01
This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous with the frequency grid of the ZP-OFDM system. The proposed structure based technique uses the fact that the NBI signal is sparse as compared to the ZP-OFDM signal in the frequency domain. The structure is also useful in reducing the computational complexity of the proposed method. The paper also presents a data aided approach for improved NBI estimation. The suitability of the proposed method is demonstrated through simulations. © 2012 IEEE.
A Maximum Entropy Approach to Loss Distribution Analysis
Directory of Open Access Journals (Sweden)
Marco Bee
2013-03-01
In this paper we propose an approach to the estimation and simulation of loss distributions based on Maximum Entropy (ME), a non-parametric technique that maximizes the Shannon entropy of the data under moment constraints. Special cases of the ME density correspond to standard distributions; therefore, this methodology is very general as it nests most classical parametric approaches. Sampling the ME distribution is essential in many contexts, such as loss models constructed via compound distributions. Given the difficulties in carrying out exact simulation, we propose an innovative algorithm, obtained by means of an extension of Adaptive Importance Sampling (AIS), for the approximate simulation of the ME distribution. Several numerical experiments confirm that the AIS-based simulation technique works well, and an application to insurance data gives further insights into the usefulness of the method for modelling, estimating and simulating loss distributions.
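As a simplified illustration of the ME principle (a discrete analogue with a single mean constraint, not the paper's AIS algorithm), the maximum-entropy distribution under moment constraints takes an exponential-family form; with one mean constraint, p_i ∝ exp(λ x_i), and λ can be found by bisection:

```python
import math

def maxent_discrete(support, target_mean, lo=-20.0, hi=20.0, iters=200):
    """Maximum-entropy pmf on `support` subject to a mean constraint.
    The solution has the exponential-family form p_i ∝ exp(lam * x_i);
    `lam` is found by bisection because the pmf mean is increasing in lam."""
    def mean_for(lam):
        w = [math.exp(lam * x) for x in support]
        z = sum(w)
        return sum(x * wi for x, wi in zip(support, w)) / z

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * x) for x in support]
    z = sum(w)
    return [wi / z for wi in w]

# Illustrative: losses on {0, ..., 10} constrained to have mean 3.
support = list(range(11))
pmf = maxent_discrete(support, target_mean=3.0)
mean = sum(x * p for x, p in zip(support, pmf))  # matches the constraint
```

With no constraint beyond normalization, the same machinery would return the uniform pmf; each added moment constraint tilts the distribution by another exponential factor, which is how ME nests standard parametric families.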
DEFF Research Database (Denmark)
Kordheili, Reza Ahmadi; Bak-Jensen, Birgitte; Pillai, Jayakrishnan Radhakrishna
2014-01-01
High penetration of photovoltaic panels in a distribution grid can bring the grid to its operation limits. The main focus of the paper is to determine the maximum photovoltaic penetration level in the grid. Three main criteria were investigated for determining the maximum penetration level of PV panels: maximum voltage deviation of customers, cable current limits, and transformer nominal value. Three different PV location scenarios were investigated for this grid: even distribution of PV panels, aggregation of panels at the beginning of each feeder, and aggregation of panels at the end of each feeder. Load modeling is done using the Velander formula. Since PV generation is highest in the summer due to irradiation, a summer day was chosen to determine the maximum penetration level. Voltage deviation of different buses was investigated for different penetration levels. The proposed model was simulated on a Danish distribution grid.
Maximum likelihood approach for several stochastic volatility models
International Nuclear Information System (INIS)
Camprodon, Jordi; Perelló, Josep
2012-01-01
Volatility measures the amplitude of price fluctuations. Despite being one of the most important quantities in finance, volatility is not directly observable. Here we apply a maximum likelihood method which assumes that price and volatility follow a two-dimensional diffusion process where volatility is the stochastic diffusion coefficient of the log-price dynamics. We apply this method to the simplest versions of the expOU, the OU and the Heston stochastic volatility models and we study their performance in terms of the log-price probability, the volatility probability, and its Mean First-Passage Time. The approach has some predictive power on the future returns amplitude from knowledge of the current volatility alone. The assumed models do not include long-range volatility autocorrelation or the asymmetric return-volatility cross-correlation, but the method still yields these two important stylized facts very naturally. We apply the method to different market indices, with good performance in all cases. (paper)
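The paper's latent-volatility setting is more involved, but the core idea of transition-density maximum likelihood can be shown on a directly observed Ornstein-Uhlenbeck (OU) process. Because the exact OU transition is Gaussian and linear in the previous value, the conditional MLE reduces to least squares. All parameter values below are made up for the demonstration:

```python
import math
import random

def simulate_ou(theta, mu, sigma, x0, n, dt, seed=1):
    """Simulate an OU process using its exact Gaussian transition:
    x[t+dt] = mu + (x[t] - mu) * exp(-theta*dt) + Gaussian noise."""
    rng = random.Random(seed)
    b = math.exp(-theta * dt)
    sd = sigma * math.sqrt((1.0 - b * b) / (2.0 * theta))
    xs = [x0]
    for _ in range(n):
        xs.append(mu + (xs[-1] - mu) * b + sd * rng.gauss(0.0, 1.0))
    return xs

def fit_ou(xs, dt):
    """Conditional maximum-likelihood fit of (theta, mu, sigma).
    The transition is linear-Gaussian, so the MLE is OLS of x[t+1] on x[t]."""
    x, y = xs[:-1], xs[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                      # estimate of exp(-theta*dt)
    a = my - b * mx
    theta = -math.log(b) / dt
    mu = a / (1.0 - b)
    resid_var = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y)) / n
    sigma = math.sqrt(resid_var * 2.0 * theta / (1.0 - b * b))
    return theta, mu, sigma

xs = simulate_ou(theta=2.0, mu=0.0, sigma=0.5, x0=0.0, n=20000, dt=0.01)
theta_hat, mu_hat, sigma_hat = fit_ou(xs, dt=0.01)
```

In the stochastic volatility models of the abstract, the OU-type process drives the (unobserved) log-volatility rather than the observed price, so the actual likelihood requires integrating over the latent path; this sketch only illustrates the transition-density ML machinery.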
Maximum principle and convergence of central schemes based on slope limiters
Mehmetoglu, Orhan; Popov, Bojan
2012-01-01
A maximum principle and convergence of second order central schemes is proven for scalar conservation laws in dimension one. It is well known that to establish a maximum principle a nonlinear piecewise linear reconstruction is needed and a typical choice is the minmod limiter. Unfortunately, this implies that the scheme uses a first order reconstruction at local extrema. The novelty here is that we allow local nonlinear reconstructions which do not reduce to first order at local extrema and still prove maximum principle and convergence. © 2011 American Mathematical Society.
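The minmod limiter has a standard definition, and a few lines of code make the abstract's point concrete: at a local extremum the one-sided differences change sign, so minmod returns a zero slope and the reconstruction drops to first order there. A minimal sketch with illustrative cell averages:

```python
def minmod(a, b):
    """Minmod slope limiter: zero when the arguments disagree in sign
    (a local extremum), otherwise the argument of smaller magnitude."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_slopes(u):
    """Per-cell limited slopes for a piecewise-linear (MUSCL-type)
    reconstruction over interior cells, from one-sided differences."""
    return [minmod(u[i] - u[i - 1], u[i + 1] - u[i])
            for i in range(1, len(u) - 1)]

u = [0.0, 1.0, 3.0, 2.0, 0.5]      # cell averages; 3.0 is a local maximum
slopes = limited_slopes(u)          # -> [1.0, 0.0, -1.0]
```

The zero slope in the middle cell is exactly the first-order clipping at extrema that the paper's non-reducing nonlinear reconstructions are designed to avoid while still satisfying a maximum principle.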
Maximum β limited by ideal MHD ballooning instabilities in JT-60
International Nuclear Information System (INIS)
Seki, Shogo; Azumi, Masashi
1986-03-01
Maximum β limited by ideal MHD ballooning instabilities is investigated for divertor configurations in JT-60. The maximum β against ballooning modes in JT-60 depends strongly on the distribution of the safety factor over the magnetic surfaces. The maximum β is ∼2% for q₀ = 1.0, but more than 3% for q₀ = 1.5. These results suggest that profile control of the safety factor, especially on the magnetic axis, is attractive for higher-β operation in JT-60. (author)
International Nuclear Information System (INIS)
Wenzel, H; Crump, P; Pietrzak, A; Wang, X; Erbert, G; Traenkle, G
2010-01-01
The factors that limit both the continuous wave (CW) and the pulsed output power of broad-area laser diodes driven at very high currents are investigated theoretically and experimentally. The decrease in the gain due to self-heating under CW operation and spectral hole burning under pulsed operation, as well as heterobarrier carrier leakage and longitudinal spatial hole burning, are the dominant mechanisms limiting the maximum achievable output power.
Improved Reliability of Single-Phase PV Inverters by Limiting the Maximum Feed-in Power
DEFF Research Database (Denmark)
Yang, Yongheng; Wang, Huai; Blaabjerg, Frede
2014-01-01
Grid operation experiences have revealed the necessity to limit the maximum feed-in power from PV inverter systems under a high penetration scenario in order to avoid voltage and frequency instability issues. A Constant Power Generation (CPG) control method has been proposed at the inverter level. The CPG control strategy is activated only when the DC input power from the PV panels exceeds a specific power limit. It limits the maximum feed-in power to the electric grid and also improves the utilization of PV inverters. As a further study, this paper investigates the reliability performance of the power devices, allowing a quantitative prediction of the power device lifetime. A study case on a 3 kW single-phase PV inverter has demonstrated the advantages of the CPG control in terms of improved reliability.
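At its core, the CPG strategy described above clamps the inverter's output at the feed-in limit whenever the available PV power exceeds it. A minimal sketch, with hypothetical power values:

```python
def cpg_feed_in(p_pv, p_limit):
    """Constant Power Generation clamp: track the available PV power until
    it reaches the feed-in limit, then hold the output at that limit."""
    return min(p_pv, p_limit)

# Hypothetical DC input profile over a day (kW) with a 3.0 kW feed-in limit:
profile = [0.5, 1.2, 2.8, 3.4, 2.9, 1.0]
fed_in = [cpg_feed_in(p, p_limit=3.0) for p in profile]
# Only the midday peak (3.4 kW) is clipped; all other samples pass through.
```

Holding the output at the limit, rather than shutting down, is what reduces the thermal cycling of the power devices and yields the reliability benefit the abstract reports.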
Maximum total organic carbon limits at different DWPF melter feed rates (U)
International Nuclear Information System (INIS)
Choi, A.S.
1996-01-01
This document presents the maximum total organic carbon (TOC) limits allowable in the DWPF melter feed without forming a potentially flammable vapor in the off-gas system, determined at feed rates varying from 0.7 to 1.5 GPM. At the maximum TOC levels predicted, the peak concentration of combustible gases in the quenched off-gas will not exceed 60 percent of the lower flammable limit during a 3X off-gas surge, provided that the indicated melter vapor space temperature and the total air supply to the melter are maintained. All the necessary calculations for this study were made using the 4-stage cold cap model and the melter off-gas dynamics model. A high degree of conservatism was included in the calculational bases and assumptions. As a result, the proposed correlations are believed to be conservative enough to be used for melter off-gas flammability control purposes.
Maximum Throughput in a C-RAN Cluster with Limited Fronthaul Capacity
Duan , Jialong; Lagrange , Xavier; Guilloud , Frédéric
2016-01-01
Centralized/Cloud Radio Access Network (C-RAN) is a promising future mobile network architecture that can ease cooperation between different cells to manage interference. However, the feasibility of C-RAN is limited by the large bit rate requirement on the fronthaul. This paper studies the maximum throughput of different transmission strategies in a C-RAN cluster with transmission power constraints and fronthaul capacity constraints. Both transmission strategies wit...
Avinash-Shukla mass limit for the maximum dust mass supported against gravity by electric fields
Avinash, K.
2010-08-01
The existence of a new class of astrophysical objects, where gravity is balanced by the shielded electric fields associated with the electric charge on the dust, is shown. Further, a mass limit MA for the maximum dust mass that can be supported against gravitational collapse by these fields is obtained. If the total mass of the dust in the interstellar cloud MD > MA, the dust collapses, while if MD < MA, stable equilibrium may be achieved. Heuristic arguments are given to show that the physics of the mass limit is similar to Chandrasekhar's mass limit for compact objects, and the similarity of these dust configurations with neutron stars and white dwarfs is pointed out. The effect of grain size distribution on the mass limit and strong correlation effects in the core of such objects is discussed. Possible location of these dust configurations inside interstellar clouds is pointed out.
Mechanical limits to maximum weapon size in a giant rhinoceros beetle.
McCullough, Erin L
2014-07-07
The horns of giant rhinoceros beetles are a classic example of the elaborate morphologies that can result from sexual selection. Theory predicts that sexual traits will evolve to be increasingly exaggerated until survival costs balance the reproductive benefits of further trait elaboration. In Trypoxylus dichotomus, long horns confer a competitive advantage to males, yet previous studies have found that they do not incur survival costs. It is therefore unlikely that horn size is limited by the theoretical cost-benefit equilibrium. However, males sometimes fight vigorously enough to break their horns, so mechanical limits may set an upper bound on horn size. Here, I tested this mechanical limit hypothesis by measuring safety factors across the full range of horn sizes. Safety factors were calculated as the ratio between the force required to break a horn and the maximum force exerted on a horn during a typical fight. I found that safety factors decrease with increasing horn length, indicating that the risk of breakage is indeed highest for the longest horns. Structural failure of oversized horns may therefore oppose the continued exaggeration of horn length driven by male-male competition and set a mechanical limit on the maximum size of rhinoceros beetle horns. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
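The safety-factor calculation described above (breaking force divided by the maximum force exerted during a typical fight) can be sketched in a few lines. The allometric scaling exponents below are purely hypothetical, chosen only to illustrate a safety factor that declines with horn length as the study reports.

```python
def safety_factor(breaking_force, max_fight_force):
    """Safety factor as defined in the study: failure load / typical fight load."""
    return breaking_force / max_fight_force

# Hypothetical allometric scalings (exponents are illustrative, not measured):
# if fighting forces grow faster with horn length than breaking strength,
# the safety factor falls as horns get longer.
horn_lengths_mm = [10.0, 20.0, 30.0, 40.0]
factors = [safety_factor(5.0 * L ** 1.5, 0.8 * L ** 2.0) for L in horn_lengths_mm]
```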
Maximum entropy method approach to the θ term
International Nuclear Information System (INIS)
Imachi, Masahiro; Shinno, Yasuhiko; Yoneyama, Hiroshi
2004-01-01
In Monte Carlo simulations of lattice field theory with a θ term, one confronts the complex weight problem, or the sign problem. This is circumvented by performing the Fourier transform of the topological charge distribution P(Q). This procedure, however, causes a flattening phenomenon of the free energy f(θ), which makes study of the phase structure unfeasible. In order to treat this problem, we apply the maximum entropy method (MEM) to a Gaussian form of P(Q), which serves as a good example to test whether the MEM can be applied effectively to the θ term. We study the case with flattening as well as that without flattening. In the latter case, the results of the MEM agree with those obtained from the direct application of the Fourier transform. For the former, the MEM gives a smoother f(θ) than that of the Fourier transform. Among various default models investigated, the images which yield the least error do not show flattening, although some others cannot be excluded given the uncertainty related to statistical error. (author)
Li, Zijian
2018-08-01
To evaluate whether pesticide maximum residue limits (MRLs) can protect public health, a deterministic dietary risk assessment of maximum pesticide legal exposure was conducted to convert global MRLs to theoretical maximum dose intake (TMDI) values by estimating the average food intake rate and human body weight for each country. A total of 114 nations (58% of the total nations in the world) and two international organizations, including the European Union (EU) and Codex (WHO) have regulated at least one of the most currently used pesticides in at least one of the most consumed agricultural commodities. In this study, 14 of the most commonly used pesticides and 12 of the most commonly consumed agricultural commodities were identified and selected for analysis. A health risk analysis indicated that nearly 30% of the computed pesticide TMDI values were greater than the acceptable daily intake (ADI) values; however, many nations lack common pesticide MRLs in many commonly consumed foods and other human exposure pathways, such as soil, water, and air were not considered. Normality tests of the TMDI values set indicated that all distributions had a right skewness due to large TMDI clusters at the low end of the distribution, which were caused by some strict pesticide MRLs regulated by the EU (normally a default MRL of 0.01 mg/kg when essential data are missing). The Box-Cox transformation and optimal lambda (λ) were applied to these TMDI distributions, and normality tests of the transformed data set indicated that the power transformed TMDI values of at least eight pesticides presented a normal distribution. It was concluded that unifying strict pesticide MRLs by nations worldwide could significantly skew the distribution of TMDI values to the right, lower the legal exposure to pesticide, and effectively control human health risks. Copyright © 2018 Elsevier Ltd. All rights reserved.
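The MRL-to-TMDI conversion described above is, in essence, a weighted sum: each commodity's MRL times its intake rate, divided by body weight, then compared against the ADI. A minimal sketch, with entirely hypothetical MRLs, intake rates, and ADI:

```python
def tmdi(mrls_mg_per_kg, intakes_kg_per_day, body_weight_kg):
    """Theoretical maximum daily intake in mg per kg body weight per day."""
    total_mg = sum(m * i for m, i in zip(mrls_mg_per_kg, intakes_kg_per_day))
    return total_mg / body_weight_kg

# Hypothetical MRLs (mg/kg food) and intake rates (kg food/day) for 3 commodities
mrls = [0.01, 0.5, 0.2]
intakes = [0.3, 0.1, 0.05]
exposure = tmdi(mrls, intakes, body_weight_kg=60.0)
adi = 0.001  # hypothetical acceptable daily intake, mg/kg bw/day
exceeds_adi = exposure > adi
```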
Energy Technology Data Exchange (ETDEWEB)
Lemos, Jose P.S.; Lopes, Francisco J.; Quinta, Goncalo [Universidade de Lisboa, UL, Departamento de Fisica, Centro Multidisciplinar de Astrofisica, CENTRA, Instituto Superior Tecnico, IST, Lisbon (Portugal); Zanchin, Vilson T. [Universidade Federal do ABC, Centro de Ciencias Naturais e Humanas, Santo Andre, SP (Brazil)
2015-02-01
One of the stiffest equations of state for matter in a compact star is constant energy density and this generates the interior Schwarzschild radius to mass relation and the Misner maximum mass for relativistic compact stars. If dark matter populates the interior of stars, and this matter is supersymmetric or of some other type, some of it possessing a tiny electric charge, there is the possibility that highly compact stars can trap a small but non-negligible electric charge. In this case the radius to mass relation for such compact stars should get modifications. We use an analytical scheme to investigate the limiting radius to mass relation and the maximum mass of relativistic stars made of an incompressible fluid with a small electric charge. The investigation is carried out by using the hydrostatic equilibrium equation, i.e., the Tolman-Oppenheimer-Volkoff (TOV) equation, together with the other equations of structure, with the further hypothesis that the charge distribution is proportional to the energy density. The approach relies on Volkoff and Misner's method to solve the TOV equation. For zero charge one gets the interior Schwarzschild limit, and supposing incompressible boson or fermion matter with constituents with masses of the order of the neutron mass one finds that the maximum mass is the Misner mass. For a small electric charge, our analytical approximating scheme, valid in first order in the star's electric charge, shows that the maximum mass increases relatively to the uncharged case, whereas the minimum possible radius decreases, an expected effect since the new field is repulsive, aiding the pressure to sustain the star against gravitational collapse. (orig.)
A maximum pseudo-likelihood approach for estimating species trees under the coalescent model
Directory of Open Access Journals (Sweden)
Edwards Scott V
2010-10-01
Full Text Available Abstract Background Several phylogenetic approaches have been developed to estimate species trees from collections of gene trees. However, maximum likelihood approaches for estimating species trees under the coalescent model are limited. Although the likelihood of a species tree under the multispecies coalescent model has already been derived by Rannala and Yang, it can be shown that the maximum likelihood estimate (MLE) of the species tree (topology, branch lengths, and population sizes) from gene trees under this formula does not exist. In this paper, we develop a pseudo-likelihood function of the species tree to obtain maximum pseudo-likelihood estimates (MPE) of species trees, with branch lengths of the species tree in coalescent units. Results We show that the MPE of the species tree is statistically consistent as the number M of genes goes to infinity. In addition, the probability that the MPE of the species tree matches the true species tree converges to 1 at rate O(M^-1). The simulation results confirm that the maximum pseudo-likelihood approach is statistically consistent even when the species tree is in the anomaly zone. We applied our method, Maximum Pseudo-likelihood for Estimating Species Trees (MP-EST), to a mammal dataset. The four major clades found in the MP-EST tree are consistent with those in the Bayesian concatenation tree. The bootstrap supports for the species tree estimated by the MP-EST method are more reasonable than the posterior probability supports given by the Bayesian concatenation method in reflecting the level of uncertainty in gene trees and controversies over the relationship of four major groups of placental mammals. Conclusions MP-EST can consistently estimate the topology and branch lengths (in coalescent units) of the species tree. Although the pseudo-likelihood is derived from coalescent theory, and assumes no gene flow or horizontal gene transfer (HGT), the MP-EST method is robust to a small amount of HGT in the
Physical Limits on Hmax, the Maximum Height of Glaciers and Ice Sheets
Lipovsky, B. P.
2017-12-01
The longest glaciers and ice sheets on Earth never achieve a topographic relief, or height, greater than about Hmax = 4 km. What laws govern this apparent maximum height to which a glacier or ice sheet may rise? Two types of answer appear possible: one relating to geological process and the other to ice dynamics. In the first type of answer, one might suppose that if Earth had 100 km tall mountains then there would be many 20 km tall glaciers. The counterpoint to this argument is that recent evidence suggests that glaciers themselves limit the maximum height of mountain ranges. We turn, then, to ice dynamical explanations for Hmax. The classical ice dynamical theory of Nye (1951), however, does not predict any break in scaling to give rise to a maximum height, Hmax. I present a simple model for the height of glaciers and ice sheets. The expression is derived from a simplified representation of a thermomechanically coupled ice sheet that experiences a basal shear stress governed by Coulomb friction (i.e., a stress proportional to the overburden pressure minus the water pressure). I compare this model to satellite-derived digital elevation map measurements of glacier surface height profiles for the 200,000 glaciers in the Randolph Glacier Inventory (Pfeffer et al., 2014) as well as flowlines from the Greenland and Antarctic Ice Sheets. The simplified model provides a surprisingly good fit to these global observations. Small glaciers less than 1 km in length are characterized by having negligible influence of basal melt water, cold (about -15 C) beds, and high surface slopes (about 30 deg). Glaciers longer than a critical distance of about 30 km are characterized by having an ice-bed interface that is weakened by the presence of meltwater and is therefore not capable of supporting steep surface slopes. The simplified model makes predictions of ice volume change as a function of surface temperature, accumulation rate, and geothermal heat flux. For this reason, it provides insights into
DEFF Research Database (Denmark)
Rakhshan, Mohsen; Vafamand, Navid; Khooban, Mohammad Hassan
2018-01-01
This paper introduces a polynomial fuzzy model (PFM)-based maximum power point tracking (MPPT) control approach to increase the performance and efficiency of the solar photovoltaic (PV) electricity generation. The proposed method relies on polynomial fuzzy modeling, a polynomial parallel ... a direct maximum power (DMP)-based control structure is considered for MPPT. Using the PFM representation, the DMP-based control structure is formulated in terms of SOS conditions. Unlike the conventional approaches, the proposed approach does not require exploring the maximum power operational point...
Maximum Power Tracking by VSAS approach for Wind Turbine, Renewable Energy Sources
Directory of Open Access Journals (Sweden)
Nacer Kouider Msirdi
2015-08-01
Full Text Available This paper gives a review of the most efficient algorithms designed to track the maximum power point (MPP) for catching the maximum wind power by a variable speed wind turbine (VSWT). We then design a new maximum power point tracking (MPPT) algorithm using the Variable Structure Automatic Systems (VSAS) approach. The proposed approach leads to efficient algorithms, as shown in this paper by analysis and simulations.
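The hill-climbing idea behind most MPPT algorithms (perturb the operating point, keep the direction that raises extracted power) can be sketched as follows. This is a generic perturb-and-observe loop, not the VSAS algorithm itself; the toy power curve and its optimum tip-speed ratio are assumptions made for illustration.

```python
def perturb_and_observe(power_fn, x0, step=0.1, iters=200):
    """Generic hill-climbing MPPT: perturb the operating point, and reverse
    direction whenever the last perturbation reduced extracted power."""
    x, direction = x0, 1.0
    p_prev = power_fn(x)
    for _ in range(iters):
        x += direction * step
        p = power_fn(x)
        if p < p_prev:
            direction = -direction  # power dropped: reverse the perturbation
        p_prev = p
    return x

# Toy power curve with a single maximum at tip-speed ratio 8 (illustrative only)
def power(tip_speed_ratio):
    return 100.0 - (tip_speed_ratio - 8.0) ** 2

lam_mpp = perturb_and_observe(power, x0=4.0)
```

The tracker converges to a small oscillation around the optimum, which is the usual steady-state behavior of perturb-and-observe schemes.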
The separate universe approach to soft limits
Energy Technology Data Exchange (ETDEWEB)
Kenton, Zachary; Mulryne, David J., E-mail: z.a.kenton@qmul.ac.uk, E-mail: d.mulryne@qmul.ac.uk [School of Physics and Astronomy, Queen Mary University of London, Mile End Road, London, E1 4NS (United Kingdom)
2016-10-01
We develop a formalism for calculating soft limits of n -point inflationary correlation functions using separate universe techniques. Our method naturally allows for multiple fields and leads to an elegant diagrammatic approach. As an application we focus on the trispectrum produced by inflation with multiple light fields, giving explicit formulae for all possible single- and double-soft limits. We also investigate consistency relations and present an infinite tower of inequalities between soft correlation functions which generalise the Suyama-Yamaguchi inequality.
The Maximum Entropy Limit of Small-scale Magnetic Field Fluctuations in the Quiet Sun
Gorobets, A. Y.; Berdyugina, S. V.; Riethmüller, T. L.; Blanco Rodríguez, J.; Solanki, S. K.; Barthol, P.; Gandorfer, A.; Gizon, L.; Hirzberger, J.; van Noort, M.; Del Toro Iniesta, J. C.; Orozco Suárez, D.; Schmidt, W.; Martínez Pillet, V.; Knölker, M.
2017-11-01
The observed magnetic field on the solar surface is characterized by a very complex spatial and temporal behavior. Although feature-tracking algorithms have allowed us to deepen our understanding of this behavior, subjectivity plays an important role in the identification and tracking of such features. In this paper, we continue studies of the temporal stochasticity of the magnetic field on the solar surface without relying either on the concept of magnetic features or on subjective assumptions about their identification and interaction. We propose a data analysis method to quantify fluctuations of the line-of-sight magnetic field by means of reducing the temporal field’s evolution to the regular Markov process. We build a representative model of fluctuations converging to the unique stationary (equilibrium) distribution in the long time limit with maximum entropy. We obtained different rates of convergence to the equilibrium at fixed noise cutoff for two sets of data. This indicates a strong influence of the data spatial resolution and mixing-polarity fluctuations on the relaxation process. The analysis is applied to observations of magnetic fields of the relatively quiet areas around an active region carried out during the second flight of the Sunrise/IMaX and quiet Sun areas at the disk center from the Helioseismic and Magnetic Imager on board the Solar Dynamics Observatory satellite.
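Reducing a binned time series to a Markov chain and finding its long-time (stationary) distribution, as the analysis above does for field fluctuations, can be sketched in a few lines. The two-state series here is synthetic, not solar data, and the estimator is a deliberately simplified stand-in for the paper's pipeline.

```python
def transition_matrix(states, n_states):
    """Row-stochastic transition matrix estimated from a binned time series."""
    counts = [[0.0] * n_states for _ in range(n_states)]
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1.0
    for row in counts:
        total = sum(row) or 1.0
        for j in range(n_states):
            row[j] /= total
    return counts

def stationary_distribution(P, iters=500):
    """Equilibrium distribution: repeatedly propagate a uniform start through P."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Synthetic two-state 'polarity' series (illustrative, not observational data)
series = [0, 0, 1, 0, 1, 1, 0, 0, 1, 0] * 50
P = transition_matrix(series, 2)
pi = stationary_distribution(P)
```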
Limits of the endoscopic transnasal transtubercular approach.
Gellner, Verena; Tomazic, Peter V
2018-06-01
The endoscopic transnasal trans-sphenoidal transtubercular approach has become a standard alternative approach to neurosurgical transcranial routes for lesions of the anterior skull base in particular pathologies of the anterior tubercle, sphenoid plane, and midline lesions up to the interpeduncular cistern. For both the endoscopic and the transcranial approach indications must strictly be evaluated and tailored to the patients' morphology and condition. The purpose of this review was to evaluate the evidence in literature of the limitations of the endoscopic transtubercular approach. A PubMed/Medline search was conducted in January 2018 entering following keywords. Upon initial screening 7 papers were included in this review. There are several other papers describing the endoscopic transtubercular approach (ETTA). We tried to list the limitation factors according to the actual existing literature as cited. The main limiting factors are laterally extending lesions in relation to the optic canal and vascular encasement and/or unfavorable tumor tissue consistency. The ETTA is considered as a high level transnasal endoscopic extended skull base approach and requires excellent training, skills and experience.
The Betz-Joukowsky limit for the maximum power coefficient of wind turbines
DEFF Research Database (Denmark)
Okulov, Valery; van Kuik, G.A.M.
2009-01-01
The article addresses the history of an important scientific result in wind energy. The maximum efficiency of an ideal wind turbine rotor is well known as the ‘Betz limit’, named after the German scientist who formulated this maximum in 1920. Also Lanchester, a British scientist, is associated...
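The Betz-Joukowsky limit itself follows from the actuator-disc power coefficient Cp(a) = 4a(1 - a)^2, which is maximized at axial induction a = 1/3, giving Cp_max = 16/27 ≈ 0.593. A quick numerical confirmation of this standard derivation:

```python
# Actuator-disc power coefficient: Cp(a) = 4 a (1 - a)^2 for axial induction a.
def power_coefficient(a):
    return 4.0 * a * (1.0 - a) ** 2

# Grid search over a in [0, 0.5) recovers a = 1/3 and Cp_max = 16/27.
best_a = max((i / 1000.0 for i in range(500)), key=power_coefficient)
cp_max = power_coefficient(best_a)
```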
International Nuclear Information System (INIS)
Acharjee, P.; Mallick, S.; Thakur, S.S.; Ghoshal, S.P.
2011-01-01
Highlights: → The unique cost function is derived considering practical Security Constraints. → New innovative formulae of PSO parameters are developed for better performance. → The inclusion and implementation of chaos in PSO technique is original and unique. → Weak buses are identified where FACTS devices can be implemented. → The CPSO technique gives the best performance for all the IEEE standard test systems. - Abstract: In the current research chaotic search is used with the optimization technique for solving non-linear complicated power system problems, because chaos can overcome the local optima problem of optimization techniques. Power system problems, more specifically voltage stability, are practical examples of non-linear, complex, convex problems. Smart grids, restructured energy systems and socio-economic development bring various uncertain events into power systems, and the level of uncertainty increases to a great extent day by day. In this context, analysis of voltage stability is essential. An efficient method to assess voltage stability is the maximum loadability limit (MLL). The MLL problem is formulated as a maximization problem considering practical security constraints (SCs). Detection of weak buses is also important for the analysis of power system stability. Both MLL and weak buses are identified by PSO methods, and FACTS devices can be applied to the detected weak buses for the improvement of stability. Three particle swarm optimization (PSO) techniques, namely General PSO (GPSO), Adaptive PSO (APSO) and Chaotic PSO (CPSO), are presented for a comparative study of obtaining MLL and weak buses under different SCs. In the APSO method, PSO parameters are made adaptive with the problem, and chaos is incorporated in the CPSO method to obtain reliable convergence and better performance. All three methods are applied on standard IEEE 14 bus, 30 bus, 57 bus and 118 bus test systems to show their comparative computing effectiveness and
Gutierrez-Jurado, H. A.; Guan, H.; Wang, J.; Wang, H.; Bras, R. L.; Simmons, C. T.
2015-12-01
Quantification of evapotranspiration (ET) and its partition over regions of heterogeneous topography and canopy poses a challenge using traditional approaches. In this study, we report the results of a novel field experiment design guided by the Maximum Entropy Production model of ET (MEP-ET), formulated for estimating evaporation and transpiration from homogeneous soil and canopy. A catchment with complex terrain and patchy vegetation in South Australia was instrumented to measure temperature, humidity and net radiation at soil and canopy surfaces. Performance of the MEP-ET model in quantifying transpiration and soil evaporation was evaluated during wet and dry conditions against independently and directly measured transpiration from sapflow and soil evaporation using the Bowen Ratio Energy Balance (BREB). MEP-ET transpiration shows remarkable agreement with that obtained through sapflow measurements during wet conditions, but consistently overestimates the flux during dry periods. However, an additional term introduced to the original MEP-ET model accounting for higher stomatal regulation during dry spells, based on differences between leaf and air vapor pressure deficits and temperatures, significantly improves the model performance. On the other hand, MEP-ET soil evaporation is in good agreement with that from BREB regardless of moisture conditions. The experimental design allows plot-scale and tree-scale quantification of evaporation and transpiration, respectively. This study confirms for the first time that the MEP-ET model, originally developed for homogeneous open bare soil and closed canopy, can be used for modeling ET over heterogeneous land surfaces. Furthermore, we show that with the addition of an empirical function simulating the plants' ability to regulate transpiration, and based on the same measurements of temperature and humidity, the method can produce reliable estimates of ET during both wet and dry conditions without compromising its parsimony.
Roothaan approach in the thermodynamic limit
Gutierrez, G.; Plastino, A.
1982-02-01
A systematic method for the solution of the Hartree-Fock equations in the thermodynamic limit is presented. The approach is seen to be a natural extension of the one usually employed in the finite-fermion case, i.e., that developed by Roothaan. The new techniques developed here are applied, as an example, to neutron matter, employing the so-called V1 Bethe "homework" potential. The results obtained are, by far, superior to those that the ordinary plane-wave Hartree-Fock theory yields. NUCLEAR STRUCTURE Hartree-Fock approach; nuclear and neutron matter.
International Nuclear Information System (INIS)
2017-01-01
This report focuses on studies of KIT-INE to derive a significantly improved description of the chemical behaviour of Americium and Plutonium in saline NaCl, MgCl2 and CaCl2 brine systems. The studies are based on new experimental data and aim at deriving reliable Am and Pu solubility limits for the investigated systems as well as deriving comprehensive thermodynamic model descriptions. Both aspects are of high relevance in the context of potential source term estimations for Americium and Plutonium in aqueous brine systems and related scenarios. Americium and Plutonium are long-lived alpha emitting radionuclides which due to their high radiotoxicity need to be accounted for in a reliable and traceable way. The hydrolysis of trivalent actinides and the effect of highly alkaline pH conditions on the solubility of trivalent actinides in calcium chloride rich brine solutions were investigated and a thermodynamic model derived. The solubility of Plutonium in saline brine systems was studied under reducing and non-reducing conditions and is described within a new thermodynamic model. The influence of dissolved carbonate on Americium and Plutonium solubility in MgCl2 solutions was investigated and quantitative information on Am and Pu solubility limits in these systems derived. Thermodynamic constants and model parameters derived in this work are implemented in the Thermodynamic Reference Database THEREDA owned by BfS. According to the quality assurance approach in THEREDA, it was necessary to publish parts of this work in peer-reviewed scientific journals. The publications are focused on solubility experiments, spectroscopy of aquatic and solid species and thermodynamic data. (Neck et al., Pure Appl. Chem., Vol. 81, (2009), pp. 1555-1568; Altmaier et al., Radiochimica Acta, 97, (2009), pp. 187-192; Altmaier et al., Actinide Research Quarterly, No. 2, (2011), pp. 29-32.)
Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors
DEFF Research Database (Denmark)
Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi
2013-01-01
Abstract Estimation schemes of Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using the Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio ... is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID; tag cardinality estimation; maximum likelihood; detection error
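One common concrete instance of ML tag-set cardinality estimation works from the number of empty slots observed in a framed-slotted-ALOHA frame. The sketch below uses an independent-slot (binomial) approximation and ignores detection errors, so it is a simplification of the scheme in the abstract; the function name and the example numbers are assumptions.

```python
from math import log

def ml_tag_estimate(n_slots, n_empty, n_max=500):
    """ML estimate of the tag population size from the empty-slot count of
    one framed-slotted-ALOHA frame (independent-slot binomial approximation)."""
    def log_likelihood(n):
        p_empty = (1.0 - 1.0 / n_slots) ** n   # chance a given slot stays empty
        p_empty = min(max(p_empty, 1e-12), 1.0 - 1e-12)
        return (n_empty * log(p_empty)
                + (n_slots - n_empty) * log(1.0 - p_empty))
    return max(range(1, n_max + 1), key=log_likelihood)

# 128 slots with 47 observed empty: the estimate lands near L*ln(L/k) ~ 128
n_hat = ml_tag_estimate(n_slots=128, n_empty=47)
```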
Gao, J.; Lythe, M. B.
1996-06-01
This paper presents the principle of the Maximum Cross-Correlation (MCC) approach in detecting translational motions within dynamic fields from time-sequential remotely sensed images. A C program implementing the approach is presented and illustrated in a flowchart. The program is tested with a pair of sea-surface temperature images derived from Advanced Very High Resolution Radiometer (AVHRR) images near East Cape, New Zealand. Results show that the mean currents in the region have been detected satisfactorily with the approach.
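The MCC principle, that is, sliding one image over the other and keeping the displacement with the highest normalized cross-correlation, can be sketched without any imaging libraries. The pattern below is synthetic; the program described in the abstract operates on AVHRR sea-surface temperature fields and is written in C.

```python
def best_shift(img1, img2, max_shift=3):
    """Maximum Cross-Correlation: the (dy, dx) displacement of img2 relative
    to img1 that maximizes the normalized cross-correlation of the overlap."""
    rows, cols = len(img1), len(img1[0])

    def ncc(dy, dx):
        a, b = [], []
        for y in range(rows):
            for x in range(cols):
                if 0 <= y + dy < rows and 0 <= x + dx < cols:
                    a.append(img1[y][x])
                    b.append(img2[y + dy][x + dx])
        mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
        num = sum((u - mean_a) * (v - mean_b) for u, v in zip(a, b))
        den = (sum((u - mean_a) ** 2 for u in a)
               * sum((v - mean_b) ** 2 for v in b)) ** 0.5
        return num / den if den else 0.0

    candidates = [(dy, dx) for dy in range(-max_shift, max_shift + 1)
                           for dx in range(-max_shift, max_shift + 1)]
    return max(candidates, key=lambda s: ncc(*s))

# Synthetic 8x8 pattern; img2 is img1 displaced by (dy, dx) = (1, 2)
img1 = [[((y * 31 + x * 17) ** 2) % 97 for x in range(8)] for y in range(8)]
img2 = [[img1[y - 1][x - 2] if y >= 1 and x >= 2 else 0 for x in range(8)]
        for y in range(8)]
shift = best_shift(img1, img2)
```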
Barge Train Maximum Impact Forces Using Limit States for the Lashings Between Barges
National Research Council Canada - National Science Library
Arroyo, Jose R; Ebeling, Robert M
2005-01-01
... on: the mass including hydrodynamic added mass of the barge train, the approach velocity, the approach angle, the barge train moment of inertia, damage sustained by the barge structure, and friction...
Comparison of candidate solar array maximum power utilization approaches. [for spacecraft propulsion
Costogue, E. N.; Lindena, S.
1976-01-01
A study was made of five potential approaches that can be utilized to detect the maximum power point of a solar array while sustaining operations at or near maximum power and without endangering stability or causing array voltage collapse. The approaches studied included: (1) dynamic impedance comparator, (2) reference array measurement, (3) onset of solar array voltage collapse detection, (4) parallel tracker, and (5) direct measurement. The study analyzed the feasibility and adaptability of these approaches to a future solar electric propulsion (SEP) mission, and, specifically, to a comet rendezvous mission. Such missions presented the most challenging requirements to a spacecraft power subsystem in terms of power management over large solar intensity ranges of 1.0 to 3.5 AU. The dynamic impedance approach was found to have the highest figure of merit, and the reference array approach followed closely behind. The results are applicable to terrestrial solar power systems as well as to other than SEP space missions.
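Of the five approaches, the "direct measurement" one is the simplest to sketch: sweep the operating points, compute P = V·I at each, and keep the maximum. The I-V samples below are hypothetical, not data from the study.

```python
def direct_mpp(iv_samples):
    """'Direct measurement' MPP detection: evaluate P = V * I at each sampled
    operating point and keep the point of maximum power."""
    return max(iv_samples, key=lambda vi: vi[0] * vi[1])

# Hypothetical (voltage, current) samples along a PV array I-V curve
samples = [(0.0, 3.0), (10.0, 2.9), (20.0, 2.7), (30.0, 2.2),
           (35.0, 1.2), (38.0, 0.0)]
v_mpp, i_mpp = direct_mpp(samples)
```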
Roothaan approach in the thermodynamic limit
International Nuclear Information System (INIS)
Gutierrez, G.; Plastino, A.
1982-01-01
A systematic method for the solution of the Hartree-Fock equations in the thermodynamic limit is presented. The approach is seen to be a natural extension of the one usually employed in the finite-fermion case, i.e., that developed by Roothaan. The new techniques developed here are applied, as an example, to neutron matter, employing the so-called V1 Bethe "homework" potential. The results obtained are, by far, superior to those that the ordinary plane-wave Hartree-Fock theory yields
Shurtleff, Amy C; Garza, Nicole; Lackemeyer, Matthew; Carrion, Ricardo; Griffiths, Anthony; Patterson, Jean; Edwin, Samuel S; Bavari, Sina
2012-12-01
We describe herein, limitations on research at biosafety level 4 (BSL-4) containment laboratories, with regard to biosecurity regulations, safety considerations, research space limitations, and physical constraints in executing experimental procedures. These limitations can severely impact the number of collaborations and size of research projects investigating microbial pathogens of biodefense concern. Acquisition, use, storage, and transfer of biological select agents and toxins (BSAT) are highly regulated due to their potential to pose a severe threat to public health and safety. All federal, state, city, and local regulations must be followed to obtain and maintain registration for the institution to conduct research involving BSAT. These include initial screening and continuous monitoring of personnel, controlled access to containment laboratories, accurate and current BSAT inventory records. Safety considerations are paramount in BSL-4 containment laboratories while considering the types of research tools, workflow and time required for conducting both in vivo and in vitro experiments in limited space. Required use of a positive-pressure encapsulating suit imposes tremendous physical limitations on the researcher. Successful mitigation of these constraints requires additional time, effort, good communication, and creative solutions. Test and evaluation of novel vaccines and therapeutics conducted under good laboratory practice (GLP) conditions for FDA approval are prioritized and frequently share the same physical space with important ongoing basic research studies. The possibilities and limitations of biomedical research involving microbial pathogens of biodefense concern in BSL-4 containment laboratories are explored in this review.
2010-07-01
... 40 Protection of Environment 21 2010-07-01 2010-07-01 false Total maximum daily loads (TMDL) and individual water quality-based effluent limitations. 130.7 Section 130.7 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS WATER QUALITY PLANNING AND MANAGEMENT § 130.7 Total...
Quantum cryptography approaching the classical limit.
Weedbrook, Christian; Pirandola, Stefano; Lloyd, Seth; Ralph, Timothy C
2010-09-10
We consider the security of continuous-variable quantum cryptography as we approach the classical limit, i.e., when the unknown preparation noise at the sender's station becomes significantly noisy or thermal (even by as much as 10⁴ times greater than the variance of the vacuum mode). We show that, provided the channel transmission losses do not exceed 50%, the security of quantum cryptography is not dependent on the channel transmission, and is therefore remarkably robust against significant amounts of excess preparation noise. We extend these results to consider for the first time quantum cryptography at wavelengths considerably longer than optical and find that regions of security still exist all the way down to the microwave.
DEFF Research Database (Denmark)
Mikosch, Thomas Valentin; Rackauskas, Alfredas
2010-01-01
In this paper, we deal with the asymptotic distribution of the maximum increment of a random walk with a regularly varying jump size distribution. This problem is motivated by a long-standing problem on change point detection for epidemic alternatives. It turns out that the limit distribution of the maximum increment of the random walk is one of the classical extreme value distributions, the Fréchet distribution. We prove the results in the general framework of point processes and for jump sizes taking values in a separable Banach space.
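The maximum increment statistic itself is straightforward to compute: M_n = max over i < j of (S_j − S_i), which a single pass over the walk evaluates by tracking the running minimum of the partial sums. A minimal sketch (our own illustration, not the paper's code):

```python
def max_increment(jumps):
    """Maximum increment max_{i<j} (S_j - S_i) of the walk S_k = jumps[0]+...+jumps[k-1].

    One pass: for each new partial sum S_j, subtract the smallest
    partial sum seen so far (including S_0 = 0).
    """
    s = 0.0                  # current partial sum S_j
    min_s = 0.0              # min of S_0 .. S_{j-1}
    best = float("-inf")
    for x in jumps:
        s += x
        best = max(best, s - min_s)
        min_s = min(min_s, s)
    return best
```

For regularly varying jumps (e.g. Pareto-tailed with index α < 2), the largest single jump tends to dominate this maximum, which is the mechanism behind the Fréchet limit the paper establishes.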
Reliability of buildings in service limit state for maximum horizontal displacements
Directory of Open Access Journals (Sweden)
A. G. B. Corelhano
Full Text Available Brazilian design code ABNT NBR6118:2003 - Design of Concrete Structures - Procedures [1] proposes the use of simplified models for the consideration of non-linear material behavior in the evaluation of horizontal displacements in buildings. These models penalize stiffness of columns and beams, representing the effects of concrete cracking and avoiding costly physical non-linear analyses. The objectives of the present paper are to investigate the accuracy and uncertainty of these simplified models, as well as to evaluate the reliability of structures designed following ABNT NBR6118:2003 [1] in the service limit state for horizontal displacements. Model error statistics are obtained from 42 representative plane frames. The reliabilities of three typical (4, 8 and 12 floor) buildings are evaluated, using the simplified models and a rigorous, physically and geometrically non-linear analysis. Results show that the 70/70 (column/beam) stiffness reduction model is more accurate and less conservative than the 80/40 model. Results also show that the ABNT NBR6118:2003 [1] design criteria for horizontal displacement limit states (masonry damage according to ACI 435.3R-68 (1984) [10]) are conservative, and result in reliability indexes which are larger than those recommended in EUROCODE [2] for irreversible service limit states.
Reliability analysis - systematic approach based on limited data
International Nuclear Information System (INIS)
Bourne, A.J.
1975-11-01
The initial approaches required for reliability analysis are outlined. These approaches highlight the system boundaries, examine the conditions under which the system is required to operate, and define the overall performance requirements. The discussion is illustrated by a simple example of an automatic protective system for a nuclear reactor. It is then shown how the initial approach leads to a method of defining the system, establishing performance parameters of interest and determining the general form of reliability models to be used. The overall system model and the availability of reliability data at the system level are next examined. An iterative process is then described whereby the reliability model and data requirements are systematically refined at progressively lower hierarchic levels of the system. At each stage, the approach is illustrated with examples from the protective system previously described. The main advantages of the approach put forward are the systematic process of analysis, the concentration of assessment effort in the critical areas and the maximum use of limited reliability data. (author)
Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong
2011-01-01
Optical sensors aboard Earth orbiting satellites such as the next generation Visible/Infrared Imager/Radiometer Suite (VIIRS) assume that the sensor's radiometric response in the Reflective Solar Bands (RSB) is described by a quadratic polynomial relating the aperture spectral radiance to the sensor Digital Number (DN) readout. For VIIRS Flight Unit 1, the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on-orbit through observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a Maximum Likelihood approach is applied in carrying out the least-squares procedure. Crucial to the Maximum Likelihood least-squares procedure is the computation of the weight. The weight not only has a contribution from the noise of the sensor's digital count, with an important contribution from digitization error, but is also affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and the dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The Maximum Likelihood approach demonstrates the inadequacy of the attenuation method model with a quadratic polynomial for the retrieved spectral radiance. We show that using the inadequate model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.
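The core computation here, a weighted least-squares fit of the quadratic response model DN ≈ c0 + c1·L + c2·L², can be sketched in a few lines. This is our own illustrative solver (normal equations plus Gaussian elimination); the paper's full Maximum Likelihood treatment of noise in both variables and of model error is not reproduced:

```python
def wls_quadratic(x, y, w):
    """Weighted least-squares fit of y ~ c0 + c1*x + c2*x**2 with weights w."""
    # Normal equations A c = b, A[i][j] = sum(w * x^(i+j)), b[i] = sum(w * y * x^i)
    A = [[sum(wi * xi ** (i + j) for xi, wi in zip(x, w)) for j in range(3)]
         for i in range(3)]
    b = [sum(wi * yi * xi ** i for xi, yi, wi in zip(x, y, w)) for i in range(3)]
    # Gaussian elimination with partial pivoting on the 3x3 system
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution
    c = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, 3))) / A[i][i]
    return c
```

As the abstract stresses, the choice of weights (detector noise, digitization error) and of which variable is treated as noisy dominates the coefficient uncertainties; this sketch only shows the algebraic skeleton.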
Approach to DOE threshold guidance limits
International Nuclear Information System (INIS)
Shuman, R.D.; Wickham, L.E.
1984-01-01
The need for less restrictive criteria governing disposal of extremely low-level radioactive waste has long been recognized. The Low-Level Waste Management Program has been directed by the Department of Energy (DOE) to aid in the development of a threshold guidance limit for DOE low-level waste facilities. Project objectives are concerned with the definition of a threshold limit dose and pathway analysis of radionuclide transport within selected exposure scenarios at DOE sites. Results of the pathway analysis will be used to determine waste radionuclide concentration guidelines that meet the defined threshold limit dose. Methods of measurement and verification of concentration limits round out the project's goals. Work on defining a threshold limit dose is nearing completion. Pathway analysis of sanitary landfill operations at the Savannah River Plant and the Idaho National Engineering Laboratory is in progress using the DOSTOMAN computer code. Concentration limit calculations and determination of implementation procedures shall follow completion of the pathways work. 4 references
Manzoni, Stefano; Vico, Giulia; Katul, Gabriel; Palmroth, Sari; Jackson, Robert B; Porporato, Amilcare
2013-04-01
Soil and plant hydraulics constrain ecosystem productivity by setting physical limits to water transport and hence carbon uptake by leaves. While more negative xylem water potentials provide a larger driving force for water transport, they also cause cavitation that limits hydraulic conductivity. An optimum balance between driving force and cavitation occurs at intermediate water potentials, thus defining the maximum transpiration rate the xylem can sustain (denoted as E(max)). The presence of this maximum raises the question as to whether plants regulate transpiration through stomata to function near E(max). To address this question, we calculated E(max) across plant functional types and climates using a hydraulic model and a global database of plant hydraulic traits. The predicted E(max) compared well with measured peak transpiration across plant sizes and growth conditions (R = 0.86, P efficiency trade-off in plant xylem. Stomatal conductance allows maximum transpiration rates despite partial cavitation in the xylem thereby suggesting coordination between stomatal regulation and xylem hydraulic characteristics. © 2013 The Authors. New Phytologist © 2013 New Phytologist Trust.
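The optimum described above can be sketched numerically: choose a vulnerability curve k(ψ), multiply by the soil-to-leaf driving force, and locate the maximum over leaf water potential. The sigmoidal curve and all parameter values below (k_max, psi50, slope, psi_soil) are illustrative placeholders, not values from the study:

```python
import math

def hydraulic_conductivity(psi, k_max=5.0, psi50=-2.0, slope=2.5):
    """Sigmoidal vulnerability curve: conductivity halves at psi50 (MPa),
    and falls toward zero as psi becomes more negative (cavitation)."""
    return k_max / (1.0 + math.exp(slope * (psi50 - psi)))

def max_transpiration(psi_soil=-0.5, psi_min=-8.0, n=4000):
    """Grid search for E(max): transpiration E = k(psi_leaf) * (psi_soil - psi_leaf)
    is zero at psi_leaf = psi_soil and cavitation-limited at very negative psi_leaf,
    so the maximum sits at an intermediate leaf water potential."""
    best_e, best_psi = 0.0, psi_soil
    for i in range(n + 1):
        psi_leaf = psi_min + (psi_soil - psi_min) * i / n
        e = hydraulic_conductivity(psi_leaf) * (psi_soil - psi_leaf)
        if e > best_e:
            best_e, best_psi = e, psi_leaf
    return best_e, best_psi
```

With these toy parameters the optimum falls near psi_leaf ≈ -1.7 MPa, interior to the feasible range, mirroring the paper's point that E(max) is attained despite partial cavitation.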
Maximum likelihood approach to “informed” Sound Source Localization for Hearing Aid applications
DEFF Research Database (Denmark)
Farmani, Mojtaba; Pedersen, Michael Syskind; Tan, Zheng-Hua
2015-01-01
Most state-of-the-art Sound Source Localization (SSL) algorithms have been proposed for applications which are "uninformed" about the target sound content; however, utilizing a wireless microphone worn by a target talker enables recent Hearing Aid Systems (HASs) to access an almost noise-free sound signal of the target talker at the HAS via the wireless connection. Therefore, in this paper, we propose a maximum likelihood (ML) approach, which we call MLSSL, to estimate the Direction of Arrival (DoA) of the target signal given access to the target signal content. Compared with other "informed...
Maximum Entropy Approach in Dynamic Contrast-Enhanced Magnetic Resonance Imaging.
Farsani, Zahra Amini; Schmid, Volker J
2017-01-01
In the estimation of physiological kinetic parameters from Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) data, the determination of the arterial input function (AIF) plays a key role. This paper proposes a Bayesian method to estimate the physiological parameters of DCE-MRI along with the AIF in situations where no measurement of the AIF is available. In the proposed algorithm, the maximum entropy method (MEM) is combined with the maximum a posteriori (MAP) approach. To this end, MEM is used to specify a prior probability distribution of the unknown AIF. The ability of this method to estimate the AIF is validated using the Kullback-Leibler divergence. Subsequently, the kinetic parameters can be estimated with MAP. The proposed algorithm is evaluated with a data set from a breast cancer MRI study. The application shows that the AIF can reliably be determined from the DCE-MRI data using MEM, and the kinetic parameters can be estimated subsequently. The maximum entropy method is a powerful tool for reconstructing images from many types of data and for generating a probability distribution from given information. The proposed method gives an alternative way to assess the input function from the existing data. It allows a good fit of the data and therefore a better estimation of the kinetic parameters. In the end, this allows for a more reliable use of DCE-MRI. Schattauer GmbH.
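The maximum entropy principle used here for the AIF prior can be illustrated on a discrete toy problem: among all pmfs on a fixed support with a prescribed mean, the entropy maximizer has the exponential-family form p_i ∝ exp(λ·x_i), with λ chosen to satisfy the constraint. A hedged sketch (the support and target mean are arbitrary; this is not the DCE-MRI model itself):

```python
import math

def maxent_pmf(support, target_mean, lam_lo=-10.0, lam_hi=10.0, iters=200):
    """Maximum-entropy pmf on `support` with a fixed mean: p_i ∝ exp(lam * x_i).

    The Lagrange multiplier lam is found by bisection, since the model
    mean is monotonically increasing in lam.
    """
    def mean_for(lam):
        w = [math.exp(lam * x) for x in support]
        z = sum(w)
        return sum(x * wi for x, wi in zip(support, w)) / z

    for _ in range(iters):
        mid = 0.5 * (lam_lo + lam_hi)
        if mean_for(mid) < target_mean:
            lam_lo = mid
        else:
            lam_hi = mid
    lam = 0.5 * (lam_lo + lam_hi)
    w = [math.exp(lam * x) for x in support]
    z = sum(w)
    return [wi / z for wi in w]
```

For example, `maxent_pmf([1, 2, 3, 4, 5, 6], 4.5)` reproduces the classic biased-die construction, and a target mean of 3.5 recovers the uniform distribution, as the principle requires when the constraint adds no information.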
BMRC: A Bitmap-Based Maximum Range Counting Approach for Temporal Data in Sensor Monitoring Networks
Directory of Open Access Journals (Sweden)
Bin Cao
2017-09-01
Full Text Available Due to the rapid development of the Internet of Things (IoT), many feasible deployments of sensor monitoring networks have been made to capture events in the physical world, such as human diseases, weather disasters and traffic accidents, which generate large-scale temporal data. Generally, the time interval that results in the highest incidence of a severe event has significance for society. For example, there exists an interval that covers the maximum number of people who have the same unusual symptoms, and knowing this interval can help doctors to locate the reason behind this phenomenon. As far as we know, there is no approach available for solving this problem efficiently. In this paper, we propose the Bitmap-based Maximum Range Counting (BMRC) approach for temporal data generated in sensor monitoring networks. Since sensor nodes can update their temporal data at high frequency, we present a scalable strategy to support real-time insert and delete operations. The experimental results show that BMRC outperforms the baseline algorithm in terms of efficiency.
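The underlying query, finding the point in time covered by the maximum number of intervals, has a classic O(n log n) sweep-line solution; the bitmap index in the paper makes this counting fast and updatable. A plain sketch of the sweep (not the BMRC structure itself):

```python
def busiest_point(intervals):
    """Return (time, count) at which the most closed intervals overlap."""
    events = []
    for start, end in intervals:
        events.append((start, 1))   # interval opens
        events.append((end, -1))    # interval closes
    # Sort by time; at ties, process openings before closings so that
    # closed intervals sharing an endpoint count as overlapping.
    events.sort(key=lambda e: (e[0], -e[1]))
    best_t, best_c, cur = None, 0, 0
    for t, d in events:
        cur += d
        if cur > best_c:
            best_t, best_c = t, cur
    return best_t, best_c
```

For symptom-onset windows, for instance, `busiest_point` returns the earliest instant at which the maximum number of patients' intervals coincide.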
International Nuclear Information System (INIS)
Zakeri, Behnam; Syri, Sanna; Rinne, Samuli
2015-01-01
Finland is to increase the share of RES (renewable energy sources) up to 38% of final energy consumption by 2020. While the Finnish energy system is expected to achieve this goal by drawing on local biomass resources, increasing the share of other intermittent renewables, namely wind power and solar energy, is under development. Yet the maximum flexibility of the existing energy system in integrating renewable energy has not been investigated, which is an important step before undertaking new renewable energy obligations. This study aims at filling this gap by hourly analysis and comprehensive modeling of the energy system including electricity, heat, and transportation, employing the EnergyPLAN tool. Focusing on technical and economic implications, we assess the maximum potential of different RESs separately (including bioenergy, hydropower, wind power, solar heating and PV, and heat pumps), as well as an optimal mix of different technologies. Furthermore, we propose a new index for assessing the maximum flexibility of energy systems in absorbing variable renewable energy. The results demonstrate that wind energy can be harvested at maximum levels of 18–19% of annual power demand (approx. 16 TWh/a), without major enhancements in the flexibility of the energy infrastructure. With today's energy demand, the maximum feasible renewable energy share for Finland is around 44–50% with an optimal mix of different technologies, which promises a 35% reduction in carbon emissions from the 2012 level. Moreover, the Finnish energy system is flexible enough to raise the share of renewables in gross electricity consumption up to 69–72%, at maximum. Higher shares of RES call for lower energy consumption (energy efficiency) and more flexibility in balancing energy supply and consumption (e.g. by energy storage). - Highlights: • By hourly analysis, we model the whole energy system of Finland. • With the existing energy infrastructure, RES (renewable energy sources) in primary energy cannot go beyond 50%.
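In its simplest form, the flexibility question reduces to an hourly balance: without storage, export, or demand response, wind generation in excess of instantaneous demand must be curtailed. A toy sketch of that accounting (the numbers and the no-flexibility assumption are illustrative; EnergyPLAN models far more detail, including CHP, storage and transport):

```python
def wind_integration(demand, wind):
    """Hour-by-hour balance: absorbed wind is capped by instantaneous demand.

    Returns (absorbed energy, curtailed energy, absorbed share of demand).
    """
    absorbed = sum(min(d, w) for d, w in zip(demand, wind))
    curtailed = sum(max(0.0, w - d) for d, w in zip(demand, wind))
    share = absorbed / sum(demand)
    return absorbed, curtailed, share
```

Raising the absorbable share beyond what this rigid balance permits is precisely what the paper's flexibility measures (storage, coupling of heat and transport) are meant to capture.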
Shotgun approaches to gait analysis : insights & limitations
Kaptein, Ronald G.; Wezenberg, Daphne; IJmker, Trienke; Houdijk, Han; Beek, Peter J.; Lamoth, Claudine J. C.; Daffertshofer, Andreas
2014-01-01
Background: Identifying features for gait classification is a formidable problem. The number of candidate measures is legion. This calls for proper, objective criteria when ranking their relevance. Methods: Following a shotgun approach we determined a plenitude of kinematic and physiological gait
Lusiana, Evellin Dewi
2017-12-01
The parameters of the binary probit regression model are commonly estimated using the Maximum Likelihood Estimation (MLE) method. However, MLE has a limitation when the binary data contain separation. Separation is the condition where one or several independent variables exactly group the categories of the binary response. As a result, the MLE estimators do not converge and cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims. First, to identify the chance of separation occurrence in the binary probit regression model between the MLE method and Firth's approach. Second, to compare the performance of the binary probit regression model estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both are examined by simulation under different sample sizes. The results showed that the chance of separation occurrence with the MLE method for small sample sizes is higher than with Firth's approach. On the other hand, for larger sample sizes, the probability decreased and is nearly identical between the MLE method and Firth's approach. Meanwhile, Firth's estimators have smaller RMSE than the MLE's, especially for smaller sample sizes. For larger sample sizes, the RMSEs are not much different. This means that Firth's estimators outperformed the MLE estimators.
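Separation makes the likelihood monotone in the coefficient, so no finite maximizer exists; that is the non-convergence the abstract refers to. The effect is easy to reproduce with a no-intercept probit likelihood on perfectly separated toy data (Firth's penalized likelihood itself is not implemented here):

```python
import math

def probit_loglik(beta, xs, ys):
    """Log-likelihood of a no-intercept probit model P(y=1|x) = Phi(beta * x)."""
    def phi(z):
        # Standard normal CDF via the error function
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    ll = 0.0
    for x, y in zip(xs, ys):
        p = min(max(phi(beta * x), 1e-12), 1.0 - 1e-12)
        ll += math.log(p) if y == 1 else math.log(1.0 - p)
    return ll
```

With xs = [-2, -1, 1, 2] and ys = [0, 0, 1, 1], every positive x has y = 1 and every negative x has y = 0, so scaling beta upward only increases the likelihood and the "MLE" diverges to infinity; Firth's Jeffreys-prior penalty is one standard remedy.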
Loturco, Irineu; Kobal, Ronaldo; Moraes, José E; Kitamura, Katia; Cal Abad, César C; Pereira, Lucas A; Nakamura, Fábio Y
2017-04-01
Loturco, I, Kobal, R, Moraes, JE, Kitamura, K, Cal Abad, CC, Pereira, LA, and Nakamura, FY. Predicting the maximum dynamic strength in bench press: the high precision of the bar velocity approach. J Strength Cond Res 31(4): 1127-1131, 2017. The aim of this study was to determine the force-velocity relationship and test the possibility of determining the 1 repetition maximum (1RM) in "free weight" and Smith machine bench presses. Thirty-six male top-level athletes from 3 different sports were submitted to a standardized 1RM bench press assessment (free weight or Smith machine, in randomized order), following standard procedures encompassing lifts performed at 40-100% of 1RM. The mean propulsive velocity (MPV) was measured in all attempts. A linear regression was performed to establish the relationships between bar velocities and 1RM percentages. The actual and predicted 1RM for each exercise were compared using a paired t-test. Although the Smith machine 1RM was higher (10% difference) than the free weight 1RM, in both cases the actual and predicted values did not differ. In addition, the linear relationship between MPV and percentage of 1RM (coefficient of determination ≥95%) allows determination of training intensity based on the bar velocity. The linear relationships between the MPVs and the relative percentages of 1RM throughout the entire range of loads enable coaches to use the MPV to accurately monitor their athletes on a daily basis and determine their actual 1RM without the need to perform standard maximum dynamic strength assessments.
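The load-velocity approach can be sketched as a simple linear fit: measure mean propulsive velocity at several submaximal loads, regress velocity on load, and extrapolate to the minimal velocity expected at 1RM. The 0.17 m/s default below is a commonly cited bench-press threshold used here purely as an illustrative assumption, as are the toy data in the test:

```python
def fit_line(x, y):
    """Ordinary least-squares line y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

def predict_1rm(loads, velocities, v_1rm=0.17):
    """Extrapolate the load-velocity line to the assumed 1RM velocity."""
    a, b = fit_line(loads, velocities)  # velocity = a + b*load, b < 0
    return (v_1rm - a) / b
```

In practice the regression would be fitted per athlete and per exercise (free weight vs. Smith machine), since the study found a roughly 10% offset between the two.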
The voluntary offset - approaches and limitations
International Nuclear Information System (INIS)
2012-06-01
After having briefly presented the voluntary offset mechanism, which aims at funding a project of reduction or capture of greenhouse gas emissions, this document describes the approach to be followed to adopt this voluntary offset, for individuals as well as for companies, communities or event organisers. It describes other important context issues (projects developed under the voluntary offset, actors of the voluntary offsetting market, market status, offset labels), and how to proceed in practice (defining objectives and expectations, identifying the necessary requirements, and ensuring that the requirements meet those expectations). It addresses the case of voluntary offset in France (difficult establishment, possible solutions).
An analysis of annual maximum streamflows in Terengganu, Malaysia using TL-moments approach
Ahmad, Ummi Nadiah; Shabri, Ani; Zakaria, Zahrahtul Amani
2013-02-01
The TL-moments approach has been used in an analysis to determine the best-fitting distributions to represent the annual series of maximum streamflow data over 12 stations in Terengganu, Malaysia. TL-moments with different trimming values are used to estimate the parameters of the selected distributions, namely the generalized Pareto (GPA), generalized logistic, and generalized extreme value distributions. The influence of TL-moments on the estimated probability distribution functions is examined by evaluating the relative root mean square error and relative bias of quantile estimates through Monte Carlo simulations. The boxplot is used to show the location of the median and the dispersion of the data, which helps in reaching decisive conclusions. For most of the cases, the results show that the GPA distribution estimated with TL-moments trimming the single smallest value from the conceptual sample (TL-moments (1,0)) was the most appropriate in the majority of the stations for describing the annual maximum streamflow series in Terengganu, Malaysia.
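Sample TL-moments are built from unbiased estimates of order-statistic means; for instance, the first TL-moment with trimming (1,0) equals E[X_{2:2}], the expected maximum of two draws, which downweights the smallest observation. A small sketch of that estimator (distribution fitting and higher-order TL-moments are omitted):

```python
from math import comb

def order_stat_mean(sample, j, n):
    """Unbiased estimator of E[X_{j:n}] from a sample of size m >= n.

    Averages x over all size-n subsamples: the i-th smallest sample value
    (0-indexed) is the j-th smallest of a subsample in comb(i, j-1) *
    comb(m-1-i, n-j) ways.
    """
    xs = sorted(sample)
    m = len(xs)
    return sum(comb(i, j - 1) * comb(m - 1 - i, n - j) * x
               for i, x in enumerate(xs)) / comb(m, n)

def tl_moment1(sample, t1=1, t2=0):
    """First TL-moment lambda_1^(t1,t2) = E[X_{1+t1 : 1+t1+t2}]."""
    return order_stat_mean(sample, 1 + t1, 1 + t1 + t2)
```

With no trimming, `tl_moment1(sample, 0, 0)` reduces to the ordinary sample mean, which is a convenient sanity check on the combinatorial weights.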
Mat Jan, Nur Amalina; Shabri, Ani
2017-01-01
The TL-moments approach has been used in an analysis to identify the best-fitting distributions to represent the annual series of maximum streamflow data over seven stations in Johor, Malaysia. TL-moments with different trimming values are used to estimate the parameters of the selected distributions, namely the three-parameter lognormal (LN3) and Pearson Type III (P3) distributions. The main objective of this study is to derive the TL-moments (t1, 0), t1 = 1, 2, 3, 4 methods for the LN3 and P3 distributions. The performance of TL-moments (t1, 0), t1 = 1, 2, 3, 4 was compared with L-moments through Monte Carlo simulation and streamflow data over a station in Johor, Malaysia. The absolute error is used to test the influence of the TL-moments methods on the estimated probability distribution functions. From the cases in this study, the results show that TL-moments with the four smallest values trimmed from the conceptual sample (TL-moments (4,0)) of the LN3 distribution was the most appropriate in most of the stations for the annual maximum streamflow series in Johor, Malaysia.
2010-04-01
... insurance premium excluded from limitations on maximum mortgage amounts. 203.18c Section 203.18c Housing and...-front mortgage insurance premium excluded from limitations on maximum mortgage amounts. After... LOAN INSURANCE PROGRAMS UNDER NATIONAL HOUSING ACT AND OTHER AUTHORITIES SINGLE FAMILY MORTGAGE...
Energy Technology Data Exchange (ETDEWEB)
Kinoshita, Takashi, E-mail: tkino@med.kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Kawayama, Tomotaka, E-mail: kawayama_tomotaka@med.kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Imamura, Youhei, E-mail: mamura_youhei@med.kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Sakazaki, Yuki, E-mail: sakazaki@med.kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Hirai, Ryo, E-mail: hirai_ryou@kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Ishii, Hidenobu, E-mail: shii_hidenobu@med.kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Suetomo, Masashi, E-mail: jin_t_f_c@yahoo.co.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Matsunaga, Kazuko, E-mail: kmatsunaga@kouhoukai.or.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Azuma, Koichi, E-mail: azuma@med.kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Fujimoto, Kiminori, E-mail: kimichan@med.kurume-u.ac.jp [Department of Radiology, Kurume University School of Medicine, Kurume (Japan); Hoshino, Tomoaki, E-mail: hoshino@med.kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan)
2015-04-15
Highlights: •Computed tomography (CT) is often used for the diagnosis of chronic obstructive pulmonary disease. •CT scans are, however, more expensive. •Plain chest radiography is simpler and cheaper; it is useful for detecting pulmonary emphysema, but not airflow limitation. •Our study demonstrated that the paired maximum inspiratory and expiratory plain chest radiography technique can detect severe airflow limitation. •We believe the technique is helpful in diagnosing patients with chronic obstructive pulmonary disease. -- Abstract: Background: The usefulness of paired maximum inspiratory and expiratory (I/E) plain chest radiography (pCR) for diagnosis of chronic obstructive pulmonary disease (COPD) is still unclear. Objectives: We examined whether measurement of the I/E ratio using paired I/E pCR could be used for detection of airflow limitation in patients with COPD. Methods: Eighty patients with COPD (GOLD stage I = 23, stage II = 32, stage III = 15, stage IV = 10) and 34 control subjects were enrolled. The I/E ratios of frontal and lateral lung areas, and lung distance between the apex and base on pCR views were analyzed quantitatively. Pulmonary function parameters were measured at the same time. Results: The I/E ratios for the frontal lung area (1.25 ± 0.01), the lateral lung area (1.29 ± 0.01), and the lung distance (1.18 ± 0.01) were significantly (p < 0.05) reduced in COPD patients compared with controls (1.31 ± 0.02, 1.38 ± 0.02, and 1.22 ± 0.01, respectively). The I/E ratios in frontal and lateral areas, and lung distance were significantly (p < 0.05) reduced in severe (GOLD stage III) and very severe (GOLD stage IV) COPD as compared to control subjects, although the I/E ratios did not differ significantly between severe and very severe COPD. Moreover, the I/E ratios were significantly correlated with pulmonary function parameters. Conclusions: Measurement of I/E ratios on paired I/E pCR is simple and
Gul, Sehrish; Zou, Xiang; Hassan, Che Hashim; Azam, Muhammad; Zaman, Khalid
2015-12-01
This study investigates the relationship between energy consumption and carbon dioxide emission in a causal framework, as the direction of causality has significant policy implications for developed and developing countries. The study employed the maximum entropy bootstrap (Meboot) approach to examine the causal nexus between energy consumption and carbon dioxide emission using bivariate as well as multivariate frameworks for Malaysia over the period 1975-2013. This is a unified approach that does not require conventional techniques based on asymptotic theory, such as testing for possible unit roots and cointegration. In addition, it can be applied in the presence of non-stationarity of any type, including structural breaks, without any data transformation to achieve stationarity. Thus, it provides more reliable and robust inferences which are insensitive to the time span as well as the lag length used. The empirical results show that there is a unidirectional causality running from energy consumption to carbon emission both in the bivariate model and in the multivariate framework, while controlling for broad money supply and population density. The results indicate that Malaysia is an energy-dependent country and hence energy use stimulates carbon emissions.
The MCE (Maximum Credible Earthquake) - an approach to reduction of seismic risk
International Nuclear Information System (INIS)
Asmis, G.J.K.; Atchison, R.J.
1979-01-01
It is the responsibility of the Regulatory Body (in Canada, the AECB) to ensure that radiological risks resulting from the effects of earthquakes on nuclear facilities do not exceed acceptable levels. In simplified numerical terms this means that the frequency of an unacceptable radiation dose must be kept below 10⁻⁶ per annum. Unfortunately, seismic events fall into the class of external events which are not well defined at these low frequency levels. Thus, design earthquakes have been chosen at the 10⁻³-10⁻⁴ frequency level, a level commensurate with the limits of statistical data. There exists, therefore, a need to define an additional level of earthquake. A seismic design explicitly and implicitly recognizes three levels of earthquake loading; one comfortably below yield, one at or about yield, and one at ultimate. The ultimate level earthquake, contrary to the first two, has been implicitly addressed by conscientious designers by choosing systems, materials and details compatible with postulated dynamic forces. It is the purpose of this paper to discuss the regulatory specifications required to quantify this third level, or Maximum Credible Earthquake (MCE). (orig.)
Maximum entropy approach to H-theory: Statistical mechanics of hierarchical systems.
Vasconcelos, Giovani L; Salazar, Domingos S P; Macêdo, A M S
2018-02-01
A formalism, called H-theory, is applied to the problem of statistical equilibrium of a hierarchical complex system with multiple time and length scales. In this approach, the system is formally treated as being composed of a small subsystem-representing the region where the measurements are made-in contact with a set of "nested heat reservoirs" corresponding to the hierarchical structure of the system, where the temperatures of the reservoirs are allowed to fluctuate owing to the complex interactions between degrees of freedom at different scales. The probability distribution function (pdf) of the temperature of the reservoir at a given scale, conditioned on the temperature of the reservoir at the next largest scale in the hierarchy, is determined from a maximum entropy principle subject to appropriate constraints that describe the thermal equilibrium properties of the system. The marginal temperature distribution of the innermost reservoir is obtained by integrating over the conditional distributions of all larger scales, and the resulting pdf is written in analytical form in terms of certain special transcendental functions, known as the Fox H functions. The distribution of states of the small subsystem is then computed by averaging the quasiequilibrium Boltzmann distribution over the temperature of the innermost reservoir. This distribution can also be written in terms of H functions. The general family of distributions reported here recovers, as particular cases, the stationary distributions recently obtained by Macêdo et al. [Phys. Rev. E 95, 032315 (2017); doi:10.1103/PhysRevE.95.032315] from a stochastic dynamical approach to the problem.
Directory of Open Access Journals (Sweden)
E. Ridolfi
2011-07-01
Full Text Available Hydrological models are the basis of operational flood-forecasting systems. The accuracy of these models is strongly dependent on the quality and quantity of the input information represented by rainfall height. Finer space-time rainfall resolution results in more accurate hazard forecasting. In this framework, an optimum raingauge network is essential in predicting flood events.
This paper develops an entropy-based approach to evaluate the maximum information content achievable by a rainfall network for different sampling time intervals. The procedure is based on the determination of the coefficients of transferred and nontransferred information and on the relative isoinformation contours.
The nontransferred information value achieved by the whole network is strictly dependent on the sampling time intervals considered. To meet the objective of the research, an empirical curve is defined: the nontransferred information value is plotted against the associated sampling time on a semi-log scale, and the resulting curve shows a linear trend.
In this paper, the methodology is applied to the high-density raingauge network of the urban area of Rome.
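The entropy quantities used in such a network analysis can be sketched in a few lines. The following illustrative Python (not the authors' code; the gamma-distributed rainfall series and the bin count are assumptions) estimates the transinformation T(X;Y) = H(X) + H(Y) − H(X,Y) between two gauges and the corresponding coefficient of nontransferred information:

```python
import numpy as np

def entropy(counts):
    """Shannon entropy (bits) of the empirical distribution given by counts."""
    p = counts[counts > 0].astype(float)
    p /= p.sum()
    return float(-np.sum(p * np.log2(p)))

def transferred_info(x, y, bins=10):
    """Transinformation T(X;Y) = H(X) + H(Y) - H(X,Y) between two gauges,
    and the coefficient of nontransferred information 1 - T/H(X)."""
    hx = entropy(np.histogram(x, bins=bins)[0])
    hy = entropy(np.histogram(y, bins=bins)[0])
    hxy = entropy(np.histogram2d(x, y, bins=bins)[0].ravel())
    t = hx + hy - hxy
    return t, 1.0 - t / hx

rng = np.random.default_rng(0)
x = rng.gamma(2.0, 5.0, 5000)            # gauge A rainfall depths
y = 0.8 * x + rng.gamma(2.0, 1.0, 5000)  # nearby, strongly correlated gauge B
z = rng.gamma(2.0, 5.0, 5000)            # distant, independent gauge C
t_near, nt_near = transferred_info(x, y)
t_far, nt_far = transferred_info(x, z)   # little transfer: coefficient near 1
```

A denser or more correlated network transfers more information, so the nontransferred coefficient drops; this is the quantity the paper tracks against the sampling interval.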
A Maximum Entropy Approach to Assess Debonding in Honeycomb Aluminum Plates
Directory of Open Access Journals (Sweden)
Viviana Meruane
2014-05-01
Full Text Available Honeycomb sandwich structures are used in a wide variety of applications. Nevertheless, due to manufacturing defects or impact loads, these structures can be subject to imperfect bonding or debonding between the skin and the honeycomb core. The presence of debonding reduces the bending stiffness of the composite panel, which causes detectable changes in its vibration characteristics. This article presents a new supervised learning algorithm to identify debonded regions in aluminum honeycomb panels. The algorithm uses a linear approximation method handled by a statistical inference model based on the maximum-entropy principle. The merits of this new approach are twofold: no training stage is required, and data are processed in a time comparable to that of neural networks. The honeycomb panels are modeled with finite elements using a simplified three-layer shell model. The adhesive layer between the skin and the core is modeled using linear springs, the rigidities of which are reduced in debonded sectors. The algorithm is validated using experimental data from an aluminum honeycomb panel under different damage scenarios.
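The article does not reproduce the algorithm, but the local maximum-entropy weighting that underlies this kind of training-free statistical inference can be sketched as follows (illustrative Python; the locality parameter beta and the Newton solve for the multiplier are assumptions, not the authors' implementation):

```python
import numpy as np

def lme_weights(X, x, beta=8.0, iters=50):
    """Local maximum-entropy weights w_i >= 0: maximize entropy with a
    locality prior exp(-beta*|X_i - x|^2), subject to sum(w) = 1 and
    sum(w_i * X_i) = x (first-order consistency).  The Lagrange
    multiplier lam is found by Newton iteration."""
    d = X - x                                  # nodes shifted to the query point
    lam = np.zeros(X.shape[1])
    for _ in range(iters):
        logw = -beta * np.sum(d * d, axis=1) + d @ lam
        w = np.exp(logw - logw.max())
        w /= w.sum()
        r = w @ d                              # consistency-constraint residual
        J = d.T @ (w[:, None] * d) - np.outer(r, r)  # Jacobian dr/dlam
        lam -= np.linalg.solve(J, r)
    return w

# 5x5 grid of reference observations on the unit square, query point inside
X = np.array([[i, j] for i in range(5) for j in range(5)], float) / 4.0
x = np.array([0.37, 0.61])
w = lme_weights(X, x)
# A training-free linear prediction is then simply sum_i w_i * y_i
```

Because the weights come from a convex maximum-entropy program rather than a fitted network, no training phase is needed; evaluation cost is one small Newton solve per query.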
Maximum entropy approach to statistical inference for an ocean acoustic waveguide.
Knobles, D P; Sagers, J D; Koch, R A
2012-02-01
A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America
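A minimal sketch of the construction, assuming a toy two-parameter error function E(m) on a grid: the maximum entropy distribution is p(m) ∝ exp(−βE(m)), with the sensitivity factor β fixed so that the expected error matches a specified value, and marginals obtained by summing over the other parameter (illustrative Python, not the authors' code; the grid ranges and quadratic error are invented):

```python
import numpy as np

# Toy two-parameter grid with an assumed quadratic error function E(m)
g1 = np.linspace(1400.0, 1800.0, 81)   # e.g. a sound-speed-like parameter
g2 = np.linspace(150.0, 200.0, 51)     # e.g. a source-level-like parameter
M1, M2 = np.meshgrid(g1, g2, indexing="ij")
E = ((M1 - 1620.0) / 60.0) ** 2 + ((M2 - 172.0) / 8.0) ** 2

def mean_error(beta):
    """Expected error <E> under p(m) proportional to exp(-beta*E(m))."""
    w = np.exp(-beta * (E - E.min()))
    w /= w.sum()
    return float((w * E).sum())

E_bar = 0.8                            # specified expectation value of E
lo, hi = 1e-6, 1e3                     # <E> decreases monotonically in beta
for _ in range(100):                   # bisection for the sensitivity factor
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mean_error(mid) > E_bar else (lo, mid)
beta = 0.5 * (lo + hi)

p = np.exp(-beta * (E - E.min()))
p /= p.sum()
marg1 = p.sum(axis=1)                  # marginal over the second parameter
```

This reproduces the canonical form of the paper's distribution without an intermediate likelihood: the single constraint on the expected error determines β, and everything else follows by normalization and marginalization.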
Kanoglu, U.; Wronna, M.; Baptista, M. A.; Miranda, J. M. A.
2017-12-01
The one-dimensional analytical runup theory in combination with near-shore synthetic waveforms is a promising tool for tsunami rapid early warning systems. Its application in realistic cases with complex bathymetry and initial wave conditions from inverse modelling has shown that maximum runup values can be estimated reasonably well. In this study we generate simplistic bathymetry domains that resemble realistic near-shore features. We investigate the sensitivity of the analytical runup formulae to the variation of fault source parameters and near-shore bathymetric features. To do this we systematically vary the fault plane parameters to compute the initial tsunami wave condition. Subsequently, we use the initial conditions to run the numerical tsunami model on a coupled system of four nested grids and compare the results to the analytical estimates. Variation of the dip angle of the fault plane showed that the analytical estimates differ by less than 10% for angles of 5-45 degrees in a simple bathymetric domain. These results show that the use of analytical formulae for fast runup estimates constitutes a very promising approach in a simple bathymetric domain and might be implemented in hazard mapping and early warning.
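The abstract does not state which runup law is used; one commonly cited one-dimensional analytical result, for a non-breaking solitary wave climbing a plane beach, is Synolakis' runup law, sketched here with illustrative wave and beach parameters:

```python
import math

def synolakis_runup(H, d, beach_slope_deg):
    """Maximum runup R of a non-breaking solitary wave of height H (m)
    over offshore depth d (m) on a plane beach of the given slope, using
    the analytical runup law R/d = 2.831*sqrt(cot(beta))*(H/d)**(5/4)."""
    cot_beta = 1.0 / math.tan(math.radians(beach_slope_deg))
    return d * 2.831 * math.sqrt(cot_beta) * (H / d) ** 1.25

# Illustrative case: 1 m wave in 50 m depth climbing a 1-degree beach
r1 = synolakis_runup(1.0, 50.0, 1.0)
r2 = synolakis_runup(2.0, 50.0, 1.0)   # higher wave -> larger runup
```

The appeal for early warning is evident from the sketch: once an inverse model supplies the near-shore wave height, the runup estimate is a single closed-form evaluation rather than a nested-grid simulation.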
Haverinen, Jaakko; Abramochkin, Denis V; Kamkin, Andre; Vornanen, Matti
2017-02-01
Temperature-induced changes in cardiac output (Q̇) in fish are largely dependent on thermal modulation of heart rate (f_H), and at high temperatures Q̇ collapses due to heat-dependent depression of f_H. This study tests the hypothesis that the firing rate of sinoatrial pacemaker cells sets the upper thermal limit of f_H in vivo. To this end, the temperature dependence of the action potential (AP) frequency of enzymatically isolated pacemaker cells (pacemaker rate, f_PM), the spontaneous beating rate of isolated sinoatrial preparations (f_SA), and the in vivo f_H of cold-acclimated (4°C) brown trout (Salmo trutta fario) were compared under acute thermal challenges. With rising temperature, f_PM steadily increased because of the acceleration of diastolic depolarization and shortening of AP duration, up to the break point temperature (T_BP) of 24.0 ± 0.37°C, at which point the electrical activity abruptly ceased. The maximum f_PM at T_BP was much higher [193 ± 21.0 beats per minute (bpm)] than the peak f_SA (94.3 ± 6.0 bpm at 24.1°C) or the peak f_H (76.7 ± 2.4 bpm at 15.7 ± 0.82°C), suggesting that the firing rate of isolated pacemaker cells does not directly set the upper thermal limit of f_H in brown trout in vivo. Copyright © 2017 the American Physiological Society.
An ecological function and services approach to total maximum daily load (TMDL) prioritization.
Hall, Robert K; Guiliano, David; Swanson, Sherman; Philbin, Michael J; Lin, John; Aron, Joan L; Schafer, Robin J; Heggem, Daniel T
2014-04-01
Prioritizing total maximum daily load (TMDL) development starts by considering the scope and severity of water pollution and risks to public health and aquatic life. Methodology using quantitative assessments of in-stream water quality is appropriate and effective for point source (PS) dominated discharge, but less so in watersheds with mostly nonpoint source (NPS) related impairments. For NPSs, prioritization in TMDL development and implementation of associated best management practices should focus on restoration of ecosystem physical functions, including how restoration effectiveness depends on design, maintenance and placement within the watershed. To refine the approach to TMDL development, regulators and stakeholders must first ask if the watershed, or ecosystem, is at risk of losing riparian or other ecologically based physical attributes and processes. If so, the next step is an assessment of the spatial arrangement of functionality with a focus on the at-risk areas that could be lost, or could, with some help, regain functions. Evaluating stream and wetland riparian function has advantages over the traditional means of water quality and biological assessments for NPS TMDL development. Understanding how an ecosystem functions enables stakeholders and regulators to determine the severity of problem(s), identify source(s) of impairment, and predict and avoid a decline in water quality. The Upper Reese River, Nevada, provides an example of water quality impairment caused by NPS pollution. In this river basin, the stream and wetland riparian proper functioning condition (PFC) protocol, water quality data, and remote sensing imagery were used to identify sediment sources, transport, and distribution, and their impact on water quality and aquatic resources. This study found that assessments of ecological function could be used to generate leading (early) indicators of water quality degradation for targeting pollution control measures, whereas traditional in-stream water quality measurements serve as lagging indicators.
Lowell, H. H.
1953-01-01
When a closed body or a duct envelope moves through the atmosphere, air pressure and temperature rises occur ahead of the body or, under ram conditions, within the duct. If cloud water droplets are encountered, droplet evaporation will result because of the air-temperature rise and the relative velocity between the droplet and the stagnating air. It is shown that the solution of the steady-state psychrometric equation provides evaporation rates which are the maximum possible when droplets are entrained in air moving along stagnation lines under such conditions. Calculations are made for a wide variety of water droplet diameters, ambient conditions, and flight Mach numbers. Droplet diameter, body size, and Mach number effects are found to predominate, whereas wide variation in ambient conditions is of relatively small significance in the determination of evaporation rates. The results are essentially exact for the case of movement of droplets having diameters smaller than about 30 microns along relatively long ducts (length at least several feet) or toward large obstacles (wings), since disequilibrium effects are then of little significance. Mass losses in the case of movement within ducts will often be significant fractions (one-fifth to one-half) of the original droplet masses, while very small droplets within ducts will often disappear even though the entraining air is not fully stagnated. Wing-approach evaporation losses will usually be of the order of several percent of the original droplet masses. Two numerical examples are given of the determination of local evaporation rates and total mass losses in cases involving cloud droplets approaching circular cylinders along stagnation lines. The cylinders chosen were of 3.95-inch (10.0+ cm) diameter and 39.5-inch (100+ cm) diameter. The smaller is representative of icing-rate measurement cylinders, while with the larger will be associated an air-flow field similar to that ahead of an airfoil having a leading-edge radius
Energy Technology Data Exchange (ETDEWEB)
Serghiuta, D.; Tholammakkil, J.; Shen, W., E-mail: Dumitru.Serghiuta@cnsc-ccsn.gc.ca [Canadian Nuclear Safety Commission, Ottawa, Ontario (Canada)
2014-07-01
A stochastic-deterministic approach based on representation of uncertainties by subjective probabilities is proposed for evaluation of bounding values of functional failure probability and assessment of probabilistic safety margins. The approach is designed for screening and limited independent review verification. Its application is illustrated for a postulated generic CANDU LBLOCA and evaluation of the possibility distribution function of maximum bundle enthalpy considering the reactor physics part of LBLOCA power pulse simulation only. The computer codes HELIOS and NESTLE-CANDU were used in a stochastic procedure driven by the computer code DAKOTA to simulate the LBLOCA power pulse using combinations of core neutronic characteristics randomly generated from postulated subjective probability distributions with deterministic constraints and fixed transient bundle-wise thermal hydraulic conditions. With this information, a bounding estimate of functional failure probability using the limit for the maximum fuel bundle enthalpy can be derived for use in evaluation of core damage frequency. (author)
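The sampling logic can be illustrated with a toy Monte Carlo sketch (Python; the linear surrogate enthalpy model, the subjective distributions, and the limit value are invented for illustration and stand in for the HELIOS/NESTLE-CANDU/DAKOTA chain):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000
# Hypothetical subjective distributions for two normalized core parameters
void_mult = rng.triangular(0.8, 1.0, 1.3, N)   # coolant-void reactivity multiplier
delay_mult = rng.uniform(0.9, 1.2, N)          # shutdown-system timing multiplier

# Invented linear surrogate for the physics chain: peak bundle enthalpy (cal/g)
enthalpy = 120.0 + 60.0 * (void_mult - 1.0) + 45.0 * (delay_mult - 1.0)

limit = 200.0                                  # assumed enthalpy limit
p_fail = float(np.mean(enthalpy > limit))
# Conservative bound: 95% "rule of three" when no failures are observed,
# otherwise a normal-approximation upper confidence bound
p_upper = 3.0 / N if p_fail == 0.0 else p_fail + 1.96 * np.sqrt(p_fail * (1.0 - p_fail) / N)
```

The bounding estimate p_upper, rather than the raw sample frequency, is the kind of quantity that would feed a screening-level core damage frequency evaluation.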
International Nuclear Information System (INIS)
Yin, Chukai; Su, Bingjing
2001-01-01
The Minerbo maximum entropy Eddington factor (MEEF) method was proposed as a low-order approximation to transport theory, in which the first two moment equations are closed for the scalar flux Φ and the current F through a statistically derived nonlinear Eddington factor f. This closure has the ability to handle various degrees of anisotropy of the angular flux and is well justified both numerically and theoretically. Thus, many efforts have been made to use this approximation in transport computations, especially in the radiative transfer and astrophysics communities. However, the method suffers from numerical instability and may lead to anomalous solutions if the equations are solved by certain commonly used (implicit) mesh schemes. Studies on numerical stability in one-dimensional cases show that the MEEF equations can be solved satisfactorily by an implicit scheme (in the treatment of ∂Φ/∂x) if the angular flux is not too anisotropic. Figures 1 and 2 compare the classic diffusion solution P_1, the MEEF solution f_M obtained by Riemann solvers, and the NFLD solution D_M for the two problems, respectively. In Fig. 1, NFLD and MEEF quantitatively predict very close results; however, the NFLD solution is qualitatively better because it is continuous, while MEEF predicts unphysical jumps near the middle of the slab. In Fig. 2, the NFLD and MEEF solutions are almost identical, except near the material interface. In summary, the flux-limited diffusion theory derived from the MEEF description is quantitatively as accurate as the MEEF method. However, it is more qualitatively correct and user-friendly than the MEEF method and can be applied efficiently to various steady-state problems. Numerical tests show that this method is widely valid and overall predicts better results than other low-order approximations for various kinds of problems, including eigenvalue problems. Thus, it is an appealing approximate solution technique that is fast computationally and yet accurate enough for a
Finite mixture model: A maximum likelihood estimation approach on time series data
Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad
2014-09-01
Recently, statisticians have emphasized the fitting of finite mixture models by maximum likelihood estimation, as it provides desirable asymptotic properties. In addition, it shows consistency as the sample size increases to infinity, illustrating that the maximum likelihood estimator is asymptotically unbiased. Moreover, the parameter estimates obtained by maximum likelihood estimation have the smallest variance compared with other statistical methods as the sample size increases. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show that there is a negative effect between rubber price and exchange rate for all selected countries.
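A minimal sketch of such a fit, assuming a two-component Gaussian mixture estimated by the EM algorithm (the standard route to the maximum likelihood estimate; the synthetic data below stand in for the price/exchange-rate series):

```python
import numpy as np

def em_two_gaussians(x, iters=300):
    """Maximum likelihood fit of a two-component Gaussian mixture by EM."""
    mu = np.array([x.min(), x.max()])          # spread-out initial means
    sd = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        d = (x[:, None] - mu) / sd
        dens = pi * np.exp(-0.5 * d * d) / (sd * np.sqrt(2.0 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted maximum likelihood updates
        n = resp.sum(axis=0)
        pi = n / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / n
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    return pi, mu, sd

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 0.5, 600), rng.normal(3.0, 1.0, 400)])
pi, mu, sd = em_two_gaussians(x)
```

Each EM iteration increases the likelihood, so the procedure converges to a (local) maximum likelihood estimate of the mixture weights, means and standard deviations.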
DEFF Research Database (Denmark)
Mikosch, Thomas Valentin; Moser, Martin
2013-01-01
We investigate the maximum increment of a random walk with heavy-tailed jump size distribution. Here heavy-tailedness is understood as regular variation of the finite-dimensional distributions. The jump sizes constitute a strictly stationary sequence. Using a continuous mapping argument acting on the point processes of the normalized jump sizes, we prove that the maximum increment of the random walk converges in distribution to a Fréchet distributed random variable.
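The dominance of the largest jumps can be checked numerically. The sketch below (illustrative Python; i.i.d. Pareto jumps with tail index α = 1.5 are an assumption, a special case of the paper's stationary setting) computes the maximum increment M_n = max over i &lt; j of (S_j − S_i) with a running minimum and compares it with the largest single jump, which it can never fall below:

```python
import numpy as np

def max_increment(steps):
    """Largest increment max_{0<=i<j<=n} (S_j - S_i) of the walk
    S = cumsum(steps), computed with a running minimum in O(n)."""
    s = np.cumsum(steps)
    prev_min = np.minimum.accumulate(np.concatenate(([0.0], s[:-1])))
    return float(np.max(s - prev_min))

rng = np.random.default_rng(7)
alpha, n, trials = 1.5, 20_000, 200
maxima = np.empty(trials)
biggest = np.empty(trials)
for t in range(trials):
    jumps = rng.pareto(alpha, n) + 1.0     # Pareto(alpha): regularly varying tail
    jumps -= alpha / (alpha - 1.0)         # center to zero mean
    maxima[t] = max_increment(jumps)
    biggest[t] = jumps.max()
# The maximum increment always dominates the single largest jump, and for
# heavy tails it is typically of the same order as that jump
ratio = float(np.median(maxima / biggest))
```

Since single-step increments are among the candidates, M_n ≥ max_k X_k holds for every sample path; the heavy-tail result says the two are of the same order n^(1/α), which is why the Fréchet limit of extremes carries over to the maximum increment.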
Influence of Dynamic Neuromuscular Stabilization Approach on Maximum Kayak Paddling Force
Directory of Open Access Journals (Sweden)
Davidek Pavel
2018-03-01
Full Text Available The purpose of this study was to examine the effect of Dynamic Neuromuscular Stabilization (DNS) exercise on maximum paddling force (PF) and self-reported pain perception in the shoulder girdle area in flatwater kayakers. Twenty male flatwater kayakers from a local club (age = 21.9 ± 2.4 years, body height = 185.1 ± 7.9 cm, body mass = 83.9 ± 9.1 kg) were randomly assigned to the intervention or control group. During the 6-week study, subjects from both groups performed standard off-season training. Additionally, the intervention group engaged in a DNS-based core stabilization exercise program (quadruped exercise, side sitting exercise, sitting exercise and squat exercise) after each standard training session. Using a kayak ergometer, maximum PF per stroke was measured four times during the six weeks. All subjects completed the Disabilities of the Arm, Shoulder and Hand (DASH) questionnaire before and after the 6-week interval to evaluate subjective pain perception in the shoulder girdle area. Initially, no significant differences in maximum PF or DASH scores were identified between the two groups. Repeated measures analysis of variance indicated that the experimental group improved significantly more than the control group on maximum PF (p = .004; Cohen's d = .85), but not on the DASH questionnaire score (p = .731). Integration of DNS with traditional flatwater kayak training may significantly increase maximum PF, but may not affect pain perception to the same extent.
Park, Junhong; Palumbo, Daniel L.
2004-01-01
The use of shunted piezoelectric patches in reducing vibration and sound radiation of structures has several advantages over passive viscoelastic elements, e.g., lower weight with increased controllability. The performance of the piezoelectric patches depends on the shunting electronics that are designed to dissipate vibration energy through a resistive element. In past efforts, most of the proposed tuning methods were based on modal properties of the structure. In these cases, the tuning applies only to one mode of interest, and maximum tuning is limited to invariant points when based on den Hartog's invariant-points concept. In this study, a design method based on the wave propagation approach is proposed. Optimal tuning is investigated depending on the dynamic and geometric properties, including effects from boundary conditions and the position of the shunted piezoelectric patch relative to the structure. Active filters are proposed as shunting electronics to implement the tuning criteria. The developed tuning methods resulted in superior capabilities in minimizing structural vibration and noise radiation compared to other tuning methods. The tuned circuits are relatively insensitive to changes in modal properties and boundary conditions, and can be applied to frequency ranges in which multiple modes have effects.
[95/95] Approach for design limits analysis in WWER
International Nuclear Information System (INIS)
Shishkov, L.; Tsyganov, S.
2008-01-01
The paper discusses a well-known condition [95%/95%], which is important for monitoring some limits of core parameters in the course of designing reactors (such as PWR or WWER). The condition ensures the postulate that 'there is at least a 95% probability at a 95% confidence level that' some parameter does not exceed the limit. Such conditions are stated, for instance, in US standards and IAEA norms as recommendations for DNBR and fuel temperature. A question may arise: why is such an approach to the limits applied only to these parameters, and not normally to any others? And what is the way to ensure the limits in design practice? Using the general statements of mathematical statistics, the authors interpret the [95/95] approach as applied to WWER design limits. (Authors)
International Nuclear Information System (INIS)
Ning, A; Dykes, K
2014-01-01
For utility-scale wind turbines, the maximum rotor rotation speed is generally constrained by noise considerations. Innovations in acoustics and/or siting in remote locations may enable future wind turbine designs to operate with higher tip speeds. Wind turbines designed to take advantage of higher tip speeds are expected to be able to capture more energy and utilize lighter drivetrains because of their decreased maximum torque loads. However, the magnitude of the potential cost savings is unclear, and the potential trade-offs with rotor and tower sizing are not well understood. A multidisciplinary, system-level framework was developed to facilitate wind turbine and wind plant analysis and optimization. The rotors, nacelles, and towers of wind turbines are optimized for minimum cost of energy subject to a large number of structural, manufacturing, and transportation constraints. These optimization studies suggest that allowing for higher maximum tip speeds could result in a decrease in the cost of energy of up to 5% for land-based sites and 2% for offshore sites when using current technology. Almost all of the cost savings are attributed to the decrease in gearbox mass as a consequence of the reduced maximum rotor torque. Although there is some increased energy capture, it is very minimal (less than 0.5%). Extreme increases in tip speed are unnecessary; benefits for maximum tip speeds greater than 100-110 m/s are small to nonexistent.
Derivation of some new distributions in statistical mechanics using maximum entropy approach
Directory of Open Access Journals (Sweden)
Ray Amritansu
2014-01-01
Full Text Available The maximum entropy principle has earlier been used to derive the Bose-Einstein (B.E.), Fermi-Dirac (F.D.), and Intermediate Statistics (I.S.) distributions of statistical mechanics. The central idea of these distributions is to predict the distribution of the microstates, which are the particles of the system, on the basis of the knowledge of some macroscopic data. The latter information is specified in the form of some simple moment constraints. One distribution differs from the other in the way in which the constraints are specified. In the present paper, we have derived some new distributions similar to the B.E. and F.D. distributions of statistical mechanics by using the maximum entropy principle. Some proofs of the B.E. and F.D. distributions are shown, and at the end some new results are discussed.
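As a reminder of the standard argument the paper builds on, the Bose-Einstein case can be sketched as a constrained maximization (g_i denotes the degeneracy of energy level ε_i):

```latex
% Maximize the Bose-Einstein entropy of occupation numbers n_i over levels
% with degeneracies g_i, subject to fixed particle number N and energy E:
\max_{\{n_i\}}\; S=\sum_i\Big[(n_i+g_i)\ln(n_i+g_i)-n_i\ln n_i-g_i\ln g_i\Big],
\qquad \sum_i n_i=N,\qquad \sum_i n_i\epsilon_i=E.
% Stationarity of S-\alpha\sum_i n_i-\beta\sum_i n_i\epsilon_i gives
\ln\frac{n_i+g_i}{n_i}=\alpha+\beta\epsilon_i
\quad\Longrightarrow\quad
n_i=\frac{g_i}{e^{\alpha+\beta\epsilon_i}-1}.
```

Stationarity with multipliers α and β for the particle-number and energy constraints yields the B.E. occupation numbers; the Fermi-Dirac case follows the same route with +1 in place of −1 in the denominator.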
Rahnamaei, Z.; Nematollahi, N.; Farnoosh, R.
2012-01-01
We introduce an alternative skew-slash distribution by using the scale mixture of the exponential power distribution. We derive the properties of this distribution and estimate its parameter by Maximum Likelihood and Bayesian methods. By a simulation study we compute the mentioned estimators and their mean square errors, and we provide an example on real data to demonstrate the modeling strength of the new distribution.
A new maximum power point method based on a sliding mode approach for solar energy harvesting
International Nuclear Information System (INIS)
Farhat, Maissa; Barambones, Oscar; Sbita, Lassaad
2017-01-01
Highlights: • Create a simple, easy-to-implement and accurate V_MPP estimator. • Stability analysis of the proposed system based on Lyapunov's theory. • A comparative study versus P&O highlights the SMC's good performance. • Construct a new PS-SMC algorithm to include the partial shadow case. • Experimental validation of the SMC MPP tracker. - Abstract: This paper presents a photovoltaic (PV) system with a maximum power point tracking (MPPT) facility. The goal of this work is to maximize power extraction from the photovoltaic generator (PVG). This goal is achieved using a sliding mode controller (SMC) that drives a boost converter connected between the PVG and the load. The system is modeled and tested in the MATLAB/SIMULINK environment. In simulation, the sliding mode controller offers fast and accurate convergence to the maximum power operating point, outperforming the well-known perturbation and observation method (P&O). The sliding mode controller's performance is evaluated during steady state, against load variations and panel partial shadow (PS) disturbances. To confirm the above conclusion, a practical implementation of the maximum power point tracker based on the sliding mode controller is performed on a dSPACE real-time digital control platform. The data acquisition and the control system are built around the dSPACE 1104 controller board and its RTI environment. The experimental results demonstrate the validity of the proposed control scheme on a stand-alone real photovoltaic system.
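The P&O baseline that the sliding mode controller is compared against can be sketched in a few lines (illustrative Python; the toy PV curve, its parameters, and the voltage step are assumptions, not the paper's test rig):

```python
import math

def pv_power(v):
    """Toy PV panel: current falls off exponentially near open-circuit voltage.
    (Illustrative parameters, not the paper's panel model.)"""
    i_sc, v_oc, a = 8.0, 40.0, 4.0
    i = i_sc * (1.0 - math.exp((v - v_oc) / a))
    return v * max(i, 0.0)

def perturb_and_observe(v0=20.0, dv=0.2, steps=300):
    """Classic P&O hill climbing: keep stepping the operating voltage in the
    direction that last increased power, reverse it otherwise."""
    v, p_prev, direction = v0, pv_power(v0), 1.0
    for _ in range(steps):
        v += direction * dv
        p = pv_power(v)
        if p < p_prev:
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
```

The fixed perturbation step is what makes P&O oscillate around the maximum power point and react slowly to irradiance changes, which is the weakness the paper's sliding mode controller is designed to overcome.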
International Nuclear Information System (INIS)
Elnaggar, M.; Abdel Fattah, H.A.; Elshafei, A.L.
2014-01-01
This paper presents a complete design of a two-level control system to capture maximum power in wind energy conversion systems. The upper level of the proposed control system adopts a modified line search optimization algorithm to determine a setpoint for the wind turbine speed. The calculated speed setpoint corresponds to the maximum power point at the given operating conditions. The speed setpoint is fed to a generalized predictive controller at the lower level of the control system. A different formulation, which treats the aerodynamic torque as a disturbance, is postulated to derive the control law. The objective is to accurately track the setpoint while keeping the control action free from unacceptably fast or frequent variations. Simulation results based on a realistic model of a 1.5 MW wind turbine confirm the superiority of the proposed control scheme over conventional ones. - Highlights: • The structure of a MPPT (maximum power point tracking) scheme is presented. • The scheme is divided into the optimization algorithm and the tracking controller. • The optimization algorithm is based on an online line search numerical algorithm. • The tracking controller treats the aerodynamic torque as a loop disturbance. • The control technique is simulated with stochastic wind speed in Simulink and FAST.
Multi-approach analysis of maximum riverbed scour depth above subway tunnel
Directory of Open Access Journals (Sweden)
Jun Chen; Hong-wu Tang; Zui-sen Li; Wen-hong Dai
2010-12-01
Full Text Available When subway tunnels are routed underneath rivers, riverbed scour may expose the structure, with potentially severe consequences. Thus, it is important to identify the maximum scour depth to ensure that the designed buried depth is adequate. There are a range of methods that may be applied to this problem, including the fluvial process analysis method, geological structure analysis method, scour formula method, scour model experiment method, and numerical simulation method. However, the application ranges and forecasting precision of these methods vary considerably. In order to quantitatively analyze the characteristics of the different methods, a subway tunnel passing underneath a river was selected, and the aforementioned five methods were used to forecast the maximum scour depth. The fluvial process analysis method was used to characterize the river regime and evolution trend, which were the baseline for examination of the scour depth of the riverbed. The results obtained from the scour model experiment and the numerical simulation methods are reliable; these two methods are suitable for application to tunnel projects passing underneath rivers. The scour formula method was less accurate than the scour model experiment method; it is suitable for application to lower risk projects such as pipelines. The results of the geological structure analysis had low precision; the method is suitable for use as a secondary method to assist other research methods. To forecast the maximum scour depth of the riverbed above the subway tunnel, a combination of methods is suggested, and the appropriate analysis method should be chosen with respect to the local conditions.
Directory of Open Access Journals (Sweden)
Michael J. Markham
2011-07-01
Full Text Available Some problems occurring in expert systems can be resolved by employing a causal (Bayesian) network, and methodologies exist for this purpose. These require data in a specific form and make assumptions about the independence relationships involved. Methodologies using Maximum Entropy (ME) are free from these conditions and have the potential to be used in a wider context, including systems consisting of given sets of linear and independence constraints, subject to consistency and convergence. ME can also be used to validate results from the causal network methodologies. Three ME methods for determining the prior probability distribution of causal network systems are considered. The first method is Sequential Maximum Entropy, in which the computation of a progression of local distributions leads to the overall distribution. This is followed by a development of the Method of Tribus. The development takes the form of an algorithm that includes the handling of explicit independence constraints. These fall into two groups: those relating the parents of vertices, and those deduced from triangulation of the remaining graph. The third method involves a variation in the part of that algorithm which handles independence constraints. Evidence is presented that this adaptation only requires the linear constraints and the parental independence constraints to emulate the second method in a substantial class of examples.
International Nuclear Information System (INIS)
Vossier, Alexis; Gualdi, Federico; Dollet, Alain; Ares, Richard; Aimez, Vincent
2015-01-01
In principle, the upper efficiency limit of any solar cell technology can be determined using the detailed-balance limit formalism. However, "real" solar cells show efficiencies which are always below this theoretical value due to several limiting mechanisms. We study the ability of a solar cell architecture to approach its own theoretical limit, using a novel index introduced in this work, and the amplitude with which the different limiting mechanisms affect the cell efficiency is scrutinized as a function of the electronic gap and the illumination level to which the cell is subjected. The implications for future generations of solar cells aiming at an improved conversion of the solar spectrum are also addressed.
Lorenz, Ralph D
2010-05-12
The 'two-box model' of planetary climate is discussed. This model has been used to demonstrate consistency of the equator-pole temperature gradient on Earth, Mars and Titan with what would be predicted from a principle of maximum entropy production (MEP). While useful for exposition and for generating first-order estimates of planetary heat transports, it has too low a resolution to investigate climate systems with strong feedbacks. A two-box MEP model agrees well with the day:night temperature contrast observed on the extrasolar planet HD 189733b.
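The two-box MEP argument can be reproduced in a few lines: impose steady-state energy balance in each box with a linearised outgoing long-wave radiation law, then scan the inter-box heat transport for the value that maximises entropy production. All parameter values below are illustrative assumptions, not fitted to any planet.

```python
def temperatures(F, S1=300.0, S2=170.0, A=200.0, B=1.7):
    """Steady-state box temperatures (K) for transport F (W/m^2), with
    linearised OLR = A + B*(T - 273.15):
      warm box: S1 = OLR(T1) + F,  cold box: S2 = OLR(T2) - F."""
    T1 = 273.15 + (S1 - A - F) / B
    T2 = 273.15 + (S2 - A + F) / B
    return T1, T2

def entropy_production(F):
    # entropy produced by moving heat F from the warm box (T1) to the cold box (T2)
    T1, T2 = temperatures(F)
    return F * (1.0 / T2 - 1.0 / T1)

# scan admissible transports; F = 65 W/m^2 would equalise the two boxes
F_mep = max((i * 0.1 for i in range(651)), key=entropy_production)
```

The maximum lies strictly between zero transport (no entropy production) and the transport that erases the temperature contrast (also no entropy production), which is the MEP selection principle the abstract applies.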
Directory of Open Access Journals (Sweden)
López-Valcarce Roberto
2004-01-01
We address the problem of estimating the speed of a road vehicle from its acoustic signature, recorded by a pair of omnidirectional microphones located next to the road. This choice of sensors is motivated by their nonintrusive nature as well as low installation and maintenance costs. A novel estimation technique is proposed, which is based on the maximum likelihood principle. It directly estimates car speed without any assumptions on the acoustic signal emitted by the vehicle. This has the advantages of bypassing troublesome intermediate delay estimation steps as well as eliminating the need for an accurate yet general enough acoustic traffic model. An analysis of the estimate for narrowband and broadband sources is provided and verified with computer simulations. The estimation algorithm uses a bank of modified cross-correlators and therefore it is well suited to DSP implementation, performing well with preliminary field data.
A Sum-of-Squares and Semidefinite Programming Approach for Maximum Likelihood DOA Estimation
Directory of Open Access Journals (Sweden)
Shu Cai
2016-12-01
Direction of arrival (DOA) estimation using a uniform linear array (ULA) is a classical problem in array signal processing. In this paper, we focus on DOA estimation based on the maximum likelihood (ML) criterion, transform the estimation problem into a novel formulation, named sum-of-squares (SOS), and then solve it using semidefinite programming (SDP). We first derive the SOS and SDP method for DOA estimation in the scenario of a single source and then extend it under the framework of alternating projection for multiple DOA estimation. The simulations demonstrate that the SOS- and SDP-based algorithms can provide stable and accurate DOA estimation when the number of snapshots is small and the signal-to-noise ratio (SNR) is low. Moreover, they have higher spatial resolution compared to existing methods based on the ML criterion.
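For a single source, the ML criterion that the paper reformulates reduces to maximising the beamformer power a(θ)^H R a(θ) over the ULA steering vector a(θ). The sketch below evaluates that objective by brute-force grid search, a hypothetical baseline for illustration only, not the SOS/SDP solver of the paper:

```python
import cmath
import math

def steering(theta, m, d=0.5):
    # ULA steering vector; element spacing d in wavelengths (half-wavelength here)
    return [cmath.exp(-2j * math.pi * d * k * math.sin(theta)) for k in range(m)]

def ml_doa_single(R, m, grid=2000):
    """Single-source ML DOA: maximise a(theta)^H R a(theta) on a theta grid."""
    best_val, best_th = -1.0, 0.0
    for i in range(grid):
        th = -math.pi / 2 + math.pi * i / (grid - 1)
        a = steering(th, m)
        # quadratic form a^H R a (real by construction for Hermitian R)
        val = sum((a[p].conjugate() * R[p][q] * a[q]).real
                  for p in range(m) for q in range(m))
        if val > best_val:
            best_val, best_th = val, th
    return best_th

# noise-free covariance R = a0 a0^H for a source at 20 degrees
m, th0 = 6, math.radians(20.0)
a0 = steering(th0, m)
R = [[a0[p] * a0[q].conjugate() for q in range(m)] for p in range(m)]
est_deg = math.degrees(ml_doa_single(R, m))
```

The SOS/SDP contribution of the paper is precisely to replace this exhaustive search with an exact convex reformulation, which matters once the grid becomes the bottleneck or multiple sources are alternated over.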
Yu, Hwa-Lung; Wang, Chih-Hsin
2013-02-05
Understanding the daily changes in ambient air quality concentrations is important to assessing human exposure and environmental health. However, the fine temporal scales (e.g., hourly) involved in this assessment often lead to high variability in air quality concentrations. This is because of the complex short-term physical and chemical mechanisms among the pollutants. Consequently, high heterogeneity is usually present in not only the averaged pollution levels, but also the intraday variance levels of the daily observations of ambient concentration across space and time. This characteristic decreases the estimation performance of common techniques. This study proposes a novel quantile-based Bayesian maximum entropy (QBME) method to account for the nonstationary and nonhomogeneous characteristics of ambient air pollution dynamics. The QBME method characterizes the spatiotemporal dependence among the ambient air quality levels based on their location-specific quantiles and accounts for spatiotemporal variations using a local weighted smoothing technique. The epistemic framework of the QBME method can allow researchers to further consider the uncertainty of space-time observations. This study presents the spatiotemporal modeling of daily CO and PM10 concentrations across Taiwan from 1998 to 2009 using the QBME method. Results show that the QBME method can effectively improve estimation accuracy in terms of lower mean absolute errors and standard deviations over space and time, especially for pollutants with strong nonhomogeneous variances across space. In addition, the epistemic framework can allow researchers to assimilate the site-specific secondary information where the observations are absent because of the common preferential sampling issues of environmental data. The proposed QBME method provides a practical and powerful framework for the spatiotemporal modeling of ambient pollutants.
Spectroscopy of 211Rn approaching the valence limit
International Nuclear Information System (INIS)
Davidson, P.M.; Dracoulis, G.D.; Kibedi, T.; Fabricius, B.; Baxter, A.M.; Stuchbery, A.E.; Poletti, A.R.; Schiffer, K.J.
1993-02-01
High spin states in 211Rn were populated using the reaction 198Pt(18O,5n) at 96 MeV. The decay was studied using γ-ray and electron spectroscopy. The known level scheme is extended up to a spin of greater than 69/2 and many non-yrast states are added. Semi-empirical shell model calculations and the properties of related states in 210Rn and 212Rn are used to assign configurations to some of the non-yrast states. The properties of the high spin states observed are compared to the predictions of the Multi-Particle Octupole Coupling model and the semi-empirical shell model. The maximum reasonable spin available from the valence particles and holes is 77/2 and states are observed to near this limit. 12 refs., 4 tabs., 8 figs
Van Rooy, Wilhelmina S.
2012-04-01
Background: The ubiquity, availability and exponential growth of digital information and communication technology (ICT) creates unique opportunities for learning and teaching in the senior secondary school biology curriculum. Digital technologies make it possible to teach emerging disciplinary knowledge and understanding of biological processes that were previously too small, large, slow or fast to be taught. Indeed, much of bioscience can now be effectively taught via digital technology, since its representational and symbolic forms are in digital formats. Purpose: This paper is part of a larger Australian study dealing with the technologies and modalities of learning biology in secondary schools. Sample: The classroom practices of three experienced biology teachers, working in a range of NSW secondary schools, are compared and contrasted to illustrate how the challenges of limited technologies are confronted to seamlessly integrate what is available into a number of molecular genetics lessons to enhance student learning. Design and method: The data are qualitative and the analysis is based on video classroom observations and semi-structured teacher interviews. Results: Findings indicate that if professional development opportunities are provided where the pedagogy of learning and teaching of both the relevant biology and its digital representations are available, then teachers see the immediate pedagogic benefit to student learning. In particular, teachers use ICT for challenging genetic concepts despite limited computer hardware and software availability. Conclusion: Experienced teachers incorporate ICT, however limited, in order to improve the quality of student learning.
Scalable pumping approach for extracting the maximum TEM(00) solar laser power.
Liang, Dawei; Almeida, Joana; Vistas, Cláudia R
2014-10-20
A scalable TEM(00) solar laser pumping approach is composed of four pairs of first-stage Fresnel lens-folding mirror collectors, four fused-silica secondary concentrators with light guides of rectangular cross-section for radiation homogenization, and four hollow two-dimensional compound parabolic concentrators for further concentration of the uniform radiation from the light guides onto a 3 mm diameter, 76 mm length Nd:YAG rod within four V-shaped pumping cavities. An asymmetric resonator ensures efficient large-mode matching between the pump light and the oscillating laser light. Laser power of 59.1 W TEM(00) is calculated by ZEMAX and LASCAD numerical analysis, revealing a 20-fold improvement in the brightness figure of merit.
A maximum information utilization approach in X-ray fluorescence analysis
International Nuclear Information System (INIS)
Papp, T.; Maxwell, J.A.; Papp, A.T.
2009-01-01
X-ray fluorescence data bases have significant contradictions and inconsistencies. We have identified that the main source of the contradictions, after the human factors, is rooted in the signal processing approaches. We have developed signal processors to overcome many of the problems by maximizing the information available to the analyst. These non-paralyzable, fully digital signal processors have yielded improved resolution, line shape, tailing and pile-up recognition. The signal processors account for and register all events, sorting them into two spectra: one spectrum for the desirable or accepted events, and one spectrum for the rejected events. The information contained in the rejected spectrum is mandatory to have control over the measurement and to make a proper accounting and allocation of the events. It has established the basis for the application of the fundamental parameter method approach. A fundamental parameter program was also developed. The primary X-ray line shape (Lorentzian) is convoluted with a system line shape (Gaussian) and corrected for the sample material absorption, X-ray absorbers and detector efficiency. The peaks can also have lower- and upper-energy-side tailing, including the physical-interaction-based long-range functions. It also employs peak and continuum pile-up and can handle layered samples of up to five layers. The application of a fundamental parameter method demands proper equipment characterization. We have also developed an inverse fundamental parameter method software package for equipment characterization. The program calculates the excitation function at the sample position and the detector efficiency, supplying an internally consistent system.
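The line-shape model described, a Lorentzian primary line convolved with a Gaussian system response, is the Voigt profile. A direct numerical convolution on a uniform energy grid might look like the following (all widths are illustrative, not the paper's detector parameters):

```python
import math

def lorentzian(x, x0, gamma):
    # natural (primary X-ray) line shape, unit area, half-width gamma
    return (gamma / math.pi) / ((x - x0) ** 2 + gamma ** 2)

def gaussian(x, sigma):
    # detector (system) response, unit area, standard deviation sigma
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def voigt_profile(xs, x0, gamma, sigma):
    """Numerically convolve the Lorentzian line with the Gaussian response."""
    dx = xs[1] - xs[0]
    half = int(5 * sigma / dx)  # truncate the Gaussian kernel at 5 sigma
    out = []
    for x in xs:
        acc = 0.0
        for k in range(-half, half + 1):
            acc += lorentzian(x - k * dx, x0, gamma) * gaussian(k * dx, sigma) * dx
        out.append(acc)
    return out

xs = [i * 0.01 for i in range(-500, 501)]    # energy grid (arbitrary units)
profile = voigt_profile(xs, 0.0, 0.05, 0.1)  # illustrative widths
```

A production fitter would add the asymmetric tailing terms and absorption corrections the abstract mentions on top of this symmetric core; only the convolution step is sketched here.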
Novel maximum likelihood approach for passive detection and localisation of multiple emitters
Hernandez, Marcel
2017-12-01
In this paper, a novel target acquisition and localisation algorithm (TALA) is introduced that offers a capability for detecting and localising multiple targets using the intermittent "signals-of-opportunity" (e.g. acoustic impulses or radio frequency transmissions) they generate. The TALA is a batch estimator that addresses the complex multi-sensor/multi-target data association problem in order to estimate the locations of an unknown number of targets. The TALA is unique in that it does not require measurements to be of a specific type, and can be implemented for systems composed of either homogeneous or heterogeneous sensors. The performance of the TALA is demonstrated in simulated scenarios with a network of 20 sensors and up to 10 targets. The sensors generate angle-of-arrival (AOA), time-of-arrival (TOA), or hybrid AOA/TOA measurements. It is shown that the TALA is able to successfully detect 83-99% of the targets, with a negligible number of false targets declared. Furthermore, the localisation errors of the TALA are typically within 10% of the errors generated by a "genie" algorithm that is given the correct measurement-to-target associations. The TALA also performs well in comparison with an optimistic Cramér-Rao lower bound, with typical differences in performance of 10-20%, and differences in performance of 40-50% in the most difficult scenarios considered. The computational expense of the TALA is also controllable, which allows the TALA to maintain computational feasibility even in the most challenging scenarios considered. This allows the approach to be implemented in time-critical scenarios, such as in the localisation of artillery firing events. It is concluded that the TALA provides a powerful situational awareness aid for passive surveillance operations.
Treatment for spasmodic dysphonia: limitations of current approaches
Ludlow, Christy L.
2009-01-01
Purpose of review Although botulinum toxin injection is the gold standard for treatment of spasmodic dysphonia, surgical approaches aimed at providing long-term symptom control have been advancing over recent years. Recent findings When surgical approaches provide greater long-term benefits to symptom control, they also increase the initial period of side effects of breathiness and swallowing difficulties. However, recent analyses of quality-of-life questionnaires in patients undergoing regular injections of botulinum toxin demonstrate that a large proportion of patients have limited relief for relatively short periods due to early breathiness and loss-of-benefit before reinjection. Summary Most medical and surgical approaches to the treatment of spasmodic dysphonia have been aimed at denervation of the laryngeal muscles to block symptom expression in the voice, and have adverse effects as well as treatment benefits. Research is needed to identify the central neuropathophysiology responsible for the laryngeal muscle spasms in order to target treatment towards the central neurological abnormality responsible for producing symptoms. PMID:19337127
International Nuclear Information System (INIS)
Anon.
1979-01-01
This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed
Lo Porto, A.; De Girolamo, A. M.; Santese, G.
2012-04-01
In this presentation, the experience gained in the first experimental use in the EU (as far as we know) of the concept and methodology of the "Total Maximum Daily Load" (TMDL) is reported. The TMDL is an instrument required by the Clean Water Act in the U.S.A. for the management of water bodies classified as impaired. The TMDL calculates the maximum amount of a pollutant that a waterbody can receive and still safely meet water quality standards. It makes it possible to establish a scientifically based strategy for the regulation of emission load controls according to the characteristics of the watershed/basin. The implementation of the TMDL is a process analogous to the Programmes of Measures required by the WFD, the main difference being the analysis of the linkage between loads from different sources and the water quality of water bodies. The TMDL calculation was used in this study for the Candelaro River, a temporary Italian river classified as impaired in the first steps of the implementation of the WFD. A specific approach based on "Load Duration Curves" was adopted for the calculation of nutrient TMDLs, as it is more robust for rivers featuring large changes in flow than the classic approach based on average long-term flow conditions. This methodology makes it possible to establish the maximum allowable loads across the different flow conditions of a river. This methodology enabled us: to evaluate the allowable loading of a water body; to identify the sources and estimate their loads; to estimate the total loading that the water bodies can receive while meeting the established water quality standards; to link the effects of point and diffuse sources on the water quality status; and finally to identify the reduction necessary for each type of source. The load reductions were calculated for nitrate, total phosphorus and ammonia. The simulated measures showed a remarkable ability to reduce the pollutants for the Candelaro River. The use of the Soil and
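The load duration curve idea can be sketched numerically: the maximum allowable daily load at each observed flow condition is the flow times the concentration standard, and the required reduction follows from the exceedances. The flows, standard, and observed loads below are hypothetical placeholders, not Candelaro data.

```python
def allowable_loads(flows_m3s, standard_mg_l):
    """Allowable daily load (kg/day) at each flow: flow x concentration standard."""
    # mg/L * m3/s -> kg/day: 1 m3 = 1000 L, 1 day = 86400 s, 1e6 mg = 1 kg
    factor = 1000.0 * 86400.0 / 1e6
    return [q * standard_mg_l * factor for q in flows_m3s]

def required_reduction(observed_kg_d, allowable_kg_d):
    """Percent of the total observed load that exceeds the TMDL curve."""
    excess = [max(0.0, o - a) for o, a in zip(observed_kg_d, allowable_kg_d)]
    return 100.0 * sum(excess) / sum(observed_kg_d)

flows = [12.0, 5.0, 1.5, 0.4]          # high to low flow conditions, m3/s
allowed = allowable_loads(flows, 2.0)  # e.g. a 2 mg/L nutrient standard
observed = [2500.0, 1200.0, 400.0, 150.0]
cut = required_reduction(observed, allowed)
```

Evaluating the allowable load separately at each flow condition is what makes the approach robust for temporary rivers, where an average-flow TMDL would misstate the margin at both the wet and dry ends of the curve.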
A macrothermodynamic approach to the limit of reversible capillary condensation.
Trens, Philippe; Tanchoux, Nathalie; Galarneau, Anne; Brunel, Daniel; Fubini, Bice; Garrone, Edoardo; Fajula, François; Di Renzo, Francesco
2005-08-30
The threshold of reversible capillary condensation is a well-defined thermodynamic property, as evidenced by corresponding-states treatment of literature and experimental data on the lowest closure point of the hysteresis loop in capillary condensation-evaporation cycles for several adsorbates. The nonhysteretic filling of small mesopores presents the properties of a first-order phase transition, confirming that the limit of condensation reversibility does not coincide with the pore critical point. The enthalpy of reversible capillary condensation can be calculated by a Clausius-Clapeyron approach and is consistently larger than the condensation heat in unconfined conditions. Calorimetric data on the capillary condensation of tert-butyl alcohol in MCM-41 silica confirm a 20% increase of condensation heat in small mesopores. This enthalpic advantage makes it easier for the capillary forces to overcome the adhesion forces and explains the disappearance of the hysteresis loop.
Approaching the Hole Mobility Limit of GaSb Nanowires.
Yang, Zai-xing; Yip, SenPo; Li, Dapan; Han, Ning; Dong, Guofa; Liang, Xiaoguang; Shu, Lei; Hung, Tak Fu; Mo, Xiaoliang; Ho, Johnny C
2015-09-22
In recent years, high-mobility GaSb nanowires have received tremendous attention for high-performance p-type transistors; however, due to the difficulty in achieving thin and uniform nanowires (NWs), there have been limited reports until now addressing their diameter-dependent properties and their hole mobility limit in this important one-dimensional material system, where all these are essential information for the deployment of GaSb NWs in various applications. Here, by employing the newly developed surfactant-assisted chemical vapor deposition, high-quality and uniform GaSb NWs with controllable diameters, spanning from 16 to 70 nm, are successfully prepared, enabling the direct assessment of their growth orientation and hole mobility as a function of diameter while elucidating the role of the sulfur surfactant and the interplay between surface and interface energies of NWs on their electrical properties. The sulfur passivation is found to efficiently stabilize the high-energy NW sidewalls of (111) and (311) in order to yield the thin NWs (i.e., <40 nm in diameter), while thicker NWs (i.e., >40 nm in diameter) would grow along the most energy-favorable close-packed planes with the orientation of ⟨111⟩, supported by the approximate atomic models. Importantly, through the reliable control of sulfur passivation, growth orientation and surface roughness, GaSb NWs with a peak hole mobility of ∼400 cm² V⁻¹ s⁻¹ for a diameter of 48 nm, approaching the theoretical limit under a hole concentration of ∼2.2 × 10¹⁸ cm⁻³, can be achieved for the first time. All these indicate their promise for use in different technological domains.
A genetic approach to shape reconstruction in limited data tomography
International Nuclear Information System (INIS)
Turcanu, C.; Craciunescu, T.
2001-01-01
The paper proposes a new method for shape reconstruction in computerized tomography. Unlike nuclear medicine applications, in physical science problems we are often confronted with limited data sets: constraints in the number of projections or limited view angles. The problem of image reconstruction from projections may be considered as a problem of finding an image (solution) having projections that match the experimental ones. In our approach, we choose a statistical correlation coefficient to evaluate the fitness of any potential solution. The optimization process is carried out by a genetic algorithm. The algorithm has some features common to all genetic algorithms but also some problem-oriented characteristics. One of them is that a chromosome, representing a potential solution, is not linear but coded as a matrix of pixels corresponding to a two-dimensional image. This kind of internal representation mirrors the original object: slight differences between two points in the original problem space give rise to similar differences once they are coded. Another particular feature is a newly built crossover operator: the grid-based crossover, suitable for high-dimension two-dimensional chromosomes. Except for the population size and the dimension of the cutting grid for the grid-based crossover, all the other parameters of the algorithm are independent of the geometry of the tomographic reconstruction. The performance of the method is evaluated on a phantom typical of an application with limited data sets: the determination of neutron energy spectra with time resolution in the case of short-pulsed neutron emission. A genetic reconstruction is presented. Both qualitative and quantitative judgements, the latter based on figures of merit, point out that the proposed method ensures an improved reconstruction of shapes, sizes and resolution in the image, even in the presence of noise. (authors)
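A minimal genetic algorithm with a two-dimensional (matrix) chromosome and a grid-based crossover, in the spirit of the method described, can be sketched as follows. The fitness here is a negative squared projection error standing in for the paper's statistical correlation coefficient, and the phantom is a toy 6x6 binary image rather than a neutron spectrum:

```python
import random

random.seed(0)

def projections(img):
    # row sums followed by column sums (two orthogonal projections)
    return [sum(r) for r in img] + [sum(c) for c in zip(*img)]

def fitness(img, target_proj):
    # 0 means a perfect projection match; more negative is worse
    return -sum((a - b) ** 2 for a, b in zip(projections(img), target_proj))

def grid_crossover(a, b, cut):
    # exchange the lower-right block of the 2-D chromosome
    child = [row[:] for row in a]
    for i in range(cut, len(a)):
        for j in range(cut, len(a)):
            child[i][j] = b[i][j]
    return child

def mutate(img, rate=0.05):
    for row in img:
        for j in range(len(row)):
            if random.random() < rate:
                row[j] ^= 1
    return img

def ga_reconstruct(target_proj, n=6, pop_size=40, gens=300):
    pop = [[[random.randint(0, 1) for _ in range(n)] for _ in range(n)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda im: fitness(im, target_proj), reverse=True)
        survivors = pop[: pop_size // 2]  # elitism keeps the best half intact
        children = [mutate(grid_crossover(random.choice(survivors),
                                          random.choice(survivors),
                                          random.randint(1, n - 1)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=lambda im: fitness(im, target_proj))

phantom = [[1 if 1 <= i <= 4 and 2 <= j <= 3 else 0 for j in range(6)]
           for i in range(6)]
best = ga_reconstruct(projections(phantom))
```

Note that two projections of a binary image generally admit several consistent solutions; the paper's correlation fitness and limited-angle setting face the same ambiguity, which is why figures of merit rather than exact recovery are used to judge the result.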
Directory of Open Access Journals (Sweden)
Jorge Pereira
2015-12-01
Biological invasion by exotic organisms has become a key issue, a concern associated with the deep impacts on several domains described as resulting from such processes. A better understanding of the processes, the identification of more susceptible areas, and the definition of preventive or mitigation measures are identified as critical for the purpose of reducing the associated impacts. The use of species distribution modeling might help identify areas that are more susceptible to invasion. This paper aims to present preliminary results on assessing the susceptibility to invasion by the exotic species Acacia dealbata Mill. in the Ceira river basin. The results are based on the maximum entropy modeling approach, considered one of the correlative modeling techniques with the best predictive performance. Models whose validation is based on independent data sets show better performance; here, evaluation is based on the AUC (area under the ROC curve) accuracy measure.
International Nuclear Information System (INIS)
Chen, Xi Lin; De Santis, Valerio; Umenei, Aghuinyue Esai
2014-01-01
In this study, the maximum received power obtainable through wireless power transfer (WPT) by a small receiver (Rx) coil from a relatively large transmitter (Tx) coil is numerically estimated in the frequency range from 100 kHz to 10 MHz based on human body exposure limits. Analytical calculations were first conducted to determine the worst-case coupling between a homogeneous cylindrical phantom with a radius of 0.65 m and a Tx coil positioned 0.1 m away with the radius ranging from 0.25 to 2.5 m. Subsequently, three high-resolution anatomical models were employed to compute the peak induced field intensities with respect to various Tx coil locations and dimensions. Based on the computational results, scaling factors which correlate the cylindrical phantom and anatomical model results were derived. Next, the optimal operating frequency, at which the highest transmitter source power can be utilized without exceeding the exposure limits, is found to be around 2 MHz. Finally, a formulation is proposed to estimate the maximum obtainable power of WPT in a typical room scenario while adhering to the human body exposure compliance mandates. (paper)
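Because the induced internal fields scale linearly with coil current, the maximum compliant transmitter power follows from a single dosimetric simulation by quadratic scaling. The numbers below are illustrative placeholders, not the paper's dosimetry results or any actual regulatory limit:

```python
def max_allowed_power(p_sim_w, e_sim_vpm, e_limit_vpm):
    """Induced field strength scales with sqrt(transmitter power), so one
    simulated (power, peak field) pair fixes the compliant maximum:
        P_max = P_sim * (E_limit / E_sim)^2."""
    return p_sim_w * (e_limit_vpm / e_sim_vpm) ** 2

# hypothetical example: a 1 W drive producing a simulated 20 V/m peak
# internal field, against an assumed 83 V/m limit (placeholder value)
p_max = max_allowed_power(1.0, 20.0, 83.0)
```

This linearity is what lets the study reduce the search for the optimal frequency to comparing one scaling factor per frequency, rather than re-simulating every candidate source power.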
Approaching conversion limit with all-dielectric solar cell reflectors.
Fu, Sze Ming; Lai, Yi-Chun; Tseng, Chi Wei; Yan, Sheng Lun; Zhong, Yan Kai; Shen, Chang-Hong; Shieh, Jia-Min; Li, Yu-Ren; Cheng, Huang-Chung; Chi, Gou-chung; Yu, Peichen; Lin, Albert
2015-02-09
Metallic back reflectors have been used for thin-film and wafer-based solar cells for a very long time. Nonetheless, metallic mirrors might not be the best choice for photovoltaics. In this work, we show that solar cells with all-dielectric reflectors can surpass the best-configured metal-backed devices. Theoretical and experimental results all show that superior large-angle light scattering capability can be achieved by diffuse medium reflectors, and the solar cell J-V enhancement is higher for solar cells using all-dielectric reflectors. Specifically, the measured diffused scattering efficiency (D.S.E.) of a diffuse medium reflector is >0.8 for the light-trapping spectral range (600-1000 nm), and the measured reflectance of a diffuse medium can be as high as that of silver if the geometry of the embedded titanium oxide (TiO₂) nanoparticles is optimized. Moreover, the diffuse medium reflectors have the additional advantages of room-temperature processing, low cost, and very high throughput. We believe that using all-dielectric solar cell reflectors is a way to approach the thermodynamic conversion limit by completely excluding metallic dissipation.
Lee, Chieh-Han; Yu, Hwa-Lung; Chien, Lung-Chang
2014-05-01
Dengue fever has been identified as one of the most widespread vector-borne diseases in tropical and sub-tropical regions. In the last decade, dengue has been an emerging infectious disease epidemic in Taiwan, especially in the southern area, which has high annual incidence. For the purpose of disease prevention and control, an early warning system is urgently needed. Previous studies have shown significant relationships between climate variables, in particular rainfall and temperature, and the temporal epidemic patterns of dengue cases. However, the transmission of dengue fever is a complex interactive process whose composite space-time effects have mostly been understated. This study proposes developing a one-week-ahead warning system for dengue fever epidemics in southern Taiwan that considers nonlinear associations between weekly dengue cases and meteorological factors across space and time. The early warning system is based on an integration of the distributed lag nonlinear model (DLNM) and stochastic Bayesian Maximum Entropy (BME) analysis. The study identified the most significant meteorological measures, weekly minimum temperature and maximum 24-hour rainfall, with a continuous 15-week lagged effect on dengue case variation under conditions of uncertainty. Subsequently, the combination of nonlinear lagged effects of climate variables and a space-time dependence function is implemented via a Bayesian framework to predict dengue fever occurrences in southern Taiwan during 2012. The results show that the early warning system is useful for providing potential spatio-temporal predictions of dengue fever outbreaks. In conclusion, the proposed approach can provide a practical disease control tool for environmental regulators seeking more effective strategies for dengue fever prevention.
Melak, Tilahun; Gakkhar, Sunita
2015-12-01
In spite of the implementation of several strategies, tuberculosis (TB) is overwhelmingly a serious global public health problem causing millions of infections and deaths every year. This is mainly due to the emergence of drug-resistant varieties of TB. The current treatment strategies for drug-resistant TB are of longer duration, more expensive and have side effects. This highlights the importance of the identification and prioritization of targets for new drugs. This study has been carried out to prioritize potential drug targets of Mycobacterium tuberculosis H37Rv based on their flow to resistance genes. The weighted proteome interaction network of the pathogen was constructed using a dataset from the STRING database. Only a subset of the dataset with interactions that have a combined score value ≥770 was considered. A maximum flow approach has been used to prioritize potential drug targets. The potential drug targets were obtained through comparative genome and network centrality analysis. The curated set of resistance genes was retrieved from the literature. A detailed literature review and an additional assessment of the method were also carried out for validation. A list of 537 proteins which are essential to the pathogen and non-homologous with human was obtained from the comparative genome analysis. Through network centrality measures, 131 of them were found within the close neighborhood of the centre of gravity of the proteome network. These proteins were further prioritized based on their maximum flow value to resistance genes and they are proposed as reliable drug targets of the pathogen. Proteins which interact with the host were also identified in order to understand the infection mechanism. Potential drug targets of Mycobacterium tuberculosis H37Rv were successfully prioritized based on their flow to resistance genes of existing drugs, which is believed to increase the druggability of the targets since inhibition of a protein that has a maximum flow to
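The prioritisation step can be illustrated with a standard Edmonds-Karp maximum flow computation on a toy weighted interaction network; the node names and capacities below are hypothetical, not STRING data.

```python
import copy
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow; cap is a dict-of-dicts of residual
    capacities and is modified in place (pass a copy to reuse the graph)."""
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # bottleneck capacity along the path, then update residuals
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= b
            cap.setdefault(v, {}).setdefault(u, 0)
            cap[v][u] += b
        flow += b

# toy network: candidate targets A, B reach resistance gene R via X, Y
net = {"A": {"X": 3, "Y": 2}, "B": {"Y": 1}, "X": {"R": 2}, "Y": {"R": 2}}
ranking = sorted(["A", "B"],
                 key=lambda g: max_flow(copy.deepcopy(net), g, "R"),
                 reverse=True)
```

Here A carries more flow to the resistance gene than B and so ranks first; the study applies the same idea over the score-weighted STRING network with the curated resistance gene set as sinks.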
Energy Technology Data Exchange (ETDEWEB)
Deshpande, Paritosh C.; Tilwankar, Atit K.; Asolekar, Shyam R., E-mail: asolekar@iitb.ac.in
2012-11-01
yards in India. -- Highlights: ► Conceptual framework to apportion pollution loads from plate-cutting in ship recycling. ► Estimates upper bound (pollutants in air) and lower bound (intertidal sediments). ► Mathematical model using vector addition approach and based on Gaussian dispersion. ► Model predicted maximum emissions of heavy metals at different wind speeds. ► Exposure impacts on a worker's health and the intertidal sediments can be assessed.
International Nuclear Information System (INIS)
Deshpande, Paritosh C.; Tilwankar, Atit K.; Asolekar, Shyam R.
2012-01-01
► Conceptual framework to apportion pollution loads from plate-cutting in ship recycling. ► Estimates upper bound (pollutants in air) and lower bound (intertidal sediments). ► Mathematical model using vector addition approach and based on Gaussian dispersion. ► Model predicted maximum emissions of heavy metals at different wind speeds. ► Exposure impacts on a worker's health and the intertidal sediments can be assessed.
Entropy-limited hydrodynamics: a novel approach to relativistic hydrodynamics
Guercilena, Federico; Radice, David; Rezzolla, Luciano
2017-07-01
We present entropy-limited hydrodynamics (ELH): a new approach for the computation of numerical fluxes arising in the discretization of hyperbolic equations in conservation form. ELH is based on the hybridisation of an unfiltered high-order scheme with the first-order Lax-Friedrichs method. The activation of the low-order part of the scheme is driven by a measure of the locally generated entropy inspired by the artificial-viscosity method proposed by Guermond et al. (J. Comput. Phys. 230(11):4248-4267, 2011, doi: 10.1016/j.jcp.2010.11.043). Here, we present ELH in the context of high-order finite-differencing methods and of the equations of general-relativistic hydrodynamics. We study the performance of ELH in a series of classical astrophysical tests in general relativity involving isolated, rotating and nonrotating neutron stars, and including a case of gravitational collapse to black hole. We present a detailed comparison of ELH with the fifth-order monotonicity preserving method MP5 (Suresh and Huynh in J. Comput. Phys. 136(1):83-99, 1997, doi: 10.1006/jcph.1997.5745), one of the most common high-order schemes currently employed in numerical-relativity simulations. We find that ELH achieves comparable and, in many of the cases studied here, better accuracy than more traditional methods at a fraction of the computational cost (up to {˜}50% speedup). Given its accuracy and its simplicity of implementation, ELH is a promising framework for the development of new special- and general-relativistic hydrodynamics codes well adapted for massively parallel supercomputers.
International Nuclear Information System (INIS)
Seo, Yoshinobu; Sasaki, Takehiko; Nakamura, Hirohiko
2010-01-01
The cochlea is one of the most important organs to preserve during skull base surgery. However, no definite landmark for the cochlea has been identified during maximum drilling of the petrous apex such as anterior transpetrosal approach. The relationship between the cochlea and the petrous portion of the internal carotid artery (ICA) was assessed with computed tomography (CT) in 70 petrous bones of 35 patients, 16 males and 19 females aged 12-85 years (mean 48.6 years). After accumulation of volume data with multidetector CT, axial bone window images of 1-mm thickness were obtained to identify the cochlea and the horizontal petrous portion of the ICA. The distance was measured between the extended line of the posteromedial side of the horizontal petrous portion of the ICA and the basal turn of the cochlea. If the cochlea was located posteromedial to the ICA, the distance was expressed as a positive number, but if anterolateral, as a negative number. The mean distance was 0.6 mm (range -4.9 to 3.9 mm) and had no significant correlation with sex or age. The cochlea varies in location compared with the horizontal petrous portion of the ICA. Measurement of the depth and distance between the extended line of the posteromedial side of the horizontal intrapetrous ICA and the cochlea before surgery will save time, increase safety, and maximize bone evacuation during drilling of the petrous apex. (author)
Culley, S.; Noble, S.; Yates, A.; Timbs, M.; Westra, S.; Maier, H. R.; Giuliani, M.; Castelletti, A.
2016-09-01
Many water resource systems have been designed assuming that the statistical characteristics of future inflows are similar to those of the historical record. This assumption is no longer valid due to large-scale changes in the global climate, potentially causing declines in water resource system performance, or even complete system failure. Upgrading system infrastructure to cope with climate change can require substantial financial outlay, so it might be preferable to optimize existing system performance when possible. This paper builds on decision scaling theory by proposing a bottom-up approach to designing optimal feedback control policies for a water system exposed to a changing climate. This approach not only describes optimal operational policies for a range of potential climatic changes but also enables an assessment of a system's upper limit of its operational adaptive capacity, beyond which upgrades to infrastructure become unavoidable. The approach is illustrated using the Lake Como system in Northern Italy—a regulated system with a complex relationship between climate and system performance. By optimizing system operation under different hydrometeorological states, it is shown that the system can continue to meet its minimum performance requirements for more than three times as many states as it can under current operations. Importantly, a single management policy, no matter how robust, cannot fully utilize existing infrastructure as effectively as an ensemble of flexible management policies that are updated as the climate changes.
Population pressure on coral atolls: trends and approaching limits.
Rapaport, M
1990-09-01
Trends and approaching limits of population pressure on coral atolls are discussed by examining the atoll environment in terms of physical geography, production systems, and resource distribution. Atoll populations are grouped as dependent and independent, and demographic trends in population growth, migration, urbanization, and political dependency are reviewed. Examination of the carrying capacity includes a dynamic model, the influences of the West, and philosophical considerations. The carrying capacity is the "maximal population supportable in a given area". Traditional models are criticized for failing to account for external linkages. The proposed model is dynamic and considers perceived needs and overseas linkages. It also explains regional disparities in population distribution, and provides a continuing model for population movement from outer islands to district centers and mainland areas. Because of increased expectations and perceived needs, there is a lower carrying capacity for outlying areas, and expanded capacity in district centers. This leads to urbanization, emigration, and carrying capacity overshoot in regional and mainland areas. Policy intervention is necessary at the regional and island community level. Atolls, which are islands surrounding deep lagoons, exist in archipelagoes across the oceans, and are rich in aquatic life. The balance in this small land area with a vulnerable ecosystem may be easily disturbed by scarce water supplies, barren soils, future sea-level rise, hurricanes, and tsunamis. Traditionally, fisheries and horticulture (pit-taro, coconuts, and breadfruit) have sustained populations, but modern influences such as blasting, reef mining, new industrial technologies, population pressure, and urbanization threaten the balance. Population pressure, which has led to pollution, epidemics, malnutrition, crime, social disintegration, and foreign dependence, is evidenced in the areas of Tuvalu, Kiribati
Watanabe, Masashi; Goto, Kazuhisa; Bricker, Jeremy D.; Imamura, Fumihiko
2018-02-01
We examined the quantitative difference in the distribution of tsunami and storm deposits based on numerical simulations of inundation and sediment transport due to tsunami and storm events on the Sendai Plain, Japan. The calculated distance from the shoreline inundated by the 2011 Tohoku-oki tsunami was smaller than that inundated by storm surges from hypothetical typhoon events. Previous studies have assumed that deposits observed farther inland than the possible inundation limit of storm waves and storm surge were tsunami deposits. However, confirming only the extent of inundation is insufficient to distinguish tsunami and storm deposits, because the inundation limit of storm surges may be farther inland than that of tsunamis in the case of gently sloping coastal topography such as on the Sendai Plain. In other locations, where coastal topography is steep, the maximum inland inundation extent of storm surges may be only several hundred meters, so marine-sourced deposits that are distributed several km inland can be identified as tsunami deposits by default. Over both gentle and steep slopes, another difference between tsunami and storm deposits is the total volume deposited, as flow speed over land during a tsunami is faster than during a storm surge. Therefore, the total deposit volume could also be a useful proxy to differentiate tsunami and storm deposits.
Yan, M; Qian, M; Kong, C; Dargusch, M S
2014-02-01
The formation of grain boundary (GB) brittle carbides with a complex three-dimensional (3-D) morphology can be detrimental to both the fatigue properties and corrosion resistance of a biomedical titanium alloy. A detailed microscopic study has been performed on an as-sintered biomedical Ti-15Mo (in wt.%) alloy containing 0.032 wt.% C. A noticeable presence of a carbon-enriched phase has been observed along the GB, although the carbon content is well below the maximum carbon limit of 0.1 wt.% specified by ASTM Standard F2066. Transmission electron microscopy (TEM) identified that the carbon-enriched phase is face-centred cubic Ti2C. 3-D tomography reconstruction revealed that the Ti2C structure has morphology similar to primary α-Ti. Nanoindentation confirmed the high hardness and high Young's modulus of the GB Ti2C phase. To avoid GB carbide formation in Ti-15Mo, the carbon content should be limited to 0.006 wt.% by Thermo-Calc predictions. Similar analyses and characterization of the carbide formation in biomedical unalloyed Ti, Ti-6Al-4V and Ti-16Nb have also been performed. Copyright © 2013 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Rama, Aarti; Kesari, Shreekant; Das, Pradeep; Kumar, Vijay
2017-07-24
Extensive application of the routine insecticide dichlorodiphenyltrichloroethane (DDT) to control Phlebotomus argentipes (Diptera: Psychodidae), the proven vector of visceral leishmaniasis in India, has evoked the problem of resistance/tolerance against DDT, eventually nullifying DDT-dependent strategies to control this vector. Because tolerating an hour-long exposure to DDT is not challenging enough for resistant P. argentipes, estimating susceptibility by exposing sand flies to insecticide for just an hour becomes a trivial and futile task. Therefore, this bioassay study was carried out to investigate the maximum limit of exposure time for which DDT-resistant P. argentipes can endure the effect of DDT and survive. The mortality rate of laboratory-reared DDT-resistant P. argentipes exposed to DDT was studied at discriminating time intervals of 60 min, and it was concluded that highly resistant sand flies could withstand up to 420 min of exposure to this insecticide. Additionally, the lethal time for female P. argentipes was observed to be higher than for males, suggesting that they are highly resistant to DDT's toxicity. Our results support the monitoring of tolerance limits with respect to time and hence point towards an urgent need to change the World Health Organization's protocol for susceptibility identification in resistant P. argentipes.
Turek, Jan; Braïda, Benoît; De Proft, Frank
2017-10-17
The bonding in heavier Group 14 zero-valent complexes of a general formula L 2 E (E=Si-Pb; L=phosphine, N-heterocyclic and acyclic carbene, cyclic tetrylene and carbon monoxide) is probed by combining valence bond (VB) theory and maximum probability domain (MPD) approaches. All studied complexes are initially evaluated on the basis of the structural parameters and the shape of frontier orbitals revealing a bent structural motif and the presence of two lone pairs at the central E atom. For the VB calculations three resonance structures are suggested, representing the "ylidone", "ylidene" and "bent allene" structures, respectively. The influence of both ligands and central atoms on the bonding situation is clearly expressed in different weights of the resonance structures for the particular complexes. In general, the bonding in the studied E 0 compounds, the tetrylones, is best described as a resonating combination of "ylidone" and "ylidene" structures with a minor contribution of the "bent allene" structure. Moreover, the VB calculations allow for a straightforward assessment of the π-backbonding (E→L) stabilization energy. The validity of the suggested resonance model is further confirmed by the complementary MPD calculations focusing on the E lone pair region as well as the E-L bonding region. Likewise, the MPD method reveals a strong influence of the σ-donating and π-accepting properties of the ligand. In particular, either one single domain or two symmetrical domains are found in the lone pair region of the central atom, supporting the predominance of either the "ylidene" or "ylidone" structures having one or two lone pairs at the central atom, respectively. Furthermore, the calculated average populations in the lone pair MPDs correlate very well with the natural bond orbital (NBO) populations, and can be related to the average number of electrons that is backdonated to the ligands. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
33 CFR 401.52 - Limit of approach to a bridge.
2010-07-01
... 33 Navigation and Navigable Waters 3 2010-07-01 2010-07-01 false Limit of approach to a bridge... approach to a bridge. (a) No vessel shall pass the limit of approach sign at any movable bridge until the bridge is in a fully open position and the signal light shows green. (b) No vessel shall pass the limit...
Directory of Open Access Journals (Sweden)
Jean Potvin
Full Text Available Bulk-filter feeding is an energetically efficient strategy for resource acquisition and assimilation, and facilitates the maintenance of extreme body size as exemplified by baleen whales (Mysticeti) and multiple lineages of bony and cartilaginous fishes. Among mysticetes, rorqual whales (Balaenopteridae) exhibit an intermittent ram filter feeding mode, lunge feeding, which requires the abandonment of body-streamlining in favor of a high-drag, mouth-open configuration aimed at engulfing a very large amount of prey-laden water. Particularly while lunge feeding on krill (the most widespread prey preference among rorquals), the effort required during engulfment involves short bouts of high-intensity muscle activity that demand high metabolic output. We used computational modeling together with morphological and kinematic data on humpback (Megaptera novaeangliae), fin (Balaenoptera physalus), blue (Balaenoptera musculus) and minke (Balaenoptera acutorostrata) whales to estimate engulfment power output in comparison with standard metrics of metabolic rate. The simulations reveal that engulfment metabolism increases across the full body size of the larger rorqual species to nearly 50 times the basal metabolic rate of terrestrial mammals of the same body mass. Moreover, they suggest that the metabolism of the largest body sizes runs with significant oxygen deficits during mouth opening, namely, 20% over maximum VO2 at the size of the largest blue whales, thus requiring significant contributions from anaerobic catabolism during a lunge and significant recovery after a lunge. Our analyses show that engulfment metabolism is also significantly lower for smaller adults, typically one-tenth to one-half VO2max. These results not only point to a physiological limit on maximum body size in this lineage, but also have major implications for the ontogeny of extant rorquals as well as the evolutionary pathways used by ancestral toothed whales to transition from hunting
Credit card spending limit and personal finance: system dynamics approach
Directory of Open Access Journals (Sweden)
Mirjana Pejić Bach
2014-03-01
Full Text Available Credit cards have become one of the major ways of conducting cashless transactions. However, they have a long-term impact on the well-being of their owner through the debt generated by credit card usage. Credit card issuers approve high credit limits for credit card owners, thereby influencing their credit burden. A system dynamics model has been used to model the behavior of a credit card owner in different scenarios according to the size of the credit limit. Experiments with the model demonstrated that a higher credit limit approved on the credit card decreases the budget available for spending in the long run. This is a contribution toward the evaluation of actions for credit limit control based on their consequences.
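The feedback loop the abstract describes (higher limit → more spending → more debt service → less budget left) can be sketched as a toy stock-flow simulation. All rates and the payment rule below are invented for illustration and are not the paper's calibrated model.

```python
def simulate(credit_limit, months=60, income=2000.0,
             interest=0.02, min_pay_rate=0.05, impulse=0.10):
    """Toy stock-flow model of a card holder: debt is the stock,
    spending, interest and payments are the flows."""
    debt, budget = 0.0, income
    for _ in range(months):
        spending = impulse * (credit_limit - debt)  # spend part of free credit
        debt += spending
        debt *= 1 + interest                        # monthly interest accrues
        payment = min(debt, min_pay_rate * debt + 25)
        debt -= payment
        budget = income - payment                   # cash left after debt service
    return budget

# long-run spendable budget under a low vs a high approved limit
low_limit_budget = simulate(1000)
high_limit_budget = simulate(10000)
```

With these (arbitrary) parameters the model reproduces the paper's qualitative finding: the higher approved limit leaves a smaller long-run budget for spending.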
Chapman-Enskog approach to flux-limited diffusion theory
International Nuclear Information System (INIS)
Levermore, C.D.
1979-01-01
Using the technique developed by Chapman and Enskog for deriving the Navier-Stokes equations from the Boltzmann equation, a framework is set up for deriving diffusion theories from the transport equation. The procedure is first applied to give a derivation of isotropic diffusion theory and then of a completely new theory which is naturally flux-limited. This new flux-limited diffusion theory is then compared with asymptotic diffusion theory.
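For context, the flux-limited diffusion closure associated with Levermore's approach is commonly written as follows. This is the standard form from the flux-limited diffusion literature, quoted as background rather than from this abstract:

```latex
% Flux-limited diffusion with a Levermore-type limiter (standard form)
F = -\frac{c\,\lambda(R)}{\kappa}\,\nabla E, \qquad
R = \frac{\lvert \nabla E \rvert}{\kappa E}, \qquad
\lambda(R) = \frac{1}{R}\left(\coth R - \frac{1}{R}\right).
```

In the opaque limit $R \to 0$, $\lambda \to 1/3$ and classical isotropic diffusion is recovered; in the transparent limit $R \to \infty$, $\lambda \to 1/R$ so that $\lvert F \rvert \to cE$, which is the "naturally flux-limited" property the abstract refers to.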
Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...
Dodrill, Michael J.; Yackulic, Charles B.; Kennedy, Theodore A.; Haye, John W
2016-01-01
The cold and clear water conditions present below many large dams create ideal conditions for the development of economically important salmonid fisheries. Many of these tailwater fisheries have experienced declines in the abundance and condition of large trout species, yet the causes of these declines remain uncertain. Here, we develop, assess, and apply a drift-foraging bioenergetics model to identify the factors limiting rainbow trout (Oncorhynchus mykiss) growth in a large tailwater. We explored the relative importance of temperature, prey quantity, and prey size by constructing scenarios where these variables, both singly and in combination, were altered. Predicted growth matched empirical mass-at-age estimates, particularly for younger ages, demonstrating that the model accurately describes how current temperature and prey conditions interact to determine rainbow trout growth. Modeling scenarios that artificially inflated prey size and abundance demonstrate that rainbow trout growth is limited by the scarcity of large prey items and overall prey availability. For example, shifting 10% of the prey biomass to the 13 mm (large) length class, without increasing overall prey biomass, increased lifetime maximum mass of rainbow trout by 88%. Additionally, warmer temperatures resulted in lower predicted growth at current and lower levels of prey availability; however, growth was similar across all temperatures at higher levels of prey availability. Climate change will likely alter flow and temperature regimes in large rivers with corresponding changes to invertebrate prey resources used by fish. Broader application of drift-foraging bioenergetics models to build a mechanistic understanding of how changes to habitat conditions and prey resources affect growth of salmonids will benefit management of tailwater fisheries.
A Practical Approach for Parameter Identification with Limited Information
DEFF Research Database (Denmark)
Zeni, Lorenzo; Yang, Guangya; Tarnowski, Germán Claudio
2014-01-01
A practical parameter estimation procedure for a real excitation system is reported in this paper. The core algorithm is based on genetic algorithm (GA) which estimates the parameters of a real AC brushless excitation system with limited information about the system. Practical considerations are ...... parameters. The whole methodology is described and the estimation strategy is presented in this paper....
Fracture mechanics approach to estimate rail wear limits
2009-10-01
This paper describes a systematic methodology to estimate allowable limits for rail head wear in terms of vertical head-height loss, gage-face side wear, and/or the combination of the two. This methodology is based on the principles of engineering fr...
Can we still comply with the maximum limit of 2°C? Approaches to a New Climate Contract
Directory of Open Access Journals (Sweden)
F. J. Radermacher
2014-10-01
Full Text Available The international climate policy is in trouble. CO2 emissions are rising instead of shrinking. The 2015 climate summit in Paris should lead to a global agreement, but what should be its design? In an earlier paper in Cadmus on the issue, the author outlined a contract formula based on the so-called ‘Copenhagen Accord’ that is based on a dynamic cap and an intelligent burden sharing between politics and the private sector. The private sector was brought into the deal via the idea of a voluntary climate neutrality of private emissions culminating in a ‘Global Neutral’ promoted by the United Nations. All this was based on a global cap-and-trade system. For a number of reasons, it may be that a global cap-and-trade system cannot or will not be established. States may use other instruments to fulfil their promises. The present paper elaborates that even under such conditions, the basic proposal can still be implemented. This may prove useful for the Paris negotiations.
Accuracy, precision, and lower detection limits (a deficit reduction approach)
International Nuclear Information System (INIS)
Bishop, C.T.
1993-01-01
The evaluation of the accuracy, precision and lower detection limits of the determination of trace radionuclides in environmental samples can become quite sophisticated and time consuming. This in turn could add significant cost to the analyses being performed. In the present method, a "deficit reduction approach" has been taken to keep costs low, but at the same time provide defensible data. In order to measure the accuracy of a particular method, reference samples are measured over the time period that the actual samples are being analyzed. Using a Lotus spreadsheet, data are compiled and an average accuracy is computed. If pairs of reference samples are analyzed, then precision can also be evaluated from the duplicate data sets. The standard deviation can be calculated if the reference concentrations of the duplicates are all in the same general range. Laboratory blanks are used to estimate the lower detection limits. The lower detection limit is calculated as 4.65 times the standard deviation of a set of blank determinations made over a given period of time. A Lotus spreadsheet is again used to compile data, and LDLs over different periods of time can be compared.
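The two quantities described above are simple enough to compute outside a spreadsheet. The sketch below uses illustrative values (not data from the report) to reproduce the average-accuracy and the 4.65-sigma lower-detection-limit calculations:

```python
import statistics

def accuracy(measured, reference):
    """Average ratio of measured to reference activity across QC samples."""
    return sum(m / r for m, r in zip(measured, reference)) / len(measured)

def lower_detection_limit(blanks):
    """LDL taken as 4.65 times the standard deviation of blank results."""
    return 4.65 * statistics.stdev(blanks)

# hypothetical blank determinations collected over a reporting period
blanks = [0.02, 0.03, 0.01, 0.02, 0.04, 0.02]
ldl = lower_detection_limit(blanks)
mean_accuracy = accuracy([9.0, 11.0], [10.0, 10.0])
```

The same layout (one row per reference sample or blank, one cell for the summary statistic) is what the Lotus spreadsheet in the report would hold.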
Chandrasekhar Limit: An Elementary Approach Based on Classical Physics and Quantum Theory
Pinochet, Jorge; Van Sint Jan, Michael
2016-01-01
In a brief article published in 1931, Subrahmanyan Chandrasekhar made public an important astronomical discovery. In his article, the then young Indian astrophysicist introduced what is now known as the "Chandrasekhar limit." This limit establishes the maximum mass of a stellar remnant beyond which the repulsion force between electrons…
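The limit itself has a compact standard expression; the formula below is the textbook result, quoted as background rather than from the article:

```latex
% Chandrasekhar mass (standard result)
M_{\mathrm{Ch}} = \frac{\omega_3^0 \sqrt{3\pi}}{2}
\left(\frac{\hbar c}{G}\right)^{3/2} \frac{1}{(\mu_e m_{\mathrm{H}})^2}
\approx 1.4\,M_\odot \quad (\mu_e = 2),
```

where $\omega_3^0 \approx 2.018$ is a Lane-Emden constant, $\mu_e$ is the mean molecular weight per electron and $m_{\mathrm{H}}$ the hydrogen mass. The $(\hbar c/G)^{3/2} m_{\mathrm{H}}^{-2}$ scaling is exactly the combination of classical gravity and quantum degeneracy pressure that the article derives at an elementary level.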
Approximate maximum parsimony and ancestral maximum likelihood.
Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat
2010-01-01
We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
International Nuclear Information System (INIS)
Thakur, Suprajnya; Mishra, Ashutosh; Thakur, Mamta; Thakur, Abhilash
2014-01-01
In the present study, efforts have been made to analyze the role of different structural/topological and non-conventional physicochemical features on the X-ray absorption property, the wavelength of maximum absorption λm. Efforts are also made to compare the magnitudes of various parameters to optimize the features mainly responsible for characterizing the wavelength of maximum absorbance λm in X-ray absorption. For this purpose the multiple linear regression method is used, and on the basis of regression and correlation values suitable models have been developed.
Geometrical Optimization Approach to Isomerization: Models and Limitations.
Chang, Bo Y; Shin, Seokmin; Engel, Volker; Sola, Ignacio R
2017-11-02
We study laser-driven isomerization reactions through an excited electronic state using the recently developed Geometrical Optimization procedure. Our goal is to analyze whether an initial wave packet in the ground state, with optimized amplitudes and phases, can be used to enhance the yield of the reaction at faster rates, driven by a single picosecond pulse or a pair of femtosecond pulses resonant with the electronic transition. We show that the symmetry of the system imposes limitations in the optimization procedure, such that the method rediscovers the pump-dump mechanism.
A general approach to total repair cost limit replacement policies
Directory of Open Access Journals (Sweden)
F. Beichelt
2014-01-01
Full Text Available A common replacement policy for technical systems consists in replacing a system by a new one after its economic lifetime, i.e. at the moment when its long-run maintenance cost rate is minimal. However, the strict application of the economic lifetime does not take into account the individual deviations of the maintenance cost rates of single systems from the average cost development. Hence, Beichelt proposed the total repair cost limit replacement policy: the system is replaced by a new one as soon as its total repair cost reaches or exceeds a given level. He modelled the repair cost development by functions of the Wiener process with drift. Here the same policy is considered under the assumption that the one-dimensional probability distribution of the process describing the repair cost development is given. In the examples analysed, applying the total repair cost limit replacement policy instead of the economic lifetime leads to cost savings of between 4% and 30%. Finally, it is illustrated how to include the reliability aspect into the policy.
Data Smearing: An Approach to Disclosure Limitation for Tabular Data
Directory of Open Access Journals (Sweden)
Toth Daniell
2014-12-01
Full Text Available Statistical agencies often collect sensitive data for release to the public at aggregated levels in the form of tables. To protect confidential data, some cells are suppressed in the publicly released data. One problem with this method is that many cells of interest must be suppressed in order to protect a much smaller number of sensitive cells. Another problem is that the covariates used to aggregate and the level of aggregation must be fixed before the data are released. Both of these restrictions can severely limit the utility of the data. We propose a new disclosure limitation method that replaces the full set of microdata with synthetic data for use in producing released data in tabular form. This synthetic data set is obtained by replacing each unit’s values with a weighted average of sampled values from the surrounding area. The synthetic data is produced in a way to give asymptotically unbiased estimates for aggregate cells as the number of units in the cell increases. The method is applied to the U.S. Bureau of Labor Statistics Quarterly Census of Employment and Wages data, which is released to the public quarterly in tabular form and aggregated across varying scales of time, area, and economic sector.
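A minimal sketch of the smearing idea, assuming uniform weights and a nearest-neighbour (by value) definition of "surrounding area". Both are simplifying assumptions; the paper's weighting and geographic neighbourhoods are more elaborate.

```python
import random

def smear(units, k=3, seed=0):
    """Replace each unit's value with an average of k values sampled
    from nearby units, so no released value is any single unit's value."""
    rng = random.Random(seed)
    values = sorted(units)
    out = []
    for i in range(len(values)):
        # neighbourhood: up to k closest values on either side (plus self)
        lo, hi = max(0, i - k), min(len(values), i + k + 1)
        neighbourhood = values[lo:hi]
        draw = rng.choices(neighbourhood, k=k)  # uniform weights in this sketch
        out.append(sum(draw) / k)
    return out

# hypothetical establishment payroll microdata (thousands of dollars)
payroll = [30, 32, 35, 40, 41, 90, 95, 100]
synthetic = smear(payroll)
```

Because each synthetic value is an average of genuine nearby values, cell totals built from `synthetic` stay close to the truth for large cells while individual units are masked, which is the property the method relies on.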
[Limitation of therapeutic effort: Approach to a combined view].
Bueno Muñoz, M J
2013-01-01
Over the past few decades, we have been witnessing that increasingly fewer people pass away at home and increasingly more do so in hospital. More specifically, 20% of deaths now occur in an intensive care unit (ICU). However, death in the ICU has become a highly technical process. This sometimes leads to excesses, because the resources used are not proportionate to the purposes pursued (futility), and may create situations that do not respect the person's dignity throughout the death process. It is within this context that the clinical procedure called "limitation of the therapeutic effort" (LTE) is reviewed. This has become a true bridge between intensive care and palliative care. Its final goal is to guarantee a dignified and painless death for the terminally ill. Copyright © 2012 Elsevier España, S.L. y SEEIUC. All rights reserved.
Stochastic resonance a mathematical approach in the small noise limit
Herrmann, Samuel; Pavlyukevich, Ilya; Peithmann, Dierk
2013-01-01
Stochastic resonance is a phenomenon arising in a wide spectrum of areas in the sciences ranging from physics through neuroscience to chemistry and biology. This book presents a mathematical approach to stochastic resonance which is based on a large deviations principle (LDP) for randomly perturbed dynamical systems with a weak inhomogeneity given by an exogenous periodicity of small frequency. Resonance, the optimal tuning between period length and noise amplitude, is explained by optimizing the LDP's rate function. The authors show that not all physical measures of tuning quality are robust with respect to dimension reduction. They propose measures of tuning quality based on exponential transition rates explained by large deviations techniques and show that these measures are robust. The book sheds some light on the shortcomings and strengths of different concepts used in the theory and applications of stochastic resonance without attempting to give a comprehensive overview of the many facets of stochastic ...
Energy Technology Data Exchange (ETDEWEB)
Lin, Whei-Min; Hong, Chih-Ming [Department of Electrical Engineering, National Sun Yat-Sen University, Kaohsiung 80424 (China)
2010-06-15
To achieve maximum power point tracking (MPPT) for wind power generation systems, the rotational speed of wind turbines should be adjusted in real time according to wind speed. In this paper, a Wilcoxon radial basis function network (WRBFN) with hill-climb searching (HCS) MPPT strategy is proposed for a permanent magnet synchronous generator (PMSG) with a variable-speed wind turbine. A high-performance online training WRBFN using a back-propagation learning algorithm with modified particle swarm optimization (MPSO) regulating controller is designed for a PMSG. The MPSO is adopted in this study to adapt to the learning rates in the back-propagation process of the WRBFN to improve the learning capability. The MPPT strategy locates the system operation points along the maximum power curves based on the dc-link voltage of the inverter, thus avoiding the generator speed detection. (author)
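The hill-climb searching (HCS) part of the strategy can be sketched independently of the WRBFN controller: perturb the rotor speed, keep moving while output power rises, and reverse direction when it falls. The power curve below is a toy stand-in with a single maximum, not a real turbine characteristic.

```python
def wind_power(rotor_speed):
    """Toy turbine power curve with a single maximum near rotor speed 7."""
    return -(rotor_speed - 7.0) ** 2 + 50.0

def hcs_mppt(power_curve, start=2.0, step=0.5, iters=60):
    """Hill-climb searching MPPT: fixed-step perturb-and-observe."""
    speed = start
    direction = 1.0
    last_power = power_curve(speed)
    for _ in range(iters):
        speed += direction * step      # perturb operating point
        power = power_curve(speed)
        if power < last_power:         # overshot the peak: reverse direction
            direction = -direction
        last_power = power
    return speed

peak_speed = hcs_mppt(wind_power)
```

The fixed step makes the operating point oscillate around the maximum within one step size; adaptive-step variants (which the WRBFN effectively provides) shrink that oscillation.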
Multicore in Production: Advantages and Limits of the Multiprocess Approach
Binet, S; The ATLAS collaboration; Lavrijsen, W; Leggett, Ch; Lesny, D; Jha, M K; Severini, H; Smith, D; Snyder, S; Tatarkhanov, M; Tsulaia, V; van Gemmeren, P; Washbrook, A
2011-01-01
The shared memory architecture of multicore CPUs provides HENP developers with the opportunity to reduce the memory footprint of their applications by sharing memory pages between the cores in a processor. ATLAS pioneered the multi-process approach to parallelizing HENP applications. Using Linux fork() and the Copy On Write mechanism we implemented a simple event task farm which allows to share up to 50% memory pages among event worker processes with negligible CPU overhead. By leaving the task of managing shared memory pages to the operating system, we have been able to run in parallel large reconstruction and simulation applications originally written to be run in a single thread of execution with little to no change to the application code. In spite of this, the process of validating athena multi-process for production took ten months of concentrated effort and is expected to continue for several more months. In general terms, we had two classes of problems in the multi-process port: merging the output fil...
Bianco, Antonino; Filingeri, Davide; Paoli, Antonio; Palma, Antonio
2015-04-01
The aim of this study was to evaluate a new method to perform the one repetition maximum (1RM) bench press test, by combining previously validated predictive and practical procedures. Eight young male and 7 female participants, with no previous experience of resistance training, performed a first set of repetitions to fatigue (RTF) with a workload corresponding to ⅓ of their body mass (BM) for a maximum of 25 repetitions. Following a 5-min recovery period, a second set of RTF was performed with a workload corresponding to ½ of participants' BM. The number of repetitions performed in this set was then used to predict the workload to be used for the 1RM bench press test using Mayhew's equation. Oxygen consumption, heart rate and blood lactate were monitored before, during and after each 1RM attempt. A significant effect of gender was found on the maximum number of repetitions achieved during the RTF set performed with ½ of participants' BM (males: 25.0 ± 6.3; females: 11.0 ± 10.6; t = 6.2; p bench press test. We conclude that, by combining previously validated predictive equations with practical procedures (i.e. using a fraction of participants' BM to determine the workload for an RTF set), the new method we tested appeared safe, accurate (particularly in females) and time-effective in the practical evaluation of 1RM performance in inexperienced individuals. Copyright © 2014 Elsevier Ltd. All rights reserved.
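As an illustration of the prediction step, a commonly quoted form of Mayhew's repetitions-to-fatigue equation can be coded as follows (the coefficients are the bench-press values as usually cited in the literature; the example load and repetition count are hypothetical, not the study's data):

```python
import math

def mayhew_1rm(load_kg, reps):
    """Estimate bench-press 1RM from a repetitions-to-fatigue set,
    using the commonly quoted coefficients of Mayhew's equation:
    1RM = 100 * load / (52.2 + 41.9 * exp(-0.055 * reps))."""
    return 100.0 * load_kg / (52.2 + 41.9 * math.exp(-0.055 * reps))

# Hypothetical example: 30 kg (about half the body mass of a 60 kg
# participant) lifted for 10 repetitions to fatigue.
est_1rm = mayhew_1rm(30.0, 10)
```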
Teal, Lorna R.; Marras, Stefano; Peck, Myron A.; Domenici, Paolo
2018-02-01
Models are useful tools for predicting the impact of global change on species distribution and abundance. As ectotherms, fish are being challenged to adapt or track changes in their environment, either in time through a phenological shift or in space by a biogeographic shift. Past modelling efforts have largely been based on correlative Species Distribution Models, which use known occurrences of species across landscapes of interest to define sets of conditions under which species are likely to maintain populations. The practical advantages of this correlative approach are its simplicity and the flexibility in terms of data requirements. However, effective conservation management requires models that make projections beyond the range of available data. One way to deal with such an extrapolation is to use a mechanistic approach based on physiological processes underlying climate change effects on organisms. Here we illustrate two approaches for developing physiology-based models to characterize fish habitat suitability. (i) Aerobic Scope Models (ASM) are based on the relationship between environmental factors and aerobic scope (defined as the difference between maximum and standard (basal) metabolism). This approach is based on experimental data collected by using a number of treatments that allow a function to be derived to predict aerobic metabolic scope from the stressor/environmental factor(s). This function is then integrated with environmental (oceanographic) data of current and future scenarios. For any given species, this approach allows habitat suitability maps to be generated at various spatiotemporal scales. The strength of the ASM approach lies in the estimate of relative performance when comparing, for example, different locations or different species. (ii) Dynamic Energy Budget (DEB) models are based on first principles including the idea that metabolism is organised in the same way within all animals. The (standard) DEB model aims to describe
Energy Technology Data Exchange (ETDEWEB)
Ma, Hong-Hao [Chongqing Univ., Chongqing (People's Republic of China); Wu, Xing-Gang [Chongqing Univ., Chongqing (People's Republic of China); Ma, Yang [Chongqing Univ., Chongqing (People's Republic of China); Brodsky, Stanley J. [Stanford Univ., Stanford, CA (United States); Mojaza, Matin [KTH Royal Inst. of Technology and Stockholm Univ., Stockholm (Sweden)
2015-05-26
A key problem in making precise perturbative QCD (pQCD) predictions is how to set the renormalization scale of the running coupling unambiguously at each finite order. The elimination of the uncertainty in setting the renormalization scale in pQCD will greatly increase the precision of collider tests of the Standard Model and the sensitivity to new phenomena. Renormalization group invariance requires that predictions for observables must also be independent of the choice of the renormalization scheme. The well-known Brodsky-Lepage-Mackenzie (BLM) approach cannot be easily extended beyond next-to-next-to-leading order of pQCD. Several suggestions have been proposed to extend the BLM approach to all orders. In this paper we discuss two distinct methods. One is based on the “Principle of Maximum Conformality” (PMC), which provides a systematic all-orders method to eliminate the scale and scheme ambiguities of pQCD. The PMC extends the BLM procedure to all orders using renormalization group methods; as an outcome, it significantly improves the pQCD convergence by eliminating renormalon divergences. An alternative method is the “sequential extended BLM” (seBLM) approach, which was primarily designed to improve the convergence of pQCD series. The seBLM, as originally proposed, introduces auxiliary fields and follows the pattern of the β0-expansion to fix the renormalization scale. However, the seBLM requires a recomputation of pQCD amplitudes including the auxiliary fields; due to the limited availability of calculations using these auxiliary fields, the seBLM has only been applied to a few processes at low orders. To avoid the complications of adding extra fields, we propose a modified version of seBLM which allows us to apply this method to higher orders. We then perform detailed numerical comparisons of the two alternative scale-setting approaches by investigating their predictions for the annihilation cross section ratio R
Beltrán, M C; Romero, T; Althaus, R L; Molina, M P
2013-05-01
The Charm maximum residue limit β-lactam and tetracycline test (Charm MRL BLTET; Charm Sciences Inc., Lawrence, MA) is an immunoreceptor assay utilizing Rapid One-Step Assay lateral flow technology that detects β-lactam or tetracycline drugs in raw commingled cow milk at or below European Union maximum residue levels (EU-MRL). The Charm MRL BLTET test procedure was recently modified (dilution in buffer and longer incubation) by the manufacturers for use with raw ewe and goat milk. To assess the Charm MRL BLTET test for the detection of β-lactams and tetracyclines in milk of small ruminants, an evaluation study was performed at the Instituto de Ciencia y Tecnologia Animal of Universitat Politècnica de València (Spain). The test specificity and detection capability (CCβ) were studied following Commission Decision 2002/657/EC. Specificity results obtained in this study were optimal for individual milk free of antimicrobials from ewes (99.2% for β-lactams and 100% for tetracyclines) and goats (97.9% for β-lactams and 100% for tetracyclines) along the entire lactation period, regardless of whether the results were visually or instrumentally interpreted. Moreover, no positive results were obtained when relatively high concentrations of different substances belonging to antimicrobial families other than β-lactams and tetracyclines were present in ewe and goat milk. For both types of milk, the CCβ calculated was lower than or equal to the EU-MRL for amoxicillin (4 µg/kg), ampicillin (4 µg/kg), benzylpenicillin (≤ 2 µg/kg), dicloxacillin (30 µg/kg), oxacillin (30 µg/kg), cefacetrile (≤ 63 µg/kg), cefalonium (≤ 10 µg/kg), cefapirin (≤ 30 µg/kg), desacetylcefapirin (≤ 30 µg/kg), cefazolin (≤ 25 µg/kg), cefoperazone (≤ 25 µg/kg), cefquinome (20 µg/kg), ceftiofur (≤ 50 µg/kg), desfuroylceftiofur (≤ 50 µg/kg), and cephalexin (≤ 50 µg/kg). However, this test could detect neither cloxacillin nor nafcillin at or below the EU-MRL (CCβ > 30 µg/kg). The
Energy Technology Data Exchange (ETDEWEB)
NONE
2017-02-16
This report focuses on studies by KIT-INE to derive a significantly improved description of the chemical behaviour of Americium and Plutonium in saline NaCl, MgCl₂ and CaCl₂ brine systems. The studies are based on new experimental data and aim at deriving reliable Am and Pu solubility limits for the investigated systems as well as comprehensive thermodynamic model descriptions. Both aspects are of high relevance in the context of potential source term estimations for Americium and Plutonium in aqueous brine systems and related scenarios. Americium and Plutonium are long-lived alpha-emitting radionuclides which, due to their high radiotoxicity, need to be accounted for in a reliable and traceable way. The hydrolysis of trivalent actinides and the effect of highly alkaline pH conditions on the solubility of trivalent actinides in calcium chloride rich brine solutions were investigated and a thermodynamic model derived. The solubility of Plutonium in saline brine systems was studied under reducing and non-reducing conditions and is described within a new thermodynamic model. The influence of dissolved carbonate on Americium and Plutonium solubility in MgCl₂ solutions was investigated and quantitative information on Am and Pu solubility limits in these systems derived. Thermodynamic constants and model parameters derived in this work are implemented in the Thermodynamic Reference Database THEREDA owned by BfS. According to the quality assurance approach in THEREDA, it was necessary to publish parts of this work in peer-reviewed scientific journals. The publications are focused on solubility experiments, spectroscopy of aquatic and solid species and thermodynamic data. (Neck et al., Pure Appl. Chem., Vol. 81, (2009), pp. 1555-1568; Altmaier et al., Radiochimica Acta, 97, (2009), pp. 187-192; Altmaier et al., Actinide Research Quarterly, No. 2, (2011), pp. 29-32.)
Directory of Open Access Journals (Sweden)
Hussain Shareef
2017-01-01
Full Text Available Many maximum power point tracking (MPPT) algorithms have been developed in recent years to maximize the produced PV energy. These algorithms are often not sufficiently robust to fast-changing environmental conditions and differ in efficiency, steady-state accuracy, and tracking dynamics. Thus, this paper proposes a new random forest (RF) model to improve MPPT performance. The RF model has the ability to capture the nonlinear association of patterns between predictors, such as irradiance and temperature, to determine the maximum power point accurately. An RF-based tracker is designed for 25 SolarTIFSTF-120P6 PV modules, with a peak capacity of 3 kW, using two high-speed sensors. For this purpose, a complete PV system is modeled using 300,000 data samples and simulated using the MATLAB/SIMULINK package. The proposed RF-based MPPT is then tested under actual environmental conditions for 24 days to validate the accuracy and dynamic response. The response of the RF-based MPPT model is also compared with that of the artificial neural network and adaptive neurofuzzy inference system algorithms for further validation. The results show that the proposed MPPT technique gives a significant improvement compared with the other techniques. In addition, the RF model passes the Bland–Altman test, with more than 95 percent acceptability.
Shareef, Hussain; Mutlag, Ammar Hussein; Mohamed, Azah
2017-01-01
Many maximum power point tracking (MPPT) algorithms have been developed in recent years to maximize the produced PV energy. These algorithms are often not sufficiently robust to fast-changing environmental conditions and differ in efficiency, steady-state accuracy, and tracking dynamics. Thus, this paper proposes a new random forest (RF) model to improve MPPT performance. The RF model has the ability to capture the nonlinear association of patterns between predictors, such as irradiance and temperature, to determine the maximum power point accurately. An RF-based tracker is designed for 25 SolarTIFSTF-120P6 PV modules, with a peak capacity of 3 kW, using two high-speed sensors. For this purpose, a complete PV system is modeled using 300,000 data samples and simulated using the MATLAB/SIMULINK package. The proposed RF-based MPPT is then tested under actual environmental conditions for 24 days to validate the accuracy and dynamic response. The response of the RF-based MPPT model is also compared with that of the artificial neural network and adaptive neurofuzzy inference system algorithms for further validation. The results show that the proposed MPPT technique gives a significant improvement compared with the other techniques. In addition, the RF model passes the Bland-Altman test, with more than 95 percent acceptability.
Directory of Open Access Journals (Sweden)
Matthew W Breece
Full Text Available Atlantic sturgeon (Acipenser oxyrinchus oxyrinchus) experienced severe declines due to habitat destruction and overfishing beginning in the late 19th century. Subsequent to the boom and bust period of exploitation, there has been minimal fishing pressure and improving habitats. However, lack of recovery led to the 2012 listing of Atlantic sturgeon under the Endangered Species Act. Although habitats may be improving, the availability of high quality spawning habitat, essential for the survival and development of eggs and larvae, may still be a limiting factor in the recovery of Atlantic sturgeon. To estimate adult Atlantic sturgeon spatial distributions during riverine occupancy in the Delaware River, we utilized a maximum entropy (MaxEnt) approach along with passive biotelemetry during the likely spawning season. We found that substrate composition and distance from the salt front significantly influenced the locations of adult Atlantic sturgeon in the Delaware River. To broaden the scope of this study we projected our model onto four scenarios depicting varying locations of the salt front in the Delaware River: the contemporary location of the salt front during the likely spawning season, the location of the salt front during the historic fishery in the late 19th century, an estimated shift in the salt front by the year 2100 due to climate change, and an extreme drought scenario, similar to that which occurred in the 1960s. The movement of the salt front upstream as a result of dredging and climate change likely eliminated historic spawning habitats and currently threatens areas where Atlantic sturgeon spawning may be taking place. Identifying where suitable spawning substrate and water chemistry intersect with the likely occurrence of adult Atlantic sturgeon in the Delaware River highlights essential spawning habitats, enhancing recovery prospects for this imperiled species.
Directory of Open Access Journals (Sweden)
Elham Faraji
2016-03-01
Full Text Available In this research, the capability of the charged system search (CSS) algorithm in handling water management optimization problems is investigated. First, two complex mathematical problems are solved by CSS and the results are compared with those obtained from other metaheuristic algorithms. In the last step, the optimization model developed with the CSS algorithm is applied to waste load allocation in rivers based on the total maximum daily load (TMDL) concept. The results are presented in tables and figures for easy comparison. The study indicates the superiority of the CSS algorithm in terms of speed and performance over the other metaheuristic algorithms, while its precision in water management optimization problems is verified.
International Nuclear Information System (INIS)
He, Yi; Scheraga, Harold A.; Liwo, Adam
2015-01-01
Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much as possible information of the original biomolecular system in all-atom representation but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method, which is based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field
Directory of Open Access Journals (Sweden)
Rajeev Kumar
Full Text Available We present a comprehensive analysis of the estimation of fisheries Maximum Sustainable Yield (MSY) reference points using an ecosystem model built for Mille Lacs Lake, the second largest lake within Minnesota, USA. Data from single-species modelling output, extensive annual sampling of species abundances, annual catch surveys, stomach-content analysis of predator-prey interactions, and expert opinions were brought together within the framework of an Ecopath with Ecosim (EwE) ecosystem model. An increase in the lake water temperature was observed in the last few decades; therefore, we also incorporated a temperature forcing function in the EwE model to capture the influences of changing temperature on the species composition and food web. The EwE model was fitted to abundance and catch time-series for the period 1985 to 2006. Using the ecosystem model, we estimated reference points for most of the fished species in the lake at single-species as well as ecosystem levels, with and without considering the influence of temperature change; therefore, our analysis investigated the trophic and temperature effects on the reference points. The paper concludes that reference points such as MSY are not stationary, but change when (1) environmental conditions alter species productivity and (2) fishing on predators alters the compensatory response of their prey. Thus, it is necessary for management to re-estimate or re-evaluate the reference points when changes in environmental conditions and/or major shifts in species abundance or community structure are observed.
Energy Technology Data Exchange (ETDEWEB)
Hogden, J.
1996-11-05
The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.
Yusop, Syazwani Mohd; Mustapha, Muzzneena Ahmad
2018-04-01
Fishing locations for R. kanagurta obtained from SEAFDEC were coupled with multi-sensor satellite imagery of oceanographic variables, namely sea surface temperature (SST), sea surface height (SSH) and chl-a concentration (chl-a), to evaluate the performance of maximum entropy (MaxEnt) models for predicting R. kanagurta fishing grounds. In addition, this study was conducted to identify the relative percentage contribution of each environmental variable considered, in order to describe the effects of the oceanographic factors on the species distribution in the study area. The potential fishing grounds during the intermonsoon periods April and October 2008-2009 were simulated separately and covered the near-coast of Kelantan, Terengganu, Pahang and Johor. The oceanographic conditions differed between regions owing to inherent seasonal variability. The seasonal and spatial extents of potential fishing grounds were largely explained by chl-a concentration (0.21-0.99 mg/m3 in April and 0.28-1.00 mg/m3 in October), SSH (77.37-85.90 cm in April and 107.60-108.97 cm in October) and SST (30.43-33.70 °C in April and 30.48-30.97 °C in October). The constructed models were therefore suitable for predicting the potential fishing zones of R. kanagurta in the EEZ. The results from this study revealed MaxEnt's potential for predicting the spatial distribution of R. kanagurta and highlighted the use of multispectral satellite images for describing the seasonal potential fishing grounds.
230Th and 234Th as coupled tracers of particle cycling in the ocean: A maximum likelihood approach
Wang, Wei-Lei; Armstrong, Robert A.; Cochran, J. Kirk; Heilbrun, Christina
2016-05-01
We applied maximum likelihood estimation to measurements of Th isotopes (234,230Th) in Mediterranean Sea sediment traps that separated particles according to settling velocity. This study has two unique aspects. First, it relies on settling velocities that were measured using sediment traps, rather than on measured particle sizes and an assumed relationship between particle size and sinking velocity. Second, because of the labor and expense involved in obtaining these data, they were obtained at only a few depths, and their analysis required constructing a new type of box-like model, which we refer to as a "two-layer" model, which we then analyzed using likelihood techniques. Likelihood techniques were developed in the 1930s by statisticians, and form the computational core of both Bayesian and non-Bayesian statistics. Their use has recently become very popular in ecology, but they are relatively unknown in geochemistry. Our model was formulated by assuming steady state and first-order reaction kinetics for thorium adsorption and desorption, and for particle aggregation, disaggregation, and remineralization. We adopted a cutoff settling velocity (49 m/d) from Armstrong et al. (2009) to separate particles into fast- and slow-sinking classes. A unique set of parameters with no dependence on prior values was obtained. Adsorption rate constants for both slow- and fast-sinking particles are slightly higher in the upper layer than in the lower layer. Slow-sinking particles have higher adsorption rate constants than fast-sinking particles. Desorption rate constants are higher in the lower layer (slow-sinking particles: 13.17 ± 1.61 y-1, fast-sinking particles: 13.96 ± 0.48 y-1) than in the upper layer (slow-sinking particles: 7.87 ± 0.60 y-1, fast-sinking particles: 1.81 ± 0.44 y-1). Aggregation rate constants were higher, 1.88 ± 0.04 y-1, in the upper layer and just 0.07 ± 0.01 y-1 in the lower layer. Disaggregation rate constants were just 0.30 ± 0.10 y-1 in the upper
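The likelihood machinery behind such fits can be illustrated with the simplest first-order-kinetics case: for exponentially distributed waiting times, the maximum likelihood estimate of the rate constant has a closed form (a toy sketch of the principle, not the paper's two-layer thorium model; the rate and sample size are hypothetical):

```python
import math
import random

def mle_rate(waiting_times):
    """Maximum likelihood estimate of a first-order rate constant k from
    exponentially distributed waiting times: the log-likelihood
    sum(log(k) - k * t) is maximized at k_hat = n / sum(t)."""
    return len(waiting_times) / sum(waiting_times)

random.seed(42)
true_k = 2.0                                    # hypothetical rate, 1/y
times = [random.expovariate(true_k) for _ in range(10_000)]
k_hat = mle_rate(times)
```

The same principle, maximizing the joint likelihood of the data over the model parameters, underlies the multi-parameter fits of adsorption, desorption, aggregation and disaggregation rate constants in the abstract.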
Hsia, Wei-Shen
1986-01-01
In the Control Systems Division of the Systems Dynamics Laboratory of the NASA/MSFC, a Ground Facility (GF), in which the dynamics and control system concepts being considered for Large Space Structures (LSS) applications can be verified, was designed and built. One of the important aspects of the GF is to design an analytical model which will be as close to experimental data as possible so that a feasible control law can be generated. Using Hyland's Maximum Entropy/Optimal Projection Approach, a procedure was developed in which the maximum entropy principle is used for stochastic modeling and the optimal projection technique is used for a reduced-order dynamic compensator design for a high-order plant.
Upper limit for Poisson variable incorporating systematic uncertainties by Bayesian approach
International Nuclear Information System (INIS)
Zhu, Yongsheng
2007-01-01
To calculate the upper limit for a Poisson observable at a given confidence level, including systematic uncertainties in the background expectation and signal efficiency, formulations have been established along the lines of the Bayesian approach. A FORTRAN program, BPULE, has been developed to implement the upper limit calculation.
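A minimal version of the calculation, without systematic uncertainties, is the textbook Bayesian upper limit for a Poisson signal with a flat prior and known background (a sketch of the underlying formula, not the BPULE program itself):

```python
import math

def pois_cdf(n, mu):
    """P(X <= n) for X ~ Poisson(mu), summed term by term."""
    term, total = math.exp(-mu), 0.0
    for k in range(n + 1):
        total += term
        term *= mu / (k + 1)
    return total

def bayes_upper_limit(n_obs, bkg, cl=0.95):
    """Credibility-cl upper limit on a Poisson signal mean, assuming a
    flat prior on s >= 0 and a known background bkg (no systematics):
    solve pois_cdf(n, s_up + b) = (1 - cl) * pois_cdf(n, b) for s_up."""
    target = (1.0 - cl) * pois_cdf(n_obs, bkg)
    lo, hi = 0.0, 100.0
    for _ in range(100):               # bisection on the limit s_up
        mid = 0.5 * (lo + hi)
        if pois_cdf(n_obs, mid + bkg) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With zero observed events and no background, this reproduces the familiar 95% limit of about 3.0 signal events; folding in systematic uncertainties, as BPULE does, amounts to averaging this posterior over priors for the background and efficiency.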
Censoring: a new approach for detection limits in total-reflection X-ray fluorescence
International Nuclear Information System (INIS)
Pajek, M.; Kubala-Kukus, A.; Braziewicz, J.
2004-01-01
It is shown that the detection limits in total-reflection X-ray fluorescence (TXRF), which restrict the quantification of very low concentrations of trace elements in samples, can be accounted for using the statistical concept of censoring. We demonstrate that incomplete TXRF measurements containing so-called 'nondetects', i.e. non-measured concentrations falling below the detection limits and represented by the estimated detection limit values, can be viewed as left random-censored data, which can be further analyzed using the Kaplan-Meier (KM) method to correct for nondetects. Within this approach, which uses the Kaplan-Meier product-limit estimator to obtain the cumulative distribution function corrected for the nondetects, the mean value and median of the detection-limit-censored concentrations can be estimated in a non-parametric way. The Monte Carlo simulations performed show that the Kaplan-Meier approach yields highly accurate estimates for the mean and median concentrations, within a few percent of the simulated, uncensored data. This means that the uncertainties of the KM-estimated mean value and median are in fact limited only by the number of studied samples and not by the applied correction procedure for nondetects itself. On the other hand, in cases where the concentration of a given element is not measured in all the samples, simple approaches to estimating a mean concentration value from the data yield erroneous, systematically biased results. The discussed random left-censoring approach was applied to analyze TXRF detection-limit-censored concentration measurements of trace elements in biomedical samples. We emphasize that the Kaplan-Meier approach allows one to estimate mean concentrations substantially below the mean level of the detection limits. Consequently, this approach gives new access to lowering the effective detection limits of the TXRF method, which is of prime interest for
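The Kaplan-Meier correction for nondetects can be sketched via the standard 'flipping' trick: reflect the data so that left-censored values become right-censored, run the product-limit estimator, and reflect the mean back (a generic illustration with hypothetical concentrations, not the authors' analysis code):

```python
def km_left_censored_mean(values, detected):
    """Kaplan-Meier mean for left-censored data via the 'flip' trick:
    reflect x -> M - x so that nondetects (left-censored at their
    detection limits) become right-censored, run the product-limit
    estimator, and reflect the restricted mean back."""
    M = max(values)
    # After flipping, detections are events and nondetects are censored.
    flipped = sorted(zip((M - v for v in values), detected))
    n = len(flipped)
    at_risk, surv = n, 1.0
    mean_flipped, prev_t = 0.0, 0.0
    i = 0
    while i < n:
        t = flipped[i][0]
        mean_flipped += surv * (t - prev_t)   # area under the KM curve
        j, events = i, 0
        while j < n and flipped[j][0] == t:   # group tied times
            events += flipped[j][1]
            j += 1
        if events:
            surv *= 1.0 - events / at_risk
        at_risk -= j - i
        prev_t = t
        i = j
    return M - mean_flipped

# Hypothetical concentrations; False marks a nondetect reported at its
# estimated detection limit.
mean_est = km_left_censored_mean([0.8, 2.0, 3.1, 4.0],
                                 [False, True, True, True])
```

With no censoring the estimator reduces to the ordinary sample mean; with nondetects present it weights each detected value by the estimated survival probability instead of naively substituting the detection limits.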
Benoit, Stéphane; Posey, James E.; Chenoweth, Matthew R.; Gherardini, Frank C.
2001-01-01
In the causative agent of syphilis, Treponema pallidum, the gene encoding 3-phosphoglycerate mutase, gpm, is part of a six-gene operon (tro operon) that is regulated by the Mn-dependent repressor TroR. Since substrate-level phosphorylation via the Embden-Meyerhof pathway is the principal way to generate ATP in T. pallidum and Gpm is a key enzyme in this pathway, Mn could exert a regulatory effect on central metabolism in this bacterium. To study this, T. pallidum gpm was cloned, Gpm was purified from Escherichia coli, and antiserum against the recombinant protein was raised. Immunoblots indicated that Gpm was expressed in freshly extracted infective T. pallidum. Enzyme assays indicated that Gpm did not require Mn2+ while 2,3-diphosphoglycerate (DPG) was required for maximum activity. Consistent with these observations, Mn did not copurify with Gpm. The purified Gpm was stable for more than 4 h at 25°C, retained only 50% activity after incubation for 20 min at 34°C or 10 min at 37°C, and was completely inactive after 10 min at 42°C. The temperature effect was attenuated when 1 mM DPG was added to the assay mixture. The recombinant Gpm from pSLB2 complemented E. coli strain PL225 (gpm) and restored growth on minimal glucose medium in a temperature-dependent manner. Increasing the temperature of cultures of E. coli PL225 harboring pSLB2 from 34 to 42°C resulted in a 7- to 11-h period in which no growth occurred (compared to wild-type E. coli). These data suggest that biochemical properties of Gpm could be one contributing factor to the heat sensitivity of T. pallidum. PMID:11466272
International Nuclear Information System (INIS)
Liekhus, K.J.; Connolly, M.J.
1995-01-01
A test program has been conducted at the Idaho National Engineering Laboratory to demonstrate that the concentration of volatile organic compounds (VOCs) within the innermost layer of confinement in a vented waste drum can be estimated using a model incorporating diffusion and permeation transport principles as well as limited waste drum sampling data. The model consists of a series of material balance equations describing steady-state VOC transport from each distinct void volume in the drum. The primary model input is the measured drum headspace VOC concentration. Model parameters are determined or estimated based on available process knowledge. The model effectiveness in estimating VOC concentration in the headspace of the innermost layer of confinement was examined for vented waste drums containing different waste types and configurations. This paper summarizes the experimental measurements and model predictions in vented transuranic waste drums containing solidified sludges and solid waste
Chandrasekhar limit: an elementary approach based on classical physics and quantum theory
Pinochet, Jorge; Van Sint Jan, Michael
2016-05-01
In a brief article published in 1931, Subrahmanyan Chandrasekhar made public an important astronomical discovery. In his article, the then young Indian astrophysicist introduced what is now known as the Chandrasekhar limit. This limit establishes the maximum mass of a stellar remnant beyond which the repulsion force between electrons due to the exclusion principle can no longer stop the gravitational collapse. In the present article, we develop an elementary approximation to the Chandrasekhar limit, accessible to undergraduate science and engineering students. The article focuses especially on clarifying the origins of Chandrasekhar’s discovery and the underlying physical concepts. Throughout the article, only basic algebra is used, as well as some general notions of classical physics and quantum theory.
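The order of magnitude of the limit follows from dimensional analysis alone, M_Ch ~ (ħc/G)^(3/2)/m_H², which a few lines of arithmetic confirm (the exact coefficient, giving about 1.4 solar masses for typical white-dwarf composition, requires solving the full polytrope equations):

```python
# Order-of-magnitude estimate of the Chandrasekhar mass from
# dimensional analysis: M_Ch ~ (hbar * c / G)**1.5 / m_H**2.
hbar  = 1.055e-34   # J s
c     = 2.998e8     # m/s
G     = 6.674e-11   # m^3 kg^-1 s^-2
m_H   = 1.673e-27   # kg, hydrogen mass
M_sun = 1.989e30    # kg

m_ch = (hbar * c / G) ** 1.5 / m_H ** 2
print(m_ch / M_sun)   # roughly 1.8-1.9 solar masses
```

That the crude estimate lands within a factor of order one of the accepted 1.4 solar masses is exactly the kind of result the article's elementary treatment aims to make accessible.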
Maximum neutron flux in thermal reactors
International Nuclear Information System (INIS)
Strugar, P.V.
1968-12-01
A direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core using the condition of maximum neutron flux while complying with thermal limitations. This paper shows that the problem can be solved by variational calculus, i.e. by using Pontryagin's maximum principle. The mathematical model of the reactor core is based on two-group neutron diffusion theory, with some simplifications that make it amenable to the maximum principle. The optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are the roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples
A novel approach to derive halo-independent limits on dark matter properties
Ferrer, Francesc; Ibarra, Alejandro; Wild, Sebastian
2015-01-01
We propose a method that allows us to place an upper limit on the dark matter elastic scattering cross section with nucleons which is independent of the velocity distribution. Our approach combines null results from direct detection experiments with indirect searches at neutrino telescopes, and goes beyond previous attempts to remove astrophysical uncertainties in that it directly constrains the particle physics properties of the dark matter. The resulting halo-independent upper limits on the sc...
Censoring approach to the detection limits in X-ray fluorescence analysis
International Nuclear Information System (INIS)
Pajek, M.; Kubala-Kukus, A.
2004-01-01
We demonstrate that the effect of detection limits in X-ray fluorescence analysis (XRF), which limits the determination of very low concentrations of trace elements and results in the appearance of so-called 'nondetects', can be accounted for using the statistical concept of censoring. More precisely, the results of such measurements can be viewed as left randomly censored data, which can further be analyzed using the Kaplan-Meier method, correcting the data for the presence of nondetects. Using this approach, the measured, detection-limit-censored concentrations can be interpreted in a nonparametric manner, including the correction for the nondetects, i.e. the measurements in which the concentrations were found to be below the actual detection limits. Moreover, using the Monte Carlo simulation technique we show that with the Kaplan-Meier approach the corrected mean concentrations for a population of samples can be estimated to within a few percent uncertainty with respect to the simulated, uncensored data. In practice this means that the final uncertainties of the estimated mean values are limited by the number of studied samples and not by the correction procedure itself. The discussed random left-censoring approach was applied to analyze the XRF detection-limit-censored concentration measurements of trace elements in biomedical samples
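The trick behind this analysis is easy to sketch: left-censored concentrations x are mapped to M - x, which turns nondetects into right-censored observations that a standard Kaplan-Meier estimator can handle, after which the estimated mean is flipped back. The minimal pure-Python sketch below (function name, default flip point, and naive tie handling are our own choices, not from the paper) estimates the corrected mean.

```python
def km_mean_left_censored(data, M=None):
    """Kaplan-Meier estimate of the mean for left-censored data.
    `data` is a list of (value, detected) pairs, where a nondetect
    carries its detection limit as the value. The flip x -> M - x
    converts left-censoring into right-censoring; the mean is the
    area under the Kaplan-Meier survival curve, flipped back."""
    if M is None:
        M = max(v for v, _ in data)      # flip point: largest value
    flipped = sorted((M - v, det) for v, det in data)
    at_risk = len(flipped)
    s = 1.0                              # survival estimate S(t)
    mean_f = 0.0                         # area under S(t)
    prev_t = 0.0
    for t, event in flipped:
        mean_f += s * (t - prev_t)
        prev_t = t
        if event:                        # a detected (uncensored) value
            s *= (at_risk - 1) / at_risk
        at_risk -= 1
    return M - mean_f
```

For fully detected data the estimator reduces to the sample mean, which is a quick sanity check on the implementation.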
International Nuclear Information System (INIS)
Badea, A.F.; Brancus, I.M.; Rebel, H.; Haungs, A.; Oehlschlaeger, J.; Zazyan, M.
1999-01-01
The average depth of maximum X_m of the EAS (Extensive Air Shower) development depends on the energy E_0 and the mass of the primary particle; its energy dependence is traditionally expressed by the so-called elongation rate D_e, defined as the change in the average depth of maximum per decade of E_0, i.e. D_e = dX_m/dlog10(E_0). Invoking the superposition model approximation, i.e. assuming that a heavy primary (mass number A) has the same shower elongation rate as a proton but with the energy scaled to E_0/A, one can write X_m = X_init + D_e·log10(E_0/A). In 1977 an indirect approach to studying D_e was suggested by Linsley. This approach can be applied to shower parameters which do not depend explicitly on the energy of the primary particle, but do depend on the depth of observation X and on the depth X_m of shower maximum. The distribution of the EAS muon arrival times, measured at a certain observation level relative to the arrival time of the shower core, reflects the path-length distribution of the muon travel from the locus of production (near the axis) to the observation locus. The basic a priori assumption is that the mean value or median T of the time distribution can be associated with the height of the EAS maximum X_m, and that we can express T = f(X, X_m). In order to derive information about the elongation rate from the energy variation of the arrival time quantities, some knowledge is required about the scaling factor F = -(∂T/∂X_m)_X / (∂T/∂X)_{X_m}, in addition to the variations with the depth of observation and the zenith-angle (θ) dependence, respectively. Thus ∂T/∂log10(E_0)|_X = -F·D_e·(1/X_v)·∂T/∂sec θ|_{E_0}. In a similar way the fluctuations σ(X_m) of X_m may be related to the fluctuations σ(T) of T, i.e. σ(T) = -σ(X_m)·F_σ·(1/X_v)·∂T/∂sec θ|_{E_0}, with F_σ being the corresponding scaling factor for the fluctuations. By simulations of the EAS development using the Monte Carlo code CORSIKA the energy and angle
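The superposition-model relation is simple enough to illustrate directly. In the sketch below, X_init and the elongation rate D_e are placeholder values of a plausible magnitude, not fitted results from the paper.

```python
import math

def x_max(e0_gev, mass_number, x_init=450.0, d_e=60.0):
    """Mean depth of shower maximum (g/cm^2) in the superposition
    model: X_m = X_init + D_e * log10(E0 / A). The intercept x_init
    and elongation rate d_e (per decade) are illustrative values."""
    return x_init + d_e * math.log10(e0_gev / mass_number)

# At the same primary energy, an iron nucleus (A = 56) reaches its
# maximum higher in the atmosphere than a proton: X_m is smaller by
# D_e * log10(56), about 105 g/cm^2 with these placeholder numbers.
```

This mass-dependent shift of X_m at fixed energy is exactly what the muon arrival-time observables are meant to probe indirectly.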
New approaches to deriving limits of the release of radioactive material into the environment
International Nuclear Information System (INIS)
Lindell, B.
1977-01-01
During the last few years, new principles have been developed for the limitation of the release of radioactive material into the environment. It is no longer considered appropriate to base the limitation on limits for the concentrations of the various radionuclides in air and water effluents. Such limits would not prevent large amounts of radioactive material from reaching the environment should effluent rates be high. A common practice has been to identify critical radionuclides and critical pathways and to base the limitation on authorized dose limits for local ''critical groups''. If this were the only limitation, however, larger releases could be permitted after installing either higher stacks or equipment to retain the more short-lived radionuclides for decay before release. Continued release at such limits would then lead to considerably higher exposure at a distance than if no such installation had been made. Accordingly there would be no immediate control of overlapping exposures from several sources, nor would the system guarantee control of the future situation. The new principles described in this paper take the future into account by limiting the annual dose commitments rather than the annual doses. They also offer means of controlling the global situation by limiting not only doses in critical groups but also global collective doses. Their objective is not only to ensure that individual dose limits will always be respected but also to meet the requirement that ''all doses be kept as low as reasonably achievable''. The new approach is based on the most recent recommendations by the ICRP and has been described in a report by an IAEA panel (Procedures for establishing limits for the release of radioactive material into the environment). It has been applied in the development of new Swedish release regulations, which illustrate some of the problems which arise in the practical application
Bollerslev, Anne Mette; Nauta, Maarten; Hansen, Tina Beck; Aabo, Søren
2017-01-02
Microbiological limits are widely used in food processing as an aid to reduce the exposure to hazardous microorganisms for the consumers. However, in pork, the prevalence and concentrations of Salmonella are generally low and microbiological limits are not considered an efficient tool to support hygiene interventions. The objective of the present study was to develop an approach which could make it possible to define potential risk-based microbiological limits for an indicator, enterococci, in order to evaluate the risk from potential growth of Salmonella. A positive correlation between the concentration of enterococci and the prevalence and concentration of Salmonella was shown for 6640 pork samples taken at Danish cutting plants and retail butchers. The samples were collected in five different studies in 2001, 2002, 2010, 2011 and 2013. The observations that both Salmonella and enterococci are carried in the intestinal tract, contaminate pork by the same mechanisms and share similar growth characteristics (lag phase and maximum specific growth rate) at temperatures around 5-10°C, suggest a potential of enterococci to be used as an indicator of potential growth of Salmonella in pork. Elevated temperatures during processing will lead to growth of both enterococci and, if present, also Salmonella. By combining the correlation between enterococci and Salmonella with risk modelling, it is possible to predict the risk of salmonellosis based on the level of enterococci. The risk model used for this purpose includes the dose-response relationship for Salmonella and a reduction factor to account for preparation of the fresh pork. By use of the risk model, it was estimated that the majority of salmonellosis cases, caused by the consumption of pork in Denmark, is caused by the small fraction of pork products that has enterococci concentrations above 5logCFU/g. This illustrates that our approach can be used to evaluate the potential effect of different microbiological
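The chain from indicator level to predicted risk can be sketched under loudly hypothetical assumptions: a log-linear correlation from the enterococci level to the expected Salmonella dose (coefficients alpha and beta are invented for illustration), a reduction factor for preparation of the fresh pork, and a generic exponential dose-response model. This is a sketch of the approach's structure, not the published risk model or its parameters.

```python
import math

def p_illness(log_enterococci, alpha=-9.0, beta=1.0,
              r=2.45e-3, reduction=3.0):
    """Illustrative risk chain: infer log10 Salmonella dose from the
    enterococci level via a hypothetical log-linear correlation
    (alpha, beta), subtract `reduction` log10 units for preparation,
    then apply an exponential dose-response P = 1 - exp(-r * dose)."""
    log_dose = alpha + beta * log_enterococci - reduction
    dose = 10.0 ** log_dose
    return 1.0 - math.exp(-r * dose)

# With these placeholder coefficients, each extra log CFU/g of
# enterococci raises the expected dose tenfold, so the predicted
# risk climbs steeply above ~5 log CFU/g.
```

The structural point matches the abstract: risk is concentrated in the small fraction of samples with high indicator concentrations.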
Relevance of plastic limit loads to reference stress approach for surface cracked cylinder problems
International Nuclear Information System (INIS)
Kim, Yun-Jae; Shim, Do-Jun
2005-01-01
To investigate the relevance of the definition of the reference stress to estimate J and C* for surface crack problems, this paper compares finite element (FE) J and C* results for surface cracked pipes with those estimated according to the reference stress approach using various definitions of the reference stress. Pipes with part circumferential inner surface cracks and finite internal axial cracks are considered, subject to internal pressure and global bending. The crack depth and aspect ratio are systematically varied. The reference stress is defined in four different ways using (i) a local limit load, (ii) a global limit load, (iii) a global limit load determined from the FE limit analysis, and (iv) the optimised reference load. It is found that the reference stress based on a local limit load gives overall excessively conservative estimates of J and C*. Use of a global limit load clearly reduces the conservatism, compared to that of a local limit load, although it can sometimes provide non-conservative estimates of J and C*. The use of the FE global limit load gives overall non-conservative estimates of J and C*. The reference stress based on the optimised reference load gives overall accurate estimates of J and C*, compared to other definitions of the reference stress. Based on the present findings, general guidance on the choice of the reference stress for surface crack problems is given
What controls the maximum magnitude of injection-induced earthquakes?
Eaton, D. W. S.
2017-12-01
Three different approaches for estimation of maximum magnitude are considered here, along with their implications for managing risk. The first approach is based on a deterministic limit for seismic moment proposed by McGarr (1976), which was originally designed for application to mining-induced seismicity. This approach has since been reformulated for earthquakes induced by fluid injection (McGarr, 2014). In essence, this method assumes that the upper limit for seismic moment release is constrained by the pressure-induced stress change. A deterministic limit is given by the product of shear modulus and the net injected fluid volume. This method is based on the assumptions that the medium is fully saturated and in a state of incipient failure. An alternative geometrical approach was proposed by Shapiro et al. (2011), who postulated that the rupture area for an induced earthquake falls entirely within the stimulated volume. This assumption reduces the maximum-magnitude problem to one of estimating the largest potential slip surface area within a given stimulated volume. Finally, van der Elst et al. (2016) proposed that the maximum observed magnitude, statistically speaking, is the expected maximum value for a finite sample drawn from an unbounded Gutenberg-Richter distribution. These three models imply different approaches for risk management. The deterministic method proposed by McGarr (2014) implies that a ceiling on the maximum magnitude can be imposed by limiting the net injected volume, whereas the approach developed by Shapiro et al. (2011) implies that the time-dependent maximum magnitude is governed by the spatial size of the microseismic event cloud. Finally, the sample-size hypothesis of Van der Elst et al. (2016) implies that the best available estimate of the maximum magnitude is based upon observed seismicity rate. The latter two approaches suggest that real-time monitoring is essential for effective management of risk. A reliable estimate of maximum
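McGarr's deterministic bound is simple enough to compute directly. The sketch below assumes a typical crustal shear modulus and uses the standard moment-to-magnitude conversion; both are conventional values, not quantities taken from this abstract.

```python
import math

def mcgarr_max_magnitude(injected_volume_m3, shear_modulus_pa=3.0e10):
    """Deterministic upper bound of McGarr (2014): the maximum seismic
    moment is M0 = G * dV (shear modulus times net injected fluid
    volume), converted to moment magnitude via
    Mw = (2/3) * (log10(M0) - 9.1). Default G is a typical crustal
    shear modulus of 30 GPa."""
    m0 = shear_modulus_pa * injected_volume_m3   # seismic moment, N m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# e.g. a net injection of 1e4 m^3 caps Mw at about 3.6
```

This makes the risk-management implication concrete: under this model, halving the net injected volume lowers the magnitude ceiling by a fixed increment of (2/3)·log10(2) ≈ 0.2.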
Ülker, Erkan; Turanboy, Alparslan
2009-07-01
The block stone industry is one of the main commercial use of rock. The economic potential of any block quarry depends on the recovery rate, which is defined as the total volume of useful rough blocks extractable from a fixed rock volume in relation to the total volume of moved material. The natural fracture system, the rock type(s) and the extraction method used directly influence the recovery rate. The major aims of this study are to establish a theoretical framework for optimising the extraction process in marble quarries for a given fracture system, and for predicting the recovery rate of the excavated blocks. We have developed a new approach by taking into consideration only the fracture structure for maximum block recovery in block quarries. The complete model uses a linear approach based on basic geometric features of discontinuities for 3D models, a tree structure (TS) for individual investigation and finally a genetic algorithm (GA) for the obtained cuboid volume(s). We tested our new model in a selected marble quarry in the town of İscehisar (AFYONKARAHİSAR—TURKEY).
International Nuclear Information System (INIS)
Ofenheimer, Aldo; Buchmayr, Bruno; Kolleck, Ralf; Merklein, Marion
2005-01-01
The influence of strain paths (loading history) on material formability is well known in sheet forming processes. Sophisticated experimental methods are used to determine the entire shape of strain paths of forming limits for aluminum AA6016-T4 alloy. Forming limits for sheet metal in as-received condition as well as for different pre-deformation are presented. A theoretical approach based on Arrieux's intrinsic Forming Limit Stress Curve (FLSC) concept is employed to numerically predict the influence of loading history on forming severity. The detailed experimental strain paths are used in the theoretical study instead of any linear or bilinear simplified loading histories to demonstrate the predictive quality of forming limits in the state of stress
Graillon, T; Fuentes, S; Metellus, P; Adetchessi, T; Gras, R; Dufour, H
2014-01-01
Advances in transsphenoidal surgery and endoscopic techniques have opened new perspectives for cavernous sinus (CS) approaches. The aim of this study was to assess the advantages and disadvantages of the limited endoscopic transsphenoidal approach, as performed in pituitary adenoma surgery, for CS tumor biopsy, illustrated with three clinical cases. The first case was a 46-year-old woman with a prior medical history of parotid adenocarcinoma successfully treated 10 years previously. The cavernous sinus tumor was revealed by right third and sixth nerve palsy and had increased over the past three years. A tumor biopsy using a limited endoscopic transsphenoidal approach revealed an adenocarcinoma metastasis. Complementary radiosurgery was performed. The second case was a 36-year-old woman who consulted for diplopia with right sixth nerve palsy and amenorrhea with hyperprolactinemia. Dopamine agonist treatment was used to restore the patient's menstrual cycle. Cerebral magnetic resonance imaging (MRI) revealed a right-sided CS tumor. CS biopsy, via a limited endoscopic transsphenoidal approach, confirmed a meningothelial grade 1 meningioma. Complementary radiosurgery was performed. The third case was a 63-year-old woman with progressive installation of left third nerve palsy and visual acuity loss, revealing a left cavernous sinus tumor invading the optic canal. Surgical biopsy was performed using an enlarged endoscopic transsphenoidal approach to decompress the optic nerve. Biopsy results revealed a meningothelial grade 1 meningioma. Complementary radiotherapy was performed. In these three cases, no complications were observed. Mean hospitalization duration was 4 days. Reported anatomical studies and clinical series have shown the feasibility of reaching the cavernous sinus using an endoscopic endonasal approach. Trans-foramen ovale CS percutaneous biopsy is an interesting procedure but only provides cell analysis results, and not tissue analysis. However, radiotherapy and
Problem of data quality and the limitations of the infrastructure approach
Behlen, Fred M.; Sayre, Richard E.; Rackus, Edward; Ye, Dingzhong
1998-07-01
The 'Infrastructure Approach' is a PACS implementation methodology wherein the archive, network and information systems interfaces are acquired first, and workstations are installed later. The approach allows building a history of archived image data, so that most prior examinations are available in digital form when workstations are deployed. A limitation of the Infrastructure Approach is that the deferred use of digital image data defeats many data quality management functions that are provided automatically by human mechanisms when data is immediately used for the completion of clinical tasks. If the digital data is used solely for archiving while reports are interpreted from film, the radiologist serves only as a check against lost films, and another person must be designated as responsible for the quality of the digital data. Data from the Radiology Information System and the PACS were analyzed to assess the nature and frequency of system and data quality errors. The error level was found to be acceptable if supported by auditing and error resolution procedures requiring additional staff time, and in any case was better than the loss rate of a hardcopy film archive. It is concluded that the problem of data quality compromises but does not negate the value of the Infrastructure Approach. The Infrastructure Approach should best be employed only to a limited extent, and that any phased PACS implementation should have a substantial complement of workstations dedicated to softcopy interpretation for at least some applications, and with full deployment following not long thereafter.
Stochastic approach to the derivation of emission limits for wastewater treatment plants.
Stransky, D; Kabelkova, I; Bares, V
2009-01-01
Stochastic approach to the derivation of WWTP emission limits meeting probabilistically defined environmental quality standards (EQS) is presented. The stochastic model is based on the mixing equation with input data defined by probability density distributions and solved by Monte Carlo simulations. The approach was tested on a study catchment for total phosphorus (P(tot)). The model assumes input variables independency which was proved for the dry-weather situation. Discharges and P(tot) concentrations both in the study creek and WWTP effluent follow log-normal probability distribution. Variation coefficients of P(tot) concentrations differ considerably along the stream (c(v)=0.415-0.884). The selected value of the variation coefficient (c(v)=0.420) affects the derived mean value (C(mean)=0.13 mg/l) of the P(tot) EQS (C(90)=0.2 mg/l). Even after supposed improvement of water quality upstream of the WWTP to the level of the P(tot) EQS, the WWTP emission limits calculated would be lower than the values of the best available technology (BAT). Thus, minimum dilution ratios for the meaningful application of the combined approach to the derivation of P(tot) emission limits for Czech streams are discussed.
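The Monte Carlo solution of the mixing equation can be sketched as follows. The discharge and concentration statistics below are hypothetical stand-ins of roughly the abstract's magnitude (upstream mean 0.13 mg/l, c_v = 0.42), not the study catchment's actual data, and the flow statistics are invented for illustration.

```python
import math
import random

random.seed(42)

def lognormal(mean, cv):
    """Sample a log-normal variate with a given arithmetic mean and
    coefficient of variation (cv = std/mean)."""
    sigma2 = math.log(1.0 + cv * cv)
    mu = math.log(mean) - 0.5 * sigma2
    return random.lognormvariate(mu, math.sqrt(sigma2))

def downstream_p90(c_wwtp, n=20000):
    """Monte Carlo solution of the mixing equation
    C_down = (Q_creek*C_creek + Q_eff*C_wwtp) / (Q_creek + Q_eff)
    for total phosphorus; returns the simulated 90th percentile
    downstream of the WWTP outfall. All input statistics are
    hypothetical placeholders."""
    sims = []
    for _ in range(n):
        q_c = lognormal(100.0, 0.5)    # creek discharge, l/s
        c_c = lognormal(0.13, 0.42)    # upstream P_tot, mg/l
        q_e = lognormal(20.0, 0.3)     # effluent discharge, l/s
        sims.append((q_c * c_c + q_e * c_wwtp) / (q_c + q_e))
    sims.sort()
    return sims[int(0.9 * n)]

# An emission limit is then the largest effluent concentration whose
# simulated C90 still meets a probabilistically defined EQS
# (e.g. C90 = 0.2 mg/l), found by scanning c_wwtp.
```

The independence of the inputs assumed here mirrors the dry-weather independency the authors verified before running the model.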
Approach to the thermodynamic limit in lattice QCD at μ≠0
International Nuclear Information System (INIS)
Splittorff, K.; Verbaarschot, J. J. M.
2008-01-01
The expectation value of the complex phase factor of the fermion determinant is computed to leading order in the p-expansion of the chiral Lagrangian. The computation is valid for μ < m_π/2 and determines the dependence of the sign problem on the volume and on the geometric shape of the volume. In the thermodynamic limit with L_i → ∞ at fixed temperature 1/L_0, the average phase factor vanishes. In the low temperature limit, where L_i/L_0 is fixed as L_i becomes large, the average phase factor approaches 1 for μ < m_π/2. The results for a finite volume compare well with lattice results obtained by Allton et al. After taking appropriate limits, we reproduce previously derived results for the ε regime and for one-dimensional QCD. The distribution of the phase itself is also computed
Balzer, Laura B; Zheng, Wenjing; van der Laan, Mark J; Petersen, Maya L
2018-01-01
We often seek to estimate the impact of an exposure naturally occurring or randomly assigned at the cluster-level. For example, the literature on neighborhood determinants of health continues to grow. Likewise, community randomized trials are applied to learn about real-world implementation, sustainability, and population effects of interventions with proven individual-level efficacy. In these settings, individual-level outcomes are correlated due to shared cluster-level factors, including the exposure, as well as social or biological interactions between individuals. To flexibly and efficiently estimate the effect of a cluster-level exposure, we present two targeted maximum likelihood estimators (TMLEs). The first TMLE is developed under a non-parametric causal model, which allows for arbitrary interactions between individuals within a cluster. These interactions include direct transmission of the outcome (i.e. contagion) and influence of one individual's covariates on another's outcome (i.e. covariate interference). The second TMLE is developed under a causal sub-model assuming the cluster-level and individual-specific covariates are sufficient to control for confounding. Simulations compare the alternative estimators and illustrate the potential gains from pairing individual-level risk factors and outcomes during estimation, while avoiding unwarranted assumptions. Our results suggest that estimation under the sub-model can result in bias and misleading inference in an observational setting. Incorporating working assumptions during estimation is more robust than assuming they hold in the underlying causal model. We illustrate our approach with an application to HIV prevention and treatment.
Mavrodiev, Evgeny V; Laktionov, Alexy P; Cellinese, Nico
2012-01-01
The evolution of the diverse flora in the Lower Volga Valley (LVV) (southwest Russia) is complex due to the composite geomorphology and tectonic history of the Caspian Sea and adjacent areas. In the absence of phylogenetic studies and temporal information, we implemented a maximum likelihood (ML) approach and stochastic character mapping reconstruction aiming at recovering historical signals from species occurrence data. A taxon-area matrix of 13 floristic areas and 1018 extant species was constructed and analyzed with RAxML and Mesquite. Additionally, we simulated scenarios with numbers of hypothetical extinct taxa from an unknown palaeoflora that occupied the areas before the dramatic transgression and regression events that have occurred from the Pleistocene to the present day. The flora occurring strictly along the river valley and delta appear to be younger than that of adjacent steppes and desert-like regions, regardless of the chronology of transgression and regression events that led to the geomorphological formation of the LVV. This result is also supported when hypothetical extinct taxa are included in the analyses. The history of each species was inferred by using a stochastic character mapping reconstruction method as implemented in Mesquite. Individual histories appear to be independent from one another and have been shaped by repeated dispersal and extinction events. These reconstructions provide testable hypotheses for more in-depth investigations of their population structure and dynamics. PMID:22957179
Shen, Fuhui; Lian, Junhe; Münstermann, Sebastian
2018-05-01
Experimental and numerical investigations on the forming limit diagram (FLD) of a ferritic stainless steel were performed in this study. The FLD of this material was obtained by Nakajima tests. Both the Marciniak-Kuczynski (MK) model and the modified maximum force criterion (MMFC) were used for the theoretical prediction of the FLD. From the results of uniaxial tensile tests along different loading directions with respect to the rolling direction, strong anisotropic plastic behaviour was observed in the investigated steel. A recently proposed anisotropic evolving non-associated Hill48 (enHill48) plasticity model, which was developed from the conventional Hill48 model based on the non-associated flow rule with evolving anisotropic parameters, was adopted to describe the anisotropic hardening behaviour of the investigated material. In the previous study, the model was coupled with the MMFC for FLD prediction. In the current study, the enHill48 was further coupled with the MK model. By comparing the predicted forming limit curves with the experimental results, the influences of anisotropy in terms of flow rule and evolving features on the forming limit prediction were revealed and analysed. In addition, the forming limit predictive performances of the MK and the MMFC models in conjunction with the enHill48 plasticity model were compared and evaluated.
Novel approach to epicardial pacemaker implantation in patients with limited venous access.
Costa, Roberto; Scanavacca, Mauricio; da Silva, Kátia Regina; Martinelli Filho, Martino; Carrillo, Roger
2013-11-01
Limited venous access in certain patients increases the procedural risk and complexity of conventional transvenous pacemaker implantation. The purpose of this study was to determine a minimally invasive epicardial approach using pericardial reflections for dual-chamber pacemaker implantation in patients with limited venous access. Between June 2006 and November 2011, 15 patients underwent epicardial pacemaker implantation. Procedures were performed through a minimally invasive subxiphoid approach and pericardial window with subsequent fluoroscopy-assisted lead placement. Mean patient age was 46.4 ± 15.3 years (9 male [(60.0%], 6 female [40.0%]). The new surgical approach was used in patients determined to have limited venous access due to multiple abandoned leads in 5 (33.3%), venous occlusion in 3 (20.0%), intravascular retention of lead fragments from prior extraction in 3 (20.0%), tricuspid valve vegetation currently under treatment in 2 (13.3%), and unrepaired intracardiac defects in 2 (13.3%). All procedures were successful with no perioperative complications or early deaths. Mean operating time for isolated pacemaker implantation was 231.7 ± 33.5 minutes. Lead placement on the superior aspect of right atrium, through the transverse sinus, was possible in 12 patients. In the remaining 3 patients, the atrial lead was implanted on the left atrium through the oblique sinus, the postcaval recess, or the left pulmonary vein recess. None of the patients displayed pacing or sensing dysfunction, and all parameters remained stable throughout the follow-up period of 36.8 ± 25.1 months. Epicardial pacemaker implantation through pericardial reflections is an effective alternative therapy for those patients requiring physiologic pacing in whom venous access is limited. © 2013 Heart Rhythm Society. All rights reserved.
An Adaptive Approach to Mitigate Background Covariance Limitations in the Ensemble Kalman Filter
Song, Hajoon
2010-07-01
A new approach is proposed to address the background covariance limitations arising from undersampled ensembles and unaccounted model errors in the ensemble Kalman filter (EnKF). The method enhances the representativeness of the EnKF ensemble by augmenting it with new members chosen adaptively to add missing information that prevents the EnKF from fully fitting the data to the ensemble. The vectors to be added are obtained by back projecting the residuals of the observation misfits from the EnKF analysis step onto the state space. The back projection is done using an optimal interpolation (OI) scheme based on an estimated covariance of the subspace missing from the ensemble. In the experiments reported here, the OI uses a preselected stationary background covariance matrix, as in the hybrid EnKF–three-dimensional variational data assimilation (3DVAR) approach, but the resulting correction is included as a new ensemble member instead of being added to all existing ensemble members. The adaptive approach is tested with the Lorenz-96 model. The hybrid EnKF–3DVAR is used as a benchmark to evaluate the performance of the adaptive approach. Assimilation experiments suggest that the new adaptive scheme significantly improves the EnKF behavior when it suffers from small size ensembles and neglected model errors. It was further found to be competitive with the hybrid EnKF–3DVAR approach, depending on ensemble size and data coverage.
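The back-projection step can be illustrated for a single scalar observation, where the optimal-interpolation gain reduces to a vector and no matrix inversion is needed. The static covariance below plays the role of the preselected stationary B matrix of the hybrid EnKF-3DVAR comparison; all numbers, and the function name, are illustrative.

```python
def adaptive_member(ensemble, h_index, y_obs, r_var, b_static):
    """Sketch of the adaptive EnKF augmentation: the residual
    observation misfit left after the analysis is back-projected onto
    the state with an optimal-interpolation gain built from a
    preselected static covariance B, and the resulting correction is
    appended as a NEW ensemble member (rather than added to every
    existing member, as hybrid EnKF-3DVAR would do).

    ensemble : list of state vectors (lists of floats)
    h_index  : observed state component (scalar observation H x = x[h])
    y_obs    : observed value; r_var : observation error variance
    b_static : static covariance matrix as nested lists
    """
    n = len(ensemble[0])
    mean = [sum(m[i] for m in ensemble) / len(ensemble) for i in range(n)]
    residual = y_obs - mean[h_index]      # misfit the ensemble cannot fit
    hbht = b_static[h_index][h_index]     # scalar H B H^T
    gain = [b_static[i][h_index] / (hbht + r_var) for i in range(n)]
    new_member = [mean[i] + gain[i] * residual for i in range(n)]
    return ensemble + [new_member]
```

A degenerate two-member ensemble (zero spread in the observed component) shows the point: the EnKF alone could not move toward the observation, while the appended member restores the missing direction.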
An ICMP-Based Mobility Management Approach Suitable for Protocol Deployment Limitation
Directory of Open Access Journals (Sweden)
Jeng-Yueng Chen
2009-01-01
Full Text Available Mobility management is one of the important tasks in wireless networks. Many approaches have been proposed in the past, but none of them has been widely deployed so far. Mobile IP (MIP) and Route Optimization (ROMIP) suffer, respectively, from the triangular routing problem and from the need for binding-cache support on every node of the entire Internet. One step toward a solution is the Mobile Routing Table (MRT), which enables edge routers to take over address binding. However, this approach demands that all edge routers on the Internet support the MRT, resulting in protocol deployment difficulties. To address this problem and to offset the limitation of the original MRT approach, we propose two different schemes: an ICMP echo scheme and an ICMP destination-unreachable scheme. These two schemes work with the MRT to efficiently find MRT-enabled routers, which greatly reduces the number of triangular routes. In this paper, we analyze and compare the standard MIP and the proposed approaches. Simulation results have shown that the proposed approaches reduce transmission delay, with only a few routers supporting the MRT.
Weisenberg, J.; Pico, T.; Birch, L.; Mitrovica, J. X.
2017-12-01
The history of the Laurentide Ice Sheet since the Last Glacial Maximum (~26 ka; LGM) is constrained by geological evidence of ice margin retreat in addition to relative sea-level (RSL) records in both the near and far field. Nonetheless, few observations exist constraining the ice sheet's extent across the glacial build-up phase preceding the LGM. Recent work correcting RSL records along the U.S. mid-Atlantic dated to mid-MIS 3 (50-35 ka) for glacial-isostatic adjustment (GIA) infers that the Laurentide Ice Sheet grew by more than three-fold in the 15 ky leading into the LGM. Here we test the plausibility of a late and extremely rapid glaciation by driving a high-resolution ice sheet model, based on a nonlinear diffusion equation for the ice thickness. We initialize this model at 44 ka with the mid-MIS 3 ice sheet configuration proposed by Pico et al. (2017), GIA-corrected basal topography, and mass balance representative of mid-MIS 3 conditions. These simulations predict rapid growth of the eastern Laurentide Ice Sheet, with rates consistent with achieving LGM ice volumes within 15 ky. We use these simulations to refine the initial ice configuration and present an improved and higher resolution model for North American ice cover during mid-MIS 3. In addition we show that assumptions of ice loads during the glacial phase, and the associated reconstructions of GIA-corrected basal topography, produce a bias that can underpredict ice growth rates in the late stages of the glaciation, which has important consequences for our understanding of the speed limit for ice growth on glacial timescales.
Directory of Open Access Journals (Sweden)
M.T. Khorsi; Y. Amidi
2008-05-01
Full Text Available Cartilaginous tumors comprise 1% of all laryngeal masses. Since they grow slowly and metastasis is rare, long-term survival is expected in cases of chondroma and chondrosarcoma. Given these facts, and because total salvage surgery after recurrence of a previous tumor does not influence treatment outcomes, quality of life must be taken into great consideration. Based on 3 cases of limited chondrosarcoma that we have successfully operated on using submucosal delivery through a parapharyngeal approach, with several years of recurrence-free follow-up, the authors consider this technique an efficient approach to these tumors. Since this technique takes less time, requires no glottic incision, and allows the patient to be discharged in 2 days without insertion of an endolaryngeal stent, we believe this method is superior to laryngofissure or total laryngectomy.
An Optimization-Based Impedance Approach for Robot Force Regulation with Prescribed Force Limits
Directory of Open Access Journals (Sweden)
R. de J. Portillo-Vélez
2015-01-01
Full Text Available An optimization-based approach for the regulation of excessive or insufficient forces at the end-effector level is introduced. The objective is to minimize the interaction force error at the robot end effector while constraining undesired interaction forces. To that end, a dynamic optimization problem (DOP) is formulated considering a dynamic robot impedance model. Penalty functions are considered in the DOP to handle the constraints on the interaction force. The optimization problem is solved online through the gradient flow approach. Convergence properties are presented, and stability is established when the force limits are considered in the analysis. The effectiveness of our proposal is validated via experimental results for a robotic grasping task.
Booth, Michael; Okely, Anthony
2005-04-01
Paediatric overweight and obesity is recognised as one of Australia's most significant health problems and effective approaches to increasing physical activity and reducing energy consumption are being sought urgently. Every potential approach and setting should be subjected to critical review in an attempt to maximise the impact of policy and program initiatives. This paper identifies the strengths and limitations of schools as a setting for promoting physical activity. The strengths are: most children and adolescents attend school; most young people are likely to see teachers as credible sources of information; schools provide access to the facilities, infrastructure and support required for physical activity; and schools are the workplace of skilled educators. Potential limitations are: those students who like school the least are the most likely to engage in health-compromising behaviours and the least likely to be influenced by school-based programs; there are about 20 more hours per week available for physical activity outside school hours than during school hours; enormous demands are already being made on schools; many primary school teachers have low levels of perceived competence in teaching physical education and fundamental movement skills; and opportunities for being active at school may not be consistent with how and when students prefer to be active.
Fuentes-Pardo, Angela P; Ruzzante, Daniel E
2017-10-01
Whole-genome resequencing (WGR) is a powerful method for addressing fundamental evolutionary biology questions that have not been fully resolved using traditional methods. WGR includes four approaches: the sequencing of individuals to a high depth of coverage with either unresolved or resolved haplotypes, the sequencing of population genomes to a high depth by mixing equimolar amounts of unlabelled-individual DNA (Pool-seq) and the sequencing of multiple individuals from a population to a low depth (lcWGR). These techniques require the availability of a reference genome. This, along with the still high cost of shotgun sequencing and the large demand for computing resources and storage, has limited their implementation in nonmodel species with scarce genomic resources and in fields such as conservation biology. Our goal here is to describe the various WGR methods, their pros and cons and potential applications in conservation biology. WGR offers an unprecedented marker density and surveys a wide diversity of genetic variations not limited to single nucleotide polymorphisms (e.g., structural variants and mutations in regulatory elements), increasing their power for the detection of signatures of selection and local adaptation as well as for the identification of the genetic basis of phenotypic traits and diseases. Currently, though, no single WGR approach fulfils all requirements of conservation genetics, and each method has its own limitations and sources of potential bias. We discuss proposed ways to minimize such biases. We envision a not distant future where the analysis of whole genomes becomes a routine task in many nonmodel species and fields including conservation biology. © 2017 John Wiley & Sons Ltd.
Rutkowska, Agnieszka; Kohnová, Silvia; Banasik, Kazimierz
2018-04-01
Probabilistic properties of dates of winter, summer and annual maximum flows were studied using circular statistics in three catchments differing in topographic conditions: a lowland, a highland and a mountainous catchment. The circular measures of location and dispersion were used in the long-term samples of dates of maxima. A mixture of von Mises distributions was assumed as the theoretical distribution function of the date of winter, summer and annual maximum flow. The number of components was selected on the basis of the corrected Akaike Information Criterion and the parameters were estimated by means of the Maximum Likelihood method. The goodness of fit was assessed using both the correlation between quantiles and versions of Kuiper's and Watson's tests. Results show that the number of components varied between catchments and was different for seasonal and annual maxima. Differences between catchments in circular characteristics were explained using climatic factors such as precipitation and temperature. Further studies may include grouping catchments based on similarity between circular distribution functions and the linkage between dates of maximum precipitation and maximum flow.
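The circular measures of location and dispersion used above can be sketched compactly. This is a minimal illustration with made-up dates (fitting the von Mises mixture itself requires an optimizer and is omitted); the 365-day year and the sample values are assumptions.

```python
import numpy as np

def circular_stats(day_of_year, period=365.0):
    """Circular mean date and dispersion of dates of maximum flow.

    Dates are mapped to angles on the unit circle; the mean resultant
    length r measures concentration (r near 1: maxima cluster in one
    season; r near 0: maxima spread over the whole year).
    """
    theta = 2.0 * np.pi * np.asarray(day_of_year, dtype=float) / period
    C, S = np.cos(theta).mean(), np.sin(theta).mean()
    r = np.hypot(C, S)                        # mean resultant length
    mean_angle = np.arctan2(S, C) % (2.0 * np.pi)
    mean_day = period * mean_angle / (2.0 * np.pi)
    circ_var = 1.0 - r                        # circular variance
    return mean_day, r, circ_var

# Spring-flood-like sample: maxima clustered around day 90.
days = [82, 88, 90, 91, 95, 99, 85, 93]
mean_day, r, circ_var = circular_stats(days)
print(round(mean_day, 1), round(r, 3))
```

Unlike an arithmetic mean of day numbers, this treatment handles samples that straddle the turn of the year (e.g. late-December and early-January maxima) correctly.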
National Research Council Canada - National Science Library
1978-01-01
Concomitant with the general desire to protect freshwater fisheries has been an expansion of research into their water quality requirements and, for important contaminants, the maximum concentrations...
Serious limitations of the QTL/Microarray approach for QTL gene discovery
Directory of Open Access Journals (Sweden)
Warden Craig H
2010-07-01
Full Text Available Abstract Background It has been proposed that the use of gene expression microarrays in nonrecombinant parental or congenic strains can accelerate the process of isolating individual genes underlying quantitative trait loci (QTL). However, the effectiveness of this approach has not been assessed. Results Thirty-seven studies that have implemented the QTL/microarray approach in rodents were reviewed. About 30% of studies showed enrichment for QTL candidates, mostly in comparisons between congenic and background strains. Three studies led to the identification of an underlying QTL gene. To complement the literature results, a microarray experiment was performed using three mouse congenic strains isolating the effects of at least 25 biometric QTL. Results show that genes in the congenic donor regions were preferentially selected. However, within donor regions, the distribution of differentially expressed genes was homogeneous once gene density was accounted for. Genes within identical-by-descent (IBD) regions were less likely to be differentially expressed in chromosome 2, but not in chromosomes 11 and 17. Furthermore, QTL regulated in cis (cis eQTL) showed higher expression in the background genotype, which was partially explained by the presence of single nucleotide polymorphisms (SNPs). Conclusions The literature shows limited successes from the QTL/microarray approach to identify QTL genes. Our own results from microarray profiling of three congenic strains revealed a strong tendency to select cis-eQTL over trans-eQTL. IBD regions had little effect on the rate of differential expression, and we provide several reasons why IBD should not be used to discard eQTL candidates. In addition, mismatch probes produced false cis-eQTL that could not be completely removed with the current strain genotypes and low-probe-density microarrays. The reviewed studies did not account for lack of coverage from the platforms used and therefore removed genes
Automatic maximum entropy spectral reconstruction in NMR
International Nuclear Information System (INIS)
Mobli, Mehdi; Maciejewski, Mark W.; Gryk, Michael R.; Hoch, Jeffrey C.
2007-01-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system
Directory of Open Access Journals (Sweden)
Isabela M. Drăghici
2016-05-01
Full Text Available Purpose: Laparoscopic management analysis of a rare condition with a potentially severe evolution, seen in pediatric surgical pathology. Aims: Outlining the optimal surgical approach to a hepatic hydatid double cyst and the laparoscopic method's limitations. Materials and Methods: The patient is a 6-year-old girl who presented with two simultaneous giant hepatic hydatid cysts (segments VII-VIII) in close vicinity to the right branch of the portal vein and to the hepatic veins; she benefited from a single-stage partial pericystectomy (Lagrot technique) performed by laparoscopy. Results: The procedure had no intraoperative accidents or incidents. The patient had a good postoperative evolution without immediate or late complications. Trocar positioning was adapted to the patient's size and cyst topography. Conclusions: The laparoscopic treatment is feasible and safe, but is not yet the gold standard for hepatic hydatid disease due to certain inconveniences.
Abellán-Nebot, J. V.; Liu, J.; Romero, F.
2009-11-01
The State Space modelling approach has been recently proposed as an engineering-driven technique for part quality prediction in Multistage Machining Processes (MMP). Current State Space models incorporate fixture and datum variations in the multi-stage variation propagation, without explicitly considering common operation variations such as machine-tool thermal distortions, cutting-tool wear, cutting-tool deflections, etc. This paper shows the limitations of the current State Space model through an experimental case study where the effect of the spindle thermal expansion, cutting-tool flank wear and locator errors are introduced. The paper also discusses the extension of the current State Space model to include operation variations and its potential benefits.
Poley, Rachel A; Newbigging, Joseph L; Sivilotti, Marco L A
2014-09-01
Deep vein thrombosis (DVT) is both common and serious, yet the desire to never miss the diagnosis, coupled with the low specificity of D-dimer testing, results in high imaging rates, return visits, and empirical anticoagulation. The objective of this study was to evaluate a new approach incorporating bedside limited-compression ultrasound (LC US) by emergency physicians (EPs) into the workup strategy for DVT. This was a cross-sectional observational study of emergency department (ED) patients with suspected DVT. Patients on anticoagulants; those with chronic DVT, leg cast, or amputation; or when the results of comprehensive imaging were already known were excluded. All patients were treated in the usual fashion based on the protocol in use at the center, including comprehensive imaging based on the modified Wells score and serum D-dimer testing. Seventeen physicians were trained and performed LC US in all subjects. The authors identified a priori an alternate workup strategy in which DVT would be ruled out in "DVT unlikely" (Wells score return visits for imaging and 10 (4.4%) cases of unnecessary anticoagulation. In 19% of cases, the treating and scanning physician disagreed whether the patient was DVT likely or DVT unlikely based on Wells score (κ = 0.62; 95% CI = 0.48 to 0.77). Limited-compression US holds promise as one component of the diagnostic approach to DVT, but should not be used as a stand-alone test due to imperfect sensitivity. Tradeoffs in diagnostic efficiency for the sake of perfect sensitivity remain a difficult issue collectively in emergency medicine (EM), but need to be scrutinized carefully in light of the costs of overinvestigation, delays in diagnosis, and risks of empirical anticoagulation. © 2014 by the Society for Academic Emergency Medicine.
Energy Technology Data Exchange (ETDEWEB)
Thompson, William L.; Lee, Danny C.
2000-11-01
Many anadromous salmonid stocks in the Pacific Northwest are at their lowest recorded levels, which has raised questions regarding their long-term persistence under current conditions. There are a number of factors, such as freshwater spawning and rearing habitat, that could potentially influence their numbers. Therefore, we used the latest advances in information-theoretic methods in a two-stage modeling process to investigate relationships between landscape-level habitat attributes and maximum recruitment of 25 index stocks of chinook salmon (Oncorhynchus tshawytscha) in the Columbia River basin. Our first-stage model selection results indicated that the Ricker-type stock-recruitment model with a constant Ricker a (i.e., recruits-per-spawner at low numbers of fish) across stocks was the only plausible one given these data, which contrasted with previous unpublished findings. Our second-stage results revealed that maximum recruitment of chinook salmon had a strongly negative relationship with the percentage of surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and private moderate-high impact managed forest. That is, our model predicted that average maximum recruitment of chinook salmon would decrease by at least 247 fish for every increase of 33% in surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and privately managed forest. Conversely, mean annual air temperature had a positive relationship with salmon maximum recruitment, with an average increase of at least 179 fish for every 2°C increase in mean annual air temperature.
Risk newsboy: approach for addressing uncertainty in developing action levels and cleanup limits
International Nuclear Information System (INIS)
Cooke, Roger; MacDonell, Margaret
2007-01-01
Site cleanup decisions involve developing action levels and residual limits for key contaminants, to assure health protection during the cleanup period and into the long term. Uncertainty is inherent in the toxicity information used to define these levels, based on incomplete scientific knowledge regarding dose-response relationships across various hazards and exposures at environmentally relevant levels. This problem can be addressed by applying principles used to manage uncertainty in operations research, as illustrated by the newsboy dilemma. Each day a newsboy must balance the risk of buying more papers than he can sell against the risk of not buying enough. Setting action levels and cleanup limits involves a similar concept of balancing and distributing risks and benefits in the face of uncertainty. The newsboy approach can be applied to develop health-based target concentrations for both radiological and chemical contaminants, with stakeholder input being crucial to assessing 'regret' levels. Associated tools include structured expert judgment elicitation to quantify uncertainty in the dose-response relationship, and mathematical techniques such as probabilistic inversion and iterative proportional fitting. (authors)
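The newsboy (newsvendor) balancing logic described above has a standard closed-form solution: stock the demand quantile at the critical fractile determined by the two opposing costs. A minimal sketch using only the standard library; the costs and demand distribution are illustrative numbers, not values from the assessment.

```python
from statistics import NormalDist

def newsboy_quantity(cu, co, demand):
    """Classic newsvendor solution: stock the critical fractile.

    cu: unit cost of under-stocking (lost sale / insufficient protection)
    co: unit cost of over-stocking (unsold paper / excess cleanup)
    The optimal quantity is the demand quantile at cu / (cu + co),
    mirroring how action levels balance opposing 'regrets'.
    """
    critical_fractile = cu / (cu + co)
    return demand.inv_cdf(critical_fractile)

# Illustrative numbers: losing a sale costs 3, an unsold copy costs 1.
demand = NormalDist(mu=100, sigma=15)     # daily demand, papers
q = newsboy_quantity(cu=3.0, co=1.0, demand=demand)
print(round(q, 1))  # stocks above the mean because under-stocking costs more
```

The same asymmetry drives cleanup limits: when the 'regret' of under-protection outweighs that of over-cleanup, the target concentration shifts toward the protective side of the uncertainty distribution.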
An approach to criteria, design limits and monitoring in nuclear fuel waste disposal
Energy Technology Data Exchange (ETDEWEB)
Simmons, G R; Baumgartner, P; Bird, G A; Davison, C C; Johnson, L H; Tamm, J A
1994-12-01
The Nuclear Fuel Waste Management Program has been established to develop and demonstrate the technology for safe geological disposal of nuclear fuel waste. One objective of the program is to show that a disposal system (i.e., a disposal centre and associated transportation system) can be designed and that it would be safe. Therefore the disposal system must be shown to comply with safety requirements specified in guidelines, standards, codes and regulations. The components of the disposal system must also be shown to operate within the limits specified in their design. Compliance and performance of the disposal system would be assessed on a site-specific basis by comparing estimates of the anticipated performance of the system and its components with compliance or performance criteria. A monitoring program would be developed to consider the effects of the disposal system on the environment and would include three types of monitoring: baseline monitoring, compliance monitoring, and performance monitoring. This report presents an approach to establishing compliance and performance criteria, limits for use in disposal system component design, and the main elements of a monitoring program for a nuclear fuel waste disposal system. (author). 70 refs., 9 tabs., 13 figs.
Analysis of enamel development using murine model systems: approaches and limitations.
Directory of Open Access Journals (Sweden)
Megan K Pugach
2014-09-01
Full Text Available A primary goal of enamel research is to understand and potentially treat or prevent enamel defects related to amelogenesis imperfecta (AI). Rodents are ideal models for understanding how enamel is formed because they are easily genetically modified, and their continuously erupting incisors display all stages of enamel development and mineralization. While numerous methods have been developed to generate and analyze genetically modified rodent enamel, it is crucial to understand the limitations and challenges associated with these methods in order to draw appropriate conclusions that can be applied translationally, to AI patient care. We have highlighted methods involved in generating and analyzing rodent enamel and potential approaches to overcoming the limitations of these methods: 1) generating transgenic, knockout and knockin mouse models, and 2) analyzing rodent enamel mineral density and functional properties (structure, mechanics) of mature enamel. There is a need for a standardized workflow to analyze enamel phenotypes in rodent models so that investigators can compare data from different studies. These methods include analyses of gene and protein expression, developing enamel histology, enamel pigment, degree of mineralization, enamel structure and mechanical properties. Standardization of these methods with regard to stage of enamel development and sample preparation is crucial, and ideally investigators can use correlative and complementary techniques with the understanding that developing mouse enamel is dynamic and complex.
A qualitative risk assessment approach for Swiss dairy products: opportunities and limitations.
Menéndez González, S; Hartnack, S; Berger, T; Doherr, M; Breidenbach, E
2011-05-01
Switzerland implemented a risk-based monitoring of Swiss dairy products in 2002 based on a risk assessment (RA) that considered the probability of exceeding a microbiological limit value set by law. A new RA was launched in 2007 to review and further develop the previous assessment, and to make recommendations for future risk-based monitoring according to current risks. The resulting qualitative RA was designed to ascertain the risk to human health from the consumption of Swiss dairy products. The products and microbial hazards to be considered in the RA were determined based on a risk profile. The hazards included Campylobacter spp., Listeria monocytogenes, Salmonella spp., Shiga toxin-producing Escherichia coli, coagulase-positive staphylococci and Staphylococcus aureus enterotoxin. The release assessment considered the prevalence of the hazards in bulk milk samples, the influence of the process parameters on the microorganisms, and the influence of the type of dairy. The exposure assessment was linked to the production volume. An overall probability was estimated combining the probabilities of release and exposure for each combination of hazard, dairy product and type of dairy. This overall probability represents the likelihood of a product from a certain type of dairy exceeding the microbiological limit value and being passed on to the consumer. The consequences could not be fully assessed due to lack of detailed information on the number of disease cases caused by the consumption of dairy products. The results were expressed as a ranking of overall probabilities. Finally, recommendations for the design of the risk-based monitoring programme and for filling the identified data gaps were given. The aims of this work were (i) to present the qualitative RA approach for Swiss dairy products, which could be adapted to other settings and (ii) to discuss the opportunities and limitations of the qualitative method. © 2010 Blackwell Verlag GmbH.
Directory of Open Access Journals (Sweden)
F. Topsøe
2001-09-01
Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
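The Mean Energy Model mentioned above admits a compact computational sketch: the entropy maximizer under a mean-energy constraint is the Gibbs family p_i ∝ exp(-β E_i), with the multiplier β chosen to hit the target mean. The function below, its state energies and the bisection bracket are illustrative assumptions, not part of the paper.

```python
import numpy as np

def maxent_mean_energy(E, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy distribution under a mean-'energy' constraint.

    The maximizer is the Gibbs family p_i ∝ exp(-beta * E_i); the
    multiplier beta is found by bisection so that E_p[E] hits the
    target (the mean energy is strictly decreasing in beta).
    """
    E = np.asarray(E, dtype=float)

    def mean_energy(beta):
        w = np.exp(-beta * (E - E.min()))   # shift for numerical stability
        p = w / w.sum()
        return p @ E

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_energy(mid) > target_mean:  # beta too small: tighten from below
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    w = np.exp(-beta * (E - E.min()))
    return w / w.sum(), beta

# Four states with energies 0..3, constrained to a mean energy of 1.0.
E = np.array([0.0, 1.0, 2.0, 3.0])
p, beta = maxent_mean_energy(E, target_mean=1.0)
print(np.round(p, 3), round(float(p @ E), 3))
```

Because the uniform distribution has mean energy 1.5 here, the constraint 1.0 forces a positive β, tilting probability toward low-energy states while keeping the distribution as spread out as the constraint allows.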
Maximum Entropy in Drug Discovery
Directory of Open Access Journals (Sweden)
Chih-Yuan Tseng
2014-07-01
Full Text Available Drug discovery applies multidisciplinary approaches either experimentally, computationally or both ways to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
Handling limited datasets with neural networks in medical applications: A small-data approach.
Shaikhina, Torgyn; Khovanova, Natalia A
2017-01-01
Single-centre studies in the medical domain are often characterised by limited samples due to the complexity and high costs of patient data collection. Machine learning methods for regression modelling of small datasets (less than 10 observations per predictor variable) remain scarce. Our work bridges this gap by developing a novel framework for application of artificial neural networks (NNs) for regression tasks involving small medical datasets. In order to address the sporadic fluctuations and validation issues that appear in regression NNs trained on small datasets, the methods of multiple runs and surrogate data analysis were proposed in this work. The approach was compared to the state-of-the-art ensemble NNs; the effect of dataset size on NN performance was also investigated. The proposed framework was applied for the prediction of compressive strength (CS) of femoral trabecular bone in patients suffering from severe osteoarthritis. The NN model was able to estimate the CS of osteoarthritic trabecular bone from its structural and biological properties with a standard error of 0.85 MPa. When evaluated on independent test samples, the NN achieved accuracy of 98.3%, outperforming an ensemble NN model by 11%. We reproduce this result on CS data of another porous solid (concrete) and demonstrate that the proposed framework allows for an NN modelled with as few as 56 samples to generalise on 300 independent test samples with 86.5% accuracy, which is comparable to the performance of an NN developed with an 18 times larger dataset (1030 samples). The significance of this work is two-fold: the practical application allows for non-destructive prediction of bone fracture risk, while the novel methodology extends beyond the task considered in this study and provides a general framework for application of regression NNs to medical problems characterised by limited dataset sizes. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
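The multiple-runs idea can be sketched with a tiny numpy network: train several NNs from different random initializations on the same small dataset and average their predictions, which damps initialization-driven fluctuations (by Jensen's inequality, the averaged prediction's MSE never exceeds the average of the individual MSEs). The architecture, data and hyperparameters below are illustrative assumptions, not those of the study.

```python
import numpy as np

def train_tiny_nn(x, y, seed, hidden=4, lr=0.05, steps=2000):
    """One-hidden-layer regression NN trained by full-batch gradient descent."""
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((1, hidden)) * 0.5
    b1 = np.zeros(hidden)
    W2 = rng.standard_normal((hidden, 1)) * 0.5
    b2 = 0.0
    for _ in range(steps):
        h = np.tanh(x @ W1 + b1)                  # forward pass
        pred = h @ W2 + b2
        err = pred - y
        # backward pass (mean-squared-error gradients)
        gW2 = h.T @ err / len(x); gb2 = err.mean()
        gh = err @ W2.T * (1 - h ** 2)
        gW1 = x.T @ gh / len(x); gb1 = gh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return lambda xq: np.tanh(xq @ W1 + b1) @ W2 + b2

# Small dataset (20 samples), as in the limited-data setting above.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(20, 1))
y = 2.0 * x + 1.0 + 0.1 * rng.standard_normal((20, 1))
x_test = np.linspace(-1, 1, 50).reshape(-1, 1)
y_test = 2.0 * x_test + 1.0

models = [train_tiny_nn(x, y, seed=s) for s in range(5)]   # multiple runs
preds = np.stack([m(x_test) for m in models])
ensemble = preds.mean(axis=0)                              # average the runs
mses = [float(((p - y_test) ** 2).mean()) for p in preds]
mse_ens = float(((ensemble - y_test) ** 2).mean())
print(round(mse_ens, 4), round(float(np.mean(mses)), 4))
```

The spread among the individual-run MSEs is itself diagnostic on small datasets: a wide spread signals that single-run results are unreliable and should not be reported in isolation.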
Vernez, David; Fraize-Frontier, Sandrine; Vincent, Raymond; Binet, Stéphane; Rousselle, Christophe
2018-03-15
Assessment factors (AFs) are commonly used for deriving reference concentrations for chemicals. These factors take into account variabilities as well as uncertainties in the dataset, such as inter-species and intra-species variabilities or exposure duration extrapolation or extrapolation from the lowest-observed-adverse-effect level (LOAEL) to the no-observed-adverse-effect level (NOAEL). In a deterministic approach, the value of an AF is the result of a debate among experts and, often, a conservative value is used as a default choice. A probabilistic framework to better take into account uncertainties and/or variability when setting occupational exposure limits (OELs) is presented and discussed in this paper. Each AF is considered as a random variable with a probabilistic distribution. A short literature review was conducted before setting default distribution ranges and shapes for each AF commonly used. Random sampling, using Monte Carlo techniques, is then used for propagating the identified uncertainties and computing the final OEL distribution. Starting from the broad default distributions obtained, experts narrow them to their most likely range, according to the scientific knowledge available for a specific chemical. Introducing distributions rather than single deterministic values allows disclosing and clarifying variability and/or uncertainties inherent to the OEL construction process. This probabilistic approach yields quantitative insight into both the possible range and the relative likelihood of values for model outputs. It thereby provides better support in decision-making and improves transparency. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.
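The Monte Carlo propagation step described above can be sketched as follows. The AF distributions, their parameters and the point of departure here are illustrative assumptions for the sketch, not the defaults proposed by the authors.

```python
import numpy as np

def probabilistic_oel(pod, af_samplers, n=100_000, seed=1):
    """Monte Carlo propagation of assessment-factor (AF) distributions.

    pod: point of departure (e.g. a NOAEL, in mg/m^3)
    af_samplers: callables, each drawing n samples of one AF
    Returns the sampled OEL distribution pod / (AF1 * AF2 * ...).
    """
    rng = np.random.default_rng(seed)
    total_af = np.ones(n)
    for sampler in af_samplers:
        total_af *= sampler(rng, n)        # propagate each uncertainty source
    return pod / total_af

# Illustrative (not regulatory) AF distributions:
samplers = [
    lambda rng, n: rng.lognormal(np.log(3.0), 0.4, size=n),  # inter-species
    lambda rng, n: rng.lognormal(np.log(2.0), 0.3, size=n),  # intra-species
    lambda rng, n: rng.uniform(1.0, 3.0, size=n),            # LOAEL-to-NOAEL
]
oel = probabilistic_oel(pod=10.0, af_samplers=samplers)      # mg/m^3
p5, p50, p95 = np.percentile(oel, [5, 50, 95])
print(round(p5, 3), round(p50, 3), round(p95, 3))
```

Reporting the resulting percentiles rather than a single number is what makes the variability and uncertainty in the OEL construction visible to decision-makers.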
International Nuclear Information System (INIS)
Thompson, William L.
2000-01-01
Hankin and Reeves' (1988) approach to estimating fish abundance in small streams has been applied in stream-fish studies across North America. However, as with any method of population estimation, there are important assumptions that must be met for estimates to be minimally biased and reasonably precise. Consequently, I investigated effects of various levels of departure from these assumptions via simulation based on results from an example application in Hankin and Reeves (1988) and a spatially clustered population. Coverage of 95% confidence intervals averaged about 5% less than nominal when removal estimates equaled true numbers within sampling units, but averaged 62%-86% less than nominal when they did not, except where detection probabilities of individuals were >0.85 and constant across sampling units (95% confidence interval coverage = 90%). True total abundances averaged far (20%-41%) below the lower confidence limit when not included within intervals, which implies large negative bias. Further, the average coefficient of variation was about 1.5 times higher when removal estimates did not equal true numbers within sampling units (CV = 0.27 [SE = 0.0004]) than when they did (CV = 0.19 [SE = 0.0002]). A potential modification to Hankin and Reeves' approach is to include environmental covariates that affect detection rates of fish in the removal model or another mark-recapture model. A potential alternative is to use snorkeling in combination with line transect sampling to estimate fish densities. Regardless of the method of population estimation, a pilot study should be conducted to validate the enumeration method, which requires a known (or nearly so) population of fish to serve as a benchmark for evaluating the bias and precision of population estimates
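For intuition, the simplest member of the removal-estimator family discussed here is the two-pass (Seber) estimator; Hankin and Reeves use a multi-pass design, so this is only an illustrative sketch with invented catch numbers:

```python
def two_pass_removal(c1: int, c2: int) -> tuple:
    """Two-pass removal estimator: assuming constant capture probability p
    per pass, E[C1] = Np and E[C2] = N(1-p)p, which gives closed-form
    estimates N_hat = C1^2 / (C1 - C2) and p_hat = 1 - C2/C1."""
    if c2 >= c1:
        raise ValueError("second-pass catch must be below first-pass catch")
    p_hat = 1.0 - c2 / c1          # estimated per-pass capture probability
    n_hat = c1 ** 2 / (c1 - c2)    # estimated abundance in the sampling unit
    return n_hat, p_hat

# Hypothetical unit: 120 fish removed on pass 1, 30 on pass 2.
n_hat, p_hat = two_pass_removal(120, 30)
print(n_hat, p_hat)  # 160.0 0.75
```

When capture probability varies among individuals or passes (the departures simulated in the study), these point estimates become biased, which is what drives the confidence-interval failures reported above.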
Iler, Amy M.; Høye, Toke T.; Inouye, David W.; Schmidt, Niels M.
2013-01-01
Many alpine and subalpine plant species exhibit phenological advancements in association with earlier snowmelt. While the phenology of some plant species does not advance beyond a threshold snowmelt date, the prevalence of such threshold phenological responses within plant communities is largely unknown. We therefore examined the shape of flowering phenology responses (linear versus nonlinear) to climate using two long-term datasets from plant communities in snow-dominated environments: Gothic, CO, USA (1974–2011) and Zackenberg, Greenland (1996–2011). For a total of 64 species, we determined whether a linear or nonlinear regression model best explained interannual variation in flowering phenology in response to increasing temperatures and advancing snowmelt dates. The most common nonlinear trend was for species to flower earlier as snowmelt advanced, with either no change or a slower rate of change when snowmelt was early (average 20% of cases). By contrast, some species advanced their flowering at a faster rate over the warmest temperatures relative to cooler temperatures (average 5% of cases). Thus, some species seem to be approaching their limits of phenological change in response to snowmelt but not temperature. Such phenological thresholds could either be a result of minimum springtime photoperiod cues for flowering or a slower rate of adaptive change in flowering time relative to changing climatic conditions. PMID:23836793
An approach to the derivation of radionuclide intake limits for members of the public
International Nuclear Information System (INIS)
Thompson, R.C.
1980-01-01
The modification of occupational exposure limits for application to general populations is discussed. First, the permitted radiation dose needs to be modified from that considered appropriate for occupational exposure, to that considered appropriate for the particular general population exposure of concern. This is a problem of optimization and is considered only briefly. The second modification allows for the different physical, biological, and societal parameters applicable to general populations as contrasted with occupational populations. These differences derive from the heterogeneity of the general population particularly in terms of age and state-of-health, as these affect radionuclide deposition, absorption, distribution, and retention, and as they affect basic sensitivity to the development of detrimental effects. Environmental factors will influence physical availability and may alter the chemical and physical form of the radionuclide, and hence biological availability to the general population. Societal factors may modify the potential for exposure of different segments of the general population. This complex modifying factor will be different for each radioelement. The suggested approach is illustrated using plutonium as an example. (H.K.)
Sherwin, Jason
At the start of the 21st century, the topic of complexity remains a formidable challenge in engineering, science and other aspects of our world. It seems that when disaster strikes it is because some complex and unforeseen interaction causes the unfortunate outcome. Why did the financial system of the world meltdown in 2008--2009? Why are global temperatures on the rise? These questions and other ones like them are difficult to answer because they pertain to contexts that require lengthy descriptions. In other words, these contexts are complex. But we as human beings are able to observe and recognize this thing we call 'complexity'. Furthermore, we recognize that there are certain elements of a context that form a system of complex interactions---i.e., a complex system. Many researchers have even noted similarities between seemingly disparate complex systems. Do sub-atomic systems bear resemblance to weather patterns? Or do human-based economic systems bear resemblance to macroscopic flows? Where do we draw the line in their resemblance? These are the kinds of questions that are asked in complex systems research. And the ability to recognize complexity is not only limited to analytic research. Rather, there are many known examples of humans who, not only observe and recognize but also, operate complex systems. How do they do it? Is there something superhuman about these people or is there something common to human anatomy that makes it possible to fly a plane? Or to drive a bus? Or to operate a nuclear power plant? Or to play Chopin's etudes on the piano? In each of these examples, a human being operates a complex system of machinery, whether it is a plane, a bus, a nuclear power plant or a piano. What is the common thread running through these abilities? The study of situational awareness (SA) examines how people do these types of remarkable feats. It is not a bottom-up science though because it relies on finding general principles running through a host of varied
A Weakest-Link Approach for Fatigue Limit of 30CrNiMo8 Steels (Preprint)
2011-03-01
AFRL-RX-WP-TP-2011-4206: A Weakest-Link Approach for Fatigue Limit of 30CrNiMo8 Steels (Preprint), S. Ekwaro-Osire and H.V. Kulkarni, Texas... Cited work: "Application of a Weakest-Link Concept to the Fatigue Limit of the Bearing Steel SAE 52100 in a Bainitic Condition," Fatigue and Fracture of...
Aktham I. Maghyereh; Haitham A. Al Zoubi; Haitham Nobanee
2007-01-01
We reexamine the effects of price limits on stock volatility of the Taiwan Stock Exchange using a new methodology based on the Extreme-Value technique. Consistent with the advocates of price limits, we find that stock market volatility is sharply moderated under more restrictive price limits.
A partial ensemble Kalman filtering approach to enable use of range limited observations
DEFF Research Database (Denmark)
Borup, Morten; Grum, Morten; Madsen, Henrik
2015-01-01
The ensemble Kalman filter (EnKF) relies on the assumption that an observed quantity can be regarded as a stochastic variable that is Gaussian distributed, with mean and variance equal to the measurement and the measurement noise, respectively. When a gauge has a minimum and/or maximum detection...
A new approach to define acceptance limits for hematology in external quality assessment schemes.
Soumali, Mohamed Rida; Van Blerk, Marjan; Akharif, Abdelhadi; Albarède, Stéphanie; Kesseler, Dagmar; Gutierrez, Gabriela; de la Salle, Barbara; Plum, Inger; Guyard, Anne; Favia, Ana Paula; Coucke, Wim
2017-10-26
A study performed in 2007 comparing the evaluation procedures used in European external quality assessment schemes (EQAS) for hemoglobin and leukocyte concentrations showed that acceptance criteria vary widely. For this reason, the Hematology working group of the European Organisation for External Quality Assurance Providers in Laboratory Medicine (EQALM) decided to perform a statistical study with the aim of establishing appropriate acceptance limits (ALs) allowing harmonization between the evaluation procedures of European EQAS organizers. Eight EQAS organizers from seven European countries provided their hematology survey results from 2010 to 2012 for red blood cells (RBC), hemoglobin, hematocrit, mean corpuscular volume (MCV), white blood cells (WBC), platelets and reticulocytes. More than 440,000 data points were collected. The relation between the absolute value of the relative differences between reported EQA results and their corresponding assigned value (U-scores) was modeled by means of an adaptation of Thompson's "characteristic function". Quantile regression was used to investigate the percentiles of the U-scores for each target concentration range. For deriving ALs, focus was mainly on the upper percentiles (90th, 95th and 99th). For RBC, hemoglobin, hematocrit and MCV, no relation was found between the U-scores and the target concentrations for any of the percentiles. For WBC, platelets and reticulocytes, a relation with the target concentrations was found and concentration-dependent ALs were determined. The approach made it possible to determine state-of-the-art-based ALs that are concentration-dependent when necessary and usable by various EQA providers. It could also easily be applied to other domains.
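A minimal sketch of deriving concentration-dependent acceptance limits from upper percentiles of U-scores, using simple concentration bins in place of the quantile regression applied in the study; all data below are simulated, not EQA survey results:

```python
import random

random.seed(0)

# Simulated EQA data: (assigned target concentration, reported result).
# The concentration-dependent spread is an assumption for illustration.
data = []
for _ in range(5000):
    target = random.uniform(50, 500)        # e.g. platelet count, 10^9/L
    cv = 0.02 + 2.0 / target                # relative spread shrinks as level rises
    data.append((target, random.gauss(target, cv * target)))

def u_score(target, reported):
    """Absolute relative difference between result and assigned value."""
    return abs(reported - target) / target

def percentile(xs, q):
    xs = sorted(xs)
    return xs[min(int(q * len(xs)), len(xs) - 1)]

# Concentration-dependent AL: 95th U-score percentile per concentration bin.
bins = [(50, 150), (150, 300), (300, 500)]
limits = {}
for lo, hi in bins:
    scores = [u_score(t, r) for t, r in data if lo <= t < hi]
    limits[(lo, hi)] = percentile(scores, 0.95)
print(limits)
```

With the simulated concentration-dependent spread, the derived limit is wider at low concentrations and tighter at high ones, mirroring the WBC/platelet/reticulocyte finding.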
Combining Experiments and Simulations Using the Maximum Entropy Principle
DEFF Research Database (Denmark)
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights. We first describe the maximum entropy procedure in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
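The core maximum-entropy reweighting step can be illustrated on a toy ensemble: among all distributions reproducing a target experimental average, the one closest to the prior has the exponential form p_i ∝ exp(−λx_i), with λ fixed by the constraint. The observable values and target below are invented:

```python
import math

# "Simulated" observable values for 5 conformations, uniform prior ensemble.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
target = 2.0  # "experimental" average the reweighted ensemble must match

def weighted_mean(lmbda):
    """Ensemble average under weights proportional to exp(-lambda * x_i)."""
    w = [math.exp(-lmbda * xi) for xi in x]
    return sum(wi * xi for wi, xi in zip(w, x)) / sum(w)

# weighted_mean is monotone decreasing in lambda, so solve the constraint
# <x> = target by bisection.
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if weighted_mean(mid) > target:
        lo = mid   # mean still too high: increase lambda
    else:
        hi = mid
lmbda = 0.5 * (lo + hi)

w = [math.exp(-lmbda * xi) for xi in x]
z = sum(w)
p = [wi / z for wi in w]  # maximum-entropy weights satisfying the constraint
print(lmbda, weighted_mean(lmbda))
```

Conformations with observable values near the experimental average are up-weighted, which is exactly the minimal bias correction the maximum entropy principle prescribes.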
Maximum Acceleration Recording Circuit
Bozeman, Richard J., Jr.
1995-01-01
Coarsely digitized maximum levels recorded in blown fuses. Circuit feeds power to accelerometer and makes nonvolatile record of maximum level to which output of accelerometer rises during measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for same purpose, circuit is simpler, less bulky, consumes less power, costs less, and avoids retrieval and analysis of data recorded in magnetic or electronic memory devices. Circuit used, for example, to record accelerations to which commodities subjected during transportation on trucks.
Directory of Open Access Journals (Sweden)
Yu-Pei Huang
2018-01-01
This paper proposes a modified maximum power point tracking (MPPT) algorithm for photovoltaic systems under rapidly changing partial shading conditions (PSCs). The proposed algorithm integrates a genetic algorithm (GA) and the firefly algorithm (FA) and further improves its calculation process via a differential evolution (DE) algorithm. The conventional GA is not advisable for MPPT because of its complicated calculations and low accuracy under PSCs. In this study, we simplified the GA calculations with the integration of the DE mutation process and FA attractive process. Results from both the simulation and evaluation verify that the proposed algorithm provides rapid response time and high accuracy due to the simplified processing. For instance, evaluation results demonstrate that, compared to the conventional GA, the execution time and tracking accuracy of the proposed algorithm are improved by around 69.4% and 4.16%, respectively. In addition, in comparison to FA, the tracking speed and tracking accuracy of the proposed algorithm are improved by around 42.9% and 1.85%, respectively. Consequently, the major improvement of the proposed method when evaluated against the conventional GA and FA is tracking speed. Moreover, this research provides a framework to integrate multiple nature-inspired algorithms for MPPT. Furthermore, the proposed method is adaptable to different types of solar panels and different system formats with specifically designed equations, the advantages of which are rapid tracking speed with high accuracy under PSCs.
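To see why a global search layer matters under partial shading, consider a toy two-peak P-V curve: a purely local tracker started at low voltage can lock onto the smaller peak, whereas a coarse global scan (the role played by the GA/FA/DE hybrid in the paper) brackets the global peak. The curve parameters are invented, not a physical module model:

```python
def pv_power(v):
    """Toy P-V curve with two peaks, mimicking partial shading: a local
    60 W peak near 12 V and the global 95 W peak near 30 V."""
    peak = lambda u, v0, h, w: h * max(0.0, 1 - ((u - v0) / w) ** 2)
    return max(peak(v, 12.0, 60.0, 6.0), peak(v, 30.0, 95.0, 8.0))

# A hill-climber started near 0 V would converge on the 60 W local peak.
# A coarse global scan over the operating range finds the true MPP instead.
candidates = [v / 2 for v in range(0, 81)]   # 0..40 V in 0.5 V steps
v_best = max(candidates, key=pv_power)
print(v_best, pv_power(v_best))  # 30.0 95.0
```

Population-based methods such as GA/FA/DE perform this global exploration with far fewer curve evaluations than an exhaustive scan, which is where the reported speed gains come from.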
Murray, Aja Louise; Booth, Tom; Eisner, Manuel; Obsuth, Ingrid; Ribeaud, Denis
2018-05-22
Whether or not importance should be placed on an all-encompassing general factor of psychopathology (or p factor) in classifying, researching, diagnosing, and treating psychiatric disorders depends (among other issues) on the extent to which comorbidity is symptom-general rather than staying largely within the confines of narrower transdiagnostic factors such as internalizing and externalizing. In this study, we compared three methods of estimating p factor strength. We compared omega hierarchical and explained common variance calculated from confirmatory factor analysis (CFA) bifactor models with maximum likelihood (ML) estimation, from exploratory structural equation modeling/exploratory factor analysis models with a bifactor rotation, and from Bayesian structural equation modeling (BSEM) bifactor models. Our simulation results suggested that BSEM with small variance priors on secondary loadings might be the preferred option. However, CFA with ML also performed well provided secondary loadings were modeled. We provide two empirical examples of applying the three methodologies using a normative sample of youth (z-proso, n = 1,286) and a university counseling sample (n = 359).
Energy Technology Data Exchange (ETDEWEB)
Gunawan, H.; Puspito, N. T.; Ibrahim, G.; Harjadi, P. J. P. [ITB, Faculty of Earth Sciences and Tecnology (Indonesia); BMKG (Indonesia)
2012-06-20
The new approach method determines the magnitude by using the amplitude displacement relationship (A), epicenter distance (Δ) and duration of high-frequency radiation (t), and has been investigated for the Tasikmalaya earthquake of September 2, 2009, and its aftershocks. The moment magnitude scale commonly uses seismic surface waves at teleseismic range with periods greater than 200 seconds, or the moment magnitude of the P wave using teleseismic seismogram data in the 10-60 second range. In this research, techniques have been developed for a new approach to determine the displacement amplitude and the duration of high-frequency radiation using near earthquakes. The duration of high frequency is determined using half the period of P waves on the displacement seismograms. This is due to the very complex rupture process in near earthquakes: seismic data of the P wave mix with other waves (S waves) before the duration runs out, so it is difficult to separate out or determine the end of the P wave. Applying the method to 68 earthquakes recorded by the CISI station, Garut, West Java, the following relationship is obtained: Mw = 0.78 log (A) + 0.83 log (Δ) + 0.69 log (t) + 6.46, with A (m), Δ (km) and t (second). The moment magnitude from this new approach is quite reliable, and processing is faster, so it is useful for early warning.
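The fitted relationship is straightforward to apply once A, Δ and t are measured; the input values below are illustrative, not data from the paper:

```python
import math

def moment_magnitude(a_m: float, delta_km: float, t_s: float) -> float:
    """Mw from the regression fitted to 68 events recorded at station CISI:
    Mw = 0.78 log(A) + 0.83 log(Delta) + 0.69 log(t) + 6.46,
    with A in metres, Delta in km, t in seconds (base-10 logs)."""
    return (0.78 * math.log10(a_m)
            + 0.83 * math.log10(delta_km)
            + 0.69 * math.log10(t_s)
            + 6.46)

# Hypothetical event: A = 1e-4 m, Delta = 100 km, t = 8 s.
print(round(moment_magnitude(1e-4, 100.0, 8.0), 2))
```

Because only a single near-station P-wave measurement is needed, the estimate is available far sooner than a teleseismic surface-wave Mw, which is the early-warning advantage claimed.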
Maximum Quantum Entropy Method
Sim, Jae-Hoon; Han, Myung Joon
2018-01-01
Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...
International Nuclear Information System (INIS)
Biondi, L.
1998-01-01
The charge for a service is the supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises with the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.
Adam-Poupart, Ariane; Brand, Allan; Fournier, Michel; Jerrett, Michael; Smargiassi, Audrey
2014-09-01
Ambient air ozone (O3) is a pulmonary irritant that has been associated with respiratory health effects including increased lung inflammation and permeability, airway hyperreactivity, respiratory symptoms, and decreased lung function. Estimation of O3 exposure is a complex task because the pollutant exhibits complex spatiotemporal patterns. To refine the quality of exposure estimation, various spatiotemporal methods have been developed worldwide. We sought to compare the accuracy of three spatiotemporal models to predict summer ground-level O3 in Quebec, Canada. We developed a land-use mixed-effects regression (LUR) model based on readily available data (air quality and meteorological monitoring data, road networks information, latitude), a Bayesian maximum entropy (BME) model incorporating both O3 monitoring station data and the land-use mixed model outputs (BME-LUR), and a kriging method model based only on available O3 monitoring station data (BME kriging). We performed leave-one-station-out cross-validation and visually assessed the predictive capability of each model by examining the mean temporal and spatial distributions of the average estimated errors. The BME-LUR was the best predictive model (R2 = 0.653) with the lowest root mean-square error (RMSE = 7.06 ppb), followed by the LUR model (R2 = 0.466, RMSE = 8.747 ppb) and the BME kriging model (R2 = 0.414, RMSE = 9.164 ppb). Our findings suggest that errors of estimation in the interpolation of O3 concentrations with BME can be greatly reduced by incorporating outputs from a LUR model developed with readily available data.
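The leave-one-station-out evaluation can be sketched as follows, using inverse-distance weighting as a simple stand-in for the LUR/BME/kriging estimators compared in the study, on a synthetic monitoring network (all coordinates and concentrations are invented):

```python
import math
import random

random.seed(3)

# Synthetic network: 25 stations with a west-east O3 gradient plus noise (ppb).
stations = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(25)]
stations = [(x, y, 30 + 0.15 * x + random.gauss(0, 2)) for x, y in stations]

def idw(x, y, pts, power=2.0):
    """Inverse-distance-weighted interpolation at (x, y) from known points."""
    num = den = 0.0
    for sx, sy, v in pts:
        d = math.hypot(x - sx, y - sy) or 1e-9
        w = d ** -power
        num += w * v
        den += w
    return num / den

# Leave-one-station-out cross-validation, as in the paper's evaluation.
errs, obs = [], []
for i, (x, y, v) in enumerate(stations):
    rest = stations[:i] + stations[i + 1:]
    errs.append(idw(x, y, rest) - v)
    obs.append(v)

rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
mean_obs = sum(obs) / len(obs)
r2 = 1 - sum(e * e for e in errs) / sum((o - mean_obs) ** 2 for o in obs)
print(f"LOO RMSE = {rmse:.2f} ppb, R2 = {r2:.2f}")
```

Swapping the interpolator (LUR regression, kriging, BME) while keeping this loop fixed is what allows the RMSE/R2 figures of the three models to be compared on equal footing.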
Energy Technology Data Exchange (ETDEWEB)
Vasquez, V.R., E-mail: vrvasquez@ucla.edu [Environmental Science and Engineering Program, University of California, Los Angeles, Los Angeles, CA 90095-1496 (United States); Curren, J., E-mail: janecurren@yahoo.com [Environmental Science and Engineering Program, University of California, Los Angeles, Los Angeles, CA 90095-1496 (United States); Lau, S.-L., E-mail: simlin@ucla.edu [Department of Civil and Environmental Engineering, University of California, Los Angeles, Los Angeles, CA 90095-1496 (United States); Stenstrom, M.K., E-mail: stenstro@seas.ucla.edu [Department of Civil and Environmental Engineering, University of California, Los Angeles, Los Angeles, CA 90095-1496 (United States); Suffet, I.H., E-mail: msuffet@ucla.edu [Environmental Science and Engineering Program, University of California, Los Angeles, Los Angeles, CA 90095-1496 (United States)
2011-09-01
Echo Park Lake is a small lake in Los Angeles, CA listed on the USA Clean Water Act Section 303(d) list of impaired water bodies for elevated levels of organochlorine pesticides (OCPs) and polychlorinated biphenyls (PCBs) in fish tissue. A lake water and sediment sampling program was completed to support the development of total maximum daily loads (TMDL) to address the lake impairment. The field data indicated quantifiable levels of OCPs and PCBs in the sediments, but lake water data were all below detection levels. The field sediment data obtained may explain the contaminant levels in fish tissue using appropriate sediment-water partitioning coefficients and bioaccumulation factors. A partition-equilibrium fugacity model of the whole lake system was used to interpret the field data and indicated that half of the total mass of the pollutants in the system is in the sediments and the other half in the soil; therefore, soil erosion could be a significant pollutant transport mode into the lake. Modeling also indicated that developing and quantifying the TMDL depends significantly on the analytical detection level for the pollutants in field samples and on the choice of octanol-water partitioning coefficient and bioaccumulation factors for the model. Research highlights: A fugacity model using new OCP and PCB field data supports lake TMDL calculations. OCP and PCB levels in lake sediment were found above levels for impairment. The relationship between sediment data and available fish tissue data was evaluated. The model provides an approximation of contaminant sources and sinks for a lake system. Model results were sensitive to analytical detection and quantification levels.
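The sediment-water partitioning argument can be illustrated with a one-step equilibrium calculation showing why sediment concentrations are quantifiable while water stays below detection. The Kow value, the Karickhoff-style Koc correlation, and the organic-carbon fraction are assumed values, not the calibrated parameters of the Echo Park Lake model:

```python
# Equilibrium partitioning sketch for a hydrophobic pollutant (e.g. a PCB)
# between lake water and sediment organic carbon.
log_kow = 6.0               # octanol-water partition coefficient (assumed)
koc = 0.41 * 10 ** log_kow  # organic-carbon partition coeff., L/kg (Karickhoff-type)
foc = 0.05                  # fraction organic carbon in sediment (assumed)
kd = koc * foc              # sediment-water distribution coefficient, L/kg

c_water = 1e-6              # dissolved concentration, mg/L (below detection)
c_sed = kd * c_water        # equilibrium sediment concentration, mg/kg
print(f"Kd = {kd:.3g} L/kg, C_sed = {c_sed:.3g} mg/kg")
```

A four-order-of-magnitude Kd means a non-detect water sample is still consistent with measurable sediment levels, which is why the TMDL hinges on the analytical detection limit and the choice of Kow.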
International Nuclear Information System (INIS)
Spiegler, P.
1981-09-01
As part of the assessment of the potential radiological consequences of the proposed Waste Isolation Pilot Plant (WIPP), this report evaluates the post-closure radiation dose commitments associated with a possible breach event which involves dissolution of the repository by groundwaters and subsequent transport of the nuclear waste through an aquifer to a well assumed to exist at a point 3 miles downstream from the repository. The concentrations of uranium and plutonium isotopes at the well are based on the nuclear waste inventory presently proposed for WIPP and basic assumptions concerning the transport of waste as well as treatment to reduce the salinity of the water. The concentrations of U-233, Pu-239, and Pu-240, all radionuclides originally emplaced as waste in the repository, would exceed current EPA drinking water limits. The concentrations of U-234, U-235, and U-236, all decay products of plutonium isotopes originally emplaced as waste, would be well below current EPA drinking water limits. The 50-year dose commitments from one year of drinking treated water contaminated with U-233 or Pu-239 and Pu-240 were found to be comparable to a one-year dose from natural background. The 50-year dose commitments from one year of drinking milk would be no more than about 1/5 the dose obtained from ingestion of treated water. These doses are considered upper bounds because of several very conservative assumptions which are discussed in the report
Kumar, M Hari; Kumar, M Siva; Kumar, Sabitha Hari; Kumar, Kingsly Selva
2016-01-01
Limited cutaneous scleroderma is a subtype of scleroderma limited to the skin of the face, hands, feet and forearms. We present a case of a 45-year-old woman affected by limited cutaneous systemic scleroderma involving the orofacial region and causing restricted mouth opening. The patient showed noteworthy improvement of the skin lesion by use of a combination of intralesional corticosteroid with hyaluronidase and various multiantioxidants, resulting in amelioration of her mouth opening problem. The patient gave her full informed written consent to this report being published. PMID:27033280
Efficiency of autonomous soft nanomachines at maximum power.
Seifert, Udo
2011-01-14
We consider nanosized artificial or biological machines working in steady state enforced by imposing nonequilibrium concentrations of solutes or by applying external forces, torques, or electric fields. For unicyclic and strongly coupled multicyclic machines, efficiency at maximum power is not bounded by the linear response value 1/2. For strong driving, it can even approach the thermodynamic limit 1. Quite generally, such machines fall into three different classes characterized, respectively, as "strong and efficient," "strong and inefficient," and "balanced." For weakly coupled multicyclic machines, efficiency at maximum power has lost any universality even in the linear response regime.
CSIR Research Space (South Africa)
Sebake, TN
2008-06-01
professionals, particularly by architects, in the implementation of sustainability principles in the development of building projects. The aim of the paper is to highlight the limitations of introducing sustainability aspects into the existing South African...
Modeling the evolution of natural cliffs subject to weathering. 1, Limit analysis approach
Utili, Stefano; Crosta, Giovanni B.
2011-01-01
Retrogressive landsliding evolution of natural slopes subjected to weathering has been modeled by assuming Mohr-Coulomb material behavior and by using an analytical method. The case of weathering-limited slope conditions, with complete erosion of the accumulated debris, has been modeled. The limit analysis upper-bound method is used to study slope instability induced by a homogeneous decrease of material strength in space and time. The only assumption required in the model concerns the degree...
Zoetendal, E.G.; Ben-Amor, K.; Akkermans, A.D.L.; Abee, T.; Vos, de W.M.
2001-01-01
A major concern in molecular ecological studies is the lysis efficiency of different bacteria in a complex ecosystem. We used a PCR-based 16S rDNA approach to determine the effect of two DNA isolation protocols (i.e. the bead beating and Triton-X100 method) on the detection limit of seven
Challenges and Limitations of Applying an Emotion-driven Design Approach on Elderly Users
DEFF Research Database (Denmark)
Andersen, Casper L.; Gudmundsson, Hjalte P.; Achiche, Sofiane
2011-01-01
a competitive advantage for companies. In this paper, challenges of applying an emotion-driven design approach to elderly people, in order to identify their user needs towards walking frames, are discussed. The discussion is based on the experiences and results obtained from the case study, in particular those related to the participants' age and cognitive abilities. The challenges encountered are discussed, and guidelines on what should be taken into account to facilitate an emotion-driven design approach for elderly people are proposed.
DEFF Research Database (Denmark)
Bollerslev, Anne Mette; Nauta, Maarten; Hansen, Tina Beck
2017-01-01
Microbiological limits are widely used in food processing as an aid to reduce the exposure to hazardous microorganisms for the consumers. However, in pork, the prevalence and concentrations of Salmonella are generally low, and microbiological limits are not considered an efficient tool to support food safety. The risk model used for this purpose includes the dose-response relationship for Salmonella and a reduction factor to account for preparation of the fresh pork. By use of the risk model, it was estimated that the majority of salmonellosis cases caused by the consumption of pork in Denmark is caused by the small fraction of pork products that has enterococci concentrations above 5 log CFU/g. This illustrates that our approach can be used to evaluate the potential effect of different microbiological limits, and therefore the perspective of this novel approach is that it can be used for definition of a risk-based microbiological...
Oktaviyanthi, R.; Dahlan, J. A.
2018-04-01
This study aims to develop student worksheets that correspond to the Cognitive Apprenticeship learning approach. The main subject in this student worksheet is Functions and Limits, with the branch topics Continuity and Limits of Functions. Two indicators of learning achievement are intended to be developed in the student worksheet: (1) the student can explain the concept of limit by using the formal definition of limit, and (2) the student can evaluate the value of the limit of a function using epsilon and delta. The type of research used is development research, referring to Plomp's product-development model. The research flow starts from literature review, observation, interviews, worksheet design, expert validity testing, and a limited trial on first-year students in the 2016-2017 academic year at Universitas Serang Raya, STKIP Pelita Pratama Al-Azhar Serang, and Universitas Mathla'ul Anwar Pandeglang. Based on the product development results, the student worksheets corresponding to the Cognitive Apprenticeship learning approach are valid and reliable.
Universality and the approach to the continuum limit in lattice gauge theory
De Divitiis, G M; Guagnelli, M; Lüscher, Martin; Petronzio, Roberto; Sommer, Rainer; Weisz, P; Wolff, U; de Divitiis, G; Frezzotti, R; Guagnelli, M; Luescher, M; Petronzio, R; Sommer, R; Weisz, P; Wolff, U
1995-01-01
The universality of the continuum limit and the applicability of renormalized perturbation theory are tested in the SU(2) lattice gauge theory by computing two different non-perturbatively defined running couplings over a large range of energies. The lattice data (which were generated on the powerful APE computers at Rome II and DESY) are extrapolated to the continuum limit by simulating sequences of lattices with decreasing spacings. Our results confirm the expected universality at all energies to a precision of a few percent. We find, however, that perturbation theory must be used with care when matching different renormalized couplings at high energies.
Maximum likely scale estimation
DEFF Research Database (Denmark)
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
Robust Maximum Association Estimators
A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)
2017-01-01
The maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections α′X and β′Y can attain. Taking the Pearson correlation as projection index results in the first canonical correlation
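The projection-index idea with the Pearson correlation reduces to the first canonical correlation, which can be sketched in a few lines of NumPy (the function name and the whitening-plus-SVD route are illustrative choices for the classical case, not the authors' robust estimator):

```python
import numpy as np

def first_canonical_correlation(X, Y, eps=1e-9):
    """Maximum Pearson correlation between one-dimensional projections
    a'X and b'Y, i.e. the first canonical correlation."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = len(X)
    Sxx, Syy, Sxy = Xc.T @ Xc / n, Yc.T @ Yc / n, Xc.T @ Yc / n

    def inv_sqrt(S):
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(np.maximum(w, eps))) @ V.T

    # Whiten both blocks; the top singular value is the maximum association
    M = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(M, compute_uv=False)[0]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
Y = X @ rng.normal(size=(3, 2))   # Y is a noiseless linear image of X
print(first_canonical_correlation(X, Y))   # ≈ 1.0
```

A robust variant would replace the sample covariances above with robust scatter estimates, which is the direction the paper takes.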
A large deviations approach to limit theory for heavy-tailed time series
DEFF Research Database (Denmark)
Mikosch, Thomas Valentin; Wintenberger, Olivier
2016-01-01
and vanishing in some neighborhood of the origin. We study a variety of such functionals, including large deviations of random walks, their suprema, the ruin functional, and further derive weak limit theory for maxima, point processes, cluster functionals and the tail empirical process. One of the main results...
Exercise testing, limitation and training in patients with cystic fibrosis. A personalized approach
Werkman, M.S.
2014-01-01
Exercise testing and training are cornerstones in regular CF care. However, no consensus exists in literature about which exercise test protocol should be used for individual patients. Furthermore, divergence exists in insights about both the dominant exercise limiting mechanisms and the
A Spectral Approach for Quenched Limit Theorems for Random Expanding Dynamical Systems
Dragičević, D.; Froyland, G.; González-Tokman, C.; Vaienti, S.
2018-01-01
We prove quenched versions of (i) a large deviations principle (LDP), (ii) a central limit theorem (CLT), and (iii) a local central limit theorem for non-autonomous dynamical systems. A key advance is the extension of the spectral method, commonly used in limit laws for deterministic maps, to the general random setting. We achieve this via multiplicative ergodic theory and the development of a general framework to control the regularity of Lyapunov exponents of twisted transfer operator cocycles with respect to a twist parameter. While some versions of the LDP and CLT have previously been proved with other techniques, the local central limit theorem is, to our knowledge, a completely new result, and one that demonstrates the strength of our method. Applications include non-autonomous (piecewise) expanding maps, defined by random compositions of the form T_{σ^{n−1}ω} ∘ ⋯ ∘ T_{σω} ∘ T_ω. An important aspect of our results is that we only assume ergodicity and invertibility of the random driving σ : Ω → Ω; in particular no expansivity or mixing properties are required.
An Effective Approach to Biomedical Information Extraction with Limited Training Data
Jonnalagadda, Siddhartha
2011-01-01
In the current millennium, extensive use of computers and the internet caused an exponential increase in information. Few research areas are as important as information extraction, which primarily involves extracting concepts and the relations between them from free text. Limitations in the size of training data, lack of lexicons and lack of…
Gomez-Ramirez, Jaime; Sanz, Ricardo
2013-09-01
One of the most important scientific challenges today is the quantitative and predictive understanding of biological function. Classical mathematical and computational approaches have been enormously successful in modeling inert matter, but they may be inadequate to address inherent features of biological systems. We address the conceptual and methodological obstacles that lie in the inverse problem in biological systems modeling. We introduce a full Bayesian approach (FBA), a theoretical framework to study biological function, in which probability distributions are conditional on biophysical information that physically resides in the biological system that is studied by the scientist. Copyright © 2013 Elsevier Ltd. All rights reserved.
Seiniger, Patrick; Bartels, Oliver; Pastor, Claus; Wisch, Marcus
2013-01-01
It is commonly agreed that active safety will have a significant impact on reducing accident figures for pedestrians and probably also bicyclists. However, chances and limitations for active safety systems have so far only been derived from accident data and the current state of the art, based on proprietary simulation models. The objective of this article is to investigate these chances and limitations by developing an open simulation model. This article introduces a simulation model, incorporating accident kinematics, driving dynamics, driver reaction times, pedestrian dynamics, performance parameters of different autonomous emergency braking (AEB) generations, as well as legal and logical limitations. The level of detail for available pedestrian accident data is limited. Relevant variables, especially the timing of the pedestrian's appearance and the pedestrian's moving speed, are estimated using assumptions. The model in this article uses the fact that a pedestrian and a vehicle in an accident must have been in the same spot at the same time and defines the impact position as a relevant accident parameter, which is usually available from accident data. The calculations done within the model identify the possible timing available for braking by an AEB system as well as the possible speed reduction for different accident scenarios and system configurations. The simulation model identifies the lateral impact position of the pedestrian as a significant parameter for system performance, and the system layout is designed to brake when the accident becomes unavoidable by the vehicle driver. Scenarios with a pedestrian running from behind an obstruction are the most demanding and will very likely never be avoidable for all vehicle speeds due to physical limits. Scenarios with an unobstructed walking person will very likely be treatable for a wide speed range by next-generation AEB systems.
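The kinematic core of such a simulation can be illustrated with a minimal sketch (the parameter values and the constant-deceleration model are assumptions for illustration, not the article's calibrated model):

```python
import math

def impact_speed(v0, sight_distance, reaction_time, decel):
    """Speed (m/s) at the pedestrian's position for a vehicle at v0 (m/s)
    that starts braking with constant decel (m/s^2) after reaction_time (s),
    given the distance (m) at which the pedestrian becomes visible."""
    braking_distance = sight_distance - v0 * reaction_time
    if braking_distance <= 0:                  # collision before brakes engage
        return v0
    v2 = v0 ** 2 - 2 * decel * braking_distance
    return math.sqrt(v2) if v2 > 0 else 0.0    # 0.0 means the accident is avoided

# Unobstructed pedestrian seen early: accident avoided
print(impact_speed(v0=10.0, sight_distance=20.0, reaction_time=0.5, decel=8.0))  # 0.0
# Pedestrian stepping out from behind an obstruction: only a speed reduction
print(impact_speed(v0=10.0, sight_distance=8.0, reaction_time=0.5, decel=8.0))   # ≈ 7.21
```

The second case illustrates the article's point: with a short sight distance, even immediate full braking only reduces, but does not avoid, the impact.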
Advantages and limitations of the use of optogenetic approach in studying fast-scale spike encoding.
Directory of Open Access Journals (Sweden)
Aleksey Malyshev
Full Text Available Understanding single-neuron computations and encoding performed by spike-generation mechanisms of cortical neurons is one of the central challenges for cell electrophysiology and computational neuroscience. An established paradigm to study spike encoding in controlled conditions in vitro uses intracellular injection of a mixture of signals with fluctuating currents that mimic in vivo-like background activity. However, this technique has two serious limitations: it uses current injection, while synaptic activation leads to changes of conductance, and current injection is technically most feasible in the soma, while the vast majority of synaptic inputs are located on the dendrites. Recent progress in optogenetics provides an opportunity to circumvent these limitations. Transgenic expression of light-activated ionic channels, such as Channelrhodopsin2 (ChR2), allows induction of controlled conductance changes even in thin distant dendrites. Here we show that photostimulation provides a useful extension of the tools to study neuronal encoding, but it has its own limitations. Optically induced fluctuating currents have a low cutoff (~70 Hz), thus limiting the dynamic range of the frequency response of cortical neurons. This leads to severe underestimation of the ability of neurons to phase-lock their firing to high frequency components of the input. This limitation could be worked around by using short (2 ms) light stimuli which produce membrane potential responses resembling EPSPs by their fast onset and prolonged decay kinetics. We show that combining application of short light stimuli to different parts of the dendritic tree for mimicking distant EPSCs with somatic injection of fluctuating current that mimics fluctuations of membrane potential in vivo, allowed us to study fast encoding of artificial EPSPs photoinduced at different distances from the soma. We conclude that dendritic photostimulation of ChR2 with short light pulses provides a powerful tool to
William L. Thompson
2003-01-01
Hankin and Reeves' (1988) approach to estimating fish abundance in small streams has been applied in stream fish studies across North America. However, their population estimator relies on two key assumptions: (1) removal estimates are equal to the true numbers of fish, and (2) removal estimates are highly correlated with snorkel counts within a subset of sampled...
Energy Technology Data Exchange (ETDEWEB)
Safe, S. [Texas A and M Univ., College Station, TX (United States). Dept. of Veterinary Physiology and Pharmacology
1995-12-31
2,3,7,8-Tetrachlorodibenzo-p-dioxin (TCDD) and related halogenated aromatic hydrocarbons (HAHs) are present as complex mixtures of polychlorinated dibenzo-p-dioxins (PCDDs), dibenzofurans (PCDFs) and biphenyls (PCBs) in most environmental matrices. Risk management of these mixtures utilizes the toxic equivalency factor (TEF) approach, in which the TCDD (dioxin) toxic equivalents (TEQ) of a mixture are a summation of each congener concentration (C_i) times its TEF_i (potency relative to TCDD): TEQ_mixture = Σ_i C_i × TEF_i. TEQs are determined only for those HAHs which are aryl hydrocarbon (Ah) receptor agonists, and this approach assumes that the toxic or biochemical effects of individual compounds in a mixture are additive. Several in vivo and in vitro laboratory and field studies with different HAH mixtures have been utilized to validate the TEF approach. For some responses, the calculated toxicities of PCDD/PCDF and PCB mixtures predict the observed toxic potencies. However, for fetal cleft palate and immunotoxicity in mice, nonadditive (antagonistic) responses are observed using complex PCB mixtures or binary mixtures containing an Ah receptor agonist with 2,2′,4,4′,5,5′-hexachlorobiphenyl (PCB153). The potential interactive effects of PCBs and other dietary Ah receptor antagonists suggest that the TEF approach for risk management of HAHs requires further refinement and should be used selectively.
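The TEF summation itself is a one-liner; the sketch below uses hypothetical congener concentrations and illustrative TEF values (not measured data):

```python
def teq(concentrations, tefs):
    """TEQ_mixture = sum_i C_i * TEF_i over Ah-receptor-active congeners."""
    return sum(concentrations[c] * tefs[c] for c in concentrations)

# Hypothetical mixture (pg/g) with illustrative TEF values
conc = {"TCDD": 1.0, "PeCDD": 2.0, "PCB126": 10.0}
tef  = {"TCDD": 1.0, "PeCDD": 1.0, "PCB126": 0.1}
print(teq(conc, tef))  # 4.0
```

The additivity assumption criticized in the abstract is exactly the linearity of this sum: antagonistic interactions (e.g. with PCB153) are not representable in this form.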
On Maximum Entropy and Inference
Directory of Open Access Journals (Sweden)
Luigi Gresele
2017-11-01
Full Text Available Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method showing its ability to recover the correct model in a few prototype cases and discuss its application on a real dataset.
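A minimal worked example of the maximum-entropy principle (a standard exercise, not the paper's spin-model machinery): the least-structured distribution on {0, ..., K} with a prescribed mean has the exponential form p_k ∝ exp(λk), with λ fixed by the constraint, found here by bisection:

```python
import math

def maxent_with_mean(K, target_mean, tol=1e-12):
    """Maximum-entropy distribution on {0,...,K} with a given mean:
    p_k proportional to exp(lam * k); lam is found by bisection.
    The bracket [-5, 5] is an assumption that covers moderate means."""
    def mean_of(lam):
        w = [math.exp(lam * k) for k in range(K + 1)]
        Z = sum(w)
        return sum(k * wk for k, wk in enumerate(w)) / Z

    lo, hi = -5.0, 5.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_of(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * k) for k in range(K + 1)]
    Z = sum(w)
    return [wk / Z for wk in w]

p = maxent_with_mean(K=10, target_mean=5.0)
print(p[0])  # ≈ 0.0909: lam ≈ 0 gives the uniform distribution, 1/11 per state
```

With the symmetric mean 5 the constraint is uninformative and maxent returns the uniform distribution; any other mean tilts the distribution exponentially.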
A QMU approach for characterizing the operability limits of air-breathing hypersonic vehicles
International Nuclear Information System (INIS)
Iaccarino, Gianluca; Pecnik, Rene; Glimm, James; Sharp, David
2011-01-01
The operability limits of a supersonic combustion engine for an air-breathing hypersonic vehicle are characterized using numerical simulations and an uncertainty quantification methodology. The time-dependent compressible flow equations with heat release are solved in a simplified configuration. Verification, calibration and validation are carried out to assess the ability of the model to reproduce the flow/thermal interactions that occur when the engine unstarts due to thermal choking. Quantification of margins and uncertainty (QMU) is used to determine the safe operation region for a range of fuel flow rates and combustor geometries. - Highlights: → In this work we introduce a method to study the operability limits of hypersonic scramjet engines. → The method is based on a calibrated heat release model. → It accounts explicitly for uncertainties due to flight conditions and model correlations. → We examine changes due to the combustor geometry and fuel injection.
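The QMU bookkeeping reduces to a margin-to-uncertainty ratio; a schematic sketch with illustrative numbers (not the paper's scramjet data):

```python
def qmu_confidence_ratio(operating_value, limit_value, uncertainty):
    """QMU confidence ratio M/U: the margin between the operating point and
    the performance limit, divided by the uncertainty in that margin.
    M/U > 1 indicates operation safely inside the limit."""
    margin = limit_value - operating_value
    return margin / uncertainty

# Illustrative: normalized fuel flow rate vs. an (uncertain) thermal-choking limit
print(qmu_confidence_ratio(operating_value=0.75, limit_value=1.0, uncertainty=0.125))  # 2.0
```

In the paper's setting the limit and its uncertainty would come from the calibrated heat-release simulations; here both are placeholder numbers.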
Revegetation in China’s Loess Plateau is approaching sustainable water resource limits
Feng, Xiaoming; Fu, Bojie; Piao, Shilong; Wang, Shuai; Ciais, Philippe; Zeng, Zhenzhong; Lü, Yihe; Zeng, Yuan; Li, Yue; Jiang, Xiaohui; Wu, Bingfang
2016-11-01
Revegetation of degraded ecosystems provides opportunities for carbon sequestration and bioenergy production. However, vegetation expansion in water-limited areas creates potentially conflicting demands for water between the ecosystem and humans. Current understanding of these competing demands is still limited. Here, we study the semi-arid Loess Plateau in China, where the 'Grain to Green' large-scale revegetation programme has been in operation since 1999. As expected, we found that the new planting has caused both net primary productivity (NPP) and evapotranspiration (ET) to increase. Also the increase of ET has induced a significant (p develop a new conceptual framework to determine the critical carbon sequestration that is sustainable in terms of both ecological and socio-economic resource demands in a coupled anthropogenic-biological system.
Liquidity dynamics in an electronic open limit order book: An event study approach
Gomber, Peter; Schweickert, Uwe; Theissen, Erik
2011-01-01
We analyze the dynamics of liquidity in Xetra, an electronic open limit order book. We use the Exchange Liquidity Measure (XLM), a measure of the cost of a roundtrip trade of given size V. This measure captures the price and the quantity dimension of liquidity. We present descriptive statistics, analyze the cross-sectional determinants of the XLM measure and document its intraday pattern. Our main contribution is an analysis of the dynamics of the XLM measure around liquidity shocks. We use i...
Implementation of upper limit calculation for a poisson variable by bayesian approach
International Nuclear Information System (INIS)
Zhu Yongsheng
2008-01-01
The calculation of Bayesian confidence upper limit for a Poisson variable including both signal and background with and without systematic uncertainties has been formulated. A Fortran 77 routine, BPULE, has been developed to implement the calculation. The routine can account for systematic uncertainties in the background expectation and signal efficiency. The systematic uncertainties may be separately parameterized by a Gaussian, Log-Gaussian or flat probability density function (pdf). Some technical details of BPULE have been discussed. (authors)
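The flat-prior case without systematic uncertainties can be sketched numerically (this is an illustrative Python reimplementation of the idea, not the BPULE routine):

```python
import numpy as np

def bayes_poisson_upper_limit(n_obs, b, cl=0.95, s_max=50.0, num=200001):
    """Bayesian upper limit on a Poisson signal s with known background b,
    observed count n_obs, and a flat prior on s >= 0 (no systematics)."""
    s = np.linspace(0.0, s_max, num)
    # Unnormalized log-posterior: Poisson likelihood times a flat prior
    log_post = n_obs * np.log(s + b + 1e-300) - (s + b)
    post = np.exp(log_post - log_post.max())
    cdf = np.cumsum(post)
    cdf /= cdf[-1]
    return s[np.searchsorted(cdf, cl)]

# Zero events observed, no background: the classic 95% limit -ln(0.05) ≈ 3.0
print(bayes_poisson_upper_limit(0, 0.0))
```

Systematic uncertainties, as handled by BPULE, would be folded in by integrating the likelihood over a Gaussian, log-Gaussian or flat pdf for the background expectation and signal efficiency.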
A labview approach to instrumentation for the TFTR bumper limiter alignment project
International Nuclear Information System (INIS)
Skelly, G.N.; Owens, D.K.
1992-01-01
This paper reports on a project recently undertaken to measure the alignment of the TFTR bumper limiter in relation to the toroidal magnetic field axis. The process involved the measurement of the toroidal magnetic field and the positions of the tiles that make up the bumper limiter. The basis for the instrument control and data acquisition system was National Instruments' LabVIEW 2. LabVIEW is a graphical programming system for developing scientific and engineering applications on a Macintosh. For this project, a Macintosh IIci controlled the IEEE-488 GPIB programmable instruments via an interface box connected to the SCSI port of the computer. With LabVIEW, users create graphical software modules called virtual instruments instead of writing conventional text-based code. To measure the magnetic field, the control system acquired data from two nuclear magnetic resonance magnetometers while the toroidal field coils were pulsed. To measure the position of the tiles on the limiter, an instrumented mechanical arm was used inside the vessel.
Approaches to the calculation of limitations on nuclear detonations for peaceful purposes
Energy Technology Data Exchange (ETDEWEB)
Whipple, G H [School of Public Health, University of Michigan, Ann Arbor, MI (United States)
1969-07-01
The long-term equilibrium levels of tritium, krypton-85 and carbon-14 which are acceptable in the environment have been estimated on the following premises: 1) the three isotopes reach the environment and equilibrate throughout it in periods shorter than their half lives, 2) nuclear detonations and nuclear power constitute the dominant sources of these isotopes, 3) the doses from these three isotopes add to one another and to the doses from other radioactive isotopes released to the environment, and 4) the United States, by virtue of its population, is entitled to 6% of the world's capacity to accept radioactive wastes. These premises lead to the conclusion that U.S. nuclear detonations are limited by carbon-14 to 60 megatons per year. The corresponding limit for U.S. nuclear power appears to be set by krypton-85 at 100,000 electrical megawatts, although data for carbon-14 production by nuclear power are not available. It is noted that if the equilibration assumed in these estimates does not occur, the limits will in general be lower than those given above. (author)
Approaches to the calculation of limitations on nuclear detonations for peaceful purposes
International Nuclear Information System (INIS)
Whipple, G.H.
1969-01-01
The long-term equilibrium levels of tritium, krypton-85 and carbon-14 which are acceptable in the environment have been estimated on the following premises: 1) the three isotopes reach the environment and equilibrate throughout it in periods shorter than their half lives, 2) nuclear detonations and nuclear power constitute the dominant sources of these isotopes, 3) the doses from these three isotopes add to one another and to the doses from other radioactive isotopes released to the environment, and 4) the United States, by virtue of its population, is entitled to 6% of the world's capacity to accept radioactive wastes. These premises lead to the conclusion that U.S. nuclear detonations are limited by carbon-14 to 60 megatons per year. The corresponding limit for U.S. nuclear power appears to be set by krypton-85 at 100,000 electrical megawatts, although data for carbon-14 production by nuclear power are not available. It is noted that if the equilibration assumed in these estimates does not occur, the limits will in general be lower than those given above. (author)
International Nuclear Information System (INIS)
Sjoeberg, L.
1996-01-01
Risk perception has traditionally been conceived as a cognitive phenomenon, basically a question of information processing. The very term perception suggests that information processing is involved and of crucial importance. Kahneman and Tversky suggested that the use of 'heuristics' in the intuitive estimation of probabilities accounts for biased probability perception, hence claiming to explain risk perception as well. The psychometric approach of Slovic et al., a further step in the cognitive tradition, conceives of perceived risk as a function of general properties of a hazard. However, the psychometric approach is shown here to explain only about 20% of the variance of perceived risk, even less of risk acceptability. Its claim to explanatory power is based on a statistical illusion: mean values were investigated and accounted for, across hazards. A currently popular alternative to the psychometric tradition, Cultural Theory, is even less successful and explains only about 5% of the variance of perceived risk. The claims of this approach were also based on a statistical illusion: 'significant' results were reported and interpreted as being of substantial importance. The present paper presents a new approach: attitude to the risk-generating technology, general sensitivity to risks and specific risk explained well over 60% of the variance of perceived risk of nuclear waste, in a study of extensive data from a representative sample of the Swedish population. The attitude component functioning as an explanatory factor of perceived risk, rather than as a consequence of perceived risk, suggests strongly that perceived risk is something other than cognition. Implications for risk communication are discussed. (author)
International Nuclear Information System (INIS)
Zimen, K.E.
1975-02-01
The paper gives a model of energy consumption and a programme for its application. Previous models are mainly criticized on the grounds that new technological developments as well as adjustments due to learning processes of homo sapiens are generally not sufficiently accounted for in these models. The approach of this new model is therefore an attempt at the determination of the physical-chemical limiting values for the capacity of the global HST (homo sapiens - Tellus) system or of individual regions with respect to certain critical factors. These limiting values determined by the physical-chemical system of the earth are independent of human ingenuity and flexibility. (orig./AK)
Advantages and limitations of quantitative PCR (Q-PCR)-based approaches in microbial ecology.
Smith, Cindy J; Osborn, A Mark
2009-01-01
Quantitative PCR (Q-PCR or real-time PCR) approaches are now widely applied in microbial ecology to quantify the abundance and expression of taxonomic and functional gene markers within the environment. Q-PCR-based analyses combine 'traditional' end-point detection PCR with fluorescent detection technologies to record the accumulation of amplicons in 'real time' during each cycle of the PCR amplification. By detection of amplicons during the early exponential phase of the PCR, this enables the quantification of gene (or transcript) numbers when these are proportional to the starting template concentration. When Q-PCR is coupled with a preceding reverse transcription reaction, it can be used to quantify gene expression (RT-Q-PCR). This review firstly addresses the theoretical and practical implementation of Q-PCR and RT-Q-PCR protocols in microbial ecology, highlighting key experimental considerations. Secondly, we review the applications of (RT)-Q-PCR analyses in environmental microbiology and evaluate the contribution and advances gained from such approaches. Finally, we conclude by offering future perspectives on the application of (RT)-Q-PCR in furthering understanding in microbial ecology, in particular, when coupled with other molecular approaches and more traditional investigations of environmental systems.
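The quantification step rests on a standard curve of quantification cycle (Cq) against log10 of starting copies; a small sketch with ideal synthetic dilution data (all values are illustrative):

```python
import numpy as np

# Ideal synthetic standard curve: 10-fold dilutions at 100% efficiency,
# where the slope of Cq vs log10(copies) is -1/log10(2) ≈ -3.32
log_copies = np.array([3.0, 4.0, 5.0, 6.0, 7.0])
cq = 35.0 - (log_copies - 3.0) / np.log10(2.0)

slope, intercept = np.polyfit(log_copies, cq, 1)
efficiency = 10.0 ** (-1.0 / slope) - 1.0    # 1.0 means perfect per-cycle doubling
print(round(efficiency, 3))                  # 1.0

# Quantify an unknown sample from its measured Cq via the fitted curve
cq_unknown = 30.0
copies = 10.0 ** ((cq_unknown - intercept) / slope)
print(round(copies))                         # 32000
```

Real assays deviate from the ideal slope, which is why the efficiency estimate matters: quantification is only valid in the range where amplicon accumulation is proportional to the starting template.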
International Nuclear Information System (INIS)
Enslin, J.H.R.
1990-01-01
A well-engineered renewable remote energy system, utilizing the principle of Maximum Power Point Tracking, can be more cost-effective, more reliable, and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Practical field measurements show that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small Remote Area Power Supply systems. The advantages are much greater for larger temperature variations and higher-power systems. Other advantages include optimal sizing and system monitoring and control.
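The hill-climbing idea can be sketched as a basic perturb-and-observe loop on a toy panel power curve (the curve shape and all parameters are assumptions for illustration, not the reported converter):

```python
def panel_power(v):
    """Toy PV curve: current falls off sharply near the open-circuit voltage."""
    v_oc, i_sc = 20.0, 5.0
    i = i_sc * max(0.0, 1.0 - (v / v_oc) ** 8)
    return v * i

def mppt_hill_climb(power, v=5.0, step=0.05, iters=2000):
    """Perturb-and-observe: keep stepping in the direction that raises power;
    reverse the perturbation whenever power drops."""
    p = power(v)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = power(v_new)
        if p_new < p:
            direction = -direction
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = mppt_hill_climb(panel_power)
print(round(v_mpp, 1), round(p_mpp, 1))   # near the true MPP at v ≈ 15.2 V
```

Once converged, the operating point oscillates within one or two perturbation steps of the true maximum, which is the usual trade-off between tracking speed and steady-state ripple.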
Approaches for the development of occupational exposure limits for man-made mineral fibres (MMMFs)
International Nuclear Information System (INIS)
Ziegler-Skylakakis, Kyriakoula
2004-01-01
Occupational exposure limits (OELs) are an essential tool in the control of exposure to hazardous chemical agents, and serve to minimise the occurrence of occupational diseases associated with such exposure. The setting of OELs, together with other associated measures, forms an essential part of the European Community's strategy on health and safety at work, upon which the legislative framework for the protection of workers from risks related to chemical agents is based. The European Commission is assisted by the Scientific Committee on Occupational Exposure Limits (SCOEL) in its work of setting OELs for hazardous chemical agents. The procedure for setting OELs requires information on the toxic mechanisms of an agent that should make it possible to differentiate between thresholded and non-thresholded mechanisms. In the first case, a no-observed adverse effect level (NOAEL) can be defined, which can serve as the basis for deriving an OEL. In the latter case, any exposure is correlated with a certain risk. If adequate scientific data are available, SCOEL estimates the risk associated with a series of exposure levels. This can then be used for guidance when setting OELs at European level. Man-made mineral fibres (MMMFs) are widely used at different worksites. MMMF products can release airborne respirable fibres during their production, use and removal. According to the classification of the EU system, all MMMF fibres are considered to be irritants and are classified for carcinogenicity. EU legislation foresees the use of limit values as one of the provisions for the protection of workers from the risks related to exposure to carcinogens. In the following paper, the research requirements identified by SCOEL for the development of OELs for MMMFs will be presented.
Determination of the maximum-depth to potential field sources by a maximum structural index method
Fedi, M.; Florio, G.
2013-01-01
A simple and fast determination of the limiting depth to the sources can significantly aid data interpretation. To this end we explore the possibility of determining those source parameters shared by all the classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, by using for example the well-known Bott-Smith rules. These rules involve only the knowledge of the field and its horizontal gradient maxima, and are independent of the density contrast. Thanks to the direct relationship between structural index and depth to sources, we work out a simple and fast strategy to obtain the maximum depth by using semi-automated methods, such as Euler deconvolution or the depth-from-extreme-points (DEXP) method. The proposed method consists of estimating the maximum depth as the one obtained for the highest allowable value of the structural index (Nmax). Nmax may be easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity field or magnetic field). We tested our approach on synthetic models against the results obtained by the classical Bott-Smith formulas, and the results are in fact very similar, confirming the validity of this method. However, while the Bott-Smith formulas are restricted to the gravity field only, our method is applicable also to the magnetic field and to any derivative of the gravity and magnetic fields. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)max/fmax ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimation of the maximum depth agrees with the seismic information.
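The Euler-deconvolution step can be illustrated on a synthetic 2D homogeneous field (a sketch under stated assumptions: the source model, amplitude and structural index are made up, and the vertical derivative is taken analytically here, whereas in practice it would be computed from the measured field):

```python
import numpy as np

# Synthetic 2D field of a point source at (x0, z0), homogeneous of degree -N
x0_true, z0_true, A, N = 3.0, 2.0, 10.0, 2
x = np.linspace(-10.0, 10.0, 4001)
r2 = (x - x0_true) ** 2 + z0_true ** 2
f = A / r2                             # field observed along the z = 0 line
fx = np.gradient(f, x)                 # horizontal derivative (numerical)
fz = 2.0 * A * z0_true / r2 ** 2       # vertical derivative (analytic here)

# Euler's equation at z = 0:  x*fx + N*f = x0*fx + z0*fz
# -> linear least squares for the source position (x0, z0)
G = np.column_stack([fx, fz])
d = x * fx + N * f
x0_est, z0_est = np.linalg.lstsq(G, d, rcond=None)[0]
print(round(x0_est, 1), round(z0_est, 1))   # close to the true (3.0, 2.0)
```

In the strategy described above, repeating this fit for increasing trial structural indices up to Nmax yields increasing depth estimates, and the one at Nmax is taken as the maximum depth compatible with the data.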
Approaching the Ultimate Limits of Communication Efficiency with a Photon-Counting Detector
Erkmen, Baris; Moision, Bruce; Dolinar, Samuel J.; Birnbaum, Kevin M.; Divsalar, Dariush
2012-01-01
Coherent states achieve the Holevo capacity of a pure-loss channel when paired with an optimal measurement, but a physical realization of this measurement is as of yet unknown, and it is also likely to be of high complexity. In this paper, we focus on the photon-counting measurement and study the photon and dimensional efficiencies attainable with modulations over classical- and nonclassical-state alphabets. We first review the state-of-the-art coherent on-off-keying (OOK) with a photon-counting measurement, illustrating its asymptotic inefficiency relative to the Holevo limit. We show that a commonly made Poisson approximation in thermal noise leads to unbounded photon information efficiencies, violating the conjectured Holevo limit. We analyze two binary-modulation architectures that improve upon the dimensional versus photon efficiency tradeoff achievable with conventional OOK. We show that at high photon efficiency these architectures achieve an efficiency tradeoff that differs from the best possible tradeoff--determined by the Holevo capacity--by only a constant factor. The first architecture we analyze is a coherent-state transmitter that relies on feedback from the receiver to control the transmitted energy. The second architecture uses a single-photon number-state source.
Utility approach to decision-making in extended T1 and limited T2 glottic carcinoma.
van Loon, Yda; Stiggelbout, Anne M; Hakkesteegt, Marieke M; Langeveld, Ton P M; de Jong, Rob J Baatenburg; Sjögren, Elisabeth V
2017-04-01
It is still undecided whether endoscopic laser surgery or radiotherapy is the preferable treatment in extended T1 and limited T2 glottic tumors. Health utilities assessed from patients can aid in decision-making. Patients treated for extended T1 or limited T2 glottic carcinoma by laser surgery (n = 12) or radiotherapy (n = 14) assigned health utilities using a visual analog scale (VAS) and the time tradeoff (TTO) technique, and scored their voice handicap using the Voice Handicap Index (VHI). VAS and TTO scores were slightly lower for the laser group compared to the radiotherapy group, however, not significantly so. The VHI correlated with the VAS score; VHI scores were very low in both groups and can be considered (near) normal. Patients show no clear preference for the outcomes of laser surgery or radiotherapy from a quality of life (QOL) or voice handicap point of view. These data can now be incorporated into decision-making models. © 2017 Wiley Periodicals, Inc. Head Neck 39: 779-785, 2017.
Maximum permissible voltage of YBCO coated conductors
Energy Technology Data Exchange (ETDEWEB)
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine three kinds of tapes’ maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of the CCs used in the design of an SFCL can be determined.
New approach to the theory of coupled πNN-NN system. III. A three-body limit
International Nuclear Information System (INIS)
Avishai, Y.; Mizutani, T.
1980-01-01
In the limit where the pion is restricted to be emitted only by the nucleon that first absorbed it, it is shown that the equations previously developed to describe the coupled πNN (πd) - NN system reduce to conventional three-body equations. Specifically, it is found that in this limit the input πN P11 amplitude enters on-shell and is thus directly related to the experimental phase shift, contrary to the original equations, where the direct (dressed) nucleon pole term and the non-pole part of this partial wave enter separately. The present study clarifies the limitations of a pure three-body approach to the πNN-NN problem, and suggests a rare opportunity to observe possible resonance behavior in the non-pole part of the πN P11 amplitude through πd experiments.
Directory of Open Access Journals (Sweden)
Doris Weidemann
2009-01-01
Full Text Available Despite the huge interest in sojourner adjustment, there is still a lack of qualitative as well as of longitudinal research that would offer more detailed insights into intercultural learning processes during overseas stays. The present study aims to partly fill that gap by documenting changes in knowledge structures and general living experiences of fifteen German sojourners in Taiwan in a longitudinal, cultural-psychological study. As part of a multimethod design a structure formation technique was used to document subjective theories on giving/losing face and their changes over time. In a second step results from this study are compared to knowledge-structures of seven long-term German residents in Taiwan, and implications for the conceptualization of intercultural learning will be proposed. Finally, results from both studies serve to discuss the potential and limits of structure formation techniques in the field of intercultural communication research. URN: urn:nbn:de:0114-fqs0901435
Sun, Yang; Song, Huajing; Zhang, Feng; Yang, Lin; Ye, Zhuo; Mendelev, Mikhail I; Wang, Cai-Zhuang; Ho, Kai-Ming
2018-02-23
The crystal nucleation from liquid in most cases is too rare to be accessed within the limited time scales of the conventional molecular dynamics (MD) simulation. Here, we developed a "persistent embryo" method to facilitate crystal nucleation in MD simulations by preventing small crystal embryos from melting using external spring forces. We applied this method to the pure Ni case for a moderate undercooling where no nucleation can be observed in the conventional MD simulation, and obtained nucleation rate in good agreement with the experimental data. Moreover, the method is applied to simulate an even more sluggish event: the nucleation of the B2 phase in a strong glass-forming Cu-Zr alloy. The nucleation rate was found to be 8 orders of magnitude smaller than Ni at the same undercooling, which well explains the good glass formability of the alloy. Thus, our work opens a new avenue to study solidification under realistic experimental conditions via atomistic computer simulation.
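The restraint at the heart of the persistent-embryo idea can be caricatured in a few lines. The sketch below is a hypothetical 1D toy, not the authors' MD setup: "embryo" particles are tethered to their crystal-lattice sites by harmonic springs so fluctuations cannot melt the embryo back into the liquid, with a friction term standing in for the thermostat.

```python
import random

def md_with_embryo_restraint(n_steps=4000, k=1.0, gamma=0.5, dt=0.05):
    """Toy 1D semi-implicit-Euler dynamics: embryo particles are pulled
    toward lattice sites by springs (force -k*(x - site)), plus friction.
    Returns the largest residual distance from a lattice site."""
    random.seed(0)
    sites = [0.0, 1.0, 2.0, 3.0]  # hypothetical target lattice positions
    pos = [s + random.uniform(-0.4, 0.4) for s in sites]
    vel = [random.uniform(-0.1, 0.1) for _ in sites]
    for _ in range(n_steps):
        # harmonic restraint toward the lattice site + damping
        f = [-k * (x - s) - gamma * v for x, s, v in zip(pos, sites, vel)]
        vel = [v + fi * dt for v, fi in zip(vel, f)]
        pos = [x + v * dt for x, v in zip(pos, vel)]
    return max(abs(x - s) for x, s in zip(pos, sites))

residual = md_with_embryo_restraint()
```

In a real simulation the spring constant is ramped down as the embryo grows past the critical size, so the late-stage nucleation dynamics are unbiased; the toy only shows the restraint mechanism itself.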
Neuhauser, Daniel; Gao, Yi; Arntsen, Christopher; Karshenas, Cyrus; Rabani, Eran; Baer, Roi
2014-08-15
We develop a formalism to calculate the quasiparticle energy within the GW many-body perturbation correction to the density functional theory. The occupied and virtual orbitals of the Kohn-Sham Hamiltonian are replaced by stochastic orbitals used to evaluate the Green function G, the polarization potential W, and, thereby, the GW self-energy. The stochastic GW (sGW) formalism relies on novel theoretical concepts such as stochastic time-dependent Hartree propagation, stochastic matrix compression, and spatial or temporal stochastic decoupling techniques. Beyond the theoretical interest, the formalism enables linear scaling GW calculations breaking the theoretical scaling limit for GW as well as circumventing the need for energy cutoff approximations. We illustrate the method for silicon nanocrystals of varying sizes with N_e > 3000 electrons.
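The core trick, replacing explicit sums over orbitals with averages over random vectors, is the standard stochastic (Hutchinson) trace estimator. A minimal illustration, not the sGW implementation; the matrix and sample count are arbitrary choices:

```python
import random

def hutchinson_trace(A, n_samples=4000, seed=1):
    """Stochastic estimate of tr(A) with Rademacher vectors:
    tr(A) ≈ mean over random z in {-1,+1}^n of zᵀ A z. This is the
    same statistical idea that lets stochastic orbitals stand in for
    explicit sums over Kohn-Sham states."""
    random.seed(seed)
    n = len(A)
    total = 0.0
    for _ in range(n_samples):
        z = [random.choice((-1.0, 1.0)) for _ in range(n)]
        Az = [sum(A[i][j] * z[j] for j in range(n)) for i in range(n)]
        total += sum(z[i] * Az[i] for i in range(n))
    return total / n_samples

# small symmetric test matrix; exact trace is 10.0
A = [[1.0, 0.1, 0.0, 0.1],
     [0.1, 2.0, 0.1, 0.0],
     [0.0, 0.1, 3.0, 0.1],
     [0.1, 0.0, 0.1, 4.0]]
estimate = hutchinson_trace(A)
```

The estimator's variance depends only on the off-diagonal weight of A, which is why such stochastic resolutions scale so favorably with system size.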
Design and modeling of an SJ infrared solar cell approaching upper limit of theoretical efficiency
Sahoo, G. S.; Mishra, G. P.
2018-01-01
Recent trends in photovoltaics focus on approaching the conversion-efficiency limit while making cells more cost effective. To achieve this we have to move beyond the golden era of the silicon cell and toward the III-V compound semiconductor groups, to take advantage of possibilities such as bandgap engineering by alloying these compounds. In this work we have used the low-bandgap material GaSb and designed a single-junction (SJ) cell with a conversion efficiency of 32.98%. The SILVACO ATLAS TCAD simulator has been used to simulate the proposed model using both the Ray Tracing and Transfer Matrix Methods (under 1 sun and 1000 sun of the AM1.5G spectrum). Detailed analyses of the photogeneration rate, spectral response, developed potential, external quantum efficiency (EQE), internal quantum efficiency (IQE), short-circuit current density (JSC), open-circuit voltage (VOC), fill factor (FF) and conversion efficiency (η) are discussed. The obtained results are compared with previously reported SJ solar cells.
International Nuclear Information System (INIS)
McBurney, Ruth E.; Pollard, Christine G.
1992-01-01
In 1987, the Texas Department of Health (TDH) implemented a rule to allow, under certain conditions, wastes containing limited concentrations of short-lived radionuclides (less than 300-day half-life) to be disposed of in Type I sanitary landfills. The rule was based on a technical analysis that demonstrated the degree of safety for the approximately 340 m³ of radioactive waste generated annually in Texas, and identified major restrictions and conditions for disposal. TDH's Bureau of Radiation Control staff have been able to maintain an account of licensees utilizing the rule during the past years. Several research and industrial facilities in the state have saved significantly on waste disposal expenses. Public concerns and economic impacts for licensees, as well as other regulatory aspects and experiences with the rule, are discussed. (author)
Milne, R K; Yeo, G F; Edeson, R O; Madsen, B W
1988-04-22
Stochastic models of ion channels have been based largely on Markov theory where individual states and transition rates must be specified, and sojourn-time densities for each state are constrained to be exponential. This study presents an approach based on random-sum methods and alternating-renewal theory, allowing individual states to be grouped into classes provided the successive sojourn times in a given class are independent and identically distributed. Under these conditions Markov models form a special case. The utility of the approach is illustrated by considering the effects of limited time resolution (modelled by using a discrete detection limit, xi) on the properties of observable events, with emphasis on the observed open-time (xi-open-time). The cumulants and Laplace transform for a xi-open-time are derived for a range of Markov and non-Markov models; several useful approximations to the xi-open-time density function are presented. Numerical studies show that the effects of limited time resolution can be extreme, and also highlight the relative importance of the various model parameters. The theory could form a basis for future inferential studies in which parameter estimation takes account of limited time resolution in single channel records. Appendixes include relevant results concerning random sums and a discussion of the role of exponential distributions in Markov models.
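A toy simulation makes the effect of the detection limit ξ concrete. The sketch below uses exponential sojourns (the Markov special case of the alternating-renewal setup) with hypothetical parameter values; any closure shorter than ξ goes undetected, so successive open sojourns are concatenated into one observed ξ-open-time:

```python
import random

def observed_open_times(n_cycles=20000, tau_open=1.0, tau_closed=0.2,
                        xi=0.3, seed=2):
    """Alternating-renewal toy channel: exponential open/closed sojourns.
    Closures shorter than the detection limit xi are missed, merging the
    flanking open times. Returns the mean observed (xi-)open-time."""
    random.seed(seed)
    opens = [random.expovariate(1.0 / tau_open) for _ in range(n_cycles)]
    closes = [random.expovariate(1.0 / tau_closed) for _ in range(n_cycles - 1)]
    observed, cur = [], opens[0]
    for gap, nxt in zip(closes, opens[1:]):
        if gap <= xi:          # closure unresolved: merge into current open
            cur += gap + nxt
        else:                  # closure detected: close out the open time
            observed.append(cur)
            cur = nxt
    observed.append(cur)
    return sum(observed) / len(observed)

mean_xi_open = observed_open_times()  # true mean open sojourn is 1.0
```

With these illustrative values most closures fall below ξ, so the observed mean open time is several times the true mean, echoing the paper's point that limited time resolution can distort estimates severely.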
Remarks on the maximum luminosity
Cardoso, Vitor; Ikeda, Taishi; Moore, Christopher J.; Yoo, Chul-Moon
2018-04-01
The quest for fundamental limitations on physical processes is old and venerable. Here, we investigate the maximum possible power, or luminosity, that any event can produce. We show, via full nonlinear simulations of Einstein's equations, that there exist initial conditions which give rise to arbitrarily large luminosities. However, the requirement that there is no past horizon in the spacetime seems to limit the luminosity to below the Planck value, L_P = c^5/G. Numerical relativity simulations of critical collapse yield the largest luminosities observed to date, ≈ 0.2 L_P. We also present an analytic solution to the Einstein equations which seems to give an unboundedly large luminosity; this will guide future numerical efforts to investigate super-Planckian luminosities.
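For scale, the Planck luminosity quoted above can be evaluated directly from the constants (values below are approximate):

```python
c = 2.998e8    # speed of light, m/s (approximate)
G = 6.674e-11  # Newton's constant, m^3 kg^-1 s^-2 (approximate)

L_P = c**5 / G             # Planck luminosity, ~3.6e52 W
critical_collapse = 0.2 * L_P  # largest luminosity seen in simulations
```

Notably L_P contains no factor of ħ, so this bound is classical rather than quantum in origin.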
International Nuclear Information System (INIS)
Ponman, T.J.
1984-01-01
For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)
Shetty, Sathwik Raviraj; Ruiz-Treviño, Armando S; Omay, Sacit Bulent; Almeida, Joao Paulo; Liang, Buqing; Chen, Yu-Ning; Singh, Harminder; Schwartz, Theodore H
2017-10-01
To review current management strategies for olfactory groove meningiomas (OGMs) and the recent literature comparing endoscopic endonasal (EEA) with traditional transcranial (TCA) approaches. A PubMed search of the recent literature (2011-2016) was performed to examine outcomes following EEA and TCA for OGM. The extent of resection, visual outcome, postoperative complications and recurrence rates were analyzed using percentages and proportions, the Fisher exact test and the Student's t-test using GraphPad PRISM 7.0Aa (San Diego, CA) software. There were 444 patients in the TCA group with a mean diameter of 4.61 (±1.17) cm and 101 patients in the EEA group with a mean diameter of 3.55 (±0.58) cm (p = 0.0589). GTR was achieved in 90.9% (404/444) in the TCA group and 70.2% (71/101) in the EEA group (p OGMs.
Rajan, Sharmila; Sonoda, Junichiro; Tully, Timothy; Williams, Ambrose J; Yang, Feng; Macchi, Frank; Hudson, Terry; Chen, Mark Z; Liu, Shannon; Valle, Nicole; Cowan, Kyra; Gelzleichter, Thomas
2018-04-13
bFKB1 is a humanized bispecific IgG1 antibody, created by conjoining an anti-Fibroblast Growth Factor Receptor 1 (FGFR1) half-antibody to an anti-Klothoβ (KLB) half-antibody, using the knobs-into-holes strategy. bFKB1 acts as a highly selective agonist for the FGFR1/KLB receptor complex and is intended to ameliorate obesity-associated metabolic defects by mimicking the activity of the hormone FGF21. An important aspect of the biologics product manufacturing process is to establish meaningful product specifications regarding the tolerable levels of impurities that copurify with the drug product. The aim of the current study was to determine acceptable levels of product-related impurities for bFKB1. To determine the tolerable levels of these impurities, we dosed obese mice with bFKB1 enriched with various levels of either HMW impurities or anti-FGFR1-related impurities, and measured biomarkers for KLB-independent FGFR1 signaling. Here, we show that product-related impurities of bFKB1, in particular, high molecular weight (HMW) impurities and anti-FGFR1-related impurities, when purposefully enriched, stimulate FGFR1 in a KLB-independent manner. By taking this approach, the tolerable levels of product-related impurities were successfully determined. Our study demonstrates a general pharmacology-guided approach to setting a product specification for a bispecific antibody whose homomultimer-related impurities could lead to undesired biological effects. Copyright © 2018. Published by Elsevier Inc.
Marais, Willem J; Holz, Robert E; Hu, Yu Hen; Kuehn, Ralph E; Eloranta, Edwin E; Willett, Rebecca M
2016-10-10
Atmospheric lidar observations provide a unique capability to directly observe the vertical column of cloud and aerosol scattering properties. Detector and solar-background noise, however, hinder the ability of lidar systems to provide reliable backscatter and extinction cross-section estimates. Standard methods for solving this inverse problem are most effective with high signal-to-noise ratio observations that are only available at low resolution in uniform scenes. This paper describes a novel method for solving the inverse problem with high-resolution, lower signal-to-noise ratio observations that are effective in non-uniform scenes. The novelty is twofold. First, the inferences of the backscatter and extinction are applied to images, whereas current lidar algorithms only use the information content of single profiles. Hence, the latent spatial and temporal information in noisy images are utilized to infer the cross-sections. Second, the noise associated with photon-counting lidar observations can be modeled using a Poisson distribution, and state-of-the-art tools for solving Poisson inverse problems are adapted to the atmospheric lidar problem. It is demonstrated through photon-counting high spectral resolution lidar (HSRL) simulations that the proposed algorithm yields inverted backscatter and extinction cross-sections (per unit volume) with smaller mean squared error values at higher spatial and temporal resolutions, compared to the standard approach. Two case studies of real experimental data are also provided where the proposed algorithm is applied on HSRL observations and the inverted backscatter and extinction cross-sections are compared against the standard approach.
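The statistical advantage of inverting images rather than single profiles can be illustrated with a toy Poisson experiment. This is not the authors' regularized inversion algorithm, only the underlying point: pooling spatio-temporal neighbors of photon-count data reduces estimator variance. Rates and window size below are arbitrary.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler (adequate for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1

def mse_raw_vs_pooled(lam=5.0, n=2001, window=5, seed=3):
    """Estimate a constant photon rate per pixel: raw per-pixel counts
    versus a sliding-window mean over neighbors. Returns both MSEs."""
    rng = random.Random(seed)
    counts = [poisson(lam, rng) for _ in range(n)]
    raw_mse = sum((c - lam) ** 2 for c in counts) / n
    half = window // 2
    pooled = []
    for i in range(n):
        lo = max(0, i - half)
        neighborhood = counts[lo:i + half + 1]
        pooled.append(sum(neighborhood) / len(neighborhood))
    pooled_mse = sum((p - lam) ** 2 for p in pooled) / n
    return raw_mse, pooled_mse

raw_mse, pooled_mse = mse_raw_vs_pooled()
```

In the paper's non-uniform scenes a naive sliding mean would blur real structure, which is why a Poisson inverse-problem formulation with spatial regularization is used instead of simple averaging.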
[Substitutive and dietetic approaches in childhood autistic disorder: interests and limits].
Hjiej, H; Doyen, C; Couprie, C; Kaye, K; Contejean, Y
2008-10-01
Autism is a developmental disorder that requires specialized therapeutic approaches. Influenced by various theoretical hypotheses, therapeutic programs are typically structured on a psychodynamic, biological or educative basis. Presently, educational strategies are recommended in the treatment of autism, without excluding other approaches when they are necessary. Some authors recommend dietetic or complementary approaches to the treatment of autism, which often stimulate great interest among parents but also provoke controversy among professionals. Nevertheless, professionals must be informed about this approach because parents actively request it. First of all, enzymatic disorders and metabolic errors are those most frequently evoked in the literature. The well-known phenylalanine hydroxylase deficit responsible for phenylketonuria has been described as being associated with autism. In this case, an adapted diet prevents mental retardation and autistic symptoms. Some enzymatic errors are also corrected by supplementation with uridine or ribose, for example, but these supplementations are the responsibility of specialized medical teams in the domain of neurology and cannot be applied by parents alone. Secondly, increased opioid activity due to an excess of peptides is also supposed to be at the origin of some autistic symptoms. Gluten-free or casein-free diets have thus been tested in controlled studies, with contradictory results. With such diets, some studies show symptom regression but others report negative side effects, essentially protein malnutrition. Methodological bias, small sample sizes, the use of various diagnostic criteria or heterogeneity of evaluation interfere with data analysis and interpretation, which prompted professionals to be cautious with such diets. The third hypothesis emphasized in the literature is the amino acid domain. Some autistic children lack some amino acids such as glutamic or aspartic acids for example and this deficiency
Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.
2009-01-01
We used 5704 14C, 10Be, and 3He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ~14.5 ka.
Yang, Qi; Franco, Christopher M M; Sorokin, Shirley J; Zhang, Wei
2017-02-02
For sponges (phylum Porifera), there is no reliable molecular protocol available for species identification. To address this gap, we developed a multilocus-based Sponge Identification Protocol (SIP) validated by a sample of 37 sponge species belonging to 10 orders from South Australia. The universal barcode COI mtDNA, 28S rRNA gene (D3-D5), and the nuclear ITS1-5.8S-ITS2 region were evaluated for their suitability and capacity for sponge identification. The highest Bit Score was applied to infer the identity. The reliability of SIP was validated by phylogenetic analysis. The 28S rRNA gene and COI mtDNA performed better than the ITS region in classifying sponges at various taxonomic levels. A major limitation is that the databases are not well populated and possess low diversity, making it difficult to conduct the molecular identification protocol. The identification is also impacted by the accuracy of the morphological classification of the sponges whose sequences have been submitted to the database. Re-examination of the morphological identification further demonstrated and improved the reliability of sponge identification by SIP. Integrated with morphological identification, the multilocus-based SIP offers an improved protocol for more reliable and effective sponge identification, by coupling the accuracy of different DNA markers.
Yeast biomass production: a new approach in glucose-limited feeding strategy
Directory of Open Access Journals (Sweden)
Érika Durão Vieira
2013-01-01
Full Text Available The aim of this work was to implement experimentally a simple glucose-limited feeding strategy for yeast biomass production in a bubble column reactor, based on a spreadsheet simulator suitable for industrial application. In biomass production processes using Saccharomyces cerevisiae strains, one of the constraints is the strong tendency of these species to metabolize sugars anaerobically due to catabolite repression, leading to low values of biomass yield on substrate. The usual strategy to control this metabolic tendency is the use of a fed-batch process in which the sugar source is fed incrementally and the total sugar concentration in the broth is maintained below a determined value. The simulator presented in this work was developed to control molasses feeding on the basis of a simple theoretical model that takes into account the nutritional growth needs of the yeast cell and two input data: the theoretical specific growth rate and the initial cell biomass. In the experimental assay, a commercial baker's yeast strain and molasses as sugar source were used. Experimental results showed an overall biomass yield on substrate of 0.33, a biomass increase of 6.4-fold and a specific growth rate of 0.165 h-1, in contrast to the predicted value of 0.180 h-1 in the second-stage simulation.
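The feeding logic of such a spreadsheet simulator can be sketched in a few lines: feed substrate at exactly the rate that sustains a chosen specific growth rate, so total biomass grows exponentially at that rate. All parameter values below (yield coefficient, feed sugar concentration, initial biomass, batch time) are illustrative placeholders, not the paper's data; only the specific growth rate of 0.165 h-1 and the roughly 6.4-fold biomass increase come from the abstract.

```python
import math

def simulate_fed_batch(mu_set=0.165, Yxs=0.5, X0V0=10.0, S_feed=200.0,
                       t_end=11.3, dt=0.01):
    """Euler integration of total biomass X*V (g) under a glucose-limited
    exponential feed sized to hold growth at mu_set (h^-1).
    Yxs: biomass yield on sugar (g/g); S_feed: feed sugar conc. (g/L)."""
    xv, t, feed_volume = X0V0, 0.0, 0.0
    while t < t_end:
        feed = (mu_set / Yxs) * xv / S_feed  # L/h of feed needed right now
        feed_volume += feed * dt
        xv += mu_set * xv * dt               # growth pinned at mu_set by feed
        t += dt
    return xv / X0V0, feed_volume

fold, feed_volume = simulate_fed_batch()
```

Because the feed supplies only as much sugar as growth at mu_set consumes, broth sugar stays near zero and anaerobic overflow metabolism is suppressed, which is the point of the glucose-limited strategy.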
Exploring the Obstacles and the Limits of Sustainable Development. A Theoretical Approach
Directory of Open Access Journals (Sweden)
Paula-Carmen Roșca
2017-03-01
Full Text Available The term “sustainable” or “sustainability” is currently used so much and in so many fields that it has become basically part of our everyday lives. It has been connected and linked to almost everything related to our living, to our lifestyle: energy, transport, housing, diet, clothing etc. But what does the term “sustainable” really mean? Many people may have heard about sustainable development or sustainability and may have even tried to have a sustainable living but their efforts might not be enough. The present paper is meant to bring forward a few of the limits of “sustainability” concept. Moreover, it is focused on revealing some arguments from the “other side” along with disagreements regarding some of the principles of “sustainable development” and even critics related to its progress, to its achievements. Another purpose of this paper is to draw attention over some of the issues and obstacles which may threaten the future of sustainability. The paper is also meant to highlight the impact that some stakeholders might have on the evolution of sustainable development due to their financial power, on a global scale.
The quasi-classical limit of scattering amplitude - L2-approach for short range potentials
International Nuclear Information System (INIS)
Yajima, K.; Vienna Univ.
1984-01-01
We are concerned with the asymptotic behaviour as Planck's constant ħ = h/2π → 0 of the scattering operator S^ħ associated with the pair of Schroedinger equations iħ ∂u/∂t = -(ħ²/2m)Δu + V(x)u ≡ H^ħ u and iħ ∂u/∂t = -(ħ²/2m)Δu ≡ H₀^ħ u. We shall show under certain conditions that the scattering matrix Ŝ^ħ(p,q), the distribution kernel of S^ħ in momentum representation, may be expressed in terms of a Fourier integral operator. Then, applying the stationary phase method to it, we shall prove that Ŝ^ħ has an asymptotic expansion in powers of ħ up to any order in L²-space, and that the limit as ħ → 0 of the total cross section is, generically, twice that of classical mechanics. (Author)
Multicore in production: advantages and limits of the multiprocess approach in the ATLAS experiment
International Nuclear Information System (INIS)
Binet, S; Calafiura, P; Lavrijsen, W; Leggett, C; Tatarkhanov, M; Tsulaia, V; Jha, M K; Lesny, D; Severini, H; Smith, D; Snyder, S; VanGemmeren, P; Washbrook, A
2012-01-01
The shared memory architecture of multicore CPUs provides HEP developers with the opportunity to reduce the memory footprint of their applications by sharing memory pages between the cores in a processor. ATLAS pioneered the multi-process approach to parallelize HEP applications. Using Linux fork() and the Copy On Write mechanism we implemented a simple event task farm, which allowed us to achieve sharing of almost 80% of memory pages among event worker processes for certain types of reconstruction jobs with negligible CPU overhead. By leaving the task of managing shared memory pages to the operating system, we have been able to parallelize large reconstruction and simulation applications originally written to be run in a single thread of execution with little to no change to the application code. The process of validating AthenaMP for production took ten months of concentrated effort and is expected to continue for several more months. Besides validating the software itself, an important and time-consuming aspect of running multicore applications in production was to configure the ATLAS distributed production system to handle multicore jobs. This entailed defining multicore batch queues, where the unit resource is not a core, but a whole computing node; monitoring the output of many event workers; and adapting the job definition layer to handle computing resources with different event throughputs. We will present scalability and memory usage studies, based on data gathered both on dedicated hardware and at the CERN Computer Center.
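The fork()-and-copy-on-write event task farm can be sketched in miniature (POSIX only). This is a schematic stand-in for AthenaMP, with a trivial sum standing in for event reconstruction: the parent loads the event data once, workers forked from it read that memory copy-on-write and report results through a pipe.

```python
import json
import os

def run_task_farm(events, n_workers=2):
    """Minimal fork()-based event task farm: each worker processes a
    strided slice of the parent's event list (shared copy-on-write after
    fork) and reports its partial result through a pipe."""
    r, w = os.pipe()
    pids = []
    for i in range(n_workers):
        pid = os.fork()
        if pid == 0:                      # child: worker process
            os.close(r)
            chunk = events[i::n_workers]  # COW read of the parent's memory
            payload = {"worker": i, "sum": sum(chunk)}
            os.write(w, (json.dumps(payload) + "\n").encode())
            os._exit(0)
        pids.append(pid)
    os.close(w)                           # parent keeps only the read end
    with os.fdopen(r, "rb") as f:
        data = f.read()                   # EOF once all workers have exited
    for pid in pids:
        os.waitpid(pid, 0)
    return sum(json.loads(line)["sum"] for line in data.splitlines())

total = run_task_farm(list(range(100)), n_workers=2)
```

As in the abstract, the parallelism comes entirely from the operating system's fork and COW semantics; the per-event code needs no thread-safety changes.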
Angular plasmon response of gold nanoparticles arrays: approaching the Rayleigh limit
Directory of Open Access Journals (Sweden)
Marae-Djouda Joseph
2016-07-01
Full Text Available The regular arrangement of metal nanoparticles influences their plasmonic behavior. It has been previously demonstrated that the coupling between diffracted waves and plasmon modes can give rise to extremely narrow plasmon resonances. This is the case when the single-particle localized surface plasmon resonance (λLSP is very close in value to the Rayleigh anomaly wavelength (λRA of the nanoparticles array. In this paper, we performed angle-resolved extinction measurements on a 2D array of gold nano-cylinders designed to fulfil the condition λRA<λLSP. Varying the angle of excitation offers a unique possibility to finely modify the value of λRA, thus gradually approaching the condition of coupling between diffracted waves and plasmon modes. The experimental observation of a collective dipolar resonance has been interpreted by exploiting a simplified model based on the coupling of evanescent diffracted waves with plasmon modes. Among other plasmon modes, the measurement technique has also evidenced and allowed the study of a vertical plasmon mode, only visible in TM polarization at off-normal excitation incidence. The results of numerical simulations, based on the periodic Green’s tensor formalism, match well with the experimental transmission spectra and show fine details that could go unnoticed by considering only experimental data.
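The angular tuning of λRA exploited above follows the standard grating relation λRA = (a/m)(n ± sin θ) for period a, diffraction order m and surrounding index n. A sketch for in-air diffraction (n = 1) with an illustrative 400 nm period, not the array of the paper:

```python
import math

def rayleigh_anomaly_nm(period_nm, order, theta_deg, sign=+1.0):
    """Rayleigh-anomaly wavelength of a 1D periodic array in air:
    lambda_RA = (a/m) * (1 ± sin(theta))."""
    return (period_nm / order) * (1.0 + sign * math.sin(math.radians(theta_deg)))

normal = rayleigh_anomaly_nm(400.0, 1, 0.0)   # normal incidence: a/m
tilted = rayleigh_anomaly_nm(400.0, 1, 10.0)  # tilting red-shifts one anomaly
```

Varying θ thus slides λRA continuously toward (or away from) λLSP, which is exactly the knob the experiment uses to bring diffracted waves into coupling with the plasmon mode. For arrays on a substrate the substrate index replaces the 1 in the bracket.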
International Nuclear Information System (INIS)
Bruno, J.; Duro, L.; Jordana, S.; Cera, E.
1996-02-01
Solubility limits constitute a critical parameter for the determination of the mobility of radionuclides in the near field and the geosphere, and consequently for the performance assessment of nuclear waste repositories. Mounting evidence from natural system studies indicate that trace elements, and consequently radionuclides, are associated to the dynamic cycling of major geochemical components. We have recently developed a thermodynamic approach to take into consideration the co-precipitation and co-dissolution processes that mainly control this linkage. The approach has been tested in various natural system studies with encouraging results. The Pocos de Caldas natural analogue was one of the sites where a full testing of our predictive geochemical modelling capabilities were done during the analogue project. We have revisited the Pocos de Caldas data and expanded the trace element solubility calculations by considering the documented trace metal/major ion interactions. This has been done by using the co-precipitation/co-dissolution approach. The outcome is as follows: A satisfactory modelling of the behaviour of U, Zn and REEs is achieved by assuming co-precipitation with ferrihydrite. Strontium concentrations are apparently controlled by its co-dissolution from Sr-rich fluorites. From the performance assessment point of view, the present work indicates that calculated solubility limits using the co-precipitation approach are in close agreement with the actual trace element concentrations. Furthermore, the calculated radionuclide concentrations are 2-4 orders of magnitude lower than conservative solubility limits calculated by assuming equilibrium with individual trace element phases. 34 refs, 18 figs, 13 tabs
Probable maximum flood control
International Nuclear Information System (INIS)
DeGabriele, C.E.; Wu, C.L.
1991-11-01
This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Flood protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility
Introduction to maximum entropy
International Nuclear Information System (INIS)
Sivia, D.S.
1988-01-01
The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab
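The preference for MaxEnt over other regularizers can be illustrated numerically. The sketch below is an assumption-laden toy, not the lecture's method: a 1D "image" is recovered from blurred, noisy data by gradient descent on chi-squared minus an entropy term; the kernel, step size, and entropy weight `alpha` are all invented for the example.

```python
import numpy as np

# Illustrative sketch: maximum-entropy regularized deconvolution of a tiny
# 1D signal. We minimize  chi2(f) - alpha * S(f),  where
#   chi2(f) = ||A f - d||^2   and   S(f) = -sum_i f_i log(f_i / m_i)
# is the entropy relative to a flat model m. All parameters are assumptions.

rng = np.random.default_rng(0)
n = 32
true = np.zeros(n); true[10] = 1.0; true[20] = 0.5           # two spikes
kernel = np.exp(-0.5 * (np.arange(-4, 5) / 1.5) ** 2)
kernel /= kernel.sum()
A = np.array([np.convolve(np.eye(n)[i], kernel, mode="same")  # blur matrix
              for i in range(n)]).T
data = A @ true + 0.01 * rng.standard_normal(n)

alpha, m = 0.02, 1.0 / n
f = np.full(n, 1.0 / n)                                       # positive start
for _ in range(2000):
    grad_chi2 = 2 * A.T @ (A @ f - data)
    grad_S = -(np.log(f / m) + 1.0)                           # dS/df
    f -= 0.05 * (grad_chi2 - alpha * grad_S)
    f = np.clip(f, 1e-8, None)                                # entropy needs f > 0
```

The entropy term enforces positivity and pulls unconstrained pixels toward the flat model, which is the property that distinguishes MaxEnt from quadratic regularizers; the recovered signal peaks near the true spike.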
International Nuclear Information System (INIS)
Rust, D.M.
1984-01-01
The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references
Introduction to maximum entropy
International Nuclear Information System (INIS)
Sivia, D.S.
1989-01-01
The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab
Functional Maximum Autocorrelation Factors
DEFF Research Database (Denmark)
Larsen, Rasmus; Nielsen, Allan Aasbjerg
2005-01-01
Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA of [ramsay97] to functional maximum autocorrelation factors (MAF) [switzer85, larsen2001d]. We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between ... Conclusions. MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects. Functional MAF analysis is a useful method for extracting low-dimensional models of temporally or spatially ...
Regularized maximum correntropy machine
Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin
2015-01-01
In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
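The robustness mechanism described above can be sketched in a few lines. This is an assumption, not the authors' algorithm: a Gaussian-kernel correntropy objective with an L2 penalty, maximized by plain gradient ascent (the paper uses an alternating optimization), on regression-style labels with a fraction of grossly corrupted entries.

```python
import numpy as np

# Hedged sketch of a regularized correntropy machine: learn w by maximizing
#   J(w) = mean( exp(-(y - X w)^2 / (2 s^2)) ) - lam * ||w||^2
# via gradient ascent. Outlying labels receive exponentially small weights
# k_i, which is the source of robustness to label noise.

rng = np.random.default_rng(1)
n, d = 200, 3
X = rng.standard_normal((n, d))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true
y[:20] += 10.0 * rng.standard_normal(20)          # 10% grossly noisy labels

s, lam, lr = 1.0, 1e-3, 0.5
w = np.zeros(d)
for _ in range(500):
    r = y - X @ w                                  # residuals
    k = np.exp(-r**2 / (2 * s**2))                 # per-sample correntropy weight
    grad = (X.T @ (k * r)) / (n * s**2) - 2 * lam * w
    w += lr * grad
```

Because the corrupted samples end up with weights `k` near zero, the learned `w` stays close to the clean solution, whereas an unweighted squared loss would be dragged by the outliers.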
correlation between maximum dry density and cohesion
African Journals Online (AJOL)
HOD
represents maximum dry density, signifies plastic limit and is liquid limit. Researchers [6, 7] estimate compaction parameters. Aside from the correlation existing between compaction parameters and other physical quantities there are some other correlations that have been investigated by other researchers. The well-known.
Directory of Open Access Journals (Sweden)
Nicholas Stacey
2012-11-01
Full Text Available A WHO and UNICEF joint report states that in 2008, 884 million people lacked access to potable drinking water. A life-cycle approach to developing potable water systems may improve the sustainability of such systems; however, a review of the literature shows that such an approach has primarily been used for urban systems located in resourced countries. Although urbanization is increasing globally, over 40 percent of the world's population is currently rural, with many considered poor. In this paper, we present a first step towards using life-cycle assessment to develop sustainable rural water systems in resource-limited countries while pointing out the needs. For example, while there are few differences in costs and environmental impacts for many improved rural water system options, a system that uses groundwater with community standpipes is substantially lower in cost than other alternatives, with a somewhat lower environmental inventory. However, an LCA approach shows that from institutional as well as community and managerial perspectives, sustainability includes many other factors besides cost and environment that are a function of the interdependent decision process used across the life cycle of a water system by aid organizations, water user committees, and household users. These factors often present the biggest challenge to designing sustainable rural water systems for resource-limited countries.
January, Kathleen; Conway, Laura J; Deardorff, Matthew; Harrington, Ann; Krantz, Ian D; Loomes, Kathleen; Pipan, Mary; Noon, Sarah E
2016-06-01
Given the clinical complexities of Cornelia de Lange Syndrome (CdLS), the Center for CdLS and Related Diagnoses at The Children's Hospital of Philadelphia (CHOP) and The Multidisciplinary Clinic for Adolescents and Adults at Greater Baltimore Medical Center (GBMC) were established to develop a comprehensive approach to clinical management and research issues relevant to CdLS. Little work has been done to evaluate the general utility of a multispecialty approach to patient care. Previous research demonstrates several advantages and disadvantages of multispecialty care. This research aims to better understand the benefits and limitations of a multidisciplinary clinic setting for individuals with CdLS and related diagnoses. Parents of children with CdLS and related diagnoses who have visited a multidisciplinary clinic (N = 52) and who have not visited a multidisciplinary clinic (N = 69) were surveyed to investigate their attitudes. About 90.0% of multispecialty clinic attendees indicated a preference for multidisciplinary care. However, some respondents cited a need for additional clinic services including more opportunity to meet with other specialists (N = 20), such as behavioral health, and increased information about research studies (N = 15). Travel distance and expenses often prevented families' multidisciplinary clinic attendance (N = 41 and N = 35, respectively). Despite identified limitations, these findings contribute to the evidence demonstrating the utility of a multispecialty approach to patient care. This approach ultimately has the potential to not just improve healthcare for individuals with CdLS but for those with medically complex diagnoses in general. © 2016 Wiley Periodicals, Inc.
International Nuclear Information System (INIS)
Ryan, J.
1981-01-01
By understanding the sun, astrophysicists hope to expand this knowledge to understanding other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments
Maximum allowable load on wheeled mobile manipulators
International Nuclear Information System (INIS)
Habibnejad Korayem, M.; Ghariblu, H.
2003-01-01
This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator during a given trajectory. The maximum allowable loads which can be achieved by a mobile manipulator during a given trajectory are limited by a number of factors; the dynamic properties of the mobile base and the mounted manipulator, their actuator limitations, and the additional constraints applied to resolving the redundancy are probably the most important factors. To resolve the extra D.O.F. introduced by the base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value and directly depends on the additional constraint functions applied to resolve the motion redundancy
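The core idea, taking the minimum over the trajectory of the spare actuator capacity, can be sketched with a deliberately simplified gravity-only model. Everything below (torque limit, moment-arm profile, nominal torque profile) is an invented placeholder, not the paper's dynamics, which include the full base and arm equations of motion.

```python
import numpy as np

# Toy sketch of the trajectory-wise allowable-load idea (gravity-only
# simplification, not the paper's dynamic model): along a discretized
# trajectory, the payload a joint can carry is its spare torque divided by
# the payload's moment arm, and the allowable load for the whole trajectory
# is the minimum over time.

g = 9.81
tau_max = 60.0                                   # actuator torque limit, N*m (assumed)
t = np.linspace(0.0, 2.0, 200)                   # trajectory time samples, s
r = 0.6 + 0.3 * np.sin(np.pi * t / 2.0)          # payload moment arm, m (assumed)
tau_nominal = 25.0 + 10.0 * np.sin(np.pi * t)    # torque without payload, N*m (assumed)

m_allow = (tau_max - tau_nominal) / (g * r)      # instantaneous allowable mass, kg
m_max = float(m_allow.min())                     # must hold everywhere on the path
```

The minimum is attained where high nominal torque coincides with a long moment arm, which illustrates the paper's point that the allowable load depends on the trajectory (and, in the full problem, on the redundancy-resolution constraints) rather than being a single fixed value.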
Maximum mutual information regularized classification
Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin
2014-01-01
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
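To make the regularized quantity concrete, the sketch below computes the empirical mutual information between a binary classification response and the true label from their joint contingency table. This is only an illustration of the measured quantity; the paper's method differentiates an entropy-based MI estimate inside the training objective, which is not reproduced here.

```python
import numpy as np

# Illustration (not the authors' estimator): empirical mutual information
# between a thresholded classifier response R and the true label Y,
#   I(R; Y) = sum_{r,y} p(r,y) * log( p(r,y) / (p(r) p(y)) )   [nats].
# MMI regularization trains the classifier so that this quantity is large.

def mutual_information(r, y):
    """I(R;Y) in nats for two arrays of 0/1 values."""
    joint = np.zeros((2, 2))
    for ri, yi in zip(r, y):
        joint[ri, yi] += 1
    joint /= joint.sum()
    pr = joint.sum(axis=1, keepdims=True)          # marginal of R
    py = joint.sum(axis=0, keepdims=True)          # marginal of Y
    mask = joint > 0
    return float((joint[mask] * np.log(joint[mask] / (pr @ py)[mask])).sum())

rng = np.random.default_rng(2)
y = rng.integers(0, 2, 1000)
flips = (rng.random(1000) < 0.05).astype(int)      # ~5% label disagreement
informative = y ^ flips                            # 95%-accurate response
random_resp = rng.integers(0, 2, 1000)             # uninformative response

mi_informative = mutual_information(informative, y)
mi_random = mutual_information(random_resp, y)
```

An accurate response carries close to one bit about the label while a random response carries essentially none, which is exactly the gap the regularizer rewards.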
African Journals Online (AJOL)
From the 4th to 17th December 2016, the parties of the Convention for Biodiversity held their 13th conference in Cancún, Mexico. At the event, a revised red list was produced. On the list are some species featured for the first time. Others were down-listed, or moved into categories more dire than previously was the case.
Energy Technology Data Exchange (ETDEWEB)
Goldammer, W. [Brenk Systemplanung GmbH, Aachen (Germany); Helming, M.; Kuehnel, G.; Landfermann, H.-H. [Federal Ministry for the Environment, Nature Conservation and Nuclear Safety, Bonn (Germany)
2000-05-01
The clean-up of contaminated sites requires appropriate and efficient methodologies for decision-making about the priorities and extent of remedial measures, aiming at two usually conflicting goals: to protect people and the environment, and to save money and other resources. Finding the cost-effective balance between these two primary objectives is often complicated by several factors. Sensible decision-making in this situation requires the use of appropriate methodologies and tools which assist in identifying and implementing the optimal solution. The paper discusses an approach developed in Germany to achieve environmentally sound and cost-effective solutions. A basic requirement within the German approach is the limitation of individual doses in order to limit inequity between people exposed. An Action Level of 1 mSv per annum is used in this sense for the identification of sites that require further investigation and potentially remediation. On the basis of this individual dose-related criterion, secondary reference levels for directly measurable quantities such as activity concentrations have been derived, facilitating the practical application of the Action Level Concept. Decisions on remedial action, in particular for complex sites, are based on justification and optimization analyses. These take into consideration a variety of different contaminants and risks to humans and the environment arising on various exposure pathways. The optimization analyses, carried out to identify optimal remediation options, address radiological risks as well as short- and long-term costs within a cost-benefit analysis framework. Other relevant factors of influence, e.g. chemical risks or ecological damage, are incorporated as well. Comprehensive methodologies utilizing probabilistic methods have been developed to assess site conditions and possible remediation options on this basis. The approaches developed are applied within the German uranium mine rehabilitation program
Strait, J
2009-01-01
Following the incident in sector 34, considerable effort has been made to improve the systems for detecting similar faults and to improve the safety systems to limit the damage if a similar incident should occur. Nevertheless, even after the consolidation and repairs are completed, other faults may still occur in the superconducting magnet systems, which could result in damage to the LHC. Such faults include both direct failures of a particular component or system, or an incorrect response to a “normal” upset condition, for example a quench. I will review a range of faults which could be reasonably expected to occur in the superconducting magnet systems, and which could result in substantial damage and down-time to the LHC. I will evaluate the probability and the consequences of such faults, and suggest what mitigations, if any, are possible to protect against each.
Department of Housing and Urban Development — In accordance with 24 CFR Part 92.252, HUD provides maximum HOME rent limits. The maximum HOME rents are the lesser of: The fair market rent for existing housing for...
Catastrophic Disruption Threshold and Maximum Deflection from Kinetic Impact
Cheng, A. F.
2017-12-01
The use of a kinetic impactor to deflect an asteroid on a collision course with Earth was described in the NASA Near-Earth Object Survey and Deflection Analysis of Alternatives (2007) as the most mature approach for asteroid deflection and mitigation. The NASA DART mission will demonstrate asteroid deflection by kinetic impact at the Potentially Hazardous Asteroid 65803 Didymos in October 2022. The kinetic impactor approach is considered applicable with warning times of 10 years or more and with hazardous asteroid diameters of 400 m or less. In principle, a larger kinetic impactor bringing greater kinetic energy could cause a larger deflection, but input of excessive kinetic energy will cause catastrophic disruption of the target, leaving possibly large fragments still on a collision course with Earth. Thus the catastrophic disruption threshold limits the maximum deflection from a kinetic impactor. An often-cited rule of thumb states that the maximum deflection is 0.1 times the escape velocity before the target will be disrupted. It turns out this rule of thumb does not work well. A comparison to numerical simulation results shows that a similar rule applies in the gravity limit, for large targets of more than 300 m, where the maximum deflection is roughly the escape velocity at momentum enhancement factor β = 2. In the gravity limit, the rule of thumb corresponds to pure momentum coupling (μ = 1/3), but simulations find a slightly different scaling, μ = 0.43. In the smaller target size range that kinetic impactors would apply to, the catastrophic disruption limit is strength-controlled. A DART-like impactor would not disrupt any target asteroid down to sizes significantly smaller than 50 m, below which a hazardous object would not penetrate the atmosphere in any case unless it is unusually strong.
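The scaling discussed above is easy to evaluate numerically. The impactor and target values below are assumptions chosen to be DART-like, not mission numbers: the deflection speed follows from momentum conservation with enhancement factor beta, and the gravity-limit rule compares it to the target's escape speed.

```python
import math

# Back-of-envelope sketch of kinetic-impact deflection (assumed values):
#   dv    = beta * m * U / M          along-track deflection speed
#   v_esc = sqrt(2 * G * M / R)       target escape speed

G = 6.674e-11                      # m^3 kg^-1 s^-2
m, U, beta = 500.0, 6000.0, 2.0    # impactor mass (kg), speed (m/s), momentum factor
rho, R = 2000.0, 80.0              # target bulk density (kg/m^3) and radius (m)

M = rho * (4.0 / 3.0) * math.pi * R**3   # target mass, ~4.3e9 kg
dv = beta * m * U / M                    # deflection, ~1.4 mm/s
v_esc = math.sqrt(2.0 * G * M / R)       # escape speed, ~8.5 cm/s
```

For this 160 m diameter target the deflection is only about 2% of the escape speed, far from the gravity-limit disruption threshold, which is consistent with the abstract's point that in this size range the limit is set by material strength rather than gravity.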
van de Ven, Stephanie M W Y; Bemis, Kyle D; Lau, Kenneth; Adusumilli, Ravali; Kota, Uma; Stolowitz, Mark; Vitek, Olga; Mallick, Parag; Gambhir, Sanjiv S
2016-06-01
MALDI mass spectrometry imaging (MSI) is emerging as a tool for protein and peptide imaging across tissue sections. Despite extensive study, there does not yet exist a baseline study evaluating the potential capabilities for this technique to detect diverse proteins in tissue sections. In this study, we developed a systematic approach for characterizing MALDI-MSI workflows in terms of limits of detection, coefficients of variation, spatial resolution, and the identification of endogenous tissue proteins. Our goal was to quantify these figures of merit for a number of different proteins and peptides, in order to gain more insight in the feasibility of protein biomarker discovery efforts using this technique. Control proteins and peptides were deposited in serial dilutions on thinly sectioned mouse xenograft tissue. Using our experimental setup, coefficients of variation were biomarkers and a new benchmarking strategy that can be used for comparing diverse MALDI-MSI workflows. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Directory of Open Access Journals (Sweden)
Bruni Filippo
2015-12-01
Full Text Available As ever greater attention is given to processes of gamification, the dimension of evaluation must also be addressed, in order to avoid ineffective forms of trivialisation. With reference to the evidence-based approach proposed by Mayer, and highlighting its possibilities and limits, an experiment related to teacher training is presented here, in which we attempt to combine some traits of gamification processes with a first evaluation screen. The data obtained indicate, on the one hand, an overall positive perception on the part of the attendees; on the other, they reveal forms of resistance and saturation with respect to both the excessively competitive mechanisms and the peer evaluation procedures.
International Nuclear Information System (INIS)
Birge, Jonathan R.; Kaertner, Franz X.
2008-01-01
We derive an analytical approximation for the measured pulse width error in spectral shearing methods, such as spectral phase interferometry for direct electric-field reconstruction (SPIDER), caused by an anomalous delay between the two sheared pulse components. This analysis suggests that, as pulses approach the single-cycle limit, the resulting requirements on the calibration and stability of this delay become significant, requiring precision orders of magnitude higher than the scale of a wavelength. This is demonstrated by numerical simulations of SPIDER pulse reconstruction using actual data from a sub-two-cycle laser. We briefly propose methods to minimize the effects of this sensitivity in SPIDER and review variants of spectral shearing that attempt to avoid this difficulty
Rinehart, Ann
2013-11-01
Futility is an ancient concept arising from Greek mythology that was resurrected for its medical application in the 1980s with the proliferation of many lifesaving technologies, including dialysis and renal transplantation. By that time, the domineering medical paternalism that characterized the pre-1960s physician-patient relationship morphed into assertive patient autonomy, and some patients began to claim the right to demand aggressive, high-technology interventions, despite physician disapproval. To counter this power struggle, the establishment of a precise definition of futility offered hope for a futility policy that would allow physicians to justify withholding or withdrawing treatment, despite patient and family objections. This article reviews the various attempts made to define medical futility and describes their limited applicability to dialysis. When futility concerns arise, physicians should recognize the opportunity to address conflict, using best practice communication skills. Physicians would also benefit from understanding the ethical principles of respect for patient autonomy, beneficence, nonmaleficence, justice, and professional integrity that underlie medical decision-making. Also reviewed is the use of a fair process approach or time-limited trial when conflict resolution cannot be achieved. These topics are addressed in the Renal Physician Association's clinical practice guideline Shared Decision-Making in the Appropriate Initiation and Withdrawal from Dialysis, with which nephrologists should be well versed. A case presentation of intractable calciphylaxis in a new dialysis patient illustrates the pitfalls of physicians not fully appreciating the ethics of medical decision-making and failing to use effective conflict management approaches in the clinical practice guideline.
Nistal-Nuño, Beatriz
2017-03-31
In Chile, a new law introduced in March 2012 lowered the blood alcohol concentration (BAC) limit for impaired drivers from 0.1% to 0.08% and the BAC limit for driving under the influence of alcohol from 0.05% to 0.03%, but its effectiveness remained uncertain. The goal of this investigation was to evaluate the effects of this enactment on road traffic injuries and fatalities in Chile. This was a retrospective cohort study. Data were analyzed using a descriptive approach and Generalized Linear Models, a type of Poisson regression, to analyze deaths and injuries in a series of additive log-linear models accounting for the effects of law implementation, month, a linear time trend, and population exposure. A review of national databases in Chile was conducted from 2003 to 2014 to evaluate the monthly rates of traffic fatalities and injuries, in total and associated with alcohol. A decrease of 28.1 percent in the monthly rate of alcohol-related traffic fatalities was observed as compared to before the law implemented in 2012 in Chile. Chile experienced a significant reduction in alcohol-related traffic fatalities and injuries, a successful public health intervention.
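The statistical machinery described above can be sketched with a minimal version of the model. The counts below are simulated, not the Chilean data, and the model keeps only the intercept and the law indicator (the study also includes month effects, a linear trend, and population exposure); exp(b1) is the post/pre rate ratio, and a value near 0.72 would correspond to the reported 28.1 percent decrease.

```python
import numpy as np

# Hedged sketch of a Poisson log-linear model fit by Newton-Raphson:
#   log E[deaths_t] = b0 + b1 * law_t
# where law_t marks months with the law in force. Data are simulated.

rng = np.random.default_rng(3)
months = 144                                      # 12 years of monthly counts
law = (np.arange(months) >= 108).astype(float)    # law in force for last 36 months
X = np.column_stack([np.ones(months), law])
true_b = np.array([np.log(40.0), np.log(0.72)])   # baseline 40 deaths/month (assumed)
deaths = rng.poisson(np.exp(X @ true_b))

b = np.array([np.log(deaths.mean()), 0.0])        # safe starting point
for _ in range(25):                               # Newton-Raphson for the Poisson MLE
    mu = np.exp(X @ b)
    grad = X.T @ (deaths - mu)                    # score
    hess = X.T @ (X * mu[:, None])                # Fisher information
    b += np.linalg.solve(hess, grad)

rate_ratio = float(np.exp(b[1]))                  # estimated post/pre ratio
```

With only a binary covariate the MLE reduces to the ratio of post- to pre-law mean counts, so the fitted rate ratio recovers the simulated 0.72 up to sampling noise.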
Ballweg, Verena; Eibofner, Frank; Graf, Hansjorg
2011-10-01
The state of the art for assessing radiofrequency (RF) heating near implants is computer modeling of the devices and solving Maxwell's equations for the specific setup; for a given set of input parameters, a fixed result is obtained. This work presents a theoretical approach in the alternating current (ac) limit, which can potentially render closed formulas for the basic behavior of tissue heating near metallic structures. Dedicated experiments were performed to support the theory. For the ac calculations, the implant was modeled as an RLC parallel circuit, with L being the secondary of a transformer and the RF transmission coil being its primary. Parameters influencing coupling, power matching, and specific absorption rate (SAR) were determined and formula relations were established. Experiments on a copper ring with a radial gap as capacitor for inductive coupling (at 1.5 T) and on needles for capacitive coupling (at 3 T) were carried out. The temperature rise in the embedding dielectric was observed as a function of its specific resistance using an infrared (IR) camera. Closed formulas containing the parameters of the setup were obtained for the frequency dependence of the transmitted power at fixed load resistance, for the calculation of the resistance for optimum power transfer, and for the calculation of the transmitted power in dependence of the load resistance. Good qualitative agreement was found between the course of the experimentally obtained heating curves and the theoretically determined power curves. Power matching proved to be a critical parameter, especially if the sample was resonant close to the Larmor frequency. The presented ac approach to RF heating near an implant, which mimics specific values for R, L, and C, allows for closed formulas to estimate the potential of RF energy transfer. A first reference point for worst-case determination in MR testing procedures can be obtained. Numerical approaches, necessary to determine spatially resolved heating maps, can
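The power-matching behavior described above can be illustrated with a toy lumped-element model. The component values are assumptions, not the paper's measurements: an implant loop with inductance L and gap capacitance C is driven at the 1.5 T Larmor frequency by an induced EMF V, and the power dissipated in the tissue resistance R peaks when R equals the magnitude of the net reactance.

```python
import numpy as np

# Toy ac sketch of the RLC picture (assumed parameters): power dissipated
# in the load resistance R of a driven loop,
#   P(R) = V^2 * R / (R^2 + X^2),   X = w*L - 1/(w*C).
# dP/dR = 0 at R = |X|, the power-matched load.

f_larmor = 63.9e6                        # 1.5 T proton Larmor frequency, Hz
w = 2 * np.pi * f_larmor
L, C, V = 50e-9, 140e-12, 1.0            # ring inductance, gap capacitance, EMF (assumed)
X = w * L - 1.0 / (w * C)                # net reactance at the drive frequency, ohm

R = np.linspace(0.1, 200.0, 20000)       # candidate tissue resistances, ohm
P = V**2 * R / (R**2 + X**2)             # dissipated power, W

R_matched = float(R[np.argmax(P)])               # numerically found optimum
f_res = 1.0 / (2 * np.pi * np.sqrt(L * C))       # loop self-resonance, Hz
```

With these values the loop self-resonance (about 60 MHz) sits close to the Larmor frequency, so the net reactance is small and the matched load resistance is correspondingly low, which mirrors the paper's observation that matching becomes critical near resonance.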
Maximum gravitational redshift of white dwarfs
International Nuclear Information System (INIS)
Shapiro, S.L.; Teukolsky, S.A.
1976-01-01
The stability of uniformly rotating, cold white dwarfs is examined in the framework of the Parametrized Post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, ζ₂, ζ₃, and ζ₄), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km s⁻¹ for nonrotating carbon and helium dwarfs, but is lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km s⁻¹ for carbon stars (the neutronization limit) and to 893 km s⁻¹ for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores
77 FR 37554 - Calculation of Maximum Obligation Limitation
2012-06-22
... definition of a financial company under section 201 of the Dodd- Frank Act. \\4\\ Section 203(b) of the Dodd... definition of a financial company under section 201. \\5\\ 12 U.S.C. 1823(c)(4). \\6\\ Section 201(a)(11) of the... is in default or in danger of default and that it meets the definition of financial company under...
76 FR 72645 - Calculation of Maximum Obligation Limitation
2011-11-25
..., inter alia, its powers and duties to: (1) Succeed to all rights, titles, powers and privileges of the... issued on or after January 1, 1999. The Agencies have sought to present the proposed rule in a simple and...
34 CFR 682.506 - Limitations on maximum loan amounts.
2010-07-01
... loan is intended less— (i) The student's estimated financial assistance; and (ii) The student's.... (b) The Secretary does not guarantee a Federal Consolidation loan in an amount greater than that required to discharge loans eligible for consolidation under § 682.100(a)(4). (Authority: 20 U.S.C. 1075...
5 CFR 582.402 - Maximum garnishment limitations.
2010-01-01
... earnings that may be garnished for a Federal, State or local tax obligation or in compliance with an order... alimony, including any amounts withheld to offset administrative costs as provided for in § 582.305(k... of an employee-obligor's aggregate disposable earnings for any workweek in compliance with legal...
Amr, Sherif M; El-Mofty, Aly O; Amin, Sherif N
2002-01-01
The potentialities, limitations, and technical pitfalls of vascularized fibular grafting in infected nonunions of the tibia are outlined on the basis of 14 patients approached anteriorly or posteriorly. An infected nonunion of the tibia together with a large exposed area over the shin of the tibia is better approached anteriorly. The anastomosis is placed in an end-to-end or end-to-side fashion onto the anterior tibial vessels. To locate the site of the nonunion, the tibialis anterior muscle should be retracted laterally and the proximal and distal ends of the site of the nonunion debrided up to healthy bleeding bone. All the scarred skin over the anterior tibia should be excised, because it becomes devitalized as a result of the exposure. To cover the exposed area, the fibula has to be harvested with a large skin paddle, incorporating the first septocutaneous branch originating from the peroneal vessels before they gain the upper end of the flexor hallucis longus muscle. A disadvantage of harvesting the free fibula together with a skin paddle is that its pedicle is short. The skin paddle lies at the antimesenteric border of the graft, the site of incising and stripping the periosteum. In addition, it has to be sutured to the skin at the recipient site, so the soft tissues (together with the peroneal vessels) cannot be stripped off the graft to prolong its pedicle. Vein grafts should be resorted to if the pedicle does not reach a healthy segment of the anterior tibial vessels. Defects with limited exposed areas of skin, especially when patency of the vessels of the leg is questionable, primarily require a fibula with a long pedicle that could easily reach the popliteal vessels and are thus better approached posteriorly. In this approach, the site of the nonunion is exposed medial to the flexor digitorum muscle and the proximal and distal ends of the site of the nonunion debrided up to healthy bleeding bone. No attempt should be made to strip the scarred skin off
Energy Technology Data Exchange (ETDEWEB)
Thompson, William L. [Bonneville Power Administration, Portland, OR (US). Environment, Fish and Wildlife
2000-11-01
Hankin and Reeves' (1988) approach to estimating fish abundance in small streams has been applied in stream-fish studies across North America. However, as with any method of population estimation, there are important assumptions that must be met for estimates to be minimally biased and reasonably precise. Consequently, I investigated effects of various levels of departure from these assumptions via simulation based on results from an example application in Hankin and Reeves (1988) and a spatially clustered population. Coverage of 95% confidence intervals averaged about 5% less than nominal when removal estimates equaled true numbers within sampling units, but averaged 62% - 86% less than nominal when they did not, with the exception where detection probabilities of individuals were >0.85 and constant across sampling units (95% confidence interval coverage = 90%). True total abundances averaged far (20% - 41%) below the lower confidence limit when not included within intervals, which implies large negative bias. Further, the average coefficient of variation was about 1.5 times higher when removal estimates did not equal true numbers within sampling units (mean CV = 0.27 [SE = 0.0004]) than when they did (mean CV = 0.19 [SE = 0.0002]). A potential modification to Hankin and Reeves' approach is to include environmental covariates that affect detection rates of fish into the removal model or other mark-recapture model. A potential alternative is to use snorkeling in combination with line transect sampling to estimate fish densities. Regardless of the method of population estimation, a pilot study should be conducted to validate the enumeration method, which requires a known (or nearly so) population of fish to serve as a benchmark to evaluate bias and precision of population estimates.
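The removal estimates discussed above build on the classical removal method. As a minimal sketch (not the paper's full two-stage sampling design; the function name and counts are illustrative), the standard two-pass removal estimator assumes a constant per-pass capture probability and declining catches:

```python
def two_pass_removal(c1, c2):
    """Classical two-pass removal estimator.

    c1, c2: counts of fish removed on the first and second passes
    through a sampling unit. Returns (N_hat, p_hat): estimated
    abundance and per-pass capture probability. Valid only when
    catches decline (c1 > c2)."""
    if c1 <= c2:
        raise ValueError("removal estimator undefined unless c1 > c2")
    p_hat = (c1 - c2) / c1          # estimated capture probability
    n_hat = c1 ** 2 / (c1 - c2)     # estimated true abundance
    return n_hat, p_hat

# Example: 60 fish removed on pass 1, 20 on pass 2
n_hat, p_hat = two_pass_removal(60, 20)
# n_hat = 90.0, p_hat ≈ 0.667
```

The simulation results above illustrate exactly the failure mode of this estimator: when capture probability varies across units or passes, N̂ can fall far below the true abundance.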
Credal Networks under Maximum Entropy
Lukasiewicz, Thomas
2013-01-01
We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...
Unification of field theory and maximum entropy methods for learning probability densities
Kinney, Justin B.
2015-09-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
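As a concrete illustration of the maximum entropy side of this unification (a minimal sketch, not Kinney's field-theory estimator; the function name and grid are illustrative), a maximum entropy density with a single mean constraint reduces to a one-parameter exponential family whose parameter can be found by bisection:

```python
import math

def maxent_mean(grid, target_mean, tol=1e-10):
    """Discrete maximum-entropy distribution on `grid` constrained to
    a given mean. The solution has exponential-family form
    p_i ∝ exp(lam * x_i); lam is found by bisection on the mean."""
    def mean_for(lam):
        w = [math.exp(lam * x) for x in grid]
        z = sum(w)
        return sum(wi * xi for wi, xi in zip(w, grid)) / z

    # bracketing assumes target_mean lies strictly inside the grid range
    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * x) for x in grid]
    z = sum(w)
    return [wi / z for wi in w]

# With the constrained mean at the grid's centre, maximum entropy
# recovers the uniform distribution:
p = maxent_mean([0.0, 1.0, 2.0], 1.0)
```

The Bayesian field theory estimate described above replaces the hard moment constraints with a smoothness prior, and recovers exactly this kind of solution in the infinite-smoothness limit.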
Topics in Bayesian statistics and maximum entropy
International Nuclear Information System (INIS)
Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.
1998-12-01
Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)
Energy Technology Data Exchange (ETDEWEB)
Bittel, R; Mancel, J [Commissariat a l' Energie Atomique, 92 - Fontenay-aux-Roses (France). Centre d' Etudes Nucleaires, departement de la protection sanitaire
1968-10-01
The most important carriers of radioactive contamination of man are foodstuffs as a whole, and not only ingested water or inhaled air. For that reason, in accordance with the spirit of the recent recommendations of the ICRP, it is proposed to substitute the idea of maximum levels of contamination of water for the MPC. In the case of aquatic food chains (aquatic organisms and irrigated foodstuffs), knowledge of the ingested quantities and of the food/water concentration factors makes it possible to determine these maximum levels, or to establish a linear relation between the maximum levels in the case of two primary carriers of contamination (continental and sea waters). The notions of critical food consumption, critical radioelements and waste-disposal formulas are considered in the same spirit, taking care to attach the greatest possible importance to local situations. (authors)
Modelling information flow along the human connectome using maximum flow.
Lyoo, Youngwook; Kim, Jieun E; Yoon, Sujung
2018-01-01
The human connectome is a complex network that transmits information between interlinked brain regions. Using graph theory, previously well-known network measures of integration between brain regions have been constructed under the key assumption that information flows strictly along the shortest paths possible between two nodes. However, it is now apparent that information does flow through non-shortest paths in many real-world networks such as cellular networks, social networks, and the internet. In the current hypothesis, we present a novel framework using the maximum flow to quantify information flow along all possible paths within the brain, so as to implement an analogy to network traffic. We hypothesize that the connection strengths of brain networks represent a limit on the amount of information that can flow through the connections per unit of time. This allows us to compute the maximum amount of information flow between two brain regions along all possible paths. Using this novel framework of maximum flow, previous network topological measures are expanded to account for information flow through non-shortest paths. The most important advantage of the current approach using maximum flow is that it can integrate the weighted connectivity data in a way that better reflects the real information flow of the brain network. The current framework and its concept regarding maximum flow provides insight on how network structure shapes information flow in contrast to graph theory, and suggests future applications such as investigating structural and functional connectomes at a neuronal level. Copyright © 2017 Elsevier Ltd. All rights reserved.
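The maximum-flow quantity the authors propose can be computed with any standard algorithm. A minimal Edmonds-Karp sketch (node names and capacities are illustrative stand-ins for weighted connectome edges, not data from the study):

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp maximum flow from s to t. `capacity` is a
    dict-of-dicts of edge capacities, analogous to connection
    strengths limiting information flow per unit time."""
    # Build residual capacities, adding zero-capacity reverse edges.
    res = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Trace the path, find its bottleneck, update residuals.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
        flow += bottleneck

# Two parallel routes from A to D, each limited by its own bottleneck:
cap = {'A': {'B': 3, 'C': 2}, 'B': {'D': 2}, 'C': {'D': 2}, 'D': {}}
```

Unlike shortest-path measures, the flow value here aggregates capacity over all routes, which is the key property the hypothesis exploits.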
A framework for the meta-analysis of Bland-Altman studies based on a limits of agreement approach.
Tipton, Elizabeth; Shuster, Jonathan
2017-10-15
Bland-Altman method comparison studies are common in the medical sciences and are used to compare a new measure to a gold-standard (often costlier or more invasive) measure. The distribution of these differences is summarized by two statistics, the 'bias' and standard deviation, and these measures are combined to provide estimates of the limits of agreement (LoA). When these LoA are within the bounds of clinically insignificant differences, the new non-invasive measure is preferred. Very often, multiple Bland-Altman studies have been conducted comparing the same two measures, and random-effects meta-analysis provides a means to pool these estimates. We provide a framework for the meta-analysis of Bland-Altman studies, including methods for estimating the LoA and measures of uncertainty (i.e., confidence intervals). Importantly, these LoA are likely to be wider than those typically reported in Bland-Altman meta-analyses. Frequently, Bland-Altman studies report results based on repeated measures designs but do not properly adjust for this design in the analysis. Meta-analyses of Bland-Altman studies frequently exclude these studies for this reason. We provide a meta-analytic approach that allows inclusion of estimates from these studies. This includes adjustments to the estimate of the standard deviation and a method for pooling the estimates based upon robust variance estimation. An example is included based on a previously published meta-analysis. Copyright © 2017 John Wiley & Sons, Ltd.
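The per-study quantities being pooled are the bias and 95% limits of agreement, computed from the paired differences as Bland and Altman describe. A minimal per-study sketch (not the paper's repeated-measures adjustment or robust-variance pooling; the function name is illustrative):

```python
import math

def limits_of_agreement(new, gold):
    """Bland-Altman bias and 95% limits of agreement for paired
    measurements from a new method and a gold standard.

    Returns (bias, (lower_loa, upper_loa)), where the LoA are
    bias ± 1.96 * SD of the paired differences."""
    diffs = [a - b for a, b in zip(new, gold)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A meta-analysis then pools these per-study biases and standard deviations across studies (with between-study heterogeneity), which is why the pooled LoA tend to be wider than any single study's interval.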
Atkinson, Nancy L; Saperstein, Sandra L; Desmond, Sharon M; Gold, Robert S; Billing, Amy S; Tian, Jing
2009-06-22
Adult women living in rural areas have high rates of obesity. Although rural populations have been deemed hard to reach, Internet-based programming is becoming a viable strategy as rural Internet access increases. However, when people are able to get online, they may not find information designed for them and their needs, especially harder to reach populations. This results in a "content gap" for many users. User-centered design is a methodology that can be used to create appropriate online materials. This research was conducted to apply a user-centered approach to the design and development of a health promotion website for low-income mothers living in rural Maryland. Three iterative rounds of concept testing were conducted to (1) identify the name and content needs of the site and assess concerns about registering on a health-related website; (2) determine the tone and look of the website and confirm content and functionality; and (3) determine usability and acceptability. The first two rounds involved focus group and small group discussions, and the third round involved usability testing with individual women as they used the prototype system. The formative research revealed that women with limited incomes were enthusiastic about a website providing nutrition and physical activity information targeted to their incomes and tailored to their personal goals and needs. Other priority content areas identified were budgeting, local resources and information, and content that could be used with their children. Women were able to use the prototype system effectively. This research demonstrated that user-centered design strategies can help close the "content gap" for at-risk audiences.
Maximum neutron flux in thermal reactors; Maksimum neutronskog fluksa kod termalnih reaktora
Energy Technology Data Exchange (ETDEWEB)
Strugar, P V [Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Yugoslavia)
1968-07-01
The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core directly, using the condition of maximum neutron flux while complying with the thermal limitations. This paper shows that the problem can be solved by applying the variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications which make it appropriate from the maximum principle point of view, so the theory of the maximum principle is suitable for application here. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples.
The worst case complexity of maximum parsimony.
Carmel, Amir; Musa-Lempel, Noa; Tsur, Dekel; Ziv-Ukelson, Michal
2014-11-01
One of the core classical problems in computational biology is that of constructing the most parsimonious phylogenetic tree interpreting an input set of sequences from the genomes of evolutionarily related organisms. We reexamine the classical maximum parsimony (MP) optimization problem for the general (asymmetric) scoring matrix case, where rooted phylogenies are implied, and analyze the worst case bounds of three approaches to MP: The approach of Cavalli-Sforza and Edwards, the approach of Hendy and Penny, and a new agglomerative, "bottom-up" approach we present in this article. We show that the second and third approaches are faster than the first one by a factor of Θ(√n) and Θ(n), respectively, where n is the number of species.
Maximum entropy analysis of liquid diffraction data
International Nuclear Information System (INIS)
Root, J.H.; Egelstaff, P.A.; Nickel, B.G.
1986-01-01
A maximum entropy method for reducing truncation effects in the inverse Fourier transform of structure factor, S(q), to pair correlation function, g(r), is described. The advantages and limitations of the method are explored with the PY hard sphere structure factor as model input data. An example using real data on liquid chlorine, is then presented. It is seen that spurious structure is greatly reduced in comparison to traditional Fourier transform methods. (author)
International Nuclear Information System (INIS)
Sapinski, M.
2012-01-01
With thirteen beam-induced quenches and numerous Machine Development tests, the current knowledge of LHC magnet quench limits still contains a lot of unknowns. Various approaches to determining the quench limits are reviewed and the results of the tests are presented. An attempt is made to reconstruct a coherent picture emerging from these results. The available methods for computing the quench levels are presented, together with the dedicated particle-shower simulations that are necessary to understand the tests. The future experiments needed to reach a better understanding of the quench limits, as well as the limits for machine operation, are investigated. Possible strategies for setting BLM (Beam Loss Monitor) thresholds are discussed. (author)
Trueba, A; García-Lastra, J M; Garcia-Fernandez, P; Aramburu, J A; Barriuso, M T; Moreno, M
2011-11-24
This work is aimed at clarifying the changes in the optical spectra of Cr³⁺ impurities due to either a host-lattice variation or a hydrostatic pressure, which can hardly be understood by means of the usual Tanabe-Sugano (TS) approach assuming that the Racah parameter, B, grows when covalency decreases. For achieving this goal, the optical properties of Cr³⁺-doped LiBaF₃ and KMgF₃ model systems have been explored by means of high-level ab initio calculations on CrF₆³⁻ units subject to the electric field, E_R(r), created by the rest of the lattice ions. These calculations, which reproduce available experimental data, indicate that the energy, E(²E), of the ²E(t₂g³) → ⁴A₂(t₂g³) emission transition is nearly independent of the host lattice. By contrast, the energy difference corresponding to the ⁴A₂(t₂g³) → ⁴T₁(t₂g²eg¹) and ⁴A₂(t₂g³) → ⁴T₂(t₂g²eg¹) excitations, Δ(⁴T₁; ⁴T₂), is shown to increase on passing from the normal to the inverted perovskite host lattice despite the increase in covalency, a fact which cannot be accounted for through the usual TS model. Similarly, when the Cr³⁺-F⁻ distance, R, is reduced, both Δ(⁴T₁; ⁴T₂) and the covalency are found to increase. By analyzing the limitations of the usual model, we found surprising results that are shown to arise from the deformation of both 3d(Cr) and ligand orbitals in the antibonding eg orbital, which has σ character and is more extended than the π t₂g orbital. By contrast, because of the higher stiffness of the t₂g orbital, the dependence of E(²E) on R basically follows the corresponding variation of covalency in that level. Bearing in mind the similarities of the optical properties displayed by Cr³⁺ impurities in oxides and fluorides, the present results can be useful for understanding experimental data on Cr³⁺-based gemstones where the local symmetry is lower than cubic.
Scrape-off layer based modelling of the density limit in beryllated JET limiter discharges
International Nuclear Information System (INIS)
Borrass, K.; Campbell, D.J.; Clement, S.; Vlases, G.C.
1993-01-01
The paper gives a scrape-off layer based interpretation of the density limit in beryllated JET limiter discharges. In these discharges, JET edge parameters show a complicated time evolution as the density limit is approached and the limit is manifested as a non-disruptive density maximum which cannot be exceeded by enhanced gas puffing. The occurrence of Marfes, the manner of density control and details of recycling are essential elements of the interpretation. Scalings for the maximum density are given and compared with JET data. The relation to disruptive density limits, previously observed in JET carbon limiter discharges, and to density limits in divertor discharges is discussed. (author). 18 refs, 10 figs, 1 tab
Maximum Parsimony on Phylogenetic networks
2012-01-01
Background Phylogenetic networks are generalizations of phylogenetic trees, that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network; and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as Sankoff and Fitch algorithms extend naturally for networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that for almost all networks, the bounds returned by the heuristics matched with the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges which are
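The network algorithms above extend the classical small-parsimony algorithms on trees. As a minimal sketch of the tree case they generalize (unit-cost Fitch, not the general asymmetric cost matrices or reticulate vertices discussed above; the toy tree and states are illustrative):

```python
def fitch(tree, leaf_states, root='root'):
    """Fitch small-parsimony for one character on a rooted binary tree.

    `tree` maps each internal node to its (left, right) children;
    `leaf_states` maps each leaf to its observed character state.
    Returns (candidate_state_set_at_root, parsimony_score) under the
    unit-cost (symmetric) scoring scheme."""
    def walk(node):
        if node in leaf_states:
            return {leaf_states[node]}, 0
        left, right = tree[node]
        sl, cl = walk(left)
        sr, cr = walk(right)
        inter = sl & sr
        if inter:                       # children agree: no new change
            return inter, cl + cr
        return sl | sr, cl + cr + 1     # children disagree: +1 change

    return walk(root)

# Toy tree ((a,b),(c,d)) with states A,G,G,G: one substitution suffices.
tree = {'root': ('x', 'y'), 'x': ('a', 'b'), 'y': ('c', 'd')}
states = {'a': 'A', 'b': 'G', 'c': 'G', 'd': 'G'}
```

On a network, the paper's extension must additionally resolve the conflicting assignments that can arise at reticulate vertices, and Sankoff's dynamic program replaces the set operations when substitution costs are unequal.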
Maximum neutron flux at thermal nuclear reactors
International Nuclear Information System (INIS)
Strugar, P.
1968-10-01
Since actual research reactors are technically complicated and expensive facilities, it is important to achieve savings by appropriate reactor lattice configurations. There are a number of papers, and practical examples of reactors with a central reflector, dealing with spatial distributions of fuel elements which would result in higher neutron flux. A common disadvantage of all these solutions is that the choice of the best solution starts from anticipated spatial distributions of fuel elements; the weakness of these approaches is the lack of defined optimization criteria. The direct approach is defined as follows: determine the spatial distribution of fuel concentration starting from the condition of maximum neutron flux while fulfilling the thermal constraints. Thus determining the maximum neutron flux amounts to solving a variational problem which is beyond the possibilities of classical variational calculation. This variational problem has been successfully solved by applying the maximum principle of Pontryagin. The optimum distribution of fuel concentration was obtained in explicit analytical form. Thus, the spatial distribution of the neutron flux and the critical dimensions of a quite complex reactor system are calculated in a relatively simple way. In addition to the fact that the results are innovative, this approach is interesting because of the optimization procedure itself [sr
Maximum entropy decomposition of quadrupole mass spectra
International Nuclear Information System (INIS)
Toussaint, U. von; Dose, V.; Golan, A.
2004-01-01
We present an information-theoretic method called generalized maximum entropy (GME) for decomposing mass spectra of gas mixtures from noisy measurements. In this GME approach to the noisy, underdetermined inverse problem, the joint entropies of concentration, cracking, and noise probabilities are maximized subject to the measured data. This provides a robust estimation for the unknown cracking patterns and the concentrations of the contributing molecules. The method is applied to mass spectroscopic data of hydrocarbons, and the estimates are compared with those received from a Bayesian approach. We show that the GME method is efficient and is computationally fast
International Nuclear Information System (INIS)
Wu, Y.T.; Gureghian, A.B.; Sagar, B.; Codell, R.B.
1992-12-01
The Limit State approach is based on partitioning the parameter space into two parts: one in which the performance measure is smaller than a chosen value (called the limit state), and the other in which it is larger. Through a Taylor expansion at a suitable point, the partitioning surface (called the limit state surface) is approximated as either a linear or quadratic function. The success and efficiency of the limit state method depend upon choosing an optimum point for the Taylor expansion. The point in the parameter space that has the highest probability of producing the value chosen as the limit state is optimal for expansion. When the parameter space is transformed into a standard Gaussian space, the optimal expansion point, known as the Most Probable Point (MPP), has the property that its location on the Limit State surface is closest to the origin. Additionally, the projections onto the parameter axes of the vector from the origin to the MPP are the sensitivity coefficients. Once the MPP is determined and the Limit State surface approximated, formulas (see Equations 4-7 and 4-8) are available for determining the probability of the performance measure being less than the limit state. By choosing a succession of limit states, the entire cumulative distribution of the performance measure can be determined. Methods for determining the MPP and also for improving the estimate of the probability are discussed in this report
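For a limit state function that is exactly linear in the standard Gaussian space, the MPP, the sensitivity coefficients, and the failure probability all have closed forms; a minimal sketch of that special case (function names are illustrative, and a real application would locate the MPP by constrained optimization rather than analytically):

```python
import math

def std_normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def linear_limit_state_pf(a, b):
    """First-order reliability result for g(u) = b + sum(a_i * u_i),
    with u standard Gaussian. Failure is g(u) < 0.

    The MPP is the point on g = 0 closest to the origin, at distance
    beta = b / ||a||; the unit components a_i / ||a|| are the
    sensitivity coefficients, and P_f = Phi(-beta)."""
    norm_a = math.sqrt(sum(ai * ai for ai in a))
    beta = b / norm_a
    mpp = [-beta * ai / norm_a for ai in a]   # closest point on g = 0
    alphas = [ai / norm_a for ai in a]        # sensitivity coefficients
    return std_normal_cdf(-beta), mpp, alphas
```

For a genuinely nonlinear limit state surface this linear result is only the first-order approximation described above, which is why the report's methods for improving the probability estimate matter.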
2005-04-29
Final report, July 2004 to July 2005. Title: The Application of an Army Prospective Payment Model Structured on the Standards Set Forth by the CHAMPUS Maximum... (Health Care Administration). Acknowledgments: I would like to acknowledge my wife, Karen, who allowed me the
Minimal length, Friedmann equations and maximum density
Energy Technology Data Exchange (ETDEWEB)
Awad, Adel [Center for Theoretical Physics, British University of Egypt,Sherouk City 11837, P.O. Box 43 (Egypt); Department of Physics, Faculty of Science, Ain Shams University,Cairo, 11566 (Egypt); Ali, Ahmed Farag [Centre for Fundamental Physics, Zewail City of Science and Technology,Sheikh Zayed, 12588, Giza (Egypt); Department of Physics, Faculty of Science, Benha University,Benha, 13518 (Egypt)
2014-06-16
Inspired by Jacobson’s thermodynamic approach, Cai et al. have shown the emergence of the Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation http://dx.doi.org/10.1103/PhysRevD.75.084003 of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example we study the evolution of the equation of state p=ωρ through its phase-space diagram to show the existence of a maximum energy which is reachable in a finite time.
Larner, A J
2016-01-01
Calculation of correlation coefficients is often undertaken as a way of comparing different cognitive screening instruments (CSIs). However, test scores may correlate but not agree, and high correlation may mask lack of agreement between scores. The aim of this study was to use the methodology of Bland and Altman to calculate limits of agreement between the scores of selected CSIs and contrast the findings with Pearson's product moment correlation coefficients between the test scores of the same instruments. Datasets from three pragmatic diagnostic accuracy studies which examined the Mini-Mental State Examination (MMSE) vs. the Montreal Cognitive Assessment (MoCA), the MMSE vs. the Mini-Addenbrooke's Cognitive Examination (M-ACE), and the M-ACE vs. the MoCA were analysed to calculate correlation coefficients and limits of agreement between test scores. Although test scores were highly correlated (all >0.8), the calculated limits of agreement were broad (all >10 points), and in one case (MMSE vs. M-ACE) exceeded 15 points. Correlation is not agreement. Highly correlated test scores may conceal broad limits of agreement, consistent with the different emphases of different tests with respect to the cognitive domains examined. Routine incorporation of limits of agreement into diagnostic accuracy studies which compare different tests merits consideration, to enable clinicians to judge whether or not their agreement is close. © 2016 S. Karger AG, Basel.
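The contrast between correlation and agreement is easy to reproduce numerically. The sketch below uses invented scores for two hypothetical screening instruments (not the study's datasets) and computes Pearson's r alongside the Bland-Altman 95% limits of agreement, i.e. mean difference ± 1.96 × SD of the differences:

```python
import math
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def bland_altman_limits(x, y):
    """95% limits of agreement: mean difference +/- 1.96 * SD of differences."""
    d = [a - b for a, b in zip(x, y)]
    md, sd = mean(d), stdev(d)
    return md - 1.96 * sd, md + 1.96 * sd

# Invented scores from two hypothetical cognitive screening instruments
test_a = [30, 28, 25, 22, 20, 18, 15, 12]
test_b = [27, 21, 22, 14, 17, 10, 12, 5]

r = pearson_r(test_a, test_b)
lo, hi = bland_altman_limits(test_a, test_b)
# r is high (> 0.9), yet the agreement interval spans roughly 10 points:
# correlated scores, poor agreement.
```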
Maximum stellar iron core mass
Indian Academy of Sciences (India)
Vol. 60, No. 3 — journal of physics, March 2003, pp. 415–422. Maximum stellar iron core mass. F W Giacobbe, Chicago Research Center/American Air Liquide. ... iron core compression due to the weight of non-ferrous matter overlying the iron cores within large ... thermal equilibrium velocities will tend to be non-relativistic.
Maximum entropy beam diagnostic tomography
International Nuclear Information System (INIS)
Mottershead, C.T.
1985-01-01
This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs
A portable storage maximum thermometer
International Nuclear Information System (INIS)
Fayart, Gerard.
1976-01-01
A clinical thermometer storing the voltage corresponding to the maximum temperature in an analog memory is described. The end of the measurement is indicated by a lamp switching off. The measurement time is shortened by means of a low-thermal-inertia platinum probe. This portable thermometer is fitted with a cell-test and calibration system. [fr
Neutron spectra unfolding with maximum entropy and maximum likelihood
International Nuclear Information System (INIS)
Itoh, Shikoh; Tsunoda, Toshiharu
1989-01-01
A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always yields a positive solution over the whole energy range. Moreover, the theory unifies the overdetermined and underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is removed by virtue of the principle. An approximate expression for the covariance matrix of the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system, which appears in the present study, has been established. Results of computer simulation showed the effectiveness of the present theory. (author)
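The paper's combined MaxEnt/ML algorithm is not reproduced here, but the maximum-likelihood core for Poisson-distributed counts can be sketched with the classic expectation-maximization (Richardson-Lucy) iteration, which likewise guarantees a positive solution over the whole range. The response matrix and counts below are invented toy values, not neutron data:

```python
def em_unfold(R, n, iters=500):
    """EM (Richardson-Lucy) iteration for the Poisson ML unfolding n ~ Poisson(R @ phi).
    The multiplicative update keeps every phi_j positive, as the ML solution requires."""
    m, k = len(R), len(R[0])
    phi = [sum(n) / k] * k  # flat, strictly positive initial guess
    for _ in range(iters):
        lam = [sum(R[i][j] * phi[j] for j in range(k)) for i in range(m)]
        for j in range(k):
            s = sum(R[i][j] for i in range(m))  # detection efficiency of bin j
            phi[j] *= sum(R[i][j] * n[i] / lam[i] for i in range(m)) / s
    return phi

# Toy 3x3 detector response (rows: measured bins, columns: true bins)
R = [[0.80, 0.15, 0.05],
     [0.15, 0.70, 0.15],
     [0.05, 0.15, 0.80]]
n = [100.0, 50.0, 25.0]   # invented measured counts

phi = em_unfold(R, n)
refold = [sum(R[i][j] * phi[j] for j in range(3)) for i in range(3)]
# phi stays positive, and R @ phi reproduces the measured counts
```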
International Nuclear Information System (INIS)
Kusano, K.; Kondoh, Y.; Gesso, H.; Osanai, Y.; Saito, K.N.; Ukai, R.; Nanba, T.; Nagamine, Y.; Shiina, S.
2001-01-01
Before generating a steady-state, dynamo-free RFP configuration by an rf current-drive scheme, it is necessary to find an optimum configuration with a high stability beta limit against m=1 resonant resistive MHD modes and a reduced nonlinear turbulence level at lower rf power. As a first step in the optimization study, we are interested in the partially relaxed state model (PRSM) RFP configuration, which is considered closer to a relaxed state at finite beta, since it has force-free fields in the poloidal direction with a relatively shorter characteristic relaxation length and a relatively higher stability beta limit for m=1 resonant ideal MHD modes. The stability beta limit for m=1 resonant resistive MHD modes is predicted to be relatively high among RFP models and can be raised by current-density profile control using fast magnetosonic waves (FMW), which are accessible to the high-density region with a strong absorption rate. (author)
Viecco, Camilo H.; Camp, L. Jean
Effective defense against Internet threats requires data on global real time network status. Internet sensor networks provide such real time network data. However, an organization that participates in a sensor network risks providing a covert channel to attackers if that organization’s sensor can be identified. While there is benefit for every party when any individual participates in such sensor deployments, there are perverse incentives against individual participation. As a result, Internet sensor networks currently provide limited data. Ensuring anonymity of individual sensors can decrease the risk of participating in a sensor network without limiting data provision.
The maximum significant wave height in the Southern North Sea
Bouws, E.; Tolman, H.L.; Holthuijsen, L.H.; Eldeberky, Y.; Booij, N.; Ferier, P.
1995-01-01
The maximum possible wave conditions along the Dutch coast, which seem to be dominated by the limited water depth, have been estimated in the present study with numerical simulations. Discussions with meteorologists suggest that the maximum possible sustained wind speed in North Sea conditions is
5 CFR 838.711 - Maximum former spouse survivor annuity.
2010-01-01
... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Maximum former spouse survivor annuity... Orders Awarding Former Spouse Survivor Annuities Limitations on Survivor Annuities § 838.711 Maximum former spouse survivor annuity. (a) Under CSRS, payments under a court order may not exceed the amount...
Binnie, James
2018-04-01
Although the article by Scott rightly questions the dynamics of the Improving Access to Psychological Therapies system and re-examines the recovery rates, finding quite shocking results, his recommendations are ultimately flawed. There is a strong critique of the diagnostic procedures in Improving Access to Psychological Therapies services, but the answer is not to diagnose more rigorously and to adhere more strictly to a manualised approach to psychotherapy. The opposite may be required. Alternatives to the medical model of distress offer a less stigmatising and more human approach to helping people with their problems. Perhaps psychological therapists and the people they work alongside would be better served by a psychological approach rather than a psychiatric one.
Attitude sensor alignment calibration for the solar maximum mission
Pitone, Daniel S.; Shuster, Malcolm D.
1990-01-01
An earlier heuristic study of the fine attitude sensors for the Solar Maximum Mission (SMM) revealed a temperature dependence of the alignment about the yaw axis of the pair of fixed-head star trackers relative to the fine pointing Sun sensor. Here, new sensor alignment algorithms which better quantify the dependence of the alignments on the temperature are developed and applied to the SMM data. Comparison with the results from the previous study reveals the limitations of the heuristic approach. In addition, some of the basic assumptions made in the prelaunch analysis of the alignments of the SMM are examined. The results of this work have important consequences for future missions with stringent attitude requirements and where misalignment variations due to variations in the temperature will be significant.
Winkler, Eva
2011-01-01
The field of oncology with its numerous high-priced innovations contributes considerably to the fact that medical progress is expensive. Additionally, due to the demographic changes and the increasing life expectancy, a growing number of cancer patients want to profit from this progress. Since resources are limited also in the health system, the fair distribution of the available resources urgently needs to be addressed. Dealing with scarcity is a typical problem in the domain of justice theory; therefore, this article first discusses different strategies to manage limited resources: rationalization, rationing, and prioritization. It then presents substantive as well as procedural criteria that assist in the just distribution of effective health benefits. There are various strategies to reduce the utilization of limited resources: Rationalization means that efficiency reserves are being exhausted; by means of rationing, effective health benefits are withheld due to cost considerations. Rationing can occur implicitly and thus covertly, e.g. through budgeting or the implementation of waiting periods, or explicitly, through transparent rules or policies about healthcare coverage. Ranking medical treatments according to their importance (prioritization) is often a prerequisite for rationing decisions. In terms of requirements of justice, both procedural and substantive criteria (e.g. equality, urgency, benefit) are relevant for the acceptance and quality of a decision to limit access to effective health benefits. Copyright © 2011 S. Karger AG, Basel.
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
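The authors' maximum entropy test itself involves more machinery than fits in a short example, but the quantity it targets, a power-law (Pareto) tail index, can be illustrated with the classic Hill estimator computed over the k largest observations. The data below are synthetic Pareto draws, not the paper's datasets:

```python
import math
import random

def hill_estimator(data, k):
    """Hill estimator of the Pareto tail index alpha from the k largest observations."""
    xs = sorted(data, reverse=True)
    threshold = xs[k]  # the (k+1)-th largest observation
    return k / sum(math.log(xs[i] / threshold) for i in range(k))

# Synthetic data from a pure Pareto distribution with index 2.5
random.seed(42)
alpha_true = 2.5
sample = [random.paretovariate(alpha_true) for _ in range(20000)]

alpha_hat = hill_estimator(sample, k=1000)
# alpha_hat lands near the true index 2.5; on a lognormal sample the
# estimate would drift with k instead of stabilizing.
```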
Maximum entropy principle and hydrodynamic models in statistical mechanics
International Nuclear Information System (INIS)
Trovato, M.; Reggiani, L.
2012-01-01
This review presents the state of the art of the maximum entropy principle (MEP) in its classical and quantum (QMEP) formulation. Within the classical MEP we overview a general theory able to provide, in a dynamical context, the macroscopic relevant variables for carrier transport in the presence of electric fields of arbitrary strength. For the macroscopic variables the linearized maximum entropy approach is developed including full-band effects within a total energy scheme. Under spatially homogeneous conditions, we construct a closed set of hydrodynamic equations for the small-signal (dynamic) response of the macroscopic variables. The coupling between the driving field and the energy dissipation is analyzed quantitatively by using an arbitrary number of moments of the distribution function. Analogously, the theoretical approach is applied to many one-dimensional n+nn+ submicron Si structures by using different band structure models, different doping profiles, and different applied biases, and is validated by comparing numerical calculations with ensemble Monte Carlo simulations and with available experimental data. Within the quantum MEP we introduce a quantum entropy functional of the reduced density matrix, and the principle of quantum maximum entropy is then asserted as a fundamental principle of quantum statistical mechanics. Accordingly, we have developed a comprehensive theoretical formalism to construct rigorously a closed quantum hydrodynamic transport within a Wigner function approach. The theory is formulated both in thermodynamic equilibrium and nonequilibrium conditions, and the quantum contributions are obtained by only assuming that the Lagrange multipliers can be expanded in powers of ħ², where ħ is the reduced Planck constant. In particular, by using an arbitrary number of moments, we prove that: i) on a macroscopic scale all nonlocal effects, compatible with the uncertainty principle, are imputable to high-order spatial derivatives both of the
Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.
Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L
2016-08-01
This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.
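The "conventional MR fingerprinting reconstruction" that the ML framework is compared against is dictionary matching: each voxel's measured signal evolution is compared with a precomputed dictionary of simulated evolutions and assigned the tissue parameters of the entry with the highest normalized inner product. A minimal sketch with a made-up two-entry dictionary (real dictionaries hold many thousands of simulated evolutions):

```python
import math

def match_fingerprint(signal, dictionary):
    """Return the index of the dictionary entry with the highest
    normalized inner product (cosine similarity) with the signal."""
    def norm(v):
        return math.sqrt(sum(x * x for x in v))
    best_idx, best_score = -1, -float("inf")
    for idx, entry in enumerate(dictionary):
        score = sum(a * b for a, b in zip(signal, entry)) / (norm(signal) * norm(entry))
        if score > best_score:
            best_idx, best_score = idx, score
    return best_idx

# Hypothetical dictionary: simulated signal evolutions for two (T1, T2) pairs
dictionary = [[1.0, 0.8, 0.6, 0.5],
              [1.0, 0.5, 0.3, 0.2]]
noisy_signal = [0.98, 0.82, 0.57, 0.51]  # noisy measurement resembling entry 0

best_index = match_fingerprint(noisy_signal, dictionary)
```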
DEFF Research Database (Denmark)
Trueba, A.; García Lastra, Juan Maria; Garcia-Fernandez, P.
2011-01-01
This work is aimed at clarifying the changes on optical spectra of Cr 3+ impurities due to either a host lattice variation or a hydrostatic pressure, which can hardly be understood by means of the usual Tanabe - Sugano (TS) approach assuming that the Racah parameter, B, grows when covalency decre...
Douma, Rutger D.; Batista, Joana M.; Touw, Kai M.; Kiel, Jan A. K. W.; Zhao, Zheng; Veiga, Tania; Klaassen, Paul; Bovenberg, Roel A. L.; Daran, Jean-Marc; van Gulik, Walter M.; Heijnen, J.J.; Krikken, Arjen
2011-01-01
Background: In microbial production of non-catabolic products such as antibiotics, a loss of production capacity upon long-term cultivation (for example, in chemostats), a phenomenon called strain degeneration, is often observed. In this study a systems biology approach, monitoring changes from gene to
Fehlmann, Michael; Gascón, Estíbaliz; Rohrer, Mario; Schwarb, Manfred; Stoffel, Markus
2018-05-01
The snowfall limit has important implications for different hazardous processes associated with prolonged or heavy precipitation such as flash floods, rain-on-snow events and freezing precipitation. To increase preparedness and to reduce risk in such situations, early warning systems are frequently used to monitor and predict precipitation events at different temporal and spatial scales. However, in alpine and pre-alpine valleys, the estimation of the snowfall limit remains rather challenging. In this study, we characterize uncertainties related to snowfall limit for different lead times based on local measurements of a vertically pointing micro rain radar (MRR) and a disdrometer in the Zulg valley, Switzerland. Regarding the monitoring, we show that the interpolation of surface temperatures tends to overestimate the altitude of the snowfall limit and can thus lead to highly uncertain estimates of liquid precipitation in the catchment. This bias is much smaller in the Integrated Nowcasting through Comprehensive Analysis (INCA) system, which integrates surface station and remotely sensed data as well as outputs of a numerical weather prediction model. To reduce systematic error, we perform a bias correction based on local MRR measurements and thereby demonstrate the added value of such measurements for the estimation of liquid precipitation in the catchment. Regarding the nowcasting, we show that the INCA system provides good estimates up to 6 h ahead and is thus considered promising for operational hydrological applications. Finally, we explore the medium-range forecasting of precipitation type, especially with respect to rain-on-snow events. We show for a selected case study that the probability for a certain precipitation type in an ensemble-based forecast is more persistent than the respective type in the high-resolution forecast (HRES) of the European Centre for Medium Range Weather Forecasts Integrated Forecasting System (ECMWF IFS). In this case study, the
International Nuclear Information System (INIS)
Mao, Yuezhi; Horn, Paul R.; Mardirossian, Narbe; Head-Gordon, Teresa; Skylaris, Chris-Kriton; Head-Gordon, Martin
2016-01-01
Recently developed density functionals have good accuracy for both thermochemistry (TC) and non-covalent interactions (NC) if very large atomic orbital basis sets are used. To approach the basis set limit with potentially lower computational cost, a new self-consistent field (SCF) scheme is presented that employs minimal adaptive basis (MAB) functions. The MAB functions are optimized on each atomic site by minimizing a surrogate function. High accuracy is obtained by applying a perturbative correction (PC) to the MAB calculation, similar to dual basis approaches. Compared to exact SCF results, using this MAB-SCF (PC) approach with the same large target basis set produces <0.15 kcal/mol root-mean-square deviations for most of the tested TC datasets, and <0.1 kcal/mol for most of the NC datasets. The performance of density functionals near the basis set limit can be even better reproduced. With further improvement to its implementation, MAB-SCF (PC) is a promising lower-cost substitute for conventional large-basis calculations as a method to approach the basis set limit of modern density functionals.
Maximum Water Hammer Sensitivity Analysis
Jalil Emadi; Abbas Solemani
2011-01-01
Pressure waves and water hammer occur in a pumping system when valves are closed or opened suddenly, or when pumps fail suddenly. Determining the maximum water hammer is one of the most important technical and economic issues that engineers and designers of pumping stations and conveyance pipelines must address. Hammer software is a recent application used to simulate water hammer. The present study focuses on determining the significance of ...
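The order of magnitude that makes this surge so important can be seen from the classical Joukowsky relation, Δp = ρ·a·Δv, which gives the pressure rise for an instantaneous velocity change. The values below are illustrative, not taken from the study:

```python
def joukowsky_surge(rho, wave_speed, delta_v):
    """Joukowsky surge pressure (Pa) for an instantaneous velocity change."""
    return rho * wave_speed * delta_v

rho = 1000.0   # water density, kg/m^3
a = 1200.0     # pressure-wave speed in the pipe, m/s (typical steel pipe order)
dv = 2.0       # sudden change in flow velocity, m/s

dp = joukowsky_surge(rho, a, dv)   # 2.4e6 Pa, i.e. about 24 bar of surge
```

Even a modest 2 m/s velocity change thus produces a surge far above normal operating pressures, which is why valve-closure times and protection devices are sized against this limit.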
Directory of Open Access Journals (Sweden)
Yunfeng Shan
2008-01-01
Full Text Available Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes were sequenced as well as plants that have the fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms—maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ)—were used. Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a gene always generate the "true tree" by all four algorithms. However, the most frequent gene tree, termed "maximum gene-support tree" (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the "true tree" among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among species in comparison.
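The "maximum gene-support tree" is simply the modal topology across the single-gene trees. Given topology labels produced by any of the four reconstruction algorithms, finding it reduces to a frequency count. The Newick strings below are hypothetical labels, not the study's data:

```python
from collections import Counter

def maximum_gene_support_tree(gene_trees):
    """Return the most frequent tree topology and the number of genes supporting it."""
    counts = Counter(gene_trees)
    topology, support = counts.most_common(1)[0]
    return topology, support

# Hypothetical topologies inferred from five single-gene analyses
gene_trees = [
    "((A,B),(C,D));",
    "((A,B),(C,D));",
    "((A,C),(B,D));",
    "((A,B),(C,D));",
    "((A,D),(B,C));",
]

mgs, support = maximum_gene_support_tree(gene_trees)
# The modal topology ((A,B),(C,D)); is supported by 3 of the 5 genes
```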
Directory of Open Access Journals (Sweden)
E. Khoury
2013-01-01
Full Text Available This paper deals with a gradually deteriorating system operating under an uncertain environment whose state is only known on a finite rolling horizon. As such, the system is subject to constraints. Maintenance actions can only be planned at imposed times called maintenance opportunities that are available on a limited visibility horizon. This system can, for example, be a commercial vehicle with a monitored critical component that can be maintained only in some specific workshops. Based on the considered system, we aim to use the monitoring data and the time-limited information for maintenance decision support in order to reduce its costs. We propose two predictive maintenance policies based, respectively, on cost and reliability criteria. Classical age-based and condition-based policies are considered as benchmarks. The performance assessment shows the value of the different types of information and the best way to use them in maintenance decision making.
LCLS Maximum Credible Beam Power
International Nuclear Information System (INIS)
Clendenin, J.
2005-01-01
The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally, in Section 5, transport of the beam through the matching section and injection into Linac-1 is discussed
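The arithmetic behind an average-beam-power bound is simple: charge per pulse times repetition rate times beam energy (expressed in volts) gives watts. The numbers below are illustrative only, not the official LCLS safety-envelope values:

```python
def average_beam_power(charge_per_pulse_c, rep_rate_hz, energy_ev):
    """Average beam power in watts: P = Q * f * E,
    since an energy of E eV equals E joules per coulomb of charge."""
    return charge_per_pulse_c * rep_rate_hz * energy_ev

# Illustrative values: 1 nC per rf pulse, 120 Hz repetition rate, 14 GeV energy
p_avg = average_beam_power(1e-9, 120.0, 14e9)   # 1680 W
```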
Energy Technology Data Exchange (ETDEWEB)
Chiang, Chi-Ting [C.N. Yang Institute for Theoretical Physics, Stony Brook University, Stony Brook, NY 11794 (United States); Cieplak, Agnieszka M.; Slosar, Anže [Brookhaven National Laboratory, Blgd 510, Upton, NY 11375 (United States); Schmidt, Fabian, E-mail: chi-ting.chiang@stonybrook.edu, E-mail: acieplak@bnl.gov, E-mail: fabians@mpa-garching.mpg.de, E-mail: anze@bnl.gov [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany)
2017-06-01
The squeezed-limit bispectrum, which is generated by nonlinear gravitational evolution as well as inflationary physics, measures the correlation of three wavenumbers in the configuration where one wavenumber is much smaller than the other two. Since the squeezed-limit bispectrum encodes the impact of a large-scale fluctuation on the small-scale power spectrum, it can be understood as how the small-scale power spectrum "responds" to the large-scale fluctuation. Viewed in this way, the squeezed-limit bispectrum can be calculated using the response approach even in cases which do not submit to perturbative treatment. To illustrate this point, we apply this approach to the cross-correlation between the large-scale quasar density field and the small-scale Lyman-α forest flux power spectrum. In particular, using separate universe simulations which implement changes in the large-scale density, velocity gradient, and primordial power spectrum amplitude, we measure how the Lyman-α forest flux power spectrum responds to the local, long-wavelength quasar overdensity, and equivalently their squeezed-limit bispectrum. We perform a Fisher forecast for the ability of future experiments to constrain local non-Gaussianity using the bispectrum of quasars and the Lyman-α forest. Combining with quasar and Lyman-α forest power spectra to constrain the biases, we find that for DESI the expected 1σ constraint is err[f_NL] ∼ 60. DESI's ability to measure f_NL through this channel is limited primarily by the aliasing and instrumental noise of the Lyman-α forest flux power spectrum. The combination of the response approach and separate universe simulations provides a novel technique to explore the constraints from the squeezed-limit bispectrum between different observables.
Sorias, Soli
2015-01-01
Efforts to overcome the problems of descriptive and categorical approaches have not yielded results. In the present article, psychiatric diagnosis using Bayesian networks is proposed. Instead of a yes/no decision, Bayesian networks give the probability of diagnostic category inclusion, thereby yielding both a graded, i.e., dimensional diagnosis, and a value of the certainty of the diagnosis. With the use of Bayesian networks in the diagnosis of mental disorders, information about etiology, associated features, treatment outcome, and laboratory results may be used in addition to clinical signs and symptoms, with each of these factors contributing proportionally to their own specificity and sensitivity. Furthermore, a diagnosis (albeit one with a lower probability) can be made even with incomplete, uncertain, or partially erroneous information, and patients whose symptoms are below the diagnostic threshold can be evaluated. Lastly, there is no need of NOS or "unspecified" categories, and comorbid disorders become different dimensions of the diagnostic evaluation. Bayesian diagnoses allow the preservation of current categories and assessment methods, and may be used concurrently with criteria-based diagnoses. Users need not put in extra effort except to collect more comprehensive information. Unlike the Research Domain Criteria (RDoC) project, the Bayesian approach neither increases the diagnostic validity of existing categories nor explains the pathophysiological mechanisms of mental disorders. It, however, can be readily integrated to present classification systems. Therefore, the Bayesian approach may be an intermediate phase between criteria-based diagnosis and the RDoC ideal.
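The graded, probabilistic output described above is visible already in a single Bayes-rule update; a full Bayesian network chains many such updates over etiology, laboratory results, and symptoms. A minimal one-finding sketch with invented sensitivity, specificity, and prevalence figures:

```python
def posterior(prior, sensitivity, specificity):
    """P(disorder | positive finding) by Bayes' rule."""
    p_positive = sensitivity * prior + (1.0 - specificity) * (1.0 - prior)
    return sensitivity * prior / p_positive

# Invented figures: 10% prevalence; a clinical sign with
# 90% sensitivity and 95% specificity for the disorder
p = posterior(prior=0.10, sensitivity=0.90, specificity=0.95)
# p is a graded degree of diagnostic certainty (about 0.67),
# not a yes/no verdict against a fixed threshold
```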
Directory of Open Access Journals (Sweden)
Fanhao Meng
2018-04-01
Full Text Available Since the concept of hydrological response units (HRUs) is widely used in hydrological modeling, land use change scenario analysis based on HRUs may directly influence simulated hydrological processes because of the simplified flow routing and the HRU spatial distribution. This paper aims to overcome this issue with a new analysis approach for the impact of land use/cover change on hydrological processes (LUCCIHP), and to examine whether differences exist between the conventional approach and the improved approach. We therefore propose a sub-basin segmentation approach that obtains a more reasonable impact assessment of a LUCC scenario by re-discretizing the HRUs and prolonging the flow path along which the LUCC occurs. As a scenario study, the SWAT model is used in the Aksu River Basin, China, to simulate the response of hydrological processes to LUCC over ten years. Moreover, the impacts of LUCC on hydrological processes before and after model modification are compared and analyzed at three levels (catchment scale, sub-basin scale and HRU scale). Comparative analysis of the Nash–Sutcliffe efficiency (NSE), RSR, and Pbias for model simulations before and after model improvement shows that NSE increased by up to 2%, RSR decreased from 0.73 to 0.72, and Pbias decreased from 0.13 to 0.05. The major LUCCs affecting hydrological elements in this basin are related to the degradation of grassland and snow/ice and the expansion of farmland and bare land. Model simulations before and after model improvement show that the average variations of flow components in typical sub-basins (surface runoff, lateral flow and groundwater flow) changed by +11.09%, −4.51%, and −6.58%, and +10.53%, −1.55%, and −8.98% from the base-period model scenario, respectively. Moreover, the spatial response of surface runoff at the HRU level reveals clear spatial differences before and after model improvement. This alternative approach illustrates the potential
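The three goodness-of-fit statistics quoted above have standard definitions (sign conventions for Pbias vary between authors; the form below treats positive values as underestimation). A self-contained sketch on invented observed/simulated series:

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / SST (1.0 is a perfect fit)."""
    m = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - m) ** 2 for o in obs)
    return 1.0 - sse / sst

def rsr(obs, sim):
    """RMSE normalized by the standard deviation of observations: sqrt(SSE / SST)."""
    m = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - m) ** 2 for o in obs)
    return math.sqrt(sse / sst)

def pbias(obs, sim):
    """Percent bias, one common convention: 100 * sum(obs - sim) / sum(obs)."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

# Invented observed and simulated discharge series
obs = [1.0, 2.0, 3.0, 4.0, 5.0]
sim = [1.1, 1.9, 3.2, 3.9, 5.1]

scores = (nse(obs, sim), rsr(obs, sim), pbias(obs, sim))
```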
Maximum entropy method in momentum density reconstruction
International Nuclear Information System (INIS)
Dobrzynski, L.; Holas, A.
1997-01-01
The Maximum Entropy Method (MEM) is applied to the reconstruction of 3-dimensional electron momentum density distributions observed through a set of Compton profiles measured along various crystallographic directions. It is shown that the reconstruction of the electron momentum density may be reliably carried out with the aid of a simple iterative algorithm suggested originally by Collins. A number of distributions have been simulated in order to check the performance of MEM. It is shown that MEM can be recommended as a model-free approach. (author). 13 refs, 1 fig
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T(x). Under mild conditions, the distribution p(x) having the highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
Energy Technology Data Exchange (ETDEWEB)
Malatesta, G; Mannucci, G; Demofonti, G [Centro Sviluppo Materiali S.p.A., Rome (Italy); Cumino, G [TenarisDalmine (Italy); Izquierdo, A; Tivelli, M [Tenaris Group (Mexico); Quintanilla, H [TENARIS Group (Mexico). TAMSA
2005-07-01
Nowadays, specifications require a strict yield-to-tensile (Y/T) ratio limitation; nevertheless, a fully accepted engineering assessment of its influence on pipeline integrity is still lacking. A probabilistic analysis based on a structural reliability approach (Limit State Design, LSD) was made, aimed at quantifying the influence of the Y/T ratio on the failure probabilities of offshore pipelines. In particular, Tenaris seamless pipe data were used as input for the probabilistic failure analysis. The LSD approach has been applied to two actual deep-water design cases selected for this purpose, and the most relevant failure modes have been considered. The main result of the work is that the quantitative effect of the Y/T ratio on the failure probabilities of a deep-water pipeline is not as large as expected; it has a minor effect, especially when failure modes are governed by Y only. (author)
Maximum-likelihood estimation of recent shared ancestry (ERSA).
Huff, Chad D; Witherspoon, David J; Simonson, Tatum S; Xing, Jinchuan; Watkins, W Scott; Zhang, Yuhua; Tuohy, Therese M; Neklason, Deborah W; Burt, Randall W; Guthery, Stephen L; Woodward, Scott R; Jorde, Lynn B
2011-05-01
Accurate estimation of recent shared ancestry is important for genetics, evolution, medicine, conservation biology, and forensics. Established methods estimate kinship accurately for first-degree through third-degree relatives. We demonstrate that chromosomal segments shared by two individuals due to identity by descent (IBD) provide much additional information about shared ancestry. We developed a maximum-likelihood method for the estimation of recent shared ancestry (ERSA) from the number and lengths of IBD segments derived from high-density SNP or whole-genome sequence data. We used ERSA to estimate relationships from SNP genotypes in 169 individuals from three large, well-defined human pedigrees. ERSA is accurate to within one degree of relationship for 97% of first-degree through fifth-degree relatives and 80% of sixth-degree and seventh-degree relatives. We demonstrate that ERSA's statistical power approaches the maximum theoretical limit imposed by the fact that distant relatives frequently share no DNA through a common ancestor. ERSA greatly expands the range of relationships that can be estimated from genetic data and is implemented in a freely available software package.
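The likelihood idea in the abstract can be caricatured in a few lines. Under a deliberately simplified model (not ERSA's actual likelihood): IBD segment lengths in centimorgans are exponential with mean 100/d for d meioses separating the pair, and the degree is recovered by maximizing the log-likelihood over candidate values. The data below are simulated, not real genotypes.

```python
import math, random

# Toy maximum-likelihood degree-of-relationship estimation from IBD
# segment lengths. Simplified stand-in model: lengths ~ Exp(d/100) cM
# for d meioses. Not ERSA's full likelihood (which also models the
# *number* of segments and population background sharing).
random.seed(1)
true_d = 4
segments = [random.expovariate(true_d / 100) for _ in range(40)]

def log_lik(d, segs):
    rate = d / 100
    return sum(math.log(rate) - rate * s for s in segs)

ml_d = max(range(1, 10), key=lambda d: log_lik(d, segments))
print(ml_d)   # maximum-likelihood number of meioses
```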
Energy Technology Data Exchange (ETDEWEB)
Koenig, Michael [Institut fuer Theoretische Festkoerperphysik, Universitaet Karlsruhe (Germany); Karlsruhe School of Optics and Photonics (KSOP), Universitaet Karlsruhe (Germany); Niegemann, Jens; Tkeshelashvili, Lasha; Busch, Kurt [Institut fuer Theoretische Festkoerperphysik, Universitaet Karlsruhe (Germany); DFG Forschungszentrum Center for Functional Nanostructures (CFN), Universitaet Karlsruhe (Germany); Karlsruhe School of Optics and Photonics (KSOP), Universitaet Karlsruhe (Germany)
2008-07-01
Numerical simulations of metallic nano-structures are crucial for the efficient design of plasmonic devices. Conventional time-domain solvers such as FDTD introduce large numerical errors especially at metallic surfaces. Our approach combines a discontinuous Galerkin method on an adaptive mesh for the spatial discretisation with a Krylov-subspace technique for the time-stepping procedure. Thus, the higher-order accuracy in both time and space is supported by unconditional stability. As illustrative examples, we compare numerical results obtained with our method against analytical reference solutions and results from FDTD calculations.
Double-tailored nonimaging reflector optics for maximum-performance solar concentration.
Goldstein, Alex; Gordon, Jeffrey M
2010-09-01
A nonimaging strategy that tailors two mirror contours for concentration near the étendue limit is explored, prompted by solar applications where a sizable gap between the optic and absorber is required. Subtle limitations of this simultaneous multiple surface method approach are derived, rooted in the manner in which phase space boundaries can be tailored according to the edge-ray principle. The fundamental categories of double-tailored reflective optics are identified, only a minority of which can pragmatically offer maximum concentration at high collection efficiency. Illustrative examples confirm that acceptance half-angles as large as 30 mrad can be realized at a flux concentration of approximately 1000.
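The quoted performance can be checked against the thermodynamic (étendue) limit. For 3-D concentration with acceptance half-angle θ, exit medium of index n and full π/2 exit angle, the limit is C_max = (n/sin θ)²; with n = 1 and the abstract's 30 mrad:

```python
import math

# Etendue limit on 3-D concentration: C_max = (n sin(theta_out)/sin(theta))^2,
# which for air (n = 1) and a pi/2 exit angle reduces to 1/sin^2(theta).
theta = 30e-3                        # 30 mrad acceptance half-angle
c_max = 1.0 / math.sin(theta) ** 2
print(round(c_max))                  # → 1111
```

So the ~1000x flux concentration reported in the abstract sits at roughly 90% of the étendue limit for that acceptance angle.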
Generic maximum likely scale selection
DEFF Research Database (Denmark)
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based…
Lemos, José P. S.; Minamitsuji, Masato; Zaslavskii, Oleg B.
2017-10-01
Using a thin shell, the first law of thermodynamics, and a unified approach, we study the thermodynamics and find the entropy of a (2+1)-dimensional extremal rotating Bañados-Teitelboim-Zanelli (BTZ) black hole. The shell in (2+1) dimensions, i.e., a ring, is taken to be circularly symmetric and rotating, with the inner region being a ground state of the anti-de Sitter spacetime and the outer region being the rotating BTZ spacetime. The extremal rotating BTZ black hole can be obtained in three different ways depending on the way the shell approaches its own gravitational or horizon radius. These ways are explicitly worked out. The resulting three cases give that the BTZ black hole entropy is either the Bekenstein-Hawking entropy, S = A+/(4G), or an arbitrary function of A+, S = S(A+), where A+ = 2πr+ is the area, i.e., the perimeter, of the event horizon in (2+1) dimensions. We speculate that the entropy of an extremal black hole should obey 0 ≤ S(A+) ≤ A+/(4G). We also show that the contributions from the various thermodynamic quantities, namely, the mass, the circular velocity, and the temperature, to the entropy in all three cases are distinct. This study complements the previous studies in thin shell thermodynamics and entropy for BTZ black holes. It also corroborates the results found for a (3+1)-dimensional extremal electrically charged Reissner-Nordström black hole.
Directory of Open Access Journals (Sweden)
Ana F. Kozmidis-Petrović
2014-06-01
Full Text Available The Vogel-Fulcher-Tammann (VFT, Avramov and Milchev (AM as well as Mauro, Yue, Ellison, Gupta and Allan (MYEGA functions of viscous flow are analysed when the compositionally independent high temperature viscosity limit is introduced instead of the compositionally dependent parameter η∞ . Two different approaches are adopted. In the first approach, it is assumed that each model should have its own (average high-temperature viscosity parameter η∞ . In that case, η∞ is different for each of these three models. In the second approach, it is assumed that the high-temperature viscosity is a truly universal value, independent of the model. In this case, the parameter η∞ would be the same and would have the same value: log η∞ = −1.93 dPa·s for all three models. 3D diagrams can successfully predict the difference in behaviour of viscous functions when average or universal high temperature limit is applied in calculations. The values of the AM functions depend, to a greater extent, on whether the average or the universal value for η∞ is used which is not the case with the VFT model. Our tests and values of standard error of estimate (SEE show that there are no general rules whether the average or universal high temperature viscosity limit should be applied to get the best agreement with the experimental functions.
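The universal high-temperature limit can be made concrete. Writing the VFT and MYEGA functions in the standard (Tg, fragility m) parameterization with the abstract's universal log η∞ = −1.93 (the Tg and m values below are illustrative, not fitted data from the paper):

```python
import math

# VFT and MYEGA viscosity curves with a universal high-temperature limit
# log10(eta_inf) = -1.93 (from the abstract). Both are normalized so that
# log10(eta) = 12 at T = Tg; Tg and fragility m below are made-up inputs.
LOG_ETA_INF = -1.93

def log_eta_vft(T, Tg, m):
    a = 12 - LOG_ETA_INF
    return LOG_ETA_INF + a * a / (m * (T / Tg - 1) + a)

def log_eta_myega(T, Tg, m):
    a = 12 - LOG_ETA_INF
    x = Tg / T
    return LOG_ETA_INF + a * x * math.exp((m / a - 1) * (x - 1))

Tg, m = 600.0, 35.0
print(round(log_eta_vft(Tg, Tg, m), 3), round(log_eta_myega(Tg, Tg, m), 3))
# → 12.0 12.0  (both curves pass through log eta = 12 at Tg by construction)
```

With a shared η∞, differences between the models then come entirely from how each function interpolates between Tg and the high-temperature limit, which is the comparison the abstract makes.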
International Nuclear Information System (INIS)
Shukla, Anant Kant; Ramamohan, T R; Srinivas, S
2014-01-01
In this paper we propose a technique to obtain limit cycles and quasi-periodic solutions of forced nonlinear oscillators. We apply this technique to the forced Van der Pol oscillator and the forced Van der Pol Duffing oscillator and obtain for the first time their limit cycles (periodic) and quasi-periodic solutions analytically. We introduce a modification of the homotopy analysis method to obtain these solutions. We minimize the square residual error to obtain accurate approximations to these solutions. The obtained analytical solutions are convergent and agree well with numerical solutions even at large times. Time trajectories of the solution, its first derivative and phase plots are presented to confirm the validity of the proposed approach. We also provide rough criteria for the determination of parameter regimes which lead to limit cycle or quasi-periodic behaviour. (papers)
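The "minimize the square residual" step can be illustrated on the unforced Van der Pol limit cycle with a deliberately crude one-harmonic ansatz (a stand-in for the paper's homotopy-analysis machinery, not its actual method): substitute x = A cos(ωt), project the equation residual onto the ansatz harmonics, and grid-search the (A, ω) that annihilates the projections.

```python
import numpy as np

# One-harmonic Galerkin/harmonic-balance sketch for the Van der Pol
# oscillator x'' - mu (1 - x^2) x' + x = 0. The squared Galerkin
# projections of the residual play the role of the "square residual
# error" being minimized; this toy is not the paper's homotopy method.
mu = 0.1
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)

def objective(A, w):
    x = A * np.cos(w * t)
    dx = -A * w * np.sin(w * t)
    ddx = -A * w * w * np.cos(w * t)
    r = ddx - mu * (1 - x**2) * dx + x              # equation residual
    return np.mean(r * np.cos(w * t))**2 + np.mean(r * np.sin(w * t))**2

As, ws = np.linspace(0.5, 3.0, 126), np.linspace(0.8, 1.2, 81)
_, A, w = min((objective(A, w), A, w) for A in As for w in ws)
print(round(A, 2), round(w, 3))   # classic result: amplitude ~2, w ~1
```

The grid search recovers the textbook limit-cycle amplitude of 2 and frequency near 1 for small mu, which is what any residual-minimization scheme must reproduce in this benchmark.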
Extreme Maximum Land Surface Temperatures.
Garratt, J. R.
1992-09-01
There are numerous reports in the literature of observations of land surface temperatures. Some of these, almost all made in situ, reveal maximum values in the 50°-70°C range, with a few, made in desert regions, near 80°C. Consideration of a simplified form of the surface energy balance equation, utilizing likely upper values of absorbed shortwave flux (1000 W m⁻²) and screen air temperature (55°C), suggests that surface temperatures in the vicinity of 90°-100°C may occur for dry, darkish soils of low thermal conductivity (0.1-0.2 W m⁻¹ K⁻¹). Numerical simulations confirm this and suggest that temperature gradients in the first few centimeters of soil may reach 0.5°-1°C mm⁻¹ under these extreme conditions. The study bears upon the intrinsic interest of identifying extreme maximum temperatures and yields interesting information regarding the comfort zone of animals (including man).
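A back-of-envelope version of that energy-balance argument: balance the absorbed shortwave against net longwave emission, sensible heat, and a soil heat flux, and solve for the surface temperature. Every parameter below (aerodynamic resistance, longwave estimate, soil-flux fraction) is a plausible assumption for a dry dark soil, not the paper's exact input.

```python
# Simplified surface energy balance: S_abs = net longwave + sensible
# heat + soil heat flux, solved for the surface temperature Ts by
# bisection. Parameter values are illustrative assumptions only.
SIGMA = 5.67e-8
S_abs = 1000.0                      # absorbed shortwave, W m^-2
T_air = 55 + 273.15                 # screen air temperature, K
L_down = SIGMA * T_air**4           # crude incoming-longwave estimate
r_H, rho_cp = 70.0, 1200.0          # aerodyn. resistance s m^-1, J m^-3 K^-1
G_frac = 0.1                        # soil heat flux as fraction of S_abs

def residual(Ts):
    H = rho_cp * (Ts - T_air) / r_H
    return S_abs - (SIGMA * Ts**4 - L_down) - H - G_frac * S_abs

lo, hi = T_air, T_air + 100
for _ in range(60):                 # bisection for the balance temperature
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if residual(mid) > 0 else (lo, mid)
print(round(lo - 273.15, 1), "deg C")
```

With these assumed numbers the balance lands near 90°C, the same neighborhood the abstract derives for dry, darkish, poorly conducting soils.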
Directory of Open Access Journals (Sweden)
Karwan Fatah-Black
2013-03-01
Full Text Available This article considers what the migration circuits to and from Suriname can tell us about Dutch early modern colonisation in the Atlantic world. Did the Dutch have an Atlantic empire that can be studied by treating it as an integrated space, as suggested by New Imperial Historians, or did colonisation rely on circuits outside Dutch control, stretching beyond its imperial space? An empire-centred approach has dominated the study of Suriname’s history and has largely glossed over the routes taken by European migrants to and from the colony. When the empire-centred perspective is transcended it becomes possible to see that colonists arrived in Suriname from a range of different places around the Atlantic and the European hinterland. The article takes an Atlantic or global perspective to demonstrate the choices available to colonists and the networks through which they moved.
Energy Technology Data Exchange (ETDEWEB)
Isaacs, Sivan, E-mail: sivan.isaacs@gmail.com; Abdulhalim, Ibrahim [Department of Electro-Optical Engineering and TheIlse Katz Institute for Nanoscale Science and Technology, Ben Gurion University of the Negev, Beer Sheva 84105 (Israel); NEW CREATE Programme, School of Materials Science and Engineering, 1 CREATE Way, Research Wing, #02-06/08, Singapore 138602 (Singapore)
2015-05-11
Using an insulator-metal-insulator structure with a dielectric having a refractive index (RI) larger than that of the analyte, a long-range surface plasmon (SP) resonance exhibiting ultra-high penetration depth is demonstrated for sensing applications of large bioentities at wavelengths in the visible range. Based on the diverging beam approach in the Kretschmann-Raether configuration, one of the SP resonances is shown to shift in response to changes in the analyte RI while the other is fixed; thus, it can be used as a built-in reference. Combining the high sensitivity, high penetration depth, and self-reference with the diverging beam approach, in which a dark line is detected using a large number of camera pixels and a smart algorithm for sub-pixel resolution, a sensor with an ultra-low detection limit is demonstrated, suitable for large bioentities.
International Nuclear Information System (INIS)
Gan Yanbiao; Li Yingjun; Xu Aiguo; Zhang Guangcai
2011-01-01
We further develop the lattice Boltzmann (LB) model [Physica A 382 (2007) 502] for compressible flows from two aspects. Firstly, we modify the Bhatnagar-Gross-Krook (BGK) collision term in the LB equation, which makes the model suitable for simulating flows with different Prandtl numbers. Secondly, the flux limiter finite difference (FLFD) scheme is employed to calculate the convection term of the LB equation, which makes the unphysical oscillations at discontinuities be effectively suppressed and the numerical dissipations be significantly diminished. The proposed model is validated by recovering results of some well-known benchmarks, including (i) The thermal Couette flow; (ii) One- and two-dimensional Riemann problems. Good agreements are obtained between LB results and the exact ones or previously reported solutions. The flexibility, together with the high accuracy of the new model, endows the proposed model considerable potential for tracking some long-standing problems and for investigating nonlinear nonequilibrium complex systems. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics)
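The role of the flux limiter in suppressing oscillations at discontinuities can be shown on the simplest possible host problem: 1-D linear advection with a minmod-limited second-order upwind flux. This is only a demonstration of flux-limiter differencing of the convection term, not the paper's lattice Boltzmann scheme.

```python
import numpy as np

# Minmod flux-limited upwind step for u_t + a u_x = 0 (a > 0, periodic).
# The limited slope keeps the scheme TVD: a square wave is advected
# without the over/undershoots an unlimited 2nd-order scheme produces.
def minmod(a, b):
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

n, c = 200, 0.4                              # cells, CFL number
u = np.where((np.arange(n) > 50) & (np.arange(n) < 100), 1.0, 0.0)
for _ in range(100):
    du = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))
    flux = u + 0.5 * (1 - c) * du            # reconstruction at i+1/2
    u = u - c * (flux - np.roll(flux, 1))
print(round(float(u.min()), 6), round(float(u.max()), 6))
```

The solution stays inside the initial bounds [0, 1] and conserves the integral, which is exactly the "unphysical oscillations suppressed" behavior the abstract attributes to the FLFD treatment.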
Gutfrind, Christophe; Dufour, Laurent; Liebart, Vincent; Vannier, Jean-Claude; Vidal, Pierre
2016-05-20
The purpose of this article is to describe the design of a limited stroke actuator and the corresponding prototype to drive a Low Pressure (LP) Exhaust Gas Recirculation (EGR) valve for use in Internal Combustion Engines (ICEs). The direct drive actuator topology is an axial flux machine with two air gaps in order to minimize the rotor inertia and a bipolar surface-mounted permanent magnet in order to respect an 80° angular stroke. Firstly, the actuator will be described and optimized under constraints of a 150 ms time response, a 0.363 N·m minimal torque on an angular range from 0° to 80° and prototyping constraints. Secondly, the finite element method (FEM) using the FLUX-3D(®) software (CEDRAT, Meylan, France) will be used to check the actuator performances with consideration of the nonlinear effect of the iron material. Thirdly, a prototype will be made and characterized to compare its measurement results with the analytical model and the FEM model results. With these electromechanical behavior measurements, a numerical model is created with Simulink(®) in order to simulate an EGR system with this direct drive actuator under all operating conditions. Last but not least, the energy consumption of this machine will be estimated to evaluate the efficiency of the proposed EGR electromechanical system.
Gintant, Gary A
2008-08-01
The successful development of novel drugs requires the ability to detect (and avoid) compounds that may provoke Torsades-de-Pointes (TdeP) arrhythmia while endorsing those compounds with minimal torsadogenic risk. As TdeP is a rare arrhythmia not readily observed during clinical or post-marketing studies, numerous preclinical models are employed to assess delayed or altered ventricular repolarization (surrogate markers linked to enhanced proarrhythmic risk). This review evaluates the advantages and limitations of selected preclinical models (ranging from the simplest cellular hERG current assay to the more complex in vitro perfused ventricular wedge and Langendorff heart preparations and in vivo chronic atrio-ventricular (AV)-node block model). Specific attention is paid to the utility of concentration-response relationships and "risk signatures" derived from these studies, with the intention of moving beyond predicting clinical QT prolongation and towards prediction of TdeP risk. While the more complex proarrhythmia models may be suited to addressing questionable or conflicting proarrhythmic signals obtained with simpler preclinical assays, further benchmarking of proarrhythmia models is required for their use in the robust evaluation of safety margins. In the future, these models may be able to reduce unwarranted attrition of evolving compounds while becoming pivotal in the balanced integrated risk assessment of advancing compounds.
Directory of Open Access Journals (Sweden)
Grignon JS
2014-05-01
Full Text Available Jessica S Grignon,1,2 Jenny H Ledikwe,1,2 Ditsapelo Makati,2 Robert Nyangah,2 Baraedi W Sento,2 Bazghina-werq Semo1,2 1Department of Global Health, University of Washington, Seattle, WA, USA; 2International Training and Education Center for Health, Gaborone, Botswana Abstract: To address health systems challenges in limited-resource settings, global health initiatives, particularly the President's Emergency Plan for AIDS Relief, have seconded health workers to the public sector. Implementation considerations for secondment as a health workforce development strategy are not well documented. The purpose of this article is to present outcomes, best practices, and lessons learned from a President's Emergency Plan for AIDS Relief-funded secondment program in Botswana. Outcomes are documented across four World Health Organization health systems' building blocks. Best practices include documentation of joint stakeholder expectations, collaborative recruitment, and early identification of counterparts. Lessons learned include inadequate ownership, a two-tier employment system, and ill-defined position duration. These findings can inform program and policy development to maximize the benefit of health workforce secondment. Secondment requires substantial investment, and emphasis should be placed on high-level technical positions responsible for building systems, developing health workers, and strengthening government to translate policy into programs. Keywords: human resources, health policy, health worker, HIV/AIDS, PEPFAR
Fernández-Robredo, P.; Sancho, A.; Johnen, S.; Recalde, S.; Gama, N.; Thumann, G.; Groll, J.; García-Layana, A.
2014-01-01
Age-related macular degeneration (AMD) is the leading cause of blindness in the Western world. With an ageing population, it is anticipated that the number of AMD cases will increase dramatically, making a solution to this debilitating disease an urgent requirement for the socioeconomic future of the European Union and worldwide. The present paper reviews the limitations of the current therapies as well as the socioeconomic impact of the AMD. There is currently no cure available for AMD, and even palliative treatments are rare. Treatment options show several side effects, are of high cost, and only treat the consequence, not the cause of the pathology. For that reason, many options involving cell therapy mainly based on retinal and iris pigment epithelium cells as well as stem cells are being tested. Moreover, tissue engineering strategies to design and manufacture scaffolds to mimic Bruch's membrane are very diverse and under investigation. Both alternative therapies are aimed to prevent and/or cure AMD and are reviewed herein. PMID:24672707
De Backer, A; Martinez, G T; Rosenauer, A; Van Aert, S
2013-11-01
In the present paper, a statistical model-based method to count the number of atoms of monotype crystalline nanostructures from high resolution high-angle annular dark-field (HAADF) scanning transmission electron microscopy (STEM) images is discussed in detail together with a thorough study on the possibilities and inherent limitations. In order to count the number of atoms, it is assumed that the total scattered intensity scales with the number of atoms per atom column. These intensities are quantitatively determined using model-based statistical parameter estimation theory. The distribution describing the probability that intensity values are generated by atomic columns containing a specific number of atoms is inferred on the basis of the experimental scattered intensities. Finally, the number of atoms per atom column is quantified using this estimated probability distribution. The number of atom columns available in the observed STEM image, the number of components in the estimated probability distribution, the width of the components of the probability distribution, and the typical shape of a criterion to assess the number of components in the probability distribution directly affect the accuracy and precision with which the number of atoms in a particular atom column can be estimated. It is shown that single atom sensitivity is feasible taking the latter aspects into consideration. © 2013 Elsevier B.V. All rights reserved.
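The core assumption (scattered intensity scales with the number of atoms in a column) can be exercised with a toy estimator. The grid-search below is an illustrative stand-in for the paper's model-based statistical parameter estimation, and the simulated intensities are invented.

```python
import numpy as np

# Toy atom counting: column intensities cluster around integer multiples
# of a single-atom increment. Recover the increment by grid-searching the
# spacing that minimizes the round-off residual, then round to counts.
rng = np.random.default_rng(2)
true_counts = rng.integers(1, 6, size=200)
intensity = true_counts * 1.0 + rng.normal(0, 0.08, size=200)

# search restricted to [0.7, 1.5] to avoid the sub-harmonic g = 0.5
grid = np.linspace(0.7, 1.5, 161)
err = [np.mean((intensity - np.round(intensity / g) * g) ** 2) for g in grid]
inc = grid[int(np.argmin(err))]
counts = np.round(intensity / inc).astype(int)
print(round(float(inc), 3), float((counts == true_counts).mean()))
```

Note how precision degrades as the intensity noise grows relative to the increment: this is the same width-of-components limitation on single-atom sensitivity that the abstract analyses.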
Frykedal, Karin Forslund; Rosander, Michael; Barimani, Mia; Berlin, Anita
2018-01-01
The aim of this study was to describe and understand parental group (PG) leaders' experiences of creating conditions for interaction and communication. The data consisted of 10 interviews with 14 leaders. The transcribed interviews were analysed using thematic analysis. The results showed that the leaders' ambition was to create a parent-centred learning environment by establishing conditions for interaction and communication between the parents in the PGs. However, the leaders' experience was that their professional competencies were insufficient and that they lacked pedagogical tools to create constructive group discussions. Nevertheless, they found other ways to facilitate interactive processes. Based on their experience in the PG, the leaders constructed informal socio-emotional roles for themselves (e.g. caring role and personal role) and let their more formal task roles (e.g. professional role, group leader and consulting role) recede into the background, so as to remove the imbalance of power between the leaders and the parents. They believed this would make the parents feel more confident and make it easier for them to start communicating and interacting. This personal approach places them in a vulnerable position in the PG, in which it is easy for them to feel offended by parents' criticism, questioning or silence.
Energy Technology Data Exchange (ETDEWEB)
Barboza, Luciano Vitoria [Sul-riograndense Federal Institute for Education, Science and Technology (IFSul), Pelotas, RS (Brazil)], E-mail: luciano@pelotas.ifsul.edu.br
2009-07-01
This paper presents an overview of the maximum loadability problem and aims to study the main factors that limit this loadability. Specifically, this study focuses its attention on determining which electric system buses directly influence the power demand supply. The proposed approach uses the conventional maximum loadability method modelled by an optimization problem. The solution of this model is performed using the Interior Point methodology. As a consequence of this solution method, the Lagrange multipliers are used as parameters that identify the probable 'bottlenecks' in the electric power system. The study also shows the relationship between the Lagrange multipliers and the cost function in the Interior Point optimization, interpreted as sensitivity parameters. In order to illustrate the proposed methodology, the approach was applied to an IEEE test system and, to assess its performance, a real equivalent electric system from the South-Southeast region of Brazil was simulated. (author)
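The maximum loadability ("nose point") idea is easy to see on the smallest possible network. For a unity-power-factor load fed from a 1.0 pu slack bus through reactance X, setting Q = 0 gives load voltage V = cos δ and deliverable power P = sin(2δ)/(2X), maximal at δ = 45°. This toy two-bus case is illustrative only, not the paper's interior-point formulation.

```python
import numpy as np

# Two-bus maximum loadability: sweep the angle, find the nose point,
# and compare with the closed form P_max = 1/(2X). (Toy network.)
X = 0.2
delta = np.linspace(0, np.pi / 2, 10001)
P = np.sin(2 * delta) / (2 * X)
print(round(float(P.max()), 4), round(1 / (2 * X), 4))   # → 2.5 2.5
```

In the paper's full formulation the same limit surface is found by an optimization, and the Lagrange multipliers report how much P_max would move per unit change in each constraint, i.e. which buses are the bottlenecks.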
System for memorizing maximum values
Bozeman, Richard J., Jr.
1992-08-01
The invention discloses a system capable of memorizing maximum sensed values. The system includes conditioning circuitry which receives the analog output signal from a sensor transducer. The conditioning circuitry rectifies and filters the analog signal and provides an input signal to a digital driver, which may be either linear or logarithmic. The driver converts the analog signal to discrete digital values, which in turn trigger an output signal on one of a plurality of driver output lines n. The particular output line selected depends on the converted digital value. A microfuse memory device connects across the driver output lines, with n segments. Each segment is associated with one driver output line, and includes a microfuse that is blown when a signal appears on the associated driver output line.
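A software sketch of that peak-hold scheme: quantize each analog sample to one of n discrete driver levels and latch the highest level ever seen, mimicking the one-time "blown microfuse" memory. The level count and sample stream below are made up for illustration.

```python
# Peak-hold with irreversible latching, echoing the patent's microfuse
# memory: once a level's fuse is "blown" it stays blown, so the maximum
# sensed value survives power loss. Parameters are illustrative.
LEVELS = 8                          # number of driver output lines n
FULL_SCALE = 1.0

def quantize(x):
    return min(int(x / FULL_SCALE * LEVELS), LEVELS - 1)

blown = [False] * LEVELS            # microfuse array: latched, one-way
for sample in [0.1, 0.42, 0.9, 0.3, 0.55]:
    blown[quantize(sample)] = True

top = max(i for i, b in enumerate(blown) if b)
print(top)                          # → 7 (the 0.9 sample's level)
```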
Scintillation counter, maximum gamma aspect
International Nuclear Information System (INIS)
Thumim, A.D.
1975-01-01
A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassemblable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)
Directory of Open Access Journals (Sweden)
Carlos Mauricio Soares de Andrade
2001-07-01
accumulation was twice that of the control, showing that grass growth was being restricted by the low N availability in the soil. The strong response to N fertilization showed that shading was not the only factor limiting understorey productivity and, also, that the established Panicum maximum plants were not being negatively and significantly affected by allelopathic substances produced by the eucalypts.
Schulz-Hildebrandt, H.; Münter, Michael; Ahrens, M.; Spahr, H.; Hillmann, D.; König, P.; Hüttmann, G.
2018-03-01
Optical coherence tomography (OCT) images scattering tissues with 5 to 15 μm resolution. This is usually not sufficient for a distinction of cellular and subcellular structures. Increasing axial and lateral resolution and compensating artifacts caused by dispersion and aberrations are required to achieve cellular and subcellular resolution. This includes defocus, which limits the usable depth of field at high lateral resolution. OCT gives access to the phase of the scattered light, and hence correction of dispersion and aberrations is possible by numerical algorithms. Here we present a unified dispersion/aberration correction which is based on a polynomial parameterization of the phase error and an optimization of the image quality using Shannon's entropy. For validation, a supercontinuum light source and a custom-made spectrometer with 400 nm bandwidth were combined with a high-NA microscope objective in a setup for tissue and small animal imaging. Using this setup and computational corrections, volumetric imaging at 1.5 μm resolution is possible. Cellular and near-cellular resolution is demonstrated in porcine cornea and the Drosophila larva when computational correction of dispersion and aberrations is used. Due to the excellent correction of the used microscope objective, defocus was the main contribution to the aberrations. In addition, higher-order aberrations caused by the sample itself were successfully corrected. Dispersion and aberrations are closely related artifacts in microscopic OCT imaging. Hence they can be corrected in the same way by optimization of the image quality. This way, microscopic resolution is easily achieved in OCT imaging of static biological tissues.
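A 1-D toy of the entropy-driven correction: blur a sparse "image" with a known quadratic spectral phase (defocus-like), then recover the phase coefficient by minimizing the Shannon entropy of the reconstructed intensity. The signal, phase model, and grid search are all illustrative stand-ins for the paper's polynomial parameterization.

```python
import numpy as np

# Entropy-minimizing phase correction sketch: a sharp (sparse) image has
# low intensity entropy, so the coefficient that undoes the applied
# quadratic phase is found where the entropy of |reconstruction|^2 dips.
n = 256
img = np.zeros(n); img[[40, 90, 200]] = [1.0, 0.7, 0.5]   # sharp "image"
f = np.fft.fftfreq(n)
blur_coef = 80.0                     # defocus-like quadratic phase
data = np.fft.ifft(np.fft.fft(img) * np.exp(1j * blur_coef * (2*np.pi*f)**2))

def entropy(c):
    rec = np.fft.ifft(np.fft.fft(data) * np.exp(-1j * c * (2*np.pi*f)**2))
    p = np.abs(rec)**2
    p = p / p.sum()
    return -(p * np.log(p + 1e-12)).sum()

cs = np.linspace(0, 160, 321)
best = cs[int(np.argmin([entropy(c) for c in cs]))]
print(best)                          # recovered phase coefficient
```

Because dispersion and defocus both enter as phase errors on the complex signal, the same entropy criterion serves for either, which is the unification the abstract describes.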
Balzer, Laura; Staples, Patrick; Onnela, Jukka-Pekka; DeGruttola, Victor
2017-04-01
Several cluster-randomized trials are underway to investigate the implementation and effectiveness of a universal test-and-treat strategy on the HIV epidemic in sub-Saharan Africa. We consider nesting studies of pre-exposure prophylaxis within these trials. Pre-exposure prophylaxis is a general strategy where high-risk HIV- persons take antiretrovirals daily to reduce their risk of infection from exposure to HIV. We address how to target pre-exposure prophylaxis to high-risk groups and how to maximize power to detect the individual and combined effects of universal test-and-treat and pre-exposure prophylaxis strategies. We simulated 1000 trials, each consisting of 32 villages with 200 individuals per village. At baseline, we randomized the universal test-and-treat strategy. Then, after 3 years of follow-up, we considered four strategies for targeting pre-exposure prophylaxis: (1) all HIV- individuals who self-identify as high risk, (2) all HIV- individuals who are identified by their HIV+ partner (serodiscordant couples), (3) highly connected HIV- individuals, and (4) the HIV- contacts of a newly diagnosed HIV+ individual (a ring-based strategy). We explored two possible trial designs, and all villages were followed for a total of 7 years. For each village in a trial, we used a stochastic block model to generate bipartite (male-female) networks and simulated an agent-based epidemic process on these networks. We estimated the individual and combined intervention effects with a novel targeted maximum likelihood estimator, which used cross-validation to data-adaptively select from a pre-specified library the candidate estimator that maximized the efficiency of the analysis. The universal test-and-treat strategy reduced the 3-year cumulative HIV incidence by 4.0% on average. The impact of each pre-exposure prophylaxis strategy on the 4-year cumulative HIV incidence varied by the coverage of the universal test-and-treat strategy with lower coverage resulting in a larger
Directory of Open Access Journals (Sweden)
R. Ludwig
2003-01-01
Full Text Available Numerous applications of hydrological models have shown their capability to simulate hydrological processes with a reasonable degree of certainty. For flood modelling, the quality of precipitation data — the key input parameter — is very important but often remains questionable. This paper presents a critical review of experience in the EU-funded RAPHAEL project. Different meteorological data sources were evaluated to assess their applicability for flood modelling and forecasting in the Bavarian pre-alpine catchment of the Ammer river (709 km²), for which the hydrological aspects of runoff production are described as well as the complex nature of floods. Apart from conventional rain gauge data, forecasts from several Numerical Weather Prediction Models (NWP) as well as rain radar data are examined, scaled and applied within the framework of a GIS-structured and physically based hydrological model. Multi-scenario results are compared and analysed. The synergetic approach leads to promising results under certain meteorological conditions but emphasises various drawbacks. At present, NWPs are the only source of rainfall forecasts (up to 96 hours) with large spatial coverage and high temporal resolution. On the other hand, the coarse spatial resolution of NWP grids cannot yet address, adequately, the heterogeneous structures of orographic rainfields in complex convective situations; hence, a major downscaling problem for mountain catchment applications is introduced. As shown for two selected Ammer flood events, a high variability in prediction accuracy has still to be accepted at present. Sensitivity analysis of both meteo-data input and hydrological model performance in terms of process description are discussed and positive conclusions have been drawn for future applications of an advanced meteo-hydro model synergy. Keywords: RAPHAEL, modelling, forecasting, model coupling, PROMET-D, TOPMODEL
International Nuclear Information System (INIS)
Yamagami, Takuji; Kato, Takeharu; Hirota, Tatsuya; Yoshimatsu, Rika; Matsumoto, Tomohiro; Nishimura, Tsunehiko
2006-01-01
The goal of this study was to evaluate the efficacy of simple aspiration of air from the pleural space to prevent enlargement of the pneumothorax and avoid chest tube placement in cases of pneumothorax following interventional radiological procedures performed under computed tomography fluoroscopic guidance with the transthoracic percutaneous approach. While still on the scanner table, 102 patients underwent percutaneous manual aspiration of a moderate or large pneumothorax that had developed during mediastinal, lung, and transthoracic liver biopsies and ablations of lung and hepatic tumors (independent of symptoms). Air was aspirated from the pleural space through an 18- or 20-gauge intravenous catheter attached to a three-way stopcock and a 20- or 50-mL syringe. We evaluated the management of each such case during and after manual aspiration. In 87 of the 102 patients (85.3%), the pneumothorax had resolved completely on follow-up chest radiographs without chest tube placement; chest tube placement was required in the remaining 15 patients. The likelihood of requiring chest tube insertion increased significantly with the volume of aspirated air. When receiver-operating characteristic curves were applied retrospectively, the optimal cutoff level of aspirated air on which to base a decision to abandon manual aspiration alone and resort to chest tube placement was 670 mL. Percutaneous manual aspiration of the pneumothorax performed immediately after the procedure might prevent progressive pneumothorax and eliminate the need for chest tube placement. However, when the amount of aspirated air is large (more than 670 mL), chest tube placement should be considered.
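The retrospective cutoff search described above can be illustrated with Youden's J statistic on an empirical ROC curve. All volumes and outcomes below are invented for illustration (chosen so that the cutoff lands at the study's 670 mL); the study's actual patient data are not reproduced here.

```python
def youden_cutoff(volumes, needed_tube):
    """Return the cutoff maximizing Youden's J = sensitivity + specificity - 1."""
    pos = sum(needed_tube)                 # cases that needed a chest tube
    neg = len(needed_tube) - pos           # cases that resolved
    best_cut, best_j = None, -1.0
    for c in sorted(set(volumes)):
        tp = sum(1 for v, y in zip(volumes, needed_tube) if v >= c and y)
        fp = sum(1 for v, y in zip(volumes, needed_tube) if v >= c and not y)
        j = tp / pos + (1 - fp / neg) - 1
        if j > best_j:
            best_j, best_cut = j, c
    return best_cut

# Invented aspirated volumes (mL); True = chest tube eventually required
volumes =     [120,   250,   300,   480,   650,   670,  700,  820,  900,  1100]
needed_tube = [False, False, False, False, False, True, True, True, True, True]
print(youden_cutoff(volumes, needed_tube))  # → 670
```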
Mikami, Masato; Saputro, Herman; Seo, Takehiko; Oyagi, Hiroshi
2018-03-01
Stable operation of liquid-fueled combustors requires the group combustion of fuel spray. Our study employs a percolation approach to describe unsteady group-combustion excitation based on findings obtained from microgravity experiments on the flame spread of fuel droplets. We focus on droplet clouds distributed randomly in three-dimensional square lattices with a low-volatility fuel, such as n-decane in room-temperature air, where the pre-vaporization effect is negligible. We also focus on the flame spread in dilute droplet clouds near the group-combustion-excitation limit, where the droplet interactive effect is assumed negligible. The results show that the occurrence probability of group combustion sharply decreases with the increase in mean droplet spacing around a specific value, which is termed the critical mean droplet spacing. If the lattice size is at smallest about ten times as large as the flame-spread limit distance, the flame-spread characteristics are similar to those over an infinitely large cluster. The number density of unburned droplets remaining after completion of burning attained maximum around the critical mean droplet spacing. Therefore, the critical mean droplet spacing is a good index for stable combustion and unburned hydrocarbon. In the critical condition, the flame spreads through complicated paths, and thus the characteristic time scale of flame spread over droplet clouds has a very large value. The overall flame-spread rate of randomly distributed droplet clouds is almost the same as the flame-spread rate of a linear droplet array except over the flame-spread limit.
Quality assurance of nuclear analytical techniques based on Bayesian characteristic limits
International Nuclear Information System (INIS)
Michel, R.
2000-01-01
Based on Bayesian statistics, characteristic limits such as the decision threshold, the detection limit and confidence limits can be calculated taking into account all sources of experimental uncertainty. This approach separates the complete evaluation of a measurement according to the ISO Guide to the Expression of Uncertainty in Measurement from the determination of the characteristic limits. Using the principle of maximum entropy, the characteristic limits are determined from the complete standard uncertainty of the measurand. (author)
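A minimal numerical sketch of such characteristic limits, in the spirit of ISO 11929: the decision threshold follows directly from the uncertainty of the null measurement, while the detection limit is found by fixed-point iteration. The uncertainty model u(y) = sqrt(u0**2 + (urel*y)**2) and all numbers are assumptions for illustration, not taken from the paper.

```python
from math import sqrt

def characteristic_limits(u0, urel, k_alpha=1.645, k_beta=1.645, tol=1e-9):
    """Decision threshold y* and detection limit y# for a measurand whose
    standard uncertainty at true value y is assumed to follow
    u(y) = sqrt(u0**2 + (urel*y)**2), with u0 the uncertainty at y = 0."""
    y_star = k_alpha * u0                  # decision threshold
    y_sharp = 2 * y_star                   # starting guess for detection limit
    while True:
        new = y_star + k_beta * sqrt(u0**2 + (urel * y_sharp)**2)
        if abs(new - y_sharp) < tol:
            return y_star, new
        y_sharp = new

y_star, y_sharp = characteristic_limits(u0=0.5, urel=0.05)
print(round(y_star, 4), round(y_sharp, 4))
```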
Shen, Hua
2016-10-19
A maximum-principle-satisfying space-time conservation element and solution element (CE/SE) scheme is constructed to solve a reduced five-equation model coupled with the stiffened equation of state for compressible multifluids. We first derive a sufficient condition for CE/SE schemes to satisfy the maximum principle when solving a general conservation law. We then introduce a slope limiter that enforces this sufficient condition and is applicable to both central and upwind CE/SE schemes. Finally, we implement the upwind maximum-principle-satisfying CE/SE scheme to solve the volume-fraction-based five-equation model for compressible multifluids. Several numerical examples are carried out to carefully examine the accuracy, efficiency, conservativeness and maximum-principle-satisfying property of the proposed approach.
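The role of the slope limiter can be illustrated outside the CE/SE framework with a standard minmod-limited MUSCL step for linear advection (an assumption of this sketch, not the authors' scheme): limited slopes keep reconstructed values within the range of neighbouring cell averages, which is what enforces the maximum principle.

```python
def minmod(a, b):
    if a > 0 and b > 0:
        return min(a, b)
    if a < 0 and b < 0:
        return max(a, b)
    return 0.0

def step(u, c):
    """One step of u_t + a u_x = 0 (a > 0) on a periodic grid, using a
    MUSCL reconstruction with minmod-limited slopes; 0 < c <= 1 is the
    CFL number a*dt/dx."""
    n = len(u)
    s = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    ur = [u[i] + 0.5 * (1 - c) * s[i] for i in range(n)]   # right-edge values
    return [u[i] - c * (ur[i] - ur[i - 1]) for i in range(n)]

u = [1.0 if 4 <= i < 8 else 0.0 for i in range(20)]        # square pulse
lo0, hi0 = min(u), max(u)
for _ in range(30):
    u = step(u, 0.5)
print(min(u) >= lo0 - 1e-12 and max(u) <= hi0 + 1e-12)     # → True
```

Without the limiter (s taken as the unlimited centered difference) the same update overshoots at the pulse edges and violates the bounds.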
Shen, Hua; Wen, Chih-Yung; Parsani, Matteo; Shu, Chi-Wang
2016-01-01
A maximum-principle-satisfying space-time conservation element and solution element (CE/SE) scheme is constructed to solve a reduced five-equation model coupled with the stiffened equation of state for compressible multifluids. We first derive a sufficient condition for CE/SE schemes to satisfy the maximum principle when solving a general conservation law. We then introduce a slope limiter that enforces this sufficient condition and is applicable to both central and upwind CE/SE schemes. Finally, we implement the upwind maximum-principle-satisfying CE/SE scheme to solve the volume-fraction-based five-equation model for compressible multifluids. Several numerical examples are carried out to carefully examine the accuracy, efficiency, conservativeness and maximum-principle-satisfying property of the proposed approach.
Maximum entropy and Bayesian methods
International Nuclear Information System (INIS)
Smith, C.R.; Erickson, G.J.; Neudorfer, P.O.
1992-01-01
Bayesian probability theory and maximum entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers, allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The title workshops have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. The contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come.
Usvyat, Denis; Civalleri, Bartolomeo; Maschio, Lorenzo; Dovesi, Roberto; Pisani, Cesare; Schütz, Martin
2011-06-07
The atomic orbital basis set limit is approached in periodic correlated calculations for solid LiH. The valence correlation energy is evaluated at the level of the local periodic second order Møller-Plesset perturbation theory (MP2), using basis sets of progressively increasing size, and also employing "bond"-centered basis functions in addition to the standard atom-centered ones. Extended basis sets, which contain linear dependencies, are processed only at the MP2 stage via a dual basis set scheme. The local approximation (domain) error has been consistently eliminated by expanding the orbital excitation domains. As a final result, it is demonstrated that the complete basis set limit can be reached for both HF and local MP2 periodic calculations, and a general scheme is outlined for the definition of high-quality atomic-orbital basis sets for solids. © 2011 American Institute of Physics
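A common way to approach the complete basis set (CBS) limit of correlation energies, shown here for illustration only, is the two-point inverse-cube extrapolation over cardinal numbers; whether this particular formula matches the paper's scheme is an assumption, and the energies below are invented.

```python
def cbs_two_point(e_x, e_y, x, y):
    """Two-point inverse-cube extrapolation of the correlation energy,
    assuming E(X) = E_CBS + A / X**3, from cardinal numbers x < y."""
    return (y**3 * e_y - x**3 * e_x) / (y**3 - x**3)

# Invented correlation energies (hartree) for triple- (X=3) and quadruple-zeta (X=4)
e_cbs = cbs_two_point(-0.2450, -0.2570, 3, 4)
print(e_cbs)
```

The extrapolated value lies below (is more negative than) either finite-basis energy, as expected for a variationally improving correlation treatment.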
International Nuclear Information System (INIS)
Jagiello, Karolina; Grzonkowska, Monika; Swirog, Marta; Ahmed, Lucky; Rasulev, Bakhtiyor; Avramopoulos, Aggelos; Papadopoulos, Manthos G.; Leszczynski, Jerzy; Puzyn, Tomasz
2016-01-01
In this contribution, the advantages and limitations of two computational techniques that can be used to investigate nanoparticle activity and toxicity are briefly summarized: classic nano-QSAR (Quantitative Structure–Activity Relationships employed for nanomaterials) and 3D nano-QSAR (three-dimensional Quantitative Structure–Activity Relationships, such as Comparative Molecular Field Analysis (CoMFA) and Comparative Molecular Similarity Indices Analysis (CoMSIA), employed for nanomaterials). Both approaches were compared according to selected criteria, including efficiency, type of experimental data, class of nanomaterials, time required for calculations, computational cost, and difficulty of interpretation. Taking into account the advantages and limitations of each method, we provide recommendations that help nano-QSAR modellers and QSAR model users determine a proper and efficient methodology for investigating the biological activity of nanoparticles, so as to describe the underlying interactions in the most reliable and useful manner.
Energy Technology Data Exchange (ETDEWEB)
Jagiello, Karolina; Grzonkowska, Monika; Swirog, Marta [University of Gdansk, Laboratory of Environmental Chemometrics, Faculty of Chemistry, Institute for Environmental and Human Health Protection (Poland); Ahmed, Lucky; Rasulev, Bakhtiyor [Jackson State University, Interdisciplinary Nanotoxicity Center, Department of Chemistry and Biochemistry (United States); Avramopoulos, Aggelos; Papadopoulos, Manthos G. [National Hellenic Research Foundation, Institute of Biology, Pharmaceutical Chemistry and Biotechnology (Greece); Leszczynski, Jerzy [Jackson State University, Interdisciplinary Nanotoxicity Center, Department of Chemistry and Biochemistry (United States); Puzyn, Tomasz, E-mail: t.puzyn@qsar.eu.org [University of Gdansk, Laboratory of Environmental Chemometrics, Faculty of Chemistry, Institute for Environmental and Human Health Protection (Poland)
2016-09-15
In this contribution, the advantages and limitations of two computational techniques that can be used to investigate nanoparticle activity and toxicity are briefly summarized: classic nano-QSAR (Quantitative Structure–Activity Relationships employed for nanomaterials) and 3D nano-QSAR (three-dimensional Quantitative Structure–Activity Relationships, such as Comparative Molecular Field Analysis (CoMFA) and Comparative Molecular Similarity Indices Analysis (CoMSIA), employed for nanomaterials). Both approaches were compared according to selected criteria, including efficiency, type of experimental data, class of nanomaterials, time required for calculations, computational cost, and difficulty of interpretation. Taking into account the advantages and limitations of each method, we provide recommendations that help nano-QSAR modellers and QSAR model users determine a proper and efficient methodology for investigating the biological activity of nanoparticles, so as to describe the underlying interactions in the most reliable and useful manner.
Maximum entropy principle for transportation
International Nuclear Information System (INIS)
Bilich, F.; Da Silva, R.
2008-01-01
In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i depends on the probability of a trip ending at destination j, given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle, combining an a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation whose functional form is derived from conditional probability and the perception of factors in experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharges at noise-impacted airports) on air travel are performed.
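The classical maximum-entropy trip distribution (the doubly constrained formulation, solved by iterative balancing) can be sketched as follows. This is the standard constrained formulation that the paper contrasts with its dependence formulation, and all numbers are invented.

```python
import math

def trip_distribution(origins, destinations, cost, beta, iters=200):
    """Doubly constrained maximum-entropy trip distribution
    T_ij = A_i O_i B_j D_j exp(-beta * c_ij), solved by alternately
    rescaling the balancing factors (iterative proportional fitting)."""
    n, m = len(origins), len(destinations)
    f = [[math.exp(-beta * cost[i][j]) for j in range(m)] for i in range(n)]
    a = [1.0] * n
    b = [1.0] * m
    for _ in range(iters):
        for i in range(n):
            a[i] = origins[i] / sum(b[j] * f[i][j] for j in range(m))
        for j in range(m):
            b[j] = destinations[j] / sum(a[i] * f[i][j] for i in range(n))
    return [[a[i] * b[j] * f[i][j] for j in range(m)] for i in range(n)]

O = [100.0, 200.0]               # trips produced at each origin
D = [150.0, 150.0]               # trips attracted to each destination
C = [[1.0, 3.0], [2.0, 1.0]]     # travel cost matrix c_ij
T = trip_distribution(O, D, C, beta=0.5)
print([[round(t, 1) for t in row] for row in T])
```

After convergence the row sums match the origin totals and the column sums match the destination totals, while entropy is maximized subject to those constraints.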
Exercise-induced maximum metabolic rate scaled to body mass by ...
African Journals Online (AJOL)
Exercise-induced maximum metabolic rate scaled to body mass by the fractal ... rate scaling is that exercise-induced maximum aerobic metabolic rate (MMR) is ... muscle stress limitation, and maximized oxygen delivery and metabolic rates.
Last Glacial Maximum Salinity Reconstruction
Homola, K.; Spivack, A. J.
2016-12-01
It has been previously demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high-precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water column samples that high-precision density measurements can be used to determine salinity at the precision of a conductivity measurement using the equation of state of seawater. However, water column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of its apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10-6 g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3x10-6 g/mL, which translates to a salinity uncertainty of 0.002 g/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl- and SO42-) and cations (Na+, Mg2+, Ca2+, and K+) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO42-/Cl- and Mg2+/Na+, and 0.4% for Ca2+/Na+ and K+/Na+. Alkalinity, pH and Dissolved Inorganic Carbon of the porewater were determined to precisions better than 4% when ratioed to Cl-, and used to calculate HCO3- and CO32-. Apparent partial molar densities in seawater were
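Why microgram-level density precision yields millesimal salinity precision can be seen from a first-order linearization of the equation of state. The full method uses the complete equation of state plus composition corrections; the haline slope below is an approximate textbook value near standard seawater, and the reference density is an assumption of this sketch.

```python
# Near S = 35 g/kg at fixed temperature, density rises by roughly
# drho/dS ~ 0.78 (kg/m3) per (g/kg); the linearization below only shows
# how density precision propagates into salinity precision.
DRHO_DS = 0.78  # (kg/m3)/(g/kg), approximate haline slope

def salinity_from_density(rho, rho_ref=1023.6, s_ref=35.0):
    """Linearized salinity estimate (g/kg) from density rho in kg/m3."""
    return s_ref + (rho - rho_ref) / DRHO_DS

def salinity_precision(rho_precision_g_per_ml):
    """Propagate a density precision given in g/mL into g/kg of salinity."""
    return rho_precision_g_per_ml * 1000.0 / DRHO_DS  # g/mL -> kg/m3, then /slope

print(round(salinity_precision(2.3e-6), 4))  # → 0.0029
```

A density precision of 2.3x10-6 g/mL thus corresponds to a salinity precision of a few thousandths of a g/kg, consistent with the uncertainty quoted in the abstract once composition effects are constrained.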
CSIR Research Space (South Africa)
Kok, S
2012-07-01
Full Text Available continuously as the correlation function hyper-parameters approach zero. Since the global minimizer of the maximum likelihood function is an asymptote in this case, it is unclear if maximum likelihood estimation (MLE) remains valid. Numerical ill...
Simonov, Alexandr N.; Morris, Graham P.; Mashkina, Elena A.; Bethwaite, Blair; Gillow, Kathryn; Baker, Ruth E.; Gavaghan, David J.; Bond, Alan M.
2014-01-01
Many electrode processes that approach the "reversible" (infinitely fast) limit under voltammetric conditions have been inappropriately analyzed by comparison of experimental data and theory derived from the "quasi-reversible" model. Simulations based on "reversible" and "quasi-reversible" models have been fitted to an extensive series of a.c. voltammetric experiments undertaken at macrodisk glassy carbon (GC) electrodes for oxidation of ferrocene (Fc0/+) in CH3CN (0.10 M (n-Bu)4NPF6) and reduction of [Ru(NH3)6]3+ and [Fe(CN)6]3- in 1 M KCl aqueous electrolyte. The confidence with which parameters such as standard formal potential (E0), heterogeneous electron transfer rate constant at E0 (k0), charge transfer coefficient (α), uncompensated resistance (Ru), and double layer capacitance (CDL) can be reported using the "quasi-reversible" model has been assessed using bootstrapping and parameter sweep (contour plot) techniques. Underparameterization, such as that which occurs when modeling CDL with a potential independent value, results in a less than optimal level of experiment-theory agreement. Overparameterization may improve the agreement but easily results in generation of physically meaningful but incorrect values of the recovered parameters, as is the case with the very fast Fc0/+ and [Ru(NH3)6]3+/2+ processes. In summary, for fast electrode kinetics approaching the "reversible" limit, it is recommended that the "reversible" model be used for theory-experiment comparisons with only E0, Ru, and CDL being quantified and a lower limit of k0 being reported; e.g., k0 ≥ 9 cm s-1 for the Fc0/+ process. © 2014 American Chemical Society.
Simonov, Alexandr N.
2014-08-19
Many electrode processes that approach the "reversible" (infinitely fast) limit under voltammetric conditions have been inappropriately analyzed by comparison of experimental data and theory derived from the "quasi-reversible" model. Simulations based on "reversible" and "quasi-reversible" models have been fitted to an extensive series of a.c. voltammetric experiments undertaken at macrodisk glassy carbon (GC) electrodes for oxidation of ferrocene (Fc0/+) in CH3CN (0.10 M (n-Bu)4NPF6) and reduction of [Ru(NH3)6]3+ and [Fe(CN)6]3- in 1 M KCl aqueous electrolyte. The confidence with which parameters such as standard formal potential (E0), heterogeneous electron transfer rate constant at E0 (k0), charge transfer coefficient (α), uncompensated resistance (Ru), and double layer capacitance (CDL) can be reported using the "quasi-reversible" model has been assessed using bootstrapping and parameter sweep (contour plot) techniques. Underparameterization, such as that which occurs when modeling CDL with a potential independent value, results in a less than optimal level of experiment-theory agreement. Overparameterization may improve the agreement but easily results in generation of physically meaningful but incorrect values of the recovered parameters, as is the case with the very fast Fc0/+ and [Ru(NH3)6]3+/2+ processes. In summary, for fast electrode kinetics approaching the "reversible" limit, it is recommended that the "reversible" model be used for theory-experiment comparisons with only E0, Ru, and CDL being quantified and a lower limit of k0 being reported; e.g., k0 ≥ 9 cm s-1 for the Fc0/+ process. © 2014 American Chemical Society.
Simonov, Alexandr N; Morris, Graham P; Mashkina, Elena A; Bethwaite, Blair; Gillow, Kathryn; Baker, Ruth E; Gavaghan, David J; Bond, Alan M
2014-08-19
Many electrode processes that approach the "reversible" (infinitely fast) limit under voltammetric conditions have been inappropriately analyzed by comparison of experimental data and theory derived from the "quasi-reversible" model. Simulations based on "reversible" and "quasi-reversible" models have been fitted to an extensive series of a.c. voltammetric experiments undertaken at macrodisk glassy carbon (GC) electrodes for oxidation of ferrocene (Fc(0/+)) in CH3CN (0.10 M (n-Bu)4NPF6) and reduction of [Ru(NH3)6](3+) and [Fe(CN)6](3-) in 1 M KCl aqueous electrolyte. The confidence with which parameters such as standard formal potential (E(0)), heterogeneous electron transfer rate constant at E(0) (k(0)), charge transfer coefficient (α), uncompensated resistance (Ru), and double layer capacitance (CDL) can be reported using the "quasi-reversible" model has been assessed using bootstrapping and parameter sweep (contour plot) techniques. Underparameterization, such as that which occurs when modeling CDL with a potential independent value, results in a less than optimal level of experiment-theory agreement. Overparameterization may improve the agreement but easily results in generation of physically meaningful but incorrect values of the recovered parameters, as is the case with the very fast Fc(0/+) and [Ru(NH3)6](3+/2+) processes. In summary, for fast electrode kinetics approaching the "reversible" limit, it is recommended that the "reversible" model be used for theory-experiment comparisons with only E(0), Ru, and CDL being quantified and a lower limit of k(0) being reported; e.g., k(0) ≥ 9 cm s(-1) for the Fc(0/+) process.
Directory of Open Access Journals (Sweden)
Luan Yihui
2009-09-01
Full Text Available Background: Many aspects of biological functions can be modeled by biological networks, such as protein interaction networks, metabolic networks, and gene coexpression networks. Studying the statistical properties of these networks in turn allows us to infer biological function. Complex statistical network models can potentially more accurately describe the networks, but it is not clear whether such complex models are better suited to find biologically meaningful subnetworks. Results: Recent studies have shown that the degree distribution of the nodes is not an adequate statistic in many molecular networks. We sought to extend this statistic with 2nd and 3rd order degree correlations and developed a pseudo-likelihood approach to estimate the parameters. The approach was used to analyze the MIPS and BIOGRID yeast protein interaction networks, and two yeast coexpression networks. We showed that 2nd order degree correlation information gave better predictions of gene interactions in both protein interaction and gene coexpression networks. However, in the biologically important task of predicting functionally homogeneous modules, degree correlation information performs marginally better in the case of the MIPS and BIOGRID protein interaction networks, but worse in the case of gene coexpression networks. Conclusion: Our use of dK models showed that incorporation of degree correlations could increase predictive power in some contexts, albeit sometimes marginally, but, in all contexts, the use of third-order degree correlations decreased accuracy. However, it is possible that other parameter estimation methods, such as maximum likelihood, will show the usefulness of incorporating 2nd and 3rd degree correlations in predicting functionally homogeneous modules.
Wang, Wenhui; Nunez-Iglesias, Juan; Luan, Yihui; Sun, Fengzhu
2009-09-03
Many aspects of biological functions can be modeled by biological networks, such as protein interaction networks, metabolic networks, and gene coexpression networks. Studying the statistical properties of these networks in turn allows us to infer biological function. Complex statistical network models can potentially more accurately describe the networks, but it is not clear whether such complex models are better suited to find biologically meaningful subnetworks. Recent studies have shown that the degree distribution of the nodes is not an adequate statistic in many molecular networks. We sought to extend this statistic with 2nd and 3rd order degree correlations and developed a pseudo-likelihood approach to estimate the parameters. The approach was used to analyze the MIPS and BIOGRID yeast protein interaction networks, and two yeast coexpression networks. We showed that 2nd order degree correlation information gave better predictions of gene interactions in both protein interaction and gene coexpression networks. However, in the biologically important task of predicting functionally homogeneous modules, degree correlation information performs marginally better in the case of the MIPS and BIOGRID protein interaction networks, but worse in the case of gene coexpression networks. Our use of dK models showed that incorporation of degree correlations could increase predictive power in some contexts, albeit sometimes marginally, but, in all contexts, the use of third-order degree correlations decreased accuracy. However, it is possible that other parameter estimation methods, such as maximum likelihood, will show the usefulness of incorporating 2nd and 3rd degree correlations in predicting functionally homogeneous modules.
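The 2nd-order degree statistic underlying the dK-2 model can be computed directly from an edge list: it is the joint distribution of the degrees of the endpoints of each edge. The toy "interaction network" below is invented for illustration.

```python
from collections import Counter

def dk2(edges):
    """2nd-order degree statistics: count, for each edge, the sorted pair
    (k1, k2) of the degrees of its two endpoints (the dK-2 distribution
    up to normalization)."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return Counter(tuple(sorted((deg[u], deg[v]))) for u, v in edges)

# Toy network: a hub attached to three leaves, plus a separate triangle
edges = [("hub", "a"), ("hub", "b"), ("hub", "c"),
         ("x", "y"), ("y", "z"), ("z", "x")]
print(dk2(edges))  # → Counter({(1, 3): 3, (2, 2): 3})
```

The plain degree distribution cannot distinguish these two motifs' wiring; the (1,3) versus (2,2) edge classes can, which is exactly the extra information the dK-2 model exploits.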
Caglayan, Günhan
2015-08-01
Despite a few limitations, GeoGebra as a dynamic geometry software stood as a powerful instrument in helping university math majors understand, explore, and gain experience in visualizing the limits of functions and the ε-δ formalism. During the process of visualizing a theorem, the order of the constituents in the sequence mattered. Students made use of such rich constituents as finger-hand gestures and cursor gestures in an attempt to keep a record of the visual demonstration in progress, while being aware of the interrelationships among these constituents and the transformational aspect of the visual proving process. Covariational reasoning along with interval mapping structures proved to be the key constituents in visualizing and making sense of a limit theorem using the delta-epsilon formalism. Pedagogical approaches and teaching strategies based on the experimental mathematics, mindtool, and constituential visual proofs trio would permit students to study, construct, and meaningfully connect new knowledge to previously mastered concepts and skills in a manner that makes sense to them.
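The ε-δ condition students visualize in GeoGebra can also be probed numerically, mirroring the covariational reasoning described above: fix an ε, propose a δ, and sample the δ-neighbourhood. The function, point, and δ choice below are illustrative assumptions.

```python
def check_limit(f, x0, L, eps, delta, samples=10001):
    """Numerically probe the epsilon-delta condition: for sampled x with
    0 < |x - x0| <= delta, is |f(x) - L| < eps?  (A sampling check, not
    a proof.)"""
    for k in range(samples):
        x = x0 - delta + 2 * delta * k / (samples - 1)
        if x != x0 and abs(f(x) - L) >= eps:
            return False
    return True

f = lambda x: x * x          # claim: the limit of x**2 as x -> 3 is 9
eps = 0.1
delta = eps / 7              # |x**2 - 9| = |x-3||x+3| < 7*delta when |x-3| < 1
print(check_limit(f, 3.0, 9.0, eps, delta))  # → True
```

Trying an over-generous δ (say δ = 1) makes the check fail, which is the interval-mapping insight: δ must shrink in response to ε.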
Novel maximum-margin training algorithms for supervised neural networks.
Ludwig, Oswaldo; Nunes, Urbano
2010-06-01
This paper proposes three novel training methods, two of them based on the backpropagation approach and a third one based on information theory, for multilayer perceptron (MLP) binary classifiers. Both backpropagation methods are based on the maximal-margin (MM) principle. The first one, based on the gradient descent with adaptive learning rate algorithm (GDX) and named maximum-margin GDX (MMGDX), directly increases the margin of the MLP output-layer hyperplane. The proposed method jointly optimizes both MLP layers in a single process, backpropagating the gradient of an MM-based objective function through the output and hidden layers, in order to create a hidden-layer space that enables a higher margin for the output-layer hyperplane, avoiding the testing of many arbitrary kernels, as occurs in support vector machine (SVM) training. The proposed MM-based objective function aims to stretch out the margin to its limit. An objective function based on the Lp-norm is also proposed in order to take into account the idea of support vectors, while avoiding the complexity involved in solving a constrained optimization problem, as is usual in SVM training. In fact, all the training methods proposed in this paper have time and space complexity O(N), while usual SVM training methods have time complexity O(N^3) and space complexity O(N^2), where N is the training-data-set size. The second approach, named minimization of interclass interference (MICI), has an objective function inspired by Fisher discriminant analysis. This algorithm aims to create an MLP hidden output where the patterns have a desirable statistical distribution. In both training methods, the maximum area under the ROC curve (AUC) is applied as the stop criterion. The third approach offers a robust training framework able to take the best of each proposed training method. The main idea is to compose a neural model by using neurons extracted from three other neural networks, each one previously trained by
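The maximum-AUC stop criterion mentioned above reduces to the Wilcoxon rank statistic: the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A minimal computation (with invented classifier scores) is:

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank (Wilcoxon) formulation:
    fraction of positive/negative pairs ranked correctly, ties count 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]   # invented MLP outputs
labels = [1,   1,   0,   1,   0,    0,   1,   0]
print(auc(scores, labels))  # → 0.75
```

Training stops when this quantity peaks on a validation set, rather than when the raw loss bottoms out.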
International Nuclear Information System (INIS)
Gallegos, F.R.
1996-01-01
The Radiation Security System (RSS) at the Los Alamos Neutron Science Center (LANSCE) provides personnel protection from prompt radiation due to accelerated beam. Active instrumentation, such as the Beam Current Limiter, is a component of the RSS. The current limiter is designed to limit the average current in a beam line below a specific level, thus minimizing the maximum current available for a beam spill accident. The beam current limiter is a self-contained, electrically isolated toroidal beam transformer which continuously monitors beam current. It is designed as fail-safe instrumentation. The design philosophy, hardware design, operation, and limitations of the device are described
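A software sketch of the average-current trip logic described above (the real beam current limiter is analog, fail-safe hardware; the window length, current limit, and sample values below are all invented for illustration):

```python
from collections import deque

class CurrentLimiter:
    """Toy model of an average-current interlock: keep a sliding window of
    beam-current samples and latch a trip when the window average exceeds
    the limit. Fail-safe behaviour is modelled by the latch never
    resetting automatically."""

    def __init__(self, limit_uA, window):
        self.limit = limit_uA
        self.samples = deque(maxlen=window)
        self.tripped = False              # latched once set

    def sample(self, current_uA):
        self.samples.append(current_uA)
        avg = sum(self.samples) / len(self.samples)
        if avg > self.limit:
            self.tripped = True           # no automatic reset
        return self.tripped

lim = CurrentLimiter(limit_uA=100.0, window=4)
for i in [50, 80, 90, 95, 300, 20, 20, 20]:   # a brief spill, then recovery
    lim.sample(i)
print(lim.tripped)  # → True (latched by the spill despite recovery)
```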
Mammographic image restoration using maximum entropy deconvolution
International Nuclear Information System (INIS)
Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R
2004-01-01
An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization
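The flavour of iterative, positivity-preserving deconvolution with a measured point-spread function can be shown with the Richardson-Lucy iteration, a Poisson-likelihood relative of MEM (not the authors' algorithm), applied to an invented 1-D profile:

```python
def convolve(x, psf):
    """Convolve signal x with a (normalized, odd-length) psf, clamping at
    the boundaries."""
    half = len(psf) // 2
    n = len(x)
    return [sum(psf[k] * x[min(max(i + k - half, 0), n - 1)]
                for k in range(len(psf))) for i in range(n)]

def richardson_lucy(blurred, psf, iters=200):
    """Multiplicative Richardson-Lucy iteration: estimates stay positive
    and total counts are preserved, as in MEM-style restorations."""
    est = [sum(blurred) / len(blurred)] * len(blurred)   # flat start
    psf_m = psf[::-1]                                    # mirrored PSF
    for _ in range(iters):
        ratio = [b / max(c, 1e-12) for b, c in zip(blurred, convolve(est, psf))]
        corr = convolve(ratio, psf_m)
        est = [e * c for e, c in zip(est, corr)]
    return est

truth = [0, 0, 0, 10, 0, 0, 4, 0, 0, 0]     # two point-like features
psf = [0.25, 0.5, 0.25]                     # measured blur (invented here)
blurred = convolve(truth, psf)
restored = richardson_lucy(blurred, psf)
print([round(v, 2) for v in restored])
```

The two blurred features sharpen back toward point-like peaks while the total signal is conserved, which is the behaviour (noise handling aside) that the MEM restoration above exploits for quantification.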
Two-dimensional maximum entropy image restoration
International Nuclear Information System (INIS)
Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.
1977-07-01
An optical check problem was constructed to test P LOG P maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures
DEFF Research Database (Denmark)
Linnet, Kristian
2005-01-01
Bootstrap, HPLC, limit of blank, limit of detection, non-parametric statistics, type I and II errors
Receiver function estimated by maximum entropy deconvolution
Institute of Scientific and Technical Information of China (English)
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to derive the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy treatment of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver functions in the time domain.
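The Toeplitz/Levinson machinery referred to above is compact enough to sketch. The recursion below solves the Toeplitz normal equations for the error-predicting filter; the reflection coefficients staying below 1 in magnitude is the stability condition mentioned in the abstract. The autocorrelation used is that of a hypothetical AR(1) process, not seismic data.

```python
def levinson(r, order):
    """Levinson-Durbin recursion: solve the Toeplitz system for the
    prediction-error filter a = [1, a1, ..., a_order] from the
    autocorrelation sequence r[0..order]. Returns (a, final prediction
    error, reflection coefficients)."""
    a = [1.0]
    e = r[0]
    ks = []
    for m in range(1, order + 1):
        acc = sum(a[i] * r[m - i] for i in range(m))
        k = -acc / e                      # reflection coefficient
        ks.append(k)
        a = [a[i] + k * a[m - i] if 0 < i < m else a[i] for i in range(m)] + [k]
        e *= (1 - k * k)                  # prediction error shrinks each order
    return a, e, ks

# Autocorrelation of a hypothetical AR(1) process x_t = 0.8 x_{t-1} + noise
r = [1.0, 0.8, 0.64, 0.512]
a, e, ks = levinson(r, 3)
print([round(v, 3) for v in a], round(e, 3))
```

For this AR(1) input the recursion recovers the single coefficient (a1 close to -0.8, higher orders near zero), and every |k| < 1, so the extrapolation implied by the filter is stable.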
Maximum entropy deconvolution of low count nuclear medicine images
International Nuclear Information System (INIS)
McGrath, D.M.
1998-12-01
Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. Higher contrast results were
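The flavour of such an iterative maximum entropy deconvolution can be sketched with a Gaussian-error caricature (not the Poisson-based scheme developed in the thesis). The fixed point f = m·exp(−λ·Hᵀ(Hf − d)) is the standard MEM stationarity condition for a quadratic misfit; all names and parameter values here are illustrative:

```python
import numpy as np

def mem_deconvolve(d, psf, lam=1.0, iters=500):
    """Damped fixed-point iteration toward the maximum entropy solution
    f = m * exp(-lam * H^T (H f - d)), where H is convolution with the
    (symmetric, normalized) PSF and m is a flat default model.
    Positivity of the restored image is automatic via the exponential."""
    m = np.full(len(d), d.mean())               # flat default model
    f = m.copy()
    for _ in range(iters):
        resid = np.convolve(f, psf, mode="same") - d    # H f - d
        grad = np.correlate(resid, psf, mode="same")    # H^T (H f - d)
        f = 0.5 * f + 0.5 * m * np.exp(-lam * grad)     # damped update
    return f

psf = np.array([0.25, 0.5, 0.25])
d = np.convolve(np.array([0.0, 0.0, 1.0, 0.0, 0.0]), psf, mode="same")
f = mem_deconvolve(d, psf)   # restored estimate peaks at the true position
</test>```

The restored estimate is everywhere positive and re-concentrates the blurred point source at its true location; the thesis's scheme additionally estimates the errors from predicted Poisson means and preserves total counts.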
Atoms in molecules, an axiomatic approach. I. Maximum transferability
Ayers, Paul W.
2000-12-01
Central to chemistry is the concept of transferability: the idea that atoms and functional groups retain certain characteristic properties in a wide variety of environments. Providing a completely satisfactory mathematical basis for the concept of atoms in molecules, however, has proved difficult. The present article pursues an axiomatic basis for the concept of an atom within a molecule, with particular emphasis devoted to the definition of transferability and the atomic description of Hirshfeld.
Adaptive Statistical Language Modeling; A Maximum Entropy Approach
1994-04-19
[Table residue: example trigger-word lists, e.g. words triggered by "A's" (OAKLAND, DODGERS, BASEBALL, ...) and lists ranked by the MI-3g measure.]
Energy Technology Data Exchange (ETDEWEB)
Hu, Jian Zhi; Rommereim, Donald N.; Wind, Robert A.; Minard, Kevin R.; Sears, Jesse A.
2006-11-01
A simple approach is reported that yields high-resolution, high-sensitivity ¹H NMR spectra of biofluids with limited mass supply. This is achieved by spinning a capillary sample tube containing a biofluid at the magic angle at a frequency of about 80 Hz. A 2D pulse sequence called ¹H PASS is then used to produce a high-resolution ¹H NMR spectrum that is free from magnetic-susceptibility-induced line broadening. With this new approach a high-resolution ¹H NMR spectrum of biofluids with a volume of less than 1.0 µl can easily be achieved at a magnetic field strength as low as 7.05 T. Furthermore, the methodology facilitates easy sample handling, i.e., the samples can be collected directly into inexpensive and disposable capillary tubes at the site of collection and subsequently used for NMR measurements. In addition, slow magic angle spinning improves magnetic field shimming and is especially suitable for high-throughput investigations. In this paper, first results are shown that were obtained in a magnetic field of 7.05 T on urine samples collected from mice, using a modified commercial NMR probe.
Maximum Power from a Solar Panel
Directory of Open Access Journals (Sweden)
Michael Miller
2010-01-01
Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One technique used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and current at maximum power. These quantities are determined by finding the maximum value of the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity (voltage at maximum power, current at maximum power, and maximum power itself) is plotted as a function of the time of day.
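The differentiation-based procedure can be reproduced numerically with a single-diode panel model; all parameter values below are made up for illustration, and a dense grid stands in for solving dP/dV = 0 analytically:

```python
import numpy as np

# Illustrative single-diode model: I(V) = I_ph - I_0 * (exp(V/V_t) - 1)
I_PH, I_0, V_T = 5.0, 1e-9, 1.0   # photocurrent [A], saturation current [A], scaled thermal voltage [V]

def current(v):
    return I_PH - I_0 * np.expm1(v / V_T)

# Power P(V) = V * I(V); its maximum is where dP/dV = 0
v = np.linspace(0.0, 25.0, 100001)
p = v * current(v)
k = p.argmax()
v_mp, i_mp, p_mp = v[k], current(v[k]), p[k]
```

The same search would be repeated for each time of day to build the plotted curves of voltage, current, and power at the maximum power point.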
Revision of regional maximum flood (RMF) estimation in Namibia ...
African Journals Online (AJOL)
Extreme flood hydrology in Namibia for the past 30 years has largely been based on the South African Department of Water Affairs Technical Report 137 (TR 137) of 1988. This report proposes an empirically established upper limit of flood peaks for regions called the regional maximum flood (RMF), which could be ...
Combining Experiments and Simulations Using the Maximum Entropy Principle
DEFF Research Database (Denmark)
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results...
Energy Technology Data Exchange (ETDEWEB)
Sylvetsky, Nitai, E-mail: gershom@weizmann.ac.il; Martin, Jan M. L., E-mail: gershom@weizmann.ac.il [Department of Organic Chemistry, Weizmann Institute of Science, 76100 Rehovot (Israel); Peterson, Kirk A., E-mail: kipeters@wsu.edu [Department of Chemistry, Washington State University, Pullman, Washington 99164-4630 (United States); Karton, Amir, E-mail: amir.karton@uwa.edu.au [School of Chemistry and Biochemistry, The University of Western Australia, Perth, WA 6009 (Australia)
2016-06-07
In the context of high-accuracy computational thermochemistry, the valence coupled cluster with all singles and doubles (CCSD) correlation component of molecular atomization energies presents the most severe basis set convergence problem, followed by the (T) component. In the present paper, we make a detailed comparison, for an expanded version of the W4-11 thermochemistry benchmark, between, on the one hand, orbital-based CCSD/AV{5,6}Z + d and CCSD/ACV{5,6}Z extrapolation, and on the other hand CCSD-F12b calculations with cc-pVQZ-F12 and cc-pV5Z-F12 basis sets. This latter basis set, now available for H–He, B–Ne, and Al–Ar, is shown to be very close to the basis set limit. Apparent differences (which can reach 0.35 kcal/mol for systems like CCl4) between orbital-based and CCSD-F12b basis set limits disappear if basis sets with additional radial flexibility, such as ACV{5,6}Z, are used for the orbital calculation. Counterpoise calculations reveal that, while total atomization energies with V5Z-F12 basis sets are nearly free of BSSE, orbital calculations have significant BSSE even with AV(6 + d)Z basis sets, leading to non-negligible differences between raw and counterpoise-corrected extrapolated limits. This latter problem is greatly reduced by switching to ACV{5,6}Z core-valence basis sets, or simply adding an additional zeta to just the valence orbitals. Previous reports that all-electron approaches like HEAT (high-accuracy extrapolated ab-initio thermochemistry) lead to different CCSD(T) limits than “valence limit + CV correction” approaches like Feller-Peterson-Dixon and Weizmann-4 (W4) theory can be rationalized in terms of the greater radial flexibility of core-valence basis sets. For (T) corrections, conventional CCSD(T)/AV{Q,5}Z + d calculations are found to be superior to scaled or extrapolated CCSD(T)-F12b calculations of similar cost. For a W4-F12 protocol, we recommend obtaining the Hartree-Fock and valence CCSD components from CCSD-F12b
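The {5,6}Z-style extrapolations above follow the conventional two-point inverse-power form E(n) = E_CBS + A/n³ for the correlation energy; a minimal sketch (function name and synthetic energies are illustrative):

```python
def cbs_extrapolate(e_lo, e_hi, n_lo, n_hi, power=3.0):
    """Two-point extrapolation assuming E(n) = E_CBS + A / n**power,
    with power = 3 the conventional choice for correlation energies.
    Solving the two equations for E_CBS eliminates A."""
    w_lo, w_hi = n_lo ** -power, n_hi ** -power
    return (e_hi * w_lo - e_lo * w_hi) / (w_lo - w_hi)

# Synthetic check: energies generated from E_CBS = -1.0, A = 0.5
e5 = -1.0 + 0.5 / 5 ** 3
e6 = -1.0 + 0.5 / 6 ** 3
e_cbs = cbs_extrapolate(e5, e6, 5, 6)   # recovers -1.0 exactly
```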
A Hybrid Physical and Maximum-Entropy Landslide Susceptibility Model
Directory of Open Access Journals (Sweden)
Jerry Davis
2015-06-01
Full Text Available The clear need for accurate landslide susceptibility mapping has led to multiple approaches. Physical models are easily interpreted and have high predictive capabilities but rely on spatially explicit and accurate parameterization, which is commonly not possible. Statistical methods can include other factors influencing slope stability such as distance to roads, but rely on good landslide inventories. The maximum entropy (MaxEnt) model has been widely and successfully used in species distribution mapping, because data on absence are often uncertain. Similarly, knowledge about the absence of landslides is often limited due to mapping scale or methodology. In this paper a hybrid approach is described that combines the physically-based landslide susceptibility model “Stability INdex MAPping” (SINMAP) with MaxEnt. This method is tested in a coastal watershed in Pacifica, CA, USA, with a well-documented landslide history including 3 inventories: 154 scars on 1941 imagery, 142 in 1975, and 253 in 1983. Results indicate that SINMAP alone overestimated susceptibility due to insufficient data on root cohesion. Models were compared using SINMAP stability index (SI) or slope alone, and SI or slope in combination with other environmental factors: curvature, a 50-m trail buffer, vegetation, and geology. For 1941 and 1975, using slope alone was similar to using SI alone; however, in 1983 SI alone gives an area under the receiver operating characteristic curve (AUC) of 0.785, compared with 0.749 for slope alone. In maximum-entropy models created using all environmental factors, the stability index (SI) from SINMAP represented the greatest contributions in all three years (1941: 48.1%; 1975: 35.3%; 1983: 48%), with AUC of 0.795, 0.822, and 0.859, respectively; however, using slope instead of SI created similar overall AUC values, likely due to the combined effect with plan curvature indicating focused hydrologic inputs and vegetation identifying the effect of root cohesion
Criticality predicts maximum irregularity in recurrent networks of excitatory nodes.
Directory of Open Access Journals (Sweden)
Yahya Karimipanah
Full Text Available A rigorous understanding of brain dynamics and function requires a conceptual bridge between multiple levels of organization, including neural spiking and network-level population activity. Mounting evidence suggests that neural networks of cerebral cortex operate at a critical regime, which is defined as a transition point between two phases of short-lasting and chaotic activity. However, despite the fact that criticality brings about certain functional advantages for information processing, its supporting evidence is still far from conclusive, as it has been mostly based on power-law scaling of the sizes and durations of cascades of activity. Moreover, to what degree such a hypothesis could explain some fundamental features of neural activity is still largely unknown. One of the most prevalent features of cortical activity in vivo is known to be the irregularity of spike trains, which is measured in terms of a coefficient of variation (CV) larger than one. Here, using a minimal computational model of excitatory nodes, we show that irregular spiking (CV > 1) naturally emerges in a recurrent network operating at criticality. More importantly, we show that even in the presence of other sources of spike irregularity, being at criticality maximizes the mean coefficient of variation of neurons, thereby maximizing their spike irregularity. Furthermore, we also show that such maximized irregularity results in maximum correlation between neuronal firing rates and their corresponding spike irregularity (measured in terms of CV). On the one hand, using a model in the universality class of directed percolation, we propose new hallmarks of criticality at the single-unit level, which could be applicable to any network of excitable nodes. On the other hand, given the controversy of the neural criticality hypothesis, we discuss the limitations of this approach to neural systems and to what degree they support the criticality hypothesis in real neural networks. Finally
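The CV statistic used throughout is computed directly from inter-spike intervals; the spike trains below are synthetic stand-ins, not output of the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

def cv_isi(spike_times):
    """Coefficient of variation of inter-spike intervals, CV = std / mean.
    A Poisson process gives CV = 1; CV > 1 marks irregular ('super-Poisson') spiking."""
    isi = np.diff(np.sort(np.asarray(spike_times)))
    return isi.std() / isi.mean()

# Poisson-like train: exponential ISIs, CV close to 1
cv_poisson = cv_isi(np.cumsum(rng.exponential(1.0, 20000)))

# Bursty train: mixture of short and long ISIs, CV well above 1
isis = np.where(rng.random(20000) < 0.5,
                rng.exponential(0.1, 20000),
                rng.exponential(1.9, 20000))
cv_bursty = cv_isi(np.cumsum(isis))
```

The bursty mixture has the same mean rate as the Poisson train but a much larger CV, which is the kind of irregularity the paper argues is maximized at criticality.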
The Maximum Entropy Principle and the Modern Portfolio Theory
Directory of Open Access Journals (Sweden)
Ailton Cassetari
2003-12-01
Full Text Available In this work, a capital allocation methodology based on the Principle of Maximum Entropy was developed. Shannon's entropy is used as the measure; questions concerning Modern Portfolio Theory are also discussed. In particular, the methodology is tested by making a systematic comparison to: (1) the mean-variance (Markowitz) approach and (2) the mean-VaR approach (capital allocation based on the Value at Risk concept). In principle, such comparisons show the plausibility and effectiveness of the developed method.
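A minimal numeric sketch of such an entropy-based allocation, assuming hypothetical asset returns (none of these numbers come from the article, and this is not the author's exact formulation):

```python
import numpy as np
from scipy.optimize import minimize

r = np.array([0.05, 0.07, 0.10, 0.12])   # hypothetical expected returns
target = 0.08                            # required portfolio return

def neg_entropy(w):
    # Shannon entropy of the weight vector (sign flipped for minimization)
    return float(np.sum(w * np.log(w + 1e-12)))

constraints = [{"type": "eq", "fun": lambda w: w.sum() - 1.0},
               {"type": "eq", "fun": lambda w: w @ r - target}]
res = minimize(neg_entropy, np.full(4, 0.25),
               bounds=[(1e-9, 1.0)] * 4, constraints=constraints)
w = res.x   # most-uniform weights consistent with the return target
```

Because the target return here is slightly below the equal-weight return, the solution tilts toward the lower-return assets while staying as uniform as the constraints allow.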
Maximum mass of magnetic white dwarfs
International Nuclear Information System (INIS)
Paret, Daryel Manreza; Horvath, Jorge Ernesto; Martínez, Aurora Perez
2015-01-01
We revisit the problem of the maximum masses of magnetized white dwarfs (WDs). The impact of a strong magnetic field on the structure equations is addressed. The pressures become anisotropic due to the presence of the magnetic field and split into parallel and perpendicular components. We first construct stable solutions of the Tolman-Oppenheimer-Volkoff equations for parallel pressures and find that physical solutions vanish for the perpendicular pressure when B ≳ 10¹³ G. This fact establishes an upper bound for a magnetic field and the stability of the configurations in the (quasi) spherical approximation. Our findings also indicate that it is not possible to obtain stable magnetized WDs with super-Chandrasekhar masses because the values of the magnetic field needed for them are higher than this bound. To proceed into the anisotropic regime, we can apply results for structure equations appropriate for a cylindrical metric with anisotropic pressures that were derived in our previous work. From the solutions of the structure equations in cylindrical symmetry we have confirmed the same bound for B ∼ 10¹³ G, since beyond this value no physical solutions are possible. Our tentative conclusion is that massive WDs with masses well beyond the Chandrasekhar limit do not constitute stable solutions and should not exist. (paper)
Heat Convection at the Density Maximum Point of Water
Balta, Nuri; Korganci, Nuri
2018-01-01
Water exhibits a maximum in density at normal pressure at around 4 °C. This paper demonstrates that during cooling, at around 4 °C, the temperature remains constant for a while because of heat exchange associated with convective currents inside the water. A superficial approach suggests this is a new anomaly of water, but actually it…
Discontinuity of maximum entropy inference and quantum phase transitions
International Nuclear Information System (INIS)
Chen, Jianxin; Ji, Zhengfeng; Yu, Nengkun; Zeng, Bei; Li, Chi-Kwong; Poon, Yiu-Tung; Shen, Yi; Zhou, Duanlu
2015-01-01
In this paper, we discuss the connection between two genuinely quantum phenomena—the discontinuity of quantum maximum entropy inference and quantum phase transitions at zero temperature. It is shown that the discontinuity of the maximum entropy inference of local observable measurements signals the non-local type of transitions, where local density matrices of the ground state change smoothly at the transition point. We then propose to use the quantum conditional mutual information of the ground state as an indicator to detect the discontinuity and the non-local type of quantum phase transitions in the thermodynamic limit. (paper)
Maximum power point tracker based on fuzzy logic
International Nuclear Information System (INIS)
Daoud, A.; Midoun, A.
2006-01-01
Solar energy is used as the power source in photovoltaic power systems, and an intelligent power management system is important for obtaining the maximum power from the limited solar panels. As the sun's illumination changes, owing to variation in the angle of incidence of solar radiation and in the temperature of the panels, a Maximum Power Point Tracker (MPPT) enables optimization of solar power generation. The MPPT is a sub-system designed to extract the maximum power from a power source. In the case of a solar panel power source, the maximum power point varies as a result of changes in its electrical characteristics, which in turn are functions of radiation dose, temperature, ageing and other effects. The MPPT maximizes the power output from the panels for a given set of conditions by detecting the best working point of the power characteristic and then controlling the current through the panels or the voltage across them. Many MPPT methods have been reported in the literature. These MPPT techniques can be classified into three main categories: lookup table methods, hill climbing methods and computational methods. The techniques vary in their degree of sophistication, processing time and memory requirements. The perturbation and observation algorithm (a hill climbing technique) is commonly used owing to its ease of implementation and relative tracking efficiency. However, it has been shown that when the insolation changes rapidly, the perturbation and observation method is slow to track the maximum power point. In recent years, fuzzy controllers have been used for maximum power point tracking. This method requires only linguistic control rules for the maximum power point; no mathematical model is required, and the control method is therefore easy to implement in a real control system. In this paper, we present a simple robust MPPT using fuzzy set theory, where the hardware consists of the microchip's microcontroller unit control card and
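The perturbation and observation (hill climbing) technique mentioned above amounts to a few lines of control logic; the panel curve here is a made-up stand-in with a single maximum near 17.5 V:

```python
def perturb_and_observe(measure_power, v0, dv=0.1, steps=200):
    """Hill-climbing MPPT: nudge the operating voltage by dv and keep the
    direction whenever the measured power increased, reverse it otherwise.
    The tracker ends up oscillating within a few dv of the true MPP."""
    v = v0
    p_prev = measure_power(v)
    direction = 1.0
    for _ in range(steps):
        v += direction * dv
        p = measure_power(v)
        if p < p_prev:
            direction = -direction   # overshot the peak: reverse
        p_prev = p
    return v

# Toy P-V curve (made up) whose maximum sits near 17.5 V
def panel_power(v):
    return max(0.0, v * (5.0 - 0.3 * max(0.0, v - 17.0) ** 2))

v_mpp = perturb_and_observe(panel_power, v0=10.0)
```

The steady-state oscillation around the peak, and the slow climb when conditions change quickly, are exactly the weaknesses that motivate the fuzzy controller proposed in the paper.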
Algorithms of maximum likelihood data clustering with applications
Giada, Lorenzo; Marsili, Matteo
2002-12-01
We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on the maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson correlation coefficients of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter-free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures, whereas the outcome of standard algorithms has a much wider variability.
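For contrast with the authors' likelihood-based approach, a standard correlation-distance clustering (not their algorithm) applied to synthetic two-factor data looks like this; all data and loadings are invented for illustration:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(1)

# Six synthetic series: the first three load on one common factor,
# the last three on another, plus idiosyncratic noise.
factors = rng.normal(size=(2, 500))
data = np.vstack([0.8 * factors[g] + 0.6 * rng.normal(size=500)
                  for g in (0, 0, 0, 1, 1, 1)])

corr = np.corrcoef(data)
dist = np.sqrt(2.0 * (1.0 - corr))          # correlation distance
condensed = dist[np.triu_indices(6, k=1)]   # condensed form for scipy
labels = fcluster(linkage(condensed, method="average"),
                  t=2, criterion="maxclust")
```

Like the authors' Pearson-based likelihood, this exploits only the correlation matrix, but it requires choosing a linkage method and the number of clusters, the kind of parameters their method avoids.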
International Nuclear Information System (INIS)
Shishkov, L. K.; Gorbaev, V. A.; Tsyganov, S. V.
2007-01-01
The paper touches upon the issues of ensuring NPP safety at the stage of fuel load design and operation by applying special limitations to a series of parameters, that is, design limits. The two following approaches are compared: the one used by Western specialists for the PWR reactor and the Russian approach employed for the WWER reactor. The closeness of the approaches is established; differences, which are mainly matters of terminology, are noted (Authors)
Revealing the Maximum Strength in Nanotwinned Copper
DEFF Research Database (Denmark)
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
Modelling maximum canopy conductance and transpiration in ...
African Journals Online (AJOL)
There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...
The maximum entropy production and maximum Shannon information entropy in enzyme kinetics
Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš
2018-04-01
We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed, which enables maximization of the density of entropy production with respect to the enzyme rate constants for the enzyme reaction in a steady state. Mass and Gibbs free energy conservations are considered as optimization constraints. In such a way computed optimal enzyme rate constants in a steady state yield also the most uniform probability distribution of the enzyme states. This accounts for the maximal Shannon information entropy. By means of the stability analysis it is also demonstrated that maximal density of entropy production in that enzyme reaction requires flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example, in which density of entropy production and Shannon information entropy are numerically maximized for the enzyme Glucose Isomerase.
Site Specific Probable Maximum Precipitation Estimates and Professional Judgement
Hayes, B. D.; Kao, S. C.; Kanney, J. F.; Quinlan, K. R.; DeNeale, S. T.
2015-12-01
State and federal regulatory authorities currently rely upon the US National Weather Service Hydrometeorological Reports (HMRs) to determine probable maximum precipitation (PMP) estimates (i.e., rainfall depths and durations) for estimating flooding hazards for relatively broad regions in the US. PMP estimates for the contributing watersheds upstream of vulnerable facilities are used to estimate riverine flooding hazards, while site-specific estimates for small watersheds are appropriate for individual facilities such as nuclear power plants. The HMRs are often criticized due to their limitations on basin size, questionable applicability in regions affected by orographic effects, their lack of consistent methods, and generally by their age. HMR-51, which provides generalized PMP estimates for the United States east of the 105th meridian, was published in 1978 and is sometimes perceived as overly conservative. The US Nuclear Regulatory Commission (NRC) is currently reviewing several flood hazard evaluation reports that rely on site-specific PMP estimates that have been commercially developed. As such, NRC has recently investigated key areas of expert judgement via a generic audit and one in-depth site-specific review as they relate to identifying and quantifying actual and potential storm moisture sources, determining storm transposition limits, and adjusting available moisture during storm transposition. Though much of the approach reviewed was considered a logical extension of the HMRs, two key points of expert judgement stood out for further in-depth review. The first relates primarily to small storms and the use of a heuristic for storm representative dew point adjustment, developed for the Electric Power Research Institute by North American Weather Consultants in 1993, in order to harmonize historic storms for which only 12-hour dew point data were available with more recent storms in a single database. The second issue relates to the use of climatological averages for spatially
Stationary neutrino radiation transport by maximum entropy closure
International Nuclear Information System (INIS)
Bludman, S.A.
1994-11-01
The authors obtain the angular distributions that maximize the entropy functional for Maxwell-Boltzmann (classical), Bose-Einstein, and Fermi-Dirac radiation. In the low and high occupancy limits, the maximum entropy closure is bounded by previously known variable Eddington factors that depend only on the flux. For intermediate occupancy, the maximum entropy closure depends on both the occupation density and the flux. The Fermi-Dirac maximum entropy variable Eddington factor shows a scale invariance, which leads to a simple, exact analytic closure for fermions. This two-dimensional variable Eddington factor gives results that agree well with exact (Monte Carlo) neutrino transport calculations out of a collapse residue during early phases of hydrostatic neutron star formation
Considerations on the establishment of maximum permissible exposure of man
International Nuclear Information System (INIS)
Jacobi, W.
1974-01-01
An attempt is made in this information lecture to give a quantitative analysis of the somatic radiation risk and to illustrate a concept for fixing dose limiting values. Of primary importance is the limiting value of the radiation exposure of the whole population. By consequential application of the risk concept, the following points are considered: 1) definition of the risk of late radiation damage (cancer, leukaemia); 2) relationship between radiation dose and the radiation risk thus caused; 3) radiation risk and the current dose limiting values; 4) criteria for the maximum acceptable radiation risk; 5) the limiting value that can currently be expected. (HP/LH) [de
Limitations of Boltzmann's principle
International Nuclear Information System (INIS)
Lavenda, B.H.
1995-01-01
The usual form of Boltzmann's principle assures that maximum entropy, or entropy reduction, occurs with maximum probability, implying a unimodal distribution. Boltzmann's principle cannot be applied to nonunimodal distributions, like the arcsine law, because the entropy may be concave only over a limited portion of the interval. The method of subordination shows that the arcsine distribution corresponds to a process with a single degree of freedom, thereby confirming the invalidation of Boltzmann's principle. The fractalization of time leads to a new distribution in which arcsine and Cauchy distributions can coexist simultaneously for nonintegral degrees of freedom between √2 and 2
Scientific substantiation of the maximum allowable concentration of fluopicolide in water
Directory of Open Access Journals (Sweden)
Pelo I.М.
2014-03-01
Full Text Available In order to substantiate the maximum allowable concentration of fluopicolide in the water of water reservoirs, research was carried out. Methods of study: laboratory hygienic experiment using organoleptic, sanitary-chemical, sanitary-toxicological, sanitary-microbiological and mathematical methods. The results of the influence of fluopicolide on the organoleptic properties of water and on the sanitary regimen of reservoirs for household purposes are given, and its subthreshold concentration in water by the sanitary-toxicological hazard index was calculated. The threshold concentration of the substance by the main hazard criteria was established, and the maximum allowable concentration in water was substantiated. The studies led to the following conclusions: the fluopicolide threshold concentration in water by the organoleptic hazard index (limiting criterion: smell) is 0.15 mg/dm3; by the general sanitary hazard index (limiting criteria: impact on the number of saprophytic microflora, biochemical oxygen demand and nitrification) it is 0.015 mg/dm3; the maximum non-effective concentration is 0.14 mg/dm3; and the maximum allowable concentration is 0.015 mg/dm3.
MXLKID: a maximum likelihood parameter identifier
International Nuclear Information System (INIS)
Gavel, D.T.
1980-07-01
MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
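MXLKID itself is a Fortran-era code, but the underlying idea, maximizing a likelihood over unknown system parameters given noisy measurements, fits in a few lines; the first-order system and all values below are illustrative, not from the report:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)

# "True" system: x' = -a x, x(0) = 1, observed with Gaussian noise
a_true, sigma = 0.7, 0.05
t = np.linspace(0.0, 5.0, 50)
y = np.exp(-a_true * t) + rng.normal(scale=sigma, size=t.size)

def neg_log_likelihood(a):
    """With Gaussian measurement noise the likelihood function is
    maximized where the sum of squared residuals is minimized."""
    resid = y - np.exp(-a * t)
    return 0.5 * np.sum(resid ** 2) / sigma ** 2

a_hat = minimize_scalar(neg_log_likelihood, bounds=(0.01, 5.0),
                        method="bounded").x
```

The recovered decay constant lands close to the true value; MXLKID generalizes this to multi-parameter nonlinear dynamic systems.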
The mechanics of granitoid systems and maximum entropy production rates.
Hobbs, Bruce E; Ord, Alison
2010-01-13
A model for the formation of granitoid systems is developed involving melt production spatially below a rising isotherm that defines melt initiation. Production of the melt volumes necessary to form granitoid complexes within 10⁴-10⁷ years demands control of the isotherm velocity by melt advection. This velocity is one control on the melt flux generated spatially just above the melt isotherm, which is the control valve for the behaviour of the complete granitoid system. Melt transport occurs in conduits initiated as sheets or tubes comprising melt inclusions arising from Gurson-Tvergaard constitutive behaviour. Such conduits appear as leucosomes parallel to lineations and foliations, and ductile and brittle dykes. The melt flux generated at the melt isotherm controls the position of the melt solidus isotherm and hence the physical height of the Transport/Emplacement Zone. A conduit width-selection process, driven by changes in melt viscosity and constitutive behaviour, operates within the Transport Zone to progressively increase the width of apertures upwards. Melt can also be driven horizontally by gradients in topography; these horizontal fluxes can be similar in magnitude to vertical fluxes. Fluxes induced by deformation can compete with both buoyancy and topographic-driven flow over all length scales and result locally in transient 'ponds' of melt. Pluton emplacement is controlled by the transition in constitutive behaviour of the melt/magma from elastic-viscous at high temperatures to elastic-plastic-viscous approaching the melt solidus enabling finite thickness plutons to develop. The system involves coupled feedback processes that grow at the expense of heat supplied to the system and compete with melt advection. The result is that limits are placed on the size and time scale of the system. Optimal characteristics of the system coincide with a state of maximum entropy production rate.
Maximum phytoplankton concentrations in the sea
DEFF Research Database (Denmark)
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collect...
The calculation of maximum permissible exposure levels for laser radiation
International Nuclear Information System (INIS)
Tozer, B.A.
1979-01-01
The maximum permissible exposure data of the revised standard BS 4803 are presented as a set of decision charts which ensure that the user automatically takes into account such details as pulse length and pulse pattern, limiting angular subtense, combinations of multiple wavelength and/or multiple pulse lengths, etc. The two decision charts given are for the calculation of radiation hazards to skin and eye respectively. (author)
Investigation on maximum transition temperature of phonon mediated superconductivity
Energy Technology Data Exchange (ETDEWEB)
Fusui, L; Yi, S; Yinlong, S [Physics Department, Beijing University (CN)]
1989-05-01
Three model effective phonon spectra are proposed to obtain plots of T_c versus ω and λ versus ω. It can be concluded that there is no maximum limit of T_c in phonon-mediated superconductivity for reasonable values of λ. The importance of the high-frequency LO phonon is also emphasized. Some discussions of high T_c are given.
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.
Pattern formation, logistics, and maximum path probability
Kirkaldy, J. S.
1985-05-01
The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are
Maximum Likelihood Compton Polarimetry with the Compton Spectrometer and Imager
Energy Technology Data Exchange (ETDEWEB)
Lowell, A. W.; Boggs, S. E; Chiu, C. L.; Kierans, C. A.; Sleator, C.; Tomsick, J. A.; Zoglauer, A. C. [Space Sciences Laboratory, University of California, Berkeley (United States); Chang, H.-K.; Tseng, C.-H.; Yang, C.-Y. [Institute of Astronomy, National Tsing Hua University, Taiwan (China); Jean, P.; Ballmoos, P. von [IRAP Toulouse (France); Lin, C.-H. [Institute of Physics, Academia Sinica, Taiwan (China); Amman, M. [Lawrence Berkeley National Laboratory (United States)
2017-10-20
Astrophysical polarization measurements in the soft gamma-ray band are becoming more feasible as detectors with high position and energy resolution are deployed. Previous work has shown that the minimum detectable polarization (MDP) of an ideal Compton polarimeter can be improved by ∼21% when an unbinned, maximum likelihood method (MLM) is used instead of the standard approach of fitting a sinusoid to a histogram of azimuthal scattering angles. Here we outline a procedure for implementing this maximum likelihood approach for real, nonideal polarimeters. As an example, we use the recent observation of GRB 160530A with the Compton Spectrometer and Imager. We find that the MDP for this observation is reduced by 20% when the MLM is used instead of the standard method.
Maximum Entropy Estimation of Transition Probabilities of Reversible Markov Chains
Directory of Open Access Journals (Sweden)
Erik Van der Straeten
2009-11-01
Full Text Available In this paper, we develop a general theory for the estimation of the transition probabilities of reversible Markov chains using the maximum entropy principle. A broad range of physical models can be studied within this approach. We use one-dimensional classical spin systems to illustrate the theoretical ideas. The examples studied in this paper are: the Ising model, the Potts model and the Blume-Emery-Griffiths model.
Bayesian interpretation of Generalized empirical likelihood by maximum entropy
Rochet, Paul
2011-01-01
We study a parametric estimation problem related to moment condition models. As an alternative to the generalized empirical likelihood (GEL) and the generalized method of moments (GMM), a Bayesian approach to the problem can be adopted, extending the MEM procedure to parametric moment conditions. We show in particular that a large number of GEL estimators can be interpreted as a maximum entropy solution. Moreover, we provide a more general field of applications by proving the method to be rob...
Towards Improved Optical Limiters
National Research Council Canada - National Science Library
Huffman, Peter
2002-01-01
.... The first approach was to synthesize and study soluble thallium phthalocyanines. Thallium, due to its proximity to lead and indium on the periodic table, should exhibit favorable optical limiting properties...
Energy Technology Data Exchange (ETDEWEB)
Loescher, D.H. [Sandia National Labs., Albuquerque, NM (United States). Systems Surety Assessment Dept.; Noren, K. [Univ. of Idaho, Moscow, ID (United States). Dept. of Electrical Engineering
1996-09-01
The current that flows between the electrical test equipment and the nuclear explosive must be limited to safe levels during electrical tests conducted on nuclear explosives at the DOE Pantex facility. The safest way to limit the current is to use batteries that can provide only acceptably low current into a short circuit; unfortunately this is not always possible. When it is not possible, current limiters, along with other design features, are used to limit the current. Three types of current limiters, the fuse blower, the resistor limiter, and the MOSFET-pass-transistor limiters, are used extensively in Pantex test equipment. Detailed failure mode and effects analyses were conducted on these limiters. Two other types of limiters were also analyzed. It was found that there is no best type of limiter that should be used in all applications. The fuse blower has advantages when many circuits must be monitored, a low insertion voltage drop is important, and size and weight must be kept low. However, this limiter has many failure modes that can lead to the loss of overcurrent protection. The resistor limiter is simple and inexpensive, but is normally usable only on circuits for which the nominal current is less than a few tens of milliamperes. The MOSFET limiter can be used on high current circuits, but it has a number of single point failure modes that can lead to a loss of protective action. Because bad component placement or poor wire routing can defeat any limiter, placement and routing must be designed carefully and documented thoroughly.
Tail Risk Constraints and Maximum Entropy
Directory of Open Access Journals (Sweden)
Donald Geman
2015-06-01
Full Text Available Portfolio selection in the financial literature has essentially been analyzed under two central assumptions: full knowledge of the joint probability distribution of the returns of the securities that will comprise the target portfolio; and investors' preferences are expressed through a utility function. In the real world, operators build portfolios under risk constraints which are expressed both by their clients and regulators and which bear on the maximal loss that may be generated over a given time period at a given confidence level (the so-called Value at Risk of the position). Interestingly, in the finance literature, a serious discussion of how much or little is known from a probabilistic standpoint about the multi-dimensional density of the assets' returns seems to be of limited relevance. Our approach in contrast is to highlight these issues and then adopt throughout a framework of entropy maximization to represent the real world ignorance of the “true” probability distributions, both univariate and multivariate, of traded securities' returns. In this setting, we identify the optimal portfolio under a number of downside risk constraints. Two interesting results are exhibited: (i) the left-tail constraints are sufficiently powerful to override all other considerations in the conventional theory; (ii) the “barbell portfolio” (maximal certainty/low risk in one set of holdings, maximal uncertainty in another), which is quite familiar to traders, naturally emerges in our construction.
Pushing concentration of stationary solar concentrators to the limit.
Winston, Roland; Zhang, Weiya
2010-04-26
We give the theoretical limit of concentration allowed by nonimaging optics for stationary solar concentrators after reviewing sun-earth geometry in direction cosine space. We then discuss the design principles that we follow to approach the maximum concentration, along with examples including a hollow CPC trough, a dielectric CPC trough, and a 3D dielectric stationary solar concentrator which concentrates sunlight four times (4x) for eight hours per day, year-round.
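The concentration ceiling the authors design against can be illustrated with the standard etendue bounds of nonimaging optics: 1/sin(θ) for a 2D trough and 1/sin²(θ) for a 3D concentrator with acceptance half-angle θ (in air). This is a textbook sketch, not the paper's own calculation; the 23.45° figure is simply the sun's seasonal declination excursion.

```python
import math

def cpc_concentration_limit(acceptance_half_angle_deg):
    """Thermodynamic concentration limits of nonimaging optics for a given
    acceptance half-angle: 1/sin(theta) for a 2D trough, 1/sin^2(theta) for
    a 3D concentrator (refractive index 1). Textbook bounds, shown here only
    to illustrate the limits the paper works against."""
    s = math.sin(math.radians(acceptance_half_angle_deg))
    return 1 / s, 1 / s ** 2

# Accepting the +/-23.45 deg seasonal excursion of the sun:
c2d, c3d = cpc_concentration_limit(23.45)
print(round(c2d, 2), round(c3d, 2))
```

With this acceptance angle the 3D bound is a little over 6x, which is consistent with the abstract's report of a practical stationary design reaching about 4x.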
MPBoot: fast phylogenetic maximum parsimony tree inference and bootstrap approximation.
Hoang, Diep Thi; Vinh, Le Sy; Flouri, Tomáš; Stamatakis, Alexandros; von Haeseler, Arndt; Minh, Bui Quang
2018-02-02
The nonparametric bootstrap is widely used to measure the branch support of phylogenetic trees. However, bootstrapping is computationally expensive and remains a bottleneck in phylogenetic analyses. Recently, an ultrafast bootstrap approximation (UFBoot) approach was proposed for maximum likelihood analyses. However, such an approach is still missing for maximum parsimony. To close this gap we present MPBoot, an adaptation and extension of UFBoot to compute branch supports under the maximum parsimony principle. MPBoot works for both uniform and non-uniform cost matrices. Our analyses on biological DNA and protein data showed that under uniform cost matrices, MPBoot runs on average 4.7 times (DNA) to 7 times (protein) (range: 1.2-20.7) faster than the standard parsimony bootstrap implemented in PAUP*, but 1.6 times (DNA) to 4.1 times (protein) slower than the standard bootstrap with a fast search routine in TNT (fast-TNT). However, for non-uniform cost matrices MPBoot is 5 times (DNA) to 13 times (protein) (range: 0.3-63.9) faster than fast-TNT. We note that MPBoot achieves better scores more frequently than PAUP* and fast-TNT. However, this effect is less pronounced if an intensive but slower search in TNT is invoked. Moreover, experiments on large-scale simulated data show that while both PAUP* and TNT bootstrap estimates are too conservative, MPBoot bootstrap estimates appear more unbiased. MPBoot provides an efficient alternative to the standard maximum parsimony bootstrap procedure. It shows favorable performance in terms of run time, the capability of finding a maximum parsimony tree, and high bootstrap accuracy on simulated as well as empirical data sets. MPBoot is easy-to-use, open-source and available at http://www.cibiv.at/software/mpboot.
Time Reversal Migration for Passive Sources Using a Maximum Variance Imaging Condition
Wang, H.; Alkhalifah, Tariq Ali
2017-01-01
The conventional time-reversal imaging approach for micro-seismic or passive source location is based on focusing the back-propagated wavefields from each recorded trace in a source image. It suffers from strong background noise and limited acquisition aperture, which may create unexpected artifacts and cause errors in the source location. To overcome this problem, we propose a new imaging condition for microseismic imaging, which is based on comparing the amplitude variance in certain windows, and use it to suppress the artifacts as well as find the right location for passive sources. Instead of simply searching for the maximum energy point in the back-propagated wavefield, we calculate the amplitude variances over a window moving along both the space and time axes to create a highly resolved passive event image. The variance operation has negligible cost compared with the forward/backward modeling operations, which shows that the maximum variance imaging condition is efficient and effective. We test our approach numerically on a simple three-layer model and on a piece of the Marmousi model as well, both of which have shown reasonably good results.
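The windowed-variance scan at the heart of this imaging condition can be sketched in a few lines. This is a toy illustration on a synthetic 2D (time x space) array, not the authors' implementation: a flat background has zero variance, while an oscillating "focused" patch maximizes the window variance.

```python
import statistics

def max_variance_image(wavefield, win_t, win_x):
    """Scan a (time x space) wavefield with a moving window and return the
    (t, x) index of the window with the largest amplitude variance.
    Toy version of a variance-based imaging condition."""
    nt, nx = len(wavefield), len(wavefield[0])
    best, best_idx = -1.0, (0, 0)
    for t in range(nt - win_t + 1):
        for x in range(nx - win_x + 1):
            samples = [wavefield[t + i][x + j]
                       for i in range(win_t) for j in range(win_x)]
            v = statistics.pvariance(samples)
            if v > best:
                best, best_idx = v, (t, x)
    return best_idx

# Flat background with one oscillating (high-variance) patch at (4, 4):
field = [[0.0] * 10 for _ in range(10)]
for i in range(3):
    for j in range(3):
        field[4 + i][4 + j] = (-1.0) ** (i + j)
print(max_variance_image(field, 3, 3))  # locates the patch
```

A maximum-energy pick would be fooled by any single strong-amplitude sample, whereas the variance window rewards coherent oscillation inside the window, which is the point the abstract makes.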
Estimating the maximum potential revenue for grid-connected electricity storage
Energy Technology Data Exchange (ETDEWEB)
Byrne, Raymond Harry; Silva Monroy, Cesar Augusto.
2012-12-01
The valuation of an electricity storage device is based on the expected future cash flow generated by the device. Two potential sources of income for an electricity storage system are energy arbitrage and participation in the frequency regulation market. Energy arbitrage refers to purchasing (storing) energy when electricity prices are low, and selling (discharging) energy when electricity prices are high. Frequency regulation is an ancillary service geared towards maintaining system frequency, and is typically procured by the independent system operator in some type of market. This paper outlines the calculations required to estimate the maximum potential revenue from participating in these two activities. First, a mathematical model is presented for the state of charge as a function of the storage device parameters and the quantities of electricity purchased/sold as well as the quantities offered into the regulation market. Using this mathematical model, we present a linear programming optimization approach to calculating the maximum potential revenue from an electricity storage device. The calculation of the maximum potential revenue is critical in developing an upper bound on the value of storage, as a benchmark for evaluating potential trading strategies, and as a tool for capital finance risk assessment. Then, we use historical California Independent System Operator (CAISO) data from 2010-2011 to evaluate the maximum potential revenue from the Tehachapi wind energy storage project, an American Recovery and Reinvestment Act of 2009 (ARRA) energy storage demonstration project. We investigate the maximum potential revenue from two different scenarios: arbitrage only and arbitrage combined with the regulation market. Our analysis shows that participation in the regulation market produces four times the revenue compared to arbitrage in the CAISO market using 2010 and 2011 data. Then we evaluate several trading strategies to illustrate how they compare to the
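The arbitrage side of the calculation can be illustrated with a deliberately tiny model. Instead of the paper's linear program, this sketch brute-forces every charge/idle/discharge schedule of a small device over a few hours; the prices, capacity, and power rating are hypothetical placeholders, not CAISO data.

```python
from itertools import product

def best_arbitrage_revenue(prices, capacity=2, power=1):
    """Exhaustively search charge (-1) / idle (0) / discharge (+1) schedules
    of a lossless storage device, keeping only schedules whose state of
    charge stays within [0, capacity], and return the maximum revenue.
    A toy stand-in for the LP formulation described in the abstract."""
    best = 0.0
    for schedule in product((-1, 0, 1), repeat=len(prices)):
        soc, revenue, feasible = 0, 0.0, True
        for action, price in zip(schedule, prices):
            soc -= action * power          # charging raises state of charge
            if not 0 <= soc <= capacity:
                feasible = False
                break
            revenue += action * power * price  # buy at -price, sell at +price
        if feasible:
            best = max(best, revenue)
    return best

# Hypothetical hourly prices ($/MWh): buy both dips, sell both peaks.
print(best_arbitrage_revenue([10, 50, 10, 50]))
```

For realistic horizons the search space explodes (3^T schedules), which is why the paper formulates the problem as a linear program instead.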
International Nuclear Information System (INIS)
Tendler, M.
1984-06-01
The energy loss from a tokamak plasma due to neutral hydrogen radiation and recycling is of great importance for the energy balance at the periphery. It is shown that the requirement for thermal equilibrium implies a constraint on the maximum attainable edge density. The relation to other density limits is discussed. The average plasma density is shown to be a strong function of the refuelling deposition profile. (author)
Murray, Michael J.
2013-01-01
This article offers a critical examination of the transformative approach to education for political citizenship. The argument offered here is that the transformative approaches lacks the capacity to fully acknowledge the asymmetries of political power and as a consequence, it promotes an idealised construct of political citizenship which does not…
International Nuclear Information System (INIS)
Mubayi, V.
1995-05-01
The consequences of severe accidents at nuclear power plants can be limited by various protective actions, including emergency responses and long-term measures, to reduce exposures of affected populations. Each of these protective actions involves costs to society. The costs of the long-term protective actions depend on the criterion adopted for the allowable level of long-term exposure. This criterion, called the "long-term interdiction limit," is expressed in terms of the projected dose to an individual over a certain time period from the long-term exposure pathways. The two measures of offsite consequences, latent cancers and costs, are inversely related, and the choice of an interdiction limit is, in effect, a trade-off between these two measures. By monetizing the health effects (through ascribing a monetary value to life lost), the costs of the two consequence measures vary with the interdiction limit: the health effect costs increase as the limit is relaxed and the protective action costs decrease. The minimum of the total cost curve can be used to calculate an optimal long-term interdiction limit. The calculation of such an optimal limit is presented for each of five US nuclear power plants which were analyzed for severe accident risk in the NUREG-1150 program by the Nuclear Regulatory Commission.
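The minimisation described above reduces to picking the limit where the sum of two opposing cost curves is smallest. The numbers below are purely illustrative placeholders, not values from the NUREG-1150 analyses:

```python
def optimal_interdiction_limit(limits, health_cost, protection_cost):
    """Pick the interdiction limit minimising total societal cost, given
    monetised health-effect costs (rising as the limit is relaxed) and
    protective-action costs (falling). Illustrative numbers only."""
    totals = [h + p for h, p in zip(health_cost, protection_cost)]
    return limits[totals.index(min(totals))]

# Hypothetical candidate dose limits and costs in $M:
limits          = [0.5, 1, 2, 4, 10]
health_cost     = [10, 20, 45, 90, 220]    # grows as the limit is relaxed
protection_cost = [400, 250, 120, 60, 20]  # shrinks as the limit is relaxed
print(optimal_interdiction_limit(limits, health_cost, protection_cost))
```

In the toy data the total cost curve is U-shaped, so an interior limit minimises it, which mirrors the trade-off argument in the abstract.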
Maximum entropy analysis of EGRET data
DEFF Research Database (Denmark)
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background, like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
The Maximum Resource Bin Packing Problem
DEFF Research Database (Denmark)
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used, or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used … algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find …
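The two heuristics named in the abstract differ only in the order items are fed to the First-Fit rule, and item order alone changes how many bins get opened. A minimal sketch with made-up integer item sizes (not an example from the paper):

```python
def first_fit(items, bin_size=10):
    """Place items in the given order with the First-Fit rule:
    each item goes into the first open bin it fits in, else a new bin."""
    bins = []
    for item in items:
        for b in bins:
            if sum(b) + item <= bin_size:
                b.append(item)
                break
        else:                      # no open bin fits the item
            bins.append([item])
    return bins

items = [6, 5, 4, 3, 2]
ffi = first_fit(sorted(items))                  # First-Fit-Increasing
ffd = first_fit(sorted(items, reverse=True))    # First-Fit-Decreasing
print(len(ffi), len(ffd))
```

On this input the increasing order opens three bins while the decreasing order packs tightly into two, which is why, when the goal is to *maximize* bins used, the increasing variant is the natural candidate.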
Shower maximum detector for SDC calorimetry
International Nuclear Information System (INIS)
Ernwein, J.
1994-01-01
A prototype for the SDC end-cap (EM) calorimeter complete with a pre-shower and a shower maximum detector was tested in beams of electrons and pions at CERN by an SDC subsystem group. The prototype was manufactured from scintillator tiles and strips read out with 1 mm diameter wave-length shifting fibers. The design and construction of the shower maximum detector are described, and results of laboratory tests on light yield and performance of the scintillator-fiber system are given. Preliminary results on energy and position measurements with the shower maximum detector in the test beam are shown. (authors). 4 refs., 5 figs
Density estimation by maximum quantum entropy
International Nuclear Information System (INIS)
Silver, R.N.; Wallstrom, T.; Martz, H.F.
1993-01-01
A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets
On the maximum entropy distributions of inherently positive nuclear data
Energy Technology Data Exchange (ETDEWEB)
Taavitsainen, A., E-mail: aapo.taavitsainen@gmail.com; Vanhanen, R.
2017-05-11
The multivariate log-normal distribution is used by many authors and statistical uncertainty propagation programs for inherently positive quantities. Sometimes it is claimed that the log-normal distribution results from the maximum entropy principle, if only means, covariances and inherent positiveness of quantities are known or assumed to be known. In this article we show that this is not true. Assuming a constant prior distribution, the maximum entropy distribution is in fact a truncated multivariate normal distribution – whenever it exists. However, its practical application to multidimensional cases is hindered by lack of a method to compute its location and scale parameters from means and covariances. Therefore, regardless of its theoretical disadvantage, use of other distributions seems to be a practical necessity. - Highlights: • Statistical uncertainty propagation requires a sampling distribution. • The objective distribution of inherently positive quantities is determined. • The objectivity is based on the maximum entropy principle. • The maximum entropy distribution is the truncated normal distribution. • Applicability of log-normal or normal distribution approximation is limited.
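The truncated-normal distribution the abstract identifies can be sampled naively by rejection: draw from the untruncated normal and keep only positive values. This illustrative sketch (not from the paper) is practical only when most of the probability mass is already on the positive side, which echoes the authors' point that applying the distribution in practice is awkward.

```python
import random

def truncated_normal_sample(mu, sigma, n, seed=0):
    """Sample normal(mu, sigma) truncated to positive values by simple
    rejection - a naive illustration of the truncated-normal maximum
    entropy distribution for an inherently positive quantity."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = rng.gauss(mu, sigma)
        if x > 0:            # keep only inherently positive draws
            out.append(x)
    return out

samples = truncated_normal_sample(mu=1.0, sigma=0.5, n=1000)
print(min(samples) > 0)   # every retained draw is positive
```

Note that truncation shifts the sample mean above mu, which is the practical difficulty the authors raise: the location and scale parameters of the truncated distribution are not simply the target mean and covariance.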
Directory of Open Access Journals (Sweden)
Nicholas Wasonga Orago
2013-12-01
Full Text Available On 27 August 2010 Kenya adopted a transformative Constitution with the objective of fighting poverty and inequality as well as improving the standards of living of all people in Kenya. One of the mechanisms in the 2010 Constitution aimed at achieving this egalitarian transformation is the entrenchment of justiciable socio-economic rights (SERs, an integral part of the Bill of Rights. The entrenched SERs require the State to put in place a legislative, policy and programmatic framework to enhance the realisation of its constitutional obligations to respect, protect and fulfill these rights for all Kenyans. These SER obligations, just like any other fundamental human rights obligations, are, however, not absolute and are subject to legitimate limitation by the State. Two approaches have been used in international and comparative national law jurisprudence to limit SERs: the proportionality approach, using a general limitation clause that has found application in international and regional jurisprudence on the one hand; and the reasonableness approach, using internal limitations contained in the standard of progressive realisation, an approach that has found application in the SER jurisprudence of the South African Courts, on the other hand. This article proposes that if the entrenched SERs are to achieve their transformative objectives, Kenyan courts must adopt a proportionality approach in the judicial adjudication of SER disputes. This proposal is based on the reasoning that for the entrenched SERs to have a substantive positive impact on the lives of the Kenyan people, any measure by the government aimed at their limitation must be subjected to strict scrutiny by the courts, a form of scrutiny that can be achieved only by using the proportionality standard entrenched in the article 24 general limitation clause.
The maximum economic depth of groundwater abstraction for irrigation
Bierkens, M. F.; Van Beek, L. P.; de Graaf, I. E. M.; Gleeson, T. P.
2017-12-01
Over recent decades, groundwater has become increasingly important for agriculture. Irrigation accounts for 40% of the global food production and its importance is expected to grow further in the near future. Already, about 70% of the globally abstracted water is used for irrigation, and nearly half of that is pumped groundwater. In many irrigated areas where groundwater is the primary source of irrigation water, groundwater abstraction is larger than recharge and we see massive groundwater head decline in these areas. An important question then is: to what maximum depth can groundwater be pumped for it to be still economically recoverable? The objective of this study is therefore to create a global map of the maximum depth of economically recoverable groundwater when used for irrigation. The maximum economic depth is the maximum depth at which revenues are still larger than pumping costs or the maximum depth at which initial investments become too large compared to yearly revenues. To this end we set up a simple economic model where costs of well drilling and the energy costs of pumping, which are a function of well depth and static head depth respectively, are compared with the revenues obtained for the irrigated crops. Parameters for the cost sub-model are obtained from several US-based studies and applied to other countries based on GDP/capita as an index of labour costs. The revenue sub-model is based on gross irrigation water demand calculated with a global hydrological and water resources model, areal coverage of crop types from MIRCA2000 and FAO-based statistics on crop yield and market price. We applied our method to irrigated areas in the world overlying productive aquifers. Estimated maximum economic depths range between 50 and 500 m. Most important factors explaining the maximum economic depth are the dominant crop type in the area and whether or not initial investments in well infrastructure are limiting. In subsequent research, our estimates of
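The cost-revenue comparison described above can be reduced to a one-line break-even search. All parameter values below are hypothetical placeholders, not figures from the study, and the 20-year amortisation of drilling cost is an assumption made for the sketch:

```python
def maximum_economic_depth(annual_revenue, drilling_cost_per_m,
                           pumping_cost_per_m, max_depth=1000):
    """Return the largest well depth (m) at which yearly irrigation revenue
    still covers yearly pumping cost plus drilling cost amortised over an
    assumed 20-year well lifetime. Toy version of the economic model."""
    best = 0
    for depth in range(1, max_depth + 1):
        yearly_cost = (pumping_cost_per_m * depth
                       + drilling_cost_per_m * depth / 20.0)
        if annual_revenue >= yearly_cost:
            best = depth
    return best

# Hypothetical values: revenue per well vs costs growing linearly with depth.
print(maximum_economic_depth(annual_revenue=900,
                             drilling_cost_per_m=40,
                             pumping_cost_per_m=4))
```

Because both cost terms grow with depth while revenue is fixed, the feasible region is an interval from the surface down to a single break-even depth, which is exactly the "maximum economic depth" the study maps globally.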
Maximum-Entropy Inference with a Programmable Annealer
Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.
2016-03-01
Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise, then finding the ground state maximises the likelihood that the solution is correct. The maximum entropy solution, on the other hand, takes the form of a Boltzmann distribution over the ground and excited states of the cost function, to correct for noise. Here we use a programmable annealer for the information decoding problem, which we simulate as a random Ising model in a field. We show experimentally that finite-temperature maximum entropy decoding can give slightly better bit-error rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore, we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
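The contrast between the two decoders can be made concrete on a toy problem small enough to brute-force: maximum likelihood keeps only the single minimum-energy bit string, while finite-temperature (maximum entropy) decoding weights every string by a Boltzmann factor and reads off per-bit marginals. The "received signal" and energy function below are invented for illustration, not the paper's Ising instances.

```python
import math
from itertools import product

def boltzmann_bit_marginals(energy, n_bits, beta):
    """Finite-temperature decoding: weight every bit string by
    exp(-beta * E) and return per-bit marginal probabilities, instead of
    keeping only the minimum-energy string. Brute force, toy scale only."""
    weights = {s: math.exp(-beta * energy(s))
               for s in product((0, 1), repeat=n_bits)}
    z = sum(weights.values())
    return [sum(w for s, w in weights.items() if s[i] == 1) / z
            for i in range(n_bits)]

# Hypothetical noisy received signal; energy = squared distance to it.
received = (0.9, 0.2, 0.6)
energy = lambda s: sum((b - r) ** 2 for b, r in zip(s, received))
marginals = boltzmann_bit_marginals(energy, 3, beta=2.0)
print([round(m, 2) for m in marginals])  # decode bit i as 1 if marginal > 0.5
```

The excited states pull each marginal away from 0 or 1, so the marginals also convey per-bit confidence, information that a pure ground-state (maximum likelihood) decoder discards.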
Direct maximum parsimony phylogeny reconstruction from genotype data.
Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell
2007-12-05
Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.
International Nuclear Information System (INIS)
Fitoussi, L.
1987-12-01
The dose limit is defined to be the level of harmfulness which must not be exceeded, so that an activity can be exercised in a regular manner without running a risk unacceptable to man and the society. The paper examines the effects of radiation categorised into stochastic and non-stochastic. Dose limits for workers and the public are discussed
Geiss, S; Einax, J W
2001-07-01
Detection limit, reporting limit and limit of quantitation are analytical parameters which describe the power of analytical methods. These parameters are used for internal quality assurance and externally for comparison, especially in the case of trace analysis in environmental compartments. The wide variety of possibilities for computing or obtaining these measures in the literature and in legislative rules makes any comparison difficult. Additionally, a host of terms have been used within the analytical community to describe detection and quantitation capabilities. Without trying to create an order for the variety of terms, this paper is aimed at providing a practical proposal for answering the main questions for analysts concerning the quality measures above. These main questions and related parameters are explained and graphically demonstrated. Estimation and verification of these parameters are the two steps needed to obtain reliable measures. A rule for practical verification is given in a table, where the analyst can read off what to measure, what to estimate and which criteria have to be fulfilled. Verified in this manner, the parameters detection limit, reporting limit and limit of quantitation become comparable, and the analyst is responsible for the unambiguity and reliability of these measures.
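One of the competing conventions the paper surveys estimates these limits from a linear calibration: detection limit = 3.3 s/b and limit of quantitation = 10 s/b, where b is the calibration slope and s the residual standard deviation (an ICH-style rule; the paper discusses several alternatives). A self-contained sketch with invented calibration data:

```python
def detection_and_quantitation_limits(conc, signal):
    """Estimate the detection limit (3.3*s/b) and limit of quantitation
    (10*s/b) from a linear calibration: b is the least-squares slope and
    s the residual standard deviation. One convention among several."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(signal) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(conc, signal))
         / sum((x - mx) ** 2 for x in conc))
    a = my - b * mx
    resid = [y - (a + b * x) for x, y in zip(conc, signal)]
    s = (sum(r * r for r in resid) / (n - 2)) ** 0.5
    return 3.3 * s / b, 10 * s / b

# Hypothetical 6-point calibration (concentration vs instrument response):
lod, loq = detection_and_quantitation_limits(
    conc=[0, 1, 2, 3, 4, 5],
    signal=[0.02, 1.05, 1.98, 3.10, 3.95, 5.01])
print(round(lod, 3), round(loq, 3))
```

Because both limits share the factor s/b, the limit of quantitation is always 10/3.3 times the detection limit under this convention; what varies between the competing definitions is how s is estimated (blank replicates, calibration residuals, or prediction intervals).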
STUDY ON MAXIMUM SPECIFIC SLUDGE ACTIVITY OF DIFFERENT ANAEROBIC GRANULAR SLUDGE BY BATCH TESTS
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
The maximum specific sludge activity of granular sludge from large-scale UASB, IC and Biobed anaerobic reactors was investigated by batch tests. The factors limiting maximum specific sludge activity (diffusion, substrate type, substrate concentration and granule size) were studied. A general principle and procedure for the precise measurement of maximum specific sludge activity are suggested. The potential loading-rate capacities of the IC and Biobed anaerobic reactors were analyzed and compared using the batch test results.
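A typical batch-test evaluation of this kind takes the steepest sustained slope of cumulative methane production and normalises it by the volatile suspended solids (VSS) in the test bottle. The sketch below illustrates that calculation with entirely hypothetical numbers, not data from the study:

```python
# Hedged sketch: specific methanogenic activity from a batch test, taken
# as the maximum hourly methane production rate (expressed as g COD)
# divided by the VSS mass in the bottle. All values are hypothetical.

ch4_cod = [0.0, 0.05, 0.15, 0.30, 0.45, 0.58, 0.66, 0.70]  # cumulative g COD, hourly
vss_g = 2.0                  # g VSS in the test bottle

# steepest hourly increment over the series (g COD / h)
max_rate = max(b - a for a, b in zip(ch4_cod, ch4_cod[1:]))

sma = max_rate * 24 / vss_g  # g COD / (g VSS * d)
print(f"max specific activity ~ {sma:.2f} g COD/(g VSS*d)")
```

In practice the slope would be fitted over several points in the linear phase rather than taken from a single increment, and corrections for endogenous methane production may apply.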
Nonsymmetric entropy and maximum nonsymmetric entropy principle
International Nuclear Information System (INIS)
Liu Chengshi
2009-01-01
Within the framework of a statistical model, the concept of nonsymmetric entropy, which generalizes Boltzmann's entropy and Shannon's entropy, is defined, and a maximum nonsymmetric entropy principle is proved. Important distribution laws, such as the power law, can be derived naturally from this principle. In particular, nonsymmetric entropy is more convenient than other entropies, such as Tsallis entropy, for deriving power laws.
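For context, the classical case that nonsymmetric entropy generalises works as follows: maximising Shannon entropy over a finite state space subject to a fixed mean yields a Boltzmann (exponential) distribution p_i proportional to exp(-beta*i). The sketch below (illustrative values, not from the paper) solves for beta by bisection:

```python
# Classical maximum (Shannon) entropy principle: on states {1..N} with a
# prescribed mean, the entropy-maximising distribution is Boltzmann,
# p_i ∝ exp(-beta*i). Solve for beta by bisection; numbers illustrative.

import math

def boltzmann(beta, n=10):
    w = [math.exp(-beta * i) for i in range(1, n + 1)]
    z = sum(w)                       # partition function
    return [x / z for x in w]

def mean(p):
    return sum(i * pi for i, pi in enumerate(p, start=1))

target = 3.0                         # prescribed mean constraint
lo, hi = -10.0, 10.0                 # mean is decreasing in beta
for _ in range(100):
    mid = (lo + hi) / 2
    if mean(boltzmann(mid)) > target:
        lo = mid
    else:
        hi = mid

p = boltzmann(lo)
entropy = -sum(pi * math.log(pi) for pi in p)
print(f"beta ~ {lo:.3f}, mean ~ {mean(p):.3f}, entropy ~ {entropy:.3f}")
```

The paper's claim is that replacing Shannon entropy with nonsymmetric entropy in this variational setup makes power-law solutions fall out directly, rather than exponentials.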
Maximum speed of dewetting on a fiber
Chan, Tak Shing; Gueudre, Thomas; Snoeijer, Jacobus Hendrikus
2011-01-01
A solid object can be coated by a nonwetting liquid since a receding contact line cannot exceed a critical speed. We theoretically investigate this forced wetting transition for axisymmetric menisci on fibers of varying radii. First, we use a matched asymptotic expansion and derive the maximum speed
Maximum potential preventive effect of hip protectors
van Schoor, N.M.; Smit, J.H.; Bouter, L.M.; Veenings, B.; Asma, G.B.; Lips, P.T.A.M.
2007-01-01
OBJECTIVES: To estimate the maximum potential preventive effect of hip protectors in older persons living in the community or homes for the elderly. DESIGN: Observational cohort study. SETTING: Emergency departments in the Netherlands. PARTICIPANTS: Hip fracture patients aged 70 and older who
Maximum gain of Yagi-Uda arrays
DEFF Research Database (Denmark)
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the entropy of radiation of the S^3 universe at its final stage, S_rad, becomes maximum; we call this the maximum entropy principle. Although it is difficult to confirm this principle in general, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
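As a rough sanity check of that scaling (my input values, not the paper's): with T_BBN of order 1 MeV, M_pl ~ 1.22e19 GeV, and the electron Yukawa y_e = sqrt(2) m_e / v_h evaluated at the observed v_h = 246 GeV, the formula indeed lands at a few hundred GeV:

```python
# Order-of-magnitude check of v_h ~ T_BBN^2 / (M_pl * y_e^5) using
# standard values (assumed here, not taken from the paper).

import math

T_BBN = 1e-3                             # GeV, onset of nucleosynthesis (~1 MeV)
M_pl = 1.22e19                           # GeV, Planck mass
y_e = math.sqrt(2) * 0.511e-3 / 246.0    # electron Yukawa, sqrt(2)*m_e/v_h

v_h = T_BBN**2 / (M_pl * y_e**5)
print(f"v_h ~ {v_h:.0f} GeV")            # of order a few hundred GeV
```

The strong y_e^5 dependence is why the formula is only a rough scaling: a small shift in the electron Yukawa moves the predicted weak scale substantially.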
The maximum-entropy method in superspace
Czech Academy of Sciences Publication Activity Database
van Smaalen, S.; Palatinus, Lukáš; Schneider, M.
2003-01-01
Vol. 59 (2003), pp. 459-469. ISSN 0108-7673. Grant support: DFG (DE). Institutional research plan: CEZ:AV0Z1010914. Keywords: maximum-entropy method * aperiodic crystals * electron density. Subject RIV: BM - Solid Matter Physics; Magnetism. Impact factor: 1.558, year: 2003
Achieving maximum sustainable yield in mixed fisheries
Ulrich, Clara; Vermard, Youen; Dolder, Paul J.; Brunel, Thomas; Jardim, Ernesto; Holmes, Steven J.; Kempf, Alexander; Mortensen, Lars O.; Poos, Jan Jaap; Rindorf, Anna
2017-01-01
Achieving single species maximum sustainable yield (MSY) in complex and dynamic fisheries targeting multiple species (mixed fisheries) is challenging because achieving the objective for one species may mean missing the objective for another. The North Sea mixed fisheries are a representative example