OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across its member countries, the Organisation for Economic Co-operation and Development (OECD) has developed an MRL Calculator.
14 CFR 25.1505 - Maximum operating limit speed.
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Maximum operating limit speed. 25.1505... Operating Limitations § 25.1505 Maximum operating limit speed. The maximum operating limit speed (V MO/M MO airspeed or Mach Number, whichever is critical at a particular altitude) is a speed that may not...
A Note on k-Limited Maximum Base
Yang Ruishun; Yang Xiaowei
2006-01-01
The problem of the k-limited maximum base was specialized into two cases, in which the subset D is taken to be an independent set and a circuit of the matroid, respectively. It was proved that in each case the collection of k-limited bases satisfies the base axioms, so a new matroid is determined and the k-limited maximum base problem reduces to the maximum base problem of this new matroid. For the two special cases, two algorithms, in essence greedy algorithms on the original matroid, were presented. They were proved correct and, in terms of algorithmic complexity, more efficient than the algorithm presented by Ma Zhongfan.
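The greedy principle this abstract relies on can be sketched generically. The sketch below is a minimal illustration, not the paper's construction: the independence oracle here is a uniform matroid (any set of at most k elements is independent), which is an assumed stand-in for the new matroid the authors derive.

```python
def greedy_max_base(elements, weight, is_independent):
    """Greedy maximum-weight base: scan elements in order of decreasing
    weight and keep an element whenever the current set stays independent."""
    base = []
    for e in sorted(elements, key=weight, reverse=True):
        if is_independent(base + [e]):
            base.append(e)
    return base

# Toy example: uniform matroid U(3, 5) -- any set of size <= 3 is independent.
elements = [1, 2, 3, 4, 5]
weights = {1: 5.0, 2: 1.0, 3: 4.0, 4: 2.0, 5: 3.0}
base = greedy_max_base(elements, weights.get, lambda s: len(s) <= 3)
# keeps the three heaviest elements: [1, 3, 5]
```

For a genuine matroid, this greedy scheme is guaranteed to return a maximum-weight base, which is why reducing the k-limited problem to a maximum base problem makes it tractable.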
5 CFR 550.105 - Biweekly maximum earnings limitation.
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Biweekly maximum earnings limitation. 550.105 Section 550.105 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL) Premium Pay Maximum Earnings Limitations § 550.105 Biweekly...
5 CFR 550.106 - Annual maximum earnings limitation.
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Annual maximum earnings limitation. 550.106 Section 550.106 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY ADMINISTRATION (GENERAL) Premium Pay Maximum Earnings Limitations § 550.106 Annual...
Dynamical maximum entropy approach to flocking
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Feedback Limits to Maximum Seed Masses of Black Holes
Pacucci, Fabio; Natarajan, Priyamvada; Ferrara, Andrea
2017-02-01
The most massive black holes observed in the universe weigh up to ∼1010 M ⊙, nearly independent of redshift. Reaching these final masses likely required copious accretion and several major mergers. Employing a dynamical approach that rests on the role played by a new, relevant physical scale—the transition radius—we provide a theoretical calculation of the maximum mass achievable by a black hole seed that forms in an isolated halo, one that scarcely merged. Incorporating effects at the transition radius and their impact on the evolution of accretion in isolated halos, we are able to obtain new limits for permitted growth. We find that large black hole seeds (M • ≳ 104 M ⊙) hosted in small isolated halos (M h ≲ 109 M ⊙) accreting with relatively small radiative efficiencies (ɛ ≲ 0.1) grow optimally in these circumstances. Moreover, we show that the standard M •–σ relation observed at z ∼ 0 cannot be established in isolated halos at high-z, but requires the occurrence of mergers. Since the average limiting mass of black holes formed at z ≳ 10 is in the range 104–6 M ⊙, we expect to observe them in local galaxies as intermediate-mass black holes, when hosted in the rare halos that experienced only minor or no merging events. Such ancient black holes, formed in isolation with subsequent scant growth, could survive, almost unchanged, until present.
Radiation Pressure Acceleration: the factors limiting maximum attainable ion energy
Bulanov, S S; Schroeder, C B; Bulanov, S V; Esirkepov, T Zh; Kando, M; Pegoraro, F; Leemans, W P
2016-01-01
Radiation pressure acceleration (RPA) is a highly efficient mechanism of laser-driven ion acceleration, with near-complete transfer of the laser energy to the ions in the relativistic regime. However, there is a fundamental limit on the maximum attainable ion energy, which is determined by the group velocity of the laser. Tightly focused laser pulses have group velocities smaller than the vacuum light speed, and, since they offer the high intensity needed for the RPA regime, it is plausible that group-velocity effects would manifest themselves in experiments involving tightly focused pulses and thin foils. However, in this case, finite spot size effects are important, and another limiting factor, the transverse expansion of the target, may dominate over the group-velocity effect. As the laser pulse diffracts after passing the focus, the target expands accordingly due to the transverse intensity profile of the laser. Due to this expansion, the areal density of the target decreases, making it trans...
Noise and physical limits to maximum resolution of PET images
Herraiz, J.L.; Espana, S. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain); Vicente, E.; Vaquero, J.J.; Desco, M. [Unidad de Medicina y Cirugia Experimental, Hospital GU ' Gregorio Maranon' , E-28007 Madrid (Spain); Udias, J.M. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain)], E-mail: jose@nuc2.fis.ucm.es
2007-10-01
In this work we show that there is a limit to the maximum resolution achievable with a high-resolution PET scanner, as well as to the best signal-to-noise ratio. These limits are ultimately related to the physical effects involved in the emission and detection of the radiation, and thus cannot be overcome by any particular reconstruction method. These effects prevent the high spatial frequency components of the imaged structures from being recorded by the scanner, so the information encoded in these high frequencies cannot be recovered by any reconstruction technique. Within this framework, we have determined the maximum resolution achievable for a given acquisition as a function of data statistics and scanner parameters, such as the size of the crystals or the inter-crystal scatter. In particular, the noise level in the data is outlined as a factor limiting high-resolution imaging in tomographs with small crystal sizes. These results have implications for deciding the optimal number of voxels of the reconstructed image and for designing better PET scanners.
Maximum Likelihood Position Location with a Limited Number of References
D. Munoz-Rodriguez
2011-04-01
A Position Location (PL) scheme for mobile users on the outskirts of coverage areas is presented. The proposed methodology makes it possible to obtain location information with only two land-fixed references. We introduce a general formulation and show that maximum-likelihood estimation can provide adequate PL information in this scenario. The Root Mean Square (RMS) error and error-distribution characterization are obtained for different propagation scenarios. In addition, simulation results and comparisons to another method are provided, showing the accuracy and the robustness of the method proposed. We study accuracy limits of the proposed methodology for different propagation environments and show that even in the case of mismatch in the error variances, good PL estimation is feasible.
Concerning the maximum frequency limits of Gunn operators
Macpherson, R. F.; Dunn, G. M.; Khalid, Ata; Cumming, D. R. S.
2015-01-01
The length of the transit region of a Gunn diode determines the natural frequency at which it operates in fundamental mode: the shorter the device, the higher the frequency of operation. The long-held view on Gunn diode design is that for a functioning device the minimum length of the transit region is about 1.5 μm, limiting the devices to fundamental-mode operation at frequencies of roughly 60 GHz. The authors posit that this theoretical restriction is a consequence of the limits of the hydrodynamic models by which it was determined. Study of these devices by more advanced Monte Carlo techniques, which simulate the ballistic transport and electron-phonon interactions that govern device behaviour, offers a new lower bound of 0.5 μm, which is already being approached by the experimental evidence shown in planar and vertical devices exhibiting Gunn operation at 0.6 μm and 0.7 μm. It is shown that the limits for Gunn domain operation are determined by the device length required for the transferred electron effect to occur (approximately 0.15 μm, which as demonstrated is largely field independent) and the fundamental size of the domain (approximately 0.3 μm). At this new length, operation in fundamental mode at much higher frequencies becomes possible: the Monte Carlo model used predicts power output at frequencies over 300 GHz.
Improved Reliability of Single-Phase PV Inverters by Limiting the Maximum Feed-in Power
Yang, Yongheng; Wang, Huai; Blaabjerg, Frede
2014-01-01
... The CPG control strategy is activated only when the DC input power from the PV panels exceeds a specific power limit. It makes it possible to limit the maximum feed-in power to the electric grid and also to improve the utilization of PV inverters. As a further study, this paper investigates the reliability performance ... of the power devices (e.g. IGBTs) used in PV inverters with CPG control under different feed-in power limits. A long-term mission profile (i.e. solar irradiance and ambient temperature) based stress analysis approach is extended and applied to obtain the yearly electrical and thermal stresses of the power ...
75 FR 76482 - Federal Housing Administration (FHA): FHA Maximum Loan Limits for 2011
2010-12-08
... URBAN DEVELOPMENT Federal Housing Administration (FHA): FHA Maximum Loan Limits for 2011 AGENCY: Office...: This notice announces that FHA has posted on its Web site the single-family maximum loan limits for 2011. The loan limits can be found at...
A probabilistic approach to the concept of Probable Maximum Precipitation
Papalexiou, S. M.; D. Koutsoyiannis
2006-01-01
The concept of Probable Maximum Precipitation (PMP) is based on the assumptions that (a) there exists an upper physical limit of the precipitation depth over a given area at a particular geographical location at a certain time of year, and (b) that this limit can be estimated based on deterministic considerations. The most representative and widespread estimation method of PMP is the so-called moisture maximization method. This method maximizes observed storms assuming...
A New Detection Approach Based on the Maximum Entropy Model
DONG Xiaomei; XIANG Guang; YU Ge; LI Xiaohua
2006-01-01
The maximum entropy model was introduced and a new intrusion detection approach based on the maximum entropy model was proposed. The vector space model was adopted for data presentation. The minimal entropy partitioning method was utilized for attribute discretization. Experiments on the KDD CUP 1999 standard data set were designed and the experimental results were shown. The receiver operating characteristic (ROC) curve analysis approach was utilized to analyze the experimental results. The analysis results show that the proposed approach is comparable to those based on the support vector machine (SVM) and outperforms those based on C4.5 and Naive Bayes classifiers. According to the overall evaluation result, the proposed approach is a little better than those based on SVM.
PNNL: A Supervised Maximum Entropy Approach to Word Sense Disambiguation
Tratz, Stephen C.; Sanfilippo, Antonio P.; Gregory, Michelle L.; Chappell, Alan R.; Posse, Christian; Whitney, Paul D.
2007-06-23
In this paper, we describe the PNNL Word Sense Disambiguation system as applied to the English All-Words task in SemEval 2007. We use a supervised learning approach, employing a large number of features and using Information Gain for dimension reduction. Our Maximum Entropy approach combined with a rich set of features produced results that are significantly better than the baseline and the highest F-score for the fine-grained English All-Words subtask.
A Unified Maximum Likelihood Approach to Document Retrieval.
Bodoff, David; Enache, Daniel; Kambil, Ajit; Simon, Gary; Yukhimets, Alex
2001-01-01
Addresses the query- versus document-oriented dichotomy in information retrieval. Introduces a maximum likelihood approach to utilizing feedback data that can be used to construct a concrete object function that estimates both document and query parameters in accordance with all available feedback data. (AEF)
Triadic conceptual structure of the maximum entropy approach to evolution.
Herrmann-Pillath, Carsten; Salthe, Stanley N
2011-03-01
Many problems in evolutionary theory are cast in dyadic terms, such as the polar oppositions of organism and environment. We argue that a triadic conceptual structure offers an alternative perspective under which the information generating role of evolution as a physical process can be analyzed, and propose a new diagrammatic approach. Peirce's natural philosophy was deeply influenced by his reception of both Darwin's theory and thermodynamics. Thus, we elaborate on a new synthesis which puts together his theory of signs and modern Maximum Entropy approaches to evolution in a process discourse. Following recent contributions to the naturalization of Peircean semiosis, pointing towards 'physiosemiosis' or 'pansemiosis', we show that triadic structures involve the conjunction of three different kinds of causality: efficient, formal and final. In this, we accommodate the state-centered thermodynamic framework to a process approach. We apply this to Ulanowicz's analysis of autocatalytic cycles as primordial patterns of life. This paves the way for a semiotic view of thermodynamics which is built on the idea that Peircean interpretants are systems of physical inference devices evolving under natural selection. In this view, the principles of Maximum Entropy, Maximum Power, and Maximum Entropy Production work together to drive the emergence of information carrying structures, which at the same time maximize information capacity as well as the gradients of energy flows, such that ultimately, contrary to Schrödinger's seminal contribution, the evolutionary process is seen to be a physical expression of the Second Law.
Multitime maximum principle approach of minimal submanifolds and harmonic maps
Udriste, Constantin
2011-01-01
Some optimization problems arising in differential geometry, such as the minimal submanifold problem and the harmonic map problem, are solved here via interior solutions of appropriate multitime optimal control problems. Section 1 outlines some scientific domains where multitime optimal control problems appear. Section 2 (Section 3) recalls the multitime maximum principle for optimal control problems with multiple (curvilinear) integral cost functionals and $m$-flow type constraint evolution. Section 4 shows that there exists a multitime maximum principle approach to multitime variational calculus. Section 5 (Section 6) proves that minimal submanifolds (harmonic maps) are optimal solutions of multitime evolution PDEs in an appropriate multitime optimal control problem. Section 7 uses the multitime maximum principle to show that of all solids having a given surface area, the sphere is the one having the greatest volume. Section 8 studies the minimal area of a multitime linear flow as optimal c...
Collective behaviours in the stock market -- A maximum entropy approach
Bury, Thomas
2014-01-01
Scale invariance, collective behaviours and structural reorganization are crucial for portfolio management (portfolio composition, hedging, alternative definition of risk, etc.). This lack of any characteristic scale and such elaborated behaviours find their origin in the theory of complex systems. There are several mechanisms which generate scale invariance but maximum entropy models are able to explain both scale invariance and collective behaviours. The study of the structure and collective modes of financial markets attracts more and more attention. It has been shown that some agent based models are able to reproduce some stylized facts. Despite their partial success, there is still the problem of rules design. In this work, we used a statistical inverse approach to model the structure and co-movements in financial markets. Inverse models restrict the number of assumptions. We found that a pairwise maximum entropy model is consistent with the data and is able to describe the complex structure of financial...
MB Distribution and its application using maximum entropy approach
Bhadra Suman
2016-01-01
The Maxwell-Boltzmann distribution with a maximum entropy approach has been used to study the variation of political temperature and heat in a locality. We have observed that the political temperature rises without generating any political heat when political parties increase their attractiveness by intense publicity, but voters do not shift their loyalties. It has also been shown that political heat is generated and political entropy increases, with political temperature remaining constant, when parties do not change their attractiveness, but voters shift their loyalties (to more attractive parties).
A simple approach for maximum heat recovery calculations
Jezowski, J. (Wroclaw Technical Univ. (PL). Inst. of Chemical Engineering and Heating Equipment); Friedler, F. (Hungarian Academy of Sciences, Egyetem (HU). Research Inst. for Technical Chemistry)
1992-04-01
This paper addresses the problem of calculating the maximum heat energy recovery for a given set of process streams. Simple, straightforward calculation algorithms are presented that account for tasks with multiple utilities, forbidden matches and nonpoint utilities. A new way of applying the so-called dual-stream approach to reduce utility usage for tasks with forbidden matches is also given in this paper. The calculation methods do not require computer programs or mathematical programming. They give the user a proper insight into a problem, to understand heat integration as well as to recognize options and traps in heat exchanger network synthesis. (author).
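Maximum-heat-recovery targeting of the kind this abstract describes is commonly done by hand with a problem-table cascade. The sketch below is a generic pinch-analysis construct with made-up stream data, not the authors' specific algorithm (which additionally handles forbidden matches and nonpoint utilities).

```python
# Minimal problem-table cascade for minimum utility targets.
# Streams: (supply T, target T, heat capacity flow rate CP).
# Units are illustrative (degC, kW/K); the data are hypothetical.
DT_MIN = 10.0
hot_streams = [(180.0, 40.0, 2.0)]
cold_streams = [(30.0, 160.0, 1.5)]

def utility_targets(hot, cold, dt_min):
    # Shift hot streams down and cold streams up by dt_min/2 each,
    # so feasible matches appear as overlap at the same shifted T.
    shifted = [(ts - dt_min / 2, tt - dt_min / 2, cp, 'H') for ts, tt, cp in hot]
    shifted += [(ts + dt_min / 2, tt + dt_min / 2, cp, 'C') for ts, tt, cp in cold]
    bounds = sorted({t for s in shifted for t in s[:2]}, reverse=True)
    cascade, surplus = [0.0], 0.0
    for hi, lo in zip(bounds, bounds[1:]):
        # Net CP in this temperature interval: hot streams add heat,
        # cold streams absorb it.
        dcp = 0.0
        for ts, tt, cp, kind in shifted:
            if min(ts, tt) <= lo and max(ts, tt) >= hi:
                dcp += cp if kind == 'H' else -cp
        surplus += dcp * (hi - lo)
        cascade.append(surplus)
    q_hot_min = max(0.0, -min(cascade))    # minimum hot utility
    q_cold_min = q_hot_min + cascade[-1]   # minimum cold utility
    return q_hot_min, q_cold_min

q_hot, q_cold = utility_targets(hot_streams, cold_streams, DT_MIN)
# For this toy data the hot-utility target is 0 kW and 85 kW of excess
# heat must be rejected to cold utility.
```

The cascade is the "proper insight" step the authors emphasize: the most negative cumulative surplus fixes the minimum hot utility, and the pinch sits where the corrected cascade touches zero.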
Triadic Conceptual Structure of the Maximum Entropy Approach to Evolution
Herrmann-Pillath, Carsten
2010-01-01
Many problems in evolutionary theory are cast in dyadic terms, such as the polar oppositions of organism and environment. We argue that a triadic conceptual structure offers an alternative perspective under which the information generating role of evolution as a physical process can be analyzed, and propose a new diagrammatic approach. Peirce's natural philosophy was deeply influenced by his reception of both Darwin's theory and thermodynamics. Thus, we elaborate on a new synthesis which puts together his theory of signs and modern Maximum Entropy approaches to evolution. Following recent contributions to the naturalization of Peircean semiosis, we show that triadic structures involve the conjunction of three different kinds of causality, efficient, formal and final. We apply this on Ulanowicz's analysis of autocatalytic cycles as primordial patterns of life. This paves the way for a semiotic view of thermodynamics which is built on the idea that Peircean interpretants are systems of physical inference device...
Narrow band interference cancelation in OFDM: A structured maximum likelihood approach
Sohail, Muhammad Sadiq
2012-06-01
This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrow band interference (NBI) in a zero padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous with the frequency grid of the ZP-OFDM system. The proposed structure based technique uses the fact that the NBI signal is sparse as compared to the ZP-OFDM signal in the frequency domain. The structure is also useful in reducing the computational complexity of the proposed method. The paper also presents a data aided approach for improved NBI estimation. The suitability of the proposed method is demonstrated through simulations. © 2012 IEEE.
Pesticide food safety standards as companions to tolerances and maximum residue limits
Carl K Winter; Elizabeth A Jara
2015-01-01
Allowable levels for pesticide residues in foods, known as tolerances in the US and as maximum residue limits (MRLs) in much of the world, are widely yet inappropriately perceived as levels of safety concern. A novel approach to develop scientifically defensible levels of safety concern is presented, and an example to determine acute and chronic pesticide food safety standard (PFSS) levels for the fungicide captan on strawberries is provided. Using this approach, the chronic PFSS level for captan on strawberries was determined to be 2000 mg kg⁻¹ and the acute PFSS level was determined to be 250 mg kg⁻¹. Both levels are far above the existing tolerance and MRLs that commonly range from 3 to 20 mg kg⁻¹, and provide evidence that captan residues detected at levels greater than the tolerance or MRLs are not of acute or chronic health concern even though they represent violative residues. The benefits of developing the PFSS approach to serve as a companion to existing tolerances/MRLs include a greater understanding concerning the health significance, if any, from exposure to violative pesticide residues. In addition, the PFSS approach can be universally applied to all potential pesticide residues on all food commodities, can be modified by specific jurisdictions to take into account differences in food consumption practices, and can help prioritize food residue monitoring by identifying the pesticide/commodity combinations of the greatest potential food safety concern and guiding development of field-level analytical methods to detect pesticide residues on prioritized pesticide/commodity combinations.
5 CFR 630.302 - Maximum annual leave accumulation-forty-five day limitation.
2010-01-01
... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Maximum annual leave accumulation-forty-five day limitation. 630.302 Section 630.302 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS ABSENCE AND LEAVE Annual Leave § 630.302 Maximum annual leave...
Time-optimal excitation of maximum quantum coherence: Physical limits and pulse sequences
Köcher, S. S.; Heydenreich, T.; Zhang, Y.; Reddy, G. N. M.; Caldarelli, S.; Yuan, H.; Glaser, S. J.
2016-04-01
Here we study the optimum efficiency of the excitation of maximum quantum (MaxQ) coherence using analytical and numerical methods based on optimal control theory. The theoretical limit of the achievable MaxQ amplitude and the minimum time to achieve this limit are explored for a set of model systems consisting of up to five coupled spins. In addition to arbitrary pulse shapes, two simple pulse sequence families of practical interest are considered in the optimizations. Compared to conventional approaches, substantial gains were found both in terms of the achieved MaxQ amplitude and in pulse sequence durations. For a model system, theoretically predicted gains of a factor of three compared to the conventional pulse sequence were experimentally demonstrated. Motivated by the numerical results, two novel analytical transfer schemes were also found: compared to conventional approaches based on non-selective pulses and delays, double-quantum coherence in two-spin systems can be created twice as fast using isotropic mixing and hard spin-selective pulses. It is also proved that in a chain of three weakly coupled spins with the same coupling constants, triple-quantum coherence can be created in a time-optimal fashion using so-called geodesic pulses.
Delocalized Epidemics on Graphs: A Maximum Entropy Approach
Sahneh, Faryad Darabi; Scoglio, Caterina
2016-01-01
The susceptible-infected-susceptible (SIS) epidemic process on complex networks can show metastability, resembling an endemic equilibrium. In a general setting, the metastable state may involve a large portion of the network, or it can be localized on small subgraphs of the contact network. Localized infections are not interesting because a true outbreak concerns network-wide invasion of the contact graph rather than localized infection of certain sites within the contact network. Existing approaches to the localization phenomenon suffer from a major drawback: they fully rely on the steady-state solution of mean-field approximate models in the neighborhood of their phase transition point, where their approximation accuracy is worst, as statistical physics tells us. We propose a dispersion entropy measure that quantifies the localization of infections in a generic contact graph. Formulating a maximum entropy problem, we find an upper bound for the dispersion entropy of the possible metastable state in the exa...
Kim, K. M.; Smetana, P.
1990-03-01
Growth of large-diameter Czochralski (CZ) silicon crystals requires complete elimination of dislocations by means of the Dash technique, in which the seed diameter is reduced to a small size, typically 3 mm, in conjunction with an increase in the pull rate. The maximum length of a large CZ silicon crystal is estimated at the fracture stress limit of the seed neck diameter (d). The maximum lengths for 200 and 300 mm CZ crystals amount to 197 and 87 cm, respectively, with d = 0.3 cm; the estimated maximum weight is 144 kg.
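The quoted ~144 kg figure can be roughly reproduced from a simple force balance on the seed neck, m_max = σ_f (π d²/4) / g. The fracture stress used below (2×10⁸ Pa) is an assumed value of typical magnitude for a dislocation-free silicon neck, not a number taken from the paper.

```python
import math

# Rough check of the ~144 kg maximum crystal weight from a force
# balance on the seed neck: the neck cross-section must carry the
# full crystal weight in tension.
sigma_f = 2.0e8   # Pa, ASSUMED tensile fracture stress of the neck
d = 0.3e-2        # m, seed-neck diameter (0.3 cm, from the abstract)
g = 9.81          # m/s^2

area = math.pi * d**2 / 4.0          # neck cross-sectional area
m_max = sigma_f * area / g           # maximum supported mass
print(f"maximum supported weight ~ {m_max:.0f} kg")  # ~144 kg
```

With the assumed stress, the estimate lands within a kilogram of the abstract's value, which suggests the paper's limit is exactly this kind of tensile-load argument.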
Kordheili, Reza Ahmadi; Bak-Jensen, Birgitte; Pillai, Jayakrishnan Radhakrishna
2014-01-01
High penetration of photovoltaic panels in the distribution grid can bring the grid to its operation limits. The main focus of the paper is to determine the maximum photovoltaic penetration level in the grid. Three main criteria were investigated for determining the maximum penetration level of PV panels ... for this grid: even distribution of PV panels, aggregation of panels at the beginning of each feeder, and aggregation of panels at the end of each feeder. Load modeling is done using the Velander formula. Since PV generation is highest in the summer due to irradiation, a summer day was chosen to determine the maximum ...
Maximum principle and convergence of central schemes based on slope limiters
Mehmetoglu, Orhan
2012-01-01
A maximum principle and convergence of second order central schemes is proven for scalar conservation laws in dimension one. It is well known that to establish a maximum principle a nonlinear piecewise linear reconstruction is needed and a typical choice is the minmod limiter. Unfortunately, this implies that the scheme uses a first order reconstruction at local extrema. The novelty here is that we allow local nonlinear reconstructions which do not reduce to first order at local extrema and still prove maximum principle and convergence. © 2011 American Mathematical Society.
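The minmod limiter mentioned in this abstract has a compact definition. The sketch below, with illustrative data, shows the behaviour the paper works around: a minmod-based reconstruction drops to zero slope (first order) at local extrema.

```python
def minmod(a, b):
    """Minmod slope limiter: returns the smaller-magnitude argument
    when a and b share a sign, and 0 otherwise."""
    if a > 0 and b > 0:
        return min(a, b)
    if a < 0 and b < 0:
        return max(a, b)
    return 0.0

def limited_slopes(u):
    """Piecewise-linear reconstruction slopes from cell averages u,
    using minmod of the backward and forward differences."""
    return [0.0] + [minmod(u[i] - u[i - 1], u[i + 1] - u[i])
                    for i in range(1, len(u) - 1)] + [0.0]

u = [0.0, 1.0, 3.0, 2.0, 2.0]
slopes = limited_slopes(u)
# At the local maximum (u = 3.0) the backward and forward differences
# have opposite signs, so minmod returns 0 and the reconstruction is
# first order there: slopes == [0.0, 1.0, 0.0, 0.0, 0.0]
```

Keeping slopes inside the minmod bounds is what enforces the maximum principle for the central scheme; the paper's contribution is proving the same principle for reconstructions that do not collapse to zero slope at extrema.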
Rannama, Indrek; Port, Kristjan; Bazanov, Boriss
2012-01-01
Maximum gears for youth-category riders are limited. As a result, youth-category riders are regularly compelled to ride in a high-cadence regime. The aim of this study was to investigate whether regular work in a high-cadence regime, due to the limited transmission of youth-category riders, is reflected in the effective cadence at the point of maximal power generation during a 10-second sprint effort. The average maximal peak power of 24 junior and youth national team cyclists at various cadence regimes was registere...
Evaluating the time limit at maximum aerobic speed in elite swimmers. Training implications.
Renoux, J C
2001-12-01
The aim of the present study was to make use of the concepts of maximum aerobic speed (MAS) and time limit (tlim) in order to determine the relationship between these two elements, in an attempt to significantly improve both speed and swimming performance during a training season. To this end, an intermittent training model was used, adapted to the value obtained for the time limit at maximum aerobic speed. During a 12-week training period, the maximum aerobic speed of a group of 9 top-ranking varsity swimmers was measured on two occasions, as was the tlim. The values generated indicated that: 1) there was an inverse relationship between MAS and the time this speed could be maintained, thus confirming the studies by Billat et al. (1994b); 2) a significant increase in MAS occurred over the 12-week period, although no such evolution was seen for the tlim; 3) there was an improvement in results; 4) the time limit could be used in designing a training program based on intermittent exercises. In addition, results of the present study should allow swimming coaches to draw up individualized training programs for a given swimmer by taking into consideration maximum aerobic speed, time limit and propelling efficiency.
Adaptive Statistical Language Modeling; A Maximum Entropy Approach
1994-04-19
... recognition systems were built that could recognize vowels or digits, but they could not be successfully extended to handle more realistic language ... maximum likelihood of generating the training data. The identity of the ML and ME solutions, apart from being aesthetically pleasing, is extremely ...
Osterloh, Frank E
2014-10-02
The Shockley-Queisser analysis provides a theoretical limit for the maximum energy conversion efficiency of single junction photovoltaic cells. But besides the semiconductor bandgap no other semiconductor properties are considered in the analysis. Here, we show that the maximum conversion efficiency is limited further by the excited state entropy of the semiconductors. The entropy loss can be estimated with the modified Sackur-Tetrode equation as a function of the curvature of the bands, the degeneracy of states near the band edges, the illumination intensity, the temperature, and the band gap. The application of the second law of thermodynamics to semiconductors provides a simple explanation for the observed high performance of group IV, III-V, and II-VI materials with strong covalent bonding and for the lower efficiency of transition metal oxides containing weakly interacting metal d orbitals. The model also predicts efficient energy conversion with quantum confined and molecular structures in the presence of a light harvesting mechanism.
Maximum Growth Potential and Periods of Resource Limitation in Apple Tree.
Reyes, Francesco; DeJong, Theodore; Franceschi, Pietro; Tagliavini, Massimo; Gianelle, Damiano
2016-01-01
Knowledge of seasonal maximum potential growth rates is important for assessing periods of resource limitation in fruit tree species. In this study we assessed the periods of resource limitation for vegetative (current-year stems and woody biomass) and reproductive (fruit) organs of a major agricultural crop: the apple tree. This was done by comparing relative growth rates (RGRs) of individual organs in trees with reduced competition for resources to those in trees grown under standard field conditions. Special attention was dedicated to disentangling patterns and values of maximum potential growth for each organ type. The period of resource limitation for vegetative growth was much longer than in another fruit tree species (peach): from late May until harvest. Two periods of resource limitation were highlighted for fruit: from the beginning of the season until mid-June, and about 1 month prior to harvest. By investigating the variability in individual organ growth we identified substantial differences in RGRs among different shoot categories (proleptic and epicormic) and within each group of monitored organs. Compared to the use of simple compartmental means, qualitatively different and more accurate values of growth rates for vegetative organs were estimated. Detailed, source-sink based tree growth models, commonly in need of fine parameter tuning, are expected to benefit from the results produced by these analyses.
Iammarino, Marco; Di Taranto, Aurelia; Muscarella, Marilena
2012-02-01
Sulphiting agents are commonly used food additives, but they are not allowed in fresh meat preparations. In this work, 2250 fresh meat samples were analysed to establish the maximum concentration of sulphites that can be considered "natural" and therefore be admitted in fresh meat preparations. The analyses were carried out by an optimised Monier-Williams method, and the positive samples were confirmed by ion chromatography. Sulphite concentrations higher than the screening method LOQ (10.0 mg·kg⁻¹) were found in 100 samples. Concentrations higher than 76.6 mg·kg⁻¹, attributable to sulphiting agent addition, were registered in 40 samples. Concentrations lower than 41.3 mg·kg⁻¹ were registered in 60 samples. Taking into account the distribution of sulphite concentrations obtained, it is plausible to estimate a maximum allowable limit of 40.0 mg·kg⁻¹ (expressed as SO₂), below which samples can be considered compliant.
A Maximum Likelihood Approach to Least Absolute Deviation Regression
Yinbo Li
2004-09-01
Least absolute deviation (LAD) regression is an important tool used in numerous applications throughout science and engineering, mainly due to its intrinsic robustness. In this paper, we show that the optimization needed to solve the LAD regression problem can be viewed as a sequence of maximum likelihood estimates (MLE) of location. The derived algorithm reduces to an iterative procedure in which a simple coordinate transformation is applied during each iteration to direct the optimization along edge lines of the cost surface, followed by an MLE of location executed by a weighted median operation. Requiring only weighted medians, the new algorithm can be easily modularized for hardware implementation, as opposed to most existing LAD methods, which require complicated operations such as matrix entry manipulations. One exception is Wesolowsky's direct descent algorithm, which among the top algorithms is also based on weighted median operations. Simulations show that the new algorithm is superior in speed to Wesolowsky's algorithm, which is also simple in structure. The new algorithm provides a better trade-off between convergence speed and implementation complexity.
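The building block of the algorithm described above is the weighted median, which solves each one-dimensional MLE-of-location step under Laplace noise. A minimal Python sketch (an illustration, not the authors' implementation) showing that the weighted median minimizes the weighted absolute-deviation cost:

```python
def weighted_median(values, weights):
    # Sort the points by value, then return the first value at which the
    # cumulative weight reaches half of the total weight.
    pairs = sorted(zip(values, weights))
    total = sum(weights)
    acc = 0.0
    for v, w in pairs:
        acc += w
        if acc >= total / 2:
            return v

def lad_cost(theta, values, weights):
    # Weighted least-absolute-deviation cost of a location estimate theta.
    return sum(w * abs(v - theta) for v, w in zip(values, weights))

values = [1.0, 2.0, 4.0, 7.0]
weights = [1.0, 3.0, 1.0, 1.0]
m = weighted_median(values, weights)  # -> 2.0
# An optimal LAD location always lies on a sample point, so a scan
# over the samples confirms the weighted median is a minimizer.
best = min(values, key=lambda v: lad_cost(v, values, weights))
```

In the full regression algorithm this primitive is applied repeatedly after the coordinate transformation described in the abstract.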
Scaling of wingbeat frequency with body mass in bats and limits to maximum bat size.
Norberg, Ulla M Lindhe; Norberg, R Åke
2012-03-01
The ability to fly opens up ecological opportunities, but flight mechanics and muscle energetics impose constraints, one of which is that the maximum body size must be kept below a rather low limit. The muscle power available for flight increases in proportion to flight muscle mass and wingbeat frequency. The maximum wingbeat frequency attainable among increasingly large animals decreases faster than the minimum frequency required, so eventually they coincide, thereby defining the maximum body mass at which the available power just matches the power required for sustained aerobic flight. Here, we report new wingbeat frequency data for 27 morphologically diverse bat species representing nine families, and additional data from the literature for another 38 species, together spanning a range from 2.0 to 870 g. For these species, wingbeat frequency decreases with increasing body mass as M_b^(-0.26). We filmed 25 of our 27 species in free flight outdoors, and for these the wingbeat frequency varies as M_b^(-0.30). These exponents are strikingly similar to the body mass dependency M_b^(-0.27) among birds, but the wingbeat frequency is higher in birds than in bats for any given body mass. The downstroke muscle mass is also a larger proportion of the body mass in birds. We applied these empirically based scaling functions for wingbeat frequency in bats to biomechanical theories about how the power required for flight and the power available converge as animal size increases. To this end we estimated the muscle mass-specific power required for the largest flying extant bird (12-16 kg) and assumed that the largest potential bat would exert similar muscle mass-specific power. Given the observed scaling of wingbeat frequency and the proportion of the body mass that is made up by flight muscles in birds and bats, we estimated the maximum potential body mass for bats to be 1.1-2.3 kg. The largest bats, extinct or extant, weigh 1.6 kg. This is within the range expected if it...
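As a worked illustration of the reported allometry (wingbeat frequency scaling as M_b^(-0.26)), the sketch below predicts frequency relative to a hypothetical reference bat; the reference mass and frequency are assumed placeholder values, not data from the study:

```python
def wingbeat_frequency(mass_g, f_ref_hz=10.0, m_ref_g=10.0, exponent=-0.26):
    # Allometric scaling f = f_ref * (M / M_ref)^exponent, with the
    # exponent -0.26 reported for the pooled 65 bat species.  f_ref and
    # m_ref are hypothetical reference values for illustration only.
    return f_ref_hz * (mass_g / m_ref_g) ** exponent

f_small = wingbeat_frequency(2.0)    # small bat: higher frequency
f_large = wingbeat_frequency(870.0)  # large bat: lower frequency
```

Halving body mass raises the predicted frequency by a factor of 2^0.26 (about 20%), which is the sense in which large size caps attainable wingbeat frequency.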
Approaching the Limit of Predictability in Human Mobility
Lu, Xin; Wetter, Erik; Bharti, Nita; Tatem, Andrew J.; Bengtsson, Linus
2013-10-01
In this study we analyze the travel patterns of 500,000 individuals in Cote d'Ivoire using mobile phone call data records. By measuring the uncertainties of movements using entropy, considering both the frequencies and temporal correlations of individual trajectories, we find that the theoretical maximum predictability is as high as 88%. To verify whether such a theoretical limit can be approached, we implement a series of Markov chain (MC) based models to predict the actual locations visited by each user. Results show that MC models can produce a prediction accuracy of 87% for stationary trajectories and 95% for non-stationary trajectories. Our findings indicate that human mobility is highly dependent on historical behaviors, and that the maximum predictability is not only a fundamental theoretical limit for potential predictive power, but also an approachable target for actual prediction accuracy.
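A first-order Markov chain predictor of the kind evaluated in the study can be sketched as follows (a toy illustration with made-up locations, not the authors' code):

```python
from collections import Counter, defaultdict

def train_markov(trajectory):
    # Count first-order transitions between consecutively visited locations.
    counts = defaultdict(Counter)
    for prev, cur in zip(trajectory, trajectory[1:]):
        counts[prev][cur] += 1
    return counts

def predict_next(counts, current):
    # Predict the most frequently observed successor of the current location.
    if current not in counts:
        return None  # unseen location: no prediction possible
    return counts[current].most_common(1)[0][0]

# Toy trajectory with made-up locations (illustration only).
traj = ["home", "work", "home", "work", "home", "gym", "home", "work"]
model = train_markov(traj)
```

Prediction accuracy is then simply the fraction of held-out visits for which `predict_next` returns the location actually visited; the study's higher-order variants condition on longer location histories in the same way.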
Seymour, Roger S
2010-09-01
The effect of the size of inflorescences, flowers and cones on the maximum rate of heat production is analysed allometrically in 23 species of thermogenic plants having diverse structures and ranging between 1.8 and 600 g. Total respiration rate (µmol s⁻¹) scales with spadix mass (M, g) in 15 species of Araceae. Thermal conductance (C, mW °C⁻¹) for spadices scales according to C = 18.5M^(0.73). Mass does not significantly affect the difference between floral and air temperature. Aroids with exposed appendices of high surface area have high thermal conductance, consistent with the need to vaporize attractive scents. True flowers have significantly lower heat production and thermal conductance, because closed petals retain heat that benefits resident insects. The florets on aroid spadices, whether within a floral chamber or spathe, have intermediate thermal conductance, consistent with mixed roles. Mass-specific rates of respiration are variable between species, but reach 900 nmol s⁻¹ g⁻¹ in aroid male florets, exceeding the rates of all other plants and even most animals. Maximum mass-specific respiration appears to be limited by oxygen delivery through individual cells. Reducing mass-specific respiration may be one selective influence on the evolution of the large size of thermogenic flowers.
Dhara, Chirag; Kleidon, Axel
2015-01-01
Convective and radiative cooling are the two principal mechanisms by which the Earth's surface transfers heat into the atmosphere and that shape surface temperature. However, this partitioning is not sufficiently constrained by energy and mass balances alone. We use a simple energy balance model in which convective fluxes and surface temperatures are determined with the additional thermodynamic limit of maximum convective power. We then show that the broad geographic variation of heat fluxes and surface temperatures in the climatological mean compares very well with the ERA-Interim reanalysis over land and ocean. We also show that the estimates depend considerably on the formulation of longwave radiative transfer and that a spatially uniform offset is related to the assumed cold temperature sink at which the heat engine operates.
Limit Distribution Theory for Maximum Likelihood Estimation of a Log-Concave Density.
Balabdaoui, Fadoua; Rufibach, Kaspar; Wellner, Jon A
2009-06-01
We find limiting distributions of the nonparametric maximum likelihood estimator (MLE) of a log-concave density, i.e. a density of the form f_0 = exp(φ_0), where φ_0 is a concave function on ℝ. Existence, form, characterizations and uniform rates of convergence of the MLE are given by Rufibach (2006) and Dümbgen and Rufibach (2007). The characterization of the log-concave MLE in terms of distribution functions is the same (up to sign) as the characterization of the least squares estimator of a convex density on [0, ∞) as studied by Groeneboom, Jongbloed and Wellner (2001b). We use this connection to show that the limiting distributions of the MLE and its derivative are, under comparable smoothness assumptions, the same (up to sign) as in the convex density estimation problem. In particular, changing the smoothness assumptions of Groeneboom, Jongbloed and Wellner (2001b) slightly by allowing some higher derivatives to vanish at the point of interest, we find that the pointwise limiting distributions depend on the second and third derivatives at 0 of H_k, the "lower invelope" of an integrated Brownian motion process minus a drift term depending on the number of vanishing derivatives of φ_0 = log f_0 at the point of interest. We also establish the limiting distribution of the resulting estimator of the mode M(f_0) and establish a new local asymptotic minimax lower bound which shows the optimality of our mode estimator in terms of both rate of convergence and dependence of constants on population values.
Marczak Monika
2015-09-01
The aim of this study was to determine the correlation between lipophilicity and the maximum residue limit (MRL) value specified for veterinary drugs in the fatty tissue of various animal species. The analysis was performed on a group of 73 compounds with different modes of action and MRL values determined for the fatty tissue of animals. Additionally, the logarithm of the water/organic phase partition ratio (LogP) and the ratio of ionised and unionised substance in buffer at pH 7.4 (LogD7.4) were calculated. The main analysis was performed after dividing the whole group into six fractions. Linear correlation and regression analysis were used to determine the indirect relationship between the arithmetic mean of LogP or LogD7.4 in selected fractions and the related LogMRL of the drugs tested. The calculations revealed a linear correlation between fractioned lipophilicity and LogMRL values for the analysed compounds, confirming the existence of an indirect relationship between lipophilicity and the MRL values determined for fatty tissue.
Maximum Parsimony and the Skewness Test: A Simulation Study of the Limits of Applicability
Määttä, Jussi; Roos, Teemu
2016-01-01
The maximum parsimony (MP) method for inferring phylogenies is widely used, but little is known about its limitations in non-asymptotic situations. This study employs large-scale computations with simulated phylogenetic data to estimate the probability that MP succeeds in finding the true phylogeny for up to twelve taxa and 256 characters. The candidate phylogenies are taken to be unrooted binary trees; for each simulated data set, the tree lengths of all (2n − 5)!! candidates are computed to evaluate quantities related to the performance of MP, such as the probability of finding the true phylogeny, the probability that the tree with the shortest length is unique, the probability that the true phylogeny has the shortest tree length, and the expected inverse of the number of trees sharing the shortest length. The tree length distributions are also used to evaluate and extend the skewness test of Hillis for distinguishing between random and phylogenetic data. The results indicate, for example, that the critical point after which MP achieves a success probability of at least 0.9 is roughly around 128 characters. The skewness test is found to perform well on simulated data, and the study extends its scope to up to twelve taxa. PMID:27035667
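Tree length under MP is the minimum number of character changes a tree requires; per character it can be computed with Fitch's algorithm, a standard method sketched here (not necessarily the implementation used in the study):

```python
def fitch(node):
    # Fitch's small-parsimony pass for one character on a rooted binary
    # tree given as nested 2-tuples, with character states at the leaves.
    # Returns (candidate_state_set, minimum_number_of_changes).
    if isinstance(node, str):
        return {node}, 0
    left, right = node
    ls, lc = fitch(left)
    rs, rc = fitch(right)
    common = ls & rs
    if common:
        return common, lc + rc
    return ls | rs, lc + rc + 1  # disjoint child sets force one extra change

# ((A,A),(C,G)) needs two changes: one inside (C,G) and one at the root.
states, length = fitch((("A", "A"), ("C", "G")))
```

Summing this length over all characters gives the tree length whose distribution over candidate topologies is analysed in the abstract above.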
The Betz-Joukowsky limit for the maximum power coefficient of wind turbines
Okulov, Valery; van Kuik, G.A.M.
2009-01-01
The article addresses the history of an important scientific result in wind energy. The maximum efficiency of an ideal wind turbine rotor is well known as the 'Betz limit', named after the German scientist who formulated this maximum in 1920. Also Lanchester, a British scientist, is associated...
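The Betz limit itself follows from one-dimensional momentum theory: an ideal actuator disc extracts power with coefficient C_P = 4a(1 − a)^2, where a is the axial induction factor, which peaks at a = 1/3 with C_P = 16/27 ≈ 0.593. A short numerical check:

```python
# Momentum theory for an ideal actuator disc: with axial induction
# factor a, the power coefficient is C_P(a) = 4a(1 - a)^2.
def power_coefficient(a):
    return 4.0 * a * (1.0 - a) ** 2

# Locate the maximum numerically on a fine grid of induction factors.
grid = [i / 10000.0 for i in range(1, 10000)]
best_a = max(grid, key=power_coefficient)
betz = power_coefficient(best_a)  # approaches 16/27 at a = 1/3
```

The grid search recovers the analytic optimum: no ideal rotor can capture more than 16/27 of the kinetic energy flux through its disc.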
Gutierrez-Jurado, H. A.; Guan, H.; Wang, J.; Wang, H.; Bras, R. L.; Simmons, C. T.
2015-12-01
Quantification of evapotranspiration (ET) and its partitioning over regions of heterogeneous topography and canopy poses a challenge to traditional approaches. In this study, we report the results of a novel field experiment design guided by the Maximum Entropy Production model of ET (MEP-ET), formulated for estimating evaporation and transpiration from homogeneous soil and canopy. A catchment with complex terrain and patchy vegetation in South Australia was instrumented to measure temperature, humidity and net radiation at soil and canopy surfaces. The performance of the MEP-ET model in quantifying transpiration and soil evaporation was evaluated during wet and dry conditions against independently and directly measured transpiration from sapflow and soil evaporation from the Bowen Ratio Energy Balance (BREB). MEP-ET transpiration shows remarkable agreement with that obtained through sapflow measurements during wet conditions, but consistently overestimates the flux during dry periods. However, an additional term introduced to the original MEP-ET model to account for stronger stomatal regulation during dry spells, based on differences between leaf and air vapor pressure deficits and temperatures, significantly improves the model performance. On the other hand, MEP-ET soil evaporation is in good agreement with that from BREB regardless of moisture conditions. The experimental design allows quantification of evaporation at the plot scale and transpiration at the tree scale. This study confirms for the first time that MEP-ET, originally developed for homogeneous open bare soil and closed canopy, can be used for modeling ET over heterogeneous land surfaces. Furthermore, we show that with the addition of an empirical function simulating the plants' ability to regulate transpiration, based on the same measurements of temperature and humidity, the method can produce reliable estimates of ET during both wet and dry conditions without compromising its parsimony.
Reppert, Michael; Tokmakoff, Andrei
The structural characterization of intrinsically disordered peptides (IDPs) presents a challenging biophysical problem. Extreme heterogeneity and rapid conformational interconversion make traditional methods difficult to interpret. Due to its ultrafast (ps) shutter speed, Amide I vibrational spectroscopy has received considerable interest as a novel technique to probe IDP structure and dynamics. Historically, Amide I spectroscopy has been limited to delivering global secondary-structure information. More recently, however, the method has been adapted to study structure at the local level through the incorporation of isotope labels into the protein backbone at specific amide bonds. Thanks to the acute sensitivity of Amide I frequencies to local electrostatic interactions, particularly hydrogen bonds, spectroscopic data on isotope-labeled residues directly report on local peptide conformation. Quantitative information can be extracted using electrostatic frequency maps, which translate molecular dynamics trajectories into Amide I spectra for comparison with experiment. Here we present our recent efforts in the development of a rigorous approach to incorporating Amide I spectroscopic restraints into refined molecular dynamics structural ensembles using maximum entropy and related approaches. By combining force field predictions with experimental spectroscopic data, we construct refined structural ensembles for a family of short, strongly disordered, elastin-like peptides in aqueous solution.
Maximum detection range limitation of pulse laser radar with Geiger-mode avalanche photodiode array
Luo, Hanjun; Xu, Benlian; Xu, Huigang; Chen, Jingbo; Fu, Yadan
2015-05-01
When designing and evaluating the performance of a laser radar system, the maximum achievable detection range is an essential parameter. The purpose of this paper is to propose a theoretical model of maximum detection range for simulating the ranging performance of Geiger-mode laser radar. Based on the laser radar equation and the requirement of a minimum acceptable detection probability, and assuming the primary electrons triggered by the echo photons obey Poisson statistics, the maximum-range theoretical model is established. Using the system design parameters, the influence of five main factors on the maximum detection range is investigated: emitted pulse energy, noise, echo position, atmospheric attenuation coefficient, and target reflectivity. The results show that stronger emitted pulse energy, a lower noise level, a more forward echo position in the range gate, a lower atmospheric attenuation coefficient, and higher target reflectivity result in a greater maximum detection range. It is also shown that the choice of the minimum acceptable detection probability, which is equivalent to the system signal-to-noise ratio, is important for producing a greater maximum detection range and a lower false-alarm probability.
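The interplay of the factors above can be sketched with a toy model: under Poisson primary-electron statistics, the probability that a Geiger-mode APD fires at least once in the range gate is 1 − exp(−(n_s + n_b)), and the maximum range is the largest range at which this stays above the minimum acceptable detection probability. The lumped constant `k` and the other parameter values below are illustrative assumptions, not the paper's system values:

```python
import math

def detection_probability(n_signal, n_noise):
    # Poisson statistics: probability that at least one primary electron
    # (signal or noise) is generated within the range gate.
    return 1.0 - math.exp(-(n_signal + n_noise))

def mean_signal_electrons(r, k, atten):
    # Toy laser-radar equation: mean signal electrons fall off as 1/r^2
    # with two-way atmospheric attenuation exp(-2*atten*r).  k lumps
    # pulse energy, optics, target reflectivity and detector efficiency.
    return k * math.exp(-2.0 * atten * r) / r ** 2

def max_detection_range(k, atten, n_noise, p_min, step=1.0):
    # Walk outward in range until detection probability drops below p_min.
    r = step
    while detection_probability(mean_signal_electrons(r, k, atten), n_noise) >= p_min:
        r += step
    return r - step

# Stronger pulses extend the range; heavier attenuation shortens it.
r_base = max_detection_range(k=1e7, atten=1e-4, n_noise=0.01, p_min=0.9)
r_strong = max_detection_range(k=2e7, atten=1e-4, n_noise=0.01, p_min=0.9)
r_hazy = max_detection_range(k=1e7, atten=5e-4, n_noise=0.01, p_min=0.9)
```

This reproduces the qualitative trends reported in the abstract without attempting to match any quantitative result.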
Janská, Veronika; Jiménez-Alfaro, Borja; Chytrý, Milan; Divíšek, Jan; Anenkhonov, Oleg; Korolyuk, Andrey; Lashchinskyi, Nikolai; Culek, Martin
2017-03-01
We modelled the European distribution of vegetation types at the Last Glacial Maximum (LGM) using present-day data from Siberia, a region hypothesized to be a modern analogue of European glacial climate. Distribution models were calibrated with current climate using 6274 vegetation-plot records surveyed in Siberia. Out of 22 initially used vegetation types, good or moderately good models in terms of statistical validation and expert-based evaluation were computed for 18 types, which were then projected to European climate at the LGM. The resulting distributions were generally consistent with reconstructions based on pollen records and dynamic vegetation models. Spatial predictions were most reliable for steppe, forest-steppe, taiga, tundra, fens and bogs in eastern and central Europe, which had LGM climate more similar to present-day Siberia. The models for western and southern Europe, regions with a lower degree of climatic analogy, were only reliable for mires and steppe vegetation, respectively. Modelling LGM vegetation types for the wetter and warmer regions of Europe would therefore require gathering calibration data from outside Siberia. Our approach adds value to the reconstruction of vegetation at the LGM, which is limited by scarcity of pollen and macrofossil data, suggesting where specific habitats could have occurred. Despite the uncertainties of climatic extrapolations and the difficulty of validating the projections for vegetation types, the integration of palaeodistribution modelling with other approaches has a great potential for improving our understanding of biodiversity patterns during the LGM.
Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors
Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi
2013-01-01
Estimation schemes for Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio... The estimation performance is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol.
Approaching the Landauer limit via nanomechanical resonators
Wenzler, Josef-Stefan
... on the order of ~10^4 kT, just two orders of magnitude higher than the energies involved in DNA polymerase (20-100 kT) and approaching the VNL limit to within a factor of 10,000. In addition to being promising candidates for testing the VNL principle for the first time, reversible nanomechanical logic gates could play a crucial role in developing highly efficient reversible computers, with implications for efficient error correction and quantum computing.
Quantum cryptography approaching the classical limit.
Weedbrook, Christian; Pirandola, Stefano; Lloyd, Seth; Ralph, Timothy C
2010-09-10
We consider the security of continuous-variable quantum cryptography as we approach the classical limit, i.e., when the unknown preparation noise at the sender's station becomes significantly noisy or thermal (even by as much as 10^4 times greater than the variance of the vacuum mode). We show that, provided the channel transmission losses do not exceed 50%, the security of quantum cryptography is not dependent on the channel transmission, and is therefore incredibly robust against significant amounts of excess preparation noise. We extend these results to consider for the first time quantum cryptography at wavelengths considerably longer than optical and find that regions of security still exist all the way down to the microwave.
Curiale, Ariel H; Vegas-Sánchez-Ferrero, Gonzalo; Bosch, Johan G; Aja-Fernández, Santiago
2015-08-01
Strain and strain-rate measures are commonly used for the analysis and assessment of regional myocardial function. In echocardiography (EC), strain analysis became possible using Tissue Doppler Imaging (TDI). Unfortunately, this modality shows an important limitation: the angle between the myocardial movement and the ultrasound beam should be small to provide reliable measures, a constraint that makes it difficult to provide strain measures of the entire myocardium. Alternative non-Doppler techniques such as Speckle Tracking (ST) can provide strain measures without angle constraints. However, the spatial resolution and noisy appearance of speckle still make strain estimation a challenging task in EC. Several maximum likelihood approaches have been proposed to statistically characterize the behavior of speckle, which results in better performance of speckle tracking. However, those models do not consider common transformations used to obtain the final B-mode image (e.g. interpolation). This paper proposes a new maximum likelihood approach for speckle tracking which effectively characterizes the speckle of the final B-mode image. Its formulation provides a diffeomorphic scheme that can be efficiently optimized with a second-order method. The novelty of the method is threefold: first, the statistical characterization of speckle generalizes conventional speckle models (Rayleigh, Nakagami and Gamma) to a more versatile model for real data; second, the formulation includes local correlation to increase the efficiency of frame-to-frame speckle tracking; third, a probabilistic myocardial tissue characterization is used to automatically identify more reliable myocardial motions. Accuracy and agreement were evaluated on a set of 16 synthetic image sequences for three different scenarios: normal, acute ischemia and acute dyssynchrony. The proposed method was compared to six speckle tracking methods. Results revealed that the proposed method is the most...
Maximum likelihood approach to “informed” Sound Source Localization for Hearing Aid applications
Farmani, Mojtaba; Pedersen, Michael Syskind; Tan, Zheng-Hua
2015-01-01
...-free sound signal of the target talker at the HAS via the wireless connection. Therefore, in this paper, we propose a maximum likelihood (ML) approach, which we call MLSSL, to estimate the Direction of Arrival (DoA) of the target signal given access to the target signal content. Compared with other "informed...
Shurtleff, Amy C; Garza, Nicole; Lackemeyer, Matthew; Carrion, Ricardo; Griffiths, Anthony; Patterson, Jean; Edwin, Samuel S; Bavari, Sina
2012-12-01
We describe herein, limitations on research at biosafety level 4 (BSL-4) containment laboratories, with regard to biosecurity regulations, safety considerations, research space limitations, and physical constraints in executing experimental procedures. These limitations can severely impact the number of collaborations and size of research projects investigating microbial pathogens of biodefense concern. Acquisition, use, storage, and transfer of biological select agents and toxins (BSAT) are highly regulated due to their potential to pose a severe threat to public health and safety. All federal, state, city, and local regulations must be followed to obtain and maintain registration for the institution to conduct research involving BSAT. These include initial screening and continuous monitoring of personnel, controlled access to containment laboratories, accurate and current BSAT inventory records. Safety considerations are paramount in BSL-4 containment laboratories while considering the types of research tools, workflow and time required for conducting both in vivo and in vitro experiments in limited space. Required use of a positive-pressure encapsulating suit imposes tremendous physical limitations on the researcher. Successful mitigation of these constraints requires additional time, effort, good communication, and creative solutions. Test and evaluation of novel vaccines and therapeutics conducted under good laboratory practice (GLP) conditions for FDA approval are prioritized and frequently share the same physical space with important ongoing basic research studies. The possibilities and limitations of biomedical research involving microbial pathogens of biodefense concern in BSL-4 containment laboratories are explored in this review.
Maximum acceptable inherent buoyancy limit for aircrew/passenger helicopter immersion suit systems.
Brooks, C J
1988-12-01
Helicopter crews and passengers flying over cold water wear immersion suits to provide hypothermic protection in case of ditching in cold water. The suits and linings have air trapped in the material to provide the necessary insulation and are thus very buoyant. Paradoxically, this buoyancy may be too much for a survivor to overcome when escaping from the cabin of a rapidly sinking inverted helicopter. The Canadian General Standards Board requested that research be conducted to investigate the maximum inherent buoyancy in an immersion suit that would not inhibit escape yet would provide adequate thermal insulation. This experiment reports on 12 subjects who safely escaped with 146 N (33 lbf) of added buoyancy from a helicopter underwater escape trainer. It discusses the logic for, and the recommendation that, the inherent buoyancy of a helicopter crew/passenger immersion suit system should not exceed this figure.
Maximum precision closed-form solution for localizing diffraction-limited spots in noisy images.
Larkin, Joshua D; Cook, Peter R
2012-07-30
Super-resolution techniques like PALM and STORM require accurate localization of single fluorophores detected using a CCD. Popular localization algorithms inefficiently assume each photon registered by a pixel can only come from an area in the specimen corresponding to that pixel (not from neighboring areas), before iteratively (slowly) fitting a Gaussian to pixel intensity; they fail with noisy images. We present an alternative; a probability distribution extending over many pixels is assigned to each photon, and independent distributions are joined to describe emitter location. We compare algorithms, and recommend which serves best under different conditions. At low signal-to-noise ratios, ours is 2-fold more precise than others, and 2 orders of magnitude faster; at high ratios, it closely approximates the maximum likelihood estimate.
Yudong Zhang
2011-04-01
This paper proposes a global multi-level thresholding method for image segmentation. As a criterion, the traditional method uses the Shannon entropy, originating from information theory, treating the gray-level image histogram as a probability distribution; we instead applied the Tsallis entropy as a generalized information-theoretic entropy formalism. For the optimization, we used the artificial bee colony approach, since an exhaustive algorithm would be too time-consuming. The experiments demonstrate that: (1) the Tsallis entropy is superior to traditional maximum entropy thresholding, maximum between-class variance thresholding, and minimum cross entropy thresholding; (2) the artificial bee colony is more rapid than either a genetic algorithm or particle swarm optimization. Therefore, our approach is effective and rapid.
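For a single threshold, the Tsallis criterion can be evaluated by exhaustive scan; the sketch below (illustrative, with a made-up bimodal histogram) uses the standard pseudo-additive combination of the two class entropies. The paper's artificial bee colony replaces this scan for the much larger multi-level search space:

```python
def tsallis_entropy(p, q):
    # Tsallis entropy of a normalized distribution p for entropic index q.
    return (1.0 - sum(pi ** q for pi in p if pi > 0)) / (q - 1.0)

def tsallis_threshold(hist, q=0.8):
    # Exhaustive single-threshold search maximizing the pseudo-additive
    # sum of the two class entropies.
    total = sum(hist)
    probs = [h / total for h in hist]
    best_t, best_score = None, float("-inf")
    for t in range(1, len(hist) - 1):
        pa, pb = sum(probs[:t]), sum(probs[t:])
        if pa == 0 or pb == 0:
            continue
        a = [p / pa for p in probs[:t]]   # class A: bins below t
        b = [p / pb for p in probs[t:]]   # class B: bins at or above t
        sa, sb = tsallis_entropy(a, q), tsallis_entropy(b, q)
        score = sa + sb + (1 - q) * sa * sb  # pseudo-additivity rule
        if score > best_score:
            best_t, best_score = t, score
    return best_t

# Made-up bimodal histogram: dark peak near bin 2, bright peak near bin 12.
hist = [0, 5, 20, 5, 0, 0, 0, 0, 0, 0, 0, 6, 25, 6, 0, 0]
```

On this toy histogram the criterion places the threshold in the valley between the two modes, which is the behavior a thresholding criterion should exhibit.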
Zile, M R; Izzi, G; Gaasch, W H
1991-02-01
We tested the hypothesis that maximum systolic elastance (Emax) fails to detect a decline in left ventricular (LV) contractile function when diastolic dysfunction is present. Canine hearts were studied in an isolated blood-perfused heart apparatus (isovolumic LV); contractile dysfunction was produced by 60 or 90 minutes of global ischemia, followed by 90 minutes of reperfusion. Nine normal hearts underwent 60 minutes of ischemia, and five underwent 90 minutes of ischemia. After the ischemia-reperfusion sequence, developed pressure, pressure-volume area, and myocardial ATP level were significantly less than those at baseline in all 14 hearts. In the group undergoing 60 minutes of ischemia, LV diastolic pressure did not increase, whereas Emax decreased from 5.2 +/- 2.5 to 2.9 +/- 1.4 mm Hg/ml (p less than 0.05). In the group undergoing 90 minutes of ischemia, diastolic pressure increased (from 10 +/- 2 to 37 +/- 20 mm Hg, p less than 0.05), and Emax did not change significantly (from 5.1 +/- 4.3 to 4.3 +/- 2.5 mm Hg/ml). A second series of experiments was performed in 13 hearts with pressure-overload hypertrophy (aortic-band model with echocardiography and catheterization studies before the ischemia-reperfusion protocol). Five had evidence for pump failure, whereas eight remained compensated. After 60 minutes of ischemia and 90 minutes of reperfusion, developed pressure, pressure-volume area, and myocardial ATP level were significantly less than those at baseline in all 13 hearts. In the group with compensated LV hypertrophy, LV diastolic pressure did not change, whereas Emax decreased from 6.9 +/- 3.0 to 3.1 +/- 2.3 mm Hg/ml (p less than 0.05).(ABSTRACT TRUNCATED AT 250 WORDS)
Hyland, D. C.
1983-01-01
A stochastic structural control model is described. In contrast to the customary deterministic model, the stochastic minimum data/maximum entropy model directly incorporates the least possible a priori parameter information. The approach is to adopt this model as the basic design model, thus incorporating the effects of parameter uncertainty at a fundamental level, and design mean-square optimal controls (that is, choose the control law to minimize the average of a quadratic performance index over the parameter ensemble).
Longitudinal Examination of Age-Predicted Symptom-Limited Exercise Maximum Heart Rate
Zhu, Na; Suarez, Jose; Sidney, Steve; Sternfeld, Barbara; Schreiner, Pamela J.; Carnethon, Mercedes R.; Lewis, Cora E.; Crow, Richard S.; Bouchard, Claude; Haskell, William; Jacobs, David R.
2010-01-01
Purpose: To estimate the association of age with maximal heart rate (MHR). Methods: Data were obtained in the Coronary Artery Risk Development in Young Adults (CARDIA) study. Participants were black and white men and women aged 18-30 years in 1985-86 (year 0). A symptom-limited maximal graded exercise test was completed at years 0, 7, and 20 by 4969, 2583, and 2870 participants, respectively. After exclusions, 9622 eligible tests remained. Results: In all 9622 tests, estimated MHR (eMHR, beats/minute) had a quadratic relation to age in the age range 18 to 50 years: eMHR = 179 + 0.29*age - 0.011*age^2. The age-MHR association was approximately linear in the restricted age ranges of consecutive tests. In 2215 people who completed both the year 0 and year 7 tests (age range 18 to 37), eMHR = 189 - 0.35*age; in 1574 people who completed both the year 7 and year 20 tests (age range 25 to 50), eMHR = 199 - 0.63*age. In the lowest baseline BMI quartile, the rate of decline was 0.20 beats/minute/year between years 0-7 and 0.51 beats/minute/year between years 7-20, while in the highest baseline BMI quartile there was a linear decline of approximately 0.7 beats/minute/year over the full age range of 18 to 50 years. Conclusion: Clinicians making exercise prescriptions should be aware that the loss of symptom-limited MHR is much slower in young adulthood and more pronounced in later adulthood. In particular, MHR loss is very slow in those with the lowest BMI below age 40. PMID:20639723
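The three fitted equations reported above are simple to evaluate; a minimal transcription (coefficients exactly as given in the abstract, function names ours):

```python
def emhr_quadratic(age):
    """Pooled quadratic fit, ages 18-50: eMHR = 179 + 0.29*age - 0.011*age^2."""
    return 179 + 0.29 * age - 0.011 * age ** 2

def emhr_years_0_7(age):
    """Linear fit for year 0/7 completers, ages 18-37."""
    return 189 - 0.35 * age

def emhr_years_7_20(age):
    """Linear fit for year 7/20 completers, ages 25-50."""
    return 199 - 0.63 * age
```

At age 30, the three fits give roughly 177.8, 178.5, and 180.1 beats/minute, so they agree closely where their age ranges overlap.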
Protein side-chain packing problem: a maximum edge-weight clique algorithmic approach.
Dukka Bahadur, K C; Tomita, Etsuji; Suzuki, Jun'ichi; Akutsu, Tatsuya
2005-02-01
"Protein Side-chain Packing" has an ever-increasing application in the field of bio-informatics, dating from the early methods of homology modeling to protein design and to the protein docking. However, this problem is computationally known to be NP-hard. In this regard, we have developed a novel approach to solve this problem using the notion of a maximum edge-weight clique. Our approach is based on efficient reduction of protein side-chain packing problem to a graph and then solving the reduced graph to find the maximum clique by applying an efficient clique finding algorithm developed by our co-authors. Since our approach is based on deterministic algorithms in contrast to the various existing algorithms based on heuristic approaches, our algorithm guarantees of finding an optimal solution. We have tested this approach to predict the side-chain conformations of a set of proteins and have compared the results with other existing methods. We have found that our results are favorably comparable or better than the results produced by the existing methods. As our test set contains a protein of 494 residues, we have obtained considerable improvement in terms of size of the proteins and in terms of the efficiency and the accuracy of prediction.
Minimum redundancy maximum relevance feature selection approach for temporal gene expression data.
Radovic, Milos; Ghalwash, Mohamed; Filipovic, Nenad; Obradovic, Zoran
2017-01-03
Feature selection, which aims to identify a subset of features among a possibly large set that are relevant for predicting a response, is an important preprocessing step in machine learning. In gene expression studies this is not a trivial task for several reasons, including the potentially temporal character of the data. However, most feature selection approaches developed for microarray data cannot handle multivariate temporal data without prior data flattening, which results in a loss of temporal information. We propose a temporal minimum redundancy - maximum relevance (TMRMR) feature selection approach, which is able to handle multivariate temporal data without prior data flattening. In the proposed approach we compute the relevance of a gene by averaging F-statistic values calculated across individual time steps, and we compute the redundancy between genes using a dynamic time warping approach. The proposed method is evaluated on three temporal gene expression datasets from human viral challenge studies. The results show that the proposed method outperforms alternatives widely used in gene expression studies. In particular, the proposed method achieved improved accuracy in 34 out of 54 experiments, while the other methods outperformed it in no more than 4 experiments. We developed a filter-based feature selection method for temporal gene expression data based on maximum relevance and minimum redundancy criteria. The proposed method incorporates temporal information by combining relevance, calculated as an average F-statistic value across different time steps, with redundancy, calculated using a dynamic time warping approach. As evident in our experiments, incorporating temporal information into the feature selection process leads to the selection of more discriminative features.
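A minimal greedy sketch of the two ingredients described above: relevance as a time-averaged F statistic, and redundancy via dynamic time warping between (here, sample-averaged) gene trajectories. The additive score and the use of mean trajectories are our own simplifications for illustration, not necessarily the paper's exact formulation:

```python
import numpy as np

def f_stat(x, y):
    """One-way ANOVA F statistic for one feature x over class labels y."""
    classes = np.unique(y)
    overall = x.mean()
    ssb = sum((y == c).sum() * (x[y == c].mean() - overall) ** 2 for c in classes)
    ssw = sum(((x[y == c] - x[y == c].mean()) ** 2).sum() for c in classes)
    dfb, dfw = len(classes) - 1, len(x) - len(classes)
    return (ssb / dfb) / (ssw / dfw)

def dtw(a, b):
    """Plain O(len(a) * len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def tmrmr(X, y, k):
    """Greedy temporal mRMR sketch.

    X: (samples, genes, time) array; y: class labels.
    Relevance(g) = mean F statistic across time steps.
    Redundancy is penalized by rewarding a large mean DTW distance
    between a candidate's mean trajectory and the selected genes'.
    """
    n_samples, n_genes, n_time = X.shape
    rel = np.array([np.mean([f_stat(X[:, j, s], y) for s in range(n_time)])
                    for j in range(n_genes)])
    mean_traj = X.mean(axis=0)                 # average trajectory per gene
    selected = [int(np.argmax(rel))]           # most relevant gene first
    while len(selected) < k:
        scores = []
        for j in range(n_genes):
            if j in selected:
                scores.append(-np.inf)
                continue
            dist = np.mean([dtw(mean_traj[j], mean_traj[s]) for s in selected])
            scores.append(rel[j] + dist)       # large DTW distance = low redundancy
        selected.append(int(np.argmax(scores)))
    return selected
```

With an exact duplicate gene in the data, the DTW term steers the second pick away from the duplicate and toward an equally relevant gene with a different trajectory shape.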
Continuity of the maximum-entropy inference: Convex geometry and numerical ranges approach
Rodman, Leiba [Department of Mathematics, College of William and Mary, P.O. Box 8795, Williamsburg, Virginia 23187-8795 (United States); Spitkovsky, Ilya M., E-mail: ims2@nyu.edu, E-mail: ilya@math.wm.edu [Department of Mathematics, College of William and Mary, P.O. Box 8795, Williamsburg, Virginia 23187-8795 (United States); Division of Science and Mathematics, New York University Abu Dhabi, Saadiyat Island, P.O. Box 129188, Abu Dhabi (United Arab Emirates); Szkoła, Arleta, E-mail: szkola@mis.mpg.de; Weis, Stephan, E-mail: maths@stephan-weis.info [Max Planck Institute for Mathematics in the Sciences, Inselstrasse 22, D-04103 Leipzig (Germany)
2016-01-15
We study the continuity of an abstract generalization of the maximum-entropy inference—a maximizer. It is defined as a right-inverse of a linear map restricted to a convex body which uniquely maximizes on each fiber of the linear map a continuous function on the convex body. Using convex geometry we prove, amongst others, the existence of discontinuities of the maximizer at limits of extremal points not being extremal points themselves and apply the result to quantum correlations. Further, we use numerical range methods in the case of quantum inference which refers to two observables. One result is a complete characterization of points of discontinuity for 3 × 3 matrices.
A maximum likelihood approach to estimating articulator positions from speech acoustics
Hogden, J.
1996-09-23
This proposal presents an algorithm called maximum likelihood continuity mapping (MALCOM) which recovers the positions of the tongue, jaw, lips, and other speech articulators from measurements of the sound-pressure waveform of speech. MALCOM differs from other techniques for recovering articulator positions from speech in three critical respects: it does not require training on measured or modeled articulator positions, it does not rely on any particular model of sound propagation through the vocal tract, and it recovers a mapping from acoustics to articulator positions that is linearly, not topographically, related to the actual mapping from acoustics to articulation. The approach categorizes short-time windows of speech into a finite number of sound types, and assumes the probability of using any articulator position to produce a given sound type can be described by a parameterized probability density function. MALCOM then uses maximum likelihood estimation techniques to: (1) find the most likely smooth articulator path given a speech sample and a set of distribution functions (one distribution function for each sound type), and (2) change the parameters of the distribution functions to better account for the data. Using this technique improves the accuracy of articulator position estimates compared to continuity mapping -- the only other technique that learns the relationship between acoustics and articulation solely from acoustics. The technique has potential application to computer speech recognition, speech synthesis and coding, teaching the hearing impaired to speak, improving foreign language instruction, and teaching dyslexics to read. 34 refs., 7 figs.
Mat Jan, Nur Amalina; Shabri, Ani
2017-01-01
The TL-moments approach has been used to identify the best-fitting distributions to represent the annual maximum streamflow series at seven stations in Johor, Malaysia. TL-moments with different trimming values are used to estimate the parameters of the selected distributions, namely the three-parameter lognormal (LN3) and Pearson Type III (P3) distributions. The main objective of this study is to derive the TL-moments (t1, 0), t1 = 1, 2, 3, 4, methods for the LN3 and P3 distributions. The performance of TL-moments (t1, 0), t1 = 1, 2, 3, 4, was compared with L-moments through Monte Carlo simulation and streamflow data from a station in Johor, Malaysia. The absolute error is used to test the influence of the TL-moments methods on the estimated probability distribution functions. For the cases in this study, the results show that TL-moments with the four smallest values trimmed from the conceptual sample (TL-moments (4, 0)) of the LN3 distribution were the most appropriate at most stations of the annual maximum streamflow series in Johor, Malaysia.
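For orientation, sample TL-moments with trimming (t1, t2) can be computed from weighted order statistics; a sketch of the first two, following the standard Elamir-Seheult estimator of E[X_{j:m}] (a (t1, 0) trimming, as used in the abstract, down-weights only the lower tail):

```python
from math import comb

def e_order_stat(x, j, m):
    """Unbiased sample estimate of E[X_{j:m}] from an ascending-sorted sample x."""
    n = len(x)
    total = sum(comb(i, j - 1) * comb(n - 1 - i, m - j) * x[i] for i in range(n))
    return total / comb(n, m)

def tl_moments_12(data, t1, t2=0):
    """First two sample TL-moments with trimming (t1, t2).

    With t1 = t2 = 0 these reduce to the ordinary L-moments l1 (mean) and l2.
    """
    x = sorted(data)
    l1 = e_order_stat(x, 1 + t1, 1 + t1 + t2)
    l2 = 0.5 * (e_order_stat(x, 2 + t1, 2 + t1 + t2)
                - e_order_stat(x, 1 + t1, 2 + t1 + t2))
    return l1, l2
```

For the sample {1, 2, 3, 4, 5}, the untrimmed case gives (3.0, 1.0); trimming one value from the lower tail, (1, 0), shifts the first TL-moment up to 4.0.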
Kinoshita, Takashi, E-mail: tkino@med.kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Kawayama, Tomotaka, E-mail: kawayama_tomotaka@med.kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Imamura, Youhei, E-mail: mamura_youhei@med.kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Sakazaki, Yuki, E-mail: sakazaki@med.kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Hirai, Ryo, E-mail: hirai_ryou@kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Ishii, Hidenobu, E-mail: shii_hidenobu@med.kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Suetomo, Masashi, E-mail: jin_t_f_c@yahoo.co.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Matsunaga, Kazuko, E-mail: kmatsunaga@kouhoukai.or.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Azuma, Koichi, E-mail: azuma@med.kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan); Fujimoto, Kiminori, E-mail: kimichan@med.kurume-u.ac.jp [Department of Radiology, Kurume University School of Medicine, Kurume (Japan); Hoshino, Tomoaki, E-mail: hoshino@med.kurume-u.ac.jp [Division of Respirology, Neurology, and Rheumatology, Department of Medicine, Kurume University School of Medicine, Kurume (Japan)
2015-04-15
Highlights: •Computed tomography (CT) is often used in the diagnosis of chronic obstructive pulmonary disease. •CT is, however, more expensive and involves a higher radiation dose. •Plain chest radiography is simpler and cheaper; it is useful for detecting pulmonary emphysema, but not airflow limitation. •Our study demonstrated that the paired maximum inspiratory and expiratory plain chest radiography technique can detect severe airflow limitation. •We believe the technique is helpful in diagnosing patients with chronic obstructive pulmonary disease. -- Abstract: Background: The usefulness of paired maximum inspiratory and expiratory (I/E) plain chest radiography (pCR) for the diagnosis of chronic obstructive pulmonary disease (COPD) is still unclear. Objectives: We examined whether measurement of the I/E ratio using paired I/E pCR could be used to detect airflow limitation in patients with COPD. Methods: Eighty patients with COPD (GOLD stage I = 23, stage II = 32, stage III = 15, stage IV = 10) and 34 control subjects were enrolled. The I/E ratios of frontal and lateral lung areas, and the lung distance between the apex and base on pCR views, were analyzed quantitatively. Pulmonary function parameters were measured at the same time. Results: The I/E ratios for the frontal lung area (1.25 ± 0.01), the lateral lung area (1.29 ± 0.01), and the lung distance (1.18 ± 0.01) were significantly (p < 0.05) reduced in COPD patients compared with controls (1.31 ± 0.02, 1.38 ± 0.02, and 1.22 ± 0.01, respectively). The I/E ratios in frontal and lateral areas, and the lung distance, were significantly (p < 0.05) reduced in severe (GOLD stage III) and very severe (GOLD stage IV) COPD as compared to control subjects, although the I/E ratios did not differ significantly between severe and very severe COPD. Moreover, the I/E ratios were significantly correlated with pulmonary function parameters. Conclusions: Measurement of I/E ratios on paired I/E pCR is simple and
Maximum-likelihood approaches reveal signatures of positive selection in IL genes in mammals.
Neves, Fabiana; Abrantes, Joana; Steinke, John W; Esteves, Pedro J
2014-02-01
ILs are part of the immune system and are involved in multiple biological activities. ILs have been shown to evolve under positive selection; however, little information exists regarding which codons are specifically selected. By using different codon-based maximum-likelihood (ML) approaches, we searched for signatures of positive selection in mammalian ILs. Sequences of 46 ILs were retrieved from publicly available databases of mammalian genomes to detect signatures of positive selection at individual codons. Evolutionary analyses were conducted under two ML frameworks: the HyPhy package implemented in the Datamonkey web server, and CODEML implemented in PAML. Signatures of positive selection were found in 28 ILs: IL-1A and B, IL-2, IL-4 to IL-10, IL-12A and B, IL-14 to IL-16, IL-17A and C, IL-18, IL-20 to IL-22, IL-25, IL-26, IL-27B, IL-31, IL-34, and IL-36A and G. The number of codons under positive selection varied between 1 and 15. No evidence of positive selection was detected in IL-13, IL-17B and F, IL-19, IL-23, IL-24, IL-27A, or IL-29. Most mammalian ILs have sites evolving under positive selection, which may be explained by the multitude of biological processes in which ILs are involved. The results raise hypotheses concerning IL functions that should be pursued using mutagenesis and crystallographic approaches.
Gul, Sehrish; Zou, Xiang; Hassan, Che Hashim; Azam, Muhammad; Zaman, Khalid
2015-12-01
This study investigates the relationship between energy consumption and carbon dioxide emissions in a causal framework, as the direction of causality has significant policy implications for developed and developing countries. The study employed the maximum entropy bootstrap (Meboot) approach to examine the causal nexus between energy consumption and carbon dioxide emissions in both bivariate and multivariate frameworks for Malaysia over the period 1975-2013. This is a unified approach that does not require conventional techniques based on asymptotic theory, such as testing for possible unit roots and cointegration. In addition, it can be applied in the presence of non-stationarity of any type, including structural breaks, without any data transformation to achieve stationarity. It thus provides more reliable and robust inferences that are insensitive to the time span as well as the lag length used. The empirical results show a unidirectional causality running from energy consumption to carbon emissions in both the bivariate model and the multivariate framework, while controlling for broad money supply and population density. The results indicate that Malaysia is an energy-dependent country and hence that energy use is a stimulus to carbon emissions.
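A heavily simplified sketch of generating one maximum-entropy bootstrap replicate in the spirit of Vinod's meboot: draw from a piecewise-uniform maximum-entropy density built on the sorted sample, then restore the original time ordering so the replicate keeps the series' shape (the real algorithm additionally uses trimmed-mean tail limits and a mean-preserving adjustment of the interval masses, both omitted here):

```python
import numpy as np

def meboot_replicate(x, rng):
    """One simplified maximum-entropy bootstrap replicate of the series x."""
    x = np.asarray(x, float)
    n = len(x)
    order = np.argsort(x, kind="stable")
    xs = x[order]
    # interval endpoints: midpoints between order statistics, crude tails
    z = np.empty(n + 1)
    z[1:-1] = 0.5 * (xs[:-1] + xs[1:])
    z[0] = xs[0] - (xs[1] - xs[0])       # crude lower tail limit
    z[-1] = xs[-1] + (xs[-1] - xs[-2])   # crude upper tail limit
    # draw n uniforms and push them through the piecewise-linear quantile map
    u = np.sort(rng.uniform(size=n))
    grid = np.linspace(0.0, 1.0, n + 1)
    rep_sorted = np.interp(u, grid, z)
    # reassign the sorted draws to the original time positions via the ranks
    rep = np.empty(n)
    rep[order] = rep_sorted
    return rep
```

Because the sorted draws are reassigned by the original ranks, each replicate preserves the rank ordering, and hence the broad temporal shape, of the observed series, which is what makes Meboot usable on non-stationary data.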
Liang Xu
2015-01-01
Aiming to reduce the number of crashes caused by speeding at night on road sections with a crosswalk, a study was conducted on the maximum speed limit and the safe average luminance at night. To investigate the potential relationship between drivers' recognition characteristics and driving speed under different road lighting conditions, data on the remaining driving time (the period from the moment a crossing pedestrian is recognized to the moment a vehicle travelling at uniform speed arrives at the crosswalk) were recorded. The analysis shows that it is more difficult for drivers to recognize a crossing pedestrian at night when a single pedestrian is static and wears dark clothes. The remaining driving time decreases as driving speed increases and as road luminance decreases. With the collected data, several multivariate nonlinear regression models were established to capture the relationship among remaining driving time at night, driving speed, and average luminance. The modeling results were then used to derive a reasonable speed limit and safe average luminance from physical equations. Case studies are presented at the end of the paper.
Analysis of Rayleigh waves with circular wavefront: a maximum likelihood approach
Maranò, Stefano; Hobiger, Manuel; Bergamo, Paolo; Fäh, Donat
2017-09-01
Analysis of Rayleigh waves is an important task in seismology and geotechnical investigations: properties of Rayleigh waves such as velocity and polarization are important observables that carry information about the structure of the subsoil. Applications analysing Rayleigh waves include active and passive seismic surveys. In active surveys, there is a controlled source of seismic energy and the sensors are typically placed near the source. In passive surveys, there is no controlled source; rather, seismic waves from ambient vibrations are analysed, and the sources are assumed to be far outside the array, simplifying the analysis through the plane-wave assumption. Whenever the source is in the proximity of the sensor array, or even within it, it is necessary to model the wave propagation accounting for the circular wavefront, and also to model the amplitude decay due to geometrical spreading. This is the case in active seismic surveys, in which sensors are located near the seismic source. In this work, we propose a maximum likelihood (ML) approach for the analysis of Rayleigh waves generated at a near source. Our statistical model accounts for the curvature of the wavefront and for the amplitude decay due to geometrical spreading. Using our method, we show applications on real data of the retrieval of Rayleigh wave dispersion and ellipticity, employing arrays with arbitrary geometry. Furthermore, we show how it is possible to combine active and passive surveys, which enlarges the analysable frequency range and therefore the depths investigated. We retrieve properties of Rayleigh waves from both active and passive surveys and show the excellent agreement of the results from the two surveys. In our approach we use the same array of sensors for both the passive and the active survey, which greatly simplifies the logistics necessary to perform a survey.
A New Maximum Likelihood Approach for Free Energy Profile Construction from Molecular Simulations
Lee, Tai-Sung; Radak, Brian K.; Pabis, Anna; York, Darrin M.
2013-01-01
A novel variational method for construction of free energy profiles from molecular simulation data is presented. The variational free energy profile (VFEP) method uses the maximum likelihood principle applied to the global free energy profile based on the entire set of simulation data (e.g., from multiple biased simulations) that spans the free energy surface. The new method addresses common obstacles in two major problems usually observed in traditional methods for estimating free energy surfaces: the need for overlap in the re-weighting procedure and the problem of data representation. Test cases demonstrate that VFEP outperforms other methods in terms of the amount and sparsity of the data needed to construct the overall free energy profiles. For typical chemical reactions, only ~5 windows and ~20-35 independent data points per window are sufficient to obtain an overall qualitatively correct free energy profile with sampling errors an order of magnitude smaller than the free energy barrier. The proposed approach thus provides a feasible mechanism to quickly construct the global free energy profile and identify free energy barriers and basins in free energy simulations via a robust, variational procedure that determines an analytic representation of the free energy profile without the requirement of numerically unstable histograms or binning procedures. It can serve as a new framework for biased simulations and is suitable for use together with other methods to tackle the free energy estimation problem. PMID:23457427
A Maximum Entropy Approach to Assess Debonding in Honeycomb Aluminum Plates
Viviana Meruane
2014-05-01
Honeycomb sandwich structures are used in a wide variety of applications. Nevertheless, due to manufacturing defects or impact loads, these structures can suffer imperfect bonding or debonding between the skin and the honeycomb core. The presence of debonding reduces the bending stiffness of the composite panel, which causes detectable changes in its vibration characteristics. This article presents a new supervised learning algorithm to identify debonded regions in aluminum honeycomb panels. The algorithm uses a linear approximation method handled by a statistical inference model based on the maximum-entropy principle. The merits of this approach are twofold: training is avoided, and data are processed in a period of time comparable to that of neural networks. The honeycomb panels are modeled with finite elements using a simplified three-layer shell model. The adhesive layer between the skin and core is modeled using linear springs, the rigidities of which are reduced in debonded sectors. The algorithm is validated using experimental data from an aluminum honeycomb panel under different damage scenarios.
Maximum entropy approach to statistical inference for an ocean acoustic waveguide.
Knobles, D P; Sagers, J D; Koch, R A
2012-02-01
A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse set of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations.
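The canonical form described above, p_i proportional to exp(-β E_i), leaves the sensitivity factor β to be fixed by the expectation-value constraint on the error function. A small numerical sketch of that step over a discrete set of model error values (bisection on the monotone map from β to the mean error; the function name and bracketing interval are illustrative choices):

```python
import numpy as np

def canonical_beta(errors, target, lo=-50.0, hi=50.0, iters=200):
    """Solve for beta such that p_i ~ exp(-beta * E_i) gives <E> = target."""
    E = np.asarray(errors, float)

    def mean_e(beta):
        w = np.exp(-beta * (E - E.min()))   # shift for numerical stability
        p = w / w.sum()
        return float(p @ E)

    # <E>(beta) is strictly decreasing in beta, so bisection applies
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_e(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

When the target equals the unweighted mean of the error values, the solution is β = 0 (the uniform, maximum-entropy-without-constraint case); a lower target error forces β > 0, concentrating probability on the better-fitting models.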
Beltrán, M C; Romero, T; Althaus, R L; Molina, M P
2013-01-01
The Charm maximum residue limit β-lactam and tetracycline test (Charm MRL BLTET; Charm Sciences Inc., Lawrence, MA) is an immunoreceptor assay utilizing Rapid One-Step Assay lateral flow technology that detects...
Rahat, Alma A M; Everson, Richard M; Fieldsend, Jonathan E
2015-01-01
Mesh network topologies are becoming increasingly popular in battery-powered wireless sensor networks, primarily because of the extension of network range. However, multihop mesh networks suffer from higher energy costs, and the routing strategy employed directly affects the lifetime of nodes with limited energy resources. Hence when planning routes there are trade-offs to be considered between individual and system-wide battery lifetimes. We present a multiobjective routing optimisation approach using hybrid evolutionary algorithms to approximate the optimal trade-off between the minimum lifetime and the average lifetime of nodes in the network. In order to accomplish this combinatorial optimisation rapidly, our approach prunes the search space using k-shortest path pruning and a graph reduction method that finds candidate routes promoting long minimum lifetimes. When arbitrarily many routes from a node to the base station are permitted, optimal routes may be found as the solution to a well-known linear program. We present an evolutionary algorithm that finds good routes when each node is allowed only a small number of paths to the base station. On a real network deployed in the Victoria & Albert Museum, London, these solutions, using only three paths per node, are able to achieve minimum lifetimes of over 99% of the optimum linear program solution's time to first sensor battery failure.
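The two lifetime objectives being traded off can be made concrete with a toy evaluator (our own illustrative model: every source sends one message per time unit and each forwarding node spends one unit of energy per message; the paper's energy model is more detailed):

```python
def node_lifetimes(routes, battery, tx_cost=1.0):
    """Per-node lifetime under a fixed routing plan.

    routes: list of paths, each a list of node ids ending at the base
    station (id 'BS'). Lifetime = battery / energy drained per time unit.
    """
    load = {}
    for path in routes:
        for node in path:
            if node == "BS":          # the base station is mains-powered
                continue
            load[node] = load.get(node, 0) + 1
    return {n: battery[n] / (load[n] * tx_cost) for n in load}

def objectives(routes, battery):
    """(minimum lifetime, average lifetime) for a routing plan."""
    lt = node_lifetimes(routes, battery)
    vals = list(lt.values())
    return min(vals), sum(vals) / len(vals)
```

In a plan where two sources both relay through the same node, that relay drains twice as fast and sets the minimum lifetime, which is exactly the bottleneck the multiobjective search tries to relieve.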
Concept for estimating mitochondrial DNA haplogroups using a maximum likelihood approach (EMMA)☆
Röck, Alexander W.; Dür, Arne; van Oven, Mannis; Parson, Walther
2013-01-01
The assignment of haplogroups to mitochondrial DNA haplotypes contributes substantial value for quality control, not only in forensic genetics but also in population and medical genetics. The availability of Phylotree, a widely accepted phylogenetic tree of human mitochondrial DNA lineages, led to the development of several (semi-)automated software solutions for haplogrouping. However, currently existing haplogrouping tools only make use of haplogroup-defining mutations, whereas private mutations (beyond the haplogroup level) can be additionally informative allowing for enhanced haplogroup assignment. This is especially relevant in the case of (partial) control region sequences, which are mainly used in forensics. The present study makes three major contributions toward a more reliable, semi-automated estimation of mitochondrial haplogroups. First, a quality-controlled database consisting of 14,990 full mtGenomes downloaded from GenBank was compiled. Together with Phylotree, these mtGenomes serve as a reference database for haplogroup estimates. Second, the concept of fluctuation rates, i.e. a maximum likelihood estimation of the stability of mutations based on 19,171 full control region haplotypes for which raw lane data is available, is presented. Finally, an algorithm for estimating the haplogroup of an mtDNA sequence based on the combined database of full mtGenomes and Phylotree, which also incorporates the empirically determined fluctuation rates, is brought forward. On the basis of examples from the literature and EMPOP, the algorithm is not only validated, but both the strength of this approach and its utility for quality control of mitochondrial haplotypes is also demonstrated. PMID:23948335
Mikosch, Thomas Valentin; Moser, Martin
2013-01-01
We investigate the maximum increment of a random walk with heavy-tailed jump size distribution. Here heavy-tailedness is understood as regular variation of the finite-dimensional distributions. The jump sizes constitute a strictly stationary sequence. Using a continuous mapping argument acting on the point processes of the normalized jump sizes, we prove that the maximum increment of the random walk converges in distribution to a Fréchet distributed random variable.
Park, Junhong; Palumbo, Daniel L.
2004-01-01
The use of shunted piezoelectric patches to reduce vibration and sound radiation of structures has several advantages over passive viscoelastic elements, e.g., lower weight with increased controllability. The performance of the piezoelectric patches depends on the shunting electronics, which are designed to dissipate vibration energy through a resistive element. In past efforts, most of the proposed tuning methods were based on modal properties of the structure. In these cases, the tuning applies only to one mode of interest, and maximum tuning is limited to invariant points when based on den Hartog's invariant-points concept. In this study, a design method based on the wave propagation approach is proposed. Optimal tuning is investigated as a function of the dynamic and geometric properties, including effects from boundary conditions and the position of the shunted piezoelectric patch relative to the structure. Active filters are proposed as shunting electronics to implement the tuning criteria. The developed tuning methods show superior capabilities in minimizing structural vibration and noise radiation compared to other tuning methods. The tuned circuits are relatively insensitive to changes in modal properties and boundary conditions, and can be applied to frequency ranges in which multiple modes have effects.
Derivation of some new distributions in statistical mechanics using maximum entropy approach
Ray Amritansu
2014-01-01
The maximum entropy principle has been used earlier to derive the Bose-Einstein (B.E.), Fermi-Dirac (F.D.), and intermediate statistics (I.S.) distributions of statistical mechanics. The central idea of these distributions is to predict the distribution of the microstates, which are the particles of the system, on the basis of the knowledge of some macroscopic data. The latter information is specified in the form of some simple moment constraints. One distribution differs from another in the way in which the constraints are specified. In the present paper, we derive some new distributions similar to the B.E. and F.D. distributions of statistical mechanics by using the maximum entropy principle. Some proofs of the B.E. and F.D. distributions are shown, and at the end some new results are discussed.
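As a toy illustration of the maximum-entropy machinery behind such derivations (not the paper's own derivation), the following sketch finds the Gibbs/Boltzmann distribution over a few discrete energy levels subject only to a fixed mean energy, solving for the Lagrange multiplier beta by bisection. The level energies and target mean are invented for the example.

```python
import math

def max_entropy_dist(energies, target_mean):
    """Maximum-entropy distribution over discrete energy levels subject to
    a fixed mean energy: p_i proportional to exp(-beta * e_i), the
    Gibbs/Boltzmann form. beta is the Lagrange multiplier of the mean-energy
    constraint, found by bisection on the monotone mean-energy function."""
    def mean_at(beta):
        weights = [math.exp(-beta * e) for e in energies]
        z = sum(weights)
        return sum(w * e for w, e in zip(weights, energies)) / z

    # mean_at is strictly decreasing in beta, so bisection applies.
    a, b = -50.0, 50.0
    for _ in range(200):
        mid = 0.5 * (a + b)
        if mean_at(mid) > target_mean:
            a = mid  # need larger beta to lower the mean
        else:
            b = mid
    beta = 0.5 * (a + b)
    weights = [math.exp(-beta * e) for e in energies]
    z = sum(weights)
    return beta, [w / z for w in weights]
```

For four equally spaced levels with the target mean equal to the unconstrained average, the solver recovers beta = 0, i.e. the uniform distribution, as expected.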
Hyland, D. C.
1985-01-01
The underlying philosophy and motivation of the optimal projection/maximum entropy (OP/ME) stochastic modelling and reduced order control design method for high order systems with parameter uncertainties are discussed. The OP/ME design equations for reduced-order dynamic compensation including the effect of parameter uncertainties are reviewed and the application of the methodology to several large space structure (LSS) problems of representative complexity is illustrated.
Fraternali, Fernando; Marcelli, Gianluca
2011-01-01
We present a meshfree method for the curvature estimation of membrane networks based on the Local Maximum Entropy approach recently presented in (Arroyo and Ortiz, 2006). A continuum regularization of the network is carried out by balancing the maximization of the information entropy corresponding to the nodal data, with the minimization of the total width of the shape functions. The accuracy and convergence properties of the given curvature prediction procedure are assessed through numerical applications to benchmark problems, which include coarse grained molecular dynamics simulations of the fluctuations of red blood cell membranes (Marcelli et al., 2005; Hale et al., 2009). We also provide an energetic discrete-to-continuum approach to the prediction of the zero-temperature bending rigidity of membrane networks, which is based on the integration of the local curvature estimates. The Local Maximum Entropy approach is easily applicable to the continuum regularization of fluctuating membranes, and the predict...
The SIS and SIR stochastic epidemic models: a maximum entropy approach.
Artalejo, J R; Lopez-Herrero, M J
2011-12-01
We analyze the dynamics of infectious disease spread by formulating the maximum entropy (ME) solutions of the susceptible-infected-susceptible (SIS) and the susceptible-infected-removed (SIR) stochastic models. Several scenarios providing helpful insight into the use of the ME formalism for epidemic modeling are identified. The ME results are illustrated with respect to several descriptors, including the number of recovered individuals and the time to extinction. An application to infectious data from outbreaks of extended spectrum beta lactamase (ESBL) in a hospital is also considered.
An Exact Solution Approach for the Maximum Multicommodity K-splittable Flow Problem
Gamst, Mette; Petersen, Bjørn
2009-01-01
This talk concerns the NP-hard Maximum Multicommodity k-splittable Flow Problem (MMCkFP), in which each commodity may use at most k paths between its origin and its destination. A new branch-and-cut-and-price algorithm is presented. The master problem is a two-index formulation of the MMCkFP and the pricing problem is the shortest path problem with forbidden paths. A new branching strategy forcing and forbidding the use of certain paths is developed. The new branch-and-cut-and-price algorithm is computationally evaluated and compared to results from the literature. The new algorithm shows very...
Maximum-Likelihood Approach to Topological Charge Fluctuations in Lattice Gauge Theory
Brower, R C; Fleming, G T; Lin, M F; Neil, E T; Osborn, J C; Rebbi, C; Rinaldi, E; Schaich, D; Schroeder, C; Voronov, G; Vranas, P; Weinberg, E; Witzel, O
2014-01-01
We present a novel technique for the determination of the topological susceptibility (related to the variance of the distribution of global topological charge) from lattice gauge theory simulations, based on maximum-likelihood analysis of the Markov-chain Monte Carlo time series. This technique is expected to be particularly useful in situations where relatively few tunneling events are observed. Restriction to a lattice subvolume on which topological charge is not quantized is explored, and may lead to further improvement when the global topology is poorly sampled. We test our proposed method on a set of lattice data, and compare it to traditional methods.
A maximum entropy approach to separating noise from signal in bimodal affiliation networks
Dianati, Navid
2016-01-01
In practice, many empirical networks, including co-authorship and collocation networks, are unimodal projections of a bipartite data structure where one layer represents entities, the second layer consists of a number of sets representing affiliations, attributes, groups, etc., and an inter-layer link indicates membership of an entity in a set. The edge weight in the unimodal projection, which we refer to as a co-occurrence network, counts the number of sets to which both end-nodes are linked. Interpreting such dense networks requires statistical analysis that takes into account the bipartite structure of the underlying data. Here we develop a statistical significance metric for such networks based on a maximum entropy null model which preserves both the frequency sequence of the individuals/entities and the size sequence of the sets. Solving the maximum entropy problem is reduced to solving a system of nonlinear equations for which fast algorithms exist, thus eliminating the need for expensive Monte-Carlo sampling.
Gu, Fei; Wu, Hao
2016-09-01
The specifications of state space model for some principal component-related models are described, including the independent-group common principal component (CPC) model, the dependent-group CPC model, and principal component-based multivariate analysis of variance. Some derivations are provided to show the equivalence of the state space approach and the existing Wishart-likelihood approach. For each model, a numeric example is used to illustrate the state space approach. In addition, a simulation study is conducted to evaluate the standard error estimates under the normality and nonnormality conditions. In order to cope with the nonnormality conditions, the robust standard errors are also computed. Finally, other possible applications of the state space approach are discussed at the end.
Lorenz, Ralph D
2010-05-12
The 'two-box model' of planetary climate is discussed. This model has been used to demonstrate consistency of the equator-pole temperature gradient on Earth, Mars and Titan with what would be predicted from a principle of maximum entropy production (MEP). While useful for exposition and for generating first-order estimates of planetary heat transports, it has too low a resolution to investigate climate systems with strong feedbacks. A two-box MEP model agrees well with the day:night temperature contrast observed on the extrasolar planet HD 189733b.
Ulrich, Clara; Vermard, Youen; Dolder, Paul J.
2017-01-01
Achieving single-species maximum sustainable yield (MSY) in complex and dynamic fisheries targeting multiple species (mixed fisheries) is challenging because achieving the objective for one species may mean missing the objective for another. The North Sea mixed fisheries are a representative... ranges to combine long-term single-stock targets with flexible, short-term, mixed-fisheries management requirements applied to the main North Sea demersal stocks. It is shown that sustained fishing at the upper bound of the range may lead to unacceptable risks when technical interactions occur... An objective method is suggested that provides an optimal set of fishing mortalities within the range, minimizing the risk of total allowable catch mismatches among stocks captured within mixed fisheries, and addressing explicitly the trade-offs between the most and least productive stocks.
A maximum-entropy approach to the adiabatic freezing of a supercooled liquid.
Prestipino, Santi
2013-04-28
I employ the van der Waals theory of Baus and co-workers to analyze the fast, adiabatic decay of a supercooled liquid in a closed vessel with which the solidification process usually starts. By imposing a further constraint on either the system volume or pressure, I use the maximum-entropy method to quantify the fraction of liquid that is transformed into solid as a function of undercooling and of the amount of a foreign gas that could possibly be also present in the test tube. Upon looking at the implications of thermal and mechanical insulation for the energy cost of forming a solid droplet within the liquid, I identify one situation where the onset of solidification inevitably occurs near the wall in contact with the bath.
Kalafut, Bennett; Visscher, Koen
2008-10-01
Optical tweezers experiments allow us to probe the role of force and mechanical work in a variety of biochemical processes. However, observable states do not usually correspond in a one-to-one fashion with the internal state of an enzyme or enzyme-substrate complex. Different kinetic pathways yield different distributions for the dwells in the observable states. Furthermore, the dwell-time distribution will be dependent upon force, and upon where in the biochemical pathway force acts. I will present a maximum-likelihood method for identifying rate constants and the locations of force-dependent transitions in transcription initiation by T7 RNA Polymerase. This method is generalizable to systems with more complicated kinetic pathways in which there are two observable states (e.g. bound and unbound) and an irreversible final transition.
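As background for the maximum-likelihood idea (not the paper's multi-state, force-dependent method), the ML estimate of a single rate constant from exponentially distributed dwell times has a closed form: the likelihood product of k*exp(-k*t) is maximized at k = n / sum(t). A minimal sketch:

```python
import math

def mle_rate(dwells):
    """ML estimate of the rate constant k for exponentially distributed
    dwell times t_1..t_n. Setting d/dk of the log-likelihood
    n*log(k) - k*sum(t) to zero gives k_hat = n / sum(t)."""
    return len(dwells) / sum(dwells)

def log_likelihood(k, dwells):
    """Log-likelihood of rate k given the observed dwells; useful for
    comparing candidate kinetic models."""
    return len(dwells) * math.log(k) - k * sum(dwells)
```

For multi-state pathways with hidden intermediates, as in the abstract, the dwell distribution is a convolution of exponentials and the maximization must be done numerically, but the same likelihood principle applies.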
López-Valcarce Roberto
2004-01-01
We address the problem of estimating the speed of a road vehicle from its acoustic signature, recorded by a pair of omnidirectional microphones located next to the road. This choice of sensors is motivated by their nonintrusive nature as well as low installation and maintenance costs. A novel estimation technique is proposed, which is based on the maximum likelihood principle. It directly estimates car speed without any assumptions on the acoustic signal emitted by the vehicle. This has the advantages of bypassing troublesome intermediate delay estimation steps as well as eliminating the need for an accurate yet general enough acoustic traffic model. An analysis of the estimate for narrowband and broadband sources is provided and verified with computer simulations. The estimation algorithm uses a bank of modified crosscorrelators and therefore it is well suited to DSP implementation, performing well with preliminary field data.
A Sum-of-Squares and Semidefinite Programming Approach for Maximum Likelihood DOA Estimation
Shu Cai
2016-12-01
Direction of arrival (DOA) estimation using a uniform linear array (ULA) is a classical problem in array signal processing. In this paper, we focus on DOA estimation based on the maximum likelihood (ML) criterion, transform the estimation problem into a novel formulation, named sum-of-squares (SOS), and then solve it using semidefinite programming (SDP). We first derive the SOS and SDP method for DOA estimation in the scenario of a single source and then extend it under the framework of alternating projection for multiple-DOA estimation. The simulations demonstrate that the SOS- and SDP-based algorithms can provide stable and accurate DOA estimation when the number of snapshots is small and the signal-to-noise ratio (SNR) is low. Moreover, it has a higher spatial resolution compared to existing methods based on the ML criterion.
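The paper's SOS/SDP formulation is involved; as a minimal, hypothetical point of comparison only, single-source ML DOA estimation for a ULA reduces to scanning the beamformer output power over a grid of angles. The half-wavelength element spacing and 0.1-degree grid below are assumptions for the sketch, not the paper's setup:

```python
import cmath
import math

def steering(theta_deg, m, d=0.5):
    """ULA steering vector for arrival angle theta (degrees),
    m elements, spacing d in wavelengths."""
    th = math.radians(theta_deg)
    return [cmath.exp(-2j * math.pi * d * i * math.sin(th)) for i in range(m)]

def ml_doa_single(snapshots, m, grid=None):
    """Single-source ML DOA for a ULA: since ||a(theta)||^2 is constant,
    the ML estimate maximizes the beamformer power
    sum_t |a(theta)^H x_t|^2 over the candidate angles."""
    if grid is None:
        grid = [g / 10.0 for g in range(-900, 901)]  # -90..90 deg, 0.1 deg step
    best, best_p = None, -1.0
    for theta in grid:
        a = steering(theta, m)
        p = 0.0
        for x in snapshots:
            corr = sum(ai.conjugate() * xi for ai, xi in zip(a, x))
            p += abs(corr) ** 2
        if p > best_p:
            best_p, best = p, theta
    return best
```

With a noise-free snapshot from a source at an on-grid angle, the scan recovers that angle exactly; the SOS/SDP route in the paper avoids this exhaustive search.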
A.H. Curiale (Ariel H.); G. Vegas-Sanchez-Ferrero (Gonzalo); J.G. Bosch (Hans); S. Aja-Fernández (Santiago)
2015-01-01
The strain and strain-rate measures are commonly used for the analysis and assessment of regional myocardial function. In echocardiography (EC), the strain analysis became possible using Tissue Doppler Imaging (TDI). Unfortunately, this modality shows an important limitation: the angle...
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversion...
Limitations of current approaches for the treatment of acromegaly.
Shanik, Michael H
2016-02-01
Acromegaly is a rare disease characterized by hypersecretion of growth hormone (GH), typically from a benign pituitary somatotroph adenoma, that leads to subsequent hypersecretion of insulin-like growth factor 1 (IGF-1). Patients with acromegaly have an increased risk of mortality and progressive worsening of comorbidities. Surgery, medical therapy, and radiotherapy are currently available treatment approaches for patients with acromegaly, with overall therapeutic goals of lowering GH levels and achieving normal IGF-1 levels, reducing tumor size, improving comorbidities, and minimizing mortality risk. Although surgery can lead to biochemical remission in some patients with acromegaly, many patients will continue to have uncontrolled disease and require additional treatment. We reviewed recently published reports and present a summary of the safety and efficacy of current treatment modalities for patients with acromegaly. A substantial proportion of patients who receive medical therapy or radiotherapy will have persistently elevated GH and/or IGF-1. Because of the serious health consequences of continued elevation of GH and IGF-1, there is a need to improve therapeutic approaches to optimize biochemical control, particularly in high-need patient populations for whom current treatment options provide limited benefit. This review discusses current treatment options for patients with acromegaly, limitations associated with each treatment approach, and areas within the current treatment algorithm, as well as patient populations for which improved therapeutic options are needed. Novel agents in development were also highlighted, which have the potential to improve management of patients with uncontrolled or persistent acromegaly.
Degree-Based Approach for the Maximum Clique Problem
胡新; 王丽珍; 何瓦特; 姚华传
2013-01-01
The maximum clique problem (MCP) is a significant problem in computer science because of its complexity, its challenging nature, and its extensive applications in data mining and other fields. This paper puts forward a new degree-based approach to finding the maximum clique in a given graph G. Based on the observation that the vertices of a maximum clique tend to have relatively large degrees, the new approach solves for the maximum clique recursively, starting from the vertex of largest degree in G. To further improve the efficiency of the algorithm, three pruning strategies based on the characteristics of the graph and of the maximum clique are presented. The new approach is proved to be correct and complete; its time complexity is O(1.442^n) and its space complexity is O(n^2). Finally, an empirical study verifies the effectiveness and efficiency of the new approach and its pruning strategies.
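The paper's recursive algorithm and pruning strategies are not reproduced here; the following is a simplified greedy degree-based heuristic in the same spirit: repeatedly pick the highest-degree vertex among the remaining candidates, then restrict the candidates to its neighbours. It returns a maximal clique, not necessarily a maximum one.

```python
def greedy_clique(adj):
    """Greedy degree-based clique heuristic.
    adj: dict mapping each vertex to the set of its neighbours.
    Repeatedly adds the candidate with the most neighbours among the
    remaining candidates, then keeps only candidates adjacent to every
    vertex chosen so far. Returns a set of vertices forming a clique."""
    candidates = set(adj)
    clique = set()
    while candidates:
        v = max(candidates, key=lambda u: len(adj[u] & candidates))
        clique.add(v)
        candidates &= adj[v]
    return clique
```

On a triangle {1, 2, 3} with a pendant vertex 4 attached to 3, the heuristic starts from the degree-3 vertex 3 and recovers the triangle, which here is also the maximum clique.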
An articulatorily constrained, maximum entropy approach to speech recognition and speech coding
Hogden, J.
1996-12-31
Hidden Markov models (HMMs) are among the most popular tools for performing computer speech recognition. One of the primary reasons that HMMs typically outperform other speech recognition techniques is that the parameters used for recognition are determined by the data, not by preconceived notions of what the parameters should be. This makes HMMs better able to deal with intra- and inter-speaker variability despite the limited knowledge of how speech signals vary and despite the often limited ability to correctly formulate rules describing variability and invariance in speech. In fact, it is often the case that when HMM parameter values are constrained using the limited knowledge of speech, recognition performance decreases. However, the structure of an HMM has little in common with the mechanisms underlying speech production. Here, the author argues that by using probabilistic models that more accurately embody the process of speech production, he can create models that have all the advantages of HMMs, but that should more accurately capture the statistical properties of real speech samples, presumably leading to more accurate speech recognition. The model he will discuss uses the fact that speech articulators move smoothly and continuously. Before discussing how to use articulatory constraints, he will give a brief description of HMMs. This will allow him to highlight the similarities and differences between HMMs and the proposed technique.
Approaching the Heisenberg Limit without Single-Particle Detection.
Davis, Emily; Bentsen, Gregory; Schleier-Smith, Monika
2016-02-05
We propose an approach to quantum phase estimation that can attain precision near the Heisenberg limit without requiring single-particle-resolved state detection. We show that the "one-axis twisting" interaction, well known for generating spin squeezing in atomic ensembles, can also amplify the output signal of an entanglement-enhanced interferometer to facilitate readout. Applying this interaction-based readout to oversqueezed, non-Gaussian states yields a Heisenberg scaling in phase sensitivity, which persists in the presence of detection noise as large as the quantum projection noise of an unentangled ensemble. Even in dissipative implementations-e.g., employing light-mediated interactions in an optical cavity or Rydberg dressing-the method significantly relaxes the detection resolution required for spectroscopy beyond the standard quantum limit.
Yu, Hwa-Lung; Wang, Chih-Hsin
2013-02-05
Understanding the daily changes in ambient air quality concentrations is important to assessing human exposure and environmental health. However, the fine temporal scales (e.g., hourly) involved in this assessment often lead to high variability in air quality concentrations. This is because of the complex short-term physical and chemical mechanisms among the pollutants. Consequently, high heterogeneity is usually present in not only the averaged pollution levels, but also the intraday variance levels of the daily observations of ambient concentration across space and time. This characteristic decreases the estimation performance of common techniques. This study proposes a novel quantile-based Bayesian maximum entropy (QBME) method to account for the nonstationary and nonhomogeneous characteristics of ambient air pollution dynamics. The QBME method characterizes the spatiotemporal dependence among the ambient air quality levels based on their location-specific quantiles and accounts for spatiotemporal variations using a local weighted smoothing technique. The epistemic framework of the QBME method allows researchers to further consider the uncertainty of space-time observations. This study presents the spatiotemporal modeling of daily CO and PM10 concentrations across Taiwan from 1998 to 2009 using the QBME method. Results show that the QBME method can effectively improve estimation accuracy in terms of lower mean absolute errors and standard deviations over space and time, especially for pollutants with strong nonhomogeneous variances across space. In addition, the epistemic framework allows researchers to assimilate site-specific secondary information where observations are absent because of the common preferential sampling issues of environmental data. The proposed QBME method provides a practical and powerful framework for the spatiotemporal modeling of ambient pollutants.
Maximum shortening velocity of lymphatic muscle approaches that of striated muscle.
Zhang, Rongzhen; Taucer, Anne I; Gashev, Anatoliy A; Muthuchamy, Mariappan; Zawieja, David C; Davis, Michael J
2013-11-15
Lymphatic muscle (LM) is widely considered to be a type of vascular smooth muscle, even though LM cells uniquely express contractile proteins from both smooth muscle and cardiac muscle. We tested the hypothesis that LM exhibits an unloaded maximum shortening velocity (Vmax) intermediate between that of smooth muscle and cardiac muscle. Single lymphatic vessels were dissected from the rat mesentery, mounted in a servo-controlled wire myograph, and subjected to isotonic quick release protocols during spontaneous or agonist-evoked contractions. After maximal activation, isotonic quick releases were performed at both the peak and plateau phases of contraction. Vmax was 0.48 ± 0.04 lengths (L)/s at the peak: 2.3 times higher than that of mesenteric arteries and 11.4 times higher than mesenteric veins. In cannulated, pressurized lymphatic vessels, shortening velocity was determined from the maximal rate of constriction [rate of change in internal diameter (-dD/dt)] during spontaneous contractions at optimal preload and minimal afterload; peak -dD/dt exceeded that obtained during any of the isotonic quick release protocols (2.14 ± 0.30 L/s). Peak -dD/dt declined with pressure elevation or activation using substance P. Thus, isotonic methods yielded Vmax values for LM in the mid to high end (0.48 L/s) of those recorded for phasic smooth muscle (0.05-0.5 L/s), whereas isobaric measurements yielded values (>2.0 L/s) that overlapped the midrange of values for cardiac muscle (0.6-3.3 L/s). Our results challenge the dogma that LM is classical vascular smooth muscle, and its unusually high Vmax is consistent with the expression of cardiac muscle contractile proteins in the lymphatic vessel wall.
A seqlet-based maximum entropy Markov approach for protein secondary structure prediction
DONG; Qiwen; WANG; Xiaolong; LIN; Lei; GUAN; Yi
2005-01-01
A novel method for predicting the secondary structures of proteins from amino acid sequence has been presented. The protein secondary structure seqlets that are analogous to the words in natural language have been extracted. These seqlets capture the relationship between amino acid sequence and the secondary structures of proteins and further form the protein secondary structure dictionary. To be precise, the dictionary is organism-specific. Protein secondary structure prediction is formulated as an integrated word segmentation and part-of-speech tagging problem. The word-lattice is used to represent the results of the word segmentation, and the maximum entropy model is used to calculate the probability of a seqlet tagged as a certain secondary structure type. The method is Markovian in the seqlets, permitting efficient exact calculation of the posterior probability distribution over all possible word segmentations and their tags by the Viterbi algorithm. The optimal segmentations and their tags are computed as the results of protein secondary structure prediction. The method is applied to predict the secondary structures of proteins of four organisms respectively and compared with the PHD method. The results show that the performance of this method is higher than that of PHD by about 3.9% Q3 accuracy and 4.6% SOV accuracy. Combining with the local similarity protein sequences that are obtained by BLAST can give better prediction. The method is also tested on the 50 CASP5 target proteins with Q3 accuracy 78.9% and SOV accuracy 77.1%. A web server for protein secondary structure prediction has been constructed and is available at http://www.insun.hit.edu.cn:81/demos/biology/index.html.
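The seqlet dictionary and maximum entropy model are specific to the paper, but the decoding step it names is the standard Viterbi algorithm. A generic sketch over a toy tag set follows; the states, observations, and probabilities are invented for illustration and have nothing to do with the paper's trained model.

```python
import math

def viterbi(obs, states, log_start, log_trans, log_emit):
    """Standard Viterbi decoding: the most probable state (tag) sequence
    for the observations under a first-order Markov model.
    All probabilities are given in log space to avoid underflow."""
    # Initialize with start and first-symbol emission scores.
    V = [{s: log_start[s] + log_emit[s][obs[0]] for s in states}]
    back = []
    for o in obs[1:]:
        prev = V[-1]
        cur, bp = {}, {}
        for s in states:
            # Best predecessor for state s at this position.
            best_p, best_q = max((prev[q] + log_trans[q][s], q) for q in states)
            cur[s] = best_p + log_emit[s][o]
            bp[s] = best_q
        V.append(cur)
        back.append(bp)
    # Trace back from the best final state.
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for bp in reversed(back):
        path.append(bp[path[-1]])
    return list(reversed(path))
```

With uniform transitions and strongly state-specific emissions, the decoded path simply follows the emissions, which makes the sketch easy to sanity-check by hand.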
Maximum ADPE Approach for a High Rate CCSDS Return Link Processing System
Krimchansky, Alexander; Moe, Brian; Erickson, David
1996-01-01
The earth observing system data and operations system (EDOS) multi-mission data processing and distribution system for the earth observing system is considered. The EDOS was based on the Consultative Committee for Space Data Systems (CCSDS) protocols. The development included the challenge of developing and demonstrating a 150 Mbps CCSDS return link processing capability for the support of the first EDOS delivery. The approach used general-purpose automated data processing equipment (ADPE) and minimized the use of customized hardware. The way in which the system was developed is described. The principal design decisions and the performance benchmark results are presented.
Kaiadi, Mehrzad; Tunestål, Per; Johansson, Bengt
2010-01-01
High EGR rates combined with turbocharging has been identified as a promising way to increase the maximum load and efficiency of heavy-duty spark ignition natural gas engines. With stoichiometric conditions a three-way catalyst can be used, which means that regulated emissions can be kept at very low levels. Most heavy-duty NG engines are diesel engines converted for SI operation. These engines share components with the diesel engine, which puts limits on higher exh...
Comparative study on maximum residue limit standards of pesticides in peanuts
丁小霞; 李培武; 周海燕; 李娟; 白艺珍
2011-01-01
It is important to protect the health of consumers and to standardize agricultural products in the trading market. One essential step is to develop and implement scientific and applicable maximum residue limit (MRL) standards for pesticides. A comparative study of the maximum residue limit standards of pesticides in peanuts was carried out among China, the Codex Alimentarius Commission (CAC), the United States (a major peanut producer), and China's main peanut export destinations, Japan and the European Union. Corresponding suggestions were put forward after analyzing the problems in the maximum residue limit standards of pesticides in China.
Nguyen, Truong-Huy; El Outayek, Sarah; Lim, Sun Hee; Nguyen, Van-Thanh-Van
2017-10-01
Many probability distributions have been developed to model the annual maximum rainfall series (AMS). However, there is no general agreement as to which distribution should be used, owing to the lack of a suitable evaluation method. This paper therefore presents a general procedure for assessing systematically the performance of ten commonly used probability distributions in rainfall frequency analyses based on their descriptive as well as predictive abilities. This assessment procedure relies on an extensive set of graphical and numerical performance criteria to identify the most suitable models that could provide the most accurate and most robust extreme rainfall estimates. The proposed systematic assessment approach has been shown to be more efficient and more robust than the traditional model selection method based on only limited goodness-of-fit criteria. To test the feasibility of the proposed procedure, an illustrative application was carried out using 5-min, 1-h, and 24-h annual maximum rainfall data from a network of 21 raingages located in the Ontario region in Canada. Results have indicated that the GEV, GNO, and PE3 models were the best models for describing the distribution of daily and sub-daily annual maximum rainfalls in this region. The GEV distribution, however, was preferred to the GNO and PE3 because it was based on a more solid theoretical basis for representing the distribution of extreme random variables.
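As a minimal illustration of fitting a distribution to an annual maximum series, the sketch below uses the simpler Gumbel (EV1) model rather than the paper's preferred GEV, and a method-of-moments fit rather than the more robust L-moments; both simplifications are assumptions made purely for the example.

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def gumbel_moments_fit(annual_maxima):
    """Method-of-moments fit of the Gumbel (EV1) distribution, a common
    first pass before the full GEV. For a Gumbel variable,
    sd = scale * pi / sqrt(6) and mean = loc + gamma * scale."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    scale = math.sqrt(6.0 * var) / math.pi
    loc = mean - EULER_GAMMA * scale
    return loc, scale

def gumbel_quantile(loc, scale, T):
    """Rainfall depth with return period T years:
    x_T = loc - scale * ln(-ln(1 - 1/T))."""
    return loc - scale * math.log(-math.log(1.0 - 1.0 / T))
```

The quantile function is what an intensity-duration-frequency analysis ultimately needs: design depths for, say, 2-, 10-, and 100-year return periods, which increase monotonically with T.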
Harry X.ZHANG; Shaw L.YU
2008-01-01
One of the key challenges in the total maximum daily load (TMDL) development process is how to define the critical condition for a receiving waterbody. The main concern in using a continuous simulation approach is the absence of any guarantee that the most critical condition will be captured during the selected representative hydrologic period, given the scarcity of long-term continuous data. The objectives of this paper are to clearly address the critical condition in the TMDL development process and to compare continuous and event-based approaches for defining the critical condition during TMDL development for a waterbody impacted by both point and nonpoint source pollution. A practical, event-based critical flow-storm (CFS) approach was developed that explicitly addresses the critical condition as a combination of a low stream flow and a storm event of a selected magnitude, both having certain frequencies of occurrence. This paper illustrates the CFS concept and provides its theoretical basis using a derived analytical conceptual model. The CFS approach clearly defines a critical condition, obtains reasonable results, and can be considered an alternative method in TMDL development.
Held, Louis F.; Pritchard, Ernest I.
1946-01-01
An investigation was conducted to evaluate the possibilities of utilizing the high-performance characteristics of triptane and xylidines blended with 28-R fuel in order to increase fuel economy by the use of high compression ratios and maximum-economy spark setting. Full-scale single-cylinder knock tests were run with 20 deg B.T.C. and maximum-economy spark settings at compression ratios of 6.9, 8.0, and 10.0, and with two inlet-air temperatures. The fuels tested consisted of triptane, four triptane and one xylidines blend with 28-R, and 28-R fuel alone. Indicated specific fuel consumption at lean mixtures was decreased approximately 17 percent at a compression ratio of 10.0 and maximum-economy spark setting, as compared to that obtained with a compression ratio of 6.9 and normal spark setting. When compression ratio was increased from 6.9 to 10.0 at an inlet-air temperature of 150 F, normal spark setting, and a fuel-air ratio of 0.065, 55-percent triptane was required with 28-R fuel to maintain the knock-limited brake power level obtained with 28-R fuel at a compression ratio of 6.9. Brake specific fuel consumption was decreased 17.5 percent at a compression ratio of 10.0 relative to that obtained at a compression ratio of 6.9. Approximately similar results were noted at an inlet-air temperature of 250 F. For concentrations up through at least 20 percent, triptane can be more efficiently used at normal than at maximum-economy spark setting to maintain a constant knock-limited power output over the range of compression ratios tested.
Approaching the ideal elastic strain limit in silicon nanowires.
Zhang, Hongti; Tersoff, Jerry; Xu, Shang; Chen, Huixin; Zhang, Qiaobao; Zhang, Kaili; Yang, Yong; Lee, Chun-Sing; Tu, King-Ning; Li, Ju; Lu, Yang
2016-08-01
Achieving high elasticity for silicon (Si) nanowires, one of the most important and versatile building blocks in nanoelectronics, would enable their application in flexible electronics and bio-nano interfaces. We show that vapor-liquid-solid-grown single-crystalline Si nanowires with diameters of ~100 nm can be repeatedly stretched above 10% elastic strain at room temperature, approaching the theoretical elastic limit of silicon (17 to 20%). A few samples even reached ~16% tensile strain, with estimated fracture stress up to ~20 GPa. The deformations were fully reversible and hysteresis-free under loading-unloading tests with varied strain rates, and the failures still occurred in brittle fracture, with no visible sign of plasticity. The ability to achieve this "deep ultra-strength" for Si nanowires can be attributed mainly to their pristine, defect-scarce, nanosized single-crystalline structure and atomically smooth surfaces. This result indicates that semiconductor nanowires could have ultra-large elasticity with tunable band structures for promising "elastic strain engineering" applications.
Ke, Jau-Chuan; Lin, Chuen-Horng
2008-11-01
We consider the M[x]/G/1 queueing system, in which the server operates under an N policy with a single vacation. As soon as the system becomes empty, the server leaves for a vacation of random length V. When he returns from the vacation and the system size is greater than or equal to a threshold value N, he starts to serve the waiting customers. If he finds fewer than N customers, he waits in the system until the system size reaches or exceeds N. The server is subject to breakdowns according to a Poisson process, and his repair time obeys an arbitrary distribution. We use the maximum entropy principle to derive approximate formulas for the steady-state probability distributions of the queue length. We perform a comparative analysis between the approximate results and established exact results for various batch-size, vacation-time, service-time and repair-time distributions. We demonstrate that the maximum entropy approach is efficient enough for practical purposes and is a feasible method for approximating the solution of complex queueing systems.
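The maximum entropy principle invoked in this abstract has a simple closed-form illustration: among all distributions on the non-negative integers with a fixed mean queue length, entropy is maximized by the geometric distribution with parameter r = L/(1 + L). The sketch below is a minimal numerical check of that fact, not the paper's M[x]/G/1 derivation; the truncation length and the alternative distribution are arbitrary illustrative choices.

```python
import math

def entropy(p):
    """Shannon entropy of a discrete distribution (natural log)."""
    return -sum(x * math.log(x) for x in p if x > 0)

L = 2.0                      # fixed mean queue length (illustrative)
r = L / (1 + L)              # geometric parameter giving mean L
N = 200                      # truncation; the neglected tail mass is ~1e-35
geom = [(1 - r) * r ** n for n in range(N)]

# An arbitrary alternative distribution with the same mean (mass at 0 and 4):
alt = [0.5, 0.0, 0.0, 0.0, 0.5]

mean_geom = sum(n * p for n, p in enumerate(geom))
h_geom, h_alt = entropy(geom), entropy(alt)
# The geometric distribution attains the larger entropy for the same mean.
```
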
Riley, Pete; Mikic, Z.; Linker, J. A.
2003-01-01
In this study we describe a series of MHD simulations covering the time period from 12 January 1999 to 19 September 2001 (Carrington Rotation 1945 to 1980). This interval coincided with: (1) the Sun's approach toward solar maximum; and (2) Ulysses' second descent to the southern polar regions, rapid latitude scan, and arrival into the northern polar regions. We focus on the evolution of several key parameters during this time, including the photospheric magnetic field, the computed coronal hole boundaries, the computed velocity profile near the Sun, and the plasma and magnetic field parameters at the location of Ulysses. The model results provide a global context for interpreting the often complex in situ measurements. We also present a heuristic explanation of stream dynamics to describe the morphology of interaction regions at solar maximum and contrast it with the picture that resulted from Ulysses' first orbit, which occurred during more quiescent solar conditions. The simulation results described here are available at: http://sun.saic.com.
Jorge Pereira
2015-12-01
Biological invasion by exotic organisms has become a key issue, a concern associated with the deep impacts of such processes on several domains. A better understanding of the processes, the identification of more susceptible areas, and the definition of preventive or mitigation measures are identified as critical for the purpose of reducing the associated impacts. The use of species distribution modeling might help in identifying areas that are more susceptible to invasion. This paper presents preliminary results on assessing the susceptibility to invasion by the exotic species Acacia dealbata Mill. in the Ceira river basin. The results are based on the maximum entropy modeling approach, considered one of the correlative modeling techniques with the best predictive performance. Models whose validation is based on independent data sets show better performance; the evaluation here is based on the AUC of the ROC accuracy measure.
Heemstra de Groot, S.M.; Herrmann, O.E.
1990-01-01
An algorithm based on an alternative scheduling approach for iterative acyclic and cyclic DFGs (data-flow graphs) with limited resources that exploits inter- and intra-iteration parallelism is presented. The method is based on guiding the scheduling algorithm with the information supplied by a
Approaching the ideal elastic limit of metallic glasses
Tian, Lin; Cheng, Yong-Qiang; Shan, Zhi-Wei; Li, Ju; Wang, Cheng-cai; Han, Xiao-dong; Sun, Jun; Ma, Evan
2012-01-01
The ideal elastic limit is the upper bound to the stress and elastic strain a material can withstand. This intrinsic property has been widely studied for crystalline metals, both theoretically and experimentally. For metallic glasses, however, the ideal elastic limit remains poorly characterized and understood. Here we show that the elastic strain limit and the corresponding strength of submicron-sized metallic glass specimens are about twice as high as the already impressive elastic limit ob...
Entropy-limited hydrodynamics: a novel approach to relativistic hydrodynamics
Guercilena, Federico; Radice, David; Rezzolla, Luciano
2017-07-01
We present entropy-limited hydrodynamics (ELH): a new approach for the computation of numerical fluxes arising in the discretization of hyperbolic equations in conservation form. ELH is based on the hybridisation of an unfiltered high-order scheme with the first-order Lax-Friedrichs method. The activation of the low-order part of the scheme is driven by a measure of the locally generated entropy inspired by the artificial-viscosity method proposed by Guermond et al. (J. Comput. Phys. 230(11):4248-4267, 2011, doi: 10.1016/j.jcp.2010.11.043). Here, we present ELH in the context of high-order finite-differencing methods and of the equations of general-relativistic hydrodynamics. We study the performance of ELH in a series of classical astrophysical tests in general relativity involving isolated, rotating and nonrotating neutron stars, and including a case of gravitational collapse to black hole. We present a detailed comparison of ELH with the fifth-order monotonicity preserving method MP5 (Suresh and Huynh in J. Comput. Phys. 136(1):83-99, 1997, doi: 10.1006/jcph.1997.5745), one of the most common high-order schemes currently employed in numerical-relativity simulations. We find that ELH achieves comparable and, in many of the cases studied here, better accuracy than more traditional methods at a fraction of the computational cost (up to ~50% speedup). Given its accuracy and its simplicity of implementation, ELH is a promising framework for the development of new special- and general-relativistic hydrodynamics codes well adapted for massively parallel supercomputers.
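The core ELH idea, blending an unfiltered high-order flux with the dissipative Lax-Friedrichs flux, can be sketched on a scalar 1D problem. The sketch below uses the inviscid Burgers equation with a simple central flux standing in for the high-order scheme, and a fixed blending weight `theta`; in ELH proper, `theta` would be set locally by the entropy-production measure, which is not reproduced here.

```python
import math

def lax_friedrichs_flux(uL, uR, alpha):
    # first-order, dissipative flux; alpha = local max wave speed
    return 0.5 * (0.5 * uL * uL + 0.5 * uR * uR) - 0.5 * alpha * (uR - uL)

def high_order_flux(uL, uR):
    # stand-in for the unfiltered high-order flux (central average here)
    return 0.5 * (0.5 * uL * uL + 0.5 * uR * uR)

def hybrid_flux(uL, uR, theta):
    # ELH-style blend: theta = 0 pure high-order, theta = 1 pure Lax-Friedrichs
    alpha = max(abs(uL), abs(uR))
    return (1 - theta) * high_order_flux(uL, uR) + theta * lax_friedrichs_flux(uL, uR, alpha)

def step(u, dx, dt, theta):
    # one conservative forward-Euler update on a periodic grid
    n = len(u)
    F = [hybrid_flux(u[i], u[(i + 1) % n], theta) for i in range(n)]
    return [u[i] - dt / dx * (F[i] - F[i - 1]) for i in range(n)]

n, dx, dt, theta = 200, 1.0 / 200, 0.002, 0.5
u = [math.sin(2 * math.pi * i * dx) for i in range(n)]
u1 = step(u, dx, dt, theta)
```

Because the update is written in flux-difference form on a periodic grid, the total of `u` is conserved to round-off regardless of the blending weight, which is the property the hybridisation must preserve.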
Grujicic, M.; Ramaswami, S.; Snipes, J. S.; Yavari, R.; Yen, C.-F.; Cheeseman, B. A.
2015-01-01
Our recently developed multi-physics computational model for the conventional gas metal arc welding (GMAW) joining process has been upgraded with respect to its predictive capabilities regarding the process optimization for the attainment of maximum ballistic limit within the weld. The original model consists of six modules, each dedicated to handling a specific aspect of the GMAW process, i.e., (a) electro-dynamics of the welding gun; (b) radiation-/convection-controlled heat transfer from the electric arc to the workpiece and mass transfer from the filler metal consumable electrode to the weld; (c) prediction of the temporal evolution and the spatial distribution of thermal and mechanical fields within the weld region during the GMAW joining process; (d) the resulting temporal evolution and spatial distribution of the material microstructure throughout the weld region; (e) spatial distribution of the as-welded material mechanical properties; and (f) spatial distribution of the material ballistic limit. In the present work, the model is upgraded through the introduction of the seventh module in recognition of the fact that identification of the optimum GMAW process parameters relative to the attainment of the maximum ballistic limit within the weld region entails the use of advanced optimization and statistical sensitivity analysis methods and tools. The upgraded GMAW process model is next applied to the case of butt welding of MIL A46100 (a prototypical high-hardness armor-grade martensitic steel) workpieces using filler metal electrodes made of the same material. The predictions of the upgraded GMAW process model pertaining to the spatial distribution of the material microstructure and ballistic limit-controlling mechanical properties within the MIL A46100 butt weld are found to be consistent with general expectations and prior observations.
Chen, Xi Lin; De Santis, Valerio; Umenei, Aghuinyue Esai
2014-07-07
In this study, the maximum received power obtainable through wireless power transfer (WPT) by a small receiver (Rx) coil from a relatively large transmitter (Tx) coil is numerically estimated in the frequency range from 100 kHz to 10 MHz based on human body exposure limits. Analytical calculations were first conducted to determine the worst-case coupling between a homogeneous cylindrical phantom with a radius of 0.65 m and a Tx coil positioned 0.1 m away with the radius ranging from 0.25 to 2.5 m. Subsequently, three high-resolution anatomical models were employed to compute the peak induced field intensities with respect to various Tx coil locations and dimensions. Based on the computational results, scaling factors which correlate the cylindrical phantom and anatomical model results were derived. Next, the optimal operating frequency, at which the highest transmitter source power can be utilized without exceeding the exposure limits, is found to be around 2 MHz. Finally, a formulation is proposed to estimate the maximum obtainable power of WPT in a typical room scenario while adhering to the human body exposure compliance mandates.
Advantages and limitations of the 'worst case scenario' approach in IMPT treatment planning.
Casiraghi, M; Albertini, F; Lomax, A J
2013-03-07
The 'worst case scenario' (also known as the minimax approach in optimization terms) is a common approach to model the effect of delivery uncertainties in proton treatment planning. Using the 'dose-error-bar distribution' previously reported by our group as an example, we have investigated in more detail one of the underlying assumptions of this method. That is, the dose distributions calculated for a limited number of worst case patient positioning scenarios (i.e. limited number of shifts sampled on a spherical surface) represent the worst dose distributions that can occur during the patient treatment under setup uncertainties. By uniformly sampling patient shifts from anywhere within a spherical error-space, a number of treatment scenarios have been simulated and dose deviations from the nominal dose distribution have been computed. The dose errors from these simulations (comprehensive approach) have then been compared to the dose-error-bar approach previously reported (surface approximation) using both point-by-point and dose- and error-volume-histogram analysis (DVH/EVHs). This comparison has been performed for two different clinical cases treated using intensity modulated proton therapy (IMPT): a skull-base and a spinal-axis tumor. Point-by-point evaluation shows that the surface approximation leads to a correct estimation (95% accuracy) of the potential dose errors for the 96% and 85% of the irradiated voxels, for the two investigated cases respectively. We also found that the voxels for which the surface approximation fails are generally localized close to sharp soft tissue-bone interfaces and air cavities. Moreover, analysis of EVHs and DVHs for the two cases shows that the percentage of voxels of a given volume of interest potentially affected by a certain maximum dose error is correctly estimated using the surface approximation and that this approach also accurately predicts the upper and lower bounds of the DVH curves that can occur under positioning
Population pressure on coral atolls: trends and approaching limits.
Rapaport, M
1990-09-01
Trends and approaching limits of population pressure on coral atolls are discussed by examining the atoll environment in terms of physical geography, production systems, and resource distribution. Atoll populations are grouped as dependent and independent, and demographic trends in population growth, migration, urbanization, and political dependency are reviewed. Examination of the carrying capacity includes a dynamic model, the influences of the West, and philosophical considerations. The carrying capacity is the "maximal population supportable in a given area". Traditional models are criticized for failing to account for external linkages. The proposed model is dynamic and considers perceived needs and overseas linkages. It also explains regional disparities in population distribution, and provides a continuing model for population movement from outer islands to district centers and mainland areas. Because of increased expectations and perceived needs, there is a lower carrying capacity for outlying areas and an expanded capacity in district centers. This leads to urbanization, emigration, and carrying-capacity overshoot in regional and mainland areas. Policy intervention is necessary at the regional and island-community level. Atolls, which are islands surrounding deep lagoons, exist in archipelagoes across the oceans and are rich in aquatic life. The balance in this small land area with a vulnerable ecosystem may be easily disturbed by scarce water supplies, barren soils, future sea-level rise, hurricanes, and tsunamis. Traditionally, fisheries and horticulture (pit-taro, coconuts, and breadfruit) have sustained populations, but modern influences such as blasting, reef mining, new industrial technologies, population pressure, and urbanization threaten the balance. Population pressure, which has led to pollution, epidemics, malnutrition, crime, social disintegration, and foreign dependence, is evident in the areas of Tuvalu, Kiribati
Lee, Chieh-Han; Yu, Hwa-Lung; Chien, Lung-Chang
2014-05-01
Dengue fever has been identified as one of the most widespread vector-borne diseases in tropical and sub-tropical regions. In the last decade, dengue has been an emerging infectious disease epidemic in Taiwan, especially in the southern area, where incidence is high every year. For the purpose of disease prevention and control, an early warning system is urgently needed. Previous studies have shown significant relationships between climate variables, in particular rainfall and temperature, and the temporal epidemic patterns of dengue cases. However, the transmission of dengue fever is a complex interactive process whose composite space-time effects have mostly been understated. This study proposes a one-week-ahead warning system for dengue fever epidemics in southern Taiwan that considers nonlinear associations between weekly dengue cases and meteorological factors across space and time. The early warning system is based on an integration of a distributed lag nonlinear model (DLNM) and stochastic Bayesian Maximum Entropy (BME) analysis. The study identified the most significant meteorological measures, including weekly minimum temperature and maximum 24-hour rainfall with a continuous 15-week lagged time, for dengue case variation under conditions of uncertainty. Subsequently, the combination of nonlinear lagged effects of climate variables and a space-time dependence function is implemented via a Bayesian framework to predict dengue fever occurrences in southern Taiwan during 2012. The results show the early warning system is useful for providing potential outbreak spatio-temporal prediction of the dengue fever distribution. In conclusion, the proposed approach can provide a practical disease-control tool for environmental regulators seeking more effective strategies for dengue fever prevention.
A. P. Tran
2013-07-01
The vertical profile of shallow unsaturated-zone soil moisture plays a key role in many hydro-meteorological and agricultural applications. We propose a closed-loop data assimilation procedure based on the maximum likelihood ensemble filter algorithm to update the vertical soil moisture profile from time-lapse ground-penetrating radar (GPR) data. A hydrodynamic model is used to propagate the system state in time, and a radar electromagnetic model and petrophysical relationships link the state variable with the observation data, which enables us to directly assimilate the GPR data. Instead of using the surface soil moisture only, the approach allows the information of the whole soil moisture profile to be used for the assimilation. We validated our approach through a synthetic study. We constructed a synthetic soil column with a depth of 80 cm and analyzed the effects of soil type on the data assimilation by considering three soil types, namely loamy sand, silt and clay. The assimilation of GPR data was performed to solve the problem of unknown initial conditions. The numerical soil moisture profiles generated by the Hydrus-1D model were used by the GPR model to produce the "observed" GPR data. The results show that the soil moisture profile obtained by assimilating the GPR data is much better than that of an open-loop forecast. Compared to the loamy sand and silt, the updated soil moisture profile of the clay soil converges to the true state much more slowly. Decreasing the update interval from 60 down to 10 h only slightly improves the effectiveness of the GPR data assimilation for the loamy sand, but significantly for the clay soil. The proposed approach appears promising for improving real-time prediction of soil moisture profiles as well as for providing effective estimates of the unsaturated hydraulic properties at the field scale from time-lapse GPR measurements.
Vallotton, Nathalie; Price, Paul S
2016-05-17
This paper uses the maximum cumulative ratio (MCR) as part of a tiered approach to evaluate and prioritize the risk of acute ecological effects from combined exposures to the plant protection products (PPPs) measured in 3,099 surface water samples taken from across the United States. Assessments of the reported mixtures performed on a substance-by-substance basis and using a Tier One cumulative assessment based on the lowest acute ecotoxicity benchmark gave the same findings for 92.3% of the mixtures. These mixtures either did not indicate a potential risk for acute effects or included one or more individual PPPs with concentrations in excess of their benchmarks. A Tier Two assessment using a trophic-level approach was applied to evaluate the remaining 7.7% of the mixtures. This assessment reduced the number of mixtures of concern by eliminating the combination of endpoints from multiple trophic levels, identified invertebrates and nonvascular plants as the most susceptible nontarget organisms, and indicated that only a very limited number of PPPs drove the potential concerns. The combination of the measures of cumulative risk and the MCR enabled the identification of a small subset of mixtures where a potential risk would be missed in substance-by-substance assessments.
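The MCR used in this abstract has a simple definition: the hazard index (the sum of the individual hazard quotients, assuming concentration addition) divided by the largest single-substance hazard quotient. A minimal sketch with hypothetical concentrations and benchmarks follows; the function names and sample values are illustrative, not taken from the paper.

```python
def hazard_quotients(concentrations, benchmarks):
    # HQ_i = measured concentration / acute ecotoxicity benchmark
    return [c / b for c, b in zip(concentrations, benchmarks)]

def mcr(concentrations, benchmarks):
    hq = hazard_quotients(concentrations, benchmarks)
    hi = sum(hq)          # hazard index under concentration addition
    return hi / max(hq)   # 1 <= MCR <= number of mixture components

# Hypothetical 3-component mixture (values are illustrative only):
conc  = [0.5, 0.2, 0.1]   # measured concentrations, e.g. ug/L
bench = [1.0, 2.0, 4.0]   # per-substance acute benchmarks, same units
ratio = mcr(conc, bench)  # a value near 1 means one substance dominates
```

An MCR near 1 indicates that a single substance drives essentially all of the mixture's risk, so a substance-by-substance assessment would not miss much; values approaching the number of components indicate genuinely cumulative risk.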
An Index-Mismatch Scattering Approach to Optical Limiting
Exarhos, Gregory J.; Ferris, Kim F.; Windisch, Charles F.; Bozlee, Brian J.; Risser, Steven M.; Van Swam, Simone L.
2001-08-01
A densely packed bed of alkaline-earth fluoride particles percolated by a fluid medium has been investigated as a potential index-matched optical limiter in the spirit of a Christiansen-Shelyubskii filter. Marked optical limiting was observed through this transparent medium under conditions where the focused second-harmonic output of a Q-switched Nd:YAG laser was on the order of about 1 J/cm2. An open-aperture Z-scan technique was used to quantify the limiting behavior. In this case, the mechanism of optical limiting is thought to be a nonlinear shift in the fluid index of refraction, resulting in an index mismatch between the disparate phases at high laser fluence.
Credit card spending limit and personal finance: system dynamics approach
Mirjana Pejić Bach
2014-03-01
Credit cards have become one of the major means of conducting cashless transactions. However, they have a long-term impact on the well-being of their owners through the debt generated by credit card usage. Credit card issuers approve high credit limits to credit card owners, thereby influencing their credit burden. A system dynamics model has been used to model the behavior of a credit card owner in different scenarios according to the size of the credit limit. Experiments with the model demonstrated that a higher credit limit approved on the credit card decreases the budget available for spending in the long run. This is a contribution toward the evaluation of actions for credit limit control based on their consequences.
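The stock-and-flow logic described above can be sketched in a few lines. Every parameter here (spending fraction, interest rate, repayment rule) is a hypothetical parameterization for illustration, not the model from the paper; the sketch only reproduces the qualitative feedback that a higher approved limit raises steady-state debt and shrinks the budget left after debt service.

```python
def simulate(limit, months=120, income=1000.0, spend_frac=0.1,
             interest=0.02, repay_frac=0.3):
    """Toy stock-and-flow model: debt is the stock; spending, interest
    and repayment are the flows. All parameters are hypothetical."""
    debt = 0.0
    budget = income
    for _ in range(months):
        spending = spend_frac * max(limit - debt, 0.0)  # spend against remaining headroom
        debt = (debt + spending) * (1 + interest)       # new charges plus monthly interest
        repayment = min(debt, repay_frac * income)      # pay down up to a share of income
        debt -= repayment
        budget = income - repayment                     # cash left after debt service
    return debt, budget

low_debt, low_budget = simulate(limit=2000)
high_debt, high_budget = simulate(limit=8000)
```

Under these assumptions the higher limit settles at a much larger outstanding debt, and the monthly budget remaining after repayment is correspondingly smaller, mirroring the paper's qualitative finding.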
Culley, S.; Noble, S.; Yates, A.; Timbs, M.; Westra, S.; Maier, H. R.; Giuliani, M.; Castelletti, A.
2016-09-01
Many water resource systems have been designed assuming that the statistical characteristics of future inflows are similar to those of the historical record. This assumption is no longer valid due to large-scale changes in the global climate, potentially causing declines in water resource system performance, or even complete system failure. Upgrading system infrastructure to cope with climate change can require substantial financial outlay, so it might be preferable to optimize existing system performance when possible. This paper builds on decision scaling theory by proposing a bottom-up approach to designing optimal feedback control policies for a water system exposed to a changing climate. This approach not only describes optimal operational policies for a range of potential climatic changes but also enables an assessment of a system's upper limit of its operational adaptive capacity, beyond which upgrades to infrastructure become unavoidable. The approach is illustrated using the Lake Como system in Northern Italy—a regulated system with a complex relationship between climate and system performance. By optimizing system operation under different hydrometeorological states, it is shown that the system can continue to meet its minimum performance requirements for more than three times as many states as it can under current operations. Importantly, a single management policy, no matter how robust, cannot fully utilize existing infrastructure as effectively as an ensemble of flexible management policies that are updated as the climate changes.
M. F. Müller
2015-01-01
We introduce TopREML as a method to predict runoff signatures in ungauged basins. The approach is based on the use of linear mixed models with spatially correlated random effects. The nested nature of streamflow networks is taken into account by using water balance considerations to constrain the covariance structure of runoff and to account for the stronger spatial correlation between flow-connected basins. The restricted maximum likelihood (REML) framework generates the best linear unbiased predictor (BLUP) of both the predicted variable and the associated prediction uncertainty, even when incorporating observable covariates into the model. The method was successfully tested in cross-validation analyses on mean streamflow and runoff frequency in Nepal (sparsely gauged) and Austria (densely gauged), where it matched the performance of comparable methods in the prediction of the considered runoff signature, while significantly outperforming them in the prediction of the associated modeling uncertainty. TopREML's ability to combine deterministic and stochastic information to generate BLUPs of the prediction variable and its uncertainty makes it a particularly versatile method that can readily be applied in both densely gauged basins, where it takes advantage of spatial covariance information, and data-scarce regions, where it can rely on covariates, which are increasingly observable thanks to remote sensing technology.
Laínez, José M; Orcun, Seza; Pekny, Joseph F; Reklaitis, Gintaras V; Suvannasankha, Attaya; Fausel, Christopher; Anaissie, Elias J; Blau, Gary E
2014-01-01
Variable metabolism, dose-dependent efficacy, and a narrow therapeutic target of cyclophosphamide (CY) suggest that dosing based on individual pharmacokinetics (PK) will improve efficacy and minimize toxicity. Real-time individualized CY dose adjustment was previously explored using a maximum a posteriori (MAP) approach based on a five serum-PK sampling in patients with hematologic malignancy undergoing stem cell transplantation. The MAP approach resulted in an improved toxicity profile without sacrificing efficacy. However, extensive PK sampling is costly and not generally applicable in the clinic. We hypothesize that the assumption-free Bayesian approach (AFBA) can reduce sampling requirements, while improving the accuracy of results. Retrospective analysis of previously published CY PK data from 20 patients undergoing stem cell transplantation. In that study, Bayesian estimation based on the MAP approach of individual PK parameters was accomplished to predict individualized day-2 doses of CY. Based on these data, we used the AFBA to select the optimal sampling schedule and compare the projected probability of achieving the therapeutic end points. By optimizing the sampling schedule with the AFBA, an effective individualized PK characterization can be obtained with only two blood draws at 4 and 16 hours after administration on day 1. The second-day doses selected with the AFBA were significantly different than the MAP approach and averaged 37% higher probability of attaining the therapeutic targets. The AFBA, based on cutting-edge statistical and mathematical tools, allows an accurate individualized dosing of CY, with simplified PK sampling. This highly accessible approach holds great promise for improving efficacy, reducing toxicities, and lowering treatment costs. © 2013 Pharmacotherapy Publications, Inc.
Approaching the standard quantum limit of mechanical torque sensing
Kim, P. H.; Hauer, B. D.; Doolin, C.; Souris, F.; Davis, J. P.
2016-10-01
Reducing the moment of inertia improves the sensitivity of a mechanically based torque sensor, paralleling the reduction of mass in a force sensor, yet the correspondingly small displacements can be difficult to measure. To resolve this, we incorporate cavity optomechanics, which involves co-localizing an optical and a mechanical resonance. With the resulting enhanced readout, cavity-optomechanical torque sensors are now limited only by thermal noise. Further progress requires thermalizing such sensors to low temperatures, where sensitivity limitations are instead imposed by quantum noise. Here, by cooling a cavity-optomechanical torque sensor to 25 mK, we demonstrate a torque sensitivity of 2.9 yN m/√Hz. At just over a factor of ten above its quantum-limited sensitivity, such cryogenic optomechanical torque sensors will enable both static and dynamic measurements of integrated samples at the level of a few hundred spins.
Interacting Atomic Interferometry for Rotation Sensing Approaching the Heisenberg Limit
Ragole, Stephen; Taylor, Jacob M.
2016-11-01
Atom interferometers provide exquisite measurements of the properties of noninertial frames. While atomic interactions are typically detrimental to good sensing, efforts to harness entanglement to improve sensitivity remain tantalizing. Here we explore the role of interactions in an analogy between atomic gyroscopes and SQUIDs, motivated by recent experiments realizing ring-shaped traps for ultracold atoms. We explore the one-dimensional limit of these ring systems with a moving weak barrier, such as that provided by a blue-detuned laser beam. In this limit, we employ Luttinger liquid theory and find an analogy with the superconducting phase-slip qubit, in which the topological charge associated with persistent currents can be put into superposition. In particular, we find that strongly interacting atoms in such a system could be used for precision rotation sensing. We compare the performance of this new sensor to an equivalent noninteracting atom interferometer, and find improvements in sensitivity and bandwidth beyond the atomic shot-noise limit.
Rama, Aarti; Kesari, Shreekant; Das, Pradeep; Kumar, Vijay
2017-07-24
Extensive application of the routine insecticide dichlorodiphenyltrichloroethane (DDT) to control Phlebotomus argentipes (Diptera: Psychodidae), the proven vector of visceral leishmaniasis in India, has evoked the problem of resistance/tolerance against DDT, eventually nullifying DDT-dependent strategies to control this vector. Because tolerating an hour-long exposure to DDT is not challenging enough for the resistant P. argentipes, estimating susceptibility by exposing sand flies to the insecticide for just an hour becomes a trivial and futile task. Therefore, this bioassay study was carried out to investigate the maximum exposure time for which DDT-resistant P. argentipes can endure the effect of DDT and survive. The mortality rate of a laboratory-reared DDT-resistant strain of P. argentipes exposed to DDT was studied at discriminating time intervals of 60 min, and it was concluded that highly resistant sand flies could withstand up to 420 min of exposure to this insecticide. Additionally, the lethal time for female P. argentipes was observed to be higher than for males, suggesting that they are highly resistant to DDT's toxicity. Our results support the monitoring of tolerance limits with respect to time and hence point towards an urgent need to change the World Health Organization's protocol for susceptibility identification in resistant P. argentipes.
Explanatory Limitations of Cognitive-Developmental Approaches to Morality
Krebs, Dennis L.; Denton, Kathy
2006-01-01
In response to Gibbs' (see record 2006-08257-011) defense of neo-Kohlbergian models of morality, the authors question whether revisions in Kohlberg's model constitute a coherent refinement of the cognitive-developmental approach. The authors argue that neo-Kohlbergian measures of moral development assess an aspect of morality (the most…
Chandrasekhar Limit: An Elementary Approach Based on Classical Physics and Quantum Theory
Pinochet, Jorge; Van Sint Jan, Michael
2016-01-01
In a brief article published in 1931, Subrahmanyan Chandrasekhar made public an important astronomical discovery. In his article, the then young Indian astrophysicist introduced what is now known as the "Chandrasekhar limit." This limit establishes the maximum mass of a stellar remnant beyond which the repulsion force between electrons…
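The limit mentioned in this abstract can be estimated directly from fundamental constants using the standard n = 3 polytrope result, M_Ch = ω·(√(3π)/2)·(ħc/G)^(3/2)/(μ_e·m_H)², with μ_e = 2 for a carbon-oxygen white dwarf. A quick order-of-magnitude check using textbook constant values:

```python
import math

# CODATA-style constants (SI units)
hbar  = 1.054571817e-34   # reduced Planck constant, J s
c     = 2.99792458e8      # speed of light, m/s
G     = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
m_H   = 1.67262192e-27    # proton mass as the baryon mass scale, kg
M_sun = 1.98892e30        # solar mass, kg

omega3 = 2.018236         # Lane-Emden constant for the n = 3 polytrope
mu_e   = 2.0              # mean molecular weight per electron (C/O white dwarf)

M_ch = (omega3 * (math.sqrt(3 * math.pi) / 2)
        * (hbar * c / G) ** 1.5 / (mu_e * m_H) ** 2)
print(M_ch / M_sun)       # ≈ 1.4 solar masses
```

The result lands near the familiar 1.4 solar masses, confirming that the limit is fixed entirely by ħ, c, G and the baryon mass, with no free parameters beyond the composition through μ_e.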
Potvin, Jean; Goldbogen, Jeremy A; Shadwick, Robert E
2012-01-01
Bulk-filter feeding is an energetically efficient strategy for resource acquisition and assimilation, and facilitates the maintenance of extreme body size as exemplified by baleen whales (Mysticeti) and multiple lineages of bony and cartilaginous fishes. Among mysticetes, rorqual whales (Balaenopteridae) exhibit an intermittent ram filter feeding mode, lunge feeding, which requires the abandonment of body streamlining in favor of a high-drag, mouth-open configuration aimed at engulfing a very large amount of prey-laden water. Particularly while lunge feeding on krill (the most widespread prey preference among rorquals), the effort required during engulfment involves short bouts of high-intensity muscle activity that demand high metabolic output. We used computational modeling together with morphological and kinematic data on humpback (Megaptera novaeangliae), fin (Balaenoptera physalus), blue (Balaenoptera musculus) and minke (Balaenoptera acutorostrata) whales to estimate engulfment power output in comparison with standard metrics of metabolic rate. The simulations reveal that engulfment metabolism increases across the full body size of the larger rorqual species to nearly 50 times the basal metabolic rate of terrestrial mammals of the same body mass. Moreover, they suggest that the metabolism of the largest body sizes runs with significant oxygen deficits during mouth opening, namely 20% over maximum VO2 at the size of the largest blue whales, thus requiring significant contributions from anaerobic catabolism during a lunge and significant recovery after a lunge. Our analyses show that engulfment metabolism is also significantly lower for smaller adults, typically one-tenth to one-half VO2max. These results not only point to a physiological limit on maximum body size in this lineage, but also have major implications for the ontogeny of extant rorquals as well as the evolutionary pathways used by ancestral toothed whales to transition from hunting individual prey
Jean Potvin
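The ~50-fold figure above is stated relative to the basal metabolic rate (BMR) of terrestrial mammals of equal mass. A minimal sketch of that baseline, assuming Kleiber's allometric law with an approximate textbook coefficient of 3.4 W·kg^-0.75 (an assumption, not a number from this paper):

```python
def kleiber_bmr_watts(mass_kg):
    """Kleiber's law: BMR scales with the 3/4 power of body mass.
    The 3.4 W coefficient is an approximate textbook value for
    terrestrial mammals (an assumption, not taken from this paper)."""
    return 3.4 * mass_kg ** 0.75

# a hypothetical 100-tonne blue whale: the abstract's ~50x BMR figure
bmr_100t = kleiber_bmr_watts(100_000.0)      # ~19 kW
engulfment_power = 50.0 * bmr_100t           # ~0.96 MW
```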
Deutsch, Karol; Śledź, Janusz; Mazij, Mariusz; Ludwik, Bartosz; Labus, Michał; Karbarz, Dariusz; Pasicka, Bernadetta; Chrabąszcz, Michał; Śledź, Arkadiusz; Klank-Szafran, Monika; Vitali-Sendoz, Laura; Kameczura, Tomasz; Śpikowski, Jerzy; Stec, Piotr; Ujda, Marek; Stec, Sebastian
2017-06-01
Radiofrequency catheter ablation (RFCA) is an established effective method for the treatment of typical cavo-tricuspid isthmus (CTI)-dependent atrial flutter (AFL). The introduction of 3-dimensional electro-anatomic systems enables RFCA without fluoroscopy (No-X-Ray [NXR]). The aim of this study was to evaluate the feasibility and effectiveness of CTI RFCA during implementation of the NXR approach and the maximum voltage-guided (MVG) technique for ablation of AFL. Data were obtained from a prospective standardized multicenter ablation registry. Consecutive patients with a first RFCA for CTI-dependent AFL were recruited. Two navigation approaches (NXR and fluoroscopy-based as low as reasonably achievable [ALARA]) and 2 mapping and ablation techniques (MVG and pull-back technique [PBT]) were assessed. NXR + MVG (n = 164; age: 63.7 ± 9.5; 30% women), NXR + PBT (n = 55; age: 63.9 ± 10.7; 39% women), ALARA + MVG (n = 36; age: 64.2 ± 9.6; 39% women), and ALARA + PBT (n = 205; age: 64.7 ± 9.1; 30% women) were compared, respectively. All groups used a simplified 2-catheter femoral approach with 8-mm gold tip catheters (Osypka AG, Germany, or Biotronik, Germany) and 15 min of observation. The MVG technique was performed using step-by-step applications mapping the largest atrial signals within the CTI. Bidirectional block in CTI was achieved in 99% of all patients (P = NS between groups). In the NXR + MVG and NXR + PBT groups, the procedure time decreased (45.4 ± 17.6 and 47.2 ± 15.7 min vs. 52.6 ± 23.7 and 59.8 ± 24.0 min, P < .01) as compared to the ALARA + MVG and ALARA + PBT subgroups. In the NXR + MVG and NXR + PBT groups, 91% and 98% of the procedures were performed with complete elimination of fluoroscopy. The NXR approach was associated with a significant reduction in fluoroscopy exposure (from 0.2 ± 1.1 [NXR + PBT] and 0.3 ± 1.6 [NXR + MVG] to 7.7 ± 6.0 min [ALARA + MVG] and 9
The Preliminary Pollutant Limit Value Approach: Manual for Users
1988-07-01
regulated by Federal and State governments. From the PPLV viewpoint, the differentiation between carcinogens and non-carcinogens is reflected in the DT...Table B-1) for meaning of "sufficient" and "limited". alter the function or genetic structure of one-cell organisms or cellular culture from higher...1.78]2 / 2.44) is based on studies of the relative concentration of a test compound in the root and the xylem stream in the barley stem. Apparently
A Practical Approach for Parameter Identification with Limited Information
Zeni, Lorenzo; Yang, Guangya; Tarnowski, Germán Claudio;
2014-01-01
A practical parameter estimation procedure for a real excitation system is reported in this paper. The core algorithm is based on a genetic algorithm (GA), which estimates the parameters of a real AC brushless excitation system with limited information about the system. Practical considerations are integrated in the estimation procedure to reduce the complexity of the problem. The effectiveness of the proposed technique is demonstrated via real measurements. Moreover, the GA can converge to a satisfactory solution even when starting from large initial variation ranges of the estimated parameters.
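The abstract's core idea, a GA searching wide parameter ranges against a fitness measure, can be sketched with a minimal real-coded GA. The operators, rates, and quadratic test loss below are illustrative assumptions, not the authors' exciter-specific formulation:

```python
import random

def ga_estimate(loss, bounds, pop=40, gens=60, seed=1):
    """Minimal real-coded genetic algorithm: tournament selection,
    blend crossover, Gaussian mutation, and elitism.  A sketch of the
    general technique, not the paper's excitation-system setup."""
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    best = min(P, key=loss)
    for _ in range(gens):
        nxt = [best[:]]                               # keep the elite
        while len(nxt) < pop:
            a = min(rng.sample(P, 3), key=loss)       # tournament pick
            b = min(rng.sample(P, 3), key=loss)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # blend crossover
            for i, (lo, hi) in enumerate(bounds):         # bounded mutation
                if rng.random() < 0.2:
                    child[i] += rng.gauss(0, 0.05 * (hi - lo))
                    child[i] = min(max(child[i], lo), hi)
            nxt.append(child)
        P = nxt
        best = min(P, key=loss)
    return best

# toy "measurement misfit" with known optimum at (2, -1),
# searched over deliberately wide initial ranges
fit = ga_estimate(lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2,
                  [(-10.0, 10.0), (-10.0, 10.0)])
```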
A general approach to total repair cost limit replacement policies
F. Beichelt
2014-01-01
A common replacement policy for technical systems consists in replacing a system with a new one after its economic lifetime, i.e. at the moment when its long-run maintenance cost rate is minimal. However, the strict application of the economic lifetime does not take into account the individual deviations of maintenance cost rates of single systems from the average cost development. Hence, Beichelt proposed the total repair cost limit replacement policy: the system is replaced by a new one as soon as its total repair cost reaches or exceeds a given level. He modelled the repair cost development by functions of the Wiener process with drift. Here the same policy is considered under the assumption that the one-dimensional probability distribution of the process describing the repair cost development is given. In the examples analysed, applying the total repair cost limit replacement policy instead of the economic lifetime leads to cost savings of between 4% and 30%. Finally, it is illustrated how to include the reliability aspect in the policy.
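The policy lends itself to a small Monte Carlo illustration. The sketch below uses a discrete drifting random walk as a stand-in for the Wiener process with drift, with illustrative (assumed) cost parameters:

```python
import random

def cost_rate_under_limit(limit, drift=1.0, vol=0.5, price=100.0,
                          n_systems=2000, seed=7):
    """Monte Carlo estimate of the long-run cost rate under a total
    repair cost limit policy: each system accrues repair cost as a
    drifting random walk (a discrete stand-in for the Wiener process
    with drift) and is replaced, at cost 'price', once accumulated
    repairs reach 'limit'.  All parameter values are illustrative."""
    rng = random.Random(seed)
    total_cost = total_time = 0.0
    for _ in range(n_systems):
        t, c = 0, 0.0
        while c < limit:
            t += 1
            c += max(0.0, rng.gauss(drift, vol))  # period's repair outlay
        total_cost += c + price                   # repairs + replacement
        total_time += t
    return total_cost / total_time
```

A tight repair cost limit forces frequent replacements and a higher long-run cost rate than a generous one, which is the trade-off the policy optimizes.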
Embracing the limits of psychoanalysis: a dialogic approach to healing.
Sucharov, Maxwell
2009-04-01
This article outlines my essential paradigm as it relates to self psychology, how I arrived at it, and how I would position my perspective in the context of the larger psychoanalytic and scientific community. My dialogic complexity systems model is most closely aligned with the intersubjective systems theory of Atwood and Stolorow and was shown to have acquired its defining shape in the context of an in-depth exploration of the connection between the latter theory and Kohut's self psychology. My paradigm is part of the wider relational turn in contemporary psychoanalysis. I have characterized the evolution of my perspective as my continuous preoccupation with the deepening and refinement of my understanding of the limits of psychoanalytic theory and practice and the cultivation of a clinical attitude that allows me to fully embrace those limits, an attitude that combines the caring ambience of genuine dialogue with the spiritual calmness of nondual awareness. My perspective can, therefore, be understood as my ongoing attempt at unifying my intellect, my heart, and my spirit into one experiential whole. A dialogic complexity systems model grounded in a post-Cartesian nondual philosophy constitutes the explanatory reduction of my theory and philosophy as lived in real time.
[Limitation of therapeutic effort: Approach to a combined view].
Bueno Muñoz, M J
2013-01-01
Over the past few decades, we have witnessed that fewer and fewer people pass away at home and more and more do so in the hospital. More specifically, 20% of deaths now occur in an intensive care unit (ICU). However, death in the ICU has become a highly technical process. This sometimes gives rise to excesses, because the resources used are not proportionate to the purposes pursued (futility), and may create situations that do not respect the person's dignity throughout the death process. It is within this context that the clinical procedure called "limitation of the therapeutic effort" (LTE) is reviewed. This has become a true bridge between intensive care and palliative care. Its final goal is to guarantee a dignified and painless death for the terminally ill. Copyright © 2012 Elsevier España, S.L. y SEEIUC. All rights reserved.
A Dynamic Programming Approach To Length-Limited Huffman Coding
Golin, Mordecai
2008-01-01
The ``state-of-the-art'' in Length Limited Huffman Coding algorithms is the $\\Theta(ND)$-time, $\\Theta(N)$-space one of Hirschberg and Larmore, where $D\\le N$ is the length restriction on the code. This is a very clever, very problem specific, technique. In this note we show that there is a simple Dynamic-Programming (DP) method that solves the problem with the same time and space bounds. The fact that there was an $\\Theta(ND)$ time DP algorithm was previously known; it is a straightforward DP with the Monge property (which permits an order of magnitude speedup). It was not interesting, though, because it also required $\\Theta(ND)$ space. The main result of this paper is the technique developed for reducing the space. It is quite simple and applicable to many other problems modeled by DPs with the Monge property. We illustrate this with examples from web-proxy design and wireless mobile paging.
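The problem the DP solves can be made concrete in code. As a hedged sketch, the following implements the classic package-merge construction for length-limited prefix codes (a different standard algorithm for the same problem, not the Hirschberg-Larmore method or the space-reduced DP of this paper); it assumes $n \le 2^D$:

```python
def limited_huffman_lengths(freqs, D):
    """Optimal code lengths for a prefix code with maximum length D,
    via the package-merge construction.  Sketch; assumes len(freqs) <= 2**D."""
    n = len(freqs)
    if n == 1:
        return [1]
    assert n <= 2 ** D, "no prefix code of depth D exists for this many symbols"
    # each item: (weight, tuple of symbol indices it counts toward)
    leaves = sorted((w, (i,)) for i, w in enumerate(freqs))
    level = list(leaves)
    for _ in range(D - 1):
        # "package": combine adjacent pairs of the current level
        packages = [(level[i][0] + level[i + 1][0],
                     level[i][1] + level[i + 1][1])
                    for i in range(0, len(level) - 1, 2)]
        # "merge": recombine with fresh leaves for the shallower level
        level = sorted(leaves + packages)
    # the cheapest 2n-2 items at level 1 determine the code lengths
    lengths = [0] * n
    for _, syms in level[:2 * n - 2]:
        for s in syms:
            lengths[s] += 1
    return lengths
```

For equal-ish frequencies the depth cap binds (all lengths hit D); for skewed frequencies with a loose cap the result coincides with plain Huffman lengths.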
Dodrill, Michael J.; Yackulic, Charles B.; Kennedy, Theodore A.; Haye, John W
2016-01-01
The cold and clear water conditions present below many large dams create ideal conditions for the development of economically important salmonid fisheries. Many of these tailwater fisheries have experienced declines in the abundance and condition of large trout species, yet the causes of these declines remain uncertain. Here, we develop, assess, and apply a drift-foraging bioenergetics model to identify the factors limiting rainbow trout (Oncorhynchus mykiss) growth in a large tailwater. We explored the relative importance of temperature, prey quantity, and prey size by constructing scenarios where these variables, both singly and in combination, were altered. Predicted growth matched empirical mass-at-age estimates, particularly for younger ages, demonstrating that the model accurately describes how current temperature and prey conditions interact to determine rainbow trout growth. Modeling scenarios that artificially inflated prey size and abundance demonstrate that rainbow trout growth is limited by the scarcity of large prey items and overall prey availability. For example, shifting 10% of the prey biomass to the 13 mm (large) length class, without increasing overall prey biomass, increased lifetime maximum mass of rainbow trout by 88%. Additionally, warmer temperatures resulted in lower predicted growth at current and lower levels of prey availability; however, growth was similar across all temperatures at higher levels of prey availability. Climate change will likely alter flow and temperature regimes in large rivers with corresponding changes to invertebrate prey resources used by fish. Broader application of drift-foraging bioenergetics models to build a mechanistic understanding of how changes to habitat conditions and prey resources affect growth of salmonids will benefit management of tailwater fisheries.
Brigham-Grette, J.; Gualtieri, L.M.; Glushkova, O.Y.; Hamilton, T.D.; Mostoller, D.; Kotov, A.
2003-01-01
The Pekulney Mountains and adjacent Tanyurer River valley are key regions for examining the nature of glaciation across much of northeast Russia. Twelve new cosmogenic isotope ages and 14 new radiocarbon ages in concert with morphometric analyses and terrace stratigraphy constrain the timing of glaciation in this region of central Chukotka. The Sartan Glaciation (Last Glacial Maximum) was limited in extent in the Pekulney Mountains and dates to ~20,000 yr ago. Cosmogenic isotope ages > 30,000 yr as well as non-finite radiocarbon ages imply an estimated age no younger than the Zyryan Glaciation (early Wisconsinan) for large sets of moraines found in the central Tanyurer Valley. Slope angles on these loess-mantled ridges are less than a few degrees and crest widths are an order of magnitude greater than those found on the younger Sartan moraines. The most extensive moraines in the lower Tanyurer Valley are the most subdued, implying an even older, probably middle Pleistocene age. This research provides direct field evidence against Grosswald's Beringian ice-sheet hypothesis. © 2003 Elsevier Science (USA). All rights reserved.
Cohen, S.A.; Hosea, J.C.; Timberlake, J.R.
1984-10-19
A limiter with a specially contoured front face is provided. The front face of the limiter (the plasma-side face) is flat with a central indentation. In addition, the limiter shape is cylindrically symmetric so that the limiter can be rotated for greater heat distribution. This limiter shape accommodates the various power scrape-off distances λp, which depend on the parallel velocity, V∥, of the impacting particles.
Can we still comply with the maximum limit of 2°C? Approaches to a New Climate Contract
F. J. Radermacher
2014-10-01
The international climate policy is in trouble. CO2 emissions are rising instead of shrinking. The 2015 climate summit in Paris should lead to a global agreement, but what should be its design? In an earlier paper in Cadmus on the issue, the author outlined a contract formula based on the so-called ‘Copenhagen Accord’ that rests on a dynamic cap and an intelligent burden sharing between politics and the private sector. The private sector was brought into the deal via the idea of a voluntary climate neutrality of private emissions, culminating in a ‘Global Neutral’ promoted by the United Nations. All this was based on a global cap-and-trade system. For a number of reasons, it may be that a global cap-and-trade system cannot or will not be established. States may use other instruments to fulfil their promises. The present paper elaborates that even under such conditions, the basic proposal can still be implemented. This may prove useful for the Paris negotiations.
Abels, B; Klotz, E; Tomandl, B F; Kloska, S P; Lell, M M
2010-10-01
Perfusion CT (PCT) postprocessing commonly uses either the maximum slope (MS) or a variant of the deconvolution (DC) approach for modeling voxel-based time-attenuation curves. There is an ongoing discussion about the respective merits and limitations of both methods, frequently on the basis of theoretic reasoning or simulated data. We performed a qualitative and quantitative comparison of DC and MS by using identical source datasets and preprocessing parameters. From the PCT data of 50 patients with acute ischemic stroke, color maps of cerebral blood flow (CBF), cerebral blood volume (CBV), and various temporal parameters were calculated with software implementing both DC and MS algorithms. Color maps were qualitatively categorized. Quantitative region-of-interest-based measurements were made in nonischemic GM and WM, suspected penumbra, and suspected infarction core. Qualitative results, quantitative results, and PCT lesion sizes from DC and MS were statistically compared. CBF and CBV color maps based on DC and MS were of comparably high quality. Quantitative CBF and CBV values calculated by DC and MS were within the same range in nonischemic regions. In suspected penumbra regions, average CBF(DC) was lower than CBF(MS). In suspected infarction core regions, average CBV(DC) was similar to CBV(MS). Using adapted tissue-at-risk/nonviable-tissue thresholds, we found excellent correlation of DC and MS lesion sizes. DC and MS yielded comparable qualitative and quantitative results. Lesion sizes indicated by DC and MS showed excellent agreement when using adapted thresholds. In all cases, the same therapy decision would have been made.
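The maximum slope method admits a compact illustration. Under the simplifying no-venous-outflow assumption, CBF is proportional to the peak slope of the tissue time-attenuation curve divided by the peak arterial value; the curves below are made-up numbers, not patient data:

```python
def cbf_max_slope(tissue_curve, aif_curve, dt):
    """Maximum-slope estimate of flow:
        CBF ~ max d/dt C_tissue(t) / max C_artery(t)
    under the no-venous-outflow assumption.  'tissue_curve' and
    'aif_curve' are attenuation samples at interval dt (illustrative)."""
    slopes = [(b - a) / dt for a, b in zip(tissue_curve, tissue_curve[1:])]
    return max(slopes) / max(aif_curve)
```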
Multicore in Production: Advantages and Limits of the Multiprocess Approach
Binet, S; The ATLAS collaboration; Lavrijsen, W; Leggett, Ch; Lesny, D; Jha, M K; Severini, H; Smith, D; Snyder, S; Tatarkhanov, M; Tsulaia, V; van Gemmeren, P; Washbrook, A
2011-01-01
The shared memory architecture of multicore CPUs provides HENP developers with the opportunity to reduce the memory footprint of their applications by sharing memory pages between the cores in a processor. ATLAS pioneered the multi-process approach to parallelizing HENP applications. Using Linux fork() and the Copy On Write mechanism we implemented a simple event task farm, which allows sharing up to 50% of memory pages among event worker processes with negligible CPU overhead. By leaving the task of managing shared memory pages to the operating system, we have been able to run in parallel large reconstruction and simulation applications originally written to be run in a single thread of execution with little to no change to the application code. In spite of this, the process of validating athena multi-process for production took ten months of concentrated effort and is expected to continue for several more months. In general terms, we had two classes of problems in the multi-process port: merging the output fil...
Analysis of operational limit of an aircraft: An aeroelastic approach
Hasan, Md. Mehedi; Hassan, M. D. Mehedi; Sarrowar, S. M. Bayazid; Faisal, Kh. Md.; Ahmed, Sheikh Reaz, Dr.
2017-06-01
In the classical theory of elasticity, the external loading acting on a body is independent of the deformation of the body. In aeroelasticity, by contrast, aerodynamic forces depend on the attitude of the body relative to the flow. Aircraft are subjected to a range of static loads resulting from equilibrium or steady flight maneuvers such as a coordinated level turn, steady pitch and bank rates, and steady level flight. The interaction of these loads with the elastic forces of the aircraft structure creates aeroelastic phenomena. In this paper, we summarize recent developments in the area of aeroelasticity. A numerical approach has been applied to find the divergence speed, a static aeroelastic phenomenon, of a typical aircraft. This paper also presents graphical representations of the constraints on load factor and bank angle during different steady flight maneuvers, taking flexibility into account and comparing the results with the values obtained without flexibility. The effects of wing skin thickness, spar web thickness, and the position of the flexural axis of the wing on the divergence speed, as well as on load factor and bank angle, have also been examined using MATLAB.
Hsia, Wei-Shen
1987-01-01
A stochastic control model of the NASA/MSFC Ground Facility for Large Space Structures (LSS) control verification through the Maximum Entropy (ME) principle adopted in Hyland's method was presented. Using ORACLS, a computer program was implemented for this purpose. Four models were then tested and the results presented.
Ma, Hong -Hao [Chongqing Univ., Chongqing (People' s Republic of China); Wu, Xing -Gang [Chongqing Univ., Chongqing (People' s Republic of China); Ma, Yang [Chongqing Univ., Chongqing (People' s Republic of China); Brodsky, Stanley J. [Stanford Univ., Stanford, CA (United States); Mojaza, Matin [KTH Royal Inst. of Technology and Stockholm Univ., Stockholm (Sweden)
2015-05-26
A key problem in making precise perturbative QCD (pQCD) predictions is how to set the renormalization scale of the running coupling unambiguously at each finite order. The elimination of the uncertainty in setting the renormalization scale in pQCD will greatly increase the precision of collider tests of the Standard Model and the sensitivity to new phenomena. Renormalization group invariance requires that predictions for observables must also be independent of the choice of the renormalization scheme. The well-known Brodsky-Lepage-Mackenzie (BLM) approach cannot be easily extended beyond next-to-next-to-leading order of pQCD. Several suggestions have been proposed to extend the BLM approach to all orders. In this paper we discuss two distinct methods. One is based on the “Principle of Maximum Conformality” (PMC), which provides a systematic all-orders method to eliminate the scale and scheme ambiguities of pQCD. The PMC extends the BLM procedure to all orders using renormalization group methods; as an outcome, it significantly improves the pQCD convergence by eliminating renormalon divergences. An alternative method is the “sequential extended BLM” (seBLM) approach, which was primarily designed to improve the convergence of pQCD series. The seBLM, as originally proposed, introduces auxiliary fields and follows the pattern of the β0-expansion to fix the renormalization scale. However, the seBLM requires a recomputation of pQCD amplitudes including the auxiliary fields; due to the limited availability of calculations using these auxiliary fields, the seBLM has only been applied to a few processes at low orders. In order to avoid the complications of adding extra fields, we propose a modified version of seBLM which allows us to apply this method to higher orders. As a result, we then perform detailed numerical comparisons of the two alternative scale-setting approaches by investigating their predictions for the annihilation cross section ratio R
Lin, Whei-Min; Hong, Chih-Ming [Department of Electrical Engineering, National Sun Yat-Sen University, Kaohsiung 80424 (China)
2010-06-15
To achieve maximum power point tracking (MPPT) for wind power generation systems, the rotational speed of wind turbines should be adjusted in real time according to wind speed. In this paper, a Wilcoxon radial basis function network (WRBFN) with hill-climb searching (HCS) MPPT strategy is proposed for a permanent magnet synchronous generator (PMSG) with a variable-speed wind turbine. A high-performance online training WRBFN using a back-propagation learning algorithm with modified particle swarm optimization (MPSO) regulating controller is designed for a PMSG. The MPSO is adopted in this study to adapt to the learning rates in the back-propagation process of the WRBFN to improve the learning capability. The MPPT strategy locates the system operation points along the maximum power curves based on the dc-link voltage of the inverter, thus avoiding the generator speed detection. (author)
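The HCS component can be illustrated in isolation. Below is a minimal perturb-and-observe style hill climb over a made-up, single-peak power-vs-speed curve (illustrative only; the paper's actual strategy couples this with the WRBFN controller and dc-link voltage measurements):

```python
def hill_climb_mppt(power, w0, step=0.1, iters=200):
    """Hill-climb search for the operating point that maximizes power.
    'power' is a hypothetical power-vs-speed curve with a single peak;
    the step keeps perturbing in one direction and reverses on a drop."""
    w, p = w0, power(w0)
    direction = 1.0
    for _ in range(iters):
        w_new = w + direction * step
        p_new = power(w_new)
        if p_new < p:              # overshot the peak: reverse direction
            direction = -direction
        w, p = w_new, p_new        # accept the move either way
    return w

# toy curve with its maximum power point at w = 5
w_opt = hill_climb_mppt(lambda w: -(w - 5.0) ** 2 + 100.0, w0=1.0)
```

Once near the peak the tracker oscillates within one step of the optimum, which is the usual accuracy/dynamics trade-off of fixed-step hill climbing that the paper's adaptive scheme addresses.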
Beltrán, M C; Romero, T; Althaus, R L; Molina, M P
2013-05-01
The Charm maximum residue limit β-lactam and tetracycline test (Charm MRL BLTET; Charm Sciences Inc., Lawrence, MA) is an immunoreceptor assay utilizing Rapid One-Step Assay lateral flow technology that detects β-lactam or tetracycline drugs in raw commingled cow milk at or below European Union maximum residue levels (EU-MRL). The Charm MRL BLTET test procedure was recently modified (dilution in buffer and longer incubation) by the manufacturers to be used with raw ewe and goat milk. To assess the Charm MRL BLTET test for the detection of β-lactams and tetracyclines in milk of small ruminants, an evaluation study was performed at Instituto de Ciencia y Tecnologia Animal of Universitat Politècnica de València (Spain). The test specificity and detection capability (CCβ) were studied following Commission Decision 2002/657/EC. Specificity results obtained in this study were optimal for individual milk free of antimicrobials from ewes (99.2% for β-lactams and 100% for tetracyclines) and goats (97.9% for β-lactams and 100% for tetracyclines) along the entire lactation period regardless of whether the results were visually or instrumentally interpreted. Moreover, no positive results were obtained when a relatively high concentration of different substances belonging to antimicrobial families other than β-lactams and tetracyclines were present in ewe and goat milk. For both types of milk, the CCβ calculated was lower or equal to EU-MRL for amoxicillin (4 µg/kg), ampicillin (4 µg/kg), benzylpenicillin (≤ 2 µg/kg), dicloxacillin (30 µg/kg), oxacillin (30 µg/kg), cefacetrile (≤ 63 µg/kg), cefalonium (≤ 10 µg/kg), cefapirin (≤ 30 µg/kg), desacetylcefapirin (≤ 30 µg/kg), cefazolin (≤ 25 µg/kg), cefoperazone (≤ 25 µg/kg), cefquinome (20 µg/kg), ceftiofur (≤ 50 µg/kg), desfuroylceftiofur (≤ 50µg/kg), and cephalexin (≤ 50 µg/kg). However, this test could neither detect cloxacillin nor nafcillin at or below EU-MRL (CCβ >30 µg/kg). The
Shareef, Hussain; Mutlag, Ammar Hussein; Mohamed, Azah
2017-01-01
Many maximum power point tracking (MPPT) algorithms have been developed in recent years to maximize the energy produced by photovoltaic (PV) systems. These algorithms are not sufficiently robust with respect to fast-changing environmental conditions, efficiency, accuracy at steady state, and the dynamics of the tracking algorithm. Thus, this paper proposes a new random forest (RF) model to improve MPPT performance. The RF model has the ability to capture the nonlinear association of patterns between predictors, such as irradiance and temperature, to determine the maximum power point accurately. An RF-based tracker is designed for 25 SolarTIFSTF-120P6 PV modules, with a peak capacity of 3 kW, using two high-speed sensors. For this purpose, a complete PV system is modeled using 300,000 data samples and simulated using the MATLAB/SIMULINK package. The proposed RF-based MPPT is then tested under actual environmental conditions for 24 days to validate its accuracy and dynamic response. The response of the RF-based MPPT model is also compared with that of the artificial neural network and adaptive neuro-fuzzy inference system algorithms for further validation. The results show that the proposed MPPT technique gives a significant improvement compared with the other techniques. In addition, the RF model passes the Bland-Altman test with more than 95 percent acceptability.
Breece, Matthew W; Oliver, Matthew J; Cimino, Megan A; Fox, Dewayne A
2013-01-01
Atlantic sturgeon (Acipenser oxyrinchus oxyrinchus) experienced severe declines due to habitat destruction and overfishing beginning in the late 19th century. Subsequent to the boom and bust period of exploitation, there has been minimal fishing pressure and improving habitats. However, lack of recovery led to the 2012 listing of Atlantic sturgeon under the Endangered Species Act. Although habitats may be improving, the availability of high quality spawning habitat, essential for the survival and development of eggs and larvae, may still be a limiting factor in the recovery of Atlantic sturgeon. To estimate adult Atlantic sturgeon spatial distributions during riverine occupancy in the Delaware River, we utilized a maximum entropy (MaxEnt) approach along with passive biotelemetry during the likely spawning season. We found that substrate composition and distance from the salt front significantly influenced the locations of adult Atlantic sturgeon in the Delaware River. To broaden the scope of this study we projected our model onto four scenarios depicting varying locations of the salt front in the Delaware River: the contemporary location of the salt front during the likely spawning season, the location of the salt front during the historic fishery in the late 19th century, an estimated shift in the salt front by the year 2100 due to climate change, and an extreme drought scenario, similar to that which occurred in the 1960s. The movement of the salt front upstream as a result of dredging and climate change likely eliminated historic spawning habitats and currently threatens areas where Atlantic sturgeon spawning may be taking place. Identifying where suitable spawning substrate and water chemistry intersect with the likely occurrence of adult Atlantic sturgeon in the Delaware River highlights essential spawning habitats, enhancing recovery prospects for this imperiled species.
Izumi, Kenji; Bartlein, Patrick J.
2016-10-01
The inverse modeling through iterative forward modeling (IMIFM) approach was used to reconstruct Last Glacial Maximum (LGM) climates from North American fossil pollen data. The approach was validated using modern pollen data and observed climate data. While the large-scale LGM temperature IMIFM reconstructions are similar to those calculated using conventional statistical approaches, the reconstructions of moisture variables differ between the two approaches. We used two vegetation models, BIOME4 and BIOME5-beta, with the IMIFM approach to evaluate the effects on the LGM climate reconstruction of differences in water use efficiency, carbon use efficiency, and atmospheric CO2 concentrations. Although lower atmospheric CO2 concentrations influence pollen-based LGM moisture reconstructions, they do not significantly affect temperature reconstructions over most of North America. This study implies that the LGM climate was very cold but not much drier than present over North America, which is inconsistent with previous studies.
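The IMIFM idea, stripped to its essentials, is to run the forward model repeatedly and adjust the climate input until the simulated output matches the observation. A toy sketch, assuming a monotone one-dimensional forward model (the real application inverts BIOME4/BIOME5-beta against pollen assemblages, which is far higher-dimensional):

```python
def invert_by_forward_modeling(forward, target, lo, hi, tol=1e-6):
    """Inverse modeling through iterative forward modeling, in miniature:
    bisect on the input until forward(input) matches the observed target.
    Assumes 'forward' is monotone increasing on [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if forward(mid) < target:
            lo = mid          # simulated output too low: raise the input
        else:
            hi = mid          # too high: lower the input
    return (lo + hi) / 2

# toy monotone "vegetation response" f(T) = 2T + 1; observed value 5 -> T = 2
t_recon = invert_by_forward_modeling(lambda t: 2.0 * t + 1.0, 5.0, 0.0, 10.0)
```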
He, Yi; Scheraga, Harold A., E-mail: has5@cornell.edu [Department of Chemistry and Chemical Biology, Cornell University, Ithaca, New York 14853 (United States); Liwo, Adam [Faculty of Chemistry, University of Gdańsk, Wita Stwosza 63, 80-308 Gdańsk (Poland)
2015-12-28
Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much information as possible about the original biomolecular system in all-atom representation, but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method, based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids, recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field.
Deshpande, Paritosh C; Tilwankar, Atit K; Asolekar, Shyam R
2012-11-01
The 180 ship recycling yards located on Alang-Sosiya beach in the State of Gujarat on the west coast of India constitute the world's largest cluster engaged in ship dismantling. About 350 ships (avg. 10,000 ton steel/ship) are dismantled yearly with the involvement of about 60,000 workers. Cutting and scrapping of plates and scraping of painted metal surfaces are the most commonly performed operations during ship breaking. The pollutants released from a typical plate-cutting operation can potentially either affect workers directly by contaminating the breathing zone (air pollution) or add to the pollution load in the intertidal zone and contaminate sediments when pollutants emitted in the secondary working zone are subjected to tidal forces. There was a two-pronged purpose behind the mathematical modeling exercise performed in this study: first, to estimate the zone of influence up to which the effect of the plume would extend; second, to estimate the cumulative maximum concentration of heavy metals that can potentially occur in the ambient atmosphere of a given yard. The cumulative maximum heavy metal concentration was predicted by the model to be between 113 μg/Nm³ and 428 μg/Nm³ (at 4 m/s and 1 m/s near-ground wind speeds, respectively). For example, centerline concentrations of lead (Pb) in the yard could lie between 8 and 30 μg/Nm³. These estimates are much higher than the Indian National Ambient Air Quality Standards (NAAQS) limit for Pb (0.5 μg/Nm³). This research has already provided critical science and technology inputs for the formulation of policies for eco-friendly dismantling of ships, of ideal procedures, and of the corresponding health, safety, and environment provisions. The insights obtained from this research are also being used to develop appropriate technologies for minimizing worker exposure and the possibility of heavy metal pollution in the intertidal zone of ship recycling yards in India.
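The inverse dependence of the predicted concentrations on wind speed can be illustrated with the standard ground-level, ground-release Gaussian plume centerline formula. This is a generic sketch, not the authors' dispersion model; the emission rate and dispersion widths below are hypothetical.

```python
import math

def plume_centerline(Q, u, sigma_y, sigma_z):
    """Ground-level centerline concentration of a ground-release Gaussian plume.
    Q: emission rate (ug/s); u: wind speed (m/s); sigma_y, sigma_z: dispersion widths (m)."""
    return Q / (math.pi * u * sigma_y * sigma_z)

# hypothetical source strength and dispersion values for a plate-cutting operation
Q, sy, sz = 5000.0, 8.0, 4.0
c_low_wind = plume_centerline(Q, 1.0, sy, sz)   # near-calm, 1 m/s
c_high_wind = plume_centerline(Q, 4.0, sy, sz)  # 4 m/s
```

Concentration scales as 1/u, consistent with the abstract's higher predictions at 1 m/s than at 4 m/s near-ground wind speed.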
A revision of the tolerable limits approach: Searching for the important coefficients
Tarancón, M.A.; Callejas, F.; Dietzenbacher, E.; Lahr, M.L.
2008-01-01
A wide range of approaches are available for classifying coefficients according to their importance to an economy. The 'tolerable limits' approach is one that has been extensively written about. Nevertheless, it seems unsuitable for assessing the overall importance of a coefficient to an economy, bu
Hogden, J.
1996-11-05
The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.
Várnai, Csilla; Burkoff, Nikolas S; Wild, David L
2013-12-10
Maximum Likelihood (ML) optimization schemes are widely used for parameter inference. They iteratively maximize the likelihood of experimentally observed data with respect to the model parameters by following the gradient of the log-likelihood. Here, we employ an ML inference scheme to infer a generalizable, physics-based coarse-grained protein model (which includes Go̅-like biasing terms to stabilize secondary structure elements in room-temperature simulations), using native conformations of a training set of proteins as the observed data. Contrastive divergence, a novel statistical machine learning technique, is used to efficiently approximate the direction of the gradient ascent, which enables the use of a large training set of proteins. Unlike previous work, the generalizability of the protein model allows the folding of peptides and a protein (protein G) which are not part of the training set. We compare the same force field with different van der Waals (vdW) potential forms: a hard cutoff model, and a Lennard-Jones (LJ) potential with vdW parameters inferred or adopted from the CHARMM or AMBER force fields. Simulations of peptides and protein G show that the LJ model with inferred parameters outperforms the hard cutoff potential, which is consistent with previous observations. Simulations using the LJ potential with inferred vdW parameters also outperform the protein models with adopted vdW parameter values, demonstrating that model parameters generally cannot be used with force fields that have different energy functions. The software is available at https://sites.google.com/site/crankite/.
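The two vdW potential forms compared above can be sketched directly; the 12-6 Lennard-Jones form is standard, while the hard-cutoff alternative below is only a schematic stand-in (the paper's exact cutoff form is not specified here), and the epsilon/sigma values are illustrative.

```python
def lennard_jones(r, epsilon, sigma):
    """Standard 12-6 Lennard-Jones pair potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def hard_cutoff(r, epsilon, r_cut):
    """Schematic hard-cutoff alternative: constant repulsion inside r_cut, zero outside."""
    return epsilon if r < r_cut else 0.0

# the LJ minimum lies at r = 2**(1/6) * sigma with depth -epsilon
sigma, epsilon = 3.4, 0.24
r_min = 2.0 ** (1.0 / 6.0) * sigma
v_min = lennard_jones(r_min, epsilon, sigma)
```

Unlike the hard cutoff, the LJ form has a smooth attractive well, which is the feature the inferred-parameter model exploits.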
Hsia, Wei-Shen
1986-01-01
In the Control Systems Division of the Systems Dynamics Laboratory of the NASA/MSFC, a Ground Facility (GF), in which the dynamics and control system concepts being considered for Large Space Structures (LSS) applications can be verified, was designed and built. One of the important aspects of the GF is to design an analytical model which will be as close to experimental data as possible so that a feasible control law can be generated. Using Hyland's Maximum Entropy/Optimal Projection Approach, a procedure was developed in which the maximum entropy principle is used for stochastic modeling and the optimal projection technique is used for a reduced-order dynamic compensator design for a high-order plant.
N. Ranjbar
2016-09-01
Knowledge of species' habitat needs is considered one of the requirements of wildlife management. We studied seasonal habitat suitability and habitat associations of the wild goat (Capra aegagrus) in Kolah-Qazi National Park, one of its typical habitats in central Iran, using a Maximum Entropy (MaxEnt) approach. The study area was confined to mountainous areas as the potential habitat of the wild goat. Elevation, distance to water sources, distance to human settlements, and distance to guard patrol roads were recognised as the most important variables determining habitat suitability for the species. The extent of suitable habitat was largest in spring (3882.25 ha) and smallest in summer (1362.5 ha). The AUC values of MaxEnt revealed acceptable to good efficiency (AUC ≥ 0.7). The obtained results may have implications for conservation of the wild goat in similar habitats across its distribution range.
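The AUC criterion used to judge the MaxEnt models above can be computed directly from suitability scores as the probability that a presence location scores higher than a background location (ties counted as half). The scores below are hypothetical.

```python
def auc(pos_scores, neg_scores):
    """Rank-based AUC: fraction of (presence, background) pairs ranked correctly,
    with ties counted as half a win."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# hypothetical suitability scores at presence vs. background locations
presence = [0.9, 0.8, 0.7, 0.6]
background = [0.5, 0.4, 0.8, 0.3]
score = auc(presence, background)
```

A score of 0.5 corresponds to random discrimination; the abstract's AUC ≥ 0.7 threshold marks "acceptable" discrimination in this convention.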
Chandrasekhar limit: an elementary approach based on classical physics and quantum theory
Pinochet, Jorge; Van Sint Jan, Michael
2016-05-01
In a brief article published in 1931, Subrahmanyan Chandrasekhar made public an important astronomical discovery. In it, the then young Indian astrophysicist introduced what is now known as the Chandrasekhar limit: the maximum mass of a stellar remnant beyond which the repulsion force between electrons due to the exclusion principle can no longer stop the gravitational collapse. In the present article, we develop an elementary approximation to the Chandrasekhar limit, accessible to undergraduate science and engineering students. The article focuses especially on clarifying the origins of Chandrasekhar's discovery and the underlying physical concepts. Throughout the article, only basic algebra is used, together with some general notions of classical physics and quantum theory.
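For reference, the limit discussed above can be evaluated numerically from the standard closed form M_Ch = (ω √(3π)/2) (ℏc/G)^(3/2) / (μ_e m_H)², with ω ≈ 2.018 the n = 3 Lane-Emden constant; this is the textbook result, not the article's own derivation.

```python
import math

hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3/(kg*s^2)
m_H = 1.6726e-27         # kg, approximately the proton mass
M_sun = 1.989e30         # kg
mu_e = 2.0               # nucleons per electron (C/O white dwarf)
omega = 2.018236         # Lane-Emden constant for the n = 3 polytrope

M_ch = (omega * math.sqrt(3.0 * math.pi) / 2.0) \
       * (hbar * c / G) ** 1.5 / (mu_e * m_H) ** 2
ratio = M_ch / M_sun     # ~1.4 solar masses
```

Dropping the O(1) prefactor, the combination (ℏc/G)^(3/2)/m_H² alone already gives the right order of magnitude, which is the spirit of the elementary estimate.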
Imaging interferometric microscopy-approaching the linear systems limits of optical resolution.
Kuznetsova, Yuliya; Neumann, Alexander; Brueck, S R
2007-05-28
The linear systems optical resolution limit is a dense grating pattern at a λ/2 pitch, or a critical dimension (resolution) of λ/4. However, conventional microscopy provides a (Rayleigh) resolution of only ~0.6λ/NA, approaching λ/1.67 as NA → 1. A synthetic aperture approach to reaching the λ/4 linear-systems limit, extending previous developments in imaging interferometric microscopy, is presented. Resolution of non-periodic 180-nm features using 633-nm illumination (λ/3.52) and of a 170-nm grating (λ/3.72) is demonstrated. These results are achieved with a 0.4-NA optical system and retain the working-distance, field-of-view, and depth-of-field advantages of low-NA systems while approaching ultimate resolution limits.
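The quoted numbers can be checked directly: the demonstrated feature sizes as fractions of the 633-nm illumination wavelength, against the conventional Rayleigh resolution of the 0.4-NA optics and the λ/4 linear-systems limit (0.61 is the usual Rayleigh coefficient).

```python
wavelength = 633.0   # nm, He-Ne illumination
NA = 0.4

rayleigh = 0.61 * wavelength / NA    # conventional Rayleigh resolution, nm
linear_limit = wavelength / 4.0      # linear-systems critical dimension, nm
frac_180 = wavelength / 180.0        # demonstrated non-periodic features
frac_170 = wavelength / 170.0        # demonstrated grating
```

Both demonstrated features (180 nm and 170 nm) sit far below the ~965-nm Rayleigh resolution of the low-NA system and close to the 158-nm linear-systems limit.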
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be honest about the uncertainty of M estimates, or find a way to decrease its influence on the estimated hazard.
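The extreme-value idea above can be sketched minimally: under a doubly truncated Gutenberg-Richter magnitude distribution with upper bound M, the largest of N observed magnitudes has CDF F(m)^N, so only very long catalogues constrain M. This is a generic illustration, not the authors' model; the b-value, bounds and event counts are hypothetical.

```python
import math

def gr_cdf(m, b, m_min, m_max):
    """CDF of a doubly truncated Gutenberg-Richter magnitude distribution."""
    beta = b * math.log(10.0)
    num = 1.0 - math.exp(-beta * (m - m_min))
    den = 1.0 - math.exp(-beta * (m_max - m_min))
    return num / den

def prob_observed_max_below(m, n_events, b=1.0, m_min=4.0, m_max=9.0):
    """P(largest of n_events magnitudes <= m)."""
    return gr_cdf(m, b, m_min, m_max) ** n_events

p_short = prob_observed_max_below(7.5, 100)     # short catalogue
p_long = prob_observed_max_below(7.5, 10000)    # far longer catalogue
```

With only 100 events, the observed maximum almost surely stays below 7.5 even though M = 9, illustrating why testing reported M estimates against short catalogues is so weak.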
Semiclassical limit of the focusing NLS: Whitham equations and the Riemann-Hilbert Problem approach
Tovbis, Alexander; El, Gennady A.
2016-10-01
The main goal of this paper is to put together: a) the Whitham theory applicable to slowly modulated N-phase nonlinear wave solutions of the focusing nonlinear Schrödinger (fNLS) equation, and b) the Riemann-Hilbert Problem approach to particular solutions of the fNLS in the semiclassical (small dispersion) limit that develop slowly modulated N-phase nonlinear waves in the course of their evolution. Both approaches have their own merits and limitations. Understanding the interrelations between them could prove beneficial for a broad range of problems involving the semiclassical fNLS.
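For reference, the semiclassical (small dispersion) fNLS referred to above is conventionally written with a small parameter ε multiplying the derivatives; this standard form is supplied here for orientation and is not quoted from the paper:

```latex
i\varepsilon\,\partial_t \psi + \frac{\varepsilon^2}{2}\,\partial_x^2 \psi + |\psi|^2 \psi = 0, \qquad 0 < \varepsilon \ll 1 .
```

In the limit ε → 0 the solution develops rapid oscillations whose slow modulations are governed by the Whitham equations.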
Kelisani, M. Dayyani; Doebert, S.; Aslaninejad, M.
2016-08-01
The critical process of beam loading compensation in high-intensity accelerators brings under control the undesired effect of beam-induced fields on the accelerating structures. A new analytical approach for optimizing standing-wave accelerating structures is presented which is very fast and agrees very well with simulations. A perturbative analysis of cavity and waveguide excitation, based on the Bethe theorem and normal-mode expansion, is developed to compensate the beam loading effect and excite the maximum field gradient in the cavity. The method provides the optimum values for the coupling factor and the cavity detuning. While the approach is very accurate, it greatly shortens the calculation time compared with simulation software.
Limiting flows of a viscous fluid with stationary separation zones with Re approaching infinity
Taganov, G. I.
1982-01-01
The limiting flows of a viscous noncondensable fluid, approached by flows with stationary separation zones behind planar symmetrical bodies as the Reynolds number increases without bound, are studied. Quantitative results are obtained for the circulation flow inside a separation zone.
The Impact of the Graphical Approach on Students' Understanding of the Formal Definition of Limit
Quesada, Antonio; Einsporn, Richard L.; Wiggins, Muserref
2008-01-01
The purpose of this study was to determine if the use of a graphical teaching and learning approach via the graphing calculator enhances students' understanding of the formal definition of limit. College students in six sections of Calculus I participated by completing a test prior to the introduction of the definition, and completing a second…
Approaching the downsizing limit of silicon for surface-controlled lithium storage.
Wang, Bin; Li, Xianglong; Luo, Bin; Hao, Long; Zhou, Min; Zhang, Xinghao; Fan, Zhuangjun; Zhi, Linjie
2015-03-04
Graphene-sheet-supported uniform ultrasmall (≈3 nm) silicon quantum dots have been successfully synthesized by a simple and effective self-assembly strategy. They exhibit unprecedentedly fast, surface-controlled lithium-storage behavior and outstanding lithium-storage properties, including extraordinary rate capability and remarkable cycling stability, attributable to the intrinsic benefit of approaching the downsizing limit of silicon.
Bollerslev, Anne Mette; Nauta, Maarten; Hansen, Tina Beck; Aabo, Søren
2017-01-02
Microbiological limits are widely used in food processing as an aid to reduce consumers' exposure to hazardous microorganisms. However, in pork, the prevalence and concentrations of Salmonella are generally low, and microbiological limits are not considered an efficient tool to support hygiene interventions. The objective of the present study was to develop an approach which could make it possible to define potential risk-based microbiological limits for an indicator, enterococci, in order to evaluate the risk from potential growth of Salmonella. A positive correlation between the concentration of enterococci and the prevalence and concentration of Salmonella was shown for 6640 pork samples taken at Danish cutting plants and retail butchers. The samples were collected in five different studies in 2001, 2002, 2010, 2011 and 2013. The observations that both Salmonella and enterococci are carried in the intestinal tract, contaminate pork by the same mechanisms, and share similar growth characteristics (lag phase and maximum specific growth rate) at temperatures around 5-10°C suggest a potential for enterococci to be used as an indicator of potential growth of Salmonella in pork. Elevated temperatures during processing will lead to growth of enterococci and, if present, also Salmonella. By combining the correlation between enterococci and Salmonella with risk modelling, it is possible to predict the risk of salmonellosis based on the level of enterococci. The risk model used for this purpose includes the dose-response relationship for Salmonella and a reduction factor to account for preparation of the fresh pork. By use of the risk model, it was estimated that the majority of salmonellosis cases caused by the consumption of pork in Denmark is caused by the small fraction of pork products with enterococci concentrations above 5 log CFU/g. This illustrates that our approach can be used to evaluate the potential effect of different microbiological
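The dose-response-plus-reduction-factor structure described above can be sketched with the generic single-hit exponential model. This is not the study's fitted Salmonella model; the r parameter, portion size and log reduction below are hypothetical placeholders.

```python
import math

def p_illness(dose_cfu, r=1e-4):
    """Single-hit exponential dose-response: P = 1 - exp(-r * dose)."""
    return 1.0 - math.exp(-r * dose_cfu)

def risk_after_preparation(conc_log_cfu_per_g, portion_g=100.0, log_reduction=3.0):
    """Risk for one portion, applying a log10 reduction factor for preparation."""
    dose = portion_g * 10.0 ** (conc_log_cfu_per_g - log_reduction)
    return p_illness(dose)

low = risk_after_preparation(2.0)    # 2 log CFU/g before preparation
high = risk_after_preparation(5.0)   # 5 log CFU/g, above the candidate limit
```

Even with identical preparation, risk rises steeply with the pre-preparation concentration, which is why the small high-concentration fraction can dominate the total case count.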
Shaw, A; Takács, I; Pagilla, K R; Murthy, S
2013-10-15
The Monod equation is often used to describe biological treatment processes and is the foundation for many activated sludge models. The Monod equation includes a "half-saturation coefficient" to describe the effect of substrate limitations on the process rate, and it is customary to consider this parameter a constant for a given system. The purpose of this study was to develop a methodology and use it to show that the half-saturation coefficient for denitrification is not constant but is in fact a function of the maximum denitrification rate. A 4-step procedure is developed to investigate the dependency of half-saturation coefficients on the maximum rate, and two different models are used to describe this dependency: (a) an empirical linear model and (b) a deterministic model based on Fick's law of diffusion. Both models prove better at describing denitrification kinetics at low nitrate concentrations than assuming a fixed K(NO3). In this study, data from 56 denitrification rate tests were analyzed; the extant K(NO3) varied between 0.07 mg N/L and 1.47 mg N/L (5th and 95th percentiles, respectively), with an average of 0.47 mg N/L. In contrast, the intrinsic K(NO3) estimated for the diffusion model was 0.01 mg N/L, which indicates that the extant K(NO3) is greatly influenced by, and mostly describes, diffusion limitations.
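The Monod form and the empirical linear dependence of the half-saturation coefficient on the maximum rate can be sketched as follows; the coefficients of the linear model are hypothetical, not the values fitted in the study.

```python
def monod_rate(S, r_max, K):
    """Monod (Michaelis-Menten form) process rate at substrate concentration S."""
    return r_max * S / (K + S)

def k_linear(r_max, a, K0):
    """Empirical linear model: extant half-saturation grows with the maximum rate."""
    return K0 + a * r_max

# at S = K the rate is exactly half the maximum rate
r = monod_rate(0.47, 10.0, 0.47)

# with the linear model, faster biomass exhibits a larger extant K(NO3)
K_slow = k_linear(2.0, 0.2, 0.01)
K_fast = k_linear(8.0, 0.2, 0.01)
```

The diffusion interpretation is that a small intrinsic K is inflated into a larger extant K as the maximum rate (and hence the diffusion gradient into the floc) increases.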
Leclercq, C; Arcella, D; Turrini, A
2000-12-01
The three recent EU directives which fixed maximum permitted levels (MPL) of food additives for all member states also include the general obligation to establish national systems for monitoring the intake of these substances in order to evaluate the safety of their use. In this work, we considered additives with a primary antioxidant technological function for which an acceptable daily intake (ADI) was established by the Scientific Committee for Food (SCF): gallates, butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT) and erythorbic acid. The potential intake of these additives in Italy was estimated by means of a hierarchical approach using, step by step, more refined methods. The likelihood of the current ADI being exceeded was very low for erythorbic acid, BHA and gallates. On the other hand, the theoretical maximum daily intake (TMDI) of BHT was above the current ADI. The three food categories found to be the main potential sources of BHT were "pastry, cake and biscuits", "chewing gums" and "vegetable oils and margarine"; together they contributed 74% of the TMDI. Actual use of BHT in these food categories is discussed, together with other aspects such as losses of this substance during processing and the percentage actually ingested in the case of chewing gum.
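A first-tier TMDI screen as used above multiplies each food category's daily consumption by the maximum permitted level and sums, then compares against the ADI scaled by body weight. All numbers below are hypothetical placeholders, not the Italian survey data.

```python
def tmdi(consumption_g_per_day, mpl_mg_per_kg):
    """Theoretical maximum daily intake (mg/day), assuming every category
    contains the additive at its maximum permitted level."""
    return sum(g / 1000.0 * mpl
               for g, mpl in zip(consumption_g_per_day, mpl_mg_per_kg))

# hypothetical example with three contributing categories
foods = [80.0, 10.0, 30.0]    # g/day consumed per category
mpls = [2.0, 40.0, 10.0]      # mg/kg maximum permitted level per category
intake = tmdi(foods, mpls)    # mg/day

adi_mg_per_day = 0.05 * 60.0  # illustrative ADI (mg/kg bw/day) x 60 kg body weight
exceeds = intake > adi_mg_per_day
```

Because every category is assumed at its MPL, the TMDI is deliberately conservative; refined tiers replace MPLs with measured use levels, as the abstract's hierarchical approach does.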
A large deviations approach to limit theory for heavy-tailed time series
Mikosch, Thomas Valentin; Wintenberger, Olivier
2016-01-01
In this paper we propagate a large deviations approach for proving limit theory for (generally) multivariate time series with heavy tails. We make this notion precise by introducing regularly varying time series. We provide general large deviation results for functionals acting on a sample path ... and vanishing in some neighborhood of the origin. We study a variety of such functionals, including large deviations of random walks, their suprema, the ruin functional, and further derive weak limit theory for maxima, point processes, cluster functionals and the tail empirical process. One of the main results ...
On Approaching the Ultimate Limits of Photon-Efficient and Bandwidth-Efficient Optical Communication
Dolinar, Sam; Erkmen, Baris I; Moision, Bruce
2011-01-01
It is well known that ideal free-space optical communication at the quantum limit can have unbounded photon information efficiency (PIE), measured in bits per photon. High PIE comes at a price of low dimensional information efficiency (DIE), measured in bits per spatio-temporal-polarization mode. If only temporal modes are used, then DIE translates directly to bandwidth efficiency. In this paper, the DIE vs. PIE tradeoffs for known modulations and receiver structures are compared to the ultimate quantum limit, and analytic approximations are found in the limit of high PIE. This analysis shows that known structures fall short of the maximum attainable DIE by a factor that increases linearly with PIE for high PIE. The capacity of the Dolinar receiver is derived for binary coherent-state modulations and computed for the case of on-off keying (OOK). The DIE vs. PIE tradeoff for this case is improved only slightly compared to OOK with photon counting. An adaptive rule is derived for an additive local oscillator th...
Point-coupling models from mesonic hyper massive limit and mean-field approaches
Lourenco, O.; Dutra, M., E-mail: odilon@ita.br [Departamento de Fisica, Instituto Tecnologico da Aeronautica - CTA, Sao Jose dos Campos, SP (Brazil); Delfino, Antonio, E-mail: delfino@if.uff.br [Instituto de Fisica, Universidade Federal Fluminense, Niteroi, RJ (Brazil); Amaral, R.L.P.G. [Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, MA (United States)
2012-08-15
In this work, we show how nonlinear point-coupling models, described by a Lagrangian density containing only terms up to fourth order in the fermion condensate (Ψ̄Ψ), are derived from a modified meson-exchange nonlinear Walecka model. We present two methods of derivation, namely the hyper-massive meson limit within a functional integral approach and the mean-field approximation, in which equations of state at zero temperature of the nonlinear point-coupling models are directly obtained. (author)
The limitations of the reverse-engineering approach to cognitive modeling.
Rueckl, Jay G
2012-10-01
Frost's critique reveals the limitations of the reverse-engineering approach to cognitive modeling--the style of psychological explanation in which a stipulated internal organization (in the form of a computational mechanism) explains a relatively narrow set of phenomena. An alternative is to view organization as both the explanation for some phenomena and a phenomenon to be explained. This move poses new and interesting theoretical challenges for theories of word reading.
Gradient approach for the evaluation of the fatigue limit of welded structures under complex loading
Y. Nadot
2017-07-01
Welded ‘T-junctions’ were tested at different load ratios under constant and variable amplitude loading. Fatigue results are analyzed in terms of the fatigue mechanisms, which depend on the loading type. A gradient approach (WSG: Welded Stress Gradient) is used to evaluate the fatigue limit, and comparison with experimental results shows relatively good agreement. Nonlinear cumulative damage theory is used to take variable amplitude loading into account.
Carr Andrew J
2008-09-01
Varying surgical techniques, patient groups and results have been described regarding the surgical treatment of post-traumatic flexion contracture of the elbow. We present our experience using the limited lateral approach on patients with carefully defined contracture types. Surgical release of post-traumatic flexion contracture of the elbow was performed in 23 patients via a limited lateral approach. All patients had an established flexion contracture with significant functional deficit. Contractures were classified as extrinsic if not associated with damage to the joint surface, or intrinsic if they were. Overall, the mean pre-operative deformity was 55 degrees (95% CI 48-61), corrected at the time of surgery to 17 degrees (95% CI 12-22). At short-term follow-up (7.5 months) the mean residual deformity was 25 degrees (95% CI 19-30) and at medium-term follow-up (43 months) it was 32 degrees (95% CI 25-39). This deformity correction was statistically significant. Surgical release of post-traumatic flexion contracture of the elbow via a limited lateral approach is a safe technique which reliably improves extension, especially for extrinsic contractures. In this series all patients with an extrinsic contracture regained a functional range of movement and were satisfied with their surgery.
Stanisław Sieniutycz
2013-02-01
This research presents a unified approach to power limits in power-producing and power-consuming systems, in particular those using renewable resources. As a benchmark system which generates or consumes power, a well-known standardized arrangement is considered, in which two different reservoirs are separated by an engine or a heat pump. Either of these units is located between a resource fluid (‘upper’ fluid, 1) and the environmental fluid (‘lower’ fluid, 2). Power yield or power consumption is determined in terms of conductivities, reservoir temperatures and the internal irreversibility coefficient, F. While the bulk temperatures Ti of the reservoirs are the only state coordinates needed to describe purely thermal units, in chemical (electrochemical) engines, heat pumps or separators it is necessary to use both temperatures and chemical potentials μk. Methods of mathematical programming and dynamic optimization are applied to determine limits on power yield or power consumption in various energy systems, such as thermal engines, heat pumps, solar dryers, electrolysers, fuel cells, etc. Methodological similarities in treating power limits in engines, separators, and heat pumps are shown. Numerical approaches to multistage systems are based on methods of dynamic programming (DP) or on Pontryagin's maximum principle. The first method searches for properties of optimal work and is limited to systems with a low-dimensional state vector, whereas the second investigates properties of differential (canonical) equations derived from the process Hamiltonian. A relatively unknown symmetry in the behaviour of power producers (engines) and power consumers is enunciated in this paper. An approximate evaluation shows that at least ¼ of the power dissipated in the natural transfer process must be added to a separator or a heat pump in order to assure a required process rate. Applications focus on drying systems which, by nature, require a large amount of thermal
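The two-reservoir engine benchmark above can be sketched numerically in its simplest textbook setting: Newtonian heat transfer, no internal irreversibility (F = 1), maximizing power over the operating efficiency. This is an illustrative endoreversible-engine sketch, not the paper's multistage optimization; temperatures and conductances are hypothetical.

```python
import math

def power(eta, T1, T2, g1, g2):
    """Power output of an endoreversible engine operating at efficiency eta,
    with Newtonian heat transfer and no internal irreversibility."""
    return eta * g1 * g2 * (T1 - T2 / (1.0 - eta)) / (g1 + g2)

T1, T2, g1, g2 = 600.0, 300.0, 1.0, 1.0
etas = [i / 10000.0 for i in range(1, 5000)]     # scan 0 < eta < eta_Carnot = 0.5
eta_opt = max(etas, key=lambda e: power(e, T1, T2, g1, g2))
eta_cnca = 1.0 - math.sqrt(T2 / T1)              # Chambadal-Novikov-Curzon-Ahlborn
```

The numerically found maximum-power efficiency reproduces the classic 1 - sqrt(T2/T1) result, which sits below the Carnot limit of 1 - T2/T1, illustrating the gap between a reversibility bound and a finite-rate power limit.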
Vacchi, Matteo; Misson, Gloria; Montefalcone, Monica; Archetti, Renata; Nike Bianchi, Carlo; Ferrari, Marco
2014-05-01
The upper portion of the meadows of the protected Mediterranean seagrass Posidonia oceanica occurs in the region of the seafloor most affected by surf-related effects. Evaluation of its status is part of monitoring programs, but proper conclusions are difficult to draw due to the lack of definite reference conditions. Comparing the position of the meadow upper limit with the beach morphodynamics (i.e. the distinctive type of beach produced by topography and wave climate) provided evidence that the natural landward extension of meadows can be predicted. Here we present an innovative predictive cartographic approach able to identify the portion of the seafloor where the meadow upper limit should naturally lie (i.e. its reference conditions). The conceptual framework of this model is based on three essential components: (i) definition of the breaking-depth geometry: the breaking limit represents the major constraint on landward meadow development, and we modelled it (1-year return time) using the software MIKE 21 SW; (ii) definition of the morphodynamic domain of the beach using the surf scaling index ɛ; (iii) definition of the P. oceanica upper limit geometry, for which we coupled detailed aerial photos with thematic bionomic cartography. In a GIS environment, we modelled the seafloor extent where the meadow should naturally lie according to the breaking-limit position and the morphodynamic domain of the beach, and then added the GIS layer with the meadow upper limit geometry. The final output therefore shows, on the same map, both the reference condition and the actual location of the upper limit. This makes it possible to assess the status of the landward extent of a given P. oceanica meadow and quantify any suspected or observed regression caused by anthropic factors. The model was elaborated and validated along the Ligurian coastline (NW Mediterranean) and was positively tested in other Mediterranean areas.
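The surf scaling index ɛ used above (in the Guza-and-Inman sense) can be computed as ɛ = a_b ω² / (g tan²β), with a_b the breaker amplitude, ω the radian wave frequency and β the beach slope; low ɛ indicates reflective beaches and high ɛ dissipative ones. The thresholds and sample values below are illustrative assumptions, not the paper's classification.

```python
import math

def surf_scaling(breaker_amplitude_m, period_s, slope_deg):
    """Surf scaling index: epsilon = a_b * omega**2 / (g * tan(beta)**2)."""
    omega = 2.0 * math.pi / period_s
    tan_b = math.tan(math.radians(slope_deg))
    return breaker_amplitude_m * omega ** 2 / (9.81 * tan_b ** 2)

steep = surf_scaling(0.5, 8.0, 8.0)   # steep beach, long-period waves
flat = surf_scaling(1.0, 6.0, 1.0)    # flat beach, short-period waves
```

With commonly quoted bands (roughly ɛ < 2.5 reflective, ɛ > 20 dissipative), the steep-beach case falls in the reflective domain and the flat-beach case in the dissipative domain.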
Novel approach to epicardial pacemaker implantation in patients with limited venous access.
Costa, Roberto; Scanavacca, Mauricio; da Silva, Kátia Regina; Martinelli Filho, Martino; Carrillo, Roger
2013-11-01
Limited venous access in certain patients increases the procedural risk and complexity of conventional transvenous pacemaker implantation. The purpose of this study was to evaluate a minimally invasive epicardial approach using pericardial reflections for dual-chamber pacemaker implantation in patients with limited venous access. Between June 2006 and November 2011, 15 patients underwent epicardial pacemaker implantation. Procedures were performed through a minimally invasive subxiphoid approach and pericardial window with subsequent fluoroscopy-assisted lead placement. Mean patient age was 46.4 ± 15.3 years (9 male [60.0%], 6 female [40.0%]). The new surgical approach was used in patients determined to have limited venous access due to multiple abandoned leads in 5 (33.3%), venous occlusion in 3 (20.0%), intravascular retention of lead fragments from prior extraction in 3 (20.0%), tricuspid valve vegetation currently under treatment in 2 (13.3%), and unrepaired intracardiac defects in 2 (13.3%). All procedures were successful, with no perioperative complications or early deaths. Mean operating time for isolated pacemaker implantation was 231.7 ± 33.5 minutes. Lead placement on the superior aspect of the right atrium, through the transverse sinus, was possible in 12 patients. In the remaining 3 patients, the atrial lead was implanted on the left atrium through the oblique sinus, the postcaval recess, or the left pulmonary vein recess. None of the patients displayed pacing or sensing dysfunction, and all parameters remained stable throughout the follow-up period of 36.8 ± 25.1 months. Epicardial pacemaker implantation through pericardial reflections is an effective alternative therapy for patients requiring physiologic pacing in whom venous access is limited.
Adam-Poupart, Ariane; Brand, Allan; Fournier, Michel; Jerrett, Michael
2014-01-01
Background: Ambient air ozone (O3) is a pulmonary irritant that has been associated with respiratory health effects including increased lung inflammation and permeability, airway hyperreactivity, respiratory symptoms, and decreased lung function. Estimation of O3 exposure is a complex task because the pollutant exhibits complex spatiotemporal patterns. To refine the quality of exposure estimation, various spatiotemporal methods have been developed worldwide. Objectives: We sought to compare the accuracy of three spatiotemporal models to predict summer ground-level O3 in Quebec, Canada. Methods: We developed a land-use mixed-effects regression (LUR) model based on readily available data (air quality and meteorological monitoring data, road networks information, latitude), a Bayesian maximum entropy (BME) model incorporating both O3 monitoring station data and the land-use mixed model outputs (BME-LUR), and a kriging method model based only on available O3 monitoring station data (BME kriging). We performed leave-one-station-out cross-validation and visually assessed the predictive capability of each model by examining the mean temporal and spatial distributions of the average estimated errors. Results: The BME-LUR was the best predictive model (R2 = 0.653) with the lowest root mean-square error (RMSE = 7.06 ppb), followed by the LUR model (R2 = 0.466, RMSE = 8.747 ppb) and the BME kriging model (R2 = 0.414, RMSE = 9.164 ppb). Conclusions: Our findings suggest that errors of estimation in the interpolation of O3 concentrations with BME can be greatly reduced by incorporating outputs from a LUR model developed with readily available data. Citation: Adam-Poupart A, Brand A, Fournier M, Jerrett M, Smargiassi A. 2014. Spatiotemporal modeling of ozone levels in Quebec (Canada): a comparison of kriging, land-use regression (LUR), and combined Bayesian maximum entropy–LUR approaches. Environ Health Perspect 122:970–976; http://dx.doi.org/10.1289/ehp.1306566 PMID:24879650
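The leave-one-station-out cross-validation the authors describe generalizes to any spatial model; a minimal sketch follows (the function name and the `predict_fn` callback are illustrative assumptions, not the paper's code):

```python
import numpy as np

def loso_cv_scores(station_ids, y_obs, predict_fn):
    """Leave-one-station-out cross-validation: hold out all records from one
    station at a time, predict them with a model fitted on the rest, and
    return (R^2, RMSE) over the pooled held-out predictions."""
    y_true, y_pred = [], []
    for sid in np.unique(station_ids):
        held = station_ids == sid
        # predict_fn is any model wrapper: fit on ~held, predict on held
        y_hat = predict_fn(train_mask=~held, test_mask=held)
        y_true.extend(y_obs[held])
        y_pred.extend(y_hat)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot, rmse
```

Pooling all held-out residuals before scoring, as here, is one common convention; averaging per-station scores is another.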
A pragmatic approach to estimate the number of days in exceedance of PM10 limit value
Beauchamp, Maxime; Malherbe, Laure; de Fouquet, Chantal
2015-06-01
European legislation on ambient air quality requires that Member States report the annual number of exceedances of short-term regulatory concentration thresholds for PM10 and delimit the areas concerned. Measurements at the monitoring stations alone cannot fully describe those areas. We present a methodology to estimate the number of exceedances of the daily limit value over a year, which can be extended to any similar problem. This methodology is applied to PM10 concentrations in France, for which the daily limit value is 50 μg m-3, not to be exceeded more than 35 days per year. A probabilistic model is built using preliminary mapping of daily mean concentrations. First, daily atmospheric concentration fields are estimated at 1 km resolution by external drift kriging, combining surface monitoring observations and outputs from the CHIMERE chemistry transport model. Under a conventional Gaussian hypothesis for the estimation error, the kriging variance is used to compute the probability of exceeding the daily limit value and to identify three kinds of area: those where the concentrations can be regarded as certainly exceeding the daily limit value, those where they certainly do not, and those where the situation is indeterminate because of the estimation uncertainty. Then, from the set of 365 daily maps of the probability of exceeding the daily limit value, the parameters of a translated Poisson distribution are fitted to the annual number of exceedances of the daily limit value at each grid cell, which enables the probability of this number exceeding 35 to be computed. The methodology is tested on three years (2007, 2009 and 2011) that present numerous exceedances of the daily limit concentration at some monitoring stations. A cross-validation analysis is carried out to check the efficiency of the methodology. The way to interpret probability maps is discussed. A comparison is made with simpler kriging approaches using indicator kriging of exceedances. Lastly, estimation of the population exposed to PM10
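The two probabilistic steps described above can be sketched in a few lines. This is a simplified stand-in: the Gaussian tail probability matches the per-day construction, but the annual count here uses an ordinary Poisson with matched mean rather than the fitted translated Poisson of the paper:

```python
import math

LIMIT = 50.0  # PM10 daily limit value, ug/m3

def p_exceed_daily(z_hat, krig_var):
    """P(true daily mean > LIMIT) under a Gaussian hypothesis on the
    estimation error, from the kriging estimate and kriging variance."""
    sigma = math.sqrt(krig_var)
    # standard normal survival function expressed via erfc
    return 0.5 * math.erfc((LIMIT - z_hat) / (sigma * math.sqrt(2.0)))

def p_more_than_35(daily_probs):
    """Rough P(annual exceedance count > 35): approximate the sum of 365
    independent Bernoulli days by a Poisson with the same mean."""
    lam = sum(daily_probs)
    # P(N <= 35) for Poisson(lam), then take the complement
    cdf = sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(36))
    return 1.0 - cdf
```

A cell whose estimate sits exactly on the limit gets a daily probability of 0.5 regardless of variance, which is why the indeterminate zone in the paper tracks the kriging uncertainty.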
An Adaptive Approach to Mitigate Background Covariance Limitations in the Ensemble Kalman Filter
Song, Hajoon
2010-07-01
A new approach is proposed to address the background covariance limitations arising from undersampled ensembles and unaccounted model errors in the ensemble Kalman filter (EnKF). The method enhances the representativeness of the EnKF ensemble by augmenting it with new members chosen adaptively to add missing information that prevents the EnKF from fully fitting the data to the ensemble. The vectors to be added are obtained by back projecting the residuals of the observation misfits from the EnKF analysis step onto the state space. The back projection is done using an optimal interpolation (OI) scheme based on an estimated covariance of the subspace missing from the ensemble. In the experiments reported here, the OI uses a preselected stationary background covariance matrix, as in the hybrid EnKF–three-dimensional variational data assimilation (3DVAR) approach, but the resulting correction is included as a new ensemble member instead of being added to all existing ensemble members. The adaptive approach is tested with the Lorenz-96 model. The hybrid EnKF–3DVAR is used as a benchmark to evaluate the performance of the adaptive approach. Assimilation experiments suggest that the new adaptive scheme significantly improves the EnKF behavior when it suffers from small size ensembles and neglected model errors. It was further found to be competitive with the hybrid EnKF–3DVAR approach, depending on ensemble size and data coverage.
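A schematic of the augmentation step described above, assuming a linear observation operator H and small dense matrices (all names and inputs here are illustrative, not the paper's configuration):

```python
import numpy as np

def augment_ensemble(ensemble, obs, H, B, R_obs):
    """Back-project the post-analysis observation misfit onto state space
    with an OI gain built from a preselected stationary covariance B, and
    append the result as a NEW ensemble member (instead of adding the
    correction to every member, as hybrid EnKF-3DVAR would).
    ensemble: (n_state, n_members); obs: (n_obs,)."""
    x_mean = ensemble.mean(axis=1)
    residual = obs - H @ x_mean                    # what the EnKF failed to fit
    gain = B @ H.T @ np.linalg.inv(H @ B @ H.T + R_obs)
    new_member = x_mean + gain @ residual          # OI correction of the mean
    return np.column_stack([ensemble, new_member])
```

The appended member carries exactly the directions missing from the ensemble subspace, which is the mechanism the abstract credits for the improved behavior with small ensembles.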
An ICMP-Based Mobility Management Approach Suitable for Protocol Deployment Limitation
Jeng-Yueng Chen
2009-01-01
Mobility management is one of the important tasks on wireless networks. Many approaches have been proposed in the past, but none of them has been widely deployed so far. Mobile IP (MIP) and Route Optimization (ROMIP) suffer, respectively, from the triangular routing problem and from the need for binding cache support on every node on the entire Internet. One step toward a solution is the Mobile Routing Table (MRT), which enables edge routers to take over address binding. However, this approach demands that all the edge routers on the Internet support MRT, resulting in protocol deployment difficulties. To address this problem and to offset the limitation of the original MRT approach, we propose two different schemes, an ICMP echo scheme and an ICMP destination-unreachable scheme. These two schemes work with the MRT to efficiently find MRT-enabled routers, which greatly reduces the number of triangular routes. In this paper, we analyze and compare the standard MIP and the proposed approaches. Simulation results show that the proposed approaches reduce transmission delay, with only a few routers supporting MRT.
陈富坚; 黄世斌; 包惠明
2011-01-01
To solve the problems in the current deterministic method for determining a maximum speed limit for expressway operation under disastrous events, a reliability-based method is presented. A dynamic analysis was made of a vehicle traveling on a horizontal curve of an expressway, and maximum allowable speeds were deduced for a vehicle in horizontal circular motion without sliding and for emergency stopping without hitting an obstacle within the visual range. Based on reliability engineering theory, the reliability of a maximum speed limit was defined. With the safety of horizontal circular motion and emergency stopping as constraints, the performance function of the maximum speed limit was established, and the model for calculating its reliability and reliability index was deduced. For solution of the reliability model, the Monte Carlo method is recommended because of the multi-parameter, highly complex non-linear performance function. With a self-developed program, a case study was conducted to illustrate the reliability analysis of the maximum speed limit for expressway safety management under a disastrous event. The reliability method for determining a maximum speed limit of expressway operation is helpful for improving traffic safety.
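A Monte Carlo reliability calculation of the kind described can be sketched as follows. The no-sliding condition v <= sqrt(g * R * (f + e)) is the standard horizontal-curve formula; the parameter distributions are invented for illustration, since the paper's calibrated inputs are not given in the abstract:

```python
import math
import random

def reliability_of_speed_limit(v_limit, n=100_000, seed=1):
    """Monte Carlo estimate of the reliability of a posted limit v_limit
    (m/s) on a horizontal curve: the probability that v_limit does not
    exceed the maximum allowable cornering speed without sideways sliding,
    v_allow = sqrt(g * R * (f + e))."""
    rng = random.Random(seed)
    g = 9.81
    ok = 0
    for _ in range(n):
        f = rng.gauss(0.30, 0.05)   # lateral friction coefficient (assumed)
        R = rng.gauss(400.0, 30.0)  # curve radius in metres (assumed)
        e = 0.06                    # superelevation rate (assumed constant)
        v_allow = math.sqrt(max(g * R * (f + e), 0.0))
        if v_limit <= v_allow:
            ok += 1
    return ok / n
```

The emergency-stopping constraint of the paper would enter the same loop as a second condition on the sampled sight distance; it is omitted here to keep the sketch short.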
Miller, Owen D; Kurtz, Sarah R
2012-01-01
Absorbed sunlight in a solar cell produces electrons and holes. But, at the open circuit condition, the carriers have no place to go. They build up in density and, ideally, they emit external fluorescence that exactly balances the incoming sunlight. Any additional non-radiative recombination impairs the carrier density buildup, limiting the open-circuit voltage. At open-circuit, efficient external fluorescence is an indicator of low internal optical losses. Thus efficient external fluorescence is, counter-intuitively, a necessity for approaching the Shockley-Queisser efficiency limit. A great Solar Cell also needs to be a great Light Emitting Diode. Owing to the narrow escape cone for light, efficient external emission requires repeated attempts, and demands an internal luminescence efficiency >>90%.
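The penalty described here is quantified by the detailed-balance relation Voc = Voc_ideal - (kT/q) * ln(1/eta_ext), where eta_ext is the external fluorescence (luminescence) yield; a minimal sketch:

```python
import math

def voc_penalty_volts(eta_ext, T=300.0):
    """Open-circuit voltage penalty below the ideal radiative limit,
    (kT/q) * ln(1/eta_ext), for external fluorescence yield eta_ext."""
    k_over_q = 8.617333262e-5  # Boltzmann constant in eV/K (k/q in V/K)
    return k_over_q * T * math.log(1.0 / eta_ext)
```

At room temperature every factor-of-10 loss in external emission costs about 60 mV, which is why a cell approaching the Shockley-Queisser limit must also be an efficient LED.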
An approach for modeling sediment budgets in supply-limited rivers
Wright, Scott A.; Topping, David J.; Rubin, David M.; Melis, Theodore S.
2010-01-01
was to develop an approach complex enough to capture the processes related to sediment supply limitation but simple enough to allow for rapid calculations of multi-year sediment budgets. The approach relies on empirical relations between suspended sediment concentration and discharge but on a particle size specific basis and also tracks and incorporates the particle size distribution of the bed sediment. We have applied this approach to the Colorado River below Glen Canyon Dam (GCD), a reach that is particularly suited to such an approach because it is substantially sediment supply limited such that transport rates are strongly dependent on both water discharge and sediment supply. The results confirm the ability of the approach to simulate the effects of supply limitation, including periods of accumulation and bed fining as well as erosion and bed coarsening, using a very simple formulation. Although more empirical in nature than standard one-dimensional morphodynamic models, this alternative approach is attractive because its simplicity allows for rapid evaluation of multi-year sediment budgets under a range of flow regimes and sediment supply conditions, and also because it requires substantially less data for model setup and use.
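A toy version of a supply-limited budget built on rating curves might look like the following; the rating-curve form C = a * Q**b and the single-pool supply bookkeeping are simplifications assumed here (the authors track particle-size-specific relations and bed composition), not their model:

```python
def sediment_load(discharge, a, b):
    """Empirical rating curve C = a * Q**b (concentration vs discharge),
    of the kind the approach fits per particle-size class."""
    return a * discharge ** b

def annual_budget(q_series, supply, a, b, dt=86400.0):
    """Very simplified supply-limited budget: each day, export the smaller
    of the rating-curve demand and what the supply can still provide.
    Returns (total exported mass, remaining supply)."""
    stored = supply
    exported = 0.0
    for q in q_series:
        demand = sediment_load(q, a, b) * q * dt  # concentration * Q * time
        take = min(demand, stored)
        stored -= take
        exported += take
    return exported, stored
```

The interesting regimes are exactly the two the abstract names: when `stored` stays large the river is transport-limited and the rating curve governs; once `stored` is drawn down, exports become supply-limited regardless of discharge.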
A NOVEL APPROACH OF LIMITED-RANDOMNESS FOUNTAIN CODES IN DEEP SPACE COMMUNICATION
Gu Shushi; Zhang Qinyu; Jiao Jian
2011-01-01
Digital fountain codes are applied in deep space communication for their rateless, feedback-free forward error correction. However, the long code length and encoding overhead are limiting factors in guaranteeing a considerable recovery probability on power- and buffer-limited equipment in the deep space environment. At the same time, typical fountain decoding is a sub-optimal decoding algorithm. We propose a new approach, the Dependent Sequences Compensation Algorithm (DSCA), to improve the encoding efficiency by restricting the randomness in fountain encoding. The decoding algorithm is also optimized using redundant information in the stopping set. The results show that the optimized method can obtain a 10^-4 decoding failure rate with overhead under 0.20 for code length 500, which indicates the usefulness of the proposed approach in deep space communication.
An Optimization-Based Impedance Approach for Robot Force Regulation with Prescribed Force Limits
R. de J. Portillo-Vélez
2015-01-01
An optimization-based approach for the regulation of excessive or insufficient forces at the end-effector level is introduced. The objective is to minimize the interaction force error at the robot end effector while constraining undesired interaction forces. To that end, a dynamic optimization problem (DOP) is formulated considering a dynamic robot impedance model. Penalty functions are considered in the DOP to handle the constraints on the interaction force. The optimization problem is solved online through the gradient flow approach. Convergence properties are presented, and stability is established when the force limits are considered in the analysis. The effectiveness of our proposal is validated via experimental results for a robotic grasping task.
The limitations of discrete-time approaches to continuous-time contagion dynamics
Fennell, Peter G; Gleeson, James P
2016-01-01
Continuous-time Markov process models of contagions are widely studied, not least because of their utility in predicting the evolution of real-world contagions and in formulating control measures. It is often the case, however, that discrete-time approaches are employed to analyze such models or to simulate them numerically. In such cases, time is discretized into uniform steps and transition rates between states are replaced by transition probabilities. In this paper, we illustrate potential limitations to this approach. We show how discretizing time leads to a restriction on the values of the model parameters that can accurately be studied. We examine numerical simulation schemes employed in the literature, showing how synchronous-type updating schemes can bias discrete-time formalisms when compared against continuous-time formalisms. Event-based simulations, such as the Gillespie algorithm, are proposed as optimal simulation schemes both in terms of replicating the continuous-time process and computational...
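The core discretization issue can be stated in two lines of code: the continuous-time probability that an exponential-clock event of rate lambda fires within a step dt is 1 - exp(-lambda * dt), while discrete-time formalisms commonly substitute lambda * dt, which overshoots and can even exceed 1:

```python
import math

def transition_prob_exact(rate, dt):
    """Continuous-time answer: probability that an event with exponential
    waiting time of the given rate occurs within a step of length dt."""
    return 1.0 - math.exp(-rate * dt)

def transition_prob_linear(rate, dt):
    """The common discrete-time replacement p = rate * dt; a good
    approximation only while rate * dt << 1, and no longer a probability
    at all once rate * dt exceeds 1."""
    return rate * dt
```

This is the restriction on parameter values the abstract refers to: a discrete-time scheme is faithful only in the regime where the two functions agree, which event-based methods such as the Gillespie algorithm avoid entirely.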
Nicy Sebastian
2015-08-01
The essentials of fractional calculus according to different approaches that can be useful for our applications in the theory of probability and stochastic processes are established. In addition, from this fractional integral one can list out almost all of the extended densities for the pathway parameter q < 1 and q → 1. Here, we bring out the idea of thicker- or thinner-tailed models associated with a gamma-type distribution as a limiting case of the pathway operator. Applications of this extended gamma model in statistical mechanics, input-output models, solar spectral irradiance modeling, etc., are discussed.
Transoral robotic approach to parapharyngeal space tumors: Case series and technical limitations.
Boyce, Brian J; Curry, Joseph M; Luginbuhl, Adam; Cognetti, David M
2016-08-01
The transoral robotic approach to parapharyngeal space (PPS) tumors is a new technique with limited data available on its feasibility, safety, and efficacy. We analyzed our experience with transoral robotic excisions of PPS tumors to evaluate the safety and efficacy of this technique. Retrospective chart analysis at a tertiary academic medical center. From July 2010 to June 2014, 17 patients who had transoral robotic excision of PPS tumors were included in the study. Our cohort had an average age of 61.6 years and was 52.9% male. All patients had successful removal of their PPS tumors, and the average size of the tumors was 27.3 cm³ (range 2-80 cm³). Two cases (11.7%) required a cervical incision to assist with tumor removal. The average total operative time was 140.5 minutes. Two PPS pleomorphic adenomas had focal areas of capsule rupture and one was fragmented. The average length of stay was 1.8 days (range 1-7 days), and all patients were discharged on an oral diet. Three patients experienced complications. There was no clinical or radiographic evidence of recurrence. This is the largest single-institution case series of transoral robotic approaches to PPS tumors. We demonstrate that this approach is feasible and safe but also note limitations of the robotic approach for tumors in the far lateral and superior areas of the PPS, which required transcervical assistance. No patients demonstrated recurrent tumor either radiographically or clinically. Level of evidence: 4. Laryngoscope, 126:1776-1782, 2016. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.
Limited Multiple-Writer: An Approach to Dealing with False Sharing in Software DSMs
谢向辉; 韩承德
2000-01-01
False sharing is one of the most important factors impacting the performance of DSM (distributed shared memory) systems. The single-writer approach is simple, but it cannot avoid the ping-pong effect of data-page thrashing, while the multiple-writer approach is effective against false sharing but has high cost. This paper proposes a new approach, called limited multiple-writer (LMW), to handling multiple writers in software DSMs. It distinguishes two kinds of multiple-writer sharing, a lock-based form and a barrier-based form, and handles them with different policies. It discards the Twin and Diff of the traditional multiple-writer approach and simplifies the implementation of multiple-writer support in software DSM systems. The implementation of LMW in a CVM (Coherent Virtual Machine) software DSM system, which is based on a network of workstations, is introduced. Evaluation results show that for applications such as SOR (Successive Over-Relaxation), LU (Lower triangular and Upper triangular), FFT (Fast Fourier Transformation), and IS (Integer Sorting), LMW provides a significant reduction in execution time (11%, 16%, 33% and 46%, respectively) compared with the traditional multiple-writer approach on this platform.
Youssef, Carl A; Smotherman, Carmen R; Kraemer, Dale F; Aldana, Philipp R
2016-04-01
OBJECT The endoscopic endonasal approach (EEA) has been established as an alternative approach to craniovertebral junction (CVJ) pathology in adults. The authors have previously described the nasoaxial line (NAxL) as an accurate predictor of the lower limit of the EEA to the CVJ in adults. The surgical anatomy limiting the EEA to the pediatric CVJ has not been well studied. Furthermore, predicting the lower limit of the EEA in various pediatric age groups is important in surgical planning. To better understand the anatomy affecting the EEA to the CVJ, the authors examined the skull base anatomy relevant to the EEA in children of different age groups and used the NAxL to predict the EEA lower limit in children. METHODS Axial brain CT scans of 39 children with normal skull base anatomy were reconstructed sagittally. Children were divided into 4 groups according to age: 3-6, 7-10, 11-14, and 15-18 years old. The intersection of the NAxL with the odontoid process of C-2 was described for each group. Analyses of variance were used to estimate the effects of age, sex, and the interaction between age and sex on different anatomical parameters relevant to the endonasal corridor (including the length of the hard palate [HPLe], the dimensions of the choana and piriform aperture, and the length of the NAxL to C-2). The effect of the HPLe on the working distance of the NAxL to the odontoid was also estimated using analysis of covariance, controlling for age, sex, and their interaction. RESULTS The NAxL extended to the odontoid process in 38 of the 39 children. Among the 39 children, the NAxL intersected the upper third of the odontoid process in 25 while intersecting the middle third in the remaining 13 children. The measurements of the inferior limits did not differ with age, varying between 9 and 11 mm below the hard palate line at the ventral surface of C-2. Significant increases in the size of the piriform aperture and choana and the HPLe were observed after age 10. The HPLe predicted the
Zhang, Jie; Drinkwater, Bruce W; Dwyer-Joyce, Rob S
2007-05-01
The performance of ultrasonic oil-film thickness measurement in a ball bearing is quantified. A range of different viscosity oils (Shell T68, VG15, and VG5) are used to explore the lowest reflection coefficient, and hence the thinnest oil-film thickness, that the system can measure. The results show a minimum reflection coefficient of 0.07 for both oils VG15 and VG5 and 0.09 for oil T68 at 50 MHz. This corresponds to an oil-film thickness of 0.4 μm for T68 oil. An angular spectrum (or Fourier decomposition) approach is used to analyze the performance of this configuration. This models the interaction of component plane waves with the measurement system and quantifies the effect of the key parameters (transducer aperture, focal length, and center frequency). The simulation shows that for a focused transducer the reflection coefficient tends to a limiting value at small oil-film thickness. For the transducer used in this paper it is shown that the limiting reflection coefficient is 0.05 and that the oil-film measurement errors increase as the reflection coefficient approaches this value. The implications for improved measurement systems are then discussed.
Ye, Chuyang; Murano, Emi; Stone, Maureen; Prince, Jerry L
2015-10-01
The tongue is a critical organ for a variety of functions, including swallowing, respiration, and speech. It contains intrinsic and extrinsic muscles that play an important role in changing its shape and position. Diffusion tensor imaging (DTI) has been used to reconstruct tongue muscle fiber tracts. However, previous studies have been unable to reconstruct the crossing fibers that occur where the tongue muscles interdigitate, which is a large percentage of the tongue volume. To resolve crossing fibers, multi-tensor models on DTI and more advanced imaging modalities, such as high angular resolution diffusion imaging (HARDI) and diffusion spectrum imaging (DSI), have been proposed. However, because of the involuntary nature of swallowing, there is insufficient time to acquire a sufficient number of diffusion gradient directions to resolve crossing fibers while the in vivo tongue is in a fixed position. In this work, we address the challenge of distinguishing interdigitated tongue muscles from limited diffusion magnetic resonance imaging by using a multi-tensor model with a fixed tensor basis and incorporating prior directional knowledge. The prior directional knowledge provides information on likely fiber directions at each voxel, and is computed with anatomical knowledge of tongue muscles. The fiber directions are estimated within a maximum a posteriori (MAP) framework, and the resulting objective function is solved using a noise-aware weighted ℓ1-norm minimization algorithm. Experiments were performed on a digital crossing phantom and in vivo tongue diffusion data including three control subjects and four patients with glossectomies. On the digital phantom, effects of parameters, noise, and prior direction accuracy were studied, and parameter settings for real data were determined. The results on the in vivo data demonstrate that the proposed method is able to resolve interdigitated tongue muscles with limited gradient directions. The distributions of the
Fuentes-Pardo, Angela P; Ruzzante, Daniel E
2017-07-26
Whole-genome resequencing (WGR) is a powerful method for addressing fundamental evolutionary biology questions that have not been fully resolved using traditional methods. WGR includes four approaches: the sequencing of individuals to a high depth of coverage with either unresolved (huWGR) or resolved haplotypes (hrWGR), the sequencing of population genomes to a high depth by mixing equimolar amounts of unlabelled-individual DNA (Pool-seq), and the sequencing of multiple individuals from a population to a low depth (lcWGR). These techniques require the availability of a reference genome. This, along with the still high cost of shotgun sequencing and the large demand for computing resources and storage, has limited their implementation in non-model species with scarce genomic resources and in fields such as conservation biology. Our goal here is to describe the various WGR methods, their pros and cons, and potential applications in conservation biology. WGR offers an unprecedented marker density and surveys a wide diversity of genetic variations not limited to single nucleotide polymorphisms (e.g. structural variants and mutations in regulatory elements), increasing their power for the detection of signatures of selection and local adaptation as well as for the identification of the genetic basis of phenotypic traits and diseases. Currently though, no single WGR approach fulfills all requirements of conservation genetics, and each method has its own limitations and sources of potential bias. We discuss proposed ways to minimize such biases. We envision a not distant future where the analysis of whole genomes becomes a routine task in many non-model species and fields including conservation biology. This article is protected by copyright. All rights reserved.
From A to B: A new approach to the limits of predictability of human mobility patterns
Ikanovic, Edin Lind
2016-01-01
Next place prediction algorithms are invaluable tools, capable of increasing the efficiency of a wide variety of tasks, ranging from reducing the spreading of diseases to better resource management in areas such as urban planning and communication networks. In this work we estimate upper and lower limits on the predictability of human mobility to help assess the performance of competing algorithms. We do this using GPS traces from 604 individuals participating in a multiyear experiment, the Copenhagen Networks Study. Earlier works, focusing on the prediction of a participant's whereabouts in the next time bin, have found very high upper limits (> 90%). We show that these upper limits, at least for some spatiotemporal scales, are mainly driven by the fact that humans tend to stay in the same place for long periods of time. This leads us to propose a new approach, focusing on the prediction of the next Point of Interest. By removing the trivial parts of human mobility behaviour, we show that the predictabil...
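Upper limits of this kind are conventionally obtained from the entropy of a mobility trace via Fano's inequality, solving S = H(Pi) + (1 - Pi) * log2(N - 1) for the maximum predictability Pi. A sketch of that standard construction (the bisection solver is our own, not the paper's code):

```python
import math

def max_predictability(entropy, n_locations):
    """Upper bound Pi_max on next-location predictability from Fano's
    inequality, given the trace entropy (bits) and the number of distinct
    locations N; H below is the binary entropy function."""
    def fano(p):
        h = 0.0
        for x in (p, 1.0 - p):
            if x > 0.0:
                h -= x * math.log2(x)
        return h + (1.0 - p) * math.log2(n_locations - 1)
    # fano is strictly decreasing on [1/N, 1), so bisection applies
    lo, hi = 1.0 / n_locations, 1.0 - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if fano(mid) > entropy:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Lower entropy per step yields a higher ceiling, which is why long stays at one location (near-zero entropy) inflate the > 90% limits the abstract discusses.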
Combinatorial approach to the interpolation method and scaling limits in sparse random graphs
Bayati, Mohsen; Tetali, Prasad
2009-01-01
We establish the existence of free energy limits for several sparse random hypergraph models corresponding to certain combinatorial models on the Erdős-Rényi graph G(N, c/N) and the random r-regular graph G(N, r). For a variety of models, including independent sets, MAX-CUT, Coloring and K-SAT, we prove that the free energy, both at positive and zero temperature, appropriately rescaled, converges to a limit as the size of the underlying graph diverges to infinity. For example, as a special case we prove that the size of a largest independent set in these graphs, normalized by the number of nodes, converges to a limit w.h.p., thus resolving an open problem (see Conjecture 2.20 in [WormaldModelsRandomGraphs], as well as [Aldous:FavoriteProblems], [BollobasRiordanMetrics], [JansonThomason], and [AldousSteele:survey]). Our approach is based on extending and simplifying the interpolation method developed by Guerra and Toninelli [GuerraTon] and Franz and Leone [FranzLeone],...
Wagai, R.; Mayer, L. M.
2014-12-01
Positive co-variation of organic matter (OM) with iron and aluminum phases has been known for decades in soil and, in the case of OM-Fe, in marine sediments. More recent studies point to the metal control on the mean residence time of organic carbon in soils, suggesting that a better understanding of the role of these metal phases and the nature of these organo-metal associations would help to improve models of soil OM dynamics. We developed a selective dissolution approach to assess these associations (Wagai and Mayer, 2007; Wagai et al., 2013). By taking advantage of well-established extraction techniques targeted to dissolve specific metal and aluminosilicate phases in soil, we quantified the amounts of OM co-dissolved by the selective dissolution of these inorganic phases. The inherent limitations of this conceptually simple approach include the presence of C-based compounds (often as complexing agents for metals) in the extractants and the lack of selectivity when dissolving specific inorganic phases. The former was resolved by using nitrogen (N), instead of C, as a surrogate for OM because (i) soil N is mostly present as soil OM with a relatively narrow C:N ratio, and (ii) the extractants are N free. We were able to partially overcome the lack-of-selectivity problem by comparing the co-dissolution of OM from a variety of extractants that use reductive, complexation, and acid/alkaline dissolutions. The potential advantages of our approach include the ability (i) to estimate the contribution of specific inorganic phases to OM stabilization, and (ii) to infer the possible modes of the organo-mineral associations that were extracted from field soils (e.g., adsorptive association vs. coprecipitation of organo-metallic complexes). In this presentation, we will further consider the advantages and limitations of this approach (e.g., methodological cautions), present some of the previous and new findings gained from this approach (including its application to
The maximum residue limits and determination of pesticide residues in asparagus
宋欢; 车兰兰; 林勤保; 王蓉珍
2012-01-01
Common pesticides used in asparagus planting and the maximum residue limits (MRLs) set for them in the European Union, Japan, the USA and China are summarized, and the application of relevant determination methods for pesticide residues to asparagus, vegetables and fruit is reviewed. The paper aims to provide a reference for pesticide residue determination and control in Chinese asparagus production.
Renzler, Michael; Daxner, Matthias; Kranabetter, Lorenz; Kuhn, Martin; Scheier, Paul; Echt, Olof
2016-01-01
Electron ionization of helium droplets doped with cesium or potassium results in doubly and, for cesium, triply charged cluster ions. The smallest observable doubly charged clusters are $Cs_{9}^{2+}$ and $K_{11}^{2+}$; they are a factor two smaller than reported previously. The size of potassium dications approaches the Rayleigh limit nRay for which the fission barrier is calculated to vanish, i.e. their fissilities are close to 1. Cesium dications are even smaller than nRay, implying that their fissilities have been significantly overestimated. Triply charged cesium clusters as small as $Cs_{19}^{3+}$ are observed; they are a factor 2.6 smaller than previously reported. Mechanisms that may be responsible for enhanced formation of clusters with high fissilities are discussed.
Byskov, Jens; Marchal, Bruno; Maluka, Stephen
2014-01-01
researchers was formed to implement, and continually assess and improve, the application of the four conditions. Researchers evaluated the intervention using qualitative and quantitative data collection and analysis methods. RESULTS: The values underlying the AFR approach were in all three districts well … to a broadened engagement of health team members and other stakeholders in priority setting and other decision-making processes. CONCLUSIONS: District stakeholders were able to take greater charge of closing the gap between nationally set planning on one hand and the local realities and demands of the served … communities on the other within the limited resources at hand. This study thus indicates that the operationalization of the four broadly defined and linked conditions is both possible and seems to be responding to an actual demand. This provides arguments for the continued application and further assessment …
Fission of multiply charged alkali clusters in helium droplets - approaching the Rayleigh limit.
Renzler, Michael; Harnisch, Martina; Daxner, Matthias; Kranabetter, Lorenz; Kuhn, Martin; Scheier, Paul; Echt, Olof
2016-04-21
Electron ionization of helium droplets doped with sodium, potassium or cesium results in doubly and, for cesium, triply charged cluster ions. The smallest observable doubly charged clusters are Na9(2+), K11(2+), and Cs9(2+); they are a factor two to three smaller than reported previously. The size of sodium and potassium dications approaches the Rayleigh limit nRay for which the fission barrier is calculated to vanish, i.e. their fissilities are close to 1. Cesium dications are even smaller than nRay, implying that their fissilities have been significantly overestimated. Triply charged cesium clusters as small as Cs19(3+) are observed; they are a factor 2.6 smaller than previously reported. Mechanisms that may be responsible for enhanced formation of clusters with high fissilities are discussed.
Abellán-Nebot, J. V.; Liu, J.; Romero, F.
2009-11-01
The State Space modelling approach has recently been proposed as an engineering-driven technique for part quality prediction in Multistage Machining Processes (MMP). Current State Space models incorporate fixture and datum variations in the multi-stage variation propagation, without explicitly considering common operation variations such as machine-tool thermal distortions, cutting-tool wear, cutting-tool deflections, etc. This paper shows the limitations of the current State Space model through an experimental case study where the effects of spindle thermal expansion, cutting-tool flank wear and locator errors are introduced. The paper also discusses the extension of the current State Space model to include operation variations and its potential benefits.
The limits of nanomechanical applications of shape memory alloys: an optical approach
Kolloch, Andreas; Boneberg, Johannes; Leiderer, Paul [Universitaet Konstanz (Germany)]
2009-07-01
Shape Memory Alloys (SMA), with their high strain and stress values for small temperature changes and their excellent durability against environmental influences, may prove to be ideal candidates as the driving force of nanomechanical devices. Despite this promising potential, however, very little is known about the properties of SMA materials, and in particular thin films, at the nanoscale. Our work concentrates on the classic SMA Nitinol, an intermetallic compound of nickel and titanium. While completely reversible, the martensite-austenite transition of this material is accompanied by large strain and stress changes of up to 6-8% and 600 MPa, respectively. The project employs an ultrafast thermo-optical approach to investigate whether there is a lower thickness limit for the martensitic phase transition in NiTi SMAs and what the transition speed of the phase change in these materials is.
杨艳红; 姜兆兴; 赵敏
2015-01-01
Pesticide maximum residue limits (MRLs) are a critical legislative basis for food safety and a key technical indicator for the production of food and agro-products. The methods used to establish MRLs not only influence the sustainable development of the agricultural industry, but also play an active role in improving the international competitiveness of the Chinese agricultural industry. This paper briefly introduces the current status of MRL standards at home and abroad, the classes of pesticides covered, and the principles used to establish pesticide maximum residue limits; summarizes the methods for setting MRLs from field-trial data; and compares the calculation schemes proposed by the European Union (EU), the members of the North American Free Trade Agreement (NAFTA), the Organisation for Economic Co-operation and Development (OECD) and the Joint Meeting on Pesticide Residues (JMPR).
A novel approach to derive halo-independent limits on dark matter properties
Ferrer, Francesc [Physics Department and McDonnell Center for the Space Sciences,Washington University in Saint Louis,1 Brookings Drive - CB 1105, St Louis, MO 63130 (United States); Ibarra, Alejandro; Wild, Sebastian [Physik-Department T30d, Technische Universität München,James-Franck-Straße, 85748 Garching (Germany)
2015-09-21
We propose a method that makes it possible to place an upper limit on the dark matter elastic scattering cross section with nucleons which is independent of the velocity distribution. Our approach combines null results from direct detection experiments with indirect searches at neutrino telescopes, and goes beyond previous attempts to remove astrophysical uncertainties in that it directly constrains the particle physics properties of the dark matter. The resulting halo-independent upper limits on the scattering cross section of dark matter are remarkably strong and reach $\sigma_{\mathrm{SI}}^{p} \lesssim 10^{-43}\,(10^{-42})~\mathrm{cm}^{2}$ and $\sigma_{\mathrm{SD}}^{p} \lesssim 10^{-37}\,(3\times 10^{-37})~\mathrm{cm}^{2}$, for dark matter particles of $m_{\mathrm{DM}} \sim 1$ TeV annihilating into $W^{+}W^{-}$ ($b\bar{b}$), assuming $\rho_{\mathrm{loc}} = 0.3~\mathrm{GeV/cm^{3}}$.
Analysis of enamel development using murine model systems: approaches and limitations
Pugach, Megan K.; Gibson, Carolyn W.
2014-01-01
A primary goal of enamel research is to understand and potentially treat or prevent enamel defects related to amelogenesis imperfecta (AI). Rodents are ideal models to assist our understanding of how enamel is formed because they are easily genetically modified, and their continuously erupting incisors display all stages of enamel development and mineralization. While numerous methods have been developed to generate and analyze genetically modified rodent enamel, it is crucial to understand the limitations and challenges associated with these methods in order to draw appropriate conclusions that can be applied translationally, to AI patient care. We have highlighted methods involved in generating and analyzing rodent enamel and potential approaches to overcoming limitations of these methods: (1) generating transgenic, knockout, and knockin mouse models, and (2) analyzing rodent enamel mineral density and functional properties (structure and mechanics) of mature enamel. There is a need for a standardized workflow to analyze enamel phenotypes in rodent models so that investigators can compare data from different studies. These methods include analyses of gene and protein expression, developing enamel histology, enamel pigment, degree of mineralization, enamel structure, and mechanical properties. Standardization of these methods with regard to stage of enamel development and sample preparation is crucial, and ideally investigators can use correlative and complementary techniques with the understanding that developing mouse enamel is dynamic and complex. PMID:25278900
Analysis of enamel development using murine model systems: approaches and limitations.
Megan K Pugach
2014-09-01
A primary goal of enamel research is to understand and potentially treat or prevent enamel defects related to amelogenesis imperfecta (AI). Rodents are ideal models to assist our understanding of how enamel is formed because they are easily genetically modified, and their continuously erupting incisors display all stages of enamel development and mineralization. While numerous methods have been developed to generate and analyze genetically modified rodent enamel, it is crucial to understand the limitations and challenges associated with these methods in order to draw appropriate conclusions that can be applied translationally, to AI patient care. We have highlighted methods involved in generating and analyzing rodent enamel and potential approaches to overcoming limitations of these methods: (1) generating transgenic, knockout and knockin mouse models, and (2) analyzing rodent enamel mineral density and functional properties (structure, mechanics) of mature enamel. There is a need for a standardized workflow to analyze enamel phenotypes in rodent models so that investigators can compare data from different studies. These methods include analyses of gene and protein expression, developing enamel histology, enamel pigment, degree of mineralization, enamel structure and mechanical properties. Standardization of these methods with regard to stage of enamel development and sample preparation is crucial, and ideally investigators can use correlative and complementary techniques with the understanding that developing mouse enamel is dynamic and complex.
MDI Biological Laboratory Arsenic Summit: Approaches to Limiting Human Exposure to Arsenic.
Stanton, Bruce A; Caldwell, Kathleen; Congdon, Clare Bates; Disney, Jane; Donahue, Maria; Ferguson, Elizabeth; Flemings, Elsie; Golden, Meredith; Guerinot, Mary Lou; Highman, Jay; James, Karen; Kim, Carol; Lantz, R Clark; Marvinney, Robert G; Mayer, Greg; Miller, David; Navas-Acien, Ana; Nordstrom, D Kirk; Postema, Sonia; Rardin, Laurie; Rosen, Barry; SenGupta, Arup; Shaw, Joseph; Stanton, Elizabeth; Susca, Paul
2015-09-01
This report is the outcome of the meeting "Environmental and Human Health Consequences of Arsenic" held at the MDI Biological Laboratory in Salisbury Cove, Maine, August 13-15, 2014. Human exposure to arsenic represents a significant health problem worldwide that requires immediate attention according to the World Health Organization (WHO). One billion people are exposed to arsenic in food, and more than 200 million people ingest arsenic via drinking water at concentrations greater than international standards. Although the US Environmental Protection Agency (EPA) has set a limit of 10 μg/L in public water supplies and the WHO has recommended an upper limit of 10 μg/L, recent studies indicate that these limits are not protective enough. In addition, there are currently few standards for arsenic in food. Those who participated in the Summit support citizens, scientists, policymakers, industry, and educators at the local, state, national, and international levels to (1) establish science-based evidence for setting standards at the local, state, national, and global levels for arsenic in water and food; (2) work with government agencies to set regulations for arsenic in water and food, to establish and strengthen non-regulatory programs, and to strengthen collaboration among government agencies, NGOs, academia, the private sector, industry, and others; (3) develop novel and cost-effective technologies for identification and reduction of exposure to arsenic in water; (4) develop novel and cost-effective approaches to reduce arsenic exposure in juice, rice, and other relevant foods; and (5) develop an Arsenic Education Plan to guide the development of science curricula as well as community outreach and education programs that serve to inform students and consumers about arsenic exposure and engage them in well water testing and development of remediation strategies.
Mottin, Stephane; Panasenko, Grigory; Ganesh, S Sivaji
2010-12-31
In biophotonics, the light absorption in a tissue is usually modeled by the Helmholtz equation with two constant parameters, the scattering coefficient and the absorption coefficient. This classic approximation of "haemoglobin diluted everywhere" (constant absorption coefficient) corresponds to the classical homogenization approach. The paper discusses the limitations of this approach. The scattering coefficient is supposed to be constant (equal to one) while the absorption coefficient is equal to zero everywhere except for a periodic set of thin parallel strips simulating the blood vessels, where it is a large parameter ω. The problem contains two other parameters which are small: ε, the ratio of the distance between the axes of vessels to the characteristic macroscopic size, and δ, the ratio of the thickness of thin vessels to the period. We construct asymptotic expansions in two cases: ε → 0, ω → ∞, δ → 0, ωδ → ∞, ε²ωδ → 0 and ε → 0, ω → ∞, δ → 0, ε²ωδ → ∞, and prove that in the first case the classical homogenization (averaging) of the differential equation is true while in the second case it is wrong. This result may be applied in biomedical optics, for instance, in the modeling of the skin and cosmetics.
Stange, C. F.; Spott, O.
2009-04-01
Improvements in the analysis of stable isotopes, higher measurement capacity, and faster and more complex analysis methods allow a more detailed insight into the complexity of N cycling in soils or sediments, in particular into the formation and emission of N2 gas. Knowledge of the site-specific N2 to N2O ratio of denitrification, and perhaps of other processes, is important for developing sustainable land use strategies to reduce GHG emissions. Adapted stable isotope approaches are an irreplaceable tool for process identification, process quantification and process separation. In recent years a few new processes were found (e.g. anammox, codenitrification) and new stable isotope approaches for quantification and process separation were published (Wrage et al.). Source partitioning of N gas production in soils is inherently challenging, but is vital to better understand controls on the different processes, with a view to developing appropriate management practices for mitigation of harmful N gases (e.g. N2O) (Baggs, 2008). Recently, dual-isotope labelling approaches (Wrage et al., 2005) and triplet 15N tracer experiments (TTE) with 15N labelling of different pools (e.g. Müller et al., 2006; Russow et al., 2009) have been developed to differentiate between more than two processes. The high number of simultaneously occurring processes during soil N cycling (Hayatsu et al. 2008) limits the easy applicability of isotope approaches (Spott and Stange 2007; Wrage et al. 2005; Phillips and Gregg, 2003), and therefore partitioning and process quantification are often afflicted with high uncertainties (Ambus et al., 2006). Especially the heterogeneity of environmental conditions in soils caused by the soil structure is difficult to handle (e.g. homogeneously labelling a soil). Hence, spatially separated processes in combination with high turnover rates (gross production and consumption) can produce different pools of one substrate in the soil (Russow et al. 2009) and
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
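The generalized MaxEnt idea described in this abstract can be sketched numerically: treat the classic MaxEnt solution as a function of the constraint value, then push a Gaussian density on that value through it. The die example and all numerical values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def maxent_die(mu, vals=np.arange(1, 7), tol=1e-10):
    """Classic MaxEnt for a die with a mean constraint: p_i ∝ exp(lam * i),
    with lam chosen (by bisection; the mean is monotone in lam) so that
    the distribution's mean equals mu."""
    lo, hi = -50.0, 50.0
    def mean(lam):
        w = np.exp(lam * vals)
        p = w / w.sum()
        return (p * vals).sum()
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean(mid) < mu:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = np.exp(lam * vals)
    return w / w.sum()

# Classic MaxEnt: treat the observed mean 4.5 as exact
p_point = maxent_die(4.5)

# Generalized MaxEnt: the constraint value is uncertain, mu ~ N(4.5, 0.1^2);
# propagating samples of mu through the MaxEnt map yields a density over p
rng = np.random.default_rng(0)
mus = np.clip(rng.normal(4.5, 0.1, 2000), 1.05, 5.95)
samples = np.array([maxent_die(m) for m in mus])
p_mean, p_sd = samples.mean(axis=0), samples.std(axis=0)
```

The sampling step stands in for the paper's explicit density calculation; it illustrates how uncertainty on a constraint becomes uncertainty over the MaxEnt probabilities.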
A New Approach for 3D Ocean Reconstruction from Limited Observations
Xiao, X.
2014-12-01
Satellites can measure ocean surface height and temperature with sufficient spatial and temporal resolution to capture mesoscale features across the globe. Measurements of the ocean's interior, however, remain sparse and irregular, thus the dynamical inference of subsurface flows is necessary to interpret surface measurements. The most common (and accurate) approach is to incorporate surface measurements into a data-assimilating forward ocean model, but this approach is expensive and slow, and thus completely impractical for time-critical needs, such as offering guidance to ship-based observational campaigns. Two recently developed approaches have made use of the apparent partial consistency of upper ocean dynamics with quasigeostrophic flows that take into account surface buoyancy gradients (i.e. the "surface quasigeostrophic" (SQG) model) to "reconstruct" the interior flow from knowledge of surface height and buoyancy. Here we improve on these methods in three ways: (1) we adopt a modal decomposition that represents the surface and interior dynamics in an efficient way, allowing the separation of surface energy from total energy; (2) we make use of instantaneous vertical profile observations (e.g. from ARGO data) to improve the reconstruction of eddy variables at depth; and (3) we use advanced statistical methods to choose the optimal modes for the reconstruction. The method is tested using a series of high horizontal and vertical resolution quasigeostrophic simulations, with a wide range of surface buoyancy and interior potential vorticity gradient combinations. In addition, we apply the method to output from a very high resolution primitive equation simulation of a forced and dissipated baroclinic front in a channel. Our new method is systematically compared to the existing methods as well. Its advantages and limitations will be discussed.
The Limitations of Existing Approaches in Improving MicroRNA Target Prediction Accuracy.
Loganantharaj, Rasiah; Randall, Thomas A
2017-01-01
MicroRNAs (miRNAs) are small (18-24 nt) endogenous RNAs found across diverse phyla involved in posttranscriptional regulation, primarily downregulation of mRNAs. Experimentally determining miRNA-mRNA interactions can be expensive and time-consuming, making the accurate computational prediction of miRNA targets a high priority. Since miRNA-mRNA base pairing in mammals is not perfectly complementary and only a fraction of the identified motifs are real binding sites, accurately predicting miRNA targets remains challenging. The limitations and bottlenecks of existing algorithms and approaches are discussed in this chapter. A new miRNA-mRNA interaction algorithm was implemented in Python (TargetFind) to capture three different modes of association and to maximize detection sensitivity to around 95% for mouse (mm9) and human (hg19) reference data. For human (hg19) data, the prediction accuracy with any one feature among evolutionarily conserved score, multiple targets in a UTR or changes in free energy varied within a close range from 63.5% to 66%. When the results of these features are combined with majority voting, the expected prediction accuracy increases to 69.5%. When all three features are used together, the average best prediction accuracy with tenfold cross validation from the classifiers naïve Bayes, support vector machine, artificial neural network, and decision tree were, respectively, 66.5%, 67.1%, 69%, and 68.4%. The results reveal the advantages and limitations of these approaches. When comparing different sets of features on their strength in predicting true hg19 targets, evolutionarily conserved score slightly outperformed all other features based on thermostability, and target multiplicity. The sophisticated supervised learning algorithms did not improve the prediction accuracy significantly compared to a simple threshold based approach on conservation score or combining the results of each feature with majority agreements. The targets from randomly
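The majority-voting combination of per-feature calls described in this abstract is straightforward to sketch. The feature names and toy calls below are hypothetical, for illustration only.

```python
from collections import Counter

def majority_vote(calls):
    """Combine per-feature binary calls (True = predicted miRNA target):
    the candidate is accepted when more features vote True than False."""
    votes = Counter(calls)
    return votes[True] > votes[False]

# Hypothetical per-feature calls for one candidate target site, in the order
# (conservation score, target multiplicity, free-energy change):
assert majority_vote([True, True, False]) is True
assert majority_vote([False, True, False]) is False
```

With an odd number of features, as here, ties cannot occur, so the combined call is always well defined.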
Kremser, S.; Bodeker, G. E.; Lewis, J.
2014-01-01
A Climate Pattern-Scaling Model (CPSM) that simulates global patterns of climate change, for a prescribed emissions scenario, is described. A CPSM works by quantitatively establishing the statistical relationship between a climate variable at a specific location (e.g. daily maximum surface temperature, Tmax) and one or more predictor time series (e.g. global mean surface temperature, Tglobal) - referred to as the "training" of the CPSM. This training uses a regression model to derive fit coefficients that describe the statistical relationship between the predictor time series and the target climate variable time series. Once that relationship has been determined, and given the predictor time series for any greenhouse gas (GHG) emissions scenario, the change in the climate variable of interest can be reconstructed - referred to as the "application" of the CPSM. The advantage of using a CPSM rather than a typical atmosphere-ocean global climate model (AOGCM) is that the predictor time series required by the CPSM can usually be generated quickly using a simple climate model (SCM) for any prescribed GHG emissions scenario and then applied to generate global fields of the climate variable of interest. The training can be performed either on historical measurements or on output from an AOGCM. Using model output from 21st century simulations has the advantage that the climate change signal is more pronounced than in historical data and therefore a more robust statistical relationship is obtained. The disadvantage of using AOGCM output is that the CPSM training might be compromised by any AOGCM inadequacies. For the purposes of exploring the various methodological aspects of the CPSM approach, AOGCM output was used in this study to train the CPSM. These investigations of the CPSM methodology focus on monthly mean fields of daily temperature extremes (Tmax and Tmin). The methodological aspects of the CPSM explored in this study include (1) investigation of the advantage
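The "training" and "application" stages described for the CPSM can be sketched as an ordinary least-squares fit at a single grid point, regressing the local target variable on a global predictor. The synthetic series below are stand-ins for AOGCM output and an SCM-generated scenario, not real data; the coefficients are illustrative.

```python
import numpy as np

def train_cpsm(t_global, t_local):
    """Training stage: least-squares fit t_local ≈ a + b * t_global
    at one grid point, returning the fit coefficients (a, b)."""
    A = np.column_stack([np.ones_like(t_global), t_global])
    coef, *_ = np.linalg.lstsq(A, t_local, rcond=None)
    return coef

def apply_cpsm(coef, t_global_scenario):
    """Application stage: reconstruct the local variable for a new
    predictor time series (e.g. produced by a simple climate model)."""
    a, b = coef
    return a + b * t_global_scenario

# Synthetic "training" data standing in for 21st-century AOGCM output
years = np.arange(2000, 2100)
t_global = 0.03 * (years - 2000) + 0.1 * np.sin(years)          # predictor (Tglobal)
noise = 0.05 * np.random.default_rng(1).normal(size=years.size)
t_max_local = 1.8 * t_global + 12.0 + noise                      # target (local Tmax)

coef = train_cpsm(t_global, t_max_local)
# Apply to a stronger-emissions predictor series from a hypothetical SCM run
t_max_scenario = apply_cpsm(coef, 0.05 * (years - 2000))
```

The per-grid-point fit is the essential idea; a full CPSM repeats it over every grid cell and calendar month to produce global fields.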
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Oh, Jung Hun; Craft, Jeffrey M.; Townsend, Reid; Deasy, Joseph O.; Bradley, Jeffrey D.; El Naqa, Issam
2011-01-01
discrimination between RP and non-RP patients (p = 0.002). These results suggest that the proposed methodology based on longitudinal proteomics analysis and a novel bioinformatics ranking algorithm is a potentially promising approach for the challenging problem of identifying relevant biomarkers in sample-limited clinical applications. PMID:21226504
Francesca Vinchi
2013-01-01
Hemolysis results in the release of hemoglobin and heme into the bloodstream and is associated with the development of several pathologic conditions of different etiology, including hemoglobinopathies, hemolytic anemias, bacterial infections, malaria, and trauma. In addition, hemolysis is associated with surgical procedures, hemodialysis, blood transfusion, and other conditions in which mechanical forces can lead to red blood cell rupture. Free plasma hemoglobin and heme are toxic for the vascular endothelium since heme iron promotes oxidative stress that causes endothelial activation responsible for vasoocclusive events and thrombus formation. Moreover, free hemoglobin scavenges nitric oxide, reducing its bioavailability, and heme favours ROS production, thus causing oxidative nitric oxide consumption. This results in the dysregulation of the endothelium vasodilator:vasoconstrictor balance, leading to severe vasoconstriction and hypertension. Thus, endothelial dysfunction and impairment of cardiovascular function represent a common feature of pathologic conditions associated with hemolysis. In this review, we discuss how hemoglobin/heme released following hemolysis may affect vascular function and summarise the therapeutic approaches available to limit hemolysis-driven endothelial dysfunction. Particular emphasis is put on recent data showing the beneficial effects obtained through the use of the plasma heme scavenger hemopexin in counteracting heme-mediated endothelial damage in mouse models of hemolytic diseases.
An integrated nano-scale approach to profile miRNAs in limited clinical samples
Seumois, Grégory; Vijayanand, Pandurangan; Eisley, Christopher J; Omran, Nada; Kalinke, Lukas; North, Mal; Ganesan, Asha P; Simpson, Laura J; Hunkapiller, Nathan; Moltzahn, Felix; Woodruff, Prescott G; Fahy, John V; Erle, David J; Djukanovic, Ratko; Blelloch, Robert; Ansel, K Mark
2012-01-01
Profiling miRNA expression in cells that directly contribute to human disease pathogenesis is likely to aid the discovery of novel drug targets and biomarkers. However, tissue heterogeneity and the limited amount of human diseased tissue available for research purposes present fundamental difficulties that often constrain the scope and potential of such studies. We established a flow cytometry-based method for isolating pure populations of pathogenic T cells from bronchial biopsy samples of asthma patients, and optimized a high-throughput nano-scale qRT-PCR method capable of accurately measuring 96 miRNAs in as little as 100 cells. Comparison of circulating and airway T cells from healthy and asthmatic subjects revealed asthma-associated and tissue-specific miRNA expression patterns. These results establish the feasibility and utility of investigating miRNA expression in small populations of cells involved in asthma pathogenesis, and set a precedent for application of our nano-scale approach in other human diseases. The microarray data from this study (Figure 7) has been submitted to the NCBI Gene Expression Omnibus (GEO; http://ncbi.nlm.nih.gov/geo) under accession no. GSE31030. PMID:23304658
Sherwin, Jason
At the start of the 21st century, the topic of complexity remains a formidable challenge in engineering, science and other aspects of our world. It seems that when disaster strikes, it is because some complex and unforeseen interaction causes the unfortunate outcome. Why did the financial system of the world melt down in 2008--2009? Why are global temperatures on the rise? These questions and others like them are difficult to answer because they pertain to contexts that require lengthy descriptions. In other words, these contexts are complex. But we as human beings are able to observe and recognize this thing we call 'complexity'. Furthermore, we recognize that there are certain elements of a context that form a system of complex interactions---i.e., a complex system. Many researchers have even noted similarities between seemingly disparate complex systems. Do sub-atomic systems bear resemblance to weather patterns? Do human-based economic systems bear resemblance to macroscopic flows? Where do we draw the line in their resemblance? These are the kinds of questions that are asked in complex systems research. And the ability to recognize complexity is not limited to analytic research. Rather, there are many known examples of humans who not only observe and recognize but also operate complex systems. How do they do it? Is there something superhuman about these people, or is there something common to human anatomy that makes it possible to fly a plane? Or to drive a bus? Or to operate a nuclear power plant? Or to play Chopin's etudes on the piano? In each of these examples, a human being operates a complex system of machinery, whether it is a plane, a bus, a nuclear power plant or a piano. What is the common thread running through these abilities? The study of situational awareness (SA) examines how people perform these types of remarkable feats. It is not a bottom-up science, though, because it relies on finding general principles running through a host of varied
Song, F.; Monsen, A.; Li, Z. S.; Choi, E. -M.; MacManus-Driscoll, J. L.; Xiong, J.; Jia, Q. X.; Wahlstrom, E.; Wells, J. W.
2012-01-01
The surface and near-surface chemical composition of BiFe0.5Mn0.5O3 has been studied using a combination of low-photon-energy synchrotron photoemission spectroscopy and a newly developed maximum entropy finite element model, from which it is possible to extract the depth-dependent chemical composition.
Thompson, William L.; Lee, Danny C.
2000-11-01
Many anadromous salmonid stocks in the Pacific Northwest are at their lowest recorded levels, which has raised questions regarding their long-term persistence under current conditions. There are a number of factors, such as freshwater spawning and rearing habitat, that could potentially influence their numbers. Therefore, we used the latest advances in information-theoretic methods in a two-stage modeling process to investigate relationships between landscape-level habitat attributes and maximum recruitment of 25 index stocks of chinook salmon (Oncorhynchus tshawytscha) in the Columbia River basin. Our first-stage model selection results indicated that the Ricker-type stock-recruitment model with a constant Ricker a (i.e., recruits-per-spawner at low numbers of fish) across stocks was the only plausible one given these data, which contrasted with previous unpublished findings. Our second-stage results revealed that maximum recruitment of chinook salmon had a strongly negative relationship with the percentage of surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and private moderate-high impact managed forest. That is, our model predicted that average maximum recruitment of chinook salmon would decrease by at least 247 fish for every increase of 33% in surrounding subwatersheds categorized as predominantly containing U.S. Forest Service and privately managed forest. Conversely, mean annual air temperature had a positive relationship with salmon maximum recruitment, with an average increase of at least 179 fish for every 2 °C increase in mean annual air temperature.
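The Ricker-type stock-recruitment model named in this abstract has a simple closed form, and its maximum recruitment (the quantity modeled in the second stage) follows directly from it. The parameter values below are made up for illustration; they are not the fitted Columbia River basin values.

```python
import math

def ricker_recruits(spawners, alpha, beta):
    """Ricker stock-recruitment curve: R = alpha * S * exp(-beta * S),
    where alpha is recruits-per-spawner at low stock size."""
    return alpha * spawners * math.exp(-beta * spawners)

def ricker_max_recruitment(alpha, beta):
    """Recruitment peaks at S = 1/beta, giving R_max = alpha / (beta * e)."""
    return alpha / (beta * math.e)

alpha, beta = 3.0, 1e-3   # illustrative values only
s_peak = 1.0 / beta
assert abs(ricker_recruits(s_peak, alpha, beta)
           - ricker_max_recruitment(alpha, beta)) < 1e-9
```

Holding alpha constant across stocks, as the first-stage result suggests, means differences in maximum recruitment between stocks are carried entirely by beta.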
Bollerslev, Anne Mette; Nauta, Maarten; Hansen, Tina Beck
2017-01-01
Microbiological limits are widely used in food processing as an aid to reduce consumer exposure to hazardous microorganisms. However, in pork, the prevalence and concentrations of Salmonella are generally low, and microbiological limits are not considered an efficient tool to support … hygiene interventions. The objective of the present study was to develop an approach that makes it possible to define potential risk-based microbiological limits for an indicator, enterococci, in order to evaluate the risk from potential growth of Salmonella. A positive correlation between … products that have enterococci concentrations above 5 log CFU/g. This illustrates that our approach can be used to evaluate the potential effect of different microbiological limits, and the perspective of this novel approach is therefore that it can be used for the definition of a risk-based microbiological …
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Microcanonical origin of the maximum entropy principle for open systems.
Lee, Julian; Pressé, Steve
2012-10-01
There are two distinct approaches for deriving the canonical ensemble. The canonical ensemble either follows as a special limit of the microcanonical ensemble or alternatively follows from the maximum entropy principle. We show the equivalence of these two approaches by applying the maximum entropy formulation to a closed universe consisting of an open system plus bath. We show that the target function for deriving the canonical distribution emerges as a natural consequence of partial maximization of the entropy over the bath degrees of freedom alone. By extending this mathematical formalism to dynamical paths rather than equilibrium ensembles, the result provides an alternative justification for the principle of path entropy maximization as well.
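The equivalence the abstract refers to can be recalled in the standard Lagrange-multiplier form (a textbook sketch of entropy maximization with a mean-energy constraint, not the paper's bath-marginalization derivation):

```latex
% Maximize entropy subject to normalization and mean energy:
S[p] = -\sum_i p_i \ln p_i, \qquad \sum_i p_i = 1, \qquad \sum_i p_i E_i = U
% Lagrangian with multipliers \alpha, \beta:
\mathcal{L} = -\sum_i p_i \ln p_i - \alpha\Big(\sum_i p_i - 1\Big) - \beta\Big(\sum_i p_i E_i - U\Big)
% Stationarity, \partial \mathcal{L}/\partial p_i = -\ln p_i - 1 - \alpha - \beta E_i = 0,
% yields the canonical (Boltzmann) distribution:
p_i = \frac{e^{-\beta E_i}}{Z}, \qquad Z = \sum_i e^{-\beta E_i}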
赵明; 谢松梅; 杨劲; 魏敏吉
2014-01-01
In China's assessment of bioequivalence, the equivalence interval limits for maximum concentration (Cmax) are in a transition between old and new standards; when a Cmax result falls between the old and the new limits, how to reach a review decision needs careful consideration. This paper introduces some principles and lines of thought for judging bioequivalence on the area under the curve (AUC) and Cmax, illustrated with two drug-review examples, which may be helpful for the development and review of generic drugs.
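The conventional average-bioequivalence decision rule that such reviews apply can be sketched as below (the specific old and new Cmax interval values discussed in the paper are not reproduced here):

```python
def within_be_limits(ci_low, ci_high, low=0.80, high=1.25):
    # Average bioequivalence: the 90% confidence interval of the
    # test/reference geometric mean ratio (for AUC or Cmax) must lie
    # entirely within the acceptance interval, conventionally 0.80-1.25.
    return low <= ci_low and ci_high <= high

print(within_be_limits(0.85, 1.10))  # True: CI inside 0.80-1.25
print(within_be_limits(0.78, 1.10))  # False: lower bound falls outside
```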
Jha, S. W.; Pan, Y.-C.; Foley, R. J.; Rest, A.; Scolnic, D.; Kotze, M.
2016-03-01
We obtained SALT (+RSS) spectroscopy of LSQ16acz (= PS16bby = SN 2016bew; Baltay et al. 2013, PASP, 125, 683) on 2016 Mar 14.9 UT, covering the wavelength range 340-920 nm. Cross-correlation of the spectrum with a template library using SNID (Blondin & Tonry 2007, ApJ, 666, 1024) shows LSQ16acz is a type-Ia supernova a few days before maximum light.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...
Plant nutrition between chemical and physiological limitations: is a sustainable approach possible?
Zeno Varanini
2008-04-01
Estimates of world population growth and the extent of malnutrition caused by lack of food or by deficits of specific micronutrients highlight the importance of plant nutrition in the context of sustainable development. Besides these aspects, which compel the use of fertilizers, the problem of nutrient use efficiency by plants is far from solved: recent estimates for world cereal production indicate that the use efficiency of nitrogen fertilizers is no higher than 35%. The values are even lower for phosphorus fertilizers (estimated use efficiency between 10 and 30%), worsened by the fact that, with present technology and on the basis of present knowledge, the phosphorus reserves used for fertilizer production are expected to last less than 100 years. Efficiency problems have also been raised recently concerning the use of synthetic chelates to alleviate micronutrient deficiencies: these compounds have been shown to be extremely mobile along the soil profile and only partially utilizable by plants. The low uptake efficiency of nutrients from soil is, on the one hand, caused by several intrinsic characteristics of the biogeochemical cycle of nutrients and, on the other, appears to be limited by biochemical and physiological aspects of nutrient absorption. Only recently has the complexity of these aspects been appreciated, together with the realization that breeding programs had neglected them. This review discusses aspects related to the acquisition of a macro- (N) and a micro-nutrient (Fe). The aim is to show that improvements in mineral nutrient use efficiency can be achieved only through a scientific approach that considers the whole soil-plant system. Particular emphasis is put on aspects of molecular physiology relevant to improving nutrient capture efficiency, as well as on the role of naturally occurring organic molecules in optimizing the nutritional capacity of …
Plant nutrition between chemical and physiological limitations: is a sustainable approach possible?
Roberto Pinton
2011-02-01
Estimates of world population growth and the extent of malnutrition caused by lack of food or by deficits of specific micronutrients highlight the importance of plant nutrition in the context of sustainable development. Besides these aspects, which compel the use of fertilizers, the problem of nutrient use efficiency by plants is far from solved: recent estimates for world cereal production indicate that the use efficiency of nitrogen fertilizers is no higher than 35%. The values are even lower for phosphorus fertilizers (estimated use efficiency between 10 and 30%), worsened by the fact that, with present technology and on the basis of present knowledge, the phosphorus reserves used for fertilizer production are expected to last less than 100 years. Efficiency problems have also been raised recently concerning the use of synthetic chelates to alleviate micronutrient deficiencies: these compounds have been shown to be extremely mobile along the soil profile and only partially utilizable by plants. The low uptake efficiency of nutrients from soil is, on the one hand, caused by several intrinsic characteristics of the biogeochemical cycle of nutrients and, on the other, appears to be limited by biochemical and physiological aspects of nutrient absorption. Only recently has the complexity of these aspects been appreciated, together with the realization that breeding programs had neglected them. This review discusses aspects related to the acquisition of a macro- (N) and a micro-nutrient (Fe). The aim is to show that improvements in mineral nutrient use efficiency can be achieved only through a scientific approach that considers the whole soil-plant system. Particular emphasis is put on aspects of molecular physiology relevant to improving nutrient capture efficiency, as well as on the role of naturally occurring organic molecules in optimizing the nutritional capacity of …
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
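A quick arithmetic check of the quoted core mass in solar units:

```python
M_core = 2.69e30      # maximum iron core mass from the paper, kg
M_sun = 1.989e30      # solar mass, kg
ratio = M_core / M_sun
print(round(ratio, 2))  # 1.35, matching the value quoted in the abstract
```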
General Trimmed Estimation : Robust Approach to Nonlinear and Limited Dependent Variable Models
Cizek, P.
2004-01-01
High breakdown-point regression estimators protect against large errors and data contamination. Motivated by some of them (the least trimmed squares and maximum trimmed likelihood estimators) we propose a general trimmed estimator, which unifies and extends many existing robust procedures. We derive …
A partial ensemble Kalman filtering approach to enable use of range limited observations
Borup, Morten; Grum, Morten; Madsen, Henrik;
2015-01-01
The ensemble Kalman filter (EnKF) relies on the assumption that an observed quantity can be regarded as a stochastic variable that is Gaussian distributed with mean and variance that equals the measurement and the measurement noise, respectively. When a gauge has a minimum and/or maximum detection...
F. Topsøe
2001-09-01
Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over …
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
Saha, Jayanta Kumar; Panwar, N R; Singh, M V
2010-09-01
Cadmium and lead are important environmental pollutants with high toxicity to animals and human. Soils, though have considerable metal immobilizing capability, can contaminate food chain via plants grown upon them when their built-up occurs to a large extent. Present experiment was carried out with the objective of quantifying the limits of Pb and Cd loading in soil for the purpose of preventing food chain contamination beyond background concentration levels. Two separate sets of pot experiment were carried out for these two heavy metals with graded levels of application doses of Pb at 0.4-150 mg/kg and Cd at 0.02-20 mg/kg to an acidic light textured alluvial soil. Spinach crop was grown for 50 days on these treated soils after a stabilization period of 2 months. Upper limit of background concentration levels (C(ul)) of these metals were calculated through statistical approach from the heavy metals concentration values in leaves of spinach crop grown in farmers' fields. Lead and Cd concentration limits in soil were calculated by dividing C(ul) with uptake response slope obtained from the pot experiment. Cumulative loading limits (concentration limits in soil minus contents in uncontaminated soil) for the experimental soil were estimated to be 170 kg Pb/ha and 0.8 kg Cd/ha. Based on certain assumptions on application rate and computed cumulative loading limit values, maximum permissible Pb and Cd concentration values in municipal solid waste (MSW) compost were proposed as 170 mg Pb/kg and 0.8 mg Cd/kg, respectively. In view of these limiting values, about 56% and 47% of the MSW compost samples from different cities are found to contain Pb and Cd in the safe range.
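The limit calculation described above (dividing the upper background concentration C_ul by the uptake response slope, subtracting the uncontaminated-soil content, and converting to a per-hectare loading) can be sketched as follows; all numbers and the plough-layer soil mass are hypothetical, not the paper's measured values:

```python
def cumulative_loading_limit(c_ul, uptake_slope, background,
                             soil_kg_per_ha=2.0e6):
    # c_ul: upper background limit of the metal in spinach leaves (mg/kg)
    # uptake_slope: plant uptake response (mg/kg plant per mg/kg soil)
    # background: metal content of uncontaminated soil (mg/kg)
    # soil_kg_per_ha: assumed plough-layer soil mass per hectare
    conc_limit = c_ul / uptake_slope                       # mg metal / kg soil
    return (conc_limit - background) * soil_kg_per_ha / 1e6  # kg metal / ha

# Hypothetical inputs: leaf limit 1.0 mg/kg, slope 0.02, background 5 mg/kg
print(cumulative_loading_limit(1.0, 0.02, 5.0))  # 90.0 kg/ha
```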
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
We find a series of example theories for which the relativistic limit of maximum tension F_max = c^4/4G represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.
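A quick numerical check of the maximum tension value F_max = c^4/4G:

```python
c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
F_max = c**4 / (4 * G)
print(F_max)       # about 3.0e43 N
```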
Application of the Semi-Empirical Force-Limiting Approach for the CoNNeCT SCAN Testbed
Staab, Lucas D.; McNelis, Mark E.; Akers, James C.; Suarez, Vicente J.; Jones, Trevor M.
2012-01-01
The semi-empirical force-limiting vibration method was developed and implemented for payload testing to limit the structural impedance mismatch (high force) that occurs during shaker vibration testing. The method has since been extended for use in analytical models. The Space Communications and Navigation Testbed (SCAN Testbed), known at NASA as, the Communications, Navigation, and Networking re-Configurable Testbed (CoNNeCT), project utilized force-limiting testing and analysis following the semi-empirical approach. This paper presents the steps in performing a force-limiting analysis and then compares the results to test data recovered during the CoNNeCT force-limiting random vibration qualification test that took place at NASA Glenn Research Center (GRC) in the Structural Dynamics Laboratory (SDL) December 19, 2010 to January 7, 2011. A compilation of lessons learned and considerations for future force-limiting tests is also included.
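The semi-empirical force-limit form (after Scharton; see NASA-HDBK-7004) can be sketched as below; the constant C^2, the break frequency f0 and all numbers here are test-specific assumptions for illustration, not values from the CoNNeCT test:

```python
def force_limit_psd(s_aa, f, f0, c2, m0):
    # Semi-empirical force limit: below the first system resonance f0
    # the force spectral density is S_FF = C^2 * M0^2 * S_AA; above f0
    # it is commonly rolled off as (f0/f)^2.  C^2 (typically about 2-5),
    # f0 and the total mass m0 must be chosen per test article.
    s_ff = c2 * m0**2 * s_aa
    return s_ff if f <= f0 else s_ff * (f0 / f) ** 2

print(force_limit_psd(s_aa=0.04, f=50.0, f0=100.0, c2=2.0, m0=10.0))   # 8.0
print(force_limit_psd(s_aa=0.04, f=200.0, f0=100.0, c2=2.0, m0=10.0))  # 2.0
```

The roll-off above f0 is what relieves the impedance-mismatch overtest that a hard-mounted shaker would otherwise impose.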
ROC-curve approach for determining the detection limit of a field chemical sensor.
Fraga, Carlos G; Melville, Angela M; Wright, Bob W
2007-03-01
The detection limit of a field chemical sensor under realistic operating conditions is determined by receiver operating characteristic (ROC) curves. The chemical sensor is an ion mobility spectrometry (IMS) device used to detect a chemical marker in diesel fuel. The detection limit is the lowest concentration of the marker in diesel fuel that achieves the desired true-positive probability (TPP) and false-positive probability (FPP). A TPP of 0.90 and an FPP of 0.10 were selected as acceptable levels for the field sensor in this study. The detection limit under realistic operating conditions is found to be between 2 and 4 ppm (w/w); the upper value is the detection limit under challenging conditions. The ROC-based detection limit is very reliable because it is determined from multiple and repetitive sensor analyses under realistic circumstances. ROC curves also clearly illustrate and gauge the effects that data preprocessing and sampling environments have on the sensor's detection limit.
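The ROC operating-point test can be sketched as follows (synthetic scores; the paper's IMS data are not reproduced). A candidate concentration passes if some decision threshold reaches the target TPP and FPP simultaneously:

```python
def roc_point(neg_scores, pos_scores, threshold):
    # True-positive and false-positive probabilities at one threshold.
    tpp = sum(s >= threshold for s in pos_scores) / len(pos_scores)
    fpp = sum(s >= threshold for s in neg_scores) / len(neg_scores)
    return tpp, fpp

def meets_target(neg_scores, pos_scores, tpp_min=0.90, fpp_max=0.10):
    # A concentration level passes if some threshold achieves the
    # desired operating point on the ROC curve.
    thresholds = sorted(set(neg_scores) | set(pos_scores))
    return any(roc_point(neg_scores, pos_scores, t)[0] >= tpp_min and
               roc_point(neg_scores, pos_scores, t)[1] <= fpp_max
               for t in thresholds)

clean = [0.1] * 9 + [0.8]      # marker absent (blank diesel runs)
spiked = [0.9] * 9 + [0.05]    # marker present at the test concentration
print(meets_target(clean, spiked))  # True
```

The detection limit is then the lowest spiked concentration for which this check still returns True.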
Mello, Pier A.; Shi, Zhou; Genack, Azriel Z.
2016-08-01
We study the average energy (or particle) density of waves inside disordered 1D multiply-scattering media. We extend the transfer-matrix technique that was used in the past for the calculation of the intensity beyond the sample to study the intensity in the interior of the sample, by considering the transfer matrices of the two segments that form the entire waveguide. The statistical properties of the two disordered segments are found using a maximum-entropy ansatz subject to appropriate constraints. The theoretical expressions are shown to be in excellent agreement with 1D transfer-matrix simulations.
Lakshmi, V.; Libertino, A.; Sharma, A.; Claps, P.
2015-12-01
The prospect of climatic change and its impacts has brought spatial statistics of extreme events into sharper focus. The so-called "water bombs" are predicted to become more frequent in the extra-tropical regions and already raise serious concerns in some regions of the Mediterranean area. However, quantitative statistical methods to properly account for the probability of occurrence of these super-extreme events are still lacking, owing to their rare occurrence and to the limited spatial scale at which these events occur. To overcome the lack of data, we propose first to exploit the information derived from remotely sensed datasets. Despite their coarser resolution, these databases are able to provide information continuous in space and time, overcoming the problems related to the discontinuous nature of rainfall measurements. We propose to apply this kind of approach within a Bayesian framework, aimed at combining local measurements with climatic regional information, conditioning the exceedance probability on the large- and mesoscale characteristics of the system. The case study refers to an area located in the north-west of Italy, historically affected by extraordinary precipitation events. We use a dataset of daily at-gauge rainfall measurements extracted from the NOAA GHCN-Daily dataset, combined with those provided by some local environmental agencies. Daily estimates from TRMM are adopted too. First, we identify the most intense events that occurred in the area, combining the information from the different datasets. Analysing the related synoptic conditions with ECMWF reanalysis data, we then define the conditional variables and the hierarchical relationships between the events and their type. Different climatic configurations are identified that, combined with the local morphology and the seasonal condition of the Mediterranean Sea, can trigger very intense precipitation events. The results, compared with those …
Simple Approach to the Solution of a Trapped and Radiated Cold Ion Beyond the Lamb-Dicke Limit
FENG Mang; SHI Lei; GAO Ke-Lin; ZHU Xi-Wen
2002-01-01
Trapping ions outside the Lamb-Dicke limit has been proven useful for laser cooling and quantum computing. Under the supposition that the Rabi frequency is much smaller than the Lamb-Dicke parameter, we can use a simple method to analytically solve the system of a single cold ion trapped and radiated beyond the Lamb-Dicke limit, in the absence of the rotating-wave approximation (RWA). We discuss the limitations of our approach and compare our results with the solutions under the RWA.
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
Manktelow, Bradley N.; Seaton, Sarah E.
2012-01-01
Background Emphasis is increasingly being placed on the monitoring and comparison of clinical outcomes between healthcare providers. Funnel plots have become a standard graphical methodology to identify outliers and comprise plotting an outcome summary statistic from each provider against a specified ‘target’ together with upper and lower control limits. With discrete probability distributions it is not possible to specify the exact probability that an observation from an ‘in-control’ provider will fall outside the control limits. However, general probability characteristics can be set and specified using interpolation methods. Guidelines recommend that providers falling outside such control limits should be investigated, potentially with significant consequences, so it is important that the properties of the limits are understood. Methods Control limits for funnel plots for the Standardised Mortality Ratio (SMR) based on the Poisson distribution were calculated using three proposed interpolation methods and the probability calculated of an ‘in-control’ provider falling outside of the limits. Examples using published data were shown to demonstrate the potential differences in the identification of outliers. Results The first interpolation method ensured that the probability of an observation of an ‘in control’ provider falling outside either limit was always less than a specified nominal probability (p). The second method resulted in such an observation falling outside either limit with a probability that could be either greater or less than p, depending on the expected number of events. The third method led to a probability that was always greater than, or equal to, p. Conclusion The use of different interpolation methods can lead to differences in the identification of outliers. This is particularly important when the expected number of events is small. We recommend that users of these methods be aware of the differences, and specify which
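The exact-limit construction underlying the funnel plot above can be sketched as follows; this is a simplified version without the interpolation refinements the paper compares, with the nominal tail probability p as an input:

```python
from math import exp

def poisson_pmf(k, mu):
    # P(X = k) computed iteratively to avoid factorial overflow.
    p = exp(-mu)
    for i in range(1, k + 1):
        p *= mu / i
    return p

def upper_funnel_limit(expected, p=0.025):
    # Smallest observed count k whose upper tail P(X >= k) under
    # Poisson(expected) is <= p, returned as an SMR (= k / expected).
    # Exact integer limits like this are conservative; the interpolation
    # methods compared in the paper refine the count to a non-integer
    # value between k - 1 and k.
    k, tail = 0, 1.0            # tail = P(X >= k)
    while tail > p:
        tail -= poisson_pmf(k, expected)
        k += 1
    return k / expected

print(upper_funnel_limit(10.0))  # 1.8 with 10 expected events, p = 0.025
```

Because the distribution is discrete, the achieved tail probability at the returned limit is strictly below p, which is exactly the behaviour the abstract's first interpolation method guarantees and the third does not.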
Meiosis in a Bottle: New Approaches to Overcome Mammalian Meiocyte Study Limitations
Montserrat Garcia Caldes
2011-02-01
The study of meiosis is limited because of the intrinsic nature of gametogenesis in mammals. One way to overcome these limitations would be the use of culture systems that would allow meiotic progression in vitro. There have been some attempts to culture mammalian meiocytes in recent years. In this review we will summarize all the efforts to-date in order to culture mammalian sperm and oocyte precursor cells.
Meiosis in a Bottle : New Approaches to Overcome Mammalian Meiocyte Study Limitations
Montserrat Garcia Caldes; Ignasi Roig; Miguel Angel Brieno-Enriquez
2011-01-01
The study of meiosis is limited because of the intrinsic nature of gametogenesis in mammals. One way to overcome these limitations would be the use of culture systems that would allow meiotic progression in vitro. There have been some attempts to culture mammalian meiocytes in recent years. In this review we will summarize all the efforts to-date in order to culture mammalian sperm and oocyte precursor cells.
Exercise testing, limitation and training in patients with cystic fibrosis. A personalized approach
Werkman, M.S.
2014-01-01
Exercise testing and training are cornerstones of regular CF care. However, no consensus exists in the literature about which exercise test protocol should be used for individual patients. Furthermore, insights diverge about both the dominant exercise-limiting mechanisms and the possibilities for targeting exercise training strategies to these individual limitations. This thesis therefore intends to expand current knowledge of [1] alternative exercise test procedures i…
Van Niekerk, L
2013-09-01
Estuarine, Coastal and Shelf Science 130 (2013) 239–251. Country-wide assessment of estuary health: an approach for integrating pressures and ecosystem response in a data limited environment. L. Van Niekerk, J.B. Adams, G.C. Bate, A…
2010-07-01
... 40 Protection of Environment 10 2010-07-01 2010-07-01 false Data Quality Objective and Lower Confidence Limit Approaches for Alternative Capture Efficiency Protocols and Test Methods A Appendix A to Subpart KK of Part 63 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS...
Zoetendal, E.G.; Ben-Amor, K.; Akkermans, A.D.L.; Abee, T.; Vos, de W.M.
2001-01-01
A major concern in molecular ecological studies is the lysis efficiency of different bacteria in a complex ecosystem. We used a PCR-based 16S rDNA approach to determine the effect of two DNA isolation protocols (i.e., the bead-beating and Triton X-100 methods) on the detection limit of seven feces-asso…
李太平
2012-01-01
Under information asymmetry about agri-food quality and inadequate market supervision, an excessively strict standard of maximum residue limits (MRLs) for pesticides in agri-food not only fails to protect consumer health and the agro-ecological environment, but can aggravate serious pesticide residue problems in agri-food. Taking the national standard Maximum Residue Limits for Pesticides in Food (GB2763-2005) as a case, we calculated the theoretical daily intake (TDI) of the foods covered by 439 residue limits for 126 pesticides, using the quantitative relationship between the MRL, the acceptable daily intake (ADI) and the TDI, and compared the results with consumers' real daily intake (RDI). For 111 residue limits the TDI was far above the RDI of Chinese residents, accounting for 23.22% of the 478 pesticide residue limits in this national standard. This evidence shows that parts of the national standard do indeed tend to be excessively strict; we suggest that the government revise this standard promptly in order to eliminate this food safety management trap.
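The TDI calculation described above amounts to simple arithmetic; a sketch with hypothetical foods and values, not taken from GB2763-2005:

```python
def theoretical_daily_intake(mrl_mg_per_kg, diet_kg_per_day):
    # TDI for one pesticide: sum over foods of (MRL x daily consumption),
    # i.e. the intake assuming every food carries residues at its limit.
    return sum(mrl_mg_per_kg[f] * diet_kg_per_day[f] for f in mrl_mg_per_kg)

mrl = {"rice": 0.1, "apple": 0.5}    # mg/kg, hypothetical limits
diet = {"rice": 0.3, "apple": 0.2}   # kg/day, hypothetical consumption
tdi = theoretical_daily_intake(mrl, diet)
print(tdi)   # about 0.13 mg/day
```

Comparing this TDI against the real daily intake (and against ADI times body weight) is the screening step the paper applies to each of the 439 calculated limits.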
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
… are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy … in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results … Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
Mantsyzov, Alexey B; Maltsev, Alexander S; Ying, Jinfa; Shen, Yang; Hummer, Gerhard; Bax, Ad
2014-09-01
α-Synuclein is an intrinsically disordered protein of 140 residues that switches to an α-helical conformation upon binding phospholipid membranes. We characterize its residue-specific backbone structure in free solution with a novel maximum entropy procedure that integrates an extensive set of NMR data. These data include intraresidue and sequential HN-Hα and HN-HN NOEs, values for 3JHNHα, 1JHαCα, 2JCαN, and 1JCαN, as well as chemical shifts of 15N, 13Cα, and 13C′ nuclei, which are sensitive to backbone torsion angles. Distributions of these torsion angles were identified that yield best agreement to the experimental data, while using an entropy term to minimize the deviation from statistical distributions seen in a large protein coil library. Results indicate that although at the individual residue level considerable deviations from the coil library distribution are seen, on average the fitted distributions agree fairly well with this library, yielding a moderate population (20-30%) of the PPII region and a somewhat higher population of the potentially aggregation-prone β region (20-40%) than seen in the database. A generally lower population of the αR region (10-20%) is found. Analysis of 1H-1H NOE data required consideration of the considerable backbone diffusion anisotropy of a disordered protein.
Hsia, Wei Shen
1989-01-01
A validated technology data base is being developed in the areas of control/structures interaction, deployment dynamics, and system performance for Large Space Structures (LSS). A Ground Facility (GF), in which the dynamics and control systems being considered for LSS applications can be verified, was designed and built. One of the important aspects of the GF is to verify the analytical model for the control system design. The procedure is to describe the control system mathematically as well as possible, then to perform tests on the control system, and finally to factor those results into the mathematical model. The reduction of the order of a higher order control plant was addressed. The computer program was improved for the maximum entropy principle adopted in Hyland's MEOP method. The program was tested against the testing problem. It resulted in a very close match. Two methods of model reduction were examined: Wilson's model reduction method and Hyland's optimal projection (OP) method. Design of a computer program for Hyland's OP method was attempted. Due to the difficulty encountered at the stage where a special matrix factorization technique is needed in order to obtain the required projection matrix, the program was successful up to the finding of the Linear Quadratic Gaussian solution but not beyond. Numerical results along with computer programs which employed ORACLS are presented.
Liu, Tong; Hu, Liang; Ma, Chao; Wang, Zhi-Yan; Chen, Hui-Ling
2015-04-01
In this paper, a novel hybrid method, which integrates an effective filter, maximum relevance minimum redundancy (MRMR), and a fast classifier, the extreme learning machine (ELM), has been introduced for diagnosing erythemato-squamous (ES) diseases. In the proposed method, MRMR is employed as a feature selection tool for dimensionality reduction in order to further improve the diagnostic accuracy of the ELM classifier. The impact of the type of activation function, the number of hidden neurons, and the size of the feature subsets on the performance of ELM has been investigated in detail. The effectiveness of the proposed method has been rigorously evaluated on the ES disease dataset, a benchmark dataset from the UCI machine learning repository, in terms of classification accuracy. Experimental results demonstrate that the method achieved a best classification accuracy of 98.89% and an average accuracy of 98.55% under 10-fold cross-validation. The proposed method may serve as a promising candidate among methods for diagnosing ES diseases.
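The MRMR filter step described in this abstract can be sketched in a few lines. The sketch below is illustrative only: absolute Pearson correlation stands in for the mutual-information criterion usually used in MRMR, and the data are synthetic, not the UCI erythemato-squamous dataset.

```python
import numpy as np

def mrmr_select(X, y, k):
    """Greedy max-relevance min-redundancy: at each step pick the feature
    maximizing |corr(f, y)| minus the mean |corr(f, f_sel)| over the
    features already selected (correlation as an MI stand-in)."""
    n_feat = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, i], y)[0, 1])
                          for i in range(n_feat)])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best_i, best_score = -1, -np.inf
        for i in range(n_feat):
            if i in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, i], X[:, j])[0, 1])
                                  for j in selected])
            score = relevance[i] - redundancy
            if score > best_score:
                best_i, best_score = i, score
        selected.append(best_i)
    return selected

# Tiny demo: x0 and x1 are near-duplicates, x2 carries independent signal,
# so mRMR should pick one of {x0, x1} and then x2 rather than the duplicate.
rng = np.random.default_rng(0)
x0 = rng.normal(size=500)
x2 = rng.normal(size=500)
x1 = x0 + 0.01 * rng.normal(size=500)
y = 0.9 * x0 + 0.4 * x2
X = np.column_stack([x0, x1, x2])
sel = mrmr_select(X, y, 2)
```

The redundancy penalty is what separates MRMR from a plain relevance ranking, which would pick the duplicated feature twice.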
Mantsyzov, Alexey B; Maltsev, Alexander S; Ying, Jinfa; Shen, Yang; Hummer, Gerhard; Bax, Ad
2014-01-01
α-Synuclein is an intrinsically disordered protein of 140 residues that switches to an α-helical conformation upon binding phospholipid membranes. We characterize its residue-specific backbone structure in free solution with a novel maximum entropy procedure that integrates an extensive set of NMR data. These data include intraresidue and sequential HN–Hα and HN–HN NOEs, values for 3JHNHα, 1JHαCα, 2JCαN, and 1JCαN, as well as chemical shifts of 15N, 13Cα, and 13C′ nuclei, which are sensitive to backbone torsion angles. Distributions of these torsion angles were identified that yield best agreement to the experimental data, while using an entropy term to minimize the deviation from statistical distributions seen in a large protein coil library. Results indicate that although at the individual residue level considerable deviations from the coil library distribution are seen, on average the fitted distributions agree fairly well with this library, yielding a moderate population (20–30%) of the PPII region and a somewhat higher population of the potentially aggregation-prone β region (20–40%) than seen in the database. A generally lower population of the αR region (10–20%) is found. Analysis of 1H–1H NOE data required consideration of the considerable backbone diffusion anisotropy of a disordered protein. PMID:24976112
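The maximum-entropy fitting described in this abstract can be caricatured in a few lines: given a prior (e.g., coil-library populations) over conformational regions and one measured ensemble-averaged observable, the distribution that matches the observable while staying closest to the prior in KL divergence is an exponential tilting of the prior. The region names and numbers below are illustrative placeholders, not the paper's data.

```python
import numpy as np

def maxent_fit(prior, f, target, lo=-50.0, hi=50.0, tol=1e-10):
    """Return p with p_i proportional to prior_i * exp(lam * f_i) such that
    sum_i p_i f_i = target. The tilted mean is monotonically increasing in
    lam, so lam is found by bisection."""
    prior = np.asarray(prior, float)
    f = np.asarray(f, float)

    def tilted(lam):
        logw = np.log(prior) + lam * f
        w = np.exp(logw - logw.max())   # subtract max for numerical safety
        return w / w.sum()

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (tilted(mid) * f).sum() < target:
            lo = mid
        else:
            hi = mid
    return tilted(0.5 * (lo + hi))

# Illustrative: prior populations of (PPII, beta, alpha_R) regions, a
# per-region scalar observable (e.g. a J-coupling), and a measured mean.
prior = [0.3, 0.3, 0.4]
f = [6.5, 9.0, 4.5]
p = maxent_fit(prior, f, target=7.2)
```

The real procedure fits full torsion-angle distributions against many observables at once, but each constraint enters in this same tilted-prior form.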
Pan, Shu-Yuan; Chiang, Pen-Chi; Chen, Yi-Hung; Chen, Chun-Da; Lin, Hsun-Yu; Chang, E-E
2013-01-01
Accelerated carbonation of basic oxygen furnace slag (BOFS) coupled with cold-rolling wastewater (CRW) was performed in a rotating packed bed (RPB) as a promising process for both CO2 fixation and wastewater treatment. The maximum achievable capture capacity (MACC) via leaching and carbonation processes for BOFS in an RPB was systematically determined throughout this study. The leaching behavior of various metal ions from the BOFS into the CRW was investigated by a kinetic model. In addition, quantitative X-ray diffraction (QXRD) using the Rietveld method was carried out to determine the process chemistry of carbonation of BOFS with CRW in an RPB. According to the QXRD results, the major mineral phases reacting with CO2 in BOFS were Ca(OH)2, Ca2(HSiO4)(OH), CaSiO3, and Ca2Fe1.04Al0.986O5. Meanwhile, the carbonation product was identified as calcite according to the observations of SEM, XEDS, and mappings. Furthermore, the MACC of the lab-scale RPB process was determined by balancing the carbonation conversion and energy consumption. In that case, the overall energy consumption, including grinding, pumping, stirring, and rotating processes, was estimated to be 707 kWh/t-CO2. It was thus concluded that CO2 capture by accelerated carbonation of BOFS could be effectively and efficiently performed by coutilizing with CRW in an RPB.
Gunawan, H.; Puspito, N. T.; Ibrahim, G.; Harjadi, P. J. P. [ITB, Faculty of Earth Sciences and Technology (Indonesia); BMKG (Indonesia)
2012-06-20
A new approach to determining magnitude from the displacement amplitude (A), epicentral distance (Δ), and duration of high-frequency radiation (t) has been investigated for the Tasikmalaya earthquake of September 2, 2009, and its aftershocks. The moment magnitude scale commonly uses teleseismic surface waves with periods greater than 200 seconds, or the moment magnitude of the P wave using teleseismic seismogram data in the 10-60 second range. In this research, a new approach has been developed to determine the displacement amplitude and duration of high-frequency radiation using near earthquakes. The duration of high-frequency radiation is determined from half the period of the P waves on the displacement seismograms. This is necessary because of the very complex rupture process in near earthquakes: the P wave mixes with other waves (the S wave) before the duration runs out, so it is difficult to separate or determine the end of the P wave. Applying the method to 68 earthquakes recorded at station CISI, Garut, West Java, the following relationship is obtained: Mw = 0.78 log(A) + 0.83 log(Δ) + 0.69 log(t) + 6.46, with A in m, Δ in km, and t in seconds. The moment magnitude from this new approach is quite reliable, and the faster processing makes it useful for early warning.
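The fitted relation quoted above is straightforward to evaluate numerically (A in metres, Δ in km, t in seconds; the sample values in the demo are arbitrary, not from the paper):

```python
import math

def moment_magnitude(A, delta, t):
    """Mw from displacement amplitude A (m), epicentral distance delta (km),
    and duration of high-frequency radiation t (s), per the relation fitted
    to the 68 CISI records."""
    return (0.78 * math.log10(A)
            + 0.83 * math.log10(delta)
            + 0.69 * math.log10(t)
            + 6.46)

# Example: A = 1e-4 m, delta = 100 km, t = 10 s
mw = moment_magnitude(1e-4, 100, 10)
```

Because the relation is linear in the logarithms of quantities available immediately after the P-wave onset, it can be computed as soon as those picks exist, which is the early-warning advantage claimed in the abstract.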
Approaching the Lambertian limit in randomly textured thin-film solar cells.
Fahr, Stephan; Kirchartz, Thomas; Rockstuhl, Carsten; Lederer, Falk
2011-07-01
The Lambertian limit for solar cells is a benchmark for evaluating their efficiency. It has been shown that the performance of either extremely thick or extremely thin solar cells can be driven close to this limit by using an appropriate photon management. Here we show that this is likewise possible for realistic, practically relevant thin-film solar cells based on amorphous silicon. Most importantly, we achieve this goal by relying on random textures already incorporated into state-of-the-art superstrates; with the only subtlety that their topology has to be downscaled to typical feature sizes of about 100 nm.
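For reference, the Lambertian benchmark invoked here is the Yablonovitch limit: an ideal fully randomizing texture enhances the absorption path length by a factor of 4n² over a single pass, where n is the refractive index. The value used below is an illustrative number for a-Si:H, not taken from the paper.

```python
def lambertian_enhancement(n):
    """Yablonovitch path-length enhancement factor for an ideal
    Lambertian (fully randomizing) surface texture."""
    return 4.0 * n ** 2

enhancement = lambertian_enhancement(4.0)  # illustrative n = 4 for a-Si:H
```

This single number is why random textures with ~100 nm features can make a few-hundred-nanometre absorber behave optically like one tens of times thicker.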
Universality and the approach to the continuum limit in lattice gauge theory
de Divitiis, G. M.; Frezzotti, R.; Guagnelli, M.; Lüscher, M.; Petronzio, R.; Sommer, R.; Weisz, P.; Wolff, U.
1995-01-01
The universality of the continuum limit and the applicability of renormalized perturbation theory are tested in the SU(2) lattice gauge theory by computing two different non-perturbatively defined running couplings over a large range of energies. The lattice data (which were generated on the powerful APE computers at Rome II and DESY) are extrapolated to the continuum limit by simulating sequences of lattices with decreasing spacings. Our results confirm the expected universality at all energies to a precision of a few percent. We find, however, that perturbation theory must be used with care when matching different renormalized couplings at high energies.
Revegetation in China’s Loess Plateau is approaching sustainable water resource limits
Feng, Xiaoming; Fu, Bojie; Piao, Shilong; Wang, Shuai; Ciais, Philippe; Zeng, Zhenzhong; Lü, Yihe; Zeng, Yuan; Li, Yue; Jiang, Xiaohui; Wu, Bingfang
2016-11-01
Revegetation of degraded ecosystems provides opportunities for carbon sequestration and bioenergy production. However, vegetation expansion in water-limited areas creates potentially conflicting demands for water between the ecosystem and humans. Current understanding of these competing demands is still limited. Here, we study the semi-arid Loess Plateau in China, where the 'Grain to Green' large-scale revegetation programme has been in operation since 1999. As expected, we found that the new planting has caused both net primary productivity (NPP) and evapotranspiration (ET) to increase. Also the increase of ET has induced a significant (p …) … ecological and socio-economic resource demands in a coupled anthropogenic-biological system.
Control-theoretic Approach to Communication with Feedback: Fundamental Limits and Code Design
Ardestanizadeh, Ehsan
2010-01-01
Feedback communication is studied from a control-theoretic perspective, mapping the communication problem to a control problem in which the control signal is received through the same noisy channel as in the communication problem, and the (nonlinear and time-varying) dynamics of the system determine a subclass of encoders available at the transmitter. The MMSE capacity is defined to be the supremum exponential decay rate of the mean square decoding error. This is upper bounded by the information-theoretic feedback capacity, which is the supremum of the achievable rates. A sufficient condition is provided under which the upper bound holds with equality. For the special class of stationary Gaussian channels, a simple application of Bode's integral formula shows that the feedback capacity, recently characterized by Kim, is equal to the maximum instability that can be tolerated by the controller under a given power constraint. Finally, the control mapping is generalized to the N-sender AWGN multiple access channe...
Cooper, Keith M
2013-08-15
A baseline dataset from 2005 was used to identify the spatial distribution of macrofaunal assemblages across the eastern English Channel. The range of sediment composition found in association with each assemblage was used to define limits for acceptable change at ten licensed marine aggregate extraction areas. Sediment data acquired in 2010, 4 years after the onset of dredging, were used to assess whether conditions remained within the acceptable limits. Despite the observed changes in sediment composition, the composition of sediments in and around nine extraction areas remained within pre-defined acceptable limits. At the tenth site, some of the observed changes within the licence area were judged to have gone beyond the acceptable limits. Implications of the changes are discussed, and appropriate management measures identified. The approach taken in this study offers a simple, objective and cost-effective method for assessing the significance of change, and could simplify the existing monitoring regime.
Elia, Iliada; Gagatsis, Athanasios; Panaoura, Areti; Zachariades, Theodosis; Zoulinaki, Fotini
2009-01-01
The present study explores students' abilities in conversions between geometric and algebraic representations, in problem-solving situations involving the concept of "limit" and the interrelation of these abilities with students' constructed understanding of this concept. An attempt is also made to examine the impact of the…
Exercise testing, limitation and training in patients with cystic fibrosis. A personalized approach
Werkman, M.S.
2014-01-01
Exercise testing and training are cornerstones in regular CF care. However, no consensus exists in literature about which exercise test protocol should be used for individual patients. Furthermore, divergence exists in insights about both the dominant exercise limiting mechanisms and the possibiliti
An Effective Approach to Biomedical Information Extraction with Limited Training Data
Jonnalagadda, Siddhartha
2011-01-01
In the current millennium, extensive use of computers and the internet caused an exponential increase in information. Few research areas are as important as information extraction, which primarily involves extracting concepts and the relations between them from free text. Limitations in the size of training data, lack of lexicons and lack of…
A Discrete Event System approach to On-line Testing of digital circuits with measurement limitation
P.K. Biswal
2016-09-01
In the present era of complex systems like avionics, industrial processes, electronic circuits, etc., on-the-fly or on-line fault detection is becoming necessary to provide uninterrupted services. Measurement-limitation-based fault detection schemes are applied to a wide range of systems because sensors cannot be deployed in all the locations from which measurements are required. This paper focuses on On-Line Testing (OLT) of faults in digital electronic circuits under measurement limitation using the theory of discrete event systems. Most of the techniques presented in the literature on OLT of digital circuits have emphasized keeping the scheme non-intrusive, with low area overhead, high fault coverage, low detection latency, etc. However, minimizing tap points (i.e., measurement limitation) of the circuit under test (CUT) by the on-line tester was not considered. Minimizing tap points reduces load on the CUT, and this reduces the area overhead of the tester. However, reduction in tap points compromises fault coverage and detection latency. This work studies the effect of minimizing tap points on fault coverage, detection latency and area overhead. Results on ISCAS89 benchmark circuits illustrate that measurement limitation has minimal impact on fault coverage and detection latency but reduces the area overhead of the tester. Further, it was also found that for a given detection latency and fault coverage, the area overhead of the proposed scheme is lower compared to other similar schemes reported in the literature.
An Effective Approach to Biomedical Information Extraction with Limited Training Data
Jonnalagadda, Siddhartha
2011-01-01
Overall, the two main contributions of this work include the application of sentence simplification to association extraction as described above, and the use of distributional semantics for concept extraction. The proposed work on concept extraction amalgamates for the first time two diverse research areas: distributional semantics and information extraction. This approach provides all the advantages offered by other semi-supervised machine learning systems and, unlike other proposed semi-supervised approaches, can be used on top of different basic frameworks and algorithms. http://gradworks.umi.com/34/49/3449837.html
Etkind, S N; Koffman, J
2016-07-01
Patients with any major illness can expect to experience uncertainty about the nature of their illness, its treatment and their prognosis. Prognostic uncertainty is a particular source of patient distress among those living with life-limiting disease. Uncertainty also affects professionals, and it has been argued that the level of professional tolerance of uncertainty can affect levels of investigation as well as healthcare resource use. We know that the way in which uncertainty is recognised, managed and communicated can have important impacts on patients' treatment and quality of life. Current approaches to uncertainty in life-limiting illness include the use of care bundles and approaches that focus on communication and education. The experience in communicating in difficult situations that specialist palliative care professionals can provide may also be of benefit for patients with life-limiting illness in the context of uncertainty. While there are a number of promising approaches to uncertainty, as yet few interventions targeted at recognising and addressing uncertainty have been fully evaluated, and further research is needed in this area.
Radu, Aleksandar; Peper, Shane M.; Bakker, Eric; Diamond, Dermot
2007-01-01
Zero-current membrane fluxes are the principal source of bias that has prohibited researchers from obtaining true, thermodynamic selectivity coefficients for membrane-based ion selective electrodes (ISEs). They are also responsible for the mediocre detection limits historically seen with these types of potentiometric sensors. By choosing an experimental protocol that suppresses these fluxes, it becomes possible to obtain unbiased thermodynamic selectivity coefficients that are needed to produce ISEs with greatly improved detection limits. In this work, a Cs+-selective electrode based on calix[6]arene-hexaacetic acid hexaethyl ester (Cs I) is used to systematically demonstrate how unbiased selectivity coefficients can be obtained, and how they can be used to optimize inner filling solutions for low detection limit measurements. A comparison of biased selectivity methods (e.g., classical separate solution method (SSM), fixed interference method (FIM), matched potential method (MPM)) with the unbiased modified separate solution method (MSSM) found that selectivity coefficients were underestimated in several cases by more than 4 orders of magnitude. The importance of key experimental parameters, including diffusion coefficients and diffusion layer thicknesses in the aqueous and organic phases, on the minimization of ion fluxes and the improvement of lower detection limits is also described. A dramatic reduction of membrane fluxes by the covalent attachment of a Ca2+-selective ionophore to a methyl methacrylate-decyl methacrylate copolymer matrix is also demonstrated. The ionophore-immobilized ISE exhibited no super-Nernstian response and yielded a detection limit of 40 ppt with an inner filling solution of 1 x 10-3 M KCl. Finally, a set of guidelines for experimental protocols leading to obtaining unbiased selectivity coefficients and producing ISEs for trace level analyses is given.
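As background to the selectivity discussion above (standard potentiometry, not taken from the paper): the Nikolsky-Eisenman equation relates the measured EMF to the primary- and interfering-ion activities through the selectivity coefficient K_ij, which is why a biased K_ij translates directly into a biased detection limit. A minimal sketch, with illustrative numbers:

```python
import math

R, T, F = 8.314, 298.15, 96485.0  # J/(mol*K), K, C/mol

def ise_emf(a_i, a_j, K_ij, z_i=1, z_j=1, E0=0.0):
    """Nikolsky-Eisenman response: EMF for primary ion i (activity a_i)
    in the presence of interfering ion j (activity a_j) with selectivity
    coefficient K_ij. 2.303*R*T/(z_i*F) is ~59.2 mV/decade at 25 C, z=1."""
    s = R * T / (z_i * F)
    return E0 + s * math.log(a_i + K_ij * a_j ** (z_i / z_j))

# The primary-ion response is buried once a_i falls below K_ij * a_j:
# with K_ij = 1e-4 and a 0.1 M interferent background, primary-ion
# activities below ~1e-5 M produce essentially no EMF change, which is
# effectively the detection limit set by that selectivity coefficient.
```

This is why the abstract's four-orders-of-magnitude error in biased selectivity coefficients matters: it misplaces the achievable detection limit by the same factor.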
Advantages and limitations of the use of optogenetic approach in studying fast-scale spike encoding.
Aleksey Malyshev
Understanding single-neuron computations and the encoding performed by the spike-generation mechanisms of cortical neurons is one of the central challenges for cell electrophysiology and computational neuroscience. An established paradigm to study spike encoding in controlled conditions in vitro uses intracellular injection of a mixture of signals with fluctuating currents that mimic in vivo-like background activity. However, this technique has two serious limitations: it uses current injection, while synaptic activation leads to changes of conductance; and current injection is technically most feasible in the soma, while the vast majority of synaptic inputs are located on the dendrites. Recent progress in optogenetics provides an opportunity to circumvent these limitations. Transgenic expression of light-activated ionic channels, such as Channelrhodopsin-2 (ChR2), allows induction of controlled conductance changes even in thin distant dendrites. Here we show that photostimulation provides a useful extension of the tools to study neuronal encoding, but it has its own limitations. Optically induced fluctuating currents have a low cutoff (~70 Hz), thus limiting the dynamic range of the frequency response of cortical neurons. This leads to severe underestimation of the ability of neurons to phase-lock their firing to high-frequency components of the input. This limitation can be worked around by using short (2 ms) light stimuli, which produce membrane potential responses resembling EPSPs in their fast onset and prolonged decay kinetics. We show that combining the application of short light stimuli to different parts of the dendritic tree, mimicking distant EPSCs, with somatic injection of fluctuating current that mimics fluctuations of membrane potential in vivo allowed us to study fast encoding of artificial EPSPs photoinduced at different distances from the soma. We conclude that dendritic photostimulation of ChR2 with short light pulses provides a powerful tool to
Efficiency of autonomous soft nanomachines at maximum power.
Seifert, Udo
2011-01-14
We consider nanosized artificial or biological machines working in steady state enforced by imposing nonequilibrium concentrations of solutes or by applying external forces, torques, or electric fields. For unicyclic and strongly coupled multicyclic machines, efficiency at maximum power is not bounded by the linear response value 1/2. For strong driving, it can even approach the thermodynamic limit 1. Quite generally, such machines fall into three different classes characterized, respectively, as "strong and efficient," "strong and inefficient," and "balanced." For weakly coupled multicyclic machines, efficiency at maximum power has lost any universality even in the linear response regime.
Tahmasebi Poor, A.; Barari, Amin; Behnia, M.
2015-01-01
In this study, a gene expression programming (GEP) approach was employed to develop modified expressions for predicting the bearing capacity of shallow foundations founded on granular material. The model was validated against the results of load tests on full-scale and model footings obtained from...
Rowe, D. J.; Turner, P. S.; Rosensteel, G.
2004-11-01
The asymptotic spectra and scaling properties of a mixed-symmetry Hamiltonian, which exhibits a second-order phase transition in its macroscopic limit, are examined for a system of N interacting bosons. A second interacting boson-model Hamiltonian, which exhibits a first-order phase transition, is also considered. The latter shows many parallel characteristics and some notable differences, leaving it open to question as to the nature of its asymptotic critical-point properties.
Comparison of Generative and Discriminative Approaches for Speaker Recognition with Limited Data
Silovsky, J.; Cerva, P.; Zdansky, J.
2009-01-01
This paper presents a comparison of three different speaker recognition methods deployed in a broadcast news processing system. We focus on how the generative and discriminative nature of these methods affects the speaker recognition framework, and we also deal in more detail with intersession variability compensation techniques, which are of great interest in the broadcast processing domain. The experiments performed are particular to the very limited amount of data used for both speaker ...
Assadi Soumeh, Elham; Hedemann, Mette Skou; Poulsen, Hanne Damgaard
2016-01-01
The metabolic response in plasma and urine of pigs when feeding an optimum level of branched-chain amino acids (BCAAs) for best growth performance is unknown. The objective of the current study was to identify the metabolic phenotype associated with the BCAA intake level that could be linked to the animal growth performance. Three dose–response studies were carried out to collect blood and urine samples from pigs fed increasing levels of Ile, Val, or Leu, followed by a nontargeted LC–MS approach to characterize the metabolic profile of biofluids when dietary BCAAs are optimum for animal growth. Results showed that concentrations of plasma hypoxanthine and tyrosine (Tyr) were higher while concentrations of glycocholic acid, tauroursodeoxycholic acid, and taurocholic acid were lower when the dietary Ile was optimum. Plasma 3-methyl-2-oxovaleric acid and creatine were lower when dietary Leu…
Ammari, Zied; Falconi, Marco
2014-10-01
We consider the classical limit of the Nelson model, a system of stable nucleons interacting with a meson field. We prove convergence of the quantum dynamics towards the evolution of the coupled Klein-Gordon-Schrödinger equation. Also, we show that the ground state energy level of nucleons, when is large and the meson field approaches its classical value, is given by the infimum of the classical energy functional at a fixed density of particles. Our study relies on a recently elaborated approach for mean field theory and uses Wigner measures.
Alam, Muhammad Ashraful; Khan, M. Ryyan
2016-10-01
Bifacial tandem cells promise to reduce three fundamental losses (i.e., above-bandgap, below bandgap, and the uncollected light between panels) inherent in classical single junction photovoltaic (PV) systems. The successive filtering of light through the bandgap cascade and the requirement of current continuity make optimization of tandem cells difficult and accessible only to numerical solution through computer modeling. The challenge is even more complicated for bifacial design. In this paper, we use an elegantly simple analytical approach to show that the essential physics of optimization is intuitively obvious, and deeply insightful results can be obtained with a few lines of algebra. This powerful approach reproduces, as special cases, all of the known results of conventional and bifacial tandem cells and highlights the asymptotic efficiency gain of these technologies.
Challenges and Limitations of Applying an Emotion-driven Design Approach on Elderly Users
Andersen, Casper L.; Gudmundsson, Hjalte P.; Achiche, Sofiane
2011-01-01
a competitive advantage for companies. In this paper, challenges of applying an emotion-driven design approach on elderly people, in order to identify their user needs towards walking frames, are discussed. The discussion will be based on the experiences and results obtained from the case study. To measure the emotional responses of the elderly, a questionnaire was designed and adapted from P.M.A. Desmet's product-emotion measurement instrument, PrEmo. During the case study it was observed that there were several challenges when carrying out the user survey, and that those challenges particularly related to the participants' age and cognitive abilities. The challenges encountered are discussed, and guidelines on what should be taken into account to facilitate an emotion-driven design approach for elderly people are proposed.
Challenges and Limitations of Applying an Emotion-driven Design Approach on Elderly Users
Andersen, Casper L.; Gudmundsson, Hjalte P.; Achiche, Sofiane
2011-01-01
Population ageing is without parallel in human history and the twenty-first century will witness even more rapid ageing than did the century just past. Understanding the user needs of the elderly and how to design better products for this segment of the population is crucial, as it can offer a competitive advantage for companies. In this paper, challenges of applying an emotion-driven design approach on elderly people, in order to identify their user needs towards walking frames, are discussed. The discussion will be based on the experiences and results obtained from the case study … related to the participants' age and cognitive abilities. The challenges encountered are discussed, and guidelines on what should be taken into account to facilitate an emotion-driven design approach for elderly people are proposed.
Diversity label: exploring the potential and limits of a transparency approach to media diversity
Helberger, N.
2011-01-01
With the rapid growth of digital content, meaningful media diversity depends on users and the choices they make. The challenge is no longer facilitating content, but capturing attention, which is not subject to regulatory control. Empowering users with information, as exemplified in consumer law, thus becomes a more important element in the regulatory toolbox. According to Professor Helberger, the informational approach to advancing the goals of media diversity needs more coherent and informe...
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
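The MAF analysis described above can be sketched numerically (my own illustration on synthetic one-dimensional data, not the South Greenland stream-sediment dataset): MAF factors solve the generalized eigenproblem of the covariance of spatial increments against the total covariance, so the leading factor is the linear combination with maximum spatial autocorrelation.

```python
import numpy as np

def maf(X):
    """Maximum autocorrelation factors of an (n_samples, n_vars) series
    ordered along one spatial/temporal axis. Returns factors sorted from
    most to least autocorrelated."""
    Xc = X - X.mean(axis=0)
    D = Xc[1:] - Xc[:-1]                    # increments along the ordering
    S = np.cov(Xc, rowvar=False)
    Sd = np.cov(D, rowvar=False)
    # Generalized eigenproblem Sd w = lam S w; autocorrelation = 1 - lam/2,
    # so ascending eigenvalues give descending autocorrelation.
    vals, vecs = np.linalg.eig(np.linalg.solve(S, Sd))
    order = np.argsort(vals.real)
    return Xc @ vecs[:, order].real

# Demo: a smooth signal mixed with white noise into two channels;
# the first MAF factor should recover the smooth (autocorrelated) part.
n = 2000
rng = np.random.default_rng(1)
smooth = np.sin(0.03 * np.arange(n))
noise = rng.normal(size=n)
X = np.column_stack([smooth + 0.5 * noise, smooth - 0.5 * noise])
factors = maf(X)
```

Unlike a non-spatial factor analysis of the same data, the increment covariance Sd injects the spatial ordering, which is the property the paper exploits when kriging the factors.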
Lacasse, Michael A.; Morelli, Martin
2016-01-01
The long-term performance in respect to moisture management within any wall assembly depends on the hygrothermal response of the wall. Critical factors in estimating the longevity of wood-frame structures include limiting the temperature range, wood moisture content, and time of exposure to conditions suitable for the onset, growth, and propagation of mold and rot to occur. The intent in constructing highly insulated wood-frame walls is evidently to reduce energy usage in buildings, but the energy savings as might accrue cannot be achieved if these walls fail prematurely due to the effects of moisture accumulation in wall cavities. Several approaches to assessing the vulnerability of wood-frame structures to deterioration have been developed in recent years, some of which suggest applying a limit-states design approach to the performance assessment of the assembly. In this paper…
Martin-Creuzburg, Dominik; Oexle, Sarah; Wacker, Alexander
2014-09-01
Arthropods are incapable of synthesizing sterols de novo and thus require a dietary source to cover their physiological demands. The most prominent sterol in animal tissues is cholesterol, which is an indispensable structural component of cell membranes and serves as precursor for steroid hormones. Instead of cholesterol, plants and algae contain a variety of different phytosterols. Consequently, herbivorous arthropods have to metabolize dietary phytosterols to cholesterol to meet their requirements for growth and reproduction. Here, we investigated sterol-limited growth responses of the freshwater herbivore Daphnia magna by supplementing a sterol-free diet with increasing amounts of 10 different phytosterols and comparing thresholds for sterol-limited growth. In addition, we analyzed the sterol composition of D. magna to explore sterol metabolic constraints and bioconversion capacities. We show that dietary phytosterols strongly differ in their potential to support somatic growth of D. magna. The dietary threshold concentrations obtained by supplementing the different sterols cover a wide range (3.5-34.4 μg mg C(-1)) and encompass the one for cholesterol (8.9 μg mg C(-1)), indicating that certain phytosterols are more efficient in supporting somatic growth than cholesterol (e.g., fucosterol, brassicasterol) while others are less efficient (e.g., dihydrocholesterol, lathosterol). The dietary sterol concentration gradients revealed that the poor quality of particular sterols can be alleviated partially by increasing dietary concentrations, and that qualitative differences among sterols are most pronounced at low to moderate dietary concentrations. We infer that the dietary sterol composition has to be considered in zooplankton nutritional ecology to accurately assess potential sterol limitations under field conditions.
Variation within Limits: An Evolutionary Approach to the Structure and Dynamics of the Multiform
Michael D. C. Drout
2011-10-01
This essay draws upon research in evolutionary biology and cognitive psychology to explain the evolution and stability of the oral-traditional multiform. The mind tends to categorize variable entities in terms of cognitive _prototypes_. The dynamics of human mnemonic and communicative processes then generate both variability (in the absence of written texts) and contrasting selection pressure on multiform oral-traditional forms to evolve towards these mental abstractions, thereby producing the variability of the multiform. By visualizing the variation spaces of such cultural entities as _adaptive landscapes_, we see that variation-within-limits of the multiform, rather than being paradoxical, results from universal processes of replication and selection.
A New Approach for Performance Evaluation of TCP over Interference-Limited Wireless Channels
LUO Rui; FAN Pingzhi
2003-01-01
A new metric for performance evaluation of the Transmission Control Protocol (TCP) over wireless channels, based on the interference-limited characteristics of code division multiple access (CDMA) systems, is proposed. According to the new metric, the performance of TCP over a correlated CDMA channel is investigated for different protocol parameters and protocol versions. The results show that appropriate selection of protocol parameters and of the packet error rate (PER) operating point can significantly improve the capacity of a packet-switched CDMA-based network.
Zheng, Qiang; Li, Kai
2017-07-01
An amplifier is at the heart of experiments carrying out the precise measurement of a weak signal. An ideal quantum amplifier should simultaneously have a large gain and minimum added noise. Here, we consider the quantum measurement properties of a cavity containing an OPA medium, operated in the op-amp mode to amplify an input signal. We show that our nonlinear-cavity quantum amplifier has large gain in the single-value stable regime and achieves the quantum limit unconditionally. Supported by the National Natural Science Foundation of China under Grant Nos. 11365006, 11364006, and the Natural Science Foundation of Guizhou Province QKHLHZ [2015]7767
Flow-through SIP - A novel stable isotope probing approach limiting cross-feeding
Mooshammer, Maria; Kitzinger, Katharina; Schintlmeister, Arno; Kjedal, Henrik; Nielsen, Jeppe Lund; Nielsen, Per; Wagner, Michael
2017-04-01
Stable isotope probing (SIP) is a widely applied tool to link specific microbial populations to metabolic processes in the environment without the prerequisite of cultivation, which has greatly advanced our understanding of the role of microorganisms in biogeochemical cycling. SIP relies on tracing specific isotopically labeled substrates (e.g., 13C, 15N, 18O) into cellular biomarkers, such as DNA, RNA or phospholipid fatty acids, and is considered to be a robust technique to identify microbial populations that assimilate the labeled substrate. However, cross-feeding can occur when labeled metabolites are released from a primary consumer and then used by other microorganisms. This leads to erroneous identification of organisms that are not directly responsible for the process of interest, but are rather connected to primary consumers via a microbial food web. Here, we introduce a new approach that has the potential to eliminate the effect of cross-feeding in SIP studies and can thus also be used to distinguish primary consumers from other members of microbial food webs. In this approach, a monolayer of microbial cells is placed on a filter membrane, and labeled substrates are supplied by a continuous flow. By means of flow-through, labeled metabolites and degradation products are constantly removed, preventing secondary consumption of the substrate. We present results from a proof-of-concept experiment using nitrifiers from activated sludge as a model system, in which we used fluorescence in situ hybridization (FISH) with rRNA-targeted oligonucleotide probes for identification of nitrifiers in combination with nanoscale secondary ion mass spectrometry (NanoSIMS) for visualization of isotope incorporation at the single-cell level. Our results show that flow-through SIP is a promising approach to significantly reduce cross-feeding and secondary substrate consumption in SIP experiments.
Linear mRNA amplification approach for RNAseq from limited amount of RNA.
Ferreira, Elisa Napolitano; de Campos Molina, Gustavo; Puga, Renato David; Nagai, Maria Aparecida; Campos, Antônio Hugo José Froes Marques; Guimarães, Gustavo Cardoso; Nunes, Diana Noronha; Pasqualini, Renata; Arap, Wadih; Brentani, Helena; Dias-Neto, Emmanuel; Brentani, Ricardo R; Carraro, Dirce Maria
2015-06-15
Whole-transcriptome evaluation by next-generation sequencing (NGS) has been widely applied in the investigation of diverse transcriptional scenarios. In many clinical situations, including needle biopsy samples or laser-microdissected cells, only limited amounts of RNA are available for assessment of the whole transcriptome. Here, we describe an mRNA amplification protocol based on in vitro T7 transcription for transcriptome evaluation by NGS. Initially, we performed RNAseq on two human mammary epithelial cell lines and evaluated several aspects of the transcriptomes generated by linear amplification of poly(A)+ mRNA species, including transcript representation, variability and abundance. Our protocol proved efficient with respect to full-length transcript coverage and quantitative expression levels. We then evaluated the applicability of the protocol in a more realistic research scenario, analyzing tumor tissue samples microdissected by laser capture. In order to increase the quantification power of the libraries, only the 3' ends of transcripts were sequenced. We found highly reproducible RNAseq data among amplified tumor samples, with a median Spearman's correlation of 80%, strongly suggesting that the amplification step and library preparation protocol lead to a consistent transcriptional profile. Altogether, we established a robust protocol for assessing the polyadenylated transcriptome from limited amounts of total RNA that is applicable to all NGS platforms.
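The reproducibility check described above (median pairwise Spearman correlation between amplified replicate libraries) can be sketched as follows; the read counts are synthetic stand-ins, not the study's data:

```python
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Synthetic 3'-end read counts for 1000 transcripts in 4 amplified replicate
# libraries: a shared expression profile plus replicate-specific noise.
base = rng.lognormal(mean=3.0, sigma=1.5, size=1000)
libraries = [rng.poisson(base * rng.uniform(0.8, 1.2)) for _ in range(4)]

# Pairwise Spearman rank correlations, summarized by their median.
rhos = [spearmanr(a, b).correlation for a, b in combinations(libraries, 2)]
median_rho = float(np.median(rhos))
```

A rank correlation is a reasonable choice here because amplification can distort absolute abundances while preserving their ordering.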
Approaching the ppb detection limits for copper in water using laser induced breakdown spectroscopy
Tawfik, Walid; Sawaf, Sausan
2014-05-01
Copper concentrations in drinking water need to be monitored, since copper can cause cancer if it exceeds about 10 mg/liter. In the present work, we have developed a simple, low-laser-power method to improve the detection limits of laser-induced breakdown spectroscopy (LIBS) for copper in aqueous solutions at different concentrations. In this method, medium-density fiberboard (MDF) wood is used as a substrate that absorbs the liquid sample, transforming the laser-liquid interaction into a laser-solid interaction. Using the fundamental wavelength of an Nd:YAG laser, the resulting plasma emissions were monitored for elemental analysis. The signal-to-noise ratio (SNR) was optimized using a low laser fluence of 32 J cm^-2 and a detector (CCD camera) gate delay of 0.5 μs. The electron temperature and density of the induced plasma were determined using a Boltzmann plot and the FWHM of the Cu line at 324.7 nm, respectively. The plasma temperature was found to be 1.197 eV, while the plasma density was about 1.66 x 10^19 cm^-3. The detection limit for Cu at 324.7 nm was found to be 131 ppb, comparable to results obtained by others using more complicated systems.
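The Boltzmann-plot step used above to obtain the electron temperature can be illustrated with a short script. The line parameters below are invented for the demonstration and the intensities are generated at an assumed temperature, so this is a sketch of the method rather than an analysis of the paper's measurements:

```python
import numpy as np

kTe_true = 1.2  # eV, assumed electron temperature for the synthetic data

# Illustrative line parameters: statistical weight g, Einstein coefficient A
# (s^-1), wavelength lam (nm), upper-level energy E (eV). Not real Cu I data.
g   = np.array([4.0, 2.0, 6.0, 4.0])
A   = np.array([1.4e8, 9.0e7, 7.5e7, 2.0e6])
lam = np.array([324.7, 327.4, 515.3, 521.8])
E   = np.array([3.82, 3.79, 6.19, 6.12])

# Under LTE, the line intensity obeys I ∝ (g A / lam) * exp(-E / kTe).
I = (g * A / lam) * np.exp(-E / kTe_true)

# Boltzmann plot: ln(I * lam / (g A)) versus E is linear with slope -1/kTe.
y = np.log(I * lam / (g * A))
slope, intercept = np.polyfit(E, y, 1)
kTe_est = -1.0 / slope
```

With real, noisy intensities the fit would scatter about the line; here the synthetic data are exactly Boltzmann-distributed, so the assumed temperature is recovered.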
Byskov, Jens; Marchal, Bruno; Maluka, Stephen;
2014-01-01
BACKGROUND: Priority-setting decisions are based on an important, but not sufficient set of values and thus lead to disagreement on priorities. Accountability for Reasonableness (AFR) is an ethics-based approach to a legitimate and fair priority-setting process that builds upon four conditions...... of the potential of AFR in supporting priority-setting and other decision-making processes in health systems to achieve better agreed and more sustainable health improvements linked to a mutual democratic learning with potential wider implications....
Biotech Approaches to Overcome the Limitations of Using Transgenic Plants in Organic Farming
Luca Lombardo
2016-05-01
Full Text Available Organic farming prohibits the use of genetically modified organisms (GMOs inasmuch as their genetic material has been altered in a way that does not occur naturally. In actual fact, there is a conventional identity between GMOs and transgenic organisms, so that genetic modification methods such as somatic hybridization and mutagenesis are equalized to conventional breeding. A loophole in this system is represented by more or less innovative genetic engineering approaches under regulatory discussion, such as cisgenesis, oligonucleotide-directed mutagenesis, and antisense technologies, that are redefining the concept of GMOs and might circumvent the requirements of the GMO legislation and, indirectly, of organic farming.
Confronting the challenge of greenline parks: Limits of the traditional administrative approach
Belcher, Elizabeth H.; Douglas Wellman, J.
1991-05-01
The National Park Service, like other natural resource management agencies, has adopted the traditional model of public administration, which emphasizes efficiency, effectiveness, economy, and dichotomy between politics and administration. This approach is particularly ineffective in greenline parks and increasingly inappropriate in traditional areas. In an era of ecological interdependence, relationships with other agencies and jurisdictions and with adjacent as well as noncontiguous landowners are as important as controlling visitors. Recreation managers need to develop more skill in negotiation, cooperation, coordination, and interpersonal communication if they are to preserve and protect park resources.
Valdano, Eugenio; Colizza, Vittoria
2015-01-01
The epidemic threshold of a spreading process indicates the condition for the occurrence of the wide-spreading regime, and thus represents a predictor of the network's vulnerability to the epidemic. This threshold depends on the natural history of the disease and on the pattern of contacts of the network, including its time variation. Based on the theoretical framework introduced in (Valdano et al. PRX 2015) for a susceptible-infectious-susceptible model, we formulate here an infection propagator approach to compute the epidemic threshold accounting for more realistic effects: a varying force of infection per contact, the presence of immunity, and a limited time resolution of the temporal network. We apply the approach to two temporal network models and an empirical dataset of school contacts. We find that permanent or temporary immunity do not affect the estimation of the epidemic threshold through the infection propagator approach. Comparisons with numerical results show the good agreement of the analytical ...
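For a discrete-time SIS model on a temporal network, the infection propagator idea can be sketched as follows: the threshold is the transmissibility at which the spectral radius of the product of single-step transition matrices crosses one. The network snapshots and parameters below are toy assumptions, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
N, T = 20, 30
mu = 0.5  # recovery probability per time step

# Toy temporal network: one symmetric adjacency matrix per time step.
snapshots = []
for _ in range(T):
    A = (rng.random((N, N)) < 0.1).astype(float)
    A = np.triu(A, 1)
    snapshots.append(A + A.T)

def spectral_radius(lam):
    """Spectral radius of the propagator prod_t [(1 - mu) I + lam A_t]."""
    P = np.eye(N)
    for A in snapshots:
        P = ((1.0 - mu) * np.eye(N) + lam * A) @ P
    return max(abs(np.linalg.eigvals(P)))

# The threshold lam_c is where the radius crosses 1; since the radius grows
# monotonically with lam, locate the crossing by bisection.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if spectral_radius(mid) < 1.0:
        lo = mid
    else:
        hi = mid
lam_c = 0.5 * (lo + hi)
```

Spreading dies out for transmissibilities below `lam_c` and can invade the network above it.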
Doris Weidemann
2009-01-01
Full Text Available Despite the huge interest in sojourner adjustment, there is still a lack of qualitative as well as of longitudinal research that would offer more detailed insights into intercultural learning processes during overseas stays. The present study aims to partly fill that gap by documenting changes in knowledge structures and general living experiences of fifteen German sojourners in Taiwan in a longitudinal, cultural-psychological study. As part of a multimethod design a structure formation technique was used to document subjective theories on giving/losing face and their changes over time. In a second step results from this study are compared to knowledge-structures of seven long-term German residents in Taiwan, and implications for the conceptualization of intercultural learning will be proposed. Finally, results from both studies serve to discuss the potential and limits of structure formation techniques in the field of intercultural communication research. URN: urn:nbn:de:0114-fqs0901435
Kuzyk, Mark G
2014-01-01
The Thomas-Reiche-Kuhn sum rules and the sum-over-states (SOS) expression for the hyperpolarizabilities are truncated when calculating the fundamental limits of nonlinear susceptibilities. Truncation of the SOS expression can lead to an accurate approximation of the first and second hyperpolarizabilities due to energy denominators, which can make the truncated series converge to within 10% of the full series after only a few excited states are included in the sum. The terms in the sum rule series, however, are weighted by the state energies, so convergence of the series requires that the position matrix elements scale at most in inverse proportion to the square root of the energy. Even if the convergence condition is met, serious pathologies arise, including self-inconsistent sum rules and equations that contradict reality. As a result, using the truncated sum rules alone leads to pathologies that make any rigorous calculation impossible, let alone yielding even good approximations. This paper discusses condi...
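The fast convergence of a truncated sum mentioned above can be illustrated with the Thomas-Reiche-Kuhn sum rule for a particle in a box, where the oscillator strengths are known in closed form (a standard textbook example, not the paper's calculation):

```python
import numpy as np

# Particle in a box of unit length (hbar = m = 1): E_n = n^2 pi^2 / 2, and the
# position matrix element <1|x|n> vanishes unless n is even.
def x_1n(n):
    if (n - 1) % 2 == 0:
        return 0.0
    return -8.0 * n / (np.pi**2 * (1 - n**2)**2)

def f_1n(n):
    """Oscillator strength f = 2 (E_n - E_1) |<1|x|n>|^2 (hbar = m = 1)."""
    E1, En = np.pi**2 / 2.0, n**2 * np.pi**2 / 2.0
    return 2.0 * (En - E1) * x_1n(n)**2

# TRK sum rule: sum_n f_1n = 1. The partial sums show how quickly an
# energy-weighted series can converge once a few states are included.
partial = np.cumsum([f_1n(n) for n in range(2, 51)])
```

Here the very first allowed transition already carries about 96% of the sum-rule weight, which is the kind of rapid convergence the abstract contrasts with the pathologies of truncation.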
Comparison of Generative and Discriminative Approaches for Speaker Recognition with Limited Data
J. Silovsky
2009-09-01
Full Text Available This paper presents a comparison of three different speaker recognition methods deployed in a broadcast news processing system. We focus on how the generative or discriminative nature of these methods affects the speaker recognition framework, and we also deal in more detail with intersession variability compensation techniques, which are of great interest in the broadcast processing domain. The experiments performed are specific in the very limited amount of data used for both speaker enrollment (typically ranging from 30 to 60 seconds) and recognition (typically ranging from 5 to 15 seconds). Our results show that the system based on Gaussian Mixture Models (GMMs) outperforms both systems based on Support Vector Machines (SVMs), but its drawback is a higher computational cost.
Validity of equation-of-motion approach to Kondo problem in the large N limit
Zhu, Jian-xin [Los Alamos National Laboratory; Ting, C S [UNIV OF HOUSTON; Qi, Yunong [UNIV OF HOUSTON
2008-01-01
The Anderson impurity model for the Kondo problem is investigated for arbitrary orbit-spin degeneracy N of the magnetic impurity by the equation-of-motion method (EOM). By employing a new decoupling scheme, a self-consistent equation for the one-particle Green function is derived and numerically solved in the large-N approximation. For the particle-hole symmetric Anderson model with finite Coulomb interaction U, we show that the Kondo resonance at the impurity site exists for all N >= 2. The approach removes the pathology in the standard EOM for N = 2, and has the same level of applicability as the non-crossing approximation. For N = 2, an exchange field splits the Kondo resonance into only two peaks, consistent with the result from the more rigorous numerical renormalization group (NRG) method. The temperature dependence of the Kondo resonance peak is also discussed.
Boboescu, Iulian Zoltan; Gherman, Vasile Daniel; Lakatos, Gergely; Pap, Bernadett; Bíró, Tibor; Maróti, Gergely
2016-03-01
The steady increase of global energy requirements has brought about a general agreement on the need for novel renewable and environmentally friendly energy sources and carriers. Among the alternatives to a fossil fuel-based economy, hydrogen gas is considered a game-changer. Certain methods of hydrogen production can utilize various low-priced industrial and agricultural wastes as substrate, thus coupling organic waste treatment with renewable energy generation. Among these approaches, different biological strategies have been investigated and successfully implemented in laboratory-scale systems. Although promising, several key aspects need further investigation in order to push these technologies towards large-scale industrial implementation. Some of the major scientific and technical bottlenecks are discussed, along with possible solutions, including a thorough exploration of novel research combining microbial dark fermentation and algal photoheterotrophic degradation systems, integrated with wastewater treatment and the use of metabolic by-products.
Marais, Willem J; Holz, Robert E; Hu, Yu Hen; Kuehn, Ralph E; Eloranta, Edwin E; Willett, Rebecca M
2016-10-10
Atmospheric lidar observations provide a unique capability to directly observe the vertical column of cloud and aerosol scattering properties. Detector and solar-background noise, however, hinder the ability of lidar systems to provide reliable backscatter and extinction cross-section estimates. Standard methods for solving this inverse problem are most effective with high signal-to-noise ratio observations that are only available at low resolution in uniform scenes. This paper describes a novel method for solving the inverse problem with high-resolution, lower signal-to-noise ratio observations that are effective in non-uniform scenes. The novelty is twofold. First, the inferences of the backscatter and extinction are applied to images, whereas current lidar algorithms only use the information content of single profiles. Hence, the latent spatial and temporal information in noisy images are utilized to infer the cross-sections. Second, the noise associated with photon-counting lidar observations can be modeled using a Poisson distribution, and state-of-the-art tools for solving Poisson inverse problems are adapted to the atmospheric lidar problem. It is demonstrated through photon-counting high spectral resolution lidar (HSRL) simulations that the proposed algorithm yields inverted backscatter and extinction cross-sections (per unit volume) with smaller mean squared error values at higher spatial and temporal resolutions, compared to the standard approach. Two case studies of real experimental data are also provided where the proposed algorithm is applied on HSRL observations and the inverted backscatter and extinction cross-sections are compared against the standard approach.
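The role of the Poisson noise model can be sketched with a one-dimensional toy profile: the per-bin photon counts are themselves the maximum-likelihood rate estimate, and exploiting spatial structure (here just a moving average, a crude stand-in for the paper's regularized Poisson inversion) reduces the mean squared error:

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

rng = np.random.default_rng(7)

# Synthetic lidar-like profile: a decaying background plus a "cloud layer".
z = np.linspace(0.0, 10.0, 200)
rate = 50.0 * np.exp(-0.3 * z) + 80.0 * np.exp(-0.5 * ((z - 6.0) / 0.4) ** 2)

# Photon counting is Poisson: raw counts are the noisy per-bin ML estimate.
counts = rng.poisson(rate).astype(float)

# A simple spatial smoother trades a little bias for a large variance
# reduction, illustrating why structured estimators help at high resolution.
smoothed = uniform_filter1d(counts, size=7)

mse_raw = float(np.mean((counts - rate) ** 2))
mse_smooth = float(np.mean((smoothed - rate) ** 2))
```

The paper's approach replaces the naive smoother with image-level Poisson inverse-problem machinery, but the variance-bias trade-off it exploits is the same.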
Bruno, J.; Duro, L.; Jordana, S.; Cera, E. [QuantiSci, Barcelona (Spain)
1996-02-01
Solubility limits constitute a critical parameter for the determination of the mobility of radionuclides in the near field and the geosphere, and consequently for the performance assessment of nuclear waste repositories. Mounting evidence from natural system studies indicates that trace elements, and consequently radionuclides, are associated with the dynamic cycling of major geochemical components. We have recently developed a thermodynamic approach to take into consideration the co-precipitation and co-dissolution processes that mainly control this linkage. The approach has been tested in various natural system studies with encouraging results. The Pocos de Caldas natural analogue was one of the sites where a full test of our predictive geochemical modelling capabilities was carried out during the analogue project. We have revisited the Pocos de Caldas data and expanded the trace element solubility calculations by considering the documented trace metal/major ion interactions. This has been done by using the co-precipitation/co-dissolution approach. The outcome is as follows: a satisfactory modelling of the behaviour of U, Zn and REEs is achieved by assuming co-precipitation with ferrihydrite. Strontium concentrations are apparently controlled by their co-dissolution from Sr-rich fluorites. From the performance assessment point of view, the present work indicates that calculated solubility limits using the co-precipitation approach are in close agreement with the actual trace element concentrations. Furthermore, the calculated radionuclide concentrations are 2-4 orders of magnitude lower than conservative solubility limits calculated by assuming equilibrium with individual trace element phases. 34 refs, 18 figs, 13 tabs.
Exploring the Obstacles and the Limits of Sustainable Development. A Theoretical Approach
Paula-Carmen Roșca
2017-03-01
Full Text Available The term “sustainable” or “sustainability” is currently used so much and in so many fields that it has basically become part of our everyday lives. It has been connected and linked to almost everything related to our living and our lifestyle: energy, transport, housing, diet, clothing etc. But what does the term “sustainable” really mean? Many people may have heard about sustainable development or sustainability and may even have tried to live sustainably, but their efforts might not be enough. The present paper is meant to bring forward a few of the limits of the “sustainability” concept. Moreover, it is focused on revealing some arguments from the “other side”, along with disagreements regarding some of the principles of “sustainable development” and even criticism related to its progress and its achievements. Another purpose of this paper is to draw attention to some of the issues and obstacles which may threaten the future of sustainability. The paper is also meant to highlight the impact that some stakeholders might have on the evolution of sustainable development due to their financial power, on a global scale.
Yang, Qi; Franco, Christopher M M; Sorokin, Shirley J; Zhang, Wei
2017-02-02
For sponges (phylum Porifera), there is no reliable molecular protocol available for species identification. To address this gap, we developed a multilocus-based Sponge Identification Protocol (SIP) validated by a sample of 37 sponge species belonging to 10 orders from South Australia. The universal barcode COI mtDNA, 28S rRNA gene (D3-D5), and the nuclear ITS1-5.8S-ITS2 region were evaluated for their suitability and capacity for sponge identification. The highest Bit Score was applied to infer the identity. The reliability of SIP was validated by phylogenetic analysis. The 28S rRNA gene and COI mtDNA performed better than the ITS region in classifying sponges at various taxonomic levels. A major limitation is that the databases are not well populated and possess low diversity, making it difficult to conduct the molecular identification protocol. The identification is also impacted by the accuracy of the morphological classification of the sponges whose sequences have been submitted to the database. Re-examination of the morphological identification further demonstrated and improved the reliability of sponge identification by SIP. Integrated with morphological identification, the multilocus-based SIP offers an improved protocol for more reliable and effective sponge identification, by coupling the accuracy of different DNA markers.
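The "highest Bit Score" decision rule described above can be sketched in a few lines; the hit tables below are invented for illustration and do not come from the study's database searches:

```python
# For each marker, take the best database hit; the hit with the highest
# overall bit score across markers then determines the inferred identity.
hits = {
    "COI": [("Aplysina sp.", 850.0), ("Ircinia sp.", 620.0)],
    "28S": [("Aplysina sp.", 910.0), ("Haliclona sp.", 700.0)],
    "ITS": [("Haliclona sp.", 410.0)],
}

best_per_marker = {m: max(h, key=lambda t: t[1]) for m, h in hits.items()}
marker, (species, score) = max(best_per_marker.items(),
                               key=lambda kv: kv[1][1])
```

Using several markers this way hedges against the sparse database coverage the authors identify as the main limitation of any single locus.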
Yeast biomass production: a new approach in glucose-limited feeding strategy
Érika Durão Vieira
2013-01-01
Full Text Available The aim of this work was to implement experimentally a simple glucose-limited feeding strategy for yeast biomass production in a bubble column reactor, based on a spreadsheet simulator suitable for industrial application. In biomass production processes using Saccharomyces cerevisiae strains, one of the constraints is the strong tendency of these species to metabolize sugars anaerobically due to catabolite repression, leading to low values of biomass yield on substrate. The usual strategy to control this metabolic tendency is a fed-batch process in which the sugar source is fed incrementally and the total sugar concentration in the broth is maintained below a determined value. The simulator presented in this work was developed to control molasses feeding on the basis of a simple theoretical model that takes into account the nutritional growth needs of the yeast cell and two input data: the theoretical specific growth rate and the initial cell biomass. In the experimental assay, a commercial baker's yeast strain and molasses as the sugar source were used. Experimental results showed an overall biomass yield on substrate of 0.33, a 6.4-fold biomass increase, and a specific growth rate of 0.165 h-1, in contrast to the predicted value of 0.180 h-1 in the second stage of the simulation.
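The mass-balance logic behind such a glucose-limited feeding simulator can be sketched as follows (illustrative parameters, no maintenance term; not the authors' actual spreadsheet):

```python
import numpy as np

# Assumed process parameters for the sketch, not values from the paper.
mu_set = 0.18       # target specific growth rate (1/h)
Yxs    = 0.50       # biomass yield on substrate (g biomass / g glucose)
X0, V0 = 5.0, 10.0  # initial biomass concentration (g/L) and volume (L)
Sf     = 300.0      # glucose concentration of the molasses feed (g/L)

t = np.linspace(0.0, 12.0, 121)

# Exponential growth at the set rate, and the feed rate that exactly meets
# the substrate demand so glucose never accumulates in the broth (avoiding
# catabolite repression and the anaerobic overflow metabolism).
X_total = X0 * V0 * np.exp(mu_set * t)   # total biomass (g)
F = (mu_set / Yxs) * X_total / Sf        # required feed rate (L/h)

fold_increase = float(X_total[-1] / X_total[0])
```

The exponentially increasing feed profile is the key design choice: feeding any faster than the biomass can consume would let sugar accumulate and trigger the repression the strategy is meant to avoid.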
Multicore in production: advantages and limits of the multiprocess approach in the ATLAS experiment
Binet, S.; Calafiura, P.; Jha, M. K.; Lavrijsen, W.; Leggett, C.; Lesny, D.; Severini, H.; Smith, D.; Snyder, S.; Tatarkhanov, M.; Tsulaia, V.; VanGemmeren, P.; Washbrook, A.
2012-06-01
The shared memory architecture of multicore CPUs provides HEP developers with the opportunity to reduce the memory footprint of their applications by sharing memory pages between the cores in a processor. ATLAS pioneered the multi-process approach to parallelize HEP applications. Using Linux fork() and the Copy On Write mechanism we implemented a simple event task farm, which allowed us to achieve sharing of almost 80% of memory pages among event worker processes for certain types of reconstruction jobs with negligible CPU overhead. By leaving the task of managing shared memory pages to the operating system, we have been able to parallelize large reconstruction and simulation applications originally written to be run in a single thread of execution with little to no change to the application code. The process of validating AthenaMP for production took ten months of concentrated effort and is expected to continue for several more months. Besides validating the software itself, an important and time-consuming aspect of running multicore applications in production was to configure the ATLAS distributed production system to handle multicore jobs. This entailed defining multicore batch queues, where the unit resource is not a core, but a whole computing node; monitoring the output of many event workers; and adapting the job definition layer to handle computing resources with different event throughputs. We will present scalability and memory usage studies, based on data gathered both on dedicated hardware and at the CERN Computer Center.
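The fork-and-share idea can be sketched with a minimal Python task farm; this illustrates copy-on-write page sharing on POSIX systems, and is not AthenaMP itself:

```python
import multiprocessing as mp
import numpy as np

# The parent initializes a large read-only structure before forking; workers
# then see it through copy-on-write pages instead of loading their own copy.
GEOMETRY = np.arange(1_000_000, dtype=np.float64)  # stand-in for shared data

def process_event(event_id):
    # Read-only access: the OS never needs to duplicate the shared pages.
    return float(GEOMETRY[event_id % GEOMETRY.size]) * 2.0

def run_farm(n_events, n_workers=4):
    ctx = mp.get_context("fork")  # explicit fork start method (POSIX only)
    with ctx.Pool(n_workers) as pool:
        return pool.map(process_event, range(n_events))

results = run_farm(100)
```

As in the abstract, the application code (`process_event`) needs no changes to run in parallel; the parallelism and the memory sharing both come from `fork()` semantics.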
Angular plasmon response of gold nanoparticles arrays: approaching the Rayleigh limit
Marae-Djouda, Joseph; Caputo, Roberto; Mahi, Nabil; Lévêque, Gaëtan; Akjouj, Abdellatif; Adam, Pierre-Michel; Maurer, Thomas
2017-01-01
The regular arrangement of metal nanoparticles influences their plasmonic behavior. It has been previously demonstrated that the coupling between diffracted waves and plasmon modes can give rise to extremely narrow plasmon resonances. This is the case when the single-particle localized surface plasmon resonance (λLSP) is very close in value to the Rayleigh anomaly wavelength (λRA) of the nanoparticles array. In this paper, we performed angle-resolved extinction measurements on a 2D array of gold nano-cylinders designed to fulfil the condition λRA<λLSP. Varying the angle of excitation offers a unique possibility to finely modify the value of λRA, thus gradually approaching the condition of coupling between diffracted waves and plasmon modes. The experimental observation of a collective dipolar resonance has been interpreted by exploiting a simplified model based on the coupling of evanescent diffracted waves with plasmon modes. Among other plasmon modes, the measurement technique has also evidenced and allowed the study of a vertical plasmon mode, only visible in TM polarization at off-normal excitation incidence. The results of numerical simulations, based on the periodic Green's tensor formalism, match well with the experimental transmission spectra and show fine details that could go unnoticed by considering only experimental data.
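How tilting the excitation tunes the Rayleigh anomaly toward the localized resonance can be sketched with one branch of the first-order grating equation; the period, refractive index and λLSP below are assumed values, not the sample's parameters:

```python
import numpy as np

# One branch of the first-order Rayleigh anomaly for an array of period a in
# a medium of index n:  lam_RA(theta) = a * (n + sin(theta)).
a = 400.0        # array period (nm), illustrative
n = 1.5          # refractive index of the surrounding medium, illustrative
lam_LSP = 650.0  # single-particle resonance (nm); lam_RA < lam_LSP at theta=0

theta = np.radians(np.linspace(0.0, 30.0, 301))
lam_RA = a * (n + np.sin(theta))

# Angle at which the anomaly has been pushed closest to the LSP resonance,
# i.e. where the diffractive-plasmonic coupling condition is approached.
i = int(np.argmin(np.abs(lam_RA - lam_LSP)))
theta_match = float(np.degrees(theta[i]))
```

This is the tuning knob the measurement exploits: sweeping the incidence angle moves λRA continuously while λLSP stays fixed.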
Man as the measure of all things: a limiting approach to urban regeneration?
Hugentobler, Margrit
2006-01-01
Urban planning and change in the last century has been guided by concepts of Modernity rooted in the Age of Enlightenment that placed the needs of "rational man" at the core of human endeavors of all kinds. Yet, rather than leading to aesthetically beautiful cities characterized by sustainable resource utilization processes, the anthropocentric approach to urban and economic development has created global problems of depletion of natural resources, massive pollution and growing social imbalances within and between nation states. The widely heralded concept of (economically, environmentally, and socially) sustainable development has not yet produced a fundamental rethinking of our patterns of production and consumption. A multi-systems level framework with which to think about sustainable urban development and regeneration is outlined. It is based on an evolutionary perspective of systems and their emergent qualitatively different properties. A distinction between chemical/physical, biological, human/individual, social and cultural systems levels is made. Broadly framed guiding questions at each system's level are proposed as the basis for the development of sustainability criteria and indicators that can be tailored to any type of project in the planning or evaluation stage. A case study addressing the renewal of urban villages in the mega city of Guangzhou in Southern China illustrates the application potential of the framework to the challenge of urban regeneration.
赵凤霞; 王正平; 宋学立; 朱景伟; 孙卉卉; 高相彬; 王海涛
2014-01-01
The heavy metal maximum residue limit (MRL) standards for agricultural products in China and the EU were compared, in order to protect the health of consumers and to meet the demands of export trade development for agricultural products in China; the main hazards of Pb, Cd, Hg, Sn, As, Cr and Ni to the human body are also discussed. The Pb MRLs of most agricultural products such as cereals, fruits and vegetables in China do not differ from those in the EU, but the Pb MRLs of products such as poultry and meat, aquatic animals and dairy products in China are higher than in the EU. The Cd MRLs of cereals, beans, fruits, vegetables, and livestock liver and kidney do not differ from the EU, but the Cd MRLs of poultry and meat and of aquatic products in China are higher than in the EU. The Hg MRLs of aquatic products are basically in accord with the EU, and the Hg content of cereals, vegetables, meats, dairy products, eggs and edible mushrooms is regulated in detail. The Sn MRLs of beverages in China are slightly higher than in the EU. Total As and inorganic As content in most agricultural products is regulated in GB2762-2012. The Ni MRL of grease and grease products, such as hydrogenated vegetable oil and products containing hydrogenated vegetable oil, is 1.0 mg/kg. Suggestions to reduce the heavy metal content of agricultural products are proposed, in view of the currently high heavy metal content of some agricultural products in China compared with developed countries.
Nicholas Stacey
2012-11-01
A WHO and UNICEF joint report states that in 2008, 884 million people lacked access to potable drinking water. A life-cycle approach to developing potable water systems may improve the sustainability of such systems; however, a review of the literature shows that such an approach has primarily been used for urban systems located in resourced countries. Although urbanization is increasing globally, over 40 percent of the world's population is currently rural, and many of these people are considered poor. In this paper, we present a first step towards using life-cycle assessment to develop sustainable rural water systems in resource-limited countries while pointing out the needs. For example, while there are few differences in costs and environmental impacts among many improved rural water system options, a system that uses groundwater with community standpipes is substantially lower in cost than other alternatives, with a somewhat lower environmental inventory. However, an LCA approach shows that, from institutional as well as community and managerial perspectives, sustainability includes many other factors besides cost and environment that are a function of the interdependent decision process used across the life cycle of a water system by aid organizations, water user committees, and household users. These factors often present the biggest challenge to designing sustainable rural water systems for resource-limited countries.
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sample.
Maximum entropy approach to fuzzy control
Ramer, Arthur; Kreinovich, Vladik YA.
1992-01-01
For the same expert knowledge, if one uses different & (AND) and V (OR) operations in a fuzzy control methodology, one ends up with different control strategies. Each choice of these operations restricts the set of possible control strategies. Since a wrong choice can lead to low-quality control, it is reasonable to try to lose as few possibilities as possible. This idea is formalized, and it is shown that it leads to the choice of min(a + b, 1) for V and min(a, b) for &. This choice was tried on a NASA Shuttle simulator; it leads to a maximally stable control.
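The pair of operations the abstract singles out can be sketched directly. This is an illustrative snippet, not code from the paper: it implements the bounded-sum min(a + b, 1) for the V (OR) operation and min(a, b) for the & (AND) operation on membership degrees in [0, 1].

```python
def fuzzy_or(a, b):
    """Bounded-sum disjunction: min(a + b, 1)."""
    return min(a + b, 1.0)

def fuzzy_and(a, b):
    """Minimum conjunction: min(a, b)."""
    return min(a, b)

# Combining truth degrees of two fuzzy conditions:
print(fuzzy_or(0.25, 0.5))   # 0.75
print(fuzzy_or(0.75, 0.5))   # 1.0 (sum saturates at 1)
print(fuzzy_and(0.75, 0.5))  # 0.5
```

Note that min(a + b, 1) is the largest associative OR-like operation on [0, 1], which is consistent with the paper's goal of discarding as few control strategies as possible.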
January, Kathleen; Conway, Laura J; Deardorff, Matthew; Harrington, Ann; Krantz, Ian D; Loomes, Kathleen; Pipan, Mary; Noon, Sarah E
2016-06-01
Given the clinical complexities of Cornelia de Lange Syndrome (CdLS), the Center for CdLS and Related Diagnoses at The Children's Hospital of Philadelphia (CHOP) and The Multidisciplinary Clinic for Adolescents and Adults at Greater Baltimore Medical Center (GBMC) were established to develop a comprehensive approach to clinical management and research issues relevant to CdLS. Little work has been done to evaluate the general utility of a multispecialty approach to patient care. Previous research demonstrates several advantages and disadvantages of multispecialty care. This research aims to better understand the benefits and limitations of a multidisciplinary clinic setting for individuals with CdLS and related diagnoses. Parents of children with CdLS and related diagnoses who have visited a multidisciplinary clinic (N = 52) and who have not visited a multidisciplinary clinic (N = 69) were surveyed to investigate their attitudes. About 90.0% of multispecialty clinic attendees indicated a preference for multidisciplinary care. However, some respondents cited a need for additional clinic services including more opportunity to meet with other specialists (N = 20), such as behavioral health, and increased information about research studies (N = 15). Travel distance and expenses often prevented families' multidisciplinary clinic attendance (N = 41 and N = 35, respectively). Despite identified limitations, these findings contribute to the evidence demonstrating the utility of a multispecialty approach to patient care. This approach ultimately has the potential to not just improve healthcare for individuals with CdLS but for those with medically complex diagnoses in general. © 2016 Wiley Periodicals, Inc.
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et. al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation occurs under repetitive quenching where tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important tasks in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for these samples, the total length of CC needed in the design of an SFCL can be determined.
Moreno de las Heras, Mariano; Diaz Sierra, Ruben; Nicolau, Jose M.; Zavala, Miguel A.
2013-04-01
Slope reclamation from surface mining and road construction usually shows important constraints in water-limited environments. Soil erosion is perceived as a critical process, especially when rill formation occurs, as rills can condition the spatial distribution and availability of soil moisture for plant growth, hence affecting vegetation development. On the other hand, encouraging early vegetation establishment is essential to reduce the risk of degradation in these man-made systems. This work describes a modeling approach focused on stability analysis of water-limited reclaimed slopes, where interactive relationships between rill erosion and vegetation regulate ecosystem stability. Our framework reproduces two main groups of trends along the temporal evolution of reclaimed slopes: successful trends, characterized by widespread vegetation development and the effective control of rill erosion processes; and gullying trends, characterized by the progressive loss of vegetation and a sharp logistic increase in erosion rates. Furthermore, this analytical approach allows the determination of threshold values for both vegetation cover and rill erosion that drive the system's stability, facilitating the identification of critical situations that require specific human intervention (e.g. revegetation or, in very problematic cases, revegetation combined with rill network destruction) to ensure the long-term sustainability of the restored ecosystem. We apply our threshold analysis framework in Mediterranean-dry reclaimed slopes derived from surface coal mining (the Teruel coalfield in central-east Spain), obtaining a good field-based performance. Therefore, we believe that this model is a valuable contribution for the management of water-limited reclaimed systems, as it can play an important role in decision-making during ecosystem restoration and provides a tool for the assessment of restoration success in severely disturbed landscapes.
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
El Korso Mohammed
2011-01-01
The statistical resolution limit (SRL), defined as the minimal separation between parameters that allows correct resolvability, is an important statistical tool to quantify the ultimate performance of parametric estimation problems. In this article, we generalize the concept of the SRL to the multidimensional SRL (MSRL), applied to the multidimensional harmonic retrieval model. We first derive the MSRL using a hypothesis test approach. This statistical test is shown to be asymptotically a uniformly most powerful test, which is the strongest optimality statement one could expect to obtain. Second, we link the proposed asymptotic MSRL based on the hypothesis test approach to a new extension of the SRL based on the Cramér-Rao bound approach. Thus, a closed-form expression of the asymptotic MSRL is given and analyzed in the framework of the multidimensional harmonic retrieval model. In particular, it is proved that the optimal MSRL is obtained for equi-powered sources and/or an equi-distributed number of sensors on each multi-way array.
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels of training samples, because traditional loss functions are applied equally to all samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
Lapierre, David; Kochanov, Roman; Kokoouline, Viatcheslav; Tyuterev, Vladimir
2016-01-01
Energies and lifetimes (widths) of vibrational states above the lowest dissociation limit of $^{16}$O$_3$ were determined using a previously-developed efficient approach, which combines hyperspherical coordinates and a complex absorbing potential. The calculations are based on a recently-computed potential energy surface of ozone determined with a spectroscopic accuracy [J. Chem. Phys. {\\bf 139}, 134307 (2013)]. The effect of permutational symmetry on rovibrational dynamics and the density of resonance states in O$_3$ is discussed in detail. Correspondence between quantum numbers appropriate for short- and long-range parts of wave functions of the rovibrational continuum is established. It is shown, by symmetry arguments, that the allowed purely vibrational ($J=0$) levels of $^{16}$O$_3$ and $^{18}$O$_3$, both made of bosons with zero nuclear spin, cannot dissociate on the ground state potential energy surface. Energies and wave functions of bound states of the ozone isotopologue $^{16}$O$_3$ with rotational ...
Bruni Filippo
2015-12-01
As gamification processes receive continually greater attention, the dimension of evaluation must also be addressed in order to avoid ineffective forms of trivialisation. With reference to the evidence-based approach proposed by Mayer, and highlighting its possibilities and limits, an experiment related to teacher training is presented here, in which we attempt to unite some traits of gamification processes with a first evaluation screen. The data obtained indicate, on the one hand, an overall positive perception on the part of the attendees; on the other, they reveal forms of resistance and saturation with respect to both the excessively competitive mechanisms and the peer evaluation procedures.
Capobianco, Amedeo; Borrelli, Raffaele; Landi, Alessandro; Velardo, Amalia; Peluso, Andrea
2016-07-21
The absorption band shapes of a solvent tunable donor-acceptor dye have been theoretically investigated by using Kubo's generating function approach, with minimum energy geometries and normal coordinates computed at the DFT level of theory. The adopted computational procedure allows us to include in the computation of Franck-Condon factors the whole set of normal modes, without any limitation on excitation quanta, allowing for an almost quantitative reproduction of the absorption band shape when the equilibrium geometries of the ground and the excited states are well predicted by electronic computations. Noteworthy, the functionals that yield more accurate band shapes also provide good prediction of the moment variations upon excitation; because the latter quantities are rarely available, theoretical simulation of band shapes could be a powerful tool for choosing the most appropriate computational method for predictive purposes.
Department of Housing and Urban Development — In accordance with 24 CFR Part 92.252, HUD provides maximum HOME rent limits. The maximum HOME rents are the lesser of: The fair market rent for existing housing for...
Todling, Ricardo
2015-01-01
Recently, this author studied an approach to the estimation of system error based on combining observation residuals derived from a sequential filter and fixed lag-1 smoother. While extending the methodology to a variational formulation, experimenting with simple models and making sure consistency was found between the sequential and variational formulations, the limitations of the residual-based approach came clearly to the surface. This note uses the sequential assimilation application to simple nonlinear dynamics to highlight the issue. Only when some of the underlying error statistics are assumed known is it possible to estimate the unknown component. In general, when considerable uncertainties exist in the underlying statistics as a whole, attempts to obtain separate estimates of the various error covariances are bound to lead to misrepresentation of errors. The conclusions are particularly relevant to present-day attempts to estimate observation-error correlations from observation residual statistics. A brief illustration of the issue is also provided by comparing estimates of error correlations derived from a quasi-operational assimilation system and a corresponding Observing System Simulation Experiments framework.
Hameed, Sarah O; White, J Wilson; Miller, Seth H; Nickols, Kerry J; Morgan, Steven G
2016-06-29
Demographic connectivity is fundamental to the persistence and resilience of metapopulations, but our understanding of the link between reproduction and recruitment is notoriously poor in open-coast marine populations. We provide the first evidence of high local retention and limited connectivity among populations spanning 700 km along an open coast in an upwelling system. Using extensive field measurements of fecundity, population size and settlement in concert with a Bayesian inverse modelling approach, we estimated that, on average, Petrolisthes cinctipes larvae disperse only 6.9 km (±25.0 km s.d.) from natal populations, despite spending approximately six weeks in an open-coast system that was once assumed to be broadly dispersive. This estimate differed substantially from our prior dispersal estimate (153.9 km) based on currents and larval duration and behaviour, revealing the importance of employing demographic data in larval dispersal estimates. Based on this estimate, we predict that demographic connectivity occurs predominantly among neighbouring populations less than 30 km apart. Comprehensive studies of larval production, settlement and connectivity are needed to advance an understanding of the ecology and evolution of life in the sea as well as to conserve ecosystems. Our novel approach provides a tractable framework for addressing these questions for species occurring in discrete coastal populations.
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector that mitigates the intersymbol interference introduced by bandlimited channels. This detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer and a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer but worse than that of the near maximum likelihood detector.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
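The mutual-information quantity described above can be illustrated with a plug-in entropy estimate over discrete classification responses. This is a hedged sketch of the general idea only, not the authors' implementation (which embeds the entropy-based estimate inside a gradient-based learning objective):

```python
import math
from collections import Counter

def entropy(values):
    """Empirical (plug-in) entropy of a discrete sample, in nats."""
    n = len(values)
    return -sum((c / n) * math.log(c / n) for c in Counter(values).values())

def mutual_information(responses, labels):
    """Plug-in estimate I(R; Y) = H(R) + H(Y) - H(R, Y)."""
    joint = list(zip(responses, labels))
    return entropy(responses) + entropy(labels) - entropy(joint)

labels    = [0, 0, 1, 1, 0, 1, 0, 1]
good_resp = [0, 0, 1, 1, 0, 1, 0, 1]  # tracks labels exactly: I = H(Y) = ln 2
bad_resp  = [0, 1, 0, 1, 0, 1, 0, 1]  # weakly related to labels: much lower I
print(mutual_information(good_resp, labels))  # ~0.693 nats
print(mutual_information(bad_resp, labels))
```

A classifier whose responses leave little uncertainty about the true label yields high mutual information, which is exactly the behavior the regularizer rewards.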
Daran Jean-Marc
2011-08-01
Background: In microbial production of non-catabolic products such as antibiotics, a loss of production capacity upon long-term cultivation (for example, in chemostats), a phenomenon called strain degeneration, is often observed. In this study a systems biology approach, monitoring changes from gene to produced flux, was used to study degeneration of penicillin production in a high-producing Penicillium chrysogenum strain during prolonged ethanol-limited chemostat cultivations. Results: During these cultivations, the biomass-specific penicillin production rate decreased more than 10-fold in fewer than 22 generations. No evidence was obtained for a decrease of the copy number of the penicillin gene cluster, nor for a significant down-regulation of the expression of the penicillin biosynthesis genes. However, a strong down-regulation of the biosynthesis pathway of cysteine, one of the precursors of penicillin, was observed. Furthermore, the protein levels of the penicillin pathway enzymes L-α-(δ-aminoadipyl)-L-α-cysteinyl-D-α-valine synthetase (ACVS) and isopenicillin-N synthase (IPNS) decreased significantly. Re-cultivation of fully degenerated cells in unlimited batch culture and subsequent C-limited chemostats resulted in only a slight recovery of penicillin production. Conclusions: Our findings indicate that the observed degeneration is attributable to a significant decrease in the levels of the first two enzymes of the penicillin biosynthesis pathway, ACVS and IPNS. This decrease is not caused by genetic instability of the penicillin amplicon, nor by down-regulation of the penicillin biosynthesis pathway. Furthermore, no indications were obtained for degradation of these enzymes as a result of autophagy. Possible causes for the decreased enzyme levels could be a decrease in the translation efficiency of ACVS and IPNS during degeneration, or the presence of a culture variant impaired in the biosynthesis of functional forms of these enzymes.
Thompson, William L. [Bonneville Power Administration, Portland, OR (US). Environment, Fish and Wildlife
2000-11-01
Hankin and Reeves' (1988) approach to estimating fish abundance in small streams has been applied in stream-fish studies across North America. However, as with any method of population estimation, there are important assumptions that must be met for estimates to be minimally biased and reasonably precise. Consequently, I investigated effects of various levels of departure from these assumptions via simulation based on results from an example application in Hankin and Reeves (1988) and a spatially clustered population. Coverage of 95% confidence intervals averaged about 5% less than nominal when removal estimates equaled true numbers within sampling units, but averaged 62% - 86% less than nominal when they did not, with the exception where detection probabilities of individuals were >0.85 and constant across sampling units (95% confidence interval coverage = 90%). True total abundances averaged far (20% - 41%) below the lower confidence limit when not included within intervals, which implies large negative bias. Further, the average coefficient of variation (CV) was about 1.5 times higher when removal estimates did not equal true numbers within sampling units (CV = 0.27 [SE = 0.0004]) than when they did (CV = 0.19 [SE = 0.0002]). A potential modification to Hankin and Reeves' approach is to include environmental covariates that affect detection rates of fish into the removal model or other mark-recapture model. A potential alternative is to use snorkeling in combination with line transect sampling to estimate fish densities. Regardless of the method of population estimation, a pilot study should be conducted to validate the enumeration method, which requires a known (or nearly so) population of fish to serve as a benchmark to evaluate bias and precision of population estimates.
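For concreteness, the kind of removal estimate evaluated above can be sketched with the classic two-pass removal estimator (Zippin/Seber); this is standard background, assumed here as illustrative, not code or notation from the study itself.

```python
def two_pass_removal_estimate(c1, c2):
    """Classic two-pass removal estimator:
        N_hat = c1**2 / (c1 - c2),  p_hat = (c1 - c2) / c1
    where c1, c2 are the catches on the first and second passes.
    Assumes constant capture probability; valid only when c1 > c2."""
    if c2 >= c1:
        raise ValueError("removal estimator requires c1 > c2")
    n_hat = c1 ** 2 / (c1 - c2)
    p_hat = (c1 - c2) / c1
    return n_hat, p_hat

# First pass catches 80 fish, second pass 20:
# N_hat = 6400 / 60 ~ 106.7 fish, estimated capture probability 0.75
print(two_pass_removal_estimate(80, 20))
```

The constant-capture-probability assumption is exactly the kind of assumption whose violation the simulation study shows can badly bias the estimates.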
Optical and terahertz spectra analysis by the maximum entropy method.
Vartiainen, Erik M; Peiponen, Kai-Erik
2013-06-01
Phase retrieval is one of the classical problems in various fields of physics including x-ray crystallography, astronomy and spectroscopy. It arises when only an amplitude measurement on electric field can be made while both amplitude and phase of the field are needed for obtaining the desired material properties. In optical and terahertz spectroscopies, in particular, phase retrieval is a one-dimensional problem, which is considered as unsolvable in general. Nevertheless, an approach utilizing the maximum entropy principle has proven to be a feasible tool in various applications of optical, both linear and nonlinear, as well as in terahertz spectroscopies, where the one-dimensional phase retrieval problem arises. In this review, we focus on phase retrieval using the maximum entropy method in various spectroscopic applications. We review the theory behind the method and illustrate through examples why and how the method works, as well as discuss its limitations.
Ulbrich, Susanne E; Wolf, Eckhard; Bauersachs, Stefan
2012-01-01
Ongoing detailed investigations into embryo-maternal communication before implantation reveal that during early embryonic development a plethora of events are taking place. During the sexual cycle, remodelling and differentiation processes in the endometrium are controlled by ovarian hormones, mainly progesterone, to provide a suitable environment for establishment of pregnancy. In addition, embryonic signalling molecules initiate further sequences of events; of these molecules, prostaglandins are discussed herein as specifically important. Inadequate receptivity may impede preimplantation development and implantation, leading to embryonic losses. Because there are multiple factors affecting fertility, receptivity is difficult to comprehend. This review addresses different models and methods that are currently used and discusses their respective potentials and limitations in distinguishing key messages out of molecular twitter. Transcriptome, proteome and metabolome analyses generate comprehensive information and provide starting points for hypotheses, which need to be substantiated using further confirmatory methods. Appropriate in vivo and in vitro models are needed to disentangle the effects of participating factors in the embryo-maternal dialogue and to help distinguish associations from causalities. One interesting model is the study of somatic cell nuclear transfer embryos in normal recipient heifers. A multidisciplinary approach is needed to properly assess the importance of the uterine milieu for embryonic development and to use the large number of new findings to solve long-standing issues regarding fertility.
Peng, Hong-Jie; Liang, Jiyuan; Zhu, Lin; Huang, Jia-Qi; Cheng, Xin-Bing; Guo, Xuefeng; Ding, Weiping; Zhu, Wancheng; Zhang, Qiang
2014-11-25
Hollow nanostructures afford intriguing structural features, ranging from large surface area and fully exposed active sites to kinetically favorable mass transport and tunable surface permeability. Graphene nanoshells with well-defined small cavities and delicately designed graphene shells are of strong interest for their unique properties and potential applications. Herein, a mesoscale approach is proposed to fabricate graphene nanoshells with single or few graphene layers and quite small diameters through catalytic self-limited assembly of nanographene on in situ formed nanoparticles. The graphene nanoshells, with a diameter of ca. 10-30 nm and a pore volume of 1.98 cm³ g⁻¹, were employed as hosts to accommodate sulfur for high-rate lithium-sulfur batteries. A very high initial discharge capacity of 1520 mAh g⁻¹, corresponding to a 91% sulfur utilization rate at 0.1 C, was achieved on a graphene nanoshell/sulfur composite with 62 wt % sulfur loading. A very high capacity retention of 70% was maintained when the current density increased from 0.1 C to 2.0 C, and an ultraslow decay rate of 0.06% per cycle over 1000 cycles was observed.
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
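The bound stated in the abstract, that the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity, can be sketched numerically. The rigidity value and the Hanks-Kanamori moment-to-magnitude conversion below are standard assumptions brought in for illustration, not values taken from the paper:

```python
import math

def max_seismic_moment(injected_volume_m3, rigidity_pa=3.0e10):
    """Upper bound from the abstract: M0_max = G * dV.
    rigidity_pa ~ 3e10 Pa is a typical crustal modulus of rigidity (assumed)."""
    return rigidity_pa * injected_volume_m3

def moment_magnitude(m0_newton_meters):
    """Standard Hanks-Kanamori relation: Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

# e.g. 1e6 m^3 of injected wastewater:
m0 = max_seismic_moment(1.0e6)           # 3e16 N*m
print(m0, moment_magnitude(m0))          # maximum magnitude ~4.9
```

Under these assumptions, injected volumes on the order of a million cubic meters bound the induced magnitude near 5, consistent with the abstract's observation that wastewater disposal is associated with maximum magnitudes sometimes exceeding 5 for larger volumes.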
Mao, Yuezhi; Horn, Paul R; Mardirossian, Narbe; Head-Gordon, Teresa; Skylaris, Chris-Kriton; Head-Gordon, Martin
2016-07-28
Recently developed density functionals have good accuracy for both thermochemistry (TC) and non-covalent interactions (NC) if very large atomic orbital basis sets are used. To approach the basis set limit with potentially lower computational cost, a new self-consistent field (SCF) scheme is presented that employs minimal adaptive basis (MAB) functions. The MAB functions are optimized on each atomic site by minimizing a surrogate function. High accuracy is obtained by applying a perturbative correction (PC) to the MAB calculation, similar to dual basis approaches. Compared to exact SCF results, using this MAB-SCF (PC) approach with the same large target basis set produces only small deviations, and the basis set limit can be even better reproduced. With further improvement to its implementation, MAB-SCF (PC) is a promising lower-cost substitute for conventional large-basis calculations as a method to approach the basis set limit of modern density functionals.
Fundamental limitations in antenna resolution by maximum entropy methods
Bevensee, R.M.
1984-08-01
This paper summarizes work done during the past few years on antenna super-resolution of distant radiating sources, both incoherent and coherent, with and without additive noise.
5 CFR 582.402 - Maximum garnishment limitations.
2010-01-01
... Section 582.402 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS... disposable earnings subject to garnishment to enforce any legal debt other than an order for child support or...), shall not exceed 25 percent of the employee-obligor's aggregate disposable earnings for any workweek....
77 FR 37554 - Calculation of Maximum Obligation Limitation
2012-06-22
... Orderly Liquidation Authority (``OLA'') to resolve a large interconnected financial company upon a... on financial stability in the United States and the use of OLA would avoid or mitigate such adverse... of law. \\7\\ Dodd Frank Act, section 202(a)(1)(A)(iii). The OLA in the Dodd-Frank Act is intended as...
76 FR 72645 - Calculation of Maximum Obligation Limitation
2011-11-25
... that fair value measurement is context dependant and the result of numerous variables, including the... fair value of the total consolidated assets of each covered financial company that are available for... of each covered financial company be measured at their ``fair value.'' The Dodd-Frank Act does not...
Lalonde, Sylvie; Ehrhardt, David W; Loqué, Dominique; Chen, Jin; Rhee, Seung Y; Frommer, Wolf B
2008-02-01
Homotypic and heterotypic protein interactions are crucial for all levels of cellular function, including architecture, regulation, metabolism, and signaling. Therefore, protein interaction maps represent essential components of post-genomic toolkits needed for understanding biological processes at a systems level. Over the past decade, a wide variety of methods have been developed to detect, analyze, and quantify protein interactions, including surface plasmon resonance spectroscopy, NMR, yeast two-hybrid screens, peptide tagging combined with mass spectrometry and fluorescence-based technologies. Fluorescence techniques range from co-localization of tags, which may be limited by the optical resolution of the microscope, to fluorescence resonance energy transfer-based methods that have molecular resolution and can also report on the dynamics and localization of the interactions within a cell. Proteins interact via highly evolved complementary surfaces with affinities that can vary over many orders of magnitude. Some of the techniques described in this review, such as surface plasmon resonance, provide detailed information on physical properties of these interactions, while others, such as two-hybrid techniques and mass spectrometry, are amenable to high-throughput analysis using robotics. In addition to providing an overview of these methods, this review emphasizes techniques that can be applied to determine interactions involving membrane proteins, including the split ubiquitin system and fluorescence-based technologies for characterizing hits obtained with high-throughput approaches. Mass spectrometry-based methods are covered by a review by Miernyk and Thelen (2008; this issue, pp. 597-609). In addition, we discuss the use of interaction data to construct interaction networks and as the basis for the exciting possibility of using them to predict interaction surfaces.
Cizek, P.
2007-01-01
High breakdown-point regression estimators protect against large errors and data contamination. Motivated by some of them (the least trimmed squares and maximum trimmed likelihood estimators), we propose a general trimmed estimator, which unifies and extends many existing robust procedures. We derive
Cizek, P.
2007-01-01
High breakdown-point regression estimators protect against large errors and data contamination. We generalize the concept of trimming used by many of these robust estimators, such as the least trimmed squares and maximum trimmed likelihood, and propose a general trimmed estimator, which renders
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p
Unification of field theory and maximum entropy methods for learning probability densities.
Kinney, Justin B
2015-09-01
The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
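As an illustration of the maximum entropy principle the abstract builds on (not of the Bayesian field theory itself), here is a minimal sketch: the maximum entropy distribution on a finite support with a fixed mean is an exponential family, and its single Lagrange multiplier can be found by bisection. All names and parameters are illustrative:

```python
import math

def maxent_with_mean(xs, target_mean, lo=-50.0, hi=50.0):
    """Maximum entropy distribution on support xs subject to a fixed mean.
    The solution has the exponential-family form p_i ~ exp(-lam * x_i);
    the Lagrange multiplier lam is found by bisection (the constrained
    mean is strictly decreasing in lam)."""
    def mean_for(lam):
        w = [math.exp(-lam * x) for x in xs]
        z = sum(w)
        return sum(x * wi for x, wi in zip(xs, w)) / z
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target_mean:
            lo = mid   # mean too high: increase lam
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * x) for x in xs]
    z = sum(w)
    return [wi / z for wi in w]

# Uniform support 0..9, constrain the mean to 3.0
p = maxent_with_mean(list(range(10)), 3.0)
print(round(sum(i * pi for i, pi in enumerate(p)), 6))  # 3.0
```

With no constraint beyond normalization this reduces to the uniform distribution, the familiar zero-information maximum entropy estimate.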
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.
2014-06-01
Superconducting fault current limiters (SFCL) can reduce short-circuit currents in electrical power systems. One of the most important steps in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on these sample results, the total length of CC needed in the design of an SFCL can be determined.
Bittel, R.; Mancel, J. [Commissariat a l' Energie Atomique, 92 - Fontenay-aux-Roses (France). Centre d' Etudes Nucleaires, departement de la protection sanitaire
1968-10-01
The most important carriers of radioactive contamination of man are foodstuffs as a whole, and not only ingested water or inhaled air. That is why, in accordance with the spirit of the recent ICRP recommendations, it is proposed to replace the MPC with the notion of maximum levels of contamination of water. In the case of aquatic food chains (aquatic organisms and irrigated foodstuffs), knowledge of the ingested quantities and of the food/water concentration factors makes it possible to determine these maximum levels, or to establish a linear relation between the maximum levels in the case of two primary carriers of contamination (continental and sea waters). The notions of critical food consumption, critical radioelements and waste disposal formulae are considered in the same spirit, taking care to give the greatest possible weight to local situations. (authors)
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
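A crude version of the envelope-curve construction described above can be sketched as follows: bin observation sites by the logarithm of drainage area and keep the largest observed peak flow in each bin, which traces an upper bound on floods as a function of basin size. The binning scheme and the sample data are hypothetical, not the curves computed in the study:

```python
import math

def envelope_curve(areas, peaks, nbins=5):
    """Crude envelope of maximum observed peak flows vs. drainage area:
    bin sites by log10(area) and keep the largest peak in each bin.
    Returns (log-area bin centers, envelope peak flows)."""
    logs = [math.log10(a) for a in areas]
    lo, hi = min(logs), max(logs)
    width = (hi - lo) / nbins or 1.0
    best = {}
    for la, q in zip(logs, peaks):
        b = min(int((la - lo) / width), nbins - 1)  # clamp the max point
        if q > best.get(b, 0.0):
            best[b] = q
    centers = [lo + (b + 0.5) * width for b in sorted(best)]
    return centers, [best[b] for b in sorted(best)]

# Hypothetical (area in square miles, peak flow in cfs) observations
areas = [1, 3, 10, 30, 100, 300, 1000, 3000]
peaks = [2000, 5000, 9000, 20000, 40000, 90000, 150000, 300000]
centers, env = envelope_curve(areas, peaks, nbins=4)
print(env)  # [5000, 20000, 90000, 300000]
```

A smooth envelope, as in the study, would then be fitted above these per-bin maxima in log-log space.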
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
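The Wiener index referenced above is the sum of shortest-path distances over all unordered vertex pairs. A minimal sketch for computing it on a tree (or any connected unweighted graph) by repeated breadth-first search; the adjacency-map representation is illustrative:

```python
from collections import deque

def wiener_index(adj):
    """Wiener index: sum of shortest-path distances over all unordered
    vertex pairs. adj maps each vertex to a list of neighbours."""
    total = 0
    for src in adj:
        # BFS from src gives distances to all reachable vertices
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total // 2  # each pair was counted in both directions

# Path graph on 4 vertices: 0-1-2-3
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(wiener_index(path4))  # 10
```

The optimization problem the abstract solves, maximizing this quantity over all trees with a prescribed degree sequence, is the hard part; evaluating the index itself is straightforward.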
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
Heavy-to-Light Form Factors in the Final Hadron Large Energy Limit Covariant Quark Model Approach
Charles, J; Oliver, L; Pène, O; Raynal, J C
1999-01-01
We prove the full covariance of the heavy-to-light weak current matrix elements based on the Bakamjian-Thomas construction of relativistic quark models, in the heavy mass limit for the parent hadron and the large energy limit for the daughter one. Moreover, this quark model representation of the heavy-to-light form factors fulfills the general relations that were recently argued to hold in the corresponding limit of QCD, namely that there are only three independent form factors describing the B -> pi (rho) matrix elements, as well as the factorized scaling law sqrt(M)z(E) of the form factors with respect to the heavy mass M and large energy E. These results constitute another good property of the quark models à la Bakamjian-Thomas, which were previously shown to exhibit covariance and Isgur-Wise scaling in the heavy-to-heavy case.
Viecco, Camilo H.; Camp, L. Jean
Effective defense against Internet threats requires data on global real time network status. Internet sensor networks provide such real time network data. However, an organization that participates in a sensor network risks providing a covert channel to attackers if that organization’s sensor can be identified. While there is benefit for every party when any individual participates in such sensor deployments, there are perverse incentives against individual participation. As a result, Internet sensor networks currently provide limited data. Ensuring anonymity of individual sensors can decrease the risk of participating in a sensor network without limiting data provision.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with limited knowledge we have about processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...
Tests of maximum oxygen intake. A critical review.
Shephard, R J
1984-01-01
The determinants of endurance effort vary, depending upon the extent of the muscle mass that is activated. Large muscle work, such as treadmill running, is halted by impending circulatory failure; lack of venous return may compound the basic problem of an excessive cardiac work-load. If the task calls for use of a smaller muscle mass, there is ultimately difficulty in perfusing the active muscles, and glycolysis is halted by an accumulation of acid metabolites. Simple field tests of endurance, such as Cooper's 12-minute run and the Canadian Home Fitness Test, have some value in the rapid screening of large populations, but like other submaximal tests of human performance they lack the precision needed to advise the individual. The directly measured maximum oxygen intake (VO2 max) varies with the type of exercise. The highest values are obtained during uphill treadmill running, but well trained athletes often approach these values during performance of sport-specific tasks. Limitations of methodology and wide interindividual variations of constitutional potential limit the interpretation of maximum oxygen intake data in terms of personal fitness, exercise prescription and the monitoring of training responses. The main practical value of VO2 max measurement is in the functional assessment of patients with cardiorespiratory disease, since changes are then large relative to the precision of the test.
A Weakest-Link Approach for Fatigue Limit of 30CrNiMo8 Steels (Preprint)
2011-03-01
used to predict the fatigue limit of notched and multiaxially loaded specimens of carburized steel [7]. The model was shown to be effective in... Carburized Steel 16MnCrS5," Fatigue and Fracture of Engineering Materials and Structures, 28, pp. 983-995. [8] Bomas, H., Linkewitz, T., and Mayr, P., 1999
Westerberg, I.; Guerrero, J.-L.; Beven, K.; Seibert, J.; Halldin, S.; Lundin, L.-C.; Xu, C.-Y.
2009-04-01
The climate of Central America is highly variable both spatially and temporally; extreme events like floods and droughts are recurrent phenomena posing great challenges to regional water-resources management. Scarce and low-quality hydro-meteorological data complicate hydrological modelling and few previous studies have addressed the water-balance in Honduras. In the alluvial Choluteca River, the river bed changes over time as fill and scour occur in the channel, leading to a fast-changing relation between stage and discharge and difficulties in deriving consistent rating curves. In this application of a four-parameter water-balance model, a limits-of-acceptability approach to model evaluation was used within the General Likelihood Uncertainty Estimation (GLUE) framework. The limits of acceptability were determined for discharge alone for each time step, and ideally a simulated result should always be contained within the limits. A moving-window weighted fuzzy regression of the ratings, based on estimated uncertainties in the rating-curve data, was used to derive the limits. This provided an objective way to determine the limits of acceptability and handle the non-stationarity of the rating curves. The model was then applied within GLUE and evaluated using the derived limits. Preliminary results show that the best simulations are within the limits 75-80% of the time, indicating that precipitation data and other uncertainties like model structure also have a significant effect on predictability.
Denton, Philip; Rowe, Philip
2015-01-01
Electronic marking tools that incorporate statement banks have become increasingly prevalent within higher education. In an experiment, printed and emailed feedback was returned to 243 first-year students on a credit-bearing laboratory report assessment. A transmission approach was used, students being provided with comments on their work, but no…
Douma, R.D.; Batista, J.M.; Touw, K.M.; Kiel, J.A.K.W.; Krikken, A.M.; Zhao, Z.; Veiga, T.; Klaassen, P.; Bovenberg, R.A.L.; Daran, J.M.; Heijnen, J.J.; Van Gulik, W.M.
2011-01-01
Background In microbial production of non-catabolic products such as antibiotics a loss of production capacity upon long-term cultivation (for example chemostat), a phenomenon called strain degeneration, is often observed. In this study a systems biology approach, monitoring changes from gene to pro
Trueba, A.; García Lastra, Juan Maria; Garcia-Fernandez, P.
2011-01-01
This work is aimed at clarifying the changes in the optical spectra of Cr3+ impurities due to either a host lattice variation or a hydrostatic pressure, which can hardly be understood by means of the usual Tanabe-Sugano (TS) approach assuming that the Racah parameter, B, grows when covalency decre...
E. Khoury
2013-01-01
This paper deals with a gradually deteriorating system operating under an uncertain environment whose state is only known on a finite rolling horizon. As such, the system is subject to constraints. Maintenance actions can only be planned at imposed times called maintenance opportunities that are available on a limited visibility horizon. This system can, for example, be a commercial vehicle with a monitored critical component that can be maintained only in some specific workshops. Based on the considered system, we aim to use the monitoring data and the time-limited information for maintenance decision support in order to reduce its costs. We propose two predictive maintenance policies based, respectively, on cost and reliability criteria. Classical age-based and condition-based policies are considered as benchmarks. The performance assessment shows the value of the different types of information and the best way to use them in maintenance decision making.
Khoury, E.; Deloux, E.; Grall, A.; Bérenguer, C.
2013-01-01
This paper deals with a gradually deteriorating system operating under an uncertain environment whose state is only known on a finite rolling horizon. As such, the system is subject to constraints. Maintenance actions can only be planned at imposed times called maintenance opportunities that are available on a limited visibility horizon. This system can, for example, be a commercial vehicle with a monitored critical component that can be maintained only in som...
Combining experiments and simulations using the maximum entropy principle.
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-02-01
A key component of computational biology is to compare the results of computer modelling with experimental measurements. Despite substantial progress in the models and algorithms used in many areas of computational biology, such comparisons sometimes reveal that the computations are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy applications in our field has grown steadily in recent years, in areas as diverse as sequence analysis, structural modelling, and neurobiology. In this Perspectives article, we give a broad introduction to the method, in an attempt to encourage its further adoption. The general procedure is explained in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in complete and quantitative accordance with experiments. A common solution to this problem is to explicitly ensure agreement between the two by perturbing the potential energy function towards the experimental data. So far, a general consensus for how such perturbations should be implemented has been lacking. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
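A minimal sketch of the ensemble-reweighting flavour of this idea, under simplifying assumptions not stated in the abstract (uniform prior weights, a single scalar observable; all names are illustrative): the maximum entropy correction multiplies each simulation snapshot's weight by exp(lam * f) and tunes the multiplier lam so the reweighted average matches the experimental value, which is the minimal perturbation in the relative-entropy sense.

```python
import math

def maxent_reweight(observables, target, lo=-50.0, hi=50.0):
    """Minimally perturb uniform ensemble weights (maximum relative
    entropy) so the weighted average of an observable matches an
    experimental target. New weights: w_i ~ exp(lam * f_i); lam is
    found by bisection (the weighted average is increasing in lam)."""
    def avg(lam):
        w = [math.exp(lam * f) for f in observables]
        z = sum(w)
        return sum(f * wi for f, wi in zip(observables, w)) / z
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if avg(mid) < target:
            lo = mid   # average too low: increase lam
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * f) for f in observables]
    z = sum(w)
    return [wi / z for wi in w]

# Simulated observable values per snapshot; experiment says the average is 1.5
obs = [0.0, 1.0, 2.0, 3.0]
w = maxent_reweight(obs, 1.5)
print(round(sum(f * wi for f, wi in zip(obs, w)), 6))  # 1.5
```

The target must lie inside the range of simulated values; if the simulation never visits the experimentally relevant states, no reweighting can recover them.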
Alam, Muhammad A
2016-01-01
Bifacial tandem cells promise to reduce three fundamental losses (above-bandgap, below-bandgap, and the uncollected light between panels) inherent in classical single-junction PV systems. The successive filtering of light through the bandgap cascade and the requirement of current continuity make optimization of tandem cells difficult, accessible only to numerical solution through computer modeling. The challenge is even more complicated for bifacial designs. In this paper, we use an elegantly simple Markov chain approach to show that the essential physics of optimization is intuitively obvious, and that deeply insightful results can be obtained analytically with a few lines of algebra. This powerful approach reproduces, as special cases, all the known results of traditional/bifacial tandem cells, and highlights the asymptotic efficiency gain of these technologies.
Dhayalan, Balamurugan; Mandal, Kalyaneswar; Rege, Nischay; Weiss, Michael A; Eitel, Simon H; Meier, Thomas; Schoenleber, Ralph O; Kent, Stephen B H
2017-01-31
We have systematically explored three approaches based on 9-fluorenylmethoxycarbonyl (Fmoc) chemistry solid phase peptide synthesis (SPPS) for the total chemical synthesis of the key depsipeptide intermediate for the efficient total chemical synthesis of insulin. The approaches used were: stepwise Fmoc chemistry SPPS; the "hybrid method", in which maximally protected peptide segments made by Fmoc chemistry SPPS are condensed in solution; and, native chemical ligation using peptide-thioester segments generated by Fmoc chemistry SPPS. A key building block in all three approaches was a Glu[O-β-(Thr)] ester-linked dipeptide equipped with a set of orthogonal protecting groups compatible with Fmoc chemistry SPPS. The most effective method for the preparation of the 51 residue ester-linked polypeptide chain of ester insulin was the use of unprotected peptide-thioester segments, prepared from peptide-hydrazides synthesized by Fmoc chemistry SPPS, and condensed by native chemical ligation. High-resolution X-ray crystallography confirmed the disulfide pairings and three-dimensional structure of synthetic insulin lispro prepared from ester insulin lispro by this route. Further optimization of these pilot studies could yield an efficient total chemical synthesis of insulin lispro (Humalog) based on peptide synthesis by Fmoc chemistry SPPS. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Sorias, Soli
2015-01-01
Efforts to overcome the problems of descriptive and categorical approaches have not yielded results. In the present article, psychiatric diagnosis using Bayesian networks is proposed. Instead of a yes/no decision, Bayesian networks give the probability of diagnostic category inclusion, thereby yielding both a graded, i.e., dimensional diagnosis, and a value of the certainty of the diagnosis. With the use of Bayesian networks in the diagnosis of mental disorders, information about etiology, associated features, treatment outcome, and laboratory results may be used in addition to clinical signs and symptoms, with each of these factors contributing proportionally to their own specificity and sensitivity. Furthermore, a diagnosis (albeit one with a lower probability) can be made even with incomplete, uncertain, or partially erroneous information, and patients whose symptoms are below the diagnostic threshold can be evaluated. Lastly, there is no need of NOS or "unspecified" categories, and comorbid disorders become different dimensions of the diagnostic evaluation. Bayesian diagnoses allow the preservation of current categories and assessment methods, and may be used concurrently with criteria-based diagnoses. Users need not put in extra effort except to collect more comprehensive information. Unlike the Research Domain Criteria (RDoC) project, the Bayesian approach neither increases the diagnostic validity of existing categories nor explains the pathophysiological mechanisms of mental disorders. It, however, can be readily integrated to present classification systems. Therefore, the Bayesian approach may be an intermediate phase between criteria-based diagnosis and the RDoC ideal.
Emilio eVello
2015-12-01
Full Text Available With the rapid rise in global population and the challenges caused by climate change, maximizing plant productivity and developing sustainable agriculture strategies are vital for food security. One of the resources most affected in this new environment will be water. In this study, we describe the use of non-invasive technologies exploiting sensors for visible, fluorescent, and near-infrared light to accurately screen survival phenotypes in Arabidopsis thaliana exposed to water-limited conditions. We implemented two drought protocols and a robust analysis methodology that enabled us to clearly assess the wilting or dryness status of the plants at different time points using a phenomics platform. In conclusion, our approach has proven to be very accurate and suitable for experiments in which hundreds of samples have to be screened, making manual evaluation unthinkable. This approach can be used not only in functional genomics studies but also in agricultural applications.
Malatesta, G.; Mannucci, G.; Demofonti, G. [Centro Sviluppo Materiali S.p.A., Rome (Italy); Cumino, G. [TenarisDalmine (Italy); Izquierdo, A.; Tivelli, M. [Tenaris Group (Mexico); Quintanilla, H. [TENARIS Group (Mexico). TAMSA
2005-07-01
Nowadays, specifications require a strict yield-to-tensile (Y/T) ratio limitation; nevertheless, a fully accepted engineering assessment of its influence on pipeline integrity is still lacking. A probabilistic analysis based on the structural reliability approach (Limit State Design, LSD) was made, aimed at quantifying the influence of the Y/T ratio on failure probabilities of offshore pipelines. In particular, Tenaris seamless pipe data were used as input for the probabilistic failure analysis. The LSD approach has been applied to two actual deep-water design cases that were selected on purpose, and the most relevant failure modes have been considered. The main result of the work is that the quantitative effect of the Y/T ratio on the failure probabilities of a deep-water pipeline is not as large as expected; it has a minor effect, especially when failure modes are governed by Y only. (author)
Liu, Hongsheng; Cui, Junjia; Jiang, Kaiyong; Zhou, Guangtao
2016-11-01
Hot stamping of high-strength steel (HSS) can significantly improve the ultimate tensile strength (UTS) of the hot-stamped part and thus meet the increasing demands for weight reduction and safety standards in vehicles. However, the prediction of forming defects such as cracking in hot stamping using the traditional forming limit curve (FLC) is still challenging. In this paper, to predict HSS BR1500HS cracking in hot stamping, a temperature-dependent forming limit surface (FLS) is developed by simulations combined with experiments of biaxial tension of a grooved plate at different temperatures. Unlike the FLC, the newly developed FLS, in which temperature is included, suits the hot stamping of HSS. Considering the interplay among phase transformation, stress, and strain, a finite element (FE)-coupled thermo-mechanical model of the hot stamping is developed and implemented on the ABAQUS/Explicit platform, where the developed FLS is built in to predict strain distributions and HSS BR1500HS cracking in hot stamping. Finally, the developed FLS is used to evaluate the hot formability of HSS BR1500HS in a hot stamping experiment forming a box-shaped part. Results confirm that the developed FLS can accurately predict the occurrence of HSS BR1500HS cracking in hot stamping.
Carreno, Joseph J; Lomaestro, Ben; Tietjan, John; Lodise, Thomas P
2017-03-13
This study evaluated the predictive performance of a Bayesian PK estimation method (ADAPT V) for estimating the 24-hour vancomycin area under the curve (AUC) with limited PK sampling in adult obese patients receiving vancomycin for suspected or confirmed Gram-positive infections. This was an IRB-approved prospective evaluation of 12 patients. Patients had a median (95% CI) age of 61 years (39-71), creatinine clearance of 86 mL/min (75-120), and body mass index of 45 kg/m(2) (40-52). For each patient, five PK concentrations were measured, and 4 different vancomycin population PK models were used as Bayesian priors to estimate the vancomycin AUC (AUCFULL). Using each PK model as a prior, data-depleted PK subsets were used to estimate the 24-hour AUC (i.e., peak and trough data [AUCPT], midpoint and trough data [AUCMT], and trough-only data [AUCT]). The 24-hour AUC derived from the full data set (AUCFULL) was compared to the AUC derived from the data-depleted subsets (AUCPT, AUCMT, AUCT) for each model. For the 4 sets of analyses, AUCFULL estimates ranged from 437 to 489 mg-h/L. The AUCPT provided the best approximation of the AUCFULL; AUCMT and AUCT tended to overestimate AUCFULL. Further prospective studies are needed to evaluate the impact of AUC monitoring in clinical practice, but the findings from this study suggest the vancomycin AUC can be estimated with good precision and accuracy with limited PK sampling using Bayesian PK estimation software.
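A hedged aside on why a clearance estimate is all that is needed (the values below are illustrative, not the study's data): under a one-compartment model at steady state, the 24-hour AUC reduces to the total daily dose divided by clearance, so any Bayesian posterior estimate of clearance immediately yields an AUC:

```python
# Illustrative sketch (hypothetical values, not patient data): for a
# one-compartment model at steady state, AUC24 = daily dose / clearance,
# so a Bayesian posterior estimate of CL directly gives an AUC estimate.
def auc24(daily_dose_mg, clearance_l_per_h):
    """24-hour AUC in mg*h/L from total daily dose and clearance."""
    return daily_dose_mg / clearance_l_per_h

# e.g. 2000 mg/day and a hypothetical posterior clearance of 4.3 L/h
print(round(auc24(2000, 4.3), 1))  # prints 465.1, within the 437-489 mg-h/L range above
```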
Pugazhendhi, S; Sathya, P; Palanisamy, P K; Gopalakrishnan, R
2016-06-01
In this work, we have successfully synthesized highly biocompatible and functionalized Dioscorea alata (D. alata)-mediated silver nanoparticles with different quantities of its extract for the evaluation of efficient bactericidal activity and optical limiting behavior. The crystalline nature of the synthesized silver nanoparticles was confirmed by powder X-ray diffraction (XRD) analysis and further confirmed by the SAED pattern from HRTEM analysis. The surface plasmon resonance band was measured and monitored by UV-Visible spectral studies. The functional groups present in the extract responsible for the reduction and stabilization of the nanoparticles were analyzed by the Fourier transform infrared spectroscopy (FTIR) technique. Surface morphology and particle size were determined by high-resolution transmission electron microscopy (HRTEM) analysis. Elemental analysis was performed by Energy Dispersive X-ray Spectroscopy (EDX). The synthesized silver nanoparticles (AgNPs) in colloidal form were found to exhibit third-order optical nonlinearity, as studied by closed- and open-aperture Z-scan techniques using a 532 nm Nd:YAG (SHG) CW laser beam (COHERENT-Compass 215M-50 diode-pumped) as the source. The negative nonlinearity observed was well utilized for the study of the optical limiting behavior of the silver nanoparticles. D. alata-mediated silver nanoparticles possess very good antimicrobial activity, which was confirmed by the agar well diffusion assay method.
Dorn, H
2003-01-01
Projecting on a suitable subset of coordinates, a picture is constructed in which the conformal boundary of AdS_5 × S^5 and that of the plane wave resulting from the Penrose limit are located on the same line. In a second line of argument, all AdS_5 × S^5 and plane-wave geodesics are constructed in their integrated form. Performing the Penrose limit, the approach of null geodesics reaching the conformal boundary of AdS_5 × S^5 to that of the plane wave is studied in detail. At each point these null geodesics of AdS_5 × S^5 form a cone which degenerates in the limit. (author)
5 CFR 838.711 - Maximum former spouse survivor annuity.
2010-01-01
... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Maximum former spouse survivor annuity... Orders Awarding Former Spouse Survivor Annuities Limitations on Survivor Annuities § 838.711 Maximum former spouse survivor annuity. (a) Under CSRS, payments under a court order may not exceed the...
Grignon, Jessica S; Ledikwe, Jenny H; Makati, Ditsapelo; Nyangah, Robert; Sento, Baraedi W; Semo, Bazghina-Werq
2014-01-01
To address health systems challenges in limited-resource settings, global health initiatives, particularly the President's Emergency Plan for AIDS Relief, have seconded health workers to the public sector. Implementation considerations for secondment as a health workforce development strategy are not well documented. The purpose of this article is to present outcomes, best practices, and lessons learned from a President's Emergency Plan for AIDS Relief-funded secondment program in Botswana. Outcomes are documented across four World Health Organization health systems' building blocks. Best practices include documentation of joint stakeholder expectations, collaborative recruitment, and early identification of counterparts. Lessons learned include inadequate ownership, a two-tier employment system, and ill-defined position duration. These findings can inform program and policy development to maximize the benefit of health workforce secondment. Secondment requires substantial investment, and emphasis should be placed on high-level technical positions responsible for building systems, developing health workers, and strengthening government to translate policy into programs.
Graff, Jennifer Whitney
World energy usage has nearly tripled since 1950, is projected to grow at a rate of 1.5% per year, and is predicted to at least double between the beginning of the millennium and 2050. The United States alone currently consumes more energy than it can produce (≈ 97 quadrillion BTUs consumed in 2011).(1) Presently, fossil fuels make up over 85% of our energy landscape, including both the stationary grid (such as coal and nuclear power plants) and the mobile grid (automobiles using gas and oil). This presents a major demand for developing methods of saving, storing, and renewing energy. Answers to these existing energy demands must come from a variety of renewable sources, including solar, wind, biomass, geothermal, and others. But currently, most renewable sources are only a small part of the big energy picture. One approach to this exponentially growing problem lies within high-efficiency (15%-20%) thermoelectric (TE) materials, which address small, yet very important and specific, parts of a bigger problem. A specific example is Co4Sb12-based skutterudites, an increasingly favorable thermoelectric material for mid- to high-temperature applications (currently used in General Motors TE generator devices). These materials have the ability to be 'tuned' or controlled thermally and electrically through doping and filling mechanisms, as you will see in this dissertation. However, one of the major drawbacks of TE materials is the difficulty in optimizing both electrical and thermal properties simultaneously. Typically, different control parameters are used in order to enhance the electrical and thermal properties individually. It is very rare to observe optimization of both in a TE material via one control parameter. However, the work presented herein successfully augments all TE properties, with one control variable, by using an approach that can be applied to all doped skutterudites and clathrate materials. Skutterudites are novel materials in that they are a binary
16 CFR 1505.8 - Maximum acceptable material temperatures.
2010-01-01
... Association, 155 East 44th Street, New York, NY 10017. Material Degrees C. Degrees F. Capacitors (1) (1) Class... capacitor has no marked temperature limit, the maximum acceptable temperature will be assumed to be 65...
Karwan Fatah-Black
2013-03-01
Full Text Available This article considers what the migration circuits to and from Suriname can tell us about Dutch early modern colonisation in the Atlantic world. Did the Dutch have an Atlantic empire that can be studied by treating it as an integrated space, as suggested by New Imperial Historians, or did colonisation rely on circuits outside Dutch control, stretching beyond its imperial space? An empire-centred approach has dominated the study of Suriname’s history and has largely glossed over the routes taken by European migrants to and from the colony. When the empire-centred perspective is transcended it becomes possible to see that colonists arrived in Suriname from a range of different places around the Atlantic and the European hinterland. The article takes an Atlantic or global perspective to demonstrate the choices available to colonists and the networks through which they moved.
Alan eTalevi
2015-09-01
Full Text Available Multi-target drugs have raised considerable interest in the last decade owing to their advantages in the treatment of complex diseases and health conditions linked to drug resistance issues. Prospective drug repositioning to treat comorbid conditions is an additional, overlooked application of multi-target ligands. While medicinal chemists usually rely on some version of the lock-and-key paradigm to design novel therapeutics, modern pharmacology has recognized that the long-term effects of a given drug on a biological system may depend not only on the specific ligand-target recognition events but also on the influence of the chronic administration of a drug on the cell gene signature. The design of multi-target agents also poses challenging restrictions on either the topology or the flexibility of the candidate drugs, which are briefly discussed in the present article. Finally, computational strategies to approach the identification of novel multi-target agents are overviewed.
Niero, Monia; Hauschild, Michael Zwicky; Olsen, Stig Irving
2016-01-01
Both Life Cycle Assessment (LCA), with its “Cradle to Grave” approach, and the Cradle to Cradle® (C2C) design framework, based on the eco-effectiveness concept, can support the implementation of circular economy. Based on the insights gained in the packaging sector, we perform a Strengths, Weaknesses, Opportunities, and Threats (SWOT) analysis of the combined use of LCA and “C2C tools”, i.e. the C2C design protocol and the C2C Certified™ product standard, in the implementation of circularity strategies at the product level. Moreover, we discuss the challenges which need to be addressed in order to move from a relative to an absolute environmental sustainability perspective at the company level, and define a framework for implementing circularity strategies at the company level, considering an absolute environmental sustainability perspective and the business dimension.
Ruta, Sergiu; Hovorka, Ondrej; Huang, Pin-Wei; Wang, Kangkang; Ju, Ganping; Chantrell, Roy
2017-03-01
The generic problem of extracting information on intrinsic particle properties from the whole class of interacting magnetic fine particle systems is a long standing and difficult inverse problem. As an example, the Switching Field Distribution (SFD) is an important quantity in the characterization of magnetic systems, and its determination in many technological applications, such as recording media, is especially challenging. Techniques such as the first order reversal curve (FORC) methods, were developed to extract the SFD from macroscopic measurements. However, all methods rely on separating the contributions to the measurements of the intrinsic SFD and the extrinsic effects of magnetostatic and exchange interactions. We investigate the underlying physics of the FORC method by applying it to the output predictions of a kinetic Monte-Carlo model with known input parameters. We show that the FORC method is valid only in cases of weak spatial correlation of the magnetisation and suggest a more general approach.
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
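The l1-penalized maximum likelihood problem described above is today widely known as the "graphical lasso". The sketch below is not the authors' block-coordinate-descent or Nesterov-based solver; it uses scikit-learn's off-the-shelf implementation on a small synthetic Gaussian with a known sparse precision matrix, only to show the input/output shape of the problem:

```python
# Sketch of sparse inverse-covariance (precision) estimation via an
# l1-penalized maximum likelihood problem, using scikit-learn's
# GraphicalLasso (not the algorithms proposed in the paper).
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
# Sample from a Gaussian with a known sparse (tridiagonal) precision matrix.
prec = np.array([[ 2.0, -1.0,  0.0],
                 [-1.0,  2.0, -1.0],
                 [ 0.0, -1.0,  2.0]])
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(3), cov, size=2000)

# alpha is the l1 penalty weight; larger alpha gives a sparser graph.
model = GraphicalLasso(alpha=0.05).fit(X)
print(np.round(model.precision_, 2))  # estimated sparse precision matrix
```

The zero pattern of the estimated precision matrix is exactly the edge structure of the undirected graphical model the abstract refers to.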
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating process even when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with a lognormal body and a Pareto tail can be generated as mixtures of lognormally distributed units.
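This is not the authors' maximum entropy test, but the underlying model-comparison idea can be sketched simply: fit Pareto and lognormal models to the upper tail of a sample by maximum likelihood and compare the maximized log-likelihoods:

```python
# A minimal sketch (not the paper's maximum entropy test): compare
# lognormal and Pareto maximum likelihood fits to the upper tail
# (last few percentiles) of a synthetic Pareto sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.pareto(2.5, size=5000) + 1.0   # Pareto(alpha=2.5), support [1, inf)
tail = np.sort(data)[-500:]               # top 10% of the sample

# MLE fits with location fixed at 0, then maximized log-likelihoods.
b, loc_p, scale_p = stats.pareto.fit(tail, floc=0)
ll_pareto = stats.pareto.logpdf(tail, b, loc_p, scale_p).sum()
s, loc_l, scale_l = stats.lognorm.fit(tail, floc=0)
ll_lognorm = stats.lognorm.logpdf(tail, s, loc_l, scale_l).sum()

# For genuinely power-law data the Pareto fit should score at least as well.
print(ll_pareto, ll_lognorm)
```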
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
Stanek, Aleksander; Stefaniak, Tomasz; Makarewicz, Wojciech; Kaska, Lukasz; Podgórczyk, Hanna; Hellman, Andrzej; Lachinski, Andrzej
2005-02-01
The preoperative detection of an accessory spleen (AS) is still a very important and serious problem. The aim of the study was to assess the reasons for failure and the long-term results of laparoscopic splenectomy (LS) in patients with idiopathic thrombocytopenic purpura (ITP). Fifty-eight ITP patients underwent LS between June 1998 and December 2002. There were 42 women and 16 men. Preoperatively, we performed computed tomography (CT) and sonography to evaluate the size of the spleen and, where possible, to recognize the presence of accessory spleens, which were found preoperatively in three cases. Intraoperatively, ASs were found in the course of laparoscopy in six cases overall, three of which were preoperatively false negative. During follow-up (median time 31 months), a low platelet count was recognized in three patients, after 5 months, 1.5 years, and 1.8 years, respectively. In all those cases scintigraphy was performed, and in one case a residual accessory spleen, missed both in the preoperative examination and during laparoscopy, was revealed. In two other patients, in spite of thrombocytopenia, no residual spleens were found. We conclude that the problem of accessory spleens can be managed by careful videoscopic examination of the abdominal cavity during splenectomy, while the use of preoperative imaging techniques in the detection of accessory spleens is still limited by the insufficient sensitivity of the examination.
甘延标; 许爱国; 张广财; 李英骏
2011-01-01
We further develop the lattice Boltzmann (LB) model [Physica A 382 (2007) 502] for compressible flows in two respects. Firstly, we modify the Bhatnagar-Gross-Krook (BGK) collision term in the LB equation, which makes the model suitable for simulating flows with different Prandtl numbers. Secondly, the flux limiter finite difference (FLFD) scheme is employed to calculate the convection term of the LB equation, which effectively suppresses unphysical oscillations at discontinuities and significantly diminishes numerical dissipation. The proposed model is validated by recovering results for some well-known benchmarks, including (i) the thermal Couette flow and (ii) one- and two-dimensional Riemann problems. Good agreement is obtained between LB results and the exact ones or previously reported solutions. The flexibility, together with the high accuracy, of the new model endows it with considerable potential for tracking some long-standing problems and for investigating nonlinear nonequilibrium complex systems.
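The flux-limiter mechanism invoked above can be sketched generically (this is not the paper's exact FLFD scheme): a limiter function phi(r) blends a diffusive low-order flux with an oscillatory high-order one according to the local smoothness ratio r of consecutive solution gradients:

```python
# Generic flux-limiter sketch (illustrative, not the paper's FLFD scheme):
# phi(r) blends low- and high-order numerical fluxes based on the ratio r
# of consecutive gradients, suppressing oscillations at discontinuities.
def minmod(r):
    """Minmod limiter: phi(r) = max(0, min(1, r))."""
    return max(0.0, min(1.0, r))

def limited_flux(f_low, f_high, r):
    """Blend a diffusive low-order flux with a high-order one."""
    return f_low + minmod(r) * (f_high - f_low)

# In smooth regions (r ~ 1) the high-order flux is used; near a
# discontinuity (r <= 0) the scheme falls back to the low-order flux.
print(limited_flux(1.0, 2.0, 1.0), limited_flux(1.0, 2.0, -0.5))  # prints 2.0 1.0
```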
Rednikov, Alexey; Hollander, Nicolas; Hernando Revilla, Marta; Colinet, Pierre
2014-11-01
A model of nucleate pool boiling is considered, and more concretely the growth dynamics of a single spherical-cap vapor bubble on a flat superheated substrate in a large volume of an equally superheated liquid. An asymptotic scheme is developed valid in the limit of small contact angles. These are basically supposed to be the evaporation-induced ones and hence finite even in the case of a perfectly wetting liquid implied here. The consideration generally involves four regions: i) microregion, where the contact line singularities are resolved and the evaporation-induced contact angles are established, ii) Cox-Voinov region, iii) foot of the bubble, and iv) macroregion. It is only in the latter region, which remarkably appears to leading order in the form of the exterior of a sphere touching a planar surface in one point (hence a fixed geometry even for variable contact angles), that the full Navier-Stokes and heat equations are to be (numerically) resolved. ESA & BELSPO PRODEX, F.R.S.-FNRS.
Halici, Zekai; Polat, Beyzagul; Cadirci, Elif; Topcu, Atilla; Karakus, Emre; Kose, Duygu; Albayrak, Abdulmecit; Bayir, Yasin
2016-10-25
Blocking the renin-angiotensin system (RAAS) has previously been effective in the prevention of gastric damage. Therefore, the aim of this study was to investigate the effects of aliskiren, and thus direct renin blockade, in an indomethacin-induced gastric damage model. The effects of aliskiren were evaluated in an indomethacin-induced gastric damage model in Albino Wistar rats. The effects of famotidine were investigated as a standard antiulcer agent. Stereological analyses for ulcer area determination, biochemical analyses for oxidative status determination, and molecular analyses for tissue cytokine and cyclooxygenase determination were performed on stomach tissues. In addition, to clarify the antiulcer effect mechanism of aliskiren, a pylorus ligation-induced gastric acid secretion model was applied to rats. Aliskiren was able to inhibit indomethacin-induced ulcer formation. It also inhibited renin and thus decreased the angiotensin II over-produced during ulcer formation. Aliskiren improved the oxidative status and cytokine profile of the stomach, which was most probably impaired by the increased angiotensin II concentration. Aliskiren also increased the gastroprotective prostaglandin E2 concentration. Finally, aliskiren did not change gastric acidity in the pylorus ligation model. Aliskiren exerted its protective effects on stomach tissue by decreasing inflammatory cytokines and oxidative stress as a result of inhibiting the RAAS, at a rate-limiting step, as well as its end product, angiotensin II. Aliskiren also significantly increased protective factors such as PGE2, but did not affect aggressive factors such as gastric acidity. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Graefener, G
2008-01-01
The mass loss from Wolf-Rayet (WR) stars is of fundamental importance for the final fate of massive stars and their chemical yields. Its Z-dependence is discussed in relation to the formation of long-duration Gamma Ray Bursts (GRBs) and the yields from early stellar generations. However, the mechanism of formation of WR-type stellar winds is still under debate. We present the first fully self-consistent atmosphere/wind models for late-type WN stars. We investigate the mechanisms leading to their strong mass loss, and examine the dependence on stellar parameters, in particular on the metallicity Z. We identify WNL stars as very massive stars close to the Eddington limit, potentially still in the phase of central H-burning. Due to their high L/M ratios, these stars develop optically thick, radiatively driven winds. These winds show qualitatively different properties than the thin winds of OB stars. The resultant mass loss depends strongly on Z, but also on the Eddington factor, and the stellar temperature. We c...
Christophe Gutfrind
2016-05-01
Full Text Available The purpose of this article is to describe the design of a limited stroke actuator and the corresponding prototype to drive a Low Pressure (LP) Exhaust Gas Recirculation (EGR) valve for use in Internal Combustion Engines (ICEs). The direct drive actuator topology is an axial flux machine with two air gaps in order to minimize the rotor inertia and a bipolar surface-mounted permanent magnet in order to respect an 80° angular stroke. Firstly, the actuator will be described and optimized under constraints of a 150 ms time response, a 0.363 N·m minimal torque on an angular range from 0° to 80°, and prototyping constraints. Secondly, the finite element method (FEM) using the FLUX-3D® software (CEDRAT, Meylan, France) will be used to check the actuator performances with consideration of the nonlinear effect of the iron material. Thirdly, a prototype will be made and characterized to compare its measurement results with the analytical model and the FEM model results. With these electromechanical behavior measurements, a numerical model is created with Simulink® in order to simulate an EGR system with this direct drive actuator under all operating conditions. Last but not least, the energy consumption of this machine will be estimated to evaluate the efficiency of the proposed EGR electromechanical system.
De Backer, A; Martinez, G T; Rosenauer, A; Van Aert, S
2013-11-01
In the present paper, a statistical model-based method to count the number of atoms of monotype crystalline nanostructures from high resolution high-angle annular dark-field (HAADF) scanning transmission electron microscopy (STEM) images is discussed in detail together with a thorough study on the possibilities and inherent limitations. In order to count the number of atoms, it is assumed that the total scattered intensity scales with the number of atoms per atom column. These intensities are quantitatively determined using model-based statistical parameter estimation theory. The distribution describing the probability that intensity values are generated by atomic columns containing a specific number of atoms is inferred on the basis of the experimental scattered intensities. Finally, the number of atoms per atom column is quantified using this estimated probability distribution. The number of atom columns available in the observed STEM image, the number of components in the estimated probability distribution, the width of the components of the probability distribution, and the typical shape of a criterion to assess the number of components in the probability distribution directly affect the accuracy and precision with which the number of atoms in a particular atom column can be estimated. It is shown that single atom sensitivity is feasible taking the latter aspects into consideration.
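The counting idea described above can be sketched with a toy model: if the total scattered intensity scales linearly with the number of atoms per column, column intensities cluster around integer multiples of the single-atom contribution, and counts can be recovered by assigning each column to the nearest cluster. A minimal sketch (the single-atom intensity `i1` and the noise level are hypothetical, and simple rounding stands in for the full statistical parameter estimation of the paper):

```python
import random

def count_atoms(intensities, i1):
    """Assign each column intensity to the nearest integer multiple of
    the single-atom intensity i1 (a crude stand-in for the model-based
    estimation described in the abstract)."""
    return [max(1, round(i / i1)) for i in intensities]

random.seed(0)
i1 = 100.0                      # hypothetical single-atom scattered intensity
true_counts = [3, 5, 5, 7, 10]  # atoms per column
noisy = [n * i1 + random.gauss(0, 10) for n in true_counts]
print(count_atoms(noisy, i1))
```

In the real method, the component means, widths and count are themselves estimated from the data, which is what makes single-atom sensitivity a statistical question rather than a rounding exercise.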
P. Fernández-Robredo
2014-01-01
Age-related macular degeneration (AMD) is the leading cause of blindness in the Western world. With an ageing population, it is anticipated that the number of AMD cases will increase dramatically, making a solution to this debilitating disease an urgent requirement for the socioeconomic future of the European Union and worldwide. The present paper reviews the limitations of the current therapies as well as the socioeconomic impact of AMD. There is currently no cure available for AMD, and even palliative treatments are rare. Treatment options show several side effects, are of high cost, and only treat the consequence, not the cause, of the pathology. For that reason, many options involving cell therapy, mainly based on retinal and iris pigment epithelium cells as well as stem cells, are being tested. Moreover, tissue engineering strategies to design and manufacture scaffolds to mimic Bruch’s membrane are very diverse and under investigation. Both alternative therapies aim to prevent and/or cure AMD and are reviewed herein.
James Wood; William Quinlan
2008-09-30
The goal of this project was to develop and execute a novel drilling and completion program in the Antrim Shale near the western shoreline of Northern Michigan. The target was the gas in the Lower Antrim Formation (Upper Devonian). Another goal was to see if drilling permits could be obtained from the Michigan DNR that would allow exploitation of reserves currently off-limits to exploration. This project met both of these goals: the DNR (Michigan Department of Natural Resources) issued permits that allow drilling the shallow subsurface for exploration and production. This project obtained drilling permits for the original demonstration well AG-A-MING 4-12 HD (API: 21-009-58153-0000) and AG-A-MING 4-12 HD1 (API: 21-009-58153-0100), as well as for similar Antrim wells in Benzie County, MI, the Colfax 3-28 HD and nearby Colfax 2-28 HD, which were substituted for the AG-A-MING well. This project also developed successful techniques and strategies for producing the shallow gas. In addition to the project demonstration well, over 20 wells have been drilled to date into the shallow Antrim as a result of this project's findings. Further, fracture stimulation has proven to be a vital step in improving the deliverability of wells to deem them commercial. Our initial plan was very simple: the 'J-well' design. We proposed to drill a vertical or slant well 30.48 meters (100 feet) below the glacial drift, set required casing, then angle back up to tap the resource lying between the base of the drift and the conventional vertical well. The 'J-well' design was tested at Mancelona Township in Antrim County in February of 2007 with the St. Mancelona 2-12 HD 3.
Manakhov, Anton; Michlíček, Miroslav; Felten, Alexandre; Pireaux, Jean-Jacques; Nečas, David; Zajíčková, Lenka
2017-02-01
The quantitative analysis of the chemistry at the surface of functional plasma polymers is highly important for the optimization of their deposition conditions and, therefore, for their subsequent applications. The chemical derivatization of amine and carboxyl-anhydride layers is a well-known technique already applied by many researchers, notwithstanding the known drawbacks of derivatization procedures, such as side or incomplete reactions, that can lead to unreliable results. In this work, X-ray photoelectron spectroscopy (XPS) combined with depth profiling with argon clusters is applied for the first time to study derivatized amine and carboxyl-anhydride plasma polymer layers. It revealed an additional important parameter affecting the derivatization reliability, namely the permeation of the derivatizing molecule through the analysed layer, i.e. the composite effect of the probe molecule size and the layer porosity. Amine-rich films prepared by RF low pressure plasma polymerization of cyclopropylamine were derivatized with trifluoromethyl benzaldehyde (TFBA), and it was observed that the XPS-determined NH2 concentration depth profile decreases rapidly over the top ten nanometers of the layer. The anhydride-rich films prepared by atmospheric plasma co-polymerization of maleic anhydride and C2H2 were reacted with parafluoroaniline and trifluoroethyl amine. The decrease of the F signal in the top surface layer of the anhydride films derivatized by the "large" parafluoroaniline was observed, similarly to the amine films, but the derivatization with the smaller trifluoroethylamine (TFEA) led to a more homogeneous depth profile. The data analysis suggests that the size of the derivatizing molecule is the main factor, showing that the very limited permeation of the TFBA molecule can lead to underestimated densities of primary amines if the XPS analysis is solely carried out at a low take-off angle. In contrast, TFEA is found to be an efficient
COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT
PETRU SERGIU SERBAN
2016-06-01
Ship squat is a combined effect of the increase in a ship's draft and trim due to ship motion in restricted navigation conditions. Over time, researchers have conducted tests on models and ships to find a mathematical formula that can define squat. Various formulas for calculating squat can be found in the literature; among those most commonly used are those of Barrass, Millward, Eryuzlu and ICORELS. This paper presents a comparison between the squat formulas to see the differences between them and which one provides the most satisfactory results. In this respect, a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.
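As a concrete illustration of the kind of formula being compared, one widely quoted simplified form of Barrass's maximum squat estimate can be sketched as below. The simplified form (squat in metres as Cb·V²/100 in open water, doubled in a confined channel, with V in knots) is taken from common navigation references, not from this paper, and the ship parameters are hypothetical:

```python
def barrass_squat_simplified(cb, v_knots, confined=True):
    """Simplified Barrass estimate of maximum squat in metres:
    S = Cb * V^2 / 100 in open water, doubled in a confined channel."""
    k = 2.0 if confined else 1.0
    return k * cb * v_knots ** 2 / 100.0

# A hypothetical cargo ship: block coefficient 0.80 at 10 knots.
print(barrass_squat_simplified(0.80, 10.0))          # confined channel
print(barrass_squat_simplified(0.80, 10.0, False))   # open water
```

The full Barrass, Millward, Eryuzlu and ICORELS formulas compared in the paper additionally account for blockage factor and channel geometry.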
Wootton, Richard; Wu, Wei-I; Bonnardot, Laurent
2013-10-01
Collegium Telemedicus (CT) offers a new approach to the problem of starting a store-and-forward telemedicine network for use in low resource settings. The CT organization provides a no-cost template to allow groups to start a network without delay, together with a peer-support environment for those operating the networks. A new group needs only to supply a Guarantor (who accepts responsibility for the work of the network) and a Coordinator (who operates the telemedicine network, allocating cases and ensuring that they are responded to). Communication takes place via secure messaging, which has several advantages over plain email, e.g. all the data are stored centrally, which means that they can be read from a hand-held device such as a smart phone, but do not need to be stored on that device. Users can access the system with a standard web browser. In the first three months, seven networks were established on the CT system by university groups in the US, the UK, Australia and New Zealand, and by a large, multinational humanitarian organisation. In the most active network, there were 86 telemedicine cases in the first three months, i.e. an average submission rate of 7 cases/week. The CT system appears to fulfil its aim of assisting doctors who wish to help colleagues in other countries by improving their access to specialist opinions, while allowing them to maintain control over the new network's use and development. The long term aim of the CT organization is to provide a means of improving the quality of health care at the point of delivery in low resource settings.
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true, but for 3-regular graphs the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is also presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Van Niekerk, L.; Adams, J. B.; Bate, G. C.; Forbes, A. T.; Forbes, N. T.; Huizinga, P.; Lamberth, S. J.; MacKay, C. F.; Petersen, C.; Taljaard, S.; Weerts, S. P.; Whitfield, A. K.; Wooldridge, T. H.
2013-09-01
Population and development pressures increase the need for proactive strategic management on a regional or country-wide scale - reactively protecting ecosystems on an estuary-by-estuary basis against multiple pressures is 'resource hungry' and not feasible. Proactive management requires a strategic assessment of health so that the most suitable return on investment can be made. A country-wide assessment of the nearly 300 functional South African estuaries examined both key pressures (freshwater inflow modification, water quality, artificial breaching of temporarily open/closed systems, habitat modification and exploitation of living resources) and health state. The method used to assess the type and level of the different pressures, as well as the ecological health status of a large number of estuaries in a data-limited environment, is described in this paper. Key pressures and the ecological condition of estuaries on a national scale are summarised. The template may also be used to provide guidance to coastal researchers attempting to inform management in other developing countries. The assessment was primarily aimed at decision makers both inside and outside the biodiversity sector. A key starting point was to delineate spatially the estuary functional zone (area) for every system. In addition, available data on pressures impacting estuaries on a national scale were collated. A desktop national health assessment, based on an Estuarine Health Index developed for South African ecological water requirement studies, was then applied systematically. National experts, all familiar with the index, evaluated the estuaries in their region. Individual estuarine health assessment scores were then translated into health categories that reflect the overall status of South Africa's estuaries. The results showed that estuaries in the warm-temperate biogeographical zone are healthier than those in the cool-temperate and subtropical zones, largely reflecting the country
Remizov, Ivan D
2009-01-01
In this note, we represent a subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new MVMED framework alternative MVMED (AMVMED), which enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between the two views. We give the detailed solving procedure, which can be divided into two steps. The first step is solving our optimization problem without considering the equal margin posteriors from the two views; then, in the second step, we consider the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of the AMVMED, and comparisons with MVMED are also reported.
Bandoro, J.; Sica, R. J.; Argall, S.
2012-12-01
An important aspect of solar-terrestrial relations is the coupling between the lower and upper atmosphere-ionosphere system. The coupling is evident in the general circulation of the atmosphere, where waves generated in the lower atmosphere play an important role in the dynamics of the upper atmosphere, which feeds back on the lower atmosphere's circulation. Addressing coupling problems requires measurements over the broadest range of heights possible. A recently developed retrieval method for temperature profiles from Rayleigh-scatter lidar measurements using an inversion approach allows the upward extension of the altitude range of temperature by 10 to 15 km over the conventional method, producing the equivalent of increasing the system's power-aperture product by a factor of 4 [1]. The method requires no changes to the lidar's hardware and can thus be applied to the body of existing measurements. In addition, since the uncertainties of the retrieved temperature profile are found by a Monte Carlo error analysis, it is possible to isolate systematic and random uncertainties and model the effect of each one on the final uncertainty product for the temperature profile. This unambiguous separation of uncertainties was not previously possible, as only the propagation of the statistical uncertainties is typically reported. For the Purple Crow Lidar, corrections for saturation (i.e. non-linearity) in the photocount returns, ozone extinction and background removal all contribute to the overall systematic uncertainty. Results of individually varying each systematic correction and the effect on the final temperature uncertainty through Monte Carlo realizations are presented to determine the importance of each one. For example, it was found that treatment of the background correction as a systematic versus statistical uncertainty gave results in agreement with each other. This new method is then applied to measurements obtained by the Purple Crow Lidar's Rayleigh
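The Monte Carlo separation of systematic and statistical uncertainty described above can be sketched generically: perturb each error source independently over many realizations and compare the resulting spreads. The retrieval function and the noise magnitudes below are hypothetical stand-ins, not the actual lidar temperature inversion:

```python
import random
import statistics

def retrieve(signal, background):
    # Hypothetical stand-in for a retrieval: background-corrected
    # signal mapped to a derived quantity.
    return 100.0 * (signal - background)

random.seed(1)
signal, background = 5.0, 1.0
# Realizations perturbing only the photocount (statistical) noise:
stat_runs = [retrieve(signal + random.gauss(0, 0.05), background)
             for _ in range(2000)]
# Realizations perturbing only the background correction (systematic):
sys_runs = [retrieve(signal, background + random.gauss(0, 0.02))
            for _ in range(2000)]
print(statistics.stdev(stat_runs), statistics.stdev(sys_runs))
```

Because each source is varied in isolation, the two spreads quantify the separate contributions to the final uncertainty budget, which is the point of the technique.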
Maximum entropy signal restoration with linear programming
Mastin, G.A.; Hanson, R.J.
1988-05-01
Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
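The key step described above, approximating the nonlinear entropy objective with piecewise linear segments so that a linear-programming solver can handle it, can be sketched for f(x) = -x ln x. The segment counts and interval are illustrative; the paper's restoration additionally bounds the variables and runs a revised simplex solver:

```python
import math

def piecewise_linear(f, a, b, n):
    """Breakpoints approximating f on [a, b] with n linear segments."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    return list(zip(xs, [f(x) for x in xs]))

def interp(points, x):
    """Evaluate the piecewise linear approximation at x."""
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("x outside range")

f = lambda x: -x * math.log(x)   # entropy-style objective term
coarse = piecewise_linear(f, 0.1, 1.0, 4)
fine = piecewise_linear(f, 0.1, 1.0, 64)
x = 0.37
print(abs(interp(coarse, x) - f(x)), abs(interp(fine, x) - f(x)))
```

More segments shrink the approximation error, which is why the paper bounds the variables as a function of the number of segments used.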
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T(x). Under mild conditions, the distribution p(x) having the highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
Apostol, Izydor; Miller, Karen J; Ratto, Joseph; Kelner, Drew N
2009-02-01
Several different techniques suggested by the International Conference on Harmonization (ICH) Q2R1 guideline were used to assess the signal and concentration at the limit of detection (LOD) and limit of quantitation (LOQ) for a purity method. These approaches were exemplified with a capillary isoelectric focusing (cIEF) method, which has been developed to quantify the distribution of the charge isoforms of a monoclonal antibody. The charge isoforms are the result of incomplete posttranslational processing of C-terminal lysine residues of the heavy chain by carboxypeptidase. Results showed no significant discrepancy between LOD/LOQ obtained by the different techniques. Validation experiments corroborated the calculated LOQ. The results indicate that any single technique can provide meaningful values for the LOD and LOQ. Finally, important points to consider when applying these techniques to purity methods are discussed.
Xu, Chengcheng; Wang, Wei; Liu, Pan; Li, Zhibin
2015-12-01
This study aimed to develop a real-time crash risk model with limited data in China by using Bayesian meta-analysis and a Bayesian inference approach. A systematic review was first conducted by using three different Bayesian meta-analyses: the fixed effect meta-analysis, the random effect meta-analysis, and the meta-regression. The meta-analyses provided a numerical summary of the effects of traffic variables on crash risk by quantitatively synthesizing results from previous studies. The random effect meta-analysis and the meta-regression produced more conservative estimates of the effects of traffic variables than the fixed effect meta-analysis. The meta-analysis results were then used as informative priors for developing crash risk models with limited data. The choice among the three meta-analyses significantly affects model fit and prediction accuracy: the model based on meta-regression increases prediction accuracy by about 15% compared to a model developed directly from the limited data. Finally, Bayesian predictive densities analysis was used to identify outliers in the limited data, which further improved the prediction accuracy by 5.0%.
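The idea of using meta-analysis results as informative priors can be sketched with a conjugate normal-normal update: the meta-analysis supplies the prior on a coefficient, and the limited local data pull the posterior toward their own estimate in proportion to their precision. The numbers below are hypothetical, and the actual paper fits full Bayesian crash-risk models rather than this scalar case:

```python
def posterior_normal(prior_mean, prior_var, data_mean, data_var):
    """Conjugate normal-normal update: combine a meta-analysis prior
    with a limited-data estimate, weighting by precision."""
    w = (1 / prior_var) / (1 / prior_var + 1 / data_var)
    mean = w * prior_mean + (1 - w) * data_mean
    var = 1 / (1 / prior_var + 1 / data_var)
    return mean, var

# Hypothetical: meta-analysis prior on a traffic-variable coefficient
# versus an estimate from a small local sample.
m, v = posterior_normal(prior_mean=0.50, prior_var=0.04,
                        data_mean=0.80, data_var=0.16)
print(m, v)
```

With an informative prior, a noisy local estimate is shrunk toward the synthesized evidence, which is how the meta-analysis compensates for the limited data.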
Maximum Spectral Luminous Efficacy of White Light
Murphy, T W
2013-01-01
As lighting efficiency improves, it is useful to understand the theoretical limits to luminous efficacy for light that we perceive as white. Independent of the efficiency with which photons are generated, there exists a spectrally-imposed limit to the luminous efficacy of any source of photons. We find that, depending on the acceptable bandpass and, to a lesser extent, the color temperature of the light, the ideal white light source achieves a spectral luminous efficacy of 250-370 lm/W. This is consistent with previous calculations, but here we explore the maximum luminous efficacy as a function of photopic sensitivity threshold, color temperature, and color rendering index, deriving peak performance as a function of all three parameters. We also present example experimental spectra from a variety of light sources, quantifying the intrinsic efficacy of their spectral distributions.
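The spectrally-imposed limit can be illustrated numerically: weight a source spectrum by the photopic luminosity function and the 683 lm/W peak. The sketch below uses a Gaussian stand-in for V(λ) (peak 555 nm, σ ≈ 42 nm, an assumption; the paper uses the true CIE curve) and a flat spectrum truncated to a 400-700 nm bandpass:

```python
import math

def v_lambda(nm):
    # Gaussian stand-in for the CIE photopic luminosity function V(lambda).
    return math.exp(-0.5 * ((nm - 555.0) / 42.0) ** 2)

def luminous_efficacy(spectrum, lo=400.0, hi=700.0, steps=3000):
    """Spectral luminous efficacy (lm/W) of a spectral power distribution:
    683 * integral(V * S) / integral(S) over the bandpass."""
    d = (hi - lo) / steps
    num = sum(683.0 * v_lambda(lo + i * d) * spectrum(lo + i * d)
              for i in range(steps)) * d
    den = sum(spectrum(lo + i * d) for i in range(steps)) * d
    return num / den

flat = lambda nm: 1.0  # equal power per wavelength across the band
print(luminous_efficacy(flat))
```

A flat truncated spectrum lands somewhat below the paper's 250-370 lm/W range for optimized white sources, because power at the band edges contributes little luminous flux.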
Anonymous
2002-01-01
A systematic approach was adopted to investigate the nutrient limiting factors in gray-brown purple soils and yellow soils derived from limestone in Chongqing, China, and to study balanced fertilization for corn, sweet potato and wheat in rotation. The results showed that N, P and K were deficient in both soils; Cu, Mn, S and Zn in the gray-brown purple soils; and Ca, Mg, Mo and Zn in the yellow soils. Balanced fertilizer application increased yields of corn, sweet potato and wheat by 28.4%, 28.7% and 4.4%, respectively, as compared to the local farmers' practice. The systematic approach can be considered one of the most efficient and reliable methods in fertility studies.
Minimal Length, Friedmann Equations and Maximum Density
Awad, Adel
2014-01-01
Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al. [hep-th/0501055, hep-th/0609128] have shown the emergence of the Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation [hep-th/0609128] of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...
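For orientation, the Friedmann equations that such thermodynamic derivations recover, in their standard form with $H = \dot{a}/a$ (quoted from standard cosmology, not from the paper's generalized entropy-area law):

```latex
H^2 + \frac{k}{a^2} = \frac{8\pi G}{3}\,\rho, \qquad
\dot{H} - \frac{k}{a^2} = -4\pi G\,(\rho + p)
```

The paper's generalized entropy-area law modifies the right-hand sides, which is what introduces the maximum energy density.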
Barboza, Luciano Vitoria [Sul-riograndense Federal Institute for Education, Science and Technology (IFSul), Pelotas, RS (Brazil)], E-mail: luciano@pelotas.ifsul.edu.br
2009-07-01
This paper presents an overview of the maximum loadability problem and aims to study the main factors that limit this loadability. Specifically, this study focuses on determining which electric system buses directly influence the power demand supply. The proposed approach uses the conventional maximum loadability method modelled as an optimization problem. The solution of this model is performed using the Interior Point methodology. As a consequence of this solution method, the Lagrange multipliers are used as parameters that identify the probable 'bottlenecks' in the electric power system. The study also shows the relationship between the Lagrange multipliers and the cost function in the Interior Point optimization, interpreted as sensitivity parameters. In order to illustrate the proposed methodology, the approach was applied to an IEEE test system and, to assess its performance, a real equivalent electric system from the South-Southeast region of Brazil was simulated. (author)
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randi\\'c. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\\leq t \\leq \\lfloor\\frac{n-1}{2}\\rfloor$. In this paper, the maximum Kirchhoff index of cacti is characterized, as well...
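The Kirchhoff index defined above can be computed from the nonzero Laplacian eigenvalues via the standard identity Kf(G) = n · Σ 1/μ_i. A small sketch (assuming NumPy is available; the 4-cycle C4, itself a one-cycle cactus, serves as the example):

```python
import numpy as np

def kirchhoff_index(adj):
    """Kf(G) = n * sum of reciprocals of the nonzero Laplacian eigenvalues."""
    a = np.array(adj, dtype=float)
    lap = np.diag(a.sum(axis=1)) - a
    mu = np.linalg.eigvalsh(lap)          # Laplacian spectrum, ascending
    nonzero = [m for m in mu if m > 1e-9]  # drop the zero eigenvalue
    return len(a) * sum(1.0 / m for m in nonzero)

# C4: the 4-cycle, a cactus with one cycle. Its Laplacian spectrum is
# {0, 2, 2, 4}, so Kf = 4 * (1/2 + 1/2 + 1/4) = 5.
c4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
print(kirchhoff_index(c4))
```

The same value follows from summing resistance distances directly: in C4 each adjacent pair has resistance 3/4 and each opposite pair has resistance 1.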
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus is on second order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.
24 CFR 941.306 - Maximum project cost.
2010-04-01
...) project costs that are subject to the TDC limit (i.e., Housing Construction Costs and Community Renewal Costs); and (2) project costs that are not subject to the TDC limit (i.e., Additional Project Costs... expended for the project, and this becomes the maximum project cost for purposes of the ACC. (b) TDC...
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow, and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages, via prices, the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Quan, H T
2014-06-01
We study the maximum efficiency of a heat engine based on a small system. It is revealed that, due to the finiteness of the system, irreversibility may arise when the working substance contacts a heat reservoir. As a result, there is a working-substance-dependent correction to the Carnot efficiency. We derive a general and simple expression for the maximum efficiency of a Carnot cycle heat engine in terms of the relative entropy. This maximum efficiency approaches the Carnot efficiency asymptotically when the size of the working substance increases to the thermodynamic limit. Our study extends Carnot's result of the maximum efficiency to an arbitrary working substance and elucidates the subtlety of thermodynamic laws in small systems.
Shen, Hua
2016-10-19
A maximum-principle-satisfying space-time conservation element and solution element (CE/SE) scheme is constructed to solve a reduced five-equation model coupled with the stiffened equation of state for compressible multifluids. We first derive a sufficient condition for CE/SE schemes to satisfy the maximum principle when solving a general conservation law. We then introduce a slope limiter to ensure the sufficient condition, which is applicable to both central and upwind CE/SE schemes. Finally, we implement the upwind maximum-principle-satisfying CE/SE scheme to solve the volume-fraction-based five-equation model for compressible multifluids. Several numerical examples are carried out to carefully examine the accuracy, efficiency, conservativeness and maximum-principle-satisfying property of the proposed approach.
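The role of a slope limiter in enforcing a maximum principle can be illustrated with the classic minmod limiter, a generic example (the paper's specific limiter for CE/SE schemes may differ): cell-wise reconstruction slopes are clipped so that the reconstruction introduces no new extrema.

```python
def minmod(a, b):
    """Classic minmod limiter: zero when the one-sided slopes disagree
    in sign (a local extremum), otherwise the smaller-magnitude slope."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_slopes(u):
    """Limited slopes for interior cells of a 1-D array of cell averages."""
    return [minmod(u[i] - u[i - 1], u[i + 1] - u[i])
            for i in range(1, len(u) - 1)]

u = [0.0, 1.0, 1.2, 0.5, 0.5]  # cell averages with a local maximum
print(limited_slopes(u))
```

At the local maximum (the 1.2 cell) the slope is limited to zero, so the reconstructed solution cannot overshoot the surrounding cell averages, which is exactly the property the sufficient condition in the paper guarantees.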
Simonov, Alexandr N.
2014-08-19
Many electrode processes that approach the "reversible" (infinitely fast) limit under voltammetric conditions have been inappropriately analyzed by comparison of experimental data and theory derived from the "quasi-reversible" model. Simulations based on "reversible" and "quasi-reversible" models have been fitted to an extensive series of a.c. voltammetric experiments undertaken at macrodisk glassy carbon (GC) electrodes for oxidation of ferrocene (Fc0/+) in CH3CN (0.10 M (n-Bu)4NPF6) and reduction of [Ru(NH3)6]3+ and [Fe(CN)6]3- in 1 M KCl aqueous electrolyte. The confidence with which parameters such as standard formal potential (E0), heterogeneous electron transfer rate constant at E0 (k0), charge transfer coefficient (α), uncompensated resistance (Ru), and double layer capacitance (CDL) can be reported using the "quasi-reversible" model has been assessed using bootstrapping and parameter sweep (contour plot) techniques. Underparameterization, such as that which occurs when modeling CDL with a potential-independent value, results in a less than optimal level of experiment-theory agreement. Overparameterization may improve the agreement but easily results in generation of physically meaningful but incorrect values of the recovered parameters, as is the case with the very fast Fc0/+ and [Ru(NH3)6]3+/2+ processes. In summary, for fast electrode kinetics approaching the "reversible" limit, it is recommended that the "reversible" model be used for theory-experiment comparisons, with only E0, Ru, and CDL being quantified and a lower limit of k0 being reported; e.g., k0 ≥ 9 cm s-1 for the Fc0/+ process. © 2014 American Chemical Society.
Singh, Gurmeet; Nandi, Apurba; Gadre, Shridhar R.
2016-03-01
A pragmatic method based on the molecular tailoring approach (MTA) for accurately estimating the complete basis set (CBS) limit at Møller-Plesset second order perturbation (MP2) theory for large molecular clusters with limited computational resources is developed. It is applied to water clusters, (H2O)n (n = 7, 8, 10, 16, 17, and 25), optimized employing the aug-cc-pVDZ (aVDZ) basis set. Binding energies (BEs) of these clusters are estimated at the MP2/aug-cc-pVNZ (aVNZ) [N = T, Q, and 5 (whenever possible)] levels of theory employing the grafted MTA (GMTA) methodology and are found to lie within 0.2 kcal/mol of the corresponding full-calculation MP2 BE, wherever available. The results are extrapolated to the CBS limit using a three-point formula. The GMTA-MP2 calculations are feasible on off-the-shelf hardware and show around 50%-65% savings of computational time. The methodology has potential for application to molecular clusters containing ~100 atoms.
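A three-point CBS extrapolation can be sketched with the common exponential form E(N) = E_CBS + A·e^(-B·N), fitted through energies at three consecutive cardinal numbers. This generic form is an assumption for illustration; the abstract does not specify which three-point formula the paper uses:

```python
import math

def cbs_three_point(e2, e3, e4):
    """Extrapolate E(N) = E_cbs + A*exp(-B*N) from energies at three
    consecutive cardinal numbers N = 2, 3, 4."""
    d1, d2 = e2 - e3, e3 - e4
    r = d2 / d1                      # equals exp(-B)
    return e4 - d2 * r / (1.0 - r)   # subtract the residual A*exp(-4B)

# Synthetic check: generate energies from known (hypothetical) parameters
# and recover the CBS limit exactly.
e_cbs, a, b = -76.40, 0.50, 1.20
e = {n: e_cbs + a * math.exp(-b * n) for n in (2, 3, 4)}
print(cbs_three_point(e[2], e[3], e[4]))
```

The geometric ratio of successive energy differences isolates exp(-B), after which the remaining basis-set-incompleteness term can be subtracted in closed form.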
Chapman, Kathryn; Sewell, Fiona; Allais, Linda; Delongeas, Jean-Luc; Donald, Elizabeth; Festag, Matthias; Kervyn, Sophie; Ockert, Deborah; Nogues, Vicente; Palmer, Helen; Popovic, Marija; Roosen, Wendy; Schoenmakers, Ankie; Somers, Kevin; Stark, Claudia; Stei, Peter; Robinson, Sally
2013-10-01
Short term toxicity studies are conducted in animals to provide information on major adverse effects, typically at the maximum tolerated dose (MTD). Such studies are important from a scientific and ethical perspective, as they are used to make decisions on progression of potential candidate drugs and to set dose levels for subsequent regulatory studies. The MTD is usually determined by parameters such as clinical signs, reductions in body weight and food consumption. However, these assessments are often subjective and there are no published criteria to guide the selection of an appropriate MTD. Even where an objective measurement exists, such as body weight loss (BWL), there is no agreement on what level constitutes an MTD. A global initiative including 15 companies, led by the National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs), has shared data on BWL in toxicity studies to assess the impact on the animal and the study outcome. Information on 151 studies has been used to develop an alert/warning system for BWL in short term toxicity studies. The data analysis supports BWL limits for short term dosing (up to 7 days) of 10% for rat and dog and 6% for non-human primates (NHPs).
Luan Yihui
2009-09-01
Background: Many aspects of biological function can be modeled by biological networks, such as protein interaction networks, metabolic networks, and gene coexpression networks. Studying the statistical properties of these networks in turn allows us to infer biological function. Complex statistical network models can potentially describe the networks more accurately, but it is not clear whether such complex models are better suited to find biologically meaningful subnetworks. Results: Recent studies have shown that the degree distribution of the nodes is not an adequate statistic in many molecular networks. We sought to extend this statistic with 2nd and 3rd order degree correlations and developed a pseudo-likelihood approach to estimate the parameters. The approach was used to analyze the MIPS and BIOGRID yeast protein interaction networks, and two yeast coexpression networks. We showed that 2nd order degree correlation information gave better predictions of gene interactions in both protein interaction and gene coexpression networks. However, in the biologically important task of predicting functionally homogeneous modules, degree correlation information performs marginally better in the case of the MIPS and BIOGRID protein interaction networks, but worse in the case of gene coexpression networks. Conclusion: Our use of dK models showed that incorporating degree correlations could increase predictive power in some contexts, albeit sometimes marginally, but, in all contexts, the use of third-order degree correlations decreased accuracy. However, it is possible that other parameter estimation methods, such as maximum likelihood, will show the usefulness of incorporating 2nd and 3rd degree correlations in predicting functionally homogeneous modules.
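The 2nd order degree correlation statistic discussed above is, in essence, the joint distribution of endpoint degrees across edges. A minimal sketch counting degree-degree pairs in an edge list (pure Python; fitting an actual dK model, as in the paper, is far more involved):

```python
from collections import Counter

def joint_degree_counts(edges):
    """Count unordered (deg(u), deg(v)) pairs over all edges: the
    second-order degree statistic underlying dK-2 network models."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return Counter(tuple(sorted((deg[u], deg[v]))) for u, v in edges)

# A 4-node path a-b-c-d: end nodes have degree 1, middle nodes degree 2,
# so the edge set contains two (1,2) pairs and one (2,2) pair.
edges = [("a", "b"), ("b", "c"), ("c", "d")]
print(joint_degree_counts(edges))
```

Two networks with identical degree distributions can differ in this joint statistic, which is why it carries extra predictive information about gene interactions.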
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Linnet, Kristian
2005-01-01
Bootstrap, HPLC, limit of blank, limit of detection, non-parametric statistics, type I and II errors.
L. M. Miller
2011-02-01
The availability of wind power for renewable energy extraction is ultimately limited by how much kinetic energy is generated by natural processes within the Earth system and by fundamental limits of how much of the wind power can be extracted. Here we use these considerations to provide a maximum estimate of wind power availability over land. We use several different methods. First, we outline the processes associated with wind power generation and extraction with a simple power transfer hierarchy based on the assumption that available wind power will not geographically vary with increased extraction for an estimate of 68 TW. Second, we set up a simple momentum balance model to estimate maximum extractability which we then apply to reanalysis climate data, yielding an estimate of 21 TW. Third, we perform general circulation model simulations in which we extract different amounts of momentum from the atmospheric boundary layer to obtain a maximum estimate of how much power can be extracted, yielding 18–34 TW. These three methods consistently yield maximum estimates in the range of 18–68 TW and are notably less than recent estimates that claim abundant wind power availability. Furthermore, we show with the general circulation model simulations that some climatic effects at maximum wind power extraction are similar in magnitude to those associated with a doubling of atmospheric CO2. We conclude that in order to understand fundamental limits to renewable energy resources, as well as the impacts of their utilization, it is imperative to use a "top-down" thermodynamic Earth system perspective, rather than the more common "bottom-up" engineering approach.
Efficiency at Maximum Power of Interacting Molecular Machines
Golubeva, Natalia; Imparato, Alberto
2012-01-01
We investigate the efficiency of systems of molecular motors operating at maximum power. We consider two models of kinesin motors on a microtubule: for both the simplified and the detailed model, we find that the many-body exclusion effect enhances the efficiency at maximum power of the many-motor system, with respect to the single-motor case. Remarkably, we find that this effect occurs in a limited region of the system parameters, compatible with the biologically relevant range.
Maximum frequency of the decametric radiation from Jupiter
Barrow, C. H.; Alexander, J. K.
1980-01-01
The upper frequency limits of Jupiter's decametric radio emission are found to be essentially the same when observed from the earth or, with considerably higher sensitivity, from the Voyager spacecraft close to Jupiter. This suggests that the maximum frequency is a real cut-off corresponding to a maximum gyrofrequency of about 38-40 MHz at Jupiter. It no longer appears to be necessary to specify different cut-off frequencies for the Io and non-Io emission as the maximum frequencies are roughly the same in each case.
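The stated cut-off maps directly onto an electron gyrofrequency through f_ce = eB/(2π m_e). As a quick sanity check of the quoted 38-40 MHz figure, a minimal sketch using only this formula and CODATA constants (not code from the paper):

```python
import math

E_CHARGE = 1.602176634e-19     # elementary charge, C
M_ELECTRON = 9.1093837015e-31  # electron mass, kg

def cyclotron_frequency_hz(b_tesla):
    """Non-relativistic electron cyclotron (gyro) frequency f_ce = eB / (2*pi*m_e)."""
    return E_CHARGE * b_tesla / (2.0 * math.pi * M_ELECTRON)

def field_from_cutoff_tesla(f_hz):
    """Invert f_ce to get the magnetic field implied by an emission cutoff."""
    return 2.0 * math.pi * M_ELECTRON * f_hz / E_CHARGE

for f_mhz in (38.0, 40.0):
    b = field_from_cutoff_tesla(f_mhz * 1e6)
    print(f"{f_mhz} MHz cutoff -> B = {b * 1e4:.1f} G")  # 1 T = 10^4 G
```

A 38-40 MHz cutoff corresponds to a field of roughly 13.6-14.3 gauss at the emission site.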
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \\sqrt{n})$ time due to Micali and Vazirani \\cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Hopcroft and Karp \\cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \\log n)$ by Goel, Kapralov and Khanna (STOC 2010) \\cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \\log^2 n)$ time, thereby obtaining a significant improvement over \\cite{MV80}. We use a Markov chain similar to the \\emph{hard-core model} for Glauber Dynamics with \\emph{fugacity} parameter $\\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \\cite{V99}, to design a faster algorithm...
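For context, the deterministic baseline that such randomized algorithms are compared against is augmenting-path matching. A minimal sketch of Kuhn's O(V·E) algorithm for the bipartite case (illustrative only; this is the classical method, not the paper's MCMC algorithm):

```python
def max_bipartite_matching(adj, n_left, n_right):
    """Classical augmenting-path matching (Kuhn's algorithm), O(V*E).
    adj[u] lists the right-side neighbours of left vertex u."""
    match_right = [-1] * n_right  # match_right[v] = left vertex matched to v, or -1

    def try_augment(u, visited):
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            # v is free, or its current partner can be re-matched elsewhere
            if match_right[v] == -1 or try_augment(match_right[v], visited):
                match_right[v] = u
                return True
        return False

    matching_size = 0
    for u in range(n_left):
        if try_augment(u, set()):
            matching_size += 1
    return matching_size, match_right

# A 3x3 bipartite graph that admits a perfect matching
size, _ = max_bipartite_matching({0: [0, 1], 1: [0], 2: [2]}, 3, 3)
print(size)  # 3
```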
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π+ → e+ ν (γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world-average experimental precision of 3.3 × 10^-3 to 5 × 10^-4 using a stopped-beam approach. During runs in 2008-10, PEN has acquired over 2 × 10^7 πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ ν, π+ → μ+ ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
Maximum likelihood estimates of pairwise rearrangement distances.
Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R
2017-06-21
Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into three categories: empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a group-theoretic approach to modelling inversions introduced recently. This MLE functions as a corrected distance: in particular, we show that because of the way sequences of inversions interact with each other, it is quite possible for the minimal distance and the MLE distance to differently order the distances of two genomes from a third. The second aspect tackles the problem of accounting for the symmetries of circular arrangements. While, generally, a frame of reference is locked and all computation made accordingly, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, for which examples are offered.
Filtering Additive Measurement Noise with Maximum Entropy in the Mean
Gzyl, Henryk
2007-01-01
The purpose of this note is to show how the method of maximum entropy in the mean (MEM) may be used to improve parametric estimation when the measurements are corrupted by a large level of noise. The method is developed in the context of a concrete example: the estimation of the parameter of an exponential distribution. We compare the performance of our method with the Bayesian and maximum likelihood approaches.
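For the exponential example, the plain maximum likelihood estimate has a closed form, λ̂ = n/Σx, and additive measurement noise biases it, which is exactly the situation MEM is meant to address. A hedged sketch of the baseline (not the note's MEM method itself):

```python
import random

def exponential_mle_rate(samples):
    """Maximum likelihood estimate of the rate of an exponential distribution:
    maximizing n*log(lambda) - lambda*sum(x) gives lambda_hat = n / sum(x)."""
    return len(samples) / sum(samples)

random.seed(0)
true_rate = 2.0
clean = [random.expovariate(true_rate) for _ in range(100_000)]
print(round(exponential_mle_rate(clean), 2))  # close to 2.0

# Additive noise shifts the sample mean: E[x + eps] = 1/lambda + E[eps],
# so the naive MLE is biased low -- the regime the MEM approach targets.
noisy = [x + abs(random.gauss(0.0, 0.5)) for x in clean]
print(exponential_mle_rate(noisy) < true_rate)  # True: biased low under noise
```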
Semidefinite Programming for Approximate Maximum Likelihood Sinusoidal Parameter Estimation
2009-01-01
We study the convex optimization approach for parameter estimation of several sinusoidal models, namely, single complex/real tone, multiple complex sinusoids, and single two-dimensional complex tone, in the presence of additive Gaussian noise. The major difficulty for optimally determining the parameters is that the corresponding maximum likelihood (ML) estimators involve finding the global minimum or maximum of multimodal cost functions because the frequencies are nonlinear in the observed s...
马东升; 胡佑德; 戴凤智
2002-01-01
In an actual control system, it is often difficult to locate faults based only on the outside fault phenomena observed in a faulty system, so fault diagnosis from outside fault phenomena is considered. Based on the theory of fuzzy recognition and fault diagnosis, this method depends only on experience and statistical data to set up a fuzzy query relationship between the outside phenomena (fault characters) and the fault sources (fault patterns). From this relationship the most probable fault sources can be obtained, to attain the goal of quick diagnosis. Based on the above approach, the standard fuzzy relationship matrix is stored in the computer as a system database, and experimental data are given to show the fault diagnosis results. The important parameters can be sampled and analyzed on line; when faults occur, they can be found, an alarm is given, and the controller output is regulated.
Hu, Jian Zhi; Rommereim, Donald N.; Wind, Robert A.; Minard, Kevin R.; Sears, Jesse A.
2006-11-01
A simple approach is reported that yields high-resolution, high-sensitivity ¹H NMR spectra of biofluids with limited mass supply. This is achieved by spinning a capillary sample tube containing a biofluid at the magic angle at a frequency of about 80 Hz. A 2D pulse sequence called ¹H PASS is then used to produce a high-resolution ¹H NMR spectrum that is free from magnetic susceptibility induced line broadening. With this new approach a high-resolution ¹H NMR spectrum of biofluids with a volume less than 1.0 µl can be easily achieved at a magnetic field strength as low as 7.05 T. Furthermore, the methodology facilitates easy sample handling, i.e., the samples can be directly collected into inexpensive and disposable capillary tubes at the site of collection and subsequently used for NMR measurements. In addition, slow magic angle spinning improves magnetic field shimming and is especially suitable for high-throughput investigations. In this paper first results are shown, obtained in a magnetic field of 7.05 T on urine samples collected from mice using a modified commercial NMR probe.
Yue, Yu Ryan; Wang, Xiao-Feng
2016-05-10
This paper is motivated by a retrospective study of the impact of vitamin D deficiency on clinical outcomes for critically ill patients in multi-center critical care units. The primary predictors of interest, vitamin D2 and D3 levels, are censored at a known detection limit. Within the context of generalized linear mixed models, we investigate statistical methods to handle multiple censored predictors in the presence of auxiliary variables. A Bayesian joint modeling approach is proposed to fit the complex heterogeneous multi-center data, in which the data information is fully used to estimate parameters of interest. Efficient Markov chain Monte Carlo algorithms are specifically developed depending on the nature of the response. Simulation studies demonstrate that the proposed Bayesian approach outperforms other existing methods. An application to the data set from the vitamin D deficiency study is presented. Possible extensions of the method regarding the absence of auxiliary variables, semiparametric models, as well as the type of censoring are also discussed.
MLDS: Maximum Likelihood Difference Scaling in R
Kenneth Knoblauch
2008-01-01
The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval), (a,b) or (c,d), is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
Maximum Segment Sum, Monadically (distilled tutorial)
Jeremy Gibbons
2011-09-01
The maximum segment sum problem is to compute, given a list of integers, the largest of the sums of the contiguous segments of that list. This problem specification maps directly onto a cubic-time algorithm; however, there is a very elegant linear-time solution too. The problem is a classic exercise in the mathematics of program construction, illustrating important principles such as calculational development, pointfree reasoning, algebraic structure, and datatype-genericity. Here, we take a sideways look at the datatype-generic version of the problem in terms of monadic functional programming, instead of the traditional relational approach; the presentation is tutorial in style, and leavened with exercises for the reader.
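The elegant linear-time solution alluded to is usually credited to Kadane. A minimal imperative rendering for reference (the paper itself develops a monadic, datatype-generic derivation instead):

```python
def max_segment_sum(xs):
    """Kadane's linear-time algorithm: track the best sum of a segment
    ending at the current position, and the best sum seen overall.
    The empty segment (sum 0) is allowed, matching the usual specification."""
    best_ending_here = best = 0
    for x in xs:
        best_ending_here = max(0, best_ending_here + x)
        best = max(best, best_ending_here)
    return best

# Bentley's classic example: the best segment is [59, 26, -53, 58, 97]
print(max_segment_sum([31, -41, 59, 26, -53, 58, 97, -93, -23, 84]))  # 187
```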
Maximum Information and Quantum Prediction Algorithms
McElwaine, J N
1997-01-01
This paper describes an algorithm for selecting a consistent set within the consistent histories approach to quantum mechanics and investigates its properties. The algorithm uses a maximum information principle to select from among the consistent sets formed by projections defined by the Schmidt decomposition. The algorithm unconditionally predicts the possible events in closed quantum systems and ascribes probabilities to these events. A simple spin model is described and a complete classification of all exactly consistent sets of histories formed from Schmidt projections in the model is proved. This result is used to show that for this example the algorithm selects a physically realistic set. Other tentative suggestions in the literature for set selection algorithms using ideas from information theory are discussed.
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest from a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches focus only on discriminating moving objects by background subtraction, whether the objects of interest are moving or stationary. In this paper, we propose layer segmentation to detect both moving and stationary target objects from surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers with features, which are collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the training models are used for the discrimination of target objects in surveillance video. Our experimental results are presented in terms of the success rate and the segmenting precision.
Greco, Susan L; Belova, Anna; Huang, Jin
2016-09-01
We developed an approach to estimate the public health benefits resulting from transportation projects or environmental actions that reduce mobile source fine particulate matter (PM2.5 ) in select urban areas worldwide when input data are limited or when a rapid order-of-magnitude assessment is needed. For a given reduction in direct PM2.5 emissions, we can use this approach to quantify (1) the subsequent reduction in ambient primary PM2.5 concentration in the urban area; (2) the public health benefits associated with mortality risk reductions, measured in terms of avoided premature deaths; and (3) the economic value of the reduced mortality risk. To illustrate our approach, we estimated the impact of a 100-metric-ton reduction in primary PM2.5 mobile source emissions in the year 2010 for 42 large, global cities. Our estimates of public health benefits and their economic value varied by city, as did the sensitivity to key assumptions and inputs. The estimated number of premature deaths avoided per 100-metric-ton reduction in PM2.5 emissions ranged from 12 to 202. City-level variability in these estimates was driven by the magnitude of the reduction in ambient PM2.5 concentration, the size of the urban population, and the baseline PM2.5 concentration. The economic value of mortality risk reductions per 100-metric-ton reduction in PM2.5 emissions ranged from $2 million to $328 million in 2010 U.S. dollars. Income per capita was the most important driver of the variability in the economic values across countries.
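The mortality step described in (2) is commonly implemented with a log-linear concentration-response function. A sketch with purely illustrative inputs (the β, population, baseline mortality rate, and value-of-statistical-life figures below are hypothetical, not values from the study):

```python
import math

def avoided_premature_deaths(delta_pm25, population, baseline_mortality_rate, beta):
    """Standard log-linear health impact function used in PM2.5 burden studies:
    avoided deaths = pop * y0 * (1 - exp(-beta * delta_C)).
    All inputs here are illustrative, not values from the study."""
    return population * baseline_mortality_rate * (1.0 - math.exp(-beta * delta_pm25))

# Hypothetical city: 5 million people, baseline mortality 0.8% per year,
# beta ~ 0.006 per ug/m3 (roughly a 0.6% mortality change per ug/m3),
# and a 0.5 ug/m3 reduction in ambient primary PM2.5.
deaths = avoided_premature_deaths(0.5, 5_000_000, 0.008, 0.006)
print(round(deaths, 1))  # 119.8

# Monetized with a hypothetical value of statistical life of $1.5 million:
print(round(deaths * 1.5e6 / 1e6, 1), "million USD")  # 179.7 million USD
```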
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
Sylvetsky, Nitai; Peterson, Kirk A.; Karton, Amir; Martin, Jan M. L.
2016-06-01
In the context of high-accuracy computational thermochemistry, the valence coupled cluster with all singles and doubles (CCSD) correlation component of molecular atomization energies presents the most severe basis set convergence problem, followed by the (T) component. In the present paper, we make a detailed comparison, for an expanded version of the W4-11 thermochemistry benchmark, between, on the one hand, orbital-based CCSD/AV{5,6}Z + d and CCSD/ACV{5,6}Z extrapolation, and on the other hand CCSD-F12b calculations with cc-pVQZ-F12 and cc-pV5Z-F12 basis sets. This latter basis set, now available for H-He, B-Ne, and Al-Ar, is shown to be very close to the basis set limit. Apparent differences (which can reach 0.35 kcal/mol for systems like CCl4) between orbital-based and CCSD-F12b basis set limits disappear if basis sets with additional radial flexibility, such as ACV{5,6}Z, are used for the orbital calculation. Counterpoise calculations reveal that, while total atomization energies with V5Z-F12 basis sets are nearly free of BSSE, orbital calculations have significant BSSE even with AV(6 + d)Z basis sets, leading to non-negligible differences between raw and counterpoise-corrected extrapolated limits. This latter problem is greatly reduced by switching to ACV{5,6}Z core-valence basis sets, or simply adding an additional zeta to just the valence orbitals. Previous reports that all-electron approaches like HEAT (high-accuracy extrapolated ab-initio thermochemistry) lead to different CCSD(T) limits than "valence limit + CV correction" approaches like Feller-Peterson-Dixon and Weizmann-4 (W4) theory can be rationalized in terms of the greater radial flexibility of core-valence basis sets. For (T) corrections, conventional CCSD(T)/AV{Q,5}Z + d calculations are found to be superior to scaled or extrapolated CCSD(T)-F12b calculations of similar cost. For a W4-F12 protocol, we recommend obtaining the Hartree-Fock and valence CCSD components from CCSD-F12b/cc-pV{Q,5}Z-F12
Maximum-Entropy Inference with a Programmable Annealer
Chancellor, Nicholas; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A
2015-01-01
Optimisation problems in science and engineering typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this approach maximises the likelihood that the solution found is correct. An alternative approach is to make use of prior statistical information about the noise in conjunction with Bayes's theorem. The maximum entropy solution to the problem then takes the form of a Boltzmann distribution over the ground and excited states of the cost function. Here we use a programmable Josephson junction array for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that maximum entropy decoding at finite temperature can in certain cases give competitive and even slightly better bit-error-rates than the maximum likelihood approach at zero temperature, confirming that useful information can be extracted from the excited states of the annealing...
Maximum-Entropy Meshfree Method for Compressible and Near-Incompressible Elasticity
Ortiz, A; Puso, M A; Sukumar, N
2009-09-04
Numerical integration errors and volumetric locking in the near-incompressible limit are two outstanding issues in Galerkin-based meshfree computations. In this paper, we present a modified Gaussian integration scheme on background cells for meshfree methods that alleviates errors in numerical integration and ensures patch test satisfaction to machine precision. Secondly, a locking-free small-strain elasticity formulation for meshfree methods is proposed, which draws on developments in assumed strain methods and nodal integration techniques. In this study, maximum-entropy basis functions are used; however, the generality of our approach permits the use of any meshfree approximation. Various benchmark problems in two-dimensional compressible and near-incompressible small strain elasticity are presented to demonstrate the accuracy and optimal convergence in the energy norm of the maximum-entropy meshfree formulation.
[Study on the maximum entropy principle and population genetic equilibrium].
Zhang, Hong-Li; Zhang, Hong-Yan
2006-03-01
A general mathematical model of population genetic equilibrium at one locus was constructed based on the maximum entropy principle by WANG Xiao-Long et al. They proved that the maximum solution of the model was exactly the frequency distribution of a population at Hardy-Weinberg genetic equilibrium. This suggests that a population reaches Hardy-Weinberg genetic equilibrium when the genotype entropy of the population reaches its maximal possible value, and that the frequency distribution of maximum entropy is equivalent to the distribution under the Hardy-Weinberg equilibrium law at one locus. They further assumed that the frequency distribution of maximum entropy was equivalent to all genetic equilibrium distributions. This is incorrect, however. The frequency distribution of maximum entropy is equivalent only to the distribution of Hardy-Weinberg equilibrium with respect to one locus or several limited loci. The case of limited loci is proved in this paper. Finally, we also discuss an example where the maximum entropy principle is not equivalent to other genetic equilibria.
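The one-locus equivalence can be checked numerically: with the allele frequency held fixed, maximize the entropy of the distribution over ordered allele pairs (so each of the heterozygote's two orderings carries half its frequency). The maximizer is the Hardy-Weinberg value p². A sketch, not taken from the paper:

```python
import math

def ordered_genotype_entropy(f_aa, p):
    """Shannon entropy over ORDERED allele pairs (AA, Aa, aA, aa) when the
    major-allele frequency is fixed at p and genotype AA has frequency f_aa."""
    probs = [f_aa, p - f_aa, p - f_aa, 1 - 2 * p + f_aa]
    return -sum(q * math.log(q) for q in probs if q > 0)

p = 0.3
lo, hi = max(0.0, 2 * p - 1), p
# Grid search for the entropy-maximizing AA frequency on the feasible interval
grid = [lo + (hi - lo) * k / 20_000 for k in range(1, 20_000)]
best = max(grid, key=lambda x: ordered_genotype_entropy(x, p))
print(round(best, 4), round(p * p, 4))  # both ~0.09, the Hardy-Weinberg value
```

Setting the derivative of the entropy to zero gives (p - x)² = x(1 - 2p + x), whose solution is x = p², i.e. the Hardy-Weinberg genotype frequencies p², 2p(1-p), (1-p)².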
Local image statistics: maximum-entropy constructions and perceptual salience.
Victor, Jonathan D; Conte, Mary M
2012-07-01
The space of visual signals is high-dimensional and natural visual images have a highly complex statistical structure. While many studies suggest that only a limited number of image statistics are used for perceptual judgments, a full understanding of visual function requires analysis not only of the impact of individual image statistics, but also, how they interact. In natural images, these statistical elements (luminance distributions, correlations of low and high order, edges, occlusions, etc.) are intermixed, and their effects are difficult to disentangle. Thus, there is a need for construction of stimuli in which one or more statistical elements are introduced in a controlled fashion, so that their individual and joint contributions can be analyzed. With this as motivation, we present algorithms to construct synthetic images in which local image statistics--including luminance distributions, pair-wise correlations, and higher-order correlations--are explicitly specified and all other statistics are determined implicitly by maximum-entropy. We then apply this approach to measure the sensitivity of the human visual system to local image statistics and to sample their interactions.
Continuous maximum flow segmentation method for nanoparticle interaction analysis.
Marak, L; Tankyevych, O; Talbot, H
2011-10-01
In recent years, tomographic three-dimensional reconstruction approaches using electrons rather than X-rays have become popular. Such images, produced with a transmission electron microscope, make it possible to image nanometre-scale materials in three dimensions. However, they are also noisy, limited in contrast, and most often have a very poor resolution along the axis of the electron beam. The analysis of images stemming from such modalities, whether fully or semi-automated, is therefore more complicated. In particular, segmentation of objects is difficult. In this paper, we propose to use the continuous maximum flow segmentation method based on a globally optimal minimal surface model. The use of this fully automated segmentation and filtering procedure is illustrated on two different nanoparticle samples, with comparisons to other classical segmentation methods. The main objectives are the measurement of the attraction rate of polystyrene beads to silica nanoparticles (for the first sample) and the interaction of silica nanoparticles with large unilamellar liposomes (for the second sample). We also illustrate how precise measurements such as contact angles can be performed.
A Maximum Entropy Approach to Identifying Sentence Boundaries
Reynar, J C; Reynar, Jeffrey C.; Ratnaparkhi, Adwait
1997-01-01
We present a trainable model for identifying sentence boundaries in raw text. Given a corpus annotated with sentence boundaries, our model learns to classify each occurrence of ., ?, and ! as either a valid or invalid sentence boundary. The training procedure requires no hand-crafted rules, lexica, part-of-speech tags, or domain-specific information. The model can therefore be trained easily on any genre of English, and should be trainable on any other Roman-alphabet language. Performance is comparable to or better than the performance of similar systems, but we emphasize the simplicity of retraining for new domains.
Maximum energy yield approach for CPV tracker design
Aldaiturriaga, E.; González, O.; Castro, M.
2012-10-01
Foton HC Systems has developed a new CPV tracker model, specially focused on its tracking efficiency and the effect of the tracker control techniques on the final energy yield of the system. This paper presents the theoretical work carried out to determine the energy yield of a CPV system, and illustrates the steps involved in calculating and understanding how the energy consumed for tracking trades off against tracker pointing errors. Additionally, the expressions to compute the optimum parameters are presented and discussed.
The Maximum Patch Method for Directional Dark Matter Detection
Henderson, Shawn; Fisher, Peter
2008-01-01
Present and planned dark matter detection experiments search for WIMP-induced nuclear recoils in poorly known background conditions. In this environment, the maximum gap statistical method provides a way of setting more sensitive cross section upper limits by incorporating known signal information. We give a recipe for the numerical calculation of upper limits for planned directional dark matter detection experiments that will measure both recoil energy and angle, based on the gaps between events in two-dimensional phase space.
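As a minimal illustration of the statistic underlying this method, the sketch below computes the maximum gap between events mapped onto the unit interval (e.g. via the expected-signal CDF); the event values are made up for illustration, and the subsequent limit-setting step is not shown.

```python
def maximum_gap(events, lo=0.0, hi=1.0):
    """Largest interval between consecutive events, including the
    boundaries of the observation window [lo, hi]."""
    edges = [lo] + sorted(events) + [hi]
    return max(b - a for a, b in zip(edges, edges[1:]))

# hypothetical events in signal-CDF coordinates
gap = maximum_gap([0.2, 0.25, 0.9])  # largest empty interval
```

A large maximum gap relative to the expected signal density disfavors large cross sections, which is what makes the statistic useful for upper limits.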
Quantum-dot Carnot engine at maximum power.
Esposito, Massimiliano; Kawai, Ryoichi; Lindenberg, Katja; Van den Broeck, Christian
2010-04-01
We evaluate the efficiency at maximum power of a quantum-dot Carnot heat engine. The universal values of the coefficients at the linear and quadratic order in the temperature gradient are reproduced. Curzon-Ahlborn efficiency is recovered in the limit of weak dissipation.
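The weak-dissipation limit quoted above is the Curzon-Ahlborn value. A quick numerical check of the relation between it and the Carnot efficiency, including the universal linear and quadratic coefficients of the expansion (the temperatures are arbitrary illustrative values):

```python
import math

def eta_carnot(t_c, t_h):
    """Carnot efficiency: the reversible upper bound."""
    return 1.0 - t_c / t_h

def eta_curzon_ahlborn(t_c, t_h):
    """Efficiency at maximum power of an endoreversible cycle."""
    return 1.0 - math.sqrt(t_c / t_h)

t_c, t_h = 300.0, 500.0
eta_c = eta_carnot(t_c, t_h)
eta_ca = eta_curzon_ahlborn(t_c, t_h)
# universal small-gradient expansion: eta_CA = eta_C/2 + eta_C^2/8 + O(eta_C^3)
expansion = eta_c / 2.0 + eta_c ** 2 / 8.0
```

For this temperature pair the expansion already agrees with the exact Curzon-Ahlborn value to better than one percentage point, illustrating the universality of the first two coefficients.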
33 CFR 401.3 - Maximum vessel dimensions.
2010-07-01
..., and having dimensions that do not exceed the limits set out in the block diagram in appendix I of this... 33 Navigation and Navigable Waters 3 2010-07-01 2010-07-01 false Maximum vessel dimensions. 401.3 Section 401.3 Navigation and Navigable Waters SAINT LAWRENCE SEAWAY DEVELOPMENT CORPORATION, DEPARTMENT...
Maximum likelihood for genome phylogeny on gene content.
Zhang, Hongmei; Gu, Xun
2004-01-01
With the rapid growth of entire-genome data, reconstructing the phylogenetic relationships among different genomes has become a hot topic in comparative genomics. The maximum likelihood approach is one of various approaches and has been very successful. However, there are no reported applications to genome tree-making, mainly due to the lack of an analytical form of a probability model and/or the complicated calculation burden. In this paper we studied the mathematical structure of the stochastic model of genome evolution, and then developed a simplified likelihood function for observing a specific phylogenetic pattern in a four-genome situation using gene content information. We use the maximum likelihood approach to identify phylogenetic trees. Simulation results indicate that the proposed method works well and can identify trees with a high correction rate. A real data application provides satisfactory results. The approach developed in this paper can serve as the basis for reconstructing phylogenies of more than four genomes.
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring the receiver function in the time domain.
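The Toeplitz/Levinson step can be sketched with the standard Levinson-Durbin recursion for the error-predicting filter; this is a generic textbook version, not the authors' implementation, and the autocorrelation values below are illustrative.

```python
def levinson_durbin(r):
    """Solve the Toeplitz normal equations for a prediction-error filter.
    r: autocorrelation sequence r[0..p].
    Returns (filter coefficients a with a[0]=1, reflection coefficients,
    final prediction-error power)."""
    p = len(r) - 1
    a = [1.0]
    err = r[0]
    reflections = []
    for m in range(1, p + 1):
        acc = sum(a[j] * r[m - j] for j in range(m))
        k = -acc / err            # reflection coefficient; |k| < 1 => stable
        reflections.append(k)
        a_prev = a + [0.0]        # a_{m-1}(m) = 0 by convention
        a = [a_prev[i] + k * a_prev[m - i] for i in range(m + 1)]
        err *= (1.0 - k * k)
    return a, reflections, err
```

For an AR(1)-like autocorrelation r = [1, 0.5, 0.25] the recursion recovers the filter [1, -0.5, 0], and all reflection coefficients stay below 1 in magnitude, which is the stability property the abstract refers to.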
The subsequence weight distribution of summed maximum length digital sequences
Weathers, G. D.; Graf, E. R.; Wallace, G. R.
1974-01-01
An attempt is made to develop mathematical formulas to provide the basis for the design of pseudorandom signals intended for applications requiring accurate knowledge of the statistics of the signals. The analysis approach involves calculating the first five central moments of the weight distribution of subsequences of hybrid-sum sequences. The hybrid-sum sequence is formed from the modulo-two sum of k maximum length sequences and is an extension of the sum sequences formed from two maximum length sequences that Gilson (1966) evaluated. The weight distribution of the subsequences serves as an approximation to the filtering process. The basic reason for the analysis of hybrid-sum sequences is to establish a large group of sequences with good statistical properties. It is shown that this can be accomplished much more efficiently using the hybrid-sum approach rather than forming the group strictly from maximum length sequences.
A Maximum Entropy Estimator for the Aggregate Hierarchical Logit Model
Pedro Donoso
2011-08-01
A new approach for estimating the aggregate hierarchical logit model is presented. Though the model is usually derived from random utility theory assuming correlated stochastic errors, it can also be derived as the solution to a maximum entropy problem. Under the latter approach, the Lagrange multipliers of the optimization problem can be understood as parameter estimators of the model. Based on theoretical analysis and Monte Carlo simulations of a transportation demand model, it is demonstrated that the maximum entropy estimators have statistical properties superior to those of classical maximum likelihood estimators, particularly for small or medium-size samples. The simulations also showed reduced bias in the estimates of the subjective value of time and consumer surplus.
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One technique used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and current at maximum power. These quantities are determined by finding the maximum value of the equation for power using differentiation. After the maximum values are found for each time of day, each quantity (the voltage at maximum power, the current at maximum power, and the maximum power itself) is plotted as a function of the time of day.
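The differentiation step described above can be sketched numerically: the snippet below maximizes P(V) = V·I(V) for a hypothetical single-diode panel model (all parameter values are made up for illustration; setting dP/dV = 0 analytically gives a transcendental equation, so a fine scan is used instead).

```python
import math

# Hypothetical single-diode panel parameters (illustrative only)
I_L = 5.0     # light-generated current [A]
I_0 = 1e-9    # diode saturation current [A]
V_T = 1.5     # lumped thermal voltage for the whole string [V]

def current(v):
    """Panel I-V curve: photocurrent minus diode forward current."""
    return I_L - I_0 * (math.exp(v / V_T) - 1.0)

def power(v):
    return v * current(v)

# dP/dV = 0 at the maximum; locate it with a 1 mV scan up to open circuit
v_oc = V_T * math.log(I_L / I_0 + 1.0)   # open-circuit voltage
v_mp = max((i * 0.001 for i in range(int(v_oc * 1000))), key=power)
p_mp = power(v_mp)
```

The voltage and current at maximum power are then v_mp and current(v_mp), the quantities the project plots over the day.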
Exploiting the Maximum Entropy Principle to Increase Retrieval Effectiveness.
Cooper, William S.
1983-01-01
Presents information retrieval design approach in which queries of computer-based system consist of sets of terms, either unweighted or weighted with subjective term precision estimates, and retrieval outputs ranked by probability of usefulness estimated by "maximum entropy principle." Boolean and weighted request systems are discussed.…
Gin, S.; Jollivet, P.; Barba Rossa, G.; Tribet, M.; Mougnaud, S.; Collin, M.; Fournier, M.; Cadel, E.; Cabie, M.; Dupuy, L.
2017-04-01
Significant efforts have been made toward understanding the dissolution of silicate glasses and minerals, but there is still debate about the formation processes and the properties of surface layers. Here, we investigate glass coupons of ISG glass - a 6-oxide borosilicate glass of nuclear interest - altered at 90 °C in conditions close to saturation and for durations ranging from 1 to 875 days. Altered glass coupons were characterized from the atomic to the macroscopic level to better understand how surface layers become protective. With this approach, it was shown that a rough interface, whose physical characteristics have been modeled, formed in a few days and then propagated into the pristine material at a rate controlled by the reactive transport of water within the growing alteration layer. Several observations, such as sharp interfacial B, Na, and Ca profiles and damped profiles within the rest of the alteration layer, are not consistent with the classical inter-diffusion model, or with the interfacial dissolution-precipitation model. A new paradigm is proposed to explain these features. Inter-diffusion, a process based on water ingress into the glass and ion exchange, may only explain the formation of the rough interface in the early stage of glass corrosion. A thin layer of altered glass is formed by this process, and as the layer grows, the accessibility of water to the reactive interface becomes rate-limiting. As a consequence, only the most easily accessible species are dissolved. The others remain undissolved in the alteration layer, probably fixed in highly hydrolysis-resistant clusters. A new estimation of water diffusivity in the glass when covered by the passivating layer was determined from the shift between B and H profiles, and was 10^-23 m^2 s^-1, i.e. approximately 3 orders of magnitude lower than water diffusivity in the pristine material. Overall, in the absence of secondary crystalline phases that could consume the major components of the alteration
Byskov, Jens; Marchal, Bruno; Maluka, Stephen; Zulu, Joseph M; Bukachi, Salome A; Hurtig, Anna-Karin; Blystad, Astrid; Kamuzora, Peter; Michelo, Charles; Nyandieka, Lillian N; Ndawi, Benedict; Bloch, Paul; Olsen, Oystein E
2014-08-20
Priority-setting decisions are based on an important, but not sufficient set of values and thus lead to disagreement on priorities. Accountability for Reasonableness (AFR) is an ethics-based approach to a legitimate and fair priority-setting process that builds upon four conditions: relevance, publicity, appeals, and enforcement, which facilitate agreement on priority-setting decisions and gain support for their implementation. This paper focuses on the assessment of AFR within the project REsponse to ACcountable priority setting for Trust in health systems (REACT). This intervention study applied an action research methodology to assess implementation of AFR in one district in Kenya, Tanzania, and Zambia, respectively. The assessments focused on selected disease, program, and managerial areas. An implementing action research team of core health team members and supporting researchers was formed to implement, and continually assess and improve the application of the four conditions. Researchers evaluated the intervention using qualitative and quantitative data collection and analysis methods. The values underlying the AFR approach were in all three districts well-aligned with general values expressed by both service providers and community representatives. There was some variation in the interpretations and actual use of the AFR in the decision-making processes in the three districts, and its effect ranged from an increase in awareness of the importance of fairness to a broadened engagement of health team members and other stakeholders in priority setting and other decision-making processes. District stakeholders were able to take greater charge of closing the gap between nationally set planning and the local realities and demands of the served communities within the limited resources at hand. This study thus indicates that the operationalization of the four broadly defined and linked conditions is both possible and seems to be responding to an actual demand. This
Svyatskiy, Daniil [Los Alamos National Laboratory; Shashkov, Mikhail [Los Alamos National Laboratory; Kuzmin, D [DORTMUND UNIV
2008-01-01
A new approach to the design of constrained finite element approximations to second-order elliptic problems is introduced. This approach guarantees that the finite element solution satisfies the discrete maximum principle (DMP). To enforce these monotonicity constraints, sufficient conditions on the elements of the stiffness matrix are formulated. An algebraic splitting of the stiffness matrix is employed to separate the contributions of diffusive and antidiffusive numerical fluxes, respectively. In order to prevent the formation of spurious undershoots and overshoots, a symmetric slope limiter is designed for the antidiffusive part. The corresponding upper and lower bounds are defined using an estimate of the steepest gradient in terms of the maximum and minimum solution values at surrounding nodes. The recovery of nodal gradients is performed by means of a lumped-mass L2 projection. The proposed slope limiting strategy preserves the consistency of the underlying discrete problem and the structure of the stiffness matrix (symmetry, zero row and column sums). A positivity-preserving defect correction scheme is devised for the nonlinear algebraic system to be solved. Numerical results and a grid convergence study are presented for a number of anisotropic diffusion problems in two space dimensions.
47 CFR 90.1215 - Power limits.
2010-10-01
... 47 Telecommunication 5 2010-10-01 2010-10-01 false Power limits. 90.1215 Section 90.1215... § 90.1215 Power limits. The transmitting power of stations operating in the 4940-4990 MHz band must not exceed the maximum limits in this section. (a)(1) The maximum conducted output power should not...
The inverse maximum dynamic flow problem
BAGHERIAN; Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm, which uses two maximum dynamic flow algorithms, is proposed to solve the problem.
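The paper's algorithm builds on maximum flow computations; as background, the classical static building block can be sketched with Edmonds-Karp (breadth-first augmenting paths). This is the standard textbook routine, not the dynamic-flow algorithm of the paper, and the example network is made up.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow on adjacency-matrix capacities cap[u][v]."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total          # no augmenting path: flow is maximum
        # bottleneck capacity along the path found
        b, v = float('inf'), t
        while v != s:
            u = parent[v]
            b = min(b, cap[u][v] - flow[u][v])
            v = u
        # push the bottleneck along the path (and cancel on reverse arcs)
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += b
            flow[v][u] -= b
            v = u
        total += b
```

In the dynamic setting each arc additionally carries a transit time, but the cut/flow duality exploited by the paper mirrors the static max-flow/min-cut relationship this routine illustrates.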
Nanoscale heat engine beyond the Carnot limit.
Roßnagel, J; Abah, O; Schmidt-Kaler, F; Singer, K; Lutz, E
2014-01-24
We consider a quantum Otto cycle for a time-dependent harmonic oscillator coupled to a squeezed thermal reservoir. We show that the efficiency at maximum power increases with the degree of squeezing, surpassing the standard Carnot limit and approaching unity exponentially for large squeezing parameters. We further propose an experimental scheme to implement such a model system by using a single trapped ion in a linear Paul trap with special geometry. Our analytical investigations are supported by Monte Carlo simulations that demonstrate the feasibility of our proposal. For realistic trap parameters, an increase of the efficiency at maximum power of up to a factor of 4 is reached, largely exceeding the Carnot bound.
Site Specific Probable Maximum Precipitation Estimates and Professional Judgement
Hayes, B. D.; Kao, S. C.; Kanney, J. F.; Quinlan, K. R.; DeNeale, S. T.
2015-12-01
State and federal regulatory authorities currently rely upon the US National Weather Service Hydrometeorological Reports (HMRs) to determine probable maximum precipitation (PMP) estimates (i.e., rainfall depths and durations) for estimating flooding hazards for relatively broad regions in the US. PMP estimates for the contributing watersheds upstream of vulnerable facilities are used to estimate riverine flooding hazards, while site-specific estimates for small watersheds are appropriate for individual facilities such as nuclear power plants. The HMRs are often criticized for their limitations on basin size, questionable applicability in regions affected by orographic effects, lack of consistent methods, and general age. HMR-51, which provides generalized PMP estimates for the United States east of the 105th meridian, was published in 1978 and is sometimes perceived as overly conservative. The US Nuclear Regulatory Commission (NRC) is currently reviewing several flood hazard evaluation reports that rely on commercially developed site-specific PMP estimates. As such, NRC has recently investigated key areas of expert judgement, via a generic audit and one in-depth site-specific review, as they relate to identifying and quantifying actual and potential storm moisture sources, determining storm transposition limits, and adjusting available moisture during storm transposition. Though much of the approach reviewed was considered a logical extension of the HMRs, two key points of expert judgement stood out for further in-depth review. The first relates primarily to small storms and the use of a heuristic for storm-representative dew point adjustment, developed for the Electric Power Research Institute by North American Weather Consultants in 1993, in order to harmonize historic storms for which only 12-hour dew point data were available with more recent storms in a single database. The second issue relates to the use of climatological averages for spatially
Information Entropy Production of Spatio-Temporal Maximum Entropy Distributions
Cofre, Rodrigo
2015-01-01
Spiking activity from populations of neurons displays causal interactions and memory effects. Therefore, it is expected to show some degree of irreversibility in time. Motivated by spike train statistics, in this paper we build a framework to quantify the degree of irreversibility of any maximum entropy distribution. Our approach is based on the transfer matrix technique, which enables us to find a homogeneous irreducible Markov chain that shares the same maximum entropy measure. We provide relevant examples in the context of spike train statistics.
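Once the equivalent Markov chain is in hand, a standard measure of time irreversibility is the entropy production rate. The sketch below computes it for a stationary chain; the transition matrices are illustrative, not derived from spike data.

```python
import math

def entropy_production(P, pi):
    """Entropy production rate of a stationary Markov chain:
    sum_{i,j} pi_i P_ij * ln( pi_i P_ij / (pi_j P_ji) ).
    Zero exactly when detailed balance holds (reversible chain)."""
    n = len(P)
    ep = 0.0
    for i in range(n):
        for j in range(n):
            if P[i][j] > 0.0 and P[j][i] > 0.0:
                ep += pi[i] * P[i][j] * math.log(
                    (pi[i] * P[i][j]) / (pi[j] * P[j][i]))
    return ep

# a 3-state chain with a preferred cycling direction (irreversible)
P_cycle = [[0.0, 0.9, 0.1],
           [0.1, 0.0, 0.9],
           [0.9, 0.1, 0.0]]           # doubly stochastic => uniform pi
ep_cycle = entropy_production(P_cycle, [1/3, 1/3, 1/3])
```

A reversible chain (e.g. a symmetric transition matrix) gives zero, while the cycling chain above produces a strictly positive rate, quantifying its irreversibility.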
Maximum power point tracking for optimizing energy harvesting process
Akbari, S.; Thang, P. C.; Veselov, D. S.
2016-10-01
There has been a growing interest in using energy harvesting techniques for powering wireless sensor networks. The motivation for this technology is the sensors' limited operation time, which results from the finite capacity of batteries, and the need for a stable power supply in some applications. Energy can be harvested from the sun, wind, vibration, heat, etc. It is reasonable to develop multi-source energy harvesting platforms to increase the amount of harvested energy and to mitigate the intermittent nature of ambient sources. In the context of solar energy harvesting, it is possible to develop algorithms for finding the optimal operating point of solar panels at which maximum power is generated. These algorithms are known as maximum power point tracking techniques. In this article, we review the concept of maximum power point tracking and provide an overview of the research conducted in this area for wireless sensor network applications.
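One of the simplest tracking techniques in this family is perturb-and-observe, a hill-climbing loop over the operating voltage. The sketch below is a generic version with a made-up single-peak P-V curve, not any specific controller from the surveyed literature.

```python
def perturb_and_observe(power_at, v0, step=0.1, iters=200):
    """Hill-climbing MPPT: perturb the operating voltage and keep the
    direction of perturbation whenever the measured power increases."""
    v = v0
    direction = 1.0
    p = power_at(v)
    for _ in range(iters):
        v_next = v + direction * step
        p_next = power_at(v_next)
        if p_next < p:
            direction = -direction   # power fell: reverse the perturbation
        v, p = v_next, p_next
    return v

# toy single-peak P-V curve with its maximum at 17.5 V (illustrative values)
v_mpp = perturb_and_observe(lambda v: 100.0 - (v - 17.5) ** 2, v0=12.0)
```

The tracker climbs to the maximum power point and then oscillates around it within one step, which is the characteristic steady-state behavior (and main drawback) of perturb-and-observe.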
The evolution of maximum body size of terrestrial mammals.
Smith, Felisa A; Boyer, Alison G; Brown, James H; Costa, Daniel P; Dayan, Tamar; Ernest, S K Morgan; Evans, Alistair R; Fortelius, Mikael; Gittleman, John L; Hamilton, Marcus J; Harding, Larisa E; Lintulaakso, Kari; Lyons, S Kathleen; McCain, Christy; Okie, Jordan G; Saarinen, Juha J; Sibly, Richard M; Stephens, Patrick R; Theodor, Jessica; Uhen, Mark D
2010-11-26
The extinction of dinosaurs at the Cretaceous/Paleogene (K/Pg) boundary was the seminal event that opened the door for the subsequent diversification of terrestrial mammals. Our compilation of maximum body size at the ordinal level by sub-epoch shows a near-exponential increase after the K/Pg. On each continent, the maximum size of mammals leveled off after 40 million years ago and thereafter remained approximately constant. There was remarkable congruence in the rate, trajectory, and upper limit across continents, orders, and trophic guilds, despite differences in geological and climatic history, turnover of lineages, and ecological variation. Our analysis suggests that although the primary driver for the evolution of giant mammals was diversification to fill ecological niches, environmental temperature and land area may have ultimately constrained the maximum size achieved.
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously
Pushing concentration of stationary solar concentrators to the limit.
Winston, Roland; Zhang, Weiya
2010-04-26
We give the theoretical limit of concentration allowed by nonimaging optics for stationary solar concentrators, after reviewing sun-earth geometry in direction cosine space. We then discuss the design principles that we follow to approach the maximum concentration, along with examples including a hollow CPC trough, a dielectric CPC trough, and a 3D dielectric stationary solar concentrator which concentrates sunlight four times (4x), eight hours per day, year-round.
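The concentration limits referred to here follow from etendue conservation. The sketch below evaluates the standard ideal-concentrator formulas; the acceptance half-angles are illustrative inputs, not values from the paper.

```python
import math

def cpc_limit_2d(half_angle_deg):
    """Maximum concentration of an ideal 2-D (trough) concentrator
    with acceptance half-angle theta: C = 1 / sin(theta)."""
    return 1.0 / math.sin(math.radians(half_angle_deg))

def cpc_limit_3d(half_angle_deg):
    """Maximum concentration of an ideal 3-D concentrator:
    C = 1 / sin(theta)^2."""
    return 1.0 / math.sin(math.radians(half_angle_deg)) ** 2

def dielectric_limit_3d(half_angle_deg, n):
    """Filling the concentrator with a dielectric of index n raises
    the 3-D limit by a factor n^2."""
    return n ** 2 * cpc_limit_3d(half_angle_deg)
```

A stationary concentrator needs a wide acceptance angle to cover the sun's seasonal and daily motion, which is why its achievable concentration (such as the 4x quoted above) is far below that of a tracking CPC with a narrow acceptance angle.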
Loescher, D.H. [Sandia National Labs., Albuquerque, NM (United States). Systems Surety Assessment Dept.; Noren, K. [Univ. of Idaho, Moscow, ID (United States). Dept. of Electrical Engineering
1996-09-01
The current that flows between the electrical test equipment and the nuclear explosive must be limited to safe levels during electrical tests conducted on nuclear explosives at the DOE Pantex facility. The safest way to limit the current is to use batteries that can provide only acceptably low current into a short circuit; unfortunately this is not always possible. When it is not possible, current limiters, along with other design features, are used to limit the current. Three types of current limiters, the fuse blower, the resistor limiter, and the MOSFET-pass-transistor limiter, are used extensively in Pantex test equipment. Detailed failure mode and effects analyses were conducted on these limiters. Two other types of limiters were also analyzed. It was found that there is no best type of limiter that should be used in all applications. The fuse blower has advantages when many circuits must be monitored, a low insertion voltage drop is important, and size and weight must be kept low. However, this limiter has many failure modes that can lead to the loss of overcurrent protection. The resistor limiter is simple and inexpensive, but is normally usable only on circuits for which the nominal current is less than a few tens of milliamperes. The MOSFET limiter can be used on high current circuits, but it has a number of single point failure modes that can lead to a loss of protective action. Because bad component placement or poor wire routing can defeat any limiter, placement and routing must be designed carefully and documented thoroughly.
张贝; 李卫东; 杨勇; 汪善勤; 蔡崇法
2011-01-01
The Bayesian maximum entropy (BME) approach has emerged in recent years as a new spatio-temporal geostatistics method. By capitalizing on various sources of information and data, BME introduces an epistemological framework which produces predictive maps that are more accurate, and in many cases computationally more efficient, than those derived with traditional techniques. It is a general approach that does not need assumptions of linear estimation, spatial homogeneity, or normal distribution. BME can integrate a priori knowledge and soft data without losing any useful information they contain, improving the accuracy of the analysis. This paper first introduces the basic theory of BME and the stages of BME estimation, then briefly describes its development and application in soil and environmental sciences, and finally summarizes and discusses the prospects for its application. After years of development and practice, the BME method has been proved to be a mature, outstanding approach with broad application prospects in the evaluation of resources and the environment.
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
We discuss a special class of generalized divergence measures defined by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized as totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory of the maximum likelihood method under the maximum entropy model, in terms of the Boltzmann-Gibbs-Shannon entropy, is given. We discuss the duality in detail for Tsallis entropy as a typical example.
The optimal polarizations for achieving maximum contrast in radar images
Swartz, A. A.; Yueh, H. A.; Kong, J. A.; Novak, L. M.; Shin, R. T.
1988-01-01
There is considerable interest in determining the optimal polarizations that maximize contrast between two scattering classes in polarimetric radar images. A systematic approach is presented for obtaining the optimal polarimetric matched filter, i.e., the filter which produces maximum contrast between two scattering classes. The maximization procedure involves solving an eigenvalue problem, where the eigenvector corresponding to the maximum contrast ratio is an optimal polarimetric matched filter. To exhibit the physical significance of this filter, it is transformed into its associated transmitting and receiving polarization states, written in terms of horizontal and vertical vector components. For the special case where the transmitting polarization is fixed, the receiving polarization which maximizes the contrast ratio is also obtained. Polarimetric filtering is then applied to synthetic aperture radar images obtained from the Jet Propulsion Laboratory. It is shown, both numerically and through the use of radar imagery, that maximum image contrast can be realized when data are processed with the optimal polarimetric matched filter.
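The eigenvalue problem mentioned above is a generalized Rayleigh-quotient maximization: maximize x'Ax / x'Bx over filters x, where A and B characterize the two scattering classes. The toy sketch below uses power iteration on B^-1 A for real 2x2 matrices (actual polarimetric covariances are complex Hermitian; the reduction to a small real case is for illustration only).

```python
def solve2(B, y):
    """Solve the 2x2 linear system B x = y by Cramer's rule."""
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    return [(B[1][1] * y[0] - B[0][1] * y[1]) / det,
            (-B[1][0] * y[0] + B[0][0] * y[1]) / det]

def quad(M, x):
    """Quadratic form x' M x."""
    return sum(x[i] * M[i][j] * x[j] for i in range(2) for j in range(2))

def optimal_filter(A, B, iters=100):
    """Power iteration on B^-1 A: converges to the eigenvector of the
    largest generalized eigenvalue, which maximizes x'Ax / x'Bx."""
    x = [1.0, 1.0]
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
        x = solve2(B, y)
        norm = max(abs(c) for c in x)
        x = [c / norm for c in x]
    return x, quad(A, x) / quad(B, x)
```

The returned ratio is the maximum achievable contrast between the two classes, and the returned vector plays the role of the optimal matched filter.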